<p>I have implemented a multi-master HA Kubernetes cluster and wanted to set up Calico the hard way as described <a href="https://docs.projectcalico.org/getting-started/kubernetes/hardway/" rel="nofollow noreferrer">here</a>. I was able to complete all the steps, but there is no connectivity between pods and services, or between pods on different nodes.</p> <p>The only difference is that I use two different AZs in AWS, which I assumed should not be an issue. I can see that pods are getting IPs and that the Calico network interfaces are being created, but the connectivity problem described above remains. The node doesn't even have public internet access. I did the BGP configuration exactly as in the guide, with no luck, and I'm not sure whether something needs to change in the BGP configuration for a multi-AZ deployment; I'm not very familiar with Calico BGP configuration.</p> <p>Unfortunately, <code>calicoctl node diags</code> does not run properly and is not providing much more information to move forward with.</p> <p>I'd love to hear your valuable thoughts and constructive criticism on how to fix this.</p>
Aruna Fernando
<p>Calico configured in <a href="https://docs.projectcalico.org/v3.7/networking/service-advertisement#about-advertising-kubernetes-services-over-bgp" rel="nofollow noreferrer">BGP mode</a> requires all of the instances to be located in the same subnet to work out of the box.</p> <p>To use Calico with deployments that are split across multiple availability zones, you must:</p> <h3>Disable the AWS source / destination check</h3> <p><strong>You can do that using the AWS CLI:</strong></p> <pre><code> aws ec2 modify-instance-attribute --no-source-dest-check --instance-id $EC2_INSTANCE_ID --region &lt;REGION-WHERE-EC2-INSTANCE-IS-LAUNCHED&gt; </code></pre> <p><strong>Or using the AWS console:</strong></p> <blockquote> <ol> <li>Open the Amazon EC2 console at <a href="https://console.aws.amazon.com/ec2/" rel="nofollow noreferrer">https://console.aws.amazon.com/ec2/</a>.</li> <li>In the navigation pane, choose <strong>Instances</strong>.</li> <li>Select the NAT instance, choose <strong>Actions</strong>, <strong>Networking</strong>, <strong>Change Source/Dest. Check</strong>.</li> <li>For the NAT instance, verify that this attribute is disabled. Otherwise, choose <strong>Yes, Disable</strong>.</li> <li>If the NAT instance has a secondary network interface, choose it from <strong>Network interfaces</strong> on the <strong>Description</strong> tab and choose the interface ID to go to the network interfaces page. Choose <strong>Actions</strong>, <strong>Change Source/Dest. Check</strong>, disable the setting, and choose <strong>Save</strong>.</li> </ol> </blockquote> <h3>Enable IPIP encapsulation and outgoing NAT on your Calico IP pools</h3> <blockquote> <p>An <code>IPPool</code> represents a collection of IP addresses from which Calico expects endpoint IPs to be assigned (<a href="https://docs.projectcalico.org/reference/resources/ippool" rel="nofollow noreferrer">see here how to set it up</a>).</p> </blockquote> <p>To enable the “CrossSubnet” IPIP feature, configure your Calico IP pool resources with <code>ipipMode</code> set to “CrossSubnet”, and <code>natOutgoing</code> set to enable outgoing NAT, as in the example below:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: ippool-multi-az spec: cidr: 192.168.0.0/16 ipipMode: CrossSubnet natOutgoing: true </code></pre> <p>The example above follows the <a href="https://docs.projectcalico.org/reference/public-cloud/aws" rel="nofollow noreferrer">AWS cloud configuration</a> from the Calico documentation.</p>
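<p>As a quick sanity check (a minimal sketch, assuming <code>calicoctl</code> is already configured against your datastore and the manifest above is saved as <code>ippool-multi-az.yaml</code>, a hypothetical filename), you can apply the pool and confirm the IPIP mode took effect:</p> <pre><code>calicoctl apply -f ippool-multi-az.yaml
calicoctl get ippool -o wide   # the IPIPMODE column should show CrossSubnet
</code></pre>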
<p>Please note that the Calico docs also have information about <a href="https://docs.projectcalico.org/reference/public-cloud/" rel="nofollow noreferrer">GCP, Azure and IBM</a>.</p> <p>Remark: if you face other problems going &quot;the hard way&quot;, you may want to use as a reference another cluster created by following the Calico guides below:</p> <ul> <li><a href="https://docs.projectcalico.org/getting-started/openshift/requirements#network-requirements" rel="nofollow noreferrer">Getting started - openshift</a></li> <li><a href="https://docs.projectcalico.org/getting-started/kubernetes/self-managed-public-cloud/gce" rel="nofollow noreferrer">Getting started - self managed public cloud - GCE</a></li> <li><a href="https://docs.projectcalico.org/getting-started/kubernetes/self-managed-public-cloud/aws" rel="nofollow noreferrer">Getting started - self managed public cloud - AWS</a></li> </ul> <p>Lastly, it is also worth checking this very good document about <a href="https://octetz.com/docs/2020/2020-10-01-calico-routing-modes/" rel="nofollow noreferrer">calico routing modes</a> (it also covers the cross-subnet IPIP mode).</p>
acid_fuji
<p>Kubeadm init issue.</p> <p>Environment and versions:</p> <pre><code>os -rhel7.5 env -onprem server docker - 19 kube - 18 </code></pre> <p>Console output:</p> <pre><code>[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused. </code></pre> <p>What is going wrong and how can I resolve it?</p>
Veerareddy Aasamthavva
<p>Based on the information you provided there are a couple of things that can be done here. <br> <br> <strong>First</strong>, you can check whether docker's native cgroup driver and the kubelet's are consistent. You can view the kubelet config by running:</p> <pre><code>cat /var/lib/kubelet/kubeadm-flags.env </code></pre> <p>To check the docker config you can simply use: </p> <pre><code>docker info | grep Cgroup </code></pre> <p>If you need to change it you can do it like this: </p> <pre><code>cat &lt;&lt; EOF &gt; /etc/docker/daemon.json { "exec-opts": ["native.cgroupdriver=systemd"] } EOF </code></pre> <p>To change the kubelet cgroup driver you have to edit: </p> <pre><code>vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf </code></pre> <p>and update <code>KUBELET_CGROUP_ARGS=--cgroup-driver=&lt;systemd or cgroupfs&gt;</code></p> <p><strong>Second</strong>, a possible solution could be disabling swap. You can do that with these commands: </p> <pre><code>sudo swapoff -a sudo sed -i '/ swap / s/^/#/' /etc/fstab </code></pre> <p>Reboot the machine after that, then perform <code>kubeadm reset</code> and try to initialize the cluster with <code>kubeadm init</code>.</p>
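<p>If you change the cgroup driver (the first option above), remember to reload systemd and restart both services before retrying <code>kubeadm init</code>. A minimal sketch:</p> <pre><code>sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
</code></pre>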
acid_fuji
<p>I'm running Kubernetes 1.13.2, setup using kubeadm and struggling with getting calico 3.5 up and running. The cluster is run on top of KVM.</p> <p>Setup:</p> <ol> <li><code>kubeadm init --apiserver-advertise-address=10.255.253.20 --pod-network-cidr=192.168.0.0/16</code></li> <li><p>modified <code>calico.yaml</code> file to include:</p> <pre><code> - name: IP_AUTODETECTION_METHOD value: "interface=ens.*" </code></pre></li> <li>applied <code>rbac.yaml</code>, <code>etcd.yaml</code>, <code>calico.yaml</code></li> </ol> <p>Output from <code>kubectl describe pods</code>:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 23m default-scheduler Successfully assigned kube-system/calico-node-hjwrc to k8s-master-01 Normal Pulling 23m kubelet, k8s-master-01 pulling image "quay.io/calico/cni:v3.5.0" Normal Pulled 23m kubelet, k8s-master-01 Successfully pulled image "quay.io/calico/cni:v3.5.0" Normal Created 23m kubelet, k8s-master-01 Created container Normal Started 23m kubelet, k8s-master-01 Started container Normal Pulling 23m kubelet, k8s-master-01 pulling image "quay.io/calico/node:v3.5.0" Normal Pulled 23m kubelet, k8s-master-01 Successfully pulled image "quay.io/calico/node:v3.5.0" Warning Unhealthy 23m kubelet, k8s-master-01 Readiness probe failed: calico/node is not ready: felix is not ready: Get http://localhost:9099/readiness: dial tcp [::1]:9099: connect: connection refused Warning Unhealthy 23m kubelet, k8s-master-01 Liveness probe failed: Get http://localhost:9099/liveness: dial tcp [::1]:9099: connect: connection refused Normal Created 23m (x2 over 23m) kubelet, k8s-master-01 Created container Normal Started 23m (x2 over 23m) kubelet, k8s-master-01 Started container Normal Pulled 23m kubelet, k8s-master-01 Container image "quay.io/calico/node:v3.5.0" already present on machine Warning Unhealthy 3m32s (x23 over 7m12s) kubelet, k8s-master-01 Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 10.255.253.22 </code></pre> <p>Output from <code>calicoctl node status</code>:</p> <pre><code>Calico process is running. IPv4 BGP status +---------------+-------------------+-------+----------+---------+ | PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO | +---------------+-------------------+-------+----------+---------+ | 10.255.253.22 | node-to-node mesh | start | 16:24:44 | Passive | +---------------+-------------------+-------+----------+---------+ IPv6 BGP status No IPv6 peers found. 
</code></pre> <p>Output from <code>ETCD_ENDPOINTS=http://localhost:6666 calicoctl get nodes -o yaml</code>:</p> <pre><code> apiVersion: projectcalico.org/v3 items: - apiVersion: projectcalico.org/v3 kind: Node metadata: annotations: projectcalico.org/kube-labels: '{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/hostname":"k8s-master-01","node-role.kubernetes.io/master":""}' creationTimestamp: 2019-01-31T16:08:56Z labels: beta.kubernetes.io/arch: amd64 beta.kubernetes.io/os: linux kubernetes.io/hostname: k8s-master-01 node-role.kubernetes.io/master: "" name: k8s-master-01 resourceVersion: "28" uid: 82fee4dc-2572-11e9-8ab7-5254002c725d spec: bgp: ipv4Address: 10.255.253.20/24 ipv4IPIPTunnelAddr: 192.168.151.128 orchRefs: - nodeName: k8s-master-01 orchestrator: k8s - apiVersion: projectcalico.org/v3 kind: Node metadata: annotations: projectcalico.org/kube-labels: '{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/hostname":"k8s-worker-01"}' creationTimestamp: 2019-01-31T16:24:44Z labels: beta.kubernetes.io/arch: amd64 beta.kubernetes.io/os: linux kubernetes.io/hostname: k8s-worker-01 name: k8s-worker-01 resourceVersion: "170" uid: b7c2c5a6-2574-11e9-aaa4-5254007d5f6a spec: bgp: ipv4Address: 10.255.253.22/24 ipv4IPIPTunnelAddr: 192.168.36.192 orchRefs: - nodeName: k8s-worker-01 orchestrator: k8s kind: NodeList metadata: resourceVersion: "395" </code></pre> <p>Output from <code>ETCD_ENDPOINTS=http://localhost:6666 calicoctl get bgppeers</code>:</p> <pre><code>NAME PEERIP NODE ASN </code></pre> <p>Ouput from <code>kubectl logs</code>:</p> <pre><code>2019-01-31 17:01:20.519 [INFO][48] int_dataplane.go 751: Applying dataplane updates 2019-01-31 17:01:20.519 [INFO][48] ipsets.go 223: Asked to resync with the dataplane on next update. family="inet" 2019-01-31 17:01:20.519 [INFO][48] ipsets.go 254: Resyncing ipsets with dataplane. family="inet" 2019-01-31 17:01:20.523 [INFO][48] ipsets.go 304: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=3.675284ms 2019-01-31 17:01:20.523 [INFO][48] int_dataplane.go 765: Finished applying updates to dataplane. msecToApply=4.124166000000001 bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 36329) bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 52383) 2019-01-31 17:01:23.182 [INFO][48] health.go 150: Overall health summary=&amp;health.HealthReport{Live:true, Ready:true} bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 39661) 2019-01-31 17:01:25.433 [INFO][48] health.go 150: Overall health summary=&amp;health.HealthReport{Live:true, Ready:true} bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 57359) bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 47151) bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 39243) 2019-01-31 17:01:30.943 [INFO][48] int_dataplane.go 751: Applying dataplane updates 2019-01-31 17:01:30.943 [INFO][48] ipsets.go 223: Asked to resync with the dataplane on next update. family="inet" 2019-01-31 17:01:30.943 [INFO][48] ipsets.go 254: Resyncing ipsets with dataplane. family="inet" 2019-01-31 17:01:30.945 [INFO][48] ipsets.go 304: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=2.369997ms 2019-01-31 17:01:30.946 [INFO][48] int_dataplane.go 765: Finished applying updates to dataplane. 
msecToApply=2.8165820000000004 bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 60641) 2019-01-31 17:01:33.190 [INFO][48] health.go 150: Overall health summary=&amp;health.HealthReport{Live:true, Ready:true} </code></pre> <p>Note: the above unknown address (10.255.253.14) is the IP under <code>br0</code> on the KVM host, not too sure why it's made an appearance.</p>
phone_melter
<p>I got the solution : </p> <p>The first preference of ifconfig(in my case) through that it will try to connect to the worker-nodes which is not the right ip.</p> <p>Solution:Change the calico.yaml file to override that ip to etho-ip by using the following steps.</p> <p>Need to open port <a href="https://docs.projectcalico.org/v3.8/getting-started/kubernetes/requirements" rel="noreferrer">Calico networking (BGP) - TCP 179</a></p> <pre><code> # Specify interface - name: IP_AUTODETECTION_METHOD value: "interface=eth1" </code></pre> <h1>calico.yaml</h1> <pre><code>--- # Source: calico/templates/calico-config.yaml # This ConfigMap is used to configure a self-hosted Calico installation. kind: ConfigMap apiVersion: v1 metadata: name: calico-config namespace: kube-system data: # Typha is disabled. typha_service_name: "none" # Configure the backend to use. calico_backend: "bird" # Configure the MTU to use veth_mtu: "1440" # The CNI network configuration to install on each node. The special # values in this config will be automatically populated. cni_network_config: |- { "name": "k8s-pod-network", "cniVersion": "0.3.1", "plugins": [ { "type": "calico", "log_level": "info", "datastore_type": "kubernetes", "nodename": "__KUBERNETES_NODE_NAME__", "mtu": __CNI_MTU__, "ipam": { "type": "calico-ipam" }, "policy": { "type": "k8s" }, "kubernetes": { "kubeconfig": "__KUBECONFIG_FILEPATH__" } }, { "type": "portmap", "snat": true, "capabilities": {"portMappings": true} } ] } --- # Source: calico/templates/kdd-crds.yaml apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: felixconfigurations.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: FelixConfiguration plural: felixconfigurations singular: felixconfiguration --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ipamblocks.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPAMBlock plural: ipamblocks singular: ipamblock --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: blockaffinities.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: BlockAffinity plural: blockaffinities singular: blockaffinity --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ipamhandles.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPAMHandle plural: ipamhandles singular: ipamhandle --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ipamconfigs.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPAMConfig plural: ipamconfigs singular: ipamconfig --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: bgppeers.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: BGPPeer plural: bgppeers singular: bgppeer --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: bgpconfigurations.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: BGPConfiguration plural: bgpconfigurations singular: bgpconfiguration --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ippools.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPPool 
plural: ippools singular: ippool --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: hostendpoints.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: HostEndpoint plural: hostendpoints singular: hostendpoint --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: clusterinformations.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: ClusterInformation plural: clusterinformations singular: clusterinformation --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: globalnetworkpolicies.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: GlobalNetworkPolicy plural: globalnetworkpolicies singular: globalnetworkpolicy --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: globalnetworksets.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: GlobalNetworkSet plural: globalnetworksets singular: globalnetworkset --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: networkpolicies.crd.projectcalico.org spec: scope: Namespaced group: crd.projectcalico.org version: v1 names: kind: NetworkPolicy plural: networkpolicies singular: networkpolicy --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: networksets.crd.projectcalico.org spec: scope: Namespaced group: crd.projectcalico.org version: v1 names: kind: NetworkSet plural: networksets singular: networkset --- # Source: calico/templates/rbac.yaml # Include a clusterrole for the kube-controllers component, # and bind it to the calico-kube-controllers serviceaccount. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-kube-controllers rules: # Nodes are watched to monitor for deletions. - apiGroups: [""] resources: - nodes verbs: - watch - list - get # Pods are queried to check for existence. - apiGroups: [""] resources: - pods verbs: - get # IPAM resources are manipulated when nodes are deleted. - apiGroups: ["crd.projectcalico.org"] resources: - ippools verbs: - list - apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities - ipamblocks - ipamhandles verbs: - get - list - create - update - delete # Needs access to update clusterinformations. - apiGroups: ["crd.projectcalico.org"] resources: - clusterinformations verbs: - get - create - update --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-kube-controllers roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-kube-controllers subjects: - kind: ServiceAccount name: calico-kube-controllers namespace: kube-system --- # Include a clusterrole for the calico-node DaemonSet, # and bind it to the calico-node serviceaccount. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-node rules: # The CNI plugin needs to get pods, nodes, and namespaces. - apiGroups: [""] resources: - pods - nodes - namespaces verbs: - get - apiGroups: [""] resources: - endpoints - services verbs: # Used to discover service IPs for advertisement. - watch - list # Used to discover Typhas. - get - apiGroups: [""] resources: - nodes/status verbs: # Needed for clearing NodeNetworkUnavailable flag. - patch # Calico stores some configuration information in node annotations. 
- update # Watch for changes to Kubernetes NetworkPolicies. - apiGroups: ["networking.k8s.io"] resources: - networkpolicies verbs: - watch - list # Used by Calico for policy information. - apiGroups: [""] resources: - pods - namespaces - serviceaccounts verbs: - list - watch # The CNI plugin patches pods/status. - apiGroups: [""] resources: - pods/status verbs: - patch # Calico monitors various CRDs for config. - apiGroups: ["crd.projectcalico.org"] resources: - globalfelixconfigs - felixconfigurations - bgppeers - globalbgpconfigs - bgpconfigurations - ippools - ipamblocks - globalnetworkpolicies - globalnetworksets - networkpolicies - networksets - clusterinformations - hostendpoints verbs: - get - list - watch # Calico must create and update some CRDs on startup. - apiGroups: ["crd.projectcalico.org"] resources: - ippools - felixconfigurations - clusterinformations verbs: - create - update # Calico stores some configuration information on the node. - apiGroups: [""] resources: - nodes verbs: - get - list - watch # These permissions are only requried for upgrade from v2.6, and can # be removed after upgrade or on fresh installations. - apiGroups: ["crd.projectcalico.org"] resources: - bgpconfigurations - bgppeers verbs: - create - update # These permissions are required for Calico CNI to perform IPAM allocations. - apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities - ipamblocks - ipamhandles verbs: - get - list - create - update - delete - apiGroups: ["crd.projectcalico.org"] resources: - ipamconfigs verbs: - get # Block affinities must also be watchable by confd for route aggregation. - apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities verbs: - watch # The Calico IPAM migration needs to get daemonsets. These permissions can be # removed if not upgrading from an installation using host-local IPAM. - apiGroups: ["apps"] resources: - daemonsets verbs: - get --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: calico-node roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-node subjects: - kind: ServiceAccount name: calico-node namespace: kube-system --- # Source: calico/templates/calico-node.yaml # This manifest installs the calico-node container, as well # as the CNI plugins and network config on # each master and worker node in a Kubernetes cluster. kind: DaemonSet apiVersion: apps/v1 metadata: name: calico-node namespace: kube-system labels: k8s-app: calico-node spec: selector: matchLabels: k8s-app: calico-node updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 template: metadata: labels: k8s-app: calico-node annotations: # This, along with the CriticalAddonsOnly toleration below, # marks the pod as a critical add-on, ensuring it gets # priority scheduling and that its resources are reserved # if it ever gets evicted. scheduler.alpha.kubernetes.io/critical-pod: '' spec: nodeSelector: beta.kubernetes.io/os: linux hostNetwork: true tolerations: # Make sure calico-node gets scheduled on all nodes. - effect: NoSchedule operator: Exists # Mark the pod as a critical add-on for rescheduling. - key: CriticalAddonsOnly operator: Exists - effect: NoExecute operator: Exists serviceAccountName: calico-node # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. 
terminationGracePeriodSeconds: 0 priorityClassName: system-node-critical initContainers: # This container performs upgrade from host-local IPAM to calico-ipam. # It can be deleted if this is a fresh installation, or if you have already # upgraded to use calico-ipam. - name: upgrade-ipam image: calico/cni:v3.8.2 command: ["/opt/cni/bin/calico-ipam", "-upgrade"] env: - name: KUBERNETES_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: CALICO_NETWORKING_BACKEND valueFrom: configMapKeyRef: name: calico-config key: calico_backend volumeMounts: - mountPath: /var/lib/cni/networks name: host-local-net-dir - mountPath: /host/opt/cni/bin name: cni-bin-dir # This container installs the CNI binaries # and CNI network config file on each node. - name: install-cni image: calico/cni:v3.8.2 command: ["/install-cni.sh"] env: # Name of the CNI config file to create. - name: CNI_CONF_NAME value: "10-calico.conflist" # The CNI network config to install on each node. - name: CNI_NETWORK_CONFIG valueFrom: configMapKeyRef: name: calico-config key: cni_network_config # Set the hostname based on the k8s node name. - name: KUBERNETES_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName # CNI MTU Config variable - name: CNI_MTU valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # Prevents the container from sleeping forever. - name: SLEEP value: "false" volumeMounts: - mountPath: /host/opt/cni/bin name: cni-bin-dir - mountPath: /host/etc/cni/net.d name: cni-net-dir # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes # to communicate with Felix over the Policy Sync API. - name: flexvol-driver image: calico/pod2daemon-flexvol:v3.8.2 volumeMounts: - name: flexvol-driver-host mountPath: /host/driver containers: # Runs calico-node container on each Kubernetes node. This # container programs network policy and routes on each # host. - name: calico-node image: calico/node:v3.8.2 env: # Use Kubernetes API as the backing datastore. - name: DATASTORE_TYPE value: "kubernetes" # Wait for the datastore. - name: WAIT_FOR_DATASTORE value: "true" # Set based on the k8s node name. - name: NODENAME valueFrom: fieldRef: fieldPath: spec.nodeName # Choose the backend to use. - name: CALICO_NETWORKING_BACKEND valueFrom: configMapKeyRef: name: calico-config key: calico_backend # Cluster type to identify the deployment type - name: CLUSTER_TYPE value: "k8s,bgp" # Specify interface - name: IP_AUTODETECTION_METHOD value: "interface=eth1" # Auto-detect the BGP IP address. - name: IP value: "autodetect" # Enable IPIP - name: CALICO_IPV4POOL_IPIP value: "Always" # Set MTU for tunnel device used if ipip is enabled - name: FELIX_IPINIPMTU valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # The default IPv4 pool to create on startup if none exists. Pod IPs will be # chosen from this range. Changing this value after installation will have # no effect. This should fall within `--cluster-cidr`. - name: CALICO_IPV4POOL_CIDR value: "192.168.0.0/16" # Disable file logging so `kubectl logs` works. - name: CALICO_DISABLE_FILE_LOGGING value: "true" # Set Felix endpoint to host default action to ACCEPT. - name: FELIX_DEFAULTENDPOINTTOHOSTACTION value: "ACCEPT" # Disable IPv6 on Kubernetes. 
- name: FELIX_IPV6SUPPORT value: "false" # Set Felix logging to "info" - name: FELIX_LOGSEVERITYSCREEN value: "info" - name: FELIX_HEALTHENABLED value: "true" securityContext: privileged: true resources: requests: cpu: 250m livenessProbe: httpGet: path: /liveness port: 9099 host: localhost periodSeconds: 10 initialDelaySeconds: 10 failureThreshold: 6 readinessProbe: exec: command: - /bin/calico-node - -bird-ready - -felix-ready periodSeconds: 10 volumeMounts: - mountPath: /lib/modules name: lib-modules readOnly: true - mountPath: /run/xtables.lock name: xtables-lock readOnly: false - mountPath: /var/run/calico name: var-run-calico readOnly: false - mountPath: /var/lib/calico name: var-lib-calico readOnly: false - name: policysync mountPath: /var/run/nodeagent volumes: # Used by calico-node. - name: lib-modules hostPath: path: /lib/modules - name: var-run-calico hostPath: path: /var/run/calico - name: var-lib-calico hostPath: path: /var/lib/calico - name: xtables-lock hostPath: path: /run/xtables.lock type: FileOrCreate # Used to install CNI. - name: cni-bin-dir hostPath: path: /opt/cni/bin - name: cni-net-dir hostPath: path: /etc/cni/net.d # Mount in the directory for host-local IPAM allocations. This is # used when upgrading from host-local to calico-ipam, and can be removed # if not using the upgrade-ipam init container. - name: host-local-net-dir hostPath: path: /var/lib/cni/networks # Used to create per-pod Unix Domain Sockets - name: policysync hostPath: type: DirectoryOrCreate path: /var/run/nodeagent # Used to install Flex Volume Driver - name: flexvol-driver-host hostPath: type: DirectoryOrCreate path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds --- apiVersion: v1 kind: ServiceAccount metadata: name: calico-node namespace: kube-system --- # Source: calico/templates/calico-kube-controllers.yaml # See https://github.com/projectcalico/kube-controllers apiVersion: apps/v1 kind: Deployment metadata: name: calico-kube-controllers namespace: kube-system labels: k8s-app: calico-kube-controllers spec: # The controllers can only have a single active instance. replicas: 1 selector: matchLabels: k8s-app: calico-kube-controllers strategy: type: Recreate template: metadata: name: calico-kube-controllers namespace: kube-system labels: k8s-app: calico-kube-controllers annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: nodeSelector: beta.kubernetes.io/os: linux tolerations: # Mark the pod as a critical add-on for rescheduling. - key: CriticalAddonsOnly operator: Exists - key: node-role.kubernetes.io/master effect: NoSchedule serviceAccountName: calico-kube-controllers priorityClassName: system-cluster-critical containers: - name: calico-kube-controllers image: calico/kube-controllers:v3.8.2 env: # Choose which controllers to run. - name: ENABLED_CONTROLLERS value: node - name: DATASTORE_TYPE value: kubernetes readinessProbe: exec: command: - /usr/bin/check-status - -r --- apiVersion: v1 kind: ServiceAccount metadata: name: calico-kube-controllers namespace: kube-system --- # Source: calico/templates/calico-etcd-secrets.yaml --- # Source: calico/templates/calico-typha.yaml --- # Source: calico/templates/configure-canal.yaml </code></pre>
MadProgrammer
<p>I installed a Kubernetes v1.16 cluster with two nodes and enabled "IPv4/IPv6 dual-stack", following <a href="https://kubernetes.io/docs/concepts/services-networking/dual-stack/" rel="nofollow noreferrer">this guide</a>. For "dual-stack", I set <code>--network-plugin=kubenet</code> on the kubelet.</p> <p>Now the pods have IPv4 and IPv6 addresses, and each node has a cbr0 gateway with both an IPv4 and an IPv6 address. But when I ping from one node to the cbr0 gateway of the other node, it fails.</p> <p>I tried to add a route manually, as follows: "ip route add [podCIDR of other node] via [ipaddress of other node]"</p> <p>After I added the route on both nodes, I can ping the cbr0 gateway successfully over IPv4. But adding routes manually does not seem to be the correct way.</p> <p>When I use kubenet, how should I configure things so that I can ping from one node to the cbr0 gateway of the other node?</p>
KeepTheBeats
<p>Kubenet is a <a href="https://kubernetes.io/docs/concepts/services-networking/dual-stack/#prerequisites" rel="nofollow noreferrer">requirement</a> for enabling IPv6 and, as you stated, kubenet has some limitations; <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet" rel="nofollow noreferrer">here</a> we can read:</p> <blockquote> <p><strong>Kubenet is a very basic, simple network plugin</strong>, on Linux only. <strong>It does not, of itself, implement more advanced features</strong> like cross-node networking or network policy. It is typically used together with a <strong>cloud provider that sets up routing rules</strong> for communication between nodes, or in single-node environments.</p> </blockquote> <p>I would like to highlight that kubenet does not create routes automatically for you. </p> <p>Based on this information we can understand that in your scenario <strong>this is the expected behavior and there is no problem happening.</strong> If you want to keep going in this direction you need to create the routes manually, as sketched at the end of this answer. </p> <p>It's important to remember this is an alpha feature (work in progress).</p> <p>There is also some work being done to make it possible to bootstrap a <a href="https://github.com/kubernetes/kubeadm/issues/1612" rel="nofollow noreferrer">Kubernetes cluster with Dual Stack using Kubeadm</a>, but it's not usable yet and there is no ETA for it. </p> <p>There are some examples of IPv6 and dual-stack setups with other networking plugins in <a href="https://github.com/Nordix/k8s-ipv6/tree/dual-stack" rel="nofollow noreferrer">this repository</a>, but they still require adding routes manually.</p> <blockquote> <p>This project serves two primary purposes: (i) study and validate ipv6 support in kubernetes and associated plugins (ii) provide a dev environment for implementing and testing additional functionality <strong>(e.g.dual-stack)</strong></p> </blockquote>
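<p>For completeness, this is roughly what the static routes look like on a two-node cluster. The addresses and pod CIDRs below are hypothetical placeholders; substitute your own node IPs and the CIDRs reported by <code>kubectl get nodes -o jsonpath='{.items[*].spec.podCIDRs}'</code>:</p> <pre><code># on node1, add routes towards node2's pod CIDRs
sudo ip route add 10.244.1.0/24 via &lt;node2-ipv4&gt;
sudo ip -6 route add fd00:10:244:1::/64 via &lt;node2-ipv6&gt;

# on node2, add routes towards node1's pod CIDRs
sudo ip route add 10.244.0.0/24 via &lt;node1-ipv4&gt;
sudo ip -6 route add fd00:10:244:0::/64 via &lt;node1-ipv6&gt;
</code></pre>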
Mark Watney
<p>I have an error while running a pipeline in Jenkins using a Kubernetes Cloud server.</p> <p>Everything works fine until the moment of the <code>npm install</code> where i get <code>Cannot contact nodejs-rn5f3: hudson.remoting.ChannelClosedException: Channel &quot;hudson.remoting.Channel@3b1e0041:nodejs-rn5f3&quot;: Remote call on nodejs-rn5f3 failed. The channel is closing down or has closed down</code></p> <p>How can I fix this error ?</p> <p>Here are my logs :</p> <pre><code>[Pipeline] Start of Pipeline [Pipeline] podTemplate [Pipeline] { [Pipeline] node Still waiting to schedule task ‘nodejs-rn5f3’ is offline Agent nodejs-rn5f3 is provisioned from template nodejs --- apiVersion: &quot;v1&quot; kind: &quot;Pod&quot; metadata: labels: jenkins: &quot;slave&quot; jenkins/label-digest: &quot;XXXXXXXXXXXXXXXXXXXXXXXXXX&quot; jenkins/label: &quot;nodejs&quot; name: &quot;nodejs-rn5f3&quot; spec: containers: - args: - &quot;cat&quot; command: - &quot;/bin/sh&quot; - &quot;-c&quot; image: &quot;node:15.5.1-alpine3.10&quot; imagePullPolicy: &quot;IfNotPresent&quot; name: &quot;node&quot; resources: limits: {} requests: {} tty: true volumeMounts: - mountPath: &quot;/home/jenkins/agent&quot; name: &quot;workspace-volume&quot; readOnly: false workingDir: &quot;/home/jenkins/agent&quot; - env: - name: &quot;JENKINS_SECRET&quot; value: &quot;********&quot; - name: &quot;JENKINS_AGENT_NAME&quot; value: &quot;nodejs-rn5f3&quot; - name: &quot;JENKINS_WEB_SOCKET&quot; value: &quot;true&quot; - name: &quot;JENKINS_NAME&quot; value: &quot;nodejs-rn5f3&quot; - name: &quot;JENKINS_AGENT_WORKDIR&quot; value: &quot;/home/jenkins/agent&quot; - name: &quot;JENKINS_URL&quot; value: &quot;http://XX.XX.XX.XX/&quot; image: &quot;jenkins/inbound-agent:4.3-4&quot; name: &quot;jnlp&quot; resources: requests: cpu: &quot;100m&quot; memory: &quot;256Mi&quot; volumeMounts: - mountPath: &quot;/home/jenkins/agent&quot; name: &quot;workspace-volume&quot; readOnly: false hostNetwork: false nodeSelector: kubernetes.io/os: &quot;linux&quot; restartPolicy: &quot;Never&quot; volumes: - emptyDir: medium: &quot;&quot; name: &quot;workspace-volume&quot; Running on nodejs-rn5f3 in /home/jenkins/agent/workspace/something [Pipeline] { [Pipeline] stage [Pipeline] { (Test) [Pipeline] checkout Selected Git installation does not exist. Using Default [... cloning repository] [Pipeline] container [Pipeline] { [Pipeline] sh + ls -la total 1240 drwxr-xr-x 5 node node 4096 Feb 26 07:33 . drwxr-xr-x 4 node node 4096 Feb 26 07:33 .. -rw-r--r-- 1 node node 1689 Feb 26 07:33 package.json and some other files and folders [Pipeline] sh + cat package.json { [...] &quot;dependencies&quot;: { [blabla....] }, &quot;devDependencies&quot;: { [blabla...] } } [Pipeline] sh + npm install Cannot contact nodejs-rn5f3: hudson.remoting.ChannelClosedException: Channel &quot;hudson.remoting.Channel@3b1e0041:nodejs-rn5f3&quot;: Remote call on nodejs-rn5f3 failed. 
The channel is closing down or has closed down </code></pre> <p>At this stage, here are the logs of the container <code>jnlp</code> in my pod <code>nodejs-rnf5f3</code>:</p> <pre><code>INFO: Connected Feb 26, 2021 8:05:53 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Read side closed Feb 26, 2021 8:05:53 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Terminated Feb 26, 2021 8:05:53 AM jenkins.slaves.restarter.JnlpSlaveRestarterInstaller$FindEffectiveRestarters$1 onReconnect INFO: Restarting agent via jenkins.slaves.restarter.UnixSlaveRestarter@1a39588e Feb 26, 2021 8:05:55 AM hudson.remoting.jnlp.Main createEngine INFO: Setting up agent: nodejs-rnf5f3 Feb 26, 2021 8:05:55 AM hudson.remoting.jnlp.Main$CuiListener &lt;init&gt; INFO: Jenkins agent is running in headless mode. Feb 26, 2021 8:05:55 AM hudson.remoting.Engine startEngine INFO: Using Remoting version: 4.3 Feb 26, 2021 8:05:55 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir INFO: Using /home/jenkins/agent/remoting as a remoting work directory Feb 26, 2021 8:05:55 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting Feb 26, 2021 8:05:55 AM hudson.remoting.jnlp.Main$CuiListener status INFO: WebSocket connection open Feb 26, 2021 8:05:58 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Connected .... same as above </code></pre> <p>I don't know where this error comes from. Is it related to resource usage?</p> <p>Here is the resource usage of my containers:</p> <pre><code>POD NAME CPU(cores) MEMORY(bytes) jenkins-1-jenkins-0 jenkins-master 61m 674Mi nodejs-rnf5f3 jnlp 468m 104Mi nodejs-rnf5f3 node 1243m 1284Mi </code></pre> <p>My cluster is an <code>e2-medium</code> in GKE with 2 nodes.</p>
mmoussa
<p>If I had to bet (but it's just a wild guess) I'd say that the pod was killed due to running out of memory (OOMKilled).</p> <p>The ChannelClosedException is a symptom, not the problem.</p> <p>It's kind of hard to debug because the agent pod is being deleted. You can try <code>kubectl get events</code> in the relevant namespace, but events only last for 1 hour by default.</p>
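<p>A minimal sketch of what that check could look like (the namespace and pod names below are placeholders for your own):</p> <pre><code># recent events in the namespace where the agent pods run
kubectl get events -n &lt;jenkins-namespace&gt; --sort-by=.lastTimestamp

# if the agent pod is still around, check whether its last state was OOMKilled
kubectl describe pod &lt;agent-pod-name&gt; -n &lt;jenkins-namespace&gt;
</code></pre>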
GuyCarmy
<p>In my team, we sometimes scale down to just one pod in Openshift to make testing easier. If we then do a rolling update with the desired replica count set to 2, Openshift scales up to two pods before performing a rolling deploy. It is a nuisance, because the new &quot;old&quot; pod can start things that we don't expect to be started before the new deployment starts, and so we have to remember to take down the one pod before the new deploy.</p> <p>Is there a way to stop the old deployment from scaling up to the desired replica count while the new deployment is scaled up to the desired replica count? Also, why does it work this way?</p> <ul> <li>OpenShift Master: v3.11.200</li> <li>Kubernetes Master: v1.11.0+d4cacc0</li> <li>OpenShift Web Console: 3.11.200-1-8a53b1d</li> </ul> <p>From our Openshift template:</p> <pre><code>- apiVersion: v1 kind: DeploymentConfig spec: replicas: 2 strategy: type: Rolling </code></pre>
Programmer Trond
<p>This is expected behavior when using the <code>RollingUpdate</code> strategy. It removes old pods one by one while adding new ones at the same time, keeping the application available throughout the whole process and ensuring there’s no drop in its capacity to handle requests. Since you have only one pod, Kubernetes scales the deployment up to honor the strategy and the <code>zero-downtime</code> guarantee requested in the manifest.</p> <p>It scales up to 2 because, if not specified, <code>maxSurge</code> defaults to 25%. That means there can be at most 25% more pod instances than the desired count during an update.</p> <p>If you want to ensure that the deployment won't be scaled above the desired count, you might change the strategy to <code>Recreate</code>. This will cause all old pods to be deleted before the new ones are created. Use this strategy when your application doesn’t support running multiple versions in parallel and requires the old version to be stopped completely before the new one is started. However, please note that this strategy does involve a short period of time when your app becomes completely unavailable.</p> <p>Here's a good <a href="https://blog.sebastian-daschner.com/entries/zero-downtime-updates-kubernetes#:%7E:text=Per%20default,%20Kubernetes%20deployments%20roll,time%20while%20performing%20the%20updates." rel="nofollow noreferrer">document</a> that describes the rolling update strategy. It is also worth checking the official Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">documentation</a> about deployments.</p>
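<p>If you want to keep the Rolling strategy but avoid the temporary extra pod, another option is to cap the surge. This is only a sketch against the OpenShift <code>DeploymentConfig</code> rolling parameters (as far as I know they are exposed under <code>rollingParams</code>; please verify against your 3.11 API before relying on it):</p> <pre><code>- apiVersion: v1
  kind: DeploymentConfig
  spec:
    replicas: 2
    strategy:
      type: Rolling
      rollingParams:
        maxSurge: 0        # never run more pods than the desired replica count
        maxUnavailable: 1  # instead, allow one pod to be taken down at a time
</code></pre> <p>Note that with <code>maxSurge: 0</code> at least one of the two values must allow some slack, hence <code>maxUnavailable: 1</code>; capacity briefly drops by one pod during the rollout.</p>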
acid_fuji
<p>I'm very new to Kubernetes. Here I tried a CronJob yaml in which the pods are created every 1 minute.</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: hello spec: schedule: &quot;*/1 * * * *&quot; jobTemplate: spec: template: spec: containers: - name: hello image: busybox args: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster restartPolicy: OnFailure </code></pre> <p>But the first pod is created only after 1 minute. Is it possible to run the job immediately and then every 1 minute after that?</p>
june alex
<p>As already stated in the comments, a <code>CronJob</code> is backed by a <code>Job</code>. What you can do is literally launch <code>CronJob</code> and <code>Job</code> resources using the same spec at the same time; a sketch of such a one-off <code>Job</code> is included at the end of this answer. You can do that conveniently using a <a href="https://medium.com/@alexander.hungenberg/helm-vs-kustomize-how-to-deploy-your-applications-in-2020-67f4d104da69" rel="nofollow noreferrer">helm chart or kustomize</a>.</p> <p>Alternatively you can place both manifests in the same file, or in two files in the same directory, and then use:</p> <pre><code>kubectl apply -f &lt;file/dir&gt; </code></pre> <p>With this workaround the initial <code>Job</code> is started right away and the <code>CronJob</code> follows after some time. The downside of this solution is that the first <code>Job</code> is standalone and is not included in the CronJob's history. Another possible side effect is that the first <code>Job</code> and the first <code>CronJob</code> run can execute in parallel if the <code>Job</code> cannot finish its tasks fast enough; <code>concurrencyPolicy</code> does not take that <code>Job</code> into consideration.</p> <p>From the documentation:</p> <blockquote> <p>A cron job creates a job object about once per execution time of its schedule. We say &quot;about&quot; because there are certain circumstances where two jobs might be created, or no job might be created. We attempt to make these rare, but do not completely prevent them.</p> </blockquote> <p>So if you want to keep the task execution stricter, it may be better to use a Bash wrapper script with a <code>sleep</code> between task executions, or to design an app that forks sub-processes at the specified interval, build it into a container image and run it as a Deployment.</p>
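<p>For reference, this is roughly what the one-off <code>Job</code> could look like, reusing the pod spec from the CronJob in the question (the name <code>hello-now</code> is arbitrary):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: hello-now
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        args:
        - /bin/sh
        - -c
        - date; echo Hello from the Kubernetes cluster
      restartPolicy: OnFailure
</code></pre>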
acid_fuji
<p>This is a Kubespray deployment using calico. All the defaults are were left as-is except for the fact that there is a proxy. Kubespray ran to the end without issues.</p> <p>Access to Kubernetes services started failing and after investigation, there was <strong>no route to host</strong> to the <em>coredns</em> service. Accessing a K8S service by IP worked. Everything else seems to be correct, so I am left with a cluster that works, but without DNS.</p> <p>Here is some background information: Starting up a busybox container:</p> <pre><code># nslookup kubernetes.default Server: 169.254.25.10 Address: 169.254.25.10:53 ** server can't find kubernetes.default: NXDOMAIN *** Can't find kubernetes.default: No answer </code></pre> <p>Now the output while explicitly defining the IP of one of the CoreDNS pods:</p> <pre><code># nslookup kubernetes.default 10.233.0.3 ;; connection timed out; no servers could be reached </code></pre> <p>Notice that telnet to the Kubernetes API works:</p> <pre><code># telnet 10.233.0.1 443 Connected to 10.233.0.1 </code></pre> <p><strong>kube-proxy logs:</strong> 10.233.0.3 is the service IP for coredns. The last line looks concerning, even though it is INFO.</p> <pre><code>$ kubectl logs kube-proxy-45v8n -nkube-system I1114 14:19:29.657685 1 node.go:135] Successfully retrieved node IP: X.59.172.20 I1114 14:19:29.657769 1 server_others.go:176] Using ipvs Proxier. I1114 14:19:29.664959 1 server.go:529] Version: v1.16.0 I1114 14:19:29.665427 1 conntrack.go:52] Setting nf_conntrack_max to 262144 I1114 14:19:29.669508 1 config.go:313] Starting service config controller I1114 14:19:29.669566 1 shared_informer.go:197] Waiting for caches to sync for service config I1114 14:19:29.669602 1 config.go:131] Starting endpoints config controller I1114 14:19:29.669612 1 shared_informer.go:197] Waiting for caches to sync for endpoints config I1114 14:19:29.769705 1 shared_informer.go:204] Caches are synced for service config I1114 14:19:29.769756 1 shared_informer.go:204] Caches are synced for endpoints config I1114 14:21:29.666256 1 graceful_termination.go:93] lw: remote out of the list: 10.233.0.3:53/TCP/10.233.124.23:53 I1114 14:21:29.666380 1 graceful_termination.go:93] lw: remote out of the list: 10.233.0.3:53/TCP/10.233.122.11:53 </code></pre> <p>All pods are running without crashing/restarts etc. and otherwise services behave correctly.</p> <p>IPVS looks correct. 
CoreDNS service is defined there:</p> <pre><code># ipvsadm -ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -&gt; RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 10.233.0.1:443 rr -&gt; x.59.172.19:6443 Masq 1 0 0 -&gt; x.59.172.20:6443 Masq 1 1 0 TCP 10.233.0.3:53 rr -&gt; 10.233.122.12:53 Masq 1 0 0 -&gt; 10.233.124.24:53 Masq 1 0 0 TCP 10.233.0.3:9153 rr -&gt; 10.233.122.12:9153 Masq 1 0 0 -&gt; 10.233.124.24:9153 Masq 1 0 0 TCP 10.233.51.168:3306 rr -&gt; x.59.172.23:6446 Masq 1 0 0 TCP 10.233.53.155:44134 rr -&gt; 10.233.89.20:44134 Masq 1 0 0 UDP 10.233.0.3:53 rr -&gt; 10.233.122.12:53 Masq 1 0 314 -&gt; 10.233.124.24:53 Masq 1 0 312 </code></pre> <p>Host routing also looks correct.</p> <pre><code># ip r default via x.59.172.17 dev ens3 proto dhcp src x.59.172.22 metric 100 10.233.87.0/24 via x.59.172.21 dev tunl0 proto bird onlink blackhole 10.233.89.0/24 proto bird 10.233.89.20 dev calib88cf6925c2 scope link 10.233.89.21 dev califdffa38ed52 scope link 10.233.122.0/24 via x.59.172.19 dev tunl0 proto bird onlink 10.233.124.0/24 via x.59.172.20 dev tunl0 proto bird onlink x.59.172.16/28 dev ens3 proto kernel scope link src x.59.172.22 x.59.172.17 dev ens3 proto dhcp scope link src x.59.172.22 metric 100 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown </code></pre> <p>I have redeployed this same cluster in separate environments with flannel and calico with iptables instead of ipvs. I have also disabled the docker http proxy after deploy temporarily. None of which makes any difference.</p> <p>Also: kube_service_addresses: 10.233.0.0/18 kube_pods_subnet: 10.233.64.0/18 (They do not overlap)</p> <p>What is the next step in debugging this issue?</p>
user179763
<p>I highly recommend you to avoid using latest busybox image to troubleshoot DNS. There are few <a href="https://github.com/docker-library/busybox/issues/48" rel="nofollow noreferrer">issues</a> reported regarding dnslookup on versions newer than 1.28. </p> <p>v 1.28.4</p> <pre class="lang-sh prettyprint-override"><code>user@node1:~$ kubectl exec -ti busybox busybox | head -1 BusyBox v1.28.4 (2018-05-22 17:00:17 UTC) multi-call binary. user@node1:~$ kubectl exec -ti busybox -- nslookup kubernetes.default Server: 169.254.25.10 Address 1: 169.254.25.10 Name: kubernetes.default Address 1: 10.233.0.1 kubernetes.default.svc.cluster.local </code></pre> <p>v 1.31.1</p> <pre class="lang-sh prettyprint-override"><code>user@node1:~$ kubectl exec -ti busyboxlatest busybox | head -1 BusyBox v1.31.1 (2019-10-28 18:40:01 UTC) multi-call binary. user@node1:~$ kubectl exec -ti busyboxlatest -- nslookup kubernetes.default Server: 169.254.25.10 Address: 169.254.25.10:53 ** server can't find kubernetes.default: NXDOMAIN *** Can't find kubernetes.default: No answer command terminated with exit code 1 </code></pre> <p>Going deeper and exploring more possibilities, I've reproduced your problem on GCP and after some digging I was able to figure out what is causing this communication problem. </p> <p>GCE (Google Compute Engine) blocks traffic between hosts by default; we have to allow Calico traffic to flow between containers on different hosts. </p> <p>According to calico <a href="https://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/gce" rel="nofollow noreferrer">documentation</a>, you can do it by creating a firewall allowing this communication rule: </p> <pre class="lang-sh prettyprint-override"><code>gcloud compute firewall-rules create calico-ipip --allow 4 --network "default" --source-ranges "10.128.0.0/9" </code></pre> <p>You can verify the rule with this command:</p> <pre class="lang-sh prettyprint-override"><code>gcloud compute firewall-rules list </code></pre> <p>This is not present on the most recent calico documentation but it's still true and necessary. </p> <p>Before creating firewall rule:</p> <pre class="lang-sh prettyprint-override"><code>user@node1:~$ kubectl exec -ti busybox2 -- nslookup kubernetes.default Server: 10.233.0.3 Address 1: 10.233.0.3 coredns.kube-system.svc.cluster.local nslookup: can't resolve 'kubernetes.default' command terminated with exit code 1 </code></pre> <p>After creating firewall rule:</p> <pre class="lang-sh prettyprint-override"><code>user@node1:~$ kubectl exec -ti busybox2 -- nslookup kubernetes.default Server: 10.233.0.3 Address 1: 10.233.0.3 coredns.kube-system.svc.cluster.local Name: kubernetes.default Address 1: 10.233.0.1 kubernetes.default.svc.cluster.local </code></pre> <p>It doesn't matter if you bootstrap your cluster using kubespray or kubeadm, this problem will happen because calico needs to communicate between nodes and GCE is blocking it as default. </p>
Mark Watney
<p>I have local and dockerized apps which are working excellently on localhost: a Java backend at 8080, Angular at 4200, ActiveMQ at 8161, and Postgres on 5432. Now I am also trying to kubernetize the apps so they keep working on localhost. As far as I know Kubernetes assigns random IPs inside the cluster; what should I do to make the apps reachable on localhost and able to listen to each other? Is there any way to make them automatically start on those localhost ports instead of using port forwarding for each service? Every service and deployment has a similar structure:</p> <p>Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: backend spec: selector: app: backend type: LoadBalancer ports: - protocol: 8080 port: 8080 targetPort: 8080</code></pre> <p>Deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: backend labels: app: backend spec: replicas: 3 selector: matchLabels: app: backend template: metadata: labels: app: backend spec: containers: - name: backend image: ports: - containerPort: 8080</code></pre> <p>I tried port-forwarding, which works, but it requires a lot of manual work (opening a few new PowerShell windows and then doing the forwarding manually for each service).</p>
PiotrXD
<p>In the Kubernetes ecosystem, apps talk to each other through their services. If they are in the same namespace they can address a service directly by its service name; if not, they need to specify the full name, which includes the namespace: <code>my-svc.my-namespace.svc.cluster-domain.example</code></p>
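<p>As a small illustration (a sketch, assuming the <code>backend</code> Service from the question lives in the <code>default</code> namespace and the cluster uses the default <code>cluster.local</code> domain):</p> <pre><code># from any pod in the same namespace
curl http://backend:8080

# from a pod in a different namespace
curl http://backend.default.svc.cluster.local:8080
</code></pre>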
mn0o7
<p>I have a deployment that looks as follows: <a href="https://i.stack.imgur.com/j5TPW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j5TPW.png" alt="enter image description here"></a></p> <p>The question is: what is the difference between the key/value pair with the red border and the one with the violet border?</p>
softshipper
<p>These key fields are referred to as Labels in Kubernetes. Labels are used to organize our cluster. </p> <p>Labels are key/value pairs attached to objects that can be used to identify or group resources in Kubernetes. They can be used to select resources from a list. </p> <p>Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key/value labels defined, and each key must be unique for a given object.</p> <p>Going deeper into it, let's suppose you have this Pod in your cluster:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: sample-pod namespace: default labels: env: development spec: containers: - name: busybox image: busybox command: - sleep - "3600" imagePullPolicy: IfNotPresent restartPolicy: Always </code></pre> <p>As you can see we are setting one label: <code>env: development</code></p> <p>If you deploy this pod you can run the following command to list all labels set on this pod: </p> <pre class="lang-sh prettyprint-override"><code>kubectl get pod sample-pod --show-labels NAME READY STATUS RESTARTS AGE LABELS sample-pod 1/1 Running 0 28s env=development </code></pre> <p>You can also list all pods with the <code>development</code> label:</p> <pre><code>$ kubectl get pods -l env=development NAME READY STATUS RESTARTS AGE sample-pod 1/1 Running 0 106s </code></pre> <p>You can also delete a pod using the label selector: </p> <pre><code>$ kubectl delete pods -l env=development pod "sample-pod" deleted </code></pre> <p>Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well. Three kinds of operators are admitted: <code>=</code>, <code>==</code>, <code>!=</code>. The first two represent <em>equality</em> (and are simply synonyms), while the latter represents <em>inequality</em>. For example:</p> <pre><code>environment = production tier != frontend </code></pre> <p>You can read more about labels in the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">Kubernetes documentation</a>.</p>
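<p>Since labels can also be added or changed after an object is created, here is a short sketch of how that looks with <code>kubectl label</code>, using the same <code>sample-pod</code> from above:</p> <pre><code># add a second label to the running pod
kubectl label pod sample-pod tier=backend

# change an existing label value (requires --overwrite)
kubectl label pod sample-pod env=staging --overwrite

# remove a label by suffixing the key with a dash
kubectl label pod sample-pod tier-
</code></pre>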
Mark Watney
<p>I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub action is activated which calls <code>kubectl apply -f kubernetes</code> to apply a bunch of yaml Kubernetes definitions.</p> <p>Let's say I add yaml for a new service and deploy it -- kubectl will add it.</p> <p>But then later on, I simply delete the yaml for that service and redeploy -- kubectl will NOT delete it.</p> <p>Is there any way that <code>kubectl</code> can recognize that the service yaml is missing, and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.</p> <p>Does the developer have to know to connect <code>kubectl</code> to the production cluster and delete the service manually, in addition to deleting the yaml definition?</p> <p>Is there a mechanism for Kubernetes to "know what's missing"?</p>
ChaseMoskal
<p>You need to use a CI/CD tool for Kubernetes to achieve what you need. As mentioned by <a href="https://stackoverflow.com/users/12868186/sithroo" title="121 reputation">Sithroo</a>, Helm is a very good option. </p> <blockquote> <p>Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd party products and your own.</p> <p>No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With helm, there is a structure and a convention for a software package that defines a layer of YAML <strong>templates</strong> and another layer that changes the templates called <strong>values.</strong> Values are injected into templates, thus allowing a separation of configuration, and defines where changes are allowed. This whole package is called a <strong>Helm</strong> <strong>Chart</strong>.</p> <p>Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster; including <strong>dependencies</strong> the application requires. <a href="https://medium.com/prodopsio/a-6-minute-introduction-to-helm-ab5949bf425" rel="nofollow noreferrer">Source</a> </p> </blockquote> <p>Before you start, I recommend these articles explaining its quirks and features:</p> <p><a href="https://medium.com/@gajus/the-missing-ci-cd-kubernetes-component-helm-package-manager-1fe002aac680" rel="nofollow noreferrer">The missing CI/CD Kubernetes component: Helm package manager</a></p> <p><a href="https://medium.com/velotio-perspectives/continuous-integration-delivery-ci-cd-for-kubernetes-using-circleci-helm-b8b0a91ef1a3" rel="nofollow noreferrer">Continuous Integration &amp; Delivery (CI/CD) for Kubernetes Using CircleCI &amp; Helm</a></p>
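<p>To tie it back to the question: Helm tracks every object it created as part of a release, so when you remove a template from the chart and roll the release forward, the corresponding live object is deleted for you. A minimal sketch of what the CI step could look like (the release name <code>myapp</code>, chart path and namespace are hypothetical):</p> <pre><code># install on the first run, upgrade afterwards; objects dropped from the chart are removed
helm upgrade --install myapp ./deploy/chart --namespace production
</code></pre>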
Mark Watney
<p>I am assessing the migration of my current Airflow deployment from the Celery executor to the Kubernetes (K8s) executor to leverage the dynamic allocation of resources and the isolation of tasks provided by pods.</p> <p>It is clear to me that we can use the native KubernetesPodOperator to run tasks on a K8s cluster via the K8s executor. However, I couldn't find info about the compatibility between the K8s executor and other operators, such as bash and athena.</p> <p><strong>Here is the question:</strong> is it possible to run a bash (or any other) operator on a K8s-powered Airflow, or should I migrate all my tasks to the KubernetesPodOperator?</p> <p>Thanks!</p>
André Perez
<p>The Kubernetes executor will work with all operators.<br /> Using the Kubernetes executor creates a worker pod for every task instead of using a Celery worker as the Celery executor does.</p> <p>The <strong>KubernetesPodOperator</strong> pulls whatever image you specify, launches a pod from it and executes your task there.<br /> So if you use the <strong>KubernetesPodOperator</strong> with the <strong>KubernetesExecutor</strong>, Airflow will launch a worker pod for your task, and that task will launch another pod and monitor its execution. 2 pods for 1 task.</p> <p>If you use a <strong>BashOperator</strong> with the <strong>KubernetesExecutor</strong>, Airflow will launch a worker pod and execute the bash commands on that worker pod. 1 pod for 1 task.</p>
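<p>As a minimal sketch (assuming Airflow 2.x import paths and that <code>executor = KubernetesExecutor</code> is already set in <code>airflow.cfg</code>; the DAG and task names are made up), a plain <code>BashOperator</code> DAG needs nothing special to run on the Kubernetes executor. Each task instance simply becomes its own worker pod:</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="bash_on_k8s_executor",   # hypothetical DAG name
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # Runs inside the worker pod that the KubernetesExecutor spawns for this task
    hello = BashOperator(
        task_id="hello",
        bash_command="echo 'running inside a per-task worker pod'",
    )
</code></pre>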
Sk1tter
<p>I would like to execute a command in a container (let it be <em>ls</em>), then read the exit code with <em>echo $?</em>. <code>kubectl exec -ti mypod -- bash -c &quot;ls; echo $?&quot;</code> does not work because it returns the exit code of my current shell, not the one from the container.</p> <p>So I tried to use <em>eval</em> on an env variable I defined in my manifest:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: container2
    image: varunuppal/nonrootsudo
    env:
    - name: resultCmd
      value: 'echo $?'
</code></pre> <p>then <code>kubectl exec -ti mypod -- bash -c &quot;ls;eval $resultCmd&quot;</code> but the eval command does not return anything.</p> <pre><code>bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr
</code></pre> <p>Note that I can run these two commands within the container:</p> <pre><code>kubectl exec -ti mypod bash
#ls;eval $resultCmd
bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr
**0**
</code></pre> <p>How can I make it work? Thanks in advance.</p>
Abdelghani
<p>This is happening because you use double quotes instead of single ones. With double quotes, your local shell substitutes <code>$?</code> before the command is ever sent to the container; single quotes prevent any substitution, so the expansion happens inside the container's shell instead.</p> <p>From the bash documentation:</p> <blockquote> <p><a href="http://www.gnu.org/software/bash/manual/html_node/Single-Quotes.html" rel="nofollow noreferrer">3.1.2.2 Single Quotes</a></p> <p>Enclosing characters in single quotes (<code>'</code>) preserves the literal value of each character within the quotes. A single quote may not occur between single quotes, even when preceded by a backslash.</p> </blockquote> <p>To summarize, this is how your command should look:</p> <pre><code>kubectl exec -ti mypod -- bash -c 'ls; echo $?'
</code></pre>
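<p>You can see the difference locally before anything reaches the pod (purely illustrative):</p> <pre class="lang-sh prettyprint-override"><code># double quotes: YOUR shell expands $? first,
# so the container just receives e.g. "ls; echo 0"
echo "ls; echo $?"

# single quotes: the text is passed through literally,
# so the container's bash evaluates $? itself
echo 'ls; echo $?'
</code></pre>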
acid_fuji
<p>I have a tar of my service-specific images. I am importing it into containerd so that it will be used by k3s to deploy pods. The command used to import the image tar is:<br> <code>k3s ctr images import XXX.tar</code> </p> <p>By default it loads images into the <code>/var/lib/rancher/data</code> directory. However, I would like to load images into a different directory. Does anyone know how to specify a custom directory when loading images?</p>
Yogesh Jilhawar
<p>I didn't find anything that natively allows changing this directory. That doesn't mean nothing can be done: you can always replace <code>/var/lib/rancher/data</code> with a symlink pointing to the directory you actually want to use:</p> <pre><code>ln -s target_path /var/lib/rancher/data
</code></pre> <p>Let me know if it helped.</p>
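<p>Roughly, the full sequence could look like this (a sketch only; <code>/mnt/bigdisk/rancher-data</code> is a made-up target path, and the existing directory has to be moved out of the way before the symlink can be created):</p> <pre class="lang-sh prettyprint-override"><code># stop k3s so nothing writes to the directory while it is moved
sudo systemctl stop k3s

# move the existing data to the new location and link it back
sudo mv /var/lib/rancher/data /mnt/bigdisk/rancher-data
sudo ln -s /mnt/bigdisk/rancher-data /var/lib/rancher/data

sudo systemctl start k3s
</code></pre>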
Matt
<p>I am running a docker image that has certain configuration files within it. I need to persist/mount the same folder to the disk as new files will get added later on. When I use a standard volume mount in kubernetes, it mounts an empty directory without the initial configuration files. How do I make sure my initial files are copied to the volume while mounting?</p> <pre><code>        - mountPath: /tmp
          name: my-vol
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: wso2-disk2
</code></pre>
Pranav
<p>A possible solution could be to use the node storage mounted on containers (the easiest way) or a DFS solution like NFS, GlusterFS, and so on. </p> <p>Another, recommended way to achieve what you need is to use persistent volumes to share the same files between your containers. </p> <p>Assuming you have a kubernetes cluster that has only one Node, and you want to share the path <code>/mnt/data</code> of your node with your pods (<a href="https://stackoverflow.com/a/60092545/12153576">Source</a>):</p> <p><strong>Create a PersistentVolume:</strong></p> <blockquote> <p>A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.</p> </blockquote> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
</code></pre> <p><strong>Create a PersistentVolumeClaim:</strong></p> <blockquote> <p>Pods use PersistentVolumeClaims to request physical storage</p> </blockquote> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
</code></pre> <p>Look at the PersistentVolumeClaim:</p> <p><code>kubectl get pvc task-pv-claim</code></p> <p>The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, <code>task-pv-volume</code>.</p> <pre><code>NAME            STATUS    VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Bound     task-pv-volume   10Gi       RWO           manual         30s
</code></pre> <p><strong>Create a deployment with 2 replicas for example:</strong></p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/mnt/data"
              name: task-pv-storage
</code></pre> <p>Now you can check that inside both containers the path <code>/mnt/data</code> has the same files.</p> <p>If you have a cluster with more than 1 node, I recommend you think about the other types of <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">persistent volumes</a> or use <a href="https://kubernetes.io/docs/concepts/storage/" rel="nofollow noreferrer">DFS</a>. </p> <p><strong>References:</strong> <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure persistent volumes</a>, <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent volumes</a>, <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="nofollow noreferrer">Volume Types</a></p>
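<p>To verify that the sharing works you could do something like this (the pod names are placeholders; take them from <code>kubectl get pods</code>):</p> <pre class="lang-sh prettyprint-override"><code>kubectl get pods -l app=nginx

# write a file through the first replica...
kubectl exec &lt;first-nginx-pod&gt; -- sh -c 'echo hello &gt; /mnt/data/test.txt'

# ...and read it back through the second one
kubectl exec &lt;second-nginx-pod&gt; -- cat /mnt/data/test.txt
</code></pre>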
Mark Watney
<p>I have some endpoint running on a pod inside of my Kubernetes cluster that is secured by basic auth. The normal way of accessing this pod through the API server would be:</p> <pre><code>https://{{ apiServer }}:{{ apiServerPort }}/api/v1/namespaces/{{ namespaceName }}/pods/{{ podName }}:{{ podPort }}/proxy/somepath </code></pre> <p>This works fine for endpoints that are not secured using Basic Auth, but when trying to access the secure endpoints, I get a 403 Forbidden every time because I can not specify two authentication headers and I already need to authenticate myself to the API server itself. Is it possible to get those Basic Auth credentials forwarded to the pod or am I out of luck using the API server proxy?</p>
W3D3
<p>The working solution that I can think of is to use <code>kubectl port-forward</code>. With it, the Kubernetes API server establishes a single HTTP connection between your localhost and the resource running on your cluster, and your requests (including their authentication headers) are passed through to the pod untouched.</p> <pre><code>kubectl port-forward TYPE/NAME [options] LOCAL_PORT:REMOTE_PORT
</code></pre> <p>You can send the traffic to a specific pod, use a random local port or even specify the local IP address used for forwarding.</p> <p>You can read more about port forwarding <a href="https://kubectl.docs.kubernetes.io/pages/container_debugging/port_forward_to_pods.html" rel="nofollow noreferrer">here</a> and check <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">this</a> example in the official Kubernetes documentation.</p> <p>If you want some alternatives, you may want to check <a href="https://www.telepresence.io/" rel="nofollow noreferrer">telepresence</a>.</p> <p>PS. I tried to use a sidecar container with kubectl proxy but that did not work, unfortunately.</p>
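<p>For example (pod name, port and credentials are placeholders):</p> <pre class="lang-sh prettyprint-override"><code># forward local port 8080 to port 80 of the pod
kubectl port-forward pod/&lt;pod-name&gt; 8080:80

# in another terminal: the basic auth header goes straight to the pod
curl -u myuser:mypassword http://localhost:8080/somepath
</code></pre>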
acid_fuji
<p>I know that <code>kubectl delete pod &lt;pod_name&gt;</code> will remove the pod and a new pod will be auto-created if it is managed by a deployment.</p> <p>Just want to know if there's a way to make the recreation happen before removal? Just like rolling restart one single pod with surge.</p>
eval
<p>There is no easy way, but there is a workaround. It requires a few steps that need to be done one by one and is easy to get wrong, so I'll show it mainly to demonstrate that it can be done, but <strong>you probably should not do this</strong>.</p> <p>Let's first create a test deployment:</p> <pre><code>$ kubectl create deployment --image nginx ngx --replicas 3 --dry-run -oyaml &gt; depl
$ kubectl apply -f depl
deployment.apps/ngx created

$ kubectl get po
NAME                   READY   STATUS    RESTARTS   AGE
ngx-768fd5d6f5-bj5z4   1/1     Running   0          45s
ngx-768fd5d6f5-rt9p5   1/1     Running   0          45s
ngx-768fd5d6f5-w4bv7   1/1     Running   0          45s
</code></pre> <p>Scale the deployment one replica up:</p> <pre><code>$ kubectl scale deployment --replicas 4 ngx
deployment.apps/ngx scaled
</code></pre> <p>Delete the deployment and replicaset with <code>--cascade=orphan</code> (this removes the deployment and replicaset but leaves the pods untouched):</p> <pre><code>$ kubectl delete deployment ngx --cascade=orphan
deployment.apps &quot;ngx&quot; deleted

$ kubectl delete replicaset ngx-768fd5d6f5 --cascade=orphan
replicaset.apps &quot;ngx-768fd5d6f5&quot; deleted
</code></pre> <p>Delete the pod you want to get rid of:</p> <pre><code>$ kubectl get po
NAME                   READY   STATUS    RESTARTS   AGE
ngx-768fd5d6f5-bj5z4   1/1     Running   0          4m53s
ngx-768fd5d6f5-rt9p5   1/1     Running   0          4m53s
ngx-768fd5d6f5-t4jch   1/1     Running   0          3m23s
ngx-768fd5d6f5-w4bv7   1/1     Running   0          4m53s

$ kubectl delete po ngx-768fd5d6f5-t4jch
pod &quot;ngx-768fd5d6f5-t4jch&quot; deleted

$ kubectl get po
NAME                   READY   STATUS    RESTARTS   AGE
ngx-768fd5d6f5-bj5z4   1/1     Running   0          5m50s
ngx-768fd5d6f5-rt9p5   1/1     Running   0          5m50s
ngx-768fd5d6f5-w4bv7   1/1     Running   0          5m50s
</code></pre> <p>Now restore the deployment:</p> <pre><code>$ kubectl apply -f depl
deployment.apps/ngx created
</code></pre> <p>The newly created deployment will create a new replicaset that will adopt the already existing pods.</p> <p>As you can see this can be done, but it requires more effort and some tricks. It can be useful sometimes, but I would not recommend including it in your CI/CD pipeline.</p>
Matt
<p>I am running two threads inside the container of the Kubernetes pod one thread pushes some data to db and other thread (flask app) shows the data from database. So as soon as the pod starts up main.py(starts both the threads mentioned above) will be called.</p> <p>Docker file:</p> <pre><code>FROM python:3 WORKDIR /usr/src/app COPY app/requirements.txt . RUN pip install -r requirements.txt COPY app . CMD [&quot;python3&quot;,&quot;./main.py&quot;] </code></pre> <p>I have two questions:</p> <ol> <li><p>Is logs the only way to see the output of the running script? Can't we see its output continuously as it runs on the terminal?</p> </li> <li><p>Also, I m not able to run the same main.py file by going into the container. It throws below error:</p> </li> </ol> <pre><code> Exception in thread Thread-1: Traceback (most recent call last): File &quot;/usr/local/lib/python3.9/threading.py&quot;, line 954, in _bootstrap_inner self.run() File &quot;/usr/local/lib/python3.9/threading.py&quot;, line 892, in run self._target(*self._args, **self._kwargs) File &quot;/usr/local/lib/python3.9/site-packages/flask/app.py&quot;, line 920, in run run_simple(t.cast(str, host), port, self, **options) File &quot;/usr/local/lib/python3.9/site-packages/werkzeug/serving.py&quot;, line 1008, in run_simple inner() File &quot;/usr/local/lib/python3.9/site-packages/werkzeug/serving.py&quot;, line 948, in inner srv = make_server( File &quot;/usr/local/lib/python3.9/site-packages/werkzeug/serving.py&quot;, line 780, in make_server return ThreadedWSGIServer( File &quot;/usr/local/lib/python3.9/site-packages/werkzeug/serving.py&quot;, line 686, in __init__ super().__init__(server_address, handler) # type: ignore File &quot;/usr/local/lib/python3.9/socketserver.py&quot;, line 452, in __init__ self.server_bind() File &quot;/usr/local/lib/python3.9/http/server.py&quot;, line 138, in server_bind socketserver.TCPServer.server_bind(self) File &quot;/usr/local/lib/python3.9/socketserver.py&quot;, line 466, in server_bind self.socket.bind(self.server_address) OSError: [Errno 98] Address already in use (edited) </code></pre> <p>How do I stop the <code>main.py</code> script which starts along with the pod and be able to run the main.py from the container itself directly?</p> <p>Thank you.</p>
Naga sai Kiran
<p>The error message says it all:</p> <pre><code>OSError: [Errno 98] Address already in use (edited)
</code></pre> <p>It looks like your Python script tries to bind a port that is already in use: the copy of <code>main.py</code> started by the container's CMD is still running when you exec into the container and start it again, so the second Flask instance cannot bind the same port. If you want to run it manually, stop the already-running instance first or bind to a different port.</p> <p>Now answering your other question:</p> <blockquote> <p>Is logs the only way to see the output of the running script? Can't we see its output continuously as it runs on the terminal?</p> </blockquote> <p>Running <code>kubectl logs -f</code> will follow the logs, which should let you see the output continuously as it runs, right in your terminal.</p>
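<p>For example (pod and container names are placeholders):</p> <pre class="lang-sh prettyprint-override"><code># follow the combined stdout/stderr of the pod's main container
kubectl logs -f &lt;pod-name&gt;

# if the pod runs more than one container, name it explicitly
kubectl logs -f &lt;pod-name&gt; -c &lt;container-name&gt;
</code></pre>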
Matt
<p>In kuberntes OPA gatekeeper, I need to determine if there is <code>volumeName</code> defined in PVC object, like below code:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;apiVersion&quot;: &quot;v1&quot;, &quot;kind&quot;: &quot;PersistentVolumeClaim&quot;, &quot;metadata&quot;: { &quot;annotations&quot;: {}, &quot;name&quot;: &quot;pvc-test-mxh&quot;, &quot;namespace&quot;: &quot;default&quot; }, &quot;spec&quot;: { &quot;accessModes&quot;: [ &quot;ReadWriteOnce&quot; ], &quot;resources&quot;: { &quot;requests&quot;: { &quot;storage&quot;: &quot;5Gi&quot; } }, &quot;storageClassName&quot;: &quot;csi-disk&quot;, &quot;volumeName&quot;: &quot;mxh-test&quot; } } </code></pre> <p>here, the <code>volumeName</code> is defined and belongs to normal behavior that it's allowed in the gatekeeper policy; while <code>volumeName</code> is missing here it would match to the violation. But how to write this policy, I tried <code>input.review.object.spec.volumeName == &quot;&quot;</code> or <code>count(input.review.object.spec.volumeName) &lt;= 0</code>, seems like it didn't work, anyone could help?</p>
Marco Mei
<p>You'd normally use the <code>not</code> keyword for that purpose:</p> <pre><code>not input.review.object.spec.volumeName </code></pre> <p><code>not</code> will evaluate to true if <code>input.review.object.spec.volumeName</code> is undefined, and correspondingly to undefined if <code>input.review.object.spec.volumeName</code> is set.</p>
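<p>In the context of a Gatekeeper constraint template, a rough sketch of the rule could look like this (the message text and the surrounding template wiring are assumptions; the <code>not</code> expression is the essential part):</p> <pre><code>violation[{&quot;msg&quot;: msg}] {
  input.review.object.kind == &quot;PersistentVolumeClaim&quot;
  not input.review.object.spec.volumeName
  msg := &quot;PersistentVolumeClaim must set spec.volumeName&quot;
}
</code></pre>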
Devoops
<p>I have a kube-prometheus deployed to multiple environments using kustomize.</p> <p>kube-prometheus is a base and each environment is an overlay. Let's say I want to deploy dashboards to overlays, which means I need to deploy the same ConfigMaps and the same patch to each overlay.</p> <p>Ideally, I want to avoid changing the base as it is declared outside of my repo and to keep things DRY and not to copy the same configs all over the place.</p> <p>Is there a way to achieve this?</p> <p>Folder structure:</p> <pre><code>/base/ /kube-prometheus/ /overlays/ /qa/ &lt;--- /dev/ &lt;--- I want to share resources+patches between those /staging/ &lt;--- </code></pre>
RedReaperKun
<p>The proper way to do this is using <strong>components</strong>.</p> <p>Components can encapsulate both resources and patches together. In my case, I wanted to add ConfigMaps (resource) and mount this ConfigMaps to my Deployment (patch) without repeating the patches.</p> <p>So my overlay would look like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - ../../base/kube-prometheus/ # Base components: - ../../components/grafana-aws-dashboards/ # Folder with kustomization.yaml that includes both resources and patches </code></pre> <p>And this is the component:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1alpha1 kind: Component resources: - grafana-dashboard-aws-apigateway.yaml - grafana-dashboard-aws-auto-scaling.yaml - grafana-dashboard-aws-ec2-jwillis.yaml - grafana-dashboard-aws-ec2.yaml - grafana-dashboard-aws-ecs.yaml - grafana-dashboard-aws-elasticache-redis.yaml - grafana-dashboard-aws-elb-application-load-balancer.yaml - grafana-dashboard-aws-elb-classic-load-balancer.yaml - grafana-dashboard-aws-lambda.yaml - grafana-dashboard-aws-rds-os-metrics.yaml - grafana-dashboard-aws-rds.yaml - grafana-dashboard-aws-s3.yaml - grafana-dashboard-aws-storagegateway.yaml patchesStrategicMerge: - grafana-mount-aws-dashboards.yaml </code></pre> <p>This approach is documented here:<br /> <a href="https://kubectl.docs.kubernetes.io/guides/config_management/components/" rel="noreferrer">https://kubectl.docs.kubernetes.io/guides/config_management/components/</a></p>
RedReaperKun
<p>I tried with this <code>Gateway</code>, and <code>VirtualService</code>, didn't work.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: stomp spec: selector: istio: ingressgateway servers: - port: number: 80 name: stomp protocol: TCP hosts: - rmq-stomp.mycompany.com </code></pre> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: rmq-stomp spec: hosts: - rmq-stomp.mycompany.com gateways: - stomp http: - match: - uri: prefix: / route: - destination: port: number: 61613 host: rabbitmq.default.svc.cluster.local </code></pre> <p>There's no problem with the service, because when I tried to connect from other pod, it's connected.</p>
Fauzan
<p>Use <code>tcp.match</code>, not <code>http.match</code>. Here is the example I have found in <a href="https://istio.io/latest/docs/reference/config/networking/gateway/" rel="nofollow noreferrer">istio gateway docs</a> and in <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#TCPRoute" rel="nofollow noreferrer">istio virtualservice dosc</a></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo-mongo namespace: bookinfo-namespace spec: hosts: - mongosvr.prod.svc.cluster.local # name of internal Mongo service gateways: - some-config-namespace/my-gateway # can omit the namespace if gateway is in same namespace as virtual service. tcp: - match: - port: 27017 route: - destination: host: mongo.prod.svc.cluster.local port: number: 5555 </code></pre> <p>So your would look sth like:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: rmq-stomp spec: hosts: - rmq-stomp.mycompany.com gateways: - stomp tcp: - match: - port: 80 route: - destination: host: rabbitmq.default.svc.cluster.local port: number: 61613 </code></pre> <hr /> <p>Here is a similar question answered: <a href="https://stackoverflow.com/questions/54492068/how-to-configure-istios-virtualservice-for-a-service-which-exposes-multiple-por">how-to-configure-istios-virtualservice-for-a-service-which-exposes-multiple-por</a></p>
Matt
<p>I have been trying to find how to do this but so far have found nothing, I am quite new to Kubernetes so I might just have looked over it. I want to use my own certificate for the Kubernetes API server, is this possible? And if so, can someone perhaps give me a link?</p>
Thijs van der Heijden
<p>Ok, so here is my idea. We know we cannot change cluster certs, but there is other way to do it. We should be able to proxy through ingress.</p> <p>First we enabled ingres addon:</p> <pre><code>➜ ~ minikube addons enable ingress </code></pre> <p>Given <code>tls.crt</code> and <code>tls.key</code> we create a secret (you don't need to do this if you are using certmanager but this requires some additinal steps I am not going to describe here):</p> <pre><code>➜ ~ kubectl create secret tls my-tls --cert=tls.crt --key tls.key </code></pre> <p>and an ingress object:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-k8s annotations: nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot; spec: tls: - hosts: - foo.bar.com secretName: my-tls rules: - host: foo.bar.com http: paths: - path: / pathType: Prefix backend: service: name: kubernetes port: number: 443 </code></pre> <p>Notice what docs say about CN and FQDN: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">k8s docs</a>:</p> <blockquote> <p>Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. <strong>You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com</strong>.</p> </blockquote> <p>The only issue with this approach is that we cannot use certificates for authentication when accessing from the outside.</p> <p>But we can use tokens. Here is a page in k8s docs: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a> that lists all possible methods of authentication.</p> <p>For testing I choose serviceaccout token but feel free to experiment with others.</p> <p>Let's create a service account, bind a role to it, and try to access the cluster:</p> <pre><code>➜ ~ kubectl create sa cadmin serviceaccount/cadmin created ➜ ~ kubectl create clusterrolebinding --clusterrole cluster-admin --serviceaccount default:cadmin cadminbinding clusterrolebinding.rbac.authorization.k8s.io/cadminbinding created </code></pre> <p>Now we follow these instructions: <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">access-cluster-api</a> from docs to try to access the cluster with sa token.</p> <pre><code>➜ ~ APISERVER=https://$(minikube ip) ➜ ~ TOKEN=$(kubectl get secret $(kubectl get serviceaccount cadmin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode ) ➜ ~ curl $APISERVER/api --header &quot;Authorization: Bearer $TOKEN&quot; --insecure -H &quot;Host: foo.bar.com&quot; { &quot;kind&quot;: &quot;APIVersions&quot;, &quot;versions&quot;: [ &quot;v1&quot; ], &quot;serverAddressByClientCIDRs&quot;: [ { &quot;clientCIDR&quot;: &quot;0.0.0.0/0&quot;, &quot;serverAddress&quot;: &quot;192.168.39.210:8443&quot; } ] } </code></pre> <blockquote> <p>note: I am testing it with invalid/selfsigned certificates and I don't own the foo.bar.com domain so I need to pass Host header by hand. For you it may look a bit different, so don't just copypate; try to understand what's happening and adjust it. If you have a domain you should be able to access it directly (no <code>$(minikube ip)</code> necessary).</p> </blockquote> <p>As you should see, it worked! 
We got a valid response from api server.</p> <p>But we probably don't want to use curl to access k8s.</p> <p>Let's create a kubeconfig with the token.</p> <pre><code>kubectl config set-credentials cadmin --token $TOKEN --kubeconfig my-config kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com kubectl config set-context mini --kubeconfig my-config --cluster mini --user cadmin kubectl config use-context --kubeconfig my-config mini </code></pre> <p>And now we can access k8s with this config:</p> <pre><code>➜ ~ kubectl get po --kubeconfig my-config No resources found in default namespace. </code></pre>
Matt
<p>I am trying to run mysql 5.7 in kubernetes and got this error <code>mysql: unknown option '--&quot;'</code></p> <p>My database.yaml looks like this</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: app-db labels: app: app-db spec: containers: - name: mysql image: mysql:5.7 ports: - name: mysql-port containerPort: 3306 env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: app-secrets key: rootPassword - name: MYSQL_USER valueFrom: secretKeyRef: name: app-secrets key: username - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: app-secrets key: password - name: MYSQL_DATABASE valueFrom: secretKeyRef: name: app-secrets key: defaultDatabase </code></pre> <p>Maybe I missed something?</p> <p>I tried to switch versions, tried to use samples from the official Kubernetes site - nothing works for me.</p> <p>The last logs with error:</p> <pre class="lang-sh prettyprint-override"><code>2020-07-19 20:51:01 100 [Note] Event Scheduler: Loaded 0 events 2020-07-19 20:51:01 100 [Note] mysqld: ready for connections. Version: '5.6.49' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server (GPL) 2020-07-19 20:51:01+00:00 [Note] [Entrypoint]: Temporary server started. Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it. 2020-07-19 20:51:04 100 [Warning] 'proxies_priv' entry '@ root@app-db' ignored in --skip-name-resolve mode. 2020-07-19 20:51:04+00:00 [Note] [Entrypoint]: Creating database app_db mysql: unknown option '--&quot;' </code></pre>
Squeez
<p>I ran into the same issue and after a lot of frustrating debugging I finally solved it.</p> <p>The problem was that I created my secrets with the <code>echo 'secret' | base64</code> command and the echo command automatically inserts a trailing newline.</p> <p>Use <code>echo -n 'secret' | base64</code> instead. ✔️</p> <p>Unfortunately this was not at all what I expected and therefore I didn't notice that there was a line break in the log output. Hopefully this can help some people who also use the echo command to encode to base64.</p>
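<p>You can see the difference directly (the trailing <code>Cg==</code> is the encoded newline):</p> <pre class="lang-sh prettyprint-override"><code>$ echo 'secret' | base64      # newline included
c2VjcmV0Cg==

$ echo -n 'secret' | base64   # what you actually want
c2VjcmV0
</code></pre>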
Tom Böttger
<p>I just added <code>kubectl wait --for=condition=ready pod -l app=appname --timeout=30s</code> as the last step of a BitBucket Pipeline to report a deployment failure if the new pod somehow produces an error.</p> <p><strong>I realize that the wait isn't really consistent. Sometimes it times out even though the new pod from the new image doesn't produce any error and turns to the ready state.</strong></p> <blockquote> <p>I try to always change deployment.yaml or push a newer image every time to test this; the result is inconsistent.</p> </blockquote> <p>BTW, I believe using <code>kubectl rollout status</code> isn't suitable, I think because it just returns after the deployment is done without waiting for the pod to be ready.</p> <p>Note that there is not much difference if I change the timeout from <code>30s</code> to <code>5m</code> since apply or rollout restart is quite instant.</p> <ul> <li>kubectl version: 1.17</li> <li>AWS EKS: latest 1.16</li> </ul>
CallMeLaNN
<p>I'm placing this answer for better visibility: as noted in the comments, this indeed solves some problems with <code>kubectl wait</code> behavior.</p> <p>I managed to replicate the issue and got some timeouts when my client version was older than the server version. You have to match your client version with the server version for <code>kubectl wait</code> to work properly.</p>
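<p>To compare the two versions quickly:</p> <pre class="lang-sh prettyprint-override"><code># prints both the client (kubectl) and the server (API server) versions
kubectl version --short
</code></pre>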
acid_fuji
<p>I'm running some I/O intensive Python code on Dask and want to increase the number of threads per worker. I've deployed a Kubernetes cluster that runs Dask distributed via <a href="https://github.com/helm/charts/blob/master/stable/dask/values.yaml" rel="nofollow noreferrer">helm</a>. I see from the <a href="https://github.com/helm/charts/blob/master/stable/dask/templates/dask-worker-deployment.yaml" rel="nofollow noreferrer">worker deployment template</a> that the number of threads for a worker is set to the number of CPUs, but I'd like to set the number of threads higher unless that's an anti-pattern. How do I do that? </p> <p>It looks like from <a href="https://stackoverflow.com/questions/49406987/how-do-we-choose-nthreads-and-nprocs-per-worker-in-dask-distributed">this similar question</a> that I can ssh to the dask scheduler and spin up workers with <code>dask-worker</code>? But ideally I'd be able to configure the worker resources via helm so that I don't have to interact with the scheduler other than submitting jobs to it via the <code>Client</code>. </p>
skeller88
<p>Kubernetes resource limits and requests should match the <code>--memory-limit</code> and <code>--nthreads</code> parameters given to the <code>dask-worker</code> command. For more information please follow links <a href="https://kubernetes.dask.org/en/latest/index.html#quickstart" rel="nofollow noreferrer">1</a> (best practices described in Dask's official documentation) and <a href="https://stackoverflow.com/questions/54417135/dask-keeps-failing-with-killed-worker-exception-while-running-tpot">2</a>.</p>
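<p>For illustration only (the scheduler address and the numbers are placeholders): the worker command line carries the thread count and memory limit you want, and the pod's resource requests/limits should mirror them. For I/O-bound work the thread count can be higher than the CPU count:</p> <pre class="lang-sh prettyprint-override"><code># e.g. a worker pod requesting 2 CPUs / 8Gi could start its worker like this
dask-worker tcp://&lt;scheduler-host&gt;:8786 --nthreads 8 --memory-limit 8GB
</code></pre>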
Mahboob
<p>I've a certificate (pfx) in my Azure KeyVault - I use that certificate as a secret (deployed via Azure DevOps using Helm). The problem I've encountered is that the certificate is somehow incorrectly read from KeyVault (I use Variable Group) - the result is that when my application starts, I get an exception that looks like: </p> <pre><code>error:23076071:PKCS12 routines:PKCS12_parse:mac verify failure </code></pre> <p>However, when I manually create a secret (by using powershell to read certificate content as base64) everything works correctly. What am I doing incorrectly ?</p>
macpak
<p>Currently, Azure Pipelines variable group integration supports mapping only secrets from the Azure key vault. Cryptographic keys and certificates are not supported. See <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&amp;tabs=yaml#secrets-management-notes" rel="nofollow noreferrer">here</a>.</p> <p>As a workaround, you can use the <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops" rel="nofollow noreferrer">Azure Key Vault task</a> in your Azure DevOps pipeline.</p> <blockquote> <p>Use this task to download secrets such as authentication keys, storage account keys, data encryption keys, .PFX files, and passwords from an Azure Key Vault instance.</p> <p>If the value fetched from the vault is a certificate (for example, a PFX file), the task variable will contain the contents of the PFX in string format.</p> </blockquote> <pre><code>- task: AzureKeyVault@1
  inputs:
    azureSubscription:
    keyVaultName:
    secretsFilter: '*'
</code></pre> <p>Before using this task, you should ensure the service principal you used in the Azure service connection has at least Get and List permissions on the vault.</p> <p>This tutorial, <a href="https://azuredevopslabs.com/labs/vstsextend/azurekeyvault/" rel="nofollow noreferrer">Using secrets from Azure Key Vault in a pipeline</a>, might also be helpful.</p>
Levi Lu-MSFT
<p>I am using Stackdriver to monitor the clusters deployed in Kubernetes in GCP. In Stackdriver monitoring overview tab, I am able to see different charts showing resource utilization vs time. I want to convert this charts to a csv file which contains the resource utilization for every second. Has anyone done this before or have an idea of how to do it?</p>
anushiya-thevapalan
<p>There isn't an "easy" way built into Stackdriver to export metrics to a .csv file.</p> <p>Probably the "easiest" way is to use this project on GitHub, which is a Google App Engine service that exports to a .csv file. It is in Alpha, and you need to install it: <a href="https://github.com/CloudMile/stackdriver-monitoring-exporter" rel="nofollow noreferrer">https://github.com/CloudMile/stackdriver-monitoring-exporter</a></p> <p>The recommended way to export is explained here: <a href="https://cloud.google.com/solutions/stackdriver-monitoring-metric-export" rel="nofollow noreferrer">https://cloud.google.com/solutions/stackdriver-monitoring-metric-export</a>. This method is geared toward archiving large amounts of metric data for later comparison, not really for getting smaller amounts into a spreadsheet.</p> <p>The recommended way requires using the Monitoring API (<a href="https://cloud.google.com/monitoring/custom-metrics/reading-metrics" rel="nofollow noreferrer">https://cloud.google.com/monitoring/custom-metrics/reading-metrics</a>), which returns JSON that you'd then have to convert to a .csv file. You could probably get curl or Postman to make the calls.</p> <p>Here's another example project on GitHub. This one sends the data to BigQuery for storage, though: <a href="https://github.com/GoogleCloudPlatform/stackdriver-metrics-export" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/stackdriver-metrics-export</a></p>
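<p>As a rough sketch of such a call with curl (the project ID, metric type and time window below are placeholders; check the current Monitoring API reference for the exact parameters your case needs):</p> <pre class="lang-sh prettyprint-override"><code>PROJECT_ID=my-gcp-project   # placeholder

curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://monitoring.googleapis.com/v3/projects/${PROJECT_ID}/timeSeries?filter=metric.type%3D%22kubernetes.io%2Fcontainer%2Fcpu%2Fcore_usage_time%22&amp;interval.startTime=2020-01-01T00:00:00Z&amp;interval.endTime=2020-01-01T01:00:00Z"
</code></pre>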
AlphaPapa
<p>I am using jsonnet to describe deployment configuration for Kubernetes.</p> <pre><code>{ apiVersion: 'apps/v1', kind: 'Deployment', metadata: { name: 'dapi-test-pod', }, spec: { selector: { matchLabels: { app: 'dapi-test-pod', }, }, template: { metadata: { labels: { app: 'dapi-test-pod', }, }, spec: { containers: [ { name: 'test-container', image: 'library/nginx', }, ] }, }, }, } </code></pre> <p>Create deployment using kubecfg:</p> <pre><code>kubecfg show k8s/deployment.jsonnet | kubectl apply -f - </code></pre> <p>Everything is going well, but is there any great way to delete deployment using <code>kubecfg</code> and <code>jsonnet</code> file.</p>
Uladzislau Kaminski
<p>I reproduced your scenario on my cluster and basically the same logic works for deleting it:</p> <pre><code>kubecfg show k8s/deployment.jsonnet | kubectl delete -f -
</code></pre> <p>This command will delete everything described in the manifest.</p> <p>Or you can just delete it using bare kubectl:</p> <pre><code>kubectl delete deployment dapi-test-pod
</code></pre>
Mark Watney
<p>I have K8S cluster with version 1.13.2, and I want to upgrade to version 1.17.x (latest 1.17).</p> <p>I looked at the official notes:<a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/</a> which states that I need to upgrade one minor at a time, meaning 1.14, then 1.15, 1.16 and only then to 1.17.</p> <p>I made all perparations (disabled swap), run everything by the docs, determined that the latest 1.14 is 1.14.10.</p> <p>When I ran:</p> <pre><code>apt-mark unhold kubeadm kubelet &amp;&amp; \ apt-get update &amp;&amp; apt-get install -y kubeadm=1.14.10-00 &amp;&amp; \ apt-mark hold kubeadm </code></pre> <p>For some reason it seems that <code>kubectl</code> v1.18 was downloaded as well.</p> <p>I continued and tried running <code>sudo kubeadm upgrade plan</code>, but it failed with the following error:</p> <pre><code>[perflight] Running pre-flight checks. [upgrade] Making sure the cluster is healthy: [upgrade/health] FATAL: [preflight] Some fatal errors occurred: [ERROR ControlPlaneNodesReady]: there are Notready control-planes in the cluster: [&lt;name of master&gt;] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` </code></pre> <p>When running <code>kubectl get nodes</code>, it says under <code>VERSION</code> that master is indeed <code>NotReady</code> and with version 1.18.0, while workers are of course v1.13.2 and <code>Ready</code> (unchanged).</p> <p>How can I fix my cluster?</p> <p>And what did I do wrong when I tried upgrading?</p>
ChikChak
<p>I reproduced your problem in my lab and what happened is that you accidentally upgraded more than you wanted. More specifically, you upgraded <code>kubelet</code> package in your master node (Control Plane). </p> <p>So here is my healthy cluster with version <code>1.13.2</code>:</p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubeadm-lab-0 Ready master 9m25s v1.13.2 kubeadm-lab-1 Ready &lt;none&gt; 6m17s v1.13.2 kubeadm-lab-2 Ready &lt;none&gt; 6m9s v1.13.2 </code></pre> <p>Now I will unhold <code>kubeadm</code> and <code>kubelet</code> as you did:</p> <pre><code>$ sudo apt-mark unhold kubeadm kubelet Canceled hold on kubeadm. Canceled hold on kubelet. </code></pre> <p>And finally I will upgrade <code>kubeadm</code> to <code>1.14.1</code>:</p> <pre><code>$ sudo apt-get install kubeadm=1.14.10-00 Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: conntrack kubelet kubernetes-cni The following NEW packages will be installed: conntrack The following packages will be upgraded: kubeadm kubelet kubernetes-cni 3 upgraded, 1 newly installed, 0 to remove and 8 not upgraded. Need to get 34.1 MB of archives. After this operation, 7,766 kB of additional disk space will be used. Do you want to continue? [Y/n] y Get:2 http://deb.debian.org/debian stretch/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-5 [32.9 kB] Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.18.0-00 [19.4 MB] Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.14.10-00 [8,155 kB] Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.7.5-00 [6,473 kB] Fetched 34.1 MB in 2s (13.6 MB/s) Selecting previously unselected package conntrack. (Reading database ... 97656 files and directories currently installed.) Preparing to unpack .../conntrack_1%3a1.4.4+snapshot20161117-5_amd64.deb ... Unpacking conntrack (1:1.4.4+snapshot20161117-5) ... Preparing to unpack .../kubelet_1.18.0-00_amd64.deb ... Unpacking kubelet (1.18.0-00) over (1.13.2-00) ... Preparing to unpack .../kubeadm_1.14.10-00_amd64.deb ... Unpacking kubeadm (1.14.10-00) over (1.13.2-00) ... Preparing to unpack .../kubernetes-cni_0.7.5-00_amd64.deb ... Unpacking kubernetes-cni (0.7.5-00) over (0.6.0-00) ... Setting up conntrack (1:1.4.4+snapshot20161117-5) ... Setting up kubernetes-cni (0.7.5-00) ... Setting up kubelet (1.18.0-00) ... Processing triggers for man-db (2.7.6.1-2) ... Setting up kubeadm (1.14.10-00) ... </code></pre> <p>As you can see in this output, <code>kubelet</code> got updated to latest version as it's a dependency for <code>kubeadm</code>. 
Now my Master Node is <code>NotReady</code> as yours: </p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubeadm-lab-0 NotReady master 7m v1.18.0 kubeadm-lab-1 Ready &lt;none&gt; 3m52s v1.13.2 kubeadm-lab-2 Ready &lt;none&gt; 3m44s v1.13.2 </code></pre> <p><strong>How to fix it?</strong> To fix this situation you have to downgrade a few packages that got upgraded mistakenly: </p> <pre><code>$ sudo apt-get install -y \ --allow-downgrades \ --allow-change-held-packages \ kubelet=1.13.2-00 \ kubeadm=1.13.2-00 \ kubectl=1.13.2-00 \ kubernetes-cni=0.6.0-00 </code></pre> <p>After running this command, wait a few moments and check your nodes:</p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubeadm-lab-0 Ready master 9m25s v1.13.2 kubeadm-lab-1 Ready &lt;none&gt; 6m17s v1.13.2 kubeadm-lab-2 Ready &lt;none&gt; 6m9s v1.13.2 </code></pre> <p><strong>How to successfully upgrade it?</strong></p> <p>You have to carefully check the impact of <code>apt-get install</code> before running it and make sure that your packages will be upgraded to the desired version. </p> <p>In my cluster I upgraded with the following command in my master node:</p> <pre><code>$ sudo apt-mark unhold kubeadm kubelet &amp;&amp; \ sudo apt-get update &amp;&amp; \ sudo apt-get install -y kubeadm=1.14.10-00 kubelet=1.14.10-00 &amp;&amp; \ sudo apt-mark hold kubeadm kubelet </code></pre> <p>My Master Node got upgraded to desired version: </p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubeadm-lab-0 Ready master 58m v1.14.10 kubeadm-lab-1 Ready &lt;none&gt; 55m v1.13.2 kubeadm-lab-2 Ready &lt;none&gt; 55m v1.13.2 </code></pre> <p>Now if you run sudo kubeadm upgrade plan we have the following output: </p> <pre><code>$ sudo kubeadm upgrade plan [preflight] Running pre-flight checks. [upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... 
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [upgrade] Fetching available versions to upgrade to [upgrade/versions] Cluster version: v1.13.12 [upgrade/versions] kubeadm version: v1.14.10 I0326 10:08:44.926849 21406 version.go:240] remote version is much newer: v1.18.0; falling back to: stable-1.14 [upgrade/versions] Latest stable version: v1.14.10 [upgrade/versions] Latest version in the v1.13 series: v1.13.12 Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': COMPONENT CURRENT AVAILABLE Kubelet 2 x v1.13.2 v1.14.10 1 x v1.14.10 v1.14.10 Upgrade to the latest stable version: COMPONENT CURRENT AVAILABLE API Server v1.13.12 v1.14.10 Controller Manager v1.13.12 v1.14.10 Scheduler v1.13.12 v1.14.10 Kube Proxy v1.13.12 v1.14.10 CoreDNS 1.2.6 1.3.1 Etcd 3.2.24 3.3.10 You can now apply the upgrade by executing the following command: kubeadm upgrade apply v1.14.10 _____________________________________________________________________ </code></pre> <p>As you can see in the message, we are required to upgrade kubelet on all nodes so I run the following command on my other 2 nodes:</p> <pre><code>$ sudo apt-mark unhold kubeadm kubelet kubernetes-cni &amp;&amp; \ sudo apt-get update &amp;&amp; \ sudo apt-get install -y kubeadm=1.14.10-00 kubelet=1.14.10-00 &amp;&amp; \ sudo apt-mark hold kubeadm kubelet kubernetes-cni </code></pre> <p>And finally I proceed with:</p> <pre><code>$ sudo kubeadm upgrade apply v1.14.10 </code></pre> <pre><code>[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.10". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. </code></pre>
Mark Watney
<p>I'm been doing the steps in this tutorial: <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip" rel="nofollow noreferrer">Create an ingress controller with a static public IP address in Azure Kubernetes Service (AKS)</a></p> <p>When I finish the tutorial, I can browse to the DNS name label for the static ip: <a href="https://demo-aks-ingress.eastus.cloudapp.azure.com" rel="nofollow noreferrer">https://demo-aks-ingress.eastus.cloudapp.azure.com</a></p> <p>What I don't get is, lets say I have a sub-domain hello.john.com. How can I configure the DNS of the sub-domain to point to <a href="https://demo-aks-ingress.eastus.cloudapp.azure.com" rel="nofollow noreferrer">https://demo-aks-ingress.eastus.cloudapp.azure.com</a> so it will work with https and letsencrypt that I setup in the AKS tutorial above?</p>
gunnarst
<p>Based on <a href="https://github.com/kubernetes/kubernetes/issues/43633#issuecomment-675437741" rel="nofollow noreferrer">this issue comment</a> on k8s github repo, it looks like it should work if you do the following:</p> <ul> <li>create a CNAME record for <code>hello.john.com</code> domain and point it to <code>demo-aks-ingress.eastus.cloudapp.azure.com</code></li> <li>add second domain to ingress (so that ingress knows how to route it)</li> <li>add second domain to certificate object (so that cert-manager can generate a valid certificate for this domain)</li> </ul> <hr /> <p>Ingress part:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: hello-world-ingress annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: letsencrypt-staging nginx.ingress.kubernetes.io/rewrite-target: /$1 nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; nginx.ingress.kubernetes.io/server-alias: &quot;hello.john.com&quot; #👈 spec: tls: - hosts: - demo-aks-ingress.eastus.cloudapp.azure.com - hello.john.com #👈 secretName: tls-secret rules: - host: demo-aks-ingress.eastus.cloudapp.azure.com http: paths: - backend: serviceName: aks-helloworld servicePort: 80 path: /hello-world-one(/|$)(.*) - backend: serviceName: ingress-demo servicePort: 80 path: /hello-world-two(/|$)(.*) - backend: serviceName: aks-helloworld servicePort: 80 path: /(.*) </code></pre> <p>Docs:</p> <ul> <li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-alias" rel="nofollow noreferrer">server alias annotation</a></li> <li><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#ingresstls-v1-networking-k8s-io" rel="nofollow noreferrer">tls.hosts field description</a></li> </ul> <hr /> <p>Certificate part:</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: Certificate metadata: name: tls-secret namespace: ingress-basic spec: secretName: tls-secret dnsNames: - demo-aks-ingress.eastus.cloudapp.azure.com - hello.john.com #👈 acme: config: - http01: ingressClass: nginx domains: - demo-aks-ingress.eastus.cloudapp.azure.com - hello.john.com #👈 issuerRef: name: letsencrypt-staging kind: ClusterIssuer </code></pre> <p>Docs:</p> <ul> <li><a href="https://github.com/jetstack/cert-manager/blob/v1.3.1/pkg/apis/certmanager/v1alpha2/types_certificate.go#L115-L117" rel="nofollow noreferrer">dnsNames field description</a></li> </ul>
Matt
<p>I am running a <strong>Kubernetes cluster</strong> on bare metal with three nodes. I have applied a couple of <strong>yaml</strong> files for different services. Now I would like to bring some order to the cluster and <strong>clean</strong> up some orphaned kube objects. To do that I need to understand the set of pods or other entities which use or refer to a certain <strong>ServiceAccount</strong>.</p> <p>For example, I can dig into the ClusterRoleBinding of, say, the admin-user and investigate it:</p> <pre><code>kubectl get clusterrolebinding admin-user
</code></pre> <p>But is there a good combination of <code>kubectl</code> options to find <strong>all the usages/references</strong> of some <strong>ServiceAccount</strong>?</p>
alex007
<p>You can list all role bindings that reference a service account with the following command: </p> <pre><code>kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="YOUR_SERVICE_ACCOUNT_NAME")]}[{.roleRef.kind},{.roleRef.name}];{end}' | tr ";" "\n"
</code></pre> <p>You just need to replace <code>YOUR_SERVICE_ACCOUNT_NAME</code> with the one you are investigating. Note that the jsonpath expression only inspects the first subject of each binding, so bindings that list the service account as a later subject will not show up.</p> <p>I tested this command on my cluster and it works. </p> <p>Let me know if this solution helped you.</p>
Mark Watney
<p><strong>Problem:</strong></p> <p>how to resolve host name of kubernetes pod?</p> <p>I have the Following requirement we are using grpc with java where we have one app where we are running out grpc server other app where we are creating grpc client and connecting to grpc server (that is running on another pod).</p> <hr /> <p>We have three kubernetes pod running where our grpc server is running.</p> <p>lets say : my-service-0, my-service-1, my-service-2</p> <p>my-service has a cluster IP as: 10.44.5.11</p> <hr /> <p>We have another three kubernetes pod running where our gprc client is running.</p> <p>lets say: my-client-0, my-client-1, my-client-2</p> <hr /> <p><strong>Without Security:</strong></p> <p>i am try to connect grpc server pod with grpc client pod and it work fine.</p> <pre><code>grpc client (POD -&gt; my-client) ----------------&gt; groc server(POD -&gt; my-service) </code></pre> <p>So without security i am giving host name as my-service and it's working fine without any problem..</p> <pre><code>ManagedChannel channel = ManagedChannelBuilder.forAddress(&quot;my-service&quot;, 50052) .usePlaintext() .build(); </code></pre> <hr /> <p><strong>With SSL Security:</strong></p> <p>if i try to connect grpc server it will throw host name not match. we have created a certificate with wild card *.default.pod.cluster.local</p> <p>it will throw the below error:</p> <pre><code>java.security.cert.CertificateException: No name matching my-service found at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:225) ~[na:na] at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:98) ~[na:na] at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) ~[na:na] Not Working Code: ManagedChannel channel = NettyChannelBuilder.forAddress(&quot;my-service&quot;, 50052) .sslContext(GrpcSslContexts.forClient().trustManager(new File(System.getenv(&quot;GRPC_CLIENT_CA_CERT_LOCATION&quot;))).build()) .build(); </code></pre> <p>but if i give the host name as like this ==&gt; <strong>10-44-5-11.default.pod.cluster.local</strong> it will work fine correctly.</p> <pre><code>Working Code ManagedChannel channel = NettyChannelBuilder.forAddress(&quot;10-44-5-11.default.pod.cluster.local&quot;, 50052) .sslContext(GrpcSslContexts.forClient().trustManager(new File(System.getenv(&quot;GRPC_CLIENT_CA_CERT_LOCATION&quot;))).build()) .build(); </code></pre> <p>Now my problem is cluster ip of pod is dynamic and it will change every time during app deploy. what is the right way to resolve this host name?</p> <p>is it possible if i give host name and it will return me the ip then i will append default.pod.cluster.local to hostname and try to connect to grpc server?</p>
Mohit Singh
<p>Addressing your pod directly is not a good solution since Kubernetes may need to move your pods around the cluster. This can occur, for example, because of a failing node.</p> <p>To allow your clients/traffic to easily find the desired containers, you can place them behind a service with a single static IP address. The service IP can be looked up through DNS.</p> <p>This is how you can connect to the service through its FQDN:</p> <pre><code>my-service.default.svc.cluster.local
</code></pre> <p>Here <code>my-service</code> is your service name, <code>default</code> is your namespace and <code>svc.cluster.local</code> is a configurable cluster domain suffix used by all cluster services.</p> <p>It's worth knowing that you can skip the <code>svc.cluster.local</code> suffix and even the namespace if the pods are in the same namespace. So you'll just refer to the service as <code>my-service</code>.</p> <p>For more, you can check the K8s <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">documentation</a> about DNS.</p>
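<p>Applied to your client code, that would mean something like the sketch below. Note this assumes the server certificate actually contains <code>my-service.default.svc.cluster.local</code> (or a matching wildcard such as <code>*.default.svc.cluster.local</code>) in its subject alternative names; otherwise hostname verification will still fail:</p> <pre><code>ManagedChannel channel = NettyChannelBuilder
    .forAddress(&quot;my-service.default.svc.cluster.local&quot;, 50052)
    .sslContext(GrpcSslContexts.forClient()
        .trustManager(new File(System.getenv(&quot;GRPC_CLIENT_CA_CERT_LOCATION&quot;)))
        .build())
    .build();
</code></pre>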
acid_fuji
<p>I have an external SQL server, (On the Internet accessible from my local system) that I am trying to call from inside the Minikube. I am unable to do that. I have tried the <a href="https://stackoverflow.com/questions/54464722/calling-an-external-service-from-within-minikube">Calling an external service from within Minikube</a></p> <p>Error that i am getting is <em>"sqlalchemy.exc.OperationalError: (pymssql.OperationalError) (20009, b'DB-Lib error message 20009, severity 9:\nUnable to connect: Adaptive Server is unavailable or does not exist "</em></p> <hr> <p>I have already created pod --> service --> Endpoints. All my Clusters are under an ingress. Please see the below code for the configuration that I have done. </p> <p>Currently, I am passing the DB HOST (1.1.1.1) as an environment variable to the POD and after this configuration, I am trying to pass the service name (sql-server) instead of DB Host Name is this correct? Moreover, I am unable to ping the IP from inside the container. </p> <p>Can anyone please help me. </p> <pre><code>apiVersion: v1 kind: Endpoints metadata: name: sql-server subsets: - addresses: - ip: 1.1.1.1 ports: - port: 1433 </code></pre> <pre><code>apiVersion: v1 kind: Service metadata: name: sql-server spec: type: ClusterIP ports: - port: 1433 targetPort: 1433 </code></pre>
Nimble Fungus
<p>I reproduced a similar scenario in my minikube system and this solution works as described. I will drive you through the setup and how to troubleshoot this issue. </p> <p>I have a linux server (hostname http-server) and I installed a http server (apache2) in on it that's serving a hello world message: </p> <pre><code>user@http-server:~$ netstat -tan | grep ::80 tcp6 0 0 :::80 :::* LISTEN user@minikube-server:~$ curl 10.128.15.209 Hello World! </code></pre> <p>Now that we confirmed that my service is accessible from the machine where I have minikube installed, lets connect to minikube VM and check if I can access this http service: </p> <pre><code>user@minikube-server:~$ minikube ssh _ _ _ _ ( ) ( ) ___ ___ (_) ___ (_)| |/') _ _ | |_ __ /' _ ` _ `\| |/' _ `\| || , &lt; ( ) ( )| '_`\ /'__`\ | ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/ (_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____) $ curl 10.128.15.209 Hello World! </code></pre> <p>Great! This is looking good. If you can't access your service here, you have to check your network, something is preventing your minikube server from communicating with your service. </p> <p>Now let's exit from this minikube ssh and create our endpoint: </p> <p>My endpoint manifest is looking like this: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Endpoints metadata: name: http-server subsets: - addresses: - ip: 10.128.15.209 ports: - port: 80 </code></pre> <pre><code>user@minikube-server:~$ kubectl apply -f http-server-endpoint.yaml endpoints/http-server configured </code></pre> <p>Let's create our service: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: http-server spec: ports: - port: 80 targetPort: 80 </code></pre> <pre><code>user@minikube-server:~$ kubectl apply -f http-server-service.yaml service/http-server created </code></pre> <p>Checking if our service exists and save it's clusterIP for letter usage: </p> <pre><code>user@minikube-server:~$$ kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-server ClusterIP 10.96.228.220 &lt;none&gt; 80/TCP 30m kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 10d </code></pre> <p>Now it's time to verify if we can access our service from a pod: </p> <pre><code>kubectl run ubuntu -it --rm=true --restart=Never --image=ubuntu bash </code></pre> <p>This command will create and open a bash session inside a ubuntu pod. </p> <p>In my case I'll install curl to be able to check if I can access my http server. You may need install mysql:</p> <pre><code>root@ubuntu:/# apt update; apt install -y curl </code></pre> <p>Checking connectivity with my service using clusterIP:</p> <pre><code>root@ubuntu:/# curl 10.128.15.209:80 Hello World! </code></pre> <p>And finally using the service name (DNS): </p> <pre><code>root@ubuntu:/# curl http-server Hello World! </code></pre> <p>Please run into all these steps and let me know if you have trouble on any and where.</p>
Mark Watney
<p><em>Background</em></p> <p>We have a server with one external IP, microk8s and k8s ingress (nginx) configured for name-based virtual hosts. The machine is intended as a playground for several devs for testing container technologies. A problem quickly emerged of route names collisions, with two users trying to set up the same route (like test or dev). One solution would be to include namespaces in the hostnames, but users would still have to cooperate (as opposed to using someone else's namespace).</p> <p><em>Question</em></p> <p>How to restrict user-generated host names (set up with Ingress config files) to include only user's own namespace in name-based virtual hosting (preferably using Nginx ingress)? It seems to be possible, because this is how our corporate Openshift routes (auto-generated hostnames that include namespaces) work: it is not possible to create a route in a namespace without having access (controlled by RBAC) to it.</p>
mirekphd
<p>In the <a href="https://docs.openshift.com/container-platform/3.9/architecture/networking/routes.html" rel="nofollow noreferrer">OpenShift 3.x docs</a> it is mentioned:</p> <blockquote> <p>If a host name is not provided as part of the route definition, then OpenShift Container Platform automatically generates one for you. The generated host name is of the form:</p> <pre><code>&lt;route-name&gt;[-&lt;namespace&gt;].&lt;suffix&gt; </code></pre> </blockquote> <p>So I guess that what you want is to do the same: generate a hostname when one is not provided.</p> <p>Unfortunately for you, this is not supported by k8s and the k8s nginx ingress as far as I know.</p> <p>What you might want to do is create a <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook" rel="nofollow noreferrer">mutating webhook</a> to mutate the object in flight when it is applied to k8s (it can e.g. generate a host field if one is not provided), or use a <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook" rel="nofollow noreferrer">validating webhook</a> to validate whether the object meets the requirements.</p> <p>Here is a tutorial you might want to check out: <a href="https://github.com/morvencao/kube-mutating-webhook-tutorial" rel="nofollow noreferrer">kube-mutating-webhook-tutorial</a></p> <p>You can also try to find a different ingress controller that supports the feature you want.</p> <p>Another solution involves removing developers' access to create ingress objects and picking only one person to be responsible for the creation and validation of ingress objects (probably a bad idea, but it is a solution).</p>
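<p>To make the webhook idea a bit more concrete, here is a minimal, untested sketch of how such a validating webhook could be registered. The names, namespace, path and CA bundle are all assumptions; the actual check (rejecting an Ingress whose host does not contain the request namespace) would live in the webhook server you still have to write:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-host-policy              # hypothetical name
webhooks:
  - name: ingress-host.example.com       # hypothetical, must be a fully qualified name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    rules:
      - apiGroups: ["networking.k8s.io", "extensions"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["ingresses"]
    clientConfig:
      service:
        namespace: webhook-system        # assumption: where the webhook server runs
        name: ingress-host-validator     # assumption: Service in front of that server
        path: /validate
      caBundle: &lt;base64-encoded CA that signed the webhook server certificate&gt;
</code></pre> <p>The AdmissionReview request the server receives contains both the Ingress object and the namespace it is being created in, so the handler only has to compare <code>spec.rules[].host</code> against that namespace and deny the request on mismatch.</p>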
Matt
<p>I am trying to implement peer discovery logic through DNS in Go, using SRV records, in the cluster. I have a headless service and statefulset pods ready, and I am able to list all SRV records by using</p> <pre><code>kubectl run -it srvlookup --image=tutum/dnsutils --rm --restart=Never -- dig SRV demoapp.default.svc.cluster.local </code></pre> <p>but the following code does not work in the cluster:</p> <pre><code>func pingdns() (url string) { log.Println("start ping demoapp.default.svc.cluster.local.") _, addrs, err := net.LookupSRV("dns-tcp", "tcp", "demoapp.default.svc.cluster.local") if err != nil { log.Println(err.Error()) return "dns wrong" } fmt.Println(addrs) return "dns done." } </code></pre> <p>error output:</p> <pre><code>lookup _dns-tcp._tcp.demoapp.default.svc.cluster.local on 10.96.0.10:53: no such host </code></pre> <p>I found an example in the Kubernetes in Action book, but it is written in Node.js. How do I do it in Go?</p> <pre><code>const dns = require('dns'); const dataFile = "/var/data/kubia.txt"; const serviceName = "kubia.default.svc.cluster.local"; const port = 8080; ... var handler = function(request, response) { if (request.method == 'POST') { ... } else { response.writeHead(200); if (request.url == '/data') { var data = fileExists(dataFile) ? fs.readFileSync(dataFile, 'utf8') : "No data posted yet"; response.end(data); } else { response.write("You've hit " + os.hostname() + "\n"); response.write("Data stored in the cluster:\n"); dns.resolveSrv(serviceName, function (err, addresses) { /* The app performs a DNS lookup to obtain SRV records. */ if (err) { response.end("Could not look up DNS SRV records: " + err); return; } var numResponses = 0; if (addresses.length == 0) { response.end("No peers discovered."); } else { addresses.forEach(function (item) { ... </code></pre>
cong
<p>Thanks Shubham! I read the Medium post and found that using gRPC to connect through the SRV record will cycle through all the IPs in a round-robin fashion. But I am still looking for a way to get all the IPs myself. </p> <p>Medium article: <a href="https://medium.com/google-cloud/loadbalancing-grpc-for-kubernetes-cluster-services-3ba9a8d8fc03" rel="nofollow noreferrer">https://medium.com/google-cloud/loadbalancing-grpc-for-kubernetes-cluster-services-3ba9a8d8fc03</a></p> <p>Gitrepo: <a href="https://github.com/jtattermusch/grpc-loadbalancing-kubernetes-examples#example-1-round-robin-loadbalancing-with-grpcs-built-in-loadbalancing-policy" rel="nofollow noreferrer">https://github.com/jtattermusch/grpc-loadbalancing-kubernetes-examples#example-1-round-robin-loadbalancing-with-grpcs-built-in-loadbalancing-policy</a></p> <pre><code>import ( "google.golang.org/grpc/balancer/roundrobin" "google.golang.org/grpc/credentials" ) conn, err := grpc.Dial("dns:///be-srv-lb.default.svc.cluster.local", grpc.WithTransportCredentials(ce), grpc.WithBalancerName(roundrobin.Name)) c := echo.NewEchoServerClient(conn) </code></pre> <p>It makes calls to the list of IPs one by one, in a round-robin fashion: </p> <pre><code>Creating channel with target greeter-server.default.svc.cluster.local:8000 Greeting: Hello you (Backend IP: 10.0.2.95) Greeting: Hello you (Backend IP: 10.0.0.74) Greeting: Hello you (Backend IP: 10.0.1.51) </code></pre> <p>I have found that my main issue is related to this problem: </p> <p><a href="https://stackoverflow.com/questions/30043248/why-golang-lookup-function-cant-provide-a-server-parameter">Why golang Lookup*** function can&#39;t provide a server parameter?</a></p>
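<p>For anyone landing here, a minimal sketch of a pure-Go lookup, assuming the headless service is <code>demoapp</code> in the <code>default</code> namespace. When both the service and proto arguments are empty strings, <code>net.LookupSRV</code> queries the name directly (no <code>_service._proto</code> prefix is built), which matches what the <code>dig SRV demoapp.default.svc.cluster.local</code> command above does; a plain <code>net.LookupIP</code> on the headless service name also returns one IP per ready pod:</p> <pre><code>package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Assumption: headless service "demoapp" in namespace "default".
	name := "demoapp.default.svc.cluster.local"

	// Empty service/proto => the name is looked up as-is,
	// avoiding the "_dns-tcp._tcp." prefix that caused "no such host".
	_, srvs, err := net.LookupSRV("", "", name)
	if err != nil {
		log.Fatal(err)
	}
	for _, srv := range srvs {
		fmt.Printf("peer: %s:%d\n", srv.Target, srv.Port)
	}

	// Alternative: an A-record lookup returns the pod IPs behind the headless service.
	ips, err := net.LookupIP(name)
	if err != nil {
		log.Fatal(err)
	}
	for _, ip := range ips {
		fmt.Println("peer IP:", ip.String())
	}
}
</code></pre> <p>If you do want the <code>_dns-tcp._tcp.</code> form to resolve, the headless Service has to define a port literally named <code>dns-tcp</code> with <code>protocol: TCP</code>, since Kubernetes only creates SRV records of that shape for named ports.</p>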
cong
<p>I want to expose the Kubernetes dashboard to multiple users who have access to my VPC. I've seen some examples using an internal load balancer with external DNS, but I just want to know if there are more suggestions.</p>
touati ahmed
<p>When you install the dashboard, the service is set as <code>ClusterIP</code>. To let users from the same VPC access it you need to change the service to <code>NodePort</code>.</p> <pre><code>$ kubectl get service kubernetes-dashboard -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard ClusterIP 10.0.184.227 &lt;none&gt; 80/TCP 15m </code></pre> <p>To change it you have to edit the service:</p> <pre><code>kubectl edit service kubernetes-dashboard -n kube-system </code></pre> <p>And change the <code>.spec.type</code> from <code>ClusterIP</code> to <code>NodePort</code>.</p> <p>Another option is to patch the service with the following command: </p> <pre><code>$ kubectl patch service -n kube-system kubernetes-dashboard --patch '{"spec": {"type": "NodePort"}}' </code></pre> <p>After you edit or patch it, your service is ready to be accessed as you need.</p> <pre><code>$ kubectl get service kubernetes-dashboard -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard NodePort 10.0.184.227 &lt;none&gt; 80:30334/TCP 18m ... </code></pre> <p>Now to connect to the dashboard you have to point your browser to <code>http://&lt;node-ip&gt;:&lt;nodePort&gt;</code></p> <pre><code>$ kubectl describe service kubernetes-dashboard -n kube-system ... NodePort: &lt;unset&gt; 30334/TCP ... </code></pre> <pre><code>$ kubectl get node -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME aks-agentpool-20139558-vmss000000 Ready agent 16m v1.15.10 10.240.0.5 &lt;none&gt; Ubuntu 16.04.6 LTS 4.15.0-1071-azure docker://3.0.10+azure ... </code></pre> <p>So based on this example it looks like: <code>http://10.240.0.5:30334</code></p> <p>And it can be accessed by anyone on the same network as your nodes.</p> <pre><code>$ curl http://10.240.0.5:30334 &lt;!doctype html&gt; &lt;html ng-app="kubernetesDashboard"&gt; &lt;head&gt; &lt;meta charset="utf-8"&gt; &lt;title ng-controller="kdTitle as $ctrl" ng-bind="$ctrl.title()"&gt;&lt;/title&gt; &lt;link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"&gt; &lt;meta name="viewport" content="width=device-width"&gt; &lt;link rel="stylesheet" href="static/vendor.93db0a0d.css"&gt; &lt;link rel="stylesheet" href="static/app.ddd3b5ec.css"&gt; &lt;/head&gt; &lt;body ng-controller="kdMain as $ctrl"&gt; &lt;!--[if lt IE 10]&gt; &lt;p class="browsehappy"&gt;You are using an &lt;strong&gt;outdated&lt;/strong&gt; browser. Please &lt;a href="http://browsehappy.com/"&gt;upgrade your browser&lt;/a&gt; to improve your experience.&lt;/p&gt; &lt;![endif]--&gt; &lt;kd-login layout="column" layout-fill ng-if="$ctrl.isLoginState()"&gt; &lt;/kd-login&gt; &lt;kd-chrome layout="column" layout-fill ng-if="!$ctrl.isLoginState()"&gt; &lt;/kd-chrome&gt; &lt;script src="static/vendor.bd425c26.js"&gt;&lt;/script&gt; &lt;script src="api/appConfig.json"&gt;&lt;/script&gt; &lt;script src="static/app.91a96542.js"&gt;&lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>To learn more about the differences between the Kubernetes service types, check the following link: </p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Publishing Services (ServiceTypes)</a></p>
Mark Watney
<p>I have a preStop hook defined in a statefulset pod resource that runs a bash script to make sure not to kill the pod until few processes finishes/cancels/errors within an application. I don't have the terminationGracePeriodSeconds defined. Now when I delete the pod, I tested that the script that is part of preStop hook is run as expected. But after adding terminationGracePeriodSeconds for 10 min, first the bash script is run as part of preStop hook successfully for couple of minutes and it is supposed to kill the pod. But the pod is hanging in TERMINATING status and it is killed only after 10 min.</p> <ol> <li>Why is the pod is hanging? Unable to find an answer for this.</li> <li>When the terminationGracePeriodSeconds was not added, the flow was working as expected by killing the pod as soon as finishing the script or within 30 sec which is the terminationGracePeriodSeconds. But when I added the grace period of 10 min or more, it is waiting until that time and then killing the pod.</li> </ol> <p>How to solve this issue. Is there a way to send SIGTERM or SIGKILL to the pod. Any ideas? Thank you in advance!</p> <p>STATEFULSET.YAML</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: labels: app: appx name: appx spec: serviceName: appx replicas: 1 updateStrategy: type: RollingUpdate selector: matchLabels: app: appx template: metadata: labels: app: appx spec: #removed some of the sensitive info terminationGracePeriodSeconds: 600 containers: - image: appx imagePullPolicy: IfNotPresent name: appx lifecycle: preStop: exec: command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;sleep 30 &amp;&amp; bash /var/tmp/runscript.sh; sleep10&quot;] </code></pre> <p>KUBECTL DESCRIBE POD</p> <pre><code> **kubectl describe pod appx** Name: appx Namespace: default Priority: 0 Node: docker-desktop/192.168.65.3 Start Time: Mon, 21 Sep 2020 07:30:55 -0500 Labels: app=appx Annotations: &lt;none&gt; Status: Running IP: x.x.x.x Controlled By: StatefulSet/appx Containers: appx: Container ID: docker://dfdgfgfgfgfgfgfg Image: appx Image ID: docker://sha256:49dfgfgfgfgfgfgfgfgfg96a6fc Port: &lt;none&gt; Host Port: &lt;none&gt; State: Running Started: Mon, 21 Sep 2020 07:30:56 -0500 Ready: True Restart Count: 0 Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: data: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; appx-token-xj6q9: Type: Secret (a volume populated by a Secret) SecretName: appx-token-fhfdlf Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m43s default-scheduler Successfully assigned default/appx to docker-desktop Normal Pulled 2m42s kubelet, docker-desktop Container image &quot;appx&quot; already present on machine Normal Created 2m42s kubelet, docker-desktop Created container appx Normal Started 2m42s kubelet, docker-desktop Started container appx </code></pre>
acm
<p>The <code>preStop</code> hook and <code>terminationGracePeriodSeconds</code> are asynchronous. It means that as soon as the kubelet sees that a Pod has been marked as terminating, the kubelet begins the local Pod shutdown process. This means that if the container doesn't terminate within the grace period, a SIGKILL signal will be sent and the container will be killed regardless of whether the commands in the preStop hook are completed.</p> <blockquote> <ol start="2"> <li>When the terminationGracePeriodSeconds was not added, the flow was working as expected by killing the pod as soon as finishing the script or within 30 sec which is the terminationGracePeriodSeconds. But when I added the grace period of 10 min or more, it is waiting until that time and then killing the pod.</li> </ol> </blockquote> <p>The <code>terminationGracePeriodSeconds</code> grace period is always applied. It just defaults to 30 seconds, as I already mentioned in the comments. So, what if <code>terminationGracePeriodSeconds</code> is less than the time to complete the preStop hook?</p> <p><strong>Then the container will terminate at the end of the <code>terminationGracePeriodSeconds</code> and the preStop hook will not finish/run.</strong></p> <p>When terminationGracePeriodSeconds is set to 600s, the preStop hook script is hanging (currently unclear whether it ever worked as it wasn’t properly tested with the default 30s terminationGracePeriodSeconds due to preemptive termination). It means that some processes are not handling SIGTERM correctly, which is currently not corrected for in the preStop hook, meaning that the container is instead waiting for the SIGKILL to be sent after the 10 min terminationGracePeriod ends.</p> <p>If you take a look <a href="https://github.com/kubernetes/kubernetes/issues/24695" rel="noreferrer">here</a> you will find out that even though the user specified a preStop hook, they needed to send SIGTERM to nginx for a graceful shutdown.</p> <p>In your case, where you have set <code>terminationGracePeriodSeconds</code> to 10 minutes, even though your preStop hook executed successfully, Kubernetes waited 10 minutes before terminating your container because that is exactly what you told it to do. The termination signal is sent by the kubelet, but it is not passed to the application inside the container. The most common reason for that is that your container runs a shell which runs the application process; the signal might be consumed/intercepted by the shell itself instead of being passed on to the child process. Also, since it is unclear what your <code>runscript.sh</code> is doing, it is difficult to make any other suggestions as to which processes are failing to handle SIGTERM.</p> <p>What can you do in this case? The options for ending sooner are:</p> <ul> <li>Decrease terminationGracePeriodSeconds</li> <li>Send a signal for a graceful shutdown by ensuring SIGTERM is handled correctly and all running processes are listening for termination. Examples of how to do this are <a href="https://medium.com/flant-com/kubernetes-graceful-shutdown-nginx-php-fpm-d5ab266963c2" rel="noreferrer">here</a>. You can see that they use the “quit” command for NGINX.</li> </ul> <p>For more information you can find great articles <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace" rel="noreferrer">here</a> and <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="noreferrer">here</a>.</p>
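<p>To illustrate the second bullet point, here is a minimal, generic sketch of a shell entrypoint that forwards SIGTERM to the real workload, so the pod can exit as soon as the work is done instead of waiting for the full grace period. <code>my-app</code> is a placeholder for whatever your container actually runs:</p> <pre><code>#!/bin/sh
# Start the real workload in the background and remember its PID.
my-app &amp;
child=$!

graceful_shutdown() {
  echo "SIGTERM received, stopping my-app"
  kill -TERM "$child" 2&gt;/dev/null
  wait "$child"   # return as soon as the app exits, not when the grace period ends
}

trap graceful_shutdown TERM INT

# Wait for the workload; the trap above interrupts this wait on SIGTERM.
wait "$child"
</code></pre> <p>Another common variant is simply <code>exec my-app</code> as the last line of the entrypoint, so the application becomes PID 1 and receives SIGTERM directly.</p>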
acid_fuji
<p>I deployed istio/bookinfo on Kubernetes, and I want to install stress on the microservice container to inject faults. However, when I use</p> <pre><code>kubectl exec -it reviews-v1-f55d74d54-kpxr2 -c reviews --username=root -- /bin/bash </code></pre> <p>to log in to the container, it shows that the user is still default, and the 'apt-get' command fails with</p> <pre><code>default@reviews-v2-6f4995984d-4752v:/$ apt-get update Reading package lists... Done E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied) </code></pre> <p>I tried to use 'su root' but I don't know the root password. Some answers I found say that I can use 'docker exec'; it works, but it is not convenient, so I want to know how to log in to the container as root using the kubectl exec command.</p>
gxxxh
<p>This is not supported.</p> <p>The source code suggests it's a TODO feature: <a href="https://github.com/kubernetes/kubectl/blob/a0af655b7abaf06983c99ad5a4dc8f02ae5eb3e5/pkg/cmd/exec/exec.go#L100" rel="noreferrer">kubernetes/kubectl/pkg/cmd/exec/exec.go</a></p> <p>The <code>--username</code> flag explained by kubectl:</p> <pre><code>➜ ~ kubectl options | grep user --user='': The name of the kubeconfig user to use --username='': Username for basic authentication to the API server </code></pre> <p>As you can probably see, none of the user flags can change the user/UID for exec.</p> <p>All flags supported by the exec command:</p> <pre><code>➜ ~ kubectl exec --help [...] Options: -c, --container='': Container name. If omitted, the first container in the pod will be chosen -f, --filename=[]: to use to exec into the resource --pod-running-timeout=1m0s: The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running -i, --stdin=false: Pass stdin to the container -t, --tty=false: Stdin is a TTY </code></pre> <p>Additionally, apt-get update is best run at build time, not at run time.</p> <p>It is good practice to keep your containers immutable. For testing purposes you should stick with docker exec because there is no other known alternative.</p> <p>Also, if you have a specific problem to solve, explain the problem, not the solution. <a href="https://xyproblem.info/" rel="noreferrer">xyproblem</a></p>
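<p>For completeness, the docker exec workaround looks roughly like this, assuming you have shell access to the node where the pod is scheduled and the node runs Docker (the container name filter is just an example):</p> <pre><code># on the node that runs the pod
docker ps | grep reviews              # find the container ID of the target container
docker exec -it -u root &lt;container-id&gt; /bin/bash
</code></pre>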
Matt
<p>I have a machine on a local network that I have exposed to the Internet via port forwarding in the router.</p> <p>Now how can I expose the serving machine running Kubernetes on Docker for Mac?</p> <p>Using minikube you can do:</p> <pre><code>minikube tunnel </code></pre> <p>But how do I do it with Docker Desktop for Mac?</p> <p>Normally the LoadBalancer is created for you if you use a cloud provider.</p>
Chris G.
<p>The command suggested by @Marko is almost correct.</p> <p>The command:</p> <pre><code>➜ ~ kubectl port-forward pod/pod-name local_port:pod_port Forwarding from 127.0.0.1:8080 -&gt; 80 Forwarding from [::1]:8080 -&gt; 80 </code></pre> <p>opens a port, but only locally (on the loopback interface/localhost). To make it accessible from the outside you need to pass <code>--address=0.0.0.0</code>. So the complete command is:</p> <pre><code>➜ ~ kubectl port-forward pod/pod-name local_port:pod_port --address=0.0.0.0 Forwarding from 0.0.0.0:local_port -&gt; pod_port </code></pre>
Matt
<p>I'm trying to deploy Postgresql on Azure Kubernetes with data persistency. So I'm using PVC. I searched lots of posts on here, most of them offered yaml files like below, but it's giving the error below;</p> <pre><code>chmod: changing permissions of '/var/lib/postgresql/data/pgdata': Operation not permitted The files belonging to this database system will be owned by user &quot;postgres&quot;. This user must also own the server process. The database cluster will be initialized with locale &quot;en_US.utf8&quot;. The default database encoding has accordingly been set to &quot;UTF8&quot;. The default text search configuration will be set to &quot;english&quot;. Data page checksums are disabled. initdb: error: could not change permissions of directory &quot;/var/lib/postgresql/data/pgdata&quot;: Operation not permitted fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... </code></pre> <p>deployment yaml file is below;</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: postgresql spec: replicas: 1 selector: matchLabels: app: postgresql template: metadata: labels: app: postgresql spec: containers: - name: postgresql image: postgres:13.2 securityContext: runAsUser: 999 imagePullPolicy: &quot;IfNotPresent&quot; ports: - containerPort: 5432 envFrom: - secretRef: name: postgresql-secret volumeMounts: - mountPath: /var/lib/postgresql/data name: postgredb-kap volumes: - name: postgredb-kap persistentVolumeClaim: claimName: postgresql-pvc </code></pre> <p>Secret yaml is below;</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: postgresql-secret type: Opaque data: POSTGRES_DB: a2V5sd4= POSTGRES_USER: cG9zdGdyZXNhZG1pbg== POSTGRES_PASSWORD: c234Rw== PGDATA: L3Za234dGF0YQ== </code></pre> <p>pvc and sc yaml files are below:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgresql-pvc labels: app: postgresql spec: storageClassName: postgresql-sc accessModes: - ReadWriteOnce resources: requests: storage: 5Gi --- allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: postgresql-sc mountOptions: - dir_mode=0777 - file_mode=0777 - uid=1000 - gid=1000 parameters: skuName: Standard_LRS provisioner: kubernetes.io/azure-file reclaimPolicy: Retain </code></pre> <p>So when I use the mountpath like &quot;<strong>- mountPath: /var/lib/postgresql/</strong>&quot;, it's working. I can reach the DB and it's good. But when I delete the pod and recreating, there is no DB! So no data persistency.</p> <p>Can you please help, what am I missing here?</p> <p>Thanks!</p>
yatta
<p>One thing you could try is to change <code>uid=1000,gid=1000</code> in the mount options to 999, since this is the uid of the postgres user in the postgres container (I didn't test this; a sketch of what that would look like is at the end of this answer).</p> <hr /> <p>Another solution that will for certain solve this issue involves init containers.</p> <p>The Postgres container needs to start as root to be able to <code>chown</code> the pgdata dir, since the volume is mounted as owned by root. After it does this, it drops root permissions and runs as the postgres user.</p> <p>But you can use an init container (running as root) to chown the volume dir so that you can run the main container as non-root.</p> <p>Here is an example:</p> <pre><code> initContainers: - name: init image: alpine command: [&quot;sh&quot;, &quot;-c&quot;, &quot;chown 999:999 /var/lib/postgresql/data&quot;] volumeMounts: - mountPath: /var/lib/postgresql/data name: postgredb-kap </code></pre>
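<p>For reference, the first (untested) suggestion would look roughly like this on the StorageClass side. 999 is the uid/gid of the postgres user in the official image, and the modes are tightened because initdb refuses a group/world-accessible data directory:</p> <pre class="lang-yaml prettyprint-override"><code>allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgresql-sc
mountOptions:
  - dir_mode=0700
  - file_mode=0600
  - uid=999
  - gid=999
parameters:
  skuName: Standard_LRS
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain
</code></pre> <p>Keep in mind that mount options are copied into the PV when it is provisioned, so an already-provisioned volume keeps the old ones; you would need a freshly provisioned PVC for this to take effect.</p>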
Matt
<p>I am trying to create ConfigMap in OpenShift by using YAML file.</p> <p>My YAML is having a list of values as below.</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: sample-map-json-fields namespace: default data: fields: - hello - world - my.test.field </code></pre> <p>I executed like below -</p> <pre><code> oc create -f filename.yaml </code></pre> <p>Getting exception like below -</p> <blockquote> <p>Error from server (BadRequest): error when creating &quot;filename.yaml&quot;: ConfigMap in version &quot;v1&quot; cannot be handled as a ConfigMap: [pos 42]: json: expect char '&quot;' but got char '['</p> </blockquote> <p>If I do the same without list content inside data, it works.</p> <p>Please help how to handle the YAML list for ConfigMap.</p>
Jet
<p>This is not working because you are providing an <code>array</code> whereas Kubernetes expects to receive a <code>string</code>.</p> <pre><code>got &quot;array&quot;, expected &quot;string&quot; </code></pre> <p>Please read more about <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#configmap-v1-core" rel="nofollow noreferrer">configMap</a> in the Kubernetes API documentation:</p> <pre><code>data - _object_ Data contains the configuration data. Each key must consist of alphanumeric characters, '-', '_' or '.'. Values with non-UTF-8 byte sequences must use the BinaryData field. The keys stored in Data must not overlap with the keys in the BinaryData field, this is enforced during validation process. </code></pre> <p>Or in the go <a href="https://godoc.org/k8s.io/api/core/v1#ConfigMap" rel="nofollow noreferrer">docs</a>:</p> <pre><code>// Data contains the configuration data. // Each key must consist of alphanumeric characters, '-', '_' or '.'. // Values with non-UTF-8 byte sequences must use the BinaryData field. // The keys stored in Data must not overlap with the keys in // the BinaryData field, this is enforced during validation process. // +optional Data map[string]string `json:&quot;data,omitempty&quot; protobuf:&quot;bytes,2,rep,name=data&quot;` </code></pre>
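<p>A common way to keep the list while still giving the API a string value is a YAML block scalar; the consuming application then splits the lines itself. This is a sketch based on the manifest in the question:</p> <pre class="lang-yaml prettyprint-override"><code>kind: ConfigMap
apiVersion: v1
metadata:
  name: sample-map-json-fields
  namespace: default
data:
  fields: |
    hello
    world
    my.test.field
</code></pre>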
acid_fuji
<p>I'm trying to exec a command into a pod, but I keep getting the error <code>unable to upgrade connection: Forbidden</code></p> <p>I'm trying to test my code in development by doing <code>kubectl proxy</code> which works for all other operations such as creating a deployment or deleting it, however it's not working for executing a command, I read that I need <code>pods/exec</code> so I created a service account with such role like</p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: dev-sa namespace: default --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: default name: pod-view-role rules: - apiGroups: [""] resources: ["pods"] verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: default name: pod-exec-view-role rules: - apiGroups: [""] resources: ["pods/exec"] verbs: ["get","create"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: read-pods-svc-account namespace: default subjects: - kind: ServiceAccount name: dev-sa roleRef: kind: Role name: pod-view-role apiGroup: rbac.authorization.k8s.io --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: read-pods-exec-svc-account namespace: default subjects: - kind: ServiceAccount name: dev-sa roleRef: kind: Role name: pod-exec-view-role apiGroup: rbac.authorization.k8s.io </code></pre> <p>then I retrieve the bearer token for the service account and try to use it in my code </p> <pre><code>func getK8sConfig() *rest.Config { // creates the in-cluster config var config *rest.Config fmt.Println(os.Getenv("DEVELOPMENT")) if os.Getenv("DEVELOPMENT") != "" { //when doing local development, mount k8s api via `kubectl proxy` fmt.Println("DEVELOPMENT") config = &amp;rest.Config{ Host: "http://localhost:8001", TLSClientConfig: rest.TLSClientConfig{Insecure: true}, APIPath: "/", BearerToken: "eyJhbGciOiJSUzI1NiIsImtpZCI6InFETTJ6R21jMS1NRVpTOER0SnUwdVg1Q05XeDZLV2NKVTdMUnlsZWtUa28ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRldi1zYS10b2tlbi14eGxuaiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZXYtc2EiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmZDVhMzRjNy0wZTkwLTQxNTctYmY0Zi02Yjg4MzIwYWIzMDgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZXYtc2EifQ.woZ6Bmkkw-BMV-_UX0Y-S_Lkb6H9zqKZX2aNhyy7valbYIZfIzrDqJYWV9q2SwCP20jBfdsDS40nDcMnHJPE5jZHkTajAV6eAnoq4EspRqORtLGFnVV-JR-okxtvhhQpsw5MdZacJk36ED6Hg8If5uTOF7VF5r70dP7WYBMFiZ3HSlJBnbu7QoTKFmbJ1MafsTQ2RBA37IJPkqi3OHvPadTux6UdMI8LlY7bLkZkaryYR36kwIzSqsYgsnefmm4eZkZzpCeyS9scm9lPjeyQTyCAhftlxfw8m_fsV0EDhmybZCjgJi4R49leJYkHdpnCSkubj87kJAbGMwvLhMhFFQ", } } else { var err error config, err = rest.InClusterConfig() if err != nil { panic(err.Error()) } } return config } </code></pre> <p>Then I try to run the <a href="http://people.redhat.com/jrivera/openshift-docs_preview/openshift-online/glusterfs-review/go_client/executing_remote_processes.html" rel="nofollow noreferrer">OpenShift example</a> to exec into a pod </p> <pre><code> // Determine the Namespace referenced by the current context in the // kubeconfig file. namespace := "default" // Get a rest.Config from the kubeconfig file. This will be passed into all // the client objects we create. restconfig := getK8sConfig() // Create a Kubernetes core/v1 client. 
coreclient, err := corev1client.NewForConfig(restconfig) if err != nil { panic(err) } // Create a busybox Pod. By running `cat`, the Pod will sit and do nothing. var zero int64 pod, err := coreclient.Pods(namespace).Create(&amp;corev1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: "busybox", }, Spec: corev1.PodSpec{ Containers: []corev1.Container{ { Name: "busybox", Image: "busybox", Command: []string{"cat"}, Stdin: true, }, }, TerminationGracePeriodSeconds: &amp;zero, }, }) if err != nil { panic(err) } // Delete the Pod before we exit. defer coreclient.Pods(namespace).Delete(pod.Name, &amp;metav1.DeleteOptions{}) // Wait for the Pod to indicate Ready == True. watcher, err := coreclient.Pods(namespace).Watch( metav1.SingleObject(pod.ObjectMeta), ) if err != nil { panic(err) } for event := range watcher.ResultChan() { switch event.Type { case watch.Modified: pod = event.Object.(*corev1.Pod) // If the Pod contains a status condition Ready == True, stop // watching. for _, cond := range pod.Status.Conditions { if cond.Type == corev1.PodReady &amp;&amp; cond.Status == corev1.ConditionTrue { watcher.Stop() } } default: panic("unexpected event type " + event.Type) } } // Prepare the API URL used to execute another process within the Pod. In // this case, we'll run a remote shell. req := coreclient.RESTClient(). Post(). Namespace(pod.Namespace). Resource("pods"). Name(pod.Name). SubResource("exec"). VersionedParams(&amp;corev1.PodExecOptions{ Container: pod.Spec.Containers[0].Name, Command: []string{"date"}, Stdin: true, Stdout: true, Stderr: true, TTY: true, }, scheme.ParameterCodec) exec, err := remotecommand.NewSPDYExecutor(restconfig, "POST", req.URL()) if err != nil { panic(err) } // Connect this process' std{in,out,err} to the remote shell process. err = exec.Stream(remotecommand.StreamOptions{ Stdin: os.Stdin, Stdout: os.Stdout, Stderr: os.Stderr, Tty: true, }) if err != nil { panic(err) } fmt.Println("done") </code></pre> <p>so it seems like the bearer token is getting ignored and isntead I'm getting the privileges of the kubectl admin.</p> <p>How can I force the rest client to use the provided bearer token? Is this the right way to exec a command into a pod?</p>
perrohunter
<p>You are getting the <code>privileges of the kubectl admin</code> because you are connecting through the <code>localhost</code> endpoint exposed by <code>kubectl proxy</code>. This already authorizes you with your admin credentials. </p> <p>I have replicated this and I have come up with this solution: </p> <p>What you want to do is connect directly to the API server. To retrieve the API server address use this command: </p> <pre><code>$ kubectl cluster-info </code></pre> <p>Then replace that <code>localhost</code> address with the <code>APIserverIP</code> address:</p> <pre><code>... config = &amp;rest.Config{ Host: "&lt;APIserverIP:port&gt;", TLSClientConfig: rest.TLSClientConfig{Insecure: true}, ... </code></pre> <p>Your code is creating a pod, so you also need to add <code>create</code> and <code>delete</code> permissions to your <code>Service Account</code>:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: default name: pod-view-role rules: - apiGroups: [""] resources: ["pods"] verbs: ["create", "delete", "get", "list", "watch"] </code></pre> <p>Let me know if that was helpful.</p>
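<p>As a quick sanity check of the RBAC side (independent of the Go code), you can ask the API server whether the service account is allowed to use the exec subresource; the namespace and account name below match the manifests from the question:</p> <pre><code>kubectl auth can-i create pods --subresource=exec \
  --as=system:serviceaccount:default:dev-sa -n default
</code></pre> <p>If this prints <code>no</code>, the exec request will be rejected no matter how the bearer token is passed.</p>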
acid_fuji
<p>I wanted to create a SQL Server database in a Kubernetes pod using a SQL script file. I have the SQL script which creates the database and inserts the master data. As I'm new to Kubernetes, I'm struggling to run the SQL script in a pod. I know the SQL script can be executed manually with a separate kubectl exec command, but I wanted it to be executed automatically in the pod deployment yml file itself. </p> <p>Is there a way to mount the script file into the pod's volume and run it after starting the container?</p>
Dhaya
<p>You could use kubernetes <code>hooks</code> for that case. There are two of them: <code>PostStart</code> and <code>PreStop</code>.</p> <p><code>PostStart</code> executes immediately after a container is created. <code>PreStop</code>, on the other hand, is called immediately before a container is terminated. </p> <p>You have two types of hook handlers that can be implemented: <code>Exec</code> or <code>HTTP</code>. </p> <p><code>Exec</code> - Executes a specific command, such as pre-stop.sh, inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container. <code>HTTP</code> - Executes an HTTP request against a specific endpoint on the Container.</p> <p><code>PostStart</code> is the one to go with here; however, please note that the hook runs in parallel with the main process. It does not wait for the main process to start up fully. Until the hook completes, the container will stay in the waiting state. </p> <p>You could use a little workaround for that and add a <code>sleep</code> command to your script in order to have it wait a bit for your main container creation. Your script file can be stored in the container image or mounted to a volume shared with the pod using a <code>ConfigMap</code>. Here are some examples of how to do that: </p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: namespace: &lt;your-namespace&gt; name: poststarthook data: poststart.sh: | #!/bin/bash echo "It's done" </code></pre> <p>Make sure your script does not exceed the <code>1MB</code> limit for <code>ConfigMap</code>.</p> <p>After you define the <code>configMap</code> you will have to mount it using <code>volumes</code>: </p> <pre><code>spec: containers: - image: &lt;your-image&gt; name: example-container volumeMounts: - mountPath: /opt/poststart.sh subPath: poststart.sh name: hookvolume volumes: - name: hookvolume configMap: name: poststarthook defaultMode: 0755 #please remember to add proper (executable) permissions </code></pre> <p>And then you can define <code>postStart</code> in your spec: </p> <pre><code>spec: containers: - name: example-container image: &lt;your-image&gt; lifecycle: postStart: exec: command: ["/bin/sh", "-c", "/opt/poststart.sh"] </code></pre> <p>You can read more about hooks in the kubernetes <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="noreferrer">documentation</a> and in this <a href="https://medium.com/@pvishvesh/housekeeping-task-post-pod-formation-and-before-a-pod-dies-ce8ec2b6423f" rel="noreferrer">article</a>. Let me know if that was helpful. </p>
acid_fuji
<p>I have build a docker image containing <code>tshark</code> (its an image I am going to use for doing various manual debugging from a kubernetes pod).</p> <p>I have deployed a container in kubernetes running that image. But when I access the container and try to run <code>tshark</code> I get:</p> <pre><code>$ kubectl exec myapp-cbd49f587-w2swx -it bash root@myapp-cbd49f587-w2swx:/# tshark -ni any -f "test.host" -w sample.pcap -F libpcap Running as user "root" and group "root". This could be dangerous. Capturing on 'any' tshark: cap_set_proc() fail return: Operation not permitted </code></pre> <p>Googling that error:</p> <p><a href="https://www.weave.works/blog/container-capabilities-kubernetes/" rel="noreferrer">https://www.weave.works/blog/container-capabilities-kubernetes/</a> <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/policy/container-capabilities/" rel="noreferrer">https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/policy/container-capabilities/</a></p> <p>it seems I need to configure a <code>securityContext</code> for my container/pod. In my deployment.yaml I have added:</p> <pre><code> containers: ... securityContext: capabilities: add: - NET_ADMIN </code></pre> <p>But when I apply that deployment I get:</p> <pre><code>error: error validating "deployment.yaml": error validating data: ValidationError(Deployment.spec.template.spec.securityContext): unknown field "capabilities" in io.k8s.api.core.v1.PodSecurityContext; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>Adding <code>--validate=false</code> removes the error but also means the securityContext is ignored.</p> <p>What is preventing me from setting:</p> <pre><code> securityContext: capabilities: add: - NET_ADMIN </code></pre> <p>Based on the guides I have found this should be fine.</p> <p>I have also looked at (looks to be non free):</p> <p><a href="https://sysdig.com/blog/tracing-in-kubernetes-kubectl-capture-plugin/" rel="noreferrer">https://sysdig.com/blog/tracing-in-kubernetes-kubectl-capture-plugin/</a></p> <p>so probably the right way is to use some tool like that (<a href="https://github.com/eldadru/ksniff" rel="noreferrer">ksniff</a>) or setup a <a href="https://developers.redhat.com/blog/2019/02/27/sidecars-analyze-debug-network-traffic-kubernetes-pod/" rel="noreferrer">sidecar container</a>. But I am still curious to why I get the above error.</p>
u123
<p>Looking specifically at the error: you posted only part of your manifest, and from it we can see that you put <code>securityContext:</code> at the same level as <code>containers:</code>:</p> <pre><code> containers: ... securityContext: capabilities: add: - NET_ADMIN </code></pre> <p>It should be under <code>containers:</code>, as written in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer">documentation</a>:</p> <blockquote> <p>To add or remove Linux capabilities for a Container, include the <code>capabilities</code> field in the <code>securityContext</code> section of the Container manifest.</p> </blockquote> <p>Example:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: security-context-demo spec: replicas: 2 selector: matchLabels: app: security-context-demo template: metadata: labels: app: security-context-demo spec: containers: - name: sec-ctx-4 image: gcr.io/google-samples/node-hello:1.0 securityContext: capabilities: add: - NET_ADMIN </code></pre>
Mark Watney
<p>Is there any way we can correctly estimate how many resources (requests/limits) we need to set for deployments running on Kubernetes clusters?</p>
akhil11
<p>Yes, you can guess that a single-threaded application most likely won't need more than 1 CPU.</p> <p>For any other program: no, there is no easy way to guess it. Every application is different, and reacts differently under different workloads.</p> <p>The easiest way to figure out how many resources it needs is to run it and measure it.</p> <p>Run some benchmarks/profilers and see how the application behaves. Then make decisions based on that.</p>
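<p>A practical loop, assuming the metrics-server addon is installed in the cluster, is to watch actual usage under a representative load and then set requests/limits a bit above the observed values (the numbers below are placeholders, not recommendations):</p> <pre><code># observe real usage while the app is under load (requires metrics-server)
kubectl top pod -n my-namespace
kubectl top node
</code></pre> <pre class="lang-yaml prettyprint-override"><code>resources:
  requests:
    cpu: 250m        # a bit above typical observed usage
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 512Mi    # a bit above the observed peak
</code></pre>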
Matt
<p>I am trying to get ETCD metrics like the number of ETCD keys and size as well as the number of requests made to ETCD through exec (ing) into a kubernetes pod (etcdctl) and am not sure what command to use for this.</p> <p>An alternative (such as cUrl) would help as well. </p> <p>Thanks for the help!</p>
Shrey Baid
<p>You need to extract the information from etcd and filter what you want. To illustrate, I will show you how to get the total number of keys from etcd.</p> <blockquote> <p><strong>NOTE</strong>: Tested in kubernetes 1.18.2. </p> </blockquote> <pre class="lang-sh prettyprint-override"><code># Getting the etcd pod IP and setting a local variable: ADVERTISE_URL="https://$(kubectl get pods -n kube-system -l=component=etcd -o=jsonpath='{ .items[*].status.podIP }'):2379" # Getting the etcd pod name and setting a variable ETCD_POD ETCD_POD=$(kubectl get pods -n kube-system -l=component=etcd -o=jsonpath='{ .items[*].metadata.name}') # Extracting all etcd keys/values to a file called "etcd-kv.json": kubectl exec $ETCD_POD -n kube-system -- sh -c \ "ETCDCTL_API=3 etcdctl \ --endpoints $ADVERTISE_URL \ --cacert /etc/kubernetes/pki/etcd/ca.crt \ --key /etc/kubernetes/pki/etcd/server.key \ --cert /etc/kubernetes/pki/etcd/server.crt \ get \"\" --prefix=true -w json" &gt; etcd-kv.json </code></pre> <p>Now that you have all the key/value pairs from etcd, you just need to filter them to extract the information you need. For example, to list all keys you can use the command:</p> <pre><code>for k in $(cat etcd-kv.json | jq '.kvs[].key' | cut -d '"' -f2); do echo $k | base64 --decode; echo; done </code></pre> <p>and to count the number of keys, just pipe the output to <code>wc -l</code> at the end of this command, like:</p> <pre><code>for k in $(cat etcd-kv.json | jq '.kvs[].key' | cut -d '"' -f2); do echo $k | base64 --decode; echo; done | echo "Total keys=$(wc -l)" Total keys=308 </code></pre> <p><strong>References:</strong></p> <p><a href="https://medium.com/better-programming/a-closer-look-at-etcd-the-brain-of-a-kubernetes-cluster-788c8ea759a5" rel="nofollow noreferrer">A closer look at etcd: The brain of a kubernetes cluster</a></p>
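<p>For the database size part of the question, <code>etcdctl</code> can report it directly, reusing the same variables and certificates as above. Request-level counters are exposed on etcd's Prometheus <code>/metrics</code> endpoint and are normally scraped by a monitoring stack rather than read by hand:</p> <pre class="lang-sh prettyprint-override"><code># The DB SIZE column shows the on-disk database size of the endpoint
kubectl exec $ETCD_POD -n kube-system -- sh -c \
  "ETCDCTL_API=3 etcdctl \
  --endpoints $ADVERTISE_URL \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  endpoint status -w table"
</code></pre>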
Mr.KoopaKiller
<p><strong>Description</strong>: Unable to bind a new PVC to an existing PV that already contains data from previous run (and was dynamically created using gluster storage class).</p> <ul> <li>Installed a helm release which created PVC and dynamically generated PV from GlusterStorage class.</li> <li>However due to some reason, we need to bring down the release (<code>helm del</code>) and re-install it (<code>helm install</code>). However, want to use the existing PV instead of creating a new one.</li> </ul> <p>I tried a few things: - Following the instruction here: <a href="https://github.com/kubernetes/kubernetes/issues/48609" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/48609</a>. However, that did not work for GlusterFS storage solution since after I tried the needed steps, it complained:</p> <pre><code> Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling &lt;unknown&gt; default-scheduler error while running "VolumeBinding" filter plugin for pod "opensync-wifi-controller-opensync-mqtt-broker-fbbd69676-bmqqm": pod has unbound immediate PersistentVolumeClaims Warning FailedScheduling &lt;unknown&gt; default-scheduler error while running "VolumeBinding" filter plugin for pod "opensync-wifi-controller-opensync-mqtt-broker-fbbd69676-bmqqm": pod has unbound immediate PersistentVolumeClaims Normal Scheduled &lt;unknown&gt; default-scheduler Successfully assigned connectus/opensync-wifi-controller-opensync-mqtt-broker-fbbd69676-bmqqm to rahulk8node1-virtualbox Warning FailedMount 31s (x7 over 62s) kubelet, rahulk8node1-virtualbox MountVolume.NewMounter initialization failed for volume "pvc-dc52b290-ae86-4cb3-aad0-f2c806a23114" : endpoints "glusterfs-dynamic-dc52b290-ae86-4cb3-aad0-f2c806a23114" not found Warning FailedMount 30s (x7 over 62s) kubelet, rahulk8node1-virtualbox MountVolume.NewMounter initialization failed for volume "pvc-735baedf-323b-47bc-9383-952e6bc5ce3e" : endpoints "glusterfs-dynamic-735baedf-323b-47bc-9383-952e6bc5ce3e" not found </code></pre> <p>Apparently besides the PV, we would also need to persist gluster-dynamic-endpoints and glusterfs-dynamic-service. However, these are created in the pod namespace and since the <em>namespace is removed</em> as part of <code>helm del</code>, it also deletes these endpoints and svc. </p> <p>I looked around other pages related to GlusterFS endpoint missing: <a href="https://github.com/openshift/origin/issues/6331" rel="nofollow noreferrer">https://github.com/openshift/origin/issues/6331</a> but that does not applies to the current version of Storage class. When I added <code>endpoint: "heketi-storage-endpoints"</code> to the Storage class parameters, I got the following error when creating the PVC:</p> <pre><code>Failed to provision volume with StorageClass "glusterfs-storage": invalid option "endpoint" for volume plugin kubernetes.io/glusterfs </code></pre> <p>This option was removed in 2016 - see <a href="https://github.com/gluster/gluster-kubernetes/issues/87" rel="nofollow noreferrer">https://github.com/gluster/gluster-kubernetes/issues/87</a>. </p> <p>Is there anyway to use existing PV from a new PVC? </p>
Rahul Sharma
<p>I would like to suggest a different approach.</p> <p>You can use this annotation on the PVC; it will skip deleting the resource when the release is deleted.</p> <pre><code>helm.sh/resource-policy: &quot;keep&quot; </code></pre> <p><a href="https://github.com/helm/charts/blob/master/stable/bitcoind/templates/pvc.yaml" rel="nofollow noreferrer">Here</a> is an example:</p> <pre class="lang-yaml prettyprint-override"><code>{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }} kind: PersistentVolumeClaim apiVersion: v1 metadata: name: {{ template &quot;bitcoind.fullname&quot; . }} annotations: &quot;helm.sh/resource-policy&quot;: keep labels: app: {{ template &quot;bitcoind.name&quot; . }} chart: &quot;{{ .Chart.Name }}-{{ .Chart.Version }}&quot; release: &quot;{{ .Release.Name }}&quot; heritage: &quot;{{ .Release.Service }}&quot; spec: accessModes: - {{ .Values.persistence.accessMode | quote }} resources: requests: storage: {{ .Values.persistence.size | quote }} {{- if .Values.persistence.storageClass }} {{- if (eq &quot;-&quot; .Values.persistence.storageClass) }} storageClassName: &quot;&quot; {{- else }} storageClassName: &quot;{{ .Values.persistence.storageClass }}&quot; {{- end }} {{- end }} {{- end }} </code></pre> <p>You can also use parameters as seen <a href="https://github.com/helm/charts/tree/master/stable/hoard" rel="nofollow noreferrer">here</a>, where they implemented a flag (either true or false) that you can set while you install your helm chart.</p> <pre><code>persistence.annotations.&quot;helm.sh/resource-policy&quot; </code></pre> <p>You can also include a configurable parameter to set the name of the PVC you want to reuse, as seen <a href="https://github.com/helm/charts/tree/master/stable/mysql" rel="nofollow noreferrer">here</a>.</p> <p>In this example you can set <code>persistence.existingClaim=mysql-pvc</code> during your chart install.</p> <p>So, mixing everything together, your helm install should look something like this:</p> <pre><code>helm install --namespace myapp --set persistence.existingClaim=mysql-pvc stable/myapp </code></pre>
Mark Watney
<p>I want to deploy a simple nginx app on my own kubernetes cluster. </p> <p>I used the basic nginx deployment. On the machine with the ip <code>192.168.188.10</code>. It is part of cluster of 3 raspberries. </p> <pre><code>NAME STATUS ROLES AGE VERSION master-pi4 Ready master 2d20h v1.18.2 node1-pi4 Ready &lt;none&gt; 2d19h v1.18.2 node2-pi3 Ready &lt;none&gt; 2d19h v1.18.2 $ kubectl create deployment nginx --image=nginx deployment.apps/nginx created $ kubectl create service nodeport nginx --tcp=80:80 service/nginx created $ kubectl get pods NAME READY STATUS RESTARTS AGE my-nginx-8fb6d868-6957j 1/1 Running 0 10m my-nginx-8fb6d868-8c59b 1/1 Running 0 10m nginx-f89759699-n6f79 1/1 Running 0 4m20s $ kubectl describe service nginx Name: nginx Namespace: default Labels: app=nginx Annotations: &lt;none&gt; Selector: app=nginx Type: NodePort IP: 10.98.41.205 Port: 80-80 80/TCP TargetPort: 80/TCP NodePort: 80-80 31400/TCP Endpoints: &lt;none&gt; Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>But I always get a time out </p> <pre><code>$ curl http://192.168.188.10:31400/ curl: (7) Failed to connect to 192.168.188.10 port 31400: Connection timed out </code></pre> <p>Why is the web server nginx not reachable? I tried to run it from the same machine I deployed it to? How can I make it accessible from an other machine from the network on port <code>31400</code>? </p>
A.Dumas
<p>As mentioned by @suren, you are creating a stand-alone service without any link to your deployment.</p> <p>You can solve it using the command from suren's answer, or by creating a new deployment and service using the following yaml spec:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - name: http containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: nginx-svc spec: type: NodePort selector: app: nginx ports: - protocol: TCP port: 80 targetPort: 80 </code></pre> <p>Afterwards, type <code>kubectl get svc</code> to get the NodePort to access your service.</p> <p><code>nginx-svc NodePort 10.100.136.135 &lt;none&gt; 80:31816/TCP 34s</code></p> <p>To access it, use <code>http://&lt;YOUR_NODE_IP&gt;:31816</code></p>
Mr.KoopaKiller
<p>I'd need to reach jupyter-lab from port 80 and have the k8s configuration redirect to 8888. This is a problem I have set myself to learn about k8s networking, and also get a jupyter-lab running.</p> <p>Here is the MetalLB config map. Local DNS resolves &quot;jupyter-lab.k8s.home&quot; to these ip addresses</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: namespace: metallb-system name: config data: config: | address-pools: - name: default protocol: layer2 addresses: - 10.10.10.24-10.10.10.26 </code></pre> <p>Here is my LoadBalancer pointing to the ingress controller, is this not exposing port 80 and redirecting to the target 8888 ?</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: jupyter-lab-lb namespace: default spec: type: LoadBalancer ports: - port: 80 targetPort: 8888 selector: app: jupyter-lab-ingress </code></pre> <p>This is my ingress controller, is it correctly configured the with ingress object pointing to the CIP ?</p> <pre><code>--- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: jupyter-lab-ingress annotations: # nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io: / spec: rules: - host: jupyter-lab.k8s.home http: paths: - path: / pathType: Prefix backend: service: name: jupyter-lab-cip port: number: 8888 </code></pre> <p>This is the CIP that targets my deployment of jupyer-lab</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: jupyter-lab-cip namespace: default spec: type: ClusterIP ports: - port: 8888 targetPort: 8888 selector: app: jupyter-lab </code></pre> <p>This is my deployment that is running jupyter-lab on port 8888</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: jupyter-lab-dpt namespace: default spec: replicas: 1 selector: matchLabels: app: jupyter-lab template: metadata: labels: app: jupyter-lab spec: volumes: - name: jupyter-lab-home persistentVolumeClaim: claimName: jupyter-lab-pvc containers: - name: jupyter-lab image: docker.io/jupyter/tensorflow-notebook ports: - containerPort: 8888 volumeMounts: - name: jupyter-lab-home mountPath: /var/jupyter-lab_home env: - name: &quot;JUPYTER_ENABLE_LAB&quot; value: &quot;yes&quot; </code></pre> <p>I do see jupyter-lab.k8s.home:8888, but I can't log in with the token I get from <code>kubectl logs -n default jupyter-lab-dpt-dfbd554b7-bf7fk</code></p> <p>How do I set the configuration up so that I can browse to <a href="http://jupyter-lab.k8s.home?noportnumber" rel="nofollow noreferrer">http://jupyter-lab.k8s.home?noportnumber</a></p>
Kickaha
<p>After you installed the <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml" rel="nofollow noreferrer">nginx ingress controller</a> (this is the link from your previous question) there should be a service created:</p> <pre><code># Source: ingress-nginx/templates/controller-service.yaml apiVersion: v1 kind: Service metadata: annotations: labels: helm.sh/chart: ingress-nginx-3.23.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.44.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller namespace: ingress-nginx spec: type: NodePort ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller </code></pre> <p>You can make sure it exists by running:</p> <pre><code>kubectl get svc -n ingress-nginx ingress-nginx-controller NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller NodePort 10.105.157.46 &lt;none&gt; 80:30835/TCP,443:31421/TCP 17s </code></pre> <p>Notice its type is NodePort and you want LoadBalancer. Run <code>kubectl edit svc -n ingress-nginx ingress-nginx-controller</code> and change <code>NodePort</code> to <code>LoadBalancer</code>.</p> <p>Now you should see this:</p> <pre><code>kubectl get svc -n ingress-nginx ingress-nginx-controller NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.105.157.46 &lt;pending&gt; 80:30835/TCP,443:31421/TCP 83s </code></pre> <p>If your metalLB is configured correctly there should be an IP in place of the &lt;pending&gt;. Now point your domain to this IP.</p> <p>You mentioned that: <code>Local DNS resolves &quot;jupyter-lab.k8s.home&quot; to these ip addresses</code>. Don't resolve it to all the addresses. Use the one that is assigned to the LB. Only this one.</p> <p>Your ingress looks fine, but you don't need these annotations.</p> <p>The jupyter-lab-cip service also looks good.</p> <p>I don't like the jupyter-lab-lb service. You don't need it. What you need is a load balancer, but one pointing to the ingress controller as described earlier.</p> <p>Also, I am not sure what this is:</p> <pre><code> selector: app: jupyter-lab-ingress </code></pre> <p>Your deployment doesn't have the <code>app: jupyter-lab-ingress</code> label. The nginx ingress controller also doesn't have it (unless you added it and didn't mention it). So I am not sure what the idea behind it was and what you tried to achieve. Anyway, you probably don't need it.</p> <hr /> <blockquote> <p>I do see jupyter-lab.k8s.home:8888, but I can't log in with the token I get from kubectl logs -n default jupyter-lab-dpt-dfbd554b7-bf7fk</p> </blockquote> <p>I am not sure why this works, because the configuration you provided shouldn't allow it (unless I am missing something).</p>
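<p>As a small aside, instead of <code>kubectl edit</code> the same type change can be applied non-interactively with a patch (useful if you script your setup):</p> <pre><code>kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec": {"type": "LoadBalancer"}}'
</code></pre>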
Matt
<p>I have an umbrella Helm chart containing different sub-charts (including RabbitMQ-ha). I am able to easily install this helm chart on a k8s cluster, but I want to know: is it possible to just install this helm chart (or any ready-to-use helm chart) on a multi-cluster k8s setup, or do I have to change the chart and make it compatible with a multi-cluster setup?</p> <p>I have to mention that my chart requires DNS and metrics-server.</p>
AVarf
<p>If by multi-cluster Kubernetes clusters you mean using federation (<a href="https://github.com/kubernetes-sigs/kubefed" rel="nofollow noreferrer">Federation v2</a>), which syncs resources across clusters based on user-defined policies (e.g. ensuring that Deployments created by helm exist in multiple clusters), then you use a single umbrella Helm chart just like you would when working with a single cluster.</p> <p>If by multi-cluster you mean multiple standalone/self-contained clusters, then you probably need a customized version of the same umbrella chart with cloud-provider-specific values - check how to achieve that with a helmfile feature called templating. <a href="https://itnext.io/setup-your-kubernetes-cluster-with-helmfile-809828bc0a9f" rel="nofollow noreferrer">This</a> article shows how to do that. </p>
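<p>To sketch the helmfile idea (an untested sketch; the environment names, file paths and chart location are all assumptions), the same umbrella chart can be released per cluster with different value files:</p> <pre class="lang-yaml prettyprint-override"><code># helmfile.yaml
environments:
  cluster-a:
    values:
      - envs/cluster-a.yaml
  cluster-b:
    values:
      - envs/cluster-b.yaml

releases:
  - name: my-umbrella
    namespace: default
    chart: ./charts/umbrella
    values:
      - values/common.yaml
      - values/{{ .Environment.Name }}.yaml
</code></pre> <p>Then running <code>helmfile -e cluster-a apply</code> against the kubeconfig context of each cluster deploys the variant for that cluster.</p>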
acid_fuji
<p>Kubernetes version 1.17.4</p> <p>Trying to toy around with custom scheduler priorities, I'm passing <code>--policy-config-file</code> pointing to file with following contents:</p> <pre><code> kind: Policy apiVersion: v1 predicates: - name: CheckNodeUnschedulable - name: GeneralPredicates - name: PodFitsResources - name: PodToleratesNodeTaints - name: CheckVolumeBinding - name: MaxEBSVolumeCount - name: MatchInterPodAffinity - name: NoDiskConflict - name: NoVolumeZoneConflict - name: MatchNodeSelector - name: HostName priorities: - {name: BalancedResourceAllocation, weight: 1} - {name: LeastRequestedPriority, weight: 1} - {name: ServiceSpreadingPriority, weight: 1} - {name: NodePreferAvoidPodsPriority, weight: 1} - {name: NodeAffinityPriority, weight: 1} - {name: TaintTolerationPriority, weight: 1} - {name: ImageLocalityPriority, weight: 1} - {name: SelectorSpreadPriority, weight: 1} - {name: InterPodAffinityPriority, weight: 1} </code></pre> <p>which, i believe, is the default set of predicates and policies, however kubernetes scheduler fails to start with following error:</p> <pre><code>F0417 12:35:52.291434 1 factory.go:265] error initializing the scheduling framework: plugin "NodeName" already registered as "FilterPlugin" </code></pre> <p>The <code>NodeName</code> is not mentioned anywhere in my config file. What am i doing wrong?</p>
keymone
<p>In this <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/#options" rel="nofollow noreferrer">link</a> you can see the parameter <code>--policy-config-file</code> is deprecated and its use is not recommended. </p> <p><a href="https://github.com/kubernetes/kubernetes/blob/2db6ec1db7505b092c8e50eb8f10cd3a5f9950af/pkg/scheduler/framework/plugins/default_registry.go#L115" rel="nofollow noreferrer">Here</a> you can see that <code>GeneralPredicates</code> and <code>HostName</code> register the same <code>nodename</code> plugin:</p> <p><strong>GeneralPredicates predicate:</strong></p> <pre><code>registry.RegisterPredicate(predicates.GeneralPred, ... plugins.Filter = appendToPluginSet(plugins.Filter, nodename.Name, nil) ... </code></pre> <p><strong>HostName predicate:</strong></p> <pre><code>registry.RegisterPredicate(predicates.HostNamePred, ... plugins.Filter = appendToPluginSet(plugins.Filter, nodename.Name, nil) ... </code></pre> <p>So you could try to disable one of them and see if the error persists.</p> <p>This should solve the issue for the <code>nodename</code> plugin, but if another collision is detected you can solve it in the same way.</p>
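<p>If you want to keep using the (deprecated) policy file, an untested sketch of the predicates section that avoids the duplicate registration is to rely on <code>GeneralPredicates</code> and drop the individual entries it already covers (<code>HostName</code>, <code>PodFitsResources</code>, <code>MatchNodeSelector</code>), keeping the priorities section unchanged:</p> <pre class="lang-yaml prettyprint-override"><code>kind: Policy
apiVersion: v1
predicates:
  - name: CheckNodeUnschedulable
  - name: GeneralPredicates   # already covers HostName, PodFitsResources, MatchNodeSelector
  - name: PodToleratesNodeTaints
  - name: CheckVolumeBinding
  - name: MaxEBSVolumeCount
  - name: MatchInterPodAffinity
  - name: NoDiskConflict
  - name: NoVolumeZoneConflict
</code></pre>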
Mr.KoopaKiller
<p>I have a corporate network (10.22.*.*) which hosts a Kubernetes cluster (10.225.0.1). How can I access a VM that is in the same network but outside the cluster from within a pod in the cluster?</p> <p>For example, I have a VM with IP 10.22.0.1:30000, which I need to access from a Pod in the Kubernetes cluster. I tried to create a Service like this</p> <pre><code>apiVersion: v1 kind: Service metadata: name: vm-ip spec: selector: app: vm-ip ports: - name: vm protocol: TCP port: 30000 targetPort: 30000 externalIPs: - 10.22.0.1 </code></pre> <p>But when I do "curl <a href="http://vm-ip:30000" rel="nofollow noreferrer">http://vm-ip:30000</a>" from a Pod (kubectl exec -it), it returns a "connection refused" error. But it works with "google.com". What are the ways of accessing external IPs?</p>
passwd
<p>You can create an <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#endpoints-v1-core" rel="noreferrer">endpoint</a> for that. </p> <p>Let's go through an example: </p> <p>In this example, I have an http server on my network with IP <code>10.128.15.209</code> and I want it to be accessible from my pods inside my Kubernetes Cluster. </p> <p>The first thing is to create an endpoint. This is going to let me create a service pointing to this endpoint that will redirect the traffic to my external http server. </p> <p>My endpoint manifest looks like this:</p> <pre><code>apiVersion: v1 kind: Endpoints metadata: name: http-server subsets: - addresses: - ip: 10.128.15.209 ports: - port: 80 </code></pre> <pre><code>$ kubectl apply -f http-server-endpoint.yaml endpoints/http-server configured </code></pre> <p>Let's create our service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: http-server spec: ports: - port: 80 targetPort: 80 </code></pre> <pre><code>$ kubectl apply -f http-server-service.yaml service/http-server created </code></pre> <p>Checking that our service exists and saving its clusterIP for later usage:</p> <pre><code>user@minikube-server:~$ kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-server ClusterIP 10.96.228.220 &lt;none&gt; 80/TCP 30m kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 10d </code></pre> <p>Now it's time to verify that we can access our service from a pod:</p> <pre><code>$ kubectl run ubuntu -it --rm=true --restart=Never --image=ubuntu bash </code></pre> <p>This command will create and open a bash session inside an ubuntu pod.</p> <p>In my case I'll install curl to be able to check if I can access my http server. You may need to install a different client, depending on the service you are testing:</p> <pre><code>root@ubuntu:/# apt update; apt install -y curl </code></pre> <p>Checking connectivity with my service using the clusterIP:</p> <pre><code>root@ubuntu:/# curl 10.96.228.220:80 Hello World! </code></pre> <p>And finally using the service name (DNS):</p> <pre><code>root@ubuntu:/# curl http-server Hello World! </code></pre> <p>So, in your specific case you have to create this: </p> <pre><code>apiVersion: v1 kind: Endpoints metadata: name: vm-server subsets: - addresses: - ip: 10.22.0.1 ports: - port: 30000 --- apiVersion: v1 kind: Service metadata: name: vm-server spec: ports: - port: 30000 targetPort: 30000 </code></pre>
Mark Watney
<p>That's what I do:</p> <ol> <li>Deploy a stateful set. The pod will always exit with an error to provoke a failing pod in status <code>CrashLoopBackOff</code>: <code>kubectl apply -f error.yaml</code></li> <li>Change error.yaml (<code>echo a</code> =&gt; <code>echo b</code>) and redeploy stateful set: <code>kubectl apply -f error.yaml</code></li> <li>Pod keeps the error status and will not immediately redeploy but wait until the pod is restarted after some time.</li> </ol> <p><strong>Requesting pod status:</strong></p> <pre><code>$ kubectl get pod errordemo-0 NAME READY STATUS RESTARTS AGE errordemo-0 0/1 CrashLoopBackOff 15 59m </code></pre> <p><strong>error.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: errordemo labels: app.kubernetes.io/name: errordemo spec: serviceName: errordemo replicas: 1 selector: matchLabels: app.kubernetes.io/name: errordemo template: metadata: labels: app.kubernetes.io/name: errordemo spec: containers: - name: demox image: busybox:1.28.2 command: ['sh', '-c', 'echo a; sleep 5; exit 1'] terminationGracePeriodSeconds: 1 </code></pre> <p><strong>Questions</strong></p> <p>How can I achieve an immediate redeploy even if the pod has an error status? I found out these solutions but I would like to have a single command to achieve that (In real life I'm using helm and I just want to call <code>helm upgrade</code> for my deployments):</p> <ul> <li>Kill the pod before the redeploy</li> <li>Scale down before the redeploy</li> <li>Delete the statefulset before the redeploy</li> </ul> <p>Why doesn't kubernetes redeploy the pod at once?</p> <ul> <li>In my demo example I have to wait until kubernetes tries to restart the pod after waiting some time.</li> <li>A pod with no error (e.g. <code>echo a; sleep 10000;</code>) will be restarted immediately. That's why I set <code>terminationGracePeriodSeconds: 1</code></li> <li>But in my real deployments (where I use helm) I also encountered the case that the pods are never redeployed. Unfortunately I cannot reproduce this behaviour in a simple example.</li> </ul>
Matthias M
<p>You could set <code>spec.podManagementPolicy: &quot;Parallel&quot;</code>.</p> <blockquote> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management" rel="nofollow noreferrer">Parallel pod management</a> tells the StatefulSet controller to launch or terminate all Pods in parallel, and not to wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another Pod.</p> </blockquote> <p>Remember that the default podManagementPolicy is <code>OrderedReady</code>.</p> <blockquote> <p><a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#orderedready-pod-management" rel="nofollow noreferrer">OrderedReady pod management</a> is the default for StatefulSets. It tells the StatefulSet controller to respect the ordering guarantees demonstrated above</p> </blockquote> <p>And if your application requires an ordered update, then there is nothing you can do about this.</p>
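<p>For reference, a minimal sketch of where the field goes in the StatefulSet from the question (only the added <code>podManagementPolicy</code> line matters; everything else stays as it was):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: errordemo
spec:
  serviceName: errordemo
  replicas: 1
  podManagementPolicy: "Parallel"   # launch/terminate pods without waiting for Running/Ready
  selector:
    matchLabels:
      app.kubernetes.io/name: errordemo
  # template: unchanged from the question's error.yaml
</code></pre>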
Matt
<p>I am using Kubernetes to deploy my Grafana dashboard and I am trying to use Kubernetes Secrets for saving the Grafana admin-password. Here is my YAML file for the secret:</p> <pre><code> apiVersion: v1 kind: Secret metadata: name: $APP_INSTANCE_NAME-grafana labels: app.kubernetes.io/name: $APP_INSTANCE_NAME app.kubernetes.io/component: grafana type: Opaque data: # By default, admin-user is set to `admin` admin-user: YWRtaW4= admin-password: "$GRAFANA_GENERATED_PASSWORD" </code></pre> <p>The value for GRAFANA_GENERATED_PASSWORD is base64 encoded and exported like this:</p> <pre><code>export GRAFANA_GENERATED_PASSWORD="$(echo -n $PASSWORD | base64)" </code></pre> <p>where PASSWORD is a variable which I exported on my machine like <code>export PASSWORD=qwerty123</code></p> <p>I am trying to pass the value of GRAFANA_GENERATED_PASSWORD to the secret's YAML file like this:</p> <pre><code>envsubst '$GRAFANA_GENERATED_PASSWORD' &gt; "grafana_secret.yaml" </code></pre> <p>The YAML file after passing the base64 encoded value looks like this:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: kafka-monitor-grafana labels: app.kubernetes.io/name: kafka-monitor app.kubernetes.io/component: grafana type: Opaque data: # By default, admin-user is set to `admin` admin-user: YWRtaW4= admin-password: "cXdlcnR5MTIz" </code></pre> <p>After deploying all my objects I couldn't log in to my dashboard using the password qwerty123, which is encoded properly.</p> <p>But when I encode my password like <code>export GRAFANA_GENERATED_PASSWORD="$(echo -n 'qwerty123' | base64)"</code></p> <p>it works properly and I can log in to my dashboard using the password qwerty123. It looks like the problem occurs when I encode my password using a variable, but I need to encode my password using a variable.</p>
Pratheesh
<p>As mentioned in <strong>@Pratheesh</strong>'s comment, after Grafana was deployed the first time, the persistent volume was not deleted/recreated, so the file <code>grafana.db</code> that contains the Grafana dashboard password still kept the old password.</p> <p>To solve this, the PersistentVolume (PV) needs to be deleted before applying the secret with the new password.</p>
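<p>In practice that looks roughly like the commands below (the namespace and PVC/PV names are placeholders - check the real names in your cluster first, and be aware that this deletes the stored dashboard data):</p>
<pre><code># find the Grafana PVC and the PV bound to it
kubectl get pvc -n &lt;grafana-namespace&gt;
kubectl delete pvc &lt;grafana-pvc-name&gt; -n &lt;grafana-namespace&gt;

# a PV with reclaimPolicy "Retain" is not removed automatically
kubectl get pv
kubectl delete pv &lt;released-pv-name&gt;
</code></pre>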
Mr.KoopaKiller
<p>I've kubernetes installed on Ubuntu 19.10. I've setup ingress-nginx and can access my test service using http. However, I get a "Connection refused" when I try to access via https.</p> <p>[Edit] I'm trying to get https to terminate in the ingress and pass unencrypted traffic to my service the same way http does. I've implemented the below based on many examples I've seen but with little luck.</p> <p>Yaml</p> <pre><code>kind: Service apiVersion: v1 metadata: name: messagemanager-service namespace: default labels: name: messagemanager-service spec: type: NodePort selector: app: messagemanager ports: - port: 80 protocol: TCP targetPort: 8080 nodePort: 31212 name: http externalIPs: - 192.168.0.210 --- kind: Deployment #apiVersion: extensions/v1beta1 apiVersion: apps/v1 metadata: name: messagemanager labels: app: messagemanager version: v1 spec: replicas: 3 selector: matchLabels: app: messagemanager template: metadata: labels: app: messagemanager version: v1 spec: containers: - name: messagemanager image: test/messagemanager:1.0 imagePullPolicy: IfNotPresent ports: - containerPort: 8080 protocol: TCP --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: messagemanager-ingress annotations: nginx.ingress.kubernetes.io/ssl-passthrough: false ingress.kubernetes.io/rewrite-target: / spec: tls: - secretName: tls-secret rules: - http: paths: - path: /message backend: serviceName: messagemanager-service servicePort: 8080 </code></pre> <p>https test</p> <pre><code>curl -kL https://192.168.0.210/message -verbose * Trying 192.168.0.210:443... * TCP_NODELAY set * connect to 192.168.0.210 port 443 failed: Connection refused * Failed to connect to 192.168.0.210 port 443: Connection refused * Closing connection 0 curl: (7) Failed to connect to 192.168.0.210 port 443: Connection refused </code></pre> <p>http test</p> <pre><code>curl -kL http://192.168.0.210/message -verbose * Trying 192.168.0.210:80... 
* TCP_NODELAY set * Connected to 192.168.0.210 (192.168.0.210) port 80 (#0) &gt; GET /message HTTP/1.1 &gt; Host: 192.168.0.210 &gt; User-Agent: curl/7.65.3 &gt; Accept: */* &gt; Referer: rbose &gt; * Mark bundle as not supporting multiuse &lt; HTTP/1.1 200 OK &lt; Content-Type: text/plain;charset=UTF-8 &lt; Date: Fri, 24 Apr 2020 18:44:07 GMT &lt; connection: keep-alive &lt; content-length: 50 &lt; * Connection #0 to host 192.168.0.210 left intact </code></pre> <pre><code>$ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.105.92.236 &lt;pending&gt; 80:31752/TCP,443:32035/TCP 2d ingress-nginx-controller-admission ClusterIP 10.100.223.87 &lt;none&gt; 443/TCP 2d $ kubectl get ingress -o wide NAME CLASS HOSTS ADDRESS PORTS AGE messagemanager-ingress &lt;none&gt; * 80, 443 37m </code></pre> <p>key creation</p> <pre><code>openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc" kubectl create secret tls tls-secret --key tls.key --cert tls.crt </code></pre> <pre><code>$ kubectl describe ingress Name: messagemanager-ingress Namespace: default Address: Default backend: default-http-backend:80 (&lt;error: endpoints "default-http-backend" not found&gt;) TLS: tls-secret terminates Rules: Host Path Backends ---- ---- -------- * /message messagemanager-service:8080 () Annotations: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 107s nginx-ingress-controller Ingress default/messagemanager-ingress </code></pre> <p>I was under the assumption that TLS would terminate in the ingress and the request would be passed on to the service as http. I had to add the external IPs in the service to get HTTP to work. Am I missing something similar for HTTPS?</p> <p>Any help and guidance is appreciated.</p> <p>Thanks</p> <p>Mark</p>
Schaffer
<p>I've reproduced your scenario in my lab and after a few changes in your ingress it's working as you described.</p> <p>In my lab I used an nginx image that serves a default landing page on port 80 and with this Ingress rule, it's possible to serve it on port 80 and 443.</p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: name: nginx labels: app: nginx version: v1 spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx version: v1 spec: containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent ports: - containerPort: 80 protocol: TCP --- kind: Service apiVersion: v1 metadata: name: nginx-service namespace: default labels: name: nginx-service spec: type: NodePort selector: app: nginx ports: - port: 80 protocol: TCP targetPort: 80 nodePort: 31000 name: http --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: nginx labels: app: nginx annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: tls: - secretName: tls-secret rules: - http: paths: - path: /nginx backend: serviceName: nginx-service servicePort: 80 </code></pre> <p>The only difference between my ingress and yours is that I removed <code>nginx.ingress.kubernetes.io/ssl-passthrough: false</code>. In the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#ssl-passthrough" rel="nofollow noreferrer">documentation</a> we can read:</p> <blockquote> <p>note SSL Passthrough is <strong>disabled by default</strong></p> </blockquote> <p>So there is no need for you to specify that.</p> <p>I used the same secret as you:</p> <pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj &quot;/CN=nginxsvc/O=nginxsvc&quot; $ kubectl create secret tls tls-secret --key tls.key --cert tls.crt </code></pre> <p>In your question I have the impression that you are trying to reach your ingress through the IP <code>192.168.0.210</code>. This is your service IP and not your Ingress IP.</p> <p>If you are using Cloud managed Kubernetes you have to run the following command to find your Ingress IP:</p> <pre><code>$ kubectl get ingresses nginx NAME HOSTS ADDRESS PORTS AGE nginx * 34.89.108.48 80, 443 6m32s </code></pre> <p>If you are running on Bare Metal without any LoadBalancer solution as <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>, you can see that your ingress-nginx service will be with<code> EXTERNAL-IP</code> on <code>Pending</code> forever.</p> <pre><code>$ kubectl get service -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-1587980954-controller LoadBalancer 10.110.188.236 &lt;pending&gt; 80:31024/TCP,443:30039/TCP 23s </code></pre> <p>You can do the same thing as you did with your service and add an externalIP manually:</p> <pre><code> kubectl get service -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-1587980954-controller LoadBalancer 10.110.188.236 10.156.0.24 80:31024/TCP,443:30039/TCP 9m14s </code></pre> <p>After this change, your ingress will have the same IP as you defined in your Ingress Service:</p> <pre><code>$ kubectl get ingress nginx NAME CLASS HOSTS ADDRESS PORTS AGE nginx &lt;none&gt; * 10.156.0.24 80, 443 118s </code></pre> <pre><code>$ curl -kL https://10.156.0.24/nginx --verbose * Trying 10.156.0.24... 
* TCP_NODELAY set * Connected to 10.156.0.24 (10.156.0.24) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH * successfully set certificate verify locations: * CAfile: /etc/ssl/certs/ca-certificates.crt CApath: /etc/ssl/certs * TLSv1.2 (OUT), TLS header, Certificate Status (22): * TLSv1.2 (OUT), TLS handshake, Client hello (1): * TLSv1.2 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS change cipher, Client hello (1): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS change cipher, Client hello (1): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 * ALPN, server accepted to use h2 * Server certificate: * subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate * start date: Apr 27 09:49:19 2020 GMT * expire date: Apr 27 09:49:19 2021 GMT * issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate * SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway. * Using HTTP2, server supports multi-use * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Using Stream ID: 1 (easy handle 0x560cee14fe90) &gt; GET /nginx HTTP/1.1 &gt; Host: 10.156.0.24 &gt; User-Agent: curl/7.52.1 &gt; Accept: */* &gt; * Connection state changed (MAX_CONCURRENT_STREAMS updated)! &lt; HTTP/2 200 &lt; server: nginx/1.17.10 &lt; date: Mon, 27 Apr 2020 10:01:29 GMT &lt; content-type: text/html &lt; content-length: 612 &lt; vary: Accept-Encoding &lt; last-modified: Tue, 14 Apr 2020 14:19:26 GMT &lt; etag: &quot;5e95c66e-264&quot; &lt; accept-ranges: bytes &lt; strict-transport-security: max-age=15724800; includeSubDomains &lt; &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; &lt;style&gt; body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } &lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Welcome to nginx!&lt;/h1&gt; &lt;p&gt;If you see this page, the nginx web server is successfully installed and working. Further configuration is required.&lt;/p&gt; &lt;p&gt;For online documentation and support please refer to &lt;a href=&quot;http://nginx.org/&quot;&gt;nginx.org&lt;/a&gt;.&lt;br/&gt; Commercial support is available at &lt;a href=&quot;http://nginx.com/&quot;&gt;nginx.com&lt;/a&gt;.&lt;/p&gt; &lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;/em&gt;&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; * Curl_http_done: called premature == 0 * Connection #0 to host 10.156.0.24 left intact </code></pre> <p>EDIT:</p> <blockquote> <p>There does not seem to be a way to manually set the &quot;external IPs&quot; for the ingress as can be done in the service. If you know of one please let me know :-). Looks like my best bet is to try MetalLB.</p> </blockquote> <p>MetalLB would be the best option for production. 
If you are running it only for a lab, you have the option to add your node's public IP (the same one you get by running <code>kubectl get nodes -o wide</code>) and attach it to your NGINX ingress controller.</p> <p><strong>Adding your node IP to your NGINX ingress controller</strong></p> <pre><code>spec: externalIPs: - 192.168.0.210 </code></pre> <p>Create a file called <code>ingress-nginx-svc-patch.yaml</code> and paste the contents above.</p> <p>Next, apply the changes with the following command:</p> <pre><code>kubectl patch service ingress-nginx-controller -n kube-system --patch &quot;$(cat ingress-nginx-svc-patch.yaml)&quot; </code></pre> <p>And as a result:</p> <pre><code>$ kubectl get service -n kube-system ingress-nginx-controller NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.97.0.243 192.168.0.210 80:31409/TCP,443:30341/TCP 39m </code></pre>
Mark Watney
<p>In Kubernetes, if I have pods for a specific task that is not always required, is it possible to have the system spin pods up for that task only when needed? Can this be managed through the backend code (in my case Python)? Do services need to be defined in a certain way in advance?</p> <p>I found an API for Docker management in <a href="https://docker-py.readthedocs.io/en/stable/" rel="nofollow noreferrer">Python</a>; would this work for Kubernetes?</p>
EoinHanan
<p>I don't know exactly what you want to achieve because you are not giving enough information. Here are some tools you might want to check:</p> <p><a href="http://keda.sh" rel="nofollow noreferrer">keda</a> - event-driven autoscaler, can scale down to zero</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">k8s jobs</a> - runs one-time jobs</p> <p><a href="https://knative.dev/" rel="nofollow noreferrer">knative</a> - can scale down to zero</p> <p>You can also just use the k8s Python client library to run a pod, make it do whatever you want and then delete it.</p>
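<p>To make the Jobs option a bit more concrete, here is a minimal sketch of a one-off Job (the name, image and command are placeholders; <code>ttlSecondsAfterFinished</code> needs the TTL-after-finished feature to be available in your cluster version):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 2                  # how many times to retry a failed pod
  ttlSecondsAfterFinished: 300     # clean the Job and its pod up automatically
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: task
          image: busybox
          command: ["sh", "-c", "echo doing the one-off work; sleep 5"]
</code></pre>
<p>You can create such a Job from your backend code as well (e.g. with the Python client's <code>BatchV1Api</code>), which matches the "spin something up only when needed" idea from the question.</p>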
Matt
<p>I am following the instructions from this link: <a href="https://kubecloud.io/kubernetes-dashboard-on-arm-with-rbac-61309310a640" rel="nofollow noreferrer">https://kubecloud.io/kubernetes-dashboard-on-arm-with-rbac-61309310a640</a></p> <p>and I run this command:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml </code></pre> <p>But I'm getting this output/error:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml secret/kubernetes-dashboard-certs created serviceaccount/kubernetes-dashboard created role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created service/kubernetes-dashboard created error: unable to recognize "https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml": no match es for kind "Deployment" in version "apps/v1beta2" </code></pre> <p>I'm not sure how to proceed from here? I'm trying to install the Kubernetes Dashboard for a Raspberry PI cluster.</p> <p>Here is my setup:</p> <pre><code>pi@k8s-master:/etc/kubernetes$ kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master Ready master 2d11h v1.16.2 k8s-node1 Ready worker 2d3h v1.16.2 k8s-node2 Ready worker 2d2h v1.15.2 k8s-node3 Ready worker 2d2h v1.16.2 </code></pre>
ChrisRTech
<p>The reason behind your error is that as of 1.16.0 Kubernetes stopped serving <code>apps/v1beta2</code> for Deployments. You should use <code>apps/v1</code> instead. </p> <p>Please download the file: </p> <pre><code>wget https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml </code></pre> <p>Edit the file using <code>nano</code> or <code>vi</code> and change the Deployment API version to <code>apps/v1</code>. </p> <p>Don't forget to save the file when exiting. </p> <p>Then: </p> <pre><code>kubectl apply -f [file_name] </code></pre> <p>You may find more about the release changes <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16" rel="nofollow noreferrer">here</a>.</p>
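<p>If you prefer a one-liner instead of editing by hand, something like this should work (just a convenience sketch - double-check the resulting file before applying it):</p>
<pre><code>sed -i 's#apps/v1beta2#apps/v1#g' kubernetes-dashboard-arm.yaml
kubectl apply -f kubernetes-dashboard-arm.yaml
</code></pre>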
acid_fuji
<p>I have used the proxy-body-size item as described in the documentation and recreated my ingress, but it has no effect on the ingress-controller.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: fileupload-ingress annotations: nginx.ingress.kubernetes.io/proxy-body-size: 100m nginx.org/rewrites: "serviceName=fileupload-service rewrite=/;" </code></pre> <p>Then I changed my ConfigMap to change proxy-body-size, but it still doesn't work globally.</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: nginx-config namespace: nginx-ingress data: proxy-body-size: "100m" </code></pre> <p>Here's the documentation: <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#rewrite" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#rewrite</a></p> <p>What's wrong with my ingress? <a href="https://i.stack.imgur.com/Xszq3.png" rel="nofollow noreferrer">Screenshot</a></p>
Aaron.li
<p>There are different NGINX ingress controllers and the annotations for them vary. For <code>kubernetes/ingress-nginx</code> the annotations start with <code>nginx.ingress.kubernetes.io</code>, and for <code>nginxinc/kubernetes-ingress</code> the annotations start with <code>nginx.org</code>.</p> <p>Here is also a <a href="https://grigorkh.medium.com/there-are-two-nginx-ingress-controllers-for-k8s-what-44c7b548e678" rel="nofollow noreferrer">good article</a> showing more differences between them.</p>
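<p>Your manifest mixes the two styles (the <code>nginx.org/rewrites</code> annotation and the <code>nginx-config</code> ConfigMap belong to the NGINX Inc. controller), so the community controller's body-size annotation is likely being ignored. As a sketch, the equivalent setting per controller would look something like this (the <code>nginx.org</code> key is to the best of my knowledge - verify it against your controller's documentation):</p>
<pre><code># community controller: kubernetes/ingress-nginx
nginx.ingress.kubernetes.io/proxy-body-size: "100m"

# NGINX Inc. controller: nginxinc/kubernetes-ingress
nginx.org/client-max-body-size: "100m"
</code></pre>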
acid_fuji
<p>Is there a command I can run or is there some way to identify whether a cluster has one or more ingress controllers configured?</p> <p>I'm not asking about the actual ingresses themselves (which I know can be found with <code>kubectl get ingress --all-namespaces</code>).</p>
Chris Stryczynski
<p>There is no fancy way to achieve what you need. These two commands can help you; which one to choose really depends on what you need. </p> <pre><code>$ kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .metadata}{.labels}{", "}{end}{end}' | grep ingress | grep controller </code></pre> <pre><code>$ kubectl get pods --show-labels --all-namespaces | grep ingress | grep controller </code></pre> <p>Both commands are similar but produce different outputs. </p> <p>These commands are based on this <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">documentation</a> page. </p>
Mark Watney
<p>I want to remove Kubernetes from a Debian machine (I didn't do the setup).</p> <p>I followed the instructions from <a href="https://stackoverflow.com/questions/44698283/how-to-completely-uninstall-kubernetes">How to completely uninstall kubernetes</a>: </p> <pre><code>kubeadm reset sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube* sudo apt-get autoremove sudo rm -rf ~/.kube </code></pre> <p>But it seems to still be installed:</p> <pre><code># which kubeadm /usr/local/bin/kubeadm # which kubectl /usr/local/bin/kubectl # which kubelet /usr/local/bin/kubelet </code></pre> <p>Also, <code>apt list --installed | grep kube*</code> does not return anything, so it makes me think it was not installed via <code>apt</code>.</p> <p>Do you know how to clean this machine? Should I just <code>rm /usr/local/bin/kubectl</code> etc.? I don't really like this idea.</p> <p>Thanks for the help</p>
iAmoric
<p>The method suggested by <a href="https://stackoverflow.com/users/4749074/rib47">Rib47</a> in the answer you <a href="https://stackoverflow.com/a/49253264/12153576">indicated</a> is correct to completely remove and clean Kubernetes installed with apt-get.</p> <p>As mentioned by <a href="https://stackoverflow.com/users/2757035/underscore-d" title="4,484 reputation">underscore_d</a>, <code>/usr/local/bin/</code> isn't the directory where the packages installed by apt-get are placed. </p> <p>For example, when you install kubectl using apt-get, it's placed at <code>/usr/bin/kubectl</code>, and that's what is going to be removed by <code>apt-get purge</code>.</p> <p>I tested it on my kubeadm cluster lab and I don't have these files at <code>/usr/local/bin/</code>. </p> <p>You have to revisit all the steps you followed during the install process to know exactly how these files got there. </p> <p>If you have run <code>kubeadm reset</code>, I would say it's safe to remove these files. I suggest you check whether they are in use before removing them, using the <code>fuser</code> command. This command might not be installed on your Linux system; you can install it by running <code>sudo apt-get install psmisc</code>. After installing it you can run it as in this example: </p> <pre><code> $ sudo fuser /usr/bin/kubelet /usr/bin/kubelet: 21167e </code></pre> <p>This means the file is being used by process number 21167. </p> <p>Checking this process we can see what's using it:</p> <pre><code>$ ps -aux | grep 21167 root 21167 4.1 0.5 788164 88696 ? Ssl 08:50 0:07 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 </code></pre> <p>If the Kubernetes-related files you have under <code>/usr/local/bin/</code> are not in use, I would remove them with no worries. </p>
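<p>In that case the cleanup itself is just removing the binaries by hand, for example (only after <code>kubeadm reset</code> and after confirming with <code>fuser</code> that nothing is using them):</p>
<pre><code>sudo systemctl stop kubelet 2&gt;/dev/null || true   # in case a manually installed kubelet unit is still running
sudo rm -f /usr/local/bin/kubeadm /usr/local/bin/kubectl /usr/local/bin/kubelet
</code></pre>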
Mark Watney
<p>I've deployed Prometheus to a Kubernetes cluster using the <code>prometheus-community/kube-prometheus-stack version 13.10.0</code> Helm chart, and would like to edit my <code>prometheus.yml</code> file in the <code>/etc/prometheus/</code> directory.</p> <p>The reason for this is I need to add an extra scrape config for Jenkins as I'm unable to do it dynamically via an additional service monitor.</p> <p>Is it possible to edit this file? Describing the pod, I can see the file is created by a secret.</p> <pre><code>Volumes: config: Type: Secret (a volume populated by a Secret) SecretName: prometheus-prometheus-kube-prometheus-prometheus Optional: false </code></pre> <p>But I can't find the template that creates this secret anywhere.</p>
Hammed
<p>Here is the link that seems to be the solution: <a href="https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L2108-L2146" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L2108-L2146</a></p> <hr /> <p>I decided to provide a short explanation because sometimes links stop working:</p> <p><code>prometheus-community/kube-prometheus-stack</code> has an <em>additionalScrapeConfigs</em> field in the Helm chart's values.yaml file.</p> <p>Here is its definition:</p> <pre><code>## AdditionalScrapeConfigs allows specifying additional Prometheus scrape configurations. Scrape configurations ## are appended to the configurations generated by the Prometheus Operator. Job configurations must have the form ## as specified in the official Prometheus documentation: ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config. As scrape configs are ## appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility ## to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible ## scrape configs are going to break Prometheus after the upgrade. ## ## The scrape configuration example below will find master nodes, provided they have the name .*mst.*, relabel the ## port to 2379 and allow etcd scraping provided it is running on all Kubernetes master nodes ## additionalScrapeConfigs: [] </code></pre>
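<p>A minimal sketch of how that could look for the Jenkins case from the question, passed as Helm values (the job name, the Jenkins service address and the metrics path are assumptions - <code>/prometheus</code> is the usual default of the Jenkins Prometheus plugin, so adjust everything to your setup):</p>
<pre><code>prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: jenkins
        metrics_path: /prometheus
        static_configs:
          - targets:
              - jenkins.jenkins.svc.cluster.local:8080
</code></pre>
<p>Then upgrade the release with these values, e.g. <code>helm upgrade &lt;release&gt; prometheus-community/kube-prometheus-stack -f values.yaml</code>, and the extra job gets appended to the generated <code>prometheus.yml</code> instead of you editing the file inside the pod.</p>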
Matt
<p>I've been following the following tutorial for my AKS setup: <a href="https://github.com/Azure/phippyandfriends" rel="nofollow noreferrer">https://github.com/Azure/phippyandfriends</a>. But now I'm struggling to get HTTPS working.</p> <h2>Here's what I did</h2> <p>I've generated a cert and key via the following <a href="https://gist.githubusercontent.com/PowerKiKi/02a90c765d7b79e7f64d/raw/353b5450944434baf2977d7ce3e1286f8494f22d/generate-wildcard-certificate.sh" rel="nofollow noreferrer">shell script</a>, run in cmd: </p> <p><code>bash generate-wildcard-certificate.sh mydomain.somenumbers.westeurope.aksapp.io</code></p> <p>That generates 2 files: </p> <ul> <li><code>mydomain.somenumbers.westeurope.aksapp.io.crt</code></li> <li><code>mydomain.somenumbers.westeurope.aksapp.io.key</code></li> </ul> <p>Then I created a secret with the following command:</p> <p><code>kubectl create secret tls ingress-crypto-auth --key mydomain.somenumbers.westeurope.aksapp.io.key --cert mydomain.somenumbers.westeurope.aksapp.io.crt</code></p> <p>Added the secret to my <code>ingress.yaml</code> files:</p> <pre><code>{{ if .Values.ingress.enabled }} apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: {{ template "fullname" . }} labels: app: {{ template "fullname" . }} chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}" release: "{{ .Release.Name }}" heritage: "{{ .Release.Service }}" annotations: kubernetes.io/ingress.class: addon-http-application-routing spec: tls: - hosts: - {{ .Values.ingress.basedomain }} secretName: ingress-crypto-auth rules: - host: {{ .Release.Name }}.{{ .Values.ingress.basedomain }} http: paths: - path: / backend: serviceName: {{ template "fullname" . }} servicePort: {{ .Values.service.externalPort }} {{ end }} </code></pre> <p>And it seems that my certificate is loaded, but I get the following error:</p> <blockquote> <p>This CA Root certificate is not trusted because it is not in the Trusted Root Certification Authorities store.</p> </blockquote> <p>Did I do anything wrong? And, even more important, how do I get it to work? I don't care how; it's for a temporary project</p>
Kiwi
<p>This happens because you are using a self-signed certificate.</p> <blockquote> <p>A <strong>self-signed certificate</strong> is a <a href="https://en.wikipedia.org/wiki/Identity_certificate" rel="nofollow noreferrer" title="Identity certificate">certificate</a> that is not signed by a <a href="https://en.wikipedia.org/wiki/Certificate_authority" rel="nofollow noreferrer" title="Certificate authority">certificate authority</a> (CA). These certificates are easy to make and do not cost money. However, they do not provide all of the security properties that certificates signed by a CA aim to provide. For instance, when a website owner uses a self-signed certificate to provide <a href="https://en.wikipedia.org/wiki/HTTPS" rel="nofollow noreferrer" title="HTTPS">HTTPS</a> services, people who visit that website will see a warning in their browser.</p> </blockquote> <p>To solve this issue you could buy a valid certificate from a trusted CA, or use Let's Encrypt to generate it.</p> <h3>Using cert-manager with Let's Encrypt</h3> <blockquote> <p><strong>cert-manager</strong> builds on top of Kubernetes, introducing certificate authorities and certificates as first-class resource types in the Kubernetes API. This makes it possible to provide 'certificates as a service' to developers working within your Kubernetes cluster.</p> <p><strong>Let's Encrypt</strong> is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security encryption at no charge. The certificate is valid for 90 days, during which renewal can take place at any time.</p> </blockquote> <p>I'm supposing you already have NGINX Ingress installed and working.</p> <p><strong>Pre-requisites:</strong></p> <ul> <li><a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">NGINX Ingress</a> installed and working</li> <li><a href="https://helm.sh/docs/intro/install/" rel="nofollow noreferrer">HELM 3.0</a> installed and working</li> </ul> <p><strong>cert-manager install</strong></p> <blockquote> <p><strong>Note</strong>: When running on GKE (Google Kubernetes Engine), you may encounter a ‘permission denied’ error when creating some of these resources. This is a nuance of the way GKE handles RBAC and IAM permissions, and as such you should ‘elevate’ your own privileges to that of a ‘cluster-admin’ <strong>before</strong> running the above command. 
If you have already run the above command, you should run them again after elevating your permissions:</p> </blockquote> <p>Follow the <a href="https://cert-manager.io/docs/installation/kubernetes/" rel="nofollow noreferrer">official docs</a> to install, or just use HELM 3.0 with the following commands:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl create namespace cert-manager $ helm repo add jetstack https://charts.jetstack.io $ helm repo update $ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager-legacy.crds.yaml </code></pre> <p>Create a <code>ClusterIssuer</code> for Let's Encrypt: Save the content below in a new file called <code>letsencrypt-production.yaml</code>:</p> <blockquote> <p><strong>Note:</strong> Replace <code>&lt;EMAIL-ADDRESS&gt;</code> with your valid email.</p> </blockquote> <pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: labels: name: letsencrypt-prod name: letsencrypt-prod spec: acme: email: &lt;EMAIL-ADDRESS&gt; server: 'https://acme-v02.api.letsencrypt.org/directory' privateKeySecretRef: name: letsencrypt-prod solvers: - http01: ingress: class: nginx </code></pre> <p>Apply the configuration with:</p> <p><code>kubectl apply -f letsencrypt-production.yaml</code></p> <p>Install cert-manager with Let's Encrypt as a default CA:</p> <pre><code>helm install cert-manager \ --namespace cert-manager --version v0.14.1 jetstack/cert-manager \ --set ingressShim.defaultIssuerName=letsencrypt-prod \ --set ingressShim.defaultIssuerKind=ClusterIssuer </code></pre> <p>Verify the installation:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get pods --namespace cert-manager NAME READY STATUS RESTARTS AGE cert-manager-5c6866597-zw7kh 1/1 Running 0 2m cert-manager-cainjector-577f6d9fd7-tr77l 1/1 Running 0 2m cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m </code></pre> <h3>Using cert-manager</h3> <p>Apply this annotation in your Ingress spec:</p> <pre><code>cert-manager.io/cluster-issuer: "letsencrypt-prod" </code></pre> <p>After applying it, cert-manager will generate the TLS certificate for the domain configured in <code>host:</code>.</p> <pre><code>{{ if .Values.ingress.enabled }} apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: {{ template "fullname" . }} labels: app: {{ template "fullname" . }} chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}" release: "{{ .Release.Name }}" heritage: "{{ .Release.Service }}" annotations: kubernetes.io/ingress.class: addon-http-application-routing cert-manager.io/cluster-issuer: "letsencrypt-prod" spec: tls: - hosts: - {{ .Values.ingress.basedomain }} secretName: ingress-crypto-auth rules: - host: {{ .Release.Name }}.{{ .Values.ingress.basedomain }} http: paths: - path: / backend: serviceName: {{ template "fullname" . }} servicePort: {{ .Values.service.externalPort }} {{ end }} </code></pre> <p>Please let me know if that helped.</p>
Mr.KoopaKiller
<p>When I try to copy some files into an existing directory with a wildcard, I receive the error:</p> <pre><code>kubectl cp localdir/* my-namespace/my-pod:/remote-dir/ error: one of src or dest must be a remote file specification </code></pre> <p>It looks like wildcard support has been removed, but I have many files to copy and my remote dir is not empty, so I can't use the recursive copy.</p> <p>How can I run a similar operation?</p>
Vincent J
<p>Here is what I have come up with:</p> <pre><code>kubectl exec -n &lt;namespace&gt; &lt;pod_name&gt; -- mkdir -p &lt;dest_dir&gt; \ &amp;&amp; tar cf - -C &lt;src_dir&gt; . | kubectl exec -i -n &lt;namespace&gt; &lt;pod_name&gt; -- tar xf - -C &lt;dest_dir&gt; </code></pre> <p>Notice there are two parts: the first makes sure the destination directory exists; the second uses tar to archive the files, send them and unpack them in the container.</p> <p>Remember that in order for this to work, the <code>tar</code> and <code>mkdir</code> binaries must be present in your container.</p> <p>The advantage of this solution over the one proposed earlier (the one with xargs) is that it is faster, because it sends all files at once and not one by one.</p>
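<p>Applied to the paths from the question, that would look roughly like this:</p>
<pre><code>kubectl exec -n my-namespace my-pod -- mkdir -p /remote-dir \
  &amp;&amp; tar cf - -C localdir . | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /remote-dir
</code></pre>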
Matt
<p>I am trying to work with TLS in our Kubernetes cluster. I've followed MS documentation on "Create an HTTPS ingress controller on Azure Kubernetes Service" (<a href="https://learn.microsoft.com/en-us/azure/aks/ingress-tls" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-tls</a>).</p> <p>I've deployed a nginx-ingress controller, added the DNS record and installed the cert-manager. I created a CA ClusterIssuer of SelfSigned and also created the 2 demo applications.</p> <p>When I created the ingress route, the certificate created automatically and with "True" on the Ready status, but the route is not working - I can't access the demo applications with the host name deployed (<code>https://hello-world-ingress.&lt;Ingress_Service_DNS_Name&gt;</code>).</p> <p>The Self-Signed ClusterIssuer:</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: selfsigned-issuer spec: selfSigned: {} </code></pre> <p>The Ingress route:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: hello-world-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$2 cert-manager.io/cluster-issuer: selfsigned-issuer spec: tls: - hosts: - hello-world-ingress.&lt;Ingress_Service_DNS_Name&gt; secretName: tls-secret rules: - host: hello-world-ingress.&lt;Ingress_Service_DNS_Name&gt; http: paths: - backend: serviceName: aks-helloworld servicePort: 80 path: /(.*) - backend: serviceName: aks-helloworld-two servicePort: 80 path: /hello-world-two(/|$)(.*) --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: hello-world-ingress-static annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /static/$2 cert-manager.io/cluster-issuer: selfsigned-issuer spec: tls: - hosts: - hello-world-ingress.&lt;Ingress_Service_DNS_Name&gt; secretName: tls-secret rules: - host: hello-world-ingress.&lt;Ingress_Service_DNS_Name&gt; http: paths: - backend: serviceName: aks-helloworld servicePort: 80 path: /static(/|$)(.*) </code></pre> <p>I've created a DNS record on GoDaddy in our domain for <code>&lt;Ingress_Service_DNS_Name&gt;</code> (but with the real name) that points to the external ingress controller service IP Address.</p> <p>The rest of the installations and deployments are the same as the documentation.</p> <p>Does anyone has any idea why it's not working?</p> <p>---------------- Edit ----------------------</p> <p>Ingress-controller logs:</p> <pre><code>I0330 06:03:16.780788 7 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-basic", Name:"hello-world-ingress", UID:"488a4c00-7072-11ea-a46c-1a8c7fb34cf9", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"37375594", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-basic/hello-world-ingressI0330 06:03:46.358414 7 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-basic", Name:"hello-world-ingress-static", UID:"48b91e0e-7072-11ea-a46c-1a8c7fb34cf9", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"37375687", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-basic/hello-world-ingress-static I0330 06:03:46.386930 7 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-basic", Name:"hello-world-ingress", UID:"488a4c00-7072-11ea-a46c-1a8c7fb34cf9", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"37375688", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-basic/hello-world-ingress I0330 
06:04:16.783483 7 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-basic", Name:"hello-world-ingress", UID:"488a4c00-7072-11ea-a46c-1a8c7fb34cf9", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"37375802", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-basic/hello-world-ingress I0330 06:04:16.788210 7 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-basic", Name:"hello-world-ingress-static", UID:"48b91e0e-7072-11ea-a46c-1a8c7fb34cf9", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"37375803", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-basic/hello-world-ingress-static I0330 06:04:46.584035 7 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-basic", Name:"hello-world-ingress", UID:"488a4c00-7072-11ea-a46c-1a8c7fb34cf9", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"37375904", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-basic/hello-world-ingress I0330 06:04:46.587677 7 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-basic", Name:"hello-world-ingress-static", UID:"48b91e0e-7072-11ea-a46c-1a8c7fb34cf9", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"37375905", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-basic/hello-world-ingress-static I0330 06:05:16.938952 7 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-basic", Name:"hello-world-ingress", UID:"488a4c00-7072-11ea-a46c-1a8c7fb34cf9", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"37376008", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-basic/hello-world-ingress I0330 06:05:16.938975 7 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-basic", Name:"hello-world-ingress-static", UID:"48b91e0e-7072-11ea-a46c-1a8c7fb34cf9", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"37376007", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-basic/hello-world-ingress-static I0330 06:05:46.337384 7 event.go:281] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-basic", Name:"hello-world-ingress-static", UID:"48b91e0e-7072-11ea-a46c-1a8c7fb34cf9", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"37376095", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-basic/hello-world-ingress-static </code></pre> <p>Cert-manager logs:</p> <pre><code>I0330 06:16:19.953430 1 reflector.go:432] external/io_k8s_client_go/tools/cache/reflector.go:108: Watch close - *v1alpha2.Order total 0 items received I0330 06:16:19.989382 1 reflector.go:278] external/io_k8s_client_go/tools/cache/reflector.go:108: forcing resync I0330 06:16:39.861201 1 metrics.go:304] cert-manager/metrics "msg"="attempting to clean up metrics for recently deleted certificates" I0330 06:16:39.861233 1 metrics.go:307] cert-manager/metrics "msg"="active certificates is still uninitialized" I0330 06:16:46.353253 1 controller.go:129] cert-manager/controller/ingress-shim "msg"="syncing item" "key"="ingress-basic/hello-world-ingress" I0330 06:16:46.354661 1 metrics.go:385] cert-manager/metrics "msg"="incrementing controller sync call count" "controllerName"="ingress-shim" I0330 06:16:46.355124 1 sync.go:163] cert-manager/controller/ingress-shim "msg"="certificate already exists for ingress resource, ensuring it is up to date" "related_resource_kind"="Certificate" "related_resource_name"="tls-secret-selfsigned" "related_resource_namespace"="ingress-basic" 
"resource_kind"="Ingress" "resource_name"="hello-world-ingress" "resource_namespace"="ingress-basic" I0330 06:16:46.356804 1 sync.go:176] cert-manager/controller/ingress-shim "msg"="certificate resource is already up to date for ingress" "related_resource_kind"="Certificate" "related_resource_name"="tls-secret-selfsigned" "related_resource_namespace"="ingress-basic" "resource_kind"="Ingress" "resource_name"="hello-world-ingress" "resource_namespace"="ingress-basic" I0330 06:16:46.357190 1 controller.go:135] cert-manager/controller/ingress-shim "msg"="finished processing work item" "key"="ingress-basic/hello-world-ingress" I0330 06:16:46.358636 1 controller.go:129] cert-manager/controller/ingress-shim "msg"="syncing item" "key"="ingress-basic/hello-world-ingress-static" I0330 06:16:46.361782 1 metrics.go:385] cert-manager/metrics "msg"="incrementing controller sync call count" "controllerName"="ingress-shim" I0330 06:16:46.367596 1 sync.go:163] cert-manager/controller/ingress-shim "msg"="certificate already exists for ingress resource, ensuring it is up to date" "related_resource_kind"="Certificate" "related_resource_name"="tls-secret-selfsigned" "related_resource_namespace"="ingress-basic" "resource_kind"="Ingress" "resource_name"="hello-world-ingress-static" "resource_namespace"="ingress-basic" I0330 06:16:46.368271 1 sync.go:171] cert-manager/controller/ingress-shim "msg"="certificate resource is not owned by this ingress. refusing to update non-owned certificate resource for ingress" "related_resource_kind"="Certificate" "related_resource_name"="tls-secret-selfsigned" "related_resource_namespace"="ingress-basic" "resource_kind"="Ingress" "resource_name"="hello-world-ingress-static" "resource_namespace"="ingress-basic" I0330 06:16:46.368424 1 controller.go:135] cert-manager/controller/ingress-shim "msg"="finished processing work item" "key"="ingress-basic/hello-world-ingress-static" I0330 06:16:47.581355 1 reflector.go:278] external/io_k8s_client_go/tools/cache/reflector.go:108: forcing resync I0330 06:16:49.383317 1 reflector.go:278] external/io_k8s_client_go/tools/cache/reflector.go:108: forcing resync </code></pre> <p>The only thing that looks like it can be a problem is in the cert manager logs:</p> <pre><code>"certificate resource is not owned by this ingress. refusing to update non-owned certificate resource for ingress" "related_resource_kind"="Certificate" "related_resource_name"="tls-secret-selfsigned" "related_resource_namespace"="ingress-basic" "resource_kind"="Ingress" "resource_name"="hello-world-ingress-static" "resource_namespace"="ingress-basic" " </code></pre> <p>Thanks,</p> <p>Afik</p>
Afik.A
<p>Based on the information provided, I believe that the problem is two Ingresses using the same self-signed certificate. </p> <p>What you are trying to do here is manage your certificate from two different places. As the documentation states: </p> <blockquote> <p>Deploy a TLS Ingress Resource - “There are two primary ways to do this: using annotations on the ingress with ingress-shim or directly creating a certificate resource.”</p> </blockquote> <p>So your <code>hello-world-ingress</code> can use the annotation: </p> <pre><code>cert-manager.io/cluster-issuer: selfsigned-issuer </code></pre> <p>But <code>hello-world-ingress-static</code> can't, because the certificate has already been created under <code>secretName: tls-secret</code>.</p> <p>So from <code>hello-world-ingress-static</code> you should remove the annotation:</p> <pre><code>cert-manager.io/cluster-issuer: selfsigned-issuer </code></pre> <p>It creates a conflict, since the <code>secretName</code> is already created and managed by another resource - in this case the <code>CertificateRequest</code> from the other Ingress. </p> <p>Let me know if this helps. </p>
acid_fuji
<p>I notice the CPU utilization of pods in the same HPA varies from 31m to 1483m. Is this expected and normal? See below for the CPU utilization of 8 pods which belong to the same HPA.</p> <pre><code>NAME CPU(cores) myapp-svc-pod1 31m myapp-svc-pod2 87m myapp-svc-pod3 1061m myapp-svc-pod4 35m myapp-svc-pod5 523m myapp-svc-pod6 1483m myapp-svc-pod7 122m myapp-svc-pod8 562m </code></pre>
Saha
<p>HPA's main goal is to spawn more pods to keep the average load for a group of pods at a specified level.</p> <p>HPA is not responsible for load balancing and equal connection distribution.</p> <p>Equal connection distribution is the responsibility of the k8s Service, which works by default in iptables mode and - <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer">according to the k8s docs</a> - picks pods at random.</p> <p>Your uneven CPU load distribution is most probably caused by the data each pod processes. To make sure it's not an issue with the k8s Service, I'd recommend you export some metrics, such as the number of connections and the time it takes to process one request. After you have gathered this data, have a look at it and see if a pattern emerges.</p> <p>Now to answer your question:</p> <blockquote> <p>Is this expected and normal?</p> </blockquote> <p>It depends on what you consider normal, but if you were expecting a more equal CPU load distribution then you may want to rethink your design. It's hard to say what you can do to make it more equal because I don't know what the myapp-svc pods do, but as I already mentioned, it may be best to have a look at the metrics.</p>
Matt
<p>I am new to microservices and want to understand the best way to implement the behaviour below in microservices deployed on Kubernetes:</p> <p>There are 2 different K8s clusters. Microservice B is deployed on both the clusters.</p> <p>Now if Microservice A calls Microservice B and B’s pods are not available in cluster 1, then the call should go to B of cluster 2.</p> <p>I could have implemented this functionality using Netflix OSS, but here I am not using it.</p> <p>Also, keeping the inter-cluster communication aside for a second, how should I communicate between microservices?</p> <p>One way that I know is to create a Kubernetes Service of type NodePort for each microservice and use the IP and the nodePort in the calling microservice.</p> <p>Question: What if someone deletes the target microservice's K8s Service? A new nodePort will be randomly assigned by K8s while recreating the K8s Service, and then again I will have to go back to my calling microservice and change the nodePort of the target microservice. How can I decouple from the nodePort?</p> <p>I researched kubedns, but it seems like it only works within a cluster.</p> <p>I have very limited knowledge about Istio and Kubernetes Ingress. Does either of these provide something like what I am looking for?</p> <p>Sorry for the long question. Any sort of guidance will be very helpful.</p>
Abhinav
<p>You can expose your application using Services; there are several kinds of Services you can use:</p> <blockquote> <ul> <li><p><code>ClusterIP</code>: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default <code>ServiceType</code>.</p> </li> <li><p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a>: Exposes the Service on each Node’s IP at a static port (the <code>NodePort</code>). A <code>ClusterIP</code> Service, to which the <code>NodePort</code> Service routes, is automatically created. You’ll be able to contact the <code>NodePort</code> Service, from outside the cluster, by requesting <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>.</p> </li> <li><p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a>: Exposes the Service externally using a cloud provider’s load balancer. <code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</p> </li> <li><p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer"><code>ExternalName</code></a>: Maps the Service to the contents of the <code>externalName</code> field (e.g. <code>foo.bar.example.com</code>), by returning a <code>CNAME</code> record</p> </li> </ul> </blockquote> <p>For <strong>internal communication</strong> you can use a Service of type <code>ClusterIP</code>, and you can use the Service DNS name for your applications instead of an IP. I.e.: a Service called <code>my-app-1</code> can be reached internally using the DNS name <code>http://my-app-1</code> or the FQDN <code>http://my-app-1.&lt;namespace&gt;.svc.cluster.local</code>.</p> <p>For <strong>external communication</strong>, you can use <code>NodePort</code> or <code>LoadBalancer</code>.</p> <p><code>NodePort</code> is good when you have few nodes and know the IPs of all of them. And yes, according to the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">Service docs</a>, you can specify a specific port number:</p> <blockquote> <p>If you want a specific port number, you can specify a value in the <code>nodePort</code> field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that’s inside the range configured for NodePort use.</p> </blockquote> <p><code>LoadBalancer</code> gives you more flexibility, because you don't need to know all the node IPs; you just need to know the Service IP and port. But <code>LoadBalancer</code> is only supported by cloud providers; if you want to implement it in a bare-metal cluster, I recommend you take a look at <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>.</p> <p>Finally, there is another option: using an <code>Ingress</code>. In my opinion it is the best way to expose HTTP applications externally, because you can create rules by path and host, and it gives you much more flexibility than Services. 
But <strong>only</strong> HTTP/HTTPS is supported, if you need TCP then go to Services.</p> <p>I'd recommend you take a look in these links to understand in deep how services and ingress works:</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Services</a></p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes Ingress</a></p> <p><a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress</a></p>
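<p>To make the internal-communication part concrete, here is a minimal sketch (the service name, namespace, labels and ports are just placeholders for your microservice B, not values from your setup):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-b
  namespace: team-b
spec:
  type: ClusterIP
  selector:
    app: service-b
  ports:
    - port: 80
      targetPort: 8080
</code></pre> <p>Microservice A would then simply call <code>http://service-b.team-b.svc.cluster.local</code> (or just <code>http://service-b</code> from within the same namespace). Nothing breaks if the Service is recreated, because the DNS name stays stable even though the ClusterIP or a nodePort may change.</p>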
Mr.KoopaKiller
<p>I have configured a kubernetes cluster on bare metal using kubeadm. Everything works well and I can deploy an example nginx app. Problem comes in when I want to deploy a statefulset with <code>volumeClaimTemplates</code> as shown below</p> <pre><code> volumeClaimTemplates: - metadata: name: jackrabbit-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi storageClassName: jackrabbit </code></pre> <p>and the storageclass</p> <pre><code>allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: meta.helm.sh/release-name: chart-1591185140 meta.helm.sh/release-namespace: gluu storageclass.beta.kubernetes.io/is-default-class: "false" labels: app.kubernetes.io/managed-by: Helm storage: jackrabbit managedFields: - apiVersion: storage.k8s.io/v1 mountOptions: - debug parameters: fsType: ext4 pool: default provisioner: kubernetes.io/no-provisioner reclaimPolicy: Retain </code></pre> <p>I have also tried to add a <code>persistentVolume</code> with <code>hostPath</code> spec but still not working.</p> <pre><code> ---- ------ ---- ---- ------- Warning ProvisioningFailed 82s (x3 over 98s) persistentvolume-controller no volume plugin matched </code></pre>
Shammir
<p>In your StorageClass you are using <code>kubernetes.io/no-provisioner</code> and this means you are trying to use the Local Volume Plugin. </p> <p>Your cluster doesn't know <code>kubernetes.io/no-provisioner</code> yet and that's why <code>no volume plugin matched</code> is presented. </p> <p>According to the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes" rel="nofollow noreferrer">documentation</a>, this plugin is not included in <code>kubernetes.io</code> as an internal provisioner. <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner" rel="nofollow noreferrer">Here</a> you can see a chart listing all provisioners, their plugin names, whether they are included as an internal provisioner, and a link to a config example. </p> <p>In the documentation we can read:</p> <blockquote> <p>You are not restricted to specifying the “internal” provisioners listed here (whose names are prefixed with <code>kubernetes.io</code> and shipped alongside Kubernetes). You can also run and specify external provisioners, which are independent programs that follow a <a href="https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md" rel="nofollow noreferrer">specification</a> defined by Kubernetes. Authors of external provisioners have full discretion over where their code lives, how the provisioner is shipped, how it needs to be run, what volume plugin it uses (including Flex), etc. The repository <a href="https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner" rel="nofollow noreferrer">kubernetes-sigs/sig-storage-lib-external-provisioner</a> houses a library for writing external provisioners that implements the bulk of the specification. Some external provisioners are listed under the repository <a href="https://github.com/kubernetes-incubator/external-storage" rel="nofollow noreferrer">kubernetes-incubator/external-storage</a>.</p> <p>For example, NFS doesn’t provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.</p> </blockquote> <p>The Local external provisioner is maintained in <a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#getting-started" rel="nofollow noreferrer">this</a> GitHub repository and there you can find the <a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#getting-started" rel="nofollow noreferrer">Getting Started</a> guide that will lead you through how to use it. </p>
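<p>If you prefer not to run the external provisioner, a statically created local PersistentVolume can also satisfy the claim. As a minimal sketch (the PV name, node name and path are placeholders you would have to adapt; for local volumes it is also usually recommended to set <code>volumeBindingMode: WaitForFirstConsumer</code> on the StorageClass):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: jackrabbit-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: jackrabbit
  local:
    path: /mnt/disks/jackrabbit        # must exist on the node
  nodeAffinity:                        # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node-1            # placeholder node name
</code></pre>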
Mark Watney
<p>How can I use the kubectl command to get a specific pod's commit SHA-1, like:</p> <pre><code>kubectl get git_commit_sha1 [pod_name] </code></pre>
Dave kam
<p>There is no way to achieve what you want at the moment using kubectl. The only possible way would be if your docker image has the <code>git</code> command built in. In that case you could use <code>kubectl exec</code> to get the information you want. </p> <p>Example: </p> <pre><code>$ kubectl exec -ti podname -- git show </code></pre> <p>Alternatively, if you really think your idea makes sense and may be useful to more people, you can open a feature request on the <a href="https://github.com/kubernetes/kubectl/issues" rel="nofollow noreferrer">kubernetes github issues page</a>.</p>
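<p>If the image also contains the repository's <code>.git</code> directory, you could print only the SHA-1, for example:</p> <pre><code>$ kubectl exec -ti podname -- git rev-parse HEAD
</code></pre> <p>A more common pattern (outside of kubectl itself) is to bake the commit SHA into the image as a label, environment variable or pod annotation at build time and read it back from the pod spec, but that requires changing your build pipeline.</p>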
Mark Watney
<p>I have a couple of deployments - say Deployment A and Deployment B. The K8s subnet is 10.0.0.0/20. My requirement: is it possible to get all pods in Deployment A to take IPs from 10.0.1.0/24 and pods in Deployment B from 10.0.2.0/24? This keeps the networking clean, and a particular deployment can be identified from the IP itself. </p>
amp
<p>Deployment in Kubernetes is a high-level abstraction that relies on controllers to build basic objects. That is different from the objects themselves, such as a pod or service. </p> <p>If you take a look into the deployment spec in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#deploymentspec-v1-apps" rel="nofollow noreferrer">Kubernetes API Overview</a>, you will notice that there is no such thing as defining subnets, nor IP addresses that would be specific to a deployment, so you cannot specify subnets for deployments. </p> <p>The Kubernetes idea is that a pod is ephemeral. You should not try to identify resources by IP addresses, as IPs are randomly assigned. If the pod dies it will have another IP address. You could try to look at something like statefulsets if you are after unique stable network identifiers. </p> <p>While Kubernetes does not support this feature, I found a workaround for it using Calico: the <a href="https://docs.projectcalico.org/networking/migrate-pools" rel="nofollow noreferrer">Migrate pools</a> feature. </p> <p>First you need to have <code>calicoctl</code> installed. There are several ways to do that, mentioned in the <a href="https://docs.projectcalico.org/getting-started/calicoctl/install" rel="nofollow noreferrer">install calicoctl</a> docs. </p> <p>I chose to install <code>calicoctl</code> as a Kubernetes pod: </p> <pre><code> kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml </code></pre> <p>To make work faster you can set up an alias: </p> <pre><code>alias calicoctl="kubectl exec -i -n kube-system calicoctl /calicoctl -- " </code></pre> <p>I have created two yaml files to set up the IP pools: </p> <pre><code>apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: pool1 spec: cidr: 10.0.0.0/24 ipipMode: Always natOutgoing: true apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: pool2 spec: cidr: 10.0.1.0/24 ipipMode: Always natOutgoing: true </code></pre> <p>Then you have to apply the following configuration, but since my yaml files were placed on my host filesystem and not in the calico pod itself, I piped the yaml as input to the command: </p> <pre><code>➜ cat ippool1.yaml | calicoctl apply -f- Successfully applied 1 'IPPool' resource(s) ➜ cat ippool2.yaml | calicoctl apply -f- Successfully applied 1 'IPPool' resource(s) </code></pre> <p>Listing the ippools you will notice the newly added ones: </p> <pre><code>➜ calicoctl get ippool -o wide NAME CIDR NAT IPIPMODE VXLANMODE DISABLED SELECTOR default-ipv4-ippool 192.168.0.0/16 true Always Never false all() pool1 10.0.0.0/24 true Always Never false all() pool2 10.0.1.0/24 true Always Never false all() </code></pre> <p>Then you can specify what pool you want to choose for your deployment: </p> <pre><code>--- metadata: labels: app: nginx name: deployment1-pool1 spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: annotations: cni.projectcalico.org/ipv4pools: "[\"pool1\"]" --- </code></pre> <p>I have created a similar one called <code>deployment2</code> that used <code>ippool2</code>, with the results below: </p> <pre><code>deployment1-pool1-6d9ddcb64f-7tkzs 1/1 Running 0 71m 10.0.0.198 acid-fuji deployment1-pool1-6d9ddcb64f-vkmht 1/1 Running 0 71m 10.0.0.199 acid-fuji deployment2-pool2-79566c4566-ck8lb 1/1 Running 0 69m 10.0.1.195 acid-fuji deployment2-pool2-79566c4566-jjbsd 1/1 Running 0 69m 10.0.1.196 acid-fuji </code></pre> <p>Also, it's worth mentioning that while testing this I found out that if your default deployment has many replicas and runs out of IPs, Calico will then use a different pool. </p>
acid_fuji
<p>I have a question. I am trying to install nginx with Helm 3 but it is not working when I specify the namespace. Any idea why? It works without it. </p> <pre><code>helm install nginx-release nginx-stable/nginx-ingres -n ingress-basic Error: failed to download "nginx-stable/nginx-ingres" (hint: running `helm repo update` may help) </code></pre>
EchoRo
<p>Your command has a typo: you typed <code>nginx-stable/nginx-ingres</code> and it should be <code>nginx-stable/nginx-ingress</code>. </p> <p>Following the <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="noreferrer">documentation</a>, you are using the right repository for the official NGINX Ingress. To successfully install it using Helm you have to run the following commands: </p> <ol> <li>Add the NGINX Helm repository: <pre><code>$ helm repo add nginx-stable https://helm.nginx.com/stable $ helm repo update </code></pre></li> <li><p>To install the chart with the release name my-release (my-release is a name of your choosing):</p> <pre><code>$ helm install my-release nginx-stable/nginx-ingress </code></pre></li> </ol> <p>In your scenario the command should look like this:</p> <pre><code>$ helm install nginx-release nginx-stable/nginx-ingress -n ingress-basic </code></pre> <p>Before running the above command, you have to create the namespace: </p> <pre><code>kubectl create namespace ingress-basic </code></pre>
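<p>If your Helm version is 3.2 or newer, I believe you can also let Helm create the namespace for you with the <code>--create-namespace</code> flag instead of creating it manually:</p> <pre><code>$ helm install nginx-release nginx-stable/nginx-ingress -n ingress-basic --create-namespace
</code></pre>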
Mark Watney
<p>i have an application that record live traffic and replay them.</p> <p><a href="https://github.com/buger/goreplay" rel="nofollow noreferrer">https://github.com/buger/goreplay</a></p> <p>it is a simple app to use, but when i tried to use it with kubernetes i get a problem with persisting data in volumes.</p> <p>i want to do this :</p> <ul> <li>in pod number one i use the goreplay container and other container that just have a simple python server... the job is the goreplay will listen to the requests coming from outside to the server and save them to a file , this is the deployment file :</li> </ul> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: goreplay-deployment labels: app: goreplay-app spec: replicas: 1 selector: matchLabels: app: goreplay-app template: metadata: labels: app: goreplay-app spec: containers: - name: goreplay image: feiznouri/goreplay:2.0 args: - &quot;--input-raw&quot; - &quot;:3000&quot; - &quot;--output-file=requests_docker.gor&quot; volumeMounts: - name: data mountPath: /var/lib/goreplay - name: myserver image: feiznouri/python-server:1.1 args: - &quot;3000&quot; ports: - name: server-port containerPort: 3000 volumes: - name: data persistentVolumeClaim: claimName: goreplay-claim </code></pre> <p>normally this will create the file.</p> <p>the prblem is that when i delete the deployment, and create one that it's job is to read the file and forward the saving request to a server, it can't find the file , clearly i am using the volumes wrong , this is the second deployment that suppose to find and read the file :</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: goreplay-deployment labels: app: goreplay-app spec: replicas: 1 selector: matchLabels: app: goreplay-app template: metadata: labels: app: goreplay-app spec: containers: - name: goreplay image: feiznouri/goreplay:2.0 args: - &quot;--input-file&quot; - &quot;requests_docker_0.gor&quot; - &quot;--output-http=http://localhost:3000&quot; volumeMounts: - name: data mountPath: /var/lib/goreplay - name: myserver image: feiznouri/python-server:1.1 args: - &quot;3000&quot; ports: - name: server-port containerPort: 3000 volumes: - name: data persistentVolumeClaim: claimName: goreplay-claim </code></pre> <p>PS: this is the yaml file for the persistent volume :</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: goreplay-volume labels: type: local spec: storageClassName: custum capacity: storage: 1Gi accessModes: - ReadWriteMany hostPath: path: &quot;/mnt/data&quot; </code></pre> <p>and this the file for the storage class :</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom provisioner: k8s.io/minikube-hostpath reclaimPolicy: Retain volumeBindingMode: Immediate </code></pre> <p>and this for the persistent volume claim :</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: goreplay-claim spec: storageClassName: custum accessModes: - ReadWriteMany resources: requests: storage: 50Mi </code></pre> <p>how can i make this work and find and use the first file that i created in the second pod !</p> <p>thank you in advance.</p>
feiz
<p>I replicated it and it looks like the volumes are fine.</p> <p>What is not fine is how you pass file paths to goreplay.</p> <p>Here is what I did:</p> <pre><code>kubectl exec -it goreplay-deployment-899c49f95-7qdh4 -c goreplay sh /home/goreplay # ps auxwf PID USER TIME COMMAND 1 root 0:00 ./gor --input-raw :3000 --output-file=requests_docker.gor 36 root 0:00 sh 42 root 0:00 ps auxwf /home/goreplay # ls /proc/1/cwd -l lrwxrwxrwx 1 root root 0 Feb 19 09:44 /proc/1/cwd -&gt; /home/goreplay </code></pre> <p>Let me explain what you see here. I exec'd into the goreplay container and checked the PID of the goreplay process (PID=1). Next, I checked what this process's current working directory is by looking at the <code>/proc/1/cwd</code> symlink. As you see, it's symlinked to <code>/home/goreplay</code>.</p> <p>What does it tell us?</p> <p>It tells us that <code>--output-file=requests_docker.gor</code> is making goreplay save the file in <code>/home/goreplay/requests_docker.gor</code> (since you are specifying a path relative to the process's current working dir instead of using an absolute path pointing to the volume). It should be set to:</p> <pre><code>--output-file=/var/lib/goreplay/requests_docker.gor </code></pre> <p>since that is the directory where the volume is mounted.</p> <hr /> <p>The same applies to the second deployment. You should specify:</p> <pre><code>--input-file=/var/lib/goreplay/requests_docker_0.gor </code></pre> <p>so that it reads from the volume and not from the pod's home directory (<code>/home/goreplay</code>).</p> <hr /> <p>Change it and it should work.</p>
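<p>For clarity, the goreplay container args in your first deployment would then look like this (only the output path changes; everything else stays as in your manifest):</p> <pre><code>args:
  - "--input-raw"
  - ":3000"
  - "--output-file=/var/lib/goreplay/requests_docker.gor"
</code></pre> <p>And in the second deployment the input flag would point at the same mounted directory, e.g. <code>--input-file=/var/lib/goreplay/requests_docker_0.gor</code>.</p>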
Matt
<p>I have deployed nginx ingress controller with internal load balancer and externalDNS on my EKS cluster so i tried to expose kibana with the hostname registred on route53 with private hosted zone (my-hostname.com). but when i access it on the browser using vpn it shows me site can't be reached. So i need to know what i did wrong </p> <p>here is all the resources :</p> <p><strong>ingress controller :</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: default-http-backend spec: ports: - port: 80 targetPort: 8080 selector: app: default-http-backend --- apiVersion: apps/v1 kind: Deployment metadata: name: default-http-backend spec: selector: matchLabels: app: default-http-backend template: metadata: labels: app: default-http-backend spec: containers: - name: default-http-backend image: gcr.io/google_containers/defaultbackend:1.3 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: internal-ingress name: internal-ingress-controller spec: replicas: 1 selector: matchLabels: app: internal-ingress template: metadata: labels: app: internal-ingress spec: containers: - args: - /nginx-ingress-controller - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - --configmap=$(POD_NAMESPACE)/internal-ingress-configuration - --tcp-services-configmap=$(POD_NAMESPACE)/internal-tcp-services - --udp-services-configmap=$(POD_NAMESPACE)/internal-udp-services - --annotations-prefix=nginx.ingress.kubernetes.io - --ingress-class=internal-ingress - --publish-service=$(POD_NAMESPACE)/internal-ingress env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0 livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: internal-ingress-controller ports: - containerPort: 80 name: http protocol: TCP - containerPort: 443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 --- apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600" service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*' labels: app: internal-ingress name: internal-ingress spec: externalTrafficPolicy: Cluster ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https selector: app: internal-ingress sessionAffinity: None type: LoadBalancer </code></pre> <p>externalDNS :</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: external-dns --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: external-dns rules: - apiGroups: [""] resources: ["services","endpoints","pods"] verbs: ["get","watch","list"] - apiGroups: ["extensions"] resources: ["ingresses"] verbs: ["get","watch","list"] - apiGroups: [""] resources: ["nodes"] verbs: ["list","watch"] --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: external-dns-viewer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: external-dns subjects: - kind: ServiceAccount name: external-dns namespace: default --- apiVersion: apps/v1beta2 kind: Deployment metadata: labels: app: 
external-dns-private name: external-dns-private spec: replicas: 1 selector: matchLabels: app: external-dns-private strategy: type: Recreate template: metadata: labels: app: external-dns-private spec: serviceAccountName: external-dns containers: - args: - --source=ingress - --domain-filter=my-hostname.com - --provider=aws - --registry=txt - --txt-owner-id=dev.k8s.nexus - --annotation-filter=kubernetes.io/ingress.class=internal-ingress - --aws-zone-type=private image: registry.opensource.zalan.do/teapot/external-dns:latest name: external-dns-private </code></pre> <p><strong>ingress resource:</strong></p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: "internal-ingress" labels: app: app name: app-private spec: rules: - host: kibana.my-hostname.com http: paths: - backend: serviceName: kibana servicePort: 5601 </code></pre> <p><strong>kibana service :</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: kibana spec: selector: app: kibana ports: - name: client port: 5601 protocol: TCP type: ClusterIP </code></pre> <p>I have checked recordsets of my private hosted zone and figured that kibana.my-hostname.com has been added but still cannot access it.</p>
touati ahmed
<p>Route53 will only respond to requests coming from your internal and allowed VPCs. You cannot reach the domain from outside your VPC.</p> <p>To solve the issue, change your zone to public, or use a VPN with Simple AD to forward requests to your private zone as described <a href="https://aws.amazon.com/premiumsupport/knowledge-center/private-hosted-zone-directory-service/" rel="nofollow noreferrer">here</a>.</p> <p><strong>References:</strong></p> <p><a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html" rel="nofollow noreferrer">Working with private hosted zones</a></p>
Mr.KoopaKiller
<p>I tried to install <code>Kubernetes</code> with <code>kubeadm</code> on 3 virtual machines with <code>Debian</code> OS on my laptop, one as master node and the other two as worker nodes. I did exactly as the tutorials on <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">kubernetes.io</a> suggests. I initialized cluster with command <code>kubeadm init --pod-network-cidr=10.244.0.0/16</code> and joined the workers with corresponding <code>kube join</code> command. I installed <code>Flannel</code> as the network overlay with command <code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</code>.</p> <p>The repsonse of command <code>kubectl get nodes</code> looks fine:</p> <pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE k8smaster Ready master 20h v1.18.3 192.168.1.100 &lt;none&gt; Debian GNU/Linux 10 (buster) 4.19.0-9-amd64 docker://19.3.9 k8snode1 Ready &lt;none&gt; 20h v1.18.3 192.168.1.101 &lt;none&gt; Debian GNU/Linux 10 (buster) 4.19.0-9-amd64 docker://19.3.9 k8snode2 Ready &lt;none&gt; 20h v1.18.3 192.168.1.102 &lt;none&gt; Debian GNU/Linux 10 (buster) 4.19.0-9-amd64 docker://19.3.9 </code></pre> <p>The response of command <code>kubectl get pods --all-namespaces</code> doesn't show any error:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kube-system coredns-66bff467f8-7hlnp 1/1 Running 9 20h 10.244.0.22 k8smaster &lt;none&gt; &lt;none&gt; kube-system coredns-66bff467f8-wmvx4 1/1 Running 11 20h 10.244.0.23 k8smaster &lt;none&gt; &lt;none&gt; kube-system etcd-k8smaster 1/1 Running 11 20h 192.168.1.100 k8smaster &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-k8smaster 1/1 Running 9 20h 192.168.1.100 k8smaster &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-k8smaster 1/1 Running 11 20h 192.168.1.100 k8smaster &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-amd64-9c5rr 1/1 Running 17 20h 192.168.1.102 k8snode2 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-amd64-klw2p 1/1 Running 21 20h 192.168.1.101 k8snode1 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-amd64-x7vm7 1/1 Running 11 20h 192.168.1.100 k8smaster &lt;none&gt; &lt;none&gt; kube-system kube-proxy-jdfzg 1/1 Running 11 19h 192.168.1.101 k8snode1 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-lcdvb 1/1 Running 6 19h 192.168.1.102 k8snode2 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-w6jmf 1/1 Running 11 20h 192.168.1.100 k8smaster &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-k8smaster 1/1 Running 10 20h 192.168.1.100 k8smaster &lt;none&gt; &lt;none&gt; </code></pre> <p>Then i tried to create a <code>POD</code> with command <code>kubectl apply -f podexample.yml</code> with following content:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: example spec: containers: - name: nginx image: nginx </code></pre> <p>Command <code>kubectl get pods -o wide</code> shows that the <code>POD</code> is created on worker node1 and is in <code>Running</code> state.</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES example 1/1 Running 0 135m 10.244.1.14 k8snode1 &lt;none&gt; &lt;none&gt; </code></pre> <p>The thing is, when i try to connect to the pod with <code>curl -I 10.244.1.14</code> command i get the following response in master node:</p> <pre><code>curl: (7) Failed to connect to 10.244.1.14 port 80: Connection timed out </code></pre> <p>but the same 
command on the worker node1 responds successfully with:</p> <pre><code>HTTP/1.1 200 OK Server: nginx/1.17.10 Date: Sat, 23 May 2020 19:45:05 GMT Content-Type: text/html Content-Length: 612 Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT Connection: keep-alive ETag: "5e95c66e-264" Accept-Ranges: bytes </code></pre> <p>I thought maybe that's because somehow <code>kube-proxy</code> is not running on master node but command <code>ps aux | grep kube-proxy</code> shows that it's running.</p> <pre><code>root 16747 0.0 1.6 140412 33024 ? Ssl 13:18 0:04 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8smaster </code></pre> <p>Then i checked for kernel routing table with command <code>ip route</code> and it shows that packets destined for <code>10.244.1.0/244</code> get routed to flannel.</p> <pre><code>default via 192.168.1.1 dev enp0s3 onlink 10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 169.254.0.0/16 dev enp0s3 scope link metric 1000 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 192.168.1.0/24 dev enp0s3 proto kernel scope link src 192.168.1.100 </code></pre> <p>Everything looks fine to me and i don't know what else should i check to see what's the problem. Am i missing something? </p> <p>UPDATE1:</p> <p>If i start an <code>NGINX</code> container on worker node1 and map it's 80 port to port 80 of the worker node1 host, then i can connect to it via command <code>curl -I 192.168.1.101</code> from master node. Also, i didn't add any iptable rule and there is no firewall daemon like <code>UFW</code> installed on the machines. So, i think it's not a firewall issue.</p> <p>UPDATE2:</p> <p>I recreated the cluster and used <code>canal</code> instead of <code>flannel</code>, still no luck.</p> <p>UPDATE3:</p> <p>I took a look at canal and flannel logs with following commands and everything seems fine:</p> <pre><code>kubectl logs -n kube-system canal-c4wtk calico-node kubectl logs -n kube-system canal-c4wtk kube-flannel kubectl logs -n kube-system canal-b2fkh calico-node kubectl logs -n kube-system canal-b2fkh kube-flannel </code></pre> <p>UPDATE4:</p> <p>for the sake of completeness, <a href="https://gofile.io/d/rZIjLa" rel="nofollow noreferrer">here are the logs of mentioned containers</a>.</p> <p>UPDATE5:</p> <p>I tried to install specific version of kubernetes components and docker, to check if there is an issue related to versioning mismatch with following commands:</p> <pre><code>sudo apt-get install docker-ce=18.06.1~ce~3-0~debian sudo apt-get install -y kubelet=1.12.2-00 kubeadm=1.12.2-00 kubectl=1.12.2-00 kubernetes-cni=0.6.0-00 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml </code></pre> <p>but nothing changed.</p> <p>i even updated file <code>/etc/bash.bashrc</code> on all nodes to clear any proxy settings just to make sure it's not about proxy:</p> <pre><code>export HTTP_PROXY= export http_proxy= export NO_PROXY=127.0.0.0/8,192.168.0.0/16,172.0.0.0/8,10.0.0.0/8 </code></pre> <p>and also added following environments to docker systemd file <code>/lib/systemd/system/docker.service</code> on all nodes:</p> <pre><code>Environment="HTTP_PROXY=" Environment="NO_PROXY=" </code></pre> <p>Then rebooted all nodes and when i logged in, still got <code>curl: (7) Failed to connect to 10.244.1.12 port 80: Connection timed out</code> </p> 
<p>UPDATE6:</p> <p>I even tried to set up the cluster on <code>CentOS</code> machines, thinking maybe there is something related to <code>Debian</code>. I also stopped and disabled <code>firewalld</code> to make sure the firewall is not causing the problem, but I got the exact same result again: <code>Failed to connect to 10.244.1.2 port 80: Connection timed out</code>.</p> <p>The only thing I'm suspicious about now is whether it's all because of <code>VirtualBox</code> and the virtual machine network configuration. The virtual machines are attached to a <code>Bridge Adapter</code> connected to my wireless network interface. </p> <p>UPDATE7:</p> <p>I went inside the created POD and figured out there is no internet connectivity inside the POD. So, I created another POD from an <code>NGINX</code> image that has commands like <code>curl</code>, <code>wget</code>, <code>ping</code> and <code>traceroute</code>, tried <code>curl https://www.google.com -I</code> and got this result: <code>curl: (6) Could not resolve host: www.google.com</code>. I checked the <code>/etc/resolv.conf</code> file and found that the DNS server address inside the POD is <code>10.96.0.10</code>. I changed the DNS to <code>8.8.8.8</code>, but <code>curl https://www.google.com -I</code> still results in <code>curl: (6) Could not resolve host: www.google.com</code>. I tried to ping <code>8.8.8.8</code> and the result is <code>56 packets transmitted, 0 received, 100% packet loss, time 365ms</code>. For the last step I tried <code>traceroute 8.8.8.8</code> and got the following result:</p> <pre><code> 1 10.244.1.1 (10.244.1.1) 0.116 ms 0.056 ms 0.052 ms 2 * * * 3 * * * 4 * * * 5 * * * 6 * * * 7 * * * 8 * * * 9 * * * 10 * * * 11 * * * 12 * * * 13 * * * 14 * * * 15 * * * 16 * * * 17 * * * 18 * * * 19 * * * 20 * * * 21 * * * 22 * * * 23 * * * 24 * * * 25 * * * 26 * * * 27 * * * 28 * * * 29 * * * 30 * * * </code></pre> <p>I don't know whether the fact that there is no internet connectivity in the POD has anything to do with the problem that I can't connect to the POD within the cluster from nodes other than the one the POD is deployed on.</p>
chubock
<p>Debian Buster uses <code>nftables</code> as the <code>iptables</code> backend, which is not compatible with the Kubernetes network setup. So you have to set it to use iptables-legacy instead of nftables with the following commands: </p> <pre><code>sudo update-alternatives --set iptables /usr/sbin/iptables-legacy sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy </code></pre>
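<p>To verify which backend is active, you can check the selected alternative and the iptables version string (the exact wording may vary between versions):</p> <pre><code># shows which alternative is currently selected
sudo update-alternatives --display iptables

# iptables 1.8+ prints the backend in parentheses: (legacy) or (nf_tables)
iptables --version
</code></pre> <p>After switching, it is usually a good idea to reboot the nodes (or at least flush the old rules and restart kubelet and kube-proxy) so the rules are rebuilt in the legacy tables.</p>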
acid_fuji
<p>I am trying to monitor filesystem usage for pods in k8s. I am using Kubernetes (microk8s) and hostpath persistent volumes. I am running Kafka along with a number of producers to see what happens when I go past the PVC size limit among other things. I have tried getting information from the API server but it is not reported there. Since it is only using hostpath, that kind of makes sense. It is not a dynamic volume system. Doing df on the host just shows all of the volumes with the same utilization as the root filesystem. This is the same result using exec -- df within the container. There are no pvcRefs on the containers using api server, which kind of explains why the dashboard doesn't have this information. Is this a dead end or does someone have a way around this limitation? I am now wondering if the PVC limits will be enforced.</p>
user2065750
<p>Since with <code>hostPath</code> your data is stored directly on the worker, you won't be able to monitor the usage. Using <code>hostPath</code> has many drawbacks and while it's good for testing, it should not be used for a production system. Keeping the data directly on the node is dangerous and in the case of node failure/replacement you will lose it. Other disadvantages are:</p> <ul> <li><p>Pods created from the same pod template may behave differently on different nodes because of different hostPath file/dir contents on those nodes</p> </li> <li><p>Files or directories created with HostPath on the host are only writable by root. This means you either need to run your container process as root or modify the file permissions on the host to be writable by a non-root user, which may lead to security issues</p> </li> <li><p><code>hostPath</code> volumes should not be used with Statefulsets.</p> </li> </ul> <p>As you already found out, it would be a good idea to move on from <code>hostPath</code> towards something else.</p>
acid_fuji
<p>I have .netcore 2.2 API Pod on my K8S Farm i implement health check api to let k8s liveness check. Here is My settings.</p> <pre><code> livenessProbe: httpGet: path: /api/Authentication/CheckLiveness port: 80 scheme: HTTP initialDelaySeconds: 100 timeoutSeconds: 60 periodSeconds: 30 successThreshold: 1 failureThreshold: 1 readinessProbe: httpGet: path: /api/Authentication/CheckReadiness port: 80 scheme: HTTP initialDelaySeconds: 50 timeoutSeconds: 30 periodSeconds: 15 successThreshold: 1 failureThreshold: 1 </code></pre> <p>The Problem is other worker node seem working find without problem except pod on worker node 1.</p> <p><a href="https://i.stack.imgur.com/ZHeYK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZHeYK.png" alt="Error Pod on WorkerNode1"></a></p> <p>Here is the error event.</p> <pre><code>Liveness probe failed: Get http://10.244.3.218:80/api/Authentication/CheckLiveness:net/http: request canceled (Client.Timeout exceeded while awaiting headers) Readiness probe failed: Get http://10.244.3.218:80/api/Authentication/CheckReadiness: net/http: request canceled (Client.Timeout exceeded while awaiting headers) </code></pre> <p>I can curl to my pod by exec it. But the pod keep restarting.</p> <p>I think it a node Problem. This K8s Farm run on Vmware with centos7 os. I tried this config on dev environment with the same infrastructure. and it all green without problem. Need any suggestion to debug or solve this problem. Thank you.</p> <p>@mWatney Edited</p> <p>And here your result</p> <pre><code>Name: authenservice-dpm-7d468bfcc4-px44m Namespace: pbtsapi Priority: 0 Node: ptseclsbtwkn01/192.168.10.136 Start Time: Fri, 12 Jun 2020 11:23:07 +0700 Labels: app=authenservice-api pod-template-hash=7d468bfcc4 Annotations: &lt;none&gt; Status: Running IP: 10.244.3.218 Controlled By: ReplicaSet/authenservice-dpm-7d468bfcc4 Containers: authenservice: Container ID: docker://1b1acffeae54421201d1bbc54b8020a75db660e1ae1a0f0d18a56930bbca0d12 Image: 10.99.21.89:5000/authenservice:v1.0.4 Image ID: docker-pullable://10.99.21.89:5000/authenservice@sha256:b9244059195edff3cc3592d3e19a94ac00e481e9936413a4315a3cf41b0023ea Port: 80/TCP Host Port: 0/TCP State: Running Started: Fri, 12 Jun 2020 14:46:22 +0700 Last State: Terminated Reason: Completed Exit Code: 0 Started: Fri, 12 Jun 2020 14:37:52 +0700 Finished: Fri, 12 Jun 2020 14:46:21 +0700 Ready: False Restart Count: 28 Limits: cpu: 500m memory: 400Mi Requests: cpu: 250m memory: 200Mi Liveness: http-get http://:80/api/Authentication/CheckLiveness delay=300s timeout=60s period=30s #success=1 #failure=1 Readiness: http-get http://:80/api/Authentication/CheckReadiness delay=300s timeout=60s period=30s #success=1 #failure=1 Environment: MSSQL_PORT: 1433 Mounts: /app/appsettings.json from authen-v (rw,path="appsettings.json") /etc/localtime from tz-config (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-h8x2b (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: authen-v: Type: ConfigMap (a volume populated by a ConfigMap) Name: authenservice-config Optional: false tz-config: Type: HostPath (bare host directory volume) Path: /usr/share/zoneinfo/Asia/Bangkok HostPathType: default-token-h8x2b: Type: Secret (a volume populated by a Secret) SecretName: default-token-h8x2b Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age 
From Message ---- ------ ---- ---- ------- Normal Killing 22m (x26 over 3h23m) kubelet, ptseclsbtwkn01 Container authenservice failed liveness probe, will be restarted Warning Unhealthy 6m42s (x28 over 3h23m) kubelet, ptseclsbtwkn01 Liveness probe failed: Get http://10.244.3.218:80/api/Authentication/CheckLiveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers) Warning Unhealthy 10s (x22 over 3h8m) kubelet, ptseclsbtwkn01 Readiness probe failed: Get http://10.244.3.218:80/api/Authentication/CheckReadiness: net/http: request canceled (Client.Timeout exceeded while awaiting headers) </code></pre>
Peeradis Sa-nguanTrakul
<p>Virtual Machines are never identical. You can set up Virtual Machines with identical specs; by doing this you make sure they will be similar, but it doesn't mean they will perform exactly the same. </p> <p>The same thing happens with servers. You can buy two identical physical servers and they will perform similarly but never identically. </p> <p>The logs clearly say that by the time the application was tested, it wasn't ready yet. </p> <p>As <a href="https://stackoverflow.com/users/251311/zerkms" title="217,534 reputation">zerkms</a> said, you can't have identical readiness and liveness probes. You definitely have to review this. You also have to review your readiness probe <code>initialDelaySeconds</code>. Try increasing it to give more time for your application to start. </p> <p>To troubleshoot, <a href="https://stackoverflow.com/users/5564578/suren" title="3,990 reputation">suren</a> suggested increasing your timeout; I would increase the <code>initialDelaySeconds</code> instead. </p>
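<p>As a rough sketch of what that could look like for your container (the numbers are only a starting point you would have to tune for the slower node, and ideally the two endpoints should check different things):</p> <pre><code>livenessProbe:
  httpGet:
    path: /api/Authentication/CheckLiveness
    port: 80
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 10
  failureThreshold: 3      # don't restart on a single slow response
readinessProbe:
  httpGet:
    path: /api/Authentication/CheckReadiness
    port: 80
  initialDelaySeconds: 60
  periodSeconds: 15
  timeoutSeconds: 10
  failureThreshold: 3
</code></pre>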
Mark Watney
<p>How do you set up the environment variable for the config? Could someone please explain in detail? I am using Windows Home and trying to convert a <code>docker-compose.yml</code> to <code>k8s</code>, but when I do <code>kompose</code> up it says the error below. I have installed <code>kubectl</code> and <code>minikube</code> and don't know how to set the config file so this API can be started.</p> <p><code>Error while deploying application: Get http://localhost:8080/api: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.</code></p> <p>Thanks in advance</p>
sks123245
<p>Kompose always refers to <a href="http://localhost:8080/" rel="nofollow noreferrer">http://localhost:8080/</a> by default. The problem is that as you are using minikube, your API server is at a different address. </p> <p>To check the address of your API, run any kubectl command and get your API server address:</p> <pre><code>$ kubectl get nodes -v6 </code></pre> <p>Output:</p> <pre><code>I0518 07:27:05.109476 3656 loader.go:375] Config loaded from file: /home/christofoletti/.kube/config I0518 07:27:05.138651 3656 round_trippers.go:443] GET https://192.168.39.6:8443/api/v1/nodes?limit=500 200 OK in 19 milliseconds NAME STATUS ROLES AGE VERSION cluster2 Ready master 3d19h v1.18.2 </code></pre> <p>As you can see, we have <code>GET https://192.168.39.6:8443/api/v1/nodes?limit=500 200 OK</code>.</p> <p>So, my API server address is <code>https://192.168.39.6:8443/</code>.</p> <p>Now you can run <code>$ kompose up --server https://192.168.39.6:8443/</code> and Kompose will know where to send the request. </p>
Mark Watney
<p>If I set up MariaDB form the official image within a Docker Compose configuration, I can access it by its host name - for example if in a bash shell within the MariaDB container:</p> <pre><code># host db db has address 172.21.0.2 # curl telnet://db:3306 Warning: Binary output can mess up your terminal. Use "--output -" to tell Warning: curl to output it to your terminal anyway, or consider "--output Warning: &lt;FILE&gt;" to save to a file. </code></pre> <ul> <li>no connection refused issue here</li> </ul> <p>But if have MariaDB deployed from the official image within a Kubernetes cluster (tried both MicroK8s and GKE), I can connect to it via <code>localhost</code> but not by its host name:</p> <pre><code># host db db.my-namspace.svc.cluster.local has address 10.152.183.124 # curl telnet://db:3306 curl: (7) Failed to connect to db port 3306: Connection refused # curl telnet://localhost:3306 Warning: Binary output can mess up your terminal. Use "--output -" to tell Warning: curl to output it to your terminal anyway, or consider "--output Warning: &lt;FILE&gt;" to save to a file. </code></pre> <ul> <li>connection is refused for the service host name, but localhost responds</li> </ul> <p>I've tried to replace the included <code>my.cnf</code> with a simplified version like:</p> <pre><code>[mysqld] skip-grant-tables skip-networking=0 #### Unix socket settings (making localhost work) user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock #### TCP Socket settings (making all remote logins work) port = 3306 bind-address = * </code></pre> <ul> <li>with no luck</li> </ul> <p>The MariaDB Kubernetes deployment is like:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: db spec: replicas: 1 strategy: type: Recreate selector: matchLabels: name: db template: metadata: labels: name: db spec: containers: - env: - name: MYSQL_PASSWORD value: template - name: MYSQL_ROOT_PASSWORD value: root - name: MYSQL_USER value: template image: mariadb:10.4 name: db ports: - containerPort: 3306 resources: {} volumeMounts: - mountPath: /var/lib/mysql name: dbdata restartPolicy: Always volumes: - name: dbdata persistentVolumeClaim: claimName: dbdata status: {} </code></pre> <p>and the corresponding Persistent Volume Claim:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: labels: io.kompose.service: dbdata name: dbdata spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Mi status: {} </code></pre> <p>It baffles me that the same configuration works with Docker Compose but not within a Kubernetes cluster.</p> <p>Any ideas what may be going on?</p> <p><em>Update 2020-03-18</em> I forgot to include the service declaration for the database and add it here:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: db name: db spec: ports: - name: "3306" port: 3306 targetPort: 3306 selector: app: db name: db type: ClusterIP status: loadBalancer: {} </code></pre> <p>...am including both <code>app</code> and <code>name</code> for the <code>spec.selector</code> - am used to having only <code>name</code> but @Al-waleed Shihadeh's example includes <code>app</code> so I'll include that also, just in case - but without success.</p> <p>Here are outputs from a couple of kubectl listing commands:</p> <pre><code>$ sudo microk8s.kubectl get svc db -n my-namespace NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE db ClusterIP 10.152.183.246 &lt;none&gt; 3306/TCP 35m </code></pre> <pre><code>$ sudo microk8s.kubectl get pods -owide -n my-namespace NAME 
READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES db-77cbcf87b6-l44lm 1/1 Running 0 34m 10.1.48.118 microk8s-vm &lt;none&gt; &lt;none&gt; </code></pre> <p><strong><em>Solution</em></strong> Comparing the service declaration posted by KoopaKiller, which proved to work, I finally noticed that setting the <code>protocol</code> attribute to "TCP" in the ports declaration was missing - this part:</p> <pre><code>spec: ports: - protocol: TCP ... </code></pre>
Bjorn Thor Jonsson
<p>Since you are using a Kubernetes Deployment, the name of your pods will be generated dynamically based on the name you gave in the spec file; in your example, the pods will be created with a name like <code>db-xxxxxxxxxx-xxxxx</code>. </p> <p>In order to have a 'fixed' hostname, you need to create a service to reach your pods, for example:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: db spec: selector: name: db ports: - protocol: TCP port: 3306 targetPort: 3306 type: ClusterIP </code></pre> <p>And to check if it was successfully deployed:</p> <pre><code>$ kubectl get svc db NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE db ClusterIP 10.96.218.18 &lt;none&gt; 3306/TCP 89s </code></pre> <p>The full name of your service will be <code>&lt;name&gt;.&lt;namespace&gt;.svc.cluster.local</code>; in this case, using the <code>default</code> namespace, it will be <code>db.default.svc.cluster.local</code>, pointing to IP <code>10.96.218.18</code> as shown in the example above.</p> <p>To reach your service you need to configure your /etc/hosts with this information:</p> <pre><code>echo -ne "10.96.218.18\tdb.default.svc.cluster.local db db.default" &gt;&gt; /etc/hosts </code></pre> <p>After that you will be able to reach your service by DNS:</p> <pre><code>$ dig +short db 10.96.218.18 $ mysql -h db -uroot -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 10 Server version: 5.5.5-10.4.12-MariaDB-1:10.4.12+maria~bionic mariadb.org binary distribution Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql&gt; </code></pre> <p>Just so you know, you could also use a HELM template to set up a MariaDB with replication. See this <a href="https://engineering.bitnami.com/articles/deploy-a-production-ready-mariadb-cluster-on-kubernetes-with-bitnami-and-helm.html" rel="nofollow noreferrer">article</a>.</p> <p><strong>References:</strong></p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a></p>
Mr.KoopaKiller
<p>I am trying to deploy my php symfony application on Azure Kubernetes Services. I have the following <code>deployment.yaml</code> for php pod</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myphp-deployment labels: app: php spec: replicas: 1 selector: matchLabels: app: php template: metadata: labels: app: php spec: containers: - name: php image: myimage ports: - containerPort: 9000 </code></pre> <p>Heres's the php service yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: php spec: selector: app: php ports: - protocol: TCP port: 9000 targetPort: 9000 </code></pre> <p>Here's my nginx deployment yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mynginx-deployment labels: app: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: myimage ports: - containerPort: 80 </code></pre> <p>And my nginx-service yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx spec: selector: app: nginx ports: - name: http protocol: TCP port: 80 targetPort: 80 type: LoadBalancer </code></pre> <p>With the above setup in place, everything is working fine. However, if I increase the replica set to two i.e <code>replicas:2</code> for php pod, the app is <strong>not consistently maintaining the states</strong>.</p> <p>I found that the requests from my nginx pod is forwarded to either of the two replicas(of php), and it is logging me out. Sometimes, it logs me in but the application is not consitent in terms of behaviour.</p> <p>How can I control to which replica set the requests should be forwarded to? Or is there a way to dynamically provision another replica if the existing pod fails?</p> <p><strong>P.S I am very new to Kubernetes</strong></p>
pnkjkmr469
<p><strong>How can I control to which replica set the requests should be forwarded to?</strong></p> <p>What you are looking for is session affinity, or sticky sessions. This can be achieved with ingress. Kubernetes ingress controllers, such as the <a href="https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/" rel="nofollow noreferrer">Nginx ingress</a> controller, already have this requirement considered and implemented. The ingress controller replies to the first request with a Set-Cookie header. The value of the cookie will map to a specific pod replica. When subsequent requests come back, the client browser will attach the cookie and the ingress controller is therefore able to route the traffic to the same pod replica.</p> <p>Kubernetes has mechanisms for stable pod names, like <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, or for discovering the set of pods with a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Headless Service</a>, but those solutions, while great, won't be as good as sticky sessions in your use case. Still, if you're totally new to Kubernetes it's worth checking them out.</p> <p><strong>Or is there a way to dynamically provision another replica if the existing pod fails?</strong></p> <p>If your application crashes, Kubernetes will try to restart it. This is controlled by the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">container restart policy</a>:</p> <blockquote> <p>The <code>spec</code> of a Pod has a <code>restartPolicy</code> field with possible values Always, OnFailure, and Never. The default value is Always.</p> <p>The <code>restartPolicy</code> applies to all containers in the Pod. <code>restartPolicy</code> only refers to restarts of the containers by the kubelet on the same node. After containers in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s, 40s, …), that is capped at five minutes. Once a container has executed for 10 minutes without any problems, the kubelet resets the restart backoff timer for that container.</p> </blockquote> <p>In some situations you may find that your application is failing or not working properly but still is not crashing/restarting. In this case you can use the Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">livenessProbe</a>. Kubernetes can check if a container is still alive through liveness probes. You can specify a liveness probe for each container in the pod’s specification. Kubernetes will periodically execute the probe and restart the container if the probe fails.</p>
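<p>As a minimal sketch based on the ingress-nginx cookie affinity docs linked above (the host name is a placeholder and the backend points at your existing <code>nginx</code> Service), enabling sticky sessions looks roughly like this:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: php-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: myapp.example.com        # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
</code></pre> <p>Keep in mind that affinity only helps at the layer that actually does the load balancing, so the pods whose state matters (the PHP pods in your case) need to sit behind that layer, or you need to store the session state somewhere shared.</p>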
acid_fuji
<p>My <a href="https://sdk.operatorframework.io/docs/building-operators/golang/" rel="nofollow noreferrer">Go-based</a> custom resource operator needs some cleanup operations before it is deleted. It has to delete a specific znode from ZooKeeper.</p> <p>These operations must not be executed when the resource is merely regenerated; they have to be executed only on the user's deletion command. Thus, I can't use an ordinary preStop hook.</p> <p>Can I execute a preStop hook only before deletion? Or is there any other way for the operator to execute cleanup logic before the resource is deleted?</p>
Park Beomsu
<p><strong>Can I execute a prestop hook only before deletion?</strong></p> <p>This is the whole purpose of the <code>preStop</code> hook. A pre-stop hook is executed immediately before the container is terminated. Once there is a termination signal from the API, the kubelet runs the pre-stop hook and afterwards sends the SIGTERM signal to the process.</p> <p>It is designed to perform arbitrary operations before shutdown without having to implement those operations in the application itself. This is especially useful if you run some 3rd party app whose code you can't modify.</p> <p>Note that the call to terminate the pod and invoke the hook can be due to an API request, failed probes, resource contention and other reasons.</p> <p>For more reading please visit: <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer">Container Lifecycle Hooks</a></p>
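<p>As a minimal sketch of the mechanism (the pod name, image and cleanup command are placeholders), a preStop hook is declared on the container in the pod spec:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: operator-pod                      # placeholder name
spec:
  terminationGracePeriodSeconds: 60       # give the hook time to finish before SIGKILL
  containers:
  - name: operator
    image: my-operator:latest             # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/cleanup.sh"]   # placeholder cleanup command
</code></pre> <p>Note that the hook runs whenever the container is terminated, so any "only on real deletion" logic still has to live in your cleanup script or in the operator itself, for example by checking the state of the custom resource before removing the znode.</p>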
acid_fuji
<p>We are hitting an error "Request entity too large: limit is 3145728" when trying to update a custom resource object. It would be very helpful if anyone knows how to change the size limit on the k8s side. Are there any parameters exposed to the user?</p>
yuany
<p>Source for this answer: <a href="https://stackoverflow.com/a/60492986/12153576">https://stackoverflow.com/a/60492986/12153576</a></p> <ul> <li>The <code>"error": "Request entity too large: limit is 3145728"</code> is probably the default response from the kubernetes handler for objects larger than 3MB, as you can see <a href="https://github.com/kubernetes/kubernetes/blob/db1990f48b92d603f469c1c89e2ad36da1b74846/test/integration/master/synthetic_master_test.go#L315" rel="nofollow noreferrer">here at L305</a> of the source code:</li> </ul> <pre><code>expectedMsgFor1MB := `etcdserver: request is too large` expectedMsgFor2MB := `rpc error: code = ResourceExhausted desc = trying to send message larger than max` expectedMsgFor3MB := `Request entity too large: limit is 3145728` expectedMsgForLargeAnnotation := `metadata.annotations: Too long: must have at most 262144 bytes` </code></pre> <ul> <li><p><a href="https://github.com/etcd-io/etcd/issues/9925" rel="nofollow noreferrer">ETCD</a> does indeed have a 1.5MB limit for processing a file, and you will find in the <a href="https://github.com/etcd-io/etcd/blob/master/Documentation/dev-guide/limit.md#request-size-limit" rel="nofollow noreferrer">ETCD Documentation</a> a suggestion to try the <code>--max-request-bytes</code> flag, but it would have no effect on a GKE cluster because you don't have such permission on the master node.</p></li> <li><p>But even if you did, it would not be ideal because usually this error means that you are <a href="https://github.com/kubeflow/pipelines/issues/3134#issuecomment-591278230" rel="nofollow noreferrer">consuming the objects</a> instead of referencing them, which would degrade your performance.</p></li> </ul> <p>I highly recommend that you consider these options instead:</p> <ul> <li><strong>Determine whether your object includes references that aren't used;</strong></li> <li><strong>Break up your resource;</strong></li> <li><strong>Consider a volume mount instead;</strong></li> </ul> <hr> <p>There's a request for a <a href="https://github.com/kubernetes/kubernetes/issues/88709" rel="nofollow noreferrer">new API Resource: File (or BinaryData)</a> that could apply to your case. It's very fresh but it's worth keeping an eye on.</p>
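<p>As a quick way to see how close an object is to the limit before applying it (a rough check only, since the YAML/JSON you measure locally is not byte-for-byte what the API server stores), you can measure the serialized manifest:</p> <pre><code># approximate size (in bytes) of the manifest you are about to apply
kubectl create --dry-run=client -o json -f my-cr.yaml | wc -c

# size of the object as currently stored by the API server
kubectl get &lt;kind&gt; &lt;name&gt; -o json | wc -c
</code></pre>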
Mark Watney
<p>I was trying to use Kubernetes to set up a service locally. I am using ingress-nginx for routing. I am using Ubuntu 18.04. This is my ingress.yaml file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: 'true' spec: rules: - host: ecommerce.dev http: paths: - path: /api/users/?(.*) backend: serviceName: auth-srv servicePort: 3000 </code></pre> <p>Also, I mapped this in my <code>/etc/hosts/</code> file:</p> <pre><code>127.0.0.1 localhost 127.0.1.1 TALHA # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 127.0.0.1 ecommerce.dev </code></pre> <p>When I try to reach 'ecommerce.dev' from my browser, I am unable to access it as it says 'Site can not be reached'. Can someone please help me about it?</p>
Talha Chafekar
<p>I saw you mentioned you are using minikube with the nginx ingress addon.</p> <p>This information helps a lot. Try not to skip this kind of information in the future. I was assuming that by saying <em>&quot;I was trying to use Kubernetes to set up a service locally&quot;</em> you meant that you were running bare-metal k8s.</p> <p>Minikube is most probably running in a VM, and this is why you cannot access it.</p> <p>Running <code>minikube ip</code> gives you the IP address of the VM:</p> <pre><code>$ minikube ip 192.168.39.67 </code></pre> <p>Your IP may be different, so don't use my IP; check what IP you got assigned.</p> <p>Now that you have the IP of the minikube VM, use it in <code>/etc/hosts</code>. In my case it looks like the following:</p> <pre><code>192.168.39.67 ecommerce.dev </code></pre>
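<p>Once <code>/etc/hosts</code> points at the minikube IP, a quick check from the host could be (the path comes from your ingress rule):</p> <pre><code>$ curl -i http://ecommerce.dev/api/users
</code></pre> <p>If that still fails, <code>kubectl get ingress ingress-service</code> and the ingress controller logs are the next places to look.</p>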
Matt
<p>When implementing a canary deployment using Istio, I want to create a secondary (canary) deployment and svc while modifying the virtual service's destination rule traffic. However, if I change the destination rule and create the canary svc &amp; deployment at the same time, traffic will go to the non-ready canary deployment. In K8s, there is a readiness probe, but it's limited to the deployment resource. Is there anything in Istio that can send traffic to the svc only when the deployment itself is ready? If it's not possible, I might have to add one step that waits until the deployment is ready.</p>
Piljae Chae
<p>In the case of health checks, Istio does not support active health checks on its own but instead relies on Kubernetes liveness and readiness probes. However, remember that with mutual TLS enabled the https request probe won't work and you'll need a special annotation that makes the sidecar agent pick up the request.</p> <p>You can read more about it in the <a href="https://istio.io/latest/docs/ops/configuration/mesh/app-health-check/" rel="nofollow noreferrer">application health check section</a>.</p> <p>The principle of a canary deployment is to roll out new code/features to a subset of users as an initial test. This means that the old deployment still covers most of your traffic, with the new deployment running by its side. <a href="https://istio.io/v1.1/blog/2017/0.1-canary/" rel="nofollow noreferrer">This blog post</a> describes canary deployments with Istio and how traffic can be weighted with a virtual service. Readiness probes will then handle pod start failures, excluding any endpoint that is not ready from the chain.</p> <p>Having any traffic-based probe would be less beneficial because you would need at least one failed request to trigger the endpoint's removal from the load balancing.</p>
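<p>For reference, a minimal sketch of the kind of weighted routing described in that post (the host, subset names and weights are placeholders); the idea is that you only shift the weight towards the canary after its pods report Ready:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app
        subset: stable
      weight: 90
    - destination:
        host: my-app
        subset: canary
      weight: 10
</code></pre>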
acid_fuji
<p>In my case, I have a branch-wise deployment in EKS 1.14 and I want to handle this with "regex" &amp; Nginx ingress.</p> <p>Scenario:- Let's say I have branch B1 with service_A (apache service), similarly branch B2 with service_A (apache service), and so on, and I want to access the service via a URL like:- apache-{branch_name}.example.com Note:- Branch B1/B2 are nothing but unique namespaces where the same kind of service is running.</p> <p>I need a single ingress from which I can control all the different branch URLs.</p> <p>My example file:-</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: regex-ingress annotations: kubernetes.io/ingress.class: "nginx" cert-manager.io/cluster-issuer: "letsencrypt-prod" spec: tls: - hosts: - '*.k8s.example.com' secretName: prod-crt rules: - host: {service_A-b1}.k8s.acko.in http: paths: - backend: serviceName: {service_A-b1} servicePort: 80 - host: {service_A-b2}.k8s.acko.in http: paths: - backend: serviceName: {service_A-b2} servicePort: 80 </code></pre>
me25
<p>Nginx ingress doesn't work this way; it is not possible to use a regex in <code>serviceName</code> nor in <code>host</code>. </p> <p>From the NGINX <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>Regular expressions and wild cards are not supported in the <code>spec.rules.host</code> field. Full hostnames must be used.</p> </blockquote> <p>You can use regex <strong>only</strong> in the <code>path</code> field:</p> <blockquote> <p>The ingress controller supports <strong>case insensitive</strong> regular expressions in the <code>spec.rules.http.paths.path</code> field. This can be enabled by setting the <code>nginx.ingress.kubernetes.io/use-regex</code> annotation to <code>true</code> (the default is false).</p> </blockquote> <p>If you need to set your <code>serviceName</code> and <code>host</code> dynamically, I strongly recommend using some kind of automation (Jenkins, a bash script, etc.) or templating with <a href="https://helm.sh/" rel="nofollow noreferrer">HELM</a>, which will fill in these values at deployment time.</p> <p><strong><em>References:</em></strong></p> <p><a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/</a></p>
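<p>To illustrate the HELM option, here is a minimal sketch of a templated Ingress, assuming a hypothetical <code>branches</code> list in <code>values.yaml</code> (all names here are assumptions, not part of your manifests):</p> <pre><code># templates/ingress.yaml -- hypothetical chart layout
# values.yaml is assumed to contain, e.g.:  branches: [b1, b2]
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: regex-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  {{- range .Values.branches }}
  # one host + backend block is rendered per branch at deployment time
  - host: apache-{{ . }}.example.com
    http:
      paths:
      - backend:
          serviceName: service-a-{{ . }}
          servicePort: 80
  {{- end }}
</code></pre> <p>Re-running <code>helm upgrade --install</code> with an updated <code>branches</code> list then regenerates all host rules from this single template, so you keep one ingress definition without needing regex in <code>host</code> or <code>serviceName</code>.</p>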
Mr.KoopaKiller
<p>I have installed Docker Desktop in my windows 10 machine. And I am running the Linux Containers. I have enabled kubernetes. I am able to run hello-world docker image. </p> <p>Now I need to setup a cluster environment in my machine, with one master node and 2-3 worker nodes. As I can see master node is already setup, I need to setup worker nodes with it and deploy my microservices out there.</p> <p>Please let me know the process on how to do the setup. I have checked on internet, but I could not find a very clear cut steps to perform the same.</p> <p>Below is my current configuration :</p> <pre><code>PS C:\WINDOWS\system32&gt; kubectl get ns NAME STATUS AGE default Active 16m docker Active 15m kube-node-lease Active 16m kube-public Active 16m kube-system Active 16m PS C:\WINDOWS\system32&gt; kubectl get nodes NAME STATUS ROLES AGE VERSION docker-desktop Ready master 17m v1.16.6-beta.0 PS C:\WINDOWS\system32&gt; kubectl get pods No resources found in default namespace. PS C:\WINDOWS\system32&gt; kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP x.x.x.x &lt;none&gt; 443/TCP 21m </code></pre> <p>And below are my version information :</p> <pre><code>PS C:\WINDOWS\system32&gt; docker version Client: Docker Engine - Community Version: 19.03.8 API version: 1.40 Go version: go1.12.17 Git commit: afacb8b Built: Wed Mar 11 01:23:10 2020 OS/Arch: windows/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 19.03.8 API version: 1.40 (minimum version 1.12) Go version: go1.12.17 Git commit: afacb8b Built: Wed Mar 11 01:29:16 2020 OS/Arch: linux/amd64 Experimental: false containerd: Version: v1.2.13 GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429 runc: Version: 1.0.0-rc10 GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd docker-init: Version: 0.18.0 GitCommit: fec3683 Kubernetes: Version: v1.16.6-beta.0 StackAPI: Unknown PS C:\WINDOWS\system32&gt; kubectl version Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:18:29Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
Som
<p>Multi-node clusters <a href="https://github.com/docker/for-mac/issues/3342" rel="nofollow noreferrer">aren't supported</a> by Docker Desktop, and it seems they don't have any plans to support them. This solution is meant for simple and small workloads. </p> <p>I understand that you may want to simulate more complex workloads, and for that I suggest you take a look at <a href="https://github.com/kubernetes-sigs/kind" rel="nofollow noreferrer">Kind</a>. </p> <p><a href="https://www.bogotobogo.com/DevOps/Docker/Docker-Kubernetes-Multi-Node-Local-Clusters-kind.php" rel="nofollow noreferrer">This guide</a> can lead you through the process. </p>
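<p>As a rough illustration of what Kind gives you, a minimal sketch of a multi-node cluster config (the file name and node count are just assumptions):</p> <pre><code># kind-config.yaml -- one control-plane node and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
</code></pre> <p>Passing this file to <code>kind create cluster --config kind-config.yaml</code> should give you a local 3-node cluster to deploy your microservices on.</p>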
Mark Watney
<p>I am getting this result for flannel service on my slave node. Flannel is running fine on master node.</p> <pre><code>kube-system kube-flannel-ds-amd64-xbtrf 0/1 CrashLoopBackOff 4 3m5s </code></pre> <p>Kube-proxy running on the slave is fine but not the flannel pod.</p> <p>I have a master and a slave node only. At first its say <code>running</code>, then it goes to <code>error</code> and finally, <code>crashloopbackoff</code>.</p> <pre><code>godfrey@master:~$ kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kube-system kube-flannel-ds-amd64-jszwx 0/1 CrashLoopBackOff 4 2m17s 192.168.152.104 slave3 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-hxs6m 1/1 Running 0 18m 192.168.152.104 slave3 &lt;none&gt; &lt;none&gt; </code></pre> <p>I am also getting this from the logs:</p> <pre><code>I0515 05:14:53.975822 1 main.go:390] Found network config - Backend type: vxlan I0515 05:14:53.975856 1 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false E0515 05:14:53.976072 1 main.go:291] Error registering network: failed to acquire lease: node "slave3" pod cidr not assigned I0515 05:14:53.976154 1 main.go:370] Stopping shutdownHandler... </code></pre> <p>I could not find a solution so far. Help appreciated.</p>
Godfrey Tan
<p>As the solution came from the OP, I'm posting the answer as community wiki.</p> <p>As reported by the OP in the comments, he didn't pass the podCIDR during kubeadm init. </p> <p>The following command was used to see that the flannel pod was in the "CrashLoopBackOff" state: </p> <pre><code>sudo kubectl get pods --all-namespaces -o wide </code></pre> <p>The logs confirm that podCIDR was not passed to the flannel pod <code>kube-flannel-ds-amd64-ksmmh</code> that was in the <code>CrashLoopBackOff</code> state:</p> <pre><code>$ kubectl logs kube-flannel-ds-amd64-ksmmh </code></pre> <p><code>kubeadm init --pod-network-cidr=172.168.10.0/24</code> didn't pass the podCIDR to the slave nodes as expected. </p> <p>Hence, to solve the problem, the <code>kubectl patch node slave1 -p '{"spec":{"podCIDR":"172.168.10.0/24"}}'</code> command had to be used to pass the podCIDR to each slave node. </p> <p>Please see this link: <a href="http://coreos.com/flannel/docs/latest/troubleshooting.html" rel="nofollow noreferrer">coreos.com/flannel/docs/latest/troubleshooting.html</a>, section "Kubernetes Specific".</p>
Mark Watney
<p>I have deployed an application on AWS using EKS. I roughly need 20-25 loadbalancers in my application.</p> <p>Now, AWS offers 20 Classic load balancers and 50 Application load balancers in my account.</p> <p>I use helm chart for creating these load balancers using service =&gt; type =&gt; LoadBalancer, and these loadbalancers are considered Classic load balancers.</p> <p>Is there a way to use ALB in place of CLB (either using AWS settings OR passing an option in the helm chart) ?</p> <p>Thanks in advance !</p>
Keval Bhogayata
<p>According to the <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">AWS ELB documentation</a>, you can use the following annotation on your ingress object:</p> <pre><code>annotations: kubernetes.io/ingress.class: alb </code></pre> <p>From the AWS docs:</p> <blockquote> <p>The AWS Load Balancer Controller creates ALBs and the necessary supporting AWS resources whenever a Kubernetes Ingress resource is created on the cluster with the kubernetes.io/ingress.class: alb annotation. The Ingress resource configures the ALB to route HTTP or HTTPS traffic to different pods within the cluster. To ensure that your Ingress objects use the AWS Load Balancer Controller, add the following annotation to your Kubernetes Ingress specification. For more information, see <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/ingress/spec/" rel="nofollow noreferrer">Ingress specification on GitHub</a>.</p> </blockquote> <p>What's good about this solution is that an ALB can also be shared across multiple Ingresses, so you wouldn't need so many separate LBs.</p> <p>EDIT: As mentioned by Bastian, you need to have the <a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="nofollow noreferrer">AWS Load Balancer Controller</a> deployed to your cluster in order for this to work.</p>
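<p>For context, a minimal sketch of such an Ingress (the host, service name and port are placeholders, not taken from your chart; the two <code>alb.ingress.kubernetes.io/*</code> annotations shown are the usual scheme and target-type settings):</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-alb
  annotations:
    # tells the AWS Load Balancer Controller to provision an ALB for this Ingress
    kubernetes.io/ingress.class: alb
    # make the ALB public and register pod IPs directly as targets
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-app
          servicePort: 80
</code></pre> <p>With this approach the Services behind the Ingresses can stay as <code>ClusterIP</code>/<code>NodePort</code> in your Helm chart instead of <code>type: LoadBalancer</code>, so you are no longer bound by the Classic Load Balancer quota.</p>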
Matt
<p>I have <strong>Wordpress</strong> run as an <strong>app container</strong> in Google Cloud Kubernetes Cluster. I've ruined my site a bit by wrong modifications of theme's <strong>functions.php</strong> file. So now i would like to remove my bad code to make site working. Hoever I can not find where Wordpress is located. As all I need is to remove couple lines of PHP code I thought it might be easier to do it right from the SSH command line without playing with SFTP and keys (sorry I'm newby in WordPress/Sites in general) This how it looks like in Google Cloud Console</p> <p><strong>Wordpress install</strong></p> <p><img src="https://i.stack.imgur.com/S45GC.png" alt="Wordpress App" /></p> <p><strong>Google Cloud Console: my cluster</strong></p> <p><img src="https://i.stack.imgur.com/dfH3M.png" alt="Google Cloud Console screenshot" /></p> <p>I'm connecting to cluster through SSH by pressing &quot;Connect&quot; button. And... tada! I see NO &quot;/var/www/html&quot; in &quot;var&quot; folder! &quot;.../www/html&quot; folder is not exists/visible even under root</p> <p><img src="https://i.stack.imgur.com/L0uYN.png" alt="Contents on VAR folder" /></p> <p>Can someone help me with finding WordPress install, please :)</p> <p>Here is the output for <code>$ kubectl describe pod market-engine-wordpress-0 mypod -n kalm-system</code> comand</p> <pre><code>Name: market-engine-wordpress-0 Namespace: kalm-system Priority: 0 Node: gke-cluster-1-default-pool-6c5a3d37-sx7g/10.164.0.2 Start Time: Thu, 25 Jun 2020 17:35:54 +0300 Labels: app.kubernetes.io/component=wordpress-webserver app.kubernetes.io/name=market-engine controller-revision-hash=market-engine-wordpress-b47df865b statefulset.kubernetes.io/pod-name=market-engine-wordpress-0 Annotations: &lt;none&gt; Status: Running IP: 10.36.0.17 IPs: IP: 10.36.0.17 Controlled By: StatefulSet/market-engine-wordpress Containers: wordpress: Container ID: docker://32ee6d8662ff29ce32a5c56384ba9548bdb54ebd7556de98cd9c401a742344d6 Image: gcr.io/cloud-marketplace/google/wordpress:5.3.2-20200515-193202 Image ID: docker-pullable://gcr.io/cloud-marketplace/google/wordpress@sha256:cb4515c3f331e0c6bcca5ec7b12d2f3f039fc5cdae32f0869abf19238d580575 Port: 80/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Completed Exit Code: 0 Started: Mon, 29 Jun 2020 15:37:38 +0300 Finished: Mon, 29 Jun 2020 15:40:08 +0300 Ready: False Restart Count: 774 Environment: POD_NAME: market-engine-wordpress-0 (v1:metadata.name) POD_NAMESPACE: kalm-system (v1:metadata.namespace) Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-4f6xq (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: market-engine-wordpress-pvc: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: market-engine-wordpress-pvc-market-engine-wordpress-0 ReadOnly: false apache-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: market-engine-wordpress-config Optional: false config-map: Type: ConfigMap (a volume populated by a ConfigMap) Name: market-engine-wordpress-config Optional: false default-token-4f6xq: Type: Secret (a volume populated by a Secret) SecretName: default-token-4f6xq Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Unhealthy 8m33s 
(x9023 over 2d15h) kubelet, gke-cluster-1-default-pool-6c5a3d37-sx7g Readiness probe failed: HTTP probe failed with statuscode: 500 Warning BackOff 3m30s (x9287 over 2d15h) kubelet, gke-cluster-1-default-pool-6c5a3d37-sx7g Back-off restarting failed container </code></pre>
0LEg
<p>As you described, your application is crashing because of a change you have made in the code. This is making your website to fail and your pod is configured to check if the website is running fine and if not, the container will be restarted. The configuration that makes it happen is the LivenessProbe and the ReadinessProbe.</p> <p>The problem here is that prevents you from fixing the problem.</p> <p>The good news is that your data is saved under <code>/var/www/html</code> and this directory is on a external storage.</p> <p>So, the easiest solution is to create a new pod and attach this storage to this pod. Problem is that this storage cannot be mounted on more than one container at the same time.</p> <p>Creating this new pod, requires you to temporarily remove your wordpress pod. I know, it may be scary but we will recreate it after.</p> <p>I reproduced your scenario and tested these steps. So Let's start. (All steps as mandatory)</p> <p>Before we start, let's save your <code>market-engine-wordpress</code> manifest:</p> <pre><code>$ kubectl get statefulsets market-engine-wordpress -o yaml &gt; market-engine-wordpress.yaml </code></pre> <p>Delete your wordpress statefulset:</p> <pre><code>$ kubectl delete statefulsets market-engine-wordpress </code></pre> <p>This commands delete the instruction that creates your wordpress pod.</p> <p>Now, let's create a new pod using the following manifest:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: fenix namespace: kalm-system spec: volumes: - name: market-engine-wordpress-pvc persistentVolumeClaim: claimName: market-engine-wordpress-pvc-market-engine-wordpress-0 containers: - name: ubuntu image: ubuntu command: ['sh', '-c', &quot;sleep 36000&quot;] volumeMounts: - mountPath: /var/www/html name: market-engine-wordpress-pvc subPath: wp </code></pre> <p>To create this pod, save this content in a file as <code>fenix.yaml</code> and run the following command:</p> <pre><code>$ kubectl apply -f fenix.yaml </code></pre> <p>Check if the pod is ready:</p> <pre><code>$ kubectl get pods fenix NAME READY STATUS RESTARTS AGE fenix 1/1 Running 0 5m </code></pre> <p>From this point, you can connect to this pod and fix your <code>functions.php</code> file:</p> <pre><code>$ kubectl exec -ti fenix -- bash root@fenix:/# cd /var/www/html/wp-includes/ root@fenix:/var/www/html/wp-includes# </code></pre> <p>When you are done fixing your code, we can delete this pod and re-create your wordpress pod.</p> <pre><code>$ kubectl delete pod fenix pod &quot;fenix&quot; deleted </code></pre> <pre><code>$ kubectl apply -f market-engine-wordpress.yaml statefulset.apps/market-engine-wordpress created </code></pre> <p>Check if the pod is ready:</p> <pre><code>$ kubectl get pod market-engine-wordpress-0 NAME READY STATUS RESTARTS AGE market-engine-wordpress-0 2/2 Running 0 97s </code></pre> <p>If you need to exec into the wordpress container, your application uses the concept of multi-container pod and connecting to the right container requires you to indicate what container you want to connect.</p> <p>To check how many containers and the name of which one you can run <code>kubectl get pod mypod -o yaml</code> or run <code>kubectl describe pod mypod</code>.</p> <p>To finally exec into it, use the following command:</p> <pre><code>$ kubectl exec -ti market-engine-wordpress-0 -c wordpress -- bash root@market-engine-wordpress-0:/var/www/html# </code></pre>
Mark Watney
<p>I have two Microservices deployed on K8S cluster (Locally on 3 VMs - 1 Master and 2 Worker Nodes):<br> 1- currency-exchange Microservice<br> 2- currency-conversion Microservice<br></p> <p>I am trying to call <strong>currency-exchange</strong> Microservice from <strong>currency-conversion</strong> by using service name : <br> http:///currency-exchange:8000.</p> <p>It returns error as below: <br> <code>{&quot;timestamp&quot;:&quot;2021-02-17T08:38:25.590+0000&quot;,&quot;status&quot;:500,&quot;error&quot;:&quot;Internal Server Error&quot;,&quot;message&quot;:&quot;currency-exchange executing GET http://currency-exchange:8000/currency-exchange/from/EUR/to/INR&quot;,&quot;path&quot;:&quot;/currency-conversion/from/EUR/to/INR/quantity/10&quot;}</code></p> <p>I am using Kubernetes, CentOS8 using Calico CNI with set FELIX_IPTABLESBACKEND=NFT , based on <a href="https://stackoverflow.com/questions/65836764/kubernetes-ingress-second-node-port-is-not-responding">this link</a> to facilitate POD-TO-POD communications. <br> Current services available:</p> <pre><code>[root@k8s-master ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE currency-conversion NodePort 10.106.70.108 &lt;none&gt; 8100:32470/TCP 3h40m currency-exchange NodePort 10.110.232.189 &lt;none&gt; 8000:31776/TCP 3h41m </code></pre> <p>Pods:<br></p> <pre><code>[root@k8s-master ~]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES currency-conversion-86d9bc4698-rxdkh 1/1 Running 0 5h45m 192.168.212.125 worker-node-1 &lt;none&gt; &lt;none&gt; currency-exchange-c79ff888b-c8sdd 1/1 Running 0 5h44m 192.168.19.160 worker-node-2 &lt;none&gt; &lt;none&gt; currency-exchange-c79ff888b-nfqpx 1/1 Running 0 5h44m 192.168.212.65 worker-node-1 &lt;none&gt; &lt;none&gt; </code></pre> <p>List of CoreDNS Pods available:<br></p> <pre><code>[root@k8s-master ~]# kubectl get pods -o wide -n kube-system | grep coredns coredns-74ff55c5b-9x5qm 1/1 Running 8 25d 192.168.235.218 k8s-master &lt;none&gt; &lt;none&gt; coredns-74ff55c5b-zkkn7 1/1 Running 8 25d 192.168.235.220 k8s-master &lt;none&gt; &lt;none&gt; </code></pre> <p>List all ENV variables:<br></p> <pre><code>[root@k8s-master ~]# kubectl exec -it currency-conversion-86d9bc4698-rxdkh -- printenv HOSTNAME=currency-conversion-86d9bc4698-rxdkh CURRENCY_EXCHANGE_SERVICE_HOST=http://currency-exchange KUBERNETES_SERVICE_HOST=10.96.0.1 CURRENCY_EXCHANGE_SERVICE_PORT=8000 ........ </code></pre> <p>nslookup kubernetes.default exec Command :<br></p> <pre><code>[root@k8s-master ~]# kubectl exec -it currency-conversion-86d9bc4698-rxdkh -- nslookup kubernetes.default nslookup: can't resolve '(null)': Name does not resolve nslookup: can't resolve 'kubernetes.default': Try again command terminated with exit code 1 </code></pre> <p>How do people solve such a problem? 
do they configure/tweak DNS to work properly as service registry ?</p> <p>Thanks in advance</p> <p><strong>EDITED:</strong></p> <pre><code>[root@k8s-master ~]# kubectl describe service currency-conversion Name: currency-conversion Namespace: default Labels: app=currency-conversion Annotations: &lt;none&gt; Selector: app=currency-conversion Type: NodePort IP Families: &lt;none&gt; IP: 10.106.70.108 IPs: 10.106.70.108 Port: &lt;unset&gt; 8100/TCP TargetPort: 8100/TCP NodePort: &lt;unset&gt; 32470/TCP Endpoints: 192.168.212.125:8100 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; [root@k8s-master ~]# kubectl describe service currency-exchange Name: currency-exchange Namespace: default Labels: app=currency-exchange Annotations: &lt;none&gt; Selector: app=currency-exchange Type: NodePort IP Families: &lt;none&gt; IP: 10.110.232.189 IPs: 10.110.232.189 Port: &lt;unset&gt; 8000/TCP TargetPort: 8000/TCP NodePort: &lt;unset&gt; 31776/TCP Endpoints: 192.168.19.160:8000,192.168.212.65:8000 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>I just discovered wrong behavior in all coredns PODS, a lot of timeouts:</p> <pre><code>[root@k8s-master ~]# kubectl logs coredns-74ff55c5b-zkkn7 -n kube-system .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d [ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:59744-&gt;192.168.100.1:53: i/o timeout [ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:53400-&gt;192.168.100.1:53: i/o timeout [ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:58465-&gt;192.168.100.1:53: i/o timeout [ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:58197-&gt;192.168.100.1:53: i/o timeout [ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:57794-&gt;192.168.100.1:53: i/o timeout [ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:43345-&gt;192.168.100.1:53: i/o timeout [ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:57361-&gt;192.168.100.1:53: i/o timeout [ERROR] plugin/errors: 2 6675909625369157619.3582573715596351475. HINFO: read udp 192.168.235.221:51716-&gt;192.168.100.1:53: i/o timeout </code></pre> <p>how can i start trace the problem?</p> <p><strong>Extra Details:</strong></p> <pre><code>[root@k8s-master ~]# kubectl exec -i -t currency-conversion-86d9bc4698-rxdkh -- sh / # wget http://currency-exchange:8000/currency-exchange/from/EUR/to/INR wget: bad address 'currency-exchange:8000' </code></pre>
Faris Rjoub
<p>It looks to me like you have incorrectly set up your CNI overlay network. I checked your previous question to verify the nodes' IP addresses, and it looks to me like your pod network overlaps with your host network.</p> <p>The Kubernetes <code>pod-network-cidr</code> is the <code>IP prefix</code> for all pods in the Kubernetes cluster. This range must not clash with other networks in your <code>VPC</code>.</p> <p>The Kubernetes pod network <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="nofollow noreferrer">documentation</a> describes this as well:</p> <blockquote> <p>Take care that your Pod network must not overlap with any of the host networks: you are likely to see problems if there is any overlap. (If you find a collision between your network plugin's preferred Pod network and some of your host networks, you should think of a suitable CIDR block to use instead, then use that during <code>kubeadm init</code> with <code>--pod-network-cidr</code> and as a replacement in your network plugin's YAML).</p> </blockquote> <p>This is also mentioned in the calico instructions for creating a <a href="https://docs.projectcalico.org/getting-started/kubernetes/quickstart#create-a-single-host-kubernetes-cluster" rel="nofollow noreferrer">cluster</a>:</p> <blockquote> <p><strong>Note</strong>: If 192.168.0.0/16 is already in use within your network you must select a different pod network CIDR, replacing 192.168.0.0/16 in the above command.</p> </blockquote> <p>P.S. You can always <code>wget</code> a static curl binary from <a href="https://github.com/moparisthebest/static-curl" rel="nofollow noreferrer">here</a>.</p>
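<p>As an illustration only (the CIDR below is just an example; pick any range that does not overlap your 192.168.x.x host network or your VPC), the pod subnet is set at cluster creation time, and the same range must then be used in Calico's configuration:</p> <pre><code># kubeadm-config.yaml -- equivalent to 'kubeadm init --pod-network-cidr=10.244.0.0/16'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  # pod CIDR chosen so it does not clash with the 192.168.x.x node addresses
  podSubnet: 10.244.0.0/16
</code></pre> <p>You would pass it with <code>kubeadm init --config kubeadm-config.yaml</code> and replace the default <code>192.168.0.0/16</code> in the Calico manifests with the same value; rebuilding the cluster this way is usually simpler than trying to re-IP an existing overlay.</p>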
acid_fuji
<p>our server running using Kubernetes for auto-scaling and we use newRelic for observability but we face some issues</p> <p>1- we need to restart pods when memory usage reaches 1G it automatically restarts when it reaches 1.2G but everything goes slowly.</p> <p>2- terminate pods when there no requests to the server</p> <p>my configuration</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ .Release.Name }} labels: app: {{ .Release.Name }} spec: revisionHistoryLimit: 2 replicas: {{ .Values.replicas }} selector: matchLabels: app: {{ .Release.Name }} template: metadata: labels: app: {{ .Release.Name }} spec: containers: - name: {{ .Release.Name }} image: &quot;{{ .Values.imageRepository }}:{{ .Values.tag }}&quot; env: {{- include &quot;api.env&quot; . | nindent 12 }} resources: limits: memory: {{ .Values.memoryLimit }} cpu: {{ .Values.cpuLimit }} requests: memory: {{ .Values.memoryRequest }} cpu: {{ .Values.cpuRequest }} imagePullSecrets: - name: {{ .Values.imagePullSecret }} {{- if .Values.tolerations }} tolerations: {{ toYaml .Values.tolerations | indent 8 }} {{- end }} {{- if .Values.nodeSelector }} nodeSelector: {{ toYaml .Values.nodeSelector | indent 8 }} {{- end }} </code></pre> <p>my values file</p> <pre><code>memoryLimit: &quot;2Gi&quot; cpuLimit: &quot;1.0&quot; memoryRequest: &quot;1.0Gi&quot; cpuRequest: &quot;0.75&quot; </code></pre> <p>thats what I am trying to approach</p> <p><a href="https://i.stack.imgur.com/kcoV1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kcoV1.png" alt="enter image description here" /></a></p>
Mina Fawzy
<p>If you want to be sure your pod/deployment won't consume more than <code>1.0Gi</code> of memory, then setting that <code>memoryLimit</code> will do the job just fine.</p> <p>Once you set that limit and your container exceeds it, it becomes a potential candidate for termination. If it continues to consume memory beyond its limit, the container will be terminated. If a terminated container can be restarted, the kubelet restarts it, as with any other type of runtime container failure.</p> <p>For more reading please visit the section <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#exceed-a-container-s-memory-limit" rel="nofollow noreferrer">exceeding a container's memory limit</a>.</p> <p>Moving on, if you wish to scale your deployment based on requests, you need custom metrics to be provided by an external adapter such as the <a href="https://github.com/kubernetes-sigs/prometheus-adapter" rel="nofollow noreferrer">prometheus adapter</a>. The horizontal pod autoscaler natively provides scaling based only on CPU and memory (using the metrics from the metrics server).</p> <p>The adapter documentation provides a <a href="https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/walkthrough.md" rel="nofollow noreferrer">walkthrough</a> on how to configure it with the Kubernetes API and HPA. The list of other adapters can be found <a href="https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md" rel="nofollow noreferrer">here</a>.</p> <p>Then you can scale your deployment based on the <code>http_requests</code> metric as shown <a href="https://github.com/stefanprodan/k8s-prom-hpa#auto-scaling-based-on-custom-metrics" rel="nofollow noreferrer">here</a> or <code>requests-per-second</code> as described <a href="https://www.weave.works/blog/kubernetes-horizontal-pod-autoscaler-and-prometheus" rel="nofollow noreferrer">here</a>.</p>
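<p>To make the custom-metrics part concrete, here is a minimal sketch of an HPA driven by such a metric, assuming the prometheus adapter already exposes an <code>http_requests</code> pods metric (the deployment name, replica bounds and target value are placeholders):</p> <pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # placeholder deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests   # metric exposed through the prometheus adapter
      target:
        type: AverageValue
        averageValue: 10      # keep the per-pod average of this metric around 10
</code></pre> <p>Note that a plain HPA will not go below <code>minReplicas: 1</code>, so terminating pods when there are no requests at all usually needs an additional tool such as KEDA or Knative rather than the HPA alone.</p>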
acid_fuji
<h1>I can't apply an ingress configuration.</h1> <h3>I need access a jupyter-lab service by it's DNS</h3> <ul> <li><a href="http://jupyter-lab.local" rel="nofollow noreferrer">http://jupyter-lab.local</a></li> </ul> <p>It's deployed to a 3 node bare metal k8s cluster</p> <ul> <li>node1.local (master)</li> <li>node2.local (worker)</li> <li>node3.local (worker)</li> </ul> <p>Flannel is installed as the Network controller</p> <h3>I've installed nginx ingress for bare metal like this</h3> <ul> <li><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml</code></li> </ul> <p>When deployed the jupyter-lab pod is on node2 and the <em>NodePort</em> service responds correctly from <a href="http://node2.local:30004" rel="nofollow noreferrer">http://node2.local:30004</a> (see below)</p> <h2>I'm expecting that the ingress-nginx controller will expose the <em>ClusterIP</em> service by its DNS name ...... thats what I need, is that wrong?</h2> <p>This is the CIP service, defined with symmetrical ports <code>8888</code> to be as simple as possible (is that wrong?)</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: jupyter-lab-cip namespace: default spec: type: ClusterIP ports: - port: 8888 targetPort: 8888 selector: app: jupyter-lab </code></pre> <ul> <li><p>The DNS name <code>jupyter-lab.local</code> resolves to the ip address range of the cluster, but times out with no response. <code>Failed to connect to jupyter-lab.local port 80: No route to host</code></p> </li> <li><p><code>firewall-cmd --list-all</code> shows that port 80 is open on each node</p> </li> </ul> <p>This is the ingress definition for http into the cluster (any node) on port 80. (is that wrong ?)</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: jupyter-lab-ingress annotations: # nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io: / spec: rules: - host: jupyter-lab.local http: paths: - path: / pathType: Prefix backend: service: name: jupyter-lab-cip port: number: 80 </code></pre> <p>This the deployment</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: jupyter-lab-dpt namespace: default spec: replicas: 1 selector: matchLabels: app: jupyter-lab template: metadata: labels: app: jupyter-lab spec: volumes: - name: jupyter-lab-home persistentVolumeClaim: claimName: jupyter-lab-pvc containers: - name: jupyter-lab image: docker.io/jupyter/tensorflow-notebook ports: - containerPort: 8888 volumeMounts: - name: jupyter-lab-home mountPath: /var/jupyter-lab_home env: - name: &quot;JUPYTER_ENABLE_LAB&quot; value: &quot;yes&quot; </code></pre> <p>I can successfully access jupyter-lab by its NodePort http://node2:30004 with this definition:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: jupyter-lab-nodeport namespace: default spec: type: NodePort ports: - port: 10003 targetPort: 8888 nodePort: 30004 selector: app: jupyter-lab </code></pre> <p>How can I get ingress to my jupyter-lab at <a href="http://jupyter-lab.local" rel="nofollow noreferrer">http://jupyter-lab.local</a> ???</p> <ul> <li>the command <code>kubectl get endpoints -n ingress-nginx ingress-nginx-controller-admission</code> returns :</li> </ul> <p><code>ingress-nginx-controller-admission 10.244.2.4:8443 15m</code></p> <hr /> <h2>Am I misconfiguring ports ?</h2> <h2>Are my &quot;selector:appname&quot; definitions wrong ?</h2> <h2>Am I missing a part</h2> <h2>How can I debug what's going on ?</h2> <hr 
/> <p>Other details</p> <ul> <li><p>I was getting this error when applying an ingress <code>kubectl apply -f default-ingress.yml</code></p> <pre><code>Error from server (InternalError): error when creating &quot;minnimal-ingress.yml&quot;: Internal error occurred: failed calling webhook &quot;validate.nginx.ingress.kubernetes.io&quot;: Post &quot;https://ingress-nginx-contr oller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s&quot;: context deadline exceeded </code></pre> <p>This command <code>kubectl delete validatingwebhookconfigurations --all-namespaces</code> removed the validating webhook ... was that wrong to do?</p> </li> <li><p>I've opened port 8443 on each node in the cluster</p> </li> </ul>
Kickaha
<p>Ingress is invalid, try the following:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: jupyter-lab-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: jupyter-lab.local http: # &lt;- removed the - paths: - path: / pathType: Prefix backend: service: # name: jupyter-lab-cip name: jupyter-lab-nodeport port: number: 8888 --- apiVersion: v1 kind: Service metadata: name: jupyter-lab-cip namespace: default spec: type: ClusterIP ports: - port: 8888 targetPort: 8888 selector: app: jupyter-lab </code></pre> <hr /> <p>If I understand correctly, you are trying to expose jupyternb through ingress nginx proxy and to make it accessible through port 80.</p> <p>Run the folllowing command to check what nodeport is used by nginx ingress service:</p> <pre><code>$ kubectl get svc -n ingress-nginx ingress-nginx-controller NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller NodePort 10.96.240.73 &lt;none&gt; 80:30816/TCP,443:31475/TCP 3h30m </code></pre> <p>In my case that is port 30816 (for http) and 31475 (for https).</p> <p>Using NodePort type you can only use ports in range 30000-32767 (k8s docs: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a>). You can change it using kube-apiserver flag <code>--service-node-port-range</code> and then set it to e.g. <code>80-32767</code> and then in your ingress-nginx-controller service set <code>nodePort: 80</code></p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: {} labels: app.kubernetes.io/component: controller app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: ingress-nginx app.kubernetes.io/version: 0.44.0 helm.sh/chart: ingress-nginx-3.23.0 name: ingress-nginx-controller namespace: ingress-nginx spec: ports: - name: http port: 80 protocol: TCP targetPort: http nodePort: 80 # &lt;- HERE - name: https port: 443 protocol: TCP targetPort: https nodePort: 443 # &lt;- HERE selector: app.kubernetes.io/component: controller app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/name: ingress-nginx type: NodePort </code></pre> <p>Although this is genereally not advised to change service-node-port-range since you may encounter some issues if you use ports that are already open on nodes (e.g. 
port 10250 that is opened by kubelet on every node).</p> <hr /> <p>What might be a better solution is to use <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>.</p> <hr /> <p>EDIT:</p> <blockquote> <p>How can I get ingress to my jupyter-lab at <a href="http://jupyter-lab.local" rel="nofollow noreferrer">http://jupyter-lab.local</a> ???</p> </blockquote> <p>Assuming you don't need a failure tolerant solution, download the <code>https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml</code> file and change <code>ports:</code> section for the deployment object like following:</p> <pre><code> ports: - name: http containerPort: 80 hostPort: 80 # &lt;- add this line protocol: TCP - name: https containerPort: 443 hostPort: 443 # &lt;- add this line protocol: TCP - name: webhook containerPort: 8443 protocol: TCP </code></pre> <p>and apply the changes:</p> <pre><code>kubectl apply -f deploy.yaml </code></pre> <p>Now run:</p> <pre><code>$ kubectl get po -n ingress-nginx ingress-nginx-controller-&lt;HERE PLACE YOUR HASH&gt; -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ingress-nginx-controller-67897c9494-c7dwj 1/1 Running 0 97s 172.17.0.6 &lt;node_name&gt; &lt;none&gt; &lt;none&gt; </code></pre> <p>Notice the &lt;node_name&gt; in NODE column. This is a node's name where the pod got scheduled. Now take this nodes IP and add it to your <code>/etc/hosts</code> file.</p> <p>It should work now (go to <a href="http://jupyter-lab.local" rel="nofollow noreferrer">http://jupyter-lab.local</a> to check it), but this solution is fragile and if nginx ingress controller pod gets rescheduled to other node it will stop working (and it will stay lik this until you change the ip in /etc/hosts file). It's also generally not advised to use <code>hostPort:</code> field unless you have a very good reason to do so, so don't abuse it.</p> <hr /> <p>If you need failure tolerant solution, use MetalLB and create a service of type LoadBalancer for nginx ingress controller.</p> <p>I haven't tested it but the following should do the job, assuming that you correctly configured MetalLB:</p> <pre><code>kubectl delete svc -n ingress-nginx ingress-nginx-controller kubectl expose deployment -n ingress-nginx ingress-nginx-controller --type LoadBalancer </code></pre>
Matt
<p>I would like to use <em>podAffinity</em> with <em>matchFields</em>. On the web I only found examples with field metadata.name :</p> <pre><code>- matchFields: - key: metadata.name operator: NotIn values: - worker-1 </code></pre> <p>I would like to know if there are any other valid node fields I can use and how/where I can find them?</p> <p>Thx in advance</p>
Abdelghani
<p>I did some digging and found out that <code>matchFields</code> was introduced because of an issue with scheduling <code>DaemonSets</code>. The controller that creates the pods was depending on the <code>kubernetes.io/hostname</code> label on a Node. This whole setup assumed that this label is equal to the node name, which turned out to be wrong because the node name and the <code>hostname</code> are distinct in some cases. See <a href="https://github.com/kubernetes/kubernetes/issues/61410" rel="nofollow noreferrer">#61410</a> for more reading. This also explains why the only example of using this in the docs is related to DaemonSets.</p> <p>Because of that, PR <a href="https://github.com/kubernetes/kubernetes/pull/62002" rel="nofollow noreferrer">#62002</a> was merged, which added <code>MatchFields</code> to <code>NodeSelectorTerm</code> with a release note explaining that only <code>metadata.name</code> is supported:</p> <pre><code>Added `MatchFields` to `NodeSelectorTerm`; in 1.11, it only support `metadata.name`. </code></pre> <p>So at the time of this writing the only supported field is <code>metadata.name</code>. I've also checked the <a href="https://github.com/k82cn/kubernetes/blob/master/pkg/apis/core/validation/validation_test.go#L8050-L8075" rel="nofollow noreferrer">test code</a> and it appears that this behavior has not changed up to now.</p> <p>P.S. Please note that this can only be used in <code>NodeSelector</code> (node affinity) and is not applicable in <code>PodAffinity/PodAntiAffinity</code>.</p>
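<p>For completeness, a minimal sketch of where <code>matchFields</code> can legally appear: in <code>nodeAffinity</code> inside a pod spec rather than in pod (anti-)affinity (the pod name, node name and image are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: field-affinity-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          # metadata.name is the only field supported here
          - key: metadata.name
            operator: NotIn
            values:
            - worker-1
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
</code></pre> <p>If you need pod-to-pod placement rules instead, you are limited to label selectors (<code>matchLabels</code>/<code>matchExpressions</code>).</p>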
acid_fuji
<p>I have a few pods that I am trying to match URLs for their respective services.</p> <p>Please note that I <strong>need to use</strong> <code>nginx.ingress.kubernetes.io/rewrite-target</code> to solve this and <strong>not</strong> <code>nginx.ingress.kubernetes.io/rewrite-target</code></p> <p>My ingress config file looks like this. Notice the <code>/api/tile-server/</code> does not have any regex pattern</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress-service annotations: nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; kubernetes.io/ingress.class: &quot;nginx&quot; cert-manager.io/cluster-issuer: &quot;letsencrypt-prod&quot; namespace: default spec: tls: - hosts: - example.com secretName: tls-secret rules: - host: example.com http: paths: - path: /?(.*) backend: serviceName: client servicePort: 80 - path: /api/auth/?(.*) backend: serviceName: auth servicePort: 8000 - path: /api/data/?(.*) backend: serviceName: data servicePort: 8001 - path: /api/tile-server/ backend: serviceName: tile-server servicePort: 7800 </code></pre> <ul> <li><code>client</code> pod is a react app built inside nginx docker image working fine</li> <li><code>nginx.conf</code> looks like this (if it's helpful)</li> </ul> <pre><code>server { # listen on port 80 listen 80; # where the root here root /usr/share/nginx/html; # what file to server as index index index.html index.htm; location / { # First attempt to serve request as file, then # as directory, then fall back to redirecting to index.html try_files $uri $uri/ /index.html; } # Media: images, icons, video, audio, HTC location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ { expires 1M; access_log off; add_header Cache-Control &quot;public&quot;; } # Javascript and CSS files location ~* \.(?:css|js)$ { try_files $uri =404; expires 1y; access_log off; add_header Cache-Control &quot;public&quot;; } # Any route containing a file extension (e.g. /devicesfile.js) location ~ ^.+\..+$ { try_files $uri =404; } } </code></pre> <ul> <li><code>auth</code> and <code>data</code> are Flask API pods working fine</li> <li><code>tile-server</code> is also a Flask pod but need not do any pattern matching. I need to match the exact <code>/api/tile-server/</code> URL</li> </ul> <p>I have tried the following patterns but failed:</p> <ul> <li><code>/api/tile-server/</code></li> <li><code>/api/tile-server/?(.*)</code></li> <li><code>/api/tile-server(/|$)?(.*)</code></li> </ul> <p>I can confirm that the pods/services are running on their proper ports and I am able to access them through node ports but not through load balancer/domain. What would be the right pattern to exactly match <code>/api/tile-server/</code> URL?</p>
Sudhanva Narayana
<p><strong>First solution</strong> - create separate ingress object for tile-server with rewrite-target annotation. This will work because ingress rules with the same host are merged together by ingress controller and separate ingress object allow for use of different annotations per object:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: tile-ingress-service annotations: nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/rewrite-target: &quot;/$2&quot; kubernetes.io/ingress.class: &quot;nginx&quot; cert-manager.io/cluster-issuer: &quot;letsencrypt-prod&quot; namespace: default spec: tls: - hosts: - example.com secretName: tls-secret rules: - host: example.com http: paths: - path: /api/tile-server(/|$)(.*) backend: serviceName: tile-server servicePort: 7800 </code></pre> <p><strong>Second solution</strong> - rewrite current ingress to work with rewrite-path. Some regex changes are necessary.</p> <p>Notice the non-capturing group notation: <code>(?:&lt;regex&gt;)</code>. This allows to skip numbering for these groups since I need everything relevant to be in the first group in order for it to work, because <code>rewrite-target: &quot;/$1&quot;</code>.</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress-service annotations: nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/rewrite-target: &quot;/$1&quot; kubernetes.io/ingress.class: &quot;nginx&quot; cert-manager.io/cluster-issuer: &quot;letsencrypt-prod&quot; namespace: default spec: tls: - hosts: - example.com secretName: tls-secret rules: - host: example.com http: paths: - path: /(.*) backend: serviceName: client servicePort: 80 - path: /(api/auth(?:/|$).*) backend: serviceName: auth servicePort: 8000 - path: /(api/data(?:/|$).*) backend: serviceName: data servicePort: 8001 - path: /api/tile-server(?:/|$)(.*) backend: serviceName: tile-server servicePort: 7800 </code></pre> <p>Here is how the rewrites will work for:</p> <ul> <li>auth service (same applies to data service)</li> </ul> <pre><code> /api/auth ---&gt; /api/auth /api/auth/ ---&gt; /api/auth/ /api/auth/xxx ---&gt; /api/auth/xxx </code></pre> <ul> <li>tile-server service:</li> </ul> <pre><code> /api/tile-server ---&gt; / /api/tile-server/ ---&gt; / /api/tile-server/xxx ---&gt; /xxx </code></pre> <ul> <li>client service</li> </ul> <pre><code> /xxx ---&gt; /xxx </code></pre> <p>Notice that the following paths will be forwarded to client service (where xxx is any alphanumerical string):</p> <pre><code> /api/authxxx /api/dataxxx /api/tile-serverxxx </code></pre> <p>If you want them to be forwaded to other/matching services, add <code>?</code> after <code>(?:/|$)</code> in path.</p>
Matt
<p>When doing <code>helm upgrade ... --force</code> I'm getting this below error </p> <pre><code>Error: UPGRADE FAILED: failed to replace object: Service "api" is invalid: spec.clusterIP: Invalid value: "": field is immutable </code></pre> <p>And This is how my service file looks like: (Not passing clusterIP anywhere )</p> <pre><code>apiVersion: v1 kind: Service metadata: name: {{ .Chart.Name }} namespace: {{ .Release.Namespace }} annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https" service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" labels: app: {{ .Chart.Name }}-service kubernetes.io/name: {{ .Chart.Name | quote }} dns: route53 chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" release: "{{ .Release.Name }}" spec: selector: app: {{ .Chart.Name }} type: LoadBalancer ports: - port: 443 name: https targetPort: http-port protocol: TCP </code></pre> <p><strong>Helm</strong> Version: <strong>3.0.1</strong></p> <p><strong>Kubectl</strong> Version: <strong>1.13.1</strong> [Tried with the <strong>1.17.1</strong> as well]</p> <p><strong>Server</strong>: <strong>1.14</strong></p> <p><strong>Note</strong>: Previously I was using some old version (of server, kubectl, helm) at that time I did not face this kind of issue. I can see lots of similar issues in GitHub regarding this, but unable to find any working solution for me.</p> <p>few of the similar issues:</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/25241" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/25241</a></p> <p><a href="https://github.com/helm/charts/pull/13646" rel="noreferrer">https://github.com/helm/charts/pull/13646</a> [For Nginx chart]</p>
Saikat Chakrabortty
<p>I've made some tests with Helm and got the same issue when trying to change the Service type from <code>NodePort/ClusterIP</code> to <code>LoadBalancer</code>.</p> <p>This is how I've reproduced your issue:</p> <p><strong>Kubernetes</strong> 1.15.3 (GKE) <strong>Helm</strong> 3.1.1</p> <p>Helm chart used for test: <a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="noreferrer">stable/nginx-ingress</a></p> <h3>How I reproduced:</h3> <ol> <li>Get and decompress the file:</li> </ol> <pre><code>helm fetch stable/nginx-ingress tar xzvf nginx-ingress-1.33.0.tgz </code></pre> <ol start="2"> <li>Modify service type from <code>type: LoadBalancer</code> to <code>type: NodePort</code> in the <code>values.yaml</code> file (line 271):</li> </ol> <pre><code>sed -i '271s/LoadBalancer/NodePort/' values.yaml </code></pre> <ol start="3"> <li>Install the chart:</li> </ol> <pre><code>helm install nginx-ingress ./ </code></pre> <ol start="4"> <li>Check service type, must be <code>NodePort</code>:</li> </ol> <pre><code>kubectl get svc -l app=nginx-ingress,component=controller NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress-controller NodePort 10.0.3.137 &lt;none&gt; 80:30117/TCP,443:30003/TCP 1m </code></pre> <ol start="5"> <li>Now modify the Service type again to <code>LoadBalancer</code> in the <code>values.yaml</code>:</li> </ol> <pre><code>sed -i '271s/NodePort/LoadBalancer/' values.yaml </code></pre> <ol start="6"> <li>Finally, try to upgrade the chart using <code>--force</code> flag:</li> </ol> <pre><code>helm upgrade nginx-ingress ./ --force </code></pre> <p>And then:</p> <pre><code>Error: UPGRADE FAILED: failed to replace object: Service "nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable </code></pre> <h3>Explanation</h3> <p>Digging around I found this in HELM <a href="https://github.com/helm/helm/blob/6bed5949cbff923851aac168373c5279f659c46a/pkg/kube/client.go#L428" rel="noreferrer">source code:</a></p> <pre class="lang-golang prettyprint-override"><code>// if --force is applied, attempt to replace the existing resource with the new object. if force { obj, err = helper.Replace(target.Namespace, target.Name, true, target.Object) if err != nil { return errors.Wrap(err, "failed to replace object") } c.Log("Replaced %q with kind %s for kind %s\n", target.Name, currentObj.GetObjectKind().GroupVersionKind().Kind, kind) } else { // send patch to server obj, err = helper.Patch(target.Namespace, target.Name, patchType, patch, nil) if err != nil { return errors.Wrapf(err, "cannot patch %q with kind %s", target.Name, kind) } } </code></pre> <p>Analyzing the code above Helm will use similar to <code>kubectl replace</code> api request (instead of <code>kubectl replace --force</code> as we could expect)... 
when the helm <code>--force</code> flag is set.</p> <p>If not, then Helm will use <code>kubectl patch</code> api request to make the upgrade.</p> <p>Let's check if it make sense:</p> <h3>PoC using kubectl</h3> <ol> <li>Create a simple service as <code>NodePort</code>:</li> </ol> <pre><code>kubectl apply -f - &lt;&lt;EOF apiVersion: v1 kind: Service metadata: labels: app: test-svc name: test-svc spec: selector: app: test-app ports: - port: 80 protocol: TCP targetPort: 80 type: NodePort EOF </code></pre> <p>Make the service was created:</p> <pre><code>kubectl get svc -l app=test-svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE test-svc NodePort 10.0.7.37 &lt;none&gt; 80:31523/TCP 25 </code></pre> <p>Now lets try to use <code>kubectl replace</code> to upgrade the service to <code>LoadBalancer</code>, like <code>helm upgrade --force</code>:</p> <pre><code>kubectl replace -f - &lt;&lt;EOF apiVersion: v1 kind: Service metadata: labels: app: test-svc name: test-svc spec: selector: app: test-app ports: - port: 80 protocol: TCP targetPort: 80 type: LoadBalancer EOF </code></pre> <p>This shows the error:</p> <pre><code>The Service "test-svc" is invalid: spec.clusterIP: Invalid value: "": field is immutable </code></pre> <p>Now, lets use <code>kubectl patch</code> to change the NodePort to LoadBalancer, simulating the helm upgrade command <em>without</em> <code>--force</code> flag:</p> <p><em><a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_patch/" rel="noreferrer">Here</a> is the kubectl patch documentation, if want to see how to use.</em></p> <pre><code>kubectl patch svc test-svc -p '{"spec":{"type":"LoadBalancer"}}' </code></pre> <p>Then you see: <code>service/test-svc patched</code></p> <h3>Workaround</h3> <p>You should to use <code>helm upgrade</code> without <code>--force</code>, it will work.</p> <p>If you really need to use <code>--force</code> to recreate some resources, like pods to get the latest <code>configMap</code> update, for example, then I suggest you first manually change the service specs before Helm upgrade.</p> <p>If you are trying to change the service type you could do it exporting the service <code>yaml</code>, changing the type and apply it again (because I experienced this behavior only when I tried to apply the same template from the first time):</p> <pre><code>kubectl get svc test-svc -o yaml | sed 's/NodePort/LoadBalancer/g' | kubectl replace --force -f - </code></pre> <p>The Output:</p> <pre><code>service "test-svc" deleted service/test-svc replaced </code></pre> <p>Now, if you try to use <code>helm upgrade --force</code> and doesn't have any change to do in the service, it will work and will recreate your pods and others resources.</p> <p>I hope that helps you!</p>
Mr.KoopaKiller
<p>I configured authentication through nginx to a specific service in k8s.</p> <p>It works fine with WUI.</p> <p>I saw some examples</p> <p>This works fine too:</p> <p><code>curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar'</code></p> <p>But i need to close other part my url same.</p> <p>For example /api/v1/upload</p> <p>I deployd 2nd ingress with path:</p> <pre><code>spec: rules: - host: foo.bar.com http: paths: - backend: serviceName: service servicePort: 8000 path: /api/v1/upload </code></pre> <p>Without nginx i got to type:</p> <p><code>curl -XPOST 'file=@/file' http://10.2.29.4:8000/api/v1/upload -H "Authorization:key"</code></p> <p>How do i need to try use curl for hide real ip or port and get a good result?</p> <p>I would really appreciate if you could help me figure it out.</p> <p>ingress1:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: mobsf namespace: default labels: app: mobsf # env: dev annotations: nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: mobsf-basic-auth nginx.ingress.kubernetes.io/auth-realm: "Authentication Required" spec: rules: - host: worker1.mydomain.local http: paths: - path: / - backend: serviceName: mobsf servicePort: 8000 </code></pre> <p>Ingress 2:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: mobsf2 namespace: default labels: app: mobsf2 # env: dev annotations: nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: mobsf-basic-auth nginx.ingress.kubernetes.io/auth-realm: "Authentication Required" spec: rules: - host: worker1.mydomain.local http: paths: - backend: serviceName: mobsf2 servicePort: 8000 path: /api/v1/ </code></pre> <p>DNS check (correct):</p> <pre><code> nslookup worker1.mydomain.local Server: 10.2.67.10 Address: 10.2.67.10#53 Name: worker1.mydomain.local Address: 10.2.67.203 </code></pre> <p>Services:</p> <pre><code>kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.233.0.1 &lt;none&gt; 443/TCP 159d minio-service LoadBalancer 10.233.32.19 &lt;pending&gt; 9001:30809/TCP 104d mobsf NodePort 10.233.18.34 &lt;none&gt; 8000:30426/TCP 8d vault NodePort 10.233.43.134 &lt;none&gt; 8200:30820/TCP 69 </code></pre> <p>Ing:</p> <pre><code> kubectl get ing NAME HOSTS ADDRESS PORTS AGE mobsf worker1.dev002.local 80 2d1h mobsf2 worker1.dev002.local 80 23h </code></pre>
cryptoparty
<p>In bare metal installations it is not possible to use <code>Ingress</code> and <code>LoadBalancer</code> services by default. You can't get an <code>EXTERNAL-IP</code>, as displayed in the outputs that you provided.</p> <p>The first <code>curl</code> command you provided shows that you are using your service <code>mobsf</code> as a <code>NodePort</code>, which means you can reach your application by hitting the IP of your node plus the port, like <code>http://&lt;NODE_IP&gt;:8080</code>, but without authentication, since you are not accessing the server through the ingress.</p> <p>Here are the service types and how they work:</p> <blockquote> <ul> <li><code>ClusterIP</code>: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default <code>ServiceType</code>.</li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a>: Exposes the Service on each Node’s IP at a static port (the <code>NodePort</code>). A <code>ClusterIP</code> Service, to which the <code>NodePort</code> Service routes, is automatically created. You’ll be able to contact the <code>NodePort</code> Service, from outside the cluster, by requesting <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>.</li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a>: Exposes the Service externally using a cloud provider’s load balancer. <code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</li> </ul> </blockquote> <h3>How to use LoadBalancer and Ingress in bare metal installations?</h3> <p>The Nginx <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">docs</a> show how to set up <strong>MetalLB</strong> to allow your bare metal cluster to use LoadBalancer Services.</p> <blockquote> <p><a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.</p> </blockquote> <p>Basically, the setup is easy:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
</code></pre> <p>Then create a ConfigMap to configure it <em>(edit the IP range according to your network)</em>:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250 &lt;= EDIT IP RANGE
</code></pre> <p>Check the installation by typing <code>kubectl get pods -n metallb-system</code>; this is the expected output:</p> <pre><code>$ kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-65895b47d4-6wzfr   1/1     Running   0          9d
speaker-v52xj                 1/1     Running   0          9d
</code></pre> <p>After <strong>MetalLB</strong> is installed and configured, you should be able to use your Ingress and LoadBalancer services.</p> <p>Here is an example of how to set up a Service (ClusterIP) and an Ingress:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mobsf
spec:
  selector:
    app: mobsf
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mobsf-ing
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
  - host: worker1.mydomain.local
    http:
      paths:
      - path: "/"
        backend:
          serviceName: mobsf
          servicePort: 8000
      - path: "/api/v1"
        backend:
          serviceName: mobsf
          servicePort: 8080
</code></pre> <p>Check your ingress with the command <code>kubectl get ing</code> and look at the <code>ADDRESS</code> column.</p> <p>After that, make sure you configure an entry in your local DNS for <code>worker1.mydomain.local</code> pointing to that IP.</p> <p>Please let me know if that helped.</p>
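<p>Note that the ingress above assumes a secret named <code>basic-auth</code> already exists in the same namespace (it is what the <code>auth-secret</code> annotation points to). If you don't have it yet, it can be created roughly like this, where <code>foo</code> is just an example username:</p> <pre><code>$ htpasswd -c auth foo
$ kubectl create secret generic basic-auth --from-file=auth
</code></pre>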
Mr.KoopaKiller
<p>I'm just getting started with kubebuilder and Golang to extend our Kubernetes cluster with a custom resource. I would love to do different things in the reconciler function based on the event that actually triggered it.</p> <p>Was the resource created? Was it updated? Was it deleted?</p> <p>Each of those events triggers the controller; however, I can't seem to find a way to see which of those events actually happened. I can work around this issue by writing a reconciler like this:</p> <pre class="lang-golang prettyprint-override"><code>func (r *ServiceDescriptorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	service := &amp;batchv1.ServiceDescriptor{}
	if err := r.Get(context.TODO(), req.NamespacedName, service); err != nil &amp;&amp; errors.IsNotFound(err) {
		fmt.Println(&quot;Resource was not found -&gt; must have been deleted&quot;)
	} else {
		fmt.Println(&quot;No errors found -&gt; Resource must have been created or updated&quot;)
	}
	return ctrl.Result{}, nil
}
</code></pre> <p>However, this feels oddly implicit and kinda hacky.</p> <p>Is there a clean (possibly native) way of getting the event type of the reconciler call?</p>
Tim Hilt
<p>You won't be able to do that, because this system was designed to be level-based: it is not triggered by individual event changes but rather driven by the actual cluster state that is fetched from the apiserver.</p> <p>Looking at <code>reconcile.go</code> you will notice that line <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/b704f447ea7c8f7059c6665143a4aa1f6da28328/pkg/reconcile/reconcile.go#L84-L87" rel="noreferrer">#84</a> has this comment about it:</p> <blockquote> <p>Reconciliation is level-based, meaning action <strong>isn't driven off changes in individual Events</strong>, but instead is driven by actual cluster state read from the apiserver or a local cache. For example if responding to a Pod Delete Event, the Request won't contain that a Pod was deleted, instead the reconcile function observes this when reading the cluster state and seeing the Pod as missing.</p> </blockquote> <p>And in line <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/reconcile/reconcile.go#L44-L46" rel="noreferrer">#44</a>:</p> <blockquote> <p>Request contains the information necessary to reconcile a Kubernetes object. This includes the information to uniquely identify the object - its Name and Namespace. <strong>It does NOT contain information about any specific Event or the object contents itself</strong>.</p> </blockquote>
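<p>If you really need to branch on deletion, the usual pattern is to derive it from the object's state rather than from an event type - typically by checking the deletion timestamp together with a finalizer. The snippet below is only a sketch of that pattern: the finalizer name is made up, <code>batchv1</code> refers to your own API package as in your snippet, and the <code>controllerutil</code> helper signatures may differ slightly between controller-runtime versions.</p> <pre class="lang-golang prettyprint-override"><code>import (
	&quot;context&quot;

	ctrl &quot;sigs.k8s.io/controller-runtime&quot;
	&quot;sigs.k8s.io/controller-runtime/pkg/client&quot;
	&quot;sigs.k8s.io/controller-runtime/pkg/controller/controllerutil&quot;
)

// Made-up finalizer name, used only for illustration.
const myFinalizer = &quot;example.com/servicedescriptor-finalizer&quot;

func (r *ServiceDescriptorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	service := &amp;batchv1.ServiceDescriptor{}
	if err := r.Get(ctx, req.NamespacedName, service); err != nil {
		// Object is gone and carried no finalizer: nothing left to clean up.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	if !service.ObjectMeta.DeletionTimestamp.IsZero() {
		// Object is being deleted: run your cleanup here, then release the finalizer.
		controllerutil.RemoveFinalizer(service, myFinalizer)
		return ctrl.Result{}, r.Update(ctx, service)
	}

	// Object exists, so it was created or updated: ensure the finalizer is set
	// and reconcile the world towards the spec.
	if !controllerutil.ContainsFinalizer(service, myFinalizer) {
		controllerutil.AddFinalizer(service, myFinalizer)
		if err := r.Update(ctx, service); err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}
</code></pre>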
acid_fuji
<p>I'm running a Jenkins pod with helm charts and I'm seeing weird logs when starting Jenkins jobs. The requested resources and limits seem to be in the default state - compared to what I set in the values.</p> <pre><code>helm install stable/jenkins --name jenkins -f jenkins.yaml
</code></pre> <p>And after creating and running a random job from the UI:</p> <pre><code>Agent jenkins-agent-mql8q is provisioned from template Kubernetes Pod Template
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations: {}
  labels:
    jenkins/jenkins-slave: "true"
    jenkins/label: "jenkins-jenkins-slavex"
  name: "jenkins-agent-mql8q"
spec:
  containers:
  - args:
    - "********"
    - "jenkins-agent-mql8q"
    env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-agent:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "jenkins-agent-mql8q"
    - name: "JENKINS_NAME"
      value: "jenkins-agent-mql8q"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins:8080/"
    image: "jenkins/jnlp-slave:3.27.1"
    imagePullPolicy: "IfNotPresent"
    name: "jnlp"
    resources:
      limits:
        memory: "2Gi"
        cpu: "2"
      requests:
        memory: "1Gi"
        cpu: "1"
</code></pre> <p>And my helm values are:</p> <pre><code>master:
  (...)
  resources:
    requests:
      cpu: "1"
      memory: "1Gi"
    limits:
      cpu: "3"
      memory: "3Gi"

agent:
  resources:
    requests:
      cpu: "2"
      memory: "2Gi"
    limits:
      cpu: "4"
      memory: "3Gi"
</code></pre> <p>Any idea why it spawns agents with the default requests/limits of 1cpu/1Gi to 2cpu/2Gi?</p>
CptDolphin
<p>I've reproduced your scenario and I will explain how it worked for me. I'm using GKE with Kubernetes 1.15.3 and HELM 2.16.1.</p> <p>I've downloaded the helm chart to my local machine and decompressed the file to customize the <code>values.yaml</code>:</p> <pre><code>$ helm fetch stable/jenkins
$ tar xzvf jenkins-1.9.16.tgz
</code></pre> <p>In the <code>jenkins</code> folder, edit lines 422-427 of the <code>values.yaml</code> file:</p> <pre><code>agent:
...
    requests:
      cpu: "2"
      memory: "2Gi"
    limits:
      cpu: "4"
      memory: "3Gi"
...
</code></pre> <p>This will configure the agent container to spawn with the specified resources.</p> <p>Make any other changes to the file that you need; for this example I'll leave the rest at the default values.</p> <p>Install the helm chart:</p> <p><code>helm install jenkins/ -n jenkins</code></p> <p>After it is installed, follow the on-screen instructions to access the Jenkins console.</p> <p>To verify that the agents start with the configured resources, let's create a new job using a simple shell command.</p> <p><code>New Item &gt; Freestyle project</code></p> <p>In the job configuration, under the "Build" section, select "Execute shell" from the dropdown list. Type any Linux command, such as <code>id</code>, <code>ls</code>, <code>uname -a</code>, etc.</p> <p>Save and click the <code>Build Now</code> button.</p> <p>Check Kubernetes for the new containers; in this case the new agent container is named <code>default-6w3fq</code>.</p> <p>See the pod description:</p> <p><code>kubectl describe pod default-6w3fq</code></p> <pre><code>Name:         default-6w3fq
...
IP:
Containers:
  jnlp:
    Image:      jenkins/jnlp-slave:3.27-1
    ...
    Limits:
      cpu:     4
      memory:  3Gi
    Requests:
      cpu:     2
      memory:  2Gi
...
</code></pre> <p>You could also wait for the job to complete and check the job logs instead of using the <code>kubectl</code> command.</p> <p>I've also tried deploying with the default values and then upgrading the helm chart with the new values... nothing happened. It only worked when I ran the upgrade with the <code>--force</code> flag: <code>helm upgrade jenkins jenkins/ --force</code></p> <blockquote> <p>--force - force resource updates through a replacement strategy</p> </blockquote> <p>References: <a href="https://helm.sh/docs/helm/helm_upgrade/" rel="nofollow noreferrer">https://helm.sh/docs/helm/helm_upgrade/</a> <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/jenkins</a></p>
Mr.KoopaKiller
<p>The application I am working on runs as a deployment in a Kubernetes cluster. Pods created for this deployment are spread across various nodes in the cluster. Our application can handle only one TCP connection at a time and rejects further connections. Currently we use kube-proxy (iptables mode) for distributing load across pods on the various nodes, but pods are chosen randomly and connections get dropped when they are passed to a busy pod. Can I use Kube-router's least-connection based load balancing algorithm for my use case? I want the traffic to be load balanced across pods running on various nodes. Can I achieve this using Kube-router?</p> <p>As far as I know, kube-proxy's IPVS mode load balances traffic only across pods on the same node, since kube-proxy runs as a DaemonSet. Is it the same with Kube-router as well?</p>
LPT
<p>Kube-proxy's IPVS mode does load balance traffic across pods placed on different nodes.</p> <p>You can refer to this blog post for a deep dive into the topic: <a href="https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/" rel="nofollow noreferrer">IPVS-Based In-Cluster Load Balancing Deep Dive</a></p>
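<p>If what you are specifically after is least-connection scheduling, kube-proxy's IPVS mode can also be configured with a different scheduler, so kube-router is not strictly required for that. Below is a minimal sketch of the relevant kube-proxy configuration; where it lives depends on how your cluster was installed (kubeadm keeps it in the <code>kube-proxy</code> ConfigMap in <code>kube-system</code>), and the kube-proxy pods have to be restarted for a change to take effect:</p> <pre><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  # "lc" = least connection; other IPVS schedulers such as rr or wrr are also valid
  scheduler: "lc"
</code></pre>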
Mark Watney
<p>I have a list of ports</p> <pre class="lang-yaml prettyprint-override"><code>ports:
  - 123
  - 456
  - 789
</code></pre> <p>that I'd like to turn into <code>containerPort</code> entries for use in a Kubernetes deployment:</p> <pre><code>"container_ports": [
    {
        "containerPort": 123
    },
    {
        "containerPort": 456
    },
    {
        "containerPort": 789
    }
]
</code></pre> <p>The example above is created when I define <code>ports</code> as</p> <pre class="lang-yaml prettyprint-override"><code>ports:
  - containerPort: 123
  - containerPort: 456
  - containerPort: 789
</code></pre> <p>However, I'd like to save the user some typing and add <code>containerPort</code> automatically. I am able to prepend it using</p> <pre class="lang-yaml prettyprint-override"><code>- name: Creating custom ports
  set_fact:
    container_ports: "{{ ports | map('regex_replace', '(^)', '- containerPort: \\1') | list }}"
  when: ports is defined
</code></pre> <p>This only creates</p> <pre><code>ok: [localhost] =&gt; {
    "container_ports": [
        "- containerPort: 123",
        "- containerPort: 456",
        "- containerPort: 789"
    ]
}
</code></pre> <p>though, which is close, but not quite right.</p>
stiller_leser
<p>You can do this like so:</p> <pre><code>---
- hosts: localhost
  connection: local
  gather_facts: no

  vars:
    ports:
      - 123
      - 456

  tasks:
    - name: Creating custom ports
      set_fact:
        container_ports: '{{ ports | map("regex_replace", "^(.*)$", "containerPort: \1") | map("from_yaml") | list }}'

    - debug:
        var: container_ports
...
</code></pre> <p>The trick is to convert each item into a YAML hash and then convert that to a Python dict using <code>from_yaml</code>.</p> <p>Debug output:</p> <pre><code>ok: [localhost] =&gt; {
    "container_ports": [
        {
            "containerPort": 123
        },
        {
            "containerPort": 456
        }
    ]
}
</code></pre>
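<p>Once <code>container_ports</code> is a list of dicts like the above, one way to consume it is to pass it straight into a Deployment definition, for example with the <code>k8s</code> module. The task below is only a sketch: <code>my-app</code> and <code>my-app:latest</code> are placeholder names, and the module needs the <code>openshift</code> Python client available on the host running the task:</p> <pre><code>- name: Deploy app with generated ports (sketch)
  k8s:
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-app                  # placeholder name
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: my-app
        template:
          metadata:
            labels:
              app: my-app
          spec:
            containers:
              - name: my-app
                image: my-app:latest  # placeholder image
                ports: "{{ container_ports }}"
</code></pre>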
chash
<p>I have an EKS cluster, and a separate website built on (and hosted by) Webflow.</p> <p>The cluster is behind <code>cluster.com</code> and the website is at <code>website.webflow.io</code>.</p> <p>What I would like to achieve is to proxy requests coming to <code>cluster.com/website</code> to <code>website.webflow.io</code>.</p> <p>Based on my research, this problem could/might be solved with the ExternalName service. Unfortunately, it doesn't solve it for me, and it's trying to do a DNS lookup within the cluster. I tried various other configurations with Endpoints as well. The ExternalName approach seems the most promising of everything I tried, which is why I'm attaching the configuration below.</p> <p>Here is what my configuration looks like:</p> <pre><code>---
kind: Service
apiVersion: v1
metadata:
  namespace: development
  name: external-service
spec:
  type: ExternalName
  externalName: website.webflow.io
  ports:
  - port: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: development
  name: external-ingress
  annotations:
    ingress.kubernetes.io/preserve-host: &quot;false&quot;
    ingress.kubernetes.io/secure-backends: &quot;true&quot;
    ingress.kubernetes.io/upstream-vhost: &quot;website.webflow.io&quot;
    nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_name website.webflow.io;
      proxy_ssl_server_name on;
spec:
  rules:
  - host: cluster.com
    http:
      paths:
      - path: /website
        backend:
          serviceName: external-service
          servicePort: 443
</code></pre> <p>Is there a straightforward way to achieve this? What stands out as wrong in the configuration?</p>
Andrei Gaspar
<p>Here is what I did.</p> <p>I applied your config but changed the following annotation name:</p> <pre><code>ingress.kubernetes.io/upstream-vhost: &quot;website.webflow.io&quot;
</code></pre> <p>To the one I have found in <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">the nginx ingress docs</a>:</p> <pre><code>nginx.ingress.kubernetes.io/upstream-vhost: &quot;website.webflow.io&quot;
^^^^^^
</code></pre> <p>Try it and let me know if it solves it.</p> <p>EDIT: here is a complete yaml I used:</p> <pre><code>---
kind: Service
apiVersion: v1
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: website.webflow.io
  ports:
  - port: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-ingress
  annotations:
    ingress.kubernetes.io/preserve-host: &quot;false&quot;
    ingress.kubernetes.io/secure-backends: &quot;true&quot;
    nginx.ingress.kubernetes.io/upstream-vhost: &quot;website.webflow.io&quot;
    nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_name website.webflow.io;
      proxy_ssl_server_name on;
spec:
  rules:
  - host: cluster.com
    http:
      paths:
      - path: /website
        backend:
          serviceName: external-service
          servicePort: 443
</code></pre>
Matt
<p>I have a home Kubernetes cluster with multiple SSDs attached to one of the nodes. I currently have one persistent volume per mounted disk. Is there an easy way to create a persistent volume that can access data from multiple disks? I thought about symlinks, but that doesn't seem to work.</p>
Steve
<p>As coderanger already mentioned, Kubernetes does not manage your storage at a lower level. While with cloud solutions there might be some provisioners that do some of the work for you, with bare metal there aren't.</p> <p>The closest thing that helps you manage local storage is the <a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/getting-started.md" rel="nofollow noreferrer">Local-volume-static-provisioner</a>.</p> <blockquote> <p>The local volume static provisioner manages the PersistentVolume lifecycle for pre-allocated disks by detecting and creating PVs for each local disk on the host, and cleaning up the disks when released. It does not support dynamic provisioning.</p> </blockquote> <p>Have a look at <a href="https://medium.com/alterway/kubernetes-local-static-provisioner-4c197e0f83ab#:%7E:text=The%20local%20volume%20static%20provisioner,does%20not%20support%20dynamic%20provisioning." rel="nofollow noreferrer">this</a> article for more examples.</p>
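<p>Note that the provisioner creates one <code>PersistentVolume</code> per discovered disk, so you would still end up with one PV per SSD. If you want a single volume that spans several disks, you would have to combine them below Kubernetes first (for example with LVM or RAID) and mount the result as one disk. For reference, here is a minimal sketch of the <code>StorageClass</code> such a setup typically uses - the class name and the discovery directory mentioned in the comment are assumptions, not fixed values:</p> <pre><code># Assumes the provisioner is configured to discover disks mounted
# under a directory such as /mnt/disks on that node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage              # example name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>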
acid_fuji