Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I was going through the differences between Swarm and Kubernetes, and one of the <strong>cons of Swarm listed is that it has limited fault-tolerance functionality</strong>. How does Kubernetes achieve fault tolerance? Is it via a Kubernetes multi-master setup? Please share your inputs.</p>
| Zaks | <p>Yes! To achieve fault tolerance in Kubernetes, it is recommended to run multiple control plane (master) nodes, and if you are running on a cloud provider, spreading them across multiple availability zones is also recommended.</p>
<blockquote>
<p>The Control Plane’s components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/" rel="nofollow noreferrer">pod</a> when a deployment’s <code>replicas</code> field is unsatisfied).</p>
</blockquote>
<p>Basically, the control plane is composed of these components:</p>
<p><a href="https://kubernetes.io/docs/concepts/overview/components/#kube-apiserver" rel="nofollow noreferrer">kube-apiserver</a> - Exposes the Kubernetes API. It is the front end of the Kubernetes control plane.</p>
<p><a href="https://kubernetes.io/docs/concepts/overview/components/#etcd" rel="nofollow noreferrer">etcd</a> - Key/value store used as Kubernetes' backing store for all cluster data.</p>
<p><a href="https://kubernetes.io/docs/concepts/overview/components/#kube-scheduler" rel="nofollow noreferrer">kube-scheduler</a> - Watches for newly created pods that have no assigned node and selects a node for them to run on.</p>
<p><a href="https://kubernetes.io/docs/concepts/overview/components/#kube-controller-manager" rel="nofollow noreferrer">kube-controller-manager</a> - Runs the controllers; their responsibilities include maintaining the correct number of pods for every replication controller, populating Endpoints objects, and responding when nodes go down.</p>
<p><a href="https://kubernetes.io/docs/concepts/overview/components/#cloud-controller-manager" rel="nofollow noreferrer">cloud-controller-manager</a> - Interacts with the underlying cloud provider.</p>
<p>Every cluster also needs at least one worker node; the worker nodes are responsible for running your workloads.</p>
<blockquote>
<p>Here’s the diagram of a Kubernetes cluster with all the components tied together:</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/fhqnj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fhqnj.png" alt="enter image description here"></a></p>
<p>For more info see <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="nofollow noreferrer">here</a></p>
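<p>For illustration only, here is a minimal sketch (not from the original answer) of how a highly available control plane is often bootstrapped with kubeadm; the load balancer address is a placeholder:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
# Point every node at a load balancer in front of the API servers,
# so losing a single control plane node does not take the cluster down.
controlPlaneEndpoint: "LOAD_BALANCER_DNS:6443"
</code></pre>
<p>Additional control plane nodes are then joined with <code>kubeadm join ... --control-plane</code>.</p>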
| Mr.KoopaKiller |
<p>I found the following description about <code>supported docker versions</code> in the Kubernetes v1.18 and v1.19 documentation.</p>
<p><a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="nofollow noreferrer">Container runtimes</a></p>
<pre><code>The Kubernetes release notes list which versions of Docker are compatible with that version of Kubernetes.
</code></pre>
<p>But I cannot find <code>supported docker versions</code> in the Release Notes.<br />
<a href="https://v1-18.docs.kubernetes.io/docs/setup/release/notes/" rel="nofollow noreferrer">v1.18 Release Notes</a><br />
<a href="https://kubernetes.io/docs/setup/release/notes/" rel="nofollow noreferrer">v1.19 Release Notes</a></p>
<p>Also, I checked the other Kubernetes documents.
The v1.15 and v1.16 documentation describes <code>supported docker versions</code>.<br />
The documents specifically list them.<br />
<a href="https://v1-15.docs.kubernetes.io/docs/setup/release/notes/" rel="nofollow noreferrer">v1.15 Release Notes</a><br />
<a href="https://v1-16.docs.kubernetes.io/docs/setup/release/notes/" rel="nofollow noreferrer">v1.16 Release Notes</a></p>
<pre><code>The list of validated docker versions remains unchanged.
The current list is 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09. (#72823, #72831)
</code></pre>
| geroge k | <p>In fact, there is no information about supported Docker versions for Kubernetes 1.18 and 1.19.</p>
<p>You can suggest a documentation update using this <a href="https://github.com/kubernetes/website/issues/new?title=Improvement%20for%20k8s.io/docs/setup/production-environment/container-runtimes/" rel="nofollow noreferrer">link</a> so the Kubernetes team can include this information.</p>
| Mr.KoopaKiller |
<p>I want to know if there is any solution for submitting a Flink job to a Kubernetes cluster.</p>
<p>In the jobmanager deployment file, I tried to add a command option to my jobmanager pod to run after startup, but I realized that the command I passed overrides the image entrypoint.</p>
<p>So I want to know if there is a solution to do this?</p>
| mark dev | <p>Yes, if you provide a <code>command</code> and/or its <code>args</code>, they override the original image's <code>Entrypoint</code> and/or <code>Cmd</code>. If you want to know exactly how this happens, please refer to <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes" rel="nofollow noreferrer">this fragment of the official Kubernetes docs</a>.</p>
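<p>As a side note (a common pattern, not from the linked docs): if you only want to change the arguments and keep the image's original entrypoint, you can set <code>args</code> without setting <code>command</code>, for example:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: jobmanager
  image: flink:latest          # hypothetical image name
  # No "command" here, so the image's ENTRYPOINT is kept;
  # only the CMD is replaced by these args.
  args: ["jobmanager"]
</code></pre>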
<p>If you want to run some additional command immediately after your <code>Pod</code> startup, you can do it with a <code>postStart</code> handler, whose usage is presented in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="nofollow noreferrer">this example</a>:</p>
<blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: lifecycle-demo
spec:
containers:
- name: lifecycle-demo-container
image: nginx
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
preStop:
exec:
command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"]
</code></pre>
</blockquote>
| mario |
<p>I am working on a Kubernetes - Elasticsearch deployment.</p>
<p>I have followed the documentation provided by elastic.co (<a href="https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-deploy-elasticsearch.html</a>)</p>
<p>My YAML file for Elasticsearch is below:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 7.8.0
nodeSets:
- name: default
count: 1
config:
node.master: true
node.data: true
node.ingest: true
node.store.allow_mmap: false
podTemplate:
spec:
initContainers:
- name: sysctl
securityContext:
privileged: true
command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
containers:
- name: elasticsearch
env:
- name: ES_JAVA_OPTS
value: -Xms2g -Xmx2g
resources:
requests:
memory: 4Gi
cpu: 0.5
limits:
memory: 4Gi
cpu: 2
EOF
</code></pre>
<p>But I am getting the error below when I describe the created pod:</p>
<pre><code>Name: quickstart-es-default-0
Namespace: default
Priority: 0
Node: <none>
Labels: common.k8s.elastic.co/type=elasticsearch
controller-revision-hash=quickstart-es-default-55759bb696
elasticsearch.k8s.elastic.co/cluster-name=quickstart
elasticsearch.k8s.elastic.co/config-hash=178912897
elasticsearch.k8s.elastic.co/http-scheme=https
elasticsearch.k8s.elastic.co/node-data=true
elasticsearch.k8s.elastic.co/node-ingest=true
elasticsearch.k8s.elastic.co/node-master=true
elasticsearch.k8s.elastic.co/node-ml=true
elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
elasticsearch.k8s.elastic.co/version=7.8.0
statefulset.kubernetes.io/pod-name=quickstart-es-default-0
Annotations: co.elastic.logs/module: elasticsearch
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/quickstart-es-default
Init Containers:
elastic-internal-init-filesystem:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
Port: <none>
Host Port: <none>
Command:
bash
-c
/mnt/elastic-internal/scripts/prepare-fs.sh
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
Environment:
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
Mounts:
/mnt/elastic-internal/downward-api from downward-api (ro)
/mnt/elastic-internal/elasticsearch-bin-local from elastic-internal-elasticsearch-bin-local (rw)
/mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
/mnt/elastic-internal/elasticsearch-config-local from elastic-internal-elasticsearch-config-local (rw)
/mnt/elastic-internal/elasticsearch-plugins-local from elastic-internal-elasticsearch-plugins-local (rw)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
/mnt/elastic-internal/transport-certificates from elastic-internal-transport-certificates (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
sysctl:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
Port: <none>
Host Port: <none>
Command:
sh
-c
sysctl -w vm.max_map_count=262144
Environment:
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
Mounts:
/mnt/elastic-internal/downward-api from downward-api (ro)
/mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
/usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
/usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
/usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
/usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
Containers:
elasticsearch:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
Ports: 9200/TCP, 9300/TCP
Host Ports: 0/TCP, 0/TCP
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 500m
memory: 4Gi
Readiness: exec [bash -c /mnt/elastic-internal/scripts/readiness-probe-script.sh] delay=10s timeout=5s period=5s #success=1 #failure=3
Environment:
ES_JAVA_OPTS: -Xms2g -Xmx2g
POD_IP: (v1:status.podIP)
POD_NAME: quickstart-es-default-0 (v1:metadata.name)
PROBE_PASSWORD_PATH: /mnt/elastic-internal/probe-user/elastic-internal-probe
PROBE_USERNAME: elastic-internal-probe
READINESS_PROBE_PROTOCOL: https
HEADLESS_SERVICE_NAME: quickstart-es-default
NSS_SDB_USE_CACHE: no
Mounts:
/mnt/elastic-internal/downward-api from downward-api (ro)
/mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
/usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
/usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
/usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
/usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
Conditions:
Type Status
PodScheduled False
Volumes:
elasticsearch-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elasticsearch-data-quickstart-es-default-0
ReadOnly: false
downward-api:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
elastic-internal-elasticsearch-bin-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
elastic-internal-elasticsearch-config:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-default-es-config
Optional: false
elastic-internal-elasticsearch-config-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
elastic-internal-elasticsearch-plugins-local:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
elastic-internal-http-certificates:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-http-certs-internal
Optional: false
elastic-internal-probe-user:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-internal-users
Optional: false
elastic-internal-remote-certificate-authorities:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-remote-ca
Optional: false
elastic-internal-scripts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: quickstart-es-scripts
Optional: false
elastic-internal-transport-certificates:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-transport-certificates
Optional: false
elastic-internal-unicast-hosts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: quickstart-es-unicast-hosts
Optional: false
elastic-internal-xpack-file-realm:
Type: Secret (a volume populated by a Secret)
SecretName: quickstart-es-xpack-file-realm
Optional: false
elasticsearch-logs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler running "VolumeBinding" filter plugin for pod "quickstart-es-default-0": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler running "VolumeBinding" filter plugin for pod "quickstart-es-default-0": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling 20m (x3 over 21m) default-scheduler 0/2 nodes are available: 2 Insufficient memory.
</code></pre>
<p><strong>Question 2:</strong>
I have created two EC2 servers (t2.large): a master and a worker.
I am using a 300 GB HDD for both servers.</p>
<p><strong>I have following pv</strong></p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0001 200Gi RWO Retain Available
</code></pre>
<p>I am using the code below to create a volume claim for my Elasticsearch cluster.</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 7.8.0
nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 200Gi
        storageClassName: gp2
EOF
</code></pre>
<p><strong>Storage class:</strong> (I created it and made it the default)</p>
<pre><code>NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION
gp2 (default) kubernetes.io/aws-ebs Delete Immediate false
</code></pre>
<p><strong>Kubectl get pv</strong></p>
<pre><code>Labels: <none>
Annotations: Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 200Gi
Node Affinity: <none>
</code></pre>
<p><strong>kubectl get pvc</strong></p>
<pre><code>Namespace: default
StorageClass: gp2
Status: Pending
Volume:
Labels: common.k8s.elastic.co/type=elasticsearch
elasticsearch.k8s.elastic.co/cluster-name=quickstart
elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
Annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: quickstart-es-default-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 61s (x18 over 24m) persistentvolume-controller Failed to provision volume with StorageClass "gp2": Failed to get AWS Cloud Provider. GetCloudProvider returned <nil> instead
</code></pre>
<p><strong>But I am getting the error below:</strong>
running "VolumeBinding" filter plugin for pod "quickstart-es-default-0": pod has unbound immediate PersistentVolumeClaims</p>
<p><strong>My volume is in Ec2 EBS</strong></p>
| Rajeev Uppala | <p>Based on the logs from the pod, there are 2 issues you need to fix.</p>
<hr />
<h2>Resources</h2>
<pre><code>Warning FailedScheduling 20m (x3 over 21m) default-scheduler 0/2 nodes are available: 2 Insufficient memory.
</code></pre>
<p>The docs you provided specify that you need at least 2GiB of memory, so you should try to change your resources from 4Gi to 2Gi in both limits and requests. As mentioned in the AWS <a href="https://aws.amazon.com/ec2/instance-types/" rel="nofollow noreferrer">documentation</a>, a t2.large has 2 vCPUs and 8GiB of memory, so your current requests consume almost all of the VMs' resources.</p>
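<p>For example, a sketch only (adapting the manifest from the question; the reduced JVM heap size is my assumption, not part of the original answer):</p>
<pre class="lang-yaml prettyprint-override"><code>      containers:
      - name: elasticsearch
        env:
        - name: ES_JAVA_OPTS
          value: -Xms1g -Xmx1g   # keep the JVM heap below the container limit
        resources:
          requests:
            memory: 2Gi          # reduced from 4Gi so it fits on a t2.large node
            cpu: 0.5
          limits:
            memory: 2Gi
            cpu: 2
</code></pre>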
<hr />
<h2>VolumeBinding</h2>
<pre><code>Warning FailedScheduling <unknown> default-scheduler running "VolumeBinding" filter plugin for pod "quickstart-es-default-0": pod has unbound immediate PersistentVolumeClaims
</code></pre>
<p>If you do a <code>kubectl describe</code> on the PV and PVC, you should be able to see more detail on why they cannot be bound.</p>
<p>I assume it's because there is no default storage class.</p>
<p>As mentioned in the <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Depending on the installation method, your Kubernetes cluster may be deployed with an existing StorageClass that is marked as default. This default StorageClass is then used to dynamically provision storage for PersistentVolumeClaims that do not require any specific storage class. See <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">PersistentVolumeClaim documentation</a> for details.</p>
</blockquote>
<p>You can check whether you have a default storage class with:</p>
<pre><code>kubectl get storageclass
</code></pre>
<p>The following command can be used to make your storage class the default one:</p>
<pre><code>kubectl patch storageclass <name_of_storageclass> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</code></pre>
<hr />
<p>There is a related <a href="https://discuss.elastic.co/t/pod-has-unbound-immediate-persistentvolumeclaims/223788" rel="nofollow noreferrer">issue</a> on the Elastic discussion forum about this.</p>
<hr />
<p><strong>EDIT</strong></p>
<p>Quoted from the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps.</p>
</blockquote>
<ul>
<li>Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.</li>
<li>Manually clean up the data on the associated storage asset accordingly.</li>
<li>Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the storage asset definition.</li>
</ul>
<p>So please try to delete your PV and PVC and create them again.</p>
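<p>A minimal sketch of the commands, using the resource names from your output:</p>
<pre><code>kubectl delete pvc elasticsearch-data-quickstart-es-default-0
kubectl delete pv pv0001
</code></pre>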
| Jakub |
<p>Setting up a Django API and running it with Skaffold in a Kubernetes environment.</p>
<p><code>minikube</code> is running at <code>192.168.99.105</code>. Navigating to <code>/api/auth/test/</code> should just respond with <code>"Hello World!"</code> as you see below.</p>
<p><a href="https://i.stack.imgur.com/hcLCl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hcLCl.png" alt="enter image description here"></a></p>
<p>However, when I try to do the same thing in Postman, I get the following (picture shows <code>https</code>, but happens with <code>http</code> too).</p>
<p><a href="https://i.stack.imgur.com/XYTAf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XYTAf.png" alt="enter image description here"></a></p>
<p>Why would this be?</p>
<p>I have <code>--port-forward</code> set up so I can still access the API from Postman via <code>localhost:5000/auth/test/</code>, so this issue isn't preventing me from getting stuff done.</p>
| cjones | <p>Make sure you have <code>SSL certificate verification</code> set to <code>OFF</code>, as follows:</p>
<p><a href="https://i.stack.imgur.com/jdfhs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jdfhs.png" alt="stackoverflow" /></a></p>
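<p>As a quick sanity check outside Postman (my addition, not part of the original answer), you can send the same request with <code>curl</code> and skip certificate verification:</p>
<pre class="lang-sh prettyprint-override"><code># -k skips TLS certificate verification, similar to turning it off in Postman
curl -k https://192.168.99.105/api/auth/test/
</code></pre>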
| siddhesh bhasme |
<p>We have a request to expose certain pods in an AKS environment to the internet for 3rd party use.</p>
<p>Currently we have a private AKS cluster with a managed standard SKU load balancer in front using the advanced azure networking (basically Calico) where each Pod gets its own private IP from the Vnet IP space. All private IPs currently route through a firewall via user defined route in order to reach the internet, and vice versa. Traffic between on prem routes over a VPN connection through the azure virtual wan. I don’t want to change any existing routing behavior unless 100% necessary.</p>
<p>My question is, how do you expose an existing private AKS cluster’s specific Pods to be accessible from the internet? The entire cluster does not need to be exposed to the internet. The issue I foresee is the ephemeral Pods and ever changing IPs making simple NATing in the firewalls not an option. I’ve also thought about simply making a new AKS cluster with a public load balancer. The issue here though is security as it must still go through the firewalls and likely could with existing user defined routes</p>
<p>What is the recommended way to set up the architecture so that certain Pods in AKS are accessible over the internet, while still allowing those Pods to reach other Pods over the private network? I want to avoid exposing all Pods to the internet.</p>
| dcvl | <p>There are a couple of options that you can use to expose your application outside your network, such as a Service:</p>
<blockquote>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a>: Exposes the Service on each Node’s IP at a static port (the <code>NodePort</code>). A <code>ClusterIP</code> Service, to which the <code>NodePort</code> Service routes, is automatically created. You’ll be able to contact the <code>NodePort</code> Service, from outside the cluster, by requesting <code><NodeIP>:<NodePort></code>.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a>: Exposes the Service externally using a cloud provider’s load balancer. <code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</p>
</li>
</ul>
</blockquote>
<p>Also, there is another option, which is to use an <strong><code>ingress</code></strong>. IMO this is the best way to expose HTTP applications externally, because it's possible to create rules by path and host, which gives you much more flexibility than Services. Ingress supports <strong>only</strong> HTTP/HTTPS; if you need TCP, then use Services.</p>
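<p>For illustration, a minimal Ingress sketch (all names are placeholders, not from your cluster) that exposes only one Service while everything else stays private could look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-api
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-public-service   # only this Service is reachable from outside
            port:
              number: 80
</code></pre>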
<p>I'd recommend you take a look at these links to understand in depth how Services and Ingress work:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Services</a></p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes Ingress</a></p>
<p><a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress</a></p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/concepts-network" rel="nofollow noreferrer">AKS network concepts</a></p>
| Mr.KoopaKiller |
<p>I created a new cluster in GKE and see the following error in the logs:</p>
<p>"ERROR: logging before flag.Parse: E0907 16:33:58.813216 1 nanny_lib.go:128] Get <a href="https://10.0.0.1:443/api/v1/nodes?resourceVersion=0" rel="nofollow noreferrer">https://10.0.0.1:443/api/v1/nodes?resourceVersion=0</a>: http2: no cached connection was available
"</p>
<pre class="lang-json prettyprint-override"><code>{
textPayload: "ERROR: logging before flag.Parse: E0907 16:33:58.813216 1 nanny_lib.go:128] Get https://10.0.0.1:443/api/v1/nodes?resourceVersion=0: http2: no cached connection was available"
insertId: "zzz"
resource: {
type: "k8s_container"
labels: {
project_id: "zzz"
namespace_name: "kube-system"
container_name: "metrics-server-nanny"
pod_name: "metrics-server-v0.3.6-7b7d6c7576-jksst"
cluster_name: "zzz"
location: "zzz"
}
}
timestamp: "2020-09-07T16:33:58.813411604Z"
severity: "ERROR"
labels: {
gke.googleapis.com/log_type: "system"
k8s-pod/version: "v0.3.6"
k8s-pod/k8s-app: "metrics-server"
k8s-pod/pod-template-hash: "7b7d6c7576"
}
logName: "projects/zzz/logs/stderr"
receiveTimestamp: "2020-09-07T16:34:05.273766386Z"
}
</code></pre>
<p>I am trying to find a solution for how to fix this error.</p>
<p>Master version: 1.16.13-gke.1</p>
<p>Cloud Operations for GKE: System and workload logging and monitoring</p>
| Bukashk0zzz | <p>I've tested in my account with versions <code>1.16.13-gke.1</code>, <code>1.16.13-gke.400</code> and <code>1.17.9-gke1503</code> and got a similar error, but not the same:</p>
<pre><code>$ kubectl logs metrics-server-v0.3.6-547dc87f5f-jrnjt -c metrics-server-nanny -n kube-system
ERROR: logging before flag.Parse: I0910 11:57:46.951966 1 pod_nanny.go:67] Invoked by [/pod_nanny --config-dir=/etc/config --cpu=40m --extra-cpu=0.5m --memory=35Mi --extra-memory=4Mi --threshold=5 --deployment=metrics-server-v0.3.6 --container=metrics-server --poll-period=300000 --estimator=exponential --scale-down-delay=24h --minClusterSize=5]
ERROR: logging before flag.Parse: I0910 11:57:46.952179 1 pod_nanny.go:68] Version: 1.8.8
ERROR: logging before flag.Parse: I0910 11:57:46.952258 1 pod_nanny.go:84] Watching namespace: kube-system, pod: metrics-server-v0.3.6-547dc87f5f-jrnjt, container: metrics-server.
ERROR: logging before flag.Parse: I0910 11:57:46.952320 1 pod_nanny.go:85] storage: MISSING, extra_storage: 0Gi
ERROR: logging before flag.Parse: I0910 11:57:46.954042 1 pod_nanny.go:115] cpu: 40m, extra_cpu: 0.5m, memory: 35Mi, extra_memory: 4Mi
ERROR: logging before flag.Parse: I0910 11:57:46.954164 1 pod_nanny.go:144] Resources: [{Base:{i:{value:40 scale:-3} d:{Dec:<nil>} s:40m Format:DecimalSI} ExtraPerNode:{i:{value:5 scale:-4} d:{Dec:<nil>} s: Format:DecimalSI} Name:cpu} {Base:{i:{value:36700160 scale:0} d:{Dec:<nil>} s:35Mi Format:BinarySI} ExtraPerNode:{i:{value:4194304 scale:0} d:{Dec:<nil>} s:4Mi Format:BinarySI} Name:memory}]
</code></pre>
<p>Since I haven't deployed anything in the cluster, it seems to me that there is some issue in the <em>System and workload logging and monitoring</em> plugin enabled by default in GKE.</p>
<p>My suggestion is to open a public issue in the <a href="https://cloud.google.com/support/docs/issue-trackers" rel="nofollow noreferrer">GCP Issue tracker</a>, since the containers are managed by GKE.</p>
| Mr.KoopaKiller |
<p>I am trying to replace the authenticationEndpoint URL and other configuration in the config.json of an Angular project dynamically, using an environment variable in Kubernetes. For that I configured the environment variable in the Helm chart in the CI & CD pipeline of VSTS. But I am not sure how the config.json field will be replaced with the environment variable in Kubernetes. Could you please help me with this?</p>
<h2>env in the pod (Kubernetes), output of running the printenv cmd</h2>
<pre><code> authenticationEndpoint=http://localhost:8888/security/auth
</code></pre>
<h2> config.json</h2>
<pre><code> {
"authenticationEndpoint": "http://localhost:8080/Security/auth",
"authenticationClientId": "my-project",
"baseApiUrl": "http://localhost:8080/",
"homeUrl": "http://localhost:4300/"
}
</code></pre>
<h2> Generated yaml file from helm chart</h2>
<pre><code> # Source: sample-web/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: cloying-rattlesnake-sample-web
labels:
app.kubernetes.io/name: sample-web
helm.sh/chart: sample-web-0.1.0
app.kubernetes.io/instance: cloying-rattlesnake
app.kubernetes.io/managed-by: Tiller
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app.kubernetes.io/name: sample-web
app.kubernetes.io/instance: cloying-rattlesnake
---
# Source: sample-web/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: cloying-rattlesnake-sample-web
labels:
app.kubernetes.io/name: sample-web
helm.sh/chart: sample-web-0.1.0
app.kubernetes.io/instance: cloying-rattlesnake
app.kubernetes.io/managed-by: Tiller
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: sample-web
app.kubernetes.io/instance: cloying-rattlesnake
template:
metadata:
labels:
app.kubernetes.io/name: sample-web
app.kubernetes.io/instance: cloying-rattlesnake
spec:
containers:
- name: sample-web
image: "sample-web:stable"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
env:
- name: authenticationEndpoint
value: "http://localhost:8080/security/auth"
resources:
{}
---
# Source: sample-web/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cloying-rattlesnake-sample-web
labels:
app.kubernetes.io/name: sample-web
helm.sh/chart: sample-web-0.1.0
app.kubernetes.io/instance: cloying-rattlesnake
app.kubernetes.io/managed-by: Tiller
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: ""
http:
paths:
- path: /?(.*)
backend:
serviceName: cloying-rattlesnake-sample-web
servicePort: 80
</code></pre>
<h2>Absolute path of config.json</h2>
<pre><code>Ran shell cmd - kubectl exec -it sample-web-55b71d19c6-v82z4 /bin/sh
path: usr/share/nginx/html/config.json
</code></pre>
| Seenu | <p>Use an init container to modify your config.json when the pod starts.</p>
<h1>Updated Deployment.yaml</h1>
<pre><code> # Source: sample-web/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: cloying-rattlesnake-sample-web
labels:
app.kubernetes.io/name: sample-web
helm.sh/chart: sample-web-0.1.0
app.kubernetes.io/instance: cloying-rattlesnake
app.kubernetes.io/managed-by: Tiller
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: sample-web
app.kubernetes.io/instance: cloying-rattlesnake
template:
metadata:
labels:
app.kubernetes.io/name: sample-web
app.kubernetes.io/instance: cloying-rattlesnake
spec:
initContainers:
- name: init-myconfig
image: busybox:1.28
command: ['sh', '-c', 'cat /usr/share/nginx/html/config.json | sed -e "s#\$authenticationEndpoint#$authenticationEndpoint#g" > /tmp/config.json && cp /tmp/config.json /usr/share/nginx/html/config.json']
env:
- name: authenticationEndpoint
value: "http://localhost:8080/security/auth"
containers:
- name: sample-web
image: "sample-web:stable"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
env:
- name: authenticationEndpoint
value: "http://localhost:8080/security/auth"
volumeMounts:
- mountPath: /usr/share/nginx/html/config.json
name: config-volume
volumes:
- name: config-volume
hostPath:
path: /mnt/data.json # Create this file in the host where the pod starts. Content below.
type: File
</code></pre>
<h1>Create the <code>/mnt/data.json</code> file on the host where the pod starts</h1>
<pre><code>{
"authenticationEndpoint": "$authenticationEndpoint",
"authenticationClientId": "my-project",
"baseApiUrl": "http://localhost:8080/",
"homeUrl": "http://localhost:4300/"
}
</code></pre>
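<p>As an alternative to <code>hostPath</code> (my own suggestion, not part of the original answer), the same template could be shipped as a ConfigMap so the file does not need to exist on every node:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-web-config
data:
  data.json: |
    {
      "authenticationEndpoint": "$authenticationEndpoint",
      "authenticationClientId": "my-project",
      "baseApiUrl": "http://localhost:8080/",
      "homeUrl": "http://localhost:4300/"
    }
</code></pre>
<p>The ConfigMap would be mounted read-only at a template path (for example <code>/config-template/data.json</code>), and the init container would write the rendered file into an <code>emptyDir</code> volume that the nginx container mounts at <code>/usr/share/nginx/html/config.json</code>.</p>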
| Prakash Krishna |
<p>I have <code>Chart.yaml</code> as:</p>
<pre><code>dependencies:
- name: mysql
version: "5.0.9"
repository: "https://charts.bitnami.com/bitnami"
alias: a
- name: mysql
version: "5.0.9"
repository: "https://charts.bitnami.com/bitnami"
alias: b
</code></pre>
<p>and <code>values.yaml</code> as</p>
<pre><code>mysql:
somename: Overriden
somename2: NotOverriden
a:
somename: A
b:
somename: B
</code></pre>
<p>but the helm is reading only values from <code>a:</code> and <code>b:</code>. I would expect that values from <code>mysql:</code> are applied to both <code>a:</code> and <code>b:</code> and overridden where needed.</p>
<p>Is this possible at all, or is there some other way?</p>
| Bojan Vukasovic | <p>You could use yaml anchors and aliases.</p>
<pre><code> mysql: &mysql
somename: Overriden
somename2: NotOverriden
a:
<<: *mysql
somename: A
b:
<<: *mysql
somename: B
</code></pre>
| GK_ |
<p>First, I'm a complete Ansible playbook noob. I'm busy trying to understand a cluster at my workplace. I tried following the readme's quick start guide whilst also following my company's kubespray fork. One thing that is really bothering me right now is that the configuration for our personal cluster is littered throughout the entire fork. Is there no way to separate my personal config files for the cluster from the kubespray repository? My idea is that I have a kubespray directory, which is a fork or master of the kubespray repository, and when running kubespray I supply my cluster's config to it. Currently I can't see a clean and manageable way to maintain cluster resources with commits while also updating kubespray when I want to apply a new version. The current process seems like an utter mess!</p>
| Jared Rieger | <p>So I ended up finding a nice solution that separates personal configuration from the kubespray repo. I assume this would actually be pretty obvious to seasoned Ansible users, but the structure is as follows.</p>
<pre><code>.
├── README.md
├── bin
├── docs
├── inventory
│ └── prod
│ ├── group_vars
│ │ ├── all
│ │ │ ├── all.yml
│ │ │ ├── azure.yml
│ │ │ ├── coreos.yml
│ │ │ ├── docker.yml
│ │ │ ├── oci.yml
│ │ │ └── openstack.yml
│ │ ├── balance.yml
│ │ ├── etcd.yml
│ │ └── k8s-cluster
│ │ ├── addons.yml
│ │ ├── ip.yml
│ │ ├── k8s-cluster.yml
│ │ ├── k8s-net-calico.yml
│ │ ├── k8s-net-canal.yml
│ │ ├── k8s-net-cilium.yml
│ │ ├── k8s-net-contiv.yml
│ │ ├── k8s-net-flannel.yml
│ │ ├── k8s-net-kube-router.yml
│ │ └── k8s-net-weave.yml
│ └── hosts.ini
└── kubespray
</code></pre>
<p>Now within the main dir you can run your kubespray commands like so</p>
<pre class="lang-sh prettyprint-override"><code>ansible-playbook \
$(pwd)/kubespray/scale.yml \
--inventory $(pwd)/inventory/prod/hosts.ini \
--user root \
--become \
--become-user=root \
--limit=$node \
--extra-vars 'ansible_python_interpreter=/usr/bin/python3' \
--flush-cache
</code></pre>
<hr />
<p>The great thing about this structure is that you can now use git to track changes to your infrastructure only, without having to worry about meddling with
the files within Kubespray. Plus, by having kubespray as a git submodule you can also track which Kubespray version goes with which server configuration. Just general git goodness.</p>
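<p>For reference (assuming the upstream repository location), adding kubespray as a submodule is a one-liner:</p>
<pre class="lang-sh prettyprint-override"><code># Pin a specific kubespray release inside your own infrastructure repo
git submodule add https://github.com/kubernetes-sigs/kubespray.git kubespray
git -C kubespray checkout v2.12.0   # example tag, pick the release you need
</code></pre>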
<p>Anyway, I hope someone finds this useful. I've been using it for a couple of months and have found it far cleaner than having your configuration within the kubespray module.</p>
| Jared Rieger |
<p>I have updated a running cluster with a new image which, unfortunately, is crashing. I want to log into the pod to look at the logs. What is the way to do so?</p>
<pre class="lang-sh prettyprint-override"><code>manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get pods
NAME READY STATUS RESTARTS AGE
codingjediweb-7c45484669-czcpk 0/1 CrashLoopBackOff 6 9m34s
codingjediweb-7c45484669-qn4m5 0/1 CrashLoopBackOff 6 9m32s
</code></pre>
<p>The application does not generate many console logs. The main logs are in a file. How can I access that file?</p>
<pre class="lang-sh prettyprint-override"><code>manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl logs codingjediweb-7c45484669-czcpk
Oops, cannot start the server.
play.api.libs.json.JsResult$Exception: {"obj":[{"msg":["Unable to connect with database"],"args":[]}]}
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl logs codingjediweb-7c45484669-qn4m5
Oops, cannot start the server.
play.api.libs.json.JsResult$Exception: {"obj":[{"msg":["Unable to connect with database"],"args":[]}]}
</code></pre>
<p>UPDATE:
I tried to implement Christoph's suggestion of using two containers in a pod - one for the main application and the other for logging. I switched back to the stable version of my application to be sure that the application is up and running and generating logs. This would help test that the pattern works. It looks like the logging container keeps exiting/crashing.</p>
<p>yaml file</p>
<pre><code>manuchadha25@cloudshell:~ (copper-frame-262317)$ cat codingjediweb-nodes.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: codingjediweb
spec:
replicas: 2
selector:
matchLabels:
app: codingjediweb
template:
metadata:
labels:
app: codingjediweb
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: codingjediweb
image: docker.io/manuchadha25/codingjediweb:03072020v2
volumeMounts:
- name: shared-logs
mountPath: /deploy/codingjediweb-1.0/logs/
env:
- name: db.cassandraUri
value: cassandra://xx.yy.xxx.238:9042
- name: db.password
value: 9__something
- name: db.keyspaceName
value: something2
- name: db.username
value: superawesomeuser
ports:
- containerPort: 9000
- name: logging
image: busybox
volumeMounts:
- name: shared-logs
mountPath: /deploy/codingjediweb-1.0/logs/
command: ["tail -f /deploy/codingjediweb-1.0/logs/*.log"]
</code></pre>
<p>When I apply the configuration, only one container stays up:</p>
<pre><code>manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 1 10h
codingjediweb-857c6d584b-n4njp 1/2 CrashLoopBackOff 6 8m46s
codingjediweb-857c6d584b-s2hg2 1/2 CrashLoopBackOff 6 8m46s
</code></pre>
<p>Further inspection shows that the main application is up:</p>
<pre><code>manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl exec -it codingjediweb-857c6d584b-s2hg2 -c logging -- bash
error: unable to upgrade connection: container not found ("logging")
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl exec -it codingjediweb-857c6d584b-s2hg2 -c codingjediweb -- bash
</code></pre>
<p>And the application is generating logs at the right path</p>
<pre><code>root@codingjediweb-857c6d584b-s2hg2:/deploy# tail -f /deploy/codingjediweb-1.0/logs/*.log
2020-07-07 06:40:37,385 [DEBUG] from com.datastax.driver.core.Connection in codingJediCluster-nio-worker-0 - Connection[/34.91.191.238:9042-2, inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
2020-07-07 06:40:37,389 [DEBUG] from com.datastax.driver.core.Connection in codingJediCluster-nio-worker-0 - Connection[/34.91.191.238:9042-2, inFlight=0, closed=false] heartbeat query succeeded
2020-07-07 06:41:07,208 [DEBUG] from com.datastax.driver.core.Connection in codingJediCluster-nio-worker-0 - Connection[/34.91.191.238:9042-1, inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
2020-07-07 06:41:07,210 [DEBUG] from com.datastax.driver.core.Connection in codingJediCluster-nio-worker-0 - Connection[/34.91.191.238:9042-1, inFlight=0, closed=false] heartbeat query succeeded
2020-07-07 06:41:07,271 [DEBUG] from com.datastax.driver.core.Connection in codingJediCluster-nio-worker-0 - Connection[/10.44.1.4:9042-1, inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
2020-07-07 06:41:07,274 [DEBUG] from com.datastax.driver.core.Connection in codingJediCluster-nio-worker-0 - Connection[/10.44.1.4:9042-1, inFlight=0, closed=false] heartbeat query succeeded
2020-07-07 06:41:07,332 [DEBUG] from com.datastax.driver.core.Connection in codingJediCluster-nio-worker-0 - Connection[/10.44.2.5:9042-1, inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
2020-07-07 06:41:07,337 [DEBUG] from com.datastax.driver.core.Connection in codingJediCluster-nio-worker-0 - Connection[/10.44.2.5:9042-1, inFlight=0, closed=false] heartbeat query succeeded
2020-07-07 06:41:07,392 [DEBUG] from com.datastax.driver.core.Connection in codingJediCluster-nio-worker-0 - Connection[/34.91.191.238:9042-2, inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
</code></pre>
| Manu Chadha | <p>Another way to get the logs is to use a volume on your node with <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>hostPath</code></a>.</p>
<p>You can create a <code>hostPath</code> and then mount it as a volume in your pod. When the container runs, it will generate the logs in this directory, which is persisted on your node's disk.</p>
<blockquote>
<p><strong>Note:</strong> If you have more than one node, the directory must exist on all of them.</p>
</blockquote>
<p><strong>Example:</strong></p>
<p>To use the dir <code>/mnt/data</code> of your node, create the dir with <code>mkdir -p /mnt/data</code> and apply the yaml below to create the persistent volume and persistent volume claim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<p>Add the <code>persistentVolumeClaim</code> and <code>volumeMounts</code> to your deployment file, for example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: codingjediweb
spec:
replicas: 2
selector:
matchLabels:
app: codingjediweb
template:
metadata:
labels:
app: codingjediweb
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: codingjediweb
image: docker.io/manuchadha25/codingjediweb:03072020v2
env:
- name: db.cassandraUri
value: cassandra://xx.yy.xxx.238:9042
- name: db.password
value: 9__something
- name: db.keyspaceName
value: something2
- name: db.username
value: superawesomeuser
ports:
- containerPort: 9000
volumeMounts:
- mountPath: "/deploy/codingjediweb-1.0/logs/"
name: task-pv-storage
</code></pre>
| Mr.KoopaKiller |
<p>I have two clusters</p>
<pre><code>NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
cassandra-cluster europe-west4-a 1.14.10-gke.36 xx.90.xx.31 n1-standard-1 1.14.10-gke.36 3 RUNNING
codingjediweb-cluster europe-west4-a 1.14.10-gke.36 uu.90.uu.182 n1-standard-1 1.14.10-gke.36 2 RUNNING
manuchadha25@cloudshell:~ (copper-frame-262317)$
</code></pre>
<p>I want to run the following command on cassandra-cluster. How do I make cassandra-cluster my current context?</p>
<p>I am getting an error:</p>
<pre><code>CASS_USER=$(kubectl --cluster gke_copper-frame-262317_europe-west4-a_cassandra-cluster get secret cluster1-superuser -o json | jq -r '.data.username' | base64 --decode)kubectl
Error from server (NotFound): secrets "cluster1-superuser" not found
</code></pre>
<p>I tried this but it failed.</p>
<pre><code>manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl config use-context cassandra-cluster
error: no context exists with the name: "cassandra-cluster"
</code></pre>
| Manu Chadha | <p>You can work with multiple clusters by setting the correct context, as mentioned <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration" rel="nofollow noreferrer">here</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>kubectl config get-contexts # display list of contexts
kubectl config current-context # display the current-context
kubectl config use-context my-cluster-name # set the default context to my-cluster-name
</code></pre>
<p>When working with multiple clusters, you always need to know which cluster you are running commands against. To make that easier, you can use <a href="https://github.com/jonmosco/kube-ps1" rel="nofollow noreferrer">this bash script</a> to show the current context and namespace in your <code>$PS1</code> prompt.</p>
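<p>In your specific case the context most likely does not exist yet because the cluster credentials were never fetched. A sketch of what that usually looks like on GKE (cluster name, zone and project taken from your output):</p>
<pre class="lang-sh prettyprint-override"><code># Fetch credentials and create a kubeconfig context for the cluster
gcloud container clusters get-credentials cassandra-cluster --zone europe-west4-a

# Either switch the current context...
kubectl config use-context gke_copper-frame-262317_europe-west4-a_cassandra-cluster

# ...or target a context per command without switching
kubectl --context gke_copper-frame-262317_europe-west4-a_cassandra-cluster get secret cluster1-superuser
</code></pre>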
| Mr.KoopaKiller |
<p><a href="https://i.stack.imgur.com/fGisn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fGisn.png" alt="enter image description here"></a></p>
<p>This is my Kubernetes cluster node monitoring. The K8s cluster is running on GKE and uses Stackdriver monitoring and logging.</p>
<p>The cluster size is 4 vCPU and 15GB memory. In the CPU graph, why is there a spike above the CPU limit? My cluster has 4 vCPU, but the spike goes above that limit.</p>
<p>There is no cluster autoscaler, node autoscaler, or vertical autoscaler running.</p>
<p>The same question applies to memory.</p>
<p><a href="https://i.stack.imgur.com/wgUxH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wgUxH.png" alt="enter image description here"></a></p>
<p>The total size is 15 GB but the capacity is 15.77 GB and allocatable is 13 GB, meaning 2 GB is for the Kubernetes system.</p>
<p>For proper monitoring, I have installed the default Kubernetes dashboard:</p>
<p><a href="https://i.stack.imgur.com/njsre.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/njsre.png" alt="enter image description here"></a></p>
<p>This shows usage is around 10.2GB, so do I still have 2-3 GB of RAM?
As allocatable is 13 GB, has the system taken 2 GB? Am I right?</p>
<p>I have also installed Grafana.</p>
<p><a href="https://i.stack.imgur.com/uR81w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uR81w.png" alt="enter image description here"></a></p>
<p>This shows 450 MB of free RAM. I have imported this dashboard.</p>
<p>But if it's using around 10GB of RAM then out of 13 GB I should have 2-3 GB remaining.</p>
<p><strong>Update :</strong></p>
<pre><code>Kubectl describe node <node>
Resource Requests Limits
-------- -------- ------
cpu 3073m (78%) 5990m (152%)
memory 7414160Ki (58%) 12386704Ki (97%)
</code></pre>
<p>If you look at the first Stackdriver graph, as usage increases the RAM limit increases to 15GB, while the allocatable (usable) memory is only 13GB. How?</p>
| Harsh Manvar | <p>In your case you have 2 questions, one related to CPU usage and the other to memory usage:</p>
<p>You provided limited information, and CPU and memory usage depend on different aspects, such as pods, number of nodes, etc.</p>
<p>You mentioned you aren’t using an autoscaler for nodes.</p>
<p>On this page for <a href="https://cloud.google.com/monitoring/api/metrics_gcp?hl=en_US&_ga=2.33532794.-598886069.1562156452#gcp-container" rel="nofollow noreferrer">Stackdriver Monitoring</a>, in the containers section, you can see that the CPU graph uses “container/cpu/usage_time”, which is explained as: “Cumulative CPU usage on all cores in seconds. This number divided by the elapsed time represents usage as a number of cores, regardless of any core limit that might be set. Sampled every 60 seconds”.</p>
<p>On the same page, regarding memory, you can read that the graph uses “container/memory/bytes_used”, which is described as: “Memory usage in bytes, broken down by type: evictable and non-evictable. Sampled every 60 seconds.
memory_type: Either <code>evictable</code> or <code>non-evictable</code>. Evictable memory is memory that can be easily reclaimed by the kernel, while non-evictable memory cannot.” In this case it is using non-evictable.</p>
<p>Regarding your question about how much memory the system leaves allocatable, it depends on the size you chose when creating the cluster.</p>
<p>For example, I created a cluster with 1 vCPU and 4Gb of memory, and the allocatable memory is 2.77Gb.</p>
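<p>As a generic check (my addition, not tied to your cluster), you can see the exact capacity and allocatable values the scheduler works with, and the current usage, with:</p>
<pre class="lang-sh prettyprint-override"><code># Shows Capacity vs Allocatable per node, plus current requests/limits
kubectl describe nodes

# Shows actual node usage as reported by the metrics pipeline
kubectl top nodes
</code></pre>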
| Alfredo F. |
<p>Is there a Go client to drain a Kubernetes node?
I am writing E2E test cases using the existing Kubernetes E2E framework, and I need to cover a node drain scenario for storage.</p>
| ambikanair | <p>There currently isn't a method in client-go to facilitate draining. I believe that there is some work to bring that functionality to client-go, but it's not there yet. That being said you can base an E2E test case on the drain code found at:
<a href="https://github.com/kubernetes/kubectl/tree/master/pkg/drain" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/tree/master/pkg/drain</a></p>
| Nicholas Lane |
<h3>Short version</h3>
<p>How can I get an integrated-graphics-accelerated headless X display running inside a Google Cloud Kubernetes Engine pod?</p>
<h3>Background</h3>
<p>I'm working on a reinforcement learning project that involves running a large number of simulated environments in parallel. I'm doing the simulations using Google Cloud Kubernetes Engine, with <a href="https://github.com/panda3d/panda3d" rel="nofollow noreferrer">panda3d</a> rendering to an <a href="https://www.x.org/releases/X11R7.6/doc/man/man1/Xvfb.1.xhtml" rel="nofollow noreferrer">Xvfb</a> virtual display.</p>
<p>However, I've noticed that the simulation on my Macbook runs 2x faster than the one on Kubernetes, and profiling suggests the difference is entirely from <a href="https://i.stack.imgur.com/FMkFe.jpg" rel="nofollow noreferrer">drawing the frame</a>. Other operations - like linear algebra - are at most 30% slower. My theory is this is because on my Macbook panda3d can take advantage of the integrated graphics, while Xvfb uses software rendering.</p>
<p>My suspicion - gathering together the info in the links below - is the trick is to get a hardware-accelerated headless X server running, then use Virtual GL to fork it across a second Xvfb display. But lord, I am way out of my depth here.</p>
<h3>Uncertainties</h3>
<ul>
<li>Is hardware vs software rendering actually the source of my slowdown?</li>
<li>Do Google Cloud instances have integrated graphics? </li>
<li>Can a Kubernetes pod use integrated graphics without modifications to the host?</li>
</ul>
<h3>Useful sources</h3>
<ul>
<li><a href="https://arrayfire.com/remote-off-screen-rendering-with-opengl/" rel="nofollow noreferrer">Headless rendering on a VM using an NVidia card</a></li>
<li><a href="http://wiki.ros.org/docker/Tutorials/Hardware%20Acceleration" rel="nofollow noreferrer">Setting up Intel hardware-accelerated docker instances</a>, though it requires some host commands</li>
<li><a href="https://medium.com/@pigiuz/hw-accelerated-gui-apps-on-docker-7fd424fe813e" rel="nofollow noreferrer">NVidia-accelerated hardware rendering in Docker</a></li>
<li><a href="https://virtualgl-users.narkive.com/ysqsq4v3/running-virtualgl-with-xvfb-on-ec2-ubuntu-headless-gpu-instance" rel="nofollow noreferrer">Discussion of combining VirtualGL and Xvfb</a></li>
</ul>
| Andy Jones | <p>I will answer your questions in order:</p>
<ul>
<li><p>Most likely yes, but it is hard to determine for sure with the information you provided. It depends on how your software and the library you are using (panda3d) handle the rendering.</p></li>
<li><p>Google Cloud Compute Engine instances do not have integrated graphics, but you can always use GPUs (supported GPUs and related zones listed <a href="https://cloud.google.com/compute/docs/gpus/#gpus-list" rel="nofollow noreferrer">here</a>). You can enable virtual displays on certain instances as explained in this <a href="https://cloud.google.com/compute/docs/instances/enable-instance-virtual-display" rel="nofollow noreferrer">document</a>.</p></li>
<li><p>You can setup Kubernetes clusters or node pools in Google Cloud where the nodes are equipped with Nvidia GPUs as it is explained <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus" rel="nofollow noreferrer">here</a>.</p></li>
</ul>
<p>You can take a look <a href="https://github.com/GoogleCloudPlatform/container-engine-accelerators" rel="nofollow noreferrer">here</a> to check some examples of how to use Kubernetes with GPUs on Google Cloud Platform.</p>
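<p>As an illustration (a sketch with placeholder names, not from the linked examples), once a GPU node pool exists a pod requests the accelerator like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: render-worker            # placeholder name
spec:
  containers:
  - name: simulator
    image: my-panda3d-image      # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1        # schedules the pod onto a node with an NVIDIA GPU
</code></pre>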
| marcusp |
<p>I have a link to a public URL in the format of <code>https://storage.googleapis.com/companyname/foldername/.another-folder/file.txt</code></p>
<p>I want to create an ingress rule that maps a path to this public file, so that whoever opens a specific URL, e.g., <a href="https://myapp.mydomain.com/.another-folder/myfile.txt" rel="nofollow noreferrer">https://myapp.mydomain.com/.another-folder/myfile.txt</a>, gets the file above.</p>
<p>I tried a few different ingress rules such as:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: googlestoragebucket
spec:
externalName: storage.googleapis.com
ports:
- name: https
port: 443
protocol: TCP
targetPort: 443
type: ExternalName
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: staging-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: staging-static-ip
kubernetes.io/ingress.class: gce
spec:
defaultBackend:
service:
name: website-frontend
port:
number: 80
rules:
- host: myapp.mydomain.com
http:
paths:
- path: /.another-folder/
pathType: Prefix
backend:
service:
name: googlestoragebucket
port:
number: 443
- pathType: ImplementationSpecific
backend:
service:
name: myactual-app
port:
number: 80
</code></pre>
<p>But I couldn't make it wrok. In this case I've got an error: <code>Translation failed: invalid ingress spec: service "staging/googlestoragebucket" is type "ExternalName", expected "NodePort" or "LoadBalancer</code></p>
<p>I don’t mind any other solutions to achieve the same result in the context of GCP and Kubernetes.</p>
<p>Do you have any ideas?</p>
<p>Looking forward for you suggestions.</p>
| Michel Gokan Khan | <p>I think that you should be able to do it via a Cloud external load balancer.</p>
<p>Here is some information about that:</p>
<p><a href="https://cloud.google.com/load-balancing/docs/https/ext-load-balancer-backend-buckets" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/https/ext-load-balancer-backend-buckets</a></p>
<p>Then you can point the ingress to that load balancer:
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features</a></p>
<p>Another option is to use a proxy, such as Nginx; there is a post on GitHub about this: <a href="https://github.com/kubernetes/ingress-nginx/issues/1809" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/1809</a></p>
| Chaotic Pechan |
<p>I have a pod deployed in a Kubernetes environment and created a service account with full access to my S3 bucket. I want to upload my logs to the S3 bucket.</p>
<pre><code>module.exports.uploadFile = () => {
const s3 = new AWS.S3();
const fileContent = fs.readFileSync(path.resolve(__dirname,'../logger/MyLogFile.log'))
const params = {
Bucket: 'MYBUCKETNAME',
Key: 'MyLogFile.log',
Body: fileContent
};
s3.putObject(params, function(err, data) {
if (err) {
console.log(err)
logger.error('file upload error')
throw err;
}
logger.info(`File uploaded successfully. ${data.Location}`);
})}
</code></pre>
<p>This is the error I am getting...</p>
<blockquote>
<p>Error [CredentialsError]: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
at Timeout.connectTimeout [as _onTimeout] (/usr/src/app/node_modules/aws-sdk/lib/http/node.js:69:15)
at listOnTimeout (internal/timers.js:557:17)
at processTimers (internal/timers.js:500:7) {
code: 'CredentialsError',
time: 2021-12-09T10:43:29.712Z,
retryable: true,
originalError: {
message: 'Could not load credentials from any providers',
code: 'CredentialsError',
time: 2021-12-09T10:43:29.705Z,
retryable: true,
originalError: {
message: 'EC2 Metadata roleName request returned error',
code: 'TimeoutError',
time: 2021-12-09T10:43:29.705Z,
retryable: true,
originalError: {
message: 'Socket timed out without establishing a connection',
code: 'TimeoutError',
time: 2021-12-09T10:43:29.705Z,
retryable: true
}
}
}
}</p>
</blockquote>
| Joshua | <p>Creating a service account on the EKS side for my application resolved the problem.</p>
<p>After creating the service account my file changes will be</p>
<p>1.) aws.yml</p>
<pre><code>serviceAccount:
  enabled: true
  name: MY_SERVICE_ACC_NAME
</code></pre>
<p>2.) my AWS.S3 object creation will not have any changes</p>
<pre><code> const s3 = new AWS.S3();
</code></pre>
<p>Now the AWS.S3 object picks up the required credentials automatically.</p>
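<p>For reference, the Kubernetes <code>ServiceAccount</code> created for this (IAM Roles for Service Accounts) looks roughly like the sketch below; the role ARN is a placeholder, and the pod spec must reference the account via <code>serviceAccountName</code>:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: MY_SERVICE_ACC_NAME
  annotations:
    # placeholder ARN of an IAM role that allows s3:PutObject on MYBUCKETNAME
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-s3-upload-role
</code></pre>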
| Joshua |
<p>Is there any command that can be used to apply new changes, because when I apply new changes with:</p>
<pre><code>istioctl apply manifest --set XXX.XXXX=true
</code></pre>
<p>It overwrites the previously applied configuration and sets the other values back to their defaults.</p>
| Shudhanshu Badkur | <p>That might not work because you have used <code>istioctl manifest apply</code>, which is deprecated and has been replaced by <code>istioctl install</code> since Istio 1.6.</p>
<p>Quoted from the <a href="https://istio.io/latest/docs/setup/install/istioctl/#install-istio-using-the-default-profile" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>Note that istioctl install and istioctl manifest apply are exactly the same command. In Istio 1.6, the simpler install command replaces manifest apply, which is deprecated and will be removed in 1.7.</p>
</blockquote>
<p>AFAIK there are 2 ways to apply new changes in Istio:</p>
<h2><a href="https://istio.io/latest/docs/setup/install/istioctl/" rel="nofollow noreferrer">istioctl install</a></h2>
<blockquote>
<p>To enable the Grafana dashboard on top of the default profile, set the addonComponents.grafana.enabled configuration parameter with the following command:</p>
</blockquote>
<pre><code>$ istioctl install --set addonComponents.grafana.enabled=true
</code></pre>
<blockquote>
<p>In general, you can use the --set flag in istioctl as you would with Helm. The only difference is you must prefix the setting paths with values. because this is the path to the Helm pass-through API in the <a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/" rel="nofollow noreferrer">IstioOperator API</a>.</p>
</blockquote>
<h2><a href="https://istio.io/latest/docs/setup/install/standalone-operator/" rel="nofollow noreferrer">istio operator</a></h2>
<blockquote>
<p>In addition to installing any of Istio’s built-in <a href="https://istio.io/latest/docs/setup/additional-setup/config-profiles/" rel="nofollow noreferrer">configuration profiles</a>, istioctl install provides a complete API for customizing the configuration.</p>
<p><a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/" rel="nofollow noreferrer">The IstioOperator API</a>
The configuration parameters in this API can be set individually using --set options on the command line. For example, to enable the control plane security feature in a default configuration profile, use this command:</p>
</blockquote>
<pre><code>$ istioctl install --set values.global.controlPlaneSecurityEnabled=true
</code></pre>
<blockquote>
<p>Alternatively, the IstioOperator configuration can be specified in a YAML file and passed to istioctl using the -f option:</p>
</blockquote>
<pre><code>$ istioctl install -f samples/operator/pilot-k8s.yaml
</code></pre>
<blockquote>
<p>For backwards compatibility, the previous <a href="https://archive.istio.io/v1.4/docs/reference/config/installation-options/" rel="nofollow noreferrer">Helm installation options</a>, with the exception of Kubernetes resource settings, are also fully supported. To set them on the command line, prepend the option name with “values.”. For example, the following command overrides the pilot.traceSampling Helm configuration option:</p>
</blockquote>
<pre><code>$ istioctl install --set values.pilot.traceSampling=0.1
</code></pre>
<blockquote>
<p>Helm values can also be set in an IstioOperator CR (YAML file) as described in Customize Istio settings using the <a href="https://archive.istio.io/v1.4/docs/reference/config/installation-options/" rel="nofollow noreferrer">Helm API</a>, below.</p>
</blockquote>
<blockquote>
<p>If you want to set Kubernetes resource settings, use the IstioOperator API as described in Customize Kubernetes settings.</p>
</blockquote>
<p>Related documentation and examples for istio operator.</p>
<ul>
<li><a href="https://istio.io/latest/docs/setup/install/istioctl/#customizing-the-configuration" rel="nofollow noreferrer">https://istio.io/latest/docs/setup/install/istioctl/#customizing-the-configuration</a></li>
<li><a href="https://istio.io/latest/docs/setup/install/standalone-operator/#update" rel="nofollow noreferrer">https://istio.io/latest/docs/setup/install/standalone-operator/#update</a></li>
<li><a href="https://stackoverflow.com/a/61865633/11977760">https://stackoverflow.com/a/61865633/11977760</a></li>
<li><a href="https://github.com/istio/operator/blob/master/samples/pilot-advanced-override.yaml" rel="nofollow noreferrer">https://github.com/istio/operator/blob/master/samples/pilot-advanced-override.yaml</a></li>
</ul>
| Jakub |
<p><strong>If I have a backend implementation for TLS, does Ingress NGINX expose it correctly?</strong></p>
<p>I'm exposing an MQTT service through an Ingress NGNIX with the following configuration:</p>
<p><strong>ConfigMap:</strong></p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-ingress-tcp-microk8s-conf
namespace: ingress
#Add the service we want to expose
data:
1883: "default/mosquitto-broker:1883"
</code></pre>
<p><strong>DaemonSet:</strong></p>
<pre><code>---
apiVersion: apps/v1
kind: DaemonSet
...
spec:
selector:
matchLabels:
name: nginx-ingress-microk8s
template:
metadata:
...
spec:
...
ports:
- containerPort: 80
- containerPort: 443
#Add the service we want to expose
- name: prx-tcp-1883
containerPort: 1883
hostPort: 1883
protocol: TCP
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
- --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
- --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
$DEFAULT_CERT
$EXTRA_ARGS
</code></pre>
<p>I have configured the MQTT broker to use TLS in the backend. When I run the broker in my machine, outside the kubernetes cluster, Wireshark detects the messages as TLS, and it doesn't show anything about MQTT:</p>
<p><a href="https://i.stack.imgur.com/DvAnT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DvAnT.png" alt="enter image description here" /></a></p>
<p>However, if I run the broker inside the cluster, it shows that I'm using MQTT, and nothing about TLS. But the messages aren't read correctly:
<a href="https://i.stack.imgur.com/Jay4R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jay4R.png" alt="enter image description here" /></a></p>
<p>And finally, if I run the MQTT broker inside the cluster without TLS, Wireshark correctly detects the MQTT packets:
<a href="https://i.stack.imgur.com/QoudM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QoudM.png" alt="enter image description here" /></a></p>
<p>My question is: <strong>Is the connection encrypted when I use TLS inside the cluster?</strong> It's true that Wireshark doesn't show the content of the packets, but it knows I'm using MQTT. Maybe it's because the headers aren't encrypted, but the payload is? Does anyone know exactly?</p>
| Manu Ruiz Ruiz | <p>The problem was that I was running TLS MQTT on port 8883 as recommended by the documentation (not on port 1883 used for standard MQTT), but Wireshark didn't recognise this port as an MQTT port, so the dissection shown by Wireshark was somewhat broken.</p>
| Manu Ruiz Ruiz |
<p>I'm sporting a fresh <strong>Minikube</strong> install on an <strong>ArchLinux</strong> box, using <strong>Docker</strong> as the <strong>Minikube</strong> driver.</p>
<p>I started the <strong>minikube</strong> "cluster" using the <code>minikube start</code> command. <code>docker container ls</code> tells us it's up and running:</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d86070af0c21 gcr.io/k8s-minikube/kicbase:v0.0.28 "/usr/local/bin/entr…" 50 minutes ago Up 50 minutes 127.0.0.1:49162->22/tcp, 127.0.0.1:49161->2376/tcp, 127.0.0.1:49160->5000/tcp, 127.0.0.1:49159->8443/tcp, 127.0.0.1:49158->32443/tcp minikube
</code></pre>
<p>I'm trying to run a simple <strong>nginx</strong> pod, using this command: <code>kubectl run my-nginx --image nginx</code></p>
<p>Since I'm pulling a public image from a public repo, I would expect I don't need any authentication. But the <code>describe pod</code> sub-command shows:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 47s default-scheduler Successfully assigned default/my-nginx to minikube
Normal BackOff 31s kubelet Back-off pulling image "nginx"
Warning Failed 31s kubelet Error: ImagePullBackOff
Normal Pulling 19s (x2 over 46s) kubelet Pulling image "nginx"
Warning Failed 4s (x2 over 31s) kubelet Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 4s (x2 over 31s) kubelet Error: ErrImagePull
</code></pre>
<p>When I try to <strong>curl</strong> the URL found in the error message from inside the <strong>minikube</strong> container, it shows that authentication is needed:</p>
<pre><code>patres@arch:~$ minikube ssh
docker@minikube:~$ curl https://registry-1.docker.io/v2/
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
</code></pre>
<p>When I try to pull that very image from host using <code>docker pull nginx</code> command, the image gets pulled, no auth required.</p>
<p>I also tried to create a <strong>kubernetes</strong> secret this way, then launching the pod using YAML with that secret, but it was to no avail.</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=https://registry-1.docker.io/v2/ --docker-username=myusername --docker-password=mypass [email protected]
</code></pre>
<p>Finally, it seems like the issue might not be unique to <strong>DockerHub</strong>, since if I follow
the official <strong>minikube</strong> <a href="https://minikube.sigs.k8s.io/docs/start/" rel="noreferrer">documentation</a> and launch the default <code>hello-minikube</code> deployment:</p>
<pre><code>kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
</code></pre>
<p>I get the same <code>ImagePullBackOff</code> error:</p>
<pre><code>$ kubectl get pod hello-minikube-6ddfcc9757-zdzz2
NAME READY STATUS RESTARTS AGE
hello-minikube-6ddfcc9757-zdzz2 0/1 ImagePullBackOff 0 6m11s
</code></pre>
| Pat Res | <p>The problem got resolved by one of these actions (not sure by which exactly):</p>
<ul>
<li>terminating my VPN connection</li>
<li>deleting the <strong>minikube</strong> container and image</li>
<li>rebooting my computer</li>
<li>starting anew with <code>minikube start</code></li>
</ul>
| Pat Res |
<p>I have created a service account bound to a cluster role. Is it possible to deploy pods across different namespaces with this service account through the API?</p>
<p>Below is the template from which the role creation and binding is done: </p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: api-access
rules:
-
apiGroups:
- ""
- apps
- autoscaling
- batch
- extensions
- policy
- rbac.authorization.k8s.io
resources:
- componentstatuses
- configmaps
- daemonsets
- deployments
- events
- endpoints
- horizontalpodautoscalers
- ingress
- jobs
- limitranges
- namespaces
- nodes
- pods
- persistentvolumes
- persistentvolumeclaims
- resourcequotas
- replicasets
- replicationcontrollers
- serviceaccounts
- services
verbs: ["*"]
- nonResourceURLs: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: api-access
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: api-access
subjects:
- kind: ServiceAccount
name: api-service-account
namespace: default
</code></pre>
| Akshay chittal | <p>Kubernetes service accounts are namespaced objects, but because they are bound here through a ClusterRole and ClusterRoleBinding, the account is authorized across namespaces, so the answer to "can I use the service account between namespaces?" is yes.</p>
<p>For the second part: I don't know what you mean by APIs, but if it is the kubernetes-apiserver then yes, you can use the service account with kubectl; just make sure you are executing as the service account. You can use impersonation for this, see: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation</a></p>
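<p>For example, a quick sketch of impersonating the service account with kubectl (the target namespace is just an example):</p>
<pre><code>kubectl --as=system:serviceaccount:default:api-service-account \
  get pods -n some-other-namespace

kubectl --as=system:serviceaccount:default:api-service-account \
  apply -f deployment.yaml -n some-other-namespace
</code></pre>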
<p>If you mean that you built a new API for deployment, or are using an external deployer, then you should run it with this service account as described here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p>
| Akin Ozer |
<p>I'm working on a <strong>k8s</strong> Airflow setup and I need a flow to update DAGs with no hassle.<br>
Looks like the most "kubernetish" way is to use a persistent volume.<br>
But how do I write data (DAG python files) from outside the cluster into the persistent volume?</p>
| orkenstein | <p>You need to use an external volume provider or set up NFS to achieve this. The volume has to be mountable on different machines, and you then reference it in Kubernetes as a <code>PersistentVolume</code> with the <code>ReadWriteMany</code> access mode.</p>
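<p>For illustration, a minimal sketch of an NFS-backed volume with <code>ReadWriteMany</code> (the server address, names and paths are placeholders):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: airflow-dags
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10      # placeholder NFS server reachable from inside and outside the cluster
    path: /exports/dags
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: airflow-dags
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: airflow-dags
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>You can then mount the same NFS export on a machine outside the cluster and copy the DAG files into it.</p>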
| Akin Ozer |
<p>I have a stateful spring application and I want to deploy it to a Kubernetes cluster. There will be more than one instance of the application, so I need to enable sticky sessions using the ingress-nginx controller. I made the following configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-nginx
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "JSESSIONID"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/session-cookie-path: /ingress-test
# UPDATE THIS LINE ABOVE
spec:
rules:
- http:
paths:
- path: /ingress-test
backend:
serviceName: ingress-test
servicePort: 31080
</code></pre>
<p>ingress-nginx routes subsequent requests to the correct pod as long as login is successful. However, it sometimes switches to another pod just after the JSESSIONID is changed (the JSESSIONID cookie is changed by spring-security after a successful login), and the frontend redirects back to the login page even though the user credentials are correct. Has anyone tried ingress-nginx with spring-security?</p>
<p>Best Regards</p>
| savas | <p>The following change fixed the problem. Without a host definition in the rules, ingress-nginx doesn't set the session cookie.</p>
<p>There is an open issue: <a href="https://github.com/kubernetes/ingress-nginx/issues/3989" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/3989</a></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-nginx
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/session-cookie-path: /ingress-test
# UPDATE THIS LINE ABOVE
spec:
rules:
- host: www.domainname.com
http:
paths:
- path: /ingress-test
backend:
serviceName: ingress-test
servicePort: 31080
</code></pre>
| savas |
<p>I have created a PersistentVolume, PersistentVolumeClaim and StorageClass for elasticsearch in a perisistance.yaml file. </p>
<p>The PersistentVolume, StorageClass and PersistentVolumeClaim are created successfully. The binding is also successful.</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-persistent-volume
spec:
storageClassName: ssd
capacity:
storage: 30G
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: gke-webtech-instance-2-pvc-f5964ddc-d446-11e9-9d1c-42010a800076
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-claim
spec:
storageClassName: ssd
volumeName: pv-persistent-volume
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30G
</code></pre>
<p><a href="https://i.stack.imgur.com/ORorR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ORorR.png" alt="pv-claim_bound_successful"></a></p>
<p>I have also attached the deployment.yaml for elasticsearch below. </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
name: elasticsearch
spec:
type: NodePort
ports:
- name: elasticsearch-port1
port: 9200
protocol: TCP
targetPort: 9200
- name: elasticsearch-port2
port: 9300
protocol: TCP
targetPort: 9300
selector:
app: elasticsearch
tier: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: elasticsearch-application
labels:
app: elasticsearch
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: elasticsearch
tier: elasticsearch
spec:
hostname: elasticsearch
containers:
- image: gcr.io/xxxxxxx/elasticsearch:7.3.1
name: elasticsearch
ports:
- containerPort: 9200
name: elasticport1
- containerPort: 9300
name: elasticport2
env:
- name: discovery.type
value: single-node
volumeMounts:
- mountPath: "/usr/share/elasticsearch/html"
name: pv-volume
volumes:
- name: pv-volume
persistentVolumeClaim:
claimName: pv-claim
</code></pre>
<p>I have created the deployment.yaml file as well. The Elasticsearch application runs successfully without any issues and I am able to hit the Elasticsearch URL. I have run tests and populated data in Elasticsearch, and I am able to view that data as well.</p>
<p>Once I delete the cluster in Kubernetes, I try to connect with the same disk which holds the persisted data. Everything comes up fine, but I am not able to get the data that was already stored. My data is lost and I have an empty disk, I guess.</p>
| klee | <p>Kubernetes has <code>reclaimPolicy</code> for persistent volumes which defaults <em>in most cases</em> to <code>delete</code>. You can change it with:</p>
<pre><code>kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
</code></pre>
<p>Or simply add <code>persistentVolumeReclaimPolicy: Retain</code> in your PersistentVolume yaml.</p>
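<p>For example, in the PersistentVolume from the question (sketch):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-persistent-volume
spec:
  storageClassName: ssd
  persistentVolumeReclaimPolicy: Retain   # keep the GCE disk and its data when the claim goes away
  capacity:
    storage: 30G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: gke-webtech-instance-2-pvc-f5964ddc-d446-11e9-9d1c-42010a800076
    fsType: ext4
</code></pre>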
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/" rel="nofollow noreferrer">Some additional reading about this</a>. </li>
<li><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">And more general in storage</a></li>
</ul>
<p>Edited: as noted in the comment below, this problem may not be about data being lost. Pasting my comment below:</p>
<p>"I don't think your data is lost. Elasticsearch just needs to index existing data because it doesn't just grab existing stored data. You need to reingest data to elasticsearch or save snapshots regularly or use master, data, client architecture."</p>
| Akin Ozer |
<p>I have a deployment where the CPU request is 500m and the CPU limit is 1000m.
I created an HPA as follows:</p>
<pre><code> metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
</code></pre>
<p>This makes its calculations based on the CPU requests. Is there a way to make it look at the CPU limit instead?</p>
| Dhanuj Dharmarajan | <p>If you set different values for limits and requests, you make the pod "Burstable". What this means in Kubernetes is "this application is important and may exceed its normal targeted usage". You should treat the requests as your normal limits and scale based on them; the actual limits provide infrastructure safety by capping what a pod can use.</p>
<p>This is why you should start by setting requests and limits to the same values. If you have problems with this (like pod startup requiring a little more than usual, or scaling happening too slowly), you can allow a bursting range, but still think of the requests as your normal limits (more like a soft limit/hard limit distinction).</p>
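<p>A minimal sketch of the "requests as your normal limit" idea in the container spec (the values are only examples):</p>
<pre><code>resources:
  requests:
    cpu: 1000m   # the HPA's averageUtilization: 80 now targets 800m per pod
  limits:
    cpu: 1000m   # requests == limits, so the 80% target is relative to the same value as the limit
</code></pre>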
<p>So the answer is no, you can't, because you're looking at it the wrong way.</p>
| Akin Ozer |
<p>I have an EKS cluster and a nodegroup running 6 nodes. For some reason nodes get marked as <code>unschedulable</code> randomly, once every week or two, and they stay that way. When I notice it, I uncordon them manually and everything works fine.</p>
<p>Why does this happen and how can I debug it, prevent it or configure cluster to fix it automatically?</p>
| robliv | <p>In my case the problem was an <code>AWS Termination Handler</code> daemonset that was running. It was outdated and not really used in the cluster, and after removing it the problem of nodes getting marked Unschedulable just went away.</p>
| robliv |
<p>I have had jhub released in my cluster successfully. I then changed the config to pull another docker image as stated in the <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub.html" rel="noreferrer">documentation</a>.</p>
<p>This time, while running the same old command: </p>
<pre><code># Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.8.2 \
--values jupyter-hub-config.yaml
</code></pre>
<p>where the <code>jupyter-hub-config.yaml</code> file is:</p>
<pre><code>proxy:
secretToken: "<a secret token>"
singleuser:
image:
# Get the latest image tag at:
# https://hub.docker.com/r/jupyter/datascience-notebook/tags/
# Inspect the Dockerfile at:
# https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
name: jupyter/datascience-notebook
tag: 177037d09156
</code></pre>
<p>I get the following problem:</p>
<pre><code>UPGRADE FAILED
ROLLING BACK
Error: "jhub" has no deployed releases
Error: UPGRADE FAILED: "jhub" has no deployed releases
</code></pre>
<p>I then deleted the namespace via <code>kubectl delete ns/jhub</code> and the release via <code>helm delete --purge jhub</code>. Then I tried the command again, in vain; the same error appeared.</p>
<p>I read a few GH issues and found that either the YAML file was invalid or that the <code>--force</code> flag worked. However, in my case, neither of these applies.</p>
<p>I expect to make this release and also learn how to edit the current releases.</p>
<p>Note: As you would find in the aforementioned documentation, there is a pvc created.</p>
| Aviral Srivastava | <p>I had the same issue when I was trying to update my <code>config.yaml</code> file in GKE. Actually, what worked for me was to redo these steps:</p>
<ol>
<li><p>run <code>curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash</code></p>
</li>
<li><p><code>helm init --service-account tiller --history-max 100 --wait</code></p>
</li>
<li><p>[OPTIONAL] <code>helm version</code> to verify that you have a similar output as the documentation</p>
</li>
<li><p>Add repo</p>
</li>
</ol>
<pre><code>helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
</code></pre>
<ol start="5">
<li>Run upgrade</li>
</ol>
<pre><code>RELEASE=jhub
NAMESPACE=jhub
helm upgrade $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
</code></pre>
| Antoine Krajnc |
<p>I'm currently working on a case where we need to dynamically create services and provide access to them via URI subpaths of the main gateway.</p>
<p>I'm planning to use virtual services for traffic routing for them. Virtual Service for a particular service should look like:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: subpaths-routes
spec:
hosts:
- mainservice.prod.svc.cluster.local
http:
- name: "subpath-redirection"
match:
- uri:
prefix: "/bservices/svc-2345-6789"
route:
- destination:
host: svc-2345-6789.prod.svc.cluster.local
</code></pre>
<p>But there may be a huge number of such services (like thousands). All follow the same pattern of routing.
I would like to know if Istio has a mechanism to specify VirtualService with variables/parameters like the following:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: subpaths-routes
spec:
hosts:
- mainservice.prod.svc.cluster.local
http:
- name: "subpath-redirection"
match:
- uri:
prefix: "/bservices/"{{ variable }}
route:
- destination:
host: {{ variable }}.prod.svc.cluster.local
</code></pre>
<p>In Nginx, one can do a similar thing by specifying something like this:</p>
<pre><code>location ~ /service/(?<variable>[0-9a-zA-Z\_\-]+)/ {
proxy_pass http://$variable:8080;
}
</code></pre>
<p>Is there a way in Istio to accomplish that?
And if there is not, how would thousands of VSs impact the performance of request processing? Is it expensive to keep them, in terms of the CPU and RAM being consumed?</p>
<p>Thank you in advance!</p>
| fonhorst | <blockquote>
<p>How to use variables in Istio VirtualService?</p>
</blockquote>
<p>As far as I know there is no such option in Istio to specify a variable in both the prefix and the host. If it were only the prefix, you could try a <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest" rel="nofollow noreferrer">regex</a> match instead of a prefix match.</p>
<hr />
<p>If you would like to automate it in some way, i.e. create a variable and put it in both the prefix and the host, then you could try to do it with <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a>.</p>
<p>There are a few examples of virtual services in helm charts; a minimal sketch follows the links below.</p>
<ul>
<li><a href="https://github.com/streamsets/helm-charts/blob/master/incubating/control-hub/templates/istio-gateway-virtualservice.yaml" rel="nofollow noreferrer">https://github.com/streamsets/helm-charts/blob/master/incubating/control-hub/templates/istio-gateway-virtualservice.yaml</a></li>
<li><a href="https://github.com/salesforce/helm-starter-istio/blob/master/templates/virtualService.yaml" rel="nofollow noreferrer">https://github.com/salesforce/helm-starter-istio/blob/master/templates/virtualService.yaml</a></li>
</ul>
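<p>A minimal sketch of such a template (the <code>serviceName</code> value is an assumption you would pass per release, e.g. <code>--set serviceName=svc-2345-6789</code>):</p>
<pre><code># templates/virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ .Values.serviceName }}-route
spec:
  hosts:
  - mainservice.prod.svc.cluster.local
  http:
  - name: "subpath-redirection"
    match:
    - uri:
        prefix: "/bservices/{{ .Values.serviceName }}"
    route:
    - destination:
        host: {{ .Values.serviceName }}.prod.svc.cluster.local
</code></pre>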
<hr />
<blockquote>
<p>how would thousands of VSs impact the performance of request processing?</p>
</blockquote>
<p>There is <a href="https://github.com/istio/istio/issues/25685" rel="nofollow noreferrer">github issue</a> about that, as @lanceliuu mentioned there</p>
<blockquote>
<p>When we create ~1k virtualservices in a single cluster, the ingress gateway is picking up new virtualservice slowly.</p>
</blockquote>
<p>So that might be one of the issues with thousands of Virtual Services.</p>
<blockquote>
<p>Is It expensive to keep them in terms of CPU and RAM being consumed?</p>
</blockquote>
<p>I would say it requires testing. I checked the above GitHub issue and they mentioned that there is no mem/cpu pressure on the Istio components, but I can't say how expensive that is.</p>
<p>In theory you could create 1 big virtual service instead of thousands, but as mentioned in the <a href="https://istio.io/latest/docs/ops/best-practices/traffic-management/#split-virtual-services" rel="nofollow noreferrer">documentation</a> you should rather split large virtual services into multiple resources.</p>
<hr />
<p>Additional resources:</p>
<ul>
<li><a href="https://engineering.hellofresh.com/everything-we-learned-running-istio-in-production-part-2-ff4c26844bfb" rel="nofollow noreferrer">https://engineering.hellofresh.com/everything-we-learned-running-istio-in-production-part-2-ff4c26844bfb</a></li>
<li><a href="https://istio.io/latest/docs/ops/deployment/performance-and-scalability/" rel="nofollow noreferrer">https://istio.io/latest/docs/ops/deployment/performance-and-scalability/</a></li>
<li><a href="https://perf.dashboard.istio.io/" rel="nofollow noreferrer">https://perf.dashboard.istio.io/</a></li>
</ul>
| Jakub |
<p>I set up the Datadog trace client in my Kubernetes cluster to monitor my deployed application. It was working fine with Kubernetes 1.15.x, but as soon as I upgraded to 1.16.x, the service stopped showing up in the Datadog dashboard.</p>
<p>Currently using:</p>
<ol>
<li><p>Kubernetes 1.16.9 </p></li>
<li><p>Datadog 0.52.0</p></li>
</ol>
<p>When checked for agent status. It is giving following exception :</p>
<pre><code>Instance ID: kubelet:xxxxxxxxxxxxx [ERROR]
Configuration Source: file:/etc/datadog-agent/conf.d/kubelet.d/conf.yaml.default
Total Runs: 12,453
Metric Samples: Last Run: 0, Total: 0
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 0, Total: 0
Average Execution Time : 5ms
Last Execution Date : 2020-06-19 15:18:19.000000 UTC
Last Successful Execution Date : Never
Error: Unable to detect the kubelet URL automatically.
Traceback (most recent call last):
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py", line 822, in run
self.check(instance)
File "/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/kubelet/kubelet.py", line 297, in check
raise CheckException("Unable to detect the kubelet URL automatically.")
datadog_checks.base.errors.CheckException: Unable to detect the kubelet URL automatically.
</code></pre>
<p>This looks like a version issue to me. If it is, which Datadog version do I need to use for monitoring?</p>
| Sanket Singh | <p>This was an issue with the deployed Datadog daemonset for me:</p>
<p>What I did to resolve:</p>
<ol>
<li><p>Check daemonset if it exists or not:</p>
<pre><code>kubectl get ds -n datadog
</code></pre>
</li>
<li><p>Edit the datadog daemonset:</p>
<pre><code>kubectl edit ds datadog -n datadog
</code></pre>
</li>
<li><p>In the opened yaml, add</p>
<pre><code>- name: DD_KUBELET_TLS_VERIFY
value: "false"
</code></pre>
<p>Add this under the <strong>env:</strong> section in all the relevant places. For me there were 4 places in the yaml with DD environment variables.</p>
</li>
<li><p>Save and close it. The daemonset will restart. And the application will start getting traced.</p>
</li>
</ol>
| Sanket Singh |
<p>I have gone through a couple of articles about how ClusterIP and NodePort services work (like <a href="https://www.ovh.com/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/" rel="nofollow noreferrer">this blog</a>).</p>
<p>Say I have 3 different microservice-based web applications, each running on a separate node. Each runs as a replica set of two.</p>
<p>My understanding is that there will be a separate ClusterIP service per application ReplicaSet instead of one single
ClusterIP service for all application types. Is that correct? Now if one pod needs to connect to another pod, will it call the corresponding
ClusterIP service to reach the right pod?</p>
| user3198603 | <p>Yes, that's right.<br>
In fact, you need to <em>forget</em> about the notion of pod. </p>
<p>As you said, you created 3 web-based micro-<strong>services</strong>. So the correct terminology (and need) here is to contact <strong>(micro-)service A</strong> from <strong>(micro-)service B</strong>. In order to do that, you need to create a <code>kind: Service</code> for each of your <code>ReplicaSet</code>s. </p>
<p>For example :</p>
<pre><code>---
# This is one micro-service based on Nginx
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
name: nginx
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
---
# This is the ClusterIp service corresponding
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
selector:
app: nginx
ports:
- port: 8080
targetPort: 80
</code></pre>
<p>In the example above, we have two replicas of a <em>micro-service</em> based on Nginx. We also have a ClusterIP <code>kind: Service</code> that targets our nginx app.</p>
<p>Now, if we want to contact nginx from another pod, all we need to do is use the service name and the port configured from <strong>inside the cluster</strong>. In our case, it'll be <code>nginx:8080</code>.</p>
<p>To try that, you need to create a pod that will serve us as the entry point in the cluster : </p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: gateway
spec:
containers:
- image: centos:7
name: gateway
command: ["bash", "-c", "sleep infinity"]
</code></pre>
<p>Now, if you want to contact your nginx app from the cluster, you'll have to execute this command : </p>
<pre><code>kubectl exec -ti gateway curl nginx:8080
</code></pre>
| Marc ABOUCHACRA |
<p>I was trying to display the metrics for 64 nodes on my k8s cluster. Then I found out that whenever I select more than 60 nodes in the variable dropdown</p>
<p><a href="https://i.stack.imgur.com/VLQmc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VLQmc.png" alt="enter image description here"></a></p>
<p>Grafana throws query error that looks like this:</p>
<p><a href="https://i.stack.imgur.com/PbmhX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PbmhX.png" alt="enter image description here"></a></p>
<p>The exception message is not particularly helpful, could somebody provide me more insights? Thanks!</p>
| RonZhang724 | <p>I've had a similar problem after selecting too many variables. As long as the rest of your dashboard is able to pull the info successfully from Prometheus, you can disable the annotation query. Go to the dashboard and remove the annotations under Settings.</p>
| Sunny |
<p>I am trying to add files used in volumeMounts to .dockerignore and trying to understand the difference between subPath and mountPath. The official documentation isn't clear to me.</p>
<p>I should add that, from what I read, mountPath is the directory in the pod where the volume will be mounted.</p>
<p>from official documentation: "subPath The volumeMounts.subPath property specifies a sub-path inside the referenced volume instead of its root." <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath</a> (this part isn't clear)</p>
<pre><code>- mountPath: /root/test.pem
name: test-private-key
subPath: test.testing.com.key
</code></pre>
<p>In this example, should I include both test.pem and test.testing.com.key in .dockerignore?</p>
| girlcoder1 | <p><code>mountPath</code> shows where the referenced volume should be mounted in the container. For instance, if you mount a volume to <code>mountPath: /a/b/c</code>, the volume will be available to the container under the directory <code>/a/b/c</code>.</p>
<p>Mounting a volume makes the whole volume available under <code>mountPath</code>. If you need to mount only part of the volume, such as a single file, you use <code>subPath</code> to specify which part must be mounted. For instance, <code>mountPath: /a/b/c</code> with <code>subPath: d</code> mounts only the item <code>d</code> from the volume at the path <code>/a/b/c</code>.</p>
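<p>A small sketch to illustrate (the volume and file names are only examples):</p>
<pre><code>volumeMounts:
- name: config-vol
  mountPath: /a/b/c            # the whole volume appears under /a/b/c
- name: config-vol
  mountPath: /etc/app/d.conf   # only item "d" from the volume appears here
  subPath: d
</code></pre>
<p>In the snippet from the question, only <code>test.testing.com.key</code> is taken from the volume, and it shows up inside the container at <code>/root/test.pem</code>.</p>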
| Burak Serdar |
<p>I am trying to do this tutorial <a href="https://kubernetes.io/blog/2019/07/23/get-started-with-kubernetes-using-python/" rel="nofollow noreferrer">https://kubernetes.io/blog/2019/07/23/get-started-with-kubernetes-using-python/</a> on my local machine</p>
<p>I have accomplished all of the steps:</p>
<pre><code>app git:(master) ✗ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-python-6c7b478cf5-49vdg 1/1 Running 0 2m53s
hello-python-6c7b478cf5-d4tfl 1/1 Running 0 2m53s
hello-python-6c7b478cf5-ltb8r 1/1 Running 0 2m53s
hello-python-6c7b478cf5-qsqvt 1/1 Running 0 2m53s
app git:(master) ✗
</code></pre>
<p>But when I go to <code>localhost:6000</code> I get an error:</p>
<blockquote>
<p>This site can’t be reached. The web page at http://localhost:6000/ might be temporarily down or it may have moved permanently to a new web address.
ERR_UNSAFE_PORT</p>
</blockquote>
<p>When I do lsof -i tcp:6000 I get this:</p>
<pre><code>
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 98546 <me> 76u IPv6 0xc52e947a7a450f69 0t0 TCP *:6000 (LISTEN)
</code></pre>
<p>Is this a bug in the tutorial, or am I doing something wrong?</p>
| nz_21 | <h2>Docker Desktop</h2>
<p>I checked if it works and I have the same issue.</p>
<p><strong>EDIT</strong></p>
<p>As @Boris the Spider mentioned in comments</p>
<blockquote>
<p>Port 6000 is <a href="https://chromium.googlesource.com/chromium/src.git/+/refs/heads/master/net/base/port_util.cc" rel="nofollow noreferrer">blocked by chrome</a> for safety - it’s the X11 port and I think Chrome is trying to prevent certain kinds of XSS attacks. You can <a href="https://superuser.com/questions/188006/how-to-fix-err-unsafe-port-error-on-chrome-when-browsing-to-unsafe-ports">disable</a> this protection by passing flags to chrome.</p>
</blockquote>
<p>If you don't want to change the Chrome settings, you can change the port of the service, for example from 6000 to 8000.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-python-service
spec:
selector:
app: hello-python
ports:
- protocol: "TCP"
port: 8000 <---
targetPort: 5000
type: LoadBalancer
</code></pre>
<p>If you change it to 8000, then use localhost:8000 instead and it works.</p>
<p><a href="https://i.stack.imgur.com/FBGez.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FBGez.png" alt="enter image description here" /></a></p>
<hr />
<h2>Minikube</h2>
<p>Minikube doesn't support a LoadBalancer external IP.</p>
<blockquote>
<p>On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On minikube, the LoadBalancer type makes the Service accessible through the minikube service command.</p>
</blockquote>
<p>So if you use minikube, try <code>minikube service</code> instead.</p>
<p>There is related <a href="https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service" rel="nofollow noreferrer">documentation</a> with an example.</p>
| Jakub |
<p>I am deploying some apps in Kubernetes, and my apps use a config management tool called Apollo. This tool needs the app's running environment (develop\test\production......) to be defined in one of these ways: 1. java args, 2. application.properties, 3. /etc/settings/data.properties. Now that I am running the apps in Kubernetes, the question is: how do I define the running environment variable?</p>
<p>1. If I choose java args, I would have to keep scripts like <code>start-develop-env.sh/start-test-env.sh/start-pro-env.sh</code>.</p>
<p>2. If I choose <code>application.properties</code>, I would have to keep <code>application-develop.properties/application-test.properties</code>.....</p>
<p>3. If I choose <code>/etc/settings/data.properties</code>, it is impossible to log in to every docker container to define the config file for each environment.</p>
<p>What is the best way to solve the problem? If I write it in the Kubernetes deployment yaml, my apps cannot read it (defining the variable for a whole batch of pods in one place would be better).</p>
| Dolphin | <p>You can implement #2 and #3 using a configmap. You can define the properties file as a configmap, and mount that into the containers, either as application.properties or data.properties. The relevant section in k8s docs is:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/</a></p>
<p>Using java args might be more involved. You can define a script as you said, and run that script to set up the environment for the container. You can store that script as a ConfigMap as well. Or, you can define individual environment variables in your deployment yaml, define a ConfigMap containing properties, and populate those environment variables from the configmap. The above section also describes how to set up environment variables from a configmap.</p>
| Burak Serdar |
<p>I created a Kubernetes cluster with kops (on AWS), and I want to access one of my nodes as root. According to <a href="https://stackoverflow.com/questions/42793382/exec-commands-on-kubernetes-pods-with-root-access">this post</a>, it's possible only with a Docker command.
When I type <code>docker image ls</code> I'm getting nothing. When I was using minikube I solved this issue with <code>minikube docker-env</code>: from its output I just copied the last line into a new CMD line, <code>@FOR /f "tokens=*" %i IN ('minikube docker-env') DO @%i</code>
(I'm using Windows 10), and using the above procedure, after typing <code>docker image ls</code> or <code>docker image ps</code> I was able to see all minikube pods. Is there any way to do the same for pods created with kops?</p>
<p>I'm able to do it by connecting to a Kubernetes node, installing docker on it, and then connecting to the pod with the -u root switch, but I wonder whether it is possible to do the same from the host machine (Windows 10 in my case).</p>
| overflowed | <p>It's a bit unclear what you're trying to do. So I'm going to give some general info here based on this scenario : <em>You've created a K8S cluster on AWS using Kops</em></p>
<h2>I want to access one of my AWS nodes as root</h2>
<p>This has nothing to do with <strong>kops</strong>, nor with <strong>Docker</strong>. This is basic AWS management. You need to check your AWS management console to get all the info needed to connect to your node.</p>
<h2>I want to see all the docker images from my Windows laptop</h2>
<p>Again, this has nothing to do with <strong>kops</strong>. Kops is a Kubernetes distribution. In Kubernetes, the smallest unit of computing that can be managed is the <strong>pod</strong>. You cannot <em>directly</em> manage docker containers or images with Kubernetes.<br>
So if you want to see your docker images, you'll need to somehow connect to your AWS node and then execute </p>
<pre class="lang-sh prettyprint-override"><code>docker image ls
</code></pre>
<p>In fact, that's what you're doing with your minikube example. You're just executing the docker command on the VM managed by minikube. </p>
<p>More info on what's a pod <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">here</a></p>
<h2>I want to see all the pods created with kops</h2>
<p>Well, assuming that you've successfully configured your system to access AWS with kops (more info on that <a href="https://github.com/kubernetes/kops/blob/master/docs/aws.md" rel="nofollow noreferrer">here</a>), then you'll just have to directly execute any <code>kubectl</code> command. For example, to list all the pods located in the <code>kube-system</code> namespace:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n kube-system get po
</code></pre>
<p>Hope this helps !</p>
| Marc ABOUCHACRA |
<p>I have two identical <code>pod</code>s running on two worker nodes, which are served externally via a <code>svc</code>, as follows: </p>
<pre><code>root@master1:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kubia-nwjcc 1/1 Running 0 33m 10.244.1.27 worker1
kubia-zcpbb 1/1 Running 0 33m 10.244.2.11 worker2
root@master1:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h
kubia ClusterIP 10.98.41.49 <none> 80/TCP 34m
</code></pre>
<p>But when I try to access the <code>svc</code> from one of the <code>pod</code>s, I can only get the response of the <code>pod</code> on the same node. When the <code>svc</code> tries to reach the <code>pod</code> on the other node, it returns <code>command terminated with exit code 7</code>. Correct output and bad output seem to occur randomly, as follows:</p>
<p><strong>correct output</strong></p>
<pre><code>root@master1:~# k exec kubia-nwjcc -- curl http://10.98.41.49
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 23 0 23 0 0 8543 0 --:--:-- --:--:-- --:--:-- 11500
You've hit kubia-nwjcc
</code></pre>
<p><strong>bad output</strong></p>
<pre><code>root@master1:~# kubectl exec kubia-nwjcc -- curl http://10.98.41.49
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 10.98.41.49 port 80: No route to host
command terminated with exit code 7
</code></pre>
<p>The following is the software version I am using:</p>
<ul>
<li>ubuntu: <code>v18.04</code></li>
<li>kubelet / kubeadm / kubectl: <code>v1.15.0</code></li>
<li>docker: <code>v18.09.5</code></li>
</ul>
<p>The following is <code>svc</code> describe:</p>
<pre><code>root@master1:~# kubectl describe svc kubia
Name: kubia
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=kubia
Type: ClusterIP
IP: 10.98.41.49
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 10.244.1.27:8080,10.244.2.11:8080
Session Affinity: None
Events: <none>
</code></pre>
<p>The following is return results with using <code>-v=9</code>:</p>
<pre><code>root@master1:~# kubectl exec kubia-nwjcc -v=9 -- curl -s http://10.98.41.49
I0702 11:45:52.481239 23171 loader.go:359] Config loaded from file: /root/.kube/config
I0702 11:45:52.501154 23171 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.15.0 (linux/amd64) kubernetes/e8462b5" 'https://192.168.56.11:6443/api/v1/namespaces/default/pods/kubia-nwjcc'
I0702 11:45:52.525926 23171 round_trippers.go:438] GET https://192.168.56.11:6443/api/v1/namespaces/default/pods/kubia-nwjcc 200 OK in 24 milliseconds
I0702 11:45:52.525980 23171 round_trippers.go:444] Response Headers:
I0702 11:45:52.525992 23171 round_trippers.go:447] Content-Type: application/json
I0702 11:45:52.526003 23171 round_trippers.go:447] Content-Length: 2374
I0702 11:45:52.526012 23171 round_trippers.go:447] Date: Tue, 02 Jul 2019 11:45:52 GMT
I0702 11:45:52.526063 23171 request.go:947] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kubia-nwjcc","generateName":"kubia-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/kubia-nwjcc","uid":"2fd67789-c48d-4459-8b03-ac562b4a3f5c","resourceVersion":"188689","creationTimestamp":"2019-07-02T10:51:34Z","labels":{"app":"kubia"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"kubia","uid":"f3a4c457-dee4-4aec-ad73-1f0ca41628aa","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-6pgh8","secret":{"secretName":"default-token-6pgh8","defaultMode":420}}],"containers":[{"name":"kubia","image":"luksa/kubia","ports":[{"containerPort":8080,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-6pgh8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"worker1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-07-03T01:35:15Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-07-03T01:35:20Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-07-03T01:35:20Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-07-02T10:51:34Z"}],"hostIP":"192.168.56.21","podIP":"10.244.1.27","startTime":"2019-07-03T01:35:15Z","containerStatuses":[{"name":"kubia","state":{"running":{"startedAt":"2019-07-03T01:35:19Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"luksa/kubia:latest","imageID":"docker-pullable://luksa/kubia@sha256:3f28e304dc0f63dc30f273a4202096f0fa0d08510bd2ee7e1032ce600616de24","containerID":"docker://27da556930baf857e5af92b13934dcb1b2b2f001ecab5e7b952b2bda5aa27f0b"}],"qosClass":"BestEffort"}}
I0702 11:45:52.543108 23171 round_trippers.go:419] curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl/v1.15.0 (linux/amd64) kubernetes/e8462b5" 'https://192.168.56.11:6443/api/v1/namespaces/default/pods/kubia-nwjcc/exec?command=curl&command=-s&command=http%3A%2F%2F10.98.41.49&container=kubia&stderr=true&stdout=true'
I0702 11:45:52.591166 23171 round_trippers.go:438] POST https://192.168.56.11:6443/api/v1/namespaces/default/pods/kubia-nwjcc/exec?command=curl&command=-s&command=http%3A%2F%2F10.98.41.49&container=kubia&stderr=true&stdout=true 101 Switching Protocols in 47 milliseconds
I0702 11:45:52.591208 23171 round_trippers.go:444] Response Headers:
I0702 11:45:52.591217 23171 round_trippers.go:447] Connection: Upgrade
I0702 11:45:52.591221 23171 round_trippers.go:447] Upgrade: SPDY/3.1
I0702 11:45:52.591225 23171 round_trippers.go:447] X-Stream-Protocol-Version: v4.channel.k8s.io
I0702 11:45:52.591229 23171 round_trippers.go:447] Date: Wed, 03 Jul 2019 02:29:33 GMT
F0702 11:45:53.783725 23171 helpers.go:114] command terminated with exit code 7
</code></pre>
<p>The status of the <code>kube-system</code> pods and the two pods providing the service are all <code>Running</code>, as follows:</p>
<pre><code>root@master1:~/k8s-yaml# kubectl get --all-namespaces pod -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default kubia-6pjz9 1/1 Running 0 5m35s 10.244.2.12 worker2 <none> <none>
default kubia-nwjcc 1/1 Running 0 16h 10.244.1.27 worker1 <none> <none>
kube-system coredns-bccdc95cf-792px 1/1 Running 4 5d19h 10.244.0.11 master1 <none> <none>
kube-system coredns-bccdc95cf-bc76j 1/1 Running 4 5d19h 10.244.0.10 master1 <none> <none>
kube-system etcd-master1 1/1 Running 8 5d19h 192.168.56.11 master1 <none> <none>
kube-system kube-apiserver-master1 1/1 Running 7 5d19h 192.168.56.11 master1 <none> <none>
kube-system kube-controller-manager-master1 1/1 Running 7 5d18h 192.168.56.11 master1 <none> <none>
kube-system kube-flannel-ds-amd64-9trbq 1/1 Running 3 5d18h 192.168.56.21 worker1 <none> <none>
kube-system kube-flannel-ds-amd64-btt74 1/1 Running 5 5d18h 192.168.56.11 master1 <none> <none>
kube-system kube-flannel-ds-amd64-swzzd 1/1 Running 0 3d20h 192.168.56.22 worker2 <none> <none>
kube-system kube-proxy-27zfk 1/1 Running 7 5d19h 192.168.56.11 master1 <none> <none>
kube-system kube-proxy-79kt9 1/1 Running 0 3d20h 192.168.56.22 worker2 <none> <none>
kube-system kube-proxy-lx4gk 1/1 Running 3 5d18h 192.168.56.21 worker1 <none> <none>
kube-system kube-scheduler-master1 1/1 Running 7 5d19h 192.168.56.11 master1 <none> <none>
</code></pre>
<p>My expected result is: when using <code>curl</code> to access the <code>svc</code>, the <code>svc</code> can randomly reach the <code>pod</code>s on different nodes, as follows:</p>
<pre><code>root@master1:~# kubectl get po
NAME READY STATUS RESTARTS AGE
kubia-nwjcc 1/1 Running 0 50m
kubia-zcpbb 1/1 Running 0 50m
root@master1:~# k exec kubia-nwjcc -- curl -s http://10.98.41.49
You've hit kubia-nwjcc
root@master1:~# k exec kubia-nwjcc -- curl -s http://10.98.41.49
You've hit kubia-zcpbb
root@master1:~# k exec kubia-nwjcc -- curl -s http://10.98.41.49
You've hit kubia-zcpbb
root@master1:~# k exec kubia-nwjcc -- curl -s http://10.98.41.49
You've hit kubia-nwjcc
</code></pre>
<p>And now the return results of five tests are as follows:</p>
<pre><code>root@master1:~# k exec kubia-nwjcc -- curl -s http://10.98.41.49
You've hit kubia-nwjcc
root@master1:~# k exec kubia-nwjcc -- curl -s http://10.98.41.49
You've hit kubia-nwjcc
root@master1:~# k exec kubia-nwjcc -- curl -s http://10.98.41.49
command terminated with exit code 7
root@master1:~# k exec kubia-nwjcc -- curl -s http://10.98.41.49
You've hit kubia-nwjcc
root@master1:~# k exec kubia-nwjcc -- curl -s http://10.98.41.49
command terminated with exit code 7
</code></pre>
| hopgoldy | <p>This problem is solved. The official documentation of <code>flannel</code> mentions that you need to use <code>--iface</code> to specify the network card to be used when running in a <code>vagrant</code>-type virtual machine. You can use the command <code>kubectl edit daemonset kube-flannel-ds-amd64 -n kube-system</code> to edit the <code>flannel</code> configuration. Then use <code>kubectl delete pod -n kube-system <pod-name></code> to delete all flannel pods; K8s will rebuild them.</p>
<p>You can find detailed answers in <a href="https://medium.com/@anilkreddyr/kubernetes-with-flannel-understanding-the-networking-part-1-7e1fe51820e4" rel="nofollow noreferrer">Kubernetes with Flannel</a> and <a href="https://coreos.com/flannel/docs/latest/troubleshooting.html" rel="nofollow noreferrer">flannel - troubleshooting</a>.</p>
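<p>For reference, the relevant part of the edited DaemonSet looks roughly like this (<code>eth1</code> is an assumption: use whichever interface carries the 192.168.56.x host-only network of your Vagrant boxes; the image tag is only illustrative):</p>
<pre><code>containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1    # pin flannel to the Vagrant host-only interface
</code></pre>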
| hopgoldy |
<p>I can't seem to get cert-manager working:</p>
<pre><code>$ kubectl get certificates -o wide
NAME READY SECRET ISSUER STATUS AGE
example-ingress False example-ingress letsencrypt-prod Waiting for CertificateRequest "example-ingress-2556707613" to complete 6m23s
$ kubectl get CertificateRequest -o wide
NAME READY ISSUER STATUS AGE
example-ingress-2556707613 False letsencrypt-prod Referenced "Issuer" not found: issuer.cert-manager.io "letsencrypt-prod" not found 7m7s
</code></pre>
<p>and in the logs I see:</p>
<pre><code>I1025 06:22:00.117292 1 sync.go:163] cert-manager/controller/ingress-shim "level"=0 "msg"="certificate already exists for ingress resource, ensuring it is up to date" "related_resource_kind"="Certificate" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Ingress" "resource_name"="example-ingress" "resource_namespace"="default"
I1025 06:22:00.117341 1 sync.go:176] cert-manager/controller/ingress-shim "level"=0 "msg"="certificate resource is already up to date for ingress" "related_resource_kind"="Certificate" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Ingress" "resource_name"="example-ingress" "resource_namespace"="default"
I1025 06:22:00.117382 1 controller.go:135] cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="default/example-ingress"
I1025 06:22:00.118026 1 sync.go:361] cert-manager/controller/certificates "level"=0 "msg"="no existing CertificateRequest resource exists, creating new request..." "related_resource_kind"="Secret" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Certificate" "resource_name"="example-ingress" "resource_namespace"="default"
I1025 06:22:00.147147 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-venafi "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613"
I1025 06:22:00.147267 1 sync.go:373] cert-manager/controller/certificates "level"=0 "msg"="created certificate request" "related_resource_kind"="Secret" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Certificate" "resource_name"="example-ingress" "resource_namespace"="default" "request_name"="example-ingress-2556707613"
I1025 06:22:00.147284 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-acme "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613"
I1025 06:22:00.147273 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.147254385 +0000 UTC m=+603.871617341
I1025 06:22:00.147392 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.147380513 +0000 UTC m=+603.871743521
E1025 06:22:00.147560 1 pki.go:128] cert-manager/controller/certificates "msg"="error decoding x509 certificate" "error"="error decoding cert PEM block" "related_resource_kind"="Secret" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Certificate" "resource_name"="example-ingress" "resource_namespace"="default" "secret_key"="tls.crt"
I1025 06:22:00.147620 1 conditions.go:155] Setting lastTransitionTime for Certificate "example-ingress" condition "Ready" to 2019-10-25 06:22:00.147613112 +0000 UTC m=+603.871976083
I1025 06:22:00.147731 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-ca "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613"
I1025 06:22:00.147765 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.14776244 +0000 UTC m=+603.872125380
I1025 06:22:00.147912 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-selfsigned "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613"
I1025 06:22:00.147942 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.147938966 +0000 UTC m=+603.872301909
I1025 06:22:00.147968 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-vault "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613"
I1025 06:22:00.148023 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.148017945 +0000 UTC m=+603.872380906
</code></pre>
<p>i deployed cert-manager via the manifest:</p>
<p><a href="https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml" rel="noreferrer">https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml</a></p>
<pre><code>$ kubectl get clusterissuer letsencrypt-prod -o yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"cert-manager.io/v1alpha2","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-prod"},"spec":{"acme":{"email":"[email protected]","privateKeySecretRef":{"name":"letsencrypt-prod"},"server":"https://acme-staging-v02.api.letsencrypt.org/directory","solvers":[{"http01":{"ingress":{"class":"nginx"}},"selector":{}}]}}}
creationTimestamp: "2019-10-25T06:27:06Z"
generation: 1
name: letsencrypt-prod
resourceVersion: "1759784"
selfLink: /apis/cert-manager.io/v1alpha2/clusterissuers/letsencrypt-prod
uid: 05831417-b359-42de-8298-60da553575f2
spec:
acme:
email: [email protected]
privateKeySecretRef:
name: letsencrypt-prod
server: https://acme-staging-v02.api.letsencrypt.org/directory
solvers:
- http01:
ingress:
class: nginx
selector: {}
status:
acme:
lastRegisteredEmail: [email protected]
uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/11410425
conditions:
- lastTransitionTime: "2019-10-25T06:27:07Z"
message: The ACME account was registered with the ACME server
reason: ACMEAccountRegistered
status: "True"
type: Ready
</code></pre>
<p>and my ingress is:</p>
<pre><code>$ kubectl get ingress example-ingress -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
cert-manager.io/issuer: letsencrypt-prod
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"cert-manager.io/issuer":"letsencrypt-prod","kubernetes.io/ingress.class":"nginx","kubernetes.io/tls-acme":"true"},"name":"example-ingress","namespace":"default"},"spec":{"rules":[{"host":"example-ingress.example.com","http":{"paths":[{"backend":{"serviceName":"apple-service","servicePort":5678},"path":"/apple"},{"backend":{"serviceName":"banana-service","servicePort":5678},"path":"/banana"}]}}],"tls":[{"hosts":["example-ingress.example.com"],"secretName":"example-ingress"}]}}
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
creationTimestamp: "2019-10-25T06:22:00Z"
generation: 1
name: example-ingress
namespace: default
resourceVersion: "1758822"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/example-ingress
uid: 921b2e91-9101-4c3c-a0d8-3f871dafdd30
spec:
rules:
- host: example-ingress.example.com
http:
paths:
- backend:
serviceName: apple-service
servicePort: 5678
path: /apple
- backend:
serviceName: banana-service
servicePort: 5678
path: /banana
tls:
- hosts:
- example-ingress.example.com
secretName: example-ingress
status:
loadBalancer:
ingress:
- ip: x.y.z.a
</code></pre>
<p>any idea whats wrong? cheers,</p>
| yee379 | <p>Your ingress annotation <code>cert-manager.io/issuer</code> refers to a namespaced Issuer, but what you created is a ClusterIssuer, which is why the CertificateRequest reports the referenced issuer as not found. Could that be the reason? I have a similar setup with an Issuer instead of a ClusterIssuer and it is working.</p>
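<p>If you want to keep the ClusterIssuer, the fix is usually just the annotation name — ingress-shim uses <code>cert-manager.io/cluster-issuer</code> for cluster-scoped issuers. A sketch of your ingress with only that change (everything else as in your manifest):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
  annotations:
    # reference the ClusterIssuer instead of a namespaced Issuer
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - example-ingress.example.com
    secretName: example-ingress
  rules:
  - host: example-ingress.example.com
    http:
      paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
      - path: /banana
        backend:
          serviceName: banana-service
          servicePort: 5678
</code></pre>
<p>Alternatively, create a namespaced <code>Issuer</code> named <code>letsencrypt-prod</code> in the <code>default</code> namespace and keep the existing annotation.</p>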
| Burak Serdar |
<p>I'm trying to deploy a gRPC server with kubernetes, and connect to it outside the cluster.
The relevant part of the server:</p>
<pre><code>function main() {
var hello_proto = grpc.loadPackageDefinition(packageDefinition).helloworld;
var server = new grpc.Server();
server.addService(hello_proto.Greeter.service, {sayHello: sayHello});
const url = '0.0.0.0:50051'
server.bindAsync(url, grpc.ServerCredentials.createInsecure(), () => {
server.start();
console.log("Started server! on " + url);
});
}
function sayHello(call, callback) {
console.log('Hello request');
callback(null, {message: 'Hello ' + call.request.name + ' from ' + require('os').hostname()});
}
</code></pre>
<p>And here is the relevant part of the client:</p>
<pre><code>function main() {
var target = '0.0.0.0:50051';
let pkg = grpc.loadPackageDefinition(packageDefinition);
let Greeter = pkg.helloworld["Greeter"];
var client = new Greeter(target,grpc.credentials.createInsecure());
var user = "client";
client.sayHello({name: user}, function(err, response) {
console.log('Greeting:', response.message);
});
}
</code></pre>
<p>When I run them manually with nodeJS, as well as when I run the server in a docker container (client is still run with node without a container) it works just fine.</p>
<p>The docker file with the command: <code>docker run -it -p 50051:50051 helloapp</code></p>
<pre><code>FROM node:carbon
# Create app directory
WORKDIR /usr/src/appnpm
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
CMD npm start
</code></pre>
<p>However, when I'm deploying the server with kubernetes (again, the client isnt run within a container) I'm not able to connect.</p>
<p>The yaml file is as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: helloapp
spec:
replicas: 1
selector:
matchLabels:
app: helloapp
strategy: {}
template:
metadata:
labels:
app: helloapp
spec:
containers:
      - image: isolatedsushi/helloapp
name: helloapp
ports:
- containerPort: 50051
name: helloapp
resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
name: helloservice
spec:
selector:
app: helloapp
ports:
- name: grpc
port: 50051
targetPort: 50051
</code></pre>
<p>The deployment and the service start up just fine</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
helloservice ClusterIP 10.105.11.22 <none> 50051/TCP 17s
kubectl get pods
NAME READY STATUS RESTARTS AGE
helloapp-dbdfffb-brvdn 1/1 Running 0 45s
</code></pre>
<p>But when I run the client it can't reach the server.</p>
<p>Any ideas what I'm doing wrong?</p>
| IsolatedSushi | <p>As mentioned in comments</p>
<hr />
<h2>ServiceTypes</h2>
<p>If you have exposed your service as <strong>ClusterIP</strong> it's visible only internally in the cluster; if you want to expose your service externally you have to use either <strong>NodePort</strong> or <strong>LoadBalancer</strong>.</p>
<blockquote>
<p><strong>Publishing Services (ServiceTypes)</strong></p>
<p>For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.</p>
<p>Type values and their behaviors are:</p>
<p><strong>ClusterIP</strong>: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.</p>
<p><strong>NodePort</strong>: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting &lt;NodeIP&gt;:&lt;NodePort&gt;.</p>
<p><strong>LoadBalancer</strong>: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.</p>
<p><strong>ExternalName</strong>: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.</p>
</blockquote>
<p>Related <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">documentation</a> about that.</p>
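<p>For example, a NodePort variant of your <code>helloservice</code> could look roughly like this (a sketch; the <code>nodePort</code> value is an arbitrary pick from the default 30000-32767 range):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: helloservice
spec:
  type: NodePort
  selector:
    app: helloapp
  ports:
  - name: grpc
    port: 50051
    targetPort: 50051
    nodePort: 30051   # reachable from outside as <node-ip>:30051
</code></pre>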
<hr />
<h2>Minikube</h2>
<p>With minikube you can achieve that with <code>minikube service</code> command.</p>
<p>There is <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">documentation</a> about minikube service and there is an <a href="https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service" rel="nofollow noreferrer">example</a>.</p>
<hr />
<h2>grpc http/https</h2>
<p>As mentioned <a href="https://stackoverflow.com/a/62136381/11977760">here</a> by @murgatroid99</p>
<blockquote>
<p>The gRPC library does not recognize the https:// scheme for addresses, so that target name will cause it to try to resolve the wrong name. You should instead use grpc-server-xxx.com:9090 or dns:grpc-server-xxx.com:9090 or dns:///grpc-server-xxx.com:9090. More detailed information about how gRPC interprets channel target names can be found in this <a href="https://github.com/grpc/grpc/blob/master/doc/naming.md" rel="nofollow noreferrer">documentation page</a>.</p>
</blockquote>
<p>As it does not recognize https I assume it's the same for http, so it's not possible.</p>
<hr />
<h2>kubectl port-forward</h2>
<p>Additionally as @IsolatedSushi mentioned</p>
<blockquote>
<p>It also works when I portforward with the command <code>kubectl -n hellospace port-forward svc/helloservice 8080:50051</code></p>
</blockquote>
<p>As mentioned <a href="https://phoenixnap.com/kb/kubectl-port-forward" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>Kubectl port-forward allows you to access and interact with internal Kubernetes cluster processes from your localhost. You can use this method to investigate issues and adjust your services locally without the need to expose them beforehand.</p>
</blockquote>
<p>There is an <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">example</a> in documentation.</p>
| Jakub |
<p>We need some scripts or some way that can help us to automate our login into the IBM kubernetes cluster so that we don't have to do it manually everytime and can keep running automation scripts into the pipeline.
We already have a cluster on IBM cloud with three worker nodes. And, we are trying to perform some automation on the same.</p>
<p>Please help if you have any idea regarding the same.</p>
| Muskan Sharma | <p>Given that your question is pretty vague and, sincerely, I am not sure, the short answer would be to use something like OpenShift. Here is an article that might help you:</p>
<p><a href="https://www.openshift.com/blog/enhancing-the-openshift-web-console-login-experience" rel="nofollow noreferrer">https://www.openshift.com/blog/enhancing-the-openshift-web-console-login-experience</a></p>
| Opri |
<p>I have several deployments that consist on my application. I would like to perform an custom action on the end of successful deployment of my app, this equal all deployments went well. How can I determine all my kubernetes deployments finished successfully?</p>
| sobi3ch | <p>Maybe with a basic <code>watch</code> command on all deployments?</p>
<pre class="lang-sh prettyprint-override"><code>watch kubectl get deployments
</code></pre>
<p>And check the <strong>READY</strong> column.</p>
<p>Or am I missing the point here?</p>
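<p>If you need something scriptable instead of a visual check, <code>kubectl rollout status</code> blocks until a deployment finishes (or fails); a rough sketch for all deployments in the current namespace:</p>
<pre class="lang-sh prettyprint-override"><code># wait for every deployment to finish rolling out, then run the custom action
for d in $(kubectl get deployments -o name); do
  kubectl rollout status "$d" --timeout=300s || exit 1
done
echo "all deployments rolled out"   # put your custom action here
</code></pre>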
| Marc ABOUCHACRA |
<p>I am trying to set some values in server.xml using environment variables. From this <a href="https://stackoverflow.com/questions/67214216/how-to-set-org-apache-tomcat-util-digester-environmentpropertysource-in-tomcat">how to set org.apache.tomcat.util.digester.EnvironmentPropertySource in tomcat</a>, I create setenv.sh file in /tomcat/bin with this:</p>
<pre><code>CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.tomcat.util.digester.PROPERTY_SOURCE=org.apache.tomcat.util.digester.EnvironmentPropertySource"
</code></pre>
<p>When I run tomcat, I get this exception:</p>
<pre><code>org.apache.tomcat.util.digester.Digester.<clinit> Unable to load property source[org.apache.tomcat.util.digester.EnvironmentPropertySource].
</code></pre>
<p>I am really new to tomcat, so I have no idea what it means. I am not sure even if it is related to the <code>setenv.sh</code>. I don't see the same exception without <code>setenv.sh</code> file. I tried to research on this topic, but not many information was found.</p>
<p>Can anyone please answer why this is happening?</p>
<p>EDIT: here is my whole stack trace from the log file</p>
<pre><code>26-Apr-2021 19:32:44.857 SEVERE [main] org.apache.tomcat.util.digester.Digester.<clinit> Unable to load property source[org.apache.tomcat.util.digester.EnvironmentPropertySource].
java.lang.ClassNotFoundException: org.apache.tomcat.util.digester.EnvironmentPropertySource
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.tomcat.util.digester.Digester.<clinit>(Digester.java:97)
at org.apache.catalina.startup.Catalina.createStartDigester(Catalina.java:272)
at org.apache.catalina.startup.Catalina.load(Catalina.java:528)
at org.apache.catalina.startup.Catalina.load(Catalina.java:644)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:311)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:494)
26-Apr-2021 19:32:44.859 SEVERE [main] org.apache.tomcat.util.digester.Digester.<clinit> Unable to load property source[org.apache.tomcat.util.digester.EnvironmentPropertySource].
java.lang.ClassNotFoundException: org.apache.tomcat.util.digester.EnvironmentPropertySource
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.tomcat.util.digester.Digester.<clinit>(Digester.java:97)
at org.apache.catalina.startup.Catalina.createStartDigester(Catalina.java:272)
at org.apache.catalina.startup.Catalina.load(Catalina.java:528)
at org.apache.catalina.startup.Catalina.load(Catalina.java:644)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:311)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:494)
</code></pre>
| Jonathan Hagen | <p>The <code>org.apache.tomcat.util.digester.EnvironmentPropertySource</code> class is available since <a href="https://tomcat.apache.org/tomcat-7.0-doc/changelog.html#Tomcat_7.0.101_(violetagg)" rel="nofollow noreferrer">Tomcat 7.0.108</a>, <a href="https://tomcat.apache.org/tomcat-8.5-doc/changelog.html#Tomcat_8.5.52_(markt)" rel="nofollow noreferrer">Tomcat 8.5.65</a> and <a href="https://tomcat.apache.org/tomcat-9.0-doc/changelog.html#Tomcat_9.0.32_(markt)" rel="nofollow noreferrer">Tomcat 9.0.45</a>. You must be running an older release.</p>
| Piotr P. Karwasz |
<p>In Spring Boot 2.6.0 using Log4j2, I want to use external environment variables in <code>log4j2.properties</code>,
but it always takes values from the local <code>application.properties</code> file instead of the real Docker or Kubernetes environment variables.</p>
<p>File <code>application.properties</code></p>
<pre><code>spring.application.name=myapp
#Logger FilePath
log.file.path=logs/dev/my-app
</code></pre>
<p>Docker Compose file</p>
<pre><code> version: "3"
services:
spring-app-log4j2:
build: ./log4j2
ports:
- "8080:80"
environment:
- SERVER_PORT=80
- LOG_FILE_PATH=logs/prod/my-app
</code></pre>
<p>File <code>log4j2.properties</code></p>
<pre><code>name=config
#Read Properties values from application properties
property.filename = ${bundle:application:log.file.path}
property.layoutPattern = %d{MMM dd yyyy HH:mm:ss.SSS z} | ${hostName} | %-5p | %c{1}:%L | %M() - %m%n
appenders=console, rolling
#log to console
appender.console.type=Console
appender.console.name=STDOUT
appender.console.layout.type=PatternLayout
appender.console.layout.pattern=${layoutPattern}
#log to file and daily rolling
appender.rolling.type = RollingFile
appender.rolling.name = roll
appender.rolling.fileName = ${filename}.log
appender.rolling.filePattern = ${filename}.%d{dd-MMM-yyyy}.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = ${layoutPattern}
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 2
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size=100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 5
loggers = rolling
logger.rolling.name = org.apache.logging.log4j.core.appender.rolling
logger.rolling.level = info
logger.rolling.additivity = true
logger.rolling.appenderRefs = rolling
logger.rolling.appenderRef.rolling.ref = roll
rootLogger.level=info
rootLogger.appenderRefs=stdout,rolling
rootLogger.appenderRef.stdout.ref=STDOUT
rootLogger.appenderRef.rolling.ref = roll
</code></pre>
<p>Output Tried</p>
<pre><code> property.filename = ${bundle:application:logpath} #Output - logs/dev/app
#(Taking values from application Properties only, not taking from Docker pod environment varibales)
property.filename = ${sys:logpath} #Output - ${sys:logpath}
property.filename = ${env:logpath} # Not Working #Output - ${env:logpath}
#use filename variable
appender.rolling.fileName = ${filename}.log
</code></pre>
<p>When Printing inside Java code</p>
<pre class="lang-java prettyprint-override"><code>@Value("${logpath}") String logpath; #Output -logs/prod/app
# Working with Docker env
</code></pre>
<p>Problem: when Log4j is initialized it does not find the file path in the environment, so by default it reads from <code>application.properties</code>.</p>
<p>How can I read from the Docker or Kubernetes environment?</p>
<p>I need the log file path to be "logs/prod/my-app" instead of "logs/dev/my-app".</p>
<p>Src Code : <a href="https://github.com/jeebendu/log4j2" rel="nofollow noreferrer">https://github.com/jeebendu/log4j2</a></p>
| Jeebendu kumar Behera | <p>In all your attempts you use a single source for the <code>filename</code> property. You need to use one of the <em>fallback</em> features of variable substitution:</p>
<ul>
<li><p>You can add a default value to your variable substitution using the syntax <code>${variable:-default}</code>:</p>
<pre><code>property.filename = ${env:LOG_FILE_PATH:-${bundle:application:log.file.path}}
appender.rolling.fileName = ${filename}.log
</code></pre>
</li>
<li><p>or you can exploit the fact that every <code>${prefix:variable}</code> falls back to <code>${variable}</code>:</p>
<pre><code>property.LOG_FILE_PATH = ${bundle:application:log.file.path}
appender.rolling.fileName = ${env:LOG_FILE_PATH}
</code></pre>
</li>
</ul>
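<p>With the first option the same <code>log4j2.properties</code> works unchanged in both environments: locally it falls back to <code>log.file.path</code> from <code>application.properties</code>, while in the container the <code>LOG_FILE_PATH=logs/prod/my-app</code> variable from your compose file wins.</p>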
| Piotr P. Karwasz |
<p>I used <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">Kubernetes document</a> to create a request for user certificate via API-server. </p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: myuser
spec:
request: $(cat server.csr | base64 | tr -d '\n')
usages:
- digital signature
- key encipherment
- server auth
EOF
</code></pre>
<p>I generated the certificate, created the kubeconfig file and created the necessary role/rolebindings successfully. However, when I try to access the cluster, I get the below error. I am quite sure that the issue is with the above yaml definition; but could not figure out.</p>
<pre><code>users error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>Any idea please?</p>
| Kajani Sivadas | <p>Seems the issue is with the "spec" part: it is user (client) authentication, not server authentication. Hence, "server auth" should be "client auth".</p>
<pre><code>spec:
request: $(cat server.csr | base64 | tr -d '\n')
usages:
- digital signature
- key encipherment
- client auth
</code></pre>
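<p>After recreating the CSR with client auth, it still needs to be approved and the signed certificate pulled out before it can go into your kubeconfig, roughly (assuming the CSR is named <code>myuser</code> as in your manifest):</p>
<pre><code>kubectl certificate approve myuser
kubectl get csr myuser -o jsonpath='{.status.certificate}' | base64 --decode > myuser.crt
</code></pre>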
| Thilee |
<p>After deploying Istio 1.1.2 on OpenShift there is an istio-ingressgateway route with its associated service and pod.</p>
<p>I have successfully used that ingress gateway to access an application, configuring a Gateway and a VirtualService using * as hosts.</p>
<p>However I would like to configure a domain, e.g insuranceinc.es, to access the application. According to the documentation I have this Istio config:</p>
<p><strong>Gateway:</strong></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: insuranceinc-gateway
namespace: istio-insuranceinc
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "insuranceinc.es"
</code></pre>
<p><strong>VirtualService</strong></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: insuranceinc
namespace: istio-insuranceinc
spec:
hosts:
- insuranceinc.es
gateways:
- insuranceinc-gateway
http:
- route:
- destination:
host: insuranceinc-web
port:
number: 8080
</code></pre>
<p>If I make this curl invocation...</p>
<p><code>curl http://istio-ingressgateway-istio-system.apps.mycluster.com/login</code></p>
<p>... I can see a 404 error in the ingress-gateway pod:</p>
<pre><code>[2019-04-12T15:27:51.765Z] "GET /login HTTP/1.1" 404 NR "-" 0 0 1 - "xxx" "curl/7.54.0" "xxx" "istio-ingressgateway-istio-system.apps.mycluster.com" "-" - - xxx -
</code></pre>
<p>This makes sense since it isn't comming from an insuranceinc.es host. So I change the curl to send a <code>Host: insuranceinc.es</code> header:</p>
<p><code>curl -H "Host: insuranceinc.es" http://istio-ingressgateway-istio-system.apps.mycluster.com/login</code></p>
<p>Now I am getting a 503 error and there are no logs in the istio-ingressgateway pod.</p>
<blockquote>
<h1>Application is not available</h1>
<p><p>The application is currently not serving requests at this endpoint. It may not have been started or is still starting.</p></p>
</blockquote>
<p>This means the request hasn't been processed by that istio-ingressgateway route->service->pod.</p>
<p>Since it is an <code>Openshift Route</code> it must be needing a Host header containing the route host <code>istio-ingressgateway-istio-system.apps.mycluster.com</code>. In fact if I send <code>curl -H "Host: istio-ingressgateway-istio-system.apps.mycluster.com" http://istio-ingressgateway-istio-system.apps.mycluster.com/login</code> it is processed by the istio ingress gateway returning a 404.</p>
<p>So, how can I send my Host insuranceinc.es header and also reach the istio ingress gateway (which is actually an OpenShift route)?</p>
| codependent | <p>You need to create an openshift route in the istio-system namespace to relate to the hostname you created. </p>
<p>For example:</p>
<pre><code>oc -n istio-system get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
gateway1-lvlfn insuranceinc.es istio-ingressgateway <all> None
</code></pre>
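<p>Such a route can be created with <code>oc expose</code> or declaratively; a rough YAML sketch (the <code>targetPort</code> name is an assumption — match it to the port name actually exposed by your istio-ingressgateway service):</p>
<pre><code>apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: insuranceinc
  namespace: istio-system
spec:
  host: insuranceinc.es
  to:
    kind: Service
    name: istio-ingressgateway
  port:
    targetPort: http2   # assumed port name on the istio-ingressgateway service
</code></pre>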
| Chris Reiche |
<p>I'm trying to deploy free5GC (<a href="https://www.free5gc.org/cluster" rel="nofollow noreferrer">cluster version</a>) over K8s.
The problem with this software is that there are some services that have to know other service IPs before starting. I solve this issue in docker-compose executing a script inside each docker container with other service IPs as parameters.
This is my docker-compose.yaml:</p>
<pre><code>version: '3'
networks:
testing_net:
ipam:
driver: default
config:
- subnet: 172.28.0.0/16
services:
mongo:
container_name: mongo
image: mongo
networks:
testing_net:
ipv4_address: ${mongo_ip}
webui:
container_name: webui
image: j0lama/free5gc-webui
depends_on:
- mongo
ports:
- '80:3000'
extra_hosts:
- "mongo:${mongo_ip}"
networks:
testing_net:
ipv4_address: ${webui_ip}
hss:
container_name: hss
command: bash -c "./hss_setup.sh ${mongo_ip} ${hss_ip} ${amf_ip}"
image: j0lama/free5gc-hss
depends_on:
- mongo
networks:
testing_net:
ipv4_address: ${hss_ip}
amf:
container_name: amf
command: bash -c "./amf_setup.sh ${mongo_ip} ${hss_ip} ${amf_ip} ${smf_ip}"
image: j0lama/free5gc-amf
depends_on:
- mongo
- hss
ports:
- '36412:36412'
networks:
testing_net:
ipv4_address: ${amf_ip}
smf:
container_name: smf
command: bash -c "./smf_setup.sh ${smf_ip} ${upf_ip} ${pcrf_ip}"
image: j0lama/free5gc-smf
depends_on:
- mongo
- hss
- amf
networks:
testing_net:
ipv4_address: ${smf_ip}
pcrf:
container_name: pcrf
command: bash -c "./pcrf_setup.sh ${mongo_ip} ${smf_ip} ${pcrf_ip}"
image: j0lama/free5gc-pcrf
depends_on:
- mongo
- hss
- amf
- smf
networks:
testing_net:
ipv4_address: ${pcrf_ip}
upf:
container_name: upf
command: bash -c "./upf_setup.sh ${upf_ip}"
image: j0lama/free5gc-upf
depends_on:
- mongo
- hss
- amf
- smf
- pcrf
networks:
testing_net:
ipv4_address: ${upf_ip}
</code></pre>
<p>With this I am able to set up all the components of my cluster correctly.
I already tried the kompose utility but it does not work.</p>
<p>Any suggestion or alternative for kubernetes?</p>
<p>Thanks for your help.</p>
| j0lama | <p>You can expose each deployment with a kubernetes service. When you do that, each exposed service cluster IP will be available to the containers as environment variables.</p>
<p>For example: deploy hss, and expose hss ports using a service named <code>hss</code>. Then, any container that needs to connect to <code>hss</code> can use the environment variable <code>HSS_SERVICE_HOST</code> to get the IP address for that service. There are more environment variables that will give you service port numbers, or service addresses in other formats.</p>
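<p>A minimal sketch of such a Service (the label and port here are placeholders — use the label your hss pods actually carry and the ports the hss process listens on):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: hss
spec:
  selector:
    app: hss          # assumed pod label
  ports:
  - name: diameter
    port: 3868        # placeholder; use the real hss port(s)
</code></pre>
<p>Note that these <code>*_SERVICE_HOST</code>/<code>*_SERVICE_PORT</code> variables are only injected into pods created after the Service exists, and the Service is also reachable by its DNS name <code>hss</code>.</p>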
| Burak Serdar |
<p>The EKS docs in the page <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html" rel="nofollow noreferrer">Amazon EKS node IAM role</a> state that before you create worker nodes, you must create a role with the following policies:</p>
<ul>
<li>AmazonEKSWorkerNodePolicy</li>
<li>AmazonEC2ContainerRegistryReadOnly</li>
<li>AmazonEKS_CNI_Policy</li>
</ul>
<p>Regarding the last one, the docs state that:</p>
<blockquote>
<p>Rather than attaching the policy to this role however, we recommend that you attach the policy to a separate role used specifically for the Amazon VPC CNI add-on</p>
</blockquote>
<p>Can someone explain why is this recommended?</p>
| YoavKlein | <p>The reason why it is recommended to attach the AmazonEKS_CNI_Policy to a separate role used specifically for the Amazon VPC CNI add-on is to follow the principle of least privilege.</p>
<p>The Amazon VPC CNI (Container Network Interface) is a plugin for Kubernetes that enables networking between pods and the rest of the cluster in a VPC (Virtual Private Cloud) environment. This plugin needs certain permissions to function properly, such as creating and managing network interfaces and route tables.</p>
<p>By creating a separate role for the Amazon VPC CNI add-on, you can ensure that this plugin has only the necessary permissions to perform its specific tasks, and not other permissions that may be included in the AmazonEKSWorkerNodePolicy. This helps to reduce the risk of accidental or intentional misuse of privileges, and makes it easier to audit and manage permissions for different components of your cluster.</p>
<p>Additionally, separating the Amazon VPC CNI permissions from the worker node IAM role can also help with troubleshooting, as it allows you to isolate issues related to the network plugin from other potential problems that may affect the worker nodes or other components of your cluster.</p>
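<p>If you use eksctl, a dedicated role for the VPC CNI via IRSA (IAM roles for service accounts) can be created roughly like this (a sketch — the cluster and role names are placeholders):</p>
<pre><code>eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace kube-system \
  --name aws-node \
  --role-name AmazonEKSVPCCNIRole \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --override-existing-serviceaccounts \
  --approve
</code></pre>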
| shock_in_sneakers |
<p>I want to achieve TLS mutual auth between my different services running in a kubernetes cluster and I have found that Istio is a good solution to achieve this without making any changes in code.</p>
<p>I am trying to use Istio sidecar injection to do TLS mutual auth between services running inside the cluster.</p>
<ul>
<li>Outside traffic enters the mesh through nginx ingress controller. We want to keep using it instead of the Istio ingress controller(we want to make as little changes as possible).</li>
<li>The services are able to communicate with each other properly when the Istio Sidecar injection is disabled. But as soon as I enable the sidecar in the application's namespace, the app is not longer able to serve requests(I am guessing the incoming requests are dropped by the envoy sidecar proxy).</li>
</ul>
<p><a href="https://i.stack.imgur.com/mNC7Q.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mNC7Q.jpg" alt="My architecture looks like this" /></a></p>
<p>What I want to do:</p>
<ul>
<li>Enable istio sidecar proxy injection on namespace-2(nginx ingress controller, service 1 and service 2) so that all services communicate with each other through TLS mutual auth.</li>
</ul>
<p>What I don't want to do:</p>
<ul>
<li>Enable istio sidecar proxy injection on the nginx ingress controller(I don't want to make any changes in it as it is serving as frontend for multiple other workloads).</li>
</ul>
<p>I have been trying to make it work since a couple of weeks with no luck. Any help from the community will be greatly appreciated.</p>
| dishant makwana | <blockquote>
<p>my goal is to atleast enable TLS mutual auth between service-1 and service-2</p>
</blockquote>
<p>AFAIK if you have enabled injection in namespace-2 then services here already have mTLS enabled. It's enabled by default since istio 1.5 version. There are related <a href="https://istio.io/latest/news/releases/1.5.x/announcing-1.5/upgrade-notes/#automatic-mutual-tls" rel="nofollow noreferrer">docs</a> about this.</p>
<blockquote>
<p>Automatic mutual TLS is now enabled by default. Traffic between sidecars is automatically configured as mutual TLS. You can disable this explicitly if you worry about the encryption overhead by adding the option -- set values.global.mtls.auto=false during install. For more details, refer to <a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls" rel="nofollow noreferrer">automatic mutual TLS</a>.</p>
</blockquote>
<p>Take a look below for more information about how mTLS between services works.</p>
<h2>Mutual TLS in Istio</h2>
<blockquote>
<p>Istio offers mutual TLS as a solution for service-to-service authentication.</p>
<p>Istio uses the sidecar pattern, meaning that each application container has a sidecar Envoy proxy container running beside it in the same pod.</p>
<ul>
<li><p>When a service receives or sends network traffic, the traffic always
goes through the Envoy proxies first.</p>
</li>
<li><p>When mTLS is enabled between two services, the client side and server side Envoy proxies verify each other’s identities before sending requests.</p>
</li>
<li><p>If the verification is successful, then the client-side proxy encrypts the traffic, and sends it to the server-side proxy.</p>
</li>
<li><p>The server-side proxy decrypts the traffic and forwards it locally to the actual destination service.</p>
</li>
</ul>
</blockquote>
<p><a href="https://i.stack.imgur.com/7LXQ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7LXQ3.png" alt="enter image description here" /></a></p>
<h2>NGINX</h2>
<blockquote>
<p>But the problem is, the traffic from outside the mesh is getting terminated at the ingress resource. The nginx reverse proxy in namespace-2 does not see the incoming calls.</p>
</blockquote>
<p>I see there is similar issue on <a href="https://github.com/istio/istio/issues/24668" rel="nofollow noreferrer">github</a> about that, worth to try with this.</p>
<p><a href="https://github.com/istio/istio/issues/14450#issuecomment-498771781" rel="nofollow noreferrer">Answer</a> provided by @stono.</p>
<blockquote>
<p>Hey,
This is not an istio issue, getting nginx to work with istio is a little bit difficult. The issue is because fundamentally nginx is making an outbound request to an ip that is has resolved from your hostname foo-bar. This won't work as envoy doesn't know what cluster ip belongs to, so it fails.</p>
<p>I'd suggest using the ingress-nginx kubernetes project and in turn using the following value in your Ingress configuration:</p>
<p>annotations:
nginx.ingress.kubernetes.io/service-upstream: "true"
What this does is ensure that nginx doesn't resolve the upstream address to an ip, and maintains the correct Host header which the sidecar uses in order to route to your destination.</p>
<p>I recommend using this project because I use it, with Istio, with a 240 odd service deployment.</p>
<p>If you're not using ingress-nginx, I think you can set proxy_ssl_server_name on; or another thing you could try is forcefully setting the Host header on the outbound request to the internal fqdn of the service so:</p>
<p>proxy_set_header Host foo-bar;
Hope this helps but as I say, it's an nginx configuration rather than an istio problem.</p>
</blockquote>
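<p>For reference, that annotation sits on the Ingress object itself; a rough sketch (names and host are placeholders):</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service-1
  namespace: namespace-2
  annotations:
    kubernetes.io/ingress.class: nginx
    # route to the service's cluster IP instead of resolved pod IPs,
    # so the Envoy sidecar can match the upstream
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - host: service-1.example.com
    http:
      paths:
      - backend:
          serviceName: service-1
          servicePort: 80
</code></pre>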
| Jakub |
<p>I'm using the "Workloads" service of Kubernetes Engine of Google Cloud Platform to deploy my application.</p>
<p>Once you click on deploy I can see in "Cloud Build" what command GCP has launched:
<a href="https://i.stack.imgur.com/qpHuO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qpHuO.png" alt="enter image description here" /></a></p>
<p>The current build command is: <code>build -t gcr.io/ma...g:9e4dab3 -d Dockerfile</code></p>
<p>Is there a way to change the build command ? Like: <code>build -t gcr.io/ma...g:9e4dab3 -d Dockerfile --build-arg APP_ENV=dev</code></p>
| Lenny4 | <p>Workloads is a <strong>beta</strong> feature and doesn't include any option to add or modify the build command; you can open a <a href="https://cloud.google.com/support/docs/issue-trackers" rel="nofollow noreferrer">feature request</a> for this functionality.</p>
<p>As a workaround you can build your image directly and store it in Container Registry by using Cloud Build with all the parameters necessary for your image.</p>
<p>Additionally you can create a build to automate this process. For example:</p>
<pre><code>steps:
#Building Red Velvet Image
- name: 'gcr.io/cloud-builders/docker'
id: build-redvelvet
args:
- build
- --tag=${_RV}:$SHORT_SHA
- --tag=${_RV}:latest
- --build-arg APP_ENV=dev
- .
dir: 'redvelvet/'
#Pushing Red Velvet Image
- name: 'gcr.io/cloud-builders/docker'
id: push-redvelvet
args:
- push
- ${_RV}
#Deploying to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
id: deploy-gke
args:
- run
- --filename=something.yaml
- --location=${_COMPUTE_ZONE}
- --cluster=${_CLUSTER_NAME}
#Update Red Velvet Image
- name: 'gcr.io/cloud-builders/kubectl'
id: update-redvelvet
args:
- set
- image
- deployment/redvelvet-deployment
- redvelvet=${_RV}:$SHORT_SHA
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
waitFor:
- deploy-gke
substitutions:
_RV: gcr.io/${PROJECT_ID}/redvelvet
_CLUSTER_NAME: something
_COMPUTE_ZONE: us-central1
</code></pre>
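<p>Such a build is typically wired to a source trigger (which is what populates <code>$SHORT_SHA</code> automatically), but for a quick manual run something like this should work:</p>
<pre><code>gcloud builds submit --config cloudbuild.yaml .
</code></pre>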
| Jan Hernandez |
<p>I have added Istio to an existing GKE cluster. This cluster was initially deployed from the GKE UI with Istio "disabled".</p>
<p>I have deployed Istio from the CLI using kubectl and while everything works fine (istio namespace, pods, services, etc...) and I was able later on to deploy an app with Istio sidecar pods etc..., I wonder why the GKE UI still reports that Istio is <code>disabled</code> on this cluster. This is confusing - in effect, Istio is deployed in the cluster but the UI reports the opposite.</p>
<p>Is that a GKE bug ?</p>
<p>Deployed Istio using:
kubectl apply -f install/kubernetes/istio-auth.yaml</p>
<p>Deployment code can be seen here:</p>
<p><a href="https://github.com/hassanhamade/istio/blob/master/deploy" rel="nofollow noreferrer">https://github.com/hassanhamade/istio/blob/master/deploy</a></p>
| hassan hamade | <p>From my point of view this doesn't look like a bug. I assume that the status is <code>disabled</code> because you have deployed a custom version of Istio on your cluster. This flag indicates the status of the GKE-managed version.</p>
<p>If you want to update your cluster to use GKE managed version, you can do it as following:</p>
<p>With TLS enforced</p>
<pre><code>gcloud beta container clusters update CLUSTER_NAME \
--update-addons=Istio=ENABLED --istio-config=auth=MTLS_STRICT
</code></pre>
<p>or</p>
<p>With mTLS in permissive mode </p>
<pre><code>gcloud beta container clusters update CLUSTER_NAME \
--update-addons=Istio=ENABLED --istio-config=auth=MTLS_PERMISSIVE
</code></pre>
<p>Check <a href="https://cloud.google.com/istio/docs/istio-on-gke/installing#creating_a_cluster_with_istio_on_gke" rel="nofollow noreferrer">this</a> for more details.</p>
<p>Be careful: since you have already deployed Istio, enabling the GKE-managed one may cause issues.</p>
| Kostikas Visnia |
<p>I created an Kubernetes Cluster in Google Cloud, I'm using my macbook to create PODs, and I'm using <code>gcloud</code> to connect to cluster from my computer:</p>
<p><a href="https://i.stack.imgur.com/DjKb5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DjKb5.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/TgqTe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TgqTe.png" alt="enter image description here"></a></p>
<p>When I run <code>gcloud container clusters get-credentials gcloud-cluster-dev --zone europe-west1-d --project ***********</code> in my computer, <code>gcloud</code> configures automatically <code>~/.kube/config</code> file.</p>
<p>But now I want to connect to kubectl from a Docker container (this one: <code>dtzar/helm-kubectl:2.14.0</code>), and I don't want to use <code>gcloud</code>, I only want to use <code>kubectl</code>.</p>
<p>When I run <code>docker run -it dtzar/helm-kubectl:2.14.0 sh</code>, I already have <code>kubectl</code> installed, but not configurated to connect to cluster.</p>
<p>I'm trying to connect <code>kubectl</code> to cluster without installing <code>gcloud</code>.</p>
<p>I tried basic authentication <a href="https://blog.christianposta.com/kubernetes/logging-into-a-kubernetes-cluster-with-kubectl/" rel="nofollow noreferrer">https://blog.christianposta.com/kubernetes/logging-into-a-kubernetes-cluster-with-kubectl/</a> without success.
Returns an error:</p>
<pre><code># kubectl get pods
error: You must be logged in to the server (Unauthorized)
# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
</code></pre>
<p>I also tried this: <a href="https://codefarm.me/2019/02/01/access-kubernetes-api-with-client-certificates/" rel="nofollow noreferrer">https://codefarm.me/2019/02/01/access-kubernetes-api-with-client-certificates/</a>
But I could not find where <code>ca.crt</code> and <code>ca.key</code> are, to use in this line: <code>(...) -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key (...)</code></p>
<p>I only see this:
<a href="https://i.stack.imgur.com/LhM20.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LhM20.png" alt="enter image description here"></a></p>
<p>Can I use this CA? How?</p>
<p>Anyone can help me? Thanks.</p>
<p><strong>EDIT:</strong>
I can't mount my kubectl config in the docker image, because I created this config with gcloud, and the Docker image don't have gcloud. I want to connect directly to kubectl withou gcloud</p>
<pre class="lang-sh prettyprint-override"><code>
$ docker run -v ~/.kube:/root/.kube -it dtzar/helm-kubectl:2.14.0 sh
# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: error executing access token command "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=fork/exec /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud: no such file or directory output= stderr=
</code></pre>
| Rui Martins | <p>The easiest would be to mount your ~/.kube/config into your container. Like:</p>
<pre><code>docker run -v ~/.kube:/root/.kube <your container image:tag>
</code></pre>
<p><strong>EDIT:</strong> If this is not enough, you can also mount your SDK folder (kinda hackish):</p>
<pre><code>docker run -v ~/.kube:/root/.kube -v /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk:/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk -it dtzar/helm-kubectl:2.14.0 sh
</code></pre>
| Serhiy |
<p>I'm beginning to dig into kubeflow pipelines for a project and have a beginner's question. It seems like kubeflow pipelines work well for training, but how about serving in production?</p>
<p>I have a fairly intensive pre processing pipeline for training and must apply that same pipeline for production predictions. Can I use something like Seldon Serving to create an endpoint to kickoff the pre processing pipeline, apply the model, then to return the prediction? Or is the better approach to just put everything in one docker container?</p>
| kevin.w.johnson | <p>Yes, you can definitely use Seldon for serving. In fact, the Kubeflow team offers an easy way to link training and serving: <a href="https://github.com/kubeflow/fairing" rel="nofollow noreferrer">fairing</a></p>
<p>Fairing provides a programmatic way of deploying your prediction endpoint. You could also take a look at <a href="https://github.com/kubeflow/fairing/tree/master/examples/prediction" rel="nofollow noreferrer">this example</a> on how to deploy your Seldon endpoint with your training result.</p>
| Gabriel Wen |
<p>After upgrading Jenkins to version <code>2.375.4</code> and Kubernetes AWS EKS cluster to <code>v1.23</code> along with changing container runtime from <code>docker</code> to <code>containerd</code>, I sometimes get the following error on Jenkins jobs that run on Kubernetes AWS EKS cluster via Jenkins agent (Jenkins slave).</p>
<p>Below is the error I get:</p>
<pre><code>03:39:51 java.nio.channels.ClosedChannelException
03:39:51 Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from ip-10-20-53-103.eu-west-1.compute.internal/10.20.53.103:38004
03:39:51 at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1784)
03:39:51 at hudson.remoting.Request.call(Request.java:199)
03:39:51 at hudson.remoting.Channel.call(Channel.java:999)
03:39:51 at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:153)
03:39:51 at jdk.internal.reflect.GeneratedMethodAccessor1121.invoke(Unknown Source)
03:39:51 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
03:39:51 at java.base/java.lang.reflect.Method.invoke(Method.java:566)
03:39:51 at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:138)
03:39:51 at com.sun.proxy.$Proxy262.execute(Unknown Source)
03:39:51 at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1359)
03:39:51 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:129)
03:39:51 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:97)
03:39:51 at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:84)
03:39:51 at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
03:39:51 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
03:39:51 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
03:39:51 Also: org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: df677487-c98d-4870-aa71-74faab41e552
03:39:51 Also: org.jenkinsci.plugins.workflow.support.steps.AgentOfflineException: Unable to create live FilePath for current-frontend-e2e-native-deps-9106-4ptkj-3hltr-q0stv; current-frontend-e2e-native-deps-9106-4ptkj-3hltr-q0stv was marked offline: Connection was broken
03:39:51 at org.jenkinsci.plugins.workflow.support.steps.ExecutorStepDynamicContext$FilePathTranslator.get(ExecutorStepDynamicContext.java:182)
03:39:51 at org.jenkinsci.plugins.workflow.support.steps.ExecutorStepDynamicContext$FilePathTranslator.get(ExecutorStepDynamicContext.java:154)
03:39:51 at org.jenkinsci.plugins.workflow.support.steps.ExecutorStepDynamicContext$Translator.get(ExecutorStepDynamicContext.java:147)
03:39:51 at org.jenkinsci.plugins.workflow.support.steps.ExecutorStepDynamicContext$FilePathTranslator.get(ExecutorStepDynamicContext.java:164)
03:39:51 at org.jenkinsci.plugins.workflow.support.steps.ExecutorStepDynamicContext$FilePathTranslator.get(ExecutorStepDynamicContext.java:154)
03:39:51 at org.jenkinsci.plugins.workflow.steps.DynamicContext$Typed.get(DynamicContext.java:95)
03:39:51 at org.jenkinsci.plugins.workflow.cps.ContextVariableSet.get(ContextVariableSet.java:139)
03:39:51 at org.jenkinsci.plugins.workflow.cps.CpsThread.getContextVariable(CpsThread.java:137)
03:39:51 at org.jenkinsci.plugins.workflow.cps.CpsStepContext.doGet(CpsStepContext.java:297)
03:39:51 at org.jenkinsci.plugins.workflow.cps.CpsBodySubContext.doGet(CpsBodySubContext.java:88)
03:39:51 at org.jenkinsci.plugins.workflow.support.DefaultStepContext.get(DefaultStepContext.java:75)
03:39:51 at org.jenkinsci.plugins.workflow.steps.CoreWrapperStep$Callback.finished(CoreWrapperStep.java:187)
03:39:51 at org.jenkinsci.plugins.workflow.steps.CoreWrapperStep$Execution2$Callback2.finished(CoreWrapperStep.java:150)
03:39:51 at org.jenkinsci.plugins.workflow.steps.GeneralNonBlockingStepExecution$TailCall.lambda$onFailure$1(GeneralNonBlockingStepExecution.java:156)
03:39:51 at org.jenkinsci.plugins.workflow.steps.GeneralNonBlockingStepExecution.lambda$run$0(GeneralNonBlockingStepExecution.java:77)
03:39:51 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
03:39:51 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
03:39:51 Caused: hudson.remoting.RequestAbortedException
03:39:51 at hudson.remoting.Request.abort(Request.java:346)
03:39:51 at hudson.remoting.Channel.terminate(Channel.java:1080)
03:39:51 at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:241)
03:39:51 at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:221)
03:39:51 at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:825)
03:39:51 at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:289)
03:39:51 at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:168)
03:39:51 at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:825)
03:39:51 at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:155)
03:39:51 at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:143)
03:39:51 at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:789)
03:39:51 at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:30)
03:39:51 at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:70)
03:39:51 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
03:39:51 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
03:39:51 at java.base/java.lang.Thread.run(Thread.java:829)
</code></pre>
<p>What is the reason for it? How to fix it?</p>
| Abdullah Khawer | <p><strong>Possible Solutions:</strong></p>
<ol>
<li><p>Make sure that your <code>kubernetes-plugin</code> is on the latest version and not outdated.</p>
</li>
<li><p>Make sure that the <code>java</code> version of your Jenkins master matches the <code>java</code> version of the Jenkins slave to avoid any incompatibilities.</p>
</li>
<li><p>Make sure that the pod is not throttling and has enough CPU and/or Memory. If not, increase one of them or both of them to fix this issue.</p>
</li>
</ol>
<p><strong>How did I find solution no. 3?</strong></p>
<p>Looking at the metrics of the containers of that job pod in Grafana, I realised that CPU usage reached 100%, which caused CPU throttling for the jnlp container. Increasing its CPU request and limit fixed the issue.</p>
<p>Old Configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
limits:
cpu: "2"
memory: "2Gi"
requests:
cpu: "2"
memory: "2Gi"
</code></pre>
<p>New Configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
limits:
cpu: "3"
memory: "2Gi"
requests:
cpu: "3"
memory: "2Gi"
</code></pre>
| Abdullah Khawer |
<p>I have used the following configuration to setup the Istio</p>
<pre><code>cat << EOF | kubectl apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: istio-control-plane
spec:
# Use the default profile as the base
# More details at: https://istio.io/docs/setup/additional-setup/config-profiles/
profile: default
# Enable the addons that we will want to use
addonComponents:
grafana:
enabled: true
prometheus:
enabled: true
tracing:
enabled: true
kiali:
enabled: true
values:
global:
# Ensure that the Istio pods are only scheduled to run on Linux nodes
defaultNodeSelector:
beta.kubernetes.io/os: linux
kiali:
dashboard:
auth:
strategy: anonymous
components:
egressGateways:
- name: istio-egressgateway
enabled: true
EOF
</code></pre>
<p>and exposed the jaeger-query service as mentioned below</p>
<pre><code>kubectl expose service jaeger-query --type=LoadBalancer --name=jaeger-query-svc --namespace istio-system
kubectl get svc jaeger-query-svc -n istio-system -o json
export JAEGER_URL=$(kubectl get svc jaeger-query-svc -n istio-system -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"):$(kubectl get svc jaeger-query-svc -n istio-system -o 'jsonpath={.spec.ports[0].port}')
echo http://${JAEGER_URL}
curl http://${JAEGER_URL}
</code></pre>
<p>I couldn't see the below deployed application in Jaeger</p>
<p><a href="https://i.stack.imgur.com/Dmou2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dmou2.png" alt="enter image description here" /></a></p>
<p>and have deployed the application as mentioned below</p>
<pre><code>cat << EOF | kubectl apply -f -
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: nginx-deployment
namespace: akv2k8s-test
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: stenote/nginx-hostname
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: web
namespace: akv2k8s-test
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
EOF
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: akv2k8s-test
spec:
selector:
istio: ingressgateway # use istio default ingress gateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: "${KEY_CERT2_NAME}"
hosts:
- web.zaalion.com
EOF
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld
namespace: akv2k8s-test
spec:
hosts:
- web.zaalion.com
gateways:
- public-gateway
http:
- route:
- destination:
host: web.akv2k8s-test.svc.cluster.local
port:
number: 80
EOF
</code></pre>
<p>I could access the service as shown below</p>
<pre><code>export EXTERNAL_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -v --resolve web.zaalion.com:443:$EXTERNAL_IP --cacert cert2.crt https://web.zaalion.com
</code></pre>
<p>I do not know why the service is not listed in the Jaeger UI.</p>
| One Developer | <p>According to istio <a href="https://istio.io/latest/docs/tasks/observability/distributed-tracing/jaeger/#generating-traces-using-the-bookinfo-sample" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>To see trace data, you must send requests to your service. The number of requests depends on Istio’s sampling rate. You set this rate when you install Istio. The default sampling rate is 1%. You need to send at least 100 requests before the first trace is visible. Could you try to send at least 100 requests and check if it works?</p>
</blockquote>
<p>If you wan't to change the default sampling rate then there is istio <a href="https://istio.io/latest/docs/tasks/observability/distributed-tracing/configurability/#customizing-trace-sampling" rel="nofollow noreferrer">documentation</a> about that.</p>
<blockquote>
<p><strong>Customizing Trace sampling</strong></p>
<p>The sampling rate option can be used to control what percentage of requests get reported to your tracing system. This should be configured depending upon your traffic in the mesh and the amount of tracing data you want to collect. The default rate is 1%.</p>
<p>To modify the default random sampling to 50, add the following option to your tracing.yaml file.</p>
</blockquote>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
defaultConfig:
tracing:
sampling: 50
</code></pre>
<blockquote>
<p>The sampling rate should be in the range of 0.0 to 100.0 with a precision of 0.01. For example, to trace 5 requests out of every 10000, use 0.05 as the value here.</p>
</blockquote>
| Jakub |
<p>In docker we can use -p flag to map container port to whatever port required. But in kubernetes if we use use NodePort then we will get ports of host which range start from 30000. So is there a way to map to a specific port?</p>
| sandeep P | <p>You can use the <code>nodePort</code> field in the service definition to specify the port for the nodeport:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a></p>
<p>However, a nodePort will allocate that port on all nodes in the cluster.</p>
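<p>For example, a Service pinned to a specific node port could look like this (the chosen port must fall inside the configured NodePort range, 30000-32767 by default):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # cluster-internal service port
    targetPort: 80    # container port
    nodePort: 30080   # fixed port opened on every node
</code></pre>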
<p>You can also specify a <code>hostPort</code> in the pod container spec itself, though it is not recommended:</p>
<pre><code>ports:
- name: http
containerPort: 80
hostPort: 80
</code></pre>
| Burak Serdar |
<p>I want to list the pods that are owned by the resource <code>X</code> from the Kubernetes cluster using Kubuilder's <code>List(ctx context.Context, list ObjectList, opts ...ListOption)</code> method. <code>ListOptions</code> contains options for limiting or filtering results. Here is the the structure of the <code>ListOptions</code></p>
<pre class="lang-golang prettyprint-override"><code>type ListOptions struct {
// LabelSelector filters results by label. Use labels.Parse() to
// set from raw string form.
LabelSelector labels.Selector
// FieldSelector filters results by a particular field. In order
// to use this with cache-based implementations, restrict usage to
// a single field-value pair that's been added to the indexers.
FieldSelector fields.Selector
// Namespace represents the namespace to list for, or empty for
// non-namespaced objects, or to list across all namespaces.
Namespace string
// Limit specifies the maximum number of results to return from the server. The server may
// not support this field on all resource types, but if it does and more results remain it
// will set the continue field on the returned list object. This field is not supported if watch
// is true in the Raw ListOptions.
Limit int64
// Continue is a token returned by the server that lets a client retrieve chunks of results
// from the server by specifying limit. The server may reject requests for continuation tokens
// it does not recognize and will return a 410 error if the token can no longer be used because
// it has expired. This field is not supported if watch is true in the Raw ListOptions.
Continue string
// Raw represents raw ListOptions, as passed to the API server. Note
// that these may not be respected by all implementations of interface,
// and the LabelSelector, FieldSelector, Limit and Continue fields are ignored.
Raw *metav1.ListOptions
}
</code></pre>
<p>Now, How can I provide the owner information to this <code>ListOptions</code> so the <code>List</code> method will only list the pods that are owned by <code>X</code>?</p>
<p>Here is an example from the KubeBuilder book that shows how to filter results by a particular field,</p>
<pre class="lang-golang prettyprint-override"><code> listOps := &client.ListOptions{
FieldSelector: fields.OneTermEqualSelector(configMapField, configMap.GetName()),
Namespace: configMap.GetNamespace(),
}
err := r.List(context.TODO(), attachedConfigDeployments, listOps)
</code></pre>
| Hossain Mahmud | <p>Unfortunately it's not possible to use field selector for every field of a resource. In your case for example, you can only use <a href="https://github.com/kubernetes/kubernetes/blob/9d577d8a29893062dfbd669997396dbd01ab0e47/pkg/apis/core/v1/conversion.go#L33" rel="nofollow noreferrer">these fields</a> as field selector. It's also stated in <a href="https://stackoverflow.com/a/59443446/11764782">this thread</a>.</p>
<p>Alternatively, you can put labels to pods that is owned by a custom resource and use label selectors. Or you can get all pods and apply programmatic filter to get necessary pods. (I recommend the first approach since <code>metadata.ownerReferences</code> is an array and the cost is O(n^2))</p>
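<p>A minimal sketch of the label-based approach could look like this (the label key/value, the <code>x</code> variable for the owning resource, and the surrounding reconciler context are assumptions, not taken from your code):</p>
<pre><code>// Inside Reconcile: r is a client.Client, ctx a context.Context,
// and x is the custom resource whose pods we want to list.
var pods corev1.PodList
if err := r.List(ctx, &pods,
    client.InNamespace(x.Namespace),
    client.MatchingLabels{"example.com/owned-by": x.Name}, // a label your controller stamps on pods it creates
); err != nil {
    return ctrl.Result{}, err
}

// If labelling is not an option, list without the label selector and
// filter by owner reference instead; this loop keeps only pods owned by x.
var owned []corev1.Pod
for _, p := range pods.Items {
    for _, ref := range p.GetOwnerReferences() {
        if ref.UID == x.GetUID() {
            owned = append(owned, p)
            break
        }
    }
}
</code></pre>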
| tuna |
<p>The code is,</p>
<pre><code>const userSchema = new mongoose.Schema({
email: {
type: String,
required: true,
},
password: {
type: String,
required: true,
},
});
console.log(userSchema);
userSchema.statics.build = (user: UserAttrs) => {
return new User(user);
};
userSchema.pre("save", async function (next) {
if (this.isModified("password")) {
const hashed = await Password.toHash(this.get("password"));
this.set("password", hashed);
}
next();
});
</code></pre>
<p>Now, the error I'm running into is,</p>
<pre><code>[auth] > [email protected] start /app
[auth] > ts-node-dev src/index.ts
[auth]
[auth] [INFO] 12:46:59 ts-node-dev ver. 1.0.0 (using ts-node ver. 9.0.0, typescript ver. 3.9.7)
[auth] Compilation error in /app/src/models/user.ts
[auth] [ERROR] 12:47:04 ⨯ Unable to compile TypeScript:
[auth] src/models/user.ts(37,12): error TS2551: Property 'statics' does not exist on type 'Schema'. Did you mean 'static'?
[auth] src/models/user.ts(46,3): error TS2554: Expected 1 arguments, but got 0.
</code></pre>
<p>The statics property does exist on the schema object and it does show up when I console.log(userSchema). I think it has something to do with Kubernetes and Skaffold. Any idea how to fix this problem?</p>
| Sonish Maharjan | <p>I think this could help</p>
<p>First you have to create 3 interfaces.</p>
<pre><code>interface UserAttrs {
email: string;
password: string;
}
interface UserModel extends mongoose.Model<UserDoc> {
build(attrs: UserAttrs): UserDoc;
}
interface UserDoc extends mongoose.Document {
email: string;
password: string;
}
</code></pre>
<p>Then in your schema's middleware you have to declare the type of the variables that you're using</p>
<pre><code>userSchema.pre("save", async function (this: UserDoc, next: any) {
if (this.isModified("password")) {
const hashed = await Password.toHash(this.get("password"));
this.set("password", hashed);
}
next();
});
const User = mongoose.model<UserDoc, UserModel>('User', userSchema);
</code></pre>
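<p>With those types in place, the static from your schema can then be used like this (just a usage sketch inside some async code; the values are placeholders):</p>
<pre><code>const user = User.build({ email: "[email protected]", password: "secret" });
await user.save(); // the pre("save") hook hashes the password here
</code></pre>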
<p><a href="https://github.com/Automattic/mongoose/issues/6725" rel="nofollow noreferrer">Related issue that I found</a></p>
| MarioHdoz |
<p>I'm going through a not very understandable situation.</p>
<blockquote>
<ul>
<li>Environment
<ul>
<li>Two dedicated nodes with azure <em>centos 8.2</em> (2vcpu, 16G ram), not AKS</li>
<li>1 master node, 1 worker node.</li>
<li><em>kubernetes v1.19.3</em></li>
<li><em>helm v2.16.12</em></li>
<li>Helm charts Elastic (<a href="https://github.com/elastic/helm-charts/tree/7.9.3" rel="nofollow noreferrer">https://github.com/elastic/helm-charts/tree/7.9.3</a>)</li>
</ul>
</li>
</ul>
</blockquote>
<p>At first, it works fine with the installation below.</p>
<pre><code>## elasticsearch, filebeat
# kubectl apply -f pv.yaml
# helm install -f values.yaml --name elasticsearch elastic/elasticsearch
# helm install --name filebeat --version 7.9.3 elastic/filebeat
</code></pre>
<p><strong>curl elasticsearch-ip:9200</strong> and <strong>curl elasticsearch-ip:9200/_cat/indices</strong>
return the right values.</p>
<p>But after rebooting a worker node, the pods just keep showing READY 0/1 and stop working.</p>
<p><code>NAME READY STATUS RESTARTS AGE</code><br>
<code>elasticsearch-master-0 0/1 Running 10 71m</code><br>
<code>filebeat-filebeat-67qm2 0/1 Running 4 40m</code><br></p>
<p>In this situation, after removing /mnt/data/nodes and rebooting again,
it works fine.</p>
<p>The elasticsearch pod has nothing special about it, I think.</p>
<pre><code>#describe
{"type": "server", "timestamp": "2020-10-26T07:49:49,708Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[filebeat-7.9.3-2020.10.26-000001][0]]]).", "cluster.uuid": "sWUAXJG9QaKyZDe0BLqwSw", "node.id": "ztb35hToRf-2Ahr7olympw" }
#logs
Normal SandboxChanged 4m4s (x3 over 4m9s) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 4m3s kubelet Container image "docker.elastic.co/elasticsearch/elasticsearch:7.9.3" already present on machine
Normal Created 4m1s kubelet Created container configure-sysctl
Normal Started 4m1s kubelet Started container configure-sysctl
Normal Pulled 3m58s kubelet Container image "docker.elastic.co/elasticsearch/elasticsearch:7.9.3" already present on machine
Normal Created 3m58s kubelet Created container elasticsearch
Normal Started 3m57s kubelet Started container elasticsearch
Warning Unhealthy 91s (x14 over 3m42s) kubelet Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
#events
6m1s Normal Pulled pod/elasticsearch-master-0 Container image "docker.elastic.co/elasticsearch/elasticsearch:7.9.3" already present on machine
6m1s Normal Pulled pod/filebeat-filebeat-67qm2 Container image "docker.elastic.co/beats/filebeat:7.9.3" already present on machine
5m59s Normal Started pod/elasticsearch-master-0 Started container configure-sysctl
5m59s Normal Created pod/elasticsearch-master-0 Created container configure-sysctl
5m59s Normal Created pod/filebeat-filebeat-67qm2 Created container filebeat
5m58s Normal Started pod/filebeat-filebeat-67qm2 Started container filebeat
5m56s Normal Created pod/elasticsearch-master-0 Created container elasticsearch
5m56s Normal Pulled pod/elasticsearch-master-0 Container image "docker.elastic.co/elasticsearch/elasticsearch:7.9.3" already present on machine
5m55s Normal Started pod/elasticsearch-master-0 Started container elasticsearch
61s Warning Unhealthy pod/filebeat-filebeat-67qm2 Readiness probe failed: elasticsearch: http://elasticsearch-master:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 10.97.133.135
dial up... ERROR dial tcp 10.97.133.135:9200: connect: connection refused
59s Warning Unhealthy pod/elasticsearch-master-0 Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
</code></pre>
<p>/mnt/data path has chown 1000:1000</p>
<p>And in the case of only elasticsearch without filebeat, rebooting causes no problem.</p>
<p>I can't figure this out at all. :(</p>
<p>What am I missing?</p>
<hr />
<ol>
<li>pv.yaml</li>
</ol>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: elastic-pv
labels:
type: local
app: elastic
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
claimRef:
namespace: default
name: elasticsearch-master-elasticsearch-master-0
hostPath:
path: "/mnt/data"
</code></pre>
<ol start="2">
<li>values.yaml</li>
</ol>
<pre><code>---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.master=true
roles:
master: "true"
ingest: "true"
data: "true"
replicas: 1
minimumMasterNodes: 1
esMajorVersion: ""
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
# log4j2.properties: |
# key = value
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
# name: env-secret
# - configMapRef:
# name: config-map
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
# defaultMode: 0755
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.9.3"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# additionals labels
labels: {}
esJavaOpts: "-Xmx1g -Xms1g"
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "1000m"
memory: "2Gi"
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
sidecarResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
networkHost: "0.0.0.0"
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-storage
resources:
requests:
storage: 5Gi
rbac:
create: false
serviceAccountAnnotations: {}
serviceAccountName: ""
podSecurityPolicy:
create: false
name: ""
spec:
privileged: true
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- persistentVolumeClaim
persistence:
enabled: true
name: elastic-vc
labels:
# Add default labels for the volumeClaimTemplate fo the StatefulSet
app: elastic
annotations: {}
extraVolumes: []
# - name: extras
# emptyDir: {}
extraVolumeMounts: []
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
extraInitContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
# The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true
protocol: http
httpPort: 9200
transportPort: 9300
service:
labels: {}
labelsHeadless: {}
type: ClusterIP
nodePort: ""
annotations: {}
httpPortName: http
transportPortName: transport
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: ""
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
fsGroup: 1000
runAsUser: 1000
securityContext:
capabilities:
drop:
- ALL
#readOnlyRootFilesystem: false
runAsNonRoot: true
runAsUser: 1000
# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/7.9/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# - effect: NoSchedule
# key: node-role.kubernetes.io/master
# Enabling this will publically expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
nameOverride: ""
fullnameOverride: ""
# https://github.com/elastic/helm-charts/issues/63
masterTerminationFix: false
lifecycle: {}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
# postStart:
# exec:
# command:
# - bash
# - -c
# - |
# #!/bin/bash
# # Add a template to adjust number of shards/replicas
# TEMPLATE_NAME=my_template
# INDEX_PATTERN="logstash-*"
# SHARD_COUNT=8
# REPLICA_COUNT=1
# ES_URL=http://localhost:9200
# while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
# curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
sysctlInitContainer:
enabled: true
keystore: []
# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""
</code></pre>
| Klaud Yu | <h2>Issue</h2>
<p>There is an issue with the elasticsearch <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="noreferrer">readiness probe</a> when running on a single-replica cluster.</p>
<pre><code>Warning Unhealthy 91s (x14 over 3m42s) kubelet Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
</code></pre>
<h2>Solution</h2>
<p>As mentioned <a href="https://github.com/elastic/helm-charts/issues/783#issuecomment-701037663" rel="noreferrer">here</a> by @adinhodovic</p>
<blockquote>
<p>If you're running a single-replica cluster, add the following helm value:</p>
</blockquote>
<pre><code>clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"
</code></pre>
<blockquote>
<p>Your status will never go green with a single replica cluster.</p>
<p>The following values should work:</p>
</blockquote>
<pre><code>replicas: 1
minimumMasterNodes: 1
clusterHealthCheckParams: 'wait_for_status=yellow&timeout=1s'
</code></pre>
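<p>To roll that out to the existing release (release name taken from your install command), a plain helm upgrade should be enough, for example:</p>
<pre><code>helm upgrade elasticsearch elastic/elasticsearch -f values.yaml
</code></pre>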
| Jakub |
<p>I have created a K8s cluster on GCP, and I deployed an application.</p>
<p>Then I scaled it:</p>
<blockquote>
<p>kubectl scale deployment hello-world-rest-api --replicas=3</p>
</blockquote>
<p>Now when I run 'kubectl get pods', I see three pods. Their NODE value is the same. I understand it means they are all deployed on the same machine. But I observe that the IP value for all three is different.</p>
<p>If the NODE is the same, then why are the IPs different?</p>
| Mandroid | <p>There are several networks in a k8s cluster. The pods are on the pod network, so every pod deployed on the nodes of a k8s cluster can see each other as though they are independent nodes on a network. The pod address space is different from the node address space. So, each pod running on a node gets a unique address from the pod network, which is also different from the node network. The k8s components running on each node perform the address translation.</p>
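<p>You can see this yourself by comparing the pod IPs with the node IPs, for example:</p>
<pre><code>kubectl get pods -o wide    # shows each pod's IP and the node it runs on
kubectl get nodes -o wide   # shows the node (host) IPs, which come from a different address range
</code></pre>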
| Burak Serdar |
<p>I am new to Kubernetes, and trying to set up Rundeck (3.3.5) on it.
The image has been installed correctly.
However, when I added a Postgres database on AWS RDS, it's unable to connect to it.
I am able to connect to the database by using the same URL and port number with DBeaver though.
Below is the detailed information of the Error and the yaml.
Any help in this regard is highly appreciated.</p>
<p>Error:</p>
<pre><code>[2020-10-29T19:02:47,013] ERROR pool.ConnectionPool - Unable to create initial connections of pool.
java.sql.SQLException: Driver:org.postgresql.Driver@18918d70 returned null for URL:jdbc:postgres://xxx.amazonaws.com:5432/RUNDECK
at org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver(PooledConnection.java:338) ~[tomcat-jdbc-9.0.31.jar!/:?]
at org.apache.tomcat.jdbc.pool.PooledConnection.connect(PooledConnection.java:212) ~[tomcat-jdbc-9.0.31.jar!/:?]
at org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection(ConnectionPool.java:744) ~[tomcat-jdbc-9.0.31.jar!/:?]
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:676) ~[tomcat-jdbc-9.0.31.jar!/:?]
at org.apache.tomcat.jdbc.pool.ConnectionPool.init(ConnectionPool.java:483) ~[tomcat-jdbc-9.0.31.jar!/:?]
at org.apache.tomcat.jdbc.pool.ConnectionPool.<init>(ConnectionPool.java:154) ~[tomcat-jdbc-9.0.31.jar!/:?]
at org.apache.tomcat.jdbc.pool.DataSourceProxy.pCreatePool(DataSourceProxy.java:118) ~[tomcat-jdbc-9.0.31.jar!/:?]
at org.apache.tomcat.jdbc.pool.DataSourceProxy.createPool(DataSourceProxy.java:107) ~[tomcat-jdbc-9.0.31.jar!/:?]
at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:131) ~[tomcat-jdbc-9.0.31.jar!/:?]
at org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy$LazyConnectionInvocationHandler.getTargetConnection(LazyConnectionDataSourceProxy.java:412) ~[spring-jdbc-5.1.18.RELEASE.jar!/:5.1.18.RELEASE]
</code></pre>
<p>Yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rundeck
name: test-rundeck
namespace: testops
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: rundeck
template:
metadata:
labels:
app: rundeck
spec:
containers:
- env:
- name: JVM_MAX_RAM_PERCENTAGE
value: "75"
- name: RUNDECK_GRAILS_URL
value: http://xxx.us-east-1.elb.amazonaws.com:4440/rundeck
- name: RUNDECK_SERVER_CONTEXTPATH
value: /rundeck
- name: RUNDECK_DATABASE_URL
value: jdbc:postgres://xxx.us-east-1.rds.amazonaws.com:5432/RUNDECK
- name: RUNDECK_DATABASE_DRIVER
value: org.postgresql.Driver
- name: RUNDECK_DATABASE_USERNAME
value: postgres
- name: RUNDECK_DATABASE_PASSWORD
value: postgres123
image: rundeck/rundeck:3.3.5-20201019
imagePullPolicy: Always
name: rundeck
resources:
limits:
memory: 1Gi
volumeMounts:
- mountPath: "/opt/test/mnt"
name: testops-pv
volumes:
- name: testops-pv
persistentVolumeClaim:
claimName: testops-pvc
restartPolicy: Always
status: {}
</code></pre>
| android.1215 | <h2>Issue</h2>
<p>The <code>jdbc:postgres</code> url is incorrect.</p>
<h2>Solution</h2>
<p>As mentioned <a href="https://stackoverflow.com/a/42721150/11977760">here</a> and mentioned by @MegaDrive68k in the comments you should use <code>jdbc:postgresql</code> instead of <code>jdbc:postgres</code>.</p>
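<p>In the deployment from the question that would mean changing the environment variable to something like this (hostname kept as your placeholder):</p>
<pre><code>- name: RUNDECK_DATABASE_URL
  value: jdbc:postgresql://xxx.us-east-1.rds.amazonaws.com:5432/RUNDECK
</code></pre>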
<p>There is rundeck <a href="https://docs.rundeck.com/docs/administration/configuration/database/postgres.html#configure-rundeck" rel="nofollow noreferrer">documentation</a> about that.</p>
| Jakub |
<p>I have been trying to authenticate OIDC using DEX for LDAP. I have succeeded in authenticating but the problem is, LDAP search is not returning the groups. Following are my DEX configs and LDAP Data. Please help me out</p>
<p>Screenshot: Login successful, groups are empty</p>
<p><a href="https://i.stack.imgur.com/YbUwj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YbUwj.png" alt="enter image description here"></a></p>
<p><strong>My Dex Config</strong></p>
<pre><code># User search maps a username and password entered by a user to a LDAP entry.
userSearch:
# BaseDN to start the search from. It will translate to the query
# "(&(objectClass=person)(uid=<username>))".
baseDN: ou=People,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com
# Optional filter to apply when searching the directory.
#filter: "(objectClass=posixAccount)"
# username attribute used for comparing user entries. This will be translated
# and combine with the other filter as "(<attr>=<username>)".
username: mail
# The following three fields are direct mappings of attributes on the user entry.
# String representation of the user.
idAttr: uid
# Required. Attribute to map to Email.
emailAttr: mail
# Maps to display name of users. No default value.
nameAttr: uid
# Group search queries for groups given a user entry.
groupSearch:
# BaseDN to start the search from. It will translate to the query
# "(&(objectClass=group)(member=<user uid>))".
baseDN: dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com
# Optional filter to apply when searching the directory.
#filter: "(objectClass=posixGroup)"
# Following two fields are used to match a user to a group. It adds an additional
# requirement to the filter that an attribute in the group must match the user's
# attribute value.
userAttr: uid
groupAttr: memberUid
# Represents group name.
nameAttr: cn
</code></pre>
<p><strong>My LDAP Data</strong></p>
<blockquote>
<p>dn:
ou=People,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com
ou: People objectClass: organizationalUnit</p>
<p>dn:
uid=johndoe,ou=People,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com
gecos: John Doe uid: johndoe loginShell: / bin / bash mail:
[email protected] homeDirectory: / home / jdoe cn: John Doe sn: Doe
uidNumber: 10002 objectClass: posixAccount objectClass: inetOrgPerson
objectClass: top userPassword: bar gidNumber: 10002</p>
<p>dn:
uid=janedoe,ou=People,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com
gecos: Jane Doe uid: janedoe loginShell: / bin / bash mail:
[email protected] homeDirectory: / home / jdoe cn: Jane Doe sn: Doe
uidNumber: 10001 objectClass: posixAccount objectClass: inetOrgPerson
objectClass: top userPassword: foo gidNumber: 10001</p>
<p>dn:
ou=Groups,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com
ou: Groups objectClass: organizationalUnit</p>
<p>dn:
cn=admins,ou=Groups,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com
cn: admins objectClass: posixGroup objectClass: top gidNumber: 20001
memberUid: janedoe memberUid: johndoe</p>
<p>dn:
cn=developers,ou=Groups,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com
cn: developers objectClass: posixGroup objectClass: top gidNumber:
20002 memberUid: janedoe</p>
</blockquote>
| Waqar Ahmed | <p>Sorry for the late reply but I didn't know the answer until now :)</p>
<p>I had the same problem, in my setup I used <code>dex (quay.io/dexidp/dex:v2.16.0)</code> to use MS AD. I used <strong>kubernetes 1.13</strong> in my tests.</p>
<p>To generate kubeconfig i used <code>heptiolabs/gangway (gcr.io/heptio-images/gangway:v3.0.0)</code> and for handle dashboard login i used <code>pusher/oauth2_proxy (quay.io/pusher/oauth2_proxy)</code>.</p>
<p>I spent a lot of time trying different ldap setups in dex but didn't get the AD groups to show up in the dex log or get them to work in kubernetes, and every example I read was using only users.</p>
<p>The problem and solution for me was not in the dex config; dex will request groups from ldap if you tell it to do so.
It's all in the clients. OIDC has a "concept" of scopes, and I guess that most (all?) oidc clients implement it; at least both gangway and oauth2-proxy do.
So the solution for me was to configure the clients (gangway and oauth2-proxy in my case) so that they also ask dex for groups.</p>
<p>In gangway I used the following config (including the comments)</p>
<pre><code># Used to specify the scope of the requested Oauth authorization.
# scopes: ["openid", "profile", "email", "offline_access"]
scopes: ["openid", "profile", "email", "offline_access", "groups"]
</code></pre>
<p>For oauth2-proxy I added this to the args deployment</p>
<pre><code>- args:
- --scope=openid profile email groups
</code></pre>
<p>And then I could use groups instead of users in my rolebindings (see the sketch below); don't forget to also configure the api-server to use dex for its oidc.</p>
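<p>As an illustration, a binding that grants a whole group admin rights could look like this (the group name is just an example, and it may need whatever prefix you configured with the api-server's <code>--oidc-groups-prefix</code> flag, if any):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ad-admins
subjects:
- kind: Group
  name: admins                        # group name as dex/your IdP returns it
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
</code></pre>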
<p>Hope that helps</p>
<p>-Robert</p>
| robert |
<p>I am currently struggeling with the following tasks. I don't want to include my TLS certificates in my templates because</p>
<ol>
<li><p>I don't want to check in credentials in code management while still checking in the templates</p></li>
<li><p>I am using multiple Applications with the same Certificate and I don't want to update repos just because I might distribute another certificate</p></li>
</ol>
<p>Now my approach is this. I am using Jenkins for my build pipelines. I have a Repo that is used just for certificate management. It will run when updated and distribute the certificate and private key to Openshift Secrets on various clusters.</p>
<p>When running the Template of an application I am retrieving the Information from the secret and setting the values in the route. And here's where things get tricky. I can only use single line values because</p>
<ol>
<li>Openshift templates will not accept multiline parameters with oc process</li>
<li>Secrets will not store multiline values</li>
</ol>
<p>So the solution seemed to be easy. Just store the Certificate with \n and set it in the Route like this. However Openshift will not accept single line certificates resulting in the error</p>
<blockquote>
<p>spec.tls.key: Invalid value: "redacted key data": tls: found a certificate rather than a key in the PEM for the private key</p>
</blockquote>
<p>Now the solution could be to insert the Certificate as multiple lines directly in the template file before processing and applying it to the cluster but that seems a little bit hacky to me. So my Question is</p>
<p>How can you centrally manage TLS Certificates for your applications and set them correctly in the Templates you're applying?</p>
| relief.melone | <p>Secrets can be multiple lines. You can create a secret using a certificate file, and mount that secret as a file into your containers. See here for how to create secrets from files:</p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a></p>
<p>Use the openshift command line tool instead of kubectl.</p>
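<p>For example, to create a TLS secret from local certificate files (the names are placeholders):</p>
<pre><code>oc create secret tls my-app-tls --cert=tls.crt --key=tls.key
</code></pre>
<p>The secret can then be mounted into your pods as a volume, or referenced wherever a secret name is expected, without ever putting the key material into the templates themselves.</p>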
<p>For certificates, there is something called cert-manager:</p>
<p><a href="https://docs.cert-manager.io/en/latest/" rel="nofollow noreferrer">https://docs.cert-manager.io/en/latest/</a></p>
<p>This will generate certs as needed. You might want to take a look.</p>
| Burak Serdar |
<p>I have created a k8s cluster and network, using 1 master and 2 nodes, the deployment happens correctly having one pod on each node</p>
<p>From my observation I would like to know: when we deploy the nginx pods (using a deployment with, say, replicas = 2), they deploy as containers on node1 and node2, but the nginx process actually seems to run on the server itself, <strong>not</strong> inside the container, as I see the process running on node1 and node2 currently.</p>
<pre><code>
[root@node1 ~]# ps -ef|grep nginx
root 13512 13494 0 10:57 ? 00:00:00 nginx: master process nginx -g daemon off;
101 13531 13512 0 10:57 ? 00:00:00 nginx: worker process
root 17310 16644 0 11:14 pts/0 00:00:00 grep --color=auto nginx
[root@node1 ~]#
</code></pre>
<p>Is this the right setup on my machines? The nginx process that was deployed to node1 and node2 from the master machine is running on the node servers, even though it was created as part of the pod deployment. Or should it be running inside the container only?</p>
| Jagdish0886 | <p>You are probably looking at the nginx process running in the container. Look at the parent process of that nginx, it should be the container-shim, or something like that. When you run a process in a container, it runs as one of the processes of the machine, as a child of the container process, with limited access to the parent machine resources.</p>
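<p>You can check this yourself on the node, using the PID from your output as an example (a quick sketch):</p>
<pre><code># print the parent PID of the nginx master process...
ps -o ppid= -p 13512
# ...and look at what that parent actually is (typically containerd-shim / docker-containerd-shim)
ps -fp $(ps -o ppid= -p 13512)
</code></pre>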
| Burak Serdar |
<p>I am using istioctl to install istio in an EKS cluster. However, for the moment I will be using an nginx ingress for externally facing services. How can I just deploy the istio service internally, or at least avoid the automatically created ELB?</p>
| shaunc | <p>You can do it by editing <a href="https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/" rel="nofollow noreferrer">istio-ingressgateway</a>.</p>
<p>Change <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">service type</a> from </p>
<p><strong>LoadBalancer</strong> -> Exposes the Service externally using a cloud provider’s load balancer</p>
<p>to </p>
<p><strong>ClusterIP</strong> -> Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. </p>
<p>Let's edit ingressgateway </p>
<pre><code>kubectl edit svc istio-ingressgateway -n istio-system
</code></pre>
<p>Then change the type from LoadBalancer to ClusterIP and comment out (<strong>#</strong>) or <strong>delete</strong> every nodePort entry, since you won't use them anymore. They have to be commented out or deleted, otherwise the edit fails to apply and nothing happens.</p>
<p><strong>EDIT</strong></p>
<blockquote>
<p>I can do this at install with istioctl using a values.yaml file?</p>
</blockquote>
<p>Yes, it's possible. </p>
<p>This is a value You need to change:</p>
<blockquote>
<p>values.gateways.istio-ingressgateway.type</p>
</blockquote>
<p>example</p>
<p>Creating manifest to apply istio demo profile with ClusterIP</p>
<pre><code>istioctl manifest generate --set profile=demo --set values.gateways.istio-ingressgateway.type="ClusterIP" > $HOME/generated-manifest.yaml
</code></pre>
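<p>Either way, you can verify the result afterwards; the TYPE column should now show ClusterIP and no external load balancer should be provisioned:</p>
<pre><code>kubectl get svc istio-ingressgateway -n istio-system
</code></pre>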
| Jakub |
<p>When installing consul using Helm, it expects the cluster to dynamically provision the PersistentVolume requested by the consul-helm chart. That is the default behavior.</p>
<p>I have the PV and PVC created manually and need this PV to be used by the consul-helm chart. Is it possible to install consul using helm so that it uses a manually created PV in kubernetes?</p>
| intechops6 | <p>As @coderanger said </p>
<blockquote>
<p>For this to be directly supported the chart author would have to provide helm variables you could set. Check the docs.</p>
</blockquote>
<p>As shown in the <a href="https://github.com/helm/charts/tree/master/stable/consul#configuration" rel="nofollow noreferrer">github</a> docs, there are no variables to change that.</p>
<hr>
<p>If you have to change it, you would have to work with <a href="https://github.com/helm/charts/blob/master/stable/consul/templates/consul-statefulset.yaml" rel="nofollow noreferrer">consul-statefulset.yaml</a>; this chart dynamically provisions a volume for each StatefulSet pod it creates.</p>
<p><a href="https://github.com/helm/charts/blob/master/stable/consul/templates/consul-statefulset.yaml#L100-L109" rel="nofollow noreferrer">volumeMounts</a></p>
<p><a href="https://github.com/helm/charts/blob/master/stable/consul/templates/consul-statefulset.yaml#L263-L278" rel="nofollow noreferrer">volumeClaimTemplates</a></p>
<p>Use helm fetch to download consul files to your local directory</p>
<p><code>helm fetch stable/consul --untar</code> </p>
<p>Then I found <a href="https://stackoverflow.com/questions/49729461/add-persistent-volume-in-kubernetes-statefulset">an answer</a> with a good explanation and example of using one PV & PVC for all replicas of a StatefulSet, so I think it could actually work in the consul chart.</p>
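<p>If you go down that road, keep in mind that a StatefulSet creates one PVC per pod, named <code>&lt;volumeClaimTemplate-name&gt;-&lt;pod-name&gt;</code>. A manually created PV can be pointed at a specific one of those claims via <code>claimRef</code>; a rough sketch (all names are placeholders, check the actual template and release names in consul-statefulset.yaml):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: consul-data-0
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: <claim-template-name>-<release-name>-consul-0   # the PVC the StatefulSet will create for pod 0
  hostPath:
    path: /mnt/consul-data
</code></pre>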
| Jakub |
<p>I'm trying to create an AWS EKS private cluster using Terraform with the private subnet in VPC in AWS region <code>us-west-2</code> region, with default terraform eks module configurations.</p>
<p>When I set the <strong><code>endpoint_private_access=true</code></strong> and <strong><code>endpoint_public_access=false</code></strong> ran the <code>terraform apply</code> command for provisioning the cluster and it fails and throws the error which is in the below error section.</p>
<p>I have followed the steps by deleting the .terraform folder and its contents and re-initiated the modules and applied the terraform plan and terraform apply, but still, it throws the same error.</p>
<p>But when I set both the public and private API endpoints to <code>true</code>, everything works well without any issues.
As recommended in the Terraform EKS module GitHub issues, I applied <code>terraform apply</code> a second time, but I'm still seeing the same <strong><code>data "http" "wait_for_cluster"</code></strong> error.</p>
<p>I have waited 35 min for <strong><code>coredns</code>, <code>aws-node</code> and <code>kube-proxy</code></strong> to be provisioned, but they are not provisioned.</p>
<p>I'm using Terraform:v.1.02 with Terraform eks module 17.1.0 from terraform registry</p>
<p><strong>Error</strong></p>
<p><code>with module.app_eks.data.http.wait_for_cluster[0], on .terraform\modules\private_eks\data.tf line 89, in data "http" "wait_for_cluster": 89: data "http" "wait_for_cluster" {</code></p>
<p>I have added variables as
<code>wait_for_cluster_timeout = var.wait_for_cluster_timeout</code> and set the timeout to 300 and 3000 also, but I'm seeing the same error</p>
<p>If someone can give any inputs, solutions or recommendations for this, it will help me and others who are facing this issue.</p>
| KNCK | <p>When you are setting up your EKS cluster with <code>endpoint_private_access = true</code>, make sure the cluster endpoint is still reachable from wherever Terraform runs; for example, also allow your own IP with <code>public_access_cidrs = ["<your-ip>/32"]</code> (or <code>["0.0.0.0/0"]</code> just for testing). The <code>wait_for_cluster</code> check fails when the module cannot reach the API endpoint.</p>
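<p>If you are going through the terraform-aws-eks module rather than the raw <code>aws_eks_cluster</code> resource, the equivalent inputs should be the <code>cluster_endpoint_*</code> variables; a minimal sketch (values and the version pin are examples, double-check the names against the module docs for your version):</p>
<pre><code>module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.1.0"
  # ... cluster name, VPC, subnets, node groups, etc. ...

  cluster_endpoint_private_access      = true
  cluster_endpoint_public_access       = true
  cluster_endpoint_public_access_cidrs = ["<your-ip>/32"]   # or keep public access off and run terraform from inside the VPC
}
</code></pre>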
<p>You can also refer to my terraform code which creates the whole EKS Cluster:
<a href="https://github.com/lovedeepsh/aws_eks_terraform/tree/main/eks" rel="nofollow noreferrer">EKS TERRAFORM</a></p>
| Lovedeep Sharma |
<p>I have an environment variable file that is declared like following inside a <code>deployment.yaml</code> file:</p>
<pre><code> env:
- name: NATS_CLIENT_ID
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
<p>I am just wondering to know if I want to use an <code>.env</code> file instead of the <code>deployment.yaml</code> file, how can I declare the above port within the <code>.env</code>?</p>
| best_of_man | <p>Maybe you can create a configmap that contains your <code>.env</code> and use your configmap to inject the environment variables.</p>
<p>An example here: <a href="https://humanitec.com/blog/handling-environment-variables-with-kubernetes" rel="nofollow noreferrer">https://humanitec.com/blog/handling-environment-variables-with-kubernetes</a></p>
| pida |
<p>My team is using istio (version 1.2.8) on our k8s (v 1.15.6) landscape,
and we want to expose Prometheus with an <strong>external IP</strong>. When applying the <code>vs</code>
we were able to access Prometheus in the browser, but without any <code>css/js</code> files; we got <strong>404</strong> errors for the <code>.js</code> files (see the envoy logs below), as the <code>/static</code> files are not served.</p>
<p><strong>This is the UI we got</strong> (no css and js files are served) </p>
<p><img src="https://user-images.githubusercontent.com/34491236/70865211-8877d700-1f63-11ea-92fd-d388103d6d9f.png" alt="image"></p>
<p>instead of the following Prometheus default UI (when using loadbalancer or port forwarding...) </p>
<p><img src="https://user-images.githubusercontent.com/34491236/70865296-87937500-1f64-11ea-98e4-2f80586c51d5.png" alt="image"></p>
<p>This is the minimal steps to see the issue:</p>
<p>Install <a href="https://github.com/helm/charts/tree/master/stable/prometheus" rel="nofollow noreferrer">Prometheus</a> via helm <strong>as-is</strong> (latest; <strong>we didn't change any default</strong> config of Prometheus from the chart)</p>
<p>Take the name of the <code>service</code> (with <code>kubectl get svc</code> in the <code>ns</code> where the service is deployed), put it in the <code>destination->host</code> section of the VS (update the <code>gw</code>, host, etc.) and apply the <code>VS</code> file</p>
<p><strong>vs.yaml</strong></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: prom-virtualservice
namespace: mon
spec:
gateways:
- de-system-gateway.ws-system.svc.cluster.local
hosts:
- lzs.dev10.int.str.cloud.rpn
http:
- match:
- uri:
prefix: /prometheus
rewrite:
uri: /graph
route:
- destination:
host: prom-prometheus-server
port:
number: 80
</code></pre>
<p>BTW,</p>
<p>If I just change the type of Prometheus to use <a href="https://github.com/helm/charts/blob/master/stable/prometheus/values.yaml#L851" rel="nofollow noreferrer"><code>LoadBalancer</code></a> it works; I was able to get an <code>external-ip</code> and see the UI as expected.</p>
<p>Another piece of info: if I remove the following</p>
<pre><code> rewrite:
uri: /graph
</code></pre>
<p>I got a <code>404 error</code> in the browser without any data from prom</p>
<p>In the browser, without the js/css files, the network tab looks like the following:</p>
<p><img src="https://user-images.githubusercontent.com/34491236/70866298-8b2cf900-1f70-11ea-9962-bfd6ba8c5f78.png" alt="image"></p>
<p>I even try the following which doesn't work either</p>
<pre><code> - uri:
prefix: /prometheus
rewrite:
uri: /static
</code></pre>
<p>or </p>
<pre><code> - uri:
prefix: /prometheus/static
</code></pre>
<p>our <code>gateway</code> spec looks like following</p>
<pre><code>...
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- lzs.dev10.int.str.cloud.rpn
port:
name: https-manager
number: 443
protocol: HTTPS
tls:
mode: SIMPLE
privateKey: /etc/istio/de-tls/tls.key
serverCertificate: /etc/istio/de-tls/tls.crt
</code></pre>
<p>Using port forwarding (local) or a <code>LoadBalancer</code> for Prometheus, it works.
How can we make it work with istio?</p>
<p><strong>update</strong></p>
<p>I've also tried adding the static prefix and got the same results:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: prom-virtualservice
namespace: mon
spec:
gateways:
- de-system-gateway.ws-system.svc.cluster.local
hosts:
- lzs.dev10.int.str.cloud.rpn
http:
- match:
- uri:
prefix: /prometheus
- uri:
prefix: /static
- uri:
regex: '^.*\.(ico|png|jpg)$'
rewrite:
uri: /graph
route:
- destination:
host: prom-prometheus-server
port:
number: 80
</code></pre>
<p><strong>update 2</strong></p>
<p>After using the yaml provided in the answer, I now see the UI with the css etc., however it's not functional; I got the error: <code>Error loading available metrics!</code>
In the browser's network tab I can see that the following requests are not working</p>
<p>This is the logs for envoy for the error</p>
<pre><code>[2019-12-17T09:04:18.670Z] "GET /api/v1/query?query=time()&_=1576573457737 HTTP/2" 404 NR "-" "-" 0 0 0 - "100.96.0.1" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" "57592874-27f5-4b57-9dea-1bcf13365f60" "lzs.dev10.int.str.cloud.rpn" "-" - - 100.96.2.13:443 100.96.0.1:24664 lzs.dev10.int.str.cloud.rpn
[2019-12-17T09:04:18.670Z] "GET /api/v1/label/__name__/values?_=1576573457738 HTTP/2" 404 NR "-" "-" 0 0 0 - "100.96.0.1" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" "edad441d-58fe-4214-aae0-a0aec9012030" "lzs.dev10.int.str.cloud.rpn" "-" - - 100.96.2.13:443 100.96.0.1:24664 lzs.dev10.int.str.cloud.rpn
</code></pre>
<p><a href="https://i.stack.imgur.com/VN5c4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VN5c4.png" alt="enter image description here"></a></p>
<p>(we are not talking about Prometheus which comes with istio, we need to install diff Prometheus on diff namespace...) </p>
<p><strong>This is the logs from envoy</strong> </p>
<p>2019-12-15T13:57:16.977357Z info Envoy proxy is ready</p>
<p>[2019-12-15 14:29:51.226][14][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,
[2019-12-15 15:00:50.980][14][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,
<strong>[2019-12-15T15:11:02.572Z] "GET /prometheus HTTP/2" 200 - "-" "-" 0 5785 2 1 "100.96.3.1"</strong> "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:70.0) Gecko/20100101 Firefox/70.0" "531e2f39-0c9f-44d3-b11b-e336126ea836" "lzs.dev10.int.str.cloud.rpn" "100.96.0.16:9090" outbound|80||prom-prometheus-server.mon.svc.cluster.local - 100.96.2.10:443 100.96.3.1:32972 lzs.dev10.int.str.cloud.rpn
<strong>[2019-12-15T15:11:02.705Z] "GET /static/vendor/js/jquery-3.3.1.min.js?v=6f92ce56053866194ae5937012c1bec40f1dd1d9 HTTP/2" 404 NR "-" "-" 0 0 0 -</strong> "100.96.3.1" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:70.0) Gecko/20100101 Firefox/70.0" "40119d8d-2103-4453-b589-e1561d44d363" "lzs.dev10.int.str.cloud.rpn" "-" - - 100.96.2.10:443 100.96.3.1:32972 lzs.dev10.int.str.cloud.rpn
<strong>[2019-12-15T15:11:02.705Z] "GET /static/vendor/js/popper.min.js?v=6f92ce56053866194ae5937012c1bec40f1dd1d9 HTTP/2" 404 NR "-" "-" 0 0 0 -</strong> "100.96.3.1" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:70.0) Gecko/20100101 Firefox/70.0" "dbdf2a2a-cfd3-422a-82f4-e6e466407671" "lzs.dev10.int.str.cloud.rpn" "-" - - 100.96.2.10:443 100.96.3.1:32972 lzs.dev10.int.str.cloud.rpn
[2019-12-15T15:11:02.706Z] "GET /static/vendor/bootstrap-4.3.1/js/bootstrap.min.js?v=6f92ce56053866194ae5937012c1bec40f1dd1d9 HTTP/2" 404 NR "-" "-" 0 0 0 - "100.96.3.1" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:70.0) Gecko/20100101 Firefox/70.0" "efd95571-03e9-492d-98ff-b4910d1646d6" "lzs.dev10.int.str.cloud.rpn" "-" - - 100.96.2.10:443 100.96.3.1:32972 lzs.dev10.int.str.cloud.rpn
[2019-12-15T15:11:02.706Z] "GET /static/vendor/bootstrap-4.3.1/css/bootstrap.min.css?v=6f92ce56053866194ae5937012c1bec40f1dd1d9 HTTP/2" 404 NR "-" "-" 0 0 0 - "100.96.3.1" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:70.0) Gecko/20100101 Firefox/70.0" "8ae04546-06cb-4ba0-8430-f04388811460" "lzs.dev10.int.str.cloud.rpn" "-" - - 100.96.2.10:443 100.96.3.1:32972 lzs.dev10.int.str.cloud.rpn
[2019-12-15T15:11:02.706Z] "GET /static/css/prometheus.css?v=6f92ce56053866194ae5937012c1bec40f1dd1d9 HTTP/2" 404 NR "-" "-" 0 0 0 - "100.96.3.1" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:70.0) Gecko/20100101 Firefox/70.0" "70c88f5f-b582-4dd9-a2e2-47605c812344" "lzs.dev10.int.str.cloud.rpn" "-" - - 100.96.2.10:443 100.96.3.1:32972 lzs.dev10.int.str.cloud.rpn</p>
| NSS | <p>For a start, I found some information about <a href="https://archive.istio.io/v1.2/docs/setup/kubernetes/platform-setup/" rel="nofollow noreferrer">istio 1.2</a> that says</p>
<blockquote>
<p>Istio 1.2 has been tested with these Kubernetes releases: 1.12, 1.13, 1.14.</p>
</blockquote>
<p>So if you use kubernetes 1.15 I would recommend upgrading your istio to the latest version.</p>
<hr>
<p>About prometheus: I see you want to use the prometheus helm chart, but why don't you use the built-in prometheus? As stated in the <a href="https://istio.io/docs/setup/additional-setup/config-profiles/" rel="nofollow noreferrer">istio documentation</a>, prometheus is enabled in the default, demo and sds profiles.</p>
<hr>
<p>Based on istio <a href="https://istio.io/docs/tasks/observability/gateways/" rel="nofollow noreferrer">remotely accessing telemetry addons</a>, you can use either the secure (<a href="https://istio.io/docs/tasks/observability/gateways/#option-1-secure-access-https" rel="nofollow noreferrer">https</a>) or insecure (<a href="https://istio.io/docs/tasks/observability/gateways/#option-2-insecure-access-http" rel="nofollow noreferrer">http</a>) option to expose prometheus.</p>
<hr>
<p>Personally i did an insecure reproduction by following above tutorial and everything is working. </p>
<p><strong>Kubernetes Version</strong>: 1.13.11-gke.14</p>
<p><strong>Istio Version</strong>: 1.4.2</p>
<p>Steps to follow</p>
<p>1.Install </p>
<ul>
<li><a href="https://istio.io/docs/setup/getting-started/#download" rel="nofollow noreferrer">Istioctl</a></li>
<li><a href="https://istio.io/docs/setup/install/istioctl/#install-istio-using-the-default-profile" rel="nofollow noreferrer">Istio default</a></li>
</ul>
<p>2.Expose prometheus</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: prometheus-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 15030
name: http-prom
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: prometheus-vs
namespace: istio-system
spec:
hosts:
- "*"
gateways:
- prometheus-gateway
http:
- match:
- port: 15030
route:
- destination:
host: prometheus
port:
number: 9090
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: prometheus
namespace: istio-system
spec:
host: prometheus
trafficPolicy:
tls:
mode: DISABLE
---
EOF
</code></pre>
<p>3.Result</p>
<p><a href="https://i.stack.imgur.com/44e9K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/44e9K.png" alt="enter image description here"></a></p>
<hr>
<p><strong>EDIT</strong></p>
<p>Could you try using this yaml?</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: prom-virtualservice
namespace: mon
spec:
gateways:
- de-system-gateway.ws-system.svc.cluster.local
hosts:
- lzs.dev10.int.str.cloud.rpn
http:
- match:
- uri:
prefix: /prometheus
rewrite:
uri: /graph
route:
- destination:
host: prom-prometheus-server
port:
number: 80
- match:
- uri:
prefix: /static
- uri:
regex: '^.*\.(ico|png|jpg)$'
route:
- destination:
host: prom-prometheus-server
port:
number: 80
</code></pre>
<p><strong>EDIT2</strong> Please add the /api prefix to your second match, like below</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: prom-virtualservice
namespace: mon
spec:
gateways:
- de-system-gateway.ws-system.svc.cluster.local
hosts:
- lzs.dev10.int.str.cloud.rpn
http:
- match:
- uri:
prefix: /prometheus
rewrite:
uri: /graph
route:
- destination:
host: prom-prometheus-server
port:
number: 80
- match:
- uri:
prefix: /static
- uri:
regex: '^.*\.(ico|png|jpg)$'
- uri:
prefix: /api
route:
- destination:
host: prom-prometheus-server
port:
number: 80
</code></pre>
<p><strong>EDIT3</strong> </p>
<blockquote>
<p>In your answer you separate it into two matches, why?</p>
</blockquote>
<p>This <a href="https://istio.io/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">link</a> is the answer here, I think. You rewrite /prometheus to /graph since it's the main Prometheus url, and that's okay. But you can't rewrite /static and /api to /graph, because you need those paths to fetch the static files and metrics; if a request doesn't match anything, a 404 error appears.</p>
| Jakub |
<p>Are there any metrics I can use to know if pods are in the running state or errored-out, crashloopbackoff state etc in GKE Google Cloud?</p>
<p>Basically I want a metric I can export to Stackdriver that can tell whether my jobs are running healthy pods, or whether the pods have errors and no pods are running (Evicted, CrashLoopBackOff, etc.).</p>
| Tabber | <p>According to the official documentation Cloud Monitoring supports the following metric types from Google Kubernetes Engine:</p>
<p><a href="https://cloud.google.com/monitoring/api/metrics_kubernetes" rel="nofollow noreferrer">Kubernetes metrics</a></p>
<p>I believe for your case you can use:</p>
<p><a href="https://i.stack.imgur.com/iugte.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iugte.png" alt="enter image description here" /></a></p>
| marian.vladoi |
<p>I am using Cert-manager with Let's Encrypt via the yaml code below. What am I doing wrong? When I use "kubectl get issuer" it returns: "No resources found in default namespace."</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: [email protected]
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- dns01:
digitalocean:
tokenSecretRef:
name: digitalocean-dns
key: 5XXXX31d821ba586302ff5d38647b701de750823711ff55b2a776c60d8eXXXXX
</code></pre>
<p><a href="https://i.stack.imgur.com/8ph5w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ph5w.png" alt="enter image description here" /></a></p>
<p>But I can check certificates I realized that there are some certifications:</p>
<p><a href="https://i.stack.imgur.com/FMP0U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FMP0U.png" alt="enter image description here" /></a></p>
| loki | <p>I think you are using the incorrect resource name; instead of <code>kubectl get issuer</code> you should run <code>kubectl get ClusterIssuer</code>, since your manifest creates a ClusterIssuer, not a namespaced Issuer.</p>
<p>Try it</p>
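<p>To check whether the issuer was actually created and has registered with the ACME server, you can also inspect it (the name is taken from your manifest):</p>
<pre><code>kubectl get clusterissuer
kubectl describe clusterissuer letsencrypt-prod
</code></pre>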
| Lovedeep Sharma |
<p>How can the OR expression be used with selectors and labels?</p>
<pre><code> selector:
app: myapp
tier: frontend
</code></pre>
<p>The above matches pods where labels <code>app==myapp</code> <strong>AND</strong> <code>tier=frontend</code>. </p>
<p>But can the OR expression be used?</p>
<p><code>app==myapp</code> <strong>OR</strong> <code>tier=frontend</code>?</p>
| Tevin J | <p>Now you can do that :</p>
<pre><code>kubectl get pods -l 'environment in (production, qa)'
</code></pre>
<p><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#list-and-watch-filtering" rel="noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#list-and-watch-filtering</a></p>
| Pasc23 |
<p>I recently noticed a big accumulation of pods with status 'Shutdown'. We have been using Kubernetes since October, 2020.</p>
<p>Production and staging are running on the same nodes, except that staging uses preemptible nodes to cut the cost. The containers are also stable in staging (failures occur rarely, as they are caught in testing beforehand).</p>
<p>Service provider Google Cloud Kubernetes.</p>
<p>I familiarized myself with the docs and tried searching, however I don't recognize this status and Google doesn't help with it either. There are no errors in the logs.</p>
<p><a href="https://i.stack.imgur.com/pC7LW.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pC7LW.png" alt="example of bunch of shutdown pods" /></a>
<a href="https://i.stack.imgur.com/gEh6n.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gEh6n.png" alt="pod description only says failed" /></a></p>
<p>I have no problem pods being stopped. Ideally I'd like K8s to automatically delete these shutdown pods. If I run <code>kubectl delete po redis-7b86cdccf9-zl6k9</code>, it goes away in a blink.</p>
<p><code>kubectl get pods | grep Shutdown | awk '{print $1}' | xargs kubectl delete pod</code> is manual temporary workaround.</p>
<p>PS. <code>k</code> is an alias to <code>kubectl</code> in my environment.</p>
<p>Final example: it happens across all namespaces // different containers.
<a href="https://i.stack.imgur.com/PXHY8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PXHY8.png" alt="enter image description here" /></a></p>
<p>I stumbled upon few related issues explaining the status
<a href="https://github.com/kubernetes/website/pull/28235" rel="noreferrer">https://github.com/kubernetes/website/pull/28235</a>
<a href="https://github.com/kubernetes/kubernetes/issues/102820" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/102820</a></p>
<p>"When pods were evicted during the graceful node shutdown, they are marked as failed. Running <code>kubectl get pods</code> shows the status of the the evicted pods as <code>Shutdown</code>."</p>
| Lukas | <p>The evicted pods are left behind on purpose; as the k8s team says here <a href="https://github.com/kubernetes/kubernetes/issues/54525#issuecomment-340035375" rel="noreferrer">1</a>, the evicted pods are not removed so that they can be inspected after eviction.</p>
<p>I believe here the best approach would be to create a cronjob <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="noreferrer">2</a> as already mentioned.</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: del-shutdown-pods
spec:
  schedule: "0 12 * * *"           # daily at noon; "* 12 * * *" would fire every minute of that hour
  jobTemplate:
    spec:
      template:
        spec:
          # note: the pod's service account needs RBAC permissions to list and delete pods
          containers:
          - name: del-shutdown-pods
            image: bitnami/kubectl   # an image that actually contains kubectl (busybox does not)
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - kubectl get pods | grep Shutdown | awk '{print $1}' | xargs kubectl delete pod
          restartPolicy: OnFailure
</code></pre>
| Toni |
<p>We have istio installed without the sidecar injection enabled globally, and I want to enable it for a specific service in a new namespace.</p>
<p>I’ve added to my deployment the following:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gow
labels:
    app: gow
spec:
replicas: 2
template:
metadata:
labels:
app: gow
tier: service
annotations:
sidecar.istio.io/inject: "true"
</code></pre>
<p>while running </p>
<p><code>get namespace -L istio-injection</code> I don’t see anything enabled , everything is empty…</p>
<p>How can I verify that the side car is created ? I dont see anything new ...</p>
| NSS | <p>You can use <a href="https://istio.io/docs/setup/getting-started/#download" rel="nofollow noreferrer">istioctl</a> <a href="https://istio.io/docs/setup/additional-setup/sidecar-injection/#manual-sidecar-injection" rel="nofollow noreferrer">kube-inject</a> to make that</p>
<pre><code>kubectl create namespace asdd
istioctl kube-inject -f nginx.yaml | kubectl apply -f -
</code></pre>
<hr>
<p>nginx.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: asdd
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
annotations:
sidecar.istio.io/inject: "True"
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
</code></pre>
<p>Result:</p>
<pre><code>nginx-deployment-55b6fb474b-77788 2/2 Running 0 5m36s
nginx-deployment-55b6fb474b-jrkqk 2/2 Running 0 5m36s
</code></pre>
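<p>Alternatively, since the question checks <code>kubectl get namespace -L istio-injection</code>, you can label the namespace so every pod created in it gets the sidecar injected automatically; this is exactly what makes that column show <code>enabled</code>:</p>
<pre><code>kubectl label namespace asdd istio-injection=enabled
kubectl get namespace -L istio-injection
# existing pods in the namespace must be re-created to pick up the sidecar
</code></pre>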
<p>Let me know if You have any more questions.</p>
| Jakub |
<p>I am a bit confused about using Istio with EKS. We have 2 Spring Boot microservices, one is a REST service provider and the other the consumer. We want to implement authn and authz using Istio.</p>
<p>For that:
1. On the provider service side: I have a VirtualService, a DestinationRule (stating that the TLS mode should be ISTIO_MUTUAL for incoming traffic), an AuthorizationPolicy which basically whitelists the client service accounts. I also have an AuthenticationPolicy as below:</p>
<pre><code>apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: $APP_NAME-$FEATURE_NAME-authenticationpolicy
namespace: $NAME_SPACE
spec:
targets:
- name: "$APP_NAME-$FEATURE_NAME"
peers:
- mtls:
mode: STRICT
</code></pre>
<p>My understanding here is that this policy wont allow any incoming traffic which is non mtls.</p>
<p>Now I have a doubt about how to configure my client pod to send all outgoing traffic over mTLS. I understand I have to create a ServiceAccount which is whitelisted at the provider side using the Authz Policy. I am more concerned about my client pod here since I am not sure how to enable mTLS at the pod level. FYI, I don't want to enable mTLS at the namespace level; I want to do it at the pod level using a yaml file.</p>
<p>Is my understanding about the usage of the DestinationRule, Authn and Authz policies correct? Is it correct that the DestinationRule, Authn and Authz policies have to be at the service provider level? And the client just has to enable mTLS for the communication to work successfully? I have been going through the Istio documentation but this is where I have a doubt.</p>
| Amol Kshirsagar | <blockquote>
<p>My understanding here is that this policy wont allow any incoming traffic which is non mtls.</p>
</blockquote>
<p>That's true: if you set the TLS mode to STRICT, a client certificate must be presented and the connection has to use TLS.</p>
<hr>
<blockquote>
<p>I am more concerned about my client pod here since I am not sure how to enable mtls at the pod level.</p>
</blockquote>
<p>There is good <a href="https://itnext.io/musings-about-istio-with-mtls-c64b551fe104" rel="nofollow noreferrer">article</a> about how to make that work, specially the part </p>
<p><strong>Setting up mTLS for a single connection between two services</strong></p>
<blockquote>
<p>As Bookinfo is the Hello World of Istio, I am going to use this to explain how to set up mTLS from productpage to details service as shown in the above graph snippet.</p>
<p>There are two parts to this:</p>
<p>Install a Policy to tell Details that it wants to receive TLS traffic (only):</p>
</blockquote>
<pre><code>apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: details-receive-tls
spec:
targets:
- name: details
peers:
- mtls: {}
</code></pre>
<blockquote>
<ol start="2">
<li>Install a DestinationRule to tell clients (productpage) to talk TLS with details:</li>
</ol>
</blockquote>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: details-istio-mtls
spec:
host: details.bookinfo.svc.cluster.local
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
</code></pre>
<blockquote>
<p>The following is a graphical representation of the involved services and where the previous two configuration documents apply.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/Z8Srp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z8Srp.png" alt="enter image description here"></a></p>
<blockquote>
<p>Now when you look closely at the Policy above you will see and entry for the peer authentication</p>
</blockquote>
<pre><code>peers:
- mtls: {}
</code></pre>
<blockquote>
<p>This means that TLS verification is strict and Istio (or rather the Envoy proxy in the pod) requires TLS traffic and a valid certificate. We can pass a flag to get permissive mode:</p>
</blockquote>
<pre><code>peers:
- mtls:
mode: PERMISSIVE
</code></pre>
<hr>
<blockquote>
<p>Is it correct that Destination rule, Authn and Authz policies have to be at the service provider level?</p>
</blockquote>
<p>As far as I know yes.</p>
<blockquote>
<p>And the client just has to enable MTLS for the communication to work successfully?</p>
</blockquote>
<p>I'm not sure about that, since MTLS works inside mesh, it depends on your application requirements.</p>
<hr>
<blockquote>
<p>I want to do it at the pod level using a yaml file.</p>
</blockquote>
<p>There is a link to <a href="https://istio.io/docs/tasks/security/authentication/" rel="nofollow noreferrer">istio documentation</a> about authentication which include</p>
<ul>
<li><a href="https://istio.io/docs/tasks/security/authentication/https-overlay/#create-an-https-service-with-istio-sidecar-with-mutual-tls-enabled" rel="nofollow noreferrer">Create an HTTPS service with Istio sidecar with mutual TLS enabled</a> </li>
<li><a href="https://istio.io/docs/tasks/security/authentication/mutual-tls/" rel="nofollow noreferrer">Mutual TLS Deep-Dive</a></li>
</ul>
<p>And another one from github</p>
<ul>
<li><a href="https://github.com/GoogleCloudPlatform/istio-samples/tree/6fa69cf46424c055535ddbdc22f715e866c4d179/security-intro#enable-mtls-for-the-frontend-service" rel="nofollow noreferrer">Enable mtls for frontend service</a></li>
</ul>
<p>Or you can extend your gateway’s definition to support mutual TLS. Change the credentials of the ingress gateway by deleting its secret and creating a new one. The server uses the CA certificate to verify its clients, and we must use the name cacert to hold the CA certificate. You can use cert-manager to generate a client certificate.</p>
<ul>
<li><a href="https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-sds/#configure-a-mutual-tls-ingress-gateway" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-sds/#configure-a-mutual-tls-ingress-gateway</a></li>
<li><a href="https://istio.io/docs/tasks/traffic-management/ingress/ingress-certmgr/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/ingress/ingress-certmgr/</a></li>
</ul>
<hr>
<p>I have found some tutorials which might be helpful, check it out.</p>
<ul>
<li><a href="https://banzaicloud.com/blog/istio-mtls/?fbclid=IwAR1--yRJNBRWK8cyYCnpCOxAy0yQaho-PKPsl5tNefs9Iy28dIDjz3AFygQ" rel="nofollow noreferrer">Managing mutual TLS between services with Istio</a></li>
<li><a href="https://istiobyexample.dev/authorization/" rel="nofollow noreferrer">Authorization</a></li>
<li><a href="https://istiobyexample.dev/jwt/" rel="nofollow noreferrer">JWT</a></li>
<li><a href="https://www.youtube.com/watch?v=pKN4x4uXswU" rel="nofollow noreferrer">istio mtls,jwt,authn</a></li>
</ul>
<hr>
<p>Let me know if you have any more questions.</p>
| Jakub |
<p>What is the best way to inject a file into a Pod?
I did it using a configMap but now I have an xml file that is bigger than 1MB so need to find some other way. The file is available in the git repository.</p>
| AdamU | <p>As @Arghya Sadhu mentioned </p>
<blockquote>
<p>You can store it in a <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">volume</a> and <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/" rel="nofollow noreferrer">mount</a> that volume into the pod.</p>
</blockquote>
<hr>
<p>There was a volume type called <a href="https://kubernetes.io/docs/concepts/storage/volumes/#gitrepo" rel="nofollow noreferrer">gitRepo</a> which might have been the best solution for you, but it is now deprecated.</p>
<blockquote>
<p>Warning: The gitRepo volume type is deprecated. To provision a container with a git repo, mount an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">EmptyDir</a> into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod’s container.</p>
</blockquote>
<h2>emptyDir</h2>
<blockquote>
<p>An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.</p>
<p>Note: A Container crashing does NOT remove a Pod from a node, so the data in an emptyDir volume is safe across Container crashes.</p>
</blockquote>
<p>Some uses for an emptyDir are:</p>
<blockquote>
<ul>
<li>scratch space, such as for a disk-based merge sort</li>
<li>checkpointing a long computation for recovery from crashes</li>
<li>holding files that a content-manager Container fetches while a
webserver Container serves the data</li>
</ul>
<p>By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment. However, you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on node reboot and any files you write will count against your Container’s memory limit.</p>
</blockquote>
<h2>Example Pod</h2>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /cache
name: cache-volume
volumes:
- name: cache-volume
emptyDir: {}
</code></pre>
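<p>Putting the deprecation note and emptyDir together, here is a minimal sketch of the recommended replacement: an init container clones the repo into an emptyDir, and the main container mounts the same volume to read the XML file. The repo URL, image names and paths are placeholders for illustration:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-git-file
spec:
  initContainers:
  - name: clone-repo
    image: alpine/git              # any image that contains git will do
    args: ["clone", "--depth", "1", "https://example.com/your/repo.git", "/repo"]
    volumeMounts:
    - name: repo
      mountPath: /repo
  containers:
  - name: app
    image: your-app-image          # placeholder
    volumeMounts:
    - name: repo
      mountPath: /config           # the big XML file is then readable under /config
  volumes:
  - name: repo
    emptyDir: {}
</code></pre>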
<hr>
<p>Check this stackoverflow <a href="https://stackoverflow.com/a/44149704/11977760">answer</a>, as it might be the solution you're looking for.</p>
<hr>
<p>If you want to read more about storage in kubernetes take a look here:</p>
<ul>
<li><a href="https://www.magalix.com/blog/kubernetes-storage-101" rel="nofollow noreferrer">https://www.magalix.com/blog/kubernetes-storage-101</a></li>
<li><a href="https://kubernetes.io/docs/concepts/storage/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/</a></li>
</ul>
| Jakub |
<p>I am very new to using helm charts for deploying containers, and I have also never worked with nginx controllers or ingress controllers.
However, I am being asked to look into improving our internal nginx ingress controllers to allow for SSL-passthrough.</p>
<p>Right now we have external (public facing) and internal controllers, where the public ones allow SSL-passthrough and the internal ones do SSL-termination.
I have also been told that nginx is a reverse proxy, and that it works based on headers in the URL.</p>
<p>I am hoping someone can help me out on this helm chart that I have for the internal ingress controllers.
Currently I am under the impression that having SSL termination as well as SSL-passthrough on the same ingress controllers would not be possible.
Answered this one myself: <a href="https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru" rel="nofollow noreferrer">https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru</a></p>
<p>Our current (internal) ingress code:</p>
<pre><code>---
rbac:
create: true
controller:
ingressClass: nginx-internal
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu:110:certificate/62-b3
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: !!str 443
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: !!str 3600
targetPorts:
https: 80
replicaCount: 3
defaultBackend:
replicaCount: 3
</code></pre>
<p>Can I simply add the following? :</p>
<pre><code>controller:
extraArgs:
enable-ssl-passthrough: ""
</code></pre>
<p>Note: The above piece of code is what we use on our external ingress controller.</p>
<p>additionally, I found this:
<a href="https://stackoverflow.com/questions/48025879/ingress-and-ssl-passthrough">Ingress and SSL Passthrough</a></p>
<p>Can I just go and mix the annotations? Or do annotations only care about the 'top domain level' where the annotation comes from?
eg:</p>
<pre><code>service.beta.kubernetes.io
nginx.ingress.kubernetes.io
</code></pre>
<p>Both come from the domain kubernetes.io, or does the sub-domain make a difference?
I mean: <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md</a>
That page doesn't show any of the service.beta annotations on it ..</p>
<p>What's the difference between the extraArg ssl-passthrough configuration and the ssl-passthrough configuration in the annotations?</p>
<p>I'm looking mostly for an answer on how to get the SSL-passthrough working without breaking the SSL-termination on the internal ingress controllers.
However, any extra information to gain more insight and knowledge as far as my other questions go would also be very appreciated :)</p>
| Marco | <p>So I found the answer to my own question(s):
The annotations appear to be 'configuration items'. I'm using quotes because I can't find a better term.
The extraArgs parameter is where you can pass any parameter to the controller as if it were a command-line parameter.
And I think it is also safe to say that the annotations can come from any subdomain of the same top-level domain; I have not found any that weren't under kubernetes.io.</p>
<p>To get my ingress controller to work side-by-side with the SSL-termination controller the helm chart looks as following:</p>
<pre><code>---
rbac:
create: true
controller:
ingressClass: nginx-internal-ssl-passthrough
service:
annotations:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "tag3=value3, tag3=value3, tag3=value3, tag3=value3"
targetPorts:
https: 443
replicaCount: 2
extraArgs:
enable-ssl-passthrough: ""
defaultBackend:
replicaCount: 2
</code></pre>
<p>Took me about 2 days of researching/searching the web & 6 deployments to get the whole setup working with AWS NLB, ssl-passthrough enabled, cross-zone load balancing, etc. But after having found the following pages it went pretty fast:
<a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a>
<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/</a>
<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
<p>This last page helped me a lot. If someone else gets to deploy SSL-termination and SSL-passthrough for either public or private connections, I hope this helps too.</p>
| Marco |
<p>In kubernetes, I have the following service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-service
namespace: default
spec:
ports:
- name: tcp
protocol: TCP
port: 5555
targetPort: 5555
- name: udp
protocol: UDP
port: 5556
targetPort: 5556
selector:
tt: test
</code></pre>
<p>Which exposes two ports, 5555 for TCP and 5556 for UDP. How can expose these ports externally using the same ingress? I tried using nginx to do something like the following but it doesn't work. It complains that mixed ports are not supported.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: ingress-nginx
data:
5555: "default/test-service:5555"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: udp-services
namespace: ingress-nginx
data:
5556: "default/test-service:5556"
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: LoadBalancer
ports:
- name: tcp
port: 5555
targetPort: 5555
protocol: TCP
- name: udp
port: 5556
targetPort: 5556
protocol: UDP
args:
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
</code></pre>
<p>Is there a way to do this?</p>
| Mohamed | <p>You can enable <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">feature gates</a> <code>MixedProtocolLBService</code>. For instructions on how to enable function gates, see below.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/59814862/how-do-you-enable-feature-gates-in-k8s">How do you enable Feature Gates in K8s?</a></li>
</ul>
<p>Restart (delete and re-create) the Ingress controller after enabling it for the settings to take effect.</p>
<p>The <code>MixedProtocolLBService</code> feature gate was introduced in Kubernetes 1.20. Whether it becomes stable or deprecated remains to be seen.</p>
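<p>Once the gate is enabled, a single LoadBalancer Service can expose both protocols. A minimal sketch using the ports and selector from the question (whether the cloud load balancer itself accepts mixed protocols still depends on the provider):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: test-service-lb
spec:
  type: LoadBalancer
  selector:
    tt: test
  ports:
  - name: tcp
    protocol: TCP
    port: 5555
    targetPort: 5555
  - name: udp
    protocol: UDP
    port: 5556
    targetPort: 5556
</code></pre>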
| Yeon-Gu |
<p>I am using the default bookinfo application <a href="https://istio.io/docs/examples/bookinfo/" rel="nofollow noreferrer">https://istio.io/docs/examples/bookinfo/</a> and trying to test split traffic with the reviews service. Kiali is showing the split and everything seems to be configured correctly, but it's still doing round robin. If I remove all virtual services and destination rules, the app works as expected. </p>
<pre><code># Source: bookinfo/templates/destination-rule-all.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: productpage
spec:
host: productpage
subsets:
- name: v1
labels:
version: v1
---
# Source: bookinfo/templates/destination-rule-all.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: ratings
spec:
host: ratings
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v2-mysql
labels:
version: v2-mysql
- name: v2-mysql-vm
labels:
version: v2-mysql-vm
---
# Source: bookinfo/templates/destination-rule-all.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: details
spec:
host: details
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
# Source: bookinfo/templates/destination-rule-all.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v3
labels:
version: v3
---
# Source: bookinfo/templates/bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
chart: bookinfo-0.1.2
release: bookinfo
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
# Source: bookinfo/templates/bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
chart: bookinfo-0.1.2
release: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
http:
- match:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
route:
- destination:
host: productpage
port:
number: 9080
---
# Source: bookinfo/templates/virtual-service-all-v1.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: productpage
spec:
hosts:
- productpage
gateways:
- bookinfo-gateway
http:
- route:
- destination:
host: productpage
subset: v1
---
# Source: bookinfo/templates/virtual-service-all-v1.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
gateways:
- bookinfo-gateway
http:
- route:
- destination:
host: reviews
subset: v1
weight: 100
- destination:
host: reviews
subset: v2
weight: 0
- destination:
host: reviews
subset: v3
weight: 0
---
# Source: bookinfo/templates/virtual-service-all-v1.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
gateways:
- bookinfo-gateway
http:
- route:
- destination:
host: ratings
subset: v1
---
# Source: bookinfo/templates/virtual-service-all-v1.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: details
spec:
hosts:
- details
gateways:
- bookinfo-gateway
http:
- route:
- destination:
host: details
subset: v1
# Source: bookinfo/templates/reviews-v1-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: reviews-v1
labels:
chart: bookinfo-0.1.2
release: bookinfo
app: reviews
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: reviews
release: bookinfo
version: v1
template:
metadata:
labels:
app: reviews
release: bookinfo
version: v1
spec:
serviceAccountName: bookinfo-reviews
containers:
- name: reviews
image: myhub/istio/examples-bookinfo-reviews-v1:1.15.0
imagePullPolicy: IfNotPresent
env:
- name: LOG_DIR
value: "/tmp/logs"
ports:
- containerPort: 9080
volumeMounts:
- name: tmp
mountPath: /tmp
- name: wlp-output
mountPath: /opt/ibm/wlp/output
volumes:
- name: wlp-output
emptyDir: {}
- name: tmp
emptyDir: {}
---
# Source: bookinfo/templates/reviews-v2-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: reviews-v2
labels:
chart: bookinfo-0.1.2
release: bookinfo
app: reviews
version: v2
spec:
replicas: 1
selector:
matchLabels:
app: reviews
release: bookinfo
version: v2
template:
metadata:
labels:
app: reviews
release: bookinfo
version: v2
spec:
serviceAccountName: bookinfo-reviews
containers:
- name: reviews
image: myhub/istio/examples-bookinfo-reviews-v2:1.15.0
imagePullPolicy: IfNotPresent
env:
- name: LOG_DIR
value: "/tmp/logs"
ports:
- containerPort: 9080
volumeMounts:
- name: tmp
mountPath: /tmp
- name: wlp-output
mountPath: /opt/ibm/wlp/output
volumes:
- name: wlp-output
emptyDir: {}
- name: tmp
emptyDir: {}
---
# Source: bookinfo/templates/reviews-v3-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: reviews-v3
labels:
chart: bookinfo-0.1.2
release: bookinfo
app: reviews
version: v3
spec:
replicas: 1
selector:
matchLabels:
app: reviews
release: bookinfo
version: v3
template:
metadata:
labels:
app: reviews
release: bookinfo
version: v3
spec:
serviceAccountName: bookinfo-reviews
containers:
- name: ratings
image: myhub/istio/examples-bookinfo-reviews-v3:1.15.0
imagePullPolicy: IfNotPresent
env:
- name: LOG_DIR
value: "/tmp/logs"
ports:
- containerPort: 9080
volumeMounts:
- name: tmp
mountPath: /tmp
- name: wlp-output
mountPath: /opt/ibm/wlp/output
volumes:
- name: wlp-output
emptyDir: {}
- name: tmp
emptyDir: {}
---
</code></pre>
<p>Environment</p>
<p>kind v0.7.0 go1.13.6 linux/amd64</p>
<p>K8s v1.18.1 v1.17.0 </p>
<p><a href="https://i.stack.imgur.com/rnRjC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rnRjC.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/95heG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/95heG.png" alt="enter image description here"></a></p>
| CodyK | <p>I tried to reproduce your problem on gke with istio 1.5.2 and everything works fine.</p>
<hr>
<p>I followed istio <a href="https://istio.io/docs/examples/bookinfo/" rel="nofollow noreferrer">bookinfo documentation</a></p>
<pre><code>kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
</code></pre>
<p>As mentioned <a href="https://istio.io/docs/examples/bookinfo/#apply-default-destination-rules" rel="nofollow noreferrer">here</a> </p>
<h2>Apply default destination rules</h2>
<blockquote>
<p>Before you can use Istio to control the Bookinfo version routing, you need to define the available versions, called subsets, in destination rules.</p>
<p>Run the following command to create default destination rules for the Bookinfo services:</p>
<p>If you did not enable mutual TLS, execute this command:</p>
<p>Choose this option if you are new to Istio and are using the demo configuration profile.</p>
</blockquote>
<pre><code>$ kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
</code></pre>
<blockquote>
<p>If you did enable mutual TLS, execute this command:</p>
</blockquote>
<pre><code>$ kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
</code></pre>
<blockquote>
<p>Wait a few seconds for the destination rules to propagate.</p>
<p>You can display the destination rules with the following command:</p>
</blockquote>
<pre><code>$ kubectl get destinationrules -o yaml
</code></pre>
<hr>
<h2>Apply your virtual service for reviews</h2>
<p><strong>Examples</strong></p>
<p>Traffic only for subset v1.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
</code></pre>
<p>Traffic only for subset v2.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v2
</code></pre>
<p>50/50 weight traffic for subset v2 and v3.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v2
weight: 50
- destination:
host: reviews
subset: v3
weight: 50
</code></pre>
<p>100/0/0 weight traffic for subset v1,v2 and v3.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v2
weight: 100
- destination:
host: reviews
subset: v3
weight: 0
- destination:
host: reviews
subset: v1
weight: 0
</code></pre>
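<p>One detail worth comparing with the YAML in the question: the reviews/ratings/details VirtualServices there list <code>gateways: - bookinfo-gateway</code>. When a VirtualService names specific gateways, its routes apply only to traffic entering through those gateways; calls made inside the mesh (productpage → reviews) are matched by the reserved <code>mesh</code> gateway, which is the default when the field is omitted. That would explain why you still see round robin. Either drop the <code>gateways</code> field for the internal services, as in the examples above, or include <code>mesh</code> explicitly:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  gateways:
  - mesh               # sidecar-to-sidecar traffic
  - bookinfo-gateway   # only needed if reviews is also exposed through the gateway
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 100
</code></pre>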
<hr>
<p>With virtual service only for subset v1 Kiali shows traffic goes only to v1</p>
<p><a href="https://i.stack.imgur.com/XKbAZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XKbAZ.png" alt="enter image description here"></a></p>
<p>Istio productpage doesn't show the stars, so it's review v1.</p>
<p><a href="https://i.stack.imgur.com/5to5b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5to5b.png" alt="enter image description here"></a></p>
<hr>
<p>There is documentation about <a href="https://istio.io/docs/tasks/traffic-management/traffic-shifting/#apply-weight-based-routing" rel="nofollow noreferrer">weight-based routing</a> for reviews.</p>
| Jakub |
<p>I just created a cluster on GKE with 2 n1-standard-2 nodes and installed a prometheusOperator using the official <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">helm</a>.</p>
<p>Prometheus seems to be working fine but i'm getting alerts like this :</p>
<pre><code>message: 33% throttling of CPU in namespace kube-system for container metrics-server in pod metrics-server-v0.3.1-8d4c5db46-zddql.
  (22 minutes ago - container: metrics-server, pod: metrics-server-v0.3.1-8d4c5db46-zddql)
message: 35% throttling of CPU in namespace kube-system for container heapster-nanny in pod heapster-v1.6.1-554bfbc7d-tg6fm.
  (an hour ago - container: heapster-nanny, pod: heapster-v1.6.1-554bfbc7d-tg6fm)
message: 77% throttling of CPU in namespace kube-system for container prometheus-to-sd in pod prometheus-to-sd-789b2.
  (20 hours ago - container: prometheus-to-sd, pod: prometheus-to-sd-789b2)
message: 45% throttling of CPU in namespace kube-system for container heapster in pod heapster-v1.6.1-554bfbc7d-tg6fm.
  (20 hours ago - container: heapster, pod: heapster-v1.6.1-554bfbc7d-tg6fm)
</code></pre>
<p>All those pods are part of the default GKE installation and I haven't done any modification on them. I believe they are part of some google cloud tools that I haven't really tried yet.</p>
<p>My nodes aren't really under heavy load :</p>
<pre><code>kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-psi-cluster-01-pool-1-d5650403-cl4g 230m 11% 2973Mi 52%
gke-psi-cluster-01-pool-1-d5650403-xn35 146m 7% 2345Mi 41%
</code></pre>
<p>Here are my prometheus helm config : </p>
<pre><code>alertmanager:
alertmanagerSpec:
storage:
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
config:
global:
resolve_timeout: 5m
receivers:
- name: "null"
- name: slack_k8s
slack_configs:
- api_url: REDACTED
channel: '#k8s'
send_resolved: true
text: |-
{{ range .Alerts }}
{{- if .Annotations.summary }}
*{{ .Annotations.summary }}*
{{- end }}
*Severity* : {{ .Labels.severity }}
{{- if .Labels.namespace }}
*Namespace* : {{ .Labels.namespace }}
{{- end }}
{{- if .Annotations.description }}
{{ .Annotations.description }}
{{- end }}
{{- if .Annotations.message }}
{{ .Annotations.message }}
{{- end }}
{{ end }}
title: '{{ (index .Alerts 0).Labels.alertname }}'
title_link: https://karma.REDACTED?q=alertname%3D{{ (index .Alerts 0).Labels.alertname
}}
route:
group_by:
- alertname
- job
group_interval: 5m
group_wait: 30s
receiver: slack_k8s
repeat_interval: 6h
routes:
- match:
alertname: Watchdog
receiver: "null"
- match:
alertname: KubeAPILatencyHigh
receiver: "null"
ingress:
enabled: false
hosts:
- alertmanager.REDACTED
coreDns:
enabled: false
grafana:
adminPassword: REDACTED
ingress:
annotations:
kubernetes.io/tls-acme: "true"
enabled: true
hosts:
- grafana.REDACTED
tls:
- hosts:
- grafana.REDACTED
secretName: grafana-crt-secret
persistence:
enabled: true
size: 5Gi
kubeControllerManager:
enabled: true
kubeDns:
enabled: true
kubeScheduler:
enabled: true
nodeExporter:
enabled: true
prometheus:
ingress:
enabled: false
hosts:
- prometheus.REDACTED
prometheusSpec:
additionalScrapeConfigs:
- basic_auth:
password: REDACTED
username: prometheus
retention: 30d
ruleSelectorNilUsesHelmValues: false
serviceMonitorSelectorNilUsesHelmValues: false
storageSpec:
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
prometheusOperator:
createCustomResource: false
</code></pre>
<p>I've found this git issue <a href="https://github.com/kubernetes-monitoring/kubernetes-mixin/issues/108" rel="nofollow noreferrer">https://github.com/kubernetes-monitoring/kubernetes-mixin/issues/108</a>
but I'm not sure if this applies to my case because these are default GKE pods.
I want to make sure everything is running smoothly and Stackdriver is able to retrieve all my logs properly even if I haven't really looked up how to use it yet.</p>
<p>Should I modify the limits on GKE default deployement in kube-system? Is there any problem with deploying prometheusOperator on GKE ?</p>
| Guilhem30 | <p>After looking through many links, I think that I understand the issue here.</p>
<p>I think that this is the k8s issue that you’re experiencing. [1]</p>
<p>There seems to be an issue with CFS quotas in Linux that is affecting all containerized clouds including Kubernetes; you can work around the issue by setting higher CPU limits or by removing the CPU limits from your containers. Please do test this on a staging environment and not straight in production.</p>
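<p>For your own workloads the workaround boils down to keeping CPU requests but dropping the CPU limit (the kube-system pods from the alerts are managed by GKE, so manual edits to them may be reverted). A sketch of what that looks like in a Deployment, with placeholder names:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: nginx               # placeholder
        resources:
          requests:
            cpu: 100m              # scheduling guarantee stays
            memory: 256Mi
          limits:
            memory: 256Mi          # keep a memory limit; omitting the cpu limit avoids CFS throttling
</code></pre>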
<p>Best of Luck!</p>
<hr>
<p>[1] <a href="https://github.com/kubernetes/kubernetes/issues/67577" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/67577</a></p>
| Carlo C. |
<p>I have configured a liveness probe for my Redis instances that makes sure that the Redis is able to retrieve keys for it to be able to be called 'alive'.</p>
<pre><code> livenessProbe:
initialDelaySeconds: 20
periodSeconds: 10
exec:
command:
{{- include "liveness_probe" . | nindent 16 }}
</code></pre>
<p>_liveness.tpl</p>
<pre><code>{{/* Liveness probe script. */}}
{{- define "liveness_probe" -}}
- "redis-cli"
- "set"
- "liveness_test_key"
- "\"SUCCESS\""
- "&&"
- "redis-cli"
- "get"
- "liveness_test_key"
- "|"
- "awk"
- "'$1 != \"SUCCESS\" {exit 1}'"
{{- end }}
</code></pre>
<p>The pod is able to start after doing the change. However, I would like to make sure that the probe is working as expected. For that I just added a delete command before the get command.</p>
<pre><code>{{/* Liveness probe script. */}}
{{- define "liveness_probe" -}}
- "redis-cli"
- "set"
- "liveness_test_key"
- "\"SUCCESS\""
- "&&"
- "redis-cli"
- "del"
- "liveness_test_key"
- "&&"
- "redis-cli"
- "get"
- "liveness_test_key"
- "|"
- "awk"
- "'$1 != \"SUCCESS\" {exit 1}'"
{{- end }}
</code></pre>
<p>I get the expected exit codes when I execute this command directly in my command prompt.</p>
<p>But the thing is that my pod is still able to start.</p>
<p>Is the liveness probe command I am using okay? If so, how do I verify this?</p>
| rishav | <p>The reason your current probe never fails is that an <code>exec</code> probe does not run through a shell, so tokens like <code>&&</code>, <code>|</code> and the awk script are passed to <code>redis-cli</code> as literal arguments instead of being interpreted. Wrapping the whole check in <code>sh -c</code> fixes that. Try this for your liveness probe, it is working fine and you can use the same in a readinessProbe:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: redis
spec:
containers:
- image: redis
name: redis
livenessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
#export REDISCLI_AUTH="$REDIS_PASSWORD"
set_response=$(
redis-cli set liveness_test_key "SUCCESS"
)
del_response=$(
redis-cli del liveness_test_key
)
response=$(
redis-cli get liveness_test_key
)
if [ "$response" != "SUCCESS" ] ; then
echo "Unable to get keys, something is wrong"
exit 1
fi
initialDelaySeconds: 5
periodSeconds: 5
status: {}
</code></pre>
<p><strong>You will need to edit these values in your template</strong></p>
| Lovedeep Sharma |
<p>I have an application with an Ingress resource shown below.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}-stateful
labels:
app: oxauth
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.org/ssl-services: "oxtrust"
nginx.ingress.kubernetes.io/app-root: "/identity"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
nginx.ingress.kubernetes.io/proxy-next-upstream: "error timeout invalid_header http_500 http_502 http_503 http_504"
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: /identity
backend:
serviceName: oxtrust
servicePort: 8080
- path: /idp
backend:
serviceName: oxshibboleth
servicePort: 8080
- path: /passport
backend:
serviceName: oxpassport
servicePort: 8090
</code></pre>
<p>I would like to translate that into a <code>VirtualService</code> to be used by Istio gateway. But once I do that the service <code>oxpassport</code> always returns a <code>503</code> error in the logs. That means the deployment can't be reached.</p>
<p>Below is the <code>Service</code> definition </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2020-04-15T18:21:12Z"
labels:
app: oxpassport
app.kubernetes.io/instance: kk
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/version: 4.1.0_01
helm.sh/chart: oxpassport-1.0.0
name: oxpassport
namespace: test
spec:
clusterIP: 10.111.71.120
ports:
- name: tcp-oxpassport
port: 8090
protocol: TCP
targetPort: 8090
selector:
app: oxpassport
release: kk
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>And finally the <code>VS</code> I am trying to use:</p>
<p><code>VirtualService</code></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {{ include "istio.fullname" . }}-oxpassport
namespace: {{ .Release.Namespace }}
spec:
hosts:
- oxpassport.{{ .Release.Namespace }}.svc.cluster.local
gateways:
- {{ .Release.Name }}-global-gtw
http:
- match:
- uri:
prefix: /passport
rewrite:
uri: /identity
route:
- destination:
host: oxpassport.{{ .Release.Namespace }}.svc.cluster.local
port:
number: 8090
</code></pre>
<p><code>Gateway</code> snippet:</p>
<pre><code> - port:
number: 8090
name: tcp-oxpassport
protocol: HTTP
hosts:
- oxpassport.{{ .Release.Namespace }}.svc.cluster.local
</code></pre>
<p>Things to note:</p>
<ol>
<li><p>There is a backend app with these labels. And that has it's own VS and it's working:</p>
<pre><code> labels:
app: oxauth
</code></pre></li>
<li><p>Oxpassport has a deployment with labels</p>
<pre><code> labels:
app: oxpassport
</code></pre></li>
</ol>
<p>I know it's a long post but it's a blocker for quite some days now. If it is possible, please explain.</p>
<p>Thanks</p>
| Shammir | <p>The Gateway should be in the same namespace as the virtual service; if it's not, you have to reference it with its namespace prefix, like in the example below.</p>
<p>Check the <code>spec.gateways</code> section</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo-Mongo
namespace: bookinfo-namespace
spec:
gateways:
- some-namespace/my-gateway
</code></pre>
<hr>
<p>In your ingress you have 3 paths then virtual service based on that ingress should look like there</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {{ include "istio.fullname" . }}-oxpassport
namespace: {{ .Release.Namespace }}
spec:
hosts:
- oxpassport.{{ .Release.Namespace }}.svc.cluster.local
gateways:
- {{ .Release.Name }}-global-gtw
http:
- name: a
match:
- uri:
prefix: /identity
route:
- destination:
host: oxtrust.{{ .Release.Namespace }}.svc.cluster.local
port:
number: 8080
- name: b
match:
- uri:
prefix: /idp
route:
- destination:
host: oxshibboleth.{{ .Release.Namespace }}.svc.cluster.local
port:
number: 8080
- name: c
match:
- uri:
prefix: /passport
route:
- destination:
host: oxpassport.{{ .Release.Namespace }}.svc.cluster.local
port:
number: 8090
</code></pre>
<hr>
<p>Cases with answers worth to check when problem 503 appears.</p>
<ul>
<li><p><a href="https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule" rel="nofollow noreferrer">https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule</a></p></li>
<li><p><a href="https://stackoverflow.com/questions/54160215/accessing-service-using-istio-ingress-gives-503-error-when-mtls-is-enabled?rq=1">Accessing service using istio ingress gives 503 error when mTLS is enabled</a></p></li>
<li><p><a href="https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes" rel="nofollow noreferrer">https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes</a></p></li>
<li><p><a href="https://stackoverflow.com/questions/59560394/how-to-terminate-ssl-at-ingress-gateway-in-istio">how to terminate ssl at ingress-gateway in istio?</a></p></li>
<li><p><a href="https://stackoverflow.com/questions/57638780/kubernetes-istio-ingress-gateway-responds-with-503-always">Kubernetes Istio ingress gateway responds with 503 always</a></p></li>
</ul>
<hr>
<p><strong>EDIT</strong></p>
<hr>
<blockquote>
<p>Did you consider this nginx.ingress.kubernetes.io/app-root: "/identity"?</p>
</blockquote>
<p>Missed that /identity app root, you can always rewrite all of them like you did. </p>
<blockquote>
<p>Also, is there a particular reason why we can separate that whole - big - vs into different VS files?</p>
</blockquote>
<p>No, you should be able to create separate smaller virtual services instead of the big one; I just copied the ingress you provided.</p>
| Jakub |
<p>I created my WordPress website using the Duplicator plugin. First I deployed it in a regular docker container on a machine with IP1; then, after I configured WordPress to work, I did 'docker commit' on it and pushed it to my Docker Hub repo. Then I used this new image with the configured WordPress to deploy WP on my Kubernetes pods, but when I deployed it, the images won't show up and in F12 the img <code>src=IP1/bla/bla/bla.jpg</code></p>
<p>I did update my wp_options and wp_posts to my Kubernetes IP, but it's still unchanged and shows IP1 in the src.
What should I do?</p>
| Dmytro Lenchuk | <p>There are several different ways in the WordPress ecosystem of scanning the database for instances of an older IP or URL and replace it with a new one.</p>
<p>One is using WP-CLI, if you're comfortable using your terminal. You install WP-CLI and then simply run <code>wp search-replace old-ip new-ip</code>.
You can add <code>--dry-run</code> to show you what it will do without making any actual changes.
There are several nice parameters to exclude tables, and others.</p>
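<p>For example (the IPs are placeholders; <code>--dry-run</code> and <code>--all-tables</code> are standard WP-CLI flags):</p>
<pre><code># preview what would change
wp search-replace 'http://OLD-IP' 'http://NEW-IP' --all-tables --dry-run
# then run it for real
wp search-replace 'http://OLD-IP' 'http://NEW-IP' --all-tables
</code></pre>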
<p>Another is using a Plugin like Better Search Replace which basically does the same, but gives you a UI in the WordPress admin.</p>
<p>It's not recommended to do direct queries on the db as suggested in the comment, because some instances of the URL or IP in your case can be stored in a serialized array, that can break if the old and new URLs don't have the exact same length.</p>
<p>The Plugin and wp cli approach ensure this doesn't happen and serialized arrays are correctly updated.</p>
| Tami |
<p>I'm new to Kubernetes and Azure. I want to deploy my application and I am following the Microsoft tutorial about Kubernetes. At first I created the resource group and the ACR instance. When I try to log in to ACR, the console shows this error:
<code>Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?</code></p>
<p>I'm using azure cli localy and I have docker running.</p>
| Adrian Gago | <p>You can try the below option to connect to ACR:</p>
<p>Run <code>az acr login</code> first with the <code>--expose-token</code> parameter. This option exposes an access token instead of logging in through the Docker CLI.</p>
<pre><code>az acr login --name <acrName> --expose-token
</code></pre>
<p>Output displays the access token, abbreviated here:</p>
<pre><code>{
"accessToken": "eyJhbGciOiJSUzI1NiIs[...]24V7wA",
"loginServer": "myregistry.azurecr.io"
}
</code></pre>
<p>For registry authentication, we recommend that you store the token credential in a safe location and follow recommended practices to manage docker login credentials. For example, store the token value in an environment variable:</p>
<pre><code>TOKEN=$(az acr login --name <acrName> --expose-token --output tsv --query accessToken)
</code></pre>
<p>Then, run docker login, passing 00000000-0000-0000-0000-000000000000 as the username and using the access token as password:</p>
<pre><code>docker login myregistry.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password $TOKEN
</code></pre>
<p>you will get the below prompt if you follow the above method:</p>
<pre><code>WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
</code></pre>
| Ghansham Mahajan |
<p>I'm currently developing my CI/CD pipeline via GitHub Actions.<br />
My k8s deployments are managed by Helm and run on GKE, and my images are stored in GCP.<br />
I've successfully managed to build and deploy a new image via GitHub Actions, and now I
would like one of the pods to fetch the latest version after the image was pushed to GCP.<br />
As I understand it, the current flow is to update the Helm chart version after creating the new image and run '<code>helm upgrade</code>' against k8s (am I right?), but currently I would like to skip the Helm
versioning part and just force the pod to get the new image.<br />
Until now, to make it work, after creating the new image I was simply deleting the pod, and because the deployment exists the pod was recreated, but my question is:<br />
Should I do the same from my CI pipeline (deleting the pod) or is there another way of doing that?</p>
| Anna | <h1>Use kubectl rollout</h1>
<p>If you are using <code>latest</code> tag for image and <code>imagePullPolicy</code> is set as <code>Always</code>, you can try <code>kubectl rollout</code> command to fetch the latest built image.</p>
<p>But <code>latest</code> image tag is not recommended for the prod deployment, because you cannot ensure the full control of the deployment version.</p>
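<p>For example, assuming a Deployment called <code>myapp</code> (the name is a placeholder), restarting the rollout recreates the pods, and with <code>imagePullPolicy: Always</code> plus the <code>latest</code> tag they pull the newly pushed image:</p>
<pre><code>kubectl rollout restart deployment/myapp
kubectl rollout status deployment/myapp
</code></pre>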
<h1>Update image tag in values.yaml file</h1>
<p>If you have some specific reasons to avoid chart version bump, you can only update the values.yaml file and try <code>helm upgrade</code> command with the new values.yaml file which has the new image tag. In this case, you have to use specific image tags, not <code>latest</code>.</p>
<p>If you have to use <code>latest</code> image tag, you can use <code>sha256</code> value of the image as the tag in the values.yaml file.</p>
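<p>A rough sketch of that flow, with placeholder release, chart and tag names: bump only <code>image.tag</code> (or override it from the pipeline) and upgrade without touching the chart version:</p>
<pre><code># values.yaml (only the tag changes per build)
# image:
#   repository: gcr.io/my-project/myapp
#   tag: "abc123"        # e.g. the git SHA, written by CI

helm upgrade my-release ./chart -f values.yaml
# or override the tag directly from GitHub Actions:
helm upgrade my-release ./chart --set image.tag=$GITHUB_SHA
</code></pre>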
| James Wang |
<p>I'm trying to enable some sort of rate limiting for an EKS cluster using the nginx ingress controller, where I also need to somehow exclude a couple of IPs (the ones in charge of health and metrics checks) from this rate limit rule.
If I use the annotations <code>nginx.ingress.kubernetes.io/whitelist-source-range</code> and <code>nginx.ingress.kubernetes.io/limit-connections</code> it just applies the limit to the whitelisted IPs as well.
Is there another way to set this up?
Thank you!</p>
| Sima Liviu | <p>The problem, in fact, is that it ignores the whitelisting due to the lack of <code>x-forwarded-for</code>, but activating this in production can be a security flaw, as discussed on: <a href="https://github.com/kubernetes/ingress-nginx/pull/2881" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/pull/2881</a></p>
| Sima Liviu |
<p>My k8 cluster runs on minikube.</p>
<p>I am familiar with kubectl port-forward command which allows to route traffic from localhost into the cluster.</p>
<p>Is there a way do do it the other way around? Can I route the traffic from one of the pods to the web server that runs locally on my machine?</p>
| Pavel Ryvintsev | <p>Yes, by default you can route traffic from your pod to the local machine.
Make sure you use the local machine's IP instead of <code>localhost</code> when connecting to the web server running locally on your machine.</p>
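<p>Since the cluster runs on minikube, there is also a convenience name for the host: recent minikube versions resolve <code>host.minikube.internal</code> to your machine from inside pods, so from a pod you can do something like:</p>
<pre><code># assuming a web server listening on port 8080 on your machine
curl http://host.minikube.internal:8080/
</code></pre>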
| Srikrishna B H |
<p>I have two Istio clusters using a replicated control plane with Kiali running. In each cluster I have two applications which interact, but I don't see the traffic between them in the Kiali dashboard. Instead, the traffic shows as going through the Passthrough Cluster. </p>
<p>The application's interact using the kubernetes service name, and they are interacting correctly, it's just not showing correctly in Kiali.</p>
<p>Any thoughts as to what might be the problem? Or is this an expected behaviour (I'm still new to Istio).</p>
| the_witch_king_of_angmar | <p>As far as I know this is an expected behaviour when you use Passthrough option. Check below istiobyexample link, which shows exactly how it works.</p>
<hr />
<blockquote>
<p>When <strong>ALLOW_ANY</strong> is enabled, Istio uses an Envoy cluster called PassthroughCluster, enforced by sidecar proxy, to monitor the egress traffic.</p>
</blockquote>
<hr />
<p>Take a look at kiali <a href="https://kiali.io/faq/graph/#passthrough-traffic" rel="noreferrer">documentation</a> about that</p>
<h2>Why do I see traffic to PassthroughCluster?</h2>
<blockquote>
<p>Requests going to PassthroughCluster (or BlackHoleCluster) are requests that did not get routed to a defined service or service entry, and instead end up at one of these built-in Istio request handlers. See Monitoring Blocked and Passthrough External Service Traffic for more information.</p>
<p>Unexpected routing to these nodes does not indicate a Kiali problem, you’re seeing the actual routing being performed by Istio. In general it is due to a misconfiguration and/or missing Istio sidecar. Less often but possible is an actual issue with the mesh, like a sync issue or evicted pod.</p>
<p>Use Kiali’s Workloads list view to ensure sidecars are not missing. Use Kiali’s Istio Config list view to look for any config validation errors.</p>
</blockquote>
<hr />
<p>And an <a href="https://istiobyexample.dev/monitoring-egress-traffic/" rel="noreferrer">example</a> on <a href="http://istiobyexample.dev" rel="noreferrer">istiobyexample.dev</a>.</p>
<h2>Option 1 - Passthrough</h2>
<blockquote>
<p>To start, let's use an Istio installation with the default ALLOW_ANY option for egress. This means that idgen's requests to httpbin are allowed with no additional configuration. When ALLOW_ANY is enabled, Istio uses an Envoy cluster called PassthroughCluster, enforced by idgen's sidecar proxy, to monitor the egress traffic.</p>
<p>An Envoy cluster is a backend (or “upstream”) set of endpoints, representing an external service. The Istio sidecar Envoy proxy applies filters to intercepted requests from an application container. Based on these filters, Envoy sends traffic to a specific route. And a route specifies a cluster to send traffic to.</p>
<p>The Istio Passthrough cluster is set up so that the backend is the <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery#original-destination" rel="noreferrer">original request destination</a>. So when ALLOW_ANY is enabled for egress traffic, Envoy will simply “pass through” idgen's request to httpbin.</p>
<p>With this configuration, if we send recipe ID requests through the IngressGateway, idgen can successfully call httpbin. This traffic appears as PassthroughCluster traffic in the Kiali service graph - we'll need to add a ServiceEntry in order for httpbin to get its own service-level telemetry. (We'll do this in a moment.)</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/BjrrN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BjrrN.png" alt="enter image description here" /></a></p>
<blockquote>
<p>But if we drill down in Prometheus, and find the istio_total_requests metric, we can see that PassthroughCluster traffic is going to a destinationservice called httpbin.org.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/q8e1B.png" rel="noreferrer"><img src="https://i.stack.imgur.com/q8e1B.png" alt="enter image description here" /></a></p>
<hr />
<p>Hope you find this useful.</p>
| Jakub |
<p>I tried running a simple DaemonSet on a kube cluster - the idea was that other kube pods would connect to that container's docker daemon (dockerd) and execute commands on it. (The other pods are Jenkins slaves and would just have the env DOCKER_HOST point to 'tcp://localhost:2375'.) In short the config looks like this:</p>
<p>dind.yaml</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: dind
spec:
selector:
matchLabels:
name: dind
template:
metadata:
labels:
name: dind
spec:
# tolerations:
# - key: node-role.kubernetes.io/master
# effect: NoSchedule
containers:
- name: dind
image: docker:18.05-dind
resources:
limits:
memory: 2000Mi
requests:
cpu: 100m
memory: 500Mi
volumeMounts:
- name: dind-storage
mountPath: /var/lib/docker
volumes:
- name: dind-storage
emptyDir: {}
</code></pre>
<p>Error message when running</p>
<pre><code>mount: mounting none on /sys/kernel/security failed: Permission denied
Could not mount /sys/kernel/security.
AppArmor detection and --privileged mode might break.
mount: mounting none on /tmp failed: Permission denied
</code></pre>
<p>I took the idea from medium post that didn't describe it fully: <a href="https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25" rel="nofollow noreferrer">https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25</a> describing docker of docker, docker in docker and Kaniko</p>
| CptDolphin | <p>found the solution</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dind
spec:
containers:
- name: jenkins-slave
image: gcr.io/<my-project>/myimg # it has docker installed on it
command: ['docker', 'run', '-p', '80:80', 'httpd:latest']
resources:
requests:
cpu: 10m
memory: 256Mi
env:
- name: DOCKER_HOST
value: tcp://localhost:2375
- name: dind-daemon
image: docker:18.05-dind
resources:
requests:
cpu: 20m
memory: 512Mi
securityContext:
privileged: true
volumeMounts:
- name: docker-graph-storage
mountPath: /var/lib/docker
volumes:
- name: docker-graph-storage
emptyDir: {}
</code></pre>
| CptDolphin |
<p>I have a project that contains the deployment descriptor files for Kubernetes. This project has a folder structure that looks like this:</p>
<pre><code>> project-deployment
> - base
> - dev
> - production
</code></pre>
<p>Inside the base folder, I have the kubernetes deployment files (deployment, service, namespaces etc.,). In the dev and production folder, I have kustomization.yaml that composes everything from the base folder. So far so good. I now want to introduce helm into this so that I can manage my releases much better. My question now is how do I go about structuring my folder structure?</p>
<p>Should I move everything (base, dev and production) folder into templates and just have one Charts.yaml and values.yaml? Any thoughts?</p>
| joesan | <p>The configuration values that you push into your charts should be separate between environments. Build simple extendable charts that can have overrides per environment.
For example, a good workflow would have different value files per environment with specific differences in configuration:</p>
<pre><code>~/myapp
└── config
├── production.yml
└── staging.yml
</code></pre>
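<p>With that layout, each environment is just the same chart installed with the matching values file (release and namespace names below are placeholders):</p>
<pre><code>helm upgrade --install myapp-staging ./myapp -f config/staging.yml -n staging
helm upgrade --install myapp-prod    ./myapp -f config/production.yml -n production
</code></pre>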
<p>There are tools that can help you manage that particular use case. For example, consider using <a href="https://github.com/nuvo/orca" rel="nofollow noreferrer">orca</a>:</p>
<blockquote>
<p>What Orca does best is manage environments. An Environment is a
Kubernetes namespace with a set of Helm charts installed on it. There
are a few use cases you will probably find useful right off the bat.</p>
</blockquote>
<p>There are also some <a href="https://github.com/nuvo/orca/tree/master/docs/examples" rel="nofollow noreferrer">examples</a> provided with it.</p>
<p>I also recommend going through the official <a href="https://helm.sh/docs/chart_best_practices/" rel="nofollow noreferrer">The Chart Best Practices Guide</a>.</p>
| Wytrzymały Wiktor |
<p>I have created statefulset of mysql using below yaml with this command:</p>
<ul>
<li><code>kubectl apply -f mysql-statefulset.yaml</code></li>
</ul>
<p>Yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql-service
labels:
app: mysql
spec:
ports:
- port: 3306
name: db
clusterIP: None
selector:
app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql-sts
spec:
selector:
matchLabels:
app: mysql # has to match .spec.template.metadata.labels
serviceName: mysql-service
replicas: 3 # by default is 1
template:
metadata:
labels:
app: mysql # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mysql
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "okaoka"
ports:
- containerPort: 3306
name: db
volumeMounts:
- name: db-volume
mountPath: /var/lib/mysql
volumeClaimTemplates:
- metadata:
name: db-volume
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: standard
resources:
requests:
storage: 1Gi
</code></pre>
<p>After that 3 pods and for each of them a pvc and pv was created. I successfully entered one of the pod using:</p>
<ul>
<li><code>kubectl exec -it mysql-sts-0 sh</code></li>
</ul>
<p>and then login in mysql using:</p>
<ul>
<li><code>mysql -u root -p</code></li>
</ul>
<p>after giving this command a:</p>
<ul>
<li><code>Enter password:</code></li>
</ul>
<p>came and I entered the password:</p>
<ul>
<li><code>okaoka</code></li>
</ul>
<p>and successfully could login. After that I exited from the pod.</p>
<p>Then I deleted the statefulset (as expected the pvc and pv were there even after the deletion of statefulset). After that I have applied a new yaml slightly changing the previous one, I changed the password in yaml, gave new password:</p>
<ul>
<li><code>okaoka1234</code></li>
</ul>
<p>and rest of the yaml were same as before. The yaml is given below, now after applying this yaml (only changed the password) by:</p>
<ul>
<li><code>kubectl apply -f mysql-statefulset.yaml</code></li>
</ul>
<p>it successfully created statefulset and 3 new pods (who binded with previous pvc and pv, as expected).</p>
<p>Changed Yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql-service
labels:
app: mysql
spec:
ports:
- port: 3306
name: db
clusterIP: None
selector:
app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql-sts
spec:
selector:
matchLabels:
app: mysql # has to match .spec.template.metadata.labels
serviceName: mysql-service
replicas: 3 # by default is 1
template:
metadata:
labels:
app: mysql # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mysql
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "okaoka1234" # here is the change
ports:
- containerPort: 3306
name: db
volumeMounts:
- name: db-volume
mountPath: /var/lib/mysql
volumeClaimTemplates:
- metadata:
name: db-volume
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: standard
resources:
requests:
storage: 1Gi
</code></pre>
<p>Now the problem is when I again entered a pod using:</p>
<ul>
<li><code>kubectl exec -it mysql-sts-0 sh</code></li>
</ul>
<p>then used:</p>
<ul>
<li><code>mysql -u root -p</code></li>
</ul>
<p>and again the:</p>
<ul>
<li><code>Enter password:</code></li>
</ul>
<p>came and this time when I gave my new password:</p>
<ul>
<li><code>okaoka1234</code></li>
</ul>
<p>it gave access denied.</p>
<p>When I printed the env (inside the pod) using:</p>
<ul>
<li><code>printenv</code></li>
</ul>
<p>then I could see that:</p>
<ul>
<li><code>MYSQL_ROOT_PASSWORD=okaoka1234</code></li>
</ul>
<p>that means the environment variable changed and took the new password, but I could not log in with the new password.</p>
<p>The interesting thing is that I could log in by giving my previous password <code>okaoka</code>. I don't know why it is taking the previous password in this scenario and not the new one, which is even in the env (inside the pod). Can anybody explain the logic behind this?</p>
| Sahadat Hossain | <p>Most probably, the image that you are using in your StatefulSet uses the environment variable to initialize the password when it creates the database structure on the persisted storage (on its pvc) for the first time.</p>
<p>Given the fact that the pvc and pv are the same as in the previous installation, that step is skipped and the database password is not updated, since the database structure is already present in the existing pvc.</p>
<p>After all, the root user is just a user of the database and its password is stored in the database. Unless the image applies some particular functionality at startup with its entrypoint, it makes sense that the password remains the same.</p>
<p>What image are you using? The docker hub mysql image or a custom one?</p>
<p><strong>Update</strong></p>
<p>Given the fact that you are using the mysql image on docker hub, let me quote a piece of the entrypoint (<a href="https://github.com/docker-library/mysql/blob/master/template/docker-entrypoint.sh" rel="nofollow noreferrer">https://github.com/docker-library/mysql/blob/master/template/docker-entrypoint.sh</a>)</p>
<pre><code> # there's no database, so it needs to be initialized
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
docker_verify_minimum_env
# check dir permissions to reduce likelihood of half-initialized database
ls /docker-entrypoint-initdb.d/ > /dev/null
docker_init_database_dir "$@"
mysql_note "Starting temporary server"
docker_temp_server_start "$@"
mysql_note "Temporary server started."
docker_setup_db
docker_process_init_files /docker-entrypoint-initdb.d/*
mysql_expire_root_user
mysql_note "Stopping temporary server"
docker_temp_server_stop
mysql_note "Temporary server stopped"
echo
mysql_note "MySQL init process done. Ready for start up."
echo
fi
</code></pre>
<p>When the container starts, it makes some checks and if no database is found (and the database is expected to be on the path where the persisted pvc is mounted) a series of operations are performed, creating it, creating default users and so on.</p>
<p>Only in this case is the root user created with the password specified in the environment (inside the function docker_setup_db).</p>
<p>Should a database already be available in the persisted path, which is your case since you let it mount the previous pvc, there's no initialization of the database; it already exists.</p>
<p>Everything in Kubernetes is working as expected, this is just the behaviour of the database and of the mysql image. The environment variable is used only for initialization, from what I can see in the entrypoint.</p>
<p>It is left to the root user to manually change the password, if desired, by using a mysql client.</p>
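<p>For completeness, one way to apply the new password to an already-initialized database is to change it yourself from a mysql client (a hedged sketch using the password from the question; syntax may vary slightly between MySQL versions):</p>
<pre><code>-- run inside `mysql -u root -p` after logging in with the old password
ALTER USER 'root'@'%' IDENTIFIED BY 'okaoka1234';
ALTER USER 'root'@'localhost' IDENTIFIED BY 'okaoka1234';
FLUSH PRIVILEGES;
</code></pre>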
| AndD |
<p>As a first step in using ejabberd cluster in GCP, I tried to change the node name using the environment variable "ERLANG_NODE_ARG=ejabberd@main" as mentioned in the <a href="https://github.com/processone/docker-ejabberd/blob/master/ecs/README.md#clustering-example" rel="nofollow noreferrer">readme file</a>.</p>
<p>But I am not able to access the ejabberd server via the service. I tried to check the status using ejabberdctl: the start command returns a "node already running" message while the status command returns a "node down" message.
<a href="https://i.stack.imgur.com/MaED6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MaED6.png" alt="enter image description here" /></a></p>
<p>I want to create an ejabberd cluster. Below is my deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: main
spec:
replicas: 1
selector:
matchLabels:
app: main
template:
metadata:
labels:
app: main
spec:
containers:
- name: main
image: ejabberd/ecs
env:
- name: ERLANG_NODE_ARG
value: ejabberd@main
# - name: ERLANG_COOKIE
# value: dummycookie123
# - name: CTL_ON_CREATE
# value: "register admin localhost asd"
ports:
- containerPort: 5222
- containerPort: 5269
- containerPort: 5280
- containerPort: 5443
</code></pre>
<p>I am trying to access the above deployment by defining the service. I am able to access the service if I remove the environment added to change the nodename, but it fails when I include the variable in the yaml file.</p>
<p>I checked the ejabberd.log and error.log files inside the container using Cloud Shell; there are no entries in error.log and the contents of ejabberd.log match the log of ejabberd tested on my local machine. I couldn't figure out why this fails in GCP. Can you help me identify the cause of this issue and also suggest guidelines for deploying ejabberd on a GCP cluster?</p>
| Navin Vinayagam | <p>The ejabberd node naming format seems to be the issue. I followed the suggestions provided in the <a href="https://github.com/processone/docker-ejabberd/issues/101#issuecomment-1623854745" rel="nofollow noreferrer">GitHub issue</a> and used the name in the format <em><strong>name@(host_name/machine_name/container_name)</strong></em> and it worked. I am able to access the ejabberd service with the provided node name.</p>
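<p>For reference, one hedged way to follow that format in the Deployment is to derive the host part from the pod name via the Downward API. This is only a sketch — whether the ejabberd startup script accepts this exact value is an assumption, and since a Deployment's pod name changes on every restart, a StatefulSet with stable names is usually a better fit for clustering:</p>
<pre><code>env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: ERLANG_NODE_ARG
    value: ejabberd@$(POD_NAME)   # kubelet expands this to e.g. ejabberd@main-5d4f...
</code></pre>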
| Navin Vinayagam |
<p>The microk8s node does not want to start. kube-system pods are stuck in the Pending state. <code>kubectl describe nodes</code> shows a Warning <code>InvalidDiskCapacity</code>. My server has more than enough resources. </p>
<p>PODS:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
container-registry registry-7cf58dcdcc-hf8gx 0/1 Pending 0 5d
kube-system coredns-588fd544bf-4m6mj 0/1 Pending 0 5d
kube-system dashboard-metrics-scraper-db65b9c6f-gj5x4 0/1 Pending 0 5d
kube-system heapster-v1.5.2-58fdbb6f4d-q6plc 0/4 Pending 0 5d
kube-system hostpath-provisioner-75fdc8fccd-6mdvc 0/1 Pending 0 5d
kube-system kubernetes-dashboard-67765b55f5-8xsh5 0/1 Pending 0 5d
kube-system monitoring-influxdb-grafana-v4-6dc675bf8c-82fg4 0/2 Pending 0 5d
</code></pre>
<p>Describe node:</p>
<pre><code>Normal Starting 72s kubelet, h2860142.stratoserver.net Starting kubelet.
Warning InvalidDiskCapacity 71s kubelet, h2860142.stratoserver.net invalid capacity 0 on image filesystem
Normal NodeHasSufficientPID 70s kubelet, h2860142.stratoserver.net Node h2860142.stratoserver.net status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 70s kubelet, h2860142.stratoserver.net Node h2860142.stratoserver.net status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 70s kubelet, h2860142.stratoserver.net Node h2860142.stratoserver.net status is now: NodeHasSufficientMemory
Warning InvalidDiskCapacity 54s kubelet, h2860142.stratoserver.net invalid capacity 0 on image filesystem
Normal NodeHasSufficientMemory 54s kubelet, h2860142.stratoserver.net Node h2860142.stratoserver.net status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 54s kubelet, h2860142.stratoserver.net Node h2860142.stratoserver.net status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 54s kubelet, h2860142.stratoserver.net Node h2860142.stratoserver.net status is now: NodeHasSufficientPID
</code></pre>
<p>How can I solve this problem?</p>
<p>Thank you :)</p>
| volkanb | <p>I found the solution for this issue. Well, it's not really a solution, but the answer is: I am hosting a <strong>VPS server</strong> on <strong>STRATO</strong> as my hosting provider, and in that case virtualisation is <strong>not</strong> possible. Unless I upgrade to a <strong>dedicated server</strong>, I will never be able to run Kubernetes or MicroK8s.</p>
| volkanb |
<p>I am planning to deploy HA database cluster on my kubernetes cluster. I am new to database and I am confused by the various database terms. I have decided on MariaDB and I have found two charts, <a href="https://artifacthub.io/packages/helm/bitnami/mariadb" rel="nofollow noreferrer">MariaDB</a> and <a href="https://artifacthub.io/packages/helm/bitnami/mariadb-galera" rel="nofollow noreferrer">MariaDB Galera Cluster</a>.</p>
<p>I understand that both can achieve the same goal, but what are the main differences between the two? Under what scenario I should use either or?</p>
<p>Thanks in advance!</p>
| WeiTang Lau | <p>I'm not an expert, so take my explanation with caution (and double-check it).</p>
<p>The main difference between the MariaDB's Chart and the MariaDB Galera Cluster's Chart is that the first one will deploy the standard master-slave (or primary-secondary) database, while the second one is a resilient master-master (or primary-primary) database cluster.</p>
<p>What does it means in more detail is the following:</p>
<p><strong>MariaDB Chart</strong> will deploy a Master <strong>StatefulSet</strong> and a Slave <strong>StatefulSet</strong> which will spawn (with default values) one master Pod and 2 slave Pods. Once your database is up and running, you can connect to the master and write or read data, which is then replicated on the slaves, so that you have safe copies of your data available.</p>
<p>The copies can be used to read data, but only the master Pod can write new data in the database. Should the Pod crash.. or the Kubernetes cluster node where the Pod is running malfunction, you will not be able to write new data until the master's Pod is once more up and running (which may require manual intervention).. or if you perform a failover, promoting one of the other Pods to be the new temporary master (which also requires a manual intervention or some setup with proxies or virtual ips and so on).</p>
<p><strong>Galera Cluster Chart</strong> instead, will deploy something more resilient. With default values, it will create a single <strong>StatefulSet</strong> with 3 Pods.. and each one of these Pods will be able to either read and write data, acting virtually as a master.</p>
<p>This means that if one of the Pods stop working for whatever reason, the other 2 will continue serving the database as if nothing happened, making the whole thing way more resilient. When the Pod (which stopped working) will come back up and running, it will obtain the new / different data from the other Pods, getting in sync.</p>
<p>In exchange for the resilience of the whole infrastructure (it would be too easy if the Galera Cluster solution offered extreme resilience with no drawbacks), there are some cons in a multi-master setup, the most common being some added latency in operations, required to keep everything in sync and consistent, and added complexity, which often brings headaches.</p>
<p>There are several other limits with Galera Cluster, like explicit LOCKS of tables not working or that all tables must declare a primary key. You can find the full list here (<a href="https://mariadb.com/kb/en/mariadb-galera-cluster-known-limitations/" rel="nofollow noreferrer">https://mariadb.com/kb/en/mariadb-galera-cluster-known-limitations/</a>)</p>
<p>Deciding between the two solutions mostly depends on the following question:</p>
<ul>
<li>Do you have the necessity that, should one of your Kubernetes cluster node fail, the database keeps working (and being usable by your apps) like nothing happened, even if one of its Pods was running on that particular node?</li>
</ul>
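<p>Whichever you pick, installation follows the same Helm workflow — a minimal sketch (the release names are placeholders, and chart options can differ between chart versions, so check the values of the chart you install):</p>
<pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami

# standard primary-secondary setup
helm install my-mariadb bitnami/mariadb

# or the multi-primary Galera cluster
helm install my-galera bitnami/mariadb-galera
</code></pre>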
| AndD |
<p>I'm looking the way to define <strong>externalIP</strong> range during Openshift cluster installation ( via declarations in install-config.yaml ).</p>
<p>Openshift docs for 4.3 and later version ( <a href="https://docs.openshift.com/container-platform/4.3/installing/installing_bare_metal/installing-bare-metal.html#installation-bare-metal-config-yaml_installing-bare-metal" rel="nofollow noreferrer">linky</a> ) did not provide any fields for that.</p>
<p>Older definition ( externalIPNetworkCIDR ) from 3.5 ( <a href="https://docs.openshift.com/container-platform/3.5/admin_guide/tcp_ingress_external_ports.html#service-externalip" rel="nofollow noreferrer">linky</a> ) doesn't seems to work ether.</p>
| Andy | <p>actually you can:</p>
<p>first create the openshift install manifests</p>
<pre><code>./openshift-install create manifests --dir=<installation_directory>
</code></pre>
<p>check the output:</p>
<pre><code>ls <installation_directory>/manifests/cluster-network-*
cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml
</code></pre>
<p>edit this file cluster-network-03-config.yml:</p>
<pre><code>apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
externalIP:
autoAssignCIDRs:
- 10.0.0.0/16
policy:
allowedCIDRs:
- 10.0.0.0/16
serviceNetwork:
- 172.30.0.0/16
defaultNetwork:
type: OpenShiftSDN
openshiftSDNConfig:
mode: NetworkPolicy
mtu: 1450
vxlanPort: 4789
</code></pre>
<p>proceed with the install route:</p>
<pre><code>./openshift-install create ignition-configs --dir=<installation_directory>
</code></pre>
<p>Note that this is the way you can configure basically everything at cluster install :)</p>
<p>In install-config.yaml directly it's not possible; I raised a GitHub issue a few months ago but it gained no views:
<a href="https://github.com/openshift/installer/issues/4275" rel="nofollow noreferrer">https://github.com/openshift/installer/issues/4275</a></p>
<p>Most of the knowledge comes from here:
<a href="https://docs.openshift.com/container-platform/4.6/installing/installing_bare_metal/installing-bare-metal-network-customizations.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.6/installing/installing_bare_metal/installing-bare-metal-network-customizations.html</a></p>
| Elytscha Smith |
<p>I'm trying to figure out the best way to integrate Istio into my app, which consists of a React frontend (served by Nginx) and a Django Rest Framework API. I was able to get it to work using the following nginx config and istio-specific kubernetes files:</p>
<pre><code>server {
listen 80;
root /app/build;
location / {
try_files $uri $uri/ /index.html;
}
}
</code></pre>
<pre><code># Source: myapp/gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: myapp-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
- port:
number: 443
name: https
protocol: HTTP
hosts:
- '*'
---
# Source: myapp/virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myapp
spec:
hosts:
- '*'
gateways:
- myapp-gateway
http:
- match:
- port: 80
route:
- destination:
host: frontend-svc
port:
number: 80
- match:
- port: 443
route:
- destination:
host: backend-svc
port:
number: 8000
</code></pre>
<p>And the frontend can hit the backend at <code>localhost:443</code>. Note, I'm serving the backend on port 443 (instead of 8000) because of <a href="https://github.com/istio/istio/issues/7242" rel="nofollow noreferrer">some issue regarding the istio gateway not working with any port other than 80 and 443</a>.</p>
<p>Regardless, this approach exposes BOTH the frontend and backend outside of the cluster, which feels like overkill. Is there anyway to set this up so only the frontend is exposed explicitly and I can proxy the backend through the frontend? Either using istio or nginx?</p>
<p>I may be way off here, but it sounds like this may be tricky because the client is making the call to the backend. I'd have to figure out a way to make the call inside the cluster and return the result to the client?</p>
| Johnny Metz | <p>As far as I understand it should work like this.</p>
<pre><code>user -> istio ingressgateway -> istio virtual service -> frontend service -> nginx -> backend service
</code></pre>
<p>Istio virtual service should look like this, so only the frontend is exposed and then you configure your nginx to proxy the backend through the frontend.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myapp
spec:
hosts:
- '*'
gateways:
- myapp-gateway
http:
- route:
- destination:
host: frontend-svc
port:
number: 80
</code></pre>
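<p>On the nginx side, a hedged sketch of the proxying part could extend the config from the question — the <code>/api/</code> prefix is an assumption, while <code>backend-svc:8000</code> comes from the original setup:</p>
<pre><code>server {
    listen 80;
    root /app/build;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # forward API calls to the in-cluster backend service
    location /api/ {
        proxy_pass http://backend-svc:8000/;
    }
}
</code></pre>
<p>With this in place, the React app can call relative URLs like <code>/api/...</code> and only the frontend needs to be exposed through the gateway.</p>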
<hr>
<p>To start, I would advise taking a look at the kubernetes documentation about <a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="nofollow noreferrer">Connect a Front End to a Back End Using a Service</a>, and more specifically at the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend" rel="nofollow noreferrer">nginx configuration</a> which connects the frontend with the backend service.</p>
<hr>
<p>And some django + react tutorials which might help:</p>
<ul>
<li><a href="https://medium.com/@gazzaazhari/django-backend-react-frontend-basic-tutorial-6249af7964e4" rel="nofollow noreferrer">https://medium.com/@gazzaazhari/django-backend-react-frontend-basic-tutorial-6249af7964e4</a></li>
<li><a href="https://blog.miguelgrinberg.com/post/how-to-create-a-react--flask-project" rel="nofollow noreferrer">https://blog.miguelgrinberg.com/post/how-to-create-a-react--flask-project</a></li>
<li><a href="https://felipelinsmachado.com/connecting-django-reactjs-via-nginx-using-docker-containers/" rel="nofollow noreferrer">https://felipelinsmachado.com/connecting-django-reactjs-via-nginx-using-docker-containers/</a></li>
<li><a href="https://github.com/felipelm/django-nginx-reactjs-docker" rel="nofollow noreferrer">https://github.com/felipelm/django-nginx-reactjs-docker</a></li>
</ul>
| Jakub |
<p>I am trying to write a cron job which hits a rest endpoint of the application whose image it is pulling.
Below is the sample code:</p>
<pre><code>---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ .Chart.Name }}-cronjob
labels:
app: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
release: {{ .Release.Name }}
spec:
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 2
failedJobsHistoryLimit: 2
startingDeadlineSeconds: 1800
jobTemplate:
spec:
template:
metadata:
name: {{ .Chart.Name }}-cronjob
labels:
app: {{ .Chart.Name }}
spec:
restartPolicy: OnFailure
containers:
- name: demo
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
command: ["/bin/sh", "-c", "curl http://localhost:8080/hello"]
readinessProbe:
httpGet:
path: "/healthcheck"
port: 8081
initialDelaySeconds: 300
periodSeconds: 60
timeoutSeconds: 30
failureThreshold: 3
livenessProbe:
httpGet:
path: "/healthcheck"
port: 8081
initialDelaySeconds: 300
periodSeconds: 60
timeoutSeconds: 30
failureThreshold: 3
resources:
requests:
cpu: 200m
memory: 2Gi
limits:
cpu: 1
memory: 6Gi
schedule: "*/5 * * * *"
</code></pre>
<p>But I keep running into <code>curl: (7) Failed to connect to localhost port 8080: Connection refused</code>.
I can see from the events that it creates the container and immediately throws: Back-off restarting failed container.
I already have pods of the demo app running and it works fine; it is only when I try to point to this existing app and hit a rest endpoint that I start running into connection refused errors.</p>
<p>Exact output when seeing the logs:</p>
<pre><code> % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8080: Connection refused
</code></pre>
<p>Event Logs:</p>
<pre><code>Container image "wayfair/demo:728ac13-as_test_cron_job" already present on machine
9m49s Normal Created pod/demo-cronjob-1619108100-ndrnx Created container demo
6m17s Warning BackOff pod/demo-cronjob-1619108100-ndrnx Back-off restarting failed container
5m38s Normal SuccessfulDelete job/demo-cronjob-1619108100 Deleted pod: demo-cronjob-1619108100-ndrnx
5m38s Warning BackoffLimitExceeded job/demo-cronjob-1619108100 Job has reached the specified backoff limit
</code></pre>
<p>Being new to K8s, any pointers are helpful!</p>
| anzie001 | <p>You are trying to connect to <code>localhost:8080</code> with your curl which doesn't make sense from what I understand of your CronJob definition.</p>
<p>From the docs (at <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod</a> )</p>
<blockquote>
<p>The <strong>command</strong> and <strong>arguments</strong> that you define in the configuration file
<strong>override</strong> the default command and arguments provided by the container
image. If you define args, but do not define a command, the default
command is used with your new arguments.</p>
<p>Note: The command field corresponds to entrypoint in some container
runtimes. Refer to the Notes below.</p>
</blockquote>
<p>If you define a command for the image, even if the image would start a rest application on port 8080 on localhost with its default entrypoint (or command, depending on the container type you are using), your command overrides the entrypoint and no application is started.</p>
<p>If you need to both start the application and then perform other operations, like curls and so on, I suggest using a <code>.sh</code> script or something like that, depending on what the Job's objective is.</p>
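<p>A minimal, hypothetical sketch of that idea — the entrypoint path and the wait loop are assumptions about the image, not its real values:</p>
<pre><code>#!/bin/sh
# start the application using the image's original entrypoint (path is an assumption)
/docker-entrypoint.sh &

# wait until the app answers on localhost, then hit the endpoint
until curl -sf http://localhost:8080/hello; do
  sleep 5
done
</code></pre>
<p>You would then point the container's <code>command</code> at this script instead of at <code>curl</code> directly, so the application is actually started before the request is made.</p>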
| AndD |
<p>I have an azuredevops build job to get the log of a deployment pod.</p>
<p>command: <code>kubectl logs deployment/myapp</code></p>
<p>I am getting the output in the summary page of azure devops pipeline, but the same I want to send a team with a log as an attachment. I am not getting any option in azure devops for that</p>
| vyshakh | <p>Basically, your k8s logs (pods) will be gone after the pods have been terminated (although you can keep them for a little while). For debugging or any other purpose, you need <code>centralized logging</code> for your k8s logs (use tools such as filebeat, fluentd or fluent-bit to forward your k8s logs to elasticsearch).</p>
<p>For example, some software (tools) for <code>centralized logging</code>: Elasticsearch, Graylog, ...</p>
<p><a href="https://www.elastic.co/fr/what-is/elk-stack" rel="nofollow noreferrer">https://www.elastic.co/fr/what-is/elk-stack</a></p>
<p>And then you can save, export, and analyze your logs; you can do anything you want with your stored k8s logs.</p>
<p>Hope this helps!</p>
<p>Edit: I use GCP as my cloud solution and in GCP, by default, fluentd is used to forward your k8s logs into <code>Logging</code>. <code>Logging</code> has an <code>Export</code> feature; I think you can look for something similar to <code>Logging</code> in your cloud solution, Azure.</p>
| Tho Quach |
<p>I have installed Istio as described <a href="https://istio.io/docs/setup/getting-started/" rel="nofollow noreferrer">here</a>.</p>
<p>I used <code>istioctl manifest apply --set profile=demo</code> for this purpose. And then installed <code>bookinfo</code> application.</p>
<p>And set kiali to use <code>NodePort</code> using <code>kubectl -n istio-system edit svc kiali</code>.</p>
<p><code>kubectl -n istio-system get svc kiali</code> shows it is of type <code>NodePort</code> with ports <code>20001:32173/TCP</code></p>
<p>When I try to access the kiali dashboard using <code>192.168.123.456:32173/kiali</code>, with the default username and password <code>admin</code>, I get the following warning.</p>
<blockquote>
<p>Your session has expired or was terminated in another window</p>
</blockquote>
<p>Why is it happening? I haven't change any default settings.</p>
<p>Kiali pod is running.</p>
<p>As <a href="https://stackoverflow.com/users/11977760/jt97">jt97</a> requested <code>curl -v externalIP:port/kiali</code></p>
<pre><code>* Trying 192.168.123.456...
* TCP_NODELAY set
* Connected to 192.168.123.456 (192.168.123.456) port 15029 (#0)
> GET /kiali/ HTTP/1.1
> Host: 192.168.123.456:15029
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< accept-ranges: bytes
< content-length: 2330
< content-type: text/html; charset=utf-8
< last-modified: Mon, 04 May 2020 14:46:17 GMT
< vary: Accept-Encoding
< date: Mon, 04 May 2020 14:59:40 GMT
< x-envoy-upstream-service-time: 0
< server: istio-envoy
<
<!doctype html><html lang="en"><head><meta charset="utf-8"/><meta name="viewport" content="width=device-width,initial-scale=1,shrink-to-fit=no"/><meta name="theme-color" content="#000000"/><base href="/kiali/"/><script type="text/javascript" src="./env.js"></script><link rel="manifest" href="./manifest.json"/><link rel="shortcut icon" href="./kiali_icon_lightbkg_16px.png"/><title>Kiali Console</title><link href="./static/css/2.51abb30a.chunk.css" rel="stylesheet"><link href="./static/css/main.aebbfcdd.chunk.css" rel="stylesheet"></head><body class="pf-m-redhat-font"><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div><script>!function(a){function e(e){for(var r,t,n=e[0],o=e[1],i=e[2],u=0,l=[];u<n.length;u++)t=n[u],Object.prototype.hasOwnProperty.call(p,t)&&p[t]&&l.push(p[t][0]),p[t]=0;for(r in o)Object.prototype.hasOwnProperty.call(o,r)&&(a[r]=o[r]);for(s&&s(e);l.length;)l.shift()();return c.push.apply(c,i||[]),f()}function f(){for(var e,r=0;r<c.length;r++){for(var t=c[r],n=!0,o=1;o<t.length;o++){var i=t[o];0!==p[i]&&(n=!1)}n&&(c.splice(r--,1),e=u(u.s=t[0]))}return e}var t={},p={1:0},c=[];function u(e){if(t[e])return t[e].exports;var r=t[e]={i:e,l:!1,exports:{}};return a[e].call(r.exports,r,r.exports,u),r.l=!0,r.exports}u.m=a,u.c=t,u.d=function(e,r,t){u.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},u.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},u.t=function(r,e){if(1&e&&(r=u(r)),8&e)return r;if(4&e&&"object"==typeof r&&r&&r.__esModule)return r;var t=Object.create(null);if(u.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:r}),2&e&&"string"!=typeof r)for(var n in r)u.d(t,n,function(e){return r[e]}.bind(null,n));return t},u.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return u.d(r,"a",r),r},u.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},u.p="./";var r=this["webpackJsonp@* Connection #0 to host 192.168.123.456 left intact
kiali/kiali-ui"]=this["webpackJsonp@kiali/kiali-ui"]||[],n=r.push.bind(r);r.push=e,r=r.slice();for(var o=0;o<r.length;o++)e(r[o]);var s=n;f()}([])</script><script src="./static/js/2.f84a82a8.chunk.js"></script><script src="./static/js/main.339a2916.chunk.js"></script></body></html>
</code></pre>
<p>Kiali log : <code>/var/log/containers/kiali-869c6894c5-4jp2v_istio-system_kiali-1xxx.log</code></p>
<pre><code>{"log":"I0505 04:49:19.151849 1 kiali.go:66] Kiali: Version: v1.15.2, Commit: 718aedca76e612e2f95498d022fab1e116613792\n","stream":"stderr","time":"2020-05-05T04:49:19.152333612Z"}
{"log":"I0505 04:49:19.153038 1 kiali.go:205] Using authentication strategy [login]\n","stream":"stderr","time":"2020-05-05T04:49:19.153122786Z"}
{"log":"I0505 04:49:19.158187 1 kiali.go:87] Kiali: Console version: 1.15.1\n","stream":"stderr","time":"2020-05-05T04:49:19.158268318Z"}
{"log":"I0505 04:49:19.158210 1 kiali.go:286] Updating base URL in index.html with [/kiali]\n","stream":"stderr","time":"2020-05-05T04:49:19.158284789Z"}
{"log":"I0505 04:49:19.158840 1 kiali.go:267] Generating env.js from config\n","stream":"stderr","time":"2020-05-05T04:49:19.158915814Z"}
{"log":"I0505 04:49:19.168786 1 server.go:57] Server endpoint will start at [:20001/kiali]\n","stream":"stderr","time":"2020-05-05T04:49:19.168870138Z"}
{"log":"I0505 04:49:19.168813 1 server.go:58] Server endpoint will serve static content from [/opt/kiali/console]\n","stream":"stderr","time":"2020-05-05T04:49:19.16888486Z"}
{"log":"I0505 04:49:19.179424 1 metrics_server.go:18] Starting Metrics Server on [:9090]\n","stream":"stderr","time":"2020-05-05T04:49:19.179497168Z"}
{"log":"I0505 04:49:19.179752 1 kiali.go:137] Secret is now available.\n","stream":"stderr","time":"2020-05-05T04:49:19.17998388Z"}
</code></pre>
<p>I found another error, which is not visible at once. When I enter username and password, it gives :</p>
<blockquote>
<p>You are logged in, but there was a problem when fetching some required server configurations, try refreshing the page.</p>
</blockquote>
| Sachith Muhandiram | <p>As mentioned in istio docs <a href="https://istio.io/docs/tasks/observability/gateways/#option-2-insecure-access-http" rel="nofollow noreferrer">here</a> </p>
<p>If you want to access the kiali dashboard you should install your istio demo profile with <code>--set values.kiali.enabled=true</code>:</p>
<pre><code>istioctl manifest apply --set profile=demo --set values.kiali.enabled=true
</code></pre>
<p>Then apply the virtual service, gateway and destination rule:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: kiali-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 15029
name: http-kiali
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kiali-vs
namespace: istio-system
spec:
hosts:
- "*"
gateways:
- kiali-gateway
http:
- match:
- port: 15029
route:
- destination:
host: kiali
port:
number: 20001
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: kiali
namespace: istio-system
spec:
host: kiali
trafficPolicy:
tls:
mode: DISABLE
---
EOF
</code></pre>
<p>Get your external-ip with</p>
<pre><code>kubectl get svc istio-ingressgateway -n istio-system
</code></pre>
<p>And visit kiali via your browser at <code>http://<EXTERNAL-IP>:15029/</code> with the credentials admin:admin.</p>
<hr>
<p>Additionally if you want to change the kiali credentials check this <a href="https://stackoverflow.com/questions/61107694/changing-secrets-of-kiali-in-istio-is-not-working/61116599#61116599">stackoverflow question</a>.</p>
| Jakub |
<p>I've seen another similar question here: <a href="https://stackoverflow.com/questions/52636213/docker-distroless-image-how-to-add-customize-certificate-to-trust-store">Docker distroless image how to add customize certificate to trust store?</a> but the answer relied on having the certificate available at image build time, which I do not have.</p>
<p>I am looking for a way to copy a CA certificate into a distroless based container image at Kubernetes pod deployment time and have the CA store get updated so that the certificate is considered valid by openssl.</p>
<p>I have seen that using kubernetes volumes I can share the certificate.crt into the container when it is deployed (it will be present at /usr/local/share/ca-certificates/cert.crt inside the container) but there is no update-ca-certificates or update-ca-trust command available inside of distroless - so how can I ensure that the CA store/bundle is properly updated to make the cert be considered valid? Note that editing/appending to the cert bundle manually is not recommended. We are looking for the proper way to execute update-ca-certificates inside of distroless.</p>
<p>I have seen examples with alpine base images where people have used apk to add the missing packages such as ca-certificates so that the update-ca-certificates command will be available. Is there a similar way to achieve this when building distroless images?</p>
| DaveUK | <p>This is a community wiki answer. Feel free to expand on it.</p>
<p>The solution for your issue was proposed in this feature request:</p>
<p><a href="https://github.com/GoogleContainerTools/distroless/pull/272" rel="nofollow noreferrer">Add option in cacerts rules to include additional ca certs #272</a></p>
<p>However, the request is still not merged and thus not available yet.</p>
<p>There is a workaround however which was explained <a href="https://github.com/GoogleContainerTools/distroless/issues/451#issuecomment-588515325" rel="nofollow noreferrer">here</a>. Bear in mind that the workaround assumes that the initContainer is based on an image other than distroless.</p>
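<p>Until that lands, a hedged sketch of the initContainer workaround could look like the following — a Debian-based init image runs <code>update-ca-certificates</code> and shares the resulting bundle with the distroless container through an <code>emptyDir</code> (the image, secret and mount names are assumptions, and installing <code>ca-certificates</code> at runtime assumes the cluster has network access to package mirrors):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-custom-ca
spec:
  volumes:
    - name: custom-ca
      secret:
        secretName: my-ca-cert        # assumed to contain cert.crt
    - name: generated-certs
      emptyDir: {}
  initContainers:
    - name: update-ca
      image: debian:bullseye-slim
      command: ["sh", "-c"]
      args:
        - apt-get update && apt-get install -y ca-certificates &&
          cp /custom-ca/cert.crt /usr/local/share/ca-certificates/ &&
          update-ca-certificates &&
          cp /etc/ssl/certs/ca-certificates.crt /generated-certs/
      volumeMounts:
        - name: custom-ca
          mountPath: /custom-ca
        - name: generated-certs
          mountPath: /generated-certs
  containers:
    - name: app
      image: my-distroless-app:latest   # placeholder
      volumeMounts:
        - name: generated-certs
          mountPath: /etc/ssl/certs     # distroless reads ca-certificates.crt from here
</code></pre>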
| Wytrzymały Wiktor |
<p>I'm using an Ansible JMeter Operator to do distributed load testing and am having trouble with creating a Kubernetes secret. The operator I'm modifying is <a href="https://github.com/kubernauts/jmeter-operator" rel="noreferrer">the JMeter one</a> and the additional YAML I'm adding is as below:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: InfluxDB Storage Secret
k8s:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: azure-storage-account-infxluxdb-secret
namespace: '{{ meta.namespace }}'
stringData:
azurestorageaccountname: 'xxxxxxx'
azurestorageaccountkey: 'xxxxxxxxxxx'
</code></pre>
<p>Is there anything wrong with the YAML definition? I'm modifying the <em>roles/jmeter/tasks/main.yaml</em> of the role to add it into my specific namespace.</p>
| David C | <p>Here is my example that works for me; hope it helps.</p>
<pre><code> - name: CREATE MONGOSECRETS SECRET
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: "{{ secret_name }}"
namespace: "{{ project_name | lower }}"
data:
config_data.json: "{{ lookup('template', mongo_conn_templates_path + '/config_data.json' ) | tojson | b64encode }}"
</code></pre>
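<p>If you prefer to keep plain strings instead of base64-encoding, the same task structure should also accept <code>stringData</code> — a sketch based on the values from the question, untested:</p>
<pre><code> - name: CREATE STORAGE SECRET
   kubernetes.core.k8s:
     state: present
     definition:
       apiVersion: v1
       kind: Secret
       type: Opaque
       metadata:
         name: azure-storage-account-infxluxdb-secret
         namespace: "{{ meta.namespace }}"
       stringData:
         azurestorageaccountname: "xxxxxxx"
         azurestorageaccountkey: "xxxxxxxxxxx"
</code></pre>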
| Samush |
<p>I can't get the demo profile to work with istioctl. It seems like istioctl is having trouble creating the IngressGateways and AddonComponents. I have tried the helm installation and hit similar issues. I created a fresh k8s cluster with kops and got the same issue. Any help debugging this would be greatly appreciated. </p>
<p>I am following these instructions.
<a href="https://istio.io/docs/setup/getting-started/#download" rel="nofollow noreferrer">https://istio.io/docs/setup/getting-started/#download</a></p>
<p>I am running</p>
<pre><code> istioctl manifest apply --set profile=demo --logtostderr
</code></pre>
<p>This is the output </p>
<pre><code>2020-04-06T19:59:24.951136Z info Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
✔ Finished applying manifest for component Pilot.
- Applying manifest for component IngressGateways...
- Applying manifest for component EgressGateways...
- Applying manifest for component AddonComponents...
✔ Finished applying manifest for component EgressGateways.
2020-04-06T20:00:11.501795Z error installer error running kubectl: exit status 1
✘ Finished applying manifest for component AddonComponents.
2020-04-06T20:00:40.418396Z error installer error running kubectl: exit status 1
✘ Finished applying manifest for component IngressGateways.
2020-04-06T20:00:40.421746Z info
Component AddonComponents - manifest apply returned the following errors:
2020-04-06T20:00:40.421823Z info Error: error running kubectl: exit status 1
2020-04-06T20:00:40.421884Z info Error detail:
Error from server (Timeout): error when creating "STDIN": Timeout: request did not complete within requested timeout 30s (repeated 1 times)
clusterrole.rbac.authorization.k8s.io/kiali unchanged
clusterrole.rbac.authorization.k8s.io/kiali-viewer unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-istio-system unchanged
clusterrolebinding.rbac.authorization.k8s.io/kiali unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-istio-system unchanged
serviceaccount/kiali-service-account unchanged
serviceaccount/prometheus unchanged
configmap/istio-grafana unchanged
configmap/istio-grafana-configuration-dashboards-citadel-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-galley-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-istio-mesh-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-istio-performance-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-istio-service-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-istio-workload-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-mixer-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-pilot-dashboard unchanged
configmap/kiali configured
configmap/prometheus unchanged
secret/kiali unchanged
deployment.apps/grafana unchanged
deployment.apps/istio-tracing unchanged
deployment.apps/kiali unchanged
deployment.apps/prometheus unchanged
service/grafana unchanged
service/jaeger-agent unchanged
service/jaeger-collector unchanged
service/jaeger-collector-headless unchanged
service/jaeger-query unchanged
service/kiali unchanged
service/prometheus unchanged
service/tracing unchanged
service/zipkin unchanged
2020-04-06T20:00:40.421999Z info
Component IngressGateways - manifest apply returned the following errors:
2020-04-06T20:00:40.422056Z info Error: error running kubectl: exit status 1
2020-04-06T20:00:40.422096Z info Error detail:
Error from server (Timeout): error when creating "STDIN": Timeout: request did not complete within requested timeout 30s (repeated 2 times)
serviceaccount/istio-ingressgateway-service-account unchanged
deployment.apps/istio-ingressgateway configured
poddisruptionbudget.policy/ingressgateway unchanged
role.rbac.authorization.k8s.io/istio-ingressgateway-sds unchanged
rolebinding.rbac.authorization.k8s.io/istio-ingressgateway-sds unchanged
service/istio-ingressgateway unchanged
2020-04-06T20:00:40.422134Z info
✘ Errors were logged during apply operation. Please check component installation logs above.
Error: failed to apply manifests: errors were logged during apply operation
</code></pre>
<p>I ran the below to verify install before running the above commands.</p>
<pre><code>istioctl verify-install
Checking the cluster to make sure it is ready for Istio installation...
#1. Kubernetes-api
-----------------------
Can initialize the Kubernetes client.
Can query the Kubernetes API Server.
#2. Kubernetes-version
-----------------------
Istio is compatible with Kubernetes: v1.16.7.
#3. Istio-existence
-----------------------
Istio will be installed in the istio-system namespace.
#4. Kubernetes-setup
-----------------------
Can create necessary Kubernetes configurations: Namespace,ClusterRole,ClusterRoleBinding,CustomResourceDefinition,Role,ServiceAccount,Service,Deployments,ConfigMap.
#5. SideCar-Injector
-----------------------
This Kubernetes cluster supports automatic sidecar injection. To enable automatic sidecar injection see https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/#deploying-an-app
</code></pre>
| user2385520 | <p>As mentioned in your logs</p>
<blockquote>
<p>2020-04-06T19:59:24.951136Z info Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT.</p>
</blockquote>
<hr>
<p>As mentioned <a href="https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>To determine if your cluster supports third party tokens, look for the TokenRequest API:</p>
</blockquote>
<pre><code>$ kubectl get --raw /api/v1 | jq '.resources[] | select(.name | index("serviceaccounts/token"))'
{
"name": "serviceaccounts/token",
"singularName": "",
"namespaced": true,
"group": "authentication.k8s.io",
"version": "v1",
"kind": "TokenRequest",
"verbs": [
"create"
]
}
</code></pre>
<blockquote>
<p>While most cloud providers support this feature now, many local development tools and custom installations may not. To enable this feature, please refer to the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection" rel="nofollow noreferrer">Kubernetes documentation</a>.</p>
</blockquote>
<hr>
<blockquote>
<p>To authenticate with the Istio control plane, the Istio proxy will use a Service Account token. Kubernetes supports two forms of these tokens:</p>
<ul>
<li>Third party tokens, which have a scoped audience and expiration.</li>
<li>First party tokens, which have no expiration and are mounted into all pods.</li>
</ul>
<p>Because the properties of the first party token are less secure, Istio will default to using third party tokens. However, this feature is not enabled on all Kubernetes platforms.</p>
<p>If you are using istioctl to install, support will be automatically detected. This can be done <strong>manually</strong> as well, and configured by passing <strong>--set values.global.jwtPolicy=third-party-jwt</strong> or <strong>--set values.global.jwtPolicy=first-party-jwt.</strong></p>
</blockquote>
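<p>So, applied to the command from the question, a hedged retry would be:</p>
<pre><code>istioctl manifest apply --set profile=demo --set values.global.jwtPolicy=first-party-jwt
</code></pre>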
<hr>
<p>If that won't work I would open a new github issue, or add a comment <a href="https://github.com/istio/istio/issues/22319" rel="nofollow noreferrer">here</a> as issue with installation is similar.</p>
| Jakub |