Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I am trying out Kubernetes. I have deployed Nginx in the default namespace, and I am trying to create a VirtualServer to route to the dashboard.</p>
<p>nginx: default namespace
dashboard: kubernetes-dashboard namespace</p>
<p>However, when I try to create the VirtualServer, it gives me a warning that the VirtualServerRoute doesn't exist or is invalid. From what I understand, if I want to route to a different namespace I can do so by putting the namespace in front of the service.</p>
<p>nginx-ingress-dashboard.yaml</p>
<pre><code>apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: kubernetes-dashboard
spec:
  host: k8.test.com
  tls:
    secret: nginx-tls-secret
    # basedOn: scheme
    redirect:
      enable: true
      code: 301
  upstreams:
  - name: kubernetes-dashboard
    service: kubernetes-dashboard
    port: 8443
  routes:
  - path: /
    route: kubernetes-dashboard/kubernetes-dashboard
</code></pre>
<p>kubernetes-dashboard</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
</code></pre>
<p>Any hints what I have done wrongly? Thanks in advance.</p>
<pre><code>192.168.254.9 - - [27/Apr/2021:07:14:43 +0000] "GET /api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ HTTP/2.0" 400 48 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36" "-"2021/04/27 07:14:43 [error] 137#137: *106 readv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.254.9, server: k8.test.com, request: "GET /api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ HTTP/2.0", upstream: "http://192.168.253.130:8443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/", host: "k8.test.com"
192.168.254.9 - - [27/Apr/2021:07:14:43 +0000] "GET /api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ HTTP/2.0" 400 48 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36" "-" 2021/04/27 07:14:43 [error] 137#137: *106 readv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.254.9, server: k8.test.com, request: "GET /api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ HTTP/2.0", upstream: "http://192.168.253.130:8443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/", host: "k8.test.com"
</code></pre>
<p>secret.yaml</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
</code></pre>
| rosepalette | <p>Instead of defining a route, you need to go with an <code>action.pass</code>, as you want to redirect the requests to the service directly.</p>
<p>Additionally, I don't have much experience with the <code>VirtualServer</code> resource, but <code>Ingress</code> resources should usually be in the same namespace as the service that you want to serve. The Ingress Controller picks them up even if they are in a different namespace. (This means that the tls secret needs to be in that namespace too, though.)</p>
<p>So, I would put an <code>action.pass</code> and also put the <code>VirtualServer</code> in the same namespace of the resource you want to serve, something like the following:</p>
<pre><code>apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  host: k8.test.com
  tls:
    secret: nginx-tls-secret
    # basedOn: scheme
    redirect:
      enable: true
      code: 301
  upstreams:
  - name: kubernetes-dashboard
    service: kubernetes-dashboard
    port: 443
  routes:
  - path: /
    action:
      pass: kubernetes-dashboard
</code></pre>
<p>If you use route, then you need to define a VirtualServerRoute with that name, like explained in the documentation ( <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/#virtualserverroute-specification" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/#virtualserverroute-specification</a> )</p>
| AndD |
<p>How to figure out how much min and max resources to allocate for each application deployment? I'm setting up a cluster and I haven't setup any resources and letting it run freely.</p>
<p>I guess I could use the <code>top</code> command to figure out the load during peak time and work from that, but top says something like 6% or 10%, and I'm not sure how to convert that into something like <code>0.5 cpu</code> or <code>100 MB</code>. Is there a method/formula to determine max and min based on <code>top</code> usage?</p>
<p>I'm running two t3.medium nodes and I have the following pods: <code>httpd and tomcat in namespace1</code>, <code>mysql in namespace2</code>, <code>jenkins and gitlab in namespace3</code>. Is there any guide to the minimum resources they need, or do I have to figure it out based on top or some other method?</p>
| user630702 | <p>There are few things to discuss here:</p>
<ol>
<li>Unix <code>top</code> and <code>kubectl top</code> are different:</li>
</ol>
<ul>
<li><p>Unix <code>top</code> uses the <code>proc</code> virtual filesystem and reads the <code>/proc/meminfo</code> file to get actual information about the current memory usage.</p>
</li>
<li><p><code>kubectl top</code> shows metrics information based on reports from <a href="https://github.com/google/cadvisor" rel="nofollow noreferrer">cAdvisor</a>, which collects the resource usage. For example: <code>kubectl top pod POD_NAME --containers</code>: Show metrics for a given pod and its containers or <code>kubectl top node NODE_NAME</code>: Show metrics for a given node.</p>
</li>
</ul>
<ol start="2">
<li><p>You can use the <a href="https://github.com/kubernetes-sigs/metrics-server#kubernetes-metrics-server" rel="nofollow noreferrer">metrics-server</a> to get the CPU and memory usage of the pods. With it you will be able to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">Assign CPU Resources to Containers and Pods</a>.</p>
</li>
<li><p>Optimally, your pods should be using exactly the amount of resources you requested, but that's almost impossible to achieve. If the usage is lower than your request, you are wasting resources. If it's higher, you are risking performance issues. Consider a 25% margin up and down the request value as a good starting point. Regarding limits, achieving a good setting would depend on trying and adjusting. There is no optimal value that would fit everyone, as it depends on many factors related to the application itself, the demand model, the tolerance to errors, etc. (a minimal example of setting requests and limits follows this list).</p>
</li>
<li><p>As a supplement I recommend going through the <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">Managing Resources for Containers</a> docs.</p>
</li>
</ol>
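<p>As an illustration only (the numbers below are placeholders to adjust based on your own measurements, and the tomcat image is just an example), requests and limits are set per container in the pod template:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:9
        resources:
          requests:
            cpu: "0.5"
            memory: "256Mi"
          limits:
            cpu: "1"
            memory: "512Mi"
</code></pre>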
| Wytrzymały Wiktor |
<p>I set up a cluster issuer, certificate, and ingress under my Kubernetes environment and everything is working fine as per status, but when I am connecting to the host as per my ingress, it's giving me "Your connection is not private".</p>
<p><strong>CluserterIssuer</strong> output lastlines;-</p>
<pre><code>...
Conditions:
Last Transition Time: 2020-02-16T10:21:24Z
Message: The ACME account was registered with the ACME server
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
</code></pre>
<p>Certificate output last lines :- </p>
<pre><code>Status:
Conditions:
Last Transition Time: 2020-02-16T10:24:06Z
Message: Certificate is up to date and has not expired
Reason: Ready
Status: True
Type: Ready
Not After: 2020-05-14T09:24:05Z
Events: <none>
</code></pre>
<p><strong>Ingress</strong> file:-</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: b1-ingress  # change me
  namespace: b1
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - '*.testing.example.com'
    secretName: acme-crt
  rules:
  - host: flower.testing.example.com
    http:
      paths:
      - backend:
          serviceName: flower-service
          servicePort: 5555
  - host: hasura.testing.example.com
    http:
      paths:
      - backend:
          serviceName: hasura-service
          servicePort: 80
</code></pre>
| me25 | <p>Based on cert menager <a href="http://docs.cert-manager.io/en/release-0.7/tasks/issuing-certificates/ingress-shim.html#supported-annotations" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p><strong>certmanager.k8s.io/issuer</strong> - The Issuer must be in the <strong>same namespace</strong> as the Ingress resource.</p>
</blockquote>
<p>As @me25 confirmed in comments </p>
<blockquote>
<p>yes everything worked when I copied secret in to namespace: b1 – me25</p>
</blockquote>
<p>The problem here was the certificate secret missing from the proper namespace.</p>
<p>The solution was to copy the certificate secret to <code>namespace: b1</code>, the same namespace as the ingress.</p>
<hr>
<blockquote>
<p>Do you know any better way other than a copy secrets</p>
</blockquote>
<p>This <a href="https://stackoverflow.com/questions/46297949/kubernetes-sharing-secret-across-namespaces">stackoverflow post</a> provides a few tricks for copying a secret from one namespace to another.</p>
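<p>As a hedged sketch (the source namespace below is an assumption, adjust it to wherever the working <code>acme-crt</code> secret currently lives), one common way to copy a secret between namespaces is:</p>
<pre><code>kubectl get secret acme-crt --namespace=default -o yaml \
  | sed 's/namespace: default/namespace: b1/' \
  | kubectl apply -f -
</code></pre>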
<hr>
<p>Additional links:</p>
<ul>
<li><a href="https://itnext.io/automated-tls-with-cert-manager-and-letsencrypt-for-kubernetes-7daaa5e0cae4" rel="nofollow noreferrer">https://itnext.io/automated-tls-with-cert-manager-and-letsencrypt-for-kubernetes-7daaa5e0cae4</a></li>
<li><a href="https://cert-manager.io/docs/tutorials/acme/ingress/" rel="nofollow noreferrer">https://cert-manager.io/docs/tutorials/acme/ingress/</a></li>
</ul>
| Jakub |
<p>How do I automatically restart Kubernetes pods associated with Daemonsets when their configmap is updated?</p>
<p>As per the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically" rel="nofollow noreferrer">kubernetes</a> documentation when the configmap volume mount is updated, it automatically updates the pods. However I do not see that happening for Daemonsets. What am I missing?</p>
<p>The below is my configmap</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-update
  namespace: default
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-logaggregator.conf
  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/abc.log
        Parser            docker
        DB                /var/log/tail-containers-state.db
        DB.Sync           Normal
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
        Rotate_Wait       60
        Docker_Mode       On
  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.conatiners.
        Merge_Log           On
        Keep_Log            Off
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              On
        Annotations         On
  output-kubernetes.conf: |
    [OUTPUT]
        Name              cloudwatch
        Match             kube.*
        region            us-west-2
        log_group_name    fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit
        auto_create_group true
  parsers.conf: |
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
        Decode_Field_As escaped_utf8 log do_next
        Decode_Field_As json log
    [PARSER]
        Name        docker_default
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
</code></pre>
<p>& my daemonset manifest file</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluent-bit-update
  namespace: default
  labels:
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "2020"
        prometheus.io/path: /api/v1/metrics/prometheus
    spec:
      containers:
      - name: aws-for-fluent-bit
        image: amazon/aws-for-fluent-bit:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 2020
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-volume
          mountPath: /fluent-bit/etc/
      terminationGracePeriodSeconds: 10
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-volume
        configMap:
          name: fluent-bit-update
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - operator: "Exists"
        effect: "NoExecute"
      - operator: "Exists"
        effect: "NoSchedule"
</code></pre>
<p>When I updated the <code>Path</code> field in configmap to read another log file, though I see the volume mounts getting updated, I do not see the pods picking up the change unless I delete and recreate the daemonset. </p>
<p>Is there a way to achieve this automagically without restarting the daemonset? I would appreciate some guidance on this. Thanks</p>
| fledgling | <pre><code>kubectl rollout restart ds/<daemonset_name> -n namespace
</code></pre>
<p>This will do a rolling update of the DaemonSet pods, which will then pick up the updated ConfigMap.</p>
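<p>Applied to the DaemonSet from the question (assuming kubectl 1.15 or later and a <code>RollingUpdate</code> update strategy on the DaemonSet), that would be:</p>
<pre><code>kubectl rollout restart ds/fluent-bit-update -n default
</code></pre>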
| gln reddy |
<p>I'm using the K3s distribution of Kubernetes which is deployed on
a Spot EC2 Instance in AWS.</p>
<p>I have scheduled a certain processing job, and sometimes this job gets terminated and ends up in an "Unknown" state (the job code is abnormally terminated).</p>
<pre><code>kubectl describe pod <pod_name>
</code></pre>
<p>it shows this:</p>
<pre><code> State: Terminated
Reason: Unknown
Exit Code: 255
Started: Wed, 06 Jan 2021 21:13:29 +0000
Finished: Wed, 06 Jan 2021 23:33:46 +0000
</code></pre>
<p>The AWS logs show that the CPU consumption was 99% right before the crash.
From a number of sources (<a href="https://jamesdefabia.github.io/docs/user-guide/pod-states/" rel="nofollow noreferrer">1</a>, <a href="https://github.com/kubernetes/kubernetes/issues/51333" rel="nofollow noreferrer">2</a>, <a href="https://www.reddit.com/r/kubernetes/comments/f7feec/pods_stuck_at_unknown_status_after_node_goes_down/" rel="nofollow noreferrer">3</a>) I saw that this can be a reason for a node crash, but I did not see this exact case described.
What may be the reason?</p>
<p>Thanks!</p>
| sborpo | <p>The actual state of the Job is <code>Terminated</code> with the <code>Unknown</code> reason. In order to debug this situation you need to get a relevant logs from Pods created by your Job.</p>
<blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup" rel="nofollow noreferrer">When a Job completes</a>, no more Pods are created, but the Pods are
not deleted either. Keeping them around allows you to still view the
logs of completed pods to check for errors, warnings, or other
diagnostic output.</p>
</blockquote>
<p>To do so, execute <code>kubectl describe job $JOB</code> to see the Pods' names under the Events section and then execute <code>kubectl logs $POD</code>.</p>
<p>If that is not enough, you can try different ways to <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/" rel="nofollow noreferrer">Debug Pods</a>, such as the following (a short command sketch follows the list):</p>
<ul>
<li><p>Debugging with container exec</p>
</li>
<li><p>Debugging with an ephemeral debug container, or</p>
</li>
<li><p>Debugging via a shell on the node</p>
</li>
</ul>
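<p>A minimal, hedged sketch of those commands (pod, container and node names are placeholders, and <code>kubectl debug</code> requires a reasonably recent kubectl/cluster version):</p>
<pre><code># debugging with container exec
kubectl exec -it $POD -- sh

# debugging with an ephemeral debug container
kubectl debug -it $POD --image=busybox --target=app-container

# debugging via a shell on the node
kubectl debug node/my-node -it --image=busybox
</code></pre>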
<p>The methods above will give you more info regarding the actual reasons behind the Job termination.</p>
| Wytrzymały Wiktor |
<p>I've tried creating a cluster role that only has access to view pods; however, for some reason that account can still see everything: secrets, deployments, nodes, etc. I also enabled skip-login, and it seems like by default anonymous users don't have any restrictions either.</p>
<p><strong>Service account:</strong></p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-example
  namespace: default
</code></pre>
<p><strong>Cluster Role:</strong></p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cr-example
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
</code></pre>
<p><strong>Cluster Role Binding:</strong></p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crb-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: cr-example
  kind: ClusterRole
subjects:
- kind: ServiceAccount
  name: sa-example
  namespace: default
</code></pre>
<p><strong>Context:</strong></p>
<pre><code>K8s version: 1.17.3
Dashboard version: v2.0.0-rc5
Cluster type: bare metal
authorization-mode=Node,RBAC
</code></pre>
| Gaby | <p>How did You check if it works or no?</p>
<p>I made a reproduction of your issue with below yamls</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-example
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cr-example
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crb-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: cr-example
  kind: ClusterRole
subjects:
- kind: ServiceAccount
  name: sa-example
  namespace: default
</code></pre>
<p>And I used <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#checking-api-access" rel="nofollow noreferrer">kubectl auth can-i</a> to verify if it works </p>
<pre><code>kubectl auth can-i get pods --as=system:serviceaccount:default:sa-example
yes
kubectl auth can-i get deployment --as=system:serviceaccount:default:sa-example
no
kubectl auth can-i get secrets --as=system:serviceaccount:default:sa-example
no
kubectl auth can-i get nodes --as=system:serviceaccount:default:sa-example
no
</code></pre>
<p>And it seems like everything works just fine</p>
<p>The only thing which is different in my yaml is</p>
<pre><code>kind: ClusterRole
metadata:
  name: cr-example    # instead of cr-<role>
</code></pre>
<p>So it actually matches the ClusterRoleBinding.</p>
<p>I hope this helps with your issue. Let me know if you have any more questions.</p>
| Jakub |
<p>I’m looking for ways to maintain high availability in the case that one of the policy pods is unavailable and found the following information on the official website:</p>
<p><a href="https://istio.io/docs/reference/config/policy-and-telemetry/istio.mixer.v1.config.client/#NetworkFailPolicy" rel="nofollow noreferrer">https://istio.io/docs/reference/config/policy-and-telemetry/istio.mixer.v1.config.client/#NetworkFailPolicy</a></p>
<p>But I did not find any additional information on how to apply these rules in my deployment. Can someone help me with this and tell me how to change these values?</p>
| KubePony | <p>What you´re looking for can be found <a href="https://istio.io/docs/reference/config/networking/destination-rule/" rel="nofollow noreferrer">here</a>, in the istio documentation Destination Rules</p>
<p>Specifically check this <a href="https://istio.io/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-HTTPSettings" rel="nofollow noreferrer">link</a></p>
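<p>As a purely illustrative, hedged sketch (the host and the numbers are assumptions you would adapt to your own mesh), connection pool settings are configured through a <code>DestinationRule</code> like this:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: policy-connection-pool
spec:
  host: istio-policy.istio-system.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 100
        maxRequestsPerConnection: 10
</code></pre>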
<hr />
<p>This istio <a href="https://istio.io/blog/2017/0.1-using-network-policy/" rel="nofollow noreferrer">blog</a> about Using Network Policy with Istio redirects us to Calico documentation.</p>
<blockquote>
<p>Network Policy is universal, highly efficient, and isolated from the pods, making it ideal for applying policy in support of network security goals. Furthermore, having policy that operates at different layers of the network stack is a really good thing as it gives each layer specific context without commingling of state and allows separation of responsibility.</p>
<p>This post is based on the three part blog series by Spike Curtis, one of the Istio team members at Tigera. The full series can be found here: <a href="https://www.projectcalico.org/using-network-policy-in-concert-with-istio/" rel="nofollow noreferrer">https://www.projectcalico.org/using-network-policy-in-concert-with-istio/</a></p>
</blockquote>
<hr />
<p>Additional links which could be useful:</p>
<ul>
<li><p><a href="https://docs.projectcalico.org/v3.8/security/calico-network-policy" rel="nofollow noreferrer">Calico Network Policy</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Kubernetes Network Policy</a></p>
</li>
</ul>
| Jakub |
<p>I found a lot of information on how to give helm permission to create resources in a particular namespace. </p>
<p>I am trying to see if I can create namespaces on the fly(with random names) and then use helm to install and delete resources inside the namespace.</p>
<p>My idea is to create a namespace with name such as Fixedsuffix-randomprefix and then allow helm to create all resources inside it. Is this possible ?</p>
<p>I can create a clusterrole and clusterrolebinding to allow tiller's serviceaccount to create namespaces, but I am not able to figure out how to have a serviceaccount that could create resources in the particular namespace (mainly because this serviceaccount would have to be created when the namespace is created and then assigned to the tiller pod).</p>
<p>TIA</p>
| The_Lost_Avatar | <p>My question is why would you create sa, clusterrole and rolebinding to do that? Helm has it´s own resources which allow him to install and delete resources inside new namespace.</p>
<blockquote>
<p>My idea is to create a namespace with name such as Fixedsuffix-randomprefix and then allow helm to create all resources inside it. Is this possible ?</p>
</blockquote>
<p>Yes, you can create your new namespace and use helm to install everything in this namespace. Or, even better, you can just use helm install and it will create the new namespace for you. For that purpose helm has <a href="https://helm.sh/docs/helm/helm_install/#options-inherited-from-parent-commands" rel="nofollow noreferrer">helm install</a> --namespace.</p>
<blockquote>
<p>-n, --namespace string namespace scope for this request</p>
</blockquote>
<p>For example you can install <a href="https://github.com/helm/charts/tree/master/stable/traefik" rel="nofollow noreferrer">traefik chart</a> in namespace tla.</p>
<pre><code>helm install stable/traefik --namespace=tla
NAME: oily-beetle
LAST DEPLOYED: Tue Mar 24 07:33:03 2020
NAMESPACE: tla
STATUS: DEPLOYED
</code></pre>
<hr>
<p>Another idea which came to my mind: if you want tiller not to use cluster-admin credentials, then this <a href="https://medium.com/@elijudah/configuring-minimal-rbac-permissions-for-helm-and-tiller-e7d792511d10" rel="nofollow noreferrer">link</a> could help.</p>
| Jakub |
<p>I applied the following taint, and label to a node but the pod never reaches a running status and I cannot seem to figure out why</p>
<pre><code>kubectl taint node k8s-worker-2 dedicated=devs:NoSchedule
kubectl label node k8s-worker-2 dedicated=devs
</code></pre>
<p>and here is a sample of my pod yaml file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    security: s1
  name: pod-1
spec:
  containers:
  - image: nginx
    name: bear
    resources: {}
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "devs"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values:
            - devs
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeName: k8s-master-2
status: {}
</code></pre>
<p>on creating the pod, it gets scheduled on the <code>k8s-worker-2</code> node but remains in a pending state before it's finally evicted. Here are sample outputs:</p>
<p><code>kubectl describe no k8s-worker-2 | grep -i taint</code>
<code>Taints: dedicated=devs:NoSchedule</code></p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-1 0/1 Pending 0 9s <none> k8s-master-2 <none> <none>
# second check
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-1 0/1 Pending 0 59s <none> k8s-master-2 <none> <none>
</code></pre>
<pre><code>Name: pod-1
Namespace: default
Priority: 0
Node: k8s-master-2/
Labels: security=s1
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
bear:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dzvml (ro)
Volumes:
kube-api-access-dzvml:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: dedicated=devs:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
</code></pre>
<p>Also, here is output of <code>kubectl describe node</code></p>
<pre><code>root@k8s-master-1:~/scheduling# kubectl describe nodes k8s-worker-2
Name: k8s-worker-2
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
dedicated=devs
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8s-worker-2
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 10.128.0.4/32
projectcalico.org/IPv4IPIPTunnelAddr: 192.168.140.0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 18 Jul 2021 16:18:41 +0000
Taints: dedicated=devs:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: k8s-worker-2
AcquireTime: <unset>
RenewTime: Sun, 10 Oct 2021 18:54:46 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sun, 10 Oct 2021 18:48:50 +0000 Sun, 10 Oct 2021 18:48:50 +0000 CalicoIsUp Calico is running on this node
MemoryPressure False Sun, 10 Oct 2021 18:53:40 +0000 Mon, 04 Oct 2021 07:52:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 10 Oct 2021 18:53:40 +0000 Mon, 04 Oct 2021 07:52:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 10 Oct 2021 18:53:40 +0000 Mon, 04 Oct 2021 07:52:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 10 Oct 2021 18:53:40 +0000 Mon, 04 Oct 2021 07:52:58 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.128.0.4
Hostname: k8s-worker-2
Capacity:
cpu: 2
ephemeral-storage: 20145724Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8149492Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 18566299208
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8047092Ki
pods: 110
System Info:
Machine ID: 3c2709a436fa0c630680bac68ad28669
System UUID: 3c2709a4-36fa-0c63-0680-bac68ad28669
Boot ID: 18a3541f-f3b4-4345-ba45-8cfef9fb1364
Kernel Version: 5.8.0-1038-gcp
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.3
Kube-Proxy Version: v1.21.3
PodCIDR: 192.168.2.0/24
PodCIDRs: 192.168.2.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-gp4tk 250m (12%) 0 (0%) 0 (0%) 0 (0%) 84d
kube-system kube-proxy-6xxgx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 81d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 250m (12%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 6m25s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 6m25s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m19s (x7 over 6m25s) kubelet Node k8s-worker-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m19s (x7 over 6m25s) kubelet Node k8s-worker-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m19s (x7 over 6m25s) kubelet Node k8s-worker-2 status is now: NodeHasSufficientPID
Warning Rebooted 6m9s kubelet Node k8s-worker-2 has been rebooted, boot id: 18a3541f-f3b4-4345-ba45-8cfef9fb1364
Normal Starting 6m7s kube-proxy Starting kube-proxy.
</code></pre>
<p>I included the following to show that the pod never issues events and that it terminates later on by itself.</p>
<pre><code>root@k8s-master-1:~/format/scheduling# kubectl get po
No resources found in default namespace.
root@k8s-master-1:~/format/scheduling# kubectl create -f nginx.yaml
pod/pod-1 created
root@k8s-master-1:~/format/scheduling# kubectl get po pod-1
NAME READY STATUS RESTARTS AGE
pod-1 0/1 Pending 0 10s
root@k8s-master-1:~/format/scheduling# kubectl describe po pod-1
Name: pod-1
Namespace: default
Priority: 0
Node: k8s-master-2/
Labels: security=s1
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
bear:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5hsq4 (ro)
Volumes:
kube-api-access-5hsq4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: dedicated=devs:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
root@k8s-master-1:~/format/scheduling# kubectl get po pod-1
NAME READY STATUS RESTARTS AGE
pod-1 0/1 Pending 0 45s
root@k8s-master-1:~/format/scheduling# kubectl get po pod-1
NAME READY STATUS RESTARTS AGE
pod-1 0/1 Pending 0 62s
root@k8s-master-1:~/format/scheduling# kubectl get po pod-1
NAME READY STATUS RESTARTS AGE
pod-1 0/1 Pending 0 74s
root@k8s-master-1:~/format/scheduling# kubectl get po pod-1
Error from server (NotFound): pods "pod-1" not found
root@k8s-master-1:~/format/scheduling# kubectl get po
No resources found in default namespace.
root@k8s-master-1:~/format/scheduling#
</code></pre>
| Mekky_Mayata | <p>I was able to figure this one out later. On reproducing the same case on another cluster, the pod got created on the node having the scheduling parameters set. Then it occurred to me that the only change I had to make on the manifest was setting <code>nodeName: node-1</code> to match the right node on other cluster.
I was literally assigning the pod to a control plane node <code>nodeName: k8s-master-2</code> and this was causing conflicts.</p>
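<p>In other words, a hedged sketch of the fix for the manifest in the question would be to point <code>nodeName</code> at the tainted and labelled worker, or simply to drop it and let the scheduler honour the toleration and node affinity:</p>
<pre><code>spec:
  # either point the pod at the intended worker:
  nodeName: k8s-worker-2
  # or remove the nodeName field entirely and let the scheduler decide
</code></pre>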
| Mekky_Mayata |
<p>I am using the standalone <a href="https://docs.docker.com/docker-for-mac/kubernetes/" rel="nofollow noreferrer">Kubernetes</a> server and client that docker desktop includes.</p>
<p>I created two namespaces for k8s named: <code>development</code> and <code>production</code>.</p>
<pre><code>☁ kubernetes-labs [master] ⚡ k get namespace
NAME STATUS AGE
default Active 3d22h
development Active 2d23h
kube-node-lease Active 3d23h
kube-public Active 3d23h
kube-system Active 3d23h
production Active 5m1s
</code></pre>
<p>Then, set a new cluster named <code>kubernetes-labs</code>:</p>
<pre><code>☁ kubernetes-labs [master] ⚡ k config set-cluster kubernetes-labs --server=https://kubernetes.docker.internal:6443
Cluster "kubernetes-labs" set.
</code></pre>
<p>As you can see, the new cluster's server point to <code>https://kubernetes.docker.internal:6443</code> which is used by the standalone Kubernetes server.</p>
<p>Next, created two contexts:</p>
<pre><code>☁ kubernetes-labs [master] ⚡ kubectl config set-context kubernetes-labs-dev --cluster=kubernetes-labs --namespace=development --user=dev
Context "kubernetes-labs-dev" modified.
☁ kubernetes-labs [master] ⚡ kubectl config set-context kubernetes-labs-prod --cluster=kubernetes-labs --namespace=production --user=prod
Context "kubernetes-labs-prod" created.
</code></pre>
<p>Switch to <code>kubernetes-labs-dev</code> context:</p>
<pre><code>☁ kubernetes-labs [master] ⚡ k config use-context kubernetes-labs-dev
Switched to context "kubernetes-labs-dev".
</code></pre>
<p>Now, when I try to get pods from the current namespace:</p>
<pre><code>☁ kubernetes-labs [master] ⚡ k get pods
Please enter Username: dev
Please enter Password:
</code></pre>
<p>It asks for authentication, and I don't know what <code>username</code> and <code>password</code> should be entered.</p>
<p>Besides, when I try to view the config used by the current context, I get an error.</p>
<pre class="lang-sh prettyprint-override"><code>☁ kubernetes-labs [master] ⚡ k config view --minify=true
error: cannot locate user dev
</code></pre>
| Lin Du | <p>In order to make it work you need to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">Configure Access to Multiple Clusters</a>:</p>
<blockquote>
<p>This page shows how to configure access to multiple clusters by using
configuration files. After your clusters, users, and contexts are
defined in one or more configuration files, you can quickly switch
between clusters by using the kubectl config use-context command.</p>
</blockquote>
<p>You need to make sure that your configuration file is correct. A configuration file describes clusters, users, and contexts. Then, you can add user details to your configuration file, for example:</p>
<pre><code>kubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile
kubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password
</code></pre>
<p>The same can be done with contexts, for example:</p>
<pre><code>kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
</code></pre>
<p>and clusters, for example:</p>
<pre><code>kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
kubectl config --kubeconfig=config-demo set-cluster scratch --server=https://5.6.7.8 --insecure-skip-tls-verify
</code></pre>
<p>Bear in mind that you need to set the proper pathnames of the certificate files in your environment for your configuration file to work properly.</p>
<p>Also, remember that:</p>
<blockquote>
<p>Each context is a triple (cluster, user, namespace). For example, the
dev-frontend context says, "Use the credentials of the developer user
to access the frontend namespace of the development cluster".</p>
</blockquote>
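<p>Applied to the setup from the question (the <code>dev</code> user and the <code>kubernetes-labs-dev</code> context), a minimal, hedged sketch might look like this (the credential values are placeholders for whatever authentication your cluster actually accepts):</p>
<pre><code>kubectl config set-credentials dev --username=dev --password=<password>
kubectl config set-context kubernetes-labs-dev --cluster=kubernetes-labs --namespace=development --user=dev
kubectl config use-context kubernetes-labs-dev
</code></pre>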
<p>You can find more details and examples in the linked documentation. The step by step guide will make it easier for you to setup properly.</p>
| Wytrzymały Wiktor |
<p>I was trying to create a file before the application comes up in the Kubernetes cluster, using init containers.</p>
<p>But when I set up the pod.yaml and try to apply it with "kubectl apply -f pod.yaml", it throws the error below:
<a href="https://i.stack.imgur.com/KV1mO.png" rel="nofollow noreferrer">error-image</a></p>
| Jayesh Desai | <p>Like the error says, you cannot update a Pod adding or removing containers. To quote the documentation ( <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement</a> )</p>
<blockquote>
<p>Kubernetes doesn't prevent you from managing Pods directly. It is
possible to update some fields of a running Pod, in place. However,
Pod update operations like patch, and replace have some limitations</p>
</blockquote>
<p>This is because usually, you don't create Pods directly; instead you use Deployments, Jobs, StatefulSets (and more), which are high-level resources that define Pod templates. When you modify the template, Kubernetes simply deletes the old Pod and then schedules the new version.</p>
<p>In your case:</p>
<ul>
<li>you could delete the pod first, then create it again with the new specs you defined. But take into consideration that the Pod <strong>may be scheduled on a different node</strong> of the cluster (if you have more than one) and that <strong>may have a different IP Address</strong> as Pods are disposable entities.</li>
<li>Change your definition with a slightly more complex one, a Deployment ( <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a> ) which can be changed as desired and, each time you'll make a change to its definition, the old Pod will be removed and a new one will be scheduled.</li>
</ul>
<hr />
<p>From the spec of your Pod, I see that you are using a volume to share data between the init container and the main container. This is the optimal way, but you don't necessarily need to use a hostPath. If the only need for the volume is to share data between the init container and other containers, you can simply use the <code>emptyDir</code> type, which acts as a temporary volume that can be shared between containers and that will be cleaned up when the Pod is removed from the cluster for any reason.</p>
<p>You can check the documentation here: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#emptydir</a></p>
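<p>A minimal sketch of that pattern (the images, file name and paths are illustrative assumptions, not taken from your manifest):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: prepare-file
    image: busybox
    command: ["sh", "-c", "echo 'created before startup' > /work/app.conf"]
    volumeMounts:
    - name: shared
      mountPath: /work
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared
      mountPath: /etc/app
  volumes:
  - name: shared
    emptyDir: {}
</code></pre>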
| AndD |
<p>I set up a local Kubernetes cluster using Kind, and then I run Apache-Airflow on it using Helm.</p>
<p>To actually create the pods and run Airflow, I use the command:</p>
<pre><code>helm upgrade -f k8s/values.yaml airflow bitnami/airflow
</code></pre>
<p>which uses the chart <code>airflow</code> from the <code>bitnami/airflow</code> repo, and "feeds" it with the configuration of <code>values.yaml</code>.
The file <code>values.yaml</code> looks something like:</p>
<pre><code>web:
  extraVolumeMounts:
    - name: functions
      mountPath: /dir/functions/
  extraVolumes:
    - name: functions
      hostPath:
        path: /dir/functions/
        type: Directory
</code></pre>
<p>where <code>web</code> is one component of Airflow (and one of the pods on my setup), and the directory <code>/dir/functions/</code> is successfully mapped from the cluster inside the pod. However, I fail to do the same for a single, specific file, instead of a whole directory.</p>
<p>Does anyone knows the syntax for that? Or have an idea for an alternative way to map the file into the pod (its whole directory is successfully mapped into the cluster)?</p>
| localhost | <p>There is a <code>File</code> type for <code>hostPath</code> which should behave like you desire, as it states in the <a href="https://v1-17.docs.kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>File: A file must exist at the given path</p>
</blockquote>
<p>which you can then use with the precise file path in <code>mountPath</code>. Example:</p>
<pre><code>web:
  extraVolumeMounts:
    - name: singlefile
      mountPath: /path/to/mount/the/file.txt
  extraVolumes:
    - name: singlefile
      hostPath:
        path: /path/on/the/host/to/the/file.txt
        type: File
</code></pre>
<p>Or if it's not a problem, you could mount the whole directory containing it at the expected path.</p>
<hr />
<p>With this said, I want to point out that using <code>hostPath</code> is almost never a good idea.</p>
<p>If you have a cluster with more than one node, saying that your Pod mounts a <code>hostPath</code> doesn't restrict it to run on a specific host (even though you can enforce it with <code>nodeSelectors</code> and so on), which means that if the Pod starts on a different node, it may behave differently, not finding the directory and / or file it was expecting.</p>
<p>But even if you restrict the application to run on a specific node, you need to be ok with the idea that, if that node becomes unavailable, the Pod will not be rescheduled somewhere else on its own, meaning you'll need manual intervention to recover from a single node failure (unless the application is multi-instance and can tolerate one instance going down).</p>
<hr />
<p>To conclude:</p>
<ul>
<li>if you want to mount a path on a particular host, for whatever reason, I would go for <a href="https://v1-17.docs.kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local</a> volumes.. or at least use hostPath and restrict the Pod to run on the specific node it needs to run on.</li>
<li>if you want to mount small, textual files, you could consider mounting them from <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#using-configmaps-as-files-from-a-pod" rel="nofollow noreferrer">ConfigMaps</a> (a sketch of this follows the list)</li>
<li>if you want to configure an application, providing a set of files at a certain path when the app starts, you could go for an init container <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="nofollow noreferrer">which prepares files for the main container in an emptyDir volume</a></li>
</ul>
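<p>For the ConfigMap option, a hedged sketch in the same chart-values format (the ConfigMap name <code>functions-file</code> and the file name are assumptions you would replace with your own) could look like:</p>
<pre><code>web:
  extraVolumeMounts:
    - name: singlefile
      mountPath: /dir/functions/file.txt
      subPath: file.txt
  extraVolumes:
    - name: singlefile
      configMap:
        name: functions-file
</code></pre>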
| AndD |
<p>How can I add alertmanager to istio prometheus deployed by official helm chart?</p>
<p><a href="https://istio.io/docs/setup/install/helm/" rel="nofollow noreferrer">https://istio.io/docs/setup/install/helm/</a></p>
<pre><code>helm upgrade istio install/kubernetes/helm/istio --namespace istio-system --tiller-namespace istio-system \
--set tracing.enabled=true \
--set tracing.ingress.enabled=true \
--set grafana.enabled=true \
--set kiali.enabled=true \
--set "kiali.dashboard.jaegerURL=http://jaeger-query.kaws.skynet.com/jaeger" \
--set "kiali.dashboard.grafanaURL=http://grafana.kaws.skynet.com" \
--set "kiali.prometheusAddr=http://prometheus.kaws.skynet.com"
</code></pre>
<p>Is it possible to add alertmanager to istio setup?</p>
| jisnardo | <blockquote>
<p>Is it possible to add alertmanager to istio setup?</p>
</blockquote>
<p>Yes, it is possible.</p>
<p>As I could read on <a href="https://github.com/istio/istio/issues/17094" rel="nofollow noreferrer">github</a>:</p>
<blockquote>
<p>Generally Istio is not trying to manage production grade Prometheus, grafana, etc deployments. We are doing some work to make it easy to integrate istio with your own Prometheus, kiali, etc. See <a href="https://github.com/istio/installer/tree/master/istio-telemetry/prometheus-operator" rel="nofollow noreferrer">https://github.com/istio/installer/tree/master/istio-telemetry/prometheus-operator</a> as one way you can integrate with the Prometheus operator. You can define your own Prometheus setup then just add the configs to scrape istio components.</p>
</blockquote>
<p>You will have to change the prometheus <a href="https://github.com/istio/istio/tree/master/install/kubernetes/helm/istio/charts/prometheus" rel="nofollow noreferrer">values and templates</a> as shown <a href="https://github.com/helm/charts/tree/master/stable/prometheus" rel="nofollow noreferrer">there</a>, add the alertmanager yamls, and then configure it to work in the istio namespace.</p>
<blockquote>
<p>How can I add alertmanager to istio prometheus deployed by official helm chart?</p>
</blockquote>
<p>I would recommend using</p>
<pre><code>helm fetch istio.io/istio --untar
</code></pre>
<p>which downloads a chart to your local directory so you can view it.</p>
<p>Then add alertmanager, and install the istio helm chart from your local directory instead of the helm repository.</p>
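<p>As a hedged sketch (the values file name is a placeholder), installing or upgrading from the local copy could then look like:</p>
<pre><code>helm upgrade istio ./istio --namespace istio-system --tiller-namespace istio-system -f your-values.yaml
</code></pre>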
| Jakub |
<p>A container defined inside a deployment has a <code>livenessProbe</code> set up: by definition, it calls a remote endpoint and checks whether the response contains useful information or is empty (which should trigger the pod's restart).</p>
<p>The whole definition is as follows (I removed the further checks for better clarity of the markup):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: fc-backend-deployment
  labels:
    name: fc-backend-deployment
    app: fc-test
spec:
  replicas: 1
  selector:
    matchLabels:
      name: fc-backend-pod
      app: fc-test
  template:
    metadata:
      name: fc-backend-pod
      labels:
        name: fc-backend-pod
        app: fc-test
    spec:
      containers:
      - name: fc-backend
        image: localhost:5000/backend:1.3
        ports:
        - containerPort: 4044
        env:
        - name: NODE_ENV
          value: "dev"
        - name: REDIS_HOST
          value: "redis"
        livenessProbe:
          exec:
            command:
            - curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats | head -c 30 > /app/out.log
          initialDelaySeconds: 20
          failureThreshold: 12
          periodSeconds: 10
</code></pre>
<p>I also tried putting the command into an array:</p>
<pre class="lang-yaml prettyprint-override"><code>command: ["sh", "-c", "curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats", "|", "head", "-c", "30", ">", "/app/out.log"]
</code></pre>
<p>and splitting into separate lines:</p>
<pre class="lang-yaml prettyprint-override"><code>- /bin/bash
- -c
- curl
- -X
- GET
- $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats
- |
- head
- -c
- "30"
- >
- /app/out.log
</code></pre>
<p>and even like this:</p>
<pre class="lang-yaml prettyprint-override"><code>command:
- |
  curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats | head -c 30 > /app/out.log
</code></pre>
<p>All attempts were made with and without <code>(/bin/ba)sh -c</code> - with the same result.</p>
<p>But, as you're reading this, you already know that none of these worked.</p>
<p>I know this by <code>exec</code>'ing into the running container and trying to find the <code>/app/out.log</code> file - it wasn't present any time I checked the directory contents. It looks like the probe never gets executed.</p>
<p>The command run inside running container works just fine: data gets fetched and written to the specified file.</p>
<p>What might be causing the probe not to get executed?</p>
| AbreQueVoy | <p>When using the <code>exec</code> type of probes, Kubernetes will <strong>not</strong> run a shell to process the command, it will just run the command directly. This means that you can only use a single command and that the <code>|</code> character is considered just another parameter of your <code>curl</code>.</p>
<p>To solve the problem, you need to use <code>sh -c</code> to exec shell code, something like the following:</p>
<pre><code>livenessProbe:
  exec:
    command:
    - sh
    - -c
    - >-
      curl -X GET $BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/api/v3/stats |
      head -c 30 > /app/out.log
</code></pre>
| AndD |
<p>I am using the Bitnami PostgreSQL image to deploy a StatefulSet inside my cluster. I am not sure how to initialize the schema for the PostgreSQL pod without building on top of the Bitnami image. I have looked around on the internet and someone said to use init containers, but I am also not sure how exactly I would do that.</p>
| Ashish | <p>From the <a href="https://github.com/bitnami/bitnami-docker-postgresql" rel="nofollow noreferrer">Github Readme</a> of the Bitnami Docker image:</p>
<blockquote>
<p>When the container is executed for the first time, it will execute the
files with extensions <code>.sh</code>, <code>.sql</code> and <code>.sql.gz</code> located at
<code>/docker-entrypoint-initdb.d</code>.</p>
<p>In order to have your custom files inside the docker image you can
mount them as a volume.</p>
</blockquote>
<p>You can just mount such scripts under that directory using a ConfigMap volume. An example could be the following:</p>
<p>First, create the ConfigMap with the scripts, for example:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: p-init-sql
  labels:
    app: the-app-name
data:
  01_init_db.sql: |-
    # content of the script goes here
  02_second_init_db.sql: |-
    # more content for another script goes here
</code></pre>
<p>Second, under <code>spec.template.spec.volumes</code>, you can add:</p>
<pre><code>volumes:
  - name: p-init-sql
    configMap:
      name: p-init-sql
</code></pre>
<p>Then, under <code>spec.template.spec.containers[0].volumeMounts</code>, you can mount this volume with:</p>
<pre><code>volumeMounts:
  - mountPath: /docker-entrypoint-initdb.d
    name: p-init-sql
</code></pre>
<hr />
<p>With this said, you may find out that it is more easy to use <a href="https://helm.sh/" rel="nofollow noreferrer">HELM</a> Charts.</p>
<p>Bitnami provides HELM Charts for all its images which simplify the usage of such images by a lot (as everything is ready to be installed and configured from a simple <code>values.yaml</code> file)</p>
<p>For example, there is such a chart for postgresql which you can find <a href="https://github.com/bitnami/charts/tree/master/bitnami/postgresql/" rel="nofollow noreferrer">here</a> and that can be of inspiration in how to configure the docker image even if you decide to write your own Kubernetes resources around that image.</p>
| AndD |
<p>I'm trying to start using helm and when I type <code>helm init</code>
it shows me the following error:</p>
<pre><code>Creating C:\Users\username\.helm\repository\repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp 74.125.193.128:443: connectex: No connection could be made because the target machine actively refused it.
</code></pre>
<p>I tried pinging 74.125.193.128:443 as well, and it doesn't work either. I thought it was a proxy issue, but it's not, so I tried looking online for similar issues and haven't found any with the same error.</p>
| teamdever | <p>What eventually was the problem is that a <code>repositories.yaml</code> file didn't exist in the .helm/repository folder.
It worked when I created the file with the following content:</p>
<pre><code>apiVersion: v1
repositories:
  - name: charts
    url: "https://kubernetes-charts.storage.googleapis.com"
  - name: local
    url: "http://localhost:8879/charts"
</code></pre>
<p>Then I could do <code>helm init</code> with no problem.</p>
| teamdever |
<p>I'm trying to deploy the Vanilla MineCraft Server from <a href="https://github.com/helm/charts/tree/master/stable/minecraft" rel="nofollow noreferrer">stable/minecraft</a> using Helm on Kubernetes 1.14 running on AWS EKS, but I am consistently getting either <code>CrashLoopBackOff</code> or <code>Liveness Probe Failures</code>. This seems strange to me as I'm deploying the chart as specified in the documentation:</p>
<pre><code>helm install --name mine-release --set minecraftServer.eula=TRUE --namespace=mine-release stable/minecraft
</code></pre>
<p>Already Attempted Debugging:</p>
<ol>
<li>Tried decreasing and increasing memory <code>helm install --name mine-release --set resources.requests.memory="1024Mi" --set minecraftServer.memory="1024M" --set minecraftServer.eula=TRUE --namespace=mine-release stable/minecraft</code></li>
<li>Tried viewing logs through <code>kubectl logs mine-release-minecraft-56f9c8588-xn9pv --namespace mine-release</code> but this error always appears</li>
</ol>
<pre><code>Error from server: Get https://10.0.143.216:10250/containerLogs/mine-release/mine-release-minecraft-56f9c8588-xn9pv/mine-release-minecraft: dial tcp 10.0.143.216:10250: i/o timeout
</code></pre>
<p>To give more context the <code>kubectl describe pods mine-release-minecraft-56f9c8588-xn9pv --namespace mine-release</code> output for pod description and events are below:</p>
<pre><code>Name: mine-release-minecraft-56f9c8588-xn9pv
Namespace: mine-release
Priority: 0
PriorityClassName: <none>
Node: ip-10-0-143-216.ap-southeast-2.compute.internal/10.0.143.216
Start Time: Fri, 11 Oct 2019 08:48:34 +1100
Labels: app=mine-release-minecraft
pod-template-hash=56f9c8588
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 10.0.187.192
Controlled By: ReplicaSet/mine-release-minecraft-56f9c8588
Containers:
mine-release-minecraft:
Container ID: docker://893f622e1129937fab38dc902e25e95ac86c2058da75337184f105848fef773f
Image: itzg/minecraft-server:latest
Image ID: docker-pullable://itzg/minecraft-server@sha256:00f592eb6660682f327770d639cf10692b9617fa8b9a764b9f991c401e325105
Port: 25565/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 11 Oct 2019 08:50:56 +1100
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 11 Oct 2019 08:50:03 +1100
Finished: Fri, 11 Oct 2019 08:50:53 +1100
Ready: False
Restart Count: 2
Requests:
cpu: 500m
memory: 1Gi
Liveness: exec [mcstatus localhost:25565 status] delay=30s timeout=1s period=5s #success=1 #failure=3
Readiness: exec [mcstatus localhost:25565 status] delay=30s timeout=1s period=5s #success=1 #failure=3
Environment:
EULA: true
TYPE: VANILLA
VERSION: 1.14.4
DIFFICULTY: easy
WHITELIST:
OPS:
ICON:
MAX_PLAYERS: 20
MAX_WORLD_SIZE: 10000
ALLOW_NETHER: true
ANNOUNCE_PLAYER_ACHIEVEMENTS: true
ENABLE_COMMAND_BLOCK: true
FORCE_gameMode: false
GENERATE_STRUCTURES: true
HARDCORE: false
MAX_BUILD_HEIGHT: 256
MAX_TICK_TIME: 60000
SPAWN_ANIMALS: true
SPAWN_MONSTERS: true
SPAWN_NPCS: true
VIEW_DISTANCE: 10
SEED:
MODE: survival
MOTD: Welcome to Minecraft on Kubernetes!
PVP: false
LEVEL_TYPE: DEFAULT
GENERATOR_SETTINGS:
LEVEL: world
ONLINE_MODE: true
MEMORY: 512M
JVM_OPTS:
JVM_XX_OPTS:
Mounts:
/data from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-j8zql (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mine-release-minecraft-datadir
ReadOnly: false
default-token-j8zql:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-j8zql
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
</code></pre>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m25s default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Normal Scheduled 2m24s default-scheduler Successfully assigned mine-release/mine-release-minecraft-56f9c8588-xn9pv to ip-10-0-143-216.ap-southeast-2.compute.internal
Warning FailedAttachVolume 2m22s (x3 over 2m23s) attachdetach-controller AttachVolume.Attach failed for volume "pvc-b48ba754-eba7-11e9-b609-02ed13ff0a10" : "Error attaching EBS volume \"vol-08b29bb4eeca4df56\"" to instance "i-00ae1f5b96eed8e6a" since volume is in "creating" state
Normal SuccessfulAttachVolume 2m18s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-b48ba754-eba7-11e9-b609-02ed13ff0a10"
Warning Unhealthy 60s kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal Readiness probe failed: Traceback (most recent call last):
File "/usr/bin/mcstatus", line 11, in <module>
sys.exit(cli())
File "/usr/lib/python2.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/lib/python2.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib/python2.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib/python2.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/mcstatus/scripts/mcstatus.py", line 58, in status
response = server.status()
File "/usr/lib/python2.7/site-packages/mcstatus/server.py", line 49, in status
connection = TCPSocketConnection((self.host, self.port))
File "/usr/lib/python2.7/site-packages/mcstatus/protocol/connection.py", line 129, in __init__
self.socket = socket.create_connection(addr, timeout=timeout)
File "/usr/lib/python2.7/socket.py", line 575, in create_connection
raise err
socket.error: [Errno 99] Address not available
Normal Pulling 58s (x2 over 2m14s) kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal pulling image "itzg/minecraft-server:latest"
Normal Killing 58s kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal Killing container with id docker://mine-release-minecraft:Container failed liveness probe.. Container will be killed and recreated.
Normal Started 55s (x2 over 2m11s) kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal Started container
Normal Pulled 55s (x2 over 2m11s) kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal Successfully pulled image "itzg/minecraft-server:latest"
Normal Created 55s (x2 over 2m11s) kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal Created container
Warning Unhealthy 25s (x2 over 100s) kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal Readiness probe failed: Traceback (most recent call last):
File "/usr/bin/mcstatus", line 11, in <module>
sys.exit(cli())
File "/usr/lib/python2.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/lib/python2.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib/python2.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib/python2.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/mcstatus/scripts/mcstatus.py", line 58, in status
response = server.status()
File "/usr/lib/python2.7/site-packages/mcstatus/server.py", line 61, in status
raise exception
socket.error: [Errno 104] Connection reset by peer
Warning Unhealthy 20s (x8 over 95s) kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal Readiness probe failed:
Warning Unhealthy 17s (x5 over 97s) kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal Liveness probe failed:
</code></pre>
<p>A bit more about my Kubernetes setup:</p>
<p>Kubernetes version 1.14 and nodes running on <code>m5.larges</code></p>
| James Marino | <p>I reproduced your problem, and the answer is the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="nofollow noreferrer">readiness and liveness probes.</a></p>
<p>Your chart doesn't have enough time to start up, so after the readiness probe returns false, the liveness probe kills the container and it tries again, and again.</p>
<blockquote>
<p>livenessProbe: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.</p>
<p>readinessProbe: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.</p>
</blockquote>
<p>You can either use your original command with my edits added:</p>
<pre><code>helm install --name mine-release --set resources.requests.memory="1024Mi" --set minecraftServer.memory="1024M" --set minecraftServer.eula=TRUE --set livenessProbe.initialDelaySeconds=90 --set livenessProbe.periodSeconds=15 --set readinessProbe.initialDelaySeconds=90 --set readinessProbe.periodSeconds=15 --namespace=mine-release stable/minecraft
</code></pre>
<p><strong>OR</strong></p>
<p>Use helm fetch to download the chart to your machine:</p>
<pre><code>helm fetch stable/minecraft --untar
</code></pre>
<p>Instead of changing values in the helm install command, you can use a text editor like vi or nano and update everything in <a href="https://github.com/helm/charts/blob/master/stable/minecraft/values.yaml" rel="nofollow noreferrer">minecraft/values.yaml</a>:</p>
<pre><code>vi/nano ./minecraft/values.yaml
</code></pre>
<p>The minecraft/values.yaml file after editing:</p>
<pre><code># ref: https://hub.docker.com/r/itzg/minecraft-server/
image: itzg/minecraft-server
imageTag: latest
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
requests:
memory: 1024Mi
cpu: 500m
nodeSelector: {}
tolerations: []
affinity: {}
securityContext:
# Security context settings
runAsUser: 1000
fsGroup: 2000
# Most of these map to environment variables. See Minecraft for details:
# https://hub.docker.com/r/itzg/minecraft-server/
livenessProbe:
command:
- mcstatus
- localhost:25565
- status
initialDelaySeconds: 90
periodSeconds: 15
readinessProbe:
command:
- mcstatus
- localhost:25565
- status
initialDelaySeconds: 90
periodSeconds: 15
minecraftServer:
# This must be overridden, since we can't accept this for the user.
eula: "TRUE"
# One of: LATEST, SNAPSHOT, or a specific version (ie: "1.7.9").
version: "1.14.4"
# This can be one of "VANILLA", "FORGE", "SPIGOT", "BUKKIT", "PAPER", "FTB", "SPONGEVANILLA"
type: "VANILLA"
# If type is set to FORGE, this sets the version; this is ignored if forgeInstallerUrl is set
forgeVersion:
# If type is set to SPONGEVANILLA, this sets the version
spongeVersion:
# If type is set to FORGE, this sets the URL to download the Forge installer
forgeInstallerUrl:
# If type is set to BUKKIT, this sets the URL to download the Bukkit package
bukkitDownloadUrl:
# If type is set to SPIGOT, this sets the URL to download the Spigot package
spigotDownloadUrl:
# If type is set to PAPER, this sets the URL to download the PaperSpigot package
paperDownloadUrl:
# If type is set to FTB, this sets the server mod to run
ftbServerMod:
# Set to true if running Feed The Beast and get an error like "unable to launch forgemodloader"
ftbLegacyJavaFixer: false
# One of: peaceful, easy, normal, and hard
difficulty: easy
# A comma-separated list of player names to whitelist.
whitelist:
# A comma-separated list of player names who should be admins.
ops:
# A server icon URL for server listings. Auto-scaled and transcoded.
icon:
# Max connected players.
maxPlayers: 20
# This sets the maximum possible size in blocks, expressed as a radius, that the world border can obtain.
maxWorldSize: 10000
# Allows players to travel to the Nether.
allowNether: true
# Allows server to announce when a player gets an achievement.
announcePlayerAchievements: true
# Enables command blocks.
enableCommandBlock: true
# If true, players will always join in the default gameMode even if they were previously set to something else.
forcegameMode: false
# Defines whether structures (such as villages) will be generated.
generateStructures: true
# If set to true, players will be set to spectator mode if they die.
hardcore: false
# The maximum height in which building is allowed.
maxBuildHeight: 256
# The maximum number of milliseconds a single tick may take before the server watchdog stops the server with the message. -1 disables this entirely.
maxTickTime: 60000
# Determines if animals will be able to spawn.
spawnAnimals: true
# Determines if monsters will be spawned.
spawnMonsters: true
# Determines if villagers will be spawned.
spawnNPCs: true
# Max view distance (in chunks).
viewDistance: 10
# Define this if you want a specific map generation seed.
levelSeed:
# One of: creative, survival, adventure, spectator
gameMode: survival
# Message of the Day
motd: "Welcome to Minecraft on Kubernetes!"
# If true, enable player-vs-player damage.
pvp: false
# One of: DEFAULT, FLAT, LARGEBIOMES, AMPLIFIED, CUSTOMIZED
levelType: DEFAULT
# When levelType == FLAT or CUSTOMIZED, this can be used to further customize map generation.
# ref: https://hub.docker.com/r/itzg/minecraft-server/
generatorSettings:
worldSaveName: world
# If set, this URL will be downloaded at startup and used as a starting point
downloadWorldUrl:
# force re-download of server file
forceReDownload: false
# If set, the modpack at this URL will be downloaded at startup
downloadModpackUrl:
# If true, old versions of downloaded mods will be replaced with new ones from downloadModpackUrl
removeOldMods: false
# Check accounts against Minecraft account service.
onlineMode: true
# If you adjust this, you may need to adjust resources.requests above to match.
memory: 1024M
# General JVM options to be passed to the Minecraft server invocation
jvmOpts: ""
# Options like -X that need to proceed general JVM options
jvmXXOpts: ""
serviceType: LoadBalancer
rcon:
# If you enable this, make SURE to change your password below.
enabled: false
port: 25575
password: "CHANGEME!"
serviceType: LoadBalancer
query:
# If you enable this, your server will be "published" to Gamespy
enabled: false
port: 25565
## Additional minecraft container environment variables
##
extraEnv: {}
persistence:
## minecraft data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
dataDir:
# Set this to false if you don't care to persist state between restarts.
enabled: true
Size: 1Gi
podAnnotations: {}
</code></pre>
<p>Then we use helm install</p>
<pre><code>helm install --name mine-release --namespace=mine-release ./minecraft -f ./minecraft/values.yaml
</code></pre>
<p>Results from helm install:</p>
<pre><code>NAME: mine-release
LAST DEPLOYED: Fri Oct 11 14:52:17 2019
NAMESPACE: mine-release
STATUS: DEPLOYED
RESOURCES:
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mine-release-minecraft-datadir Pending standard 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mine-release-minecraft-f4558bfd5-mwm55 0/1 Pending 0 0s
==> v1/Secret
NAME TYPE DATA AGE
mine-release-minecraft Opaque 1 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mine-release-minecraft LoadBalancer 10.0.13.180 <pending> 25565:32020/TCP 0s
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mine-release-minecraft 0/1 1 0 0s
NOTES:
Get the IP address of your Minecraft server by running these commands in the
same shell:
!! NOTE: It may take a few minutes for the LoadBalancer IP to be available. !!
You can watch for EXTERNAL-IP to populate by running:
kubectl get svc --namespace mine-release -w mine-release-minecraft
</code></pre>
<p>Results from logs:</p>
<pre><code>[12:53:45] [Server-Worker-1/INFO]: Preparing spawn area: 98%
[12:53:45] [Server thread/INFO]: Time elapsed: 26661 ms
[12:53:45] [Server thread/INFO]: Done (66.833s)! For help, type "help"
[12:53:45] [Server thread/INFO]: Starting remote control listener
[12:53:45] [RCON Listener #1/INFO]: RCON running on 0.0.0.0:25575
</code></pre>
| Jakub |
<p>If I deploy Postgres in a StatefulSet <strong>without</strong> using replicas (just one pod) and I kill the node that the StatefulSet is running on, will I be able to start up the node and reconnect to a persisted database?</p>
<p>Here is an example configuration:
<a href="https://medium.com/@suyashmohan/setting-up-postgresql-database-on-kubernetes-24a2a192e962" rel="nofollow noreferrer">https://medium.com/@suyashmohan/setting-up-postgresql-database-on-kubernetes-24a2a192e962</a></p>
<p>I am working with someone who is convinced this should not work and that StatefulSets only make sense as a way to maintain state between replicas. I'm under the impression that the problem of mounting the PG data to ephemeral pods is specific to NOT using a StatefulSet, and that even though there is only one pod in the example above, this will still make use of the StatefulSet to solve the problem.</p>
<p>(as in this official MySQL example: <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/</a>)</p>
| Happy Machine | <p>The database will be persisted if the Pods make use of persistent volume storage. Ephemeral volumes are <strong>not</strong> persisted and should not be used to save a db; as the name says, there is no long-term guarantee about durability.</p>
<p>But if your Pod saves the database on a persistent volume of some sort (such as local storage on the node itself or something more complex) then it will be persisted between runs.</p>
<p>This means that if you have your Pod running on a node, let's say using local storage on that node, and you stop the node and then make it restart correctly, the Pod will be scheduled again and the persistent volume will be there with all the data saved previously.</p>
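<p>As a minimal sketch (the names, image and storage class below are assumptions, not taken from the linked article), a single-replica Postgres StatefulSet with a volumeClaimTemplate looks roughly like this; the PVC it creates is not deleted when the Pod restarts:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres          # headless Service used for the stable network identity
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PVC per replica; it survives Pod restarts
  - metadata:
      name: pgdata
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard # assumption; use whatever class your cluster provides
      resources:
        requests:
          storage: 5Gi
</code></pre>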
<hr />
<p>With this said, if you have only 1 Pod (StatefulSet with just 1 replica) and the node on which the Pod is currently running is somehow killed / stops working / stops answering then the Pod will <strong>not</strong> automatically restart on another node (not even if you are not using local storage)</p>
<p>You will be able to force it to run on another node, sure, but only with manual operations.</p>
<p>This is because (from the <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/" rel="nofollow noreferrer">docs</a>):</p>
<blockquote>
<p>In normal operation of a StatefulSet, there is never a need to force
delete a StatefulSet Pod. The StatefulSet controller is responsible
for creating, scaling and deleting members of the StatefulSet. It
tries to ensure that the specified number of Pods from ordinal 0
through N-1 are alive and ready. <strong>StatefulSet ensures that, at any</strong>
<strong>time, there is at most one Pod with a given identity running in a</strong>
<strong>cluster</strong>. This is referred to as at most one semantics provided by a
StatefulSet.</p>
</blockquote>
<p>If the controller <strong>cannot</strong> be sure if a Pod is running or not (and the example of a node getting killed or stopping to work correctly for a error is such a situation) then the Pod will <strong>never</strong> be restarted, until either:</p>
<ul>
<li>Manual operation such as a force delete.</li>
<li>The node starts answering again and becomes Ready once more.</li>
</ul>
<p>Note that draining a node will not create any problem as it will gracefully terminate StatefulSets Pods before starting them again (on other nodes).</p>
<hr />
<p>StatefulSets can work really well for databases, but usually it requires a more complex installation with multi-primary nodes and (at least) 3 replicas.</p>
<p>Also, databases requires very fast write operations on disk and as such perform better if they can work on high quality disks.</p>
<hr />
<p><strong>Update:</strong></p>
<p>StatefulSets are usually intended to be used when each of the replica Pods requires a unique identity (multi-primary databases or apps that use quorum are good examples of this necessity).</p>
<p>When deployed with only 1 replica, the differences from a Deployment are small (but there are differences; for example, a Deployment's Pod would eventually restart on another node if the node on which it was running stops working, while a StatefulSet Pod will require manual intervention). In general you should refer to "<a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#using-statefulsets" rel="nofollow noreferrer">Using StatefulSets</a>" in the docs to decide if an app needs to run as a StatefulSet or a Deployment.</p>
<p>Personally, I would run a database as a StatefulSet because it is a stateful app, but I would also run it with 3 replicas, so that it can suffer the loss of one Pod without stopping to work.</p>
| AndD |
<p>I have an existing ebs volume in AWS with data on it. I need to create a PVC in order to use it in my pods.
Following this guide: <a href="https://medium.com/pablo-perez/launching-a-pod-with-an-existing-ebs-volume-mounted-in-k8s-7b5506fa7fa3" rel="nofollow noreferrer">https://medium.com/pablo-perez/launching-a-pod-with-an-existing-ebs-volume-mounted-in-k8s-7b5506fa7fa3</a></p>
<p>persistentvolume.yaml</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: jenkins-volume
labels:
type: amazonEBS
spec:
capacity:
storage: 60Gi
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
volumeID: vol-011111111x
fsType: ext4
</code></pre>
<pre><code>[$$]>kubectl describe pv
Name: jenkins-volume
Labels: type=amazonEBS
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 60Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-011111111x
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
</code></pre>
<p>persistentVolumeClaim.yaml</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: jenkins-pvc-shared4
namespace: jenkins
spec:
storageClassName: gp2
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 60Gi
</code></pre>
<pre><code>[$$]>kubectl describe pvc jenkins-pvc-shared4 -n jenkins
Name: jenkins-pvc-shared4
Namespace: jenkins
StorageClass: gp2
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 12s (x2 over 21s) persistentvolume-controller waiting for first consumer to be created before binding
[$$]>kubectl get pvc jenkins-pvc-shared4 -n jenkins
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins-pvc-shared4 Pending gp2 36s
</code></pre>
<p>The status is pending (waiting for the consumer to be attached) - but it should already be provisioned.</p>
| SimonSK | <p>The right config should be:</p>
<pre><code>[$$]>cat persistentvolume2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-name
spec:
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: ext4
volumeID: aws://eu-west-2a/vol-123456-ID
capacity:
storage: 60Gi
persistentVolumeReclaimPolicy: Retain
storageClassName: gp2
[$$]>cat persistentVolumeClaim2.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: new-namespace
labels:
app.kubernetes.io/name: <app-name>
name: pvc-name
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 60Gi
storageClassName: gp2
volumeName: pv-name
</code></pre>
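<p>To check that the claim actually binds to the pre-created volume (names and namespace follow the manifests above), both should show a <code>Bound</code> status:</p>
<pre><code>kubectl get pv pv-name
kubectl get pvc pvc-name -n new-namespace
</code></pre>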
| SimonSK |
<p>Is there a way to cut out some of the verbosity you get in the Jenkins console output?</p>
<p>This is my context.</p>
<p>I've configured the Kubernetes cloud on my Jenkins server and each time I run a job, I get a ton of output in the form of system configuration that I don't need. And it's consistent with every build run using the jnlp agent.</p>
<pre><code>‘foo’ is offline
Agent foo is provisioned from template foo
---
apiVersion: "v1"
kind: "Pod"
metadata:
labels:
jenkins: "slave"
jenkins/label-digest: "xxxxxxxxxxxxxxx"
jenkins/label: "foo"
name: "foo"
spec:
containers:
- args:
- "********"
- "foo"
env:
- name: "JENKINS_SECRET"
value: "********"
- name: "JENKINS_TUNNEL"
value: "1.2.3.4:50000"
- name: "JENKINS_AGENT_NAME"
value: "foo"
- name: "JENKINS_NAME"
value: "foo"
- name: "JENKINS_AGENT_WORKDIR"
value: "/home/jenkins/agent"
- name: "JENKINS_URL"
value: "https://example.com"
</code></pre>
<p>My base image is <code>jenkins/inbound-agent:4.3-4</code>. I'm not passing any <strong>Command to run</strong> or <strong>Arguments to pass to the command</strong> in my Pod template.</p>
<p>So I'm a little curious as to where these extra logs are coming from.</p>
<p>I do expect and get the standard console output that relates to the actual build, but I'd like to know how to get rid of these extra bits.</p>
<p>Anyone had any experience with this?</p>
| Hammed | <p>I'm using Jenkins version 2.378. Unfortunately, disabling 'ShowRawYaml' under Kubernetes Pod Templates did not help. It may be a bug.</p>
<p>Alternatively, I'm using the statement below in the pipeline of my Jenkinsfile. This disables the Pod YAML display in the console output. See the picture attached for reference.</p>
<p>showRawYaml 'false'</p>
<p><a href="https://i.stack.imgur.com/ccbhi.png" rel="nofollow noreferrer">Jenkins Pipeline snapshot</a></p>
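<p>For context, a minimal declarative pipeline sketch of where that statement goes (the pod YAML and stage are placeholders, and I am assuming the Kubernetes plugin's <code>kubernetes</code> agent block accepts the same <code>showRawYaml</code> option as <code>podTemplate</code>):</p>
<pre><code>pipeline {
  agent {
    kubernetes {
      // suppress the raw pod YAML dump in the build's console output
      showRawYaml false
      yaml '''
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: jnlp
            image: jenkins/inbound-agent:4.3-4
      '''
    }
  }
  stages {
    stage('Build') {
      steps {
        echo 'Running without the pod YAML being printed'
      }
    }
  }
}
</code></pre>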
| Chethan |
<p>I'm trying to restrict a <code>ServiceAccount</code>'s RBAC permissions to manage secrets in all namespaces:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: gitlab-secrets-manager
rules:
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- gitlab-registry
verbs:
- get
- list
- create
- update
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab-service-account
namespace: gitlab
secrets:
- name: gitlab-service-account-token-lllll
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gitlab-service-account-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: gitlab-secrets-manager
subjects:
- kind: ServiceAccount
name: gitlab-service-account
namespace: gitlab
</code></pre>
<p>So far, I've created the ServiceAccount and the related CRB, however, actions are failing:</p>
<pre><code>secrets "gitlab-registry" is forbidden: User "system:serviceaccount:gitlab:default" cannot get resource "secrets" in API group "" in the namespace "shamil"
</code></pre>
<p>Anyone know what I'm missing?</p>
| bear | <p>You can follow these steps:</p>
<ul>
<li>First, ensure that your ServiceAccount named <code>gitlab-service-account</code> in the <code>gitlab</code> namespace exists in the cluster.</li>
<li>Then create a <code>ClusterRole</code> as you have defined:</li>
</ul>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: gitlab-secrets-manager
rules:
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- gitlab-registry
verbs:
- get
- list
- create
- update
</code></pre>
<ul>
<li>Then you will also create a <code>ClusterRoleBinding</code> to grant permission at the cluster level.</li>
</ul>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gitlab-secrets-manager-clusterrolebinding
subjects:
- kind: ServiceAccount
name: gitlab-service-account
namespace: gitlab
roleRef:
kind: ClusterRole
name: gitlab-secrets-manager
apiGroup: rbac.authorization.k8s.io
</code></pre>
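<p>Once both objects are applied, you can verify the permissions from the command line by impersonating the ServiceAccount (the <code>shamil</code> namespace is taken from the error message in the question; your own user needs impersonation rights for this to work):</p>
<pre><code>kubectl auth can-i get secrets/gitlab-registry \
  --as=system:serviceaccount:gitlab:gitlab-service-account \
  -n shamil
</code></pre>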
| Sayf Uddin Al Azad Sagor |
<p>Kubernetes returns the following error:</p>
<pre><code>fabiansc@Kubernetes-Master:~/Dokumente$ kubectl run -f open-project.yaml
Error: required flag(s) "image" not set
</code></pre>
<p>I want to create OpenProject based on an on-prem Kubernetes installation. There are <a href="https://www.openproject.org/docker/" rel="nofollow noreferrer">references for docker</a>; however, I would like to use Kubernetes on top of it to get more familiar with it. It's important to keep things working after a reboot of my host, therefore I want to persist the OpenProject configuration. Docker covers this by adding a volume (-v option):</p>
<pre><code>docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=secret \
-v /var/lib/openproject/pgdata:/var/openproject/pgdata \
-v /var/lib/openproject/static:/var/openproject/assets \
openproject/community:8
</code></pre>
<p>My Kubernetes file looks like the following:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: open-project-pod
labels:
environment: production
spec:
containers:
- name: open-project-container
image: openproject/community:8
ports:
- name: open-project
containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: "/var/openproject"
name: data
livenessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 600
timeoutSeconds: 1
volumes:
- name: data
persistentVolumeClaim:
claimName: open-project-storage-claim
imagePullSecrets:
- name: regcred
</code></pre>
<p>Error: required flag(s) "image" not set</p>
| Fabiansc | <p>The correct command is <code>kubectl apply -f open-project.yaml</code>. <code>kubectl run</code> starts a workload from an image and therefore requires the <code>--image</code> flag, while <code>kubectl apply -f</code> creates the resources described in an existing manifest file.</p>
| Fabiansc |
<p>I'm trying to configure my ingress controller to allow only the GET method on it.
I saw there is a CORS config that I can use to do that, but I have no idea why it doesn't work. Here is my config:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-elasticsearch-service
namespace: my-application-namespace
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://my-host.com"
spec:
tls:
- hosts:
- my-host.com
secretName: my-ingress-secret
rules:
- host: my-host.com
http:
paths:
- path: /elasticsearch/(.+)
pathType: Prefix
backend:
service:
name: elasticsearch-service
port:
number: 9200
</code></pre>
<p>As you can guess, I'm trying to expose an Elasticsearch instance, but only the GET method, so my frontend can use it directly.</p>
<p>Another option I saw is that it's possible to configure nginx with "nginx.ingress.kubernetes.io/server-snippet" like the following (from the documentation):</p>
<pre><code>
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
set $agentflag 0;
if ($http_user_agent ~* "(Mobile)" ){
set $agentflag 1;
}
if ( $agentflag = 1 ) {
return 301 https://m.example.com;
}
</code></pre>
<p>I've tried both configs; I put this in the annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
location ~* "^/elasticsearch/(.+)" {
if ($request_method != GET) {
return 403;
}
}
</code></pre>
<p>But my entire Elasticsearch GET route returned a 404 for some reason, not sure why,
while the other HTTP methods return a 403.</p>
<p>Anyone got an idea on how I can achieve this properly?</p>
<p>Thanks.</p>
| kevP-Sirius | <p>Finally solved it.
I used the wrong snippet: I had to use configuration-snippet instead of server-snippet, and without the location condition, because it was overwriting the Kubernetes-generated config and I couldn't reproduce the way Kubernetes redirects inside my location block.</p>
<p>As a result, the final solution looks like the following:</p>
<pre><code> nginx.ingress.kubernetes.io/configuration-snippet: |
if ($request_method != GET) {
return 403;
}
</code></pre>
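<p>A quick way to check the behaviour (the host and path are taken from the question; the exact Elasticsearch endpoints are only examples):</p>
<pre><code># allowed - should be proxied to Elasticsearch
curl -i https://my-host.com/elasticsearch/_cat/indices

# blocked by the snippet - should return 403
curl -i -X POST https://my-host.com/elasticsearch/some-index/_doc \
     -H 'Content-Type: application/json' -d '{}'
</code></pre>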
| kevP-Sirius |
<p>Could somebody let me know how service discovery happens in Docker Swarm and Kubernetes, so I can understand the difference? If there is any source, like books or docs, which explains this, please share it.</p>
| JPNagarajan | <p>Here is what I could find <a href="https://victorops.com/blog/kubernetes-vs-docker-swarm" rel="nofollow noreferrer">there</a> and <a href="https://vexxhost.com/blog/kubernetes-vs-docker-swarm-containerization-platforms/" rel="nofollow noreferrer">there</a>:</p>
<p><strong>Kubernetes vs. Docker Swarm</strong></p>
<blockquote>
<p>Docker Swarm and Kubernetes both offer different approaches to service discovery. In K8s you need to define containers as services manually. On the other hand, containers in Swarm can communicate via virtual private IP addresses and service names regardless of their underlying hosts.</p>
</blockquote>
<hr />
<blockquote>
<p>Kubernetes network is flat, as it enables all pods to communicate with one another. In Kubernetes, the model requires two CIDRs. The first one requires pods to get an IP address, the other is for services.</p>
<p>In a Docker Swarm, a node joining a cluster creates an overlay network of services that span all of the hosts in the Swarm and a host only Docker bridge network for containers. In Docker Swarm, users have the option to encrypt container data traffic when creating an overlay network by on their own.</p>
</blockquote>
<hr />
<blockquote>
<p>Kubernetes provides easy service organization with pods</p>
<p>With Kubernetes you don’t need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.</p>
</blockquote>
<p><strong>Kubernetes</strong></p>
<p>There is an example which provides information about service discovery in <a href="https://platform9.com/blog/kubernetes-service-discovery-principles-in-practice/" rel="nofollow noreferrer">kubernetes</a>.</p>
<p>And more information is available in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">kubernetes documentation</a>.</p>
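<p>As a small illustration (the names are made up), a ClusterIP Service in Kubernetes is discoverable by other Pods through its DNS name:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: demo
spec:
  selector:
    app: my-app          # Pods with this label become endpoints of the Service
  ports:
  - port: 80
    targetPort: 8080
</code></pre>
<p>Any Pod in the cluster can then reach it as <code>my-service.demo.svc.cluster.local</code> (or simply <code>my-service</code> from inside the same namespace), and kube-proxy load-balances across the matching Pods.</p>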
<hr />
<p><strong>Docker swarm</strong></p>
<p>There is an example which provides information about service discovery in <a href="https://medium.com/quick-mobile/using-docker-swarm-to-do-service-discovery-and-deploy-like-a-god-ba5ef2be8094" rel="nofollow noreferrer">docker swarm</a>.</p>
<p>There is training on how to use <a href="https://training.play-with-docker.com/swarm-service-discovery/" rel="nofollow noreferrer">Service Discovery under Docker Swarm Mode</a>.</p>
<p>And some more information is available in these <a href="https://www.linux.com/tutorials/how-service-discovery-works-containerized-applications-using-docker-swarm-mode/" rel="nofollow noreferrer">linux tutorials</a>.</p>
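<p>For comparison, a rough Docker Swarm sketch (the service names and the application image are made up): services attached to the same overlay network resolve each other by service name through Swarm's built-in DNS and virtual IPs:</p>
<pre><code># overlay networks require swarm mode
docker swarm init
docker network create --driver overlay my-overlay

docker service create --name api --network my-overlay my-api-image   # hypothetical image
docker service create --name web --network my-overlay nginx

# inside any "web" task, the name "api" resolves to the api service's virtual IP
</code></pre>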
| Jakub |
<p>Hi everyone,</p>
<p>I have deployed a Kubernetes cluster based on kubeadm and, for the purpose of performing Horizontal Pod Autoscaling based on custom metrics, I have deployed prometheus-adapter through Helm.</p>
<p>Now I want to edit the configuration for prometheus-adapter, and because I am new to Helm, I don't know how to do this. Could you guide me on how to edit the deployed helm charts?</p>
| zain ul abedin | <p>I guess <a href="https://helm.sh/docs/helm/#helm-upgrade" rel="noreferrer">helm upgrade</a> is what you are looking for.</p>
<blockquote>
<p>This command upgrades a release to a specified version of a chart and/or updates chart values.</p>
</blockquote>
<p>So if you have deployed prometheus-adapter, you can use the <a href="https://helm.sh/docs/helm/#helm-fetch" rel="noreferrer">helm fetch</a> command.</p>
<blockquote>
<p>Download a chart from a repository and (optionally) unpack it in local directory</p>
</blockquote>
<p>You will have all the YAMLs; you can edit them and upgrade your currently deployed chart via helm upgrade.</p>
<p>I found an <a href="https://dzone.com/articles/create-install-upgrade-rollback-a-helm-chart-part" rel="noreferrer">example</a>, which should explain it to you more precisely. </p>
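<p>In short, with Helm 2 the flow would look something like this (the release and chart names are assumptions - replace them with the ones shown by <code>helm list</code>):</p>
<pre><code># download and unpack the chart locally
helm fetch stable/prometheus-adapter --untar

# edit the configuration
vi ./prometheus-adapter/values.yaml

# roll the changes out to the existing release
helm upgrade my-prometheus-adapter ./prometheus-adapter -f ./prometheus-adapter/values.yaml
</code></pre>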
| Jakub |
<p>I am trying to create a namespace in an AWS EKS cluster and keep getting an error.</p>
<p>I can do everything I want using the default namespace yet when I try to create a new namespace name I am forbidden.</p>
<p>It must be something that I have done incorrectly with the user "thera-eks".
Perhaps the role binding?</p>
<p>It looks like I gave the role access to everything since in the rules I gave it the * wildcard.</p>
<p>The command I use is -</p>
<pre><code>kubectl create namespace ernie
</code></pre>
<p>The error I get is -</p>
<pre><code>Error from server (Forbidden): namespaces is forbidden: User "thera-eks" cannot create resource "namespaces" in API group "" at the cluster scope
</code></pre>
<p>My role.yaml is:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: full_access
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
</code></pre>
<p>My rolebinding.yaml is:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: full_access_role_binding
subjects:
- kind: User
name: thera-eks
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: full_access
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>The aws-auth config map is:</p>
<pre><code>data:
mapRoles: |
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::9967xxxxxxxx:role/eksctl-ops-nodegroup-linux-ng-sys-NodeInstanceRole-346VJPTOXI7L
username: system:node:{{EC2PrivateDNSName}}
- groups:
- eks-role
- system:master
rolearn: arn:aws:iam::9967xxxxxxxx:role/thera-eks
username: thera-eks
mapUsers: |
- userarn: arn:aws:iam::9967xxxxxxxx:user/test-ecr
username: test-ecr
groups:
- eks-role
</code></pre>
<p>The AWS IAM permissions JSON for the role "thera-eks" is -</p>
<pre><code> {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:DescribeImages",
"ecr:DescribeRepositories",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:ListImages",
"ecr:PutImage",
"ecr:UploadLayerPart",
"ecr:GetAuthorizationToken"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"eks:*",
"iam:ListRoles",
"sts:AssumeRole"
],
"Resource": "*"
}
]
}
</code></pre>
| ErnieAndBert | <p>@mdaniel and @PEkambaram are right but I would like to expand and back it up with the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">official docs</a> for better understanding:</p>
<blockquote>
<p>An <code>RBAC</code> <code>Role</code> or <code>ClusterRole</code> contains rules that represent a set
of permissions. Permissions are purely additive (there are no "deny"
rules).</p>
<p>A <code>Role</code> always sets permissions within a particular namespace; when
you create a <code>Role</code>, you have to specify the namespace it belongs in.</p>
<p><code>ClusterRole</code>, by contrast, is a non-namespaced resource. The
resources have different names (<code>Role</code> and <code>ClusterRole</code>) because a
Kubernetes object always has to be either namespaced or not
namespaced; it can't be both.</p>
<p><code>ClusterRoles</code> have several uses. You can use a <code>ClusterRole</code> to:</p>
<ul>
<li><p>define permissions on namespaced resources and be granted within individual namespace(s)</p>
</li>
<li><p>define permissions on namespaced resources and be granted across all namespaces</p>
</li>
<li><p>define permissions on cluster-scoped resources</p>
</li>
</ul>
<p><strong>If you want to define a role within a namespace, use a <code>Role</code>; if you want to define a role cluster-wide, use a <code>ClusterRole</code>.</strong></p>
</blockquote>
<p>You will also find an example of a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#clusterrole-example" rel="nofollow noreferrer">ClusterRole</a>:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: secret-reader
rules:
- apiGroups: [""]
#
# at the HTTP level, the name of the resource for accessing Secret
# objects is "secrets"
resources: ["secrets"]
verbs: ["get", "watch", "list"]
</code></pre>
<p>and for a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#clusterrolebinding-example" rel="nofollow noreferrer">ClusterRoleBinding</a>:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
name: read-secrets-global
subjects:
- kind: Group
name: manager # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
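<p>Adapted to the question (a sketch only - the object names are made up and the verbs should be adjusted to your needs), granting the <code>thera-eks</code> user the right to create namespaces cluster-wide could look like this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-creator
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-creator-binding
subjects:
- kind: User
  name: thera-eks
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-creator
  apiGroup: rbac.authorization.k8s.io
</code></pre>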
<p>The linked docs will show you all the necessary details with examples that would help understand and setup your RBAC.</p>
| Wytrzymały Wiktor |
<p>I'm running Jenkins on GKE and having an issue when running a job - it can't download the image for the slave. Has anyone met a similar issue? It happens whether I use a private Container Registry on GCP or the official <code>jenkins/jnlp-slave</code> image.</p>
<pre><code>jenkins-agent-dz3kg 0/1 ErrImagePull 0 66s x.x.x.x gke-x-default-pool-xxxxx-x <none> <none>
jenkins-agent-dz3kg 0/1 ImagePullBackOff 0 73s x.x.x.x gke-x-default-pool-xxxxx-x <none> <none>
</code></pre>
<p>And the values file for the Jenkins helm chart is pretty plain:</p>
<pre><code>agent:
image: "gcr.io/my-image"
tag: "latest"
podName: "jenkins-agent"
TTYEnabled: true
resources:
requests:
cpu: "1"
memory: "1Gi"
limits:
cpu: "4"
memory: "4Gi"
</code></pre>
<p>Jenkins is installed with helm 2.13.1 and the config above:</p>
<pre><code>helm install stable/jenkins --name jenkins -f jenkins.yaml
</code></pre>
<p>And to show that the image is there:</p>
<pre><code>$ gcloud container images list
NAME
gcr.io/my-project/my-image
</code></pre>
<p>Does Jenkins need some special permissions or something?</p>
| CptDolphin | <p>It happens because the slave is not authenticated within GCP</p>
<p><a href="https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry" rel="nofollow noreferrer">Private registries</a> may require keys to read images from them. Credentials can be provided in several ways:</p>
<ul>
<li>Per-cluster: automatically configured on Google Compute Engine or Google Kubernetes Engine, so all pods can read the project’s private registry</li>
</ul>
<p>These 2 tutorials should be helpful:</p>
<ul>
<li><a href="https://itnext.io/setup-jenkins-with-google-container-registry-2f8d39aaa275" rel="nofollow noreferrer">https://itnext.io/setup-jenkins-with-google-container-registry-2f8d39aaa275</a></li>
<li><a href="https://plugins.jenkins.io/google-container-registry-auth" rel="nofollow noreferrer">https://plugins.jenkins.io/google-container-registry-auth</a></li>
</ul>
<p>Especially steps 1 and 2 from the first tutorial.</p>
<blockquote>
<p>1.Create a service account which has full access to GCR in Google Cloud.</p>
<p>2.In jenkins, create a credential for this service account with Google OAuth Credentials plugin.</p>
<p>3.Create a pull/push build step with docker-build-step plugin, and set the registry url to GCR.</p>
<p>4.Google Container Registry Auth plugin will provide the credential created in Step 2 to docker when the build step is executed.</p>
</blockquote>
| Jakub |
<p>I have followed <a href="https://darienmt.com/kubernetes/2019/03/31/kubernetes-at-home.html" rel="nofollow noreferrer">this web site</a> to configure kubernetes on an aws ubuntu (18.04) ec2 instance. I have followed the same steps as in the above web page, but after applying the network overlay, the coredns pods' state did not change to running.
<a href="https://i.stack.imgur.com/jDHoR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jDHoR.png" alt="enter image description here"></a> </p>
<p>My installed kubernetes versions are as below:</p>
<ul>
<li>kubeadm - GitVersion:"v1.16.0"</li>
<li>kubectl - GitVersion:"v1.16.0"</li>
<li>kubelet - Kubernetes v1.16.0</li>
</ul>
<p>To resolve this issue I have tried this answer on <a href="https://stackoverflow.com/questions/44086826/kubeadm-master-node-never-ready">stackoverflow</a>.</p>
<p>How can I resolve this issue?</p>
<p>The output of <code>kubectl get nodes</code></p>
<p><a href="https://i.stack.imgur.com/xWA7E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xWA7E.png" alt="enter image description here"></a></p>
<p><strong>Output of</strong> <code>kubectl describe pod coredns-644d7b6d9-nv9mj -n kube-system</code>:</p>
<pre><code> ubuntu@master:~$ sudo kubectl describe pod coredns-644d7b6d9-nv9mj -n kube-system
Name: coredns-644d7b6d9-nv9mj
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=644d7b6d9
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/coredns-644d7b6d9
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.6.2
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-54rgm (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-54rgm:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-54rgm
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate .
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate
</code></pre>
| Harshana Samaranayake | <p>So basically there are some problems with flannel working on kubernetes v1.16, and you need to add <code>"cniVersion": "0.2.0"</code> to the <a href="https://github.com/coreos/flannel/blob/d893bcbfe6b04791054aea6c7569dea4080cc289/Documentation/kube-flannel.yml#L108" rel="noreferrer">kube-flannel.yml</a> file.</p>
<pre><code>---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
    # SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.2.0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: arm64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: arm
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-ppc64le
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: ppc64le
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-s390x
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: s390x
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
</code></pre>
<p>I have already added it there, so all you need to do is run:</p>
<pre><code>sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo sysctl net.bridge.bridge-nf-call-iptables=1
</code></pre>
<p>Use nano or vi to create a kube-flannel.yaml file and copy the above YAML into it.</p>
<pre><code>sudo kubectl apply -f kube-flannel.yaml
</code></pre>
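<p>After applying it, you can watch the coredns pods come up and the node become Ready (this is just the usual verification, not part of the fix itself):</p>
<pre><code>sudo kubectl get pods -n kube-system -w
sudo kubectl get nodes
</code></pre>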
| Jakub |
<p>I am using helm version 2.14.1. I have created helm charts for an application that will be deployed by users to test their code on a kubernetes cluster. I want to add labels for username values, so I can retrieve deployments by user (deployments by user labels). Is there a way to include the system username in helm charts, just like we do in Java with <code>System.getProperty("user.name")</code>? My helm template is like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "common.fullname" . }}--{{ .Release.Name }}
  labels:
    application: {{ include "common.name" . }}
    branch: "{{ .Release.Name }}"
username: "{{ System.user.name }}" # need to fetch the logged in user from system here
spec:
...
</code></pre>
<p>Is there a standard way to achieve this or is there anyway I can allow users to input there usernames from command line while using <code>helm install</code> or <code>helm template</code> commands?</p>
<p>EDIT:
Although, the <code>--set</code> works for me in setting the values for my chart, I also need to set the same value in the dependencies. Something like this:</p>
<p>values.yaml</p>
<pre><code>username: ""
dependency1:
username: {{ .Values.username }}
dependency2:
username: {{ .Values.username }}
...
</code></pre>
<p>Of course the above implementation doesn't work. I need to reference the set value in the dependencies as well</p>
| Avdhut Mankavale | <p>This is a community wiki answer based on the comments and posted for better visibility. Feel free to expand it.</p>
<p>You can use the <code>helm template</code> command with a <code>--set</code> option:</p>
<pre><code>--set stringArray set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
--set-string stringArray set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
</code></pre>
<p>The <code>--set</code> parameters have the highest precedence among other methods of passing values into the charts. It means that by default values come from the <code>values.yaml</code> which can be overridden by a parent chart's <code>values.yaml</code>, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by <code>--set</code> parameters.</p>
<p>You can check more details and examples in the <a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer">official docs</a>.</p>
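<p>As a sketch for this particular case (the chart path and subchart names mirror the question and are otherwise assumptions), the caller's username can be injected from the shell at install time; subchart values can be targeted with the <code>subchart.key</code> notation or shared through the reserved <code>global</code> key:</p>
<pre><code># top-level value plus explicit values for the two dependencies
helm install ./mychart \
  --set username=$(whoami) \
  --set dependency1.username=$(whoami) \
  --set dependency2.username=$(whoami)

# or share one value with all subcharts via "global"
helm install ./mychart --set global.username=$(whoami)
</code></pre>
<p>Inside the templates of both the parent chart and the subcharts, the shared value is then read as <code>{{ .Values.global.username }}</code>.</p>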
| Wytrzymały Wiktor |
<p>I am trying to set up hive using mr3 on a kubernetes cluster hosted on AWS ec2. When I run the command run-hive.sh, the Hive server starts and the master DAG is initialised, but then it gets stuck on pending. When I describe the pod, this is the error message it shows. I have kept the resources to a minimum, so it should not be that issue, and I do not have any tainted nodes. If you know any alternative for running hive on Kubernetes with access to S3, or a better way to implement mr3 hive on a Kubernetes cluster, please share.</p>
<p><a href="https://i.stack.imgur.com/MR8Pi.png" rel="nofollow noreferrer">One of the node description</a></p>
| Manik Malhotra | <p>Based on the topic, I think the problem here is that your cluster does not have enough resources on the worker nodes, and the master node is <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">tainted</a>.</p>
<p>So the options here are either increasing the resources on the workers or removing the taint from the master node so you would be able to schedule pods there.</p>
<blockquote>
<p>Control plane node isolation</p>
<p>By default, your cluster will not schedule pods on the control-plane node for security reasons. If you want to be able to schedule pods on the control-plane node, e.g. for a single-machine Kubernetes cluster for development, run:</p>
</blockquote>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<blockquote>
<p>This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the control-plane node, meaning that the scheduler will then be able to schedule pods everywhere</p>
</blockquote>
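<p>To confirm which of the two it is, these commands show the taints and the allocatable/requested resources per node (the metrics-server addon is needed for <code>kubectl top</code>):</p>
<pre><code>kubectl describe nodes | grep -i taint
kubectl describe nodes | grep -A 7 "Allocated resources"
kubectl top nodes
</code></pre>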
| Jakub |
<p><br><strong>Question 1.)</strong>
<strong><br>Given the scenario a multi-container pod, where all containers have a defined CPU request:
<br>How would Kubernetes Horizontal Pod Autoscaler calculate CPU Utilization for Multi Container pods? <br></strong>
Does it average them? (((500m cpu req + 50m cpu req) /2) * X% HPA target cpu utilization <br>
Does it add them? ((500m cpu req + 50m cpu req) * X% HPA target cpu utilization <br>
Does it track them individually? (500m cpu req * X% HPA target cpu utilization = target #1, 50m cpu req * X% HPA target cpu utilization = target #2.) <br>
<br><strong>Question 2.)</strong> <br>
<strong>Given the scenario of a multi-container pod, where 1 container has a defined CPU request and a blank CPU request for the other containers: <br>
How would Kubernetes Horizontal Pod Autoscaler calculate CPU Utilization for Multi Container pods?</strong><br />
Does it work as if you only had a 1 container pod?</p>
<p><strong>Question 3.)</strong> <br>
<strong>Do the answers to questions 1 and 2 change based on the HPA API version?</strong> <br>I noticed stable/nginx-ingress helm chart, chart version 1.10.2, deploys an HPA for me with these specs:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
</code></pre>
<p>(I noticed apiVersion: autoscaling/v2beta2 now exists)</p>
<p><strong>Background Info:</strong>
<br> I recently had an issue with unexpected wild scaling / constantly going back and forth between min and max pods after adding a sidecar(2nd container) to an nginx ingress controller deployment (which is usually a pod with a single container). In my case, it was an oauth2 proxy, although I image istio sidecar container folks might run into this sort of problem all the time as well.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
spec:
replicas: 3
template:
spec:
containers:
- name: nginx-ingress-controller #(primary-container)
resources:
requests:
cpu: 500m #baseline light load usage in my env
memory: 2Gi #according to kubectl top pods
limits:
memory: 8Gi #(oom kill pod if this high, because somethings wrong)
- name: oauth2-proxy #(newly-added-2nd-sidecar-container)
resources:
requests:
cpu: 50m
memory: 50Mi
limits:
memory: 4Gi
</code></pre>
<p>I have an HPA (apiVersion: autoscaling/v1) with:</p>
<ul>
<li>min 3 replicas (to preserve HA during rolling updates)</li>
<li>targetCPUUtilizationPercentage = 150%</li>
</ul>
<p><strong>It occurred to me that the misconfiguration leading to unexpected wild scaling was caused by 2 issues:</strong></p>
<ol>
<li>I don't actually understand how HPAs work when the pod has multiple containers</li>
<li>I don't know how to dig deep to get metrics of what's going on.
<br></li>
</ol>
<hr />
<p><strong>To address the first issue: I brainstormed my understanding of how it works in the single container scenario</strong> (and then realized I don't know the multi-container scenario so I decided to ask this question)
<br><br></p>
<p><strong>This is my understanding of how HPA (autoscaling/v1) works when I have 1 container (temporarily ignore the 2nd container in the above deployment spec):</strong>
<br>The HPA would spawn more replicas when the CPU utilization average of all pods shifted from my normal expected load of 500m or less to 750m (150% x 500m request)</p>
<hr />
<p><strong>To address the 2nd issue: I found out how to dig to see concrete numeric value-based metrics vs relative percentage-based metrics to help figure out what's happening behind the scenes:</strong></p>
<pre class="lang-sh prettyprint-override"><code>bash# kubectl describe horizontalpodautoscaler nginx-ingress-controller -n=ingress | grep Metrics: -A 1
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 5% (56m) / 100%
</code></pre>
<p>(Note: kubectl top pods -n=ingress showed the cpu usage of the 5 replicas as 36m, 34m, 88m, 36m, 91m, so their average of ~57m roughly matches the 56m current value.)</p>
<p>Also now it's a basic proportions Math Problem that allows solving for target static value: <br>
(5% / 56m) = (100% / x m) --> x = 56 * 100 / 5 = 1120m target cpu
<br> (Note: this HPA isn't associated with the deployment mentioned above, that's why the numbers are off.)</p>
| neoakris | <p>Based on a Stack Overflow community member's answer in another <a href="https://stackoverflow.com/questions/48172151/kubernetes-pod-cpu-usage-calculation-method-for-hpa">case</a>:</p>
<blockquote>
<p>"HPA calculates pod cpu utilization as total cpu usage of all containers in pod divided by total request. I don't think that's specified in docs anywhere, but the relevant code is <a href="https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/controller/podautoscaler/metrics/utilization.go#L49" rel="nofollow noreferrer">here</a>"</p>
</blockquote>
<p>You can find more information, with examples, in the link above.</p>
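<p>Applied to the multi-container example above, that logic works out roughly like this (a sketch assuming the behaviour described in the quote, not official documentation):</p>
<pre><code>total request per pod = 500m + 50m = 550m
utilization           = (combined usage of both containers) / 550m
HPA target of 150%    = scale out once combined usage > 1.5 * 550m = 825m per pod
</code></pre>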
<hr>
<p>Based on the documentation:</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, <strong>with beta support, on some other, application-provided metrics</strong>).</p>
<p>So basically:</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#api-object" rel="nofollow noreferrer">apiVersion</a> autoscaling/v1 HPA base on <strong>cpu</strong>.</p>
<p>apiVersion autoscaling/v2beta2 base on <strong>cpu,memory,custom metrics</strong>.</p>
<p>More informations <a href="https://rancher.com/blog/2018/2018-08-06-k8s-hpa-resource-custom-metrics/" rel="nofollow noreferrer">here</a> </p>
| Jakub |
<p>When using helm install/upgrade in some percentage of the time I get this failure:</p>
<pre><code>Failed to install app MyApp. Error: UPGRADE FAILED: timed out waiting for the condition
</code></pre>
<p>This is because the app sometimes needs a bit more time to be up and running.</p>
<p>When I get this message helm doesn't stop the install/upgrade, but keeps working on it, and it will succeed in the end. My whole cluster will be fully functional. <br>
However helm still shows this failed status for the release.<br> On one hand it is pretty annoying, on the other hand it can mess up a correctly installed release.</p>
<p>How can I remove this false error and get into a 'deployed' state (without a new install/upgrade)?</p>
| beatrice | <p>What you might find useful here are the two following options:</p>
<blockquote>
<ul>
<li><p><code>--wait</code>: Waits until all Pods are in a ready state, PVCs are bound, Deployments have minimum (<code>Desired</code> minus <code>maxUnavailable</code>) Pods in
ready state and Services have an IP address (and Ingress if a
LoadBalancer) before marking the release as successful. It will wait
for as long as the <code>--timeout</code> value. If timeout is reached, the
release will be marked as <code>FAILED</code>. Note: In scenarios where
Deployment has replicas set to 1 and <code>maxUnavailable</code> is not set to 0
as part of rolling update strategy, <code>--wait</code> will return as ready as
it has satisfied the minimum Pod in ready condition.</p>
</li>
<li><p><code>--timeout</code>: A value in seconds to wait for Kubernetes commands to complete This defaults to 5m0s</p>
</li>
</ul>
</blockquote>
<p>Helm install and upgrade commands include two CLI options to assist in checking the deployments: <code>--wait</code> and <code>--timeout</code>. When using <code>--wait</code>, Helm will wait until a minimum expected number of Pods in the deployment are launched before marking the release as successful. Helm will wait as long as what is set with <code>--timeout</code>.</p>
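<p>A minimal sketch of how that could look on the command line (the release and chart names are placeholders; in Helm 2 the timeout is a number of seconds, while in Helm 3 it is a duration string):</p>
<pre><code>helm upgrade --install my-release ./my-chart --wait --timeout 600   # Helm 2
helm upgrade --install my-release ./my-chart --wait --timeout 10m0s # Helm 3
</code></pre>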
<p>Also, please note that this is not a full list of cli flags. To see a description of all flags, just run <code>helm <command> --help</code>.</p>
<p>If you want to check why your chart might have failed you can use the <a href="https://helm.sh/docs/helm/helm_history/" rel="nofollow noreferrer"><code>helm history</code> command</a>.</p>
| Wytrzymały Wiktor |
<p>We have OPA installed in our Kubernetes cluster. Not Gatekeeper. The "original" OPA...</p>
<p><strong>I don't understand how I can look at what OPA is receiving as input request from the API-server ?</strong><br />
=> If I knew exactly what the payload looks like then writing the Rego would be simple.</p>
<p>I tried to use <code>-v=8</code> option in <code>kubectl</code> to see the request and response from api-server like so:</p>
<pre><code>$ kubectl get pod -v=8
...
GET https://xxxx.k8s.ovh.net/api/v1/namespaces/default/pods?limit=500
...
Request Headers: Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
...
Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"resourceVersion":"37801112226"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"string","format":"","description":"The number of times the containers in this pod have been restarted and when the last container in this pod has restarted.","priority":0},{"name":"Age","type [truncated 4024 chars]
</code></pre>
<p>Unfortunately, the above JSON payload doesn't match what I see in the different tutorials.</p>
<p><em>How is anybody able to write OPA rules for Kubernetes ???</em><br />
Thx</p>
| Doctor | <p>You have two options:</p>
<ul>
<li><p>Run the OPA server with debug level logging:</p>
<p><code>opa run --server --log-level debug ...</code></p>
<p>This is obviously very noisy, so beware.</p>
</li>
<li><p>Run the server with <a href="https://www.openpolicyagent.org/docs/latest/management-decision-logs/" rel="nofollow noreferrer">decision logging</a> enabled. This is almost always preferable, and allows you to either dump decisions (including input data) to console, or for production deployments, to a remote server aggregating the logs. The decision logging system is really the native way of doing this, and comes with a bunch of features, like masking of sensitive data, etc.. but if you just want something printed to the console, you can run OPA like:</p>
<p><code>opa run --server --set decision_logs.console=true ...</code></p>
</li>
</ul>
| Devoops |
<p>I'm using <a href="https://www.conftest.dev/" rel="nofollow noreferrer">conftest</a> for validating policies on Kubernetes manifests.</p>
<p>Below policy validates that images in StatefulSet manifests have to come from specific registry <code>reg_url</code></p>
<pre><code>package main
deny[msg] {
input.kind == "StatefulSet"
not regex.match("[reg_url]/.+", input.spec.template.spec.initContainers[0].image)
msg := "images come from artifactory"
}
</code></pre>
<p>Is there a way to enforce such policy for all kubernetes resources that have image field somewhere in their description? This may be useful for policy validation on all <code>helm</code> chart manifests, for instance.</p>
<p>I'm looking for something like:</p>
<pre><code>package main
deny[msg] {
input.kind == "*" // all resources
not regex.match("[reg_url]/.+", input.*.image) // any nested image field
msg := "images come from artifactory"
}
</code></pre>
| rok | <p>You <em>could</em> do this using something like the <a href="https://www.openpolicyagent.org/docs/latest/policy-reference/#builtin-graph-walk" rel="nofollow noreferrer">walk</a> built-in function. However, I would recommend against it, because:</p>
<ul>
<li>You'd need to scan every attribute of every request/resource (expensive).</li>
<li>You can't know for sure that e.g. "image" means the same thing across all current and future resource manifests, including CRDs.</li>
</ul>
<p>I'd probably just stick with checking for a match of resource kind here, and include any resource type known to have an image attribute with a shared meaning.</p>
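<p>A rough sketch of that approach (the list of kinds and the <code>[reg_url]</code> placeholder are assumptions you would adapt; it only covers workloads that have a pod template):</p>
<pre><code>package main

kinds_with_pod_template := {"Deployment", "StatefulSet", "DaemonSet", "Job"}

deny[msg] {
  kinds_with_pod_template[input.kind]
  some i
  container := input.spec.template.spec.containers[i]
  not regex.match("[reg_url]/.+", container.image)
  msg := sprintf("image %v does not come from artifactory", [container.image])
}
</code></pre>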
| Devoops |
<p>I know what Priority class are in k8s but could not find anything about priority object after lot of searching.</p>
<p>So problem is I have created a PriorityClass object in my k8s cluster and set its value to -100000 and have created a pod with this priorityClass. Now when I do kubectl describe Pod I am getting two different field </p>
<pre><code>Priority: 0
PriorityClassName: imagebuild-priority
</code></pre>
<p>My admission-controller throws the following error</p>
<pre><code>Error from server (Forbidden): error when creating "/tmp/tmp.4tJSpSU0dy/app.yml":
pods "pod-name" is forbidden: the integer value of priority (0) must not be provided in pod spec;
priority admission controller computed -1000000 from the given PriorityClass name
</code></pre>
<p>Somewhere it is setting Priority to 0 while the PriorityClass is trying to set it to -10000.</p>
<p>PriorityClass object has globalDefault: False</p>
<p>Command Run </p>
<p><code>kubectl create -f app.yml</code></p>
<p>Yaml file</p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: image-builder-serviceacc
spec:
securityContext:
runAsUser: 0
serviceAccountName: {{ serviceaccount }}
automountServiceAccountToken: false
containers:
- name: container
image: ....
imagePullPolicy: Always
env:
- name: PATH
value: "$PATH:/bin:/busybox/"
command: [ "sh", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
initContainers:
- name: init-container
image: ....
imagePullPolicy: Always
env:
- name: PATH
value: "$PATH:/bin:/busybox/"
command: [ "sh", "-c", "--" ]
args: [ "ls" ]
restartPolicy: Always
</code></pre>
<p>Mutating controlled will append PriorityClass</p>
| prashant | <p>As per <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>PriorityClass also has two optional fields: globalDefault and description. The globalDefault field indicates that the value of this PriorityClass should be used for Pods without a priorityClassName. <strong>Only one PriorityClass with globalDefault set to true</strong> can exist in the system. If there is no PriorityClass with globalDefault set, the priority of Pods with no priorityClassName is zero.</p>
</blockquote>
<p>This error means that you have a collision:</p>
<pre><code>the integer value of priority (0) must not be provided in pod spec;
priority admission controller computed -1000000 from the given PriorityClass name
</code></pre>
<p>You can fix it in 2 ways:</p>
<p>You should either go with <strong>globalDefault: true</strong>:</p>
<p>PriorityClass:</p>
<pre><code>apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
name: high-priority-minus
value: -2000000
globalDefault: True
description: "This priority class should be used for XYZ service pods only."
</code></pre>
<p>Pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx5
labels:
env: test
spec:
containers:
- name: nginx5
image: nginx
imagePullPolicy: IfNotPresent
priorityClassName: high-priority-minus
</code></pre>
<p>priorityClassName <strong>can be used here, but you don't need to</strong>.</p>
<p>Or go with <strong>globalDefault: false</strong>:</p>
<p>You need to choose <strong>one option</strong>, priorityClassName or priority, in your pod, as described in your error message.</p>
<p>PriorityClass:</p>
<pre><code>apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
name: high-priority
value: 1000000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
</code></pre>
<p>Pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx7
labels:
env: test
spec:
containers:
- name: nginx7
image: nginx
imagePullPolicy: IfNotPresent
priorityClassName: high-priority
</code></pre>
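<p>After the pod is admitted you can verify what the admission controller actually resolved (a quick check using the names from the examples above):</p>
<pre><code>kubectl get pod nginx7 -o jsonpath='{.spec.priorityClassName} {.spec.priority}'
</code></pre>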
| Jakub |
<p>I created a 3 node cluster in GKE using the below command </p>
<pre><code>gcloud container clusters create kubia --num-nodes 3 --machine-type=f1-micro
</code></pre>
<p>The status of all the three nodes is <code>NotReady</code>. When I inspected the node using the <code>kubectl describe <node></code>, I get the following output:</p>
<pre><code>λ kubectl describe node gke-kubia-default-pool-c324a5d8-2m14
Name: gke-kubia-default-pool-c324a5d8-2m14
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/fluentd-ds-ready=true
beta.kubernetes.io/instance-type=f1-micro
beta.kubernetes.io/os=linux
cloud.google.com/gke-nodepool=default-pool
cloud.google.com/gke-os-distribution=cos
failure-domain.beta.kubernetes.io/region=asia-south1
failure-domain.beta.kubernetes.io/zone=asia-south1-a
kubernetes.io/hostname=gke-kubia-default-pool-c324a5d8-2m14
Annotations: container.googleapis.com/instance_id: 1338348980238562031
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 02 Jan 2020 11:52:25 +0530
Taints: node.kubernetes.io/unreachable:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
KernelDeadlock False Thu, 02 Jan 2020 11:52:30 +0530 Thu, 02 Jan 2020 11:52:29 +0530 KernelHasNoDeadlock kernel has no deadlock
ReadonlyFilesystem False Thu, 02 Jan 2020 11:52:30 +0530 Thu, 02 Jan 2020 11:52:29 +0530 FilesystemIsNotReadOnly Filesystem is not read-only
CorruptDockerOverlay2 False Thu, 02 Jan 2020 11:52:30 +0530 Thu, 02 Jan 2020 11:52:29 +0530 NoCorruptDockerOverlay2 docker overlay2 is functioning properly
FrequentUnregisterNetDevice False Thu, 02 Jan 2020 11:52:30 +0530 Thu, 02 Jan 2020 11:52:29 +0530 NoFrequentUnregisterNetDevice node is functioning properly
FrequentKubeletRestart False Thu, 02 Jan 2020 11:52:30 +0530 Thu, 02 Jan 2020 11:52:29 +0530 NoFrequentKubeletRestart kubelet is functioning properly
FrequentDockerRestart False Thu, 02 Jan 2020 11:52:30 +0530 Thu, 02 Jan 2020 11:52:29 +0530 NoFrequentDockerRestart docker is functioning properly
FrequentContainerdRestart False Thu, 02 Jan 2020 11:52:30 +0530 Thu, 02 Jan 2020 11:52:29 +0530 NoFrequentContainerdRestart containerd is functioning properly
NetworkUnavailable False Thu, 02 Jan 2020 11:52:31 +0530 Thu, 02 Jan 2020 11:52:31 +0530 RouteCreated RouteController created a route
MemoryPressure Unknown Thu, 02 Jan 2020 11:52:52 +0530 Thu, 02 Jan 2020 11:53:38 +0530 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Thu, 02 Jan 2020 11:52:52 +0530 Thu, 02 Jan 2020 11:53:38 +0530 NodeStatusUnknown Kubelet stopped posting node status.
PIDPressure Unknown Thu, 02 Jan 2020 11:52:52 +0530 Thu, 02 Jan 2020 11:53:38 +0530 NodeStatusUnknown Kubelet stopped posting node status.
Ready Unknown Thu, 02 Jan 2020 11:52:52 +0530 Thu, 02 Jan 2020 11:53:38 +0530 NodeStatusUnknown Kubelet stopped posting node status.
OutOfDisk Unknown Thu, 02 Jan 2020 11:52:25 +0530 Thu, 02 Jan 2020 11:53:38 +0530 NodeStatusNeverUpdated Kubelet never posted node status.
Addresses:
InternalIP: 10.160.0.34
ExternalIP: 34.93.231.83
InternalDNS: gke-kubia-default-pool-c324a5d8-2m14.asia-south1-a.c.k8s-demo-263903.internal
Hostname: gke-kubia-default-pool-c324a5d8-2m14.asia-south1-a.c.k8s-demo-263903.internal
Capacity:
attachable-volumes-gce-pd: 15
cpu: 1
ephemeral-storage: 98868448Ki
hugepages-2Mi: 0
memory: 600420Ki
pods: 110
Allocatable:
attachable-volumes-gce-pd: 15
cpu: 940m
ephemeral-storage: 47093746742
hugepages-2Mi: 0
memory: 236900Ki
pods: 110
System Info:
Machine ID: 7231bcf8072c0dbd23802d0bf5644676
System UUID: 7231BCF8-072C-0DBD-2380-2D0BF5644676
Boot ID: 819fa587-bd7d-4909-ab40-86b3225f201e
Kernel Version: 4.14.138+
OS Image: Container-Optimized OS from Google
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.7
Kubelet Version: v1.13.11-gke.14
Kube-Proxy Version: v1.13.11-gke.14
PodCIDR: 10.12.3.0/24
ProviderID: gce://k8s-demo-263903/asia-south1-a/gke-kubia-default-pool-c324a5d8-2m14
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default kubia-4hbfv 100m (10%) 0 (0%) 0 (0%) 0 (0%) 27m
kube-system event-exporter-v0.2.4-5f88c66fb7-6kh96 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
kube-system fluentd-gcp-scaler-59b7b75cd7-8fhkt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27m
kube-system fluentd-gcp-v3.2.0-796rf 100m (10%) 1 (106%) 200Mi (86%) 500Mi (216%) 28m
kube-system kube-dns-autoscaler-bb58c6784-nkz8g 20m (2%) 0 (0%) 10Mi (4%) 0 (0%) 27m
kube-system kube-proxy-gke-kubia-default-pool-c324a5d8-2m14 100m (10%) 0 (0%) 0 (0%) 0 (0%) 28m
kube-system prometheus-to-sd-qw7sm 1m (0%) 3m (0%) 20Mi (8%) 20Mi (8%) 28m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 321m (34%) 1003m (106%)
memory 230Mi (99%) 520Mi (224%)
ephemeral-storage 0 (0%) 0 (0%)
attachable-volumes-gce-pd 0 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 43m kubelet, gke-kubia-default-pool-c324a5d8-2m14 Starting kubelet.
Normal NodeHasSufficientMemory 43m (x2 over 43m) kubelet, gke-kubia-default-pool-c324a5d8-2m14 Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 43m (x2 over 43m) kubelet, gke-kubia-default-pool-c324a5d8-2m14 Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 43m (x2 over 43m) kubelet, gke-kubia-default-pool-c324a5d8-2m14 Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 43m kubelet, gke-kubia-default-pool-c324a5d8-2m14 Updated Node Allocatable limit across pods
Normal NodeReady 43m kubelet, gke-kubia-default-pool-c324a5d8-2m14 Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeReady
Normal Starting 42m kube-proxy, gke-kubia-default-pool-c324a5d8-2m14 Starting kube-proxy.
Normal Starting 28m kubelet, gke-kubia-default-pool-c324a5d8-2m14 Starting kubelet.
Normal NodeHasSufficientMemory 28m (x2 over 28m) kubelet, gke-kubia-default-pool-c324a5d8-2m14 Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 28m (x2 over 28m) kubelet, gke-kubia-default-pool-c324a5d8-2m14 Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 28m (x2 over 28m) kubelet, gke-kubia-default-pool-c324a5d8-2m14 Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 28m kubelet, gke-kubia-default-pool-c324a5d8-2m14 Updated Node Allocatable limit across pods
Normal NodeReady 28m kubelet, gke-kubia-default-pool-c324a5d8-2m14 Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeReady
Normal Starting 28m kube-proxy, gke-kubia-default-pool-c324a5d8-2m14 Starting kube-proxy.
</code></pre>
<p>Where I am going wrong? I am able to create pods using the <code>kubectl run kubia-3 --image=luksa/kubia --port=8080 --generator=run/v1</code> command.</p>
| zilcuanu | <p>I just created a <a href="https://issuetracker.google.com/147136410" rel="nofollow noreferrer">Public issue tracker</a> to follow up on this issue.</p>
<p>In the meantime, as a workaround, I would recommend deploying nodes with the default machine type <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture" rel="nofollow noreferrer">n1-standard-1</a>.</p>
| Mohamad Bazzi |
<p>Working with <a href="https://www.openpolicyagent.org/docs/latest/policy-language/" rel="nofollow noreferrer">Rego</a>, the <a href="https://www.openpolicyagent.org/" rel="nofollow noreferrer">Open Policy Agent</a> (OPA) "rules" language, and given the following data:</p>
<pre><code>{
"x-foo": "a",
"x-bar": "b",
"y-foo": "c",
"y-bar": "d"
}
</code></pre>
<p>what is the correct Rego expression(s) or statement(s) to get just the keys that start with "x-"? That is, I want an array of</p>
<pre><code>[ "x-foo", "x-bar" ]
</code></pre>
<p>I know about the <code>startswith()</code> function, and I've tried various attempts at <a href="https://www.openpolicyagent.org/docs/latest/policy-language/#comprehensions" rel="nofollow noreferrer">comprehensions</a>, but no matter what I try I can't get it to work.</p>
<p>Any help would be much appreciated.</p>
| asm2stacko | <p>This can be accomplished either by using a comprehension, like you suggest, or a partial rule that <a href="https://www.openpolicyagent.org/docs/latest/policy-language/#generating-sets" rel="nofollow noreferrer">generates a set</a>:</p>
<pre><code>x_foo_comprehension := [key | object[key]; startswith(key, "x-")]
x_foo_rule[key] {
object[key]
startswith(key, "x-")
}
</code></pre>
<p>Finally, if you need to take nested keys into account, you could use the walk built-in to traverse the object:</p>
<pre><code>x_foo_rule[key] {
walk(object, [path, value])
last := path[count(path) - 1]
startswith(last, "x-")
key := concat(".", path)
}
# x_foo_rule == {
# "x-bar",
# "x-foo",
# "y-bar.x-bar",
# "y-bar.x-foo"
# }
</code></pre>
| Devoops |
<p>For the prometheus deployment's ClusterRole I have</p>
<pre><code># ClusterRole for the deployment
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
- nodes/metrics
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups:
- extensions
resources:
- ingresses
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
</code></pre>
<p>With the ServiceAccount and the ClusterRoleBinding already put to place too.</p>
<p>And the following are the settings for the jobs inside <code>prometheus.yml</code> that are getting 403 error</p>
<pre><code>- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
</code></pre>
<p>I don't get the reason why I keep getting 403 error even though the <code>ServiceAccount</code> and the <code>ClusterRole</code> has been binded together.</p>
| MatsuzakaSteven | <p>Make sure that the <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> file contains the correct token. To do so, you can enter into Prometheus pod with:</p>
<p><code>kubectl exec -it -n <namespace> <Prometheus_pod_name> -- bash</code></p>
<p>and cat the token file. Then exit the pod and execute:</p>
<p><code>echo $(kubectl get secret -n <namespace> <prometheus_serviceaccount_secret> -o jsonpath='{.data.token}') | base64 --decode</code></p>
<p>If the tokens match, you can try querying the Kubernetes API server with Postman or Insomnia to see if the rules you put in your <code>ClusterRole</code> are correct. I suggest you query both the <code>/proxy/metrics/cadvisor</code> and <code>/proxy/metrics</code> URLs.</p>
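<p>If you want to reproduce what Prometheus does from the command line instead of Postman, a sketch like the following should work (namespace, secret, node name and API server address are placeholders):</p>
<pre><code>TOKEN=$(kubectl get secret -n <namespace> <prometheus_serviceaccount_secret> -o jsonpath='{.data.token}' | base64 --decode)

curl -k -H "Authorization: Bearer $TOKEN" \
  https://<apiserver-address>/api/v1/nodes/<node-name>/proxy/metrics
</code></pre>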
| TheHakky |
<p>I am trying to remove privileged mode from init container, when i set to priviliged: false. I am getting above error. I had set readOnlyRootFilesystem: false and lines below at the pod securityContext level</p>
<pre><code> securityContext:
sysctls:
- name: net.ipv4.ip_local_port_range
value: 0 65535
</code></pre>
| sacboy | <p>The problem is that you cannot run <code>sysctl</code> without the privileged mode due to security reasons. This is expected since docker restricts access to <code>/proc</code> and <code>/sys</code>.</p>
<p>In order for this to work you need to use the privileged mode for the init container and than either:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/" rel="nofollow noreferrer">Use sysctls in a Kubernetes Cluster</a> by specifing a proper <code>securityContext</code> for a Pod. For example:</li>
</ul>
<hr />
<pre><code> securityContext:
sysctls:
- name: kernel.shm_rmid_forced
value: "0"
- name: net.core.somaxconn
value: "1024"
- name: kernel.msgmax
value: "65536"
</code></pre>
<ul>
<li>Use <a href="https://docs.docker.com/engine/reference/commandline/run/#configure-namespaced-kernel-parameters-sysctls-at-runtime" rel="nofollow noreferrer">PodSecurityPolicy</a> to control which <code>sysctls</code> can be set in pods by specifying lists of <code>sysctls</code> or <code>sysctl</code> patterns in the <code>forbiddenSysctls</code> and/or <code>allowedUnsafeSysctls</code> fields of the <code>PodSecurityPolicy</code>. For example:</li>
</ul>
<hr />
<pre><code>apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: sysctl-psp
spec:
allowedUnsafeSysctls:
- kernel.msg*
forbiddenSysctls:
- kernel.shm_rmid_forced
</code></pre>
<p>Notice that:</p>
<blockquote>
<p>If you allow unsafe <code>sysctls</code> via the <code>allowedUnsafeSysctls</code> field in a
<code>PodSecurityPolicy</code>, any pod using such a <code>sysctl</code> will fail to start if
the <code>sysctl</code> is not allowed via the <code>--allowed-unsafe-sysctls</code> kubelet
flag as well on that node.</p>
</blockquote>
<ul>
<li>You can also set a limited number of <code>sysctls</code> on a container-local basis with <a href="https://docs.docker.com/engine/reference/commandline/run/#configure-namespaced-kernel-parameters-sysctls-at-runtime" rel="nofollow noreferrer"><code>docker run --sysctl</code></a>.</li>
</ul>
<p>I also recommend going through the whole linked documentation as caution is advised because use of unsafe <code>sysctls</code> is at-your-own-risk and can lead to severe problems like wrong behavior of containers, resource shortage or complete breakage of a node.</p>
| Wytrzymały Wiktor |
<p>I need to deploy NGINX to a Kubernetes cluster, for which I can either use a Helm chart or a Docker image. But I am not clear of the benefits of using a Helm chart. I guess my question is not specific to NGINX but in general.</p>
| user11081980 | <p>Helm Charts: making it simple to package and deploy common applications on Kubernetes [1]. Helm brings three major benefits to your service deployments [2]: </p>
<ul>
<li>Deployment speed</li>
<li>Helm chart on Kubernetes for application configuration templates</li>
<li>Application testing </li>
</ul>
<p>Use of Helm charts is recommended, because they are maintained and typically kept up to date by the Kubernetes community [3].</p>
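<p>As a concrete illustration, deploying NGINX from a chart is typically a couple of commands (a sketch assuming Helm 3 and the Bitnami chart repository; any other NGINX chart works the same way):</p>
<pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx
</code></pre>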
<p>[1] <a href="https://kubernetes.io/blog/2016/10/helm-charts-making-it-simple-to-package-and-deploy-apps-on-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/blog/2016/10/helm-charts-making-it-simple-to-package-and-deploy-apps-on-kubernetes/</a></p>
<p>[2] <a href="https://www.nebulaworks.com/blog/2019/10/30/three-benefits-to-using-a-helm-chart-on-kubernetes/" rel="nofollow noreferrer">https://www.nebulaworks.com/blog/2019/10/30/three-benefits-to-using-a-helm-chart-on-kubernetes/</a></p>
<p>[3] <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">https://cloud.google.com/community/tutorials/nginx-ingress-gke</a></p>
| Shafiq I |
<p>I know that you can start <code>minikube</code> with a different K8s version with <code>--kubernetes-version</code>.</p>
<p>But how can I let minikube list all versions which it supports?</p>
<p>I had a look at the <a href="https://minikube.sigs.k8s.io/docs/commands/start/" rel="noreferrer">command reference of start</a>, but could not find a way up to now.</p>
<p>In my case I would like to know which one is the latest <code>v1.17.X</code> version which is supported.</p>
<p>On the <a href="https://github.com/kubernetes/kubernetes/releases" rel="noreferrer">github release page</a> I found that v1.17.12 is today the latest version in the <code>17.x</code> series. But it would be nice, if I <code>minikube</code> or <code>kubectl</code> could tell me this.</p>
| guettli | <p>@Esteban Garcia is right but I would like to expand on this topic a bit more with the help of <a href="https://minikube.sigs.k8s.io/docs/handbook/config/#selecting-a-kubernetes-version" rel="nofollow noreferrer">the official documentation</a>:</p>
<blockquote>
<p>By default, minikube installs the latest stable version of Kubernetes
that was available at the time of the minikube release. You may select
a different Kubernetes release by using the <code>--kubernetes-version</code>
flag, for example:</p>
<pre><code>minikube start --kubernetes-version=v1.11.10
</code></pre>
<p>minikube follows <a href="https://kubernetes.io/docs/setup/release/version-skew-policy/" rel="nofollow noreferrer">the Kubernetes Version and Version Skew Support
Policy</a>, so we guarantee support for the latest build for the last
3 minor Kubernetes releases. When practical, minikube aims to support
older releases as well so that users can emulate legacy environments.</p>
<p>For up to date information on supported versions, see
OldestKubernetesVersion and NewestKubernetesVersion in
<a href="https://github.com/kubernetes/minikube/blob/master/pkg/minikube/constants/constants.go" rel="nofollow noreferrer">constants.go</a>.</p>
</blockquote>
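<p>If you just want a quick look at those two constants without opening the file in a browser, something like this works (assuming the file path in the linked repository has not moved):</p>
<pre><code>curl -s https://raw.githubusercontent.com/kubernetes/minikube/master/pkg/minikube/constants/constants.go \
  | grep -E 'OldestKubernetesVersion|NewestKubernetesVersion'
</code></pre>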
| Wytrzymały Wiktor |
<p>I have a Django app that is deployed on kubernetes. The container also has a mount to a persistent volume containing some files that are needed for operation. I want to have a check that will check that the files are there and accessible during runtime everytime a pod starts. The Django documentation recommends against running checks in production (the app runs in uwsgi), and because the files are only available in the production environment, the check will fail when unit tested.</p>
<p>What would be an acceptable process for executing the checks in production?</p>
| Stuart Buckingham | <p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>Your use case can be addressed from Kubernetes perspective. All you have to do is to use the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes" rel="nofollow noreferrer">Startup probes</a>:</p>
<blockquote>
<p>The kubelet uses startup probes to know when a container application
has started. If such a probe is configured, it disables liveness and
readiness checks until it succeeds, making sure those probes don't
interfere with the application startup. This can be used to adopt
liveness checks on slow starting containers, avoiding them getting
killed by the kubelet before they are up and running.</p>
</blockquote>
<p>With it you can use the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#execaction-v1-core" rel="nofollow noreferrer">ExecAction</a> that would execute a specified command inside the container. The diagnostic would be considered successful if the command exits with a status code of 0. An example of a simple command check could be one that checks if a particular file exists:</p>
<pre><code> exec:
command:
- stat
- /file_directory/file_name.txt
</code></pre>
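<p>For context, here is a minimal sketch of how such a probe could be wired into a container spec (the image name and mount path are assumptions):</p>
<pre><code>    containers:
    - name: django
      image: my-registry/django-app:latest   # hypothetical image
      startupProbe:
        exec:
          command:
          - stat
          - /file_directory/file_name.txt
        failureThreshold: 30
        periodSeconds: 10
</code></pre>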
<p>You could also use a shell script but remember that:</p>
<blockquote>
<p>Command is the command line to execute inside the container, the
working directory for the command is root ('/') in the container's
filesystem. The command is simply exec'd, it is not run inside a
shell, so traditional shell instructions ('|', etc) won't work. To use
a shell, you need to explicitly call out to that shell.</p>
</blockquote>
| Wytrzymały Wiktor |
<p>I have deployed a few services in Kubernetes and I am using an NGINX ingress to access them from outside (using EC2 instances for the whole cluster setup). I am able to access a service through the host tied to the ingress. Now, instead of accessing the service directly, I am trying to add authentication before the service is accessed: the request should be redirected to a login page, the user enters credentials, and should then be redirected to the page that was originally requested. The following code snippet is what I have tried so far. Please guide me to find a solution.</p>
<p>my-ingress.yml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
namespace: mynamespace
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-name: JSESSIONID
nginx.ingress.kubernetes.io/ssl-passthrough: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
kubernetes.io/ingress.class: "nginx"
kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/auth-signin: "https://auth.mysite.domain/api/auth/login" #will show login page
nginx.ingress.kubernetes.io/auth-url: "https://auth.mysite.domain/api/auth/token/validate"
nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
spec:
tls:
- hosts:
- mysite.domain
#secretName: ${TLS_TOKEN_NAME}
rules:
- host: a.mysite.domain
http:
paths:
- path: /
backend:
serviceName: myservice1
servicePort: 9090
</code></pre>
<p>So first it calls "/token/validate" and gets unauthorized, then it goes to "auth/login" and the login page is shown. After entering credentials it goes to "/token/validate" again and shows the login page once more, whereas it should actually redirect to the originally requested page.</p>
<p>How can I achieve this? [If, after successful auth, we could add the header to the called link, I think that would solve it, but I am not sure how.]</p>
<p>backend: Java Spring</p>
<pre><code>@RequestMapping("login")
public String login() {
return "login.html";
}
</code></pre>
<p>login.html</p>
<pre><code> <form action="validate-user" method="post" enctype="application/x-www-form-urlencoded">
<label for="username">Username</label>
<input type="text" id="username" value="admin" name="username" autofocus="autofocus" /> <br>
<label for="password">Password</label>
<input type="password" id="password" value="password" name="password" /> <br>
<input id="submit" type="submit" value="Log in" />
</form>
</code></pre>
<p>backend: Java Spring</p>
<pre><code>@PostMapping("validate-user")
@ResponseBody
public ResponseEntity<?> validateUser(HttpServletRequest request, HttpServletResponse response) throws Exception {
...
HttpStatus httpStatus=HttpStatus.FOUND;
//calling authentication api and validating
//else
httpStatus=HttpStatus.UNAUTHORIZED;
HttpHeaders responseHeaders= new HttpHeaders();
responseHeaders.set("Authoriztion", token);
//responseHeaders.setLocation(new URI("https://a.mysite.domain")); ALSO TRIED BUT NOT WORKED
return new ResponseEntity<>(responseHeaders,httpStatus);
}
</code></pre>
<p><strong>UPDATE1:</strong> I am using my own custom auth API. If I hit the URL with the custom header "Authorization":"bearer token" from Postman, the response is OK, but from the browser this is not possible. So the header should be included in the redirected page by the upstream svc only (after successful login). How can we do that?<br />
Is there any annotation I am missing?</p>
<p><strong>UPDATE2:</strong> While redirecting after successful auth I am passing the token as a query string like <code>responseHeaders.setLocation(new URI("https://a.mysite.domain/?access_token="+token)</code>, and after redirecting it goes to validate. After successful validation it goes to the downstream svc [expected]. But when that svc routes to, say, <code>a.mysite.domain/route1</code>, the query string is gone, the auth svc cannot get the token, and hence <code>401</code> again. It should be like <code>a.mysite.domain/route1/?access_token=token</code>. Is there any way to do that? If every route carried the same query string it would work. [This is my PLAN-B... but passing the token in a header is still my priority.]</p>
<p><strong>UPDATE3:</strong> I tried with annotations like:</p>
<pre><code>nginx.ingress.kubernetes.io/auth-signin: 'https://auth.example.com/api/auth-service-ui/login'
nginx.ingress.kubernetes.io/auth-response-headers: 'UserID, Authorization, authorization'
nginx.ingress.kubernetes.io/auth-snippet: |
auth_request_set $token $upstream_http_authorization;
proxy_set_header Foo-Header1 $token; //not showing as request header AND this value only need LOOKS $token val is missed
proxy_set_header Foo-Header headerfoo1; //showing as request header OK
more_set_input_headers 'Authorization: $token';//not showing as request header AND this value only need LOOKS $token val is missed
nginx.ingress.kubernetes.io/configuration-snippet: |
auth_request_set $token1 $upstream_http_authorization;
add_header authorization2 QQQAAQ1; //showing as response header OK no use
add_header authorization $token; //showing as response header OK how to send as request header on next call
more_set_input_headers 'Authorization11: uuu1';//showing as request header in next call
more_set_input_headers 'Authorization: $token1';//not showing as request header and need this val ONLY
</code></pre>
<p>**What annotation I missed?</p>
<p><strong>UPDATE4</strong>
PLAN-C: Now trying to store jwt token in cookies.</p>
<pre><code> nginx.ingress.kubernetes.io/configuration-snippet: |
auth_request_set $token5 $upstream_http_authorization;
add_header Set-Cookie "JWT_TOKEN=$token5";
</code></pre>
<p>In every request the same cookie is set, but the browser stores it every time, i.e. multiple copies of the same cookie. How can I set it only once?</p>
| Joe | <p>The following worked for me.</p>
<p>Put this in your values yaml:</p>
<pre><code>nginx.ingress.kubernetes.io/auth-url: "url service here"
</code></pre>
<p>Then, for this URL, you must implement a GET service that returns 200 if authorization was successful, or 401 otherwise.</p>
<p>I implemented it in Flask, with Basic Authorization, but you can use whatever you want:</p>
<pre><code># Imports, app and route wiring added for completeness; the route path is an
# assumption (it must match the auth-url annotation above) and ServiceImp is
# the author's own auth helper.
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/auth", methods=["GET"])
def auth():
    # Path the user originally requested, forwarded by the ingress controller
    request_path = request.headers.get("X-Auth-Request-Redirect")
    authorization_header = request.headers.get('Authorization')
    if ServiceImp.auth(authorization_header, request_path):
        return Response(
            response="authorized",
            status=200,
            mimetype="application/json"
        )
    else:
        # Ask the client for Basic credentials
        resp = Response()
        resp.headers['WWW-Authenticate'] = 'Basic'
        return resp, 401
</code></pre>
| Adán Escobar |
<p>I tried creating an internal load balancer with the following annotation as mentioned in <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access_beta" rel="noreferrer">this documentation</a>:</p>
<pre><code>networking.gke.io/internal-load-balancer-allow-global-access: "true"
</code></pre>
<p>Here is the full manifest:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ilb-global
annotations:
cloud.google.com/load-balancer-type: "Internal"
networking.gke.io/internal-load-balancer-allow-global-access: "true"
labels:
app: hello
spec:
type: LoadBalancer
selector:
app: hello
ports:
- port: 80
targetPort: 8080
protocol: TCP
</code></pre>
<p>I tried accessing the load balancer from a VM in a different region but the VM is not able to reach the endpoint.</p>
<p>However according to <a href="https://cloud.google.com/load-balancing/docs/internal/setting-up-internal#ilb-global-access" rel="noreferrer">this documentation</a> I ran the following command on the created forwarding-rule (GCP creates a forwarding-rule with a random name for all the load balancers which can be retrieved using <code>gcloud compute forwarding-rules list</code> command) to update the Global access:</p>
<pre><code>gcloud beta compute forwarding-rules update abcrandomnamehehe --region [REGION-NAME] --allow-global-access
</code></pre>
<p>After running the above command and updating the load balancer manually, the endpoint is accessible from all the regions. Is this manual step of running <code>gcloud</code> command necessary?</p>
<p>If yes, what is the use of the annotation then? I tried the annotation with the latest available GKE version (<em>1.15.4-gke.22</em>) as well, but it doesn't work without being updated using the <code>gcloud</code> command. Is there something I am missing or is it a bug in GCP?</p>
<p><strong><em>EDIT</em></strong>: I had also opened <a href="https://issuetracker.google.com/issues/147451305" rel="noreferrer">this</a> issue with GCP which was addressed pretty quickly and they have updated the public documentations (Jan 15, 2020) to specifically mention the requirement of GKE 1.16 for the Global Access feature to work.</p>
| Amit Yadav | <p>This is expected but the reason for this behavior is not explained at all in the public documentation. In fact, the 'Global Access' feature works with GKE 1.16 clusters. </p>
<p>So far, I can share with you the following bullet points:</p>
<ul>
<li>There are 2 different features regarding Global Access: 1 for ILB and 1 specifically for GKE.</li>
<li>Global Access Feature for GKE was launched on December 23rd.</li>
<li>Global Access Feature for GKE works from GKE 1.16 but it appears to not be mentioned in the documentation.</li>
<li>Our tests have been done with a GKE 1.13.11-gke.14 cluster.</li>
<li>Need to create a GKE 1.16 cluster and test it again.</li>
</ul>
<p>That being said, I'd like to notify you that this mismatch in the public information has been addressed properly with the correct team and is being handled in order to update the public documentation available <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access_beta" rel="noreferrer">here</a> to prevent future confusion. You can keep track of this process by following updates <a href="https://issuetracker.google.com/issues/147451305" rel="noreferrer">here</a></p>
<p>How can you verify the information provided above? Here you have a brief process that you can follow:</p>
<p>TEST 1:</p>
<ul>
<li>Create GKE 1.16 cluster in europe-west4 (this region/zone is not mandatory).</li>
<li>Create Deployment.</li>
<li>Create an internal TCP load balancer with annotation “networking.gke.io/internal-load-balancer-allow-global-access: "true" by writing the Service configuration file.</li>
<li>Go within Network Services > Load Balancing > Advanced menu (at the bottom) > : Global access should be Enabled.</li>
<li>SSH VM in europe-west1.</li>
<li>Run command $curl -v : You should receive a HTTP/1.1 200 OK.</li>
</ul>
<p>TEST 2:</p>
<ul>
<li>Delete annotation “networking.gke.io/internal-load-balancer-allow-global-access: "true" in Service Configuration File.</li>
<li>Update my Service by running command $kubectl apply -f </li>
<li>Go within Network Services > Load Balancing > Advanced menu (at the bottom) > : Global access should be disabled.</li>
<li>SSH VM in europe-west1.</li>
<li>Run command $curl -v : You should receive a Timeout error message.</li>
</ul>
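<p>If you prefer the CLI to the console for the "Global access" checks above, the forwarding rule can be inspected directly (rule name and region are placeholders; depending on your gcloud version you may need the <code>beta</code> track, as in the question):</p>
<pre><code>gcloud compute forwarding-rules describe <rule-name> --region <region> \
  --format='get(allowGlobalAccess)'
</code></pre>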
| Raynel A.S |
<p>I am trying to build and deploy microservices images to a single-node Kubernetes cluster running on my development machine using minikube. I am using the cloud-native microservices demo application Online Boutique by Google to understand the use of technologies like Kubernetes, Istio etc. </p>
<p>Link to github repo: <a href="https://github.com/GoogleCloudPlatform/microservices-demo" rel="noreferrer">microservices-demo</a></p>
<p>While following the installation process, and on running command <code>skaffold run</code> to build and deploy my application, I get some errors:</p>
<pre><code>Step 10/11 : RUN apt-get -qq update && apt-get install -y --no-install-recommends curl
---> Running in 43d61232617c
W: GPG error: http://deb.debian.org/debian buster InRelease: At least one invalid signature was encountered.
E: The repository 'http://deb.debian.org/debian buster InRelease' is not signed.
W: GPG error: http://deb.debian.org/debian buster-updates InRelease: At least one invalid signature was encountered.
E: The repository 'http://deb.debian.org/debian buster-updates InRelease' is not signed.
W: GPG error: http://security.debian.org/debian-security buster/updates InRelease: At least one invalid signature was encountered.
E: The repository 'http://security.debian.org/debian-security buster/updates InRelease' is not signed.
failed to build: couldn't build "loadgenerator": unable to stream build output: The command '/bin/sh -c apt-get -qq update && apt-get install -y --no-install-recommends curl' returned a non-zero code: 100
</code></pre>
<p>I receive these errors when trying to build loadgenerator.
How can I resolve this issue? </p>
| Saranya Gupta | <p>There are a few reasons why you encounter these errors:</p>
<ol>
<li><p>There might be an issue with the existing cache and/or disc space. In order to fix it you need to clear the APT cache by executing: <code>sudo apt-get clean</code> and <code>sudo apt-get update</code>.</p>
</li>
<li><p>The same goes with existing docker images. Execute: <code>docker image prune -f</code> and <code>docker container prune -f</code> in order to remove unused data and free disc space. Executing <code>docker image prune -f</code> will delete all the unused images. To delete some selective images of large size, run <code>docker images</code> and identify the images you want to remove, and then run <code>docker rmi -f <IMAGE-ID1> <IMAGE-ID2> <IMAGE-ID3></code>.</p>
</li>
<li><p>If you don't care about the security risks, you can try to run the <code>apt-get</code> command with the <code>--allow-unauthenticated</code> or <code>--allow-insecure-repositories</code> flag. According to the <a href="https://manpages.debian.org/stretch/apt/apt-get.8.en.html" rel="noreferrer">docs</a>:</p>
</li>
</ol>
<blockquote>
<p>Ignore if packages can't be authenticated and don't prompt about it.
This can be useful while working with local repositories, but is a
huge security risk if data authenticity isn't ensured in another way
by the user itself.</p>
</blockquote>
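<p>If you do accept that risk, the failing Dockerfile step from the question could be adapted roughly like this (a hedged sketch, not a recommendation):</p>
<pre><code>RUN apt-get -qq update --allow-insecure-repositories && \
    apt-get install -y --no-install-recommends --allow-unauthenticated curl
</code></pre>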
<p>Please let me know if that helped.</p>
| Wytrzymały Wiktor |
<p>In pod specification, there is the option <code>enableServiceLinks</code>. When set to false, environment variables related to services running at the moment of pod creation will not be injected into pod.</p>
<p>The problem is that I expected this to also happen with kubernetes clusterIp service on default namespace:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.116.0.1 <none> 443/TCP 27d
</code></pre>
<p>But it is injecting environment variables into pod as follows:</p>
<pre><code>KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.116.0.1:443
NODE_VERSION=12.18.0
HOSTNAME=static-web
YARN_VERSION=1.22.4
SHLVL=1
HOME=/root
test_value=8585
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.116.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.116.0.1:443
KUBERNETES_SERVICE_HOST=10.116.0.1
PWD=/indecision-app
</code></pre>
<p>Deployment file used for deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: indecision-app-deployment
labels:
app: indecision-app
spec:
selector:
matchLabels:
app: indecision-app
template:
metadata:
labels:
app: indecision-app
spec:
enableServiceLinks: false
containers:
- name: indecision-app
image: hleal18/indecision-app:latest
ports:
- containerPort: 8080
</code></pre>
<p>Is this the correct behavior? Is there an API-supported way of avoiding these service environment variables to be injected?</p>
| hleal18 | <p>According to the <a href="https://github.com/kubernetes/kubernetes/blob/9050c510e640f325d008792a517fe85808b34150/pkg/kubelet/kubelet_pods.go#L523" rel="nofollow noreferrer">source code</a>:</p>
<blockquote>
<p>We always want to add environment variabled for master services from
the master service namespace, even if <code>enableServiceLinks</code> is false.</p>
</blockquote>
<p>Which basically means that you will and should not prevent those specific environment variables from being injected to the pods.</p>
<p>I hope this solves your issue.</p>
| Wytrzymały Wiktor |
<p>Hy there,</p>
<p>I'm trying to configure Kubernetes Cronjobs monitoring & alerts with Prometheus. I found this helpful <a href="https://medium.com/@tristan_96324/prometheus-k8s-cronjob-alerts-94bee7b90511" rel="nofollow noreferrer">guide</a></p>
<p>But I always get a <strong>many-to-many matching not allowed: matching labels must be unique on one side</strong> error. </p>
<p>For example, this is the PromQL query which triggers this error:</p>
<p><code>max(
kube_job_status_start_time
* ON(job_name) GROUP_RIGHT()
kube_job_labels{label_cronjob!=""}
) BY (job_name, label_cronjob)
</code></p>
<p>The queries by itself result in e.g. these metrics</p>
<p><strong>kube_job_status_start_time</strong>:
<code>
kube_job_status_start_time{app="kube-state-metrics",chart="kube-state-metrics-0.12.1",heritage="Tiller",instance="REDACTED",job="kubernetes-service-endpoints",job_name="test-1546295400",kubernetes_name="kube-state-metrics",kubernetes_namespace="monitoring",kubernetes_node="REDACTED",namespace="test-develop",release="kube-state-metrics"}
</code></p>
<p><strong>kube_job_labels{label_cronjob!=""}</strong>:
<code>
kube_job_labels{app="kube-state-metrics",chart="kube-state-metrics-0.12.1",heritage="Tiller",instance="REDACTED",job="kubernetes-service-endpoints",job_name="test-1546295400",kubernetes_name="kube-state-metrics",kubernetes_namespace="monitoring",kubernetes_node="REDACTED",label_cronjob="test",label_environment="test-develop",namespace="test-develop",release="kube-state-metrics"}
</code></p>
<p>Is there something I'm missing here? The same many-to-many error happens for every query I tried from the guide.
Even constructing it by myself from ground up resulted in the same error.
Hope you can help me out here :)</p>
| hajowieland | <blockquote>
<p>In my case I don't get this extra label from Prometheus when installed via helm (stable/prometheus-operator).</p>
</blockquote>
<p>You need to configure it in Prometheus. The relevant option is called <code>honor_labels: false</code>.</p>
<pre><code># If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels.
</code></pre>
<p>So you have to configure your prometheus.yml file with the option <code>honor_labels: false</code>:</p>
<pre><code># Setting honor_labels to "true" is useful for use cases such as federation and
# scraping the Pushgateway, where all labels specified in the target should be
# preserved
</code></pre>
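<p>As a sketch, the option sits at the scrape-job level of prometheus.yml (the job name is just an example taken from your target labels):</p>
<pre><code>scrape_configs:
  - job_name: 'kubernetes-service-endpoints'
    honor_labels: false
    kubernetes_sd_configs:
      - role: endpoints
</code></pre>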
<p>Anyway, even with it set like this (I now get exported_job labels), I still can't run a proper query, but I guess that is still because of my LHS.</p>
<pre><code>Error executing query: found duplicate series for the match group
{exported_job="kube-state-metrics"} on the left hand-side of the operation:
[{__name__=
</code></pre>
| maitza |
<p>My application is running a SSL NodeJS server with mutual authentication.
How do I tell k8s to access the container thought HTTPS?
How do I forward the client SSL certificates to the container?</p>
<p>I tried to setup a Gateway & a Virtual host without success. In every configuration I tried I hit a 503 error.</p>
| Antoine | <p>The Istio sidecar proxy container (when injected) in the pod will automatically handle communicating over HTTPS. The application code can continue to use HTTP, and the Istio sidecar will intercept the request, "upgrading" it to HTTPS. The sidecar proxy in the receiving pod will then handle "downgrading" the request to HTTP for the application container.</p>
<p>Simply put, there is no need to modify any application code. The Istio sidecar proxies requests and responses between Kubernetes pods with TLS/HTTPS.</p>
<p><strong>UPDATE:</strong></p>
<p>If you wish to use HTTPS at the application level, you can tell Istio to exclude certain inbound and outbound ports. To do so, you can add the <code>traffic.sidecar.istio.io/excludeInboundPorts</code> and <code>traffic.sidecar.istio.io/excludeOutboundPorts</code> annotations, respectively, to the Kubernetes deployment YAML.</p>
<p>Example:</p>
<pre><code>...
spec:
selector:
matchLabels:
app: podinfo
template:
metadata:
annotations:
        traffic.sidecar.istio.io/excludeInboundPorts: "443"
        traffic.sidecar.istio.io/excludeOutboundPorts: "443"
labels:
...
</code></pre>
| NG235 |
<p>I have developed a Camel route with Spring Boot. Now I want to trace the route using Jaeger. I tried <a href="https://github.com/apache/camel/tree/master/examples/camel-example-opentracing" rel="nofollow noreferrer">this example</a> to trace the route using <code>camel-opentracing</code> component, but I am unable to get the traces to Jaeger.</p>
<p>I can only see the traces in the console. One thing I am not clear about is where to add the Jaeger URL.
Any working example will be helpful.</p>
| Debdeep Das | <p>What I eventually did was create a Jaeger tracer and annotate it with <code>@Bean</code>.</p>
| Debdeep Das |
<p>I would like to trigger a scheduled kubernetes job manually for testing purposes. How can I do this?</p>
| sreekari vemula | <p>Set <code>CRONJOB</code> to the name of your scheduled job. Set <code>JOB</code> to whatever you want.</p>
<pre><code>kubectl create job --from=cronjob/CRONJOB JOB;
</code></pre>
<p>Depending on which version of Kubernetes you are running, you may need to use the entire cronjob api resource name, for example:</p>
<pre><code>kubectl create job --from=cronjob.v1beta1.batch/CRONJOB JOB;
</code></pre>
<p>You can determine the version to use by running:</p>
<pre><code>kubectl api-resources | grep cronjob
</code></pre>
| ericfossas |
<p>I've deployed an docker registry inside my kubernetes:</p>
<pre><code>$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
registry-docker-registry ClusterIP 10.43.39.81 <none> 443/TCP 162m
</code></pre>
<p>I'm able to pull images from my machine (service is exposed via an ingress rule):</p>
<pre><code>$ docker pull registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd
...
Status: Downloaded newer image for registry-do...
</code></pre>
<p>When I'm trying to test it in order to deploy my image into the same kubernetes:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: covid-backend
namespace: skaffold
spec:
replicas: 3
selector:
matchLabels:
app: covid-backend
template:
metadata:
labels:
app: covid-backend
spec:
containers:
- image: registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd
name: covid-backend
ports:
- containerPort: 8080
</code></pre>
<p>Then, I've tried to deploy it:</p>
<pre><code>$ cat pod.yaml | kubectl apply -f -
</code></pre>
<p>However, kubernetes isn't able to reach the registry:</p>
<p>Extract of <code>kubectl get events</code>:</p>
<pre><code>6s Normal Pulling pod/covid-backend-774bd78db5-89vt9 Pulling image "registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd"
1s Warning Failed pod/covid-backend-774bd78db5-89vt9 Failed to pull image "registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": rpc error: code = Unknown desc = failed to pull and unpack image "registry-docker-registry.registry/skaffold-covid-backend@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": failed to resolve reference "registry-docker-registry.registry/skaffold-covid-backend@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": failed to do request: Head https://registry-docker-registry.registry/v2/skaffold-covid-backend/manifests/sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd: dial tcp: lookup registry-docker-registry.registry: Try again
1s Warning Failed pod/covid-backend-774bd78db5-89vt9 Error: ErrImagePull
</code></pre>
<p>As you can see, kubernetes is not able to get access to the internally deployed registry...</p>
<p>Any ideas?</p>
| Jordi | <p>I would recommend to follow docs from k3d, they are <a href="https://github.com/rancher/k3d/blob/master-v1/docs/registries.md#using-the-k3d-registry" rel="nofollow noreferrer">here</a>.</p>
<p>More precisely this one</p>
<h2><a href="https://github.com/rancher/k3d/blob/master-v1/docs/registries.md#using-your-own-local-registry" rel="nofollow noreferrer">Using your own local registry</a></h2>
<blockquote>
<p>If you don't want k3d to manage your registry, you can start it with some docker commands, like:</p>
</blockquote>
<pre><code>docker volume create local_registry
docker container run -d --name registry.local -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
</code></pre>
<blockquote>
<p>These commands will start your registry at registry.local:5000. In order to push to this registry, you will need to add the line at /etc/hosts as we described in the <a href="https://github.com/rancher/k3d/blob/master-v1/docs/registries.md#etc-hosts" rel="nofollow noreferrer">previous section</a>. Once your registry is up and running, we will need to add it to your <a href="https://github.com/rancher/k3d/blob/master-v1/docs/registries.md#registries-file" rel="nofollow noreferrer">registries.yaml configuration</a> file. Finally, you must connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.local. And then you can <a href="https://github.com/rancher/k3d/blob/master-v1/docs/registries.md#testing" rel="nofollow noreferrer">check your local registry</a>.</p>
</blockquote>
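<p>The network-connect step mentioned above, as a copy-pasteable command (the network name <code>k3d-k3s-default</code> assumes the default k3d cluster name):</p>
<pre><code>docker network connect k3d-k3s-default registry.local
</code></pre>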
<p><strong>Pushing to your local registry address</strong></p>
<blockquote>
<p>The registry will be located, by default, at registry.local:5000 (customizable with the --registry-name and --registry-port parameters). All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname must also be resolved from your host.</p>
<p>The easiest solution for this is to add an entry in your <strong>/etc/hosts</strong> file like this:</p>
</blockquote>
<pre><code>127.0.0.1 registry.local
</code></pre>
<blockquote>
<p>Once again, this will only work with k3s >= v0.10.0 (see the section below when using k3s <= v0.9.1)</p>
</blockquote>
<p><strong>Local registry volume</strong></p>
<blockquote>
<p>The local k3d registry uses a volume for storing the images. This volume will be destroyed when the k3d registry is released. In order to persist this volume and make these images survive the removal of the registry, you can specify a volume with the --registry-volume and use the --keep-registry-volume flag when deleting the cluster. This will create a volume with the given name the first time the registry is used, while successive invocations will just mount this existing volume in the k3d registry container.</p>
</blockquote>
<p><strong>Docker Hub cache</strong></p>
<blockquote>
<p>The local k3d registry can also be used for caching images from the Docker Hub. You can start the registry as a pull-through cache when the cluster is created with --enable-registry-cache. Used in conjunction with --registry-volume/--keep-registry-volume, it can speed up all the downloads from the Hub by keeping a persistent cache of images in your local machine.</p>
</blockquote>
<p><strong>Testing your registry</strong></p>
<p>You should test that you can</p>
<ul>
<li>push to your registry from your local development machine.</li>
<li>use images from that registry in Deployments in your k3d cluster.</li>
</ul>
<blockquote>
<p>We will verify these two things for a local registry (located at registry.local:5000) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker's documentation for this).</p>
<p>Firstly, we can download some image (like nginx) and push it to our local registry with:</p>
</blockquote>
<pre><code>docker pull nginx:latest
docker tag nginx:latest registry.local:5000/nginx:latest
docker push registry.local:5000/nginx:latest
</code></pre>
<blockquote>
<p>Then we can deploy a pod referencing this image to your cluster:</p>
</blockquote>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-test-registry
labels:
app: nginx-test-registry
spec:
replicas: 1
selector:
matchLabels:
app: nginx-test-registry
template:
metadata:
labels:
app: nginx-test-registry
spec:
containers:
- name: nginx-test-registry
image: registry.local:5000/nginx:latest
ports:
- containerPort: 80
EOF
</code></pre>
<blockquote>
<p>Then you should check that the pod is running with kubectl get pods -l "app=nginx-test-registry".</p>
</blockquote>
<hr>
<p>Additionally, there are 2 GitHub links worth visiting</p>
<ul>
<li><p><a href="https://github.com/rancher/k3d/issues/144" rel="nofollow noreferrer">K3d not able resolve dns</a> </p>
<p>You could try to use an <a href="https://github.com/rancher/k3d/issues/144#issuecomment-552863119" rel="nofollow noreferrer">answer</a> provided by @rjshrjndrn, might solve your issue with dns.</p></li>
<li><p><a href="https://github.com/rancher/k3d/issues/184" rel="nofollow noreferrer">docker images are not pulled from docker repository behind corporate proxy</a></p>
<p>Open github issue on k3d with same problem as yours.</p></li>
</ul>
| Jakub |
<p>I am trying to retrieve the cluster client certificate from a GKE cluster to authenticate with the Kubernetes API server. I am using the GKE API to retrieve cluster information, but the <strong>client certificate</strong> and <strong>client key</strong> are empty in the response. On further investigation, I found out that the client certificate is disabled by default in Google Kubernetes Engine in the latest version. Now, when I try to enable it from Cluster Settings, it says that</p>
<blockquote>
<p>client certificate is immutable.</p>
</blockquote>
<p>My question is: how can I enable the client certificate for a GKE cluster?</p>
| Jawad Tariq | <p>As per the <a href="https://gitlab.com/gitlab-org/gitlab-foss/-/issues/58208" rel="nofollow noreferrer">gitlab</a> issue, starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the <code>--[no-]issue-client-certificate</code> flag. Clusters will have basic authentication and client certificate issuance disabled by default.</p>
<p>As per @Dawid, you can create a cluster with Client certificate > Enable using the command below; after that, modification is not possible on that cluster.</p>
<pre><code>gcloud container clusters create YOUR-CLUSTER --machine-type=custom-2-12288 --issue-client-certificate --zone us-central1-a
</code></pre>
<p>As a workaround, if you want to enable the client certificate on an existing cluster, you can clone (duplicate) the cluster using the command line, adding --issue-client-certificate at the end of the command as follows:</p>
<pre><code>gcloud beta container --project "xxxxxxxx" clusters create "high-mem-pool-clone-1" --zone "us-central1-f" --username "admin" --cluster-version "1.16.15-gke.6000" --release-channel "None" --machine-type "custom-2-12288" --image-type "COS" --disk-type "pd-standard" --disk-size "100" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "3" --enable-stackdriver-kubernetes --no-enable-ip-alias --network "projects/xxxxxxx/global/networks/default" --subnetwork "projects/xxxxxxxx/regions/us-central1/subnetworks/default" --no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 --issue-client-certificate
</code></pre>
| Mahboob |
<p>Is there a way to get <code>top pods</code> filtered by node?</p>
<p>Use case: I have a node which is reported to use 103% of the cpu and I want to validate which pods are causing it.</p>
| Vojtěch | <p>I don't think there is a direct way to do it with the <code>kubectl top pods</code> command, since the only filter option is a label/selector, which applies to pods only.</p>
<p>For your use case, you can use the command:</p>
<pre><code>kubectl get pods -o wide | grep <node> | awk {'print $1'} | xargs -n1 command kubectl top pods --no-headers
</code></pre>
<ul>
<li><code>kubectl get pods -o wide</code>: display pods with their associated node information</li>
<li><code>grep <node></code>: let you filter pod located on a specific node</li>
<li><code>awk {'print $1'}</code>: print the first column (name of the pods)</li>
<li><code>xargs -n1 kubectl top pods --no-headers</code>: execute the top command for each pod without the headers (NAME, CPU, MEMORY)</li>
</ul>
<p>Additionally, you can check the limits you've set for each pod in one specific node using the command <code>kubectl describe node <node></code></p>
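<p>As an alternative to <code>grep</code>, you can also let the API server filter by node with a field selector (assuming a reasonably recent kubectl/cluster version), for example:</p>
<pre><code>kubectl get pods -o wide --field-selector spec.nodeName=<node> | awk 'NR>1 {print $1}' | xargs -n1 kubectl top pods --no-headers
</code></pre>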
| Titou |
<p>This is the resource status:</p>
<pre><code>kind: Namespace
api
Version: v1
metadata:
name: linkerd
selfLink: /api/v1/namespaces/linkerd
uid: e7337b2b-bddb-4344-a986-d450973bc8cf
resourceVersion: '5540346'
creationTimestamp: '2020-05-10T13:49:21Z'
deletionTimestamp: '2020-06-03T20:16:30Z'
labels:
config.linkerd.io/admission-webhooks: disabled
linkerd.io/is-control-plane: 'true'
spec:
finalizers:
- kubernetes
status:
phase: Terminating
conditions:
- type: NamespaceDeletionDiscoveryFailure
status: 'True'
lastTransitionTime: '2020-06-03T20:16:44Z'
reason: DiscoveryFailed
message: >-
Discovery failed for some groups, 1 failing: unable to retrieve the
complete list of server APIs: tap.linkerd.io/v1alpha1: the server is
currently unable to handle the request
- type: NamespaceDeletionGroupVersionParsingFailure
status: 'False'
lastTransitionTime: '2020-06-03T20:16:38Z'
reason: ParsedGroupVersions
message: All legacy kube types successfully parsed
- type: NamespaceDeletionContentFailure
status: 'False'
lastTransitionTime: '2020-06-03T20:16:38Z'
reason: ContentDeleted
message: 'All content successfully deleted, may be waiting on finalization'
- type: NamespaceContentRemaining
status: 'False'
lastTransitionTime: '2020-06-03T20:16:57Z'
reason: ContentRemoved
message: All content successfully removed
- type: NamespaceFinalizersRemaining
status: 'False'
lastTransitionTime: '2020-06-03T20:16:38Z'
reason: ContentHasNoFinalizers
message: All content-preserving finalizers finished
</code></pre>
<p>Apiservices:</p>
<pre><code>$ kubectl get apiservice
NAME SERVICE AVAILABLE AGE
v1. Local True 28d
v1.admissionregistration.k8s.io Local True 28d
v1.apiextensions.k8s.io Local True 28d
v1.apps Local True 28d
v1.authentication.k8s.io Local True 28d
v1.authorization.k8s.io Local True 28d
v1.autoscaling Local True 28d
v1.batch Local True 28d
v1.coordination.k8s.io Local True 28d
v1.networking.k8s.io Local True 28d
v1.rbac.authorization.k8s.io Local True 28d
v1.scheduling.k8s.io Local True 28d
v1.storage.k8s.io Local True 28d
v1alpha1.linkerd.io Local True 18d
v1alpha1.snapshot.storage.k8s.io Local True 28d
v1alpha1.split.smi-spec.io Local True 18d
v1alpha1.tap.linkerd.io linkerd/linkerd-tap False (ServiceNotFound) 24d
v1alpha2.acme.cert-manager.io Local True 18d
v1alpha2.cert-manager.io Local True 18d
v1alpha2.linkerd.io Local True 18d
v1beta1.admissionregistration.k8s.io Local True 28d
v1beta1.apiextensions.k8s.io Local True 28d
v1beta1.authentication.k8s.io Local True 28d
v1beta1.authorization.k8s.io Local True 28d
v1beta1.batch Local True 28d
v1beta1.certificates.k8s.io Local True 28d
v1beta1.coordination.k8s.io Local True 28d
v1beta1.discovery.k8s.io Local True 18d
v1beta1.events.k8s.io Local True 28d
v1beta1.extensions Local True 28d
v1beta1.networking.k8s.io Local True 28d
v1beta1.node.k8s.io Local True 28d
v1beta1.policy Local True 28d
v1beta1.rbac.authorization.k8s.io Local True 28d
v1beta1.scheduling.k8s.io Local True 28d
v1beta1.storage.k8s.io Local True 28d
v2.cilium.io Local True 18d
v2beta1.autoscaling Local True 28d
v2beta2.autoscaling Local True 28d
</code></pre>
<p>I tried deleting the finalizer; it did nothing.
I also tried to delete it with <code>--grace-period=0 --force</code>; still nothing.
It does not display any resources under the namespace.</p>
<p>Is there anything else I can do to force the delete?</p>
| Alex Efimov | <p>The error you experience is caused by the apiservice <code>v1alpha1.tap.linkerd.io</code> which is not working (<code>ServiceNotFound</code>). It is hard to say what have caused it but I can see two ways out of it:</p>
<ol>
<li><p>If you don't need that API than simply delete it: <code>kubectl delete apiservice v1alpha1.tap.linkerd.io</code>.</p></li>
<li><p>If you need it, you can try to delete pods related to it in order to restart them and see if that helps.</p></li>
</ol>
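<p>In general, you can quickly spot any other broken aggregated APIs (a common cause of namespaces stuck in Terminating) with:</p>
<pre><code>kubectl get apiservice | grep False
</code></pre>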
<p>After that you should be able to delete the namespace you mentioned.</p>
<p>Please let me know if that helps. </p>
| Wytrzymały Wiktor |
<p>I have a K8s job that spins up a pod with two containers. Those containers are a client and a server. After the client has done all it needs to, it sends a special stop signal to the service, after which the service exits; then the client exits. The job succeeds.</p>
<p>The client and the service containers use <a href="https://www.eclipse.org/jetty/documentation/jetty-9/index.html" rel="nofollow noreferrer">jetty</a> (see "Startup / Shutdown Command Line"), which has that signaling capability. I am looking for something more portable. It would be nice to be able to send a SIGTERM from the client to the service; then the client would not need to use <code>jetty</code> for signaling. Is there a way to send SIGTERM from the client container to the server container? The client and the server processes are PID 1 in their respective containers.</p>
| Anton Daneyko | <p>Yes, enable <code>shareProcessNamespace</code> on the pod, for example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: app
spec:
shareProcessNamespace: true
</code></pre>
<p>Your containers can now send signals to one another. Note that they will no longer run as PID 1, though.</p>
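<p>For example, once the process namespace is shared, the client container can signal the server process by name. A rough sketch (the container name <code>client</code> and the process name <code>my-server</code> are placeholders, and the client image is assumed to ship <code>ps</code>, <code>pgrep</code> and <code>kill</code>):</p>
<pre><code># list all processes visible to the client container to find the server PID
kubectl exec app -c client -- ps ax

# send SIGTERM to the server process from inside the client container
kubectl exec app -c client -- sh -c 'kill -TERM "$(pgrep -o -f my-server)"'
</code></pre>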
<p>Here are the docs that explain it all in detail:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/</a></p>
| ericfossas |
<p>I have created a NodeJS application using http/2 following <a href="https://nodejs.org/api/http2.html#http2_compatibility_api" rel="nofollow noreferrer">this example</a>:</p>
<blockquote>
<p>Note: this application uses self-signed certificate until now.</p>
</blockquote>
<p>We deployed it on GKE, and it has been working so far.
Here is what this simple architecture looks like:</p>
<p><a href="https://i.stack.imgur.com/hXWsR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hXWsR.png" alt="enter image description here"></a></p>
<p>Now, we want to start using a real certificate, and don't know where the right place to put it is. </p>
<p>Should we put it in the pod (overriding the self-signed certificate)? </p>
<p>Should we add a proxy on top of this architecture to put the certificate in?</p>
| Marcelo Dias | <p>In GKE you can use a <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">ingress object</a> to routing external HTTP(S) traffic to your applications in your cluster. With this you have 3 options:</p>
<ul>
<li>Google-managed certificates</li>
<li>Self-managed certificates shared with GCP</li>
<li>Self-managed certificates as Secret resources</li>
</ul>
<p>Check <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress" rel="nofollow noreferrer">this guide</a> for the ingress load balancing</p>
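<p>For the third option (a self-managed certificate stored as a Secret), a minimal sketch, assuming you already have the certificate and key files (all names below are placeholders):</p>
<pre><code># store the real certificate and key in a TLS secret
kubectl create secret tls my-app-tls --cert=fullchain.pem --key=privkey.pem

# then reference it from the Ingress spec:
#   spec:
#     tls:
#     - hosts:
#       - myapp.example.com
#       secretName: my-app-tls
</code></pre>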
| David C |
<p>I am trying to convert a Deployment to a StatefulSet in Kubernetes. Below is my Deployment description.</p>
<pre><code># Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "4"
creationTimestamp: "2020-04-02T07:43:32Z"
generation: 6
labels:
run: jbpm-server-full
name: jbpm-server-full
namespace: dice-jbpm
resourceVersion: "2689379"
selfLink: /apis/apps/v1/namespaces/dice-jbpm/deployments/jbpm-server-full
uid: 8aff5d46-533a-4178-b9b5-5015ff1cdd5d
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: jbpm-server-full
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: jbpm-server-full
spec:
containers:
- image: jboss/jbpm-server-full:latest
imagePullPolicy: Always
name: jbpm-server-full
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /k8sdata/jbpmdata
name: jbpm-pv-storage
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: jbpm-pv-storage
persistentVolumeClaim:
claimName: jbpm-pv-claim
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2020-04-02T07:43:32Z"
lastUpdateTime: "2020-04-09T12:35:19Z"
message: ReplicaSet "jbpm-server-full-b48989789" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2020-04-09T12:37:05Z"
lastUpdateTime: "2020-04-09T12:37:05Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 6
readyReplicas: 1
replicas: 1
updatedReplicas: 1
</code></pre>
<p>Error:</p>
<pre><code>deployments.apps "jbpm-server-full" was not valid:
* : Invalid value: "The edited file failed validation":
ValidationError(StatefulSet.spec): unknown field "progressDeadlineSeconds" in io.k8s.api.apps.v1.StatefulSetSpec,
ValidationError(StatefulSet.spec):
unknown field "strategy" in io.k8s.api.apps.v1.StatefulSetSpec,
ValidationError(StatefulSet.spec): missing required field "serviceName" in io.k8s.api.apps.v1.StatefulSetSpec,
ValidationError(StatefulSet.status): unknown field "availableReplicas" in io.k8s.api.apps.v1.StatefulSetStatus,
ValidationError(StatefulSet.status.conditions[0]): unknown field "lastUpdateTime" in io.k8s.api.apps.v1.StatefulSetCondition,
ValidationError(StatefulSet.status.conditions[1]): unknown field "lastUpdateTime" in io.k8s.api.apps.v1.StatefulSetCondition]
</code></pre>
<p>I have attached a persistent volume to this deployment, but I am losing the data whenever the pod is restarted. Now I am trying to convert this existing Deployment to a StatefulSet. I have followed several links, but in vain, resulting in errors.</p>
<p>Could you help me?</p>
| anil kumar | <p>You have a few fields which can't be used in a StatefulSet.</p>
<blockquote>
<p>unknown field "strategy" in io.k8s.api.apps.v1.StatefulSetSpec</p>
</blockquote>
<p>It should be <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies" rel="nofollow noreferrer">UpdateStrategy</a></p>
<blockquote>
<p>unknown field "progressDeadlineSeconds" in io.k8s.api.apps.v1.StatefulSetSpec</p>
</blockquote>
<p>As far as I know it's a Deployment-only field; it's not available in a StatefulSet.</p>
<blockquote>
<p>ValidationError(StatefulSet.status): unknown field "availableReplicas" in io.k8s.api.apps.v1.StatefulSetStatus,</p>
</blockquote>
<blockquote>
<p>ValidationError(StatefulSet.status.conditions[0]): unknown field "lastUpdateTime" in io.k8s.api.apps.v1.StatefulSetCondition,</p>
</blockquote>
<blockquote>
<p>ValidationError(StatefulSet.status.conditions[1): unknown field "lastUpdateTime" in io.k8s.api.apps.v1.StatefulSetCondition]</p>
</blockquote>
<p>You should delete everything from the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#deployment-status" rel="nofollow noreferrer">status</a> field. It's created after deployment.</p>
<blockquote>
<p>ValidationError(StatefulSet.spec): missing required field "serviceName" in io.k8s.api.apps.v1.StatefulSetSpec</p>
</blockquote>
<p>You have to add <code>spec.serviceName</code> with the name of your service.</p>
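<p>A quick way to check which fields a StatefulSet spec actually accepts is:</p>
<pre><code>kubectl explain statefulset.spec
kubectl explain statefulset.spec.updateStrategy
</code></pre>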
<hr />
<p>It should look like this</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
deployment.kubernetes.io/revision: "4"
labels:
run: jbpm-server-full
name: jbpm-server-full
spec:
replicas: 1
serviceName: jbpm-server-servicename
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
run: jbpm-server-full
template:
metadata:
labels:
run: jbpm-server-full
spec:
terminationGracePeriodSeconds: 30
containers:
- name: jbpm-server-full
image: jboss/jbpm-server-full:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
volumeMounts:
- name: jbpm-pv-storage
mountPath: /k8sdata/jbpmdata
volumeClaimTemplates:
- metadata:
name: jbpm-pv-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "my-storage-class"
resources:
requests:
storage: 1Gi
</code></pre>
<hr />
<p>Links which might be helpful when working with statefulsets.</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/</a></li>
<li><a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/</a></li>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset</a></li>
</ul>
| Jakub |
<p>I want to write a kubectl command to query all the namespaces and then collect the value of a specific label.</p>
<pre><code>{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/created-by\":\"testuser\",\"app.kubernetes.io/instance\":\"thisisatest\",\"app.kubernetes.io/name\":\"company\",\"app.kubernetes.io/version\":\"2.5\"},\"name\":\"thisiatest\"}}\n"
},
"creationTimestamp": "2022-09-01T13:16:12Z",
"labels": {
"app.kubernetes.io/created-by": "testuser",
...
</code></pre>
<p>I have a version with jq that works.</p>
<pre><code>printf "\ncreated by:\n"
kubectl get namespace -l app.kubernetes.io/name=phoenics -o json | jq '.items [] | .metadata | .labels | ."app.kubernetes.io/created-by"'
</code></pre>
<p>But I can't really get a version with jsonpath to work. What am I doing wrong?</p>
<pre><code>printf "\ncreated by: JsonPath\n"
kubectl get namespace -l app.kubernetes.io/name=phoenics -o jsonpath="{range ['items'][*]['metadata']['labels']['app.kubernetes.io/created-by']}{'\n'}{end}"
</code></pre>
<p>There is no output. Oh, and I'm working on Windows with Git Bash.</p>
| Patrick | <p>This should work:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get namespace -l app.kubernetes.io/name=phoenics \
-o jsonpath="{range .items[*]}{.metadata.labels.app\.kubernetes\.io/created-by}{'\n'}{end}"
</code></pre>
| ericfossas |
<p>When installing Kube-router on a single node Kubernetes Cluster, I came across the following issue:</p>
<pre><code>kube-system kube-router-wnnq8 0/1 CrashLoopBackOff 11 55m
</code></pre>
<p>Output of the logs:</p>
<pre><code>kubectl logs kube-router-wnnq8 -n kube-system
I0826 06:02:25.760806 1 kube-router.go:223] Running /usr/local/bin/kube-router version v1.0.1, built on 2020-07-28T23:51:21+0000, go1.10.8
Failed to parse kube-router config: Failed to build configuration from CLI: Error loading config file "/var/lib/kube-router/kubeconfig": read /var/lib/kube-router/kubeconfig: is a directory
</code></pre>
<p>output of kubectl describe:</p>
<pre><code>Normal Pulling 35m (x12 over 49m) kubelet, bridge19102 Pulling image "docker.io/cloudnativelabs/kube-router"
Warning Failed 30m (x31 over 37m) kubelet, bridge19102 Error: ImagePullBackOff
Normal BackOff 20m (x73 over 37m) kubelet, bridge19102 Back-off pulling image "docker.io/cloudnativelabs/kube-router"
Warning BackOff 31s (x140 over 49m) kubelet, bridge19102 Back-off restarting failed container
</code></pre>
| Lorenzo Sterenborg | <p>This is a community wiki answer based on the comments and posted for better visibility.</p>
<p>The error message: <code>Failed to parse kube-router config: Failed to build configuration from CLI: Error loading config file "/var/lib/kube-router/kubeconfig": read /var/lib/kube-router/kubeconfig: is a directory</code> indicates that the path to the config <strong>file</strong> was not found and it points to a <strong>directory</strong> instead.</p>
<p>As you already found out, removing the directory and making it a file is a working resolution for this problem.</p>
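<p>A rough sketch of that fix on the affected node (the source path of a valid kubeconfig is distribution-specific; <code>/etc/kubernetes/admin.conf</code> is just the kubeadm convention, used here as an example):</p>
<pre><code># on the node: replace the wrongly created directory with a real kubeconfig file
sudo rm -rf /var/lib/kube-router/kubeconfig
sudo cp /etc/kubernetes/admin.conf /var/lib/kube-router/kubeconfig

# restart the failing pod so it picks up the file
kubectl delete pod -n kube-system kube-router-wnnq8
</code></pre>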
| Wytrzymały Wiktor |
<p>In kubernetes we can use environment variable to pass hostIP using</p>
<pre><code> env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
</code></pre>
<p>So, similarly, how do I get the hostName instead of the hostIP?</p>
| Shreyas Holla P | <pre><code>env:
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
</code></pre>
<p>See: <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api</a></p>
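<p>You can verify the injected value from inside a running pod (the pod name is a placeholder):</p>
<pre><code>kubectl exec my-pod -- printenv MY_NODE_NAME
</code></pre>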
| Edward Rosinzonsky |
<p>I have istio installed and can see it on Rancher. I have keycloak installed as well. I am trying to connect the two and have a gateway set up so I can access the keycloak front-end through a URL.
In my keycloak manifest I have </p>
<pre><code># Source: keycloak/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: keycloak
.
. #Many other lines here
.
ports:
- name: http
containerPort: 8080
protocol: TCP
</code></pre>
<p>I then setup a gateway with command - </p>
<pre><code>kubectl apply -f networking/custom-gateway.yaml
</code></pre>
<p>And in my custom-gateway.yaml file I have - </p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: keycloak-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: keycloak
spec:
hosts:
- "*"
gateways:
- keycloak-gateway
http:
- match:
- uri:
exact: /keycloak
rewrite:
uri: "/" # Non context aware backend
route:
- destination:
host: keycloak
port:
number: 80
websocketUpgrade: true
</code></pre>
<p>Now when I try to access the URL with <a href="http://node_ip_address:port/keycloak" rel="nofollow noreferrer">http://node_ip_address:port/keycloak</a>, I find that I am not able to access the front end. I have verified that keycloak is installed and the pod is up and running on Rancher.
I also have my istio instance connected to the <a href="https://istio.io/docs/examples/bookinfo/" rel="nofollow noreferrer">bookinfo application</a> and am able to run the bookinfo-gateway and connect to <a href="http://node_ip_address:port/productpage" rel="nofollow noreferrer">http://node_ip_address:port/productpage</a> with a gateway that looks like the one described <a href="https://github.com/istio/istio/blob/master/samples/bookinfo/networking/bookinfo-gateway.yaml" rel="nofollow noreferrer">here</a>. I am trying to setup the same gateway only for keycloak.
What am I doing wrong in my yaml files. How do I fix this? Any help is appreciated. Do I have the ports connected correctly? </p>
| BipinS. | <p>As far as I can see, you should fix your Virtual Service.</p>
<p>I prepared small example with <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a> and <a href="https://github.com/helm/charts/tree/master/stable/keycloak" rel="nofollow noreferrer">keycloak helm chart</a>.</p>
<hr>
<p>Save this as keycloak.yaml, you can configure your keycloak password here.</p>
<pre><code>keycloak:
service:
type: ClusterIP
password: mykeycloakadminpasswd
persistence:
deployPostgres: true
dbVendor: postgres
</code></pre>
<hr>
<p>Install keycloak with helm and values prepared above.</p>
<hr>
<pre><code>helm upgrade --install keycloak stable/keycloak -f keycloak.yml
</code></pre>
<hr>
<p>Create gateway and virtual service</p>
<hr>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: keycloak-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: keycloak
spec:
hosts:
- "*"
gateways:
- keycloak-gateway
http:
- match:
- uri:
prefix: /auth
- uri:
prefix: /keycloak
rewrite:
uri: /auth
route:
- destination:
host: keycloak-http
port:
number: 80
</code></pre>
<p>The virtual service <code>route.host</code> is the name of the Keycloak pod's Kubernetes <strong>service</strong>.</p>
<blockquote>
<p>kubectl get svc</p>
</blockquote>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
keycloak-http ClusterIP 10.0.14.36 <none> 80/TCP 22m
</code></pre>
<p>You should be able to connect to keycloak via your ingress_gateway_ip/keycloak or ingress_gateway_ip/auth and login with keycloak credentials, in my example it's <code>login: keycloak</code> and <code>password: mykeycloakadminpasswd</code>. </p>
<p><strong>Note</strong> that you need to add the prefix match for /auth, as /auth is where the default Keycloak web UI does everything. The /keycloak prefix is just rewritten to /auth here.</p>
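<p>If you are not sure what your ingress gateway IP is, you can look it up with (assuming the default Istio install in the <code>istio-system</code> namespace):</p>
<pre><code>kubectl get svc istio-ingressgateway -n istio-system
# or just the external IP
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
</code></pre>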
| Jakub |
<p>Because of the increasing build time of our pipeline, we have tried several things to improve it. One step which was taking quite some time was the docker image push step, which was running sequentially. With 12 images, this step was taking 12-14 minutes, and we decided to try pushing the images in parallel (under the assumption that this would cut the time from 12-14 down to 2-4 minutes).</p>
<p>I tried multiple steps under a publish images stage, but it fails.</p>
<pre><code>- name: Publish images
steps:
- publishImageConfig:
dockerfilePath: ./frontend/deployment/Dockerfile
buildContext: ./frontend
tag: registry.remote.com/remote/frontend-${CICD_EXECUTION_ID}
pushRemote: true
registry: registry.remote.com
- publishImageConfig:
dockerfilePath: ./gateway/backend/src/Dockerfile
buildContext: ./gateway/backend
tag: registry.remote.com/remote/backend-${CICD_EXECUTION_ID}
pushRemote: true
registry: registry.remote.com
[...]
</code></pre>
<p>One image is pushed, but all the rest fail with <code>Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?</code></p>
<p>I've also tried increasing the <code>--max-concurrent-uploads</code> from <code>/etc/docker/daemon.json</code> without any success.</p>
| Turbut Alin | <p>Docker's <code>/var/lib/docker</code> can only be managed by a single daemon. If you want to publish more than one image at a time, there is a workaround for that. Try something like this:</p>
<pre><code>stages:
- name: Publish images_1
steps:
- publishImageConfig:
dockerfilePath: ./frontend/deployment/Dockerfile
buildContext: ./frontend
tag: registry.remote.com/remote/frontend-${CICD_EXECUTION_ID}
pushRemote: true
registry: registry.remote.com
- name: Publish images_2
steps:
- publishImageConfig:
dockerfilePath: ./gateway/backend/src/Dockerfile
buildContext: ./gateway/backend
tag: registry.remote.com/remote/backend-${CICD_EXECUTION_ID}
pushRemote: true
registry: registry.remote.com
env:
PLUGIN_STORAGE_PATH: /var/lib/docker_2
[...]
</code></pre>
<p>This bug was already reported in <a href="https://github.com/rancher/rancher/issues/16624" rel="nofollow noreferrer">this thread</a>; you can find more info there.
The issue was supposed to be fixed in Rancher <code>v2.2</code>, but some people still experience it in <code>v2.3</code>.
However, the workaround is still valid.</p>
<p>I am posting this answer as a community wiki because the fix was not my original idea.</p>
<p>Please let me know if that helped.</p>
| Wytrzymały Wiktor |
<p>I'm using docker to run container app A. When I upgrade the version of container app A, I upgrade the remote db using pgsql with the postgres image.</p>
<p>In k8s, I use an init container with the postgres image to run the script update.sh => if the process succeeds, then run container app A.</p>
<p>Coming from the docker environment, I wonder how to do the same with k8s?</p>
<p>#this problem has been solved, I used a command in the k8s resource and it works:</p>
<pre><code>- name: main-container
...
command:
- bash
- -c
args:
- |
      if [ -f /tmp/update_success ]; then
        # update succeeded
        do_something
      else
        # update failed
        do_somethingelse
      fi
</code></pre>
| Đặng Lực | <p>You would probably get a better answer if you posted your initContainer, but I would do something like this:</p>
<pre><code>initContainers:
- name: init
...
command:
- bash
- -c
args:
- |
update.sh && touch /tmp/update_success
volumeMounts:
- name: tmp
mountPath: /tmp
containers:
- name: main-container
...
command:
- bash
- -c
args:
- |
if [ -f /tmp/update_success ]; then
# Update succeeded
do_whatever
else
# Update failed
do_something_else
      fi
volumeMounts:
- name: tmp
mountPath: /tmp
volumes:
- name: tmp
emptyDir: {}
</code></pre>
<p>Also, if your init container exits non-zero, the main container will not run. If that's what you want, just make sure update.sh exits an error code when the update fails, and you don't need the above.</p>
| Edward Rosinzonsky |
<p>I'm trying to deploy the BookInfo application described here: <a href="https://istio.io/docs/examples/bookinfo/" rel="nofollow noreferrer">https://istio.io/docs/examples/bookinfo/</a></p>
<p>And, I'm working on Routing the Request based on header "end-user: jason" per this tutorial. <a href="https://istio.io/docs/tasks/traffic-management/request-routing/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/request-routing/</a></p>
<p>According to this tutorial, the <em>product</em> microservice adds a request header "end-user: jason" <strong>once you log in</strong>. </p>
<p>I want it to send out this header in all circumstances. In other words, for all requests which go out of the <strong>product</strong> microservice, I want this header to be attached, irrespective of where the request lands.</p>
<p>How can I achieve this?</p>
<p><a href="https://i.stack.imgur.com/fI2vh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fI2vh.png" alt="I want to achieve this"></a></p>
<p><strong>EDIT</strong></p>
<p>I created the following based on the advice given below. For some reason, the traffic is going to both versions of product all the time. These are all the configurations I have.</p>
<pre><code>##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: details
labels:
app: details
service: details
spec:
ports:
- port: 9080
name: http
selector:
app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: details-v1
labels:
app: details
version: v1
spec:
replicas: 1
template:
metadata:
labels:
app: details
version: v1
spec:
containers:
- name: details
image: istio/examples-bookinfo-details-v1:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: ratings
labels:
app: ratings
service: ratings
spec:
ports:
- port: 9080
name: http
selector:
app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ratings-v1
labels:
app: ratings
version: v1
spec:
replicas: 1
template:
metadata:
labels:
app: ratings
version: v1
spec:
containers:
- name: ratings
image: istio/examples-bookinfo-ratings-v1:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: reviews
labels:
app: reviews
service: reviews
spec:
ports:
- port: 9080
name: http
selector:
app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v1
labels:
app: reviews
version: v1
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v1
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v1:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v2
labels:
app: reviews
version: v2
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v2
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v2:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v3
labels:
app: reviews
version: v3
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v3
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v3:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: productpage
labels:
app: productpage
service: productpage
spec:
ports:
- port: 9080
name: http
selector:
app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: productpage-v1
labels:
app: productpage
version: v1
spec:
replicas: 1
template:
metadata:
labels:
app: productpage
version: v1
spec:
containers:
- name: productpage
image: istio/examples-bookinfo-productpage-v1:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: productpage-v2
labels:
app: productpage
version: v2
spec:
replicas: 1
template:
metadata:
labels:
app: productpage
version: v2
spec:
containers:
- name: productpage
image: istio/examples-bookinfo-productpage-v1:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
http:
- match:
- uri:
exact: /productpage
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
route:
- destination:
host: productpage
port:
number: 9080
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: productpage
spec:
host: productpage
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v3
labels:
version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: ratings
spec:
host: ratings
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v2-mysql
labels:
version: v2-mysql
- name: v2-mysql-vm
labels:
version: v2-mysql-vm
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: details
spec:
host: details
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: productpage
spec:
hosts:
- productpage
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: productpage
subset: v2
- route:
- destination:
host: productpage
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
http:
- route:
- destination:
host: ratings
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: details
spec:
hosts:
- details
http:
- route:
- destination:
host: details
subset: v1
---
</code></pre>
<p><a href="https://i.stack.imgur.com/wrn1q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wrn1q.png" alt="enter image description here"></a></p>
| Anoop Hallimala | <p>As Anoop mentioned in the comments, he wants to deploy 2 productpage apps:</p>
<ul>
<li>first will route only to review v1</li>
<li>second route only to review v2</li>
</ul>
<p>So I made a quick test with the productpage from the istio docs, and you have to configure virtual services and destination rules to make it happen.</p>
<hr />
<h2>Install <a href="https://istio.io/docs/examples/bookinfo/" rel="nofollow noreferrer">istio bookinfo</a></h2>
<p>deployments and services</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.5/samples/bookinfo/platform/kube/bookinfo.yaml
</code></pre>
<p>gateway and virtual service</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.5/samples/bookinfo/networking/bookinfo-gateway.yaml
</code></pre>
<hr />
<h2>As mentioned <a href="https://istio.io/docs/examples/bookinfo/#apply-default-destination-rules" rel="nofollow noreferrer">here</a></h2>
<blockquote>
<p>Before you can use Istio to control the Bookinfo version routing, you need to define the available versions, called subsets, in destination rules.</p>
<p>If you did not enable mutual TLS, execute this command:</p>
</blockquote>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.5/samples/bookinfo/networking/destination-rule-all.yaml
</code></pre>
<blockquote>
<p>If you did enable mutual TLS, execute this command:</p>
</blockquote>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.5/samples/bookinfo/networking/destination-rule-all-mtls.yaml
</code></pre>
<hr />
<h2>Then simply add your virtual service</h2>
<p>You can either use v1 of each microservice as in this <a href="https://istio.io/docs/tasks/traffic-management/request-routing/#apply-a-virtual-service" rel="nofollow noreferrer">example</a> or just reviews v1.</p>
<p>So for each microservice to use v1 it would be</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.5/samples/bookinfo/networking/virtual-service-all-v1.yaml
</code></pre>
<p>Just for reviews v1</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
</code></pre>
<p>And that's everything you need to do for the first productpage app.</p>
<hr />
<h2>Second productpage app</h2>
<p>You have to do exactly the same with the second one; the only change here would be the reviews virtual service matching subset v2. Of course, if you want to deploy both of them, I suggest using 2 namespaces to separate them, and changing the namespaces in the virtual services, deployments, gateways, etc.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v2
</code></pre>
<hr />
<h2>Headers</h2>
<p>As mentioned in <a href="https://istio.io/docs/reference/config/networking/virtual-service/#Headers" rel="nofollow noreferrer">istio documentation</a> you can use</p>
<blockquote>
<p>request Header -> manipulation rules to apply before forwarding a request to the destination service</p>
</blockquote>
<p><strong>OR</strong></p>
<blockquote>
<p>response Header -> manipulation rules to apply before returning a response to the caller</p>
</blockquote>
<p>I'm not totally sure what you need; this <a href="https://stackoverflow.com/questions/60818880/istio-virtual-service-header-rules-are-not-applied/60826906#60826906">example</a> shows how to add a response header to every request, which you can do in the virtual service. More about it in the linked example.</p>
<hr />
<p><strong>EDIT</strong></p>
<hr />
<p>I made these virtual services based on the picture you added, so whenever you log in as jason it will route you to productpage v2 and reviews v2; I left ratings and details at v1 by default.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: productpage
spec:
hosts:
- productpage
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: productpage
subset: v2
- route:
- destination:
host: productpage
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
http:
- route:
- destination:
host: ratings
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: details
spec:
hosts:
- details
http:
- route:
- destination:
host: details
subset: v1
---
</code></pre>
| Jakub |
<p>When applying an update to a kubernetes resource with <code>kubectl apply -f</code>, where the applied configuration removes some annotations that currently exist on the deployed resource, those annotations are not being removed (but changes to the existing ones are being properly applied). How can I force the removed annotations to be deleted in the update process?</p>
<p>BTW, I want to avoid deleting and recreating the resource.</p>
| artisan | <p>As @Matt mentioned</p>
<blockquote>
<p>Did you use kubectl apply to create this data on the resource? apply stores previous state in an annotation. If that annotation doesn't exist then it can't detect what data to delete</p>
</blockquote>
<p>More about it <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#how-apply-calculates-differences-and-merges-changes" rel="nofollow noreferrer">here</a></p>
<hr>
<p>You can use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#edit" rel="nofollow noreferrer">kubectl edit</a> to delete those annotations.</p>
<blockquote>
<p>Edit a resource from the default editor.</p>
<p>The edit command allows you to directly edit any API resource you can retrieve via the command line tools. It will open the editor defined by your KUBE_EDITOR, or EDITOR environment variables, or fall back to 'vi' for Linux or 'notepad' for Windows. You can edit multiple objects, although changes are applied one at a time. The command accepts filenames as well as command line arguments, although the files you point to must be previously saved versions of resources.</p>
<p>Editing is done with the API version used to fetch the resource. To edit using a specific API version, fully-qualify the resource, version, and group.</p>
<p>The default format is YAML. To edit in JSON, specify "-o json".</p>
<p>The flag --windows-line-endings can be used to force Windows line endings, otherwise the default for your operating system will be used.</p>
<p>In the event an error occurs while updating, a temporary file will be created on disk that contains your unapplied changes. The most common error when updating a resource is another editor changing the resource on the server. When this occurs, you will have to apply your changes to the newer version of the resource, or update your temporary saved copy to include the latest resource version.</p>
</blockquote>
<hr>
<p>I made an example with an nginx pod and an annotation:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: annotations-demo
annotations:
delete: without-restart
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p>I used </p>
<ul>
<li><p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe" rel="nofollow noreferrer">kubectl describe</a> to check if the annotation is added. </p>
<blockquote>
<p>Annotations: delete: without-restart</p>
</blockquote></li>
<li><p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#edit" rel="nofollow noreferrer">kubectl edit</a> to delete this annotation, it's empty now.</p>
<blockquote>
<p>Annotations:</p>
</blockquote></li>
</ul>
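<p>Alternatively, if you only want to drop a single annotation without opening an editor, <code>kubectl annotate</code> with a trailing dash removes the key, e.g. for the example pod above:</p>
<pre><code>kubectl annotate pod annotations-demo delete-
</code></pre>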
| Jakub |
<p>I found an issue where my fluentd buffer is full and it cannot send logs to elastic. Is there a way to manually flush it?</p>
<p>This is the error log:</p>
<p><img src="https://i.stack.imgur.com/Ga1cT.jpg" alt=""></p>
| ichsancode | <p>Arghya's suggestion is correct but there are more options that can help you.</p>
<ol>
<li><p>You can set <code>flush_mode</code> to <code>immediate</code> in order to force flush or set or set additional flush parameters in order to adjust it to your needs. You can read more about it here: <a href="https://docs.fluentd.org/output#control-flushing" rel="nofollow noreferrer">Control Flushing</a>.</p></li>
<li><p>You can also consider using <a href="https://docs.fluentd.org/deployment/signals#sigusr1" rel="nofollow noreferrer">SIGUSR1 Signal</a>:</p></li>
</ol>
<blockquote>
<p>Forces the buffered messages to be flushed and reopens Fluentd's log.
Fluentd will try to flush the current buffer (both memory and file)
immediately, and keep flushing at <code>flush_interval</code>.</p>
</blockquote>
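<p>For example, assuming fluentd runs as PID 1 in its container and the image ships a <code>kill</code> binary (pod and namespace names are placeholders), you can send the signal with:</p>
<pre><code>kubectl exec -n <namespace> <fluentd-pod> -- kill -USR1 1
</code></pre>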
<p>Please let me know if that helped. </p>
| Wytrzymały Wiktor |
<p>I'm trying to install a Helm chart, but I'm receiving errors from an annotation</p>
<pre><code> annotations: {}
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
</code></pre>
<pre><code>helm.go:84: [debug] error converting YAML to JSON: yaml: line **: did not find expected key
</code></pre>
| Victor | <p>Remove the {}; an empty map on the <code>annotations:</code> line conflicts with the keys nested below it:</p>
<pre><code> annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
</code></pre>
| Madhu Potana |
<p>I have a build pipeline that builds my docker image and pushes it to my docker registry. I am trying to create a release pipeline that pulls that image from the registry and deploys it to my staging environment, which is an Azure Kubernetes cluster. This release pipeline works to the point that I see the deployment, pods, and services in the cluster after it has run. However, I'm struggling to pass in the image that is selected from the artifact selection before you run the release pipeline.</p>
<p>Artifact Selection During <code>Create a new release</code>:</p>
<p><a href="https://i.stack.imgur.com/y0UPH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y0UPH.png" alt="enter image description here" /></a></p>
<p><code>Kubectl</code> Release Task:</p>
<p><a href="https://i.stack.imgur.com/soJfT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/soJfT.png" alt="enter image description here" /></a></p>
<p>I cannot seem to pass the image that is selected at the beginning into the configuration.</p>
| Ross | <p>You can use the predefined release variable <code>Release.Artifacts.{alias}.BuildId</code> to get the version of the selected artifacts. See below:</p>
<pre><code>image:_stars.api.web.attainment:$(Release.Artifacts._stars.api.web.attainment.BuildId)
</code></pre>
<p>Check <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/release/variables?view=azure-devops&tabs=batch#default-variables---general-artifact" rel="nofollow noreferrer">here</a> for more release variables.</p>
| Levi Lu-MSFT |
<p>Google Cloud Platform announced "Stackdriver Kubernetes Monitoring" at Kubecon 2018. It looks awesome.</p>
<p>I am an AWS user running a few Kubernetes clusters and immediately had envy, until I saw that it also supported AWS and "on prem".</p>
<p><a href="https://cloud.google.com/kubernetes-monitoring" rel="nofollow noreferrer">Stackdriver Kubernetes Engine Monitoring</a></p>
<p>This is where I am getting a bit lost.</p>
<ol>
<li><p>I cannot find any documentation for helping me deploy the agents onto my Kubernetes clusters. The closest example I could find was here: <a href="https://cloud.google.com/monitoring/kubernetes-engine/customizing" rel="nofollow noreferrer">Manual installation of Stackdriver support</a>, but the agents are polling for "internal" GCP metadata services.</p>
<pre><code>E0512 05:14:12 7f47b6ff5700 environment.cc:100 Exception: Host not found (authoritative): 'http://metadata.google.internal./computeMetadata/v1/instance/attributes/cluster-name'
</code></pre></li>
<li><p>I'm not sure the Stackdriver dashboard has "Stackdriver Kubernetes Monitoring" turned on. I don't seem to have the same interface as the demo on YouTube <a href="https://youtu.be/aa8cgmfHTAs?t=4m25s" rel="nofollow noreferrer">here</a></p></li>
</ol>
<p>I'm not sure if this is something which will get turned on when I configure the agents correctly, or something I'm missing.</p>
<p>I think I might be missing some "getting started" documentation which takes me through the setup.</p>
| Nick Schuch | <p>You can use a Stackdriver partner service, Blue Medora BindPlane, to monitor AWS Kubernetes or almost anything else in AWS for that matter or on-premise. Here's an article from Google Docs about the partnership: <em><a href="https://cloud.google.com/stackdriver/blue-medora" rel="nofollow noreferrer">About Blue Medora</a></em>; you can signup for BindPlane through the <a href="https://console.cloud.google.com/marketplace/details/bluemedora/bindplane" rel="nofollow noreferrer">Google Cloud Platform Marketplace</a>.</p>
<p>It looks like BindPlane is handling deprecated Stackdriver monitoring agents. <em><a href="https://cloud.google.com/monitoring/agent/plugins/bindplane-transition" rel="nofollow noreferrer">Google Cloud: Transition guide for deprecated third-party integrations</a></em></p>
| AlphaPapa |
<p>Greetings fellow humans,</p>
<p>I am trying to route all traffic coming into my cluster through my auth service with the following annotation:</p>
<pre><code>nginx.ingress.kubernetes.io/auth-url: http://my-auth-service/
</code></pre>
<p>I followed the tutorials, but I still have not managed to route every request through my auth module. I am following a master-minion strategy. When I check the generated nginx config file, the annotation is not found.</p>
<p>I also tried something like this in one of my minion ingress files:</p>
<pre><code>auth_request /auth;
auth_request_set $auth_service $upstream_http_auth_service;
proxy_pass $request_uri
proxy_set_header X-foo-Token $auth_service;
</code></pre>
<p>I am using the following ingress controller version</p>
<pre><code>Image: nginx/nginx-ingress:1.8.1
Ports: 80/TCP, 443/TCP, 9113/TCP, 8081/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
</code></pre>
<p>Samples:</p>
<p><strong>Master ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: master-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.org/mergeable-ingress-type: "master"
kubernetes.io/ingress.global-static-ip-name: <cluster-ip>
nginx.ingress.kubernetes.io/ssl-redirect: "true"
# nginx.ingress.kubernetes.io/auth-url: http://my-auth-service/
spec:
tls:
- hosts:
- app.myurl.com
secretName: secret-tls
rules:
- host: app.myurl.com
</code></pre>
<p><strong>Minion ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: pod-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.org/mergeable-ingress-type: "minion"
# nginx.ingress.kubernetes.io/auth-url: http://my-auth-service/
# nginx.ingress.kubernetes.io/auth-snippet: |
# auth_request /new-auth-service;
# auth_request_set $new_auth_service $upstream_http_new_auth_service;
# proxy_pass $request_uri
# proxy_set_header X-foo-Token $new_auth_service;
nginx.org/rewrites: "serviceName={{ .Values.serviceName }} rewrite=/"
spec:
rules:
- host: {{ .Values.clusterHost }}
http:
paths:
- path: /{{ .Values.serviceName }}/
backend:
serviceName: {{ .Values.serviceName }}
servicePort: 80
</code></pre>
| graned | <p>So I was able to make it work. First of all, the URLs provided by <code>matt-j</code> helped me a lot in figuring out a solution.</p>
<p>It turns out I was using <code>nginx-stable</code> for my ingress controller; as the documentation <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">here</a> suggests, I needed to use the new ingress controller version. I followed the instructions for a full reset, since I am working on my staging environment (for production I might go with a zero-downtime approach). Once it was installed, I ran into a known issue related to the admission webhooks; a similar error can be seen <a href="https://github.com/kubernetes/ingress-nginx/issues/5401" rel="nofollow noreferrer">here</a>. Basically, one way to overcome this error is to delete the <code>validatingwebhookconfigurations</code>. Finally I applied the ingress config and made some adjustments to use the proper annotations, which did the magic.</p>
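<p>For reference, the webhook cleanup looks roughly like this; the webhook name below is the usual default from the kubernetes/ingress-nginx manifests, so check the actual name in your cluster first:</p>
<pre><code># list the validating webhooks and find the ingress-nginx admission one
kubectl get validatingwebhookconfigurations

# delete it (the name may differ in your cluster)
kubectl delete validatingwebhookconfiguration ingress-nginx-admission
</code></pre>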
<p>NOTE: I ran into an issue with forwarding the auth request to my internal cluster service; to fix that I am using the Kubernetes FQDN of the service.</p>
<p>NOTE 2: I removed the master-minion concept, since the merging in kubernetes/ingress-nginx happens automatically; more info <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/" rel="nofollow noreferrer">here</a>.</p>
<p>Here are the fixed samples:</p>
<p>MAIN INGRESS</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: master-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/ingress.global-static-ip-name: <PUBLIC IP>
spec:
rules:
- host: domain.com
</code></pre>
<p>CHILD INGRESS</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ .Values.serviceName }}-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/auth-url: http://<SERVICE NAME>.<NAMESPACE>.svc.cluster.local # using internal FQDN
spec:
rules:
- host: {{ .Values.clusterHost }}
http:
paths:
- path: /{{ .Values.serviceName }}(/|$)(.*)
backend:
serviceName: {{ .Values.serviceName }}
servicePort: 80
</code></pre>
| graned |
<p>My custom Helm chart contains a variable which takes a list of strings as a parameter,</p>
<p><strong>values.yaml</strong></p>
<pre><code>myList: []
</code></pre>
<p>I need to pass the <strong>myList</strong> as a parameter,</p>
<pre><code>helm upgrade --install myService helm/my-service-helm --set myList=[string1,string2]
</code></pre>
<p>Is there any possible way to do this?</p>
| Janitha Madushan | <p>Array values can be specified using curly braces: --set foo={a,b,c}.</p>
<p>From: <a href="https://github.com/helm/helm/issues/1987" rel="nofollow noreferrer">https://github.com/helm/helm/issues/1987</a></p>
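<p>Applied to the chart in the question, that would look something like the line below; the quotes keep the shell from interpreting the braces, and values containing commas would need escaping:</p>
<pre><code>helm upgrade --install myService helm/my-service-helm --set "myList={string1,string2}"
</code></pre>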
| volcanic |
<p>I'm new to the Kubernetes world, so forgive me if I make mistakes. I'm trying to deploy the Kubernetes dashboard.</p>
<p>My cluster contains three masters and three workers; the workers are drained and not schedulable so that the dashboard gets installed on the master nodes:</p>
<pre><code>[root@pp-tmp-test20 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
pp-tmp-test20 Ready master 2d2h v1.15.2
pp-tmp-test21 Ready master 37h v1.15.2
pp-tmp-test22 Ready master 37h v1.15.2
pp-tmp-test23 Ready,SchedulingDisabled worker 36h v1.15.2
pp-tmp-test24 Ready,SchedulingDisabled worker 36h v1.15.2
pp-tmp-test25 Ready,SchedulingDisabled worker 36h v1.15.2
</code></pre>
<p>I'm deploying the Kubernetes dashboard via this URL:</p>
<pre><code>[root@pp-tmp-test20 ~]# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<ul>
<li><p>After this, a pod <code>kubernetes-dashboard-5698d5bc9-ql6q8</code> is scheduled on my master node <code>pp-tmp-test20/172.31.68.220</code></p></li>
<li><p>the pod</p></li>
</ul>
<pre><code>kube-system kubernetes-dashboard-5698d5bc9-ql6q8 /1 Running 1 7m11s 10.244.0.7 pp-tmp-test20 <none> <none>
</code></pre>
<ul>
<li>the pod's logs</li>
</ul>
<pre><code>[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
2019/08/14 10:14:57 Starting overwatch
2019/08/14 10:14:57 Using in-cluster config to connect to apiserver
2019/08/14 10:14:57 Using service account token for csrf signing
2019/08/14 10:14:58 Successful initial request to the apiserver, version: v1.15.2
2019/08/14 10:14:58 Generating JWE encryption key
2019/08/14 10:14:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/08/14 10:14:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/08/14 10:14:59 Initializing JWE encryption key from synchronized object
2019/08/14 10:14:59 Creating in-cluster Heapster client
2019/08/14 10:14:59 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 10:14:59 Auto-generating certificates
2019/08/14 10:14:59 Successfully created certificates
2019/08/14 10:14:59 Serving securely on HTTPS port: 8443
2019/08/14 10:15:29 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 10:15:59 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
</code></pre>
<ul>
<li>the describe of the pod</li>
</ul>
<pre><code>[root@pp-tmp-test20 ~]# kubectl describe pod kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
Name: kubernetes-dashboard-5698d5bc9-ql6q8
Namespace: kube-system
Priority: 0
Node: pp-tmp-test20/172.31.68.220
Start Time: Wed, 14 Aug 2019 16:58:39 +0200
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=5698d5bc9
Annotations: <none>
Status: Running
IP: 10.244.0.7
Controlled By: ReplicaSet/kubernetes-dashboard-5698d5bc9
Containers:
kubernetes-dashboard:
Container ID: docker://40edddf7a9102d15e3b22f4bc6f08b3a07a19e4841f09360daefbce0486baf0e
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
Image ID: docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
State: Running
Started: Wed, 14 Aug 2019 16:58:43 +0200
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 14 Aug 2019 16:58:41 +0200
Finished: Wed, 14 Aug 2019 16:58:42 +0200
Ready: True
Restart Count: 1
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-ptw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubernetes-dashboard-token-ptw78:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-ptw78
Optional: false
QoS Class: BestEffort
Node-Selectors: dashboard=true
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m41s default-scheduler Successfully assigned kube-system/kubernetes-dashboard-5698d5bc9-ql6q8 to pp-tmp-test20.tec.prj.in.phm.education.gouv.fr
Normal Pulled 2m38s (x2 over 2m40s) kubelet, pp-tmp-test20 Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
Normal Created 2m37s (x2 over 2m39s) kubelet, pp-tmp-test20 Created container kubernetes-dashboard
Normal Started 2m37s (x2 over 2m39s) kubelet, pp-tmp-test20 Started container kubernetes-dashboard
</code></pre>
<ul>
<li>the describe of the dashboard service</li>
</ul>
<pre><code>[root@pp-tmp-test20 ~]# kubectl describe svc/kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.110.236.88
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints: 10.244.0.7:8443
Session Affinity: None
Events: <none>
</code></pre>
<ul>
<li>the docker ps on my master running the pod</li>
</ul>
<pre><code>[root@pp-tmp-test20 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
40edddf7a910 f9aed6605b81 "/dashboard --inse..." 7 minutes ago Up 7 minutes k8s_kubernetes-dashboard_kubernetes-dashboard-5698d5bc9-ql6q8_kube-system_f785d4bd-2e67-4daa-9f6c-19f98582fccb_1
e7f3820f1cf2 k8s.gcr.io/pause:3.1 "/pause" 7 minutes ago Up 7 minutes k8s_POD_kubernetes-dashboard-5698d5bc9-ql6q8_kube-system_f785d4bd-2e67-4daa-9f6c-19f98582fccb_0
[root@pp-tmp-test20 ~]# docker logs 40edddf7a910
2019/08/14 14:58:43 Starting overwatch
2019/08/14 14:58:43 Using in-cluster config to connect to apiserver
2019/08/14 14:58:43 Using service account token for csrf signing
2019/08/14 14:58:44 Successful initial request to the apiserver, version: v1.15.2
2019/08/14 14:58:44 Generating JWE encryption key
2019/08/14 14:58:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/08/14 14:58:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/08/14 14:58:44 Initializing JWE encryption key from synchronized object
2019/08/14 14:58:44 Creating in-cluster Heapster client
2019/08/14 14:58:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:58:44 Auto-generating certificates
2019/08/14 14:58:44 Successfully created certificates
2019/08/14 14:58:44 Serving securely on HTTPS port: 8443
2019/08/14 14:59:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:59:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 15:00:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
</code></pre>
<p>1/ On my master I start the proxy</p>
<pre><code>[root@pp-tmp-test20 ~]# kubectl proxy
Starting to serve on 127.0.0.1:8001
</code></pre>
<p>2/ I launch firefox with x11 redirect from my master and hit this url</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
</code></pre>
<p>this is the error message I get in the browser</p>
<pre><code>Error: 'dial tcp 10.244.0.7:8443: connect: no route to host'
Trying to reach: 'https://10.244.0.7:8443/'
</code></pre>
<p>In the same time i got these errors from the console where I launched the proxy</p>
<pre><code>I0814 16:10:05.836114 20240 log.go:172] http: proxy error: context canceled
I0814 16:10:06.198701 20240 log.go:172] http: proxy error: context canceled
I0814 16:13:21.708190 20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:21.708229 20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:21.708270 20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:39.335483 20240 log.go:172] http: proxy error: context canceled
I0814 16:13:39.716360 20240 log.go:172] http: proxy error: context canceled
</code></pre>
<p>But after refreshing the browser n times (randomly), I'm able to reach the login interface and enter the token (created beforehand).</p>
<p><a href="https://i.stack.imgur.com/ArNRb.png" rel="noreferrer">Dashboard_login</a></p>
<p>But... the same error occurs again.</p>
<p><a href="https://i.stack.imgur.com/1ANks.png" rel="noreferrer">Dashboard_login_error</a></p>
<p>After hitting the 'sign in' button n times, I'm able to get the dashboard... for a few seconds.</p>
<p><a href="https://i.stack.imgur.com/iSd2x.png" rel="noreferrer">dashboard_interface_1</a></p>
<p><a href="https://i.stack.imgur.com/vNXHM.png" rel="noreferrer">dashboard_interface_2</a></p>
<p>After that, the dashboard starts to produce the same errors while I'm exploring the interface:</p>
<p><a href="https://i.stack.imgur.com/Rdl08.png" rel="noreferrer">dashboard_interface_error_1</a></p>
<p><a href="https://i.stack.imgur.com/qEOEP.png" rel="noreferrer">dashboard_interface_error_2</a></p>
<p>I looked at the pod logs; we can see some traffic:</p>
<pre><code>[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
2019/08/14 14:16:56 Getting list of all services in the cluster
2019/08/14 14:16:56 [2019-08-14T14:16:56Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:01 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 10.244.0.1:56140: { contents hidden }
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/settings/global/cani request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 Cannot find settings config map: configmaps "kubernetes-dashboard-settings" not found
</code></pre>
<p>and again the pod logs</p>
<pre><code>[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
Error from server: Get https://172.31.68.220:10250/containerLogs/kube-system/kubernetes-dashboard-5698d5bc9-ql6q8/kubernetes-dashboard: Forbidden
</code></pre>
<p>What am I doing wrong? Could you please suggest some ways to investigate?</p>
<p>EDIT :</p>
<p>This is the service account that I used:</p>
<pre><code># cat dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
# cat dashboard-adminuser-ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
</code></pre>
| Vincent | <p>It seems Heapster is deprecated in Kubernetes in favor of metrics-server: <a href="https://github.com/kubernetes/dashboard/issues/2986" rel="nofollow noreferrer">Support metrics API #2986</a> & <a href="https://github.com/kubernetes-retired/heapster/blob/master/docs/deprecation.md" rel="nofollow noreferrer">Heapster Deprecation Timeline</a>.</p>
<p>I had deployed a dashboard version that uses Heapster, and that dashboard version is not compatible with my Kubernetes version (1.15). So a possible way to resolve the issue is to install dashboard <a href="https://github.com/kubernetes/dashboard/releases" rel="nofollow noreferrer">v2.0.0-beta3</a>:</p>
<pre><code># kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta3/aio/deploy/recommended.yaml
</code></pre>
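<p>Note that the v2 manifest deploys everything into its own <code>kubernetes-dashboard</code> namespace, so the proxy URL changes accordingly (assuming the default names from that manifest):</p>
<pre><code>kubectl proxy
# then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
</code></pre>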
| Vincent |
<p>I'm struggling with an ingress configuration that should allow access via two <strong>different paths</strong> to services which are deployed in <strong>different namespaces</strong>.</p>
<p>1# Ingress:</p>
<pre><code> # Source: deployment/templates/ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: portal-api
labels:
helm.sh/chart: deployment-0.1.0
app.kubernetes.io/name: deployment
app.kubernetes.io/instance: portal-api
app.kubernetes.io/version: "0.0.1"
app.kubernetes.io/managed-by: Helm
annotations:
certmanager.k8s.io/acme-challenge-type: http01
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
kuberentes.io/tls-acme: "true"
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- "example.com"
secretName: portal-certificate
rules:
- host: "example.com"
http:
paths:
- path: /api/rest/(.*)
backend:
serviceName: portal-api
servicePort: 80
</code></pre>
<p>2# Ingress:</p>
<pre><code># Source: deployment/templates/ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: portal-ui
labels:
helm.sh/chart: deployment-0.1.0
app.kubernetes.io/name: deployment
app.kubernetes.io/instance: portal-ui
app.kubernetes.io/version: "0.0.1"
app.kubernetes.io/managed-by: Helm
annotations:
certmanager.k8s.io/acme-challenge-type: http01
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- "example.com"
secretName: portal-certificate
rules:
- host: "example.com"
http:
paths:
- path: /(.*)
backend:
serviceName: portal-ui
servicePort: 80
</code></pre>
<p>Routing for the path example.com works; it redirects to portal-ui.
Routing for the path example.com/api/rest/(something) doesn't work; it also redirects to the portal-ui service.</p>
<p>I think it would work if everything were in the same namespace... but I need each service in its own namespace.</p>
| TeK | <p>As @Arghya Sadhu mentioned</p>
<blockquote>
<p>Ingress and its corresponding service need to be in same namespace otherwise ingress cannot discover endpoints from the service.</p>
</blockquote>
<p>There is a <a href="https://github.com/kubernetes/kubernetes/issues/17088" rel="nofollow noreferrer">GitHub issue</a> open since 2015, and there is still discussion about how to make it work.</p>
<p>For now, the options are <strong>either to create the ingress in the same namespace</strong></p>
<p><strong>Example made by github issue member @aledbf</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: component1
namespace: component1
spec:
rules:
- host: domain.com
http:
paths:
- backend:
serviceName: component1
servicePort: 80
path: /component1
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: component2
namespace: component2
spec:
rules:
- host: domain.com
http:
paths:
- backend:
serviceName: component2
servicePort: 80
path: /component2
</code></pre>
<hr />
<p><strong>OR</strong></p>
<hr />
<p><strong>You could try the <a href="https://github.com/kubernetes/kubernetes/issues/17088#issuecomment-394313647" rel="nofollow noreferrer">workaround</a> made by GitHub issue member @chestack</strong></p>
<blockquote>
<p>my workaround:</p>
<p>serviceA in namespaceA</p>
<p>create serviceB in namespaceB</p>
</blockquote>
<pre><code>spec:
...
  type: ExternalName
externalName: serviceA.namespaceA.svc.cluster.local
</code></pre>
<blockquote>
<p>add ingress rule into ingressB in namespaceB</p>
</blockquote>
<pre><code>- path: /****
backend:
serviceName: serviceB
servicePort: ***
</code></pre>
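<p>Put together for your case, the workaround would be a small ExternalName Service created next to the ingress, pointing at the real service in the other namespace. This is only a sketch; the namespace names below are placeholders for your two namespaces:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: portal-api
  namespace: portal-ui-namespace        # namespace where the portal-ui ingress lives
spec:
  type: ExternalName
  # FQDN of the real service in its own namespace
  externalName: portal-api.portal-api-namespace.svc.cluster.local
</code></pre>
<p>The <code>/api/rest/(.*)</code> rule in the portal-ui ingress can then reference <code>portal-api</code> as if it were a local service.</p>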
| Jakub |
<p>We have several clusters where our applications are running. We would like to set up a central monitoring cluster which can scrape metrics from the rest of the clusters using Prometheus federation.</p>
<p>To do that, I need to install a Prometheus server in each cluster and a federating Prometheus server in the central cluster. I will install Grafana as well in the central cluster to visualise the metrics that we gather from the rest of the Prometheus servers.</p>
<p>So the questions are:</p>
<ol>
<li><p>Where should I set up the Alertmanager? Only in the central cluster, or does each cluster also need its own Alertmanager?</p></li>
<li><p>What is the best practice for alerting while using federation?</p></li>
<li><p>I thought I could use an ingress controller to expose each Prometheus server. What is the best practice for communication between the Prometheus servers and the federating server in k8s?</p></li>
</ol>
| semural | <p>Based on this <a href="https://www.robustperception.io/federation-what-is-it-good-for" rel="nofollow noreferrer">blog</a></p>
<blockquote>
<p>Where should I set up the Alertmanager? Only in the central cluster, or does each cluster also need its own Alertmanager?</p>
<p>What is the best practice for alerting while using federation?</p>
</blockquote>
<p>The answer here would be to do that on each cluster.</p>
<blockquote>
<p>If the data you need to do alerting is moved from one Prometheus to another then you've added an additional point of failure. This is particularly risky when WAN links such as the internet are involved. As far as is possible, you should try and push alerting as deep down the federation hierarchy as possible. For example an alert about a target being down should be setup on the Prometheus scraping that target, not a global Prometheus which could be several steps removed.</p>
</blockquote>
<hr />
<blockquote>
<p>I thought I could use an ingress controller to expose each Prometheus server. What is the best practice for communication between the Prometheus servers and the federating server in k8s?</p>
</blockquote>
<p>I think that depends on the use case; in each doc I checked, they just use targets in <code>scrape_configs.static_configs</code> in the prometheus.yml.</p>
<hr />
<p>like <a href="https://prometheus.io/docs/prometheus/latest/federation/#configuring-federation" rel="nofollow noreferrer">here</a></p>
<pre><code>scrape_configs:
- job_name: 'federate'
scrape_interval: 15s
honor_labels: true
metrics_path: '/federate'
params:
'match[]':
- '{job="prometheus"}'
- '{__name__=~"job:.*"}'
static_configs:
- targets:
- 'source-prometheus-1:9090'
- 'source-prometheus-2:9090'
- 'source-prometheus-3:9090'
</code></pre>
<hr />
<p><strong>OR</strong></p>
<hr />
<p>like <a href="https://developers.mattermost.com/blog/cloud-monitoring/" rel="nofollow noreferrer">here</a></p>
<pre><code>prometheus.yml:
rule_files:
- /etc/config/rules
- /etc/config/alerts
scrape_configs:
- job_name: 'federate'
scrape_interval: 15s
honor_labels: true
metrics_path: '/federate'
params:
'match[]':
- '{job="prometheus"}'
- '{__name__=~"job:.*"}'
static_configs:
- targets:
- 'prometheus-server:80'
</code></pre>
<hr />
<p>Additionally, it's worth checking how they did this in this <a href="https://developers.mattermost.com/blog/cloud-monitoring/" rel="nofollow noreferrer">tutorial</a>, where they used <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a> to build a central monitoring cluster with two Prometheus servers on two clusters.</p>
<p><a href="https://i.stack.imgur.com/ybdMe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ybdMe.png" alt="enter image description here" /></a></p>
| Jakub |
<p>I'm trying to implement network policies in my Kubernetes cluster to isolate my pods in a namespace, but still allow them to access the internet, since I'm using Azure MFA for authentication. </p>
<p>This is what I tried, but I can't seem to get it working. Ingress is working as expected, but these policies block all egress. </p>
<pre><code>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
spec:
podSelector: {}
policyTypes:
- Ingress
</code></pre>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: grafana-policy
namespace: default
spec:
podSelector:
matchLabels:
app: grafana
ingress:
- from:
- podSelector:
matchLabels:
app: nginx-ingress
</code></pre>
<p>Can anybody tell me how to make the above configuration work, so that internet traffic is allowed while traffic to other pods is blocked?</p>
| superset | <p>Try adding a default deny all network policy on the namespace:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: default-deny-all
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
</code></pre>
<p>Then adding an allow Internet policy after:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-internet-only
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 10.0.0.0/8
- 192.168.0.0/16
- 172.16.0.0/20
</code></pre>
<p>This will block all traffic except for internet outbound.
In the <code>allow-internet-only</code> policy, there is an exception for all private IPs <em>which will prevent pod to pod communication.</em></p>
<p>You will also have to allow Egress to Core DNS from <code>kube-system</code> if you require DNS lookups, as the <code>default-deny-all</code> policy will block DNS queries.</p>
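<p>A minimal sketch of such a DNS egress policy is below. It assumes you have labelled the <code>kube-system</code> namespace yourself (for example with <code>kubectl label namespace kube-system name=kube-system</code>), since the namespaceSelector matches on labels:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system   # assumes you added this label to kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
</code></pre>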
| user12009826 |
<p>When searching for the image php:7.3.15-apache in OpenShift, we can find it, but the same image is not found when searching with the docker search command. </p>
<p>Why is that? Why can docker pull find the image while docker search can't?</p>
<p><a href="https://hub.docker.com/layers/php/library/php/7.3.15-apache/images/sha256-b46474a6978f90a7be661870ac3ff09643e8d5ed350f48f47e4bc6ff785fc7b1?context=explore" rel="nofollow noreferrer">Example</a> </p>
<p><a href="https://i.stack.imgur.com/sK2LH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sK2LH.png" alt="enter image description here"></a></p>
<pre><code>testuser@docker:~$ sudo docker search php:7.3.15-apache
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
testuser@docker:~$
</code></pre>
<p>When using docker pull, it is downloadable :</p>
<pre><code>testuser@docker:~$ sudo docker pull php:7.3.15-apache
7.3.15-apache: Pulling from library/php
68ced04f60ab: Downloading [=========> ] 5.008MB/27.09MB
68ced04f60ab: Pull complete
1d2a5d8fa585: Pull complete
5d59ec4ae241: Pull complete
d42331ef4d44: Pull complete
408b7b7ee112: Pull complete
570cd47896d5: Pull complete
2419413b2a16: Pull complete
8c722e1dceb9: Pull complete
34fb68439fc4: Pull complete
e775bf0f756d: Pull complete
b1949a1e9661: Pull complete
6ed8bcec42ae: Pull complete
f6247da7d55f: Pull complete
a090bafe99ea: Pull complete
Digest: sha256:ad53b6b5737c389d1bcea8acc2225985d5d90e6eb362911547e163f1924ec089
Status: Downloaded newer image for php:7.3.15-apache
docker.io/library/php:7.3.15-apache
</code></pre>
| Gopu | <p>As far as I understand, <code>docker search</code> only matches repository names on <a href="https://hub.docker.com/" rel="noreferrer">Docker Hub</a>, not individual tags, so it returns only these PHP repositories:</p>
<pre><code>sudo docker search php
</code></pre>
<hr>
<pre><code>NAME DESCRIPTION STARS OFFICIAL AUTOMATED
php While designed for web development, the PHP … 5114 [OK]
phpmyadmin/phpmyadmin A web interface for MySQL and MariaDB. 967 [OK]
adminer Database management in a single PHP file. 362 [OK]
php-zendserver Zend Server - the integrated PHP application… 180 [OK]
webdevops/php-nginx Nginx with PHP-FPM 150 [OK]
webdevops/php-apache-dev PHP with Apache for Development (eg. with xd… 116 [OK]
webdevops/php-apache Apache with PHP-FPM (based on webdevops/php) 100 [OK]
bitnami/php-fpm Bitnami PHP-FPM Docker Image 86 [OK]
phpunit/phpunit PHPUnit is a programmer-oriented testing fra… 75 [OK]
nazarpc/phpmyadmin phpMyAdmin as Docker container, based on off… 60 [OK]
circleci/php CircleCI images for PHP 28
thecodingmachine/php General-purpose ultra-configurable PHP images 28 [OK]
phpdockerio/php72-fpm PHP 7.2 FPM base container for PHPDocker.io. 19 [OK]
bitnami/phpmyadmin Bitnami Docker Image for phpMyAdmin 18 [OK]
phpdockerio/php7-fpm PHP 7 FPM base container for PHPDocker.io. 14 [OK]
phpdockerio/php56-fpm PHP 5.6 FPM base container for PHPDocker.io 13 [OK]
graze/php-alpine Smallish php7 alpine image with some common … 13 [OK]
appsvc/php Azure App Service php dockerfiles 12 [OK]
phpdockerio/php73-fpm PHP 7.3 FPM base container for PHPDocker.io. 11
phpdockerio/php71-fpm PHP 7.1 FPM base container for PHPDocker.io. 7 [OK]
phpdockerio/php72-cli PHP 7.2 CLI base container for PHPDocker.io. 4 [OK]
phpdockerio/php7-cli PHP 7 CLI base container image for PHPDocker… 1 [OK]
phpdockerio/php56-cli PHP 5.6 CLI base container for PHPDocker.io … 1 [OK]
phpdockerio/php71-cli PHP 7.1 CLI base container for PHPDocker.io. 1 [OK]
isotopab/php Docker PHP 0 [OK]
</code></pre>
<p>So you could use one of those.</p>
<hr>
<p><strong>OR, if you want this specific version</strong></p>
<hr>
<p><a href="https://hub.docker.com/layers/php/library/php/7.3.15-apache/images/sha256-b46474a6978f90a7be661870ac3ff09643e8d5ed350f48f47e4bc6ff785fc7b1?context=explore" rel="noreferrer">There</a> is the specific image version on docker hub.</p>
<hr>
<p>You can use <a href="https://docs.docker.com/engine/reference/commandline/pull/" rel="noreferrer">docker pull</a> </p>
<pre><code>docker pull php:7.3.15-apache
</code></pre>
<p>Then tag it and push it to your private registry with <a href="https://docs.docker.com/engine/reference/commandline/push/" rel="noreferrer">docker push</a> (the registry host below is just a placeholder):</p>
<pre><code>docker tag php:7.3.15-apache <your-registry>/php:7.3.15-apache
docker push <your-registry>/php:7.3.15-apache
</code></pre>
<p>More about it.</p>
<ul>
<li><a href="https://stackoverflow.com/a/28349540/11977760">https://stackoverflow.com/a/28349540/11977760</a></li>
<li><a href="https://www.docker.com/blog/how-to-use-your-own-registry/" rel="noreferrer">https://www.docker.com/blog/how-to-use-your-own-registry/</a></li>
</ul>
<hr>
<p>And use your own registry instead of docker hub.</p>
<blockquote>
<p>To deploy an image from a private repository, you must create an image pull secret with your image registry credentials. You can find more information under your image name.</p>
</blockquote>
<hr>
<p>I hope this answers your question. Let me know if you have any more questions.</p>
| Jakub |
<p>I'm having some trouble getting a ReadOnlyMany persistent volume to mount across multiple pods on GKE. Right now it's only mounting on one pod and failing to mount on any others (due to the volume being in use on the first pod), causing the deployment to be limited to one pod.</p>
<p>I suspect the issue is related to the volume being populated from a volume snapshot.</p>
<p>Looking through related questions, I've sanity-checked that
spec.containers.volumeMounts.readOnly = true
and
spec.containers.volumes.persistentVolumeClaim.readOnly = true
which seemed to be the most common fixes for related issues.</p>
<p>I've included the relevant yaml below. Any help would be greatly appreciated!</p>
<p>Here's (most of) the deployment spec:</p>
<pre><code>spec:
containers:
- env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /var/secrets/google/key.json
image: eu.gcr.io/myimage
imagePullPolicy: IfNotPresent
name: monsoon-server-sha256-1
resources:
requests:
cpu: 100m
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /mnt/sample-ssd
name: sample-ssd
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: gke-cluster-1-default-pool-3d6123cf-kcjo
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 29
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: sample-ssd
persistentVolumeClaim:
claimName: sample-ssd-read-snapshot-pvc-snapshot-5
readOnly: true
</code></pre>
<p>The storage class (which is also the default storage class for this cluster):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: sample-ssd
provisioner: pd.csi.storage.gke.io
volumeBindingMode: Immediate
parameters:
type: pd-ssd
</code></pre>
<p>The PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: sample-ssd-read-snapshot-pvc-snapshot-5
spec:
storageClassName: sample-ssd
dataSource:
name: sample-snapshot-5
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 20Gi
</code></pre>
| Mike Perrow | <p>Google engineers are aware of this issue.</p>
<p>You can find more details about it in the <a href="https://github.com/kubernetes/kubernetes/issues/70505" rel="nofollow noreferrer">issue report</a> and the <a href="https://github.com/kubernetes-csi/external-provisioner/pull/469" rel="nofollow noreferrer">pull request</a> on GitHub.</p>
<p>There's a <strong>temporary workaround</strong> if you're trying to provision a PD from a snapshot and make it ROX:</p>
<ol>
<li>Provision a PVC with datasource as RWO; it will create a new Compute Disk with the content of the source disk.</li>
<li>Take the PV that was provisioned and copy it to a new PV that's ROX, according to the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/readonlymany-disks" rel="nofollow noreferrer">docs</a>.</li>
</ol>
<p>You can execute it with the following commands:</p>
<h3>Step 1</h3>
<blockquote>
<p>Provision a PVC with datasource as RWO;</p>
</blockquote>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: workaround-pvc
spec:
storageClassName: ''
dataSource:
name: sample-ss
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>You can check the <strong>disk name</strong> with command:</p>
<p><code>kubectl get pvc</code> and check the <code>VOLUME</code> column. This is the <code>disk_name</code></p>
<h3>Step 2</h3>
<blockquote>
<p>Take the PV that was provisioned and copy it to a new PV that's ROX</p>
</blockquote>
<p>As mentioned in the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/readonlymany-disks" rel="nofollow noreferrer">docs</a> you need to create another disk using the previous disk (created in step 1) as source:</p>
<pre><code># Create a disk snapshot:
gcloud compute disks snapshot <disk_name>
# Create a new disk using snapshot as source
gcloud compute disks create pvc-rox --source-snapshot=<snapshot_name>
</code></pre>
<p>Create a new PV and PVC ReadOnlyMany</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: my-readonly-pv
spec:
storageClassName: ''
capacity:
storage: 20Gi
accessModes:
- ReadOnlyMany
claimRef:
namespace: default
name: my-readonly-pvc
gcePersistentDisk:
pdName: pvc-rox
fsType: ext4
readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-readonly-pvc
spec:
storageClassName: ''
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 20Gi
</code></pre>
<p>Add the <code>readOnly: true</code> on your <code>volumes</code> and <code>volumeMounts</code> as mentioned <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/readonlymany-disks#using_the_persistentvolumeclaim_in_a_pod" rel="nofollow noreferrer">here</a></p>
<pre><code>readOnly: true
</code></pre>
| Mr.KoopaKiller |
<p>I just started using OpenShift and am currently using the 60-day free trial. I was hoping to test some of my development Dockerfiles in it, but when I try to use any Dockerfile I get this error:</p>
<pre><code>admission webhook "validate.build.create" denied the request: Builds with docker strategy are prohibited on this cluster
</code></pre>
<p>To recreate:
Developer view -> Topology -> From Dockerfile ->
GitHub Repo URL = <a href="https://github.com/alpinelinux/docker-alpine" rel="nofollow noreferrer">https://github.com/alpinelinux/docker-alpine</a> -> Defaults for everything else -> Create</p>
<p>This example just uses the official Alpine Dockerfile and it does not work.</p>
| Hustlin | <p>Based on this <a href="https://stackoverflow.com/a/46840462/11977760">answer</a> made by Graham Dumpleton</p>
<blockquote>
<p>If you are using OpenShift Online, it is not possible to enable the docker build type. For OpenShift Online your options are to build your image locally and then push it up to an external image registry such as Docker Hub, or login to the internal OpenShift registry and push your image directly in to it. The image can then be used in a deployment.</p>
<p>If you have set up your own OpenShift cluster, my understanding is that docker build type should be enabled by default. You can find more details at:</p>
<p><a href="https://docs.openshift.com/container-platform/3.11/admin_guide/securing_builds.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.11/admin_guide/securing_builds.html</a></p>
<p>If you are after a way to deploy a site using a httpd web server, there is a S2I builder image available that can do that. See:</p>
<p><a href="https://github.com/sclorg/httpd-container" rel="nofollow noreferrer">https://github.com/sclorg/httpd-container</a></p>
<p>OpenShift Online provides the source build strategy (S2I). Neither docker or custom build strategies are enabled. So you can build images in OpenShift Online, but only using the source build strategy.</p>
</blockquote>
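<p>For the first option from the quote (build locally, push to an external registry, then deploy from that image), a rough sketch would be something like the following, with Docker Hub as the example registry and the image name as a placeholder:</p>
<pre><code># build and push the image outside the cluster
docker build -t <dockerhub-user>/myapp:latest .
docker push <dockerhub-user>/myapp:latest

# then deploy it on OpenShift Online from that image
oc new-app <dockerhub-user>/myapp:latest
</code></pre>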
| Jakub |
<p>I have a kubernetes cluster with vault installed (by a helm chart).</p>
<p>I want to populate secrets from Vault into a file in a pod (nginx, for example) and <strong>refresh the secrets every 5 minutes.</strong></p>
<p>I used the following configuration to test it (with appropriate vault policy/backend auth):</p>
<p>namespace.yaml</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: web
</code></pre>
<p>Service_account.yaml</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx
namespace: web
secrets:
- name: nginx
</code></pre>
<p>nginx-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: web
labels:
app: nginx
run: nginx
version: vault-injector
spec:
replicas: 1
selector:
matchLabels:
run: nginx
version: vault-injector
template:
metadata:
labels:
app: nginx
run: nginx
version: vault-injector
annotations:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/role: "nginx"
#vault.hashicorp.com/agent-inject-status: "update"
vault.hashicorp.com/agent-inject-secret-nginx.pass: "infrastructure/nginx/"
spec:
serviceAccountName: nginx
containers:
- name: nginx
image: nginx
ports:
- name: http
containerPort: 80
</code></pre>
<p>When I apply this configuration to my Kubernetes cluster, the deployment is created and my secrets are written to /vault/secrets/nginx.pass (as expected).</p>
<pre><code>kubectl exec -it pod/nginx-69955d8744-v9jm2 -n web -- cat /vault/secrets/nginx.pass
Password1: MySecretPassword1
Password2: MySecretPassword2
</code></pre>
<p>I tried to update the KV store and add a password to the nginx KV, but my pod doesn't refresh the file at /vault/secrets/nginx.pass. If I restart the pod, the secrets are updated.</p>
<p>Is it possible to dynamically refresh the KV? What's the best way to do it? I want to use Vault as a configuration manager and be able to modify the KV without restarting pods.</p>
| tzouintzouin | <p>You can define a TTL on your kv secret by specifying a TTL value. For example :</p>
<pre><code> vault kv put infrastructure/nginx ttl=1m Password1=PasswordUpdated1 Password2=PasswordUpdated2
</code></pre>
<p>will expire your infrastructure/nginx secret every minute. The Vault sidecar will automatically check for a new value and refresh the file in your pods.</p>
<pre><code>root@LAP-INFO-28:/mnt/c/Users/cmonsieux/Desktop/IAC/kubernetes/yaml/simplePod# k logs nginx-69955d8744-mwhmf vault-agent -n web
renewal process
2020-09-06T07:16:42.867Z [INFO] sink.file: token written: path=/home/vault/.vault-token
2020-09-06T07:16:42.867Z [INFO] template.server: template server received new token
2020/09/06 07:16:42.867793 [INFO] (runner) stopping
2020/09/06 07:16:42.867869 [INFO] (runner) creating new runner (dry: false, once: false)
2020/09/06 07:16:42.868051 [INFO] (runner) creating watcher
2020/09/06 07:16:42.868101 [INFO] (runner) starting
2020-09-06T07:16:42.900Z [INFO] auth.handler: renewed auth token
2020/09/06 07:18:26.268835 [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/nginx.pass"
2020/09/06 07:19:18.810479 [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/nginx.pass"
2020/09/06 07:24:41.189868 [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/nginx.pass"
2020/09/06 07:25:36.095547 [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/nginx.pass"
2020/09/06 07:29:11.479051 [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/nginx.pass"
2020/09/06 07:31:00.715215 [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/nginx.pass"
root@LAP-INFO-28:/mnt/c/Users/cmonsieux/Desktop/IAC/kubernetes/yaml/simplePod# k exec -it pod/nginx-69955d8744-mwhmf -n web -- cat /vault/secrets/nginx.pass
Password1: PasswordUpdated1
Password2: PasswordUpdated2
ttl: 1m
</code></pre>
| tzouintzouin |
<p>I would like to know how I can assign memory resources to a running pod.</p>
<p>I tried <code>kubectl get po foo-7d7dbb4fcd-82xfr -o yaml > pod.yaml</code>,
but when I run the command <code>kubectl apply -f pod.yaml</code> I get:</p>
<pre><code> The Pod "foo-7d7dbb4fcd-82xfr" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
</code></pre>
<p>Thanks in advance for your help.</p>
| zyriuse | <p>A Pod is the minimal Kubernetes workload resource, and it does not support editing its resource requests and limits in place the way you want.</p>
<p>I suggest you use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> to run your pod, since it is a "pod manager" that gives you a lot of additional features, such as pod self-healing, scaling, rolling updates, etc...</p>
<p>You can define the resources in your deployment file like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: echo
spec:
replicas: 1
selector:
matchLabels:
app: echo
template:
metadata:
labels:
app: echo
spec:
containers:
- name: echo
image: mendhak/http-https-echo
resources:
limits:
cpu: 15m
memory: 100Mi
requests:
cpu: 15m
memory: 100Mi
ports:
- name: http
containerPort: 80
</code></pre>
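<p>As a side note, if you just want to change the memory of an existing Deployment without editing YAML, something like <code>kubectl set resources</code> should work; this triggers a rolling replacement of the pods, since the pod spec changes. The deployment and container names below are just the ones from the example above:</p>
<pre><code># adjust the deployment/container names and values to your own
kubectl set resources deployment echo -c=echo --limits=memory=200Mi --requests=memory=100Mi
</code></pre>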
| Mr.KoopaKiller |
<p>I've a Service <code>my-service</code> of type ClusterIP in namespace <code>A</code> which can load balance to a few pods. I want to create another Service of type ExternalName in namespace <code>B</code> that points to <code>my-service</code> in namespace <code>A</code>.</p>
<p>I create the following YAML:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: B
spec:
type: ExternalName
externalName: my-service.A
</code></pre>
<p>and if I exec into a pod running in namespace <code>B</code> and do:</p>
<pre><code># ping my-service
ping: my-service: Name or service not known
</code></pre>
<p>But if I change the <code>externalName</code> in the above YAML to below:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: B
spec:
type: ExternalName
externalName: my-service.A.svc.cluster.local <--- full FQDN here
</code></pre>
<p>things work as expected. Also, if I ping <code>my-service</code> directly from a pod in namespace <code>B</code> it is being resolved:</p>
<pre><code># ping my-service.A
PING my-service.A.svc.cluster.local (10.0.80.133) 56(84) bytes of data.
</code></pre>
<p>Why <code>my-service.A</code> is not resolved to <code>my-service.A.svc.cluster.local</code> in ExternalName Service?</p>
<p>My K8s version is <code>1.14.8</code> and uses CoreDNS.</p>
| Shubham | <p>Based on <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">kubernetes documentation</a> and <a href="https://docs.okd.io/latest/dev_guide/integrating_external_services.html" rel="nofollow noreferrer">okd</a> that's how it's supposed to work</p>
<blockquote>
<p>Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service’s IP to resolve DNS names.</p>
<p>What things get DNS names?</p>
<p>Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod’s DNS search list will include the Pod’s <strong>own namespace and the cluster’s default domain</strong>.</p>
</blockquote>
<hr />
<blockquote>
<p>Services are assigned a DNS A or AAAA record, depending on the IP family of the service, for a name of the form <strong>my-svc.my-namespace.svc.cluster-domain.example</strong>. This resolves to the cluster IP of the Service.</p>
</blockquote>
<hr />
<p>These are the logs from kube-dns when you try to use my-service.A:</p>
<pre><code>I0306 09:44:32.424126 1 logs.go:41] skydns: incomplete CNAME chain from "my-service.dis.": rcode 3 is not equal to success
</code></pre>
<p>That's why you need the full path for the service, which in your situation is:</p>
<pre><code>my-service.A.svc.cluster.local
</code></pre>
<p><strong>Because</strong></p>
<blockquote>
<p>Using an external domain name service tells the system that the DNS name in the externalName field (example.domain.name in the previous <a href="https://docs.okd.io/latest/dev_guide/integrating_external_services.html#mysql-define-service-using-fqdn" rel="nofollow noreferrer">example</a>) is the location of the resource that backs the service. When a DNS request is made against the Kubernetes DNS server, it returns the externalName in a CNAME record telling the client to look up the returned name to get the IP address.</p>
</blockquote>
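<p>To illustrate, a pod in namespace <code>B</code> typically has a resolv.conf along these lines (the values below are just an example). That is why the short name <code>my-service.A</code> resolves when queried directly from the pod (the <code>svc.cluster.local</code> search suffix completes it), while the CNAME returned for the ExternalName Service is not expanded that way:</p>
<pre><code>nameserver 10.96.0.10
search B.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>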
<hr />
<p>I hope this answers your question. Let me know if you have any more questions.</p>
| Jakub |
<p>When a Kubernetes object has parent objects, they are listed under "ownerReferences". For example, when I printed a pod spec in YAML format, I saw ownerReferences mentioned as follows:</p>
<pre><code> ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: statefuleset-name
uid: <uuid>
....
</code></pre>
<p>I see that ownerReferences is a list. Does anyone know when ownerReferences will have more than one entry? I am not able to imagine an object having more than one owner.</p>
| Ramesh Kuppili | <p>If I understand you correctly it is possible in some circumstances. </p>
<p>In <a href="https://medium.com/@bharatnc/kubernetes-garbage-collection-781223f03c17" rel="nofollow noreferrer">this blog</a> you can see an example of multiple <code>ownerReferences</code>. The blog explains garbage collection in K8s and shows that <strong>Multiple ownerReferences are possible</strong>:</p>
<blockquote>
<p>Yes, you heard that right, now postgres-namespace can be owned by more
than one database object.</p>
</blockquote>
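<p>Structurally, a second owner is just another entry in the list, for example something like the sketch below; the owner kinds, names, and uids here are made up purely for illustration:</p>
<pre><code>ownerReferences:
- apiVersion: apps/v1
  kind: StatefulSet
  name: statefulset-name
  uid: <uid-of-the-statefulset>
- apiVersion: example.com/v1
  kind: Database
  name: my-database
  uid: <uid-of-the-database>
</code></pre>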
<p>I hope it helps. </p>
| Wytrzymały Wiktor |
<p>So I have successfully deployed Istio, at least I think so; everything seems to work fine. I have deployed my API in Istio and I can reach it through my browser. I can even test my API using Postman, but when I try to reach my API through curl it says <code>The remote name could not be resolved: 'api.localhost'</code>. That was the first red flag, but I ignored it. Now I'm trying to reach my API from my web app, but Chrome responds with <code>net:ERR_FAILED</code>.</p>
<p>It seems like my services are only available to the host, which is me, and nothing else. I can't seem to find a solution for this on the internet, so I hope someone has experience with this and knows a fix.</p>
<p>Thanks!</p>
<hr>
<p><strong>EDIT:</strong> More information</p>
<p>My infrastructure is all local, <strong>Docker for Desktop with Kubernetes</strong>. The Istio version I'm using is <strong>1.5.0</strong>.</p>
<p><strong>Gateway</strong>:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: api-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http-api
protocol: HTTP
hosts:
- "api.localhost"
</code></pre>
<p><strong>Virtual service</strong>:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: pb-api
spec:
gateways:
- api-gateway
hosts:
- "*"
http:
- match:
- uri:
prefix: /
rewrite:
uri: /
route:
- destination:
host: pb-api
port:
number: 3001
</code></pre>
<p>When I try to do <code>curl http://api.localhost/user/me</code> I expect a <code>401</code>, but instead I get <code>The remote name could not be resolved: 'api.localhost'</code> as stated above. That error is just the same as when I turn off Docker for desktop and try again. Through postman and the browser it works fine, but curl and my react webapp can't reach it.</p>
| RiesvGeffen | <p>As I mentioned in the comments the curl should look like this</p>
<blockquote>
<p>curl -v -H "host: api.localhost" istio-ingressgateway-external-ip/</p>
</blockquote>
<p>You can check istio-ingressgateway-external ip with</p>
<pre><code>kubectl get svc istio-ingressgateway -n istio-system
</code></pre>
<p>As @SjaakvBrabant mentioned </p>
<blockquote>
<p>External IP is localhost so I tried this command curl -v -H "host: api.localhost" localhost/user/me which gave me 401</p>
</blockquote>
<hr>
<h2>Ubuntu minikube example</h2>
<p>Additionally, if you would like to curl api.localhost itself, then you would have to configure your hosts file locally; I'm not sure how this would work in your situation, since your external IP is localhost.</p>
<p>But if you want, you can use <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>, which is a load balancer, so your istio-ingressgateway would get an IP which could be configured in /etc/hosts.</p>
<p><strong>Yamls</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: demo
spec:
selector:
matchLabels:
app: demo
replicas: 1
template:
metadata:
labels:
app: demo
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
name: demo
namespace: demo
labels:
app: demo
spec:
ports:
- name: http-demo
port: 80
protocol: TCP
selector:
app: demo
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: demo-gw
namespace: demo
spec:
selector:
istio: ingressgateway
servers:
- port:
name: http
number: 80
protocol: HTTP
hosts:
- "example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: demo-vs
namespace: demo
spec:
gateways:
- demo-gw
hosts:
- "example.com"
http:
- match:
- uri:
prefix: /
rewrite:
uri: /
route:
- destination:
host: demo.demo.svc.cluster.local
port:
number: 80
</code></pre>
<p><strong>etc/hosts</strong></p>
<pre><code>127.0.0.1 localhost
10.101.143.xxx example.com
</code></pre>
<p><strong>Testing</strong></p>
<pre><code>curl -v -H "host: example.com" http://10.101.143.xxx/
< HTTP/1.1 200 OK
curl -v example.com
< HTTP/1.1 200 OK
</code></pre>
<hr>
<p>Hope you find this useful.</p>
| Jakub |
<p>I am trying to deploy a kubernetes container, the image of which requires a URL pointing to a resource. This resource needs to be accessed through github authentication which is contained in a secret on the cluster. To get the access token from this secret I must do some string manipulation with bash commands. Here is the current container spec in the deployment yaml:</p>
<pre><code> - name: <container-name>
image: stoplight/prism:latest
env:
- name: GITHUB_AUTH
valueFrom:
secretKeyRef:
name: githubregistry
key: .dockerconfigjson
args:
- "mock"
- "https://$(echo $GITHUB_AUTH | sed 's/{auths:{docker.pkg.github.com:{auth://g' | sed 's/}}}//g' | base64 --decode | cut -d':' -f2)@raw.githubusercontent.com/<path-to-deployment>.yaml"
- "-h"
- "0.0.0.0"
- "-p"
- "8080"
ports:
- name: http
containerPort: 8080
protocol: TCP
</code></pre>
<p>now this line:</p>
<pre><code> - "https://$(echo $GITHUB_AUTH | sed 's/{auths:{docker.pkg.github.com:{auth://g' | sed 's/}}}//g' | base64 --decode | cut -d':' -f2)@raw.githubusercontent.com/<path-to-deployment>.yaml"
</code></pre>
<p>obviously doesn't evaluate, because this is a deployment YAML, not bash. </p>
<p>Any idea how I could replicate such behaviour? I suppose I could do something with an initContainer, but I'd like to avoid overcomplicating things.</p>
| DazKins | <p>Line [1] is not being evaluated because it's being sent as a parameter to the ENTRYPOINT. </p>
<p>You will need to run the commands in a shell. Following the section "Run a command in a shell" in [2], you will need something like this:</p>
<pre><code>- name: <container-name>
image: stoplight/prism:latest
env:
- name: GITHUB_AUTH
valueFrom:
secretKeyRef:
name: githubregistry
key: .dockerconfigjson
command: ["/bin/sh"]
args:
- "-c"
- "TOKEN_FETCHED=`https://$(echo $GITHUB_AUTH | sed 's/{auths:{docker.pkg.github.com:{auth://g' | sed 's/}}}//g' | base64 --decode | cut -d':' -f2)@raw.githubusercontent.com/<path-to-deployment>.yaml`"
- "<your-entrypoint-location> mock $TOKEN_FETCHED -h 0.0.0.0 -p 8080"
ports:
- name: http
containerPort: 8080
protocol: TCP
</code></pre>
<p>So, your last approach didn't work because you were sending line [1] as a parameter to the entrypoint. What I propose is to actually send the output of your command (line [1]) to a variable [3] and then pass that variable to the ENTRYPOINT. </p>
<p>Please keep in mind that you need to place the complete location of your entrypoint in that section. You can get it by inspecting the image you are using:</p>
<pre><code>docker inspect stoplight/prism:latest
</code></pre>
<p>And look for the "CMD" output. </p>
<p>[1] </p>
<pre><code> - "https://$(echo $GITHUB_AUTH | sed 's/{auths:{docker.pkg.github.com:{auth://g' | sed 's/}}}//g' | base64 --decode | cut -d':' -f2)@raw.githubusercontent.com/<path-to-deployment>.yaml"
</code></pre>
<p>[2] <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/</a></p>
<p>[3] <a href="https://www.cyberciti.biz/faq/unix-linux-bsd-appleosx-bash-assign-variable-command-output/" rel="nofollow noreferrer">https://www.cyberciti.biz/faq/unix-linux-bsd-appleosx-bash-assign-variable-command-output/</a></p>
| Armando Cuevas |
<p>I'm trying to deploy a simple python app to Google Container Engine:</p>
<p>I have created a cluster, then run <code>kubectl create -f deployment.yaml</code>.
A deployment pod has been created on my cluster. After that I created a service with: <code>kubectl create -f deployment.yaml</code></p>
<blockquote>
<p>Here's my Yaml configurations:</p>
<p><strong>pod.yaml</strong>:</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-app
spec:
containers:
- name: test-ctr
image: arycloud/flask-svc
ports:
- containerPort: 5000
</code></pre>
<blockquote>
<p>Here's my Dockerfile:</p>
</blockquote>
<pre><code>FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./app.py
</code></pre>
<blockquote>
<p><strong>deployment.yaml:</strong></p>
</blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: test-app
name: test-app
spec:
replicas: 1
template:
metadata:
labels:
app: test-app
name: test-app
spec:
containers:
- name: test-app
image: arycloud/flask-svc
resources:
requests:
cpu: "100m"
imagePullPolicy: Always
ports:
- containerPort: 8080
</code></pre>
<blockquote>
<p><strong>service.yaml:</strong></p>
</blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
type: LoadBalancer
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
nodePort: 32000
selector:
app: test-app
</code></pre>
<blockquote>
<p><strong>Ingress</strong></p>
</blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
</code></pre>
<p>It creates a LoadBalancer and provides an external IP, but when I open the IP it returns a <code>Connection Refused</code> error.</p>
<p>What's going wrong?</p>
<p>Help me, please!</p>
<p>Thank You,
Abdul</p>
| Abdul Rehman | <p>Your deployment file doesn't have a <code>selector</code>, which means the <code>service</code> cannot find any pods to redirect the request to.</p>
<p>Also, you must match the <code>containerPort</code> in the deployment file with the <code>targetPort</code> in the service file.</p>
<p>I've tested in my lab environment and it works for me:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-app
name: test-app
spec:
selector:
matchLabels:
app: test-app
replicas: 1
template:
metadata:
labels:
app: test-app
spec:
containers:
- name: test-app
image: arycloud/flask-svc
imagePullPolicy: Always
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
type: LoadBalancer
ports:
- name: http
port: 80
protocol: TCP
targetPort: 5000
selector:
app: test-app
</code></pre>
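<p>If you want to double-check that the selector is matching, you can verify that the service has endpoints and that the pods carry the expected label (names taken from the manifests above):</p>

<pre><code>kubectl get endpoints test-app
kubectl get pods -l app=test-app -o wide
</code></pre>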
| Mr.KoopaKiller |
<p>I want to automate the use a certificate, that is created by <code>cert-manager</code> as documented <a href="https://cert-manager.io/docs/usage/certificate/" rel="nofollow noreferrer">here</a>, in a Helm chart. For example, the YAML below.</p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: mypod
labels:
app: mypod
spec:
containers:
- name: mypod
image: repo/image:0.0.0
imagePullPolicy: Always
volumeMounts:
- name: certs
mountPath: /etc/certs
readOnly: true
ports:
- containerPort: 4443
protocol: TCP
volumes:
- name: certs
secret:
secretName: as_created_by_cert-manager
</code></pre>
<p>How do I submit the YAML for getting a <code>Certificate</code> from <code>cert-manager</code> and then plugin the generated <code>Secret</code> into the <code>Pod</code> YAML above, in a Helm chart?</p>
| cogitoergosum | <p>I am posting David's comment as a community wiki answer as requested by the OP:</p>
<blockquote>
<p>You should be able to write the YAML for the Certificate in the same
chart, typically in its own file. I'd expect it would work to create
them together, the generated Pod would show up as "Pending" in kubectl
get pods output until cert-manager actually creates the matching
Secret. – David Maze</p>
</blockquote>
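<p>For illustration, a minimal sketch of such a <code>Certificate</code> (the issuer and DNS names are assumptions; the <code>secretName</code> must match the one referenced in the Pod volume):</p>

<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mypod-cert
spec:
  secretName: mypod-tls        # must match the Pod volume's secretName
  issuerRef:
    name: my-cluster-issuer    # assumed ClusterIssuer name
    kind: ClusterIssuer
  dnsNames:
    - mypod.example.com        # assumed DNS name
</code></pre>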
| Wytrzymały Wiktor |
<p>I'm currently on OSX (Catalina) with Docker Desktop (19.03.5) running k8s (v.14.8)</p>
<p>Following the <a href="https://www.envoyproxy.io/docs/envoy/latest/start/distro/ambassador" rel="nofollow noreferrer">ambassador getting started</a> article, I've done the following:</p>
<p>Created a file called <code>ambassador-ingress.yaml</code></p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
labels:
service: ambassador
name: ambassador
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
selector:
service: ambassador
---
apiVersion: v1
kind: Service
metadata:
name: google
annotations:
getambassador.io/config: |
---
apiVersion: ambassador/v0
kind: Mapping
name: google_mapping
prefix: /google/
service: https://google.com:443
host_rewrite: www.google.com
spec:
type: ClusterIP
clusterIP: None
</code></pre>
<p>And I've run the following</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl apply -f https://www.getambassador.io/yaml/ambassador/ambassador-rbac.yaml
$ kubectl apply -f ambassador-ingress.yaml
</code></pre>
<p>I can now look at <code>kubectl get pods</code> and <code>kubectl get service</code></p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-74fb8f5668-2b85z 1/1 Running 0 20m
ambassador-74fb8f5668-r6jrf 1/1 Running 0 20m
ambassador-74fb8f5668-vrmjg 1/1 Running 0 20m
</code></pre>
<pre><code>$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ambassador LoadBalancer 10.97.166.229 localhost 80:31327/TCP 18m
ambassador-admin NodePort 10.96.1.56 <none> 8877:30080/TCP 18m
google ClusterIP None <none> <none> 18m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13d
</code></pre>
<p>Everything looks like it is set up correctly; however, whenever I attempt to curl k8s I can't get anything but an empty server response, even though I can hit Google directly:</p>
<pre><code>$ curl localhost/google/
> curl: (52) Empty reply from server
$ curl www.google.com
> <!doctype html> ............
</code></pre>
<p>The question I have is, where do I begin troubleshooting? I don't know where the failure lies or how to begin digging to find what has gone wrong. What is the right direction?</p>
| Niko | <p>Based on "The Kubernetes network model" [1] there are 2 important rules:</p>
<blockquote>
<ul>
<li>pods on a node can communicate with all pods on all nodes without NAT</li>
<li>agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node</li>
</ul>
</blockquote>
<p>So basically it says that since your K8s cluster is located on your machine, you can communicate directly with the service IP "10.97.166.229" and the pod IP.</p>
<p>Regarding how to begin troubleshooting: since your pods are up and running, this is most likely a network error. You can try the following:</p>
<p>a) Try to connect to your pod directly. You can get your IP by executing the command:</p>
<pre><code>kubectl get pod -o wide
</code></pre>
<p>b) Get the logs of your pod and search for any errors:</p>
<pre><code>kubectl logs ambassador-74fb8f5668-2b85z
</code></pre>
<p>c) Go inside your pod and test connectivity from inside it. [2]</p>
<pre><code>kubectl exec -it ambassador-74fb8f5668-2b85z -- /bin/bash
</code></pre>
<p>[1] <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/networking/</a></p>
<p>[2] <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/</a></p>
| Armando Cuevas |
<p>I'm trying to install Openshift 3.11 on a one master, one worker node setup.</p>
<p>The installation fails, and I can see in <code>journalctl -r</code>:</p>
<pre><code>2730 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
2730 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
</code></pre>
<p>Things I've tried:</p>
<ol>
<li>reboot master node</li>
<li>Ensure that <code>hostname</code> is the same as <code>hostname -f</code> on all nodes</li>
<li>Disable IP forwarding on master node as described on <a href="https://github.com/openshift/openshift-ansible/issues/7967#issuecomment-405196238" rel="noreferrer">https://github.com/openshift/openshift-ansible/issues/7967#issuecomment-405196238</a> and <a href="https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux" rel="noreferrer">https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux</a></li>
<li>Applying kube-flannel, on master node as described on <a href="https://stackoverflow.com/a/54779881/265119">https://stackoverflow.com/a/54779881/265119</a></li>
<li><code>unset http_proxy https_proxy</code> on master node as described on <a href="https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637</a></li>
<li>modify <code>/etc/resolve.conf</code> to have <code>nameserver 8.8.8.8</code>, as described on <a href="https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-452172710" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-452172710</a></li>
<li>created a file /etc/cni/net.d/80-openshift-network.conf with content <code>{ "cniVersion": "0.2.0", "name": "openshift-sdn", "type": "openshift-sdn" }</code>, as described on <a href="https://stackoverflow.com/a/55743756/265119">https://stackoverflow.com/a/55743756/265119</a></li>
</ol>
<p>The last step does appear to have allowed the master node to become ready, however the ansible openshift installer still fails with <code>Control plane pods didn't come up</code>.</p>
<p>For a more detailed description of the problem see <a href="https://github.com/openshift/openshift-ansible/issues/11874" rel="noreferrer">https://github.com/openshift/openshift-ansible/issues/11874</a></p>
| Magick | <p>Along with Step 6:
make sure that hostname and hostname -f both return the FQDN for your hosts.</p>
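<p>For example (the FQDN below is just a placeholder):</p>

<pre><code>hostname       # should print e.g. master.example.com
hostname -f    # should print the same FQDN
# if they differ, set it explicitly:
hostnamectl set-hostname master.example.com
</code></pre>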
<p><a href="https://github.com/openshift/openshift-ansible/issues/10798" rel="nofollow noreferrer">https://github.com/openshift/openshift-ansible/issues/10798</a></p>
| Byron |
<p>I have created a Node port service in Google cloud with the following specification... I have a firewall rule created to allow traffic from 0.0.0.0/0 for the port '30100' ,I have verified stackdriver logs and traffic is allowed but when I either use curl or from browser to hit http://:30100 I am not getting any response. I couldn't proceed how to debug the issue also... can someone please suggest on this ?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginxv1
template:
metadata:
labels:
app: nginxv1
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: nginxv1
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8080
nodePort: 30100
selector:
app: nginxv1
type: NodePort
</code></pre>
<p>Thanks.</p>
| deals my | <p>You need to fix the container port, it must be <code>80</code> because the nginx container exposes this port as you can see <a href="https://hub.docker.com/_/nginx" rel="nofollow noreferrer">here</a></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginxv1
template:
metadata:
labels:
app: nginxv1
spec:
containers:
- name: nginx
image: nginx:latest
---
apiVersion: v1
kind: Service
metadata:
name: nginxv1
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
nodePort: 30100
selector:
app: nginxv1
type: NodePort
</code></pre>
<p>Also, you need to create a firewall rule to permit the traffic to the node, as mentioned by @danyL in comments:</p>
<pre><code>gcloud compute firewall-rules create test-node-port --allow tcp:30100
</code></pre>
<p>Get the node IP with the command</p>
<pre><code>kubectl get nodes -owide
</code></pre>
<p>And then try to access the nginx page with:</p>
<pre><code>curl http://<NODEIP>:30100
</code></pre>
| Mr.KoopaKiller |
<p>Modified the <a href="https://github.com/alerta/docker-alerta/tree/master/contrib/kubernetes/helm/alerta" rel="nofollow noreferrer">helm chart</a> of <code>alerta</code> to spin it up on an <code>istio</code>- enabled GKE cluster.</p>
<p>The alerta pod and its sidecar are created OK</p>
<pre><code>▶ k get pods | grep alerta
alerta-758bc87dcf-tp5nv 2/2 Running 0 22m
</code></pre>
<p>When I try to access the url that my virtual service is pointing to</p>
<p>I get the following error</p>
<blockquote>
<p>upstream connect error or disconnect/reset before headers. reset reason: connection termination</p>
</blockquote>
<pre><code>▶ k get vs alerta-virtual-service -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
annotations:
helm.fluxcd.io/antecedent: mynamespace:helmrelease/alerta
creationTimestamp: "2020-04-23T14:45:04Z"
generation: 1
name: alerta-virtual-service
namespace: mynamespace
resourceVersion: "46844125"
selfLink: /apis/networking.istio.io/v1alpha3/namespaces/mynamespace/virtualservices/alerta-virtual-service
uid: 2a3caa13-3900-4da1-a3a1-9f07322b52b0
spec:
gateways:
- mynamespace/istio-ingress-gateway
hosts:
- alerta.myurl.com
http:
- appendHeaders:
x-request-start: t=%START_TIME(%s.%3f)%
match:
- uri:
prefix: /
route:
- destination:
host: alerta
port:
number: 80
timeout: 60s
</code></pre>
<p>and here is the service</p>
<pre><code>▶ k get svc alerta -o yaml
apiVersion: v1
kind: Service
metadata:
annotations:
helm.fluxcd.io/antecedent: mynamespace:helmrelease/alerta
creationTimestamp: "2020-04-23T14:45:04Z"
labels:
app: alerta
chart: alerta-0.1.0
heritage: Tiller
release: alerta
name: alerta
namespace: mynamespace
resourceVersion: "46844120"
selfLink: /api/v1/namespaces/mynamespace/services/alerta
uid: 4d4a3c73-ee42-49e3-a4cb-8c51536a0508
spec:
clusterIP: 10.8.58.228
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: alerta
release: alerta
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>However, when I exec via another pod into the cluster and try to reach the alerta svc endpoint:</p>
<pre><code>/ # curl -IL http://alerta
curl: (56) Recv failure: Connection reset by peer
/ # nc -zv -w 3 alerta 80
alerta (10.8.58.228:80) open
</code></pre>
<p>although as it is evident the port is open</p>
<p>any suggestion?</p>
<p>could it be that the chaining of the 2 proxies is creating issues? nginx behind envoy?</p>
<p>The container logs seem normal</p>
<pre><code>2020-04-23 15:34:40,272 DEBG 'nginx' stdout output:
ip=\- [\23/Apr/2020:15:34:40 +0000] "\GET / HTTP/1.1" \200 \994 "\-" "\kube-probe/1.15+"
/web | /index.html | > GET / HTTP/1.1
</code></pre>
<p><strong>edit</strong>: Here is a verbose <code>curl</code> with host header explicitly set</p>
<pre><code>/ # curl -v -H "host: alerta.myurl.com" http://alerta:80
* Rebuilt URL to: http://alerta:80/
* Trying 10.8.58.228...
* TCP_NODELAY set
* Connected to alerta (10.8.58.228) port 80 (#0)
> GET / HTTP/1.1
> host: alerta.myurl.com
> User-Agent: curl/7.57.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
</code></pre>
<p>The <code>nginx</code> config file used by the app/pod is the following FWIW</p>
<pre><code>worker_processes 4;
pid /tmp/nginx.pid;
daemon off;
error_log /dev/stderr info;
events {
worker_connections 1024;
}
http {
client_body_temp_path /tmp/client_body;
fastcgi_temp_path /tmp/fastcgi_temp;
proxy_temp_path /tmp/proxy_temp;
scgi_temp_path /tmp/scgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
include /etc/nginx/mime.types;
gzip on;
gzip_disable "msie6";
log_format main 'ip=\$http_x_real_ip [\$time_local] '
'"\$request" \$status \$body_bytes_sent "\$http_referer" '
'"\$http_user_agent"' ;
log_format scripts '$document_root | $uri | > $request';
default_type application/octet-stream;
server {
listen 8080 default_server;
access_log /dev/stdout main;
access_log /dev/stdout scripts;
location ~ /api {
include /etc/nginx/uwsgi_params;
uwsgi_pass unix:/tmp/uwsgi.sock;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
root /web;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
}
}
</code></pre>
<p><strong>edit 2</strong>: Trying to get the <code>istio</code> authentication policy</p>
<pre><code>✔ 18h55m ⍉
▶ kubectl get peerauthentication.security.istio.io
No resources found.
✔ 18h55m
▶ kubectl get peerauthentication.security.istio.io/default -o yaml
Error from server (NotFound): peerauthentications.security.istio.io "default" not found
</code></pre>
<p><strong>edit 3</strong>: when performing <code>curl</code> to the service from within the istio proxy container</p>
<pre><code>▶ k exec -it alerta-758bc87dcf-jzjgj -c istio-proxy bash
istio-proxy@alerta-758bc87dcf-jzjgj:/$ curl -v http://alerta:80
* Rebuilt URL to: http://alerta:80/
* Trying 10.8.58.228...
* Connected to alerta (10.8.58.228) port 80 (#0)
> GET / HTTP/1.1
> Host: alerta
> User-Agent: curl/7.47.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
</code></pre>
| pkaramol | <p>I created a new GKE cluster with Istio 1.5.2; in fact, if you check for mTLS, there are no resources found:</p>
<pre><code>kubectl get peerauthentication --all-namespaces
</code></pre>
<blockquote>
<p>No resources found.</p>
</blockquote>
<pre><code>kubectl get peerauthentication.security.istio.io/default
</code></pre>
<blockquote>
<p>Error from server (NotFound): peerauthentications.security.istio.io "default" not found</p>
</blockquote>
<p>So I tried this <a href="https://istio.io/docs/tasks/security/authentication/authn-policy/" rel="nofollow noreferrer">example</a>, and it clearly shows Istio is in strict TLS mode when you install it with <code>global.mtls.enabled=true</code>.</p>
<p>If you add the pods and namespaces as mentioned <a href="https://istio.io/docs/tasks/security/authentication/authn-policy/#setup" rel="nofollow noreferrer">here</a>, every request should return 200, but it doesn't:</p>
<pre><code>sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 000
command terminated with exit code 56
sleep.legacy to httpbin.bar: 000
command terminated with exit code 56
sleep.legacy to httpbin.legacy: 200
</code></pre>
<p>So if you change mTLS from strict to permissive with the YAML below,</p>
<pre><code>apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
namespace: "istio-system"
spec:
mtls:
mode: PERMISSIVE
</code></pre>
<p>it works now</p>
<pre><code>sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.foo to httpbin.legacy: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
sleep.bar to httpbin.legacy: 200
sleep.legacy to httpbin.foo: 200
sleep.legacy to httpbin.bar: 200
sleep.legacy to httpbin.legacy: 200
</code></pre>
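<p>To apply the permissive policy above and confirm it is in place, something like this should be enough (the file name is just an assumption):</p>

<pre><code>kubectl apply -f peer-authentication-permissive.yaml
kubectl get peerauthentication -n istio-system
</code></pre>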
<hr>
<p>Additionally, here is a <a href="https://github.com/istio/istio/issues/14963" rel="nofollow noreferrer">GitHub issue</a> with the error you provided.</p>
<hr>
<p>About the question </p>
<blockquote>
<p>why the pod fails to mtls authenticate with itself, when curling from inside it </p>
</blockquote>
<p>There is a <a href="https://github.com/istio/istio/issues/12551" rel="nofollow noreferrer">github</a> issue about this.</p>
<hr>
<p>Additionally, take a look at these <a href="https://istio.io/pt-br/docs/tasks/security/authentication/mutual-tls/#verify-requests" rel="nofollow noreferrer">Istio docs</a>.</p>
| Jakub |
<p>I have created an EKS cluster named "prod". I worked on this "prod" cluster and after that I deleted it. I deleted all its associated VPCs, interfaces, security groups, everything. But if I try to create an EKS cluster with the same name "prod", I get the error below. Can you please help me with this issue?</p>
<pre><code>[centos@ip-172-31-23-128 ~]$ eksctl create cluster --name prod
--region us-east-2 [ℹ] eksctl version 0.13.0 [ℹ] using region us-east-2 [ℹ] setting availability zones to [us-east-2b us-east-2c us-east-2a] [ℹ] subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19 [ℹ] subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19 [ℹ] subnets for us-east-2a - public:192.168.64.0/19 private:192.168.160.0/19 [ℹ] nodegroup "ng-1902b9c1" will use "ami-080fbb09ee2d4d3fa" [AmazonLinux2/1.14] [ℹ] using Kubernetes version 1.14 [ℹ] creating EKS cluster "prod" in "us-east-2" region with un-managed nodes [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks
--region=us-east-2 --cluster=prod' [ℹ] CloudWatch logging will not be enabled for cluster "prod" in "us-east-2" [ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-2
--cluster=prod' [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "prod" in "us-east-2" [ℹ] 2 sequential tasks: { create cluster control plane "prod", create nodegroup "ng-1902b9c1" } [ℹ] building cluster stack "eksctl-prod-cluster" [ℹ] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-east-2
--name=prod' [✖] creating CloudFormation stack "eksctl-prod-cluster": AlreadyExistsException: Stack [eksctl-prod-cluster] already exists status code: 400, request id: 49258141-e03a-42af-ba8a-3fef9176063e Error: failed to create cluster "prod"
</code></pre>
| nikhileshwar y | <p>There are two things to consider here.</p>
<ol>
<li><p>The <code>delete</code> command does not wait for all the resources to actually be gone. You should add the <code>--wait</code> flag in order to let it finish (see the sketch after this list). It usually takes around 10-15 mins.</p></li>
<li><p>If that is still not enough, you should make sure that you delete the <code>CloudFormation</code> stack as well. It would look something like this (adjust the naming):</p>
<pre><code># delete the cluster and its CloudFormation stack
aws cloudformation list-stacks --query StackSummaries[].StackName
aws cloudformation delete-stack --stack-name worker-node-stack
aws eks delete-cluster --name EKStestcluster
</code></pre></li>
</ol>
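<p>For the first point, a minimal sketch of the wait-and-verify flow (cluster name and region taken from your command):</p>

<pre><code>eksctl delete cluster --region=us-east-2 --name=prod --wait
# once it returns, confirm the stack is really gone before recreating:
aws cloudformation describe-stacks --stack-name eksctl-prod-cluster
</code></pre>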
<p>Please let me know if that helped.</p>
| Wytrzymały Wiktor |
<p>I have a pod (<code>kubectl run app1 --image tomcat:7.0-slim</code>) in GKE after applying the egress network policy <code>apt-get update</code> command unable to connect internet.</p>
<p><em><strong>Before applying policy:</strong></em></p>
<p><a href="https://i.stack.imgur.com/8et61.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8et61.png" alt="enter image description here" /></a></p>
<p><em><strong>After applying policy:</strong></em></p>
<p><a href="https://i.stack.imgur.com/u4Dxd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u4Dxd.png" alt="enter image description here" /></a></p>
<p><em><strong>This is the policy applied:</strong></em></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: app2-np
namespace: default
spec:
podSelector:
matchLabels:
name: app2
policyTypes:
- Egress
- Ingress
ingress:
- {}
egress:
- to:
- podSelector:
matchLabels:
name: app3
ports:
- port: 8080
- ports:
- port: 80
- port: 53
- port: 443
</code></pre>
<p>Here I am able to connect to port 8080 of the app3 pod in the same namespace. Please help me correct my netpol.</p>
| sudhir tataraju | <p>It happens because you are defining the egress rule only for app3 on port 8080, and that blocks all other internet connection attempts.</p>
<p>If you need internet access from some of your pods, you can label them and create a NetworkPolicy that permits internet egress for them.</p>
<p>In the example below, the pods with the label <code>networking/allow-internet-egress: "true"</code> will be able to reach the internet:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: internet-egress
spec:
podSelector:
matchLabels:
networking/allow-internet-egress: "true"
egress:
- {}
policyTypes:
- Egress
</code></pre>
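<p>As a usage sketch, assuming you want your <code>app1</code> pod to keep internet access, you would label it to match the policy above:</p>

<pre><code>kubectl label pod app1 networking/allow-internet-egress=true
</code></pre>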
<p>Another option is to allow by IP blocks. In the example below, a rule will allow internet access (<code>0.0.0.0/0</code>) <strong>except</strong> for the ipBlock <code>10.0.0.0/8</code>:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-internet-only
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 10.0.0.0/8
</code></pre>
<p>Finally, on <a href="https://orca.tufin.io/netpol/" rel="nofollow noreferrer">this</a> site you can visualize your NetworkPolicies, which is a good way to understand their exact behaviour.</p>
<p>References:</p>
<p><a href="https://www.stackrox.com/post/2020/01/kubernetes-egress-network-policies/" rel="nofollow noreferrer">https://www.stackrox.com/post/2020/01/kubernetes-egress-network-policies/</a></p>
<p><a href="https://stackoverflow.com/questions/57789969/kubernets-networkpolicy-allow-external-traffic-to-internet-only">Kubernets networkpolicy allow external traffic to internet only</a></p>
| Mr.KoopaKiller |
<p>I'm attempting to get Config Connector up and running on my GKE project and am following <a href="https://cloud.google.com/config-connector/docs/how-to/getting-started" rel="nofollow noreferrer">this getting started guide.</a></p>
<p>So far I have enabled the appropriate APIs:</p>
<pre><code>> gcloud services enable cloudresourcemanager.googleapis.com
</code></pre>
<p>Created my service account and added policy binding:</p>
<pre><code>> gcloud iam service-accounts create cnrm-system
> gcloud iam service-accounts add-iam-policy-binding [email protected] --member="serviceAccount:test-connector.svc.id.goog[cnrm-system/cnrm-controller-manager]" --role="roles/iam.workloadIdentityUser"
> kubectl wait -n cnrm-system --for=condition=Ready pod --all
</code></pre>
<p>Annotated my namespace:</p>
<pre><code>> kubectl annotate namespace default cnrm.cloud.google.com/project-id=test-connector
</code></pre>
<p>And then run through trying to apply the Spanner yaml in the example:</p>
<pre><code>~ >>> kubectl describe spannerinstance spannerinstance-sample
Name: spannerinstance-sample
Namespace: default
Labels: label-one=value-one
Annotations: cnrm.cloud.google.com/management-conflict-prevention-policy: resource
cnrm.cloud.google.com/project-id: test-connector
API Version: spanner.cnrm.cloud.google.com/v1beta1
Kind: SpannerInstance
Metadata:
Creation Timestamp: 2020-09-18T18:44:41Z
Generation: 2
Resource Version: 5805305
Self Link: /apis/spanner.cnrm.cloud.google.com/v1beta1/namespaces/default/spannerinstances/spannerinstance-sample
UID:
Spec:
Config: northamerica-northeast1-a
Display Name: Spanner Instance Sample
Num Nodes: 1
Status:
Conditions:
Last Transition Time: 2020-09-18T18:44:41Z
Message: Update call failed: error fetching live state: error reading underlying resource: Error when reading or editing SpannerInstance "test-connector/spannerinstance-sample": googleapi: Error 403: Request had insufficient authentication scopes.
Reason: UpdateFailed
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning UpdateFailed 6m41s spannerinstance-controller Update call failed: error fetching live state: error reading underlying resource: Error when reading or editing SpannerInstance "test-connector/spannerinstance-sample": googleapi: Error 403: Request had insufficient authentication scopes.
</code></pre>
<p>I'm not really sure what's going on here, because my cnrm service account has ownership of the project my cluster is in, and I have the APIs listed in the guide enabled.</p>
<p>The CC pods themselves appear to be healthy:</p>
<pre><code>~ >>> kubectl wait -n cnrm-system --for=condition=Ready pod --all
pod/cnrm-controller-manager-0 condition met
pod/cnrm-deletiondefender-0 condition met
pod/cnrm-resource-stats-recorder-58cb6c9fc-lf9nt condition met
pod/cnrm-webhook-manager-7658bbb9-kxp4g condition met
</code></pre>
<p>Any insight into this would be greatly appreciated!</p>
| tparrott | <p>Based on the error message you have posted, I suspect that it might be an issue with your <a href="https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam" rel="nofollow noreferrer">GKE scopes</a>.</p>
<p>For GKE to access other GCP APIs, you must allow this access when creating the cluster. You can check the enabled scopes with the command:</p>
<p><code>gcloud container clusters describe <cluster-name></code> and look for <code>oauthScopes</code> in the result.</p>
<p><a href="https://developers.google.com/identity/protocols/oauth2/scopes#spanner" rel="nofollow noreferrer">Here</a> you can see the scope name for Cloud Spanner; you must enable the scope <code>https://www.googleapis.com/auth/cloud-platform</code> as the minimum permission.</p>
<p>To verify in the GUI, you can see the permission in: <code>Kubernetes Engine</code> > <code><Cluster-name></code> > expand the <code>permissions</code> section and look for <code>Cloud Platform</code>.</p>
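<p>As far as I know, scopes are fixed at node-pool creation time, so one workaround is to create a new node pool (or a new cluster) with the broader scope. A sketch, with the pool name as a placeholder:</p>

<pre><code>gcloud container node-pools create pool-cloud-platform \
  --cluster=<cluster-name> \
  --scopes=https://www.googleapis.com/auth/cloud-platform
</code></pre>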
| Mr.KoopaKiller |
<p>Hello guys, hope you are well!</p>
<p>I need my master machine to order the slave to pull the image from my Docker Hub repo, but I get the error below. It doesn't let the slave pull from the repo, yet when I go to the slave and pull manually, it works.</p>
<p>This is from the Kubernetes master:</p>
<p>The first lines are a describe of pod my-app-6c99bd7b9c-dqd6l, which is running now because I pulled the image manually from Docker Hub, but I want Kubernetes to do it.</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/my-app2-74969ddd4f-l6d6l to kubeslave.machine.pt
Normal SandboxChanged <invalid> kubelet, kubeslave.machine.pt Pod sandbox changed, it will be killed and re-created.
Warning Failed <invalid> (x3 over <invalid>) kubelet, kubeslave.machine.pt Failed to pull image "bedjase/repository/my-java-app:my-java-app": rpc error: code = Unknown desc = Error response from daemon: pull access denied for bedjase/repository/my-java-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed <invalid> (x3 over <invalid>) kubelet, kubeslave.machine.pt Error: ErrImagePull
Normal BackOff <invalid> (x7 over <invalid>) kubelet, kubeslave.machine.pt Back-off pulling image "bedjase/repository/my-java-app:my-java-app"
Warning Failed <invalid> (x7 over <invalid>) kubelet, kubeslave.machine.pt Error: ImagePullBackOff
Normal Pulling <invalid> (x4 over <invalid>) kubelet, kubeslave.machine.pt Pulling image "bedjase/repository/my-java-app:my-java-app"
[root@kubernetes ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-6c99bd7b9c-dqd6l 1/1 Running 0 14m
my-app2-74969ddd4f-l6d6l 0/1 ImagePullBackOff 0 2m20s
nginx-86c57db685-bxkpl 1/1 Running 0 8h
</code></pre>
<p>This from slave:</p>
<pre><code>[root@kubeslave docker]# docker pull bedjase/repository:my-java-app
my-java-app: Pulling from bedjase/repository
50e431f79093: Already exists
dd8c6d374ea5: Already exists
c85513200d84: Already exists
55769680e827: Already exists
e27ce2095ec2: Already exists
5943eea6cb7c: Already exists
3ed8ceae72a6: Already exists
7ba151cdc926: Already exists
Digest: sha256:c765d09bdda42a4ab682b00f572fdfc4bbcec0b297e9f7716b3e3dbd756ba4f8
Status: Downloaded newer image for bedjase/repository:my-java-app
docker.io/bedjase/repository:my-java-app
</code></pre>
<p>I have already logged in to the Docker Hub repo on both master and slave successfully.
Both have /etc/hosts OK, and the nodes are connected and ready:</p>
<pre><code>[root@kubernetes ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes.machine.pt Ready master 26h v1.17.4
kubeslave.machine.pt Ready <none> 26h v1.17.4
</code></pre>
<p>Am I missing some point here? </p>
| Bedjase | <p>For private images you must create a <code>secret</code> with your Docker Hub <code>username</code> and <code>password</code> so that Kubernetes is able to pull the image.</p>
<p>The command below creates a secret named <code>regcred</code> with your Docker Hub credentials; replace the fields <code><your-name></code>, <code><your-password></code> and <code><your-email></code>:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<your-name> --docker-password=<your-password> --docker-email=<your-email>
</code></pre>
<p>After that, you need to add <code>imagePullSecrets</code> to your pod/deployment spec, referencing the secret created above, so it is used to pull your private image. See this example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: private-reg
spec:
containers:
- name: private-reg-container
image: <your-private-image>
imagePullSecrets:
- name: regcred
</code></pre>
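<p>As a quick sanity check (assuming everything lives in the <code>default</code> namespace), you can confirm that the secret exists and watch the pull events:</p>

<pre><code>kubectl get secret regcred
kubectl describe pod private-reg   # the Events section should show a successful image pull
</code></pre>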
<p><strong>References:</strong></p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret</a></p>
| Mr.KoopaKiller |
<p>Istio has several default metrics, such as <code>istio_requests_total</code>, <code>istio_request_bytes</code>, <code>istio_tcp_connections_opened_total</code>. Istio envoy proxy computes and exposes these metrics. On the <a href="https://istio.io/docs/reference/config/telemetry/metrics/" rel="nofollow noreferrer">Istio website</a>, it shows that <code>istio_requests_total</code> is a COUNTER incremented for every request handled by an Istio proxy. </p>
<p>We ran some experiments where we sent a lot of requests through the Istio Envoy to reach a microservice behind it, and at the same time we monitored the metric from the Istio Envoy. However, we found that <code>istio_requests_total</code> does not include requests that have passed through the Istio Envoy to the backend microservice but whose responses have not yet arrived back at the Envoy. In other words, <code>istio_requests_total</code> only includes the number of served requests, and does not include the requests in flight.</p>
<p>My question is: is our observation right? Why does <code>istio_requests_total</code> not include the requests in flight?</p>
| Jeffrey | <p>As mentioned <a href="https://banzaicloud.com/blog/istio-telemetry/" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>The default metrics are standard information about HTTP, gRPC and TCP requests and responses. Every request is reported by the source proxy and the destination proxy as well and these can provide a different view on the traffic. Some requests may not be reported by the destination (if the request didn't reach the destination at all), but some labels (like connection_security_policy) are only available on the destination side. Here are some of the most important HTTP metrics:</p>
<p><strong>istio_requests_total is a COUNTER that aggregates request totals between Kubernetes workloads, and groups them by response codes, response flags and security policy.</strong></p>
</blockquote>
<hr />
<p>As mentioned <a href="https://www.datadoghq.com/blog/istio-metrics/#istio-mesh-metrics" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>When Mixer collects metrics from Envoy, it assigns dimensions that downstream backends can use for grouping and filtering. In Istio’s default configuration, dimensions include attributes that indicate where in your cluster a request is traveling, such as the name of the origin and destination service. This gives you visibility into traffic anywhere in your cluster.</p>
</blockquote>
<hr />
<blockquote>
<p>Metric to watch: requests_total</p>
<p>The request count metric indicates the overall throughput of requests between services in your mesh, and increments whenever an Envoy sidecar receives an HTTP or gRPC request. You can track this metric by both origin and destination service. If the count of requests between one service and another has plummeted, either the origin has stopped sending requests or the destination has failed to handle them. In this case, you should check for a misconfiguration in Pilot, the Istio component that routes traffic between services. If there’s a rise in demand, you can correlate this metric with increases in resource metrics like CPU utilization, and ensure that your system resources are scaled correctly.</p>
</blockquote>
<hr />
<p>Maybe it's worth checking the Envoy docs about that, because of what's written <a href="https://istio.io/docs/examples/microservices-istio/logs-istio/" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>The queries above use the istio_requests_total metric, which is a standard Istio metric. You can observe other metrics, in particular, the ones of Envoy (Envoy is the sidecar proxy of Istio). You can see the collected metrics in the insert metric at cursor drop-down menu.</p>
</blockquote>
<hr />
<p>Based on the above docs, I agree with what @Joel mentioned in the comments:</p>
<blockquote>
<p>I think you're correct, and I imagine the "why" is because of response flags that are expected to be found on the metric labels. <strong>This can be written only when a response is received</strong>. If they wanted to do differently, I guess it would mean having 2 different counters, one for request sent and one for response received.</p>
</blockquote>
| Jakub |
<p>I am very new to k8s so apologies if the question doesn't make sense or is incorrect/stupid. </p>
<p>I have a liveness probe configured in my pod definition which just hits a health API and checks its response status to test for the liveness of the pod.</p>
<p>My question is, while I understand the purpose of the liveness/readiness probes... what exactly are they? Are they just another type of pod which is spun up to try and communicate with our pod via the configured API? Or are they some kind of lightweight process which runs inside the pod itself and attempts the API call?</p>
<p>Also, how does a probe communicate with a pod? Do we require a service to be configured for the pod so that the probe is able to access the API or is it an internal process with no additional config required?</p>
| Marco Polo | <p><strong>Short answer:</strong> the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> handles these checks to ensure your service is running, and if it is not, the container is replaced by another one. The kubelet runs on every node of your cluster, so you don't need to make any additional configuration.</p>
<p>You <strong>don't need</strong> to configure a service account to have the probes working; it is an internal process handled by Kubernetes.</p>
<p>From Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>A <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#probe-v1-core" rel="nofollow noreferrer">Probe</a> is a diagnostic performed periodically by the <a href="https://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">kubelet</a> on a Container. To perform a diagnostic, the kubelet calls a <a href="https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler" rel="nofollow noreferrer">Handler</a> implemented by the Container. There are three types of handlers:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#execaction-v1-core" rel="nofollow noreferrer">ExecAction</a>: Executes a specified command inside the Container. The diagnostic is considered successful if the command exits with a status code of 0.</p></li>
<li><p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#tcpsocketaction-v1-core" rel="nofollow noreferrer">TCPSocketAction</a>: Performs a TCP check against the Container’s IP address on a specified port. The diagnostic is considered successful if the port is open.</p></li>
<li><p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#httpgetaction-v1-core" rel="nofollow noreferrer">HTTPGetAction</a>: Performs an HTTP Get request against the Container’s IP address on a specified port and path. The diagnostic is considered successful if the response has a status code greater than or equal to 200 and less than 400.</p></li>
</ul>
<p>Each probe has one of three results:</p>
<ul>
<li>Success: The Container passed the diagnostic.</li>
<li>Failure: The Container failed the diagnostic.</li>
<li>Unknown: The diagnostic failed, so no action should be taken.</li>
</ul>
<p>The kubelet can optionally perform and react to three kinds of probes on running Containers:</p>
<ul>
<li><p><code>livenessProbe</code>: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">restart policy</a>. If a Container does not provide a liveness probe, the default state is <code>Success</code>.</p></li>
<li><p><code>readinessProbe</code>: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is <code>Failure</code>. If a Container does not provide a readiness probe, the default state is <code>Success</code>.</p></li>
<li><p><code>startupProbe</code>: Indicates whether the application within the Container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the Container, and the Container is subjected to its <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">restart policy</a>. If a Container does not provide a startup probe, the default state is <code>Success</code>.</p></li>
</ul>
</blockquote>
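<p>For illustration, a minimal <code>httpGet</code> liveness probe sketch (the image, path and port are assumptions, not values from your spec):</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: myapp:latest        # placeholder image
    livenessProbe:
      httpGet:
        path: /health          # assumed health API path
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
</code></pre>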
| Mr.KoopaKiller |
<p>I am trying to build a continuous deployment pipeline for my GKE cluster. I use my own gitlab-runner as the CI pipeline to build and push images to gcr.io/PROJECT/APP:google tag there.</p>
<p>Is there any way to implement a rolling restart of the containers that use this image after it is updated? I have seen a lot of examples of how to do it using Jenkins and Google Source Repository directly in a Kubernetes cluster, but is there any way to trigger only on image changes?</p>
<p>I have found something that I need here: <a href="https://cloud.google.com/container-registry/docs/configuring-notifications" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/configuring-notifications</a>. But I still have no idea how to connect these notifications to the cluster.</p>
| Oleg | <p>After some tests, I finally made it work using Pub/Sub and a Kubernetes CronJob.</p>
<h2>How it works:</h2>
<p>When a new image is pushed to <strong>Container Registry</strong>, a message is sent to <strong>Pub/Sub</strong> that contains some important data, like this:</p>
<pre><code>{
"action":"INSERT",
"digest":"gcr.io/my-project/hello-world@sha256:6ec128e26cd5...",
"tag":"gcr.io/my-project/hello-world:1.1"
}
</code></pre>
<p>The <code>action</code> with value <code>INSERT</code> means that a new image was pushed to Container Registry.
The key <code>tag</code> contains the name of the image that was pushed.</p>
<p>So, all we need to do is, read this data and update the deployment image.</p>
<p>First, we need some code to retrieve the message from Pub/Sub. Unfortunately I can't find anything "ready" for this task, so you need to create your own. <a href="https://cloud.google.com/pubsub/docs/pull" rel="nofollow noreferrer">Here</a> are some examples of how you can retrieve messages from Pub/Sub.</p>
<blockquote>
<p>As a <strong><em>proof-of-concept</em></strong>, my choice was to use a shell script (<a href="https://github.com/MrKoopaKiller/docker-gcloud-kubectl/blob/master/imageUpdater.sh" rel="nofollow noreferrer">imageUpdater.sh</a>) to retrieve the message from Pub/Sub and execute the <code>kubectl set image...</code> command to update the deployment image.</p>
</blockquote>
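<p>If you just want the gist of that script, the core of it is roughly this sketch (the subscription, namespace and deployment names are the ones used later in this answer; the exact parsing may differ from the linked version):</p>

<pre><code>#!/bin/bash
# pull one message from the subscription and acknowledge it
MSG=$(gcloud pubsub subscriptions pull nginx --auto-ack --limit=1 --format="value(message.data)")
# message.data is base64-encoded JSON; decode it and read the "tag" field
# (depending on the gcloud version the data may already be decoded)
TAG=$(echo "$MSG" | base64 --decode | jq -r '.tag')
# update the deployment image when a tag was found
if [ -n "$TAG" ] && [ "$TAG" != "null" ]; then
  kubectl set image deployment/nginx nginx="$TAG" -n myns
fi
</code></pre>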
<p>Second, create a Cronjob using the first code to read the message and update the deployment.</p>
<blockquote>
<p>In my example, I've created a Docker image with the gcloud and kubectl commands to perform the tasks; you can find the code <a href="https://github.com/MrKoopaKiller/docker-gcloud-kubectl/blob/master/Dockerfile" rel="nofollow noreferrer">here</a>.</p>
</blockquote>
<p>But, to make it all work, you must grant the job's pods permission to execute "kubectl set image", and for this we need to configure the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> permissions.</p>
<blockquote>
<p>To create a "isolated" PoC, I will create every resources describe here in a new namespace named "myns". The RBAC will only work in this name space, because <code>Role</code> is namespace, if you what to use in all namespaces, change to <code>ClusterRole</code></p>
</blockquote>
<h2>1. PubSub Configuration</h2>
<p>First of all, you need to configure Container Registry to send messages to Pub/Sub. You can follow <a href="https://cloud.google.com/container-registry/docs/configuring-notifications" rel="nofollow noreferrer">this</a> guide.</p>
<blockquote>
<p>In this example I will use a nginx image to demonstrate.</p>
</blockquote>
<pre><code>gcloud pubsub topics create projects/[PROJECT-ID]/topics/gcr
</code></pre>
<p>From the system where Docker images are pushed or tagged run the following command:</p>
<pre><code>gcloud pubsub subscriptions create nginx --topic=gcr
</code></pre>
<h2>2. GKE Cluster</h2>
<p>GKE needs permission to access Pub/Sub, and it can be done only when a new cluster is created using the <code>--scope "https://www.googleapis.com/auth/pubsub"</code>. So I will create a new cluster to our example:</p>
<p><strong>Create a new cluster to our example:</strong></p>
<pre><code>gcloud container clusters create "my-cluster" --num-nodes "1" --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/cloud-platform","https://www.googleapis.com/auth/pubsub","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append"
</code></pre>
<p>More information about <code>scopes</code> <a href="https://adilsoncarvalho.com/changing-a-running-kubernetes-cluster-permissions-a-k-a-scopes-3e90a3b95636" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Getting the cluster credentials:</strong></p>
<pre><code>gcloud container clusters get-credentials my-cluster
</code></pre>
<h2>3. Configuring RBAC permissions</h2>
<p>As mentioned before all resources will be created in the namespace <code>myns</code>. So let's create the namespace:</p>
<p><code>kubectl create ns myns</code></p>
<p>After that we can create a new Service account called <code>sa-image-update</code> and apply the RBAC permissions:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: sa-image-update
namespace: myns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: myns
name: role-set-image
rules:
- apiGroups: ["apps", "extensions"]
resources: ["deployments"]
verbs: ["get", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rolebinding-set-image
namespace: myns
roleRef:
kind: Role
name: role-set-image
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: sa-image-update
namespace: myns
</code></pre>
<h2>4. Confimap</h2>
<p>To make it as easy as possible, I will create a configmap with the shell script file that will be mounted and executed by the pod:</p>
<pre><code># Download script
wget https://raw.githubusercontent.com/MrKoopaKiller/docker-gcloud-kubectl/master/imageUpdater.sh
# Create configmap
kubectl create configmap imageupdater -n myns --from-file imageUpdater.sh
</code></pre>
<h2>5. CronJob</h2>
<p>The shell script needs 3 variables to work:</p>
<ul>
<li><code>PROJECT-NAME</code>: the name of the gcloud project</li>
<li><code>DEPLOYMENT-NAME</code>: the name of the deployment that will be updated</li>
<li><code>IMAGE-NAME</code>: the name of the image to update, without the tag</li>
</ul>
<p>In this case, my example deployment will be called <code>nginx</code> and the image <code>nginx</code>.</p>
<p>The image is from the dockerfile I mentioned before, you can find <a href="https://github.com/MrKoopaKiller/docker-gcloud-kubectl/blob/master/Dockerfile" rel="nofollow noreferrer">here</a> and build it.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: image-updater
namespace: myns
spec:
schedule: "*/2 * * * *"
jobTemplate:
spec:
template:
spec:
serviceAccountName: sa-image-update
volumes:
- name: imageupdater
configMap:
name: imageupdater
containers:
- name: image-updater
image: <your_custom_image>
volumeMounts:
- name: imageupdater
mountPath: /bin/imageUpdater.sh
subPath: imageUpdater.sh
command: ['bash', '/bin/imageUpdater.sh', 'PROJECT-NAME', 'nginx', 'nginx']
restartPolicy: Never
</code></pre>
<p>OK, everything is done. Now we need to create a deployment as an example to demonstrate:</p>
<h2>Example deployment: nginx</h2>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: myns
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- name: http
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-svc
namespace: myns
spec:
type: LoadBalancer
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>OK, now when your GitLab pushes a new image to Container Registry, a message will be sent to Pub/Sub. The cronjob runs every 2 minutes, verifies whether the image name is <code>nginx</code>, and if so, runs <code>kubectl set image</code>.</p>
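<p>To watch it working end to end, a few commands are enough (names and namespace as defined above; the job name is a placeholder):</p>

<pre><code>kubectl get cronjob,jobs -n myns
kubectl logs -n myns job/<image-updater-job-name>
kubectl get deployment nginx -n myns -o jsonpath='{.spec.template.spec.containers[0].image}'
</code></pre>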
<h3>References:</h3>
<p><a href="https://medium.com/better-programming/k8s-tips-using-a-serviceaccount-801c433d0023" rel="nofollow noreferrer">https://medium.com/better-programming/k8s-tips-using-a-serviceaccount-801c433d0023</a></p>
<p><a href="https://cloud.google.com/solutions/integrating-microservices-with-pubsub#creating_a_gke_cluster" rel="nofollow noreferrer">https://cloud.google.com/solutions/integrating-microservices-with-pubsub#creating_a_gke_cluster</a></p>
| Mr.KoopaKiller |
<p>I want to create an nginx container that copies the content of /home/git/html on my local machine into /usr/share/nginx/html in the container. However, I cannot use volumes and mountPath, as my Kubernetes cluster has 2 nodes.
I decided instead to copy the content from my GitHub account. I then created this Dockerfile:</p>
<pre><code>FROM nginx
CMD ["apt", "get", "update"]
CMD ["apt", "get", "install", "git"]
CMD ["git", "clone", "https://github.com/Sonlis/kubernetes/html"]
CMD ["rm", "-r", "/usr/share/nginx/html"]
CMD ["cp", "-r", "html", "/usr/share/nginx/html"]
</code></pre>
<p>The Dockerfile builds correctly; however, when I apply a deployment with this image, the container keeps restarting. I know that once a container has done its job, it shuts down, and then the deployment restarts it, creating the loop. However, when applying a basic nginx image it works fine. What would be the solution? I saw solutions that run a process indefinitely to keep the container alive, but I do not think that is a suitable approach.</p>
<p>Thanks !</p>
| Baptise | <p>You need to use <code>RUN</code> to perform commands when building a Docker image, as mentioned in @tgogos' comment. See this <a href="https://stackoverflow.com/questions/37461868/difference-between-run-and-cmd-in-a-dockerfile">reference</a>.</p>
<p>You can try something like this:</p>
<pre><code>FROM nginx
RUN apt-get update && \
    apt-get install -y git
RUN rm -rf /usr/share/nginx/html && \
git clone https://github.com/Sonlis/kubernetes/html /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
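<p>To verify the image locally before using it in the deployment (the image tag is just an example):</p>

<pre><code>docker build -t my-nginx-site .
docker run --rm -p 8080:80 my-nginx-site
# then open http://localhost:8080 in your browser
</code></pre>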
<p>Also, I would recommend you take a look at <a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/" rel="nofollow noreferrer">this part of the documentation</a> on how to optimize your image by taking advantage of layer caching and multi-stage builds.</p>
| Mr.KoopaKiller |
<p>I have created an Ingress, Deployment and Service as follows: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-kubernetes-first
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
selector:
app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-kubernetes-first
spec:
replicas: 3
selector:
matchLabels:
app: hello-kubernetes-first
template:
metadata:
labels:
app: hello-kubernetes-first
spec:
containers:
- name: hello-kubernetes
image: paulbouwer/hello-kubernetes:1.8
ports:
- containerPort: 8080
env:
- name: MESSAGE
value: Hello from the first deployment!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: istio
name: helloworld-ingress
spec:
rules:
- host: "hw.service.databaker.io"
http:
paths:
- path: /
backend:
serviceName: hello-kubernetes-first
servicePort: 80
---
</code></pre>
<p>When I call <a href="https://hw.service.databaker.io/" rel="nofollow noreferrer">https://hw.service.databaker.io/</a>, it blocks: </p>
<p><a href="https://i.stack.imgur.com/QoIqr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QoIqr.png" alt="enter image description here"></a></p>
<p>CSS and PNG. What am I doing wrong? I am using Istio 1.52. </p>
<p>The log of one of three pods has following content:</p>
<pre><code>::ffff:127.0.0.1 - - [04/May/2020:10:25:06 +0000] "HEAD / HTTP/1.1" 200 680 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.67 Safari/537.36"
::ffff:127.0.0.1 - - [04/May/2020:10:33:33 +0000] "GET / HTTP/1.1" 200 680 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) snap Chromium/81.0.4044.129 Chrome/81.0.4044.129 Safari/537.36"
::ffff:127.0.0.1 - - [04/May/2020:10:34:19 +0000] "GET / HTTP/1.1" 200 680 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) snap Chromium/81.0.4044.129 Chrome/81.0.4044.129 Safari/537.36"
::ffff:127.0.0.1 - - [04/May/2020:10:34:20 +0000] "GET / HTTP/1.1" 200 680 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) snap Chromium/81.0.4044.129 Chrome/81.0.4044.129 Safari/537.36"
::ffff:127.0.0.1 - - [04/May/2020:10:34:21 +0000] "GET / HTTP/1.1" 200 680 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) snap Chromium/81.0.4044.129 Chrome/81.0.4044.129 Safari/537.36"
::ffff:127.0.0.1 - - [04/May/2020:10:34:22 +0000] "GET / HTTP/1.1" 200 680 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) snap Chromium/81.0.4044.129 Chrome/81.0.4044.129 Safari/537.36"
::ffff:127.0.0.1 - - [04/May/2020:10:36:24 +0000] "GET / HTTP/1.1" 200 680 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) snap Chromium/81.0.4044.129 Chrome/81.0.4044.129 Safari/537.36"
::ffff:127.0.0.1 - - [04/May/2020:10:36:25 +0000] "GET / HTTP/1.1" 200 680 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) snap Chromium/81.0.4044.129 Chrome/81.0.4044.129 Safari/537.36"
</code></pre>
| softshipper | <p>It's not accessible because you have to show istio the path to it.</p>
<hr />
<p>As @zero_coding mentioned in comments one way is to change the path</p>
<p>from</p>
<pre><code>http:
paths:
- path: /
backend:
serviceName: hello-kubernetes-first
servicePort: 80
</code></pre>
<p><strong>to</strong></p>
<pre><code>http:
paths:
- path: /*
backend:
serviceName: hello-kubernetes-first
servicePort: 80
</code></pre>
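<p>For reference, here is a minimal sketch of the full Ingress from the question with only that one change applied:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: helloworld-ingress
spec:
  rules:
    - host: "hw.service.databaker.io"
      http:
        paths:
          - path: /*
            backend:
              serviceName: hello-kubernetes-first
              servicePort: 80
</code></pre>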
<hr />
<p>Additionally, I would point to the <a href="https://rinormaloku.com/istio-practice-routing-virtualservices/" rel="nofollow noreferrer">Istio in practice</a> tutorial; it explains well a second way of dealing with this problem, which is to declare the additional paths explicitly.</p>
<blockquote>
<p>Let’s break down the requests that should be routed to Frontend:</p>
<p><strong>Exact path</strong> / should be routed to Frontend to get the Index.html</p>
<p><strong>Prefix path</strong> /static/* should be routed to Frontend to get any static files needed by the frontend, like <strong>Cascading Style Sheets</strong> and <strong>JavaScript files</strong>.</p>
<p><strong>Paths matching the regex ^.*\.(ico|png|jpg)$</strong> should be routed to Frontend, as these are images that the page needs to show.</p>
</blockquote>
<pre><code>http:
- match:
- uri:
exact: /
- uri:
exact: /callback
- uri:
prefix: /static
- uri:
regex: '^.*\.(ico|png|jpg)$'
route:
- destination:
host: frontend
port:
number: 80
</code></pre>
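<p>Since the snippet above only shows the <code>http</code> section, here is a rough sketch of how a full VirtualService wrapping it could look for the service in the question; the gateway name is an assumption, so replace it with your own:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-kubernetes-first
spec:
  hosts:
    - "hw.service.databaker.io"
  gateways:
    - my-gateway            # assumption - use the name of your Istio gateway
  http:
    - match:
        - uri:
            exact: /
        - uri:
            prefix: /static
        - uri:
            regex: '^.*\.(ico|png|jpg)$'
      route:
        - destination:
            host: hello-kubernetes-first
            port:
              number: 80
</code></pre>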
| Jakub |
<p>I am trying to use kompose convert on my docker-compose.yaml files; however, when I run the command: </p>
<pre><code>kompose convert -f docker-compose.yaml
</code></pre>
<p>I get the output: </p>
<pre><code>WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak" isn't supported - ignoring path on the host
</code></pre>
<p>It also prints similar warnings for the other persistent volumes.</p>
<p>My docker-compose file is: </p>
<pre><code>version: '3'
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.2.1
container_name: es01
environment:
[env]
ulimits:
nproc: 3000
nofile: 65536
memlock: -1
volumes:
- /home/centos/Sprint0Demo/Servers/elasticsearch:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- kafka_demo
zookeeper:
image: confluentinc/cp-zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
volumes:
- /home/centos/Sprint0Demo/Servers/zookeeper/zk-data:/var/lib/zookeeper/data
- /home/centos/Sprint0Demo/Servers/zookeeper/zk-txn-logs:/var/lib/zookeeper/log
networks:
kafka_demo:
kafka0:
image: confluentinc/cp-kafka
container_name: kafka0
environment:
[env]
volumes:
- /home/centos/Sprint0Demo/Servers/kafkaData:/var/lib/kafka/data
ports:
- "9092:9092"
depends_on:
- zookeeper
- es01
networks:
kafka_demo:
schema_registry:
image: confluentinc/cp-schema-registry:latest
container_name: schema_registry
environment:
[env]
ports:
- 8081:8081
networks:
- kafka_demo
depends_on:
- kafka0
- es01
elasticSearchConnector:
image: confluentinc/cp-kafka-connect:latest
container_name: elasticSearchConnector
environment:
[env]
volumes:
- /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect:/etc/kafka-connect
- /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch:/etc/kafka-elasticsearch
- /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak:/etc/kafka
ports:
- "28082:28082"
networks:
- kafka_demo
depends_on:
- kafka0
- es01
networks:
kafka_demo:
driver: bridge
</code></pre>
<p>Does anyone know how I can fix this issue? I suspect it has to do with the warning about the volume being a mount on the host rather than a regular volume. </p>
| James Ukilin | <p>I have done some research and there are three things to point out:</p>
<ol>
<li><p><code>kompose</code> does not support volume mounts on the host. You might consider using <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a> instead.</p></li>
<li><p>Kubernetes makes it difficult to pass in <code>host/root</code> volumes. You can try
<a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> instead:
<code>kompose convert --volumes hostPath</code> works for k8s (see the sketch after this list).</p></li>
<li>Also, you can check out <a href="https://github.com/docker/compose-on-kubernetes" rel="nofollow noreferrer">Compose on Kubernetes</a> if you'd like to run things on a single machine. </li>
</ol>
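<p>To make the hostPath option more concrete, here is a rough sketch of what the generated pod spec for <code>es01</code> could look like after running <code>kompose convert --volumes hostPath</code>; the volume name is illustrative and the exact output of kompose may differ:</p>
<pre><code># Sketch of the es01 pod spec when converting with hostPath volumes.
spec:
  containers:
    - name: es01
      image: docker.elastic.co/elasticsearch/elasticsearch:7.2.1
      volumeMounts:
        - name: es01-data          # illustrative volume name
          mountPath: /usr/share/elasticsearch/data
  volumes:
    - name: es01-data
      hostPath:
        # Same host directory as in the docker-compose file.
        path: /home/centos/Sprint0Demo/Servers/elasticsearch
        type: DirectoryOrCreate
      # Per point 1, an alternative that avoids the host dependency:
      # emptyDir: {}
</code></pre>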
<p>Please let me know if that helped. </p>
| Wytrzymały Wiktor |