<p>If you are managing an ingress service such as in the following example, instead of updating the ingress file below, is there a way of adding an additional host/service such as echo3.example.com without needing to apply an updated version of the original file?</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: echo-ingress spec: rules: - host: echo1.example.com http: paths: - backend: serviceName: echo1 servicePort: 80 - host: echo2.example.com http: paths: - backend: serviceName: echo2 servicePort: 80 </code></pre> <pre><code># NEW HOST/SERVICE apiVersion: extensions/v1beta1 kind: Ingress metadata: name: echo-ingress spec: rules: - host: echo3.example.com ### &lt;= `echo3` added http: paths: - backend: serviceName: echo3 servicePort: 80 </code></pre> <p>Is there a way of applying this new host without needing to extend the old file?</p>
<p>If you apply the two files, the second one will overwrite the first one as they have the same name. So, you would need to edit the original every time you add a new rule.</p> <p>One possible solution to avoid this problem would be to use <a href="https://github.com/heptio/contour/blob/master/docs/ingressroute.md" rel="nofollow noreferrer">Contour</a>. In that case you could keep each <code>IngressRoute</code> in a separate resource and avoid conflicts like that.</p> <p>In your case, you would have something like:</p> <pre><code># ingressroute-echo1.yaml apiVersion: contour.heptio.com/v1beta1 kind: IngressRoute metadata: name: echo-ingress-1 spec: virtualhost: fqdn: echo1.example.com routes: - match: / services: - name: echo1 port: 80 # ingressroute-echo2.yaml apiVersion: contour.heptio.com/v1beta1 kind: IngressRoute metadata: name: echo-ingress-2 spec: virtualhost: fqdn: echo2.example.com routes: - match: / services: - name: echo2 port: 80 # ingressroute-echo3.yaml apiVersion: contour.heptio.com/v1beta1 kind: IngressRoute metadata: name: echo-ingress-3 spec: virtualhost: fqdn: echo3.example.com routes: - match: / services: - name: echo3 port: 80 </code></pre>
<p>I'm trying to install Prometheus on my K8S cluster.</p> <p>When I run the command</p> <pre><code>kubectl get namespaces </code></pre> <p>I get the following namespaces:</p> <pre><code>default Active 26h kube-public Active 26h kube-system Active 26h monitoring Active 153m prod Active 5h49m </code></pre> <p>Now I want to install Prometheus via</p> <pre><code>helm install stable/prometheus --name prom -f k8s-values.yml </code></pre> <p>and I get the <strong>error:</strong></p> <blockquote> <p>Error: release prom-demo failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "default"</p> </blockquote> <p>Even if I switch to the <code>monitoring</code> namespace I get the same error.</p> <p>The k8s-values.yml looks like the following:</p> <pre><code>rbac: create: false server: name: server service: nodePort: 30002 type: NodePort </code></pre> <p>Any idea what could be missing here?</p>
<p>You are getting this error because you are using RBAC without giving the right permissions.<br /></p> <p><strong>Give the tiller permissions:</strong><br /> <em>taken from <a href="https://github.com/helm/helm/blob/master/docs/rbac.md" rel="nofollow noreferrer">https://github.com/helm/helm/blob/master/docs/rbac.md</a></em></p> <p>Example: Service account with cluster-admin role In rbac-config.yaml:</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: tiller namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tiller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: tiller namespace: kube-system </code></pre> <p><em>Note: The cluster-admin role is created by default in a Kubernetes cluster, so you don't have to define it explicitly.</em></p> <pre><code>$ kubectl create -f rbac-config.yaml serviceaccount "tiller" created clusterrolebinding "tiller" created $ helm init --service-account tiller </code></pre> <p><strong>Create a service account for prometheus:<br /></strong> Change the value of <code>rbac.create</code> to <code>true</code>:</p> <pre><code>rbac: create: true server: name: server service: nodePort: 30002 type: NodePort </code></pre>
<p>I'm working on a project using Helm-kubernetes and Azure Kubernetes Service, in which I'm trying to use a simple node image which I have pushed to Azure Container Registry inside my helm chart, but it returns an <code>ImagePullBackOff</code> error.</p> <p>Here are some details:</p> <p><strong>My <code>Dockerfile</code>:</strong></p> <pre><code>FROM node:8 # Create app directory WORKDIR /usr/src/app COPY package*.json ./ RUN npm install # Bundle app source COPY . . EXPOSE 32000 CMD [ "npm", "start" ] </code></pre> <p><strong>My <code>helm_chart/values.yaml</code>:</strong></p> <pre><code>replicaCount: 1 image: registry: helmcr.azurecr.io repository: helloworldtest tag: 0.7 pullPolicy: IfNotPresent nameOverride: "" fullnameOverride: "" service: name: http type: LoadBalancer port: 32000 internalPort: 32000 ingress: enabled: false annotations: {} # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: "true" paths: [] hosts: - name: mychart.local path: / tls: [] resources: {} nodeSelector: {} tolerations: [] affinity: {} </code></pre> <p>When I try to pull the image directly using the command below: <code>docker pull helmcr.azurecr.io/helloworldtest:0.7</code> then it pulls the image successfully.</p> <p>What can be wrong here?</p> <p>Thanks in advance!</p>
<p>Your Kubernetes cluster needs to be authenticated to the container registry to pull images; generally this is done with a docker-registry secret:</p> <pre><code>kubectl create secret docker-registry regcred --docker-server=&lt;your-registry-server&gt; --docker-username=&lt;your-name&gt; --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt; </code></pre> <p>If you are using AKS, you can instead grant the cluster's application ID (service principal) pull rights on the registry; that alone is enough.</p> <p>Reading: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
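<p>Once the secret exists, the pod spec (or the deployment template rendered by your chart) has to reference it via <code>imagePullSecrets</code>. A minimal sketch, assuming the secret is called <code>regcred</code> as in the command above (the deployment name and labels are placeholders, not taken from your chart):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworldtest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworldtest
  template:
    metadata:
      labels:
        app: helloworldtest
    spec:
      # tell the kubelet which secret to use when pulling from the private registry
      imagePullSecrets:
      - name: regcred
      containers:
      - name: helloworldtest
        image: helmcr.azurecr.io/helloworldtest:0.7
        ports:
        - containerPort: 32000
</code></pre> <p>Many charts expose this through a <code>values.yaml</code> entry such as <code>imagePullSecrets</code>, so check your chart's deployment template before hard-coding it.</p>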
<p>I am attempting to install Kubernetes on my local Windows 10 machine via Hyper-V. I am getting a few errors that I am unsure how to resolve. I've tried running Kubernetes v1.8.0 as well but received the same errors.</p> <p>Here's what I input into PowerShell:</p> <p>minikube start --kubernetes-version="v1.10.11" --vm-driver="hyperv" --hyperv-virtual-switch="Minikube"</p> <p>Here's what was returned:</p> <p>Getting VM IP address... Moving files into cluster... Stopping extra container runtimes... Machine exists, restarting cluster components... E0201 20:22:12.487856 13792 start.go:382] Error restarting cluster: running cmd: sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml: command failed: sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml stdout: [certificates] Using the existing ca certificate and key.</p> <p>stderr: failure loading apiserver certificate: the certificate is not valid yet : Process exited with status 1 minikube failed :( exiting with error code 1</p>
<p>It might happen due to your old minikube cache. What I can advise is to delete your minikube installation along with the minikube cache, and start from scratch.</p> <p>1) Delete minikube: <code>minikube delete</code></p> <p>2) Clean up the <code>cache</code> or <code>.minikube</code> folders:</p> <pre><code>cd C:\Users\user_name\.minikube\ Remove-Item -path C:\Users\user_name\.minikube\cache\* -recurse </code></pre> <p>The above will clean up the existing minikube cache. For me, after the test it was:</p> <pre><code> Directory: C:\Users\User\.minikube\cache Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- Mon 04.02.19 1:55 PM iso d----- Mon 04.02.19 2:25 PM v1.10.11 d----- Mon 04.02.19 2:40 PM v1.8.0 </code></pre> <p>Or alternatively you can try to remove the whole .minikube folder:</p> <pre><code>Remove-Item -path C:\Users\user_name\.minikube\* -recurse -force </code></pre> <p>3) Start minikube. For me it started to work after the above steps.</p> <pre><code>PS C:\Windows\system32&gt; minikube start --kubernetes-version="v1.10.11" --vm-driver="hyperv" --hyperv-virtual-switch="Minikube" Starting local Kubernetes v1.10.11 cluster... Starting VM... Downloading Minikube ISO 181.48 MB / 181.48 MB [============================================] 100.00% 0s Getting VM IP address... Moving files into cluster... Downloading kubeadm v1.10.11 Downloading kubelet v1.10.11 Finished Downloading kubeadm v1.10.11 Finished Downloading kubelet v1.10.11 Setting up certs... Connecting to cluster... Setting up kubeconfig... Stopping extra container runtimes... Starting cluster components... Verifying kubelet health ... Verifying apiserver health ... Kubectl is now configured to use the cluster. Loading cached images from config file. Everything looks great. Please enjoy minikube! </code></pre>
<p>I'd like to use Cloudflare's <a href="https://1.1.1.1/" rel="noreferrer">1.1.1.1 and 1.0.0.1</a> nameservers in Kubernetes, alongside <a href="https://developers.cloudflare.com/1.1.1.1/dns-over-tls/" rel="noreferrer">DNS over TLS</a>. It <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="noreferrer">looks like</a> I can do it using <a href="https://github.com/coredns/coredns/issues/1650" rel="noreferrer">core-dns</a>. I need to setup the following somehow:</p> <ul> <li>IPv4: <code>1.1.1.1</code> and <code>1.0.0.1</code></li> <li>IPv6: <code>2606:4700:4700::1111</code> and <code>2606:4700:4700::1001</code></li> <li>TLS Server Name: <code>tls.cloudflare-dns.com</code></li> </ul> <p>What should my <code>ConfigMap</code> look like? I've started it below:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: upstreamNameservers: | ["1.1.1.1", "1.0.0.1"] </code></pre>
<p>You can configure CoreDNS with <code>kubectl -n kube-system edit configmap coredns</code> and add the following to the end of the Corefile:</p> <pre><code>. { forward . tls://1.1.1.1 tls://1.0.0.1 { tls_servername cloudflare-dns.com health_check 5s } cache 30 } </code></pre> <p>Then save the new configuration and restart the CoreDNS pods:</p> <pre><code>kubectl get pod -n kube-system | grep coredns | cut -d " " -f1 | xargs -n1 -P 10 kubectl delete pod -n kube-system </code></pre>
<p>In Openshift/Kubernetes, I want to test how my application (pod) that consists of 2 containers performs on machines with different numbers of cores. The machine I have at hand has 32 cores, but I'd like to limit those to 4, 8, 16...</p> <p>One way is using resource limits on the <strong>containers</strong>, but that would force me to set the ratio on each container; instead, I want to set resource limits for the whole pod and let the containers compete on CPU. My feeling is that this should be possible, as the containers could belong to the same cgroup and therefore share the limits from the scheduler POV.</p> <p>Would a <code>LimitRange</code> on the pod do what I am looking for? LimitRange is project/namespace-scoped; is there a way to achieve the same with finer granularity (just for certain pods)?</p>
<p>As per the <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container" rel="nofollow noreferrer">documentation</a>: resource constraints are only applicable at the container level. You can, however, define different <code>requests</code> and <code>limits</code> to allow a container to burst beyond the amount defined in its requests. But this comes with other implications; see <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md" rel="nofollow noreferrer">Quality of Service</a>.</p> <p>The reason for this is that some resources, such as memory, cannot be competed for the way CPU can. Memory is either sufficient or it is not. There is no such thing in Kubernetes as shared RAM (unless you explicitly use the relevant system calls).</p> <p>May I ask what the use case for pod-internal CPU competition is?</p>
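<p>For illustration, a minimal sketch of per-container settings where <code>requests</code> is what the scheduler reserves and <code>limits</code> is the ceiling each container may burst to (container names and values are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: "1"      # guaranteed share used for scheduling
      limits:
        cpu: "4"      # may burst up to 4 cores if the node has them free
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: "500m"
      limits:
        cpu: "2"
</code></pre> <p>Note that this still sets the ratio per container; Kubernetes (as of this writing) has no pod-level CPU limit that the containers compete within.</p>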
<p>I have a Kubernetes cluster running. I used:</p> <pre><code>kubeadm init --apiserver-advertise-address=192.168.20.101 --pod-network-cidr=10.244.0.0/16 </code></pre> <p>This is working okay. Now I'm putting this in a script, and I only want to execute kubeadm init again if my cluster is not running fine. How can I check whether a Kubernetes cluster is running fine, so that if it is not, I can recreate the cluster?</p>
<p>You can use the following command to do that:</p> <pre><code>[root@ip-10-0-1-19]# kubectl cluster-info Kubernetes master is running at https://10.0.1.197:6443 KubeDNS is running at https://10.0.1.197:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy </code></pre> <p>It shows that your master is running fine at a particular URL.</p>
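<p>Since <code>kubectl cluster-info</code> exits non-zero when it cannot reach the cluster, a rough sketch for your script could look like this (the re-init command is copied from your question):</p> <pre><code>#!/bin/bash
# Re-initialise the cluster only if the API server is unreachable
if ! kubectl cluster-info &gt; /dev/null 2&gt;&amp;1; then
  echo "Cluster not reachable, running kubeadm init..."
  kubeadm init --apiserver-advertise-address=192.168.20.101 --pod-network-cidr=10.244.0.0/16
fi
</code></pre> <p>For a finer-grained check you could also inspect the output of <code>kubectl get nodes</code> or <code>kubectl get cs</code> before deciding to re-create anything.</p>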
<p>I have a container which exposes multiple ports. So, the kubernetes service configured for the deployment looks like the following:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: myapp labels: app: myapp spec: selector: name: myapp ports: - protocol: TCP port: 5555 targetPort: 5555 - protocol: TCP port: 5556 targetPort: 5556 </code></pre> <p>I use Istio to manage routing and to expose this service via the Istio ingress gateway. We have one gateway for port 80; do we have to create two different gateways for the same host with two different virtual services?</p> <p>I want to configure "example.myhost.com" port 80 to route to 5556, and some other port, let's say "example.myhost.com" port 8088, to route to 5555 of the service.</p> <p>Is that possible with one virtualservice?</p>
<p>Assuming that Istio <a href="https://istio.io/docs/reference/config/istio.networking.v1alpha3/#Gateway" rel="noreferrer">Gateway</a> is serving TCP network connections, you might be able to combine one <code>Gateway</code> configuration for two external ports 80 and 5556:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: myapp-gateway spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 80 name: port1 protocol: TCP hosts: - example.myhost.com - port: number: 8088 name: port2 protocol: TCP hosts: - example.myhost.com </code></pre> <p>Field <code>hosts</code> identifies here a list of target addresses that have to be exposed by this <code>Gateway</code>.</p> <p>In order to make appropriate network routing to the nested Pods, you can specify <a href="https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService" rel="noreferrer">VirtualService</a> with the matching set for the ports:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: myapp-virtual-service spec: hosts: - example.myhost.com gateways: - myapp-gateway tcp: - match: - port: 80 route: - destination: host: myapp.prod.svc.cluster.local port: number: 5556 - match: - port: 8088 route: - destination: host: myapp.prod.svc.cluster.local port: number: 5555 </code></pre> <p>Above <code>VirtualService</code> defines the rules to route network traffic coming on 80 and 8088 ports for <code>example.myhost.com</code> to the <code>myapp</code> service ports 5556, 5555 respectively.</p> <p>I encourage you to get more information about Istio <a href="https://istio.io/docs/reference/config/istio.networking.v1alpha3/#TCPRoute" rel="noreferrer">TCPRoute</a> capabilities and further appliance.</p>
<p>When I only add the TLS secret to the Ingress, Traefik serves its default certificate.</p> <pre><code>kind: Ingress spec: rules: .... tls: - secretName: ingress-mgt-server-keys </code></pre> <p>Only when I mount the secret and add the parameter below does Traefik start serving the real certificate.</p> <blockquote> <p>entryPoints.https.tls.certificates</p> </blockquote> <p><strong>Are TLS secrets to be defined in both the Ingress and the Ingress controller</strong>? This forces me to repeat the keys as secrets in all the ingress namespaces as well as the ingress-controller namespace.</p> <p>[Update]: Traefik has RBAC to read secrets from the target namespace.</p>
<p>Hmm, that does not seem to be the case.</p> <p><a href="https://docs.traefik.io/user-guide/kubernetes/#add-a-tls-certificate-to-the-ingress" rel="nofollow noreferrer">https://docs.traefik.io/user-guide/kubernetes/#add-a-tls-certificate-to-the-ingress</a></p> <p><code>In addition to the modified ingress you need to provide the TLS certificate via a Kubernetes secret in the same namespace as the ingress.</code></p>
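<p>So rather than mounting the certificate into the controller, you can create the TLS secret in every namespace that contains an Ingress referencing it, for example (the namespace name is a placeholder):</p> <pre><code># create the secret next to the Ingress that references it
kubectl -n my-app-namespace create secret tls ingress-mgt-server-keys \
  --cert=tls.crt --key=tls.key
</code></pre>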
<p>I would like to deploy my projects (REST API, webapp, backoffice etc.) to several environments (dev, staging and production) with gitlab-ci, Kubernetes and Google Kubernetes Engine.</p> <p>All projects are in separate gitlab repositories.</p> <p>I have 2 branches on gitlab: master and develop. Master must be deployed to the staging and production environments; develop must be deployed to the dev environment.</p> <p>I have read multiple tutorials and blogs over the last 2 days but I haven't found the best approach to manage several environments with Kubernetes.</p> <p>Do I have to create 3 clusters in GKE (dev, staging, production), or should I create just 1 cluster and use namespaces to manage my environments? Or is there another solution?</p> <p>1 / Create 3 clusters: in gitlab-ci (free account), how can I deploy the dev branch to the dev cluster and the master branch to the staging/production cluster? What would the config look like? Can I set a specific cluster in gitlab-ci.yml? How? (kubectl config use-context?)</p> <p>2 / Create 1 cluster and 3 namespaces (dev, staging, production): I don't think that is the best approach for security and performance reasons, right?</p> <p>Thanks</p>
<p>Q. Do I have to create 3 clusters in GKE (dev, staging, production), or should I create just 1 cluster and use namespaces to manage my environments? Or is there another solution?</p> <p>Ans: Use one cluster and create namespaces named dev, staging and production; it will be easier to manage.</p>
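<p>As a rough sketch (the names are just examples), the namespaces can be created once and then targeted per environment from your <code>gitlab-ci.yml</code> jobs:</p> <pre><code># one-time setup
kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace production

# deploy the develop branch to dev, and master to staging/production
kubectl apply -f k8s/ -n dev
kubectl apply -f k8s/ -n staging
kubectl apply -f k8s/ -n production
</code></pre> <p>Keep in mind that namespaces only give logical separation; if you need strong isolation for production you may still prefer a separate cluster.</p>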
<p>Given a multi-master clustered database running under Kubernetes, what happens when a master node goes down and comes back up again?</p> <ul> <li>Is it possible to configure Kubernetes to retain the same IP address across disconnects (node goes down and comes back up)?</li> <li>If the node comes back with a different IP address, are multi-master database clusters designed to allow master nodes to change their IP addresses on the fly?</li> </ul> <p>The goal is to get this working without any downtime.</p>
<blockquote> <p>Is it possible to configure Kubernetes to retain the same IP address across disconnects (node goes down and comes back up)?</p> </blockquote> <p>Yes. The general idea is that you have to use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> to preserve names/IPs, although it's more a standard practice to use names (DNS) instead of IPs.</p> <p>One example is <a href="http://cassandra.apache.org/" rel="nofollow noreferrer">Cassandra</a> and <a href="https://kubernetes.io/docs/tutorials/stateful-application/cassandra/" rel="nofollow noreferrer">this is an example</a> on how to deploy a cluster on K8s.</p> <blockquote> <p>If the node comes back with a different IP address, are multi-master database clusters designed to allow master nodes to change their IP addresses on the fly?</p> </blockquote> <p>This really depends on your configuration, if you hard-code IP addresses in the config then if there's a change of IP address then the master will not be able to join the cluster. If you use names (DNS) as configuration then it's more likely that the master will rejoin the cluster. Again, this really depends on the specific database that you are using (together with the database capabilities).</p>
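<p>As an illustration only (names and image are placeholders), a headless Service plus a StatefulSet gives every member a stable DNS name of the form <code>pod-name.service-name.namespace.svc.cluster.local</code>, which database nodes can use instead of IPs:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None        # headless: per-pod DNS records instead of a virtual IP
  selector:
    app: db
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:11   # placeholder image
        ports:
        - containerPort: 5432
</code></pre> <p>Here <code>db-0.db-headless.default.svc.cluster.local</code> always resolves to whatever IP that member currently has, so the cluster configuration can survive IP changes.</p>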
<p>I have a problem: I can't find how to change the check parameters that control when pods are moved to another node after k8s detects that a node is down.</p> <p>I found the parameter <code>--sync-synchrionizes</code> but I'm not sure it is the right one.</p> <p>Does someone know how to do it?</p>
<p>You need to change the kube-controller-manager manifest and update the following parameters (you can find the file at <code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code>):</p> <pre><code>node-status-update-frequency: 10s node-monitor-period: 5s node-monitor-grace-period: 40s pod-eviction-timeout: 30s </code></pre> <p>This is what happens when a node dies or goes into offline mode:</p> <ol> <li>The kubelet posts its status to the masters every <code>--node-status-update-frequency=10s</code>.</li> <li>The node goes offline.</li> <li>kube-controller-manager monitors all the nodes every <code>--node-monitor-period=5s</code>.</li> <li>kube-controller-manager will see the node is unresponsive and allows the grace period <code>--node-monitor-grace-period=40s</code> before it considers the node unhealthy. PS: This parameter should be N x node-status-update-frequency.</li> <li>Once the node is marked unhealthy, the <code>kube-controller-manager</code> will remove its pods based on <code>--pod-eviction-timeout=5m</code> (the default).</li> </ol> <p>Now, if you tweak the parameter <code>pod-eviction-timeout</code> to, say, 30 seconds, it will still take about 70 seconds in total to evict the pod from the node, because the node-status-update-frequency and node-monitor-grace-period are counted towards the total eviction time as well. You can tweak these variables too, to further lower your total node eviction time.</p>
<p>Currently, I have the following architecture in kubernetes: </p> <ul> <li>In a pod, a service and a sidecar container (called <code>logger</code>) are running. </li> <li>The service writes to a file; the sidecar container reads that file and writes it to stdout.</li> <li>A fluentd daemonset is configured to read the output (which is collected in a file in <code>/var/log/containers/*_logger-*.log</code>; this is a symbolic link to another file, namely the latest file since the last rotation; no link points to the older files).</li> <li>3 log messages always belong together (they share some fields).</li> </ul> <p>This configuration works as expected for thousands of messages.</p> <p>However, here is the problem:</p> <p>I noticed that fluentd sometimes only forwards log message 1 or 2 of the 3 messages that belong together, although all 3 messages are written by the service and the sidecar container. </p> <p>For the explanation, assume 1 is forwarded and 2 and 3 are not. After some research, I found out that in such cases, message 1 is the last message before the log rotates, and messages 2 and 3 are in another file (the one the symbolic link points to since the rotation, and which therefore should be read).</p> <p>Therefore, it looks like fluentd skips some lines before it continues reading the new file after the kubernetes log rotation.</p> <ul> <li>Is this a known problem?</li> <li>Why are fluentd and kubernetes behaving like this?</li> <li>And the main question: <strong>What can I do to prevent this behavior, in order to receive all log messages?</strong></li> </ul> <p>I am using the docker image <code>fluent/fluentd-kubernetes-daemonset:v0.12.33-elasticsearch</code></p> <p>If more information is required, please let me know.</p>
<p><strong>TLDR</strong>:</p> <p>In theory this should work with the latest version of <code>fluentd-kubernetes-daemonset</code>. If it's not, the default value of <code>rotate_wait</code> will probably need to be overridden for the <code>in_tail_container_logs</code> configuration because of timing issues.</p> <p>To do so you'll need to create a custom docker image that overwrites the <code>kubernetes.conf</code> file, or use a config map with your custom config, mount it in the container and set <code>FLUENT_CONF</code> to the main config file in the mounted directory.</p> <p><strong>Explanation</strong>:</p> <p>The docker process is reading from both stdout and stderr of a container. While flushing the streams to the logfile it will also keep track of the set limits. When a limit has been reached it will start the log rotation.</p> <p>At the same time fluentd is watching the symlink. When the symlink changes, fluentd's file watcher will get triggered to update its internal pointer to the actual log file and reset the position in the pos file, because the newly created log file is empty.</p> <p>Using the config parameter <code>rotate_wait</code> we're telling fluentd to wait for the set amount of seconds (defaults to 5) so the last log lines that have been flushed to the file (or are soon to be) can be picked up before we continue with the newly created log file. This will also make sure that the log lines are processed in the correct order.</p>
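<p>For reference, a minimal sketch of what overriding the container tail input could look like in a custom <code>kubernetes.conf</code> for the v0.12 image (paths and the value are examples, not taken from your setup):</p> <pre><code>&lt;source&gt;
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  format json
  read_from_head true
  rotate_wait 30      # keep reading the rotated file for 30s instead of the default 5s
&lt;/source&gt;
</code></pre>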
<p>Why do pod names have 5 random alphanumeric characters appended to their name when created through a kubernetes deployment? Is it possible to get rid of them so that the pods names don't change? I am frequently deleting and creating deployments and would prefer that pod names don't change.</p> <p>Update: I would like to have the same name because I am constantly deleting/recreating the same deployment and if the name doesn't change, then I can quickly reuse old commands to exec into/see the logs of the containers.</p> <p><a href="https://i.stack.imgur.com/wvi32.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/wvi32.jpg" alt=""></a></p>
<p><strong>Reason for having random alphanumeric characters in pod names:</strong> </p> <p>When we create a <code>deployment</code>, it will <strong>not directly create pods</strong> (to match the replica count). </p> <ul> <li>It will create a <strong>replicaset</strong> (with name = deployment_name + a 10-character alphanumeric suffix). But <strong>why</strong> the extra alphanumeric suffix? When we <strong>upgrade</strong> a deployment, a <strong>new</strong> replicaset is created with a new suffix and the old one is kept as it is. This old replicaset is used for <strong>rollbacks</strong>.</li> <li>The created replicaset will create <strong>pods</strong> (with name = replicaset_name + a 5-character alphanumeric suffix). But <strong>why</strong> the extra alphanumeric suffix? We cannot have two pods with the same name.</li> </ul> <p>If your use case is to reuse old commands frequently, then going for a <code>Statefulset</code> is <strong>not a good solution</strong>. Statefulsets are <code>heavyweight</code> (ordered deployment, ordered termination, unique network names) and they are specially designed to preserve state across restarts (in combination with persistent volumes).</p> <p>There are a few <strong>tools</strong> which you can use:</p> <ol> <li><a href="https://github.com/wercker/stern" rel="noreferrer">stern</a></li> <li><a href="https://github.com/arunvelsriram/kube-fzf/" rel="noreferrer">kube-fzf</a></li> </ol> <p><strong>Lightweight solution to your problem:</strong></p> <ol> <li><p>You can use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">labels</a> to get the same pod across deployments:</p> <pre><code>kubectl get pods -l app=my_app,app_type=server NAME READY STATUS RESTARTS AGE my-app-5b7644f7f6-4hb8s 1/1 Running 0 22h my-app-5b7644f7f6-72ssz 1/1 Running 0 22h </code></pre></li> </ol> <p>After this we can use some bash magic to get what we want, like below.</p> <p><strong>Final command:</strong></p> <pre><code>kubectl get pods -l app=my_app,app_type=server -o name | rg "pod/" -r "" | head -n 1 | awk '{print "kubectl logs " $0}' | bash </code></pre> <p><strong>Explanation:</strong></p> <ol> <li><p>get the list of pod names</p> <pre><code>kubectl get pods -l app=my_app,app_type=server -o name pod/my-app-5b7644f7f6-4hb8s pod/my-app-5b7644f7f6-72ssz </code></pre></li> <li><p>replace pod/ using <code>ripgrep</code> or <code>sed</code> (rg "pod/" -r "")</p></li> <li>take only one pod using <code>head -n 1</code></li> <li>use awk to print the exec/logs command</li> <li>pipe it to bash to execute</li> </ol>
<p>My Bash script using <code>kubectl create/apply -f ...</code> to deploy lots of Kubernetes resources has grown too large for Bash. I'm converting it to Python using the PyPI <code>kubernetes</code> package.</p> <p>Is there a generic way to create resources given the YAML manifest? Otherwise, the only way I can see to do it would be to create and maintain a mapping from Kind to API method <code>create_namespaced_&lt;kind&gt;</code>. That seems tedious and error prone to me.</p> <p>Update: I'm deploying many (10-20) resources to many (10+) GKE clusters.</p>
<p>I have written the following piece of code to achieve the functionality of creating k8s resources from a json/yaml file:</p> <pre><code>import re
import yaml

from kubernetes import client


def create_from_yaml(yaml_file):
    """Create a Kubernetes resource from a yaml/json manifest file."""
    # load the manifest into a dict
    with open(yaml_file) as f:
        yaml_object = yaml.safe_load(f)

    # derive the API group and version, e.g. "apps/v1" gives ("apps", "v1")
    group, _, version = yaml_object["apiVersion"].partition("/")
    if version == "":
        version = group
        group = "core"
    # strip the ".k8s.io" suffix used by some groups
    group = "".join(group.rsplit(".k8s.io", 1))

    # e.g. group="apps", version="v1" maps to client.AppsV1Api
    func_to_call = "{0}{1}Api".format(group.capitalize(), version.capitalize())
    k8s_api = getattr(client, func_to_call)()

    # convert the kind to snake_case, e.g. "StatefulSet" becomes "stateful_set"
    kind = yaml_object["kind"]
    kind = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', kind)
    kind = re.sub('([a-z0-9])([A-Z])', r'\1_\2', kind).lower()

    if "namespace" in yaml_object["metadata"]:
        namespace = yaml_object["metadata"]["namespace"]
    else:
        namespace = "default"

    # call the namespaced create method if it exists, otherwise the cluster-scoped one
    if hasattr(k8s_api, "create_namespaced_{0}".format(kind)):
        resp = getattr(k8s_api, "create_namespaced_{0}".format(kind))(
            body=yaml_object, namespace=namespace)
    else:
        resp = getattr(k8s_api, "create_{0}".format(kind))(
            body=yaml_object)

    print("{0} created. status='{1}'".format(kind, str(resp.status)))
    return k8s_api
</code></pre> <p>If you provide the above function with any object's yaml/json file, it will automatically pick up the API type and object type and create the object (StatefulSet, Deployment, Service, etc.).</p> <p>PS: The above code doesn't handle multiple Kubernetes resources in one file, so you should have only one object per yaml file.</p>
<p>We need a Kubernetes service that brings up an AWS load balancer that supports WebSockets, i.e. not the Classic LB. Support for the AWS NLB is in alpha state, but it seems to work well.</p> <p>The issue we have is with setting the listener to be TLS and not TCP and attaching the ACM SSL certificate correctly - something that works well with the Classic LB.</p> <p>The annotations we have in the <code>service.yml</code> are:</p> <pre><code> service.beta.kubernetes.io/aws-load-balancer-ssl-cert: 'arn:aws:acm:{{ .Values.certificate.region }}:{{ .Values.certificate.aws_user_id }}:certificate/{{ .Values.certificate.id }}' service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01" service.beta.kubernetes.io/aws-load-balancer-type: "nlb" </code></pre> <p>The result is:</p> <pre><code>| Listener ID | Security Policy | SSL Certificate | Default Action | | --- | --- | --- | --- | | TCP: 443 | N/A | N/A | Forward to: k8s| </code></pre> <p>Expected:</p> <pre><code>| Listener ID | Security Policy | SSL Certificate | Default Action | | --- | --- | --- | --- | | TLS: 443 | ELBSecurityPol..| f456ac87d0ed99..| Forward to: k8s| </code></pre>
<p>You can use the NGINX ingress controller on Kubernetes; it also creates a load balancer indirectly, and handling certificate renewal with cert-manager becomes easy.</p> <p>So an Ingress with cert-manager is a good option for SSL/TLS certificates on Kubernetes.</p> <blockquote> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p> </blockquote> <p>For a more detailed tutorial, check out this link:</p> <blockquote> <p><a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p> </blockquote>
<p>I created a Kubernetes cluster using the ansible-playbook command below:</p> <pre><code>ansible-playbook kubectl.yaml --extra-vars "kubernetes_api_endpoint=&lt;Path to aws load balancer server&gt;" </code></pre> <p>Now I have deleted the cluster using the command</p> <pre><code>kubectl config delete-cluster &lt;Name of cluster&gt; </code></pre> <p>But the EC2 nodes are still running; I tried to manually stop them but they start again automatically (expected, because they are running in a cluster).</p> <p>Is there any way by which I can detach the nodes from the cluster or delete the cluster in total?</p> <p><code>kubectl config view</code> shows the output below:</p> <blockquote> <p>apiVersion: v1 clusters: [] contexts: - context: cluster: "" user: "" name: default-context current-context: default-context kind: Config preferences: {} users: - name: cc3.k8s.local user: token: cc3.k8s.local</p> </blockquote> <p>This means there is no cluster. I want to delete the cluster in total and start fresh.</p>
<p>I just ran into this same problem. You need to delete the autoscaling group that spawns the worker nodes, which for some reason isn't deleted when you delete the EKS cluster.</p> <p>Open the AWS console (console.aws.amazon.com), navigate to the EC2 dashboard, then scroll down the left pane to "Auto Scaling Groups". Deleting the autoscaling group should stop the worker nodes from endlessly spawning. You may also want to click on "Launch Configurations" and delete the template as well.</p> <p>HTH!</p>
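<p>If you prefer doing it from the CLI instead of the console, a rough equivalent would be something like this (the group name below is a placeholder; list the groups first to find yours):</p> <pre><code># find the worker-node autoscaling group
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[].AutoScalingGroupName"

# delete it together with its instances
aws autoscaling delete-auto-scaling-group \
  --auto-scaling-group-name my-eks-worker-nodes-asg \
  --force-delete
</code></pre>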
<p>I have a service that generates a picture. Once it's ready, the user will be able to download it.</p> <p>What is the recommended way to share a storage volume between a worker pod and a backend service?</p>
<p>In general the recommended way is "don't". While a few volume providers support multi-mounting, it's very hard to do that in a way that isn't sadmaking. Preferably use an external services like AWS S3 for hosting the actual file content and store references in your existing database(s). If you need a local equivalent, check out Minio for simple cases.</p>
<p>I am using kops to create / manage kubernetes clusters in AWS.</p> <p>I have created a pilot / intermediate instance to access all the clusters.</p> <p>I have noticed that even if we create multiple SSH sessions (using the same user), if I change the context to cluster-a, it gets changed to cluster-a in the other sessions too.</p> <p>The problem is that we need to switch context every time we want to manage different clusters simultaneously. It is very hard to keep track of context switching if more than two people are using that instance.</p> <p>The question may arise why we are using multiple clusters; the thing is that there are multiple streams and modules being developed in parallel and they all go to testing at the same time.</p> <p>Is there any way where I don't have to switch context and <code>kops</code>/<code>kubectl</code> can understand the cluster context automatically?</p> <p>Example: If I am executing a command from <code>directory-a</code> then it automatically understands the cluster is <code>a.k8s.local</code>. That is just one idea; any other solution is welcome.</p> <p>The last resort is to create separate pilot instances for all the clusters, which I am trying to avoid as those instances don't provide much value and just increase cost.</p>
<p>I am using exactly the solution you are searching for: I can manage a specific cluster when I am located in a specific directory.</p> <p>First of all, let me explain why you cannot work on multiple clusters at the same time even in different SSH sessions.</p> <p>When you do a <code>kubectl config use-context</code> to switch the current context, you are actually modifying <code>current-context: your-context</code> in the <code>~/.kube/config</code>. So if one of your team members switches the context, that also applies to your other team members, especially if they connect as the same user.</p> <p>Now, the following steps can help you work around this issue:</p> <ul> <li>Install <a href="https://direnv.net/" rel="nofollow noreferrer">direnv</a>. This tool will allow you to set custom env vars when you are located in a directory.</li> <li><p>Next to the kubeconfig files, create a <code>.envrc</code> file:</p> <pre><code>path_add KUBECONFIG kubeconfig </code></pre></li> <li>Run <code>direnv allow</code></li> <li>Check the content of the <code>KUBECONFIG</code> env var (<code>echo $KUBECONFIG</code>). It should look like <code>/path/to/dir-a/kubeconfig:/home/user/.kube/config</code></li> <li>Split your current <code>~/.kube/config</code> into multiple <code>kubeconfig</code> files located in different folders: <code>dir-a/kubeconfig</code>, <code>dir-b/kubeconfig</code> and so on. You can also go into dir-a and do a <code>kops export kubecfg your-cluster-name</code>.</li> <li>Check the current context with <code>kubectl config view --minify</code></li> <li>Go to <code>dir-b</code> and repeat from step 2</li> </ul> <p>You could also set up other env vars in the <code>.envrc</code> that could help you manage these different clusters (maybe a different kops state store).</p>
<p>I have a k8s cluster deployed in AWS's EKS and I want to change horizontal-pod-autoscaler-sync-period from the default 30s value.</p> <p>How can I change this flag?</p>
<p>Unfortunately you are not able to do this on GKE, EKS and other managed clusters.</p> <p>In order to change/add flags of the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">kube-controller-manager</a>, you need access to the <code>/etc/kubernetes/manifests/</code> dir on the master node and the ability to modify parameters in <code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code>.</p> <p>GKE, EKS and similar clusters are managed only by their providers, and they do not give you access to the master nodes.</p> <p>Similar questions:</p> <p>1) <a href="https://stackoverflow.com/questions/47563575/horizontal-autoscaler-in-a-gke-cluster">horizontal-autoscaler-in-a-gke-cluster</a></p> <p>2) <a href="https://stackoverflow.com/questions/46317275/change-the-horizontal-pod-autoscaler-sync-period-with-gke">change-the-horizontal-pod-autoscaler-sync-period-with-gke</a></p> <p>As a workaround you can <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">create a cluster using kubeadm init</a> and configure/change it in any way you want.</p>
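<p>For completeness, on a self-managed (e.g. kubeadm) cluster the change is just an extra flag in the controller-manager static pod manifest, roughly like this (only the relevant part is shown; the rest of your existing flags stay as they are):</p> <pre><code># /etc/kubernetes/manifests/kube-controller-manager.yaml (on the master node)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-sync-period=15s   # default is 30s
</code></pre> <p>The kubelet picks up the change to the static pod manifest and restarts the controller manager automatically.</p>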
<p>I am trying to expose deployment using Ingress where DeamonSet has <code>hostNetwork=true</code> which would allow me to skip additional LoadBalancer layer and expose my service directly on the Kubernetes external node IP. Unfortunately I can't reach the Ingress controller from the external network.</p> <p>I am running Kubernetes version 1.11.16-gke.2 on GCP.</p> <p>I setup my fresh cluster like this:</p> <pre><code>gcloud container clusters get-credentials gcp-cluster kubectl create serviceaccount --namespace kube-system tiller kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller helm init --service-account tiller --upgrade helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress </code></pre> <p>I run the deployment:</p> <pre><code>cat &lt;&lt;EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: hello-node spec: selector: matchLabels: app: hello-node template: metadata: labels: app: hello-node spec: containers: - name: hello-node image: gcr.io/google-samples/node-hello:1.0 ports: - containerPort: 8080 EOF </code></pre> <p>Then I create service:</p> <pre><code>cat &lt;&lt;EOF | kubectl create -f - apiVersion: v1 kind: Service metadata: name: hello-node spec: ports: - port: 80 targetPort: 8080 selector: app: hello-node EOF </code></pre> <p>and ingress resource:</p> <pre><code>cat &lt;&lt;EOF | kubectl create -f - apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx name: hello-node-single-ingress spec: backend: serviceName: hello-node servicePort: 80 EOF </code></pre> <p>I get the node external IP:</p> <pre><code>12:50 $ kubectl get nodes -o json | jq '.items[] | .status .addresses[] | select(.type=="ExternalIP") | .address' "35.197.204.75" </code></pre> <p>Check if ingress is running:</p> <pre><code>12:50 $ kubectl get ing NAME HOSTS ADDRESS PORTS AGE hello-node-single-ingress * 35.197.204.75 80 8m 12:50 $ kubectl get pods --namespace ingress-nginx NAME READY STATUS RESTARTS AGE ingress-nginx-ingress-controller-7kqgz 1/1 Running 0 23m ingress-nginx-ingress-default-backend-677b99f864-tg6db 1/1 Running 0 23m 12:50 $ kubectl get svc --namespace ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-ingress-controller ClusterIP 10.43.250.102 &lt;none&gt; 80/TCP,443/TCP 24m ingress-nginx-ingress-default-backend ClusterIP 10.43.255.43 &lt;none&gt; 80/TCP 24m </code></pre> <p>Then trying to connect from the external network:</p> <pre><code>curl 35.197.204.75 </code></pre> <p>Unfortunately it times out</p> <p>On Kubernetes Github there is a page regarding ingress-nginx (host-netork: true) setup: <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network</a></p> <p>which mentions:</p> <p><em>"This approach does not leverage any Service object to expose the NGINX Ingress controller. 
If the ingress-nginx Service exists in the target cluster, it is recommended to delete it."</em></p> <p>I've tried to follow that and delete <code>ingress-nginx</code> services:</p> <pre><code>kubectl delete svc --namespace ingress-nginx ingress-nginx-ingress-controller ingress-nginx-ingress-default-backend </code></pre> <p>but this doesn't help.</p> <p>Any ideas how to set up the Ingress on the node external IP? What I am doing wrong? The amount of confusion over running Ingress reliably without the LB overwhelms me. Any help much appreciated !</p> <p>EDIT: When another service accessing my deployment with <code>NodePort</code> gets created:</p> <pre><code>cat &lt;&lt;EOF | kubectl create -f - apiVersion: v1 kind: Service metadata: name: hello-node2 spec: ports: - port: 80 targetPort: 8080 type: NodePort selector: app: hello-node EOF NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-node ClusterIP 10.47.246.91 &lt;none&gt; 80/TCP 2m hello-node2 NodePort 10.47.248.51 &lt;none&gt; 80:31151/TCP 6s </code></pre> <p>I still can't access my service e.g. using: <code>curl 35.197.204.75:31151</code>.</p> <p>However when I create 3rd service with LoadBalancer type:</p> <pre><code>cat &lt;&lt;EOF | kubectl create -f - apiVersion: v1 kind: Service metadata: name: hello-node3 spec: ports: - port: 80 targetPort: 8080 type: LoadBalancer selector: app: hello-node EOF NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-node ClusterIP 10.47.246.91 &lt;none&gt; 80/TCP 7m hello-node2 NodePort 10.47.248.51 &lt;none&gt; 80:31151/TCP 4m hello-node3 LoadBalancer 10.47.250.47 35.189.106.111 80:31367/TCP 56s </code></pre> <p>I can access my service using the external LB: <code>35.189.106.111</code> IP.</p>
<p>The problem was missing firewall rules on GCP.</p> <p>Found the answer: <a href="https://stackoverflow.com/a/42040506/2263395">https://stackoverflow.com/a/42040506/2263395</a></p> <p>Running: </p> <pre><code>gcloud compute firewall-rules create myservice --allow tcp:80,tcp:30301 </code></pre> <p>Where <code>80</code> is the ingress port and <code>30301</code> is the NodePort port. On production you would probably use just the ingress port.</p>
<p>Is it possible to enable k8s basic auth in AWS EKS?</p> <p>I need it to make Jenkins Kubernetes plugin work when Jenkins is deployed outside k8s. </p>
<p>You can use service account tokens. <br /> Read more about it here: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens</a></p>
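<p>A rough sketch of how that could look (the account name and the broad <code>cluster-admin</code> role are just for illustration; scope the role down for real use):</p> <pre><code># create a service account for Jenkins
kubectl -n default create serviceaccount jenkins

# bind it to a role (cluster-admin here only as an example)
kubectl create clusterrolebinding jenkins-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=default:jenkins

# read the token of the auto-created secret and paste it into the
# Jenkins Kubernetes plugin credentials
kubectl -n default get secret \
  $(kubectl -n default get serviceaccount jenkins -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode
</code></pre>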
<p>Setup is Kubernetes v1.13 &amp; Istio 1.0.5</p> <p>I'm running into an issue where the Istio service discovery is creating Envoy configurations that match TCP listeners instead of HTTP listeners. </p> <p>The communication is working in the service mesh, but I need Envoy to serve as a Layer 7 proxy and not a Layer 4 pass through. I'm not getting the logs I need for the HTTP requests coming through Envoy. </p> <p>Here is what I see in the sidecar istio-proxy log: </p> <p>[2019-02-05T15:40:59.403Z] - 5739 7911 149929 "127.0.0.1:80" inbound|80||api-endpoint.default.svc.cluster.local 127.0.0.1:44560 10.244.3.100:80 10.244.3.105:35204</p> <p>Which when I inspect the Envoy config in the sidecar - this is the corresponding config for that log message.</p> <pre><code> "name": "envoy.tcp_proxy", "config": { "cluster": "inbound|80||api-endpoint.default.svc.cluster.local", "access_log": [ { "name": "envoy.file_access_log", "config": { "path": "/dev/stdout", "format": "[%START_TIME%] %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS%\n" } } ], "stat_prefix": "inbound|80||api-endpoint.default.svc.cluster.local" } </code></pre> <p>So my question is: <strong>Why is Pilot providing Envoy with a TCP config for an HTTP service?</strong></p>
<p>I've come across this, in my case the port name for my service was not in the form <code>http-xyz</code>.</p> <p>Istio/Envoy assumes that traffic is TCP, unless it gets a hint from the port name that it is some other protocol.</p> <p>As per <a href="https://istio.io/help/faq/traffic-management/#naming-port-convention" rel="nofollow noreferrer">https://istio.io/help/faq/traffic-management/#naming-port-convention</a></p> <blockquote> <p>Named ports: Service ports must be named.</p> <p>The port names must be of the form protocol-suffix with http, http2, grpc, mongo, or redis as the protocol in order to take advantage of Istio’s routing features.</p> <p>For example, name: http2-foo or name: http are valid port names, but name: http2foo is not. If the port name does not begin with a recognized prefix or if the port is unnamed, traffic on the port will be treated as plain TCP traffic (unless the port explicitly uses Protocol: UDP to signify a UDP port).</p> </blockquote>
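<p>In other words, renaming the ports in your Service is usually enough to make Envoy treat the traffic as HTTP, for example (the service and selector names below are guesses based on your logs; only the port <code>name</code> field matters here):</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: api-endpoint
spec:
  selector:
    app: api-endpoint
  ports:
  - name: http-api        # the "http-" prefix tells Istio this is HTTP traffic
    protocol: TCP
    port: 80
    targetPort: 80
</code></pre>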
<p>We have an GKE ingress that is using the below frontend-service. The ingress terminates tls as well. We want to have http to https permanent redirects for any traffic that comes on http. </p> <p>With the below configuration we have all working, and serving traffic on both http and https (without redirect).</p> <p>The container used for the Deployment can be configured to rewrite http to https with --https-redirect flag. It also respect and trust the <strong>X-Forwarded-Proto</strong> header, and will consider it to be secure if the header value is set to <strong>https</strong>.</p> <p>So the most reasonable configuration I can see for the readinessProbe would be the configuration below, but with the commented lines uncommented. However, as soon as we apply this version we never get into a healthy state, and instead the terminated domain configured with the Ingress returns with 502 and never recovers.</p> <p>So what is wrong with the below approach? I have tested using port-forwarding both the pod and the service, and they both return 301 if I do not provide the X-Forwarded-Proto header, and return 200 if I provide the X-Forwarded-Proto header with https value.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: frontend-service spec: type: NodePort ports: - port: 8080 selector: app: frontend --- apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: selector: matchLabels: app: frontend template: metadata: labels: app: frontend spec: containers: - image: eu.gcr.io/someproject/frontend:master imagePullPolicy: Always # args: # - '--https-redirect' name: frontend resources: limits: memory: 1Gi cpu: '0.5' ports: - containerPort: 8080 name: frontend readinessProbe: httpGet: path: /_readinessProbe port: 8080 # httpHeaders: # - name: X-Forwarded-Proto # value: https </code></pre>
<p>The GCP Health Check is very picky about the HTTP response codes it gets back. It must be a 200, and not a redirect. If, in the configuration you have posted, the NLB gets a 301/302 response from your server, it will then mark your backend as unhealthy, as this is not a 200 response. This is likely if the health check is sending HTTP without the X-Forwarded-Proto header.</p> <p>You can check this by inspecting the logs (kubectl logs) of your deployment's pods.</p> <p>My previous answer may be useful if you move to an HTTPS health check, in an attempt to remedy this.</p> <hr> <p>From the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">GKE documentation</a>: You will need to put an annotation on your Service definition that tells GKE to use HTTPS for the health check. Otherwise it will try sending HTTP and get confused.</p> <pre><code>kind: Service metadata: name: my-service-3 annotations: cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}' spec: type: NodePort selector: app: metrics department: sales ports: - name: my-https-port port: 443 targetPort: 8443 - name: my-http-port port: 80 targetPort: 50001 </code></pre> <p>I haven't used the latest syntax, but this used to work for me.</p> <p>However, this was so clunky to use that I ended up moving over to Istio and getting that to do all the HTTPS termination. That's no small undertaking, but if you're thinking of using cert-manager/Let's Encrypt it might be worth exploring.</p>
<p>I made some experiments with <code>terraform</code>, <code>kubernetes</code>, <code>cassandra</code> and <code>elassandra</code>, I separated all by modules, but now I can't delete a specific module.</p> <p>I'm using <code>gitlab-ci</code>, and I store the terraform states on a AWS backend. This mean that, every time that I change the infrastructure in terraform files, after a <code>git push</code>, the infrastructure will be updated with an <code>gitlab-ci</code> that run <code>terraform init</code>, <code>terraform plan</code> and <code>terraform apply</code>.</p> <p><strong>My terraform main file is this:</strong></p> <pre><code># main.tf ########################################################################################################################################## # BACKEND # ########################################################################################################################################## terraform { backend "s3" {} } data "terraform_remote_state" "state" { backend = "s3" config { bucket = "${var.tf_state_bucket}" dynamodb_table = "${var.tf_state_table}" region = "${var.aws-region}" key = "${var.tf_key}" } } ########################################################################################################################################## # Modules # ########################################################################################################################################## # Cloud Providers: ----------------------------------------------------------------------------------------------------------------------- module "gke" { source = "./gke" project = "${var.gcloud_project}" workspace = "${terraform.workspace}" region = "${var.region}" zone = "${var.gcloud-zone}" username = "${var.username}" password = "${var.password}" } module "aws" { source = "./aws-config" aws-region = "${var.aws-region}" aws-access_key = "${var.aws-access_key}" aws-secret_key = "${var.aws-secret_key}" } # Elassandra: ---------------------------------------------------------------------------------------------------------------------------- module "k8s-elassandra" { source = "./k8s-elassandra" host = "${module.gke.host}" username = "${var.username}" password = "${var.password}" client_certificate = "${module.gke.client_certificate}" client_key = "${module.gke.client_key}" cluster_ca_certificate = "${module.gke.cluster_ca_certificate}" } # Cassandra: ---------------------------------------------------------------------------------------------------------------------------- module "k8s-cassandra" { source = "./k8s-cassandra" host = "${module.gke.host}" username = "${var.username}" password = "${var.password}" client_certificate = "${module.gke.client_certificate}" client_key = "${module.gke.client_key}" cluster_ca_certificate = "${module.gke.cluster_ca_certificate}" } </code></pre> <p><strong>This is a tree of my directory:</strong></p> <pre><code>. 
├── aws-config │   ├── terraform_s3.tf │   └── variables.tf ├── gke │   ├── cluster.tf │   ├── gcloud_access_key.json │   ├── gcp.tf │   └── variables.tf ├── k8s-cassandra │   ├── k8s.tf │   ├── limit_ranges.tf │   ├── quotas.tf │   ├── services.tf │   ├── stateful_set.tf │   └── variables.tf ├── k8s-elassandra │   ├── k8s.tf │   ├── limit_ranges.tf │   ├── quotas.tf │   ├── services.tf │   ├── stateful_set.tf │   └── variables.tf ├── main.tf └── variables.tf </code></pre> <p><strong>I'm blocked here:</strong></p> <p>-> I want to remove the module <code>k8s-cassandra</code></p> <ul> <li>If I comment ou delete the module in <code>main.tf</code> (<code>module "k8s-cassandra" {...</code>), I receive this error: </li> </ul> <blockquote> <p>TERRAFORM PLAN... Acquiring state lock. This may take a few moments... Releasing state lock. This may take a few moments...</p> <p><strong>Error: module.k8s-cassandra.kubernetes_stateful_set.cassandra: configuration for module.k8s-cassandra.provider.kubernetes is not present; a provider configuration block is required for all operations</strong></p> </blockquote> <ul> <li>If I insert <code>terraform destroy -target=module.k8s-cassandra -auto-approve</code> between <code>terraform init</code> and <code>terraform plan</code> stills not working.</li> </ul> <p>Anyone can help me, please? Thanks :)</p>
<p>The meaning of this error message is that Terraform was relying on a <code>provider "kubernetes"</code> block inside the <code>k8s-cassandra</code> module in order to configure the Kubernetes provider. By removing the module from the source code, you've implicitly removed that configuration and so the existing objects already present in the state cannot be deleted -- the provider configuration needed to do that is not present.</p> <p>Although Terraform allows <code>provider</code> blocks inside child modules for flexibility, the documentation recommends keeping all of them in the root module and passing the provider configurations by name into the child modules using a <code>providers</code> map, or by automatic inheritance by name.</p> <pre><code>provider "kubernetes" { # global kubernetes provider config } module "k8s-cassandra" { # ...module arguments... # provider "kubernetes" is automatically inherited by default, but you # can also set it explicitly: providers = { "kubernetes" = "kubernetes" } } </code></pre> <p>To get out of the conflict situation you are already in, though, the answer is to temporarily restore the <code>module "k8s-cassandra"</code> block and then destroy the objects it is managing <em>before</em> removing it, using the <code>-target</code> option:</p> <pre><code>terraform destroy -target module.k8s-cassandra </code></pre> <p>Once all of the objects managed by that module have been destroyed and removed from the state, you can then safely remove the <code>module "k8s-cassandra"</code> block from the configuration.</p> <p>To prevent this from happening again, you should rework the root and child modules here so that the provider configurations are all in the root module, and child modules only inherit provider configurations passed in from the root. For more information, see <a href="https://www.terraform.io/docs/modules/usage.html#providers-within-modules" rel="noreferrer">Providers Within Modules</a> in the documentation.</p>
<p>I want to terminate a pod when a container dies, but I have not found an efficient way of doing it.</p> <p>I can kill the pod using kubectl, but I want the pod to be killed/restarted automatically whenever any container in the pod restarts.</p> <p>Can this task be achieved using an operator?</p>
<p>There's a way: you have to add a <code>livenessProbe</code> configuration together with <code>restartPolicy: Never</code> in your pod config.</p> <ol> <li>The livenessProbe watches for container failures.</li> <li>When the container dies, since <code>restartPolicy</code> is <code>Never</code>, the pod status becomes <code>Failed</code>.</li> </ol> <p>For example:</p> <pre><code>apiVersion: v1 kind: Pod metadata: labels: test: liveness name: liveness-http spec: restartPolicy: Never containers: - args: - /server image: k8s.gcr.io/liveness livenessProbe: httpGet: # when "host" is not defined, "PodIP" will be used # host: my-host # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed # scheme: HTTPS path: /healthz port: 8080 httpHeaders: - name: X-Custom-Header value: Awesome initialDelaySeconds: 15 timeoutSeconds: 1 name: liveness </code></pre> <p>Here's the reference: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase</a></p>
<p>I have a kubernetes cluster (using flannel):</p> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.5", GitCommit:"51dd616cdd25d6ee22c83a858773b607328a18ec", GitTreeState:"clean", BuildDate:"2019-01-16T18:14:49Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"} kubectl get cs NAME STATUS MESSAGE ERROR scheduler Healthy ok controller-manager Healthy ok etcd-0 Healthy {"health": "true"} </code></pre> <p>Everything seems to be running okay</p> <pre><code>$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-576cbf47c7-q7ncm 1/1 Running 1 30m coredns-576cbf47c7-tclp8 1/1 Running 1 30m etcd-kube1 1/1 Running 1 30m kube-apiserver-kube1 1/1 Running 1 30m kube-controller-manager-kube1 1/1 Running 1 30m kube-flannel-ds-amd64-6vlkx 1/1 Running 1 30m kube-flannel-ds-amd64-7twk8 1/1 Running 1 30m kube-flannel-ds-amd64-rqzr7 1/1 Running 1 30m kube-proxy-krfzk 1/1 Running 1 30m kube-proxy-vrssw 1/1 Running 1 30m kube-proxy-xlrgz 1/1 Running 1 30m kube-scheduler-kube1 1/1 Running 1 30m </code></pre> <p>Now I've deployed 2 pods (without a service). 2 NGinx pods. I've also created a busybox pod. When I curl from inside the busybox pod to the nginx pod on the same node it works:</p> <pre><code>kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE busybox 1/1 Running 2 30m 10.244.2.5 kube2 &lt;none&gt; nginx-d55b94fd-l7swz 1/1 Running 1 30m 10.244.2.4 kube2 &lt;none&gt; nginx-d55b94fd-zg7sj 1/1 Running 1 30m 10.244.1.6 kube3 &lt;none&gt; </code></pre> <p>curl:</p> <pre><code>kubectl exec busybox -- curl 10.244.2.4 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 612 100 612 0 0 357k 0 --:--:-- --:--:-- --:--:-- 597k &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; &lt;style&gt; body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } &lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Welcome to nginx!&lt;/h1&gt; &lt;p&gt;If you see this page, the nginx web server is successfully installed and working. Further configuration is required.&lt;/p&gt; &lt;p&gt;For online documentation and support please refer to &lt;a href="http://nginx.org/"&gt;nginx.org&lt;/a&gt;.&lt;br/&gt; Commercial support is available at &lt;a href="http://nginx.com/"&gt;nginx.com&lt;/a&gt;.&lt;/p&gt; &lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;/em&gt;&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>But when I curl the pod on a different node:</p> <pre><code> kubectl exec busybox -- curl 10.244.1.6 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- 10.244.1.6 port 80: No route to host </code></pre> <p>How can I debug this? What can be wrong? 
(All firewalls are turned off/disabled)</p> <p>additional info:</p> <pre><code>$ sysctl net.ipv4.ip_forward net.ipv4.ip_forward = 1 $ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-iptables = 1 </code></pre> <p>info:</p> <pre><code>kubectl exec -it busybox ip a 1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if7: &lt;BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN&gt; mtu 1450 qdisc noqueue link/ether 0a:58:0a:f4:02:05 brd ff:ff:ff:ff:ff:ff inet 10.244.2.5/24 scope global eth0 valid_lft forever preferred_lft forever </code></pre> <p>Iptables:</p> <pre><code>vagrant@kube1:~$ sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */ KUBE-FIREWALL all -- anywhere anywhere Chain FORWARD (policy DROP) target prot opt source destination KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */ DOCKER-USER all -- anywhere anywhere DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED DOCKER all -- anywhere anywhere ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere ACCEPT all -- kube1/16 anywhere ACCEPT all -- anywhere kube1/16 Chain OUTPUT (policy ACCEPT) target prot opt source destination KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */ KUBE-FIREWALL all -- anywhere anywhere Chain DOCKER (1 references) target prot opt source destination Chain DOCKER-ISOLATION-STAGE-1 (1 references) target prot opt source destination DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere RETURN all -- anywhere anywhere Chain DOCKER-ISOLATION-STAGE-2 (1 references) target prot opt source destination DROP all -- anywhere anywhere RETURN all -- anywhere anywhere Chain DOCKER-USER (1 references) target prot opt source destination RETURN all -- anywhere anywhere Chain KUBE-EXTERNAL-SERVICES (1 references) target prot opt source destination Chain KUBE-FIREWALL (2 references) target prot opt source destination DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000 Chain KUBE-FORWARD (1 references) target prot opt source destination ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000 ACCEPT all -- kube1/16 anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED ACCEPT all -- anywhere kube1/16 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED Chain KUBE-SERVICES (1 references) target prot opt source destination </code></pre>
<blockquote> <p>How can I debug this? What can be wrong? (All firewalls are turned off/disabled)</p> </blockquote> <p>This may be the problem if you have disabled iptables on your nodes. The overlay (flannel) sets up iptables to allow pod to pod traffic. You can check on your K8s nodes with something like this:</p> <pre><code>iptables-save | grep 10.244.2.4
iptables-save | grep 10.244.2.5
iptables-save | grep 10.244.1.6
</code></pre> <p>You should see some rules like this for port <code>80</code>:</p> <pre><code>-A KUBE-SEP-XXXXXXXXXXXXXXXX -s 10.244.2.4/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-XXXXXXXXXXXXXXXX -p tcp -m tcp -j DNAT --to-destination 10.244.2.4:80
</code></pre>
<p>I have installed bookinfo on EKS according to the instructions <a href="https://istio.io/docs/setup/kubernetes/helm-install/#option-2-install-with-helm-and-tiller-via-helm-install" rel="nofollow noreferrer">here</a> and <a href="https://istio.io/docs/examples/bookinfo/" rel="nofollow noreferrer">here</a>. </p> <p>While verifying that the application was installed correctly, i received <code>000</code> when trying to bring up the product page. After checking my network connections VPC/Subnets/Routing/SecurityGroups, I have narrorwed the issue down to being an istio networking issue.</p> <p>Upon further investigation, I logged into the istio-sidecar container for productpage and have noticed the following error.</p> <pre><code>[2019-01-21 09:06:01.039][10][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream [2019-01-21 09:06:28.150][10][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream </code></pre> <p>This led me to to notice that the istio-proxy is pointing to the <code>istio-pilot.istio-system:15007</code> address for discovery. Only the strange thing was, the kubernetes <code>istio-pilot.istio-system</code> service does not seem to be exposing port <code>15007</code> as shown below.</p> <pre><code>[procyclinsur@localhost Downloads]$ kubectl get svc istio-pilot --namespace=istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-pilot ClusterIP 172.20.185.72 &lt;none&gt; 15010/TCP,15011/TCP,8080/TCP,9093/TCP 1d </code></pre> <p>Infact none of the services from the <code>istio-system</code> namespace seem to expose that port. I am assuming that this <code>istio-pilot.istio-system</code> address is the one used for gRPC and would like to know how to fix this as it seems to be pointing to the wrong address; please correct me if I am wrong.</p> <p><strong>Relevant Logs</strong></p> <p>istio-proxy</p> <pre><code>2019-01-21T09:04:58.949152Z info Version [email protected]/istio-1.0.5-c1707e45e71c75d74bf3a5dec8c7086f32f32fad-Clean 2019-01-21T09:04:58.949283Z info Proxy role: model.Proxy{ClusterID:"", Type:"sidecar", IPAddress:"10.20.228.89", ID:"productpage-v1-54b8b9f55-jpz8g.default", Domain:"default.svc.cluster.local", Metadata:map[string]string(nil)} 2019-01-21T09:04:58.949971Z info Effective config: binaryPath: /usr/local/bin/envoy configPath: /etc/istio/proxy connectTimeout: 10s discoveryAddress: istio-pilot.istio-system:15007 discoveryRefreshDelay: 1s drainDuration: 45s parentShutdownDuration: 60s proxyAdminPort: 15000 serviceCluster: productpage zipkinAddress: zipkin.istio-system:9411 </code></pre>
<p>Ignore the gRPC warnings they are not meaningful. Make sure you did the <code>kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml</code></p> <p>If you do <code>kubectl exec $(kubectl get pod --selector app=ratings --output jsonpath='{.items[0].metadata.name}') -c istio-proxy -- ps -ef</code> you will see an entry like <code>--discoveryAddress istio-pilot.istio-system:15011</code>. That is the address the sidecar uses to contact Pilot and SHOULD match an entry you see using <code>kubectl -n istio-system get service istio-pilot</code>.</p> <p>If the discoveryAddress matches a Pilot port you can test networking. You can't easily <em>curl</em> on the discovery address but if you do <code>kubectl exec $(kubectl get pod --selector app=ratings --output jsonpath='{.items[0].metadata.name}') -c istio-proxy -- curl https://istio-pilot.istio-system:15011</code> and you get a timeout then there is a communication problem.</p> <p>The discovery address comes from Istio configuration. If you do <code>kubectl -n istio-system get cm istio-sidecar-injector</code> and the age is older than your Istio install there may have been a problem with upgrading an older Istio version.</p>
<p>I have a use case in which the front-end application is sending a file to a back-end service for processing, and the back-end service pod can only process one request at a time. If multiple requests come in, the service should autoscale and send each request to a new pod. So I am looking for a way to spawn a new pod for each request; after the back-end pod finishes processing, it should return the result to the front-end service and destroy itself, so that each pod only processes a single request at a time.</p> <p>I explored HPA autoscaling but did not find a suitable way. I am open to using a custom metrics server for this, and can even use Jobs if they can fulfill the above scenario.</p> <p>If someone has knowledge of or has tackled the same use case, please help so that I can also try that solution. Thanks in advance.</p>
<p>As already said, there is no built-in way of doing this; you need to find a custom way to achieve it.</p> <p>One solution is to use a service account and an HTTP request to the API server to create the back-end pod as soon as the request is received by the front-end pod, check the status of the back-end pod, and once it is up, forward the request to it.</p> <p>A second way I can think of is to use some temporary storage (a DB or a hostPath volume) and write a CronJob in your cluster that polls that storage and, depending on the status, spawns a pod running a Job container.</p>
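<p>To make the first option concrete, here is a minimal sketch of what the front-end pod could do with plain <code>curl</code> against the in-cluster API server. It assumes the pod runs under a service account that is allowed to create pods, and <code>pod.json</code> / <code>my-backend-pod</code> are placeholder names for your own manifest and pod:</p> <pre><code># run from inside the front-end pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)

# create the back-end pod from your own manifest (pod.json)
curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST https://kubernetes.default.svc/api/v1/namespaces/$NS/pods \
  -d @pod.json

# poll its status until it reports Running, then forward the request to it
curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/$NS/pods/my-backend-pod
</code></pre>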
<p>Create a pod named xyz with a single container for each of the following images running inside it (there may be between 1 and 4 images specified): nginx, redis, memcached, consul.</p>
<p>It is not quite clear from the question, but assuming you want one pod with multiple containers, below is a sample manifest which can be used:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: xyz
  labels:
    app: myapp
spec:
  containers:
  - name: container-1
    image: nginx
  - name: container-2
    image: redis
  - name: container-3
    image: memcached
  - name: container-4
    image: consul
</code></pre> <p>There will be 4 different docker processes for the 4 different containers, but there will be only one pod containing all four of them.</p>
<p>We have a k8s cluster and I have an application which is running there. Now I am trying to add <a href="https://prometheus.io/" rel="nofollow noreferrer">https://prometheus.io/</a> and I use the command</p> <p><code>helm install stable/prometheus --version 6.7.4 --name my-prometheus</code></p> <p>This command works and I got this:</p> <pre><code>NAME: my-prometheus
LAST DEPLOYED: Tue Feb 5 15:21:46 2019
NAMESPACE: default
STATUS: DEPLOYED
...
</code></pre> <p>When I run the command</p> <p><code>kubectl get services</code></p> <p>I got this:</p> <pre><code>kubernetes                         ClusterIP   100.64.0.1      &lt;none&gt;   443/TCP    2d4h
my-prometheus-alertmanager         ClusterIP   100.75.244.55   &lt;none&gt;   80/TCP     8m44s
my-prometheus-kube-state-metrics   ClusterIP   None            &lt;none&gt;   80/TCP     8m43s
my-prometheus-node-exporter        ClusterIP   None            &lt;none&gt;   9100/TCP   8m43s
my-prometheus-pushgateway          ClusterIP   100.75.24.67    &lt;none&gt;   9091/TCP   8m43s
my-prometheus-server               ClusterIP   100.33.26.206   &lt;none&gt;   80/TCP     8m43s
</code></pre> <p>I didn't get any external IP.</p> <p>Does someone know how to add it? Via a service? Any example for this?</p> <p><strong>update</strong></p> <p>I've added the following YAML:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector:
    app: prometheus-server
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 9090
    nodePort: 30001
</code></pre> <p>which was created successfully.</p> <p>Now I see the following when running <code>kubectl get services</code>:</p> <pre><code>my-prometheus-server   LoadBalancer   100.33.26.206   8080:30001/TCP   80/TCP   8m43s
</code></pre> <p>And when I use 100.33.26.206:30001 in the browser nothing happens. Any idea?</p>
<p>I think what you are trying to do is to create a service of type LoadBalancer; those have an internal and an external IP.</p> <p>You can create one like any other service, but you should specify these two fields:</p> <pre><code>externalTrafficPolicy: Local
type: LoadBalancer
</code></pre> <p><strong>Updated</strong>:</p> <p>There seems to be some confusion: you don't need an external IP to monitor your apps; it will only be used to access the Prometheus UI.</p> <p>The UI is accessible on port 9090, but Prometheus is never accessed by the exporters, as it is Prometheus which will be <strong>scraping</strong> the exporters.</p> <p>Now, to access a service from the internet you should have a Google IP, but it seems that what you have is still an internal IP; it's in the same subnet as the other ClusterIPs, and it should not be. For now, in place of an external IP it's showing a port redirect, which is also wrong, as the Prometheus UI is on port 9090 (if you didn't modify your configuration it should still be). You should try to remove the &quot;nodePort&quot; and leave the port assignment to Kubernetes.</p>
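<p>For reference, a sketch of such a LoadBalancer service could look like the one below. The selector labels are assumptions and must match whatever labels the Helm chart put on the <code>my-prometheus-server</code> pods (check with <code>kubectl get pods --show-labels</code>); 9090 is the default Prometheus listen port:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: prometheus-lb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: prometheus      # must match the labels on the prometheus server pod
    component: server
  ports:
  - port: 80
    targetPort: 9090     # Prometheus UI listens on 9090 by default
</code></pre>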
<p>I have a use case in which the front-end application is sending a file to a back-end service for processing, and the back-end service pod can only process one request at a time. If multiple requests come in, the service should autoscale and send each request to a new pod. So I am looking for a way to spawn a new pod for each request; after the back-end pod finishes processing, it should return the result to the front-end service and destroy itself, so that each pod only processes a single request at a time.</p> <p>I explored HPA autoscaling but did not find a suitable way. I am open to using a custom metrics server for this, and can even use Jobs if they can fulfill the above scenario.</p> <p>If someone has knowledge of or has tackled the same use case, please help so that I can also try that solution. Thanks in advance.</p>
<p>There's not really anything built-in for this that I can think of. You could create a service account for your app that has permissions to create pods, and then build the spawning behavior into your app code directly. If you can get metrics about which pods are available, you could use HPA with Prometheus to ensure there is always at least one unoccupied backend, but that depends on what kind of metrics your stuff exposes.</p>
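<p>If you go down that road, a minimal sketch of the RBAC objects such a service account would need could look like this (names and namespace are placeholders); the front-end Deployment would then reference it via <code>serviceAccountName: pod-spawner</code> in its pod spec:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-spawner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-spawner
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-spawner
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-spawner
subjects:
- kind: ServiceAccount
  name: pod-spawner
  namespace: default
</code></pre>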
<p>I'm trying to leverage the Jenkins credentials plugin to store sensitive data which I want to inject into Secrets within my Kubernetes cluster. I have a Jenkinsfile which is used in my project to define the steps, and I've added the following code to pull a username/password from a credential and pass them to a shell script to replace a placeholder in a file with the actual value:</p> <pre><code>stages {
    stage('Build') {
        steps {
            withCredentials([usernamePassword(credentialsId: 'creds-test', passwordVariable: 'PASSWORD', usernameVariable: 'USERNAME')]) {
                sh '''
                    echo $USERNAME
                    echo $PASSWORD
                    chmod +x secrets-replace.sh
                    ./secrets-replace.sh USERNAME_PLACEHOLDER $USERNAME
                    ./secrets-replace.sh PASSWORD_PLACEHOLDER $PASSWORD
                '''
            }
            echo 'Building...'
            sh './gradlew build --refresh-dependencies'
        }
    }
    ...
}
</code></pre> <p>However whenever this runs all I ever get is the masked **** value back, even when I pass it to the shell script. Here is part of the build log:</p> <p><a href="https://i.stack.imgur.com/rXHwW.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rXHwW.png" alt="Jenkins Log"></a></p> <p>Is there something I need to configure to get access to the unmasked value?</p>
<p>Write the variable to a file in Jenkins. Go to the Jenkins workspace and look inside the file. The token will be present there in plain text.</p> <p>UPDATE</p> <p>An even easier way is to print the <code>base64</code>-encoded value of the credential and then decode it.</p>
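<p>For example, a rough sketch (the encoded string below is a made-up example; keep in mind anyone who can read the build log can decode it, so rotate the credential afterwards):</p> <pre><code># inside the withCredentials block in the Jenkinsfile
sh 'echo $PASSWORD | base64'

# then locally, paste the output copied from the build log
echo 'cGFzc3dvcmQtdmFsdWU=' | base64 --decode
</code></pre>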
<p>The Kubernets cluster is setup using Alibaba container service, there is no Issue with accessing the cluster using root account, when a new namespace is created and user is added to that namespace, it throws the error <strong>The connection to the server localhost:8080 was refused</strong></p> <p>Here is the setup of troubleshoot</p> <p>Defined the namespace <strong>dev</strong> and using get verb to display all the kubernetes namespace.</p> <pre><code>root@kube-master:# kubectl get namespaces NAME STATUS AGE default Active 14d dev Active 56m kube-public Active 14d kube-system Active 14d </code></pre> <p>Added new context in the Kubernetes cluster.</p> <pre><code>kubectl config set-context dev --namespace=dev --user=user1 </code></pre> <p>I should get an <strong>access denied error</strong> when using the kubectl CLI with this configuration file</p> <pre><code>root@kube-master:/home/ansible# kubectl --context=dev get pods The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre> <p>Instead of it's showing <strong>The connection to the server localhost:8080 was refused</strong></p> <p>Without the <strong>--context</strong> it's works perfectly well</p> <pre><code>root@kube-master:# kubectl get pods -n dev NAME READY STATUS RESTARTS AGE busybox 1/1 Running 1 1h </code></pre> <p>Here is the kubernetes config view</p> <pre><code>root@kube-master:/home/ansible# kubectl config view apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://172.16.2.13:6443 name: kubernetes contexts: - context: cluster: "" namespace: dev user: user1 name: dev - context: cluster: kubernetes user: kubernetes-admin name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin user: client-certificate-data: REDACTED client-key-data: REDACTED root@kube-master:# kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE dev user1 dev * kubernetes-admin@kubernetes kubernetes kubernetes-admin </code></pre>
<p>I figured it out. What I noticed when I executed the command</p> <pre><code>kubectl config view
</code></pre> <p>is that the cluster is showing as empty:</p> <pre><code>- context:
    cluster: ""
    namespace: dev
    user: user1
</code></pre> <p>To fix this issue I added the <code>--cluster</code> information and modified the <code>set-context</code>:</p> <pre><code>root@kube-master:/home/ansible# kubectl config set-context dev --cluster=kubernetes --namespace=dev --user=user1
Context "dev" modified.
</code></pre> <p>Now the context is set properly:</p> <pre><code>contexts:
- context:
    cluster: kubernetes
    namespace: dev
    user: user1
  name: dev
</code></pre> <p>And I got the desired result when looking up the pods with <code>--context=dev</code>:</p> <pre><code>root@kube-master:/home/ansible# kubectl --context=dev get pods
No resources found.
Error from server (Forbidden): pods is forbidden: User "system:anonymous" cannot list pods in the namespace "dev"
</code></pre>
<p>while testing the role based access in the Alibaba container service it's throwing with me an error <strong>"pods is forbidden: User "user1" cannot list pods in the namespace "stage""</strong> this is RBAC issue, which i'm not able to figure it where i'm heading it wrong</p> <p><strong>The RoleBinding Definition</strong></p> <pre><code>root@kube-master:# kubectl describe rolebinding stage-role-binding -n stage Name: stage-role-binding Labels: &lt;none&gt; Annotations: &lt;none&gt; Role: Kind: Role Name: staging Subjects: Kind Name Namespace ---- ---- --------- User user2 </code></pre> <p><strong>The Role Definition</strong></p> <pre><code>root@kube-master:# kubectl describe role -n stage Name: staging Labels: &lt;none&gt; Annotations: &lt;none&gt; PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- deployments [] [] [get list watch create update patch delete] pods [] [] [get list watch create update patch delete] replicasets [] [] [get list watch create update patch delete] deployments.apps [] [] [get list watch create update patch delete] pods.apps [] [] [get list watch create update patch delete] replicasets.apps [] [] [get list watch create update patch delete] deployments.extensions [] [] [get list watch create update patch delete] pods.extensions [] [] [get list watch create update patch delete] replicasets.extensions [] [] [get list watch create update patch delete] </code></pre> <p><strong>One pod is running well in the stage namespace</strong></p> <pre><code>root@kube-master:# kubectl get pods -n stage NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 10m </code></pre> <p><strong>Defining context</strong></p> <pre><code>root@kube-master:# kubectl config set-context stage --cluster=kubernetes --namespace=stage --user=user2 Context "stage" modified. </code></pre> <p><strong>Testing RBAC</strong></p> <pre><code>root@kube-master:/home/ansible# kubectl --context=stage get pods No resources found. Error from server (Forbidden): pods is forbidden: User "user1" cannot list pods in the namespace "stage" </code></pre> <p>Not sure from where <strong>user1</strong> </p> <blockquote> <p>is coming and throwing the RBAC Error</p> </blockquote> <p>There is only <strong>context</strong> is set for <strong>user2</strong></p> <pre><code>root@kube-master:# kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE * kubernetes-admin@kubernetes kubernetes kubernetes-admin stage kubernetes user2 stage </code></pre> <p><strong>This is how i created the user</strong></p> <pre><code>openssl genrsa -out user2.key 2048 openssl req -new -key user2.key -out user2.csr -subj "/CN=user1/O=8gwifi.org" openssl x509 -req -in user2.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out user2.crt -days 500 kubectl config set-credentials user2 --client-certificate=user2.crt --client-key=user2.key kubectl config set-context stage --cluster=kubernetes --namespace=stage --user=user2 </code></pre>
<p>The RoleBinding is for user <strong>user2</strong>, not for <strong>user1</strong>. That's why you are getting RBAC error.</p> <p>Setting context for user <strong>user2</strong> does not mean that kubernetes will identify this user as <strong>user2</strong>. It depends on the credential you use. If the used credential is of user <strong>user-x</strong>, then kubernetes will treat it as <strong>user-x</strong>. The <strong>context user</strong> is for kubectl to find user credential info. To understand kubernetes authentication see <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">here</a>.</p> <p>The credential you used there resolved to user <strong>user1</strong>. So you should update your RoleBinding to <strong>user1</strong>.</p> <p><strong>After updated question</strong></p> <p>For certificate authetication, <strong>CN</strong> will be the username (ref: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#x509-client-certs" rel="nofollow noreferrer">here</a>). In your certificate <code>"/CN=user1/O=8gwifi.org"</code>, so username will be <strong>user1</strong> not <strong>user2</strong>.</p>
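<p>In other words, regenerating the CSR with the intended username in the <strong>CN</strong> field should fix it; the commands mirror the ones from the question, only the subject changes:</p> <pre><code>openssl req -new -key user2.key -out user2.csr -subj "/CN=user2/O=8gwifi.org"
openssl x509 -req -in user2.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out user2.crt -days 500
kubectl config set-credentials user2 --client-certificate=user2.crt --client-key=user2.key
</code></pre>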
<p>Where are documented the "types" of secrets that you can create in kubernetes?</p> <p>looking at different samples I have found "generic" and "docker-registry" but I have no been able to find a pointer to documentation where the different type of secrets are documented.</p> <p>I always end in the k8s doc: <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a> <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/" rel="noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/</a></p> <p>Thank you.</p>
<p>Here is a list of 'types' from the <a href="https://github.com/kubernetes/kubernetes/blob/7693a1d5fe2a35b6e2e205f03ae9b3eddcdabc6b/pkg/apis/core/types.go#L4394-L4478" rel="noreferrer">source code</a>:</p> <pre><code>SecretTypeOpaque SecretType = "Opaque"
[...]
SecretTypeServiceAccountToken SecretType = "kubernetes.io/service-account-token"
[...]
SecretTypeDockercfg SecretType = "kubernetes.io/dockercfg"
[...]
SecretTypeDockerConfigJson SecretType = "kubernetes.io/dockerconfigjson"
[...]
SecretTypeBasicAuth SecretType = "kubernetes.io/basic-auth"
[...]
SecretTypeSSHAuth SecretType = "kubernetes.io/ssh-auth"
[...]
SecretTypeTLS SecretType = "kubernetes.io/tls"
[...]
SecretTypeBootstrapToken SecretType = "bootstrap.kubernetes.io/token"
</code></pre>
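<p>For what it's worth, several of these types map to <code>kubectl create secret</code> helpers; the names and values below are placeholders:</p> <pre><code># type: Opaque
kubectl create secret generic my-secret --from-literal=username=admin

# type: kubernetes.io/dockerconfigjson
kubectl create secret docker-registry my-registry-cred \
  --docker-server=registry.example.com --docker-username=user --docker-password=pass

# type: kubernetes.io/tls
kubectl create secret tls my-tls --cert=tls.crt --key=tls.key
</code></pre>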
<h1>Context</h1> <p>I have a bash script that uses the <a href="https://cloud.google.com/sdk/gcloud/" rel="nofollow noreferrer"><code>gcloud</code></a> command-line tool to perform maintenance operations.</p> <p>This script works fine.</p> <p>This script is in a docker image based on <a href="https://hub.docker.com/r/google/cloud-sdk" rel="nofollow noreferrer"><code>google/cloud-sdk</code></a>, executed automatically directly through the container entrypoint.</p> <p>The goal is to have it executed periodically through a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">Kubernetes CronJob</a>. This works too.</p> <p>I have currently not setup anything regarding authentication, so my script uses the <a href="https://cloud.google.com/compute/docs/access/service-accounts?hl=en_US#the_default_service_account" rel="nofollow noreferrer">Compute Engine default service account</a>.</p> <p>So far so good, however, I need to stop using this default service account, and switch to a separate service account, with an API key file. That's where the problems start.</p> <h1>Problem</h1> <p>My plan was to mount my API key in the container through a Kubernetes Secret, and then use the <code>GOOGLE_APPLICATION_CREDENTIALS</code> (documented <a href="https://cloud.google.com/docs/authentication/production" rel="nofollow noreferrer">here</a>) to have it loaded automatically, with the following (simplified) configuration :</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: some-name spec: schedule: "0 1 * * *" jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - name: some-name image: some-image-path imagePullPolicy: Always env: - name: GOOGLE_APPLICATION_CREDENTIALS value: "/credentials/credentials.json" volumeMounts: - name: credentials mountPath: /credentials volumes: - name: credentials secret: secretName: some-secret-name </code></pre> <p>But apparently, the <code>gcloud</code> tool behaves differently from the programming-languages SDKs, and ignores this env variable completely.</p> <p>The <a href="https://hub.docker.com/r/google/cloud-sdk" rel="nofollow noreferrer">image documentation</a> isn't much help either, since it only gives you a way to change the gcloud config location.</p> <p>Moreover, I'm pretty sure that I'm going to need a way to provide some extra configuration to gcloud down the road (project, zone, etc…), so I guess my solution should give me the option to do so from the start.</p> <h1>Possible solutions</h1> <p>I've found a few ways to work around the issue :</p> <ul> <li><p>Change the entrypoint script of my image, to read environment variables, and execute env preparation with <code>gcloud</code> commands :</p> <p>That's the simplest solution, and the one that would allow me to keep my Kubernetes configuration the cleanest (each environment only differs by some environment variables). It requires however maintaining my own copy of the image I'm using, which I'd like to avoid if possible.</p></li> <li><p>Override the entrypoint of my image with a Kubernetes configMap mounted as a file :</p> <p>This option is probably the most convenient : execute a separate configmap for each environment, where I can do whatever environment setup I want (such as <code>gcloud auth activate-service-account --key-file /credentials/credentials.json</code>). 
Still, it feels hacky, and is hardly readable compared to env variables.</p></li> <li><p>Manually provide configuration files for <code>gcloud</code> (in <code>/root/.config/gcloud</code>) :</p> <p>I suppose this would be the cleanest solution, however, the configuration syntax doesn't seem really clear, and I'm not sure how easy it would be to provide this configuration through a configMap.</p></li> </ul> <p>As you can see, I found ways to work around my issue, but none of them satisfies me completely. Did I miss something ?</p>
<p>For the record, here is the solution I finally used, although it's still a workaround in my opinion :</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-name
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: some-name
              image: some-image-path
              imagePullPolicy: Always
              command: ["/bin/bash", "/k8s-entrypoint/entrypoint.sh"]
              volumeMounts:
                - name: credentials
                  mountPath: /credentials
                - name: entrypoint
                  mountPath: /k8s-entrypoint
          volumes:
            - name: credentials
              secret:
                secretName: some-secret-name
            - name: entrypoint
              configMap:
                name: entrypoint
</code></pre> <p>With the following ConfigMap :</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: entrypoint
data:
  entrypoint.sh: |
    #!/bin/bash
    gcloud auth activate-service-account --key-file /credentials/credentials.json

    # Chainload the original entrypoint
    exec sh -c /path/to/original/entrypoint.sh
</code></pre>
<p>I have tried this before on OpenShift Origin 3.9 and Online. I have deployed a simple hello-world PHP app on OpenShift. It has a Service and a Route.</p> <p>When I call the route, I get the expected output with Hello world and the pod IP. Let's call this pod IP 1.1.1.1.</p> <p>Now I deployed the same app with a small text change, with the same label, under the same Service. Let's call this pod IP 2.2.2.2.</p> <p>I can see both pods running behind a single Service. Yet when I call the route, it always shows pod IP 1.1.1.1; my route never hits the second pod.</p> <p>My understanding is that the Route will call the Service and the Service will load balance between the available pods.</p> <p>But that isn't happening. Any help is appreciated.</p>
<p>The default behavior of the HAProxy router is to use a cookie to ensure "sticky" routing. This enables sessions to remain with the same pod. <a href="https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html" rel="noreferrer">https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html</a></p> <p>If you set a <code>haproxy.router.openshift.io/disable_cookies</code> annotation on the route to <code>true</code> it should disable this behavior.</p>
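<p>For example (the route name is a placeholder; the <code>balance</code> annotation is optional and only shown as a common companion setting):</p> <pre><code>oc annotate route my-route haproxy.router.openshift.io/disable_cookies='true'
oc annotate route my-route haproxy.router.openshift.io/balance='roundrobin'
</code></pre>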
<p>We have to set https_proxy &amp; http_proxy for internet access from our cluster instances.</p> <p>The https_proxy &amp; http_proxy environment variables should be exported to all pods so that applications can access external sites.</p> <p>We are using Helm charts, so is there a common place where we can set these environment variables so that all pods can access the internet?</p>
<p>You should be using a PodPreset object to pass common environment variables and other params to all the matching pods.</p> <h1>Add label <code>setproxy: true</code> to all pods</h1> <p>The below <code>PodPreset</code> object would inject the <code>HTTPS_PROXY</code> and <code>HTTP_PROXY</code> environment variables into all pods that match the label <code>setproxy: true</code>:</p> <pre><code>apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-proxy-var
spec:
  selector:
    matchLabels:
      setproxy: &quot;true&quot;
  env:
    - name: HTTPS_PROXY
      value: &quot;https_proxy&quot;
    - name: HTTP_PROXY
      value: &quot;http_proxy&quot;
</code></pre> <p>Follow the link for more help --&gt; <a href="https://kubernetes.io/docs/tasks/inject-data-application/podpreset/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/podpreset/</a></p> <h1>You should enable PodPreset in your cluster; follow the link below</h1> <p><a href="https://kubernetes.io/docs/concepts/workloads/pods/podpreset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/podpreset/</a></p>
<p>I cannot resolve service from kubernetes.</p> <pre><code>kubectl get pods -l k8s-app=kube-dns --namespace kube-system NAME READY STATUS RESTARTS AGE IP coredns-86c58d9df4-gn62b 1/1 Running 0 18d 10.244.0.58 coredns-86c58d9df4-svmk5 1/1 Running 0 18d 10.244.0.59 </code></pre> <p>containers do not resolve any domains, including kubernetes.default</p> <pre><code>kubectl exec -ti busybox -- sh / # nslookup kubernetes. defaultServer: 10.96.0.10 Address 1: 10.96.0.10 nslookup: can't resolve 'kubernetes.default' command terminated with exit code 1 </code></pre> <p>Logs from dns pods do not show any queries (note coredns is configured to log queries)</p> <pre><code>kubectl logs --namespace=kube-system coredns-86c58d9df4-gn62b .:53 2019-01-18T21:44:34.271Z [INFO] CoreDNS-1.2.6 2019-01-18T21:44:34.271Z [INFO] linux/amd64, go1.11.2, 756749c CoreDNS-1.2.6 linux/amd64, go1.11.2, 756749c [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769 [INFO] Reloading [INFO] plugin/reload: Running configuration MD5 = 2394cf331ea25e9aacc36ddf69fafcdb [INFO] Reloading complete 2019-02-04T22:23:21.266Z [INFO] 127.0.0.1:39695 - 58939 "HINFO IN 4718439545634584094.2038959545847864189. udp 57 false 512" NXDOMAIN qr,rd,ra 133 0.021492508s </code></pre> <p>The kube-node coredns is hosted on is running ubuntu xenial. </p> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues" rel="nofollow noreferrer">I noticed there is a known issue on ubuntu hosts</a></p> <p>I applied custom kubelet conf, setting <code>--resolv-conf=/run/systemd/resolve/resolv.conf</code> </p> <pre><code>❯ systemctl status kubelet.service ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since Wed 2019-02-06 01:05:42 GMT; 5min ago Docs: https://kubernetes.io/docs/home/ Main PID: 27867 (kubelet) Tasks: 30 (limit: 4915) CGroup: /system.slice/kubelet.service └─27867 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-drive --resolv-conf=/run/systemd/resolve/resolv.conf </code></pre> <p>However I still cannot resolve any services. </p>
<p>I deleted the pods, and their controller rescheduled them.</p> <p>Now DNS queries and service discovery are working.</p> <p>I am not sure if the coredns service is now reachable because I updated <code>kubelet --resolv-conf</code> or if the service just needed to restart.</p>
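<p>For anyone hitting the same thing, recreating the coredns pods via their label selector was all it took in my case; the Deployment brings them back with the updated kubelet settings:</p> <pre><code>kubectl -n kube-system delete pods -l k8s-app=kube-dns
</code></pre>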
<p>I have a simple SpringBoot application (really a REST-based microservice) that I am deploying in Kubernetes.</p> <p>It has one downstream dependency (another REST-based web service). I know that for the REST endpoint feeding the <strong><em>liveness</em></strong> probe I should not return a failure if my downstream dependency is unavailable / inaccessible (as Kubernetes restarting my microservice pod won't fix the dependency!).</p> <p>But in the REST endpoint feeding my <strong><em>readiness</em></strong> probe should I be checking downstream dependencies? I'd rather just do something basic but if I need to be checking more then I will.</p> <pre><code>@RequestMapping("/health")
public String getHealth() {
    return "OK";
}
</code></pre>
<p>Assuming the liveness of your spring-boot app (from the user's perspective) does not require the dependent service to be up, your idea of checking the dependency in the readiness probe is the right thing to do.</p> <p>As the dependent app is a REST service, you could expose an HTTP/HTTPS endpoint to be checked by the <em>readiness probe</em>, and keep the spring-boot app's health check (or similar) endpoint for the <em>liveness probe</em>.</p> <p>However, beware that the pod which runs your first microservice (the spring-boot app) could become unresponsive if the dependent service doesn't respond.</p> <p>Therefore, providing correct timeouts (initialDelaySeconds &amp; periodSeconds) with success and failure thresholds helps you mitigate such an unresponsive status. For example:</p> <pre><code>readinessProbe:
  httpGet:
    # make an HTTP request to the dependent's health/readiness endpoint
    port: &lt;port&gt;
    path: /health
    scheme: HTTP
  initialDelaySeconds: 10 # how long to wait before checking
  periodSeconds: 10       # how long to wait between checks
  successThreshold: 1     # how many successes to hit before accepting
  failureThreshold: 3     # how many failures to accept before failing
  timeoutSeconds: 15
</code></pre> <p>The official doc: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes</a></p> <p>A good article: <a href="https://itnext.io/kubernetes-readiness-probe-83f8a06d33d3" rel="nofollow noreferrer">https://itnext.io/kubernetes-readiness-probe-83f8a06d33d3</a></p>
<p>I have deployed a Google Cloud Kubernetes cluster. The cluster has an internal IP only.</p> <p>In order to access it, I created a virtual machine <code>bastion-1</code> which has an external IP.</p> <p>The structure:</p> <pre><code>My Machine -&gt; bastion-1 -&gt; Kubernetes cluster
</code></pre> <p>The connection to the proxy station:</p> <pre><code>$ ssh bastion -D 1080
</code></pre> <p>Now using <code>kubectl</code> through the proxy:</p> <pre><code>$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pods
No resources found.
</code></pre> <p>The Kubernetes master server is responding, which is a good sign.</p> <p>Now, trying to "ssh" into a pod:</p> <pre><code>$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl exec -it "my-pod" -- /bin/bash
error: error sending request: Post https://xxx.xxx.xxx.xxx/api/v1/namespaces/xxx/pods/pod-xxx/exec?command=%2Fbin%2Fbash&amp;container=xxx&amp;container=xxx&amp;stdin=true&amp;stdout=true&amp;tty=true: EOF
</code></pre> <p>Question: How do I allow a shell connection to a pod via the bastion? What am I doing wrong?</p>
<p>You can't do this right now.</p> <p>The reason is because the connections used for commands like exec and proxy use SPDY2.</p> <p>There's a bug report <a href="https://github.com/kubernetes/kubernetes/issues/58065" rel="nofollow noreferrer">here</a> with more information.</p> <p>You'll have to switch to using a HTTP proxy</p>
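<p>A rough sketch of the HTTP proxy approach, assuming you can install a CONNECT-capable proxy such as squid on the bastion (its default ACLs may need to allow CONNECT to the API server port):</p> <pre><code># on the bastion: run any HTTP CONNECT proxy, e.g. squid (listens on 3128)
sudo apt-get install -y squid

# from your machine: forward the proxy port instead of using -D/SOCKS
ssh bastion -L 3128:127.0.0.1:3128

# kubectl exec/logs should then work through the HTTP proxy
HTTPS_PROXY=http://127.0.0.1:3128 kubectl exec -it my-pod -- /bin/bash
</code></pre>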
<p>I have a Kubernetes cluster and when I try to scale a Deployment up to 8 pods, it gives an error message:</p> <p>"0/3 nodes are available: 3 insufficient cpu."</p> <p>After some time it shows 3/8 pods available and then 5/8 pods available with the same error, but never reached 8 pods.</p> <p>Recently we introduced CPU limits on Pods.</p> <p>What is the cause and solution for this error?</p>
<p>The scheduler is not able to schedule pods to any of the 3 nodes because the required resources are not available on them.</p> <p>This may be because the CPU request value of the pod is more than the CPU available on the nodes, or your nodes genuinely don't have any CPU capacity left to schedule new pods.</p> <p>Check the available CPU capacity of the nodes and free some up by removing non-required pods. Also reduce the CPU request value of the pod if one is specified.</p>
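<p>A quick way to see the requested vs. allocatable CPU per node (the last command assumes metrics-server is installed):</p> <pre><code>kubectl describe nodes | grep -A 8 "Allocated resources"
kubectl top nodes
</code></pre>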
<p>I installed a Spark on K8s operator in my K8s cluster and I have an app running within the k8s cluster. I'd like to enable this app to talk to the sparkapplication CRD service. Can I know what would be the endpoint I should use? (or what's the K8s endpoint within a K8s cluster)</p>
<p>It's clearly documented <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/quick-start-guide.md#driver-ui-access-and-ingress" rel="nofollow noreferrer">here</a>. So basically, it creates a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">NodePort</a> type of service. It also specifies that it could create an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> to access the UI. For example:</p> <pre><code>...
status:
  sparkApplicationId: spark-5f4ba921c85ff3f1cb04bef324f9154c9
  applicationState:
    state: COMPLETED
  completionTime: 2018-02-20T23:33:55Z
  driverInfo:
    podName: spark-pi-83ba921c85ff3f1cb04bef324f9154c9-driver
    webUIAddress: 35.192.234.248:31064
    webUIPort: 31064
    webUIServiceName: spark-pi-2402118027-ui-svc
    webUIIngressName: spark-pi-ui-ingress
    webUIIngressAddress: spark-pi.ingress.cluster.com
</code></pre> <p>In this case, you could use <code>35.192.234.248:31064</code> to access your UI. Internally within the K8s cluster, you could use <code>spark-pi-2402118027-ui-svc.&lt;namespace&gt;.svc.cluster.local</code> or simply <code>spark-pi-2402118027-ui-svc</code> if you are within the same namespace.</p>
<p>I manage a couple (presently, but will increase) clusters at GKE and up till now have been ok launching things manually as needed. I've started working my own API that can take in requests to spin up new resources on-demand for a specific cluster but in order to make it scalable I need to do something more dynamic than switching between clusters with each request. I have found a link for a Google API python client that supposedly can access GKE:</p> <p><a href="https://developers.google.com/api-client-library/python/apis/container/v1#system-requirements" rel="nofollow noreferrer">https://developers.google.com/api-client-library/python/apis/container/v1#system-requirements</a></p> <p>I've also found several other clients (specifically one I was looking closely at was the nodejs client from godaddy) that can access Kubernetes:</p> <p><a href="https://github.com/godaddy/kubernetes-client" rel="nofollow noreferrer">https://github.com/godaddy/kubernetes-client</a></p> <p>The Google API Client doesn't appear to be documented for use with GKE/kubectl commands, and the godaddy kubernetes-client has to access a single cluster master but can't reach one at GKE (without a kubectl proxy enabled first). So my question is, how does one manage kubernetes on GKE programmatically without having to use the command-line utilities in either nodejs or python?</p>
<p>I know this question is a couple of years old, but hopefully this helps someone. Newer GKE APIs are available for Node.js here: <a href="https://cloud.google.com/nodejs/docs/reference/container/0.3.x/" rel="nofollow noreferrer">https://cloud.google.com/nodejs/docs/reference/container/0.3.x/</a></p> <p>See list of container APIs here: <a href="https://developers.google.com/apis-explorer/#p/container/v1/" rel="nofollow noreferrer">https://developers.google.com/apis-explorer/#p/container/v1/</a></p> <p>Once connected via the API, you can access cluster details, which includes the connectivity information for connecting to the master with standard API calls.</p>
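<p>If you prefer not to pull in a client library at all, the same GKE REST API can be called directly; <code>my-project</code> is a placeholder and <code>-</code> means "all locations":</p> <pre><code>curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://container.googleapis.com/v1/projects/my-project/locations/-/clusters"
</code></pre>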
<p>Having a kubernetes <code>service</code> (of type <code>ClusterIP</code>) connected to a set of <code>pod</code>s, but none of them are currently ready - what will happen to the request? Will it:</p> <ul> <li>fail eagerly</li> <li>timeout</li> <li>wait until a ready pod is available (or forever, whichever is earlier)</li> <li>something else?</li> </ul>
<p><strong>It will time out.</strong></p> <p>Kube-proxy pulls out the IP addresses from healthy pods and sets as endpoints of the service (backends). Also, note that all kube-proxy does is to re-write the iptables when you create, delete or modify a service.</p> <p>So, when you send a request within your network and there is no one to reply, your request will timeout.</p>
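<p>You can see this for yourself by checking the service's endpoints; with no ready pods the list is empty (the output below is illustrative and <code>my-service</code> is a placeholder):</p> <pre><code>$ kubectl get endpoints my-service
NAME         ENDPOINTS   AGE
my-service   &lt;none&gt;      5m
</code></pre>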
<p>I am doing the CKAD (Certified Kubernetes Application Developer) 2019 using GCP (Google Cloud Platform) and I am facing a timeout issue when trying to <code>curl</code> a pod from another node. I set up a simple Pod with a simple Service.</p> <p>It looks like the firewall is blocking something (IP/port/protocol), but I cannot find any documentation.</p> <p>Any ideas?</p>
<p>So after some heavy investigation with <code>tshark</code> and the Google firewall I was able to unblock myself.</p> <p>If you add a new firewall rule to GCP allowing the <code>ipip</code> protocol for your node networks (in my case 10.128.0.0/9), the <code>curl</code> works!</p> <p>Sources: <a href="https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml" rel="nofollow noreferrer">https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml</a></p>
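<p>For reference, a sketch of the rule I mean (the rule name and network are placeholders; protocol number 4 is IP-in-IP per the IANA list above):</p> <pre><code>gcloud compute firewall-rules create allow-ipip \
  --network default \
  --allow 4 \
  --source-ranges 10.128.0.0/9
</code></pre>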
<p>[Cross-posted on the k8s Discuss forum -- apologies if that's considered bad form.]</p> <p>Greetings!</p> <p>With a fresh 1.13.3 cluster stood up via Kubernetes The Hard Way plus the additional configuration for TLS bootstrapping, I'm struggling to get auto approval of kubelet <strong>server</strong> certificates. Client certs are bootstrapping fine and kubelet -> apiserver communication is working smoothly, but apiserver -> kubelet communication is the problem at hand.</p> <p>Server certificates are being requested, but they are stuck pending manual intervention, and I've not been able to divine the RBAC incantation needed to auto-approve server CSRs the same way that client CSRs are.</p> <p>Here are the CSRs (after having just reinstantiated the cluster):</p> <pre><code>NAME AGE REQUESTOR CONDITION csr-m7rjs 4s system:node:k8s-lab3-worker1 Pending node-csr-aTpBsagYzYaZYJM6iGMN5AvqzVXATDj1BrmZs_dZCJA 5s system:bootstrap:ec5591 Approved,Issued </code></pre> <p>At this point obviously apiserver -> kubelet communication (via <code>kubectl exec</code> or <code>logs</code>) fails. If I manually approve the certificate, things work as expected.</p> <p>The fact that both client and server CSRs are being issued both leads me to believe the kubelet is configured properly (plus the fact that manually approving makes it go).</p> <p>The main thing that triggers my spidey sense is the fact that when the apiserver starts up for the first time, I see:</p> <pre><code>Feb 6 00:14:13 k8s-lab3-controller1[3495]: I0206 00:14:13.697030 3495 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient Feb 6 00:14:13 k8s-lab3-controller1[3495]: I0206 00:14:13.706561 3495 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient </code></pre> <p>The cluster roles for client cert signing are auto-created by the apiserver. But certificatesigningrequests:selfnodeserver is not auto-created. Does this suggest auto approval for server certs isn't actually implemented or supported?</p> <p>Well, I've created it manually:</p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults rules: - apiGroups: ["certificates.k8s.io"] resources: ["certificatesigningrequests/selfnodeserver"] verbs: ["create"] </code></pre> <p>And then I bind the role to the system:nodes group:</p> <pre><code>kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: auto-approve-server-renewals-for-nodes subjects: - kind: Group name: system:nodes apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver apiGroup: rbac.authorization.k8s.io </code></pre> <p>And just to be sure, system:nodes is one of the groups associated with the server CSR:</p> <pre><code>$ kubectl get csr csr-m7rjs -o template --template {{.spec.groups}} [system:nodes system:authenticated] </code></pre> <p>I've tried several hours worth of black belt levels of Copying and Pasting from Stack Overflow (with most of the advice really applying to older versions of Kubernetes) to no avail. 
I'm hoping the brain trust here can spot what I'm doing wrong.</p> <p>In case it's relevant, here's how I'm starting the apiserver (and again this is v1.13.3 so I'm ):</p> <pre><code>/usr/local/bin/kube-apiserver \ --advertise-address=172.24.22.168 \ --allow-privileged=true \ --anonymous-auth=false \ --apiserver-count=3 \ --audit-log-maxage=30 \ --audit-log-maxbackup=10 \ --audit-log-maxsize=100 \ --audit-log-path=/var/log/audit.log \ --authorization-mode=Node,RBAC \ --bind-address=0.0.0.0 \ --client-ca-file=/etc/kubernetes/pki/ca.crt \ --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,AlwaysPullImages,DenyEscalatingExec,SecurityContextDeny,EventRateLimit \ --admission-control-config-file=/etc/kubernetes/admissionconfig.yaml \ --enable-bootstrap-token-auth=true \ --enable-swagger-ui=true \ --etcd-cafile=/etc/kubernetes/pki/ca.crt \ --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \ --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key \ --etcd-servers=https://172.24.22.168:2379 \ --event-ttl=1h \ --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \ --feature-gates=RotateKubeletServerCertificate=true \ --insecure-port=0 \ --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt \ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \ --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \ --kubelet-https=true \ --profiling=false \ --repair-malformed-updates=false \ --runtime-config=api/all \ --service-account-lookup=true \ --service-account-key-file=/etc/kubernetes/pki/sa.crt \ --service-cluster-ip-range=10.32.0.0/24 \ --service-node-port-range=30000-32767 \ --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \ --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \ --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \ --v=2 </code></pre> <p>(Given RotateKubeletServerCertificate is true by default as of 1.12 I suppose the --feature-gates argument is redundant but I happened to have left it in.)</p> <p>Many thanks for any help you might be able to offer.</p>
<p>It turns out auto approval for server CSRs was removed.</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/73356" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/73356</a></p> <p>So much for that.</p>
<p>I'm new to using Helm and I'm not sure which is the best approach when you have two deployments. I've created a chart for my application. It contains two deployments:</p> <ol> <li>app-nginx-phpfpm.yaml</li> <li>app-mysql.yaml</li> </ol> <p>Should I keep them in the same chart or should I create a sub-chart for app-mysql.yaml? </p>
<p>You can have both, depending on how you want to structure your deployments.</p> <p>You should keep in mind the following</p> <h1>Considerations</h1> <h2>Single chart benefits</h2> <ul>
<li>Easier to deploy: only deploy once, single diffing</li>
<li>Single version, so rollback/upgrades happen on a single element</li>
<li>You can uninstall parts by using feature flags</li>
<li>Installing a new component without touching the rest of the elements may prove tricky</li>
</ul> <h2>Single chart caveats</h2> <ul>
<li>Harder to deploy uncoupled services, e.g., a mock service for data access while upgrading the database</li>
<li>Harder to decouple and test each instance</li>
<li>Harder to name and make sense of each component (in different releases your <code>{{.Release.Name}}</code> would already change for each "app").</li>
<li>Harder to provide/keep track of different release cycles for different components</li>
<li>Versions stored in a single ConfigMap, which may lead to size limit problems if you have charts which contain, for example, testing data embedded</li>
</ul> <h1>Note on version control</h1> <p>You can have a master chart that you use for testing with all subcharts, and package the subcharts independently but still have everything on the same repo.</p> <p>For example, I usually keep things like either:</p> <pre><code>. / helm / charts / whatever / charts / subchart1
. / helm / charts / whatever / charts / subchart2
. / helm / charts / whatever / values.yaml
</code></pre> <p>or</p> <pre><code>. / helm / charts / whatever-master / values.yaml
. / helm / charts / whatever-master / requirements.yaml
. / helm / charts / whatever-subchart1 / values.yaml
. / helm / charts / whatever-subchart2 / values.yaml
</code></pre> <p>And use the requirements.yaml on the master chart to pull from <code>file://../whatever-subchartx</code>.</p> <p>This way I can have <code>whatever-stress-test</code> and <code>whatever-subcomponent-unit-test</code> while still being flexible to deploy separately components that have different release cycles if so wanted.</p> <p>This will in the end also depend on your upgrade strategy. Canary upgrades will probably require you to handle stateful microservices in a more specific way than you can have with a single chart, so plan accordingly.</p>
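<p>A sketch of what that <code>requirements.yaml</code> could look like (chart names and versions are placeholders); running <code>helm dependency update</code> in the master chart then pulls the subcharts in:</p> <pre><code># helm/charts/whatever-master/requirements.yaml
dependencies:
  - name: whatever-subchart1
    version: 0.1.0
    repository: "file://../whatever-subchart1"
  - name: whatever-subchart2
    version: 0.1.0
    repository: "file://../whatever-subchart2"
</code></pre>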
<p>I am setting up Jenkins on Kubernetes via Helm to run my Java deployment.</p> <p>This spawns new pods for every build -> fair enough.</p> <p>Then I saw that I need to store some Maven jars which will be used by other builds,</p> <p>so I set up an NFS share and mounted it to /home/jenkins/.m2 via the Jenkins configuration (web console).</p> <p>Then I keep getting the error:</p> <pre><code>org.apache.maven.repository.LocalRepositoryNotAccessibleException: Could not create local repository at /home/jenkins/.m2/repository
</code></pre> <p>I even tried</p> <pre><code>securityContext:
  runAsUser: 1000
  fsGroup: 1000
</code></pre> <p>in the deployment.yaml.</p> <p>I tried adding</p> <pre><code>USER root
RUN chown -R root /home/jenkins
</code></pre> <p>in the slave container.</p> <p>And it's not just the .m2 directory; the slave is not able to write anything on this PVC.</p> <p>At this point I’m confused about where else the problem could be. Can someone please help?</p>
<p>I had a similar issue. I solved it this way (see the Dockerfile sketch below):</p> <ul> <li>made my own modified jnlp image with Maven installed on it</li> <li>moved the m2 repository out of the Jenkins home directory into /opt/m2 through the Maven configuration</li> <li>created a persistent disk, bound a PV and PVC to it, and mounted it on /opt/m2 inside the pod</li> <li>and last, I set in the Dockerfile for the jnlp image that the user running the image is root, since every mount on Kubernetes is mounted as root. Not a very elegant, but a working solution</li> </ul>
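<p>A minimal sketch of such a Dockerfile; the base image tag and package manager are assumptions, so adjust them for the agent image you actually use. Builds can then point Maven at the mounted volume with <code>mvn -Dmaven.repo.local=/opt/m2 ...</code>:</p> <pre><code>FROM jenkins/jnlp-slave:latest
USER root
RUN apt-get update &amp;&amp; apt-get install -y maven &amp;&amp; rm -rf /var/lib/apt/lists/*
# /opt/m2 is where the persistent volume gets mounted in the pod template
RUN mkdir -p /opt/m2
</code></pre>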
<p>What is the usual way to organize pods in a cluster in Kubernetes?</p> <p>I have a Jenkins build server, Docker registry, Git repository and other development tools that I want to run in Google Container Engine. Do I create one cluster for each of them? Or can multiple pods be scheduled on each node? </p> <p>So my question is: Would you create one cluster that holds all these services, or multiple clusters? The same question applies to production, qa etc enviroments. Do I create one cluster for each enviroment or do I have them in the same cluster?</p>
<p>Usually the pods are hosted by different nodes in the cluster, depending on the resources each pod needs and on making sure a running pod does not overload its node. Moreover, a single node in a cluster can host multiple pods.</p> <p>Specific to the question: when talking about all the development tools, the pods here should be hosted on a single node only, because all of them have to communicate with each other. When the question arises for different environments, it would be wise to host them on different nodes but in the same cluster.</p> <p>Kubernetes provides us with the added advantage of selecting the nodes on which we wish to run our pods. The nodeSelector concept can come in handy in this case.</p> <pre><code>kubectl get nodes
kubectl get pods
</code></pre> <p>Label any node that you have in your cluster:</p> <pre><code>kubectl label nodes gke-cluster1-default-pool-4db7fabf-zzx9 disktype=ccd
kubectl get nodes --show-labels
</code></pre> <p>Now create a file to create a pod and mention the nodeSelector in that file.</p> <p>nano task6pod.yaml</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod6
  labels:
    env: test
spec:
  containers:
  - name: container6
    image: nginx
  nodeSelector:
    disktype: ccd
</code></pre> <p>The nodeSelector in this file uses the same label that was applied to the node earlier.</p> <pre><code>kubectl create -f task6pod.yaml
kubectl get pods -o wide
</code></pre> <p>After this command you can see that the newly created pod is scheduled on the node you wanted, the one with the label.</p>
<p>When issuing a helm upgrade on a running pod, my configmap is updated, but will the pod know about the configmap updated values automatically or is there another step that I need to take to inject new configmap values into the pod?</p> <p>My overall goal is to avoid having to interact with the running pod such as a delete or restart / reinstall.</p> <p>I've seen a lot of info about changing sha1sum and doing some workarounds, but my question is more basic - do pods automatically become aware of new configmap items?</p> <p>---- UPDATE --- so what we ended up doing was:</p> <p>helm upgrade -n release -f release/values.yaml <strong>--recreate-pods</strong> </p> <p>although this terminates the existing pod, another one is instantly started upon issuing the command, meaning "near zero" downtime.</p>
<p>If your Helm chart creates a ConfigMap, and that ConfigMap is mounted as a volume into a pod, then when the ConfigMap updates, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically" rel="nofollow noreferrer">the container filesystem also updates</a>. It's then up to the application to notice that the files have changed.</p> <p>Tricks like <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments" rel="nofollow noreferrer">setting a hash of the file contents as a pod annotation</a> are specifically there to cause a Deployment to update in a way that will delete and recreate existing Pods. This is okay! Pods in Kubernetes are very disposable, and if you delete a Pod managed by a Deployment it will get automatically recreated. If your application only reads the ConfigMap contents at startup time (this is very typical) then you need to do something like this to cause the Pod to restart on its own (copied out of the linked documentation):</p> <pre><code>kind: Deployment spec: template: metadata: annotations: checksum/config: {{ include (print $.Template.BasePath &quot;/configmap.yaml&quot;) . | sha256sum }} </code></pre>
<p>We have upgraded our Kubernetes Service cluster on Azure to the latest version, 1.12.4. After that we suddenly noticed that pods and nodes can no longer communicate with each other by private IP:</p> <pre><code>kubectl get pods -o wide -n kube-system -l component=kube-proxy
NAME               READY   STATUS    RESTARTS   AGE   IP           NODE
kube-proxy-bfhbw   1/1     Running   2          16h   10.0.4.4     aks-agentpool-16086733-1
kube-proxy-d7fj9   1/1     Running   2          16h   10.0.4.35    aks-agentpool-16086733-0
kube-proxy-j24th   1/1     Running   2          16h   10.0.4.97    aks-agentpool-16086733-3
kube-proxy-x7ffx   1/1     Running   2          16h   10.0.4.128   aks-agentpool-16086733-4
</code></pre> <p>As you can see, the node aks-agentpool-16086733-0 has the private IP 10.0.4.35. When we try to check the logs of pods which are on this node we get this error:</p> <blockquote> <p>Get <a href="https://aks-agentpool-16086733-0:10250/containerLogs/emw-sit/nginx-sit-deploy-864b7d7588-bw966/nginx-sit?tailLines=5000&amp;timestamps=true" rel="nofollow noreferrer">https://aks-agentpool-16086733-0:10250/containerLogs/emw-sit/nginx-sit-deploy-864b7d7588-bw966/nginx-sit?tailLines=5000&amp;timestamps=true</a>: dial tcp 10.0.4.35:10250: i/o timeout</p> </blockquote> <p>We have Tiller (Helm) on this node as well, and if we try to connect to Tiller we get this error from the client PC:</p> <blockquote> <p>shmits-imac:~ andris.shmits01$ helm version Client: &amp;version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"} Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.0.4.35:10250: i/o timeout</p> </blockquote> <p>Does anybody have any idea why the pods and nodes lost connectivity by private IP?</p>
<p>So, after we scaled the cluster down from 4 nodes to 2, the problem disappeared. And after we scaled it back up from 2 nodes to 4, everything started working fine.</p>
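<p>For anyone who wants to try the same workaround, scaling an AKS node pool down and back up can be done with the Azure CLI; the resource group and cluster names below are placeholders:</p> <pre><code># scale down
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 2

# scale back up
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 4
</code></pre>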
<p>I'm trying to deploy a Kubernetes Pod in AKS (I'm new to Kubernetes, so at this stage, I just want to create a container, deploy to Kubernetes and connect to it).</p> <p>My YAML file is as follows:</p> <pre><code>apiVersion: v1
kind: Pod
spec:
 containers:
 - name: dockertest20190205080020
   image: dockertest20190205080020.azurecr.io
   ports:
   - containerPort: 443
metadata:
  name: my-test
</code></pre> <p>I've created the image in Azure Container Registry and, according to the CLI, successfully deployed it to Kubernetes.</p> <p>After deploying, I used the following command:</p> <pre><code>kubectl get service
</code></pre> <p>And it tells me there is no External IP to connect to. I then tried:</p> <pre><code>kubectl describe pod my-test
</code></pre> <p>Which gave the following errors:</p> <pre><code>Events:
  Warning  Failed   4m (x2221 over 8h)  kubelet, aks-nodepool1-27401563-2  Error: ImagePullBackOff
  Normal   BackOff  0s (x2242 over 8h)  kubelet, aks-nodepool1-27401563-2  Back-off pulling image "dockertest20190205080020.azurecr.io"
</code></pre> <p>I then tried editing the deployment:</p> <pre><code>kubectl edit pods my-test
</code></pre> <p>Which gave me the error:</p> <pre><code>message: 'containers with unready status: [dockertest20190205080020]'
</code></pre> <p>I'm not a little unsure what my next diagnostic step would be. I get the impression there's an issue with the container or the container registry, but I'm unsure how to determine what that may be.</p>
<p>What happens here (most likely) is that your AKS doesn't have permissions to pull images from your ACR (that's the default behaviour). You need to grant those (<a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks#grant-aks-access-to-acr" rel="nofollow noreferrer">link</a>):</p> <pre><code>#!/bin/bash

AKS_RESOURCE_GROUP=myAKSResourceGroup
AKS_CLUSTER_NAME=myAKSCluster
ACR_RESOURCE_GROUP=myACRResourceGroup
ACR_NAME=myACRRegistry

# Get the id of the service principal configured for AKS
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)

# Get the ACR registry resource id
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)

# Create role assignment
az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID
</code></pre> <p>An alternative is to just use a docker login secret (that article mentions that as well).</p> <p>Example image in ACR: <a href="https://i.stack.imgur.com/2TVmO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2TVmO.png" alt="enter image description here"></a></p> <p>The image name would be</p> <p>clrtacr.azurecr.io/dns:tag (or without the tag for latest)</p>
<p>My Java microservices are running in a k8s cluster hosted on AWS EC2 instances.</p> <p>I have around 30 microservices (a good mix of Node.js and Java 8) running in a K8s cluster. I am facing a challenge where my Java application pods get restarted unexpectedly, which leads to an increase in the application 5xx count.</p> <p>To debug this, I started a New Relic agent in the pod along with the application and found the following graph:</p> <p><a href="https://i.stack.imgur.com/lSUfE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lSUfE.png" alt="enter image description here"></a></p> <p>Where I can see that I have an Xmx value of 6GB and my usage is at most 5.2GB.</p> <p>This clearly states that the JVM is not crossing the Xmx value.</p> <p>But when I describe the pod and look for the last state it says "Reason:Error" with "Exit code: 137"</p> <p><a href="https://i.stack.imgur.com/PPHVw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PPHVw.png" alt="enter image description here"></a></p> <p>Then on further investigation I find that my Pod average memory usage is close to its limit all the time (allocated 9Gib, usage ~9Gib). I am not able to understand why memory usage is so high in the Pod even though I have only one process running (the JVM) and that too is restricted with a 6Gib Xmx.</p> <p><a href="https://i.stack.imgur.com/Aw06D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Aw06D.png" alt="enter image description here"></a> </p> <p>When I log in to my worker nodes and check the status of the docker containers I can see the last container of that application in the Exited state, and it says "Container exits with non-zero exit code 137"</p> <p>I can see the worker node kernel logs as:</p> <p><a href="https://i.stack.imgur.com/LtDL0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LtDL0.png" alt="enter image description here"></a></p> <p>which shows the kernel is terminating my process running inside the container.</p> <p>I can see I have a lot of free memory on my worker node.</p> <p><a href="https://i.stack.imgur.com/R9uLY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R9uLY.png" alt="enter image description here"></a></p> <p>I am not sure why my pods get restarted again and again. Is this k8s behaviour or something odd in my infrastructure? This forces me to move my application from containers back to VMs, as this leads to an increase in the 5xx count.</p> <p>EDIT: I am getting OOM after increasing memory to 12GB.</p> <p><a href="https://i.stack.imgur.com/EWPrR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EWPrR.png" alt="enter image description here"></a></p> <p>I am not sure why the POD is getting killed because of OOM though the JVM Xmx is only 6 GB.</p> <p>Need help!</p>
<p>Some older Java versions (prior to the Java 8u131 release) don't recognize that they are running in a container. So even if you specify a maximum heap size for the JVM with -Xmx, the JVM will set the maximum heap size based on the host's total memory instead of the memory available to the container, and then, when a process tries to allocate memory over its limit (defined in a pod/deployment spec), your container gets OOMKilled.</p> <p>These problems might not pop up when running your Java apps in a K8s cluster locally, because the difference between the pod memory limit and the total local machine memory isn't big. But when you run it in production on nodes with more memory available, the JVM may go over your container memory limit and will be OOMKilled.</p> <p>Starting from Java 8 (u131 release) it is possible to make the JVM "container-aware" so that it recognizes constraints set by container control groups (cgroups).</p> <p>For <strong>Java 8</strong> (from the u131 release) <strong>and Java 9</strong> you can set these experimental flags for the JVM:</p> <pre><code>-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
</code></pre> <p>They will set the heap size based on your container cgroup memory limit, which is defined as "resources: limits" in the container definition part of your pod/deployment spec. There can still be cases of JVM off-heap memory growth in Java 8, so you might want to monitor that, but overall these experimental flags should handle it as well.</p> <p>From <strong>Java 10</strong> this container support is the new default and is enabled/disabled by using this flag:</p> <pre><code>-XX:+UseContainerSupport -XX:-UseContainerSupport
</code></pre>
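<p>For illustration, this is roughly how the memory limit and the flags fit together in a pod/deployment spec; the names are made up, the sizes echo the 9Gi limit and 6GB Xmx from the question, and the JAVA_OPTS variable assumes your entrypoint passes it to the JVM:</p> <pre><code>containers:
- name: java-service            # hypothetical container name
  image: my-registry/java-service:latest
  resources:
    requests:
      memory: "6Gi"
    limits:
      memory: "9Gi"             # cgroup limit the JVM should respect
  env:
  - name: JAVA_OPTS             # assumed to be appended to the java command line
    value: "-Xmx6g -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
</code></pre> <p>Keep in mind that heap is not the only memory the JVM uses; metaspace, thread stacks and direct buffers also count against the pod limit, which is why the limit is set well above -Xmx here.</p>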
<p>I have kubernetes cluster running on <code>Ubuntu 16.04</code>. When I run <code>nslookup kubernetes.default</code> on <code>master</code> it shows below:</p> <pre><code>Server: 192.168.88.21 Address: 192.168.88.21#53 ** server can't find kubernetes.default: NXDOMAIN </code></pre> <p>Below is the content of <code>/etc/resolv.conf</code></p> <pre><code># Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 192.168.88.21 nameserver 127.0.1.1 search VISDWK.local </code></pre> <p>Using kubernetes version</p> <pre><code>kubeadm version: &amp;version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:36:44Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>Using weave for networking and installed using:</p> <pre><code>kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" </code></pre> <p><code>coredns</code> pods are running fine:</p> <pre><code>coredns-86c58d9df4-42xqc 1/1 Running 8 1d6h coredns-86c58d9df4-p6d98 1/1 Running 7 1d1h </code></pre> <p>Below are the logs of <code>coredns-86c58d9df4-42xqc</code></p> <p><code>.:53 2019-02-08T08:40:10.038Z [INFO] CoreDNS-1.2.6 2019-02-08T08:40:10.039Z [INFO] linux/amd64, go1.11.2, 756749c CoreDNS-1.2.6 linux/amd64, go1.11.2, 756749c [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769 t</code></p> <p>Can anyone please help me debug this issue. Please help. Thanks.</p>
<p>something wrong with busybox image. it works but am getting some errors after the nslookup command is run</p> <pre><code>[node1 ~]$ kubectl run busybox1 --image busybox --restart=Never --rm -it -- sh If you don't see a command prompt, try pressing enter. / # nslookup kubernetes Server: 10.96.0.10 Address: 10.96.0.10:53 Name: kubernetes.default.svc.cluster.local Address: 10.96.0.1 *** Can't find kubernetes.svc.cluster.local: No answer *** Can't find kubernetes.cluster.local: No answer *** Can't find kubernetes.default.svc.cluster.local: No answer *** Can't find kubernetes.svc.cluster.local: No answer *** Can't find kubernetes.cluster.local: No answer / # exit pod &quot;busybox1&quot; deleted [node1 ~]$ </code></pre> <p>try with below image. it works perfectly. other versions are throwing some errors</p> <h1>busybox:1.28</h1> <pre><code>[node1 ~]$ kubectl run busybox1 --image busybox:1.28 --restart=Never --rm -it -- sh If you don't see a command prompt, try pressing enter. / # nslookup kubernetes Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: kubernetes Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local / # exit pod &quot;busybox1&quot; deleted </code></pre> <h1>Test the api server from master</h1> <pre><code>[node1 ~]$ kubectl get svc|grep kubernetes kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 2h [node1 ~]$ [node1 ~]$ curl -k https://10.96.0.1/version { &quot;major&quot;: &quot;1&quot;, &quot;minor&quot;: &quot;11&quot;, &quot;gitVersion&quot;: &quot;v1.11.7&quot;, &quot;gitCommit&quot;: &quot;65ecaf0671341311ce6aea0edab46ee69f65d59e&quot;, &quot;gitTreeState&quot;: &quot;clean&quot;, &quot;buildDate&quot;: &quot;2019-01-24T19:22:45Z&quot;, &quot;goVersion&quot;: &quot;go1.10.7&quot;, &quot;compiler&quot;: &quot;gc&quot;, &quot;platform&quot;: &quot;linux/amd64&quot; }[node1 ~]$ </code></pre>
<p>The examples in the k8s java client all use default client, see <a href="https://github.com/kubernetes-client/java/tree/client-java-parent-2.0.0/examples/src/main/java/io/kubernetes/client/examples" rel="nofollow noreferrer">here</a>. </p> <pre><code>ApiClient client = Config.defaultClient(); Configuration.setDefaultApiClient(client); </code></pre> <p>How I can config k8s client so that it can talk to k8s CRDs (say, sparkoperator) from a k8s cluster pod? How should I config this client? (basePath, authentications?) And what is the basePath I should use within a pod in the same k8s cluster?</p>
<p>You can use the <code>defaultClient</code> for that as well. </p> <p>The <code>defaultClient()</code> method will create an in-cluster client if the application is running inside the cluster and has the correct service account.</p> <p>You can see the rules for <code>defaultClient</code> in the comments on the method <a href="https://github.com/kubernetes-client/java/blob/client-java-parent-2.0.0/util/src/main/java/io/kubernetes/client/util/Config.java#L91" rel="nofollow noreferrer">here</a>: </p> <pre class="lang-java prettyprint-override"><code>/**
 * Easy client creation, follows this plan
 *
 * &lt;ul&gt;
 *   &lt;li&gt;If $KUBECONFIG is defined, use that config file.
 *   &lt;li&gt;If $HOME/.kube/config can be found, use that.
 *   &lt;li&gt;If the in-cluster service account can be found, assume in cluster config.
 *   &lt;li&gt;Default to localhost:8080 as a last resort.
 * &lt;/ul&gt;
 *
 * @return The best APIClient given the previously described rules
 */
</code></pre> <p>So if the application using the k8s java client runs on the cluster itself, it should be able to access resources on the cluster as long as it has the correct permissions. You need to allow your client application to access the CRDs, like this example of a <code>ClusterRole</code> for the <a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/rbac-crd.md" rel="nofollow noreferrer">CRDs of Prometheus Operator</a>:</p> <pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prometheus-crd-view
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["monitoring.coreos.com"]
  resources: ["alertmanagers", "prometheuses", "prometheusrules", "servicemonitors"]
  verbs: ["get", "list", "watch"]
</code></pre>
<p>currently kubectl assigns the IP address to a pod and that is shared within the pod by all the containers. </p> <p>I am trying to assign a static IP address to a pod i.e in the same network range as the one assigned by kubectl, I am using the following deployment file </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis spec: replicas: 1 template: metadata: labels: run: rediscont spec: containers: - name: redisbase image: localhost:5000/demo/redis ports: - containerPort: 6379 hostIP: 172.17.0.1 hostPort: 6379 </code></pre> <p>On the dockerhost where its deployed i see the following:</p> <pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 4106d81a2310 localhost:5000/demo/redis "/bin/bash -c '/root/" 28 seconds ago Up 27 seconds k8s_redisbase.801f07f1_redis-1139130139-jotvn_default_f1776984-d6fc-11e6-807d-645106058993_1b598062 71b03cf0bb7a gcr.io/google_containers/pause:2.0 "/pause" 28 seconds ago Up 28 seconds 172.17.0.1:6379-&gt;6379/tcp k8s_POD.99e70374_redis-1139130139-jotvn_default_f1776984-d6fc-11e6-807d-645106058993_8c381981 </code></pre> <p><strong>The IP tables-save gives the following output</strong></p> <pre><code>-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 6379 -j DNAT --to-destination 172.17.0.3:6379 </code></pre> <p>Even with this, from other pods the IP 172.17.0.1 is not accessible. Basically the question is how to assign static IP to a pod so that 172.17.0.3 doesn't get assigned to it </p>
<p>Generally, assigning a Pod a static IP address is an anti-pattern in Kubernetes environments. There are a couple of approaches you may want to explore instead. Using a Service to front-end your Pods (or to front-end even just a single Pod) will give you a stable network identity, and allow you to horizontally scale your workload (if the workload supports it). Alternately, using a StatefulSet may be more appropriate for some workloads, as it will preserve startup order, host name, PersistentVolumes, etc., across Pod restarts.</p> <p>I know this doesn't necessarily <em>directly</em> answer your question, but hopefully it provides some additional options or information that proves useful.</p>
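<p>To make that concrete, a minimal ClusterIP Service for the redis Deployment shown in the question might look like the sketch below; it reuses the <code>run: rediscont</code> label from the question, and the Service name is arbitrary:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    run: rediscont        # matches the pod label used in the question's Deployment
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
</code></pre> <p>Other pods can then reach Redis at the stable DNS name <code>redis</code> on port 6379, regardless of which pod IP gets assigned after restarts or rescheduling.</p>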
<p>I have simple python 2.7 program running in container off Kubernetes (AKS) which is printing debug information to standard output</p> <pre><code>response = requests.post(uri,data=body, headers=headers) if (response.status_code &gt;= 200 and response.status_code &lt;= 299): print 'Accepted ' + log_type + ' on ' + rfc1123date else: print "Response code: {}".format(response.status_code) </code></pre> <p>I don't see it with <code>kubectl logs container_name</code>, output is empty (but I know it is fine because of successful post). I tried to add <code>"-u"</code> to <code>CMD ["-u","syslog2la.py"]</code> in Dockerfile, but it didn't help. How to get the python print in '<em>kubectl logs</em>'?</p>
<p>Add the following to your Dockerfile:</p> <pre><code>ENV PYTHONUNBUFFERED=0 </code></pre>
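<p>For context, these are the kinds of Dockerfile lines involved; note that the <code>-u</code> flag only has an effect when it is passed to the <code>python</code> executable itself, so the script name below is taken from the question and the rest is a hedged sketch:</p> <pre><code>ENV PYTHONUNBUFFERED=0

# alternatively, run the interpreter unbuffered explicitly:
CMD ["python", "-u", "syslog2la.py"]
</code></pre>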
<p>Kubernetes ships with a <code>ConfigMap</code> called <code>coredns</code> that lets you specify DNS settings. I want to modify or patch a small piece of this configuration by adding:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: upstreamNameservers: | ["1.1.1.1", "1.0.0.1"] </code></pre> <p>I know I can use <code>kubectrl edit</code> to edit the <code>coredns</code> <code>ConfigMap</code> is there some way I can take the above file containing only the settings I want to insert or update and have it merged on top of or patched over the existing <code>ConfigMap</code>?</p> <p>The reason for this is that I want my deployment to be repeatable using CI/CD. So, even if I ran my Helm chart on a brand new Kubernetes cluster, the settings above would be applied.</p>
<p>This will apply the same patch to that single field:</p> <pre><code>kubectl patch configmap/coredns \ -n kube-system \ --type merge \ -p '{"data":{"upstreamNameservers":"[\"1.1.1.1\", \"1.0.0.1\"]"}}' </code></pre>
<p>This is my first time trying to deploy a microservices architecture into Kubernetes. At the beginning, I was considering to use Ambassador as my API Gateway. I also have an authentication service which validates users and generates a JWT token, however, I need to validate this token every time a service is called. This represents an overload problem (since every time the API Gateway receives traffic it will go to this external authentication service to validate the JWT token) and Ambassador does not have an option to do this filtering without the use of the external service.</p> <p>Using the Zuul Gateway seems like the best option in this case, since it allows me to validate the JWT token inside the gateway (not through an external service like Ambassador). However, I'm not sure how Zuul is going to work if I deploy it in Kubernetes since, as I understand, Zuul requires to have the address of the service discovery (like Eureka).</p> <p>if I deploy Zuul in my Kubernetes cluster, then how it will be able to locate my services? </p> <p>Locally, for example, there is no problem since I was using Eureka before, and I knew its address. Also, I don't think having Eureka deployed in Kubernetes will be a good idea, since it will be redundant.</p> <p>If it is not possible to do it with Zuul, is there another API Gateway or approach where I can validate tokens using the Gateway instead of relying on an external authentication service like Ambassador does?</p> <p>Thank you.</p>
<p>In Kubernetes you already have a "discovery" service, which is the Kubernetes Service. It locates pods and serves as a load balancer for them.</p> <p>Let's say you have a Zuul configuration like this:</p> <pre><code>zuul:
  routes:
    books-service:
      path: /books/**
      serviceId: books-service
</code></pre> <p>which routes requests matching <code>/books/**</code> to the service <code>books-service</code>. Usually you have Eureka, which gives you the real address of <code>books-service</code>, but not now. </p> <p>And this is where Ribbon can help you - it allows you to manually tune routing after Zuul has matched its request. So you need to add this to the configuration:</p> <pre><code>books-service.ribbon.listOfServers: "http://books:8080"
</code></pre> <p>and after Zuul has found the serviceId (<code>books-service</code>) it will route the request to <code>books:8080</code>.</p> <p>And <code>books:8080</code> is just a Kubernetes Service:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: books
spec:
  selector:
    app: spring-books-service
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 9376
</code></pre> <p>You can say it's a load balancer that takes traffic on :8080 and redirects it to pods with the label <code>app: spring-books-service</code>.</p> <p>All you have to do next is to assign labels to the pods (via deployments, for example).</p> <p>By the way, you can configure Ribbon like this in any app, and Kubernetes will locate all your apps (pods) with its services, so you don't need any discovery service at all! And since k8s Services are much more reliable than Eureka, you can simply remove it.</p>
<p>I'm trying to understand some concepts behind Kubernetes, so I have the following: </p> <ol> <li>A Kubernetes cluster with 2 nodes</li> <li>A deployment of nginx with 2 replicas, so I have 2 pods. </li> <li><p>A service exposing nginx on port 32134, so I can access each node with: </p> <p><a href="http://node01:32134" rel="nofollow noreferrer">http://node01:32134</a> or <a href="http://node02:32134" rel="nofollow noreferrer">http://node02:32134</a></p></li> </ol> <p>So, let's get to my doubts:</p> <ol> <li><p>Doing a <code>kubectl describe pod nginx-001</code> I see this pod running on <strong>node01</strong>. Doing the same command for pod nginx-002 I see this pod running on <strong>node01</strong> as well. So, if my pods are running on only one node, how can I get HTTP 200 from both URLs (node01 and node02)? Node02 should not respond because it doesn't have any nginx running, right?</p></li> <li><p>Looking at <code>kubectl logs -f nginx-001</code> I see all the access request logs. The strange thing is: no matter whether I hit <code>http://node01</code> or <code>http://node02</code>, I always get logs in the nginx-001 pod; the other pod (nginx-002) never gets requests in its log. It seems like <strong>k8s</strong> is always redirecting all requests to nginx-001 and forgetting the other pod. </p></li> </ol> <p><strong>Important note</strong> I'm using Digital Ocean Kubernetes services</p>
<p>1) That's where kube-proxy steps in. Kube-proxy is responsible for routing your requests to the pods, irrespective of where the pods are deployed. You could have a 50-node cluster and a 10-replica nginx deployment with all replicas deployed on only one node, and it would still be kube-proxy's job to route requests sent to the service.</p> <p>2) That basically depends on how much load you're encountering. You are probably the only one hitting the nginx service, so it keeps sending requests to one pod. A Kubernetes Service enables load balancing across a set of server pods, allowing client pods to operate independently and durably. You can verify that both pods are registered behind the service with the commands shown below.</p>
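<p>To confirm this yourself, check that both nginx pods show up as endpoints of the service; the service name is a placeholder since it wasn't given in the question:</p> <pre><code># both pod IPs should be listed as endpoints
kubectl get endpoints &lt;service-name&gt;

# shows the selector, NodePort and the same endpoint list
kubectl describe service &lt;service-name&gt;
</code></pre>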
<p>I have a web application (SOAP service) running on a Tomcat 8 server in OpenShift. The payload size is relatively small, with 5-10 elements, and the traffic is also small (300 calls per day, 5-10 max threads at a time). I'm a little confused about the Pod resource restrictions. How do I come up with min and max CPU and memory limits for each pod if I'm going to use min 1 and max 3 pods for my application?</p>
<p>It's tricky to configure accurate limit values without a <code>performance test</code>, because we can't know in advance how many resources your application needs to process each request. A good rule of thumb is to set the limits based on the heaviest <code>workload</code> in your environment. A memory limit can trigger the <code>OOM-killer</code>, so you should set an affordable value based on your <code>tomcat</code> <code>heap</code> and <code>static</code> memory size. A <code>CPU</code> limit, in contrast, will not kill your pod when the limit is reached, but it will slow down processing.</p> <p>My suggested starting point for each limit is as follows (a concrete example is sketched below).</p> <ul> <li><code>Memory</code>: <code>Tomcat(Java)</code> memory size + 30% buffer</li> <li><code>CPU</code>: personally I think a <code>CPU limit</code> is counterproductive if you want to maximize processing performance and efficiency. Even when CPU is available and the pod could use the full CPU resources to process requests as quickly as possible, the limit setting can get in the way. But if you need to spread resource usage evenly and suppress an aggressive resource eater, you can consider a <code>CPU limit</code>.</li> </ul> <p>This answer might not be exactly what you wanted, but I hope it helps you with your capacity planning.</p>
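<p>As an illustration of that rule of thumb, a starting point in the deployment spec might look like this; all the numbers are made up and should be adjusted after measuring your actual Tomcat heap and static memory:</p> <pre><code>resources:
  requests:
    memory: "768Mi"
    cpu: "250m"
  limits:
    memory: "1Gi"     # Tomcat heap + static memory + ~30% buffer
    cpu: "500m"       # optional; see the note on CPU limits above
</code></pre>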
<p>Deployment.yaml</p> <pre><code>... env: {{ .Values.env}} ... </code></pre> <p>Values.yaml:</p> <pre><code>env: - name: "DELFI_DB_USER" value: "yyy" - name: "DELFI_DB_PASSWORD" value: "xxx" - name: "DELFI_DB_CLASS" value: "com.mysql.jdbc.Driver" - name: "DELFI_DB_URL" value: "jdbc:sqlserver://dockersqlserver:1433;databaseName=ddbeta;sendStringParametersAsUnicode=false" </code></pre> <p>feels like I'm missing something obvious.<br> linter says: ok<br> template says:</p> <blockquote> <p>env: [map[name:DELFI_DB_USER value:yyy] map[name:DELFI_DB_PASSWORD value:xxx] map[name:DELFI_DB_CLASS value:com.mysql.jdbc.Driver] map[value:jdbc:mysql://dockersqlserver.{{ .Release.Namespace }}.svc.cluster.local:3306/ddbeta\?\&amp;amp\;useSSL=true\&amp;amp\;requireSSL=false name:DELFI_DB_URL]]</p> </blockquote> <p>upgrade says:</p> <blockquote> <p>Error: UPGRADE FAILED: YAML parse error on xxx/templates/deployment.yaml: error converting YAML to JSON: yaml: line 35: found unexpected ':'</p> </blockquote> <p>solution:</p> <pre><code>env: {{- range .Values.env }} - name: {{ .name | quote }} value: {{ .value | quote }} {{- end }} </code></pre>
<p>The current Go template expansion will give output which is not YAML: </p> <pre><code>env: {{ .Values.env}} </code></pre> <p>becomes:</p> <pre><code>env: env: [Some Go type stuff that isn't YAML]... </code></pre> <p>The Helm Go template needs to loop over the keys of the source YAML dictionary. This is described in the <a href="https://docs.helm.sh/chart_template_guide/#looping-with-the-range-action" rel="nofollow noreferrer">Helm docs</a>.</p> <p>The correct Deployment.yaml is:</p> <pre><code>... env: {{- range .Values.env }} - name: {{ .name | quote }} value: {{ .value | quote }} {{- end }} ... </code></pre>
<p>I am working on a Java service that basically creates files in a network file system to store data. It runs in a k8s cluster on Ubuntu 18.04 LTS. When we began to limit the memory in Kubernetes (limits: memory: 3Gi), the pods began to be OOMKilled by Kubernetes.</p> <p>At the beginning we thought it was a memory leak in the Java process, but analyzing more deeply we noticed that the problem is the memory of the kernel. We validated that by looking at the file /sys/fs/cgroup/memory/memory.kmem.usage_in_bytes</p> <p>We isolated the case to only creating files (without Java) with the dd command like this:</p> <pre><code>for i in {1..50000}; do dd if=/dev/urandom bs=4096 count=1 of=file$i; done
</code></pre> <p>And with the dd command we saw that the same thing happened (the kernel memory grew until OOM). After k8s restarted the pod, a describe pod showed:</p> <ul> <li>Last State:Terminated</li> <li>Reason: OOMKilled</li> <li>Exit Code: 143</li> </ul> <p>Creating files causes the kernel memory to grow, and deleting those files causes the memory to decrease. But our service stores data, so it creates a lot of files continuously, until the pod is killed and restarted because it is OOMKilled.</p> <p>We tested limiting the kernel memory using standalone Docker with the --kernel-memory parameter and it worked as expected. The kernel memory grew to the limit and did not rise any more. But we did not find any way to do that in a Kubernetes cluster. Is there a way to limit the kernel memory in a K8s environment? Why does the creation of files cause the kernel memory to grow without being released?</p>
<p>Thanks for all this info, it was very useful!</p> <p>On my app, I solved this by creating a new side container that runs a cron job, every 5 minutes with the following command:</p> <pre><code>echo 3 &gt; /proc/sys/vm/drop_caches </code></pre> <p>(note that you need the side container to run in privileged mode)</p> <p>It works nicely and has the advantage of being predictable: every 5 minutes, your memory cache will be cleared.</p>
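<p>A rough sketch of such a sidecar, expressed directly as an extra container in the pod spec rather than a real cron daemon; the image and the 5-minute interval are assumptions, and the privileged securityContext is what allows writing to /proc:</p> <pre><code>- name: drop-caches            # hypothetical sidecar name
  image: busybox
  securityContext:
    privileged: true
  command: ["/bin/sh", "-c"]
  args:
  - while true; do echo 3 &gt; /proc/sys/vm/drop_caches; sleep 300; done
</code></pre>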
<p>I'm learning about creating a k8s cluster on DO using terraform, I've been trying to take the <code>ID</code> of the single K8s node I've created, and reference it from the loadbalancer.</p> <p>The main reasoning for this is so that I can declare the FQDN in the <code>.tf</code> file.</p> <p>First, here is the cluster declaration:</p> <pre><code> variable "digitalocean_token" {} provider "digitalocean" { token = "${var.digitalocean_token}" } resource "digitalocean_kubernetes_cluster" "foo" { name = "foo" region = "nyc1" version = "1.12.1-do.2" node_pool { name = "woker-pool" size = "s-1vcpu-2gb" node_count = 1 } } </code></pre> <p>And here is the load balancer declaration:</p> <pre><code>resource "digitalocean_loadbalancer" "foo" { name = "k8s-lb.nyc1" region = "nyc1" forwarding_rule { entry_port = 80 entry_protocol = "http" target_port = 80 target_protocol = "http" } droplet_ids = ["${digitalocean_kubernetes_cluster.foo.node_pool.0.id}"] } output "loadbalancer_ip" { value = "${digitalocean_loadbalancer.foo.ip}" } resource "digitalocean_record" "terraform" { domain = "example.com" # "${digitalocean_domain.example.name}" type = "A" name = "terraform" value = "${digitalocean_loadbalancer.foo.ip}" } # Output the FQDN for the record output "fqdn" { value = "${digitalocean_record.terraform.fqdn}" } </code></pre> <p>I'm guessing that maybe the <code>digitalocean_loadbalancer</code> resources is only setup to work with individual droplets?</p> <hr> <p>Here are the output errors: when I run <code>terraform apply</code>:</p> <pre><code>* output.loadbalancer_ip: Resource 'digitalocean_loadbalancer.foo' not found for variable 'digitalocean_loadbalancer.foo.ip' * digitalocean_record.terraform: Resource 'digitalocean_loadbalancer.foo' not found for variable 'digitalocean_loadbalancer.foo.ip' * digitalocean_loadbalancer.foo: droplet_ids.0: cannot parse '' as int: strconv.ParseInt: parsing "d4292e64-9c0a-4afb-83fc-83f239bcb4af": invalid syntax </code></pre> <hr> <p>Pt. 2</p> <p>I added a <code>digitalocean_droplet</code> resource, to see what kind of id was passed to the load balancer. </p> <pre><code>resource "digitalocean_droplet" "web" { name = "web-1" size = "s-1vcpu-1gb" image = "ubuntu-18-04-x64" region = "nyc1" } </code></pre> <p><code>digitalocean_kubernetes_cluster.foo.node_pool.0.id = '6ae6a787-d837-4e78-a915-cb52155f66fe'</code></p> <p><code>digitalocean_droplet.web.id = 132533158</code></p>
<p>So, the <code>digitalocean_loadbalancer</code> resource has an optional <code>droplet_tag</code> argument, which can be used to supply a common tag given to the created nodes/droplets.</p> <p>However, when declaring a load-balancer inside kubernetes, a new one will still be created. So for now at least, it would appear that defining the domain/CNAME record with terraform isn't possible on digitalocean</p>
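<p>For completeness, this is roughly what the <code>droplet_tag</code> variant of the load balancer looks like; the tag name is made up and the cluster nodes would need to carry it:</p> <pre><code>resource "digitalocean_loadbalancer" "foo" {
  name   = "k8s-lb.nyc1"
  region = "nyc1"

  forwarding_rule {
    entry_port      = 80
    entry_protocol  = "http"
    target_port     = 80
    target_protocol = "http"
  }

  droplet_tag = "k8s-worker"   # hypothetical tag applied to the cluster nodes
}
</code></pre> <p>As noted above, though, a Service of type LoadBalancer created inside the cluster will still provision its own balancer, so this does not solve the FQDN problem by itself.</p>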
<p>I configured an automatic build of my Angular 6 app and deployment in Kubernetes each time a push is made to my code repository (Google Cloud Repository).</p> <p>Dev environment variables are classically stored in an environment.ts file like this:</p> <pre><code>export const environment = {
  production: false,
  api_key: "my_dev_api_key"
};
</code></pre> <p>But I don't want to put my Prod secrets in my repository, so I figured I could use Kubernetes secrets.</p> <p>So, I created a secret in Kubernetes:</p> <pre><code>kubectl create secret generic literal-token --from-literal api_key=my_prod_api_key
</code></pre> <p>But how do I use it in my Angular app?</p>
<p>No matter what you do, your Angular app is a <em>client</em> application, i.e. the user's browser downloads the source code of the app (a bunch of CSS/JS/HTML files, images, etc.) and executes it on the user's machine. So you can't hide anything the way you can when implementing a <em>client/server</em> app. In client/server applications all the secrets reside in the server part. If you put the secret in a k8s Secret you will not commit it to the repository, but you will still expose it to all of your users anyway.</p> <p>If you still want to populate a configuration based on environment variables (which is a legit use-case), I've seen and used the following approach. The app is Angular 6 and is served to the browser by an <code>nginx</code> server. The startup script in the docker container is a bit odd and looks similar to the lines below:</p> <pre><code>envsubst &lt; /usr/share/nginx/html/assets/config.json.tpl &gt; /usr/share/nginx/html/assets/config.json
rm /usr/share/nginx/html/assets/config.json.tpl
echo "Configuration:"
cat /usr/share/nginx/html/assets/config.json
nginx -g 'daemon off;'
</code></pre> <p>As you can see, we've used <code>envsubst</code> to substitute a config template in the assets folder. The <code>config.json.tpl</code> may look like this:</p> <pre><code>{
  "apiUrl": "${API_URL}"
}
</code></pre> <p><code>envsubst</code> will substitute the environment variables with their real values and you will have a valid JSON configuration snippet in your assets. Then <code>nginx</code> will start up.</p>
<p>I understand that Kubernetes makes great language-agnostic distributed computing clusters that are easy to deploy, etc.</p> <p>However, it seems that each platform has its own set of tools to deploy and manage Kubernetes.</p> <p>So for example, if I use Amazon Elastic Container Service for Kubernetes (Amazon EKS), Google Kubernetes Engine or Oracle Container Engine for Kubernetes, how easy (or hard) is it to switch between them?</p>
<p>"It depends". The core APIs of Kubernetes like pods and services work pretty much the same everywhere, or at least if you are getting into provider-specific behavior you would know it, since the provider name would be in the annotation. But each vendor does have their own extensions. For example, GKE offers integration with GCP IAM permissions as an alternative to Kubernetes' internal RBAC system. If you use that, then switching is that much harder. The more provider-specific annotations and extensions you use, the more work it will be to switch.</p>
<p><strong><em>Please see edit below for added information.</em></strong></p> <p>So long story short I'm trying to get the Unifi Controller to run on my home Kubernetes cluster. In doing so, I have needed to decentralize the MongoDB database since having a MongoDB instance bundled with each replica on Kubernetes causes the database to crash. Here is my project this far: <a href="https://github.com/zimmertr/Kubernetes-Manifests/tree/unifi_mongodb_separation/Unifi_Controller" rel="nofollow noreferrer">https://github.com/zimmertr/Kubernetes-Manifests/tree/unifi_mongodb_separation/Unifi_Controller</a></p> <p>In doing so, I have written the following script that runs at provision time for the <a href="https://hub.docker.com/_/mongo" rel="nofollow noreferrer">MongoDB container</a>:</p> <pre><code>mongo \ --username ubnt \ --password "{{ mongodb_password }}" \ --authenticationDatabase admin \ --eval 'db.getSiblingDB("unifi").createUser({user: "ubnt", pwd: "{{ mongodb_password }}", roles: [{role: "readWrite", db: "unifi"}]})' mongo \ --username ubnt \ --password "{{ mongodb_password }}" \ --authenticationDatabase admin \ --eval 'db.getSiblingDB("unifi_stat").createUser({user: "ubnt", pwd: "{{ mongodb_password }}", roles: [{role: "readWrite", db: "unifi_stat"}]})' </code></pre> <p>And then I have configured the Unifi Controller to talk to the database through a volume mounted <code>system.properties</code> file configured like so:</p> <pre><code># Inform IP Address system_ip={{ load_balancer_ip }} # Autobackup directory autobackup.dir=/backups # External MongoDB information db.mongo.local=false db.mongo.uri=mongodb://ubnt:{{ mongodb_password }}@unifi-controller-mongodb:27017/unifi statdb.mongo.uri=mongodb://ubnt:{{ mongodb_password }}@unifi-controller-mongodb:27017/unifi_stat unifi.db.name=unifi </code></pre> <p>This is configured as <a href="https://community.ubnt.com/t5/UniFi-Wireless/External-MongoDB-Server/m-p/2669795" rel="nofollow noreferrer">instructed by Ubiquiti.</a> </p> <p>This all works, and when the Kubernetes deployments start up I see that the Unifi Controller connects to the MongoDB instance in the logs. Furthermore, if I manually connect to the MongoDB databases and run a <code>show collections</code> I can see that many new collections have been created. 
However, the Unifi Controller stops producing logs here.</p> <p>If I manually stop the <code>jar</code> file that is running the Unifi Controller in the background on the container, and then restart it, the following stack trace is produced:</p> <pre><code>$&gt; s6-setuidgid abc java -Xmx1024M -jar /usr/lib/unifi/lib/ace.jar start org.tuckey.web.filters.urlrewrite.UrlRewriteFilter INFO: destroy called Exception in thread "launcher" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'Ò00000' defined in class com.ubnt.service.AppContext: Instantiation of bean failed; nested exception is org.springframework.beans.factory.BeanDefinitionStoreException: Factory method [com.ubnt.service.P.D com.ubnt.service.AppContext.Ò00000()] threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dbService' defined in class com.ubnt.service.AppContext: Invocation of init method failed; nested exception is com.mongodb.CommandFailureException: { "serverUsed" : "unifi-controller-mongodb:27017" , "ok" : 0.0 , "errmsg" : "not authorized on unifi to execute command { dropDatabase: 1 }" , "code" : 13 , "codeName" : "Unauthorized"} </code></pre> <p>The key element here is</p> <blockquote> <p>not authorized on unifi to execute command { dropDatabase: 1 }"</p> </blockquote> <p>And this is where my understanding of MongoDB comes to an end. And my question comes to a beginning. I believe that the reason the Unifi Controller does not continue to start or log any additional messages after connecting to the database and creating the collections is that it is silently failing to perform an action on the MongoDB database. For which it does not have the requisite permissions to do.</p> <p>When I provision the MongoDB Docker container, I specify the <code>MONGO_INITDB_ROOT_USERNAME</code> &amp; <code>MONGO_INITDB_ROOT_PASSWORD</code> environment variables. Which, I believe, turns <code>auth</code> mode on. This belief is reiterated by the fact that I can connect to MongoDB's <code>admin</code> authentication database via the username and password I provide to these variables.</p> <p>However, based on the script I posted above that creates my databases and assigns the role <code>readWrite</code> to the <code>ubnt</code> user, I'm curious how I should go about giving the <code>ubnt</code> user the requisite permissions required to drop a database. If I swap out <code>readWrite</code> for other roles like <code>root</code> and <code>dbAdminAnyDatabase</code> the commands fail.</p> <p>What do I have to do to make it so my ubnt user can drop the <code>unifi</code> and <code>unifi_stat</code> databases? Or what do I have to change about my connection strings to prevent this from happening? I'm a bit of a database admin noob.</p> <p><strong><em>Continuation Edit:</em></strong></p> <p>I have updated the Role that is attributed to the <code>ubnt</code> user on <code>unifi</code> and <code>unifi_stat</code> to be <code>dbAdmin</code> instead of <code>readWrite</code>. 
And got a little further.</p> <pre><code>#!/bin/bash mongo \ --username ubnt \ --password "{{ mongodb_password }}" \ --authenticationDatabase admin \ --eval 'db.createUser({user: "ubnt", pwd: "{{ mongodb_password }}", roles: [{role: "dbAdmin", db: "unifi"}]})' mongo \ --username ubnt \ --password "{{ mongodb_password }}" \ --authenticationDatabase admin \ --eval 'db.createUser({user: "ubnt", pwd: "{{ mongodb_password }}", roles: [{role: "dbAdmin", db: "unifi_stat"}]})' </code></pre> <p>However, the Unifi Controller is still acting strange. It is now simply looping this in the log files:</p> <pre><code>2019-02-10 22:33:45,449] &lt;launcher&gt; INFO system - ====================================================================== [2019-02-10 22:33:45,450] &lt;launcher&gt; INFO system - UniFi 5.6.40 (build atag_5.6.40_10370 - release) is started [2019-02-10 22:33:45,450] &lt;launcher&gt; INFO system - ====================================================================== [2019-02-10 22:33:45,457] &lt;launcher&gt; INFO system - BASE dir:/usr/lib/unifi [2019-02-10 22:33:45,464] &lt;launcher&gt; INFO system - Current System IP: 192.168.0.1 [2019-02-10 22:33:45,465] &lt;launcher&gt; INFO system - Hostname: unifi-controller-5bb95c7688-bzp4z [2019-02-10 22:33:48,635] &lt;launcher&gt; INFO db - waiting for db connection... [2019-02-10 22:33:49,173] &lt;launcher&gt; INFO db - Connecting to mongodb://ubnt:PASSWORD@unifi-controller-mongodb:27017/unifi [2019-02-10 22:33:49,526] &lt;launcher&gt; DEBUG db - db connected (3.4.19@unifi-controller-mongodb:27017) [2019-02-10 22:33:49,534] &lt;launcher&gt; INFO db - *** Factory Default *** Database exists. Drop it [2019-02-10 22:33:52,391] &lt;launcher&gt; INFO db - waiting for db connection... [2019-02-10 22:33:52,896] &lt;launcher&gt; DEBUG db - db connected (3.4.19@unifi-controller-mongodb:27017) [2019-02-10 22:34:13,292] &lt;launcher&gt; INFO system - ====================================================================== [2019-02-10 22:34:13,295] &lt;launcher&gt; INFO system - UniFi 5.6.40 (build atag_5.6.40_10370 - release) is started [2019-02-10 22:34:13,295] &lt;launcher&gt; INFO system - ====================================================================== [2019-02-10 22:34:13,303] &lt;launcher&gt; INFO system - BASE dir:/usr/lib/unifi [2019-02-10 22:34:13,312] &lt;launcher&gt; INFO system - Current System IP: 192.168.0.1 [2019-02-10 22:34:13,313] &lt;launcher&gt; INFO system - Hostname: unifi-controller-5bb95c7688-bzp4z [2019-02-10 22:34:16,781] &lt;launcher&gt; INFO db - waiting for db connection... [2019-02-10 22:34:17,300] &lt;launcher&gt; INFO db - Connecting to mongodb://ubnt:PASSWORD@unifi-controller-mongodb:27017/unifi [2019-02-10 22:34:17,640] &lt;launcher&gt; DEBUG db - db connected (3.4.19@unifi-controller-mongodb:27017) [2019-02-10 22:34:17,656] &lt;launcher&gt; INFO db - *** Factory Default *** Database exists. Drop it [2019-02-10 22:34:20,463] &lt;launcher&gt; INFO db - waiting for db connection... [2019-02-10 22:34:20,969] &lt;launcher&gt; DEBUG db - db connected (3.4.19@unifi-controller-mongodb:27017) </code></pre> <p>So I'm at a loss here. Not sure why it's trying to drop the database? 
Is Unifi trying to create the databases from scratch on the MongoDB instance?</p> <ol> <li><p>If I DON'T create the <code>unifi</code> and <code>unifi_stat</code> databases at provision time for MongoDB, then the Unifi Controller fails to ever connect to them and the logs stall there.</p></li> <li><p>If I DO create the databases and give the <code>ubnt</code> user <code>dbAdmin</code> over them, then Unifi just appears to drop them over and over. Shown above.</p></li> <li><p>If I DO create the databases and give the <code>ubnt</code> user <code>readWrite</code> over them, then Unifi just connects to the databases, creates the collections, and silently stalls for no apparent reason. And If I try and manually execute the <code>jar</code> file mentioned above then it leaves the stacktrace describing the lack of necessary permissions to drop a database.</p></li> </ol> <p>If someone could please provide some documentation on how an external MongoDB database should be prepared for the Unifi Controller to use it would be greatly beneficial for me. I've only been able to track down a forum post  whereby an employee discussed how to configure the controller's system.properties to point to an external instance. </p>
<p>Turns out this was a bug in MetalLB, the service with which I expose my bare-metal Kubernetes services to my network.</p> <p><a href="https://github.com/google/metallb/issues/399" rel="nofollow noreferrer">https://github.com/google/metallb/issues/399</a></p> <p>All is working now. But decentralizing MongoDB did little to resolve the problems introduced by multiple replicas, unfortunately. </p> <p>However, I still think something is amiss in the container, as <code>service unifi status</code> reports <code>unifi is not running</code> despite the fact that it is actually running.</p>
<p>I want to manage different clusters of k8s,<br> one called <code>production</code> for prod deployments,<br> and another one called <code>staging</code> other deployments and configurations.</p> <p>How can I connect <code>helm</code> to the tiller in those 2 different clusters?<br> Assume that I already have <code>tiller</code> installed and I have a configured ci pipeline.</p>
<p>Helm will connect to the same cluster that <code>kubectl</code> is pointing to.</p> <p>By setting up multiple <code>kubectl</code> contexts and switching between them with <code>kubectl config use-context [environment]</code> you can achieve what you want, as shown in the example below.</p> <p>Of course you will need to set the appropriate HELM_ environment values in your shell for each cluster, including TLS certificates if you have them enabled.</p> <p>It's also worth taking steps to ensure you don't deploy to the wrong cluster by mistake.</p>
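<p>A sketch of what that looks like in practice; the context names are assumptions based on the clusters described in the question:</p> <pre><code># list the contexts kubectl knows about
kubectl config get-contexts

# point kubectl (and therefore helm) at the staging cluster
kubectl config use-context staging

# or target a context explicitly per command without switching
helm --kube-context production list
helm --kube-context staging install ./mychart
</code></pre>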
<p>Two pods are running and have different volume mounts, but I need to use the same ConfigMap in the two running pods. Is that possible?</p>
<p>Sure you can do that. You can mount same <code>ConfigMap</code> into different volume. You can take a look into <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">configure-pod-configmap</a>.</p> <p>Say, your <code>ConfigMap</code> is like following:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: SPECIAL_LEVEL: very SPECIAL_TYPE: charm </code></pre> <p>And two pods:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: dapi-test-pod-01 spec: containers: - name: test-container image: busybox command: [ "/bin/sh", "-c", "ls /etc/config/" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: # Provide the name of the ConfigMap containing the files you want # to add to the container name: special-config restartPolicy: Never --- apiVersion: v1 kind: Pod metadata: name: dapi-test-pod-02 spec: containers: - name: test-container image: busybox command: [ "/bin/sh", "-c", "ls /etc/config/" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: # Provide the name of the ConfigMap containing the files you want # to add to the container name: special-config restartPolicy: Never </code></pre> <p>Now see the logs after creating the above <code>ConfigMap</code> and two <code>Pods</code>:</p> <pre><code># for 1st Pod $ kubectl logs -f dapi-test-pod-01 SPECIAL_LEVEL SPECIAL_TYPE # for 2nd Pod $ kubectl logs -f dapi-test-pod-02 SPECIAL_LEVEL SPECIAL_TYPE </code></pre>
<p>I need to authenticate to a Kubernetes cluster provisioned in <a href="https://cloud.google.com/kubernetes-engine/" rel="noreferrer">GKE</a> using the <a href="https://github.com/kubernetes-client" rel="noreferrer">Kubernetes Python client</a> and the Google Cloud python client. I would prefer not to shell out to <code>gcloud</code> for several reasons:</p> <ul> <li>relying on the system shell <code>gcloud</code> in a Python script when I have a native Google Cloud library is inelegant</li> <li>it requires the system to have <code>gcloud</code></li> <li>I would have to switch users to the relevant ServiceAccount and switch back</li> <li>It incurs the cost of starting/joining another process</li> </ul> <p>As such, the workflow of <code>gcloud container clusters get-credentials</code> (which delegates to <code>gcloud config config-helper</code>) will not suffice to get me the API key I need. How do I get the equivalent output with the Google Cloud Python API?</p> <p>Here is what I have so far:</p> <pre><code>import kubernetes.client import googleapiclient.discovery import base64 # get the cluster object from GKE gke = googleapiclient.discovery.build('container', 'v1', credentials=config['credentials']) name = f'projects/{config["project_id"]}/locations/{config["location"]}/{parent}/clusters/{config["name"]}' gke_clusters = gke.projects().locations().clusters() gke_cluster = gke_clusters.get(name=name).execute() # set up Kubernetes Config kube_config = kubernetes.client.Configuration() kube_config.host = 'https://{0}/'.format(gke_cluster['endpoint']) kube_config.verify_ssl = True #kube_config.api_key['authenticate'] = "don't know what goes here" # regretably, the Kubernetes client requires `ssl_ca_cert` to be a path, not the literal cert, so I will write it here. kube_config.ssl_ca_cert = 'ssl_ca_cert' with open(kube_config.ssl_ca_cert, 'wb') as f: f.write(base64.decodestring(gke_cluster['masterAuth']['clusterCaCertificate'].encode())) # use Kubernetes client to do something kube_client = kubernetes.client.ApiClient(configuration=kube_config) kube_v1 = kubernetes.client.CoreV1Api(kube_client) kube_v1.list_pod_for_all_namespaces(watch=False) </code></pre>
<p>Below is a solution that pulls the access token out of the googleapiclient, rather than copy-pasting things manually.</p> <pre class="lang-py prettyprint-override"><code>import googleapiclient.discovery from tempfile import NamedTemporaryFile import kubernetes import base64 def token(*scopes): credentials = googleapiclient._auth.default_credentials() scopes = [f'https://www.googleapis.com/auth/{s}' for s in scopes] scoped = googleapiclient._auth.with_scopes(credentials, scopes) googleapiclient._auth.refresh_credentials(scoped) return scoped.token def kubernetes_api(cluster): config = kubernetes.client.Configuration() config.host = f'https://{cluster["endpoint"]}' config.api_key_prefix['authorization'] = 'Bearer' config.api_key['authorization'] = token('cloud-platform') with NamedTemporaryFile(delete=False) as cert: cert.write(base64.decodebytes(cluster['masterAuth']['clusterCaCertificate'].encode())) config.ssl_ca_cert = cert.name client = kubernetes.client.ApiClient(configuration=config) api = kubernetes.client.CoreV1Api(client) return api def run(cluster): """You'll need to give whichever account `googleapiclient` is using the 'Kubernetes Engine Developer' role so that it can access the Kubernetes API. `cluster` should be the dict you get back from `projects.zones.clusters.get` and the like""" api = kubernetes_api(cluster) print(api.list_pod_for_all_namespaces()) </code></pre> <p>Figuring this out took longer than I care to admit. @Ivan's post helped a lot.</p>
<p>I have two different Minecraft server containers running, both set to use the default TCP port 25565. To keep things simple for laymen to connect, I would like to have a subdomain dedicated to each server, say mc1.example.com and mc2.example.com, such that they only put the address in and the client connects. </p> <p>For an HTTP(s) service, the NGINX L7 ingress works fine, but it doesn't seem to work for Minecraft. NodePort works well, but then each server would need a different port.</p> <p>This is also installed on bare metal - there is not a cloud L4 load balancer available, and a very limited pool of IP addresses (assume there are not enough to cover all the various Minecraft servers).</p> <p>Can the L7 ingress be modified to redirect mc1.example.com to the correct container's port 25565? Would I need to use something like <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>?</p>
<blockquote> <p>This is also installed on bare metal - there is not a cloud L4 load balancer available, and a very limited pool of IP addresses (assume there are not enough to cover all the various Minecraft servers).</p> </blockquote> <p>If you don't have enough IP addresses, then MetalLB won't help you, since it is just using BGP to v-host for you, but you'd still have to have virtual addresses to hand out. Based on your description of the situation and your problem, I'd venture to say you're trying to do this on the cheap, and it is -- as one might expect -- hard to make it work without resources.</p> <p>That said:</p> <p>As best I can tell, there is no redirect in <a href="https://wiki.vg/Protocol" rel="nofollow noreferrer">the modern Minecraft protocol</a>, but interestingly enough during <a href="https://wiki.vg/Protocol#Handshake" rel="nofollow noreferrer">the Handshake</a> the client does actually send the hostname to which it is attempting to connect. That may or may not be something that <a href="https://github.com/SpigotMC/BungeeCord#readme" rel="nofollow noreferrer">BungeeCord</a> takes advantage of; I didn't study its source code.</p> <p>It could therefore be <strong>theoretically</strong> possible to make a Minecraft-specific virtual-hosting proxy, since there are already quite a few implementations of the protocol. But one would have to study all the messages in the protocol to ensure they contain a reference to the actual connection id; otherwise you'd have to resort to just <code>(client-ip, client-port)</code> identification tuples, effectively turning your server into a reverse NAT/PAT implementation. That may be fine, just watch out.</p>
<p>I need to build and run some tests using a fresh database. I thought of using a sidecar container to host the DB.</p> <p>I've installed jenkins using helm inside my kubernetes cluster using <a href="https://cloud.google.com/solutions/jenkins-on-kubernetes-engine-tutorial" rel="noreferrer">google's own tutorial</a>. I can launch simple 'hello world' pipelines which will start on a new pod.</p> <p>Next, I tried <a href="https://jenkins.io/doc/book/pipeline/docker/#running-sidecar-containers" rel="noreferrer">the Jenkins documentation</a> for running an instance of mysql as a sidecar.</p> <pre><code>node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c -&gt;
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
</code></pre> <p>At first, it complained that docker was not found, and the internet suggested using a custom jenkins slave image with docker installed.</p> <p>Now, if I run the pipeline, it just hangs in the loop waiting for the db to be ready.</p> <p>Disclaimer: New to jenkins/docker/kubernetes</p>
<p>Eventually I've found <a href="https://dariuszsadowski.com/2018/06/integration-tests-with-jenkins-pipelines-and-mysql-on-kubernetes/" rel="nofollow noreferrer">this method</a>. It relies on the kubernetes pipeline plugin, and allows running multiple containers in the agent pod while sharing resources.</p> <p>Note that <code>label</code> should not be an existing label, otherwise when you go to run, your podTemplate will be unable to find the container you made. With this method you are making a new set of containers in an entirely new pod.</p> <pre><code>def databaseUsername = 'app' def databasePassword = 'app' def databaseName = 'app' def databaseHost = '127.0.0.1' def jdbcUrl = "jdbc:mariadb://$databaseHost/$databaseName".toString() podTemplate( label: label, containers: [ containerTemplate( name: 'jdk', image: 'openjdk:8-jdk-alpine', ttyEnabled: true, command: 'cat', envVars: [ envVar(key: 'JDBC_URL', value: jdbcUrl), envVar(key: 'JDBC_USERNAME', value: databaseUsername), envVar(key: 'JDBC_PASSWORD', value: databasePassword), ] ), containerTemplate( name: "mariadb", image: "mariadb", envVars: [ envVar(key: 'MYSQL_DATABASE', value: databaseName), envVar(key: 'MYSQL_USER', value: databaseUsername), envVar(key: 'MYSQL_PASSWORD', value: databasePassword), envVar(key: 'MYSQL_ROOT_PASSWORD', value: databasePassword) ], ) ] ) { node(label) { stage('Checkout'){ checkout scm } stage('Waiting for environment to start') { container('mariadb') { sh """ while ! mysqladmin ping --user=$databaseUsername --password=$databasePassword -h$databaseHost --port=3306 --silent; do sleep 1 done """ } } stage('Migrate database') { container('jdk') { sh './gradlew flywayMigrate -i' } } stage('Run Tests') { container('jdk') { sh './gradlew test' } } } } </code></pre>
<p>The stateful set es-data was failing on our test environment and I was asked to delete the corresponding PV.</p> <p>So I deleted the following for es-data: 1) the PVC, 2) the PV. They showed as terminating and were left over the weekend. Upon arriving this morning they still showed as terminating, so I deleted both the PVC and PV forcefully. No joy. To fix the whole thing I had to delete the stateful set.</p> <p>Is this the correct way to delete a PV? </p>
<p>You can delete the PV using the following two commands:</p> <pre><code>kubectl delete pv &lt;pv_name&gt; --grace-period=0 --force </code></pre> <p>And then delete the finalizer using:</p> <pre><code>kubectl patch pv &lt;pv_name&gt; -p '{"metadata": {"finalizers": null}}' </code></pre>
<p>I'm getting <code>Unable to connect to the server: dial tcp &lt;IP&gt; i/o timeout</code> when trying to run <code>kubectl get pods</code> when connected to my cluster in google shell. This started out of the blue without me doing any changes to my cluster setup. </p> <pre><code>gcloud beta container clusters create tia-test-cluster \ --create-subnetwork name=my-cluster\ --enable-ip-alias \ --enable-private-nodes \ --master-ipv4-cidr &lt;IP&gt; \ --enable-master-authorized-networks \ --master-authorized-networks &lt;IP&gt; \ --no-enable-basic-auth \ --no-issue-client-certificate \ --cluster-version=1.11.2-gke.18 \ --region=europe-north1 \ --metadata disable-legacy-endpoints=true \ --enable-stackdriver-kubernetes \ --enable-autoupgrade </code></pre> <p>This is the current cluster-config. I've run <code>gcloud container clusters get-credentials my-cluster --zone europe-north1-a --project &lt;my project&gt;</code> before doing this aswell.</p> <p>I also noticed that my compute instances have lost their external IPs. In our staging environment, everything works as it should based on the same config.</p> <p>Any pointers would be greatly appreciated.</p>
<p>From what I can see of what you've posted you've turned on master authorized networks for the network <code>&lt;IP&gt;</code>.</p> <p>If the IP address of the Google Cloud Shell ever changes that is the exact error that you would expect.</p> <p>As per <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cloud_shell" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cloud_shell</a>: you need to update the allowed IP address.</p> <pre><code>gcloud container clusters update tia-test-cluster \ --region europe-north1 \ --enable-master-authorized-networks \ --master-authorized-networks [EXISTING_AUTH_NETS],[SHELL_IP]/32 </code></pre>
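<p>To find the Cloud Shell session's current egress IP before updating the list, one option (assuming outbound access to a public IP-echo service) is:</p> <pre><code># one way to find the current public egress IP of this Cloud Shell session
SHELL_IP=$(curl -s https://ifconfig.me)
echo "${SHELL_IP}"
</code></pre> <p>and then substitute that value for <code>[SHELL_IP]</code> in the <code>gcloud container clusters update</code> command above, keeping any existing authorized networks in the list.</p>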
<p>I'm learning about creating a k8s cluster on DO using terraform, I've been trying to take the <code>ID</code> of the single K8s node I've created, and reference it from the loadbalancer.</p> <p>The main reasoning for this is so that I can declare the FQDN in the <code>.tf</code> file.</p> <p>First, here is the cluster declaration:</p> <pre><code> variable "digitalocean_token" {} provider "digitalocean" { token = "${var.digitalocean_token}" } resource "digitalocean_kubernetes_cluster" "foo" { name = "foo" region = "nyc1" version = "1.12.1-do.2" node_pool { name = "woker-pool" size = "s-1vcpu-2gb" node_count = 1 } } </code></pre> <p>And here is the load balancer declaration:</p> <pre><code>resource "digitalocean_loadbalancer" "foo" { name = "k8s-lb.nyc1" region = "nyc1" forwarding_rule { entry_port = 80 entry_protocol = "http" target_port = 80 target_protocol = "http" } droplet_ids = ["${digitalocean_kubernetes_cluster.foo.node_pool.0.id}"] } output "loadbalancer_ip" { value = "${digitalocean_loadbalancer.foo.ip}" } resource "digitalocean_record" "terraform" { domain = "example.com" # "${digitalocean_domain.example.name}" type = "A" name = "terraform" value = "${digitalocean_loadbalancer.foo.ip}" } # Output the FQDN for the record output "fqdn" { value = "${digitalocean_record.terraform.fqdn}" } </code></pre> <p>I'm guessing that maybe the <code>digitalocean_loadbalancer</code> resources is only setup to work with individual droplets?</p> <hr> <p>Here are the output errors: when I run <code>terraform apply</code>:</p> <pre><code>* output.loadbalancer_ip: Resource 'digitalocean_loadbalancer.foo' not found for variable 'digitalocean_loadbalancer.foo.ip' * digitalocean_record.terraform: Resource 'digitalocean_loadbalancer.foo' not found for variable 'digitalocean_loadbalancer.foo.ip' * digitalocean_loadbalancer.foo: droplet_ids.0: cannot parse '' as int: strconv.ParseInt: parsing "d4292e64-9c0a-4afb-83fc-83f239bcb4af": invalid syntax </code></pre> <hr> <p>Pt. 2</p> <p>I added a <code>digitalocean_droplet</code> resource, to see what kind of id was passed to the load balancer. </p> <pre><code>resource "digitalocean_droplet" "web" { name = "web-1" size = "s-1vcpu-1gb" image = "ubuntu-18-04-x64" region = "nyc1" } </code></pre> <p><code>digitalocean_kubernetes_cluster.foo.node_pool.0.id = '6ae6a787-d837-4e78-a915-cb52155f66fe'</code></p> <p><code>digitalocean_droplet.web.id = 132533158</code></p>
<p>You're using the wrong attribute reference for your load balancer droplet ids.</p> <p><code>droplet_ids = ["${digitalocean_kubernetes_cluster.foo.node_pool.0.id}"]</code></p> <p>This will use the <code>node_pool</code> id linked <a href="https://www.terraform.io/docs/providers/do/r/kubernetes_cluster.html#id-1" rel="nofollow noreferrer">here</a>.</p> <p>What you actually need to do is use the <code>node_pool</code> <em>nodes</em> id, which is referenced <a href="https://www.terraform.io/docs/providers/do/r/kubernetes_cluster.html#id-2" rel="nofollow noreferrer">here</a>:</p> <pre><code>droplet_ids = "${digitalocean_kubernetes_cluster.foo.node_pool.0.nodes}" </code></pre> <p>The next problem you're going to have is that this returns a list of maps, and you'll need to build a list of ids from that. I'm not currently sure how to solve that, I'm afraid, but this should hopefully move you along.</p> <p>It seems from your answer, however, that what you actually want to do is update DNS for your load balancer.</p> <p>You can do this with <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">external-dns</a> using the <a href="https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/digitalocean.md" rel="nofollow noreferrer">digitalocean provider</a>.</p> <p>Simply deploy this as a pod, specifying the required configuration, and ensure that the arg <code>--source=service</code> is set.</p> <p>If you want to go a step further and allow updating DNS with a specific hostname, deploy an ingress controller like nginx-ingress and specify ingresses for your applications. The external-dns deployment (if you set <code>--source=ingress</code>) will take the hostname from your ingress and update DNS for you.</p>
<p>I install kubeadm (version : v1.13.2 ), after init, I install flannel, it fails, install command:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml </code></pre> <p>error is like below.</p> <blockquote> <p>Error from server (Forbidden): error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1beta1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1beta1, Kind=ClusterRole" Name: "flannel", Namespace: "" Object: &amp;{map["apiVersion":"rbac.authorization.k8s.io/v1beta1" "kind":"ClusterRole" "metadata":map["name":"flannel" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "rules":[map["apiGroups":[""] "resources":["pods"] "verbs":["get"]] map["apiGroups":[""] "resources":["nodes"] "verbs":["list" "watch"]] map["apiGroups":[""] "resources":["nodes/status"] "verbs":["patch"]]]]} from server for: "kube-flannel.yml": clusterroles.rbac.authorization.k8s.io "flannel" is forbidden: User "system:node:node1" cannot get resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): error when retrieving current configuration of: Resource: "rbac.authorization.k8s.io/v1beta1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1beta1, Kind=ClusterRoleBinding" Name: "flannel", Namespace: "" Object: &amp;{map["subjects":[map["kind":"ServiceAccount" "name":"flannel" "namespace":"kube-system"]] "apiVersion":"rbac.authorization.k8s.io/v1beta1" "kind":"ClusterRoleBinding" "metadata":map["name":"flannel" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "roleRef":map["apiGroup":"rbac.authorization.k8s.io" "kind":"ClusterRole" "name":"flannel"]]} from server for: "kube-flannel.yml": clusterrolebindings.rbac.authorization.k8s.io "flannel" is forbidden: User "system:node:node1" cannot get resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): error when retrieving current configuration of: Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" Name: "flannel", Namespace: "kube-system" Object: &amp;{map["kind":"ServiceAccount" "metadata":map["name":"flannel" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "apiVersion":"v1"]} from server for: "kube-flannel.yml": serviceaccounts "flannel" is forbidden: User "system:node:node1" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system": can only create tokens for individual service accounts Error from server (Forbidden): error when retrieving current configuration of: Resource: "/v1, Resource=configmaps", GroupVersionKind: "/v1, Kind=ConfigMap" Name: "kube-flannel-cfg", Namespace: "kube-system" Object: &amp;{map["kind":"ConfigMap" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["app":"flannel" "tier":"node"] "name":"kube-flannel-cfg" "namespace":"kube-system"] "apiVersion":"v1" "data":map["cni-conf.json":"{\n \"name\": \"cbr0\",\n \"plugins\": [\n {\n \"type\": \"flannel\",\n \"delegate\": {\n \"hairpinMode\": true,\n \"isDefaultGateway\": true\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\n \"portMappings\": true\n }\n }\n ]\n}\n" "net-conf.json":"{\n \"Network\": \"10.244.0.0/16\",\n \"Backend\": {\n \"Type\": \"vxlan\"\n }\n}\n"]]} from server 
for: "kube-flannel.yml": configmaps "kube-flannel-cfg" is forbidden: User "system:node:node1" cannot get resource "configmaps" in API group "" in the namespace "kube-system": no path found to object Error from server (Forbidden): error when retrieving current configuration of: Resource: "extensions/v1beta1, Resource=daemonsets", GroupVersionKind: "extensions/v1beta1, Kind=DaemonSet" Name: "kube-flannel-ds-amd64", Namespace: "kube-system" Object: &amp;{map["apiVersion":"extensions/v1beta1" "kind":"DaemonSet" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["app":"flannel" "tier":"node"] "name":"kube-flannel-ds-amd64" "namespace":"kube-system"] "spec":map["template":map["metadata":map["labels":map["app":"flannel" "tier":"node"]] "spec":map["tolerations":[map["effect":"NoSchedule" "operator":"Exists"]] "volumes":[map["hostPath":map["path":"/run"] "name":"run"] map["hostPath":map["path":"/etc/cni/net.d"] "name":"cni"] map["configMap":map["name":"kube-flannel-cfg"] "name":"flannel-cfg"]] "containers":[map["name":"kube-flannel" "resources":map["limits":map["cpu":"100m" "memory":"50Mi"] "requests":map["cpu":"100m" "memory":"50Mi"]] "securityContext":map["privileged":%!q(bool=true)] "volumeMounts":[map["mountPath":"/run" "name":"run"] map["mountPath":"/etc/kube-flannel/" "name":"flannel-cfg"]] "args":["--ip-masq" "--kube-subnet-mgr"] "command":["/opt/bin/flanneld"] "env":[map["name":"POD_NAME" "valueFrom":map["fieldRef":map["fieldPath":"metadata.name"]]] map["name":"POD_NAMESPACE" "valueFrom":map["fieldRef":map["fieldPath":"metadata.namespace"]]]] "image":"quay.io/coreos/flannel:v0.10.0-amd64"]] "hostNetwork":%!q(bool=true) "initContainers":[map["args":["-f" "/etc/kube-flannel/cni-conf.json" "/etc/cni/net.d/10-flannel.conflist"] "command":["cp"] "image":"quay.io/coreos/flannel:v0.10.0-amd64" "name":"install-cni" "volumeMounts":[map["mountPath":"/etc/cni/net.d" "name":"cni"] map["mountPath":"/etc/kube-flannel/" "name":"flannel-cfg"]]]] "nodeSelector":map["beta.kubernetes.io/arch":"amd64"] "serviceAccountName":"flannel"]]]]} from server for: "kube-flannel.yml": daemonsets.extensions "kube-flannel-ds-amd64" is forbidden: User "system:node:node1" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system" Error from server (Forbidden): error when retrieving current configuration of: Resource: "extensions/v1beta1, Resource=daemonsets", GroupVersionKind: "extensions/v1beta1, Kind=DaemonSet" Name: "kube-flannel-ds-arm64", Namespace: "kube-system" Object: &amp;{map["apiVersion":"extensions/v1beta1" "kind":"DaemonSet" "metadata":map["name":"kube-flannel-ds-arm64" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["app":"flannel" "tier":"node"]] "spec":map["template":map["spec":map["containers":[map["resources":map["limits":map["cpu":"100m" "memory":"50Mi"] "requests":map["cpu":"100m" "memory":"50Mi"]] "securityContext":map["privileged":%!q(bool=true)] "volumeMounts":[map["mountPath":"/run" "name":"run"] map["mountPath":"/etc/kube-flannel/" "name":"flannel-cfg"]] "args":["--ip-masq" "--kube-subnet-mgr"] "command":["/opt/bin/flanneld"] "env":[map["valueFrom":map["fieldRef":map["fieldPath":"metadata.name"]] "name":"POD_NAME"] map["name":"POD_NAMESPACE" "valueFrom":map["fieldRef":map["fieldPath":"metadata.namespace"]]]] "image":"quay.io/coreos/flannel:v0.10.0-arm64" "name":"kube-flannel"]] "hostNetwork":%!q(bool=true) "initContainers":[map["command":["cp"] 
"image":"quay.io/coreos/flannel:v0.10.0-arm64" "name":"install-cni" "volumeMounts":[map["mountPath":"/etc/cni/net.d" "name":"cni"] map["mountPath":"/etc/kube-flannel/" "name":"flannel-cfg"]] "args":["-f" "/etc/kube-flannel/cni-conf.json" "/etc/cni/net.d/10-flannel.conflist"]]] "nodeSelector":map["beta.kubernetes.io/arch":"arm64"] "serviceAccountName":"flannel" "tolerations":[map["effect":"NoSchedule" "operator":"Exists"]] "volumes":[map["hostPath":map["path":"/run"] "name":"run"] map["hostPath":map["path":"/etc/cni/net.d"] "name":"cni"] map["configMap":map["name":"kube-flannel-cfg"] "name":"flannel-cfg"]]] "metadata":map["labels":map["app":"flannel" "tier":"node"]]]]]} from server for: "kube-flannel.yml": daemonsets.extensions "kube-flannel-ds-arm64" is forbidden: User "system:node:node1" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system" Error from server (Forbidden): error when retrieving current configuration of: Resource: "extensions/v1beta1, Resource=daemonsets", GroupVersionKind: "extensions/v1beta1, Kind=DaemonSet" Name: "kube-flannel-ds-arm", Namespace: "kube-system" Object: &amp;{map["apiVersion":"extensions/v1beta1" "kind":"DaemonSet" "metadata":map["labels":map["app":"flannel" "tier":"node"] "name":"kube-flannel-ds-arm" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["template":map["metadata":map["labels":map["app":"flannel" "tier":"node"]] "spec":map["hostNetwork":%!q(bool=true) "initContainers":[map["args":["-f" "/etc/kube-flannel/cni-conf.json" "/etc/cni/net.d/10-flannel.conflist"] "command":["cp"] "image":"quay.io/coreos/flannel:v0.10.0-arm" "name":"install-cni" "volumeMounts":[map["mountPath":"/etc/cni/net.d" "name":"cni"] map["mountPath":"/etc/kube-flannel/" "name":"flannel-cfg"]]]] "nodeSelector":map["beta.kubernetes.io/arch":"arm"] "serviceAccountName":"flannel" "tolerations":[map["effect":"NoSchedule" "operator":"Exists"]] "volumes":[map["hostPath":map["path":"/run"] "name":"run"] map["name":"cni" "hostPath":map["path":"/etc/cni/net.d"]] map["configMap":map["name":"kube-flannel-cfg"] "name":"flannel-cfg"]] "containers":[map["name":"kube-flannel" "resources":map["limits":map["cpu":"100m" "memory":"50Mi"] "requests":map["cpu":"100m" "memory":"50Mi"]] "securityContext":map["privileged":%!q(bool=true)] "volumeMounts":[map["mountPath":"/run" "name":"run"] map["mountPath":"/etc/kube-flannel/" "name":"flannel-cfg"]] "args":["--ip-masq" "--kube-subnet-mgr"] "command":["/opt/bin/flanneld"] "env":[map["name":"POD_NAME" "valueFrom":map["fieldRef":map["fieldPath":"metadata.name"]]] map["name":"POD_NAMESPACE" "valueFrom":map["fieldRef":map["fieldPath":"metadata.namespace"]]]] "image":"quay.io/coreos/flannel:v0.10.0-arm"]]]]]]} from server for: "kube-flannel.yml": daemonsets.extensions "kube-flannel-ds-arm" is forbidden: User "system:node:node1" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system" Error from server (Forbidden): error when retrieving current configuration of: Resource: "extensions/v1beta1, Resource=daemonsets", GroupVersionKind: "extensions/v1beta1, Kind=DaemonSet" Name: "kube-flannel-ds-ppc64le", Namespace: "kube-system" Object: &amp;{map["spec":map["template":map["metadata":map["labels":map["tier":"node" "app":"flannel"]] "spec":map["containers":[map["command":["/opt/bin/flanneld"] "env":[map["name":"POD_NAME" "valueFrom":map["fieldRef":map["fieldPath":"metadata.name"]]] map["name":"POD_NAMESPACE" 
"valueFrom":map["fieldRef":map["fieldPath":"metadata.namespace"]]]] "image":"quay.io/coreos/flannel:v0.10.0-ppc64le" "name":"kube-flannel" "resources":map["requests":map["cpu":"100m" "memory":"50Mi"] "limits":map["cpu":"100m" "memory":"50Mi"]] "securityContext":map["privileged":%!q(bool=true)] "volumeMounts":[map["mountPath":"/run" "name":"run"] map["name":"flannel-cfg" "mountPath":"/etc/kube-flannel/"]] "args":["--ip-masq" "--kube-subnet-mgr"]]] "hostNetwork":%!q(bool=true) "initContainers":[map["args":["-f" "/etc/kube-flannel/cni-conf.json" "/etc/cni/net.d/10-flannel.conflist"] "command":["cp"] "image":"quay.io/coreos/flannel:v0.10.0-ppc64le" "name":"install-cni" "volumeMounts":[map["mountPath":"/etc/cni/net.d" "name":"cni"] map["mountPath":"/etc/kube-flannel/" "name":"flannel-cfg"]]]] "nodeSelector":map["beta.kubernetes.io/arch":"ppc64le"] "serviceAccountName":"flannel" "tolerations":[map["effect":"NoSchedule" "operator":"Exists"]] "volumes":[map["hostPath":map["path":"/run"] "name":"run"] map["hostPath":map["path":"/etc/cni/net.d"] "name":"cni"] map["configMap":map["name":"kube-flannel-cfg"] "name":"flannel-cfg"]]]]] "apiVersion":"extensions/v1beta1" "kind":"DaemonSet" "metadata":map["labels":map["app":"flannel" "tier":"node"] "name":"kube-flannel-ds-ppc64le" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]} from server for: "kube-flannel.yml": daemonsets.extensions "kube-flannel-ds-ppc64le" is forbidden: User "system:node:node1" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system" Error from server (Forbidden): error when retrieving current configuration of: Resource: "extensions/v1beta1, Resource=daemonsets", GroupVersionKind: "extensions/v1beta1, Kind=DaemonSet" Name: "kube-flannel-ds-s390x", Namespace: "kube-system" Object: &amp;{map["apiVersion":"extensions/v1beta1" "kind":"DaemonSet" "metadata":map["labels":map["app":"flannel" "tier":"node"] "name":"kube-flannel-ds-s390x" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["template":map["metadata":map["labels":map["app":"flannel" "tier":"node"]] "spec":map["nodeSelector":map["beta.kubernetes.io/arch":"s390x"] "serviceAccountName":"flannel" "tolerations":[map["effect":"NoSchedule" "operator":"Exists"]] "volumes":[map["hostPath":map["path":"/run"] "name":"run"] map["hostPath":map["path":"/etc/cni/net.d"] "name":"cni"] map["configMap":map["name":"kube-flannel-cfg"] "name":"flannel-cfg"]] "containers":[map["env":[map["name":"POD_NAME" "valueFrom":map["fieldRef":map["fieldPath":"metadata.name"]]] map["name":"POD_NAMESPACE" "valueFrom":map["fieldRef":map["fieldPath":"metadata.namespace"]]]] "image":"quay.io/coreos/flannel:v0.10.0-s390x" "name":"kube-flannel" "resources":map["limits":map["cpu":"100m" "memory":"50Mi"] "requests":map["cpu":"100m" "memory":"50Mi"]] "securityContext":map["privileged":%!q(bool=true)] "volumeMounts":[map["mountPath":"/run" "name":"run"] map["name":"flannel-cfg" "mountPath":"/etc/kube-flannel/"]] "args":["--ip-masq" "--kube-subnet-mgr"] "command":["/opt/bin/flanneld"]]] "hostNetwork":%!q(bool=true) "initContainers":[map["args":["-f" "/etc/kube-flannel/cni-conf.json" "/etc/cni/net.d/10-flannel.conflist"] "command":["cp"] "image":"quay.io/coreos/flannel:v0.10.0-s390x" "name":"install-cni" "volumeMounts":[map["mountPath":"/etc/cni/net.d" "name":"cni"] map["mountPath":"/etc/kube-flannel/" "name":"flannel-cfg"]]]]]]]]} from server for: "kube-flannel.yml": 
daemonsets.extensions "kube-flannel-ds-s390x" is forbidden: User "system:node:node1" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system"</p> </blockquote>
<p>The problem seems to be that the kube config from the previous installation is still being used on the cluster. Try to run:</p> <pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   # confirm the overwrite with yes + enter
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
</code></pre> <p>and then try to apply Flannel again. You can find more of the steps required for kubeadm in this <a href="https://stackoverflow.com/questions/50959228/kubectl-apply-yield-forbidden-error-when-retrieving-current-configuration">answer</a>, in case you did not prepare the environment properly. </p>
<p>I have created a NodeJS script for deploying review apps to Kubernetes for my GitLab repository. To do this, I’m using the Kubernetes NodeJS client.</p> <p>For completeness sake, I have included truncated definitions of the Kubernetes resources.</p> <pre class="lang-js prettyprint-override"><code>const k8s = require('@kubernetes/client-node'); const logger = require('../logger'); const { CI_COMMIT_REF_NAME, CI_ENVIRONMENT_SLUG, CI_ENVIRONMENT_URL, CI_REGISTRY_IMAGE, KUBE_NAMESPACE, } = process.env; const { hostname } = new URL(CI_ENVIRONMENT_URL); const mysqlDeployment = { apiVersion: 'apps/v1', kind: 'Deployment', metadata: { name: `${CI_ENVIRONMENT_SLUG}-mysql`, labels: { app: CI_ENVIRONMENT_SLUG, tier: 'mysql', }, }, spec: { replicas: 1, selector: { matchLabels: { app: CI_ENVIRONMENT_SLUG, tier: 'mysql', }, }, template: { metadata: { labels: { app: CI_ENVIRONMENT_SLUG, tier: 'mysql', }, }, spec: { containers: [ { image: 'mysql:8', name: 'mysql', }, ], ports: { containerPort: 3306 }, }, }, }, }; const mysqlService = { apiVersion: 'v1', kind: 'Service', metadata: { name: `${CI_ENVIRONMENT_SLUG}-mysql`, labels: { app: CI_ENVIRONMENT_SLUG, tier: 'mysql', }, }, spec: { ports: [{ port: 3306 }], selector: { app: CI_ENVIRONMENT_SLUG, tier: 'mysql', }, clusterIP: 'None', }, }; const appDeployment = { apiVersion: 'apps/v1', kind: 'Deployment', metadata: { name: `${CI_ENVIRONMENT_SLUG}-frontend`, labels: { app: CI_ENVIRONMENT_SLUG, tier: 'frontend', }, }, spec: { replicas: 1, selector: { matchLabels: { app: CI_ENVIRONMENT_SLUG, tier: 'frontend', }, }, template: { metadata: { labels: { app: CI_ENVIRONMENT_SLUG, tier: 'frontend', }, }, spec: { containers: [ { image: `${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}`, imagePullPolicy: 'Always', name: 'app', ports: [{ containerPort: 9999 }], }, ], imagePullSecrets: [{ name: 'registry.gitlab.com' }], }, }, }, }; const appService = { apiVersion: 'v1', kind: 'Service', metadata: { name: `${CI_ENVIRONMENT_SLUG}-frontend`, labels: { app: CI_ENVIRONMENT_SLUG, tier: 'frontend', }, }, spec: { ports: [{ port: 9999 }], selector: { app: CI_ENVIRONMENT_SLUG, tier: 'frontend', }, clusterIP: 'None', }, }; const ingress = { apiVersion: 'extensions/v1beta1', kind: 'Ingress', metadata: { name: `${CI_ENVIRONMENT_SLUG}-ingress`, labels: { app: CI_ENVIRONMENT_SLUG, }, annotations: { 'certmanager.k8s.io/cluster-issuer': 'letsencrypt-prod', 'kubernetes.io/ingress.class': 'nginx', 'nginx.ingress.kubernetes.io/proxy-body-size': '50m', }, }, spec: { tls: [ { hosts: [hostname], secretName: `${CI_ENVIRONMENT_SLUG}-prod`, }, ], rules: [ { host: hostname, http: { paths: [ { path: '/', backend: { serviceName: `${CI_ENVIRONMENT_SLUG}-frontend`, servicePort: 9999, }, }, ], }, }, ], }, }; </code></pre> <p>I use the following functions to deploy these resources to Kubernetes.</p> <pre class="lang-js prettyprint-override"><code>async function noConflict(resource, create, replace) { const { kind } = resource; const { name } = resource.metadata; try { logger.info(`Creating ${kind.toLowerCase()}: ${name}`); await create(KUBE_NAMESPACE, resource); logger.info(`Created ${kind.toLowerCase()}: ${name}`); } catch (err) { if (err.response.statusCode !== 409) { throw err; } logger.warn(`${kind} ${name} already exists… Replacing instead.`); await replace(name, KUBE_NAMESPACE, resource); logger.info(`Replaced ${kind.toLowerCase()}: ${name}`); } } async function deploy() { const kc = new k8s.KubeConfig(); kc.loadFromDefault(); const apps = kc.makeApiClient(k8s.Apps_v1Api); const beta = 
kc.makeApiClient(k8s.Extensions_v1beta1Api); const core = kc.makeApiClient(k8s.Core_v1Api); await noConflict( mysqlDeployment, apps.createNamespacedDeployment.bind(apps), apps.replaceNamespacedDeployment.bind(apps), ); await noConflict( mysqlService, core.createNamespacedService.bind(core), core.replaceNamespacedService.bind(core), ); await noConflict( appDeployment, apps.createNamespacedDeployment.bind(apps), apps.replaceNamespacedDeployment.bind(apps), ); await noConflict( appService, core.createNamespacedService.bind(core), core.replaceNamespacedService.bind(core), ); await noConflict( ingress, beta.createNamespacedIngress.bind(beta), beta.replaceNamespacedIngress.bind(beta), ); } </code></pre> <p>The initial deployment goes fine, but the replacement of the mysql service fails with the following HTTP request body.</p> <pre class="lang-js prettyprint-override"><code>{ kind: 'Status', apiVersion: 'v1', metadata: {}, status: 'Failure', message: 'Service "review-fix-kubern-8a4yh2-mysql" is invalid: metadata.resourceVersion: Invalid value: "": must be specified for an update', reason: 'Invalid', details: { name: 'review-fix-kubern-8a4yh2-mysql', kind: 'Service', causes: [Array] }, code: 422 } } </code></pre> <p>I have tried modifying <code>noConflict</code> to get the current version, and use the active <code>versionResource</code> to replace resources.</p> <pre class="lang-js prettyprint-override"><code>async function noConflict(resource, create, get, replace) { const { kind, metadata } = resource; const { name } = resource.metadata; try { logger.info(`Creating ${kind.toLowerCase()}: ${name}`); await create(KUBE_NAMESPACE, resource); logger.info(`Created ${kind.toLowerCase()}: ${name}`); } catch (err) { if (err.response.statusCode !== 409) { throw err; } logger.warn(`${kind} ${name} already exists… Replacing instead.`); const { body: { metadata: { resourceVersion }, }, } = await get(name, KUBE_NAMESPACE); const body = { ...resource, metadata: { ...metadata, resourceVersion, }, }; logger.warn(`${kind} ${name} already exists… Replacing instead.`); await replace(name, KUBE_NAMESPACE, body); logger.info(`Replaced ${kind.toLowerCase()}: ${name}`); } } </code></pre> <p>However, this gives me another error.</p> <pre class="lang-js prettyprint-override"><code>{ kind: 'Status', apiVersion: 'v1', metadata: {}, status: 'Failure', message: 'Service "review-prevent-ku-md2ghh-frontend" is invalid: spec.clusterIP: Invalid value: "": field is immutable', reason: 'Invalid', details: { name: 'review-prevent-ku-md2ghh-frontend', kind: 'Service', causes: [Array] }, code: 422 } } </code></pre> <p>What should I do to replace the running resources?</p> <p>Whether or not the the database stays up, is a minor detail.</p> <p><strong>Update</strong></p> <p>To address the comment by LouisBaumann:</p> <p>I have changed by code to the following, where <code>read</code> is the respective read call for each resource.</p> <pre><code>async function noConflict(resource, create, read, replace) { const { kind } = resource; const { name } = resource.metadata; try { logger.info(`Creating ${kind.toLowerCase()}: ${name}`); await create(KUBE_NAMESPACE, resource); logger.info(`Created ${kind.toLowerCase()}: ${name}`); } catch (err) { if (err.response.statusCode !== 409) { throw err; } logger.warn(`${kind} ${name} already exists… Replacing instead.`); const { body: existing } = await read(name, KUBE_NAMESPACE); await replace(name, KUBE_NAMESPACE, merge(existing, resource)); logger.info(`Replaced ${kind.toLowerCase()}: ${name}`); } } 
</code></pre> <p>The above doesn’t crash, but it doesn’t update the review environment either.</p> <p><strong>Update</strong></p> <p>To address the answer by Crou:</p> <p>I have updated the replace calls with patch calls. So the <code>noConflict</code> function becomes:</p> <pre><code>async function noConflict(resource, create, patch) { const { kind } = resource; const { name } = resource.metadata; try { logger.info(`Creating ${kind.toLowerCase()}: ${name}`); await create(KUBE_NAMESPACE, resource); logger.info(`Created ${kind.toLowerCase()}: ${name}`); } catch (err) { if (err.response.statusCode !== 409) { throw err; } logger.warn(`${kind} ${name} already exists… Patching instead.`); await patch(name, KUBE_NAMESPACE, resource); logger.info(`Replaced ${kind.toLowerCase()}: ${name}`); } } </code></pre> <p>I also changed the <code>noConflict</code> calls to pass the patch versions instead of the replace functions.</p> <pre><code>await noConflict( mysqlDeployment, apps.createNamespacedDeployment.bind(apps), apps.patchNamespacedDeployment.bind(apps), ); // etc </code></pre> <p>This resulted in the following error:</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "415: Unsupported Media Type", "reason": "UnsupportedMediaType", "details": {}, "code": 415 } </code></pre>
<p>From what I understand you are using <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#replace" rel="nofollow noreferrer"><code>replace</code></a> incorrectly.</p> <blockquote> <p>Replace a resource by filename or stdin.</p> <p>JSON and YAML formats are accepted. If replacing an existing resource, the complete resource spec must be provided. This can be obtained by</p> <p><code>$ kubectl get TYPE NAME -o yaml</code></p> </blockquote> <p>If you do replace without getting the <code>yaml</code> from Kubernetes, you are missing <code>resourceVersion</code>. So this is why you get the error:</p> <p><code>Service "review-fix-kubern-8a4yh2-mysql" is invalid: metadata.resourceVersion: Invalid value: "": must be specified for an update</code></p> <p>You should use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#patch" rel="nofollow noreferrer"><code>patch</code></a> or <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply" rel="nofollow noreferrer"><code>apply</code></a> if you are replacing just parts of the <code>Deployment</code>.</p>
<p>I have a cloud distributed database (AWS RDS - PostGres) with a table of <code>sources</code>. Sources can be a web page or a social media account.</p> <p>I have a cron job on my service that will go through each <code>source</code> and get updated information like comments or stats.</p> <p>Sometimes if specific conditions are met, another action can be triggered, i.e. if an instagram post hits 1000 likes, comment with a string, or if a blog creates a new post, send an email out to subscribers.</p> <p>I would like to scale my service horizontally through docker and k8s, if I scale to two services, there will be two cron jobs, and any specific action could be sent twice. I do not want <code>n</code> emails to be sent for <code>n</code> instances I've scaled</p> <p>What is the correct architecture to handle this? </p>
<p>If you want to horizontally scale the whole stack, split your domain by some reasonable key (say creation date) into N partitions, and have each partition be a full stack. </p> <p>If you are concerned with scalability, then you probably want to separate your stack into multiple layers (source refresher workers, action handlers, etc), connected by work queues so that any particular action can be scaled independently... But I'd start with a straight domain partition first.</p>
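<p>To make the partition idea concrete, here is a rough bash/SQL sketch; the <code>sources</code> table and its columns, the <code>DATABASE_URL</code>, and the worker index/count variables are all assumptions for illustration, and the partition key (a modulo over the primary key) is just one possible choice alongside something like creation date:</p> <pre><code># WORKER_INDEX and WORKER_COUNT are assumed to be injected per instance
# (for example from a StatefulSet ordinal or an environment variable)
psql "$DATABASE_URL" -c "
  SELECT id, url
  FROM sources
  WHERE mod(id, ${WORKER_COUNT}) = ${WORKER_INDEX};
"
</code></pre> <p>Each cron instance then only refreshes (and triggers actions for) its own slice of sources, so no email or comment is ever handled by two instances.</p>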
<p>I'm trying to change the CPU Manager Policy for a Kubernetes cluster that I manage, as described <a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/" rel="noreferrer">here</a> however, I've run into numerous issues while doing so.</p> <p>The cluster is running in DigitalOcean and here is what I've tried so far.</p> <ul> <li><strong><em>1.</em></strong> Since the article mentions that <code>--cpu-manager-policy</code> is a kubelet option I assume that I cannot change it via the API Server and have to change it manually on each node. (Is this assumption BTW?)</li> <li><strong><em>2.</em></strong> I <code>ssh</code> into one of the nodes (droplets in DigitalOcean lingo) and run <code>kubelet --cpu-manager-policy=static</code> command as described in the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">kubelet CLI reference here</a>. It gives me the message <code>Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.</code> </li> <li><strong><em>3.</em></strong> So I check the file pointed at by the --config flag by running <code>ps aux | grep kubelet</code> and find that its <code>/etc/kubernetes/kubelet.conf</code>.</li> <li><strong><em>4.</em></strong> I edit the file and add a line <code>cpuManagerPolicy: static</code> to it, and also <code>kubeReserved</code> and <code>systemReserved</code> because they become required fields if specifying <code>cpuManagerPolicy</code>.</li> <li><strong><em>5.</em></strong> Then I kill the process that was running the process and restart it. A couple other things showed up (delete this file and drain the node etc) that I was able to get through and was able to restart the kubelet ultimately</li> </ul> <p>I'm a little lost about the following things</p> <ul> <li>How do I need to do this for all nodes? My cluster has 12 of them and doing all of these steps for each seems very inefficient. </li> <li>Is there any way I can set these params from the globally i.e. cluster-wide rather than doing node by node?</li> <li>How can I even confirm that what I did actually changed the CPU Manager policy?</li> </ul>
<p>One issue with Dynamic Configuration is that in case the node fails to restart, the API does not give a reasonable response back that tells you what you did wrong; you'll have to <code>ssh</code> into the node and tail the kubelet logs. Plus, you have to <code>ssh</code> into every node and set the <code>--dynamic-config-dir</code> flag anyways.</p> <p>The following worked best for me:</p> <ol> <li>SSH into the node. Edit</li> </ol> <pre><code>vim /etc/systemd/system/kubelet.service
</code></pre> <ol start="2"> <li>Add the following lines</li> </ol> <pre><code> --cpu-manager-policy=static \
 --kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
 --system-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
</code></pre> <p>We need to set the <code>--kube-reserved</code> and <code>--system-reserved</code> flags because they're prerequisites to setting the <code>--cpu-manager-policy</code> flag.</p> <ol start="3"> <li>Then drain the node and delete the following folder</li> </ol> <pre><code>rm -rf /var/lib/kubelet/cpu_manager_state
</code></pre> <ol start="4"> <li>Restart the kubelet</li> </ol> <pre><code>sudo systemctl daemon-reload
sudo systemctl stop kubelet
sudo systemctl start kubelet
</code></pre> <ol start="5"> <li>Uncordon the node and check the policy. This assumes that you're running <code>kubectl proxy</code> on port 8001.</li> </ol> <pre><code>curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | grep cpuManager
</code></pre>
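<p>To check all 12 nodes in one go (still assuming <code>kubectl proxy</code> is listening on port 8001), a small sketch like the following loops the same <code>configz</code> call over every node name:</p> <pre><code>for NODE_NAME in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  echo -n "${NODE_NAME}: "
  curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | grep -o '"cpuManagerPolicy":"[^"]*"'
done
</code></pre>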
<p>When a request takes over 60s to respond it seems that the ingress controller will bounce </p> <p>From what I can see our NGINX ingress controller returns 504 to the client after a request takes more than 60s to process. I can see this from the NGINX logs:</p> <pre><code>2019/01/25 09:54:15 [error] 2878#2878: *4031130 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.244.0.1, server: myapplication.com, request: "POST /api/text HTTP/1.1", upstream: "http://10.244.0.39:45606/api/text", host: "myapplication.com" 10.244.0.1 - [10.244.0.1] - - [25/Jan/2019:09:54:15 +0000] "POST /api/text HTTP/1.1" 504 167 "-" "PostmanRuntime/7.1.6" 2940 60.002 [default-myapplication-service-80] 10.244.0.39:45606 0 60.000 504 bdc1e0571e34bf1223e6ed4f7c60e19d </code></pre> <p>The second log item shows 60 seconds for both <strong>upstream response time</strong> and <strong>request time</strong> (see <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/log-format.md" rel="noreferrer">NGINX log format here</a>)</p> <p>But I have specified all the timeout values to be 3 minutes in the ingress configuration:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: aks-ingress annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/send_timeout: "3m" nginx.ingress.kubernetes.io/proxy-connect-timeout: "3m" nginx.ingress.kubernetes.io/proxy-read-timeout: "3m" nginx.ingress.kubernetes.io/proxy-send-timeout: "3m" spec: tls: - hosts: - myapplication.com secretName: tls-secret rules: - host: myapplication.com http: paths: - path: / backend: serviceName: myapplication-service servicePort: 80 </code></pre> <p>What am I missing?</p> <p>I am using nginx-ingress-1.1.0 and k8s 1.9.11 on Azure (AKS).</p>
<p>The issue was fixed by providing integer values (in seconds) for these annotations:</p> <pre><code>nginx.ingress.kubernetes.io/proxy-connect-timeout: "180"
nginx.ingress.kubernetes.io/proxy-read-timeout: "180"
nginx.ingress.kubernetes.io/proxy-send-timeout: "180"
</code></pre> <p>It seems that <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">this variation</a> of the NGINX ingress controller requires plain integers here rather than values like <code>3m</code>.</p>
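<p>If you want to confirm the annotations were actually picked up, one way is to grep the nginx.conf that the controller renders; the namespace and label selector below are assumptions that depend on how your nginx-ingress was installed:</p> <pre><code>POD=$(kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl -n ingress-nginx exec "${POD}" -- grep -E 'proxy_(connect|read|send)_timeout' /etc/nginx/nginx.conf
</code></pre>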
<p>Hi folks,</p> <p>after updating my server, I can't restart kubernetes.</p> <pre><code>Feb 6 10:34:26 chgvas99 kubelet: F0206 10:34:26.662744 27634 server.go:189] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Feb 6 10:34:26 chgvas99 systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Feb 6 10:34:26 chgvas99 systemd: Unit kubelet.service entered failed state.
Feb 6 10:34:26 chgvas99 systemd: kubelet.service failed.
</code></pre> <p>I checked the directory and indeed there is no <strong>config.yaml</strong>. I have the same error on my nodes; I can't restart them either.</p> <p>server : <strong>3.10.0-957.5.1.el7.x86_64</strong></p> <p>kubernetes : <strong>Major:"1", Minor:"13", GitVersion:"v1.13.3"</strong> <strong>GoVersion:"go1.11.5"</strong></p>
<p>I would recommend running 'kubeadm init' to reinitialise the cluster. Also, please make sure your '/var' directory is not full. Please see this <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/" rel="nofollow noreferrer">link</a> for more information about the 'kubeadm init' command.</p>
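<p>Before re-initialising, a couple of quick checks on the node can confirm the diagnosis; <code>/var/lib/kubelet/config.yaml</code> is normally written by <code>kubeadm init</code> (or <code>kubeadm join</code> on worker nodes), so re-running those is what recreates it:</p> <pre><code>df -h /var                  # make sure /var is not full
ls -l /var/lib/kubelet/     # config.yaml should normally live here
ls -l /etc/kubernetes/      # kubeadm's other generated config files
</code></pre>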
<p>I'm <strong>looking</strong> for a <strong>bash script</strong> that could <strong>back up all</strong> the <strong>kubernetes</strong> resources in <strong>yaml format</strong> (or <strong>json</strong>, that's good too :)). I already back up the kubernetes conf files. </p> <pre><code>/etc/kubernetes
/etc/systemd/system/system/kubelet.service.d
</code></pre> <p>etc... </p> <p>Now I'm just looking to save the </p> <p><strong>namespaces</strong></p> <p><strong>deployments</strong></p> <p>etc... </p>
<p>You can dump your entire cluster info into one file using:</p> <pre><code>kubectl cluster-info dump &gt; cluster_dump.txt </code></pre> <p>The above command will dump all the yaml and container logs into one file.</p> <p>Or, if you just want yaml files, you can write a script around commands such as:</p> <pre><code>kubectl get deployment -o yaml &gt; deployment.yaml
kubectl get statefulset -o yaml &gt; statefulset.yaml
kubectl get daemonset -o yaml &gt; daemonset.yaml
</code></pre> <p>You also have to keep namespaces in mind while creating the script. This should give you a fair idea of what to do.</p>
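<p>Putting that together, here is a minimal bash sketch that dumps a handful of resource types per namespace into YAML files; the resource list is just an example and can be extended (e.g. with <code>secret</code>, <code>pvc</code>, <code>ingress</code>):</p> <pre><code>#!/usr/bin/env bash
set -euo pipefail

BACKUP_DIR="k8s-backup-$(date +%Y%m%d)"
mkdir -p "${BACKUP_DIR}"

# cluster-scoped objects
kubectl get namespaces -o yaml &gt; "${BACKUP_DIR}/namespaces.yaml"

# namespaced objects, one folder per namespace
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  mkdir -p "${BACKUP_DIR}/${ns}"
  for kind in deployment statefulset daemonset service configmap; do
    kubectl -n "${ns}" get "${kind}" -o yaml &gt; "${BACKUP_DIR}/${ns}/${kind}.yaml"
  done
done
</code></pre>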
<p>I deploy kong via helm on my kubernetes cluster but I can't configure it as I want.</p> <pre><code>helm install stable/kong -f values.yaml </code></pre> <p>values.yaml:</p> <pre><code>{
  "persistence.size":"1Gi",
  "persistence.storageClass":"my-kong-storage"
}
</code></pre> <p>Unfortunately, the created PersistentVolumeClaim stays at 8Gi instead of 1Gi. Even adding "persistence.enabled":false has no effect on the deployment. So I think all my configuration is bad.</p> <p>What would a good configuration file look like?</p> <p>I am using a kubernetes rancher deployment on bare metal servers. I use Local Persistent Volumes. (This works well with a mongo-replicaset deployment.)</p>
<p>What you are trying to do is to configure a dependency chart (a.k.a subchart ) which is a little different from a main chart when it comes to writing <code>values.yaml</code>. Here is how you can do it:</p> <p>As <code>postgresql</code> is a dependency chart for <code>kong</code> so you have to use the name of the dependency chart as a key then the rest of the options you need to modify in the following form:</p> <blockquote> <p>The content of <code>values.yaml</code> does not need to be surrounded with curly braces. so you need to remove it from the code you posted in the question.</p> </blockquote> <pre class="lang-yaml prettyprint-override"><code>&lt;dependcy-chart-name&gt;: &lt;configuration-key-name&gt;: &lt;configuration-value&gt; </code></pre> <p>For Rancher you have to write it as the following:</p> <pre class="lang-yaml prettyprint-override"><code>#values.yaml for rancher postgresql.persistence.storageClass: &quot;my-kong-storage&quot; postgresql.persistence.size: &quot;1Gi&quot; </code></pre> <p>Unlike if you are using helm itself with vanilla kubernetes - at least - you can write the <code>values.yml</code> as below:</p> <pre class="lang-yaml prettyprint-override"><code>#values.yaml for helm postgresql: persistence: storageClass: &quot;my-kong-storage&quot; size: &quot;1Gi&quot; </code></pre> <blockquote> <ul> <li><p>More about <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md#overriding-values-of-a-child-chart" rel="nofollow noreferrer">Dealing with SubChart values</a></p> </li> <li><p>More about <a href="https://github.com/helm/charts/tree/master/stable/postgresql#configuration" rel="nofollow noreferrer">Postgresql chart configuration</a></p> </li> </ul> </blockquote>
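<p>If you prefer not to maintain a values file at all, the same subchart values can be overridden on the command line with <code>--set</code> (this assumes the chart version you install still names its PostgreSQL dependency <code>postgresql</code>):</p> <pre><code>helm install stable/kong \
  --set postgresql.persistence.size=1Gi \
  --set postgresql.persistence.storageClass=my-kong-storage
</code></pre>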
<p>I am creating redis-cluster on kube with aws-gp2 persistent volume. I was using <a href="https://raw.githubusercontent.com/sanderploegsma/redis-cluster/master/redis-cluster.yml" rel="nofollow noreferrer">redis-cluster.yml</a></p> <p>I have created Storage Class according to this <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">doc</a>, for dynamic persistence volume creation</p> <p>This is my StorageClass definition</p> <pre><code> kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: aws-gp2 provisioner: kubernetes.io/aws-ebs parameters: type: gp2 zones: us-west-2a, us-west-2b, us-west-2c fsType: ext4 reclaimPolicy: Retain allowVolumeExpansion: true </code></pre> <p>When I try to create cluster volume creation stuck at <code>pending</code> state, after checking logs found this </p> <pre><code> $ kubectl -n staging describe pvc data-redis-cluster-0 Name: data-redis-cluster-0 Namespace: staging StorageClass: Status: Pending Volume: Labels: app=redis-cluster Annotations: &lt;none&gt; Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal FailedBinding 13s (x11 over 2m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set </code></pre> <p>and events </p> <pre><code> $ kubectl -n staging get events LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 10s 10s 1 redis-cluster.15816c6dc1d6c03a StatefulSet Normal SuccessfulCreate statefulset-controller create Claim data-redis-cluster-0 Pod redis-cluster-0 in StatefulSet redis-cluster success 10s 10s 1 redis-cluster.15816c6dc2226fe0 StatefulSet Normal SuccessfulCreate statefulset-controller create Pod redis-cluster-0 in StatefulSet redis-cluster successful 8s 10s 3 data-redis-cluster-0.15816c6dc1dfd0cb PersistentVolumeClaim Normal FailedBinding persistentvolume-controller no persistent volumes available for this claim and no storage class is set 3s 10s 5 redis-cluster-0.15816c6dc229258d Pod Warning FailedScheduling default-scheduler pod has unbound PersistentVolumeClaims (repeated 4 times) </code></pre> <p>someone point out what is wrong here ?</p>
<p>Since the cluster doesn't have a default <code>StorageClass</code>, I had to add <code>storageClassName: aws-gp2</code> to <code>volumeClaimTemplates</code>, which fixed the issue,</p> <p>like this:</p> <pre><code>  volumeClaimTemplates:
  - metadata:
      namespace: staging
      name: data
      labels:
        name: redis-cluster
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: aws-gp2
      resources:
        requests:
          storage: 100Mi
</code></pre>
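<p>An alternative, if you would rather not set <code>storageClassName</code> on every claim, is to mark <code>aws-gp2</code> as the cluster's default StorageClass using the standard default-class annotation, so that claims without an explicit class still bind:</p> <pre><code>kubectl get storageclass
kubectl patch storageclass aws-gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
</code></pre>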
<p>I was following <a href="https://cloud.google.com/python/django/kubernetes-engine" rel="nofollow noreferrer">this</a> tutorial for deploying Django App to Kubernetes Cluster. I've created cloudsql credentials and exported them as in the tutorial</p> <pre><code>export DATABASE_USER=&lt;your-database-user&gt; export DATABASE_PASSWORD=&lt;your-database-password&gt; </code></pre> <p>However my password was generated by LastPass and contains special characters, which are striped out in Kubernetes Pod thus making the password incorrect.</p> <p>This is my password (altered, just showing the special chars) <code>5bb4&amp;sL!EB%e</code></p> <p>So i've tried various ways of exporting this string, echoing it out always show correct password, however in Kubernetes Dashboard the password is always incorrect (Also altered in DevTools, but some chars are just stripped out)</p> <p><a href="https://i.stack.imgur.com/yR5o9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yR5o9.png" alt="enter image description here"></a></p> <p>Things I've tried</p> <pre><code>export DATABASE_PASSWORD=$'5bb4&amp;sL\!EB\%e' export DATABASE_PASSWORD='5bb4&amp;sL!EB%e' </code></pre> <p>Echoing is always good but kubernetes is always stripping it.</p> <p>Deploying with <code>skaffold deploy</code></p> <p>EDIT:</p> <p>After hint I've tried to store the password in base64 encoding form, however I suspect it only applies to local scope, as the password in Kubernetes Dashboard is still the same, I suspect that I need to regenerate the certificate to make this work remotely on gke cluster?</p> <p><a href="https://i.stack.imgur.com/VGjsJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VGjsJ.jpg" alt="enter image description here"></a></p> <p>So the env variables are for local and credentials in cloud sql proxy are the ones that are being used and misinterpreted? Where are those files by the way?</p> <p>EDIT2:</p> <p>I've just found out that indeed the gke cluster in using the credentials json rather than the exported variables. The configuration json already contains the password in base64 encoded form, HOWEVER it is base64 encode of string which still is missing special characters. Looks like the only way out is to generate new credentials without specials characters, that looks like a bug, doesnt it?</p>
<p>You should <code>base64</code> encode your password before passing it into the pod so that special characters are encoded in a way that they can be preserved.</p> <p>In bash you can do this with: </p> <pre><code>export DATABASE_PASSWORD=`echo -n '[ACTUAL_PASSWORD_HERE]' | base64` </code></pre> <p>The <code>-n</code> flag stops <code>echo</code> from appending a newline to the value before it is encoded, and the single quotes stop the shell from interpreting characters such as <code>!</code>, <code>&amp;</code> and <code>%</code>.</p> <p>You'll then need to ensure that the Django app <code>settings.py</code> uses a base64 decode before applying the password to its internal variable.</p> <p>So in the tutorial you linked to, the line</p> <p><code>'PASSWORD': os.getenv('DATABASE_PASSWORD'),</code></p> <p>would need to change to:</p> <p><code>'PASSWORD': base64.b64decode(os.getenv('DATABASE_PASSWORD')).decode(),</code></p> <p>(<code>b64decode</code> returns bytes, so the trailing <code>.decode()</code> turns it back into a string; you will also need <code>import base64</code> at the top of <code>settings.py</code>.)</p>
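<p>As a quick sanity check you can round-trip the value locally before deploying; the password below is just the illustrative one from the question, and <code>--decode</code> may be spelled <code>-D</code> on some BSD/macOS <code>base64</code> builds:</p> <pre><code>export DATABASE_PASSWORD=$(echo -n '5bb4&amp;sL!EB%e' | base64)
echo "${DATABASE_PASSWORD}"
echo "${DATABASE_PASSWORD}" | base64 --decode   # should print 5bb4&amp;sL!EB%e with nothing added or stripped
</code></pre>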
<p>I have an EKS cluster with several microservices (A, B and C). I followed the "Getting Started Guide" and having a dedicated VPC with worker nodes inside. In the front I have a Load Balancer accepting and routing <em>HTTPS</em> traffic from the Internet. That LB supposed to terminate TLS. I planning to use Istio for traffic management (e.g. as API gateway).</p> <p>In addition, I have another group of web servers, deployed on the same AWS account and part of the same system. It supposed to be accessible from the Internet as well, but this time it should have Load Balancer accepting <em>TCP</em> traffic. These web servers shall be able to send REST calls to one of the services deployed on EKS (let's say, "A").</p> <p>The catch: service "A" is internal service and its API shall be accessible internally only (e.g. it should not be exposed to the Internet), while services B and C does exposed to the Internet.</p> <p>What is the simplest and <strong>securest</strong> way to achieve what I need?</p>
<p>You can create a Kubernetes service of type 'ClusterIP' for micro-service A. Micro-services B and C can be exposed as LoadBalancer services. With this, B &amp; C will be exposed to the Internet, while service A can only be accessed internally within the K8s cluster. Hope this helps. </p>
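<p>As a rough sketch of the difference (the deployment names and ports below are placeholders, not taken from your setup):</p> <pre><code># internal-only: reachable as service-a.&lt;namespace&gt;.svc.cluster.local from inside the cluster
kubectl expose deployment service-a --name=service-a --type=ClusterIP --port=80 --target-port=8080

# internet-facing: EKS provisions an AWS load balancer for each of these
kubectl expose deployment service-b --name=service-b --type=LoadBalancer --port=80 --target-port=8080
kubectl expose deployment service-c --name=service-c --type=LoadBalancer --port=80 --target-port=8080
</code></pre>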
<p>I want to reference the label's value in VirtualService's spec section inside k8s yaml file. I use ${metadata.labels[component]} to indicate the positions below. Is there a way to implement my idea?</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: istio-ingress-version namespace: netops labels: component: version spec: hosts: - "service.api.com" gateways: - public-inbound-gateway http: - match: - uri: prefix: /${metadata.labels[component]}/ headers: referer: regex: ^https://[^\s/]*a.api.com[^\s]* rewrite: uri: "/" route: - destination: host: ${metadata.labels[component]}.3da.svc.cluster.local - match: - uri: prefix: /${metadata.labels[component]}/ headers: referer: regex: ^https://[^\s/]*b.api.com[^\s]* rewrite: uri: "/" route: - destination: host: ${metadata.labels[component]}.3db.svc.cluster.local - match: - uri: prefix: /${metadata.labels[component]}/ rewrite: uri: "/" route: - destination: host: ${metadata.labels[component]}.3db.svc.cluster.local </code></pre>
<p>This isn't a capability of Kubernetes itself, however other tools exist that can help you with this scenario.</p> <p>The main one of these is <a href="https://docs.helm.sh" rel="nofollow noreferrer">Helm</a>. It allows you to create variables that can be shared across several different YAML files, allowing you to share values or even fully parameterise your deployment.</p>
<p>What is the difference between Hadoop on Kubernetes and the standard Hadoop ? and what is the benefit from deploying Hadoop on Kubernetes ?</p>
<p>As people have said, "the only difference is you are in kubernetes/container". The reality is that means a couple of <em>huge</em> things in terms of actual operation:</p> <ul> <li>The helm chart linked above is a toy. <ul> <li>It builds vanilla hadoop (i.e. not HDP or CDH)</li> <li>It doesn't do HA namenodes</li> <li>It doesn't do kerberos</li> </ul></li> <li>You have to manage your own volumes <ul> <li>If you are running on a public cloud this isn't a super big deal, as you can dynamically get storage</li> </ul></li> </ul> <p>So unless you just want a super lightweight hdfs deployment, or you are comfortable/willing to build out your own deployment of a more sophisticated k8s hadoop deployment, or you are willing to pay for a 3rd party kubernetes stack with hadoop support (e.g. robin.io), I would say that in general it is not worth running on k8s at this point. </p> <p>Note that if/when the hadoop vendors make their own <a href="https://github.com/operator-framework/operator-sdk" rel="noreferrer">operator</a>, this might change.</p>
<p>This may be a basic question but I haven't seen any documentation on it.</p> <p>Can you override parameters defined within the StorageClass using the PVC?</p> <p>For example, here is a StorageClass I have created: </p> <pre><code>--- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: sc-gold provisioner: hpe.com/hpe parameters: provisioning: 'full' cpg: 'SSD_r6' snapcpg: 'FC_r6' </code></pre> <p>PVC</p> <pre><code>--- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc-nginx spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: sc-gold </code></pre> <p>I want to use the "sc-gold" StorageClass as defined above but be able to override/change the provisioning type from "full" to "thin" when creating the PVC without having to create another StorageClass. I don't see any examples of how the PVC would be formatted or if this is even supported within the spec.</p> <p>Traditionally as Storage Admins, we create the StorageClass as storage "profiles" and then the users are assigned/consume the SC in order to create volumes, but is there any flexibility in the spec? I just want to limit the StorageClass sprawl that I can see happening in order to accommodate any and all scenarios.</p> <p>Thoughts?</p>
<p>No, you can't override StorageClass parameters during PVC creation. You would need to create an additional StorageClass with the desired parameters and reference that StorageClass from the PVC.</p>
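<p>For your example that would look something like the class below (a sketch; whether the HPE provisioner accepts <code>provisioning: 'thin'</code> is an assumption to confirm against its documentation):</p> <pre><code>---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-gold-thin
provisioner: hpe.com/hpe
parameters:
  provisioning: 'thin'
  cpg: 'SSD_r6'
  snapcpg: 'FC_r6'
</code></pre> <p>The PVC would then simply reference <code>storageClassName: sc-gold-thin</code> instead of <code>sc-gold</code>.</p>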
<p>I am trying to set up HPA for my StatefulSet (for Elasticsearch) in a Kubernetes environment. I am planning to scale the StatefulSet using CPU utilization. I have created the metrics server from <a href="https://github.com/stefanprodan/k8s-prom-hpa/tree/master/metrics-server" rel="noreferrer">https://github.com/stefanprodan/k8s-prom-hpa/tree/master/metrics-server</a>.</p> <p>My HPA yaml for the StatefulSet is as follows:</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: dz-es-cluster
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: StatefulSet
    name: dz-es-cluster
  minReplicas: 2
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
</code></pre> <p>But I am getting the following output in the HPA:</p> <pre><code>Conditions:
  Type         Status  Reason          Message
  ----         ------  ------          -------
  AbleToScale  False   FailedGetScale  the HPA controller was unable to get the target's current scale: the server could not find the requested resource
Events:
  Type     Reason          Age                From                       Message
  ----     ------          ----               ----                       -------
  Warning  FailedGetScale  1m (x71 over 36m)  horizontal-pod-autoscaler  the server could not find the requested resource
</code></pre> <p>Can someone please help me?</p>
<p>Support for autoscaling StatefulSets using HPA was added in Kubernetes 1.9, so your version doesn't have support for it. From Kubernetes 1.9 onwards, you can autoscale your StatefulSets using:</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: YOUR_HPA_NAME
spec:
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: YOUR_STATEFUL_SET_NAME
  targetCPUUtilizationPercentage: 80
</code></pre> <p>Please refer to the following link for more information:</p> <blockquote> <p><a href="https://github.com/kubernetes/kubernetes/issues/44033" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/44033</a></p> </blockquote>
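<p>If you are not sure which version your cluster is actually running, you can check it first:</p> <pre><code>kubectl version --short
</code></pre>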
<p>I am trying to shell into the container with <code>kubectl exec -it xxxxxx</code></p> <p>but it returns</p> <pre><code>rpc error: code = 5 desc = open /var/run/docker/libcontainerd/containerd/faf3fd49262cc738e16368001eba5e1113abcb8a87e7b818cb84af3799906149/30fe901c16e0465aa15b596bf3e4f244fb12a7e4133b6e4da5aa35167a8dfb30/shim-log.json: no such file or directory
</code></pre> <p>I tried rebooting the node, but it did not help.</p>
<p>Thanks @Prafull Ladha</p> <p>Eventually I restarted Docker (<code>systemctl restart docker</code>) on the node whose pods could not be shelled into, and everything returned to normal.</p>
<p>I can not remove kubernetes-dashboard from Minikube. I tried deleting the deployment "deployment.apps/kubernetes-dashboard" multiple times. But it gets recreated automatically in a few seconds.</p> <p>I am using the following command to delete the deployment:</p> <blockquote> <p>kubectl delete deployment.apps/kubernetes-dashboard -n kube-system</p> </blockquote> <p>I even tried to edit the deployment by setting the replica count to zero. But even it gets reset automatically after a few seconds.</p> <p>The same thing happens for nginx-ingress deployment in kube-system.</p>
<p>I had to disable the dashboard addon using minikube first. Then deleting the deployment did work for me.</p> <pre><code>minikube addons disable dashboard </code></pre> <p>And in case of ingress:</p> <pre><code>minikube addons disable ingress </code></pre>
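<p>You can confirm that the addons are disabled (and that the deployments will no longer be re-created) by listing them:</p> <pre><code>minikube addons list
</code></pre>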
<p>I am using kubernetes and trying to call the following command:<br> <code>kubectl set env deployment/server A=a B=b ...etc</code> My env variables are located in .env file, and each value can contain spaces, non-escaped chars, comments, empty lines and so on:</p> <pre><code>## Common variables NODE_ENV=production ## Server variables SERVER_PORT=8009 CORS_ORIGIN=https://www.example.io,http://www.example.io,http://localhost:3000 SESSION_SECRET=/qm%7HLw"pk(8@"pja#I9CbN#2Lg[%d&gt;5{CDA_9g|ZvZmuZ$]=';EhA#g+C;1&gt;&amp; </code></pre> <p>I am trying to parse the .env file so that I can integrate it in the command:</p> <pre><code>kubectl set env deployment/server $(do magic with .env file to extract the escaped variables) </code></pre> <p>Tried <code>grep -v '^#' .env | xargs</code> but it doesn't work with characters that need escaping or quotes. My bash abilities are not the strongest right now. Any idea how to solve this?</p>
<pre><code>cat .env | grep -v '^#\|^$' | xargs -0 echo | tr '\n' ' ' </code></pre> <p>This would print everything in one line: <code>NODE_ENV=production SERVER_PORT=8009 CORS_ORIGIN=https://www.example.io,http://www.example.io,http://localhost:3000 SESSION_SECRET=/qm%7HLw"pk(8@"pja#I9CbN#2Lg[%d&gt;5{CDA_9g|ZvZmuZ$]=';EhA#g+C;1&gt;&amp;</code> </p> <p>I am not sure if you need to keep the <code>VAR=</code> or if you also would like to get rid of that (pipe into <code>sed</code> or <code>awk</code> in the end then and replace it with whatever you need)</p>
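<p>If the shell quoting keeps being painful, another option that avoids word-splitting entirely is to load the file into a ConfigMap and let kubectl inject it (the ConfigMap name below is just an example; <code>--from-env-file</code> should skip comment and blank lines for you):</p> <pre><code>kubectl create configmap server-env --from-env-file=.env
kubectl set env deployment/server --from=configmap/server-env
</code></pre>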
<p>We have a product where each device reports data to our server every minute. We've sold 10K devices, and it'll be more than 100K within the next 2 years.</p> <p>Right now we just use <code>AWS EC2</code> (8 CPUs) + <code>Nginx</code> + <code>Koa2</code> with 8 instances, but as the requirements keep growing this won't be appropriate anymore.</p> <p>So I'm confused about which one I should choose.</p> <p>BTW, I've been using <code>AWS Lambda</code> for 2 years to work on IoT problems and I used Docker 5 years ago, so I think all three of these approaches could solve my issue.</p>
<p>Kubernetes is a great tool if you are not sure how your application is going to scale out. Using Amazon's Kubernetes engine or GCP's equivalent Kubernetes engine can help you create and manage these clusters with a few clicks, and manage your application containers using Kubernetes's superior container orchestration framework.</p> <p>Docker Swarm, IMO, has lost to Kubernetes because they did not seem to have jumped into this part of the architecture soon enough, while Kubernetes has already created a very mature framework with great adoption in many production environments and a lot of community support for any issues related to it.</p> <p>Another advantage of using an orchestration mechanism instead of AWS native services would be that you do not get into vendor lock-in situations and you can easily move your stack to any other cloud platform hosting Kubernetes.</p> <p>If you are also interested in continuing to use your serverless architecture, you might want to look at OpenFaaS, which can be leveraged on top of your Kubernetes framework. Check <a href="https://docs.openfaas.com/deployment/kubernetes/" rel="nofollow noreferrer">this</a> link for more details.</p>
<p>I currently use the following script to wait for the job completion</p> <p><code>ACTIVE=$(kubectl get jobs my-job -o jsonpath='{.status.active}') until [ -z $ACTIVE ]; do ACTIVE=$(kubectl get jobs my-job -o jsonpath='{.status.active}') ; sleep 30 ; done </code></p> <p>The problem is the job can either fail or be successful as it is a test job.</p> <p>Is there a better way to achieve the same?</p>
<p>Yes. As I pointed out in <a href="https://hackernoon.com/kubectl-tip-of-the-day-wait-like-a-boss-40a818c423ac" rel="nofollow noreferrer">kubectl tip of the day: wait like a boss</a>, you can use the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="nofollow noreferrer">kubectl wait</a> command.</p>
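<p>For example, to block until the job finishes successfully (the command exits non-zero if the timeout is reached first):</p> <pre><code>kubectl wait --for=condition=complete --timeout=600s job/my-job
</code></pre> <p>Since a test job may also fail, you can additionally watch the <code>failed</code> condition (e.g. a second <code>kubectl wait --for=condition=failed job/my-job</code> running in parallel) and react to whichever returns first.</p>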
<p>According to <a href="https://github.com/spring-cloud/spring-cloud-kubernetes#service-account" rel="noreferrer">Spring Cloud Kubernetes</a> docs, in order to discover services/pods in RBAC enabled Kubernetes distros:</p> <blockquote> <p>you need to make sure a pod that runs with spring-cloud-kubernetes has access to the Kubernetes API. For any service accounts you assign to a deployment/pod, you need to make sure it has the correct roles. For example, you can add <code>cluster-reader</code> permissions to your default service account depending on the project you’re in.</p> </blockquote> <p>What are <code>cluster-reader</code> permissions in order to discover services/pods?</p> <p>Error I receiving is:</p> <pre><code>io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://x.x.x.x/api/v1/namespaces/jx-staging/services. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. services is forbidden: User "system:serviceaccount:jx-staging:default" cannot list services in the namespace "jx-staging" </code></pre>
<p>Read <code>endpoints</code> and <code>services</code> seems to be a bare minimum for Spring Cloud Kubernetes to discover pods and services. </p> <p>Example adds permissions to <code>default</code> service account in <code>default</code> namespace. </p> <pre><code>--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-read-role rules: - apiGroups: - "" resources: - endpoints - pods - services - configmaps verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cluster-read-rolebinding subjects: - kind: ServiceAccount name: default namespace: default roleRef: kind: ClusterRole name: cluster-read-role apiGroup: rbac.authorization.k8s.io </code></pre>
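<p>Once the binding is in place for the right service account (in the question that is <code>system:serviceaccount:jx-staging:default</code>, so adjust the subject's namespace accordingly), you can verify the access using impersonation:</p> <pre><code>kubectl auth can-i list services \
  --as=system:serviceaccount:jx-staging:default -n jx-staging
</code></pre>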
<p>I have a kubernetes cluster with 1 master and 2 workers. All nodes have their IP address. Let's call them like this:</p> <ul> <li>master-0</li> <li>worker-0</li> <li>worker-1</li> </ul> <p>The network pod policy and all my nodes communication are setting up correctly, all works perfectly. If I specify this infrastructure, it's just to be more specific about my case.</p> <p>Using helm I have created a chart which deploy a basic nginx. It's a docker image that I build on my private gitlab registry.</p> <p>With the gitlab ci, I have created a job which used two functions:</p> <pre><code># Init helm client on k8s cluster for using helm with gitlab runner function init_helm() { docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY" mkdir -p /etc/deploy echo ${kube_config} | base64 -d &gt; ${KUBECONFIG} kubectl config use-context ${K8S_CURRENT_CONTEXT} helm init --client-only helm repo add stable https://kubernetes-charts.storage.googleapis.com/ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/ helm repo update } # Deploy latest tagged image on k8s cluster function deploy_k8s_cluster() { echo "Create and apply secret for docker gitlab runner access to gitlab private registry ..." kubectl create secret -n "$KUBERNETES_NAMESPACE_OVERWRITE" \ docker-registry gitlab-registry \ --docker-server="https://registry.gitlab.com/v2/" \ --docker-username="${CI_DEPLOY_USER:-$CI_REGISTRY_USER}" \ --docker-password="${CI_DEPLOY_PASSWORD:-$CI_REGISTRY_PASSWORD}" \ --docker-email="$GITLAB_USER_EMAIL" \ -o yaml --dry-run | kubectl replace -n "$KUBERNETES_NAMESPACE_OVERWRITE" --force -f - echo "Build helm dependancies in $CHART_TEMPLATE" cd $CHART_TEMPLATE/ helm dep build export DEPLOYS="$(helm ls | grep $PROJECT_NAME | wc -l)" if [[ ${DEPLOYS} -eq 0 ]]; then echo "Creating the new chart ..." helm install --name ${PROJECT_NAME} --namespace=${KUBERNETES_NAMESPACE_OVERWRITE} . -f values.yaml else echo "Updating the chart ..." helm upgrade ${PROJECT_NAME} --namespace=${KUBERNETES_NAMESPACE_OVERWRITE} . -f values.yaml fi } </code></pre> <p>The first function allow the gitlabrunner to login with docker, init helm and kubectl. 
The second to deploy on the cluster my image.</p> <p>All the process works well, e-g my jobs are passed on the gitlab ci, no error occurred except for the deployment of the pod.</p> <p><strong>Indeed I have this error</strong>:</p> <pre><code>Failed to pull image "registry.gitlab.com/path/to/repo/project/image:TAG_NUMBER": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.gitlab.com/v2/path/to/repo/project/image/manifests/image:TAG_NUMBER: denied: access forbidden </code></pre> <p>To be more specific, I am using <a href="https://docs.gitlab.com/ee/install/kubernetes/gitlab_runner_chart.html#configuration" rel="nofollow noreferrer">gitlab-runner helm chart</a> and this the config of the chart:</p> <pre><code>## GitLab Runner Image ## ## By default it's using gitlab/gitlab-runner:alpine-v{VERSION} ## where {VERSION} is taken from Chart.yaml from appVersion field ## ## ref: https://hub.docker.com/r/gitlab/gitlab-runner/tags/ ## # image: gitlab/gitlab-runner:alpine-v11.6.0 ## Specify a imagePullPolicy ## 'Always' if imageTag is 'latest', else set to 'IfNotPresent' ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images ## imagePullPolicy: IfNotPresent ## The GitLab Server URL (with protocol) that want to register the runner against ## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-register ## gitlabUrl: https://gitlab.com/ ## The Registration Token for adding new Runners to the GitLab Server. This must ## be retrieved from your GitLab Instance. ## ref: https://docs.gitlab.com/ce/ci/runners/README.html#creating-and-registering-a-runner ## runnerRegistrationToken: "&lt;token&gt;" ## The Runner Token for adding new Runners to the GitLab Server. This must ## be retrieved from your GitLab Instance. It is token of already registered runner. ## ref: (we don't yet have docs for that, but we want to use existing token) ## # runnerToken: "" # ## Unregister all runners before termination ## ## Updating the runner's chart version or configuration will cause the runner container ## to be terminated and created again. This may cause your Gitlab instance to reference ## non-existant runners. Un-registering the runner before termination mitigates this issue. ## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-unregister ## unregisterRunners: true ## Set the certsSecretName in order to pass custom certficates for GitLab Runner to use ## Provide resource name for a Kubernetes Secret Object in the same namespace, ## this is used to populate the /etc/gitlab-runner/certs directory ## ref: https://docs.gitlab.com/runner/configuration/tls-self-signed.html#supported-options-for-self-signed-certificates ## # certsSecretName: ## Configure the maximum number of concurrent jobs ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section ## concurrent: 10 ## Defines in seconds how often to check GitLab for a new builds ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section ## checkInterval: 30 ## Configure GitLab Runner's logging level. 
Available values are: debug, info, warn, error, fatal, panic ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section ## # logLevel: ## For RBAC support: rbac: create: true ## Run the gitlab-bastion container with the ability to deploy/manage containers of jobs ## cluster-wide or only within namespace clusterWideAccess: true ## Use the following Kubernetes Service Account name if RBAC is disabled in this Helm chart (see rbac.create) ## serviceAccountName: default ## Configure integrated Prometheus metrics exporter ## ref: https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server metrics: enabled: true ## Configuration for the Pods that that the runner launches for each new job ## runners: ## Default container image to use for builds when none is specified ## image: ubuntu:16.04 ## Specify one or more imagePullSecrets ## ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ ## imagePullSecrets: ["namespace-1", "namespace-2", "default"] ## Specify the image pull policy: never, if-not-present, always. The cluster default will be used if not set. ## # imagePullPolicy: "" ## Specify whether the runner should be locked to a specific project: true, false. Defaults to true. ## # locked: true ## Specify the tags associated with the runner. Comma-separated list of tags. ## ## ref: https://docs.gitlab.com/ce/ci/runners/#using-tags ## tags: my-tag-1, my-tag-2" ## Run all containers with the privileged flag enabled ## This will allow the docker:dind image to run if you need to run Docker ## commands. Please read the docs before turning this on: ## ref: https://docs.gitlab.com/runner/executors/kubernetes.html#using-docker-dind ## privileged: true ## The name of the secret containing runner-token and runner-registration-token # secret: gitlab-runner ## Namespace to run Kubernetes jobs in (defaults to the same namespace of this release) ## # namespace: # Regular expression to validate the contents of the namespace overwrite environment variable (documented following). # When empty, it disables the namespace overwrite feature namespace_overwrite_allowed: overrided-namespace-* ## Distributed runners caching ## ref: https://gitlab.com/gitlab-org/gitlab-runner/blob/master/docs/configuration/autoscale.md#distributed-runners-caching ## ## If you want to use s3 based distributing caching: ## First of all you need to uncomment General settings and S3 settings sections. ## ## Create a secret 's3access' containing 'accesskey' &amp; 'secretkey' ## ref: https://aws.amazon.com/blogs/security/wheres-my-secret-access-key/ ## ## $ kubectl create secret generic s3access \ ## --from-literal=accesskey="YourAccessKey" \ ## --from-literal=secretkey="YourSecretKey" ## ref: https://kubernetes.io/docs/concepts/configuration/secret/ ## ## If you want to use gcs based distributing caching: ## First of all you need to uncomment General settings and GCS settings sections. ## ## Access using credentials file: ## Create a secret 'google-application-credentials' containing your application credentials file. 
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-cache-gcs-section ## You could configure ## $ kubectl create secret generic google-application-credentials \ ## --from-file=gcs-applicaton-credentials-file=./path-to-your-google-application-credentials-file.json ## ref: https://kubernetes.io/docs/concepts/configuration/secret/ ## ## Access using access-id and private-key: ## Create a secret 'gcsaccess' containing 'gcs-access-id' &amp; 'gcs-private-key'. ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-cache-gcs-section ## You could configure ## $ kubectl create secret generic gcsaccess \ ## --from-literal=gcs-access-id="YourAccessID" \ ## --from-literal=gcs-private-key="YourPrivateKey" ## ref: https://kubernetes.io/docs/concepts/configuration/secret/ cache: {} ## General settings # cacheType: s3 # cachePath: "cache" # cacheShared: true ## S3 settings # s3ServerAddress: s3.amazonaws.com # s3BucketName: # s3BucketLocation: # s3CacheInsecure: false # secretName: s3access ## GCS settings # gcsBucketName: ## Use this line for access using access-id and private-key # secretName: gcsaccess ## Use this line for access using google-application-credentials file # secretName: google-application-credential ## Build Container specific configuration ## builds: # cpuLimit: 200m # memoryLimit: 256Mi cpuRequests: 100m memoryRequests: 128Mi ## Service Container specific configuration ## services: # cpuLimit: 200m # memoryLimit: 256Mi cpuRequests: 100m memoryRequests: 128Mi ## Helper Container specific configuration ## helpers: # cpuLimit: 200m # memoryLimit: 256Mi cpuRequests: 100m memoryRequests: 128Mi image: gitlab/gitlab-runner-helper:x86_64-latest ## Service Account to be used for runners ## # serviceAccountName: ## If Gitlab is not reachable through $CI_SERVER_URL ## # cloneUrl: ## Specify node labels for CI job pods assignment ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ ## nodeSelector: {} # gitlab: true ## Configure resource requests and limits ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## resources: # limits: # memory: 256Mi # cpu: 200m requests: memory: 128Mi cpu: 100m ## Affinity for pod assignment ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity ## affinity: {} ## Node labels for pod assignment ## Ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: {} # Example: The gitlab runner manager should not run on spot instances so you can assign # them to the regular worker nodes only. # node-role.kubernetes.io/worker: "true" ## List of node taints to tolerate (requires Kubernetes &gt;= 1.6) ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ ## tolerations: [] # Example: Regular worker nodes may have a taint, thus you need to tolerate the taint # when you assign the gitlab runner manager with nodeSelector or affinity to the nodes. # - key: "node-role.kubernetes.io/worker" # operator: "Exists" ## Configure environment variables that will be present when the registration command runs ## This provides further control over the registration process and the config.toml file ## ref: `gitlab-runner register --help` ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html ## envVars: - name: RUNNER_EXECUTOR value: kubernetes </code></pre> <p>As you can see, I created a <strong>secret</strong> on my ci job, no error occurred here too. 
In my chart, I declare this same secret (by his name) in <code>values.yaml</code> file, which allow <code>deployment.yaml</code> to use it.</p> <p>So I do not understand where I am wrong. Why I get this error ?</p>
<p>Extending my last comment: I suppose the <code>TAG_NUMBER</code> variable is defined somewhere in your GitLab CI job. However, authorization is failing for the credentials assigned to the <code>--docker-username</code> and <code>--docker-password</code> flags. Have you checked the credentials used for the connection to the <code>docker-registry</code> secret? Another option might be to manage the secret within the <a href="https://docs.gitlab.com/ee/install/kubernetes/gitlab_runner_chart.html#providing-a-custom-certificate-for-accessing-gitlab" rel="nofollow noreferrer">GitLab Runner</a> Helm chart template.</p>
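<p>As a first debugging step it can help to confirm that the secret actually contains working registry credentials and that the deployment references it via <code>imagePullSecrets</code>. For example (namespace and deployment name are placeholders):</p> <pre><code># inspect the docker config stored in the secret
kubectl get secret gitlab-registry -n my-namespace \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode

# check that the pod template actually references the secret
kubectl get deployment my-app -n my-namespace \
  -o jsonpath='{.spec.template.spec.imagePullSecrets}'
</code></pre>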
<p>I have a Kubernetes cron job that gets an OOMKilled (Out of Memory) message when running. This specific cron job runs once a day. The node itself has 4 GB of RAM.</p> <p>I found a source somewhere saying that the default for a cron job is 100 MB. Where can I view or change the default for Kubernetes cron jobs?</p>
<p>You can just add resources and limits to that job to prevent it from being OOM killed. One thing to note is how resources and limits work. I have explained some scenarios in this <a href="https://stackoverflow.com/questions/54184886/what-is-the-use-case-of-setting-memory-request-less-than-limit-in-k8s/54203128#54203128">answer</a> on Stack. It is worth checking the official <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container" rel="noreferrer">documentation</a> to avoid common mistakes; also note that CPU and memory resources/limits work a little bit differently.</p> <p>As for the default values of resources and limits, it depends on where your cluster runs. For example, in my kubeadm cluster I do not have any resources or limits by default in the <code>default</code> namespace:</p> <pre><code>kubectl describe ns default
Name:         default
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;
Status:       Active

No resource quota.

No resource limits.
</code></pre> <p>And this is for GKE:</p> <p><a href="https://i.stack.imgur.com/kNdMN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/kNdMN.png" alt="GKE"></a></p> <p>So your choice is to set up requests and limits for your namespace, or just add them into the Job spec, like in this example (shown here as memory values, since the job is being OOM killed):</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: testing-hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hi-there
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo "Hello, World!"
            resources:
              requests:
                memory: "100Mi"
              limits:
                memory: "100Mi"
          restartPolicy: OnFailure
</code></pre> <p>Also, usually you would just set resources and limits per namespace as described <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/#create-a-namespace" rel="noreferrer">here</a>.</p>
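<p>To answer the "where can I view the default" part more directly: namespace defaults, when they exist, come from <code>LimitRange</code> objects, so you can check what (if anything) applies to your namespace with:</p> <pre><code>kubectl get limitrange --all-namespaces
kubectl describe limitrange -n default
</code></pre>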