<p>How can I run a docker image in a kubernetes pod?</p>
<p>A teammate of mine has defined a new docker image and pushed it to my team's private registry. I own some code running in our kubernetes cluster and we need to get my code to effectively <code>docker run gcr.io/our-project/teammates-image:latest</code>. </p>
<p>I've been looking at enabling docker-in-docker on our kubernetes pods but I'm having little luck. I'm currently experimenting with <code>minikube</code> and whenever I try to <code>docker run</code> I get: <code>docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.</code>. </p>
<p>I've added
<code>
securityContext:
  privileged: true
</code>
to my container spec. I suspect I need more configuration but haven't figured out what.</p>
<p>I'm also trying to evaluate whether I can <code>kubectl run --image=gcr.io/our-project/teammates-image:latest</code> from within the kubernetes pod. </p>
<p>Is there a "correct" way to run docker images from within a kubernetes pod? How can I run an image with docker-in-docker? </p>
| <p>Here's the solution I ended up using for anyone who may stumble upon this in the future.</p>
<p>Install <code>kubectl</code> in the main docker image which needs to run the other docker image. I did this by following <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl" rel="noreferrer">these docs</a>. Once you have <code>kubectl</code> installed, it should automatically be able to interact with the cluster it's running in and doesn't need extra authentication.</p>
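<p>As a rough sketch, installing a pinned <code>kubectl</code> binary inside the image can look like this (the version and URL follow the linked docs; adjust as needed):</p>
<pre><code># download a specific kubectl release and put it on the PATH (version is an example)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
</code></pre>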
<p>Next, to run the <code>teammates-image:latest</code> docker image you can do something like the following in python:</p>
<pre>
import subprocess

# run the image as a one-off pod, attach to its output, and remove the pod when it finishes
subprocess.run(["kubectl", "run", "teammates-image", "--image", "gcr.io/our-project/teammates-image:latest", "--attach", "--restart", "Never", "--rm"])
</pre>
<p>This should:</p>
<ol>
<li>create a pod</li>
<li>run the image</li>
<li>return the return code of the container</li>
<li>delete the pod after completion</li>
</ol>
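<p>Note that if RBAC is enabled in your cluster, the pod's service account may also need permission to create and watch pods. A minimal sketch (names are illustrative, not from the original setup):</p>
<pre><code># grant the default service account in the current namespace the right to manage pods
kubectl create role pod-runner --verb=create,get,list,watch,delete --resource=pods
kubectl create rolebinding pod-runner-binding --role=pod-runner --serviceaccount=default:default
</code></pre>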
|
<p>I have been trying to set up Spring Cloud Dataflow Server for Kubernetes locally using minikube. I have followed the installation instructions in the link here: <a href="https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.4.0.RELEASE/reference/htmlsingle/#_installation" rel="nofollow noreferrer">SCDF Installation Reference</a></p>
<p>I've been getting the below error for the SCDF server:</p>
<pre><code>
11:32:52.095 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client namespace from Kubernetes service account namespace path...
11:32:52.096 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
2018-04-24 11:33:14.348 WARN 1 --- [ main] o.s.cloud.kubernetes.StandardPodUtils : Failed to get pod with name:[scdf-server-869d56967c-97lsd]. You should look into this if things aren't working as you expect. Are you missing serviceaccount permissions?
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/scdf-server-869d56967c-97lsd. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "scdf-server-869d56967c-97lsd" is forbidden: User "system:serviceaccount:default:default" cannot get pods in the namespace "default".
</code></pre>
<p>Below are the version details:</p>
<ul>
<li>Spring Cloud Data Flow Server : 1.4.0.RELEASE</li>
<li>Kubernetes Local Deployment using minikube</li>
<li>Kubernetes Version : 1.10</li>
</ul>
| <p>The latest release of minikube enabled RBAC by default.</p>
<p>For RBAC enabled clusters, we have added a note in the installation section on this matter.</p>
<p><em>"The latest releases of kubernetes have enabled RBAC on the api-server. If your target platform has RBAC enabled you must ask a cluster-admin to create the roles and role-bindings for you before deploying the dataflow server. They associate the dataflow service account with the roles it needs to be run with."</em></p>
<p>For minikube, however, you can run the following command and retry installing.</p>
<blockquote>
<p>kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default</p>
</blockquote>
<p>Alternatively, if you're using the helm-chart, you can disable RBAC and install the chart with the following on minikube.</p>
<blockquote>
<p>helm init</p>
<p>helm repo add incubator <a href="https://kubernetes-charts-incubator.storage.googleapis.com" rel="nofollow noreferrer">https://kubernetes-charts-incubator.storage.googleapis.com</a></p>
<p>helm repo update</p>
<p>helm install --name my-release --set server.service.type=NodePort --set rbac.create=false incubator/spring-cloud-data-flow</p>
</blockquote>
|
<p>I am trying to "pass" a value from the init container to a container. Since values in a configmap are shared across the namespace, I figured I can use it for this purpose. Here is my job.yaml (with faked-out info):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['kubectl', 'create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url']
      restartPolicy: Never
  backoffLimit: 0
</code></pre>
<p>This does not seem to work (EDIT: although the statements following this edit note may still be correct, this is not working because kubectl is not a recognizable command in the busybox image), and I am assuming that the pod can only read values from a configmap created BEFORE the pod is created. Has anyone else come across the difficulty of passing values between containers, and what did you do to solve this?</p>
<p>Should I deploy the configmap in another pod and wait to deploy this one until the configmap exists?</p>
<p>(I know I can write files to a volume, but I'd rather not go that route unless it's absolutely necessary, since it essentially means our docker images must be coupled to an environment where some specific files exist)</p>
| <p>You can create an <code>EmptyDir</code> volume and mount it into both containers. Unlike a <code>persistent volume</code>, <code>EmptyDir</code> has no portability issues.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['/bin/sh', '-c', 'cp x /tmp/artifact/x']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      restartPolicy: Never
      volumes:
      - name: tmp
        emptyDir: {}
  backoffLimit: 0
</code></pre>
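<p>The main container mounts the same <code>emptyDir</code>, so whatever the init container writes is visible to it at startup. A rough sketch of how the main container could consume the value (the file name is illustrative):</p>
<pre><code># inside the main container's entrypoint
artifactoryUrl=$(cat /tmp/artifact/x)
echo "using artifactory snapshot url: $artifactoryUrl"
</code></pre>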
|
<p>I saw it in the <a href="https://kubernetes.io/docs/admin/authentication/#users-in-kubernetes" rel="nofollow noreferrer">official doc</a>, but I don't know how to <code>add or introduce</code> a <code>normal user</code> from outside the Kubernetes cluster. I've searched a lot about normal users in Kubernetes but found nothing useful.</p>
<p>I know it's different from a serviceaccount and that we cannot add a normal user through the Kubernetes API.</p>
<p>Any idea how to add or introduce a normal user to a Kubernetes cluster, and what a normal user is for?</p>
| <p>See "<a href="https://medium.com/@etienne_24233/comparing-kubernetes-authentication-methods-6f538d834ca7" rel="nofollow noreferrer">Comparing Kubernetes Authentication Methods</a>" by <a href="https://twitter.com/etiennedi" rel="nofollow noreferrer">Etienne Dilocker</a></p>
<p>A possible solution is the <a href="https://kubernetes.io/docs/admin/authentication/#x509-client-certs" rel="nofollow noreferrer">x509 client certs</a>:</p>
<blockquote>
<p>Advantages</p>
<p>operating the Kubernetes cluster and issuing user certificates is decoupled
much more secure than basic authentication</p>
<p>Disadvantages</p>
<p>x509 certificates tend to have a very long lifetime (months or years). So, revoking user access is nearly impossible. If we instead choose to issue short-lived certificates, the user experience drops, because replacing certificates involves some effort.</p>
</blockquote>
<p>But Etienne recommends <a href="https://kubernetes.io/docs/admin/authentication/#openid-connect-tokens" rel="nofollow noreferrer">OpenID</a>:</p>
<blockquote>
<p>Wouldn’t it be great if we could have short-lived certificates or tokens, that are issued by a third-party, so there is no coupling to the operators of the K8s cluster.<br>
And at the same time <strong>all of this should be integrated with existing enterprise infrastructure, such as LDAP or Active Directory</strong>.</p>
<p>This is where <a href="http://openid.net/connect/" rel="nofollow noreferrer">OpenID Connect (OIDC)</a> comes in. </p>
<p>For my example, I’ve used <a href="http://www.keycloak.org/" rel="nofollow noreferrer">Keycloak</a> as a token issuer. Keycloak is both a token issuer and an identity provider out-of-the box and quite easy to spin up using Docker. </p>
</blockquote>
<hr>
<p>To use RBAC with that kind of authentication is not straight-forward, but possible.<br>
See "<a href="https://github.com/kubernetes/dashboard/issues/964#issuecomment-240473494" rel="nofollow noreferrer">issue 118; Security, auth and logging in</a>"</p>
<blockquote>
<p>With 1.3 I have SSO into the dashboard working great with a reverse proxy and OIDC/OAuth2. I wouldn't create an explicit login screen, piggy back off of the RBAC model and the Auth model that is already supported. It would be great to have something that says who the logged in user is though.</p>
</blockquote>
<p>Note that since 1.3, there might be a simpler solution.</p>
<p>The same thread includes:</p>
<blockquote>
<p>I have a prototype image working that will do what I think you're looking for: <a href="https://hub.docker.com/r/mlbiam/openunison-k8s-dashboard/" rel="nofollow noreferrer">https://hub.docker.com/r/mlbiam/openunison-k8s-dashboard/</a></p>
<p>I removed all the requirements for user provisioning and stripped it down to just:</p>
<ul>
<li>reverse proxy</li>
<li>integration with openid connect</li>
<li>display the user's access token</li>
<li>simple links page</li>
</ul>
</blockquote>
<p>It <a href="https://github.com/kubernetes/dashboard/issues/964#issuecomment-242497969" rel="nofollow noreferrer">includes the role binding</a>:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: Group
  name: admin
- kind: ServiceAccount
  name: default
  namespace: kube-system
- kind: ServiceAccount
  name: openunison
  namespace: default
roleRef:
  kind: ClusterRole
  name: admin-role
</code></pre>
<p>Again, this was specific to the dashboard RBAC access, and has since been improved with <a href="https://github.com/kubernetes/dashboard/pull/2206" rel="nofollow noreferrer">PR 2206 Add log in mechanism (to dashboard)</a>.</p>
<p>It can still give you some clues about how to link a regular user to Kubernetes RBAC.</p>
|
<p>Does Kubernetes support for STONITH operations for hardware nodes? We have smart electric sockets which allows for API for 'power off server', and they work great with pacemaker.</p>
<p>Does Kubernetes support STONITH? </p>
| <p>Not yet.<br />
STONITH is mentioned in <a href="https://github.com/kubernetes/kubernetes/issues/39828" rel="nofollow noreferrer">kubernetes issue 39828</a></p>
<blockquote>
<p>STONITH ("Shoot The Other Node In The Head" or "Shoot The Offending Node In The Head"), sometimes called STOMITH ("Shoot The Other Member/Machine In The Head"), is a technique for <strong>fencing</strong> in computer clusters.<a href="https://github.com/kubernetes/kubernetes/issues/39828" rel="nofollow noreferrer">1</a></p>
<p>Fencing is the isolation of a failed node so that it does not cause disruption to a computer cluster. As its name suggests, STONITH fences failed nodes by resetting or powering down the failed node.</p>
</blockquote>
<p>It is actually discussed in <a href="https://github.com/kubernetes/kops/issues/2002" rel="nofollow noreferrer">kubernetes/kops issue 2002</a></p>
<blockquote>
<p>I think we should take a look at the autoscaler and I think we could default to Reboot, perhaps configurable in the manifest to AllowTermination.</p>
</blockquote>
<p>But this is stale at the moment.</p>
<p>This is also described in <a href="https://github.com/kubernetes/community/blob/e8f6a0f8b4ba8fc3afb6733cc9301b74d35090a1/contributors/design-proposals/storage/pod-safety.md" rel="nofollow noreferrer">kubernetes/community/contributors/design-proposals/storage/pod-safety.md</a></p>
<blockquote>
<p>In order to reconcile partitions, an actor (human or automated) must decide when the partition is unrecoverable. The actor may be informed of the failure in an unambiguous way (e.g. the node was destroyed by a meteor) allowing for certainty that the processes on that node are terminated, and thus may resolve the partition by deleting the node and the pods on the node.<br />
<strong>Alternatively, the actor may take steps to ensure the partitioned node cannot return to the cluster or access shared resources - this is known as fencing</strong> and is a well understood domain.</p>
</blockquote>
|
<p>I'm setting up Spinnaker in K8s with aws-ecr. My setup and steps are:</p>
<p><strong>on AWS side:</strong></p>
<ol>
<li>Added policies ecr-pull, ecr-push, and ecr-generate-token</li>
<li>Attached the policy to a role </li>
</ol>
<p><strong>Spinnaker setup:</strong></p>
<ol start="3">
<li><p>Modified <strong>values.yaml</strong> with the settings below:</p>
<pre><code>accounts:
- name: my-ecr
  address: https://123456xxx.dkr.ecr.my-region.amazonaws.com
  repositories:
  - 123456xxx.dkr.ecr..amazonaws.com/spinnaker-test-project
</code></pre></li>
<li><p>Annotated <strong>clouddriver.yaml</strong>: deployment to use created role (using the IAM role in a pod by referencing the role name in an annotation on the pod specification) </p></li>
</ol>
<p>But it doesn't work, and the error on the clouddriver side is:</p>
<p>.<code>d.r.p.a.DockerRegistryImageCachingAgent : Could not load tags for 1234xxxxx.dkr.ecr.<my_region>.amazonaws.com/spinnaker-test-project in https://1234xxxxx.dkr.ecr.<my_region>.amazonaws.com</code></p>
<p>I'd like some help or advice on what I'm missing, thank you.</p>
| <p>Got the answer from an official Spinnaker Slack channel: adding an IAM policy to the clouddriver pod unfortunately won't work, since it uses the Docker client instead of the AWS client. The workaround to make it work can be found <a href="https://blog.spinnaker.io/using-aws-ecr-with-spinnaker-and-kubernetes-2b2a9bac8bd1" rel="nofollow noreferrer">here</a>.</p>
<p>Note: ECR support is currently broken in Halyard. This might get fixed in the future, after Halyard migrates from the Kubernetes provider v1 to v2 or earlier, so please verify with the community or the docs.</p>
|
<p>I installed kubernetes using <code>kubeadm</code>. And to enable basic authentication, I added <code>--basic-auth-file=/etc/kubernetes/user-password.txt</code> in my <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> and also mounted corresponding volume of type "File" as described <a href="https://stackoverflow.com/questions/46618383/how-to-config-simple-login-pass-authentication-for-kubernetes-desktop-ui">here</a></p>
<p>Basic auth works now. My question is: how does kube-apiserver know to automatically restart the Pod after I edit <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> on the Kubernetes master host machine?</p>
| <p>K8s control-plane components (the API server, controller manager, and scheduler) are <strong>static pods</strong> in the kube-system namespace. </p>
<p>When you run the following command </p>
<pre><code>kubeadm init
</code></pre>
<blockquote>
<p>Generates static Pod manifests for the API server, controller manager and scheduler</p>
</blockquote>
<p>The kubelet periodically scans all of these files.</p>
<blockquote>
<p>Static Pod manifests are written to /etc/kubernetes/manifests; the kubelet watches this directory for Pods to create on startup.</p>
</blockquote>
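<p>You can see this for yourself on a kubeadm master; the manifests live directly on the host, and editing one makes the kubelet recreate the corresponding pod:</p>
<pre><code># list the static pod manifests the kubelet is watching
ls /etc/kubernetes/manifests
</code></pre>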
<p>I have attached the reference for further research</p>
<p><a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow" rel="noreferrer">kubeadm-init-workflow</a> </p>
|
<p>I was following the following tutorial on continuous integration using gitlab and Kubernetes (in my case on google cloud): <a href="https://about.gitlab.com/2016/12/14/continuous-delivery-of-a-spring-boot-application-with-gitlab-ci-and-kubernetes/" rel="noreferrer">https://about.gitlab.com/2016/12/14/continuous-delivery-of-a-spring-boot-application-with-gitlab-ci-and-kubernetes/</a>.</p>
<p>At some point in the tutorial you will have to first delete and then create a secret for the image registry of Gitlab:</p>
<pre><code>- kubectl delete secret registry.gitlab.com
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=$REGISTRY_USERNAME --docker-password=$REGISTRY_PASSWD --docker-email=$EMAIL
</code></pre>
<p>Things go wrong in this step, I get the following error:</p>
<pre><code>Error from server (Forbidden): secrets "registry.gitlab.com" is forbidden: User "client" cannot delete secrets in the namespace "default": Unknown user "client"
Error from server (Forbidden): secrets is forbidden: User "client" cannot create secrets in the namespace "default": Unknown user "client"
</code></pre>
<p>I get the same exact error in the Google cloud shell:</p>
<p><a href="https://i.stack.imgur.com/mu5my.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mu5my.png" alt="enter image description here"></a></p>
<p>Adding the following line does not really help, I still get the creation error (I am also 100% sure that the deletion also 'crashes' but the '2>/dev/null' just makes it move to the creation step):</p>
<pre><code>kubectl delete secret registry.gitlab.com 2>/dev/null || echo "secret does not exist"
</code></pre>
<p>What am I doing wrong? Thx in advance!</p>
| <p>Run <code>gcloud config unset container/use_client_certificate</code></p>
<p>After this, log out and log in again. It should work. This happens when you disable Legacy Authorization in the cluster settings, because the client certificate you are using is a legacy authentication method.</p>
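<p>A rough sketch of refreshing the credentials afterwards (cluster name and zone are placeholders):</p>
<pre><code># re-authenticate and regenerate the kubeconfig entry for the cluster
gcloud auth login
gcloud container clusters get-credentials my-cluster --zone us-central1-a
</code></pre>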
|
<p>I have an application which accesses a couple of files from a directory. I went through Kubernetes volumes, persistent volumes, and persistent volume claims. This is a multi-node Kubernetes cluster. Is there any direct solution which does not need any external storage like an NFS server etc.?</p>
<p>I have a VM server from which I execute my Kubernetes commands. I am new to Kubernetes, so please help me with this.</p>
<p>I was also looking at local persistent volumes. Is there a link I can go through for an example? I am looking at <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#local</a></p>
<p>But the example does not explain what the fields below <code>nodeAffinity</code> mean:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
</code></pre>
| <p>This depends on your use case. If the files you want to share across the cluster are more than a few megabytes in size, you'll need some kind of storage operator. Local storage is probably not what you're looking for.</p>
<hr />
<h2>For small files (configs, keys, init scripts)</h2>
<p>If the files are small, such as configuration files or ssh keys or similar, you can use a Kubernetes ConfigMap (or Secret). This lets you provide a few files, or directories containing a few files, to your pods. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Checkout the documentation</a></p>
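<p>For example, a ConfigMap built from a local directory can be mounted read-only into any pod in the namespace. A rough sketch (names and paths are made up):</p>
<pre><code># create a configmap from a directory of small files
kubectl create configmap app-config --from-file=./config/

# mount it into a pod (illustrative manifest applied via a heredoc)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'ls /etc/app-config && sleep 3600']
    volumeMounts:
    - name: config
      mountPath: /etc/app-config
  volumes:
  - name: config
    configMap:
      name: app-config
EOF
</code></pre>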
<hr />
<h2>For large files (shared data, graphics, binaries)</h2>
<p>If however you want to share a few hundred megabytes or gigabytes of files, you need a storage provider with your cluster.</p>
<p>If you are using a cloud provider, such as Google, AWS or Azure, this should be straightforward, you need to create a persistent disk with your cloud provider and copy your required data onto the disk. Once that's done, simply follow the guide for the relevant cloud providers:</p>
<ul>
<li>Google Cloud - <a href="https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk" rel="nofollow noreferrer">GCE Persistent Disk</a></li>
<li><strike>AWS - <a href="https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore" rel="nofollow noreferrer">Elastic Block Storage</a></strike></li>
<li><strike>Azure - <a href="https://kubernetes.io/docs/concepts/storage/volumes/#azuredisk" rel="nofollow noreferrer">Azure Disk</a></strike></li>
</ul>
<p>(@justcompile pointed out that AWS doesn't support multiple read-only mounts to instances, I was unable to find similar information for Azure)</p>
<p>If, however, you're running your own Kubernetes cluster on "baremetal", you'll have to set up either an NFS server or a <a href="https://ceph.com/" rel="nofollow noreferrer">Ceph cluster</a>, and probably use something like <a href="https://github.com/rook/rook" rel="nofollow noreferrer">rook</a> on top.</p>
|
<p>Can one store a binary file in a <a href="http://kubernetes.io/" rel="noreferrer">Kubernetes</a> <a href="http://kubernetes.io/docs/user-guide/configmap/" rel="noreferrer">ConfigMap</a> and then later read the same content from a volume that mounts this ConfigMap? For example, if directory <code>/etc/mycompany/myapp/config</code> contains binary file <code>keystore.jks</code>, will</p>
<pre><code>kubectl create configmap myapp-config --from-file=/etc/mycompany/myapp/config
</code></pre>
<p>include file <code>keystore.jks</code> in ConfigMap <code>myapp-config</code> that can later be mapped to a volume, mounted into a container, and read as a binary file?</p>
<p>For example, given the following pod spec, should <code>keystore.jks</code> be available to <code>myapp</code> at <code>/etc/mycompany/myapp/config/keystore.jks</code>?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: mycompany/myapp
    volumeMounts:
    - name: myapp-config
      mountPath: /etc/mycompany/myapp/config
  volumes:
  - name: myapp-config
    configMap:
      name: myapp-config
</code></pre>
<p>Kubernetes version details:</p>
<pre><code>derek@derek-HP-EliteOne-800-G1-AiO:~/Documents/platinum/fix/brvm$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6", GitCommit:"ae4550cc9c89a593bcda6678df201db1b208133b", GitTreeState:"clean", BuildDate:"2016-08-26T18:13:23Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6+coreos.0", GitCommit:"f6f0055b8e503cbe5fb7b6f1a2ee37d0f160c1cd", GitTreeState:"clean", BuildDate:"2016-08-29T17:01:01Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>Binary ConfigMaps are now supported since Kubernetes version 1.10.0. From the readme notes:</p>
<blockquote>
<p>ConfigMap objects now support binary data via a new binaryData field. When using kubectl create configmap --from-file, files containing non-UTF8 data will be placed in this new field in order to preserve the non-UTF8 data. Note that kubectl's --append-hash feature doesn't take binaryData into account. Use of this feature requires 1.10+ apiserver and kubelets. (#57938, @dims)</p>
</blockquote>
<p>See the changelog for more details: <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md#apps" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md#apps</a></p>
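<p>A quick sketch of how to verify this on a 1.10+ cluster, using the paths from the question:</p>
<pre><code># non-UTF8 files such as keystore.jks land in the binaryData field
kubectl create configmap myapp-config --from-file=/etc/mycompany/myapp/config
kubectl get configmap myapp-config -o yaml   # inspect the binaryData field
</code></pre>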
|
<p>By using <code>kubectl exec -ti POD_NAME bash</code> I am able to access the terminal inside the container and execute commands.</p>
<p>I can understand the usability and convenience of the above command. As a K8s operator I use <strong>exec</strong> regularly. </p>
<p>However, What is the use case of <code>kubectl attach POD_NAME</code>? </p>
<p>How can it be <strong>utilised</strong>?
What is the real <strong>purpose</strong> of it?
In what situation or circumstance it can be used? </p>
| <p>The use cases for kubectl attach are discussed in <a href="https://github.com/kubernetes/kubernetes/issues/23335" rel="noreferrer">kubernetes/issue 23335</a>.</p>
<p>It can attach to the main process run by the container, which is not always bash.<br>
As opposed to exec, which allows you to execute <em>any</em> process within the container (often: bash)</p>
<pre><code># Get output from running pod 123456-7890, using the first container by default
kubectl attach 123456-7890
# Get output from ruby-container from pod 123456-7890
kubectl attach 123456-7890 -c ruby-container
</code></pre>
<p><a href="https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28#attach-to-existing-containers" rel="noreferrer">This article</a> proposes:</p>
<blockquote>
<p>In addition to interactive execution of commands, you can now also attach to any running process. Like <code>kubectl logs</code>, you’ll get stderr and stdout data, but <strong>with attach, you’ll also be able to send stdin from your terminal to the program.<br>
Awesome for interactive debugging, or even just sending <kbd>ctrl</kbd>-<kbd>c</kbd> to a misbehaving application.</strong></p>
</blockquote>
<pre><code> $> kubectl attach redis -i
</code></pre>
<hr>
<p>Again, the main difference is in the process you interact with in the container:</p>
<ul>
<li>exec: any one you want to create</li>
<li>attach: the one currently running (no choice)</li>
</ul>
|
<p>I have a Kubernetes cluster (external ips: 1.2.3.4, 2.3.4.5, 3.4.5.6)
I want to host a docker registry on this cluster on port 5000. Now to enable this I did a test with externalips, which works. This makes nginx available on port 85.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-extip
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 85
    targetPort: 80
  selector:
    app: nginx-extip
  externalIPs:
  - 1.2.3.4
  - 2.3.4.5
  - 3.4.5.6
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-extip
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-extip
    spec:
      containers:
      - name: nginx-extip-server
        image: nginx
        ports:
        - containerPort: 80
</code></pre>
<p>Now to reuse the externalip config I want to put this into a configmap. So all yamls can just reference the configmap and we don't have to manually update the externalips when they change. How can I put an array of ips into the configmap?</p>
<p>My current (not working) configmap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: externalips
  namespace: default
data:
  externalips:
  - 1.2.3.4
  - 2.3.4.5
  - 3.4.5.6
</code></pre>
<p>The error I get:</p>
<pre><code>error: error validating "static-ips-configmap.yml": error validating data:
ValidationError(ConfigMap.data.externalips): invalid type for
io.k8s.api.core.v1.ConfigMap.data: got "array", expected "string";
if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>How can I put these IP's into a configmap?</p>
| <p>There are two problems here:</p>
<ol>
<li><p>There is a syntax error in the creation of the configmap itself. Config map expects you to list a bunch of files and their contents, so the correct syntax would look something like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: externalips
  namespace: default
data:
  external-ips.list: |
    externalips:
    - 1.2.3.4
    - 2.3.4.5
    - 3.4.5.6
</code></pre></li>
<li><p>I don't think it's possible to refer to a configmap that you created to <em>template a service spec</em>. So even if you managed to create the configmap correctly, you still wouldn't be able to reuse it as part of different service definitions.</p></li>
</ol>
<p>You need an out-of-band templating system that you can use to add these external IPs to services. Alternatively, use an ingress controller, which you have to configure with external IPs <em>once</em> and then use to multiplex all your HTTP services inside the cluster. This way you manage IPs for only one service anyway.</p>
|
<p>I have some OpenShift nodes; some of them have Tesla P40s and should be dedicated to ML use through the nvidia device plugin. But I don't want users to have to add taints or node affinity to their original DeploymentConfigs, which could get messy. How can I achieve this implicitly?</p>
<p>what i want to achieve is: </p>
<ol>
<li>only ML pods could stay on these nodes which has GPU </li>
<li>ML users don't need to change their DeploymentConfigs except for the resource limit of "nvidia.com/gpu".</li>
</ol>
<p>If a custom scheduler is the only way, then how do I write one? Thanks</p>
| <p>My understanding of your problem is that you're trying to do two things:</p>
<ol>
<li>Schedule GPU workloads correctly onto nodes that have GPUs available</li>
<li>Make sure pods that <em>don't</em> need GPUs are not scheduled onto nodes that <em>have</em> GPUs.</li>
</ol>
<p>You're doing (1) With the NVidia device plugin, which seems to be the correct way, since it uses the concept of <a href="https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/" rel="nofollow noreferrer">Extended Resources</a>.</p>
<p>To do (2), Taints and Tolerations are indeed the recommended way. The docs even talk explicitly about the GPU use case -
<a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#example-use-cases" rel="nofollow noreferrer">Quoting the documentation:</a></p>
<blockquote>
<p><strong>Nodes with Special Hardware:</strong> In a cluster where a small subset of
nodes have specialized hardware (for example GPUs), it is desirable to
keep pods that don’t need the specialized hardware off of those nodes,
thus leaving room for later-arriving pods that do need the specialized
hardware. This can be done by tainting the nodes that have the
specialized hardware (e.g. kubectl taint nodes nodename
special=true:NoSchedule or kubectl taint nodes nodename
special=true:PreferNoSchedule) and adding a corresponding toleration
to pods that use the special hardware. As in the dedicated nodes use
case, it is probably easiest to apply the tolerations using a custom
admission controller). For example, it is recommended to use Extended
Resources to represent the special hardware, taint your special
hardware nodes with the extended resource name and run the
ExtendedResourceToleration admission controller. Now, because the
nodes are tainted, no pods without the toleration will schedule on
them. But when you submit a pod that requests the extended resource,
the ExtendedResourceToleration admission controller will automatically
add the correct toleration to the pod and that pod will schedule on
the special hardware nodes. This will make sure that these special
hardware nodes are dedicated for pods requesting such hardware and you
don’t have to manually add tolerations to your pods.</p>
</blockquote>
<p>Only those users that explicitly <em>need</em> the GPUs need to add a toleration for it in their pod spec, and it's fairly straightforward to do so. It looks like this (ref: <a href="https://kubernetes.io/blog/2017/03/advanced-scheduling-in-kubernetes" rel="nofollow noreferrer">Advanced Scheduling in Kubernetes</a>):</p>
<pre><code>tolerations:
- key: "gpu"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
</code></pre>
<p>So usually this is an acceptable trade-off.</p>
<p>However if you absolutely <strong><em>do not</em></strong> want to let users have to add that toleration. What you need is an <a href="https://kubernetes.io/docs/admin/admission-controllers/" rel="nofollow noreferrer">Admission Controller</a>.</p>
<blockquote>
<p>An admission controller is a piece of code that intercepts requests to
the Kubernetes API server prior to persistence of the object, but
after the request is authenticated and authorized.</p>
</blockquote>
<p>In particular you want the special AdmissionController known as <code>MutatingAdmissionWebhook</code>.</p>
<p>Your custom <code>MutatingAdmissionWebhook</code> can take a look at the pod spec, look for:</p>
<pre><code>resources:
  limits:
    nvidia.com/gpu: 2
</code></pre>
<p>and then automatically add the required "Toleration" to the pod spec, all without letting the user know. You still end up using Taints and Tolerations, the users just don't see them anymore. You <strong>don't</strong> need to write a new scheduler for this.</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/test/images/webhook/main.go" rel="nofollow noreferrer">Here's an example</a> of how to <em>write</em> an admission controller webhook, is available in the official kubernetes repository as part of the e2e tests.</p>
|
<p>How to limit a container's network usage or bandwidth?</p>
<p>I searched the Internet, but it seems there are no existing mature solutions.</p>
<p>I can modify the host, but cannot modify the program running in docker or docker itself. It means I can change the configurations, but not the <em>code</em> of docker that I need to re-build/re-compile.</p>
| <p><a href="https://github.com/moby/moby/issues/9607" rel="noreferrer">Docker issue 9607</a> refers to <a href="https://docs.docker.com/network/bridge/" rel="noreferrer">bridge network</a>, and <a href="https://github.com/moby/moby/issues/4763" rel="noreferrer">issue 4763</a> is still opened.</p>
<p>So this is not yet natively supported by docker.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/2856" rel="noreferrer">Kubernetes issue 2856</a> references a <a href="https://istio.io/docs/tasks/policy-enforcement/rate-limiting.html" rel="noreferrer">sidecar option: istio</a>, so that would be your best approach, if you can add an orchestrator to your current docker setup.</p>
|
<p>I am using this plugin to run dynamic agents in a Kubernetes cluster, jenkinsci/kubernetes-plugin, and so far everything is going great except for when I try to use the feature for defining slave pods in YAML format.<a href="https://i.stack.imgur.com/XIszR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XIszR.png" alt="enter image description here"></a></p>
<p>Unfortunately, when I attempt to use this feature things go bad. I changed my Jenkins pipeline script from this:</p>
<pre><code>def label = "kubernetes"
podTemplate(label: label,
containers: [containerTemplate(name: 'jnlp', image: 'artifactory.baorg.com:5001/sum/coreimage:1', ttyEnabled: true, label: label)],
imagePullSecrets: [ 'ad-artifactory-cred' ],
) {
node(label) {
stage('Core') {
container(name: 'jnlp') {
stage('building program') {
sh "echo hello world"
}
}
}
}
}
</code></pre>
<p>To this:</p>
<pre><code>def label = "kubernetes"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
labels:
label: label
spec:
containers:
- name: jenkins-slave
image: artifactory.baorg.com:5001/sum/coreimage:1
tty: true
"""
) {
node(label) {
stage('Core') {
container(name: 'jnlp') {
stage('building program') {
sh "echo hello world"
}
}
}
}
}
</code></pre>
<p>When the pipeline script is written in the former way, everything is working as expected. The slave container is created and the job runs. Unfortunately, when I take these settings and attempt to codify them in YAML format it seems that the configuration isn't even being read or something. When it's working the image is pulled if it doesn't already exist in the cluster and the job runs fine:
<a href="https://i.stack.imgur.com/uvBM7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uvBM7.png" alt="enter image description here"></a></p>
<p>But when I changed the configuration so that it's done in YAML, the job attempts to pull an image named "jenkins/jnlp-slave:alpine" instead of the one I specify and times out because my cluster doesn't have access to the internet (index.docker...). The reason it's pulling this image is due to a bug in the plugin which occurs when the name of the slave isn't set to "jnlp" (this is not related to my issue anyway).
<a href="https://i.stack.imgur.com/dYvPL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dYvPL.png" alt="enter image description here"></a></p>
<p>The important observation to be made is that the YAML information isn't being accepted or recognized for some reason and I'm not sure why. Is it because of some bad formatting? Or is this some known issue with this plugin (I find that hard to believe). I already checked to see if I had any extra tabs in the code but I did not.</p>
| <p>Based on a quick look at your pipeline DSL and YAML spec. The following snippet is what a direct translation of your DSL would look like (untested).</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    label: label
spec:
  containers:
  - name: jnlp
    image: artifactory.baorg.com:5001/sum/coreimage:1
    tty: true
  imagePullSecrets:
  - name: ad-artifactory-cred
</code></pre>
<p>In your original configuration you specified that the default "jnlp" container should not be used and that your own container should be used instead. In your YAML version, you used the name "jenkins-slave", thereby indicating to the plugin that you want the default jnlp container (<code>jenkins/jnlp-slave:alpine</code>) to be launched in the pod along with your "jenkins-slave" container.</p>
<p>As for why the pull fails, this is likely a network configuration issue (firewall or proxy) as indicated in the events. If you have access to the node, try doing a <code>docker pull jenkins/jnlp-slave:alpine</code> manually to debug.</p>
|
<p>I am using the OpenShift Jenkins image within an OpenShift Cluster. This default Jenkins image results in a Jenkins container that is preconfigured to point to my Kubernetes cluster. Additionally, the container has two Kubernetes pod templates defined, one for maven and one for nodejs.</p>
<p><a href="https://i.stack.imgur.com/GzPQz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GzPQz.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/uwPWl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uwPWl.png" alt="enter image description here"></a></p>
<p>What I would now like to do is use a declarative pipeline and reference these pods. I tried the following</p>
<pre><code>agent {
  kubernetes {
    //cloud 'kubernetes'
    label 'maven'
  }
}
</code></pre>
<p>But that gives an error stating</p>
<blockquote>
<p>org.codehaus.groovy.control.MultipleCompilationErrorsException:
startup failed:</p>
<p>WorkflowScript: 4: Missing required parameter for agent type
"kubernetes": containerTemplate @ line 4, column 10.</p>
<pre><code> kubernetes {
^
</code></pre>
</blockquote>
<p>All of the (<a href="https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/test/resources/org/csanchez/jenkins/plugins/kubernetes/pipeline/declarative.groovy" rel="noreferrer">examples</a>) that I can find for declarative pipelines show the pod templates being defined when the agent is specified.</p>
<p>Is it possible to reuse already defined templates in a declarative pipeline?</p>
| <p>Here is an example using a pre-defined pod template.</p>
<pre><code>pipeline {
  agent {
    label "maven"
  }
  stages {
    stage('Run maven') {
      steps {
        sh 'mvn -version'
      }
    }
  }
}
</code></pre>
<p>Your original pipeline definition was in effect defining a brand new pod template and hence the error enforcing the requirement for <code>containerTeamplates</code> parameter. When using an existing template, you can simply specify the label in the <code>agent</code> block.</p>
|
<p>I've created a <a href="https://kubernetes.io/docs/user-guide/cron-jobs/" rel="noreferrer">Kubernetes Scheduled Job</a>, which runs twice a day according to its schedule. However, I would like to trigger it manually for testing purposes. How can I do this?</p>
| <p>The issue <a href="https://github.com/kubernetes/kubernetes/issues/47538" rel="noreferrer">#47538</a> that <a href="https://stackoverflow.com/a/46062397/2884309">@jdf mentioned</a> is now closed and this is now possible. The original implementation can be found <a href="https://github.com/kubernetes/kubernetes/commit/5b57c2db0067c8a16681fe9ec57ace3ad0342c8f" rel="noreferrer">here</a> but the syntax has changed.</p>
<p>With kubectl v1.10.1+ the command is:</p>
<p><code>kubectl create job --from=cronjob/<cronjob-name> <job-name> -n <namespace-name></code></p>
<p>It seems to be backwardly compatible with older clusters as it worked for me on v0.8.x.</p>
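<p>For example, assuming a CronJob named <code>report-generator</code> in namespace <code>batch</code> (names are illustrative):</p>
<pre><code># trigger the cron job once, outside its schedule
kubectl create job --from=cronjob/report-generator report-generator-manual-001 -n batch
</code></pre>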
|
<p>I'm struggling to understand how to correctly configure kube-dns with flannel on kubernetes 1.10 and containerd as the CRI.</p>
<p>kube-dns fails to run, with the following error:</p>
<pre><code>kubectl -n kube-system logs kube-dns-595fdb6c46-9tvn9 -c kubedns
I0424 14:56:34.944476 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver...
I0424 14:56:35.444469 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver...
E0424 14:56:35.815863 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:192: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: no route to host
E0424 14:56:35.815863 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:189: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: no route to host
I0424 14:56:35.944444 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver...
I0424 14:56:36.444462 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver...
I0424 14:56:36.944507 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver...
F0424 14:56:37.444434 1 dns.go:209] Timeout waiting for initialization
kubectl -n kube-system describe pod kube-dns-595fdb6c46-9tvn9
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 47m (x181 over 3h) kubelet, worker1 Readiness probe failed: Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning BackOff 27m (x519 over 3h) kubelet, worker1 Back-off restarting failed container
Normal Killing 17m (x44 over 3h) kubelet, worker1 Killing container with id containerd://dnsmasq:Container failed liveness probe.. Container will be killed and recreated.
Warning Unhealthy 12m (x178 over 3h) kubelet, worker1 Liveness probe failed: Get http://10.244.0.2:10054/metrics: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning BackOff 2m (x855 over 3h) kubelet, worker1 Back-off restarting failed container
</code></pre>
<p>There is indeed no route to the 10.96.0.1 endpoint:</p>
<pre><code>ip route
default via 10.240.0.254 dev ens160
10.240.0.0/24 dev ens160 proto kernel scope link src 10.240.0.21
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.0.0/16 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
10.244.4.0/24 via 10.244.4.0 dev flannel.1 onlink
10.244.5.0/24 via 10.244.5.0 dev flannel.1 onlink
</code></pre>
<p>What is responsible for configuring the cluster service address range and associated routes? Is it the container runtime, the overlay network (flannel in this case), or something else? Where should it be configured?</p>
<p>The <code>10-containerd-net.conflist</code> configures the bridge between the host and my pod network. Can the service network be configured here too?</p>
<pre><code>cat /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "0.3.1",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
</code></pre>
<p>Edit:</p>
<p>Just came across <a href="https://github.com/kubernetes/kubernetes/issues/27161#issuecomment-225396985" rel="nofollow noreferrer">this</a> from 2016:</p>
<blockquote>
<p>As of a few weeks ago (I forget the release but it was a 1.2.x where x
!= 0) (#24429) we fixed the routing such that any traffic that arrives
at a node destined for a service IP will be handled as if it came to a
node port. This means you should be able to set yo static routes for
your service cluster IP range to one or more nodes and the nodes will
act as bridges. This is the same trick most people do with flannel to
bridge the overlay.</p>
<p>It's imperfect but it works. In the future will will need to get more
precise with the routing if you want optimal behavior (i.e. not losing
the client IP), or we will see more non-kube-proxy implementations of
services.</p>
</blockquote>
<p>Is that still relevant? Do I need to setup a static route for the service CIDR? Or is the issue actually with <code>kube-proxy</code> rather than flannel or containerd?</p>
<p>My flannel configuration:</p>
<pre><code>cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
</code></pre>
<p>And kube-proxy:</p>
<pre><code>[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --cluster-cidr=10.244.0.0/16 \
  --feature-gates=SupportIPVSProxyMode=true \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --kubeconfig=/etc/kubernetes/kube-proxy.conf \
  --logtostderr=true \
  --master=https://192.168.160.1:6443 \
  --proxy-mode=ipvs \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
</code></pre>
<p>Edit:</p>
<p>Having looked at the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-the-kube-proxy-working" rel="nofollow noreferrer">kube-proxy debugging steps</a>, it appears that <code>kube-proxy</code> cannot contact the master. I suspect this is a large part of the problem. I have 3 controller/master nodes behind a HAProxy loadbalancer, which is bound to <code>192.168.160.1:6443</code> and forwards round robin to each of the masters on <code>10.240.0.1[1|2|3]:6443</code>. This can be seen in the output/configs above.</p>
<p>In <code>kube-proxy.service</code>, I have specified <code>--master=192.168.160.1:6443</code>. Why are connections being attempted to port 443? Can I change this - there doesn't seem to be a port flag? Does it need to be port 443 for some reason?</p>
| <p>There are two components to this answer, one about running <code>kube-proxy</code> and one about where those :443 URLs are coming from.</p>
<p>First, about <code>kube-proxy</code>: please don't run <code>kube-proxy</code> as a systemd service like that. It is designed to be launched by <code>kubelet</code> <em>in the cluster</em> so that the SDN addresses behave rationally, since they are effectively "fake" addresses. By running <code>kube-proxy</code> outside the control of <code>kubelet</code>, all kinds of weird things are going to happen unless you expend a huge amount of energy to replicate the way that <code>kubelet</code> configures its subordinate docker containers.</p>
<hr>
<p>Now, about that :443 URL:</p>
<blockquote>
<p><code>E0424 14:56:35.815863 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:192: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: no route to host</code></p>
<p>...</p>
<p>Why are connections being attempted to port 443? Can I change this - there doesn't seem to be a port flag? Does it need to be port 443 for some reason?</p>
</blockquote>
<p>That 10.96.0.1 is from the Service CIDR of your cluster, which is (and should be) separate from the Pod CIDR which should be separate from the Node's subnets, etc. The <code>.1</code> of the cluster's Service CIDR is either reserved (or <em>traditionally</em> allocated) to the <code>kubernetes.default.svc.cluster.local</code> <code>Service</code>, with its one <code>Service.port</code> as <code>443</code>.</p>
<p>I'm not super sure why the <code>--master</code> flag doesn't supersede the value in <code>/etc/kubernetes/kube-proxy.conf</code> but since that file is very clearly only supposed to be used by <code>kube-proxy</code>, why not just update the value in the file to remove all doubt?</p>
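<p>You can confirm where that address comes from with a quick check (output will vary with your cluster's Service CIDR):</p>
<pre><code># the in-cluster API endpoint that kube-dns is trying to reach
kubectl get svc kubernetes -n default
</code></pre>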
|
<p>I have successfully set up a kubernetes cluster on AWS using <code>kops</code> and the following commands:</p>
<pre><code>$ kops create cluster --name=<my_cluster_name> --state=s3://<my-state-bucket> --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.small --dns-zone=<my-cluster-dns>
$ kops update cluster <my-cluster-name> --yes
</code></pre>
<p>When accessing the dashboard, I am prompted to either enter a token or </p>
<blockquote>
<p>Please select the kubeconfig file that you have created to configure access to the cluster.</p>
</blockquote>
<p>When creating the cluster, <code>~/.kube/config</code> was created that has the following form:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <some_key_or_token_here>
    server: https://api.<my_cluster_url>
  name: <my_cluster_name>
contexts:
- context:
    cluster: <my_cluster_name>
    user: <my_cluster_name>
  name: <my_cluster_name>
current-context: <my_cluster_name>
kind: Config
preferences: {}
users:
- name: <my_cluster_name>
  user:
    as-user-extra: {}
    client-certificate-data: <some_key_or_certificate>
    client-key-data: <some_key_or_certificate>
    password: <password>
    username: admin
- name: <my-cluster-url>-basic-auth
  user:
    as-user-extra: {}
    password: <password>
    username: admin
</code></pre>
<p>Why when pointing the kubernetes ui to the above file, I get </p>
<blockquote>
<p>Authentication failed. Please try again. </p>
</blockquote>
| <p>I tried the same and had the same problem. It turns out that kops sets up certificate-based authentication, and certificate-based authentication can't be used with the web UI. Instead, I tried using token-based authentication. Next question: where do you find the token?</p>
<pre><code>kubectl describe secret
</code></pre>
<p>This will show you the default token for the cluster. I assume this is very bad security practice but if you're using the UI to improve your learning and understanding then it will get you moving in the right direction.</p>
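<p>A slightly more targeted sketch (secret names vary per cluster):</p>
<pre><code># find a service-account token secret and print its contents, including the token
kubectl -n kube-system get secrets | grep token
kubectl -n kube-system describe secret <token-secret-name>
</code></pre>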
<p><a href="https://github.com/kubernetes/dashboard/wiki/Access-control" rel="noreferrer">This Dashboard wiki page is about authentication.</a> That's where I discovered how to do it.</p>
|
<p>I am planning to use Kubernetes and Jenkins for my microservice deployment.</p>
<p><strong>Nature Of Application</strong> </p>
<p>I have 15 Spring Boot microservices in total, and I need to deploy all 15 of them for different customers - everyone using the same code, but each customer deployed separately. That means every customer has their own deployment. In total I have 5 customers (this is an assumption, not an exact number; it will vary from 20 to 50).</p>
<p><strong>My Current Design</strong> </p>
<p>I am currently planning to use 5 Kubernetes nodes, meaning one cluster master plus 5 nodes, 6 VMs in total. I plan to deploy each of these 15 microservices on these 5 nodes for the 5 customers, so everyone gets their own deployment. I will also install Jenkins on my Kubernetes cluster master VM to build a CI/CD pipeline.</p>
<p>So that's the whole architecture. I am only a beginner in architecting and designing cloud applications. I need to know whether there are any issues with this architecture, and I need to confirm its feasibility. </p>
<p>Please clear up my confusion about my current approach. If it is a good one, I can continue. I only want to know whether this is an industry-standard way of using Kubernetes. Is this a good architecture?</p>
| <p>Several things come to mind:</p>
<ul>
<li>one master is not enough. The loss of that VM, the underlying hardware, or a failure of the services on the master will lead to an outage for all customers, and potentially catastrophic data loss. Run 3 masters at minimum.</li>
<li>each client running distinct copies of each of the services is a suboptimal tenancy model. There are lots of situations in which that's required, but in those situations each client is often running their own distinct release of the services. Managing that in a microservice architecture is awful. </li>
<li>CD in an environment where each client is running their own copies of services, potentially their own versions of each, will be impossible. </li>
<li>the JVM is a heavy CPU and memory consumer. A single JVM was designed to itself support multiple workloads. So there can be conflicts between container resource management and JVM resource management, and running JVM microservices</li>
</ul>
|
<p>I created a service using an externalName pointing to some external service.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: ExternalName
  externalName: google.com
</code></pre>
<p>When I now try to do a DNS lookup it doesn't return anything useful:</p>
<pre><code># dig test.development.svc.cluster.local.
; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> test.development.svc.cluster.local.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58159
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;test.development.svc.cluster.local. IN A
;; AUTHORITY SECTION:
cluster.local. 60 IN SOA ns.dns.cluster.local. hostmaster.cluster.local. 1524736800 28800 7200 604800 60
;; Query time: 0 msec
;; SERVER: 100.64.0.10#53(100.64.0.10)
;; WHEN: Thu Apr 26 10:58:48 UTC 2018
;; MSG SIZE rcvd: 106
</code></pre>
<p>If I query explicitly for type CNAME I get a response:</p>
<pre><code># dig -t CNAME test.development.svc.cluster.local.
; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> -t CNAME test.development.svc.cluster.local.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54517
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;test.development.svc.cluster.local. IN CNAME
;; ANSWER SECTION:
test.development.svc.cluster.local. 30 IN CNAME google.com.
;; Query time: 0 msec
;; SERVER: 100.64.0.10#53(100.64.0.10)
;; WHEN: Thu Apr 26 10:59:55 UTC 2018
;; MSG SIZE rcvd: 76
</code></pre>
<p>If I change the Service to point to some other cluster internal service, I get the following expected result of an IP address:</p>
<pre><code># dig dogstatsdport.development.svc.cluster.local
; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> dogstatsdport.development.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32857
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;dogstatsdport.development.svc.cluster.local. IN A
;; ANSWER SECTION:
dogstatsdport.development.svc.cluster.local. 30 IN CNAME dogstatsdport.default.svc.cluster.local.
dogstatsdport.default.svc.cluster.local. 30 IN A 100.68.195.103
;; Query time: 0 msec
;; SERVER: 100.64.0.10#53(100.64.0.10)
;; WHEN: Thu Apr 26 11:02:14 UTC 2018
;; MSG SIZE rcvd: 113
</code></pre>
<p>What is missing/wrong here? And how do I get the external record to resolve?</p>
<p>Cluster is currently running on Kubernetes 1.8.12, set up with kops 1.9.0.</p>
| <p>The issue lies with the kube-dns version in use, 1.14.9.</p>
<p>Running the following command to update to 1.14.10 fixed it:</p>
<pre><code>kubectl set image deployment/kube-dns -n kube-system \
kubedns=gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.10 \
dnsmasq=gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.10 \
sidecar=gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.10
</code></pre>
<p><a href="https://github.com/kubernetes/dns/releases" rel="noreferrer">https://github.com/kubernetes/dns/releases</a> states for 1.14.10: </p>
<blockquote>
<p>Merge pull request #225 from grayluck/externalname
Fix external name not solving by reloading resolv.conf.</p>
</blockquote>
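<p>To confirm the fix took effect, a couple of quick checks (deployment and record names are taken from the question):</p>
<pre><code># wait for the new kube-dns pods to roll out
kubectl -n kube-system rollout status deployment/kube-dns
# then repeat the lookup from the question; it should now return the CNAME and its A record
dig test.development.svc.cluster.local.
</code></pre>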
|
<p>I need to debug (with chrome://inspect/ debugger) my nodejs app while it is deployed in a kubernetes pod. I tried adding a port to the service in front of the deployment but this is not working.</p>
<p>I added the port 42126</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: account-service # matches name in nginx.conf
spec:
ports:
- name: traffic
port: 80
targetPort: 80
- name: debug
port: 42126
targetPort: 42126
type: NodePort # speculating need this so nginx can route to this ???????
selector:
app: account-pod # matches name of pod created by deployment account-deployment
</code></pre>
<p>The deployment is not modified</p>
<pre><code>apiVersion: apps/v1beta2 # for version 1.8, once 1.9 switch to apps/v1
kind: Deployment
metadata:
name: account-deployment # name of the deployment
spec:
replicas: 1
selector:
matchLabels:
app: account-pod # matches name of pod to create
template:
metadata:
labels:
app: account-pod # name of pod, matches deployment and service definition
spec:
containers:
- name: account-container # name of docker container
image: us.gcr.io/purple01-0000000/purple_account:2018_04_25_18_08
</code></pre>
<p>The docker image is created with the following. I know this works since I use it for debugging locally when running docker compose with all my modules.</p>
<pre><code>FROM node:9.4
WORKDIR /app
COPY . /app/
RUN npm install
# comment out normal mode and created one for the debug
#CMD node ./bin/www
CMD node --inspect-brk=0.0.0.0:42126 ./bin/www
EXPOSE 80 42126
</code></pre>
<p>The configured ingress sends the http traffic to an nginx service that
serves static files and routes to various micro services.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: basic-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: ingress-ip
spec:
tls:
- secretName: tls-secret
backend:
serviceName: purple-front-end-service
servicePort: 80
</code></pre>
<p>I am new to kubernetes, so any advice is appreciated.</p>
<p>One of the many things I don't get: ingress terminates https and sends the http to my purple-front-end-service (a nginx service). How is port 42126 on the account-service made available to the outside so I can connect to it via chrome?</p>
<p>What do I configure in the "Target discovery settings" for "chrome://inspect/#devices".</p>
| <p>in the Dockerfile that creates your application, enable debugging (see line CMD)</p>
<pre><code>FROM node:9.4
WORKDIR /app
COPY ./app.js /app/app.js
COPY ./package.json /app/package.json
RUN npm install
CMD node --inspect-brk=0.0.0.0:42126 ./bin/www
</code></pre>
<p>Using the "Google Cloud Platform" website
Kubernetes Engine / Workloads /
Select the deployment from the displayed list to show details.
Scroll down to "Managed pods" and note the name.</p>
<p>In the detail section of the service there is a "port forwarding" button that reveals the command.
<a href="https://i.stack.imgur.com/53sZu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/53sZu.png" alt="enter image description here"></a></p>
<p>On your desktop, run the command using the pod's name and the local port you want to use:</p>
<pre><code>kubectl port-forward account-deployment-85f7dcf65b-v27zx 8080:42126
</code></pre>
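<p>With the forward in place, the inspector is reachable on the local port; in chrome://inspect, open "Configure..." under Target discovery settings and add <code>localhost:8080</code> (the local port chosen above). A quick way to check the tunnel from the desktop (port assumed from the command above):</p>
<pre><code># the Node inspector answers over HTTP on the forwarded port
curl http://localhost:8080/json/version
</code></pre>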
|
<p>Right now, I have deployed some pods on my kubernetes cluster. But sometimes my image may have bugs that prevent the pod from starting correctly.</p>
<p>For example:</p>
<pre><code>nats-1    0/1    CrashLoopBackOff    121    10h
</code></pre>
<p>I also cannot see any error in the kubectl log.</p>
<p>So is there any way to access this pod? Or are there any tools or techniques that would allow me to enter the container?</p>
<p>Thanks a lot all! :)</p>
| <p>You can run <code>kubectl describe</code> to get the pod's events; they sometimes show the underlying error. Otherwise you can make the deployment/pod run a command like <code>sleep 3600</code> to keep the container alive, so you can exec into it and investigate further.</p>
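<p>A minimal sketch of both suggestions (the pod name comes from the question; the <code>sleep 3600</code> override is only a temporary debugging aid):</p>
<pre><code># look at events and the last container state of the failing pod
kubectl describe pod nats-1
kubectl logs nats-1 --previous
# after temporarily overriding the container command with e.g. ["sh", "-c", "sleep 3600"]
# in the pod template, you can open a shell inside the running container:
kubectl exec -it nats-1 -- sh
</code></pre>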
|
<p>I want to set up an Ingress, which routes traffic to my underlying Services. Unfortunately, I get an error when I deploy my ingress-controller-deployment.yaml and I don't know why... The pod with the ingress-controller crashes immediately, with the error message "CrashLoopBackOff".</p>
<p>As I understand it, the ingress controller has to be deployed in a Pod, and this pod can be accessed through the ingress-svc. The ingress-svc seems to work, but the Pod crashes. Once the ingress-controller works I still need an additional file that defines the routes and everything, but I don't see the point of continuing without a working and deployable ingress-controller.</p>
<hr>
<pre><code>Pod description:
Name: ingress-controller-7749c785f-x94ll
Namespace: ingress
Node: gke-cluster-1-default-pool-8484e77d-r4wp/10.128.0.2
Start Time: Thu, 26 Apr 2018 14:25:04 +0200
Labels: k8s-app=nginx-ingress-lb
pod-template-hash=330573419
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"ingress","name":"ingress-controller-7749c785f","uid":"d8ff0a6d-494c-11e8-a840
-420...
Status: Running
IP: 10.8.0.14
Created By: ReplicaSet/ingress-controller-7749c785f
Controlled By: ReplicaSet/ingress-controller-7749c785f
Containers:
nginx-ingress-controller:
Container ID: docker://5654c7dffc44510132cba303d66ee570280f2cec235e4d4fa6ef8ad543e0c91d
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
Image ID: docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:39cc6ce23e5bcdf8aa78bc28bbcfe0999e449bf99fe2e8d60984b417facc5cd4
Ports: 80/TCP, 443/TCP
Args:
/nginx-ingress-controller
--admin-backend-svc=$(POD_NAMESPACE)/admin-backend
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 26 Apr 2018 14:26:57 +0200
Finished: Thu, 26 Apr 2018 14:26:57 +0200
Ready: False
Restart Count: 4
Liveness: http-get http://:10254/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-controller-7749c785f-x94ll (v1:metadata.name)
POD_NAMESPACE: ingress (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-plbss (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-plbss:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-plbss
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
</code></pre>
<hr>
<pre><code>Ingress-controller-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ingress-controller
spec:
replicas: 1
revisionHistoryLimit: 3
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
spec:
containers:
- args:
- /nginx-ingress-controller
- "--admin-backend-svc=$(POD_NAMESPACE)/admin-backend"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0"
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 5
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: ingress-svc
spec:
type: LoadBalancer
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
selector:
k8s-app: nginx-ingress-lb
</code></pre>
| <p>The issue is the args. The args on one of mine are</p>
<pre><code> args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
</code></pre>
<p>I had also created the config maps for configuration, tcp and udp. </p>
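<p>For reference, a sketch of creating those ConfigMaps so the names match the args above (the <code>ingress</code> namespace comes from the question; empty ConfigMaps are enough for the controller to start):</p>
<pre><code>kubectl create configmap nginx-configuration -n ingress
kubectl create configmap tcp-services -n ingress
kubectl create configmap udp-services -n ingress
</code></pre>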
|
<p>I have a couple of questions regarding ConfigMap versioning.</p>
<ol>
<li><p>Is it possible to use a specific version of a configMap in the deployment file?</p></li>
<li><p>I don't see any APIs to get a list of versions. How do I get the list of versions?</p></li>
<li>Is it possible to compare configMap b/w versions?</li>
<li>How to control the number of versions?</li>
</ol>
<p>Thanks</p>
| <blockquote>
<p>Is it possible to use a specific version of a configMap in the deployment file?</p>
</blockquote>
<p>Not really.<br>
The closest notion of a "version" is resourceVersion, but that is not for the user to directly act upon.</p>
<p>See <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#concurrency-control-and-consistency" rel="nofollow noreferrer">API conventions: concurrency control and consistency</a>:</p>
<blockquote>
<p>Kubernetes leverages the concept of resource versions to achieve optimistic concurrency. All Kubernetes resources have a "<code>resourceVersion</code>" field as part of their metadata. This <code>resourceVersion</code> is a string that identifies the internal version of an object that can be used by clients to determine when objects have changed.<br>
When a record is about to be updated, it's version is checked against a pre-saved value, and if it doesn't match, the update fails with a <code>StatusConflict</code> (HTTP status code 409).</p>
<p>The <code>resourceVersion</code> is changed by the server every time an object is modified. If <code>resourceVersion</code> is included with the <code>PUT</code> operation the system will verify that there have not been other successful mutations to the resource during a read/modify/write cycle, by verifying that the current value of <code>resourceVersion</code> matches the specified value.</p>
<p>The <code>resourceVersion</code> is currently backed by etcd's <code>modifiedIndex</code>.<br>
However, it's important to note that the application should not rely on the implementation details of the versioning system maintained by Kubernetes. We may change the implementation of <code>resourceVersion</code> in the future, such as to change it to a timestamp or per-object counter.</p>
<p>The only way for a client to know the expected value of <code>resourceVersion</code> is to have received it from the server in response to a prior operation, typically a <code>GET</code>. This value MUST be treated as opaque by clients and passed unmodified back to the server.<br>
Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers.<br>
Currently, the value of <code>resourceVersion</code> is set to match etcd's sequencer. You could think of it as a logical clock the API server can use to order requests.
However, we expect the implementation of <code>resourceVersion</code> to change in the future, such as in the case we shard the state by kind and/or namespace, or port to another storage system.</p>
<p>In the case of a conflict, the correct client action at this point is to <code>GET</code> the resource again, apply the changes afresh, and try submitting again.<br>
This mechanism can be used to prevent races like the following:</p>
</blockquote>
<pre><code>Client #1 Client #2
GET Foo GET Foo
Set Foo.Bar = "one" Set Foo.Baz = "two"
PUT Foo PUT Foo
</code></pre>
<blockquote>
<p>When these sequences occur in parallel, either the change to <code>Foo.Bar</code> or the change to <code>Foo.Baz</code> can be lost.</p>
<p>On the other hand, when specifying the <code>resourceVersion</code>, one of the <code>PUT</code>s will fail, since whichever write succeeds changes the <code>resourceVersion</code> for <code>Foo</code>.</p>
<p><code>resourceVersion</code> may be used as a precondition for other operations (e.g., <code>GET</code>, <code>DELETE</code>) in the future, such as for read-after-write consistency in the presence of caching.</p>
<p>"Watch" operations specify <code>resourceVersion</code> using a query parameter. It is used to specify the point at which to begin watching the specified resources.<br>
This may be used to ensure that no mutations are missed between a <code>GET</code> of a resource (or list of resources) and a subsequent Watch, even if the current version of the resource is more recent.<br>
This is currently the main reason that list operations (<code>GET</code> on a collection) return <code>resourceVersion</code>.</p>
</blockquote>
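<p>If you just want to see the current <code>resourceVersion</code> of a ConfigMap (the name here is a placeholder), it is visible in the object's metadata:</p>
<pre><code>kubectl get configmap my-config -o jsonpath='{.metadata.resourceVersion}'
</code></pre>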
|
<p>I'm trying to set the <code>minimum-container-ttl-duration</code> property on a Kubernetes CronJob. I see a bunch of properties like this that appear to be configurable, but the documentation doesn't appear to show where, in the yml file, they can actually be set.</p>
<p>In this example yml, where would I put this property?</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
| <p><code>minimum-container-ttl-duration</code> is not a property on <code>CronJob</code> but <strong>is a Node-level property</strong> set via a command line parameter: <code>kubelet ... --minimum-container-ttl-duration=x</code>.</p>
<p><a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#user-configuration" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#user-configuration</a>:</p>
<blockquote>
<p><code>minimum-container-ttl-duration</code>, minimum age for a finished container before it is garbage collected. Default is 0 minute, which means every finished container will be garbage collected.</p>
</blockquote>
<p>The usage of this flag is deprecated.</p>
|
<p>I am trying to run my first app with Kubernetes locally (or I should say on minikube).</p>
<p>I have a pretty basic web server (one local docker image), and the official mongodb (that i would like to pull ideally from dockerhub) image.</p>
<p>I am not trying to deploy a mongodb cluster, just the minimum stuff to get my app running locally would be a great start!</p>
<p>First, I succeeded in running my web server alone with <code>kubectl run <MY_APP> --image=<MY_IMAGE> --port 3030 --image-pull-policy=IfNotPresent</code>, then <code>kubectl port-forward <MY_POD> 3030:80</code>, and it works fine: I can hit the app on port 3030 (the app is listening and the container exposes port 80).</p>
<p>But I would like to translate that into a manifest file that describes all the containers I need, so they are easy to run. </p>
<p>My first issue is that I can't find how <code>kubectl port-forward</code> is supposed to be translated into a manifest file.
I was thinking about <code>targetPort</code>, but I get a validation error when trying that; it looks like <code>targetPort</code> can't be used in a pod's container port description.</p>
<p>My second issue is that I am not really sure about what I am doing by trying to run that stack by describing a pod only. It may need other pieces: a <code>service</code> is, I think, optional for my need, I am not sure about <code>deployment</code>, I have seen an <code>endpoint</code> kind, and I could ignore other ones...</p>
<p>I am a little bit confused since <code>kubectl run</code> seems to create a pod, a deployment, and a replica set, and I am not sure if I have to create all of that from my manifest file.</p>
<p>I just want to run both of my containers locally so I can work on the code, refresh it every time I make a change, and test it.</p>
<p>So my question has some sub-questions due to my lack of knowledge about Kubernetes, but basically, I would like to know how to translate my <code>kubectl run <MY_APP></code> and <code>kubectl port-forward <MY_POD> 3030:80</code> into a manifest file so I can add the mongodb container and start the whole stack with a single <code>kubectl create -f ./local.yaml</code> command line.</p>
<p>Here is a first draft:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my_app
spec:
containers:
- name: web-server
image: my_app
imagePullPolicy: IfNotPresent
ports:
- name: my_app
containerPort: 3030
targetPort: 80
protocol: TCP
- name: mongodb
image: mongodb
ports:
- name: mongodb
containerPort: 27017
protocol: TCP
</code></pre>
| <p>Yes, you are right. You can expose your app as a service with <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noreferrer">Type NodePort</a> (though it's not very clear in documentation), and your service's yaml will look like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-service
spec:
type: NodePort
ports:
- port: 3030
targetPort: 80
selector:
app: app-server
</code></pre>
<p>Your deployment's yaml will look like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app-server
labels:
app: app-server
spec:
selector:
matchLabels:
app: app-server
template:
metadata:
labels:
app: app-server
spec:
containers:
- name: web-server
image: my_app
ports:
- containerPort: 80
- name: mongodb
image: mongodb
ports:
- containerPort: 27017
</code></pre>
<p>As you see, I exposed only your web server. Now, to get access to Mongo from outside Kubernetes, you need to expose it too. </p>
<p>You can deploy your app with a single command:</p>
<pre><code>kubectl apply -f ./file_with_service.yaml -f ./file_with_deployment.yaml
</code></pre>
<p>You can use all of this as an example to begin with, and read the <a href="https://kubernetes.io/docs/concepts/" rel="noreferrer">documentation</a> to understand it more clearly. </p>
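<p>Since you are on minikube, the quickest way to reach the NodePort service afterwards is (service name taken from the yaml above):</p>
<pre><code>minikube service app-service --url
</code></pre>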
|
<p>I'm at a real loss on this one. I've been attempting to get my application running with a replica set in Kubernetes for awhile. I'm setting: <code>spring.data.mongodb.uri=${MYAPP_MONGODB}:mongodb://localhost:27017/myapp</code>
in <code>application.properties</code> and using Spring Data to access my objects. </p>
<p>Locally using a local MongoDB container it works fine even if I set the env var to my remote databases locally I can connect to them and work just fine. But when I put the value of MYAPP_MONGODB into k8s secrets when the container boots I get quoted error from the title. The value is like this:</p>
<p><code>mongodb://myuser:[email protected]:27017,2.2.2.2:27017,3.3.3.3:27017,4.4.4.4:27017,5.5.5.5:27017/myapp</code> </p>
<p>I <a href="https://github.com/spring-projects/spring-data-mongodb/blob/master/spring-data-mongodb/src/main/java/org/springframework/data/mongodb/core/SimpleMongoDbFactory.java#L88" rel="nofollow noreferrer">reviewed the source</a> and still baffled as to why this is happening. Pulling the secret from the k8s environment it is correct. </p>
<p>Any help is much appreciated!</p>
| <p>It sounds like your secret in k8s might be setup incorrectly. I would try uploading your secrets again and decoding them to make sure they are correct. Careful for random line breaks :)</p>
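<p>A quick way to check for exactly that kind of issue (the secret and key names here are placeholders): decode the stored value and look for stray newlines or carriage returns, and when re-creating the secret, use a literal so no trailing newline sneaks in.</p>
<pre><code># show the decoded value including any hidden control characters
kubectl get secret myapp-secret -o jsonpath='{.data.MYAPP_MONGODB}' | base64 --decode | od -c | tail
# re-create from a literal (delete the old secret first); no trailing newline gets added this way
kubectl create secret generic myapp-secret --from-literal=MYAPP_MONGODB='mongodb://...'
</code></pre>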
|
<p>I'm trying to delete an existent job using</p>
<pre><code>kubectl delete job/job-name -n my-namespace
</code></pre>
<p>But this error is displayed</p>
<pre><code>caling the resource failed with: Job.batch "kong-loop" is invalid:
spec.template: Invalid value: api.PodTemplateSpec{...}: field is
immutable; Current resource version 12189833
</code></pre>
| <p>The solution posted by @esnible does work in this scenario, but it is simpler to do these steps:</p>
<ol>
<li>Delete job with cascade false</li>
</ol>
<p><code>kubectl delete job/jobname -n namespace --cascade=false</code></p>
<ol start="2">
<li>Delete any pod that exists </li>
</ol>
<p><code>kubectl delete pod/podname -n namespace</code></p>
<p>Solution found in this Google Groups discussion: <a href="https://groups.google.com/forum/#!topic/kubernetes-users/YVmUgktoqtI" rel="nofollow noreferrer">https://groups.google.com/forum/#!topic/kubernetes-users/YVmUgktoqtI</a></p>
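<p>As a shortcut, the pods created by a job carry a <code>job-name</code> label, so any leftover pods can be removed in one go (names follow the question and error message above):</p>
<pre><code>kubectl delete pods -n my-namespace -l job-name=kong-loop
</code></pre>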
|
<p>I am trying to use Kubernetes and Jenkins for deploying my microservices developed with Spring Boot. While exploring, I found that many YouTube videos and other documentation tutorials use dockerhub.com as the registry that keeps the published image.</p>
<p>Can I deploy a docker image to Kubernetes from a Jenkins docker image build without using dockerhub.com? I mean that I don't want to share the client's code in a public place. So can I use Jenkins without dockerhub.com?</p>
| <p>You do need to use <em>some</em> registry- kubernetes needs a registry URL to be able to pull and instantiate a particular image as a container in a pod. To avoid having the images themselves be publicly accessible you have 2 options:</p>
<ul>
<li>use a business account at a public registry. You can get one of these from Docker, or from other services like Google or Quay. When you push images using a business account, you get a private space in the public registry and only your account credentials can push and pull those images. In this case your Kubernetes- and your Jenkins- has to be configured with credentials derived from your account to be able to pull those private images into your cluster.</li>
<li>run a private registry in your cluster or on your non-cluster infrastructure. There are many flavors of private registries, including Docker's, Atlassian's, and many others. This keeps your images entirely on your infrastructure. The tradeoff is that you have to configure and run this as a production service, and most private registries suitable for production use have a lot of moving parts for scalable image storage, indexing, backup, and so forth. (A minimal credentials sketch for either option follows this list.)</li>
</ul>
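<p>In either case, the cluster needs credentials for the registry; a minimal sketch (registry URL and account details are placeholders) is to create a pull secret and reference it from the pod spec:</p>
<pre><code>kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword \
  [email protected]
</code></pre>
<p>and then add <code>imagePullSecrets: [{name: regcred}]</code> to the pod template (or attach it to the service account) so Kubernetes can pull the private image.</p>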
|
<p>I have a question related to Kubernetes networking. </p>
<p>I have a microservice (say <em>numcruncherpod</em>) running in a pod which is serving requests via port <code>9000</code>, and I have created a corresponding Service of type <code>NodePort</code> (<em>numcrunchersvc</em>); the node port on which this service is exposed is <code>30900</code>. </p>
<p>My cluster has 3 nodes with following IPs: </p>
<ol>
<li>192.168.201.70, </li>
<li>192.168.201.71</li>
<li>192.168.201.72</li>
</ol>
<p>I will be routing the traffic to my cluster via reverse proxy (nginx). As I understand in nginx I need to specify IPs of all these cluster nodes to route the traffic to the cluster, is my understanding correct ? </p>
<p>My worry is that since nginx won't have knowledge of the cluster, it might not be a good judge of which cluster node the traffic should be sent to. So is there a better way to route the traffic to my Kubernetes cluster? </p>
<p><em>PS: I am not running the cluster on any cloud platform.</em></p>
| <p>This answer is a little late, and a little long, so I ask for forgiveness before I begin. :)</p>
<p>For people not running kubernetes clusters on Cloud Providers there are 4 distinct options for exposing services running inside the cluster to the world outside.</p>
<ol>
<li><p>Service of <code>type: NodePort</code>. This is the simplest and default. Kubernetes assigns a random port to your service. Every node in the cluster listens for traffic to this particular port and then forwards that traffic to any one of the pods backing that service. This is usually handled by kube-proxy, which leverages iptables and load balances using a round-robin strategy. Typically since the UX for this setup is not pretty, people often add an external "proxy" server, such as HAProxy, Nginx or httpd to listen to traffic on a single IP and forward it to one of these backends. This is the setup you, OP, described.</p></li>
<li><p>A step up from this would be using a Service of <code>type: ExternalIP</code>. This is identical to the <code>NodePort</code> service, except it also gets kubernetes to add an additional rule on all kubernetes nodes that says "All traffic that arrives for destination IP == the external IP must <em>also</em> be forwarded to the pods". This basically allows you to specify any arbitrary IP as the "external IP" for the service. As long as traffic destined for that IP reaches one of the nodes in the cluster, it will be routed to the correct pod. Getting that traffic to any of the nodes, however, is your responsibility as the cluster administrator. The advantage here is that you no longer have to run an haproxy/nginx setup, if you specify the IP of one of the physical interfaces of one of your nodes (for example one of your master nodes). Additionally you cut down the number of hops by one.</p></li>
<li><p>Service of <code>type: LoadBalancer</code>. This service type brings baremetal clusters at parity with cloud providers. A fully functioning loadbalancer provider is able to select IP from a pre-defined pool, automatically assign it to your service and advertise it to the network, assuming it is configured correctly. This is the most "seamless" experience you'll have when it comes to kubernetes networking on baremetal. Most of LoadBalancer provider implementations use BGP to talk and advertise to an upstream L3 router. Metallb and kube-router are the two FOSS projects that fit this niche.</p></li>
<li><p>Kubernetes Ingress. If your requirement is limited to L7 applications, such as REST APIs, HTTP microservices etc. You can setup a single Ingress provider (nginx is one such provider) and then configure ingress resources for all your microservices, instead of service resources. You deploy your ingress provider and make sure it has an externally available and routable IP (you can pin it to a master node, and use the physical interface IP for that node for example). The advantage of using ingress over services is that ingress objects understand HTTP mircoservices natively and you can do smarter health checking, routing and management.</p></li>
</ol>
<p>Often people combine one of (1), (2), (3) with (4), since the first 3 are L4 (TCP/UDP) and (4) is L7. So things like URL path/Domain based routing, SSL Termination etc is handled by the ingress provider and the IP lifecycle management and routing is taken care of by the service layer.</p>
<p>For your use case, the ideal setup would involve:</p>
<ol>
<li>A deployment for your microservice, with health endpoints on your pod</li>
<li>An Ingress provider, so that you can tweak/customize your routing and load-balancing, as well as use it for SSL termination, domain matching etc. (a minimal Ingress example is sketched after this list)</li>
<li>(optional): Use a <code>LoadBalancer</code> provider to front your Ingress provider, so that you don't have to manually configure your Ingress's networking.</li>
</ol>
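<p>To make the Ingress piece of the setup above concrete for the service in the question, a minimal Ingress resource might look like this (the host name is a placeholder, and an ingress controller still has to be running and reachable as discussed in option 4):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: numcruncher-ingress
spec:
  rules:
  - host: numcruncher.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: numcrunchersvc
          servicePort: 9000
</code></pre>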
|
<p>How can I programatically determine if a <code>job</code> has failed for good and will not retry any more? I've seen the following on failed jobs:</p>
<pre><code>status:
conditions:
- lastProbeTime: 2018-04-25T22:38:34Z
lastTransitionTime: 2018-04-25T22:38:34Z
message: Job has reach the specified backoff limit
reason: BackoffLimitExceeded
status: "True"
type: Failed
</code></pre>
<p>However, the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-patterns" rel="noreferrer">documentation</a> doesn't explain why <code>conditions</code> is a list. Can there be multiple conditions? If so, which one do I rely on? Is it a guarantee that there will only be one with <code>status: "True"</code>?</p>
| <p><code>JobConditions</code> is similar to <code>PodConditions</code>. You may read about <code>PodConditions</code> in the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">official docs</a>.</p>
<p>Anyway, to determine whether a job has completed or failed, I follow another approach. Let's look at it.</p>
<hr>
<p><strong>There are two fields in Job Spec.</strong> </p>
<p>One is <a href="https://github.com/kubernetes/api/blob/master/batch/v1/types.go#L78" rel="nofollow noreferrer"><code>spec.completion</code></a> (default value 1), which says,</p>
<blockquote>
<p>Specifies the desired number of successfully finished pods the
job should be run with.</p>
</blockquote>
<p>Another is <a href="https://github.com/kubernetes/api/blob/master/batch/v1/types.go#L88" rel="nofollow noreferrer"><code>spec.backoffLimit</code></a> (default value 6), which says,</p>
<blockquote>
<p>Specifies the number of retries before marking this job failed.</p>
</blockquote>
<hr>
<p><strong>Now In JobStatus</strong></p>
<p>There are two fields in JobStatus too. <a href="https://github.com/kubernetes/api/blob/master/batch/v1/types.go#L146" rel="nofollow noreferrer"><code>Succeeded</code></a> and <a href="https://github.com/kubernetes/api/blob/master/batch/v1/types.go#L150" rel="nofollow noreferrer"><code>Failed</code></a>. <code>Succeeded</code> means how many times the Pod completed successfully and <code>Failed</code> denotes, The number of pods which reached phase Failed. </p>
<ul>
<li>Once the <code>Success</code> is equal or bigger than the <code>spec.completion</code>, the job will become <code>completed</code>.</li>
<li>Once the <code>Failed</code> is equal or bigger than the <code>spec.backOffLimit</code>, the job will become <code>failed</code>.</li>
</ul>
<p>So, the logic will be here,</p>
<pre><code>// Note: the batch/v1 field is spec.completions (Completions), a *int32, as is BackoffLimit.
if job.Status.Succeeded >= *job.Spec.Completions {
    return "completed"
} else if job.Status.Failed >= *job.Spec.BackoffLimit {
    return "failed"
}
</code></pre>
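<p>If you are scripting this from outside the cluster rather than in Go, the terminal conditions the question asks about can also be read directly; a job that will not retry any more has a condition of type <code>Failed</code> with status <code>True</code> (job name and namespace here are placeholders):</p>
<pre><code>kubectl get job myjob -n mynamespace \
  -o jsonpath='{.status.conditions[?(@.type=="Failed")].status}'
</code></pre>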
|
<p>The Linux I have in this container is as shown below:</p>
<pre><code>root@sbolla-6c7b7589d8-5c2rb:/usr/safenet/lunaclient# uname -a
Linux sbolla-6c7b7589d8-5c2rb 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 GNU/Linux
root@sbolla-6c7b7589d8-5c2rb:/usr/safenet/lunaclient#
</code></pre>
<p>I am trying to do a simple scp with sshpass command as shown below and running into this error. any ideas really appreciated. Please note that I have tried scp and not cp. Infact this line is in a script, I tried it on the linux command line I got this error. I have also tried escaping the Environment variables with ' and " and \ and all combos, but that doesn't seem to have helped.</p>
<pre><code>root@sbolla-6c7b7589d8-5c2rb:/usr/safenet/lunaclient/bin# sshpass -p '$H_PASSWORD' scp -v $H_USERNAME@$H_HOSTNAME:server.pem .
Executing: cp '--' '[email protected]' '.'
cp: cannot stat '[email protected]': No such file or directory
Executing: cp '--' ':server.pem' '.'
cp: cannot stat ':server.pem': No such file or directory
root@sbolla-6c7b7589d8-5c2rb:/usr/safenet/lunaclient/bin#
</code></pre>
<p>If I explicitly use this command, I got it to work, though I am not sure why. Please note that the hostnames in these outputs have been edited to some goofy names.</p>
<pre><code>root@sbolla-6c7b7589d8-5c2rb:/usr/safenet/lunaclient# sshpass -p $HSM_PASSWORD scp [email protected]:server.pem .
ls -lia
root@sbolla-6c7b7589d8-5c2rb:/usr/safenet/lunaclient# ls -l | grep ser
-rw-r--r-- 1 root root 1172 Apr 28 12:23 server.pem
root@sbolla-6c7b7589d8-5c2rb:/usr/safenet/lunaclient# date
Sat Apr 28 12:24:29 UTC 2018
root@sbolla-6c7b7589d8-5c2rb:/usr/safenet/lunaclient#
</code></pre>
<p>I have tried the answer provided below but didn't work</p>
<pre><code>sshpass -p "$H_PASSWORD" scp -v $H_USERNAME@${H_HOSTNAME}:server.pem .
</code></pre>
<p>Another thing I have noticed: if I run <code>env</code> in this container, I see an extra blank line in the output around this environment variable; could that be an issue? See how the env output shows it. Note that I have not entered a line break on purpose, but when I type the <code>env</code> command I see a blank line next to H_HOSTNAME and H_PARTITION, which is weird:
H_PARTITION=Operator</p>
<pre><code>MYSQL_PORT=tcp://11.123.113.242:3306
LUNAHS=TRUE
H_HOSTNAME=grs-defcon.brilliant.local
ROOT_PORT_443_TCP_PORT=443
MYSQL_PORT_3306_TCP_ADDR=11.456.231.242
</code></pre>
| <p>Looks like 2 shell expansion issues. Try:</p>
<pre><code>sshpass -p "$H_PASSWORD" scp -v $H_USERNAME@${H_HOSTNAME}:server.pem .
</code></pre>
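<p>Given the stray line break you noticed in the <code>env</code> output, it is also worth checking the variable itself for hidden newline or carriage-return characters (a common side effect of how the value was injected, for example from a secret encoded with a trailing newline):</p>
<pre><code># reveal any hidden control characters in the value
printf '%s' "$H_HOSTNAME" | od -c
</code></pre>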
|
<p>An SSL redirect is enabled by default in a Kubernetes NGINX ingress. How can this be disabled? Current implementation below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: project_name-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: project_name
servicePort: 80
</code></pre>
| <p>Adding <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code> to <code>annotations</code> will disable the SSL redirect:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: project_name-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: project_name
servicePort: 80
</code></pre>
<p>Note that <code>false</code> is wrapped in quotation marks. I found it didn't work without this string casting.</p>
|
<p>In Kubernates, heapster and influxdb is present with thier specific schema. I want to store pod all information in a custom database (sqlite) with my specified schema. Is it possible? If it is possible, what is the source of information for pod? How can I take it and store?</p>
| <p>Kubernetes uses etcd to store metadata about the cluster. etcd is a schemaless key-value store. Storage of cluster objects uses an object serialization scheme. </p>
<p>The code to read starts here: <a href="https://github.com/kubernetes/kubernetes/tree/2bb1e7581544b9bd059eafe6ac29775332e5a1d6/staging/src/k8s.io/apiserver/pkg/storage" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/2bb1e7581544b9bd059eafe6ac29775332e5a1d6/staging/src/k8s.io/apiserver/pkg/storage</a> </p>
|
<p>I would like to configure Kafka broker in Kubernetes. The docker image I am using is <code>confluentinc/cp-kafka:latest</code>. It requires <code>KAFKA_ADVERTISED_LISTENERS</code> environment variable which allows Kafka client to communicate with broker. </p>
<p>The problem is the difficulty to assign service endpoints IP to <code>KAFKA_ADVERTISED_LISTENERS</code>. If I am using <code>localhost</code> as this value, it is only working in local Kafka broker pod but it won't work for some Kafka client pods in kubernetes cluster to communicate with it. If I am using the service endpoint IP coming from <code>kubectl get endpoints -l app=kafka</code>, this is working but it is little overhead to use some audit script set this dynamic value every time.</p>
<p>I wonder is there a better way I can dynamically set this value in Kubernetes yaml file, so I don't need to programatically set this IP every time.</p>
<p>Here is the yaml file:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: kafka-broker
labels:
app: kafka
spec:
type: NodePort
ports:
- port: 9092
targetPort: 9092
selector:
app: kafka
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-broker
spec:
replicas: 1
selector:
matchLabels:
app: kafka
template:
metadata:
labels:
app: kafka
spec:
hostname: broker
containers:
- name: kafka
image: confluentinc/cp-kafka:latest
ports:
- containerPort: 9092
env:
- name: KAFKA_ADVERTISED_LISTENERS
value: "PLAINTEXT://DYNAMIC_ENDPOINT_IP:9092"
- name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: zoo1:2181
</code></pre>
<p>Thanks in advance.</p>
<p><strong>Edit:</strong> I tried to use Server name, Service host environment variable, service source IP and Pod IP. Unfortunately, I still get the error in kafka log:
<code>
java.lang.IllegalArgumentException: Error creating broker listeners from 'PLAINTEXT://$KAFKA_BROKER_SERVICE_HOST:9092': Unable to parse PLAINTEXT://$KAFKA_BROKER_SERVICE_HOST:9092 to a broker endpoint
</code></p>
<p>If I am using <code>kubectl exec -it kafa-broker-ssfjks env</code>, those environment variables are actually set correctly in this pod. I guess it may be related to a Kafka broker configuration issue ?</p>
| <p>You should let your clients connect through the service, so exposing the IP or DNS name of the service should work. By default, services are exposed as environment variables in the pod. If a DNS plugin is configured, DNS can be used. More info: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables</a></p>
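<p>For example, instead of an IP you could point the advertised listener at the service's cluster DNS name (assuming the <code>kafka-broker</code> service from the question lives in the <code>default</code> namespace):</p>
<pre><code>        env:
        # cluster-internal DNS name of the Service defined above (namespace assumed to be "default")
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "PLAINTEXT://kafka-broker.default.svc.cluster.local:9092"
</code></pre>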
|
<p>My understanding is that in Kubernetes, when using the Horizontal Pod Autoscaler, if the <code>targetCPUUtilizationPercentage</code> field is set to 50%, and the average CPU utilization across all the pod's replicas is above that value, the HPA will create more replicas. Once the average CPU drops below 50% for some time, it will lower the number of replicas.<br><br> <strong>Here is the part that I am not sure about:</strong>
<br>What if the CPU utilization on a pod is 10%, not 0%? Will HPA still terminate the replica?<br> 10% CPU isn't much, but since it's not 0%, some task is currently running on that pod. If it's a long-lasting task (several seconds) and HPA decides to terminate the pod, that task will not be finished.</p>
<p>Does the HPA terminate pods only if the CPU utilization on them is 0% or does it terminate them whenever it sees that the value is below <code>targetCPUUtilizationPercentage</code>? </p>
<p>How does HPA decide which pods to remove?<br>
Thank you!</p>
| <p>So you have two questions in there and let me address one by one. The first part - if a pod in a replica set is consuming let's say 10% then will Kubernetes kill that pod? The answer is Yes. Kubernetes is not looking at the individual pods but at an average of that metric across all pods in that replica set. Also the scaling down is gradual <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer">as explained here</a></p>
<p>The second part of the question - how does your application behave gracefully when a pod is about to be killed and it is still serving some requests? This can be handled by the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">grace period of the pod termination</a> and even better if you implement <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">a <code>PreStop</code> hook</a> - which will allow you to do something like stop taking incoming requests but process existing requests. The implementation of this will vary based on the language runtime you are using, so I won't go in the details here.</p>
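<p>For the Kubernetes side of it, a hook is declared in the pod spec; a minimal sketch (the image name and sleep duration are placeholders, and the actual drain logic depends on your runtime) looks like:</p>
<pre><code>    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: app
        image: myapp:latest
        lifecycle:
          preStop:
            exec:
              # stop accepting new work, then give in-flight requests time to finish
              command: ["sh", "-c", "sleep 10"]
</code></pre>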
<p>Lastly - one scenario you should consider is what if VM on which pod was running goes down abruptly - you have no chance to execute PreStop hook! I think the application needs to be robust enough to handle failures.</p>
|
<p>I'm having troubles establishing a SSL connection between a web service and a remotely hosted Postgres database. With the same cert and key files being used for the web service, I can connect to the database with tools such as pgAdmin and DataGrip. These files were downloaded from Postgres instance in the Google Cloud Console.</p>
<p><strong>Issue:</strong><br></p>
<p>At the time of Spring Boot service start up, the following error occurs:</p>
<pre><code>org.postgresql.util.PSQLException: Could not read SSL key file /tls/tls.key
</code></pre>
<p>Where I look at the Postgres server logs, the error is recorded as</p>
<pre><code>LOG: could not accept SSL connection: UNEXPECTED_RECORD
</code></pre>
<p><strong>Setup:<br></strong></p>
<p>Spring Boot service running on Minikube (local) and GKE connecting to a Google Cloud SQL Postgres instance.</p>
<p><strong>Actions Taken:<br></strong></p>
<p>I have downloaded the client cert & key. I created a K8s TLS Secret using the downloaded client cert & key. I also have made sure the files can be read from the volume mount by running the following command on the k8s deployment config:</p>
<pre><code>command: ["bin/sh", "-c", "cat /tls/tls.key"]
</code></pre>
<p>Here is the datasource url which is fed in via an environment variable (DATASOURCE).</p>
<pre><code>"jdbc:postgresql://[Database-Address]:5432/[database]?ssl=true&sslmode=require&sslcert=/tls/tls.crt&sslkey=/tls/tls.key"
</code></pre>
<p>Here is the k8s deployment yaml, any idea where i'm going wrong?</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "service.name" . }}
labels:
release: {{ template "release.name" . }}
chart: {{ template "chart.name" . }}
chart-version: {{ template "chart.version" . }}
release: {{ template "service.fullname" . }}
spec:
replicas: {{ $.Values.image.replicaCount }}
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: {{ template "service.name" . }}
release: {{ template "release.name" . }}
env: {{ $.Values.environment }}
spec:
imagePullSecrets:
- name: {{ $.Values.image.pullSecretsName }}
containers:
- name: {{ template "service.name" . }}
image: {{ $.Values.image.repo }}:{{ $.Values.image.tag }}
# command: ["bin/sh", "-c", "cat /tls/tls.key"]
imagePullPolicy: {{ $.Values.image.pullPolicy }}
volumeMounts:
- name: tls-cert
mountPath: "/tls"
readOnly: true
ports:
- containerPort: 80
env:
- name: DATASOURCE_URL
valueFrom:
secretKeyRef:
name: service
key: DATASOURCE_URL
- name: DATASOURCE_USER
valueFrom:
secretKeyRef:
name: service
key: DATASOURCE_USER
- name: DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: service
key: DATASOURCE_PASSWORD
volumes:
- name: tls-cert
projected:
sources:
- secret:
name: postgres-tls
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
</code></pre>
| <p>So I figured it out, I was asking the wrong question!</p>
<p>Google Cloud SQL has a proxy component for the Postgres database. Therefore, trying to connect the traditional way (the problem I was trying to solve) has been resolved by implementing proxy. Instead of dealing with whitelisting IPs, SSL certs, and such, you just spin up the proxy, point it at a GCP credential file, then updated your database uri to access via localhost.</p>
<p>To set up the proxy, you can find directions <a href="https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine" rel="nofollow noreferrer">here</a>. There is a good example of a k8s deployment file <a href="https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/blob/master/cloudsql/postgres_deployment.yaml" rel="nofollow noreferrer">here</a>.</p>
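<p>For reference, the proxy usually runs as a sidecar container in the same pod as the application; a trimmed-down sketch along the lines of the linked sample (instance connection name, image tag and secret paths are placeholders) is:</p>
<pre><code>      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=my-project:us-central1:my-instance=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
</code></pre>
<p>with the datasource then pointing at <code>jdbc:postgresql://127.0.0.1:5432/[database]</code> and no SSL parameters needed.</p>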
<p>One situation I did come across was the GCP service account. Make sure to add Cloud SQL Client AND Cloud SQL Editor roles. I only added the Cloud SQL Client to start with and kept getting the 403 error.</p>
|
<p>If i spawn a pod with </p>
<blockquote>
<p>hostNetwork: true</p>
</blockquote>
<p>and then attach to it </p>
<blockquote>
<p>kubectl exec -it my-pod-test bash</p>
</blockquote>
<p>and spawn something silly like </p>
<blockquote>
<p>nc -l < HOST_IP > 5678</p>
</blockquote>
<p>and netstat shows me</p>
<blockquote>
<p>tcp 0 0 0.0.0.0:34592 0.0.0.0:* LISTEN 94/nc</p>
</blockquote>
<p>Why is the random port (in this case) 34592 assigned? Id expect to see port 5678 in use.</p>
<p>Is this a "feature" of Weave, should i go for another networking addon?</p>
<p>More context: I am trying to follow
<a href="https://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-1-containerising-and-deploying/" rel="nofollow noreferrer">https://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-1-containerising-and-deploying/</a></p>
| <p>You don't say what image your pod is running, but I've had this symptom when using the <a href="https://busybox.net/" rel="nofollow noreferrer">busybox</a> <code>nc</code>, as found in images based on <a href="https://hub.docker.com/_/alpine/" rel="nofollow noreferrer">alpine</a> for instance.</p>
<p>As you will see in <a href="https://busybox.net/downloads/BusyBox.html#nc" rel="nofollow noreferrer">the docs</a>, the syntax to listen on a specific port is:</p>
<pre><code> nc -l -p 5678
</code></pre>
<p>Note that <code>hostNetwork: true</code> bypasses any pod network, so this has nothing to do with <a href="https://www.weave.works/oss/net/" rel="nofollow noreferrer">Weave Net</a> - your pod is running in the host network, just as if it was a process run directly on the host.</p>
|
<p>I am planning to start my project but am a bit confused about choosing between Amazon ECS and Kubernetes, perhaps because I am really a beginner with microservices architecture.</p>
<p>I would really appreciate if someone can show some path for deploying my docker container on a fast easier to handle platform.</p>
<p>Thanks </p>
| <p>Here is a list of differences from the top of my head:</p>
<p>AWS ECS / Kubernetes:</p>
<ul>
<li><p>Proprietary AWS implementation / Open source solution</p></li>
<li><p>Runs on AWS / Supported by most cloud providers and on premise</p></li>
<li><p>Task Definitions / PODs have different features</p></li>
<li><p>Runs on your EC2 machines or allows for serverless with Fargate (in beta) / Runs on any cluster of (physical/virtual/cloud) machines running the kubernetes controller.</p></li>
<li><p>Support for AWS VPCs / Support for multiple networking models</p></li>
</ul>
<p>I would also argue that kubernetes has a slightly steeper learning curve but ultimately provides more freedom and is probably a safer bet for the future given the wide adoption. </p>
<p>Features supported in both systems:</p>
<ul>
<li>Horizontal application scalability</li>
<li>Cluster Scalability</li>
<li>Load Balancing</li>
<li>Rolling upgrades</li>
<li>Logging (with additional logging systems)</li>
<li>Container Health Checks</li>
<li>APIs</li>
</ul>
<p>Amazon has bowed to customer pressure and currently has a managed kubernetes support in beta (EKS). </p>
<p>*edit: EKS is released now - but with an upcharge for the cluster controller nodes, as compared to google GKE for example.</p>
<p>Here is one <a href="https://platform9.com/blog/compare-kubernetes-vs-ecs/" rel="nofollow noreferrer">article</a> about the topic. </p>
|
<p>I am trying to use Jenkins, Kubernetes and my SVN repository for implementing CI/CD pipeline for deployment of my Spring Boot micro services. When I am exploring, I found that in Jenkins+Kubernetes deployment, the Docker image is pulling from image registries. And when I exploring sample implementation, I found that option for automated image building within Jenkins and options for defining the code repository like GitHub/SVN. </p>
<p>Regarding the Jenkins workflow, I have some doubts, which I am adding here:</p>
<p><strong>1.</strong> If the Docker image is pulled from an image registry for deployment using Jenkins and Kubernetes, then why do we also define the code repository's source link (GitHub/SVN) in Jenkins?</p>
<p><strong>Reason for doubt-</strong> I am thinking that Kubernetes and Jenkins only depend on the image registry for deployment, by pulling the image. So I am wondering why we define our GitHub repository link in Jenkins. This is the only reason for this doubt. Please correct me if I am thinking in the wrong direction.</p>
<p><strong>2.</strong> While exploring, I found an option for automated Docker image building within Jenkins using some scripts (I don't know exactly how it works; I found it in a YouTube video). So if we are pulling the image from an image registry, why would we also use automated Docker image builds within Jenkins? If this is an alternative option, what is the difference between image pulling and automated image building in Jenkins? Does it all depend on our requirements, or is there some other concept behind this?</p>
| <p>I am sorry, but I did not fully understand your question. I try to explain our CI/CD workflow. Maybe this helps.</p>
<ol>
<li>Developer commits code to GIT.</li>
<li>Jenkins automatically
<ul>
<li>checks out the repo</li>
<li>builds spring boot microservices (tests, code coverage, etc)</li>
<li>builds docker image</li>
<li>commit the docker images to a registry</li>
</ul></li>
</ol>
<p>With different switches/build parameters the developer can decide to build SNAPSHOT builds which are deployed only to a specific snapshot-registry for developers only, or to build real releases.</p>
<p>The developer can also decide to deploy the Docker image afterwards to the int or prod system. In this case Kubernetes pulls the image from the registry (which Jenkins previously deployed there).</p>
|
<p>we're currently trying out traefik and consider using it as ingress controller for our internal kubernetes cluster.</p>
<p>Now I wonder if it is possible to use traefik to loadbalance the kube-apiserver? We have a HA setup with 3 masters.</p>
<p>How would I proceed here?</p>
<p>Basically I just want to loadbalance the API requests from all nodes in the cluster between the 3 masters.</p>
<p>Should I just run traefik outside the cluster?</p>
<p>I'm trying to wrap my head around this... I'm having a hard time to understand how this could work together with traefik as ingress controller.</p>
<p>Thanks for any input, much appreciated!</p>
| <p>One way to achieve this is to use the <a href="https://docs.traefik.io/configuration/backends/file/" rel="nofollow noreferrer">file provider</a> and create a static setup pointing at your API server nodes; something like this (untested)</p>
<pre><code>[file]
[backends]
[backends.backend1]
[backends.backend1.servers]
[backends.backend1.servers.server1]
url = "http://apiserver1:80"
weight = 1
[backends.backend1.servers.server2]
url = "http://apiserver2:80"
weight = 1
[backends.backend1.servers.server3]
url = "http://apiserver3:80"
weight = 1
[frontends]
[frontends.frontend1]
entryPoints = ["http"]
backend = "backend1"
passHostHeader = true
[frontends.frontend1.routes]
[frontends.frontend1.routes.route1]
rule = "Host:apiserver"
</code></pre>
<p>(This assumes a simple HTTP-only setup; HTTPS would need some extra setup.)</p>
<p>When Traefik is given this piece of configuration (and whatever else you need to do either via the TOML file or CLI parameters), it will round-robin requests with an <em>apiserver</em> Host header across the three nodes.</p>
<p>Another at least potential option is to create a Service object capturing your API server nodes and another Ingress object referencing that Service and mapping the desirable URL path and host to your API server. This would give you more flexibility as the Service should adjust to changes to your API server automatically, which might be interesting when things like rolling upgrades come into play. One aggravating point though might be that Traefik needs to speak to the API server to process Ingresses and Services (and Endpoints, for that matter), which it cannot if the API server is unavailable. You might need some kind of HA setup or be willing to sustain a certain non-availability. (FWIW, Traefik should recover from temporary downtimes on its own.)</p>
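<p>A sketch of that Service plus manually maintained Endpoints pairing (IPs and port are placeholders; with no selector on the Service, the Endpoints object has to be kept up to date by hand or by some automation):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: apiserver-lb
spec:
  ports:
  - port: 6443
    targetPort: 6443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: apiserver-lb
subsets:
- addresses:
  - ip: 10.0.0.1
  - ip: 10.0.0.2
  - ip: 10.0.0.3
  ports:
  - port: 6443
</code></pre>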
<p>Whether you want run Traefik in-cluster or out-of-cluster is up to you. The former is definitely easier to setup if you want to process API objects as you won't have to pass in API server configuration parameters, though the same restrictions about API server connectivity needs apply if you want to go down the Ingress/Service route. With the file provider approach, you don't need to worry about that -- it is perfectly possible to run Traefik inside Kubernetes without using the Kubernetes provider.</p>
|
<p>I'm after an example that would do the following:</p>
<ol>
<li>Create a Kubernetes cluster on GKE via Terraform's <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html" rel="nofollow noreferrer"><code>google_container_cluster</code></a></li>
<li>... and continue creating namespaces in it, I suppose via <a href="https://www.terraform.io/docs/providers/kubernetes/r/namespace.html" rel="nofollow noreferrer"><code>kubernetes_namespace</code></a></li>
</ol>
<p>The thing I'm not sure about is how to connect the newly created cluster and the namespace definition. For example, when adding <code>google_container_node_pool</code>, I can do something like <code>cluster = "${google_container_cluster.hosting.name}"</code> but I don't see anything similar for <code>kubernetes_namespace</code>.</p>
| <p>In theory it is possible to reference resources from the GCP provider in K8S (or any other) provider in the same way you'd reference resources or data sources within the context of a single provider.</p>
<pre><code>provider "google" {
region = "us-west1"
}
data "google_compute_zones" "available" {}
resource "google_container_cluster" "primary" {
name = "the-only-marcellus-wallace"
zone = "${data.google_compute_zones.available.names[0]}"
initial_node_count = 3
additional_zones = [
"${data.google_compute_zones.available.names[1]}"
]
master_auth {
username = "mr.yoda"
password = "adoy.rm"
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
provider "kubernetes" {
host = "https://${google_container_cluster.primary.endpoint}"
username = "${google_container_cluster.primary.master_auth.0.username}"
password = "${google_container_cluster.primary.master_auth.0.password}"
client_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
resource "kubernetes_namespace" "n" {
metadata {
name = "blablah"
}
}
</code></pre>
<p>However in practice it may not work as expected due to a known core bug breaking cross-provider dependencies, see <a href="https://github.com/hashicorp/terraform/issues/12393" rel="noreferrer">https://github.com/hashicorp/terraform/issues/12393</a> and <a href="https://github.com/hashicorp/terraform/issues/4149" rel="noreferrer">https://github.com/hashicorp/terraform/issues/4149</a> respectively.</p>
<p>The alternative solution would be:</p>
<ol>
<li>Use 2-staged apply and <a href="https://www.terraform.io/docs/commands/plan.html#resource-targeting" rel="noreferrer">target</a> the GKE cluster first, then anything else that depends on it, i.e. <code>terraform apply -target=google_container_cluster.primary</code> and then <code>terraform apply</code></li>
<li>Separate out GKE cluster config from K8S configs, give them completely isolated workflow and connect those via <a href="https://www.terraform.io/docs/state/remote.html" rel="noreferrer">remote state</a>.</li>
</ol>
<p><strong><code>/terraform-gke/main.tf</code></strong></p>
<pre><code>terraform {
backend "gcs" {
bucket = "tf-state-prod"
prefix = "terraform/state"
}
}
provider "google" {
region = "us-west1"
}
data "google_compute_zones" "available" {}
resource "google_container_cluster" "primary" {
name = "the-only-marcellus-wallace"
zone = "${data.google_compute_zones.available.names[0]}"
initial_node_count = 3
additional_zones = [
"${data.google_compute_zones.available.names[1]}"
]
master_auth {
username = "mr.yoda"
password = "adoy.rm"
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
output "gke_host" {
value = "https://${google_container_cluster.primary.endpoint}"
}
output "gke_username" {
value = "${google_container_cluster.primary.master_auth.0.username}"
}
output "gke_password" {
value = "${google_container_cluster.primary.master_auth.0.password}"
}
output "gke_client_certificate" {
value = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
}
output "gke_client_key" {
value = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
}
output "gke_cluster_ca_certificate" {
value = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
</code></pre>
<p>Here we're exposing all the necessary configuration via <code>output</code>s and use backend to store the state, along with these outputs in a remote location, <a href="https://www.terraform.io/docs/backends/types/gcs.html" rel="noreferrer">GCS</a> in this case. This enables us to reference it in the config below.</p>
<p><strong><code>/terraform-k8s/main.tf</code></strong></p>
<pre><code>data "terraform_remote_state" "foo" {
backend = "gcs"
config {
bucket = "tf-state-prod"
prefix = "terraform/state"
}
}
provider "kubernetes" {
host = "https://${data.terraform_remote_state.foo.gke_host}"
username = "${data.terraform_remote_state.foo.gke_username}"
password = "${data.terraform_remote_state.foo.gke_password}"
client_certificate = "${base64decode(data.terraform_remote_state.foo.gke_client_certificate)}"
client_key = "${base64decode(data.terraform_remote_state.foo.gke_client_key)}"
cluster_ca_certificate = "${base64decode(data.terraform_remote_state.foo.gke_cluster_ca_certificate)}"
}
resource "kubernetes_namespace" "n" {
metadata {
name = "blablah"
}
}
</code></pre>
<p>What may or may not be obvious here is that cluster has to be created/updated before creating/updating any K8S resources (if such update relies on updates of the cluster).</p>
<hr>
<p>Taking the 2nd approach is generally advisable either way (even when/if the bug was not a factor and cross-provider references worked) as it reduces the blast radius and defines much clearer responsibility. It's (IMO) common for such deployment to have 1 person/team responsible for managing the cluster and a different one for managing K8S resources.</p>
<p>There may certainly be overlaps though - e.g. ops wanting to deploy logging & monitoring infrastructure on top of a fresh GKE cluster, so cross provider dependencies aim to satisfy such use cases. For that reason I'd recommend subscribing to the GH issues mentioned above.</p>
|
<p>I'm using kubernetes for deploying applications, and </p>
<pre><code>kubectl logs
</code></pre>
<p>for viewing logs.
Redeployments happen very often, so it's a bit inconvenient to copy the <code>pod</code> name each time and paste it into the log command (because the end of the pod name changes dynamically with each deployment), e.g. below are commands for the same application, after 2 deployments</p>
<pre><code>kubectl logs -n=testns --since=1m testapp-2818008534-jx2vv
kubectl logs -n=testns --since=1m testapp-2818008534-xls93
</code></pre>
<p>So, I want to use one command which will correctly "pick up" the needed POD name each time we're running "<code>kubectl logs</code>".</p>
<p>But smth like</p>
<pre><code>$kubectl logs -n=testns --since=1h $(kubectl get pods --namespace=testns | grep testapp)
</code></pre>
<p>returns an error that we are not "giving" a pod name, even though that pod definitely exists:</p>
<pre><code>error: expected 'logs (POD | TYPE/NAME) [CONTAINER_NAME]'.
POD or TYPE/NAME is a required argument for the logs command
</code></pre>
<p>So, how do I correctly build this command so it can be used every time we want to look at logs, without needing to re-copy/paste the full POD name?</p>
| <p>When you do:</p>
<p><code>kubectl get pods --namespace=testns | grep testapp</code></p>
<p>You get something like:</p>
<p><code>testapp-54d99599bc-g2gs4 1/1 Running 0 56m</code></p>
<p>So this won't work with the logs command, as it contains additional columns that kubectl cannot understand. What you can do is:</p>
<p><code>kubectl get pods --namespace=testns | grep testapp | cut -d' ' -f1</code></p>
<p>Which will produce only the name of the POD, and then your log command should work.</p>
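<p>Putting that together with your original attempt, the full command becomes (a sketch, assuming only one <code>testapp</code> pod is running; with multiple replicas grep would return several names):</p>
<pre><code>kubectl logs -n=testns --since=1h $(kubectl get pods --namespace=testns | grep testapp | cut -d' ' -f1)
</code></pre>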
|
<p>Does anyone have a clue why ".spec.replicas" would return this strange hex value?</p>
<pre><code>kubectl get rc -o=custom-columns=NAME:metadata.name,REPLICAS:.spec.replicas
NAME REPLICAS
devopsproxy 0xc20811d328
prd-devopsproxy-rs-etl 0xc20811d448
</code></pre>
<p>Thanks,
Drew</p>
| <p>The replicas field is a pointer: <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/types.go#L3036" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/types.go#L3036</a></p>
<p>It is not getting dereferenced to show you the literal value, but is displaying the memory address of the field</p>
<p>What version of kubectl are you using? This displays correctly for me:</p>
<pre><code>$ kubectl get rc -o=custom-columns=NAME:metadata.name,REPLICAS:.spec.replicas
NAME REPLICAS
frontend 1
</code></pre>
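<p>If you just need the raw value, a jsonpath query may also work as a cross-check (a sketch, assuming a replication controller named <code>frontend</code>; I have not verified its behaviour on the affected client version):</p>
<pre><code>kubectl get rc frontend -o jsonpath='{.spec.replicas}'
</code></pre>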
|
<p>On a single Ubuntu 14.04 box</p>
<p>I've followed the same configuration as
<a href="http://dojoblog.dellemc.com/dojo/deploy-kafka-cluster-kubernetes/" rel="nofollow noreferrer">http://dojoblog.dellemc.com/dojo/deploy-kafka-cluster-kubernetes/</a></p>
<p>I use Kubernetes version v1.10.2
( I also use apiVersion: apps/v1 in yml files. )</p>
<p>Basically I have set up a kubernetes service for kafka, and a kafka deployment using the image <strong>wurstmeister/kafka</strong>. Zookeeper is working ok. Zookeeper and Kafka services are up.
The Kafka deployment is configured as per the blog: <strong>KAFKA_ADVERTISED_HOST_NAME = the kafka service cluster IP</strong>, which for me is <em>10.106.84.132</em></p>
<p>deployment config :</p>
<pre><code>....
containers:
- name: kafka
image: wurstmeister/kafka
ports:
- containerPort: 9092
env:
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_ADVERTISED_HOST_NAME
value: 10.106.84.132
- name: KAFKA_ZOOKEEPER_CONNECT
value: zoo1:2181
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CREATE_TOPICS
value: topic1:3:3
</code></pre>
<p>Then I test the kafka subscribe and publish from outside the kafka container on my host, but that fails as follow :</p>
<pre><code>root@edmitchell-virtual-machine:~# kafkacat -b 10.106.84.132:9092 -t topic1
% Auto-selecting Consumer mode (use -P or -C to override)
% ERROR: Topic topic1 error: Broker: Leader not available
</code></pre>
<p>The best I could do overall was:</p>
<p>I deleted and recreated the kafka deployment with </p>
<ul>
<li>name: KAFKA_ADVERTISED_HOST_NAME
value: localhost</li>
</ul>
<p>I can then subscribe and publish but only from within the kafka container, it doesn't work from outside. If I change the value to anything else than localhost, nothing works.</p>
<p>Any idea?
It looks as if Kafka is not well suited to being used with Kubernetes?
Maybe I should not deploy Kafka using kubernetes..</p>
<p>many thanks
ed</p>
<hr>
<p>Thank you, I understand better now the nodeport function.</p>
<p>I still have the same issue :</p>
<pre><code>root@fnature-virtual-machine:~/Zookeeper# kafkacat -b 192.168.198.160:32748 -t topic1
% Auto-selecting Consumer mode (use -P or -C to override)
% ERROR: Topic topic1 error: Broker: Leader not available
</code></pre>
<p>I created the nodeport service as you said.</p>
<pre><code>kafka-nodeport NodePort 10.111.234.104 9092:32748/TCP 27m
kafka-service LoadBalancer 10.106.84.132 9092:30351/TCP 1d
</code></pre>
<p>I also delete/create the kafka deployment with following env : </p>
<pre><code> KAFKA_ADVERTISED_PORT: 32748
KAFKA_ADVERTISED_HOST_NAME: 192.168.198.160
KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
KAFKA_BROKER_ID: 1
KAFKA_CREATE_TOPICS: topic1:3:3
</code></pre>
<p>—</p>
<p>also if I run the following from inside the kafka container, I get similar error </p>
<blockquote>
<p>"Leader not available". kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic1 --from-beginning</p>
</blockquote>
<p>if I create the kafka deployment with <code>KAFKA_ADVERTISED_HOST_NAME: localhost</code>, then above command works inside the kafka container</p>
<p>and 192.168.198.160 is the ip of default interface ens33 in my Ubuntu VM</p>
<p>I can’t seem to find any logs for kafka</p>
| <p>Kafka broker registers an address to zookeeper via <code>KAFKA_ADVERTISED_HOST_NAME</code>. However, this address is a kubernetes cluster ip (10.106.84.132), which is only reachable within Kubernetes cluster. So a client outside the cluster can not reach Kafka broker using this address.</p>
<p>To resolve this problem, you can expose the kafka service on a public ip, either through <code>NodePort</code> or <code>LoadBalancer</code>. For example, run <code>kubectl expose svc $YOUR_KAFKA_SERVICE_NAME --name=kafka-nodeport --type=NodePort</code>, then look up which nodePort is exposed: <code>kubectl get svc kafka-nodeport -o yaml | grep nodePort</code>. In this example, the kafka broker will be accessible via this address: <code>$KUBERNETES_NODE_IP</code>:<code>$NODEPORT</code>.</p>
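<p>For reference, a minimal sketch of such a NodePort service as YAML (the names, labels and pinned nodePort here are assumptions; the selector must match the labels on your kafka pods):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kafka-nodeport
spec:
  type: NodePort
  selector:
    app: kafka          # must match your kafka deployment's pod labels
  ports:
  - port: 9092          # port inside the cluster
    targetPort: 9092    # containerPort of the broker
    nodePort: 32748     # optional: pin the node port instead of letting k8s pick one
</code></pre>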
|
<p>One of the documented best practices for Kubernetes is to store the configuration in version control. It is mentioned in <a href="https://kubernetes.io/docs/concepts/configuration/overview/#general-configuration-tips" rel="nofollow noreferrer">the official best practices</a> and also summed up in <a href="https://stackoverflow.com/questions/47168381/best-practices-for-storing-kubernetes-configuration-in-source-control">this</a> Stack Overflow question. The reason is that this is supposed to speed-up rollbacks if necessary. </p>
<p>My question is, why do we need to store this configuration if this is already stored by Kubernetes and there are ways with which we can easily <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#checking-rollout-history-of-a-deployment" rel="nofollow noreferrer">go back to a previous version of the configuration using for example kubectl</a>? An example is a command like:</p>
<pre><code>kubectl rollout history deployment/nginx-deployment
</code></pre>
<p>Isn't storing the configuration an unnecessary duplication of a piece of information that we will then have to keep synchronized? </p>
<p>The reason I am asking this is that we are building a configuration service on top of Kubernetes. The user will interact with it to configure multiple deployments. I was wondering if we should keep a history of the Kubernetes configuration and the contents of configMaps in a database for possible rollbacks, or if we should just rely on kubernetes to retrieve the current configuration and roll back to previous versions of the configuration.</p>
| <p>You <em>can</em> use Kubernetes as your store of configuration, to your point, it's just that you probably shouldn't want to. By storing configuration as code, you get several benefits:</p>
<ul>
<li>Configuration changes get regular code reviews.</li>
<li>They get versioned, are diffable, etc.</li>
<li>They can be tested, linted, and whatever else you desired.</li>
<li>They can be refactored, share code, and be documented.</li>
<li>And all this happens <em>before</em> actually being pushed to Kubernetes.</li>
</ul>
<p>That may seem bad ("but then my configuration is out of date!"), but keep in mind that configuration is actually never in date - just because you told Kubernetes you want 3 replicas running doesn't mean there are, or if there were that 1 isn't temporarily down right now, and so on.</p>
<p>Configuration expresses <strong>intent</strong>. It takes a different process to actually notice when your intent changes or doesn't match reality, and make it so. For Kubernetes, that storage is etcd and it's up to the master to, in a loop forever, ensure the stored intent matches reality. For you, the storage is source control and whatever process you want, automated or not, can, in a loop forever, ensure your code eventually becomes reflected in Kubernetes.</p>
<p>The rollback command, then, is just a very fast shortcut to "please do this <em>right now</em>!". It's for when your configuration intent was wrong and you don't have time to fix it. As soon as you roll back, you should chase your configuration and update it there as well. In a sense, this is indeed duplication, but it's a rare event compared to the normal flow, and the overall benefits outweigh this downside.</p>
|
<p>I wanted to execute <code>kubectl exec</code> command and connect to a container, so that I can execute some commands on it.
I have set the proxy with the following command:</p>
<pre><code>kubectl proxy -p=8080 --kubeconfig=/directory_path/remote-kubeconfig &
</code></pre>
<p>proxy started running. I tried executing <code>kubectl exec</code> command:</p>
<pre><code>kubectl exec -it <pod> --namespace=<namespace> -c <container> -- ls -l
</code></pre>
<p>I got the following error:</p>
<pre><code>error: unable to upgrade connection: <h3>Unauthorized</h3>
</code></pre>
<p>Did I missed anything while executing <code>kubectl exec</code> command ?? Please help.</p>
| <p>I got a solution for this problem but not a good one.</p>
<p>I issued <code>--disable-filter</code> along with <code>kubectl proxy</code> command. After that, I could execute <code>kubectl exec</code> commands successfully. </p>
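<p>For reference, the proxy command then looks roughly like this (a sketch; the port and kubeconfig path are the ones from the question):</p>
<pre><code>kubectl proxy -p=8080 --disable-filter=true --kubeconfig=/directory_path/remote-kubeconfig &
</code></pre>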
<p>But when I use <code>--disable-filter</code>, the proxy becomes vulnerable.
The message <code>Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious</code> is displayed with <code>--disable-filter</code>.</p>
<p>So, I don't think this is a good solution.</p>
<p><strong>Please let me know, if any one knows better solution for this problem.</strong></p>
|
<p>I have developed Spring Boot applications. I have set up admin and RabbitMQ as well as spring cloud bus. When I refresh the endpoints of the applications, it refreshes the properties for the application. </p>
<p>Can anyone please help me with how to set up RabbitMQ in kubernetes now? I did some research and found in a few articles that it needs to be deployed as a "Statefulset" rather than a "Deployment" <a href="https://notallaboutcode.blogspot.de/2017/09/rabbitmq-on-kubernetes-container.html" rel="nofollow noreferrer">https://notallaboutcode.blogspot.de/2017/09/rabbitmq-on-kubernetes-container.html</a>. I could not understand exactly why this needs to be done. Also, any useful link on deploying RabbitMQ in kubernetes would help.</p>
| <p>It depends on what you're looking to do and what tools you have available. I guess your current setup is much like that described in <a href="http://www.baeldung.com/spring-cloud-bus" rel="nofollow noreferrer">http://www.baeldung.com/spring-cloud-bus</a>. One approach to porting that to kubernetes might be to try to get your setup working with docker-compose first and then you could port that docker-compose to kubernetes deployment descriptors. </p>
<p>A simple way to deploy rabbitmq in k8s would be to set up a Deployment using a rabbitmq docker image. An example of this is <a href="https://github.com/Activiti/activiti-cloud-examples/blob/fe732096b5a19de0ad44879a399053f6ae02b095/kubernetes/kubectl/infrastructure.yml#L17" rel="nofollow noreferrer">https://github.com/Activiti/activiti-cloud-examples/blob/fe732096b5a19de0ad44879a399053f6ae02b095/kubernetes/kubectl/infrastructure.yml#L17</a>. (Notice that file isn't radically different from a docker-compose file so you could port from one to the other.) But that won't be persisting data outside of the Pods so if the cluster were to go down or the Pod/s were to go down then you'd lose message data. The persistence is ephemeral.</p>
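<p>For illustration, a minimal (single replica, ephemeral) sketch of such a Deployment plus Service could look like this; the names, labels and management image tag are assumptions, not taken from the linked example:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management   # includes the management UI
        ports:
        - containerPort: 5672          # AMQP
        - containerPort: 15672         # management UI
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
  - name: amqp
    port: 5672
  - name: management
    port: 15672
</code></pre>
<p>Your Spring Boot services would then reach the broker at <code>rabbitmq:5672</code> within the cluster.</p>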
<p>So to have non-ephemeral persistence you could instead use a StatefulSet as in the example you point to. Another example is <a href="https://wesmorgan.svbtle.com/rabbitmq-cluster-on-kubernetes-with-statefulsets" rel="nofollow noreferrer">https://wesmorgan.svbtle.com/rabbitmq-cluster-on-kubernetes-with-statefulsets</a> </p>
<p>If you are using helm (or can use helm) then you could use the <a href="https://hub.kubeapps.com/charts/stable/rabbitmq" rel="nofollow noreferrer">rabbitmq helm chart</a>, <a href="https://github.com/kubernetes/charts/blob/master/stable/rabbitmq/templates/statefulset.yaml#L2" rel="nofollow noreferrer">which uses a StatefulSet</a>. </p>
<p>But if your only reason for needing the bus is to trigger refreshes when property changes happen then there are alternative paths available with Kubernetes. I'm guessing you need the hot reloads so you could look at using <a href="https://github.com/fabric8io/spring-cloud-kubernetes#propertysource-reload" rel="nofollow noreferrer">https://github.com/fabric8io/spring-cloud-kubernetes#propertysource-reload</a> Or if you need the config to come from git specifically then you could look at <a href="http://fabric8.io/guide/develop/configuration.html" rel="nofollow noreferrer">http://fabric8.io/guide/develop/configuration.html</a> (If you didn't need the hot reloads or git then you could consider versioning your configmaps and upgrading them with your application upgrades like in <a href="https://dzone.com/articles/configuring-java-apps-with-kubernetes-configmaps-a" rel="nofollow noreferrer">https://dzone.com/articles/configuring-java-apps-with-kubernetes-configmaps-a</a> )</p>
|
<p>I use k8s engine on google cloud. I want to run kube-proxy on the master node in order to access my pods through services of NodePort type via the master node.
How do I run kube-proxy on the master node?</p>
<p>I use 1.8.10-gke.0 k8s version.</p>
| <p>Moreover, consider that the master node in GKE is hosted in a <strong>managed</strong> infrastructure outside of your project, and you have no control over it.</p>
<p>For example you cannot decide to run pod on the master and you cannot access or modify what is running there.</p>
<hr>
<p>In order to reach directly the master you can run:</p>
<pre><code>$ kubectl proxy --port=8080
</code></pre>
<p>and then you reach the master directly on localhost.</p>
<p>For example you can try to run it from your you Google Shell and access to the Kubernetes Dashboard through the preview from your browser:</p>
<pre><code>https://8080-dot-[number-of-cloud-shell]-dot-devshell.appspot.com/ui
</code></pre>
<p>or running from the shell itself:</p>
<pre><code>wget localhost:8080
</code></pre>
|
<p>I'm having difficulty creating an nginx configuration file for a django app deployed in kubernetes.
Nginx and app are two separate containers within the same cluster. From what I understand containers can communicate with each other via 127.0.0.1:XX and via hostnames. I'm using minikube for this.
My app container is built from this file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: website
labels:
name: website
spec:
template:
metadata:
labels:
name: website
spec:
containers:
- name: website
image: killabien/web
ports:
- containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
name: website
labels:
name: website
spec:
type: LoadBalancer
ports:
- port: 8000
targetPort: 8000
selector:
name: website
</code></pre>
<p>And nginx from this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
ports:
- protocol: TCP
port: 80
targetPort: 80
selector:
app: website
tier: frontend
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: frontend
spec:
template:
metadata:
labels:
app: website
tier: frontend
spec:
containers:
- image: killabien/nginx
name: nginx
</code></pre>
<p>I can access the application itself(served without static files) but when I try to reach nginx I get 502 Bad Gateway error. Here is the nginx config file:</p>
<pre><code>upstream website {
server 127.0.0.1:8000;
}
server {
listen 80;
location /static {
alias /www/davidbien/static;
}
location / {
proxy_pass http://website;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
</code></pre>
<p>I tried replacing 127.0.0.1:8000 with the container name(website) and localhost:8000 but nothing changed. What am I doing wrong?</p>
| <p>For your particular question:</p>
<ul>
<li>containers can communicate with each other over the localhost reference (127.0.0.1) only if they are two containers in the same pod, which is not your case - hence the 502.</li>
<li>containers that are in different pods (as in your example) but share same namespace (say, default) can communicate with each other using service name (in your case: website)</li>
<li>containers that are in different namespaces can communicate using FQDN as in service-name.namespace-name.svc.cluster.local (in your case website.default.svc.cluster.local).</li>
</ul>
<p>Now, the philosophy of k8s is that a pod is the unit of deployment and a container is the unit of packaging. IF your nginx and website are always scaled together, then it stands to reason to put them both in the same pod and communicate over localhost as if they were two processes on the same machine. IF your frontend is going to handle multiple websites and they scale separately, then option 2 or 3 (communication over service name or FQDN) is in order.</p>
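<p>Applied to the nginx config from the question, the upstream would then point at the service name instead of loopback (a sketch, assuming nginx and the website service live in the same namespace):</p>
<pre><code>upstream website {
    server website:8000;                              # same namespace
    # server website.default.svc.cluster.local:8000;  # FQDN variant across namespaces
}
</code></pre>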
|
<h2><strong>Background</strong></h2>
<p>I had planned to go with Service Fabric (on premises) for my service and container orchestration. But, due to internal discussions, I am giving Kubernetes a look. Mostly because it is so very popular.</p>
<p>Service Fabric has concepts called Upgrade Domains and Failure Domains. A "domain" is a grouping of host nodes.</p>
<p>Upgrade Domains are used when pushing out an application service or container update. Service Fabric makes sure that the upgrading service/container is still available by only taking down one Upgrade Domain at at a time. (These are also used when updating the Service Fabric cluster software itself.)</p>
<p>Failure Domains work in a similar way. The idea is that the Failure Domains are created in alignment with hardware failure groups. Service Fabric makes sure that there are service/container instances running in each failure domain. (To allow for up time during a hardware failure.)</p>
<h2><strong>Question</strong></h2>
<p>As I look at docs and listen to podcasts on Kubernetes I don't see any of these concepts. It seems it just hosts containers (Pods). I have heard a bit about "scheduling" and "tags". But it seems it is just the way to manually configure pods.</p>
<p><strong>Are application upgrades and failure tolerance things that are done manually in Kubernetes? (via scheduling and/or tags perhaps)</strong></p>
<p>Or is there a feature I am missing?</p>
| <blockquote>
<p>A "domain" is a grouping of host nodes.</p>
</blockquote>
<p>It's not that simple; it would be more accurate to say <strong><em>"A 'domain' is a logical grouping of resources".</em></strong></p>
<p>To understand it correctly, you have to first understand most components in isolation. I recommend these readings first:</p>
<ul>
<li>Virtual Machines domains in <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/windows/manage-availability" rel="nofollow noreferrer">this docs</a>.</li>
<li>Service Fabric domains in <a href="https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-resource-manager-cluster-description#fault-domains" rel="nofollow noreferrer">this docs</a></li>
</ul>
<p><strong>Then, we can take some points out of it:</strong></p>
<ul>
<li><p>Nodes are not Virtual Machines; nodes run on top of Azure virtual machines. </p>
<p>They often have a 1:1 mapping, but in some cases you can have a 5:1 node/VM <em>mapping; an example is when you install a local development cluster.</em></p></li>
<li><p>Azure Virtual machines has Update Domains and Fault Domains, Service fabric nodes has Upgrade Domains and Fault Domains</p>
<p>As much they look the same, they have their differences:</p>
<p><strong>Fault Domains:</strong> </p>
<ul>
<li>VM Fault Domains are isolated slots for physical deployment, that means: Power Supply, Network, Disks, and so on... They are limited by region.</li>
<li>SF Fault Domains are logical node slots for application deployment, that means, when SF deploy an application to nodes it will distribute on different fault domains, to make them reliable, most of the time FD will be mapped to VM Fault Domains, but in complex scenarios, you can map this to anything, for example, you could map an entire region to single SF FD.</li>
</ul>
<p>.</p>
<p><strong>Update\Upgrade Domains:</strong></p>
<ul>
<li>VM Update Domains are about OS\Hardware patches and updates, separate update domains will be handled in isolation and not updated at the same time, so that, when an OS update is required to bring down your VM, they will update domain by domain. Lower number of update domains means more machines will be put down during updates.</li>
<li>SF Upgrade Domains use a similar approach of VM update domains, but focused on the services and the cluster upgrade itself, bringing down UD per UD at a time and moving to the next one when the previous UD succeed.</li>
<li>In both cases, you adjust the Update\Upgrade domains to control how many instances (%) of your vm\nodes\services can be put down during upgrades. For example, if your service has 100 instances on a cluster with 5 UD, SF will update 20 instances at a time; if you increase the number of UD to 10, the number of instances down will reduce to 10 at a time, but the time to deploy your application will increase in the same proportion.</li>
</ul></li>
</ul>
<p>Based on that, you can see FD & UD as a matrix of reliable deployment slots: the more you have, the more the reliability will increase (with trade-offs, like the update time required). The example below is taken from the SF docs:</p>
<p><a href="https://i.stack.imgur.com/Aa7LK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Aa7LK.png" alt="enter image description here"></a></p>
<p>Service Fabric, out of the box, tries to place your service instances on different FD\UD on a best-effort basis; that means that, if possible, they will be on different FD\UD, otherwise it will find the one with the least number of instances of the service being deployed.</p>
<p><strong>And about Kubernetes:</strong></p>
<p>On Kubernetes, these features are not out of the box, k8s have the concept of <a href="https://kubernetes.io/docs/admin/multiple-zones/" rel="nofollow noreferrer">zones</a>, but according to the docs, they are limited by regions, they cannot span across regions. </p>
<blockquote>
<p>Kubernetes will automatically spread the pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behaviour is extended across zones (to reduce the impact of zone failures). This is achieved via SelectorSpreadPriority.</p>
<p>This is a best-effort placement, and so if the zones in your cluster are heterogeneous (e.g. different numbers of nodes, different types of nodes, or different pod resource requirements), this might prevent equal spreading of your pods across zones. If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.</p>
</blockquote>
<p>It is not the same as FD, but it is a very similar concept.</p>
<p>To achieve a similar result as SF, you will be required to deploy your cluster across zones or map the nodes to VM FD\UD, so that they behave like nodes on SF. Add labels to the nodes to identify these domains. You would also need to create NodeType labels on the nodes across different FD, so that you can use them for deploying your pods on delimited nodes.</p>
<p>For example:</p>
<ul>
<li><strong>Node01</strong>: FD01 : NodeType=<strong>FrontEnd</strong> </li>
<li><strong>Node02</strong>: FD02 : NodeType=<strong>FrontEnd</strong> </li>
<li><strong>Node03</strong>: FD03 : NodeType=<strong>FrontEnd</strong> </li>
<li><strong>Node04</strong>: FD01 : NodeType=BackEnd </li>
<li><strong>Node05</strong>: FD02 : NodeType=BackEnd </li>
</ul>
<p>When you deploy you application, you should make use of the <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">affinity</a> feature to assign PODs to a node, and in this case your service would have:</p>
<ul>
<li><strong>Required Affinity</strong> to a <em>NodeType=FrontEnd</em></li>
<li><strong>Preferred Anti-Affinity</strong> to <em>ContainerName=[itself]</em></li>
</ul>
<p>With these settings, using affinity and anti-affinity, k8s will try to place replicas\instances of your container on separate nodes, and the nodes will already be separated by FD\zone, delimited by the NodeType labels; then k8s will handle the rolling updates as SF does. </p>
<p>Because the anti-affinity rules are preferred, k8s will try to balance across these nodes on a best-effort basis; if no valid nodes are available it will start adding more instances on a node that already contains instances of the same container. </p>
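<p>A rough sketch of how such rules could look on the pod template (the label keys and values here are only the hypothetical FrontEnd example above, not something k8s defines for you):</p>
<pre><code>spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: NodeType
            operator: In
            values: ["FrontEnd"]
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values: ["frontend"]
          topologyKey: kubernetes.io/hostname   # spread across nodes; a FD label could be used instead
</code></pre>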
<p><strong>Conclusion</strong></p>
<p>It is a bit of extra work, but not much different from what is currently used in other solutions.
The major concern here will be configuring the nodes into FD\Zones; after you place your nodes on the right FD, the rest will work smoothly. </p>
<p>On SF you don't have to worry about this when you deploy a cluster on Azure, but if you do it from scratch, it is a lot of work, even more than with k8s. </p>
<p>NOTE: If you use <a href="https://learn.microsoft.com/en-us/azure/aks/" rel="nofollow noreferrer">AKS</a>, it will distribute the nodes across availability sets (set which specifies VM fault domains and update domains). Currently, according to <a href="https://stackoverflow.com/questions/49747221/default-number-of-availability-zones-for-an-aks-kubernetes-cluster">this post</a>, AKS does not provide the zone distribution for you, so you would have to do it from scratch if you need it this level of distribution.</p>
|
<p>I'm running an ASP.NET Core API on Linux, on Kubernetes in the Google Cloud.</p>
<p>This is an API with high load, and on every request it's executing a library doing a long (1-5 seconds), CPU-intensive operation.</p>
<p>What I see is that after deployment the API works properly for a while, but after 10-20 minutes it becomes unresponsive, and even the health check endpoint (which just returns a hardcoded <code>200 OK</code>) stops working and times out. (This makes Kubernetes kill the pods.)</p>
<p>Sometimes I'm also seeing the infamous <code>Heartbeat took longer than "00:00:01"</code> error message in the logs.</p>
<p>Googling these phenomenons points me to "Thread starvation", so that there are too many thread pool threads started, or too many threads are blocking waiting on something, so that there are no threads left in the pool which could pick up ASP.NET Core requests (hence the timeout of even the health check endpoint).</p>
<p>What is the best way to troubleshoot this issue? I started monitoring the numbers returned by <code>ThreadPool.GetMaxThreads</code> and <code>ThreadPool.GetAvailableThreads</code>, but they stayed constant (the completion port is always <code>1000</code> both for max and available, and the worker is always <code>32767</code>).<br>
Is there any other property I should monitor?</p>
| <p>Generally speaking, long-running work is anathema for web applications. You want sub-second response times for a healthy web app. This is particularly true if the work you need to do is synchronous or CPU-bound. Async can at least free up threads during the process, but with CPU-bound work, the thread is hog-tied.</p>
<p>You should off-load whatever you're doing to a different process and then monitor the progress. For an API, the typical approach here is to schedule the work on a different process and then immediately return a <a href="https://httpstatuses.com/202" rel="noreferrer">202 Accepted</a>, with an endpoint in the response body the client can utilize to monitor the progress/get the eventual completed result. You could also implement a webhook, which the client may register to receive notification that the process has completed, without having to constantly check on it.</p>
<p>Your only other option is to throw more resources at the problem. For example, you could stage multiple instances behind a load balancer, divvying requests between each instance to reduce the overall load on each.</p>
<p>It's also entirely possible that there's some inefficiency or issue in your code that could be corrected to either reduce the amount of time the process takes and/or the resources being consumed. As a trivial example, say if you're using something like <code>Task.Run</code>, you could potentially free up a ton of threads by <em>not</em> doing that. <code>Task.Run</code> should pretty much never be used within the context of a web application. However, you have not posted any code, so it's impossible to give you exact guidance there.</p>
|
<p>I'm trying to migrate a voip service using freeswitch on GKE (google cloud managed kubernetes cluster) in order to make the service scalable.</p>
<p>I have managed to migrate freeswitch to docker and get it to run.
I require a high number of ports to be open to allow the necessary traffic.
Kubernetes services do not seem to allow that many ports to be open.</p>
<p>I then tried using the following image and kubernetes configuration and still couldn't contact the freeswitch server.
<a href="https://github.com/sip-li/docker-freeswitch" rel="noreferrer">https://github.com/sip-li/docker-freeswitch</a></p>
<p>I've seen the following issue that seems to say that it isn't possible yet: <a href="https://github.com/kubernetes/kubernetes/issues/23864" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/23864</a></p>
<p>But I am no expert so I might have misunderstood, therefore:</p>
<p>Is it possible to run a freeswitch server on GKE that is accessible through an external IP address? Or is it possible to have auto-scaling and auto-discovery of new pods by a SIP load balancer on GKE?</p>
| <p>This is possible.
There is a lecture on <a href="https://www.youtube.com/watch?v=xgx61YGSS54" rel="noreferrer">YouTube</a>, check it out.</p>
<p>They implement Asterisk in kubernetes, but freeswitch should be similar.
The main problem is RTP; they solved it with an RTP proxy (kamailio RTP proxy).
They also implement a load balancer using kamailio, hosted on a dedicated server (NOT inside kubernetes).</p>
|
<p>I'm running an ASP.NET Core API on Linux, on Kubernetes in the Google Cloud.</p>
<p>This is an API with high load, and on every request it's executing a library doing a long (1-5 seconds), CPU-intensive operation.</p>
<p>What I see is that after deployment the API works properly for a while, but after 10-20 minutes it becomes unresponsive, and even the health check endpoint (which just returns a hardcoded <code>200 OK</code>) stops working and times out. (This makes Kubernetes kill the pods.)</p>
<p>Sometimes I'm also seeing the infamous <code>Heartbeat took longer than "00:00:01"</code> error message in the logs.</p>
<p>Googling these phenomenons points me to "Thread starvation", so that there are too many thread pool threads started, or too many threads are blocking waiting on something, so that there are no threads left in the pool which could pick up ASP.NET Core requests (hence the timeout of even the health check endpoint).</p>
<p>What is the best way to troubleshoot this issue? I started monitoring the numbers returned by <code>ThreadPool.GetMaxThreads</code> and <code>ThreadPool.GetAvailableThreads</code>, but they stayed constant (the completion port is always <code>1000</code> both for max and available, and the worker is always <code>32767</code>).<br>
Is there any other property I should monitor?</p>
| <p>Are you sure your ASP.NET Core web app is running out of threads? It may be it is simply saturating all available pod resources, causing Kubernetes to just kill down the pod itself, and so your web app.</p>
<p>I did experience a very similar scenario with an ASP.NET Core web API running on Linux RedHat within an <a href="https://www.openshift.com/" rel="nofollow noreferrer">OpenShift</a> environment, which also supports the pod concept as in Kubernetes: one call required approximately 1 second to complete and, under large workload, it became first slower and then unresponsive, causing OpenShift to kill down the pod, and so my web app.</p>
<p>It may be your ASP.NET Core web app is not running out of threads, especially considering the high amount of worker threads available in the ThreadPool.
Instead, the number of active threads combined with their CPU needs is probably too large compared to the actual millicores available within the pod where they are running: indeed, after being created, there are so many active threads for the available CPU that most of them end up being queued by the scheduler and waiting for execution, while only a handful actually run.
The scheduler then does its job, making sure CPU is shared fairly among threads, by frequently switching those that would use it.
As for your case, where threads require heavy and long CPU bound operations, over time resources get saturated and the web app becomes unresponsive.</p>
<p>A mitigation step may be providing more capacity to your pods, especially millicores, or increase the number of pods Kubernetes may deploy based on need.
However, in my particular scenario this approach did not help much.
Instead, improving the API itself by reducing the execution of one request from 1s to 300ms sensibly improved the overall web application performance and actually solved the issue.</p>
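<p>In Kubernetes terms, "more capacity" is expressed through resource requests and limits on the container; a sketch with placeholder values:</p>
<pre><code>resources:
  requests:
    cpu: "500m"      # half a core guaranteed for scheduling
    memory: "512Mi"
  limits:
    cpu: "2"         # throttled above two cores
    memory: "1Gi"
</code></pre>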
<p>For example, if your library performs the same calculations in more than one request, you may consider introducing caching on your data structures in order to enhance speed at the slight cost of memory (which worked for me), especially if your operations are mainly CPU bound and if you have such request demands to your web app.
You may also consider enabling <a href="https://learn.microsoft.com/en-us/aspnet/core/performance/caching/response?view=aspnetcore-2.2" rel="nofollow noreferrer">cache response in ASP.NET Core</a> too if that makes sense with the workload and responses of your API.
Using cache, you make sure your web app does not perform the same task twice, freeing up CPU and reduce the risk of queuing threads.</p>
<p>Processing each request faster will make your web app less prone to the risk of filling up the available CPU and therefore reduce the risk of having too many threads queued and waiting for execution.</p>
|
<p>I have a container that will start a jetty server. It takes about 1 minute to start.
The Pod says it is started even though the server is still starting.</p>
<p>Is there a way to wait until for the container to finish starting before the pod says it is ready?</p>
<p>I tried adding a readiness probe but it fails because the server has not started</p>
<pre><code>"readinessProbe": {
"httpGet": {
"path": "/api/health",
"port": 8080,
"scheme": "HTTP"
},
</code></pre>
| <p>Use <code>initialDelaySeconds</code> attribute in readinessProbe as explained at <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes</a></p>
<p>For example, add a 60-second delay before the first probe runs:</p>
<p>initialDelaySeconds: 60</p>
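<p>Applied to the probe from the question, that looks roughly like this (60 seconds is only an example value; tune it to your server's actual startup time):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /api/health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 60
  periodSeconds: 10
</code></pre>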
|
<p>I have a cluster running on GKE. I have my domain names registered in Cloudflare. </p>
<p>When I add/modify dns entries in Cloudflare, the propagation seems to be very fast (using tools like <a href="https://dnschecker.org/" rel="nofollow noreferrer">https://dnschecker.org/</a>). However in my gke cluster, the pods don't resolve the new entries (I have to wait a long long time...). I have tried to add the dns resolver 1.1.1.1 in my kube-dns pods, then I could resolve the new entries in the kube-dns pods (kubedns and dnsmasq) but it still doesn't work for my other pods (the dnspolicy is correctly set to ClusterFirst).</p>
<p>Thanks for your help.</p>
| <p>You can change the DNS upstream nameservers [1] but keep in mind that DNS propagation can take up to 48h. Here is the example from the documentation:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
data:
upstreamNameservers: |
["1.1.1.1"]
</code></pre>
<p>[1] <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#example-upstream-nameserver" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#example-upstream-nameserver</a></p>
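<p>After applying the ConfigMap, the kube-dns pods have to pick up the change; deleting them so they are recreated is a blunt but common way to force that. A sketch (assuming you saved the manifest above as <code>kube-dns-configmap.yaml</code>; the label is the one used for kube-dns in kube-system):</p>
<pre><code>kubectl apply -f kube-dns-configmap.yaml
kubectl delete pods -n kube-system -l k8s-app=kube-dns
</code></pre>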
|
<p>I am trying to configure Basic Authentication on a Nginx example with Traefik as Ingress controller.</p>
<p>I just create the secret <code>"mypasswd"</code> on the Kubernetes secrets.</p>
<p>This is the Ingress I am using:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginxingress
annotations:
ingress.kubernetes.io/auth-type: basic
ingress.kubernetes.io/auth-realm: traefik
ingress.kubernetes.io/auth-secret: mypasswd
spec:
rules:
- host: nginx.mycompany.com
http:
paths:
- path: /
backend:
serviceName: nginxservice
servicePort: 80
</code></pre>
<p>I checked in the Traefik dashboard and it appears there; if I access nginx.mycompany.com I can see the Nginx webpage, but without the basic authentication.</p>
<p>This is my nginx deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
</code></pre>
<p>Nginx service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
name: nginxservice
name: nginxservice
spec:
ports:
# The port that this service should serve on.
- port: 80
# Label keys and values that must match in order to receive traffic for this service.
selector:
app: nginx
type: ClusterIP
</code></pre>
| <p>It is popular to use basic authentication. In reference to <a href="https://web.archive.org/web/20190529083601/https://docs.traefik.io/user-guide/kubernetes/#basic-authentication" rel="nofollow noreferrer">Kubernetes documentation</a>, you should be able to protect access to Traefik using the following steps :</p>
<ol>
<li>Create authentication file using <code>htpasswd</code> tool. You'll be asked for a password for the user:</li>
</ol>
<blockquote>
<p>htpasswd -c ./auth [username]</p>
</blockquote>
<ol start="2">
<li>Now use <code>kubectl</code> to create a secret in the monitoring namespace using the file created by <code>htpasswd</code>.</li>
</ol>
<blockquote>
<p>kubectl create secret generic mysecret --from-file auth
--namespace=monitoring</p>
</blockquote>
<ol start="3">
<li>Enable basic authentication by attaching annotations to Ingress object:</li>
</ol>
<blockquote>
<p>ingress.kubernetes.io/auth-type: "basic"</p>
<p>ingress.kubernetes.io/auth-secret: "mysecret"</p>
</blockquote>
<p>So, full example config of basic authentication can looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: prometheus-dashboard
namespace: monitoring
annotations:
kubernetes.io/ingress.class: traefik
ingress.kubernetes.io/auth-type: "basic"
ingress.kubernetes.io/auth-secret: "mysecret"
spec:
rules:
- host: dashboard.prometheus.example.com
http:
paths:
- backend:
serviceName: prometheus
servicePort: 9090
</code></pre>
<ol start="4">
<li>You can apply the example as following:</li>
</ol>
<blockquote>
<p>kubectl create -f prometheus-ingress.yaml -n monitoring</p>
</blockquote>
<p>This should work without any issues.</p>
|
<p>When I create a GCE ingress, Google Load Balancer does not set the health check from the readiness probe. According to the docs (<a href="https://github.com/kubernetes/ingress-gce#health-checks" rel="noreferrer">Ingress GCE health checks</a>) it should pick it up.</p>
<blockquote>
<p>Expose an arbitrary URL as a readiness probe on the pods backing the Service.</p>
</blockquote>
<p>Any ideas why?</p>
<p>Deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: frontend-prod
labels:
app: frontend-prod
spec:
selector:
matchLabels:
app: frontend-prod
replicas: 3
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: frontend-prod
spec:
imagePullSecrets:
- name: regcred
containers:
- image: app:latest
readinessProbe:
httpGet:
path: /healthcheck
port: 3000
initialDelaySeconds: 15
periodSeconds: 5
name: frontend-prod-app
- env:
- name: PASSWORD_PROTECT
value: "1"
image: nginx:latest
readinessProbe:
httpGet:
path: /health
port: 80
initialDelaySeconds: 5
periodSeconds: 5
name: frontend-prod-nginx
</code></pre>
<hr>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: frontend-prod
labels:
app: frontend-prod
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: frontend-prod
</code></pre>
<hr>
<p>Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend-prod-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: frontend-prod-ip
spec:
tls:
- secretName: testsecret
backend:
serviceName: frontend-prod
servicePort: 80
</code></pre>
| <p>So apparently, you need to include the container port on the PodSpec.
Does not seem to be documented anywhere.</p>
<p>e.g.</p>
<pre><code> spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
</code></pre>
<p>Thanks, Brian! <a href="https://github.com/kubernetes/ingress-gce/issues/241" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce/issues/241</a></p>
|
<p>I have a private Docker repo with bunch of images. I am using Helm to deploy them to a Kubernetes cluster. </p>
<p>Helm values.yaml contains the repository credentials:</p>
<pre><code>image:
repository: <repo>
tag: <version tag>
pullPolicy: IfNotPresent
imageCredentials:
registry: <repo>
username: <username>
password: <pw>
</code></pre>
<p>After doing the helm installation </p>
<blockquote>
<p>helm install myhelmchart --values values.yaml --version </p>
</blockquote>
<p>the pod's status is Init:ErrImagePull.
kubectl describe pods gives this error: </p>
<blockquote>
<p>Failed to pull image "image:tag": rpc error: code = Unknown desc =
Error response from daemon: Get [image]/manifests/[version]:
unauthorized: authentication required</p>
</blockquote>
| <p>It depends on the output of your helm chart. You can use <code>helm template</code> to see the resulting kubernetes resources without actually deploying it. Using an image from a private docker registry comes down to two steps:</p>
<ol>
<li><p>Make sure that you have a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token" rel="noreferrer"><code>secret</code> resource</a> for the private repository. Note that the type here is <code>kubernetes.io/dockerconfigjson</code> or <code>kubernetes.io/dockercfg</code>.</p>
<p>How to create this with templates from helm is described <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/#creating-image-pull-secrets" rel="noreferrer">here</a>.</p></li>
<li>Refer to that secret in the pod that uses the image from that private repository, as shown below:</li>
</ol>
<p>Pod resource/template:</p>
<pre><code>spec:
containers:
- name: some-pod
image: <image>
imagePullSecrets:
- name: <name-of your secret>
</code></pre>
<p>You can first build the resources by hand without helm. This helps to verify that the resources themselves are correct. Then you can adapt the helm templates to output the correct resources given your values.</p>
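<p>If you prefer not to template the secret inside the chart, it can also be created directly with kubectl (a sketch; the placeholders correspond to the values in your values.yaml, and whatever name you give the secret is what <code>imagePullSecrets</code> must reference):</p>
<pre><code>kubectl create secret docker-registry regcred \
  --docker-server=<registry> \
  --docker-username=<username> \
  --docker-password=<pw> \
  --docker-email=<email>   # may be required on older kubectl versions
</code></pre>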
|
<p>I'm trying to deploy an app that contains a health check endpoint. If it fails, the pod should be destroyed, but the K8s keeps the pod with Running status.</p>
<p>Config:</p>
<pre><code> readinessProbe:
httpGet:
path: /healthcheck
port: 3001
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
</code></pre>
<p>Pod:</p>
<pre><code>docs-app-768b47bc69-lrlcf 0/1 Running 0 1m
</code></pre>
<p>So, there is a way to destroy the pod when the readiness probe fails?</p>
| <p>This is not what probes are meant for. A livenessProbe, in case of failure, after retrying the configured number of times (failureThreshold), will "restart" the container. The readinessProbe, instead, is used to indicate that the pod should not serve traffic while it is failing. Probes are not meant to create or delete the pod; if you need that, you have to write a plugin that monitors pod status and deletes the replica or deployment in case of failure. </p>
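<p>If what you actually want is for the container to be restarted when <code>/healthcheck</code> fails, a livenessProbe along these lines would do it (a sketch reusing the values from your readinessProbe):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthcheck
    port: 3001
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 3   # restart the container after 3 consecutive failures
</code></pre>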
|
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: app-2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
spec:
selector:
matchLabels:
client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
template:
metadata:
labels:
client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
spec:
containers:
- name: xxx
image: xxx
env:
- name: GUID
valueFrom:
fieldRef:
fieldPath: spec.template.metadata.labels.client
</code></pre>
<p>I tried passing existing value from the definition to the env variable using different expressions and all of them didnt work:</p>
<pre><code>error converting fieldPath: field label not supported: spec.template.metadata.labels.client
</code></pre>
<p>upd: found what <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/v1/conversion.go" rel="nofollow noreferrer">you can pass in</a>, doesn't help...</p>
<p>I have to essentially repeat myself 4 times; is there a way to have less repetition in the pod definition to ease management? According to <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">this</a> you can pass something in, but it doesn't say what.</p>
<p>ps. Do I really need the same guid in spec.template and spec.selector? It doesn't work without that</p>
| <p>You don’t necessarily need to use guids here, those are just labels and names...
Secondly, they refer to different things (although some of them have to be the same in some cases):</p>
<ul>
<li>metadata name is the name of the Deployment in question. You will use it to reference and manipulate this specific Deployment during its lifecycle.</li>
<li>labels and matchLabels need to be the same if you want them matched together, which in this case you want. Kubernetes is powerful and quite flexible when it comes to labeling, and different assets can have multiple labels on them (say a pod can have labels: app: Postfix, tier: backend, layer: mysql, env: dev). It stands to reason that the label(s) that you want matched and the label(s) to be matched have to be the same in order to be matched.</li>
</ul>
<p>As for automation of labeling in Deployment to avoid repetition, maybe helm Charts or some other ‘automating kubernetes’ approach, depending on your actual need, would be better?</p>
<p>Additional note: for passing a label to an env variable, the following can be used starting from kubernetes 1.9:</p>
<pre><code>...
template:
metadata:
labels:
label_name: label-value
...
env:
- name: ENV_NAME
valueFrom:
fieldRef:
fieldPath: metadata.labels['label_name']
</code></pre>
<p>Below is full mock code to demonstrate this (client 1.9.3, server 1.9.0):</p>
<pre><code># cat d.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: app-guidhere
spec:
selector:
matchLabels:
client: guidhere
template:
metadata:
labels:
client: guidhere
spec:
containers:
- name: some-name
image: nginx
env:
- name: GUIDENV
valueFrom:
fieldRef:
fieldPath: metadata.labels['client']
# after: kubectl create -f d.yaml and connecting to container
# echo $GUIDENV responds with "guidhere"
</code></pre>
<p>And I've just tried this and works correctly (mind k8s versions).</p>
|
<p>As I understand,
<a href="https://github.com/kata-containers/kata-containers" rel="noreferrer">Kata Containers</a></p>
<blockquote>
<p>Kata Container build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers but provide the workload isolation and security advantages of VMs</p>
</blockquote>
<p>On the other hand, <a href="https://github.com/google/gvisor" rel="noreferrer">gvisor</a></p>
<blockquote>
<p>gVisor is a user-space kernel for containers. It limits the host kernel surface accessible to the application while still giving the application access to all the features it expects. </p>
</blockquote>
<p>As I understand it, both of these technologies are trying to add <strong>Linux space</strong> into containers in order to enhance security.</p>
<p>My question is: how do they differ from each other? Is there any overlap in functionality?</p>
| <p>From what I gather:</p>
<p>Kata Containers</p>
<ul>
<li>Full Kernel on top of a lightweight QEMU/KVM VM
<ul>
<li>Kernel has been optimized in newer releases.</li>
</ul>
</li>
<li>Lets system calls go through freely</li>
<li>Performance penalty due to the VM layer. Not clear yet how slower or faster than gVisor</li>
<li>On paper, slower startup time.</li>
<li>Can run any application.</li>
<li>Can run in nested virtualized environments if the hypervisor and hardware support it.</li>
</ul>
<p>gVisor</p>
<ul>
<li>Partial Kernel in userspace.</li>
<li>Intercepts syscalls</li>
<li>Performance penalty at runtime due to syscall filtering. Not clear how slower or faster than Kata yet.</li>
<li>On paper, faster startup time.</li>
<li>Can run only applications that use supported system calls.</li>
<li>On paper, you may not need nested virtualization.</li>
</ul>
|
<p>Behind the enterprise proxy,</p>
<p>what is the proper setting for kubernetes (and docker)?</p>
<ol>
<li>when set the http_proxy, https_proxy, no_proxy</li>
</ol>
<p><code>export http_proxy="http://1.2.3.4:8080"</code></p>
<p>or</p>
<p><code>export http_proxy=http://1.2.3.4:8080</code></p>
<p>or</p>
<p><code>export http_proxy=1.2.3.4:8080</code></p>
<ol start="2">
<li><p>Should I also set capitalized environment variables like HTTP_PROXY?</p></li>
<li><p>When I set no_proxy,</p></li>
</ol>
<p><code>export no_proxy=10.0.0.1,10.0.0.2,10.0.0.3</code></p>
<p>(all the kubernetes master and nodes )</p>
<p>or</p>
<p><code>export no_proxy=10.0.0.*</code></p>
<ol start="4">
<li><p>Should I set up the file below?</p>
<pre><code>$ vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://1.2.3.4:8080" "HTTPS_PROXY=http://1.2.3.4:8080" "NO_PROXY=127.0.0.1,localhost,10.0.0.1,10.0.0.2,10.0.0.3"
</code></pre></li>
</ol>
<p>In this file, do the same rules from the questions above apply?</p>
<ol start="5">
<li>any other considerations?</li>
</ol>
<p>Thanks in advance.</p>
| <p>We always include the scheme in our environment variables.</p>
<p>/etc/profile.d/proxy.sh:</p>
<pre><code>#!/bin/bash
export http_proxy=http://<proxy>:3128
export https_proxy=$http_proxy
export no_proxy=169.254.169.254,localhost,127.0.0.1
export HTTP_PROXY=$http_proxy
export HTTPS_PROXY=$https_proxy
export NO_PROXY=$no_proxy
</code></pre>
<p>/etc/systemd/system/docker.service.d/proxy.conf:</p>
<pre><code>[Service]
Environment="HTTPS_PROXY=https://<proxy>:3128/" "HTTP_PROXY=http://<proxy>:3128/"
</code></pre>
|
<p>I am having trouble trying to deploy my Django Application and PostgreSQL database to Kubernetes Google Cloud cluster that I've already configured.</p>
<p>I have successfully created Docker containers for my Django Application and PostgreSQL database. Here is what my docker-compose.yml file looks like:</p>
<pre><code>version: '3'
services:
db:
image: postgres
environment:
- POSTGRES_USER=stefan_radonjic
- POSTGRES_PASSWORD=cepajecar995
- POSTGRES_DB=agent_technologies_db
web:
build: .
command: python manage.py runserver 0.0.0.0:8000 --settings=agents.config.docker-settings
volumes:
- .:/agent-technologies
ports:
- "8000:8000"
links:
- db
depends_on:
- db
</code></pre>
<p>I have already built the images and tried the <code>sudo docker-compose up</code> command, and the application works perfectly fine. </p>
<p>After successfully dockerizing Django Application and PostgreSQL, I have tried to configure Deployment / Service <code>YML</code> files required by Kubernetes, but I am having trouble doing so. For example:</p>
<p>deployment-definition.yml - File for deploying Django application:</p>
<pre><code> apiVersion: apps/v1
kind: Deployment
metadata:
name: agent-technologies-deployment
labels:
app: agent-technologies
tier: backend
spec:
template:
metadata:
name: agent-technologies-pod
labels:
app: agent-technologies
tier: backend
spec:
containers:
- name:
image:
ports:
- containerPort: 8000
replicas:
selector:
matchLabels:
tier: backend
</code></pre>
<p>Inside the containers list of dictionaries, I know that my container name should be <code>web</code>, but I am not sure where the image of that container is located, so I do not know what I should specify as the container image.</p>
<p>Another problem lies in postgres/deployment-definition.yml :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres-container
template:
metadata:
labels:
app: postgres-container
tier: backend
spec:
containers:
- name: postgres-container
image: postgres:9.6.6
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-credentials
key: user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-credentials
key: password
- name: POSTGRES_DB
value: agent_technologies_db
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-volume-mount
mountPath: /var/lib/postgresql/data
volumes:
- name: postgres-volume-mount
persistentVolumeClaim:
claimName: postgres-pvc
</code></pre>
<p>I do not understand what <code>volumeMounts</code> and <code>volumes</code> are for, and if i even specified them correctly.</p>
<p>Here is my secret-definition.yml file:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: postgres-credentials
type: Opaque
data:
user: stefan_radonjic
passowrd: cepajecar995
</code></pre>
<p>My postgres/service-definition.yml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app: postgres-container
ports:
- protocol: TCP
port: 5432
targetPort: 5432
</code></pre>
<p>My postgres/volume-definition.yml file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
labels:
type: local
spec:
capacity:
storage: 2Gi
storageClassName: standard
accessModes:
- ReadWriteMany
hostPath:
path: /data/postgres-pv
</code></pre>
<p>And my postgres/volume-claim-definitono.yml file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
labels:
type: local
spec:
capacity:
storage: 2Gi
storageClassName: standard
accessModes:
- ReadWriteMany
hostPath:
path: /data/postgres-pv
</code></pre>
<p>Last but not least, my service-definition.yml file - for Django application</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: agent-technologies-service
spec:
selector:
app: agent-technologies
ports:
- protocol: TCP
port: 8000
targetPort: 8000
type: NodePort
</code></pre>
<p>Besides the questions I have already asked above, I also want to ask am I doing this right? If not, what can I do to fix this.</p>
| <blockquote>
<p>Inside container list of dictionaries, I know that my container name should be web, but I am not sure where the image of that container is located so I do not know what should i specify as container image.</p>
</blockquote>
<ul>
<li><p>The name of a container is local to the pod (you can have several containers sharing the same pod). The container name (<code>web</code> in your case) is what you set in your deployment file under the container spec:</p>
<pre><code># setting name of first container within pod to web
spec:
containers:
- name: web
</code></pre></li>
<li>The image for a container has to live in some reachable docker container registry. There are multiple options, from hosting your own docker registry to using publicly available ones. In any case you have to be able to push to that registry in your build phase (be it Amazon ECR, Docker Hub, GitLab, self-hosted...) and to pull from that registry from within kubernetes (security settings, pull secrets etc...). In your docker-compose file you use two containers: for db you use the public postgres image, and for web you use a build command, so the image is stored only in the local docker registry on that host (you have to push it to a reachable repository for k8s to be able to pull it during deployment) - see the sketch right after this list.</li>
</ul>
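<p>As a hedged sketch of what the container spec could look like once the image has been pushed (the registry path and tag below are assumptions - substitute your own):</p>
<pre><code>spec:
  containers:
  - name: web
    # assumption: the locally built image was pushed under this repository/tag
    image: docker.io/your-dockerhub-user/agent-technologies-web:1.0.0
    ports:
    - containerPort: 8000
</code></pre>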
<blockquote>
<p>I do not understand what volumeMounts and volumes are for, and if i even specified them correctly.</p>
</blockquote>
<ul>
<li>In a nutshell, volumes are for attaching storage to containers. Depending on your use case and chosen architecture there are several approaches to volumes, but all in all they boil down to ephemeral, constant and persistent. Ephemeral volumes are lost on container termination or restart, constant ones (such as those from configMaps) are used for passing configuration files to containers, and persistent ones are most interesting for stateful applications (databases among other things). You can specify volumes in several ways; all volumes have to have a name (to be referenced by a volumeMount) and either a direct volume specification or a volume claim specification (the latter is advised for persistent volumes since you can benefit from automatic provisioning that way).</li>
<li>VolumeMounts are for defining where on the container file system a predefined volume should be mounted. They reference the volume to be mounted by name, provide the mount point on the container filesystem through mountPath, and can have subpaths to specific files in some cases.</li>
<li>In your example you tied the volume obtained through a persistent volume claim to the data path of postgres (/var/lib/postgresql/data). Although you reference a storage class that you didn't include, the interesting part is that your persistent volume is defined as a local path on the host. That means that on each node where this database pod is started you will end up pointing /var/lib/postgresql/data of that pod's db container to /data/postgres-pv on that specific node. This opens you up to the following issue: say you have 3 nodes (A, B and C) and your database pod is started on A, using A's /data/postgres-pv folder as its own /var/lib/postgresql/data. Then you restart it, it gets terminated and rescheduled to node B. All of a sudden, it uses B's /data/postgres-pv local folder (empty) and you end up with an empty database. If you use the host's local filesystem for persistence you need to tie such pods to nodes with node selectors (or better yet with affinity) - a minimal sketch follows right after this list. It is advisable for performance reasons to run database volumes on the local filesystem, but those pods lose the ability to be rescheduled easily. Another approach is to have some truly persistent volume that can be mounted independently of the node (Amazon EBS for example), which requires a different PVC (or provisioner).</li>
</ul>
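<p>A minimal sketch of tying the database pod to a specific node could look like this (the node label is an assumption - use whatever label actually identifies the node that holds /data/postgres-pv):</p>
<pre><code>spec:
  template:
    spec:
      # assumption: the node was labelled with `kubectl label node <node-name> disktype=postgres-data`
      nodeSelector:
        disktype: postgres-data
      containers:
      - name: postgres-container
        image: postgres:9.6.6
</code></pre>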
<blockquote>
<p>Besides the questions I have already asked above, I also want to ask am I doing this right? If not, what can I do to fix this.</p>
</blockquote>
<ul>
<li>As stated above, define the storage class and either lock the db pod to a specific node or apply some kind of dynamic provisioning so the volume will follow the pod's placement on nodes.</li>
<li>Opinionated preference: don't place everything in the default namespace; use a separate namespace for handling your k8s manifests. Later on it is much harder to move everything out of default, and a separate namespace makes it harder to accidentally delete the wrong thing...</li>
<li>Also a personal preference: a database is a stateful application, and as such it is advised to use a StatefulSet instead of a Deployment (a rough sketch follows right after this list).</li>
<li>There are tools (kompose, for example) to help you out when you start from docker-compose files and want to convert them to kubernetes manifests - worth checking.</li>
<li>The Kubernetes documentation is a bit outdated but quite good, and you can have a nice read on volumes and volumeClaims there; there is also an active Slack channel.</li>
<li>Oh, and mock the user/pass when posting files here - we now know about cepa...</li>
<li>Lastly, you are doing a great job!</li>
</ul>
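<p>As a rough, hedged sketch (not a drop-in replacement - the storage class and size in volumeClaimTemplates are assumptions), a StatefulSet for the database could start out like this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres-service
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
  template:
    metadata:
      labels:
        app: postgres-container
        tier: backend
    spec:
      containers:
      - name: postgres-container
        image: postgres:9.6.6
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard   # assumption: adjust to your cluster's storage class
      resources:
        requests:
          storage: 2Gi
</code></pre>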
|
<p>Is there any way to add labels in .spec.template after a deployment has been created? So, I know this can be done </p>
<p><code>kubectl label deployment myDeployment myLabelKey=myLabelValue</code></p>
<p>But this would only add the label to <code>.metadata.labels</code>. I would like to add a label to <code>.spec.template.metadata.labels</code>.</p>
| <p>This should be possible using the <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="nofollow noreferrer"><code>kubectl patch</code></a> command. The following patch file would add a new label to the <code>spec.template.metadata.labels</code> property:</p>
<pre><code>spec:
template:
metadata:
labels:
myLabelKey: myLabelValue
</code></pre>
<p>Then apply with:</p>
<pre><code>$ kubectl patch deployment myDeployment --patch "$(cat patchfile.yaml)"
</code></pre>
<p>Alternatively, with inline JSON instead of a separate file:</p>
<pre><code>$ kubectl patch deployment myDeployment --patch '{"spec": {"template": {"metadata": {"labels": {"myLabelKey": "myLabelValue"}}}}}'
</code></pre>
|
<p>I am facing issues with Dynamic Storage provisioner.
I have created storage class (edited)</p>
<pre><code>kubectl get sc
NAME PROVISIONER AGE
slow kubernetes.io/aws-ebs 12m
standard (default) kubernetes.io/aws-ebs 14m
</code></pre>
<p>when i create a pvc referring to one of the storage classes, it fails</p>
<pre><code>kubectl describe pvc mypvc
Name: mypvc
Namespace: default
StorageClass: slow
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: []
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 8s persistentvolume-controller
storageclass.storage.k8s.io "slow" not found
</code></pre>
<p>Any idea why it fails with </p>
<pre><code>storageclass.storage.k8s.io "slow" not found
</code></pre>
<p>Here is the describe of slow sc</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
creationTimestamp: 2018-04-27T22:19:49Z
name: slow
resourceVersion: "665900"
selfLink: /apis/storage.k8s.io/v1/storageclasses/slow
uid: 19ab37b5-4a69-11e8-8f68-021e47d98090
parameters:
fsType: ext4
iopsPerGB: "10"
type: io1
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
</code></pre>
| <p>OK, I found the answer. I didn't enable the storage API as part of the runtime config for the apiserver:</p>
<pre><code>--runtime-config=storage.k8s.io/v1=true
</code></pre>
<p>Adding the above line solved the issue</p>
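<p>Where exactly the flag goes depends on how the apiserver is run. A hedged sketch, assuming a kubeadm-style static pod manifest (the file path and surrounding flags may differ in your setup):</p>
<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (assumption: kubeadm static pod)
spec:
  containers:
  - command:
    - kube-apiserver
    - --runtime-config=storage.k8s.io/v1=true
    # ...keep the rest of your existing flags as they are
</code></pre>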
|
<p>Is there any way to log this action / event?</p>
<p>It seems that if a process is using too many resources and is not the main process, it gets a SIGKILL signal. I'd like to log this action.</p>
| <p>You can use a <code>preStop</code> lifecycle hook to run a script when the container is being terminated.</p>
<p>Use that script to send a notification or to log info to STDOUT.</p>
<p>Later, when the pod gets killed, you can still get the previous container's logs by using the <code>-p</code> flag with <code>kubectl logs</code>.</p>
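<p>A minimal sketch of such a hook (the image and the command are placeholders - adapt them to your container):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: app
    image: your-image:latest        # assumption: your actual image
    lifecycle:
      preStop:
        exec:
          # assumption: writing to the main process' stdout so it shows up in kubectl logs
          command: ["/bin/sh", "-c", "echo 'container is terminating' > /proc/1/fd/1"]
</code></pre>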
|
<p>With <code>helm inspect [CHART]</code> I can view the content of <code>chart.yaml</code> and <code>values.yaml</code> of a chart. Is there a way to also view the template files of a chart? Preferably through a Helm command.</p>
<p>On a sidenote: this seems like a pretty important feature to me. I would always want to know what the chart exactly does before installing it. Or is this not what <code>helm inspect</code> was intended for? Might the recommended way be to simply check GitHub for details how the chart works?</p>
| <p><code>helm install yourchart --dry-run --debug</code></p>
<p>This will print to stdout all the rendered templates in the chart (and won't install the chart)</p>
|
<p>Scouring stack overflow solutions for similar problems did not resolve my issue, so hoping to share what I'm currently experiencing to get help debugging this.</p>
<p>So a small preface; I initially installed minikube/kubectl a couple days back. I went ahead and tried following the minikube tutorial today and am now experiencing issues. I'm following the <a href="https://kubernetes.io/docs/getting-started-guides/minikube/#quickstart" rel="nofollow noreferrer">minikube getting started guide</a>.</p>
<p>I am on MacOS. My versions:</p>
<p>$ kubectl version</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<p>$ minikube version</p>
<pre><code>minikube version: v0.26.1
</code></pre>
<p>$ vboxmanage --version</p>
<pre><code>5.1.20r114629
</code></pre>
<p>The following are a string of commands I've tried to check responses..</p>
<hr>
<p>$ minikube start</p>
<pre><code>Starting VM...
Getting VM IP address...
Moving files into cluster...
E0503 11:08:18.654428 20197 start.go:234] Error updating cluster: downloading binaries: transferring kubeadm file: &{BaseAsset:{data:[] reader:0xc4200861a8 Length:0 AssetName:/Users/philipyoo/.minikube/cache/v1.10.0/kubeadm TargetDir:/usr/bin TargetName:kubeadm Permissions:0641}}: Error running scp command: sudo scp -t /usr/bin output: : wait: remote command exited without exit status or exit signal
</code></pre>
<hr>
<p>$ minikube status</p>
<pre><code>cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.103
</code></pre>
<h3>Edit:</h3>
<p>I don't know what happened, but checking the status again returned "Misconfigured". I ran the recommended command <code>$ minikube update-context</code> and now the <code>$ minikube ip</code> points to "172.17.0.1". Pinging this IP returns request timeouts, 100% packet loss. Double-checked context and I'm still using "minikube" both for context and cluster:</p>
<p>$ kubectl config get-cluster</p>
<p>$ kubectl config get-context</p>
<hr>
<p>$ kubectl get pods</p>
<pre><code>The connection to the server 192.168.99.103:8443 was refused - did you specify the right host or port?
</code></pre>
<hr>
<p>Reading github issues, I ran into this one: <a href="https://github.com/kubernetes/kubernetes/issues/44665#issuecomment-295216655" rel="nofollow noreferrer">kubernetes#44665</a>. So...</p>
<p>$ ls /etc/kubernetes</p>
<pre><code>ls: /etc/kubernetes: No such file or directory
</code></pre>
<hr>
<h3>Only the last few entries</h3>
<p>$ minikube logs</p>
<pre><code>May 03 18:10:48 minikube kubelet[3405]: E0503 18:10:47.933251 3405 event.go:209] Unable to write event: 'Patch https://192.168.99.103:8443/api/v1/namespaces/default/events/minikube.152b315ce3475a80: dial tcp 192.168.99.103:8443: getsockopt: connection refused' (may retry after sleeping)
May 03 18:10:49 minikube kubelet[3405]: E0503 18:10:49.160920 3405 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.99.103:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.103:8443: getsockopt: connection refused
May 03 18:10:51 minikube kubelet[3405]: E0503 18:10:51.670344 3405 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.103:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.103:8443: getsockopt: connection refused
May 03 18:10:53 minikube kubelet[3405]: W0503 18:10:53.017289 3405 status_manager.go:459] Failed to get status for pod "kube-controller-manager-minikube_kube-system(c801aa20d5b60df68810fccc384efdd5)": Get https://192.168.99.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minikube: dial tcp 192.168.99.103:8443: getsockopt: connection refused
May 03 18:10:53 minikube kubelet[3405]: E0503 18:10:52.595134 3405 rkt.go:65] detectRktContainers: listRunningPods failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
</code></pre>
<hr>
<p>I'm not exactly sure how to ping an https url, but if I ping the ip</p>
<p>$ kube ping 192.168.99.103</p>
<pre><code>PING 192.168.99.103 (192.168.99.103): 56 data bytes
64 bytes from 192.168.99.103: icmp_seq=0 ttl=64 time=4.632 ms
64 bytes from 192.168.99.103: icmp_seq=1 ttl=64 time=0.363 ms
64 bytes from 192.168.99.103: icmp_seq=2 ttl=64 time=0.826 ms
^C
--- 192.168.99.103 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.363/1.940/4.632/1.913 ms
</code></pre>
<hr>
<p>Looking at kube config file...
$ cat <code>~/.kube/config</code></p>
<pre><code>apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://localhost:6443
name: docker-for-desktop-cluster
- cluster:
certificate-authority: /Users/philipyoo/.minikube/ca.crt
server: https://192.168.99.103:8443
name: minikube
contexts:
- context:
cluster: docker-for-desktop-cluster
user: docker-for-desktop
name: docker-for-desktop
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: docker-for-desktop
user:
client-certificate-data: <removed>
client-key-data: <removed>
- name: minikube
user:
client-certificate: /Users/philipyoo/.minikube/client.crt
client-key: /Users/philipyoo/.minikube/client.key
</code></pre>
<p>And to make sure my key/crts are there:</p>
<p>$ ls ~/.minikube</p>
<pre><code>addons/ ca.pem* client.key machines/ proxy-client.key
apiserver.crt cache/ config/ profiles/
apiserver.key cert.pem* files/ proxy-client-ca.crt
ca.crt certs/ key.pem* proxy-client-ca.key
ca.key client.crt logs/ proxy-client.crt
</code></pre>
<hr>
<p>Any help in debugging is super appreciated!</p>
| <p>For posterity, the solution to this problem was to delete the</p>
<pre><code>.minikube
</code></pre>
<p>directory in the user's home directory, and then try again. This often fixes strange minikube problems.</p>
|
<p>We are creating a POC with CLIPPER. The deployment works fine with Docker in local environment, but we need to use kubernetes for the POC. We tried to deploy it in two ways:</p>
<ol>
<li>Minikube deployment</li>
<li>GKE k8 cluster deployment</li>
</ol>
<p>In both above cases: the <code>mgmt-frontend</code>, <code>redis</code>, and <code>query-frontend</code> images were all deployed successfully on the cluster:</p>
<p><a href="https://i.stack.imgur.com/Wwfph.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Wwfph.png" alt="enter image description here"></a></p>
<p>We are getting an error when we are trying to start clipper:</p>
<pre><code>Error connecting to Kubernetes cluster. No external node addresses found
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-4de1563b3f9a> in <module>()
9 clipper_conn = ClipperConnection(KubernetesContainerManager('35.188.55.76'))
10
---> 11 clipper_conn.start_clipper()
/usr/local/lib/python2.7/dist-packages/clipper_admin/clipper_admin.pyc in start_clipper(self, query_frontend_image, mgmt_frontend_image, cache_size)
112 self.connected = True
113 except ClipperException as e:
--> 114 logger.warning("Error starting Clipper: {}".format(e.msg))
115 raise e
116
AttributeError: 'ClipperException' object has no attribute 'msg'
</code></pre>
<p>We tried different options but it didn't help. Can anyone please help us on this?</p>
| <p>I refreshed the gcloud credentials after updating the CLI to the latest version.</p>
|
<p>Our cloud application consists of 3 tightly coupled Docker containers, Nginx, Web and Mongo. Currently we run these containers on a single machine. However as our users are increasing we are looking for a solution to scale. Using Kubernetes we would form a multi container pod. If we are to replicate we need to replicate all 3 containers as a unit. Our cloud application is consumed by mobile app users. Our app can only handle approx 30000 users per Worker node and we intend to place a single pod on a single worker node. Once a mobile device is connected to worker node it must continue to only use that machine ( unique IP address )</p>
<p>We plan on using Kubernetes to manage the containers. Load balancing doesn't work for our use case as a mobile device needs to be tied to a single machine once assigned and each Pod works independently with its own persistent volume. However we need a way of spinning up new Pods on worker nodes if the number of users goes over 30000 and so on. </p>
<p>The idea is we have some sort of custom scheduler which assigns a mobile device a Worker Node ( domain/ IPaddress) depending on the number of users on that node.</p>
<p>Is Kubernetes a good fit for this design and how could we implement a custom pod scale algorithm. </p>
<p>Thanks</p>
| <p>Piggy-Backing on the answer of Jonah Benton:</p>
<p>While this is technically possible - your problem is not with Kubernetes, it's with your application! Let me point out the problem:</p>
<blockquote>
<p>Our cloud application consists of 3 tightly coupled Docker containers, Nginx, Web, and Mongo.</p>
</blockquote>
<p>Here is your first problem: if you can only deploy these three containers together and not independently, you cannot scale one without the others!
While MongoDB can be scaled to insane loads, if it's bundled with your web server and web application it won't be able to...</p>
<p>So the first step for you is to break up these three components so they can be managed independently of each other. Next:</p>
<blockquote>
<p>Currently we run these containers on a single machine.</p>
</blockquote>
<p>While not strictly a problem - I have serious doubts about what it would mean to scale your application and about the challenges that come with scalability!</p>
<blockquote>
<p>Once a mobile device is connected to worker node it must continue to only use that machine ( unique IP address )</p>
</blockquote>
<p>Now, this IS a problem. You're looking to run an application on Kubernetes but I do not think you understand the consequences of doing that: Kubernetes orchestrates your resources. This means it will move pods (by killing and recreating them) between nodes (and, if necessary, back to the same node). It does this fully autonomously (which is awesome and gives you a good night's sleep). If you're relying on clients sticking to a single node's IP, you're going to get up in the middle of the night because Kubernetes tried to correct for a node failure and moved your pod, which is now gone, and your users can't connect anymore. You need to leverage the load-balancing features (services) in Kubernetes. Only they are able to handle the dynamic changes that happen in Kubernetes clusters.</p>
<blockquote>
<p>Using Kubernetes we would form a multi container pod.</p>
</blockquote>
<p>And we have another winner - No! You're trying to treat Kubernetes as if it were your on-premise infrastructure! If you keep doing so you're going to fail and curse Kubernetes in the process!</p>
<p>Now that I've told you some of the things you're thinking about wrongly - what kind of person would I be if I did not offer some advice on how to make this work:</p>
<p>In Kubernetes your three applications should not run in one pod! They should run in separate pods:</p>
<ul>
<li>your webservers work should be done by <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> and since you're already familiar with nginx, <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">this is probably the ingress you are looking for!</a></li>
<li>Your web application should be a simple <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> and be exposed to ingress through a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a></li>
<li>your database should be a separate deployment which you can either do manually <a href="https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets" rel="nofollow noreferrer">through a statefullset</a> or <a href="https://www.kubestack.com/catalog/mongodb" rel="nofollow noreferrer">(more advanced) through an operator</a> and also exposed to the web application trough a Service</li>
</ul>
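<p>To make the Deployment-plus-Service pairing concrete, here is a hedged sketch (the name, labels and ports are assumptions - they must match your actual web Deployment):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # must match the labels on the Deployment's pod template
  ports:
  - port: 80
    targetPort: 8080  # assumption: the port your web container listens on
</code></pre>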
<p>Feel free to ask if you have any more questions!</p>
|
<p>I'm using Docker For Desktop with the built-in Kubernetes cluster. I have installed a <code>Pod</code> that serves resources over HTTP, but I'm not sure how to access it using my browser. I have the following <code>ServiceSpec</code> that correctly routes traffic to the <code>Pod</code>:</p>
<pre><code>spec:
clusterIP: 10.99.132.220
externalTrafficPolicy: Cluster
ports:
- name: myport
nodePort: 31534
port: 8037
protocol: TCP
targetPort: 80
type: LoadBalancer
</code></pre>
<p>And I can see it set up when I query it with <code>kubectl</code>:</p>
<pre><code>$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myservice LoadBalancer 10.99.132.220 localhost 8037:31534/TCP 1h
</code></pre>
<p>How do I reach this service using my browser?</p>
| <p>That service will be available in your browser at <a href="http://localhost:8037" rel="noreferrer">http://localhost:8037</a></p>
<p>Note that the port <code>8037</code> corresponds to the <code>port</code> property on the <code>ServiceSpec</code> object.</p>
<p>If you are unable to reach the service at that URL, then it could be one of several things, including but not limited to:</p>
<ul>
<li>There is another <code>Service</code> in your cluster that has claimed that port. Either delete the other <code>Service</code>, or change the <code>port</code> property to an unclaimed port.</li>
<li>Your <code>Pod</code> is not running and ready. Check <code>kubectl get pods</code>.</li>
</ul>
|
<p>I'm in the process of converting a stack to k8s. The database requires persistent storage.</p>
<p>I have used <code>kubectl create -f pv.yaml</code></p>
<p><strong>pv.yaml <em>(with edits based on @whites11's answer)</em>:</strong></p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/nfs"
claimRef:
kind: PersistentVolumeClaim
namespace: default
name: mongo-persisted-storage
</code></pre>
<p>I then create an <a href="https://github.com/cvallance/mongo-k8s-sidecar/blob/master/example/StatefulSet/mongo-statefulset.yaml" rel="nofollow noreferrer">example mongo replica set</a>.</p>
<p>When I look at my k8s dashboard I see the error:</p>
<blockquote>
<p>PersistentVolumeClaim is not bound: "mongo-persistent-storage-mongo-0"
(repeated 2 times)</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/5cs3a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5cs3a.png" alt="enter image description here"></a></p>
<p>In the persistent volume tab I see the volume which looks ok:</p>
<p><a href="https://i.stack.imgur.com/i8KU6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i8KU6.png" alt="enter image description here"></a></p>
<p>I'm having trouble figuring out the next step to make the volume claim happen successfully.</p>
<h3>Edit #2</h3>
<p>I went into the PVC page on the GUI and added a volume to the claim manually <em>(based on feedback from @whites11)</em>. I can see that the PVC has been updated with the volume but it is still pending.</p>
<p><a href="https://i.stack.imgur.com/MXnGL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MXnGL.png" alt="enter image description here"></a></p>
<h3>Edit #3</h3>
<p>Realizing that since making the change suggested by @whites11, the original error message in the pod has changed. It is now "persistentvolume "pvvolume" not found (repeated 2 times)", I think I just need to figure out where I wrote pvvolume, instead of pv-volume. (or it could be the <code>-</code> was auto-parsed out somewhere?</p>
<p><a href="https://i.stack.imgur.com/alBWR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/alBWR.png" alt="enter image description here"></a></p>
| <p>You need to manually bind your PV to your PVC, by adding the appropriate <code>claimRef</code> section to the PV spec.</p>
<p>In practice, edit your PV with the method you prefer, and add a section similar to this:</p>
<pre><code>claimRef:
  name: mongo-persisted-storage
namespace: <your PVC namespace>
</code></pre>
<p>Then, you need to edit your PVC to bind the correct volume, by adding the following in its <code>spec</code> section:</p>
<pre><code>volumeName: "<your volume name>"
</code></pre>
<p>Here is an explanation of how this process works: <a href="https://docs.openshift.org/latest/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding" rel="nofollow noreferrer">https://docs.openshift.org/latest/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding</a></p>
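<p>Putting both edits together, a hedged sketch of how the claim side could end up looking (the claim name, size and storage class are taken from the question and are assumptions - they have to match your actual objects):</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-persisted-storage
  namespace: default
spec:
  storageClassName: manual      # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: "pv-volume"       # must match the PV's metadata.name exactly
</code></pre>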
|
<p>I have an application running over a POD in Kubernetes.
I would like to store some output file logs on a persistent storage volume.</p>
<p>In order to do that, I created a volume over the NFS and bound it to the POD through the related volume claim.
When I try to write to or access the shared folder I get a "permission denied" message, since the NFS is apparently read-only.</p>
<p>The following is the json file I used to create the volume:</p>
<pre><code>{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "task-pv-test"
},
"spec": {
"capacity": {
"storage": "10Gi"
},
"nfs": {
"server": <IPAddress>,
"path": "/export"
},
"accessModes": [
"ReadWriteMany"
],
"persistentVolumeReclaimPolicy": "Delete",
"storageClassName": "standard"
}
}
</code></pre>
<p>The following is the POD configuration file</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: volume-test
spec:
volumes:
- name: task-pv-test-storage
persistentVolumeClaim:
claimName: task-pv-test-claim
containers:
- name: volume-test
image: <ImageName>
volumeMounts:
- mountPath: /home
name: task-pv-test-storage
readOnly: false
</code></pre>
<p>Is there a way to change permissions?</p>
<hr>
<p>UPDATE</p>
<p>Here are the PVC and NFS config:</p>
<p>PVC:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: task-pv-test-claim
spec:
storageClassName: standard
accessModes:
- ReadWriteMany
resources:
requests:
storage: 3Gi
</code></pre>
<p>NFS CONFIG</p>
<pre><code>{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "nfs-client-provisioner-557b575fbc-hkzfp",
"generateName": "nfs-client-provisioner-557b575fbc-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/nfs-client-provisioner-557b575fbc-hkzfp",
"uid": "918b1220-423a-11e8-8c62-8aaf7effe4a0",
"resourceVersion": "27228",
"creationTimestamp": "2018-04-17T12:26:35Z",
"labels": {
"app": "nfs-client-provisioner",
"pod-template-hash": "1136131967"
},
"ownerReferences": [
{
"apiVersion": "extensions/v1beta1",
"kind": "ReplicaSet",
"name": "nfs-client-provisioner-557b575fbc",
"uid": "3239b14a-4222-11e8-8c62-8aaf7effe4a0",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "nfs-client-root",
"nfs": {
"server": <IPAddress>,
"path": "/Kubernetes"
}
},
{
"name": "nfs-client-provisioner-token-fdd2c",
"secret": {
"secretName": "nfs-client-provisioner-token-fdd2c",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "nfs-client-provisioner",
"image": "quay.io/external_storage/nfs-client-provisioner:latest",
"env": [
{
"name": "PROVISIONER_NAME",
"value": "<IPAddress>/Kubernetes"
},
{
"name": "NFS_SERVER",
"value": <IPAddress>
},
{
"name": "NFS_PATH",
"value": "/Kubernetes"
}
],
"resources": {},
"volumeMounts": [
{
"name": "nfs-client-root",
"mountPath": "/persistentvolumes"
},
{
"name": "nfs-client-provisioner-token-fdd2c",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "nfs-client-provisioner",
"serviceAccount": "nfs-client-provisioner",
"nodeName": "det-vkube-s02",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
]
},
"status": {
"phase": "Running",
"hostIP": <IPAddress>,
"podIP": "<IPAddress>,
"startTime": "2018-04-17T12:26:35Z",
"qosClass": "BestEffort"
}
}
</code></pre>
<p>I have just removed some status information from the nfs config to make it shorter</p>
| <p>If you set the proper <code>securityContext</code> for the pod configuration you can make sure the volume is mounted with proper permissions.</p>
<p>Example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: demo
spec:
securityContext:
fsGroup: 2000
volumes:
- name: task-pv-test-storage
persistentVolumeClaim:
claimName: task-pv-test-claim
containers:
- name: demo
image: example-image
volumeMounts:
- name: task-pv-test-storage
mountPath: /data/demo
</code></pre>
<p>In the above example the storage will be mounted at <code>/data/demo</code> with 2000 group id, which is set by <code>fsGroup</code>. By setting the fsGroup all processes of the container will also be part of the supplementary group ID 2000, thus you should have access to the mounted files.</p>
<p>You can read more about pod security context here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
|
<p>I started a pod in kubernetes cluster which can call kubernetes api via go-sdk (like in this example: <a href="https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration</a>). I want to listen some external events in this pod (e.g. GitHub web-hooks), fetch <code>yaml</code> configuration files from repository and apply them to this cluster.</p>
<p>Is it possible to call <code>kubectl apply -f <config-file></code> via kubernetes API (or better via golang SDK)?</p>
| <p>As yaml directly: no, not that I'm aware of. But if you increase the <code>kubectl</code> verbosity (<code>--v=100</code> or such), you'll see that the first thing <code>kubectl</code> does to your yaml file is convert it to json, and then <code>POST</code> <em>that</em> content to the API. So the spirit of the answer to your question is "yes."</p>
<p>This <a href="https://github.com/box/kube-applier#readme" rel="nofollow noreferrer">box/kube-applier</a> project may interest you. While it does not appear to be webhook aware, I am sure they would welcome a PR teaching it to do that. Using their existing project also means you benefit from all the bugs they have already squashed, as well as their nifty prometheus metrics integration.</p>
|
<p>I am trying to implement CI/CD pipeline using Kubernetes and Jenkins. Now I am exploring about the CI part using Jenkins file, SVN repository with Docker Hub. After pushing the Docker image into registry docker hub , I need to deploy this into a Kubernetes cluster having 3 cluster master and 15 worker machine/node.</p>
<p>When I am reading deployment into Kubernetes cluster I have several doubts:</p>
<p>Every deployment definition within Jenkins is written using shell scripts. If I need to create Deployments and Services for those deployments, how can I define them in Jenkins? Where can I create the YAML/YML files for the ReplicaSet, Deployment and Services? Do I need to use shell scripting for this, or is there another method?</p>
| <p>You can use <a href="https://helm.sh/" rel="nofollow noreferrer">Kubernetes Helm</a> to define what you want to spin up, in parameterisable modules called Helm charts. Many charts are available for common software like nginx and postgresql. This allows an "infrastructure as code" way of working - declaratively specifying what you want, instead of using a script to manually get the cluster into the desired state from whatever state it is currently in, just rely on Helm to do that for you! This is a good use case for Helm.</p>
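<p>To make that concrete, a hedged sketch of what a parameterised chart template and its values could look like (the value names, image and port below are assumptions - adapt them to your application):</p>
<pre><code># templates/deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
      - name: web
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.containerPort }}

# values.yaml
replicaCount: 3
image:
  repository: registry.example.com/my-app
  tag: "1.0.0"
containerPort: 8080
</code></pre>
<p>Jenkins can then roll out a new build with something like <code>helm upgrade --install my-app ./chart --set image.tag=$BUILD_NUMBER</code>, keeping the manifests themselves out of the shell scripts.</p>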
|
<p>I am confused when it comes to deploying PostgreSQL database of my Django application with Kubernetes. Here is how I have constructed my deployment-definition.yml file:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres-container
template:
metadata:
labels:
app: postgres-container
tier: backend
spec:
containers:
- name: postgres-container
image: postgres:9.6.6
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-credentials
key: user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-credentials
key: password
- name: POSTGRES_DB
value: agent_technologies_db
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-volume-mount
mountPath: /var/lib/postgresql/data
volumes:
- name: postgres-volume-mount
persistentVolumeClaim:
claimName: postgres-pvc
- name: postgres-credentials
secret:
secretName: postgres-credentials
</code></pre>
<p>What I don't understand is this. If I specify (like I did) an existing PostgreSQL image inside the spec of the Kubernetes Deployment object, how do I actually run my application? What do I need to specify as HOST inside my settings.py file?</p>
<p>Here is what my settings.py file looks like for now:</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'agent_technologies_db',
'USER': 'stefan_radonjic',
'PASSWORD': 'cepajecar995',
'HOST': 'localhost',
'PORT': '',
}
}
</code></pre>
<p>It is constructed this way because I am still designing the application and I do not wanna deploy it to Kubernetes cluster just yet. But when I do, what am I suppose to specify for : <code>HOST</code> and <code>PORT</code> ? And also, is this the right way to deploy PostgreSQL to Kubernetes Cluster.</p>
<p>Thank you in advance! </p>
<p>*** QUESTION UPDATE ****</p>
<p>As suggested, I have created service.yml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app: postgres-container
tier: backend
ports:
- protocol: TCP
port: 5432
targetPort: 5432
type: ClusterIP
</code></pre>
<p>And I have updated my settings.py file:</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'agent_technologies_db',
'USER': 'stefan_radonjic',
'PASSWORD': 'cepajecar995',
'HOST': 'postgres-service',
'PORT': 5432,
}
}
</code></pre>
<p>But I am getting the following error: </p>
<p><a href="https://i.stack.imgur.com/tiSVX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tiSVX.png" alt="enter image description here"></a></p>
| <p>In order to allow communication to your PostgreSQL deployment in Kubernetes, you need to set up a <code>Service</code> object. If your Django app will live in the same cluster as your PostgreSQL deployment, then you will want a <code>ClusterIP</code> type service; otherwise, if your Django app lives outside of your cluster, you will want a <code>LoadBalancer</code> or <code>NodePort</code> type service.</p>
<p>There are two ways to create a service:</p>
<p><strong>YAML</strong></p>
<p>The first is through a yaml file, which in your case would look like this:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: postgres
spec:
selector:
app: postgres-container
tier: backend
ports:
- name: postgres
protocol: TCP
port: 5432
targetPort: 5432
</code></pre>
<p>The <code>.spec.selector</code> field defines the target of the <code>Service</code>. This service will target pods with labels <code>app=postgres-container</code> and <code>tier=backend</code>. It exposes port 5432 of the container. In your Django configuration, you would put the name of the service as the <code>HOST</code>: in this case, the name is simply <code>postgres</code>. Kubernetes will resolve the service name to the matching pod IP and route traffic to the pod. The port will be the port of the service: 5432.</p>
<p><strong>kubectl expose</strong></p>
<p>The other way of creating a service is through the <code>kubectl expose</code> command:</p>
<pre><code>kubectl expose deployment/postgres
</code></pre>
<p>This command will default to a <code>ClusterIP</code> type service and expose the ports defined in the <code>.spec.containers.ports</code> fields in the Deployment yaml.</p>
<p>More info:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
<blockquote>
<p>And also, is this the right way to deploy PostgreSQL to Kubernetes Cluster.</p>
</blockquote>
<p>This depends on a few variables. Do you plan on deploying a Postgres cluster? If so, you may want to look into using a <code>StatefulSet</code>: </p>
<blockquote>
<p>StatefulSets are valuable for applications that require one or more of
the following.</p>
<ul>
<li>Stable, unique network identifiers. </li>
<li>Stable, persistent storage.</li>
<li>Ordered, graceful deployment and scaling. </li>
<li>Ordered, graceful deletion and termination. </li>
<li>Ordered, automated rolling updates.</li>
</ul>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#using-statefulsets" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#using-statefulsets</a></p>
<p>Do you have someone knowledgeable about Postgres who's going to configure and maintain it? If not, I would also recommend that you look into deploying a managed Postgres server outside of the cluster (e.g. RDS). You can still deploy your Django app within the cluster and connect to your DB via an <code>ExternalName</code> service, as sketched below.</p>
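<p>A hedged sketch of such an <code>ExternalName</code> service (the DNS name is a placeholder for your managed database endpoint); your Django settings would keep using the service name as <code>HOST</code>:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  type: ExternalName
  # assumption: replace with your actual RDS/Cloud SQL hostname
  externalName: mydb.example.us-east-1.rds.amazonaws.com
</code></pre>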
<p>The reason I recommend this is that managing stateful applications in a Kubernetes cluster can be challenging. I'm not familiar with Postgres, but here's a cautionary tale of running Postgres on Kubernetes: <a href="https://gravitational.com/blog/running-postgresql-on-kubernetes/" rel="noreferrer">https://gravitational.com/blog/running-postgresql-on-kubernetes/</a></p>
<p>In addition to that, here are a few experiences I've run into that has influenced my decision to remove stateful workloads from my cluster:</p>
<h2>Stuck volumes</h2>
<p>If you're using AWS EBS volumes, volumes can get "stuck" on a node and fail to detach and reattach to a new node if your DB pod gets rescheduled to a new node. </p>
<h2>Migrating to a new cluster</h2>
<p>If you ever need to move your workloads to a new cluster, you will have to deal with the added challenge of moving your state to the new cluster as well without suffering any data loss. If you move your stateful apps outside of the cluster then you can treat the whole cluster as cattle, and then tearing it down and migrating to a new cluster becomes a whole lot easier. </p>
<p>More info:</p>
<p>K8s blog post on deploying Postgres with StatefulSets: <a href="https://kubernetes.io/blog/2017/02/postgresql-clusters-kubernetes-statefulsets/" rel="noreferrer">https://kubernetes.io/blog/2017/02/postgresql-clusters-kubernetes-statefulsets/</a></p>
|
<h1>To give you some context:</h1>
<p>I have two server environments running the same app. The first, which I intend to abandon, is a Standard Google App Engine environment that has many limitations. The second one is a Google Kubernetes cluster running my Python app with Gunicorn.</p>
<h2>Concurrency</h2>
<p>On the first server, I can send multiple requests to the app and it will answer many of them simultaneously. I ran two batches of simultaneous requests against the app on both environments. On Google App Engine the first and second batches were answered simultaneously and the first didn't block the second.</p>
<p>On Kubernetes, the server only responds to 6 simultaneous requests, and the first batch blocks the second. I've read some posts on how to achieve Gunicorn concurrency with gevent or multiple threads, and all of them say I need CPU cores, but the problem is that no matter how much CPU I put into it, the limitation continues. I've tried Google nodes from 1 vCPU to 8 vCPU and it doesn't change much.</p>
<p>Can you guys give me any ideas on what I'm possibly missing? Maybe Google Cluster nodes limitation?</p>
<h3>Kubernetes response waterfall</h3>
<p>As you can notice, the second batch only started to be responded after the first one started to finish.</p>
<p><a href="https://i.stack.imgur.com/xgHx1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xgHx1.png" alt="enter image description here"></a></p>
<h3>App Engine response waterfall</h3>
<p><a href="https://i.stack.imgur.com/jHbYv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/jHbYv.png" alt="enter image description here"></a></p>
| <p>What you describe appears to be an indicator that you are running the Gunicorn server with the <a href="http://docs.gunicorn.org/en/stable/settings.html#worker-class" rel="noreferrer">sync worker</a> class serving an I/O bound application. Can you share your Gunicorn configuration?</p>
<p>Is it possible that Google's platform has some kind of autoscaling feature (I'm not really familiar with their service) that's being triggered while your Kubernetes configuration does not?</p>
<p>Generically speaking, increasing the number of cores for a single instance will only help if you also increase the number of workers spawned to attend to incoming requests. Please see <a href="http://docs.gunicorn.org/en/stable/design.html" rel="noreferrer">Gunicorn's design documentation</a> with a special emphasis on the worker types section (and why <code>sync</code> workers are suboptimal for I/O bound applications) - it's a good read and provides a more detailed explanation of this problem.</p>
<p>Just for fun, here's a small exercise to compare the two approaches:</p>
<pre><code>import time
def app(env, start_response):
time.sleep(1) # takes 1 second to process the request
start_response('200 OK', [('Content-Type', 'text/plain')])
return [b'Hello World']
</code></pre>
<hr>
<p>Running Gunicorn with 4 sync workers: <code>gunicorn --bind '127.0.0.1:9001' --workers 4 --worker-class sync --chdir app app:app</code></p>
<p>Let's trigger 8 requests at the same time: <code>ab -n 8 -c 8 "http://localhost:9001/"</code></p>
<pre><code>This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient).....done
Server Software: gunicorn/19.8.1
Server Hostname: localhost
Server Port: 9001
Document Path: /
Document Length: 11 bytes
Concurrency Level: 8
Time taken for tests: 2.007 seconds
Complete requests: 8
Failed requests: 0
Total transferred: 1096 bytes
HTML transferred: 88 bytes
Requests per second: 3.99 [#/sec] (mean)
Time per request: 2006.938 [ms] (mean)
Time per request: 250.867 [ms] (mean, across all concurrent requests)
Transfer rate: 0.53 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.2 1 1
Processing: 1003 1504 535.7 2005 2005
Waiting: 1002 1504 535.8 2005 2005
Total: 1003 1505 535.8 2006 2006
Percentage of the requests served within a certain time (ms)
50% 2006
66% 2006
75% 2006
80% 2006
90% 2006
95% 2006
98% 2006
99% 2006
100% 2006 (longest request)
</code></pre>
<p>Around 2 seconds to complete the test. That's the behavior you got in your tests - the first 4 requests kept your workers busy, and the second batch was queued until the first batch was processed.</p>
<hr>
<p>Same test, but let's tell Gunicorn to use an async worker: <code>gunicorn --bind '127.0.0.1:9001' --workers 4 --worker-class gevent --chdir app app:app</code></p>
<p>Same test as above: <code>ab -n 8 -c 8 "http://localhost:9001/"</code></p>
<pre><code>This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient).....done
Server Software: gunicorn/19.8.1
Server Hostname: localhost
Server Port: 9001
Document Path: /
Document Length: 11 bytes
Concurrency Level: 8
Time taken for tests: 1.005 seconds
Complete requests: 8
Failed requests: 0
Total transferred: 1096 bytes
HTML transferred: 88 bytes
Requests per second: 7.96 [#/sec] (mean)
Time per request: 1005.463 [ms] (mean)
Time per request: 125.683 [ms] (mean, across all concurrent requests)
Transfer rate: 1.06 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.4 1 2
Processing: 1002 1003 0.6 1003 1004
Waiting: 1001 1003 0.9 1003 1004
Total: 1002 1004 0.9 1004 1005
Percentage of the requests served within a certain time (ms)
50% 1004
66% 1005
75% 1005
80% 1005
90% 1005
95% 1005
98% 1005
99% 1005
100% 1005 (longest request)
</code></pre>
<p>We actually doubled the application's throughput here - it only took ~1s to reply to all the requests.</p>
<p>To understand what happened Gevent has a <a href="https://sdiehl.github.io/gevent-tutorial/" rel="noreferrer">great tutorial</a> about its architecture and <a href="https://www.chiark.greenend.org.uk/~sgtatham/coroutines.html" rel="noreferrer">this article</a> has a more in-depth explanation about co-routines.</p>
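<p>Since your app runs on Kubernetes, here is a hedged sketch of how the same flags could be wired into the container spec (image name, module path and worker count are assumptions - tune them to your workload, and note that the gevent worker class requires the gevent package in the image):</p>
<pre><code>spec:
  containers:
  - name: web
    image: your-registry/your-app:latest   # assumption
    command: ["gunicorn"]
    args: ["--bind", "0.0.0.0:8000", "--workers", "4", "--worker-class", "gevent", "yourproject.wsgi:application"]
    ports:
    - containerPort: 8000
</code></pre>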
<hr>
<p>I apologize in advance if I was way off on the actual cause of your problem (I do believe that some additional information is lacking from your initial comment for anyone to have a conclusive answer). If not to you, I hope this'll be helpful to someone else. :)</p>
<p>Also do notice that I've oversimplified things a lot (my example was a simple proof of concept), tweaking an HTTP server configuration is mostly a trial and error exercise - it's all dependent on the type of workload the application has and the hardware it sits on.</p>
|
<p>I have a kubernetes cluster running on 2 machines (master-minion node and minion node). I want to add a new minion node without disrupting the current set up, is there a way to do it?</p>
<p>I have seen that when I try to add the new node, the services on the other nodes stops it, due to which I have to stop the services before deploying the new node to the existing cluster.</p>
| <p>To do this in the latest version (tested on 1.10.0) you can issue the following command on the master node:</p>
<p><code>kubeadm token create --print-join-command</code></p>
<p>It will then print out a new join command (like the one you got after <code>kubeadm init</code>):</p>
<p><code>kubeadm join 192.168.1.101:6443 --token tokentoken.lalalalaqyd3kavez --discovery-token-ca-cert-hash sha256:complexshaoverhere</code></p>
|
<p>I've been banging my head against this wall on and off for a while. There is a ton of information on Kubernetes on the web, but it's all assuming so much knowledge that n00bs like me don't really have much to go on.</p>
<p>So, can anyone share a <strong>simple</strong> example of the following (as a yaml file)? All I want is</p>
<ul>
<li>two pods</li>
<li>let's say one pod has a backend (I don't know - node.js), and one has a frontend (say React). </li>
<li>A way to network between them.</li>
</ul>
<p>And then an example of calling an api call from the back to the front.</p>
<p>I start looking into this sort of thing, and all of a sudden I hit this page - <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this" rel="noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this</a>. This is <strong>super unhelpful</strong>. I don't want or need advanced network policies, nor do I have the time to go through several different service layers that are mapped on top of kubernetes. I just want to figure out a trivial example of a network request.</p>
<p>Hopefully if this example exists on stackoverflow it will serve other people as well.</p>
<p>Any help would be appreciated. Thanks.</p>
<p><strong>EDIT;</strong> it looks like the easiest example may be using the Ingress controller. </p>
<p><strong>EDIT EDIT;</strong> </p>
<p>I'm working to try and get a minimal example deployed - I'll walk through some steps here and point out my issues.</p>
<p>So below is my <code>yaml</code> file:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
replicas: 3
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: nginx
image: patientplatypus/frontend_example
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
type: LoadBalancer
selector:
app: frontend
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 3
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: nginx
image: patientplatypus/backend_example
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
type: LoadBalancer
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend
spec:
rules:
- host: www.kubeplaytime.example
http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
- path: /api
backend:
serviceName: backend
servicePort: 80
</code></pre>
<p>What I believe this is doing is </p>
<ul>
<li><p>Deploying a frontend and backend app - I deployed <code>patientplatypus/frontend_example</code> and <code>patientplatypus/backend_example</code> to dockerhub and then pull the images down. One open question I have is, what if I don't want to pull the images from docker hub and rather would just like to load from my localhost, is that possible? In this case I would push my code to the production server, build the docker images on the server and then upload to kubernetes. The benefit is that I don't have to rely on dockerhub if I want my images to be private.</p></li>
<li><p>It is creating two service endpoints that route outside traffic from a web browser to each of the deployments. These services are of type <code>loadBalancer</code> because they are balancing the traffic among the (in this case 3) replicasets that I have in the deployments.</p></li>
<li><p>Finally, I have an ingress controller which is <em>supposed</em> to allow my services to route to each other through <code>www.kubeplaytime.example</code> and <code>www.kubeplaytime.example/api</code>. However this is not working.</p></li>
</ul>
<p>What happens when I run this?</p>
<pre><code>patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
ingress.extensions "frontend" created
</code></pre>
<ul>
<li><p>So first, it appears to create all the parts that I need fine with no errors.</p>
<p><code>patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services</code></p>
<p><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE</code></p>
<p><code>backend LoadBalancer 10.0.18.174 <pending> 80:31649/TCP 1m</code></p>
<p><code>frontend LoadBalancer 10.0.100.65 <pending> 80:32635/TCP 1m</code></p>
<p><code>kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d</code></p>
<p><code>frontend LoadBalancer 10.0.100.65 138.91.126.178 80:32635/TCP 2m</code></p>
<p><code>backend LoadBalancer 10.0.18.174 138.91.121.182 80:31649/TCP 2m</code></p></li>
<li><p>Second, if I watch the services, I eventually get IP addresses that I can use to navigate in my browser to these sites. Each of the above IP addresses works in routing me to the frontend and backend respectively.</p></li>
</ul>
<p><strong>HOWEVER</strong></p>
<p>I reach an issue when I try and use the ingress controller - it seemingly deployed, but I don't know how to get there.</p>
<pre><code>patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
NAME HOSTS ADDRESS PORTS AGE
frontend www.kubeplaytime.example 80 16m
</code></pre>
<ul>
<li>So I have no address I can use, and <code>www.kubeplaytime.example</code> does not appear to work.</li>
</ul>
<p>What it appears that I have to do to route to the ingress extension I just created is to use a service and deployment on <em>it</em> in order to get an IP address, but this starts to look incredibly complicated very quickly.</p>
<p>For example, take a look at this medium article: <a href="https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e" rel="noreferrer">https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e</a>.</p>
<p>It would appear that the necessary code to add for just the service routing to the Ingress (ie what he calls the <strong>Ingress Controller</strong>) appears to be this:</p>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
spec:
type: LoadBalancer
selector:
app: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: ingress-nginx
spec:
replicas: 1
template:
metadata:
labels:
app: ingress-nginx
spec:
terminationGracePeriodSeconds: 60
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
name: ingress-nginx
imagePullPolicy: Always
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
---
kind: Service
apiVersion: v1
metadata:
name: nginx-default-backend
spec:
ports:
- port: 80
targetPort: http
selector:
app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: nginx-default-backend
spec:
replicas: 1
template:
metadata:
labels:
app: nginx-default-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
ports:
- name: http
containerPort: 8080
protocol: TCP
</code></pre>
<p>This would seemingly need to be appended to my other <code>yaml</code> code above in order to get a service entry point for my ingress routing, and it does appear to give an ip:</p>
<pre><code>patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.31.209 <pending> 80:32428/TCP 4m
frontend LoadBalancer 10.0.222.47 <pending> 80:32482/TCP 4m
ingress-nginx LoadBalancer 10.0.28.157 <pending> 80:30573/TCP,443:30802/TCP 4m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
nginx-default-backend ClusterIP 10.0.71.121 <none> 80/TCP 4m
frontend LoadBalancer 10.0.222.47 40.121.7.66 80:32482/TCP 5m
ingress-nginx LoadBalancer 10.0.28.157 40.121.6.179 80:30573/TCP,443:30802/TCP 6m
backend LoadBalancer 10.0.31.209 40.117.248.73 80:32428/TCP 7m
</code></pre>
<p>So <code>ingress-nginx</code> appears to be the site I want to get to. Navigating to <code>40.121.6.179</code> returns a default 404 message (<code>default backend - 404</code>) - it does not go to <code>frontend</code> as <code>/</code> ought to route. <code>/api</code> returns the same. Navigating to my host name <code>www.kubeplaytime.example</code> returns a 404 from the browser - no error handling.</p>
<p><strong>QUESTIONS</strong></p>
<ul>
<li><p>Is the Ingress Controller strictly necessary, and if so is there a less complicated version of this?</p></li>
<li><p>I feel I am close, what am I doing wrong?</p></li>
</ul>
<p><strong>FULL YAML</strong></p>
<p>Available here: <a href="https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938" rel="noreferrer">https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938</a></p>
<p><em>Thanks for the help!</em></p>
<p><strong>EDIT EDIT EDIT</strong></p>
<p>I've attempted to use <strong>HELM</strong>. On the surface it appears to be a simple interface, and so I tried spinning it up: </p>
<pre><code>patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
NAME: erstwhile-beetle
LAST DEPLOYED: Sun May 6 12:13:30 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
erstwhile-beetle-nginx-ingress-controller 1 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
erstwhile-beetle-nginx-ingress-controller LoadBalancer 10.0.216.38 <pending> 80:31494/TCP,443:32118/TCP 1s
erstwhile-beetle-nginx-ingress-default-backend ClusterIP 10.0.55.224 <none> 80/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
erstwhile-beetle-nginx-ingress-controller 1 1 1 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 1 1 0 1s
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
erstwhile-beetle-nginx-ingress-controller 1 N/A 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 N/A 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz 0/1 ContainerCreating 0 1s
erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w 0/1 ContainerCreating 0 1s
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'
An example Ingress that makes use of the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: example
namespace: foo
spec:
rules:
- host: www.example.com
http:
paths:
- backend:
serviceName: exampleService
servicePort: 80
path: /
# This section is only required if TLS is to be enabled for the Ingress
tls:
- hosts:
- www.example.com
secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: foo
data:
tls.crt: <base64 encoded cert>
tls.key: <base64 encoded key>
type: kubernetes.io/tls
</code></pre>
<p>Seemingly this is really nice - it spins everything up and gives an example of how to add an ingress. Since I spun up helm in a blank <code>kubectl</code> I used the following <code>yaml</code> file to add in what I thought would be required.</p>
<p>The file: </p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
replicas: 3
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: nginx
image: patientplatypus/frontend_example
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
type: LoadBalancer
selector:
app: frontend
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 3
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: nginx
image: patientplatypus/backend_example
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
type: LoadBalancer
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: www.example.com
http:
paths:
- path: /api
backend:
serviceName: backend
servicePort: 80
- path: /
frontend:
serviceName: frontend
servicePort: 80
</code></pre>
<p>Deploying this to the cluster however runs into this error:</p>
<pre><code>patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>So the question then becomes: how do I debug this?
If you dump the code that helm produces, it's basically unreadable by a person - there's no way to go in there and figure out what's going on.</p>
<p>Check it out: <a href="https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863" rel="noreferrer">https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863</a> - over a 1000 lines!</p>
<p>If anyone has a better way to debug a helm deploy add it to the list of open questions.</p>
<p><strong>EDIT EDIT EDIT EDIT</strong></p>
<p>To simplify <em>in the extreme</em> I attempt to make a call from one pod to another only using namespace. </p>
<p>So here is my React code where I make the http request:</p>
<pre><code>axios.get('http://backend/test')
.then(response=>{
console.log('return from backend and response: ', response);
})
.catch(error=>{
console.log('return from backend and error: ', error);
})
</code></pre>
<p>I've also attempted to use <code>http://backend.exampledeploy.svc.cluster.local/test</code> without luck.</p>
<p>Here is my node code handling the get:</p>
<pre><code>router.get('/test', function(req, res, next) {
res.json({"test":"test"})
});
</code></pre>
<p>Here is my <code>yaml</code> file that I uploading to the <code>kubectl</code> cluster:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend
namespace: exampledeploy
labels:
app: frontend
spec:
replicas: 3
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: nginx
image: patientplatypus/frontend_example
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: frontend
namespace: exampledeploy
spec:
type: LoadBalancer
selector:
app: frontend
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
namespace: exampledeploy
labels:
app: backend
spec:
replicas: 3
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: nginx
image: patientplatypus/backend_example
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: backend
namespace: exampledeploy
spec:
type: LoadBalancer
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 5000
</code></pre>
<p>The uploading to the cluster appears to work as I can see in my terminal: </p>
<pre><code>patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy
NAME READY STATUS RESTARTS AGE
pod/backend-584c5c59bc-5wkb4 1/1 Running 0 15m
pod/backend-584c5c59bc-jsr4m 1/1 Running 0 15m
pod/backend-584c5c59bc-txgw5 1/1 Running 0 15m
pod/frontend-647c99cdcf-2mmvn 1/1 Running 0 15m
pod/frontend-647c99cdcf-79sq5 1/1 Running 0 15m
pod/frontend-647c99cdcf-r5bvg 1/1 Running 0 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/backend LoadBalancer 10.0.112.160 168.62.175.155 80:31498/TCP 15m
service/frontend LoadBalancer 10.0.246.212 168.62.37.100 80:31139/TCP 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/backend 3 3 3 3 15m
deployment.extensions/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.extensions/backend-584c5c59bc 3 3 3 15m
replicaset.extensions/frontend-647c99cdcf 3 3 3 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/backend 3 3 3 3 15m
deployment.apps/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.apps/backend-584c5c59bc 3 3 3 15m
replicaset.apps/frontend-647c99cdcf 3 3 3 15m
</code></pre>
<p>However, when I attempt to make the request I get the following error:</p>
<pre><code>return from backend and error:
Error: Network Error
Stack trace:
createError@http://168.62.37.100/static/js/bundle.js:1555:15
handleError@http://168.62.37.100/static/js/bundle.js:1091:14
App.js:14
</code></pre>
<p>Since the <code>axios</code> call is being made from the browser, I'm wondering if it is simply not possible to use this method to call the backend, even though the backend and the frontend are in different pods. I'm a little lost, as I thought this was the simplest possible way to network pods together.</p>
<p><strong>EDIT X5</strong></p>
<p>I've determined that it is possible to curl the backend from the command line by exec'ing into the pod like this:</p>
<pre><code>patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
* Hostname was NOT found in DNS cache
* Trying 10.0.249.147...
* Connected to backend (10.0.249.147) port 80 (#0)
> GET /test HTTP/1.1
> User-Agent: curl/7.38.0
> Host: backend
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 15
< ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
< Date: Sun, 06 May 2018 20:25:49 GMT
< Connection: keep-alive
<
* Connection #0 to host backend left intact
{"test":"test"}
</code></pre>
<p>What this means, without a doubt, is that because the front-end code is executed in the browser, it needs Ingress to gain entry into the pod - HTTP requests from the front end are what break with simple pod networking. I was unsure of this, but it means Ingress is necessary.</p>
| <p>First of all, let's clarify some apparent misconceptions. You mentioned your front-end being a React application, which will presumably run in the user's browser. For this to work, your actual problem is not your back-end and front-end pods <strong>communicating with each other</strong>; rather, the browser needs to be able to <strong>connect to both these pods</strong> (to the front-end pod in order to load the React application, and to the back-end pod for the React app to make API calls).</p>
<p>To visualize:</p>
<pre><code> +---------+
+---| Browser |---+
| +---------+ |
V V
+-----------+ +----------+ +-----------+ +----------+
| Front-end |---->| Back-end | | Front-end | | Back-end |
+-----------+ +----------+ +-----------+ +----------+
(what you asked for) (what you need)
</code></pre>
<p>As already stated, the easiest solution for this would be to use an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress controller</a>. I won't go into detail on how to set up an Ingress controller here; in some cloud environments (like GKE) you will be able to use an Ingress controller provided to you by the cloud provider. Otherwise, you can set up the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/README.md" rel="noreferrer">NGINX Ingress controller</a>. Have a look at the NGINX Ingress controllers <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">deployment guide</a> for more information.</p>
<h3>Define services</h3>
<p>Start by defining <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Service resources</a> for both your front-end and back-end application (these would also allow your Pods to communicate with each other). A service definition might look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: backend
spec:
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 8080
</code></pre>
<p>Make sure that your Pods have <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">labels</a> that can be selected by the Service resource (in this example, I'm using <code>app=backend</code> and <code>app=frontend</code> as labels).</p>
<p>If you want to establish Pod-to-Pod communication, you're done now. In each Pod, you can now use <code>backend.<namespace>.svc.cluster.local</code> (or <code>backend</code> as shorthand) and <code>frontend</code> as host names to connect to that Pod.</p>
<h3>Define Ingresses</h3>
<p>Next up, you can define the Ingress resources; since both services will need connectivity from outside the cluster (the users browser), you will need Ingress definitions for both services.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend
spec:
rules:
- host: www.your-application.example
http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: backend
spec:
rules:
- host: api.your-application.example
http:
paths:
- path: /
backend:
serviceName: backend
servicePort: 80
</code></pre>
<p>Alternatively, you could also aggregate frontend and backend with a single Ingress resource (no "right" answer here, just a matter of preference):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend
spec:
rules:
- host: www.your-application.example
http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
- path: /api
backend:
serviceName: backend
servicePort: 80
</code></pre>
<p>After that, make sure that both <code>www.your-application.example</code> and <code>api.your-application.example</code> point to your Ingress controller's external IP address, and you should be done.</p>
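<p>For completeness, a quick way to verify the routing once everything is applied - a rough sketch assuming the NGINX Ingress controller is exposed through a LoadBalancer service named <code>ingress-nginx</code> in the <code>ingress-nginx</code> namespace (adjust names to your setup):</p>
<pre><code># find the external IP of the ingress controller
kubectl -n ingress-nginx get service ingress-nginx

# test the routing rules before touching DNS by sending the Host header explicitly
curl -H "Host: www.your-application.example" http://<EXTERNAL-IP>/
curl -H "Host: api.your-application.example" http://<EXTERNAL-IP>/
</code></pre>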
|
<p>Whats the best approach to provide a .kube/config file in a rest service deployed on kubernetes?</p>
<p>This will enable my service to (for example) use the kuberntes client api.</p>
<p>R</p>
| <p>Create service account:</p>
<pre><code>kubectl create serviceaccount example-sa
</code></pre>
<p>Create a role:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: example-role
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
</code></pre>
<p>Create role binding:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
name: example-role-binding
namespace: default
subjects:
- kind: "ServiceAccount"
name: example-sa
roleRef:
kind: Role
name: example-role
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>create pod using <code>example-sa</code></p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: example-pod
spec:
serviceAccountName: example-sa
containers:
- name: secret-access-container
image: example-image
</code></pre>
<p>The most important line in the pod definition is <code>serviceAccountName: example-sa</code>. After creating the service account and adding this line to your pod's definition, you will be able to access your API access token at <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code>.</p>
<p><a href="https://developer.ibm.com/recipes/tutorials/service-accounts-and-auditing-in-kubernetes/" rel="nofollow noreferrer">Here</a> you can find a slightly more detailed version of the above example.</p>
|
<p>I have a Jenkins instance setup using Googles <a href="https://cloud.google.com/solutions/jenkins-on-kubernetes-engine-tutorial" rel="nofollow noreferrer">Jenkins on Kubernetes</a> solution. I have not changed any of the settings of the Kubernetes Pod.</p>
<p>When I trigger a new job I am successfully able to get everything up and running until the point of my tests.</p>
<p>My tests use docker-compose. First I make sure to install docker (1.5-1+b1) and docker-compose (1.8.0-2) on the instance (I know I can optimize this by using an image that already includes these, but I am still just in proof-of-concept). </p>
<p>When I run the <code>docker-compose up</code> command everything works and the services start their initialization scripts. However, <strong>the mounts are empty</strong>. I have verified that the files exist on the Jenkins slave, and the mount is created inside the docker service when I run docker-compose, however <strong>they are empty</strong>. </p>
<p>Some information:</p>
<p>In order to get around file permissions I am using <code>/tmp</code> as the Jenkins Workspace. I am using SCM to pull my files (successfully) and in the docker-compose file I specify <code>version: '2'</code> and the mount paths with absolute paths. The volume section of the service that fails looks like this:</p>
<pre><code>volumes:
- /tmp/automation:/opt/automation
</code></pre>
<p>I changed the command that is run in the service to <code>ls /opt/automation</code> and the result is an empty directory. </p>
<p><strong>What am I missing?</strong> <em>I just want to mount a directory into my docker-compose service</em>. This works perfectly from Windows, Ubuntu, and Centos devices. Why won't it work using the Kubernetes instance? </p>
| <p>I found the reason it fails <a href="https://stackoverflow.com/a/31381323/1561176">here</a>:</p>
<blockquote>
<p>A Docker container in a Docker container uses the parent HOST's Docker daemon and hence, any volumes that are mounted in the "docker-in-docker" case is still referenced from the HOST, and not from the Container.</p>
<p>Therefore, the actual path mounted from the Jenkins container "does not exist" in the HOST. Due to this, a new directory is created in the "docker-in-docker" container that is empty. Same thing applies when a directory is mounted to a new Docker container inside a Container.</p>
</blockquote>
<p>So it seems like it will be impossible to mount something from the outer docker into the inner docker. And another solution must be found.</p>
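<p>One workaround that follows from this - a sketch, assuming the data can simply be copied instead of bind-mounted (the service/container name <code>automation</code> is made up) - is to copy the checked-out files into the running container and copy any results back out afterwards:</p>
<pre><code># start the compose environment without the bind mount
docker-compose up -d

# push the workspace files into the container in place of the mount
docker cp /tmp/automation/. automation:/opt/automation

# ...run the tests, then pull the results back out
docker cp automation:/opt/automation/results ./results
</code></pre>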
|
<p>I am reading <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/local-storage-overview.md" rel="nofollow noreferrer">local storage design</a> of kubernetes. It has a section for <code>distributed database</code> where db replicates data by itself. </p>
<p>My question is that if any process of db goes down, will it be restarted on the same node/machine ? I think it's a yes.</p>
<p>If it's yes, will it have access to its <code>local storage</code> that it had before it crashed ?</p>
<p>I read an <a href="https://kubernetes.io/blog/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes" rel="nofollow noreferrer">older article</a> about <code>stateful sets</code> when it was in beta. The article didn't encourage use of local storage at that time.</p>
<p>I am new to Kubernetes, so please answer this question with more information that a some new needs for understanding.</p>
| <p>In the local storage design, as you can read <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/local-storage-overview.md#alice-manages-a-database-which-needs-access-to-durable-and-fast-scratch-space" rel="nofollow noreferrer">here</a>, it is used with stateful sets. So for example if you want three mongodb instances named mongodb, then k8s will create for you three pods:</p>
<ul>
<li>mongodb-0</li>
<li>mongodb-1</li>
<li>mongodb-2</li>
</ul>
<p>If the mongodb-2 fails then k8s will restart it with the same local storage or persistent volume.
If you increase the number of replicas, then k8s will create new persistent volumes through your <code>volumeClaimTemplates</code> (see the sketch below). If you shrink it down to two, those newly created volumes won't be deleted and will be reused if you go back to your previous number of replicas.</p>
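<p>To make the mechanism concrete, here is a minimal sketch of the <code>volumeClaimTemplates</code> section that drives this per-pod volume creation (names and sizes are illustrative only):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:       # one PVC per pod: data-mongodb-0, data-mongodb-1, data-mongodb-2
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
</code></pre>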
<p>If your persistent volume is bound to a specific node then k8s will know to create your pod on that node.</p>
<p>You can read about a mongodb cluster statefulset example here : <a href="https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets" rel="nofollow noreferrer">https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets</a></p>
<p>Or you can check a great talk here (with demos):
<a href="https://www.youtube.com/watch?v=m8anzXcP-J8&feature=youtu.be" rel="nofollow noreferrer">https://www.youtube.com/watch?v=m8anzXcP-J8&feature=youtu.be</a></p>
<p>The use of statefuls sets and local storage is really well explained.</p>
|
<p>Is it possible for an InitContainer to change the environment variables of the application container when running inside the same Pod?</p>
<p>Note that I am looking for a detailed answer that describes the technical reasons why this is or isn't possible. Example: 'Current container technology supports environment variable isolation between containers and Pods cannot bypass that restriction by "grouping" containers in the same "environment variable space"'.</p>
| <p>No, it is not possible to just change the variables like that. One of the possible workarounds is to use an <code>EmptyDir</code> volume shared between the <code>InitContainer</code> and the main container. You can store the variables there in a file written by the <code>InitContainer</code> and load them in the main container using a simple shell script, as sketched below.</p>
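<p>A minimal sketch of that workaround (variable name and value are made up for illustration):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: env-from-init
spec:
  volumes:
  - name: shared
    emptyDir: {}
  initContainers:
  - name: init
    image: busybox
    # compute the value and write it to the shared volume
    command: ["sh", "-c", "echo 'export MY_VAR=computed-value' > /shared/env.sh"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  containers:
  - name: app
    image: busybox
    # source the file written by the init container before starting the real process
    command: ["sh", "-c", ". /shared/env.sh && echo $MY_VAR && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /shared
</code></pre>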
|
<p>I have a single kubernetes service called MyServices which hold four deployments. Each deployment is running as a single pod and each pod has its own port number.</p>
<p><a href="https://i.stack.imgur.com/zSOKs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zSOKs.png" alt="enter image description here"></a></p>
<p>As mentioned all the pods are running inside one kubernetes service. </p>
<p>I am able to call the services through the external IP Address of that kubernetes service and port number. </p>
<p><strong>Example : 92.18.1.1:3011/MicroserviceA</strong> Or <strong>92.18.1.1:3012/MicroserviceB</strong></p>
<p>I am now trying to develop and orchestration layer that calls these services and get a response from them, However, I am trying to figure out a way in which I do NOT need to specify every micro-service port number, instead I can call them through their endpoint/ServiceName. Example: <strong>192.168.1.1/MicroserviceA</strong></p>
<p>How can I achieve above statement?</p>
<p>From architecture perspective, is it a good idea to deploy all microservice inside a single kubenetes service (like my current approach) or each micro-service needs it's own service</p>
<p>Below is the kubernetes deployment file ( I removed the script for micro-service C and D since they are identical to A and B):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myservice
spec:
selector:
app: microservice
ports:
- name: microserviceA
protocol: TCP
port: 3011
targetPort: 3011
- name: microserviceB
protocol: TCP
port: 3012
targetPort: 3012
- name: microserviceC
protocol: TCP
port: 3013
targetPort: 3013
- name: microserviceD
protocol: TCP
port: 3014
targetPort: 3014
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: microserviceAdeployment
spec:
replicas: 1
template:
metadata:
labels:
app: microservice
spec:
containers:
- image: dockerhub.com/myimage:v1
name: microservice
ports:
- containerPort: 3011
imagePullSecrets:
- name: regcred
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: microserviceBdeployment
spec:
replicas: 1
template:
metadata:
labels:
app: microservice
spec:
containers:
- image: dockerhub.com/myimage:v1
name: microservice
ports:
- containerPort: 3012
</code></pre>
| <p>There is a way to <a href="https://stackoverflow.com/a/39260160/6309">discover all the ports of Kubernetes services</a>.</p>
<p>So you could consider using <code>kubectl get svc</code>, as seen in "<a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport" rel="nofollow noreferrer">Source IP for Services with Type=NodePort</a>"</p>
<pre><code>NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services <yourService>)
</code></pre>
<hr>
<blockquote>
<p>, I am trying to figure out a way in which I do NOT need to specify every micro-service port number, instead I can call them through their endpoint/ServiceName</p>
</blockquote>
<p>Then you need to expose those services through one entry point, typically a reverse-proxy like NGiNX.<br>
The idea is to expose said services using the default ports (80 or 443), and reverse-proxy them to the actual URL and port number.</p>
<p>Check "<a href="https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture/" rel="nofollow noreferrer">Service Discovery in a Microservices Architecture</a>" for the general idea.</p>
<p><a href="https://i.stack.imgur.com/QoMZg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QoMZg.png" alt="https://cdn-1.wp.nginx.com/wp-content/uploads/2016/04/Richardson-microservices-part4-1_difficult-service-discovery.png"></a></p>
<p>And "<a href="https://www.nginx.com/blog/service-discovery-nginx-plus-etcd/" rel="nofollow noreferrer">Service Discovery for NGINX Plus with etcd</a>" for an implementation (using NGiNX plus, so could be non-free).<br>
Or "<a href="https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45" rel="nofollow noreferrer">Setting up Nginx Ingress on Kubernetes</a>" for a more manual approach.</p>
|
<p>My Kubernetes cluster is used for running my graph database (<a href="https://docs.dgraph.io/deploy#using-kubernetes-v1-8-4" rel="nofollow noreferrer">Dgraph</a>). However, I have to load the initial dataset (1TB) that comes as different folders and files into Dgraph. </p>
<p>I've processed the data locally and can now upload the files to my 6 different SSD persistent disks -- is there a way I can do so directly to the disks or do I need to use a Compute Engine Instance and go through it by mounting the disks and then unmounting them?</p>
| <p>I have one suggestion which may be quicker and more simple than the method you mention in the post.</p>
<p>Presumably you have persistent disk claims mounted to the pods which will make use of this data.</p>
<p>For example, let's say you have a persistent disk claim mounted to /mnt/data on a pod. </p>
<p>It's possible to copy files to pods using the 'kubectl cp' command. I realise that the dataset you want to upload is very large and would fill a pods standard filesystem. However, if you have the persistent disk claim mounted to the pod that will contain the data the pod utilises, presumably this mounted storage is large enough for that data. You could therefore try using 'kubectl cp' to copy the data to the mount point on the pod, so that it lands on the mounted volume. </p>
<p>You can run this command to try this out:</p>
<pre><code>kubectl cp datafile.csv NAMESPACE_NAME/POD_NAME:/mnt/data
</code></pre>
<p>Other than that, you could consider uploading the data to Cloud Storage using <a href="https://cloud.google.com/storage/docs/gsutil/commands/cp" rel="nofollow noreferrer">gsutil</a>, then installing fuse on the nodes as mentioned <a href="https://serverfault.com/questions/891862/can-i-mount-google-cloud-storage-bucket-to-a-pod-as-a-persistent-disk-if-yes-ho?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa">here</a>, which would allow you to mount the Cloud Storage bucket to pods that need access to the data, although I realise this may not suit every use case. </p>
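<p>For the upload step itself, a hedged one-liner (the bucket name is made up):</p>
<pre><code># parallel, recursive upload of the locally processed dataset
gsutil -m cp -r ./processed-data gs://your-dgraph-dataset-bucket/
</code></pre>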
|
<p>I'm trying to setup a volume to use with Mongo on k8s.</p>
<p>I use <code>kubectl create -f pv.yaml</code> to create the volume.</p>
<p><strong>pv.yaml:</strong></p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: pvvolume
labels:
type: local
spec:
storageClassName: standard
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/nfs"
claimRef:
kind: PersistentVolumeClaim
namespace: default
name: pvvolume
</code></pre>
<p>I then deploy <a href="https://github.com/cvallance/mongo-k8s-sidecar/blob/master/example/StatefulSet/mongo-statefulset.yaml" rel="noreferrer">this StatefulSet</a> that has pods making PVCs to this volume.</p>
<p>My volume seems to have been created without problem, I'm expecting it to just use the storage of the host node.</p>
<p>When I try to deploy I get the following error:</p>
<blockquote>
<p>Unable to mount volumes for pod
"mongo-0_default(2735bc71-5201-11e8-804f-02dffec55fd2)": timeout
expired waiting for volumes to attach/mount for pod
"default"/"mongo-0". list of unattached/unmounted
volumes=[mongo-persistent-storage]</p>
</blockquote>
<p>Have a missed a step in setting up my persistent volume?</p>
| <p>A persistent volume is just the declaration of availability of some storage inside your kubernetes cluster. There is no binding with your pod at this stage. </p>
<p>Since your pod is deployed through a <code>StatefulSet</code>, there should be in your cluster one or more <code>PersistentVolumeClaims</code> which are the objects that connect a pod with a PersistentVolume.</p>
<p>In order to manually bind a PV with a PVC you need to edit your PVC by adding the following in its spec section:</p>
<pre><code>volumeName: "<your persistent volume name>"
</code></pre>
<p>Here is an explanation of how this process works: <a href="https://docs.openshift.org/latest/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding" rel="noreferrer">https://docs.openshift.org/latest/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding</a></p>
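<p>Putting that together with the PV from the question (which already carries a <code>claimRef</code> pointing at a claim named <code>pvvolume</code> in <code>default</code>), a claim bound to it might look roughly like this - note that with a StatefulSet the claim name is normally generated from the <code>volumeClaimTemplates</code> (e.g. <code>mongo-persistent-storage-mongo-0</code>), so you would add <code>volumeName</code> there rather than create a new claim:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvvolume
  namespace: default
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pvvolume    # explicit binding to the PersistentVolume
</code></pre>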
|
<p>I'm trying to install a service mesh in my Kubernetes cluster, and for that I want to accomplish the following:</p>
<ul>
<li>The service mesh pods are run as a Daemon Set, with one replica on each node.</li>
<li>The service mesh is not directly reachable from outside the cluster.</li>
<li>The application pods use the service mesh pod <em>on the same node</em> as a http proxy for all requests.</li>
</ul>
<p>In order to do so, I've added the following configuration to the container spec on the application Deployment:</p>
<pre><code># deployment.spec.template.spec.containers[0]
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: HTTP_PROXY
value: http://$(NODE_NAME):4140
</code></pre>
<p>However, I'm failing to expose the service mesh properly; I've tried with both services of <code>type: ClusterIP</code> and <code>type: LoadBalancer</code>, but with the former I fail to connect, and with the latter I expose the service mesh publicly, which I don't want to do. I've also tried to get something working with <code>NodePort</code>, but I haven't even been able to figure out a valid configuration there yet (and there seems to be some opinion that <a href="https://oteemo.com/2017/12/12/think-nodeport-kubernetes/" rel="noreferrer">NodePort should be avoided if possible</a>...).</p>
<p>How should I expose the service mesh, preferrably on ports 4140-4142 and 9990, in a way that is reachable on a specific node (the current one) from other pods?</p>
| <p>You could put your pods in the host network space. That way you won't need any Service and your Pod will be reachable on the port you declare in the <code>PodSpec</code> on every node.</p>
<p>You could avoid external reachability by binding your service to 127.0.0.1 rather than 0.0.0.0</p>
<p>If you want to give it a try, you can run a port in the host network space by adding the following to the PodSpec of your DaemonSet:</p>
<pre><code>hostNetwork: true
</code></pre>
<p>Please be aware that, with this solution, you'll need to use the host's IP address in order to connect to your pod.</p>
<p>In order to get internal DNS name resolution in your hostNetworked pods, you also need to set the DNS policy like this:</p>
<pre><code>dnsPolicy: ClusterFirstWithHostNet
</code></pre>
<p>This will ensure your pods will use the internal DNS server for name resolution.</p>
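<p>Putting both settings together, the relevant part of the DaemonSet might look roughly like this (image and port are placeholders for the service-mesh proxy):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mesh-proxy
spec:
  selector:
    matchLabels:
      app: mesh-proxy
  template:
    metadata:
      labels:
        app: mesh-proxy
    spec:
      hostNetwork: true                    # share the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution
      containers:
      - name: proxy
        image: example/mesh-proxy:latest   # placeholder image
        ports:
        - containerPort: 4140
</code></pre>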
|
<ul>
<li>I have tests that I run locally using a <code>docker-compose</code> environment. </li>
<li><strong>I would like to implement these tests as part of our CI using Jenkins with Kubernetes on Google Cloud</strong> (following <a href="https://cloud.google.com/solutions/jenkins-on-kubernetes-engine-tutorial" rel="nofollow noreferrer">this setup</a>). </li>
<li><a href="https://stackoverflow.com/q/50197692/1561176">I have been unsuccessful</a> because <a href="https://stackoverflow.com/a/31381323/1561176">docker-in-docker does not work</a>.</li>
</ul>
<p>It seems that right now there is no solution for this use-case. I have found other questions <em>related</em> to this issue; <a href="https://stackoverflow.com/q/37214628/1561176">here</a>, and <a href="https://stackoverflow.com/q/48995717/1561176">here</a>.</p>
<p>I am looking for solutions that will let me run <code>docker-compose</code>. I have found <a href="https://stackoverflow.com/a/35830359/1561176">solutions for running <code>docker</code></a>, but not for running <code>docker-compose</code>.</p>
<p>I am hoping someone else has had this use-case and found a solution.</p>
<hr>
<p><strong>Edit:</strong> Let me clarify my use-case:</p>
<ol>
<li>When I detect a valid trigger (ie: push to repo) I need to start a new job.</li>
<li>I need to setup an environment with multiple dockers/instances (docker-compose).</li>
<li>The instances on this environment need access to code from git (mount volumes/create new images with the data).</li>
<li>I need to run tests in this environment.</li>
<li>I need to then retrieve results from these instances (JUnit test results for Jenkins to parse).</li>
</ol>
<p><strong><em>The problems I am having are with 2, and 3.</em></strong> </p>
<p>For 2 there is a problem running this in parallel (more than one job) since the docker context is shared (docker-in-docker issues). If this is running on more than one node then i get clashes because of shared resources (ports for example). <em>my workaround is to only limit it to one running instance and queue the rest (not ideal for CI)</em></p>
<p>For 3 there is a problem mounting volumes since the docker context is shared (docker-in-docker issues). I can not mount the code that I checkout in the job because it is not present on the host that is responsible for running the docker instances that I trigger. <em>my workaround is to build a new image from my template and just copy the code into the new image and then use that for the test (this works, but means I need to use docker cp tricks to get data back out, which is also not ideal)</em></p>
| <p>I think a better way is to use pure Kubernetes resources and run the tests directly on Kubernetes, rather than through <code>docker-compose</code>.</p>
<p>You can convert your docker-compose files into Kubernetes resources using <code>kompose</code> <a href="https://kubernetes.io/docs/tools/kompose/user-guide/" rel="nofollow noreferrer">utility</a>.</p>
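<p>For example, a sketch of the conversion step (the output will usually still need manual review):</p>
<pre><code># generate one Kubernetes manifest per compose service in the current directory
kompose convert -f docker-compose.yml
kubectl apply -f .
</code></pre>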
<p>You will probably need to adapt the conversion result, or you may prefer to convert your docker-compose objects into Kubernetes objects manually. Possibly, you can just use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs</a> with multiple containers instead of a combination of deployments + services. </p>
<p>Anyway, I definitely recommend you to use Kubernetes abstractions instead of running tools like <code>docker-compose</code> inside Kubernetes.</p>
<p>Moreover, you will still be able to run the tests locally, using Minikube to spawn a small all-in-one cluster right on your PC.</p>
|
<p>I am trying to implement a CI/CD pipeline using Kubernetes and Jenkins with my private SVN repository. I am planning to use a Kubernetes cluster with 3 master and 15 worker machines/nodes, and to use Jenkins to deploy the microservices developed with Spring Boot. So when I am deploying using Jenkins, how can I define which microservice needs to be deployed on which node in the Kubernetes cluster? Do I need to specify it in the Pod, or in some other definition? </p>
| <blockquote>
<p>How can I define which microservice needs to be deployed on which node in the Kubernetes cluster? Do I need to specify it in the Pod, or in some other definition?</p>
</blockquote>
<ul>
<li>As said in other answers, you don't <strong>need</strong> to do this, but you <strong>can</strong> if there is a reason to, using nodeSelector or the more expressive affinity rules. They are well worth the time to read, since you can group pods belonging to specific services/microservices together on, or keep them apart across, the available nodes, which allows for a more flexible and resilient architecture and a proper spread. This way you are helping the scheduler decide where to place what in order to achieve the desired layout. For most basic needs the previously mentioned resource allocation can do the trick, but for any fine-grained control you have affinity (and anti-affinity) at your disposal - see the sketch after this list. Documentation detailing this is here: <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/</a></li>
</ul>
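<p>A minimal sketch of a pod template using node affinity (the <code>disktype=ssd</code> label is made up - you would label your nodes accordingly, e.g. <code>kubectl label nodes worker-03 disktype=ssd</code>):</p>
<pre><code>spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype            # hypothetical node label
            operator: In
            values:
            - ssd
  containers:
  - name: my-microservice
    image: registry.example.com/my-microservice:latest   # placeholder image
</code></pre>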
|
<p>Is it possible for an InitContainer to change the environment variables of the application container when running inside the same Pod?</p>
<p>Note that I am looking for a detailed answer that describes the technical reasons why this is or isn't possible. Example: 'Current container technology supports environment variable isolation between containers and Pods cannot bypass that restriction by "grouping" containers in the same "environment variable space"'.</p>
| <p>Short answer is No, they can't.</p>
<p>You can try to hack something together using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/" rel="nofollow noreferrer">ShareProcessNamespace</a> and <a href="https://stackoverflow.com/questions/205064/is-there-a-way-to-change-the-environment-variables-of-another-process-in-unix">gdb</a>, but that is certainly not the correct solution for the problem you are trying to solve.</p>
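<p>For reference, the setting mentioned above is a single pod-level field (a sketch - on its own it only makes processes visible across containers, it does not let one container rewrite another's environment, and it requires a Kubernetes version in which the feature is enabled):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true    # all containers in the pod share one PID namespace
  containers:
  - name: app
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
</code></pre>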
|
<p>I am trying to test my development helm chat deployment output using <code>--dry-run</code> option. when I run the below command its trying to connect to Kubernetes API server. </p>
<p>Is dry run option required to connect Kubernetes cluster? all I want to check the deployment yaml file output.</p>
<pre><code>helm install mychart-0.1.0.tgz --dry-run --debug
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
</code></pre>
| <p>There is also an option to run <code>helm template ./mychart</code> to render the generated YAMLs without needing the connection to tiller.
Combined with <code>helm lint</code>, this is a great way to verify the validity of your chart.</p>
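<p>For example, with the packaged chart from the question (the <code>--values</code> override is optional and assumes you have such a file):</p>
<pre><code># render the manifests locally without contacting tiller
helm template mychart-0.1.0.tgz --values values.yaml > rendered.yaml

# run static checks against the chart source
helm lint ./mychart
</code></pre>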
|
<p>I am working on a project which is deployed on Kubernetes. The system consists of multiple microservices where one of them does intense CPU work for 4-5 seconds when requested through HTTP.</p>
<p>We are in a situation where users might invoke this endpoint many times concurrently over a short period of time (possibly within milliseconds). My concern is that the auto-scaler is not able to boot new pods fast enough to meet the needs and that multiple requests will land on the same pod resulting in poor performance for each of those HTTP request.</p>
<p>The application running in the container does actually boot a new process on every request. This means that the application is able to take advantage of more than one core when processing more than one request.</p>
<p>So the question is</p>
<p>Is there any way to make the auto-scaler lightening-fast ie. responding within milliseconds? How is this problem solved in other projects?</p>
<p>Thanks</p>
| <p>POD autoscaling is based on metrics that get scraped from the running pods by a tool called <code>heapster</code>. This tool, by default, scrapes data every 60 seconds. Furthermore, data scraping takes a significant amount of time (seconds) to complete (the more the PODs, the longer the required time).</p>
<p>That said, it should be clear that it's not feasible to make the autoscaling work at the speed you are requiring (milliseconds).</p>
<p>The "solution" you have is allocating a number of PODs that will be able to sustain your traffic during a peak in a reasonable way. This is of course a waste of system resources when you are off-peak.</p>
|
<p>I have a Kubernetes based application consisting of multiple services (and pods) managed with helm chart.</p>
<p>Postgres is used as a database for all services.</p>
<p>When application is upgraded to a newer version, I am running a db migration script via <code>initContainers</code>.</p>
<p>The problem occurs when migration scripts require an exclusive access to DB (all other connections should be terminated), otherwise the script is blocked.</p>
<p>Ideal solution would be to stop all pods, run the migration and recreate them. But I am not sure how to achieve it properly with Kubernetes.</p>
<p>Tnx</p>
| <blockquote>
<p>Ideal solution would be to stop all pods, run the migration and
recreate them. But I am not sure how to achieve it properly with
Kubernetes.</p>
</blockquote>
<p>I see from one of the comments that you use Helm, so I'd like to propose a solution leveraging Helm's hooks:</p>
<blockquote>
<p>Helm provides a hook mechanism to allow chart developers to intervene
at certain points in a release's life cycle. For example, you can use
hooks to:</p>
<ul>
<li><p>Load a ConfigMap or Secret during install before any other charts are
loaded.</p>
</li>
<li><p>Execute a Job to back up a database before installing a new
chart, and then execute a second job after the upgrade in order to
restore data.</p>
</li>
<li><p>Run a Job before deleting a release to gracefully take a
service out of rotation before removing it.</p>
</li>
</ul>
</blockquote>
<p><a href="https://helm.sh/docs/topics/charts_hooks/" rel="noreferrer">https://helm.sh/docs/topics/charts_hooks/</a></p>
<p>You could package your migration as a k8s <code>Job</code> and leverage the <code>pre-install</code> or <code>pre-upgrade</code> hook to run the job. These hooks runs after templates are rendered, but before any new resources are created in Kubernetes. Thus, your migrations will run before your Pods are deployed.</p>
<p>To delete the deployments prior to running your migrations, create a second pre-install/pre-upgrade hook with a lower <code>helm.sh/hook-weight</code> that deletes the target deployments:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: "pre-upgrade-hook1"
annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-weight": "-1"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: "pre-upgrade-hook1"
spec:
restartPolicy: Never
serviceAccountName: "<an SA with delete RBAC permissions>"
containers:
- name: kubectl
image: "lachlanevenson/k8s-kubectl:latest"
command: ["delete","deployment","deploy1","deploy2"]
</code></pre>
<p>The lower hook-weight will ensure this job runs prior to the migration job. This will ensure the following series of events:</p>
<ol>
<li>You run <code>helm upgrade</code></li>
<li>The helm hook with the lowest hook-weight runs and deletes the relevant deployments</li>
<li>The second hook runs and runs your migrations</li>
<li>Your Chart will install with new Deployments, Pods, etc.</li>
</ol>
<p>Just make sure to keep all of the relevant Deployments in the same Chart.</p>
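<p>For completeness, a sketch of the migration Job itself as the second hook (image and command are placeholders); the default hook-weight of <code>"0"</code> makes it run after the deletion job above:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: "pre-upgrade-db-migration"
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "pre-upgrade-db-migration"
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: "registry.example.com/db-migrations:latest"   # placeholder migration image
        command: ["./run-migrations.sh"]                      # placeholder command
</code></pre>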
|
<p>I have a k8s cluster of 2 hazelcast instances and one client application. Target is to have many clients and at least 2 hazelcast members.
I've set up a LoadBalancer type service in k8s to expose hazelcast instances</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hazelcast-service
labels:
app: hazelcast-service
spec:
type: LoadBalancer
ports:
- port: 10236
targetPort: 5701
selector:
app: hazelcast
</code></pre>
<p>And when it comes for client to start with given config:</p>
<pre><code>clientConfig.getNetworkConfig().addAddress("127.0.0.1:10236");
</code></pre>
<p>in recognizes a hazelcast members:</p>
<pre><code>May 08, 2018 11:25:21 AM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is STARTING
May 08, 2018 11:25:22 AM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is STARTED
May 08, 2018 11:25:22 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_0 [dev] [3.9.3] Trying to connect to [127.0.0.1]:10236 as owner member
May 08, 2018 11:25:22 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_0 [dev] [3.9.3] Authenticated with server [10.1.0.151]:5701, server version:3.10 Local address: /127.0.0.1:60102
May 08, 2018 11:25:22 AM com.hazelcast.client.spi.impl.ClientMembershipListener
INFO: hz.client_0 [dev] [3.9.3]
Members [2] {
Member [10.1.0.148]:5701 - b0e4a52f-0170-47f2-8ff3-74d9b67f45f5
Member [10.1.0.151]:5701 - 1355caa4-5c2b-4366-bd5b-b504f4f0ae4f
}
May 08, 2018 11:25:22 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_0 [dev] [3.9.3] Setting ClientConnection{alive=true, connectionId=1, channel=NioChannel{/127.0.0.1:60102->/127.0.0.1:10236}, remoteEndpoint=[10.1.0.151]:5701, lastReadTime=2018-05-08 11:25:22.420, lastWriteTime=2018-05-08 11:25:22.418, closedTime=never, lastHeartbeatRequested=never, lastHeartbeatReceived=never, connected server version=3.10} as owner with principal ClientPrincipal{uuid='28696aaf-e678-47ee-8c7d-a79ba7a0079a', ownerUuid='1355caa4-5c2b-4366-bd5b-b504f4f0ae4f'}
May 08, 2018 11:25:22 AM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is CLIENT_CONNECTED
May 08, 2018 11:25:22 AM com.hazelcast.internal.diagnostics.Diagnostics
INFO: hz.client_0 [dev] [3.9.3] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
</code></pre>
<p>and when it tries to connect to second instance (10.1.0.151) it also seems to be fine:</p>
<pre><code>May 08, 2018 11:25:29 AM com.hazelcast.core.LifecycleService
INFO: hz.client_1 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is STARTING
May 08, 2018 11:25:29 AM com.hazelcast.core.LifecycleService
INFO: hz.client_1 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is STARTED
May 08, 2018 11:25:29 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_1 [dev] [3.9.3] Trying to connect to [127.0.0.1]:10236 as owner member
May 08, 2018 11:25:29 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_1 [dev] [3.9.3] Authenticated with server [10.1.0.148]:5701, server version:3.10 Local address: /127.0.0.1:60113
May 08, 2018 11:25:29 AM com.hazelcast.client.spi.impl.ClientMembershipListener
INFO: hz.client_1 [dev] [3.9.3]
Members [2] {
Member [10.1.0.148]:5701 - b0e4a52f-0170-47f2-8ff3-74d9b67f45f5
Member [10.1.0.151]:5701 - 1355caa4-5c2b-4366-bd5b-b504f4f0ae4f
}
May 08, 2018 11:25:29 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_1 [dev] [3.9.3] Setting ClientConnection{alive=true, connectionId=1, channel=NioChannel{/127.0.0.1:60113->/127.0.0.1:10236}, remoteEndpoint=[10.1.0.148]:5701, lastReadTime=2018-05-08 11:25:29.455, lastWriteTime=2018-05-08 11:25:29.453, closedTime=never, lastHeartbeatRequested=never, lastHeartbeatReceived=never, connected server version=3.10} as owner with principal ClientPrincipal{uuid='a04aa2ca-626d-4d1a-a366-38c0dbc4781f', ownerUuid='b0e4a52f-0170-47f2-8ff3-74d9b67f45f5'}
May 08, 2018 11:25:29 AM com.hazelcast.core.LifecycleService
INFO: hz.client_1 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is CLIENT_CONNECTED
May 08, 2018 11:25:29 AM com.hazelcast.internal.diagnostics.Diagnostics
INFO: hz.client_1 [dev] [3.9.3] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
</code></pre>
<p>but immediately after above message I got another one (seems to be connection problem regarding first member that my client connected to):</p>
<pre><code>Constructor threw exception; nested exception is com.hazelcast.core.OperationTimeoutException: ClientInvocation{clientMessage = ClientMessage{length=72, correlationId=272, operation=Client.createProxy, messageType=5, partitionId=-1, isComplete=true, isRetryable=false, isEvent=false, writeOffset=0}, objectName = hz:impl:mapService, target = address [10.1.0.151]:5701, sendConnection = null} timed out because exception occurred after client invocation timeout 120000 ms. Current time: 2018-05-08 11:27:29.913. Start time: 2018-05-08 11:25:29.458. Total elapsed time: 120455 ms.
</code></pre>
<p>Sometimes it cannot even connect to the first member as I got OperationTimeoutException after client says that it connected to 10.1.0.151 member. Funny thing is that sometimes it works all fine:( And when I only have one replica of hazelcast pods it works predictable & fine. Thus, I believe this is because of LoadBalancer service which distributes requests equally among target pods and that there is something wrong with this set-up. </p>
<p>I suppose that client should be able to connect to any node it wants as any node can store requested item in its map but I don't know how to set up such a configuration in k8s.</p>
<p>Question is: how should I configure services in k8s so that client apps can talk to all members? Or this is not the case and it should work in a different way?</p>
<p>Am I missing something?</p>
| <blockquote>
<p>Thus, I believe this is because of LoadBalancer service which distributes requests equally among target pods and that there is something wrong with this set-up.</p>
</blockquote>
<p>Yes, it is true.</p>
<blockquote>
<p>how should I configure services in k8s so that client apps can talk to all members?</p>
</blockquote>
<p>Service can only balance requests between available pods.</p>
<blockquote>
<p>Or this is not the case and it should work in a different way?</p>
</blockquote>
<p>Yes, that is a good idea. The simplest way is to create one service per <code>hazelcast</code> replica. And that is practically the only way to do that.</p>
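<p>A rough sketch of that approach, assuming the members run as a StatefulSet named <code>hazelcast</code> (the <code>statefulset.kubernetes.io/pod-name</code> label lets a Service target exactly one pod):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: hazelcast-0
spec:
  type: LoadBalancer
  ports:
  - port: 10236
    targetPort: 5701
  selector:
    statefulset.kubernetes.io/pod-name: hazelcast-0   # selects only the first member
# ...plus an analogous Service (hazelcast-1) selecting pod hazelcast-1, and so on
</code></pre>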
|
<p><a href="http://grs-preprodkubemaster01:5601/kibana" rel="nofollow noreferrer">http://grs-preprodkubemaster01:5601/kibana</a></p>
<p>I have followed the docs and installed Kibana. When I used the service as type: LoadBalancer, the service wasn't
coming up, so I deleted the type: LoadBalancer and let it default to ClusterIP, and it came up fine. (Note I don't have AWS.)
But I am not sure how to access the UI; I tried this URL but it's not working:
<a href="http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana" rel="nofollow noreferrer">http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana</a>
Any ideas how to access the Kibana UI? I checked the service and deployment and everything is a green check.</p>
<p>Another thing I tried is this URL, based on the API server address I got from the command kubectl cluster-info:
<a href="https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy" rel="nofollow noreferrer">https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy</a>
However, this is showing me this error</p>
<pre><code>{
kind: "Status",
apiVersion: "v1",
metadata: { },
status: "Failure",
message: "services "kibana-logging" is forbidden: User "system:anonymous" cannot get services/proxy in the namespace "kube-system"",
reason: "Forbidden",
details: {
name: "kibana-logging",
kind: "services"
},
code: 403
}
</code></pre>
<p>So, as another try I used Kibana service as NodePort, but that didn't work either.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "Kibana"
spec:
selector:
k8s-app: kibana-logging
type: NodePort
ports:
- port: 5601
protocol: TCP
targetPort: ui
nodePort: 30887
$ kubectl -n kube-system get rc,svc,cm,po
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/elasticsearch-logging ClusterIP 10.98.10.182 <none> 9200/TCP 12m
svc/heapster ClusterIP 10.107.184.85 <none> 80/TCP 3d
svc/kibana-logging NodePort 10.102.254.129 <none> 5601:30887/TCP 12m
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 3d
svc/kubernetes-dashboard ClusterIP 10.105.30.246 <none> 80/TCP 3d
svc/monitoring-influxdb ClusterIP 10.109.144.39 <none> 8086/TCP 3d
</code></pre>
<p>I would like to know what URL I should be using to access the Kibana UI. Please note that I have not tried to do kubectl proxy and I would like to have it work without it.</p>
| <p>Use the NodePort you defined in your service:</p>
<pre><code>https://10.123.24.107:30887
</code></pre>
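<p>To double-check the address and port to use (it has to be a node's IP, not the ClusterIP, and depending on your setup the scheme may be plain <code>http</code>):</p>
<pre><code>kubectl get nodes -o wide                        # pick a node's INTERNAL-IP / EXTERNAL-IP
kubectl -n kube-system get svc kibana-logging    # confirm the 5601:30887/TCP mapping
</code></pre>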
|
<p>I have a k8s cluster of 2 hazelcast instances and one client application. Target is to have many clients and at least 2 hazelcast members.
I've set up a LoadBalancer type service in k8s to expose hazelcast instances</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hazelcast-service
labels:
app: hazelcast-service
spec:
type: LoadBalancer
ports:
- port: 10236
targetPort: 5701
selector:
app: hazelcast
</code></pre>
<p>And when it comes for client to start with given config:</p>
<pre><code>clientConfig.getNetworkConfig().addAddress("127.0.0.1:10236");
</code></pre>
<p>in recognizes a hazelcast members:</p>
<pre><code>May 08, 2018 11:25:21 AM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is STARTING
May 08, 2018 11:25:22 AM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is STARTED
May 08, 2018 11:25:22 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_0 [dev] [3.9.3] Trying to connect to [127.0.0.1]:10236 as owner member
May 08, 2018 11:25:22 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_0 [dev] [3.9.3] Authenticated with server [10.1.0.151]:5701, server version:3.10 Local address: /127.0.0.1:60102
May 08, 2018 11:25:22 AM com.hazelcast.client.spi.impl.ClientMembershipListener
INFO: hz.client_0 [dev] [3.9.3]
Members [2] {
Member [10.1.0.148]:5701 - b0e4a52f-0170-47f2-8ff3-74d9b67f45f5
Member [10.1.0.151]:5701 - 1355caa4-5c2b-4366-bd5b-b504f4f0ae4f
}
May 08, 2018 11:25:22 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_0 [dev] [3.9.3] Setting ClientConnection{alive=true, connectionId=1, channel=NioChannel{/127.0.0.1:60102->/127.0.0.1:10236}, remoteEndpoint=[10.1.0.151]:5701, lastReadTime=2018-05-08 11:25:22.420, lastWriteTime=2018-05-08 11:25:22.418, closedTime=never, lastHeartbeatRequested=never, lastHeartbeatReceived=never, connected server version=3.10} as owner with principal ClientPrincipal{uuid='28696aaf-e678-47ee-8c7d-a79ba7a0079a', ownerUuid='1355caa4-5c2b-4366-bd5b-b504f4f0ae4f'}
May 08, 2018 11:25:22 AM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is CLIENT_CONNECTED
May 08, 2018 11:25:22 AM com.hazelcast.internal.diagnostics.Diagnostics
INFO: hz.client_0 [dev] [3.9.3] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
</code></pre>
<p>and when it tries to connect to the second instance (10.1.0.151) it also seems to be fine:</p>
<pre><code>May 08, 2018 11:25:29 AM com.hazelcast.core.LifecycleService
INFO: hz.client_1 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is STARTING
May 08, 2018 11:25:29 AM com.hazelcast.core.LifecycleService
INFO: hz.client_1 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is STARTED
May 08, 2018 11:25:29 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_1 [dev] [3.9.3] Trying to connect to [127.0.0.1]:10236 as owner member
May 08, 2018 11:25:29 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_1 [dev] [3.9.3] Authenticated with server [10.1.0.148]:5701, server version:3.10 Local address: /127.0.0.1:60113
May 08, 2018 11:25:29 AM com.hazelcast.client.spi.impl.ClientMembershipListener
INFO: hz.client_1 [dev] [3.9.3]
Members [2] {
Member [10.1.0.148]:5701 - b0e4a52f-0170-47f2-8ff3-74d9b67f45f5
Member [10.1.0.151]:5701 - 1355caa4-5c2b-4366-bd5b-b504f4f0ae4f
}
May 08, 2018 11:25:29 AM com.hazelcast.client.connection.ClientConnectionManager
INFO: hz.client_1 [dev] [3.9.3] Setting ClientConnection{alive=true, connectionId=1, channel=NioChannel{/127.0.0.1:60113->/127.0.0.1:10236}, remoteEndpoint=[10.1.0.148]:5701, lastReadTime=2018-05-08 11:25:29.455, lastWriteTime=2018-05-08 11:25:29.453, closedTime=never, lastHeartbeatRequested=never, lastHeartbeatReceived=never, connected server version=3.10} as owner with principal ClientPrincipal{uuid='a04aa2ca-626d-4d1a-a366-38c0dbc4781f', ownerUuid='b0e4a52f-0170-47f2-8ff3-74d9b67f45f5'}
May 08, 2018 11:25:29 AM com.hazelcast.core.LifecycleService
INFO: hz.client_1 [dev] [3.9.3] HazelcastClient 3.9.3 (20180216 - 539b124) is CLIENT_CONNECTED
May 08, 2018 11:25:29 AM com.hazelcast.internal.diagnostics.Diagnostics
INFO: hz.client_1 [dev] [3.9.3] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
</code></pre>
<p>but immediately after the above message I got another one (it seems to be a connection problem with the first member that my client connected to):</p>
<pre><code>Constructor threw exception; nested exception is com.hazelcast.core.OperationTimeoutException: ClientInvocation{clientMessage = ClientMessage{length=72, correlationId=272, operation=Client.createProxy, messageType=5, partitionId=-1, isComplete=true, isRetryable=false, isEvent=false, writeOffset=0}, objectName = hz:impl:mapService, target = address [10.1.0.151]:5701, sendConnection = null} timed out because exception occurred after client invocation timeout 120000 ms. Current time: 2018-05-08 11:27:29.913. Start time: 2018-05-08 11:25:29.458. Total elapsed time: 120455 ms.
</code></pre>
<p>Sometimes it cannot even connect to the first member, as I get an OperationTimeoutException after the client says it connected to the 10.1.0.151 member. The funny thing is that sometimes it all works fine :( And when I only have one replica of the Hazelcast pod it works predictably and fine. Thus, I believe this is caused by the LoadBalancer service, which distributes requests equally among the target pods, and that there is something wrong with this set-up.</p>
<p>I suppose the client should be able to connect to any node it wants, as any node can store the requested item in its map, but I don't know how to set up such a configuration in k8s.</p>
<p>Question is: how should I configure services in k8s so that client apps can talk to all members? Or this is not the case and it should work in a different way?</p>
<p>Am I missing something?</p>
| <p>If your Hazelcast client is inside the Kubernetes cluster, you don't really need the LoadBalancer type. A simple ClusterIP or headless service would suffice. Hazelcast supports Kubernetes discovery mode. I suggest trying ClusterIP or a headless service (clusterIP: None), for example as sketched below.</p>
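<p>For illustration, a minimal sketch of a headless service, assuming the same <code>app: hazelcast</code> pod label used in the question (the service name is a placeholder):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: hazelcast-service
spec:
  clusterIP: None          # headless: DNS resolves to the individual pod IPs
  selector:
    app: hazelcast
  ports:
  - port: 5701
    targetPort: 5701
</code></pre>
<p>With this, the client (or Hazelcast's Kubernetes discovery) can reach each member directly instead of having the load balancer pick an arbitrary pod for every connection.</p>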
|
<p>I've been experimenting with <a href="https://github.com/heptio/contour" rel="nofollow noreferrer">contour</a> as an alternative ingress controller on a test GKE kubernetes cluster.</p>
<p>Following the contour <a href="https://github.com/heptio/contour/blob/master/docs/deploy-options.md" rel="nofollow noreferrer">deployment docs</a> with a few modifications, I've got a working setup serving test HTTP responses.</p>
<p>First, I created a "helloworld" pod that serves http responses, exposed via a NodePort service and an ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: helloworld
spec:
replicas: 4
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: "helloworld-http"
image: "nginxdemos/hello:plain-text"
imagePullPolicy: Always
resources:
requests:
cpu: 250m
memory: 256Mi
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- helloworld
topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
name: helloworld-svc
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: helloworld
sessionAffinity: None
type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helloworld-ingress
spec:
backend:
serviceName: helloworld-svc
servicePort: 80
</code></pre>
<p>Then, I created a deployment for <code>contour</code> that's directly copied from their docs:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: heptio-contour
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: contour
namespace: heptio-contour
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: contour
name: contour
namespace: heptio-contour
spec:
selector:
matchLabels:
app: contour
replicas: 2
template:
metadata:
labels:
app: contour
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9001"
prometheus.io/path: "/stats"
prometheus.io/format: "prometheus"
spec:
containers:
- image: docker.io/envoyproxy/envoy-alpine:v1.6.0
name: envoy
ports:
- containerPort: 8080
name: http
- containerPort: 8443
name: https
command: ["envoy"]
args: ["-c", "/config/contour.yaml", "--service-cluster", "cluster0", "--service-node", "node0", "-l", "info", "--v2-config-only"]
volumeMounts:
- name: contour-config
mountPath: /config
- image: gcr.io/heptio-images/contour:master
imagePullPolicy: Always
name: contour
command: ["contour"]
args: ["serve", "--incluster"]
initContainers:
- image: gcr.io/heptio-images/contour:master
imagePullPolicy: Always
name: envoy-initconfig
command: ["contour"]
args: ["bootstrap", "/config/contour.yaml"]
volumeMounts:
- name: contour-config
mountPath: /config
volumes:
- name: contour-config
emptyDir: {}
dnsPolicy: ClusterFirst
serviceAccountName: contour
terminationGracePeriodSeconds: 30
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: contour
topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
name: contour
namespace: heptio-contour
spec:
ports:
- port: 80
name: http
protocol: TCP
targetPort: 8080
- port: 443
name: https
protocol: TCP
targetPort: 8443
selector:
app: contour
type: LoadBalancer
---
</code></pre>
<p>The default and heptio-contour namespaces now look like this:</p>
<pre><code>$ kubectl get pods,svc,ingress -n default
NAME READY STATUS RESTARTS AGE
pod/helloworld-7ddc8c6655-6vgdw 1/1 Running 0 6h
pod/helloworld-7ddc8c6655-92j7x 1/1 Running 0 6h
pod/helloworld-7ddc8c6655-mlvmc 1/1 Running 0 6h
pod/helloworld-7ddc8c6655-w5g7f 1/1 Running 0 6h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/helloworld-svc NodePort 10.59.240.105 <none> 80:31481/TCP 34m
service/kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 7h
NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/helloworld-ingress * y.y.y.y 80 34m
$ kubectl get pods,svc,ingress -n heptio-contour
NAME READY STATUS RESTARTS AGE
pod/contour-9d758b697-kwk85 2/2 Running 0 34m
pod/contour-9d758b697-mbh47 2/2 Running 0 34m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/contour LoadBalancer 10.59.250.54 x.x.x.x 80:30882/TCP,443:32746/TCP 34m
</code></pre>
<p>There are 2 publicly routable IP addresses:</p>
<ul>
<li>x.x.x.x - a GCE TCP load balancer that forwards to the contour pods</li>
<li>y.y.y.y - a GCE HTTP load balancer that forwards to the helloworld pods via the helloworld-ingress</li>
</ul>
<p>A <code>curl</code> on both public IPs returns a valid HTTP response from the helloworld pods.</p>
<pre><code># the TCP load balancer
$ curl -v x.x.x.x
* Rebuilt URL to: x.x.x.x/
* Trying x.x.x.x...
* TCP_NODELAY set
* Connected to x.x.x.x (x.x.x.x) port 80 (#0)
> GET / HTTP/1.1
> Host: x.x.x.x
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< server: envoy
< date: Mon, 07 May 2018 14:14:39 GMT
< content-type: text/plain
< content-length: 155
< expires: Mon, 07 May 2018 14:14:38 GMT
< cache-control: no-cache
< x-envoy-upstream-service-time: 1
<
Server address: 10.56.4.6:80
Server name: helloworld-7ddc8c6655-w5g7f
Date: 07/May/2018:14:14:39 +0000
URI: /
Request ID: ec3aa70e4155c396e7051dc972081c6a
# the HTTP load balancer
$ curl -v http://y.y.y.y
* Rebuilt URL to: y.y.y.y/
* Trying y.y.y.y...
* TCP_NODELAY set
* Connected to y.y.y.y (y.y.y.y) port 80 (#0)
> GET / HTTP/1.1
> Host: y.y.y.y
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.8
< Date: Mon, 07 May 2018 14:14:24 GMT
< Content-Type: text/plain
< Content-Length: 155
< Expires: Mon, 07 May 2018 14:14:23 GMT
< Cache-Control: no-cache
< Via: 1.1 google
<
Server address: 10.56.2.8:80
Server name: helloworld-7ddc8c6655-mlvmc
Date: 07/May/2018:14:14:24 +0000
URI: /
Request ID: 41b1151f083eaf30368cf340cfbb92fc
</code></pre>
<p>Is it by design that I have two public IPs? Which one should I use for customers? Can I choose based on my preference between a TCP and HTTP load balancer? </p>
| <p>You probably have the GLBC ingress controller configured (<a href="https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller</a>), so it programs a GCE HTTP load balancer for your ingress as well.</p>
<p>Could you try using the following ingress definition, which pins the ingress to contour?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "contour"
name: helloworld-ingress
spec:
backend:
serviceName: helloworld-svc
servicePort: 80
</code></pre>
<p>If you would like to be sure that your traffic goes via contour, you should use the <code>x.x.x.x</code> IP.</p>
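<p>As a quick sanity check (a sketch based on the headers already visible in your curl output): responses routed through contour are served by Envoy, while the GLBC path shows the nginx and <code>Via: 1.1 google</code> headers:</p>
<pre><code>$ curl -sI http://x.x.x.x | grep -i '^server:'
server: envoy

$ curl -sI http://y.y.y.y | grep -iE '^(server|via):'
Server: nginx/1.13.8
Via: 1.1 google
</code></pre>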
|
<p>I'm currently stuck and don't know how to proceed.</p>
<p>This is my Spring Boot application.properties</p>
<pre><code>...
spring.datasource.driverClassName=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://${POSTGRES_HOST}:5432/postgres
spring.datasource.username=${POSTGRES_USER}
spring.datasource.password=${POSTGRES_PASSWORD}
spring.datasource.testWhileIdle=true
spring.datasource.validationQuery=SELECT 1
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.jpa.hibernate.naming-strategy=org.hibernate.cfg.ImprovedNamingStrategy
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
#Setup SSL
server.port: 8443
server.ssl.key-store: ${TLS_CERTIFICATE}
server.ssl.key-store-password: ${TLS_PASSWORD}
server.ssl.keyStoreType: PKCS12
server.ssl.keyAlias: fundtr
...
</code></pre>
<p>My Deployment yaml for Spring Boot Application:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: f-app
namespace: default
spec:
replicas: 1
template:
metadata:
name: f-app
labels:
app: f-app
spec:
containers:
- name: f-app
image: eu.gcr.io/..../...
env:
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: postgres-config
key: postgres_user
- name: POSTGRES_PASSWORD
valueFrom:
configMapKeyRef:
name: postgres-config
key: postgres_password
- name: POSTGRES_HOST
valueFrom:
configMapKeyRef:
name: hostname-config
key: postgres_host
- name: TLS-CERTIFICATE
valueFrom:
secretKeyRef:
name: f-tls
key: Certificate.p12
- name: TLS-PASSWORD
valueFrom:
secretKeyRef:
name: f-tls
key: password
</code></pre>
<p>This is how I create the secret in Kubernetes:</p>
<pre><code>kubectl create secret generic f-tls --from-file=Certificate.p12 --from-literal=password=changeit
</code></pre>
<p>When it's deployed, I'm getting:</p>
<pre><code>State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: ContainerCannotRun
Message: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:295: setting oom score for ready process caused \"write /proc/13895/oom_score_adj: invalid argument\""
</code></pre>
<p>When I remove the secrets from the deployment yaml it works fine, but I cannot figure out the root cause of this issue. I'm using Google Cloud Platform Container Engine.</p>
| <p>This is my deployment.yaml, which uses a p12 key and password stored in Kubernetes secrets, created just like in your example. It works fine for me when making SSL curl calls. I read the p12 key and the password from files in a secret mounted as a read-only volume, instead of injecting the key through an environment variable. Hope it helps; see the sketch after the manifest for how this could map onto your setup.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: deployment-name
spec:
replicas: 3
selector:
matchLabels:
app: app-name
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
minReadySeconds: 30
template:
metadata:
labels:
app: app-name
spec:
volumes:
- name: application
emptyDir: {}
- name: secrets
secret:
secretName: key.p12
containers:
- name: php-fpm
image: index.docker.io/docker_account/docker_image:image_tag
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9000
volumeMounts:
- name: application
mountPath: /app
- name: secrets
mountPath: /api-p12-keys
readOnly: true
imagePullSecrets:
- name: docker-auth
</code></pre>
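<p>Applied to the setup in the question, a minimal sketch could look like the fragment below. It mounts the <code>f-tls</code> secret as a file and keeps only the plain-text password as an environment variable; the mount path and the idea that <code>TLS_CERTIFICATE</code> holds a file path are assumptions, not something tested against your image:</p>
<pre><code>      containers:
      - name: f-app
        image: eu.gcr.io/..../...
        env:
        - name: TLS_CERTIFICATE            # assumption: server.ssl.key-store expects a file path
          value: /tls/Certificate.p12
        - name: TLS_PASSWORD               # plain-text value, fine as an env var
          valueFrom:
            secretKeyRef:
              name: f-tls
              key: password
        volumeMounts:
        - name: tls
          mountPath: /tls
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: f-tls
</code></pre>
<p>Also note that the original deployment injects the values as <code>TLS-CERTIFICATE</code>/<code>TLS-PASSWORD</code> (with dashes), while <code>application.properties</code> references <code>${TLS_CERTIFICATE}</code>/<code>${TLS_PASSWORD}</code> (with underscores), so those names need to be aligned either way.</p>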
|