<p>I have a PHP application with a <code>public</code> folder that contains an <code>index.php</code> handling PHP requests, as well as static files.</p> <p>I would like to deliver static files via an nginx container directly, and use php-fpm only for dynamic requests.</p> <p>How can that be achieved? As I understand, there are two ways: </p> <ol> <li>Create two separate containers with the same application folder: the first runs nginx, which serves static files and delegates dynamic requests to the second, where php-fpm is located.</li> <li>Use a <code>persistentVolumeClaim</code>, but that also requires some kind of magic for synchronization (<code>initContainers</code>).</li> </ol> <p>What is the best practice?</p>
<p>Let's begin by saying that there are two pods: an nginx pod and a PHP pod. Two things need to be associated with these pods:</p> <ul> <li>Static files (the <code>public</code> folder)</li> <li>The configuration file (<code>nginx.conf</code>)</li> </ul> <p>I would prefer to use a PersistentVolumeClaim for the static files and a ConfigMap for the configuration file. </p> <p>Solution to your issue:</p> <p>The PHP pod will have the following k8s resources:</p> <ul> <li>Deployment (containers)</li> <li>Service (to expose it internally)</li> <li>PersistentVolumeClaim (for the public folder) </li> <li>ConfigMap (environment variables such as database connection information)</li> </ul> <p>The nginx pod will have the following k8s resources:</p> <ul> <li>Deployment (container)</li> <li>Service (to expose it internally or externally)</li> <li>ConfigMap (customised <code>nginx.conf</code>)</li> </ul> <p>The Deployment resources then reference the other resources, such as the PersistentVolumeClaim and the ConfigMap.</p> <p>As I understand init containers, they run before the main container and can prepare the file system for it (for example, copying the application code into the shared volume), while the Pod itself provides the shared resources to the main container, such as the service account, PVC and ConfigMap.</p> <p>Note that if only the main container restarts, the Pod (and its IP address) stays the same; if you recreate the Pod, its init containers run again and the Pod gets a new IP address. </p>
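<p>For illustration, here is a minimal sketch of such an nginx ConfigMap. It assumes the PHP Service is named <code>php-fpm</code> and listens on port 9000, and that the shared <code>public</code> folder is mounted at <code>/var/www/public</code> in the nginx container - adjust those names to your setup.</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
      listen 80;
      root /var/www/public;
      index index.php;

      # Serve static files directly; fall back to index.php for app routes
      location / {
        try_files $uri /index.php?$query_string;
      }

      # Delegate PHP requests to the php-fpm Service
      location ~ \.php$ {
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
      }
    }
</code></pre> <p>The nginx Deployment then mounts this ConfigMap at <code>/etc/nginx/conf.d</code> and mounts the same PersistentVolumeClaim (read-only) that holds the <code>public</code> folder.</p>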
<p>I have a service named <code>foo</code> with a selector to <code>foo</code> pod:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: name: foo name: foo namespace: bar spec: clusterIP: 172.20.166.230 ports: - port: 8080 protocol: TCP targetPort: 8080 selector: name: foo sessionAffinity: None type: ClusterIP </code></pre> <p>I have a deployment/pod named <code>foo</code> with a label <code>foo</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "3" generation: 3 labels: name: foo name: foo namespace: bar spec: selector: matchLabels: name: foo template: metadata: labels: name: foo spec: containers: image: my/image:tag imagePullPolicy: Always name: foo ports: - containerPort: 8080 protocol: TCP dnsPolicy: ClusterFirst </code></pre> <p>I make a request from <code>foo</code> pod to <code>foo</code> host, host resolved but requests just don't pass through:</p> <pre><code>$ curl -vvv foo:8080 * Rebuilt URL to: foo:8080/ * Trying 172.20.166.230... * TCP_NODELAY set </code></pre> <p>Is this supposed to work like that in Kubernetes?</p> <p>I don't have any problems requesting <code>foo</code> from other pods from the same namespace.</p> <p>The reason why I don't simply use <code>localhost:8080</code> (which works fine) is that I have the same config file with hosts used by different pods, so I don't want to write a specific logic per pod.</p> <p>Kubernetes 1.6.4, single-node cluster, iptables mode.</p>
<p>It looks like this is the default behavior when using iptables as the kube-proxy mode.</p>
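<p>If you do need this to work, one knob worth checking (this is only the usual suspect for "a pod cannot reach itself through its own Service", not a confirmed diagnosis of your cluster) is the kubelet's <code>--hairpin-mode</code> flag, which controls whether traffic is allowed to loop back to the originating pod:</p> <pre><code># See which hairpin mode the kubelet is running with
ps aux | grep [k]ubelet | tr ' ' '\n' | grep hairpin

# If it is "none", try restarting the kubelet with hairpin NAT enabled, e.g.
#   --hairpin-mode=hairpin-veth   (or promiscuous-bridge)
sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>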
<p>Is it possible to update an external loadbalancer service in kubernetes with a new spec from an yaml file. </p> <p>I can do rolling updates of the deployments but how can I update services, primarily not changing the external IP address. </p> <p>Thanks, Rajesh</p>
<p>If you do not want to edit the live, already-deployed object, you can edit the service manifest file on your computer and use the <strong>apply</strong> action instead of <strong>create</strong>. For instance: </p> <pre><code>kubectl apply -f service-manifest.yaml </code></pre> <p><strong>This updates the Service without changing its already-existing external IP.</strong> </p>
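<p>If you ever need to recreate the Service instead of applying over it, another option - where the cloud provider supports it - is to pin the address in the manifest via <code>spec.loadBalancerIP</code>. A minimal sketch with placeholder values:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # the external IP you want to keep
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
</code></pre>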
<p>I am using <strong>Helm</strong> to manage services in a <strong>Kubernetes</strong> cluster and use a <strong>chartmuseum</strong> to manage charts of my services.</p> <p>After I've added the repo with the command <code>helm repo add URL</code>, helm can interact with my repo, e.g. install available charts with <code>helm install NAME</code>, or view all charts with <code>helm search</code>.</p> <p>Now I face the following problem:</p> <p>After creating or updating a new chart, I upload it with the command:<br> <code>curl --data-binary "@FILENAME.tgz" http://REPOURL:REPOPORT/api/charts</code>.</p> <p>When I perform <code>helm search</code>, I expect to see the new chart or the updated version of the chart. This is <strong>not</strong> the case. Further, when I perform <code>helm fetch NAME</code>, I receive the old version of the updated chart.</p> <p>In order to see the new or updated chart and use it, I have to re-add the repo (with the same name, otherwise it gets confusing).</p> <p>Is there a way to refresh the list of available charts, without re-adding the repo?</p>
<p>The state of a repository is cached on your disk. When you update the remote repository you need to run <code>helm repo update</code> to retrieve the update before you can access it. </p>
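<p>For example, the refresh cycle looks like this (Helm v2 commands; <code>NAME</code> stands in for your chart name):</p> <pre><code># after uploading the new .tgz to the chartmuseum
helm repo update        # refresh the locally cached index of all added repos
helm search NAME        # should now list the new chart version
helm fetch NAME         # fetches the updated chart
</code></pre>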
<p>OpenStack uses messaging (RabbitMQ by default I think ?) for the communication between the nodes. On the other hand Kubernetes (lineage of Google's internal Borg) uses RPC. Docker's swarm uses RPC as well. Both are gRPC/protofbuf based which seems to be used heavily inside Google as well.</p> <p>I understand that messaging platforms like Kafka are widely used for streaming data and log aggregation. But systems like OpenStack, Kubernetes, Docker Swarm etc. need specific interactions between the nodes and RPC seems like a natural choice because it allows APIs to be defined for specific operations.</p> <p>Did OpenStack chose messaging after evaluating the pros and cons of messaging vs RPC? Are there any good blogs/system reviews comparing the success of large scale systems using messaging vs RPC? Does messaging offer any advantage over RPC in scaled distributed systems?</p>
<blockquote> <p>Does messaging offer any advantage over RPC in scaled distributed systems?</p> </blockquote> <p>Persistence is the big advantage of a messaging system. Another point is broadcasting, which you would have to implement on top of <a href="https://grpc.io/" rel="noreferrer">gRPC</a> yourself. Service discovery and security can be further reasons: with a messaging system you only need to keep one component highly secure, while with gRPC you might have many points where somebody could break into the system, and message queue systems usually already have some kind of service discovery implemented, whereas with gRPC you have to use at least another library for it.</p> <blockquote> <p>Are there any good blogs/system reviews comparing the success of large scale systems using messaging vs RPC?</p> </blockquote> <p>It's not really a "vs"; there are different use cases. Messaging systems are generally slower than RPC protocols (not only gRPC), and the reason is simple: you introduce a middleware between two or more nodes. In exchange they provide persistence, broadcasting, Pub/Sub, etc.</p> <blockquote> <p>Did OpenStack choose messaging after evaluating the pros and cons of messaging vs RPC?</p> </blockquote> <p>Probably.</p> <blockquote> <p>Does messaging offer any advantage over RPC in scaled distributed systems?</p> </blockquote> <ol> <li>Ready-to-use solution, just use a client</li> <li>Persistence</li> <li>Ready-to-use service discovery</li> <li>Pub/Sub pattern</li> <li>Failure tolerance</li> </ol> <p>Most of these points you would need to implement yourself with gRPC.</p>
<p>I want to scale an application with workers.<br> There could be 1 worker or 100, and I want to scale them seamlessly.<br> The idea is using replica set. However due to domain-specific reasons, the appropriate way to scale them is for each worker to know its: ID and the total number of workers.</p> <p>For example, in case I have 3 workers, I'd have this: </p> <pre><code>id:0, num_workers:3 id:1, num_workers:3 id:2, num_workers:3 </code></pre> <p>Is there a way of using kubernetes to do so?<br> I pass this information in command line arguments to the app, and I assume it would be fine having it in environment variables too.</p> <p>It's ok on size changes for all workers to be killed and new ones spawned. </p>
<p>Before giving the kubernetes-specific answer, I wanted to point out that it seems like the problem is trying to push cluster-coordination down into the app, which is almost by definition harder than using a distributed system primitive designed for that task. For example, if every new worker identifies themselves in <a href="https://github.com/coreos/etcd#readme" rel="nofollow noreferrer">etcd</a>, then they can <a href="https://github.com/coreos/etcd/blob/v3.2.11/Documentation/learning/api.md#watch-streams" rel="nofollow noreferrer">watch keys</a> to detect changes, meaning no one needs to destroy a running application just to update its list of peers, their contact information, their capacity, current workload, whatever interesting information you would enjoy having while building a distributed worker system.</p> <p>But, on with the show:</p> <hr> <p>If you want stable identifiers, then <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> is the modern answer to that. Whether that is an <em>exact</em> fit for your situation depends on whether (for your problem domain) <code>id:0</code> being "rebooted" still counts as <code>id:0</code> or the fact that it has stopped and started now disqualifies it from being <code>id:0</code>.</p> <p>The running list of cluster size is tricky. If you are willing to be flexible in the launch mechanism, then you can have a <a href="https://github.com/mattn/etcdenv#readme" rel="nofollow noreferrer">pre-launch binary</a> populate the environment right before spawning the actual worker (that example is for reading from etcd directly, but the same principle holds for interacting with the kubernetes API, then launching).</p> <p>You could do that same trick in a more static manner by having an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainer</a> write the current state of affairs to a file, which the app would then read in. Or, due to all Pod containers sharing networking, the app could contact a "sidecar" container on <code>localhost</code> to obtain that information via an API.</p> <p>So far so good, except for the</p> <blockquote> <p>on size changes for all workers to be killed and new one spawned</p> </blockquote> <p>The best answer I have for that requirement is that if the app must know its peers at launch time, then I am pretty sure you have left the realm of "scale $foo --replicas=5" and entered into the "destroy the peers and start all afresh" realm, with <code>kubectl delete pods -l some-label=of-my-pods</code>; which is, thankfully, what <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#on-delete" rel="nofollow noreferrer">updateStrategy: type: OnDelete</a> does, when combined with the <code>delete pods</code> command.</p>
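<p>To make the StatefulSet route concrete, here is a minimal sketch. It assumes a hypothetical <code>my/worker</code> image whose entrypoint accepts <code>--id</code> and <code>--num_workers</code> flags, and it derives the ID from the pod name ordinal (worker-0, worker-1, ...); the <code>NUM_WORKERS</code> value has to be kept in sync with <code>replicas</code> by whatever applies the manifest:</p> <pre><code>apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: worker
spec:
  serviceName: worker
  replicas: 3
  updateStrategy:
    type: OnDelete        # combined with "kubectl delete pods -l app=worker" on resize
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: my/worker:latest
          env:
            - name: NUM_WORKERS
              value: "3"
          command: ["/bin/sh", "-c"]
          # the pod name is worker-0, worker-1, ...; strip everything up to the last "-"
          args:
            - exec /app/worker --id="${HOSTNAME##*-}" --num_workers="${NUM_WORKERS}"
</code></pre>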
<p>I am trying to install the azure-cli in the <code>dind:latest</code> image based on alpine.</p> <p>For context, I want to use it to connect to AKS and deploy an app to Kubernetes via Gitlab.</p> <p>In my <code>gitlab-ci.yml</code> file I start with this</p> <pre><code>image: docker:latest services: - docker:dind </code></pre> <p>and then I try to install the azure-cli</p> <pre><code>deploy-to-k8s--dev: # k8s namespace "dev" stage: deploy-to-k8s # image: microsoft/azure-cli script: # I need the azure cli in the dind:latest container # so install bash,curl and finally the cli - apk update - apk upgrade - apk add bash - apk add --no-cache curl - curl -L https://aka.ms/InstallAzureCli | bash - az </code></pre> <p>and I get the following error</p> <pre><code>$ curl -L https://aka.ms/InstallAzureCli | bash % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 167 100 167 0 0 167 0 0:00:01 --:--:-- 0:00:01 644 100 1367 100 1367 0 0 1367 0 0:00:01 --:--:-- 0:00:01 1367 mktemp: Invalid argument ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1 </code></pre> <p>It is the first time that I try to work with Alpine Linux and I have no idea how it is built and what tools it uses...</p> <p>Has anyone any suggestion?</p> <p><strong>EDIT</strong></p> <p>based on the accepted answer this is the final code that works</p> <pre><code>deploy-to-k8s--dev: # k8s namespace "dev" stage: deploy-to-k8s script: # I need the azure cli in the dind:latest container # so install bash,curl and finally the cli - apk update - apk upgrade - apk add bash make py-pip - apk add --virtual=build gcc libffi-dev musl-dev openssl-dev python2-dev - pip install azure-cli - apk del --purge build - az -h </code></pre>
<p>This helped me in one of my Alpine-based images:</p> <pre><code>apk update
apk add bash py-pip
apk add --virtual=build gcc libffi-dev musl-dev openssl-dev python-dev
pip install azure-cli
apk del --purge build
</code></pre>
<h1>Question</h1> <p>What is the purpose of the K8S_HOST_URL configuration parameter in EFK? In EFK, K8S_HOST_URL exists as an environment variable and it looks being used to communicate to the Kubernetes API server by fluentd as being specified in the <a href="https://github.com/openshift/origin-aggregated-logging/blob/877d84296ce113fbafca6177612741054ed5a584/fluentd/configs.d/openshift/filter-k8s-meta.conf" rel="nofollow noreferrer">filter-k8s-meta.conf</a> of the fluentd configuration.</p> <p>Looking for the documentation but not in the OpenShift <a href="https://docs.openshift.com/container-platform/3.4/install_config/aggregate_logging.html" rel="nofollow noreferrer">Aggregating Container Logs</a> documentation. Searched in Google but could not find a definite answer.</p> <p>Please suggest the documentation which explains in detail. </p>
<h2>From RHEL</h2> <p>The environment variable K8S_HOST_URL in fluentd is an internal URL for reaching the master API. It is not documented yet.</p>
<pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: helloworld-rules spec: rules: - host: helloworld-v1.example.com http: paths: - path: / backend: serviceName: helloworld-v1 servicePort: 80 - host: helloworld-v2.example.com http: paths: - path: / backend: serviceName: helloworld-v2 servicePort: 80 </code></pre> <p>I'm making kubernetes cluster and I will apply that cloudPlatform Isolated(not aws or google). When creating an ingress for service I can choose host url but that is not exist anywhere(that address is not registrated something like DNS server) So I can't access that url. Visiting this IP just gives a 404. how can I get or configure URL that can access external browser :(... </p>
<p>It depends on how you configure your nginx ingress controller.</p> <p>You should have a Service configured which is the entry point when accessing from outside; see the docs: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress</a>.</p> <p>So basically you have a Service that points to the ingress controller, and this redirects the traffic to your pods based on the Ingress objects.</p> <p>Ingress -> Services -> Pods</p> <p>Since you don't run on AWS or Google, you would have to use <code>externalIPs</code> or <code>NodePort</code> and configure the Service accordingly:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: http
  externalIPs:
    - 80.11.12.10
</code></pre> <p>DNS then needs to be managed by whatever you use for your domains so the hostnames resolve; for local testing you can just add entries to your <code>/etc/hosts</code> file.</p> <p>Basically, on AWS or Google you would just create a Service with <code>type: LoadBalancer</code> and point your DNS records at the balancer address (a CNAME for AWS and the IP for Google).</p>
<p>I'd like to check that my kubernetes helm chart does not define unused values in <code>values.yaml</code>. This should include any subcharts such that if you've defined <code>subchart.foo.bar: ???</code> in the top-level <code>values.yaml</code> that key is definitely used in the subchart, or possibly as a short-cut mentioned in the <code>subchart/values.yaml</code>.</p> <p>This is needed to prevent us from shipping bogus "documentation" in the <code>values.yaml</code>, for example if a key in a subchart has been changed or removed.</p> <p>Ideally there would also be some possibility to report on which subchart values have not been overridden in the top-level chart, though this is less concerning.</p> <p>Are there any existing tools that can help with this?</p>
<p>AFAIK, there isn't a tool for that. However, it shouldn't be that hard to make one, even in bash: flatten all the keys in <code>values.yaml</code> into paths like <code>test.test1.test2</code> and grep for each path recursively in the templates folder (a rough sketch is below). If you want to read YAML from bash, you can install <code>shyaml</code>. If you know how to code in Python, even better.</p>
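<p>As a very rough, untested sketch of that idea (assuming <code>shyaml</code> is installed and only checking the top-level keys - nested paths would need a small recursive helper):</p> <pre><code>#!/usr/bin/env bash
# For each top-level key in values.yaml, check whether ".Values.&lt;key&gt;" is
# referenced anywhere in the chart's templates or bundled subcharts.
for key in $(shyaml keys &lt; values.yaml); do
  if ! grep -R -q "\.Values\.${key}" templates/ charts/ 2&gt;/dev/null; then
    echo "possibly unused value: ${key}"
  fi
done
</code></pre>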
<p>I am trying to ship my K8s pod logs to Elasticsearch using Filebeat.</p> <p>I am following the guide online here: <a href="https://www.elastic.co/guide/en/beats/filebeat/6.0/running-on-kubernetes.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/beats/filebeat/6.0/running-on-kubernetes.html</a></p> <p>Everything works as expected however I want to filter out events from system pods. My updated config looks like:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: filebeat-prospectors namespace: kube-system labels: k8s-app: filebeat kubernetes.io/cluster-service: "true" data: kubernetes.yml: |- - type: log paths: - /var/lib/docker/containers/*/*.log multiline.pattern: '^\s' multiline.match: after json.message_key: log json.keys_under_root: true processors: - add_kubernetes_metadata: in_cluster: true namespace: ${POD_NAMESPACE} - drop_event.when.regexp: or: kubernetes.pod.name: "weave-net.*" kubernetes.pod.name: "external-dns.*" kubernetes.pod.name: "nginx-ingress-controller.*" kubernetes.pod.name: "filebeat.*" </code></pre> <p>I am trying to ignore <code>weave-net</code>, <code>external-dns</code>, <code>ingress-controller</code> and <code>filebeat</code> events via:</p> <pre><code>- drop_event.when.regexp: or: kubernetes.pod.name: "weave-net.*" kubernetes.pod.name: "external-dns.*" kubernetes.pod.name: "nginx-ingress-controller.*" kubernetes.pod.name: "filebeat.*" </code></pre> <p>However they continue to arrive in Elasticsearch.</p>
<p>The conditions need to be a list:</p> <pre><code>- drop_event.when.regexp:
    or:
      - kubernetes.pod.name: "weave-net.*"
      - kubernetes.pod.name: "external-dns.*"
      - kubernetes.pod.name: "nginx-ingress-controller.*"
      - kubernetes.pod.name: "filebeat.*"
</code></pre> <p>I'm not sure if your order of parameters works. One of my working examples looks like this:</p> <pre><code>- drop_event:
    when:
      or:
        # Exclude traces from Zipkin
        - contains.path: "/api/v"
        # Exclude Jolokia calls
        - contains.path: "/jolokia/?"
        # Exclude pinging metrics
        - equals.path: "/metrics"
        # Exclude pinging health
        - equals.path: "/health"
</code></pre>
<p>I'm trying to create a kubernetes cluster following the document at: <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a></p> <p>First I have installed kubeadm with docker image on Coreos (1520.9.0) inside VirtualBox with Vagrant:</p> <pre><code>docker run -it \ -v /etc:/rootfs/etc \ -v /opt:/rootfs/opt \ -v /usr/bin:/rootfs/usr/bin \ -e K8S_VERSION=v1.8.4 \ -e CNI_RELEASE=v0.6.0 \ xakra/kubeadm-installer:0.4.7 coreos </code></pre> <p>This was my kubeadm init:</p> <p><code>kubeadm init --pod-network-cidr=10.244.0.0/16</code></p> <p>When run the command:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml </code></pre> <p>It returns:</p> <pre><code>clusterrole "flannel" configured clusterrolebinding "flannel" configured serviceaccount "flannel" configured configmap "kube-flannel-cfg" configured daemonset "kube-flannel-ds" configured </code></pre> <p>But if I check "kubectl get pods --all-namespaces"</p> <p>It returns:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-coreos1 1/1 Running 0 18m kube-system kube-apiserver-coreos1 1/1 Running 0 18m kube-system kube-controller-manager-coreos1 0/1 CrashLoopBackOff 8 19m kube-system kube-scheduler-coreos1 1/1 Running 0 18m </code></pre> <p>With <code>journalctl -f -u kubelet</code> I can see this error: <code>Unable to update cni config: No networks found in /etc/cni/net.d</code></p> <p>I suspect that something was wrong with the command <code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml</code></p> <p>Is there a way to know why this command doesn't work? Can I get some logs from anywhere?</p>
<p>Just tonight I used <a href="https://github.com/kubernetes-incubator/kubespray#readme" rel="noreferrer">kubespray</a> to provision a vagrant cluster, on CoreOS, using flannel (vxlan), and I was also mystified about how flannel could be a Pod inside Kubernetes</p> <p>It turns out, <a href="https://github.com/kubernetes-incubator/kubespray/blob/79417e07ca4033d162eb94d8c66a82bf8f44f9ce/roles/network_plugin/flannel/templates/cni-flannel.yml.j2#L88-L89" rel="noreferrer">as seen here</a>, that they are using <a href="https://github.com/coreos/flannel-cni#readme" rel="noreferrer">flannel-cni</a> image <a href="https://github.com/kubernetes-incubator/kubespray/blob/79417e07ca4033d162eb94d8c66a82bf8f44f9ce/roles/download/defaults/main.yml#L66-L67" rel="noreferrer">from quay.io</a> to write out CNI files using a flannel side-car plus hostDir volume-mounts; it outputs <code>cni-conf.json</code> (that configures CNI to use flannel), and then <code>net-conf.json</code> (that configures the subnet and backend used by flannel).</p> <p>I hope the jinja2 mustache syntax doesn't obfuscate the answer, but I found it very interesting to see how the Kubernetes folks chose to do it "for real" to compare and contrast against the example <code>DaemonSet</code> given in the flannel-cni README. I guess that's the long way of saying: try the descriptors in the flannel-cni README, then if it doesn't work see if they differ in some way from the known-working kubespray setup</p> <p><em>update:</em> as a concrete example, observe that <a href="https://github.com/coreos/flannel/blob/v0.9.1/Documentation/kube-flannel.yml#L111" rel="noreferrer">the Documentation yaml</a> doesn't include the <a href="https://github.com/kubernetes-incubator/kubespray/blob/79417e07ca4033d162eb94d8c66a82bf8f44f9ce/roles/network_plugin/flannel/templates/cni-flannel.yml.j2#L68" rel="noreferrer"><code>--iface=</code></a> switch, and if your Vagrant setup is using both NAT and "private_network" then it likely means flannel is binding to <code>eth0</code> (the NAT one) and not <code>eth1</code> with a more static IP. I saw that caveat mentioned in the docs, but can't immediately recall where in order to cite it</p> <p><em>update 2</em></p> <blockquote> <p>Is there a way to know why this command doesn't work? Can I get some logs from anywhere?</p> </blockquote> <p>One may almost always access the logs of a Pod (even a statically defined one such as <code>kube-controller-manager-coreos1</code>) in the same manner: <code>kubectl --namespace=kube-system logs kube-controller-manager-coreos1</code>, and in the CrashLoopBackOff circumstance, adding in the <code>-p</code> for "-p"revious will show the logs from the most recent crash (but only for a few seconds, not indefinitely), and occasionally <code>kubectl --namespace=kube-system describe pod kube-controller-manager-coreos1</code> will show helpful information in either the Events section at the bottom, or in the "Status" block near the top if it was Terminated for cause</p> <p>In the case of a very bad failure, such as the apiserver failing to come up (and thus <code>kubectl logs</code> won't do anything), then ssh-ing to the Node and using a mixture of <code>journalctl -u kubelet.service --no-pager --lines=150</code> and <code>docker logs ${the_sha_or_name}</code> to try and see any error text. 
You will almost certainly need <code>docker ps -a</code> in the latter case to find the exited container's sha or name, but that same "only for a few seconds" applies, too, as dead containers will be pruned after some time.</p> <p>In the case of vagrant, one can ssh into the VM in one of several ways:</p> <ul> <li><code>vagrant ssh coreos1</code></li> <li><code>vagrant ssh-config &gt; ssh-config &amp;&amp; ssh -F ssh-config coreos1</code></li> <li>or if it has a "private_network" address, such as 192.168.99.101 or such, then you can usually <code>ssh -i ~/.vagrant.d/insecure_private_key [email protected]</code> but one of the first two are almost always more convenient</li> </ul>
<p>It is known that some applications aren't aware of Linux kernel isolation and virtualization features such as cgroups. This includes system utils like <code>top</code>, <code>free</code> and <code>ps</code>, but also platforms like Java.</p> <p>I've recently read <a href="https://very-serio.us/2017/12/05/running-jvms-in-kubernetes/" rel="nofollow noreferrer">an article</a> which suggests that when running JVMs in Kubernetes, you should enforce manual limits on the Java heap size to avoid errors.</p> <p>I cannot find anywhere whether this is also true for NodeJS. Do I need to implement something like above to set <code>--max_old_space_size=XXX</code> on my NodeJS application in Kubernetes?</p>
<p>A NodeJS process will try to allocate memory regardless of the container limits, just like Java. </p> <p>Setting a limit on the process will help stop the OS from killing the process, particularly in constrained environments where Node might try to allocate past the memory limit even though it could probably run inside the limit. </p> <p>If you are running an app that is close to using the memory limit, then adding the memory limit settings just changes the failure scenario: NodeJS and the JVM will have a chance to exit with an out-of-memory error (OOM) rather than be killed by the operating system. The process will likely slow to a crawl as it nears the memory limit and the garbage collector tries its best to keep the process below the limit. </p> <p>Note that the old space is only one of multiple memory spaces in NodeJS. Only the new space (semi-spaces) and old space can be limited. </p> <pre><code>--max_semi_space_size (max size of a semi-space (in MBytes), the new space consists of two semi-spaces)
      type: int  default: 0
--max_old_space_size (max size of the old space (in Mbytes))
      type: int  default: 0
</code></pre> <p>The other heap spaces are generally small and static enough not to worry about. </p> <p>Modules that run native code can allocate memory outside the heap and can't be limited by an option. </p>
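<p>For example, in a Kubernetes container spec this could look like the fragment below (image name, file name and numbers are placeholders); the idea is simply to keep the V8 old-space cap comfortably below the container's memory limit so Node fails with an OOM error instead of being killed by the kernel:</p> <pre><code># fragment of a pod/deployment spec
containers:
  - name: my-node-app
    image: my/node-app:latest
    resources:
      limits:
        memory: "512Mi"
    # leave headroom for the semi-spaces, buffers and native allocations
    command: ["node", "--max_old_space_size=400", "server.js"]
</code></pre>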
<p>I have been creating pods with <code>type:deployment</code> but I see that some documentation uses <code>type:pod</code>, more specifically <a href="http://kubernetes.io/docs/user-guide/pods/multi-container/" rel="noreferrer">the documentation for multi-container pods</a>:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: "" labels: name: "" namespace: "" annotations: [] generateName: "" spec: ? "// See 'The spec schema' for details." : ~ </code></pre> <p>But to create pods I can just use <a href="http://kubernetes.io/docs/user-guide/deployments/" rel="noreferrer">a deployment type</a>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: "" spec: replicas: 3 template: metadata: labels: app: "" spec: containers: etc </code></pre> <p>I noticed the pod documentation says:</p> <blockquote> <p>The create command can be used to create a pod directly, or it can create a pod or pods through a Deployment. It is highly recommended that you use a Deployment to create your pods. It watches for failed pods and will start up new pods as required to maintain the specified number. If you don’t want a Deployment to monitor your pod (e.g. your pod is writing non-persistent data which won’t survive a restart, or your pod is intended to be very short-lived), you can create a pod directly with the create command.</p> <p>Note: We recommend using a Deployment to create pods. You should use the instructions below only if you don’t want to create a Deployment.</p> </blockquote> <p>But this raises the question of what <code>kind:pod</code> is good for? Can you somehow reference pods in a deployment? I didn't see a way. It looks like what you get with pods is some extra metadata but none of the deployment options such as <code>replica</code> or a restart policy. What good is a pod that doesn't persist data, survives a restart? I think I'd be able to create a multi-container pod with a deployment as well.</p>
<p>Radek's answer is very good, but I would like to pitch in from my experience: you will almost never use an <strong>object</strong> with the <strong>kind</strong> <strong>Pod</strong>, because that rarely makes sense in practice. </p> <p>You normally want a <strong>deployment</strong> object - or another Kubernetes API object like a <strong>replication controller</strong> or <strong>replicaset</strong> - to keep the <strong>replicas</strong> (pods) alive (that's kind of the point of using Kubernetes).</p> <p>What you will use in practice for a typical application are:</p> <ol> <li><p>A <strong>Deployment object</strong> (where you specify your app's container/containers) that hosts your app's container along with some other specifications.</p></li> <li><p>A <strong>Service object</strong>, which is like a grouping object and gives a so-called virtual IP (cluster IP) to the <code>pods</code> that have a certain label - and those <code>pods</code> are basically the app containers that you deployed with the former <strong>Deployment</strong> object.</p></li> </ol> <p>You need the <strong>Service</strong> object because the <code>pods</code> from the Deployment object can be killed, scaled up and down, and you can't rely on their IP addresses because they are not persistent.</p> <p>So you need an object like a <strong>Service</strong> that gives those <code>pods</code> a stable IP.</p> <p>Just wanted to give you some context around <code>pods</code>, so you know how things work together.</p> <p>Hope that clears a few things up for you; not long ago I was in your shoes :)</p>
<p>We are currently rolling out Pimcore 5 in our Docker Kubernetes environment but we didn't find an appropriate answer for the following question yet:</p> <p>Which folders need to be persistent?</p> <p>The documentation points out that the folders <strong>/var</strong> and <strong>/web/var</strong> are used to safe logs and assets (from the admin interface). Are there any other folders that need to be persistent to keep the environment stable even after a container restart / rebuild?</p> <p>Are there any problems with updates or downsides if we run a setup like this:</p> <ul> <li>Git Repository for our Code Base</li> <li>PHP-fpm Docker image that holds the code base (plus nginx and redis container)</li> <li>Consistent Database</li> </ul> <p>We would also like to share our results when we managed to come up with a good solution.</p> <p>Thank you very much! I know this question is kind of specific :)</p>
<p>Yes, <code>/var</code> and <code>/web/var</code> need to be on a persistent and shared filesystem.</p> <p>Further hints regarding this setup are in the documentation: </p> <ul> <li><p><a href="https://pimcore.com/docs/master/Development_Documentation/Installation_and_Upgrade/System_Setup_and_Hosting/Cluster_Setup.html" rel="nofollow noreferrer">https://pimcore.com/docs/master/Development_Documentation/Installation_and_Upgrade/System_Setup_and_Hosting/Cluster_Setup.html</a></p></li> <li><p><a href="https://pimcore.com/docs/4.6.x/Development_Documentation/Installation_and_Upgrade/System_Setup_and_Hosting/Amazon_AWS_Setup/index.html" rel="nofollow noreferrer">https://pimcore.com/docs/4.6.x/Development_Documentation/Installation_and_Upgrade/System_Setup_and_Hosting/Amazon_AWS_Setup/index.html</a></p></li> </ul>
<p> Hi there, I have a docker container that is a php backend. I have created a kubernetes pod of this container. This is what my yml file looks like:</p> <pre><code> apiVersion: v1 kind: Pod metadata: name: backend spec: containers: - name: backend image: 000.dkr.ecr.eu-west-1.amazonaws.com/fullstackapp ports: - containerPort: 8000 </code></pre> <p>However I want to be able to connect my MySql database (which is also a docker container) to the backend in the same pod. However I have no idea how to go about doing this. Any help would be appreciated!</p>
<p>Well,</p> <p>Since you have dockerized your app (you made a docker image), you should also use a docker image for your MySQL database.</p> <p>But here is the kicker: you also need to create <strong>services</strong> for your app pod and your MySQL pod.</p> <p>You can find all the details in the k8s <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">documentation</a> (which is really good).</p> <p>To make myself clear:</p> <p>1.) First create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a> object for your app.</p> <p>2.) Then make a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> for your app.</p> <p>Rinse and repeat for the MySQL database (a minimal sketch follows below).</p> <p>1.) You need the <strong>deployment</strong> object (and not the Pod kind), because the deployment object keeps your pods alive when one breaks: for instance, if you have three replicas (pods), the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">replicaSet</a> that the deployment object uses will make sure that there are three replicas of your app.</p> <p>2.) <strong>Services</strong> will group your pods (via <strong>labels</strong>), because the pods that the deployment object generates are short-lived (<strong>ephemeral</strong>), meaning their IP addresses are unstable and you won't be able to rely on them.</p> <p>So you will use services, which give you a cluster IP (<strong>virtual IP</strong>) that other objects can use - for instance, when your app wants to connect to the MySQL database.</p> <p>You can use the <strong>name</strong> of the MySQL <strong>service</strong> in your app's configuration files.</p> <p>So, basically, that's how you would connect a <strong>MySQL pod</strong> to your <strong>app's pod</strong>.</p> <p>Take a look at the <a href="https://www.katacoda.com/" rel="nofollow noreferrer">Katacoda</a> project; it gives you a playground to learn this kind of stuff.</p> <p>Tom</p>
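<p>As a minimal sketch of the MySQL half (all names and the password are placeholders - in practice put the password in a Secret), the backend can then reach the database at the host name <code>mysql</code> on port 3306:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
</code></pre>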
<p>While initializing <code>kubeadm</code> I am getting following errors. I have also tried command <code>kubeadm reset</code> before doing <code>kubadm init</code>. Kubelet is also running and command I have used for same is <code>systemctl enable kubelet &amp;&amp; systemctl start kubelet</code>. Following is log after executing kubeadm init</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters. [init] Using Kubernetes version: v1.8.2 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks [preflight] WARNING: Connection to "https://192.168.78.48:6443" uses proxy "http://user:[email protected]:3128/". If that is not intended, adjust your proxy settings [preflight] WARNING: Running with swap on is not supported. Please disable swap or set kubelet's --fail-swap-on flag to false. [preflight] Starting the kubelet service [kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0) [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [steller.india.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.140.48] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] This often takes around a minute; or longer if the control plane images have to be pulled. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused. </code></pre> </div> </div> </p> <p>Following is output of <code>journalctl -u kubelet</code></p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>-- Logs begin at Thu 2017-11-02 16:20:50 IST, end at Fri 2017-11-03 17:11:12 IST. -- Nov 03 16:36:48 steller.india.com systemd[1]: Started kubelet: The Kubernetes Node Agent. Nov 03 16:36:48 steller.india.com systemd[1]: Starting kubelet: The Kubernetes Node Agent... Nov 03 16:36:48 steller.india.com kubelet[52511]: I1103 16:36:48.998467 52511 feature_gate.go:156] feature gates: map[] Nov 03 16:36:48 steller.india.com kubelet[52511]: I1103 16:36:48.998532 52511 controller.go:114] kubelet config controller: starting controller Nov 03 16:36:48 steller.india.com kubelet[52511]: I1103 16:36:48.998536 52511 controller.go:118] kubelet config controller: validating combination of defaults and flag Nov 03 16:36:49 steller.india.com kubelet[52511]: I1103 16:36:49.837248 52511 client.go:75] Connecting to docker on unix:///var/run/docker.sock Nov 03 16:36:49 steller.india.com kubelet[52511]: I1103 16:36:49.837282 52511 client.go:95] Start docker client with request timeout=2m0s Nov 03 16:36:49 steller.india.com kubelet[52511]: W1103 16:36:49.839719 52511 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d Nov 03 16:36:49 steller.india.com systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE Nov 03 16:36:49 steller.india.com kubelet[52511]: I1103 16:36:49.846959 52511 feature_gate.go:156] feature gates: map[] Nov 03 16:36:49 steller.india.com kubelet[52511]: W1103 16:36:49.847216 52511 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider sho Nov 03 16:36:49 steller.india.com kubelet[52511]: error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such Nov 03 16:36:49 steller.india.com systemd[1]: Unit kubelet.service entered failed state. Nov 03 16:36:49 steller.india.com systemd[1]: kubelet.service failed. Nov 03 16:37:00 steller.india.com systemd[1]: kubelet.service holdoff time over, scheduling restart. Nov 03 16:37:00 steller.india.com systemd[1]: Started kubelet: The Kubernetes Node Agent. Nov 03 16:37:00 steller.india.com systemd[1]: Starting kubelet: The Kubernetes Node Agent... 
Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.134702 52975 feature_gate.go:156] feature gates: map[] Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.134763 52975 controller.go:114] kubelet config controller: starting controller Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.134767 52975 controller.go:118] kubelet config controller: validating combination of defaults and flag Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.141273 52975 client.go:75] Connecting to docker on unix:///var/run/docker.sock Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.141364 52975 client.go:95] Start docker client with request timeout=2m0s Nov 03 16:37:00 steller.india.com kubelet[52975]: W1103 16:37:00.143023 52975 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.149537 52975 feature_gate.go:156] feature gates: map[] Nov 03 16:37:00 steller.india.com kubelet[52975]: W1103 16:37:00.149780 52975 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider sho Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.179873 52975 certificate_manager.go:361] Requesting new certificate. Nov 03 16:37:00 steller.india.com kubelet[52975]: E1103 16:37:00.180392 52975 certificate_manager.go:284] Failed while requesting a signed certificate from the master: Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.181404 52975 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/k Nov 03 16:37:00 steller.india.com kubelet[52975]: W1103 16:37:00.223876 52975 manager.go:157] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api servic Nov 03 16:37:00 steller.india.com kubelet[52975]: W1103 16:37:00.224005 52975 manager.go:166] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.so Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.262573 52975 fs.go:139] Filesystem UUIDs: map[17856e0b-777f-4065-ac97-fb75d7a1e197:/dev/dm-1 2dc6a878- Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.262604 52975 fs.go:140] Filesystem partitions: map[/dev/sdb:{mountpoint:/D major:8 minor:16 fsType:xfs Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.268969 52975 manager.go:216] Machine: {NumCores:56 CpuFrequency:2600000 MemoryCapacity:540743667712 Hu Nov 03 16:37:00 steller.india.com kubelet[52975]: 967295 Mtu:1500} {Name:eno49 MacAddress:14:02:ec:82:57:30 Speed:10000 Mtu:1500} {Name:eno50 MacAddress:14:02:ec:82:57:3 Nov 03 16:37:00 steller.india.com kubelet[52975]: evel:1} {Size:262144 Type:Unified Level:2}]} {Id:13 Threads:[12 40] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Nov 03 16:37:00 steller.india.com kubelet[52975]: s:[26 54] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level: Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.270145 52975 manager.go:222] Version: {KernelVersion:3.10.0-229.14.1.el7.x86_64 ContainerOsVersion:Cen Nov 03 16:37:00 steller.india.com kubelet[52975]: I1103 16:37:00.271263 52975 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaultin Nov 03 16:37:00 steller.india.com kubelet[52975]: error: failed to run Kubelet: Running with swap on is not supported, please disable swap! 
or set --fail-swap-on flag to Nov 03 16:37:00 steller.india.com systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE Nov 03 16:37:00 steller.india.com systemd[1]: Unit kubelet.service entered failed state. Nov 03 16:37:00 steller.india.com systemd[1]: kubelet.service failed.</code></pre> </div> </div> </p>
<p>Just disable swap on the machine: <strong>sudo swapoff -a</strong>.</p>
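<p>For example (assuming swap is configured in <code>/etc/fstab</code>), to turn it off now and keep it off across reboots:</p> <pre><code># turn swap off immediately
sudo swapoff -a

# comment out the swap entry so it stays off after a reboot (GNU sed)
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
</code></pre>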
<p>How does one deploy a node app from Gitlab-ci to GKE? I already have cluster integration enabled and functional. But the documentation on what that means is almost non existent. I don't know what variables having a GKE cluster connected gives me or how to use it in my CI.</p> <p><a href="https://i.stack.imgur.com/ESOyq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ESOyq.png" alt="enter image description here"></a></p> <p>Here's my gitlab-ci.yml, it puts the image in gitlabhq Registry, meaning I'll have to copy it to google or somehow setup GKE to use a private registry, which no one seems to have managed to do.</p> <pre><code>image: docker:git services: - docker:dind stages: - build - test - release - deploy variables: DOCKER_DRIVER: overlay2 CONTAINER_TEST_IMAGE: registry.gitlab.com/my-proj:$CI_BUILD_REF_NAME CONTAINER_RELEASE_IMAGE: registry.gitlab.com/my-proj:latest before_script: - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com build: stage: build script: - docker build -t $CONTAINER_TEST_IMAGE . - docker push $CONTAINER_TEST_IMAGE .test1: stage: test script: - docker run $CONTAINER_TEST_IMAGE npm run eslint .test2: stage: test script: - docker run $CONTAINER_TEST_IMAGE npm run mocha release-image: stage: release script: - docker pull $CONTAINER_TEST_IMAGE - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE - docker push $CONTAINER_RELEASE_IMAGE only: - master deploy: ?????? </code></pre>
<p>I haven't used Auto DevOps integration, but I can try and generalize a working approach.</p> <p>If you have tiller installed on the k8s cluster, it's best if you create a helm chart for your application. If you haven't done that already, there is a a tutorial on how to do that here: <a href="https://github.com/kubernetes/helm/blob/master/docs/charts.md" rel="nofollow noreferrer">https://github.com/kubernetes/helm/blob/master/docs/charts.md</a> (check Using Helm to Manage Charts)</p> <p>A basic deployment.yaml managed by helm would look like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: {{ template "name" . }} labels: app: {{ template "name" . }} chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: replicas: {{ .Values.replicaCount }} template: metadata: labels: app: {{ template "name" . }} release: {{ .Release.Name }} spec: containers: - name: {{ .Chart.Name }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" imagePullPolicy: {{ .Values.image.pullPolicy }} </code></pre> <p>and the corresponding values in the .Values file:</p> <pre><code>image: repository: registry.gitlab.com/my-proj tag: latest </code></pre> <p>A sample .gitlab-ci.yml file should look like this:</p> <pre><code>... deploy: stage: deploy script: - helm upgrade &lt;your-app-name&gt; &lt;path-to-the-helm-chart&gt; --install --set image.tag=$CI_BUILD_REF_NAME </code></pre> <p>The build phase publishes the docker image and the deploy phase installs a helm chart which tries to download that image from <code>registry.gitlab.com/my-proj</code>.</p> <p>I take that the k8s cluster has access to that registry. If the registry is private, you need to create a secret in kubernetes that holds the authorization token (unless it is automatically created): <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p> <p>The default pipeline image you're using (<code>image: docker:git</code>) doesn't have the helm CLI installed, so you should change that image with one that has helm and kubectl installed. In the gitlab tutorial, they seem to be doing the installation on each run: <a href="https://gitlab.com/gitlab-org/gitlab-ci-yml/blob/master/Auto-DevOps.gitlab-ci.yml" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/gitlab-ci-yml/blob/master/Auto-DevOps.gitlab-ci.yml</a> (check <code>function install_dependencies()</code>)</p>
<p>I'm using a vanilla minikube environment.</p> <p>I'm not specifying any service account-related instructions in my bare-bones simple Pod <code>.yaml</code> file.</p> <p>Inside a deployed Pod, <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> is empty. What are the possible causes for this?</p>
<p>As mentioned <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="noreferrer">in the docs</a></p> <blockquote> <p>In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:</p> </blockquote> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: build-robot automountServiceAccountToken: false </code></pre> <blockquote> <p>In version 1.6+, you can also opt out of automounting API credentials for a particular pod:</p> </blockquote> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-pod spec: serviceAccountName: build-robot automountServiceAccountToken: false </code></pre> <p>So double check your pod file and check your ServiceAccount configuration with <code>kubectl describe serviceaccount build-robot</code> to see if you are disabling the automount.</p>
<p>I am trying to deploy a Spring Boot application using configuration data from Kubernetes cluster. I have a simple RestController that prints a message by reading from a Kubernetes cluster. </p> <pre><code> private String message = "Message not coming from Kubernetes config map"; @RequestMapping(value="/echo", method=GET) public String printKubeConfig() { return message; } </code></pre> <p>Specified the name of the config map in my application.yml</p> <pre><code>spring: application: name: echo-configmap </code></pre> <p>echo-configmap</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: echo-configmap data: application.properties: |- message=Hello from dev Kubernetes Configmap application_qa.properties: |- message=Hello from qa Kubernetes Configmap </code></pre> <p>I have several environments like qa, int, test etc </p> <ol> <li>What's the best way to specify environment specific properties in the config map? And how to access them in Spring boot application?<br> Ex: if the application is deployed in qa, my service should return the message "Hello from qa Kubernetes Configmap" </li> <li>We also have plans to read these configuration files from GIT in future. How to handle that usecase?</li> </ol>
<p>Let me try and provide an answer which I think gives you what you need, without using any tools beyond what you'll have installed on most boxes. Maybe try this first, and if you find the approach becomes difficult to manage and scale, move onto something more sophisticated.</p> <h2>Step 1: Version control configmaps per environment</h2> <p>Create a folder like <code>k8s/configmaps</code> or something, and create one configmap per environment:</p> <pre><code>k8s/configmaps/properties.dev.yaml
k8s/configmaps/properties.qa.yaml
k8s/configmaps/properties.sit.yaml
k8s/configmaps/properties.uat.yaml
</code></pre> <p>Each configmap should contain your environment specific settings.</p> <h2>Step 2: Have a namespace per environment</h2> <p>Create a k8s namespace per environment, such as:</p> <pre><code>application-dev
application-qa
application-sit
application-uat
</code></pre> <h2>Step 3: Create the configmap per environment</h2> <p>A little bash will help here:</p> <pre><code>#!/usr/bin/env bash
# apply-configmaps.sh
namespace="application-${ENVIRONMENT}"
for configmap in ./k8s/configmaps/*.${ENVIRONMENT}.yaml; do
    echo "Processing ConfigMap $configmap"
    kubectl apply -n ${namespace} -f $configmap
done
</code></pre> <p>Now all you need to do to create <em>or update</em> configmaps for any environment is:</p> <pre><code>ENVIRONMENT=dev ./apply-configmaps.sh
</code></pre> <h2>Step 4: Finish the job with CI/CD</h2> <p>Now you can create a CI/CD pipeline - if your configmap source changes, just run the command shown above.</p> <h2>Summary</h2> <p>Based on primitive commands and no special tools you can:</p> <ul> <li>Version control config</li> <li>Manage config per environment</li> <li>Update or create config when the config code changes</li> <li>Easily apply the same approach in a CI/CD pipeline if needed</li> </ul> <p>I would strongly recommend you follow this basic 'first principles' approach before jumping into more sophisticated tools to solve the same problems; in many cases you can do it yourself without much effort, learn the key concepts, and save the more sophisticated tooling till later if you really need it.</p> <p>Hope that helps!</p>
<p>I am new to Kubernetes and have a basic question. I installed the Canonical Distribution of Kubernetes on a bare metal Ubuntu "Localhost" setup with LXD. </p> <p>I am able to run a simple deployment/service for a NGINX cluster. However, I am confused as to how I can actually expose it externally using my server hostip.</p> <p>For instance:</p> <pre><code>kubectl run my-nginx --image=nginx --replicas=3 --port=80 kubectl expose deployment my-nginx --type=NodePort kubectl describe services my-nginx --&gt; Shows NodePort as 31198 </code></pre> <p>I can successfully run a CURL to any of the Worker Nodes:</p> <pre><code>curl 10.112.134.139:31198 curl 10.112.134.41:31198 </code></pre> <p>However, my hostip is 192.168.X.Y. How can I actually expose this so I can access using the HOSTIP?</p>
<p>From what you describe, it looks like you configured a "containerized cluster" in your local environment. Therefore, you can access the NodePort by hitting those containerized worker nodes, but not via the host IP itself (as there is nothing configured on that local host, right?). </p> <p>So, what you would need to do is establish a way to forward traffic from the host to the "containerized cluster", so the NodePort becomes reachable. </p> <p>One way that comes to mind would be to configure a route like this on the machine you are trying to access from:</p> <p>10.112.134.0/24 - gateway 192.168.X.Y</p> <pre><code> sudo route add -net 10.112.134.0/24 gw 192.168.X.Y </code></pre> <p>You may also need to check that IP forwarding (the sysctl setting <code>net.ipv4.ip_forward</code>) is enabled on the host.</p>
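<p>For example, to check and enable forwarding on the host that routes into the cluster:</p> <pre><code>sysctl net.ipv4.ip_forward            # check the current value
sudo sysctl -w net.ipv4.ip_forward=1  # enable it for the running system
</code></pre>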
<p>Can someone guide the configuration for auto discover for K8s. The Prometheus server is outside of the cluster. I tried <a href="https://movio.co/en/blog/prometheus-service-discovery-kubernetes/" rel="noreferrer">Service Discovery With Kubernetes</a> and someone mentioned in this <a href="https://groups.google.com/forum/#!topic/prometheus-developers/kll2itGFkVg" rel="noreferrer">discussion</a> </p> <blockquote> <p>I'm not yet a K8s expert enough to explain all the details here, but fundamentally it's perfectly possible to run Prometheus outside of the cluster (and required for things like redundant cross-cluster meta-monitoring). Cf. the <code>in_cluster</code> config option in <a href="http://prometheus.io/docs/operating/configuration/#kubernetes-sd-configurations-kubernetes_sd_config" rel="noreferrer">http://prometheus.io/docs/operating/configuration/#kubernetes-sd-configurations-kubernetes_sd_config</a> . You need to jump through certificate hoops if you run it outside.</p> </blockquote> <p>So, I made a simple configuration</p> <pre><code> - job_name: 'kubernetes' kubernetes_sd_configs: - # The API server addresses. In a cluster this will normally be # `https://kubernetes.default.svc`. Supports multiple HA API servers. api_servers: - https://xxx.xx.xx.xx # Run in cluster. This will use the automounted CA certificate and bearer # token file at /var/run/secrets/kubernetes.io/serviceaccount/ in the pod. in_cluster: false # Optional HTTP basic authentication information. basic_auth: username: prometheus password: secret # Retry interval between watches if they disconnect. retry_interval: 5s </code></pre> <p>Getting <code>unknown fields in kubernetes_sd_config: api_servers, in_cluster, retry_interval"</code> or some other indentation errors</p> <p>In <a href="https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml" rel="noreferrer">sample configuration</a>, they mentioned <code>ca_file:</code>. How to get that certificate file from K8s or is there any way to specify K8s <code>config</code> file(~/.kube/config)</p>
<p>By digging though the source code I figured out, that Prometheus always uses the in cluster config, if no <code>api_server</code> is provided in the config (<a href="https://github.com/prometheus/prometheus/blob/099df0c/discovery/kubernetes/kubernetes.go#L90-L96" rel="noreferrer"><code>discovery/kubernetes/kubernetes.go#L90-L96</code></a>).</p> <p>Somehow the <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#&lt;kubernetes_sd_config" rel="noreferrer">docs</a> don't say anything about the Kubernetes configuration parameters, but the source code does (<a href="https://github.com/prometheus/prometheus/blob/099df0c5f00c45c007a9779a2e4ab51cf4d076bf/config/config.go#L1026-L1037" rel="noreferrer"><code>config/config.go#L1026-L1037</code></a>). Therefore there is not list named <code>api_servers</code>, but a single parameter named <code>api_server</code>.</p> <p>So your config should look like this (untested):</p> <pre><code> - job_name: 'kubernetes' kubernetes_sd_configs: - # The API server addresses. In a cluster this will normally be # `https://kubernetes.default.svc`. Supports multiple HA API servers. api_server: https://xxx.xx.xx.xx # Optional HTTP basic authentication information. basic_auth: username: prometheus password: secret # specify the CA tls_config: ca_file: /path/to/ca.crt ## If the actual CA file isn't available you need to disable verification: # insecure_skip_verify: true </code></pre> <p>I don't know where the <code>retry_interval</code> parameter comes from, but AFAIK this isn't a Kubernetes config parameter and it's also not part of the Prometheus config.</p>
<p>I've a kubernetes 1.8. I've deploy a lots of services but I've problem remove some pods, it's never delete.</p> <p>This is the pod describe:</p> <pre><code>Name: project-settlement-api-798c8b6688-ldclr Namespace: project Node: 10.93.96.208/10.93.96.208 Start Time: Fri, 10 Nov 2017 18:39:08 -0300 Labels: app=project-settlement-api pod-template-hash=3547462244 run=project Annotations: kubernetes.io/created-by={“kind”:“SerializedReference”,“apiVersion”:“v1",“reference”:{“kind”:“ReplicaSet”,“namespace”:“project”,“name”:“project-settlement-api-798c8b6688”,“uid”:“955c2781-c65f-11e7-ba5... Status: Terminating (expires Fri, 17 Nov 2017 10:25:24 -0300) Termination Grace Period: 0s IP: Created By: ReplicaSet/project-settlement-api-798c8b6688 Controlled By: ReplicaSet/project-settlement-api-798c8b6688 Containers: project-settlement-api: Container ID: Image: Image ID: Port: &lt;none&gt; State: Terminated Exit Code: 0 Started: Mon, 01 Jan 0001 00:00:00 +0000 Finished: Mon, 01 Jan 0001 00:00:00 +0000 Ready: False Restart Count: 0 Environment: SPRING_PROFILES_ACTIVE: docker Mounts: /opt from project-volume (rw) /var/project from project-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-qw23t (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: project-volume: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: project-pvc ReadOnly: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: &lt;none&gt; Events: &lt;none&gt; </code></pre> <p>And this is the pod list where you can find 3 pods that cannot delete:</p> <pre><code>NAME READY STATUS RESTARTS AGE project-companies-api-5847b84599-6tpbq 0/1 Terminating 0 9m project-companies-api-5847b84599-ph4xl 1/1 Running 0 4m project-companies-api-5847b84599-z5vr2 0/1 Terminating 3 11m project-middleware-85888df786-8nxlj 1/1 Running 0 14d project-settlement-api-5cb5d5859d-qrq8b 1/1 Running 0 2m project-settlement-api-798c8b6688-ldclr 0/1 Terminating 0 20d project-transactions-api-779cd489b7-xvgd8 1/1 Running 0 14d project-users-api-7d7c5cd99d-bqg88 1/1 Running 0 14d project-web-57c869d6fc-7qpk5 1/1 Running 0 14d monitor-app-6455cf9dcf-p4ww4 1/1 Running 0 16m </code></pre> <p>I've try a lot command in order to delete it but nothing happend. kubectl delete po/mypod --grace-period=0 --force --namespace mynamespace</p> <p>Anybody can help me?</p> <p>Regards</p>
<p>It looks to me like you are using a Deployment API object (the pod is controlled by a ReplicaSet, which in turn belongs to a Deployment). Therefore, if you only remove the pod it will be recreated. Try the following:</p> <pre><code>kubectl get deployments
</code></pre> <p>Get the deployment name and execute</p> <pre><code>kubectl delete deployment DEPLOYMENT_NAME
</code></pre> <p>Then it will be removed. If there is no deployment, check the replicasets as well, as shown below.</p>
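<p>A quick sketch of that last check, using the namespace and ReplicaSet name from your <code>kubectl describe</code> output (the rest is illustrative):</p> <pre><code>kubectl get replicasets -n project

# if an orphaned replicaset is still keeping the pod around:
kubectl delete replicaset project-settlement-api-798c8b6688 -n project
</code></pre>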
<p>If I execute "free" command in the pod (openshift/kubernetes), what output I will get.</p> <p>Will I get the memory utilisation of pod or memory utilisation of node where pod exist.</p>
<p>I believe it is the node. Tools like <code>free</code> read from <code>/proc/meminfo</code>, which is not namespaced per container, so they fall back to reporting node-wide values. Another example is uptime; that is also reported for the node.</p>
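<p>If what you actually want is the container's own memory usage and limit, one option (a sketch, assuming cgroup v1 paths as commonly used by Docker at that time) is to read the cgroup files from inside the container:</p> <pre><code># memory limit applied to the container (bytes)
cat /sys/fs/cgroup/memory/memory.limit_in_bytes

# current memory usage of the container (bytes)
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
</code></pre>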
<p>I saw there is no <code>sink configuration</code> for Prometheus in this <a href="https://github.com/kubernetes/heapster/blob/master/docs/sink-configuration.md" rel="nofollow noreferrer">heapster document</a>. Is there any simple way to combine these two and monitor.</p>
<p>Prometheus uses a <a href="https://prometheus.io/docs/introduction/faq/#why-do-you-pull-rather-than-push?" rel="nofollow noreferrer">pull model</a> to retrieve the data, while Heapster is a tool which pushes its metrics to a certain endpoint (push model).</p> <p>I assume you want to get Kubernetes metrics into Prometheus. You don't need Heapster for that, since cAdvisor has a Prometheus endpoint which can be scraped directly. Also the kubelet itself provides some metrics.</p> <p>The Prometheus config would look like this:</p> <pre><code>- job_name: 'kubernetes-nodes'

  kubernetes_sd_configs:
  - role: node

  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)

- job_name: 'kubernetes-cadvisor'

  kubernetes_sd_configs:
  - role: node

  relabel_configs:
  - source_labels: [__meta_kubernetes_node_address_InternalIP]
    target_label: __address__
    regex: (.*)
    replacement: $1:4194
</code></pre> <p>Assuming you are using the default cAdvisor port <a href="https://kubernetes.io/docs/reference/generated/kubelet/" rel="nofollow noreferrer">4194</a>. Also Prometheus should be able to detect the correct kubelet port.</p> <p>Additional Note: The job for scraping cAdvisor is only required when using a Kubernetes version <code>&gt;= 1.7</code>. Before that the cAdvisor metrics <a href="https://github.com/kubernetes/kubernetes/issues/48483" rel="nofollow noreferrer">accidentally got exposed via the Kubelet</a>.</p>
<p>minikube version ⏎ minikube version: v0.22.3</p> <p>I'm trying to setup various pods within a minikube instance. I'm running behind a corporate proxy which may explain some of this behavior.</p> <p>I start minikube using the following</p> <p>minikube start --docker-env HTTP_PROXY=<a href="http://corporate-proxy.com:80" rel="nofollow noreferrer">http://corporate-proxy.com:80</a> --docker-env HTTPS_PROXY=<a href="https://corporate-proxy:80" rel="nofollow noreferrer">https://corporate-proxy:80</a> --docker-env NO_PROXY=localhost,127.0.0.0/8,192.0.0.0/8</p> <p>otherwise it wont work at all. After building some images on docker I created a two services and two pods:</p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: app labels: name: app spec: containers: - name: app image: image_app ports: - containerPort: 7777 volumeMounts: - mountPath: /codeage name: code-volume readOnly: false imagePullPolicy: IfNotPresent tty: true volumes: - hostPath: path: /codeage name: code-volume --- apiVersion: v1 kind: Pod metadata: name: db labels: name: db spec: containers: - name: db image: postgres ports: - containerPort: 5432 volumeMounts: - mountPath: /var/lib/postgresql name: db-data imagePullPolicy: IfNotPresent tty: true volumes: - hostPath: path: /db-data name: db-data --- apiVersion: v1 kind: Service metadata: name: db spec: type: NodePort ports: - name: 'db-port' port: 5432 targetPort: 5432 selector: name: db --- apiVersion: v1 kind: Service metadata: name: app labels: name: app spec: type: NodePort ports: - name: apport port: 7777 targetPort: 7777 selector: name: app --- </code></pre> <p>I'm unable to ping 'db' from within(ssh) the 'app' pod:</p> <pre><code>sh-4.2# ping db PING db.default.svc.cluster.local (10.0.0.116) 56(84) bytes of data. From chicago11-rtr-3-v411.us.corporate.com (10.60.172.X) icmp_seq=1 Destination Host Unreachable ^C </code></pre> <p>As you can see though nslookup worked and provided the correct clusterIP 10.0.0.116 and hostname 'db.default.svn.cluster.local'</p> <p>I can ping the node itself. I cannot ping kube-dns...</p> <p>Anyone have any ideas?</p> <p>Is there an alternative to using the built in dns service?</p>
<p>You can't ping a Service's cluster IP: it is a virtual IP implemented by kube-proxy rules that only forward the TCP/UDP ports defined in the Service, so ICMP (ping) gets no reply. Instead, test the port <code>5432</code> defined in the service against the service IP (or the <code>db</code> DNS name), for example with telnet or netcat.</p>
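<p>For example, from inside the <code>app</code> pod (a quick sketch; use whichever of these tools is actually present in your image):</p> <pre><code># netcat
nc -vz db 5432

# or telnet
telnet db 5432

# or, since it is postgres, the real client
psql -h db -p 5432 -U postgres
</code></pre>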
<p>I've set up an insecure k8s master node using <code>hyperkube</code>, with an insecure API:</p> <pre class="lang-sh prettyprint-override"><code>docker run -d --name=k8s-apiserver --net=container:etcd gcr.io/google_containers/hyperkube:v1.8.5 /apiserver --etcd-servers=http://127.0.0.1:2378 --service-cluster-ip-range=10.0.0.1/24 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --admission-control=AlwaysAdmit </code></pre> <p>Moving on to configuring the nodes, what option to the <code>docker run -d --name=kubelet gcr.io/google_containers/hyperkube:v1.8.5 /kubelet</code> command points <code>kubelet</code> to the master <code>apiserver</code>? I can't seem to find this option using <code>--help</code>.</p>
<p>Starting from kubernetes version 1.8 you should use the <code>--kubeconfig</code> flag to specify a path to a <code>kubeconfig</code> file that describes how to connect to the API server: </p> <pre><code>--kubeconfig string    Path to a kubeconfig file, specifying how to connect to the API server. (default "/var/lib/kubelet/kubeconfig")
</code></pre> <p>where <code>/var/lib/kubelet/kubeconfig</code> is something like:</p> <pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority: ~/.kube/ca.crt
    server: https://&lt;API_IP&gt;:&lt;API_PORT&gt;
  name: dev
contexts:
- context:
    cluster: dev
    user: dev
  name: dev
current-context: dev
kind: Config
preferences: {}
users:
- name: dev
  user:
    as-user-extra: {}
    client-certificate: ~/.kube/client.crt
    client-key: ~/.kube/client.key
</code></pre> <p>So, finally you should just mount the config file inside the <code>kubelet</code> docker container:</p> <pre><code>docker run -d -v /var/lib/kubelet/kubeconfig:/var/lib/kubelet/kubeconfig --name=kubelet gcr.io/google_containers/hyperkube:v1.8.5 /kubelet
</code></pre>
<p>minikube version ⏎ minikube version: v0.22.3</p> <p>I'm trying to setup various pods within a minikube instance. I'm running behind a corporate proxy which may explain some of this behavior.</p> <p>I start minikube using the following</p> <p>minikube start --docker-env HTTP_PROXY=<a href="http://corporate-proxy.com:80" rel="nofollow noreferrer">http://corporate-proxy.com:80</a> --docker-env HTTPS_PROXY=<a href="https://corporate-proxy:80" rel="nofollow noreferrer">https://corporate-proxy:80</a> --docker-env NO_PROXY=localhost,127.0.0.0/8,192.0.0.0/8</p> <p>otherwise it wont work at all. After building some images on docker I created a two services and two pods:</p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: app labels: name: app spec: containers: - name: app image: image_app ports: - containerPort: 7777 volumeMounts: - mountPath: /codeage name: code-volume readOnly: false imagePullPolicy: IfNotPresent tty: true volumes: - hostPath: path: /codeage name: code-volume --- apiVersion: v1 kind: Pod metadata: name: db labels: name: db spec: containers: - name: db image: postgres ports: - containerPort: 5432 volumeMounts: - mountPath: /var/lib/postgresql name: db-data imagePullPolicy: IfNotPresent tty: true volumes: - hostPath: path: /db-data name: db-data --- apiVersion: v1 kind: Service metadata: name: db spec: type: NodePort ports: - name: 'db-port' port: 5432 targetPort: 5432 selector: name: db --- apiVersion: v1 kind: Service metadata: name: app labels: name: app spec: type: NodePort ports: - name: apport port: 7777 targetPort: 7777 selector: name: app --- </code></pre> <p>I'm unable to ping 'db' from within(ssh) the 'app' pod:</p> <pre><code>sh-4.2# ping db PING db.default.svc.cluster.local (10.0.0.116) 56(84) bytes of data. From chicago11-rtr-3-v411.us.corporate.com (10.60.172.X) icmp_seq=1 Destination Host Unreachable ^C </code></pre> <p>As you can see though nslookup worked and provided the correct clusterIP 10.0.0.116 and hostname 'db.default.svn.cluster.local'</p> <p>I can ping the node itself. I cannot ping kube-dns...</p> <p>Anyone have any ideas?</p> <p>Is there an alternative to using the built in dns service?</p>
<p>@sfgroups' answer is correct! ping was the wrong tool to use. Telnet, or actually using psql, worked. The actual problem was that my services weren't pointing to anything.</p> <p>I updated my app pods to have unique names and then updated the selector for the service.</p>
<p>I currently load testing my service using Gatling in AWS. I did several load tests using HTTP and my service works perfectly. No TLS handshake errors. When we moved to the HTTPS, the load test result showed <strong>TLS handshake timeout</strong> exception all over the place and finally thrown OOM because unprocessed request getting queued.</p> <p>Additional information: </p> <ul> <li>The Gatling scenario will be like this: Sends three requests and send one request with the <code>connection: close</code> header. I wanted to simulate sending three requests that is kept alive and close it at the end. </li> <li>My service is managed by Kubernetes. </li> </ul> <p>What I have done:</p> <ul> <li>I ran the load test on other Gatling instance, but the error still persists </li> <li>Restarted the AWS load balancer. Additional notes: There are no 4xx and 5xx errors, but we have client TLS negotiation errors.</li> </ul> <p>My questions:</p> <ol> <li>Is the error occurred because of the initial handshake required for the HTTPS? </li> <li>Is the error occurred because of the AWS load balancer?</li> </ol> <p>Thank you. </p>
<p>So it seems the problem was that the TLS handshake took longer than the interval at which Gatling was creating new users, so handshakes piled up. Decreasing the number of users created per second and increasing the number of requests per second (RPS) each user sends solved that.</p>
<p>I'm using <code>kube-aws</code> to run a Kubernetes cluster on AWS, and everything works as expected.</p> <p>Now, I realize that cron jobs aren't turned on in the version I'm using (<code>v1.7.10_coreos.0</code>), while the documentation for Kubernetes only states the following:</p> <blockquote> <p>For previous versions of cluster (&lt; 1.8) you need to explicitly enable batch/v2alpha1 API by passing --runtime-config=batch/v2alpha1=true to the API server (see Turn on or off an API version for your cluster for more).</p> </blockquote> <p>And the documentation directed to in that text only states this (it's the actual, full documentation):</p> <blockquote> <p>Specific API versions can be turned on or off by passing --runtime-config=api/ flag while bringing up the API server. For example: to turn off v1 API, pass --runtime-config=api/v1=false. runtime-config also supports 2 special keys: api/all and api/legacy to control all and legacy APIs respectively. For example, for turning off all API versions except v1, pass --runtime-config=api/all=false,api/v1=true. For the purposes of these flags, legacy APIs are those APIs which have been explicitly deprecated (e.g. v1beta3).</p> </blockquote> <p>I have been unsuccessful in finding information about how to change the configuration of a running cluster, and I, of course, don't want to try to re-run the command on <code>api-server</code>.</p> <p>Note that kube-aws still use <code>hyperkube</code>, and not <code>kubeadm</code>. Also, the <code>/etc/kubernetes/manifests</code>-directory only contains the <code>ssl</code>-directory.</p> <p>The setting I want to apply is this: <code>--runtime-config=batch/v2alpha1=true</code></p> <p>What is the proper way, preferably using <strong><code>kubectl</code></strong>, to apply this setting and have the <code>apiserver</code>s restarted?</p> <p>Thanks.</p>
<p><code>batch/v2alpha1=true</code> is set by default in <code>kube-aws</code>. You can find it <a href="https://github.com/kubernetes-incubator/kube-aws/blob/22673054e2e2de1f096980050a5f52060974929a/core/controlplane/config/templates/cloud-config-controller#L2040" rel="nofollow noreferrer">here</a></p>
<p>I cannot seem to figure out how to get certain metrics from GCP into Stackdriver (google monitoring) in a usable way. They can be viewed using Stackdriver's "Metrics Explorer" tool, but not saved into a graph or alerting policy. As a specific example, only a handful of the metrics outlined in this table are available:</p> <p><a href="https://cloud.google.com/monitoring/api/metrics_gcp#gcp-container" rel="nofollow noreferrer">https://cloud.google.com/monitoring/api/metrics_gcp#gcp-container</a></p> <p>Again, I can use the "Metrics Explorer" tool to immediately visualize any one of them in an ad-hoc graph, but I cannot create an alerting policy or any sort of persistent monitoring for anything except for <code>CPU Usage</code>, <code>Disk Usage</code>, <code>Page Faults</code>, and <code>Used Memory</code>. Does anyone know how to get one of these metrics (such as <code>container/cpu/usage_time</code>) into an alerting policy?</p>
<p>Metrics Explorer includes access to more metrics than are currently available for alerting. We're working on addressing this both for UIs and APIs. Please stay tuned to release notes. Thanks for using Stackdriver!</p>
<p>I am trying to loop for a count in a kubernetes helm chart like this:</p> <pre><code>reaction.mongo_url_big: mongodb://{{ for $mongocount := 0; $mongocount &lt; {{ .Values.mongodbReplicantCount }}; $mongocount++ }}{{ .Values.mongodbReleaseName }}-mongodb-replicaset-{{ $mongocount }}:{{ .Values.mongodbPort }}{{ if $mongocount &lt; {{ .Values.mongodbReplicantCount }} - 1 }},{{ end }}{{ end }}/{{ .Values.mongodbName }}?replicaSet={{ .Values.mongodbReplicaSet }} </code></pre> <p>However, go templates seem to be lacking a means of rendering a 'for' loop, <a href="https://groups.google.com/forum/#!topic/golang-nuts/ahKwwz4fRFM" rel="nofollow noreferrer">by design</a></p> <p>I want it to output something like:</p> <pre><code> reaction.mongo_url: mongodb://{{ .Values.mongodbReleaseName }}-mongodb-replicaset-0:{{ .Values.mongodbPort }},{{ .Values.mongodbReleaseName }}-mongodb-replicaset-1:{{ .Values.mongodbPort }},{{ .Values.mongodbReleaseName }}-mongodb-replicaset-2:{{ .Values.mongodbPort }}/{{ .Values.mongodbName }}?replicaSet={{ .Values.mongodbReplicaSet }} </code></pre> <p>The line in my helm chart is here: <a href="https://github.com/joshuacox/reactionetes/blob/gymongonasium/reactioncommerce/templates/configmap.yaml#L11" rel="nofollow noreferrer">https://github.com/joshuacox/reactionetes/blob/gymongonasium/reactioncommerce/templates/configmap.yaml#L11</a></p>
<p>Use range:</p> <pre><code>{{ range .Values }} {{ .MongodbReleaseName }} {{ end }} </code></pre> <p>This will output the .MongodbReleaseName (assuming that's a field) of every item in .Values. The value is assigned to . while within the range so you can simply refer to fields/functions of the individual Values. This is very like a for loop in other templating languages. You can also use it by assigning an index and value. </p>
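<p>In your specific case you need to loop a fixed number of times, which <code>range</code> alone doesn't give you. One way to sketch it, assuming the Sprig <code>until</code> function that Helm ships with (it turns a count into the list <code>[0 1 2 ...]</code>), would be something like this in the configmap template:</p> <pre><code>reaction.mongo_url: mongodb://{{- range $i := until (int .Values.mongodbReplicantCount) }}{{- if $i }},{{ end }}{{ $.Values.mongodbReleaseName }}-mongodb-replicaset-{{ $i }}:{{ $.Values.mongodbPort }}{{- end }}/{{ .Values.mongodbName }}?replicaSet={{ .Values.mongodbReplicaSet }}
</code></pre> <p>Note the <code>$</code> prefix to reach the root context from inside the <code>range</code>, and the <code>if $i</code> trick to put the comma between hosts rather than after the last one.</p>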
<p>What does Apache Mesos do that Kubernetes can't do or vice-versa? </p> <p>Mesos is a Two level scheduler. Sure it grabs resource information from every machine and gives it to the top level scheduler such that frameworks like kubernetes can use to schedule containers across machines but Kubernetes can itself schedule containers across machines (No need for Mesos from this regard). so what are few things that Apache Mesos can do that Kubernetes cannot do or vice-versa?</p>
<p>Both Mesos and Kubernetes are container orchestrators. This means <strong>you can achieve the same features but some kinds of tasks could be done easier</strong> (read: better) on one of them. In fact, you can run Kubernetes on Mesos and vice versa.</p>

<p>Let's go through the main differences that give some clue when you need to make a decision:</p>

<h3>Architecture</h3>

<p>As you pointed out, Mesos is a Two-Level Scheduler and this is the main difference in architecture. This gives you the ability to create your custom scheduler (aka framework) to run your tasks. What's more, you can have more than one scheduler. All your schedulers compete for the resources that are fairly distributed using the <a href="https://cs.stanford.edu/%7Ematei/papers/2011/nsdi_drf.pdf" rel="noreferrer">Dominant Resource Fairness algorithm</a> (which can be replaced with a custom <a href="http://mesos.apache.org/documentation/latest/allocation-module/" rel="noreferrer">allocator</a>). You can also assign <a href="http://mesos.apache.org/documentation/latest/roles/" rel="noreferrer">roles</a> to the frameworks and tasks and assign <a href="http://mesos.apache.org/documentation/latest/weights/" rel="noreferrer">weights</a> to these roles to prioritize some schedulers. Roles are tightly connected with <a href="http://mesos.apache.org/documentation/latest/attributes-resources/" rel="noreferrer">resources</a>. The above features give you the ability to create your own way of scheduling for different applications (e.g., <a href="https://github.com/Netflix/Fenzo" rel="noreferrer">Fenzo</a>) with different heuristics based on the type of tasks you want to run. For example, when running batch tasks it's good to place them near the data, and start-up time is not so important. On the other hand, running stateless services is independent of nodes and it's more critical to run them ASAP.</p>

<p><a href="https://i.stack.imgur.com/WvYZH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WvYZH.png" alt="Mesos Architecture" /></a></p>

<p>The Kubernetes architecture is a single-level scheduler. That means decisions about where a pod will run are made in a single component. There is no such thing as a resource offer. On the other hand, everything there is pluggable and built with a layered design.</p>

<p><a href="https://i.stack.imgur.com/ZZw2M.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZZw2M.png" alt="Kubernetes architecture" /></a></p>

<h3>Origin</h3>

<p>Mesos was created at UC Berkeley, but its first production usage was at Twitter, to support their scale.</p>

<blockquote> <p>In March 2010, about a year into the Mesos project, Hindman and his Berkeley colleagues gave a talk at Twitter. At first, he was disappointed. Only about eight people showed up. But then Twitter's chief scientist told him that eight people was a lot – about ten percent of the company's entire staff. And then, after the talk, three of those people approached him.</p> <p>Soon, Hindman was consulting at Twitter, working hand-in-hand with those ex-Google engineers and others to expand the project. Then he joined the company as an intern. And, a year after that, he signed on as a full-time employee. <a href="https://www.wired.com/2013/03/google-borg-twitter-mesos/" rel="noreferrer">source</a></p> </blockquote>

<p>Kubernetes was created by Google to bring users to their cloud, promising a no-lock-in experience. This is the same technique Amazon used with the Kindle. You can read any book on it, but using it with Amazon gives you the best experience. 
The same is true for Google. You can run Kubernetes on any cloud (public or private), but the best tooling, integration and support you'll get only on Google Cloud.</p>

<blockquote> <p>But Google and Microsoft are different. Microsoft wants to support everything on Azure, while Google wants Kubernetes everywhere. (In a sense, Microsoft is living up to the Borg name, assimilating all orchestrators, more than Google is.) And quite literally, Kubernetes is how Google is playing up to the on-premises cloud crowd giving it differentiation from AWS (which won’t sell its infrastructure as a stack with a license, although it says VMware is its private cloud partner) and Microsoft (which still doesn’t have its Azure Stack private cloud out the door). <a href="https://www.nextplatform.com/2016/11/08/google-wants-kubernetes-rule-world/" rel="noreferrer">source</a></p> </blockquote>

<h3>Community</h3>

<blockquote> <p><a href="https://groups.google.com/a/dcos.io/d/msg/users/1YEfYvSyH_Y/yyShheMSCQAJ" rel="noreferrer">Judging a project simply by its community size could be misleading. It's like you'd be saying that php is a great language because it has large community.</a></p> </blockquote>

<p>The Mesos community is much smaller than the Kubernetes one. That's a fact. Kubernetes has financial support from many big companies including Google, Intel, Mirantis, RedHat and more, while Mesos is developed mainly by Mesosphere with some support from Apple and Microsoft. Although Mesos is a mature project, its development is slow but stable. On the other hand, Kubernetes is much younger, but rapidly developed.</p>

<p><a href="https://i.stack.imgur.com/qK3oj.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/qK3oj.jpg" alt="Mesos Community" /></a></p>

<p><a href="https://twitter.com/janiszt/status/878557351095226368" rel="noreferrer">Mesos contributors origin</a></p>

<p><a href="https://youtu.be/cwcvAbwm-R4" rel="noreferrer">The Kubernetes Community - Ian Lewis, Developer Advocate, Google</a></p>

<h3>Scale</h3>

<p>Mesos targeted big customers from the very beginning. It is used at Twitter, Apple, Verizon, Yelp and Netflix to run hundreds of thousands of containers on thousands of servers.</p>

<p>Kubernetes was started by Google to give developers a Google Infrastructure For Everyone Else (<a href="http://gifee.cloud/" rel="noreferrer">GIFEE</a>) experience. From the beginning, it was prepared for small scale, up to hundreds of machines. This limit has been increased with every release, but they started small to grow big. There is no public data about the biggest Kubernetes installation.</p>

<h3>Hype</h3>

<p>Due to scale issues, Kubernetes started to become popular among smaller companies (not cloud scale), while Mesos was targeted at enterprise users. Kubernetes is supported by the Cloud Native Computing Foundation while Mesos is an Apache Foundation project. These two foundations have different funding and sponsors. Generally, more money gives you better marketing and Kubernetes definitely did it right.</p>

<p><a href="https://i.stack.imgur.com/8KbNR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8KbNR.png" alt="Mesos vs Kubernetes" /></a></p>

<p><a href="https://g.co/trends/RUuhA" rel="noreferrer">https://g.co/trends/RUuhA</a></p>

<h3>Conclusion</h3>

<p>It looks like Kubernetes has already won the container orchestrator war. 
But if you have some custom workloads and really big scale, Mesos could be a good choice.</p> <blockquote> <p>The main difference is in the community size and the open source model : where DCOS is supported by Mesosphere and provide enterprise features in a commercial product only (because mesosphere isn't philanthropist), K8S has a larger community with strong contributions from different companies resulting in providing much more integrated enterprise features (multitenancy, RBAC, quota, preemption, gateways...) meaning they are easier to use, not necessarily they don't exist in DCOS. I would globally say that :</p> <ul> <li>DCOS is more battle tested for stateful and big data workloads but lacks of integration with other perimetric components including plug and play central monitoring and logging and enterprise features like security model, multi tenancy, auto updates... It was a very hard way to integrate everything for a production grade platform.</li> <li>K8S is more battle tested for stateless apps and provides lots of plug and play tools like prometheus, EFK, helm... which makes the implementation of a production grade platform much easier. Next to that there is a big move on stateful workloads with statefulsets and the operator pattern which is comparable with mesos frameworks but again, K8S provides lots of tools to develop them with less costs because lots of functionalities are provided out of the box, it takes me 2 months to develop a MongoDB operator to provide MongoDB as a service in a multi tenant and secured way and I needed to learn Golang in the same time.</li> </ul> <p><a href="https://groups.google.com/a/dcos.io/d/msg/users/1YEfYvSyH_Y/qZMJMzX3CAAJ" rel="noreferrer">source</a></p> </blockquote> <ul> <li><a href="https://www.infoworld.com/article/3118345/cloud-computing/why-kubernetes-is-winning-the-container-war.html" rel="noreferrer">https://www.infoworld.com/article/3118345/cloud-computing/why-kubernetes-is-winning-the-container-war.html</a></li> <li><a href="https://www.theregister.co.uk/2017/10/17/docker_ee_kubernetes_support" rel="noreferrer">https://www.theregister.co.uk/2017/10/17/docker_ee_kubernetes_support</a></li> <li><a href="https://www.techrepublic.com/article/these-two-vendors-are-most-likely-to-bring-kubernetes-containers-to-the-enterprise" rel="noreferrer">https://www.techrepublic.com/article/these-two-vendors-are-most-likely-to-bring-kubernetes-containers-to-the-enterprise</a></li> <li><a href="https://www.cloudhealthtech.com/blog/container-wars-are-over-kubernetes-has-won" rel="noreferrer">https://www.cloudhealthtech.com/blog/container-wars-are-over-kubernetes-has-won</a></li> <li><a href="https://news.ycombinator.com/item?id=12462261" rel="noreferrer">https://news.ycombinator.com/item?id=12462261</a></li> </ul>
<h1>Questions</h1> <p>Where are the configuration files of etcd which kubeadm init installs?</p> <h2>Background</h2> <p>Followed <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">Using kubeadm to Create a Cluster</a> and noticed there was no step to manually install etcd, hence kubeadm init should be installing it.</p> <p>Run below to try to see which files but could not find the clue.</p> <pre><code>for i in $(ls /proc/$(pgrep etcd)/fd) ; do readlink $i; done | grep -v socket pipe:[432740] pipe:[432741] /var/lib/etcd/member/wal/0.tmp pipe:[432742] anon_inode:[eventpoll] /var/lib/etcd/member/snap/db /var/lib/etcd/member/wal/0000000000000000-0000000000000000.wal /var/lib/etcd/member/wal </code></pre>
<p>Kubeadm initialises the cluster and writes the necessary files for the kubelet into the <code>/etc/kubernetes</code> directory.</p> <pre><code>[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
</code></pre> <p>As you can see, the necessary manifest files are prepared for the kubelet in the <strong>/etc/kubernetes/manifests/</strong> directory. The etcd configuration (its flags, data directory, listen URLs and so on) lives in the static pod manifest <code>/etc/kubernetes/manifests/etcd.yaml</code>.</p> <pre><code>[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
</code></pre> <p>Now the kubelet will apply these manifest files and the control plane will be up and running.</p>
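<p>So to answer the question directly, on a typical kubeadm install you should find something like this on the master (a sketch; the exact file list may differ slightly between versions):</p> <pre><code>ls /etc/kubernetes/manifests/
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# etcd's flags (data dir, listen URLs, ...) are in the container spec:
grep -A20 'command:' /etc/kubernetes/manifests/etcd.yaml
</code></pre>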
<p>I have a kubernetes cluster on Azure and I created 2 namespaces and 2 service accounts because I have two teams deploying on the cluster. I want to give each team their own kubeconfig file for the serviceaccount I created. </p> <p>I am pretty new to Kubernetes and haven't been able to find a clear instruction on the kubernetes website. How do I create a kube config file for a serviceaccount? Hopefully someone can help me out :), I rather not give the default kube config file to the teams.</p> <p>With kind regards,</p> <p>Bram</p>
<pre><code># your server name goes here server=https://localhost:8443 # the name of the secret containing the service account token goes here name=default-token-sg96k ca=$(kubectl get secret/$name -o jsonpath='{.data.ca\.crt}') token=$(kubectl get secret/$name -o jsonpath='{.data.token}' | base64 --decode) namespace=$(kubectl get secret/$name -o jsonpath='{.data.namespace}' | base64 --decode) echo " apiVersion: v1 kind: Config clusters: - name: default-cluster cluster: certificate-authority-data: ${ca} server: ${server} contexts: - name: default-context context: cluster: default-cluster namespace: default user: default-user current-context: default-context users: - name: default-user user: token: ${token} " &gt; sa.kubeconfig </code></pre>
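<p>Then hand the generated <code>sa.kubeconfig</code> file to the team; they can use it either by pointing the <code>KUBECONFIG</code> environment variable at it or via the <code>--kubeconfig</code> flag, for example:</p> <pre><code>KUBECONFIG=sa.kubeconfig kubectl get pods
# or
kubectl --kubeconfig=sa.kubeconfig get pods
</code></pre>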
<p>On kubernetes 1.8.4 I'm trying to give kubernetes users access to our dashboard.</p> <p>When using the admin context to proxy, all the tokens work when logging into the dashboard. But my users don't have the admin context, only I do, so they use their own context to proxy. And in those situations, they get an error.</p> <p><strong>Steps:</strong></p> <ol> <li>Create a service account for a user, put token in <code>~/.kube/config</code></li> <li>Give permissions to namespace A to that service account through rolebinding</li> <li>Switch to that user's context</li> <li>Do deployments, get pod overview, etc, verify it works. all fine so far</li> <li>Start <code>kubectl</code> proxy, still in that user's context</li> <li>Open browser, go to <a href="http://localhost:8001/ui" rel="nofollow noreferrer">http://localhost:8001/ui</a></li> <li><p>See this in the browser:</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": " forbidden: User \"system:serviceaccount:default:&lt;username&gt;\" cannot get path \"/ui\"", "reason": "Forbidden", "details": {}, "code": 403 } </code></pre></li> <li><p>try <a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a></p></li> <li><p>See this in the browser:</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": " services \"https:kubernetes-dashboard:\" is forbidden: User \"system:serviceaccount:default:&lt;username&gt;\" cannot get services/proxy in the namespace \"kube-system\"", "reason": "Forbidden", "details": { "name": "https:kubernetes-dashboard:", "kind": "services" }, "code": 403 } </code></pre></li> </ol> <p>Clearly a permission problem. I'm not sure which permission the user needs to have to enable them to access the dashboard though. I'm very hesitant to give them permissions into the kube-system namespace.</p> <p>When I stop kubectl proxy and then switch to the admin context, start the proxy and retry the same url, I get the dashboard login page.</p> <p>What do I need to do to get that same result when using the user's context?</p>
<p>I couldn't find a different way other than providing some access to kube-system, so I did using the following role and binding:</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: namespace: kube-system name: user-role-dashboard rules: - apiGroups: [""] resources: - services verbs: ["get", "list", "watch"] - apiGroups: [""] resources: - services/proxy verbs: ["get", "list", "watch", "create"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: namespace: kube-system name: user-binding-dashboard subjects: - kind: User name: system:serviceaccount:&lt;namespace&gt;:&lt;username&gt; apiGroup: "" roleRef: kind: Role name: user-role-dashboard apiGroup: "" </code></pre> <p>Would still like to know whether there is a better way though, your thoughts and suggestions are welcome!</p>
<p>After having changed the IP configuration of the cluster (all <em>external</em> IPs changed, the internal private IPs remained the same), some <code>kubectl</code> commands do not work anymore for any container. The pods are all up and running, and seem to find themselves without problems. Here is the output:</p> <pre><code>bronger@penny:~$ time kubectl logs jb-plus--prod-615777041-71s09 Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy) real 0m30,539s user 0m0,441s sys 0m0,021s </code></pre> <p>Apparently, there is a 30 seconds timeout, and after that the authorisation error.</p> <p>What may cause this?</p> <p>I run Kubernetes 1.8 with Weave Net.</p>
<p>Based on the symptom, the new IP is most likely missing from the API server certificate's subject alternative names. Use the command below on the master to validate which DNS names and IP addresses the certificate was issued for (they are printed on the same line as the <code>DNS:</code> entries):</p> <pre><code>openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep DNS
</code></pre>
<p>For Openshift health checks (liveness and readiness probes), does the liveness check run after the container is ready? So should the readiness initial delay be less than the liveness initial delay?</p> <p>Please advise.</p> <p>Thanks B.</p>
<p>The delay specified for both readiness and liveness check is from the start of the deployment. The start of the delay for the liveness check is not dependent on the pod first being ready. Once they start, both run for the life of the pod.</p> <p>You need to evaluate what you set the delays to based on the role of each check and how you implement the checks.</p> <p>A readiness probe checks if an application is ready to service requests. It is used initially to determine if the pod has started up correctly and becomes ready, but also subsequently, to determine if the pod IP should be removed from the set of endpoints for any period, with it possibly being added back later if the check is set to pass again, with the application again being ready to handle requests.</p> <p>A liveness probe checks if an application is still working. It is used to check if your application running in a pod is still running and that it is also working correctly. If the probe keeps failing, the pod will be shutdown, with a new pod started up to replace it.</p> <p>So having the delay for the liveness check be larger than that for the readiness check is quite reasonable, especially if during the initial startup phase the liveness check would fail. You don't want the pod to be killed off when startup time can be quite long.</p> <p>You may also want to look at the period and success/failure thresholds.</p> <p>Overall it is hard to give a set rule as it depends on your application.</p>
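<p>For reference, a minimal sketch of how the two probes and their delays are wired into a pod/deployment spec. The paths, ports and numbers here are purely illustrative; tune them to your application's real startup time:</p> <pre><code>containers:
- name: myapp
  image: myapp:latest
  readinessProbe:
    httpGet:
      path: /healthz/ready
      port: 8080
    initialDelaySeconds: 10   # when to start checking readiness
    periodSeconds: 5
    failureThreshold: 3
  livenessProbe:
    httpGet:
      path: /healthz/live
      port: 8080
    initialDelaySeconds: 60   # larger: don't kill the pod during a slow startup
    periodSeconds: 10
    failureThreshold: 3
</code></pre>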
<p>I'm in desperate need of help. I'm noticing that my Kubernetes minions/nodes are rebooting at what appear to be random intervals a few times a day and I can't figure out why. This is a big problem for me because every reboot causes about 10 minutes of downtime for every app on the node. </p> <p>When they reboot, I can see the node event like so</p> <pre><code>Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 9m 9m 1 kubelet, kubernetes-minion-group-7j5x Normal Starting Starting kubelet. 9m 9m 1 kubelet, kubernetes-minion-group-7j5x Warning ImageGCFailed unable to find data for container / 9m 9m 2 kubelet, kubernetes-minion-group-7j5x Normal NodeHasSufficientDisk Node kubernetes-minion-group-7j5x status is now: NodeHasSufficientDisk 9m 9m 2 kubelet, kubernetes-minion-group-7j5x Normal NodeHasSufficientMemory Node kubernetes-minion-group-7j5x status is now: NodeHasSufficientMemory 9m 9m 2 kubelet, kubernetes-minion-group-7j5x Normal NodeHasNoDiskPressure Node kubernetes-minion-group-7j5x status is now: NodeHasNoDiskPressure 9m 9m 1 kubelet, kubernetes-minion-group-7j5x Warning Rebooted Node kubernetes-minion-group-7j5x has been rebooted, boot id: bed35a9d-584c-4458-8a04-49725200eb0c 9m 9m 1 kubelet, kubernetes-minion-group-7j5x Normal NodeNotReady Node kubernetes-minion-group-7j5x status is now: NodeNotReady 8m 8m 1 kubelet, kubernetes-minion-group-7j5x Normal NodeReady </code></pre> <p>When I check the reboot history in the node, it appears to happen fairly randomly. </p> <pre><code>kubernetes-minion-group-7j5x:~$ last reboot reboot system boot 3.16.0-4-amd64 Wed Dec 13 00:36 - 01:01 (00:25) reboot system boot 3.16.0-4-amd64 Tue Dec 12 23:24 - 01:01 (01:37) reboot system boot 3.16.0-4-amd64 Mon Dec 11 05:43 - 01:01 (1+19:18) reboot system boot 3.16.0-4-amd64 Sun Dec 10 23:46 - 01:01 (2+01:15) </code></pre> <p>Since, the reboot is in the Kubernetes events, does that mean Kubernetes is doing the rebooting, or could it be some other process? How can I troubleshoot this? I'm not sure how to go about investigating this now.</p> <p>I can't seem to find anything in the <code>kube-controller-manager.log</code> or the <code>kubelet.log</code> or <code>syslog</code> or <code>messages</code> or <code>kern.log</code> or <code>node-problem-detector.log</code> or <code>auth.log</code> or <code>unattended-upgrades.log</code>.</p> <p>I'm running Kubernetes 1.6.0 on Debian</p> <pre><code>Linux kubernetes-minion-group-7j5x 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux </code></pre>
<p>Troubleshooting can be done by looking at the logs so that you have more information to see what is making the node reboot. When a reboot happens the kubelet process restarts and tries to get metrics before the first metrics have been collected. That is why you see the warning after the restart of the kubelet. This is normally not a problem, as the kubelet eventually retries, and should succeed once metrics collection has started. This error is especially visible just after the kubelet restarts.</p> <p>This warning does not necessarily point to a Kubernetes problem, since the node could be rebooting for other reasons. The initial troubleshooting would be to look at the instance logs as per the <a href="https://cloud.google.com/logging/docs/view/logs_viewer_v2" rel="nofollow noreferrer">documentation</a>:</p> <ol> <li>At the Google Cloud Platform click on Products &amp; Services which is the icon with the four bars at the top left hand corner.</li> <li>On the menu go to the ‘Stackdriver monitoring’ section, hover on ‘logging’ and click on logs.</li> <li>At the basic selector menu hover on the resource that you want to view, e.g. ‘GCE VM Instance’ and click on the instance that you want to retrieve logs for. </li> <li>The time-range selector drop-down menus let you filter for specific dates and times in the logs.</li> <li>The streaming selector, at the top of the page, controls whether new log entries are displayed as they arrive.</li> <li>The View Options menu, at the far right, has additional display options.</li> <li>The expander arrow (▸) in front of each log entry lets you look at the full contents of the entry. </li> </ol> <p>You can also <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials" rel="nofollow noreferrer">connect to the cluster</a> and view the /var/log/messages file for any indication of errors. You can use a command similar to the following and see if there are any errors near the time the instance restarted:</p> <pre><code>cat /var/log/messages | egrep -i "warning|error|kernel|restart"
</code></pre> <p>You can also use <a href="https://linux.die.net/man/1/less" rel="nofollow noreferrer">less</a> as in ‘less /var/log/messages’ and use ‘/’ to search for the date and time the node rebooted. </p> <p>Also look at the VM instance <a href="https://cloud.google.com/compute/docs/instances/interacting-with-serial-console" rel="nofollow noreferrer">serial console</a> output:</p> <p>Go to ‘Compute engine’ > Instances and click on the ‘VM instance’ to view the VM instance details. Scroll down to the ‘Logs’ section and click on ‘Serial port 1 (console)’. You will get more logs on the instance this way.</p> <p>I would also like to point out that you are not using an up-to-date version of Kubernetes and an upgrade might be useful.</p>
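<p>In addition, if the nodes use systemd with a persistent journal, the log of the boot <em>before</em> the reboot is often the quickest way to see whether it was a kernel panic, an OOM kill or a deliberate shutdown (e.g. by unattended upgrades). A sketch, assuming journald keeps logs across reboots:</p> <pre><code># list recorded boots
journalctl --list-boots

# show the end of the previous boot's log (what happened right before the reboot)
journalctl -b -1 -e
</code></pre>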
<p>I'm trying to copy files to/from a Windows container in a pod running on an ACS k8s cluster.</p> <p>I'm using this kubectl command from my Windows 10 laptop:</p> <pre><code>kubectl cp dev-acs-conn-testdn-1981314364-rjc0l:\app\nettrace.etl c:\ </code></pre> <p>And I'm getting this error in response:</p> <blockquote> <p>error: archive/tar: invalid tar header</p> </blockquote> <p>I've tried this from clusters running both v1.7.7 and v1.7.9 of k8s as well as Server 2016 ltsc and Server v1709. My kubectl.exe is v1.8.5. I have some valuable debugging files stranded on my container, any idea how I can get this to work?</p>
<p>So it turns out that the &quot;kubectl cp&quot; command requires that tar be in the container, not on the local system as I expected. And since Windows doesn't ship with tar.exe, the problem lies there.</p> <p>I deployed a new pod that included a Windows version of tar.exe and its dependencies. This got me further in a Server 2016 ltsc container. I simply had to adjust my syntax slightly and the below worked:</p> <pre><code>kubectl cp dev-acs-conn-testdn-1981314364-rjc0l:/app/nettrace.etl nettrace.etl
</code></pre> <p>However, this same process does NOT work on a Server v1709 container. When I try exactly the same process I get this error:</p> <blockquote> <p>tar: Cannot open -: Permission denied</p> <p>tar: Error is not recoverable: exiting now</p> </blockquote> <p>Clearly a permissions error, but I have no idea what permissions are the issue and how to change them. Any ideas?</p>
<h1>Question</h1> <p>How to get the Kubernetes related keys from etcd? Tried to list keys in etcd but could not see related keys. Also where is etcdctl installed?</p> <pre><code>$ etcdctl bash: etcdctl: command not found.. $ sudo netstat -tnlp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 386/etcd tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 386/etcd $ curl -s http://localhost:2379/v2/keys | python -m json.tool { "action": "get", "node": { "dir": true } } </code></pre> <h1>Background</h1> <p>Installed Kubernetes 1.8.5 by following <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">Using kubeadm to Create a Cluster</a> on CentOS 7. When I looked at <a href="https://coreos.com/etcd/docs/latest/getting-started-with-etcd.html" rel="noreferrer">Getting started with etcd</a>, v2/keys looks to be the end point.</p>
<p>Usually you need to get <code>etcdctl</code> by yourself. Just download the latest <code>etcdctl</code> archive from <a href="https://github.com/coreos/etcd/releases" rel="noreferrer">etcd releases page</a>.</p> <p>Also, starting from Kubernetes version 1.6 it uses etcd version 3, so to get a list of all keys is:</p> <pre><code>ETCDCTL_API=3 etcdctl --endpoints=&lt;etcd_ip&gt;:2379 get / --prefix --keys-only </code></pre> <p>You can find all <code>etcdctl v3</code> actions using:</p> <pre><code>ETCDCTL_API=3 etcdctl --endpoints=&lt;etcd_ip&gt;:2379 --help </code></pre> <p><strong>EDIT</strong> (thanks to @leodotcloud):</p> <p>In case ETCD is configured with TLS certificates support:</p> <pre><code>ETCDCTL_API=3 etcdctl --endpoints &lt;etcd_ip&gt;:2379 --cacert &lt;ca_cert_path&gt; --cert &lt;cert_path&gt; --key &lt;cert_key_path&gt; get / --prefix --keys-only </code></pre>
<p>What are the Advantages and Disadvantages of Two Level Scheduler (like in Apache Mesos) vs Single Level Scheduler (like in Kubernetes) ? And how they would perform on small and large clusters for variety of work loads that normally take place in build, test, deploying stateless, stateful &amp; Big data applications?</p>
<p>Full disclosure: I work for Mesosphere :) </p> <p>One advantage of two-level schedulers is that they can manage different workloads in different ways. </p> <p>Some workloads require access to special resources only available on certain nodes (external storage for example). Some workloads might run best on special resources but could use other types(TensorFlow should run on GPUs, but can run on CPUs in a pinch). Some workloads can be rescheduled anywhere if they node they are running on dies (Spark). In addition to variation in the resources they need, workloads also have different installation, scaling, backup, and upgrade processes. This is especially complicated for multi-node applications that need different configurations depending on their lifecycle stage. </p> <p>Two-level scheduling lets you define different rules for each workload while running them all on a common pool of resources, which can keep your resource utilization high. </p> <p>I think that <a href="https://mesosphere.com/wp-content/uploads/2017/09/Apache_Mesos_Diagram_Kubernetes.jpg" rel="noreferrer">this diagram</a> from <a href="https://mesosphere.com/blog/application-aware-scheduling-mesos/" rel="noreferrer">this blog post</a> illustrates two-level scheduling pretty well.</p>
<p>When I create a Kubernetes Job from a job yaml file, I get this error message:</p> <p>Job in version \"v1\" cannot be handled as a Job: [pos 196]: json: expect char '\"' but got char '{'</p> <p><strong>Anyone know why? Thanks!</strong></p> <p>File job.yml:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi
  labels:
    name: 09996c3d-cb13-41b0-94a6-b8492d981de5
spec:
  completions: 1
  template:
    metadata:
      name: pi
      labels:
        name: 09996c3d-cb13-41b0-94a6-b8492d981de5
    spec:
      containers:
      - name: pi
        image: perl
        # command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
        env:
        - name: FUNCTIONGRAPH
          value: reqJsonBody
      restartPolicy: OnFailure
</code></pre>
<p>The above file perfectly creates a job for me. I am using minikube <code>v0.23.0</code> and kubectl version <code>1.8</code>.</p> <p>When I run the command <code>kubectl create -f job.yaml</code>, the output is as follows.</p> <pre><code>[pgarg@localhost]$ kubectl create -f job.yaml
job "pi" created
[pgarg@localhost]$ oc get pods
NAME       READY     STATUS      RESTARTS   AGE
pi-r6xsm   0/1       Completed   0          23s
[pgarg@localhost]$ oc logs pi-r6xsm
Loading DB routines from perl5db.pl version 1.51
Editor support available.
Enter h or 'h h' for help, or 'man perldebug' for more help.
main::(-e:1):   0
  DB&lt;1&gt;
</code></pre> <p>And when I run the same yaml after uncommenting the container's <code>command</code> line, it perfectly prints the value of <code>pi</code> up to 2000 decimal digits.</p> <p>I suggest that you upgrade to the latest version of minikube if you're not yet on it, or provide some more details.</p>
<p>I am configuring a master Jenkins to start slaves with kubernetes. When a slave starts its pipeline I need it to have a maven <code>settings.xml</code> read (or copied) from the master. With the kubernetes plugin I haven't found a way to do that.</p> <p>Any suggestions, please?</p>
<p>Maybe Jenkins <a href="https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#pipeline-basic-steps" rel="nofollow noreferrer">Pipeline: Basic Steps</a> can help you out. They offer a <a href="https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-stash-code-stash-some-files-to-be-used-later-in-the-build" rel="nofollow noreferrer">stash/unstash</a> step. Meaning you stash the <code>settings.xml</code> on the master node and unstash it on your slave that runs the build. I think currently stash/unstash only support sub-directories of the current pipeline workspace, but you could work around it by copying the <code>settings.xml</code> into the current workspace before stashing. The whole thing might look something like this:</p> <pre><code>stage('Build') {
    node('master') {
        sh 'cp /path/to/master-node-settings.xml settings.xml'
        stash includes: 'settings.xml', name: 'settingsXml'
    }
    node('slave') {
        unstash 'settingsXml'
        sh 'mv settings.xml /path/to/slave-node-settings.xml'
        // Start your build here now ..
    }
}
</code></pre> <p>Another option would be to pre-bake the <code>settings.xml</code> into the docker image which you are using to spin up the slave in Kubernetes. Of course this would not be an optimal solution if your <code>settings.xml</code> changes dynamically or contains any kind of sensitive data (as one should avoid putting sensitive information inside a docker image if possible). </p>
<h1>Question</h1> <p>What are the commands to start/stop the K8S cluster? After installation is done following <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#24-initializing-your-master" rel="nofollow noreferrer">Using kubeadm to Create a Cluster</a>, restarted the CentOS server and the K8S cluster is not running after restart. </p> <p>There are services mentioned in <a href="https://kubernetes.io/docs/getting-started-guides/fedora/fedora_manual_config/" rel="nofollow noreferrer">Fedora (Single Node)</a> listing services but there are no such services installed via kubeadm.</p> <pre><code>Failed to restart etcd.service: Unit not found. Failed to restart kube-apiserver.service: Unit not found. Failed to restart kube-controller-manager.service: Unit not found. </code></pre> <h2>Environment</h2> <p>CentOS 7 on Virtual Box. K8S 1.8.5</p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION master Ready master 36m v1.8.5 node01 Ready &lt;none&gt; 35m v1.8.5 node02 Ready &lt;none&gt; 35m v1.8.5 </code></pre>
<p>As you are using <strong>kubeadm</strong> to initialise and administer the k8s cluster: as I understand it, kubeadm uses the following approach.</p> <p>Systemd manages only the kubelet service on the node.</p> <p>The kubelet creates and manages the k8s control plane components (kube-apiserver, kube-controller-manager, scheduler and etcd) as <strong>static pods</strong> (kube-proxy, in contrast, runs as a DaemonSet).</p> <p>The kubelet reads their manifest files from <code>/etc/kubernetes/manifests</code>.</p> <p>So if you want to stop the control plane components you just need to move these manifest files to another directory, and to start them again move the manifests back; see the sketch below.</p>
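<p>Putting that together, a rough way to stop/start things after a reboot (assuming the systemd unit is called <code>kubelet</code>, as kubeadm sets up; the <code>manifests.disabled</code> directory name is just my own placeholder):</p> <pre><code># make sure the kubelet comes back after a reboot and is running now
sudo systemctl enable kubelet
sudo systemctl start kubelet
sudo systemctl status kubelet

# "stop" the control plane: move the static pod manifests away
sudo mkdir -p /etc/kubernetes/manifests.disabled
sudo mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests.disabled/

# "start" it again: move them back, the kubelet picks them up automatically
sudo mv /etc/kubernetes/manifests.disabled/*.yaml /etc/kubernetes/manifests/
</code></pre>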
<p>I guess that minikube needs VirtualBox because of Docker's dependencies, but my current docker version doesn't need it.</p> <p>So do I still need VirtualBox or another VM to start minikube?</p>
<p>This is only the case on Windows or macOS, where you do need VirtualBox or VMware to create the minikube VM (based on boot2docker). On Linux (even inside a Linux VM), minikube can run Kubernetes directly in the host's Docker if you choose <code>--vm-driver none</code>, which uses localkube.</p>
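<p>In other words, on a Linux host that already has Docker installed, something like this should work (run as root / with sudo, since the none driver starts components directly on the host):</p> <pre><code>sudo minikube start --vm-driver=none
</code></pre>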
<p>I am trying to access the Kubernetes API directly without running <code>kubectl -proxy</code>. But when I use the token of the serviceaccount default, I get a 403. Even after creating a ClusterRole and ClusterRoleBinding for this serviceaccount, the request is rejected with 403.</p> <p>The configuration I applied looks like this:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pod-reader rules: - apiGroups: [""] resources: ["pods"] verbs: ["get", "watch", "list"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: pod-reader subjects: - kind: ServiceAccount name: default namespace: default roleRef: kind: ClusterRole name: pod-reader apiGroup: rbac.authorization.k8s.io </code></pre> <p>(It is nearly the one from the docs on kubernetes io, just used the ServiceAccount as Subject and changed the resource to pods)</p> <p>Then I applied the config and tried to access the pods via curl:</p> <pre><code>$ kubectl apply -f secrets.yaml clusterrole "pod-reader" created clusterrolebinding "pod-reader" created $ curl https://192.168.1.31:6443/v1/api/namespaces/default/pods --header "Authorization: Bearer $TOKEN" --insecure { "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "forbidden: User \"system:serviceaccount:default:default\" cannot get path \"/v1/api/namespaces/default/pods\"", "reason": "Forbidden", "details": { }, "code": 403 } </code></pre> <p>I guess the error message shows that the authentication part is ok, because it looks like the request was correctly identified as coming from the serviceaccount default:default. But what do I have to do to entitle this (or another service account) to access information about the pods or nodes?</p> <p>I see this error when calling curl from outside a Pod, but also if I - for example - use the kubernetes java client to access the API from within a Pod using the secret mounted under /var/run/secrets.</p> <p>I am a K8s newbie, so please forgive me if this is a stupid question.</p> <p>Regarding the configuration: I have K8s 1.8 running on a cluster of Raspberry Pis with one Master and two Worker Nodes. I didn't pass much to kubeadm init, so I guess it should have the default configuration. 
FWIW kubectl describe shows this command for the apiserver:</p> <pre><code> kube-apiserver --requestheader-group-headers=X-Remote-Group --service-account-key-file=/etc/kubernetes/pki/sa.pub --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota --secure-port=6443 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --advertise-address=192.168.1.31 --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --enable-bootstrap-token-auth=true --requestheader-username-headers=X-Remote-User --allow-privileged=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-allowed-names=front-proxy-client --client-ca-file=/etc/kubernetes/pki/ca.crt --insecure-port=0 --authorization-mode=Node,RBAC --etcd-servers=http://127.0.0.1:2379 </code></pre>
<p>I think you have a little issue in your curl path, it should be <code>/api/v1/namespaces/...</code> and not <code>/v1/api/namespaces/...</code>. See e.g. <a href="https://kubernetes.io/docs/api-reference/v1.8/#list-62" rel="nofollow noreferrer">https://kubernetes.io/docs/api-reference/v1.8/#list-62</a></p>
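<p>For reference, this is the same curl from the question with only the path corrected (everything else unchanged):</p> <pre><code>curl https://192.168.1.31:6443/api/v1/namespaces/default/pods \
  --header "Authorization: Bearer $TOKEN" --insecure
</code></pre>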
<p>Through the REST API I am able to GET a Node's details through:</p> <pre><code>http://127.0.0.1:8001/api/v1/nodes/{Node Name} </code></pre> <p>However, I want to add a new label and delete an old one. Say add the label <code>app=service</code> and remove <code>backend=database</code>. What type of request am I supposed to invoke, and what is the JSON format I am required to send?</p>
<p>You can set a new label with a single request.</p> <p>The JSON is:</p> <pre><code>{ "metadata": { "labels": { "app": "service" } } } </code></pre> <p>You should send a <code>PATCH</code> request to: </p> <pre><code>http://127.0.0.1:8001/api/v1/nodes/&lt;node_name&gt; </code></pre> <p>So, finally we have:</p> <pre><code>curl -k -v -H "Accept: application/json" -XPATCH -d '{"metadata":{"labels":{"app":"service"}}}' -H "Content-Type: application/merge-patch+json" http://127.0.0.1:8001/api/v1/nodes/&lt;node_name&gt; </code></pre>
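<p>To also remove the old <code>backend=database</code> label, the same <code>application/merge-patch+json</code> request works: setting a key to <code>null</code> in a JSON merge patch deletes it, so both changes can go in one call. A sketch based on the command above:</p> <pre><code>curl -k -v -H "Accept: application/json" -XPATCH \
  -d '{"metadata":{"labels":{"app":"service","backend":null}}}' \
  -H "Content-Type: application/merge-patch+json" \
  http://127.0.0.1:8001/api/v1/nodes/&lt;node_name&gt;
</code></pre>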
<p>I would like to generate dynamic configuration files (redis, database) based on values (e.g. <code>redis_host</code>) collected from env variables, which will differ from service to service, and put them in a specific location (<code>config/redis.yml</code>).</p> <p>How will I be able to achieve this? </p>
<p>Dynamic config file generation can include creation of</p> <ol> <li><p><strong>Config file structure</strong>:</p> <p>The config files, e.g. redis.yaml, might vary for the development, staging and production environments. E.g.: </p> <p>staging</p> <pre><code>${ENVIRONMENT}: host: ${REDIS_HOST} namespace: ${REDIS_NAMESPACE} port: 6379 </code></pre> <p>production:</p> <pre><code>${ENVIRONMENT}: host: ${REDIS_HOST} namespace: ${REDIS_NAMESPACE} port: 6379 connect_timeout: ${REDIS_CONNECT_TIMEOUT} timeout: {REDIS_TIMEOUT} </code></pre></li> <li><p><strong>Config file content</strong>:</p> <p>The content of the config files could again be populated dynamically.</p></li> </ol> <p>We could generate <code>config-templates</code> or a <code>config-file-structure</code> using helm and mount them in the required place (say config/redis.yaml) inside the pod. Then these config templates can be converted/rendered into the proper file using utilities at container run time.</p> <p>Dynamic creation of config files based on the environment/environment_variables can be achieved in the following ways.</p> <p><strong>The Bash Way</strong> (using <code>eval</code> and <code>cat</code>):</p> <ol> <li><p>Create a file named <code>inator</code> with the following content</p> <pre><code>#!/bin/bash eval "cat &lt;&lt;EOF $(&lt;$1) EOF " | tee $1 &gt;/dev/null </code></pre></li> <li><p>Make inator executable, place it inside the docker image and execute it as an ENTRYPOINT script</p></li> <li><p>Assuming the env variables are available inside the pod/container </p> <p>eg: staging</p> <pre><code>$ env ENVIRONMENT=staging REDIS_HOST=abc.com REDIS_NAMESPACE=inator $ cat config/redis.yaml ${ENVIRONMENT}: host: ${REDIS_HOST} namespace: ${REDIS_NAMESPACE} port: 6379 $ ./inator config/redis.yaml $ cat config/redis.yaml staging: host: abc.com namespace: inator port: 6379 </code></pre> <p>production</p> <pre><code>$ env ENVIRONMENT=production REDIS_HOST=redis.prod.com REDIS_NAMESPACE=prod REDIS_CONNECT_TIMEOUT=5 TIMEOUT=10 $ cat config/redis.yaml ${ENVIRONMENT}: host: ${REDIS_HOST} namespace: ${REDIS_NAMESPACE} port: 6379 connect_timeout: ${REDIS_CONNECT_TIMEOUT} timeout: {REDIS_TIMEOUT} $ ./inator config/redis.yaml $ cat config/redis.yaml production: host: redis.prod.com namespace: prod port: 6379 connect_timeout: 5 timeout: 10 </code></pre></li> </ol> <p>Advantages: no additional package required.</p> <p><strong>The dockerize way</strong>:</p> <p><a href="https://github.com/jwilder/dockerize" rel="nofollow noreferrer">dockerize</a> is a utility to simplify running applications in docker containers. It internally uses go templates to populate config files from environment variables. </p> <p>Have a look at the blog post <a href="http://steveadams.io/2016/08/18/Environment-Variable-Templates.html" rel="nofollow noreferrer">Environment Variable Templates</a> for further information.</p>
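<p>If you would rather not maintain the <code>eval</code>/<code>cat</code> script, <code>envsubst</code> (shipped with the gettext package) does the same <code>${VAR}</code> substitution. A small sketch, assuming a template file named <code>config/redis.yaml.tpl</code> (a hypothetical name) baked into the image and rendered at container start-up:</p> <pre><code># render the template with the current environment at container start-up
envsubst &lt; config/redis.yaml.tpl &gt; config/redis.yaml
</code></pre>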
<p>I'm at a complete loss. I have a kubernetes cluster running on ec2. I set it up using kops (versions 1.7.3/1.7.0 client/server). It's been working just fine for 4 months and all of a sudden I began receiving this when creating new pods. </p> <pre><code>Failed to pull image "123456789.dkr.ecr.us-west-2.amazonaws.com/ k8s-docker-s3-to-backup:latest": rpc error: code = 2 desc = unauthorized: authentication required Error syncing pod </code></pre> <p>I'm fairly certain I made no changes to the node or master roles in AWS IAM. There are no errors in my repo urls. I can create pods using public images. </p> <p>I can't find anything helpful in cloudtrail. How can I further debug what is going on?</p>
<p>I can now confirm that nothing was wrong with my configuration. The IAM permissions listed in <a href="https://kubernetes.io/docs/concepts/containers/images/#using-aws-ec2-container-registry" rel="nofollow noreferrer">here</a> work. It does appear that either AWS was encountering an outage or perhaps I exceeded some sort of limit. I will check with my AWS rep and provide more info if I can get it. As of this morning, everything is working as usual. </p>
<p>I am using Prometheus to monitor my Kubernetes cluster. All my microservices can be accessed using my HA Proxy.</p> <p>My base Prometheus config is :</p> <pre><code>- job_name: 'kubernetes_pods' tls_config: insecure_skip_verify: true kubernetes_sd_configs: - api_server: http://172.29.219.102:8080 role: pod relabel_configs: - source_labels: [__meta_kubernetes_pod_host_ip] target_label: __address__ regex: (.*) replacement: 172.29.219.110:8080 </code></pre> <p>Where <code>172.29.219.110:8080</code> is the IP &amp; Port of my standalone HA Proxy.</p> <p>The endpoint that I am trying to monitor using Prometheus is <code>/auth/health</code>.</p> <p>When I do a simple curl command from anywhere, I see :</p> <pre><code># curl http://172.29.219.110:8080/auth/health {"status":"UP"} </code></pre> <p>But when Prometheus tries to do it, the logs indicate :</p> <pre><code>level=warn ts=2017-12-15T16:40:48.301741927Z caller=scrape.go:673 component="target manager" scrape_pool=kubernetes_pods target=http://172.29.219.110:8080/auth/health msg="append failed" err="no token found" </code></pre> <p>This endpoint is publicly exposed and requires no authentication whatsoever. So why does Prometheus say :</p> <p><a href="https://i.stack.imgur.com/cLdQP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cLdQP.jpg" alt="enter image description here"></a></p>
<blockquote> <p>{"status":"UP"}</p> </blockquote> <p>Prometheus requires data to be in its format, and cannot handle other arbitrary data. The error you are getting is a parse error due to this.</p> <p>You should instrument your code using a <a href="https://prometheus.io/docs/instrumenting/clientlibs/" rel="nofollow noreferrer">client library</a>, and have it expose data in the Prometheus text format.</p>
<p>We have got into an issue with our AWS deployment with kubernetes/helm where we are seeing "Pod sandbox changed, it will be killed and re-created". This was not happening before but started with our latest deployment, where we deleted the previous deployment with helm delete and created a new one with helm install. Not sure if this is related to our new dependency on AWS SQS or to updating the kubernetes/helm/kops versions. There are other pods on the same kubernetes node and they are working fine.</p> <p>These pods keep on getting killed and restarted with the following messages repeating:</p> <ul> <li>Pod sandbox changed, it will be killed and re-created </li> <li>Killing container with id docker://xxx:Need to kill Pod </li> <li>Back-off restarting failed container </li> <li>Error syncing pod</li> </ul> <p>Manually killing the pod does bring up a new pod, as k8s would, but that doesn't fix the issue, as mentioned by some in related threads.</p> <p>Values for cpu and memory:</p> <pre><code>resources: limits: cpu: 100m memory: 128Mi requests: cpu: 100m memory: 128Mi </code></pre> <p>Version info:</p> <pre><code>- client version 1.9 (also tried 1.6 and 1.7) - server version 1.7 (git version 1.7.2) - helm version 2.7.2 - kops version 1.8.0 - Kernel Version: 4.4.102-k8s - OS Image: Debian GNU/Linux 8 (jessie) - Container Runtime Version: docker://1.12.6 - Kubelet Version: v1.7.2 - Kube-Proxy Version: v1.7.2 - Operating system: linux - Architecture: amd64 </code></pre> <p>I have already gone through all relevant threads for this error, but the issue there seemed to be for a different environment, and the versions listed in those threads are not used by us. </p> <pre><code>- https://stackoverflow.com/questions/46826164/kubernetes-pods-failing-on-pod-sandbox-changed-it-will-be-killed-and-re-create - https://stackoverflow.com/questions/46922452/kubernetes-1-7-on-google-cloud-failedsync-error-syncing-pod-sandboxchanged-pod </code></pre> <p>Any pointers on finding the root cause or fixing the issue would be very helpful. Thanks a lot.</p>
<p>The fix turned out to be increasing the limits for memory. We changed the values.yaml file (following section) used by helm and bumped up the limits...</p> <p>resources:</p> <pre><code>limits: cpu: 100m memory: 128Mi &lt;--- increased this value... requests: cpu: 100m memory: 128Mi </code></pre> <p>I wish the error message showing up was more specific than "Pod sandbox changed, it will be killed and re-created" :-)</p>
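<p>To pick a sensible new limit rather than guessing, it can help to look at what the pod actually consumes and then leave some headroom. A sketch (the 256Mi figure is purely illustrative, and <code>kubectl top</code> needs heapster or metrics-server in the cluster):</p> <pre><code># check actual usage
kubectl top pod &lt;pod-name&gt;

# values.yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 100m
    memory: 256Mi   # illustrative value - size it from the observed usage
</code></pre>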
<p>Running in Minikube, I have the following ingress, with the necessary backend:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /config/ backend: serviceName: api-gateway servicePort: web </code></pre> <p>The intention is to serve the gateway's web UI, which expects the path to be <code>/</code>; otherwise it assumes it is a logical path and attempts to forward the request to the correct microservice.</p> <p>The rewrite of the path is not working as I expect. It doesn't replace <code>/config/</code> with <code>/</code>. Am I misunderstanding <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md" rel="nofollow noreferrer">the example</a>?</p> <p>In the logs of the nginx controller I see:</p> <pre><code>192.168.99.1 - [192.168.99.1] - - [13/Dec/2017:08:54:11 +0000] "GET /config HTTP/1.1" 301 185 "-" "curl/7.55.1" 78 0.000 [-] - - - - 192.168.99.1 - [192.168.99.1] - - [13/Dec/2017:08:54:11 +0000] "GET /config/ HTTP/1.1" 404 154 "-" "curl/7.55.1" 79 0.010 [dev-api-gateway-web] 172.17.0.10:8080 143 0.010 404 </code></pre> <p>That's telling me that a request for <code>/config</code> got rewritten to <code>/config/</code> and resulted in a 404.</p> <p><strong>How do I rewrite <code>/config</code> to <code>/</code>?</strong></p>
<p>You are misunderstanding the example. With your configuration, requests to the rewrite target <code>/</code> will be rewritten to <code>/config/</code>, not the other way around. So you need to switch the paths: use <code>/config</code> as the rewrite target, and set the <code>/</code> path in the rule.</p> <p>So the rewrite target is always where the external requests go to, and the rule path is where the requests to the service/pod go to. </p>
<p>Assume I have a deployment/Pod with, let's say, the name "xyz". I would like to check if this Pod has a specific image. Only if this image is different from what I have recently pulled from the docker registry should I deploy; otherwise, don't deploy. This is assuming that the tags are not updated. Any ideas appreciated. </p>
<p>Check out this thread <a href="https://stackoverflow.com/questions/46266006/how-do-i-redeploy-everything-in-kubernetes-after-updating-a-dockerfile/46266696#46266696">here</a>, which is based on the rolling update mechanism. Or set the image directly using the command line tool kubectl:</p> <pre><code>$ kubectl set image deployment/auth-deployment auth=auth:2.1.0 </code></pre> <p>Then you can simply check the status of the deployment to see the change made by the above rollout. Details are in the link in the comment given by Genti Saliu.</p> <p>Note that a rollout process will trigger only if the deployment's pod template is changed, i.e. <code>.spec.template</code> is changed. Using the above <code>set</code> command alone will not guarantee uptime until the rollout process is complete, so use it just in your dev environment.</p> <p>For no downtime you have to use a proper strategy such as the one below: </p> <pre><code>minReadySeconds: 7 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 0 </code></pre> <p>As long as <code>maxUnavailable</code> is set to zero, no existing pod will be replaced until the new pod has been ready for <code>minReadySeconds</code>. This will not give any downtime. Hope this helps.</p>
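<p>To answer the "check which image the Pod currently runs" part directly, a small sketch with <code>kubectl</code> and jsonpath (the names follow the question's "xyz" example):</p> <pre><code># image(s) currently running in the pod
kubectl get pod xyz -o jsonpath='{.spec.containers[*].image}'

# image(s) currently set on the deployment's pod template
kubectl get deployment xyz -o jsonpath='{.spec.template.spec.containers[*].image}'
</code></pre> <p>Comparing that output with the tag/digest you just pulled tells you whether a new rollout is needed at all.</p>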
<p>I installed minikube as instructed here <a href="https://github.com/kubernetes/minikube/releases" rel="nofollow">https://github.com/kubernetes/minikube/releases</a> and started with with a simple <code>minikube start</code> command. </p> <p>But the next step, which is as simple as <code>kubectl get pods --all-namespaces</code> fails with</p> <p><code>Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout</code></p> <p>What did I miss?</p>
<p>I faced a similar issue on Windows 7 when I changed work environments; as you said, it works fine at home but not at the office. There is a high chance it is caused by a firewall policy, so TLS verification cannot pass.</p> <p>Instead of wasting time on troubleshooting (sometimes there is nothing you can do if you cannot turn off the firewall), if you just want to test a local minikube cluster, I would suggest disabling TLS verification.</p> <p>This is what I have done:</p> <pre><code># How to disable minikube TLS verification ## disable TLS verification $ VBoxManage controlvm minikube natpf1 k8s-apiserver,tcp,127.0.0.1,8443,,8443 $ VBoxManage controlvm minikube natpf1 k8s-dashboard,tcp,127.0.0.1,30000,,30000 $ kubectl config set-cluster minikube-vpn --server=https://127.0.0.1:8443 --insecure-skip-tls-verify $ kubectl config set-context minikube-vpn --cluster=minikube-vpn --user=minikube $ kubectl config use-context minikube-vpn ## test kubectl $ kubectl get pods ## enable local docker client $ VBoxManage controlvm minikube natpf1 k8s-docker,tcp,127.0.0.1,2374,,2376 $ eval $(minikube docker-env) $ unset DOCKER_TLS_VERIFY $ export DOCKER_HOST="tcp://127.0.0.1:2374" $ alias docker='docker --tls' ## test local docker client $ docker ps ## test minikube dashboard curl http://127.0.0.1:30000 </code></pre> <p>I also made a <a href="https://github.com/robertluwang/docker-hands-on-guide/blob/master/minikube-no-tls-verify.md" rel="nofollow noreferrer">small script</a> for this for your reference.</p> <p>Hope it is helpful for you.</p>
<p>I am trying to run my app using Kubernetes I have created a deployment from file called <code>deployment.yaml</code> a service from the file called <code>service.yaml</code></p> <p>Here is the content of <code>deployment.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: kubectl-test spec: containers: - name: kubectl-test image: gcr.io/[my-name]/node-app:0.0.1 imagePullPolicy: Always ports: - containerPort: 8080 hostPort: 8080 </code></pre> <p>This is my <code>services.yaml</code></p> <pre><code>kind: Service apiVersion: v1 metadata: #Service name name: kubectl-test-node-app spec: selector: app: kubectl-test-189010 ports: - protocol: TCP port: 8000 targetPort: 8000 type: LoadBalancer </code></pre> <p>When I run the command <code>kubectl get deployments</code>, I can see:</p> <pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE kubernetes-bootcamp 1 1 1 1 16m </code></pre> <p>I can see my services by using <code>kubectl get services</code> and I can see:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubectl-test-node-app LoadBalancer 10.59.242.211 35.195.2.76 8000:31592/TCP 8m kubernetes ClusterIP 10.59.240.1 &lt;none&gt; 443/TCP 1d </code></pre> <p>When I try to visit the <code>EXTERNAL-IP:8000</code> I cant see my app there, I don't get any error, but I don't what is going wrong!</p> <p><strong>EDIT</strong></p> <p>I had to create deployment using:</p> <pre><code>`kubectl run kubernetes-bootcamp --image=[image-name] --port=8080` </code></pre> <p>but not with this command </p> <pre><code>`kubectl create -f deployment.yaml` </code></pre> <p>Is this related to the port: <code>8000:31592/TCP</code> ?</p> <pre><code>kubectl get pods kubectl-test 1/1 Running 0 4h kubernetes-bootcamp-1654181842-czwqg 1/1 Running 0 3h </code></pre> <p><strong>EDIT</strong></p> <p>Running the command <code>kubectl describe services kubectl-test-node-app</code> returns:</p> <pre><code>Name: kubectl-test-node-app Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=kubectl-test-189010 Type: LoadBalancer IP: 10.59.242.211 LoadBalancer Ingress: 35.195.2.76 Port: &lt;unset&gt; 8000/TCP TargetPort: 8000/TCP NodePort: &lt;unset&gt; 31592/TCP Endpoints: &lt;none&gt; Session Affinity: None External Traffic Policy: Cluster Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CreatingLoadBalancer 58m service-controller Creating load balancer Normal CreatedLoadBalancer 57m service-controller Created load balancer </code></pre> <p>There are no endpoints defined. I guess this is the problem? How can I build the end points?</p>
<p>Your service specification targets all pods with the label "app=kubectl-test-189010", which your pod's specification doesn't have. </p> <p>You need to update the pod's metadata so it matches the selector from the service spec (note that the label has to go under <code>metadata.labels</code>):</p> <pre><code> metadata: name: kubectl-test labels: app: kubectl-test-189010 </code></pre> <p>Also, your service accepts connections on port 8000 and then forwards them to port 8000 (targetPort) on the pod. But the pod listens on port 8080 (containerPort). So one of them should be changed to match the other - targetPort or containerPort.</p> <p>You can read more about configuring services in Kubernetes here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
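<p>After applying the label and port fix, a quick way to confirm the service has picked up the pod is to check that the endpoints list is no longer empty:</p> <pre><code>kubectl get endpoints kubectl-test-node-app
</code></pre>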
<p>I'm trying to figure out how to translate the docker command <code>docker run -p 8080:80 webapp</code> to kubernetes yaml. I have nginx dockerized and the image works fine when I start it with the command mentioned above. When I push it to kubernetes, nothing happens. </p> <p>Dockerfile:</p> <pre><code>FROM nginx:1.13.3-alpine COPY nginx/default.conf /etc/nginx/conf.d/ RUN rm -rf /usr/share/nginx/html/* COPY ./dist /usr/share/nginx/html CMD ["nginx", "-g", "daemon off;"] </code></pre> <p>My Kubernetes deployment:</p> <pre><code>kind: Deployment apiVersion: extensions/v1beta1 metadata: name: webapp-production spec: replicas: 1 template: metadata: name: webapp labels: app: webapp role: frontend env: production spec: containers: - name: webapp image: eu.gcr.io/projectxyz/webapp:1.0.0 env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: ENVIRONMENT value: "production" resources: limits: memory: "500Mi" cpu: "100m" imagePullPolicy: Always ports: - name: webapp containerPort: 8080 </code></pre> <p>My Service:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: webapp spec: selector: app: webapp role: frontend type: NodePort ports: - name: http port: 8080 - name: external port: 80 targetPort: 8080 </code></pre> <p>Do I have to run a command in kubernetes? Are the ports configured correctly? Thanks for the help.</p> <p>/////// Edit ///// Message from pod logs: <code>The selected container has not logged any messages yet.</code></p>
<p>You are targeting the wrong port here. From what I see this app starts nginx on port 80, yet your service definition targets port 8080 on endpoint pods. As there is no traffic reaching nginx, nothing will get logged on stdout access log. Change your service definition to direct traffic on correct port (80) and that should do the trick.</p> <p>Sidenote: if container would not start / terminate prematurely you would probably see a crashloop state on your pod.</p>
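<p>A sketch of the corrected port wiring (assuming nginx keeps its default listen port 80): the containerPort in the deployment and the service's targetPort both point at 80.</p> <pre><code># deployment (container section)
ports:
- name: webapp
  containerPort: 80

# service
ports:
- name: external
  port: 80
  targetPort: 80
</code></pre>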
<p>I have a <strong>console</strong> app in .net core. How do I implement a Kubernetes readiness/liveness probe?</p> <p>My app processes rabbitmq messages in a loop, and doesn't listen on any http ports. </p>
<p>Readiness probe makes zero sense for this scenario as you will not direct any traffic via means of Service. As your app reads rabbitmq on it's own, it will do so regardless of kube probes. There is logic though in using liveness probe where you might want to restart container if the process inside has failed in some way.</p> <p>You can either create a status api endpoint which will listen on some port for http requests and respond with 200 OK if your application is healthy (you need some logic inside to define what healthy means) or use command based probe to launch a command that will do some checking and report if container is ok or not.</p>
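<p>If you go the command-based route, a minimal sketch of an exec liveness probe (the health file and the timings are placeholders - your app would need to touch/refresh that file, or the command could be any script that checks the consumer is alive):</p> <pre><code>livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]   # hypothetical file written by the app while it is healthy
  initialDelaySeconds: 15
  periodSeconds: 30
</code></pre>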
<p>As I understand it, the purpose of a <em>Kubernetes Controller</em> is to make sure that the current state is equal to the desired state. Nevertheless, a <em>Kubernetes Operator</em> does the same job.</p> <p>The list of controllers in the <em>Control-Plane</em>:</p> <ul> <li>Deployment</li> <li>ReplicaSet</li> <li>StatefulSet</li> <li>DaemonSet</li> <li>etc</li> </ul> <p>From a Google search, I found out that there are <em>K8s Operators</em> such as</p> <ul> <li>etcd Operator</li> <li>Prometheus Operator</li> <li>kong Operators</li> </ul> <p>However, I was not able to understand why this cannot be done using a Controller.</p> <p>Does the Operator complement the Controllers?</p> <p>What's the difference between these two designs in purpose and functionality?</p> <p>What are the things to keep in mind when choosing between a Controller and an Operator?</p>
<p>I believe the term &quot;kubernetes operator&quot; was introduced by <a href="https://coreos.com/operators/" rel="noreferrer">the CoreOS people here</a></p> <blockquote> <p>An Operator is an application-specific controller that extends the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user. It builds upon the basic Kubernetes resource and controller concepts, but also includes domain or application-specific knowledge to automate common tasks better managed by computers.</p> </blockquote> <p>So basically, a kubernetes operator is the name of a pattern that consists of a kubernetes controller that adds new objects to the Kubernetes API, in order to configure and manage an application, such as Prometheus or etcd.</p> <p>In one sentence: An operator is a domain specific controller.</p> <h2>Update</h2> <p>There is <a href="https://github.com/tensorflow/k8s/issues/300" rel="noreferrer">a new discussion on Github</a> about this very same topic, linking to the same blog post. Relevant bits of the discussion are:</p> <blockquote> <p>All Operators use the controller pattern, but not all controllers are Operators. It's only an Operator if it's got: controller pattern + API extension + single-app focus.</p> <p>Operator is a customized controller implemented with CRD. It follows the same pattern as built-in controllers (i.e. watch, diff, action).</p> </blockquote> <h2>Update 2</h2> <p>I found <a href="https://octetz.com/posts/k8s-controllers-vs-operators" rel="noreferrer">a new blog post</a> that tries to explain the difference as well.</p>
<p>I am trying to setup Jenkins Dynamic slaves creation using jenkins-kubernetes plugin.</p> <p><strong>My jenkins is running outside K8s Cluster.</strong></p> <p>Link: <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="noreferrer">https://github.com/jenkinsci/kubernetes-plugin</a> </p> <p>My jenkins version is <strong>2.60.2</strong> and Kubernetes plugin version is <strong>1.1.2</strong> </p> <p>I followed the steps mention on the readme and successfully setup the connection.</p> <p>My setting looks like: <a href="https://i.stack.imgur.com/8dxSA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8dxSA.png" alt="enter image description here"></a></p> <p>And connection is successful.</p> <p>Then I created a job with pod template : <a href="https://i.stack.imgur.com/bN0zV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bN0zV.png" alt="enter image description here"></a></p> <p>Here starts the problem: <strong>1. When I run this job initially it runs and jenkins slave container inside my pod not able to connect and throws:</strong></p> <p><a href="https://i.stack.imgur.com/SDNrw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SDNrw.png" alt="enter image description here"></a></p> <p>I have enabled JNLP port(50000) not sure if it is the right port even tested with random option in Jenkins nothing worked.</p> <p><strong>2. Now I discarded this jenkins job and re run again it says:</strong></p> <pre><code> Started by user Vaibhav Jain [Pipeline] podTemplate [Pipeline] { [Pipeline] node Still waiting to schedule task Jenkins doesn’t have label defaultlabel </code></pre> <p>and no pod is getting started in kubernetes. <strong>This is weird</strong>.</p> <p>I am not sure what I am doing wrong. Need help!</p>
<p>Instead of using certificates, I suggest you to use credentials in kubernetes, by creating a serviceAccount:</p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: jenkins --- kind: Role apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: jenkins rules: - apiGroups: [""] resources: ["pods"] verbs: ["create","delete","get","list","patch","update","watch"] - apiGroups: [""] resources: ["pods/exec"] verbs: ["create","delete","get","list","patch","update","watch"] - apiGroups: [""] resources: ["pods/log"] verbs: ["get","list","watch"] - apiGroups: [""] resources: ["secrets"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: jenkins roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: jenkins subjects: - kind: ServiceAccount name: jenkins </code></pre> <p>and deploying jenkins using that serviceAccount:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: jenkins name: jenkins spec: replicas: 1 selector: matchLabels: app: jenkins template: metadata: labels: app: jenkins spec: serviceAccountName: jenkins .... </code></pre> <p>I show you my screenshots for Kubernetes plugin (note Jenkins tunnel for the JNLP port, 'jenkins' is the name of my kubernetes service):</p> <p><a href="https://i.stack.imgur.com/ohScK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ohScK.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/f8ECq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/f8ECq.png" alt="enter image description here"></a></p> <p>For credentials:</p> <p><a href="https://i.stack.imgur.com/Ti19f.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Ti19f.png" alt="enter image description here"></a></p> <p>Then fill the fileds (ID will be autogenerated, description will be shown in credentials listbox), but be sure to have created serviceAccount in kubernetes as I said before:</p> <p><a href="https://i.stack.imgur.com/iSFqX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iSFqX.png" alt="enter image description here"></a></p> <p>My instructions are for the Jenkins master inside kubernetes. If you want it outside the cluster (but slaves inside) I think you have to use simple login/password credentials.</p> <p>For what concerns your last error, it seems to be a host resolution error: the slave cannot resolve your host.</p> <p>I hope it helps you.</p>
<p>A node of my k8s cluster has GC trying to remove images used by a container. </p> <p>This behaviour seems strange to me.</p> <p>Here the logs:</p> <pre><code>kubelet: I1218 12:44:19.925831 11177 image_gc_manager.go:334] [imageGCManager]: Removing image "sha256:99e59f495ffaa222bfeb67580213e8c28c1e885f1d245ab2bbe3b1b1ec3bd0b2" to free 746888 bytes kubelet: E1218 12:44:19.928742 11177 remote_image.go:130] RemoveImage "sha256:99e59f495ffaa222bfeb67580213e8c28c1e885f1d245ab2bbe3b1b1ec3bd0b2" from image service failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 99e59f495ffa (cannot be forced) - image is being used by running container 6f236a385a8e kubelet: E1218 12:44:19.928793 11177 kuberuntime_image.go:126] Remove image "sha256:99e59f495ffaa222bfeb67580213e8c28c1e885f1d245ab2bbe3b1b1ec3bd0b2" failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 99e59f495ffa (cannot be forced) - image is being used by running container 6f236a385a8e kubelet: W1218 12:44:19.928821 11177 eviction_manager.go:435] eviction manager: unexpected error when attempting to reduce nodefs pressure: wanted to free 9223372036854775807 bytes, but freed 0 bytes space with errors in image deletion: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 99e59f495ffa (cannot be forced) - image is being used by running container 6f236a385a8e </code></pre> <p>Any suggestions? May a manual remove of docker images and stopped containers on a node cause such a problem?</p> <p>Thank you in advance.</p>
<p>What you've encountered is not the regular Kubernetes garbage collection that deleted orphaned API resource objects, but the <a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#image-collection" rel="nofollow noreferrer">kubelet's <em>Image collection</em></a>.</p> <p>Whenever a node experiences <em>Disk pressure</em>, the Kubelet daemon will desperately try to reclaim disk space by deleting (supposedly) unused images. Reading the <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/images/image_gc_manager.go" rel="nofollow noreferrer">source code</a> shows that the Kubelet sorts the images to remove by the time since they have last been used for creating a Pod -- if all images are in use, the Kubelet will try to delete them anyways and fail (which is probably what happened to you).</p> <p>You can use the Kubelet's <code>--minimum-image-ttl-duration</code> flag to specify a minimum age that an image needs to have before the Kubelet will ever try to remove it (although this will not prevent the Kubelet from trying to remove used images altogether). Alternatively, see if you can provision your nodes with more disk space for images (or build smaller images).</p>
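<p>If you want to tune this behaviour rather than just add disk, these are the relevant kubelet flags (the values below are only examples):</p> <pre><code>--minimum-image-ttl-duration=2m0s   # never garbage-collect images younger than this
--image-gc-high-threshold=85        # disk usage % that triggers image GC
--image-gc-low-threshold=80         # GC tries to free space down to this %
</code></pre>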
<p>I'm trying to understand why one of my containers in a pod is slower to start when started by the kubelet, than it is when started via the docker cli directly on the GKE node itself.</p> <p>Here's the kubelet log. The container is started, and but stays in an unready state for 23 seconds:</p> <pre><code>18:49:55.000 Container image "eu.gcr.io/proj/ns/myimage@sha256:fff668" already present on machine 18:49:55.000 Created container 18:49:56.000 Started container 18:49:56.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:49:58.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:50:00.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:50:02.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:50:04.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:50:06.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:50:08.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:50:10.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:50:12.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:50:14.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:50:16.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory 18:50:18.000 Readiness probe failed: cat: /tmp/healthy: No such file or directory </code></pre> <p>Finally the container <em>actually really</em> starts 23 seconds later. I know this because the very first thing it does is print the following log line, and then write the /tmp/healthy file for the readinessProbe.</p> <pre><code>18:50:18.000 17:50:18,572|MainThread|INFO|cli|Starting application </code></pre> <p>However, as the following command shows by printing the current date, and then starting the container with the docker cli (on the same node as the kubelet is running on above) it should only take ~1 second to start the container.</p> <pre><code>mark@gke-cluster-3 ~ $ date ++%Y-%m-%d %H:%M:%S.%N; docker run -it eu.gcr.io/proj/ns/myimage@sha256:fff668 2017-11-25 16:37:01.188799045 2017-11-25 16:37:02,246|MainThread|INFO|cli|Starting application </code></pre> <p>It's driving me a bit nuts! Any thoughts about what could be causing this welcomed :)</p>
<p>Turns out that the problem with the slow start up of these containers was constrained CPU for the Python interpreter during start up. I added a bash script that would print the datetime before starting the Python process and when varying the CPU resources available to the container, the problem becomes painfully clear. </p> <pre><code>cpu: 10m 2017-12-18 08:05:46,1513584346 starting script 2017-12-18 08:06:22,318|MainThread|INFO|cli|Application startup cpu: 50m 2017-12-18 08:15:11,1513584911 starting script 2017-12-18 08:15:27,317|MainThread|INFO|cli|Application startup cpu: 100m 2017-12-18 08:07:46,1513584466 starting script 2017-12-18 08:07:53,218|MainThread|INFO|cli|Application startup cpu: 150m 2017-12-18 08:18:16,1513585096 starting script 2017-12-18 08:18:20,730|MainThread|INFO|cli|Application startup cpu: 200m 2017-12-18 08:09:14,1513584554 starting script 2017-12-18 08:09:17,922|MainThread|INFO|cli|Application startup </code></pre> <p>It's a bit frustrating because the applications consume around 10m CPU at run time. I'm going to investigate module imports and other recommendations from here: <a href="https://lwn.net/Articles/730915/" rel="nofollow noreferrer">https://lwn.net/Articles/730915/</a></p>
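<p>Based on that, the practical mitigation is to give the container a more generous CPU request/limit so the interpreter is not throttled during start-up; a sketch with purely illustrative numbers:</p> <pre><code>resources:
  requests:
    cpu: 200m        # headroom for import/start-up spikes
  limits:
    cpu: 200m
    memory: 128Mi
</code></pre>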
<p>I'm running a gke 1.8.4 cluster, and seeing an issue with requests to access resources being permitted, even though RBAC is denying them</p> <p>from logs/kube-apiserver.log (I've replaced my username and the username I'm impersonating, in &lt;<em>italics</em>>):</p> <blockquote> <p>I1218 13:30:38.644205 5 httplog.go:64] &amp;{&lt;<em>my_user</em>> [system:authenticated] map[]} is acting as &amp;{&lt;<em>other_user</em>> [system:authenticated] map[]}<br/><br/> I1218 13:30:38.644297 5 rbac.go:116] RBAC DENY: user "&lt;<em>other_user</em>>" groups ["system:authenticated"] cannot "list" resource "secrets" in namespace "prod"<br/><br/> I1218 13:30:38.676079 5 wrap.go:42] GET /api/v1/namespaces/prod/secrets: (32.043196ms) 200 [[kubectl/v1.8.4 (linux/amd64) kubernetes/9befc2b]</p> </blockquote> <p>Why is the api proceeding to the GET after the RBAC DENY (and ultimately returning the secrets in response to my kubectl cmd)?</p> <p>fwiw my kubectl cmd is: <code>kubectl get secrets --namespace prod --as &lt;other_user&gt;</code></p> <p>I suspect there's another authorizer that's allowing it, though I've done everything I know of to ensure there isn't (ABAC should be disabled, as I'm on 1.8, the google cloud console shows it as being disabled, and I'm seeing "legacyAbac: {}" in the response from <em>gcloud beta container clusters describe</em>)</p>
<p>GKE enables both the RBAC authorizer and a webhook authorizer that consults GKE IAM. Does the specified username have permissions via GKE?</p>
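<p>Two quick checks (sketches; substitute your own project and user): ask the API server whether the impersonated user may list secrets (this goes through all configured authorizers), and inspect the project's IAM bindings that the GKE webhook authorizer consults.</p> <pre><code># does any authorizer allow the impersonated user to list secrets?
kubectl auth can-i list secrets --namespace prod --as &lt;other_user&gt;

# IAM roles granted on the project
gcloud projects get-iam-policy &lt;project-id&gt;
</code></pre>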
<p>I'm basically trying to run <code>crond -f</code> as root, while having the default user be something different.</p> <p>Since the crontabs it runs use sensitive information from other files on the image, I want to give root access to these files, start the crond process, then switch the user to a newly created one. This way the cronjobs will be able to get the information they need, while securing the sensitive files in the container from anyone who may get exec access. </p> <p>I have tried a couple of things like this:</p> <pre><code>USER root CMD ["./runCrons.sh"] USER newuser </code></pre> <p>But this does not run the crond process as root, but as newuser.</p> <p>If anyone has a solution it would save me some digging and experimentation.</p>
<p>While building the Docker image, create a user which belongs to the sudo group and is allowed to run all sudo commands without a password.</p> <p>Consider the example below, which creates a docker image called test with a user named myuser that has passwordless sudo: </p> <pre><code>$ cat Dockerfile FROM debian:latest ENV user_name myuser RUN apt-get update RUN apt-get install -y sudo RUN useradd --create-home -s /bin/bash ${user_name} RUN echo "${user_name} ALL=(ALL) NOPASSWD:ALL" &gt; /etc/sudoers.d/${user_name} WORKDIR /home/${user_name} USER ${user_name} CMD /bin/bash </code></pre> <p>Then build the image:</p> <p><code>docker build -t test .</code></p> <p>Now, to fix cron permission issues for the standard user, make sure all commands used in the cron scripts start with sudo, and run the entrypoint through sudo as well, like below (note that the exec form takes each argument separately).</p> <p><code>CMD ["sudo", "./runCrons.sh"]</code></p> <p>Since no password is expected when using sudo, everything should execute fine and you should be good to go.</p>
<p>I need to </p> <ul> <li>expose some pods directly on nodes, for TCP &amp; UDP </li> <li>be able to access them externally, individually</li> </ul> <p>I would like to avoid creating a load balancer service for each pod, as there is no need for load balancing, just exposure to the outside world.</p> <p>I don't see any solution with Service or Ingress.</p> <p>All this happens in GKE.</p> <p>Would someone have an idea?</p> <p>thanks!</p>
<p>If your nodes are accessible from the outside world you can get away with just <code>hostNetwork: true</code>; there are some potential issues with it though (i.e. just one such pod per host, or potential port conflicts with other stuff on the node). You don't need any service defined for it, as it will just listen on your node's ports (you need to have them open on the firewall, security policies or whatever guards your nodes from the external world).</p> <p>Any use of a service (except maybe a headless one) will result in load balancing between all backing pods (be it ClusterIP, NodePort or LB), but only an LB service will give you a dedicated external IP.</p>
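<p>A minimal sketch of the <code>hostNetwork</code> approach (name, image and port are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-udp-pod
spec:
  hostNetwork: true            # the pod shares the node's network namespace
  containers:
  - name: app
    image: my/image:tag
    ports:
    - containerPort: 5000      # reachable on &lt;node-ip&gt;:5000 directly
      protocol: UDP
</code></pre>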
<p>I've registered custom resource definition in K8S:</p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: resources.example.com labels: service: "my-resource" spec: group: example.com version: v1alpha1 scope: Namespaced names: plural: resources singular: resource kind: MYRESOURCE shortNames: - res </code></pre> <p>Now on attempt to get an 'explain' for my custom resource with:</p> <pre><code>kubectl explain resource </code></pre> <p>I get the following error:</p> <pre><code>group example.com has not been registered </code></pre> <p>How can I add an <code>explain</code> information to my custom resource definition, or is this not supported for CRDs?</p>
<p>explain works using openapi schema information published by the server. Prior to v1.15, CRDs did not have the ability to publish that info.</p> <p>In 1.15+, CRDs that specify <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#specifying-a-structural-schema" rel="nofollow noreferrer">structural schemas</a> and <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#pruning-versus-preserving-unknown-fields" rel="nofollow noreferrer">enable pruning</a> publish OpenAPI and work with explain.</p>
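<p>A hedged sketch of what that looks like for the <code>v1beta1</code> CRD from the question, once the cluster is on 1.15+ (the fields under <code>spec</code> are made up; the descriptions are what <code>kubectl explain</code> will print):</p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: resources.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: resources
    singular: resource
    kind: MYRESOURCE
    shortNames:
    - res
  preserveUnknownFields: false          # enable pruning
  validation:
    openAPIV3Schema:                    # structural schema
      type: object
      properties:
        spec:
          type: object
          properties:
            size:
              type: integer
              description: "Desired number of instances."
</code></pre>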
<p>I want to configure elasticsearch with the elasticsearch.yml, located in /usr/share/elasticsearch/config. I am doing this via a ConfigMap which I mount into the container.</p> <p>That does not work because k8s takes over control of this directory, and the other files in that directory do not exist anymore (e.g. jvm.options).</p> <p>I think that is a common issue - is there a best practice to fix this? The first idea which came to my mind was symlinking to /some/other/directory/elasticsearch.yml with an initContainer. Would that be wise to do?</p>
<p>So I found a proper solution I want to share with you. The magic happens with the subPath in the volumeMount:</p> <pre><code>apiVersion: v1 kind: ConfigMap ... data: elasticsearch.yml: | xpack.license.self_generated.type: basic --- apiVersion: extensions/v1beta1 kind: Deployment ... spec: ... template: containers: ... volumeMounts: - name: config mountPath: /usr/share/elasticsearch/config/elasticsearch.yml subPath: elasticsearch.yml volumes: - name: config configMap: name: elasticsearch-logging </code></pre>
<p>I came up with a use case where I need to give a specific name to the secret token that gets generated during creation of a namespace. </p> <p>So when we create a namespace in K8S, we get one secret token like below.</p> <pre><code>NAMESPACE NAME TYPE DATA AGE dev secrets/default-token-vvlzv kubernetes.io/service-account-token 3 1d devops secrets/default-token-0xpt0 kubernetes.io/service-account-token 3 9d </code></pre> <p>So what we want is for "default-token-vvlzv" to be generated with a user-given name like "dev-token". </p> <p>Is there any way to achieve this?</p>
<p>To the best of my knowledge, there is no such option. However, if you're creating your namespaces via software and not manually through kubectl, you can always clone the secret token to a copy with a name of your liking upon namespace creation. Do you automatically create namespaces or is that for a manual use-case?</p> <p>Cheers, Christian</p>
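<p>One sketch of the cloning idea: you cannot rename the auto-generated secret, but you can ask Kubernetes to mint an additional, predictably named token for the same service account by creating an annotated secret of type <code>kubernetes.io/service-account-token</code>; the token controller then fills in the token data.</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: dev-token                 # the name you want
  namespace: dev
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
</code></pre>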
<p><img src="https://i.stack.imgur.com/hI11N.png" alt="docker ps screenshot"></p> <p><code>docker ps</code> </p> <pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7523fd2c20c7 gcr.io/google_containers/k8s-dns-sidecar-amd64 "/sidecar --v=2 --..." 18 hours ago Up 18 hours k8s_sidecar_kube-dns-86f6f55dd5-qwc6z_kube-system_c1333ffc-e4d6-11e7-bccf-0021ccbf0996_0 9bd438011406 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 "/dnsmasq-nanny -v..." 18 hours ago Up 18 hours k8s_dnsmasq_kube-dns-86f6f55dd5-qwc6z_kube-system_c1333ffc-e4d6-11e7-bccf-0021ccbf0996_0 5c35e00a5a27 gcr.io/google_containers/k8s-dns-kube-dns-amd64 "/kube-dns --domai..." 18 hours ago Up 18 hours k8s_kubedns_kube-dns-86f6f55dd5-qwc6z_kube-system_c1333ffc-e4d6-11e7-bccf-0021ccbf0996_0 77ef463642b7 gcr.io/google_containers/pause-amd64:3.0 "/pause" 18 hours ago Up 18 hours k8s_POD_kube-dns-86f6f55dd5-qwc6z_kube-system_c1333ffc-e4d6-11e7-bccf-0021ccbf0996_0 39f618666205 gcr.io/google_containers/kubernetes-dashboard-amd64 "/dashboard --inse..." 18 hours ago Up 18 hours k8s_kubernetes-dashboard_kubernetes-dashboard-vgpjl_kube-system_c1176a44-e4d6-11e7-bccf-0021ccbf0996_0 023b7b554a8c gcr.io/google_containers/pause-amd64:3.0 "/pause" 18 hours ago Up 18 hours k8s_POD_kubernetes-dashboard-vgpjl_kube-system_c1176a44-e4d6-11e7-bccf-0021ccbf0996_0 1c3bdb7bdeb1 gcr.io/google-containers/kube-addon-manager "/opt/kube-addons.sh" 18 hours ago Up 18 hours k8s_kube-addon-manager_kube-addon-manager-tpad_kube-system_7b19c3ba446df5355649563d32723e4f_0 8a00feefa754 gcr.io/google_containers/pause-amd64:3.0 "/pause" 18 hours ago Up 18 hours k8s_POD_kube-addon-manager-tpad_kube-system_7b19c3ba446df5355649563d32723e4f_0 b657eab5f6f5 gcr.io/k8s-minikube/storage-provisioner "/storage-provisioner" 18 hours ago Up 18 hours k8s_storage-provisioner_storage-provisioner_kube-system_c0a8b187-e4d6-11e7-bccf-0021ccbf0996_0 67be5cc1dd0d gcr.io/google_containers/pause-amd64:3.0 "/pause" 18 hours ago Up 18 hours k8s_POD_storage-provisioner_kube-system_c0a8b187-e4d6-11e7-bccf-0021ccbf0996_0 </code></pre> <p>I just did the Kubernetes minikube tutorial at <a href="https://github.com/kubernetes/minikube" rel="noreferrer">https://github.com/kubernetes/minikube</a>, and I cannot stop or remove these containers, they always get recreated.</p> <pre><code>$ kubectl get deployment No resource found. $ minikube status minikube: Running cluster: Running kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100 </code></pre> <p>Output of <code>kubectl get pods --all-namespaces</code></p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE kube-system kube-addon-manager-minikube 1/1 Running 5 19h kube-system kube-dns-86f6f55dd5-6kjsn 3/3 Running 15 19h kube-system kubernetes-dashboard-68vph 1/1 Running 5 19h kube-system storage-provisioner 1/1 Running 5 19h </code></pre> <p>UPDATE:</p> <p>I completely removed all packages called 'kube*', removed docker, remove virtualbox, removed /var/lib/docker, reinstalled docker. And the containers are back! How on earth do you get rid of them?</p>
<p>What containers do you want to delete and why? The containers printed in your <code>docker ps</code> output are Kubernetes containers. You basically would destroy minikube by deleting these containers.</p> <p>In general Kubernetes manages these containers for you. Kubernetes interprets a deleted container as a failure and restarts it. To delete a container you have to delete the pod (or the ReplicaSet, ReplicationController or Deployment depending on your deployed applications).</p> <hr> <p>If these containers actually appear on your host system, then maybe you accidentally installed Kubernetes on your host system (with another tutorial). In this case you have to look for a process called <code>kubelet</code> which creates these containers.</p> <p>For example if you use systemd:</p> <pre><code>systemctl status kubelet # see if it's actually running systemctl stop kubelet # stop it systemctl disable kubelet # make sure it doesn't start after next reboot </code></pre>
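<p>If the goal is simply to shut the whole thing down, the minikube CLI is the cleaner route (a sketch):</p> <pre><code>minikube stop     # stops the minikube VM and, with it, all of these containers
minikube delete   # removes the VM entirely if you no longer want the cluster
</code></pre>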
<p>In kubernetes we can easily expose certain params and values through environment variables. Examples of these can be the node IP, the container uid, etc.</p> <p>Example</p> <pre><code> - name: POD_ID valueFrom: fieldRef: fieldPath: metadata.uid </code></pre> <p>However, I was wondering if there is a way to list the possible references that can be included in the pod. Either in the form of an API reference or dynamically on a pod.</p>
<p>Figured it out myself: you can only reference the fields that are also exposed when you run <code>kubectl edit pod &lt;podname&gt;</code> on a pod.</p>
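<p>For reference, a sketch of the downward-API field paths that are commonly supported via <code>fieldRef</code> (metadata.name, metadata.namespace, metadata.uid, metadata.labels, metadata.annotations, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP), used the same way as in the question:</p> <pre><code>env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
</code></pre>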
<p>Recently we have created a cluster on Kubernetes Engine (GCP) and we started to notice a strange behavior on it. Every day the nodes are getting stopped and recreated automatically in a certain time of day, making applications unavailable for a few minutes.</p> <p><strong>How the incidents are displayed in Stackdriver dashboard:</strong></p> <p><a href="https://i.stack.imgur.com/PqgLq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PqgLq.png" alt="enter image description here"></a></p> <p>In order to understand the root cause of the problem, I analyzed the logs in Stackdriver, taking as a reference the incident that happened today (<strong>2017-12-19</strong> <strong>12:22pm</strong>).</p> <p><strong>Cluster log:</strong></p> <p>The closest entry that exists related to the incident is just at <strong>12:26pm</strong> (probably the moment that the cluster was coming back).</p> <p><a href="https://i.stack.imgur.com/ntAda.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ntAda.png" alt="enter image description here"></a></p> <p><strong>Node log:</strong> </p> <p>The instance log also doesn't seem to help too much. The records closest to the incident just appears at <strong>12:23pm</strong> (also after the instance start to come back).</p> <p><a href="https://i.stack.imgur.com/DW5Zf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DW5Zf.png" alt="enter image description here"></a></p> <p>Has anyone ever been through this situation before or have any idea how can we debug it better and discover what is causing this behavior?</p> <p>The cause of the incident apparently is not been shown in Stackdriver logs.</p>
<p>The described behavior is very similar to how the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/preemptible-vm" rel="nofollow noreferrer">preemptible nodes in GKE</a> behave (they live a maximum of 24 hours).</p> <p>If you're unsure if your nodes are preemptible, check the GCP WebUI (my sample<a href="https://i.stack.imgur.com/vI9ap.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vI9ap.png" alt="enter image description here"></a> below, check the "Preemptible nodes" line), or via CLI:</p> <pre><code>$ gcloud compute instances list | grep gke | awk '{print $4}' </code></pre> <p>If the CLI command will return "true", that means that the nodes are preemptible (see below):</p> <pre><code>$ gcloud compute instances list | grep gke | awk '{print $4}' true true true </code></pre> <p>Note: if you have multiple GKE clusters under the same project, after <code>grep</code> command add your GKE cluster name.</p>
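<p>Another quick check directly from <code>kubectl</code>: preemptible GKE nodes normally carry the <code>cloud.google.com/gke-preemptible</code> label, so listing nodes with that label column shows which ones are affected.</p> <pre><code>kubectl get nodes -L cloud.google.com/gke-preemptible
</code></pre>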
<p>I have set up a Rancher k8s environment on AWS. </p> <p>Rancher server lies behind a classic ELB with ssl termination and is accessible via e.g <code>https://my.rancher.server</code>.</p> <p>I have deployed a simple pod via the command line by running</p> <pre><code>kubectl create -f &lt;podfilename.yml&gt; </code></pre> <p>I am then able to <code>get</code> and <code>describe</code> the pod.</p> <p>However, the following command fails:</p> <pre><code>$ kubectl exec my.pod.name -- ls /app W1219 12:13:12.053543 16174 http.go:363] Error reading backend response: unexpected EOF error: error sending request: Post https://my.rancher.server/r/projects/1a1043/kubernetes:6443/api/v1/namespaces/default/pods/my.pod.name/exec?command=ls&amp;command=%2Fapp&amp;container=k8s-demo&amp;container=k8s-demo&amp;stderr=true&amp;stdout=true: unexpected EOF </code></pre> <p><strong>edit</strong>: this is the json returned:</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "Upgrade request required", "reason": "BadRequest", "code": 400 } </code></pre> <p>I have configured my elb to use ssl listener, and also configured proxy protocol.</p> <p><a href="https://i.stack.imgur.com/orDTA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/orDTA.png" alt="enter image description here"></a></p>
<p>ELB's HTTP listeners do not support websockets; you need to use an SSL listener -> TCP backend and configure proxy protocol support. <a href="http://rancher.com/docs/rancher/v1.6/en/installing-rancher/installing-server/basic-ssl-config/#elb" rel="nofollow noreferrer">http://rancher.com/docs/rancher/v1.6/en/installing-rancher/installing-server/basic-ssl-config/#elb</a></p>
<p>I am running kubernetes (k8s) on top of Google Cloud Patform's Container Engine (GKE) and Load Balancers (GLB). I'd like to limit the access at a k8s ingress to an IP whitelist.</p> <p>Is this something I can do in k8s or GLB directly, or will I need to run things via a proxy which does it for me?</p>
<p>The way to whitelist source IP's in nginx-ingress is using below annotation.</p> <p><code>ingress.kubernetes.io/whitelist-source-range</code></p> <p><strong>But unfortunately, Google Cloud Load Balancer does not have support for it, AFAIK.</strong></p> <p>If you're using nginx ingress controller you can use it.</p> <p>The value of the annotation can be comma separated CIDR ranges.</p> <p>More on <a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#whitelist-source-range" rel="nofollow noreferrer">whitelist annotations</a>.</p> <p><a href="https://issuetracker.google.com/issues/35904903" rel="nofollow noreferrer">Issue tracker</a> for progress on Google Cloud Load Balancer support for whitelisting source IP's.</p>
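<p>For completeness, a sketch of the annotation on an ingress served by the nginx ingress controller (the annotation prefix varies with controller version - older releases use <code>ingress.kubernetes.io/whitelist-source-range</code>, newer ones <code>nginx.ingress.kubernetes.io/whitelist-source-range</code>; the CIDRs are placeholders):</p> <pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,203.0.113.10/32"
</code></pre>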
<p>I have tried to install Docker CE in my system and it ends with some problem.</p> <p>I did the below steps:</p> <ol> <li>sudo yum install -y yum-utils – No error</li> <li>sudo yum-config-manager --add-repo <a href="https://download.docker.com/linux/centos/docker-ce.repo" rel="nofollow noreferrer">https://download.docker.com/linux/centos/docker-ce.repo</a> - No error</li> <li>sudo yum makecache fast – No error</li> <li>sudo yum -y install docker-ce – Failed with error</li> </ol> <p>Error: Package: docker-ce-17.06.0.ce-1.el7.centos.x86_64 (docker-ce-stable) Requires: container-selinux >= 2.9</p> <hr> <p>yum can be configured to try to resolve such errors by temporarily enabling disabled repos and searching for missing dependencies. To enable this functionality please set 'notify_only=0' in /etc/yum/pluginconf.d/search-disabled-repos.conf</p> <hr> <p>Error: Package: docker-ce-17.06.0.ce-1.el7.centos.x86_64 (docker-ce-stable) Requires: container-selinux >= 2.9 You could try using --skip-broken to work around the problem</p> <p>Can someone please help me in this?</p>
<p>The <code>container-selinux</code> package is available from the <code>rhel-7-server-extras-rpms</code> channel. You can enable it using:</p> <pre><code>subscription-manager repos --enable=rhel-7-server-extras-rpms </code></pre> <p>But if you do not have any subscription for Enterprise Linux, you can use CentOS Extra repo as a workaround. Add the below content into <code>/etc/yum.repos.d/centos.repo</code></p> <pre><code>[CentOS-extras] name=CentOS-7-Extras mirrorlist=http://mirrorlist.centos.org/?release=7&amp;arch=$basearch&amp;repo=extras&amp;infra=$infra #baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/ gpgcheck=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 </code></pre>
<p>I have successfully set up a kubernetes cluster on AWS using <code>kops</code> and the following commands:</p> <pre><code>$ kops create cluster --name=&lt;my_cluster_name&gt; --state=s3://&lt;my-state-bucket&gt; --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.small --dns-zone=&lt;my-cluster-dns&gt; $ kops update cluster &lt;my-cluster-name&gt; --yes </code></pre> <p>The cluster has 1 master and 2 slaves.</p> <p>I am trying to deploy the dashboard using the following command, as per <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">these guidelines</a>:</p> <pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml </code></pre> <p><strong>1</strong>.</p> <p>I get the following error:</p> <pre><code>secret "kubernetes-dashboard-certs" created serviceaccount "kubernetes-dashboard" created error: error validating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"rbac.authorization.k8s.io", Version:"v1", Kind:"Role"}; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p><strong>2</strong>.</p> <p>My dashboard is not accessible via <code>https://&lt;my_master_node_public_ip&gt;/ui</code></p> <p>Instead, I get the following:</p> <pre><code>kind "Status" apiVersion "v1" metadata {} status "Failure" message "endpoints \"kubernetes-dashboard\" not found" reason "NotFound" details name "kubernetes-dashboard" kind "endpoints" code 404 </code></pre> <p><strong>3</strong>. </p> <p>After running</p> <pre><code>kubectl proxy </code></pre> <p>and trying to access the dashboard via:</p> <pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ </code></pre> <p>as instructed by the relevant guidelines, I have the exact same problem.</p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.11", GitCommit:"b13f2fd682d56eab7a6a2b5a1cab1a3d2c8bdd55", GitTreeState:"clean", BuildDate:"2017-11-25T17:51:39Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p><strong>edit</strong>: here is the outcome when turning validation errors off:</p> <pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml --validate=false secret "kubernetes-dashboard-certs" configured serviceaccount "kubernetes-dashboard" configured service "kubernetes-dashboard" created Error from server (BadRequest): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": Role in version "v1" cannot be handled as a Role: no kind "Role" is registered for version "rbac.authorization.k8s.io/v1" Error from server (BadRequest): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": RoleBinding in version "v1" cannot be handled as a RoleBinding: no kind "RoleBinding" is registered for version "rbac.authorization.k8s.io/v1" Error from server (BadRequest): error 
when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": Deployment in version "v1beta2" cannot be handled as a Deployment: no kind "Deployment" is registered for version "apps/v1beta2" </code></pre>
<p>The problem is that you're provisioning a Kubernetes 1.7 cluster with kops but using a <a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">dashboard manifest</a> written for Kubernetes 1.8. Delete the current cluster first, then re-create it pinned to 1.8:</p> <pre><code>$ kops create cluster --kubernetes-version="1.8.1" --name=&lt;my_cluster_name&gt; --state=s3://&lt;my-state-bucket&gt; --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.small --dns-zone=&lt;my-cluster-dns&gt; </code></pre> <p>As pointed out by <a href="https://stackoverflow.com/users/2409793/pkaramol">pkaramol</a>, you can alternatively upgrade kops to 1.8, which should also work.</p> <p>Note that in any case, in order to get to the dashboard, run:</p> <pre><code>$ kubectl proxy </code></pre> <p>... and then the dashboard should be accessible via <a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a> </p>
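<p>In case it helps, removing the existing cluster before re-creating it can also be done with kops. A sketch, assuming the same cluster name and state bucket as above:</p> <pre><code># delete the existing cluster and all its AWS resources
$ kops delete cluster --name=&lt;my_cluster_name&gt; --state=s3://&lt;my-state-bucket&gt; --yes
</code></pre>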
<p>I have deployed Prometheus 2.0 on my multi-node Kubernetes cluster built with kubeadm. In the Prometheus dashboard I am not able to view the pods and service jobs, even after configuring them in the Prometheus configuration YAML file. The Prometheus targets are shown here: <a href="https://i.stack.imgur.com/jiQPG.png" rel="nofollow noreferrer">https://i.stack.imgur.com/jiQPG.png</a>. Does this problem have anything to do with the Prometheus version? I think I am going wrong with the syntax part of the configuration.</p> <pre><code>global:
  scrape_interval: 5s
  scrape_timeout: 5s
  evaluation_interval: 5s
scrape_configs:
- job_name: kubernetes-apiservers
  scrape_interval: 5s
  scrape_timeout: 5s
  metrics_path: /metrics
  scheme: https
  kubernetes_sd_configs:
  - api_server: null
    role: endpoints
    namespaces:
      names: []
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: false
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: default;kubernetes;https
    replacement: $1
    action: keep
- job_name: kubernetes-nodes
  scrape_interval: 5s
  scrape_timeout: 5s
  metrics_path: /metrics
  scheme: https
  kubernetes_sd_configs:
  - api_server: null
    role: node
    namespaces:
      names: []
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: false
  relabel_configs:
  - separator: ;
    regex: __meta_kubernetes_node_label_(.+)
    replacement: $1
    action: labelmap
  - separator: ;
    regex: (.*)
    target_label: __address__
    replacement: kubernetes.default.svc:443
    action: replace
  - source_labels: [__meta_kubernetes_node_name]
    separator: ;
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics
    action: replace
- job_name: kubernetes-pods
  scrape_interval: 5s
  scrape_timeout: 5s
  metrics_path: /metrics
  scheme: https
  kubernetes_sd_configs:
  - api_server: null
    role: pod
    namespaces:
      names: []
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: false
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    separator: ;
    regex: "true"
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    separator: ;
    regex: (.+)
    target_label: __metrics_path__
    replacement: $1
    action: replace
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    separator: ;
    regex: ([^:]+)(?::\d+)?;(\d+)
    target_label: __address__
    replacement: $1:$2
    action: replace
  - separator: ;
    regex: __meta_kubernetes_pod_label_(.+)
    replacement: $1
    action: labelmap
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: kubernetes_namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: kubernetes_pod_name
    replacement: $1
    action: replace
- job_name: kubernetes-cadvisor
  scrape_interval: 5s
  scrape_timeout: 5s
  metrics_path: /metrics
  scheme: https
  kubernetes_sd_configs:
  - api_server: null
    role: node
    namespaces:
      names: []
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: false
  relabel_configs:
  - separator: ;
    regex: __meta_kubernetes_node_label_(.+)
    replacement: $1
    action: labelmap
  - separator: ;
    regex: (.*)
    target_label: __address__
    replacement: kubernetes.default.svc:443
    action: replace
  - source_labels: [__meta_kubernetes_node_name]
    separator: ;
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    action: replace
- job_name: kubernetes-service-endpoints
  scrape_interval: 5s
  scrape_timeout: 5s
  metrics_path: /metrics
  scheme: https
  kubernetes_sd_configs:
  - api_server: null
    role: endpoints
    namespaces:
      names: []
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    separator: ;
    regex: "true"
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    separator: ;
    regex: (https?)
    target_label: __scheme__
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    separator: ;
    regex: (.+)
    target_label: __metrics_path__
    replacement: $1
    action: replace
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    separator: ;
    regex: ([^:]+)(?::\d+)?;(\d+)
    target_label: __address__
    replacement: $1:$2
    action: replace
  - separator: ;
    regex: __meta_kubernetes_service_label_(.+)
    replacement: $1
    action: labelmap
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: kubernetes_namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: kubernetes_name
    replacement: $1
    action: replace
</code></pre> <p>Thanks</p>
<p>I assume you are doing the query you posted in the comments:</p> <pre><code>container_memory_usage_bytes{job="kubernetes-pods"} </code></pre> <p>This doesn't work, because you are filtering by the job name <code>kubernetes-pods</code>, but <code>container_memory_usage_bytes</code> comes from cAdvisor. So according to your config the job is named <code>kubernetes-cadvisor</code>.</p> <p>Therefore this should work:</p> <pre><code>container_memory_usage_bytes{job="kubernetes-cadvisor"} </code></pre> <p>Since the series name is rather unique, you can just omit the job name:</p> <pre><code>container_memory_usage_bytes </code></pre>
<p>I've been trying to shut down my Kubernetes cluster, but I couldn't manage to do it.</p> <p>When I type</p> <pre><code>kubectl cluster-info </code></pre> <p>I can see that my cluster is still running.</p> <p>I tried running the script</p> <pre><code>kube-down.sh </code></pre> <p>but it didn't work.</p> <p>I deleted all the pods. How can I shut the cluster down?</p>
<p>The tear down section of the <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#tear-down" rel="noreferrer">official documentation</a> says:</p> <blockquote> <p>To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down.</p> <p>Talking to the master with the appropriate credentials, run:</p> </blockquote> <pre><code>kubectl drain &lt;node name&gt; --delete-local-data --force --ignore-daemonsets kubectl delete node &lt;node name&gt; </code></pre> <blockquote> <p>Then, on the node being removed, reset all kubeadm installed state:</p> </blockquote> <pre><code>kubeadm reset </code></pre>
<p>I have a use case where I need to gracefully terminate a container: a script inside the container kills the main process with the command "kill PID", which sends the TERM signal. However, I also have a liveness probe configured, currently set to probe at a 60-second interval. If the liveness probe runs shortly after the graceful termination signal is sent, the overall health of the container might become CRITICAL while the termination is still in progress. In that case the liveness probe fails and the container is terminated immediately.</p> <p>So I wanted to know whether the kubelet kills the container with TERM or KILL.</p> <p>Appreciate your support. Thanks in advance.</p>
<p>In Kubernetes, the liveness probe checks the health state of a container.</p> <p>To answer your question on whether it uses SIGKILL or SIGTERM: both are used, SIGTERM first and then SIGKILL. Here is what happens under the hood.</p> <ol> <li>The liveness probe check fails</li> <li>Kubernetes stops routing traffic to the container</li> <li>Kubernetes restarts the container</li> <li>Kubernetes starts routing traffic to the container again</li> </ol> <p>For the container restart, SIGTERM is sent first; Kubernetes then waits for a configurable grace period before sending SIGKILL.</p> <p>A hack around your issue is to use the attribute:</p> <pre><code>timeoutSeconds </code></pre> <p>This specifies how long a probe request can take to respond before it’s considered a failure. You can add and adjust this parameter if the time taken for your application to come online is predictable.</p> <p>Also, you can use a <code>readinessProbe</code> in addition to the <code>livenessProbe</code>, with an adequate delay for the container to come back into service after restarting the process. Check <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a> for more details on which parameters to use.</p>
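<p>For illustration, here is a minimal sketch of a pod spec that gives the process room to terminate gracefully; the name, image, port and values are made up and should be adapted:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: graceful-app                  # hypothetical name
spec:
  terminationGracePeriodSeconds: 60   # time Kubernetes waits between SIGTERM and SIGKILL
  containers:
  - name: app
    image: my/app:1.0                 # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz                # assumes the app exposes a health endpoint
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 60
      timeoutSeconds: 5
      failureThreshold: 3             # several consecutive failures are required before a restart
</code></pre> <p>The idea is that the grace period, not the probe, bounds how long the shutdown may take, so a probe failure during termination does not cut the process off early.</p>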
<p>I'm trying to create a Pod from a private Docker image. For this I have created a secret like this:</p> <pre><code>kubectl create secret docker-registry $SECRETNAME --docker-server=$DOCKER_REGISTRY_SERVER --docker-username=$DOCKER_USER --docker-password=$DOCKER_PASSWORD --docker-email=$DOCKER_EMAIL </code></pre> <p>and then in the Pod YAML file:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: website.com
  labels:
    app: website
spec:
  containers:
  - name: my-web
    image: company/web:1.0.2
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
    ports:
    - name: web-port
      containerPort: 8080
  imagePullSecrets:
  - name: docker-hub-key
</code></pre> <p>When I run "kubectl create -f web-pod.yml" and then "kubectl get pod website.com", I get the following errors:</p> <ul> <li>Failed to pull image "company/web:1.0.2": rpc error: code = Unknown desc = Error response from daemon: repository company/web not found: does not exist or no pull access</li> <li>Back-off pulling image "company/web:1.0.2"</li> </ul> <p>Please help, thanks in advance</p>
<p>Since the failure is at the image pull stage, this is probably an issue with authentication or with access to the repository/image/tag. Please verify that the repository name is valid and that the image version specified is available in the repository.</p> <p>If all has been verified and the issue still persists, try the following:</p> <ul> <li>Restart the Docker service daemon</li> <li>Check that the secret provided for the repo is valid</li> <li><p>Try pulling the image using a kubectl command like the one below:</p> <p><code>$ kubectl run &lt;deployment-name&gt; --image=&lt;repo&gt;/&lt;image&gt;:&lt;version&gt; --port=8080</code></p></li> </ul> <p>If the above command is successful, it will help you rule out auth/access issues. If the issue is not solved, go through the logs to get more specifics on the problem.</p>
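<p>A few commands that may help with the checks above (using the names from the question). Note in particular that the name listed under <code>imagePullSecrets</code> has to match a secret that actually exists in the Pod's namespace:</p> <pre><code># inspect the secret the Pod references
$ kubectl get secret docker-hub-key -o yaml

# verify the image and tag can be pulled with the same credentials outside Kubernetes
$ docker login
$ docker pull company/web:1.0.2
</code></pre>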
<p>Kubernetes has the ability to set (in a deployment definition) environment variables from a config file. According to the <a href="https://12factor.net/config" rel="nofollow noreferrer">Twelve Factor App guide</a>, one should only store secrets/config vars in the environment variables, not in a file.</p> <p>What are the arguments in support of using the Kubernetes ConfigMap to set environment variables?</p>
<p>The Twelve Factor App guide <a href="https://12factor.net/config" rel="nofollow noreferrer">advocates the use of environment variables because when using configuration files</a></p> <blockquote> <p>it’s easy to mistakenly check in a config file to the repo; there is a tendency for config files to be scattered about in different places and different formats, making it hard to see and manage all the config in one place. Further, these formats tend to be language- or framework-specific.</p> </blockquote> <p>You could mount a <code>ConfigMap</code> as a volume in your application's file system, but then your application is responsible of knowing how to read that file during the start up of the app. Normally, it's just easier to read environment variables passed when starting the application.</p> <p>In both cases (reading from a file and reading from env vars) you would be following the Twelve Factor App recommendation. But when reading the configuration from a file, I believe is harder to run that application somewhere else, because it requires that we create that file first, which is a process that may be different for different platforms. On the other hand, passing environment variables it's usually always the same on all platforms.</p> <p>Being able to easily run the application on different platforms is the key goal of the Twelve Factor App guide, so I'd choose directly passing environment variables from a <code>ConfigMap</code>.</p>
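<p>As a minimal sketch (all names made up), a <code>ConfigMap</code> whose keys are injected as environment variables with <code>envFrom</code> could look like this:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: db.example.com
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my/app:1.0        # hypothetical image
    envFrom:
    - configMapRef:
        name: app-config     # every key in the ConfigMap becomes an environment variable
</code></pre> <p>With <code>envFrom</code> the application keeps reading plain environment variables and stays unaware of Kubernetes, which is exactly the portability the Twelve Factor guide is after.</p>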
<p>I ran into the below error when trying to deploy an application in a kubernetes cluster. It looks like kubernetes doesn't allow to mount a file to containers, do you know the possible reason?</p> <p>deployment config file</p> <pre> apiVersion: extensions/v1beta1 kind: Deployment metadata: name: model-loader-service namespace: "{{ .Values.nsPrefix }}-aai" spec: selector: matchLabels: app: model-loader-service template: metadata: labels: app: model-loader-service name: model-loader-service spec: containers: - name: model-loader-service image: "{{ .Values.image.modelLoaderImage }}:{{ .Values.image.modelLoaderVersion }}" imagePullPolicy: {{ .Values.pullPolicy }} env: - name: CONFIG_HOME value: /opt/app/model-loader/config/ volumeMounts: - mountPath: /etc/localtime name: localtime readOnly: true - mountPath: /opt/app/model-loader/config/ name: aai-model-loader-config - mountPath: /var/log/onap name: aai-model-loader-logs - mountPath: /opt/app/model-loader/bundleconfig/etc/logback.xml name: aai-model-loader-log-conf subPath: logback.xml ports: - containerPort: 8080 - containerPort: 8443 - name: filebeat-onap-aai-model-loader image: {{ .Values.image.filebeat }} imagePullPolicy: {{ .Values.pullPolicy }} volumeMounts: - mountPath: /usr/share/filebeat/filebeat.yml name: filebeat-conf - mountPath: /var/log/onap name: aai-model-loader-logs - mountPath: /usr/share/filebeat/data name: aai-model-loader-filebeat volumes: - name: localtime hostPath: path: /etc/localtime - name: aai-model-loader-config hostPath: path: "/dockerdata-nfs/{{ .Values.nsPrefix }}/aai/model-loader/appconfig/" - name: filebeat-conf hostPath: path: /dockerdata-nfs/{{ .Values.nsPrefix }}/log/filebeat/logback/filebeat.yml </pre> <p>Details information of this issue:</p> <pre><code>message: 'invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/dockerdata-nfs/onap/log/filebeat/logback/filebeat.yml\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/aufs/mnt/7cd32a29938e9f70a727723f550474cb5b41c0966f45ad0c323360779f08cf5c\\\\\\\" at \\\\\\\"/var/lib/docker/aufs/mnt/7cd32a29938e9f70a727723f550474cb5b41c0966f45ad0c323360779f08cf5c/usr/share/filebeat/filebeat.yml\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\"\n"' </code></pre> <p>....</p> <pre><code>$ docker version Client: Version: 1.12.6 API version: 1.24 Go version: go1.6.4 Git commit: 78d1802 Built: Tue Jan 10 20:38:45 2017 OS/Arch: linux/amd64 Server: Version: 1.12.6 API version: 1.24 Go version: go1.6.4 Git commit: 78d1802 Built: Tue Jan 10 20:38:45 2017 OS/Arch: linux/amd64 $ kubectl version Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.3-rancher3", GitCommit:"772c4c54e1f4ae7fc6f63a8e1ecd9fe616268e16", GitTreeState:"clean", BuildDate:"2017-11-27T19:51:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p><code>caused "not a directory"</code> is kind of self-explanatory. What are the exact volume and volumeMount definitions you use? Do you use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">subPath</a> in your declaration?</p> <p>EDIT: change</p> <pre><code>- name: filebeat-conf
  hostPath:
    path: /dockerdata-nfs/{{ .Values.nsPrefix }}/log/filebeat/logback/filebeat.yml
</code></pre> <p>to</p> <pre><code>- name: filebeat-conf
  hostPath:
    path: /dockerdata-nfs/{{ .Values.nsPrefix }}/log/filebeat/logback/
</code></pre> <p>and add <code>subPath: filebeat.yml</code> to the volumeMount.</p>
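<p>For completeness, the corresponding volumeMount in the filebeat container would then look something like this:</p> <pre><code>volumeMounts:
- name: filebeat-conf
  mountPath: /usr/share/filebeat/filebeat.yml
  subPath: filebeat.yml     # mount only the single file from the directory volume
</code></pre>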
<p>I am trying to setup horizontal pod autoscaling using custom-metrics. For <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics" rel="nofollow noreferrer">support of custom metrics</a> in kuberenetes 1.8.1, I need to <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/" rel="nofollow noreferrer">enable the aggregation layer</a> by setting the following flags in kube-apiserver:</p> <pre><code>--requestheader-client-ca-file=&lt;path to aggregator CA cert&gt; --requestheader-allowed-names=aggregator --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --proxy-client-cert-file=&lt;path to aggregator proxy cert&gt; --proxy-client-key-file=&lt;path to aggregator proxy key&gt; </code></pre> <p>The kubernetes documentation does not contains any information for how to set these flags in api-server and controller manager. I am using azure kubernetes service (AKS).</p> <p>Not sure but I think one of the possible way to set these flags could be by editing the yaml of kube-apiserver-xxx pod but when I run:</p> <pre><code>kubectl get po -n kube-system </code></pre> <p>I get no pod for kube-apiserver neither for kube controller manager.</p> <p>What is the possible way to set these flags in aks?</p> <p>I also deployed prometheus adapter for custom metrics but the pod logs showed me the following error:</p> <pre><code>panic: cluster doesn't provide requestheader-client-ca-file </code></pre> <p>You can see the exact requirements in configuration section in this <a href="https://github.com/kubeless/kubeless/tree/master/manifests/autoscaling" rel="nofollow noreferrer">link</a>.</p> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>Support for the aggregation layer has been added a couple weeks ago, so no configuration should be necessary for a new cluster. Please see details here: <a href="https://github.com/Azure/AKS/issues/54" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/54</a></p>
<p>I am trying to create a deployment declaratively, using <code>kubectl apply</code>. The below configuration is created just fine when I do</p> <pre><code>kubectl create -f postgres-deployment.yaml </code></pre> <p>but if I go</p> <pre><code>kubectl apply -f postgres-deployment.yaml </code></pre> <p>I am presented with the lovely error message:</p> <blockquote> <p>error: unable to decode "postgres-deployment.yaml": no kind "Deployment" is registered for version "apps/v1beta1"</p> </blockquote> <p>I have tried searching for an explanation to what this means but I cannot figure it out. </p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: postgres-deployment spec: replicas: 1 selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:10.1 ports: - containerPort: 5432 </code></pre>
<p>Old Kubernetes versions supported the Deployment object on the <code>extensions/v1beta1</code> API group. <a href="https://v1-9.docs.kubernetes.io/docs/api-reference/v1.9/#deployment-v1beta1-extensions" rel="nofollow noreferrer">That is no longer the case</a>.</p> <p>For Kubernetes versions before 1.9.0 you should use the API group <code>apps/v1beta2</code>.</p> <p><a href="https://v1-9.docs.kubernetes.io/docs/api-reference/v1.9/#deployment-v1-apps" rel="nofollow noreferrer">In Kubernetes 1.9</a> and above you should use the API group <code>apps/v1</code>.</p>
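<p>As a sketch, on a 1.9+ cluster the manifest from the question would look like this (with <code>apps/v1</code> the <code>spec.selector</code> field is mandatory, which the original manifest already provides); on older clusters substitute the API group listed above for your version:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres          # required with apps/v1, must match the pod template labels
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:10.1
        ports:
        - containerPort: 5432
</code></pre>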
<p>I am setting up kubernetes cluster on a Centos 7 machine, and the <code>kubeadm init</code> command gives me the below warning message.</p> <pre><code>[init] Using Kubernetes version: v1.9.0 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.09.1-ce. Max validated version: 17.03 [WARNING FileExisting-crictl]: crictl not found in system path </code></pre> <p>How can I fix this <code>crictl not found in system path</code> warning? Do I need to install any additional software?</p>
<p>Yes, you need additional software. crictl is part of the <a href="https://github.com/kubernetes-incubator/cri-tools/releases" rel="noreferrer">cri-tools</a> repo on GitHub.</p> <p>At least when I encountered this problem (Dec 20, 2017), cri-tools wasn't available in Kubernetes' package repo, so I had to download the source and build it. cri-tools is written in Go, so you may need to install golang on your system as well.</p>
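<p>If Go is installed, building it at the time looked roughly like the following; the repository layout may have changed since, so treat this as a sketch rather than an exact recipe:</p> <pre><code># fetch and build crictl into $GOPATH/bin
$ go get github.com/kubernetes-incubator/cri-tools/cmd/crictl

# make sure the resulting binary is on the PATH that kubeadm searches
$ export PATH=$PATH:$(go env GOPATH)/bin
</code></pre>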
<p>While joining a CentOS 7 node to a 1.9.0 cluster, the <code>kubeadm join</code> command gives this error message:</p> <p><code>Failed to request cluster info, will try again: [Get https://10.10.10.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]</code></p> <p>I got this message last night; this morning when I ran the command it worked. I then removed the cluster and tried to create it again this morning, and it is giving the same error message again.</p> <pre><code>kubeadm join --token f115fe.f0eea05182abe63a 10.10.10.10:6443 --discovery-token-ca-cert-hash sha256:48d4dc90a08ff73a0cfc63e30a313aaf1903fd51da8f9ce4cc79f95ce529b8d1
[discovery] Created cluster-info discovery client, requesting info from &quot;https://10.10.10.10:6443&quot;
[discovery] Requesting info from &quot;https://10.10.10.10:6443&quot; again to validate TLS against the pinned public key
[discovery] Failed to request cluster info, will try again: [Get https://10.10.10.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
[discovery] Failed to request cluster info, will try again: [Get https://10.10.10.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
</code></pre> <p>How can I resolve this error?</p>
<p>The root cause of the issue was that my node didn't have the correct time. After configuring the NTP service, the node was able to join the master.</p>
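<p>For reference, one way to set up time synchronisation on CentOS 7 (chrony is the distribution default; classic ntpd works just as well):</p> <pre><code>$ sudo yum install -y chrony
$ sudo systemctl enable chronyd
$ sudo systemctl start chronyd
$ chronyc tracking    # verify the clock is synchronised
</code></pre>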
<p>For a multi zoned, multi cluster setup. Is it possible to dynamically expose or retrieve the zone in which the master node is running, from a pod? Since this is needed to correctly push our metrics to stackdriver, in order to run a <a href="https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/custom-metrics-stackdriver-adapter" rel="nofollow noreferrer">horizontal pod autoscaler</a> based on them.</p> <p>I can hardcode it in the individual deployments but I would like to avoid that.</p> <p>I've tried looking in the compute internal metadata endpoint, and in the <a href="https://stackoverflow.com/questions/47891326/way-to-get-list-of-field-references-in-kubernetes/47891675#47891675">reference variables</a> kubernetes has, but none seem to expose the zone of the master.</p>
<p>There is an unofficial way to determine what zone (or region) the master is running in by parsing the <code>kube-env</code> metadata entry on a node. <code>kube-env</code> is a key-value store where keys are all uppercase, followed by a colon, which is followed by the value. If you look for the key <code>ZONE</code> the value will be the cluster location (e.g. the master zone for multi-zonal clusters with a single zone master or the master region for regional clusters). </p> <p>Note that this isn't a supported API, so it may disappear in future versions of GKE, but for now it's a simple way to fetch the cluster location from any node. </p>
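<p>A sketch of reading it from a node, relying on the unofficial <code>kube-env</code> attribute described above (so it may break in future GKE versions):</p> <pre><code># query the GCE metadata server for the kube-env attribute and pull out the ZONE value
$ curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env" \
  | grep "^ZONE:" | awk '{print $2}'
</code></pre>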
<p>I've an gce ingress controller pointing to a service type <code>NodePort</code>.</p> <p>Is there a way to preserve the client's ip? I've tried this from <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">here</a>, but it didn't work.</p> <pre><code>service.spec.externalTrafficPolicy = Local </code></pre> <p>I found some ways to do for nginx but none for an gce ingress.</p>
<p>The original client IP comes in the <code>X-Forwarded-For</code> header. If you're using Rails it may come as <code>HTTP_X_FORWARDED_FOR</code>.</p> <p><a href="https://cloud.google.com/compute/docs/load-balancing/http/" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/load-balancing/http/</a></p>
<p>By default Docker uses a shm size of 64m if not specified, but that can be increased in Docker using --shm-size=256m.</p> <p>How should I increase the shm size of a Kubernetes container, or use the equivalent of Docker's --shm-size in Kubernetes?</p>
<p>I originally bumped into this post coming from Google and went through the whole Kubernetes issue and the OpenShift workaround, only to find the much simpler solution listed <a href="https://stackoverflow.com/a/46434614/1165797">on another Stack Overflow answer</a> later.</p>
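<p>For completeness, the linked answer essentially boils down to mounting a memory-backed <code>emptyDir</code> over <code>/dev/shm</code>, roughly like this (a sketch; the container name and image are placeholders, and <code>sizeLimit</code> may require a newer Kubernetes version):</p> <pre><code>spec:
  containers:
  - name: app
    image: my/app:1.0          # hypothetical image
    volumeMounts:
    - name: dshm
      mountPath: /dev/shm
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory           # tmpfs-backed; usage counts against the container memory limit
      sizeLimit: 256Mi         # optional; may require a newer Kubernetes version
</code></pre>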
<p>I am trying to setup horizontal pod autoscaling using custom-metrics. For <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics" rel="nofollow noreferrer">support of custom metrics</a> in kuberenetes 1.8.1, I need to <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/" rel="nofollow noreferrer">enable the aggregation layer</a> by setting the following flags in kube-apiserver:</p> <pre><code>--requestheader-client-ca-file=&lt;path to aggregator CA cert&gt; --requestheader-allowed-names=aggregator --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --proxy-client-cert-file=&lt;path to aggregator proxy cert&gt; --proxy-client-key-file=&lt;path to aggregator proxy key&gt; </code></pre> <p>The kubernetes documentation does not contains any information for how to set these flags in api-server and controller manager. I am using azure kubernetes service (AKS).</p> <p>Not sure but I think one of the possible way to set these flags could be by editing the yaml of kube-apiserver-xxx pod but when I run:</p> <pre><code>kubectl get po -n kube-system </code></pre> <p>I get no pod for kube-apiserver neither for kube controller manager.</p> <p>What is the possible way to set these flags in aks?</p> <p>I also deployed prometheus adapter for custom metrics but the pod logs showed me the following error:</p> <pre><code>panic: cluster doesn't provide requestheader-client-ca-file </code></pre> <p>You can see the exact requirements in configuration section in this <a href="https://github.com/kubeless/kubeless/tree/master/manifests/autoscaling" rel="nofollow noreferrer">link</a>.</p> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>AKS now supports aggregated API - you can find specific scaling details in the following GitHub comment @ <a href="https://github.com/Azure/AKS/issues/77#issuecomment-352926551" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/77#issuecomment-352926551</a>. Run "az aks upgrade" even to the same Kubernetes version and AKS will update the control plane with the required certificates on the backend. </p>
<p>I am trying to deploy a simple nginx in kubernetes using hostvolumes. I use the next <strong>yaml</strong>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: webserver spec: replicas: 1 template: metadata: labels: app: webserver spec: containers: - name: webserver image: nginx:alpine ports: - containerPort: 80 volumeMounts: - name: hostvol mountPath: /usr/share/nginx/html volumes: - name: hostvol hostPath: path: /home/docker/vol </code></pre> <p>When I deploy it <code>kubectl create -f webserver.yaml</code>, it throws the next error:</p> <pre><code>error: error validating "webserver.yaml": error validating data: ValidationError(Deployment.spec.template): unknown field "volumes" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false </code></pre>
<p>I believe you have the wrong indentation. The <code>volumes</code> key should be at the same level as <code>containers</code>.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
      volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol
</code></pre> <p>Look at <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#deploy-wordpress" rel="noreferrer">this wordpress example</a> from the documentation to see how it's done.</p>
<p>Please before you comment or answer, this question is about a CLI program, not a service. Apparently 90% of Kubernetes has to do with running services, so there is sparse documentation for CLI programs meant to be part of a pipeline workflow.</p> <p>I have a command line program that uses stdout for JSON results.</p> <p>I have a docker image for the command line program.</p> <p>If I create the container as a Kubernetes Job, than stdout and stderr are mixed and require heuristic scrubbing to get pure JSON out.</p> <p>The stderr messages are from native libraries outside of my direct control.</p> <p>Supposedly, if I run <code>kubectl exec</code> against a running pod, I will get the normal stdout/stderr pipes.</p> <p>Is there a way to just have the pod running without an entrypoint (or some dummy service entrypoint) with the sole purpose of running <code>kubectl exec</code> against it?</p>
<blockquote> <p>Is there a way to just have the pod running without an entrypoint [...]?</p> </blockquote> <p>A pod consists of one or more containers, each of which has an individual entrypoint. It is certainly possible to run a <em>container</em> with a dummy command, for example, you can build an image with:</p> <pre><code>CMD sleep inf </code></pre> <p>This will run a container that will persist until you kill it, and you could happily <code>docker exec</code> into it.</p> <p>You can apply the same solution to k8s. You could build an image as described above and deploy that in a pod, or you could use an existing image and simply set the command, as in:</p> <pre><code>spec: containers: - name: mycontainer image: myexistingimage command: ["sleep", "inf"] </code></pre>
<p>Here is a transcript:</p> <pre><code>LANELSON$ kubectl --kubeconfig foo get -a jobs No resources found. </code></pre> <p>OK, fine; even with the <code>-a</code> option, no jobs exist. Cool! Oh, let's just be paranoid and check for one that we know was created. Who knows? Maybe we'll learn something:</p> <pre><code>LANELSON$ kubectl --kubeconfig foo get -a job emcc-poc-emcc-broker-mp-populator NAME DESIRED SUCCESSFUL AGE emcc-poc-emcc-broker-mp-populator 1 0 36m </code></pre> <p>Er, um, what?</p> <p>In this second case, I just happen to know the name of a job that was created, so I ask for it directly. I would have thought that <code>kubectl get -a jobs</code> would have returned it in its output. Why doesn't it?</p> <p>Of course what I'd <em>really</em> like to do is get the logs of one of the pods that the job created, but <code>kubectl get -a pods</code> doesn't show any of that job's terminated pods either, and of course I don't know the name of any of the pods that the job would have spawned.</p> <p>What is going on here?</p> <p>Kubernetes 1.7.4 if it matters.</p>
<p>The answer is that Istio <a href="https://istio.io/docs/setup/kubernetes/sidecar-injection.html#automatic-sidecar-injection" rel="nofollow noreferrer">automatic sidecar injection</a> happened to be "on" in the environment (I had no idea, nor should I have). When this happens, you can <a href="https://istio.io/docs/setup/kubernetes/sidecar-injection.html#overriding-automatic-injection" rel="nofollow noreferrer">opt out of it</a>, but otherwise all workloads are affected by default (!). If you don't opt out of it, and Istio's presence causes your Job not to be created for any reason, then your Job is technically <em>uninitialized</em>. If a resource is uninitialized, then it does not show up in <code>kubectl get</code> lists. To make an uninitialized resource show up in <code>kubectl get</code> lists, you need to include the <code>--include-uninitialized</code> option to <code>get</code>. So once I issued <code>kubectl --kubeconfig foo get -a --include-uninitialized jobs</code>, I could see the failed jobs.</p> <p>My higher-level takeaway is that the initializer portion of Kubernetes, currently in alpha, is not at all ready for prime time yet.</p>
<p>I building a Kubernetes cluster on Google Kubernetes Engine (GKE). It is basically a <code>Service</code> with an associated <code>ReplicaSet</code> with a number of pods. </p> <p>Those pods need to talk to each other for keeping consensus. For this end, a <code>ClusterIP</code> seems a good fit, allowing intra cluster communication of the pods.</p> <p>However, now I want to expose this Service to the world. My idea was to switch from <code>ClusterIP</code> to <code>NodePort</code> and couple it with an <code>Ingress</code>, which seems to be the <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#background" rel="nofollow noreferrer">best practice</a>.</p> <p>My problem is that when I switch the <code>Service</code> to <code>NodePort</code>, I lose the internal communication of the cluster, i.e. pods can't talk to each other. As far as my understanding goes, <code>NodePort</code> <a href="https://stackoverflow.com/a/41510604/4766124">is a superset of <code>ClusterIP</code></a>, so it should keep internal communication.</p> <p>What am I doing wrong?</p> <hr> <p>Edit with extra information:</p> <p>I am referring to <a href="https://github.com/neo4j-contrib/kubernetes-neo4j" rel="nofollow noreferrer">this example</a>, an example of a Neo4j graph database.</p> <p>The example deploys a <code>StatefulSet</code>, in which pods need to communicate, among other things, for keeping consensus between the cluster.</p> <p>With the provided setting, pods can talk to each other. If I change the <a href="https://github.com/neo4j-contrib/kubernetes-neo4j/blob/master/cores/dns.yaml" rel="nofollow noreferrer">Service</a> to <code>NodePort</code>, and fix the <code>nodePorts</code> that are used (instead of choosing them randomly as it normally does), pods can no longer communicate.</p> <p>Is this expected behavior, or am I missing something?</p>
<p>Indeed <code>NodePort</code> is a superset of <code>ClusterIP</code>, but to be clear: you do not need the Service to be of type <code>NodePort</code> for it to be exposed by an ingress controller. The ingress controller has access to the endpoints (pods) directly, so there is no need to use anything but <code>ClusterIP</code>.</p> <p>Another thing is that a ClusterIP Service has no effect on pod-to-pod connectivity, and it also seems a bit odd to route the consensus chatter through a regular Service (unless you have one Service per pod). For this kind of operation you might want to look closer at the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> concept, typically combined with a headless Service (sketched below).</p>
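<p>If the pods need to address each other individually for the consensus traffic (as StatefulSet members usually do), a headless Service is the common pattern. A minimal sketch with made-up names and ports:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: neo4j-discovery    # hypothetical name
spec:
  clusterIP: None          # headless: DNS returns the individual pod IPs
  selector:
    app: neo4j             # must match the StatefulSet pod labels
  ports:
  - name: discovery
    port: 5000             # made-up port
</code></pre> <p>Because <code>clusterIP: None</code> is set, DNS resolves the Service name to the individual pod IPs, and each StatefulSet pod gets a stable DNS name under the Service, which is exactly what consensus protocols need.</p>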
<h1>Question</h1> <p>What is the cause of the <strong>http: proxy error: x509: certificate signed by unknown authority</strong> when starting <strong>kubectl proxy</strong> with a specific IP of the node?</p> <pre><code>kubectl proxy --port=8001 --address=172.31.0.16 --accept-hosts='172.31.0.16' I1222 09:00:03.471836 16775 logs.go:41] http: proxy error: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") </code></pre>
<p>Found the cause: I had not executed the command below.</p> <pre><code>export KUBECONFIG=/etc/kubernetes/admin.conf </code></pre>
<p>I am learning Kubernetes and currently deep diving into high availability and while I understand that I can set up a highly available control plane (API-server, controllers, scheduler) with local (or with remote) etcds as well as a highly available set of minions (through Kubernetes itself), I am still not sure where in this concept services are located.</p> <p>If they live in the control plane: Good I can set them up to be highly available.</p> <p>If they live on a certain node: Ok, but what happens if the node goes down or becomes unavailable in any other way?</p> <p>As I understand it, services are needed to expose my pods to the internet as well as for loadbalancing. So no HA service, I risk that my application won't be reachable (even though it might be super highly available for any other aspect of the system).</p>
<p>A Kubernetes Service is another REST object in the k8s cluster. The following types of Services exist, and each one serves a different purpose in the cluster.</p> <ul> <li>ClusterIP</li> <li>NodePort</li> <li>LoadBalancer</li> <li>Headless</li> </ul> <p>Fundamental purposes of Services:</p> <ul> <li>Providing a single gateway to the pods</li> <li>Load balancing across the pods</li> <li>Inter-pod communication</li> <li>Providing stability, as pods can die and restart with a different IP</li> <li>and more</li> </ul> <p>These objects are <strong>stored in etcd</strong>, as it is the single source of truth in the cluster.</p> <p>Kube-proxy is responsible for wiring these objects up on every node. It uses selectors and labels.</p> <p>For instance, each Pod object carries labels, and the Service object has a selector that matches those labels (see the sketch below). Furthermore, each Pod has endpoints (IP:Port), and kube-proxy associates these endpoints with the Service's IP:Port. Kube-proxy uses iptables rules to do this magic.</p> <p>Kube-proxy is deployed as a DaemonSet, so it runs on every cluster node; the cluster state it acts on ultimately lives in etcd (accessed through the API server).</p>
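<p>As a small illustration of the label/selector matching described above (names made up):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web                  # label carried by the pod
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                  # must match the pod labels; the endpoints are built from this
  ports:
  - port: 80
    targetPort: 80
</code></pre>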
<p>We have an existing website, lets say <code>example.com</code>, which is a CNAME for <code>where.my.server.really.is.com</code>.</p> <p>We're now developing new services using Kubernetes. Our first service <code>/login</code> is ready to be deployed. Using a mock HTML server I've been able to deploy two pods with seperate services that map to <code>example.com</code> and <code>example.com/login</code>.</p> <p>What I would like to do is get rid of my mock HTML server, and provide a service inside of the cluster, that points to our full website outside of the server. Then I can change the DNS for <code>example.com</code> to point to our kubernetes cluster and people will still get the main site from <code>where.my.server.really.is.com</code>.</p> <p>We are using Traefik for ingress, and these are the changes I've made to the config for the website:</p> <pre><code>--- kind: Service apiVersion: v1 metadata: name: wordpress spec: type: ExternalName externalName: where.my.server.really.is.com --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: wordpress annotations: kubernetes.io/ingress.class: traefik spec: backend: serviceName: wordpress servicePort: 80 rules: - host: example.com http: paths: - backend: serviceName: wordpress servicePort: 80 </code></pre> <p>Unfortunately, when I visit <code>example.com</code>, rather than getting <code>where.my.server.really.is.com</code>, I get a 503 with the body "Service Unavailable". <code>example.com/login</code> works as expected</p> <p>What have I missed?</p>
<p>Following <a href="https://github.com/containous/traefik/blob/master/docs/user-guide/kubernetes.md#forwarding-to-externalnames" rel="nofollow noreferrer">traefik documentation on using <code>ExternalName</code></a></p> <blockquote> <p>When specifying an ExternalName, Træfik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443.</p> <p>This still requires setting up a proper port mapping on the Service from the Ingress port to the (external) Service port.</p> </blockquote> <p>I believe you are missing the <code>ports</code> configuration of the Service. Something like</p> <pre><code>apiVersion: v1 kind: Service metadata: name: wordpress spec: ports: - name: http port: 80 type: ExternalName externalName: where.my.server.really.is.com </code></pre> <p>You can see <a href="https://github.com/containous/traefik/blob/master/docs/user-guide/kubernetes.md#disable-per-ingress" rel="nofollow noreferrer">a full example in the docs</a>.</p>
<p>For a large online application that will run on k8s, the scale is maybe 500,000 daily active users.</p> <p>The application inside k8s needs a messaging feature - Pub/Sub. There are these options:</p> <ul> <li>Kafka</li> <li>RabbitMQ</li> <li>Redis</li> </ul> <h1>Kafka</h1> <p>It needs ZooKeeper, and since it is sensitive to disk I/O it tends to run best directly on the OS. So if I install it into the k8s cluster, how? Will the performance be worse?</p> <p>And if I keep Kafka outside of the k8s cluster and connect to it from the application inside the cluster, how is that for performance? They are in different layers, so won't it be slow?</p> <h1>RabbitMQ</h1> <p>It's slower than Kafka, but for an application with 500,000 daily active users, is it good enough? If so, maybe it's a good choice.</p> <h1>Redis</h1> <p>It's another option, maybe the simplest one. But from what I read on the internet, it can lose messages sometimes. If true, that's a deal-breaker.</p> <hr> <p>So, the most important question is: is running Kafka (together with ZooKeeper) on k8s good or not in this use case?</p>
<p>Yes, running Kafka on Kubernetes is great. Check out this example: <a href="https://github.com/Yolean/kubernetes-kafka" rel="noreferrer">https://github.com/Yolean/kubernetes-kafka</a>. It includes ZooKeeper and Kafka as <code>StatefulSet</code>s.</p> <hr> <p>PS. Running any of the services in your question on Kubernetes will be pleasant. You can Google the name of the service and "kubernetes" and find example manifests. Many examples here: <a href="https://github.com/kubernetes/charts" rel="noreferrer">https://github.com/kubernetes/charts</a>.</p>
<p>What can be the reason for not being able to curl/use a REST API with certain slugs, i.e. example.com/application/v2/xxxx? The Docker image I use is the app from <a href="http://vespa.ai/" rel="nofollow noreferrer">http://vespa.ai/</a> <a href="https://github.com/vespa-engine/docker-image/blob/master/Dockerfile" rel="nofollow noreferrer">https://github.com/vespa-engine/docker-image/blob/master/Dockerfile</a> </p> <p>I have set up a NodePort service correctly and an ingress.</p> <p>I have tried various configurations, for example a default backend on the host:</p> <pre><code>- host: example.com
  http:
    paths:
    - backend:
        serviceName: myservice
        servicePort: 19071
</code></pre> <p>or explicitly using wildcard routing:</p> <pre><code>- host: example.com
  http:
    paths:
    - path: /*
      backend:
        serviceName: myservice
        servicePort: 19071
</code></pre> <p>The strange thing is that doing a curl externally (outside the cluster), curl -s --head <a href="http://example.com/ApplicationStatus" rel="nofollow noreferrer">http://example.com/ApplicationStatus</a> does return status code 200 OK.</p> <p>Doing curl -s --head <a href="http://example.com/application/v2/tenant/" rel="nofollow noreferrer">http://example.com/application/v2/tenant/</a> returns BAD_REQUEST from the application.</p> <p>"error-code": "BAD_REQUEST", "message": "<a href="http://example.com/application/v2/tenant/" rel="nofollow noreferrer">http://example.com/application/v2/tenant/</a>"</p> <p>Exec-ing into the container and doing curl -s --head <a href="http://localhost:19071/application/v2/tenant/" rel="nofollow noreferrer">http://localhost:19071/application/v2/tenant/</a> works.</p> <p>So either the application somehow matches on a hostname which is not correct when the request comes from the ingress, or there is some other issue with the full URI not being proxied.</p> <p>The source code for that app is too big for me to understand at the moment, but looking at the source <a href="https://github.com/vespa-engine/vespa/blob/f76406b88df47f6bdbf9d24feda4c9ff55c63e06/orchestrator/src/main/java/com/yahoo/vespa/orchestrator/resources/HostSuspensionResource.java" rel="nofollow noreferrer">https://github.com/vespa-engine/vespa/blob/f76406b88df47f6bdbf9d24feda4c9ff55c63e06/orchestrator/src/main/java/com/yahoo/vespa/orchestrator/resources/HostSuspensionResource.java</a> might explain why it returns the error message.</p> <p>Everything else seems to work, both configserver and application.</p> <p>Is it the app itself or Kubernetes that might be the problem here?</p>
<p>HostSuspensionResource should not be involved here I think. The restapi entry point is the ApplicationHandler class for the call you are making.</p> <p>What happens when you curl inside the container with the default port (i.e not the 'internal' 19071 port)?</p>
<p>I create deployment and service from *.yaml. Into container I find ns record via <code>nslookup</code> or <code>dig</code>, but can't connect to db via service name or service IP. </p> <p>Anybody can tell me, what I do wrong?</p> <p><strong>Environment</strong>:</p> <p><strong>Minikube version</strong>: </p> <pre><code>$minikube version v0.24.1 </code></pre> <p><strong>OS</strong> (e.g. from /etc/os-release):</p> <pre><code>NAME="Ubuntu" VERSION="16.04.2 LTS (Xenial Xerus)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 16.04.2 LTS" VERSION_ID="16.04" HOME_URL="http://www.ubuntu.com/" SUPPORT_URL="http://help.ubuntu.com/" BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" VERSION_CODENAME=xenial UBUNTU_CODENAME=xenial </code></pre> <p><strong>VM Driver</strong> (e.g. <code>cat ~/.minikube/machines/minikube/config.json | grep DriverName</code>):</p> <pre><code>"DriverName": "kvm2", and "DriverName": "virtualbox", </code></pre> <p><strong>ISO version</strong> (e.g. <code>cat ~/.minikube/machines/minikube/config.json | grep -i ISO</code> or <code>minikube ssh cat /etc/VERSION</code>): </p> <pre><code>v0.23.6 </code></pre> <h2>DNS logs</h2> <h3>sidecar</h3> <pre><code>kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar ERROR: logging before flag.Parse: I1221 13:49:25.085555 1 main.go:48] Version v1.14.4-2-g5584e04 ERROR: logging before flag.Parse: I1221 13:49:25.085647 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns}) ERROR: logging before flag.Parse: I1221 13:49:25.085854 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} ERROR: logging before flag.Parse: I1221 13:49:25.086013 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. 
Interval:5s Type:1} </code></pre> <h2>dnsmasq</h2> <pre><code>kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq I1221 13:49:24.134834 1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000} I1221 13:49:24.135086 1 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] I1221 13:49:24.353157 1 nanny.go:111] W1221 13:49:24.353184 1 nanny.go:112] Got EOF from stdout I1221 13:49:24.353308 1 nanny.go:108] dnsmasq[10]: started, version 2.78-security-prerelease cachesize 1000 I1221 13:49:24.353340 1 nanny.go:108] dnsmasq[10]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify I1221 13:49:24.353364 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa I1221 13:49:24.353385 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa I1221 13:49:24.353419 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local I1221 13:49:24.353457 1 nanny.go:108] dnsmasq[10]: reading /etc/resolv.conf I1221 13:49:24.353487 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa I1221 13:49:24.353514 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa I1221 13:49:24.353534 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local I1221 13:49:24.353554 1 nanny.go:108] dnsmasq[10]: using nameserver 10.110.7.1#53 I1221 13:49:24.353617 1 nanny.go:108] dnsmasq[10]: read /etc/hosts - 7 addresses </code></pre> <h2>kubedns</h2> <pre><code>kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns I1221 13:49:23.122626 1 dns.go:48] version: 1.14.4-2-g5584e04 I1221 13:49:23.202663 1 server.go:66] Using configuration read from ConfigMap: kube-system:kube-dns I1221 13:49:23.202797 1 server.go:113] FLAG: --alsologtostderr="false" I1221 13:49:23.202924 1 server.go:113] FLAG: --config-dir="" I1221 13:49:23.202932 1 server.go:113] FLAG: --config-map="kube-dns" I1221 13:49:23.202936 1 server.go:113] FLAG: --config-map-namespace="kube-system" I1221 13:49:23.202959 1 server.go:113] FLAG: --config-period="10s" I1221 13:49:23.203028 1 server.go:113] FLAG: --dns-bind-address="0.0.0.0" I1221 13:49:23.203042 1 server.go:113] FLAG: --dns-port="10053" I1221 13:49:23.203082 1 server.go:113] FLAG: --domain="cluster.local." 
I1221 13:49:23.203101 1 server.go:113] FLAG: --federations="" I1221 13:49:23.203107 1 server.go:113] FLAG: --healthz-port="8081" I1221 13:49:23.203111 1 server.go:113] FLAG: --initial-sync-timeout="1m0s" I1221 13:49:23.203115 1 server.go:113] FLAG: --kube-master-url="" I1221 13:49:23.203194 1 server.go:113] FLAG: --kubecfg-file="" I1221 13:49:23.203198 1 server.go:113] FLAG: --log-backtrace-at=":0" I1221 13:49:23.203249 1 server.go:113] FLAG: --log-dir="" I1221 13:49:23.203254 1 server.go:113] FLAG: --log-flush-frequency="5s" I1221 13:49:23.203277 1 server.go:113] FLAG: --logtostderr="true" I1221 13:49:23.203281 1 server.go:113] FLAG: --nameservers="" I1221 13:49:23.203348 1 server.go:113] FLAG: --stderrthreshold="2" I1221 13:49:23.203369 1 server.go:113] FLAG: --v="2" I1221 13:49:23.203416 1 server.go:113] FLAG: --version="false" I1221 13:49:23.203447 1 server.go:113] FLAG: --vmodule="" I1221 13:49:23.203554 1 server.go:176] Starting SkyDNS server (0.0.0.0:10053) I1221 13:49:23.203842 1 server.go:198] Skydns metrics enabled (/metrics:10055) I1221 13:49:23.203858 1 dns.go:147] Starting endpointsController I1221 13:49:23.203863 1 dns.go:150] Starting serviceController I1221 13:49:23.204165 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0] I1221 13:49:23.204175 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0] I1221 13:49:23.555942 1 sync_configmap.go:107] ConfigMap kube-system:kube-dns was created I1221 13:49:24.054105 1 dns.go:171] Initialized services and endpoints from apiserver I1221 13:49:24.054128 1 server.go:129] Setting up Healthz Handler (/readiness) I1221 13:49:24.054206 1 server.go:134] Setting up cache handler (/cache) I1221 13:49:24.054257 1 server.go:120] Status HTTP port 8081 </code></pre> <p><strong>What happened</strong>: Can not <code>ping</code> or <code>traceroute</code> service via service name or IP.</p> <p><strong>What you expected to happen</strong>:</p> <p><code>ping</code> to the service via service name.</p> <p><strong>How to reproduce it</strong> (as minimally and precisely as possible):</p> <h3>Default namesapce</h3> <pre><code>$ kc get all NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/memcached 1 1 1 1 1h deploy/mongo 1 1 1 1 1h NAME DESIRED CURRENT READY AGE rs/memcached-64dcdbc9f6 1 1 1 1h rs/mongo-67d67fddf9 1 1 1 39m rs/mongo-6fc9bd6d6c 0 0 0 1h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/memcached 1 1 1 1 1h deploy/mongo 1 1 1 1 1h NAME DESIRED CURRENT READY AGE rs/memcached-64dcdbc9f6 1 1 1 1h rs/mongo-67d67fddf9 1 1 1 39m rs/mongo-6fc9bd6d6c 0 0 0 1h NAME READY STATUS RESTARTS AGE po/busybox 1/1 Running 0 29m po/memcached-64dcdbc9f6-j2v97 1/1 Running 0 1h po/mongo-67d67fddf9-55zgd 1/1 Running 0 39m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 1h svc/memcached ClusterIP 10.100.42.68 &lt;none&gt; 55555/TCP 1h svc/mongo ClusterIP 10.99.92.189 &lt;none&gt; 27017/TCP 1h </code></pre> <h3>kube-system</h3> <pre><code>$ kc get --namespace=kube-system all NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/kube-dns 1 1 1 1 1h NAME DESIRED CURRENT READY AGE rs/kube-dns-86f6f55dd5 1 1 1 1h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/kube-dns 1 1 1 1 1h NAME DESIRED CURRENT READY AGE rs/kube-dns-86f6f55dd5 1 1 1 1h NAME READY STATUS RESTARTS AGE po/kube-addon-manager-minikube 1/1 Running 1 1h po/kube-dns-86f6f55dd5-mrtrm 3/3 Running 3 1h po/kubernetes-dashboard-5sgcl 1/1 Running 1 1h 
po/storage-provisioner                  1/1       Running   1          1h

NAME                         DESIRED   CURRENT   READY     AGE
rc/kubernetes-dashboard      1         1         1         1h

NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns                ClusterIP   10.96.0.10     &lt;none&gt;        53/UDP,53/TCP   1h
svc/kubernetes-dashboard    NodePort    10.110.68.80   &lt;none&gt;        80:30000/TCP    1h
</code></pre>
<h3>resolv.conf</h3>
<pre><code>$ kc exec -it mongo-67d67fddf9-55zgd -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<h3>nslookup test</h3>
<pre><code>$ kc exec -it mongo-67d67fddf9-55zgd nslookup kubernetes
Server:     10.96.0.10
Address:    10.96.0.10#53

Non-authoritative answer:
Name:    kubernetes.default.svc.cluster.local
Address: 10.96.0.1
</code></pre>
<h3>ping test</h3>
<pre><code>$ kc exec -it mongo-67d67fddf9-55zgd -- ping kubernetes
PING kubernetes.default.svc.cluster.local (10.96.0.1): 56 data bytes
64 bytes from 10.96.0.1: icmp_seq=0 ttl=250 time=2.873 ms
64 bytes from 10.96.0.1: icmp_seq=1 ttl=250 time=1.845 ms
64 bytes from 10.96.0.1: icmp_seq=2 ttl=250 time=1.809 ms
64 bytes from 10.96.0.1: icmp_seq=3 ttl=250 time=2.035 ms
64 bytes from 10.96.0.1: icmp_seq=4 ttl=250 time=1.805 ms
--- kubernetes.default.svc.cluster.local ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.805/2.073/2.873/0.409 ms
</code></pre>
<h3>traceroute test (ok)</h3>
<pre><code>$ kc exec -it mongo-67d67fddf9-55zgd -- traceroute -n kubernetes
traceroute to kubernetes (10.96.0.1), 30 hops max, 60 byte packets
 1  10.110.7.1  0.207 ms  0.195 ms  0.186 ms
 2  192.168.1.1  0.317 ms  0.392 ms  0.456 ms
 3  10.77.0.1  2.261 ms  2.977 ms  3.755 ms
 4  10.128.132.1  1.568 ms  1.721 ms  1.934 ms
 5  192.168.39.136  2.055 ms  2.329 ms  2.456 ms
 6  10.128.145.2  8.603 ms  8.971 ms  9.391 ms
</code></pre>
<h3>test nslookup</h3>
<pre><code>$ kc exec -it mongo-67d67fddf9-55zgd -- nslookup mongo
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:    mongo.default.svc.cluster.local
Address: 10.99.92.189
</code></pre>
<h3>test ping</h3>
<pre><code>$ kc exec -it mongo-67d67fddf9-55zgd -- ping mongo
PING mongo.default.svc.cluster.local (10.99.92.189): 56 data bytes
--- mongo.default.svc.cluster.local ping statistics ---
210 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
</code></pre>
<h3>test traceroute (bad)</h3>
<pre><code>$ kc exec -it mongo-67d67fddf9-55zgd -- traceroute -n mongo
traceroute to mongo (10.99.92.189), 30 hops max, 60 byte packets
 1  10.110.7.1  0.228 ms  0.203 ms  0.194 ms
 2  192.168.1.1  0.438 ms  0.519 ms  0.582 ms
 3  10.77.0.1  2.290 ms  3.599 ms  4.396 ms
 4  10.128.132.1  1.851 ms  1.949 ms  2.166 ms
 5  192.168.39.136  2.258 ms  2.421 ms  2.618 ms
 6  10.128.145.5  5.193 ms  6.084 ms  8.301 ms
 7  * * *
 8  * * *
 9  * * *
10  * * *
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *
</code></pre>
<h3>traceroute IP (bad)</h3>
<pre><code>$ kc exec -it mongo-67d67fddf9-55zgd -- traceroute -n 10.99.92.189
traceroute to 10.99.92.189 (10.99.92.189), 30 hops max, 60 byte packets
 1  10.110.7.1  0.190 ms  0.136 ms  0.124 ms
 2  192.168.1.1  0.431 ms  0.485 ms  0.547 ms
 3  10.77.0.1  2.402 ms  3.256 ms  4.040 ms
 4  10.128.132.1  1.780 ms  1.790 ms  1.930 ms
 5  192.168.39.136  2.214 ms  2.209 ms  2.562 ms
 6  10.128.145.5  7.645 ms  8.028 ms  8.284 ms
 7  * * *
 8  * * *
 9  * * *
10  * * *
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *
</code></pre>
<h3>iptables from node</h3>
<pre><code>$ sudo iptables-save | grep mongo
-A KUBE-SEP-HYCP7OGZ3WQCZP76 -s 172.17.0.6/32 -m comment --comment "default/mongo:27017" -j KUBE-MARK-MASQ
-A KUBE-SEP-HYCP7OGZ3WQCZP76 -p tcp -m comment --comment "default/mongo:27017" -m tcp -j DNAT --to-destination 172.17.0.6:27017
-A KUBE-SERVICES -d 10.99.92.189/32 -p tcp -m comment --comment "default/mongo:27017 cluster IP" -m tcp --dport 27017 -j KUBE-SVC-VMEO5WN4YXST2YCP
-A KUBE-SVC-VMEO5WN4YXST2YCP -m comment --comment "default/mongo:27017" -j KUBE-SEP-HYCP7OGZ3WQCZP76
</code></pre>
<h3>mongo-deployment.yaml</h3>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml
    kompose.version: 1.6.0 (e4adfef)
  creationTimestamp: null
  labels:
    io.kompose.service: mongo
  name: mongo
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: mongo
    spec:
      containers:
      - image: docker.scnetservices.ru/mongo:dev
        name: mongo
        resources: {}
        volumeMounts:
        - mountPath: /data/db
          name: mongo-claim0
      restartPolicy: Always
      volumes:
      - name: mongo-claim0
        persistentVolumeClaim:
          claimName: mongo-claim0
status: {}
</code></pre>
<h3>mongo-service.yaml</h3>
<pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml
    kompose.version: 1.6.0 (e4adfef)
  creationTimestamp: null
  labels:
    io.kompose.service: mongo
  name: mongo
spec:
  ports:
  - name: "27017"
    port: 27017
    targetPort: 27017
  selector:
    io.kompose.service: mongo
status:
  loadBalancer: {}
</code></pre>
<h3>mongo-volume0-persistentvolume.yaml</h3>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: null
  labels:
    type: local
  name: mongo-volume0
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/docker/mongo"
    type: "DirectoryOrCreate"
  persistentVolumeReclaimPolicy: Recycle
  claimRef:
    namespace: default
    name: mongo-claim0
</code></pre>
<h3>mongo-claim0-persistentvolumeclaim.yaml</h3>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p>But if I connect to the mongo container via SSH tunneling, it works:</p>
<pre><code>$ ssh -fN -l docker -i "~/.minikube/machines/minikube/id_rsa" -L 27017:localhost:27017 $(minikube ip)
sah4ez@PC001:~$ mongo localhost:27017
MongoDB shell version v3.4.9
connecting to: localhost:27017
MongoDB server version: 3.4.2
Server has startup warnings:
2017-12-21T14:48:20.434+0000 I CONTROL  [initandlisten]
2017-12-21T14:48:20.434+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2017-12-21T14:48:20.434+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2017-12-21T14:48:20.434+0000 I CONTROL  [initandlisten]
&gt; db.hostInfo()
{
    "system" : {
        "currentTime" : ISODate("2017-12-21T16:23:35.940Z"),
        "hostname" : "minikube",
        "cpuAddrSize" : 64,
        "memSizeMB" : 1906,
        "numCores" : 2,
        "cpuArch" : "x86_64",
        "numaEnabled" : false
    },
    "os" : {
        "type" : "Linux",
        "name" : "PRETTY_NAME=\"Debian GNU/Linux 8 (jessie)\"",
        "version" : "Kernel 4.9.13"
    },
    "extra" : {
        "versionString" : "Linux version 4.9.13 (jenkins@jenkins) (gcc version 5.4.0 (Buildroot 2017.02) ) #1 SMP Thu Oct 19 17:14:00 UTC 2017",
        "libcVersion" : "2.19",
        "kernelVersion" : "4.9.13",
        "cpuFrequencyMHz" : "2993.200",
        "cpuFeatures" : "fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl eagerfpu pni vmx cx16 x2apic hypervisor lahf_lm tpr_shadow vnmi flexpriority ept vpid",
        "pageSize" : NumberLong(4096),
        "numPages" : 487940,
        "maxOpenFiles" : 65536
    },
    "ok" : 1
}
</code></pre>
<p>My OS info:</p>
<pre><code>sah4ez@PC001:~$ uname -a
Linux PC001 4.8.0-58-generic #63~16.04.1-Ubuntu SMP Mon Jun 26 18:08:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p>In the example above the connection was made to <strong>minikube</strong>, which runs <strong>Kernel 4.9.13</strong>...</p>
<p>=====</p>
<p><strong>UPD 2017/12/22</strong></p>
<p>I have now created two pods running mongo (<code>mongo</code> and <code>mongo2</code>). From the <code>mongo2</code> instance I can connect via the DNS name <code>mongo.default.svc.cluster.local:27017</code>, but not via the service IP. And from the <code>mongo</code> instance I can't connect via <code>mongo2.default.svc.cluster.local</code>.</p>
<pre><code>$ minikube ssh -- sudo iptables-save | grep mongo
-A KUBE-SEP-HYCP7OGZ3WQCZP76 -s 172.17.0.6/32 -m comment --comment "default/mongo:27017" -j KUBE-MARK-MASQ
-A KUBE-SEP-HYCP7OGZ3WQCZP76 -p tcp -m comment --comment "default/mongo:27017" -m tcp -j DNAT --to-destination 172.17.0.6:27017
-A KUBE-SEP-KVDY7RMLLBYXOYB5 -s 172.17.0.8/32 -m comment --comment "default/mongo:27017" -j KUBE-MARK-MASQ
-A KUBE-SEP-KVDY7RMLLBYXOYB5 -p tcp -m comment --comment "default/mongo:27017" -m tcp -j DNAT --to-destination 172.17.0.8:27017
-A KUBE-SERVICES -d 10.110.87.97/32 -p tcp -m comment --comment "default/mongo2:27017 cluster IP" -m tcp --dport 27017 -j KUBE-SVC-SDHY4S2JVGEDTQ2U
-A KUBE-SERVICES -d 10.98.1.35/32 -p tcp -m comment --comment "default/mongo:27017 cluster IP" -m tcp --dport 27017 -j KUBE-SVC-VMEO5WN4YXST2YCP
-A KUBE-SVC-VMEO5WN4YXST2YCP -m comment --comment "default/mongo:27017" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-HYCP7OGZ3WQCZP76
-A KUBE-SVC-VMEO5WN4YXST2YCP -m comment --comment "default/mongo:27017" -j KUBE-SEP-KVDY7RMLLBYXOYB5
-A KUBE-SERVICES -d 10.110.87.97/32 -p tcp -m comment --comment "default/mongo2:27017 has no endpoints" -m tcp --dport 27017 -j REJECT --reject-with icmp-port-unreachable
</code></pre>
<p>Attached to the <code>mongo2</code> pod and connected to the db in the <code>mongo</code> pod:</p>
<pre><code>root@mongo2-848b44844f-dbpxx:/# mongo mongo:27017
MongoDB shell version v3.4.2
connecting to: mongo:27017
MongoDB server version: 3.4.2
Server has startup warnings:
2017-12-22T13:27:46.904+0000 I CONTROL  [initandlisten]
2017-12-22T13:27:46.904+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2017-12-22T13:27:46.904+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2017-12-22T13:27:46.904+0000 I CONTROL  [initandlisten]
&gt; db.hostInfo()
{
    "system" : {
        "currentTime" : ISODate("2017-12-22T14:37:32.222Z"),
        "hostname" : "mongo-6fc9bd6d6c-cc8gh",
        "cpuAddrSize" : 64,
        "memSizeMB" : 1906,
        "numCores" : 2,
        "cpuArch" : "x86_64",
        "numaEnabled" : false
    },
    "os" : {
        "type" : "Linux",
        "name" : "PRETTY_NAME=\"Debian GNU/Linux 8 (jessie)\"",
        "version" : "Kernel 4.9.13"
    },
    "extra" : {
        "versionString" : "Linux version 4.9.13 (jenkins@jenkins) (gcc version 5.4.0 (Buildroot 2017.02) ) #1 SMP Thu Oct 19 17:14:00 UTC 2017",
        "libcVersion" : "2.19",
        "kernelVersion" : "4.9.13",
        "cpuFrequencyMHz" : "2993.200",
        "cpuFeatures" : "fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl eagerfpu pni vmx cx16 x2apic hypervisor lahf_lm tpr_shadow vnmi flexpriority ept vpid",
        "pageSize" : NumberLong(4096),
        "numPages" : 487940,
        "maxOpenFiles" : 65536
    },
    "ok" : 1
}
&gt; exit
</code></pre>
<p>Short answer: they technically don't exist. </p> <p>Long answer: they're iptables rules <a href="http://leebriggs.co.uk/blog/2017/02/15/kubernetes-networking-part1.html" rel="nofollow noreferrer">http://leebriggs.co.uk/blog/2017/02/15/kubernetes-networking-part1.html</a></p>
<p>I am learning Kubernetes, and and faced a conceptual question, what is the benefit of new taint model over the simple node selector.</p> <p>Documentation talks about a usecase where a group of devs might have exclusive right for a set of pods by a taint like <code>dedicated=groupA:NoSchedule</code>. But I thought we can do the same thing by a simple nodeSelector. </p> <p>To be more specific, what is the role of the effect on this taint. Why not simply a label like the rest of the Kubernetes.</p>
<p>A node selector affects a single pod template, asking the scheduler to place it on a set of nodes. A NoSchedule taint affects all pods asking the scheduler to block all pods from being scheduled there.</p> <p>A node selector is useful when the pod needs something from the node. For example, requesting a node that has a GPU. A node taint is useful when the node needs to be reserved for special workloads. For example, a node that should only be running pods that will use the GPU (so the GPU node isn't filled with pods that aren't using it).</p> <p>Sometimes they are useful together as in the example above, too. You want the node to only have pods that use the GPU, and you want the pod that needs a GPU to be scheduled to a GPU node. In that case you may want to taint the node with <code>dedicated=gpu:NoSchedule</code> and add both a taint toleration and node selector to the pod template.</p>