prompt | response |
---|---|
<h2>Introduction</h2>
<p>Configuring a new ingress-controller with Traefik using helm chart and creating secrets.</p>
<h2>Info</h2>
<p>Kubernetes version: 1.9.3</p>
<p>Helm version: 2.9</p>
<p>Traefik chart version: 1.5</p>
<p>Traefik version: 1.7.2</p>
<h2>Problem</h2>
<p>I am deploying Traefik through the official helm chart, but I always get the same error in the logs:
<code>"Error configuring TLS for ingress default/traefik-testing-tls: secret default/traefik-tls does not exist"</code></p>
<p>I have the secret properly created and configured in the same namespace, and I have also checked that the ClusterRole and ClusterRoleBinding are OK and allow access to secrets.</p>
<p>I tried changing the <code>defaultCert</code> and <code>defaultKey</code>, but I am not sure about this.</p>
<h3>Configmap:</h3>
<pre><code>data:
  traefik.toml: |
    # traefik.toml
    logLevel = "INFO"
    defaultEntryPoints = ["http", "https", "httpn"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
      compress = true
      [entryPoints.https]
      address = ":443"
      compress = true
      [entryPoints.httpn]
      address = ":8880"
      compress = true
    [kubernetes]
    namespaces = ["default", "kube-system"]
    [traefikLog]
      format = "json"
    [accessLog]
      format = "common"
      [accessLog.fields]
        defaultMode = "keep"
        [accessLog.fields.names]
        [accessLog.fields.headers]
          defaultMode = "keep"
          [accessLog.fields.headers.names]
</code></pre>
| <p>It looks like you are missing the <code>traefik-tls</code> secret for your <code>traefik-testing-tls</code> ingress, which should hold your TLS certificate. You can follow <a href="https://docs.traefik.io/user-guide/kubernetes/#add-a-tls-certificate-to-the-ingress" rel="nofollow noreferrer">this guide</a>.</p>
<p>Instead of:</p>
<pre><code>kubectl -n kube-system create secret tls traefik-ui-tls-cert --key=tls.key --cert=tls.crt
</code></pre>
<p>You can use:</p>
<pre><code>kubectl -n kube-system create secret tls traefik-tls --key=tls.key --cert=tls.crt
</code></pre>
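<p>To verify that the secret exists under the exact name and namespace the ingress expects, a quick check (a sketch; adjust the namespace if your setup differs) would be:</p>
<pre><code>kubectl -n default get secret traefik-tls
kubectl -n default describe ingress traefik-testing-tls
</code></pre>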
|
<p>Is Kubernetes Ingress secure enough to avoid adding a DMZ in front of Kubernetes to expose Pods and Services?
What would happen if someone "hacked" into a Pod?</p>
<p>Thanks.</p>
| <p>This is an opinion question, so I'll answer with an opinion.</p>
<p>It's very secure if you follow <a href="https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/" rel="nofollow noreferrer">standard security practices</a> for your cluster. But nothing is 100% secure. So adding a DMZ would help reduce your attack vectors. </p>
<p>In terms of protecting your Ingress from the outside, you can limit access to your external load balancer to HTTPS only, and most people do that, but note that HTTPS and your application itself can also have vulnerabilities.</p>
<p>As for your pods and workloads, you can increase security (at some performance cost) using things like a well-crafted <a href="https://github.com/docker/labs/tree/master/security/seccomp" rel="nofollow noreferrer">seccomp</a> profile and/or adding the right <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">capabilities</a> in your pod security context. You can also add more security with <a href="https://docs.docker.com/engine/security/apparmor/#load-and-unload-profiles" rel="nofollow noreferrer">AppArmor</a> or <a href="https://www.projectatomic.io/docs/docker-and-selinux/" rel="nofollow noreferrer">SELinux</a>, but many people don't since it can get very complicated.</p>
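<p>For example, a container security context that drops all Linux capabilities and only adds back what the workload needs could look like this (a minimal sketch; the image and the added capability are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: my-registry/my-app:latest   # placeholder image
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]      # only if the app must bind to a port below 1024
</code></pre>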
<p>There are also other alternatives to Docker in order to more easily sandbox your pods (still early in their lifecycle as of this writing): <a href="https://katacontainers.io/" rel="nofollow noreferrer">Kata Containers</a>, <a href="https://nabla-containers.github.io/" rel="nofollow noreferrer">Nabla Containers</a> and <a href="https://github.com/google/gvisor" rel="nofollow noreferrer">gVisor</a>.</p>
|
<p>At the moment I have a Kubernetes cluster distributed on AWS via kops. I have a doubt: is it possible to make a sort of snapshot of the Kubernetes cluster and <strong>recreate the same environment</strong> (master and pod nodes), for example <strong>to be resilient</strong> or to migrate the cluster in an easy way? I know that Heptio Ark exists and it is very nice, but I'm curious to know if there is an easier way to do it. For example, <strong>is it enough to back up Etcd</strong> (or in my case a snapshot of the EBS volumes)?</p>
<p>Thanks a lot. All suggestions are welcome</p>
| <p>kops stores its state in an S3 bucket identified by the <code>KOPS_STATE_STORE</code>. So yes, if your cluster has been removed you can restore it by running <code>kops create cluster</code>.</p>
<p>Keep in mind that it doesn't restore your etcd state, so for that you are going to want to set up <a href="https://github.com/kubernetes/kops/blob/master/docs/etcd_backup.md" rel="nofollow noreferrer">etcd backups</a>. You could also make use of <a href="https://github.com/heptio/ark" rel="nofollow noreferrer">Heptio Ark</a>.</p>
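<p>As a rough sketch of working against that state store (the bucket and cluster name below are placeholders, and the exact commands depend on whether the cluster spec is still present in the bucket):</p>
<pre><code># the kops state store still holds the cluster spec
export KOPS_STATE_STORE=s3://my-kops-state-bucket
kops get clusters

# recreate the cloud resources described by that spec
kops update cluster my-cluster.example.com --yes
</code></pre>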
<p>Similar answers to this topic:</p>
<p><a href="https://stackoverflow.com/questions/52778698/recover-kops-kubernetes-cluster">Recover kops Kubernetes cluster</a></p>
<p><a href="https://stackoverflow.com/questions/51408546/how-to-restore-kubernetes-cluster-using-kops">How to restore kubernetes cluster using kops?</a></p>
|
<p><strong>Use case</strong></p>
<p>I deployed the nginx ingress controller in my Kubernetes cluster using this helm chart:</p>
<p><a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/nginx-ingress</a></p>
<p>I created an ingress resource for my frontend serving webserver and it is supposed to redirect from non-www to the www version. I am using SSL as well.</p>
<p><strong>The problem</strong></p>
<p>When I visit the www version of my website everything is fine and nginx serves the page using my Lets Encrypt SSL certificate (which exists as secret in the right namespace). However when I visit the NON-www version of the website I get the failing SSL certificate page in my Browser (NET::ERR_CERT_AUTHORITY_INVALID) and one can see the page is served using the Kubernetes ingress fake certificate. I assume that's also the reason why the redirect to the www version does not work at all.</p>
<p><strong>This is my ingress resource (actual hostnames have been redacted):</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
  creationTimestamp: 2018-10-03T19:34:41Z
  generation: 3
  labels:
    app: nodejs
    chart: nodejs-1.0.1
    heritage: Tiller
    release: example-frontend
  name: example-frontend
  namespace: microservices
  resourceVersion: "5700380"
  selfLink: /apis/extensions/v1beta1/namespaces/microservices/ingresses/example-frontend
  uid: 5f6d6500-c743-11e8-8aaf-42010a8401fa
spec:
  rules:
  - host: www.example.io
    http:
      paths:
      - backend:
          serviceName: example-frontend
          servicePort: http
        path: /
  tls:
  - hosts:
    - example.io
    - www.example.io
    secretName: example-frontend-tls
</code></pre>
<p><strong>The question</strong></p>
<p>Why doesn't nginx use the provided certificate on the non-www version as well?</p>
| <p>It looks like you fixed the issue of receiving an invalid certificate by adding an additional rule.</p>
<p>The issue with the redirect looks like it's related to <a href="https://github.com/kubernetes/ingress-nginx/issues/2043" rel="noreferrer">this</a> and it's not fixed as of this writing. However, there is a workaround as described on the same link:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
if ($host = 'foo.com' ) {
rewrite ^ https://www.foo.com$request_uri permanent;
}
</code></pre>
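<p>For completeness, the snippet goes under the annotations of the same ingress, e.g. (a sketch adapted to the question's redacted hostnames):</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host = 'example.io' ) {
        rewrite ^ https://www.example.io$request_uri permanent;
      }
</code></pre>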
|
<p><strong>Background:</strong></p>
<p>I'm trying to stand up a BareMetal K8s Cluster and want to take advantage of Traefik's multitude of features for my cluster Ingress. I've got MetalLB in front providing the LoadBalancer IP Addresses and that isn't an issue for me at this time. </p>
<p><strong>Info:</strong></p>
<p>K8s Cluster Version: 1.12 </p>
<p>Helm and Tiller version: v2.11.0</p>
<p><strong>Problem:</strong></p>
<p>If I install Traefik using the helm chart and the <a href="https://docs.traefik.io/user-guide/kubernetes/#deploy-trfik-using-helm-chart" rel="nofollow noreferrer">linked instructions</a>, it installs, but when I go to check the Docker logs for the container that is created I get errors along the lines of:</p>
<pre><code>E1012 15:23:50.784829 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: Unauthorized
E1012 15:23:52.279720 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Unauthorized
E1012 15:23:52.784902 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.Ingress: Unauthorized
</code></pre>
<p>If I instead go a different route and try to manually install traefik using the official documentation, I can at least get it somewhat working, but I then get errors along the lines of </p>
<pre><code>time="2018-10-12T12:22:57Z" level=error msg="Service not found for monitoring/prometheus-server"
time="2018-10-12T12:22:59Z" level=warning msg="Endpoints not found for monitoring/prometheus-server"
</code></pre>
<p>So I am at a 100% loss as to what I need to do to get this up and running in my dev (and eventually prod) cluster. Can anyone provide some assistance and/or guidance to get me moving in the right direction?</p>
<p>Thank you in advance</p>
| <p>For the first installation (using Helm), it looks like you are missing the <a href="https://docs.traefik.io/user-guide/kubernetes/#prerequisites" rel="nofollow noreferrer">RBAC configs</a>:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml
</code></pre>
<p>For the second installation, it looks like Traefik might be configured to scrape metrics from the <code>monitoring</code> namespace and a <code>prometheus-server</code> service endpoint that does not exist in your cluster. It would be great if you could share how you deployed it.</p>
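<p>A couple of quick checks (generic commands; adjust the names as needed) that narrow both problems down:</p>
<pre><code># did the RBAC objects from the Traefik guide get created?
kubectl get clusterrole,clusterrolebinding | grep -i traefik

# does the service Traefik is complaining about actually exist?
kubectl -n monitoring get svc prometheus-server
</code></pre>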
|
<p>I have a kubernetes v1.12.1 cluster running some of my workloads. I would like to setup HPA in such that I can scale a particular POD based on metrics coming from Prometheus Node-Exporter.</p>
<p>My first question is, is it even possible to do HPA on metrics outside of the 'POD' metric namespace? If so, then here's the rest of what I am trying to do. I have setup Prometheus Node-Exporter to collect machine/node metrics and send them to Prometheus. Prometheus is sending these via the prometheus adapter to Kubernetes. I want to perform POD autoscaling based on one of these node metric values.</p>
<p>For example, if node_netstat_Udp_NoPorts >= '1', I want to scale out an additional pod. As another example, if node_sockstat_udp_mem >= '87380', I also want to scale out and perform a slight kernel-level modification on the host.</p>
<p>The problem I am having is that I cannot find ANY example of how to set up an HPA for a pod in which the custom metric is not a part of the 'pods' metrics namespace.</p>
<p>As you can see in my API get command below, those metrics are exposed to me. </p>
<pre><code>ᐅ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1|jq .|grep -i udp
  "name": "jobs.batch/node_netstat_Udp_InErrors",
  "name": "roles.rbac.authorization.k8s.io/node_netstat_Udp6_NoPorts",
  "name": "roles.rbac.authorization.k8s.io/node_netstat_UdpLite6_InErrors",
  "name": "jobs.batch/node_netstat_Udp_InDatagrams",
  "name": "jobs.batch/node_sockstat_UDP_mem_bytes",
  "name": "jobs.batch/node_sockstat_UDP_inuse",
  "name": "roles.rbac.authorization.k8s.io/node_netstat_Udp_InDatagrams",
  "name": "jobs.batch/node_sockstat_UDP_mem",
  "name": "jobs.batch/node_netstat_Udp_NoPorts",
  "name": "roles.rbac.authorization.k8s.io/node_sockstat_UDP_mem",
  "name": "roles.rbac.authorization.k8s.io/node_netstat_Udp_NoPorts",
  "name": "jobs.batch/node_netstat_Udp6_OutDatagrams",
  "name": "jobs.batch/node_netstat_Udp6_NoPorts",
  "name": "jobs.batch/node_netstat_UdpLite6_InErrors",
  "name": "roles.rbac.authorization.k8s.io/node_netstat_Udp6_InErrors",
  "name": "roles.rbac.authorization.k8s.io/node_netstat_Udp6_InDatagrams",
  "name": "roles.rbac.authorization.k8s.io/node_netstat_Udp6_OutDatagrams",
  "name": "roles.rbac.authorization.k8s.io/node_sockstat_UDP_inuse",
  "name": "roles.rbac.authorization.k8s.io/node_sockstat_UDP_mem_bytes",
  "name": "jobs.batch/node_netstat_Udp6_InDatagrams",
  "name": "jobs.batch/node_netstat_Udp_OutDatagrams",
  "name": "roles.rbac.authorization.k8s.io/node_netstat_UdpLite_InErrors",
  "name": "jobs.batch/node_netstat_UdpLite_InErrors",
  "name": "roles.rbac.authorization.k8s.io/node_sockstat_UDPLITE_inuse",
  "name": "jobs.batch/node_netstat_Udp6_InErrors",
  "name": "roles.rbac.authorization.k8s.io/node_netstat_Udp_OutDatagrams",
  "name": "jobs.batch/node_sockstat_UDPLITE_inuse",
  "name": "roles.rbac.authorization.k8s.io/node_netstat_Udp_InErrors"
</code></pre>
<p>I just do not understand how to add one of them to an HPA descriptor:</p>
<pre>
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: atl
  namespace: blackhole
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: awesome-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource: ????????
    name: ???????????
    target: ???????????
</pre>
<p>If anyone could help point me in the right direction that would be great.</p>
<p>Thanks!</p>
| <p>The documentation is a bit sketchy, but I believe you would use something like this:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: atl
  namespace: blackhole
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: awesome-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metric:
        name: node_sockstat_UDP_inuse
      describedObject:
        apiVersion: batch/v1
        kind: Job
        name: your-job-name
      target:
        type: Value
        value: 20
</code></pre>
<p>As per the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="nofollow noreferrer">docs</a>, metrics of <code>type: Resource</code> are by default limited to the cpu and memory metrics.</p>
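<p>Once the HPA is created, you can sanity-check that the metric resolves for the referenced object by querying the custom metrics API directly (a sketch; <code>your-job-name</code> is a placeholder):</p>
<pre><code>kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/blackhole/jobs.batch/your-job-name/node_sockstat_UDP_inuse" | jq .
</code></pre>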
|
<p>I would like to ask what the preferred or best way is to pass a config file to my app in the following scenario.</p>
<p>My app is developed in NodeJS and I have a JSON file called "config.json" that contains all the config parameters of my application, i.e. AD, SMTP, DB etc. A glimpse of the file looks like this:</p>
<pre><code>{
  "slackIncomingHook": [
    {"HookUrl": "<<HookUrl>>"}
  ],
  "wikiPage": {
    "url": "<<url>>",
    "timeFrame" : "week"
  },
  "database": {
    "dbName": "DBNAME",
    "dbHostName": "mongodb://username:password@<<IP Address>>:27017/"
  }
}
</code></pre>
<p>Now I want to deploy this project using Kubernetes, and I want to pass this information at runtime, or have it somehow merged at deployment time, using ConfigMaps.</p>
<p>My Dockerfile for this project consists of copying two separate/dependent projects, setting ENV variables, running npm installs and exposing ports.</p>
<p>PS - the Docker image is pushed to my private repository.</p>
<p>Expert advice would be highly appreciated.</p>
| <p>You can either create a ConfigMap or a Secret, e.g.:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
  namespace: default
data:
  AppConfig.json: |-
    {
      "slackIncomingHook": [
        {"HookUrl": "<<HookUrl>>"}
      ],
      "wikiPage": {
        "url": "<<url>>",
        "timeFrame" : "week"
      },
      "database": {
        "dbName": "DBNAME",
        "dbHostName": "mongodb://username:password@<<IP Address>>:27017/"
      }
    }
</code></pre>
<p>You can also create a Secret instead; Secret values are base64 encoded:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  namespace: default
type: Opaque
data:
  AppConfig.json: |-
    BASE_64_ENCODED_JSON
</code></pre>
<p>In the Deployment, add the Secret (or ConfigMap) under the <code>volumes</code> node and set a volume mount with <code>mountPath</code> pointing to the path of your config.json:</p>
<pre><code>volumeMounts:
- name: test-secretm
  mountPath: PATH_OF_YOUR_CONFIG_JSON
volumes:
- name: test-secretm
  secret:
    secretName: test-secret
</code></pre>
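<p>If you prefer not to hand-write the data block, the same objects can be created straight from the file (standard kubectl commands; the names here match the examples above):</p>
<pre><code>kubectl create configmap test-config --from-file=AppConfig.json=config.json
# or, for the Secret variant (kubectl does the base64 encoding for you)
kubectl create secret generic test-secret --from-file=AppConfig.json=config.json
</code></pre>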
|
<p>I am trying to do log monitoring of my Kubernetes cluster using Elasticsearch, Fluentd, and Kibana. Here is the <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="nofollow noreferrer">link</a> which I followed for this task. I labeled the nodes with beta.kubernetes.io/fluentd-ds-ready: "true". Initially, I created the StatefulSet for Elasticsearch.</p>
<p>After that, I created fluentd-es-configmap.yaml and fluentd-es-ds.yaml and checked the pod status using kubectl get pods -n kube-system. The Fluentd pods show their containers as running. I checked the logs of the Fluentd container and it shows errors like:</p>
<blockquote>
<p>2018-10-12 13:58:06 +0000 [warn]: [elasticsearch] bad chunk is moved to /tmp/fluentd-buffers/backup/worker0/elasticsearch/577e7f176a989a71a058275373e7f103.log
2018-10-12 13:58:51 +0000 [warn]: [elasticsearch] got unrecoverable error in primary and no secondary error_class=Fluent::Plugin::ElasticsearchOutput::ConnectionFailure error="Can not reach Elasticsearch cluster ({:host=>\"elasticsearch-logging\", :port=>9200, :scheme=>\"http\"})!"</p>
</blockquote>
<p>My fluentd-configmap.yaml:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: fluentd-es-config-v0.1.0
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: Reconcile
data:
system.conf: |-
<system>
root_dir /tmp/fluentd-buffers/
</system>
containers.input.conf: |-
# This configuration file for Fluentd / td-agent is used
# to watch changes to Docker log files. The kubelet creates symlinks that
# capture the pod name, namespace, container name & Docker container ID
# to the docker logs for pods in the /var/log/containers directory on the host.
# If running this fluentd configuration in a Docker container, the /var/log
# directory should be mounted in the container.
#
# These logs are then submitted to Elasticsearch which assumes the
# installation of the fluent-plugin-elasticsearch & the
# fluent-plugin-kubernetes_metadata_filter plugins.
# See https://github.com/uken/fluent-plugin-elasticsearch &
# https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
# more information about the plugins.
#
# Example
# =======
# A line in the Docker log file might look like this JSON:
#
# {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
# "stream":"stderr",
# "time":"2014-09-25T21:15:03.499185026Z"}
#
# The time_format specification below makes sure we properly
# parse the time format produced by Docker. This will be
# submitted to Elasticsearch and should appear like:
# $ curl 'http://elasticsearch-logging:9200/_search?pretty'
# ...
# {
# "_index" : "logstash-2014.09.25",
# "_type" : "fluentd",
# "_id" : "VBrbor2QTuGpsQyTCdfzqA",
# "_score" : 1.0,
# "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
# "stream":"stderr","tag":"docker.container.all",
# "@timestamp":"2014-09-25T22:45:50+00:00"}
# },
# ...
#
# The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
# record & add labels to the log record if properly configured. This enables users
# to filter & search logs on any metadata.
# For example a Docker container's logs might be in the directory:
#
# /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
#
# and in the file:
#
# 997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
#
# where 997599971ee6... is the Docker ID of the running container.
# The Kubernetes kubelet makes a symbolic link to this file on the host machine
# in the /var/log/containers directory which includes the pod name and the Kubernetes
# container name:
#
# synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
# ->
# /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
#
# The /var/log directory on the host is mapped to the /var/log directory in the container
# running this instance of Fluentd and we end up collecting the file:
#
# /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
#
# This results in the tag:
#
# var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
#
# The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
# which are added to the log message as a kubernetes field object & the Docker container ID
# is also added under the docker field object.
# The final tag is:
#
# kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
#
# And the final log record look like:
#
# {
# "log":"2014/09/25 21:15:03 Got request with path wombat\n",
# "stream":"stderr",
# "time":"2014-09-25T21:15:03.499185026Z",
# "kubernetes": {
# "namespace": "default",
# "pod_name": "synthetic-logger-0.25lps-pod",
# "container_name": "synth-lgr"
# },
# "docker": {
# "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
# }
# }
#
# This makes it easier for users to search for logs by pod name or by
# the name of the Kubernetes container regardless of how many times the
# Kubernetes pod has been restarted (resulting in a several Docker container IDs).
# Json Log Example:
# {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
# CRI Log Example:
# 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
<source>
@id fluentd-containers.log
@type tail
path /var/log/containers/*.log
pos_file /var/log/es-containers.log.pos
tag raw.kubernetes.*
read_from_head true
<parse>
@type multi_format
<pattern>
format json
time_key time
time_format %Y-%m-%dT%H:%M:%S.%NZ
</pattern>
<pattern>
format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
time_format %Y-%m-%dT%H:%M:%S.%N%:z
</pattern>
</parse>
</source>
# Detect exceptions in the log output and forward them as one log entry.
<match raw.kubernetes.**>
@id raw.kubernetes
@type detect_exceptions
remove_tag_prefix raw
message log
stream stream
multiline_flush_interval 5
max_bytes 500000
max_lines 1000
</match>
system.input.conf: |-
# Example:
# 2015-12-21 23:17:22,066 [salt.state ][INFO ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
<source>
@id minion
@type tail
format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
time_format %Y-%m-%d %H:%M:%S
path /var/log/salt/minion
pos_file /var/log/salt.pos
tag salt
</source>
# Example:
# Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
<source>
@id startupscript.log
@type tail
format syslog
path /var/log/startupscript.log
pos_file /var/log/es-startupscript.log.pos
tag startupscript
</source>
# Examples:
# time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
# time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
# TODO(random-liu): Remove this after cri container runtime rolls out.
<source>
@id docker.log
@type tail
format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
path /var/log/docker.log
pos_file /var/log/es-docker.log.pos
tag docker
</source>
# Example:
# 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
<source>
@id etcd.log
@type tail
# Not parsing this, because it doesn't have anything particularly useful to
# parse out of it (like severities).
format none
path /var/log/etcd.log
pos_file /var/log/es-etcd.log.pos
tag etcd
</source>
# Multi-line parsing is required for all the kube logs because very large log
# statements, such as those that include entire object bodies, get split into
# multiple lines by glog.
# Example:
# I0204 07:32:30.020537 3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
<source>
@id kubelet.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kubelet.log
pos_file /var/log/es-kubelet.log.pos
tag kubelet
</source>
# Example:
# I1118 21:26:53.975789 6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
<source>
@id kube-proxy.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-proxy.log
pos_file /var/log/es-kube-proxy.log.pos
tag kube-proxy
</source>
# Example:
# I0204 07:00:19.604280 5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
<source>
@id kube-apiserver.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-apiserver.log
pos_file /var/log/es-kube-apiserver.log.pos
tag kube-apiserver
</source>
# Example:
# I0204 06:55:31.872680 5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kube-ui
<source>
@id kube-controller-manager.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-controller-manager.log
pos_file /var/log/es-kube-controller-manager.log.pos
tag kube-controller-manager
</source>
# Example:
# W0204 06:49:18.239674 7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
<source>
@id kube-scheduler.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-scheduler.log
pos_file /var/log/es-kube-scheduler.log.pos
tag kube-scheduler
</source>
# Example:
# I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
<source>
@id glbc.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/glbc.log
pos_file /var/log/es-glbc.log.pos
tag glbc
</source>
# Example:
# I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
<source>
@id cluster-autoscaler.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/cluster-autoscaler.log
pos_file /var/log/es-cluster-autoscaler.log.pos
tag cluster-autoscaler
</source>
# Logs from systemd-journal for interesting services.
# TODO(random-liu): Remove this after cri container runtime rolls out.
<source>
@id journald-docker
@type systemd
matches [{ "_SYSTEMD_UNIT": "docker.service" }]
<storage>
@type local
persistent true
path /var/log/journald-docker.pos
</storage>
read_from_head true
tag docker
</source>
<source>
@id journald-container-runtime
@type systemd
matches [{ "_SYSTEMD_UNIT": "{{ container_runtime }}.service" }]
<storage>
@type local
persistent true
path /var/log/journald-container-runtime.pos
</storage>
read_from_head true
tag container-runtime
</source>
<source>
@id journald-kubelet
@type systemd
matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
<storage>
@type local
persistent true
path /var/log/journald-kubelet.pos
</storage>
read_from_head true
tag kubelet
</source>
<source>
@id journald-node-problem-detector
@type systemd
matches [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
<storage>
@type local
persistent true
path /var/log/journald-node-problem-detector.pos
</storage>
read_from_head true
tag node-problem-detector
</source>
<source>
@id kernel
@type systemd
matches [{ "_TRANSPORT": "kernel" }]
<storage>
@type local
persistent true
path /var/log/kernel.pos
</storage>
<entry>
fields_strip_underscores true
fields_lowercase true
</entry>
read_from_head true
tag kernel
</source>
forward.input.conf: |-
# Takes the messages sent over TCP
<source>
@type forward
</source>
monitoring.conf: |-
# Prometheus Exporter Plugin
# input plugin that exports metrics
<source>
@type prometheus
</source>
<source>
@type monitor_agent
</source>
# input plugin that collects metrics from MonitorAgent
<source>
@type prometheus_monitor
<labels>
host ${hostname}
</labels>
</source>
# input plugin that collects metrics for output plugin
<source>
@type prometheus_output_monitor
<labels>
host ${hostname}
</labels>
</source>
# input plugin that collects metrics for in_tail plugin
<source>
@type prometheus_tail_monitor
<labels>
host ${hostname}
</labels>
</source>
output.conf: |-
# Enriches records with Kubernetes metadata
<filter kubernetes.**>
@type kubernetes_metadata
</filter>
<match **>
@id elasticsearch
@type elasticsearch
@log_level info
type_name fluentd
include_tag_key true
host elasticsearch-logging
port 9200
logstash_format true
<buffer>
@type file
path /var/log/fluentd-buffers/kubernetes.system.buffer
flush_mode interval
retry_type exponential_backoff
flush_thread_count 2
flush_interval 5s
retry_forever true
retry_max_interval 30
chunk_limit_size 10M
total_limit_size 10G
queue_limit_length 8
overflow_action block
</buffer>
    </match>
</code></pre>
<p>I am trying to view the Kibana dashboard in the browser, but it just shows an empty response from the server.</p>
<p>Could anybody suggest how to resolve this error?</p>
<p>Thanks in advance.</p>
| <p>Based on the <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="nofollow noreferrer">link</a> that you shared, the log message means that Fluentd cannot reach your Elasticsearch cluster.</p>
<p>You need to apply <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/es-service.yaml" rel="nofollow noreferrer">this</a> service to expose your ElasticSearch cluster on port <code>9200</code> as <code>elasticsearch-logging</code> on the same namespace where your Fluentd pod is running. So it appears that <code>elasticsearch-logging</code> is missing. You can find out more details with:</p>
<pre><code>kubectl -n kube-system get svc
</code></pre>
<p>and </p>
<pre><code>kubectl -n kube-system describe svc elasticsearch-logging
</code></pre>
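<p>If the service does not exist at all, a minimal Service along these lines restores the <code>elasticsearch-logging</code> DNS name (a sketch; the selector must match the labels on your Elasticsearch StatefulSet pods, which is an assumption here):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    k8s-app: elasticsearch-logging   # assumption: the label used by the addon manifests
</code></pre>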
<p>Another issue could be that DNS (<code>coredns</code> or <code>kube-dns</code>) is not resolving within the namespace. You could try shelling into a pod in the same namespace:</p>
<pre><code>kubectl -n kube-system exec -it <pod-in-kube-system> sh
</code></pre>
<p>Then, inside the pod:</p>
<pre><code>curl elasticsearch-logging:9200
</code></pre>
|
<p>I'm using Google Kubernetes Engine, Cloud Build, and Image Registry. <a href="https://docs.docker.com/develop/develop-images/multistage-build/#use-an-external-image-as-a-stage" rel="nofollow noreferrer">According to the Docker docs</a>, I can use external images in Dockerfiles with <code>COPY --from</code>. This would be very useful because when I run <code>gcloud builds submit</code> on my Dockerfile, I'd like to add in images already built on GCR instead of rebuilding everything in one Dockerfile.</p>
<p>I've tried adding lines like <code>COPY --from=quickstart-image:latest /some/path/thing.conf /thing.conf</code> but I always get</p>
<p><code>pull access denied for quickstart-image, repository does not exist or may require 'docker login'</code></p>
<p>Is there some authentication step I'm missing? How can I get this to work?</p>
| <p>By default, <code>quickstart-image</code> refers to <a href="https://hub.docker.com/" rel="nofollow noreferrer">Docker Hub</a> and, as the error message suggests, it does not exist there.</p>
<p>If you want to use an image from GCR, you have to use the full address, like <code>asia.gcr.io/project-name/repo-name</code>.</p>
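<p>For example, the line from the question would become something like this in the Dockerfile (the project and image names are placeholders):</p>
<pre><code>COPY --from=gcr.io/my-project/quickstart-image:latest /some/path/thing.conf /thing.conf
</code></pre>
<p>Cloud Build's default service account normally has access to your own project's GCR, so the pull should then be authorized as long as the image actually exists in that registry.</p>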
|
<p>How can I get a list of the pods running on the same Kubernetes node as my own (privileged) pod, using the official Python Kubernetes client? That is, how can a pod identify the concrete Kubernetes node it is running on and then query for a full list of pods on this node only?</p>
| <p>I'm making the assumption here that you've deployed a pod to the cluster, and now you're trying to query the node it's running on.</p>
<p>This is actually two distinct problems:</p>
<blockquote>
<p>That is, how can a pod identify the concrete Kubernetes node it is running on</p>
</blockquote>
<p>There are two ways you can do this, <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="noreferrer">but they both involve the downward API</a>. You can either push the pod name down or push the node name down (or both). You need to do this first to enable the lookups you need. So the pod running the Kubernetes Python client needs to be deployed like so:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: python-kubernetes-client
    image: my-image
    command: [ "start_my_app" ]
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
  restartPolicy: Never
</code></pre>
<p>Okay, so now you have the pod information and the node information available to your running pod. </p>
<blockquote>
<p>and then query for a full list of pods on this node only</p>
</blockquote>
<p>Now that you know the node name the pod is running on, querying for the pods running on it is relatively straightforward using the python API:</p>
<pre><code>#!/usr/bin/env python
from kubernetes import client, config
import os


def main():
    # it works only if this script is run by K8s as a POD
    config.load_incluster_config()
    # use this outside pods
    # config.load_kube_config()

    # grab the node name from the pod environment vars
    node_name = os.environ.get('MY_NODE_NAME', None)

    v1 = client.CoreV1Api()
    print("Listing pods with their IPs on node: ", node_name)

    # field selectors are a string, you need to parse the fields from the pods here
    field_selector = 'spec.nodeName=' + node_name
    ret = v1.list_pod_for_all_namespaces(watch=False, field_selector=field_selector)
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))


if __name__ == '__main__':
    main()
</code></pre>
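<p>One extra thing to keep in mind: the pod's service account needs RBAC permission to list pods across namespaces, otherwise the call above returns a 403. A minimal sketch (all names are placeholders, and you can scope it down to specific namespaces if you prefer):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-reader-binding
subjects:
- kind: ServiceAccount
  name: default          # assumption: the pod runs with the default service account
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>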
|
<p>I need to run Kafka in a local Kubernetes instance (using Minikube) and to have the resulting Kafka service accessible to client applications (publishers and subscribers) outside the Minikube VM.</p>
<p>I have everything up and running in Minikube but I suppose that I have made a configuration mistake since I cannot access Kafka from outside. I have read similar questions and tried their suggested solutions, but none of them solved the issue for me.</p>
<p>I have posted my YAML configuration files at <a href="https://github.com/thomasleplus/docker-kafka" rel="noreferrer">https://github.com/thomasleplus/docker-kafka</a> as well as the shell script that I am using to start the whole thing on my Ubuntu machine. I would really appreciate it if someone could help me spot what I have missed.</p>
<p>Here's my configuration so far:</p>
<pre><code>$ kubectl describe service kafka-service
Name: kafka-service
Namespace: default
Labels: run=kafka
Annotations: <none>
Selector: run=kafka
Type: NodePort
IP: 10.0.0.121
Port: kafka-port 30123/TCP
NodePort: kafka-port 30123/TCP
Endpoints: 172.17.0.3:9092
Session Affinity: None
Events: <none>
$ kubectl describe deployment kafka-deployment
Name: kafka-deployment
Namespace: default
CreationTimestamp: Thu, 17 Aug 2017 20:42:51 -0700
Labels: run=kafka
Annotations: deployment.kubernetes.io/revision=1
Selector: run=kafka
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: run=kafka
Containers:
kafka-service:
Image: wurstmeister/kafka
Port: 9092/TCP
Environment:
KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
KAFKA_ADVERTISED_PORT: 30123
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper-service:2181
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: kafka-deployment-2817439001 (1/1 replicas created)
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
15m 15m 1 deployment-controller Normal ScalingReplicaSet Scaled up replica set kafka-deployment-2817439001 to 1
</code></pre>
<p>The logs:</p>
<pre><code>waiting for kafka to be ready
[2017-08-18 04:31:00,296] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = PLAINTEXT://192.168.99.100:30123
advertised.port = null
alter.config.policy.class.name = null
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 1
broker.id.generation.enable = true
broker.rack = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 0.11.0-IV2
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
listeners = PLAINTEXT://:9092
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka/kafka-logs-kafka-deployment-2817439001-tqbjq
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 0.11.0-IV2
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 1440
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
port = 9092
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
producer.purgatory.purge.interval.requests = 1000
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.enabled.mechanisms = [GSSAPI]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism.inter.broker.protocol = GSSAPI
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = zookeeper-service:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2017-08-18 04:31:00,436] INFO starting (kafka.server.KafkaServer)
[2017-08-18 04:31:00,439] INFO Connecting to zookeeper on zookeeper-service:2181 (kafka.server.KafkaServer)
[2017-08-18 04:31:00,467] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-08-18 04:31:00,472] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,472] INFO Client environment:host.name=kafka-deployment-2817439001-tqbjq (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,473] INFO Client environment:java.version=1.8.0_131 (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,473] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,473] INFO Client environment:java.home=/opt/jdk1.8.0_131/jre (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,473] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/connect-api-0.11.0.0.jar:/opt/kafka/bin/../libs/connect-file-0.11.0.0.jar:/opt/kafka/bin/../libs/connect-json-0.11.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-0.11.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-0.11.0.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b05.jar:/opt/kafka/bin/../libs/jackson-annotations-2.8.5.jar:/opt/kafka/bin/../libs/jackson-core-2.8.5.jar:/opt/kafka/bin/../libs/jackson-databind-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/opt/kafka/bin/../libs/javassist-3.21.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b05.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.24.jar:/opt/kafka/bin/../libs/jersey-common-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.24.jar:/opt/kafka/bin/../libs/jersey-guava-2.24.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.24.jar:/opt/kafka/bin/../libs/jersey-server-2.24.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.3.jar:/opt/kafka/bin/../libs/kafka-clients-0.11.0.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-0.11.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-0.11.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-0.11.0.0.jar:/opt/kafka/bin/../libs/kafka-tools-0.11.0.0.jar:/opt/kafka/bin/../libs/kafka_2.12-0.11.0.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-0.11.0.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-0.11.0.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-1.3.0.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.0.24.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.0.1.jar:/opt/kafka/bin/../libs/scala-library-2.12.2.jar:/opt/kafka/bin/../libs/scala-parser-combinators_2.12-1.0.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,473] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,473] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,473] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,473] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,473] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,474] INFO Client environment:os.version=4.9.13 (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,474] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,474] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,474] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,475] INFO Initiating client connection, connectString=zookeeper-service:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@6d2a209c (org.apache.zookeeper.ZooKeeper)
[2017-08-18 04:31:00,500] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2017-08-18 04:31:00,505] INFO Opening socket connection to server zookeeper-service.default.svc.cluster.local/10.0.0.56:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-08-18 04:31:00,516] INFO Socket connection established to zookeeper-service.default.svc.cluster.local/10.0.0.56:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2017-08-18 04:31:00,558] INFO Session establishment complete on server zookeeper-service.default.svc.cluster.local/10.0.0.56:2181, sessionid = 0x15df39b70410000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2017-08-18 04:31:00,560] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2017-08-18 04:31:00,682] INFO Cluster ID = V2Mj7cI3SMG_VoQxmtb9Tw (kafka.server.KafkaServer)
[2017-08-18 04:31:00,685] WARN No meta.properties file under dir /kafka/kafka-logs-kafka-deployment-2817439001-tqbjq/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2017-08-18 04:31:00,727] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2017-08-18 04:31:00,727] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2017-08-18 04:31:00,728] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2017-08-18 04:31:00,801] INFO Log directory '/kafka/kafka-logs-kafka-deployment-2817439001-tqbjq' not found, creating it. (kafka.log.LogManager)
[2017-08-18 04:31:00,810] INFO Loading logs. (kafka.log.LogManager)
[2017-08-18 04:31:00,819] INFO Logs loading complete in 9 ms. (kafka.log.LogManager)
[2017-08-18 04:31:00,891] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2017-08-18 04:31:00,899] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2017-08-18 04:31:00,960] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2017-08-18 04:31:00,965] INFO [Socket Server on Broker 1], Started 1 acceptor threads (kafka.network.SocketServer)
[2017-08-18 04:31:00,982] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-08-18 04:31:00,985] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-08-18 04:31:00,989] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-08-18 04:31:01,089] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-08-18 04:31:01,090] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2017-08-18 04:31:01,101] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2017-08-18 04:31:01,101] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-08-18 04:31:01,128] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-08-18 04:31:01,164] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2017-08-18 04:31:01,170] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2017-08-18 04:31:01,178] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 11 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2017-08-18 04:31:01,200] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2017-08-18 04:31:01,259] INFO [Transaction Coordinator 1]: Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2017-08-18 04:31:01,277] INFO [Transaction Coordinator 1]: Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2017-08-18 04:31:01,277] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2017-08-18 04:31:01,335] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2017-08-18 04:31:01,374] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2017-08-18 04:31:01,378] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2017-08-18 04:31:01,381] INFO Registered broker 1 at path /brokers/ids/1 with addresses: EndPoint(192.168.99.100,30123,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2017-08-18 04:31:01,383] WARN No meta.properties file under dir /kafka/kafka-logs-kafka-deployment-2817439001-tqbjq/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2017-08-18 04:31:01,394] INFO Kafka version : 0.11.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2017-08-18 04:31:01,394] INFO Kafka commitId : cb8625948210849f (org.apache.kafka.common.utils.AppInfoParser)
[2017-08-18 04:31:01,395] INFO [Kafka Server 1], started (kafka.server.KafkaServer)
[2017-08-18 04:41:01,167] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
</code></pre>
<p>Most answers to similar questions recommend using the service type NodePort, as I do, and using port/targetPort/nodePort to map the default 9092 port of Kafka to an exposable port (I chose 30123).</p>
<pre><code>$ minikube service kafka-service --url
http://192.168.99.100:30123
$ nmap 192.168.99.100 -p 30123
Starting Nmap 7.40 ( https://nmap.org ) at 2017-08-17 20:43 PDT
Nmap scan report for 192.168.99.100
Host is up (0.00036s latency).
PORT STATE SERVICE
30123/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 13.06 seconds
</code></pre>
<p>In the end, it looks like 192.168.99.100:30123 should be the way to access Kafka from outside Minikube (so that's what I've put in <code>KAFKA_ADVERTISED_HOST_NAME</code> and <code>KAFKA_ADVERTISED_PORT</code>) yet clients can't connect using these:</p>
<pre><code>$ kafkacat -C -b 192.168.99.100:30123 -t demo
% ERROR: Topic demo error: Broker: Leader not available
</code></pre>
<p>Finally, some answers mention potential firewall interference, so I tried disabling my machine's firewall, but it didn't change anything. If I need to disable the firewall inside the Minikube VM, I am not sure how to do that.</p>
<p>Any help would be greatly appreciated.</p>
| <p>The problem is to do with a bug in recent versions of minikube (see <a href="https://github.com/kubernetes/minikube/issues/1690" rel="noreferrer">https://github.com/kubernetes/minikube/issues/1690</a>).</p>
<p>The solution is simply:</p>
<p><code>minikube ssh
sudo ip link set docker0 promisc on</code></p>
|
<p>I am evaluating Kubernetes (with Docker containers, not Kubernetes) and Docker Swarm and could use your input. </p>
<p>If I'm looking at 3 (8.76 hours) or 4 (52 min) 9's reliability in a server farm that is < 100 servers, would Kubernetes be overkill due to its complexity? Would Docker Swarm suffice? </p>
| <p>It really depends on your actual needs; neither the Kubernetes nor the Swarm orchestrator is a silver bullet. To take real advantage of container technology, the applications have to be properly designed. A good design guideline for these cloud-native apps is the Twelve-Factor App methodology from Heroku.</p>
<p>If you want to scale, and especially to achieve global scale, Kubernetes is a great framework for running distributed apps. If you mostly have containerized traditional applications, for example a lot of Java apps, then Swarm may be the better option.</p>
<p>Let the business requirements drive you to the right choice. </p>
<p>Hope this helps!</p>
|
<p><strong>My week so far</strong>:
In the last couple of days I worked on deploying our IdentityServer4 .NET Core application to an Azure Kubernetes Service (AKS) cluster. After a few problems, everything seemed to work fine. We are not using the built-in http-routing functionality because we don't want to route using subdomains and, for some reason, we can't seem to get Let's Encrypt working when http-routing is enabled. We use https:// to access the services hosted in AKS through nginx.</p>
<p><em>anyway..</em>
The problems arose when I deployed one of our MVC client applications to AKS. The homepage of the client works as expected. But when the client redirects me to the login page of our IdentityServer4 service and I log in with my credentials, a redirect loop kicks in. I know this means the auth cookies aren't being set properly. </p>
<p><strong>The problem</strong>
I discovered that the authentication roundtrip works in Google Chrome and Firefox, no redirect loops in those browsers. Edge, IE and Safari don't work and cause redirect-loops when redirecting to signin-oidc. </p>
<p><strong>Discoveries so far:</strong> </p>
<ol>
<li>I tested the MVC client application using my local Docker for Windows installation. Over a plain HTTP connection (not HTTPS), the roundtrip works in all browsers</li>
<li>When I use Fiddler with HTTPS decryption to diagnose the roundtrip against the services hosted in remote AKS, the roundtrip works in all browsers</li>
<li>When I disconnect Fiddler and test the services hosted in remote AKS, the roundtrip doesn't work in Edge, IE and Safari.</li>
</ol>
<p>Does anyone know how I can configure nginx to support all browsers for setting cookies and to forward the correct headers? What are the requirements for IdentityServer4 in this situation? Is there any additional configuration required in nginx or in cookie authentication in my client application or IdentityServer4 (besides setting PublicOrigin in the IdentityServerOptions in Startup.cs)?</p>
| <p>After I did a fresh install of a new AKS cluster and tried once more to get Let's Encrypt working with the standard addon-http-routing (which I got working), I kept digging and finally asked myself: why is my redirect to /signin-oidc registering as HTTP/2 in Edge and IE? That turned out to be the main part of the combination of problems I had last week. I did some research and figured out how to update parts of the configuration of the built-in ingress controller (addon-http-routing). For anyone experiencing a signin-oidc loop when using AKS (Azure Kubernetes Service): you can overwrite the configuration of the standard http routing addon provided in AKS and disable HTTP/2 manually (it is enabled by default!).</p>
<p>Because I got a bit frustrated by how little information there is on the web about configuring an AKS cluster with Let's Encrypt and addon-http-routing, and because I couldn't find any information on deploying IdentityServer4 in an AKS cluster on Azure, I cooked up some .yaml files (all the files I used to get everything up and running), expanded them with comments and published them for anyone wanting to host IdentityServer4 securely in Azure Kubernetes Service.
This is my first, although small, public contribution ever. If anyone has problems implementing my .yaml files using my rudimentary Readme.txt, please let me know and I will see what I can do.</p>
<p><a href="https://github.com/leonvandebroek/Identityserver4-deployments/tree/master/Azure%20Kubernetes%20Service" rel="nofollow noreferrer">https://github.com/leonvandebroek/Identityserver4-deployments/tree/master/Azure%20Kubernetes%20Service</a></p>
|
<p>We are pulling images from a private ECR (AWS) while running the Kubernetes that ships with Docker for Mac (minikube). We have a secret called <code>aws-creds</code>. I created it using:</p>
<pre><code>kubectl create secret docker-registry aws-creds --docker-server=OUR-ACCOUNT.ecr.eu-central-1.amazonaws.com --docker-username=AWS --docker-password=SUPER_LONG_TOKEN [email protected]
</code></pre>
<p>together with this in my deployments:</p>
<pre><code>"imagePullSecrets":[{"name":"aws-creds"}]
</code></pre>
<p>SUPER_LONG_TOKEN I get from running:</p>
<pre><code>aws ecr get-login --region eu-central-1 --profile default --no-include-email
</code></pre>
<p>Of course the token expires after a few hours and I tried to refresh the secret. First I deleted the secret:</p>
<pre><code>kubectl delete secret aws-creds
</code></pre>
<p>Then basically repeated the steps above, fetching a fresh token. However I noticed, that I still cannot pull from our ECR getting a <code>AWS ECR: no basic auth credentials</code> error in minikube.</p>
<p>When I repeat the process, but I <strong>rename</strong> the secret, i.e. to <code>aws-creds-2</code>, everything works. I suspect there is some kind of caching in place. Indeed I verified this by using:</p>
<pre><code>kubectl get secret aws-creds --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
</code></pre>
<p>and I can see that the <code>password</code> value stays the same, even after deleting and re-creating the secret. This is a bit unintuitive to me, how should I update my secret instead?</p>
| <p>I've been using this solution for a few months without issues. It runs inside your cluster and keeps your secret refreshed. <a href="https://github.com/upmc-enterprises/registry-creds" rel="nofollow noreferrer">https://github.com/upmc-enterprises/registry-creds</a></p>
|
<p>I'm using minikube/ kubectl on Ubuntu 16.04, trying to keep minikube cluster from running at startup. Is there a service I can disable for the same?</p>
| <p>Try <code>systemctl disable kubelet.service</code></p>
<p>I use the same setup on Ubuntu 18.04 and had the same issue.<br>
After starting minikube without virtualization using <code>sudo minikube start --vm-driver=none</code> I can stop it with <code>sudo minikube stop</code> but after reboot the cluster is up again.</p>
<p>For me it was <code>kubelet</code> running on my machine that would start the cluster on reboot.
<code>systemctl disable kubelet.service</code> fixed the issue.</p>
<p>You can check if <code>kubelet</code> is enabled on your machine by running <code>systemctl is-active kubelet</code></p>
|
<p>I've set up two simple kubernetes services & deployments - frontend & api. The frontend gets data from the api so I'm exposing the api as well so I can hard code the backend ingress URL in the frontend data fetch call (if anyone knows a better way of doing this internally within cluster please let me know).</p>
<p>I'm trying to set up different host names for different services but for some reason only one of the hostnames is working.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-webapp-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: test-webapp-frontend.com
http:
paths:
- path: /
backend:
serviceName: test-webapp-frontend-lb
servicePort: 8002
- host: test-webapp-api.com
http:
paths:
- path: /get
backend:
serviceName: test-webapp-api-lb
servicePort: 8001
</code></pre>
<p>And this is what I get after I run <code>kubectl get svc</code></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d
test-webapp-api-lb LoadBalancer 10.107.60.163 <pending> 8001:30886/TCP 1h
test-webapp-frontend-lb LoadBalancer 10.104.100.108 <pending> 8002:31431/TCP 1h
</code></pre>
<p>I am using minikube on my local to run this cluster. I can access both the frontend and api by running <code>minikube service test-webapp-frontend-lb</code> and <code>minikube service test-webapp-api-lb</code>.</p>
<p>When I go to <code>test-webapp-frontend.com</code>, I can see the frontend page but I can't access <code>test-webapp-api.com</code>. Not even the default not-found error, I just can't access it as if the URL just didn't exist.</p>
<p>The weird thing is, if I do this,</p>
<pre><code>spec:
rules:
- host: test-webapp-frontend.com
http:
paths:
- path: /
backend:
serviceName: test-webapp-frontend-lb
servicePort: 8002
- host: test-another-frontend.com
http:
paths:
- path: /
backend:
serviceName: test-webapp-frontend-lb
servicePort: 8002
</code></pre>
<p>I can still access <code>test-webapp-frontend.com</code> but <code>test-another-frontend.com</code> has the same problem, can't access it at all.</p>
<p>What am I doing wrong??</p>
| <p>Seems like a DNS problem. Hostnames like 'test-webapp-frontend.com' need to resolve to the IP of the ingress controller in order to route traffic into the cluster. I don't see an external IP listed in your output for an ingress controller. For minikube you could enable the ingress add-on. DNS is a bit trickier with minikube as you don't have a public IP to resolve to. You can modify your /etc/hosts file to resolve the names, or use path-based rules instead. </p>
<p>Some useful <a href="https://stackoverflow.com/questions/52345855/ingress-and-ingress-controller-how-to-use-them-with-nodeport-services/52346482#52346482">links</a> on <a href="https://medium.com/@awkwardferny/getting-started-with-kubernetes-ingress-nginx-on-minikube-d75e58f52b6c" rel="nofollow noreferrer">this</a> </p>
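<p>For example, on minikube, a minimal sketch (hostnames taken from the question) is to enable the ingress addon and point both hostnames at the minikube IP via <code>/etc/hosts</code>:</p>
<pre><code># enable the bundled nginx ingress controller
minikube addons enable ingress

# resolve both hostnames to the minikube VM
echo "$(minikube ip) test-webapp-frontend.com test-webapp-api.com" | sudo tee -a /etc/hosts

# then test the host-based routing
curl http://test-webapp-api.com/get
</code></pre>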
|
<p>We get this error when uploading a large file (more than 10Mb but less than 100Mb):</p>
<pre><code>403 POST https://www.googleapis.com/upload/storage/v1/b/dm-scrapes/o?uploadType=resumable: ('Response headers must contain header', 'location')
</code></pre>
<p>Or this error when the file is more than 5Mb</p>
<pre><code>403 POST https://www.googleapis.com/upload/storage/v1/b/dm-scrapes/o?uploadType=multipart: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)
</code></pre>
<p>It seems that this API is looking at the file size and trying to upload it via the multipart or resumable method. I can't imagine that is something that I, as a caller of this API, should be concerned with. Is the problem somehow related to permissions? Does the bucket need special permissions so it can accept multipart or resumable uploads? </p>
<pre><code>from google.cloud import storage
try:
client = storage.Client()
bucket = client.get_bucket('my-bucket')
blob = bucket.blob('blob-name')
blob.upload_from_filename(zip_path, content_type='application/gzip')
except Exception as e:
print(f'Error in uploading {zip_path}')
print(e)
</code></pre>
<p>We run this inside a Kubernetes pod so the permissions get picked up by storage.Client() call automatically. </p>
<p>We already tried these:</p>
<ul>
<li><p>Can't upload with gsutil because the container is Python 3 and <a href="https://github.com/GoogleCloudPlatform/gsutil/issues/29" rel="noreferrer">gsutil does not run in python 3</a>.</p></li>
<li><p><a href="https://dev.to/sethmichaellarson/python-data-streaming-to-google-cloud-storage-with-resumable-uploads-458h" rel="noreferrer">Tried this example</a>: but runs into the same error: <code>('Response headers must contain header', 'location')</code></p></li>
<li><p><a href="https://googlecloudplatform.github.io/google-resumable-media-python/latest/google.resumable_media.requests.html#resumable-uploads" rel="noreferrer">There is also this library.</a> But it is basically alpha quality with little activity and no commits for a year. </p></li>
<li>Upgraded to google-cloud-storage==1.13.0</li>
</ul>
<p>Thanks in advance</p>
| <p>The problem was indeed the credentials. Somehow the error message was very misleading. When we loaded the credentials explicitly, the problem went away. </p>
<pre><code> # Explicitly use service account credentials by specifying the private key file.
storage_client = storage.Client.from_service_account_json(
'service_account.json')
</code></pre>
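<p>If you would rather keep the bare <code>storage.Client()</code> call, a sketch of how the key could be injected in Kubernetes instead (secret name and mount path are illustrative) is to mount it from a secret and set <code>GOOGLE_APPLICATION_CREDENTIALS</code>, which the google-cloud libraries pick up automatically:</p>
<pre><code># kubectl create secret generic gcs-key --from-file=service_account.json
spec:
  containers:
  - name: app
    image: my-app                 # illustrative
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/service_account.json
    volumeMounts:
    - name: gcs-key
      mountPath: /var/secrets/google
      readOnly: true
  volumes:
  - name: gcs-key
    secret:
      secretName: gcs-key
</code></pre>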
|
<p>when using kubeadm join token to join worker node to a k8 master. Iam receiving following errors.</p>
<pre><code>[preflight] running pre-flight checks
[preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh nf_conntrack_ipv4 ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
[preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
*********************
error 2 : when run modprobe_nrfilter
modprobe: FATAL: Module br_netfilter not found.
</code></pre>
| <p>It seems like docker is not installed or not in your <code>PATH</code>:</p>
<pre><code>Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
</code></pre>
<p>This can be fixed by installing docker and ensuring the docker executable is in your PATH.</p>
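<p>A sketch of the usual fixes on a Debian/Ubuntu worker (adjust the package manager for your distribution) before retrying the join:</p>
<pre><code># install docker and make sure it is on the PATH
sudo apt-get update && sudo apt-get install -y docker.io

# load the bridge netfilter module (if this still fails, the module may be
# missing from the installed kernel) and enable the required sysctls
sudo modprobe br_netfilter
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo sysctl net.ipv4.ip_forward=1

# then re-run the kubeadm join command
</code></pre>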
|
| <p>I made a Hadoop image based on CentOS using a Dockerfile. There are 4 nodes. I want to configure the cluster using ssh-copy-id, but an error occurred. </p>
<pre><code>ERROR: ssh: connect to host [ip] port 22: Connection refused
</code></pre>
<p>How can I solve this problem?</p>
| <p><code>ssh</code> follows a client-server architecture. So, the <code>openssh-server</code> has to be installed in the container. Now <code>ssh-copy-id</code> and other commands should run if the ip address is routable.</p>
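<p>As a sketch, a CentOS-based image would need something along these lines in its Dockerfile (package names are the usual ones, adjust as needed), so that <code>sshd</code> is actually listening when <code>ssh-copy-id</code> connects:</p>
<pre><code># install the ssh server/client and generate the host keys
RUN yum install -y openssh-server openssh-clients && ssh-keygen -A

# expose the ssh port and run sshd in the foreground
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
</code></pre>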
|
<p>I am trying to deploy production grade Elasticsearch 6.3.0 on Kubernetes.</p>
<p>Came across few articles, but still not sure what is the best approach to go with.</p>
<ol>
<li><a href="https://github.com/pires/kubernetes-elasticsearch-cluster" rel="nofollow noreferrer">https://github.com/pires/kubernetes-elasticsearch-cluster</a></li>
</ol>
<p>It doesn't use stateful set.</p>
<ol start="2">
<li><a href="https://anchormen.nl/blog/big-data-services/elastic-search-deployment-kubernetes/" rel="nofollow noreferrer">https://anchormen.nl/blog/big-data-services/elastic-search-deployment-kubernetes/</a></li>
</ol>
<p>This is pretty old.</p>
<p>We are using Elasticsearch for app search.</p>
<p>The Elasticsearch images are:</p>
<pre><code>docker pull docker.elastic.co/elasticsearch/elasticsearch:6.3.0
docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.0
</code></pre>
<p>I would like to go with the -oss image, which is the Apache-licensed core one.</p>
<p>Is there any good documentation on setting up a production-grade 6.3.0 version on Kubernetes?</p>
| <p>One of the most promising new developments for running Elasticearch on Kubernetes is the <a href="https://github.com/upmc-enterprises/elasticsearch-operator" rel="nofollow noreferrer">Elasticsearch Operator</a>.</p>
<p>Kubernetes <a href="https://coreos.com/operators/" rel="nofollow noreferrer">Operators</a> allow for more sophistication when it comes to dealing with the requirements of complex tools (and Elasticsearch is definitely one). Especially when considering the need to avoid losing Elasticsearch data, an operator is the way to go.</p>
|
| <p>We are aware that we can define environment variables for pods/containers. I want to use those environment variables inside the container at runtime. </p>
<p>For example: I am running a web application written in Python; inside it, how do I get the values of those environment variables?</p>
| <p>First go inside the pod, or <code>exec</code> a <code>bash</code> shell (<code>kubectl exec -it <pod_name> bash</code>), and run <code>printenv</code> to get an idea of which environment variables are available. </p>
<p>From Python </p>
<pre><code>import os
os.environ['MYCUSTOMVAR']
</code></pre>
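<p>For reference, such a variable is typically defined in the pod spec; a minimal sketch (names are placeholders):</p>
<pre><code>spec:
  containers:
  - name: webapp
    image: my-python-app        # placeholder
    env:
    - name: MYCUSTOMVAR
      value: "some-value"
</code></pre>
<p>In the Python code, <code>os.environ.get('MYCUSTOMVAR', 'fallback')</code> avoids a <code>KeyError</code> when the variable is not set.</p>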
|
<p>I am trying to deploy a test pod with nginx and logrotate sidecar.
Logrotate sidecar taken from: <a href="https://github.com/honestbee/logrotate" rel="nofollow noreferrer">logrotate</a></p>
<p>My Pod yaml configuration:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx-apache-log
labels:
app: nginx-apache-log
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
volumeMounts:
- name: logs
mountPath: /var/log
- name: logrotate
image: path/to/logrtr:sidecar
volumeMounts:
- name: logs
mountPath: /var/log
volumes:
- name: logs
emptyDir: {}
</code></pre>
<p>What I'd like to achieve is Logrotate container watching /var/log/<em>/</em>.log, however with the configuration above, nginx container is failing because there is no /var/log/nginx:</p>
<pre><code>nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (2: No such file or directory)
2018/10/15 10:22:12 [emerg] 1#1: open() "/var/log/nginx/error.log" failed (2: No such file or directory)
</code></pre>
<p>However if I change mountPath for nginx from </p>
<pre><code>mountPath: /var/log
</code></pre>
<p>to:</p>
<pre><code>mountPath: /var/log/nginx
</code></pre>
<p>then it is starting, logging to /var/log/nginx/access.log and error.log, but logrotate sidecar sees all logs in /var/log not /var/log/nginx/. It is not a problem with just one nginx container, but I am planning to have more container apps logging to their own /var/log/appname folders.</p>
<p>Is there any way to fix/workaround that? I don't want to run sidecar for each app.</p>
<p>If I change my pod configuration to:</p>
<pre><code> - name: nginx
image: nginx:latest
ports:
- containerPort: 80
volumeMounts:
- name: logs
mountPath: /var/log
initContainers:
- name: install
image: busybox
command:
- mkdir -p /var/log/nginx
volumeMounts:
- name: logs
mountPath: "/var/log"
</code></pre>
<p>then it is failing with: </p>
<pre><code>Warning Failed 52s (x4 over 105s) kubelet, k8s-slave1 Error: failed to start container "install": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"mkdir -p /var/log/nginx\": stat mkdir -p /var/log/nginx: no such file or directory": unknown
</code></pre>
| <p>Leave the mount path as <code>/var/log</code>. In your nginx container, execute <code>mkdir /var/log/nginx</code> in a startup script. You might have to tweak directory permissions a bit to make this work.</p>
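<p>If you prefer the init-container route from the question, a sketch of a working version is below; the original attempt failed because <code>command</code> has to be an argv array (or go through a shell), not one long string:</p>
<pre><code> initContainers:
 - name: install
   image: busybox
   # run via a shell so "mkdir -p /var/log/nginx" is parsed as command + arguments
   command: ["sh", "-c", "mkdir -p /var/log/nginx"]
   volumeMounts:
   - name: logs
     mountPath: /var/log
</code></pre>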
|
<p>Kubernetes documentation is saying that multi-zone clusters are supported, but not the multi-region ones. At the same time Kubernetes has support for both <code>failure-domain/zone</code> and <code>failure-domain/region</code>.</p>
<p>What are the downsides of having my Kubernetes clusters to be multi-zone and multi-region at the same time? Is it only latency and if so what are the required latency numbers for it to be reliable?</p>
<p>On a plus side I see service discovery and being able to deploy applications across multiple regions without and extra tooling on top of it.</p>
<p>I know there's federation v1 and v2 being worked on but it seems to be adding a lot of complexity and v2 is far from being production ready.</p>
| <p>This is speculative, but it's <em>informed</em> speculation, so hopefully that means it'll still be helpful</p>
<p>Let's take two things that kubernetes does and extrapolate them into a multi-region cluster:</p>
<ul>
<li>load balancer membership -- at least on AWS, there is no mechanism for adding members of a different region to a load balancer, meaning <code>type: LoadBalancer</code> could not assign all <code>Pod</code>s to the <code>Service</code></li>
<li>persistent volume attachment -- similarly on AWS, there is no mechanism for attaching EBS volumes across even availability zones, to say nothing of across regions</li>
</ul>
<p>For each of those, one will absolutely be able to find "yes, but!" scenarios to demonstrate a situation where these restrictions won't matter. However, since kubernetes is trying to solve for the general case, in a cloud-agnostic way, that's my strong suspicion why they would recommend against even trying a multi-region cluster -- regardless of whether it happens to work for your situation right now.</p>
|
<p>I use <strong>kubeadm</strong> to launch cluster on <strong>AWS</strong>. I can successfully create a load balancer on <strong>AWS</strong> by using <strong>kubectl</strong>, but the load balancer is not registered with any EC2 instances. That causes problem that the service cannot be accessed from public. </p>
<p>From the observation, when the ELB is created, it cannot find any healthy instances under all subnets. I am pretty sure I tag all my instances correctly. </p>
<p><strong>Updated</strong>: I am reading the log from <strong>k8s-controller-manager</strong>, it shows my node does not have ProviderID set. And according to <a href="https://github.com/kubernetes/kubernetes/blob/82c986ecbcdf99a87cd12a7e2cf64f90057b9acd/pkg/cloudprovider/providers/aws/aws_loadbalancer.go#L1477" rel="noreferrer">Github</a> comment, ELB will ignore nodes where instance ID cannot be determined from provider. Could this cause the issue? How Should I set the providerID? </p>
<h2>load balancer configuration</h2>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: load-balancer
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "elb"
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
- name: https
port: 443
protocol: TCP
targetPort: 443
selector:
app: replica
type: LoadBalancer
</code></pre>
<h2>deployment configuration</h2>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: replica-deployment
labels:
app: replica
spec:
replicas: 1
selector:
matchLabels:
app: replica
template:
metadata:
labels:
app: replica
spec:
containers:
- name: web
image: web
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
- containerPort: 443
command: ["/bin/bash"]
args: ["-c", "script_to_start_server.sh"]
</code></pre>
<h2>node output <code>status</code> section</h2>
<pre><code>status:
addresses:
- address: 172.31.35.209
type: InternalIP
- address: k8s
type: Hostname
allocatable:
cpu: "4"
ephemeral-storage: "119850776788"
hugepages-1Gi: "0"
hugepages-2Mi: "0"
memory: 16328856Ki
pods: "110"
capacity:
cpu: "4"
ephemeral-storage: 130046416Ki
hugepages-1Gi: "0"
hugepages-2Mi: "0"
memory: 16431256Ki
pods: "110"
conditions:
- lastHeartbeatTime: 2018-07-12T04:01:54Z
lastTransitionTime: 2018-07-11T15:45:06Z
message: kubelet has sufficient disk space available
reason: KubeletHasSufficientDisk
status: "False"
type: OutOfDisk
- lastHeartbeatTime: 2018-07-12T04:01:54Z
lastTransitionTime: 2018-07-11T15:45:06Z
message: kubelet has sufficient memory available
reason: KubeletHasSufficientMemory
status: "False"
type: MemoryPressure
- lastHeartbeatTime: 2018-07-12T04:01:54Z
lastTransitionTime: 2018-07-11T15:45:06Z
message: kubelet has no disk pressure
reason: KubeletHasNoDiskPressure
status: "False"
type: DiskPressure
- lastHeartbeatTime: 2018-07-12T04:01:54Z
lastTransitionTime: 2018-07-11T15:45:06Z
message: kubelet has sufficient PID available
reason: KubeletHasSufficientPID
status: "False"
type: PIDPressure
- lastHeartbeatTime: 2018-07-12T04:01:54Z
lastTransitionTime: 2018-07-11T15:45:06Z
message: kubelet is posting ready status. AppArmor enabled
reason: KubeletReady
status: "True"
type: Ready
</code></pre>
<p>How can I fix the issue?</p>
<p>Thanks!</p>
| <p>In my case the issue was that the worker nodes were not getting the providerID assigned properly.</p>
<p>I managed to patch the node to add the providerID, like so: <code>kubectl patch node ip-xxxxx.ap-southeast-2.compute.internal -p '{"spec":{"providerID":"aws:///ap-southeast-2a/i-0xxxxx"}}'</code></p>
<p>Then, when I deployed the service, the ELB got created, the node group got added and it worked end to end. This is not a straightforward answer, but until I find a better solution it can stay here.</p>
|
<p>I'm using Docker I have implemented a system to deploy environments (on a single server) based on Git branches using Traefik (*.dev.domain.com) and Docker Compose templates.</p>
<p>I like Kubernetes and I've never switched to it since I'm limited to one single server for my infrastructure. I've only used it using local installations (Docker for Windows).</p>
<p>So, my question is: does it make sense to run a Kubernetes "cluster" (master and nodes) on a single server to orchestrate and route containers (in place of Traefik/Rancher/Docker Compose)?</p>
<p>This use is for development and staging only for the moment, so high availability is not a prerequisite.</p>
| <p>AFAIU, </p>
<p>I do not see a requirement for Kubernetes unless, at a minimum, you are doing the following on your single host using native <code>docker run</code>, <code>docker-compose</code> or <code>docker engine swarm mode</code>: </p>
<ul>
<li>Make sure there are enough (>=2) replicas of your app on the single server and that you are balancing the load across those app containers (see the sketch after this list). </li>
<li>If you want to go a bit further, you should be able to scale up and down dynamically (docker swarm mode supports this out of the box; otherwise use something like the <code>jwilder nginx proxy</code>).</li>
<li>Your deployments should not cause downtime. Make sure at least one container is healthy at any instant while deploying.</li>
<li>Containers should auto-heal (restart automatically) in case your HTTP or TCP health check fails. </li>
<li>Doing all of the above will certainly put you in a better place, but a single host is still a single point of failure that you will have to deal with at regular intervals. </li>
<li><strong>Preferred</strong>: if possible, try to start with <code>docker engine swarm mode</code>, a <code>kubernetes single master</code> or <code>minikube</code>. These take care of all the above scenarios out of the box and also allow you to scale further at any time by adding more nodes, without changing much in your YML files for docker swarm or kubernetes. </li>
</ul>
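<p>A sketch of what those points look like with docker swarm mode (image name and health endpoint are illustrative, and curl must exist in the image):</p>
<pre><code># docker-compose.yml, deployed with: docker stack deploy -c docker-compose.yml myapp
version: "3.4"
services:
  app:
    image: myapp:latest              # illustrative image
    ports:
      - "80:8080"
    deploy:
      replicas: 2                    # >= 2 replicas behind the swarm routing mesh
      update_config:
        order: start-first           # start the new task before stopping the old one
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
</code></pre>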
<p>Ref -<br>
<a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a>
<a href="https://docs.docker.com/engine/swarm/" rel="noreferrer">https://docs.docker.com/engine/swarm/</a></p>
|
<p>From what I understand, kube-state-metrics keeps an in-memory cache of all the kubernetes events related to deployments, nodes and pods and more, and exposes them to <code>/metrics</code> for Prometheus to scrape.</p>
<p>How long does the kube-state-metrics keep these metrics in-memory? Is it indefinitely? Or does it internally clean the cache once a while?</p>
| <p>For most Prometheus targets, metrics are computed at scrape time. Based on <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/main.go" rel="nofollow noreferrer">kube-state-metrics' github</a>, it looks like the kube-state-metrics implementation is no different. This means that the metrics are not cached, but rather calculated each time the Prometheus server scrapes the endpoint (or each time you visit /metrics in your browser).</p>
|
<p>I'm trying to provision/deprovision a service instance/binding from my cloud provider (IBM Cloud Private). Currently there is a bug: if the service is not deprovisioned in ICP, it leaves an orphan service instance in my ICP environment which I can't delete, even with the force option.
They provide a workaround solution of:</p>
<pre><code>kubectl edit ServiceInstance <service-instance-name>
kubectl edit ServiceBinding <service-binding-name>
</code></pre>
<p>then delete the line:</p>
<pre><code>...
finalizers:
- kubernetes-incubator/service-catalog
...
</code></pre>
<p>and the orphan service instance/binding will get deleted properly. I'm wondering how to automate this process with bash cli (live edit + delete line + save + exit) or any alternative way.</p>
| <p>I'm not sure how this works with the ServiceInstance and ServiceBinding specifically, but you can use <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="noreferrer">kubectl patch</a> to update objects in place. As an example:</p>
<pre><code>kubectl patch ServiceInstance <service-instance-name> -p '{"metadata":{"finalizers":null}}' --type=merge
</code></pre>
|
<p>I have a Job that runs migration for a Python service. Here is the job spec:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: migration
annotations:
buildId: "__buildId__"
branchName: "__branchName__"
commitId: "__commitId__"
spec:
template:
spec:
containers:
- name: service
image: <repo>/service:__buildId__
imagePullPolicy: Always
imagePullSecrets:
- name: acr-key
command: ["/bin/sh","-c"]
args: ["python manage.py migrate --noinput --database=default && python manage.py migrate --noinput --database=data_001 && python manage.py migrate --noinput --database=data_002"]
envFrom:
- configMapRef:
name: configuration
- secretRef:
name: secrets
resources:
requests:
memory: "200Mi"
cpu: "250m"
limits:
memory: "4000Mi"
cpu: "2000m"
restartPolicy: Never
</code></pre>
<p>It doesn't look like there is an apiVersion that supports both imagePullSecrets and a Kubernetes Job. Any ideas on how I can get this to work?</p>
<p>Here's my k8s configuration:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p><code>imagePullSecrets</code> should be outside of the <code>containers</code> scope. This works for me:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: migration
annotations:
buildId: "__buildId__"
branchName: "__branchName__"
commitId: "__commitId__"
spec:
template:
spec:
imagePullSecrets:
- name: acr-key
containers:
- name: service
image: <repo>/service:__buildId__
imagePullPolicy: Always
command: ["/bin/sh","-c"]
args: ["python manage.py migrate --noinput --database=default && python manage.py migrate --noinput --database=data_001 && python manage.py migrate --noinput --database=data_002"]
envFrom:
- configMapRef:
name: configuration
- secretRef:
name: secrets
resources:
requests:
memory: "200Mi"
cpu: "250m"
limits:
memory: "4000Mi"
cpu: "2000m"
restartPolicy: Never
</code></pre>
|
<p>I have created a Jenkins cluster on Kubernetes (Master + 2 workers) with local volumes on the Master node.</p>
<p>I created a persistent vol of 2GB and the claim is 1 GB.</p>
<p>I created a deployment with the image: jenkins/jenkins:lts and volume mount from /var/jenkins_home to PVC: claimname</p>
<p>I have already copied the data to the local folder that backs the Persistent Volume, but I am not able to see my jobs on the Jenkins server.</p>
<pre><code>kubectl describe pod dep-jenkins-8648454f65-4v8tb
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 3m38s (x149 over 4h50m) kubelet, kube-worker001 MountVolume.SetUp failed for volume "default-token-424m4" : secret "default-token-424m4" not found
</code></pre>
<p>What is the correct way to mount a local directory in a POD so that I can transfer my Jenkins data to newly created Jenkins server on Kubernetes?</p>
| <p>Looks like the <code>Warning</code> in your pod description is related to mounting a secret and not mounting any PV. To set up your <code>JENKINS_HOME</code> as a persistent volume you would do something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins
spec:
replicas: 1
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: my-jenkins-image
env:
- name: JAVA_OPTS
value: -Djenkins.install.runSetupWizard=false
ports:
- name: http-port
containerPort: 8080
- name: jnlp-port
containerPort: 50000
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-home
</code></pre>
|
<p>When we were using <code>docker-compose</code>, we could map multiple aliases to one container/service via <code>links</code>. For example, <code>bucket1.s3</code> and <code>bucket2.s3</code> are aliases for container <code>s3</code>. Now as we are moving to kubernetes, I'm trying to do the same and link containers using service discovery. </p>
<p>What I can think of right now is to have one service for each bucket, with each service pointing to the same pod. This seems like a lot of work. Is there a way to map multiple DNS names to one service, so that <code>bucket1.s3.namespace.svc.cluster.local</code> and <code>bucket2.s3.namespace.svc.cluster.local</code> both resolve to the <code>s3</code> service?</p>
| <p>I believe what you might want is one service mapped to two deployments. You could have two <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments</a> named bucket1 and bucket2, label both of them with the same label <code>app: buckets</code>, and then create a service with a selector that matches that label.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: bucket1
labels:
app: buckets
spec:
replicas: 3
selector:
matchLabels:
app: buckets
template:
metadata:
labels:
app: buckets
spec:
containers:
- name: bucket-container
image: bucketimage
ports:
- containerPort: 80
</code></pre>
<hr>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: bucket2
labels:
app: buckets
spec:
replicas: 3
selector:
matchLabels:
app: buckets
template:
metadata:
labels:
app: buckets
spec:
containers:
- name: bucket-container
image: bucketimage
ports:
- containerPort: 80
</code></pre>
<hr>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: s3
spec:
selector:
app: buckets
ports:
- protocol: TCP
port: 80
</code></pre>
<p>But a further question would be why would you want to have that? if both bucket1 and bucket2 contain different information/data.</p>
<p>With Kubernetes generally, you would have one deployment with multiple pods/replicas to serve data either in bucket1 or bucket2 assuming the data is the same. If the data is different why not have 2 deployments with different DNS entries.</p>
<p>If you are using K8s 1.11 or later, you can also <a href="https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/" rel="nofollow noreferrer">tweak the coredns <code>ConfigMap</code></a> in the <code>kube-system</code> namespace to achieve what you are trying to do, but in terms of DNS <a href="https://serverfault.com/questions/574072/can-we-have-multiple-cnames-for-a-single-name">you can't have a single record map to two CNAMEs</a> as per RFC 1034, Section 3.6.2. You also can't hardcode IPs in Kubernetes, since pod IPs change whenever pods get created/removed.</p>
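<p>For example, a sketch of the CoreDNS route (using its <code>rewrite</code> plugin; names taken from the question, assuming the <code>default</code> namespace) is to edit the Corefile in the <code>coredns</code> ConfigMap with <code>kubectl -n kube-system edit configmap coredns</code> and add rewrite rules before the <code>kubernetes</code> plugin block:</p>
<pre><code>.:53 {
    errors
    health
    # answer bucket1/bucket2 lookups with the records of the s3 service
    rewrite name bucket1.s3.default.svc.cluster.local s3.default.svc.cluster.local
    rewrite name bucket2.s3.default.svc.cluster.local s3.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    # ... rest of the default Corefile stays as it is
}
</code></pre>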
|
| <p>We can use a declarative approach for creating and updating Kubernetes resources using <code>kubectl apply -f</code>; how can we do the same for recycling the resources that are no longer needed? </p>
<p>I have used <code>kubectl delete</code>, but that feels imperative, and sometimes things need to be deleted in the proper order. </p>
<p>Is there a way to always use <code>kubectl apply</code> and it figures out itself which resources to keep and which to delete. Just like in <code>Terraform</code>.</p>
<p>Or we should conclude that currently the declarative approach works for resource creation and update only.</p>
<p><strong>Use case:</strong></p>
<p>For example, we have decided not to expose the K8s API to end users and instead give them a repository where they keep and update their yaml files, which a bot applies to the cluster whenever a pull request is merged. So we need declarative deletes as well, so that we don't have to clean things up after users. A <code>Terraform</code> provider may be a solution, but in that case everything gets locked into Terraform and users need to learn one more tool instead of using the native k8s format.</p>
| <p>Turns out they have added a declarative approach for pruning the resources that are no longer present in the yaml manifests:</p>
<pre><code>kubectl apply -f <directory/> --prune -l your=label
</code></pre>
<p>It does come with quite a few caveats, though.</p>
<blockquote>
<p>As an alternative to kubectl delete, you can use kubectl apply to
identify objects to be deleted after their configuration files have
been removed from the directory. Apply with --prune queries the API
server for all objects matching a set of labels, and attempts to match
the returned live object configurations against the object
configuration files. If an object matches the query, and it does not
have a configuration file in the directory, and it has a
last-applied-configuration annotation, it is deleted.</p>
</blockquote>
|
<p>I'm getting the following error, after running the kubeadm join:</p>
<pre><code>error :
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
Unauthorized
</code></pre>
<p>but the kubelet and kubeadm versions are same, but I'm still getting this error. </p>
<p>What's the way to authorize it?</p>
| <p>Not sure how you are joining your cluster, but normally you join it with something like this:</p>
<pre><code>kubeadm join <ip-of-your-master>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash-for-cert-disc>
</code></pre>
<p>You get both when you first run <code>kubeadm init</code> on your master(s). The token expires after some time (24 hours by default), in which case you need to create a new token with <code>kubeadm token create</code>.</p>
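<p>For example, on the master you can regenerate a complete join command in one go (the flag prints a fresh token together with the CA cert hash):</p>
<pre><code>kubeadm token create --print-join-command
</code></pre>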
|
<p>I want to run the schema-registry in non-master mode in Kubernetes. I passed the environment variable <code>master.eligibility=false</code>; however, it's still electing the master.</p>
<p>Please point me to where else I should change the configuration! There are no errors suggesting the environment value is wrong.</p>
<p>cmd:</p>
<pre><code>helm install helm-test-0.1.0.tgz --set env.name.SCHEMA_REGISTRY_KAFKASTORE_BOOTSERVERS="PLAINTEXT://xx.xx.xx.xx:9092\,PLAINTEXT://xx.xx.xx.xx:9092\,PLAINTEXT://xx.xx.xx.xx:9092" --set env.name.SCHEMA_REGISTRY_LISTENERS="http://0.0.0.0:8083" --set env.name.SCHEMA_REGISTRY_MASTER_ELIGIBILITY=false
</code></pre>
<p>Details:</p>
<pre><code>replicaCount: 1
image:
repository: confluentinc/cp-schema-registry
tag: "5.0.0"
pullPolicy: IfNotPresent
env:
name:
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092"
SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8883"
SCHEMA_REGISTRY_HOST_NAME: localhost
SCHEMA_REGISTRY_MASTER_ELIGIBILITY: false
</code></pre>
<hr />
<p>Pod - schema-registry properties:</p>
<pre><code>root@test-app-788455bb47-tjlhw:/# cat /etc/schema-registry/schema-registry.properties
master.eligibility=false
listeners=http://0.0.0.0:8883
host.name=xx.xx.xxx.xx
kafkastore.bootstrap.servers=PLAINTEXT://xx.xx.xx.xx:9092,PLAINTEXT://xx.xx.xx.xx:9092,PLAINTEXT://xx.xx.xx.xx:9092
</code></pre>
<hr />
<pre><code>echo "===> Launching ... "
+ echo '===> Launching ... '
exec /etc/confluent/docker/launch
+ exec /etc/confluent/docker/launch
===> Launching ...
===> Launching schema-registry ...
[2018-10-15 18:52:45,993] INFO SchemaRegistryConfig values:
resource.extension.class = []
metric.reporters = []
kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
response.mediatype.default = application/vnd.schemaregistry.v1+json
kafkastore.ssl.trustmanager.algorithm = PKIX
inter.instance.protocol = http
authentication.realm =
ssl.keystore.type = JKS
kafkastore.topic = _schemas
metrics.jmx.prefix = kafka.schema.registry
kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
kafkastore.topic.replication.factor = 3
ssl.truststore.password = [hidden]
kafkastore.timeout.ms = 500
host.name = xx.xxx.xx.xx
kafkastore.bootstrap.servers = [PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092, PLAINTEXT://xx.xxx.xx.xx:9092]
schema.registry.zk.namespace = schema_registry
kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
kafkastore.sasl.kerberos.service.name =
schema.registry.resource.extension.class = []
ssl.endpoint.identification.algorithm =
compression.enable = false
kafkastore.ssl.truststore.type = JKS
avro.compatibility.level = backward
kafkastore.ssl.protocol = TLS
kafkastore.ssl.provider =
kafkastore.ssl.truststore.location =
response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
kafkastore.ssl.keystore.type = JKS
authentication.skip.paths = []
ssl.truststore.type = JKS
kafkastore.ssl.truststore.password = [hidden]
access.control.allow.origin =
ssl.truststore.location =
ssl.keystore.password = [hidden]
port = 8081
kafkastore.ssl.keystore.location =
metrics.tag.map = {}
master.eligibility = false
</code></pre>
<p>Logs of the schema-registry pod:</p>
<pre><code>(org.apache.kafka.clients.consumer.ConsumerConfig)
[2018-10-15 18:52:48,571] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:48,571] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:48,599] INFO Cluster ID: V-MGQtptQnuWK_K9-wot1Q (org.apache.kafka.clients.Metadata)
[2018-10-15 18:52:48,602] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2018-10-15 18:52:48,605] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2018-10-15 18:52:48,715] INFO [Consumer clientId=KafkaStore-reader-_schemas, groupId=schema-registry-10.100.4.189-8083] Resetting offset for partition _schemas-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2018-10-15 18:52:48,721] INFO Cluster ID: V-MGQtptQnuWK_K9-wot1Q (org.apache.kafka.clients.Metadata)
[2018-10-15 18:52:48,775] INFO Wait to catch up until the offset of the last message at 228 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2018-10-15 18:52:49,831] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2018-10-15 18:52:49,852] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:49,852] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-15 18:52:49,909] INFO Cluster ID: V-MGQtptQnuWK_K9-wot1Q (org.apache.kafka.clients.Metadata)
[2018-10-15 18:52:49,915] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Discovered group coordinator ip-10-150-4-5.ec2.internal:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-10-15 18:52:49,919] INFO [Schema registry clientId=sr-1, groupId=schema-registry] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-10-15 18:52:52,975] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Successfully joined group with generation 92 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-10-15 18:52:52,980] INFO Finished rebalance with master election result: Assignment{version=1, error=0, master='sr-1-abcd4cf2-8a02-4105-8361-9aa82107acd8', masterIdentity=version=1,host=ip-xx-xxx-xx-xx.ec2.internal,port=8083,scheme=http,masterEligibility=true} (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector)
[2018-10-15 18:52:53,088] INFO Adding listener: http://0.0.0.0:8083 (io.confluent.rest.Application)
[2018-10-15 18:52:53,347] INFO jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b01 (org.eclipse.jetty.server.Server)
[2018-10-15 18:52:53,428] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
[2018-10-15 18:52:53,429] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
[2018-10-15 18:52:53,432] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource will be ignored.
Oct 15, 2018 6:52:54 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource will be ignored.
[2018-10-15 18:52:54,364] INFO HV000001: Hibernate Validator 5.1.3.Final (org.hibernate.validator.internal.util.Version)
[2018-10-15 18:52:54,587] INFO Started o.e.j.s.ServletContextHandler@764faa6{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2018-10-15 18:52:54,619] INFO Started o.e.j.s.ServletContextHandler@14a50707{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2018-10-15 18:52:54,642] INFO Started NetworkTrafficServerConnector@62656be4{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector)
[2018-10-15 18:52:54,644] INFO Started @9700ms (org.eclipse.jetty.server.Server)
[2018-10-15 18:52:54,644] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
</code></pre>
| <p>I checked and your configs look good. I believe, it is, in fact, starting as a follower and the logs are basically displaying who the master is in this case:</p>
<p><code>Assignment{version=1, error=0, master='sr-1-abcd4cf2-8a02-4105-8361-9aa82107acd8', masterIdentity=version=1,host=ip-xx-xxx-xx-xx.ec2.internal,port=8083,scheme=http,masterEligibility=true}</code></p>
|
<p>I have a Kubernetes cluster distributed on AWS via Kops consisting of 3 master nodes, each in a different AZ. As is well known, Kops realizes the deployment of a cluster where <strong>Etcd is executed on each master node</strong> through two pods, each of which mounts an EBS volume for saving the state. If you <strong>lose the volumes</strong> of <strong>2 of the 3</strong> masters, you automatically <strong>lose consensus</strong> among the masters. </p>
<p>Is there a way to use the data from the only master that still has the cluster state, and <strong>restore the quorum among the three masters</strong> from that state? I recreated this scenario, but the cluster becomes unavailable, and I can no longer access the Etcd pods of any of the 3 masters, because those pods fail with an error. Moreover, Etcd itself becomes read-only and it is impossible to add or remove members of the cluster in order to attempt manual intervention.</p>
<p>Tips? Thanks to all of you</p>
| <p>This is documented <a href="https://docs.openshift.com/container-platform/3.11/admin_guide/assembly_restore-etcd-quorum.html" rel="nofollow noreferrer">here</a>. There's also another guide <a href="https://blog.containership.io/etcd" rel="nofollow noreferrer">here</a></p>
<p>You basically have to back up your cluster and create a brand new one.</p>
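<p>As a rough sketch of the etcd side of such a recovery (assuming your cluster runs etcd v3; the exact paths, endpoints and certificates depend on how kops wired up your etcd pods), the v3 API lets you snapshot the surviving member and restore a fresh cluster from that snapshot:</p>
<pre><code># on the surviving master, against its local etcd member
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db

# later, seed a brand new single-member cluster from the snapshot and grow it back to 3
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db \
  --name <new-member-name> \
  --initial-cluster <new-member-name>=https://<new-member-ip>:2380 \
  --initial-advertise-peer-urls https://<new-member-ip>:2380 \
  --data-dir /var/lib/etcd-restored
</code></pre>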
|
<p>I cannot reach the following Kubernetes service when <code>externalTrafficPolicy: Local</code> is set. I access it directly through the NodePort but always get a timeout.</p>
<pre><code>{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "echo",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/echo",
"uid": "c1b66aca-cc53-11e8-9062-d43d7ee2fdff",
"resourceVersion": "5190074",
"creationTimestamp": "2018-10-10T06:14:33Z",
"labels": {
"k8s-app": "echo"
}
},
"spec": {
"ports": [
{
"name": "tcp-8080-8080-74xhz",
"protocol": "TCP",
"port": 8080,
"targetPort": 3333,
"nodePort": 30275
}
],
"selector": {
"k8s-app": "echo"
},
"clusterIP": "10.101.223.0",
"type": "NodePort",
"sessionAffinity": "None",
"externalTrafficPolicy": "Local"
},
"status": {
"loadBalancer": {}
}
}
</code></pre>
<p>I know that for this to work, pods of the service need to be available on the node, because traffic is not routed to other nodes. I checked this.</p>
| <p>Not sure where you are connecting from, what command you are using to test connectivity, or what your environment looks like. But this is most likely due to <a href="https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#services-with-externaltrafficpolicy-local-are-not-reachable" rel="nofollow noreferrer">this known issue</a> where the node ports are not reachable with <code>externalTrafficPolicy</code> set to <code>Local</code> if the <code>kube-proxy</code> cannot find the IP address of the node it's running on.</p>
<p>This <a href="https://github.com/kubernetes/kubeadm/issues/857" rel="nofollow noreferrer">link</a> sheds more light into the problem. Apparently <code>--hostname-override</code> on the kube-proxy is not working as of K8s 1.10. You have to specify the <code>HostnameOverride</code> option in the kube-proxy ConfigMap. There's also a fix described <a href="https://github.com/kubernetes/kubernetes/pull/69340" rel="nofollow noreferrer">here</a> that will make it upstream at some point in the future from this writing.</p>
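<p>A sketch of that change (on a kubeadm-based cluster the ConfigMap is called <code>kube-proxy</code> in <code>kube-system</code>; names may differ on other distributions) is to set <code>hostnameOverride</code> in the KubeProxyConfiguration to the node name exactly as it appears in <code>kubectl get nodes</code>, and then recreate the kube-proxy pods:</p>
<pre><code># kubectl -n kube-system edit configmap kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# must match the node name shown by `kubectl get nodes`
hostnameOverride: "<node-name>"
</code></pre>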
|
<p>AWS EKS makes use of their own CNI plugin and there are <a href="https://docs.aws.amazon.com/eks/latest/userguide/calico.html" rel="nofollow noreferrer">docs</a> that allow you to install Calico for managing policy. For a number of reasons, I'd like to have Calico manage networking as well.</p>
<p>Based on the <a href="https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/calico" rel="nofollow noreferrer">installation instructions</a> I can't seem to find a way to do either option:</p>
<h2>etcd</h2>
<p>Doesn't seem viable as I can't find a way to access the EKS control plane etcd endpoints. If I were to deploy my own etcd pods inside the cluster, I need to use the AWS CNI plugin for those to get an IP address, so that doesn't work. I could bring my own etcd cluster outside of Kubernetes, but that seems a bit ridiculous. </p>
<h2>Kubernetes API datastore</h2>
<p>This option wants me to change setting to the controller which I don't have access to in the AWS EKS managed control plane.</p>
| <p>The short answer is as of this writing EKS (nor GKE) doesn't give you direct access to any of the control plane components: etcd, kube-apiserver, kube-controller-manager, coredns/kube-dns, kube-scheduler.</p>
<p>They do have some <a href="https://docs.aws.amazon.com/eks/latest/userguide/calico.html" rel="nofollow noreferrer">docs</a> on how to install Calico on an EKS cluster, but if you want more control you'll have to set up your own standalone cluster.</p>
<p>They might allow you access to the master components in the future but the bottom line is that EKS is a 'managed' service where they are supposed to take care of all your control plane components.</p>
|
<p>I am trying to create a Pod in Kubernetes using <code>curl</code>. </p>
<p>This is the YAML:</p>
<pre><code>cat > nginx-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: nginx1
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
EOF
</code></pre>
<p>I have token with permissions to do it and I wrote the following <code>curl</code> command: </p>
<pre><code>curl -k -v -X POST -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json;charset=utf-8' https://127.0.0.1:6443/api/v1/namespaces/default/pods --data '{"name":"","namespace":"default","content":"apiVersion: v1\nkind: Pod\nmetadata:\n name: nginx1\nspec:\n containers:\n - name: nginx\n image: nginx:1.7.9\n ports:\n - containerPort: 80\n","validate":true}'
</code></pre>
<p>Which should be equivalent to the <code>nginx-pod.yaml</code> file.<br>
The YAML is OK because when I run <code>kubectl create -f nginx-pod.yaml</code> it creates the Pod.<br>
But when I tried to run it with <code>curl</code> I received: </p>
<pre><code>< Content-Length: 617
<
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Pod \"\" is invalid: [metadata.name: Required value: name or generateName is required, spec.containers: Required value]",
"reason": "Invalid",
"details": {
"kind": "Pod",
"causes": [
{
"reason": "FieldValueRequired",
"message": "Required value: name or generateName is required",
"field": "metadata.name"
},
{
"reason": "FieldValueRequired",
"message": "Required value",
"field": "spec.containers"
}
]
},
"code": 422
* Connection #0 to host 127.0.0.1 left intact
</code></pre>
<p>I tried to change the <code>Content-Type</code> to <code>Content-type: text/x-yaml</code> but it didn't help. </p>
<p>Any idea what can be the reason? </p>
<p>One of the errors is regarding the "metadata.name" field.</p>
| <p>Make sure you set the content type to <code>application/yaml</code> and use <code>--data-binary</code> with the YAML file; plain <code>--data</code> drops newlines.</p>
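<p>For example, posting the YAML file from the question could look like this:</p>
<pre><code>curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/yaml" \
  --data-binary @nginx-pod.yaml \
  https://127.0.0.1:6443/api/v1/namespaces/default/pods
</code></pre>
<p>Note that the JSON attempt in the question fails for a second reason as well: the request body must be the Pod object itself, not a wrapper with <code>name</code>/<code>namespace</code>/<code>content</code> fields, which is why the API server complains that <code>metadata.name</code> and <code>spec.containers</code> are missing.</p>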
|
<p>I'm trying to setup a postgresql cluster in primary-standby mode using <a href="https://repmgr.org/" rel="nofollow noreferrer">repmgr</a>, and I'm thinking about how to correctly redirect the traffic to the "primary" pod.</p>
<p>My intuition is to use one label to "highlight" the primary pod and rely on the label selector of the service object to "bind" the cluster IP to it. But this leads to the question: how do I "move" the label from the old primary pod to a new one after a failover?</p>
<p>Is there a way to register a custom script in my pod definition yaml that periodically checks the role of the pod and changes the pod label depending on the result?</p>
<p>Do you guys know if this method is possible? Or maybe there is already a sharp solution that can deal with my situation?</p>
| <ul>
<li><strong><code>StatefulSets</code> is the answer!</strong></li>
</ul>
<p>I'm quoting from offical <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Kubernetes Docs</a></p>
<blockquote>
<p>Like a Deployment , a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.</p>
</blockquote>
<p>And even more</p>
<blockquote>
<p>StatefulSets are valuable for applications that require one or more of
the following.</p>
<pre><code>1. Stable, unique network identifiers.
2. Stable, persistent storage.
3. Ordered, graceful deployment and scaling.
4. Ordered, automated rolling updates.
</code></pre>
</blockquote>
<p>I believe point number 1 is the most important for you (according to your question). StatefulSets also maintain the order in which pods are spawned. A while back I worked on a Redis cluster deployment on K8s; I don't remember the exact repo, but they used some scripts in the container to determine the Redis master. Since a StatefulSet maintains the spawning order, it is easy to make the first spawned pod the master. Again, please refer to the doc until the end and to any blogs on this concept. </p>
<p>And even more importantly</p>
<blockquote>
<p>Stable Network ID</p>
<p>Each Pod in a StatefulSet derives its hostname from the name of the
StatefulSet and the ordinal of the Pod. The pattern for the
constructed hostname is $(statefulset name)-$(ordinal)</p>
</blockquote>
<p>Because of the stable network ID, the DNS name for a pod never changes.</p>
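<p>As a minimal sketch of that idea (the names, image and replica count are illustrative assumptions, and the repmgr configuration itself is omitted), a headless Service plus a StatefulSet gives each pod a stable DNS name such as <code>postgres-0.postgres</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None        # headless: each pod gets a stable DNS entry
  selector:
    app: postgres
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres  # must match the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:10
        ports:
        - containerPort: 5432
</code></pre>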
<p>I would recommend deploying your postgres cluster as a StatefulSet. Please refer to the links below to get some ideas:</p>
<ul>
<li><a href="https://kubernetes.io/blog/2017/02/postgresql-clusters-kubernetes-statefulsets/" rel="nofollow noreferrer">https://kubernetes.io/blog/2017/02/postgresql-clusters-kubernetes-statefulsets/</a></li>
<li><a href="https://portworx.com/ha-postgresql-kubernetes/" rel="nofollow noreferrer">https://portworx.com/ha-postgresql-kubernetes/</a></li>
<li>Or deploy as helm charts - <a href="https://github.com/helm/charts/tree/master/stable/postgresql" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/postgresql</a></li>
</ul>
|
<p>I have one simple question - has Azure the support for native Kubernetes? I mean, is it possible to make a Kubernetes installation on my own, without using Azure Kubernetes Service (AKS)?</p>
| <p>Yes, it is possible: you just need to create a Linux virtual machine in Azure and manually install Kubernetes on top of it.
The exact steps and actions differ depending on the Linux distribution you choose.
You can find more details about it, for example, <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">here</a></p>
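<p>As a rough sketch (assuming an Ubuntu VM and the package repository from the linked guide; adjust for your distribution), the manual install boils down to:</p>
<pre><code># on the Azure VM, as root
apt-get update && apt-get install -y apt-transport-https curl docker.io
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubectl
kubeadm init --pod-network-cidr=10.244.0.0/16   # then install a pod network add-on, e.g. flannel
</code></pre>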
|
<p>I deployed a go application in google cloud using kubernetes which automatically logs to google stackdriver. Oddly, all log statements are being tagged with severity "ERROR"</p>
<p>For example:</p>
<pre><code>log.Println("This should have log level info")
</code></pre>
<p>will be tagged as an error.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/logging" rel="nofollow noreferrer">Their docs say</a> "Severities: By default, logs written to the standard output are on the INFO level and logs written to the standard error are on the ERROR level."</p>
<p>Anyone know what could be wrong with my setup?</p>
| <p>Take a look at this logging package: <a href="https://github.com/teltech/logger" rel="nofollow noreferrer">github.com/teltech/logger</a>, with an accompanying <a href="https://dev.to/hitman666/go-logger-with-kubernetes-stackdriver-format-compatibility-65k" rel="nofollow noreferrer">blog post</a>. It will output your logs in a JSON format, including the severity, that is readable by the Stackdriver Fluentd agent.</p>
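<p>As a minimal sketch of the underlying issue (independent of the package above): Go's standard <code>log</code> package writes to stderr by default, which Stackdriver maps to ERROR, so redirecting it to stdout should give you INFO-level entries:</p>
<pre><code>package main

import (
    "log"
    "os"
)

func main() {
    // The default logger writes to stderr; point it at stdout so
    // Stackdriver records the entries at the INFO level instead of ERROR.
    log.SetOutput(os.Stdout)
    log.Println("This should have log level info")
}
</code></pre>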
|
<h2>Introduction</h2>
<p>Configuring a new ingress-controller with Traefik using helm chart and creating secrets.</p>
<h2>Info</h2>
<p>Kubernetes version: 1.9.3</p>
<p>Helm version: 2.9</p>
<p>Traefik chart version: 1.5</p>
<p>Traefik version: 1.7.2</p>
<h2>Problem</h2>
<p>I am deploying Traefik through official helm chart, but always I have the same problem in the logs
<code>"Error configuring TLS for ingress default/traefik-testing-tls: secret default/traefik-tls does not exist"</code></p>
<p>I have the secret properly created and configured in the same namespace and also checked the clusterrole and clusterrolebinds are ok and allows the access to secrets</p>
<p>I tried to change the <code>defaultCert</code> and <code>defaultKey</code> but not sure about this.</p>
<h3>Configmap:</h3>
<pre><code>data:
traefik.toml: |
# traefik.toml
logLevel = "INFO"
defaultEntryPoints = ["http", "https", "httpn"]
[entryPoints]
[entryPoints.http]
address = ":80"
compress = true
[entryPoints.https]
address = ":443"
compress = true
[entryPoints.httpn]
address = ":8880"
compress = true
[kubernetes]
namespaces = ["default", "kube-system"]
[traefikLog]
format = "json"
[accessLog]
format = "common"
[accessLog.fields]
defaultMode = "keep"
[accessLog.fields.names]
[accessLog.fields.headers]
defaultMode = "keep"
[accessLog.fields.headers.names]
</code></pre>
| <p>After several checks (RBAC, namespaces, etc.), a member of the Traefik team told us that the k8s objects are loaded asynchronously (so the ingress may be loaded before the secret), which is why Traefik reports this error at startup.</p>
|
<p>I have a container which runs a chatbot using python, exposed port 5000 where the bot is running. Now when i deploy this container on kubernetes, I have few questions</p>
<ul>
<li>Do i need to run nginx container in the pod where my app container is
running ? If yes why do i need to ? since kubernetes does load
balancing</li>
<li>If i run nginx container on port 80, do I need to run my
app container also on 80 or (can i use a different port like 5000)</li>
<li>what role does gunicorn play here ?</li>
</ul>
<p>I am a little confused because most of the examples i see online everyone pretty much have nginx container in their pods along with the app containers</p>
| <p>As you mentioned, Kubernetes handles its own load balancing, so the answer to your first question is no, you don't need to run nginx in the pod where your application is.
Typically, services and pods have IPs that are only routable by the cluster network, so traffic that ends up at an edge router is dropped or forwarded elsewhere. In Kubernetes, the collection of rules that allows inbound connections to reach cluster services is called an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>:</p>
<blockquote>
<p>An API object that manages external access to the services in a
cluster, typically HTTP.</p>
</blockquote>
<p>The confusing part is that an Ingress on its own does not do much. You will have to create an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">Ingress controller</a>, which is a daemon deployed as a Pod. Its job is to read the Ingress resource information and process it accordingly. Actually, any system capable of reverse proxying can be an ingress controller. You can read more about Ingress and Ingress controllers from a practical angle <a href="https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e" rel="nofollow noreferrer">in this article</a>. Also, I do not know your environment, so please remember that you should use type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> if you are on a cloud and type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> if you are in a bare-metal environment. </p>
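<p>As a rough illustration only (the resource names are assumptions; the port matches your bot), an Ingress rule pointing at a Service for your app could look like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: chatbot-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: chatbot-service   # a Service selecting your bot pods
          servicePort: 5000              # the port your bot listens on
</code></pre>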
<p>Going to your second question you can run your application on any port you want, just remember to adjust that port in all other configuration files. </p>
<p>About ports and how to expose services, you should check the documentation on how the Kubernetes model works in comparison to the plain container model. You can find an instructive article <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#the-kubernetes-model-for-connecting-containers" rel="nofollow noreferrer">here</a>. </p>
<p>Unfortunately I do not have experience with gunicorn, so I won't be able to tell you what role it plays here. Hope this helps. </p>
|
<p>Is there a way to inherit annotations to all resources in a namespace?
My naive assumption was that I can annotate the namespace and that resources will get this annotation:</p>
<pre><code>kubectl get --export namespaces non-native -o yaml
apiVersion: v1
kind: Namespace
metadata:
annotations:
foo: bar
creationTimestamp: null
name: non-native
selfLink: /api/v1/namespaces/non-native
spec:
finalizers:
- kubernetes
status:
phase: Active
</code></pre>
<p>Running</p>
<pre><code>kubectl get --export pod -n non-native nginx-6f858d4d45-s2xzl -o yaml
</code></pre>
<p>shows no <code>foo=bar</code> annotations.</p>
<p>Am I asking for the impossible? Can you achieve this?</p>
<h3>update:</h3>
<p>Although my example shows a Pod, I would like to annotate also other resources, like services, or PVCs etc.</p>
| <p>I think a <code>PodPreset</code> can help:</p>
<pre><code>kind: PodPreset
apiVersion: settings.k8s.io/v1alpha1
metadata:
annotations:
foo: bar
  namespace: {yourNamespace}
</code></pre>
<p>How to enable <strong>PodPreset</strong>:</p>
<ol>
<li>You have enabled the api type <code>settings.k8s.io/v1alpha1/podpreset</code></li>
<li>You have enabled the admission controller PodPreset </li>
<li>You have defined your pod presets</li>
</ol>
|
<p>Ultimately i'm trying to get an array of strings e.g. <code>['foo', 'bar']</code> in my js app from my helm config.</p>
<p>./vars/dev/organizations.yaml</p>
<pre><code>...
organizations:
- 'foo'
- 'bar'
...
</code></pre>
<p>./templates/configmap.yaml</p>
<pre><code>...
data:
organizations.yaml: |
organizations: "{{ toYaml .Values.organizations | indent 4 }}"
...
</code></pre>
<p>./templates/deployment.yaml</p>
<pre><code>...
containers:
args:
- "--organizations-config"
- "/etc/app/cfg/organizations.yaml"
...
</code></pre>
<p>index.js</p>
<pre><code>...
const DEFAULT_ORGANIZATIONS_PATH = './vars/local/organizations.yaml'
const program = require('commander')
program
.option(
'--organizations-config <file path>',
'The path to the organizations config file.', DEFAULT_ORGANIZATIONS_PATH)
.parse(process.argv)
function readConfigs () {
return Promise.all(configs.map(path => {
return new Promise((resolve, reject) => {
fs.readFile(path, (err, data) => {
err ? reject(err) : resolve(yaml.safeLoad(data))
})
})
}))
}
readConfigs()
.then(configs => {
let organizationsConfig = configs[3]
console.log('organizationsConfig = ', organizationsConfig)
console.log('organizationsConfig.organizations = ', organizationsConfig.organizations)
...
</code></pre>
<p>The output from above is:</p>
<pre><code>organizationsConfig = { organizations: ' - foo - bar' }
organizationsConfig.organizations = - foo - bar
</code></pre>
<p>How can I modify my helm config so that <code>organizationsConfig.organizations</code> will be <code>['foo', 'bar']</code></p>
| <p>One way to get the output you're looking for is to change:</p>
<pre><code>...
organizations:
- 'foo'
- 'bar'
...
</code></pre>
<p>To:</p>
<pre><code>organizations: |
[ 'foo', 'bar']
</code></pre>
<p>So helm treats it as a single string. We happen to know that it contains array content but helm just thinks it's a string. Then we can set that string directly in the configmap:</p>
<p><code>organizations: {{ .Values.organizations | indent 4 }}</code></p>
<p>What this does is what <a href="https://github.com/helm/charts/blob/ac526fa232ebd07452d46fb0028a82d72cfac4b7/stable/grafana/values.yaml#L162" rel="noreferrer">the grafana chart does</a> in that it forces the user to specify the list in the desired format in the first place. Perhaps you'd prefer to take an array from the helm values and convert it to your desired format, which appears to me to be json format. To do that you could follow the <a href="https://github.com/helm/charts/blob/cbd5e811a44c7bac6226b019f1d1810ef5ee45fa/incubator/vault/templates/configmap.yaml#L12" rel="noreferrer">example of the vault chart</a>. So the configmap line becomes:</p>
<p><code>organizations: {{ .Values.organizations | toJson | indent 4 }}</code></p>
<p>Then the yaml that the user puts in can be as you originally had it i.e. a true yaml array. I tried this and it works but I notice that it gives double-quoted content like <code>["foo","bar"]</code></p>
<p>The other way you can do it is with:</p>
<pre><code>organizations:
{{- range .Values.organizations }}
- {{ . }}
{{- end }}
</code></pre>
|
<p>For sharing events between the pods of two different services in a Kubernetes namespace, I intend to use Hazelcast. This is not a problem, however, each service also has a cluster that contains all its pods.</p>
<p>So, I have two clusters using the same pods. I achieved separation of the clusters by setting a group name for one of the clusters, while the other has the default group configuration. This works fine locally, with multiple instances of a test application. However, this is with multicast enabled.</p>
<p>In Kubernetes however, Hazelcast uses the <a href="https://github.com/hazelcast/hazelcast-kubernetes/blob/master/src/main/java/com/hazelcast/kubernetes/HazelcastKubernetesDiscoveryStrategy.java" rel="nofollow noreferrer">HazelcastKubernetesDiscoveryStrategy</a> and has Multicast disabled.</p>
<p>Both services have a label:</p>
<pre><code>metadata:
name: service-1
labels:
hazelcast-group: bc-events
metadata:
name: service-2
labels:
hazelcast-group: bc-events
</code></pre>
<p>and the hazelcast configuration for the events cluster is like this:</p>
<pre><code>Config hzConfig = new Config("events-instance");
NetworkConfig nwConfig = new NetworkConfig();
JoinConfig joinConfig = new JoinConfig();
joinConfig.setMulticastConfig(new MulticastConfig().setEnabled(false));
joinConfig.setTcpIpConfig(new TcpIpConfig().setEnabled(false));
DiscoveryStrategyConfig k8sDiscoveryStrategy = new DiscoveryStrategyConfig("com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy");
k8sDiscoveryStrategy.addProperty("namespace", "dev");
k8sDiscoveryStrategy.addProperty("resolve-not-ready-addresses", true);
k8sDiscoveryStrategy.addProperty("service-label-name", "hazelcast-group");
k8sDiscoveryStrategy.addProperty("service-label-value", "bc-events");
DiscoveryConfig discoveryConfig = new DiscoveryConfig();
discoveryConfig.addDiscoveryStrategyConfig(k8sDiscoveryStrategy);
joinConfig.setDiscoveryConfig(discoveryConfig);
nwConfig.setJoin(joinConfig);
hzConfig.setNetworkConfig(nwConfig);
hzConfig.setProperty("hazelcast.discovery.enabled", "true");
GroupConfig groupConfig = new GroupConfig("bc-events");
hzConfig.setGroupConfig(groupConfig);
</code></pre>
<p>while the configuration for the shared cache cluster (the one without a group) is like this (for service 1, service 2 is the same):</p>
<pre><code>Config hzConfig = new Config("service-1-app-hc");
NetworkConfig nwConfig = new NetworkConfig();
JoinConfig joinConfig = new JoinConfig();
joinConfig.setMulticastConfig(new MulticastConfig().setEnabled(false));
joinConfig.setTcpIpConfig(new TcpIpConfig().setEnabled(false));
DiscoveryStrategyConfig k8sDiscoveryStrategy = new DiscoveryStrategyConfig("com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy");
k8sDiscoveryStrategy.addProperty("namespace", "dev");
k8sDiscoveryStrategy.addProperty("service-name", "service-1");
k8sDiscoveryStrategy.addProperty("resolve-not-ready-addresses", true);
DiscoveryConfig discoveryConfig = new DiscoveryConfig();
discoveryConfig.addDiscoveryStrategyConfig(k8sDiscoveryStrategy);
joinConfig.setDiscoveryConfig(discoveryConfig);
nwConfig.setJoin(joinConfig);
hzConfig.setNetworkConfig(nwConfig);
hzConfig.setProperty("hazelcast.discovery.enabled", "true");
</code></pre>
<p>The hazelcast instances find eachother, but then complain that the other has a different group name and blacklists the IP.</p>
<p>While debugging the code, during config validation while processing a join request, it tries to compare the group names (<code>bc-events</code> against <code>dev</code>) and obviously that's different. But then this gets blacklisted (I believe), preventing a validation check for the other instance that has the same group name.</p>
<p>I'm not sure where to go next. I cannot test this configuration locally because without multicast, it doesn't find the other nodes for joining the cluster. I also don't think there is anything wrong with the configuration.</p>
<p>The libraries used are:</p>
<pre><code><dependency>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast</artifactId>
<version>3.7.8</version>
</dependency>
<dependency>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast-kubernetes</artifactId>
<version>1.1.0</version>
</dependency>
</code></pre>
<p><strong>UPDATE:</strong></p>
<p>I should note that the current setup, where the cluster that has the group name discovers peers by service name (and as such only contains the pods of one service), actually works. The cluster without a group and the cluster with a group are running alongside each other. It only breaks when I switch to label-based discovery (so that the other service gets involved).</p>
<p><strong>UPDATE:</strong></p>
<p>When changing the port of the events cluster, I noticed it still tries to connect to 5701 (the default) despite being put on 5801. Of course this works because the first cluster is running on 5701. Inside <code>HazelcastKubernetesDiscoveryStrategy</code> there is the following method:</p>
<pre><code>protected int getServicePort(Map<String, Object> properties) {
int port = NetworkConfig.DEFAULT_PORT;
if (properties != null) {
String servicePort = (String) properties.get(HAZELCAST_SERVICE_PORT);
if (servicePort != null) {
port = Integer.parseInt(servicePort);
}
}
return port;
}
</code></pre>
<p>This method checks the additional properties that get returned for each endpoint returned by the kubernetes client, for a hazelcast port configuration. If none exists, it uses the default 5701. I am guessing this is what needs to be configured, however, it must not impact the other cluster, so I may have to extend the strategy with some of my own logic.</p>
| <p>It's possible to embed multiple Hazelcast instances into an application deployed in a single POD. Then, you can control how the cluster are formed. It requires an additional configuration, but you don't have to modify <code>HazelcastKubernetesDiscoveryStrategy</code>.</p>
<h1>Sample application</h1>
<p>I've created a sample application to present how it works. Please check it here: <a href="https://github.com/leszko/hazelcast-code-samples/tree/kubernetes-embedded-multiple/hazelcast-integration/kubernetes/samples/embedded" rel="nofollow noreferrer">https://github.com/leszko/hazelcast-code-samples/tree/kubernetes-embedded-multiple/hazelcast-integration/kubernetes/samples/embedded</a>.</p>
<h1>Configuration steps</h1>
<h3>Hazelcast Configuration</h3>
<p>You have two Hazelcast instances in your application, so you need to specify the ports they use. Also, with the parameters of the <a href="https://github.com/hazelcast/hazelcast-kubernetes" rel="nofollow noreferrer">hazelcast-kubernetes</a> plugin, you can configure which Hazelcast instances form the cluster together.</p>
<p>For example, assuming the first Hazelcast instance should form a cluster with all other Hazelcast instances in the current Kubernetes namespace, its configuration can look as follows.</p>
<pre><code>Config config = new Config();
config.getNetworkConfig().setPort(5701);
config.getProperties().setProperty("hazelcast.discovery.enabled", "true");
JoinConfig joinConfig = config.getNetworkConfig().getJoin();
joinConfig.getMulticastConfig().setEnabled(false);
HazelcastKubernetesDiscoveryStrategyFactory discoveryStrategyFactory = new HazelcastKubernetesDiscoveryStrategyFactory();
Map<String, Comparable> properties = new HashMap<>();
joinConfig.getDiscoveryConfig().addDiscoveryStrategyConfig(new DiscoveryStrategyConfig(discoveryStrategyFactory, properties));
</code></pre>
<p>Then, the second Hazelcast instance could form the cluster only with the same application. We could make this separation by giving it a <code>service-name</code> parameter with the value from environment variables.</p>
<pre><code>Config config = new Config();
config.getNetworkConfig().setPort(5702);
config.getProperties().setProperty("hazelcast.discovery.enabled", "true");
JoinConfig joinConfig = config.getNetworkConfig().getJoin();
joinConfig.getMulticastConfig().setEnabled(false);
HazelcastKubernetesDiscoveryStrategyFactory discoveryStrategyFactory = new HazelcastKubernetesDiscoveryStrategyFactory();
Map<String, Comparable> properties = new HashMap<>();
String serviceName = System.getenv("KUBERNETES_SERVICE_NAME");
properties.put("service-name", serviceName);
properties.put("service-port", "5702");
joinConfig.getDiscoveryConfig().addDiscoveryStrategyConfig(new DiscoveryStrategyConfig(discoveryStrategyFactory, properties));
GroupConfig groupConfig = new GroupConfig("separate");
config.setGroupConfig(groupConfig); // apply the separate group name to this instance
</code></pre>
<h3>Kubernetes Template</h3>
<p>Then, in the Kubernetes Deployment template, you need to configure both ports: <code>5701</code> and <code>5702</code>.</p>
<pre><code>ports:
- containerPort: 5701
- containerPort: 5702
</code></pre>
<p>And the environment variable with the service name:</p>
<pre><code>env:
- name: KUBERNETES_SERVICE_NAME
value: hazelcast-separate-1
</code></pre>
<p>Plus, you need to create two services, for each Hazelcast instance.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hazelcast-shared-1
spec:
type: ClusterIP
selector:
app: hazelcast-app
ports:
- name: hazelcast-shared
port: 5701
---
apiVersion: v1
kind: Service
metadata:
name: hazelcast-separate-1
spec:
type: ClusterIP
selector:
app: hazelcast-app
ports:
- name: hazelcast-separate
port: 5702
</code></pre>
<p>Obviously, in the same manner, you can separate Hazelcast clusters using service labels instead of service names.</p>
|
<p>I am trying to setup Kubernetes Federation on GKE following the instructions in <a href="https://kubernetes.io/docs/tasks/federation/set-up-cluster-federation-kubefed/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/federation/set-up-cluster-federation-kubefed/</a>. The Kubernetes version in my nodes is <code>v1.9.7-gke.6</code>. I ran the command <code>kubefed init federation1 --host-cluster-context=[CONTEXT] --dns-provider="google-clouddns" --dns-zone-name=[DNS_ZONE]</code>. This would stay at <code>Waiting for federation control plane to come up........</code> forever.</p>
<p>Checking the status of the apiserver pod I saw this error message:</p>
<p><code>Failed to pull image "gcr.io/k8s-jkns-e2e-gce-federation/fcp-amd64:v1.10.0-alpha.0": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/k8s-jkns-e2e-gce-federation/fcp-amd64/manifests/v1.10.0-alpha.0: denied: Token exchange failed for project 'k8s-jkns-e2e-gce-federation'. Please enable or contact project owners to enable the Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=k8s-jkns-e2e-gce-federation before performing this operation.</code></p>
<p>Does anyone know how to resolve this? Thanks.</p>
| <p>It seems like you are trying to access an API, which must be enabled first.
Can you check if this is as follows:
<a href="https://i.stack.imgur.com/yKwFE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yKwFE.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/vRlfu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vRlfu.png" alt="enter image description here"></a></p>
|
<p>I am using <code>kops</code> in AWS to create my Kubernetes cluster.</p>
<p>I have created a cluster with RBAC enabled via <code>--authorization=RBAC</code> as described <a href="https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_cluster.md" rel="nofollow noreferrer">here.</a></p>
<p>I am trying to use the default service account token to interact with the cluster and getting this error:</p>
<p><code>Error from server (Forbidden): User "system:serviceaccount:default:default" cannot list pods in the namespace "default". (get pods)</code></p>
<p>Am I missing a role or binding somewhere?</p>
| <p>I think it <strong>is not a good idea</strong> to <strong>give the cluster-admin role to the default service account in the default namespace</strong>.</p>
<p>If you give cluster-admin access to the default service account in the default namespace, <strong>every app (pod)</strong> deployed to the default namespace <strong>will be able</strong> to manipulate the cluster <strong>(delete system pods/deployments or do other bad things)</strong>. </p>
<p>By default the cluster-admin ClusterRole is bound to the default service account in the kube-system namespace.
You can use that one for interacting with the cluster.</p>
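<p>If what you actually need is just to let the default service account in the default namespace list pods, a minimal, least-privilege sketch (names here are illustrative) would be a namespaced Role plus RoleBinding instead:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-pod-reader
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>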
|
<p>I'm trying to understand the concepts of ingress and ingress controllers in kubernetes. But I'm not so sure what the end product should look like. Here is what I don't fully understand:</p>
<p>Given I'm having a running Kubernetes cluster somewhere with a master node which runes the control plane and the etcd database. Besides that I'm having like 3 worker nodes - each of the worker nodes has a public IPv4 address with a corresponding DNS A record (<code>worker{1,2,3}.domain.tld</code>) and I've full control over my DNS server. I want that my users access my web application via <code>www.domain.tld</code>. So I point the the <code>www</code> CNAME to one of the worker nodes (I saw that my ingress controller i.e. got scheduled to worker1 one so I point it to <code>worker1.domain.tld</code>).</p>
<p>Now when I schedule a workload consisting of 2 frontend pods and 1 database pod with 1 service for the frontend and 1 service for the database. From what've understand right now, I need an ingress controller pointing to the frontend service to achieve some kind of load balancing. Two questions here:</p>
<ol>
<li><p>Isn't running the ingress controller only on one worker node pointless to internally load balance two the two frontend pods via its service? Is it best practice to run an ingress controller on every worker node in the cluster?</p></li>
<li><p>For whatever reason the worker which runs the ingress controller dies and it gets rescheduled to another worker. So the ingress point will get be at another IPv4 address, right? From a user perspective which tries to access the frontend via <code>www.domain.tld</code>, this DNS entry has to be updated, right? How so? Do I need to run a specific kubernetes-aware DNS server somewhere? I don't understand the connection between the DNS server and the kubernetes cluster.</p></li>
</ol>
<p>Bonus question: If I run more ingress controllers replicas (spread across multiple workers) do I do a DNS-round robin based approach here with multiple IPv4 addresses bound to one DNS entry? Or what's the best solution to achieve HA. I rather not want to use load balancing IP addresses where the worker share the same IP address.</p>
| <blockquote>
<p>Given I'm having a running Kubernetes cluster somewhere with a master
node which runes the control plane and the etcd database. Besides that
I'm having like 3 worker nodes - each of the worker nodes has a public
IPv4 address with a corresponding DNS A record
(worker{1,2,3}.domain.tld) and I've full control over my DNS server. I
want that my users access my web application via www.domain.tld. So I
point the the www CNAME to one of the worker nodes (I saw that my
ingress controller i.e. got scheduled to worker1 one so I point it to
worker1.domain.tld).</p>
<p>Now when I schedule a workload consisting of 2 frontend pods and 1
database pod with 1 service for the frontend and 1 service for the
database. From what've understand right now, I need an ingress
controller pointing to the frontend service to achieve some kind of
load balancing. Two questions here:</p>
<ol>
<li>Isn't running the ingress controller only on one worker node pointless to internally load balance two the two frontend pods via its
service? Is it best practice to run an ingress controller on every
worker node in the cluster?</li>
</ol>
</blockquote>
<p>Yes, it's a good practice. Having multiple pods for the load balancer is important to ensure high availability. For example, if you run the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">ingress-nginx controller</a>, you should probably deploy it to multiple nodes.</p>
<blockquote>
<ol start="2">
<li>For whatever reason the worker which runs the ingress controller dies and it gets rescheduled to another worker. So the ingress point
will get be at another IPv4 address, right? From a user perspective
which tries to access the frontend via www.domain.tld, this DNS entry
has to be updated, right? How so? Do I need to run a specific
kubernetes-aware DNS server somewhere? I don't understand the
connection between the DNS server and the kubernetes cluster.</li>
</ol>
</blockquote>
<p>Yes, the IP will change. And yes, this needs to be updated in your DNS server.</p>
<p>There are a few ways to handle this:</p>
<ol>
<li><p>assume clients will deal with outages. you can list all load balancer nodes in round-robin and assume clients will fallback. this works with some protocols, but mostly implies timeouts and problems and should generally not be used, especially since you still need to update the records by hand when k8s figures it will create/remove LB entries</p></li>
<li><p>configure an external DNS server automatically. this can be done with the <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">external-dns</a> project which can sync against most of the popular DNS servers, including standard RFC2136 dynamic updates but also cloud providers like Amazon, Google, Azure, etc.</p></li>
</ol>
<blockquote>
<p>Bonus question: If I run more ingress controllers replicas (spread
across multiple workers) do I do a DNS-round robin based approach here
with multiple IPv4 addresses bound to one DNS entry? Or what's the
best solution to achieve HA. I rather not want to use load balancing
IP addresses where the worker share the same IP address.</p>
</blockquote>
<p>Yes, you should basically do DNS round-robin. I would assume <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">external-dns</a> would do the right thing here as well.</p>
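<p>As a rough sketch of how external-dns is usually wired up (the names below are placeholders, and in a bare-metal setup you would more likely run it with <code>--source=ingress</code> against your Ingress objects), it watches for an annotation like this and keeps the DNS record in sync:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: frontend
  annotations:
    # external-dns picks this up and creates/updates the record in your DNS server
    external-dns.alpha.kubernetes.io/hostname: www.domain.tld
spec:
  type: LoadBalancer   # assumption; adjust to whatever your environment supports
  selector:
    app: frontend
  ports:
  - port: 80
</code></pre>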
<p>Another alternative is to do some sort of <a href="https://en.wikipedia.org/wiki/Equal-cost_multi-path_routing" rel="noreferrer">ECMP</a>. This can be accomplished by having both load balancers "announce" the same IP space. That is an advanced configuration, however, which may not be necessary. There are interesting tradeoffs between BGP/ECMP and DNS updates, see <a href="https://blogs.dropbox.com/tech/2018/10/dropbox-traffic-infrastructure-edge-network/" rel="noreferrer">this dropbox engineering post</a> for a deeper discussion about those.</p>
<p>Finally, note that CoreDNS is looking at <a href="https://github.com/coredns/coredns/issues/1851" rel="noreferrer">implementing public DNS records</a> which could resolve this natively in Kubernetes, without external resources.</p>
|
<p>I recently created a private GKE kubernetes cluster for running web services and discovered that it's quite locked down and isn't supposed to have any outbound internet access, but has access to GCP services. I'm happy to live with pushing container images to private GCP repo. </p>
<p>However what I do find strange is that after installing some public domain helm charts, some images are pulled from docker hub and other public registries and some are not. </p>
<p>I'm using pre-emptible nodes, so some charts which I had previously deployed have had underlying nodes replaced and the replacements show image pull errors.</p>
<p>Is this due to the multi-tenant nature of the GKE service? Maybe some hosts may have already cached images and so are not actually pulling images? </p>
<p>One example </p>
<p><code>mongo:3.6</code></p>
<p>was hanging for over 24 hours for one pod, then eventually was pulled by three pods, but it's a docker hub reference</p>
| <p>So it looks like Google mirrors many of the popular public repos. This explains why many of the more common public images can be pulled even without internet access: you're basically just pulling from Google's repo (which you access through private access to APIs).</p>
<p>I'm guessing certain images aren't being mirrored and those ones are the ones hanging.</p>
|
<p>I am trying to configure an istio GateWay with two different protocols (GRPC and HTTP)</p>
<p>Right now, I have two different gateways one each for GRPC and HTTP as below</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gwgrpc
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 7878
name: http
protocol: GRPC
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gwrest
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 7979
name: http
protocol: HTTP
hosts:
- "*"
</code></pre>
<p><strong>Is it possible to use same gateway with different protocols and ports?</strong></p>
| <p>You should be able to combine the two Gateways. The only problem is that both your ports have the same name. Something like this should work.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gwgrpc
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 7878
name: grpc
protocol: GRPC
hosts:
- "*"
- port:
number: 7979
name: http
protocol: HTTP
hosts:
- "*"
</code></pre>
|
<p>I'm trying to find a suitable vault to use for Kubernetes itself and apps that will run on containers. By far many resources point to Hashicorp vault.
There exists a vault operator by CoreOS for that but it seems abandoned since April.</p>
<p>We run Kubernetes on AWS with EKS.</p>
<p>Any suggestions what would be possible choices to use? I'm interested a lot to see what are the top choices that are used the most today for this purpose.</p>
<p>Thank you!</p>
<p>Greg</p>
| <p>The <a href="https://github.com/coreos/vault-operator" rel="nofollow noreferrer">CoreOS Vault operator</a> is beta as of this writing. I would not recommend using it in prod yet. There's also a <a href="https://github.com/Boostport/kubernetes-vault" rel="nofollow noreferrer">Boostport Vault Operator</a>, but it doesn't seem to be prod ready either.</p>
<p>IMO, as of now, you are better off running standalone Vault Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a> or a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>. You can use something like <a href="https://github.com/drud/vault-consul-on-kube" rel="nofollow noreferrer">this</a> or <a href="https://github.com/raravena80/kubeconsulvaultoss" rel="nofollow noreferrer">this</a> to get yourself started. Note: still use it at your own risk.</p>
|
<p>I'm hosting some stuff as an AppService in Azure and use environment variables to differentiate settings for different slots (test, dev etc).</p>
<p>If the AppSettings.json file contains a structure like:</p>
<pre><code>{
"ConnectionString": {
"MyDb": "SomeConnectionString"
}
}
</code></pre>
<p>I can set the environment variable "ConnectionString:MyDb" to "SomeConnectionString" and .Net Core will understand that the <code>:</code> means child level.</p>
<p>But in Kubernetes I cannot use <code>:</code> as part of the environment key. Is there another way to handle hierarchy or do I need to switch to flat settings? </p>
| <p>I believe you are referring to the <code>env</code> in the container definition for a Pod. From the YAML/JSON perspective, I don't see a problem with specifying a <code>:</code> in a key for an environment variable. You can also put it within quotes and should be valid JSON/YAML:</p>
<pre><code># convert.yaml
apiVersion: v1
kind: Pod
metadata:
name: envar-demo
labels:
purpose: demonstrate-envars
spec:
containers:
- name: envar-demo-container
image: dotnetapp
env:
- name: ConnectionString:Mydb
value: ConnectionString
</code></pre>
<p>Same in JSON:</p>
<pre><code>$ kubectl convert -f convert.yaml -o=json
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "envar-demo",
"creationTimestamp": null,
"labels": {
"purpose": "demonstrate-envars"
}
},
"spec": {
"containers": [
{
"name": "envar-demo-container",
"image": "dotnetapp",
"env": [
{
"name": "ConnectionString:Mydb",
"value": "ConnectionString"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
},
"status": {}
}
</code></pre>
<p>However, it looks like this was a known issue with Windows/.NET applications. An attempt to fix it <a href="https://github.com/kubernetes/kubernetes/pull/59599" rel="nofollow noreferrer">has been tried</a> and ditched because it is not valid in Bash. But it looks like they settled on using <code>__</code> instead of <code>:</code> as a <a href="https://github.com/kubernetes/website/pull/7657/files" rel="nofollow noreferrer">workaround</a>.</p>
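<p>For completeness, .NET Core's configuration system also treats a double underscore in environment variable names as the hierarchy separator, so a sketch of the workaround in the pod spec would be:</p>
<pre><code>env:
- name: ConnectionString__MyDb
  value: SomeConnectionString
</code></pre>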
|
<p>I'm running a spring boot inside a pod with the below configurations.</p>
<p>Pod limits:</p>
<pre><code>resources:
limits:
cpu: "1"
memory: 2500Mi
requests:
cpu: "1"
memory: 2500Mi
</code></pre>
<p>command args:</p>
<pre><code>spec:
containers:
- args:
- -c
- ln -sf /dev/stdout /var/log/access.log;java -Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false -Djava.security.egd=file:/dev/./urandom
-Xms1600m -Xmx1600m -XX:NewSize=420m -XX............
</code></pre>
<ol>
<li>What happens if the java process has reached its max heap limit (i.e 1600m (Xmx1600m))</li>
<li>If Xmx has no effect on the java process inside a pod, it can go up to pod limit right (i.e. memory: 2500Mi of limits section)</li>
<li>If the above configurations are correct, then we are wasting 900Mi of memory right (2500-1600=900)</li>
</ol>
| <p>The -Xmx flag only controls the Java heap memory, which is the space available for your own Java objects when running your code. If the heap runs low, the JVM performs garbage collection to make space. If you still run out, you get a java.lang.OutOfMemoryError.</p>
<p>The JVM also uses a bunch of other memory internally for things like loading classes, JIT compilation, thread stacks and so on. Therefore you need to allow more memory in Kubernetes than just the -Xmx value. If the process exceeds the Kubernetes limit value, the container gets OOM-killed.</p>
<p>The config you posted above looks fine. Normally I find these values by looking at the Kubernetes memory usage graph after running for some time without limits.</p>
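<p>Assuming a metrics add-on such as metrics-server or Heapster is installed, you can check the actual usage with:</p>
<pre><code>kubectl top pod my-spring-boot-pod   # my-spring-boot-pod is a placeholder for your pod name
</code></pre>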
|
<p>I have a Kubernetes ingress that I want to be the default for all paths on a set of hosts, provided there is not a more specific match:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: default-ing
spec:
rules:
- host: host1.sub.example.com
http:
paths:
- backend:
serviceName: my-default-service
servicePort: http
# Note: here we specify the root path intended as a default
path: /
- backend:
serviceName: my-default-service
servicePort: http
path: /route/path/to/default
</code></pre>
<p>A second ingress defines a custom service for a specific path:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: special-ing
spec:
rules:
- host: host1.sub.example.com
http:
paths:
- backend:
serviceName: special-service
servicePort: http
path: /special
</code></pre>
<p>I would expect that the order of adding/deleting the ingresses would not matter, or at least I could have some way of indicating that the <code>path: /</code> in <code>default-ing</code> is always to be ordered last.</p>
<p>When I try the above, the routing is fine as long as I add <code>special-ing</code> before <code>default-ing</code> (or alternatively, add <code>default-ing</code>, then <code>special-ing</code>, then delete <code>default-ing</code> and re-add it again). When I add them as <code>default-ing</code>, then <code>special-ing</code>, requests to <code>/special</code> are routed to <code>my-default-service</code> instead of <code>special-service</code>.</p>
<p>I want the order of adding/deleting to be independent of the routing that is generated by nginx-ingress-controller, so that my kubectl manipulations are more robust, and if one of the ingresses is recreated nothing will break.</p>
<p>I'm using <code>nginx-ingress-controller:0.19.0</code></p>
<p>Thanks for any help you can offer!</p>
| <p>The short answer is no. I believe your configs should either be disallowed by the nginx ingress controller or documented somewhere. Basically, what's happening when you have two <code>host</code> rules with the same value (<code>host1.sub.example.com</code>) is that one overwrites the other in the <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#server" rel="nofollow noreferrer"><code>server {}</code></a> block in the <code>nginx.conf</code> that your nginx ingress controller is managing.</p>
<p>So if you add <code>default-ing</code> before <code>special-ing</code>, then <code>special-ing</code> will be the actual config. When you add <code>special-ing</code> before <code>default-ing</code>, then <code>default-ing</code> will be your only config and <code>special-ing</code> is not supposed to work at all.</p>
<ol>
<li><p>Add <code>special-ing</code>, and the configs look something like this:</p>
<pre><code> server {
server_name host1.sub.example.com;
...
location /special {
...
}
location / { # default backend
...
}
...
}
</code></pre>
</li>
<li><p>Now add <code>default-ing</code>, and the configs will change to like this:</p>
<pre><code> server {
server_name host1.sub.example.com;
...
location /route/path/to/default {
...
}
location / { # default backend
...
}
...
}
</code></pre>
</li>
</ol>
<p>If you add them the other way around the config will look like 1. in the end.</p>
<p>You can find more by shelling into your nginx ingress controller pod and looking at the <code>nginx.conf</code> file.</p>
<pre><code> $ kubectl -n <namespace> exec -it nginx-ingress-controller-pod sh
# cat /etc/nginx/nginx.conf
</code></pre>
<p>Update 03/31/2022:</p>
<p>It seems like on newer nginx ingress controller versions, all the rules with the same host get merged into a single server block in the <code>nginx.conf</code>.</p>
|
<p>I am trying to configure the Kubernetes UI dashboard. with full admin permissions, so created a YAML file: <code>dashboard-admin.yaml</code>.
contents of my file are below:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1.12.1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
</code></pre>
<p>so when I am trying to apply changes to this file by executing the command
<code>kubectl create -f dashboard-admin.yaml</code></p>
<p>1) I'm encountering with an error as stated below:</p>
<pre><code>error: error parsing dashboard-admin.yaml: error converting YAML to JSON: yaml: line 12: mapping values are not allowed in this context
</code></pre>
<p>2) Also, after running the <code>kubectl proxy</code> command, I'm unable to open the dashboard in my local machine using the link below:</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
</code></pre>
| <p>Your error is related to YAML indentation (the <code>subjects</code> entry must be a list item). I've edited the question to show the correct format, or if you'd like you can use this one too. Note also that the <code>apiVersion</code> of a ClusterRoleBinding is <code>rbac.authorization.k8s.io/v1</code>, not a Kubernetes release number.</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
</code></pre>
<p>Your K8s dashboard will not work unless you have correctly set up the RBAC rule above.</p>
|
<p>I want to put the following CRD into a helm chart, but it contains a Go raw template. How can I make helm not interpret <code>{{</code> and <code>}}</code> inside <code>rawTemplate</code>? Thanks for your response.
<a href="https://github.com/kubeflow/katib/blob/master/examples/random-example.yaml" rel="nofollow noreferrer">https://github.com/kubeflow/katib/blob/master/examples/random-example.yaml</a></p>
<pre><code>apiVersion: "kubeflow.org/v1alpha1"
kind: StudyJob
metadata:
namespace: katib
labels:
controller-tools.k8s.io: "1.0"
name: random-example
spec:
studyName: random-example
owner: crd
optimizationtype: maximize
objectivevaluename: Validation-accuracy
optimizationgoal: 0.99
requestcount: 4
metricsnames:
- accuracy
workerSpec:
goTemplate:
rawTemplate: |-
apiVersion: batch/v1
kind: Job
metadata:
name: {{.WorkerId}}
namespace: katib
spec:
template:
spec:
containers:
- name: {{.WorkerId}}
image: katib/mxnet-mnist-example
command:
- "python"
- "/mxnet/example/image-classification/train_mnist.py"
- "--batch-size=64"
{{- with .HyperParameters}}
{{- range .}}
- "{{.Name}}={{.Value}}"
{{- end}}
{{- end}}
restartPolicy: Never
</code></pre>
| <p>In the Go template language, the expression</p>
<pre><code>{{ "{{" }}
</code></pre>
<p>will expand to two open curly braces, for cases when you need to use Go template syntax to generate documents in Go template syntax; for example</p>
<pre><code>{{ "{{" }}- if .Values.foo }}
- name: FOO
value: {{ "{{" }} .Values.foo }}
{{ "{{" }}- end }}
</code></pre>
<p>(In a Kubernetes Helm context where you're using this syntax to generate YAML, be extra careful with how whitespace is handled; consider using <code>helm template</code> to dump out what gets generated.)</p>
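<p>Applied to the StudyJob in the question, the escaped lines of <code>rawTemplate</code> would look roughly like this (fragment only):</p>
<pre><code>rawTemplate: |-
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: {{ "{{" }}.WorkerId}}
    namespace: katib
  spec:
    template:
      spec:
        containers:
        - name: {{ "{{" }}.WorkerId}}
</code></pre>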
|
<p>I would like to apply the tolerations on the Jenkins slave pod that are dynamically spinned by the Kubernetes Plugin. I see that the Kubernetes Plugin does not provide any option on the Jenkins UI to add the tolerations, as shown in the image below. Could anyone tell me, how can I add the tolerations in this case to the slave pods(which are generated by kubernetes plugin). </p>
<p>P.S.: I do not want to use labels, I strictly want to use tolerations. And I am not sure I want to add the podTemplate in the Jenkinsfile and specify the tolerations in that podTemplate, because this would force me to do the same for every job's Jenkinsfile, which is tedious and not possible if multiple developers prepare their own respective pipelines.<a href="https://i.stack.imgur.com/r0lKJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r0lKJ.png" alt="enter image description here"></a> </p>
| <p>You can actually add the tolerations to the pod configuration in Jenkins outright.</p>
<p><a href="https://github.com/jenkinsci/kubernetes-plugin/pull/311#issuecomment-386342776" rel="noreferrer">https://github.com/jenkinsci/kubernetes-plugin/pull/311#issuecomment-386342776</a></p>
<p>You can add this into the "raw yaml for the pod" under your container and update your criteria accordingly for the labels. </p>
<p>I have used this myself and it does indeed work.</p>
<p>(here's the yaml from the link above)</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
jenkins/kube-default: true
app: jenkins
component: agent
spec:
nodeSelector:
werkint.com/entity: other
tolerations:
- key: werkint.com/entity
operator: Equal
value: other
effect: NoSchedule
</code></pre>
|
<p>I have few cronjobs configured and running in Kubernetes. How to setup up cronjob email alerts for success or failure in Kubernetes. </p>
| <p>This could be as easy as setting up a bash script with <code>kubectl</code> that sends an email if it sees a job in a <code>Failed</code> state.</p>
<pre><code>while true; do if kubectl get jobs myjob -o jsonpath='{.status.conditions[?(@.type=="Failed")].status}' | grep -q True; then mail -s jobfailed email@address; break; else sleep 1; fi; done
</code></pre>
<p>or on newer K8s:</p>
<pre><code>while true; do kubectl wait --for=condition=failed job/myjob && mail -s jobfailed email@address; done
</code></pre>
<p>How to tell whether a Job is complete: <a href="https://stackoverflow.com/questions/44686568/kubernetes-tell-when-job-is-complete">Kubernetes - Tell when Job is Complete</a></p>
<p>You can also set up something like <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> with <a href="https://prometheus.io/docs/alerting/alertmanager/" rel="nofollow noreferrer">Alertmanager</a> in your Kubernetes cluster to monitor your Jobs.</p>
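<p>A minimal sketch of such an alert (assuming kube-state-metrics is scraped by Prometheus; Alertmanager would then handle the e-mail routing) could look like:</p>
<pre><code>groups:
- name: cronjob.rules
  rules:
  - alert: KubeJobFailed
    expr: kube_job_status_failed > 0
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Job {{ $labels.job_name }} has failed"
</code></pre>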
<p>Some useful info <a href="https://medium.com/@tristan_96324/prometheus-k8s-cronjob-alerts-94bee7b90511" rel="nofollow noreferrer">here</a> and <a href="https://itnext.io/kubernetes-monitoring-with-prometheus-in-15-minutes-8e54d1de2e13" rel="nofollow noreferrer">here</a>.</p>
|
<p>So I'm setting up a NATS cluster at work in OpenShift. I can easily get things to work by having each NATS server instance broadcast its Pod IP to the cluster. The guy I talked to at work strongly advised against using the Pod IP and suggested using the Pod name. In the email, he said something about if a pod restarted. But like I tried deleting the pod and the new Pod IP was in the list of connect urls for NATS and it worked fine. I know Kubernetes has DNS and you can use the headless service but it seems somewhat flaky to me. The Pod IP works.</p>
| <p>I believe "the guy at work" has a point, to a certain extent, but it's hard to tell to which extent it's cargo-culting and what is half knowledge. The point being: the pod IPs are not stable, that is, every time a pod gets re-launched (on the same node or somewhere else, doesn't matter) it will get a new IP from the pod CIDR-range assigned.</p>
<p>Now, services provide stability by introducing a virtual IP (VIP): this acts as a cluster-internal mini-load balancer sitting in front of pods and yes, the recommended way to talk to pods, in the general case, is via services. Otherwise, you'd need to keep track of the pod IPs out-of-band, no bueno.</p>
<p>Bottom-line: if NATS manages that for you, keeps track and maps pod IPs then fine, use it, no harm done.</p>
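<p>For illustration, a minimal ClusterIP Service in front of the NATS pods (name, label and port here are assumptions; 4222 is the usual NATS client port) is just:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nats
spec:
  selector:
    app: nats
  ports:
  - port: 4222
    targetPort: 4222
</code></pre>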
|
<p>I want to update the image for the k8s deployment and I found two RESTAPI in k8s to update the deployment: <code>PATCH</code> and <code>PUT</code>.
I found out from the official documentation that <code>PATCH</code> is for updating and <code>PUT</code> is for replacing, but after testing with the two commands: </p>
<pre><code>kubectl patch -p ...
kubectl replace -f ...
</code></pre>
<p>it seems there is no difference between the two methods.</p>
<p>Both of them can roll back, and the name of the new pod changes.</p>
<p>I wonder if the only difference is in the request body for these two commands (patch only needs the changed part while put needs the whole object)?</p>
| <p>According to the documentation:</p>
<p><code>kubectl patch</code> </p>
<p>is to change the live configuration of a Deployment object. You do not change the configuration file that you originally used to create the Deployment object.</p>
<p><code>kubectl replace</code></p>
<p>If replacing an existing resource, the complete resource spec must be provided.</p>
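<p>As a sketch with a hypothetical deployment called <code>my-app</code>, the practical difference shows up in what you have to send:</p>
<pre><code># patch: send only the fields that change
kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","image":"my-app:v2"}]}}}}'

# replace: supply the complete object definition
kubectl replace -f my-app-deployment.yaml

# for simply bumping an image there is also:
kubectl set image deployment/my-app my-app=my-app:v2
</code></pre>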
|
<p>I have a Grafana running on an EC2 instance. I installed my Kubernetes cluster k8s.mydomain.com on AWS using kops. I wanted to monitor this cluster with Grafana. Entering the below URL for Prometheus data source and the admin username and password from <code>kops get secrets kube --type secret -oplaintext</code> in grafana returned an error.</p>
<p><a href="https://api.k8s.afaquesiddiqui.com/api/v1/namespaces/monitoring/services/prometheus-k8s:9090/proxy/graph#!/role?namespace=default" rel="nofollow noreferrer">https://api.k8s.afaquesiddiqui.com/api/v1/namespaces/monitoring/services/prometheus-k8s:9090/proxy/graph#!/role?namespace=default</a></p>
<p>I also tried the kops add-on for <a href="https://github.com/kubernetes/kops/blob/master/docs/addons.md" rel="nofollow noreferrer">prometheus</a> but I wasn't able to access grafana using the following URL:</p>
<p><a href="https://api.k8s.mydomain.com/api/v1/namespaces/monitoring/services/grafana:3000/proxy/#!/role?namespace=default" rel="nofollow noreferrer">https://api.k8s.mydomain.com/api/v1/namespaces/monitoring/services/grafana:3000/proxy/#!/role?namespace=default</a></p>
<p>Am I doing something wrong? Is there a better way to do this?</p>
| <p>The URLs that you specified are proxy endpoints so they are accessed through a proxy that is usually set up on your client with:</p>
<pre><code>kubectl proxy
</code></pre>
<p>I suppose you could access it from the outside if you exposed your kube-apiserver publicly, which is highly discouraged.</p>
<p>If you want to access the endpoint from the outside you usually do it through the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Service</a> which in your first case is <code>prometheus-k8s</code> on port <code>9090</code> and in the second case is <code>grafana</code> on port <code>3000</code>. You didn't provide whether the services are exposed through a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a> so the endpoint will vary depending on how it's exposed. You can find out with:</p>
<pre><code>kubectl get svc
</code></pre>
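<p>Alternatively, for a quick look without exposing anything, newer kubectl versions can port-forward straight to the services (assuming they live in the <code>monitoring</code> namespace as in your URL):</p>
<pre><code>kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090
kubectl -n monitoring port-forward svc/grafana 3000:3000
</code></pre>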
|
<p>I am setting up 2 VPC on GCP, I setup kubeadm on each, let's call them kubemaster and kubenode1. So I ran kubeadm on kubemaster and kubenode1 which :</p>
<ul>
<li><code>kubeadm init</code> on kubemaster</li>
<li><code>kubeadm join</code> on kubenode1</li>
</ul>
<p>When I was trying to <code>kubectl apply -f (a deployment which contains a pod with simple webapps inside)</code> and <code>kubectl apply -f (a NodePort type of Service which target the deployment port)</code></p>
<p>After that I simply tried to access the webapp from my browser (on my local machine, not on GCP), and it just does not work the way it did on minikube (I set up minikube with the same kubectl apply commands as above too). I did some searching and a lot of people mention Ingress and the network layer (flannel in the Kubernetes website example).</p>
<p>My question is: what are Ingress and flannel? Which one is necessary, or are both unnecessary, if I just want my webapp to run? How do they relate to each other? Because from my understanding the layering is as below:</p>
<p><code>Traffic -> Services -> Deployments/Pods</code> </p>
<p>Where do Ingress and flannel fit in? If it's not about either of them, why does my app not work as intended (I opened all ports in the GCP settings so I suppose it's not a security issue)? I tried setting up the Kubernetes Dashboard UI, ran <code>kubectl proxy</code>, and my browser still cannot access either service (my webapp inside the deployment and also the Dashboard API). Maybe I am a little bit lost here.</p>
| <p>flannel and Ingress are completely different things. </p>
<p>flannel is a CNI (Container Network Interface) plugin whose task is networking between containers. As CoreOS says:</p>
<blockquote>
<p>each container is assigned an IP address that can be used to
communicate with other containers on the same host. For communicating
over a network, containers are tied to the IP addresses of the host
machines and must rely on port-mapping to reach the desired container.
This makes it difficult for applications running inside containers to
advertise their external IP and port as that information is not
available to them.</p>
<p>flannel solves the problem by giving each container an IP that can be
used for container-to-container communication. It uses packet
encapsulation to create a virtual overlay network that spans the whole
cluster. More specifically, flannel gives each host an IP subnet (/24
by default) from which the Docker daemon is able to allocate IPs to
the individual containers.</p>
</blockquote>
<p>Kubernetes supports other CNI plugins as well: Calico, Weave, etc. They vary in functionality (e.g. supporting features like NetworkPolicy for restricting traffic).</p>
<p>An Ingress is a Kubernetes object which usually operates at the application layer of the network stack (HTTP) and allows you to expose your Service externally. It also provides features such as HTTP request routing, cookie-based session affinity, HTTPS traffic termination and so on (just like a web server such as Nginx or Apache).</p>
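<p>For illustration, a minimal Ingress sketch (the service name and port are placeholders, and it only has an effect once an Ingress controller such as nginx-ingress is deployed in the cluster):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: webapp-service   # assumption: the Service in front of your webapp Deployment
          servicePort: 80
</code></pre>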
|
<p>I'm running an ELK stack on a Kubernetes cluster but the Kibana service is showing as pending. What does it mean and how can we get it running?</p>
<pre><code>kubectl get svc -n kube-system | grep kibana

kibana-logging   LoadBalancer   10.0.34.12   5601:31840/TCP   5d
</code></pre>
| <p>It means that it cannot create the LoadBalancer to expose your service. This varies depending on what cloud provider you are using. For example, AWS, GCE, Azure, OpenStack, etc. </p>
<p>The main config on the <code>kube-apiserver</code>, <code>kube-controller-manager</code> and your <code>kubelet</code> is to provide the <code>--cloud-provider</code> option. For example, for AWS it would be <code>--cloud-provider=aws</code>. If your cloud provider isn't supported, you might want to consider exposing the service as a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> instead.</p>
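<p>A minimal sketch of exposing Kibana as a <code>NodePort</code> could look like this (the selector label is an assumption and must match the labels your Kibana pods actually carry):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kibana-logging   # assumption: adjust to your Kibana pod labels
  ports:
  - port: 5601
    targetPort: 5601
</code></pre>
<p>The service would then be reachable on <code>http://<any-node-ip>:<allocated-node-port></code>.</p>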
|
<p>The repo used is: <a href="https://github.com/Yolean/kubernetes-kafka/" rel="nofollow noreferrer">https://github.com/Yolean/kubernetes-kafka/</a></p>
<p>So I'm trying to run a Kafka cluster that connects to a Zookeeper cluster in Kubernetes, the first pod runs alright, but then the second Kafka pod tries to connect to the zookeeper cluster and it has this error:</p>
<blockquote>
<p>kafka.common.InconsistentBrokerIdException: Configured broker.id 1
doesn't match stored broker.id 0 in meta.properties. If you moved your
data, make sure your configured broker.id matches. If you intend to
create a new broker, you should remove all data in your data
directories (log.dirs).</p>
</blockquote>
<p>I understand the error is in the second broker id but shouldn't the zookeeper cluster allow multiple broker connections? or how could the config be changed to allow it?</p>
<p>or is it a Kafka configuration problem? The config file is:</p>
<pre><code>kind: ConfigMap
metadata:
name: broker-config
namespace: whitenfv
labels:
name: kafka
system: whitenfv
apiVersion: v1
data:
init.sh: |-
#!/bin/bash
set -x
cp /etc/kafka-configmap/log4j.properties /etc/kafka/
KAFKA_BROKER_ID=${HOSTNAME##*-}
SEDS=("s/#init#broker.id=#init#/broker.id=$KAFKA_BROKER_ID/")
LABELS="kafka-broker-id=$KAFKA_BROKER_ID"
ANNOTATIONS=""
hash kubectl 2>/dev/null || {
SEDS+=("s/#init#broker.rack=#init#/#init#broker.rack=# kubectl not found in path/")
} && {
ZONE=$(kubectl get node "$NODE_NAME" -o=go-template='{{index .metadata.labels "failure-domain.beta.kubernetes.io/zone"}}')
if [ $? -ne 0 ]; then
SEDS+=("s/#init#broker.rack=#init#/#init#broker.rack=# zone lookup failed, see -c init-config logs/")
elif [ "x$ZONE" == "x<no value>" ]; then
SEDS+=("s/#init#broker.rack=#init#/#init#broker.rack=# zone label not found for node $NODE_NAME/")
else
SEDS+=("s/#init#broker.rack=#init#/broker.rack=$ZONE/")
LABELS="$LABELS kafka-broker-rack=$ZONE"
fi
OUTSIDE_HOST=$(kubectl get node "$NODE_NAME" -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
if [ $? -ne 0 ]; then
echo "Outside (i.e. cluster-external access) host lookup command failed"
else
OUTSIDE_PORT=3240${KAFKA_BROKER_ID}
SEDS+=("s|#init#advertised.listeners=OUTSIDE://#init#|advertised.listeners=OUTSIDE://${OUTSIDE_HOST}:${OUTSIDE_PORT}|")
ANNOTATIONS="$ANNOTATIONS kafka-listener-outside-host=$OUTSIDE_HOST kafka-listener-outside-port=$OUTSIDE_PORT"
fi
if [ ! -z "$LABELS" ]; then
kubectl -n $POD_NAMESPACE label pod $POD_NAME $LABELS || echo "Failed to label $POD_NAMESPACE.$POD_NAME - RBAC issue?"
fi
if [ ! -z "$ANNOTATIONS" ]; then
kubectl -n $POD_NAMESPACE annotate pod $POD_NAME $ANNOTATIONS || echo "Failed to annotate $POD_NAMESPACE.$POD_NAME - RBAC issue?"
fi
}
printf '%s\n' "${SEDS[@]}" | sed -f - /etc/kafka-configmap/server.properties > /etc/kafka/server.properties.tmp
[ $? -eq 0 ] && mv /etc/kafka/server.properties.tmp /etc/kafka/server.properties
server.properties: |-
############################# Log Basics #############################
# A comma seperated list of directories under which to store log files
# Overrides log.dir
log.dirs=/var/lib/kafka/data/topics
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
default.replication.factor=3
min.insync.replicas=2
auto.create.topics.enable=true
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
#num.recovery.threads.per.data.dir=1
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
#init#broker.id=#init#
#init#broker.rack=#init#
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
listeners=OUTSIDE://:9094,PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
#init#advertised.listeners=OUTSIDE://#init#,PLAINTEXT://:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL,OUTSIDE:PLAINTEXT
inter.broker.listener.name=PLAINTEXT
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
#num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
#num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
#socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
#socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
#socket.request.max.bytes=104857600
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
#offsets.topic.replication.factor=1
#transaction.state.log.replication.factor=1
#transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# https://cwiki.apache.org/confluence/display/KAFKA/KIP-186%3A+Increase+offsets+retention+default+to+7+days
offsets.retention.minutes=10080
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=-1
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
#log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zoo-0.zoo:2181,zoo-1.zoo:2181,zoo-2.zoo:2181
# Timeout in ms for connecting to zookeeper
#zookeeper.connection.timeout.ms=6000
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
#group.initial.rebalance.delay.ms=0
log4j.properties: |-
# Unspecified loggers and loggers with additivity=true output to server.log and stdout
# Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
# Change the two lines below to adjust ZK client logging
log4j.logger.org.I0Itec.zkclient.ZkClient=INFO
log4j.logger.org.apache.zookeeper=INFO
# Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
log4j.logger.kafka=INFO
log4j.logger.org.apache.kafka=INFO
# Change to DEBUG or TRACE to enable request logging
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false
# Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
# related to the handling of requests
#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false
log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false
log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false
log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false
# Change to DEBUG to enable audit log for the authorizer
log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
</code></pre>
| <p>As per this: <a href="https://stackoverflow.com/questions/38065492/launching-multiple-kafka-brokers-fails">Launching multiple Kafka brokers fails</a>, it's an issue with <code>log.dirs</code> in your <code>server.properties</code>: the data directory can't be the same (or shared) across all your brokers.</p>
<p>You can probably use the <code>${HOSTNAME##*-}</code> bash environment setting to modify your container entrypoint script that in of itself modifies your <code>server.properties</code> before the start, but the downside of that is that you are going to have to rebuild your Docker image.</p>
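<p>For illustration, a minimal sketch of such an entrypoint tweak (the paths are assumptions and would need to match your image layout):</p>
<pre><code>#!/bin/sh
# assumption: pod names end with an ordinal, e.g. kafka-0, kafka-1 (StatefulSet style)
KAFKA_BROKER_ID=${HOSTNAME##*-}

# give every broker its own id and its own data directory so the meta.properties files never clash
sed -i "s/^#init#broker.id=#init#/broker.id=${KAFKA_BROKER_ID}/" /etc/kafka/server.properties
sed -i "s|^log.dirs=.*|log.dirs=/var/lib/kafka/data/broker-${KAFKA_BROKER_ID}/topics|" /etc/kafka/server.properties
</code></pre>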
<p>Another strategy using StatefulSets is described here: <a href="https://stackoverflow.com/questions/41731947/how-to-pass-args-to-pods-based-on-ordinal-index-in-statefulsets">How to pass args to pods based on Ordinal Index in StatefulSets?</a>. But you will also have to make changes on how the Kafka entrypoint is called.</p>
<p>You could also try using completely different volumes for each of your Kafka broker pods.</p>
|
<p>My Kubernetes worker node has a public IP: <code>xxx.xx.xxx.xxx</code></p>
<p>Does this ever change? If some third-party API needs to whitelist my IP, is this the one to give?</p>
<p>Or do I expose the individual service of some type (NodePort/LoadBalancer) and give it that node-ip:port or loadbalancer-ip:port?</p>
| <p>This really depends on the cloud provider. The major providers such as AWS, GCP, and Azure have ephemeral public IPs for the VMs, so if you stop your VM and then start it again you will get a new IP, if you never stop your VM then you don't have to worry about losing the IP.</p>
<p>All of them also offer <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html" rel="nofollow noreferrer">'Elastic public IPs'</a> which are reserved IP addresses that you can assign to network interfaces on your VMs and basically allowing you to stop the VM and keep the same IP once you restart it.</p>
<p>Having said that, the preferred way to expose a service to the outside is by using the <code>LoadBalancer</code> type of service, and the way the IP addresses are assigned also varies depending on the cloud provider and the type of load balancer. For example, in AWS, ELBs and ALBs have ephemeral IPs but the load balancer has a DNS endpoint that never changes. On the other hand, NLBs can have an Elastic IP.</p>
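<p>For illustration, a minimal sketch of exposing a workload through a <code>LoadBalancer</code> service (the name, selector and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service        # assumption
spec:
  type: LoadBalancer
  selector:
    app: my-app           # assumption: must match your pod labels
  ports:
  - port: 80
    targetPort: 8080
</code></pre>
<p>The address you would hand to a third party is then the one shown under <code>EXTERNAL-IP</code> in <code>kubectl get svc my-service</code>, not the node's IP.</p>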
|
<p>I run a service with pods that pick up tasks, process it and then finish. At the moment it is a testing environment, so there is no real CPU/memory usage of each pod but for the future, I want to set a limit for the pods.</p>
<p>Running all the pods(let's say 100) at once results in equal distribution on my two nodes(each with 2 CPUs and 2GB Memory) as expected.</p>
<p>For testing purposes I now set the limit of each pod:</p>
<pre><code> limits:
memory: "1000Mi"
cpu: "1"
requests:
memory: "1000Mi"
cpu: "1"
</code></pre>
<p>Because the controllers/system are taking a bit of the available resources of the nodes, I would expect that on each node one pod is running until success and then the next is scheduled. In reality, only one node is used to process all the 100 pods one after another.</p>
<p>Does anybody know what might cause this behavior? There are no other limits set.</p>
<p>Thanks!</p>
| <p>Finally I found out the problem was incorrect information given by the <code>kubectl describe node ...</code> command. It indicated more memory (1225076 KB) than was actually available on the node (0.87 GB). I don't know why (especially because the setup of the two workers is identical but they still have different amounts of free memory), but this seemed to be the problem.</p>
|
<p>Does anyone have experience in building and pushing docker images + deploying them to AKS via Azure DevOps?</p>
<p>When I build and push an image I can use the variable <code>$(Build.Repository.Name):$(Build.BuildId).</code> </p>
<p>But then I have my <code>.yaml</code> files in my release pipeline to deploy the images. I cannot (or don't know how to) refer to that variable <code>$(Build.Repository.Name):$(Build.BuildId)</code>.</p>
<p>Does anyone have experience in automating this?</p>
| <p>How I got it working for me is by using "tokenisation of the yaml file".</p>
<p>During the build phase (building and pushing the image to the private repo) I use a default variable in Azure DevOps, <code>$(Build.BuildId)</code>, as the tag for the docker image.</p>
<p>Build image task
<a href="https://i.stack.imgur.com/vNJLR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vNJLR.png" alt="Build image"></a></p>
<p>Push image task</p>
<p><a href="https://i.stack.imgur.com/Bcngl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bcngl.png" alt="Push image"></a></p>
<p>In the deployment yaml for the image I refer to:</p>
<p><a href="https://i.stack.imgur.com/ZenUZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZenUZ.png" alt="Deployment yaml"></a></p>
<p>Then, in the deployment stage, before I apply the yaml files with the kubectl apply task, I use the "Replace tokens" task. You can specify which files to replace tokens in. Since I only used a token for the image, I only selected the deployment yaml file.</p>
<p><a href="https://i.stack.imgur.com/hRogy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hRogy.png" alt="artifact source name"></a>
<a href="https://i.stack.imgur.com/2eJWX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2eJWX.png" alt="replace token task"></a></p>
<p>What it does is replace <code>#{Release.Artifacts.acpyaml.BuildId}#</code> with the actual build number of the last build, so when it starts pulling the image it has the right tag.</p>
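<p>For illustration, a minimal sketch of how the tokenised part of the deployment yaml might look (the registry and image names are placeholders; only the <code>#{...}#</code> token syntax matters):</p>
<pre><code>    spec:
      containers:
      - name: myapp
        image: <your-registry>.azurecr.io/myapp:#{Release.Artifacts.acpyaml.BuildId}#
</code></pre>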
<p>See a full example described on <a href="https://medium.com/@marcodesanctis2/a-build-and-release-pipeline-in-vsts-for-docker-and-azure-kubernetes-service-aks-41efc9a0c5c4" rel="nofollow noreferrer">Tokenised version of yaml</a></p>
|
<p>I am new to Kubernetes and started reading through the documentation.
The term 'endpoint' is used there often, but the documentation lacks an explicit definition.</p>
<p>What is an 'endpoint' in terms of Kubernetes? Where is it located?</p>
<p>I could imagine the 'endpoint' is some kind of access point for an individual 'node', but that's just a guess.</p>
| <p>Pods expose themselves to a service through endpoints.
They are, if you will, part of a pod.</p>
<p><a href="https://i.stack.imgur.com/9RyYS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9RyYS.png" alt="enter image description here"></a>
Source: <a href="https://storage.googleapis.com/static.ianlewis.org/prod/img/753/endpoints.png" rel="noreferrer">Services and Endpoints</a></p>
|
<p>I'm trying HPA: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a></p>
<p>PV:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: api-orientdb-pv
labels:
app: api-orientdb
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: api-orientdb-{{ .Values.cluster.name | default "testing" }}
fsType: ext4
</code></pre>
<p>PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: api-orientdb-pv-claim
labels:
app: api
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector:
matchLabels:
app: api-orientdb
storageClassName: ""
</code></pre>
<p>HPA:</p>
<pre><code>Name: api-orientdb-deployment
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 08 Jun 2017 10:37:06 +0700
Reference: Deployment/api-orientdb-deployment
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 17% (8m) / 10%
Min replicas: 1
Max replicas: 2
Events: <none>
</code></pre>
<p>and new pod has been created:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
api-orientdb-deployment-2506639415-n8nbt 1/1 Running 0 7h
api-orientdb-deployment-2506639415-x8nvm 1/1 Running 0 6h
</code></pre>
<p>As you can see, I'm using <code>gcePersistentDisk</code> which does not support <code>ReadWriteMany</code> access mode.</p>
<p>Newly created pod also mount the volume as <code>rw</code> mode:</p>
<pre><code>Name: api-orientdb-deployment-2506639415-x8nvm
Containers:
Mounts:
/orientdb/databases from api-orientdb-persistent-storage (rw)
Volumes:
api-orientdb-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: api-orientdb-pv-claim
ReadOnly: false
</code></pre>
<p>Question: How does it work in this case? Is there a way to config the mainly pod (<code>n8nbt</code>) to use a PV with <code>ReadWriteOnce</code> access mode, and all other scaled pod (<code>x8nvm</code>) should be <code>ReadOnlyMany</code>? How to do it automatically?</p>
<p>The only way I can think of is create another PVC mount the same disk but with different <code>accessModes</code>, but then the question becomes to: how to config the newly scaled pod to use that PVC?</p>
<hr>
<p><strong>Fri Jun 9 11:29:34 ICT 2017</strong></p>
<p>I found something: there is nothing ensuring that the newly scaled pod will be run on the same node as the first pod. So, if the volume plugin does not support <code>ReadWriteMany</code> and the scaled pod is run on another node, it will fail to mount:</p>
<blockquote>
<p>Failed to attach volume "api-orientdb-pv" on node
"gke-testing-default-pool-7711f782-4p6f" with: googleapi: Error 400:
The disk resource
'projects/xx/zones/us-central1-a/disks/api-orientdb-testing' is
already being used by
'projects/xx/zones/us-central1-a/instances/gke-testing-default-pool-7711f782-h7xv'</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes</a></p>
<blockquote>
<p>Important! A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.</p>
</blockquote>
<p>If so, the only way to ensure that the HPA works is <code>ReadWriteMany</code> access mode must be supported by the volume plugin?</p>
<hr>
<p><strong>Fri Jun 9 14:28:30 ICT 2017</strong></p>
<blockquote>
<p>If you want only one Pod to be able to write then create two Deployments. One with replicas: 1 and the other one that has the autoscaler attached (and has readOnly: true in it)</p>
</blockquote>
<p>OK.</p>
<blockquote>
<p>Do note that a GCE PD can only be mounted by a single node if any of the Pods are accessing it readWrite.</p>
</blockquote>
<p>Then I have to use <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">label selectors</a> to ensure that all pods end up on the same node, right?</p>
<blockquote>
<p>Your question is not clear to me</p>
</blockquote>
<p>Let me explain: in case of autoscaling, assume that by using label selectors I can ensure that the newly scaled pod ends up on the same node. But since the volume is mounted as <code>rw</code>, does it break the GCE PD when we have 2 pods mounting the volume as <code>rw</code>?</p>
<blockquote>
<p>First of all, generally, if you have a Deployment with replicas: 1 you won't have 2 Pod running at the same time (most of the time!!)</p>
</blockquote>
<p>I know.</p>
<blockquote>
<p>On the other hand if a PVC specifies ReadWriteOnce then after the first Pod is scheduled any other Pods will need to be scheduled on the same node or not be scheduled at all (most common case: there aren't enough resources on the Node)</p>
</blockquote>
<p>In case of HPA, it does not. Please see my above updates for more details.</p>
<blockquote>
<p>If for any reason you do have 2 Pods accessing the same mount readWrite then it's completely up the the application what will happen and is not kubernetes specific</p>
</blockquote>
<p>The main thing that confuses me is:</p>
<blockquote>
<p>ReadWriteOnce – the volume can be mounted as read-write by a single node</p>
</blockquote>
<p>OK, node, not pod. But in case of autoscaling, if 2 pods are running on the same node, and both mount the volume as <code>rw</code>, does GCE PD support it? If so, how does it work?</p>
| <p>I think we can use a StatefulSet so that each replica has its own PV.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#deployments_vs_statefulsets" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#deployments_vs_statefulsets</a></p>
<blockquote>
<p>Even Deployments with one replica using a ReadWriteOnce Volume are not recommended. This is because the default Deployment strategy will create a second Pod before bringing down the first pod on a recreate. The Deployment may fail in deadlock as the second Pod can't start because the ReadWriteOnce Volume is already in use, and the first Pod wont be removed because the second Pod has not yet started. Instead, use a StatefulSet with ReadWriteOnce volumes.</p>
<p>StatefulSets are the recommended method of deploying stateful applications that require a unique volume per replica. By using StatefulSets with Persistent Volume Claim Templates you can have applications that can scale up automatically with unique Persistent Volume Claims associated to each replica Pod.</p>
</blockquote>
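<p>For illustration, a minimal StatefulSet sketch with <code>volumeClaimTemplates</code> (names, image and storage size are assumptions), so that every replica gets its own <code>ReadWriteOnce</code> claim and volume:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: api-orientdb
spec:
  serviceName: api-orientdb
  replicas: 2
  selector:
    matchLabels:
      app: api-orientdb
  template:
    metadata:
      labels:
        app: api-orientdb
    spec:
      containers:
      - name: orientdb
        image: orientdb:latest          # assumption
        volumeMounts:
        - name: databases
          mountPath: /orientdb/databases
  volumeClaimTemplates:
  - metadata:
      name: databases
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
</code></pre>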
|
<p>Recently I learned about HashiCorp Vault and its usage combined with Kubernetes. I've found two really awesome blog post about how you can use HashiCorp Vault to generate creds on the fly by using an init-container and shared volume (<a href="https://medium.com/ww-engineering/working-with-vault-secrets-that-expire-aa40d00d9d2a" rel="nofollow noreferrer">post1</a>, <a href="https://medium.com/@gmaliar/dynamic-secrets-on-kubernetes-pods-using-vault-35d9094d169" rel="nofollow noreferrer">post2</a>). Kubernetes also provides a good way to handle credentials with Kubernetes secrets, that also empowers one to read the credentials via environment variables. Therefore, it provides a nice abstraction to the secret storage.</p>
<p>My question is could HashiCorp Vault also be used to <strong>populate Kubernetes Secrets with credentials</strong> and how could that be achieved? </p>
| <p>As @Rico mentioned, exposing the secrets both in Vault and in Kubernetes defeats the purpose of using Vault in the first place. </p>
<p>With Vault, data is encrypted (in transit and at rest), and you get fine-grained control over who can access what data. Exposing the data inside Vault to a Kubernetes Secret object, which is basically limited to base64 encoding, will largely defeat the greatest benefit of <em>Vault</em>, which is to secure your infrastructure and be the single entity responsible for managing your secrets.</p>
<p>Vault is an awesome tool, but in my perception it can get quite a bit more complex for non-dev configurations, since you are going to have to attach the likes of Consul so you can have a persistent storage backend; therefore utilizing a distributed architectural pattern such as the <a href="https://learn.microsoft.com/en-us/azure/architecture/patterns/sidecar" rel="noreferrer">sidecar pattern</a> might also be complete overkill and is not recommended at all.</p>
<ul>
<li>But with it you could have a vault instance "living" in the same Pod as your "main" container, therefore leveraging the encryption service provided by Vault, but we would be tying the lifecycle of Vault to the lifecycle of the Pod. </li>
<li>With this approach we would be required to have a Vault instance for each Pod that needs to access secret information, which will just make the system considerably more complex.
With this approach we could separate the secret information required for each object across multiple Vault instances, therefore spreading the secret information of our infrastructure over multiple places, but we keep increasing the challenge of managing our infrastructure.</li>
</ul>
<p>So I definitely understand that trying to find a way to have the secret information required for a Pod right next to it might seem tempting, especially in a simple manner, but it would definitely defeat the purpose if it is just left completely unencrypted.</p>
<p>With this out of the way, why not simply create a Vault controller which will be the entity responsible for interacting with Vault and for querying Vault for wrapped tokens, which can temporarily give access to certain secret information after being unwrapped by an init container inside the Pod? Is that due to the extra time required for starting up a Pod, since we need to perform some early calls in order to retrieve a wrapped token? Or is it due to the extra latency of having to perform extra calls whenever it is necessary to query secret data from Vault?</p>
<p>Whenever I think about the idea of integrating Kubernetes and Vault, I generally tend to think about the following prototype created by Kelsey Hightower explained <a href="https://github.com/kelseyhightower/vault-controller" rel="noreferrer">here</a>.</p>
|
<p>I deployed a Kubernetes cluster on AWS through the use of Kops. As all those who have used Kops know, Kops places constraints on building a Kubernetes-on-AWS infrastructure (for example, when it performs a multi-master installation, it creates in each AZ of a region an AWS autoscaling group with a single EC2 instance).</p>
<p>My question is: <strong>is it possible to change the way in which it carries out the deployment?</strong></p>
<p>Specifically, I <em>would like a deployment with 3 Masters in each AZ, so that the number of Masters is 9</em>.</p>
<p>Suggestions? Thank you.</p>
| <p><code>kops create cluster ...</code> should be able to spread the masters in different zones depending on the number. You need these two options when creating a cluster</p>
<pre><code>--master-count int32 Set the number of masters. Defaults to one master per master-zone
--master-zones strings Zones in which to run masters (must be an odd number)
</code></pre>
<p>So in other words for example for 9 masters in 3 zones:</p>
<pre><code>--master-count 9
--master-zones us-east-1d,us-east-1b,us-east-1c
</code></pre>
<p>More info in the <a href="https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_cluster.md" rel="nofollow noreferrer">CLI options</a>. </p>
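<p>Put together, a full command could look like this (the cluster name, state store and node count are placeholders; other flags are omitted):</p>
<pre><code>kops create cluster \
  --name my-cluster.example.com \
  --state s3://my-kops-state-store \
  --zones us-east-1d,us-east-1b,us-east-1c \
  --master-zones us-east-1d,us-east-1b,us-east-1c \
  --master-count 9 \
  --node-count 3
</code></pre>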
|
<p>I'm running a Ruby app on Kubernetes with Minikube. </p>
<p>However, whenever I look at the logs I don't see the output I would have seen in my terminal when running the app locally.</p>
<p>I presume it's because it only shows stderr?</p>
<p>What can I do to see all types of console logs (e.g. from <code>puts</code> or <code>raise</code>)?</p>
<p>On looking around is this something to do with it being in detached mode - see the Python related issue: <a href="https://stackoverflow.com/questions/43969743/logs-in-kubernetes-pod-not-showing-up">Logs in Kubernetes Pod not showing up</a></p>
<p>Thanks.</p>
<p>=</p>
<p>As requested - here is the deployment.yaml</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: sample
spec:
replicas: 1
template:
metadata:
labels:
app: sample
spec:
containers:
- name: sample
image: someregistry
imagePullPolicy: Always
command: ["/bin/sh","-c"]
args: ["bundle exec rake sample:default --trace"]
envFrom:
- configMapRef:
name: sample
- secretRef:
name: sample
ports:
- containerPort: 3000
imagePullSecrets:
- name: regsecret
</code></pre>
| <p>As shown in this article, <code>kubectl logs <pod-name></code> should show you stdout and stderr for a pod deployed in minikube.</p>
<blockquote>
<p>By default in Kubernetes, Docker is configured to write a container's stdout and stderr to a file under /var/log/containers on the host system</p>
</blockquote>
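<p>For instance, with the <code>sample</code> deployment from the question you could list its pods by label and tail one of them (the label value matches the deployment yaml above):</p>
<pre><code>kubectl get pods -l app=sample
kubectl logs -f <name-of-a-sample-pod>
</code></pre>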
<p><a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#system-component-logs" rel="noreferrer">Kubernetes adds</a>:</p>
<blockquote>
<p>There are two types of system components: those that run in a container and those that do not run in a container.<br>
For example:</p>
<ul>
<li>The Kubernetes scheduler and kube-proxy run in a container.</li>
<li>The kubelet and container runtime, for example Docker, do not run in containers.</li>
</ul>
</blockquote>
<p>And:</p>
<blockquote>
<ul>
<li>On machines with systemd, the kubelet and container runtime write to journald. </li>
<li>If systemd is not present, they write to <code>.log</code> files in the <code>/var/log</code> directory.</li>
</ul>
<p>Similarly to the container logs, system component logs in the /var/log directory should be rotated.<br>
In Kubernetes clusters brought up by the kube-up.sh script, those logs are configured to be rotated by the logrotate tool daily or once the size exceeds 100MB.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/JzhOm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JzhOm.png" alt="https://d33wubrfki0l68.cloudfront.net/59b1aae2adcfe4f06270b99a2789012ed64bec1f/4d0ad/images/docs/user-guide/logging/logging-node-level.png"></a></p>
|
<p>Are the resources released once the kube job is finished?</p>
<p>I mean the associated pod resources. Let's say a job runs to completion and the associated pod, which was allocated 0.5 CPU, is in a completed state: is the 0.5 CPU released after the job is finished?</p>
<p>or is it released only after deleting the job?</p>
| <p>A <strong><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">pod phase</a></strong> does not include "completed": you only have "Succeeded":</p>
<blockquote>
<p>All Containers in the Pod have terminated in success, and will not be restarted."</p>
</blockquote>
<p>Since this is a terminal state, its reserved resources won't be taken into consideration by the scheduler.</p>
|
<p>I currently have the following lifecycle hook in my kubernetes config:</p>
<pre><code>lifecycle:
preStop:
httpGet:
path: /path
port: 8080
</code></pre>
<p>I would like to call the shutdown endpoint of spring boot actuator in this hook, but this endpoint requires a post request instead of a get. Is this in any way possible or should I define my own endpoint that shuts down my app gracefully?</p>
<p>Thanks for your help! </p>
| <p>You can add an <code>exec</code> lifecycle hook since <code>httpPost</code> is not a valid one, and assuming you have <code>curl</code> in your containers:</p>
<pre><code>lifecycle:
preStop:
exec:
command: ["curl", "-XPOST", "http://URL"]
</code></pre>
|
<p>If there is an update to the docker image, the rolling update strategy will update all the pods of a daemonset one by one. Similarly, is it possible to restart the pods gracefully without any change to the daemonset config, or can it be triggered explicitly?</p>
<p>Currently, I am doing it manually by</p>
<p><code>kubectl delete pod <pod-name></code>
One by one until each pod gets into running state.</p>
| <p>You could try and use <a href="https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node" rel="nofollow noreferrer">Node maintenance operations</a>:</p>
<blockquote>
<p>Use kubectl drain to <strong>gracefully terminate all pods on the node</strong> while marking the node as unschedulable (with <a href="https://stackoverflow.com/a/48078350/6309"><code>--ignore-daemonsets</code></a>, from <a href="https://stackoverflow.com/users/9065705/konstantin-vustin">Konstantin Vustin</a>'s <a href="https://stackoverflow.com/questions/52866960/kubernetes-how-to-gracefully-delete-pods-in-daemonset#comment92649044_52867165">comment</a>):</p>
</blockquote>
<pre><code>kubectl drain $NODENAME --ignore-daemonsets
</code></pre>
<blockquote>
<p>This keeps new pods from landing on the node while you are trying to get them off.</p>
</blockquote>
<p>Then:</p>
<blockquote>
<p>Make the node schedulable again:</p>
</blockquote>
<pre><code>kubectl uncordon $NODENAME
</code></pre>
|
<p>I did create a Master Cluster with the following command:</p>
<p><code>kubeadm init --pod-network-cidr $CALICO_NETWORK</code></p>
<p>Now it is listening in the internal IP 10.3.8.23:6443, which is ok because I want that the master uses the internal IP to communicate with Nodes.</p>
<p>Now I want to access the cluster using the public IP and I get the following error:</p>
<p><strong>http: proxy error: x509: certificate is valid for 10.96.0.1, 10.3.8.23, not for 18.230.*.*.</strong></p>
<p>How can I generate an additional certificate for the publicIP?</p>
<p>I need to use the public IP in order to access the dashboard using the browser.</p>
<p>I install it using: <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard</a></p>
| <p>If you don't want to recreate your cluster you can also do what's described here: <a href="https://stackoverflow.com/questions/46360361/invalid-x509-certificate-for-kubernetes-master">Invalid x509 certificate for kubernetes master</a></p>
<p>For K8s 1.7 and earlier:</p>
<pre><code>rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs selfsign \
--apiserver-advertise-address=0.0.0.0 \
--cert-altnames=10.96.0.1 \
--cert-altnames=10.3.8.23 \
--cert-altnames=18.230.x.x # <== Public IP
docker rm `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
</code></pre>
<p>For K8s 1.8 an newer:</p>
<pre><code>rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all \
--apiserver-advertise-address=0.0.0.0 \
--apiserver-cert-extra-sans=10.96.0.1,10.3.8.23,18.230.x.x # <== Public IP
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
</code></pre>
<p>And you can also add DNS name with the <code>--apiserver-cert-extra-sans</code> option.</p>
|
<p>I have a Kubernetes cluster on Linux with one master node and two slave nodes. I have installed & created services for a eureka-server and Zuul with multiple replicas which are accessible by NodePorts. In order to enable load balancing, we need to register Zuul service in Eureka.</p>
<p>Can anybody let me know how we can register Zuul on eureka-server?</p>
| <p>If you look at the configuration for <a href="https://github.com/Netflix/zuul/wiki/Core-Features#service-discovery" rel="nofollow noreferrer">Zuul Service Discovery</a> you can see that there is an option:</p>
<pre><code>eureka.serviceUrl.default=http://${region}.${eureka.eurekaServer.domainName}:7001/${eureka.eurekaServer.context}
</code></pre>
<p>You would have to point that option to your eureka-server <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Service</a>. Based on the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS Kubernetes convention</a> it would be something like this:</p>
<pre><code>eureka-server-service.<k8s-namespace>.svc.cluster.local:<port-of-service-you-exposed>
</code></pre>
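<p>If you are using Spring Cloud Netflix for your Zuul/Eureka setup, the equivalent client property would typically look like this (the service name, namespace and port are assumptions):</p>
<pre><code>eureka.client.serviceUrl.defaultZone=http://eureka-server-service.default.svc.cluster.local:8761/eureka/
</code></pre>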
|
<p>I'm currently using a podtemplate (See below) inside my <code>Jenkinsfile</code> to provision a docker container which mounts to the docker socket to provision containers within the pipeline.</p>
<p>As the cloud-hosted kubernetes I use is going from dockerd to containerd as container runtime, I want to ask if there is somebody who is using containerd with jenkins kubernetes plugin (especially podtemplates).</p>
<pre><code>podTemplate(label: 'mypod', cloud: cloud, serviceAccount: serviceAccount, kubenamespace: kubenamespace, envVars: [
envVar(key: 'NAMESPACE', value: kubenamespace),
envVar(key: 'REGNAMESPACE', value: regnamespace),
envVar(key: 'APPNAME', value: appname),
envVar(key: 'REGISTRY', value: registry)
],
volumes: [
hostPathVolume(hostPath: '/etc/docker/certs.d', mountPath: '/etc/docker/certs.d'),
hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
],
containers: [
containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:v2.9.1', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'docker' , image: 'docker:17.06.1-ce', ttyEnabled: true, command: 'cat')]) {
</code></pre>
| <p>The containerd socket within IKS 1.11 (unix:////var/run/containerd/containerd.sock) cannot be mounted and used to execute docker commands, thus running docker builds in IKS or any other Kubernetes containerd setup is not supported.</p>
<p>We will use <a href="https://github.com/GoogleContainerTools/jib" rel="nofollow noreferrer">Google Container Tools</a> for docker operations inside the cluster.</p>
|
<p>One of my pods keeps restarting sometimes. When I do <code>kubectl describe pod <podName></code> I only get the Last state and not the ones before. </p>
<p>So to find the root of the problem it would be helpful to see all the other states before. </p>
| <p>If you use <code>kubectl get pods --output=yaml</code> you will get detailed information about pods in <code>yaml</code> format, and you will be able to see the last 3 states.</p>
<p>You can also check the last events which was already mentioned by you, using <code>kubectl describe pod <podName></code>.</p>
<p>Also, as stated in the Kubernetes documentation <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/" rel="nofollow noreferrer">Determine the Reason for Pod Failure</a>, you can <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/#customizing-the-termination-message" rel="nofollow noreferrer">customise the termination message</a> and move <code>/dev/termination-log</code> to <code>/tmp/</code>, which should be mounted as separate storage using, for example, <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volumes</a>.</p>
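<p>A minimal sketch of such a customisation (the image and message are just placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  containers:
  - name: app
    image: busybox                      # assumption
    command: ["sh", "-c", "echo 'custom failure reason' > /tmp/termination-log; exit 1"]
    terminationMessagePath: /tmp/termination-log
</code></pre>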
<p>If you are looking into more detailed information regarding storing logs, you would need to check <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/" rel="nofollow noreferrer">Logging Using Elasticsearch and Kibana</a> or <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/" rel="nofollow noreferrer">Logging Using Stackdriver</a>.</p>
|
<p>So I was deploying a new cronjob today and got the following error:</p>
<pre><code>Error: release acs-export-cronjob failed: CronJob.batch "acs-export-cronjob" is invalid: [spec.jobTemplate.spec.template.spec.containers: Required value, spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"]
</code></pre>
<p>here's some output from running helm on the same chart, no changes made, but with the <code>--debug --dry-run</code> flags:</p>
<pre><code> NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
spec:
metadata:
name: acs-export-cronjob
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
template:
metadata:
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
annotations:
iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
spec:
restartPolicy: Never #<----------this is not 'Always'!!
serviceAccountName: acs-export-cronjob-sa
tolerations:
- key: sonic-node-group
operator: Equal
value: api
effect: NoSchedule
nodeSelector:
sonic-node-group: api
volumes:
- name: config
emptyDir: {}
initContainers:
- name: "get-users-vmargs-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
volumeMounts:
- mountPath: /config
name: config
- name: "get-users-yaml-appconfig-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
volumeMounts:
- mountPath: /config
name: config
containers: #<--------this field is not missing!
- image: <censored>.amazonaws.com/sonic/acs-export:latest
imagePullPolicy: Always
name: "users-batch"
command:
- "bash"
- "-c"
- 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
env:
- name: FRENV
value: "batch"
- name: STACKNAME
value: eu1-test
- name: SPRING_PROFILES
value: "export-job"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /config
name: config
resources:
limit:
cpu: 100m
memory: 1Gi
</code></pre>
<p>if you paid attention, you may have noticed line 101 (I added the comment afterwards) in the debug-output, which sets <code>restartPolicy</code> to <code>Never</code>, quite the opposite of <code>Always</code> as the error message claims it to be.</p>
<p>You may also have noticed line 126 (again, I added the comment after the fact) of the debug output, where the mandatory field <code>containers</code> is specified, again, much in contradiction to the error-message.</p>
<p>whats going on here?</p>
| <p>hah! found it! it was a simple mistake actually. I had an extra <code>spec:metadata</code> section under <code>jobtemplate</code> which was duplicated. removing one of the dupes fixed my issues.</p>
<p>I really wish the error-messages of helm would be more helpful. </p>
<p>the corrected chart looks like:</p>
<pre><code> NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
spec:
template:
metadata:
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
annotations:
iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
spec:
restartPolicy: Never
serviceAccountName: acs-export-cronjob-sa
tolerations:
- key: sonic-node-group
operator: Equal
value: api
effect: NoSchedule
nodeSelector:
sonic-node-group: api
volumes:
- name: config
emptyDir: {}
initContainers:
- name: "get-users-vmargs-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
volumeMounts:
- mountPath: /config
name: config
- name: "get-users-yaml-appconfig-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
volumeMounts:
- mountPath: /config
name: config
containers:
- image: <censored>.amazonaws.com/sonic/acs-export:latest
imagePullPolicy: Always
name: "users-batch"
command:
- "bash"
- "-c"
- 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
env:
- name: FRENV
value: "batch"
- name: STACKNAME
value: eu1-test
- name: SPRING_PROFILES
value: "export-job"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /config
name: config
resources:
limit:
cpu: 100m
memory: 1Gi
</code></pre>
|
<p>I have a Kubernetes cluster running on the Google Kubernetes Engine.</p>
<p>I have a deployment that I manually (by editing the <code>hpa</code> object) scaled up from 100 replicas to 300 replicas to do some load testing. When I was load testing the deployment by sending HTTP requests to the service, it seemed that not all pods were getting an equal amount of traffic, only around 100 pods were showing that they were processing traffic (by looking at their CPU-load, and our custom metrics). So my suspicion was that the service is not load balancing the requests among all the pods equally.</p>
<p>If I checked the <code>deployment</code>, I could see that all 300 replicas were ready.</p>
<pre><code>$ k get deploy my-app --show-labels
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE LABELS
my-app 300 300 300 300 21d app=my-app
</code></pre>
<p>On the other hand, when I checked the <code>service</code>, I saw this:</p>
<pre><code>$ k describe svc my-app
Name: my-app
Namespace: production
Labels: app=my-app
Selector: app=my-app
Type: ClusterIP
IP: 10.40.9.201
Port: http 80/TCP
TargetPort: http/TCP
Endpoints: 10.36.0.5:80,10.36.1.5:80,10.36.100.5:80 + 114 more...
Port: https 443/TCP
TargetPort: https/TCP
Endpoints: 10.36.0.5:443,10.36.1.5:443,10.36.100.5:443 + 114 more...
Session Affinity: None
Events: <none>
</code></pre>
<p>What was strange to me is this part</p>
<pre><code>Endpoints: 10.36.0.5:80,10.36.1.5:80,10.36.100.5:80 + 114 more...
</code></pre>
<p>I was expecting to see 300 endpoints there, is that assumption correct?</p>
<p>(I also found <a href="https://engineering.dollarshaveclub.com/kubernetes-fixing-delayed-service-endpoint-updates-fd4d0a31852c" rel="noreferrer">this post</a>, which is about a similar issue, but there the author was experiencing only a few minutes delay until the endpoints were updated, but for me it didn't change even in half an hour.)</p>
<p>How could I troubleshoot what was going wrong? I read that this is done by the Endpoints controller, but I couldn't find any info about where to check its logs.</p>
<p><strong>Update</strong>: We managed to reproduce this a couple more times. Sometimes it was less severe, for example 381 endpoints instead of 445. One interesting thing we noticed is that if we retrieved the details of the endpoints:</p>
<pre><code>$ k describe endpoints my-app
Name: my-app
Namespace: production
Labels: app=my-app
Annotations: <none>
Subsets:
Addresses: 10.36.0.5,10.36.1.5,10.36.10.5,...
NotReadyAddresses: 10.36.199.5,10.36.209.5,10.36.239.2,...
</code></pre>
<p>Then a bunch of IPs were "stuck" in the <code>NotReadyAddresses</code> state (not the ones that were "missing" from the service though, if I summed the number of IPs in <code>Addresses</code> and <code>NotReadyAddresses</code>, that was still less than the total number of ready pods). Although I don't know if this is related at all, I couldn't find much info online about this <code>NotReadyAddresses</code> field.</p>
| <p>It turned out that this is caused by using preemptible VMs in our node pools; it doesn't happen if the nodes are not preemptible.<br>
We couldn't figure out more details of the root cause, but using preemptible VMs as nodes is not an officially supported scenario anyway, so we switched to regular VMs.</p>
|
<p>I just tried setting up kubernetes on my bare server,</p>
<p>Previously I had successfully create my docker compose</p>
<p>There are several apps :</p>
<ul>
<li>Apps A (docker image name : a-service)</li>
<li>Apps B (docker image name : b-service)</li>
</ul>
<p>Inside applications A and B there are configs (actually there are many apps: A, B, C, D, etc.).</p>
<p>The config file is something like this</p>
<pre><code>IPFORSERVICEA=http://a-service:port-number/path/to/something
IPFORSERVICEB=http://b-service:port-number/path/to/something
</code></pre>
<p>At least the above config works in docker compose (the config is at the app level, and each app needs to access the other apps). Is there any way for me to access one Kubernetes Service from another service? I am planning to create 1 app inside 1 deployment, and 1 service for each deployment.</p>
<p>Something like:</p>
<pre><code>App -> Deployment -> Service(i.e: NodePort,ClusterIP)
</code></pre>
<p>Thanks !</p>
| <blockquote>
<p>Is there any chance for me to access another Kubernetes Service from
another service ?</p>
</blockquote>
<p>Yes, you just need to specify the DNS name of the service you need to connect to (<code>type: ClusterIP</code> works fine for this) as:</p>
<pre><code><service_name>.<namespace>.svc.cluster.local
</code></pre>
<p>In this case such a domain name will be correctly resolved into the internal IP address of the service you need to connect to, using the built-in DNS.</p>
<p>For example:</p>
<pre><code>nginx-service.web.svc.cluster.local
</code></pre>
<p>where <code>nginx-service</code> is the name of your service and <code>web</code> is the app's namespace, so the service YAML definition can look like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
namespace: web
spec:
ports:
- name: http
protocol: TCP
port: 80
selector:
app: nginx
type: ClusterIP
</code></pre>
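<p>Applied to the config format from the question, a sketch (this assumes both services are deployed in the <code>default</code> namespace and keeps the question's <code>port-number</code> placeholder) could look like:</p>
<pre><code>IPFORSERVICEA=http://a-service.default.svc.cluster.local:port-number/path/to/something
IPFORSERVICEB=http://b-service.default.svc.cluster.local:port-number/path/to/something
</code></pre>
<p>Within the same namespace the short form <code>http://a-service:port-number/...</code> also resolves, so the docker-compose style config often works unchanged.</p>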
<p>See <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="nofollow noreferrer">official docs</a> to get more information.</p>
|
<p>Is there any tool , online or self hosted , that takes all the values in UI as input and generate the full declarative yaml for the following kubernetes objects:</p>
<ul>
<li>Deployment, with init containers and imagepullsecrets and other options</li>
<li>Service</li>
<li>ConfigMap</li>
<li>Secret</li>
<li>Daemonset</li>
<li>StatefulSet</li>
<li>Namespaces and quotas</li>
<li>RBAC resources</li>
</ul>
<p><strong>Edit:</strong></p>
<p>I have been using <code>kubectl create</code> and <code>kubectl run</code>, but they don't support all the possible configuration options, and you still need to remember all the options they do support. In a UI one would be able to select from the given options for each resource.</p>
| <p>The closest is <code>kubectl create ....</code> and <code>kubectl run .....</code>. Run them with <code>-o yaml --dry-run > output.yaml</code>. This won't create the resource, but will write the resource description to the output.yaml file.</p>
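<p>For example (resource names and images below are just placeholders), you can generate skeletons for most of the listed objects and then edit the generated YAML by hand:</p>
<pre><code>kubectl create deployment my-app --image=nginx --dry-run -o yaml > deployment.yaml
kubectl create service clusterip my-app --tcp=80:8080 --dry-run -o yaml > service.yaml
kubectl create configmap my-config --from-literal=key=value --dry-run -o yaml > configmap.yaml
kubectl create secret generic my-secret --from-literal=password=changeme --dry-run -o yaml > secret.yaml
kubectl create namespace my-namespace --dry-run -o yaml > namespace.yaml
kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=10 --dry-run -o yaml > quota.yaml
kubectl create role pod-reader --verb=get,list,watch --resource=pods --dry-run -o yaml > role.yaml
</code></pre>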
|
<p>I'm new on Kubernetes and currently following this guide: <a href="https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-windows-walkthrough" rel="nofollow noreferrer">Deploy Kubernetes cluster for Windows containers</a>. I recently noticed that the VM provisioned as master node is on Linux, my question is, "Is it possible to use Windows as a Kubernetes Cluster master node?".</p>
<p>My project requires to use Windows OS on physical servers, so Linux as OS for Kubernetes master node might not be good option for container orchestrator and I will need to use Docker Swarm instead.</p>
| <p>According to Microsoft documentation you can’t use Windows as a Kubernetes master.
From <a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/creating-a-linux-master" rel="nofollow noreferrer">here</a>: </p>
<blockquote>
<p>A recently-updated Linux machine is required to follow along;
Kubernetes master resources like <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">kube-dns</a>, <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/" rel="nofollow noreferrer">kube-scheduler</a>, and
<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver</a> have not been ported to Windows yet.</p>
</blockquote>
<p>Kubernetes <a href="https://kubernetes.io/docs/getting-started-guides/windows/#setting-up-windows-server-containers-on-kubernetes" rel="nofollow noreferrer">documentation</a> also implies that you need to have Linux master node.</p>
|
<p>What is the equivalent command for <code>minikube delete</code> in <a href="https://blog.docker.com/2018/07/kubernetes-is-now-available-in-docker-desktop-stable-channel/" rel="noreferrer">docker-for-desktop</a> on OSX</p>
<p>As I understand, minikube creates a VM to host its kubernetes cluster but I do not understand how docker-for-desktop is managing this on OSX.</p>
| <p>Tearing down Kubernetes in Docker for OS X is quite an easy task.</p>
<p>Go to <code>Preferences</code>, open <code>Reset</code> tab, and click <code>Reset Kubernetes cluster</code>.</p>
<p><a href="https://i.stack.imgur.com/GKdSe.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GKdSe.png" alt="enter image description here"></a></p>
<p>All objects that have been created with kubectl before that will be deleted. </p>
<p>You can also reset docker VM image (<code>Reset disk image</code>) and all settings (<code>Reset to factory defaults</code>) or even uninstall Docker. </p>
|
<p>Asp.Net Core microservices on Docker/Kubernetes are disagreeing on the duration of inter-service calls between the caller and callee.</p>
<p>The caller logs can show anywhere from a few milliseconds up to 10 full seconds more than the callee. The problem worsens under heavy load, but is still present under light load. Many calls do agree between the caller and callee, but this discrepancy happens frequently enough to make a real dent in performance overall.</p>
<p>The timestamps indicate that the time gap can either be <em>before</em> or <em>after</em> the callee has reported that its response is complete.</p>
<p><strong>Example logs (numbers from a real time discrepancy)</strong> </p>
<pre><code>ServiceB: [2018-10-11T22:41:41.374Z] S2S request complete to ServiceA, Duration: 11644
ServiceA: [2018-10-11T22:41:29.732Z] Request complete, Duration: 5
</code></pre>
<p><strong>Caller Timing (common class for all S2S calls)</strong> </p>
<pre><code>var timer = Stopwatch.StartNew();
var response = await _httpClientFactory.CreateClient().SendAsync(request);
timer.Stop();
Logger.Info($"S2S request complete to {service}, Duration: {timer.EllapsedMilliseconds}");
</code></pre>
<p><strong>Callee Timing (custom Asp.Net middleware)</strong></p>
<pre><code>var timer = Stopwatch.StartNew();
await _next(context);
timer.Stop();
Logger.Info($"Request complete, Duration: {timer.EllapsedMilliseconds}");
</code></pre>
<p>This middleware is registered as almost the first in the pipeline (second to only the ActivityId / TraceId middleware for log correlation).</p>
<p><strong>Troubleshooting Steps</strong> </p>
<ul>
<li>Not able to reproduce the issue on Windows development machine</li>
<li>Monitored CPU, Memory, Thread Count, GC Collects, Open Handles (all at reasonable levels)</li>
<li>Adjusted k8s spec CPU and Memory request / limit (various levels with some effect, but does not alleviate the problem)</li>
<li>Turned on Server GC with Environment Variable: COMPlus_gcServer=1</li>
<li>Issue occurs on services that are within resource limits and have not needed to autoscale</li>
<li>Changed to new Kestrel Socket Transport (instead of libuv)</li>
<li>Changed to new .Net Core 2.1 SocketsHttpHandler </li>
</ul>
<p><strong>System Topology</strong></p>
<p>Asp.Net Core 2.1 self-hosted Kestrel<br>
.Net Core 2.1.5 runtime<br>
Docker / Kubernetes 1.10.5<br>
K8s Addons: kube-proxy, weave, etcd, SkyDNS<br>
AWS c5.4xlarge </p>
<p><strong>Updates</strong> </p>
<ol>
<li>Found out that the time gap can sometimes be before or after the callee starts/completes</li>
</ol>
| <p>In this case, this issue was fixed by <em>removing</em> the k8s spec CPU limit. </p>
<p>Monitoring the <code>container_cpu_cfs_throttled_seconds_total</code> metric showed that one of the service containers was getting <em>paused</em> very frequently. These pauses were mostly on the caller side of the S2S calls, which increased the elapsed time reported by the caller.</p>
<p>Removing the CPU limit in the k8s spec prevents k8s from passing the <code>--cpu-quota</code> and <code>--cpu-period</code> <a href="https://docs.docker.com/config/containers/resource_constraints/#cpu" rel="nofollow noreferrer">docker parameters</a>, which is what controls the container pauses.</p>
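<p>As a sketch (the resource values below are illustrative, not taken from the original spec), the container ends up with requests only, so no CFS quota is applied:</p>
<pre><code>resources:
  requests:
    cpu: "1"
    memory: 512Mi
  # no limits.cpu here, so the kubelet does not pass --cpu-quota/--cpu-period
  # and the container can no longer be CFS-throttled (it can still compete
  # for CPU with other pods on the node)
</code></pre>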
|
<p>I am trying to connect to the Kubernetes dashboard.</p>
<p>I have the latest version of Kubernetes, v1.12, installed with kubeadm on a server.</p>
<p>I downloaded the metrics-server from GitHub and ran: </p>
<blockquote>
<p>kubectl create -f deploy/1.8+</p>
</blockquote>
<p>but I get this error: </p>
<blockquote>
<p>kube-system metrics-server-5cbbc84f8c-tjfxd 0/1 Pending 0 12m</p>
</blockquote>
<p>without any logs to debug:</p>
<blockquote>
<p>error: the server doesn't have a resource type "logs"</p>
</blockquote>
<p>I don't want to install Heapster because it is DEPRECATED.</p>
<p><strong>UPDATE</strong></p>
<p>Hello, and thanks.</p>
<p>I ran the taint command and I get:</p>
<blockquote>
<p>error: at least one taint update is required</p>
</blockquote>
<p>and the command </p>
<blockquote>
<p>kubectl describe deployment metrics-server -n kube-system</p>
</blockquote>
<p>I get this output: </p>
<pre><code>Name: metrics-server
Namespace: kube-system
CreationTimestamp: Thu, 18 Oct 2018 14:34:42 +0000
Labels: k8s-app=metrics-server
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata": {"annotations":{},"labels":{"k8s-app":"metrics-server"},"name":"metrics-...
Selector: k8s-app=metrics-server
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: k8s-app=metrics-server
Service Account: metrics-server
Containers:
metrics-server:
Image: k8s.gcr.io/metrics-server-amd64:v0.3.1
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: metrics-server-5cbbc84f8c (1/1 replicas created)
Events: <none>
</code></pre>
<p>Command:</p>
<blockquote>
<p>kubectl get nodes</p>
</blockquote>
<p>The output for this is just the IP of the node, and nothing special.</p>
<p>Any ideas on what to do to get the Kubernetes dashboard working?</p>
| <p>I suppose you are trying to set up metrics-server on your master node.</p>
<p>If you issue <code>kubectl describe deployment metrics-server -n kube-system</code> I believe you will see something like this:</p>
<blockquote>
<p>Name: metrics-server Namespace:<br>
kube-system CreationTimestamp: Thu, 18 Oct 2018 15:57:34 +0000
Labels: k8s-app=metrics-server Annotations:<br>
deployment.kubernetes.io/revision: 1 Selector:<br>
k8s-app=metrics-server Replicas: 1 desired | 1 updated |
1 total | 0 available | 1 unavailable</p>
</blockquote>
<p>But if you describe your node you will see a taint that prevents you from scheduling new pods on the master node:</p>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master-1 Ready master 17m v1.12.1
kubectl describe node kube-master-1
Name: kube-master-1
...
Taints: node-role.kubernetes.io/master:NoSchedule
</code></pre>
<p>You have to remove this taint:</p>
<pre><code>kubectl taint node kube-master-1 node-role.kubernetes.io/master:NoSchedule-
node/kube-master-1 untainted
</code></pre>
<p>Result:</p>
<pre><code> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-xvc77 2/2 Running 0 20m
kube-system coredns-576cbf47c7-rj4wh 1/1 Running 0 21m
kube-system coredns-576cbf47c7-vsjsf 1/1 Running 0 21m
kube-system etcd-kube-master-1 1/1 Running 0 20m
kube-system kube-apiserver-kube-master-1 1/1 Running 0 20m
kube-system kube-controller-manager-kube-master-1 1/1 Running 0 20m
kube-system kube-proxy-xp5zh 1/1 Running 0 21m
kube-system kube-scheduler-kube-master-1 1/1 Running 0 20m
kube-system metrics-server-5cbbc84f8c-l2t76 1/1 Running 0 18m
</code></pre>
<p>But this is not the best approach. A better approach is to join a worker node and set up metrics-server there. There won't be any issues and there is no need to touch the taint on the master node.</p>
<p>Hope it will help you.</p>
|
<p><strong>What I have:</strong></p>
<p>I have created one <a href="https://kubernetes.io" rel="nofollow noreferrer">Kubernetes</a> cluster using a single-node <a href="https://rancher.com/" rel="nofollow noreferrer">Rancher</a> 2.0 deployment, which has 3 etcd/control-plane nodes & 2 worker nodes attached to the cluster.</p>
<p><strong>What I did:</strong></p>
<p>I deployed one API gateway to this cluster & one Express <code>mydemoapi</code> service (no db) with 5 pods on 2 nodes on port 5000, which I don't want to expose publicly. So, I just mapped that service endpoint by service name in the API gateway as <code>http://mydemoapi:5000</code> & it was accessible via the gateway's public endpoint.</p>
<p><strong>Problem statement:</strong></p>
<p>The <code>mydemoapi</code> service is served in a random fashion, not in round robin, because the default setting of <code>kube-proxy</code> is random, as per the <a href="https://github.com/rancher/rancher" rel="nofollow noreferrer">Rancher</a> documentation <a href="https://rancher.com/load-balancing-in-kubernetes/" rel="nofollow noreferrer">Load balancing in Kubernetes</a>.</p>
<p><strong>Partial success:</strong></p>
<p>I created one ingress load balancer with the <code>Keep the existing hostname option</code> in the Rancher rules with this URL <code>mydemoapi.&lt;namespace&gt;.153.xx.xx.102.xip.io</code> & attached this service to the ingress; it is served in round-robin fashion, but there is one problem: this service was using <a href="http://xip.io" rel="nofollow noreferrer"><code>xip.io</code></a> with the public IP of my worker node & was exposed publicly.</p>
<p><strong>Help needed:</strong></p>
<p>I want to map my internal ClusterIP service into the gateway with internal access only, so that it is served to the gateway internally in round-robin fashion and hence to the gateway's public endpoint. I don't want to expose my service publicly without the gateway.</p>
| <p>Not sure which cloud you are running on, but if you are running in something like AWS you can set the following annotation to <code>true</code> on your <code>Service</code> definition:</p>
<pre><code>service.beta.kubernetes.io/aws-load-balancer-internal: "true"
</code></pre>
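<p>A minimal sketch of what that could look like as a full <code>Service</code> (the name, selector and ports below are placeholders, and this assumes the cluster actually has the AWS cloud provider integration enabled):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mydemoapi-internal
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: mydemoapi
  ports:
  - port: 80
    targetPort: 5000
</code></pre>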
<p>Other Cloud providers have similar solutions and some don't even have one. In that case, you will have to use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> service and redirect an external load balancer such as one with <a href="http://www.haproxy.org/" rel="nofollow noreferrer"><code>haproxy</code></a> or <a href="https://nginx.org/en/" rel="nofollow noreferrer"><code>nginx</code></a> to forward traffic to that <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a></p>
<p>Another option, if you want to do round robin between your services without using an <code>Ingress</code> at all, is to change your <code>kube-proxy</code> configs to use either the old <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace" rel="nofollow noreferrer">userspace</a> proxy mode or the more enhanced <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">ipvs</a> proxy mode. </p>
|
<p>I just tried setting up kubernetes on my bare server,</p>
<p>Previously I had successfully created my docker compose setup.</p>
<p>There are several apps :</p>
<ul>
<li>Apps A (docker image name : a-service)</li>
<li>Apps B (docker image name : b-service)</li>
</ul>
<p>Inside applications A and B there are configs (actually there are many apps: A, B, C, D, etc.).</p>
<p>The config file is something like this</p>
<pre><code>IPFORSERVICEA=http://a-service:port-number/path/to/something
IPFORSERVICEB=http://b-service:port-number/path/to/something
</code></pre>
<p>At least the above config works in docker compose (the config is at the app level, and each app needs to access the other apps). Is there any way for me to access one Kubernetes Service from another service? I am planning to create 1 app inside 1 deployment, and 1 service for each deployment.</p>
<p>Something like:</p>
<pre><code>App -> Deployment -> Service(i.e: NodePort,ClusterIP)
</code></pre>
<p>Thanks !</p>
| <p>Use Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">service discovery</a>.</p>
<blockquote>
<p>Service discovery is the process of figuring out how to connect to a
service. While there is a service discovery option based on
environment variables available, the DNS-based service discovery is
preferable. Note that DNS is a cluster add-on so make sure your
Kubernetes distribution provides for one or install it yourself.</p>
</blockquote>
<p><a href="http://kubernetesbyexample.com/sd/" rel="nofollow noreferrer">Service dicovery by example</a></p>
|
<p>I'm new to Java development. We have an external system which issues the certificates, so I have to use those certificates in my application in order to make calls. I don't want to add those certificates into the default key-store; I want to add them in my Spring Boot application. </p>
<p>We are deploying this application into a Kubernetes cluster; is there any way we can add these certificates in the Kubernetes cluster so the JVM will pick them up? The tech stack we use is Java 8, Spring Boot, Spring Integration, Docker, and Kubernetes (GKE).</p>
| <p>You can follow something like <a href="https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift/" rel="nofollow noreferrer">this</a>.</p>
<p>Basically, use <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Kubernetes Secrets</a> to store your certificates. Java understands keystores, so you'll have to convert them to that, but that in and of itself can be stored in Kubernetes Secrets. For example, you can use something like this to create a keystore:</p>
<pre><code>openssl pkcs12 -export -inkey $keyfile -in $crtfile -out $keystore.pkcs12 -password pass:$password
keytool -importkeystore -noprompt -srckeystore $keystore.pkcs12 -srcstoretype pkcs12 -destkeystore $keystore.jks -storepass $password -srcstorepass $password
</code></pre>
<p>And something like this to create a truststore from a CA bundle:</p>
<pre><code>csplit -z -f crt- service-ca.crt '/-----BEGIN CERTIFICATE-----/' '{*}'
for file in crt-*; do keytool -import -noprompt -keystore truststore.jks -file $file -storepass changeit -alias service-$file; done
</code></pre>
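<p>A hypothetical sketch of wiring this into a pod (the secret name <code>app-keystores</code>, the mount path and the passwords below are made up; the JVM picks up <code>JAVA_TOOL_OPTIONS</code> automatically):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-spring-boot-app
spec:
  containers:
  - name: app
    image: my-spring-boot-app:latest   # hypothetical image
    env:
    - name: JAVA_TOOL_OPTIONS
      value: >-
        -Djavax.net.ssl.keyStore=/etc/pki/keystore.jks
        -Djavax.net.ssl.keyStorePassword=changeit
        -Djavax.net.ssl.trustStore=/etc/pki/truststore.jks
        -Djavax.net.ssl.trustStorePassword=changeit
    volumeMounts:
    - name: keystores
      mountPath: /etc/pki
      readOnly: true
  volumes:
  - name: keystores
    secret:
      secretName: app-keystores        # holds keystore.jks and truststore.jks
</code></pre>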
|
<p>I am practicing with Kubernetes using minikube.</p>
<p>I run docker-registry as a pod and I created a docker-registry service.</p>
<p><strong>edit /etc/docker/daemon.json</strong></p>
<pre><code>{
"insecure-registries":["192.168.99.100:30050"]
}
</code></pre>
<p><strong>edit openssl.conf</strong></p>
<pre><code>[v3_req]
subjectAltName = IP:192.168.99.100
</code></pre>
<p><strong>And I created a certificate</strong></p>
<pre><code>openssl genrsa -out my.key
openssl req -x509 -new -nodes -key my.key -subj "CN=192.168.99.100:30050" -days 5000 -out my.crt
</code></pre>
<p><strong>create registry-tls-secret</strong></p>
<pre><code>kubectl create secret generic registry-tls-secret --from-file=my.crt=my.crt --from-file=my.key=my.key
</code></pre>
<p><strong>and made a directory and copied the my.crt file into it</strong></p>
<pre><code>/etc/docker/certs.d/192.168.99.100:30050
</code></pre>
<p>So I can push & pull 192.168.99.100:30050/[image]:[tag] on the host</p>
<p>And after that I try to create hello-world pods.</p>
<p>The hello-world image is already in the docker-registry (192.168.99.100:30050).</p>
<p>I created a docker-registry secret named regcred:</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=192.168.99.100:30050 --docker-username=&lt;user-name&gt; --docker-password=&lt;user-password&gt; --docker-email=&lt;user-email&gt;
</code></pre>
<p>and write helloworld-deployment.yaml</p>
<pre><code>...
image: 192.168.99.100:30050/hello-world:v1
...
imagePullSecrets:
- name: regcred
...
</code></pre>
<p>Finally I applied helloworld-deployment.yaml,
but I got an error message saying: </p>
<pre><code>Failed to pull image "192.168.99.100:30050/hello-world:v1": rpc error: code = Unknown desc = Error response from daemon: Get https://192.168.99.100:30050/v2/: x509: certificate signed by unknown authority
</code></pre>
<p><strong>I don't really know what I'm missing... please help me...</strong></p>
| <p>There is no standard for storing a port number in the certificate, so the Common Name of the certificate should not contain a port. Create the certificate with <code>CN=192.168.99.100</code> and repeat the same steps. For more information on the common name, refer <a href="https://support.dnsimple.com/articles/what-is-common-name/" rel="nofollow noreferrer">here</a>.</p>
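<p>For example, regenerating the certificate without the port in the CN (reusing the key from the question; the SAN still has to come from your openssl config as before) could look like:</p>
<pre><code>openssl genrsa -out my.key 2048
openssl req -x509 -new -nodes -key my.key -subj "/CN=192.168.99.100" -days 5000 -out my.crt
</code></pre>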
<p>Make sure you copy the certificate data to <code>/etc/docker/certs.d/192.168.99.100:30050/ca.crt</code>.</p>
|
<p>I have a StatefulSet that starts a MySQL cluster. The only downside at the moment is that for every replica I need to create a Persistent Volume and a Persistent Volume Claim with a selector that matches the label and pod index.
This means I cannot dynamically add replicas without manual interaction.</p>
<p>For this reason I'm searching for a solution that gives me the option to have only 1 Volume and 1 Claim, where during pod creation each pod knows its own pod name for the subPath during the mount (an initContainer would be used to check and create the directories on the volume before the application container starts).</p>
<p>So I am searching for the correct way to write something like:</p>
<pre><code>volumeMounts:
- name: mysql-datadir
mountPath: /var/lib/mysql
subPath: "${PODNAME}/datadir"
</code></pre>
| <p>You can get <code>POD_NAME</code> from the metadata (the downward API) by setting an env var:</p>
<pre><code> env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
<p>But you cannot use <code>ENV</code> vars in volume declarations (as far as I know...), so everything else has to be reached via workarounds. One of the workarounds is described <a href="https://github.com/kubernetes/kubernetes/issues/48677#issuecomment-347452457" rel="nofollow noreferrer">here</a>.</p>
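<p>A minimal sketch of that kind of workaround (image and paths are illustrative): Kubernetes expands <code>$(VAR_NAME)</code> in <code>command</code>/<code>args</code>, so an initContainer can prepare a per-pod directory on the shared volume and the main container can point its datadir at it, while both mount the whole volume:</p>
<pre><code>initContainers:
- name: init-datadir
  image: busybox
  env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  command: ["sh", "-c", "mkdir -p /data/$(MY_POD_NAME)/datadir"]
  volumeMounts:
  - name: mysql-datadir
    mountPath: /data
containers:
- name: mysql
  image: mysql:5.7                      # illustrative image
  env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  args: ["--datadir=/data/$(MY_POD_NAME)/datadir"]
  volumeMounts:
  - name: mysql-datadir
    mountPath: /data
</code></pre>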
|
<p>Hey folks, </p>
<p>I made the YAML files to deploy my application and now I'm working with Helm to deploy it automatically. However, although all of my conf files for Kubernetes worked, I have a problem with Helm and the <code>PVC</code>.
I've checked on the internet and I don't find where my mistake is :( </p>
<p><strong>pvc-helm.yaml</strong></p>
<pre><code>{{- if .Values.persistence.enabled }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ .Values.persistence.name }}
namespace: {{ .Values.persistence.namespace }}
spec:
accessModes:
- {{ .Values.persistence.accessModes | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- end }}
</code></pre>
<p><strong>values.yaml</strong></p>
<pre><code>persistence:
enabled: true
name: ds-pvc
namespace: ds-svc
storageClassName: standard
storageClass: standard
accessModes:
- ReadWriteOnce
size: 20Mi
</code></pre>
<p>When I run the command <code>helm install cas/ --tls</code> I get the error below: </p>
<blockquote>
<p>Error: release brawny-olm failed: PersistentVolumeClaim "ds-pvc" is
invalid: [spec.accessModes: Required value: at least 1 access mode is
required, spec.resources[storage]: Required value]</p>
</blockquote>
<p>Do I have to set up a <code>PersistentVolume</code> as well? </p>
| <p>There are two issues in your helm chart:</p>
<p><strong>1.</strong> The <code>spec</code> level shouldn't be indented under the <code>metadata</code> level. It has no relation to <code>metadata</code>, so <code>spec</code> is on the same level as <code>metadata</code>.</p>
<p><strong>2.</strong> Value <code>persistence.accessModes</code> should be defined as a string, not as a list.</p>
<p>The correct files are:</p>
<p><strong>pvc-helm.yaml</strong></p>
<pre><code>{{- if .Values.persistence.enabled }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ .Values.persistence.name }}
namespace: {{ .Values.persistence.namespace }}
spec:
accessModes:
- {{ .Values.persistence.accessModes | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- end }}
</code></pre>
<p><strong>values.yaml</strong></p>
<pre><code>persistence:
enabled: true
name: ds-pvc
namespace: default
storageClassName: standard
storageClass: standard
accessModes: ReadWriteOnce
size: 20Mi
</code></pre>
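<p>To catch this kind of problem before installing, you can also render and lint the chart locally, for example:</p>
<pre><code>helm lint cas/
helm template cas/ > rendered.yaml
kubectl apply --dry-run -f rendered.yaml
</code></pre>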
|
<p>I know there is <code>kubectl</code> command to list containers in all the namespaces:</p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath={.items[*].spec.containers[*].name}
</code></pre>
<p>is there a way to get all the containers in all namespaces using rest query?</p>
| <p>You can't query the REST API for containers directly, since the smallest unit you can manage is a Pod, just as in your <code>kubectl</code> example. </p>
<p>You could however query all Pods and filter using a tool like jq. So obviously, replacing your <code>$TOKEN</code> and <code>$CLUSTER</code> as appropriate, the following should work:</p>
<p><code>curl -XGET -H "Authorization: Bearer $TOKEN" -H "Accept: application/json" https://$CLUSTER:8443/api/v1/pods?limit=500 | jq '.items[] .spec .containers[] .name'</code></p>
<p>Not sure how the above has any benefit over using the CLI though. On a side note, if you're using the <code>oc</code> tool, set the <code>--loglevel=9</code> option and you will be able to see what request is being sent to the server.</p>
|
<p>Recently I've been working on a toy app using Kubernetes. Part of the app is a web server that needs to support WebSockets. Currently, I'm using port-forwarding to access the web server and everything works just fine.</p>
<p>I'd like to switch to using an Ingress and IngressController to avoid using the port forwarding.</p>
<p>Here is my <code>Ingress</code> config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
spec:
rules:
- http:
paths:
- path: /app
backend:
serviceName: web-svc
servicePort: 3030
- path: /ws
backend:
serviceName: web-svc
servicePort: 3030
</code></pre>
<p>Now accessing the app through <code>$(minikube ip)/app</code> works just fine, but the WebSocket requests all fail because nginx is returning a 200 and not a 101. </p>
<p>I've tried adding the <code>nginx.org/websocket-services</code> annotation but that doesn't seem to be working either.</p>
<p>Has anyone encountered a similar situation?</p>
<p>Cheers</p>
| <p>From looking at the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#configuration-snippet" rel="noreferrer">nginx ingress controller docs</a> and the <a href="http://nginx.org/en/docs/http/websocket.html" rel="noreferrer">nginx docs</a> you probably need something like this as an annotation on your Kubernetes <code>Ingress</code>:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_http_version 1.1;
proxy_set_header Upgrade "websocket";
proxy_set_header Connection "Upgrade";
</code></pre>
<p>Note that once you add that annotation all of your <code>Ingress</code> rules will have that snippet in the <code>location</code> block in your nginx configs. So if you want to ignore it for other rules you will have to create a separate Kubernetes <code>Ingress</code>.</p>
<p>EDIT:</p>
<p>As per the <a href="https://gist.github.com/jsdevtom/7045c03c021ce46b08cb3f41db0d76da#file-ingress-service-yaml" rel="noreferrer">gist</a> and the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#websockets" rel="noreferrer">Nginx ingress docs</a> 📄, it seems like this annotation fixed the problem:</p>
<pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/proxy-read-timeout: 3600
nginx.ingress.kubernetes.io/proxy-send-timeout: 3600
</code></pre>
|
<p>When using <code>AWS EKS</code>, is it possible to set up the worker nodes on spot instances?</p>
<ul>
<li>How can I do this?</li>
<li>Anything special I should pay attention to, in such a setup?</li>
</ul>
| <p>Yes, you can. You will have to modify the <a href="https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml" rel="nofollow noreferrer">Cloudformation Template</a> (which is mentioned in this <a href="https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html" rel="nofollow noreferrer">document</a>) in the <code>LaunchConfiguration</code> section to specify a spot price.</p>
<pre><code>NodeLaunchConfig:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
SpotPrice: "20" # <=== Here
AssociatePublicIpAddress: 'true'
IamInstanceProfile: !Ref NodeInstanceProfile
ImageId: !Ref NodeImageId
InstanceType: !Ref NodeInstanceType
KeyName: !Ref KeyName
SecurityGroups:
- !Ref NodeSecurityGroup
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: !Ref NodeVolumeSize
VolumeType: gp2
DeleteOnTermination: true
UserData:
Fn::Base64:
!Sub |
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
/opt/aws/bin/cfn-signal --exit-code $? \
--stack ${AWS::StackName} \
--resource NodeGroup \
--region ${AWS::Region}
</code></pre>
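<p>Keep in mind that spot instances can be reclaimed with only a two-minute warning, so it is common to label (and optionally taint) these workers to control what gets scheduled on them. A hedged example, passed via the template's <code>BootstrapArguments</code> parameter (the <code>lifecycle=spot</code> label is just a convention, not anything EKS requires):</p>
<pre><code>--kubelet-extra-args --node-labels=lifecycle=spot
</code></pre>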
|
<p>I am running a <strong>job</strong> with a Kubernetes pod and I need to measure <strong>the time between the creation of the job by the user and the time this job starts running on the node</strong>.</p>
<p>I want to get it through some <strong>API</strong>.</p>
<p>Does anyone know how I can get it?</p>
| <p><strong>Monitoring Kubernetes (number of pending pods/jobs)</strong> </p>
<p>Use the <code>kube-state-metrics</code> package for monitoring and a small Go program called <code>veneur-prometheus</code> to scrape the Prometheus metrics kube-state-metrics emits and publish them as statsd metrics to your monitoring system.</p>
<p>For example, here’s a chart of the number of pending pods in the cluster over the last hour. Pending means that they’re waiting to be assigned a worker node to run on. You can see that the number spikes at 11am, because a lot of cron jobs run at the 0th minute of the hour in this case.</p>
<p><a href="https://i.stack.imgur.com/NnApM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NnApM.png" alt="Pending pods "></a></p>
<p>An example chart showing pending pods in a cluster over the last hour</p>
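<p>For reference, with <code>kube-state-metrics</code> in place the pending-pod count behind a chart like this comes from a query along these lines (a sketch; aggregation labels may differ in your setup):</p>
<pre><code>sum(kube_pod_status_phase{phase="Pending"})
</code></pre>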
|
<p>I need to design a website solution with multiple spa pages.
What I have in mind as a high-level design is below:</p>
<p>There will be one machine for each SPA page which will just render the UI, do SSR and take requests from the browser.
For example, www.abc.com/foo will be routed to this machine. I'm thinking of putting the application UI code in a Kubernetes pod and hosting that on the machine/node. Also, using kops I will manage the autoscaling of nodes and pods. </p>
<p>Now, this application in the pod will call other pods for data to be shown on the web page. For example, www.abc.com/API/foo will be called from pod1. I'm thinking of making this another pod which will live on the same node as the web page pod. </p>
<p>So now I have 2 pods living on a single node which will autoscale as per traffic.
Similarly, for each page I have on my website I will have a node with 2 pods each.</p>
<p>My questions now are below:-</p>
<ol>
<li>Is there any best practice or other design solution for above?</li>
<li>How will I achieve path based routing like www.abc.com/foo should call my web page pod?</li>
<li>How can I expose the pod to external world i.e. internet without using a load balancer?</li>
<li>Should I have different repos for each pod?</li>
</ol>
| <blockquote>
<p>Is there any best practice or other design solution for above?</p>
</blockquote>
<p>You can use <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer"><code>PodAffinity</code></a> to co-locate your pods.</p>
<blockquote>
<p>How will I achieve path based routing like www.abc.com/foo should call
my web page pod?</p>
</blockquote>
<p>You can use a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer"><code>Ingress</code></a>. Since this is a layer 7 facility you will be able to do multiple hosts and paths; keep in mind that this is generally exposed to the outside using a LoadBalancer type of service.</p>
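<p>A minimal <code>Ingress</code> sketch for the path-based routing (service names and ports below are placeholders):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: abc-ingress
spec:
  rules:
  - host: www.abc.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-ui-svc
          servicePort: 80
      - path: /api/foo
        backend:
          serviceName: foo-api-svc
          servicePort: 80
</code></pre>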
<blockquote>
<p>How can I expose the pod to external world i.e. internet without using
a load balancer?</p>
</blockquote>
<p>You can use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> type of <code>Service</code>. Note that you generally use either an Ingress or a NodePort service; the downside of this approach is that you won't be able to do paths, and that will have to be handled in your application.</p>
<blockquote>
<p>Should I have different repos for each pod?</p>
</blockquote>
<p>Git repos? Sure, but you will have to have different container images for each application.</p>
|
<p>The REST API requests (<code>GET</code>, <code>POST</code>, <code>PUT</code>, etc.) to the Kubernetes API server are simple request/response exchanges and easy to understand, such as <code>kubectl create &lt;something&gt;</code>. I wonder how the API server serves the pod logs when I do <code>kubectl logs -f &lt;pod-name&gt;</code> (and similar operations like <code>kubectl attach &lt;pod&gt;</code>). Is it just an HTTP response to a <code>GET</code> in a loop?</p>
| <p>My advice is to always check what <code>kubectl</code> does under the covers, and for that use <code>-v=9</code> with your command. It will provide you with the full requests and responses that are going between the client and the server. </p>
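<p>For example, running something like <code>kubectl -v=9 logs -f &lt;pod-name&gt;</code> shows that follow mode is a single long-lived request to the log subresource, streamed back as a chunked HTTP response rather than a <code>GET</code> in a loop:</p>
<pre><code>GET https://&lt;api-server&gt;/api/v1/namespaces/default/pods/&lt;pod-name&gt;/log?follow=true
</code></pre>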
|
<p>I manually installed a Kubernetes cluster of 3 nodes (1 master, 2 slaves). Now, I want to perform an upgrade of the k8s version (say, from 1.7 to 1.11). As the gap is long, the preferred method would be to forcefully reinstall all the required packages. Is there a better way to do this? If yes, could you please tell me how?</p>
<p>Assuming I do the upgrade by re-installing packages, I would want to manually back up everything (configuration, namespaces, and especially persistent volumes). From the Kubernetes homepage, I found juju is recommended, but as I'm not running juju, what would be an alternative to do it manually?</p>
<p>Thank you!</p>
| <p>They do not recommend skipping minor releases, so you should upgrade to 1.8, then 1.9, and so on. They support deprecated APIs for one release; so, for example, if you have any Deployments they are on the extensions beta API, which will not be supported by the 1.11 release, where they are on the apps API.</p>
<p>I don't think you're doing yourself any favors by trying to skip versions. Either way it will be a long manual process.</p>
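<p>For the manual backup part (a rough sketch, without juju; paths and the resource list are illustrative, and PV data itself still has to be backed up at the storage layer):</p>
<pre><code># Dump namespaced API objects
kubectl get --all-namespaces -o yaml \
  deployments,daemonsets,statefulsets,services,configmaps,secrets,pvc,resourcequotas \
  > namespaced-objects.yaml

# Dump cluster-scoped objects
kubectl get -o yaml namespaces,pv,clusterroles,clusterrolebindings > cluster-objects.yaml

# Snapshot etcd (v3); you may need to pass endpoint/cert flags depending on your setup
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db
</code></pre>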
|