prompt | response |
---|---|
<p>Sometimes when Minikube is already running and I try to run any <code>kubectl</code> command (like <code>kubectl get pods</code>) I get this error:<br></p>
<pre><code>Unable to connect to the server: dial tcp 192.168.99.101:8443
</code></pre>
<p>So I stop Minikube and start it again and all <code>kubectl</code> commands work fine, but then after a while if I try to run any <code>kubectl</code> command I get the same error as above.</p>
<p>If I type <code>minikube ip</code> I get <code>192.168.99.100</code>. Why does <code>kubectl</code> try to connect to <code>192.168.99.101</code> (as mentioned in the error) when Minikube is running on <code>192.168.99.100</code>?</p>
<p>Note that I'm very new to Kubernetes. </p>
<p><code>kubectl config get-contexts</code> gives me this output:<br></p>
<pre><code>CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
</code></pre>
<p>This is <code>minikube logs</code> output <a href="https://pastebin.com/kb5jNRyW" rel="nofollow noreferrer">https://pastebin.com/kb5jNRyW</a></p>
| <p>This usually happens when the IP of your VM has changed while your kubectl context is still pointing to the previous IP. You can compare the output of <code>minikube ip</code> with the IP of the VM that was actually created; they will be different.</p>
<p>You can also check with <code>minikube status</code>, whose output will look like this:</p>
<pre><code>minikube: Running
cluster: Stopped
kubectl: Misconfigured: pointing to stale minikube-vm.
To fix the kubectl context, run minikube update-context
</code></pre>
<p>Run <code>minikube update-context</code>. If that alone doesn't fix it, run <code>minikube start</code> followed by <code>minikube update-context</code>; this won't download everything again, it only starts the VM if it is shut down.</p>
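<p>A minimal sketch of that recovery sequence from a shell (the exact <code>minikube status</code> output can differ between versions):</p>
<pre><code># compare what kubectl targets with the VM's actual IP
minikube status
minikube ip

# re-point the kubectl context at the current VM IP
minikube update-context

# if the VM itself is stopped, start it first, then fix the context again
minikube start
minikube update-context

# verify
kubectl get pods
</code></pre>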
|
<p>I am trying to run fluentd as a daemonset on a kubernetes cluster (GKE). The config is parsed successfully, then the plugins receive a shutdown signal with a few warn messages. There are no error messages. I tried increasing the verbosity level, and the following is the output of the pods:</p>
<pre><code>fluentd-7przp fluentd 2018-09-08 11:02:46 +0000 [info]: #0 fluent/log.rb:322:info: starting fluentd worker pid=9 ppid=1 worker=0
fluentd-7przp fluentd 2018-09-08 11:02:46 +0000 [info]: #0 fluent/log.rb:322:info: fluentd worker is now running worker=0
fluentd-sr764 fluentd 2018-09-08 11:02:50 +0000 [warn]: #0 fluent/log.rb:342:warn: dump an error event: error_class=NoMethodError error="undefined method `[]' for nil:NilClass" location="/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'" tag="fluent.info" time=2018-09-08 11:02:45.151774166 +0000 record={"worker"=>0, "message"=>"fluentd worker is now running worker=0"}
fluentd-dhwnx fluentd 2018-09-08 11:02:51 +0000 [warn]: #0 fluent/log.rb:342:warn: dump an error event: error_class=NoMethodError error="undefined method `[]' for nil:NilClass" location="/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'" tag="fluent.info" time=2018-09-08 11:02:46.029522363 +0000 record={"worker"=>0, "message"=>"fluentd worker is now running worker=0"}
fluentd-7przp fluentd 2018-09-08 11:02:51 +0000 [warn]: #0 fluent/log.rb:342:warn: dump an error event: error_class=NoMethodError error="undefined method `[]' for nil:NilClass" location="/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'" tag="fluent.info" time=2018-09-08 11:02:46.538377182 +0000 record={"worker"=>0, "message"=>"fluentd worker is now running worker=0"}
fluentd-sr764 fluentd 2018-09-08 11:02:55 +0000 [warn]: #0 fluent/log.rb:342:warn: dump an error event: error_class=NoMethodError error="undefined method `[]' for nil:NilClass" location="/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'" tag="fluent.warn" time=2018-09-08 11:02:50.153922217 +0000 record={"error"=>"#<NoMethodError: undefined method `[]' for nil:NilClass>", "location"=>"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'", "tag"=>"fluent.info", "time"=>2018-09-08 11:02:45.151774166 +0000, "record"=>{"worker"=>0, "message"=>"fluentd worker is now running worker=0"}, "message"=>"dump an error event: error_class=NoMethodError error=\"undefined method `[]' for nil:NilClass\" location=\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\" tag=\"fluent.info\" time=2018-09-08 11:02:45.151774166 +0000 record={\"worker\"=>0, \"message\"=>\"fluentd worker is now running worker=0\"}"}
fluentd-sr764 fluentd 2018-09-08 11:03:10 +0000 [warn]: #0 fluent/log.rb:342:warn: dump an error event: error_class=NoMethodError error="undefined method `[]' for nil:NilClass" location="/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'" tag="fluent.warn" time=2018-09-08 11:03:05.168427649 +0000 record={"error"=>"#<NoMethodError: undefined method `[]' for nil:NilClass>", "location"=>"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'", "tag"=>"fluent.warn", "time"=>2018-09-08 11:03:00.165843014 +0000, "record"=>{"error"=>"#<NoMethodError: undefined method `[]' for nil:NilClass>", "location"=>"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'", "tag"=>"fluent.warn", "time"=>2018-09-08 11:02:55.156840516 +0000, "record"=>{"error"=>"#<NoMethodError: undefined method `[]' for nil:NilClass>", "location"=>"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'", "tag"=>"fluent.warn", "time"=>2018-09-08 11:02:50.153922217 +0000, "record"=>{"error"=>"#<NoMethodError: undefined method `[]' for nil:NilClass>", "location"=>"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'", "tag"=>"fluent.info", "time"=>2018-09-08 11:02:45.151774166 +0000, "record"=>{"worker"=>0, "message"=>"fluentd worker is now running worker=0"}, "message"=>"dump an error event: error_class=NoMethodError error=\"undefined method `[]' for nil:NilClass\" location=\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\" tag=\"fluent.info\" time=2018-09-08 11:02:45.151774166 +0000 record={\"worker\"=>0, \"message\"=>\"fluentd worker is now running worker=0\"}"}, "message"=>"dump an error event: error_class=NoMethodError error=\"undefined method `[]' for nil:NilClass\" location=\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\" tag=\"fluent.warn\" time=2018-09-08 11:02:50.153922217 +0000 record={\"error\"=>\"#<NoMethodError: undefined method `[]' for nil:NilClass>\", \"location\"=>\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\", \"tag\"=>\"fluent.info\", \"time\"=>2018-09-08 11:02:45.151774166 +0000, \"record\"=>{\"worker\"=>0, \"message\"=>\"fluentd worker is now running worker=0\"}, \"message\"=>\"dump an error event: error_class=NoMethodError error=\\\"undefined method `[]' for nil:NilClass\\\" location=\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\" tag=\\\"fluent.info\\\" time=2018-09-08 11:02:45.151774166 +0000 record={\\\"worker\\\"=>0, \\\"message\\\"=>\\\"fluentd worker is now running worker=0\\\"}\"}"}, "message"=>"dump an error event: error_class=NoMethodError error=\"undefined method `[]' for nil:NilClass\" location=\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\" tag=\"fluent.warn\" time=2018-09-08 11:02:55.156840516 +0000 record={\"error\"=>\"#<NoMethodError: undefined 
method `[]' for nil:NilClass>\", \"location\"=>\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\", \"tag\"=>\"fluent.warn\", \"time\"=>2018-09-08 11:02:50.153922217 +0000, \"record\"=>{\"error\"=>\"#<NoMethodError: undefined method `[]' for nil:NilClass>\", \"location\"=>\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\", \"tag\"=>\"fluent.info\", \"time\"=>2018-09-08 11:02:45.151774166 +0000, \"record\"=>{\"worker\"=>0, \"message\"=>\"fluentd worker is now running worker=0\"}, \"message\"=>\"dump an error event: error_class=NoMethodError error=\\\"undefined method `[]' for nil:NilClass\\\" location=\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\" tag=\\\"fluent.info\\\" time=2018-09-08 11:02:45.151774166 +0000 record={\\\"worker\\\"=>0, \\\"message\\\"=>\\\"fluentd worker is now running worker=0\\\"}\"}, \"message\"=>\"dump an error event: error_class=NoMethodError error=\\\"undefined method `[]' for nil:NilClass\\\" location=\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\" tag=\\\"fluent.warn\\\" time=2018-09-08 11:02:50.153922217 +0000 record={\\\"error\\\"=>\\\"#<NoMethodError: undefined method `[]' for nil:NilClass>\\\", \\\"location\\\"=>\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\", \\\"tag\\\"=>\\\"fluent.info\\\", \\\"time\\\"=>2018-09-08 11:02:45.151774166 +0000, \\\"record\\\"=>{\\\"worker\\\"=>0, \\\"message\\\"=>\\\"fluentd worker is now running worker=0\\\"}, \\\"message\\\"=>\\\"dump an error event: error_class=NoMethodError error=\\\\\\\"undefined method `[]' for nil:NilClass\\\\\\\" location=\\\\\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\\\\\" tag=\\\\\\\"fluent.info\\\\\\\" time=2018-09-08 11:02:45.151774166 +0000 record={\\\\\\\"worker\\\\\\\"=>0, \\\\\\\"message\\\\\\\"=>\\\\\\\"fluentd worker is now running worker=0\\\\\\\"}\\\"}\"}"}, "message"=>"dump an error event: error_class=NoMethodError error=\"undefined method `[]' for nil:NilClass\" location=\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\" tag=\"fluent.warn\" time=2018-09-08 11:03:00.165843014 +0000 record={\"error\"=>\"#<NoMethodError: undefined method `[]' for nil:NilClass>\", \"location\"=>\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\", \"tag\"=>\"fluent.warn\", \"time\"=>2018-09-08 11:02:55.156840516 +0000, \"record\"=>{\"error\"=>\"#<NoMethodError: undefined method `[]' for nil:NilClass>\", \"location\"=>\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\", \"tag\"=>\"fluent.warn\", \"time\"=>2018-09-08 11:02:50.153922217 +0000, \"record\"=>{\"error\"=>\"#<NoMethodError: undefined method `[]' for nil:NilClass>\", \"location\"=>\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\", \"tag\"=>\"fluent.info\", 
\"time\"=>2018-09-08 11:02:45.151774166 +0000, \"record\"=>{\"worker\"=>0, \"message\"=>\"fluentd worker is now running worker=0\"}, \"message\"=>\"dump an error event: error_class=NoMethodError error=\\\"undefined method `[]' for nil:NilClass\\\" location=\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\" tag=\\\"fluent.info\\\" time=2018-09-08 11:02:45.151774166 +0000 record={\\\"worker\\\"=>0, \\\"message\\\"=>\\\"fluentd worker is now running worker=0\\\"}\"}, \"message\"=>\"dump an error event: error_class=NoMethodError error=\\\"undefined method `[]' for nil:NilClass\\\" location=\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\" tag=\\\"fluent.warn\\\" time=2018-09-08 11:02:50.153922217 +0000 record={\\\"error\\\"=>\\\"#<NoMethodError: undefined method `[]' for nil:NilClass>\\\", \\\"location\\\"=>\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\", \\\"tag\\\"=>\\\"fluent.info\\\", \\\"time\\\"=>2018-09-08 11:02:45.151774166 +0000, \\\"record\\\"=>{\\\"worker\\\"=>0, \\\"message\\\"=>\\\"fluentd worker is now running worker=0\\\"}, \\\"message\\\"=>\\\"dump an error event: error_class=NoMethodError error=\\\\\\\"undefined method `[]' for nil:NilClass\\\\\\\" location=\\\\\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\\\\\" tag=\\\\\\\"fluent.info\\\\\\\" time=2018-09-08 11:02:45.151774166 +0000 record={\\\\\\\"worker\\\\\\\"=>0, \\\\\\\"message\\\\\\\"=>\\\\\\\"fluentd worker is now running worker=0\\\\\\\"}\\\"}\"}, \"message\"=>\"dump an error event: error_class=NoMethodError error=\\\"undefined method `[]' for nil:NilClass\\\" location=\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\" tag=\\\"fluent.warn\\\" time=2018-09-08 11:02:55.156840516 +0000 record={\\\"error\\\"=>\\\"#<NoMethodError: undefined method `[]' for nil:NilClass>\\\", \\\"location\\\"=>\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\", \\\"tag\\\"=>\\\"fluent.warn\\\", \\\"time\\\"=>2018-09-08 11:02:50.153922217 +0000, \\\"record\\\"=>{\\\"error\\\"=>\\\"#<NoMethodError: undefined method `[]' for nil:NilClass>\\\", \\\"location\\\"=>\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\", \\\"tag\\\"=>\\\"fluent.info\\\", \\\"time\\\"=>2018-09-08 11:02:45.151774166 +0000, \\\"record\\\"=>{\\\"worker\\\"=>0, \\\"message\\\"=>\\\"fluentd worker is now running worker=0\\\"}, \\\"message\\\"=>\\\"dump an error event: error_class=NoMethodError error=\\\\\\\"undefined method `[]' for nil:NilClass\\\\\\\" location=\\\\\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\\\\\" tag=\\\\\\\"fluent.info\\\\\\\" time=2018-09-08 11:02:45.151774166 +0000 record={\\\\\\\"worker\\\\\\\"=>0, \\\\\\\"message\\\\\\\"=>\\\\\\\"fluentd worker is now running worker=0\\\\\\\"}\\\"}, \\\"message\\\"=>\\\"dump an error event: error_class=NoMethodError error=\\\\\\\"undefined method `[]' for nil:NilClass\\\\\\\" 
location=\\\\\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\\\\\" tag=\\\\\\\"fluent.warn\\\\\\\" time=2018-09-08 11:02:50.153922217 +0000 record={\\\\\\\"error\\\\\\\"=>\\\\\\\"#<NoMethodError: undefined method `[]' for nil:NilClass>\\\\\\\", \\\\\\\"location\\\\\\\"=>\\\\\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\\\\\", \\\\\\\"tag\\\\\\\"=>\\\\\\\"fluent.info\\\\\\\", \\\\\\\"time\\\\\\\"=>2018-09-08 11:02:45.151774166 +0000, \\\\\\\"record\\\\\\\"=>{\\\\\\\"worker\\\\\\\"=>0, \\\\\\\"message\\\\\\\"=>\\\\\\\"fluentd worker is now running worker=0\\\\\\\"}, \\\\\\\"message\\\\\\\"=>\\\\\\\"dump an error event: error_class=NoMethodError error=\\\\\\\\\\\\\\\"undefined method `[]' for nil:NilClass\\\\\\\\\\\\\\\" location=\\\\\\\\\\\\\\\"/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'\\\\\\\\\\\\\\\" tag=\\\\\\\\\\\\\\\"fluent.info\\\\\\\\\\\\\\\" time=2018-09-08 11:02:45.151774166 +0000 record={\\\\\\\\\\\\\\\"worker\\\\\\\\\\\\\\\"=>0, \\\\\\\\\\\\\\\"message\\\\\\\\\\\\\\\"=>\\\\\\\\\\\\\\\"fluentd worker is now running worker=0\\\\\\\\\\\\\\\"}\\\\\\\"}\\\"}\"}"}
fluentd-7przp fluentd 2018-09-08 11:03:24 +0000 [debug]: #0 fluent/log.rb:302:debug: preparing shutdown output plugin type=:elasticsearch_dynamic plugin_id="kubelet_out_es"
fluentd-7przp fluentd 2018-09-08 11:03:24 +0000 [info]: #0 fluent/log.rb:322:info: shutting down output plugin type=:elasticsearch_dynamic plugin_id="kubelet_out_es"
fluentd-dhwnx fluentd 2018-09-08 11:03:25 +0000 [warn]: #0 fluent/log.rb:342:warn: dump an error event: error_class=NoMethodError error="undefined method `[]' for nil:NilClass" location="/fluentd/vendor/bundle/ruby/2.3.0/gems/fluent-plugin-elasticsearch-2.10.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:268:in `eval'" tag="fluent.debug" time=2018-09-08 11:03:24.151685730 +0000 record={"message"=>"fluentd main process get SIGTERM"}
fluentd-sr764 fluentd-dhwnx fluentd-7przp fluentd fluentd 2018-09-08 11:03:25 +0000 [debug]: #0 fluent/log.rb:302:debug: calling terminate on filter plugin type=:parser plugin_id="myapp_filter"
fluentd 2018-09-08 11:03:25 +0000 [debug]: #0 fluent/log.rb:302:debug: calling terminate on filter plugin type=:parser plugin_id="myapp_filter"
fluentd-dhwnxfluentd-sr764 fluentd 2018-09-08 11:03:25 +0000 [info]: fluent/log.rb:322:info: Worker 0 finished with status 0
fluentd 2018-09-08 11:03:25 +0000 [info]: fluent/log.rb:322:info: Worker 0 finished with status 0
2018-09-08 11:03:25 +0000 [debug]: #0 fluent/log.rb:302:debug: calling terminate on output plugin type=:elasticsearch_dynamic plugin_id="kubelet_out_es"
fluentd-7przp fluentd 2018-09-08 11:03:25 +0000 [debug]: #0 fluent/log.rb:302:debug: calling terminate on output plugin type=:elasticsearch_dynamic plugin_id="apiserver_out_es"
fluentd-7przp fluentd 2018-09-08 11:03:25 +0000 [debug]: #0 fluent/log.rb:302:debug: calling terminate on output plugin type=:elasticsearch_dynamic plugin_id="out_es"
fluentd-7przp fluentd 2018-09-08 11:03:25 +0000 [debug]: #0 fluent/log.rb:302:debug: calling terminate on filter plugin type=:parser plugin_id="myapp_filter"
fluentd-7przp fluentd 2018-09-08 11:03:26 +0000 [info]: fluent/log.rb:322:info: Worker 0 finished with status 0
</code></pre>
<p>fluentd.conf:</p>
<pre><code>@include systemd.conf
@include kubernetes.conf
# Start of fluent.conf
<filter kubernetes.var.log.containers.myapp-provider**.log>
@type parser
@id myapp_filter
key_name log
reserve_data true
remove_key_name_field true
<parse>
@type multiline
format_firstline /^[A-Z]/
format1 /^(?<level>[^ ]*)[ \t]+\[(?<time>[^\]]*)\] \[(?<thread>[^\]]*)\] \[(?<request>[^\]]*)\] (?<class>[^ ]*): (?<message>.*)$/
time_format %Y-%m-%d %H:%M:%S,%L %z
</parse>
</filter>
<match kubelet>
@type elasticsearch_dynamic
@id kubelet_out_es
log_level info
include_tag_key true
host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
#user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
#password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
index_name fluentd-${tag_parts[0]+ "-" + Time.at(time).getlocal("+05:30").strftime(@logstash_dateformat)}
include_timestamp true
#logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
#logstash_format true
<buffer>
flush_thread_count 8
flush_interval 5s
chunk_limit_size 2M
queue_limit_length 32
retry_max_interval 30
retry_forever true
</buffer>
</match>
<match kube-apiserver>
@type elasticsearch_dynamic
@id apiserver_out_es
log_level info
include_tag_key true
host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
#user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
#password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
index_name fluentd-${tag_parts[0]+ "-" + Time.at(time).getlocal("+05:30").strftime(@logstash_dateformat)}
include_timestamp true
#logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
#logstash_format true
<buffer>
flush_thread_count 8
flush_interval 5s
chunk_limit_size 2M
queue_limit_length 32
retry_max_interval 30
retry_forever true
</buffer>
</match>
<match **>
@type elasticsearch_dynamic
@id out_es
log_level info
include_tag_key true
host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
#user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
#password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
index_name fluentd-${record['kubernetes']['container_name']}-${Time.at(time).getlocal("+05:30").strftime(@logstash_dateformat)}
include_timestamp true
#logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
#logstash_format true
<buffer>
flush_thread_count 8
flush_interval 5s
chunk_limit_size 2M
queue_limit_length 32
retry_max_interval 30
retry_forever true
</buffer>
</match>
</code></pre>
<p>Can anyone please help me debug this? Thanks in advance!</p>
| <p>This could be a kubernetes bug as mentioned here - <a href="https://github.com/kubernetes/kubernetes/issues/51376" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/51376</a></p>
<p>Although this issue is closed as of now, I would suggest trying to run the same <code>daemonset</code> in the <code>default</code> namespace.
I also tried running it in the <code>kube-system</code> namespace, where it kept failing, while it ran successfully in the <code>default</code> namespace.</p>
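<p>A rough sketch of how to test that, assuming the DaemonSet is named <code>fluentd</code> and its manifest is in <code>fluentd-daemonset.yaml</code> (both placeholder names):</p>
<pre><code># remove the existing daemonset from kube-system
kubectl -n kube-system delete daemonset fluentd

# change metadata.namespace in the manifest to "default" (or drop it), then re-apply
kubectl -n default apply -f fluentd-daemonset.yaml

# watch the pods come up in the default namespace
kubectl -n default get pods -w
</code></pre>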
|
<p>Regarding OpenEBS iscsi provisioner: How to use <code>ReadOnlyMany</code> with iscsi?</p>
<p>When I apply it to a kubernetes pod, it gives the following error:</p>
<blockquote>
<p> (x12 over ) openebs.io/provisioner-iscsi
openebs-provisioner-5569654c96-4rlsn
760ae66d-9ebc-11e8-97d4-823996605407 Failed to provision volume with
StorageClass "openebs-standard": Invalid Access Modes: [ReadOnlyMany],
Supported Access Modes: [ReadWriteOnce]</p>
</blockquote>
<p>How to fix that?</p>
| <p>Short answer: no. Most volume types in Kubernetes only support <code>ReadWriteOnce</code>.</p>
<p>Use cases for <code>ReadOnlyMany</code> are limited. If you are trying to share things among pods, you can take a look at <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMaps</a> or <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secrets</a>.</p>
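<p>As a rough sketch of the ConfigMap route (all names here are made up for illustration), you create the ConfigMap once and then mount it read-only into as many pods as you like:</p>
<pre><code># create a ConfigMap from a local file
kubectl create configmap shared-config --from-file=settings.ini

# mount it read-only in a pod spec (applied inline here just for illustration)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: config-reader
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/shared
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: shared-config
EOF
</code></pre>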
|
<p>I have deployed two services to a Kubernetes Cluster on GCP:</p>
<p>One is a Spring Cloud Api Gateway implementation:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  ports:
  - name: main
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: api-gateway
    tier: web
  type: NodePort
</code></pre>
<p>The other one is a backend chat service implementation which exposes a WebSocket at <code>/ws/</code> path.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: chat-api
spec:
  ports:
  - name: main
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: chat
    tier: web
  type: NodePort
</code></pre>
<p>The API Gateway is exposed to internet through a <a href="https://github.com/heptio/contour" rel="nofollow noreferrer">Contour Ingress Controller</a>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-gateway-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - secretName: api-gateway-tls
    hosts:
    - api.mydomain.com.br
  rules:
  - host: api.mydomain.com.br
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
</code></pre>
<p>The gateway routes incoming calls to <code>/chat/</code> path to the chat service on <code>/ws/</code>:</p>
<pre><code>@Bean
public RouteLocator routes(RouteLocatorBuilder builder) {
    return builder.routes()
            .route(r -> r.path("/chat/**")
                    .filters(f -> f.rewritePath("/chat/(?<segment>.*)", "/ws/(?<segment>.*)"))
                    .uri("ws://chat-api"))
            .build();
}
</code></pre>
<p>When I try to connect to the WebSocket through the gateway I get a 403 error:</p>
<p><code>error: Unexpected server response: 403</code></p>
<p>I even tried to connect using http, https, ws and wss but the error remains.</p>
<p>Anyone has a clue?</p>
| <p>I had the same issue using the Ingress resource with Contour 0.5.0, but I managed to solve it by
upgrading Contour to v0.6.0-beta.3 and using IngressRoute (be aware, though, that it's a beta version).</p>
<p>You can add an IngressRoute resource (a CRD) like this (and remove your previous Ingress resource): </p>
<pre><code>#ingressroute.yaml
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: api-gateway-ingress
  namespace: default
spec:
  virtualhost:
    fqdn: api.mydomain.com.br
    tls:
      secretName: api-gateway-tls
  routes:
    - match: /
      services:
        - name: api-gateway
          port: 80
    - match: /chat
      enableWebsockets: true # Setting this to true enables websocket for all paths that match /chat
      services:
        - name: api-gateway
          port: 80
</code></pre>
<p>Then apply it</p>
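<p>For example (assuming the file name used above):</p>
<pre><code>kubectl apply -f ingressroute.yaml
kubectl -n default get ingressroute api-gateway-ingress
</code></pre>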
<p>Websockets will be authorized only on the <code>/chat</code> path.</p>
<p>See <a href="https://github.com/heptio/contour/blob/master/docs/ingressroute.md#websocket-support" rel="nofollow noreferrer">here</a> for more detail about Contour IngressRoute.</p>
|
<p>I have an issue: I want to use an API Gateway client certificate with my ingress config.</p>
<ol>
<li>I've generated the certificate on AWS.</li>
<li><p>I've created the secret for this certificate:</p>
<pre><code>kubectl create secret generic api --from-file=api-gateway-client-certificate.crt \
  --namespace develop
</code></pre></li>
<li><p>I've added the configuration on my ingress file:</p>
<pre><code>annotations:
  nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
  nginx.ingress.kubernetes.io/auth-tls-secret: "default/api"
  nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
</code></pre></li>
</ol>
<p>Finally I don't know why I get this error on the ingress-controller:</p>
<blockquote>
<p>Error obtaining X.509 certificate: Secret "develop/api" contains no keypair or CA certificate</p>
</blockquote>
<p>I use Kubernetes v1.11.1 and nginx-ingress-controller v0.17.1</p>
| <p>So you are missing the key and/or the CA for your cert. Did you use a private CA in AWS? The regular certificate manager doesn't give you a key file because it creates the CSR under the hood.</p>
<p>Generally, you'd create your tls secret like this:</p>
<pre><code>kubectl -n kube-system create secret tls my-tls-cert --key=tls.key --cert=tls.crt
</code></pre>
<p>Also, I would append the CA certificate that begins with <code>-----BEGIN CERTIFICATE-----</code> to the content of <code>api-gateway-client-certificate.crt</code>.</p>
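<p>For the <code>auth-tls-secret</code> annotation specifically, the nginx ingress controller expects the secret to contain a <code>ca.crt</code> key holding the CA chain it should verify clients against. A sketch of recreating the secret that way (assuming <code>api-gateway-client-certificate.crt</code> is that CA bundle):</p>
<pre><code>kubectl -n default create secret generic api \
  --from-file=ca.crt=api-gateway-client-certificate.crt
</code></pre>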
|
<p>I'm trying to get a certificate signed by the Kubernetes CA (1.11) by submitting the following:</p>
<pre><code>apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: openunison.openunison.svc.cluster.local
spec:
groups:
- system:authenticated
request: LS0tLS1CRUdJTiBORVcgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCk1JSURCakNDQWU0Q0FRQXdnWkF4Q3pBSkJnTlZCQVlUQW5Wek1SRXdEd1lEVlFRSUV3aDJhWEpuYVc1cFlURVQKTUJFR0ExVUVCeE1LWVd4bGVHRnVaSEpwWVRFWk1CY0dBMVVFQ2hNUWRISmxiVzlzYnlCelpXTjFjbWwwZVRFTQpNQW9HQTFVRUN4TURhemh6TVRBd0xnWURWUVFERXlkdmNHVnVkVzVwYzI5dUxtOXdaVzUxYm1semIyNHVjM1pqCkxtTnNkWE4wWlhJdWJHOWpZV3d3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ3gKRnpBR2tBWlYrZWxxem1aK3RxUW1xTEsxV3kvRFRXU0FZT3N2Mk9SaDFyVEx4eTZ6NVRwVW9kNzBjYmhCQlowbgptMDMzd0VkWW1QODFHRVM1YlYyQkpQa2FiN1EySmltQXFuU1MrcHYvSmVjTnVUcGlUb05xVUlGeHhUcXdlWHo3CkgxUVBPY25LZ251M0piempKUXZBbWZoUXZaNjdHRXRGanl3QXE5MS9TUFBHdVVlUFBOb09kU1J0MHlJdFJSV1cKV0N4THhLRW4zUU5jc1hqZWtJUy9aMXdTdERuVyttQi9LZERWbmlZUzlYRlV1T3BTcEl4ZkhHNmFkdTdZaUNLZgptQWZqSE1jdmlOQlN3M3ZBOGQ4c21yVnZveHhkelpzMGFXRlpZai9mQ0IycVVRb2FXQi85TmU1SStEb3JBbXJXCm42OGtoY1MwbkxsWGFIQmhLZjM1QWdNQkFBR2dNREF1QmdrcWhraUc5dzBCQ1E0eElUQWZNQjBHQTFVZERnUVcKQkJTUExoa2V5eUkrQmttSXEzdmxpalA4MHI1RXVUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFpMndVUjA4RgpjL3VVanovWHVvd29vQ1M3c2tndlpSZDVhVTFxdzU2MzdmOGVJSmM2S0huNGNZZUw3YTZ5M3M0QmJnYVVIOVpVCm5Sb3N1V1R2WEJNTUxxLzJBSEx4VVhsTGNhZW03cE1EbXEzbGkxNEkvWTdQWUlxSFQxNEc2UnlkQUUvc2R6MHUKd1RNL0k3eHJ0bFZNTzliNXpuWnlxVkpTY0xhYnRDTXMwa3dwQlpVM2dTZThhWW8zK3A3d2pVeVpuZmFoNllhNAovcXZVd3kzNGdianZSTWc2NmI3UTl2dERmU0RtUWFyVVh0QVJEd052T1lnNmpIMkpwYmUvNUdqcHhaUTRYYW93CnZodGJyY2NTL2RCbFZwWlQxd0k2Um85WFl2OEliMm1icWhFMjBNWGJuVWUrYS9uUkdPVndMaVRQMGNnSk92eDIKdzRZWmtxSUhVQWZad0E9PQotLS0tLUVORCBORVcgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
usages:
- digital signature
- key encipherment
- server auth
</code></pre>
<p>The response complains that it's not PEM - <code>The CertificateSigningRequest "openunison.openunison.svc.cluster.local" is invalid: spec.request: Invalid value: []byte{0x2d,...}: PEM block type must be CERTIFICATE REQUEST</code>, however, the CSR is a valid CSR:</p>
<pre><code>echo 'LS0tLS1CRUdJTiBORVcgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCk1JSURCakNDQWU0Q0FRQXdnWkF4Q3pBSkJnTlZCQVlUQW5Wek1SRXdEd1lEVlFRSUV3aDJhWEpuYVc1cFlURVQKTUJFR0ExVUVCeE1LWVd4bGVHRnVaSEpwWVRFWk1CY0dBMVVFQ2hNUWRISmxiVzlzYnlCelpXTjFjbWwwZVRFTQpNQW9HQTFVRUN4TURhemh6TVRBd0xnWURWUVFERXlkdmNHVnVkVzVwYzI5dUxtOXdaVzUxYm1semIyNHVjM1pqCkxtTnNkWE4wWlhJdWJHOWpZV3d3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ3gKRnpBR2tBWlYrZWxxem1aK3RxUW1xTEsxV3kvRFRXU0FZT3N2Mk9SaDFyVEx4eTZ6NVRwVW9kNzBjYmhCQlowbgptMDMzd0VkWW1QODFHRVM1YlYyQkpQa2FiN1EySmltQXFuU1MrcHYvSmVjTnVUcGlUb05xVUlGeHhUcXdlWHo3CkgxUVBPY25LZ251M0piempKUXZBbWZoUXZaNjdHRXRGanl3QXE5MS9TUFBHdVVlUFBOb09kU1J0MHlJdFJSV1cKV0N4THhLRW4zUU5jc1hqZWtJUy9aMXdTdERuVyttQi9LZERWbmlZUzlYRlV1T3BTcEl4ZkhHNmFkdTdZaUNLZgptQWZqSE1jdmlOQlN3M3ZBOGQ4c21yVnZveHhkelpzMGFXRlpZai9mQ0IycVVRb2FXQi85TmU1SStEb3JBbXJXCm42OGtoY1MwbkxsWGFIQmhLZjM1QWdNQkFBR2dNREF1QmdrcWhraUc5dzBCQ1E0eElUQWZNQjBHQTFVZERnUVcKQkJTUExoa2V5eUkrQmttSXEzdmxpalA4MHI1RXVUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFpMndVUjA4RgpjL3VVanovWHVvd29vQ1M3c2tndlpSZDVhVTFxdzU2MzdmOGVJSmM2S0huNGNZZUw3YTZ5M3M0QmJnYVVIOVpVCm5Sb3N1V1R2WEJNTUxxLzJBSEx4VVhsTGNhZW03cE1EbXEzbGkxNEkvWTdQWUlxSFQxNEc2UnlkQUUvc2R6MHUKd1RNL0k3eHJ0bFZNTzliNXpuWnlxVkpTY0xhYnRDTXMwa3dwQlpVM2dTZThhWW8zK3A3d2pVeVpuZmFoNllhNAovcXZVd3kzNGdianZSTWc2NmI3UTl2dERmU0RtUWFyVVh0QVJEd052T1lnNmpIMkpwYmUvNUdqcHhaUTRYYW93CnZodGJyY2NTL2RCbFZwWlQxd0k2Um85WFl2OEliMm1icWhFMjBNWGJuVWUrYS9uUkdPVndMaVRQMGNnSk92eDIKdzRZWmtxSUhVQWZad0E9PQotLS0tLUVORCBORVcgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==' | base64 -d | openssl req -noout -text
Certificate Request:
Data:
Version: 1 (0x0)
Subject: C = us, ST = virginia, L = alexandria, O = tremolo security, OU = k8s, CN = openunison.openunison.svc.cluster.local
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
</code></pre>
<p>What am I missing?</p>
| <p>You are submitting it correctly, but the Kubernetes certificate manager doesn't like your CSR's PEM header and footer:</p>
<ul>
<li><code>-----BEGIN NEW CERTIFICATE REQUEST-----</code> and</li>
<li><code>-----END NEW CERTIFICATE REQUEST-----</code></li>
</ul>
<p>However, it does like:</p>
<ul>
<li><code>-----BEGIN CERTIFICATE REQUEST-----</code> and</li>
<li><code>-----END CERTIFICATE REQUEST-----</code></li>
</ul>
<p>You can modify those two lines and it should work (I tried it myself).</p>
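<p>A quick way to do that rewrite and re-encode the request (the file name is just a placeholder):</p>
<pre><code>sed -i 's/NEW CERTIFICATE REQUEST/CERTIFICATE REQUEST/g' my.csr
base64 my.csr | tr -d '\n'   # paste the result into spec.request
</code></pre>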
<p>Opened <a href="https://github.com/kubernetes/kubernetes/issues/68685" rel="nofollow noreferrer">this</a> to address the problem.</p>
|
<p>I accidentally tried to delete all PVs in the cluster, but thankfully they still have PVCs bound to them, so all the PVs are stuck in Status: Terminating.</p>
<p>How can I get the PVs out of the "terminating" status and back to a healthy state where they are "bound" to their PVCs and fully working?</p>
<p>The key here is that I don't want to lose any data, and I want to make sure the volumes are functional and not at risk of being terminated if the claim goes away.</p>
<p>Here are some details from a <code>kubectl describe</code> on the PV.</p>
<pre><code>$ kubectl describe pv persistent-vol-1
Finalizers: [kubernetes.io/pv-protection foregroundDeletion]
Status: Terminating (lasts 1h)
Claim: ns/application
Reclaim Policy: Delete
</code></pre>
<p>Here is the describe on the claim.</p>
<pre><code>$ kubectl describe pvc application
Name: application
Namespace: ns
StorageClass: standard
Status: Bound
Volume: persistent-vol-1
</code></pre>
| <p><strong>It is, in fact, possible to save data from your <code>PersistentVolume</code> with <code>Status: Terminating</code> and <code>RetainPolicy</code> set to default (delete).</strong> We have done so on GKE; I'm not sure about AWS or Azure, but I guess they are similar.</p>
<p>We had the same problem, and I will post our solution here in case somebody else has an issue like this.</p>
<p>Your <code>PersistentVolumes</code> will not be terminated while there is a pod, deployment or, to be more specific, a <code>PersistentVolumeClaim</code> using them.</p>
<p>The steps we took to remedy our broken state:</p>
<p>Once you are in a situation like the OP's, the first thing you want to do is create a snapshot of your <code>PersistentVolumes</code>.</p>
<p>In GKE console, go to <code>Compute Engine -> Disks</code> and find your volume there (use <code>kubectl get pv | grep pvc-name</code>) and create a snapshot of your volume.</p>
<p>Use the snapshot to create a disk: <code>gcloud compute disks create name-of-disk --size=10 --source-snapshot=name-of-snapshot --type=pd-standard --zone=your-zone</code></p>
<p><strong>At this point, stop the services using the volume and delete the volume and volume claim.</strong></p>
<p>Recreate the volume manually with the data from the disk:</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: name-of-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  gcePersistentDisk:
    fsType: ext4
    pdName: name-of-disk
  persistentVolumeReclaimPolicy: Retain
</code></pre>
<p>Now just update your volume claim to target a specific volume, the last line of the yaml file:</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: my-namespace
  labels:
    app: my-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: name-of-pv
</code></pre>
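<p>A short sketch of applying and verifying the result (file names are placeholders):</p>
<pre><code>kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml

# the PV should now report Bound to my-namespace/my-pvc
kubectl get pv name-of-pv
kubectl -n my-namespace get pvc my-pvc
</code></pre>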
|
<p>Currently, I have an application consisting of a backend, frontend, and database. The Postgres database has a table with around 60 million rows.
This table has a foreign key to another table: <code>categories</code>. So, if I want to count (I know it's one of the slowest operations in a DB) every row from a specific category, on my current setup this results in a 5-minute query. Currently, the DB, backend, and frontend are just running on a VM.</p>
<p>I've now containerized the backend and the frontend and I want to spin them up in Google Kubernetes Engine.</p>
<p>So my question, will the performance of my queries go up if you also use a container DB and let Kubernetes do some load balancing work, or should I use Google's Cloud SQL? Does anyone have some experience in this?</p>
| <blockquote>
<p>will the performance of my queries go up if you also use a container DB</p>
</blockquote>
<p>Raw performance will only go up if the capacity of the Kubernetes nodes is larger than that of your current node. If you use the same machine as a Kubernetes node, it will not go up. You won't get performance benefits from containers in this case, other than that updating your DB software might be a bit easier if you run it in Kubernetes. There are many other factors in play here, including what disk you use for your storage (SSD, magnetic, clustered filesystem?).</p>
<p>If your goal is to maximize resource usage in your cluster, by making use of that capacity when, say, not many queries are being sent to your database, then Kubernetes/containers might be a good choice. (But that's not what the original question asks.)</p>
<blockquote>
<p>should I use Google's Cloud SQL</p>
</blockquote>
<p>The only reason I would use Cloud SQL is if you want to offload managing your SQL DB. Other than that, you'll get similar performance numbers to running it on a GCE instance of the same size.</p>
|
<p>Trying to move my development environment to run on minikube.</p>
<p>The page loads, but my page uses websockets on the same port/protocol that the index.html is loaded with (https in this case), and the websockets do not seem to be working correctly.</p>
<p>Here is an example of the correct output when run through nginx / python on my local development box.</p>
<pre><code>127.0.0.1 - - [14/Sep/2018 14:14:35] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:35] "GET /static/jquery.min.js HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:35] "GET /static/socket.io.min.js HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:35] "GET /socket.io/?EIO=3&transport=polling&t=MNPIg-N HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:35] "GET /favicon.ico HTTP/1.0" 404 -
127.0.0.1 - - [14/Sep/2018 14:14:35] "GET /favicon.ico HTTP/1.0" 404 -
127.0.0.1 - - [14/Sep/2018 14:14:35] "POST /socket.io/?EIO=3&transport=polling&t=MNPIg-o&sid=0570b4fe27f345e9b11858b3acb40a6e HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:35] "GET /socket.io/?EIO=3&transport=polling&t=MNPIg-r&sid=0570b4fe27f345e9b11858b3acb40a6e HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:35] "POST /socket.io/?EIO=3&transport=polling&t=MNPIg_x&sid=0570b4fe27f345e9b11858b3acb40a6e HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:35] "GET /socket.io/?EIO=3&transport=polling&t=MNPIg_w&sid=0570b4fe27f345e9b11858b3acb40a6e HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:40] "GET /socket.io/?EIO=3&transport=polling&t=MNPIh0L&sid=0570b4fe27f345e9b11858b3acb40a6e HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:45] "GET /socket.io/?EIO=3&transport=polling&t=MNPIiE3&sid=0570b4fe27f345e9b11858b3acb40a6e HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:50] "GET /socket.io/?EIO=3&transport=polling&t=MNPIjSI&sid=0570b4fe27f345e9b11858b3acb40a6e HTTP/1.0" 200 -
127.0.0.1 - - [14/Sep/2018 14:14:55] "GET /socket.io/?EIO=3&transport=polling&t=MNPIkgS&sid=0570b4fe27f345e9b11858b3acb40a6e HTTP/1.0" 200 -
</code></pre>
<p>Notice how there is a GET every 5 seconds (that's a timer running on the page)</p>
<p>When running on Kubernetes, the page loads and the timer shows up as if the websocket has worked; however, I see no logs where the websocket issues a GET or POST after the initial one.</p>
<pre><code>192.168.99.1,172.17.0.7 - - [14/Sep/2018 18:24:03] "GET /static/jquery.min.js HTTP/1.1" 304 1210 0.008244
192.168.99.1,172.17.0.7 - - [14/Sep/2018 18:24:03] "GET /static/socket.io.min.js HTTP/1.1" 304 1210 0.009271
(10) accepted ('172.17.0.7', 34444)
192.168.99.1,172.17.0.7 - - [14/Sep/2018 18:24:04] "GET /socket.io/?EIO=3&transport=polling&t=MNPKrsy HTTP/1.1" 200 379 0.003682
(10) accepted ('172.17.0.7', 34446)
192.168.99.1,172.17.0.7 - - [14/Sep/2018 18:24:04] "GET /favicon.ico HTTP/1.1" 404 1314 0.004694
(10) accepted ('172.17.0.7', 34448)
(10) accepted ('172.17.0.7', 34450)
(10) accepted ('172.17.0.7', 34452)
192.168.99.1,172.17.0.7 - - [14/Sep/2018 18:24:04] "GET /socket.io/?EIO=3&transport=polling&t=MNPKrtD&sid=77d4755c524f47c2948b9c36da007b85 HTTP/1.1" 200 210 0.000749
192.168.99.1,172.17.0.7 - - [14/Sep/2018 18:24:04] "POST /socket.io/?EIO=3&transport=polling&t=MNPKrtB&sid=77d4755c524f47c2948b9c36da007b85 HTTP/1.1" 200 194 0.002632
(10) accepted ('172.17.0.7', 34454)
192.168.99.1,172.17.0.7 - - [14/Sep/2018 18:24:04] "GET /favicon.ico HTTP/1.1" 404 1314 0.002388
</code></pre>
<p>The Ingress is setup as follows:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: websitev2-cluster-ip-service
          servicePort: 8080
</code></pre>
<p>As mentioned before, the websocket does not reside on a different port, and it is instantiated in javascript as:</p>
<pre><code>namespace = '/socket';
var socket = io.connect(location.protocol + '//' + document.domain + ':' + location.port + namespace);
</code></pre>
<p>Are there any special requirements to get websockets to work? I do not believe I need a special route because the URI needs to be sent to the same location as everything else, and on the same port.</p>
<p>EDIT: MORE DETAILS</p>
<p>My website has a form that, when submitted, executes the following code:</p>
<pre><code>$('form#job').submit(function(event) {
    var nameValue = JSON.stringify($(event.target).serializeArray());
    console.log(nameValue);
    socket.emit('job_submit', {data: nameValue});
    return false;
});
</code></pre>
<p>On the python side, I have my socket code, which should get hit once a user clicks the submit button.</p>
<pre><code>@socketio.on('job_submit', namespace='/socket')
def job_submit(message):
    print('recieved job_submit message from client')
    # kick off subjob in celery task
    data = unstringify(message)
    print('data = {0}'.format(data))
    sessiondata = dict(session)
    print('sessiondata = {}'.format(sessiondata))
    subjobstart.delay(sessiondata, request.sid, data)
</code></pre>
<p>In my logs I am not receiving any "recieved job_submit message from client" messages when the submit button is clicked, which means the javascript is trying to send a websocket emit to the python server, but the message is never getting there.</p>
<p>Emits from python to the client page are working as the time is getting updated on the site.</p>
| <p>So, I'm not exactly sure what changed, as it worked on my test box; apparently the issue lay in the form validation I was doing. Once I put in a <code>novalidate</code> option to temporarily bypass the form checking, the form could be submitted and I did not have any websocket issues after all.</p>
<p>Hopefully this post, with the code included, can help someone.</p>
|
<p>What is the best practice for backing up a Postgres database running on <a href="https://cloud.google.com/container-engine/" rel="noreferrer">Google Cloud Container Engine</a>?</p>
<p>My thought is working towards storing the backups in <a href="https://cloud.google.com/storage/" rel="noreferrer">Google Cloud Storage</a>, but I am unsure of how to connect the Disk/Pod to a Storage Bucket.</p>
<p>I am running Postgres in a Kubernetes cluster using the following configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: postgres:9.6.2-alpine
          imagePullPolicy: IfNotPresent
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data
            - name: POSTGRES_DB
              value: my-database-name
            - name: POSTGRES_PASSWORD
              value: my-password
            - name: POSTGRES_USER
              value: my-database-user
          name: postgres-container
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: my-postgres-volume
      volumes:
        - gcePersistentDisk:
            fsType: ext4
            pdName: my-postgres-disk
          name: my-postgres-volume
</code></pre>
<p>I have attempted to create a <a href="https://kubernetes.io/docs/user-guide/jobs/" rel="noreferrer">Job</a> to run a backup:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: postgres-dump-job
spec:
  template:
    metadata:
      labels:
        app: postgres-dump
    spec:
      containers:
        - command:
            - pg_dump
            - my-database-name
          # `env` value matches `env` from previous configuration.
          image: postgres:9.6.2-alpine
          imagePullPolicy: IfNotPresent
          name: my-postgres-dump-container
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: my-postgres-volume
              readOnly: true
      restartPolicy: Never
      volumes:
        - gcePersistentDisk:
            fsType: ext4
            pdName: my-postgres-disk
          name: my-postgres-volume
</code></pre>
<p>(As far as I understand) this should run the <a href="https://www.postgresql.org/docs/current/static/backup-dump.html" rel="noreferrer"><code>pg_dump</code></a> command and output the backup data to stdout (which should appear in the <code>kubectl logs</code>).</p>
<p>As an aside, when I inspect the Pods (with <code>kubectl get pods</code>), it shows the Pod never gets out of the "Pending" state, which I gather is due to there not being enough resources to start the Job.</p>
<p>Is it correct to run this process as a Job?
How do I connect the Job to Google Cloud Storage?
Or should I be doing something completely different?</p>
<p>I'm guessing it would be unwise to run <code>pg_dump</code> in the database Container (with <code>kubectl exec</code>) due to a performance hit, but maybe this is ok in a dev/staging server?</p>
| <p>As @Marco Lamina said, you can run <code>pg_dump</code> on the postgres pod like this:</p>
<pre><code># DUMP
# pod-name       name of the postgres pod
# postgres-user  database user that is able to access the database
# database-name  name of the database
kubectl exec [pod-name] -- bash -c "pg_dump -U [postgres-user] [database-name]" > database.sql

# RESTORE
# pod-name       name of the postgres pod
# postgres-user  database user that is able to access the database
# database-name  name of the database
cat database.sql | kubectl exec -i [pod-name] -- psql -U [postgres-user] -d [database-name]
</code></pre>
<p>You can have a Job pod that runs this command and exports the dump to a file storage system such as AWS S3.</p>
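<p>Since the question mentions Google Cloud Storage, a hedged sketch of piping the dump straight into a bucket (assuming an authenticated <code>gsutil</code> and placeholder names throughout):</p>
<pre><code>kubectl exec [pod-name] -- bash -c "pg_dump -U [postgres-user] [database-name]" \
  | gzip \
  | gsutil cp - gs://my-backup-bucket/backup-$(date +%F).sql.gz
</code></pre>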
|
<p>I have spent the whole day looking for an answer to why my Node.js app is not reachable from the internet when I run it on Kubernetes with <code>LoadBalancer</code> as the service type. The solution was to change the host from <code>localhost</code> to <code>0.0.0.0</code> in the Node.js server app definition, but I still don't know why.</p>
<p>I hope this saves someone's time in the future.</p>
| <p>When you say that you set the host, I guess you mean the address the app is told to listen on in its listen function, as in <a href="https://stackoverflow.com/questions/33953447/express-app-server-listen-all-intefaces-instead-of-localhost-only">express app server . listen all intefaces instead of localhost only</a>. You can bind to a particular address or leave it open to all by using the 0.0.0.0 mask (the zeros function as a mask for matching rather than a true IP address - <a href="https://stackoverflow.com/a/20778887/9705485">https://stackoverflow.com/a/20778887/9705485</a>).</p>
<p>I imagine you had your app running fine locally and were able to access it from your host machine with your localhost configuration. This would be because your local machine was accessing it in a way that conforms to the mask; your config was effectively saying that only localhost can access the app. When you ported it to Kubernetes, traffic had to come in over the network, so the incoming connection arrived from an address that did not match the localhost mask. To get it to work you changed the mask to make it open to all IPs.</p>
|
<p>I have a single service running on a NodePort service. How do I use an ingress to access multiple services?</p>
<h3>deployment.yml</h3>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
        tier: backend
        track: dev
    spec:
      containers:
        - name: auth
          image: [url]/auth_app:v2
          ports:
            - name: auth
              containerPort: 3000
</code></pre>
<h2>service.yml</h2>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  selector:
    app: auth
    tier: backend
  ports:
    - protocol: TCP
      port: 3000
      targetPort: auth
  type: NodePort
</code></pre>
<h2>ingress.yml</h2>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: auth
    servicePort: 8080
</code></pre>
<p>I followed this <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/static-ip/" rel="nofollow noreferrer">repo</a> step by step but could not get it working with my port config.
I'm a beginner and would appreciate some resources on this as well.</p>
| <p>Your service is running on port 3000, but your Ingress routing rule is matching port 8080. It will probably work if you just change the <code>servicePort</code> to 3000 in the backend section of your Ingress resource definition, as sketched below. </p>
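<p>A minimal sketch of the corrected Ingress (only <code>servicePort</code> changes relative to your manifest):</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: auth
    servicePort: 3000
EOF
</code></pre>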
<p>I'd suggest making sure it works with NodePort first before trying Ingress. I suggest this because I notice your Service only specifies values for <code>port</code> and <code>targetPort</code> but not <code>nodePort</code>. If you do not specify a <code>nodePort</code> value, you will get a random port number. As you want to use ingress with the NodePort service type, the random port number should not matter. </p>
<p>For NodePort tutorials you could start with <a href="https://medium.com/@markgituma/kubernetes-local-to-production-with-django-2-docker-and-minikube-ba843d858817" rel="nofollow noreferrer">https://medium.com/@markgituma/kubernetes-local-to-production-with-django-2-docker-and-minikube-ba843d858817</a> as I notice you've tagged your post with django</p>
<p>For nginx ingress you could see <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">https://cloud.google.com/community/tutorials/nginx-ingress-gke</a> but you might want to find something specific to your cloud platform if you're not using gke</p>
<p>It is best to start with one service but to understand how this can work for multiple services you could have a look at the fanout ingress example in the docs <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout</a></p>
|
<p>I have a requirement to set up Kubernetes on-prem and have Windows worker nodes that run .NET 4.5 containers. Now, while I found this <a href="https://onedrive.live.com/view.aspx?resid=E2B6765015E5FA01!339&ithint=file%2Cdocx&app=Word&authkey=!AGvs_s_hWs7xHGs" rel="nofollow noreferrer">link</a>, I don't particularly like the idea of upgrading the control plane and rotating needed certificates manually.</p>
<p>Has anyone tried to use <code>kubespray</code> to bootstrap a Kubernetes cluster and manually add a Windows worker? Or can you share any insight into setting this up? </p>
<p>Thanks for sharing. </p>
| <p>This is an opinion question so I'll answer in an opinionated way.</p>
<p>So kubespray will give you more automation and it actually uses <code>kubeadm</code> to create the control plane and cluster components including your network overlay.</p>
<p>It also provides you with capabilities for <a href="https://github.com/kubernetes-incubator/kubespray/blob/master/docs/upgrades.md" rel="nofollow noreferrer">upgrades</a>.</p>
<p>Certificate rotation is an option on your kubelet and <code>kubespray</code> also supports it. </p>
<p>The downside of using kubespray is that you may not know how all the Kubernetes components work, but if you want something more fully automated and you like Ansible, it's a great choice.</p>
<p>Also the latest kubeadm supports certificate rotation on all your Kubernetes components as per this <a href="https://github.com/kubernetes/kubernetes/pull/67910" rel="nofollow noreferrer">PR</a></p>
|
<p>How do I enable a port on Google Kubernetes Engine to accept websocket connections? Is there a way of doing so other than using an ingress controller? </p>
| <p>Web sockets are supported by Google's global load balancer, so you can use a k8s <code>Service</code> of type <code>LoadBalancer</code> to expose such a service beyond your cluster.</p>
<p>Do be aware that load balancers created and managed outside Kubernetes in this way have a default connection timeout of 30 seconds, which interferes with web socket operation and causes connections to be closed frequently. This makes web sockets almost unusable in practice.</p>
<p>Until <a href="https://github.com/kubernetes/ingress-gce/issues/28" rel="nofollow noreferrer">this issue</a> is resolved, you will either need to modify this timeout parameter manually, or (recommended) consider using an in-cluster ingress controller (e.g. nginx) which affords you more control.</p>
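<p>If the load balancer in question is the HTTP(S) load balancer that GKE provisions for an Ingress, the timeout to raise lives on its backend service; a sketch of doing that manually (the backend service name is something you would look up first):</p>
<pre><code># find the backend service created for the load balancer
gcloud compute backend-services list

# raise the connection timeout, e.g. to one hour
gcloud compute backend-services update [BACKEND_SERVICE_NAME] --global --timeout=3600
</code></pre>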
|
<p><strong>cat /etc/redhat-release:</strong></p>
<pre><code>CentOS Linux release 7.2.1511 (Core)
</code></pre>
<p><strong>docker version:</strong></p>
<pre><code>Client:
Version: 1.13.1
API version: 1.26
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Experimental: false
</code></pre>
<p><strong>kubectl version:</strong></p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean", BuildDate:"2018-03-19T15:59:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>gitlab version:</strong> 10.6-ce</p>
<p><strong>gitlab runner image:</strong> gitlab/gitlab-runner:alpine-v10.3.0</p>
<p>I just integrated a kubernetes cluster (not GKE, just a k8s cluster deployed by myself) into a gitlab project, and then installed a gitlab-runner on it.</p>
<p>All of this followed <a href="http://gitlab.xinpinget.com/help/user/project/clusters/index.md#adding-an-existing-kubernetes-cluster" rel="nofollow noreferrer">Adding an existing Kubernetes cluster</a>.</p>
<p>After that, I added a <code>.gitlab-ci.yml</code> with a single stage and pushed it to the repo. Here are the contents:</p>
<pre><code>build-img:
  stage: docker-build
  script:
    # - docker build -t $CONTAINER_RELEASE_IMAGE .
    # - docker tag $CONTAINER_RELEASE_IMAGE $CONTAINER_LATEST_IMAGE
    # - docker push $CONTAINER_IMAGE
    - env | grep KUBE
    - kubectl --help
  tags:
    - kubernetes
  only:
    - develop
</code></pre>
<p>Then I got this:</p>
<pre><code>$ env | grep KUBE
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
$ kubectl --help
/bin/bash: line 62: kubectl: command not found
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
</code></pre>
<p><code>kubectl</code> is not installed in the runner, and some env vars like <code>KUBE_TOKEN</code>, <code>KUBE_CA_PEM_FILE</code> or <code>KUBECONFIG</code> are not found either (see <a href="https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables" rel="nofollow noreferrer">Deployment variables</a>).</p>
<p>Searched the official docs of gitlab, got nothing.</p>
<p>So, how could I deploy a project via this runner?</p>
| <p>The gitlab-runner has no built-in commands; it spins up a container with a predefined image and then remotely executes the commands from your <code>script</code> in that container.</p>
<p>You have not defined an image, so the default image will be used, as defined in the setup of the gitlab-runner.</p>
<p>So, you could <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl" rel="nofollow noreferrer">install the kubectl binary using curl</a> before you use it in your <code>script:</code> or <code>before_script:</code> section:</p>
<pre><code>build-img:
stage: docker-build
before_script:
- curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
- chmod +x ./kubectl
script:
- ...
    - ./kubectl version --client
</code></pre>
<p>Or create a separate deployment stage with an image that has <code>kubectl</code>, e.g. <code>roffe/kubectl</code>:</p>
<pre><code>stages:
- docker-build
- deploy
build-img:
stage: docker-build
script:
- docker build -t $CONTAINER_RELEASE_IMAGE .
- docker tag $CONTAINER_RELEASE_IMAGE $CONTAINER_LATEST_IMAGE
- docker push $CONTAINER_IMAGE
tags:
- kubernetes
deploy:dev:
stage: deploy
image: roffe/kubectl
script:
- kubectl .....
tags:
- kubernetes
</code></pre>
|
<p>I am trying to understand the relationship between Kubernetes and OpenStack. I am confused about the topic of deploying Kubernetes on OpenStack, and while doing my research I found there are too many tutorials. My understanding of the sequence is: </p>
<ol>
<li>Start several <code>nova</code> instances on OpenStack.</li>
<li>Install Kubernetes master on one instance and install Kubernetes node on other instances.</li>
<li>Submit YAML file using <code>kubectl</code> and Kubernetes will create and deploy my application.</li>
</ol>
<p>As for Kubernetes's self-healing capacity, can Kubernetes restart some of the failed <code>nova</code> instances? Which component in Kubernetes is responsible for restart/reboot/delete/re-provision <code>nova</code> instances? Is it Kubernetes master? If so, what will happen if the Kubernetes master is down and cannot be recovered?</p>
| <p>1, 2 and 3 are correct.</p>
<blockquote>
<p>Self-healing</p>
</blockquote>
<p>You can deploy the masters in an HA configuration. The recommended setup is either 3 or 5 masters, with a quorum of <code>(n + 1) / 2</code>.</p>
<blockquote>
<p>Can Kubernetes reprovision/restart some of the failed nova instances?</p>
</blockquote>
<p>Not really. That's up to Nova, which manages the server instances. Kubernetes has an <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/openstack" rel="nofollow noreferrer">OpenStack module</a> that allows it to interact with OpenStack components, e.g. to create external load balancers and volumes that can be used by your workloads/pods/containers.</p>
<p>You can either use <a href="https://kubernetes.io/docs/setup/independent/" rel="nofollow noreferrer">kubeadm</a> or <a href="https://github.com/kubernetes-incubator/kubespray" rel="nofollow noreferrer">kubespray</a> to bootstrap a cluster.</p>
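<p>For illustration, a minimal kubeadm-based bootstrap on the nova instances could look roughly like this (the IP, token and hash are placeholders, and the pod CIDR depends on the network plugin you choose):</p>
<pre><code># on the master instance
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# on each worker instance, using the join command printed by kubeadm init
sudo kubeadm join 10.0.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
</code></pre>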
<p>Hope it helps.</p>
|
<p>I'm digging into Kubernetes resource restrictions and have a hard time understanding what CPU <code>limits</code> are for. I know Kubernetes passes <code>requests</code> and <code>limits</code> down to the (in my case) Docker runtime.</p>
<p><strong>Example</strong>: I have 1 Node with 1 CPU and 2 Pods with CPU <code>requests: 500m</code> and <code>limits: 800m</code>. In Docker, this results in (<code>500m -> 0.5 * 1024 = 512</code>) <code>--cpu-shares=512</code> and (<code>800m -> 800 * 100</code>) <code>--cpu-quota=80000</code>. The pods get allocated by the Kube scheduler because the <code>requests</code> sum does not exceed 100% of the node's capacity; in terms of <code>limits</code> the node is overcommitted.</p>
<p>The above allows each container to get 80ms of CPU time per 100ms period (the default). As soon as CPU usage hits 100%, CPU time is shared between the containers based on their weight, expressed in CPU shares. That would be 50% for each container, given the base value of 1024 and a 512 share for each. At this point - in my understanding - the <code>limits</code> have no more relevance, because neither container can get its 80ms anymore. They both would get 50ms. So no matter what <code>limits</code> I define, when usage reaches the critical 100%, it's partitioned by <code>requests</code> anyway.</p>
<p>This makes me wonder: Why should I define CPU <code>limits</code> in the first place, and does overcommitment make any difference at all? <code>requests</code> on the other hand in terms of "<em>how much share do I get when everything is in use</em>" is completely understandable.</p>
| <p>One reason to set CPU limits is that, if you set CPU request == limit <em>and</em> memory request == limit, your pod is assigned a Quality of Service class = <code>Guaranteed</code>, <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-oom-behavior" rel="nofollow noreferrer">which makes it less likely to be OOMKilled if the node runs out of memory</a>. Here I quote from the Kubernetes doc <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">Configure Quality of Service for Pods</a>:</p>
<blockquote>
<p>For a Pod to be given a QoS class of Guaranteed:</p>
<ul>
<li>Every Container in the Pod must have a memory limit and a memory request, and they must be the same.</li>
<li>Every Container in the Pod must have a CPU limit and a CPU request, and they must be the same.</li>
</ul>
</blockquote>
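<p>As a small sketch (image and numbers are arbitrary), a pod that meets those conditions and therefore gets the <code>Guaranteed</code> class would look like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo
spec:
  containers:
  - name: app
    image: nginx            # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m           # equal to the request -> Guaranteed QoS
        memory: 256Mi
</code></pre>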
<p>Another benefit of using the <code>Guaranteed</code> QoS class is that it allows you to lock exclusive CPUs for the pod, which is critical for certain kinds of low-latency programs. Quote from <a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/" rel="nofollow noreferrer">Control CPU Management Policies</a>:</p>
<blockquote>
<p>The <code>static</code> CPU management policy allows containers in <code>Guaranteed</code> pods with integer CPU <code>requests</code> access to exclusive CPUs on the node. ... Only containers that are both part of a <code>Guaranteed</code> pod and have integer CPU <code>requests</code> are assigned exclusive CPUs.</p>
</blockquote>
|
<p>I have a single service running as a NodePort service. How do I use an Ingress to access multiple services?</p>
<h3>deployment.yml</h3>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
tier: backend
track: dev
spec:
containers:
- name: auth
image: [url]/auth_app:v2
ports:
- name: auth
containerPort: 3000
</code></pre>
<h2>service.yml</h2>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: auth
spec:
selector:
app: auth
tier: backend
ports:
- protocol: TCP
port: 3000
targetPort: auth
type: NodePort
</code></pre>
<h2>ingress.yml</h2>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: auth
servicePort: 8080
</code></pre>
<p>I followed step by step from this <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/static-ip/" rel="nofollow noreferrer">repo</a>. I could not get it working for my port config.
I'm a beginner and would like some resources for the same. </p>
| <p>Try these manifests and remember to deploy an Ingress Controller (I usually use traefik, <a href="https://stackoverflow.com/a/51425301/2718151">here</a> some instructions to set it)</p>
<p><code>service.yml</code>: I changed NodePort to ClusterIP (the default, you can remove the line)</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: auth
spec:
selector:
app: auth
tier: backend
ports:
- protocol: TCP
port: 3000
targetPort: auth
type: ClusterIP
</code></pre>
<p><code>ingress.yml</code>: (I set port to 3000, your service port)</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: auth
servicePort: 3000
</code></pre>
|
<p>How to update k8s certificate:</p>
<p>Some certificates in the k8s cluster are currently expired, prompting: </p>
<blockquote>
<p>Unable to connect to the server: x509: certificate has expired or is
not yet valid. Take a look at the online cluster master.</p>
</blockquote>
<p>The <code>ca.crt</code> and <code>front-proxy-ca.crt</code> are not expired, but the <code>front-proxy-client.crt</code>, <code>apiserver-kubelet-client.crt</code>, and <code>apiserver.crt</code> are expired.</p>
<p>So I manually generated a new <code>apiserver.crt</code> on the master using the existing <code>ca.key</code> (<a href="https://stackoverflow.com/questions/49885636/kubernetes-expired-certificate">refer to here</a>). However, a new error occurred, suggesting: </p>
<blockquote>
<p>the server has asked for the client to provide credentials</p>
</blockquote>
<p>What is the correct way to update the certificates of a k8s cluster?</p>
<p>thanks!</p>
| <p>The latest kubeadm should have support for <a href="https://github.com/kubernetes/kubernetes/pull/67910" rel="nofollow noreferrer">this</a>.</p>
<p>Expected commands:</p>
<pre><code>renew all
renew apiserver
renew apiserver-kubelet-client
renew apiserver-etcd-client
renew front-proxy-client
renew etcd-server
renew etcd-peer
renew etcd-healthcheck-client
</code></pre>
<p>You generally have to renew all the certs listed above; you can also renew them manually using <a href="https://www.openssl.org/" rel="nofollow noreferrer">openssl</a> or <a href="https://github.com/cloudflare/cfssl" rel="nofollow noreferrer">cfssl</a> together with the existing CA cert and key in <code>/etc/kubernetes/pki/</code> (<code>ca.crt</code> and <code>ca.key</code>).</p>
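<p>As a rough illustration only (the real apiserver certificate also needs the correct SANs, so adapt accordingly), re-signing one of the client certificates with the existing kubeadm CA could look like this:</p>
<pre><code># illustration: re-issue apiserver-kubelet-client with the existing CA
openssl genrsa -out apiserver-kubelet-client.key 2048
openssl req -new -key apiserver-kubelet-client.key \
  -subj "/O=system:masters/CN=kube-apiserver-kubelet-client" \
  -out apiserver-kubelet-client.csr
openssl x509 -req -in apiserver-kubelet-client.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -out apiserver-kubelet-client.crt -days 365
</code></pre>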
|
<p>I am new to Kops and fairly new to Kubernetes as well. I managed to create a cluster with Kops and run a deployment and a service on it. Everything went well: an ELB was created for me and I could access the application via this ELB endpoint.</p>
<p>My question is: how can I map my subdomain (e.g. <code>my-sub.example.com</code>) to the generated ELB endpoint? I believe this should somehow be done automatically by Kubernetes, and I should not hardcode the ELB endpoint inside my code. I tried something that has to do with <code>annotation -> DomainName</code>, but it did not work (see the Kubernetes yml file below):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: django-app-service
labels:
role: web
dns: route53
annotations:
domainName: "my.personal-site.de"
spec:
type: LoadBalancer
selector:
app: django-app
ports:
- protocol: TCP
port: 80
targetPort: 8000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: django-app-deployment
spec:
replicas: 2
minReadySeconds: 15
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: django-app
spec:
containers:
- image: fettah/djano_kubernetes_medium:latest
name: django-app
imagePullPolicy: Always
ports:
- containerPort: 8000
</code></pre>
| <p>When you have ELBs in place you can use the external-dns plugin (<a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns</a>), which can attach DNS records to those ELBs using its AWS Route53 integration. You need to give Kubernetes the proper rights so it can create DNS records in Route53: add an additional policy (following the external-dns guide) to the <code>additionalPolicies</code> section of the kops cluster configuration. Then use an annotation like: </p>
<pre><code>external-dns.alpha.kubernetes.io/hostname: myservice.mydomain.com.
</code></pre>
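<p>Applied to the Service from your question, that could look roughly like this (the hostname is just an example and must live in a Route53 zone that external-dns is allowed to manage):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: django-app-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-sub.example.com.
spec:
  type: LoadBalancer
  selector:
    app: django-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
</code></pre>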
|
<p>I am new to Kubernetes and started working with it a month ago.
When setting up a cluster, I sometimes see Heapster get stuck in ContainerCreating or Pending status. When this happens, the only fix I have found is to re-install everything from scratch, after which Heapster runs without any problem. But I don't think that is the optimal solution every time, so please help me solve this issue for when it occurs again.
The Heapster image is pulled from GitHub. Right now the cluster is running fine, so I cannot attach a screenshot of Heapster stuck in ContainerCreating or Pending status.
Any alternative suggestions for solving the problem if it occurs again are welcome.
Thanks in advance for your time.</p>
| <p>A pod stuck in Pending state can mean more than one thing. Next time it happens you should run <code>kubectl get pods</code> and then <code>kubectl describe pod &lt;podname&gt;</code>. However, since it works sometimes, the most likely cause is that the cluster doesn't have enough resources on any of its nodes to schedule the pod. If the cluster is low on remaining resources you should get an indication of this from <code>kubectl top nodes</code> and <code>kubectl describe nodes</code>. (Or, on GKE on Google Cloud, you often get a low-resource warning in the web console.)</p>
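<p>For example, these are the commands I would run next time Heapster hangs (the pod name is a placeholder, and <code>kubectl top</code> only works if metrics are available):</p>
<pre><code>kubectl get pods --all-namespaces | grep -i heapster
kubectl describe pod <heapster-pod-name> -n kube-system
kubectl describe nodes
kubectl top nodes
</code></pre>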
<p>(Or if in Azure then be wary of <a href="https://github.com/Azure/ACS/issues/29" rel="nofollow noreferrer">https://github.com/Azure/ACS/issues/29</a> )</p>
|
<p>I'm using Kubernetes with Traefik as Ingress Controller. I've some web services within my network that can't be containerized yet. Therefore I'm looking for a way to expose my non-Kubernetes web services through the Traefik Ingress. I've no more public IP's, so splitting both environments is not an option.</p>
<p>I've made an endpoint + service to solve this issue, but when I try to connect I get an SSL Protocol Error. Am I doing something wrong or does someone have another solution?</p>
<p>These are my (test)endpoints and service yaml:</p>
<pre><code>kind: Endpoints
apiVersion: v1
metadata:
name: my-service
subsets:
- addresses:
- ip: 10.4.0.6
ports:
- port: 443
---
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
ports:
- protocol: TCP
port: 443
name: https
targetPort: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: host.com
http:
paths:
- path: /*
backend:
serviceName: my-service
servicePort: 443
</code></pre>
| <p>For starters, I bet that when you run <code>kubectl describe svc my-service</code> you have empty endpoints, even though the endpoints do exist, right?</p>
<p>To fix that, you need to adapt your endpoints <code>ports</code> key to have the same ports definition as your service has (name, protocol etc.). You should then see these endpoints in <code>describe service</code> results and be able to use it normally.</p>
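<p>A minimal sketch of what that could look like for your example, with the ports definitions matching on both objects (IP and port taken from your question):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 10.4.0.6
  ports:
  - name: https
    protocol: TCP
    port: 443
</code></pre>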
|
<p>If the installation of OpenEBS can be completed with a single command, why would a developer use helm install? (It is probably more of a Helm benefits question.) I'd like to understand the additional benefits the OpenEBS charts can offer a Helm user, if any. </p>
| <p>I guess you're looking at the two current supported options for <a href="https://docs.openebs.io/docs/next/installation.html" rel="nofollow noreferrer">OpenEBS installation</a> and noting that the helm install section is much larger with more steps than the operator-based install option. If so, note that the helm section has two sub-sections - you only need one or the other and the one that uses the <a href="https://github.com/helm/charts/tree/master/stable/openebs" rel="nofollow noreferrer">stable helm charts repo</a> is just a single command. But one might still wonder why install helm in the first place.</p>
<p>One of the main <a href="https://platform9.com/blog/kubernetes-helm-why-it-matters/" rel="nofollow noreferrer">advantages of helm</a> is the availability of standard, reusable charts for a wide range of applications. This is including but not limited to the <a href="https://hub.kubeapps.com/" rel="nofollow noreferrer">official charts repo</a>. Relative to pure kubernetes descriptors, helm charts are easier to pass parameters into since they work as templates from which kubernetes descriptor files are generated.</p>
<p>Often the level of parameterisation that you get from templating is needed to ensure that an app can be installed to lots of different clusters and provide the full range of installation options that the app needs. Things like turning on or off certain permissions or pointing at storage. Different apps need different levels of configurability.</p>
<p>If you look at the OpenEBS non-helm deployment descriptor at <a href="https://openebs.github.io/charts/openebs-operator-0.7.0.yaml" rel="nofollow noreferrer">https://openebs.github.io/charts/openebs-operator-0.7.0.yaml</a>, you'll see it defines a list of resources. The same resources defined in <a href="https://github.com/helm/charts/tree/master/stable/openebs/templates" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/openebs/templates</a> Within the non-helm version the number of replicas for maya-apiserver is set at 1. To change this, you'd need to download the file and edit it or change it in your running kubernetes. With the helm version it's one of a range of parameters that you can set at install time (<a href="https://github.com/helm/charts/blob/master/stable/openebs/values.yaml#L19" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/openebs/values.yaml#L19</a>) as options on the <code>helm install</code> command</p>
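<p>For example, overriding that replica count at install time could look like this (the exact key name comes from the chart's values.yaml, so treat <code>apiserver.replicas</code> as an assumption based on the linked line):</p>
<pre><code>helm install --name openebs --namespace openebs stable/openebs \
  --set apiserver.replicas=2
</code></pre>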
|
<p>Does anyone know a way to <strong>refer to a namespace</strong> inside of values.yaml using an environment variable? </p>
<p>For example, when mapping a secret </p>
<pre><code>secret:
# RabbitMQ password
V_RABBIT_PASSWORD:
secretKeyRef:
name: jx-staging-rabbit //<--- this needs to work for staging and prod
key: rabbitmq-password
</code></pre>
<p>This is the section in deployment.yaml</p>
<pre><code> - name: {{ $name | quote }}
valueFrom:
secretKeyRef:
name: {{ $value.secretKeyRef.name | quote }} //<-- trying different combinations here
key: {{ $value.secretKeyRef.key | quote }}
</code></pre>
<p>attempts:</p>
<pre><code>${NAMESPACE}-{{ $value.secretKeyRef.name | quote }}
</code></pre>
<p>and</p>
<pre><code>{{ template "namespace" . }}-{{ $value.secretKeyRef.name | quote }}
</code></pre>
<p>Thanks</p>
| <p>I guess this is in a helm chart for an app that you're deploying with jenkins-x. Helm has a <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/builtin_objects.md" rel="nofollow noreferrer">Release.Namespace</a> value that you can access, so in the deployment.yaml you could use <code>{{ .Release.Namespace }}</code>. <code>jx-staging</code> is also the name of the release, though, so <code>{{ .Release.Name }}</code> could equally apply here. I would expect this to look like:</p>
<pre><code> valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-{{ .Values.rabbitmq.name }}
key: rabbitmq-password
</code></pre>
<p>Where <code>{{ .Values.rabbitmq.name }}</code> is equal to <code>rabbitmq</code> or whatever you call rabbitmq in your requirements.yaml. (<a href="https://github.com/Activiti/activiti-cloud-charts/blob/95a845ded439d71dc8983d3616fc9ef2de730ce5/activiti-cloud-audit/templates/deployment.yaml#L62" rel="nofollow noreferrer">Here's</a> an example chart doing it this way for postgres, it also uses rabbit but accesses the rabbit password differently.)</p>
<p>If you have the secret loading correctly but still get password problems then make sure you're setting an explicit password value as otherwise you could be hitting <a href="https://github.com/helm/charts/issues/5167" rel="nofollow noreferrer">https://github.com/helm/charts/issues/5167</a></p>
<p>The use of <code>{{ .Release.Name }}</code> won't work inside of the values.yaml but I'm not sure if you need it to if you can do it in the deployment.yaml. </p>
<p>(If you really do need access to a function from the values.yaml then you need to have an entry for the string value in the values.yaml and then <a href="https://github.com/jenkins-x/draft-packs/pull/65" rel="nofollow noreferrer">pass it through the <code>tpl</code> function within the template</a>.)</p>
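<p>A small sketch of that last approach (names are made up): keep the templated string in values.yaml and render it with <code>tpl</code> in the template:</p>
<pre><code># values.yaml
rabbitSecretName: "{{ .Release.Name }}-rabbit"

# deployment.yaml
        valueFrom:
          secretKeyRef:
            name: {{ tpl .Values.rabbitSecretName . | quote }}
            key: rabbitmq-password
</code></pre>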
|
<p>I'm trying to expose a single database instance as a service in two Kubernetes namespaces. Kubernetes version 1.11.3 running on Ubuntu 16.04.1. The database service is visible and working in the default namespace. I created an ExternalName service in a non-default namespace referencing the fully qualified domain name in the default namespace as follows:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ws-mysql
namespace: wittlesouth
spec:
type: ExternalName
externalName: mysql.default.svc.cluster.local
ports:
- port: 3306
</code></pre>
<p>The service is running:</p>
<pre><code>eric$ kubectl describe service ws-mysql --namespace=wittlesouth
Name: ws-mysql
Namespace: wittlesouth
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ExternalName
IP:
External Name: mysql.default.svc.cluster.local
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
</code></pre>
<p>If I check whether the service can be found by name from a pod running in the wittlesouth namespace, this service name does not resolve, but other services in that namespace (i.e. Jira) do:</p>
<pre><code>root@rs-ws-diags-8mgqq:/# nslookup mysql.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mysql.default.svc.cluster.local
Address: 10.99.120.208
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth.svc.cluster.local: No answer
root@rs-ws-diags-8mgqq:/# nslookup ws-mysql.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
*** Can't find ws-mysql.wittlesouth: No answer
root@rs-ws-diags-8mgqq:/# nslookup jira.wittlesouth
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: jira.wittlesouth.svc.cluster.local
Address: 10.105.30.239
</code></pre>
<p>Any thoughts on what might be the issue here? For the moment I've worked around it by updating applications that need to use the database to reference the fully qualified domain name of the service running in the default namespace, but I'd prefer to avoid that. My intent eventually is to have the namespaces have separate database instances, and would like to deploy apps configured to work that way now in advance of actually standing up the second instance.</p>
| <p>This doesn't work for me with Kubernetes 1.11.2 with coredns and calico. It works only if you reference the external service directly in whichever namespace it runs:</p>
<pre><code>$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
mysql-0 2/2 Running 0 17m
mysql-1 2/2 Running 0 16m
$ kubectl get pods -n wittlesouth
NAME READY STATUS RESTARTS AGE
ricos-dummy-pod 1/1 Running 0 14s
kubectl exec -it ricos-dummy-pod -n wittlesouth bash
root@ricos-dummy-pod:/# ping mysql.default.svc.cluster.local
PING mysql.default.svc.cluster.local (192.168.1.40): 56 data bytes
64 bytes from 192.168.1.40: icmp_seq=0 ttl=62 time=0.578 ms
64 bytes from 192.168.1.40: icmp_seq=1 ttl=62 time=0.632 ms
64 bytes from 192.168.1.40: icmp_seq=2 ttl=62 time=0.628 ms
^C--- mysql.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.578/0.613/0.632/0.025 ms
root@ricos-dummy-pod:/# ping ws-mysql
ping: unknown host
root@ricos-dummy-pod:/# exit
$ kubectl get svc mysql
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP None <none> 3306/TCP 45d
$ kubectl describe svc mysql
Name: mysql
Namespace: default
Labels: app=mysql
Annotations: <none>
Selector: app=mysql
Type: ClusterIP
IP: None
Port: mysql 3306/TCP
TargetPort: 3306/TCP
Endpoints: 192.168.1.40:3306,192.168.2.25:3306
Session Affinity: None
Events: <none>
</code></pre>
<p>The ExternalName service feature is only supported using kube-dns, as per the <a href="https://coredns.io/plugins/kubernetes/" rel="nofollow noreferrer">docs</a>, and Kubernetes 1.11.x defaults to coredns. You might want to try changing from coredns to kube-dns, or possibly changing the <a href="https://coredns.io/plugins/kubernetes/" rel="nofollow noreferrer">configs</a> for your coredns deployment. I expect this to be available at some point in coredns.</p>
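<p>To see which DNS implementation the cluster is actually running, something like this should work (kubeadm labels both kube-dns and coredns pods with <code>k8s-app=kube-dns</code>):</p>
<pre><code>kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
</code></pre>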
|
<p>I am trying to port a monolithic app to k8s pods. In theory, pods are considered ephemeral and it is suggested to use the Service concept to provide a static IP. But in my tests so far, I have not seen the pod IP change. So now the question: when will k8s assign a new IP to my pod?</p>
<p>I have created the pod (without using any controller) with a fixed hostname, and it is bound to a single node. So the node and the hostname will never change, and the pod will never be deleted. In this particular case, when can the pod IP change? I looked at the documentation and this is not clear to me.</p>
| <p>The IP won't change as long as the pod is running, but there are no promises that your pod will stay running. The closest there is to a stable network name is with a StatefulSet. That will create a consistent pod name, which means a consistent DNS name in kubedns/coredns. There is no generic way in Kubernetes to get long-term static IP on a pod (or on a service for that matter), though it's technically up to your CNI networking plugin so maybe some of those have special cases?</p>
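<p>A minimal sketch of that pattern (names and image are placeholders): a headless Service plus a StatefulSet gives you a stable per-pod DNS name like <code>myapp-0.myapp.&lt;namespace&gt;.svc.cluster.local</code>, even though the pod IP itself can still change when the pod is recreated.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  clusterIP: None          # headless service
  selector:
    app: myapp
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx       # placeholder image
        ports:
        - containerPort: 80
</code></pre>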
|
<p>I created a kubernetes service that is exposed via <code>type: nodePort</code>. I can access the service in my browser if I enter <a href="http://PublicDropletIp:31433" rel="nofollow noreferrer">http://PublicDropletIp:31433</a>.</p>
<p>Now I want to use a DigitalOcean Load Balancer to forward traffic from port <code>80</code> to the service. So I set a rule for the Load Balancer to forward <code>http/80</code> to Droplet <code>http/31433</code>.</p>
<p>Unforutnatly this doesn't work. If I enter the load balancer IP in the browser I get: <code>503 Service Unavailable</code>.</p>
<p>Does anyone know how I can expose the service so that the Load Balancer can forward traffic to it?</p>
| <p>I had this same issue and ended up on this thread. If anyone else is looking, I resolved it by configuring the firewall on my server. </p>
<p>To answer the question above, the firewall should be configured to accept <code>tcp</code> connections from the load balancer's ip on port <code>31433</code>.</p>
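<p>For example, with ufw on the droplet that could look like this (the load balancer IP is a placeholder; adjust for whatever firewall you use):</p>
<pre><code>sudo ufw allow proto tcp from <load-balancer-ip> to any port 31433
</code></pre>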
|
<p>Is there a variant of <code>kubectl delete all --all</code> command or some other command to delete all resources except the <em>kubernetes service</em>?</p>
| <p>I don't think there's a built-in command for it, which means you'll have to script your way out of it, something like this (add an <code>if</code> for the namespace you want to spare):</p>
<pre><code>$ for ns in $(kubectl get ns --output=jsonpath={.items[*].metadata.name}); do kubectl delete ns/$ns; done;
</code></pre>
<p>Note: deleting a namespace deletes all its resources.</p>
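<p>For example, the "if" mentioned above could spare the default and system namespaces like this:</p>
<pre><code>for ns in $(kubectl get ns --output=jsonpath={.items[*].metadata.name}); do
  if [ "$ns" != "default" ] && [ "$ns" != "kube-system" ] && [ "$ns" != "kube-public" ]; then
    kubectl delete ns "$ns"
  fi
done
</code></pre>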
|
<p>I have setup a K8S cluster (1 master and 2 slaves) using Kubeadm on my laptop.</p>
<ul>
<li>Deployed 6 replicas of a pod. 3 of them got deployed to each of the slaves.</li>
<li>Did a shutdown of one of the slave.</li>
<li>It took ~6 minutes for the 3 pods to be scheduled on the running node.</li>
</ul>
<p>Initially, I thought that it had something to do with the K8S setup. After some digging I found out it's because of the defaults in K8S for the Controller Manager and Kubelet, as mentioned <a href="https://fatalfailure.wordpress.com/2016/06/10/improving-kubernetes-reliability-quicker-detection-of-a-node-down/" rel="nofollow noreferrer">here</a>. It made sense. I checked out the K8S documentation on where to change the configuration properties and also checked the configuration files on the cluster node, but couldn't figure it out.</p>
<pre><code>kubelet: node-status-update-frequency=4s (from 10s)
controller-manager: node-monitor-period=2s (from 5s)
controller-manager: node-monitor-grace-period=16s (from 40s)
controller-manager: pod-eviction-timeout=30s (from 5m)
</code></pre>
<p>Could someone point out what needs to be done to make the above-mentioned configuration changes permanent and also the different options for the same?</p>
| <p>On the kubelet change this file on all your nodes:</p>
<pre><code>/var/lib/kubelet/kubeadm-flags.env
</code></pre>
<p>Add the option at the end or anywhere on this line:</p>
<pre><code>KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin
--cni-conf-dir=/etc/cni/net.d --network-plugin=cni
--resolv-conf=/run/systemd/resolve/resolv.conf
--node-status-update-frequency=10s <== add this
</code></pre>
<p>On your kube-controller-manager change on the master the following file:</p>
<pre><code>/etc/kubernetes/manifests/kube-controller-manager.yaml
</code></pre>
<p>In this section:</p>
<pre><code> containers:
- command:
- kube-controller-manager
- --address=127.0.0.1
- --allocate-node-cidrs=true
- --cloud-provider=aws
- --cluster-cidr=192.168.0.0/16
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --node-cidr-mask-size=24
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --use-service-account-credentials=true
    - --node-monitor-period=5s   <== add this line
</code></pre>
<p>On your master do a <code>sudo systemctl restart docker</code>
On all your nodes do a <code>sudo systemctl restart kubelet</code></p>
<p>You should have the new configs take effect.</p>
<p>Hope it helps.</p>
|
<p>I am trying to add a file to a pod's disk during initialization of the pod but without luck. Below is my deployment file which I use to deploy the pod. The file gets downloaded to the persistent volume, but the pod doesn't get into ready state. After a few seconds, the pods fail and get rebuilt. Which kicks off the whole process again.</p>
<p>Any help would be appreciated.</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: mapserver
spec:
selector:
matchLabels:
app: mapserver
template:
metadata:
labels:
app: mapserver
spec:
volumes:
- name: storage
persistentVolumeClaim:
claimName: mapserver-pv-claim
containers:
- name: maptiles
image: klokantech/tileserver-gl
command: ["/bin/sh"]
args:
- -c
- |
echo "[INFO] Startingcontainer"; if [ $(DOWNLOAD_MBTILES) = "true" ]; then
echo "[INFO] Download MBTILES_PLANET_URL";
rm /data/*
cd /data/
curl -k -sSL -X GET -u user:ww $(MBTILES_PLANET_URL) -O
echo "[INFO] Download finished";
fi;
env:
- name: MBTILES_PLANET_URL
value: 'https://abc-dev/nexus/repository/xyz-raw/2017-07-03_europe_netherlands.mbtiles'
- name: DOWNLOAD_MBTILES
value: 'true'
livenessProbe:
failureThreshold: 120
httpGet:
path: /health
port: 80
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
ports:
- containerPort: 80
name: http
protocol: TCP
readinessProbe:
failureThreshold: 120
httpGet:
path: /health
port: 80
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
cpu: 300m
memory: 3Gi
requests:
cpu: 100m
memory: 1Gi
volumeMounts:
- mountPath: "/data"
name: storage
</code></pre>
| <blockquote>
<p>I am trying to add a file to a pod's disk during initialization of the pod but without luck.</p>
</blockquote>
<p>In that case you might want to use <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer"><code>InitContainers</code></a> instead.</p>
<p>Judging from your manifest, your main command gets executed (it copies the file and then exits), terminating the container (and the accompanying pod) in the process. The Deployment then restarts the exited pod and the cycle repeats. If you use <code>InitContainers</code> instead (with the same definition and the same PV as you are using now for the main container), you prepopulate the data in an init container that runs to completion, and then continue to use it in your normal container (which should have a non-exiting main process as its command/entry point).</p>
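<p>A rough sketch of that for your Deployment (the init image is an assumption; any small image with sh and curl works):</p>
<pre><code>    spec:
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: mapserver-pv-claim
      initContainers:
      - name: download-mbtiles
        image: curlimages/curl        # assumption: any image with curl
        command: ["sh", "-c"]
        args:
        - rm -f /data/* && cd /data && curl -k -sSL -u user:ww -O "$(MBTILES_PLANET_URL)"
        env:
        - name: MBTILES_PLANET_URL
          value: 'https://abc-dev/nexus/repository/xyz-raw/2017-07-03_europe_netherlands.mbtiles'
        volumeMounts:
        - mountPath: /data
          name: storage
      containers:
      - name: maptiles
        image: klokantech/tileserver-gl
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data
          name: storage
</code></pre>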
<p>Note: if you don't want to use <code>InitContainers</code>, or just as a quick test, you could append a regular non-exiting command after your copy statement; also check whether you need to start the container with a tty, depending on your use case and how you keep the container up and running.</p>
|
<p>I'm in the process of migrating from docker-compose to Kubernetes.
One of the services we're using is rabbit-mq.
When I try to deploy rabbit-mq 3.6.16-management I receive the error:</p>
<p><em>/usr/local/bin/docker-entrypoint.sh: line 382: /etc/rabbitmq/rabbitmq.config: Permission denied.</em></p>
<p>It works in the docker-compose deployment, though.</p>
<p><strong>Kubernetes</strong>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rabbit-mq
name: rabbit-mq
spec:
replicas: 1
selector:
matchLabels:
app: rabbit-mq
strategy:
type: Recreate
template:
metadata:
labels:
app: rabbit-mq
spec:
containers:
- image: rabbitmq:3.6.16-management
name: rabbit-mq
ports:
- containerPort: 15671
- containerPort: 5671
volumeMounts:
- mountPath: /etc/rabbitmq
name: rabbit-mq-data
restartPolicy: Always
hostname: rabbit-mq
volumes:
- name: rabbit-mq-data
persistentVolumeClaim:
claimName: rabbit-mq-data
</code></pre>
<p><strong>PVC:</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: rabbit-mq-data
name: rabbit-mq-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 16Gi
</code></pre>
<p><strong>PV:</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: rabbit-mq-data
labels:
type: local
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 16Gi
hostPath:
path: "/etc/rabbitmq"
</code></pre>
<p><strong>Docker-Compose:</strong></p>
<pre><code> rabbit-mq:
image: rabbitmq:3.6.16-management
ports:
- "15671:15671"
- "5671:5671"
container_name: rabbit-mq
volumes:
- rabbit-mq-data:/etc/rabbitmq
restart: on-failure:5
</code></pre>
| <p>Eventually I used a ConfigMap and Secrets to mount the files instead of a PV, and it works as expected.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rabbit-mq
name: rabbit-mq
spec:
replicas: 1
selector:
matchLabels:
app: rabbit-mq
template:
metadata:
labels:
app: rabbit-mq
spec:
containers:
- image: rabbitmq:3.6.16-management
name: rabbit-mq
ports:
- containerPort: 15671
- containerPort: 5671
volumeMounts:
- name: rabbit-mq-data
mountPath: /etc/rabbitmq
readOnly: false
- name: mq-secret
mountPath: /etc/rabbitmq/certfiles
#readOnly: true
volumes:
- name: mq-secret
secret:
defaultMode: 420
secretName: rabbit-mq-secrets
- configMap:
defaultMode: 420
items:
- key: rabbitmq.config
path: rabbitmq.config
name: mq-config
name: rabbit-mq-data
</code></pre>
|
<p>Is there a way to be able to provide ReadWriteMany storage without having to implement a storage cluster?</p>
<p>I was able to provide storage with gcsfuse but it is really slow. I need something close to the speed of GlusterFS.</p>
<p>I am currently using GlusterFS.</p>
| <p>Another option: Google Cloud Platform recently started offering a hosted NFS service called <a href="https://cloud.google.com/filestore/docs/" rel="nofollow noreferrer">Cloud Filestore</a>. </p>
<p>Note that as of this writing, Cloud Filestore is still in beta.</p>
<p>Here's the description:</p>
<blockquote>
<p>Use Cloud Filestore to create fully managed NFS file servers on Google
Cloud Platform (GCP) for use with applications running on Compute
Engine virtual machines (VMs) instances or Kubernetes Engine clusters.</p>
<p>Create and manage Cloud Filestore instances by using the GCP console
or the gcloud command-line tool, and interact with the NFS fileshare
on the instance by using standard operating system commands.</p>
</blockquote>
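<p>A Filestore share is consumed like any other NFS export, so a sketch of a ReadWriteMany volume could look like this (server IP and share name are placeholders for your Filestore instance):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-nfs
spec:
  capacity:
    storage: 1Ti
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.2       # IP of the Filestore instance
    path: /share1          # the fileshare name you configured
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-nfs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Ti
</code></pre>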
|
<p>I'm developing a microservices application with Spring Boot and Postgres. Using Docker, with the REST API in one container and Postgres in another container, everything works fine. But when I try to run this with Kubernetes, it always gives an API error.</p>
<p>The <code>Dockerfile</code> where my Spring Boot-based API is defined looks as follows:</p>
<pre><code>FROM alpine
RUN apk add openjdk8
MAINTAINER rjdesenvolvimento.com
COPY target/apipessoas-0.0.1.jar /opt/gaia/apipessoas.jar
ENTRYPOINT ["/usr/bin/java"]
CMD ["-jar", "/opt/gaia/apipessoas.jar"]
EXPOSE 8080
</code></pre>
<p>The <code>Dockerfile</code> for Postgres:</p>
<pre><code>FROM postgres:10-alpine
MAINTAINER rjdesenvolvimento.com
ENV POSTGRES_PASSWORD=postgres
ENV POSTGRES_PORT=5432
CMD ["postgres"]
ADD postgres.sql /docker-entrypoint-initdb.d/
EXPOSE 5432
</code></pre>
<p>My <code>docker-compose</code> file:</p>
<pre><code>version: "3.0"
services:
postgres:
build:
dockerfile: postgres.dockerfile
context: .
image: rjdesenvolvimento/postgres-apipessoas
container_name: postgres
ports:
- "5432:5432"
volumes:
- /home/rodrigo/Projetos/Volumes:/var/lib/postgresql/data
networks:
- gaia_network
apipessoas:
build:
dockerfile: api-pessoa.dockerfile
context: .
image: rjdesenvolvimento/api-pessoas
container_name: api_pessoas
ports:
- "8080:8080"
depends_on:
- postgres
networks:
- gaia_network
networks:
gaia_network:
driver: bridge
</code></pre>
<p>Now, my Kubernetes Postgres mainfest:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
ports:
- port: 5432
name: postgres-api-pessoas
clusterIP: None
selector:
app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres-api-pessoas
spec:
serviceName: postgres
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: container-postgres-api-pessoas
image: rjdesenvolvimento/postgres-apipessoas
imagePullPolicy: Never
env:
- name: POSTGRES0_USER
value: postgres
- name: POSTGRES_PASSWORD
value: postgres
- name: POSTGRES_DB
value: zeus
ports:
- containerPort: 5432
name: zeus
volumeMounts:
- name: volume-postgres-api-pessoas
mountPath: /var/lib/postgresql/data
volumeClaimTemplates:
- metadata:
name: volume-postgres-api-pessoas
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
</code></pre>
<p>And the API manifest:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: servico-api-pessoas
spec:
type: LoadBalancer
selector:
app: api-pessoas-pod
ports:
- port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-api-pessoas
labels:
app: api-pessoas-pod
spec:
selector:
matchLabels:
app: api-pessoas-pod
template:
metadata:
labels:
app: api-pessoas-pod
spec:
containers:
- name: contaienr-api-pessoas
image: rjdesenvolvimento/api-pessoas
imagePullPolicy: Never
ports:
- containerPort: 8080
</code></pre>
<p>When I deploy the above YAML manifests to Kubernetes I get this error:</p>
<pre><code>deployment-api-pessoas-8cfd5c6c5-dpln4 0/1 CrashLoopBackOff 6 10m
postgres-api-pessoas-0 1/1 Running 0 28m
</code></pre>
<p>What am I doing wrong? </p>
<p>ADD:</p>
<p>describe pod</p>
<pre><code>Name: deployment-api-pessoas-8cfd5c6c5-nhtff
Namespace: default
Node: minikube/10.0.2.15
Start Time: Sun, 16 Sep 2018 11:19:02 -0400
Labels: app=api-pessoas-pod
pod-template-hash=479817271
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/deployment-api-pessoas-8cfd5c6c5
Containers:
contaienr-api-pessoas:
Container ID: docker://a6fb9a254895bb31effdcadd66675cfb5197f72d526f805d20cbbde90c0677cc
Image: rjdesenvolvimento/api-pessoas
Image ID: docker://sha256:f326d42c0afd7b4f3d3e7a06e8a2625f7a841a7451a08c2b326e90e13459b244
Port: 8080/TCP
Host Port: 0/TCP
State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 16 Sep 2018 11:19:48 -0400
Finished: Sun, 16 Sep 2018 11:20:02 -0400
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 16 Sep 2018 11:19:11 -0400
Finished: Sun, 16 Sep 2018 11:19:42 -0400
Ready: False
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-x9s8k (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-x9s8k:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-x9s8k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned deployment-api-pessoas-8cfd5c6c5-nhtff to minikube
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-x9s8k"
Normal Pulled 26s (x2 over 1m) kubelet, minikube Container image "rjdesenvolvimento/api-pessoas" already present on machine
Normal Created 23s (x2 over 1m) kubelet, minikube Created container
Normal Started 21s (x2 over 58s) kubelet, minikube Started container
Warning BackOff 6s kubelet, minikube Back-off restarting failed container
</code></pre>
<p>the log is too big, but I think I found the problem:</p>
<pre><code> attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error
creating bean with name 'pessoaJuridicaFuncionarioResource' defined in URL
[jar:file:/opt/gaia/apipessoas.jar!/BOOT-
INF/classes!/com/rjdesenvolvimento/apipessoas/resource/funcionario/PessoaJ
uridicaFuncionarioResource.class]: Unsatisfied dependency expressed
through constructor parameter 0; nested exception is
org.springframework.beans.factory.UnsatisfiedDependencyException: Error
creating bean with name 'pessoaJuridicaFuncionarioService' defined in URL
[jar:file:/opt/gaia/apipessoas.jar!/BOOT-
INF/classes!/com/rjdesenvolvimento/apipessoas/service/funcionario/PessoaJu
ridicaFuncionarioService.class]: Unsatisfied dependency expressed through
constructor parameter 0; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating
bean with name 'pessoaJuridicaFuncionarioRepository': Cannot create inner
bean '(inner bean)#37ddb69a' of type
[org.springframework.orm.jpa.SharedEntityManagerCreator] while setting
bean property 'entityManager'; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating
bean with name '(inner bean)#37ddb69a': Cannot resolve reference to bean
'entityManagerFactory' while setting constructor argument; nested
exception is org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'entityManagerFactory': Post-processing of
FactoryBean's singleton object failed; nested exception is
org.springframework.jdbc.datasource.init.ScriptStatementFailedException:
Failed to execute SQL script statement #1 of URL
[jar:file:/opt/gaia/apipessoas.jar!/BOOT-INF/classes!/data.sql]: INSERT
INTO zeus.endereco.pais (nome) VALUES ('Brasil'); nested exception is
org.postgresql.util.PSQLException: ERROR: relation "endereco.pais" does
not exist
Position: 13
</code></pre>
<p>Hibernate cannot create any tables.</p>
<pre><code>Hibernate:
create table endereco.endereco_pessoa_fisica_funcionario (
id bigserial not null,
bairro varchar(255) not null,
cep varchar(255) not null,
complemento varchar(255) not null,
logradouro varchar(255) not null,
numero varchar(255) not null,
fk_cidade int8,
fk_pessoafisicafuncionario int8,
primary key (id)
)
2018-09-16 15:27:03.480 WARN 1 --- [ main] o.h.t.s.i.ExceptionHandlerLoggedImpl : GenerationTarget encountered exception accepting command : Error executing DDL via JDBC Statement
org.hibernate.tool.schema.spi.CommandAcceptanceException: Error executing DDL via JDBC Statement
at org.hibernate.tool.schema.internal.exec.GenerationTargetToDatabase.accept(GenerationTargetToDatabase.java:67) ~[hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.tool.schema.internal.SchemaCreatorImpl.applySqlString(SchemaCreatorImpl.java:440) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.tool.schema.internal.SchemaCreatorImpl.applySqlStrings(SchemaCreatorImpl.java:424) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.tool.schema.internal.SchemaCreatorImpl.createFromMetadata(SchemaCreatorImpl.java:315) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.tool.schema.internal.SchemaCreatorImpl.performCreation(SchemaCreatorImpl.java:166) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.tool.schema.internal.SchemaCreatorImpl.doCreation(SchemaCreatorImpl.java:135) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.tool.schema.internal.SchemaCreatorImpl.doCreation(SchemaCreatorImpl.java:121) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.performDatabaseAction(SchemaManagementToolCoordinator.java:155) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.process(SchemaManagementToolCoordinator.java:72) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:312) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:462) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:892) [hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:57) [spring-orm-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365) [spring-orm-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:390) [spring-orm-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:377) [spring-orm-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.afterPropertiesSet(LocalContainerEntityManagerFactoryBean.java:341) [spring-orm-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1758) [spring-beans-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1695) [spring-beans-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:573) [spring-beans-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:495) [spring-beans-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) [spring-beans-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) [spring-beans-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) [spring-beans-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1089) ~[spring-context-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:859) ~[spring-context-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550) ~[spring-context-5.0.8.RELEASE.jar!/:5.0.8.RELEASE]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140) ~[spring-boot-2.0.4.RELEASE.jar!/:2.0.4.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:762) ~[spring-boot-2.0.4.RELEASE.jar!/:2.0.4.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:398) ~[spring-boot-2.0.4.RELEASE.jar!/:2.0.4.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:330) ~[spring-boot-2.0.4.RELEASE.jar!/:2.0.4.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1258) ~[spring-boot-2.0.4.RELEASE.jar!/:2.0.4.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1246) ~[spring-boot-2.0.4.RELEASE.jar!/:2.0.4.RELEASE]
at com.rjdesenvolvimento.apipessoas.ApipessoasApplication.main(ApipessoasApplication.java:10) ~[classes!/:0.0.1]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_171]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_171]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_171]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) ~[apipessoas.jar:0.0.1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) ~[apipessoas.jar:0.0.1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) ~[apipessoas.jar:0.0.1]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) ~[apipessoas.jar:0.0.1]
Caused by: org.postgresql.util.PSQLException: ERROR: schema "endereco" does not exist
Position: 19
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440) ~[postgresql-42.2.4.jar!/:42.2.4]
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183) ~[postgresql-42.2.4.jar!/:42.2.4]
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308) ~[postgresql-42.2.4.jar!/:42.2.4]
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441) ~[postgresql-42.2.4.jar!/:42.2.4]
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365) ~[postgresql-42.2.4.jar!/:42.2.4]
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307) ~[postgresql-42.2.4.jar!/:42.2.4]
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293) ~[postgresql-42.2.4.jar!/:42.2.4]
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270) ~[postgresql-42.2.4.jar!/:42.2.4]
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266) ~[postgresql-42.2.4.jar!/:42.2.4]
at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95) ~[HikariCP-2.7.9.jar!/:na]
at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java) ~[HikariCP-2.7.9.jar!/:na]
at org.hibernate.tool.schema.internal.exec.GenerationTargetToDatabase.accept(GenerationTargetToDatabase.java:54) ~[hibernate-core-5.2.17.Final.jar!/:5.2.17.Final]
... 42 common frames omitted
</code></pre>
<p>But if I run it with docker-compose, without Kubernetes, everything works fine.</p>
<p>ADD:</p>
<p>In my application.properties: </p>
<p>1) If I run it on my PC without Docker:
spring.datasource.url=jdbc:postgresql://localhost:port/db_name</p>
<p>2) If I run it in Docker:
spring.datasource.url=jdbc:postgresql://docker_db_container_name:port/db_name</p>
<p>Now I do not know how to make the REST API "see" the database with Kubernetes.</p>
<p>ADD:</p>
<p>First I wanted to apologize for my absence.
I had personal problems.
Secondly, I would like to thank you all for your help so far.</p>
<p>I modified my Kubernetes files:</p>
<p>postgres</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: postgres-config
labels:
app: postgres
camada: banco-de-dados
data:
POSTGRES_DB: zeus
POSTGRES_PASSWORD: lk85Kab5aCn904Ad
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-volume
labels:
app: postgres
tipo: local
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /home/rodrigo/Projetos/VolumeKube
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-volume-claim
labels:
app: postgres
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: rjdesenvolvimento/postgres-apipessoas
imagePullPolicy: Never
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-database
volumes:
- name: postgres-database
persistentVolumeClaim:
claimName: postgres-volume-claim
---
kind: Service
apiVersion: v1
metadata:
name: postgres
labels:
app: postgres
spec:
type: NodePort
selector:
app: postgres
ports:
- port: 5432
</code></pre>
<p>and API</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: api-pessoas
labels:
app: api-pessoas
camada: backend
spec:
replicas: 1
selector:
matchLabels:
app: api-pessoas
template:
metadata:
labels:
app: api-pessoas
camada: backend
spec:
containers:
- name: api-pessoas
image: rjdesenvolvimento/api-pessoas
imagePullPolicy: Never
ports:
- containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
name: api-pessoas
labels:
app: api-pessoas
camada: backend
spec:
type: LoadBalancer
selector:
app: api-pessoas
camada: backend
ports:
- port: 8080
</code></pre>
<p>The error continued. So I read carefully and realized that Kubernetes created the "zeus" database but did not create the schemas.</p>
<p>When I create my Docker image from the Dockerfile, I add the following file:</p>
<pre><code>CREATE DATABASE zeus;
\connect zeus;
CREATE SCHEMA funcionario;
CREATE SCHEMA endereco;
CREATE SCHEMA tabela_auxiliar;
CREATE SCHEMA cliente;
</code></pre>
<p>But this information is not passed on to Kubernetes. So I have to manually enter and add them. And then everything works.</p>
<p>How do I make this information get added automatically?</p>
| <p>As far as I understand your issue, the pod running your service cannot reach Postgres. As you may already know, in k8s pods are ephemeral, and you need the Service abstraction if you want to invoke a service by its URL.</p>
<p>I would exec into the app pod and try to resolve/ping the hostname your app uses to connect to Postgres (from your app's config):</p>
<pre><code>kubectl exec -it <podname> sh
</code></pre>
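<p>Once inside, a couple of quick checks (assuming the Service is called <code>postgres</code> and the image has the usual tools):</p>
<pre><code># does the Service name resolve?
nslookup postgres
# is the port reachable? (needs netcat in the image)
nc -vz postgres 5432
</code></pre>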
|
<p>I am trying to install Traefik as an Ingress Controller for my self-installed Kubernetes cluster. For convenience I tried to install the <a href="https://github.com/helm/charts/tree/master/stable/traefik" rel="nofollow noreferrer">helm chart of Traefik</a>, and this works fine without the ACME part; this is my variables yml now:</p>
<pre><code>externalIP: xxx.xxx.xx.xxx
dashboard:
enabled: true
domain: traefik-ui.example.com
ssl:
enabled: true
enforced: true
acme:
enabled: true
challengeType: http-01
email: [email protected]
staging: true
persistence.enabled: true
logging: true
</code></pre>
<p>Installed with:</p>
<pre><code>helm install --name traefik --namespace kube-traefik --values traefik-variables.yml stable/traefik
</code></pre>
<p>But with <code>helm status traefik</code> I can see the <code>v1/PersistentVolumeClaim</code> named <code>traefik-acme</code> stays pending and the certificate is never assigned.</p>
| <p>It is highly recommended you use <a href="https://github.com/jetstack/cert-manager" rel="nofollow noreferrer"><code>cert-manager</code></a> instead of the built-in ACME support in Traefik, at least at this time. It is much better at dealing with multiple copies of Traefik, which you probably want. Ingress-shim (which is a default part of cert-manager) will handle Traefik-backed Ingresses just fine.</p>
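<p>As a sketch of how that fits together once cert-manager and a default issuer are installed (the Service name and hostname below are assumptions based on your values):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-dashboard
  annotations:
    kubernetes.io/ingress.class: traefik
    kubernetes.io/tls-acme: "true"       # picked up by cert-manager's ingress-shim
spec:
  tls:
  - hosts:
    - traefik-ui.example.com
    secretName: traefik-ui-tls
  rules:
  - host: traefik-ui.example.com
    http:
      paths:
      - backend:
          serviceName: traefik-dashboard   # assumption: the dashboard Service name
          servicePort: 80
</code></pre>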
|
<p>I'm building a service where users can build web apps - these apps will be hosted under a virtual DNS name *.laska.io</p>
<p>For example, if Tom and Jerry both built an app, they'd have it hosted under:</p>
<pre><code>tom.laska.io
jerry.laska.io
</code></pre>
<p>Now, suppose I have 1000 users. <strong>Should I create one big ingress that looks like this?</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: tom.laska.io
http:
paths:
- backend:
serviceName: nginx-service
servicePort: 80
- host: jerry.laska.io
http:
paths:
- backend:
serviceName: nginx-service
servicePort: 80
...and so forth
</code></pre>
<p>I'm worried about downtime - if I have an app with websockets for example. Also the file will become huge with 1000 users. Will reloading the ingress go fast enough? Also, how should I reload it?</p>
<p>A second option in my mind is to <strong>simply create one ingress for every web app</strong>. My worry about that is, can ingress-nginx handle many ingresses? Or is this an anti-pattern?</p>
<p>Which one is better?</p>
| <p>You can create one ingress resource for each web app. If you search the official public charts repo, you'll see that <a href="https://github.com/helm/charts/search?q=ingress&unscoped_q=ingress" rel="noreferrer">many of the charts define an ingress resource within them</a>. It's normal for each app to define its own ingress resource.</p>
<p>It's worth being clear that an ingress resource is just a definition of a routing rule. (It doesn't add an extra ingress controller or any other extra runtime component.) So it makes a lot of sense for an app to define its own routing rule. </p>
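<p>To make that concrete, the per-app variant of your resource is just a small standalone rule per tenant — for example (the backend service name is taken from your question):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tom-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: tom.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
</code></pre>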
<p>The example you've given has all the ingress routing in one resource definition. This approach is easy to grasp and makes a lot of sense when you've got several related applications as then you might want to see their routing configuration together. You can see this also in the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="noreferrer">fanout ingress example in the kubernetes docs</a>.</p>
<p>I can't see any performance concerns with defining the rules separately in distinct resources. The ingress controller will <a href="https://kubernetes.github.io/ingress-nginx/how-it-works/" rel="noreferrer">listen for new rules</a> and update its configuration only when a new rule is created so there shouldn't be a problem with reading the resources. And I'd expect the combined vs separated options to result in the same routing rules being set in the background in nginx.</p>
<p>The most common pattern in the official charts repo is that the chart for each app defines its ingress resource and also exposes it through the values.yaml so that users can choose to enable or customise it as they wish. You can then compose multiple charts together and configure the rules for each in the relevant section of the values.yaml. (Here's an <a href="https://github.com/Activiti/activiti-cloud-charts/blob/95a845ded439d71dc8983d3616fc9ef2de730ce5/activiti-cloud-full-example/values.yaml#L102" rel="noreferrer">example I've worked on that does this</a> with wildcard dns.) Or you can deploy each app separately under its own helm release.</p>
|
<p>All of the leader election tools like Consul, Zookeeper or any other quorum system. I've seen have been for pods within the same cluster. I need to coordinate across clusters for a tutorial.</p>
| <p>The short answer for a broad question: It's not the norm but yes you can run them in different clusters as long as you expose your containers/pods with the right ports and IP addresses and they can find each other using these ports and IP addresses.</p>
<p>This answer relates to quorum systems that use a consensus algorithms like <a href="https://en.wikipedia.org/wiki/Paxos_(computer_science)" rel="nofollow noreferrer">Paxos</a> or <a href="https://raft.github.io/" rel="nofollow noreferrer">Raft</a> such as <a href="https://www.consul.io/" rel="nofollow noreferrer">Consul</a>, <a href="https://zookeeper.apache.org/" rel="nofollow noreferrer">Zookeeper</a>, <a href="https://mesosphere.github.io/marathon/" rel="nofollow noreferrer">Marathon</a>, <a href="https://mesos.github.io/chronos/" rel="nofollow noreferrer">Chronos</a>, <a href="https://kubernetes.io/" rel="nofollow noreferrer">Kubernetes</a>, <a href="http://mesos.apache.org/" rel="nofollow noreferrer">Mesos</a>, etc.</p>
<p>Hope it helps!</p>
|
<p>I've configured Traefik (helm chart) with let'sencrypt ACME, but I'm not receiving any certificates. The Traefik Ingress is exposed on port 80 and 443 to the internet.</p>
<p>traefik.toml</p>
<pre><code>logLevel = "INFO"
InsecureSkipVerify = true
defaultEntryPoints = ["http","https"]
[entryPoints]
[entryPoints.http]
address = ":80"
compress = true
[entryPoints.https]
address = ":443"
compress = true
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
CertFile = "/ssl/tls.crt"
KeyFile = "/ssl/tls.key"
[kubernetes]
[acme]
email = "[email protected]"
storage = "/acme/acme.json"
entryPoint = "https"
onHostRule = true
caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
acmeLogging = true
[acme.httpChallenge]
entryPoint = "http"
[web]
address = ":8080"
</code></pre>
<p>Ingress with Traefik as IngressClass</p>
<pre><code>{
"kind": "Ingress",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "domain",
"namespace": "reverse-proxy",
"selfLink": "/apis/extensions/v1beta1/namespaces/reverse-proxy/ingresses/domain",
"uid": "550cdedc-ba77-11e8-8657-00155d00021a",
"resourceVersion": "6393921",
"generation": 5,
"creationTimestamp": "2018-09-17T12:43:52Z",
"annotations": {
"ingress.kubernetes.io/ssl-redirect": "true",
"kubernetes.io/ingress.class": "traefik"
}
},
"spec": {
"tls": [
{
"hosts": [
"domain.com"
],
"secretName": "cert" // without is also not working
}
],
"rules": [
{
"host": "domain.com",
"http": {
"paths": [
{
"backend": {
"serviceName": "domain",
"servicePort": 443
}
}
]
}
},
{
"host": "www.domain.com",
"http": {
"paths": [
{
"backend": {
"serviceName": "www-domain",
"servicePort": 443
}
}
]
}
}
]
},
"status": {
"loadBalancer": {}
}
}
</code></pre>
<p>I've tried to use both http-01 and tls-sni-01 challenge. dns-01 is no option, because my DNS provider doesn't have an API.</p>
| <p>How are you injecting the Let's Encrypt config into your Traefik ingress controller Service/DaemonSet?</p>
<p>Traefik's official Kubernetes Ingress docs don't cover Let's Encrypt, but this is a <a href="https://medium.com/@carlosedp/multiple-traefik-ingresses-with-letsencrypt-https-certificates-on-kubernetes-b590550280cf" rel="nofollow noreferrer">good guide</a>. Look for "External Traefik ingress controller"; note that you need a KV backend (or other shared storage) to store your certs.</p>
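<p>For reference, one common way to inject a <code>traefik.toml</code> like the one above is a ConfigMap mounted into the Traefik pod, plus a persistent volume for <code>/acme</code> so that <code>acme.json</code> survives restarts. A sketch — the names here are assumptions, not the chart's actual resource names:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.toml: |
    # paste the configuration shown in the question here
</code></pre>
<p>The ConfigMap would then be mounted at e.g. <code>/config</code> in the Traefik container, a PersistentVolumeClaim mounted at <code>/acme</code>, and Traefik started with <code>--configfile=/config/traefik.toml</code>.</p>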
<p>You can also try <a href="https://cert-manager.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">cert-manager</a> which works with Traefik.</p>
|
<p>I've found the documentation hard to find, such as "type: hostdir" for storage pools. What is hostdir? Is this the only type of pool OpenEBS supports?</p>
| <p>Their documentation doesn't explain what <code>hostdir</code> is in their <code>StoragePool</code> resource definition. Judging by <a href="https://github.com/openebs/openebs/blob/master/k8s/openebs-config.yaml" rel="nofollow noreferrer">this</a>, it's just a directory on your Kubernetes nodes.</p>
<p>Also, they mention that there are 2 options: a directory on the host OS or a directory on a mounted disk. Either of those 2 choices means that your directory will be on the node, so it will be a <code>hostdir</code>; I assume that's the only type available.</p>
<p>Hope it helps!</p>
|
<p>I am new to Kubernetes and trying to deploy OpenStack on a Kubernetes cluster; below is the error I see when I try to deploy OpenStack. I am following the OpenStack docs for the deployment.</p>
<pre><code>kube-system ingress-error-pages-56b4446784-crl85 0/1 Pending 0 1d
kube-system ingress-error-pages-56b4446784-m7jrw 0/1 Pending 0 5d
</code></pre>
<p>I have a Kubernetes cluster with one master and one node running on Debian 9. I encountered this error during the OpenStack installation on Kubernetes.</p>
<p>Kubectl describe pod shows the event as below:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m (x7684 over 1d) default-scheduler 0/2 nodes are available: 1 PodToleratesNodeTaints, 2 MatchNodeSelector.
</code></pre>
<p>All I see is a failed scheduling. Even the container logs for the kube-scheduler show it failed to schedule the pod, but they don't say why it failed. I have been stuck at this step for the past few hours trying to debug...</p>
<p>PS: I am running debian9, kube version: v1.9.2+coreos.0, Docker - 17.03.1-ce</p>
<p>Any help appreciated ....</p>
| <p>Looks like you have a toleration on your Pod and don't have nodes with the taints for those tolerations. Would help to post the definition for your Ingress and its corresponding Deployment or DaemonSet.</p>
<p>You would generally taint your node(s) like this (the taint effect must be one of <code>NoSchedule</code>, <code>PreferNoSchedule</code> or <code>NoExecute</code>):</p>
<pre><code>kubectl taint nodes <your-node> key=value:NoSchedule
</code></pre>
<p>Then on your PodSpec something like this:</p>
<pre><code>tolerations:
- key: "key"
operator: "Equal"
value: "value"
effect: "IngressNode"
</code></pre>
<p>It could also be because of missing labels on your node that your Pod needs in the <code>nodeSelector</code> field:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
env: test
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
nodeSelector:
cpuType: haswell
</code></pre>
<p>Then you'd add a label to your node:</p>
<pre><code>kubectl label nodes kubernetes-foo-node-1 cpuType=haswell
</code></pre>
<p>Hope it helps!</p>
|
<p>Just as the title says, Kubernetes's liveness probe always fails when the pod is under heavy load (500+ concurrency, 100,000 requests). I gave more CPU & memory resources to the pod, but it does not help.</p>
<p>Is there any method to handle this situation ?</p>
<p>Thanks!</p>
| <p>As per @Williams Mao, the issue was resolved by increasing <code>readinessProbe.timeoutSeconds</code>.</p>
<p>If an application is under heavy load, the default <code>timeoutSeconds: 1</code> may not be enough.
Good to read: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">configure-liveness-readiness-probes</a></p>
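<p>A minimal sketch of a more forgiving probe — the path, port and numbers below are assumptions to tune for your app, and the same fields apply to <code>readinessProbe</code>:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz     # assumed health endpoint
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5    # default is 1s, often too tight under heavy load
  failureThreshold: 3
</code></pre>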
|
<p>I'm new to devops and kubernetes and was setting up the local development environment.
To keep deployments hurdle-free, I wanted to keep the development environment as similar as possible to the deployment environment. So, for that, I'm using minikube as a single-node cluster, and that solves a lot of my problems, but right now, as far as I know, a developer needs to do the following to see the changes:</p>
<ol>
<li>write a code locally,</li>
<li>create a container image and then push it to a container registry</li>
<li>apply the kubernetes configuration with updated container image</li>
</ol>
<p>But the major issue with this approach is the high development time. Can you suggest a better approach by which I can see the changes in real time?</p>
| <p>The official Kubernetes blog lists a couple of <strong>CI/CD dev tools</strong> for building Kubernetes based applications: <a href="https://kubernetes.io/blog/2018/05/01/developing-on-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/05/01/developing-on-kubernetes/</a></p>
<p>However, as others have mentioned, dev cycles can become a lot slower with CI/CD approaches for development. Therefore, a colleague and I started the DevSpace CLI. It lets you create a DevSpace inside Kubernetes which allows you a direct terminal access and real-time file synchronization. That means you can use it with any IDE and even use <strong>hot reloading</strong> tools such as nodemon for nodejs.</p>
<p><strong>DevSpace CLI on GitHub: <a href="https://github.com/covexo/devspace" rel="nofollow noreferrer">https://github.com/covexo/devspace</a></strong></p>
|
<p>I am using the python kubernetes api with <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#list_namespaced_pod" rel="noreferrer">list_namespaced_pod</a> to get the pods in my namespace. Now I would like to filter them, using the optional label selector parameter. </p>
<p>The <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#list_namespaced_pod" rel="noreferrer">documention</a> describes this parameter as </p>
<blockquote>
<p>A selector to restrict the list of returned objects by their labels.
Defaults to everything.</p>
</blockquote>
<p>It does not bother to give an example. On <a href="https://www.programcreek.com/python/example/96328/kubernetes.client.CoreV1Api" rel="noreferrer">this website</a>, I found several possibilities on how to use the attribute. I already tried </p>
<pre><code>label_selector='label=my_label'
label_selector='label:my_label'
label_selector='my_label'
</code></pre>
<p>none of which is working. How do I use the parameter label_selector correctly?</p>
| <p>Kubernetes CLI uses two types of label selectors.</p>
<ol>
<li><p>Equality Based
Eg: <code>kubectl get pods -l key=value</code></p></li>
<li><p>Set Based
Eg: <code>kubectl get pod -l 'key in (value1,value2)'</code></p></li>
</ol>
<p><code>label_selector='label=my_label'</code> </p>
<p>should work, else try using </p>
<p><code>label_selector='label in (my_label1, my_label2)'</code>.</p>
<p>If this does not work the error might come from somewhere else.</p>
|
<p>We have several options for how to correctly manage kubernetes declaration files and dockerfiles. Service development may be considered fully separate, without any cross-service communication, for now.</p>
<ol>
<li>Set up a separate repository which contains all k8s and docker declarations plus build/deploy scripts.</li>
<li>Set up a separate repository for the k8s declarations and leave the docker files in the repositories of the appropriate services.</li>
<li>Keep all k8s declarations and docker files next to the service code (same repo as the code).</li>
</ol>
<p>Which approach is better and provides more flexibility? We currently have 2 services, and the number of services with complex internal network configurations will soon grow to 8, so the 3rd option is not looking good at all.</p>
<p>The 2nd option is better, but I'm not quite sure a separate k8s repo is a good idea. Local docker images might also create some difficulties for local development, as developer teams are not really required to interact with other services and spin up all of them.</p>
<p>The 1st option looks good as it gives the devops work a clear, dedicated home, but in the future it may lead to problems when the team needs to spin up the whole k8s cluster. Even in that case, though, this repo could be pulled and run against minikube.</p>
<p>But none of those options looks really good to me. Am I missing something?</p>
| <p>I would recommend #3. All companies I have worked with so far keep it that way.</p>
<h1>From a Kubernetes-native perspective</h1>
<p>Some <strong>Kubernetes native devtools like DevSpace</strong> (<a href="https://github.com/covexo/devspace" rel="noreferrer">https://github.com/covexo/devspace</a>) <strong>and Draft</strong> (<a href="https://github.com/Azure/draft" rel="noreferrer">https://github.com/Azure/draft</a>) recommend putting Dockerfile and Kubernetes resource definitions (or Helm chart files) into the repository containing the code, too.</p>
<h1>From a <strong>DevOps perspective</strong></h1>
<p>Using #3, developers will be able to reproduce production environments better for fixing bugs and CI/CD tools are usually set up to work with infrastructure-as-code like definitions contained in the same repo as the code.</p>
<p>There are <strong>exceptions</strong>, e.g. GitLab's Auto DevOps CI/CD tool. It is very much designed to work with Kubernetes and works with an external chart. However, they do that because they want to simplify the setup of a CI/CD pipeline to Kubernetes and want to abstract from the underlying Helm chart. They, however, also allow to define a Helm chart yourself and imply that it is located inside the same repository as the code.</p>
<h1>From a <strong>version control perspective</strong></h1>
<p>#3 is favorable because you can be sure that you can always run repeatable builds when bundling code and "infrastructure" definitions. Let's say you want to go back in time and use an older version of your code, which version of the non-related other repo would you use? Having everything in one repo, will allow you to checkout any revision or branch and always be sure that you can build and instantiate your code. </p>
<p>Example: You change your dependency management tool and need to run another command to install the dependencies. This change will make you change your Dockerfile accordingly. Having both, code + k8s + Dockerfile all together, will make sure, you can still instantiate older versions that use the old dependency management tool.</p>
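<p>To make option #3 concrete, a typical per-service repository layout might look like this (the names are purely illustrative):</p>
<pre><code>my-service/
  src/                 # application code
  Dockerfile
  chart/               # Helm chart (or plain k8s manifests in a deploy/ folder)
    Chart.yaml
    values.yaml
    templates/
  .gitlab-ci.yml       # or any other CI/CD pipeline definition
</code></pre>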
|
<p>Can anyone share how kubectl exec works, like a technical overview?
Also, what are the ways to troubleshoot it?</p>
<p>For example I have the following issue :when trying to connect to a pod :</p>
<blockquote>
<p>kubectl.exe : I0502 04:25:18.562064 7288 loader.go:357] Config
loaded from file C:\Users\u615648/.kube/config At line:1 char:1
+ .\kubectl.exe exec dataarchives-service-264802370-mjwcl date -n fdm- ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (I0502 04:25:18....48/.kube/config:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError</p>
<p>I0502 04:25:18.636776 7288 round_trippers.go:414] GET
<a href="https://fdmmgmt.uksouth.cloudapp.azure.com/api/v1/namespaces/fdm-development/pods/dataarchives-service-264802370-mjwcl" rel="nofollow noreferrer">https://fdmmgmt.uksouth.cloudapp.azure.com/api/v1/namespaces/fdm-development/pods/dataarchives-service-264802370-mjwcl</a>
I0502 04:25:18.636776 7288 round_trippers.go:421] Request Headers:
I0502 04:25:18.636776 7288 round_trippers.go:424] Accept:
application/json, <em>/</em> I0502 04:25:18.636776 7288
round_trippers.go:424] User-Agent: kubectl.exe/v1.9.3
(windows/amd64) kubernetes/d283541 I0502 04:25:18.716758 7288
round_trippers.go:439] Response Status: 200 OK in 79 milliseconds
I0502 04:25:18.716758 7288 round_trippers.go:442] Response Headers:
I0502 04:25:18.716758 7288 round_trippers.go:445] Content-Type:
application/json I0502 04:25:18.716758 7288 round_trippers.go:445]
Content-Length: 3167 I0502 04:25:18.716758 7288
round_trippers.go:445] Date: Wed, 02 May 2018 04:25:18 GMT I0502
04:25:18.717872 7288 request.go:873] Response Body:
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"dataarchives-service-264802370-mjwcl","generateName":"da
taarchives-service-264802370-","namespace":"fdm-development","selfLink":"/api/v1/namespaces/fdm-development/pods/dataarchives-service-264802370-mjwcl","uid":"eeb7d14f-49
5e-11e8-9d96-002248014205","resourceVersion":"15681866","creationTimestamp":"2018-04-26T14:34:31Z","labels":{"app":"dataarchives","pod-template-hash":"264802370"},"annot
ations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"fdm-development\",\"n
ame\":\"dataarchives-service-264802370\",\"uid\":\"eeaf949c-495e-11e8-9d96-002248014205\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"15075652\"}}\n"},"ownerRefe
rences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"dataarchives-service-264802370","uid":"eeaf949c-495e-11e8-9d96-002248014205","controller":true,"b
lockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"uploadsfileshare
[truncated 2143 chars] I0502 04:25:18.908749 7288
round_trippers.go:414] POST
<a href="https://fdmmgmt.uksouth.cloudapp.azure.com/api/v1/namespaces/fdm-development/pods/dataarchives-service-26480237" rel="nofollow noreferrer">https://fdmmgmt.uksouth.cloudapp.azure.com/api/v1/namespaces/fdm-development/pods/dataarchives-service-26480237</a>
0-mjwcl/exec?command=date&command=cmd&container=dataarchives&container=dataarchives&stderr=true&stdout=true
I0502 04:25:18.908749 7288 round_trippers.go:421] Request Headers:
I0502 04:25:18.908749 7288 round_trippers.go:424]<br>
X-Stream-Protocol-Version: v4.channel.k8s.io I0502 04:25:18.908749<br>
7288 round_trippers.go:424] X-Stream-Protocol-Version:
v3.channel.k8s.io I0502 04:25:18.908749 7288 round_trippers.go:424]
X-Stream-Protocol-Version: v2.channel.k8s.io I0502 04:25:18.908749<br>
7288 round_trippers.go:424] X-Stream-Protocol-Version:
channel.k8s.io I0502 04:25:18.908749 7288 round_trippers.go:424]<br>
User-Agent: kubectl.exe/v1.9.3 (windows/amd64) kubernetes/d283541
I0502 04:25:19.086745 7288 round_trippers.go:439] Response Status:
401 Unauthorized in 177 milliseconds I0502 04:25:19.086745 7288
round_trippers.go:442] Response Headers: I0502 04:25:19.086745 7288
round_trippers.go:445] Date: Wed, 02 May 2018 04:25:19 GMT I0502
04:25:19.086745 7288 round_trippers.go:445] Content-Length: 12
I0502 04:25:19.086745 7288 round_trippers.go:445] Content-Type:
text/plain; charset=utf-8 F0502 04:25:19.086745 7288
helpers.go:119] error: unable to upgrade connection: Unauthorized</p>
</blockquote>
<p>Help appreciated.</p>
| <p>Internally, the kubectl command interacts with the Kubernetes API server via HTTP / SPDY. The API server of Kubernetes is a regular REST API. Kubernetes is using SPDY for now but the maintainers plan to switch to HTTP/2 as soon as docker allows them to do so: <a href="https://github.com/kubernetes/kubernetes/issues/7452" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/7452</a></p>
<p>You can actually take a look here to see how the kubectl command calls the kubernetes go-client to interact with the REST API: <a href="https://github.com/kubernetes/kubernetes/blob/e6272b887b81a62e6f06b7fac4b3b61d1c8bf657/pkg/kubectl/cmd/exec/exec.go#L310" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/e6272b887b81a62e6f06b7fac4b3b61d1c8bf657/pkg/kubectl/cmd/exec/exec.go#L310</a></p>
<p>Regarding your concrete stack trace: "error: unable to upgrade connection: Unauthorized" looks like you are not authorized. Are you able to run other commands with kubectl such as "kubectl get po --all-namespaces"?</p>
|
<p>I have a Spring Boot 2.x project using Mongo. I am running this via Docker (using compose locally) and Kubernetes. I am trying to connect my service to a Mongo server. This is confusing to me, but for development I am using a local instance of Mongo, but deployed in GCP I have named mongo services.</p>
<p>here is my application.properties file:</p>
<pre><code>#mongodb
spring.data.mongodb.uri= mongodb://mongo-serviceone:27017/serviceone
#logging
logging.level.org.springframework.data=trace
logging.level.=trace
</code></pre>
<p>And my Docker-compose:</p>
<pre><code>version: '3'
# Define the services/containers to be run
services:
service: #name of your service
build: ./ # specify the directory of the Dockerfile
ports:
- "3009:3009" #specify ports forwarding
links:
- mongo-serviceone # link this service to the database service
volumes:
- .:/usr/src/app
depends_on:
- mongo-serviceone
mongo-serviceone: # name of the service
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
</code></pre>
<p>When I try <code>docker-compose up</code>, I get the following error:</p>
<blockquote>
<p>mongo-serviceone_1 | 2018-08-22T13:50:33.454+0000 I NETWORK
[initandlisten] waiting for connections on port 27017 service_1
| 2018-08-22 13:50:33.526 INFO 1 --- [localhost:27017]
org.mongodb.driver.cluster : Exception in monitor thread
while connecting to server localhost:27017 service_1
| service_1 | com.mongodb.MongoSocketOpenException:
Exception opening socket service_1 | at
com.mongodb.connection.SocketStream.open(SocketStream.java:62)
~[mongodb-driver-core-3.6.3.jar!/:na]</p>
</blockquote>
<p>running <code>docker ps</code> shows me:</p>
<pre><code>692ebb72cf30 serviceone_service "java -Djava.securit…" About an hour ago Up 9 minutes 0.0.0.0:3009->3009/tcp, 8080/tcp serviceone_service_1
6cd55ae7bb77 mongo "docker-entrypoint.s…" About an hour ago Up 9 minutes 0.0.0.0:27017->27017/tcp serviceone_mongo-serviceone_1
</code></pre>
<p>While I am trying to connect to a local mongo, I thought that by using the name "mongo-serviceone" the service container would be able to resolve and reach it.</p>
| <p>Hard to tell what the exact issue is, but maybe this is just an issue because of the space " " after "spring.data.mongodb.uri=" and before "mongodb://mongo-serviceone:27017/serviceone"?</p>
<p>If not, maybe exec into the "service" container and check that the mongodb hostname resolves and is reachable, e.g. with <code>ping mongo-serviceone</code> (note that ping cannot test port 27017 itself).</p>
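<p>A concrete way to run that check from the host — the service name <code>service</code> is taken from your compose file, and <code>getent</code> is only a fallback in case <code>ping</code> isn't installed inside the image:</p>
<pre><code>docker-compose exec service ping -c 3 mongo-serviceone
# or, if ping is missing inside the image:
docker-compose exec service getent hosts mongo-serviceone
</code></pre>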
<p>Let me know the output of this, so I can help you analyze and fix this issue.</p>
<p>Alternatively, you could switch from using docker compose to a Kubernetes native dev tool, as you are planning to run your application on Kubernetes anyways. Here is a list of possible tools:</p>
<p>Allow hot reloading:</p>
<ul>
<li><strong>DevSpace:</strong> <a href="https://github.com/covexo/devspace" rel="nofollow noreferrer">https://github.com/covexo/devspace</a></li>
<li><strong>ksync:</strong> <a href="https://github.com/vapor-ware/ksync" rel="nofollow noreferrer">https://github.com/vapor-ware/ksync</a></li>
</ul>
<p>Pure CI/CD tools for dev:</p>
<ul>
<li><strong>Skaffold:</strong> <a href="https://github.com/GoogleContainerTools/skaffold" rel="nofollow noreferrer">https://github.com/GoogleContainerTools/skaffold</a></li>
<li><strong>Draft:</strong> <a href="https://github.com/Azure/draft" rel="nofollow noreferrer">https://github.com/Azure/draft</a></li>
</ul>
<p>For most of them, you will only need minikube or a dev namespace inside your existing cluster on GCP.</p>
|
<p>I used the AWS Kubernetes Quickstart to create a Kubernetes cluster in a VPC and private subnet: <a href="https://aws-quickstart.s3.amazonaws.com/quickstart-heptio/doc/heptio-kubernetes-on-the-aws-cloud.pdf" rel="nofollow noreferrer">https://aws-quickstart.s3.amazonaws.com/quickstart-heptio/doc/heptio-kubernetes-on-the-aws-cloud.pdf</a>. It was running fine for a while. I have Calico installed on my Kubernetes cluster. I have two nodes and a master. The calico pods on the master are running fine, the ones on the nodes are in crashloopbackoff state:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
calico-etcd-ztwjj 1/1 Running 1 55d
calico-kube-controllers-685755779f-ftm92 1/1 Running 2 55d
calico-node-gkjgl 1/2 CrashLoopBackOff 270 22h
calico-node-jxkvx 2/2 Running 4 55d
calico-node-mxhc5 1/2 CrashLoopBackOff 9 25m
</code></pre>
<p>Describing one of the crashed pods:</p>
<pre><code>ubuntu@ip-10-0-1-133:~$ kubectl describe pod calico-node-gkjgl -n kube-system
Name: calico-node-gkjgl
Namespace: kube-system
Node: ip-10-0-0-237.us-east-2.compute.internal/10.0.0.237
Start Time: Mon, 17 Sep 2018 16:56:41 +0000
Labels: controller-revision-hash=185957727
k8s-app=calico-node
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 10.0.0.237
Controlled By: DaemonSet/calico-node
Containers:
calico-node:
Container ID: docker://d89979ba963c33470139fd2093a5427b13c6d44f4c6bb546c9acdb1a63cd4f28
Image: quay.io/calico/node:v3.1.1
Image ID: docker-pullable://quay.io/calico/node@sha256:19fdccdd4a90c4eb0301b280b50389a56e737e2349828d06c7ab397311638d29
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 18 Sep 2018 15:14:44 +0000
Finished: Tue, 18 Sep 2018 15:14:44 +0000
Ready: False
Restart Count: 270
Requests:
cpu: 250m
Liveness: http-get http://:9099/liveness delay=10s timeout=1s period=10s #success=1 #failure=6
Readiness: http-get http://:9099/readiness delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
ETCD_ENDPOINTS: <set to the key 'etcd_endpoints' of config map 'calico-config'> Optional: false
CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false
CLUSTER_TYPE: kubeadm,bgp
CALICO_DISABLE_FILE_LOGGING: true
CALICO_K8S_NODE_REF: (v1:spec.nodeName)
FELIX_DEFAULTENDPOINTTOHOSTACTION: ACCEPT
CALICO_IPV4POOL_CIDR: 192.168.0.0/16
CALICO_IPV4POOL_IPIP: Always
FELIX_IPV6SUPPORT: false
FELIX_IPINIPMTU: 1440
FELIX_LOGSEVERITYSCREEN: info
IP: autodetect
FELIX_HEALTHENABLED: true
Mounts:
/lib/modules from lib-modules (ro)
/var/lib/calico from var-lib-calico (rw)
/var/run/calico from var-run-calico (rw)
/var/run/secrets/kubernetes.io/serviceaccount from calico-cni-plugin-token-b7sfl (ro)
install-cni:
Container ID: docker://b37e0ec7eba690473a4999a31d9f766f7adfa65f800a7b2dc8e23ead7520252d
Image: quay.io/calico/cni:v3.1.1
Image ID: docker-pullable://quay.io/calico/cni@sha256:dc345458d136ad9b4d01864705895e26692d2356de5c96197abff0030bf033eb
Port: <none>
Host Port: <none>
Command:
/install-cni.sh
State: Running
Started: Mon, 17 Sep 2018 17:11:52 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 17 Sep 2018 16:56:43 +0000
Finished: Mon, 17 Sep 2018 17:10:53 +0000
Ready: True
Restart Count: 1
Environment:
CNI_CONF_NAME: 10-calico.conflist
ETCD_ENDPOINTS: <set to the key 'etcd_endpoints' of config map 'calico-config'> Optional: false
CNI_NETWORK_CONFIG: <set to the key 'cni_network_config' of config map 'calico-config'> Optional: false
Mounts:
/host/etc/cni/net.d from cni-net-dir (rw)
/host/opt/cni/bin from cni-bin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from calico-cni-plugin-token-b7sfl (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
var-run-calico:
Type: HostPath (bare host directory volume)
Path: /var/run/calico
HostPathType:
var-lib-calico:
Type: HostPath (bare host directory volume)
Path: /var/lib/calico
HostPathType:
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni-net-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
calico-cni-plugin-token-b7sfl:
Type: Secret (a volume populated by a Secret)
SecretName: calico-cni-plugin-token-b7sfl
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
:NoExecute
:NoSchedule
:NoExecute
CriticalAddonsOnly
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 4m (x6072 over 22h) kubelet, ip-10-0-0-237.us-east-2.compute.internal Back-off restarting failed container
</code></pre>
<p>The logs for the same pod: </p>
<pre><code>ubuntu@ip-10-0-1-133:~$ kubectl logs calico-node-gkjgl -n kube-system -c calico-node
2018-09-18 15:14:44.605 [INFO][8] startup.go 251: Early log level set to info
2018-09-18 15:14:44.605 [INFO][8] startup.go 269: Using stored node name from /var/lib/calico/nodename
2018-09-18 15:14:44.605 [INFO][8] startup.go 279: Determined node name: ip-10-0-0-237.us-east-2.compute.internal
2018-09-18 15:14:44.609 [INFO][8] startup.go 101: Skipping datastore connection test
2018-09-18 15:14:44.610 [INFO][8] startup.go 352: Building new node resource Name="ip-10-0-0-237.us-east-2.compute.internal"
2018-09-18 15:14:44.610 [INFO][8] startup.go 367: Initialize BGP data
2018-09-18 15:14:44.614 [INFO][8] startup.go 564: Using autodetected IPv4 address on interface ens3: 10.0.0.237/19
2018-09-18 15:14:44.614 [INFO][8] startup.go 432: Node IPv4 changed, will check for conflicts
2018-09-18 15:14:44.618 [WARNING][8] startup.go 861: Calico node 'ip-10-0-0-237' is already using the IPv4 address 10.0.0.237.
2018-09-18 15:14:44.618 [WARNING][8] startup.go 1058: Terminating
Calico node failed to start
</code></pre>
<p>So it seems like there is a conflict finding the node IP address, or Calico seems to think the IP is already assigned to another node. Doing a quick search I found this thread: <a href="https://github.com/projectcalico/calico/issues/1628" rel="nofollow noreferrer">https://github.com/projectcalico/calico/issues/1628</a>. I see that this should be resolved by setting the IP_AUTODETECTION_METHOD to can-reach=DESTINATION, which I'm assuming would be "can-reach=10.0.0.237". This config is an environment variable set on the calico/node container. I have been attempting to shell into the container itself, but kubectl tells me the container is not found: </p>
<pre><code>ubuntu@ip-10-0-1-133:~$ kubectl exec calico-node-gkjgl --stdin --tty /bin/sh -c calico-node -n kube-system
error: unable to upgrade connection: container not found ("calico-node")
</code></pre>
<p>I suspect this is due to Calico being unable to assign IPs. So I logged onto the host and attempted to shell into the container using docker:</p>
<pre><code>root@ip-10-0-0-237:~# docker exec -it k8s_POD_calico-node-gkjgl_kube-system_a6998e98-ba9a-11e8-a9fa-0a97f5a48ef4_1 /bin/bash
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory"
</code></pre>
<p>So I guess there is no shell to execute in the container, which explains why Kubernetes couldn't execute it either. I tried running commands externally to list environment variables, but I haven't been able to find any; I could be running these commands wrong, however: </p>
<pre><code>root@ip-10-0-0-237:~# docker inspect -f '{{range $index, $value := .Config.Env}}{{$value}} {{end}}' k8s_POD_calico-node-gkjgl_kube-system_a6998e98-ba9a-11e8-a9fa-0a97f5a48ef4_1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
root@ip-10-0-0-237:~# docker exec -it k8s_POD_calico-node-gkjgl_kube-system_a6998e98-ba9a-11e8-a9fa-0a97f5a48ef4_1 printenv IP_AUTODETECTION_METHOD
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"printenv\": executable file not found in $PATH"
root@ip-10-0-0-237:~# docker exec -it k8s_POD_calico-node-gkjgl_kube-system_a6998e98-ba9a-11e8-a9fa-0a97f5a48ef4_1 /bin/env
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"/bin/env\": stat /bin/env: no such file or directory"
</code></pre>
<p>Okay, so maybe I am going about this the wrong way. Should I attempt to change the Calico config files using Kubernetes and redeploy it? Where can I find these on my system? I haven't been able to find where to set the environment variables. </p>
| <p>If you look at the <a href="https://docs.projectcalico.org/v3.0/reference/node/configuration" rel="nofollow noreferrer">Calico docs</a>, <code>IP_AUTODETECTION_METHOD</code> already defaults to <code>first-found</code>.</p>
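<p>(For reference, that variable lives on the <code>calico-node</code> container of the <code>calico-node</code> DaemonSet, so if you ever do want to override it, something like this would be the place — shown here only as a sketch:)</p>
<pre><code>kubectl -n kube-system set env daemonset/calico-node -c calico-node IP_AUTODETECTION_METHOD=can-reach=10.0.0.237
</code></pre>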
<p>My guess is that the IP address is not being released by the previous 'run' of calico, or it's simply a bug in the <code>v3.1.1</code> version of calico.</p>
<p>Try:</p>
<ol>
<li><p>Delete your Calico pods that are in a CrashLoopBackOff state</p>
<pre><code>kubectl -n kube-system delete pod calico-node-gkjgl calico-node-mxhc5
</code></pre>
<p>Your pods will be re-created and hopefully initialize.</p></li>
<li><p>Upgrade Calico to <code>v3.1.3</code> or latest. Follow these <a href="https://docs.projectcalico.org/v3.2/getting-started/kubernetes/upgrade/upgrade#upgrading-an-installation-that-uses-an-etcd-datastore" rel="nofollow noreferrer">docs</a> My guess is that Heptio's Calico installation is using the etcd datastore.</p></li>
<li><p>Try to understand how Heptio's AWS AMIs work and see if there are any issues with them. This might take some time so you could contact their support as well.</p></li>
<li><p>Try a different method to install Kubernetes with Calico. Well documented on <a href="https://kubernetes.io" rel="nofollow noreferrer">https://kubernetes.io</a></p></li>
</ol>
|
<p>My objective is to fetch the time series of a metric for a pod running on a kubernetes cluster on GKE using the <a href="https://cloud.google.com/monitoring/api/ref_v3/rest/v3/TimeSeries" rel="nofollow noreferrer">Stackdriver TimeSeries REST API</a>.</p>
<p>I have ensured that Stackdriver monitoring and logging are enabled on the kubernetes cluster.</p>
<p>Currently, I am able to fetch the time series of all the resources available <strong>in a cluster</strong> using the following filter:</p>
<pre><code>metric.type="container.googleapis.com/container/cpu/usage_time" AND resource.labels.cluster_name="<MY_CLUSTER_NAME>"
</code></pre>
<p>In order to fetch the time series of a <strong>given pod id</strong>, I am using the following filter:</p>
<pre><code>metric.type="container.googleapis.com/container/cpu/usage_time" AND resource.labels.cluster_name="<MY_CLUSTER_NAME>" AND resource.labels.pod_id="<POD_ID>"
</code></pre>
<p>This filter returns an <strong>HTTP 200 OK</strong> with an empty response body. I have found the pod ID from the <code>metadata.uid</code> field received in the response of the following kubectl command:</p>
<pre><code>kubectl get deploy -n default <SERVICE_NAME> -o yaml
</code></pre>
<p>However, when I use the Pod ID of a <strong>background container</strong> spawned by GKE/Stackdriver, I do get the time series values.</p>
<p>Since I am able to see Stackdriver metrics of my pod on the GKE UI, I believe I should also get the metric values using the REST API.</p>
<p>My doubts/questions are:</p>
<ol>
<li>Am I fetching the Pod ID of my pod correctly using kubectl?</li>
<li>Could there be some issue with my cluster setup/service deployment due to which I'm unable to fetch the metrics?</li>
<li>Is there some other way in which I can get the time series of my pod using the REST APIs?</li>
</ol>
| <ol>
<li><p>I wouldn't rely on <code>kubectl get deploy</code> for pod ids. I would get them with something like <code>kubectl -n default get pods | grep <prefix-for-your-pod> | awk '{print $1}'</code> (see the note after this list if you need the pod UID rather than the name)</p></li>
<li><p>I don't think so, but the best way to find out is opening a support ticket with GCP if you have any doubts.</p></li>
<li><p>Not that I'm aware of, Stackdriver is the monitoring solution in GCP. Again, you can check with GCP support. There are other tools that you can use to get metrics from Kubernetes like <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a>. There are multiple guides on the web on how to set it up with Grafana on k8s. This is <a href="https://github.com/giantswarm/kubernetes-prometheus" rel="nofollow noreferrer">one</a> for example.</p></li>
</ol>
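<p>On point 1: if the <code>pod_id</code> resource label turns out to expect the pod's UID rather than its name (plausible, given that the value that worked for you came from a <code>metadata.uid</code> field — and note the deployment's <code>metadata.uid</code> is the deployment's UID, not the pod's), you can print a pod's UID directly:</p>
<pre><code>kubectl -n default get pod <POD_NAME> -o jsonpath='{.metadata.uid}'
</code></pre>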
<p>Hope it helps!</p>
|
<p>Do you know if it is possible to mount a local folder into a running Kubernetes container?</p>
<p>Like <code>docker run -it -v .:/dev some-image bash</code> I am doing this on my local machine and then remote debug into the container from VS Code.</p>
<p>Update: This might be a solution: <code>telepresence</code>
Link: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/" rel="noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/</a></p>
<p>Do you know if it is possible to mount a folder from my local computer into Kubernetes? This container should have access to a Cassandra IP address.</p>
<p>Do you know if it is possible?</p>
| <h1>Kubernetes Volume</h1>
<p>Using <strong>hostPath</strong> would be a solution: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#hostpath</a></p>
<p>However, it will only work if your cluster runs on the same machine as your mounted folder.</p>
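<p>A minimal sketch of such a mount, assuming the folder exists on the node that runs the pod (the image, paths and names are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  containers:
  - name: app
    image: some-image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: code
      mountPath: /work          # where the folder appears inside the container
  volumes:
  - name: code
    hostPath:
      path: /Users/me/project   # must exist on the node itself, not just on your laptop
</code></pre>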
<p>Another but probably slightly over-powered method would be to use a <strong>distributed or parallel filesystem</strong> and mount it into your container as well as to mount it on your local host machine. An example would be CephFS which allows multi-read-write mounts. You could start a ceph cluster with rook: <a href="https://github.com/rook/rook" rel="nofollow noreferrer">https://github.com/rook/rook</a></p>
<h1>Kubernetes Native Dev Tools with File Sync Functionality</h1>
<p>A solution would be to use a dev tool that allows you to sync the contents of the local folder to the folder inside a kubernetes pod. There, for example, is <strong>ksync</strong>: <a href="https://github.com/vapor-ware/ksync" rel="nofollow noreferrer">https://github.com/vapor-ware/ksync</a></p>
<p>I have tested ksync and many kubernetes native dev tools (e.g. telepresence, skaffold, draft) but I found them very hard to configure and time-consuming to use. That's why I created an open source project called <strong>DevSpace</strong> together with a colleague: <strong><a href="https://github.com/loft-sh/devspace" rel="nofollow noreferrer">https://github.com/loft-sh/devspace</a></strong></p>
<p>It allows you to configure a real-time two-way sync between local folders and folders within containers running inside k8s pods. It is the only tool that is able to let you use <strong>hot reloading</strong> tools such as nodemon for nodejs. It works with volumes as well as with ephemeral / non-persistent folders and lets you directly enter the containers similar to kubectl exec and much more. <strong>It works with minikube and any other self-hosted or cloud-based kubernetes clusters.</strong></p>
<p>Let me know if that helps you and feel free to open an issue if you are missing something you need for your optimal dev workflow with Kubernetes. We will be happy to work on it.</p>
|
<p>I am trying to use the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">kubernetes go-client</a> with cloud.google.com/go/container. I create the cluster using the google cloud go container package, then I want to deploy on that cluster using go-client. The <a href="https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go" rel="nofollow noreferrer">out of cluster example</a> given by the go-client uses the kube config file to get the credentials for the cluster. But since I just created this cluster within my application I don’t have that config file.</p>
<p>How can I setup a “k8s.io/client-go/rest” config with a "google.golang.org/genproto/googleapis/container/v1" Cluster? What are the required fields? The code below is what I currently have (without showing the actual CA certificate).</p>
<pre><code>func getConfig(cluster *containerproto.Cluster) *rest.Config {
return &rest.Config{
Host: "https://" + cluster.GetEndpoint(),
TLSClientConfig: rest.TLSClientConfig{
Insecure: false,
CAData: []byte(`-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----`),
},
}
</code></pre>
<p>It results in this error: x509: certificate signed by unknown authority. So there is obviously something missing.
Any other approach is more than welcome! Thanks in advance</p>
| <p>The ClientCertificate, ClientKey and ClusterCaCertificate need to be decoded as described <a href="https://github.com/terraform-providers/terraform-provider-kubernetes/issues/85" rel="noreferrer">here</a></p>
<pre><code>func CreateK8sClientFromCluster(cluster *gkev1.Cluster) {
decodedClientCertificate, err := base64.StdEncoding.DecodeString(cluster.MasterAuth.ClientCertificate)
if err != nil {
fmt.Println("decode client certificate error:", err)
return
}
decodedClientKey, err := base64.StdEncoding.DecodeString(cluster.MasterAuth.ClientKey)
if err != nil {
fmt.Println("decode client key error:", err)
return
}
decodedClusterCaCertificate, err := base64.StdEncoding.DecodeString(cluster.MasterAuth.ClusterCaCertificate)
if err != nil {
fmt.Println("decode cluster CA certificate error:", err)
return
}
config := &rest.Config{
Username: cluster.MasterAuth.Username,
Password: cluster.MasterAuth.Password,
Host: "https://" + cluster.Endpoint,
TLSClientConfig: rest.TLSClientConfig{
Insecure: false,
CertData: decodedClientCertificate,
KeyData: decodedClientKey,
CAData: decodedClusterCaCertificate,
},
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
fmt.Printf("failed to get k8s client set from config: %s\n", err)
return
}
}
</code></pre>
|
<p>I have a local kubernetes cluster setup using the edge release of docker (mac). My pods use an env var that I've defined to be my DB's url. These env vars are defined in a config map as:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
data:
DB_URL: postgres://user@localhost/my_dev_db?sslmode=disable
</code></pre>
<p>What should I be using here instead of localhost? I need this env var to point to my local dev machine.</p>
| <h1>Option 1 - Local Networking Approach</h1>
<p>If you are running minikube, I would recommend taking a look at the answers to this question: <a href="https://stackoverflow.com/questions/42268814/routing-an-internal-kubernetes-ip-address-to-the-host-system">Routing an internal Kubernetes IP address to the host system</a></p>
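<p>For example, if the cluster node can reach your host at <code>10.0.2.2</code> (commonly the case for minikube in a VirtualBox VM — verify the right IP for your own setup, since Docker for Mac differs), the ConfigMap from your question would simply point there:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  DB_URL: postgres://user@10.0.2.2/my_dev_db?sslmode=disable
</code></pre>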
<h1>Option 2 - Tunneling Solution: Connect to an External Service</h1>
<p>A very simple but a little hacky solution would be to use a tunneling tool like ngrok: <a href="https://ngrok.com/" rel="nofollow noreferrer">https://ngrok.com/</a></p>
<h1>Option 3 - Cloud-native Development (run everything inside k8s)</h1>
<p>If you plan to follow the suggestions of whites11, you could make your life a lot easier with using a <strong>kubernetes-native dev tool</strong> such as DevSpace (<a href="https://github.com/covexo/devspace" rel="nofollow noreferrer">https://github.com/covexo/devspace</a>) or Draft (<a href="https://github.com/Azure/draft" rel="nofollow noreferrer">https://github.com/Azure/draft</a>). Both work with minikube or other self-hosted clusters.</p>
|
<p>I've been messing around with kubernetes and I'm trying to setup a development environment with minikube, node and nodemon. My image works fine if I run it in a standalone container, however it crashes with the following error if I put it in my deployment.</p>
<pre><code>yarn run v1.3.2
$ nodemon --legacy-watch --exec babel-node src/index.js
/app/node_modules/.bin/nodemon:2
'use
^^^^^
SyntaxError: Invalid or unexpected token
at createScript (vm.js:80:10)
at Object.runInThisContext (vm.js:139:10)
at Module._compile (module.js:599:28)
at Object.Module._extensions..js (module.js:646:10)
at Module.load (module.js:554:32)
at tryModuleLoad (module.js:497:12)
at Function.Module._load (module.js:489:3)
at Function.Module.runMain (module.js:676:10)
at startup (bootstrap_node.js:187:16)
at bootstrap_node.js:608:3
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
</code></pre>
<p>I have a <code>dev</code> command in my package.json as so </p>
<pre><code>"dev": "nodemon --legacy-watch --exec babel-node src/index.js",
</code></pre>
<p>My image is being built with the following docker file</p>
<pre><code>FROM node:8.9.1-alpine
WORKDIR /app
COPY . /app/
RUN cd /app && yarn install
</code></pre>
<p>and my deployment is set up with this</p>
<pre><code>---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
app: nodeapp
name: nodeapp
spec:
replicas: 3
selector:
matchLabels:
app: nodeapp
template:
metadata:
labels:
app: nodeapp
spec:
containers:
- name: nodeapp
imagePullPolicy: Never
image: app:latest
command:
- yarn
args:
- run
- dev
ports:
- containerPort: 8080
volumeMounts:
- name: code
mountPath: /app
volumes:
- name: code
hostPath:
path: /Users/adam/Workspaces/scratch/expresssite
---
apiVersion: v1
kind: Service
metadata:
name: nodeapp
labels:
app: nodeapp
spec:
selector:
app: nodeapp
ports:
- name: nodeapp
port: 8080
nodePort: 30005
type: NodePort
---
</code></pre>
<p>It's obviously crashing on the <code>'use strict'</code> in the nodemon binstub, but I have no idea why. It works just fine as a standalone docker container. The goal is to have nodemon restart the node process in each pod when I save changes for development, but I'm really not sure where my mistake is.</p>
<p>EDIT:</p>
<p>I have narrowed it down slightly. It is mounting the <code>node_modules</code> from the host filesystem, and this is what is causing it to crash. I do have a .dockerignore file set up. Is there a way to either get it to work like this (so if I run <code>npm install</code> it will pick up the changes), or is there a way to get it to use the node_modules that were installed with the image?</p>
| <p>There are several issues when mounting node_modules fro your local computer to a container, e.g.:</p>
<p>1) node_modules has local symlinks which will not easily be resolvable inside your container.</p>
<p>2) If you have dependencies which rely on native binaries, they will be compiled for the operating system you installed the dependencies on. If you mount them into a different OS, there will be issues running these binaries. Are you running <code>npm install</code> on Win/Mac and mounting it into the linux-based container built from the image above? Then that is most likely your problem.</p>
<p><strong>We experienced the exact same problems in our team while developing software directly inside Kubernetes pods/containers. That's why we started an open source project called DevSpace CLI: <a href="https://github.com/covexo/devspace" rel="nofollow noreferrer">https://github.com/covexo/devspace</a></strong></p>
<p>The DevSpace CLI can establish a reliable and super fast 2-way code sync between your local folders and folders within your dev containers (works with any Kubernetes cluster, any volume and even with ephemeral / non-persistent folders) and it is designed to work perfectly with hot reloading tools such as nodemon. Let me know if it works for you or if there is anything you are missing.</p>
|
<p>In a Helm Chart I have to following values</p>
<pre><code>dataCenters:
- name: a
replicas: 3
- name: b
replicas: 2
</code></pre>
<p>When generating the template I would like my output to be like the following </p>
<pre><code>server.1 = a-1
server.2 = a-2
server.3 = a-3
server.4 = b-1
server.5 = b-2
</code></pre>
<p>I tried this code</p>
<pre><code>{{- $index := 0 -}}
{{ range $dc := .Values.cluster.dataCenters -}}
{{ range $seq := (int $dc.replicas | until) -}}
{{- $index := (add $index 1) -}}
server.{{ $index }}={{ $dc.name }}-{{ $seq }}
{{ end -}}
{{ end -}}
</code></pre>
<p>however, in helm templates I don't think you can reassign the value of the index as my 4th line is attempting, and because of that I get this output:</p>
<pre><code>server.1 = a-1
...
server.1 = b-2
</code></pre>
<p>How does one calculate the global index <strong>0 to 4</strong> (1 to 5 in my situation) using the Sprig/Helm templating language?</p>
| <p>I have a way to do it that involves some trickery, heavily inspired by functional programming experience.</p>
<p>A Go/Helm template takes a single parameter, but the <a href="http://masterminds.github.io/sprig/" rel="nofollow noreferrer">sprig</a> library gives you the ability to create lists, and the <a href="https://godoc.org/text/template" rel="nofollow noreferrer">text/template</a> <code>index</code> function lets you pick things out of a list. That lets you write a "function" template that takes multiple parameters, packed into a list.</p>
<p>Say we want to write out a single line of this output. We need to keep track of which server number we're at (globally), which replica number we're at (within the current data center), the current data center record, and the records we haven't emitted yet. If we're past the end of the current list, then print the records for the rest of the data centers; otherwise print a single line for the current replica and repeat for the next server/replica index.</p>
<pre><code>{{ define "emit-dc" -}}
{{ $server := index . 0 -}}
{{ $n := index . 1 -}}
{{ $dc := index . 2 -}}
{{ $dcs := index . 3 -}}
{{ if gt $n (int64 $dc.replicas) -}}
{{ template "emit-dcs" (list $server $dcs) -}}
{{ else -}}
server.{{ $server }}: {{ $dc.name }}-{{ $n }}
{{ template "emit-dc" (list (add1 $server) (add1 $n) $dc $dcs) -}}
{{ end -}}
{{ end -}}
</code></pre>
<p>At the top level, we know the index of the next server number, plus the list of data centers. If that list is empty, we're done. Otherwise we can start emitting rows from the first data center in the list.</p>
<pre><code>{{ define "emit-dcs" -}}
{{ $server := index . 0 -}}
{{ $dcs := index . 1 -}}
{{ if ne 0 (len $dcs) -}}
{{ template "emit-dc" (list $server 1 (first $dcs) (rest $dcs)) -}}
{{ end -}}
{{ end -}}
</code></pre>
<p>Then in your actual resource definition (say, your ConfigMap definition) you can invoke this template with the first server number:</p>
<pre><code>{{ template "emit-dcs" (list 1 .Values.dataCenters) -}}
</code></pre>
<p>Copy this all into a dummy Helm chart and you can verify the output:</p>
<pre><code>% helm template .
---
# Source: x/templates/test.yaml
server.1: a-1
server.2: a-2
server.3: a-3
server.4: b-1
server.5: b-2
</code></pre>
<p>I suspect this trick won't work well if the number of servers goes much above the hundreds (the Go templating engine almost certainly isn't <em>tail recursive</em>), and this is somewhat trying to impose standard programming language methods on a templating language that isn't quite designed for it. But...it works.</p>
|
<p>I'm trying to access a MySQL database hosted inside a docker container on localhost from inside a minikube pod with little success. I tried the solution described <a href="https://stackoverflow.com/questions/43354167/minikube-expose-mysql-running-on-localhost-as-service">Minikube expose MySQL running on localhost as service</a> but to no effect. I have modelled my solution on the service we use on AWS but it does not appear to work with minikube. My service reads as follows</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql-db-svc
namespace: external
spec:
type: ExternalName
ExternalName: 172.17.0.2
</code></pre>
<p>...where I try to connect to my database from inside a pod using "mysql-db-svc" on port 3306 but to no avail. If I try and CURL the address "mysql-db-svc" from inside a pod it cannot resolve the host name.</p>
<p>Can anybody please advise a frustrated novice?</p>
| <p>I'm using ubuntu with Minikube and my database runs outside of minikube inside a docker container and can be accessed from localhost @ 172.17.0.2. My Kubernetes service for my external mysql container reads as follows:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: mysql-db-svc
namespace: external
spec:
type: ExternalName
externalName: 10.0.2.2
</code></pre>
<p>Then inside my .env for a project my DB_HOST is defined as</p>
<pre><code>mysql-db-svc.external.svc
</code></pre>
<p>... the name of the service "mysql-db-svc", followed by its namespace "external" and the "svc" suffix.</p>
<p>Hope that makes sense.</p>
|
<p>I am facing "theoritical" compatility issues when using distroless-based containers with kubernetess 1.10.</p>
<p>Actually, distroless requires docker 17.5 (<a href="https://github.com/GoogleContainerTools/distroless" rel="nofollow noreferrer">https://github.com/GoogleContainerTools/distroless</a>) whereas kubernetes does support version 17.03 only (<a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies</a>) </p>
<ol>
<li>is it possible to run distroless containers within kubernetes 1.10
clusters w/o any issue? </li>
<li>is it possible to build distroless based
images on a build server running docker 17.05 then deploying it on a
kubernetes 1.10 cluster (docker 17.03)?</li>
</ol>
| <p>The requirement for 17.05 is <strong>only</strong> to build a "distroless" image with <code>docker build</code> using <strong>multistage</strong> <code>Dockerfile</code>. When you have an image built, there is nothing stopping it from running on older Docker / containerd versions.</p>
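<p>A minimal sketch of the kind of multi-stage Dockerfile this refers to (the image names, paths and the Go example are purely illustrative):</p>
<pre><code># build stage: Docker 17.05+ is only needed on the machine running `docker build`
FROM golang:1.10 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# final stage: distroless image, runs fine on older Docker/Kubernetes nodes
FROM gcr.io/distroless/base
COPY --from=build /app /app
ENTRYPOINT ["/app"]
</code></pre>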
<p>Docker has supported images with no distribution for ages now by using <code>FROM scratch</code> and leaving it to the image author to populate whatever the software needs, which in some cases, like fully static binaries, might be only the binary of the software and nothing more :)</p>
|
<p>I have created a local 3-node kubernetes cluster in GNOME-Boxes, using the CentOS minimal ISO. This is for testing a custom install on client-provisioned machines. Everything went very smoothly, and I even had things working well for a few days. However, I needed to restart my server, so I brought the k8s cluster down with it via the <code>shutdown now</code> command run on each node in the cluster. When I brought everything back up, the cluster did not come back up as expected. The logs tell me there was an issue bringing up the apiserver and etcd images. The docker logs for the apiserver show me this:</p>
<pre><code>Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0919 03:05:10.238042 1 server.go:703] external host was not specified, using 192.168.122.2
I0919 03:05:10.238160 1 server.go:145] Version: v1.11.3
Error: unable to load server certificate: open /etc/kubernetes/pki/apiserver.crt: permission denied
...[cli params for kube-apiserver]
error: unable to load server certificate: open /etc/kubernetes/pki/apiserver.crt: permission denied
</code></pre>
<p>When I check the permissions, it is set to <code>644</code>, and the file is definitely there. My real question is why does it work when I initialize my cluster with kubeadm, then fail to restart properly?</p>
<p>Here are the steps I am using to init my cluster:</p>
<pre><code># NOTE: this file needs to be run as root
# 1: install kubelet, kubeadm, kubectl, and docker
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
yum install -y kubelet kubeadm kubectl docker --disableexcludes=kubernetes
systemctl enable --now kubelet
systemctl enable --now docker
# 2: disable enforcement of SELinux policies (k8s has own policies)
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
setenforce 0
# 3: make sure the network can function properly
sysctl net.bridge.bridge-nf-call-iptables=1
# 4. insert all necessary modules
modprobe --all ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr
cat <<EOF > /etc/modules-load.d/ifvs.conf
ip_vs_wrr
ip_vs_sh
ip_vs
ip_vs_rr
EOF
systemctl disable --now firewalld
# 5: initialize the cluster. this should happen only on the master node. This will print out instructions and a command that should be run on each supporting node.
kubeadm init --pod-network-cidr=10.244.0.0/16
# 6: run the kubeadm join command from result of step 5 on all the other nodes
kubeadm join 192.168.122.91:6443 --token jvr7dh.ymoahxxhu3nig8kl --discovery-token-ca-cert-hash sha256:7cc1211aa882c535f371e2cf6706072600f2cc47b7da18b1d242945c2d8cab65
#################################
# the cluster is all setup to be accessed via API. use kubectl on your local machine from here on out!
# to access the cluster via kubectl, you need to merge the contents of <master_node>:/etc/kubernetes/admin.conf with your local ~/.kube/config
#################################
# 7: to allow the master to run pods:
kubectl taint nodes --all node-role.kubernetes.io/master-
# 8: install the networking node:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
# 10: setup dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
# 11: set admin user (for dashboard)
kubectl apply -f deploy/admin-user.yaml
# copy the token into
TOKEN=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token:)
# start proxy on local machine to cluster
kubectl proxy &
# go to the dashboard in your browser
open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
# paste the token into the login:
echo $TOKEN
</code></pre>
| <p>I ran into the exact same issue on CentOS 7 in VirtualBox after creating my Kubernetes single master using kubeadm, and I ended up creating an <a href="https://github.com/kubernetes/kubeadm/issues/1082" rel="nofollow noreferrer">issue</a> against kubeadm.</p>
<p>You might want to follow some or all of the steps mentioned there by me and the person who helped me debug the issue. To summarize, what worked for me was setting the hostname to localhost (or something of that sort) and creating my cluster again using kubeadm init. (See this
<a href="https://github.com/kubernetes/kubeadm/issues/881#issuecomment-396638808" rel="nofollow noreferrer">link</a> to my last comment on that issue for the exact steps that resolved my problem.) I have been able to run my Kubernetes cluster and successfully join other nodes to it after this change. Good luck.</p>
|
<p>I am deploying a container in Google Kubernetes Engine with this YAML fragment:</p>
<pre><code> spec:
containers:
- name: service
image: registry/service-go:latest
resources:
requests:
memory: "20Mi"
cpu: "20m"
limits:
memory: "100Mi"
cpu: "50m"
</code></pre>
<p>But it keeps taking 120m. <strong>Why is the "limits" property being ignored?</strong> Everything else is working correctly. If I request 200m, 200m are being reserved, but the limit keeps being ignored.</p>
<p><a href="https://i.stack.imgur.com/9zN6j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9zN6j.png" alt="enter image description here"></a></p>
<p>My Kubernetes version is 1.10.7-gke.1</p>
<p>I only have the default namespace and when executing</p>
<blockquote>
<p>kubectl describe namespace default</p>
</blockquote>
<pre><code>Name: default
Labels: <none>
Annotations: <none>
Status: Active
No resource quota.
Resource Limits
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container cpu - - 100m - -
</code></pre>
| <h2>Considering Resources Request <strong>Only</strong></h2>
<p>The Google Cloud console is reporting correctly; I think you have multiple containers in your pod, and that is why. The value shown above is the sum of the resource requests declared in your truncated YAML file. You can verify this easily with <code>kubectl</code>.</p>
<p>First, verify the number of containers in your pod.</p>
<p><code>kubectl describe pod service-85cc4df46d-t6wc9</code> </p>
<p>Then look at the description of the node via kubectl; you should see the same information as the console shows.</p>
<p><code>kubectl describe node gke-default-pool-abcdefgh...</code></p>
<h2>What is the difference between resource requests and limits?</h2>
<p>You can imagine your cluster as a big square box. This is the total of your allocatable resources. When you drop a Pod into the big box, Kubernetes will check whether there is empty space for the requested resources of the pod (does the small box fit in the big box?). If there is enough space available, it will schedule your workload on the selected node.</p>
<p>Resource limits are not taken into account by the scheduler. They are enforced at the kernel level with cgroups. The goal is to prevent workloads from taking all the CPU or memory on the node they are scheduled on. </p>
<p>If your resource requests == resource limits, then workloads cannot escape their "box" and are not able to use the spare CPU/memory next to them. In other words, the resources are guaranteed for the pod. </p>
<p>But if the limits are greater than your requests, this is called overcommitting resources. You are betting that all the workloads on the same node are not fully loaded at the same time (which is generally the case).</p>
<p>I recommend not overcommitting the memory resource: do not let the pod escape its "box" in terms of memory, as it can lead to OOMKilling.</p>
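<p>For example, a container where requests equal limits (so it ends up in the Guaranteed QoS class and cannot burst) would declare something like this; the values are only illustrative:</p>
<pre><code>resources:
  requests:
    memory: "100Mi"
    cpu: "50m"
  limits:
    memory: "100Mi"
    cpu: "50m"
</code></pre>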
|
<p>I am trying to apply kubernetes to my minikube cluster for the first time. I have limited experience with cluster management and have never worked with prometheus before so I apologize for noob errors. </p>
<p>I run the following commands:</p>
<pre><code>docker build -t my-prometheus .
docker run -p 9090:9090 my-prometheus
</code></pre>
<p>here is my yaml:</p>
<pre><code>global:
scrape_interval: 15s
external_labels:
monitor: 'codelab-monitor'
scrape_configs:
- job_name: 'kubernetes-apiservers'
scheme: http
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubernetes_sd_configs:
- role: endpoints
- api_server: localhost:30000
</code></pre>
<p>I ran this through YAMLlint and got that it was valid. However I get the following error when I run the second docker command:</p>
<pre><code>level=error ts=2018-09-18T21:49:34.365671918Z caller=main.go:617 err="error
loading config from \"/etc/prometheus/prometheus.yml\": couldn't load
configuration (--config.file=\"/etc/prometheus/prometheus.yml\"): parsing
YAML file /etc/prometheus/prometheus.yml: role missing (one of: pod,
service, endpoints, node)"
</code></pre>
<p>However, you can see that I have specified my <code>- role: endpoints</code> in my <code>kubernetes_sd_configs</code>.</p>
<p>Can anyone help me on this</p>
| <p><code>kubernetes_sd_configs</code> is a list of configs, styled as block sequence in YAML terms.</p>
<p>Now, your list of configs looks like this:</p>
<pre><code>- role: endpoints
- api_server: localhost:30000
</code></pre>
<p>So you're defining two configs, and only the first one of them has a role. This is why you get the error. Most probably, you want to create only one config with <code>role</code> and <code>api_server</code> configured. Drop the second <code>-</code> so that the <code>api_server</code> belongs to the first config:</p>
<pre><code>- role: endpoints
  api_server: localhost:30000
</code></pre>
|
<p>I have a GoogleCloud Kubernetes cluster consisting of multiple nodes and a GoogleCloud Redis Memorystore. Distributed over these nodes are replicas of a pod containing a container that needs to connect to the Redis Memorystore. I have noticed that one of the nodes is not able to connect to Redis, i.e. any container in a pod on that node cannot connect to Redis.</p>
<p>The Redis Memorystore has the following properties:</p>
<ul>
<li>IP address: <code>10.0.6.12</code></li>
<li>Instance IP address range: <code>10.0.6.8/29</code> (<code>10.0.6.8</code> - <code>10.0.6.15</code>)</li>
</ul>
<p>The node from which no connection to Redis can be made has the following properties:</p>
<ul>
<li>Internal IP: <code>10.132.0.5</code></li>
<li>PodCIDR: <code>10.0.6.0/24</code> (<code>10.0.6.0</code> - <code>10.0.6.255</code>)</li>
</ul>
<p>I assume this problem is caused by the overlap in IP ranges of the Memorystore and this node. Is this assumption correct?</p>
<p>If this is the problem, I would like to change the IP range of the node.
I have tried to do this by editing <code>spec.podCIDR</code> in the node config:</p>
<pre><code>$ kubectl edit node <node-name>
</code></pre>
<p>However this did not work and resulted in the error message:</p>
<pre><code># * spec.podCIDR: Forbidden: node updates may not change podCIDR except from "" to valid
# * []: Forbidden: node updates may only change labels, taints, or capacity (or configSource, if the DynamicKubeletConfig feature gate is enabled)
</code></pre>
<p>Is there another way to change the IP range of an existing Kubernetes node? If so, how?</p>
<p>Sometimes I need to temporarily increase the number of pods in a cluster. When I do this I want to prevent Kubernetes from creating a new node with the IP range <code>10.0.6.0/24</code>.
Is it possible to tell the Kubernetes cluster to not create new nodes with the IP range <code>10.0.6.0/24</code>? If so, how?</p>
<p>Thanks in advance!</p>
| <ul>
<li><p>Not for a node. The podCIDR gets defined when you install your network overlay in the initial steps of setting up a new cluster.</p></li>
<li><p>Yes for the cluster, but it's not that easy. You have to change the podCIDR for the network overlay in your whole cluster. It's a tricky process that can be done, but if you are doing that you might as well deploy a new cluster. Keep in mind that some network overlays require a very specific pod CIDR. For example, Calico requires <code>192.168.0.0/16</code></p></li>
</ul>
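<p>To see which pod CIDR each node was assigned (and to confirm the overlap with the Memorystore range), something like this should work:</p>
<pre><code>kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
</code></pre>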
<p>You could:</p>
<ol>
<li>Create a new cluster with a new cidr and move your workloads gradually.</li>
<li>Change the IP address cidr where your GoogleCloud Redis Memorystore lives.</li>
</ol>
<p>Hope it helps!</p>
|
<ul>
<li><code>kubectl expose</code> doesn't work here</li>
<li>how to do it with CLI</li>
<li>In Console UI functionality is located in <a href="https://console.kyma.local/home/environments/stage/apis" rel="nofollow noreferrer">https://console.kyma.local/home/environments/stage/apis</a></li>
</ul>
| <ul>
<li>API exposure to the internet is realized through a special "API Gateway" component. You can read about its architecture and usage here <a href="https://kyma-project.io/docs/latest/components/api-gateway" rel="nofollow noreferrer">https://kyma-project.io/docs/latest/components/api-gateway</a></li>
<li>Exposure through the Console UI <a href="https://console.kyma.local/home/environments/stage/apis" rel="nofollow noreferrer">https://console.kyma.local/home/environments/stage/apis</a> is realized by actually creating an Api CRD</li>
<li>The CLI equivalent is simply <code>kubectl apply -f {yaml_file}</code>. A description of all the fields of the Api CRD and an example can be found here <a href="https://kyma-project.io/docs/latest/components/api-gateway#custom-resource-custom-resource" rel="nofollow noreferrer">https://kyma-project.io/docs/latest/components/api-gateway#custom-resource-custom-resource</a></li>
</ul>
|
<p>I'm trying to copy files from Kubernetes Pods to my local system. I am getting the below error while running following command: </p>
<pre><code>kubectl cp aks-ssh2-6cd4948f6f-fp9tl:/home/azureuser/test.cap ./test.cap
</code></pre>
<p>Output:</p>
<blockquote>
<p>tar: home/azureuser/test: Cannot stat: No such file or directory tar:
Exiting with failure status due to previous errors error:
home/azureuser/test no such file or directory</p>
</blockquote>
<p>I can see the file under the above given path. I am really confused.</p>
<p>Could you please help me out?</p>
| <p>As stated in <code>kubectl</code> help:</p>
<pre><code>kubectl cp --help
Copy files and directories to and from containers.
Examples:
# !!!Important Note!!!
# Requires that the 'tar' binary is present in your container
# image. If 'tar' is not present, 'kubectl cp' will fail.
# Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace
kubectl cp /tmp/foo_dir <some-pod>:/tmp/bar_dir
# Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
kubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>
kubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar
# Copy /tmp/foo from a remote pod to /tmp/bar locally
kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
Options:
-c, --container='': Container name. If omitted, the first container in the pod will be chosen
Usage:
kubectl cp <file-spec-src> <file-spec-dest> [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
</code></pre>
<p>You can also login to your <code>Containter</code> and check if file is there:</p>
<pre><code>kubectl exec -it aks-ssh2-6cd4948f6f-fp9tl /bin/bash
ls -la /home/azureuser/test.cap
</code></pre>
<p>If this still doesn't work, try:</p>
<blockquote>
<p>You may try to copy your files to workdir and then retry to copy them using just their names. It's weird, but it works for now.</p>
</blockquote>
<p>Consider advice of <a href="https://github.com/kchugalinskiy" rel="noreferrer">kchugalinskiy</a> here <a href="https://github.com/kubernetes/kubernetes/issues/58692" rel="noreferrer">#58692</a>.</p>
|
<p>I am new to Prometheus and relatively new to kubernetes so bear with me, please. I am trying to test Prometheus out and have tried two different approaches. </p>
<ol>
<li><p>Run Prometheus as a docker container outside of kubernetes. To accomplish this I have created this Dockerfile:</p>
<pre><code>FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/
</code></pre>
<p>and this yaml file:</p>
<pre><code>global:
scrape_interval: 15s
external_labels:
monitor: 'codelab-monitor'
scrape_configs:
- job_name: 'kubernetes-apiservers'
scheme: http
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: endpoints
api_server: localhost:443
</code></pre>
<p>When I run this I get:</p>
<pre><code>Failed to list *v1.Pod: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
Failed to list *v1.Service: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
Failed to list *v1.Endpoints: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
</code></pre>
<p>on a loop. Prometheus will load when I go to localhost:9090 but there is no data.</p></li>
<li><p>I thought deploying Prometheus as a Kubernetes deployment may help, so I made this yaml and deployed it.</p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: prometheus-monitor
spec:
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- name: prometheus-monitor
image: prom/prometheus
# args:
# - '-config.file=/etc/prometheus/prometheus.yaml'
imagePullPolicy: IfNotPresent
ports:
- name: webui
containerPort: 9090
</code></pre>
<p>The deployment was successful, but if I go to localhost:9090 I get 'ERR_SOCKET_NOT_CONNECTED'. (my port is forwarded)</p></li>
</ol>
<p>Can anyone tell me the advantage of running Prometheus in vs. out of Kubernetes, and how to fix at least one of these issues?</p>
<p>Also, my config file is suppressed because it was giving an error, and I will look into that once I am able to get Prometheus loaded.</p>
| <p>Kubernetes does not map the port outside its cluster when you deploy your container.</p>
<p>You also have to create a service (can be inside the same file) to make it available from your workstation (append this to your prometheus yaml):</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: prometheus-web
labels:
app: prometheus
spec:
type: NodePort
ports:
- port: 9090
protocol: TCP
targetPort: 9090
nodePort: 30090
name: webui
selector:
app: prometheus
</code></pre>
<p>NodePort opens the given port on all nodes you have. You should be able to see the frontend with <a href="http://localhost:30090/" rel="nofollow noreferrer">http://localhost:30090/</a></p>
<p>By default, Kubernetes allows ports 30000 to 32767 for the NodePort type (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a>).</p>
<p>Please consider reading the documentation in general for more information on services in kubernetes: <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
|
<p>I am running Apache Drill and Zookeeper on a Kubernetes cluster.</p>
<p>Drill is connecting to Zookeeper through a zookeeper-service running on port 2181. I am trying to persist the storage plugin configuration on Zookeeper. The Apache Drill docs (<a href="https://drill.apache.org/docs/persistent-configuration-storage/" rel="nofollow noreferrer">https://drill.apache.org/docs/persistent-configuration-storage/</a>) say that the sys.store.provider.zk.blobroot key needs to be added to drill-override.conf. But I am not able to figure out a value for this key if I want to connect to the Zookeeper service in Kubernetes.</p>
| <p>The value should be:</p>
<pre><code><name-of-your-zk-service>.<namespace-where-zk-is-running>.svc.cluster.local:2181
</code></pre>
<p>That's how services get resolved internally in Kubernetes. You can always test it by creating a Pod, connecting to it using <code>kubectl exec -it <pod-name> sh</code>, and running:</p>
<pre><code>ping <name-of-your-zk-service>.<namespace-where-zk-is-running>.svc.cluster.local
</code></pre>
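<p>For example, <code>drill-override.conf</code> could then reference that address; the cluster-id, service name and namespace below are just placeholders:</p>
<pre><code>drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "zookeeper-service.default.svc.cluster.local:2181"
}
</code></pre>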
<p>Hope it helps!</p>
|
<p>I am trying to set up a private docker registry to work with Kubernetes. I've set up the registry, and the master server that's running the Kubernetes cluster can pull images from the registry without a problem. Also, I've followed the Kubernetes docs that explain how to connect to a private docker registry (see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a>).</p>
<p>However, when I try to pull images from the docker registry through Kubernetes I get the following error:</p>
<pre><code>Failed to pull image "xxx.xxx.xxx.xxx:5000/helloworld:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://xxx.xxx.xxx.xxx:5000/v1/_ping: x509: certificate signed by unknown authority
</code></pre>
<p>What I noticed is that the link that ends with v1/_ping is incorrect, it should be v2/_ping.</p>
<p>I ran the following command to generate my regcred: </p>
<pre><code>kubectl create secret docker-registry regcred --docker-server="https://xxx.xxx.xxx.xxx:5000/v2/" --docker-username=xxxxx --docker-password=xxxxxx [email protected]
</code></pre>
<p>I also googled a bit and found this:
<a href="https://github.com/kubernetes/kubernetes/issues/20786" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/20786</a></p>
<p>Unfortunately, these suggestions didn't help, but they do indicate that more people face the same issue.</p>
<p>Does someone know how to correctly setup a docker registry v2 with Kubernetes?</p>
<p>Thanks</p>
| <p>I solved this issue. The master server by default doesn't run your deployments, so I needed to do the following on my worker (slave) servers (a command sketch follows the list):</p>
<ol>
<li>Add the certificate to /etc/docker/certs.d/my-registry-domain.com[:port]/ca.crt</li>
<li>Do docker login my-registry-domain.com[:port]</li>
<li>Add the docker registry secret to Kubernetes (see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a>) --docker-server=docker-registry-domain.com/v2/ or v1 depending on what you run</li>
<li>Now it will successfully pull images from the docker registry.</li>
</ol>
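<p>A command sketch of steps 1-3 (replace the registry address, certificate path and credentials with your own; the <code>/v2/</code> suffix is per step 3 above):</p>
<pre><code># on every node that pulls from the registry
mkdir -p /etc/docker/certs.d/my-registry-domain.com:5000
cp ca.crt /etc/docker/certs.d/my-registry-domain.com:5000/ca.crt
docker login my-registry-domain.com:5000

# recreate the pull secret (run wherever kubectl is configured)
kubectl create secret docker-registry regcred \
  --docker-server=my-registry-domain.com:5000/v2/ \
  --docker-username=<user> --docker-password=<password> --docker-email=<email>
</code></pre>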
<p>Hope it will help someone.</p>
|
<p>I am trying to access the Kibana dashboard while trying to set up fluentd-elasticsearch on premises. This is the <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="nofollow noreferrer">link</a> which I followed. I checked the logs of Kibana's pod. It shows the following error:</p>
<pre><code>{"type":"log","@timestamp":"2018-09-19T21:45:42Z","tags":["warning","config","deprecation"],"pid":1,"message":"You should set server.basePath along with server.rewriteBasePath. Starting in 7.0, Kibana will expect that all requests start with server.basePath rather than expecting you to rewrite the requests in your reverse proxy. Set server.rewriteBasePath to false to preserve the current behavior and silence this warning."}
root@mTrainer3:/logging# kubectl logs kibana-logging-66d577d965-mbbg5 -n kube-system
{"type":"log","@timestamp":"2018-09-19T21:45:42Z","tags":["warning","config","deprecation"],"pid":1,"message":"You should set server.basePath along with server.rewriteBasePath. Starting in 7.0, Kibana will expect that all requests start with server.basePath rather than expecting you to rewrite the requests in your reverse proxy. Set server.rewriteBasePath to false to preserve the current behavior and silence this warning."}
{"type":"log","@timestamp":"2018-09-19T21:46:08Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
</code></pre>
<p>Could anybody suggest how I can resolve this issue?</p>
| <p><em>After a discussion it became clearer what seems to be wrong.</em></p>
<p>You are using a local cluster with no load balancer. You have to set either an ingress or use NodePort as the service type. I am going to describe the solution with NodePort. Two steps to take:</p>
<ol>
<li>Modify the <code>kibana-deployment.yaml</code> and remove the following under <code>env</code>:</li>
</ol>
<pre><code>- name: SERVER_BASEPATH
value: /api/v1/namespaces/kube-system/services/kibana-logging/proxy
</code></pre>
<p>so that your <code>kibana-deployment.yaml</code> looks like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
replicas: 1
selector:
matchLabels:
k8s-app: kibana-logging
template:
metadata:
labels:
k8s-app: kibana-logging
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
containers:
- name: kibana-logging
image: docker.elastic.co/kibana/kibana-oss:6.3.2
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
requests:
cpu: 100m
env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch-logging:9200
ports:
- containerPort: 5601
name: ui
protocol: TCP
</code></pre>
<ol start="2">
<li>Modify <code>kibana-service.yaml</code> to set the service type to NodePort:</li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "Kibana"
spec:
type: NodePort
ports:
- port: 5601
protocol: TCP
targetPort: ui
nodePort: 30601
selector:
k8s-app: kibana-logging
</code></pre>
<p>Then execute</p>
<pre><code>kubectl apply -f kibana-deployment.yaml
kubectl apply -f kibana-service.yaml
</code></pre>
<p>Kibana should be accessible at <code>http://<clusterip>:30601</code></p>
<p><strong>Background</strong></p>
<p>You will access <code>http://clusterip:30601</code> directly, without the given base path, so the base path must be removed so that Kibana uses <code>/</code> as its base path. Otherwise it will try to append the base path /api/v1/[...] to your URL. You can try it if you want to test it.</p>
<p><a href="https://discuss.elastic.co/t/plugin-installation-failure/63422/4" rel="nofollow noreferrer">This comment from an elastic search guy mentions</a>, that you have to remove the base_path completely if you want to use <code>/</code>.</p>
<p>Modifying the service to NodePort is necessary as K8s does not publish ports by default. I just answered a similar issue on <a href="https://stackoverflow.com/questions/52413160/connect-connection-refused-when-connecting-prometheus-to-kubernetes/52414776#52414776">this question</a>. </p>
<p><strong>Original answer (wrong)</strong></p>
<p>In the repo you were linking I can see that the kibana-deployment.yaml has two environment variables to set:</p>
<pre><code>env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch-logging:9200
- name: SERVER_BASEPATH
value: /api/v1/namespaces/kube-system/services/kibana-logging/proxy
</code></pre>
<p>Did you set them accordingly?</p>
<p>Let's assume you have an ingress, loadbalancer or NodePort directly to the kibana instance so that you want to reach it directly with <a href="http://yourserver:9200/" rel="nofollow noreferrer">http://yourserver:9200/</a>. Then the <code>SERVER_BASEPATH</code> is <code>/</code></p>
|
<p>I'm attempting to inject a ReplicationController's randomly generated pod ID extension (i.e. <code>multiverse-{replicaID}</code>) into a container's environment variables. I could manually get the hostname and extract it from there, but I'd prefer if I didn't have to add the special case into the script running inside the container, due to compatibility reasons.</p>
<p>If a pod is named <code>multiverse-nffj1</code>, <code>INSTANCE_ID</code> should equal <code>nffj1</code>. I've scoured the docs and found nothing.</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: multiverse
spec:
replicas: 3
template:
spec:
containers:
- env:
- name: INSTANCE_ID
value: $(replicaID)
</code></pre>
<p>I've tried adding a <code>command</code> into the controller's template configuration to create the environment variable from the hostname, but couldn't figure out how to make that environment variable available to the running script.</p>
<p>Is there a variable I'm missing, or does this feature not exist? If it doesn't, does anyone have any ideas on how to make this to work without editing the script inside of the container?</p>
| <p>There is an <a href="https://stackoverflow.com/a/49649009/9521610">answer</a> provided by <a href="https://stackoverflow.com/users/5937420/anton-kostenko">Anton Kostenko</a> about inserting DB credentials into container environment variables, but it could be applied to your case also. It is all about the content of the InitContainer spec.</p>
<p>You can use an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer"><strong>InitContainer</strong></a> to get the hash from the container’s hostname and put it into a file on a shared volume that you mount into the container. </p>
<p>In this example the <strong>InitContainer</strong> puts the Pod name into the <code>INSTANCE_ID</code> environment variable, but you can modify it according to your needs: </p>
<p>Create the <em>init.yaml</em> file with the content:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: init-test
spec:
containers:
- name: init-test
image: ubuntu
args: [bash, -c, 'source /data/config && echo $INSTANCE_ID && while true ; do sleep 1000; done ']
volumeMounts:
- name: config-data
mountPath: /data
initContainers:
- name: init-init
image: busybox
command: ["sh","-c","echo -n INSTANCE_ID=$(hostname) > /data/config"]
volumeMounts:
- name: config-data
mountPath: /data
volumes:
- name: config-data
emptyDir: {}
</code></pre>
<p>Create the pod using following command:</p>
<pre><code>kubectl create -f init.yaml
</code></pre>
<p>Check if Pod initialization is done and is Running:</p>
<pre><code>kubectl get pod init-test
</code></pre>
<p>Check the logs to see the results of this example configuration:</p>
<pre><code>$ kubectl logs init-test
init-test
</code></pre>
|
<p>I have an entrypoint defined in my container image, and it runs before the args specified in my deployment manifest, as it should. But when I execute a command on that container using <code>kubectl exec</code>, it seems to bypass the container entrypoint. Is this the expected behavior? Can I somehow force it to always use the entrypoint commands?</p>
| <p>That's expected. If you really want what's in the <code>entrypoint.sh</code> you can do something like this:</p>
<pre><code>kubectl exec -it <pod-name> -c <container-name> -- /path/to/entrypoint.sh
</code></pre>
<p>Hope it helps!</p>
|
<p>The cni0 interface is missing altogether.
Any direction on how to get it back without tearing down the cluster will be greatly appreciated.
Basically, the internal container networking is not working. While trying to recover from that, I found that
the IPs for coredns are on the docker0 interface instead of cni0, so maybe if I get cni0 back everything will start working.</p>
<p>Below are the command outputs; please let me know if you need any additional output.</p>
<p>Master</p>
<pre><code>ip ro
default via 10.123.0.1 dev ens160 proto static metric 100
10.123.0.0/19 dev ens160 proto kernel scope link src 10.123.24.103 metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink
172.17.77.0/24 dev docker0 proto kernel scope link src 172.17.77.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
</code></pre>
<p>Worker nodes</p>
<pre><code>default via 10.123.0.1 dev ens160 proto static metric 100
10.123.0.0/19 dev ens160 proto kernel scope link src 10.123.24.105 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
ifconfig -a
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:27ff:fe72:a287 prefixlen 64 scopeid 0x20<link>
ether 02:42:27:72:a2:87 txqueuelen 0 (Ethernet)
RX packets 3218 bytes 272206 (265.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 286 bytes 199673 (194.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system coredns-99b9bb8bd-j77zx 1/1 Running 1 20m 172.17.0.2 abc-sjkubenode02
kube-system coredns-99b9bb8bd-sjnhs 1/1 Running 1 20m 172.17.0.3 abc-xxxxxxxxxxxx02
kube-system elasticsearch-logging-0 1/1 Running 6 2d 172.17.0.2 abc-xxxxxxxxxxxx02
kube-system etcd-abc-xxxxxxxxxxxx01 1/1 Running 3 26d 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system fluentd-es-v2.0.3-6flxh 1/1 Running 5 2d 172.17.0.4 abc-xxxxxxxxxxxx02
kube-system fluentd-es-v2.0.3-7qdxl 1/1 Running 19 131d 172.17.0.2 abc-sjkubenode01
kube-system fluentd-es-v2.0.3-l5thl 1/1 Running 6 2d 172.17.0.3 abc-sjkubenode02
kube-system heapster-66bf5bd78f-twwd2 1/1 Running 4 2d 172.17.0.4 abc-sjkubenode01
kube-system kibana-logging-8b9699f9c-nrcpb 1/1 Running 3 2d 172.17.0.3 abc-sjkubenode01
kube-system kube-apiserver-abc-xxxxxxxxxxxx01 1/1 Running 2 2h 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system kube-controller-manager-abc-xxxxxxxxxxxx01 1/1 Running 3 2h 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system kube-flannel-ds-5lmmd 1/1 Running 3 3h 10.123.24.106 abc-sjkubenode02
kube-system kube-flannel-ds-92gd9 1/1 Running 2 3h 10.123.24.104 abc-xxxxxxxxxxxx02
kube-system kube-flannel-ds-nnxv6 1/1 Running 3 3h 10.123.24.105 abc-sjkubenode01
kube-system kube-flannel-ds-ns9ls 1/1 Running 2 3h 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system kube-proxy-7h54h 1/1 Running 3 3h 10.123.24.105 abc-sjkubenode01
kube-system kube-proxy-7hrln 1/1 Running 2 3h 10.123.24.104 abc-xxxxxxxxxxxx02
kube-system kube-proxy-s4rt7 1/1 Running 3 3h 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system kube-proxy-swmrc 1/1 Running 2 3h 10.123.24.106 abc-sjkubenode02
kube-system kube-scheduler-abc-xxxxxxxxxxxx01 1/1 Running 2 2h 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system kubernetes-dashboard-58c479587f-bkqgf 1/1 Running 30 116d 10.244.0.56 abc-xxxxxxxxxxxx01
kube-system monitoring-influxdb-54bd58b4c9-4phxl 1/1 Running 3 2d 172.17.0.5 abc-sjkubenode01
kube-system nginx-ingress-5565bdd5fc-nc962 1/1 Running 2 2d 10.123.24.103 abc-xxxxxxxxxxxx01
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
abc-sjkubemaster01 Ready master 131d v1.11.2 10.123.24.103 <none> CentOS Linux 7 (Core) 3.10.0-862.2.3.el7.x86_64 docker://17.12.1-ce
abc-sjkubemaster02 Ready <none> 131d v1.11.2 10.123.24.104 <none> CentOS Linux 7 (Core) 3.10.0-862.2.3.el7.x86_64 docker://17.12.1-ce
abc-sjkubenode01 Ready <none> 131d v1.11.2 10.123.24.105 <none> CentOS Linux 7 (Core) 3.10.0-862.2.3.el7.x86_64 docker://17.12.1-ce
abc-sjkubenode02 Ready <none> 131d v1.11.2 10.123.24.106 <none> CentOS Linux 7 (Core) 3.10.0-862.2.3.el7.x86_64 docker://17.12.1-ce
</code></pre>
<p>Edit:
Another thing I wanted to ask is how to delete the coredns pod and recreate it. I don't have a yaml file for it, as it was created when I installed the Kubernetes cluster using kubeadm.</p>
<pre><code>docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:3fff:fe60:fea9 prefixlen 64 scopeid 0x20<link>
ether 02:42:3f:60:fe:a9 txqueuelen 0 (Ethernet)
RX packets 123051 bytes 8715267 (8.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 88559 bytes 33067497 (31.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.123.24.106 netmask 255.255.224.0 broadcast 10.123.31.255
inet6 fd0f:f1c3:ba53:6c01:5de2:b5af:362e:a9b2 prefixlen 64 scopeid 0x0<global>
inet6 fe80::ee61:b84b:bf18:93f2 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:91:75:d2 txqueuelen 1000 (Ethernet)
RX packets 1580516 bytes 534188729 (509.4 MiB)
RX errors 0 dropped 114794 overruns 0 frame 0
TX packets 303093 bytes 28327667 (27.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.1.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::4c0e:7dff:fe4b:12f2 prefixlen 64 scopeid 0x20<link>
ether 4e:0e:7d:4b:12:f2 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 40 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 75 bytes 5864 (5.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 5864 (5.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:fc:5b:de txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0-nic: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 52:54:00:fc:5b:de txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
</code></pre>
| <p>First, check if your configs are present under <code>/etc/cni/net.d</code> on all your nodes.</p>
<p>Then I would try removing your flannel DaemonSet, killing its pods, and reinstalling flannel altogether, as sketched below. </p>
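<p>A rough sketch of that (the DaemonSet name matches your pod listing; the manifest URL is only an example, use the flannel manifest/version you originally deployed):</p>
<pre><code>kubectl -n kube-system delete daemonset kube-flannel-ds
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

# then bounce coredns so it gets re-attached to the cni0 network
kubectl -n kube-system delete pod -l k8s-app=kube-dns
</code></pre>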
<p>You may need to restart all your other pods except for the <code>kube-apiserver</code> and the <code>kube-controller-manager</code>. It's ok if you want to restart those as well, but you don't have to.</p>
<p>Hope it helps!</p>
|
<p>I want to call a REST service running outside OpenShift via a Service and external domain name. This works perfect with a http:// request. The mechanism is described in the documentation : <a href="https://docs.okd.io/latest/dev_guide/integrating_external_services.html#saas-define-service-using-fqdn" rel="nofollow noreferrer">https://docs.okd.io/latest/dev_guide/integrating_external_services.html#saas-define-service-using-fqdn</a></p>
<p>However the external service is secured with https. In this case I got the following exception:
Host name 'external-test-service' does not match the certificate subject provided by the peer (CN=*.xxx, O=xxx, L=xxx, ST=GR, C=CH); nested exception is javax.net.ssl.SSLPeerUnverifiedException: Host name 'external-test-service' does not match the certificate subject provided by the peer (CN=*.xxx, O=xxx, L=xxx, ST=GR, C=CH)</p>
<p>The exception is clear to me because we use the Service name from OpenShift. This name does not correspond to the origin host name in the certificate. So currently I see three possibilities to solve this issue:</p>
<ol>
<li>Add the name of the OpenShift Service to the certificate</li>
<li>Deactivate hostname verification before calling the external REST service</li>
<li>Configure OpenShift (don't know this is possible)</li>
</ol>
<p>Has anybody solve this or a similar issue?</p>
<p>Currently I used OpenShift v3.9. We are running a simple Spring Boot application in a pod accessing REST services outside OpenShift.</p>
<p>Any hint will be appreciated.</p>
<p>Thank you</p>
<p>Markus</p>
| <ol>
<li>Ugly and might cost you extra $$</li>
<li>Defeats the purpose of TLS.</li>
<li><p>On Kubernetes 1.10 and earlier you can use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName</a> (see the sketch after this list). </p>
<p>You can also use it with <a href="https://docs.openshift.com/container-platform/3.3/dev_guide/integrating_external_services.html#mysql-define-service-using-fqdn" rel="nofollow noreferrer">OpenShift</a>.</p></li>
<li><p>You can also use a <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes Ingress</a> with <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">TLS</a>. This is also documented for <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/openshift" rel="nofollow noreferrer">OpenShift</a></p></li>
</ol>
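<p>For option 3, a minimal <code>ExternalName</code> sketch; the external hostname is hypothetical, and it must be a name covered by the server certificate (your client should also request that name so hostname verification passes):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: external-test-service
spec:
  type: ExternalName
  externalName: rest.example.com
</code></pre>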
<p>Hope it helps!</p>
|
<p>I want to choose a config section from values.yaml by setting a variable on the helm command line.</p>
<p>example part of values.yaml:</p>
<pre><code>aaa:
x1: "az1"
x2: "az2"
bbb:
x1: "bz1"
x2: "bz2"
</code></pre>
<p>example part of configmap.yaml</p>
<pre><code>data:
{{ .Values.outsideVal.x1 }}
</code></pre>
<p>Expected result should looks like this</p>
<pre><code> data:
az1
</code></pre>
<p>Test helm output</p>
<pre><code>helm template --set outsideVal=aaa mychart
</code></pre>
<p>And got this error</p>
<pre><code>Error: render error in "./templates/configmap.yaml": template: ./templates/configmap.yaml:21:12: executing "./templates/configmap.yaml" at <.Values.outsideVal.x...>: can't evaluate field x1 in type interface {}
</code></pre>
<p>So the question is how get the result as expected?</p>
| <p>I suspect you're looking for the <a href="https://godoc.org/text/template" rel="noreferrer">text/template</a> <code>index</code> function, which can look up a value in a map by a variable key.</p>
<pre><code>{{ (index .Values .Values.outsideVal).x1 }}
</code></pre>
|
<p>Let's say I have a 3 node cluster (master01, node01, node02)</p>
<p>How can I find the running container's path on the node?
I can see the container ID via "oc describe pod":</p>
<pre><code>Container ID: docker://9982c309a3fd8c336c98201eff53a830a1c56a4cf94c2861c52656855cba3558
</code></pre>
<p>e.g. on node01, I can see a lot of directories under "/var/lib/openshift/openshift.local.volumes/pods/", but I'm not sure how these map to pod names or container names.
It seems there are maps/mounts for inactive containers too, but how do I do a safe clean up?</p>
<p>Any help really appreciated. Thank you.</p>
| <p>You can use this to list the container image by pod name:</p>
<p><code>kubectl get pods POD_NAME -o=jsonpath='{..image}'</code></p>
<p>Source: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/" rel="nofollow noreferrer">Docs</a></p>
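<p>If you also want to map the directories under <code>/var/lib/openshift/openshift.local.volumes/pods/</code> back to pods: as far as I know they are named by the pod UID, which you can look up like this (a sketch; <code>oc</code> should accept the same jsonpath):</p>
<pre><code>kubectl get pod POD_NAME -o jsonpath='{.metadata.uid}'
</code></pre>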
|
<p>I am running into OOM issues on CentOS on some Kubernetes nodes. I would like to set it up like they have in the demo:</p>
<pre><code>--kube-reserved is set to cpu=1,memory=2Gi,ephemeral-storage=1Gi
--system-reserved is set to cpu=500m,memory=1Gi,ephemeral-storage=1Gi
--eviction-hard is set to memory.available<500Mi,nodefs.available<10%
</code></pre>
<p>Where do I add those params?<br>
Should I add them to /etc/systemd/system/kubelet.service?
What format?<br>
Also, do I just set these on the worker nodes?</p>
<p>This is in a live environment so I want to get it right on the first go.</p>
<pre><code>[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
</code></pre>
| <p>Add them to this file (hopefully, you initiated your cluster with kubeadm):</p>
<pre><code>/var/lib/kubelet/kubeadm-flags.env
</code></pre>
<p>For example:</p>
<pre><code>KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf --kube-reserved=<value> --system-reserved=<value> --eviction-hard=<value>
</code></pre>
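<p>With the values from your question filled in, the line might look something like this (keep whatever other flags kubeadm already put there; only the last three are the ones you are adding):</p>
<pre><code>KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi --system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi --eviction-hard=memory.available<500Mi,nodefs.available<10%
</code></pre>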
<p>Run:</p>
<pre><code>systemctl restart kubelet
</code></pre>
<p>and configs should take effect. You can check the kubelet is running with the right parameters like this:</p>
<pre><code>ps -Af | grep kubelet
</code></pre>
<p>Hope it helps.</p>
|
<p>Basically, when using Google Cloud Build, how do I read a value that was written in an earlier build step in subsequent steps? </p>
<p>Specifically, I'd like to make a custom image tag that's based on a combination of the timestamp and $SHORT_SHA, something like the below. Though it doesn't work, as docker complains about "export", and even if that worked, it would likely be a different env:</p>
<pre><code> # Setting tag in a variable:
- name: 'ubuntu'
args: ['export', '_BUILD_TAG=`date', '-u', '+%Y%m%dT%H%M%S_$SHORT_SHA`']
</code></pre>
<p>Then, in a later step:</p>
<pre><code> # Using tag from the variable:
- name: gcr.io/cloud-builders/docker
args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_BUILD_TAG', '.']
</code></pre>
<p>So, how do I use the output of one step in another? I could write the contents of <code>date</code> to a file, and then read it, but I'm back at not knowing how to set the variable from the file I read (or otherwise interpolate its results to form the argument to docker build). </p>
| <p>I never found a way to set an environment variable in one build step that can be read in other steps, but I ended up accomplishing the same effect by building on Konstantin's answer in the following way: </p>
<p>In an early step, I generate and write my date-based tag to a file. The filesystem (/workspace) is retained between steps, and serves as the store of my environment variable. Then, in each step where I need to reference that value, I cat that file in place. The trick is to use sh or bash as the entrypoint in each container so that the sub-shell that reads from the file can execute. </p>
<p>Here's an example:</p>
<pre><code>## Set build tag and write to file _TAG
- name: 'ubuntu'
args: ['bash', '-c', 'date -u +%Y%m%dT%H%M_$SHORT_SHA > _TAG']
...
# Using the _TAG during Docker build:
- name: gcr.io/cloud-builders/docker
entrypoint: sh
args: ['-c', 'docker build -t gcr.io/$PROJECT_ID/image_name:$(cat _TAG) .']
</code></pre>
<p>A caveat to note: if you are doing the bash interpolation this way within, say, a JSON object or something else that requires double quotes, the subshell call must never be surrounded by single quotes when executed in the container, only double quotes, which may require escaping the internal double quotes to build the JSON object. Here's an example where I patch the kubernetes config using the _TAG file value to deploy the newly-built image: </p>
<pre><code>- name: gcr.io/cloud-builders/kubectl
entrypoint: bash
args: ['-c', 'gcloud container clusters get-credentials --zone $$CLOUDSDK_COMPUTE_ZONE $$CLOUDSDK_CONTAINER_CLUSTER ; kubectl patch deployment deployment_name -n mynamespace -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"image_name\",\"image\":\"gcr.io/$PROJECT_ID/image_name:$(cat _TAG)\"}]}}}}}"']
env:
- 'CLOUDSDK_COMPUTE_ZONE=us-central1-b'
- 'CLOUDSDK_CONTAINER_CLUSTER=my-google-proj-cluster-name'
</code></pre>
|
<p>I have about 100 million json files (10 TB), each with a particular field containing a bunch of text, for which I would like to perform a simple substring search and return the filenames of all the relevant json files. They're all currently stored on Google Cloud Storage. Normally for a smaller number of files I might just spin up a VM with many CPUs and run multiprocessing via Python, but alas this is a bit too much.</p>
<p>I want to avoid spending too much time setting up infrastructure like a Hadoop server, or loading all of that into some MongoDB database. My question is: what would be a quick and dirty way to perform this task? My original thoughts were to set up something on Kubernetes with some parallel processing running Python scripts, but I'm open to suggestions and don't really have a clue how to go about this.</p>
| <ol>
<li><p>The easiest would be to just load the GCS data into <a href="https://cloud.google.com/bigquery/docs/loading-data-cloud-storage" rel="nofollow noreferrer">BigQuery</a> and run your query from there (see the sketch after this list). </p></li>
<li><p>Send your data to AWS S3 and use <a href="https://aws.amazon.com/athena/" rel="nofollow noreferrer">Amazon Athena</a>.</p></li>
<li><p>The Kubernetes option would be to set up a cluster in GKE, install <a href="https://prestodb.io/" rel="nofollow noreferrer">Presto</a> in it with a lot of workers, use a <a href="https://community.hortonworks.com/articles/9022/hive-and-google-cloud-storage-1.html" rel="nofollow noreferrer">hive metastore with GCS</a>, and query from there. (Presto doesn't have a direct GCS connector yet, afaik.) This option seems more elaborate.</p></li>
</ol>
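<p>For option 1, a rough sketch with the <code>bq</code> CLI (dataset, table, and field names are made up; adjust them to your schema, and if you need the source file name you would use an external table and its <code>_FILE_NAME</code> pseudo-column instead of a plain load):</p>
<pre><code># load the newline-delimited JSON files straight from GCS
bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON mydataset.docs 'gs://my-bucket/*.json'

# simple substring search over the text field
bq query --nouse_legacy_sql 'SELECT id FROM mydataset.docs WHERE STRPOS(text_field, "needle") > 0'
</code></pre>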
<p>Hope it helps!</p>
|
<p>I need to access Google Cloud Memorystore from a Cloud Function, but I know it's not supported yet, so I tried a workaround: adding haproxy to my Kubernetes cluster, making it publicly accessible using a Kubernetes service of type LoadBalancer, and forwarding the TCP requests to Memorystore.</p>
<p>It works fine for me and I can connect to my Memorystore instance from the Cloud Function. The only problem, which I need a fix for, is securing Memorystore at the haproxy level or at the Memorystore level. I tried to add a password to Memorystore, but I found that the CONFIG command is disabled.</p>
<p>That's my haproxy config:</p>
<pre><code>frontend redis_frontend
bind *:6379
mode tcp
option tcplog
timeout client 1m
default_backend redis_backend
backend redis_backend
mode tcp
option tcplog
option log-health-checks
option redispatch
log global
balance roundrobin
timeout connect 10s
timeout server 1m
server redis_server 10.0.0.12:6379 check
</code></pre>
<p>So any suggestions ?</p>
| <p>We don't support network connectivity from App Engine or Cloud functions yet. We are working on adding this support in the future.</p>
<p>As you found out, connectivity from the GKE environment is supported. </p>
<p>We don't support AUTH config yet; this feature is on our roadmap for the near future. </p>
<p>Thanks for the feedback,
Prajakta</p>
<p>(Engineering Manager on GCP)</p>
|
<p>I'm trying to create a simple nginx service on GKE, but I'm running into strange problems. </p>
<p>Nginx runs on port 80 inside the Pod. The service is accessible on port 8080. (This works, I can do <code>curl myservice:8080</code> inside of the pod and see the nginx home screen)</p>
<p>But when I try to make it publicly accessible using an ingress, I'm running into trouble. Here are my deployment, service and ingress files.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 8080
nodePort: 32111
targetPort: 80
type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- http:
paths:
# The * is needed so that all traffic gets redirected to nginx
- path: /*
backend:
serviceName: my-service
servicePort: 80
</code></pre>
<p>After a while, this is what my ingress status looks like:</p>
<pre><code>$ k describe ingress test-ingress
Name: test-ingress
Namespace: default
Address: 35.186.255.184
Default backend: default-http-backend:80 (10.44.1.3:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* my-service:32111 (<none>)
Annotations:
backends: {"k8s-be-30030--ecc76c47732c7f90":"HEALTHY"}
forwarding-rule: k8s-fw-default-test-ingress--ecc76c47732c7f90
target-proxy: k8s-tp-default-test-ingress--ecc76c47732c7f90
url-map: k8s-um-default-test-ingress--ecc76c47732c7f90
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 18m loadbalancer-controller default/test-ingress
Normal CREATE 17m loadbalancer-controller ip: 35.186.255.184
Warning Service 1m (x5 over 17m) loadbalancer-controller Could not find nodeport for backend {ServiceName:my-service ServicePort:{Type:0 IntVal:32111 StrVal:}}: could not find matching nodeport from service
Normal Service 1m (x5 over 17m) loadbalancer-controller no user specified default backend, using system default
</code></pre>
<p>I don't understand why it's saying that it can't find nodeport - the service has nodePort defined and it is of type NodePort as well. Going to the actual IP results in <code>default backend - 404</code>. </p>
<p>Any ideas why?</p>
| <p>The configuration is missing a health check endpoint for the GKE load balancer to know whether the backend is healthy. The <code>containers</code> section for the <code>nginx</code> container should also specify:</p>
<pre><code> livenessProbe:
httpGet:
path: /
port: 80
</code></pre>
<p>The <code>GET /</code> on port 80 is the default configuration, and can be changed.</p>
|
<p><strong>What happened</strong>:</p>
<p>Force terminate does not work:</p>
<pre><code>[root@master0 manifests]# kubectl delete -f prometheus/deployment.yaml --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.extensions "prometheus-core" force deleted
^C <---- Manual Quit due to hanging. Waited over 5 minutes with no change.
[root@master0 manifests]# kubectl -n monitoring get pods
NAME READY STATUS RESTARTS AGE
alertmanager-668794449d-6dppl 0/1 Terminating 0 22h
grafana-core-576c68c58d-7nvbt 0/1 Terminating 0 22h
kube-state-metrics-69b9d65dd5-rl8td 0/1 Terminating 0 3h
node-directory-size-metrics-6hcfc 2/2 Running 0 3h
node-directory-size-metrics-w7zxh 2/2 Running 0 3h
node-directory-size-metrics-z2m5j 2/2 Running 0 3h
prometheus-core-59778c7987-vh89h 0/1 Terminating 0 3h
prometheus-node-exporter-27fjg 1/1 Running 0 3h
prometheus-node-exporter-2t5v6 1/1 Running 0 3h
prometheus-node-exporter-hhxmv 1/1 Running 0 3h
</code></pre>
<p>Then</p>
<p><strong>What you expected to happen</strong>:
Pod to be deleted</p>
<p><strong>How to reproduce it (as minimally and precisely as possible)</strong>:
We feel that there might have been an IO error with the storage on the pods. Kubernetes has its own dedicated direct storage. All hosted on AWS, using t3.xl instances.</p>
<p><strong>Anything else we need to know?</strong>:
It seems to happen randomly, but often enough that we have to reboot the entire cluster. Pods stuck in termination can be OK to deal with, but having no logs and no way to really force-remove them and start again is frustrating.</p>
<p><strong>Environment</strong>:
- Kubernetes version (use <code>kubectl version</code>): </p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
AWS
- OS (e.g. from /etc/os-release):
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
</code></pre>
<ul>
<li><p>Kernel (e.g. <code>uname -a</code>):</p>
<p>Linux 3.10.0-862.6.3.el7.x86_64 #1 SMP Tue Jun 26 16:32:21 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux</p></li>
<li><p>Install tools:
Kubernetes was deployed with Kubespray, with GlusterFS for container volumes and Weave as its networking.</p></li>
<li>Others:
2 master 1 node setup. We have redeployed the entire setup and still get hit by the same issue.</li>
</ul>
<p>I have posted this question on their issues page:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/68829" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/68829</a></p>
<p>But no reply.</p>
<p>Logs from API:</p>
<pre><code>[root@master0 manifests]# kubectl -n monitoring delete pod prometheus-core-59778c7987-bl2h4 --force --grace-period=0 -v9
I0919 13:53:08.770798 19973 loader.go:359] Config loaded from file /root/.kube/config
I0919 13:53:08.771440 19973 loader.go:359] Config loaded from file /root/.kube/config
I0919 13:53:08.772681 19973 loader.go:359] Config loaded from file /root/.kube/config
I0919 13:53:08.780266 19973 loader.go:359] Config loaded from file /root/.kube/config
I0919 13:53:08.780943 19973 loader.go:359] Config loaded from file /root/.kube/config
I0919 13:53:08.781609 19973 loader.go:359] Config loaded from file /root/.kube/config
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0919 13:53:08.781876 19973 request.go:897] Request Body: {"gracePeriodSeconds":0,"propagationPolicy":"Foreground"}
I0919 13:53:08.781938 19973 round_trippers.go:386] curl -k -v -XDELETE -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://10.1.1.28:6443/api/v1/namespaces/monitoring/pods/prometheus-core-59778c7987-bl2h4'
I0919 13:53:08.798682 19973 round_trippers.go:405] DELETE https://10.1.1.28:6443/api/v1/namespaces/monitoring/pods/prometheus-core-59778c7987-bl2h4 200 OK in 16 milliseconds
I0919 13:53:08.798702 19973 round_trippers.go:411] Response Headers:
I0919 13:53:08.798709 19973 round_trippers.go:414] Content-Type: application/json
I0919 13:53:08.798714 19973 round_trippers.go:414] Content-Length: 3199
I0919 13:53:08.798719 19973 round_trippers.go:414] Date: Wed, 19 Sep 2018 13:53:08 GMT
I0919 13:53:08.798758 19973 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"prometheus-core-59778c7987-bl2h4","generateName":"prometheus-core-59778c7987-","namespace":"monitoring","selfLink":"/api/v1/namespaces/monitoring/pods/prometheus-core-59778c7987-bl2h4","uid":"7647d17a-bc11-11e8-bd71-06b8eceafd88","resourceVersion":"676465","creationTimestamp":"2018-09-19T13:39:41Z","deletionTimestamp":"2018-09-19T13:40:18Z","deletionGracePeriodSeconds":0,"labels":{"app":"prometheus","component":"core","pod-template-hash":"1533473543"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"prometheus-core-59778c7987","uid":"75aba047-bc11-11e8-bd71-06b8eceafd88","controller":true,"blockOwnerDeletion":true}],"finalizers":["foregroundDeletion"]},"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"prometheus-core","defaultMode":420}},{"name":"rules-volume","configMap":{"name":"prometheus-rules","defaultMode":420}},{"name":"api-token","secret":{"secretName":"api-token","defaultMode":420}},{"name":"ca-crt","secret":{"secretName":"ca-crt","defaultMode":420}},{"name":"prometheus-k8s-token-trclf","secret":{"secretName":"prometheus-k8s-token-trclf","defaultMode":420}}],"containers":[{"name":"prometheus","image":"prom/prometheus:v1.7.0","args":["-storage.local.retention=12h","-storage.local.memory-chunks=500000","-config.file=/etc/prometheus/prometheus.yaml","-alertmanager.url=http://alertmanager:9093/"],"ports":[{"name":"webui","containerPort":9090,"protocol":"TCP"}],"resources":{"limits":{"cpu":"500m","memory":"500M"},"requests":{"cpu":"500m","memory":"500M"}},"volumeMounts":[{"name":"config-volume","mountPath":"/etc/prometheus"},{"name":"rules-volume","mountPath":"/etc/prometheus-rules"},{"name":"api-token","mountPath":"/etc/prometheus-token"},{"name":"ca-crt","mountPath":"/etc/prometheus-ca"},{"name":"prometheus-k8s-token-trclf","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"prometheus-k8s","serviceAccount":"prometheus-k8s","nodeName":"master1.infra.cde","securityContext":{},"schedulerName":"default-scheduler"},"status":{"phase":"Pending","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-09-19T13:39:41Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2018-09-19T13:39:41Z","reason":"ContainersNotReady","message":"containers with unready status: [prometheus]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":null,"reason":"ContainersNotReady","message":"containers with unready status: [prometheus]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-09-19T13:39:41Z"}],"hostIP":"10.1.1.187","startTime":"2018-09-19T13:39:41Z","containerStatuses":[{"name":"prometheus","state":{"terminated":{"exitCode":0,"startedAt":null,"finishedAt":null}},"lastState":{},"ready":false,"restartCount":0,"image":"prom/prometheus:v1.7.0","imageID":""}],"qosClass":"Guaranteed"}}
pod "prometheus-core-59778c7987-bl2h4" force deleted
I0919 13:53:08.798864 19973 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://10.1.1.28:6443/api/v1/namespaces/monitoring/pods/prometheus-core-59778c7987-bl2h4'
I0919 13:53:08.801386 19973 round_trippers.go:405] GET https://10.1.1.28:6443/api/v1/namespaces/monitoring/pods/prometheus-core-59778c7987-bl2h4 200 OK in 2 milliseconds
I0919 13:53:08.801403 19973 round_trippers.go:411] Response Headers:
I0919 13:53:08.801409 19973 round_trippers.go:414] Content-Type: application/json
I0919 13:53:08.801415 19973 round_trippers.go:414] Content-Length: 3199
I0919 13:53:08.801420 19973 round_trippers.go:414] Date: Wed, 19 Sep 2018 13:53:08 GMT
I0919 13:53:08.801465 19973 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"prometheus-core-59778c7987-bl2h4","generateName":"prometheus-core-59778c7987-","namespace":"monitoring","selfLink":"/api/v1/namespaces/monitoring/pods/prometheus-core-59778c7987-bl2h4","uid":"7647d17a-bc11-11e8-bd71-06b8eceafd88","resourceVersion":"676465","creationTimestamp":"2018-09-19T13:39:41Z","deletionTimestamp":"2018-09-19T13:40:18Z","deletionGracePeriodSeconds":0,"labels":{"app":"prometheus","component":"core","pod-template-hash":"1533473543"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"prometheus-core-59778c7987","uid":"75aba047-bc11-11e8-bd71-06b8eceafd88","controller":true,"blockOwnerDeletion":true}],"finalizers":["foregroundDeletion"]},"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"prometheus-core","defaultMode":420}},{"name":"rules-volume","configMap":{"name":"prometheus-rules","defaultMode":420}},{"name":"api-token","secret":{"secretName":"api-token","defaultMode":420}},{"name":"ca-crt","secret":{"secretName":"ca-crt","defaultMode":420}},{"name":"prometheus-k8s-token-trclf","secret":{"secretName":"prometheus-k8s-token-trclf","defaultMode":420}}],"containers":[{"name":"prometheus","image":"prom/prometheus:v1.7.0","args":["-storage.local.retention=12h","-storage.local.memory-chunks=500000","-config.file=/etc/prometheus/prometheus.yaml","-alertmanager.url=http://alertmanager:9093/"],"ports":[{"name":"webui","containerPort":9090,"protocol":"TCP"}],"resources":{"limits":{"cpu":"500m","memory":"500M"},"requests":{"cpu":"500m","memory":"500M"}},"volumeMounts":[{"name":"config-volume","mountPath":"/etc/prometheus"},{"name":"rules-volume","mountPath":"/etc/prometheus-rules"},{"name":"api-token","mountPath":"/etc/prometheus-token"},{"name":"ca-crt","mountPath":"/etc/prometheus-ca"},{"name":"prometheus-k8s-token-trclf","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"prometheus-k8s","serviceAccount":"prometheus-k8s","nodeName":"master1.infra.cde","securityContext":{},"schedulerName":"default-scheduler"},"status":{"phase":"Pending","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-09-19T13:39:41Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2018-09-19T13:39:41Z","reason":"ContainersNotReady","message":"containers with unready status: [prometheus]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":null,"reason":"ContainersNotReady","message":"containers with unready status: [prometheus]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-09-19T13:39:41Z"}],"hostIP":"10.1.1.187","startTime":"2018-09-19T13:39:41Z","containerStatuses":[{"name":"prometheus","state":{"terminated":{"exitCode":0,"startedAt":null,"finishedAt":null}},"lastState":{},"ready":false,"restartCount":0,"image":"prom/prometheus:v1.7.0","imageID":""}],"qosClass":"Guaranteed"}}
I0919 13:53:08.801758 19973 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://10.1.1.28:6443/api/v1/namespaces/monitoring/pods?fieldSelector=metadata.name%3Dprometheus-core-59778c7987-bl2h4&resourceVersion=676465&watch=true'
I0919 13:53:08.803409 19973 round_trippers.go:405] GET https://10.1.1.28:6443/api/v1/namespaces/monitoring/pods?fieldSelector=metadata.name%3Dprometheus-core-59778c7987-bl2h4&resourceVersion=676465&watch=true 200 OK in 1 milliseconds
I0919 13:53:08.803424 19973 round_trippers.go:411] Response Headers:
I0919 13:53:08.803430 19973 round_trippers.go:414] Date: Wed, 19 Sep 2018 13:53:08 GMT
I0919 13:53:08.803436 19973 round_trippers.go:414] Content-Type: application/json
</code></pre>
| <p>After some investigation, and with help from the Kubernetes community over on GitHub, we found the solution: 1.11.0 has a known bug related to this issue, and after upgrading to 1.12.0 the problem was resolved. The fix is also noted as being included in 1.11.1.</p>
<p>Thanks to cduchesne <a href="https://github.com/kubernetes/kubernetes/issues/68829#issuecomment-422878108" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/68829#issuecomment-422878108</a></p>
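<p>If you are stuck on an affected version in the meantime, one possible interim workaround (just a sketch, not something taken from the upstream issue) is to clear the <code>foregroundDeletion</code> finalizer that is visible in the API response above, which normally lets the stuck pod go away:</p>

<pre><code>kubectl -n monitoring patch pod prometheus-core-59778c7987-bl2h4 \
  -p '{"metadata":{"finalizers":null}}'
</code></pre>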
|
<p>Can someone help me write a PromQL query to get the node status, i.e. whether a node is schedulable or not? Thanks in advance.</p>
| <p>If you use prometheus-operator, <code>kube-state-metrics</code> returns the <code>kube_node_status_condition</code> metric:</p>
<pre><code>sum(kube_node_status_condition{condition="Ready",status!="true"})
</code></pre>
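<p>If you need schedulability specifically (i.e. whether a node is cordoned), kube-state-metrics also exposes <code>kube_node_spec_unschedulable</code>, which is 1 when the node is marked unschedulable, so a query along these lines should work as well:</p>

<pre><code># nodes that are currently marked unschedulable
kube_node_spec_unschedulable == 1
</code></pre>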
|
<p>I am still in the initial phase of understanding k8s, so please forgive me if this seems lame. I wanted to know if there is a way to add some kind of delay or to order the execution of the containers in a single pod.
Let's say we have one Pod A with 2 containers: for example, a Kafka container exposing ports 9092 and 8081, and another container, say a Kafka Connect image, which hits both of these ports. I wanted to know if there is a way to start the Kafka container first and then start the Kafka Connect container so that it won't get killed. </p>
<p>1) Can we have some delay or a sleep between 2 container creation?</p>
<p>2) Can we have some kind of priority of execution of containers within pods so that we can accomplish the above-mentioned situation?</p>
<p>PS: I have considered creating 2 pods and am currently working on that, but I also wanted to try this approach and learn something new at the same time. Thanks.</p>
| <p>There is a Feature Request for it: <a href="https://github.com/kubernetes/kubernetes/issues/65502" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/65502</a></p>
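<p>Until something like that exists natively, a common workaround is to make the dependent container wait for the other container's ports in its own entrypoint. A minimal sketch of the <code>containers</code> section (image names and the final start command are placeholders, and it assumes a shell plus <code>nc</code> are available in the image):</p>

<pre><code>  containers:
  - name: kafka
    image: my-kafka-image            # placeholder
    ports:
    - containerPort: 9092
    - containerPort: 8081
  - name: kafka-connect
    image: my-kafka-connect-image    # placeholder
    command: ["sh", "-c"]
    args:
      - |
        # containers in the same pod share a network namespace, so Kafka is on localhost
        until nc -z localhost 9092 && nc -z localhost 8081; do
          echo "waiting for kafka..."; sleep 2
        done
        exec /opt/kafka/bin/connect-distributed.sh /etc/kafka-connect/connect.properties  # placeholder
</code></pre>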
|
<p>In some cases, we have Services that get no response when trying to access them. Eg Chrome shows ERR_EMPTY_RESPONSE, and occasionally we get other errors as well, like 408, which I'm fairly sure is returned from the ELB, not our application itself.</p>
<p>After a long involved investigation, including ssh'ing into the nodes themselves, experimenting with load balancers and more, we are still unsure at which layer the problem actually exists: either in Kubernetes itself, or in the backing services from Amazon EKS (ELB or otherwise)</p>
<ul>
<li>It seems that only the instance (data) port of the node has the issue. The problem seems to come and go intermittently, which makes us believe it is not something obvious in our kubernetes manifest or docker configurations, but rather something else in the underlying infrastructure. Sometimes the service & pod will be working, but when we come back in the morning it will be broken. This leads us to believe that the issue stems from a redistribution of the pods in kubernetes, possibly triggered by something in AWS (load balancer changing, auto-scaling group changes, etc) or something in kubernetes itself when it redistributes pods for other reasons.</li>
<li>In all cases we have seen, the health check port continues to work without issue, which is why kubernetes and aws both think that everything is ok and do not report any failures.</li>
<li>We have seen some pods on a node work, while others do not on that same node.</li>
<li>We have verified kube-proxy is running and that the iptables-save output is the "same" between two pods that are working. (the same meaning that everything that is not unique, like ip addresses and ports are the same, and consistent with what they should be relative to each other). (we used these instructions to help with these instructions: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-the-kube-proxy-working" rel="noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-the-kube-proxy-working</a></li>
<li>From ssh on the node itself, for a pod that is failing, we CAN access the pod (ie the application itself) via all possible ip/ports that are expected.
<ul>
<li>the 10. address of the node itself, on the instance data port.</li>
<li>the 10. address of the pod (docker container) on the application port.</li>
<li>the 172. address of the ??? on the application port (we are not sure what that ip is, or how the ip route gets to it, as it is a different subnet than the 172 address of the docker0 interface).</li>
</ul></li>
<li>From ssh on another node, for a pod that is failing, we cannot access the failing pod on any ports (ERR_EMPTY_RESPONSE). This seems to be the same behaviour as the service/load balancer.</li>
</ul>
<p>What else could cause behaviour like this?</p>
| <p>After much investigation, we were fighting a number of issues:</p>

<ul>
<li>Our application didn't always behave the way we were expecting. Always check that first.</li>
<li>In our Kubernetes Service manifest, we had set <code>externalTrafficPolicy: Local</code>, which probably should work, but was causing us problems. (This was while using the Classic Load Balancer, i.e. <code>service.beta.kubernetes.io/aws-load-balancer-type: "clb"</code>.) So if you have problems with CLB, either remove <code>externalTrafficPolicy</code> or explicitly set it to the default "Cluster" value.</li>
</ul>
<p>So our manifest is now:</p>

<pre><code>kind: Service
apiVersion: v1
metadata:
  name: apollo-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "clb"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REDACTED"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  externalTrafficPolicy: Cluster
  selector:
    app: apollo
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 80
  type: LoadBalancer
</code></pre>
|
<p>I have deployed the Kubernetes dashboard which ended up in <code>CrashLoopBackOff</code> status. When I run:</p>
<pre><code>$ kubectl logs kubernetes-dashboard-767dc7d4d-mc2sm --namespace=kube-system
</code></pre>
<p>the output is:</p>
<pre><code>Error from server: Get https://10.4.211.53:10250/containerLogs/kube-system/kubernetes-dashboard-767dc7d4d-mc2sm/kubernetes-dashboard: dial tcp 10.4.211.53:10250: connect: no route to host
</code></pre>
<p>How can I fix this? Does this means that the port 10250 isn't open?</p>
<hr>
<p>Update:</p>
<p>@LucaBrasi<br>
<code>Error from server (NotFound): pods "kubernetes-dashboard-767dc7d4d-mc2sm" not found</code></p>
<p><code>systemctl status kubelet --full</code> Output is : </p>
<pre><code>kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since 一 2018-09-10 15:04:57 CST; 1 day 23h ago
Docs: https://kubernetes.io/docs/
Main PID: 93440 (kubelet)
Tasks: 21
Memory: 78.9M
CGroup: /system.slice/kubelet.service
└─93440 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
</code></pre>
<p>Output for <code>kubectl get pods --all-namespaces</code></p>
<pre><code>
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-qh6zb 1/1 Running 2 3d
kube-system coredns-78fcdf6894-xbzgn 1/1 Running 1 3d
kube-system etcd-twsr-whtestserver01.garenanet.com 1/1 Running 2 3d
kube-system kube-apiserver-twsr-whtestserver01.garenanet.com 1/1 Running 2 3d
kube-system kube-controller-manager-twsr-whtestserver01.garenanet.com 1/1 Running 2 3d
kube-system kube-flannel-ds-amd64-2bnmx 1/1 Running 3 3d
kube-system kube-flannel-ds-amd64-r58j6 1/1 Running 0 3d
kube-system kube-flannel-ds-amd64-wq6ls 1/1 Running 0 3d
kube-system kube-proxy-ds7lg 1/1 Running 0 3d
kube-system kube-proxy-fx46d 1/1 Running 0 3d
kube-system kube-proxy-ph7qq 1/1 Running 2 3d
kube-system kube-scheduler-twsr-whtestserver01.garenanet.com 1/1 Running 1 3d
kube-system kubernetes-dashboard-767dc7d4d-mc2sm 0/1 CrashLoopBackOff 877 3d
</code></pre>
| <p>I had the same issue when I reproduced all the steps from the tutorial you've linked - my dashboard was in the <code>CrashLoopBackOff</code> state. After I performed the steps below and applied the new dashboard yaml from the official GitHub documentation (there seems to be no difference from the one you've posted), the dashboard was working properly. </p>
<p>First, list all the objects related to Kubernetes dashboard:</p>
<pre><code>kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kube-system | grep dashboard
</code></pre>
<p>Delete them:</p>
<pre><code>kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl delete service kubernetes-dashboard --namespace=kube-system
kubectl delete role kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete sa kubernetes-dashboard --namespace=kube-system
kubectl delete secret kubernetes-dashboard-certs --namespace=kube-system
kubectl delete secret kubernetes-dashboard-key-holder --namespace=kube-system
</code></pre>
<p>Now apply Kubernetes dashboard yaml:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<p>Please tell me if this worked for you as well, and if it did, treat it as a workaround as I don't know the reason yet - I am investigating. </p>
|
<p>I have a use case with elastic search for rack awareness, which requires me to identify the zone that a pod has been scheduled in.</p>
<p>I've seen many requests for this outside of SO, such as:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/40610" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/40610</a></p>
<p>The zone is not exposed in the downward API due to it being a node label, not a pod label.</p>
<p>The two "answers" I have come up with are:</p>
<p>a) Curl request to the google metadata endpoint to retrieve the node from the compute engine metadata</p>
<p>b) Identify the node name via the downward API, and make a call to the Kube API with the node name to retrieve the node object, and use a tool such as JQ to filter the JSON response to get the zone. </p>
<p>I don't like option B due to it being more or less hardcoded against the API call, and I would need to provision a custom docker image with JQ and curl included. Option A feels a bit 'hacky' given it's not Kube native.</p>
<p>Are there any better solutions that I've missed?</p>
<p>Thanks in advance,</p>
| <p>I don't particularly like doing it this way but I've yet to find a better answer, and none of the feature requests or bug reports on github seem to be going anywhere.</p>
<p>I opted to use a config map with a bash script which would do the curl request and some string manipulation, and then mount this into the container with volumes/volumeMounts and set a custom entry point to execute the script; injecting values into elasticsearch.yml, and then execute ES itself:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: elastic
data:
elastic.sh: |-
set -e
elasticConfig="/usr/share/elasticsearch/config/elasticsearch.yml"
#this gives the us the node zone in the format projects/{projectnumber}/zones/{zone}
zone=$(curl -sS http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')
#Split and retain the text after the last /
zone=${zone##*/}
#Append it to elasticsearch.yml
echo "\nnode.attr.zone: ${zone}" >> $elasticConfig
echo "\ncluster.routing.allocation.awareness.attributes: zone" >> $elasticConfig
echo "\ncluster.routing.allocation.awareness.force.zone.values: {{ .Values.elastic.shardAwareness.zones }}" >> $elasticConfig
</code></pre>
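<p>Wiring that in is then roughly along these lines (illustrative names only; the script would also need to finish by exec'ing the usual Elasticsearch entrypoint):</p>

<pre><code>spec:
  volumes:
  - name: elastic-entrypoint
    configMap:
      name: elastic
  containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0   # illustrative
    command: ["/bin/bash", "/entrypoint/elastic.sh"]
    volumeMounts:
    - name: elastic-entrypoint
      mountPath: /entrypoint
</code></pre>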
<p>I don't particularly like this solution as it adds unnecessary overhead and isn't kube native. Querying the kube API is possible but has its own set of complications and hacks. I hope one day either the downward API will expose zone / region labels, or that maybe I'm missing something and there is a better way than this after all.</p>
<p>If nothing else, perhaps this will help someone else and stop them wasting time googling for answers that don't seem to be out there!</p>
|
<p>I basically want to access the Nginx-hello page externally by URL. I've made a (working) A-record for a subdomain to my v-server running kubernetes and Nginx ingress: vps.my-domain.com</p>
<p>I installed Kubernetes via kubeadm on CoreOS as a single-node cluster using these tutorials: <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/</a>, <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a>, and nginx-ingress using <a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal</a>.</p>
<p>I also added the following entry to the /etc/hosts file:</p>
<pre><code>31.214.xxx.xxx vps.my-domain.com
</code></pre>
<p>(xxx was replaced with the last three digits of the server IP)</p>
<p>I used the following file to create the deployment, service, and ingress:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector:
matchLabels:
run: my-nginx
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
run: my-nginx
spec:
containers:
- name: my-nginx
image: nginx
ports:
- name: http
containerPort: 80
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: my-nginx
labels:
run: my-nginx
spec:
type: ClusterIP
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
run: my-nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-nginx
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "False"
spec:
rules:
- host: vps.my-domain.com
http:
paths:
- backend:
serviceName: my-nginx
servicePort: 80
</code></pre>
<p>Output of describe ing:</p>
<pre><code>core@vps ~/k8 $ kubectl describe ing
Name: my-nginx
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
vps.my-domain.com
my-nginx:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1",...}
kubernetes.io/ingress.class: nginx
ingress.kubernetes.io/ssl-redirect: False
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UPDATE 49m (x2 over 56m) nginx-ingress-controller Ingress default/my-nginx
</code></pre>
<p>While I can curl the Nginx hello page using the nodeip and port 80 it doesn't work from outside the VM. <code>Failed to connect to vps.my-domain.com port 80: Connection refused</code></p>
<p>Did I forget something, or is the configuration just wrong? Any help or tips would be appreciated! </p>
<p>Thank you</p>
<p><strong>EDIT:</strong></p>
<p>Visiting <code>vps.my-domain.com:30519</code> gives me the nginx welcome page. But in the config I specified port :80.
I got the port from the output of get services:</p>
<pre><code>core@vps ~/k8 $ kubectl get services --all-namespaces | grep "my-nginx"
default my-nginx ClusterIP 10.107.5.14 <none> 80/TCP 1h
</code></pre>
<p>I also got it to work on port :80 by adding </p>
<pre><code>externalIPs:
- 31.214.xxx.xxx
</code></pre>
<p>to the my-nginx service. But this is not how it's supposed to work, right? In the tutorials and examples for Kubernetes and ingress-nginx, it always worked without externalIPs. Also, the ingress rules don't work now (e.g. if I set the path to /test).</p>
| <p>So apparently I was missing one part: the load balancer. I'm not sure why this wasn't mentioned in those instructions as a requirement. But I followed this tutorial: <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb</a> and now everything works. </p>
<p>Since MetalLB requires a range of IP addresses, you have to list your single IP address with a /32 subnet: <code>31.214.xxx.xxx/32</code></p>
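<p>For reference, a minimal MetalLB layer2 configuration for that single address could look roughly like this (double-check the exact format against the MetalLB docs for your version):</p>

<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 31.214.xxx.xxx/32
</code></pre>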
|
<p>Having AWS EKS cluster in VPC with CIDR 172.20.0.0/16 and installed <code>istio 1.0.2</code> with helm:</p>
<pre><code>helm upgrade -i istio install/kubernetes/helm/istio \
--namespace istio-system \
--set tracing.enabled=true \
--set grafana.enabled=true \
--set telemetry-gateway.grafanaEnabled=true \
--set telemetry-gateway.prometheusEnabled=true \
--set global.proxy.includeIPRanges="172.20.0.0/16" \
--set servicegraph.enabled=true \
--set galley.enabled=false
</code></pre>
<p>Then deploy some pods for testing:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-one
labels:
app: service-one
spec:
ports:
- port: 80
targetPort: 8080
name: http
selector:
app: service-one
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: service-one
spec:
replicas: 1
template:
metadata:
labels:
app: service-one
spec:
containers:
- name: app
image: gcr.io/google_containers/echoserver:1.4
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: service-two
labels:
app: service-two
spec:
ports:
- port: 80
targetPort: 8080
name: http-status
selector:
app: service-two
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: service-two
spec:
replicas: 1
template:
metadata:
labels:
app: service-two
spec:
containers:
- name: app
image: gcr.io/google_containers/echoserver:1.4
ports:
- containerPort: 8080
</code></pre>
<p>and deploy it with:</p>
<pre><code>kubectl apply -f <(istioctl kube-inject -f app.yaml)
</code></pre>
<p>Then, from inside the service-one pod, I request service-two and there are no logs about the outgoing request inside service-one's istio-proxy container; but if I reconfigure istio without setting <code>global.proxy.includeIPRanges</code>, it works as expected (I need this config to allow multiple external connections). How can I debug what is going on?</p>
| <p>Setting <code>global.proxy.includeIPRanges</code> is deprecated and should not work. There was a <a href="https://github.com/istio/istio/issues/6146" rel="nofollow noreferrer">discussion</a> on GitHub about this. The closest new thing is <code>includeOutboundIPRanges</code> in the sidecar-injector ConfigMap or the <code>traffic.sidecar.istio.io/includeOutboundIPRanges</code> pod annotation. The annotation looks easier. For now, this is not clear in the official documentation.</p>
<p>You could add the annotation to the pod template of your deployment (it has to be on the pod, since the sidecar injector reads pod annotations):</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service-one
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service-one
      annotations:
        traffic.sidecar.istio.io/includeOutboundIPRanges: "172.20.0.0/16"
    spec:
      containers:
      - name: app
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
</code></pre>
<p>And the same for the second deployment.</p>
|
<p>My kubernetes Ingress controller is logging lots of handshake messages like the ones below. How do I stop this error message? It appears the requests are coming from within the pod (127.0.0.1).</p>
<pre><code>2018/09/15 13:28:28 [crit] 21472#21472: *323765 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442
2018/09/15 13:28:28 [crit] 21472#21472: *323766 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442
2018/09/15 13:28:28 [crit] 21472#21472: *323767 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442
2018/09/15 13:28:28 [crit] 21472#21472: *323768 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442
2018/09/15 13:28:28 [crit] 21472#21472: *323769 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442
</code></pre>
<p>Here is ingress argument.</p>
<pre><code> - args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
- --enable-ssl-chain-completion=false
- --default-ssl-certificate=ingress-nginx/ingress-tls-secret
- --enable-ssl-passthrough
</code></pre>
<p>Thanks</p>
| <p>My issue was with the HAProxy health check configuration: I had set it to <code>ssl-hello-chk</code>; after changing it to <code>tcp-check</code> the error messages stopped.</p>
<p>Change this:</p>
<pre><code>mode tcp
balance leastconn
option ssl-hello-chk
</code></pre>
<p>to</p>
<pre><code> mode tcp
balance leastconn
option tcp-check
</code></pre>
|
<p>I am trying to set up Spinnaker with Kubernetes and I am getting an error that the user cannot list namespaces.</p>
<p>I don't have access to list namespaces at the cluster scope. Is it possible to set up and apply a hal configuration without access to list namespaces at the cluster scope? If yes, please let me know the steps.</p>
<p>Below I mention the command output for reference:</p>
<pre><code>hal deploy apply
+ Get current deployment
Success
- Prep deployment
Failure
Problems in default.provider.kubernetes.my-k8s-account:
! ERROR Unable to communicate with your Kubernetes cluster: Failure
executing: GET at: https://<company>/api/v1/namespaces. Message:
Forbidden! User apc doesn't have permission. namespaces is forbidden: User
"system:anonymous" cannot list namespaces at the cluster scope..
? Unable to authenticate with your Kubernetes cluster. Try using
kubectl to verify your credentials.
- Failed to prep Spinnaker deployment
</code></pre>
<hr>
<pre><code>$ kubectl get ns
No resources found.
Error from server (Forbidden): namespaces is forbidden: User "ds:uid:2319639648" cannot list namespaces at the cluster scope
</code></pre>
<hr>
<p>Regards,
Ajaz</p>
| <p>Short answer: no.</p>
<p>You can try to get your admin to give you access via a <code>ClusterRole</code> + <code>ClusterRoleBinding</code> that grants read access to namespaces.</p>
<p>Something like this:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: namespace-reader
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-all-namespaces
subjects:
- kind: User
name: your-user
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: namespace-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
|
<p>Grafana, Service Graph and Zipkin aren't deployed after the installation of Istio. Is there no way to install these add-ons after the initial install of Istio? </p>
| <p>It depends on how you installed Istio.</p>
<p>If you installed it with <code>helm install</code>, then you can install the add-ons using a command like this:</p>
<pre><code>helm upgrade istio istio-1.0.0/install/kubernetes/helm/istio --set grafana.enabled=true,servicegraph.enabled=true,tracing.enabled=true
</code></pre>
<p>Or, if you installed it with <code>helm template</code>, then you need to edit the values.yaml in your helm template directory to enable those add-ons, then install it:</p>
<pre><code>helm template install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio.yaml
kubectl apply -f $HOME/istio.yaml
</code></pre>
|
<p>I'm running jobs on EKS. After trying to start a job with invalid yaml, it doesn't seem to let go of the bad yaml and keeps giving me the same error message even after correcting the file.</p>
<ol>
<li>I successfully ran a job.</li>
<li>I added an environment variable with a boolean value in the <code>env</code> section, which raised this error:
<ul>
<li><code>Error from server (BadRequest): error when creating "k8s/jobs/create_csv.yaml": Job in version "v1" cannot be handled as a Job: v1.Job: Spec: v1.JobSpec: Template: v1.PodTemplateSpec: Spec: v1.PodSpec: Containers: []v1.Container: v1.Container: Env: []v1.EnvVar: v1.EnvVar: Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|oduction"},{"name":"RAILS_LOG_TO_STDOUT","value":true},{"name":"AWS_REGION","value":"us-east-1"},{"n|...</code></li>
</ul></li>
<li>I changed the value to be a string <code>yes</code>, but the error message continues to show the original, bad yaml.</li>
<li>No jobs show up in <code>kubectl get jobs --all-namespaces</code>
<ul>
<li>So I don't know where this old yaml would be hiding.</li>
</ul></li>
</ol>
<p>I thought this might be because I didn't have <code>imagePullPolicy</code> set to <code>Always</code>, but it happens even if I run the <code>kubectl</code> command locally.</p>
<p>Below is my job definition file:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
generateName: create-csv-
labels:
transformer: AR
spec:
template:
spec:
containers:
- name: create-csv
image: my-image:latest
imagePullPolicy: Always
command: ["bin/rails", "create_csv"]
env:
- name: RAILS_ENV
value: production
- name: RAILS_LOG_TO_STDOUT
value: yes
- name: AWS_REGION
value: us-east-1
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: aws
key: aws_access_key_id
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: aws
key: aws_secret_access_key
restartPolicy: OnFailure
backoffLimit: 6
</code></pre>
| <p>"yes" must be quoted in yaml or it gets treated as a keyword that means a boolean true</p>
<p>Try this:</p>
<pre><code>value: "yes"
</code></pre>
|
<p>Has anyone recently deployed a k8s application after standing up a cluster via devstack / Magnum?</p>
<p>Using devstack (latest) I've successfully deployed a K8s cluster on OpenStack. This is on a single bare metal server running Ubuntu 18.04.</p>
<pre><code>openstack coe cluster template create k8s-cluster-template \
--image fedora-atomic-27 \
--keypair testkey \
--external-network public \
--dns-nameserver 8.8.8.8 \
--flavor m1.small \
--docker-volume-size 5 \
--network-driver flannel \
--coe kubernetes \
--volume-driver cinder
openstack coe cluster create k8s-cluster \
--cluster-template k8s-cluster-template \
--master-count 1 \
--node-count 1
</code></pre>
<p>In trying out the cluster I ran into configuration issues. I'm trying to determine where I went wrong, and am wondering if anyone else is seeing issues with Magnum k8s clusters and dynamic provisioning of Cinder volumes?</p>
<p>K8s version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>The config issues: first, no default storage class was created in Kubernetes. When I used Helm to deploy something simple (stable/mariadb) the persistent volume claims were never bound. It turns out this is a known issue with Magnum, with a <a href="https://review.openstack.org/#/c/499842" rel="nofollow noreferrer">pending fix</a>. </p>
<p>I used kubectl to create a default:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
</code></pre>
<p>After that, the PVCs were still pending, but when I ran describe on one I could see an error: </p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 55s (x26 over 6m) persistentvolume-controller Failed to provision volume with StorageClass "standard": OpenStack cloud provider was not initialized properly : stat /etc/kubernetes/cloud-config: no such file or directory
</code></pre>
<p>Looking at the kube-controller-manager process, it was not passed the cloud-provider or cloud-config command line args:</p>
<pre><code>kube 3111 1.8 4.2 141340 86392 ? Ssl Sep19 1:18 /usr/local/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080 --leader-elect=true --service-account-private-key-file=/etc/kubernetes/certs/service_account_private.key --root-ca-file=/etc/kubernetes/certs/ca.crt
</code></pre>
<p>Even though these arguments were written into /etc/kubernetes/controller-manager via magnum/heat/cloud-init:</p>
<pre><code>###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--leader-elect=true --service-account-private-key-file=/etc/kubernetes/certs/service_account_private.key --root-ca-file=/etc/kubernetes/certs/ca.crt --cloud-config=/etc/kubernetes/kube_openstack_config --cloud-provider=openstack"
</code></pre>
<p>From the cloud-init output log and "atomic containers list" I can see the controller manager is started from a docker image. It turns out the image is run with the /usr/bin/kube-controller-manager.sh script. Looking into the image rootfs, this script is removing the --cloud-config / --cloud-provider arguments:</p>
<pre><code>ARGS=$(echo $ARGS | sed s/--cloud-provider=openstack//)
ARGS=$(echo $ARGS | sed s#--cloud-config=/etc/kubernetes/kube_openstack_config##)
</code></pre>
<p>Any idea why the image is doing this? </p>
<p>To make progress I commented out the two sed lines and restarted. I could then verify that the processes had the expected arguments. The log files showed they were picked up (and complained that they are deprecated).</p>
<p>Now when I attempt to install MariaDB via Helm, I get an error that the volume allocation call fails with a 400:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 9s (x7 over 1m) persistentvolume-controller Failed to provision volume with StorageClass "standard": failed to create a 8 GB volume: Invalid request due to incorrect syntax or missing required parameters.
</code></pre>
<p>From /var/log/syslog Cinder is complaining, but doesn't provide any additional information:</p>
<pre><code>Sep 20 10:31:36 vantiq-dell-02 [email protected][32488]: #033[00;36mINFO cinder.api.openstack.wsgi [#033[01;36mNone req-7d95ad99-015b-4c59-8072-6e800abbf01f #033[00;36mdemo admin#033[00;36m] #033[01;35m#033[00;36mPOST http://192.168.7.172/volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 [email protected][32488]: #033[00;36mINFO cinder.api.openstack.wsgi [#033[01;36mNone req-cc10f012-a824-4f05-9aa4-d871603842dc #033[00;36mdemo admin#033[00;36m] #033[01;35m#033[00;36mPOST http://192.168.7.172/volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 [email protected][32488]: #033[00;32mDEBUG cinder.api.openstack.wsgi [#033[01;36mNone req-7d95ad99-015b-4c59-8072-6e800abbf01f #033[00;36mdemo admin#033[00;32m] #033[01;35m#033[00;32mAction: 'create', calling method: create, body: {"volume":{"availability_zone":"nova","metadata":{"kubernetes.io/created-for/pv/name":"pvc-687269c1-bcf6-11e8-bf16-fa163e3354e2","kubernetes.io/created-for/pvc/name":"data-fantastic-yak-mariadb-master-0","kubernetes.io/created-for/pvc/namespace":"default"},"name":"kubernetes-dynamic-pvc-687269c1-bcf6-11e8-bf16-fa163e3354e2","size":8}}#033[00m #033[00;33m{{(pid=32491) _process_stack /opt/stack/cinder/cinder/api/openstack/wsgi.py:870}}#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 [email protected][32488]: #033[00;32mDEBUG cinder.api.openstack.wsgi [#033[01;36mNone req-cc10f012-a824-4f05-9aa4-d871603842dc #033[00;36mdemo admin#033[00;32m] #033[01;35m#033[00;32mAction: 'create', calling method: create, body: {"volume":{"availability_zone":"nova","metadata":{"kubernetes.io/created-for/pv/name":"pvc-68e9c7c9-bcf6-11e8-bf16-fa163e3354e2","kubernetes.io/created-for/pvc/name":"data-fantastic-yak-mariadb-slave-0","kubernetes.io/created-for/pvc/namespace":"default"},"name":"kubernetes-dynamic-pvc-68e9c7c9-bcf6-11e8-bf16-fa163e3354e2","size":8}}#033[00m #033[00;33m{{(pid=32490) _process_stack /opt/stack/cinder/cinder/api/openstack/wsgi.py:870}}#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 [email protected][32488]: #033[00;36mINFO cinder.api.openstack.wsgi [#033[01;36mNone req-cc10f012-a824-4f05-9aa4-d871603842dc #033[00;36mdemo admin#033[00;36m] #033[01;35m#033[00;36mhttp://192.168.7.172/volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes returned with HTTP 400#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 [email protected][32488]: [pid: 32490|app: 0|req: 205/414] 172.24.4.10 () {64 vars in 1329 bytes} [Thu Sep 20 10:31:36 2018] POST /volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes => generated 494 bytes in 7 msecs (HTTP/1.1 400) 5 headers in 230 bytes (2 switches on core 0)
Sep 20 10:31:36 vantiq-dell-02 [email protected][32488]: #033[00;36mINFO cinder.api.openstack.wsgi [#033[01;36mNone req-7d95ad99-015b-4c59-8072-6e800abbf01f #033[00;36mdemo admin#033[00;36m] #033[01;35m#033[00;36mhttp://192.168.7.172/volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes returned with HTTP 400#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 [email protected][32488]: [pid: 32491|app: 0|req: 210/415] 172.24.4.10 () {64 vars in 1329 bytes} [Thu Sep 20 10:31:36 2018] POST /volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes => generated 495 bytes in 7 msecs (HTTP/1.1 400) 5 headers in 230 bytes (2 switches on core 0)
</code></pre>
<p>For reference, here is the volume configuration for the master MariaDB pod:</p>
<pre><code> volumes:
- name: config
configMap:
name: joking-opossum-mariadb-master
- name: custom-init-scripts
configMap:
name: joking-opossum-mariadb-master-init-scripts
volumeClaimTemplates:
- metadata:
name: data
labels:
app: "mariadb"
chart: mariadb-4.4.2
component: "master"
release: "joking-opossum"
heritage: "Tiller"
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
</code></pre>
<p>Any insight into what is wrong would be greatly appreciated.</p>
| <p>The issue appears to be a bug in the interaction between Kubernetes and Cinder in the latest devstack code (as of 19-Sep-2018). I backed off and deployed using the stable/queens branch, and both of the issues (cmd line args missing / PVC unbound) have gone away. I can successfully deploy MariaDB to a 2-node cluster created via Magnum.</p>
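<p>If you need to do the same, checking devstack out at the stable branch before running <code>stack.sh</code> should be enough - roughly along these lines (repository URL as it was at the time; adjust as needed):</p>

<pre><code>git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
cd devstack
./stack.sh
</code></pre>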
|
<p>When I run Kubernetes in Docker for Mac, the Kube API only seems to be accessible from a secure endpoint of <a href="https://localhost:6443/" rel="nofollow noreferrer">https://localhost:6443/</a></p>
<p>With minikube I was able to use an insecure endpoint for Kube API like <a href="http://localhost:8080/" rel="nofollow noreferrer">http://localhost:8080/</a></p>
<p>Is there any way to use an insecure endpoint for Kube API in Kubernetes in Docker for Mac?</p>
| <p>You may be running an old version of Kubernetes with minikube.</p>
<p>The default insecure port for the kube-apiserver is 8080, but that's disabled on the latest Kubernetes versions in the kube-apiserver with the flag: <code>--insecure-port=0</code>.</p>
<p>You can always delete that line from your <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> file.</p>
<p>You also need to add this option <code>--insecure-bind-address=0.0.0.0</code> as per <a href="https://github.com/kubernetes/kubernetes/issues/49933" rel="nofollow noreferrer">this</a>.</p>
<p>Then restart the kube-apiserver.</p>
<p>Tip: Docker/Kubernetes runs on <a href="https://github.com/mist64/xhyve" rel="nofollow noreferrer">xhyve</a> VM(s) on your Mac. So to modify the Kubernetes configs you'll have to connect to your <a href="https://github.com/mist64/xhyve" rel="nofollow noreferrer">xhyve</a> VM(s). You can do it with something like this: <code>screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty</code> or <code>screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
</code></p>
<p>Adding more details: So the port 6443 is forwarded to the host using <a href="https://github.com/moby/vpnkit" rel="nofollow noreferrer">vpnkit</a>. To make port 8080 available on the host you have to also expose that port with vpnkit. If you screen into the hyperkit vm you'll see that port mappings are defined in <code>/var/vpnkit/port</code>. There's a <code>README</code> file on that directory that you can follow to expose port 8080.</p>
|
<p>I am trying to set up a Kubernetes cluster on an Azure Ubuntu 16.04 LTS VM. I installed docker version 17.03.2~ce-0~ubuntu-xenial on the VM and followed all the steps mentioned on the official Kubernetes website, but while running the kubeadm command on my master node I get an error.</p>
<p>My init command:</p>
<pre><code> kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<ip>
</code></pre>
<p>Error Message:</p>
<pre><code>[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
[preflight] Some fatal errors occurred:
[ERROR KubeletVersion]: the kubelet version is higher than the control
plane version. This is not a supported version skew and may lead to a
malfunctional cluster. Kubelet version: "1.12.0-rc.1" Control plane version:
"1.11.3"
[preflight] If you know what you are doing, you can make a check non-fatal
with `--ignore-preflight-errors=...`
</code></pre>
| <p>You have a newer version of the <code>kubelet</code> - <code>v1.12.0-rc.1</code> than that of <code>kubeadm</code> - <code>v1.11.3</code>. You can try:</p>
<ol>
<li><p>Downgrading the kubelet to match your kubeadm version</p>
<p>On Ubuntu run: <code>apt-get -y install kubelet=1.11.3-00</code></p></li>
<li><p>The other way around, upgrade kubeadm to match that of the kubelet</p>
<p>On Ubuntu run: <code>apt-get -y install kubeadm=1.12.0-rc.1-00</code></p></li>
<li><p><code>--ignore-preflight-errors</code> like it says, but watch if you see any other errors that may make your installation not work.</p></li>
</ol>
<p>Hope it helps.</p>
|
<p>Is there any shortcut, kubectl command, or REST API call to get a list of worker nodes only (not including the master nodes)?</p>
<p><strong>Update</strong>:
For the masters we can do like this:</p>
<pre><code>kubectl get nodes --selector=node-role.kubernetes.io/master
</code></pre>
<p>For the workers I don't see any such label created by default. Can we get them by reversing this, or by doing a != kind of thing on the selector?</p>
<p>We can't grep it either:</p>
<pre><code>C02W84XMHTD5:ucp iahmad$ kubectl get nodes | grep worker
C02W84XMHTD5:ucp iahmad$
C02W84XMHTD5:ucp iahmad$ kubectl get nodes -o wide| grep worker
C02W84XMHTD5:ucp iahmad$
C02W84XMHTD5:ucp iahmad$ kubectl get nodes -o yaml | grep worker
C02W84XMHTD5:ucp iahmad$
C02W84XMHTD5:ucp iahmad$ kubectl get nodes -o json | grep worker
C02W84XMHTD5:ucp iahmad$
</code></pre>
<p>My use case is that I want to get this list every minute to update the external load balancer pools, in case new nodes are added to or removed from the cluster. Indeed I can label them myself, but if there is some default built-in way of doing this it would be useful.</p>
| <p>You can get roles/labels of your nodes by </p>
<pre><code>kubectl get nodes --show-labels
</code></pre>
<p>In my case, I have three nodes, each having the given roles and labels:</p>
<pre><code>NAME STATUS ROLES AGE VERSION LABELS
host01 Ready controlplane,etcd,worker 61d v1.10.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=host01,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/worker=true
host02 Ready etcd,worker 61d v1.10.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=host02,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/worker=true
host03 Ready etcd,worker 61d v1.10.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=host03,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/worker=true
</code></pre>
<p>Only host01 has the label <code>controlplane, worker</code> and <code>etcd</code>. The other two have <code>etcd</code> and <code>worker</code> (Scroll right to see the labels as well).</p>
<p>So I can get all worker nodes by </p>
<pre><code>kubectl get nodes -l node-role.kubernetes.io/worker=true
NAME STATUS ROLES AGE VERSION
host01 Ready controlplane,etcd,worker 61d v1.10.5
host02 Ready etcd,worker 61d v1.10.5
host03 Ready etcd,worker 61d v1.10.5
</code></pre>
<p>To exclude the <code>controlplane</code> nodes, you can filter them out with a second label selector using <code>!=true</code>:</p>
<pre><code>kubectl get nodes -l node-role.kubernetes.io/worker=true,node-role.kubernetes.io/controlplane!=true
NAME STATUS ROLES AGE VERSION
host02 Ready etcd,worker 61d v1.10.5
host03 Ready etcd,worker 61d v1.10.5
</code></pre>
<p>Please adapt that to your labels or set labels accordingly to your cluster. In my case it is a <a href="https://rancher.com/" rel="noreferrer">Rancher 2.0</a> cluster. The labels are automatically created by Rancher when added a node.</p>
<p>The API for that is in Rancher at (with the filter already appended):</p>
<pre><code>/v3/clusters/c-xxxxx/nodes?worker=true&controlPlane_ne=true
</code></pre>
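<p>If, as in the question, only the masters carry the default <code>node-role.kubernetes.io/master</code> label and the workers have no role label at all (the kubeadm default), a negated label selector should also work, without labelling the workers yourself:</p>

<pre><code>kubectl get nodes --selector='!node-role.kubernetes.io/master'
</code></pre>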
|
<p>I have tried to set up PodSecurityPolicy on a 1.10.1 cluster installed on ubuntu 16.04 with kubeadm, have followed the instructions at <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="noreferrer">https://kubernetes.io/docs/concepts/policy/pod-security-policy/</a></p>
<p>So I altered the apiserver manifest on the master at <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>
adding the <code>",PodSecurityPolicy"</code> to the <code>--admission-control</code> arg</p>
<p>When I do this and run <code>kubectl get pods -n kube-system</code>
the api-server is not listed; obviously I have managed to hit a running instance of the apiserver, as I get a list of all the other pods in the kube-system namespace.</p>
<p>I can see that a new docker container has been started with the PodSecurityPolicy admission controller and it is obviously serving kubectl requests</p>
<p>When I check the kubelet logs with <code>journalctl -u kubelet</code> I can see</p>
<p><code>Apr 15 18:14:23 pmcgrath-k8s-3-master kubelet[993]: E0415 18:14:23.087361 993 kubelet.go:1617] Failed creating a mirror pod for "kube-apiserver-pmcgrath-k8s-3-master_kube-system(46dbb13cd345f9fbb9e18e2229e2e
dd1)": pods "kube-apiserver-pmcgrath-k8s-3-master" is forbidden: unable to validate against any pod security policy: []</code></p>
<p>I have already added a privileged PSP and created a cluster role and binding and confirmed that the PSP is working</p>
<p>I'm just not sure why the kubelet gives this error for the apiserver, which therefore does not appear in the pod list. I would have thought the kubelet creates this pod, and I'm not sure if I have to create a role binding for the apiserver, controller manager, scheduler and kube-dns.</p>
<p>There are no docs indicating how to deal with this. I presume this is a chicken-and-egg situation, where I have to bootstrap the cluster and add some PSPs, ClusterRoles and ClusterRoleBindings before I can mutate the admission-control arg for the api server.</p>
<p>Anyone have the same issue or have any pointers on this ?</p>
<p>Thanks
Pat</p>
| <p>I have written a blog post on how I figured this stuff out, short answer was</p>
<ul>
<li>On master run kubeadm init with the PodSecurityPolicy admission controller enabled</li>
<li>Add some pod security policies with RBAC config - enough to allow CNI and DNS etc. to start (a minimal sketch follows after this list)
<ul>
<li>CNI daemonsets will not start without this</li>
</ul></li>
<li>Complete configuring the cluster adding nodes via kubeadm join</li>
<li>As you add more workloads to the cluster check if you need additional pod security policies and RBAC configuration for the same</li>
</ul>
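<p>As a starting point for the second step, a permissive policy plus the RBAC to use it looks roughly like this (a sketch modelled on the upstream privileged-PSP example; the names and the groups it is bound to are assumptions - tighten it for your own cluster):</p>

<pre><code>apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities: ['*']
  volumes: ['*']
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-privileged
rules:
- apiGroups: ['policy', 'extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['privileged']
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-privileged-system
roleRef:
  kind: ClusterRole
  name: psp-privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts:kube-system
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
</code></pre>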
<p>See <a href="https://pmcgrath.net/using-pod-security-policies-with-kubeadm" rel="nofollow noreferrer">https://pmcgrath.net/using-pod-security-policies-with-kubeadm</a></p>
|
<p><strong>Intro:</strong></p>
<p>On AWS, Loadbalancers are expensive ($20/month + usage), so I'm looking for a way to achieve flexible load-balancing between the k8s nodes, without having to pay that expense. The load is not that big, so I don't need the scalability of the AWS load balancer any time soon. I just need services to be HA. I can get a small EC2 instance for $3.5/month that can easily handle the current traffic, so I'm chasing that option now.</p>
<p><strong>Current setup</strong></p>
<p>Currently, I've set up a regular standalone Nginx instance (outside of k8s) that does load balancing between the nodes in my cluster, on which all services are set up to expose through NodePorts. This works really well, but whenever my cluster topology changes during restarts, adding, restarting or removing nodes, I have to manually update the upstream config on the Nginx instance, which is far from optimal, given that cluster nodes cannot be expected to stay around forever.</p>
<p><strong>So the question is:</strong></p>
<p>Can Træfik be set up outside of Kubernetes to do simple load-balancing between the Kubernetes nodes, just like my Nginx setup, but keep the upstream/backend servers of the traefik config in sync with Kubernetes' list of nodes, such that my Kubernetes services are still HA when I make changes to my node setup? All I really need is for Træfik to listen to the Kubernetes API and change the backend servers whenever the cluster changes. </p>
<p>Sounds simple, right? ;-)</p>
<p>When looking at the Træfik documentation, it seems to want an ingress resource to send its traffic to, and an ingress resource requires an ingress controller, which, I guess, requires a load balancer to become accessible? Doesn't that defeat the purpose, or is there something I'm missing?</p>
| <p>Here is something that could be useful in your case: <a href="https://github.com/unibet/ext_nginx" rel="nofollow noreferrer">https://github.com/unibet/ext_nginx</a>, but I'm not sure if the project is still in development, and configuration is probably hard as you need to allow the external ingress to access the internal k8s network.</p>
<p>Maybe you can try to do that at the AWS level? You can add a cron job on the Nginx EC2 instance that queries AWS (using the CLI) for all EC2 instances tagged as "k8s" and updates the nginx configuration if something changed.</p>
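<p>A rough sketch of what such a cron job could look like (the tag key/value, NodePort and file paths are assumptions - adjust them to your setup):</p>

<pre><code>#!/bin/sh
# regenerate the nginx upstream from the private IPs of running EC2 instances tagged for k8s
IPS=$(aws ec2 describe-instances \
  --filters "Name=tag:role,Values=k8s" "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PrivateIpAddress' --output text)

{
  echo "upstream k8s_nodes {"
  for ip in $IPS; do echo "  server $ip:30080;"; done
  echo "}"
} > /etc/nginx/conf.d/k8s-upstream.conf

nginx -s reload
</code></pre>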
|
<p>I have a (containerized) web service talking to an external CloudSQL service in Google Cloud. I've used the sidecar pattern in which a Google Cloud SQL Proxy container is next to the web service and authenticates+proxies to the external CloudSQL service. This works fine. Let's call this Deployment "deployment-api" with containers "api" + "pg-proxy"</p>
<p>The problem occurs when I want to deploy the application on my local minikube cluster which needs to have different configuration due to the service talking to a local postgres server on my computer. If I deploy "deployment-api" as is to minikube, it tries to run the "pg-proxy" container which barfs and the entire pod goes into a crash loop. Is there a way for me to selectively NOT deploy "pg-proxy" container without having two definitions for the Pod, e.g., using selectors/labels? I do not want to move pg-proxy container into its own deployment.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-api
namespace: ${MY_ENV}
labels:
app: api
env: ${MY_ENV}
spec:
replicas: ${REPLICAS}
selector:
matchLabels:
app: api
env: ${MY_ENV}
template:
metadata:
labels:
app: api
env: ${MY_ENV}
spec:
containers:
- name: pg-proxy
ports:
- containerPort: 5432
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=<redacted>:${MY_ENV}-app=tcp:5432",
"-credential_file=/secrets/cloudsql/${MY_ENV}-sql-credentials.json"]
securityContext:
runAsUser: 2 # non-root user
allowPrivilegeEscalation: false
volumeMounts:
- name: ${MY_ENV}-cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: api
image: ${DOCKER_IMAGE_PREFIX}api:${TAG}
imagePullPolicy: ${PULL_POLICY}
ports:
- containerPort: 50051
volumes:
- name: ${MY_ENV}-cloudsql-instance-credentials
secret:
secretName: ${MY_ENV}-cloudsql-instance-credentials
</code></pre>
| <p>With raw Kubernetes manifests alone? No.</p>
<p>But I strongly encourage you to use Helm to deploy your application(s). With Helm you can easily adapt manifests based on variables provided for each environment (or defaults). For example, with the variable <code>postgresql.proxy.enabled: true</code> in your default values and</p>
<pre><code>{{- if .Values.postgresql.proxy.enabled }}
- name: pg-proxy
...
{{- end }}
</code></pre>
<p>in the Helm template, you can disable this block completely in your dev environment by setting the value to false, for example:</p>
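<p>A usage sketch (the chart layout, names and Helm v2 syntax are assumptions, adjust to your setup):</p>
<pre><code># values.yaml (defaults):
#   postgresql:
#     proxy:
#       enabled: true

# On minikube, disable the sidecar at install/upgrade time:
helm upgrade --install myapp ./charts/myapp \
  --set postgresql.proxy.enabled=false
</code></pre>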
|
<p>I am using a microservice to access Hadoop and HBase to get data, but they are not accessible from the pod.</p>
<p>It shows only: </p>
<pre><code>INFO ipc.Client: Retrying connect to server: hdpcluster.internal/10.160.0.2:8020. Already tried 3 time(s); maxRetries=45
</code></pre>
<p>The IP <code>10.160.0.2</code> is accessible from all nodes, and they are all on GCP.</p>
| <p>You probably need to open a firewall rule to allow port 8020 on your HBase nodes so that your Kubernetes nodes can connect to them. Something like this in your HBase firewall rules (for your HBase nodes):</p>
<p><a href="https://i.stack.imgur.com/oNExP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oNExP.png" alt="Sample firewall"></a></p>
|
<p>We are working on migrating our application into kubernetes. We use apache camel to process very large files. But the problem is that when we migrate to kubernetes, there will be more than one pod and there is high chance that pods often restart. If the restart happens in the middle of the processing of a huge file,</p>
<ol>
<li>Will there be any chance of camellock not getting released from the file?</li>
<li>Will the restart make the file to be processed from first, rather than being started from where the process has left?</li>
</ol>
| <p>I acknowledge that there are a lot of situations that can lead to a pod restart in a Kubernetes cluster, but the problems you describe apply to Camel applications running outside Kubernetes as well. </p>
<p>To answer your questions, in case of an application restart in the middle of the processing of a large file, according to the file component documentation <a href="https://github.com/apache/camel/blob/master/camel-core/src/main/docs/file-component.adoc" rel="nofollow noreferrer">https://github.com/apache/camel/blob/master/camel-core/src/main/docs/file-component.adoc</a>:</p>
<ol>
<li>eventually the Camel lock will be released; the timing depends on how you provide the file to the pod.</li>
<li>if the processing has not finished correctly and the file has not been modified during the processing, then the processing will restart from the beginning of the file.</li>
</ol>
<p>That said, there are strategies and EIPs that can be applied, like the Splitter <a href="http://camel.apache.org/splitter.html" rel="nofollow noreferrer">http://camel.apache.org/splitter.html</a> and the Idempotent Consumer <a href="http://camel.apache.org/idempotent-consumer.html" rel="nofollow noreferrer">http://camel.apache.org/idempotent-consumer.html</a>, to split the file into chunks and avoid reprocessing the same chunks.</p>
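<p>For illustration only, a file consumer endpoint combining a read lock with an idempotent check might look like the snippet below (option names should be verified against your Camel version, and <code>fileRepo</code> is an assumed idempotent repository bean; note that it needs to be a persistent repository, e.g. database-backed, if it is to survive a pod restart):</p>
<pre><code>file:/data/inbox?readLock=changed&readLockTimeout=60000&idempotent=true&idempotentRepository=#fileRepo&moveFailed=.failed
</code></pre>
<p>Splitting the file into smaller exchanges with the Splitter EIP then limits how much work is lost and has to be redone when a restart does happen.</p>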
|
<p>I'm new to Kubernetes and I'm setting up my first testing cluster. However, I get this error when I set up the master node, and I'm not sure how to fix it.</p>
<pre><code>[ERROR KubeletVersion]: the kubelet version is higher than the control plane version.
This is not a supported version skew and may lead to a malfunctional cluster.
Kubelet version: "1.12.0-rc.1" Control plane version: "1.11.3"
</code></pre>
<p>The host is fully patched to the latest levels</p>
<p>CentOS Linux release 7.5.1804 (Core)</p>
<p>Many Thanks
S</p>
| <p>I hit the same problem and used the kubeadm option: --kubernetes-version=v1.12.0-rc.1</p>
<blockquote>
<p>sudo kubeadm init --pod-network-cidr=172.16.0.0/12 --kubernetes-version=v1.12.0-rc.1</p>
</blockquote>
<p>I'm using a VM image that was prepared a few weeks ago and have just updated the packages. Kubeadm, kubectl and kubelet all now return version v1.12.0-rc.1 when asked, but when 'kubeadm init' is called it kicks off with the previous version.</p>
<blockquote>
<p>[init] using Kubernetes version: v1.11.3</p>
</blockquote>
<p>Specifying the (control plane) version did the trick.</p>
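<p>An alternative, if you want to stay on the 1.11.3 control plane, is to align the kubelet with it instead. A sketch for CentOS (exact package versions and repo setup are assumptions, adjust to what your repo provides):</p>
<pre><code>sudo yum downgrade -y kubelet-1.11.3 kubeadm-1.11.3 kubectl-1.11.3 --disableexcludes=kubernetes
sudo systemctl restart kubelet
</code></pre>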
|
<p>I have two ASP.NET Core apps. One is a Headless CMS (API), and the other one is a Razor Pages blog front-end (with a REST client that communicates with the Headless CMS/API).</p>
<p>I then have an Azure AKS cluster. In it I have an ingress resource with the following routes (as per the instructions from the following AKS docs: <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-tls#create-an-ingress-route" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-tls#create-an-ingress-route</a> ). Each route is mapped to each of the apps/services mentioned above:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
rules:
- host: mydomain.westeurope.cloudapp.azure.com
http:
paths:
- backend:
serviceName: headless-cms-svc
servicePort: 80
path: /
- backend:
serviceName: blog-svc
servicePort: 80
path: /blog
</code></pre>
<p>When I now navigate to the first route, <code>mydomain.westeurope.cloudapp.azure.com</code>, the headless CMS app works as expected, but when I navigate to the second route, <code>mydomain.westeurope.cloudapp.azure.com/blog</code>, I get a bunch of 404s because the blog app's root path is now relative to the <code>/blog</code> ingress route, which in turn breaks all the resources (CSS, JavaScript, images, etc.) in the wwwroot folder.</p>
<p>How should I configure my ASP.NET Core blog app and/or my ingress object?</p>
<p>404s:</p>
<pre><code>https://mydomain.westeurope.cloudapp.azure.com/css/site.min.css?v=kHvJwvVAK1eJLN4w8xygUR3nbvlLmRwi5yr-OuAO90E
https://mydomain.westeurope.cloudapp.azure.com/images/banner1.svg
https://mydomain.westeurope.cloudapp.azure.com/images/banner2.svg
https://mydomain.westeurope.cloudapp.azure.com/js/site.min.js?v=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU
</code></pre>
<p>If I add the URL segment <code>/blog</code> the resources get served properly.
<code>https://mydomain.westeurope.cloudapp.azure.com/blog/images/banner1.svg</code> <- works</p>
<p>And here is a regular <code>img</code> tag in the <code>Index.cshtml</code> Razor page (from a default ASP.NET Core 2.1 Razor Pages web application). I haven't changed anything in the code.</p>
<pre><code><img src="~/images/banner1.svg" alt="ASP.NET" class="img-responsive" />
</code></pre>
| <h2>Problem</h2>
<p>It seems that your proxy rewrites the path.</p>
<ul>
<li>Before proxy: /blog/images/banner1.png</li>
<li>After proxy: /images/banner1.png</li>
</ul>
<p>ASP.NET Core generates host-relative links (path only, starting with a slash "/"). That means we have to tell the framework to prefix all generated URLs with "/blog".</p>
<h2>Solution</h2>
<p>Do this (for ASP.NET Core 2.1) by inserting the following snippet in your Startup class:</p>
<pre><code>app.Use((context, next) =>
{
context.Request.PathBase = new PathString("/blog");
return next();
});
</code></pre>
<p>Code sample from: <a href="https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.1" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.1</a></p>
<p>Insert this snippet before any other middleware in your Configure method.</p>
<p>You can test this on your local machine too. All generated links should be prefixed with "/blog" - so they will be broken on your dev machine.</p>
<h2>Use Configuration</h2>
<p>You will need to make it configurable e.g. like so:</p>
<pre><code> var basePath = Configuration.GetSection("BASE_PATH").Value;
if (basePath != null)
{
Console.WriteLine($"Using base path '{basePath}'");
// app.Use().. goes here
}
</code></pre>
<p>(Assuming you read configuration from env vars in your startup.)</p>
<p>… and provide this env var in your Kubernetes deployment:</p>
<pre><code>...
containers:
- name: myapp
image: myappimage
env:
- name: BASE_PATH
value: "/blog"
</code></pre>
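<p>To sanity-check the prefix locally before deploying (assuming your configuration is built with <code>CreateDefaultBuilder</code>, which already reads environment variables), something like this should render all links with the "/blog" prefix:</p>
<pre><code>BASE_PATH=/blog dotnet run
</code></pre>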
|
<p>I am using <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow noreferrer">jenkinsci/kubernetes-plugin</a> to provision slave pods in my Jenkins builds. I need to specify these pods' cpu and memory limit and request params. It can be done directly in the pipeline configuration, as shown <a href="https://github.com/jenkinsci/kubernetes-plugin#container-configuration" rel="nofollow noreferrer">here</a>.</p>
<p>However, I would prefer to do it directly in the Jenkins configuration (Manage Jenkins -> Configure System -> Kubernetes -> Kubernetes Pod Template).</p>
<p>There is a specific section to insert a mergeable raw yaml:</p>
<p><a href="https://i.stack.imgur.com/t9qxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t9qxN.png" alt="enter image description here"></a></p>
<p>How can it be set up there? I have tried but it didn't seem to work.</p>
| <p>I have found an option to configure it, hidden under the advanced options:</p>
<p><a href="https://i.stack.imgur.com/h6Xwp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h6Xwp.png" alt="enter image description here"></a></p>
|
<p>I'm new to OpenShift (I've been using Marathon/DC/OS before) and I want to install an Elastic stack on it; all the tutorials I've found online describe it as a "complicated task".</p>
<p>So here's my question: am I looking for something impossible or not recommended?</p>
<p>Thank you very much.</p>
| <p>It's not so much a complicated task; it really depends on your experience. OpenShift essentially runs on top of Kubernetes, and there are plenty of tutorials on how to install ELK or Elasticsearch on Kubernetes. </p>
<p>Perhaps there's a little less information about OpenShift specifically, but you can start with an EFK stack, which is Elasticsearch, Fluentd and Kibana, documented <a href="https://docs.openshift.com/enterprise/3.1/install_config/aggregate_logging.html#deploying-the-efk-stack" rel="nofollow noreferrer">here</a>. Here's another walkthrough to set up <a href="https://github.com/lbischof/openshift3-elk" rel="nofollow noreferrer">ELK specifically on OpenShift</a>. And here's another one from the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/" rel="nofollow noreferrer">Kubernetes main site</a>.</p>
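<p>If you just want to experiment quickly, a Helm chart is one way to get Elasticsearch running on a plain Kubernetes cluster first. A minimal sketch using the Helm v2 syntax (the chart name and its defaults are assumptions and may have changed):</p>
<pre><code>helm install --name elasticsearch stable/elasticsearch
kubectl get pods   # wait for the Elasticsearch pods to become Ready
</code></pre>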
<p>Hope it helps!</p>
|