Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>Below is a sample HPA configuration for scaling a pod, but no time duration is mentioned in it. I wanted to know what the duration between consecutive scaling events is.</p>
<pre><code>containerResource:
  name: cpu
  container: application
  target:
    type: Utilization
    averageUtilization: 60
</code></pre>
| Upendra | <p>By default the cool-down (scale-down stabilization) period is 5 minutes.</p>
<p>We can configure it via the <code>behavior</code> field:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
  minReplicas: 1
  maxReplicas: 10
  behavior:
    scaleDown:
      selectPolicy: Disabled
    scaleUp:
      stabilizationWindowSeconds: 120
      policies:
      - type: Percent
        value: 10
        periodSeconds: 60
      - type: Pods
        value: 4
        periodSeconds: 60
      selectPolicy: Max
</code></pre>
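<p>For reference, the built-in default corresponds to a 5-minute scale-down stabilization window; making it explicit (a sketch, only needed if you want to change it) looks like this:</p>
<pre><code>behavior:
  scaleDown:
    stabilizationWindowSeconds: 300 # the default 5-minute cool-down, written out explicitly
</code></pre>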
| saikiran |
<p>I am trying to deploy a Service in a Kubernetes Cluster. Everything works fine as long as I do not use TLS.</p>
<p>My Setup is like this:
Azure Kubernetes Cluster with Version 1.15.7
Istio 1.4.2</p>
<p>What I did so far: I created the cluster and installed Istio with the following command:</p>
<pre><code>istioctl manifest apply --set values.grafana.enabled=true \
--set values.tracing.enabled=true \
--set values.tracing.provider=jaeger \
--set values.global.mtls.enabled=false \
--set values.global.imagePullPolicy=Always \
--set values.kiali.enabled=true \
--set "values.kiali.dashboard.jaegerURL=http://jaeger-query:16686" \
--set "values.kiali.dashboard.grafanaURL=http://grafana:3000"
</code></pre>
<p>Everything starts up and all pods are running.
Then I create a Gateway</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ddhub-ingressgateway
  namespace: config
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.example.de"
    # tls:
    #   httpsRedirect: true # sends 301 redirect for http requests
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*.example.de"
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*.example.de"
</code></pre>
<p>I then import my custom certificates, which I assume also work, since they are mounted correctly and, when accessing my service in the browser, I can see the secured connection properties with all values.</p>
<p>This is my deployed service:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: hellohub-frontend
  labels:
    app: hellohub-frontend
  namespace: dev
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP
  selector:
    app: hellohub-frontend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hellohub-frontend
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hellohub-frontend
    spec:
      containers:
        - image: ddhubregistry.azurecr.io/hellohub-frontend:latest
          imagePullPolicy: Always
          name: hellohub-frontend
          volumeMounts:
            - name: azure
              mountPath: /cloudshare
          ports:
            - name: http
              containerPort: 8080
      volumes:
        - name: azure
          azureFile:
            secretName: cloudshare-dev
            shareName: ddhub-share-dev
            readOnly: true
</code></pre>
<p>and the Virtual Service:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hellohub-frontend
  namespace: dev
spec:
  hosts:
  - "dev-hellohub.example.de"
  gateways:
  - config/ddhub-ingressgateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: hellohub-frontend.dev.svc.cluster.local
        port:
          number: 8080
</code></pre>
<p>When I access the service with http. The page of my service shows up. When using https I always get "upstream connect error or disconnect/reset before headers. reset reason: connection termination".</p>
<p>What am I missing, or what am I doing wrong? What is the difference that keeps Kubernetes from finding my service? I understand that my config terminates TLS at the gateway and that the communication inside the cluster stays the same, but this does not seem to be the case.</p>
<p>Another question is how to enable debug logs for the Sidecars. I could not find a working way.</p>
<p>Thanks in advance!</p>
| Quorgel | <p>It seems the gateway tried to reach your upstream in mTLS mode through the Envoy proxy, but no Envoy proxy was found in your container "hellohub-frontend". Have you enabled istio-injection for your namespace "dev" (or for the pod), and also defined the mTLS policy?</p>
<pre><code>apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
spec:
  peers:
  - mtls:
      mode: STRICT
</code></pre>
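<p>For completeness, sidecar injection for the namespace is typically enabled with a label like the following (a minimal sketch; existing pods have to be restarted afterwards so the proxy gets injected):</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    istio-injection: enabled
</code></pre>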
| mingtao wang |
<p>I have a pod running RabbitMQ inside my local cluster. I have configured it like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: service-rabbitmq
spec:
  selector:
    app: service-rabbitmq
  ports:
    - name: rabbitmq-amqp
      port: 5672
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-rabbitmq
spec:
  selector:
    matchLabels:
      app: statefulset-rabbitmq
  serviceName: service-rabbitmq
  template:
    metadata:
      labels:
        app: statefulset-rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:latest
          volumeMounts:
            - name: rabbitmq-data-volume
              mountPath: /var/lib/rabbitmq/mnesia
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 750m
              memory: 512Mi
          # livenessProbe:
          #   exec:
          #     command:
          #       - 'rabbitmq-diagnostics'
          #       - 'ping'
          #       - '--quiet'
      volumes:
        - name: rabbitmq-data-volume
          persistentVolumeClaim:
            claimName: rabbitmq-pvc
</code></pre>
<p>And I have used <code>amqp://service-rabbitmq:5672</code> to connect to it. When I deploy both the RabbitMQ and the application pods, I get the following error:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deployment-service1-app-6f96656d84-pbg6r 1/1 Running 0 90s
deployment-service1-db-7bf45c9d95-jb2fq 1/1 Running 0 90s
deployment-service2-app-785b878859-lwqcj 1/1 Running 0 90s
deployment-service2-db-5546975f46-7n8kn 1/1 Running 0 90s
deployment-service3-app-b56db56d8-dzhqz 1/1 Running 0 89s
deployment-service3-db-589cbc6769-kcf5x 1/1 Running 0 89s
statefulset-rabbitmq-0 1/1 Running 2 90s
$ kubectl logs deployment-service1-app-6f96656d84-pbg6r
[Nest] 1 - 10/14/2021, 4:56:14 PM LOG [NestFactory] Starting Nest application...
[Nest] 1 - 10/14/2021, 4:56:14 PM LOG [InstanceLoader] MongooseModule dependencies initialized +380ms
[Nest] 1 - 10/14/2021, 4:56:14 PM LOG [InstanceLoader] MongooseCoreModule dependencies initialized +27ms
[Nest] 1 - 10/14/2021, 4:56:14 PM LOG [InstanceLoader] TerminusModule dependencies initialized +2ms
[Nest] 1 - 10/14/2021, 4:56:14 PM LOG [InstanceLoader] MongooseModule dependencies initialized +1ms
[Nest] 1 - 10/14/2021, 4:56:14 PM LOG [InstanceLoader] HealthinessModule dependencies initialized +2ms
[Nest] 1 - 10/14/2021, 4:56:14 PM LOG [InstanceLoader] AppModule dependencies initialized +1ms
[Nest] 1 - 10/14/2021, 4:56:14 PM ERROR [Server] Disconnected from RMQ. Trying to reconnect.
[Nest] 1 - 10/14/2021, 4:56:14 PM ERROR [Server] Object:
{
  "err": {
    "cause": {
      "errno": -111,
      "code": "ECONNREFUSED",
      "syscall": "connect",
      "address": "10.103.6.225",
      "port": 5672
    },
    "isOperational": true,
    "errno": -111,
    "code": "ECONNREFUSED",
    "syscall": "connect",
    "address": "10.103.6.225",
    "port": 5672
  }
}
</code></pre>
<p>RabbitMQ server seems to have been started successfully - I don't see any error:</p>
<pre><code>$ kubectl logs statefulset-rabbitmq-0
2021-10-14 16:57:42.964882+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-10-14 16:57:43.032431+00:00 [info] <0.222.0> Feature flags: [x] implicit_default_bindings
2021-10-14 16:57:43.032495+00:00 [info] <0.222.0> Feature flags: [x] maintenance_mode_status
2021-10-14 16:57:43.032515+00:00 [info] <0.222.0> Feature flags: [x] quorum_queue
2021-10-14 16:57:43.032537+00:00 [info] <0.222.0> Feature flags: [x] stream_queue
2021-10-14 16:57:43.032649+00:00 [info] <0.222.0> Feature flags: [x] user_limits
2021-10-14 16:57:43.032666+00:00 [info] <0.222.0> Feature flags: [x] virtual_host_metadata
2021-10-14 16:57:43.032682+00:00 [info] <0.222.0> Feature flags: feature flag states written to disk: yes
2021-10-14 16:57:44.054420+00:00 [noti] <0.44.0> Application syslog exited with reason: stopped
2021-10-14 16:57:44.054519+00:00 [noti] <0.222.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2021-10-14 16:57:44.121154+00:00 [noti] <0.222.0> Logging: configured log handlers are now ACTIVE
2021-10-14 16:57:52.058040+00:00 [info] <0.222.0> ra: starting system quorum_queues
2021-10-14 16:57:52.058172+00:00 [info] <0.222.0> starting Ra system: quorum_queues in directory: /var/lib/rabbitmq/mnesia/rabbit@statefulset-rabbitmq-0/quorum/rabbit@statefulset-rabbitmq-0
2021-10-14 16:57:52.064234+00:00 [info] <0.291.0> ra: meta data store initialised for system quorum_queues. 0 record(s) recovered
2021-10-14 16:57:52.064926+00:00 [noti] <0.303.0> WAL: ra_log_wal init, open tbls: ra_log_open_mem_tables, closed tbls: ra_log_closed_mem_tables
2021-10-14 16:57:52.148681+00:00 [info] <0.222.0> ra: starting system coordination
2021-10-14 16:57:52.148753+00:00 [info] <0.222.0> starting Ra system: coordination in directory: /var/lib/rabbitmq/mnesia/rabbit@statefulset-rabbitmq-0/coordination/rabbit@statefulset-rabbitmq-0
2021-10-14 16:57:52.152782+00:00 [info] <0.336.0> ra: meta data store initialised for system coordination. 0 record(s) recovered
2021-10-14 16:57:52.153150+00:00 [noti] <0.341.0> WAL: ra_coordination_log_wal init, open tbls: ra_coordination_log_open_mem_tables, closed tbls: ra_coordination_log_closed_mem_tables
2021-10-14 16:57:52.255734+00:00 [info] <0.222.0>
2021-10-14 16:57:52.255734+00:00 [info] <0.222.0> Starting RabbitMQ 3.9.7 on Erlang 24.1.2 [jit]
2021-10-14 16:57:52.255734+00:00 [info] <0.222.0> Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
2021-10-14 16:57:52.255734+00:00 [info] <0.222.0> Licensed under the MPL 2.0. Website: https://rabbitmq.com
## ## RabbitMQ 3.9.7
## ##
########## Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
###### ##
########## Licensed under the MPL 2.0. Website: https://rabbitmq.com
Erlang: 24.1.2 [jit]
TLS Library: OpenSSL - OpenSSL 1.1.1l 24 Aug 2021
Doc guides: https://rabbitmq.com/documentation.html
Support: https://rabbitmq.com/contact.html
Tutorials: https://rabbitmq.com/getstarted.html
Monitoring: https://rabbitmq.com/monitoring.html
Logs: /var/log/rabbitmq/rabbit@statefulset-rabbitmq-0_upgrade.log
<stdout>
Config file(s): /etc/rabbitmq/conf.d/10-default-guest-user.conf
/etc/rabbitmq/conf.d/management_agent.disable_metrics_collector.conf
Starting broker...2021-10-14 16:57:52.258213+00:00 [info] <0.222.0>
2021-10-14 16:57:52.258213+00:00 [info] <0.222.0> node : rabbit@statefulset-rabbitmq-0
2021-10-14 16:57:52.258213+00:00 [info] <0.222.0> home dir : /var/lib/rabbitmq
2021-10-14 16:57:52.258213+00:00 [info] <0.222.0> config file(s) : /etc/rabbitmq/conf.d/10-default-guest-user.conf
2021-10-14 16:57:52.258213+00:00 [info] <0.222.0> : /etc/rabbitmq/conf.d/management_agent.disable_metrics_collector.conf
2021-10-14 16:57:52.258213+00:00 [info] <0.222.0> cookie hash : 2l58aDqPIZ5BNRjTNOxk2Q==
2021-10-14 16:57:52.258213+00:00 [info] <0.222.0> log(s) : /var/log/rabbitmq/rabbit@statefulset-rabbitmq-0_upgrade.log
2021-10-14 16:57:52.258213+00:00 [info] <0.222.0> : <stdout>
2021-10-14 16:57:52.258213+00:00 [info] <0.222.0> database dir : /var/lib/rabbitmq/mnesia/rabbit@statefulset-rabbitmq-0
2021-10-14 16:57:52.544775+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-10-14 16:57:52.544839+00:00 [info] <0.222.0> Feature flags: [x] drop_unroutable_metric
2021-10-14 16:57:52.544930+00:00 [info] <0.222.0> Feature flags: [x] empty_basic_get_metric
2021-10-14 16:57:52.544970+00:00 [info] <0.222.0> Feature flags: [x] implicit_default_bindings
2021-10-14 16:57:52.544988+00:00 [info] <0.222.0> Feature flags: [x] maintenance_mode_status
2021-10-14 16:57:52.545067+00:00 [info] <0.222.0> Feature flags: [x] quorum_queue
2021-10-14 16:57:52.545092+00:00 [info] <0.222.0> Feature flags: [x] stream_queue
2021-10-14 16:57:52.545108+00:00 [info] <0.222.0> Feature flags: [x] user_limits
2021-10-14 16:57:52.545123+00:00 [info] <0.222.0> Feature flags: [x] virtual_host_metadata
2021-10-14 16:57:52.545210+00:00 [info] <0.222.0> Feature flags: feature flag states written to disk: yes
2021-10-14 16:57:52.945793+00:00 [info] <0.222.0> Running boot step pre_boot defined by app rabbit
2021-10-14 16:57:52.945882+00:00 [info] <0.222.0> Running boot step rabbit_global_counters defined by app rabbit
2021-10-14 16:57:52.946226+00:00 [info] <0.222.0> Running boot step rabbit_osiris_metrics defined by app rabbit
2021-10-14 16:57:52.946440+00:00 [info] <0.222.0> Running boot step rabbit_core_metrics defined by app rabbit
2021-10-14 16:57:52.978102+00:00 [info] <0.222.0> Running boot step rabbit_alarm defined by app rabbit
2021-10-14 16:57:53.022595+00:00 [info] <0.351.0> Memory high watermark set to 5090 MiB (5338033356 bytes) of 12726 MiB (13345083392 bytes) total
2021-10-14 16:57:53.029390+00:00 [info] <0.353.0> Enabling free disk space monitoring
2021-10-14 16:57:53.029459+00:00 [info] <0.353.0> Disk free limit set to 50MB
2021-10-14 16:57:53.034835+00:00 [info] <0.222.0> Running boot step code_server_cache defined by app rabbit
2021-10-14 16:57:53.034965+00:00 [info] <0.222.0> Running boot step file_handle_cache defined by app rabbit
2021-10-14 16:57:53.035272+00:00 [info] <0.356.0> Limiting to approx 1048479 file handles (943629 sockets)
2021-10-14 16:57:53.035483+00:00 [info] <0.357.0> FHC read buffering: OFF
2021-10-14 16:57:53.035523+00:00 [info] <0.357.0> FHC write buffering: ON
2021-10-14 16:57:53.036032+00:00 [info] <0.222.0> Running boot step worker_pool defined by app rabbit
2021-10-14 16:57:53.036114+00:00 [info] <0.343.0> Will use 16 processes for default worker pool
2021-10-14 16:57:53.036143+00:00 [info] <0.343.0> Starting worker pool 'worker_pool' with 16 processes in it
2021-10-14 16:57:53.037184+00:00 [info] <0.222.0> Running boot step database defined by app rabbit
2021-10-14 16:57:53.039504+00:00 [info] <0.222.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2021-10-14 16:57:53.039726+00:00 [info] <0.222.0> Successfully synced tables from a peer
2021-10-14 16:57:53.039781+00:00 [info] <0.222.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2021-10-14 16:57:53.039985+00:00 [info] <0.222.0> Successfully synced tables from a peer
2021-10-14 16:57:53.052726+00:00 [info] <0.222.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2021-10-14 16:57:53.052917+00:00 [info] <0.222.0> Successfully synced tables from a peer
2021-10-14 16:57:53.052951+00:00 [info] <0.222.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
2021-10-14 16:57:53.053077+00:00 [info] <0.222.0> Running boot step database_sync defined by app rabbit
2021-10-14 16:57:53.053281+00:00 [info] <0.222.0> Running boot step feature_flags defined by app rabbit
2021-10-14 16:57:53.053552+00:00 [info] <0.222.0> Running boot step codec_correctness_check defined by app rabbit
2021-10-14 16:57:53.053586+00:00 [info] <0.222.0> Running boot step external_infrastructure defined by app rabbit
2021-10-14 16:57:53.053624+00:00 [info] <0.222.0> Running boot step rabbit_registry defined by app rabbit
2021-10-14 16:57:53.053799+00:00 [info] <0.222.0> Running boot step rabbit_auth_mechanism_cr_demo defined by app rabbit
2021-10-14 16:57:53.053939+00:00 [info] <0.222.0> Running boot step rabbit_queue_location_random defined by app rabbit
2021-10-14 16:57:53.054086+00:00 [info] <0.222.0> Running boot step rabbit_event defined by app rabbit
2021-10-14 16:57:53.054373+00:00 [info] <0.222.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit
2021-10-14 16:57:53.054473+00:00 [info] <0.222.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit
2021-10-14 16:57:53.054612+00:00 [info] <0.222.0> Running boot step rabbit_exchange_type_direct defined by app rabbit
2021-10-14 16:57:53.054738+00:00 [info] <0.222.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit
2021-10-14 16:57:53.054837+00:00 [info] <0.222.0> Running boot step rabbit_exchange_type_headers defined by app rabbit
2021-10-14 16:57:53.054954+00:00 [info] <0.222.0> Running boot step rabbit_exchange_type_topic defined by app rabbit
2021-10-14 16:57:53.055080+00:00 [info] <0.222.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit
2021-10-14 16:57:53.055177+00:00 [info] <0.222.0> Running boot step rabbit_mirror_queue_mode_exactly defined by app rabbit
2021-10-14 16:57:53.055297+00:00 [info] <0.222.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit
2021-10-14 16:57:53.055386+00:00 [info] <0.222.0> Running boot step rabbit_priority_queue defined by app rabbit
2021-10-14 16:57:53.055439+00:00 [info] <0.222.0> Priority queues enabled, real BQ is rabbit_variable_queue
2021-10-14 16:57:53.055560+00:00 [info] <0.222.0> Running boot step rabbit_queue_location_client_local defined by app rabbit
2021-10-14 16:57:53.055678+00:00 [info] <0.222.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit
2021-10-14 16:57:53.055772+00:00 [info] <0.222.0> Running boot step kernel_ready defined by app rabbit
2021-10-14 16:57:53.055800+00:00 [info] <0.222.0> Running boot step rabbit_sysmon_minder defined by app rabbit
2021-10-14 16:57:53.056090+00:00 [info] <0.222.0> Running boot step rabbit_epmd_monitor defined by app rabbit
2021-10-14 16:57:53.059579+00:00 [info] <0.391.0> epmd monitor knows us, inter-node communication (distribution) port: 25672
2021-10-14 16:57:53.059752+00:00 [info] <0.222.0> Running boot step guid_generator defined by app rabbit
2021-10-14 16:57:53.060987+00:00 [info] <0.222.0> Running boot step rabbit_node_monitor defined by app rabbit
2021-10-14 16:57:53.061357+00:00 [info] <0.395.0> Starting rabbit_node_monitor
2021-10-14 16:57:53.061560+00:00 [info] <0.222.0> Running boot step delegate_sup defined by app rabbit
2021-10-14 16:57:53.062457+00:00 [info] <0.222.0> Running boot step rabbit_memory_monitor defined by app rabbit
2021-10-14 16:57:53.062710+00:00 [info] <0.222.0> Running boot step core_initialized defined by app rabbit
2021-10-14 16:57:53.062739+00:00 [info] <0.222.0> Running boot step upgrade_queues defined by app rabbit
2021-10-14 16:57:53.111104+00:00 [info] <0.222.0> Running boot step channel_tracking defined by app rabbit
2021-10-14 16:57:53.111501+00:00 [info] <0.222.0> Setting up a table for channel tracking on this node: 'tracked_channel_on_node_rabbit@statefulset-rabbitmq-0'
2021-10-14 16:57:53.111728+00:00 [info] <0.222.0> Setting up a table for channel tracking on this node: 'tracked_channel_table_per_user_on_node_rabbit@statefulset-rabbitmq-0'
2021-10-14 16:57:53.111966+00:00 [info] <0.222.0> Running boot step rabbit_channel_tracking_handler defined by app rabbit
2021-10-14 16:57:53.112057+00:00 [info] <0.222.0> Running boot step connection_tracking defined by app rabbit
2021-10-14 16:57:53.112299+00:00 [info] <0.222.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@statefulset-rabbitmq-0'
2021-10-14 16:57:53.112540+00:00 [info] <0.222.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@statefulset-rabbitmq-0'
2021-10-14 16:57:53.112754+00:00 [info] <0.222.0> Setting up a table for per-user connection counting on this node: 'tracked_connection_table_per_user_on_node_rabbit@statefulset-rabbitmq-0'
2021-10-14 16:57:53.113041+00:00 [info] <0.222.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit
2021-10-14 16:57:53.113111+00:00 [info] <0.222.0> Running boot step rabbit_exchange_parameters defined by app rabbit
2021-10-14 16:57:53.113202+00:00 [info] <0.222.0> Running boot step rabbit_mirror_queue_misc defined by app rabbit
2021-10-14 16:57:53.113557+00:00 [info] <0.222.0> Running boot step rabbit_policies defined by app rabbit
2021-10-14 16:57:53.113911+00:00 [info] <0.222.0> Running boot step rabbit_policy defined by app rabbit
2021-10-14 16:57:53.113981+00:00 [info] <0.222.0> Running boot step rabbit_queue_location_validator defined by app rabbit
2021-10-14 16:57:53.114081+00:00 [info] <0.222.0> Running boot step rabbit_quorum_memory_manager defined by app rabbit
2021-10-14 16:57:53.114175+00:00 [info] <0.222.0> Running boot step rabbit_stream_coordinator defined by app rabbit
2021-10-14 16:57:53.114360+00:00 [info] <0.222.0> Running boot step rabbit_vhost_limit defined by app rabbit
2021-10-14 16:57:53.114486+00:00 [info] <0.222.0> Running boot step rabbit_mgmt_db_handler defined by app rabbitmq_management_agent
2021-10-14 16:57:53.114525+00:00 [info] <0.222.0> Management plugin: using rates mode 'basic'
2021-10-14 16:57:53.115086+00:00 [info] <0.222.0> Running boot step recovery defined by app rabbit
2021-10-14 16:57:53.115914+00:00 [info] <0.433.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@statefulset-rabbitmq-0/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
2021-10-14 16:57:53.117910+00:00 [info] <0.433.0> Starting message stores for vhost '/'
2021-10-14 16:57:53.118172+00:00 [info] <0.437.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2021-10-14 16:57:53.121616+00:00 [info] <0.433.0> Started message store of type transient for vhost '/'
2021-10-14 16:57:53.121999+00:00 [info] <0.441.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2021-10-14 16:57:53.124594+00:00 [info] <0.433.0> Started message store of type persistent for vhost '/'
2021-10-14 16:57:53.126533+00:00 [info] <0.222.0> Running boot step empty_db_check defined by app rabbit
2021-10-14 16:57:53.126596+00:00 [info] <0.222.0> Will not seed default virtual host and user: have definitions to load...
2021-10-14 16:57:53.126617+00:00 [info] <0.222.0> Running boot step rabbit_looking_glass defined by app rabbit
2021-10-14 16:57:53.126641+00:00 [info] <0.222.0> Running boot step rabbit_core_metrics_gc defined by app rabbit
2021-10-14 16:57:53.126908+00:00 [info] <0.222.0> Running boot step background_gc defined by app rabbit
2021-10-14 16:57:53.127164+00:00 [info] <0.222.0> Running boot step routing_ready defined by app rabbit
2021-10-14 16:57:53.127193+00:00 [info] <0.222.0> Running boot step pre_flight defined by app rabbit
2021-10-14 16:57:53.127211+00:00 [info] <0.222.0> Running boot step notify_cluster defined by app rabbit
2021-10-14 16:57:53.127230+00:00 [info] <0.222.0> Running boot step networking defined by app rabbit
2021-10-14 16:57:53.127335+00:00 [info] <0.222.0> Running boot step definition_import_worker_pool defined by app rabbit
2021-10-14 16:57:53.127431+00:00 [info] <0.343.0> Starting worker pool 'definition_import_pool' with 16 processes in it
2021-10-14 16:57:53.129310+00:00 [info] <0.222.0> Running boot step cluster_name defined by app rabbit
2021-10-14 16:57:53.129352+00:00 [info] <0.222.0> Running boot step direct_client defined by app rabbit
2021-10-14 16:57:53.129475+00:00 [info] <0.481.0> Resetting node maintenance status
2021-10-14 16:57:53.154498+00:00 [info] <0.508.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
2021-10-14 16:57:53.154786+00:00 [info] <0.481.0> Ready to start client connection listeners
2021-10-14 16:57:53.157103+00:00 [info] <0.552.0> started TCP listener on [::]:5672
completed with 3 plugins.
2021-10-14 16:57:53.426031+00:00 [info] <0.481.0> Server startup complete; 3 plugins started.
2021-10-14 16:57:53.426031+00:00 [info] <0.481.0> * rabbitmq_prometheus
2021-10-14 16:57:53.426031+00:00 [info] <0.481.0> * rabbitmq_web_dispatch
2021-10-14 16:57:53.426031+00:00 [info] <0.481.0> * rabbitmq_management_agent
</code></pre>
<p>Can anyone help me figure out where the problem is exactly?</p>
<h2>Additional Info (1):</h2>
<p>Out of curiosity, I attempted to retrieve status of the running RabbitMQ pod, and this is what I got:</p>
<pre><code>$ kubectl exec -it statefulset-rabbitmq-0 -- rabbitmq-diagnostics status --quiet
Runtime
OS PID: 21
OS: Linux
Uptime (seconds): 101
Is under maintenance?: false
RabbitMQ version: 3.9.7
Node name: rabbit@statefulset-rabbitmq-0
Erlang configuration: Erlang/OTP 24 [erts-12.1.2] [source] [64-bit] [smp:16:1] [ds:16:1:10] [async-threads:1] [jit]
Erlang processes: 341 used, 1048576 limit
Scheduler run queue: 1
Cluster heartbeat timeout (net_ticktime): 60
Plugins
Enabled plugin file: /etc/rabbitmq/enabled_plugins
Enabled plugins:
* rabbitmq_prometheus
* rabbitmq_web_dispatch
* prometheus
* rabbitmq_management_agent
* cowboy
* cowlib
* accept
Data directory
Node data directory: /var/lib/rabbitmq/mnesia/rabbit@statefulset-rabbitmq-0
Raft data directory: /var/lib/rabbitmq/mnesia/rabbit@statefulset-rabbitmq-0/quorum/rabbit@statefulset-rabbitmq-0
Config files
* /etc/rabbitmq/conf.d/10-default-guest-user.conf
* /etc/rabbitmq/conf.d/management_agent.disable_metrics_collector.conf
Log file(s)
* /var/log/rabbitmq/rabbit@statefulset-rabbitmq-0_upgrade.log
* <stdout>
Alarms
(none)
Memory
Total memory used: 0.1453 gb
Calculation strategy: rss
Memory high watermark setting: 0.4 of available memory, computed to: 5.338 gb
reserved_unallocated: 0.0654 gb (45.03 %)
code: 0.0341 gb (23.48 %)
other_system: 0.0324 gb (22.34 %)
other_proc: 0.0193 gb (13.26 %)
other_ets: 0.003 gb (2.07 %)
atom: 0.0014 gb (0.98 %)
plugins: 4.0e-4 gb (0.3 %)
metrics: 2.0e-4 gb (0.16 %)
mnesia: 1.0e-4 gb (0.06 %)
binary: 1.0e-4 gb (0.05 %)
quorum_ets: 0.0 gb (0.02 %)
msg_index: 0.0 gb (0.02 %)
stream_queue_procs: 0.0 gb (0.0 %)
stream_queue_replica_reader_procs: 0.0 gb (0.0 %)
allocated_unused: 0.0 gb (0.0 %)
connection_channels: 0.0 gb (0.0 %)
connection_other: 0.0 gb (0.0 %)
connection_readers: 0.0 gb (0.0 %)
connection_writers: 0.0 gb (0.0 %)
mgmt_db: 0.0 gb (0.0 %)
queue_procs: 0.0 gb (0.0 %)
queue_slave_procs: 0.0 gb (0.0 %)
quorum_queue_procs: 0.0 gb (0.0 %)
stream_queue_coordinator_procs: 0.0 gb (0.0 %)
File Descriptors
Total: 2, limit: 1048479
Sockets: 0, limit: 943629
Free Disk Space
Low free disk space watermark: 0.05 gb
Free disk space: 4.9169 gb
Totals
Connection count: 0
Queue count: 0
Virtual host count: 1
Listeners
Interface: [::], port: 15692, protocol: http/prometheus, purpose: Prometheus exporter API over HTTP
Interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
</code></pre>
<p>So it seems like there's no issue with the pod, everything looks perfect so far. Then why can't I connect to it from my apps?</p>
| msrumon | <p>So the issue was that the <code>Service</code> selector didn't match the <code>Pod</code>'s label. The correct manifest would be:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-rabbitmq
spec:
  selector:
-   app: service-rabbitmq
+   app: statefulset-rabbitmq
</code></pre>
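<p>For clarity, the full corrected <code>Service</code> (the selector now matches the label used in the StatefulSet's pod template) looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-rabbitmq
spec:
  selector:
    app: statefulset-rabbitmq # must match the pod template label, not the Service name
  ports:
    - name: rabbitmq-amqp
      port: 5672
</code></pre>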
| msrumon |
<p>Several resources on the web point to the existence of Cloud Run for GKE. For example, this Google <a href="https://codelabs.developers.google.com/codelabs/cloud-run-gke/" rel="nofollow noreferrer">codelabs</a>, this YouTube <a href="https://www.youtube.com/watch?v=RVdhyprptTQ" rel="nofollow noreferrer">video</a> from Google and this LinkedIn training <a href="https://www.linkedin.com/learning/google-cloud-platform-essential-training-3/google-cloud-run-on-gke" rel="nofollow noreferrer">video</a>.</p>
<p>However the Cloud Run for GKE functionality seems to have disappeared when you try to create a new Kubernetes cluster, using the Google Cloud web console. The checkboxes to enable Istio and Cloud Run for GKE underneath "Additional features" are not available anymore. (see <a href="https://www.linkedin.com/learning/google-cloud-platform-essential-training-3/google-cloud-run-on-gke" rel="nofollow noreferrer">3:40</a> on this LinkedIn video tutorial)</p>
<p>The official <a href="https://cloud.google.com/run/docs/gke/setup" rel="nofollow noreferrer">documentation</a> about Cloud run for GKE also seems to have disappeared or changed and replaced with documentation about Cloud Run on Anthos.</p>
<p>So, in short, what happened to Cloud Run for GKE?</p>
| Hyperfocus | <p>You first need to create a GKE cluster, and then, when creating the Cloud Run service, choose <code>Cloud Run for Anthos</code>, so it hasn't really gone anywhere.</p>
<p><a href="https://i.stack.imgur.com/vLpku.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vLpku.png" alt="Here it is!"></a></p>
<p>If it was greyed out, that was probably because you had to tick "enabled stackdriver..."</p>
| DUDANF |
<p>I'm currently in the process of dockerizing a Laravel app and integrating it with Kubernetes. I'm not well-versed in php, so I could be missing something obvious. I managed to access the app with http through the browser running it via docker.</p>
<p>But for some reason, whenever I access it through my Kubernetes cluster, regardless of whether I type the URL manually, it redirects me to the https version of the URL. And this throws several <code>Mixed Content: The page at '<URL>' was loaded over HTTPS, but requested an insecure script '<URL>'</code> errors.</p>
<p>I've tried configuring Apache to use self-signed SSL, but I was unable to get it to work, and the cluster does seem to work on http with Angular apps anyway.</p>
<p>My Dockerfile, Kubernetes config files and apache.conf files look like this:</p>
<p><em>(Note that I removed env values for security and brevity)</em></p>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM php:7.2-apache
ENV APP_ENV=prod
ENV APACHE_DOCUMENT_ROOT /var/www/html
RUN docker-php-ext-install pdo pdo_mysql
RUN a2enmod rewrite
COPY ./apache.conf /etc/apache2/sites-available/000-default.conf
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
WORKDIR /var/www/html/my-app
COPY . .
RUN apt-get update && apt-get install -y git zip unzip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/bin --filename=composer
RUN /bin/composer install
</code></pre>
<p><strong>apache.conf</strong></p>
<pre><code><VirtualHost *:80>
    RewriteEngine On
    DocumentRoot ${APACHE_DOCUMENT_ROOT}
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    Alias /my-app "/var/www/html/my-app/public"
    <Directory "/var/www/html/my-app">
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
</code></pre>
<p><strong>k8s config files</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: in-check
          image: registry-url:5000/my-app
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-cluster-ip
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 8082
      targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /my-app/?(.*)
            backend:
              serviceName: my-app-cluster-ip
              servicePort: 8082
</code></pre>
<p>Any idea what might be causing this issue?</p>
<p>Any and all help is appreciated.</p>
<p><strong><em>Update: it seems to load properly using http on my EKS cluster, but still results in the above errors on my copy of minikube.</em></strong></p>
<p><em>Edit: <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml" rel="nofollow noreferrer">here's the link</a> to the Ingress controller config I'm using.</em></p>
| AdHorger | <p>So I was unable to get the app to run on <code>HTTP</code> on my local copy of minikube, but after @Eduardo Baitello's advice, I changed all the resources' relative paths to full URLs with the <code>HTTPS</code> protocol included.</p>
<p>This isn't ideal since if the domain name or IP changes I have to manually update all resource URLs, but this is the only workaround I can find right now.</p>
| AdHorger |
<ul>
<li>My service (tomcat/java) is running on a kubernetes cluster (AKS).</li>
<li>I would like to write the log files (tomcat access logs, application logs with logback) to an AzureFile volume.</li>
<li>I do not want to write the access logs to the stdout, because I do not want to mix the access logs with the application logs.</li>
</ul>
<p><strong>Question</strong></p>
<p>I expect that all logging is done asynchronously, so that writing to the slow AzureFile volume should not affect the performance.
Is this correct?</p>
<p><strong>Update</strong></p>
<p>In the end I want to collect the logfiles so that I can send all logs to ElasticSearch.</p>
<p>Especially I need a way to collect the access logs.</p>
| Matthias M | <p>If you want to send your access logs to Elastic Search, you just need to extend the <a href="https://tomcat.apache.org/tomcat-9.0-doc/api/org/apache/catalina/valves/AbstractAccessLogValve.html" rel="nofollow noreferrer"><code>AbstractAccessLogValve</code></a> and implement the <code>log</code> method.</p>
<p>The <code>AbstractAccessLogValve</code> already contains the logic to format the messages, so you need just to add the logic to send the formatted message.</p>
| Piotr P. Karwasz |
<p>I want to prevent unsafe requests from reaching my application running in GCP GKE behind the Google Ingress (not nginx), and I am trying to do this using path rules.
I know the nginx Ingress can configure paths using regex, but I don't know the best way to do this with the Google Ingress.
Right now I am just duplicating the same rule, changing only the path prefix, like this:</p>
<pre><code>spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: my-api-service
            port:
              number: 80
        path: /api
        pathType: Prefix
      - backend:
          service:
            name: my-api-service
            port:
              number: 80
        path: /auth
        pathType: Prefix
      - backend:
          service:
            name: my-api-service
            port:
              number: 80
        path: /admin
        pathType: Prefix
</code></pre>
<p>Is there a better way to do this?</p>
| nsbm | <p>Everything you're looking for is covered in <a href="https://cloud.google.com/load-balancing/docs/url-map-concepts#pm-constraints" rel="nofollow noreferrer">this</a> document. As GKE ingress is essentially a GCP Load Balancer, the <code>path</code> key is using a <code>url-map</code> to configure and route the traffic to what you've specified in the config. As you'd be able to see there, regexs are not allowed in <code>Path</code> keys.</p>
<p>One option if you're using Helm is to make use of the templates to generate this automatically from a variable. Given the following variable in your <code>values.yaml</code> file:</p>
<pre><code>paths:
- name: /api
- name: /admin
- name: /auth
</code></pre>
<p>Then in your ingress YAML definition you can do the following:</p>
<pre><code>spec:
  rules:
  - http:
      paths:
      {{ range $paths := .Values.paths }}
      - backend:
          service:
            name: my-api-service
            port:
              number: 80
        path: {{ .name }}
        pathType: Prefix
      {{ end }}
</code></pre>
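<p>For illustration, rendering that template with the three example paths should produce essentially the same rules block as the hand-written manifest in the question, roughly:</p>
<pre><code>spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: my-api-service
            port:
              number: 80
        path: /api
        pathType: Prefix
      # ...the same backend block repeated for /admin and /auth
</code></pre>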
| bhito |
<p>I installed Minikube on Windows 10 but can't get it to run. I tried to start it with:</p>
<pre><code> minikube start --vm-driver=hyperv
</code></pre>
<p>The first error was:</p>
<pre><code>[HYPERV_NO_VSWITCH] create: precreate: no External vswitch found. A valid vswitch must be available for this command to run.
</code></pre>
<p>I then searched on Google and found the solution to this error with this page: </p>
<pre><code>https://www.codingepiphany.com/2019/01/04/kubernetes-minikube-no-external-vswitch-found/
</code></pre>
<p>I then fixed the problem by defining a vswitch but I got this error:</p>
<pre><code>minikube start --vm-driver hyperv --hyperv-virtual-switch "Minikube"
o minikube v1.0.1 on windows (amd64)
$ Downloading Kubernetes v1.14.1 images in the background ...
> Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
! Unable to start VM: create: creating: exit status 1
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
- https://github.com/kubernetes/minikube/issues/new
</code></pre>
<p>This is a pretty generic error. What do I do to get this working? Thanks!</p>
| user2471435 | <p>You need to create a Virtual Switch in the Hyper-V GUI in Windows and then run it with
<code>minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"</code></p>
<p>Please see the configuration details in this link
<a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="nofollow noreferrer">https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c</a></p>
| Srinivasan Jayakumar |
<p>We are using the <a href="https://github.com/camunda/zeebe" rel="nofollow noreferrer">zeebe</a> distributed workflow engine, which uses log4j as its logging framework. It's a Spring Boot application deployed through Helm on Kubernetes. Unfortunately, it does not include the <code>log4j-layout-template-json</code> jar that enables JSON logging. Any approach that requires modifying the third-party application code doesn't look right, since the project is under active development: we would need to constantly upgrade the third-party app, and forking it would make deployments tougher.</p>
<p>What are some possible ways to configure it for json logging so that the logs can be shipped to EFK stack?</p>
<p>Thanks</p>
| Andy Dufresne | <p>The general solution is simple: you need to add <code>log4j-layout-template-json</code> to the classpath of the application and create a Docker image with the additional libraries.</p>
<p>The devil is in the details: every distribution has a different method to add libraries to the classpath.</p>
<p>Camunda Zeebe seems to be using <a href="https://www.mojohaus.org/appassembler/appassembler-maven-plugin/" rel="nofollow noreferrer">Appassembler</a> to create its binary distribution, which uses <a href="https://github.com/mojohaus/appassembler/blob/master/appassembler-maven-plugin/src/main/resources/org/codehaus/mojo/appassembler/daemon/script/unixBinTemplate" rel="nofollow noreferrer">this script</a> to boot up the application. So you have two choices:</p>
<ul>
<li>either you add libraries directly to <code>/usr/local/zeebe/lib</code> and your Log4j Core configuration file to <code>/usr/local/zeebe/config</code>,</li>
<li>or you add libraries in another location and set the <code>CLASSPATH_PREFIX</code> environment variable accordingly.</li>
</ul>
<p>For example you can add additional Log4j Core modules and a custom configuration file to the <code>/usr/local/log4j2</code> directory, with a Docker file like this:</p>
<pre class="lang-sh prettyprint-override"><code>FROM camunda/zeebe:latest
# Download additional Logj4 2.x artifacts
RUN apt -y update && apt -y install curl
RUN curl -SL https://dist.apache.org/repos/dist/release/logging/log4j/2.20.0/apache-log4j-2.20.0-bin.tar.gz | \
tar -xz --strip-components=1 --one-top-level=/usr/local/log4j2 \
apache-log4j-2.20.0-bin/log4j-appserver-2.20.0.jar \
apache-log4j-2.20.0-bin/log4j-jul-2.20.0.jar \
apache-log4j-2.20.0-bin/log4j-layout-template-json-2.20.0.jar
# Add a custom configuration file
COPY log4j2.xml /usr/local/log4j2/
# Check artifacts
ARG LOG4J_APPSERVER_SUM=53d8e78277324145cde435b515b1c7f1ba02b93e7a1974411ce7c5511a8c6b69
ARG LOG4J_JUL_SUM=c9b33dffb40bd00d4889ea4700f79d87a2e4d9f92911a3a008ae18c0bb3fb167
ARG LOG4J_LAYOUT_TEMPLATE_JSON_SUM=62d2c2b8e80a74ca65d80cf2f9aa0eab3a1350349c7b03b428c5b53004cc751b
RUN sha256sum -c <<EOF
${LOG4J_APPSERVER_SUM} /usr/local/log4j2/log4j-appserver-2.20.0.jar
${LOG4J_JUL_SUM} /usr/local/log4j2/log4j-jul-2.20.0.jar
${LOG4J_LAYOUT_TEMPLATE_JSON_SUM} /usr/local/log4j2/log4j-layout-template-json-2.20.0.jar
EOF
# Add additional classpath entries
ENV CLASSPATH_PREFIX=/usr/local/log4j2/*:/usr/local/log4j2
# Use Log4j also for java.util.logging
ENV JAVA_OPTS=-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
# Start the application
ENTRYPOINT ["tini", "--", "/usr/local/bin/startup.sh"]
</code></pre>
| Piotr P. Karwasz |
<p>I have upgraded my Nexus from <code>3.29.0</code> to <code>3.58.1</code> via Terraform using Helm Kubernetes cluster and I am getting the following error now:</p>
<pre><code>[2023-08-14T08:05:46.935Z] C:\Users\Deployer\workspace\MyApp\MyApp.View.Test.csproj : error NU1301: Failed to retrieve information about 'Microsoft.NET.Test.Sdk' from remote source 'https://mynexus.com/repository/nuget-group/FindPackagesById()?id='Microsoft.NET.Test.Sdk'&semVerLevel=2.0.0'. [C:\Users\Deployer\workspace\MyApp\myapp.sln]
</code></pre>
<p>I didn't change anything on any of the nuget-repository, nuget-group and nuget-proxy.</p>
<p>Why am I getting this error? How to fix it?</p>
| Abdullah Khawer | <p>I fixed this by changing the <code>Protocol Version</code> for <code>nuget-proxy</code> to <code>NuGet V2</code>. It got set to <code>NuGet V3</code> after the Nexus upgrade.</p>
<p>Reference Screenshot below:</p>
<p><a href="https://i.stack.imgur.com/tFvfz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tFvfz.png" alt="enter image description here" /></a></p>
<p>In your Terraform code, if you're using <code>nexus_repository_nuget_proxy</code> then change your code as follows:</p>
<pre><code>resource "nexus_repository_nuget_proxy" "nuget_proxy" {
  ...
  query_cache_item_max_age = 1440
  nuget_version            = "V2"
  ...
}
</code></pre>
<p>In your Terraform code, if you're using <code>nexus_repository</code> then change your code as follows:</p>
<pre><code>resource "nexus_repository" "nuget_proxy" {
  ...
  nuget_proxy {
    query_cache_item_max_age = 1440
    nuget_version            = "V2"
  }
  ...
}
</code></pre>
| Abdullah Khawer |
<p>I've installed <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">Prometheus operator</a> <code>0.34</code> (which works as expected) on cluster <strong>A</strong> (main prom)
Now I want to use the <a href="https://prometheus.io/docs/prometheus/latest/federation/" rel="nofollow noreferrer">federation</a> option, i.e. collect metrics from another Prometheus which is located in another K8S cluster <strong>B</strong>.</p>
<p><strong>Secnario:</strong></p>
<blockquote>
<ol>
<li>have in cluster <strong>A</strong> MAIN <a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">prometheus operator</a> <code>v0.34</code> <a href="https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml" rel="nofollow noreferrer">config</a></li>
<li>I've in cluster <strong>B</strong> SLAVE <a href="https://github.com/helm/charts/tree/master/stable/prometheus" rel="nofollow noreferrer">prometheus</a> <code>2.13.1</code> <a href="https://github.com/helm/charts/blob/master/stable/prometheus/values.yaml" rel="nofollow noreferrer">config</a></li>
</ol>
</blockquote>
<p>Both installed successfully via helm, I can access to localhost via <code>port-forwarding</code> and see the scraping results on each cluster.</p>
<p><strong>I did the following steps</strong></p>
<p>On the operator (main cluster A) I used <a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/additional-scrape-config.md" rel="nofollow noreferrer">additionalScrapeConfigs</a>:
I added the following to the <code>values.yaml</code> file and updated it via helm.</p>
<pre><code>additionalScrapeConfigs:
- job_name: 'federate'
  honor_labels: true
  metrics_path: /federate
  params:
    match[]:
      - '{job="prometheus"}'
      - '{__name__=~"job:.*"}'
  static_configs:
    - targets:
      - 101.62.201.122:9090 # The External-IP and port from the target prometheus on Cluster B
</code></pre>
<p>I took the target like following:</p>
<p>on prometheus inside <strong>cluster B</strong> (from which I want to collect the data) I use:</p>
<p><code>kubectl get svc -n monitoring</code></p>
<p>And get the following entries:</p>
<p>Took the <code>EXTERNAL-IP</code> and put it inside the <code>additionalScrapeConfigs</code> config entry.</p>
<p>Now I switch to cluster <code>A</code> and run <code>kubectl port-forward svc/mon-prometheus-operator-prometheus 9090:9090 -n monitoring</code> </p>
<p>Open the browser with <code>localhost:9090</code> see the graph's and click on <code>Status</code> and there Click on <code>Targets</code> </p>
<p>And see the new target with job <code>federate</code></p>
<p><a href="https://i.stack.imgur.com/8oPqd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8oPqd.png" alt="enter image description here"></a></p>
<p>Now my main question/gaps. (security & verification) </p>
<ol>
<li>To be able to see that target <code>state</code> as green (see the pic) I configured the Prometheus server in cluster <code>B</code> to use <a href="https://github.com/helm/charts/blob/master/stable/prometheus/values.yaml#L835" rel="nofollow noreferrer"><code>type:LoadBalancer</code></a> instead of <code>type:NodePort</code>, which exposes the metrics externally. This can be acceptable for testing, but I need to <strong>secure it</strong>. How can that be done?
How can the whole end-to-end flow be made <strong>secure</strong>?</li>
</ol>
<p>tls
<a href="https://prometheus.io/docs/prometheus/1.8/configuration/configuration/#tls_config" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/1.8/configuration/configuration/#tls_config</a></p>
<p>Inside <strong>cluster A</strong> (the main cluster) we use a certificate for our services with <code>istio</code>, like the following, which works:</p>
<pre><code>tls:
  mode: SIMPLE
  privateKey: /etc/istio/oss-tls/tls.key
  serverCertificate: /etc/istio/oss-tls/tls.crt
</code></pre>
<p>I see that inside the doc there is an option to configure this:</p>
<pre><code>additionalScrapeConfigs:
- job_name: 'federate'
  honor_labels: true
  metrics_path: /federate
  params:
    match[]:
      - '{job="prometheus"}'
      - '{__name__=~"job:.*"}'
  static_configs:
    - targets:
      - 101.62.201.122:9090 # The External-IP and port from the target
  # tls_config:
  #   ca_file: /opt/certificate-authority-data.pem
  #   cert_file: /opt/client-certificate-data.pem
  #   key_file: /sfp4/client-key-data.pem
  #   insecure_skip_verify: true
</code></pre>
<p>But I am not sure which certificate I need to use inside the Prometheus operator config: the certificate of the main Prometheus A or of the slave B?</p>
| Rayn D | <ol>
<li>You should consider using <a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/additional-scrape-config.md#additional-scrape-configuration" rel="nofollow noreferrer">Additional Scrape Configuration</a></li>
</ol>
<blockquote>
<p><code>AdditionalScrapeConfigs</code> allows specifying a key of a Secret
containing additional Prometheus scrape configurations. Scrape
configurations specified are appended to the configurations generated
by the Prometheus Operator.</p>
</blockquote>
<ol start="2">
<li><p>I am afraid this is not officially supported. However, you can update your <code>prometheus.yml</code> section within the Helm chart. If you want to learn more about it, check out <a href="https://www.promlts.com/resources/wheres-my-prometheus-yml?utm_source=sof&utm_medium=organic&utm_campaign=prometheus" rel="nofollow noreferrer">this blog</a>.</p></li>
<li><p>I see two options here:</p></li>
</ol>
<blockquote>
<p>Connections to Prometheus and its exporters are not encrypted and
authenticated by default. <a href="https://0x63.me/tls-between-prometheus-and-its-exporters/" rel="nofollow noreferrer">This is one way of fixing that with TLS
certificates and
stunnel</a>.</p>
</blockquote>
<p>Or specify <a href="https://prometheus.io/docs/operating/security/#secrets" rel="nofollow noreferrer">Secrets</a> which you can add to your scrape configuration.</p>
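<p>A minimal sketch of that Secret-based approach (the Secret name and key follow the operator's additional-scrape-config docs; the <code>ca_file</code> path is an assumption and presumes you also mount a certificate Secret into the Prometheus pods):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs
  namespace: monitoring
stringData:
  prometheus-additional.yaml: |
    - job_name: 'federate'
      honor_labels: true
      metrics_path: /federate
      params:
        match[]:
          - '{job="prometheus"}'
          - '{__name__=~"job:.*"}'
      scheme: https
      tls_config:
        ca_file: /etc/prometheus/secrets/federation-tls/ca.crt # assumed mount path
      static_configs:
        - targets:
          - 101.62.201.122:9090
</code></pre>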
<p>Please let me know if that helped. </p>
| WytrzymaΕy Wiktor |
<p>I have a manifest yaml file in Terraform to deploy a deployment on a Kubernetes cluster and I want to pass a map having key-value pairs defining node selectors to that yaml file using templatefile function to set <code>nodeSelector</code> on it.</p>
<p>This is how my Terraform code looks like:</p>
<pre><code>...
resource "kubernetes_manifest" "deployment" {
manifest = yamldecode(
templatefile("${path.module}/resources/deployment-tmpl.yaml",
{
app_name = var.app_name,
app_namespace = var.app_namespace
})
)
}
...
</code></pre>
<p>This is how my manifest yaml code looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: ${app_name}
  name: ${app_name}
  namespace: ${app_namespace}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ${app_name}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ${app_name}
    spec:
      containers:
...
...
</code></pre>
<p>My node selector variable may look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>variable "node_selector" {
  default = {
    "eks.amazonaws.com/nodegroup" = "app"
  }
}
</code></pre>
<p>My Terraform version is <code>v0.14.11</code>.</p>
<p>How can I do that in my case? I don't want to hardcode and I want to send a map which can have one or more key-value pairs.</p>
| Abdullah Khawer | <p>It is possible to do so using <code>yamlencode</code> inside the <code>templatefile</code> function in Terraform in this case.</p>
<p>I have changed the manifest yaml file as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>...
spec:
  nodeSelector:
    ${node_selector}
  containers:
...
</code></pre>
<p>I have changed the Terraform file as follows:</p>
<pre><code>...
resource "kubernetes_manifest" "deployment" {
  manifest = yamldecode(
    templatefile("${path.module}/resources/deployment-tmpl.yaml",
      {
        app_name      = var.app_name,
        app_namespace = var.app_namespace,
        node_selector = yamlencode(var.node_selector)
      })
  )
}
...
</code></pre>
<p>Node selector variable will remain the same and can have more than one key-value pair:</p>
<pre class="lang-yaml prettyprint-override"><code>variable "node_selector" {
  default = {
    "eks.amazonaws.com/nodegroup" = "app"
  }
}
</code></pre>
<p>Reference: <a href="https://developer.hashicorp.com/terraform/language/functions/templatefile" rel="nofollow noreferrer">Terraform - templatefile Function</a></p>
| Abdullah Khawer |
<p>How can I debug why its status is CrashLoopBackOff?</p>
<p>I am not using minikube; I am working on an AWS Kubernetes instance.</p>
<p>I followed this tutorial.
<a href="https://github.com/mkjelland/spring-boot-postgres-on-k8s-sample" rel="nofollow noreferrer">https://github.com/mkjelland/spring-boot-postgres-on-k8s-sample</a></p>
<p>When I do </p>
<pre><code> kubectl create -f specs/spring-boot-app.yml
</code></pre>
<p>and check status by </p>
<pre><code> kubectl get pods
</code></pre>
<p>it gives </p>
<pre><code> spring-boot-postgres-sample-67f9cbc8c-qnkzg 0/1 CrashLoopBackOff 14 50m
</code></pre>
<p>Below Command </p>
<pre><code> kubectl describe pods spring-boot-postgres-sample-67f9cbc8c-qnkzg
</code></pre>
<p>gives </p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m18s (x350 over 78m) kubelet, ip-172-31-11-87 Back-off restarting failed container
</code></pre>
<p>Command <strong>kubectl get pods --all-namespaces</strong> gives </p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
default constraintpod 1/1 Running 1 88d
default postgres-78f78bfbfc-72bgf 1/1 Running 0 109m
default rcsise-krbxg 1/1 Running 1 87d
default spring-boot-postgres-sample-667f87cf4c-858rx 0/1 CrashLoopBackOff 4 110s
default twocontainers 2/2 Running 479 89d
kube-system coredns-86c58d9df4-kr4zj 1/1 Running 1 89d
kube-system coredns-86c58d9df4-qqq2p 1/1 Running 1 89d
kube-system etcd-ip-172-31-6-149 1/1 Running 8 89d
kube-system kube-apiserver-ip-172-31-6-149 1/1 Running 1 89d
kube-system kube-controller-manager-ip-172-31-6-149 1/1 Running 1 89d
kube-system kube-flannel-ds-amd64-4h4x7 1/1 Running 1 89d
kube-system kube-flannel-ds-amd64-fcvf2 1/1 Running 1 89d
kube-system kube-proxy-5sgjb 1/1 Running 1 89d
kube-system kube-proxy-hd7tr 1/1 Running 1 89d
kube-system kube-scheduler-ip-172-31-6-149 1/1 Running 1 89d
</code></pre>
<p>Command <strong>kubectl logs spring-boot-postgres-sample-667f87cf4c-858rx</strong>
doesn't print anything.</p>
| Dhanraj | <p>Why don't you:</p>
<ol>
<li><p>run a dummy container (with an endless sleep command), as sketched after this list,</p></li>
<li><p>run <code>kubectl exec -it <pod-name> -- bash</code> to get a shell inside it,</p></li>
<li><p>run the program directly in there and have a look at the logs directly.</p></li>
</ol>
<p>It's an easier form of debugging on K8s.</p>
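<p>A minimal sketch of such a dummy pod (the image name is a placeholder for the image used in your <code>spring-boot-app.yml</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: debug-sleep
spec:
  containers:
    - name: main
      image: <your-app-image> # placeholder: use the image from your deployment
      command: ["sh", "-c", "while true; do sleep 3600; done"]
</code></pre>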
| DUDANF |
<p>Under which conditions will a <code>client.Client</code> <code>Get</code> call return a non-nil error while the passed object is still filled in (or at least its namespace and name metadata)? The Kubernetes operator framework book suggests to <a href="https://github.com/PacktPublishing/The-Kubernetes-Operator-Framework-Book/blob/main/chapter_05/controllers/nginxoperator_controller.go#L71" rel="nofollow noreferrer">set a status condition</a> in case of resource-get errors, in a similar fashion:</p>
<pre class="lang-golang prettyprint-override"><code>myResource := &v1alpha1.MyResource{}
err := r.Get(ctx, req.NamespacedName, myResource)
if err != nil && errors.IsNotFound(err) {
    return ctrl.Result{}, nil
} else if err != nil {
    logger.Error(err, "Error getting resource object")
    meta.SetStatusCondition(&myResource.Status.Conditions, metav1.Condition{
        Type:               "Failed",
        Status:             metav1.ConditionTrue,
        Reason:             "NotAvailable",
        LastTransitionTime: metav1.NewTime(time.Now()),
        Message:            fmt.Sprintf("unable to get custom resource: %s", err.Error()),
    })
    return ctrl.Result{}, utilerrors.NewAggregate([]error{err, r.Status().Update(ctx, myResource)})
}
</code></pre>
<p>Is it even possible to update the status of a CR where we are unable to get it, but the error doesn't satisfy the condition <code>errors.IsNotFound(err)</code>?</p>
| Kubus | <p>Here is the <a href="https://github.com/kubernetes/apimachinery/blob/master/pkg/api/errors/errors.go" rel="nofollow noreferrer">list of errors</a> you may get from reconciler's <code>Get()</code> method.</p>
<p>If the error is not nil, that means <strong>you most probably did not get the resource correctly, so you shouldn't update it</strong>. Say that your error is <code>TooManyRequests</code>. In this case, <code>errors.IsTooManyRequests(err)</code> returns <code>true</code> while <code>errors.IsNotFound(err)</code> returns <code>false</code>. The point is, there is no guarantee that you have the correct version of the custom resource if the <code>err</code> is not <code>nil</code>. You should return the error and let the controller reconcile it again.</p>
| tuna |
<p>I have a number of pods running and horizontal pod auto scaler assigned to target them, the cluster I am using can also add nodes and remove nodes automatically based on current load.</p>
<p>BUT we recently had the cluster go offline with OOM errors and this caused a disruption in service.</p>
<p>Is there a way to monitor the load on each node and if usage reaches say 80% of the memory on a node, Kubernetes should not schedule more pods on that node but wait for another node to come online.</p>
| Margach Chris | <p>Pending pods are what one should monitor, and you should define <strong>resource requests</strong>, which affect scheduling.</p>
<p>The Scheduler uses resource request information when scheduling the pod
to a node. Each node has a certain amount of CPU and memory it can allocate to
pods. When scheduling a pod, the Scheduler will only consider nodes with enough
unallocated resources to meet the pod's resource requirements. If the amount of
unallocated CPU or memory is less than what the pod requests, Kubernetes will not
schedule the pod to that node, because the node can't provide the minimum amount
required by the pod. The new Pods will remain in Pending state until new nodes come into the cluster.</p>
<p>Example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: requests-pod
spec:
  containers:
  - image: busybox
    command: ["dd", "if=/dev/zero", "of=/dev/null"]
    name: main
    resources:
      requests:
        cpu: 200m
        memory: 10Mi
</code></pre>
<p>When you don't specify a request for CPU, you're saying you don't care how much
CPU time the process running in your container is allotted. In the worst case, it may
not get any CPU time at all (this happens when a heavy demand by other processes
exists on the CPU). Although this may be fine for low-priority batch jobs, which aren't
time-critical, it obviously isn't appropriate for containers handling user requests.</p>
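<p>As a variation on the example above (the name and sizes are only illustrative), a memory request sized for your workload is what keeps the scheduler from packing more pods onto a node whose memory is already mostly allocated:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: requests-pod-mem
spec:
  containers:
  - image: busybox
    command: ["sleep", "3600"]
    name: main
    resources:
      requests:
        memory: 512Mi # scheduled only onto nodes with at least 512Mi of unallocated memory
</code></pre>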
| aballaci |
<p>Suppose I have the below code snippet, which sets up a reconciler that watches the external resource "External":</p>
<pre><code>// SetupWithManager sets up the controller with the Manager.
func (r *SomethingReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&api.Something{}).
WithOptions(controller.Options{
MaxConcurrentReconciles: stdruntime.NumCPU(),
RecoverPanic: true,
}).
Watches(
&source.Kind{Type: &somev1.External{}},
handler.EnqueueRequestsFromMapFunc(r.findInternalObjectsForExternal),
builder.WithPredicates(predicate.Funcs{
UpdateFunc: func(ue event.UpdateEvent) bool { return true },
DeleteFunc: func(de event.DeleteEvent) bool { return true },
}),
).
Complete(r)
}
</code></pre>
<p>My problem is that I cannot import the <code>somev1.External</code> type into my project because importing the go module containing this type would break my current project's dependencies. <br>
Is there a way in kubebuilder to watch for external resources without having to explicitly import their types? Like GVK or something?</p>
| mammad | <p>Yes it's possible.</p>
<p>You can create a REST client for your resource in <code>main.go</code> as below:</p>
<pre class="lang-golang prettyprint-override"><code>gvkExternal := schema.GroupVersionKind{
Group: "some.group.io",
Version: "v1",
Kind: "External",
}
restClient, err := apiutil.RESTClientForGVK(gvkExternal, false, mgr.GetConfig(), serializer.NewCodecFactory(mgr.GetScheme()))
if err != nil {
setupLog.Error(err, "unable to create REST client")
}
</code></pre>
<p>Then add a field for this REST client (<code>rest.Interface</code>) to your reconciler (<code>yournativeresource_controller.go</code>) struct such as:</p>
<pre class="lang-golang prettyprint-override"><code>type YourNativeResourceReconciler struct {
client.Client
Scheme *runtime.Scheme
// add this
RESTClient rest.Interface
}
</code></pre>
<p>Last, initialize your reconciler with this REST client (<code>main.go</code>):</p>
<pre class="lang-golang prettyprint-override"><code>if err = (&controllers.YourNativeResourceReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
RESTClient: restClient,
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "YourNativeResource")
os.Exit(1)
}
</code></pre>
<p>Do not forget to add RBAC marker to your project (reconciler preferably) that will generate RBAC rules allowing you to manipulate <code>External</code> resource:</p>
<pre><code>//+kubebuilder:rbac:groups=some.group.io,resources=externals,verbs=get;list;watch;create;update;patch;delete
</code></pre>
<p>After these steps, you can use REST client for manipulating <code>External</code> resource over <code>YourNativeResource</code> reconciler using <code>r.RESTClient</code>.</p>
<p><strong>EDIT:</strong></p>
<p>If you want to watch resources, dynamic clients may help. Create a dynamic client in <code>main.go</code>:</p>
<pre class="lang-golang prettyprint-override"><code>dynamicClient, err := dynamic.NewForConfig(mgr.GetConfig())
if err != nil {
setupLog.Error(err, "unable to create dynamic client")
}
</code></pre>
<p>Apply above steps, add it to your reconciler etc. Then you will be able to watch <code>External</code> resource as below:</p>
<pre class="lang-golang prettyprint-override"><code>resourceInterface := r.DynamicClient.Resource(schema.GroupVersionResource{
Group: "some.group.io",
Version: "",
Resource: "externals",
})
externalWatcher, err := resourceInterface.Watch(ctx, metav1.ListOptions{})
if err != nil {
return err
}
defer externalWatcher.Stop()
select {
case event := <-externalWatcher.ResultChan():
if event.Type == watch.Deleted {
logger.Info("FINALIZER: An external resource is deleted.")
}
}
</code></pre>
| tuna |
<p>I have my <a href="https://support.dnsimple.com/articles/a-record/" rel="noreferrer">A record</a> on Netlify mapped to my Load Balancer IP Address on Digital Ocean, and it's able to hit the nginx server, but I'm getting a 404 when trying to access any of the apps APIs. I noticed that the status of my Ingress doesn't show that it is bound to the Load Balancer.</p>
<p><a href="https://i.stack.imgur.com/6IB0k.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6IB0k.png" alt="enter image description here" /></a></p>
<p>Does anybody know what I am missing to get this setup?</p>
<p>Application Ingress:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: d2d-server
spec:
rules:
- host: api.cloud.myhostname.com
http:
paths:
- backend:
service:
name: d2d-server
port:
number: 443
path: /
pathType: ImplementationSpecific
</code></pre>
<p>Application Service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: d2d-server
spec:
selector:
app: d2d-server
ports:
- name: http-api
protocol: TCP
port: 443
targetPort: 8080
type: ClusterIP
</code></pre>
<p>Ingress Controller:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
uid: fc64d9f6-a935-49b2-9d7a-b862f660a4ea
resourceVersion: '257931'
generation: 1
creationTimestamp: '2021-10-22T05:31:26Z'
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.0.4
helm.sh/chart: ingress-nginx-4.0.6
annotations:
deployment.kubernetes.io/revision: '1'
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
spec:
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
defaultMode: 420
containers:
- name: controller
image: >-
k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef
args:
- /nginx-ingress-controller
- '--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller'
- '--election-id=ingress-controller-leader'
- '--controller-class=k8s.io/ingress-nginx'
- '--configmap=$(POD_NAMESPACE)/ingress-nginx-controller'
- '--validating-webhook=:8443'
- '--validating-webhook-certificate=/usr/local/certificates/cert'
- '--validating-webhook-key=/usr/local/certificates/key'
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
resources:
requests:
cpu: 100m
memory: 90Mi
volumeMounts:
- name: webhook-cert
readOnly: true
mountPath: /usr/local/certificates/
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
allowPrivilegeEscalation: true
restartPolicy: Always
terminationGracePeriodSeconds: 300
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
serviceAccount: ingress-nginx
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
</code></pre>
<p>Load Balancer:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.0.4
helm.sh/chart: ingress-nginx-4.0.6
annotations:
kubernetes.digitalocean.com/load-balancer-id: <LB_ID>
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
service.beta.kubernetes.io/do-loadbalancer-name: ingress-nginx
service.beta.kubernetes.io/do-loadbalancer-protocol: https
status:
loadBalancer:
ingress:
- ip: <IP_HIDDEN>
spec:
ports:
- name: http
protocol: TCP
appProtocol: http
port: 80
targetPort: http
nodePort: 31661
- name: https
protocol: TCP
appProtocol: https
port: 443
targetPort: https
nodePort: 32761
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
clusterIP: <IP_HIDDEN>
clusterIPs:
- <IP_HIDDEN>
type: LoadBalancer
sessionAffinity: None
externalTrafficPolicy: Local
healthCheckNodePort: 30477
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
</code></pre>
| Ethan Miller | <p>I just needed to add the field <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/basic-configuration/" rel="noreferrer"><code>ingressClassName</code></a> of <code>nginx</code> to the ingress spec.</p>
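<p>Applied to the Ingress from the question, that looks roughly like this (a sketch, assuming the default <code>nginx</code> class name exposed by the ingress-nginx controller above):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: d2d-server
spec:
  ingressClassName: nginx   # binds this Ingress to the ingress-nginx controller
  rules:
  - host: api.cloud.myhostname.com
    http:
      paths:
      - backend:
          service:
            name: d2d-server
            port:
              number: 443
        path: /
        pathType: ImplementationSpecific
</code></pre>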
| Ethan Miller |
<p>I want to use Kubernetes as resource manager for spark.</p>
<p>so I wanted to submit a jar file to the spark cluster with <code>spark-submit</code>:</p>
<pre><code>./bin/spark-submit \
--master k8s://https://vm:6443 \
--class com.example.WordCounter \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=default \
--conf spark.kubernetes.container.image=private-docker-registery/spark/spark:3.2.1-3 \
--conf spark.kubernetes.namespace=default \
--conf spark.kubernetes.authenticate.submission.oauthToken=$TOKEN \
--conf spark.kubernetes.authenticate.caCertFile=api.cert \
java-word-count-1.0-SNAPSHOT.jar
</code></pre>
<p>for <strong>service account</strong>:</p>
<pre><code>kubectl create serviceaccount spark
</code></pre>
<pre><code>kubectl create clusterrolebinding spark-role \
--clusterrole=edit \
--serviceaccount=default:default \
--namespace=default
</code></pre>
<p>for <strong>caCertFile</strong> I used the <code>/etc/kubernetes/pki/apiserver.crt</code> content.</p>
<p>and for <strong>submission.oauthToken</strong>:</p>
<p><code>kubectl get secret spark-token-86tns -o yaml | grep token</code></p>
<p>and use the token part.</p>
<p>but it still doesn't work and I get a <code>pods is forbidden: User "system:anonymous" cannot watch resource "pods" in API group "" in the namespace "default"</code> error</p>
| Mohammad Abdollahzadeh | <p><code>spark.kubernetes.authenticate.caCertFile</code> has to be the <code>service account</code> certificate,</p>
<p>and <code>spark.kubernetes.authenticate.submission.oauthToken</code> has to be the <code>service account</code> token.</p>
<p>Both the cert and the token can be found in the service account secret.</p>
<ul>
<li>be careful to decode <code>service account</code> cert and token (base64 -d).</li>
</ul>
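<p>A minimal sketch of extracting both values (assuming the token secret is the one auto-created for the <code>spark</code> service account, as in the question):</p>
<pre><code># discover the secret bound to the "spark" service account
SECRET=$(kubectl get serviceaccount spark -o jsonpath='{.secrets[0].name}')

# decode the CA certificate and the token (both are base64-encoded in the secret)
kubectl get secret "$SECRET" -o jsonpath='{.data.ca\.crt}' | base64 -d > sa-ca.crt
TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)
</code></pre>
<p>Then pass <code>sa-ca.crt</code> to <code>spark.kubernetes.authenticate.caCertFile</code> and <code>$TOKEN</code> to <code>spark.kubernetes.authenticate.submission.oauthToken</code>.</p>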
| Mohammad Abdollahzadeh |
<p>I have a quick question related to "<strong>Kubespray</strong>". </p>
<p>Does "<strong>Kubespray</strong>" support CentOS 8? </p>
<p>I wanted to deploy "<strong>Kubespray</strong>" on "CentOS" and I came to know that the CentOS 8 has Kernel version 4.18 and If I can use "CentOS 8" for "Kubernetes" deployment, maybe I can get rid of the "c-group" issue which we are currently facing for all the CentOS distribution which has Kernal Version less than 4.18.</p>
<p>Thanks in Advance.</p>
| rolz | <p>According to the <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubespray/" rel="nofollow noreferrer">official documentation</a>, Kubespray provides support for CentOS/RHEL <strong>7</strong> only. The problem is that:</p>
<blockquote>
<p>Installing Kubespray on a RHEL8 systems does not work since the
default Python version is 3.6 and thus python3-libselinux should be
installed instead of libselinux-python. Even that python2 is still
available, the libselinux-python package is not.</p>
</blockquote>
<p>I hope it helps. </p>
| WytrzymaΕy Wiktor |
<p>I am new at Kubernetes and GKE. I have some microservices written in Spring Boot 2 and deployed from GitHub to GKE. I would like to make these services secure and I want to know if it's possible to use ingress on my gateway microservice to make the entry point secure just like that. I created an ingress with HTTPS but it seems all my health checks are failing. </p>
<p>Is it possible to make my architecture secure just by using ingress and not change the spring boot apps?</p>
| 2dor | <p>Yes, It would be possible to use a GKE ingress given your scenario, there is an official <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">guide</a> on how to do this step by step.</p>
<p>Additionally, here's a step by step <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">guide</a> on how to implement Google Managed certs.</p>
<p>Also, I understand that my response is somewhat general, but I can only help you so much without knowing your GKE infrastructure (like your DNS name for said certificate among other things).</p>
<p>Remember that you must implement this directly on your GKE infrastructure and not on the GCP side. If you modify or create something outside GKE that is linked to GKE, you might find that your deployment rolls back after a certain time or simply stops working.</p>
<p>Edit:</p>
<p>I will assume several things here, and since I don't have your Spring Boot 2 deployment yaml file, I will replace that with an nginx deployment.</p>
<p>cert.yaml</p>
<pre><code>apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
name: ssl-cert
spec:
domains:
- example.com
</code></pre>
<p>nginx.yaml</p>
<pre><code>apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "nginx"
namespace: "default"
labels:
app: "nginx"
spec:
replicas: 1
selector:
matchLabels:
app: "nginx"
template:
metadata:
labels:
app: "nginx"
spec:
containers:
- name: "nginx-1"
image: "nginx:latest"
</code></pre>
<p>nodeport.yaml (please modify "targetPort: 80" to your needs)</p>
<pre><code>apiVersion: "v1"
kind: "Service"
metadata:
name: "nginx-service"
namespace: "default"
labels:
app: "nginx"
spec:
ports:
- protocol: "TCP"
port: 80
targetPort: 80
selector:
app: "nginx"
type: "NodePort"
</code></pre>
<p>ingress-cert.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
networking.gke.io/managed-certificates: ssl-cert
spec:
backend:
serviceName: nginx-service
servicePort: 80
</code></pre>
<p>Keep in mind that, assuming your DNS name "example.com" is pointing to your Load Balancer's external IP, it could take a while for your SSL certificate to be created and applied.</p>
| Frank |
<p>I have a pod within a Kubernetes cluster that needs to send alarms via SNMP to an external network management system. However, the external system will only be able to identify the pod if it keeps a stable IP address. Considering the ephemeral nature of pods, would it be possible to send/redirect requests to a system outside of the cluster with a static IP?</p>
<p>The information I could gather so far only proposes solutions for reaching the pod from outside the cluster, e.g. with Services. I found the following <a href="https://stackoverflow.com/a/59488628/11783513">answer</a> that suggests using an egress gateway, but not much information is provided on how to approach the issue.</p>
| M M | <p>One viable solution is to utilize an Egress Router resource defined <a href="https://docs.openshift.com/container-platform/4.12/networking/openshift_sdn/using-an-egress-router.html" rel="nofollow noreferrer">here</a>, which redirects traffic to a specified IP using a dedicated source IP address:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: egress-1
labels:
name: egress-1
annotations:
pod.network.openshift.io/assign-macvlan: "true"
spec:
initContainers:
- name: egress-router
image: registry.redhat.io/openshift4/ose-egress-router
securityContext:
privileged: true
env:
- name: EGRESS_SOURCE
value: <egress_router>
- name: EGRESS_GATEWAY
value: <egress_gateway>
- name: EGRESS_DESTINATION
value: <egress_destination>
- name: EGRESS_ROUTER_MODE
value: init
containers:
- name: egress-router-wait
image: registry.redhat.io/openshift4/ose-pod
</code></pre>
<p>An example configuration looks like follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: egress-multi
labels:
name: egress-multi
annotations:
pod.network.openshift.io/assign-macvlan: "true"
spec:
initContainers:
- name: egress-router
image: registry.redhat.io/openshift4/ose-egress-router
securityContext:
privileged: true
env:
- name: EGRESS_SOURCE
value: 192.168.12.99/24
- name: EGRESS_GATEWAY
value: 192.168.12.1
- name: EGRESS_DESTINATION
value: |
203.0.113.25
- name: EGRESS_ROUTER_MODE
value: init
containers:
- name: egress-router-wait
image: registry.redhat.io/openshift4/ose-pod
</code></pre>
<p>The Egress Router pod is exposed by a Service and linked to the application that needs to send outbound SNMP traps:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: egress-1
spec:
ports:
- name: snmp
port: 162
type: ClusterIP
selector:
name: egress-1
</code></pre>
<p>The application sends the SNMP trap to the ClusterIP/Service-Name of the Service exposing the Egress Router pod, and the pod redirects the request to the specified remote server. Once redirected, the source IP is changed to the Source IP specified in the Egress Router resource. For more information on implementing the egress router in redirection mode, see <a href="https://docs.openshift.com/container-platform/4.12/networking/openshift_sdn/deploying-egress-router-layer3-redirection.html#deploying-egress-router-layer3-redirection" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Note that depending on your network configuration, you might need to configure the <em>assign-macvlan</em> field to a different NIC interface and set it to the name of that interface, e.g. <em>eth1</em></strong>.</p>
| M M |
<p>I inherited a Kubernetes/Docker setup. I am trying to recreate a dev environment exactly as it is (with a new name) on a separate cluster. Sorry if my question is a bit ignorant; while I've mostly picked up Kubernetes/Docker, I'm still pretty new at it.</p>
<p>I've copied all of the information over to the cluster and launched it via kubectl and the old YAML. <strong>I am also using the old image file, which should contain the relevant secrets to my knowledge</strong></p>
<p>However, I am getting an error about a missing secret, db-user-pass.</p>
<p>I have checked the included secrets directory in my state store for KOPS (on S3)</p>
<pre><code> Warning FailedScheduling 22m (x3 over 22m) default-scheduler No nodes are available that match all of the predicates: Insufficient memory (2), PodToleratesNodeTaints (1).
Normal Scheduled 22m default-scheduler Successfully assigned name-keycloak-7c4c57cbdf-9g2n2 to ip-ip.address.goes.here.us-east-2.compute.internal
Normal SuccessfulMountVolume 22m kubelet, ip-ip.address.goes.here.us-east-2.compute.internal MountVolume.SetUp succeeded for volume "default-token-2vb5x"
Normal Pulled 21m (x6 over 22m) kubelet, ip-ip.address.goes.here.us-east-2.compute.internal Successfully pulled image "image.location.amazonaws.com/dev-name-keycloak"
Warning Failed 21m (x6 over 22m) kubelet, ip-ip.address.goes.here.us-east-2.compute.internal Error: secrets "db-user-pass" not found
Warning FailedSync 21m (x6 over 22m) kubelet, ip-ip.address.goes.here.us-east-2.compute.internal Error syncing pod
Normal Pulling 2m (x90 over 22m) kubelet, ip-ip.address.goes.here.us-east-2.compute.internal pulling image "image.location.amazonaws.com/dev-name-keycloak"
</code></pre>
<p>What exactly am I misunderstanding here? Is it maybe that Kubernetes is trying to assign a variable based on a value in my YAML, which is also set on the Docker image, but isn't available to Kubernetes? Should I just copy all of the secrets manually from one pod to another (or export to YAML and include in my application).</p>
<p>I'm strongly guessing that export + put into my existing setup is probably the best way forward to give all of the credentials to the pod.</p>
<p>Any guidance or ideas would be welcome here.</p>
| Steven Matthews | <p>Could you please check if you have a secret named "db-user-pass" in your old cluster?</p>
<p>You can check that via:
ubuntu@sal-k-m:~$ kubectl get secrets</p>
<p>OR (if it's in a different namespace)</p>
<p>ubuntu@sal-k-m:~$ kubectl get secrets -n web </p>
<p>If the secret is there, then you need to <code>--export</code> it as well and configure it in the new cluster.</p>
<p>kubectl get secrets -n web -o yaml --export > db-user-pass.yaml</p>
<p>You can find more details about the secret in this doc.</p>
<p><a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/</a></p>
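<p>On the new cluster you would then recreate the secret before deploying the workload, for example (note that <code>--export</code> is deprecated/removed in newer kubectl versions; a plain <code>-o yaml</code> dump with the cluster-specific metadata stripped works as well):</p>
<pre><code>kubectl apply -f db-user-pass.yaml
kubectl get secret db-user-pass   # verify it exists before re-deploying
</code></pre>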
| Salman Memon |
<p>I have a cluster setup on google cloud with one of the deployments being containerized Angular app on nginx:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: product-adviser-front-deployment
labels:
app: angular-front
version: v1
spec:
replicas: 1
selector:
matchLabels:
name: product-adviser-front-deployment
template:
metadata:
labels:
name: product-adviser-front-deployment
spec:
containers:
- name: product-adviser-front-app
image: aurrix/seb:product-adviser-front
imagePullPolicy: Always
ports:
- containerPort: 80
env:
- name: API_URL
value: http://back:8080/
readinessProbe:
initialDelaySeconds: 30
httpGet:
path: /healthz
port: 80
livenessProbe:
initialDelaySeconds: 30
httpGet:
path: /healthz
port: 80
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: front
spec:
selector:
name: product-adviser-front-deployment
ports:
- port: 80
type: NodePort
</code></pre>
<p>Nginx configuration:</p>
<pre><code> worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
#Allow CORS
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
log_format gzip '[$time_local] ' '"$request" $status $bytes_sent';
index index.html;
include /etc/nginx/mime.types;
default_type application/javascript;
access_log /dev/stdout;
charset utf-8;
sendfile on;
keepalive_timeout 65;
#include /etc/nginx/conf.d/*.conf;
server {
listen 80;
listen [::]:80;
server_name localhost;
access_log /dev/stdout;
root /usr/share/nginx/html;
location / {
try_files $uri $uri/ /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
try_files $uri $uri/ =404;
}
location /healthz {
return 200 'no content';
}
}
# Compression
#include /etc/nginx/gzip.conf;
}
</code></pre>
<p>Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: main-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: front
servicePort: 80
</code></pre>
<p>Whenever I visit the external IP address, the Angular app responds with index.html but fails to load its resources.</p>
<p>I am pretty sure it is somehow related to the ingress configuration, but I have already exhausted myself trying to figure this out.</p>
<p>What am I doing wrong here?</p>
<p>As a side note, I have installed ingress-nginx to my cluster, it seems to be working fine.
base href is present in index.html.
The docker image works perfectly fine locally.</p>
| Alisher | <p>The configuration works as-is. It seems the ingress needed a forced restart, i.e. deleting it and applying it again.</p>
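<p>For reference, the forced restart amounts to something like this (assuming the Ingress manifest above is saved in a file, here called <code>ingress.yaml</code>):</p>
<pre><code>kubectl delete ingress main-ingress
kubectl apply -f ingress.yaml
</code></pre>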
| Alisher |
<p>I want to calculate the CPU and Memory Percentage of Resource utilization of an individual pod in Kubernetes. For that, I am using metrics server API</p>
<ol>
<li>From the metrics server, I get the utilization from this command</li>
</ol>
<p>kubectl top pods --all-namespaces</p>
<pre><code>kube-system coredns-5644d7b6d9-9whxx 2m 6Mi
kube-system coredns-5644d7b6d9-hzgjc 2m 7Mi
kube-system etcd-manhattan-master 10m 53Mi
kube-system kube-apiserver-manhattan-master 23m 257Mi
</code></pre>
<p>But I want the percentage utilization of an individual pod, both CPU% and MEM%.</p>
<p>From this output of the top command it is not clear out of how much total CPU and memory these amounts are consumed.</p>
<p>I don't want to use the Prometheus operator. I saw one formula for it:</p>
<pre><code>sum (rate (container_cpu_usage_seconds_total{image!=""}[1m])) by (pod_name)
</code></pre>
<p>Can I calculate it with <a href="https://github.com/kubernetes-sigs/metrics-server" rel="noreferrer">MetricsServer</a> API?</p>
<p>I thought to calculate like this </p>
<p><strong>CPU%</strong> = ((2+2+10+23)/ Total CPU MILLICORES)*100</p>
<p><strong>MEM%</strong> = ((6+7+53+257)/AllocatableMemory)* 100</p>
<p>Please tell me if I am right or wrong, because I didn't see any standard formula for calculating pod utilization in the Kubernetes documentation.</p>
| UDIT JOSHI | <p>Unfortunately <code>kubectl top pods</code> provides only <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/walkthrough.md#quantity-values" rel="nofollow noreferrer">quantity values</a> and not percentages.</p>
<p><a href="https://github.com/kubernetes-sigs/metrics-server/issues/193#issuecomment-451309811" rel="nofollow noreferrer">Here</a> is a good explanation of how to interpret those values.</p>
<p>It is currently not possible to list pod resource usage in percentages with a <code>kubectl top</code> command.</p>
<p>You could still choose <a href="https://prometheus.io/docs/visualization/grafana/" rel="nofollow noreferrer">Grafana with Prometheus</a>, but it was already stated that you don't want to use it (however, maybe another member of the community with a similar problem would, so I am mentioning it here).</p>
<p><strong>EDIT:</strong></p>
<p>Your formulas are correct. They will calculate how much CPU/Mem is being consumed by all Pods relative to the total CPU/Mem you have. </p>
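<p>To get the denominators without Prometheus, you can read the allocatable capacity straight from the node objects, for example:</p>
<pre><code>kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.cpu}{"\t"}{.status.allocatable.memory}{"\n"}{end}'
</code></pre>
<p>Summing those values gives the total allocatable CPU/memory to divide by.</p>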
<p>I hope it helps. </p>
| WytrzymaΕy Wiktor |
<p>I am trying to deploy a private docker registry on Kubernetes using the official Docker image for the registry. However, I am getting some warnings on the deployment and also I am not able to access it in the pod.</p>
<p>The output from the registry container:</p>
<pre><code>time="2019-04-12T18:40:21Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_PORT"
time="2019-04-12T18:40:21Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_PORT_5000_TCP"
time="2019-04-12T18:40:21Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_PORT_5000_TCP_ADDR"
time="2019-04-12T18:40:21Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_PORT_5000_TCP_PORT"
time="2019-04-12T18:40:21Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_PORT_5000_TCP_PROTO"
time="2019-04-12T18:40:21Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_HOST"
time="2019-04-12T18:40:21Z" level=warning msg="Ignoring unrecognized environment variable REGISTRY_SERVICE_PORT"
time="2019-04-12T18:40:21.145278902Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.11.2 instance.id=988660e4-d4b9-4d21-a42e-c60c82d00a73 service=registry version=v2.7.1
time="2019-04-12T18:40:21.145343563Z" level=info msg="redis not configured" go.version=go1.11.2 instance.id=988660e4-d4b9-4d21-a42e-c60c82d00a73 service=registry version=v2.7.1
time="2019-04-12T18:40:21.149771291Z" level=info msg="Starting upload purge in 2m0s" go.version=go1.11.2 instance.id=988660e4-d4b9-4d21-a42e-c60c82d00a73 service=registry version=v2.7.1
time="2019-04-12T18:40:21.163063574Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.11.2 instance.id=988660e4-d4b9-4d21-a42e-c60c82d00a73 service=registry version=v2.7.1
time="2019-04-12T18:40:21.163689856Z" level=info msg="listening on [::]:5000" go.version=go1.11.2 instance.id=988660e4-d4b9-4d21-a42e-c60c82d00a73 service=registry version=v2.7.1
</code></pre>
<p>The yaml files for the deployment on Kubernetes:</p>
<pre><code>104 apiVersion: extensions/v1beta1
105 kind: Deployment
106 metadata:
107 name: registry
108 namespace: docker
109 spec:
110 replicas: 1
111 template:
112 metadata:
113 labels:
114 name: registry
115 spec:
116 containers:
117 - resources:
118 name: registry
119 image: registry:2
120 ports:
121 - containerPort: 5000
122 volumeMounts:
123 - mountPath: /var/lib/registry
124 name: images
140 volumes:
141 - name: images
142 hostPath:
143 path: /mnt/nfs/docker-local-registry/images
150 nodeSelector:
151 name: master
152 ---
153 apiVersion: v1
154 kind: Service
155 metadata:
156 name: registry
157 namespace: docker
158 spec:
159 ports:
160 - port: 5000
161 targetPort: 5000
162 selector:
163 name: registry
</code></pre>
<p>Even if I ignore those warnings, accessing the registry at pod level with <code>registry.docker:5000/image_name</code> and <code>registry.docker.svc.cluster.local/image_name</code> won't work because the host cannot be resolved. I don't want the registry exposed. All I want is for pods to be able to pull images from there.</p>
| thzois | <p>Not sure I understand your use case completely, but if you want to start a pod based on an image served from the internal registry, it is important to understand that it is not the pod but the dockerd on the cluster node that pulls the image. Internal DNS names like *.svc.cluster.local cannot be resolved there; at least in many K8s environments this is the case.
Some people were discussing this here: <a href="https://github.com/knative/serving/issues/1996" rel="nofollow noreferrer">https://github.com/knative/serving/issues/1996</a>
It might help if you post which Kubernetes provider (GKE, EKS etc.) you are using.
The solution is to configure the cluster nodes to resolve these names, or to expose your registry externally using an ingress.</p>
| Klaus Deissner |
<p>I have an nginx ingress controller on AKS which I configured using the <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic" rel="nofollow noreferrer">official guide</a>. I also wanted to configure nginx to allow underscores in headers, so I wrote the following configmap</p>
<pre><code>apiVersion: v1
kind: ConfigMap
data:
enable-underscores-in-headers: "true"
metadata:
name: nginx-configuration
</code></pre>
<p>Note that I am using the default namespace for nginx. However, after applying the configmap nothing seems to happen. I see no events. What am I doing wrong here?</p>
<pre><code>Name: nginx-configuration
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","data":{"enable-underscores-in-headers":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-configura...
Data
====
enable-underscores-in-headers:
----
true
Events: <none>
</code></pre>
| user11791088 | <p>The solution was to name the configmap correctly. First I ran <code>kubectl describe deploy nginx-ingress-controller</code>, which showed the configmap this deployment is looking for. In my case it was something like <code>--configmap=default/nginx-ingress-controller</code>. I changed the name of my configmap to <code>nginx-ingress-controller</code>. As soon as I did that, the controller picked up the data from my configmap and changed the configuration inside my nginx pod.</p>
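<p>In other words, the ConfigMap from the question works once it is renamed to match that flag, roughly:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller   # must match the --configmap=<namespace>/<name> flag of the controller
  namespace: default
data:
  enable-underscores-in-headers: "true"
</code></pre>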
| user11791088 |
<p>I would like my Jenkins pod to authenticate and get a secret from the Vault pod running on the same cluster. Which auth method would be the best option for that? I am thinking about the Kubernetes auth method, but I'm not sure it is the best option for my case; in my opinion this method is better suited for when Vault is running outside the Kubernetes cluster.</p>
<p>Thanks in advance.</p>
| k0chan | <p>I got two options for you:</p>
<ol>
<li>Use <a href="https://github.com/jenkinsci/hashicorp-vault-plugin#jenkins-vault-plugin" rel="nofollow noreferrer">Jenkins Vault Plugin</a>:</li>
</ol>
<blockquote>
<p>This plugin allows authenticating against Vault using the AppRole
authentication backend. Hashicorp recommends using AppRole for Servers
/ automated workflows (like Jenkins)</p>
</blockquote>
<p>This is the recommended way for authenticating and it works by registering an approle auth backend using a self-chosen name (e.g. Jenkins). The approle is identified by a <code>role-id</code> and secured with a <code>secret_id</code>. If you have both of those values you can ask Vault for a token that can be used to access vault. </p>
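<p>A minimal sketch of that flow with the Vault CLI (the role name <code>jenkins</code> and the policy <code>jenkins-policy</code> are just placeholders):</p>
<pre><code># one-time setup on the Vault side
vault auth enable approle
vault write auth/approle/role/jenkins token_policies="jenkins-policy"

# values handed to Jenkins (role_id is static, secret_id is generated on demand)
vault read -field=role_id auth/approle/role/jenkins/role-id
vault write -f -field=secret_id auth/approle/role/jenkins/secret-id

# what the plugin does under the hood: exchange them for a client token
vault write auth/approle/login role_id="<role_id>" secret_id="<secret_id>"
</code></pre>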
<ol start="2">
<li>Use <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Kubernetes auth method</a>.</li>
</ol>
<p><a href="https://medium.com/hootsuite-engineering/jenkins-kubernetes-and-hashicorp-vault-c2011bd2d66c" rel="nofollow noreferrer">Here</a> you can find an interesting read regarding Jenkins, Kubernetes, and Hashicorp Vault.</p>
<p>Choose the option that better suits you. Both are explained in detail.</p>
<p>Please let me know if that helped. </p>
| WytrzymaΕy Wiktor |
<p>I have deployed a Spring Boot application on a pod (pod1) on a node (node1). I have also deployed JMeter on another pod (pod2) on a different node (node2). I am trying to perform automated load testing from pod2. To perform load testing, I need to restart pod1 for each test case. How do I restart pod1 from pod2?</p>
| anushiya-thevapalan | <p>If you have a deployment-type workload you can go into it through Workloads > [Deployment name] > Managed Pods > [Pod name] and delete the pod. </p>
<p>You can also do this with <code>kubectl delete pod [pod name]</code></p>
<p>If you have a minimum number of pods for that deployment set then GKE will automatically spin up another pod, effectively restarting it.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/deployment" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/deployment</a></p>
| Nicholas Elkaim |
<p>I'm teaching myself Kubernetes with a 5 Rpi cluster, and I'm a bit confused by the way Kubernetes treats Persistent Volumes with respect to Pod Scheduling.</p>
<p>I have 4 worker nodes using ext4 formatted 64GB micro SD cards. It's not going to give GCP or AWS a run for their money, but it's a side project.</p>
<p>Let's say I create a Persistent volume Claim requesting 10GB of storage on <code>worker1</code>, and I deploy a service which relies on this PVC, is that service then forced to be scheduled on <code>worker1</code>?</p>
<p>Should I be looking into distributed file systems like Ceph or Hdfs so that Pods aren't restricted to being scheduled on a particular node?</p>
<p>Sorry if this seems like a stupid question, I'm self taught and still trying to figure this stuff out! (Feel free to improve my <a href="https://github.com/strickolas/tldr/blob/master/kubernetes/02-tldr-ckad.md" rel="nofollow noreferrer">tl;dr doc</a> for kubernetes with a pull req)</p>
| Nick Saccente | <p>Just some examples. As already mentioned, it depends on your storage system; as I can see, you are using the local storage option.</p>
<p><strong>Local Storage:</strong>
yes the pod needs to be run on the same machine where the pv is located (your case)</p>
<p><strong>iSCSI/Trident SAN:</strong>
no, the node where the pod gets scheduled will mount the iSCSI block device
(as mentioned already, the volume binding mode is an important keyword here; it's possible you need to set this to 'WaitForFirstConsumer', see the sketch at the end of this answer)</p>
<p><strong>NFS/Trident NAS:</strong>
no, it's NFS, mountable from everywhere as long as you can access and authenticate against it</p>
<p><strong>VMware VMDKs:</strong>
no, same as iSCSI: the node which gets the pod scheduled mounts the VMDK from the datastore</p>
<p><strong>ceph/rook.io:</strong>
no, you get 3 options for storage (file, block and object storage); every type is distributed, so you can schedule a pod on every node.
Also, Ceph is the ideal system for running distributed software-defined storage on commodity hardware. What I can recommend is <a href="https://rook.io/" rel="nofollow noreferrer">https://rook.io/</a>, basically an open-source Ceph on 'container-steroids'</p>
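<p>For reference, the <code>WaitForFirstConsumer</code> binding mode mentioned above is set on the StorageClass; a minimal sketch for plain local volumes (which matches the SD-card setup in the question) could look like this:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage            # placeholder name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # bind the PV only once a pod is actually scheduled
</code></pre>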
| Elytscha Smith |
<p>I am using digital ocean to deploy a managed kubernetes cluster and it gives me a config file to download. How do I use the downloaded .yaml file with my default config file in /.kube directory? </p>
<p>I tried to merge the config files, but it did not work. Is there any easy way to make it work?</p>
| Bharadwaj Rani | <p>You can specify which configuration file to use with your <code>kubectl</code> command by adding the <code>--kubeconfig</code> flag.</p>
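<p>For example (the path to the downloaded file is just a placeholder):</p>
<pre><code># use the downloaded file for a single command
kubectl --kubeconfig ~/Downloads/do-cluster-kubeconfig.yaml get nodes

# or make both files visible at once and switch between contexts
export KUBECONFIG=~/.kube/config:~/Downloads/do-cluster-kubeconfig.yaml
kubectl config get-contexts
kubectl config use-context <do-cluster-context>

# optionally persist the merged view
kubectl config view --flatten > ~/.kube/merged-config
</code></pre>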
| Alassane Ndiaye |
<p>I use nfs-client-provisioner inside my kubernetes cluster.</p>
<p>But, the name of the PersistentVolume is random.</p>
<p>cf. doc:
<a href="https://github.com/rimusz/nfs-client-provisioner" rel="nofollow noreferrer">nfs-client-provisioner</a></p>
<blockquote>
<p>--> Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}</p>
</blockquote>
<p>But where could I change the value of pvName?</p>
<p>Actually, it's random; for example: pvName = pvc-2v82c574-5bvb-491a-bdfe-061230aedd5f</p>
| DevOpsAddict | <p>This is the naming convention of the directories corresponding to the <code>PV</code> names as stored on the NFS server's share.</p>
<p>When it comes to the <code>PV</code> name provisioned dynamically by the <code>nfs-provisioner</code>, it follows this naming convention:</p>
<p><code>pvc-</code> + <code>claim.UID</code></p>
<p>Background information:</p>
<p>According to the design proposal of external storage provisioners (NFS-client belongs to this category), you must not declare <code>volumeName</code> explicitly in <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/volume-provisioning.md" rel="nofollow noreferrer">PVC spec</a>.</p>
<blockquote>
<p># volumeName: must be empty!</p>
</blockquote>
<p><code>pv.Name</code> <strong>MUST</strong> be unique. Internal provisioners use name based on <code>claim.UID</code> to produce conflicts when two provisioners accidentally provision a <code>PV</code> for the same claim, however external provisioners can use any mechanism to generate an unique <code>PV</code> name.</p>
<p>In case of <code>nfs-client</code> provisioner, the <code>pv.Name</code> generation is handled by the <code>controller</code> library, and it gets following format:</p>
<p><code>pvc-</code> + <code>claim.UID</code></p>
<p><a href="https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/blob/c525773885fccef89a1edd22bb3f813d50548ed1/controller/controller.go#L1414" rel="nofollow noreferrer">Source</a></p>
<p>I hope it helps.</p>
| WytrzymaΕy Wiktor |
<p>I have a FastAPI app with the following code</p>
<pre class="lang-py prettyprint-override"><code> @app.on_event("startup")
async def startup_event():
"""Initialize application services"""
print("Starting the service")
</code></pre>
<p>when I run FastAPI directly from the terminal, I get the following output</p>
<pre><code>INFO: Started server process [259936]
INFO: Waiting for application startup.
Starting the service
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
</code></pre>
<p>You can see that the print statement got executed.</p>
<p>However, when the same app is automatically run inside a Kubernetes cluster, I get the following output</p>
<pre><code> INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
</code></pre>
<p>The print statement did not get executed, in fact, any additional code inside the function never gets executed.</p>
<p>However, if I exit the process like this:</p>
<pre class="lang-py prettyprint-override"><code>@app.on_event("startup")
async def startup_event():
"""Initialize application services"""
print("Starting the service")
exit(99)
</code></pre>
<p>The process exists then I can see the print statement.</p>
<pre><code>SystemExit: 99
ERROR: Application startup failed. Exiting.
Starting the service
</code></pre>
<p>What is the problem here?</p>
<p>Edit: Actually no code whatsoever gets executed, I have put print statements literally everywhere and nothing gets printed, but somehow the webserver runs...</p>
| Adham Salama | <p>So, actually, there is no problem with my code, FastAPI, asyncio, or Kubernetes.</p>
<p>Everything was actually working correctly, it's just that the output was buffered.</p>
<p>After adding flush=True to the print statement, everything showed.</p>
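<p>That is, the startup hook from the question becomes the following (alternatively, setting the <code>PYTHONUNBUFFERED=1</code> environment variable in the container has the same effect):</p>
<pre class="lang-py prettyprint-override"><code>@app.on_event("startup")
async def startup_event():
    """Initialize application services"""
    print("Starting the service", flush=True)  # flush so the line is not stuck in the stdout buffer
</code></pre>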
<p>I am answering this in case some poor soul stumbles upon this thread in the future.</p>
<p>I spent days debugging this!!!</p>
| Adham Salama |
<p>A lot of times the commands I run look like</p>
<pre><code>kubectl get * | grep abc
</code></pre>
<p>but this way I can't see the first row (which is the column names), is there an easy alternative to this such that I'll see 2 rows (for resource <code>abc</code> and the column names)?</p>
| Serge Vu | <p>Kubernetes already supports JSONPath, so we can get the value of any field of a Kubernetes object.</p>
<p>This is an example where I want to get the namespace of a pod:</p>
<pre><code>$ kubectl get pods -l app=app-demo --all-namespaces -o=jsonpath='{.items[0].metadata.namespace}'
demo%
</code></pre>
<p>You can find the reference here: <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="noreferrer">JSONPath Support</a>.</p>
| Tho Quach |
<p>I am trying to update this chart :</p>
<pre><code>k8s.v1.cni.cncf.io/networks: '[
{ "name" : "ext", "ips": {{ .Values.global.extoamlancas0 | quote }}, "interface": "e1"},
{ "name" : "ext-app", "ips": {{ .Values.global.extapplancas0 | quote }}, "interface": "e3"},
{ "name" : "int-", "ips": {{ .Values.global.intoamlancas0 | quote }}, "interface": "e2"}
]'
Here,
if {{ .Values.a }} is true then I want "ips" to be in an array, i.e.
{ "name" : "ext-", "ips": [ {{ .Values.global.extoamlancas0 | quote }} ], "interface": "e1"}
else
{ "name" : "ext", "ips": {{ .Values.global.extoamlancas0 | quote }}, "interface": "e1"}
</code></pre>
<p>I want this to be done for the other 2 IPs too.</p>
| kavya Vaish | <p>In the values.yaml file you need to specify an array ips like this:</p>
<pre><code>ips:
- address: 192.168.1.1
name: no1
- address: 192.168.1.2
name: no2
</code></pre>
<p>And in the templates file you can loop like this:</p>
<pre><code>{{- range .Values.ips }}
- name: {{ .name }}
address: {{ .address }}
{{- end }}
</code></pre>
<p>Below is the snippet from golang docs: <a href="https://golang.org/pkg/text/template/#hdr-Actions" rel="nofollow noreferrer">template - Go | range</a></p>
<blockquote>
<p>{{range pipeline}} T1 {{end}} The value of the pipeline must be an
array, slice, map, or channel. If the value of the pipeline has
length zero, nothing is output; otherwise, dot is set to the
successive elements of the array, slice, or map and T1 is executed.
If the value is a map and the keys are of basic type with a defined
order, the elements will be visited in sorted key order.</p>
<p>{{range pipeline}} T1 {{else}} T0 {{end}} The value of the pipeline
must be an array, slice, map, or channel. If the value of the
pipeline has length zero, dot is unaffected and T0 is executed;
otherwise, dot is set to the successive elements of the array, slice,
or map and T1 is executed.</p>
</blockquote>
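<p>If you specifically need the if/else form from the question (wrapping the value in an array only when a flag, here assumed to be <code>.Values.a</code>, is set), a minimal sketch is:</p>
<pre><code>{{- if .Values.a }}
{ "name" : "ext", "ips": [ {{ .Values.global.extoamlancas0 | quote }} ], "interface": "e1" },
{{- else }}
{ "name" : "ext", "ips": {{ .Values.global.extoamlancas0 | quote }}, "interface": "e1" },
{{- end }}
</code></pre>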
| Tho Quach |
<p>I am running airflow with the Kubernetes executor on a docker-desktop kubernetes cluster (on Mac). I have multiple sensor operators in the dag file, each of them part of a downstream dependency. In total 22 sensor operators run in parallel; as a result, after 5-7 minutes of execution my kubernetes cluster connection drops. After restarting the cluster, I can again access my k8s dashboard and check the logs of all <code>red</code> failed tasks, and they seem to complain about a mysql connection failure. <a href="https://i.stack.imgur.com/S7dnk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S7dnk.png" alt="image"></a></p>
<pre><code>[2019-08-15 10:09:03,829] {__init__.py:1374} INFO - Executing <Task(IngestionStatusSensor): ingestion_ready_relational_character_creation> on 2019-03-15T00:00:00+00:00
[2019-08-15 10:09:03,829] {base_task_runner.py:119} INFO - Running: ['airflow', 'run', 'datascience_ecc_v1', 'ingestion_ready_relational_character_creation', '2019-03-15T00:00:00+00:00', '--job_id', '22', '--raw', '-sd', 'DAGS_FOLDER/DAG_datascience_ecc_v1.py', '--cfg_path', '/tmp/tmpb3993h8h']
[2019-08-15 10:10:00,468] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:10:00,447] {settings.py:182} INFO - settings.configure_orm(): Using pool settings. pool_size=10, pool_recycle=1800, pid=11
[2019-08-15 10:12:39,448] {logging_mixin.py:95} INFO - [2019-08-15 10:12:39,381] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:42,967] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:12:42,772] {__init__.py:51} INFO - Using executor LocalExecutor
[2019-08-15 10:12:44,651] {logging_mixin.py:95} INFO - [2019-08-15 10:12:44,651] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:45,331] {logging_mixin.py:95} INFO - [2019-08-15 10:12:45,331] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:45,364] {logging_mixin.py:95} INFO - [2019-08-15 10:12:45,364] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:50,394] {logging_mixin.py:95} INFO - [2019-08-15 10:12:50,394] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:55,415] {logging_mixin.py:95} INFO - [2019-08-15 10:12:55,415] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:55,529] {logging_mixin.py:95} INFO - [2019-08-15 10:12:55,528] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:58,758] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:12:58,724] {cli_action_loggers.py:70} ERROR - Failed on pre-execution callback using <function default_action_log at 0x7f7452d13730>
[2019-08-15 10:12:58,758] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation Traceback (most recent call last):
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2228, in _wrap_pool_connect
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return fn()
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 434, in connect
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return _ConnectionFairy._checkout(self)
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 831, in _checkout
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation fairy = _ConnectionRecord.checkout(pool)
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 563, in checkout
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation rec = pool._do_get()
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1259, in _do_get
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation self._dec_overflow()
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 67, in __exit__
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation compat.reraise(exc_type, exc_value, exc_tb)
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 277, in reraise
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation raise value
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1256, in _do_get
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return self._create_connection()
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 379, in _create_connection
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return _ConnectionRecord(self)
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 508, in __init__
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation self.__connect(first_connect_check=True)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 710, in __connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation connection = pool._invoke_creator(self)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return dialect.connect(*cargs, **cparams)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 437, in connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return self.dbapi.connect(*cargs, **cparams)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/__init__.py", line 85, in Connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return Connection(*args, **kwargs)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/connections.py", line 208, in __init__
[2019-08-15 10:12:58,778] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation super(Connection, self).__init__(*args, **kwargs2)
[2019-08-15 10:12:58,778] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation _mysql_exceptions.OperationalError: (2006, "Unknown MySQL server host 'mysql' (111)")
</code></pre>
<p>However, if I disable the dag from the airflow UI dashboard and run each failed task independently, they seem to run successfully. I thought maybe there is a connection limit on mysql, so I added the following to the airflow core configs:</p>
<pre><code>sql_alchemy_pool_enabled=True
sql_alchemy_pool_size = 10
sql_alchemy_max_overflow = 15
sql_alchemy_pool_recycle = 1800
sql_alchemy_reconnect_timeout = 300
</code></pre>
<p>I also tried increasing the <code>parallelism</code> and <code>dag_concurrency</code> to 32 and 40 respectively in the airflow config.cfg, but neither of these configs had any effect. I have no idea what's causing these failures. Either the cluster goes down first and then the worker pods are not able to connect to the mysql server, or it's the other way around. Is it an issue with the docker-desktop kubernetes cluster? Should I be looking at the logs of kube-dns?</p>
<p><strong>Update</strong>
After I ran 3 dag tasks together, the cluster hung again and this time the airflow-webserver gave up too.</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 397, in _error_catcher
yield
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 704, in read_chunked
self._update_chunk_length()
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 643, in _update_chunk_length
raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 293, in run
self.worker_uuid)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 314, in _run
**kwargs):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 144, in stream
for line in iter_resp_lines(resp):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 48, in iter_resp_lines
for seg in resp.read_chunked(decode_content=False):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 732, in read_chunked
self._original_response.close()
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 415, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
[2019-08-13 14:39:03,684] {kubernetes_executor.py:295} ERROR - Unknown error in KubernetesJobWatcher. Failing
Traceback (most recent call last):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 639, in _update_chunk_length
self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
...
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 415, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
...
File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/__init__.py", line 85, in Connect
return Connection(*args, **kwargs)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/connections.py", line 208, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (2006, "Unknown MySQL server host 'mysql' (111)")
</code></pre>
<p>I have little clue where to look, but if someone does, let me know and I will provide the relevant logs as required.</p>
<p><strong>Environment:</strong>
docker: 19.03.1,
kubernetes: 1.14.3,
airflow: 1.10.3,
mysql: 5.7,</p>
| Anum Sheraz | <p>It looks like a resource issue.</p>
<p>You can try to increase CPU and memory in the Docker settings and connect to the Docker VM to check resource usage.</p>
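<p>For example — generic commands, assuming the metrics-server add-on is available for <code>kubectl top</code> and that the node is named <code>docker-desktop</code> (the Docker Desktop default) — you can check current usage and capacity like this:</p>
<pre><code>kubectl top nodes
kubectl top pods --all-namespaces
kubectl describe node docker-desktop   # look at the Allocatable and Allocated resources sections
</code></pre>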
<p>Alternatively, you can create a cluster with one master and one or two nodes in VirtualBox and try to run the tasks there. In this case the master node will not suffer from lack of resources and the cluster should still be available to connect to.</p>
<p><a href="https://gist.github.com/BretFisher/5e1a0c7bcca4c735e716abf62afad389" rel="nofollow noreferrer">Here</a> is how to connect to docker-desktop for mac </p>
<p>Please let me know if that helped.</p>
| WytrzymaΕy Wiktor |
<p>I created an ingress controller and an ingress service first, and then, to route requests to NodePort-serviceA (resources-service) and NodePort-serviceB (staffing-service) in a k8s cluster, the ingress file below (named staffing-ingress.yaml) is applied.</p>
<p>Resources-service and Staffing-service can communicate with each other from inside the container: hitting the curl command "curl -v <a href="http://staffing-service:8080/swagger/index.html" rel="nofollow noreferrer">http://staffing-service:8080/swagger/index.html</a>" from the resources container returns an accurate result with HTTP 200, and vice versa. However, if an incomplete URL such as <a href="http://staffing-service:8080/" rel="nofollow noreferrer">http://staffing-service:8080/</a> is hit, it throws a 404 error.</p>
<p>I have a doubt about the way path routing is done in the staffing-ingress.yaml file below. Any suggestions are welcome.</p>
<p><strong>Here are the details</strong> </p>
<p>Accessing the services (staffing or resources) using the nginx load balancer IP does not return the results and throws a 404 error.</p>
<pre><code>Curl βv http:// a5b9f45d4119a11eabbd90a9c35f3125-1159448980.us-east-2.elb.amazonaws.com:80/api/resources throws the below error
* TCP_NODELAY set
* Connected to a5b9f45d4119a11eabbd90a9c35f3125-1159448980.us-east-2.elb.amazonaws.com (3.134.165.38) port 80 (#0)
< HTTP/1.1 404 Not Found
< Server: openresty/1.15.8.2
< X-Response-Time-ms: 0
</code></pre>
<p><strong>Staffing-Ingress.YAML</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: staffing-ingress
namespace: default
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
http:
paths:
- path: /api/resources
backend:
serviceName: resources-service
servicePort: 8001
- path: /api/staffing
backend:
serviceName: staffing-service
servicePort: 8080
</code></pre>
<p><strong>Expected result</strong>
is that when hitting the ingress via the external LB with the URI "/api/resources/", it should call the api-service <a href="http://resources-service:8001/swagger/index.html" rel="nofollow noreferrer">http://resources-service:8001/swagger/index.html</a>.</p>
| fortanu82 | <p>Everything is working properly in terms of routing.
You are simply using the wrong URL.
You have to use this URL:</p>
<p><code>http://a5b9f45d4119a11eabbd90a9c35f3125-1159448980.us-east-2.elb.amazonaws.com:80/api/resources/swagger/index.html</code></p>
<p>You have to append <strong>swagger/index.html</strong> to your URL, just like you do when accessing it via the service.</p>
<pre><code>curl -v http://staffing-service:8080/swagger/index.html
curl -v http://a5b9f45d4119a11eabbd90a9c35f3125-1159448980.us-east-2.elb.amazonaws.com:80/api/resources/swagger/index.html
</code></pre>
| Muhammad Abdul Raheem |
<p>I have a Kubernetes cluster and I have tested submitting 1,000 jobs at a time and the cluster has no problem handling this. I am interested in submitting 50,000 to 100,000 jobs and was wondering if the cluster would be able to handle this?</p>
| magladde | <p>Yes, you can, but only if you don't run out of resources and you don't exceed these <a href="https://kubernetes.io/docs/setup/best-practices/cluster-large/#support" rel="nofollow noreferrer">criteria</a> regarding building large clusters.</p>
<p>Usually you want to limit your jobs in some way in order to better handle memory and CPU or to adjust it in any other way according to your needs. </p>
<p>So the best practice in your case would be to:</p>
<ul>
<li>set as many jobs as you want (bear in mind the building large clusters criteria)</li>
<li>observe the resource usage</li>
<li>if needed, use for example <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">Resource Quotas</a> in order to limit resources used by the jobs (see the sketch after this list)</li>
</ul>
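<p>As a minimal sketch (the namespace, name and numbers are arbitrary examples), an object-count quota can cap how many Job objects exist in a namespace alongside the total CPU and memory they may request:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: job-quota              # example name
  namespace: default
spec:
  hard:
    count/jobs.batch: "5000"   # example cap on the number of Job objects
    requests.cpu: "100"        # total CPU all pods in the namespace may request
    requests.memory: 200Gi     # total memory all pods in the namespace may request
</code></pre>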
<p>I hope you find this helpful. </p>
| WytrzymaΕy Wiktor |
<p>I've come across 2 types of syntax for creating ConfigMaps from files in Kubernetes.</p>
<p><strong>first one</strong> ;</p>
<hr>
<pre><code>apiVersion: v1
data:
file1.yaml: |+
parameter1=value1
kind: ConfigMap
metadata:
name: my-configmap
</code></pre>
<p><strong>second one</strong> ;</p>
<pre><code>apiVersion: v1
data:
  file1.yaml: |-
parameter1=value1
kind: ConfigMap
metadata:
name: my-configmap
</code></pre>
<p>what is the difference between |+ and |- ?</p>
| whatmakesyou | <p>This is the <a href="https://yaml-multiline.info/" rel="noreferrer">block chomping indicator</a>.</p>
<p>Directly quoting from the link:</p>
<blockquote>
<p>The chomping indicator controls what should happen with newlines at
the end of the string. The default, clip, puts a single newline at the
end of the string. To remove all newlines, strip them by putting a
minus sign (-) after the style indicator. Both clip and strip ignore
how many newlines are actually at the end of the block; to keep them
all put a plus sign (+) after the style indicator.</p>
</blockquote>
<p>This means that for:</p>
<pre><code>apiVersion: v1
data:
file1.yaml: |-
parameter1=value1
kind: ConfigMap
metadata:
name: my-configmap
</code></pre>
<p>file1.yaml will have the value:</p>
<pre><code>parameter1=value1
</code></pre>
<p>For:</p>
<pre><code>apiVersion: v1
data:
file1.yaml: |+
parameter1=value1
kind: ConfigMap
metadata:
name: my-configmap
</code></pre>
<p>file1.yaml will have the value:</p>
<pre><code>parameter1=value1 # line break
# line break
# line break
</code></pre>
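<p>If you want to see exactly which trailing newlines end up in the stored value, one way (assuming the manifest is saved as <code>configmap.yaml</code>) is to dump the key and pipe it through <code>od</code>:</p>
<pre><code>kubectl apply -f configmap.yaml
# dots in the key name must be escaped in the jsonpath expression
kubectl get configmap my-configmap -o jsonpath='{.data.file1\.yaml}' | od -c
</code></pre>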
| Alassane Ndiaye |
<p>I want to see logs in Stackdriver/Google Logging per HTTP request.
Currently, I get all the logs, but with so many pods I can't tell which log belongs to which request.</p>
<p>On appengine every log entry is per http request by default and contains nested logs from the same request.</p>
<p>I am using gunicorn with Python, if that helps.</p>
<p>In case it is useful, this is how I write logs:</p>
<pre><code>def set_logging_env(app):
logging.basicConfig(format='', level=logging.INFO)
if __name__ != '__main__':
gunicorn_logger = logging.getLogger('gunicorn.info')
app.logger.handlers = gunicorn_logger.handlers
app.logger.setLevel(gunicorn_logger.level)
</code></pre>
| Oren Lalezari | <p>There are some options to customize your logging patterns.
First I would suggest getting familiar with the basics from the official documentation <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">here</a>.</p>
<p>Than there is a general guide regarding <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/" rel="nofollow noreferrer">logging with stackdriver</a>.</p>
<p>There we have:</p>
<blockquote>
<p>Stackdriver Logging agent attaches metadata to each log entry, for you
to use later in queries to select only the messages youβre interested
in: for example, the messages from a particular pod.</p>
</blockquote>
<p>Which is one of the things you seek I suppose.</p>
<p>Finally you can follow <a href="https://cloud.google.com/logging/docs/view/overview" rel="nofollow noreferrer">this guide</a> to learn how to view logs and later <a href="https://cloud.google.com/logging/docs/view/advanced-filters" rel="nofollow noreferrer">this one</a> to set up advanced filters:</p>
<blockquote>
<p>This guide shows you how to write advanced logs filters, which are
expressions that can specify a set of log entries from any number of
logs. Advanced logs filters can be used in the Logs Viewer, the
Stackdriver Logging API, or the command-line interface.</p>
</blockquote>
<p>Also if you want to check logs from the running pods from the Kubernetes level you can use this <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods" rel="nofollow noreferrer">cheatsheet</a>.</p>
<pre><code>kubectl logs my-pod # dump pod logs (stdout)
kubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container
kubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case)
kubectl logs -l name=myLabel -c my-container # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container
kubectl logs -f my-pod # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers # stream all pods logs with label name=myLabel (stdout)
</code></pre>
<p>I hope I understood you correctly and my answer would be valuable.
Please let me know if that helped.</p>
| WytrzymaΕy Wiktor |
<p>I installed a Kubernetes cluster (one master and two nodes), and the status of the nodes is Ready on the master. When I deploy the dashboard and access it via the link <code>http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</code>, I get the error</p>
<blockquote>
<p>'dial tcp 10.32.0.2:8443: connect: connection refused' Trying to
reach: '<a href="https://10.32.0.2:8443/" rel="nofollow noreferrer">https://10.32.0.2:8443/</a>'</p>
</blockquote>
<p>The dashboard pod is in the Ready state, and I tried to ping 10.32.0.2 (the dashboard's IP) without success.</p>
<p>I run dashboard as the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">Web UI (Dashboard)</a> guide suggests.</p>
<p>How can I fix this ?</p>
| taibc | <p>There are a few options here:</p>
<ol>
<li><p>Most of the time if there is some kind of connection refused, timeout or similar error it is most likely a configuration problem. If you can't get the Dashboard running then you should try to deploy another application and try to access it. If you fail then it is not a Dashboard issue.</p></li>
<li><p>Check if you are using root/sudo.</p></li>
<li><p>Have you properly installed flannel or any other network for containers?</p></li>
<li><p>Have you checked your API logs? If not, please do so.</p></li>
<li><p>Check the description of the dashboard pod (<code>kubectl describe</code>) for anything suspicious (see the example commands after this list).</p></li>
<li><p>Analogically check the description of service.</p></li>
<li><p>What is your cluster version? Check if any updates are required. </p></li>
</ol>
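<p>For example (assuming the standard dashboard manifest, where everything lives in the <code>kubernetes-dashboard</code> namespace and carries the <code>k8s-app=kubernetes-dashboard</code> label):</p>
<pre><code>kubectl get pods -n kubernetes-dashboard -o wide
kubectl describe pod -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
kubectl logs -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
kubectl describe service kubernetes-dashboard -n kubernetes-dashboard
</code></pre>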
<p>Please let me know if any of the above helped.</p>
| WytrzymaΕy Wiktor |
<p>I have configured a Kubernetes cluster using kubeadm, by creating 3 Virtualbox nodes, each node running CentOS (master, node1, node2). Each virtualbox virtual machine is configured using 'Bridge' networking.
As a result, I have the following setup:</p>
<ol>
<li>Master node 'master.k8s' running at 192.168.19.87 (virtualbox)</li>
<li>Worker node 1 'node1.k8s' running at 192.168.19.88 (virtualbox)</li>
<li>Worker node 2 'node2.k8s' running at 192.168.19.89 (virtualbox)</li>
</ol>
<p>Now I would like to access services running in the cluster from my local machine (the physical machine where the virtualbox nodes are running).</p>
<p>Running <code>kubectl cluster-info</code> I see the following output:</p>
<pre><code>Kubernetes master is running at https://192.168.19.87:6443
KubeDNS is running at ...
</code></pre>
<p>As an example, let's say I deploy the dashboard inside my cluster, how do I open the dashboard UI using a browser running on my physical machine?</p>
| Salvatore | <p>The traditional way is to use <code>kubectl proxy</code> or a <code>Load Balancer</code>, but since you are in a <strong>development machine</strong> a <code>NodePort</code> can be used to publish the applications, as a Load balancer is not available in VirtualBox.</p>
<p>The following example deploys 3 replicas of an echo server running nginx and publishes the http port using a <code>NodePort</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: my-echo
image: gcr.io/google_containers/echoserver:1.8
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service-np
labels:
name: nginx-service-np
spec:
type: NodePort
ports:
- port: 8082 # Cluster IP http://10.109.199.234:8082
targetPort: 8080 # Application port
nodePort: 30000 # Example (EXTERNAL-IP VirtualBox IPs) http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
protocol: TCP
name: http
selector:
app: nginx
</code></pre>
<p>You can access the servers using any of the VirtualBox IPs, like
<a href="http://192.168.50.11:30000" rel="nofollow noreferrer">http://192.168.50.11:30000</a> or <a href="http://192.168.50.12:30000" rel="nofollow noreferrer">http://192.168.50.12:30000</a> or <a href="http://192.168.50.13:30000" rel="nofollow noreferrer">http://192.168.50.13:30000</a></p>
<p>See a full example at <a href="https://www.itwonderlab.com/ansible-kubernetes-vagrant-tutorial/" rel="nofollow noreferrer">Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube)</a>.</p>
| Javier Ruiz |
<p>I've added some scaling policies to my <strong>HorizontalPodAutoscaler</strong> but they are not being applied. The <em>scaleUp</em> and <em>scaleDown</em> behaviours are being ignored. I need a way to stop pods scaling up and down every few minutes in response to small CPU spikes. Ideally the HPA would scale up quickly in response to more traffic but scale down slowly after about 30 minutes of reduced traffic.</p>
<p>I'm running this on an AWS EKS cluster and I have setup the policies according to <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior</a>.</p>
<p>Could this be a limitation of EKS or my K8s version which is 1.14. I have run <code>kubectl api-versions</code> and my cluster does support <em>autoscaling/v2beta2</em>.</p>
<p>My Helm spec is:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: {{ template "app.fullname" . }}
labels:
app: {{ template "app.name" . }}
chart: {{ template "app.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
scaleTargetRef:
apiVersion: apps/v1beta2
kind: Deployment
name: "{{ template "app.fullname" . }}-server"
minReplicas: {{ .Values.hpa.minReplicas }}
maxReplicas: {{ .Values.hpa.maxReplicas }}
metrics:
- type: Resource
resource:
name: cpu
target:
type: AverageValue
averageValue: 200m
behavior:
scaleUp:
stabilizationWindowSeconds: 300
policies:
- type: Pods
value: 1
periodSeconds: 300
scaleDown:
stabilizationWindowSeconds: 1200
policies:
- type: Pods
value: 1
periodSeconds: 300
</code></pre>
| Mikhail Janowski | <p>It works for me,</p>
<p>Client Version: v1.20.2
Server Version: v1.18.9-eks-d1db3c</p>
<p><code>kubectl api-versions</code> shows that my cluster also supports <em>autoscaling/v2beta2</em>.</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: {{ template "ks.fullname" . }}-keycloak
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ template "ks.fullname" . }}-keycloak
minReplicas: {{ .Values.keycloak.hpa.minpods }}
maxReplicas: {{ .Values.keycloak.hpa.maxpods }}
metrics:
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ .Values.keycloak.hpa.memory.averageUtilization }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.keycloak.hpa.cpu.averageUtilization }}
behavior:
scaleDown:
stabilizationWindowSeconds: {{ .Values.keycloak.hpa.stabilizationWindowSeconds }}
policies:
- type: Pods
value: 1
periodSeconds: {{ .Values.keycloak.hpa.periodSeconds }}
{{- end }}
</code></pre>
| Kiruthika kanagarajan |
<p>I had to stop a job in K8s by killing the pod, and now the job is not scheduled anymore.</p>
<pre><code># Import
- name: cron-xml-import-foreman
schedule: "*/7 * * * *"
args:
- /bin/sh
- -c
/var/www/bash.sh; /usr/bin/php /var/www/import-products.php -->env=prod;
resources:
request_memory: "3Gi"
request_cpu: "2"
limit_memory: "4Gi"
limit_cpu: "4"
</code></pre>
<p>Error : </p>
<blockquote>
<p>Warning FailedNeedsStart 5m34s (x7883 over 29h) cronjob-controller
Cannot determine if job needs to be started: Too many missed start
time (> 100). Set or decrease .spec.startingDeadlineSeconds or check
clock skew.</p>
</blockquote>
| Criste Horge Lucian | <p>According to the official <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/?source=post_page---------------------------#cron-job-limitations" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>If startingDeadlineSeconds is set to a large value or left unset (the
default) and if concurrencyPolicy is set to Allow, the jobs will
always run at least once.</p>
</blockquote>
<hr>
<blockquote>
<p>A CronJob is counted as missed if it has failed to be created at its
scheduled time. For example, If concurrencyPolicy is set to Forbid and
a CronJob was attempted to be scheduled when there was a previous
schedule still running, then it would count as missed.</p>
</blockquote>
<hr>
<p>And regarding the <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="nofollow noreferrer">concurrencyPolicy</a></p>
<blockquote>
<p>It specifies how to treat concurrent executions of a job that is
created by this cron job.</p>
</blockquote>
<p>Check your <code>CronJob</code> configuration and adjust those values accordingly.</p>
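<p>For illustration, a hedged sketch of where those fields go in the CronJob spec (the schedule and names are taken from your snippet, the values are only examples; use <code>batch/v1</code> on newer clusters):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cron-xml-import-foreman
spec:
  schedule: "*/7 * * * *"
  startingDeadlineSeconds: 200   # only count runs missed within the last 200 seconds
  concurrencyPolicy: Allow       # or Forbid / Replace, depending on what you need
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: import
            image: my-import-image   # placeholder image
</code></pre>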
<p>Please let me know if that helped.</p>
| WytrzymaΕy Wiktor |
<p>I am trying to connect to Apache Ignite (2.8.0) deployed on a Kubernetes cluster. The cluster itself seems fine, as I am able to exec into the pods and access it via the sqlline tool as shown below.</p>
<pre><code>root@ip-172-17-28-68:/opt/apache-ignite-2.8.0-bin# kubectl exec -it ignite-cluster-6d69696b67-8vvmm /bin/bash
bash-4.4# apache-ignite/bin/sqlline.sh --verbose=true -u jdbc:ignite:thin://127.0.0.1:10800/
issuing: !connect jdbc:ignite:thin://127.0.0.1:10800/ '' '' org.apache.ignite.IgniteJdbcThinDriver
Connecting to jdbc:ignite:thin://127.0.0.1:10800/
Connected to: Apache Ignite (version 2.8.0#20200226-sha1:341b01df)
Driver: Apache Ignite Thin JDBC Driver (version 2.8.0#20200226-sha1:341b01df)
Autocommit status: true
Transaction isolation: TRANSACTION_REPEATABLE_READ
sqlline version 1.3.0
0: jdbc:ignite:thin://127.0.0.1:10800/>
</code></pre>
<p>However if i try connecting from the external LB it gives the following error.</p>
<pre><code>root@ip-172-17-28-68:/opt/apache-ignite-2.8.0-bin/bin# ./sqlline.sh --verbose=true -u jdbc:ignite:thin://abc-123.us-east-1.elb.amazonaws.com:10800
issuing: !connect jdbc:ignite:thin://abc-123.us-east-1.elb.amazonaws.com:10800 '' '' org.apache.ignite.IgniteJdbcThinDriver
Connecting to jdbc:ignite:thin://abc-123.us-east-1.elb.amazonaws.com:10800
Error: Failed to connect to server [url=jdbc:ignite:thin://weiury734ry34ry34urt.us-east-1.elb.amazonaws.com:10800/PUBLIC] (state=08001,code=0)
java.sql.SQLException: Failed to connect to server [url=jdbc:ignite:thin://abc123.us-east-1.elb.amazonaws.com:10800/PUBLIC]
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.handleConnectExceptions(JdbcThinConnection.java:1529)
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.connectInCommonMode(JdbcThinConnection.java:1506)
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.ensureConnected(JdbcThinConnection.java:231)
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.<init>(JdbcThinConnection.java:210)
at org.apache.ignite.IgniteJdbcThinDriver.connect(IgniteJdbcThinDriver.java:154)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:156)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:204)
at sqlline.Commands.connect(Commands.java:1095)
at sqlline.Commands.connect(Commands.java:1001)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:791)
at sqlline.SqlLine.initArgs(SqlLine.java:566)
at sqlline.SqlLine.begin(SqlLine.java:643)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)
Suppressed: java.io.IOException: Failed to read incoming message (not enough data).
at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:546)
at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:524)
at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.handshake(JdbcThinTcpIo.java:266)
at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.<init>(JdbcThinTcpIo.java:212)
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.connectInCommonMode(JdbcThinConnection.java:1477)
... 17 more
Suppressed: java.io.IOException: Failed to read incoming message (not enough data).
at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:546)
at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:524)
at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.handshake(JdbcThinTcpIo.java:266)
at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.<init>(JdbcThinTcpIo.java:212)
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.connectInCommonMode(JdbcThinConnection.java:1477)
... 17 more
sqlline version 1.3.0
</code></pre>
<p>Interestingly telnet to the port via LB says connected as shown below.</p>
<pre><code>root@ip-172-17-28-68:/opt/apache-ignite-2.8.0-bin/bin# telnet abc-123.us-east-1.elb.amazonaws.com 10800
Trying 3.209.73.243...
Connected to abc-123.us-east-1.elb.amazonaws.com.
Escape character is '^]'.
Connection closed by foreign host.
</code></pre>
<p><strong>Why is it not able to connect via jdbc client even though telnet works on port 10800?</strong>
I also tried to connect via DBeaver, but the error is the same.</p>
<p>Here are the service and deployment yaml used in Kubernetes EKS Deployment.</p>
<p><strong>Ignite-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ignite
namespace: default
labels:
app: ignite
spec:
type: LoadBalancer
ports:
- name: rest
port: 8080
protocol: TCP
targetPort: 8080
- name: sql
port: 10800
protocol: TCP
targetPort: 10800
- name: thinclients
port: 10900
protocol: TCP
targetPort: 10900
selector:
app: ignite
</code></pre>
<p><strong>Ignite-deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ignite-cluster
namespace: default
spec:
selector:
matchLabels:
app: ignite
replicas: 2
template:
metadata:
labels:
app: ignite
spec:
serviceAccount: ignite
containers:
- name: ignite-node
image: apacheignite/ignite:2.8.0
env:
- name: OPTION_LIBS
value: ignite-kubernetes
- name: CONFIG_URI
value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml
ports:
- containerPort: 11211 # REST port number.
- containerPort: 47100 # communication SPI port number.
- containerPort: 47500 # discovery SPI port number.
- containerPort: 49112 # JMX port number.
- containerPort: 10800 # SQL port number.
</code></pre>
| Sandeep Shanbhag | <p>Check the service itself: <code>kubectl describe services my-service</code></p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#finding-your-ip-address" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#finding-your-ip-address</a></p>
<p>make sure an external IP is provided and works.</p>
<p>Restrict K8 to one pod and re-try.</p>
<p>Set the externalTrafficPolicy and the pod selector</p>
<pre><code>externalTrafficPolicy: Local
selector:
statefulset.kubernetes.io/pod-name: pod-name
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip</a></p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label</a></p>
<p>A thin client needs to communicate to a specific pod/server</p>
<p>Check the server logs to make sure that data is flowing.</p>
| Alex K |
<p>I built an Ignite cluster in my Kubernetes VM. I want to configure my Ignite pods to work with TLS without certificate validation.
I created a keystore manually in each pod; these are binary files. How can I have them created as part of the chart deployment?
Should I create the files beforehand and add them to a ConfigMap, or run a shell command during the build to create them?
Can you please assist? I'm new to Kubernetes and SSL/TLS.</p>
| NoamiA | <p>You need to configure your node to use appropriate ssl/tls settings per this guide: <a href="https://ignite.apache.org/docs/latest/security/ssl-tls" rel="nofollow noreferrer">https://ignite.apache.org/docs/latest/security/ssl-tls</a></p>
<p>docs for using a configmap to create an ignite configuration file for a node: <a href="https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-configmap-for-node-configuration-file" rel="nofollow noreferrer">https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-configmap-for-node-configuration-file</a></p>
<p>You could set up the ssl/tls files as configmaps, or alternatively, use a stateful set and create a persistentvolume to hold the files. <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a></p>
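<p>As a rough sketch (names and paths are placeholders), the binary keystore/truststore files could be packed into a Secret with <code>kubectl create secret generic ignite-keystore --from-file=keystore.jks --from-file=truststore.jks</code> (or a ConfigMap using <code>binaryData</code>) and then mounted into each pod, so the Ignite XML can point its SSL context factory at the mounted paths:</p>
<pre><code>        volumeMounts:
        - name: ignite-keystore
          mountPath: /opt/ignite/keystore   # e.g. /opt/ignite/keystore/keystore.jks
          readOnly: true
      volumes:
      - name: ignite-keystore
        secret:
          secretName: ignite-keystore
</code></pre>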
<p>See the <a href="https://i.stack.imgur.com/N7qvx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N7qvx.png" alt="![enter image description here" /></a> tab on <a href="https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-pod-configuration" rel="nofollow noreferrer">https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-pod-configuration</a> for instructions on how to mount persistent volumes for an Ignite Stateful set.</p>
| Alex K |
<p>Is there a way to perform a </p>
<pre><code>kubectl run --image=busybox mydeployment sleep 100
</code></pre>
<p>but selecting only nodes that, for example, were created after a specific timestamp?</p>
| pkaramol | <p>In order to select the node, you can edit the json of the created resource using the <code>--overrides</code> flag.</p>
<p>For instance, this creates a pod on the node <code>name</code>:</p>
<pre><code>kubectl run nginx --generator=run-pod/v1 --image=nginx --overrides='{ "spec": { "nodeName": "name" } }'
</code></pre>
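<p>If the goal is to target nodes by some property (for example, nodes created recently) rather than by name, one option — sketched here under the assumption that you label those nodes yourself first — is a <code>nodeSelector</code> in the override:</p>
<pre><code># label the nodes you consider "new" (example label key/value)
kubectl label node my-new-node node-age=new

# run the pod only on nodes carrying that label
kubectl run nginx --generator=run-pod/v1 --image=nginx \
  --overrides='{ "spec": { "nodeSelector": { "node-age": "new" } } }'
</code></pre>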
| Alassane Ndiaye |
<p>I have set up an ingress for an application but want to whitelist my IP address. So I created this Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/whitelist-source-range: ${MY_IP}/32
name: ${INGRESS_NAME}
spec:
rules:
- host: ${DNS_NAME}
http:
paths:
- backend:
serviceName: ${SVC_NAME}
servicePort: ${SVC_PORT}
tls:
- hosts:
- ${DNS_NAME}
secretName: tls-secret
</code></pre>
<p>But when I try to access it I get a 403 Forbidden, and in the nginx logs I see a client IP, but it is from one of the cluster nodes and not my home IP.</p>
<p>I also created a configmap with this configuration:</p>
<pre><code>data:
use-forwarded-headers: "true"
</code></pre>
<p>In the nginx.conf in the container I can see that this has been correctly passed on/configured, but I still get a 403 Forbidden, still with only the client IP from the cluster node.</p>
<p>I am running on an AKS cluster and the nginx ingress controller is behind an Azure loadbalancer. The nginx ingress controller svc is exposed as type loadbalancer and locks in on the nodeport opened by the svc.</p>
<p>Do I need to configure something else within Nginx?</p>
| bramvdk | <p>If you've installed nginx-ingress with the <a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="noreferrer">Helm chart</a>, you can simply configure your <code>values.yaml</code> file with <code>controller.service.externalTrafficPolicy: Local</code>, which I believe will apply to all of your Services. Otherwise, you can configure specific Services with <code>service.spec.externalTrafficPolicy: Local</code> to achieve the same effect on those specific Services.</p>
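<p>For example — a minimal sketch; the exact key path can differ between chart versions and the service name depends on your release:</p>
<pre><code># values.yaml for the nginx-ingress Helm chart
controller:
  service:
    externalTrafficPolicy: Local
</code></pre>
<p>An already deployed controller Service can likewise be patched with <code>kubectl patch svc my-release-nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'</code> (the service name depends on your release).</p>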
<p>Here are some resources to further your understanding:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="noreferrer">k8s docs - Preserving the client source IP</a></li>
<li><a href="https://kubernetes.io/docs/tutorials/services/source-ip" rel="noreferrer">k8s docs - Using Source IP</a></li>
</ul>
| Jackie Luc |
<p>I need help.</p>
<p>I have to install Exchange Listener following this instruction: <a href="https://community.terrasoft.ru/articles/1-realnyy-primer-po-razvertyvaniyu-servisa-exchange-listener-s-ispolzovaniem-kubernetes" rel="nofollow noreferrer">https://community.terrasoft.ru/articles/1-realnyy-primer-po-razvertyvaniyu-servisa-exchange-listener-s-ispolzovaniem-kubernetes</a>.
I did this a few months ago, but now I see the following error:</p>
<pre><code>adminka@l-test:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
</code></pre>
<p>What do I need to do?</p>
<pre><code>adminka@l-test:~$ kubectl -n kube-system get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcd69978-mrb58         1/1     Running   0          2d12h
kube-system   coredns-78fcd69978-pwp2n         1/1     Running   0          2d12h
kube-system   etcd-l-test                      1/1     Running   0          2d12h
kube-system   kube-apiserver-l-test            1/1     Running   0          2d12h
kube-system   kube-controller-manager-l-test   1/1     Running   0          2d12h
kube-system   kube-flannel-ds-kx9sm            1/1     Running   0          2d12h
kube-system   kube-proxy-v2f9q                 1/1     Running   0          2d12h
kube-system   kube-scheduler-l-test            1/1     Running   0          2d12h
</code></pre>
<pre><code>adminka@l-test:~$ kubectl describe pod kube-flannel-ds-kx9sm -n kube-system
Name:                 kube-flannel-ds-kx9sm
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 l-test/192.168.0.71
Start Time:           Sat, 18 Sep 2021 00:36:50 +0300
Labels:               app=flannel
                      controller-revision-hash=7fb8b954f9
                      pod-template-generation=1
                      tier=node
Annotations:          <none>
Status:               Running
IP:                   192.168.0.71
IPs:
  IP:           192.168.0.71
Controlled By:  DaemonSet/kube-flannel-ds
Init Containers:
  install-cni:
    Container ID:  docker://01da25d6de8d2b679c9035d25fcd10de432388875fd90b23756c6e1b8392af21
    Image:         quay.io/coreos/flannel:v0.14.0
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:4a330b2f2e74046e493b2edc30d61fdebbdddaaedcb32d62736f25be8d3c64d5
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 18 Sep 2021 00:36:56 +0300
      Finished:     Sat, 18 Sep 2021 00:36:56 +0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-28x6l (ro)
Containers:
  kube-flannel:
    Container ID:  docker://b109519d917ceb9d05c19421e5e65ece948977ce6b98d1e638f005250ddc9814
    Image:         quay.io/coreos/flannel:v0.14.0
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:4a330b2f2e74046e493b2edc30d61fdebbdddaaedcb32d62736f25be8d3c64d5
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Running
      Started:      Sat, 18 Sep 2021 00:36:57 +0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:       kube-flannel-ds-kx9sm (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-28x6l (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  kube-api-access-28x6l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      <none>
adminka@l-test:~$
</code></pre>
| Jekaterina Aleksandrenkova | <pre><code>rbac.authorization.k8s.io/v1beta1
</code></pre>
<p><code>rbac.authorization.k8s.io/v1beta1</code> is no longer served by your cluster version (it was removed in Kubernetes 1.22). Use <code>rbac.authorization.k8s.io/v1</code> instead.</p>
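<p>For example, the same RBAC objects only need the served API version (a sketch — the resource names follow the flannel RBAC manifest; applying an up-to-date flannel manifest achieves the same result):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: flannel
# ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flannel
# ...
</code></pre>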
| Dani Alderete |
<p>Trying to install kubernetes on virtualbox using ansible:</p>
<p>in master-playbook.yml</p>
<pre><code> - name: Install comodo cert
copy: src=BCPSG.pem dest=/etc/ssl/certs/ca-certificates.crt
- name: Update cert index
shell: /usr/sbin/update-ca-certificates
- name: Adding apt repository for Kubernetes
apt_repository:
repo: deb https://packages.cloud.google.com/apt/dists/ kubernetes-xenial main
state: present
filename: kubernetes.list
validate_certs: False
</code></pre>
<p>now, Vagrantfile calls the playbook:</p>
<pre><code>config.vm.define "k8s-master" do |master|
master.vm.box = IMAGE_NAME
master.vm.network "private_network", ip: "192.168.50.10"
master.vm.hostname = "k8s-master"
master.vm.provision "ansible" do |ansible|
ansible.playbook = "kubernetes-setup/master-playbook.yml"
end
end
</code></pre>
<p>but i am getting error:</p>
<blockquote>
<pre><code>TASK [Adding apt repository for Kubernetes] ************************************
fatal: [k8s-master]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 127.0.0.1 closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n
  File \"/home/vagrant/.ansible/tmp/ansible-tmp-1555907987.70663-229510485563848/AnsiballZ_apt_repository.py\", line 113, in <module>\r\n
    _ansiballz_main()\r\n
  File \"/home/vagrant/.ansible/tmp/ansible-tmp-1555907987.70663-229510485563848/AnsiballZ_apt_repository.py\", line 105, in _ansiballz_main\r\n
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n
  File \"/home/vagrant/.ansible/tmp/ansible-tmp-1555907987.70663-229510485563848/AnsiballZ_apt_repository.py\", line 48, in invoke_module\r\n
    imp.load_module('__main__', mod, module, MOD_DESC)\r\n
  File \"/tmp/ansible_apt_repository_payload_GXYAmU/__main__.py\", line 550, in <module>\r\n
  File \"/tmp/ansible_apt_repository_payload_GXYAmU/__main__.py\", line 542, in main\r\n
  File \"/usr/lib/python2.7/dist-packages/apt/cache.py\", line 487, in update\r\n
    raise FetchFailedException(e)\r\n
apt.cache.FetchFailedException: W:The repository 'https://packages.cloud.google.com/apt/dists kubernetes-xenial Release' does not have a Release file.,
W:Data from such a repository can't be authenticated and is therefore potentially dangerous to use.,
W:See apt-secure(8) manpage for repository creation and user configuration details.,
E:Failed to fetch https://packages.cloud.google.com/apt/dists/dists/kubernetes-xenial/main/binary-amd64/Packages  server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none,
E:Some index files failed to download. They have been ignored, or old ones used instead.\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
</code></pre>
</blockquote>
| kamal | <p>Run the command below and try again:</p>
<p><code># git config --global http.sslverify false</code></p>
| Marcos Henrique |
<p>I have some internal services (Logging, Monitoring, etc) exposed via nginx-ingress and protected via oauth2-proxy and some identity manager (Okta) behind. We use 2fa for additional security for our users. </p>
<p>This works great for user accounts. It does not work for other systems like external monitoring as we can not make a request with a token or basic auth credentials.</p>
<p>Is there any known solution to enable multiple authentication types in an ingress resource? </p>
<p>Everything I found so far is specific for one authentication process and trying to add basic auth as well did not work. </p>
<p>Current ingress</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
certmanager.k8s.io/cluster-issuer: cert-manager-extra-issuer
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-signin: https://sso-proxy/oauth2/start?rd=https://$host$request_uri
nginx.ingress.kubernetes.io/auth-url: https://sso-proxy/oauth2/auth
</code></pre>
| stvnwrgs | <p>This is simply not an advisable solution. You cannot use multiple authentication types in a single Ingress resource. </p>
<p>The better way to deal with it would be to create separate Ingresses for different authentication types. </p>
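<p>As a rough sketch (hostnames, service names and the basic-auth secret are placeholders), that could be one Ingress keeping the oauth2-proxy annotations for human users and a second Ingress on a different host using the nginx basic-auth annotations for external monitoring systems:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-oauth
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-signin: https://sso-proxy/oauth2/start?rd=https://$host$request_uri
    nginx.ingress.kubernetes.io/auth-url: https://sso-proxy/oauth2/auth
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-basic-auth
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth      # secret created with htpasswd
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: grafana-external.example.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
</code></pre>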
<p>I hope it helps. </p>
| WytrzymaΕy Wiktor |
<p>I am struggling with the following issues. I have 2 services running. I am using a wildcard for handling subdomains. See the example conf below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
kubernetes.io/ingress.class: nginx
kubernetes.io/ingress.global-static-ip-name: web-static-ip
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/server-alias: www.foo.bar
nginx.ingress.kubernetes.io/use-regex: "true"
name: foo-bar-ingress
namespace: test
spec:
rules:
- host: '*.foo.bar'
http:
paths:
- backend:
serviceName: legacy-service
servicePort: 80
path: /(.*)
pathType: ImplementationSpecific
- host: foo.bar
http:
paths:
- backend:
serviceName: new-service
servicePort: 8080
path: /(.*)
pathType: ImplementationSpecific
</code></pre>
<p>Using the app so that abc.foo.bar -> legacy-service and foo.bar -> new-service works perfectly fine. However, when I access the app with the www prefix, it falls under the wildcard subdomain path, meaning <a href="http://www.foo.bar" rel="nofollow noreferrer">www.foo.bar</a> goes to legacy-service, which is what I want to avoid. AFAIU this "www" is caught by the asterisk regexp and routed the wrong way. I would like it to go to new-service.</p>
<p>Is there any way I can achieve this with the nginx ingress configuration?</p>
| Adam SoliΕski | <p>Redirecting requests from <code>www.foo.bar</code> can also be achieved by explicitly specifying that hostname. Please note that the order of the hosts does matter, as they are translated into the Envoy filter chain. Therefore, the wildcard host should be the last host.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
kubernetes.io/ingress.class: nginx
kubernetes.io/ingress.global-static-ip-name: web-static-ip
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/server-alias: www.foo.bar
nginx.ingress.kubernetes.io/use-regex: "true"
name: foo-bar-ingress
namespace: test
spec:
rules:
- host: 'foo.bar'
http:
paths:
- backend:
serviceName: new-service
servicePort: 8080
path: /(.*)
pathType: ImplementationSpecific
- host: 'www.foo.bar'
http:
paths:
- backend:
serviceName: new-service
servicePort: 8080
path: /(.*)
pathType: ImplementationSpecific
- host: '*.foo.bar'
http:
paths:
- backend:
serviceName: legacy-service
servicePort: 80
path: /(.*)
pathType: ImplementationSpecific
</code></pre>
| Jonas Breuer |
<p>I am trying to change priority of an existing Kubernetes Pod using 'patch' command, but it returns error saying that this is not one of the fields that can be modified. I can patch the priority in the Deployment spec, but it would cause the Pod to be recreated (following the defined update strategy). </p>
<p>The basic idea is to implement a mechanism conceptually similar to nice levels (for my application), so that certain Pods can be de-prioritized based on certain conditions (by my controller), and preempted by the default scheduler in case of resource congestion. But I don't want them to be restarted if there is no congestion.</p>
<p>Is there a way around it, or there is something inherent in the way scheduler works that would prevent something like this from working properly?</p>
| Alex Glikson | <p>Priority values are applied to a pod based on the priority value of the PriorityClass assigned to their deployment at the time that the pod is scheduled. Any changes made to the PriorityClass will not be applied to pods which have already been scheduled, so you would have to redeploy the pod for the priority to take effect anyway.</p>
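<p>For illustration only (names and the numeric value are arbitrary), priority is normally assigned by referencing a PriorityClass from the pod template, and it only takes effect for pods scheduled after that reference exists:</p>
<pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority          # example name
value: 1000
globalDefault: false
description: "Pods that may be preempted under resource congestion"
---
# in the Deployment's pod template:
# spec:
#   template:
#     spec:
#       priorityClassName: low-priority
</code></pre>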
| benraven |
<p>I have a Spring Boot application and I am trying to load additional configuration from a volume-mounted location, /tmp/secret-config.yaml, and I am getting the error below:
java.lang.IllegalStateException: Unable to load config data from '/tmp/secret-config.yaml'</p>
<p>I do not want to use configmap or secrets for this. It's a simple json file which I am trying to load.</p>
<p>I am trying to pass it like this from a shell script:
<code>java -jar /opt/app.jar --spring.config.additional-location=/tmp/secret-config.yaml</code></p>
<p>Can anyone please help ?</p>
| mayank agarwal | <p>I would suggest verifying the volume mount of the file inside your pod.
You can do so with the following commands:</p>
<pre><code>kubectl exec -it <podName> -- cat /tmp/secret-config.yaml
kubectl exec -it <podName> -- ls /tmp
</code></pre>
<p>In addition, the volume mount configuration in your yaml file would be interesting. Did you specify a <code>mountPath </code> for the <code>volumeMount</code> in your pod configuration?
You can also check the mountPath with the following command:</p>
<pre><code># for a pod with a single container and only one volumeMount
kubectl get pods <podName> -o jsonpath='{.spec.containers[0].volumeMounts[0].mountPath}'
</code></pre>
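<p>If the file is supposed to come from a Secret (or a ConfigMap), a minimal sketch of the relevant pod spec fragment could look like this — the image and secret names are placeholders:</p>
<pre><code>      containers:
      - name: app
        image: my-app:latest                  # placeholder
        volumeMounts:
        - name: secret-config
          mountPath: /tmp/secret-config.yaml  # file path the application expects
          subPath: secret-config.yaml         # mount only this key as a single file
          readOnly: true
      volumes:
      - name: secret-config
        secret:
          secretName: secret-config           # placeholder Secret containing the file
</code></pre>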
| Jonas Breuer |
<p>I have installed Prometheus-adapter along with the default metrics-server that comes with k3s securely on port 443.</p>
<p>Unfortunately, I get no resources when I query custom.metrics.k8s.io</p>
<pre><code>$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": []
}
</code></pre>
<p>When I look at the logs of Prometheus-adapter I get <code>unable to update list of all metrics: unable to fetch metrics for query ...: x509: certificate is valid for localhost, localhost, not metrics-server.kube-system</code></p>
<p>How can I resolve this issue?</p>
| realsarm | <p>To solve this issue, I had to create a <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">separate certificate</a> for both metrics-server and the adapter. The adapter also has an open <a href="https://github.com/kubernetes-sigs/prometheus-adapter/issues/169" rel="nofollow noreferrer">issue</a> about adding the capability to ignore cert validation, which wasn't merged.</p>
<p>For metrics-server and cert request I used the following:</p>
<pre><code>{
"hosts": [
"prometheus-adapter",
"prometheus-adapter.monitoring",
"prometheus-adapter.monitoring.svc",
"prometheus-adapter.monitoring.pod",
"prometheus-adapter.monitoring.svc.cluster.local",
"prometheus-adapter.monitoring.pod.cluster.local",
"<pod ip>",
"<service ip>"
],
"CN": "prometheus-adapter.monitoring.pod.cluster.local",
"key": {
"algo": "ecdsa",
"size": 256
},
}
</code></pre>
<pre><code>{
"hosts": [
"metrics-server",
"metrics-server.kube-system",
"metrics-server.kube-system.svc",
"metrics-server.kube-system.pod",
"metrics-server.kube-system.svc.cluster.local",
"metrics-server.kube-system.pod.cluster.local",
"<service ip>",
"<pod ip>"
],
"CN": "metrics-server.kube-system",
"key": {
"algo": "ecdsa",
"size": 256
},
}
</code></pre>
<p>For the CA, you can create your own certificate authority or use the Kubernetes signers as indicated <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers" rel="nofollow noreferrer">here</a>.
The only point worth noting is that if you use either of the signers, you should mount the CA bundle into your deployments yourself.</p>
<p>Finally, mount tls keys and ca bundle to your deployment.</p>
<pre><code> extraArguments:
- --tls-cert-file=/var/run/serving-cert/tls.crt
- --tls-private-key-file=/var/run/serving-cert/tls.key
- --client-ca-file=/etc/ssl/certs/ca.crt
</code></pre>
| realsarm |
<hr />
<h1>Background</h1>
<p>I'm attempting to configure a cluster via <code>kubeadm</code>. I normally create the (test) cluster via:</p>
<pre><code>sudo kubeadm init --pod-network-cidr 10.244.0.0/16
</code></pre>
<p>This parameter appears to eventually find its way into the static pod definition for the controllerManager (<code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code>):</p>
<pre><code>- --cluster-cidr=10.244.0.0/16
</code></pre>
<p>Larger portions of <code>sudo vim /etc/kubernetes/manifests/kube-controller-manager.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- ...
- --cluster-cidr=10.244.0.0/16
</code></pre>
<hr />
<h1>Question 1:</h1>
<p>How can I pass this setting, <code>--pod-network-cidr=10.244.0.0/16</code> via a config file, i.e. <code>kubeadm init --config my_config.yaml</code>? I found a <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/admin/kubeadm/" rel="nofollow noreferrer">sample config file template on an unofficial K8S documentation wiki</a>, but I can't seem to find any documentation at all that maps these command-line arguments to <code>kubeadm</code> to their <code>kubeadm_config.yaml</code> equivalents.</p>
<p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/" rel="nofollow noreferrer">There's also a document showing how I can create a baseline static pod definition/<code>yaml</code></a> via <code>kubeadm config print init-defaults > kubeadm_config.yaml</code>, but again, no documentation that shows how to set <code>pod-network-cidr</code> by modifying and applying this <code>yaml</code> file (i.e. <code>kubeadm upgrade -f kubeadm_config.yaml</code>).</p>
<p><strong>Sample output of <code>kubeadm config view</code>:</strong></p>
<pre><code>apiServer:
extraArgs:
authorization-mode: Node,RBAC
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.4
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler: {}
</code></pre>
<hr />
<h1>Question 2:</h1>
<p>How can I do the above, but pass something like <code>--experimental-cluster-signing-duration=0h30m0s</code>? I'd like to experiment with tests involving manually/automatically renewing all <code>kubeadm</code>-related certs.</p>
<hr />
| Hunter | <p><strong>1.</strong> According to the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p>Itβs possible to configure <code>kubeadm init</code> with a configuration file
instead of command line flags, and some more advanced features may
only be available as configuration file options. This file is passed
with the <code>--config</code> option.</p>
<p>The default configuration can be printed out using the <code>kubeadm config print</code> <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/" rel="nofollow noreferrer">command</a>.</p>
<p>It is recommended that you migrate your old v1beta1 configuration to v1beta2 using the <code>kubeadm config migrate</code> <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/" rel="nofollow noreferrer">command</a>.</p>
<p>During <code>kubeadm init</code>, kubeadm uploads the ClusterConfiguration object
to your cluster in a ConfigMap called kubeadm-config in the
kube-system namespace. This configuration is then read during <code>kubeadm
join</code>, <code>kubeadm reset</code> and <code>kubeadm upgrade</code>. To view this ConfigMap
call <code>kubeadm config view</code>.</p>
<p>You can use <code>kubeadm config print</code> to print the default configuration
and <code>kubeadm config migrate</code> to convert your old configuration files
to a newer version. <code>kubeadm config images list</code> and <code>kubeadm config images pull</code> can be used to list and pull the images that kubeadm
requires.</p>
</blockquote>
<p>Subnets are defined by the <code>--pod-network-cidr</code> argument in kubeadm OR by a config file such as the example below:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
api:
advertiseAddress: 0.0.0.0
bindPort: 6443
kubernetesVersion: v1.12.1
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
networking:
podSubnet: 192.168.0.0/24
</code></pre>
<p><strong>2.</strong> I was not able to find anything like this in the official documentation or in other sources.</p>
<p>You can instead pass that kind of flag to the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">kube-controller-manager</a> directly.</p>
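<p>If the flag is meant for the kube-controller-manager that kubeadm deploys, it can also be set through the same kubeadm config file via <code>controllerManager.extraArgs</code>. A sketch based on the v1beta2 schema shown in your <code>kubeadm config view</code> output (treat the duration value as an example):</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.4
networking:
  podSubnet: 10.244.0.0/16            # answers question 1 as well
controllerManager:
  extraArgs:
    experimental-cluster-signing-duration: "0h30m0s"
</code></pre>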
<p>Please let me know if that helped.</p>
| WytrzymaΕy Wiktor |
<p>I am using kubernetes java client libraries in order to communicate with my kubernetes server. </p>
<p>My question: is there any way to programmatically get the namespace of the running pod from inside which the call to Kubernetes is sent?</p>
<p>I heard that there is a file located here - <strong>/var/run/secrets/kubernetes.io/serviceaccount/namespace</strong></p>
<p>However, I wanted to know if there is any way to get it using the Java client without reading this file.</p>
<p>I have searched the documentation but found nothing related to this.</p>
| liotur | <p>If you set the environment variable below in the pod definition file, the namespace of the pod will be available as an environment variable, which your application can then read.</p>
<pre><code>env:
  - name: MYPOD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
</code></pre>
| Subramanian Manickam |
<p>I am having some issues when trying to launch Spark jobs via the Kubernetes scheduler.</p>
<p>I want all my driver/executor pods to be spawned onto nodes which has a certain taint. Because of this, I want to specify tolerations which will be directly injected into the pods configuration files. Currently, there is no default way directly from the <code>spark-submit</code> command</p>
<p>According to <a href="https://github.com/palantir/k8s-spark-scheduler#usage" rel="nofollow noreferrer">this</a> and <a href="https://github.com/apache/spark/blob/master/docs/running-on-kubernetes.md#pod-template" rel="nofollow noreferrer">this</a>, a user should be able to specify a pod template which can be set with the following parameters: <code>spark.kubernetes.driver.podTemplateFile</code> and <code>spark.kubernetes.executor.podTemplateFile</code>.</p>
<p>I tried specifying those parameters in the <code>spark-submit</code> command with the following file:</p>
<p><code>pod_template.template</code></p>
<pre><code>apiVersion: v1
kind: Pod
spec:
tolerations:
- effect: NoSchedule
key: dedicated
operator: Equal
value: test
</code></pre>
<p>However, this toleration never gets added to the launched driver pod. Is currently a way to solve this?</p>
<p>For reference, here is the full spark-submit command:</p>
<pre><code>/opt/spark/bin/spark-submit --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.persistent.options.claimName=pvc-storage --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.persistent.mount.subPath=test-stage1/spark --conf spark.executor.memory=1G --conf spark.executor.instances=1 --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.persistent.mount.subPath=test-stage1/spark --conf spark.kubernetes.executor.limit.cores=1 --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark --conf spark.kubernetes.namespace=test-stage1 --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.persistent.mount.path=/persistent --conf spark.kubernetes.driver.limit.memory=3G --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.persistent.mount.path=/persistent --conf spark.submit.deployMode=cluster --conf spark.kubernetes.container.image=<SPARK IMAGE> --conf spark.master=k8s://https://kubernetes.default.svc --conf spark.kubernetes.driver.limit.cores=1 --conf spark.executor.cores=1 --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.persistent.options.claimName=pvc-storage --conf spark.kubernetes.container.image.pullPolicy=Always --conf spark.kubernetes.executor.podTemplateFile=//opt/pod_template.template --conf spark.kubernetes.driver.podTemplateFile=//opt/pod_template.template local:///opt/spark/examples/src/main/python/pi.py 100
</code></pre>
| toerq | <p>I have checked various documentation and found a few things that might be misconfigured here:</p>
<ol>
<li>Your <code>pod_template.template</code> should have the <code>.yaml</code> extension at the end</li>
<li>You did not specify <code>spark.kubernetes.driver.pod.name</code> in your <code>spark-submit</code> command, nor in the <code>pod_template.template.yaml</code> in the form of <code>metadata</code></li>
<li>You have used a double <code>//</code> when specifying the path for <code>spark.kubernetes.driver.podTemplateFile=</code> and <code>spark.kubernetes.executor.podTemplateFile=</code></li>
<li>You should put all your toleration values in <code>""</code>, for example: <code>effect: "NoSchedule"</code></li>
</ol>
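<p>Putting the points above together, a minimal sketch of the corrected template (saved as <code>pod_template.template.yaml</code>) could look like this; the metadata name is just a placeholder:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: spark-driver   # hypothetical name, see point 2
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "test"
    effect: "NoSchedule"
</code></pre>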
<p>Please let me know if that helped.</p>
| Wytrzymały Wiktor |
<p>I have used kubectl proxy to access the Kubernetes dashboard, but now I want to make it accessible to my co-worker.
Do I need a load balancer, or is there a more efficient way?</p>
| frostarcher | <p>You should take a look at the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource, you can use those to expose multiple Services under the same IP address.</p>
<p>Another option is a Service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a>. You can use it to expose a Service on a port of each Node in the cluster.</p>
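<p>As an illustration, a minimal sketch of a NodePort Service for the dashboard could look like the snippet below; the namespace, selector and ports depend on how the dashboard was installed, so treat them as assumptions:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport
  namespace: kubernetes-dashboard   # assumed namespace
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard   # assumed label of the dashboard pods
  ports:
  - port: 443
    targetPort: 8443                # assumed container port
    nodePort: 30443
</code></pre>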
| Alassane Ndiaye |
<p>I'm new to GKE and I'm testing some scaling features.
I started with a simple example: 1 pod inside 1 pool with 1 node.</p>
<p>When I scaled the pool to have 2 nodes and the pod to replicas=2, to my surprise the 2 pods were allocated on the same node.</p>
<p>Is it a problem for redundancy?
Can I ensure that my replicas are spread across all nodes?</p>
| Marcelo Dias | <p>The place where Pods are scheduled is decided by the <a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/" rel="nofollow noreferrer">Kubernetes scheduler</a>. As mentioned in the documentation, the scheduler first finds eligible nodes in a filtering phase. Following that, the scheduler finds the most adequate node using scoring criteria mentioned in <a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/#scoring" rel="nofollow noreferrer">this section</a>. Among other factors, image locality and fitting Pods into as few nodes as possible could be the reason both Pods were allocated on the same node.</p>
<blockquote>
<p>Is it a problem for redundancy?
Can I ensure that my replicas are spread across all nodes?</p>
</blockquote>
<p>This can be an issue for redundancy. If one node goes down, your entire service becomes unavailable (if you use resources like Deployments and such, the Pods will eventually be rescheduled on the other node though).</p>
<p>In order to favor Pod spread among nodes, you can customize the scheduler or use mechanisms such as <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">affinity and anti-affinity</a>.</p>
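<p>For example, a minimal sketch of a <code>podAntiAffinity</code> rule that keeps replicas on different nodes, assuming your Pods carry an <code>app: my-app</code> label, could be added to the Pod template like this:</p>
<pre><code>spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: my-app            # assumed pod label
        topologyKey: kubernetes.io/hostname
</code></pre>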
| Alassane Ndiaye |
<p>I use the vanilla Open Policy Agent as a deployment on Kubernetes for handling admission webhooks.</p>
<p>The behavior of multiple policies evaluation is not clear to me, see this example:</p>
<pre><code>## policy-1.rego
package kubernetes.admission
check_namespace {
# evaluate to true
namespaces := {"namespace1"}
namespaces[input.request.namespace]
}
check_user {
# evaluate to false
users := {"user1"}
users[input.request.userInfo.username]
}
allow["yes - user1 and namespace1"] {
check_namespace
check_user
}
</code></pre>
<p>.</p>
<pre><code>## policy-2.rego
package kubernetes.admission
check_namespace {
# evaluate to false
namespaces := {"namespace2"}
namespaces[input.request.namespace]
}
check_user {
# evaluate to true
users := {"user2"}
users[input.request.userInfo.username]
}
allow["yes - user2 and namespace12] {
check_namespace
check_user
}
</code></pre>
<p>.</p>
<pre><code>## main.rego
package system
import data.kubernetes.admission
main = {
"apiVersion": "admission.k8s.io/v1",
"kind": "AdmissionReview",
"response": response,
}
default uid = ""
uid = input.request.uid
response = {
"allowed": true,
"uid": uid,
} {
reason = concat(", ", admission.allow)
reason != ""
}
else = {"allowed": false, "uid": uid}
</code></pre>
<p>.</p>
<pre><code> ## example input
{
"apiVersion": "admission.k8s.io/v1beta1",
"kind": "AdmissionReview",
"request": {
"namespace": "namespace1",
"userInfo": {
"username": "user2"
}
}
}
</code></pre>
<p>.</p>
<pre><code>## Results
"allow": [
"yes - user1 and namespace1",
"yes - user2 and namespace2"
]
</code></pre>
<p>It seems that all of my policies are being evaluated as just one flat file, but I would expect each policy to be evaluated independently from the others.</p>
<p>What am I missing here?</p>
| tomikos | <p><em>Files</em> don't really mean anything to OPA, but packages do. Since both of your policies are defined in the <code>kubernetes.admission</code> module, they'll essentially be appended together as one. This works in your case only because one of <code>check_user</code> and <code>check_namespace</code>, respectively, evaluates to undefined given your input. If they hadn't, you would see an error message about a conflict, since complete rules can't evaluate to different results (i.e. <code>check_user</code> and <code>check_namespace</code> can't be both <code>true</code> <em>and</em> <code>false</code>).</p>
<p>If you instead use a separate package per policy, like, say, <code>kubernetes.admission.policy1</code> and <code>kubernetes.admission.policy2</code>, this would not be a concern. You'd need to update your main policy to collect an aggregate of the <code>allow</code> rules from all of your policies though. Something like:</p>
<pre><code>reason = concat(", ", [a | a := data.kubernetes.admission[policy].allow[_]])
</code></pre>
<p>This would iterate over all the sub-packages in <code>kubernetes.admission</code> and collect the <code>allow</code> rule result from each. This pattern is called dynamic policy composition, and I wrote a longer text on the topic <a href="https://blog.styra.com/blog/dynamic-policy-composition-for-opa" rel="nofollow noreferrer">here</a>.</p>
<p>(As a side note, you probably want to aggregate <strong>deny</strong> rules rather than allow. As far as I know, clients like kubectl won't print out the reason from the response unless it's actually denied... and it's generally less useful to know why something succeeded rather than failed. You'll still have the OPA <a href="https://www.openpolicyagent.org/docs/latest/management-decision-logs/" rel="nofollow noreferrer">decision logs</a> to consult if you want to know more about why a request succeeded or failed later).</p>
| Devoops |
<p>I would like to check that every Service in a rendered Helm chart has <em>exactly</em> one matching Pod.</p>
<p>A Pod to service association exists when every entry specified in a Services <code>spec.selector</code> object is reflected in a Pods <code>metadata.labels</code> object (which can have additional keys).</p>
<p>The following policy is tested using Conftest by running <code>conftest test --combine {YAML_FILE}</code> and checks that every Service has <em>at least</em> one matching Pod. I'm completely unsure how to transform this so that it checks for <em>exactly</em> one matching Pod.</p>
<pre><code>package main
import future.keywords.every
in_set(e, s) { s[e] }
get_pod(resource) := pod {
in_set(resource.kind, {"Deployment", "StatefulSet", "Job"})
pod := resource.spec.template
}
# ensure that every service has at least one matching pod
# TODO: ensure that every service has exactly one matching pod
deny_service_without_matching_pod[msg] {
service := input[_].contents
service.kind == "Service"
selector := object.get(service, ["spec", "selector"], {})
pods := { p | p := get_pod(input[_].contents) }
every pod in pods {
labels := object.get(pod, ["metadata", "labels"], {})
matches := { key | some key; labels[key] == selector[key] }
count(matches) != count(selector)
}
msg := sprintf("service %s has no matching pod", [service.metadata.name])
}
</code></pre>
<p>Marginal note: The <code>get_pod</code> function doesn't retrieve all PodTemplates that can possibly occur in a Helm chart. Other checks are in place to keep the Kubernetes API-surface of the Helm chart small - so in this case, Pods can only occur in Deployment, StatefulSet and Job.</p>
<p>Maybe there are rego experts here that can chime in and help. That would be very much appreciated! 🙂</p>
| Codepunkt | <p>Since there's no sample data provided, this is untested code. It <em>should</em> work though :)</p>
<pre><code>package main
import future.keywords.in
pods := { pod |
resource := input[_].contents
resource.kind in {"Deployment", "StatefulSet", "Job"}
pod := resource.spec.template
}
services := { service |
service := input[_].contents
service.kind == "Service"
}
pods_matching_selector(selector) := { pod |
selector != {}
some pod in pods
labels := pod.metadata.labels
some key
labels[key] == selector[key]
}
deny_service_without_one_matching_pod[msg] {
some service in services
selector := object.get(service, ["spec", "selector"], {})
matching_pods := count(pods_matching_selector(selector))
matching_pods != 1
msg := sprintf(
"service %s has %d matching pods, must have exactly one",
[service.metadata.name, matching_pods]
)
}
</code></pre>
| Devoops |
<p>In the kubectl Cheat Sheet (<a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a>), there are 3 ways to modify resources. You can either update, patch or edit.</p>
<p>What are the actual differences between them and when should I use each of them?</p>
| Doomro | <p>I would like to add a few things to <em>night-gold's</em> answer. I would say that there are no better or worse ways of modifying your resources. <strong>Everything depends on the particular situation and your needs.</strong></p>
<p>It's worth emphasizing <strong>the main difference between editing and patching</strong>: the first one is an <strong>interactive method</strong> and the second one we can call a <strong>batch method</strong> which, unlike the first one, may easily be used in scripts. Just imagine that you need to make a change in dozens or even a few hundred different <strong>kubernetes resources/objects</strong>; it is much easier to write a simple script in which you can <strong>patch</strong> all those resources in an automated way. Opening each of them for editing wouldn't be very convenient or effective. Just a short example:</p>
<pre><code>kubectl patch resource-type resource-name --type json -p '[{"op": "remove", "path": "/spec/someSection/someKey"}]'
</code></pre>
<p>Although at first it may look unnecessarily complicated and not very convenient to use in comparison with interactive editing and manually removing a specific line from a specific section, in fact it is a very quick and effective method which may easily be implemented in scripts and can save you a lot of work and time when you work with many objects.</p>
<p>As to the <code>apply</code> command, you can read in the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#apply" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>apply manages applications through files defining Kubernetes
resources. It creates and updates resources in a cluster through
running kubectl apply. <strong>This is the recommended way of managing
Kubernetes applications on production.</strong></p>
</blockquote>
<p>It also gives you the possibility of modifying your running configuration by re-applying it from an updated <code>yaml</code> manifest, e.g. pulled from a git repository.</p>
<p>If by <code>update</code> you mean <code>rollout</code> (formerly known as rolling-update), as you can see in the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources" rel="nofollow noreferrer">documentation</a> it has quite a different function. It is mostly used for updating deployments. You don't use it for making changes to arbitrary types of resources.</p>
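<p>To sum up, the three approaches could look like this in practice (the resource names are just placeholders):</p>
<pre><code># interactive change in your default editor
kubectl edit deployment/example-deployment

# declarative update from a (possibly version-controlled) manifest
kubectl apply -f example-deployment.yaml

# scripted, non-interactive change
kubectl patch deployment example-deployment -p '{"spec":{"replicas":3}}'
</code></pre>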
| mario |
<p>The problem:</p>
<p>We have nodes in kubernetes, which will occasionally become tainted with an <code>effect</code> of <code>PreferNoSchedule</code>. When this happens, we would like our pods to <em>completely</em> avoid scheduling on these nodes (in other words, act like the taint's <code>effect</code> were actually <code>NoSchedule</code>). The taints aren't applied by us - we're using GKE and it's an automated thing on their end, so we're stuck with the <code>PreferNoSchedule</code> behaviour.</p>
<p>What we <em>can</em> control is the spec of our pods. I'm hoping this might be possible using a <code>nodeAffinity</code> on them, however the documentation on this is fairly sparse: see e.g. <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">here</a>. All the examples I can find refer to an affinity by labels, so I'm not sure if a taint is even visible/accessible by this logic.</p>
<p>Effectively, in my pod spec I want to write something like:</p>
<pre><code>spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/taints *I've invented this, it doesn't work
operator: NotIn
values:
- DeletionCandidateOfClusterAutoscaler
</code></pre>
<p>where <code>DeletionCandidateOfClusterAutoscaler</code> is the taint that we see applied. Is there a way to make this work?</p>
<p>The other approach we've thought about is a cronjob which looks for the <code>PreferNoSchedule</code> taint and adds our own <code>NoSchedule</code> taint on top... but that feels a little gross!</p>
<p>Any neat suggestions / workarounds would be appreciated!</p>
<p>The long version (if you're interested):</p>
<p>The taint gets applied by the autoscaler to say the node is going to go away in 30 minutes or so. This issue describes in some more detail, from people having the same trouble: <a href="https://github.com/kubernetes/autoscaler/issues/2967" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/issues/2967</a></p>
| Alyssa | <p>I have tested your same scenario and indeed, GKE reconciles the current node configuration at the moment the autoscaler starts to spin up. This is in order to ensure no downtime in case of a lack of resources on the node to which the pods/workloads can be scheduled. I believe there is no way to set the hard NoSchedule taint cleanly.</p>
<p>So, the critical information to keep in mind when using the autoscaler is:</p>
<ul>
<li><p>Pods will not be scheduled to the soft-tainted nodepool if there are resources available in the regular one.</p>
</li>
<li><p>If not enough resources are available in the regular one, they will be scheduled to the soft-tainted nodepool.</p>
</li>
<li><p>If there aren't enough resources in the nodepools, the nodepool with the smallest node type will be autoscaled no matter the taints.</p>
</li>
</ul>
<p>As you mentioned, a dirty workaround would be to:</p>
<p>A.- Create a cron or daemon to set the NoSchedule taint and overwrite the soft one set by the autoscaler (see the sketch below).</p>
<p>B.- Ensure resource availability, maybe by setting well-tuned resource limits and requests.</p>
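<p>For option A, the command such a job would run could be as simple as the sketch below; the taint key and value are made up here, the point is only the <code>NoSchedule</code> effect:</p>
<pre><code>kubectl taint nodes <node-name> scheduled-for-deletion=true:NoSchedule
</code></pre>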
| Jujosiga |
<p>After successfully hosting a first service on a single-node cluster, I am trying to add a second service with its own dnsName.</p>
<p>The first service uses LetsEncrypt successfully, and now I am trying out the second service with a test certificate and the staging endpoint/clusterissuer.</p>
<p>The error I am seeing once I describe the Letsencrypt Order is:</p>
<pre><code>Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://example.nl/.well-known/acme-challenge/9kdpAMRFKtp_t8SaCB4fM8itLesLxPkgT58RNeRCwL0': Get "http://example.nl/.well-known/acme-challenge/9kdpAMRFKtp_t8SaCB4fM8itLesLxPkgT58RNeRCwL0": dial tcp: lookup example.nl on 10.43.0.11:53: server misbehaving
</code></pre>
<p>The port that is misbehaving is pointing to the internal IP of my service/kube-dns, which means it is past my service/traefik, I think.</p>
<p>The cluster is running on a VPS and I have also checked the example.nl domain name is added to <code>/etc/hosts</code> with the VPS's ip like so:</p>
<pre><code>206.190.101.190 example1.nl
206.190.101.190 example.nl
</code></pre>
<p>The error is a bit vague to me because I do not know exactly what the kube-dns is doing and why it thinks the server is misbehaving. I think maybe it is because it now has 2 domain names to handle, or I missed something. Can anyone shed some light on it?</p>
<p>Feel free to ask for more ingress or other server config!</p>
| furion2000 | <p>Everything was set up right to be able to work; however, this issue definitely had something to do with DNS resolving. Not internally in the k3s cluster, but externally at the domain registrar.</p>
<p>I found it by using <a href="https://unboundtest.com" rel="nofollow noreferrer">https://unboundtest.com</a> for my domain and saw my old nameservers still being used.</p>
<p>Contacted the registrar and they had to change something for the domain in the DNS of the registry.</p>
<p>Pretty unique situation, but maybe helpful for people who also think the solution has to be found internally (inside k3s).</p>
| furion2000 |
<p>I am trying to read the container logs through fluentd and pass them to Elasticsearch. I have mounted the directories from the host onto the fluentd container, which includes all symlinks and actual files.
But when I look at the fluentd container logs, they say that the logs present under <code>/var/log/pods/</code> are unreadable. Then I manually navigated to the path inside the fluentd container where the logs are present, but unfortunately I got a permission denied issue.
I went down to <code>/var/lib/docker/containers</code>; there the permissions were 0700 and the owner was root. I even tried running my fluentd container with</p>
<pre><code>- name: FLUENT_UID
  value: "0"
</code></pre>
<p>But it is still not able to read the logs.</p>
<pre><code>volumes:
- name: varlog
  hostPath:
    path: /var/log/
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers
</code></pre>
<p>.....</p>
<pre><code>volumeMounts:
- name: varlog
  mountPath: /var/log/
- name: varlibdockercontainers
  mountPath: /var/lib/docker/containers
</code></pre>
| Nish | <p>You should take a look at <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">security contexts</a>. Among other things they allow you to specify the user that will run in the container with <code>runAsUser</code>, the primary group of that user with <code>runAsGroup</code>, and the volume owner with <code>fsGroup</code>.</p>
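<p>A minimal sketch of what that could look like in the fluentd Pod spec; the IDs below are assumptions (running as root so the container can read the root-owned files under <code>/var/lib/docker/containers</code>):</p>
<pre><code>spec:
  securityContext:
    runAsUser: 0     # assumed: run as root to read root-owned log files
    runAsGroup: 0
    fsGroup: 0
</code></pre>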
| Alassane Ndiaye |
<p>I want to get the MAC address of the host from inside the pod; the pod network doesn't use hostNetwork. I found that the node UID's suffix is the host's MAC address, and I want to find where in the source this UID value comes from.</p>
<p>Is the suffix of the uid (525400a9edd3) the MAC address (ether 52:54:00:a9:ed:d3) of that host?</p>
<pre><code>kubectl get nodes node1 -o yaml
apiVersion: v1
kind: Node
metadata:
...
uid: 96557f0f-fea6-11e8-b826-525400a9edd3
...
</code></pre>
<pre><code>ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.68.1 netmask 255.255.0.0 broadcast 172.16.255.255
inet6 fe80::5054:ff:fea9:edd3 prefixlen 64 scopeid 0x20<link>
ether 52:54:00:a9:ed:d3 txqueuelen 1000 (Ethernet)
</code></pre>
<p>Could you help me find how the node uid is created across the source code?</p>
<p>I want to know, from inside a Kubernetes pod, the MAC address of the host that the pod is running on.</p>
| toper | <p>You can look at any of the solutions posted <a href="https://askubuntu.com/questions/692258/find-mac-address-in-the-filesystem">here</a> to see where you can find the MAC address from your filesystem. Then you simply need to mount that file into your container using a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostpath volume</a>, and read the info from there.</p>
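<p>For instance, a minimal sketch of a Pod that mounts the host's network interface information; the interface name <code>eth0</code> and the <code>busybox</code> image are assumptions:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mac-reader
spec:
  containers:
  - name: mac-reader
    image: busybox
    command: ["sh", "-c", "cat /host/net/eth0/address && sleep 3600"]
    volumeMounts:
    - name: sys-net
      mountPath: /host/net
      readOnly: true
  volumes:
  - name: sys-net
    hostPath:
      path: /sys/class/net
</code></pre>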
| Alassane Ndiaye |
<p>I want to know how I can start my deployments in a specific order. I am aware of <code>initContainers</code> but that is not working for me. I have a huge platform with around 20 deployments and 5 statefulsets that each of them has their own service, environment variables, volumes, horizontal autoscaler, etc. So it is not possible (or I don't know how) to define them in another yaml deployment as <code>initContainers</code>.</p>
<p>Is there another option to launch deployments in a specific order?</p>
| AVarf | <p>It's possible to order the launch of initContainers in a Pod, or Pods that belong in the same StatefulSet. However, those solutions do not apply to your case.</p>
<p>This is because ordering initialization is not the standard approach for solving your issue. In a microservices architecture, and more specifically Kubernetes, you would write your containers such that they try to call the services they depend on (whether they are up or not) and if they aren't available, you let your containers crash. This works because Kubernetes provides a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="noreferrer">self-healing mechanism</a> that automatically restarts containers if they fail. This way, your containers will try to connect to the services they depend on, and if the latter aren't available, the containers will crash and try again later using exponential back-off.</p>
<p>By removing unnecessary dependencies between services, you simplify the deployment of your application and reduce coupling between different services.</p>
| Alassane Ndiaye |
<p>I am trying to setup Horizontal Pod Autoscaler to automatically scale up and down my api server pods based on CPU usage.</p>
<p>I currently have 12 pods running for my API but they are using ~0% CPU.</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
api-server-deployment-578f8d8649-4cbtc 2/2 Running 2 12h
api-server-deployment-578f8d8649-8cv77 2/2 Running 2 12h
api-server-deployment-578f8d8649-c8tv2 2/2 Running 1 12h
api-server-deployment-578f8d8649-d8c6r 2/2 Running 2 12h
api-server-deployment-578f8d8649-lvbgn 2/2 Running 1 12h
api-server-deployment-578f8d8649-lzjmj 2/2 Running 2 12h
api-server-deployment-578f8d8649-nztck 2/2 Running 1 12h
api-server-deployment-578f8d8649-q25xb 2/2 Running 2 12h
api-server-deployment-578f8d8649-tx75t 2/2 Running 1 12h
api-server-deployment-578f8d8649-wbzzh 2/2 Running 2 12h
api-server-deployment-578f8d8649-wtddv 2/2 Running 1 12h
api-server-deployment-578f8d8649-x95gq 2/2 Running 2 12h
model-server-deployment-76d466dffc-4g2nd 1/1 Running 0 23h
model-server-deployment-76d466dffc-9pqw5 1/1 Running 0 23h
model-server-deployment-76d466dffc-d29fx 1/1 Running 0 23h
model-server-deployment-76d466dffc-frrgn 1/1 Running 0 23h
model-server-deployment-76d466dffc-sfh45 1/1 Running 0 23h
model-server-deployment-76d466dffc-w2hqj 1/1 Running 0 23h
</code></pre>
<p>My api_hpa.yaml looks like:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: api-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: api-server-deployment
minReplicas: 4
maxReplicas: 12
targetCPUUtilizationPercentage: 50
</code></pre>
<p>It has now been 24h and the HPA has still not scaled down my pods to 4, even though they saw no CPU usage.</p>
<p>When I look at the GKE Deployment details dashboard I see the warning <a href="https://i.stack.imgur.com/oqfvH.png" rel="nofollow noreferrer">Unable to read all metrics</a></p>
<p>Is this causing autoscaler to not scale down my pods?</p>
<p>And how do I fix it?</p>
<p>It is my understanding that GKE runs a metrics server automatically:</p>
<pre><code>kubectl get deployment --namespace=kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
event-exporter-gke 1/1 1 1 18d
kube-dns 2/2 2 2 18d
kube-dns-autoscaler 1/1 1 1 18d
l7-default-backend 1/1 1 1 18d
metrics-server-v0.3.6 1/1 1 1 18d
stackdriver-metadata-agent-cluster-level 1/1 1 1 18d
</code></pre>
<p>Here is the configuration of that metrics server:</p>
<pre><code>Name: metrics-server-v0.3.6
Namespace: kube-system
CreationTimestamp: Sun, 21 Feb 2021 11:20:55 -0800
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=metrics-server
kubernetes.io/cluster-service=true
version=v0.3.6
Annotations: deployment.kubernetes.io/revision: 14
Selector: k8s-app=metrics-server,version=v0.3.6
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: k8s-app=metrics-server
version=v0.3.6
Annotations: seccomp.security.alpha.kubernetes.io/pod: docker/default
Service Account: metrics-server
Containers:
metrics-server:
Image: k8s.gcr.io/metrics-server-amd64:v0.3.6
Port: 443/TCP
Host Port: 0/TCP
Command:
/metrics-server
--metric-resolution=30s
--kubelet-port=10255
--deprecated-kubelet-completely-insecure=true
--kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
Limits:
cpu: 48m
memory: 95Mi
Requests:
cpu: 48m
memory: 95Mi
Environment: <none>
Mounts: <none>
metrics-server-nanny:
Image: gke.gcr.io/addon-resizer:1.8.10-gke.0
Port: <none>
Host Port: <none>
Command:
/pod_nanny
--config-dir=/etc/config
--cpu=40m
--extra-cpu=0.5m
--memory=35Mi
--extra-memory=4Mi
--threshold=5
--deployment=metrics-server-v0.3.6
--container=metrics-server
--poll-period=300000
--estimator=exponential
--scale-down-delay=24h
--minClusterSize=5
--use-metrics=true
Limits:
cpu: 100m
memory: 300Mi
Requests:
cpu: 5m
memory: 50Mi
Environment:
MY_POD_NAME: (v1:metadata.name)
MY_POD_NAMESPACE: (v1:metadata.namespace)
Mounts:
/etc/config from metrics-server-config-volume (rw)
Volumes:
metrics-server-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: metrics-server-config
Optional: false
Priority Class Name: system-cluster-critical
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: metrics-server-v0.3.6-787886f769 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m10s (x2 over 5m39s) deployment-controller Scaled up replica set metrics-server-v0.3.6-7c9d64c44 to 1
Normal ScalingReplicaSet 2m54s (x2 over 5m23s) deployment-controller Scaled down replica set metrics-server-v0.3.6-787886f769 to 0
Normal ScalingReplicaSet 2m50s (x2 over 4m49s) deployment-controller Scaled up replica set metrics-server-v0.3.6-787886f769 to 1
Normal ScalingReplicaSet 2m33s (x2 over 4m34s) deployment-controller Scaled down replica set metrics-server-v0.3.6-7c9d64c44 to 0
</code></pre>
<h2>Edit: 2021-03-13</h2>
<p>This is the configuration for the api server deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: api-server-deployment
spec:
replicas: 12
selector:
matchLabels:
app: api-server
template:
metadata:
labels:
app: api-server
spec:
serviceAccountName: api-kubernetes-service-account
nodeSelector:
#<labelname>:value
cloud.google.com/gke-nodepool: api-nodepool
containers:
- name: api-server
image: gcr.io/questions-279902/taskserver:latest
imagePullPolicy: "Always"
ports:
- containerPort: 80
#- containerPort: 443
args:
- --disable_https
- --db_ip_address=127.0.0.1
- --modelserver_address=http://10.128.0.18:8501 # kubectl get service model-service --output yaml
resources:
# You must specify requests for CPU to autoscale
# based on CPU utilization
requests:
cpu: "250m"
- name: cloud-sql-proxy
...
</code></pre>
| Johan Wikström | <p>I don't see any "<a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-types" rel="nofollow noreferrer">resources</a>:" fields (e.g. cpu, mem, etc.) assigned on every container, and this should be the root cause.
Please be aware that having resource requests set on the Deployment targeted by the HPA (Horizontal Pod Autoscaler) is a requirement, as explained in the official <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale" rel="nofollow noreferrer">Kubernetes documentation</a>.</p>
<p>Please note that if some of the Pod's containers do not have the relevant resource request set, CPU utilization for the Pod will not be defined and the autoscaler will not take any action for that metric.</p>
<p>This can definitely cause the message unable to read all metrics on target Deployment(s).</p>
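<p>As an illustration, every container in the target Deployment (including sidecars such as the <code>cloud-sql-proxy</code> one) would need at least a CPU request along these lines; the values are placeholders:</p>
<pre><code>resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
</code></pre>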
| LukeTerro |
<p>I have a kubernetes cluster created using kubeadm with 1 master and 2 workers. flannel is being used as the network plugin. I noticed that the docker0 bridge is down on all the worker nodes and the master node, but the cluster networking is working fine. Is it by design that the docker0 bridge will be down if we are using a network plugin like flannel in a kubernetes cluster?</p>
<pre><code>docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:ad:8f:3a:99 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
</code></pre>
| Lijo | <p>I am posting a community wiki answer from <a href="https://stackoverflow.com/questions/54102888/what-role-does-network-bridge-docker0-play-in-k8s-with-flannel">this</a> SO thread as I believe it answers your question.</p>
<hr />
<p>There are two network models here Docker and Kubernetes.</p>
<p>Docker model</p>
<blockquote>
<p>By default, Docker uses host-private networking. It creates a virtual bridge, called <code>docker0</code> by default, and allocates a subnet from one of the private address blocks defined in <a href="https://www.rfc-editor.org/rfc/rfc1918" rel="nofollow noreferrer">RFC1918</a> for that bridge. For each container that Docker creates, it allocates a virtual Ethernet device (called <code>veth</code>) which is attached to the bridge. The veth is mapped to appear as <code>eth0</code> in the container, using Linux namespaces. The in-container <code>eth0</code> interface is given an IP address from the bridge's address range.</p>
<p><strong>The result is that Docker containers can talk to other containers only if they are on the same machine</strong> (and thus the same virtual bridge). <strong>Containers on different machines can not reach each other</strong> - in fact they may end up with the exact same network ranges and IP addresses.</p>
</blockquote>
<p>Kubernetes model</p>
<blockquote>
<p>Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):</p>
</blockquote>
<ul>
<li>all containers can communicate with all other containers without NAT</li>
<li>all nodes can communicate with all containers (and vice-versa) without NAT</li>
<li>the IP that a container sees itself as is the same IP that others see it as</li>
</ul>
<blockquote>
<p>Kubernetes applies IP addresses at the <code>Pod</code> scope - containers within a <code>Pod</code> share their network namespaces - including their IP address. This means that containers within a <code>Pod</code> can all reach each other's ports on <code>localhost</code>. This does imply that containers within a <code>Pod</code> must coordinate port usage, but this is no different than processes in a VM. This is called the "IP-per-pod" model. This is implemented, using Docker, as a "pod container" which holds the network namespace open while "app containers" (the things the user specified) join that namespace with Docker's <code>--net=container:<id></code> function.</p>
<p>As with Docker, it is possible to request host ports, but this is reduced to a very niche operation. In this case a port will be allocated on the host <code>Node</code> and traffic will be forwarded to the <code>Pod</code>. The <code>Pod</code> itself is blind to the existence or non-existence of host ports.</p>
</blockquote>
<p>In order to integrate the platform with the underlying network infrastructure, Kubernetes provides a plugin specification called <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Container Networking Interface (CNI)</a>. If the Kubernetes fundamental requirements are met, vendors can use the network stack as they like, typically using overlay networks to support <strong>multi-subnet</strong> and <strong>multi-az</strong> clusters.</p>
<p>Below is shown how overlay networks are implemented through <a href="https://github.com/coreos/flannel" rel="nofollow noreferrer">Flannel</a>, which is a popular <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">CNI</a>.</p>
<p><a href="https://i.stack.imgur.com/DOxTE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DOxTE.png" alt="flannel" /></a></p>
<p>You can read more about other CNI's <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model" rel="nofollow noreferrer">here</a>. The Kubernetes approach is explained in <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Cluster Networking</a> docs. I also recommend reading <a href="https://www.contino.io/insights/kubernetes-is-hard-why-eks-makes-it-easier-for-network-and-security-architects" rel="nofollow noreferrer">Kubernetes Is Hard: Why EKS Makes It Easier for Network and Security Architects</a> which explains how <a href="https://github.com/coreos/flannel" rel="nofollow noreferrer">Flannel</a> works, also another <a href="https://medium.com/all-things-about-docker/setup-hyperd-with-flannel-network-1c31a9f5f52e" rel="nofollow noreferrer">article from Medium</a></p>
<p>Hope this answers your question.</p>
| Wytrzymały Wiktor |
<p>Given the following PVC and PV:</p>
<ul>
<li>PVC:</li>
</ul>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: packages-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
volumeName: packages-volume
</code></pre>
<ul>
<li>PV:</li>
</ul>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: packages-volume
namespace: test
spec:
claimRef:
name: packages-pvc
namespace: test
accessModes:
- ReadWriteMany
nfs:
path: {{NFS_PATH}}
server: {{NFS_SERVER}}
capacity:
storage: 1Gi
persistentVolumeReclaimPolicy: Retain
</code></pre>
<p>if I create the PV, then the PVC, they bind together. However if I delete the PVC then re-create it, they do not bind (pvc pending). Why?</p>
| stackoverflowed | <p>Note that after deleting <code>PVC</code>, <code>PV</code> remains in <code>Released</code> status:</p>
<pre><code>$ kubectl get pv packages-volume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
packages-volume 1007Gi RWX Retain Released default/packages-pvc 10m
</code></pre>
<p>It should have status <code>Available</code> so it can be reused by another <code>PersistentVolumeClaim</code> instance.</p>
<p><strong>Why it isn't <code>Available</code> ?</strong></p>
<p>If you display current <code>yaml</code> definition of the <code>PV</code>, which you can easily do by executing:</p>
<pre><code>kubectl get pv packages-volume -o yaml
</code></pre>
<p>you may notice that in <code>claimRef</code> section it contains the <code>uid</code> of the recently deleted <code>PersistentVolumeClaim</code>:</p>
<pre><code> claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: packages-pvc
namespace: default
resourceVersion: "10218121"
uid: 1aede3e6-eaa1-11e9-a594-42010a9c0005
</code></pre>
<p>You can easily verify it by issuing:</p>
<pre><code>kubectl get pvc packages-pvc -o yaml | grep uid
</code></pre>
<p>just before you delete your <code>PVC</code> and compare it with what <code>PV</code> definition contains. You'll see that this is exactly the same <code>uid</code> that is still referred by your <code>PV</code> after <code>PVC</code> is deleted. This remaining reference is the actual reason that <code>PV</code> remains in <code>Released</code> status.</p>
<p><strong>Why newly created <code>PVC</code> remains in a <code>Pending</code> state ?</strong></p>
<p>Although your newly created <code>PVC</code> may seem to you exactly the same <code>PVC</code> that you've just deleted as you're creating it using the very same <code>yaml</code> file, from the perspective of <code>Kubernetes</code> it's a completely new instance of <code>PersistentVolumeClaim</code> object which has completely different <code>uid</code>. This is the reason why it remains in <code>Pending</code> status and is unable to bind to the <code>PV</code>.</p>
<p><strong>Solution:</strong></p>
<p>To make the <code>PV</code> <code>Available</code> again you just need to remove the mentioned <code>uid</code> reference e.g. by issuing:</p>
<pre><code>kubectl patch pv packages-volume --type json -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'
</code></pre>
<p>or alternatively by removing the whole <code>claimRef</code> section which can be done as follows:</p>
<pre><code>kubectl patch pv packages-volume --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
</code></pre>
| mario |
<p>Using OpenShift 3.11, I've mounted an nfs persistent volume, but the application cannot copy into the new volume, saying:</p>
<pre><code>oc logs my-project-77858bc694-6kbm6
cp: cannot create regular file '/config/dbdata/resdb.lock.db': Permission denied
...
</code></pre>
<p>I've tried to change the ownership of the folder by doing a chown in an initContainer, but it tells me the operation is not permitted. </p>
<pre><code> initContainers:
- name: chowner
image: alpine:latest
command: ["/bin/sh", "-c"]
args:
- ls -alt /config/dbdata; chown 1001:1001 /config/dbdata;
volumeMounts:
- name: my-volume
mountPath: /config/dbdata/
</code></pre>
<pre><code>oc logs my-project-77858bc694-6kbm6 -c chowner
total 12
drwxr-xr-x 3 root root 4096 Nov 7 03:06 ..
drwxr-xr-x 2 99 99 4096 Nov 7 02:26 .
chown: /config/dbdata: Operation not permitted
</code></pre>
<p>I expect to be able to write to the mounted volume.</p>
| DThompson55 | <p>You can give your Pods permission to write into a volume by using <code>fsGroup: GROUP_ID</code> in a Security Context. <code>fsGroup</code> makes your volumes writable by GROUP_ID and makes all processes inside your container part of that group.</p>
<p>For example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: POD_NAME
spec:
securityContext:
fsGroup: GROUP_ID
...
</code></pre>
| Alassane Ndiaye |
<p>We have a GKE cluster with <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades" rel="noreferrer">auto-upgrading nodes</a>. We recently noticed a node become unschedulable and eventually deleted that we suspect was being upgraded automatically for us. Is there a way to confirm (or otherwise) in Stackdriver that this was indeed the cause what was happening?</p>
| Matt R | <p>You can use the following advanced logs queries with Cloud Logging (previously Stackdriver) to detect upgrades to <strong>node pools</strong>:</p>
<pre><code>protoPayload.methodName="google.container.internal.ClusterManagerInternal.UpdateClusterInternal"
resource.type="gke_nodepool"
</code></pre>
<p>and <strong>master</strong>:</p>
<pre><code>protoPayload.methodName="google.container.internal.ClusterManagerInternal.UpdateClusterInternal"
resource.type="gke_cluster"
</code></pre>
<p>Additionally, you can control when the update are applied with <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/maintenance-windows-and-exclusions" rel="noreferrer">Maintenance Windows</a> (like the user aurelius mentioned).</p>
| David C |
<p>I'm attempting to create a cluster using <code>minikube</code>. When I run</p>
<p><code>minikube start</code></p>
<p>I get the following output:</p>
<pre><code>😄  minikube v1.2.0 on darwin (amd64)
💣  Requested disk size (0MB) is less than minimum of 2000MB
</code></pre>
<p>I certainly have space:</p>
<pre class="lang-sh prettyprint-override"><code>meβ‘οΈ$ df -h
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1s1 112Gi 82Gi 24Gi 77% 1394770 9223372036853381037 0% /
devfs 332Ki 332Ki 0Bi 100% 1148 0 100% /dev
/dev/disk1s4 112Gi 5.0Gi 24Gi 17% 4 9223372036854775803 0% /private/var/vm
map -hosts 0Bi 0Bi 0Bi 100% 0 0 100% /net
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /home
</code></pre>
<pre class="lang-sh prettyprint-override"><code>minikube config view
- disk-size: 2000
</code></pre>
| Ko Ga | <p>You can configure the disk size of the minikube VM using the <code>--disk-size</code> flag.</p>
<p>First you need to run <code>minikube stop</code> and <code>minikube delete</code> for the new parameters to take effect.</p>
<p>Specify <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver" rel="nofollow noreferrer">VM driver</a> if needed by running the <code>--vm-driver=<enter_driver_name></code> flag.</p>
<p>Finally start the minikube with: <code>minikube start --vm-driver=<enter_driver_name> --disk-size 20GB</code> (change the size as you find fitting).</p>
<p>Please let me know if that helped. </p>
| Wytrzymały Wiktor |
<p>I'm using "hostpath" to access the .sql file into the volume. </p>
<p>When we enter into the mysql pod, there we can see the .sql file </p>
<p>("laravel/laravel.sql" the given path is define in the volume mount section)</p>
<p>But it's become directory not file and i'm unable to add .sql file into the database.
Currently the size of .sql file is 50Mi so unable to use configmap. </p>
<p>Please provide the proper solution of this problem </p>
<p><a href="https://i.stack.imgur.com/jt0S7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jt0S7.png" alt="mysql pod image"></a></p>
<p>mysql.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /home/paradise/lara_k8s/docker_qa-laravel
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: "MYSQL_ROOT_PASSWORD"
value: "root"
- name: "MYSQL_USERNAME"
value: "root"
- name: "MYSQL_PASSWORD"
value: "root"
- name: "MYSQL_DATABASE"
value: "homestead"
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /laravel
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
</code></pre>
| Saya Kasta | <p>As you can read in <strong>Kubernetes</strong> <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>A hostPath volume mounts a file or directory from the host nodeβs
filesystem into your Pod.</p>
</blockquote>
<p>and further:</p>
<blockquote>
<p>In addition to the required path property, user can optionally specify a type for a hostPath volume.</p>
</blockquote>
<p>There are a few supported values for <code>type</code> field. The one which you should use in your particular case is a <code>File</code> which is defined as follows:</p>
<blockquote>
<p>File A file must exist at the given path</p>
</blockquote>
<p>In your deployment definition you should use full paths to the file like in the example below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test/file.sql
name: test-volume
volumes:
- name: test-volume
hostPath:
path: /data/file.sql
type: File
</code></pre>
<p>For using this type of volume you don't need a separate definition of <code>persistentVolume</code> and <code>persistentVolumeClaim</code>. It's totally enough if you just add it to your deployment definition.
If for some reason you are interested in creating a <code>hostPath PersistentVolume</code> it is nicely described <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer">here</a>.</p>
<p>EDIT:</p>
<p>Last lines of your deployment definition may look like this:</p>
<pre><code> volumeMounts:
- name: mysql-volume
mountPath: /laravel/your_file.sql
volumes:
- name: mysql-volume
hostPath:
path: /path/to/file/your_file.sql
type: File
</code></pre>
| mario |
<p>I've set up a private container registry that is integrated with Bitbucket successfully. However, I am not able to pull the images from my GKE Cluster.</p>
<p>I created a service account with the role "Project Viewer", and a json key for this account. Then I created the secret in the cluster/namespace running</p>
<pre><code>kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/code/bitbucket/miappsrl/miappnodeapi/secrets/registry/miapp-staging-e94050365be1.json)" \
[email protected]
</code></pre>
<p>And in the deployment file I added</p>
<pre><code>...
imagePullSecrets:
- name: gcr-json-key
...
</code></pre>
<p>But when I apply the deployment I get</p>
<pre><code> ImagePullBackOff
</code></pre>
<p>And when I do a <code>kubectl describe pod <pod_name></code> I see</p>
<pre><code>Failed to pull image "gcr.io/miapp-staging/miappnodeapi": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 169.254.169.254:53: no such host
</code></pre>
<p>I can't figure out what I am missing. I understand it can resolve the DNS inside the cluster, but I am not sure what I should add.</p>
| agusgambina | <p>If a GKE Cluster is setup as private you need to setup the DNS to reach container Registry, from <a href="https://cloud.google.com/vpc-service-controls/docs/set-up-gke#configure-dns" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>To support GKE private clusters that use Container Registry or Artifact Registry inside a service perimeter, you first need to configure your DNS server so requests to registry addresses resolve to restricted.googleapis.com, the restricted VIP. You can do so using Cloud DNS private DNS zones.</p>
</blockquote>
<p>Verify if you setup your cluster as private.</p>
| David C |
<p>I'm running Windows 10 with WSL1 and Ubuntu as the distro.
My Windows version is Version 1903 (Build 18362.418).</p>
<p>I'm trying to connect to Kubernetes using kubectl proxy within Ubuntu WSL. I get a connection refused error when trying to connect to the dashboard with my browser.</p>
<p><a href="https://i.stack.imgur.com/De2K7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/De2K7.png" alt="enter image description here"></a></p>
<p>I have checked in windows with netstat -a to see active connections. </p>
<p><a href="https://i.stack.imgur.com/l4YNh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l4YNh.png" alt="enter image description here"></a></p>
<p>If I run kubectl within the Windows terminal I have no problem connecting to Kubernetes, so the problem only happens when I try to connect with Ubuntu WSL1.</p>
<p>I have also tried to run the following command</p>
<pre><code>kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*'
</code></pre>
<p>... but the connection is refused, although I see that Windows is listening on the port. Changing to another port didn't fix the problem. Disabling the firewall didn't fix the problem either.</p>
<p>Any ideas?</p>
| david | <p>The first thing to do would be to check if you are able to talk to your cluster at all: (<code>kubectl get svc -n kube-system</code>, <code>kubectl cluster-info</code>)</p>
<p>If not, check if the <code>$HOME/.kube</code> folder was created. If not, run:
<code>gcloud container clusters get-credentials default --region=<your_region></code></p>
| Wytrzymały Wiktor |
<p>In minikube, API server fails to connect to my audit log webhook, I see the following error in the api-server logs</p>
<pre><code>E0308 08:30:26.457841 1 metrics.go:109] Error in audit plugin 'webhook' affecting 400 audit events: Post "http://ca-audit.armo-system:8888/": dial tcp: lookup ca-audit.armo-system on 10.42.4.254:53: no such host
</code></pre>
<p>I don't know why the api-server is connecting to <code>10.42.4.254:53</code>, since my service IP is different:</p>
<pre><code>$ kubectl -n armo-system get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ca-audit ClusterIP 10.109.132.114 <none> 8888/TCP 8m33s
</code></pre>
<p>I do not understand what I'm doing wrong, any suggestions?</p>
<hr />
<p>This is how I configured my audit policy, webhook and minikube:</p>
<p>I pre-configured my minikube as following:</p>
<pre><code># Create the webhook config and audit policy
export C_CONTEXT=$(kubectl config current-context)
export C_NAME=$(kubectl config get-contexts ${C_CONTEXT} --no-headers | awk '{print $2}')
export C_CLUSTER=$(kubectl config get-contexts ${C_CONTEXT} --no-headers | awk '{print $3}')
export C_USER=$(kubectl config get-contexts ${C_CONTEXT} --no-headers | awk '{print $4}')
export ARMO_NAMESPACE="armo-system"
export ARMO_AUDIT_SERVICE="ca-audit"
export ARMO_AUDIT_PORT=8888
mkdir -p ~/.minikube/files/etc/ssl/certs
cat <<EOF > ~/.minikube/files/etc/ssl/certs/audit-webhook.yaml
{
"apiVersion": "v1",
"clusters": [
{
"cluster": {
"server": "http://${ARMO_AUDIT_SERVICE}.${ARMO_NAMESPACE}:${ARMO_AUDIT_PORT}/"
},
"name": "${C_NAME}"
}
],
"contexts": [
{
"context": {
"cluster": "${C_CLUSTER}",
"user": "${C_USER}"
},
"name": "${C_NAME}"
}
],
"current-context": "${C_CONTEXT}",
"kind": "Config",
"preferences": {},
"users": []
}
EOF
cat <<EOF > ~/.minikube/files/etc/ssl/certs/audit-policy.yaml
{
"apiVersion": "audit.k8s.io/v1",
"kind": "Policy",
"rules": [
{
"level": "Metadata"
}
]
}
EOF
# Copy the audit policy to `/etc/ssl/certs/.`
sudo cp ~/.minikube/files/etc/ssl/certs/audit-policy.yaml ~/.minikube/files/etc/ssl/certs/audit-webhook.yaml /etc/ssl/certs/.
# Start the minikube, add the flags `--extra-config=apiserver.audit-policy-file=/etc/ssl/certs/audit-policy.yaml`, `--extra-config=apiserver.audit-webhook-config-file=/etc/ssl/certs/audit-webhook.yaml`
sudo -E minikube start --vm-driver=none --extra-config=apiserver.audit-policy-file=/etc/ssl/certs/audit-policy.yaml --extra-config=apiserver.audit-webhook-config-file=/etc/ssl/certs/audit-webhook.yaml
</code></pre>
<p>Now that my minikube is up and running, I created the namespace, service and webhook deployment:</p>
<pre><code>cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
name: ${ARMO_NAMESPACE}
---
kind: Service
apiVersion: v1
metadata:
labels:
app: ${ARMO_AUDIT_SERVICE}
name: ${ARMO_AUDIT_SERVICE}
namespace: ${ARMO_NAMESPACE}
spec:
ports:
- port: ${ARMO_AUDIT_PORT}
targetPort: ${ARMO_AUDIT_PORT}
protocol: TCP
selector:
app: ${ARMO_AUDIT_SERVICE}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${ARMO_AUDIT_SERVICE}
namespace: ${ARMO_NAMESPACE}
labels:
app: ${ARMO_AUDIT_SERVICE}
spec:
selector:
matchLabels:
app: ${ARMO_AUDIT_SERVICE}
replicas: 1
template:
metadata:
labels:
app: ${ARMO_AUDIT_SERVICE}
spec:
containers:
- name: ${ARMO_AUDIT_SERVICE}
image: quay.io/armosec/k8s-ca-auditlog-ubi:dummy
imagePullPolicy: Always
env:
- name: ARMO_AUDIT_PORT
value: "${ARMO_AUDIT_PORT}"
ports:
- containerPort: ${ARMO_AUDIT_PORT}
name: ${ARMO_AUDIT_SERVICE}
EOF
</code></pre>
<p>The webhook image code (<code>quay.io/armosec/k8s-ca-auditlog-ubi:dummy</code>) is as follows:</p>
<pre><code>package main
import (
"encoding/json"
"flag"
"fmt"
"net/http"
"os"
"k8s.io/apiserver/pkg/apis/audit"
"github.com/golang/glog"
)
func main() {
flag.Parse()
flag.Set("alsologtostderr", "1") // display logs in stdout
InitServer()
}
// InitServer - Initialize webhook listener
func InitServer() {
port, ok := os.LookupEnv("ARMO_AUDIT_PORT")
if !ok {
port = "8888"
}
glog.Infof("Webhook listening on port: %s, path: %s", port, "/")
http.HandleFunc("/", HandleRequest)
glog.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}
//HandleRequest -
func HandleRequest(w http.ResponseWriter, req *http.Request) {
eventList := audit.EventList{}
err := json.NewDecoder(req.Body).Decode(&eventList)
if err != nil {
e := fmt.Errorf("failed parsing api-server request, reason: %s", err.Error())
glog.Errorf(e.Error())
http.Error(w, e.Error(), http.StatusBadRequest)
return
}
glog.Infof("webhook received audit list, len: %d", len(eventList.Items))
for _, event := range eventList.Items {
bEvent, _ := json.Marshal(event)
glog.Infof("Received event: %s", string(bEvent))
}
w.WriteHeader(http.StatusOK)
}
</code></pre>
| David Wer | <p>Actually, I don't know much about minikube, but I have used audit for k8s, so my answer might not be helpful for a minikube environment.</p>
<p><em>First</em>, your k8s DNS name format is incorrect. It should be <strong>{service-name}.{namespace}.svc or {service-name}.{namespace}.svc.cluster.local</strong>. Of course, http:// or https:// can be added as well.</p>
<p><em>Second</em>, you should add the dnsPolicy option to <strong>kube-apiserver.yaml</strong>.
To make kube-apiserver resolve DNS names inside the cluster, 'dnsPolicy' needs to be set.</p>
<pre><code>sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
</code></pre>
<p>Open kube-apiserver.yaml and add <strong>dnsPolicy: ClusterFirstWithHostNet</strong> to it.
The dnsPolicy field goes under .spec, at the same level as containers or volumes.</p>
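<p>A minimal sketch of both changes, based on your manifests (only the relevant lines are shown):</p>
<pre><code># in audit-webhook.yaml, point the server at the in-cluster DNS name:
#   "server": "http://ca-audit.armo-system.svc:8888/"

# in /etc/kubernetes/manifests/kube-apiserver.yaml, under .spec:
spec:
  dnsPolicy: ClusterFirstWithHostNet
</code></pre>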
| Togomi |
<p>So, I have a pod definition file and I was wondering if there was a way to use the kubectl apply command to create multiple instances of the pod from the same definition file. I know this can be done with a deployment but wondering if there was a way to quickly spin up multiple pods using the apply command?</p>
| aj31 | <blockquote>
<p>I know this can be done with a deployment but wondering if there was a
way to quickly spin up multiple pods using the apply command?</p>
</blockquote>
<p>Yes, this is possible but not with the simple <code>kubectl apply</code> command as it doesn't have such options/capabilities and cannot be used to create multiple instances of a pod from a single yaml definition file.</p>
<p>If you just need to create <strong>n number</strong> of independent pods, not managed by any <code>replicaset</code>, <code>deployment</code>, <code>statefulset</code> etc., you can use for this purpose a simple bash one-liner.</p>
<p>Suppose we want to quickly spin up 10 independent pods based on the following yaml template, named <code>pod-template.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: {{pod-name}}
spec:
containers:
- name: web
image: nginx
ports:
- name: web
containerPort: 80
protocol: TCP
</code></pre>
<p>We can do it with the following script:</p>
<pre><code>for i in {1..10}; do sed "s/{{pod-name}}/pod-$i/g" pod-template.yaml | kubectl apply -f - ; done
</code></pre>
<p>It will create 10 pods named <code>pod-1</code>, <code>pod-2</code> ... <code>pod-10</code>:</p>
<pre><code>$ for i in {1..10}; do sed "s/{{pod-name}}/pod-$i/g" pod-template.yaml | kubectl apply -f - ; done
pod/pod-1 created
pod/pod-2 created
pod/pod-3 created
pod/pod-4 created
pod/pod-5 created
pod/pod-6 created
pod/pod-7 created
pod/pod-8 created
pod/pod-9 created
pod/pod-10 created
</code></pre>
<p>And just in case you want to delete them now and don't want to do it one by one ;)</p>
<pre><code>for i in {1..10};do kubectl delete pod pod-$i; done
</code></pre>
| mario |
<p>The default subnet of docker0 is 172.17.x.x/16, which overlaps with some of my network devices. After doing some searching, I found that docker0 can be disabled in /etc/docker/daemon.json, like</p>
<blockquote>
<p>{ "bridge": "none"}</p>
</blockquote>
<p>None of the containers in my k8s cluster are using the docker0 network. I did some tests after disabling docker0 and everything seems to be working fine, but I wonder if this configuration is normal for a k8s cluster, and if there are any potential risks I overlooked.</p>
| Jin Mengfei | <p>Answering on behalf of @Barath </p>
<blockquote>
<p>k8s uses custom bridge which is different from docker`s default bridge
based on network type to satisfy kubernetes networking model. So this
should be fine. In case you want to modify docker bridge CIDR block
you can specify this --bip=CIDR as part of DOCKER_OPTS which is
different from cbr0-CIDR. β Barath May 22 at 5:06</p>
</blockquote>
<p>and @menya</p>
<blockquote>
<p>It depends on which kubernetes networking model you use, but I have
never seen networking model using docker's bridge. So it is fine. –
menya May 22 at 8:09</p>
</blockquote>
<p>This is posted as an answer because no further response was given and we should keep answers out of the comments section.</p>
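<p>As a side note on the <code>--bip</code> option mentioned above: instead of passing it via DOCKER_OPTS, the same value can be set in <code>/etc/docker/daemon.json</code> (a minimal sketch; the CIDR below is only an example):</p>
<pre><code>{
  "bip": "10.200.0.1/24"
}
</code></pre>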
| WytrzymaΕy Wiktor |
<p>I want to offer MySQL as a service on Kubernetes. For anyone who creates a database I have to create a service for it, and if they want to access the database I have to use a NodePort. But I have about 2000 ports and 4000 users. I installed Kubernetes on bare metal. What should I do?
Why does Kubernetes have a limited NodePort range?</p>
| check nova | <p>@check nova</p>
<p>Kubernetes does not have a fixed, unchangeable NodePort port range.
The default range is indeed 30000-32767, but it can be changed by setting the following flag:</p>
<pre><code>--service-node-port-range
</code></pre>
<p>Update the file /etc/kubernetes/manifests/kube-apiserver.yaml and add the line <code>--service-node-port-range=xxxxx-yyyyy</code></p>
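<p>A trimmed sketch of what that looks like in the manifest (the range below is only an example):</p>
<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (abridged)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=20000-40000   # example custom range
    # other flags unchanged
</code></pre>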
<p>Be careful however and try not to generate any configuration conflicts for your network.</p>
<p>I hope it helps.</p>
| WytrzymaΕy Wiktor |
<p>As of now I do</p>
<pre><code> kubectl --context <cluster context> get pod -A
</code></pre>
<p>to get pods in a specific cluster.</p>
<p>Is there a Python way to set the Kubernetes context for a virtual env, so we can use multiple contexts at the same time?
For example:</p>
<pre><code>Terminal 1:
(cluster context1) user@machine #
Terminal 2:
(cluster context2) user@machine #
</code></pre>
<p>This should be the equivalent of:</p>
<pre><code>Terminal 1:
user@machine # kubectl --context <cluster context1> get pod -A
Terminal 2:
user@machine # kubectl --context <cluster context2> get pod -A
</code></pre>
| Pradeep Padmanaban C | <p>This probably isn't the most elegant solution, but anyway... At some point I used different <code>kubectl</code> versions for different clusters and I came up with a venv-like solution to switch between them. I wrote text files like this:</p>
<pre class="lang-sh prettyprint-override"><code>export KUBECONFIG="/path/to/kubeconfig"
export PATH="/path/including/the/right/kubectl"
</code></pre>
<p>And activated them in the same fashion as venv: <code>source the_file</code>. If you can split your contexts to separate files, you can add <code>export KUBECONFIG="/path/to/kubeconfig"</code> to your <code>venv/bin/activate</code> and it will use the desired config when you activate the <code>venv</code>.</p>
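<p>If you specifically want to do this from Python code rather than the shell, the official kubernetes client can also build one API client per kubeconfig context, so both contexts can be used at the same time in a single process (a rough sketch; the context names below are examples):</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config

# One API client per kubeconfig context (context names are examples)
client_ctx1 = config.new_client_from_config(context="cluster-context1")
client_ctx2 = config.new_client_from_config(context="cluster-context2")

v1_ctx1 = client.CoreV1Api(api_client=client_ctx1)
v1_ctx2 = client.CoreV1Api(api_client=client_ctx2)

# Equivalent of `kubectl --context <ctx> get pod -A` for each context
for pod in v1_ctx1.list_pod_for_all_namespaces().items:
    print("context1:", pod.metadata.namespace, pod.metadata.name)
for pod in v1_ctx2.list_pod_for_all_namespaces().items:
    print("context2:", pod.metadata.namespace, pod.metadata.name)
</code></pre>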
| anemyte |
<p>I am learning about K8s and set up a release pipeline with a kubectl apply. I've set up the AKS cluster via Terraform and on the first run all seemed fine. Once I destroyed the cluster and reran the pipeline, I got issues which I believe are related to the kubeconfig file mentioned in the exception. I tried the cloud shell etc. to get to the file or reset it but I wasn't successful. How can I get back to a clean state?</p>
<pre><code>2020-12-09T09:08:51.7047177Z ##[section]Starting: kubectl apply
2020-12-09T09:08:51.7482440Z ==============================================================================
2020-12-09T09:08:51.7483217Z Task : Kubectl
2020-12-09T09:08:51.7483729Z Description : Deploy, configure, update a Kubernetes cluster in Azure Container Service by running kubectl commands
2020-12-09T09:08:51.7484058Z Version : 0.177.0
2020-12-09T09:08:51.7484996Z Author : Microsoft Corporation
2020-12-09T09:08:51.7485587Z Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/deploy/kubernetes
2020-12-09T09:08:51.7485955Z ==============================================================================
2020-12-09T09:08:52.7640528Z [command]C:\ProgramData\Chocolatey\bin\kubectl.exe --kubeconfig D:\a\_temp\kubectlTask\1607504932712\config apply -f D:\a\r1\a/medquality-cordapp/k8s
2020-12-09T09:08:54.1555570Z Unable to connect to the server: dial tcp: lookup mq-k8s-dfee38f6.hcp.switzerlandnorth.azmk8s.io: no such host
2020-12-09T09:08:54.1798118Z ##[error]The process 'C:\ProgramData\Chocolatey\bin\kubectl.exe' failed with exit code 1
2020-12-09T09:08:54.1853710Z ##[section]Finishing: kubectl apply
</code></pre>
<p>Update, workflow tasks of the release pipeline:</p>
<p>Initially I get the artifact, clone of the repo containing the k8s yamls, then the stage does a kubectl apply.</p>
<pre><code>"workflowTasks": [
{
"environment": {},
"taskId": "cbc316a2-586f-4def-be79-488a1f503564",
"version": "0.*",
"name": "kubectl apply",
"refName": "",
"enabled": true,
"alwaysRun": false,
"continueOnError": false,
"timeoutInMinutes": 0,
"definitionType": null,
"overrideInputs": {},
"condition": "succeeded()",
"inputs": {
"kubernetesServiceEndpoint": "82e5971b-9ac6-42c6-ac43-211d2f6b60e4",
"namespace": "",
"command": "apply",
"useConfigurationFile": "false",
"configuration": "",
"arguments": "-f $(System.DefaultWorkingDirectory)/medquality-cordapp/k8s",
"secretType": "dockerRegistry",
"secretArguments": "",
"containerRegistryType": "Azure Container Registry",
"dockerRegistryEndpoint": "",
"azureSubscriptionEndpoint": "",
"azureContainerRegistry": "",
"secretName": "",
"forceUpdate": "true",
"configMapName": "",
"forceUpdateConfigMap": "false",
"useConfigMapFile": "false",
"configMapFile": "",
"configMapArguments": "",
"versionOrLocation": "version",
"versionSpec": "1.7.0",
"checkLatest": "false",
"specifyLocation": "",
"cwd": "$(System.DefaultWorkingDirectory)",
"outputFormat": "json",
"kubectlOutput": ""
}
}
]
</code></pre>
| 1174 | <p>I can see you are using <code>kubernetesServiceEndpoint</code> as the Service connection type in Kubectl task.</p>
<blockquote>
<p>Once I destroyed the cluster I reran the pipeline, I get issues....</p>
</blockquote>
<p>If the cluster was destroyed, the <code>kubernetesServiceEndpoint</code> in azure devops is still connected to the original cluster. A Kubectl task which uses the original <code>kubernetesServiceEndpoint</code> is still looking for the old cluster, and it will fail with the above error, since the old cluster was destroyed.</p>
<p>You can fix this issue by updating the <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml#sep-kuber" rel="nofollow noreferrer">kubernetesServiceEndpoint</a> in azure devops with the newly created cluster:</p>
<p>Go to Azure devops <strong>Project settings</strong>--><strong>Service connections</strong>--> Find your Kubernetes Service connection-->Click <strong>Edit</strong> to update the configuration.</p>
<p>But if your kubernetes cluster gets destroyed and recreated frequently, I would suggest using <strong>Azure Resource Manager</strong> as the Service connection type to connect to the cluster in the Kubectl task. See the screenshot below.</p>
<p>By using <code>azureSubscriptionEndpoint</code> and specifying <code>azureResourceGroup</code>, as long as the cluster's name does not change, it does not matter how many times the cluster is recreated.</p>
<p><a href="https://i.stack.imgur.com/dOuL8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dOuL8.png" alt="enter image description here" /></a></p>
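<p>For reference, if the same step were defined in a YAML pipeline, it would look roughly like this (a sketch; the service connection, resource group and cluster names are placeholders):</p>
<pre><code>- task: Kubernetes@1
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscriptionEndpoint: 'my-azure-subscription-connection'
    azureResourceGroup: 'my-resource-group'
    kubernetesCluster: 'my-aks-cluster'
    command: 'apply'
    arguments: '-f $(System.DefaultWorkingDirectory)/medquality-cordapp/k8s'
</code></pre>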
<p>See document to create an <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops" rel="nofollow noreferrer">Azure Resource Manager service connection</a></p>
| Levi Lu-MSFT |
<p>I have tried the <code>az aks show</code> and <code>az aks list</code> commands but they don't show the names of the attached ACRs.
I ran the command to attach the ACR using <code>az aks update --attach-acr</code> and it shows that it's attached.</p>
<p><img src="https://i.stack.imgur.com/zTnTd.png" alt="AFter running the az aks update" /></p>
<p>Can I see through the CLI or portal that the acr is in the cluster?</p>
| Joby Santhosh | <p>I am afraid you cannot see the attached ACR in the cluster UI portal.</p>
<p>When you attached the ACR to the AKS cluster using the <code>az aks update --attach-acr</code> command,
<strong>it just assigned the ACR's AcrPull role to the service principal associated with the AKS cluster.</strong> See <a href="https://learn.microsoft.com/en-us/azure/aks/cluster-container-registry-integration" rel="noreferrer">here</a> for more information.</p>
<p>You can get the service principal which associated to the AKS Cluster by command <code>az aks list</code></p>
<p><a href="https://i.stack.imgur.com/gJSlj.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gJSlj.png" alt="enter image description here" /></a></p>
<p>See below screenshot. The AcrPull role was assigned to the service principal associated to the AKS Cluster.</p>
<p><a href="https://i.stack.imgur.com/jp7ei.png" rel="noreferrer"><img src="https://i.stack.imgur.com/jp7ei.png" alt="enter image description here" /></a></p>
<p>If you want to use the Azure CLI to check which ACR is attached to the AKS cluster, you can list all the ACRs and then loop through them to check which one has the AcrPull role assigned to the AKS service principal. See the example below:</p>
<pre><code># list all the ACRs and get their IDs
az acr list
az role assignment list --assignee <Aks service principal ID> --scope <ACR ID>
</code></pre>
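<p>A rough shell sketch of that loop (untested; the resource group and cluster names are placeholders to adjust to your setup):</p>
<pre><code># get the service principal (client id) associated with the AKS cluster
SP_ID=$(az aks show -g myResourceGroup -n myAKSCluster \
        --query servicePrincipalProfile.clientId -o tsv)

# check every ACR for an AcrPull assignment to that principal
for ACR_ID in $(az acr list --query "[].id" -o tsv); do
  echo "Checking $ACR_ID"
  az role assignment list --assignee "$SP_ID" --scope "$ACR_ID" \
    --query "[?roleDefinitionName=='AcrPull'].scope" -o tsv
done
</code></pre>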
| Levi Lu-MSFT |
<p>I'm using K8S <code>1.14</code> and Helm <code>3.3.1</code>.</p>
<p>I have an app which <strong>works when deployed without probes</strong>. Then I set two trivial probes:</p>
<pre class="lang-yaml prettyprint-override"><code> livenessProbe:
exec:
command:
- ls
- /mnt
initialDelaySeconds: 5
periodSeconds: 5
readinessProbe:
exec:
command:
- ls
- /mnt
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
<p>When I deploy via <code>helm upgrade</code>, the command eventually (~5 mins) fails with:</p>
<pre><code>Error: UPGRADE FAILED: release my-app failed, and has been rolled back due to atomic being set: timed out waiting for the condition
</code></pre>
<p>But in the events log there is no trace of any probe:</p>
<pre><code>5m21s Normal ScalingReplicaSet deployment/my-app Scaled up replica set my-app-7 to 1
5m21s Normal Scheduled pod/my-app-7-6 Successfully assigned default/my-app-7-6 to gke-foo-testing-foo-testing-node-po-111-r0cu
5m21s Normal LoadBalancerNegNotReady pod/my-app-7-6 Waiting for pod to become healthy in at least one of the NEG(s): [k8s1-222-default-my-app-80-54]
5m21s Normal SuccessfulCreate replicaset/my-app-7 Created pod: my-app-7-6
5m20s Normal Pulling pod/my-app-7-6 Pulling image "my-registry/my-app:v0.1"
5m20s Normal Pulled pod/my-app-7-6 Successfully pulled image "my-registry/my-app:v0.1"
5m20s Normal Created pod/my-app-7-6 Created container my-app
5m20s Normal Started pod/my-app-7-6 Started container my-app
5m15s Normal Attach service/my-app Attach 1 network endpoint(s) (NEG "k8s1-222-default-my-app-80-54" in zone "europe-west3-a")
19s Normal ScalingReplicaSet deployment/my-app Scaled down replica set my-app-7 to 0
19s Normal SuccessfulDelete replicaset/my-app-7 Deleted pod: my-app-7-6
19s Normal Killing pod/my-app-7-6 Stopping container my-app
</code></pre>
<p>Hence the question: what are the probes doing and where?</p>
| pietro909 | <p>I don't know if your image includes bash, but if you just want to verify that the directory exists, you can do the same thing using other shell commands; try this:</p>
<pre><code> livenessProbe:
exec:
command:
- /bin/bash
- -c
- ls /mnt
initialDelaySeconds: 5
periodSeconds: 5
readinessProbe:
exec:
command:
- /bin/bash
- -c
- ls /mnt
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
<p>In bash you can also try to use the <code>test</code> built-in function:</p>
<p><code>[[ -d /mnt ]]</code>: the <code>-d</code> flag verifies that the directory <code>/mnt</code> exists.</p>
<p>As an alternative, there is also the command <code>stat</code>:</p>
<p><code>stat /mnt</code></p>
<p>If you want to check whether the directory has a specific file, use the complete path with the filename included.</p>
| Mr.KoopaKiller |
<p>I tested those queries. The first query was half the value of the second query:</p>
<p><code>sum(container_memory_working_set_bytes{image!="",name=~"^k8s_.*",pod=~"$pod"}) by (pod)</code></p>
<p>and</p>
<p><code>sum (container_memory_working_set_bytes{pod=~"$pod"}) by (pod)</code></p>
<p>Why is writing <code>image!="",name=~"^k8s_.*"</code> halving the value?</p>
| redmagic0099 | <p>That's because <code>cAdvisor</code> takes these values from <code>cgroups</code>. The structure of cgroups looks like a tree, where there are branches for each pod, and every pod has child cgroups for each container in it. This is how it looks (<code>systemd-cgls</code>):</p>
<pre><code>├─kubepods
│ └─podb0c98680-4c6d-4788-95ef-0ea8b43121d4
│   ├─799e2d3f0afe0e43d8657a245fe1e97edfdcdd00a10f8a57277d310a7ecf4364
│   │ └─5479 /bin/node_exporter --path.rootfs=/host --web.listen-address=0.0.0.0:9100
│   └─09ce1040f746fb497d5398ad0b2fabed1e4b55cde7fb30202373e26537ac750a
│     └─5054 /pause
</code></pre>
<p>The resource value for each cgroup is <em><strong>a cumulative for all its children</strong></em>. That's how you got memory utilization doubled, you just summarized the total pod consumption with each container in it.</p>
<p>If you execute those queries in Prometheus, you would notice the duplicated values:</p>
<pre><code>{pod="cluster-autoscaler-58b9c77456-krl5m"} 59076608
{container="POD",pod="cluster-autoscaler-58b9c77456-krl5m"} 708608
{container="cluster-autoscaler",pod="cluster-autoscaler-58b9c77456-krl5m"} 58368000
</code></pre>
<p>The first one is the parent cgroup. As you see, it has no <code>container</code> label. The two others in this example are <a href="https://stackoverflow.com/questions/48651269/what-are-the-pause-containers">the pause container</a> and the actual application. Combining their values you will get the value of the parent cgroup:</p>
<pre class="lang-py prettyprint-override"><code>>>> 708608 + 58368000 == 59076608
True
</code></pre>
<p>There are multiple ways to fix the problem. For example, you can exclude metrics without container name by using <code>container!=""</code> label filter.</p>
<p>Another (more difficult) way to solve this is to drop the cumulative metrics in <code>metric_relabel_configs</code> (prometheus.yml). I.e. you can write a relabeling rule that will drop metrics without a container name. <strong>Be careful with this one</strong>, you may accidentally drop all non-cadvisor metrics.</p>
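<p>For illustration, a hedged sketch of such a rule that drops only the cAdvisor <code>container_*</code> series with an empty <code>container</code> label, so unrelated metrics are left alone:</p>
<pre><code># prometheus.yml, inside the cAdvisor/kubelet scrape job
metric_relabel_configs:
  - source_labels: [__name__, container]
    # matches container_* metrics whose "container" label is empty
    regex: 'container_.*;'
    action: drop
</code></pre>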
| anemyte |
<p>I have a running Kubernetes cluster consisting of 3 nodes and one master running on VPS servers; each node and the master has its own public IP and a floating IP assigned to it, and all these IPs are different from each other.</p>
<p>I am trying to configure MetalLB as a load balancer for my Kubernetes cluster, but I don't know how I can set the MetalLB IP range in the configuration file.</p>
<p>Here are example IPs of my servers:</p>
<ul>
<li>115.203.150.255</li>
<li>94.217.238.58</li>
<li>46.12.5.65</li>
<li>76.47.79.44</li>
</ul>
<p>As you can see, each IP is different, so how can I set the IP ranges in the MetalLB config map?</p>
<p>Here an example of a config map</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- PUBLIC_IP-PUBLIC_IP
</code></pre>
| tarek salem | <p>In the MetalLB documentation it is mentioned that you can request specific IPs using the <code>metallb.universe.tf/address-pool</code> annotation. See <a href="https://metallb.universe.tf/usage/#requesting-specific-ips" rel="nofollow noreferrer">here</a>.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
annotations:
metallb.universe.tf/address-pool: production-public-ips
spec:
ports:
- port: 80
targetPort: 80
selector:
app: nginx
type: LoadBalancer
</code></pre>
<p>The <code>production-public-ips</code> must be configured as showed <a href="https://metallb.universe.tf/usage/example/" rel="nofollow noreferrer">here</a>.</p>
<p>To <a href="https://metallb.universe.tf/configuration/#layer-2-configuration" rel="nofollow noreferrer">configure MetalLB</a>, you should create a configmap with your IPs. Since you don't have a contiguous range, you can use <code>/32</code> as the subnet mask for your IPs, like in the example below.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: production-public-ips
protocol: layer2
addresses:
- 115.203.150.255/32
- 94.217.238.58/32
- 46.12.5.65/32
- 76.47.79.44/32
</code></pre>
<p>It should work for your scenario.</p>
| Mr.KoopaKiller |
<p>I'm trying to understand the security implications for using self-signed certificates for a Kubernetes validating webhook.</p>
<p>If I'm understanding correctly, the certificate is simply used to be able to serve the validating webhook server over https. When the Kubernetes api-server receives a request that matches the configuration for a validating webhook, it'll first check with the validating webhook server over https. If your validating webhook server lives on the Kubernetes cluster (is not external) then this traffic is all internal to a Kubernetes cluster. If this is the case is it problematic that the cert is self-signed?</p>
| user215997 | <blockquote>
<p>If I'm understanding correctly, the certificate is simply used to be
able to serve the validating webhook server over https.</p>
</blockquote>
<p>Basically yes.</p>
<blockquote>
<p>If your validating webhook server lives on the Kubernetes cluster (is
not external) then this traffic is all internal to a Kubernetes
cluster. If this is the case is it problematic that the cert is
self-signed?</p>
</blockquote>
<p>If the issuing process is handled properly and in a secure manner, self-signed certs shouldn't be a problem at all. Compare with <a href="https://gist.github.com/tirumaraiselvan/b7eb1831d25dd9d59a785c11bd46c84b" rel="nofollow noreferrer">this</a> example.</p>
| mario |
<p>I have the following <code>MutatingWebhookConfiguration</code></p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: example-webhook
webhooks:
- name: example-webhook.default.svc.cluster.local
admissionReviewVersions:
- "v1beta1"
sideEffects: "None"
timeoutSeconds: 30
objectSelector:
matchLabels:
example-webhook-enabled: "true"
clientConfig:
service:
name: example-webhook
namespace: default
path: "/mutate"
caBundle: "LS0tLS1CR..."
rules:
- operations: [ "CREATE" ]
apiGroups: [""]
apiVersions: ["v1"]
resources: ["pods"]
</code></pre>
<p>I want to inject the <code>webhook</code> pod in an <code>istio</code> enabled namespace with <code>istio</code> having strict TLS mode on.</p>
<p>Therefore, (I thought) TLS should not be needed in my <code>example-webhook</code> service so it is crafted as follows:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: example-webhook
namespace: default
spec:
selector:
app: example-webhook
ports:
- port: 80
targetPort: webhook
name: webhook
</code></pre>
<p>However when creating a <code>Pod</code> (that does indeed trigger the webhook) I get the following error:</p>
<pre><code>$ k create -f demo-pod.yaml
Error from server (InternalError): error when creating "demo-pod.yaml": Internal error occurred: failed calling webhook "example-webhook.default.svc.cluster.local": Post "https://example-webhook.default.svc:443/mutate?timeout=30s": no service port 443 found for service "example-webhook"
</code></pre>
<p>Can't I configure the webhook not to be called on <code>443</code> but rather on <code>80</code>? Either way TLS termination is done by the <code>istio</code> sidecar.</p>
<p>Is there a way around this using <code>VirtualService</code> / <code>DestinationRule</code>?</p>
<p><strong>edit</strong>: on top of that, why is it trying to reach the service in the <code>example-webhook.default.svc</code> endpoint? (while it should be doing so in <code>example-webhook.default.svc.cluster.local</code>) ?</p>
<h3>Update 1</h3>
<p>I have tried to use <code>https</code> as follows:</p>
<p>I have created a certificate and private key, using istio's CA.</p>
<p>I can verify that my DNS names in the cert are valid as follows (from another pod)</p>
<pre><code>echo | openssl s_client -showcerts -servername example-webhook.default.svc -connect example-webhook.default.svc:443 2>/dev/null | openssl x509 -inform pem -noout -text
</code></pre>
<pre><code>...
Subject: C = GR, ST = Attica, L = Athens, O = Engineering, OU = FOO, CN = *.cluster.local, emailAddress = [email protected]
...
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:*.default.svc.cluster.local, DNS:example-webhook, DNS:example-webhook.default.svc
...
</code></pre>
<p>but now pod creation fails as follows:</p>
<pre><code>$ k create -f demo-pod.yaml
Error from server (InternalError): error when creating "demo-pod.yaml": Internal error occurred: failed calling webhook "example-webhook.default.svc.cluster.local": Post "https://example-webhook.default.svc:443/mutate?timeout=30s": x509: certificate is not valid for any names, but wanted to match example-webhook.default.svc
</code></pre>
<h3>Update 2</h3>
<p>The fact that the certs the webhook pod are running with were appropriately created using the <code>istio</code> CA cert, is also validated.</p>
<pre><code>curl --cacert istio_cert https://example-webhook.default.svc
Test
</code></pre>
<p>where <code>istio_cert</code> is the file containing istio's CA certificate</p>
<p>What is going on?</p>
| pkaramol | <p>Did you try adding the <strong>port</strong> attribute in your MutatingWebhookConfiguration</p>
<pre class="lang-yaml prettyprint-override"><code>clientConfig:
service:
name: example-webhook
namespace: default
path: "/mutate"
port: 80
</code></pre>
| Peter Claes |
<p>Have a dockerized python script and using docker-compose for local development. Need to move it to GKE but unsure about which GCP services can be used for its persistent volume.</p>
<pre><code># docker-compose.yaml
version: "3"
services:
python-obj-01:
image: python-obj
build:
context: .
dockerfile: dockerfile
container_name: python-obj-01
volumes:
- ./data/inputs:/home/appuser/data/inputs
- ./data/outputs:/home/appuser/data/outputs
- ./data/model:/home/appuser/data/obj_detection
</code></pre>
<p>There are 3 mount points from my local machine:</p>
<ul>
<li>./data/input : Reads and process the input data</li>
<li>./data/output: Writes the results</li>
<li>./data/model : Reads the models of input data</li>
</ul>
<p>The python script globs to find files in ./data/inputs, processes them, and creates sub-directories and files in ./data/output.</p>
<p>Besides Cloud Filestore, can Cloud Storage bucket able to fulfill the requirements? or other GCP services?</p>
| jj pan | <blockquote>
<p>can Cloud Storage bucket able to fulfill the requirements?</p>
</blockquote>
<p>If your requirement is to work with a volume like a filesystem, the answer is <strong>NO</strong>.</p>
<p><strong>Why?</strong></p>
<p>Cloud Storage is an object store and can't be used as a normal filesystem.
It is perfectly fine for storing and reading files, but in order to get the full power and features of Cloud Storage, you would need to rewrite your application to use the <a href="https://googleapis.dev/python/storage/latest/index.html" rel="nofollow noreferrer">cloud storage api</a>, for example.</p>
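<p>For illustration, a minimal sketch of what that could look like with the Python client (assuming the <code>google-cloud-storage</code> package; bucket and object names are examples):</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import storage  # assumes the google-cloud-storage package is installed

client = storage.Client()
bucket = client.bucket("my-data-bucket")  # example bucket name

# "Globbing" inputs becomes listing objects under a prefix instead of a filesystem path
for blob in client.list_blobs("my-data-bucket", prefix="inputs/"):
    local_path = "/tmp/" + blob.name.rsplit("/", 1)[-1]
    blob.download_to_filename(local_path)          # read the input object
    # ... process local_path here ...
    out = bucket.blob("outputs/" + blob.name.rsplit("/", 1)[-1])
    out.upload_from_filename(local_path)           # write the result object
</code></pre>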
<p>In <strong>GCP</strong>, the default persistent storage is <strong>Compute Engine persistent disks</strong>; you can also use an NFS service such as <a href="https://medium.com/faun/using-kubernetes-google-firestore-in-gke-to-orchestrate-scrapers-using-scrapy-pt-1-cdf7bb651341" rel="nofollow noreferrer">Filestore</a>.</p>
<p>The best solution depends on how your application works and how important this data is for you.</p>
<p>Here you can read more about <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">volumes</a> in GCP.</p>
| Mr.KoopaKiller |
<p>Usually when I deploy a Simple HTTPS server in VM I do</p>
<p><strong>Create Certificate with ip</strong></p>
<pre><code>$ openssl req -new -x509 -keyout private_key.pem -out public_cert.pem -days 365 -nodes
Generating a RSA private key
..+++++
.................................+++++
writing new private key to 'private_key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:IN
State or Province Name (full name) [Some-State]:Tamil Nadu
Locality Name (eg, city) []:Chennai
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Company ,Inc
Organizational Unit Name (eg, section) []: company division
Common Name (e.g. server FQDN or YOUR name) []:35.222.65.55 <----------------------- this ip should be server ip very important
Email Address []:
</code></pre>
<p><strong>Start Simple HTTPS Python Server</strong></p>
<pre><code># libraries needed:
from http.server import HTTPServer, SimpleHTTPRequestHandler
import ssl , socket
# address set
server_ip = '0.0.0.0'
server_port = 3389
# configuring HTTP -> HTTPS
httpd = HTTPServer((server_ip, server_port), SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket, certfile='./public_cert.pem',keyfile='./private_key.pem', server_side=True)
httpd.serve_forever()
</code></pre>
<p>now this works fine for</p>
<p><strong>Curl from local</strong></p>
<pre><code>curl --cacert /Users/padmanabanpr/Downloads/public_cert.pem --cert-type PEM https://35.222.65.55:3389
</code></pre>
<p>now how to deploy the same to kubernetes cluster and access via load-balancer?</p>
<p>Assuming i have</p>
<ul>
<li>public docker nginx container with write access , python3 , and this python https server file</li>
<li>deployment yaml with nginx</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: external-nginx-server
labels:
app: external-nginx-server
spec:
replicas: 1
selector:
matchLabels:
app: external-nginx-server
template:
metadata:
labels:
app: external-nginx-server
spec:
containers:
- name: external-nginx-server
image: <docker nginx public image>
ports:
- containerPort: 3389
---
kind: Service
apiVersion: v1
metadata:
name: external-nginx-service
spec:
selector:
app: external-nginx-server
ports:
- protocol: TCP
port: 443
name: https
targetPort: 3389
type: LoadBalancer
</code></pre>
| Pradeep Padmanaban C | <p>To do the same in Kubernetes you need to create a Secret with the certificate in it, like this one:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Secret
apiVersion: v1
metadata:
name: my-tls-secret
data:
tls.crt: BASE64-ENCODED CERTIFICATE
tls.key: BASE64-ENCODED KEY
</code></pre>
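<p>As a side note, the same secret can be created directly from the certificate and key files generated earlier, without base64-encoding them by hand (the secret name here is just an example):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create secret tls my-tls-secret \
  --cert=public_cert.pem \
  --key=private_key.pem
</code></pre>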
<p>Then you need to mount it inside all the pods that require it:</p>
<pre class="lang-yaml prettyprint-override"><code># deployment.yml
volumes:
- name: my-tls
secret:
secretName: my-tls-secret
containers:
- name: external-nginx-server
image: <docker nginx public image>
volumeMounts:
- name: my-tls
# Here will appear the "tls.crt" and "tls.key", defined in the secret's data block.
# Kubernetes will take care to decode the contents and make them separate files.
mountPath: /etc/nginx/tls
</code></pre>
<p><strong><em>But this is pain to manage manually!</em></strong> You will have to track the certificate expiration date, renew the secret, restart the pods... There is a better way.</p>
<p>You can install an ingress conroller (<a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX</a>, for example) and the <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">certificate manager</a> for Kubernetes. The certificate manager will take care to issue certificates (via LetsEncrypt or other providers), save them as secrets, and update them before the expiration date.</p>
<p>An ingress controller is a centralized endpoint to your cluster. You can make it to handle connections to multiple applications, just like with normal NGINX installation. The benefit of it in this case is that you will not have to mount certificates if there is a new one or an update. The ingress controller will take care of that for you.</p>
<p>The links above will lead you to the documentation, where you can find the details on how to install and use these.</p>
| anemyte |
<p>I am working on a POC where I have to containerize a part of an application, e.g. I need to containerize the "add to cart" functionality in an e-commerce website. I see various examples, such as <a href="https://dzone.com/articles/docker-for-devs-containerizing-your-application" rel="nofollow noreferrer">https://dzone.com/articles/docker-for-devs-containerizing-your-application</a>, for containerizing a whole application, but how do I do it for a part of it when my functionality has dependencies on other code parts as well?</p>
<p>Any pointers will be very helpful as I am totally stuck and don't see similar query anywhere else.</p>
<p>Thanks</p>
| Anand | <p>IMHO, the first thing you need to do is read more about the concept of <code>microservices</code> and how it works.</p>
<p>Basically, the idea is to decouple your monolithic application into as many services as you want and deploy them separately from each other. Each of these parts will communicate with the other parts via API calls.</p>
<p>There are a lot of benefits to using a microservices architecture, and there is no single "best way" to do this; it depends on how your application works and how your team is organized.</p>
<p>I can recommend a couple of links about microservices:</p>
<p><a href="https://medium.com/hashmapinc/the-what-why-and-how-of-a-microservices-architecture-4179579423a9" rel="nofollow noreferrer">https://medium.com/hashmapinc/the-what-why-and-how-of-a-microservices-architecture-4179579423a9</a></p>
<p><a href="https://opensource.com/resources/what-are-microservices" rel="nofollow noreferrer">https://opensource.com/resources/what-are-microservices</a></p>
<p><a href="https://en.wikipedia.org/wiki/Microservices" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Microservices</a></p>
| Mr.KoopaKiller |
<p>I have Keycloak deployed in Kubernetes using the official codecentric chart. Now I want to make Keycloak logs into json format in order to export them to Kibana.</p>
| REDOUANE | <p>A comment to the original reply pointed to a cli command to do this. </p>
<pre><code> cli:
# Custom CLI script
custom: |
/subsystem=logging/json-formatter=json:add(exception-output-type=formatted, pretty-print=false, meta-data={label=value})
/subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=json)
</code></pre>
| REDOUANE |
<p>I have a Ruby script that connects to MongoDB Atlas for getting some data. It does work perfectly if I:</p>
<ol>
<li>Run the script locally</li>
<li>Run the script locally with docker</li>
<li>Run the script in a separated AWS EC2 Instance</li>
<li>Run the script in docker inside a seperated AWS EC2 instance</li>
<li>Run the script inside local cluster with minikube.</li>
</ol>
<p>However it doesn't work inside kubernetes, I am using EKS.</p>
<p>I have tried all items listed before and make sure the user exists on MongoDB Atlas.</p>
<p>This is pretty much the output that I receive:</p>
<pre><code>User moon (mechanism: scram) is not authorized to access test (used mechanism: SCRAM-SHA-1) (Mongo::Auth::Unauthorized)
</code></pre>
<p>I'd appreciate any input.</p>
| xorb | <p>Solved by adding <code>&authSource=admin</code> to the connection string.</p>
| xorb |
<p>I am trying to put two nodejs applications into the same pod, because normally they should sit in the same machine, and are unfortunately heavily coupled together in such a way that each of them is looking for the folder of the other (pos/app.js needs /pos-service, and pos-service/app.js needs /pos)</p>
<p>In the end, the folder is supposed to contain:</p>
<pre><code>/pos
/pos-service
</code></pre>
<p>Their volume doesn't need to be persistent, so I tried sharing their volumes with an emptyDir like the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: pos-deployment
labels:
app: pos
spec:
replicas: 1
selector:
matchLabels:
app: pos
template:
metadata:
labels:
app: pos
spec:
volumes:
- name: shared-data
emptyDir: {}
containers:
- name: pos-service
image: pos-service:0.0.1
volumeMounts:
- name: shared-data
mountPath: /pos-service
- name: pos
image: pos:0.0.3
volumeMounts:
- name: shared-data
mountPath: /pos
</code></pre>
<p>However, when the pod is launched, and I exec into each of the containers, they still seem to be isolated and eachother's folders can't be seen</p>
<p>I would appreciate any help, thanks</p>
| Eden Dupont | <p><em>This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.</em></p>
<p>Since this issue has already been solved or rather clarified as in fact there is nothing to be solved here, let's post a <strong>Community Wiki</strong> answer as it's partially based on comments of a few different users.</p>
<p>As <a href="https://stackoverflow.com/users/1318694/matt">Matt</a> and <a href="https://stackoverflow.com/users/10008173/david-maze">David Maze</a> have already mentioned, it works as expected and in your example there is nothing that copies any content to your <code>emptyDir</code> volume:</p>
<blockquote>
<p>With just the YAML you've shown, nothing ever copies anything into the
emptyDir volume, unless the images' startup knows to do that. β David
Maze Dec 28 '20 at 12:45</p>
</blockquote>
<p>And as the name itselt may suggest, <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer"><code>emptyDir</code></a> comes totally empty, so it's your task to pre-populate it with the desired data. It can be done with the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a> by temporarily mounting your <code>emptyDir</code> to a different mount point e.g. <code>/mnt/my-epmty-dir</code> and copying the content of specific directory or directories already present in your container e.g. <code>/pos</code> and <code>/pos-service</code> as in your example and then mounting it again to the desired location. Take a look at <a href="https://stackoverflow.com/a/63286102/11714114">this example</a>, presented in one of my older answers as it can be done in the very same way. Your <code>Deployment</code> may look as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: pos-deployment
labels:
app: pos
spec:
replicas: 1
selector:
matchLabels:
app: pos
template:
metadata:
labels:
app: pos
spec:
volumes:
- name: shared-data
emptyDir: {}
initContainers:
- name: pre-populate-empty-dir-1
image: pos-service:0.0.1
command: ['sh', '-c', 'cp -a /pos-service/* /mnt/empty-dir-content/']
volumeMounts:
- name: shared-data
mountPath: "/mnt/empty-dir-content/"
- name: pre-populate-empty-dir-2
image: pos:0.0.3
command: ['sh', '-c', 'cp -a /pos/* /mnt/empty-dir-content/']
volumeMounts:
- name: shared-data
mountPath: "/mnt/empty-dir-content/"
containers:
- name: pos-service
image: pos-service:0.0.1
volumeMounts:
- name: shared-data
mountPath: /pos-service
- name: pos
image: pos:0.0.3
volumeMounts:
- name: shared-data
mountPath: /pos
</code></pre>
<p>It's worth mentioning that there is nothing surprising here as it is exacly how <code>mount</code> works on <strong>Linux</strong> or other <strong>nix-based</strong> operating systems.</p>
<p>If you have e.g. <code>/var/log/your-app</code> on your main disk, populated with logs and then you mount a new, empty disk defining as its mountpoint <code>/var/log/your-app</code>, you won't see any content there. It won't be deleted from its original location on your main disc, it will simply become unavailable as in this location now you've mounted completely different volume (which happens to be empty or may have totally different content). When you unmount and visit your <code>/var/log/your-app</code> again, you'll see its original content. I hope it's all clear.</p>
| mario |
<p>During the deployment of my application to Kubernetes, I come across this kind of problem:</p>
<pre><code>Waiting for deployment "yourapplication" rollout to finish: 0 of 1 updated
replicas are available...
Waiting for deployment spec update to be observed...
Waiting for deployment "yourapplication" rollout to finish: 1 out of 2 new
replicas have been updated...
Waiting for deployment "yourapplication" rollout to finish: 1 out of 2 new
replicas have been updated...
Waiting for deployment "yourapplication" rollout to finish: 0 of 2 updated
replicas are available...
</code></pre>
<p>I also get the following error message:</p>
<pre><code>**2019-06-13T12:01:41.0216723Z error: deployment "yourapplication" exceeded
its progress deadline
2019-06-13T12:01:41.0382482Z ##[error]error: deployment "yourapplication"
exceeded its progress deadline
2019-06-13T12:01:41.0396315Z ##[error]/usr/local/bin/kubectl failed with
return code: 1
2019-06-13T12:01:41.0399786Z ##[section]Finishing: kubectl rollout
**
</code></pre>
| Tonyukuk | <blockquote>
<p>**2019-06-13T12:01:41.0216723Z error: deployment "yourapplication" exceeded
its progress deadline
2019-06-13T12:01:41.0382482Z ##[error]error: deployment "yourapplication"
exceeded its progress deadline</p>
</blockquote>
<p>You can try increasing the progress deadline of your deployment:</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds</a></p>
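<p>For illustration, a minimal sketch of where that field lives in the Deployment spec (the value is arbitrary; the default is 600 seconds):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: yourapplication
spec:
  progressDeadlineSeconds: 1200   # give the rollout 20 minutes before it is marked as failed
  # the rest of the spec (replicas, selector, template) stays unchanged
</code></pre>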
| Anton Ahlroth |
<p>I'm trying to get all the secrets in the cluster of type <code>helm.sh/release.v1</code>:</p>
<pre><code>$ curl -X GET $APISERVER/api/v1/secrets --header "Authorization: Bearer $TOKEN" --insecure
{
"kind": "SecretList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/secrets",
"resourceVersion": "442181"
},
"items": [
{
"metadata": {
...
},
"data": {
...
},
"type": "helm.sh/release.v1"
},
{
"metadata": {
...
},
"data": {
...
},
"type": "kubernetes.io/service-account-token"
},
{
"metadata": {
...
},
"data": {
...
},
"type": "kubernetes.io/service-account-token"
},
...
}
</code></pre>
<p>I can use the command above and then filter by myself (<code>jq</code> or whatever) but I wonder if there's an option to filter in the API by adding query parameters or something, for example (didn't work):</p>
<pre><code>curl -X GET $APISERVER/api/v1/secrets?type=<value>
</code></pre>
<p>any idea how to filter by specific field? (<code>type</code>) can I also request specific fields in the response (if I don't care about the <code>data</code> for instance)?</p>
| ItayB | <blockquote>
<p>I'm going to use HTTP requests from my application (python) that runs
within a pod in the cluster. I am trying to be more efficient and ask
only for what I need (only specific type and not all secrets in the
cluster)</p>
</blockquote>
<p>If your application is written in <strong>Python</strong>, maybe it's a good idea to use <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Client library</a> to get the secrets ?</p>
<p>If you want to get all the secrets in the cluster of type <code>helm.sh/release.v1</code>, you can do it with the following <strong>Python</strong> code:</p>
<pre><code>from kubernetes import client , config
config.load_kube_config()
v1 = client.CoreV1Api()
list_secrets = v1.list_secret_for_all_namespaces(field_selector='type=helm.sh/release.v1')
</code></pre>
<p>If you also want to count them, use:</p>
<pre><code>print(len(list_secrets.items))
</code></pre>
<p>to print secret's name use:</p>
<pre><code>print(list_secrets.items[0].metadata.name)
</code></pre>
<p>to retrieve it's data:</p>
<pre><code>print(list_secrets.items[0].data)
</code></pre>
<p>and so on...</p>
<p>More details, including arguments that can be used with this method, you can find <a href="https://raw.githubusercontent.com/kubernetes-client/python/master/kubernetes/docs/CoreV1Api.md" rel="nofollow noreferrer">here</a> (just search for <code>list_secret_for_all_namespaces</code>):</p>
<pre><code># **list_secret_for_all_namespaces**
> V1SecretList list_secret_for_all_namespaces(allow_watch_bookmarks=allow_watch_bookmarks, _continue=_continue, field_selector=field_selector, label_selector=label_selector, limit=limit, pretty=pretty, resource_version=resource_version, timeout_seconds=timeout_seconds, watch=watch)
</code></pre>
| mario |
<pre><code>from kubernetes import client, config
config.load_kube_config()
api = client.AppsV1Api()
deployment = api.read_namespaced_deployment(name='foo', namespace='bar')
</code></pre>
<p>I tried to add an affinity object to the deployment spec and I got this error:</p>
<pre><code>deployment.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms = [{'nodeSelectorTerms':[{'matchExpressions':[{'key': 'kubernetes.io/hostname','operator': 'NotIn','values': ["awesome-node"]}]}]}]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'V1DeploymentSpec' object has no attribute 'affinity'
</code></pre>
| Pradeep Padmanaban C | <p>You're looking at the wrong place. Affinity belongs to pod template spec (<code>deployment.spec.template.spec.affinity</code>) while you're looking at deployment spec (<code>deployment.spec.affinity</code>).</p>
<p>Here's how to completely replace existing affinity (even if it's None):</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config()
api = client.AppsV1Api()
# read current state
deployment = api.read_namespaced_deployment(name='foo', namespace='bar')
# check current state
#print(deployment.spec.template.spec.affinity)
# create affinity objects
terms = client.models.V1NodeSelectorTerm(
match_expressions=[
{'key': 'kubernetes.io/hostname',
'operator': 'NotIn',
'values': ["awesome-node"]}
]
)
node_selector = client.models.V1NodeSelector(node_selector_terms=[terms])
node_affinity = client.models.V1NodeAffinity(
required_during_scheduling_ignored_during_execution=node_selector
)
affinity = client.models.V1Affinity(node_affinity=node_affinity)
# replace affinity in the deployment object
deployment.spec.template.spec.affinity = affinity
# finally, push the updated deployment configuration to the API-server
api.replace_namespaced_deployment(name=deployment.metadata.name,
namespace=deployment.metadata.namespace,
body=deployment)
</code></pre>
| anemyte |
<p>I'm trying to leverage a local volume dynamic provisioner for k8s, Rancher's one, with multiple instances, each with its own storage class so that I can provide multiple types of local volumes based on their performance (e.g. ssd, hdd ,etc).</p>
<p>The underlying infrastructure is not symmetric; some nodes only have ssds, some only hdds, some of them both.</p>
<p>I know that I can hint the scheduler to select the proper nodes by providing node affinity rules for pods.</p>
<p>But, is there a better way to address this problem at the level of provisioner / storage class only ? E.g., make a storage class only available for a subset of the cluster nodes.</p>
| Laurentiu Soica | <blockquote>
<p>I know that I can hint the scheduler to select the proper nodes by
providing node affinity rules for pods.</p>
</blockquote>
<p>There is no need to define node affinity rules on <code>Pod</code> level when using <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="noreferrer">local</a> persistent volumes. Node affinity can be specified in <code>PersistentVolume</code> definition.</p>
<blockquote>
<p>But, is there a better way to address this problem at the level of
provisioner / storage class only ? E.g., make a storage class only
available for a subset of the cluster nodes.</p>
</blockquote>
<p>No, it cannot be specified on a <code>StorageClass</code> level. Neither you can make a <code>StorageClass</code> available only for a subset of nodes.</p>
<p>But when it comes to a provisioner, I would say yes, it should be feasible as one of the major storage provisioner tasks is creating matching <code>PersistentVolume</code> objects in response to <code>PersistentVolumeClaim</code> created by the user. You can read about it <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="noreferrer">here</a>:</p>
<blockquote>
<p>Dynamic volume provisioning allows storage volumes to be created
on-demand. Without dynamic provisioning, cluster administrators have
to manually make calls to their cloud or storage provider to create
new storage volumes, and then create PersistentVolume objects to
represent them in Kubernetes. The dynamic provisioning feature
eliminates the need for cluster administrators to pre-provision
storage. Instead, it automatically provisions storage when it is
requested by users.</p>
</blockquote>
<p>So looking at the whole volume provision process from the very beginning it looks as follows:</p>
<p>User creates only <code>PersistenVolumeClaim</code> object, where he specifies a <code>StorageClass</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 10Gi
storageClassName: local-storage ### π
</code></pre>
<p>and it can be used in a <code>Pod</code> definition:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: nginx
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim ### π
</code></pre>
<p>So in practice, in a <code>Pod</code> definition you need only to specify the proper <code>PVC</code>. <strong>No need for defining any node-affinity rules</strong> here.</p>
<p>A <code>Pod</code> references a <code>PVC</code>, <code>PVC</code> then references a <code>StorageClass</code>, <code>StorageClass</code> references the <code>provisioner</code> that should be used:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/my-fancy-provisioner ### π
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>So in the end it is the task of a <code>provisioner</code> to create matching <code>PersistentVolume</code> object. It can look as follows:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /var/tmp/test
nodeAffinity: ### π
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ssd-node ### π
</code></pre>
<p>So a <code>Pod</code> which uses <strong>myclaim</strong> <code>PVC</code> -> which references the <strong>local-storage</strong> <code>StorageClass</code> -> which selects a proper storage <code>provisioner</code> will be automatically scheduled on the node selected in <code>PV</code> definition created by this provisioner.</p>
| mario |
<p>I'm creating an application that is using helm(v3.3.0) + k3s. A program in a container uses different configuration files. As of now there are just few config files (that I added manually before building the image) but I'd like to add the possibility to add them dynamically when the container is running and not to lose them once the container/pod is dead. In docker I'd do that by exposing a folder like this:</p>
<p><code>docker run [image] -v /host/path:/container/path</code></p>
<p>Is there an equivalent for helm?
If not, how would you suggest solving this issue without giving up helm/k3s?</p>
| LowRez | <p>In Kubernetes (Helm is just a tool for it) you need to do two things to mount host path inside container:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
volumes:
# 1. Declare a 'hostPath' volume under pod's 'volumes' key:
- name: name-me
hostPath:
path: /path/on/host
containers:
- name: foo
image: bar
# 2. Mount the declared volume inside container using volume name
volumeMounts:
- name: name-me
mountPath: /path/in/container
</code></pre>
<p>Lots of other volumes types and examples in Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/volumes" rel="nofollow noreferrer">documentation</a>.</p>
| anemyte |
<p>I've just deployed a docker registry.</p>
<p>I'm able to get access to it using:</p>
<pre><code>$ curl -I chart-example.local/v2/
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: application/json; charset=utf-8
Date: Tue, 28 Jan 2020 20:10:35 GMT
Docker-Distribution-Api-Version: registry/2.0
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
</code></pre>
<p>However, when I'm trying to push an local image to it, I'm getting this message:</p>
<pre><code>$ docker push chart-example.local/feedly:latest
The push refers to repository [chart-example.local/feedly]
Get https://chart-example.local/v2/: x509: certificate has expired or is not yet valid
</code></pre>
<p>Why docker is trying to get access using <code>https</code> instead of <code>http</code>?</p>
| Jordi | <p>Docker uses https by default for security. You can override this setting by modifying your <code>daemon.json</code> file with the following content. Do <em>not</em> use this setting in production.</p>
<pre><code> {
"insecure-registries" : ["chart-example.local"]
}
</code></pre>
<p>See this link for more information: <a href="https://docs.docker.com/registry/insecure/" rel="nofollow noreferrer">https://docs.docker.com/registry/insecure/</a></p>
| cpk |