Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I have been trying to deploy Kafka with schema registry locally using Kubernetes. However, the logs of the schema registry pod show this error message:</p>
<pre><code>ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
</code></pre>
<p>What could be the reason for this behavior?
To run Kubernetes locally, I use Minikube version v0.32.0 with Kubernetes version v1.13.0.</p>
<p>My Kafka configuration:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kafka-1
spec:
  ports:
  - name: client
    port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
      - name: kafka-data
        emptyDir: {}
      containers:
      - name: server
        image: confluent/kafka:0.10.0.0-cp1
        env:
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-1:2181
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-1
        - name: KAFKA_BROKER_ID
          value: "1"
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /var/lib/kafka
          name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema
spec:
  ports:
  - name: client
    port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
      - name: kafka-schema-registry
        image: confluent/schema-registry:3.0.0
        env:
        - name: SR_KAFKASTORE_CONNECTION_URL
          value: zookeeper-1:2181
        - name: SR_KAFKASTORE_TOPIC
          value: "_schema_registry"
        - name: SR_LISTENERS
          value: "http://0.0.0.0:8081"
        ports:
        - containerPort: 8081
</code></pre>
<p>Zookeeper configuration:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - name: client
    port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
  - name: followers
    port: 2888
  - name: election
    port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  selector:
    matchLabels:
      app: zookeeper
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
      - name: data
        emptyDir: {}
      - name: wal
        emptyDir:
          medium: Memory
      containers:
      - name: server
        image: elevy/zookeeper:v3.4.7
        env:
        - name: MYID
          value: "1"
        - name: SERVERS
          value: "zookeeper-1"
        - name: JVMFLAGS
          value: "-Xmx2G"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - mountPath: /zookeeper/data
          name: data
        - mountPath: /zookeeper/wal
          name: wal
</code></pre>
| Cassie | <pre><code>org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
</code></pre>
<p>can happen when the client tries to connect to a broker that expects SSL connections while the client config does not specify:</p>
<pre><code>security.protocol=SSL
</code></pre>
| Anders Eriksson |
<p>I am struggling to programmatically access a kubernetes cluster running on Google Cloud. I have set up a service account and pointed <code>GOOGLE_APPLICATION_CREDENTIALS</code> to a corresponding credentials file. I managed to get the cluster and credentials as follows:</p>
<pre class="lang-py prettyprint-override"><code>import google.auth
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client
credentials, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform',])
credentials.refresh(google.auth.transport.requests.Request())
cluster_manager = ClusterManagerClient(credentials=credentials)
cluster = cluster_manager.get_cluster(project, 'us-west1-b', 'clic-cluster')
</code></pre>
<p>So far so good. But then I want to start using the kubernetes client:</p>
<pre class="lang-py prettyprint-override"><code>config = client.Configuration()
config.host = f'https://{cluster.endpoint}:443'
config.verify_ssl = False
config.api_key = {"authorization": "Bearer " + credentials.token}
config.username = credentials._service_account_email
client.Configuration.set_default(config)
kub = client.CoreV1Api()
print(kub.list_pod_for_all_namespaces(watch=False))
</code></pre>
<p>And I get an error message like this:</p>
<p><em><strong>pods is forbidden: User "12341234123451234567" cannot list resource "pods" in API group "" at the cluster scope: Required "container.pods.list" permission.</strong></em></p>
<p>I found <a href="https://cloud.google.com/kubernetes-engine/docs/reference/api-permissions" rel="nofollow noreferrer">this website</a> describing the <code>container.pods.list</code>, but I don't know where I should add it, or how it relates to the API scopes <a href="https://developers.google.com/identity/protocols/googlescopes" rel="nofollow noreferrer">described here</a>.</p>
| Lucas | <p>As per the error:</p>
<blockquote>
<p>pods is forbidden: User "12341234123451234567" cannot list resource
"pods" in API group "" at the cluster scope: Required
"container.pods.list" permission.</p>
</blockquote>
<p>it seems evident that the user credentials you are trying to use do not have permission to list pods.</p>
<p>The entire list of permissions is documented at <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/iam" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/iam</a>, which states the following:</p>
<p>There are different roles that can come into play here:</p>
<ul>
<li>If you are able to get the cluster, then that permission is covered by multiple <strong>roles</strong>, such as: <code>Kubernetes Engine Cluster Admin</code>, <code>Kubernetes Engine Cluster Viewer</code>, <code>Kubernetes Engine Developer</code> & <code>Kubernetes Engine Viewer</code></li>
<li>Whereas, if you want to list pods with <code>kub.list_pod_for_all_namespaces(watch=False)</code>, you might need <code>Kubernetes Engine Viewer</code> access.</li>
</ul>
<p><a href="https://i.stack.imgur.com/wj3jS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wj3jS.png" alt="enter image description here"></a></p>
<p>You should be able to add multiple roles.</p>
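<p>For example, as a minimal sketch (the project ID and service account email below are placeholders), the <code>Kubernetes Engine Viewer</code> role (<code>roles/container.viewer</code>) could be granted to the service account with <code>gcloud</code>:</p>
<pre><code># Hypothetical project and service account names -- replace with your own.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/container.viewer"
</code></pre>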
| Nagaraj Tantri |
<p>I have a flask api and I am trying to improve it identifying which function calls in the api definition takes the longest time whenever call it. For that I am using a profiler as highlighted in this <a href="https://dev.to/yellalena/profiling-flask-application-to-improve-performance-4970" rel="nofollow noreferrer">repo</a>. Whenever I make the api call, this profiler generates a .prof file which I can use with <code>snakeviz</code> to visualize.</p>
<p>Now I am trying to run this on an AWS cluster in the same region where my database is stored, to minimize network latency. I can get the API server running and make the API calls; my question is how I can transfer the <code>.prof</code> file from the Kubernetes pod without disturbing the API server. Is there a way to start a separate shell that transfers the file to, say, an S3 bucket whenever that file is created, without killing off the API server?</p>
| monte | <p>If you want to automate this process or it's simply hard to figure out connectivity for running <code>kubectl exec ...</code>, one idea would be to use <a href="https://kubernetes.io/docs/concepts/workloads/pods/#how-pods-manage-multiple-containers" rel="nofollow noreferrer">a sidecar container</a>. So your pod contains two containers with a single <code>emptyDir</code> volume mounted into both. <code>emptyDir</code> is <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">perhaps the easiest way</a> to create a folder shared between all containers in a pod.</p>
<ul>
<li>First container is your regular Flask API</li>
<li>Second container is watching for new files in shared folder. Whenever it finds a file there it uploads this file to S3</li>
</ul>
<p>You will need to configure the profiler so it dumps its output into the shared folder.
One benefit of this approach is that you don't have to make any major modifications to the existing container running Flask.</p>
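<p>A minimal sketch of such a pod (the image names, bucket name and paths are assumptions, not taken from the question):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: flask-api
spec:
  volumes:
  - name: profiles              # shared by both containers
    emptyDir: {}
  containers:
  - name: api
    image: my-flask-api:latest        # hypothetical Flask API image
    volumeMounts:
    - name: profiles
      mountPath: /profiles            # profiler must be configured to write .prof files here
  - name: uploader
    image: amazon/aws-cli             # sidecar that ships new files to S3
    command: ["/bin/sh", "-c"]
    args:
    - while true; do aws s3 sync /profiles s3://my-profiling-bucket/; sleep 30; done
    volumeMounts:
    - name: profiles
      mountPath: /profiles
</code></pre>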
| Oleg |
<p>I've been having a hell of a time trying to figure out how to serve multiple models using a YAML configuration file for K8s.</p>
<p>I can run it directly in Bash using the following, but I am having trouble converting it to YAML.</p>
<pre><code>docker run -p 8500:8500 -p 8501:8501 \
[container id] \
--model_config_file=/models/model_config.config \
--model_config_file_poll_wait_seconds=60
</code></pre>
<p>I read that <code>model_config_file</code> can be added using a command element, but I am not sure where to put it, and I keep receiving errors about the command not being valid or the file not being found.</p>
<pre><code>command:
- '--model_config_file=/models/model_config.config'
- '--model_config_file_poll_wait_seconds=60'
</code></pre>
<p>Sample YAML config for K8s below; where would the command go, referencing the docker run command above?</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
  name: model-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tensorflow-test-rw-deployment
  namespace: model-test
spec:
  selector:
    matchLabels:
      app: rate-predictions-server
  replicas: 1
  template:
    metadata:
      labels:
        app: rate-predictions-server
    spec:
      containers:
      - name: rate-predictions-container
        image: aws-ecr-path
        command:
        - --model_config_file=/models/model_config.config
        - --model_config_file_poll_wait_seconds=60
        ports:
        #- grpc: 8500
        - containerPort: 8500
        - containerPort: 8501
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: rate-predictions-service
  name: rate-predictions-service
  namespace: model-test
spec:
  type: ClusterIP
  selector:
    app: rate-predictions-server
  ports:
  - port: 8501
    targetPort: 8501
</code></pre>
| Roland Wang | <p>What you are passing seems to be the arguments, not the command. The command should be set as the entrypoint in the container, and arguments should be passed in <code>args</code>. Please see the following link:
<a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/</a></p>
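<p>As a minimal sketch, here is the container spec from the question with the flags moved into <code>args</code> (this assumes the image's entrypoint is already the TensorFlow Serving binary, as it is in the official serving images):</p>
<pre><code>    spec:
      containers:
      - name: rate-predictions-container
        image: aws-ecr-path
        args:
        - --model_config_file=/models/model_config.config
        - --model_config_file_poll_wait_seconds=60
        ports:
        - containerPort: 8500
        - containerPort: 8501
</code></pre>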
| gordanvij |
<p>I am wondering if there is a way to read from and write to a Kubernetes parameter store from a Node.js app.</p>
<p>I would like to persist/invalidate/refresh an access token and share it across multiple instances during runtime. Can't find any good docs on how to do it exactly.</p>
| Jacobdo | <p>Amazon provides the <a href="https://aws.amazon.com/sdk-for-node-js/" rel="nofollow noreferrer">AWS SDK for node.js</a>, which also includes a class for <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SecretsManager.html" rel="nofollow noreferrer">SecretsManager</a>.</p>
| Simon |
<p>Kubernetes version: 1.13.4 (same problem on 1.13.2).</p>
<p>I self-host the cluster on digitalocean.</p>
<p>OS: coreos 2023.4.0</p>
<p>I have 2 volumes on one node:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: prometheus-pv-volume
  labels:
    type: local
    name: prometheus-pv-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  hostPath:
    path: "/prometheus-volume"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/monitoring
          operator: Exists
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: grafana-pv-volume
  labels:
    type: local
    name: grafana-pv-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/grafana-volume"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/monitoring
          operator: Exists
</code></pre>
<p>And 2 pvc's using them on a same node. Here is one:</p>
<pre><code>  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: local-storage
        selector:
          matchLabels:
            name: prometheus-pv-volume
        resources:
          requests:
            storage: 100Gi
</code></pre>
<p>Everything works fine.</p>
<p><code>kubectl get pv --all-namespaces</code> output:</p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
grafana-pv-volume 1Gi RWO Retain Bound monitoring/grafana-storage local-storage 16m
prometheus-pv-volume 100Gi RWO Retain Bound monitoring/prometheus-k8s-db-prometheus-k8s-0 local-storage 16m
</code></pre>
<p><code>kubectl get pvc --all-namespaces</code> output:</p>
<pre><code>NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
monitoring grafana-storage Bound grafana-pv-volume 1Gi RWO local-storage 10m
monitoring prometheus-k8s-db-prometheus-k8s-0 Bound prometheus-pv-volume 100Gi RWO local-storage 10m
</code></pre>
<p>The problem is that I'm getting these log messages every 2 minutes from kube-controller-manager:</p>
<pre><code>W0302 17:16:07.877212 1 plugins.go:845] FindExpandablePluginBySpec(prometheus-pv-volume) -> err:no volume plugin matched
W0302 17:16:07.877164 1 plugins.go:845] FindExpandablePluginBySpec(grafana-pv-volume) -> err:no volume plugin matched
</code></pre>
<p>Why do they appear? How can I fix this?</p>
| ik9999 | <p>It seems this is a safe-to-ignore message that was recently removed (Feb 20) and will not occur in future releases: <a href="https://github.com/kubernetes/kubernetes/pull/73901" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/73901</a></p>
| itaysk |
<p>I have 2 pods created: one is Grafana and the other is an InfluxDB pod. I need to configure InfluxDB in Grafana. I did see the example below, and I got a bit confused by the way it's configured. Below are the deployment and service files.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: influxdb
  labels:
    app: influxdb
spec:
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      containers:
      - name: influxdb
        image: influxdb
        ports:
        - containerPort: 8083
          name: admin
        - containerPort: 8086
          name: http
        resources:
          limits:
            memory: 2048Mi
            cpu: 100m
        volumeMounts:
        - name: influxdb-data
          mountPath: /var/lib/influxdb
      volumes:
      - name: influxdb-data
        persistentVolumeClaim:
          claimName: influxdb-pvc-vol
</code></pre>
<p>Service file</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: influxdb
  labels:
    app: influxdb
spec:
  ports:
  - port: 3306
  selector:
    app: influxdb
  clusterIP: None
</code></pre>
<p>What does <code>clusterIP: None</code> do? The example exposes port 3306 in the service, so I believe I can access it from another pod using port 3306 and its IP. But I see that I am able to access it via <code>http://influxdb:8086</code>. How am I able to access it via <a href="http://influxdb:8086" rel="nofollow noreferrer">http://influxdb:8086</a>?</p>
| Hacker | <p>I can explain what's happening and why this works, but I still think this configuration doesn't make sense.</p>
<p>The Deployment creates a Pod that runs InfluxDB which listens by default on port 8086. The <code>containerPort</code> here is purely informational, see the following from the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/" rel="nofollow noreferrer">Pod spec reference</a>:</p>
<blockquote>
<p>primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network.</p>
</blockquote>
<p>Now to the Service, which is created with a port 3306, which is odd but in this case doesn't matter because this is a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Headless Service</a>. A headless service is a means to tell Kubernetes you don't want its fancy networking features (like kube-proxy load balancing); instead, you just want it to create DNS records for you. By specifying <code>ClusterIP: None</code> you essentially make this a headless service. Given that this service is not actually serving any traffic, the "Port" field here is meaningless.</p>
<p>Now let's review what happens when you access <a href="http://influxdb:8086" rel="nofollow noreferrer">http://influxdb:8086</a>:</p>
<ol>
<li>your http client resolves the host <code>influxdb</code> to the Pod IP. This is possible thanks to the headless service. Note again that the host resolves to the Pod IP, not a Service IP.</li>
<li>Since the Pod is serving on 8086, and since you reached it directly at its private IP, it accepts your request and you have your reply.</li>
</ol>
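<p>As a minimal sketch (not taken from the question), a headless Service whose port actually matches what InfluxDB listens on could look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: influxdb
  labels:
    app: influxdb
spec:
  clusterIP: None      # headless: only DNS records, no virtual IP
  selector:
    app: influxdb
  ports:
  - port: 8086         # the port InfluxDB actually listens on
    targetPort: 8086
</code></pre>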
| itaysk |
<p>I'm using DNS names for my backend servers in my haproxy.cfg, like:</p>
<pre><code>backend s0
  server server0 server0.x.y.local:8080

backend s1
  server server1 server1.x.y.local:8080
</code></pre>
<p>The name resolution works fine after startup. But as soon as the IP address of a backend server changes, requests to HAProxy take a long time (around 25 seconds) and then respond with 503 (reason: SC). It doesn't update or re-resolve the DNS names. A <code>curl</code> on that machine works fine, so the operating system resolves the new IP address for those DNS entries correctly. So it looks like HAProxy is caching the IP addresses at startup and never changes them.</p>
<p>I'm using haproxy as a pod inside of a kubernetes cluster (not sure if that matters).</p>
<p>From what I read in the official docs, the libc option should use the operating system's resolver? I have tried putting <code>init-addr libc</code>, but it didn't help; HAProxy still responds with the long-running 503 forever, while on the machine DNS resolves perfectly.</p>
<p>I have also seen that there is some fine tuning possible when using a <code>resolver</code> entry, where you can configure refresh times etc. Is this possible without hardcoding nameservers in haproxy.cfg, just using the ones from the operating system?</p>
| Jens | <p>It is correct that HAProxy caches the resolved IP unless you tell it otherwise.</p>
<p>As you already found, the configuration using a resolver and a custom check interval should do the trick (<code>resolvers dns check inter 1000</code> and <code>hold valid</code>), but you are also right that this requires a <code>resolvers</code> section as well. Since HAProxy 1.9 you can use <code>parse-resolv-conf</code> to use the local resolver:</p>
<pre><code>resolvers mydns
  parse-resolv-conf
  hold valid 10s

backend site-backend
  balance leastconn
  server site server.example.com:80 resolvers mydns check inter 1000
</code></pre>
<p>The HAProxy documentation can help you with further configuration: <a href="https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#5.3.2-parse-resolv-conf" rel="noreferrer">https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#5.3.2-parse-resolv-conf</a></p>
| Simon |
<p>I am using Docker for Mac but currently only using it for its Kubernetes cluster. The Kubernetes cluster's single node is named <strong>docker-for-desktop</strong>.</p>
<p>I'm currently going through a Kubernetes tutorial on Persistent Volumes (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/</a>) and as part of that, I'm setting up a <strong>hostPath</strong> persistent volume. It seems like the way this works is, you create a path on the Kubernetes Node itself and you can use that as your persistent volume for development and testing purposes.</p>
<p>To do that, the tutorial instructs you to SSH into your Kubernetes node itself and create a path which can then be turned into a persistent volume. The only problem is, I have no idea how to SSH directly into the Kubernetes node itself (not a container). The equivalent for minikube would just be <code>minikube ssh</code>. I can <code>kubectl describe</code> the node, but I only get an internal IP address, not an external one. So I have no idea how to address it in an <code>ssh</code> command.</p>
<p>Any ideas?</p>
| Stephen | <p>The comment in the OP should get credit for this, but I'll add it again so I can find it more easily in a few months:</p>
<pre><code>docker run -it --rm --privileged --pid=host justincormack/nsenter1
</code></pre>
<p>From <a href="https://github.com/justincormack/nsenter1/blob/master/README.md" rel="noreferrer">https://github.com/justincormack/nsenter1/blob/master/README.md</a></p>
<blockquote>
<p>... this is useful when you are running a lightweight, container-optimized Linux distribution such as LinuxKit... you are demonstrating with Docker for Mac, for example, your containers are not running on your host directly, but are running instead inside of a minimal Linux OS virtual machine specially built for running containers, i.e., LinuxKit. But being a lightweight environment, LinuxKit isn't running sshd, so how do you get access to a shell so you can run nsenter to inspect the namespaces for the process running as pid 1?</p>
</blockquote>
| Luke W |
<p>We have a deployment of Kubernetes in Google Cloud Platform. Recently we hit one of the well-known issues related to kube-dns that happens at a high volume of requests <a href="https://github.com/kubernetes/kubernetes/issues/56903" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/56903</a> (it's more related to SNAT/DNAT and conntrack, but the final result is kube-dns being out of service).</p>
<p>After a few days of digging into that topic we found that k8s already has a solution, which is currently in alpha (<a href="https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/</a>).</p>
<p>The solution is to run a caching CoreDNS as a DaemonSet on each k8s node. So far so good.</p>
<p>The problem is that after you create the DaemonSet you have to tell the kubelet to use it with the --cluster-dns option, and we can't find any way to do that in the GKE environment. Google bootstraps the cluster with a "configure-sh" script in the instance metadata. There is an option to edit the instance template and "hardcode" the required values, but that is not an option: if you upgrade the cluster or use horizontal autoscaling, all of the modified values will be lost.
The last idea was to use a custom startup script that pulls the configuration and updates the metadata server, but this is too complicated a task.</p>
| Veselin Iordanov | <p>As of 2019/12/10, GKE now supports through the <code>gcloud</code> CLI in beta:</p>
<blockquote>
<h1>Kubernetes Engine</h1>
<ul>
<li>Promoted NodeLocalDNS Addon to beta. Use <code>--addons=NodeLocalDNS</code> with <code>gcloud beta container clusters create</code>. This addon can be enabled or disabled on existing clusters using <code>--update-addons=NodeLocalDNS=ENABLED</code> or <code>--update-addons=NodeLocalDNS=DISABLED</code> with gcloud container clusters update.</li>
</ul>
</blockquote>
<p>See <a href="https://cloud.google.com/sdk/docs/release-notes#27300_2019-12-10" rel="nofollow noreferrer">https://cloud.google.com/sdk/docs/release-notes#27300_2019-12-10</a></p>
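<p>For example (the cluster name below is a placeholder), based on the flags quoted above:</p>
<pre><code># enable the addon when creating a cluster
gcloud beta container clusters create my-cluster --addons=NodeLocalDNS

# or enable it on an existing cluster
gcloud container clusters update my-cluster --update-addons=NodeLocalDNS=ENABLED
</code></pre>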
| Patrick Decat |
<p>I am using AKS for my cluster.</p>
<p><strong>Scenario</strong>:
We have multiple APIs (say svc1, svc2 & svc3, accessible on ports 101, 102, 103).
These APIs need to be exposed to the client and are also used internally in the application.</p>
<p><strong>Question</strong>:
I want to expose these on both the external & internal load balancers on the same ports.
Also, when I access the services internally, I want them to be accessible by service name (example: svc1:101).</p>
| Sunil Agarwal | <p>Well, I was able to fix the issue without using NodePort/ClusterIP.</p>
<p>The solution is pretty simple, but it seems it's not documented.</p>
<p>The only thing we have to do is use multiple labels: one label is the same one the external load balancer selects on, and the other label matches the internal service.</p>
<p>This maps your ReplicaSet to both the internal service & the external load balancer, as sketched below.</p>
<p>Detailed answer available on - <a href="https://www.linkedin.com/pulse/exposing-multiple-portsservices-same-load-balancer-sunil-agarwal" rel="nofollow noreferrer">https://www.linkedin.com/pulse/exposing-multiple-portsservices-same-load-balancer-sunil-agarwal</a></p>
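<p>A rough sketch of the idea (the label values and port are assumptions, not taken from the question): the pod template carries two labels, one selected by the internal ClusterIP service and one selected by the external LoadBalancer service.</p>
<pre><code>  # in the Deployment's pod template
  template:
    metadata:
      labels:
        app: svc1            # selected by the internal service "svc1"
        expose: external     # selected by the external load balancer service
---
apiVersion: v1
kind: Service
metadata:
  name: svc1                 # internally reachable as svc1:101
spec:
  selector:
    app: svc1
  ports:
  - port: 101
    targetPort: 101
---
apiVersion: v1
kind: Service
metadata:
  name: svc1-external
spec:
  type: LoadBalancer
  selector:
    expose: external
  ports:
  - port: 101
    targetPort: 101
</code></pre>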
| Sunil Agarwal |
<p>I am trying to utilize Rancher Terraform provider to create a new RKE cluster and then use the Kubernetes and Helm Terraform providers to create/deploy resources to the created cluster. I'm using this <a href="https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster_v2#kube_config" rel="noreferrer">https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster_v2#kube_config</a> attribute to create a local file with the new cluster's kube config.
The Helm and Kubernetes providers need the kube config in the provider configuration: <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs" rel="noreferrer">https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs</a>. Is there any way I can get the provider configuration to wait for the local file to be created?</p>
| Suhas Potluri | <p>Generally speaking, Terraform always needs to evaluate provider configurations during the planning step because providers are allowed to rely on those settings in order to create the plan, and so it typically isn't possible to have a provider configuration refer to something created only during the apply step.</p>
<p>As a way to support bootstrapping in a situation like this though, this is one situation where it can be reasonable to use the <code>-target=...</code> option to <code>terraform apply</code>, to plan and apply only sufficient actions to create the Rancher cluster first, and then follow up with a normal plan and apply to complete everything else:</p>
<pre><code>terraform apply -target=rancher2_cluster_v2.example
terraform apply
</code></pre>
<p>This two-step process is needed only for situations where the <code>kube_config</code> attribute isn't known yet. As long as this resource type has convergent behavior, you should be able to use just <code>terraform apply</code> as normal unless you in future make a change that requires replacing the cluster.</p>
<p>(This is a general answer about provider configurations refering to resource attributes. I'm not familiar with Rancher in particular and so there might be some specifics about that particular resource type which I'm not mentioning here.)</p>
| Martin Atkins |
<p>Is there a Kubectl command or config map in the cluster that can help me find what CNI is being used?</p>
| YoMar | <p>First of all, checking for the presence of exactly one config file in <code>/etc/cni/net.d</code> is a good start:</p>
<pre><code>$ ls /etc/cni/net.d
10-flannel.conflist
</code></pre>
<p>and <code>ip a s</code> or <code>ifconfig</code> are helpful for checking the existence of network interfaces, e.g. the <code>flannel</code> CNI should set up a <code>flannel.1</code> interface:</p>
<pre><code>$ ip a s flannel.1
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether de:cb:d1:d6:e3:e7 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::dccb:d1ff:fed6:e3e7/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
<p>When creating a cluster, the <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="nofollow noreferrer">CNI plugin</a> is typically installed using:</p>
<pre><code>kubectl apply -f <add-on.yaml>
</code></pre>
<p>thus the networking pod will be called <code>kube-flannel*</code>, <code>kube-calico*</code> etc. depending on your <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model" rel="nofollow noreferrer">networking configuration</a>.</p>
<p>Then <code>crictl</code> will help you inspect running pods and containers.</p>
<pre><code>crictl pods ls
</code></pre>
<p>On a controller node in a healthy cluster you should have all pods in <code>Ready</code> state.</p>
<pre><code>crictl pods ls
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
dc90dd87e18cf 3 minutes ago Ready coredns-6d4b75cb6d-r2j9s kube-system 0 (default)
d1ab9d0aa815a 3 minutes ago Ready kubernetes-dashboard-cd4778d69-xmtkz kube-system 0 (default)
0c151fdd92e71 3 minutes ago Ready coredns-6d4b75cb6d-bn8hr kube-system 0 (default)
40f18ce56f776 4 minutes ago Ready kube-flannel-ds-d4fd7 kube-flannel 0 (default)
0e390a68380a5 4 minutes ago Ready kube-proxy-r6cq2 kube-system 0 (default)
cd93e58d3bf70 4 minutes ago Ready kube-scheduler-c01 kube-system 0 (default)
266a33aa5c241 4 minutes ago Ready kube-apiserver-c01 kube-system 0 (default)
0910a7a73f5aa 4 minutes ago Ready kube-controller-manager-c01 kube-system 0 (default)
</code></pre>
<p>If your cluster is properly configured you should be able to list containers using <code>kubectl</code>:</p>
<pre><code>kubectl get pods -n kube-system
</code></pre>
<p>if <code>kubectl</code> is not working (<code>kube-apiserver</code> is not running) you can fallback to <code>crictl</code>.</p>
<p>On an unhealthy cluster <code>kubectl</code> will show pods in <code>CrashLoopBackOff</code> state. The <code>crictl pods ls</code> command will give you a similar picture, only displaying pods from a single node. Also check the <a href="https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/" rel="nofollow noreferrer">documentation for common CNI errors</a>.</p>
<pre><code>$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d4b75cb6d-brb9d 0/1 ContainerCreating 0 25m
coredns-6d4b75cb6d-pcrcp 0/1 ContainerCreating 0 25m
kube-apiserver-cm01 1/1 Running 27 (18m ago) 26m
kube-apiserver-cm02 0/1 Running 31 (8m11s ago) 23m
kube-apiserver-cm03 0/1 CrashLoopBackOff 33 (2m22s ago) 26m
kube-controller-manager-cm01 0/1 CrashLoopBackOff 13 (50s ago) 24m
kube-controller-manager-cm02 0/1 CrashLoopBackOff 7 (15s ago) 24m
kube-controller-manager-cm03 0/1 CrashLoopBackOff 15 (3m45s ago) 26m
kube-proxy-2dvfg 0/1 CrashLoopBackOff 8 (97s ago) 25m
kube-proxy-7gnnr 0/1 CrashLoopBackOff 8 (39s ago) 25m
kube-proxy-cqmvz 0/1 CrashLoopBackOff 8 (19s ago) 25m
kube-scheduler-cm01 1/1 Running 28 (7m15s ago) 12m
kube-scheduler-cm02 0/1 CrashLoopBackOff 28 (4m45s ago) 18m
kube-scheduler-cm03 1/1 Running 36 (107s ago) 26m
kubernetes-dashboard-cd4778d69-g8jmf 0/1 ContainerCreating 0 2m27s
</code></pre>
<p><code>crictl ps</code> will give you containers (like <code>docker ps</code>); watch for a high number of attempts:</p>
<pre><code>CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
d54c6f1e45dea 2ae1ba6417cbc 2 seconds ago Running kube-proxy 1 347fef3ae1e98 kube-proxy-7gnnr
d6048ef9e30c7 d521dd763e2e3 41 seconds ago Running kube-apiserver 27 640658b58d1ae kube-apiserver-cm03
b6b8c7a24914e 3a5aa3a515f5d 41 seconds ago Running kube-scheduler 28 c7b710a0acf30 kube-scheduler-cm03
b0a480d2c1baf 586c112956dfc 42 seconds ago Running kube-controller-manager 8 69504853ab81b kube-controller-manager-cm03
</code></pre>
<p>and check logs using</p>
<pre><code>crictl logs d54c6f1e45dea
</code></pre>
<p>Last but not least, the <code>/opt/cni/bin/</code> path usually contains the binaries required for networking. Another <code>PATH</code> might be defined in the add-on setup or CNI config.</p>
<pre><code>$ ls /opt/cni/bin/
bandwidth bridge dhcp firewall flannel host-device host-local ipvlan loopback macvlan portmap ptp sbr static tuning vlan
</code></pre>
<p>Finally, <code>crictl</code> reads the <code>/etc/crictl.yaml</code> config; you should set the proper runtime and image endpoints to match your <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="nofollow noreferrer">container runtime</a>:</p>
<pre><code>runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
</code></pre>
| Tombart |
<p>I am setting up GPU monitoring on a cluster using a <code>DaemonSet</code> and NVIDIA DCGM. Obviously it only makes sense to monitor nodes that have a GPU.</p>
<p>I'm trying to use <code>nodeSelector</code> for this purpose, but <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="nofollow noreferrer">the documentation states that</a>:</p>
<blockquote>
<p>For the pod to be eligible to run on a node, <strong>the node must have each of the indicated key-value pairs as labels</strong> (it can have additional labels as well). The most common usage is one key-value pair.</p>
</blockquote>
<p>I intended to check if the label <code>beta.kubernetes.io/instance-type</code> was any of those: </p>
<pre><code>[p3.2xlarge, p3.8xlarge, p3.16xlarge, p2.xlarge, p2.8xlarge, p2.16xlarge, g3.4xlarge, g3.8xlarge, g3.16xlarge]
</code></pre>
<p>But I don't see how to make an <code>or</code> relationship when using <code>nodeSelector</code>?</p>
| MasterScrat | <p><a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">Node Affinity</a> was the solution:</p>
<pre><code>spec:
  template:
    metadata:
      labels:
        app: dcgm-exporter
      annotations:
        prometheus.io/scrape: 'true'
        description: |
          This `DaemonSet` provides GPU metrics in Prometheus format.
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/instance-type
                operator: In
                values:
                - p2.xlarge
                - p2.8xlarge
                - p2.16xlarge
                - p3.2xlarge
                - p3.8xlarge
                - p3.16xlarge
                - g3.4xlarge
                - g3.8xlarge
                - g3.16xlarge
</code></pre>
| MasterScrat |
<p>I am using terraform to create a Kubernetes namespace. Sample below:</p>
<pre><code>resource "kubernetes_namespace" "test1" {
  metadata {
    name = local.ns_name
  }
}
</code></pre>
<p>I am trying to create a Blue/Green kind of deployment using terraform, following this <a href="https://www.hashicorp.com/blog/terraform-feature-toggles-blue-green-deployments-canary-test" rel="nofollow noreferrer">link</a>. As part of it, I have now created two Kubernetes clusters, one for the blue and one for the green side, and thereby I now have two Kubernetes providers:</p>
<pre><code>provider "kubernetes" {
  alias = "kubernetes_blue"
}

provider "kubernetes" {
  alias = "kubernetes_green"
}
</code></pre>
<p>I want to understand if there is a way I can have some conditional on the <code>kubernetes_namespace</code> such that, depending on the flags <code>var.enable_green_side</code> and <code>var.enable_blue_side</code>, I can create the same namespace in multiple Kubernetes clusters without having to repeat the entire resource block, as follows:</p>
<pre><code>resource "kubernetes_namespace" "test1" {
  metadata {
    name = local.ns_name
  }

  provider = kubernetes.kubernetes_blue
}

resource "kubernetes_namespace" "test2" {
  metadata {
    name = local.ns_name
  }

  provider = kubernetes.kubernetes_green
}
</code></pre>
<p>Thanks in advance.</p>
| krisnik | <p>Terraform's model requires that each <code>resource</code> block belong to exactly one provider configuration, so there is no way to avoid declaring the resource twice, but you can at least reduce the amount of duplication that causes by factoring it out into a module and calling that module twice, rather than by duplicating the <code>resource</code> block directly:</p>
<pre><code>provider "kubernetes" {
  alias = "blue"
}

provider "kubernetes" {
  alias = "green"
}

module "blue" {
  source = "../modules/bluegreen"
  # (any settings the module needs from the root)

  providers = {
    kubernetes = kubernetes.blue
  }
}

module "green" {
  source = "../modules/bluegreen"
  # (any settings the module needs from the root)

  providers = {
    kubernetes = kubernetes.green
  }
}
</code></pre>
<p><a href="https://www.terraform.io/docs/language/meta-arguments/module-providers.html" rel="nofollow noreferrer">The special <code>providers</code> argument</a> in a <code>module</code> block allows you to give the child module a different "view" of the declared provider configurations than the caller has. In the <code>module "blue"</code> block above, the <code>providers</code> argument is saying: "Inside this module instance, any reference to the default <code>kubernetes</code> provider configuration means to use the <code>kubernetes.blue</code> configuration from the caller".</p>
<p>Inside the module then you can just write normal <code>resource "kubernetes_...."</code> blocks without any special <code>provider</code> arguments, because that'll cause them to attach to the default provider <em>from the perspective of that module instance</em>, and each of the two module instances has a different configuration bound to that.</p>
<p>Whether this factoring out into a module will be helpful will of course depend on how much context from the calling module the child module ends up needing. If your <code>module</code> block ends up with almost as many arguments inside it as the <code>resource</code> block(s) you're factoring out then it'd likely be better to just keep the <code>resource</code> blocks at the top level and avoid the indirection.</p>
| Martin Atkins |
<p>I have a simple <code>cloudbuild.yaml</code> file which runs a Bazel command. This command returns a Kubernetes configuration in the form of log output.</p>
<p>My goal is to take the output of the first step and apply it to my Kubernetes cluster.</p>
<pre><code>steps:
- name: gcr.io/cloud-builders/bazel
  args: ["run", "//:kubernetes"]

- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "<log output of previous step>"]
  env:
  - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
  - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
</code></pre>
<h1>Update</h1>
<p>I've tried the following:</p>
<pre><code>- name: gcr.io/cloud-builders/bazel
  entrypoint: /bin/bash
  args:
    [
      "bazel",
      "run",
      "//:kubernetes",
      " > kubernetes.yaml",
    ]

- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "-f", "kubernetes.yaml"]
  env:
  - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
  - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
</code></pre>
<p>But then I get this error:</p>
<pre><code>Running: kubectl apply -f kubernetes.yaml
error: the path "kubernetes.yaml" does not exist
</code></pre>
| Florian Ludewig | <p>Here's how to mount the volume:</p>
<p><a href="https://cloud.google.com/cloud-build/docs/build-config#volumes" rel="nofollow noreferrer">https://cloud.google.com/cloud-build/docs/build-config#volumes</a></p>
<p>Basically add:</p>
<pre><code>volumes:
- name: 'vol1'
  path: '/persistent_volume'
</code></pre>
<p>Then reference the full path <code>/persistent_volume/filename</code> when writing to / reading from your file.</p>
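<p>As a minimal sketch of how the two steps from the question could share the generated file through such a volume (the shell redirection and file name are assumptions, not verified against the Bazel target):</p>
<pre><code>steps:
- name: gcr.io/cloud-builders/bazel
  entrypoint: /bin/bash
  args:
  - -c
  - bazel run //:kubernetes > /persistent_volume/kubernetes.yaml
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'

- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "-f", "/persistent_volume/kubernetes.yaml"]
  env:
  - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a"
  - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas"
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
</code></pre>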
| Lance Sandino |
<p>I am running Rancher Desktop on my Ubuntu laptop.
I have a container running MongoDB in a Kubernetes pod:</p>
<pre><code>$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mongo-deployment-7fb46bd85-vz9th 1/1 Running 0 37m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 3d17h
service/mongo-service NodePort 10.43.132.185 <none> 27017:32040/TCP 37m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mongo-deployment 1/1 1 1 37m
NAME DESIRED CURRENT READY AGE
replicaset.apps/mongo-deployment-7fb46bd85 1 1 1 37m
</code></pre>
<p><strong>So the node port of the mongo service is: 32040.</strong></p>
<p>I have found the local ip of the kubernetes node:</p>
<pre><code>$ kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
lima-rancher-desktop Ready control-plane,master 3d17h v1.23.6+k3s1 192.168.5.15 <none> Alpine Linux v3.15 5.15.32-0-virt containerd://1.5.11
</code></pre>
<p><strong>so the internal ip is: 192.168.5.15</strong></p>
<p>But when I try to connect to 192.168.5.15 on port 32040, I get <code>connection timed out</code>.</p>
<p>Could I have a hint on how to do this with Rancher Desktop?</p>
<p>Thank you,
Andrei</p>
| Andrei Diaconescu | <p>I found a solution: it seems that the IP returned by
<code>kubectl get node -o wide</code>
is not usable for accessing services from the Kubernetes node in Rancher Desktop (it does work in another Kubernetes cluster, "kind" (<a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/</a>)).</p>
<p>What works for Rancher Desktop is to access the NodePort service directly on localhost, so in the example above: localhost:32040.</p>
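<p>For example, from the host (assuming <code>mongosh</code> is installed locally):</p>
<pre><code>mongosh "mongodb://localhost:32040"
</code></pre>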
| Andrei Diaconescu |
<p>I already created my secret as recommended by Kubernetes and followed the tutorial, but the pod doesn't have my secret attached.</p>
<p>As you can see, I created the secret and described it.
After that, I created my pod.</p>
<pre><code>$ kubectl get secret my-secret --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
{"auths":{"my-private-repo.com":{"username":"<username>","password":"<password>","email":"<email>","auth":"<randomAuth>="}}}
$ kubectl create -f my-pod.yaml
pod "my-pod" created
$ kubectl describe pods trunfo
Name: my-pod
Namespace: default
Node: gke-trunfo-default-pool-07eea2fb-3bh9/10.233.224.3
Start Time: Fri, 28 Sep 2018 16:41:59 -0300
Labels: <none>
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container container-trunfo
Status: Pending
IP: 10.10.1.37
Containers:
container-trunfo:
Container ID:
Image: <my-image>
Image ID:
Port: 9898/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hz4mf (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-hz4mf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hz4mf
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4s default-scheduler Successfully assigned trunfo to gke-trunfo-default-pool-07eea2fb-3bh9
Normal SuccessfulMountVolume 4s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 MountVolume.SetUp succeeded for volume "default-token-hz4mf"
Normal Pulling 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 pulling image "my-private-repo.com/my-image:latest"
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Failed to pull image "my-private-repo.com/my-image:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://my-private-repo.com/v1/_ping: dial tcp: lookup my-private-repo.com on 169.254.169.254:53: no such host
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Error: ErrImagePull
Normal BackOff 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Back-off pulling image "my-private-repo.com/my-image:latest"
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Error: ImagePullBackOff
</code></pre>
<p>What can i do to fix it?</p>
<p><strong>EDIT</strong></p>
<p>This is my pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-private-repo/images/<my-image>
    ports:
    - containerPort: 9898
  imagePullSecrets:
  - name: my-secret
</code></pre>
<p>As we can see, the secret is defined as expected, but not attached correctly.</p>
| KpsLok | <p>You did not get as far as secrets yet. Your logs say</p>
<blockquote>
<p>Failed to pull image "my-private-repo.com/my-image:latest": rpc error: code = Unknown desc = Error response from daemon: Get <a href="https://my-private-repo.com/v1/_ping" rel="nofollow noreferrer">https://my-private-repo.com/v1/_ping</a>: dial tcp: lookup my-private-repo.com on 169.254.169.254:53: no such host
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Error: ErrImagePull</p>
</blockquote>
<p>This means that your pod cannot even start because the image is not available. Fix that, and if you still have a problem with secrets after you observe the pod state "Ready", post your YAML definition.</p>
| Andrew Savinykh |
<p>NodePort:
This way of accessing the Dashboard is only recommended for development environments in a single-node setup.</p>
<p>Edit the kubernetes-dashboard service:</p>
<p><code>$ kubectl -n kube-system edit service kubernetes-dashboard</code>
You should see the YAML representation of the service. Change <code>type: ClusterIP</code> to <code>type: NodePort</code> and save the file.</p>
<p>Can I change ClusterIP to NodePort from the command line, without an editor?
Thanks!</p>
| Anton Patsev | <p>you can change it like this</p>
<pre><code>kubectl patch svc kubernetes-dashboard --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
</code></pre>
| sfgroups |
<p>I have installed the minikube, deployed the hello-minikube application and opened the port. Basically I have followed the getting started tutorial at <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#quickstart" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/learning-environment/minikube/#quickstart</a>. </p>
<p>The problem starts when I want to open the URL where the deployed application is running obtained by running <code>minikube service hello-minikube --url</code>.</p>
<p>I get <code>http://172.17.0.7:31198</code> and that URI cannot be opened, since that IP does not exist locally. Changing it to <code>http://localhost:31198</code> does not work either (so adding an entry to hosts file won't work I guess).</p>
<p>The application is running, I can query the cluster and obtain the info through <code>http://127.0.0.1:50501/api/v1/namespaces/default/services/hello-minikube</code>:</p>
<pre><code>{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "hello-minikube",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/hello-minikube",
"uid": "56845ce6-bbba-45e5-a1b6-d094949438cf",
"resourceVersion": "1578",
"creationTimestamp": "2020-03-10T10:33:41Z",
"labels": {
"app": "hello-minikube"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 8080,
"targetPort": 8080,
"nodePort": 31198
}
],
"selector": {
"app": "hello-minikube"
},
"clusterIP": "10.108.152.177",
"type": "NodePort",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
}
}
}
</code></pre>
<pre><code>λ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube NodePort 10.108.152.177 <none> 8080:31198/TCP 4h34m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h42m
</code></pre>
<p>How can I access the application deployed in the minikube k8s cluster on localhost? Also, minikube is running as a Docker container on the machine with the following ports exposed: 32770:2376, 32769:8443, 32771:22.</p>
| Karel Frajták | <p>Found the solution in another <a href="https://stackoverflow.com/questions/40767164/expose-port-in-minikube">thread</a> - port forwarding</p>
<pre><code>kubectl port-forward svc/hello-minikube 31191:8080
</code></pre>
<p>The first port is the port that you will use on your machine (in the browser), and 8080 is the port defined when running the service.</p>
| Karel Frajták |
<p>We're serving our product on AWS EKS where the service is created of type <code>LoadBalancer</code>. The ELB IP is assigned by AWS and this is what is being shared to the client.</p>
<p>However, when we re-deploy the service to make changes/improvements, the ELB IP changes. Since this causes us to frequently send mails to all the clients, we need a dedicated IP that can be mapped to the LB and thus will not change when the service is re-deployed.</p>
<p>Any existing AWS solution or a nice pointer to solve this situation would be helpful.</p>
| cai | <p>You can use an Elastic IP as described here: <a href="https://stackoverflow.com/questions/66902641/how-to-provide-elastic-ip-to-aws-eks-for-external-service-with-type-loadbalancer">How to provide elastic ip to aws eks for external service with type loadbalancer?</a>, and here: <a href="https://docs.aws.amazon.com/es_es/eks/latest/userguide/network-load-balancing.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/es_es/eks/latest/userguide/network-load-balancing.html</a>, just by adding the annotation <code>service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xxxxxxxxxxxxxxxxx,eipalloc-yyyyyyyyyyyyyyyyy</code> to the NLB:</p>
<p><code>service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-05666791973f6a240</code></p>
<p>Another way is to use a domain name (my way). Then use the <a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md" rel="nofollow noreferrer">external-dns</a> annotations to link your Service or Ingress with a DNS name, and configure <code>external-dns</code> to use your DNS provider, like Route53.</p>
<p>For example:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  namespace: ambassador
  annotations:
    external-dns.alpha.kubernetes.io/hostname: 'myserver.mydomain.com'
</code></pre>
<p>Every time your LoadBalancer changes its IP, the DNS record will be updated with the correct IP.</p>
| TlmaK0 |
<p>I'm creating a Kubernetes Service Account using terraform and trying to output the token from the Kubernetes Secret that it creates.</p>
<pre><code>resource "kubernetes_service_account" "ci" {
  metadata {
    name = "ci"
  }
}

data "kubernetes_secret" "ci" {
  metadata {
    name = "${kubernetes_service_account.ci.default_secret_name}"
  }
}

output "ci_token" {
  value = "${data.kubernetes_secret.ci.data.token}"
}
</code></pre>
<p>According to <a href="https://www.terraform.io/docs/configuration-0-11/data-sources.html#data-source-lifecycle" rel="noreferrer">the docs</a> this should make the data block defer getting its values until the 'apply' phase because of the computed value of <code>default_secret_name</code>, but when I run <code>terraform apply</code> it gives me this error:</p>
<pre><code>Error: Error running plan: 1 error(s) occurred:
* output.ci_token: Resource 'data.kubernetes_secret.ci' does not have attribute 'data.token' for variable 'data.kubernetes_secret.ci.data.token'
</code></pre>
<p>Adding <code>depends_on</code> to the <code>kubernetes_secret</code> data block doesn't make any difference.</p>
<p>If I comment out the <code>output</code> block, it creates the resources fine, then I can uncomment it, apply again, and everything acts normally, since the Kubernetes Secret exists already.</p>
<p>I've also made a Github issue <a href="https://github.com/terraform-providers/terraform-provider-kubernetes/issues/436" rel="noreferrer">here</a>.</p>
<p><strong>Update</strong></p>
<p>The accepted answer does solve this problem, but I omitted another output to simplify the question, which doesn't work with this solution:</p>
<pre><code>output "ci_crt" {
  value = "${data.kubernetes_secret.ci.data.ca.crt}"
}
</code></pre>
<pre><code>* output.ci_ca: lookup: lookup failed to find 'ca.crt' in:
${lookup(data.kubernetes_secret.ci.data, "ca.crt")}
</code></pre>
<p>This particular issue is <a href="https://github.com/terraform-providers/terraform-provider-kubernetes/issues/334" rel="noreferrer">reported here</a> due to <a href="https://github.com/hashicorp/terraform/issues/10876" rel="noreferrer">a bug in Terraform</a>, which is fixed in version 0.12.</p>
| Ellis Percival | <p>This works:</p>
<pre><code>resource "kubernetes_service_account" "ci" {
  metadata {
    name = "ci"
  }
}

data "kubernetes_secret" "ci" {
  metadata {
    name = kubernetes_service_account.ci.default_secret_name
  }
}

output "ci_token" {
  sensitive = true
  value     = lookup(data.kubernetes_secret.ci.data, "token")
}
</code></pre>
| Patrick Decat |
<p>I would like to resolve the kube-dns names from outside of the Kubernetes cluster by adding a stub zone to my DNS servers. This requires changing the cluster.local domain to something that fits into my DNS namespace.</p>
<p>The cluster DNS is working fine with cluster.local. To change the domain I have modified the line with KUBELET_DNS_ARGS on /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to read:</p>
<pre><code>Environment="KUBELET_DNS_ARGS=--cluster-dns=x.y.z --cluster-domain=cluster.mydomain.local --resolv-conf=/etc/resolv.conf.kubernetes"
</code></pre>
<p>After restarting kubelet, external names are resolvable but Kubernetes name resolution fails.</p>
<p>I can see that kube-dns is still running with:</p>
<pre><code>/kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2
</code></pre>
<p>The only place I was able to find cluster.local was within the pod's YAML configuration, which reads:</p>
<pre><code>  containers:
  - args:
    - --domain=cluster.local.
    - --dns-port=10053
    - --config-dir=/kube-dns-config
    - --v=2
</code></pre>
<p>After modifying the yaml and recreating the pod using</p>
<pre><code>kubectl replace --force -f kube-dns.yaml
</code></pre>
<p>I still see kube-dns gettings started with --domain=cluster.local.</p>
<p>What am I missing?</p>
| Marcus | <p>I had a similar problem while porting a microservices-based application to Kubernetes. Changing the internal DNS zone to cluster.local was going to be a fairly complex task that we didn't really want to deal with.</p>
<p>In our case, we <a href="https://coredns.io/2018/01/29/deploying-kubernetes-with-coredns-using-kubeadm/" rel="noreferrer">switched from KubeDNS to CoreDNS</a>, and simply enabled the <a href="https://coredns.io/plugins/rewrite" rel="noreferrer">coreDNS rewrite plugin</a> to translate our <code>our.internal.domain</code> to <code>ourNamespace.svc.cluster.local</code>. </p>
<p>After doing this, the corefile part of our CoreDNS configmap looks something like this:</p>
<pre><code>data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        rewrite name substring our.internal.domain ourNamespace.svc.cluster.local
        proxy . /etc/resolv.conf
        cache 30
    }
</code></pre>
<p>This enables our kubernetes services to respond on both the default DNS zone and our own zone.</p>
| simon |
<p>I have a google kubernetes engine cluster with multiple namespaces. Different applications are deployed on each of these namespaces. Is it possible to give a user complete access to a single namespace only?</p>
| Keerthi hegde | <p>Yes, Kubernetes has a built-in RBAC system that integrates with Cloud IAM so that you can control access to individual clusters and namespaces for GCP users.</p>
<ol>
<li>Create a Kubernetes <code>Role</code></li>
</ol>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: YOUR_NAMESPACE
  name: ROLE_NAME
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
</code></pre>
<ol start="2">
<li>Create a Kubernetes <code>RoleBinding</code></li>
</ol>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ROLE_NAME-binding
  namespace: YOUR_NAMESPACE
subjects:
# GCP user account
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: ROLE_NAME
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Reference</p>
<ul>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control" rel="nofollow noreferrer">Kubernetes Role-based access control</a></li>
</ul>
| Travis Webb |
<p>I installed <code>minikube</code> and started <code>Jenkins</code> inside a <code>pod</code>. I am able to create a new job and execute it inside a dynamically created <code>maven container</code>. However, I have a folder on my <code>host</code> machine, which is <code>Mac</code> based, and I need that folder inside this dynamically created <code>pod</code> when the job is started. How can I achieve that?</p>
| Damien-Amen | <p>Option 1.</p>
<p><a href="https://kubernetes.io/docs/setup/minikube/#interacting-with-your-cluster" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/minikube/#interacting-with-your-cluster</a></p>
<p>Configure kubectl on your Mac, then use <code>kubectl cp <hostdir> <podname>:<dir></code>.</p>
<p>Option 2.</p>
<p>Use the hostPath volume option for the pod, as in this post:</p>
<p><a href="https://stackoverflow.com/questions/48534980/mount-local-directory-into-pod-in-minikube">Mount local directory into pod in minikube</a></p>
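<p>A minimal sketch of option 2 (the pod name, image and paths are assumptions; note that <code>hostPath</code> refers to a path on the Kubernetes node, so on a Mac the folder first has to be made available inside the node/VM, e.g. via <code>minikube mount</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: maven-build            # hypothetical pod name
spec:
  containers:
  - name: maven
    image: maven:3-jdk-8
    volumeMounts:
    - name: host-folder
      mountPath: /data         # where the folder appears inside the container
  volumes:
  - name: host-folder
    hostPath:
      path: /path/on/node      # path on the Kubernetes node
</code></pre>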
| sfgroups |
<p>I'm deploying some microservices in GCP Kubernetes. I don't know if I need to pay for network traffic to download images from Docker Hub.</p>
<ol>
<li>How it could affect my billing if I use docker hub instead of google image registry?</li>
<li>Could I save money if I use image registry on gcp instead of docker hub?</li>
<li>Would I have to pay more to use Docker Hub instead of the GCP image registry?</li>
</ol>
<p>I don't know what image registry to use. </p>
<p>Thanks!</p>
| Alejandro Molina | <p>Container Registry uses <a href="https://cloud.google.com/container-registry/pricing#storage" rel="nofollow noreferrer">Cloud Storage under the hood</a> to store your images, which <a href="https://cloud.google.com/storage/pricing#storage-pricing" rel="nofollow noreferrer">publishes its pricing info in this table</a>. <strong>You can store 5GB for free, and another 100GB of storage would cost you $2.60/month.</strong> Either way your costs are incredibly low. I'd recommend storing in GCR because your deployments will be faster, management will be simpler with everything in one place, and you can easily enable <a href="https://cloud.google.com/container-registry/docs/tutorial-vulnerability-scan" rel="nofollow noreferrer">Vulnerability Scanning</a> on your images.</p>
<blockquote>
<p>How it could affect my billing if I use docker hub instead of google image registry?</p>
</blockquote>
<p>Google Cloud <a href="https://cloud.google.com/compute/network-pricing#general_network_pricing" rel="nofollow noreferrer">does not charge for ingress traffic</a>. That means there is no cost to downloading an image from Docker hub. You are downloading over the public net, however, so expect <code>push</code> and <code>pull</code> to/from GCP to take longer than if you stored images in GCR.</p>
| Travis Webb |
<p>I see that Kubernetes uses pods, and then in each pod there can be multiple containers.</p>
<p>Example I create a pod with</p>
<pre><code>Container 1: Django server - running at port 8000
Container 2: Reactjs server - running at port 3000
</code></pre>
<p>Whereas</p>
<p>I am coming from a Docker background</p>
<p>So in docker we do</p>
<pre><code>docker run --name django -d -p 8000:8000 some-django
docker run --name reactjs -d -p 3000:3000 some-reactjs
</code></pre>
<p>So is a Pod also like a PC with some Ubuntu OS on it?</p>
| Santhosh | <p>No, a Pod is not like a PC/VM with Ubuntu on it.</p>
<p>There is no intermediate layer between your host and the containers in a pod. The only thing happening here is that the containers in a pod share some resources/namespaces in the host's kernel, and there are mechanisms in your host kernel to "protect" the containers from seeing other containers. Pods are just a mechanism to help you deploy a couple containers that share some resources (like the network namespace) a little easier. Fundamentally they are just linux processes directly on the host.</p>
<p>(one nuanced technicality/caveat on the above statement: Docker and tools like it will sometimes run their own VM and may try to make that invisible to you. For example, Docker Desktop does this. Usually you can ignore this layer, but it is great to know it is there. The answer holds though: That one single VM will host all of your pods/containers and there is not one VM per pod.)</p>
| Chris Trahey |
<p>I want to export <em>already templated</em> Helm Charts as YAML files. I can not use Tiller on my Kubernetes Cluster at the moment, but still want to make use of Helm Charts. Basically, I want Helm to export the YAML that gets send to the Kubernetes API with values that have been templated by Helm. After that, I will upload the YAML files to my Kubernetes cluster.</p>
<p>I tried to run <code>.\helm.exe install --debug --dry-run incubator\kafka</code> but I get the error <code>Error: Unauthorized</code>. </p>
<p>Note that I run Helm on Windows (version helm-v2.9.1-windows-amd64).</p>
| j9dy | <p>We need logs to check the <code>Unauthorized</code> issue.</p>
<p>But you can easily generate templates locally:</p>
<pre><code>helm template mychart
</code></pre>
<blockquote>
<p>Render chart templates locally and display the output.</p>
<p>This does not require Tiller. However, any values that would normally
be looked up or retrieved in-cluster will be faked locally.
Additionally, none of the server-side testing of chart validity (e.g.
whether an API is supported) is done.</p>
</blockquote>
<p>More info: <a href="https://helm.sh/docs/helm/helm_template/" rel="noreferrer">https://helm.sh/docs/helm/helm_template/</a></p>
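<p>For the asker's scenario (Helm 2, rendering a chart from a repo and applying the result without Tiller), a typical flow looks roughly like this, assuming the incubator repo is already configured locally and using an arbitrary output file name:</p>
<pre><code># download and unpack the chart locally
helm fetch incubator/kafka --untar
# render the templates with a fixed release name and your values
helm template ./kafka --name my-kafka -f my-values.yaml > kafka-rendered.yaml
# upload the rendered manifests to the cluster
kubectl apply -f kafka-rendered.yaml
</code></pre>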
| Amrit |
<p>I have a Helm 3 chart created using the <code>helm create microservice</code> command. It has the files below.</p>
<pre><code>./Chart.yaml
./values.yaml
./.helmignore
./templates/ingress.yaml
./templates/deployment.yaml
./templates/service.yaml
./templates/serviceaccount.yaml
./templates/hpa.yaml
./templates/NOTES.txt
./templates/_helpers.tpl
./templates/tests/test-connection.yaml
</code></pre>
<p>I updated the values file based on my application; when I try to install the Helm chart, it gives the error message below.</p>
<pre><code>Error: UPGRADE FAILED: template: microservice/templates/ingress.yaml:20:8: executing "microservice/templates/ingress.yaml" at <include "microservice.labels" .>: error calling include: template: no template "microservice.labels" associated with template "gotpl"
helm.go:75: [debug] template: microservice/templates/ingress.yaml:20:8: executing "microservice/templates/ingress.yaml" at <include "microservice.labels" .>: error calling include: template: no template "microservice.labels" associated with template "gotpl"
</code></pre>
<p>Here is the <code>ingress.yaml</code> file.</p>
<pre><code>{{- if .Values.ingress.enabled -}}
{{- $fullName := include "microservice.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
{{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
{{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
{{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "microservice.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
{{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
</code></pre>
<p>How do I add the <code>microservice.labels</code> template? Do I need to create a <code>microservice.labels.tpl</code> file?</p>
<p>Any tips to fix this error.</p>
<p>Thanks
SR</p>
| sfgroups | <p>I had copied the <code>ingress.yaml</code> file into a chart created with an older version of Helm; the <code>microservice.labels</code> helper was missing from that chart's <code>_helpers.tpl</code> file. After copying in the newer version of <code>_helpers.tpl</code>, the deployment works now.</p>
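<p>For reference, a chart scaffolded by a newer <code>helm create</code> defines the missing helper in <code>templates/_helpers.tpl</code> roughly like this (a sketch of the standard scaffold, not of the asker's chart; it also relies on the <code>microservice.name</code> and <code>microservice.chart</code> helpers defined in the same file):</p>
<pre><code>{{/*
Common labels
*/}}
{{- define "microservice.labels" -}}
helm.sh/chart: {{ include "microservice.chart" . }}
{{ include "microservice.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "microservice.selectorLabels" -}}
app.kubernetes.io/name: {{ include "microservice.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
</code></pre>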
| sfgroups |
<p>I'm frequently installing multiple instances of an umbrella Helm chart across multiple namespaces for testing. I'd like to continue using the randomly generated names, but also be able to tear down multiple releases of the same chart in one command that doesn't need to change for each new release name.</p>
<p>So for charts like this:</p>
<pre><code>$ helm ls
NAME REVISION UPDATED STATUS CHART NAMESPACE
braided-chimp 1 Mon Jul 23 15:52:43 2018 DEPLOYED foo-platform-0.2.1 foo-2
juiced-meerkat 1 Mon Jul 9 15:19:43 2018 DEPLOYED postgresql-0.9.4 default
sweet-sabertooth 1 Mon Jul 23 15:52:34 2018 DEPLOYED foo-platform-0.2.1 foo-1
</code></pre>
<p>I can delete all releases of the <code>foo-platform-0.2.1</code> chart by typing the release names like:</p>
<pre><code>$ helm delete braided-chimp sweet-sabertooth
</code></pre>
<p>But every time I run the command, I have to update it with the new release names.</p>
<p>Is it possible to run list / delete on all instances of a given chart across all namespaces based on the chart name? (I'm thinking something like what <code>kubectl</code> supports with the <code>-l</code> flag.)</p>
<p>For instance, how can I achieve something equivalent to this?</p>
<pre><code>$ helm delete -l 'chart=foo-platform-0.2.1'
</code></pre>
<p>Is there a better way to do this?</p>
| Taylor D. Edmiston | <p>I wanted to see if I could achieve the same result using <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer">jq</a> instead of awk.</p>
<p>I'm not an expert on jq so there might simpler methods. <strong>Test with a dry run!</strong></p>
<p>Assuming Bash:</p>
<pre><code>CHARTID=foo-platform-0.2.1
helm delete --dry-run $(helm ls --output json | jq -r ".Releases[] | select(.Chart == \"${CHARTID}\") | .Name")
</code></pre>
<p>with the above example I would expect the output to be:</p>
<pre><code>release "braided-chimp" deleted
release "sweet-sabertooth" deleted
</code></pre>
| Mark McLaren |
<p>I want to change the log configuration of a Golang application which runs on K8s.
I’ve tried the following code locally and it works as expected.
I'm using viper to watch for config file changes.</p>
<p>This is the config map with the log configuration </p>
<pre><code>apiVersion: v1
kind: ConfigMap
data:
config.yaml: 'log.level: error'
metadata:
name: app-config
namespace: logger
</code></pre>
<p>In the deployment yaml I’ve added the following</p>
<pre><code>...
spec:
containers:
- name: gowebapp
image: mvd/myapp:0.0.3
ports:
- containerPort: 80
envFrom:
- configMapRef:
name: app-config
</code></pre>
<p>This is the code </p>
<pre><code>package configuration

import (
    "fmt"
    "os"
    "strings"

    "github.com/fsnotify/fsnotify"
    "github.com/sirupsen/logrus"
    "github.com/spf13/viper"
)

const (
    varLogLevel     = "log.level"
    varPathToConfig = "config.file"
)

type Configuration struct {
    v *viper.Viper
}

func New() *Configuration {
    c := Configuration{
        v: viper.New(),
    }
    c.v.SetDefault(varPathToConfig, "./config.yaml")
    c.v.SetDefault(varLogLevel, "info")
    c.v.AutomaticEnv()
    c.v.SetConfigFile(c.GetPathToConfig())
    err := c.v.ReadInConfig() // Find and read the config file
    logrus.WithField("path", c.GetPathToConfig()).Warn("loading config")
    if _, ok := err.(*os.PathError); ok {
        logrus.Warnf("config file '%s' not found. Using default values", c.GetPathToConfig())
    } else if err != nil { // Handle other errors that occurred while reading the config file
        panic(fmt.Errorf("fatal error while reading the config file: %s", err))
    }
    setLogLevel(c.GetLogLevel())
    c.v.WatchConfig()
    c.v.OnConfigChange(func(e fsnotify.Event) {
        logrus.WithField("file", e.Name).Warn("Config file changed")
        setLogLevel(c.GetLogLevel())
    })
    return &c
}

// GetLogLevel returns the log level
func (c *Configuration) GetLogLevel() string {
    s := c.v.GetString(varLogLevel)
    return s
}

// GetPathToConfig returns the path to the config file
func (c *Configuration) GetPathToConfig() string {
    return c.v.GetString(varPathToConfig)
}

func setLogLevel(logLevel string) {
    logrus.WithField("level", logLevel).Warn("setting log level")
    level, err := logrus.ParseLevel(logLevel)
    if err != nil {
        logrus.WithField("level", logLevel).Fatalf("failed to start: %s", err.Error())
    }
    logrus.SetLevel(level)
}
</code></pre>
<p>Now when I apply the yaml file again and change the value from <code>error</code> to <code>warn</code> or <code>debug</code> etc.,
nothing changes … any idea what I am missing here?</p>
<p>I see in the K8S dashboard that the config map is <strong>assigned to the application</strong> and when I change the value I see that the env was changed...</p>
<p><strong>update</strong></p>
<p>When running it locally I use the following config just for testing,
but when using the ConfigMap I've used the <code>data</code> entry according to the ConfigMap spec ...</p>
<pre><code>apiVersion: v1
kind: ConfigMap
log.level: 'warn'
#data:
# config.yaml: 'log.level: error'
metadata:
name: app-config
</code></pre>
<p><strong>This is how the config env looks in k8s dashboard</strong> </p>
<p><a href="https://i.stack.imgur.com/GWCUB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GWCUB.png" alt="enter image description here"></a></p>
| Jenny M | <p>envFrom creates environment variables from the config map. There is no file that changes. If you exec into the container you'll probably see an environment variable named config.yaml or CONFIG.YAML or similar (don' t know if it works with dots).</p>
<p>You are probably better of if you mount config.yaml as a file inside your pods, like this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="nofollow noreferrer">Add ConfigMap data to a Volume</a></p>
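<p>A minimal sketch of that approach (the mount path and volume name are assumptions): mount the ConfigMap as a volume so <code>config.yaml</code> exists as a real file that viper's <code>WatchConfig</code> can react to. The kubelet updates such volumes via a symlink swap when the ConfigMap changes, which fsnotify-based watchers usually pick up; note that a mount using <code>subPath</code> does <em>not</em> receive updates.</p>
<pre><code>spec:
  containers:
  - name: gowebapp
    image: mvd/myapp:0.0.3
    ports:
    - containerPort: 80
    volumeMounts:
    - name: app-config
      mountPath: /etc/gowebapp      # config.yaml from the ConfigMap appears here
  volumes:
  - name: app-config
    configMap:
      name: app-config              # the ConfigMap that carries the config.yaml key
</code></pre>
<p>The application's <code>config.file</code> setting would then have to point at <code>/etc/gowebapp/config.yaml</code> instead of the default <code>./config.yaml</code>.</p>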
| Andreas Wederbrand |
<p>I am trying to configure Kubernetes on docker-for-desktops and I want to change the default network assigned to containers. </p>
<blockquote>
<p>Example: the default network is <code>10.1.0.0/16</code> but I want <code>172.16.0.0/16</code>. </p>
</blockquote>
<p>I changed the docker network section to <code>Subnet address: 172.16.0.0 and netmask 255.255.0.0</code> but the cluster keeps assigning the network 10.1.0.0/16.
<a href="https://i.stack.imgur.com/mdlFB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mdlFB.png" alt="Network configuration"></a></p>
<p>The problem I am facing here is that I am in a VPN which has the same network IP of kubernetes default network (<code>10.1.0.0/16</code>) so if I try to ping a host that is under the vpn, the container from which I am doing the ping keeps saying <code>Destination Host Unreachable</code>.</p>
<p>I am running Docker Desktop (under Windows Pro) Version 2.0.0.0-win81 (29211) Channel: stable Build: 4271b9e.</p>
<p>Kubernetes is provided from Docker desktop <a href="https://i.stack.imgur.com/xshra.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xshra.png" alt="Kuberbetes"></a></p>
<p>From the official <a href="https://docs.docker.com/docker-for-windows/kubernetes/" rel="noreferrer">documentation</a> I know that </p>
<blockquote>
<p>Kubernetes is available in Docker for Windows 18.02 CE Edge and higher, and 18.06 Stable and higher , this includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, <strong>is not configurable</strong>, and is a single-node cluster</p>
</blockquote>
<p>Said so, should Kubernetes use the underlying docker's configuration (like network, volumes etc.)?</p>
| Justin | <p>Kubernetes also has a subnet that it uses and a corresponding network setting, which defaults to <code>10.1.0.0/16</code>, but this setting is not exposed in the Docker for-win UI. In <a href="https://github.com/docker/for-win/issues/1667#issuecomment-367357067" rel="nofollow noreferrer">docker/for-win issue #1667: Can not access local network, with kubernetes installed</a>, guillaumerose shows a workaround that I've altered a bit to read:</p>
<ol>
<li>Disable Kubernetes</li>
<li>Restart Docker for-win by clicking on "Restart" from the "Troubleshoot" screen (the one with the bug icon) - this step is missing in guillaumerose's workaround, see below</li>
<li><code>docker run -it --privileged --pid=host justincormack/nsenter1</code> and edit <code>/var/lib/cni/conf/10-default.conflist</code>. Change the mentioned <code>10.1.0.0/16</code> to the network you want. Don't forget the gateway and the dns</li>
<li>Enable Kubernetes</li>
</ol>
<p>In step 3, I changed <code>/var/lib/cni/conf/10-default.conflist</code> like this:</p>
<pre><code># diff -u 10-default.conflist.orig 10-default.conflist
--- 10-default.conflist.orig
+++ 10-default.conflist
@@ -10,11 +10,11 @@
"hairpinMode": true,
"ipam": {
"type": "host-local",
- "subnet": "10.1.0.0/16",
- "gateway": "10.1.0.1"
+ "subnet": "10.250.0.0/16",
+ "gateway": "10.250.0.1"
},
"dns": {
- "nameservers": ["10.1.0.1"]
+ "nameservers": ["10.250.0.1"]
}
},
{
</code></pre>
<p>And this works. I can now ping <code>10.1.119.43</code> <em>and</em> use kubernetes.</p>
<h2>OBS! <code>10-default.conflist</code> gets reset/reverted whenever docker is restarted</h2>
<p>Yes, every time docker gets restarted (e.g. because of a windows reboot), kubernetes reverts back to using <code>10.1.0.0/16</code> and then it is broken again. Apply the workaround above once more, and it will work again.</p>
<p>So I personally have a <code>~/10-default.conflist.250</code> file with the above patch applied and then do:</p>
<pre><code>docker run -i --rm --privileged --pid=host justincormack/nsenter1 /bin/sh -c '/bin/cat > /var/lib/cni/conf/10-default.conflist' < ~/10-default.conflist.250
</code></pre>
<p>as step 3 above instead of editing the file by hand over and over.</p>
<p>It is quite annoying that this the workaround has to be applied every time docker for-win is restarted, but it is better than it not working :-).</p>
<h2>About the need to restart Docker for-win after disabling kubernetes</h2>
<p>My experience is that when kubernetes has been restarted and has reverted to using <code>10.1.0.0/16</code>, if I skip step 2 - the "restart Docker for-win" step - it takes more then 5 minutes to attempt to start kubernetes after which I give up waiting. When I now restart docker (because kubernetes is in a bad state), kubernetes will be re-enabled (again using <code>10.1.0.0/16</code>) but now I can follow the workaround again (including step 2). So restarting docker between disabling kubernetes and modifying <code>10-default.conflist</code> makes the subsequent start of kubernetes actually work.</p>
<p>If anybody has any idea why the contents of <code>/var/lib/cni/conf/10-default.conflist</code> revert to factory defaults every time docker gets restarted, I'm very curious to understand why that is and how to fix this last annoying problem.</p>
| Peter V. Mørch |
<p>I'm just trying to create a simple service account. Theoretically, kubectl automatically creates the secret and token for service accounts... But, not in my case... I've done this in <code>kube-system</code>, <code>default</code>, and new/other namespaces.</p>
<pre><code>me@mymachine ~ % kubectl create serviceaccount my-acct
serviceaccount/my-acct created
me@mymachine ~ % kubectl describe serviceaccount my-acct
Name: my-acct
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
</code></pre>
<p>I have reset the Kubernetes system: uninstalled, removed ./kube/ and removed the Library...
Still no secret is created. All of my developers' machines (Macs as well, both Intel and M1) automatically create the account secret.
Any ideas?</p>
| icetnet | <p><strong>Disclaimer</strong>: This answer will not "fix" the automatic creation of secrets for service accounts, but shows how you can associate a secret to a service account.</p>
<p>For the newer Docker Desktop 4.8.1 (for Mac), you can create the secret manually:</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: default-secret
annotations:
kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
EOF
</code></pre>
<p>And then you can associate the secret to the service account by editing the service account configuration, run:</p>
<pre><code>kubectl edit serviceaccounts default
</code></pre>
<p>There you can add the secret, at the end, like:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "XXXX-XX-XXTXX:XX:XXZ"
name: default
namespace: default
resourceVersion: "XXXX"
uid: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
secrets:
- name: default-secret
</code></pre>
<p>After that, you'll be able to use the token for the required purposes.</p>
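<p>As a side note (an addition, not part of the original workaround): the reason newer Docker Desktop builds behave this way is that Kubernetes stopped auto-creating token secrets for service accounts as of v1.24. If your cluster and kubectl are at 1.24 or newer, you can also request a short-lived token directly, without creating a secret at all:</p>
<pre><code>kubectl create token default      # or the name of your own service account, e.g. my-acct
</code></pre>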
| camikiller |
<p>We use spring-boot(2.5.6) in kubernetes</p>
<p>Many of the dependencies we use include health checks, for instance RedisHealthIndicator, CouchbaseHealthIndicator, etc.</p>
<p>When one of these health checks fails and the overall application health fails, the pod is restarted by kubernetes.</p>
<p>However, there is no indication of <em>why</em> it failed: Spring does not log health check failures, instead relying on the health check itself to log a message,
which the built-in health checks do not do.</p>
<p>So from the outside it appears that kubernetes has killed this pod for 'no reason' and we have to assume it was the health check</p>
<p>Does spring have a 'health check change event' so that I can log which bean has failed?</p>
<p>Or otherwise track the 'down' state of the health on an individual basis</p>
<p><a href="https://github.com/spring-projects/spring-boot/issues/22632" rel="nofollow noreferrer">https://github.com/spring-projects/spring-boot/issues/22632</a>
This issue is similar but they explicitly state they will not log failures</p>
| wesleyjconnor | <p>I've fought with this awhile myself. I'm not sure why they've taken that stance on logging health failures, but what is worse is the current implementation is incredibly unfriendly to try and inject that kind of functionality into.</p>
<p>In the end, the work around I settled on involved wrapping the health contributors so that I can log messages if they report not-up. The wrapper itself is pretty simple:</p>
<pre class="lang-java prettyprint-override"><code>public class LoggingHealthIndicator implements HealthIndicator {
private static final Logger LOG = LoggerFactory.getLogger(LoggingHealthIndicator.class);
private final String name;
private final HealthIndicator delegate;
public LoggingHealthIndicator(final String name, final HealthIndicator delegate) {
this.name = name;
this.delegate = delegate;
}
@Override
public Health health() {
final Health health = delegate.health();
if (!Status.UP.equals(health.getStatus())) {
if (health.getDetails() == null || health.getDetails().isEmpty()) {
LOG.error("Health check '{}' {}", name, health.getStatus());
}
else {
LOG.error("Health check '{}' {}: {}", name, health.getStatus(), health.getDetails());
}
}
return health;
}
}
</code></pre>
<p>You could of course do whatever you want; Raise an application event, further tweak when and what you log, etc. As fancy as you like.</p>
<p>As far as making it actually used, that's where it gets a little annoying. It involves replacing the <code>HealthContributorRegistry</code> with our own enhanced version.</p>
<pre class="lang-java prettyprint-override"><code> /**
* Replicated from {@link org.springframework.boot.actuate.autoconfigure.health.HealthEndpointConfiguration}.
*
* Note that we lose the {@link org.springframework.boot.actuate.autoconfigure.health.HealthEndpointConfiguration.AdaptedReactiveHealthContributors},
* since it is private. Technically its private to the package-scoped class it's a child of, so we lose twice really.
*/
@Bean
@SuppressWarnings("JavadocReference")
public HealthContributorRegistry healthContributorRegistry(final Map<String, HealthContributor> contributors, final HealthEndpointGroups groups) {
return new LoggingHealthContributorRegistry(contributors, groups.getNames());
}
</code></pre>
<pre class="lang-java prettyprint-override"><code>public class LoggingHealthContributorRegistry extends AutoConfiguredHealthContributorRegistryCopy {
private static HealthContributor loggingContributor(final Entry<String, HealthContributor> entry) {
return loggingContributor(entry.getKey(), entry.getValue());
}
private static HealthContributor loggingContributor(final String name, final HealthContributor contributor) {
if (contributor instanceof HealthIndicator){
return new LoggingHealthIndicator(name, (HealthIndicator)contributor);
}
return contributor;
}
public LoggingHealthContributorRegistry(Map<String, HealthContributor> contributors, Collection<String> groupNames) {
// The constructor does not use `registerContributor` on the input map entries
super(contributors.entrySet().stream().collect(Collectors.toMap(Entry::getKey, LoggingHealthContributorRegistry::loggingContributor)),
groupNames);
}
@Override
public void registerContributor(String name, HealthContributor contributor) {
super.registerContributor(name, loggingContributor(name, contributor));
}
}
</code></pre>
<p>A note about <code>AutoConfiguredHealthContributorRegistryCopy</code>: it's literally just a copy of the <code>AutoConfiguredHealthContributorRegistry</code> class that happens to be package-scoped and so isn't inheritable (unless you don't mind playing package games)</p>
| mrusinak |
<p>I have a dynamic <code>PersistentVolume</code> provisioned using <code>PersistentVolumeClaim</code>.</p>
<p>I would like to keep the PV after the pod is done. So I would like to have what <code>persistentVolumeReclaimPolicy: Retain</code> does.</p>
<p>However, that is applicable to <code>PersistentVolume</code>, not <code>PersistentVolumeClaim</code> (AFAIK).</p>
<p><strong>How can I change this behavior for dynamically provisioned PV's?</strong></p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ .Release.Name }}-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: gp2
resources:
requests:
storage: 6Gi
---
kind: Pod
apiVersion: v1
metadata:
name: "{{ .Release.Name }}-gatling-test"
spec:
restartPolicy: Never
containers:
- name: {{ .Release.Name }}-gatling-test
image: ".../services-api-mvn-builder:latest"
command: ["sh", "-c", 'mvn -B gatling:test -pl csa-testing -DCSA_SERVER={{ template "project.fullname" . }} -DCSA_PORT={{ .Values.service.appPort }}']
volumeMounts:
- name: "{{ .Release.Name }}-test-res"
mountPath: "/tmp/testResults"
volumes:
- name: "{{ .Release.Name }}-test-res"
persistentVolumeClaim:
claimName: "{{ .Release.Name }}-pvc"
#persistentVolumeReclaimPolicy: Retain ???
</code></pre>
| Ondra Žižka | <p>This is not the answer to the OP, but the answer to the personal itch that led me here is that I don't need <code>reclaimPolicy: Retain</code> at all. I need a <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="noreferrer"><code>StatefulSet</code></a> instead. Read on if this is for you:</p>
<p>My itch was to have a <code>PersistentVolume</code> that got re-used over and over by the container in a persistent way; the way that is the default behavior when using <code>docker</code> and <code>docker-compose</code> volumes. So that a new <code>PersistentVolume</code> only gets created once:</p>
<pre><code># Create a new PersistentVolume the very first time
kubectl apply -f my.yaml
# This leaves the "volume" - the PersistentVolume - alone
kubectl delete -f my.yaml
# Second and subsequent times re-use the same PersistentVolume
kubectl apply -f my.yaml
</code></pre>
<p>And I thought the way to do that was to declare a <code>PersistentVolumeClaim</code> with <code>reclaimPolicy: Retain</code> and then reference that in my deployment. But even when i got <code>reclaimPolicy: Retain</code> working, a brand new <code>PersistentVolume</code> still got created on every <code>kubectl apply</code>. <code>reclaimPolicy: Retain</code> just ensured that the old ones didn't get deleted.</p>
<p>But no. The way to achieve this use-case is with a <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="noreferrer"><code>StatefulSet</code></a>. It is way simpler, and then it behaves like I'm used to with docker and docker-compose.</p>
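<p>A minimal sketch of that (the names, image and storage size are illustrative, not from the original): a <code>StatefulSet</code> with a <code>volumeClaimTemplates</code> entry creates one PVC per replica and re-attaches the same PVC every time the pod is re-created, and by default those PVCs survive a <code>kubectl delete</code> of the StatefulSet itself:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp            # headless service name required by StatefulSets
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest     # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/myapp
  volumeClaimTemplates:         # one PVC per replica, reused across pod restarts and re-applies
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
</code></pre>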
| Peter V. Mørch |
<p>Being new to Kubernetes, I am trying to make a simple .NET Core 3 MVC app run on Kubernetes and reply on port 443 as well as port 80. I have a working Docker-Compose setup which I am trying to port to Kubernetes.</p>
<p>Running Docker Desktop CE with nginx-ingress on Win 10 Pro.</p>
<p>So far it is working on port 80. (<a href="http://mymvc.local" rel="noreferrer">http://mymvc.local</a> on host Win 10 - hosts file redirects mymvc.local to 127.0.0.1)</p>
<p>My MVC app is running behind service mvc on port 5000.</p>
<p>I've made a self-signed certificate for the domain 'mymvc.local', which is working in the Docker-Compose setup.</p>
<p>This is my ingress file</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: mvc-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- mymvc.local
secretName: mvcsecret-tls
rules:
- host: mymvc.local
http:
paths:
- path: /
backend:
serviceName: mvc
servicePort: 5000
</code></pre>
<p>This is my secrets file (keys abbreviated):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mvcsecret-tls
data:
tls.crt: MIIDdzCCAl+gAwIBAgIUIok60uPHId5kve+/bZAw/ZGftIcwDQYJKoZIhvcNAQELBQAwKTELMAkGBxGjAYBgN...
tls.key: MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDPGN6yq9yzxvDL8fEUJChqlnaTQW6bQX+H0...
type: kubernetes.io/tls
</code></pre>
<p>kubectl describes the ingress as follows:</p>
<pre><code>Name: mvc-ingress
Namespace: default
Address: localhost
Default backend: default-http-backend:80 (<none>)
TLS:
mvcsecret-tls terminates mymvc.local
Rules:
Host Path Backends
---- ---- --------
mymvc.local
/ mvc:5000 (10.1.0.27:5000)
Annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 11m nginx-ingress-controller Ingress default/mvc-ingress
Normal UPDATE 11m nginx-ingress-controller Ingress default/mvc-ingress
</code></pre>
<p>In my Docker-Compose setup, I have an Nginx reverse proxy redirecting 80 and 443 to my MVC service, but I figured that is the role of ingress on Kubernetes?</p>
<p>My service YAML:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mvc
labels:
app: mymvc
spec:
ports:
- name: "mvc"
port: 5000
targetPort: 5000
selector:
app: mymvc
type: ClusterIP
</code></pre>
<p><strong>EDIT:</strong>
Adding 'nginx.ingress.kubernetes.io/rewrite-target: /' to ingress annotations makes the https forwarding work, but the certificate presented is the 'Kubernetes Ingress Controller Fake Certificate' - not my self-signed one.</p>
| TheRoadrunner | <p>The solution turned out to be the addition of a second kind of certificate.</p>
<p>Instead of using the secrets file above (where I pasted the contents of my certificates files), I issued kubectl to use my certificate files directly:</p>
<pre><code>kubectl create secret tls mvcsecret-tls --key MyCert.key --cert MyCert.crt
kubectl create secret generic tls-rootca --from-file=RootCA.pem
</code></pre>
| TheRoadrunner |
<p>I am having a lot of issues configuring my Dockerized Django + PostgreSQL DB application to work on a Kubernetes cluster, which I have created using Google Cloud Platform.</p>
<p>How do I specify DATABASES.default.HOST from my settings.py file when I deploy image of PostgreSQL from Docker Hub and an image of my Django Web Application, to the Kubernetes Cluster?</p>
<p>Here is how I want my app to work. When I run the application locally, I want to use SQLITE DB, in order to do that I have made following changes in my settings.py file:</p>
<pre><code>if(os.getenv('DB')==None):
print('Development - Using "SQLITE3" Database')
DATABASES = {
'default':{
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR,'db.sqlite3'),
}
}
else:
print('Production - Using "POSTGRESQL" Database')
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'agent_technologies_db',
'USER': 'stefan_radonjic',
'PASSWORD': 'cepajecar995',
'HOST': , #???
'PORT': , #???
}
}
</code></pre>
<p>The main idea is that when I deploy application to Kubernetes Cluster, inside of Kubernetes Pod object, a Docker container ( my Dockerized Django application ) will run. When creating a container I am also creating Environment Variable <code>DB</code> and setting it to True. So when I deploy application I use PostgreSQL Database .</p>
<p><strong>NOTE</strong>: If anyone has any other suggestions how I should separate Local from Production development, please leave a comment. </p>
<p>Here is how my Dockerfile looks like:</p>
<pre><code>FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /agent-technologies
WORKDIR /agent-technologies
COPY . /agent-technologies
RUN pip install -r requirements.txt
EXPOSE 8000
</code></pre>
<p>And here is how my docker-compose file looks like:</p>
<pre><code>version: '3'
services:
web:
build: .
command: python src/manage.py runserver --settings=agents.config.settings
volumes:
- .:/agent-technologies
ports:
- "8000:8000"
environment:
- DB=true
</code></pre>
<p>When running application locally it works perfectly fine. But when I try to deploy it to Kubernetes cluster, Pod objects which run my application containers are crashing in an infinite loop, because I dont know how to specify the DATABASES.default.HOST when running app in production environment. And of course the command specified in docker-compose file (<code>command: python src/manage.py runserver --settings=agents.config.settings</code>) probably produces an exception and makes the Pods crash in infinite loop.</p>
<p>NOTE: I have already created all necessary configuration files for Kubernetes ( Deployment definitions / Services / Secret / Volume files ). Here is my github link: <a href="https://github.com/StefanCepa/agent-technologies-bachelor" rel="nofollow noreferrer">https://github.com/StefanCepa/agent-technologies-bachelor</a></p>
<p>Any help would be appreciated! Thank you all in advance!</p>
| Stefan Radonjic | <p>You will have to create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> (cluster ip) for your postgres pod to make it "accessible". When you create a service, you can <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="nofollow noreferrer">access</a> it via <code><service name>.default:<port></code>. However, running postgres (or any db) as a simple pod is dangerous (you will loose data as soon as you or k8s re-creates the pod or scale it up). You can use a <a href="https://cloud.google.com/sql/docs/postgres/" rel="nofollow noreferrer">service</a> or install it properly using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">statefulSets</a>.</p>
<p>Once you have the address, you can put it in <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">env variable</a> and access it from your settings.py</p>
<p><strong>EDIT</strong>:
Put this in your deployment yaml (<a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">example</a>):</p>
<pre><code>env:
- name: POSTGRES_HOST
value: "postgres-service.default"
- name: POSTGRES_PORT
value: "5432"
- name: DB
value: "DB"
</code></pre>
<p>And in your settings.py</p>
<pre><code>'USER': 'stefan_radonjic',
'PASSWORD': 'cepajecar995',
'HOST': os.getenv('POSTGRES_HOST'),
'PORT': os.getenv('POSTGRES_PORT'),
</code></pre>
| Amrit |
<p>I am trying to deploy an Apache Ignite cluster in Kubernetes. The documentation suggests using TcpDiscoveryKubernetesIpFinder to facilitate the Ignite node discovery in a Kubernetes environment. However, I could not find this class in Apache Ignite for .Net. Has it been migrated to .Net at all? If not, how can I use it in my .NET application? I am not very familiar with Java.</p>
<p>If it is not possible, is there an alternative approach to implement node discovery in the Kubernetes environment without using TcpDiscoveryKubernetesIpFinder? Multicast is not available in Azure Virtual Network. </p>
<p>The range of available IPs in my Kubernetes subnet is 1000+ addresses so using TcpDiscoveryStaticIpFinder would not be very efficient. I tried to reduce FailureDetectionTimeout to 1 sec on my local PC to make it more efficient but Ignite generates a bunch of the "critical thread blocked" exception, allegedly each time when an endpoint is found unavailable. So I had to get rid of FailureDetectionTimeout.</p>
<p>I am using Azure Kubernetes Service and Apache Ignite 2.7 for Net. Thank you in advance. </p>
| Alex Avrutin | <p>You can combine Java-based (Spring XML) configuration with .NET configuration.</p>
<ol>
<li><p>Configure <code>TcpDiscoveryKubernetesIpFinder</code> in Spring XML file (see <a href="https://apacheignite.readme.io/docs/kubernetes-ip-finder" rel="nofollow noreferrer">https://apacheignite.readme.io/docs/kubernetes-ip-finder</a>)</p></li>
<li><p>In .NET, set <code>IgniteConfiguration.SpringConfigUrl</code> to point to that file</p></li>
</ol>
<p>The way it works is Ignite loads Spring XML first, then applies any custom config properties that are specified on .NET side.</p>
| Pavel Tupitsyn |
<p>Does anyone know the difference between those two?
For now the only difference I see is that regional clusters require >= 3 zones.</p>
| Andriy Kopachevskyy | <p>Found good explanation <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html#additional_zones" rel="noreferrer">here</a></p>
<blockquote>
<p>A "multi-zonal" cluster is a zonal cluster with at least one
additional zone defined; in a multi-zonal cluster, the cluster master
is only present in a single zone while nodes are present in each of
the primary zone and the node locations. In contrast, in a regional
cluster, cluster master nodes are present in multiple zones in the
region. For that reason, regional clusters should be preferred.</p>
</blockquote>
| Andriy Kopachevskyy |
<p>I have a pre-upgrade hook in my Helm chart that looks like this:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: "{{.Release.Name}}-preupgrade"
labels:
heritage: {{.Release.Service | quote }}
release: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-upgrade
"helm.sh/hook-weight": "0"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: "{{.Release.Name}}"
labels:
heritage: {{.Release.Service | quote }}
release: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
spec:
restartPolicy: Never
securityContext:
# Because we are running as non root user and group id/User id of the flink user is 1000/1000.
fsGroup: {{ .Values.spec.securityContext.fsGroup }}
runAsNonRoot: {{ .Values.spec.securityContext.runAsNonRootFlag }}
runAsUser: {{ .Values.spec.securityContext.runAsUser }}
containers:
- name: pre-upgrade-job
image: {{ .Values.registry }}/{{ .Values.imageRepo }}:{{ .Values.imageTag }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
# Got error /bin/sleep: invalid time interval 'lcm_hook'
args:
- lcm_hook
env:
# Need to add this env variable so that the custom flink conf values will be written to $FLINK_HOME/conf.
# This is needed for the hook scripts to connect to the Flink JobManager
- name: FLINK_KUBE_CONFIGMAP_PATH
value: {{ .Values.spec.config.mountPath }}
volumeMounts:
- name: {{ template "fullname" . }}-flink-config
mountPath: {{ .Values.spec.config.mountPath }}
- mountPath: {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}
name: shared-pvc
command: ["/bin/sh", "-c", "scripts/preUpgradeScript.sh","{{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}"]
command: ["/bin/sleep","10"]
volumes:
- name: {{ template "fullname" . }}-flink-config
configMap:
name: {{ template "fullname" . }}-flink-config
- name: shared-pvc
persistentVolumeClaim:
claimName: {{ template "fullname" . }}-shared-pv-claim
</code></pre>
<p>Here, I need to pass an argument called "lcm_hooks" to my docker container. But when I do this, this argument seems to override the argument for my second command ["/bin/sleep","10"], and I get an error </p>
<blockquote>
<p>/bin/sleep: invalid time interval 'lcm_hook'</p>
</blockquote>
<p>during the upgrade phase. What is the right way to ensure that I am able to pass one argument to my container, and a totally different one to my bash command in the helm hook?</p>
| James Isaac | <blockquote>
<p>my docker container, called "lcm_hooks"</p>
</blockquote>
<p>Your hook has one container which is not called <code>lcm_hooks</code>, you called it <code>pre-upgrade-job</code>. I'm mentioning this because perhaps you forgot to include a piece of code, or misunderstood how it works.</p>
<blockquote>
<p>I need to pass an argument to my docker container</p>
</blockquote>
<p>Your yaml specifies both <code>command</code> and <code>args</code>, therefore the image's original <code>entrypoint</code> and <code>cmd</code> will be completely ignored. If you want to "pass argument to container" you should omit the <code>command</code> from the yaml and override the <code>args</code> only.</p>
<blockquote>
<p>second command</p>
</blockquote>
<p>Your container spec does specify two commands, which means only the latter will execute. If you want to execute both of them you should chain them.</p>
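<p>For example, one way to chain them in a single shell invocation (a sketch that reuses the script call and the sleep from the question; keep only one <code>command</code> key):</p>
<pre><code>command:
  - /bin/sh
  - -c
  - >-
    scripts/preUpgradeScript.sh {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}
    && sleep 10
</code></pre>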
<blockquote>
<p>What is the right way to ensure that I am able to pass one argument to my container, and a totally different one to my bash command in the helm hook</p>
</blockquote>
<p>You separate the hook container from the actual container you wanted to deploy using Helm.</p>
<p>I recommend the you review the container spec, and Helm hooks docs, that might clarify things:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></li>
<li><a href="https://github.com/kubernetes/helm/blob/master/docs/charts_hooks.md" rel="nofollow noreferrer">https://github.com/kubernetes/helm/blob/master/docs/charts_hooks.md</a></li>
</ul>
| itaysk |
<p>We have a microservice with IdentityServer4. When a user logs in to the app and we restart the pod with this microservice, the token somehow remains valid (the user can browse the app), but when they click logout there is a call to endsession which removes the token and redirects to the logout page (and since there is no token, we get access denied).</p>
<p><a href="https://i.stack.imgur.com/LOI51.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LOI51.png" alt="enter image description here"></a></p>
<pre><code>2019-08-14 08:19:57.5514|DEBUG|||||MyNamespace.UserManagement.Api.Program||init main |!LOGEND!
2019-08-14 08:19:58.5769|INFO|||||MyNamespace.Common.Core.Rpc.Client.RpcClientServiceCollectionExtensions|UserManagement.Api|Rpc Client:PermissionsServiceClient is connecting to usermanagement-worker:9090 |!LOGEND!
2019-08-14 08:19:58.7928|INFO|||||MyNamespace.Common.Core.Rpc.Client.RpcClientServiceCollectionExtensions|UserManagement.Api|Rpc Client:NotificationServiceClient is connecting to notification-worker:9090 |!LOGEND!
2019-08-14 08:19:58.7928|INFO|||||MyNamespace.Common.Core.Rpc.Client.RpcClientServiceCollectionExtensions|UserManagement.Api|Rpc Client:ContentFileServiceClient is connecting to content-worker:9090 |!LOGEND!
2019-08-14 08:19:59.0045|WARN|||||Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager|UserManagement.Api|No XML encryptor configured. Key {d4445b6b-a8ae-47b4-bd08-2ff446b40755} may be persisted to storage in unencrypted form. |!LOGEND!
2019-08-14 08:19:59.0865|INFO|||||IdentityServer4.Startup|UserManagement.Api|You are using the in-memory version of the persisted grant store. This will store consent decisions, authorization codes, refresh and reference tokens in memory only. If you are using any of those features in production, you want to switch to a different store implementation. |!LOGEND!
2019-08-14 08:19:59.0986|INFO|||||IdentityServer4.Startup|UserManagement.Api|Using the default authentication scheme Identity.Application for IdentityServer |!LOGEND!
2019-08-14 08:19:59.0986|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Using Identity.Application as default ASP.NET Core scheme for authentication |!LOGEND!
2019-08-14 08:19:59.0986|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Using Identity.External as default ASP.NET Core scheme for sign-in |!LOGEND!
2019-08-14 08:19:59.0986|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Using Identity.External as default ASP.NET Core scheme for sign-out |!LOGEND!
2019-08-14 08:19:59.0986|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Using Identity.Application as default ASP.NET Core scheme for challenge |!LOGEND!
2019-08-14 08:19:59.0986|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Using Identity.Application as default ASP.NET Core scheme for forbid |!LOGEND!
2019-08-14 08:20:02.4042|INFO|||||MyNamespace.UserManagement.Domain.UserManagementDataContext|UserManagement.Api|Seeding data for |!LOGEND!
2019-08-14 08:20:02.8778|WARN|||||Microsoft.EntityFrameworkCore.Query|UserManagement.Api|The Include operation for navigation '[rp].Permission' is unnecessary and was ignored because the navigation is not reachable in the final query results. See https://go.microsoft.com/fwlink/?linkid=850303 for more information. |!LOGEND!
2019-08-14 08:20:02.8778|WARN|||||Microsoft.EntityFrameworkCore.Query|UserManagement.Api|The Include operation for navigation '[rp].Role' is unnecessary and was ignored because the navigation is not reachable in the final query results. See https://go.microsoft.com/fwlink/?linkid=850303 for more information. |!LOGEND!
2019-08-14 08:20:03.1423|DEBUG|||||Jaeger.Configuration|UserManagement.Api|Using the UDP Sender to send spans to the agent. |!LOGEND!
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
2019-08-14 08:20:19.3125|DEBUG|||||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
2019-08-14 08:20:26.1147|DEBUG|||||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
2019-08-14 08:20:32.2729|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Login Url: /Account/Login |!LOGEND!
2019-08-14 08:20:32.2729|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Login Return Url Parameter: ReturnUrl |!LOGEND!
2019-08-14 08:20:32.2729|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Logout Url: /Account/Logout |!LOGEND!
2019-08-14 08:20:32.2729|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|ConsentUrl Url: /consent |!LOGEND!
2019-08-14 08:20:32.2729|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Consent Return Url Parameter: returnUrl |!LOGEND!
2019-08-14 08:20:32.2729|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Error Url: /home/error |!LOGEND!
2019-08-14 08:20:32.2729|DEBUG|||||IdentityServer4.Startup|UserManagement.Api|Error Id Parameter: errorId |!LOGEND!
2019-08-14 08:20:39.2364|DEBUG|||||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
2019-08-14 08:20:46.1140|DEBUG|||||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
2019-08-14 08:20:56.1262|DEBUG|||||IdentityServer4.Hosting.EndpointRouter|UserManagement.Api|Request path /connect/authorize matched to endpoint type Authorize |!LOGEND!
2019-08-14 08:20:56.1423|DEBUG|||||IdentityServer4.Hosting.EndpointRouter|UserManagement.Api|Endpoint enabled: Authorize, successfully created handler: IdentityServer4.Endpoints.AuthorizeEndpoint |!LOGEND!
2019-08-14 08:20:56.1423|INFO|||||IdentityServer4.Hosting.IdentityServerMiddleware|UserManagement.Api|Invoking IdentityServer endpoint: IdentityServer4.Endpoints.AuthorizeEndpoint for /connect/authorize |!LOGEND!
2019-08-14 08:20:56.1461|DEBUG|||||IdentityServer4.Endpoints.AuthorizeEndpoint|UserManagement.Api|Start authorize request |!LOGEND!
2019-08-14 08:20:56.1563|DEBUG|||||IdentityServer4.Endpoints.AuthorizeEndpoint|UserManagement.Api|No user present in authorize request |!LOGEND!
2019-08-14 08:20:56.1606|DEBUG|||||IdentityServer4.Validation.AuthorizeRequestValidator|UserManagement.Api|Start authorize request protocol validation |!LOGEND!
2019-08-14 08:20:56.1783|DEBUG|||||IdentityServer4.Stores.ValidatingClientStore|UserManagement.Api|client configuration validation for client 9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed succeeded. |!LOGEND!
2019-08-14 08:20:56.2215|DEBUG|||||IdentityServer4.Validation.AuthorizeRequestValidator|UserManagement.Api|Calling into custom validator: IdentityServer4.Validation.DefaultCustomAuthorizeRequestValidator |!LOGEND!
2019-08-14 08:20:56.2215|INFO|||||IdentityServer4.Endpoints.AuthorizeEndpoint|UserManagement.Api|ValidatedAuthorizeRequest
{"ClientId":"9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed", "ClientName":"angularclient", "RedirectUri":"https:\/\/myUrl\/silent-renew.html", "AllowedRedirectUris":["https:\/\/myUrl\/#\/auth-callback?","https:\/\/myUrl\/silent-renew.html","http:\/\/localhost:4200\/#\/auth-callback?","https:\/\/localhost:4200\/silent-renew.html"], "SubjectId":"anonymous", "ResponseType":"id_token token", "ResponseMode":"fragment", "GrantType":"implicit", "RequestedScopes":"openid profile Apis", "State":"MaWNo5cO47XXFUFMrUW0xNv7F3sMpfr3ngFOJpr6", "UiLocales":"en", "Nonce":"MaWNo5cO47XXFUFMrUW0xNv7F3sMpfr3ngFOJpr6", "PromptMode":"none", "LoginHint":"[email protected]", "Raw":{"response_type":"id_token token","client_id":"9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed","state":"MaWNo5cO47XXFUFMrUW0xNv7F3sMpfr3ngFOJpr6","redirect_uri":"https:\/\/myUrl\/silent-renew.html","scope":"openid profile Apis","nonce":"MaWNo5cO47XXFUFMrUW0xNv7F3sMpfr3ngFOJpr6","prompt":"none","ui_locales":"en","login_hint":"[email protected]"}} |!LOGEND!
2019-08-14 08:20:56.2725|INFO|||||IdentityServer4.ResponseHandling.AuthorizeInteractionResponseGenerator|UserManagement.Api|Showing error: prompt=none was requested but user is not authenticated |!LOGEND!
2019-08-14 08:20:56.2750|INFO|||||IdentityServer4.Endpoints.AuthorizeEndpoint|UserManagement.Api|{"ClientId":"9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed", "ClientName":"angularclient", "RedirectUri":"https:\/\/myUrl\/silent-renew.html", "AllowedRedirectUris":["https:\/\/myUrl\/#\/auth-callback?","https:\/\/myUrl\/silent-renew.html","http:\/\/localhost:4200\/#\/auth-callback?","https:\/\/localhost:4200\/silent-renew.html"], "SubjectId":"anonymous", "ResponseType":"id_token token", "ResponseMode":"fragment", "GrantType":"implicit", "RequestedScopes":"openid profile Apis", "State":"MaWNo5cO47XXFUFMrUW0xNv7F3sMpfr3ngFOJpr6", "UiLocales":"en", "Nonce":"MaWNo5cO47XXFUFMrUW0xNv7F3sMpfr3ngFOJpr6", "PromptMode":"none", "LoginHint":"[email protected]", "Raw":{"response_type":"id_token token","client_id":"9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed","state":"MaWNo5cO47XXFUFMrUW0xNv7F3sMpfr3ngFOJpr6","redirect_uri":"https:\/\/myUrl\/silent-renew.html","scope":"openid profile Apis","nonce":"MaWNo5cO47XXFUFMrUW0xNv7F3sMpfr3ngFOJpr6","prompt":"none","ui_locales":"en","login_hint":"[email protected]"}} |!LOGEND!
2019-08-14 08:20:56.2896|INFO|||||IdentityServer4.Events.DefaultEventService|UserManagement.Api|{"ClientId":"9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed", "ClientName":"angularclient", "RedirectUri":"https:\/\/myUrl\/silent-renew.html", "Endpoint":"Authorize", "Scopes":"openid profile Apis", "GrantType":"implicit", "Error":"login_required", "Category":"Token", "Name":"Token Issued Failure", "EventType":"Failure", "Id":2001, "ActivityId":"0HLP0I0V87B7O:00000005", "TimeStamp":"2019-08-14T08:20:56Z", "ProcessId":1, "LocalIpAddress":"::ffff:127.0.0.1:80", "RemoteIpAddress":"10.123.88.10"} |!LOGEND!
2019-08-14 08:20:59.2361|DEBUG|||||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
2019-08-14 08:21:06.1138|DEBUG|||||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
2019-08-14 08:21:09.5788|DEBUG|||||IdentityServer4.Hosting.EndpointRouter|UserManagement.Api|Request path /.well-known/openid-configuration/jwks matched to endpoint type Discovery |!LOGEND!
2019-08-14 08:21:09.5878|DEBUG|||||IdentityServer4.Hosting.EndpointRouter|UserManagement.Api|Endpoint enabled: Discovery, successfully created handler: IdentityServer4.Endpoints.DiscoveryKeyEndpoint |!LOGEND!
2019-08-14 08:21:09.5878|INFO|||||IdentityServer4.Hosting.IdentityServerMiddleware|UserManagement.Api|Invoking IdentityServer endpoint: IdentityServer4.Endpoints.DiscoveryKeyEndpoint for /.well-known/openid-configuration/jwks |!LOGEND!
2019-08-14 08:21:09.5912|DEBUG|||||IdentityServer4.Endpoints.DiscoveryKeyEndpoint|UserManagement.Api|Start key discovery request |!LOGEND!
2019-08-14 08:21:16.8870|DEBUG|||||IdentityServer4.Hosting.EndpointRouter|UserManagement.Api|Request path /connect/endsession matched to endpoint type Endsession |!LOGEND!
2019-08-14 08:21:16.8925|DEBUG|||||IdentityServer4.Hosting.EndpointRouter|UserManagement.Api|Endpoint enabled: Endsession, successfully created handler: IdentityServer4.Endpoints.EndSessionEndpoint |!LOGEND!
2019-08-14 08:21:16.8925|INFO|||||IdentityServer4.Hosting.IdentityServerMiddleware|UserManagement.Api|Invoking IdentityServer endpoint: IdentityServer4.Endpoints.EndSessionEndpoint for /connect/endsession |!LOGEND!
2019-08-14 08:21:16.8970|DEBUG|||||IdentityServer4.Endpoints.EndSessionEndpoint|UserManagement.Api|Processing signout request for anonymous |!LOGEND!
2019-08-14 08:21:16.9025|DEBUG|||||IdentityServer4.Validation.EndSessionRequestValidator|UserManagement.Api|Start end session request validation |!LOGEND!
2019-08-14 08:21:16.9097|DEBUG|||||IdentityServer4.Validation.TokenValidator|UserManagement.Api|Start identity token validation |!LOGEND!
2019-08-14 08:21:16.9462|DEBUG|||||IdentityServer4.Stores.ValidatingClientStore|UserManagement.Api|client configuration validation for client 9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed succeeded. |!LOGEND!
2019-08-14 08:21:16.9462|DEBUG|||||IdentityServer4.Validation.TokenValidator|UserManagement.Api|Client found: 9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed / angularclient |!LOGEND!
2019-08-14 08:21:17.0891|DEBUG|||||IdentityServer4.Validation.TokenValidator|UserManagement.Api|Calling into custom token validator: IdentityServer4.Validation.DefaultCustomTokenValidator |!LOGEND!
2019-08-14 08:21:17.0899|DEBUG|||||IdentityServer4.Validation.TokenValidator|UserManagement.Api|Token validation success
{"ClientId":"9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed", "ClientName":"angularclient", "ValidateLifetime":false, "Claims":{"nbf":1565770492,"exp":1565772292,"iss":"https:\/\/myurl\/usermanagement","aud":"9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed","nonce":"HTd1yWr7DEeL1BAxRSDJsNb4JkOdjFSRt","iat":1565770492,"at_hash":"HgkgWuBFWj9MTUYnKdU9Gw","sid":"534fb69c314ab146dc699f34d0f64e47","sub":"8d961fe9-cdcb-4563-abc2-e503d2794e1f","auth_time":1565770491,"idp":"ActiveDirectory","amr":"external"}} |!LOGEND!
2019-08-14 08:21:17.0963|INFO|||||IdentityServer4.Validation.EndSessionRequestValidator|UserManagement.Api|End session request validation failure: Invalid post logout URI
{"ClientId":"9e7b8d6a-ac6c-4f68-94eb-dd8ef7d17eed", "ClientName":"angularclient", "SubjectId":"unknown", "Raw":{"id_token_hint":"eyJhbGciOiJSUzI1NiIsImtpZCI6IkI5QjUyOEY2OTAyMzhCOTNBQTkzM0MyNUMyNU","post_logout_redirect_uri":"https:\/\/myUrl\/#\/auth-callback?"}} |!LOGEND!
2019-08-14 08:21:17.0998|ERROR|||||IdentityServer4.Endpoints.EndSessionEndpoint|UserManagement.Api|Error processing end session request Invalid request |!LOGEND!
2019-08-14 08:21:19.2363|DEBUG|3ddef511-e2d0-4a00-ac5e-69c0cf47e61c|HttpAPI|/Account/AccessDenied (GET)||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
2019-08-14 08:21:26.1139|DEBUG|3ddef511-e2d0-4a00-ac5e-69c0cf47e61c|HttpAPI|/Account/AccessDenied (GET)||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
2019-08-14 08:21:39.2363|DEBUG|3ddef511-e2d0-4a00-ac5e-69c0cf47e61c|HttpAPI|/Account/AccessDenied (GET)||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
2019-08-14 08:21:46.1138|DEBUG|||||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
2019-08-14 08:21:59.2363|DEBUG|||||OpenTracing.Contrib.NetCore.AspNetCore.AspNetCoreDiagnostics|UserManagement.Api|Ignoring request |!LOGEND!
</code></pre>
<p>and my startup </p>
<pre><code>.AddSigningCertificates(certificatesSettings)
.AddInMemoryIdentityResources(IdentityProviderConfig.IdentityResources)
.AddInMemoryApiResources(IdentityProviderConfig.ApiResources)
.AddInMemoryClients(IdentityProviderConfig.GetClients(identityConfig))
.AddAspNetIdentity<ApplicationUser>()
.AddProfileService<IdentityWithAdditionalClaimsProfileService>();
</code></pre>
| kosnkov | <p>Since you're not properly persisting your persisted grants, signing credentials and data protection keys, you will get all sorts of odd behavior when you scale across multiple processes or restart an instance. You must address these considerations before deploying into this sort of environment.</p>
<p>I suspect in your example the cookie issued to the user is no longer valid as the data protection keys used to encrypt and sign said cookie will no longer exist.</p>
<p>See the following documentation:</p>
<ul>
<li>Guidance from the identityserver4 authors: <a href="http://docs.identityserver.io/en/latest/topics/deployment.html" rel="nofollow noreferrer">http://docs.identityserver.io/en/latest/topics/deployment.html</a></li>
<li>Deploying ASP.Net Core: <a href="https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.2#scenarios-and-use-cases" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.2#scenarios-and-use-cases</a></li>
<li>ASP.NET Core Data Protection: <a href="https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/introduction?view=aspnetcore-2.2" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/introduction?view=aspnetcore-2.2</a></li>
</ul>
| mackie |
<p>I have an API written in Go that has been Dockerised and runs in a Kubernetes cluster on GKE.</p>
<p>At the moment my API server does not handle any shutdown scenarios such as a Pod dying or being purposefully brought down.</p>
<p>What set of UNIX signals should I expect to trap to gracefully shutdown the server and what circumstances would trigger them? For example, crashes, K8s shutdowns etc.</p>
| Andy Fusniak | <p>Kubernetes sends a <code>SIGTERM</code> signal. So the graceful shutdown may look like this:</p>
<pre><code>package main
import (
"context"
"log"
"net/http"
"os"
"os/signal"
"syscall"
)
func main() {
var srv http.Server
idleConnsClosed := make(chan struct{})
go func() {
sigint := make(chan os.Signal, 1)
// interrupt signal sent from terminal
signal.Notify(sigint, os.Interrupt)
// sigterm signal sent from kubernetes
signal.Notify(sigint, syscall.SIGTERM)
<-sigint
// We received an interrupt signal, shut down.
if err := srv.Shutdown(context.Background()); err != nil {
// Error from closing listeners, or context timeout:
log.Printf("HTTP server Shutdown: %v", err)
}
close(idleConnsClosed)
}()
if err := srv.ListenAndServe(); err != http.ErrServerClosed {
// Error starting or closing listener:
log.Printf("HTTP server ListenAndServe: %v", err)
}
<-idleConnsClosed
}
</code></pre>
<p>Also you should add Liveness and Readiness probes to your pods:</p>
<pre><code>livenessProbe:
httpGet:
path: /health
port: 80
readinessProbe:
httpGet:
path: /health
port: 80
</code></pre>
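<p>One additional detail worth knowing: after sending <code>SIGTERM</code>, Kubernetes waits 30 seconds by default before killing the container with <code>SIGKILL</code>. If draining connections can take longer than that, you can widen the window in the pod spec; a minimal sketch (the 60-second value is only an example):</p>
<pre><code>spec:
  # time allowed between SIGTERM and SIGKILL (default: 30 seconds)
  terminationGracePeriodSeconds: 60
</code></pre>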
| Alex Pliutau |
<p>I have defined a couple of case classes for JSON representation but I am not sure whether I did it properly, as there are a lot of nested case classes.
Entities like spec, meta and so on are of type JSONObject as well as the Custom object itself.</p>
<p>Here are all the classes I have defined:</p>
<pre><code> case class CustomObject(apiVersion: String,kind: String, metadata: Metadata,spec: Spec,labels: Object,version: String)
case class Metadata(creationTimestamp: String, generation: Int, uid: String,resourceVersion: String,name: String,namespace: String,selfLink: String)
case class Spec(mode: String,image: String,imagePullPolicy: String, mainApplicationFile: String,mainClass: String,deps: Deps,driver: Driver,executor: Executor,subresources: Subresources)
case class Driver(cores: Double,coreLimit: String,memory: String,serviceAccount: String,labels: Labels)
case class Executor(cores: Double,instances: Double,memory: String,labels: Labels)
case class Labels(version: String)
case class Subresources(status: Status)
case class Status()
case class Deps()
</code></pre>
<p>And this is a JSON structure for the custom K8s object I need to transform:</p>
<pre><code>{
"apiVersion": "sparkoperator.k8s.io/v1alpha1",
"kind": "SparkApplication",
"metadata": {
"creationTimestamp": "2019-01-11T15:58:45Z",
"generation": 1,
"name": "spark-example",
"namespace": "default",
"resourceVersion": "268972",
"selfLink": "/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example",
"uid": "uid"
},
"spec": {
"deps": {},
"driver": {
"coreLimit": "1000m",
"cores": 0.1,
"labels": {
"version": "2.4.0"
},
"memory": "1024m",
"serviceAccount": "default"
},
"executor": {
"cores": 1,
"instances": 1,
"labels": {
"version": "2.4.0"
},
"memory": "1024m"
},
"image": "gcr.io/ynli-k8s/spark:v2.4.0,
"imagePullPolicy": "Always",
"mainApplicationFile": "http://localhost:8089/spark_k8s_airflow.jar",
"mainClass": "org.apache.spark.examples.SparkExample",
"mode": "cluster",
"subresources": {
"status": {}
},
"type": "Scala"
}
}
</code></pre>
<p>UPDATE:
I want to convert JSON into case classes with Circe, however, with such classes I face this error:</p>
<pre><code>Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[dataModel.CustomObject]
implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder[CustomObject]
</code></pre>
<p>I have defined implicit decoders for all case classes:</p>
<pre><code> implicit val customObjectLabelsDecoder: Decoder[Labels] = deriveDecoder[Labels]
implicit val customObjectSubresourcesDecoder: Decoder[Subresources] = deriveDecoder[Subresources]
implicit val customObjectDepsDecoder: Decoder[Deps] = deriveDecoder[Deps]
implicit val customObjectStatusDecoder: Decoder[Status] = deriveDecoder[Status]
implicit val customObjectExecutorDecoder: Decoder[Executor] = deriveDecoder[Executor]
implicit val customObjectDriverDecoder: Decoder[Driver] = deriveDecoder[Driver]
implicit val customObjectSpecDecoder: Decoder[Spec] = deriveDecoder[Spec]
implicit val customObjectMetadataDecoder: Decoder[Metadata] = deriveDecoder[Metadata]
implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder[CustomObject]
</code></pre>
| Cassie | <p>The reason you can't derive a decode for <code>CustomObject</code> is because of the <code>labels: Object</code> member. </p>
<p>In circe all decoding is driven by static types, and circe does not provide encoders or decoders for types like <code>Object</code> or <code>Any</code>, which have no useful static information. </p>
<p>If you change that case class to something like the following:</p>
<pre><code>case class CustomObject(apiVersion: String, kind: String, metadata: Metadata, spec: Spec)
</code></pre>
<p>…and leave the rest of your code as is, with the import:</p>
<pre><code>import io.circe.Decoder, io.circe.generic.semiauto.deriveDecoder
</code></pre>
<p>And define your JSON document as <code>doc</code> (after adding a quotation mark to the <code>"image": "gcr.io/ynli-k8s/spark:v2.4.0,</code> line to make it valid JSON), the following should work just fine:</p>
<pre><code>scala> io.circe.jawn.decode[CustomObject](doc)
res0: Either[io.circe.Error,CustomObject] = Right(CustomObject(sparkoperator.k8s.io/v1alpha1,SparkApplication,Metadata(2019-01-11T15:58:45Z,1,uid,268972,spark-example,default,/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example),Spec(cluster,gcr.io/ynli-k8s/spark:v2.4.0,Always,http://localhost:8089/spark_k8s_airflow.jar,org.apache.spark.examples.SparkExample,Deps(),Driver(0.1,1000m,1024m,default,Labels(2.4.0)),Executor(1.0,1.0,1024m,Labels(2.4.0)),Subresources(Status()))))
</code></pre>
<p>Despite what one of the other answers says, circe can definitely derive encoders and decoders for case classes with no members—that's definitely not the problem here.</p>
<p>As a side note, I wish it were possible to have better error messages than this:</p>
<pre><code>Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[dataModel.CustomObject
</code></pre>
<p>But given the way circe-generic has to use Shapeless's <code>Lazy</code> right now, this is the best we can get. You can try <a href="https://github.com/circe/circe-derivation" rel="nofollow noreferrer">circe-derivation</a> for a mostly drop-in alternative for circe-generic's semi-automatic derivation that has better error messages (and some other advantages), or you can use a compiler plugin like <a href="https://github.com/tek/splain" rel="nofollow noreferrer">splain</a> that's specifically designed to give better error messages even in the presence of things like <code>shapeless.Lazy</code>.</p>
<p>As one final note, you can clean up your semi-automatic definitions a bit by letting the type parameter on <code>deriveDecoder</code> be inferred:</p>
<pre><code>implicit val customObjectLabelsDecoder: Decoder[Labels] = deriveDecoder
</code></pre>
<p>This is entirely a matter of taste, but I find it a little less noisy to read.</p>
| Travis Brown |
<p>I know that the rule in KUBE-MARK-MASQ chain is a mark rule:</p>
<pre><code>iptables -t nat -nvL KUBE-MARK-MASQ
Chain KUBE-MARK-MASQ (123 references)
pkts bytes target prot opt in out source destination
16 960 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
</code></pre>
<p>It marks a packet so that the packet can then be SNATed in the KUBE-POSTROUTING chain, where the source IP is changed to the node's IP. But what confuses me is why there are so many different KUBE-MARK-MASQ rules in the k8s chains. For example, in the KUBE-SERVICES chain there are lots of KUBE-MARK-MASQ rules. What are they marking, the pod, or something else?</p>
<p>Let's see an example:</p>
<pre><code>KUBE-MARK-MASQ tcp -- * * !10.244.0.0/16 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
</code></pre>
<p>It's the kube-dns cluster IP rule. My pods' CIDR is 10.244.0.0/16. Why does the rule's source IP have a <code>!</code>? If a pod on the node wants to send a packet outbound, I would expect there to be no <code>!</code>, so that it can be SNATed in KUBE-POSTROUTING to the node's IP. Is my understanding wrong?</p>
<p>And there are also other KUBE-MARK-MASQ rule in KUBE-SEP-XXX chain:</p>
<pre><code>KUBE-MARK-MASQ all -- * * 10.244.2.162 0.0.0.0/0 /* default/echo-load-balance: */
DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echo-load-balance: */ tcp to:10.244.2.162:8080
</code></pre>
<p>The pod's IP is 10.244.2.162, and the rule's source IP matches the pod's IP. What is it used for?</p>
<p>And in KUBE-FW-XXX chain:</p>
<pre><code>KUBE-MARK-MASQ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echo-load-balance: loadbalancer IP */
KUBE-SVC-P24HJGZOUZD6OJJ7 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echo-load-balance: loadbalancer IP */
KUBE-MARK-DROP all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echo-load-balance: loadbalancer IP */
</code></pre>
<p>Why is the source IP here 0.0.0.0/0? What is it used for?</p>
| Jesse Stutler | <p>To see all rules <code>iptables-save</code> output is useful.</p>
<p><code>iptables</code> processing chart might help understand this:
<a href="https://i.stack.imgur.com/l8RkC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l8RkC.png" alt="iptables" /></a></p>
<p>(<a href="https://upload.wikimedia.org/wikipedia/commons/9/91/Iptables_diagram.png" rel="nofollow noreferrer">diagram from Wikipedia</a>)</p>
<p>When you isolate rules for a single service <code>nat</code> rules looks like this:</p>
<pre><code>*nat
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-LPGSDLJ3FDW46N4W -s 10.244.0.5/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-LPGSDLJ3FDW46N4W -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.5:53
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.5:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-LPGSDLJ3FDW46N4W
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.7:53" -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
</code></pre>
<p>In this case it balances DNS queries between 2 instances with <code>50%</code> chance to hit one DNS instance:</p>
<pre><code>--mode random --probability 0.5
</code></pre>
<p>And yes, it does look a bit complicated. But that's what you get when you're building a universal solution for all cases.</p>
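<p>If you want to pull out the rules for just one service yourself, grepping the <code>iptables-save</code> output by the service comment, or listing its service chain directly, is usually enough; a quick sketch (the chain name below is the kube-dns example from above):</p>
<pre><code># keep only the nat rules that mention the kube-dns service
sudo iptables-save -t nat | grep 'kube-system/kube-dns'

# or inspect one specific service chain
sudo iptables -t nat -nvL KUBE-SVC-TCOU7JCQXEZGVUNU
</code></pre>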
| Tombart |
<p>Hello, I have a problem in Kubernetes. When I do an nslookup from a pod I get the correct IP:</p>
<pre><code>~ kubectl -n exampleNamespace exec -it pod/curl -- nslookup exampleService.exampleNamespace
Defaulting container name to curl.
Use 'kubectl describe pod/curl -n exampleNamespace' to see all of the containers in this pod.
Server: 192.168.3.10
Address: 192.168.3.10:53
** server can't find exampleService.exampleNamespace: NXDOMAIN
Non-authoritative answer:
Name: exampleService.exampleNamespace
Address: 192.168.3.64
command terminated with exit code 1
</code></pre>
<p>192.168.3.64 is the correct IP, but when I try to curl this DNS name from a pod in the same namespace I get this:</p>
<pre><code>~ kubectl -n exampleNamespace exec -it pod/curl -- curl http://exampleService.exampleNamespace/path
Defaulting container name to curl.
Use 'kubectl describe pod/curl -n exampleNamespace' to see all of the containers in this pod.
curl: (6) Could not resolve host: exampleService.exampleNamespace
command terminated with exit code 6
</code></pre>
<p>Curl pod was started with following yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: curl
namespace: exampleNamespace
spec:
containers:
- image: curlimages/curl
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: curl
restartPolicy: Always
</code></pre>
| D. O. | <p>It seems that there are some problems with <code>Alpine</code> and <code>Kubernetes</code> DNS resolution, as reported at several sites:</p>
<ul>
<li><a href="https://www.openwall.com/lists/musl/2018/03/30/9" rel="nofollow noreferrer">https://www.openwall.com/lists/musl/2018/03/30/9</a></li>
<li><a href="https://stackoverflow.com/questions/65181012/does-alpine-have-known-dns-issue-within-kubernetes">Does Alpine have known DNS issue within Kubernetes?</a></li>
<li><a href="https://github.com/gliderlabs/docker-alpine/issues/8" rel="nofollow noreferrer">https://github.com/gliderlabs/docker-alpine/issues/8</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/30215" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/30215</a></li>
</ul>
<p>Using image <code>curlimages/curl:7.77.0</code> works as expected.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: curl
namespace: exampleNamespace
spec:
containers:
- image: curlimages/curl:7.77.0
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: curl
restartPolicy: Always
</code></pre>
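<p>To double-check the fix you can re-run the lookup and the request from the pinned image, reusing the commands from the question:</p>
<pre><code>kubectl -n exampleNamespace exec -it pod/curl -- nslookup exampleService.exampleNamespace.svc.cluster.local
kubectl -n exampleNamespace exec -it pod/curl -- curl -s http://exampleService.exampleNamespace/path
</code></pre>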
| TlmaK0 |
<p>I'm running microk8s (installed via <code>snap install microk8s --classic</code>) across a local multinode cluster and <a href="https://snapcraft.io/spark-client" rel="nofollow noreferrer">spark-client</a> (installed via <code>snap install spark-client --edge</code>). Two of the nodes are WSL2 (Ubuntu) on Windows 11. I'm now adding a laptop running Ubuntu node. When I try to run <code>spark-client.spark-shell ...</code>, it will spin up executors successfully on the two WSL nodes, but it fails on the new laptop node. I know the laptop node is capable of running pods successfully because an hdfs pod is successfully running there.</p>
<p><code>spark-shell</code> immediately deletes and creates a new pod when it fails, so it's hard to see the error information. I was able to capture a log and it was only one line:
<code>error: unknown command "executor", see 'pebble help'.</code></p>
<p>I notice that in the configuration for those pods there is an argument: executor, so that might be where that's coming from. But why would one node start up differently?</p>
<p>The image is: ghcr.io/canonical/charmed-spark:3.4.0-22.04_edge . I was able to run it directly.</p>
<p>Any ideas on how to resolve or further troubleshoot this?</p>
<p>Note: I did see 2 other questions here that are similar, but they do not have this particular error message, so I think this question is distinct.</p>
<p>Update: I just noticed in the node details, the sha256 is different across the nodes for that image.</p>
| mentics | <p>Good news and bad news...</p>
<p>I deleted the images on all the nodes:</p>
<p><code>microk8s.ctr images delete ghcr.io/canonical/charmed-spark:3.4.0-22.04_edge</code></p>
<p>to force it to pull the latest.</p>
<p>Now all the nodes behave the same. They all fail. I'll assume that some bug was introduced last week. The latest edge release was last week (6/8), after I pulled the image for the old nodes, but before I pulled for the new node.</p>
<p>Mystery solved, though no solution, because I can't pull an older version via snap because it's on the same channel. I'll find something else to use.</p>
<p>Filed bug: <a href="https://github.com/canonical/spark-client-snap/issues/68" rel="nofollow noreferrer">https://github.com/canonical/spark-client-snap/issues/68</a>
There's an immediate workaround in the bug conversation, though it sounds like they'll have a fix out soon.</p>
| mentics |
<p>I have an OpenShift cluster, and periodically when accessing logs, I get:</p>
<pre><code>worker1-sass-on-prem-origin-3-10 on 10.1.176.130:53: no such host" kube doing a connection to 53 on a node.
</code></pre>
<p>I also tend to see <code>tcp: lookup postgres.myapp.svc.cluster.local on 10.1.176.136:53: no such host</code> errors from time to time in pods, again, this makes me think that, when accessing internal service endpoints, pods, clients, and other Kubernetes related services actually talk to a DNS server that is assumed to be running on the given node that said pods are running on.</p>
<h1>Update</h1>
<p>Looking into one of my pods on a given node, I found the following in resolv.conf (I had to ssh and run <code>docker exec</code> to get this output - since oc exec isn't working due to this issue).</p>
<pre><code>/etc/cfssl $ cat /etc/resolv.conf
nameserver 10.1.176.129
search jim-emea-test.svc.cluster.local svc.cluster.local cluster.local bds-ad.lc opssight.internal
options ndots:5
</code></pre>
<p>Thus, it appears that in my cluster, containers have a self-referential resolv.conf entry. This cluster is created with <em>openshift-ansible</em>. I'm not sure if this is infra-specific, or if it's actually a fundamental aspect of how openshift nodes work, but I suspect the latter, as I haven't done any major customizations to my ansible workflow from the upstream openshift-ansible recipes.</p>
| jayunit100 | <h1>Yes, DNS on every node is normal in openshift.</h1>
<p>It does appear that it's normal for an openshift-ansible deployment to deploy <code>dnsmasq</code> services on every node.</p>
<h2>Details.</h2>
<p>As an example of how this can affect things, the following <a href="https://github.com/openshift/openshift-ansible/pull/8187" rel="nofollow noreferrer">https://github.com/openshift/openshift-ansible/pull/8187</a> is instructive. In any case, if a local node's dnsmasq is acting flaky for any reason, it will prevent containers running on that node from properly resolving addresses of other containers in a cluster.</p>
<h2>Looking deeper at the dnsmasq 'smoking gun'</h2>
<p>After checking on an individual node, I found that there was in fact a process bound to port 53, and it is dnsmasq. Hence:</p>
<pre><code>[enguser@worker0-sass-on-prem-origin-3-10 ~]$ sudo netstat -tupln | grep 53
tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      675/openshift
</code></pre>
<p>And, dnsmasq is running locally: </p>
<pre><code>[enguser@worker0-sass-on-prem-origin-3-10 ~]$ ps -ax | grep dnsmasq
 4968 pts/0    S+     0:00 grep --color=auto dnsmasq
 6994 ?        Ss     0:22 /usr/sbin/dnsmasq -k
[enguser@worker0-sass-on-prem-origin-3-10 ~]$ sudo ps -ax | grep dnsmasq
 4976 pts/0    S+     0:00 grep --color=auto dnsmasq
 6994 ?        Ss     0:22 /usr/sbin/dnsmasq -k
</code></pre>
<p>The final clue, resolv.conf itself is even adding the local IP address as a nameserver... And this is obviously borrowed into containers that start.</p>
<pre><code># nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
# Generated by NetworkManager
search cluster.local bds-ad.lc opssight.internal
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 10.1.176.129
</code></pre>
<h1>The solution (in my specific case)</h1>
<p>In my case, this was happening because the local nameserver was using an <code>ifcfg</code> (you can see these files in /etc/sysconfig/network-scripts/) with</p>
<pre><code>[enguser@worker0-sass-on-prem-origin-3-10 network-scripts]$ cat ifcfg-ens192
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=50936212-cb5e-41ff-bec8-45b72b014c8c
DEVICE=ens192
ONBOOT=yes
</code></pre>
<p>However, my internally configured Virtual Machines could not resolve IPs provided to them by the PEERDNS records.</p>
<p>Ultimately the fix was to work with our IT department to make sure our authoritative domain for our kube clusters had access to all IP addresses in our data center.</p>
<h1>The Generic Fix to :53 lookup errors...</h1>
<p>If you're seeing the :53 record errors come up when you try to kubectl or oc logs / exec, then it is likely that <em>your apiserver is not able to connect with kubelets via their IP address</em>.</p>
<p>If you're seeing :53 record errors in other places, for example <em>inside of pods</em>, then this is because your pod, using its own local DNS, isn't able to resolve internal cluster IP addresses. This might simply be because you have an outdated controller that is looking for services that don't exist anymore, or else you have flakiness at your kubernetes DNS implementation level.</p>
| jayunit100 |
<p>I have a local minikube installation. I want to change the authentication mechanism for the api-server and restart and test it out. All the documentation I have read lacks this information.</p>
| Vaibhav Ranglani | <p>Yes you can. The kubernetes API Server, Controller manager, and scheduler are all run as static manifests in minikube.</p>
<p>So, in fact, in your example: Any change to the manifest will <em>automatically</em> lead to them being restarted instantly.</p>
<p>In order to make the modification, just use <code>vi</code> inside of /etc/kubernetes/manifests on whatever file you want to edit, and you'll see that the apiserver instantly restarts.</p>
<p>To look at the logs of the restart, you can look in /var/log/containers/ where each of the individual minikube services run.</p>
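<p>For example (assuming a VM-based minikube driver, and noting that the manifest file names can differ between minikube versions):</p>
<pre><code>minikube ssh
# editing the manifest makes the kubelet restart the apiserver automatically
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# then look for the restarted container's log
sudo ls /var/log/containers/ | grep apiserver
</code></pre>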
| jayunit100 |
<p>I have the following Google Cloud Build pipeline:</p>
<pre><code># gcloud builds submit --config cloud-build/cloudbuild.yaml --substitutions=_GIT_USER="<your user>",_GIT_PASS="<your password here>,_GIT_TAG="<tag name>"
steps:
# Git
- name: 'gcr.io/cloud-builders/git'
args: ['clone', 'https://${_GIT_USER}:${_GIT_PASS}@bitbucket.my-company.com/scm/my-project/my-app.git',
'--branch', '${_GIT_TAG}', '--single-branch']
# Build
- name: 'gcr.io/cloud-builders/mvn'
args: ['package', '-DskipTests=true']
dir: my-app/backend
- name: 'gcr.io/cloud-builders/docker'
args: ['build',
'--no-cache',
'-t', 'gcr.io/$PROJECT_ID/my-app-test:latest',
'-f', './cloud-build/Dockerfile-backend',
'--build-arg', 'JAR_FILE=./my-app/backend/target/my-app-0.0.1-SNAPSHOT.jar',
'.']
- name: "gcr.io/cloud-builders/docker"
args: ["push", "gcr.io/$PROJECT_ID/my-app-test:latest"]
# Deploy
# The Deploy step requires the role 'Kubernetes Engine Developer' for the service account `<project_number>@cloudbuild.gserviceaccount.com`
- name: 'gcr.io/cloud-builders/kubectl'
id: Deploy
args:
- 'apply'
- '-f'
- 'cloud-build/deployment-backend.yaml'
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_K8S_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_K8S_CLUSTER}'
substitutions:
_K8S_COMPUTE_ZONE: us-central1-a
_K8S_CLUSTER: my-cluster-1
_GIT_USER: my-git-user
_GIT_PASS: replace-me-in-cloudbuild-file # default value
</code></pre>
<p>deployment-backend.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-backend
spec:
replicas: 1
selector:
matchLabels:
app: my-backend
template:
metadata:
labels:
app: my-backend
spec:
containers:
- name: my-backend
image: gcr.io/<my-project>/my-app-test:latest
...
</code></pre>
<p>The problem is that in step 3 I have to build the image as <code>my-app-test:latest</code> so I can use again the latest image in the <code>deployment.yaml</code> (<code>image: gcr.io/<my-project>/my-app-test:latest</code>)
I would like to be able to use the tag name for the image tag like this:</p>
<p>step 3:</p>
<pre><code>- name: 'gcr.io/cloud-builders/docker'
args: ['build',
'--no-cache',
'-t', 'gcr.io/$PROJECT_ID/my-app-test:${_GIT_TAG}',
'-f', './cloud-build/Dockerfile-backend',
'--build-arg', 'JAR_FILE=./my-app/backend/target/my-app-0.0.1-SNAPSHOT.jar',
'.']
</code></pre>
<p>but in that case what is the best way to tell the Deployment step to use the image named after the tag that is used?</p>
<p>I've found that Kustomize is the idiomatic way to "parameterize" kubernetes, but I still have to know the image name upfront and store it in a file.</p>
<p>Replacing the image tag with <code>sed</code> might work, but does not seem like a good solution. </p>
| Evgeni Dimitrov | <p>Is there a reason you’re reapplying they deployment? Or are you doing that just one time? you could just use the built in command to replace / update the image instead of reapplying the config (if that’s what you’re doing)</p>
<p><code>kubectl set image deployment/my-deployment mycontainer=myimage</code></p>
<p>Or the other way is like you said, just use sed. (Basically what kustomize does)
Bash into kubectl and then </p>
<p><code>cat deploy-file | sed “/latest/${_TAG_NAME}/“ | kubectl.bash apply -f -</code></p>
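<p>Either approach can be wired into your pipeline as one extra step; a sketch of the <code>set image</code> variant, reusing the builder and substitutions from your cloudbuild.yaml (the container name <code>my-backend</code> is taken from your deployment manifest):</p>
<pre><code>- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - 'set'
  - 'image'
  - 'deployment/my-backend'
  - 'my-backend=gcr.io/$PROJECT_ID/my-app-test:${_GIT_TAG}'
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_K8S_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_K8S_CLUSTER}'
</code></pre>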
| Lance Sandino |
<p>thanks for checking out my topic.</p>
<p>I'm currently working on having kustomize download the resource and base files from our git repository.
We have tried a few options, some of them following the documentation and some of them not; see below. But we are still not able to download from our remote repo, and when running <code>kubectl apply</code> it looks for a local resource based on the git URL and file names.</p>
<pre><code>resources:
- ssh://git@SERVERURL:$PORT/$REPO.GIT
- git::ssh://git@SERVERURL:$PORT/$REPO.GIT
- ssh::git@SERVERURL:$PORT/$REPO.GIT
- git::git@SERVERURL:$PORT/$REPO.GIT
- git@SERVERURL:$PORT/$REPO.GIT
</code></pre>
<p>As a workaround I have added the git clone for the expected folder to my pipeline, but the goal is to have the bases/resources downloaded directly from the kustomization url.
Any ideas or some hints on how to get it running?</p>
| Gabriele Hausmann | <p>Use <code>bases</code> instead of <code>resources</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.0
resources:
- rbac.yaml
- manifest.yaml
</code></pre>
<p>Add the complete path to your source and add the <code>ref</code> param pointing to the tag or branch you want to download.</p>
| TlmaK0 |
<p>I'm now trying to run a simple container with shell (/bin/bash) on a Kubernetes cluster.</p>
<p>I thought that there was a way to keep a Docker container running by using the <code>pseudo-tty</code> and detach options (the <code>-td</code> option on the <code>docker run</code> command).</p>
<p>For example,</p>
<pre><code>$ sudo docker run -td ubuntu:latest
</code></pre>
<p>Is there an option like this in Kubernetes?</p>
<p>I've tried running a container by using a <code>kubectl run-container</code> command like:</p>
<pre><code>kubectl run-container test_container ubuntu:latest --replicas=1
</code></pre>
<p>But the container exits for a few seconds (just like launching with the <code>docker run</code> command without options I mentioned above). And ReplicationController launches it again repeatedly.</p>
<p>Is there a way to keep a container running on Kubernetes like the <code>-td</code> options in the <code>docker run</code> command?</p>
| springwell | <p>Containers are meant to run to completion. You need to provide your container with a task that will never finish. Something like this should work:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: ubuntu
spec:
containers:
- name: ubuntu
image: ubuntu:latest
# Just spin & wait forever
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
</code></pre>
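<p>After applying the manifest you can attach a shell to the pod whenever you need one (the file name is simply whatever you saved the manifest above as):</p>
<pre><code>kubectl apply -f ubuntu-pod.yaml
kubectl exec -it ubuntu -- /bin/bash
</code></pre>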
| Joel B |
<p>I am running docker for Mac in latest Mojave. Tried to enable Kubernetes from the preferences. Since then the message is just 'kubernetes is starting'. But it never completes. I am confused as to what has to be done.
Is there anything that I need to change in the network config part? </p>
<p>Just before this, I had made a failed attempt at installing Minikube on the same machine.</p>
| Sony Joseph | <p>For me this was very useful:</p>
<ol>
<li>stop docker for desktop</li>
<li>remove the folder <code>~/Library/Group\ Containers/group.com.docker/pki</code></li>
</ol>
<pre><code> rm -rf ~/Library/Group\ Containers/group.com.docker/pki
</code></pre>
<ol start="3">
<li>start docker for desktop</li>
</ol>
<p>Found <a href="https://github.com/docker/for-mac/issues/3594#issuecomment-621487150" rel="nofollow noreferrer">the solution here</a></p>
| freedev |
<p>I have a Kubernetes pod using a readiness probe, and tied with the service this ensures that I don't receive traffic until I'm ready.</p>
<p>I'm using Spring Actuator as the health endpoint for this readiness probe.</p>
<p>But I'd like to trigger some actions whenever the pod is deemed ready by the kubelet.</p>
<p>What would be the simplest way to do this?</p>
| Asgeir S. Nilsen | <p>Perhaps <strong><em>implement your own HealthCheck</em></strong>. When you find that everything is ok for the first time, run your code.</p>
<p>A static variable <code>firstHealthCheckOK</code> is checked so that your logic runs only once.</p>
<p>I am assuming you are running Spring-boot 2.x and are calling a readiness probe on <a href="http://localhost:8080/actuator/health" rel="nofollow noreferrer">http://localhost:8080/actuator/health</a></p>
<p>The health() method below is called when Kubernetes calls <a href="http://localhost:8080/actuator/health" rel="nofollow noreferrer">http://localhost:8080/actuator/health</a></p>
<pre><code>import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;
@Component
public class HealthCheck implements HealthIndicator {
static boolean firstHealthCheckOK = false;
@Override
public Health health() {
int errorCode = check(); // perform health check
if (errorCode != 0) {
return Health.down()
.withDetail("Error Code", errorCode).build();
}
if (firstHealthCheckOK == false){
firstHealthCheckOK = true;
doStartUpLogic();
}
return Health.up().build();
}
private int check() {
//some logic
return 0;
}
private void doStartUpLogic() {
//some startup logic
}
}
</code></pre>
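<p>The readiness probe then just needs to point at the Actuator endpoint; a minimal sketch of the container spec, assuming the app listens on port 8080:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
</code></pre>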
| rjdkolb |
<p>I have deployed Kafka using Helm and Minikube. I need to build a producer in Scala; for that, the broker IP address and port are required. I have defined a NodePort service to expose Kafka to the outside world. I set the broker up as minikube-ip:service-node-port; however, I get a connection exception.
What is wrong with the configuration I defined?
With a docker-compose file, the application works fine.</p>
<p>Error stack trace:</p>
<pre><code>Exception in thread "main" org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.ConnectException: Connection refused (Connection refused)
</code></pre>
<p>Kafka configurations look like this:</p>
<pre><code> val brokers = "192.168.99.100:32400"
val props = new Properties()
props.put("bootstrap.servers", brokers)
props.put("client.id", "AvroKafkaProducer")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
props.put("schema.registry.url", "http://0.0.0.0:8081")
</code></pre>
<p>Kafka NodePort service definition, where the labels match the Kafka pod labels produced by Helm:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: kafka-service
spec:
selector:
app: cp-kafka
release: my-confluent-oss
ports:
- protocol: TCP
targetPort: 9092
port: 32400
nodePort: 32400
type: NodePort
</code></pre>
<p>This is the list of all the created services:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-confluent-oss-cp-kafka ClusterIP 10.96.241.37 <none> 9092/TCP 6h25m
my-confluent-oss-cp-kafka-connect ClusterIP 10.105.148.181 <none> 8083/TCP 6h25m
my-confluent-oss-cp-kafka-headless ClusterIP None <none> 9092/TCP 6h25m
my-confluent-oss-cp-kafka-rest ClusterIP 10.99.154.76 <none> 8082/TCP 6h25m
my-confluent-oss-cp-ksql-server ClusterIP 10.108.41.220 <none> 8088/TCP 6h25m
my-confluent-oss-cp-schema-registry ClusterIP 10.108.182.212 <none> 8081/TCP 6h25m
my-confluent-oss-cp-zookeeper ClusterIP 10.97.148.103 <none> 2181/TCP 6h25m
my-confluent-oss-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 6h25m
</code></pre>
| Cassie | <p>The error is from the Avro serialiser trying to connect to the Schema Registry:</p>
<p><code>props.put("schema.registry.url", "http://0.0.0.0:8081")</code></p>
<p>should read</p>
<p><code>props.put("schema.registry.url", "http://<hostname of Schema Registry resolvable from Connect node>:8081")</code></p>
| Robin Moffatt |
<p>I would like to scale a deployment (up and down) from pods. In other words, how can pods in the namespace send a Kubernetes API call in order to scale the deployment?</p>
<p>I have created a role and assigned it to a service account with the following privileges in order to send API calls:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2019-05-19T18:52:09Z"
name: {name}-sa
namespace: {name}
resourceVersion: "11378025"
selfLink: /api/v1/namespaces/{name}/serviceaccounts/{name}-sa
uid: 34606554-7a67-11e9-8e78-c6f4a9a0006a
secrets:
- name: {name}-sa-token-mgk5z
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: "2019-05-17T13:21:09Z"
name: {name}-{name}-api-role
namespace: {name}
resourceVersion: "10985868"
selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/{name}/roles/{name}-{name}-api-role
uid: a298e71a-78a6-11e9-b54a-c6f4a9a00070
rules:
- apiGroups:
- extensions
- apps
resources:
- deployments
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: "2019-05-17T13:45:46Z"
name: {name}-{name}-api-rolebind
namespace: {name}
resourceVersion: "11378111"
selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/{name}/rolebindings/{name}-{name}-api-rolebind
uid: 12812ea7-78aa-11e9-89ae-c6f4a9a00064
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {name}-{name}-api-role
subjects:
- kind: ServiceAccount
name: {name}-sa
namespace: {name}
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>I can retrieve the deployment with the following command, but I cannot find how to scale it.</p>
<pre><code>https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/apis/apps/v1/namespaces/{name}/deployments/{name}
</code></pre>
<p>I tried the following command in order to scale it, but failed:</p>
<pre><code>curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -X PUT -d '[{ \
"op":"replace", \
"path":"/spec/replicas", \
"value": "2" \
}]'
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/apis/apps/v1/namespaces/{name}/deployments/{name}
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "deployments.apps \"{name}\" is forbidden: User \"system:serviceaccount:{name}:default\" cannot resource \"deployments\" in API group \"apps\" in the namespace \"{name}\"",
"reason": "Forbidden",
"details": {
"name": "{name}",
"group": "apps",
"kind": "deployments"
},
"code": 403
</code></pre>
| user10573594 | <p>Using Kubernetes v1.16.13 on GKE.</p>
<p>I found that
if you give <code>patch</code> permission on the <code>deployments/scale</code> resource, you can do <code>PATCH /apis/apps/v1/namespaces/default/deployments/{name}/scale</code>.</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {name}
rules:
- apiGroups: ["apps"]
resources: ["deployments/scale"]
verbs: ["patch"]
</code></pre>
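<p>From inside a pod the call itself can then look like the sketch below, built on the curl command from the question (only the URL, the HTTP method and the <code>Content-Type</code> header change). Note that it has to run under the service account the role is bound to, not the namespace's <code>default</code> account that shows up in the error message:</p>
<pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -X PATCH \
  -d '{"spec":{"replicas":2}}' \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/apis/apps/v1/namespaces/{name}/deployments/{name}/scale
</code></pre>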
| hiroshi |
<p>I'm running a pod in kubernetes, with hugepages allocated in host and hugepages defined in the pod. The kubernetes worker is in a VM. The VM (host) has huge pages allocated. The pod fails to allocate hugepages though. Application gets SIGBUS when trying to write to the first hugepage allocation.</p>
<p>The pod definition includes hugepages:</p>
<pre class="lang-json prettyprint-override"><code> securityContext:
allowPrivilegeEscalation: true
privileged: true
runAsUser: 0
capabilities:
add: ["SYS_ADMIN", "IPC_LOCK"]
resources:
requests:
intel.com/intel_sriov_netdevice : 2
memory: 2Gi
hugepages-2Mi: 4Gi
limits:
intel.com/intel_sriov_netdevice : 2
memory: 2Gi
hugepages-2Mi: 4Gi
volumeMounts:
- mountPath: /sys
name: sysfs
- mountPath: /dev/hugepages
name: hugepage
readOnly: false
volumes:
- name: hugepage
emptyDir:
medium: HugePages
- name: sysfs
hostPath:
path: /sys
</code></pre>
<p>The VM hosting the pod has hugepages allocated:</p>
<pre><code>cat /proc/meminfo | grep -i hug
AnonHugePages: 0 kB
HugePages_Total: 4096
HugePages_Free: 4096
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
</code></pre>
<p>The following piece of code runs fine in the VM hosting the pod, I can see the hugepages files getting created in <code>/dev/hugepages</code>, also the HugePages_Free counter decreases while the process is running.</p>
<pre class="lang-c prettyprint-override"><code>#include <stdio.h>
#include <sys/mman.h>
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#define LENGTH (2UL*1024*1024)
#define FILE_NAME "/dev/hugepages/hugepagefile"
static void write_bytes(char *addr)
{
unsigned long i;
for (i = 0; i < LENGTH; i++)
*(addr + i) = (char)i;
}
int main ()
{
void *addr;
int i;
char buf[32];
int fd;
for (i = 0 ; i < 16 ; i++ ) {
sprintf(buf, "%s_%d", FILE_NAME, i);
fd = open(buf, O_CREAT | O_RDWR, 0755);
addr = mmap((void *)(0x0UL), LENGTH, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_HUGETLB , fd, 0);
printf("address returned %p \n", addr);
if (addr == MAP_FAILED) {
perror("mmap ");
} else {
write_bytes(addr);
//munmap(addr, LENGTH);
//unlink(FILE_NAME);
}
close(fd);
}
while (1){}
return 0;
}
</code></pre>
<p>But if I run the same code in the pod, I get a SIGBUS while trying to write to the first hugepage allocated.</p>
<p>Results on the VM (hosting the pod)</p>
<pre><code>root@k8s-1:~# cat /proc/meminfo | grep -i hug
AnonHugePages: 0 kB
HugePages_Total: 4096
HugePages_Free: 4096
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
root@k8s-1:~# ./mmap &
[1] 19428
root@k8s-1:~# address returned 0x7ffff7800000
address returned 0x7ffff7600000
address returned 0x7ffff7400000
address returned 0x7ffff7200000
address returned 0x7ffff7000000
address returned 0x7ffff6e00000
address returned 0x7ffff6c00000
address returned 0x7ffff6a00000
address returned 0x7ffff6800000
address returned 0x7ffff6600000
address returned 0x7ffff6400000
address returned 0x7ffff6200000
address returned 0x7ffff6000000
address returned 0x7ffff5e00000
address returned 0x7ffff5c00000
address returned 0x7ffff5a00000
root@k8s-1:~# cat /proc/meminfo | grep -i hug
AnonHugePages: 0 kB
HugePages_Total: 4096
HugePages_Free: 4080
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
</code></pre>
<p>Results in the pod:</p>
<pre><code>Program received signal SIGBUS, Bus error.
0x00005555555547cb in write_bytes ()
(gdb) where
#0 0x00005555555547cb in write_bytes ()
#1 0x00005555555548a6 in main ()
</code></pre>
| emartin | <p>This is <a href="https://github.com/kubernetes/kubernetes/issues/71233" rel="nofollow noreferrer">a known problem</a> in K8s.</p>
<p>The culprit is that kubelet doesn't update /sys/fs/cgroup/hugetlb/kubepods/hugetlb.2MB.limit_in_bytes upon Node Status Update which happens every 5 minutes by default. Yet it updates the node's resources correctly after enabling hugepages on the host. This creates the possibility to schedule a workload using hugepages on a node with misconfigured limits in the root cgroup.</p>
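<p>You can check whether a given node is affected by comparing the root cgroup limit with what the node advertises; a quick check (the cgroup path is the one mentioned above, for cgroup v1 and 2MB pages):</p>
<pre><code># on the node
cat /sys/fs/cgroup/hugetlb/kubepods/hugetlb.2MB.limit_in_bytes
# what the node reports to the scheduler
kubectl describe node <node-name> | grep hugepages-2Mi
</code></pre>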
<p>Some time ago I made <a href="https://github.com/kubernetes/kubernetes/pull/81774" rel="nofollow noreferrer">this patch</a> to K8s, but it never got accepted. You can try to apply it to your K8s build if it's still applicable. If not I'd appreciate if somebody else rebased it and submitted again. I spent too much time trying to make it in and switched to another project.</p>
| versale |
<p>We have two Kubernetes clusters and have deployed internal NGINX ingress controllers, so they are not publicly accessible. The NGINX ingress controller has a private IP assigned to it. I will be implementing Azure Front Door and I'd like to know if we can add the private IP address of the NGINX ingress controller as a backend to Front Door. Furthermore, for the frontend of the Azure Front Door, can we have a private IP address?</p>
| Container-Man | <p>Azure Front Door premium supports securing origins with <a href="https://learn.microsoft.com/en-us/azure/frontdoor/private-link" rel="nofollow noreferrer">private link</a>. I'm not certain if you can directly use a private link with your internal NGINX ingress controllers, but if you can't you can always set up an <a href="https://learn.microsoft.com/en-us/azure/frontdoor/standard-premium/how-to-enable-private-link-internal-load-balancer#enable-private-connectivity-to-an-internal-load-balancer" rel="nofollow noreferrer">internal load balancer</a> that points to the ingress controller. The ingress controller may be the same thing, but internal load balancers support private links.</p>
<p>For your second question you can't have a private IP address as your front end for Front Door using only Front Door. You could achieve this by having a resource with a private IP address (like an internal load balancer) that functions as your Front Door frontend. You'd point its backend to your Front Door url. Then you would create a custom WAF rule in Front Door that only allows the public IP for the resource, restricting traffic essentially to a single private IP address.</p>
| Narthring |
<p>When trying to run a pod that uses a docker image from a private docker registry, I am getting the following error:</p>
<pre><code>Warning Failed 24s (x2 over 40s) kubelet, minikube Failed to pull image "registry.hub.docker.com/repository/docker/projecthelloworld/helloworld-container:V1": rpc error: code = Unknown desc = Error response from daemon: unauthorized: authentication required
</code></pre>
<p>In order to find if there is any issue with the credentials, I tried to check the secret with the name <code>regcred</code> but I am not getting any output, which might be the reason why I am getting the authentication error.</p>
<pre><code>k get secret regcred -o="jsonpath={.data .dockerconfigjson}"
</code></pre>
<p>This is how I created the secret <code>regcred</code> and then applied it:</p>
<pre><code>kubectl create secret docker-registry --dry-run=client regcred \
--docker-server=https://index.docker.io/v1/ \
--docker-username=projecthelloworld \
--docker-password=HelloWorld240721 \
[email protected] \
-o yaml > docker-secret.yaml
</code></pre>
<p>Here is the generated <code>docker-secret.yaml</code>:</p>
<pre><code>apiVersion: v1
data:
.dockerconfigjson: eyJhdXRocyI6eyJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOnsidXNlcm5hbWUiOiJwcm9qZWN0aGVsbG93b3JsZCIsInBhc3N3b3JkIjoiSGVsbG9Xb3JsZDI0MDcyMSIsImVtYWlsIjoiaGVsbG8ud29ybGRAZ21haWwuY29tIiwiYXV0aCI6ImNISnZhbVZqZEdobGJHeHZkMjl5YkdRNlNHVnNiRzlYYjNKc1pESTBNRGN5TVE9PSJ9fX0l
kind: Secret
metadata:
creationTimestamp: null
name: regcred
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>Output of secret <code>k get secret regcred -o yaml</code></p>
<pre><code>apiVersion: v1
data:
.dockerconfigjson: eyJhdXRocyI6eyJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOnsidXNlcm5hbWUiOiJwcm9qZWN0b3JjYSIsInBhc3N3b3JkIjoiT3JjYTI0MDcyMSIsImVtYWlsIjoic3VyZXNoLnNoYXJtYUB0aGViaWdzY2FsZS5jb20iLCJhdXRoIjoiY0hKdmFtVmpkRzl5WTJFNlQzSmpZVEkwTURjeU1RPT0ifX19
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{".dockerconfigjson":"eyJhdXRocyI6eyJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOnsidXNlcm5hbWUiOiJwcm9qZWN0b3JjYSIsInBhc3N3b3JkIjoiT3JjYTI0MDcyMSIsImVtYWlsIjoic3VyZXNoLnNoYXJtYUB0aGViaWdzY2FsZS5jb20iLCJhdXRoIjoiY0hKdmFtVmpkRzl5WTJFNlQzSmpZVEkwTURjeU1RPT0ifX19"},"kind":"Secret","metadata":{"annotations":{},"creationTimestamp":null,"name":"regcred","namespace":"default"},"type":"kubernetes.io/dockerconfigjson"}
creationTimestamp: "2021-08-21T03:18:14Z"
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:.dockerconfigjson: {}
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:type: {}
manager: kubectl-client-side-apply
operation: Update
time: "2021-08-21T03:18:14Z"
name: regcred
namespace: default
resourceVersion: "71712"
uid: fa4b2b55-fe16-4921-9c65-7b5eddc820ba
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>Then I created the following <code>hello-world-deploy.yml</code> deployment file.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world-deployment
labels:
app: hello-world-app
spec:
selector:
matchLabels:
app: hello-world-app
template:
metadata:
labels:
app: hello-world-app
spec:
containers:
- name: hello-world-app
image: registry.hub.docker.com/repository/docker/projecthelloworld/helloworld-container:V1
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
imagePullSecrets:
- name: regcred
</code></pre>
<p>Following the above, I created the deployment, but the pod is showing the status <code>ErrImagePull</code>.</p>
<p>On describing the pod, I can see that the secrets are mounted as expected.</p>
<pre><code>Name: hello-world-deployment-b74c8c7db-26r2f
Namespace: default
Priority: 0
Node: ip-192-168-41-226.eu-west-2.compute.internal/192.168.41.226
Start Time: Tue, 03 Aug 2021 13:53:17 +0100
Labels: app=hello-world-app
pod-template-hash=b74c8c7db
Annotations: kubernetes.io/psp: eks.privileged
Status: Pending
IP: 192.168.51.2
IPs:
IP: 192.168.51.2
Controlled By: ReplicaSet/hello-world-deployment-b74c8c7db
Containers:
hello-world-app:
Container ID:
Image: registry.hub.docker.com/repository/docker/projecthelloworld/helloworld-container:hello-world-service.V.0
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 500m
memory: 128Mi
Requests:
cpu: 500m
memory: 128Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-h8j4t (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-h8j4t:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-h8j4t
Optional: false
</code></pre>
<p>I am not sure in which step I am going wrong. Any help would be greatly appreciated.</p>
<p><strong>Update</strong>: I logged in to Docker and tried to pull the image from the command prompt</p>
<pre><code>docker pull registry.hub.docker.com/repository/docker/projecthelloworld/helloworld-container:V1
</code></pre>
<p>and got the same response</p>
<pre><code>Error response from daemon: unauthorized: authentication required
</code></pre>
<p>Output from the json.config file</p>
<pre><code>{
"auths": {
"registry.hub.docker.com": {}
},
"credsStore": "desktop"
}
</code></pre>
| tintin | <p>Found the issue: the deployment file's image property had the DNS name <code>registry.hub.docker.com</code> along with the URI path <code>/repository/docker/</code>.</p>
<pre><code>image: registry.hub.docker.com/repository/docker/projecthelloworld/helloworld-container:V1
</code></pre>
<p>Removing it got it working; it should have been:</p>
<pre><code>image: projecthelloworld/helloworld-container:V1
</code></pre>
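<p>A quick way to sanity-check an image reference before touching the manifest is to pull it with exactly the string Kubernetes will use (after <code>docker login</code>, since the repository is private):</p>
<pre><code># works: repository path as the registry sees it
docker pull projecthelloworld/helloworld-container:V1

# fails: this is the Docker Hub web UI path, not a registry path
docker pull registry.hub.docker.com/repository/docker/projecthelloworld/helloworld-container:V1
</code></pre>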
| tintin |
<p>I have a preemptible node pool of size 1 on GKE. I've been running this node pool with size 1 for almost a month now. Every day the node restarts after 24 hours and rejoins the cluster. Today it restarted but did not rejoin the cluster.</p>
<p>Instead, I noticed that according to <code>gcloud compute instances list</code> the underlying instance was running but not included in the output of <code>kubectl get node</code>. I increased the node pool size to 2, whereupon a second instance was launched. That node immediately joined my GKE cluster and pods were scheduled onto it. The first node is still running according to <code>gcloud</code>, but it won't join the cluster.</p>
<p>What's going on? How can I debug this this problem?</p>
<hr>
<p><strong>Update</strong>:</p>
<p>I SSHed into the instance and was immediately greeted with this excellent error message:</p>
<pre><code>Broken (or in progress) Kubernetes node setup! Check the cluster initialization status
using the following commands:
Master instance:
- sudo systemctl status kube-master-installation
- sudo systemctl status kube-master-configuration
Node instance:
- sudo systemctl status kube-node-installation
- sudo systemctl status kube-node-configuration
</code></pre>
<p>The results of <code>sudo systemctl status kube-node-installation</code>: </p>
<pre><code>goto mark: ● kube-node-installation.service - Download and install k8s binaries and configurations
Loaded: loaded (/etc/systemd/system/kube-node-installation.service; enabled; vendor preset: disabled)
Active: active (exited) since Thu 2017-12-28 21:08:53 UTC; 6h ago
Process: 945 ExecStart=/home/kubernetes/bin/configure.sh (code=exited, status=0/SUCCESS)
Process: 941 ExecStartPre=/bin/chmod 544 /home/kubernetes/bin/configure.sh (code=exited, status=0/SUCCESS)
Process: 937 ExecStartPre=/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -H X-Google-Metadata-Request: True -o /home/kubernetes/bin/configure.sh http://metadata.google.internal/com
puteMetadata/v1/instance/attributes/configure-sh (code=exited, status=0/SUCCESS)
Process: 933 ExecStartPre=/bin/mount -o remount,exec /home/kubernetes/bin (code=exited, status=0/SUCCESS)
Process: 930 ExecStartPre=/bin/mount --bind /home/kubernetes/bin /home/kubernetes/bin (code=exited, status=0/SUCCESS)
Process: 925 ExecStartPre=/bin/mkdir -p /home/kubernetes/bin (code=exited, status=0/SUCCESS)
Main PID: 945 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4915)
Memory: 0B
CPU: 0
CGroup: /system.slice/kube-node-installation.service
Dec 28 21:08:52 gke-cluster0-pool-d59e9506-g9sc configure.sh[945]: Downloading node problem detector.
Dec 28 21:08:52 gke-cluster0-pool-d59e9506-g9sc configure.sh[945]: % Total % Received % Xferd Average Speed Time Time Time Current
Dec 28 21:08:52 gke-cluster0-pool-d59e9506-g9sc configure.sh[945]: Dload Upload Total Spent Left Speed
Dec 28 21:08:52 gke-cluster0-pool-d59e9506-g9sc configure.sh[945]: [158B blob data]
Dec 28 21:08:52 gke-cluster0-pool-d59e9506-g9sc configure.sh[945]: == Downloaded https://storage.googleapis.com/kubernetes-release/node-problem-detector/node-problem-detector-v0.4.1.tar.gz (SHA1 = a57a3fe
64cab8a18ec654f5cef0aec59dae62568) ==
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc configure.sh[945]: cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz is preloaded.
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc configure.sh[945]: kubernetes-manifests.tar.gz is preloaded.
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc configure.sh[945]: mounter is preloaded.
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc configure.sh[945]: Done for installing kubernetes files
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc systemd[1]: Started Download and install k8s binaries and configurations.
</code></pre>
<p>And the result of <code>sudo systemctl status kube-node-configuration</code>:</p>
<pre><code>● kube-node-configuration.service - Configure kubernetes node
Loaded: loaded (/etc/systemd/system/kube-node-configuration.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2017-12-28 21:08:53 UTC; 6h ago
Process: 994 ExecStart=/home/kubernetes/bin/configure-helper.sh (code=exited, status=4)
Process: 990 ExecStartPre=/bin/chmod 544 /home/kubernetes/bin/configure-helper.sh (code=exited, status=0/SUCCESS)
Main PID: 994 (code=exited, status=4)
CPU: 33ms
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc systemd[1]: Starting Configure kubernetes node...
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[994]: Start to configure instance for kubernetes
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[994]: Configuring IP firewall rules
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[994]: Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[994]: Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[994]: Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc systemd[1]: kube-node-configuration.service: Main process exited, code=exited, status=4/NOPERMISSION
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc systemd[1]: Failed to start Configure kubernetes node.
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc systemd[1]: kube-node-configuration.service: Unit entered failed state.
Dec 28 21:08:53 gke-cluster0-pool-d59e9506-g9sc systemd[1]: kube-node-configuration.service: Failed with result 'exit-code'.
</code></pre>
<p><strong>So it looks like <code>kube-node-configuration</code> failed</strong>. I ran <code>sudo systemctl restart kube-node-configuration</code> and now the status output is:</p>
<pre><code>● kube-node-configuration.service - Configure kubernetes node
Loaded: loaded (/etc/systemd/system/kube-node-configuration.service; enabled; vendor preset: disabled)
Active: active (exited) since Fri 2017-12-29 03:41:36 UTC; 3s ago
Main PID: 20802 (code=exited, status=0/SUCCESS)
CPU: 1.851s
Dec 29 03:41:28 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[20802]: Extend the docker.service configuration to set a higher pids limit
Dec 29 03:41:28 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[20802]: Docker command line is updated. Restart docker to pick it up
Dec 29 03:41:30 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[20802]: Start kubelet
Dec 29 03:41:35 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[20802]: Using kubelet binary at /home/kubernetes/bin/kubelet
Dec 29 03:41:35 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[20802]: Start kube-proxy static pod
Dec 29 03:41:35 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[20802]: Start node problem detector
Dec 29 03:41:35 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[20802]: Using node problem detector binary at /home/kubernetes/bin/node-problem-detector
Dec 29 03:41:36 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[20802]: Prepare containerized mounter
Dec 29 03:41:36 gke-cluster0-pool-d59e9506-g9sc configure-helper.sh[20802]: Done for the configuration for kubernetes
Dec 29 03:41:36 gke-cluster0-pool-d59e9506-g9sc systemd[1]: Started Configure kubernetes node.
</code></pre>
<p>...and the node joined the cluster :). But, my original question stands: what happened?</p>
| Dmitry Minkovsky | <p>We were experiencing a similar problem on GKE with preemptible nodes, seeing error messaging like these from the nodes:</p>
<pre><code>Extend the docker.service configuration to set a higher pids limit
Docker command line is updated. Restart docker to pick it up
level=info msg="Processing signal 'terminated'"
level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
level=info msg="Daemon shutdown complete"
docker daemon exited
Start kubelet
</code></pre>
<p>After about a month of back-and-forth with Google Support, we learned that the nodes were getting preempted and replaced; the replacement node comes up under the same name, and it all happens without the normal pod disruption of a node being evicted.</p>
<hr />
<p>Backstory: we were running into this problem because Jenkins was running its workers on the nodes, and during this ~2 minute "restart" of the node going away and returning, the Jenkins master would lose its connection and fail the job.</p>
<p><strong>tldr;</strong> don't use preemptible nodes for this kind of work.</p>
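<p>If you still need preemptible nodes for other workloads, one option (a sketch, not part of our original setup) is to keep sensitive pods off them with node affinity against the GKE-provided <code>cloud.google.com/gke-preemptible</code> label:</p>
<pre><code>affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        # schedule this pod only on nodes that do NOT carry the preemptible label
        - key: cloud.google.com/gke-preemptible
          operator: DoesNotExist
</code></pre>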
| GrandVizier |
<p>I think I'm about to reinvent the wheel here. I have all the parts but am thinking: somebody must have done this (properly) before me.</p>
<p>We have a Jenkins CI job that builds <code>image-name:${BRANCH_NAME}</code> and pushes it to a registry. We want to create a CD job that deploys this <code>image-name:${BRANCH_NAME}</code> to a Kubernetes cluster. And so now we run into the problem that if we call <code>helm upgrade --install</code> with the same <code>image-name:${BRANCH_NAME}</code>, nothing happens, even if <code>image-name:${BRANCH_NAME}</code> now actually refers to a different SHA256 sum. We (think we) understand this.</p>
<p>How is this generally solved? Are there best practices about this? I see two general approaches:</p>
<ol>
<li>The CI job doesn't just create <code>image-name:${BRANCH_NAME}</code>, it also creates a unique tag, e.g. <code>image-name:${BRANCH_NAME}-${BUILD_NUMBER}</code>. The CD job never deploys the generic <code>image-name:${BRANCH_NAME}</code>, but always the unique <code>image-name:${BRANCH_NAME}-${BUILD_NUMBER}</code>.</li>
<li>After the CI job has created <code>image-name:${BRANCH_NAME}</code>, its SHA256 sum is retrieved somehow (e.g. with <code>docker inspect</code> or <a href="https://github.com/containers/skopeo" rel="nofollow noreferrer"><code>skopeo</code></a>) and helm is called with the SHA256 sum.</li>
</ol>
<p>In both cases, we have two choices. Modify, commit and track a <code>custom-image-tags.yaml</code> file, or run helm with <code>--set </code>parameters for the image tags. If we go with option 1, we'll have to periodically remove "old tags" to save disk space.</p>
<p>And if we have a single CD job with a single helm chart that contains multiple images, this only gets more complicated.</p>
<p>Surely, there must be some opinionated tooling to do all this for us.</p>
<p>What are the ways to do this without re-inventing this particular wheel for the 4598734th time?</p>
<h1><code>kbld</code> gets me some of the way, but breaks helm</h1>
<p>I've found <a href="https://carvel.dev/kbld/" rel="nofollow noreferrer"><code>kbld</code></a>, which allows me to:</p>
<pre><code>helm template my-chart --values my-vals.yml | kbld -f - | kubectl apply -f -
</code></pre>
<p>which basically implements 2 above, but now helm is unaware that the chart has been installed so I can't <code>helm uninstall</code> it. :-( I'm hoping there is some better approach...</p>
| Peter V. Mørch | <p><code>kbld</code> can also be used "fully" with helm...</p>
<p>Yes, the <a href="https://carvel.dev/kbld/" rel="nofollow noreferrer">docs</a> suggest:</p>
<pre><code>$ helm template my-chart --values my-vals.yml | kbld -f - | kubectl apply -f -
</code></pre>
<p>But this also works:</p>
<pre><code>$ cat kbld-stdin.sh
#!/bin/bash
kbld -f -
$ helm upgrade --install my-chart --values my-vals.yml --post-renderer ./kbld-stdin.sh
</code></pre>
<p>With <code>--post-renderer</code>, <code>helm list</code>, <code>helm uninstall</code>, etc. all still work.</p>
| Peter V. Mørch |
<p>We're trying to set up a spot node group in EKS with lower and higher capacity instance types, (e.g. <code>instance_types = ["t3.xlarge", "c5.4xlarge"]</code>), but ... only the t3 is used, even if we specify more CPU than it has to offer. Pods still try to use it and just hang.</p>
<p>How do we get the larger instances to come into play?</p>
| hellified | <p>An AWS AutoScalingGroup has the ability to put weights on the instance types, but that functionality isn't built into EKS. So what's happening is that the ASG is designed to create the first instance type if possible; it doesn't get impacted by your K8s workload requests and will therefore always create the first type as long as it is available.</p>
<p>You probably want to <strong>create two different node groups</strong> (one for the <code>t3.xlarge</code> and another for the <code>c5.4xlarge</code>). And depending on the workloads, maybe allow the min-size to be 0.</p>
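<p>As a rough sketch of what that could look like with eksctl-managed node groups (the names and sizes here are made up, and the exact schema should be double-checked against the eksctl docs):</p>
<pre><code># eksctl ClusterConfig fragment (illustrative only)
managedNodeGroups:
  - name: spot-small
    instanceTypes: ["t3.xlarge"]
    spot: true
    minSize: 0
    maxSize: 5
  - name: spot-large
    instanceTypes: ["c5.4xlarge"]
    spot: true
    minSize: 0
    maxSize: 5
</code></pre>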
<p>Alternatively, if you want to explicitly change the existing node group and not have two, then maybe these instructions would be useful: <a href="https://blog.porter.run/updating-eks-instance-type/" rel="nofollow noreferrer">https://blog.porter.run/updating-eks-instance-type/</a></p>
| GrandVizier |
<p>I am running a Kubernetes CronJob that performs an HTTPS GET using a curl command. A token has to be retrieved before any POST or GET commands.
Setting these env vars locally in my <code>.bashrc</code> file and running the curl command works fine when I test locally.</p>
<pre class="lang-sh prettyprint-override"><code># ~/.bashrc: executed by bash(1) for non-login shells.
export api_vs_hd=Accept:application/vnd.appgate.peer-v13+json
export controller_ip=value
export admin_pass=value
export uuid=value
export token=`curl -H "Content-Type: application/json" -H "${api_vs_hd}" --request POST --data "{\"providerName\":\"local\",\"username\":\"admin\",\"password\":\"$admin_pass\",\"deviceId\":\"$uuid\"}" https://$controller_ip:444/admin/login --insecure | jq -r '.token'`
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ curl -k -H "Content-Type: application/json" \
> -H "$api_vs_hd" \
> -H "Authorization: Bearer $token" \
> -X GET \
> https://$controller_ip:444/admin/license/users
{"data":[],"range":"0-0/0","orderBy":"created","descending":true,"filterBy":[]}
</code></pre>
<p>However, I am getting a parsing error when linting this YAML, and I am sure it's due to the format of the <code>curl ...</code> command I use as the value of the <code>TOKEN</code> key.
See the CronJob YAML:</p>
<pre class="lang-yaml prettyprint-override"><code>#file-name: postgresql-backup-cron-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: cron-job
namespace: device-purge
spec:
#Cron Time is set according to server time, ensure server time zone and set accordingly.
schedule: "*/2 * * * *" # test
jobTemplate:
spec:
template:
spec:
imagePullSecrets:
- name: cron
containers:
- name: cron-pod
image: harbor/privateop9/python38:device-purge
env:
- name: API_VS_HD
value: "Accept:application/vnd.appgate.peer-v13+json"
- name: CONTROLLER_IP
value: "value"
- name: ADMIN_PASS
value: "value"
- name: UUID
value: "value"
- name: TOKEN
value: "curl -H \"Content-Type: application/json\" -H \"${api_vs_hd}\" --request POST --data \"{\"providerName\":\"local\",\"username\":\"admin\",\"password\":\"$admin_pass\",\"deviceId\":\"$uuid\"}" https://$controller_ip:444/admin/login --insecure | jq -r '.token'"
imagePullPolicy: Always
restartPolicy: OnFailure
backoffLimit: 3
</code></pre>
| kddiji | <p>You forgot to escape one double quote here:</p>
<pre><code>... \"deviceId\":\"$uuid\"}" https:/ ...
^
</code></pre>
<p>Escaping this will fix the YAML. However, the value will not be correct because as you can see, your original command already needs escape sequences for the double quotes inside <code>data</code> so you would need to <em>escape the escape sequences</em>, like <code>\\\"</code> for every sequence.</p>
<p>A vastly simpler way to enter the command would be a folded block scalar:</p>
<pre class="lang-yaml prettyprint-override"><code> - name: TOKEN
value: >-
curl -H "Content-Type: application/json" -H "${api_vs_hd}" --request POST
--data "{\"providerName\":\"local\",\"username\":\"admin\",\"password\":\"$admin_pass\",\"deviceId\":\"$uuid\"}"
https://$controller_ip:444/admin/login --insecure | jq -r '.token'
</code></pre>
<p>In a folded block scalar, no escape sequences are processed so you can give the original command.
Also, line breaks will be folded into single spaces so you can enter your command on multiple lines for readability.
Just make sure that you do not indent lines more than the first line; YAML has a <em>really weird</em> special case for that where it <em>doesn't</em> remove newlines around more-indented lines.</p>
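<p>For illustration, this made-up snippet shows that special case:</p>
<pre class="lang-yaml prettyprint-override"><code>value: >-
  first line
  second line
    more indented line
  last line
</code></pre>
<p>It loads as the string <code>"first line second line\n  more indented line\nlast line"</code>: the first two lines are folded into one with a space, while the more-indented line keeps its extra indentation and the newlines around it.</p>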
| flyx |
<p>I am not sure where else to turn, as I have pretty much copied every example I have seen and still cannot get it to work. The connector will not install and reports an empty password. I have validated each step and cannot get it to work. Here are the steps I have taken.</p>
<h1>Container</h1>
<pre class="lang-sh prettyprint-override"><code>FROM strimzi/kafka:0.16.1-kafka-2.4.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/debezium
COPY ./debezium-connector-mysql/ /opt/kafka/plugins/debezium/
USER 1001
</code></pre>
<p>Next I create the secret to use with mySQL.</p>
<pre class="lang-sh prettyprint-override"><code>cat <<EOF | kubectl apply -n kafka-cloud -f -
apiVersion: v1
kind: Secret
metadata:
name: mysql-auth
type: Opaque
stringData:
mysql-auth.properties: |-
username: root
password: supersecret
EOF
</code></pre>
<p><strong>Validate</strong></p>
<pre class="lang-sh prettyprint-override"><code>% kubectl -n kafka-cloud get secrets | grep mysql-auth
mysql-auth Opaque 1 14m
</code></pre>
<p>Double-check to make sure the user and password are not empty, as the error in the connector states.</p>
<pre class="lang-sh prettyprint-override"><code>% kubectl -n kafka-cloud get secret mysql-auth -o yaml
apiVersion: v1
data:
mysql-auth.properties: dXNlcm5hbWU6IHJvb3QKcGFzc3dvcmQ6IHN1cGVyc2VjcmV0
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"mysql-auth","namespace":"kafka-cloud"},"stringData":{"mysql-auth.properties":"username: root\npassword: supersecret"},"type":"Opaque"}
creationTimestamp: "2022-03-02T23:48:55Z"
name: mysql-auth
namespace: kafka-cloud
resourceVersion: "4041"
uid: 14a7a878-d01f-4899-8dc7-81b515278f32
type: Opaque
</code></pre>
<h1>Add Connect Cluster</h1>
<pre class="lang-yaml prettyprint-override"><code>cat <<EOF | kubectl apply -n kafka-cloud -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
name: my-connect-cluster
annotations:
# # use-connector-resources configures this KafkaConnect
# # to use KafkaConnector resources to avoid
# # needing to call the Connect REST API directly
strimzi.io/use-connector-resources: "true"
spec:
version: 3.1.0
image: connect-debezium
replicas: 1
bootstrapServers: my-kafka-cluster-kafka-bootstrap:9092
config:
group.id: connect-cluster
offset.storage.topic: connect-cluster-offsets
config.storage.topic: connect-cluster-configs
status.storage.topic: connect-cluster-status
config.storage.replication.factor: 1
offset.storage.replication.factor: 1
status.storage.replication.factor: 1
config.providers: file
config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
externalConfiguration:
volumes:
- name: mysql-auth-config
secret:
secretName: mysql-auth
EOF
</code></pre>
<h2>Add Connector</h2>
<pre class="lang-sh prettyprint-override"><code>cat <<EOF | kubectl apply -n kafka-cloud -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
name: mysql-test-connector
labels:
strimzi.io/cluster: my-connect-cluster
spec:
class: io.debezium.connector.mysql.MySqlConnector
tasksMax: 1
config:
database.hostname: 172.17.0.13
database.port: 3306
database.user: "${file:/opt/kafka/external-configuration/mysql-auth-config/mysql-auth.properties:username}"
database.password: "${file:/opt/kafka/external-configuration/mysql-auth-config/mysql-auth.properties:password}"
database.server.id: 184054
database.server.name: mysql-pod
database.whitelist: sample
database.history.kafka.bootstrap.servers: my-kafka-cluster-kafka-bootstrap:9092
database.history.kafka.topic: "schema-changes.sample"
key.converter: "org.apache.kafka.connect.storage.StringConverter"
value.converter: "org.apache.kafka.connect.storage.StringConverter"
EOF
</code></pre>
<h1>Error</h1>
<p>And no matter what I have tried, I get this error. I have no idea what I am missing. I know it is a simple config, but I cannot figure it out. I'm stuck.</p>
<pre class="lang-sh prettyprint-override"><code>% kubectl -n kafka-cloud describe kafkaconnector mysql-test-connector
Name: mysql-test-connector
Namespace: kafka-cloud
Labels: strimzi.io/cluster=my-connect-cluster
Annotations: <none>
API Version: kafka.strimzi.io/v1beta2
Kind: KafkaConnector
Metadata:
Creation Timestamp: 2022-03-02T23:44:20Z
Generation: 1
Managed Fields:
API Version: kafka.strimzi.io/v1beta2
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:labels:
.:
f:strimzi.io/cluster:
f:spec:
.:
f:class:
f:config:
.:
f:database.history.kafka.bootstrap.servers:
f:database.history.kafka.topic:
f:database.hostname:
f:database.password:
f:database.port:
f:database.server.id:
f:database.server.name:
f:database.user:
f:database.whitelist:
f:key.converter:
f:value.converter:
f:tasksMax:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-03-02T23:44:20Z
API Version: kafka.strimzi.io/v1beta2
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
f:observedGeneration:
f:tasksMax:
f:topics:
Manager: okhttp
Operation: Update
Subresource: status
Time: 2022-03-02T23:44:20Z
Resource Version: 3874
UID: c70ffe4e-3777-4524-af82-dad3a57ca25e
Spec:
Class: io.debezium.connector.mysql.MySqlConnector
Config:
database.history.kafka.bootstrap.servers: my-kafka-cluster-kafka-bootstrap:9092
database.history.kafka.topic: schema-changes.sample
database.hostname: 172.17.0.13
database.password:
database.port: 3306
database.server.id: 184054
database.server.name: mysql-pod
database.user:
database.whitelist: sample
key.converter: org.apache.kafka.connect.storage.StringConverter
value.converter: org.apache.kafka.connect.storage.StringConverter
Tasks Max: 1
Status:
Conditions:
Last Transition Time: 2022-03-02T23:45:00.097311Z
Message: PUT /connectors/mysql-test-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
Reason: ConnectRestException
Status: True
Type: NotReady
Observed Generation: 1
Tasks Max: 1
Topics:
Events: <none>
</code></pre>
| nitefrog | <p>The config param needed for the mySQL connector is:</p>
<p><code>database.allowPublicKeyRetrieval: true</code></p>
<p>That resolved the issue.</p>
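<p>For example, in the <code>KafkaConnector</code> spec above it goes under <code>config</code> (a sketch; only the new line is added, the rest stays as is):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  class: io.debezium.connector.mysql.MySqlConnector
  config:
    database.allowPublicKeyRetrieval: true
    # ... existing database.* and converter settings unchanged
</code></pre>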
| nitefrog |
<p>When I ssh directly inside my pod, and run the following on the command line: <code>curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc.cluster.local/apis/batch/v1/watch/namespaces/kludge/jobs/api-job-12</code> it works and returns all the updates for the specified job (<code>api-job-12</code>) in the form of a stream.</p>
<p>However, when I'm inside my application-level code, I can't get the API to stream the response (the request times out with no response at all). I'm working inside a PHP (Laravel) environment and I'm using Guzzle for my HTTP client.</p>
<p>Here's my code:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code> $token = file_get_contents('/var/run/secrets/kubernetes.io/serviceaccount/token');
$client = new Client([
'headers' => [
'Authorization' => "Bearer {$token}"
]
]);
$response = $client->get(
'https://kubernetes.default.svc/apis/batch/v1/watch/namespaces/kludge/jobs/api-job-12',
[
'verify' => '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt',
'stream' => true
]
);
dd($response->getBody()->getContents());</code></pre>
</div>
</div>
</p>
<p>When I dump <code>$response->getBody()</code> I get the following:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>^ GuzzleHttp\Psr7\Stream {#338
-stream: stream resource @21
crypto: array:4 [
"protocol" => "TLSv1.2"
"cipher_name" => "ECDHE-RSA-AES128-GCM-SHA256"
"cipher_bits" => 128
"cipher_version" => "TLSv1.2"
]
timed_out: false
blocked: true
eof: false
wrapper_data: array:4 [
0 => "HTTP/1.1 200 OK"
1 => "Content-Type: application/json"
2 => "Date: Sun, 01 Mar 2020 20:15:33 GMT"
3 => "Connection: close"
]
wrapper_type: "http"
stream_type: "tcp_socket/ssl"
mode: "r"
unread_bytes: 0
seekable: false
uri: "https://kubernetes.default.svc/apis/batch/v1/watch/namespaces/kludge/jobs/api-job-12"
options: array:2 [
"http" => array:5 [
"method" => "GET"
"header" => """
Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrbHVkZ2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi14ZjhsciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZDA5ZjZiZTAtM2UyOS00NWU3LWI3ZjgtOGE1YWI0OGZjNDJiIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmtsdWRnZTpkZWZhdWx0In0.eAtABzj-SimmTGNCQXkmtVHFPYoxiZWV7ET5tYK3OIa6-Ea6WA3cy7cMmObRILTc26cLU4YX8YovhzoNAV8RkteKAVGv2HNvaeOXD_AKkYilX618SUMEfat-zsnXYUego24gNLPtPFRefRyEwAnxf6E61DDwSWWlyptKiggcnl8GHrlY_14oumOsFpsjsRTc807DsuZGn1jCU1Dw2DPhSz457a-afXb0jggzorYNzDtfG6rBTKYctPI4wfh30y9iwjPLTU5L5B-8mYqWn9lgOs2Z9XkFu1GRUD19j6bgAnzoyfVCY8uJp9FGi1Ega84n_MsC6cXmS7K7_QiyBtFR-Q
User-Agent: GuzzleHttp/6.5.1 curl/7.64.0 PHP/7.2.28
Host: kubernetes.default.svc
Content-Length: 0
Connection: close
"""
"protocol_version" => "1.1"
"ignore_errors" => true
"follow_location" => 0
]
"ssl" => array:4 [
"cafile" => "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"</code></pre>
</div>
</div>
</p>
<p>It does appear to be a stream object, but the object says it has no bytes to read and that it is not seekable. Does anyone have any ideas as to how I can get a stream response using PHP here?</p>
<p>Here's the Kubernetes API endpoint that I'm calling:
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#watch-47" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#watch-47</a></p>
| thatguyjono | <p>Have you tried to actually read from this stream? Private fields can be confusing, so I would suggest trying <a href="http://docs.guzzlephp.org/en/stable/request-options.html#stream" rel="nofollow noreferrer">the public interface</a> and seeing what you get:</p>
<pre><code>$body = $response->getBody();
while (!$body->eof()) {
echo $body->read(1024);
}
</code></pre>
| Alexey Shokov |
<p>I have set up 3 node kubernetes using 3 VPS and installed rook/ceph.</p>
<p>when I run</p>
<pre><code>kubectl exec -it rook-ceph-tools-78cdfd976c-6fdct -n rook-ceph bash
ceph status
</code></pre>
<p>I get the below result</p>
<pre><code>osd: 0 osds: 0 up, 0 in
</code></pre>
<p>I tried</p>
<pre><code>ceph device ls
</code></pre>
<p>and the result is</p>
<pre><code>DEVICE HOST:DEV DAEMONS LIFE EXPECTANCY
</code></pre>
<p><code>ceph osd status</code> gives me no result</p>
<p>This is the yaml file that I used</p>
<pre><code>https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/cluster.yaml
</code></pre>
<p>When I use the below command</p>
<pre><code>sudo kubectl -n rook-ceph logs rook-ceph-osd-prepare-node1-4xddh provision
</code></pre>
<p>results are</p>
<pre><code>2021-05-10 05:45:09.440650 I | cephosd: skipping device "sda1" because it contains a filesystem "ext4"
2021-05-10 05:45:09.440653 I | cephosd: skipping device "sda2" because it contains a filesystem "ext4"
2021-05-10 05:45:09.475841 I | cephosd: configuring osd devices: {"Entries":{}}
2021-05-10 05:45:09.475875 I | cephosd: no new devices to configure. returning devices already configured with ceph-volume.
2021-05-10 05:45:09.476221 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list --format json
2021-05-10 05:45:10.057411 D | cephosd: {}
2021-05-10 05:45:10.057469 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2021-05-10 05:45:10.057501 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list --format json
2021-05-10 05:45:10.541968 D | cephosd: {}
2021-05-10 05:45:10.551033 I | cephosd: 0 ceph-volume raw osd devices configured on this node
2021-05-10 05:45:10.551274 W | cephosd: skipping OSD configuration as no devices matched the storage settings for this node "node1"
</code></pre>
<p>My disk partition</p>
<pre><code>root@node1: lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 400G 0 disk
├─sda1 8:1 0 953M 0 part /boot
└─sda2 8:2 0 399.1G 0 part /
</code></pre>
<p>What am I doing wrong here?</p>
| jeril | <p>I had a similar problem where OSDs didn't appear in <code>ceph status</code> after installing and tearing down for testing multiple times.</p>
<p>I fixed this issue by running</p>
<pre><code>dd if=/dev/zero of=/dev/sdX bs=1M status=progress
</code></pre>
<p>to completely wipe any leftover information from the raw block disk.</p>
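<p>A minimal sketch of the idea (replace <code>/dev/sdX</code> with the disk Ceph should use; it must be a spare, unmounted disk, not the one holding your root filesystem):</p>
<pre><code># double-check which disk is which first
lsblk
# wipe the target disk so Rook sees it as a clean raw device
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
</code></pre>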
| j3ffyang |
<p>I have a configMap inside a helm chart:</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: card-template
data:
card.tmpl: |-
{{- if .Values.customMessageCardTemplate }}
{{ toYaml .Values.customMessageCardTemplate | indent 4 }}
{{- else }}
{{ .Files.Get "card.tmpl" | indent 4 }}
{{- end }}
</code></pre>
<p>This configMap reads data from <code>.Values.customMessageCardTemplate</code> value.</p>
<p>I have a file <code>custom-card.tmpl</code> whose content should be set as the value of <code>customMessageCardTemplate</code> during the installation of the chart.</p>
<p>The data inside <code>custom-card.tmpl</code> is :</p>
<pre><code>{{ define "teams.card" }}
{
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"themeColor": "{{- if eq .Status "resolved" -}}2DC72D
{{- else if eq .Status "firing" -}}
{{- if eq .CommonLabels.severity "critical" -}}8C1A1A
{{- else if eq .CommonLabels.severity "warning" -}}FFA500
{{- else -}}808080{{- end -}}
{{- else -}}808080{{- end -}}",
"summary": "{{- if eq .CommonAnnotations.summary "" -}}
{{- if eq .CommonAnnotations.message "" -}}
{{- .CommonLabels.alertname -}}-hai
{{- else -}}
{{- .CommonAnnotations.message -}}
{{- end -}}
{{- else -}}
{{- .CommonAnnotations.summary -}}
{{- end -}}",
"title": "Prometheus Alert ({{ .Status }})",
"sections": [ {{$externalUrl := .ExternalURL}}
{{- range $index, $alert := .Alerts }}{{- if $index }},{{- end }}
{
"activityTitle": "[{{ $alert.Annotations.description }}]({{ $externalUrl }})",
"facts": [
{{- range $key, $value := $alert.Annotations }}
{
"name": "{{ reReplaceAll "_" " " $key }}",
"value": "{{ reReplaceAll "_" " " $value }}"
},
{{- end -}}
{{$c := counter}}{{ range $key, $value := $alert.Labels }}{{if call $c}},{{ end }}
{
"name": "{{ reReplaceAll "_" " " $key }}",
"value": "{{ reReplaceAll "_" " " $value }}"
}
{{- end }}
],
"markdown": true
}
{{- end }}
]
}
{{ end }}
</code></pre>
<p>When running the install command with <code>set-file</code> flag:</p>
<pre><code>helm install --name my-rel --dry-run --debug --set-file customMessageCardTemplate=custom-card.tmpl ./my-chart
</code></pre>
<p>helm inserts some extra characters into the data it reads from the file:</p>
<pre><code># Source: my-chart/templates/configMapTemplate.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: card-template
data:
card.tmpl: |-
"{{ define \"teams.card\" }}\r\n{\r\n \"@type\": \"MessageCard\",\r\n \"@context\":
\"http://schema.org/extensions\",\r\n \"themeColor\": \"{{- if eq .Status \"resolved\"
-}}2DC72D\r\n {{- else if eq .Status \"firing\" -}}\r\n {{-
if eq .CommonLabels.severity \"critical\" -}}8C1A1A\r\n {{- else
if eq .CommonLabels.severity \"warning\" -}}FFA500\r\n {{- else
-}}808080{{- end -}}\r\n {{- else -}}808080{{- end -}}\",\r\n \"summary\":
\"{{- if eq .CommonAnnotations.summary \"\" -}}\r\n {{- if eq .CommonAnnotations.message
\"\" -}}\r\n {{- .CommonLabels.alertname -}}-hai\r\n {{-
else -}}\r\n {{- .CommonAnnotations.message -}}\r\n {{-
end -}}\r\n {{- else -}}\r\n {{- .CommonAnnotations.summary
-}}\r\n {{- end -}}\",\r\n \"title\": \"Prometheus Alert ({{ .Status
}})\",\r\n \"sections\": [ {{$externalUrl := .ExternalURL}}\r\n {{- range $index,
$alert := .Alerts }}{{- if $index }},{{- end }}\r\n {\r\n \"activityTitle\":
\"[{{ $alert.Annotations.description }}]({{ $externalUrl }})\",\r\n \"facts\":
[\r\n {{- range $key, $value := $alert.Annotations }}\r\n {\r\n \"name\":
\"{{ reReplaceAll \"_\" \" \" $key }}\",\r\n \"value\": \"{{ reReplaceAll
\"_\" \" \" $value }}\"\r\n },\r\n {{- end -}}\r\n {{$c :=
counter}}{{ range $key, $value := $alert.Labels }}{{if call $c}},{{ end }}\r\n {\r\n
\ \"name\": \"{{ reReplaceAll \"_\" \" \" $key }}\",\r\n \"value\":
\"{{ reReplaceAll \"_\" \" \" $value }}\"\r\n }\r\n {{- end }}\r\n
\ ],\r\n \"markdown\": true\r\n }\r\n {{- end }}\r\n ]\r\n}\r\n{{
end }}\r\n"
</code></pre>
<p>Why does this happen? When I encode the original data and the read data using base-64, both seem different. </p>
<p>How to solve this issue?</p>
<p><strong>Note:</strong></p>
<p>I cannot set the data using an extraValues.yaml as:</p>
<pre><code>customMessageCardTemplate:
{{ define "teams.card" }}
{
.
.
.
}
{{ end }}
</code></pre>
<p>It gives an error:</p>
<pre><code>Error: failed to parse extraValues.yaml: error converting YAML to JSON: yaml: line 2: did not find expected key
</code></pre>
<p>But this error doesn't appear if the values file is like:</p>
<pre><code>customMessageCardTemplate:
card.tmpl: |-
{{ define "teams.card" }}
{
.
.
}
{{ end }}
</code></pre>
| AnjK | <p>It just does exactly what you tell it to. <code>customMessageCardTemplate</code> contains a string, so <code>toYaml</code> encodes it as double-quoted YAML string. While doing so, it replaces special characters such as line endings and double quotes with escape sequences.</p>
<p>Since you're pasting into a block scalar, you don't need the escaping. Just drop the <code>toYaml</code> and you should be fine.</p>
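<p>A sketch of the configMap template with <code>toYaml</code> removed (everything else stays as in your chart):</p>
<pre><code>data:
  card.tmpl: |-
  {{- if .Values.customMessageCardTemplate }}
{{ .Values.customMessageCardTemplate | indent 4 }}
  {{- else }}
{{ .Files.Get "card.tmpl" | indent 4 }}
  {{- end }}
</code></pre>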
| flyx |
<p>I'd like to run a particular Kubernetes job whenever the pod of a particular deployment restarts.</p>
<p>In particular, I have a Redis deployment. It is not backed by permanent storage. When the pod in the Redis deployment restarts, I'd like to populate some keys in Redis.</p>
<p>Is there a way to trigger a job on pod restart?</p>
| Laizer | <p>The best option that comes to my mind is a k8s operator: a simple Python/Go script that watches your target pod (by label, name, namespace, etc.) and performs some actions when the state changes.</p>
<p>An operator is just a deployment with special features. There are various ways to implement one; one of them is <a href="https://sdk.operatorframework.io/docs/building-operators/golang/quickstart/" rel="nofollow noreferrer">https://sdk.operatorframework.io/docs/building-operators/golang/quickstart/</a></p>
<p>You can also use <a href="https://github.com/kubernetes-client/python#examples" rel="nofollow noreferrer">https://github.com/kubernetes-client/python#examples</a> (check the second example).</p>
<p>You can get rid of the job and write your redis logic inside the operator itself.</p>
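<p>As a rough illustration of the watching part with the Python client (not a full operator; the namespace and label are placeholders, and the reaction logic is up to you):</p>
<pre><code>from kubernetes import client, config, watch

config.load_incluster_config()  # use config.load_kube_config() when running outside the cluster
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod,
                      namespace="default",
                      label_selector="app=redis"):
    pod = event["object"]
    # react when the redis pod comes (back) up, e.g. populate your keys here
    if event["type"] in ("ADDED", "MODIFIED") and pod.status.phase == "Running":
        print(f"redis pod {pod.metadata.name} is running, seeding keys...")
</code></pre>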
| Amrit |
<p>Two years ago, when I took the CKA exam, I already had this question. At that time all I could do was look at the official k8s.io documentation. Now I am just curious about generating pv / pvc / storageClass YAML via the pure kubectl CLI. What I am looking for is something similar to the logic for a deployment, for example:</p>
<pre><code>$ kubectl create deploy test --image=nginx --port=80 --dry-run -o yaml
W0419 23:54:11.092265 76572 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: test
name: test
spec:
replicas: 1
selector:
matchLabels:
app: test
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: test
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
resources: {}
status: {}
</code></pre>
<p>Or similar logic to run a single pod:</p>
<pre><code>$ kubectl run test-pod --image=nginx --port=80 --dry-run -o yaml
W0419 23:56:29.174692 76654 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: test-pod
name: test-pod
spec:
containers:
- image: nginx
name: test-pod
ports:
- containerPort: 80
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
</code></pre>
<p>So what should I type in order to generate pv / pvc / storageClass YAML? The fastest declarative way I currently know is:</p>
<pre><code>cat <<EOF | kubectl create -f -
<PV / PVC / storageClass yaml goes here>
EOF
</code></pre>
<p>Edited: Please note that I am looking for any fast way to generate a correct pv / pvc / storageClass template without remembering the specific syntax, through the CLI and not necessarily via kubectl.</p>
| Ming Hsieh | <h2>TL;DR:</h2>
<p>Look at, bookmark, and mentally index all the YAML files in this GitHub directory (content/en/examples/pods) before the exam. 100% legal according to the CKA curriculum.</p>
<p><a href="https://github.com/kubernetes/website/tree/master/content/en/examples/pods/storage/pv-volume.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/website/tree/master/content/en/examples/pods/storage/pv-volume.yaml</a></p>
<p>Then use this form during exam:</p>
<pre><code>kubectl create -f https://k8s.io/examples/pods/storage/pv-volume.yaml
</code></pre>
<p>In case you need to edit and apply:</p>
<pre><code># curl
curl -sL https://k8s.io/examples/pods/storage/pv-volume.yaml -o /your/path/pv-volume.yaml
# wget
wget -O /your/path/pv-volume.yaml https://k8s.io/examples/pods/storage/pv-volume.yaml
vi /your/path/pv-volume.yaml
kubectl apply -f /your/path/pv-volume.yaml
</code></pre>
<h2>Story:</h2>
<p>Actually, after looking around for my own answer, there's an article floating around that suggested bookmarking these 100% legal pages:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume</a></p>
<p><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#creating-a-cron-job" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#creating-a-cron-job</a></p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/</a></p>
<p>Note that:</p>
<pre><code>kubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml
</code></pre>
<ol>
<li>kubectl can create objects directly from a URL</li>
<li>Where is the original <a href="https://k8s.io" rel="nofollow noreferrer">https://k8s.io</a> pointing to?</li>
<li>What else could I benefit from?</li>
</ol>
<p>Then, after digging into the "pods/storage/pv-volume.yaml" code on the page above, the link points to:</p>
<p><a href="https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/storage/pv-volume.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/storage/pv-volume.yaml</a></p>
<p>Which directs to:</p>
<p><a href="https://github.com/kubernetes/website/tree/master/content/en/examples/pods" rel="nofollow noreferrer">https://github.com/kubernetes/website/tree/master/content/en/examples/pods</a></p>
<p>So <a href="https://k8s.io" rel="nofollow noreferrer">https://k8s.io</a> is a shortened URI as well as an HTTP 301 redirect to <a href="https://github.com/kubernetes/website/tree/master/content/en" rel="nofollow noreferrer">https://github.com/kubernetes/website/tree/master/content/en</a>, which helps the exam candidate easily produce (not copy-and-paste) manifests in the exam terminal.</p>
| Ming Hsieh |
<p>I am seeing a very strange issue trying to start the official <code>postgres:14.6-alpine</code> image on Kubernetes.</p>
<p>For reference the official postgres image allows for configuring the initialization script using the <code>POSTGRES_USER</code>, <code>POSTGRES_PASSWORD</code>, and <code>POSTGRES_DB</code> environment variables.</p>
<p>I have the following secret and configmap defined:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Secret
metadata:
namespace: default
name: postgres-credentials
data:
DATABASE_URL: cG9zdGdyZXM6Ly9sZXRzY2h1cmNoOnBhc3N3b3JkQHBvc3RncmVzOjU0MzIvbGV0c2NodXJjaA==
POSTGRES_USER: bGV0c2NodXJjaA==
POSTGRES_PASSWORD: cGFzc3dvcmQ=
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: default
name: postgres-config
data:
POSTGRES_DB: letschurch
</code></pre>
<p>The value <code>POSTGRES_USER</code> value of <code>bGV0c2NodXJjaA==</code> decodes to <code>letschurch</code> and the <code>POSTGRES_PASSWORD</code> value of <code>cGFzc3dvcmQ=</code> decodes to <code>password</code>.</p>
<p>I also have the following deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
restartPolicy: Always
containers:
- image: postgres:14.6-alpine
name: postgres
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
- secretRef:
name: postgres-credentials
</code></pre>
<p>When I shell into the running container, I can echo out the environment variables, and they appear to be intact:</p>
<pre><code>postgres-74f67b778-lsv4c:/# echo $POSTGRES_USER
letschurch
postgres-74f67b778-lsv4c:/# echo $POSTGRES_PASSWORD
password
postgres-74f67b778-lsv4c:/# echo $POSTGRES_DB
letschurch
postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_USER | wc -c
10
postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_PASSWORD | wc -c
8
postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_DB | wc -c
10
postgres-74f67b778-lsv4c:/# [ "$POSTGRES_USER" = "$POSTGRES_DB" ] && echo 'good!'
good!
</code></pre>
<p>However, I am not able to connect with the role <code>letschurch</code>. I can connect as <code>temporal</code> (another role I have set up with an init script), and when I run <code>\l</code> and <code>\du</code> I see that the role (but not the database name) have a <code>+</code> appended:</p>
<pre><code>temporal=> \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
---------------------+------------+----------+------------+------------+--------------------------
letschurch | letschurch+| UTF8 | en_US.utf8 | en_US.utf8 |
temporal=> \du
List of roles
Role name | Attributes | Member of
------------+------------------------------------------------------------+-----------
letschurch+| Superuser, Create role, Create DB, Replication, Bypass RLS | {}
| |
temporal | | {}
</code></pre>
<p>At first I thought that the base64-encoded <code>POSTGRES_USER</code> environment variable might have some whitespace or something encoded in it, so I double checked that I was encoding the value properly with <code>echo -n letschurch | base64</code>, and as you can see in the shell output above the resulting value is exactly 10 characters long, no extra whitespace. Also, the <code>POSTGRES_USER</code> and <code>POSTGRES_DB</code> environment variables are equal, but they appear to result in different outcomes in postgres.</p>
<p>Also, this does not happen with <code>docker-compose</code>. Given the following configuration, everything works as expected:</p>
<pre class="lang-yaml prettyprint-override"><code> postgres:
image: postgres:14.6-alpine
environment:
POSTGRES_USER: letschurch
POSTGRES_PASSWORD: password
POSTGRES_DB: letschurch
ports:
- '5432:5432'
</code></pre>
<p>What am I missing here? Why does the <code>letschurch</code> role get a <code>+</code> appended to it? Is there something to do with <code>secret</code>s that doesn't apply to <code>configMap</code>s that I'm missing?</p>
| knpwrs | <p>I think this had to do with a specific setting I had enabled for Docker Desktop for Mac:</p>
<p><a href="https://i.stack.imgur.com/7f60o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7f60o.png" alt="Docker Desktop Settings" /></a></p>
<p>I had changed from the default value (<code>gRPC FUSE</code>) to <code>VirtioFS</code>. When I changed it back to <code>gRPC FUSE</code> a number of issues resolved themselves, this one included.</p>
| knpwrs |
<p>Create <code>yaml-bomb.yaml</code> file:</p>
<pre><code>apiVersion: v1
data:
a: &a ["web","web","web","web","web","web","web","web","web"]
b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]
f: &f [*e,*e,*e,*e,*e,*e,*e,*e,*e]
g: &g [*f,*f,*f,*f,*f,*f,*f,*f,*f]
h: &h [*g,*g,*g,*g,*g,*g,*g,*g,*g]
i: &i [*h,*h,*h,*h,*h,*h,*h,*h,*h]
kind: ConfigMap
metadata:
name: yaml-bomb
namespace: default
</code></pre>
<p>Send <code>ConfigMap</code> creation request to Kubernetes API by cmd <code>kubectl apply -f yaml-bomb.yaml</code>.</p>
<p>The <code>kube-api</code> CPU/memory usage becomes very high, and later it even gets restarted.</p>
<p>How do we prevent such a YAML bomb?</p>
| maxwell jiang | <p>This is a <a href="https://en.wikipedia.org/wiki/Billion_laughs_attack" rel="noreferrer">billion laughs attack</a> and can only be fixed in the YAML processor.</p>
<p>Note that the Wikipedia is wrong here when it says</p>
<blockquote>
<p>A "Billion laughs" attack should exist for any file format that can contain references, for example this YAML bomb: </p>
</blockquote>
<p>The problem is not that the file format contains references; it is the processor expanding them. This is against the spirit of the YAML spec which says that anchors are used for nodes that are actually referred to from multiple places. In the loaded data, anchors & aliases should become multiple references to the same object instead of the alias being expanded to a copy of the anchored node.</p>
<p>As an example, compare the behavior of the <a href="https://yaml-online-parser.appspot.com/" rel="noreferrer">online PyYAML parser</a> and the <a href="https://nimyaml.org/testing.html" rel="noreferrer">online NimYAML parser</a> (full disclosure: my work) when you paste your code snippet. PyYAML won't respond because of the memory load from expanding aliases, while NimYAML doesn't expand the aliases and therefore responds quickly.</p>
<p>It's astonishing that Kubernetes suffers from this problem; I would have assumed since it's written in Go that they are able to properly handle references. You have to file a bug with them to get this fixed.</p>
| flyx |
<p>I have installed docker-registry on Kubernetes via helm.</p>
<p>I am able to push images with <code>docker push 0.0.0.0:5000/<my-container>:v1</code> using port-forward.</p>
<p>Now how do I reference the images in the registry from a deployment.yaml?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: <my-container>-deployment-v1
spec:
replicas: 1
template:
metadata:
labels:
app: <my-container>-deployment
version: v1
spec:
containers:
- name: <my-container>
image: 0.0.0.0:5000/<my-container>:v1 # <<< ????
imagePullPolicy: Always
ports:
- containerPort: 80
imagePullSecrets:
- name: private-docker-registry-secret
</code></pre>
<p>This does list my containers:</p>
<pre><code>curl -X GET http://0.0.0.0:5000/v2/_catalog
</code></pre>
<p>I keep getting <strong><em>ImagePullBackOff</em></strong> when deploying.</p>
<p>I tried using the internal service name and the cluster IP address; still not working.</p>
<p>Then I tried using secrets:</p>
<pre><code>{
"kind": "Secret",
"apiVersion": "v1",
"metadata": {
"name": "running-buffoon-docker-registry-secret",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/secrets/running-buffoon-docker-registry-secret",
"uid": "127c93c1-53df-11e9-8ede-a63ad724d5b9",
"resourceVersion": "216488",
"creationTimestamp": "2019-03-31T18:01:56Z",
"labels": {
"app": "docker-registry",
"chart": "docker-registry-1.7.0",
"heritage": "Tiller",
"release": "running-buffoon"
}
},
"data": {
"haSharedSecret": "xxx"
},
"type": "Opaque"
}
</code></pre>
<p>And added the secret to the deployment.yaml:</p>
<pre><code> imagePullSecrets:
- name: running-buffoon-docker-registry-secret
</code></pre>
<p>Then I get:<br></p>
<pre><code>image "x.x.x.x/:<my-container>v1": rpc error: code = Unknown desc = Error response from daemon: Get https://x.x.x.x/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
| Chris G. | <p>You need to get the <strong>cluster-ip</strong> of your local docker registry. </p>
<p>You will find this in the dashboard - just visit the registry pod page and then the associated <code>service</code>. Replace your image spec's <code>0.0.0.0</code> with the cluster IP. Also make sure the <code>port</code> matches - generally the port exposed by the registry service is different from the actual port exposed inside the cluster. If you have authentication set up in your registry, you will need an <code>imagePullSecret</code> as well.</p>
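<p>A quick sketch of what that looks like (the service name and namespace here are just examples):</p>
<pre><code># find the registry service's cluster IP and port
kubectl get svc -n default docker-registry
# then reference that address in the deployment, e.g.
#   image: 10.0.0.123:5000/<my-container>:v1
</code></pre>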
<p>I have blogged about minikube setup with a local registry - might be helpful. <a href="https://amritbera.com/journal/minikube-insecure-registry.html" rel="nofollow noreferrer">https://amritbera.com/journal/minikube-insecure-registry.html</a></p>
| Amrit |
<p>I am trying out the capability where 2 pods deployed to the same worker node in EKS are associated with different service accounts. Below are the steps:</p>
<ul>
<li>Each service account is associated with a different role: one with access to SQS and the other without access.</li>
<li>Used eksctl to associate an OIDC provider with the cluster and also created an iamserviceaccount, with the service account in Kubernetes and a role with the SQS access policy attached (implicit annotation of the service account with the IAM role provided by <code>eksctl create iamserviceaccount</code>).</li>
</ul>
<p>But when I try to start the pod whose service account is tied to the role with SQS access, I get access denied for SQS; however, if I add SQS permissions to the worker node instance role, it works fine.</p>
<p>Am I missing any steps and is my understanding correct?</p>
| rajesh kumar | <p>So, there are a few things required to get IRSA to work:</p>
<ol>
<li>There has to be an OIDC provider associated with the cluster, following the directions <a href="https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html" rel="nofollow noreferrer">here</a>.</li>
<li>The IAM role has to have a trust relationship with the OIDC provider, as defined in the AWS CLI example <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html" rel="nofollow noreferrer">here</a>.</li>
<li>The service account must be annotated with a matching <code>eks.amazonaws.com/role-arn</code>.</li>
<li>The pod must have the appropriate service account specified with a <code>serviceAccountName</code> in its <code>spec</code>, as per the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#podspec-v1-core" rel="nofollow noreferrer">API docs</a>.</li>
<li>The SDK for the app needs to support the <code>AssumeRoleWithWebIdentity</code> API call. Weirdly, the <code>aws-sdk-go-v2</code> SDK doesn't currently support it at all (the "old" <code>aws-sdk-go</code> does).</li>
</ol>
<p>It's working with the node role because one of the requirements above isn't met, meaning the credential chain "falls through" to the underlying node role.</p>
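<p>As a sketch of points 3 and 4 (the role ARN, service account name and namespace below are placeholders):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: sqs-reader
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/sqs-reader-role
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: sqs-reader
  containers:
  - name: app
    image: my-app:latest
</code></pre>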
| asthasr |
<p>The vast majority of tutorials and documentation on the web show Flask running in development mode. The log looks like this in development mode:</p>
<pre><code>* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://0.0.0.0:5555/ (Press CTRL+C to quit)
</code></pre>
<p>I want to know more about how to make it production ready. I've seen documentation on this as well using production ready WSGI servers and <code>nginx</code> as reverse proxy in front. But can somebody tell me why WSGI and reverse proxy is needed?</p>
<p>If my Flask application is dockerized and running in Google Kubernetes Engine is it even necessary then? Will GKE not take care of the purpose of WSGI and reverse proxy?</p>
| mr.bjerre | <p>As <a href="http://flask.pocoo.org/docs/1.0/deploying/" rel="nofollow noreferrer">Flask's documentation</a> states:</p>
<blockquote>
<p>Flask’s built-in server is not suitable for production</p>
</blockquote>
<p>Why WSGI? It's <a href="https://www.python.org/dev/peps/pep-0333/" rel="nofollow noreferrer">a standard way</a> to deploy Python web apps, it gives you options when choosing a server (i.e. you can choose the best fit for your application/workflow without changing your application), and it allows offloading scaling concerns to the server.</p>
<p>Why a reverse proxy? It depends on the server. Here is <a href="http://docs.gunicorn.org/en/stable/deploy.html" rel="nofollow noreferrer">Gunicorn's rationale</a>:</p>
<blockquote>
<p>... we strongly advise that you use Nginx. If you choose another proxy server you need to make sure that it buffers slow clients when you use default Gunicorn workers. Without this buffering Gunicorn will be easily susceptible to denial-of-service attacks.</p>
</blockquote>
<p>Here is <a href="https://docs.pylonsproject.org/projects/waitress/en/latest/reverse-proxy.html" rel="nofollow noreferrer">Waitress's rationale</a> for the same:</p>
<blockquote>
<p>Often people will set up "pure Python" web servers behind reverse proxies, especially if they need TLS support (Waitress does not natively support TLS). Even if you don't need TLS support, it's not uncommon to see Waitress and other pure-Python web servers set up to only handle requests behind a reverse proxy; these proxies often have lots of useful deployment knobs.</p>
</blockquote>
<p>Other practical reasons for a reverse proxy may include <em>needing</em> a reverse proxy for multiple backends (some of which may not be Python web apps), caching responses, and serving static content (something which Nginx, for example, happens to be good at). Not all WSGI servers need a reverse proxy: <a href="https://uwsgi-docs.readthedocs.io/en/latest/" rel="nofollow noreferrer">uWSGI</a> and <a href="https://cherrypy.org/" rel="nofollow noreferrer">CherryPy</a> treat it as optional.</p>
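<p>In practice, for a containerized Flask app this often just means changing the container's start command from the built-in server to a WSGI server, e.g. with Gunicorn (assuming your app object is <code>app</code> in <code>app.py</code>):</p>
<pre><code>pip install gunicorn
gunicorn --workers 4 --bind 0.0.0.0:8000 app:app
</code></pre>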
<p><sub>P.S. <a href="https://cloud.google.com/appengine/docs/standard/python/how-requests-are-handled" rel="nofollow noreferrer">Google App Engine</a> seems to be WSGI-compliant and doesn't require any additional configuration.</sub></p>
| imsky |
<p>Our global Prometheus scrape interval on k8s is <code>60s</code>, but I want one application to have a <code>300s</code> scrape interval.</p>
<p>I attach the following to my pod so the metrics are <strong>scraped</strong>.</p>
<pre><code>prometheus.io/scrape: 'true'
prometheus.io/port: '{{ .Values.prometheus.port }}'
prometheus.io/path: '{{ .Values.prometheus.path }}'
</code></pre>
<p>Now I want to slow down the frequency of this application specifically, and tested with</p>
<pre><code>prometheus.io/interval: '300s'
</code></pre>
<p>However, it does not work. I think it requires a <code>relabel</code> config; any other suggestions?
<a href="https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L251" rel="nofollow noreferrer">https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L251</a></p>
| Holm | <p>I would like to add some details. If you use the official Prometheus Helm charts, this should be in your <code>values-prometheus.yaml</code> file:</p>
<pre><code># extra scraping configs
# | is required, because extraScrapeConfigs is expected to be a string
extraScrapeConfigs: |
- job_name: 'kubernetes-service-endpoints-scrape-every-2s'
scrape_interval: 2s
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
#Custom
- source_labels: [__meta_kubernetes_service_annotation_example_com_scrape_every_2s]
action: keep
regex: true
# Boilerplate
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- source_labels: [__meta_kubernetes_pod_node_name]
action: replace
target_label: kubernetes_node
</code></pre>
<p>To upgrade the existing(default) installation:</p>
<pre><code>helm upgrade prometheus --values values-prometheus.yaml stable/prometheus
</code></pre>
<p>On the service or the pod you can now add these annotations:</p>
<pre><code>prometheus.io/path: /metrics
prometheus.io/port: "9090"
example.com/scrape_every_2s: "true"
</code></pre>
<p>Remove the original <code>prometheus.io/scrape: "true"</code>, because otherwise your service will show up as two separate Prometheus targets, which is probably not what you want.</p>
| Alex Fedulov |
<p>I would like to grant one user only the "get" privilege of the "kubectl" command. I suppose it should be done with RBAC; can anyone advise on it? Thanks.</p>
| James Pei | <p>Create an <code>allow-get.yaml</code> file with the following content, replace <code>my-user</code> with your user, and run <code>kubectl apply -f allow-get.yaml</code></p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: allow-get
rules:
- apiGroups:
- ""
resources:
- "*"
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: allow-get-bind
subjects:
- kind: User
name: my-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: allow-get
</code></pre>
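<p>You can then verify the access, for example:</p>
<pre><code>kubectl auth can-i get pods --as my-user      # should return "yes"
kubectl auth can-i delete pods --as my-user   # should return "no"
</code></pre>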
| TlmaK0 |
<p>If I have a buggy app in a container that is constantly writing to an emptyDir volume, it could use up all the space on the worker node and affect the performance of other pods / containers on the node. This breaks the expectation that containers are isolated from each other: what one container does should not negatively impact other containers on the node.</p>
<p>Is there a way to limit the amount of disk space used by an emptyDir volume (not the RAM-based emptyDir type)?</p>
| ams | <p>You can set <code>sizeLimit</code> on the volume (see <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#emptydirvolumesource-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#emptydirvolumesource-v1-core</a>). Setting this will, once the volume is full, evict the pod.</p>
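<p>For example (the volume name is arbitrary):</p>
<pre><code>volumes:
- name: scratch
  emptyDir:
    sizeLimit: 1Gi
</code></pre>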
| imsky |
<p>I am new to Google Cloud Platform and have the following context:</p>
<p>I have a Compute Engine VM running as a MongoDB server and a Compute Engine VM running as a NodeJS server already with Docker. Then the NodeJS application connects to Mongo via the default VPC internal IP. Now, I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS application Docker image to the cluster.</p>
<p>All services like GCE and GKE are in the same region (us-east-1).</p>
<p>I did a hard test by accessing a Kubernetes cluster node via SSH, deploying a simple MongoDB Docker image, and trying to connect to the remote MongoDB server via the command line, but the problem is the same: a timeout when trying to connect.</p>
<p>I have also checked the firewall settings on GCP as well as the <code>bindIp</code> setting on the MongoDB server, and nothing is blocking there.</p>
<p>Does anyone know what may be happening? Thank you very much.</p>
| Felipe Antero | <p>In my case traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).</p>
<p>I had to whitelist the cluster pod network listed in the cluster details:</p>
<p><code>Pod address range 10.8.0.0/14</code></p>
<p><a href="https://console.cloud.google.com/kubernetes/list" rel="noreferrer">https://console.cloud.google.com/kubernetes/list</a>
<a href="https://i.stack.imgur.com/EpnjW.png" rel="noreferrer"><img src="https://i.stack.imgur.com/EpnjW.png" alt="enter image description here" /></a></p>
<p><a href="https://console.cloud.google.com/networking/firewalls/list" rel="noreferrer">https://console.cloud.google.com/networking/firewalls/list</a></p>
<p><a href="https://i.stack.imgur.com/s8pTX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/s8pTX.png" alt="firewall" /></a></p>
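<p>Something along these lines should also work from the CLI (the pod CIDR, port and rule name are examples; use your cluster's actual pod address range):</p>
<pre><code>gcloud compute firewall-rules create allow-gke-pods-to-mongodb \
  --network=default \
  --source-ranges=10.8.0.0/14 \
  --allow=tcp:27017
</code></pre>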
| Dariusz Bacinski |
<p>I am currently facing a situation where I need to deploy a small cluster (only 4 pods for now) which will contain 4 different microservices. This cluster has to be duplicated so I can have one PRODUCTION cluster and one DEVELOPMENT cluster.</p>
<p>Even if it's not hard from my point of view (creating a cluster and then uploading Docker images to pods with parameters in order to use the right resource connection strings), I am stuck on the CI/CD part.</p>
<blockquote>
<p>From a Cloud Build trigger, how do I push the Docker image to the right cluster's pods? I have absolutely no idea AT ALL how to achieve it...</p>
</blockquote>
<p>Here is my cloudbuild.yaml:</p>
<pre><code>steps:
#step 1 - Getting the previous (current) image
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args: [
'-c',
'docker pull gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest || exit 0'
]
#step 2 - Build the image and push it to gcr.io
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'-t',
'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
'-t',
'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest',
'.'
]
#step 3 - Deploy our container to our cluster
- name: 'gcr.io/cloud-builders/kubectl'
args: ['apply', '-f', 'service.yaml', '--force']
env:
- 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'
#step 4 - Set the image
- name: 'gcr.io/cloud-builders/kubectl'
args: [
'set',
'image',
'deployment',
'{SERVICE_NAME}',
'{SERVICE_NAME}=gcr.io/{PROJECT_ID}/{SERVICE_NAME}'
]
env:
- 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'
# push images to Google Container Registry with tags
images: [
'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest'
]
</code></pre>
<p>Can anyone help me out? I don't really know in which direction to go to..</p>
| Emixam23 | <p>So my question is:</p>
<p>How are you triggering these builds? Manually? GitHub Trigger? HTTP Trigger using the REST API?</p>
<p>So you're almost there on the building/pushing part; you would need to use substitution variables <a href="https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values" rel="nofollow noreferrer">https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values</a></p>
<p>If you are triggering the builds manually, you would edit the build trigger and change the substitution variable to what you want it to be.
GitHub Trigger -- this is a little more complex, as you might want to do releases or branches.
HTTP Trigger -- same as manual: in your request you change the substitution variable. </p>
<p>So here's part of one of our repo build files. As you will see, there are different sub variables we use; sometimes we want to build the image AND deploy to the cluster, other times we just want to build or deploy.</p>
<pre><code>steps:
# pull docker image
- name: 'gcr.io/cloud-builders/docker'
id: pull-docker-image
entrypoint: 'bash'
args:
- '-c'
- |
docker pull $${_TAG_DOCKER_IMAGE} || exit 0
# build docker image
- name: 'gcr.io/cloud-builders/docker'
id: build-docker-image
entrypoint: 'bash'
args:
- '-c'
- |
if [[ "${_BUILD_IMAGE}" == "true" ]]; then
docker build -t ${_DOCKER_IMAGE_TAG} --cache-from $${_TAG_DOCKER_IMAGE} .;
else
echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
fi
# push docker image
- name: 'gcr.io/cloud-builders/docker'
id: push-docker-image
entrypoint: 'bash'
args:
- '-c'
- |
if [[ "${_BUILD_IMAGE}" == "true" ]]; then
docker push ${_DOCKER_IMAGE_TAG};
else
echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
fi
# tag docker image
- name: 'gcr.io/cloud-builders/gcloud'
id: tag-docker-image
entrypoint: 'bash'
args:
- '-c'
- |
if [[ "${_BUILD_IMAGE}" == "true" ]]; then
gcloud container images add-tag ${_DOCKER_IMAGE_TAG} $${_TAG_DOCKER_IMAGE} -q;
else
echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
fi
# update service image on environment
- name: 'gcr.io/cloud-builders/kubectl'
id: update service deployment image
entrypoint: 'bash'
args:
- '-c'
- |
if [[ "${_UPDATE_CLUSTER}" == "true" ]]; then
/builder/kubectl.bash set image deployment $REPO_NAME master=${_DOCKER_IMAGE_TAG} --namespace=${_DEFAULT_NAMESPACE};
else
echo "skipping ... UPDATE_CLUSTER=${_UPDATE_CLUSTER}";
fi
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
# subs are needed because of our different ENVs
# _DOCKER_IMAGE_TAG = ['gcr.io/$PROJECT_ID/$REPO_NAME:gcb-${_COMPANY_ENV}-$SHORT_SHA', 'other']
# _COMPANY_ENV = ['dev', 'staging', 'prod']
# _DEFAULT_NAMESPACE = ['default'] or ['custom1', 'custom2']
# _CLOUDSDK_CONTAINER_CLUSTER = ['dev', 'prod']
# _CLOUDSDK_COMPUTE_ZONE = ['us-central1-a']
# _BUILD_IMAGE = ['true', 'false']
# _UPDATE_CLUSTER = ['true', 'false']
substitutions:
_DOCKER_IMAGE_TAG: $DOCKER_IMAGE_TAG
_COMPANY_ENV: dev
_DEFAULT_NAMESPACE: default
_CLOUDSDK_CONTAINER_CLUSTER: dev
_CLOUDSDK_COMPUTE_ZONE: us-central1-a
_BUILD_IMAGE: 'true'
_UPDATE_CLUSTER: 'true'
options:
substitution_option: 'ALLOW_LOOSE'
env:
- _TAG_DOCKER_IMAGE=gcr.io/$PROJECT_ID/$REPO_NAME:${_COMPANY_ENV}-latest
- DOCKER_IMAGE_TAG=gcr.io/$PROJECT_ID/$REPO_NAME:gcb-${_COMPANY_ENV}-$SHORT_SHA
tags:
- '${_COMPANY_ENV}'
- 'build-${_BUILD_IMAGE}'
- 'update-${_UPDATE_CLUSTER}'
</code></pre>
<p>we have two workflows -- </p>
<ol>
<li>github trigger builds and deploys under the 'dev' environment. </li>
<li>we trigger via REST API <a href="https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds/create" rel="nofollow noreferrer">https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds/create</a> (we replace the variables via the request.json) -- this method also works using the <code>gcloud builds --substitutions</code> CLI, as sketched right after this list.</li>
</ol>
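<p>For example, a manual trigger of the above with the CLI could look like this (a rough sketch — the substitution values are placeholders to adjust to your environment):</p>
<pre><code>gcloud builds submit --config=cloudbuild.yaml \
  --substitutions=_COMPANY_ENV=staging,_CLOUDSDK_CONTAINER_CLUSTER=dev,_BUILD_IMAGE=true,_UPDATE_CLUSTER=true .
</code></pre>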
<p>Hope that answers your question!</p>
| Lance Sandino |
<p>I've created a Kubernetes deployment. However, there seem to be additional pods running, and I'm hoping to delete the unnecessary ones. </p>
<p>I see no need to run the dashboard container. I'd like to remove it to free up CPU resources.</p>
<p>How can I disable this container from starting up? Preferably from the deployment config.</p>
<p>Essentially the following pod:</p>
<pre><code>kubectl get pods --all-namespaces | grep "dashboard"
kube-system kubernetes-dashboard-490794276-sb6qs 1/1 Running 1 3d
</code></pre>
<p><strong>Additional information:</strong></p>
<p>Output of <code>kubectl --namespace kube-system get deployment</code>:</p>
<pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
heapster-v1.3.0 1 1 1 1 3d
kube-dns 2 2 2 2 3d
kube-dns-autoscaler 1 1 1 1 3d
kubernetes-dashboard 1 1 1 1 11m
l7-default-backend 1 1 1 1 3d
</code></pre>
<p>Output of <code>kubectl --namespace kube-system get rs</code>:</p>
<pre><code>NAME DESIRED CURRENT READY AGE
heapster-v1.3.0-191291410 1 1 1 3d
heapster-v1.3.0-3272732411 0 0 0 3d
heapster-v1.3.0-3742215525 0 0 0 3d
kube-dns-1829567597 2 2 2 3d
kube-dns-autoscaler-2501648610 1 1 1 3d
kubernetes-dashboard-490794276 1 1 1 12m
l7-default-backend-3574702981 1 1 1 3d
</code></pre>
| Chris Stryczynski | <h2>Update 2023-03</h2>
<p>To have a clean removal you must delete a lot of objects. Over time, removing the dashboard has become a common problem, so you can now do this:</p>
<pre><code>kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
</code></pre>
<p>If you don't want to do this blindly and want to know what you are removing, just try to execute this:</p>
<pre><code>kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kube-system | grep dashboard
</code></pre>
<p>If the output is empty, just double-check your dashboard's namespace with the command</p>
<pre><code>kubectl get namespaces
</code></pre>
<p>The dashboard is stored in a separate namespace and, depending on your context, it may not always be in the same namespace. If you want to have a deeper look, start by trying <code>kubernetes-dashboard</code> or <code>kube-system</code>, and always specify the namespace while calling <code>kubectl</code>.</p>
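<p>Note that if the dashboard lives in its own <code>kubernetes-dashboard</code> namespace (as in newer installs), deleting that namespace removes all of its objects at once:</p>
<pre><code>kubectl delete namespace kubernetes-dashboard
</code></pre>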
| freedev |
<p>In order for a service to work, it needs an environment variable called <code>DSN</code> which resolves to something like <code>postgres://user:[email protected]:5432/database</code>. I built this value with a <code>ConfigMap</code> resource:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: idp-config
namespace: diary
data:
DSN: postgres://user:[email protected]:5432/database
</code></pre>
<p>This ConfigMap mounts as environment variable in my service Pod. Since the values are different from <code>user</code> and <code>password</code> and these PostgreSQL credentials are in another k8s resource (a <code>Secret</code> and a <code>ConfigMap</code>), how can I properly build this <code>DSN</code> environment in a k8s resource yaml so my service can connect to the database?</p>
| Rodrigo Souza | <p>Digging into the Kubernetes docs, I was able to find the answer. According to <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config" rel="nofollow noreferrer">Define Environment Variables for a Container</a>:</p>
<blockquote>
<p>Environment variables that you define in a Pod’s configuration can be used elsewhere in the configuration, for example in commands and arguments that you set for the Pod’s containers. In the example configuration below, the GREETING, HONORIFIC, and NAME environment variables are set to Warm greetings to, The Most Honorable, and Kubernetes, respectively. Those environment variables are then used in the CLI arguments passed to the env-print-demo container.</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: print-greeting
spec:
containers:
- name: env-print-demo
image: bash
env:
- name: GREETING
value: "Warm greetings to"
- name: HONORIFIC
value: "The Most Honorable"
- name: NAME
value: "Kubernetes"
command: ["echo"]
args: ["$(GREETING) $(HONORIFIC) $(NAME)"]
</code></pre>
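<p>Applied to the <code>DSN</code> case above, a minimal sketch could look like the following (the Secret/ConfigMap names and keys are placeholders — adjust them to your actual resources; referenced variables must be declared before the variable that uses them):</p>
<pre><code>env:
  # credentials from the existing Secret (hypothetical name/keys)
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: postgres-credentials
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-credentials
        key: password
  # host from the existing ConfigMap (hypothetical name/key)
  - name: DB_HOST
    valueFrom:
      configMapKeyRef:
        name: postgres-config
        key: host
  # compose the DSN from the variables declared above
  - name: DSN
    value: "postgres://$(DB_USER):$(DB_PASSWORD)@$(DB_HOST):5432/database"
</code></pre>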
| Rodrigo Souza |
<p>I'm using k8s 1.11.2 to build my service, the YAML file looks like this:</p>
<p><strong>Deployment</strong></p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: nginx-test
namespace: default
labels:
- type: test
spec:
replicas: 1
selector:
matchLabels:
- type: test
template:
metadata:
labels:
- type: test
spec:
containers:
- image: nginx:1.14
name: filebeat
ports:
- containerPort: 80
</code></pre>
<p><strong>Service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
- type:test
spec:
type: ExternalName
externalName: my.nginx.com
externalIPs:
- 192.168.125.123
clusterIP: 10.240.20.1
ports:
- port: 80
name: tcp
selector:
- type: test
</code></pre>
<hr>
<p>and I get this error:</p>
<blockquote>
<p>error validating data: [ValidationError(Service.metadata.labels):
invalid type for
io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.labels: got "array",
expected "map", ValidationError(Service.spec.selector): invalid type
for io.k8s.api.core.v1.ServiceSpec.selector: got "array", expected
"map"];</p>
</blockquote>
<p>I am sure the format of my YAML file is right, because I used the website <a href="http://www.yamllint.com/" rel="noreferrer">http://www.yamllint.com/</a> to validate it.</p>
<p>Why am I getting this error?</p>
| 李向朋 | <p><a href="http://www.yamllint.com/" rel="noreferrer">yamllint.com</a> is a dubious service because it does not tell us which YAML version it is checking against and which implementation it is using. Avoid it.</p>
<p>More importantly, while your input may be valid YAML, this does not mean that it is a valid input for kubernetes. YAML allows you to create any kind of structure, while kubernetes expects a certain structure from you. This is what the error is telling you:</p>
<blockquote>
<p>got "array", expected "map"</p>
</blockquote>
<p>This means that at a place where kubernetes expects a <em>mapping</em> you provided an array (<em>sequence</em> in proper YAML terms). The error message also gives you the path where this problem occurs:</p>
<blockquote>
<p>ValidationError(Service.metadata.labels):</p>
</blockquote>
<p>A quick check on metadata labels in kubernetes reveals <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">this documentation</a>, which states that labels need to be mappings, not arrays.</p>
<p>So in your input, the last line here is the culprit:</p>
<pre><code>metadata:
name: nginx-test
namespace: default
labels:
- type: test
</code></pre>
<p><code>-</code> is a YAML indicator for a sequence item, creating a sequence as value for the key <code>labels:</code>. Dropping it will make it a mapping instead:</p>
<pre><code>metadata:
name: nginx-test
namespace: default
labels:
type: test
</code></pre>
| flyx |
<p>While joining the centos 7 node to cluster 1.9.0, <code>kubeadm join</code> command gives this error message.</p>
<p><code>Failed to request cluster info, will try again: [Get https://10.10.10.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]</code></p>
<p>I got this message last night, this morning when run this command it worked. I removed and trying to create the cluster this morning, again its giving same error message.</p>
<pre><code>kubeadm join --token f115fe.f0eea05182abe63a 10.10.10.10:6443 --discovery-token-ca-cert-hash sha256:48d4dc90a08ff73a0cfc63e30a313aaf1903fd51da8f9ce4cc79f95ce529b8d1
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.10.10:6443"
[discovery] Requesting info from "https://10.10.10.10:6443" again to validate TLS against the pinned public key
[discovery] Failed to request cluster info, will try again: [Get https://10.10.10.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
[discovery] Failed to request cluster info, will try again: [Get https://10.10.10.10:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
</code></pre>
<p>How to resolve this error message?</p>
| sfgroups | <p>The root cause of the issue was that my node didn't have the correct time. After configuring the NTP service, the node was able to join the master.</p>
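<p>On CentOS 7 that can be done, for example, with chrony (ntpd works just as well):</p>
<pre><code>yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc sources   # verify that time sources are reachable
</code></pre>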
| sfgroups |
<p>I'm running a spring boot inside a pod with the below configurations.</p>
<p>Pod limits:</p>
<pre><code>resources:
limits:
cpu: "1"
memory: 2500Mi
requests:
cpu: "1"
memory: 2500Mi
</code></pre>
<p>command args:</p>
<pre><code>spec:
containers:
- args:
- -c
- ln -sf /dev/stdout /var/log/access.log;java -Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false -Djava.security.egd=file:/dev/./urandom
-Xms1600m -Xmx1600m -XX:NewSize=420m -XX............
</code></pre>
<ol>
<li>What happens if the java process has reached its max heap limit (i.e 1600m (Xmx1600m))</li>
<li>If Xmx has no effect on the java process inside a pod, it can go up to pod limit right (i.e. memory: 2500Mi of limits section)</li>
<li>If the above configurations are correct, then we are wasting 900Mi of memory right (2500-1600=900)</li>
</ol>
| k_vishwanath | <p>The -Xmx flag only controls the Java heap, which is the space available for your own Java objects when running your code. If you run out, the JVM will do garbage collection to make space. If you still run out, an <code>OutOfMemoryError</code> is thrown.</p>
<p>The JVM also uses a bunch of other memory internally for things like loading classes, JIT compilation, thread stacks, etc. Therefore you need to allow more memory in Kubernetes than just the -Xmx value. If the container exceeds the Kubernetes memory limit, it will be OOM-killed.</p>
<p>The config you posted above looks fine. Normally I come to find these values by looking at the Kubernetes memory usage graph after running for some time without limits.</p>
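<p>For example, assuming the metrics-server (or Heapster on older clusters) is installed, you can sample the actual usage with:</p>
<pre><code>kubectl top pod my-pod-name -n my-namespace
</code></pre>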
| Hitobat |
<p>I'm running the kafka kubenetes helm deployment, however I am unsure about how to install a custom plugin.</p>
<p>When running custom plugin on my local version of kafka I mount the volume <code>/myplugin</code> to the Docker image, and then set the plugin path environment variable.</p>
<p>I am unsure about how to apply this workflow to the helm charts / kubernetes deployment, mainly how to go about mounting the plugin to the Kafka Connect pod such that it can be found in the default <code>plugin.path=/usr/share/java</code>.</p>
| Sam Palmer | <p>Have a look at the last few slides of <a href="https://talks.rmoff.net/QZ5nsS/from-zero-to-hero-with-kafka-connect" rel="noreferrer">https://talks.rmoff.net/QZ5nsS/from-zero-to-hero-with-kafka-connect</a>. You can mount your plugins but the best way is to either build a new image to extend the <code>cp-kafka-connect-base</code>, or to install the plugin at runtime - both using Confluent Hub. </p>
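<p>As a rough sketch of the "build a new image" approach (the connector coordinates and versions are just examples — in your case it would be your own plugin, e.g. copied in or installed from a local zip instead of Confluent Hub):</p>
<pre><code>FROM confluentinc/cp-kafka-connect-base:6.2.0
# connectors installed via Confluent Hub land under /usr/share/confluent-hub-components;
# make sure that directory is included in your plugin.path / CONNECT_PLUGIN_PATH
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.2.0
</code></pre>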
| Robin Moffatt |
<p>I'm trying to exec kubernetes pod using the Websocket, as per the kubernetes document it can be achieved through passing the <strong>Bearer THETOKEN</strong></p>
<p>When using bearer token authentication from an http client, the API server expects an Authorization header with a value of Bearer THETOKEN</p>
<p>Here is the sample for <code>wscat</code> passing Header Value <code>--header "Authorization: Bearer $TOKEN"</code> to establish exec to pod and the connection went successfully</p>
<pre><code>/ # wscat --header "Authorization: Bearer $TOKEN" -c "wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg
-6f69577c74-jxbwn/exec?stdin=1&stdout=1&stderr=1&tty=1&command=sh"
</code></pre>
<p>But when it comes to <a href="https://developer.mozilla.org/en/docs/Web/API/WebSocket" rel="noreferrer">Websocket API</a> connection from web browser </p>
<blockquote>
<p>How to pass this Beaer Token in the web Socket as per the doc there is no standard way to pass custom header </p>
</blockquote>
<p>Tried URI Query Parameter <strong>access_token= Bearer TOKEN</strong> in the API query it doesn't work and the Authentication denied with 403 </p>
<pre><code>wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg-%206f69577c74-jxbwn/exec?stdout=1&stdin=1&stderr=1&tty=1&command=%2Fbin%2Fsh&command=-i&access_token=$TOKEN
</code></pre>
| anish | <p>I never used websocket with kubernetes before, but here is the documentation about the token authentication method for websocket browser clients <a href="https://github.com/kubernetes/kubernetes/pull/47740" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/47740</a></p>
<p>You must send the token in the subprotocol parameter, with the token encoded in base64.</p>
<p>So it should be:</p>
<pre><code>wscat -s "base64url.bearer.authorization.k8s.io.$TOKEN_IN_BASE64","base64.binary.k8s.io" -c "wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg
-6f69577c74-jxbwn/exec?stdin=1&stdout=1&stderr=1&tty=1&command=sh"
</code></pre>
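<p>One detail that is easy to miss: the token in the subprotocol is expected to be base64url-encoded without padding. A minimal sketch (assuming a GNU/Linux shell) for producing <code>$TOKEN_IN_BASE64</code>:</p>
<pre><code>TOKEN_IN_BASE64=$(echo -n "$TOKEN" | base64 -w0 | tr '+/' '-_' | tr -d '=')
</code></pre>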
| TlmaK0 |
<p>We are using a Apache-Kafka deployment on Kubernetes which is based on the ability to label pods after they have been created (see <a href="https://github.com/Yolean/kubernetes-kafka" rel="noreferrer">https://github.com/Yolean/kubernetes-kafka</a>). The init container of the broker pods takes advantage of this feature to set a label on itself with its own numeric index (e.g. "0", "1", etc) as value. The label is used in the service descriptors to select exactly one pod.</p>
<p>This approach works fine on our DIND-Kubernetes environment. However, when tried to port the deployment onto a Docker-EE Kubernetes environment we ran into trouble because the command <code>kubectl label pod</code> generates a run time error which is completely misleading (also see <a href="https://github.com/fabric8io/kubernetes-client/issues/853" rel="noreferrer">https://github.com/fabric8io/kubernetes-client/issues/853</a>).</p>
<p>In order to verify the run time error in a minimal setup we created the following deployment scripts.</p>
<h1>First step: Successfully label pod using the Docker-EE-Host</h1>
<pre><code># create a simple pod as a test target for labeling
> kubectl run -ti -n default --image alpine sh
# get the pod name for all further steps
> kubectl -n default get pods
NAME READY STATUS RESTARTS AGE
nfs-provisioner-7d49cdcb4f-8qx95 1/1 Running 1 7d
nginx-deployment-76dcc8c697-ng4kb 1/1 Running 1 7d
nginx-deployment-76dcc8c697-vs24j 1/1 Running 0 20d
sh-777f6db646-hrm65 1/1 Running 0 3m <--- This is the test pod
test-76bbdb4654-9wd9t 1/1 Running 2 6d
test4-76dbf847d5-9qck2 1/1 Running 0 5d
# get client and server versions
> kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5",
GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean",
BuildDate:"2018-06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11- docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5",
GitTreeState:"clean", BuildDate:"2018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
# set label
kubectl -n default label pod sh-777f6db646-hrm65 "mylabel=hallo"
pod "sh-777f6db646-hrm65" labeled <---- successful execution
</code></pre>
<p>Everything works fine as expected.</p>
<h1>Second step: Reproduce run-time error from within pod</h1>
<h2>Create Docker image containing <code>kubectl</code> 1.10.5</h2>
<pre><code>FROM debian:stretch-
slim@sha256:ea42520331a55094b90f6f6663211d4f5a62c5781673935fe17a4dfced777029
ENV KUBERNETES_VERSION=1.10.5
RUN set -ex; \
export DEBIAN_FRONTEND=noninteractive; \
runDeps='curl ca-certificates procps netcat'; \
buildDeps=''; \
apt-get update && apt-get install -y $runDeps $buildDeps --no-install- recommends; \
rm -rf /var/lib/apt/lists/*; \
\
curl -sLS -o k.tar.gz -k https://dl.k8s.io/v${KUBERNETES_VERSION}/kubernetes-client-linux-amd64.tar.gz; \
tar -xvzf k.tar.gz -C /usr/local/bin/ --strip-components=3 kubernetes/client/bin/kubectl; \
rm k.tar.gz; \
\
apt-get purge -y --auto-remove $buildDeps; \
rm /var/log/dpkg.log /var/log/apt/*.log
</code></pre>
<p>This image is deployed as <code>10.100.180.74:5000/test/kubectl-client-1.10.5</code> in a site local registry and will be referred to below.</p>
<h2>Create a pod using the container above</h2>
<pre><code>apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: pod-labeler
namespace: default
spec:
selector:
matchLabels:
app: pod-labeler
replicas: 1
serviceName: pod-labeler
updateStrategy:
type: OnDelete
template:
metadata:
labels:
app: pod-labeler
annotations:
spec:
terminationGracePeriodSeconds: 10
containers:
- name: check-version
image: 10.100.180.74:5000/test/kubectl-client-1.10.5
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
value: sh-777f6db646-hrm65
command: ["/usr/local/bin/kubectl", "version" ]
- name: label-pod
image: 10.100.180.74:5000/test/kubectl-client-1.10.5
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
value: sh-777f6db646-hrm65
command: ["/bin/bash", "-c", "/usr/local/bin/kubectl -n default label pod $POD_NAME 'mylabel2=hallo'" ]
</code></pre>
<h2>Logging output</h2>
<p>We get the following logging output</p>
<pre><code># Log of the container "check-version"
2018-07-18T11:11:10.791011157Z Client Version: version.Info{Major:"1",
Minor:"10", GitVersion:"v1.10.5",
GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean",
BuildDate:"2018-\
06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
2018-07-18T11:11:10.791058997Z Server Version: version.Info{Major:"1",
Minor:"8+", GitVersion:"v1.8.11-docker-8d637ae",
GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean",
BuildDate:"2018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc",
Platform:"linux/amd64"}
</code></pre>
<p>and the run time error</p>
<pre><code>2018-07-18T11:24:15.448695813Z The Pod "sh-777f6db646-hrm65" is invalid:
spec.tolerations: Forbidden: existing toleration can not be modified except its tolerationSeconds
</code></pre>
<h2>Notes</h2>
<ul>
<li>This is not an authorization problem since we've given the default user of the default namespace full administrative rights. In case we don't, we get an error message referring to missing permissions.</li>
<li>Both client and servers versions "outside" (e.g on the docker host) and "inside" (e.g. the pod) are identical down to the GIT commit tag</li>
<li>We are using version 3.0.2 of the Universal Control Plane</li>
</ul>
<p>Any ideas?</p>
| Marcus Rickert | <p>It was pointed out in one of the comments that the issue may be caused by a missing permission even though the error message does not insinuate so. We officially filed a ticket with Docker and actually got exactly this result: In order to be able to set/modify a label from within a pod the default user of the namespace must be given the "Scheduler" role on the swarm resource (which later shows up as <code>\</code> in the GUI). Granting this permission fixes the problem. See added grant in Docker-EE-GUI below.</p>
<p><a href="https://i.stack.imgur.com/63xsH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/63xsH.png" alt="New grant on Docker-EE-GUI"></a></p>
<p>From my point of view, this is far from obvious. The Docker support representative offered to investigate if this is actually expected behavior or results from a bug. As soon as we learn more on this question I will include it into our answer.</p>
<p>As for using more debugging output: Unfortunately, adding <code>--v=9</code> to the calls of <code>kubectl</code> does not return any useful information. There's too much output to be displayed here but the overall logging is very similar in both cases: It consists of a lot GET API requests which are all successful followed by a final PATCH API request which succeeds in the one case but fails in the other as described above.</p>
| Marcus Rickert |
<p>I have created a headless statefull service in kubernates. and cassandra db is running fine.</p>
<pre><code>PS C:\> .\kubectl.exe get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra None <none> 9042/TCP 50m
kubernetes 10.0.0.1 <none> 443/TCP 6d
PS C:\> .\kubectl.exe get pods
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 49m
cassandra-1 1/1 Running 0 48m
cassandra-2 1/1 Running 0 48m
</code></pre>
<p>I am running all this on minikube. From my laptop i am trying to connect to 192.168.99.100:9402 using a java program. But it is not able to connect.</p>
| parag mangal | <p>Looks like your service is not defined with NodePort. Can you change the service type to <code>NodePort</code> and test it?</p>
<p>When we define the svc as NodePort, we should get two port numbers for the service.</p>
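<p>A minimal sketch of such a service (the selector and the <code>nodePort</code> value are assumptions; the nodePort must fall into the cluster's NodePort range, by default 30000-32767):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cassandra-external
spec:
  type: NodePort
  selector:
    app: cassandra        # must match your StatefulSet's pod labels
  ports:
  - port: 9042
    targetPort: 9042
    nodePort: 30042       # example value in the default range
</code></pre>
<p>With minikube you would then connect from your laptop to the node port on the minikube IP (e.g. 192.168.99.100:30042) instead of 9042.</p>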
| sfgroups |
<p>I created an headless service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp-service-headless
spec:
ports:
- port: 80
selector:
app: myapp
clusterIP: None
</code></pre>
<p>From Kubernetes dashboard I can see its <code>Internal endpoints</code>:</p>
<pre><code>myapp-service-headless:80 TCP
myapp-service-headless:0 TCP
</code></pre>
<p>In this application, I also set internal endpoint to:</p>
<pre><code>http://myapp-service-headless
</code></pre>
<p>But from outside, how can I access its IP to connect API?</p>
<p>For example, my Kubernetes' IP is <code>192.168.99.100</code>, then connect to <code>192.168.99.100</code> is okay?</p>
<h1>Now the service status from Kubernetes dashboard</h1>
<h2>Services</h2>
<p><a href="https://i.stack.imgur.com/E6BeS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E6BeS.png" alt="enter image description here" /></a></p>
<h2>Service Details</h2>
<p><a href="https://i.stack.imgur.com/6WlBi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6WlBi.png" alt="enter image description here" /></a></p>
| online | <p>There are two options to expose the service externally; one is to use an ingress controller to connect to the server. </p>
<p>The simpler method is to change your service type to NodePort; then you should be able to access the server using the node IP and the service's external port number.</p>
<p>Here is more info:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport</a></p>
| sfgroups |
<p>In my single master Kubernetes 1.15 cluster, some of the pods shows in NotReady status. some pods in Ready and in NotReady status?</p>
<p>How to clean up pods in NotReady status?</p>
<pre><code> # crictl pods
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
7ddfd0ce6c7ea 8 hours ago Ready kube-proxy-vntzl kube-system 0
e430a86591d26 8 hours ago Ready calico-kube-controllers-65b8787765-rrlwv kube-system 3
e4d6510396731 8 hours ago Ready coredns-5c98db65d4-gbplj kube-system 2
07b7eda330c7d 8 hours ago Ready kube-apiserver-master01 kube-system 3
9310330074be8 8 hours ago Ready etcd-master01 kube-system 3
929ea8dc9580c 8 hours ago Ready kube-scheduler-master01 kube-system 3
3fb1789729499 8 hours ago Ready calico-node-h422j kube-system 3
b833585489625 8 hours ago Ready kube-controller-manager-master01 kube-system 3
4aef641d05712 8 hours ago NotReady calico-kube-controllers-65b8787765-rrlwv kube-system 2
69f4929fe0268 8 hours ago NotReady coredns-5c98db65d4-gbplj kube-system 1
10536cc6250ee 8 hours ago NotReady kube-scheduler-master01 kube-system 2
7b7023760c906 8 hours ago NotReady calico-node-h422j kube-system 2
180fba7f48d86 8 hours ago NotReady kube-controller-manager-master01 kube-system 2
d825333e0a833 8 hours ago NotReady etcd-master01 kube-system 2
5d9d9706458d8 8 hours ago NotReady kube-apiserver-master01 kube-system 2
</code></pre>
<p>Thnaks</p>
| sfgroups | <p>I was able to remove the NotReady pods using the <code>crictl</code> command:</p>
<pre><code>crictl pods|grep NotReady|cut -f1 -d" "|xargs -L 1 -I {} -t crictl rmp {}
</code></pre>
| sfgroups |
<p>I would like to be able to access and manage a GKE (kubernetes) cluster from a Google Cloud function written in python.
I managed to access and retrieve data from the created cluster (endpoint, username, and password at least), however I dont know how to use them with the kubernetes package api.</p>
<p>Here are my imports :</p>
<pre><code>import google.cloud.container_v1 as container
from google.auth import compute_engine
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client, config
</code></pre>
<p>Here is the code for cluster data :</p>
<pre><code>project_id = 'my-gcp-project'
zone = 'my-zone'
cluster_id = 'my-existing-cluster'
credentials = compute_engine.Credentials()
gclient: ClusterManagerClient = container.ClusterManagerClient(credentials=credentials)
cluster = gclient.get_cluster(project_id,zone,cluster_id)
cluster_endpoint = cluster.endpoint
print("*** CLUSTER ENDPOINT ***")
print(cluster_endpoint)
cluster_master_auth = cluster.master_auth
print("*** CLUSTER MASTER USERNAME PWD ***")
cluster_username = cluster_master_auth.username
cluster_password = cluster_master_auth.password
print("USERNAME : %s - PASSWORD : %s" % (cluster_username, cluster_password))
</code></pre>
<p>I would like to do something like this after that :</p>
<pre><code>config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
<p>However, I can't figure out how to set my endpoint and authentification informations.
Can anyone help me please ?</p>
| Ab. C. | <p>You can use a bearer token rather than using basic authentication:</p>
<pre><code>from google.auth import compute_engine
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client
def test_gke(request):
project_id = "my-gcp-project"
zone = "my-zone"
cluster_id = "my-existing-cluster"
credentials = compute_engine.Credentials()
cluster_manager_client = ClusterManagerClient(credentials=credentials)
cluster = cluster_manager_client.get_cluster(name=f'projects/{project_id}/locations/{zone}/clusters/{cluster_id}')
configuration = client.Configuration()
configuration.host = f"https://{cluster.endpoint}:443"
configuration.verify_ssl = False
configuration.api_key = {"authorization": "Bearer " + credentials.token}
client.Configuration.set_default(configuration)
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
pods = v1.list_pod_for_all_namespaces(watch=False)
for i in pods.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
| Dustin Ingram |
<p>I have a java application that runs inside a Kubernetes pod.</p>
<p>The application performs several tasks (<code>taskA</code>, <code>taskB</code>, etc.). The application supports running multiple instances in different pods. All the pods are doing the same tasks. </p>
<p>However, there is a task that should only be done by only one of the pods (e.g. <code>taskA</code> should only run in one of the pods). And if the pod that is performing the specific task dies, one of the other nodes should start doing that task (passive node, with regards to <code>taskA</code>, takes over). </p>
<p>Is there some support for this feature in k8s, or do I need use some other service (e.g. zookeeper)?</p>
| Ahmed A | <p>After researching David's answer, I found the Kubernetes blog recommends a way to do leader election:</p>
<p><a href="https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/</a></p>
<p>Newer articles have come out since then with improvements to that algorithm but they are still based on the original technique, according to this article:</p>
<p><a href="https://medium.com/hybrid-cloud-hobbyist/leader-election-architecture-kubernetes-32600da81e3c" rel="nofollow noreferrer">https://medium.com/hybrid-cloud-hobbyist/leader-election-architecture-kubernetes-32600da81e3c</a></p>
<p>It looks like you'll have to copy a bit of code or add in a new external dependency for this.</p>
| Alexander Taylor |
<p>im Running Kubernetes (minikube) on Windows via Virtualbox. I've got a Services running on my Host-Service i dont want to put inside my Kubernetes Cluster, how can i access that Services from Inside my Pods?</p>
<p>Im new to to Kubernetes i hope this Questions isnt to stupid to ask.</p>
<p>I tried to create a Service + Endpoint but it didnt work:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>kind: Endpoints
apiVersion: v1
metadata:
name: vetdb
subsets:
- addresses:
- ip: 192.168.99.100
ports:
- port: 3307</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
name: vetdb
spec:
selector:
app: vetdb
type: ClusterIP
ports:
- port: 3306
targetPort: 3307</code></pre>
</div>
</div>
</p>
<p>i started a ubuntu image inside the same cluster the pod should be running on later and tested the connection:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>$ root@my-shell-7766cd89c6-rtxt2:/# curl vetdb:3307 --output test
Host '192.168.99.102' is not allowed to connect to this MariaDB serverroot@my-shell</code></pre>
</div>
</div>
</p>
<p>This is the same Output i get running (except other Host-IP)</p>
<pre><code>curl 192.168.99.100:3307
</code></pre>
<p>on my Host PC ==> Itworks.</p>
<p>But i cant access the Host from inside my Microservices where i really need to access the URL.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
eureka-77f949499-g2l82 1/1 Running 0 2h
my-shell-7766cd89c6-rtxt2 1/1 Running 0 2h
vet-ms-54b89f9c86-29psf 1/1 Running 10 18m
vet-ms-67c774fd9b-2fnjc 0/1 CrashLoopBackOff 7 18m</code></pre>
</div>
</div>
</p>
<p>The Curl Response i posted above was from Pod: <code>my-shell-7766cd89c6-rtxt2</code>
But i need to access vetdb from <code>vet-ms-*</code></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>$ kubectl logs -f vet-ms-67c774fd9b-2fnjc
...
Caused by: java.net.UnknownHostException: vetdb
...</code></pre>
</div>
</div>
</p>
<p>Spring URL Settings i tried</p>
<pre><code>spring.profiles.datasource.url: jdbc:mysql://vetdb:3307/vetdb?useSSL=false&allowPublicKeyRetrieval=true
</code></pre>
<blockquote>
<pre><code>spring.profiles.datasource.url: jdbc:mysql://vetdb:3306/vetdb?useSSL=false&allowPublicKeyRetrieval=true
</code></pre>
</blockquote>
<pre><code>spring.profiles.datasource.url: jdbc:mysql://vetdb/vetdb?useSSL=false&allowPublicKeyRetrieval=true
</code></pre>
<p>Ty guys</p>
<hr>
<hr>
<p>Edit://
i allowed every Host to Connect to the DB to remove this error</p>
<pre><code>Host '192.168.99.102' is not allowed to connect to this MariaDB
</code></pre>
<p>but i still get the Same Unknown Host Exception inside of my Microservices.</p>
| pro sp | <p>I think the Ubuntu image test is most informative here.</p>
<p>From the error message I think the problem is in the MySQL config. You must configure the server to listen on your host IP address (i.e. not localhost or a socket file).</p>
<p>In addition, you must ensure that IP addresses from the pod subnets are allowed to connect.</p>
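<p>A rough sketch of both pieces (config file path, user, password and subnet are placeholders to adapt to your MariaDB setup):</p>
<pre><code># make MariaDB listen on all interfaces (e.g. in /etc/mysql/mariadb.conf.d/50-server.cnf)
#   bind-address = 0.0.0.0

# then allow the pod/VM subnet to connect
mysql -u root -p -e "GRANT ALL PRIVILEGES ON vetdb.* TO 'user'@'192.168.99.%' IDENTIFIED BY 'password'; FLUSH PRIVILEGES;"
</code></pre>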
| Hitobat |
<p>Is it possible to <strong>access machine environments inside dockerfile</strong>? I was thinking passing the SECRET as build ARG, like so:</p>
<p>docker-compose:</p>
<pre class="lang-yaml prettyprint-override"><code>version: '3.5'
services:
service:
...
build:
...
args:
SECRET: ${SECRET}
...
</code></pre>
<p>dockerfile:</p>
<pre><code>FROM image
ARG SECRET
RUN script-${SECRET}
</code></pre>
<p><em>Note:</em> the container is build in kubernetes, I can not pass any arguments to the build command or perform any command at all.</p>
<p><em>Edit 1:</em> It is okay to pass SECRET as ARG because this is not sensitive data. I'm using SECRETS to access micro service data, and I can only store data using secrets. Think of this as machine environment.</p>
<p><em>Edit 2:</em> This was not a problem with docker but with the infrastructure that I was working with which does not allow any arguments to be passed to the docker build.</p>
| Gustavo Santamaría | <p>Secrets should be used at run time and provided by the execution environment.</p>
<p>Also, everything that executes during a container build is recorded in image layers and is available later to anyone who can get access to the image. That's why it's hard to consume secrets during the build in a secure way.</p>
<p>In order to address this, Docker recently introduced <a href="https://docs.docker.com/engine/reference/commandline/buildx_build/#secret" rel="nofollow noreferrer">a special option <code>--secret</code></a>. To make it work, you will need the following:</p>
<ol>
<li><p>Set environment variable <code>DOCKER_BUILDKIT=1</code></p>
</li>
<li><p>Use the <code>--secret</code> argument to <code>docker build</code> command</p>
<pre class="lang-bash prettyprint-override"><code>DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt
</code></pre>
</li>
<li><p>Add a <code>syntax</code> comment to the very top of your Docker file</p>
<pre><code># syntax = docker/dockerfile:1.0-experimental
</code></pre>
</li>
<li><p>Use the <code>--mount</code> argument to mount the secret for every <code>RUN</code> directive that needs it</p>
<pre><code>RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
</code></pre>
</li>
</ol>
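<p>Putting the pieces together, a minimal Dockerfile sketch could look like this (base image and secret id are just examples):</p>
<pre><code># syntax = docker/dockerfile:1.0-experimental
FROM alpine:3.12
# the secret is only mounted for this RUN step and is not stored in any image layer
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
</code></pre>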
<p>Please note that this needs <a href="https://docs.docker.com/engine/reference/commandline/buildx_build/#secret" rel="nofollow noreferrer">Docker version 18.09 or later</a>.</p>
| Slava Semushin |
<p>I am new in Airflow and kubernetes.</p>
<p>I have deployed airflow following this guide: <a href="https://docs.bitnami.com/tutorials/deploy-apache-airflow-azure-postgresql-redis/" rel="nofollow noreferrer">https://docs.bitnami.com/tutorials/deploy-apache-airflow-azure-postgresql-redis/</a></p>
<p>I understand that the executor is celeryExecutor. I tried to change it to kubernetesexecutor but I really don´t know how to do it.</p>
<p>I have read that the celery executor creates static workers and kubernetes executor create a pod for each task.</p>
<p>What I don´t know is how many workers has my celery executor deployed and how to increase them.</p>
| J.C Guzman | <p>I'm assuming you used the <a href="https://github.com/bitnami/charts/tree/master/bitnami/airflow" rel="nofollow noreferrer">Bitnami Helm chart</a> to deploy Airflow into Kubernetes.</p>
<p>In the values file, the config setting <code>airflow.worker.replicas</code> controls how many worker pods will be deployed. The default value is 2.</p>
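<p>For example, assuming that value path, something along these lines should scale the workers (release and chart names are placeholders):</p>
<pre><code>helm upgrade my-airflow bitnami/airflow --reuse-values --set airflow.worker.replicas=4
</code></pre>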
| Hitobat |
<p>I am trying to push my container up to GCP Kubernetes in my cluster. My pod runs locally but it doesn't want to run on GCP. It comes back with this error <code>Error response from daemon: No command specified: CreateContainerError</code></p>
<p>It worked if I run it locally in docker but once I push it up to the container registry on gcp and apply the deployment yaml using <code>kubectl apply -f</code> in my namespace it never brings it up and just keeps saying
<code>gce-exporting-fsab83222-85sc4 0/1 CreateContainerError 0 5m6s</code></p>
<p>I can't get any logs out of it either:
<code>Error from server (BadRequest): container "gce" in pod "gce-exporting-fsab83222-85sc4" is waiting to start: CreateContainerError</code></p>
<p>Heres my files below:</p>
<p><strong>Dockerfile:</strong></p>
<pre><code>FROM alpine:3.8
WORKDIR /build
COPY test.py /build
RUN chmod 755 /build/test.py
CMD ["python --version"]
CMD ["python", "test.py"]
</code></pre>
<p><em>Python Script:</em></p>
<pre><code>#!/usr/bin/python3
import time
def your_function():
print("Hello, World")
while True:
your_function()
time.sleep(10) #make function to sleep for 10 seconds
</code></pre>
<p><strong>yaml file:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: gce-exporting
namespace: "monitoring"
spec:
selector:
matchLabels:
app: gce
template:
metadata:
labels:
app: gce
spec:
containers:
- name: gce
image: us.gcr.io/lab-manager/lab/gce-exporting:latest
</code></pre>
<p>I have tried using CMD and Entrypoint at the end to make sure the pod is running but no luck.</p>
<p>This is the output of the describe pod</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 60s default-scheduler Successfully assigned monitoring/gce-exporting-fsab83222-85sc4 to gke-dev-lab-standard-8efad9b6-8m66
Normal Pulled 5s (x7 over 59s) kubelet, gke-dev-lab-standard-8efad9b6-8m66 Container image "us.gcr.io/lab-manager/lab/gce-exporting:latest" already present on machine
Warning Failed 5s (x7 over 59s) kubelet, gke-dev-lab-standard-8efad9b6-8m66 Error: Error response from daemon: No command specified
</code></pre>
| soniccool | <p>It was a malformed character in my Dockerfile that caused it to crash.</p>
| soniccool |
<p>We have created service and ingress files for our deployment below.
However, when we try to reach our app through the ingress controller, we have seen that our static files such as JS and CSS cannot be loaded on the website. Aside from this, when we try to reach with NodePort, we have seen that the app was loaded perfectly.</p>
<p>service.yaml:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: karel-service-1
labels:
app: karel-deployment-1
spec:
selector:
app: karel-deployment-1
type: NodePort
ports:
- port: 3000
targetPort: 3000
nodePort:
</code></pre>
<p>ingress.yaml:</p>
<pre><code>---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: karel-ingress-1
annotations:
nginx.ingress.kubernetes.io/add-base-url: 'true'
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: "/"
spec:
rules:
- host: www.kareldeployment.com
http:
paths:
- path: "/1"
backend:
serviceName: karel-service-1
servicePort: 3000
</code></pre>
<p>How can we reach all static content of the website by using the ingress controller?</p>
| thenextgeneration | <p>It looks like the ingress controller listens on port 80 (and that's why the HTML pages are loaded), but the pages have links that point to port 3000 (and that's why everything works with NodePort). In other words, I suspect that you have to modify the pages to not use a hardcoded port.</p>
| Slava Semushin |
<p>I'm trying to implement EFK stack (with Fluent Bit) in my k8s cluster. My log file I would like to parse sometimes is oneline and sometimes multiline:</p>
<pre><code>2022-03-13 13:27:04 [-][-][-][error][craft\db\Connection::open] SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known
2022-03-13 13:27:04 [-][-][-][info][application] $_GET = []
$_POST = []
$_FILES = []
$_COOKIE = [
'__test1' => 'x'
'__test2' => 'x2'
]
$_SERVER = [
'__test3' => 'x3'
'__test2' => 'x3'
]
</code></pre>
<p>When I'm checking captured logs in Kibana I see that all multiline logs are separated into single lines, which is of course not what we want to have. I'm trying to configure a parser in fluent bit config which will interpret multiline log as one entry, unfortunately with no success.</p>
<p>I've tried this:</p>
<pre><code>[PARSER]
Name MULTILINE_MATCH
Format regex
Regex ^\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2} \[-]\[-]\[-]\[(?<level>.*)\]\[(?<where>.*)\] (?<message>[\s\S]*)
Time_Key time
Time_Format %b %d %H:%M:%S
</code></pre>
<p>In k8s all fluent bit configurations are stored in config map. So here's my whole configuration of fluent bit (the multiline parser is at the end):</p>
<pre><code>kind: ConfigMap
metadata:
name: fluent-bit
namespace: efk
labels:
app: fluent-bit
data:
# Configuration files: server, input, filters and output
# ======================================================
fluent-bit.conf: |
[SERVICE]
Flush 1
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
@INCLUDE input-kubernetes.conf
@INCLUDE filter-kubernetes.conf
@INCLUDE output-elasticsearch.conf
input-kubernetes.conf: |
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 5MB
Skip_Long_Lines On
Refresh_Interval 10
filter-kubernetes.conf: |
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
output-elasticsearch.conf: |
[OUTPUT]
Name es
Match *
Host ${FLUENT_ELASTICSEARCH_HOST}
Port ${FLUENT_ELASTICSEARCH_PORT}
Logstash_Format On
Replace_Dots On
Retry_Limit False
parsers.conf: |
[PARSER]
Name apache
Format regex
Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name apache2
Format regex
Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name apache_error
Format regex
Regex ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$
[PARSER]
Name nginx
Format regex
Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name json
Format json
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep On
[PARSER]
Name syslog
Format regex
Regex ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
Time_Key time
Time_Format %b %d %H:%M:%S
[PARSER]
Name MULTILINE_MATCH
Format regex
Regex ^\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2} \[-]\[-]\[-]\[(?<level>.*)\]\[(?<where>.*)\] (?<message>[\s\S]*)
Time_Key time
Time_Format %b %d %H:%M:%S
</code></pre>
| Murakami | <p>Starting from Fluent Bit v1.8, you can use the <code>multiline.parser</code> option as below.
The <code>docker</code> and <code>cri</code> multiline parsers are predefined in Fluent Bit.</p>
<pre><code>[INPUT]
Name tail
Path /var/log/containers/*.log
multiline.parser docker, cri
Tag kube.*
Mem_Buf_Limit 5MB
Skip_Long_Lines On
</code></pre>
<p><a href="https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-and-containers-v1.8" rel="nofollow noreferrer">https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-and-containers-v1.8</a></p>
| Kavinda Gayashan |
<p>Lets say I need to create environment variables or ConfigMap entries like this:</p>
<pre><code>- name: JDBC_URL
value: "jdbc:db2://alice-service-a:50000/db1"
- name: KEYCLOAK_BASE_URL
value: "http://alice-keycloak:8080/auth"
</code></pre>
<p>Where <code>alice-</code> is the namePrefix. How do I do this using Kustomize?</p>
<p>The containers I use actually do need references to other containers that are string concatenations of "variables" like above.</p>
<p>It doesn't look like Kustomize's <code>vars</code> can do this. The documentation entry <a href="https://kubectl.docs.kubernetes.io/faq/kustomize/eschewedfeatures/#unstructured-edits" rel="nofollow noreferrer">Unstructured Edits</a> seems to describe this and is under a heading called "Eschewed Features", so I guess that isn't going to happen. A similar feature request, <a href="https://github.com/kubernetes-sigs/kustomize/issues/775" rel="nofollow noreferrer">#775 Support envsubst style variable expansion</a> was closed.</p>
<p>Coming from Helm, that was easy.</p>
<p>What are my options if I want to move from Helm to Kustomize, but need to create an env or ConfigMap entry like e.g. <code>jdbc:db2://${namePrefix}-service-b:${dbPort}/${dbName}</code> (admittedly a contrived example)?</p>
<p>I'm guessing I'll have to resort to functionality external to Kustomize, like <code>envsubst</code>. Are there any best practices for cobbling this together, or am I writing my own <code>custom-deploy-script.sh</code>?</p>
| Peter V. Mørch | <p>I'm afraid I've come up against one of the limitations of Kustomize.</p>
<p><a href="https://blog.argoproj.io/the-state-of-kubernetes-configuration-management-d8b06c1205" rel="nofollow noreferrer">The State of Kubernetes Configuration Management: An Unsolved Problem | by Jesse Suen | Argo Project</a> has this to say under "Kustomize: The Bad":</p>
<blockquote>
<p><em><strong>No parameters & templates</strong></em>. The same property that makes kustomize applications so readable, can also make it very limiting. For example, I was recently trying to get the kustomize CLI to set an image tag for a custom resource instead of a Deployment, but was unable to. Kustomize does have a concept of “vars,” which look a lot like parameters, but somehow aren’t, and can only be used in Kustomize’s sanctioned whitelist of field paths. I feel like this is one of those times when the solution, despite making the hard things easy, ends up making the easy things hard.</p>
</blockquote>
<p>Instead, I've started using <a href="https://github.com/hairyhenderson/gomplate" rel="nofollow noreferrer">gomplate: A flexible commandline tool for template rendering</a> in addition to Kustomize to solve the challenge above, but having to use two tools that weren't designed to work together is not ideal.</p>
<p>EDIT: We ended up using <a href="https://carvel.dev/ytt/" rel="nofollow noreferrer">ytt</a> for this instead of <code>gomplate</code>.</p>
<p>I can heavily recommend the article: <a href="https://blog.argoproj.io/the-state-of-kubernetes-configuration-management-d8b06c1205" rel="nofollow noreferrer">The State of Kubernetes Configuration Management: An Unsolved Problem</a>. Nice to know I'm not the only one hitting this road block.</p>
| Peter V. Mørch |
<p>I have ingress-nginx configured with ingress resources that are host specific so how can I access <a href="http://appX.my.example.com" rel="nofollow noreferrer">http://appX.my.example.com</a> both from my desktop browser and also from within other pods inside the cluster, that also need to access <a href="http://appX.my.example.com" rel="nofollow noreferrer">http://appX.my.example.com</a>?</p>
<p>I'm running kubernetes locally using <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">kind</a> in a docker container on windows. I'm guessing the exact same question would apply for minicube, k3s or whatever. I'm running this on Windows with Docker for-win and WSL2 (which may not matter).</p>
<p>One solution that I've found is to see what the IP address <code>host.docker.internal</code> resolves to (currently <code>192.168.1.100</code>) and then create entries like this in <code>C:\Windows\System32\drivers\etc\hosts</code>:</p>
<pre><code>192.168.1.100 appx.my.example.com
192.168.1.100 appy.my.example.com
</code></pre>
<p>Now <a href="http://appX.my.example.com" rel="nofollow noreferrer">http://appX.my.example.com</a> resolves correctly both in the desktop browser and for <code>appY</code>. Everything works. Two problems with this:</p>
<ul>
<li>E.g. after reboots, after starting a VPN and for other black-magic reasons, <code>host.docker.internal</code> changes IP addresses "sometimes"</li>
<li>It is not possible to create a <code>*.my.example.com</code> entry in <code>C:\Windows\System32\drivers\etc\hosts</code> (or linux <code>/etc/hosts</code> either).</li>
</ul>
<p>This leads to a need to maintain the <code>hosts</code> file which is error-prone and annoying.</p>
<p>Is there a better way? What is the easiest way to develop with kubernetes on localhost if we want to use named hosts in ingress rules?</p>
| Peter V. Mørch | <p>Here's what I've come up with:</p>
<p>I've created a wildcard record as a real DNS entry in a domain I own. Something like</p>
<pre><code>*.local.mydomain.dk. IN A 127.0.0.1
</code></pre>
<p>Now the trick is to get coredns (the DNS server in the kubernetes cluster) to resolve <code>*.local.mydomain.dk</code> to a <code>CNAME host.docker.internal</code>. To do that, I've modified both the configmap and deployment called <code>coredns</code> in the <code>kube-system</code> name space:</p>
<pre><code>diff -u configmap.yaml.orig configmap.yaml
--- configmap.yaml.orig 2021-08-10 00:24:29.234095600 +0200
+++ configmap.yaml 2021-08-10 00:24:37.664095600 +0200
@@ -7,6 +7,7 @@
lameduck 5s
}
ready
+ file /etc/coredns/mydomain.dk.db local.mydomain.dk
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
@@ -21,6 +22,11 @@
reload
loadbalance
}
+  mydomain.dk.db: |
+ local.mydomain.dk. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
+ local.mydomain.dk. IN NS a.iana-servers.net.
+ local.mydomain.dk. IN NS b.iana-servers.net.
+ *.local.mydomain.dk. IN CNAME host.docker.internal.
kind: ConfigMap
metadata:
creationTimestamp: "2021-08-09T21:58:26Z"
</code></pre>
<pre><code>diff -u deployment.yaml.orig deployment.yaml
--- deployment.yaml.orig 2021-08-10 00:26:17.324095600 +0200
+++ deployment.yaml 2021-08-10 00:25:57.584095600 +0200
@@ -108,6 +108,8 @@
items:
- key: Corefile
path: Corefile
+ - key: mydomain.dk.db
+ path: mydomain.dk.db
name: coredns
name: config-volume
status:
</code></pre>
<p>Now <code>whatever.local.mydomain.dk</code> resolves to <code>127.0.0.1</code> in the browser and to <code>host.docker.internal</code> inside pods. Bingo!</p>
| Peter V. Mørch |
<p>I have a application pod where I am logging log messed to a file a specific location.</p>
<p>I have already shared this location to other pod using emptyDir volumeMount.</p>
<p>I am getting standard stdout & stderr in my ELF stack - dashboard. How do I capture my custom logs?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: elk
namespace: default
labels:
k8s-app: elk-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: elk-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: elk
image: fluent/fluentd-kubernetes-daemonset:elasticsearch
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "cp-os-logging-dashboard"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: logs
mountPath: /home/services/*/logs/
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: logs
hostPath:
path: /home/services/*/logs/
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
</code></pre>
<p>I have experimented with volume host-path's, emptyDir and other varieties prior to asking question here. All I want is access my application logs from daemonset. I was able to do that without daemonset.</p>
| Sampath Maddula | <p>Kubernetes will send all the logs to the node's /var/log etc. You need a hostPath volume for the fluentd daemonset to pick them up and send them to your logger. <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">EmptyDir</a>, as the name suggests, will be empty when the pod is scheduled to a node. </p>
<pre><code>...
...
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
</code></pre>
<p>Check <a href="https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd" rel="nofollow noreferrer">https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd</a> and <a href="https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch.yaml" rel="nofollow noreferrer">https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch.yaml</a> for more info.</p>
| Amrit |