<p>I'm using <a href="https://keda.sh/" rel="nofollow noreferrer">KEDA</a> to scale my pods on EKS cluster. I want to have trigger enabled on both CPU and Memory based scalers in the ScaledObject so here is my code chunk:-</p>
<pre><code>spec:
  scaleTargetRef:
    name: workload-processor
  cooldownPeriod: 120
  minReplicaCount: 1
  maxReplicaCount: 3
  triggers:
    - type: cpu
      metadata:
        type: AverageValue
        value: "70"
    - type: memory
      metadata:
        type: AverageValue
        value: "70"
</code></pre>
<p>I'm trying to use both types of triggers (cpu and memory) in the same YAML file. When I run the <code>kubectl get scaledobject</code> command, it shows the trigger enabled only for CPU, not for both CPU and memory:</p>
<pre><code>NAME                             SCALETARGETKIND      SCALETARGETNAME      MIN   MAX   TRIGGERS   AUTHENTICATION   READY   ACTIVE   FALLBACK   AGE
workload-processor-keda-scaler   apps/v1.Deployment   workload-processor   1     3     cpu                         True    True     Unknown
</code></pre>
<p>Am I making a mistake here? How can I enable the trigger for both? Should I write two separate ScaledObject YAML files (one for CPU and another for memory)?</p>
| <p>Functionally, the ScaledObject has got all the triggers, which you can confirm with <code>kubectl get scaledobject -o yaml</code>. The problem is with the CRD itself, which is not able to display all the triggers in the CLI output. It is an existing bug and has already been reported here: <a href="https://github.com/kedacore/keda/issues/1943" rel="nofollow noreferrer">https://github.com/kedacore/keda/issues/1943</a></p>
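<p>For example, a quick way to confirm that both triggers are actually configured (using the ScaledObject name from the question) is to read them back with a JSONPath query:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get scaledobject workload-processor-keda-scaler -o jsonpath='{.spec.triggers[*].type}'
# expected output: cpu memory
</code></pre>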
|
<p>I am totally new to Go and the Kubernetes library k8s.io (<a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go</a>) and try to figure out how to get a specific secret.</p>
<p>I have a kind of observer which watches changes of <code>Secrets</code>.
I am iterating through a SecretList within a specific namespace. That works; I can also filter them in a while loop. But I do not know how to <strong>get and search for a Secret in a different namespace</strong>, which should be available in this loop.</p>
<p>I need a secret named XXX in the namespace "my-namespace".
(I know that the following line does not exist; it should only outline the idea of what I am looking for.) I come from Ruby, so I searched for something like this:</p>
<pre><code>var myKubeSecret = kubernetes.V1().Secrets("my-namespace").Find("XXX")
</code></pre>
<p>Does a function like the one above exist?</p>
<p>This is what I have: it observes all my secrets in the namespace "default", which works. The example was taken from code that does something similar to what I was searching for, and I am trying to modify it now:</p>
<pre><code>import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    kubeinformers "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
    "sigs.k8s.io/yaml"
)

// a lot of code
// ....
// ...
// ..
// .

if data, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/namespace"); err == nil {
    fmt.Println("data", data)
}

// listen for new secrets
factory := kubeinformers.NewSharedInformerFactoryWithOptions(clientsetCore, 0, kubeinformers.WithNamespace(namespace()))
informer := factory.Core().V1().Secrets().Informer()
secrets := factory.Core().V1().Secrets().Lister()

var myKubeSecret string // will hold my secret

informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: func(new interface{}) {
        // get the secret
        var cpSecret = new.(*v1.Secret).DeepCopy()
        if cpSecret.GetName() == "argocd-credentials" {
            var cpData = *&cpSecret.Data
            for k, v := range cpData {
                clusterIP = kubeConfig.Clusters[0].Cluster.Server
                fmt.Println("cpData k:", k, "v:", v)
                switch k {
                case "authToken":
                    fmt.Println("authToken:", v)
                    // ### HERE SHOULD BE THE VALUE OF A
                    // ### SECRET NAMED XXX in ns my-namespace
                    myKubeSecret = // ### should be a bearerToken string
                }
            }
        }
    },
})
</code></pre>
<p>I hope you get the idea..</p>
<p><em>Please also tell me which <code>import</code> libray is needed, if any.</em></p>
| <p>As mentioned above, a <code>Secret</code> object resides in a namespace and can only be referenced by pods in that same namespace.</p>
<p><a href="https://stackoverflow.com/questions/46297949/sharing-secret-across-namespaces">Sharing secret across namespaces</a></p>
<p>If you want to use the secret in multiple namespaces, copy the same secret into the desired namespaces.</p>
<p>example case</p>
<ul>
<li>kubernetes secret: test-secret-1</li>
<li>namespace from: testns1</li>
<li>namespace to: testns2</li>
</ul>
<ol>
<li>Using pipe "|" operator</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>kubectl get secret test-secret-1 --namespace=testns1 -oyaml | grep -v ^\s*namespace:\s' |kubectl apply --namespace=testns2 -f -
</code></pre>
<ol start="2">
<li>Using sed command</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>kubectl get secret test-secret-1 -n testns1 -o yaml | sed s/"namespace: testns1"/"namespace: testns2"/| kubectl
apply -n testns2 -f -
</code></pre>
<ol start="3">
<li>Export kubernetes secret to yaml and apply secret</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>kubectl get secret test-secret-1 -n testns1 -o yaml
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
password: dGVzdFBAc3N3b3Jk
username: dGVzdC11c2Vy
kind: Secret
metadata:
creationTimestamp: "2021-11-11T21:21:02Z"
name: test-secret-1
namespace: testns1 # change namespace to testns2
resourceVersion: "307939"
uid: 6a8d9a6d-9648-4a39-a362-150e682c9a42
type: Opaque
</code></pre>
<p><a href="https://jhooq.com/kubernetes-share-secrets-namespaces/" rel="nofollow noreferrer">https://jhooq.com/kubernetes-share-secrets-namespaces/</a></p>
|
<p>I have one running container which uses node:alpine as the base image. I want to know the version of this image.</p>
| <p>You can check the <strong>Dockerfile</strong> of the container if you have it handy.</p>
<p>The first line will be <strong>FROM</strong> <code>node:<version>-alpine</code></p>
<p>For example :</p>
<pre><code>FROM node:12.18.1-alpine
ENV NODE_ENV=production
WORKDIR /app
</code></pre>
<p>You can also use the command <code>docker image inspect</code></p>
<p><a href="https://docs.docker.com/engine/reference/commandline/image_inspect/" rel="nofollow noreferrer">https://docs.docker.com/engine/reference/commandline/image_inspect/</a></p>
<p>You can also <strong>exec</strong> into the container to check the version of <strong>Node</strong></p>
<p><strong>Command to check node version:</strong></p>
<pre><code>docker exec -ti <Container name> sh -c "node --version"
</code></pre>
|
<p>I'm starting out in K8s and I'm not quite wrapping my head around deploying a StatefulSet with multiple replicas bound to a local disk, comparing the <code>PV</code>+<code>PVC</code>+<code>SC</code> and <code>volumeClaimTemplates</code> + <code>HostPath</code>
scenarios.
My goal is to deploy a MongoDB StatefulSet with 3 replicas set in ReplicaSet mode (mongo's replica set) and bind each one to a local SSD.
I did a few tests and have a few concepts to get straight.</p>
<p>Scenario a) using <code>PV</code>+<code>PVC</code>+<code>SC</code>:
If in my <code>StatefulSet</code>'s container (set with replicas: 1) I declare a <code>volumeMounts</code> and a <code>Volume</code>, I can point it to a PVC which uses an SC used by a PV which points to a physical local SSD folder.
The concept is straightforward; it all maps beautifully.
If I increase the replicas to more than one, then from the second pod onward they won't find a Volume to bind to, and I get the <code>1 node(s) didn't find available persistent volumes to bind</code> error.</p>
<p>This makes me realise that the storage capacity reserved by the PVC on that PV is not replicated along with the pods in the StatefulSet and mapped to each created pod.</p>
<p>Scenario b) <code>volumeClaimTemplates</code> + <code>HostPath</code>:</p>
<p>I commented out the Volume, and instead used the <code>volumeClaimTemplates</code> which indeed works as I was expecting in scenario a, for each created pod an associated claim gets created and some storage capacity gets reserved for that Pod. Here also pretty straight concept, but it only works as long as I use <code>storageClassName: hostpath</code> in <code>volumeClaimTemplates</code>. I tried using my SC and the result is the same <code>1 node(s) didn't find available persistent volumes to bind</code> error.</p>
<p>Also, when created with <code>volumeClaimTemplates</code>, the PV names are useless and confusing as they start with <code>pvc-</code>..</p>
<pre><code>vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mg-pv 3Gi RWO Delete Available mg-sc 64s
pvc-32589cce-f472-40c9-b6e4-dc5e26c2177a 50Mi RWO Delete Bound default/mg-pv-cont-mongo-3 hostpath 36m
pvc-3e2f4e50-30f8-4ce8-8a62-0b923fd6aa79 50Mi RWO Delete Bound default/mg-pv-cont-mongo-1 hostpath 37m
pvc-8f4ff966-c30a-469f-a68d-ed579ef2a96f 50Mi RWO Delete Bound default/mg-pv-cont-mongo-4 hostpath 36m
pvc-9f8c933b-85d6-4024-8bd0-6668feee8757 50Mi RWO Delete Bound default/mg-pv-cont-mongo-2 hostpath 37m
pvc-d6c212f3-2391-4137-97c3-07836c90b8f3 50Mi RWO Delete Bound default/mg-pv-cont-mongo-0 hostpath 37m
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mg-pv-cont-mongo-0 Bound pvc-d6c212f3-2391-4137-97c3-07836c90b8f3 50Mi RWO hostpath 37m
mg-pv-cont-mongo-1 Bound pvc-3e2f4e50-30f8-4ce8-8a62-0b923fd6aa79 50Mi RWO hostpath 37m
mg-pv-cont-mongo-2 Bound pvc-9f8c933b-85d6-4024-8bd0-6668feee8757 50Mi RWO hostpath 37m
mg-pv-cont-mongo-3 Bound pvc-32589cce-f472-40c9-b6e4-dc5e26c2177a 50Mi RWO hostpath 37m
mg-pv-cont-mongo-4 Bound pvc-8f4ff966-c30a-469f-a68d-ed579ef2a96f 50Mi RWO hostpath 37m
mg-pvc Pending mg-sc 74s
</code></pre>
<p>Is there any way to set the <code>volumeClaimTemplates</code>' PV names to something more useful, as when declaring a PV?</p>
<p>How to point <code>volumeClaimTemplates</code>'s PVs to an ssd as I'm doing for my scenario a?</p>
<p>Many thanks</p>
<h1>PV</h1>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mg-pv
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: mg-sc
  local:
    path: /Volumes/ProjectsSSD/k8s_local_volumes/mongo/mnt/data/unt
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
</code></pre>
<h1>SC</h1>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mg-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
</code></pre>
<h1>PVC</h1>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mg-pvc
spec:
  storageClassName: mg-sc
  # volumeName: mg-pv
  resources:
    requests:
      # storage: 1Gi
      storage: 50Mi
  accessModes:
    - ReadWriteOnce
</code></pre>
<h1>StatefulSet</h1>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      role: mongo
      environment: test
  serviceName: 'mongo'
  replicas: 5
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - '--bind_ip'
            - all
            - '--replSet'
            - rs0
            # - "--smallfiles"
            # - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mg-pv-cont
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: 'role=mongo,environment=test'
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: 'mongo'
      ### using volumes you have to have one persistent volume for each created pod..useful only for a static set of pods
      # volumes:
      #   - name: mg-pv-cont
      #     persistentVolumeClaim:
      #       claimName: mg-pvc
  ## volume claim templates create a claim for each created pod, so when scaling the number of pods up or down they'll claim their own space in the persistent volume.
  volumeClaimTemplates:
    - metadata:
        name: mg-pv-cont # this binds
        # name: mg-pv-pvc-template # same name as volumeMounts or it won't bind.
        ### Waiting for deployments to stabilize...
        ###  - statefulset/mongo: Waiting for statefulset spec update to be observed...
      spec:
        # storageClassName: mg-sc
        storageClassName: hostpath
        accessModes: ['ReadWriteOnce']
        resources:
          requests:
            storage: 50Mi
</code></pre>
| <p>Ok, after fiddling with it a bit more and testing a couple more configurations, I found out that the <code>PVC</code> to <code>PV</code> binding happens in a 1:1 manner, so once a <code>PV</code> has bound to a claim (either a <code>PVC</code> or <code>volumeClaimTemplates</code>), no other claim can bind to it. So the solution is just to create as many <code>PV</code>s as the pods you expect to create, plus some extra for scaling the replicas of your <code>StatefulSet</code> up and down. Now in <code>volumeClaimTemplates: spec: storageClassName:</code> you can use the <code>SC</code> you defined so that those <code>PV</code>s get used. There is no use for a standalone <code>PVC</code> when using <code>volumeClaimTemplates</code>..it'd just create a claim that nobody uses..</p>
<p>Hope this will help others starting out in the Kubernetes world.
Cheers.</p>
|
<p>After shifting from Docker to containerd as the container runtime used by our Kubernetes cluster, we are not able to show multiline logs in a proper way in our visualization app (Grafana), because containerd prepends some details to the container/pod logs (i.e. timestamp, stream & log severity; to be specific it prepends something like the following, as shown in the sample below: <strong>2022-07-25T06:43:17.20958947Z stdout F</strong>), which causes confusion for the developers and the application owners.</p>
<p>I am showing here a dummy sample of the logs generated by the application and how they got printed on the Kubernetes nodes after containerd prepended the mentioned details.</p>
<p>The following logs are generated by the application (kubectl logs ):</p>
<pre><code>2022-07-25T06:43:17,309ESC[0;39m dummy-[txtThreadPool-2] ESC[39mDEBUGESC[0;39m
ESC[36mcom.pkg.sample.ComponentESC[0;39m - Process message meta {
timestamp: 1658731397308720468
version {
major: 1
minor: 0
patch: 0
}
}
</code></pre>
<p>when I check the logs in the filesystem (/var/log/container/ABCXYZ.log) :</p>
<pre><code>2022-07-25T06:43:17.20958947Z stdout F 2022-07-25T06:43:17,309ESC[0;39m dummy-[txtThreadPool-2]
ESC[39mDEBUGESC[0;39m
ESC[36mcom.pkg.sample.ComponentESC[0;39m - Process message meta {
2022-07-25T06:43:17.20958947Z stdout F timestamp: 1658731449723010774
2022-07-25T06:43:17.209593379Z stdout F version {
2022-07-25T06:43:17.209595933Z stdout F major: 14
2022-07-25T06:43:17.209598466Z stdout F minor: 0
2022-07-25T06:43:17.209600712Z stdout F patch: 0
2022-07-25T06:43:17.209602926Z stdout F }
2022-07-25T06:43:17.209605099Z stdout F }
</code></pre>
<p>I am able to parse the multiline logs with fluentbit, but the problem is that I am not able to remove the details injected by containerd ( >> 2022-07-25T06:43:17.209605099Z stdout F .......). So is there any way to configure containerd to not prepend these details to the logs and print them as they are generated by the application/container?</p>
<p>On the other hand, is there any plugin to remove such details on the fluentbit side? As per the existing <a href="https://docs.fluentbit.io/manual/concepts/data-pipeline" rel="nofollow noreferrer">plugins</a>, none of them can manipulate or change the logs (which is logical, as the log agent should not make any change to the logs).</p>
<p>Thanks in advance.</p>
| <p>This is the workaround I followed to show the multiline log lines in Grafana by applying extra fluentbit filters and multiline parser.</p>
<p>1- First I receive the stream via the tail input, which parses it with a multiline parser (multilineKubeParser).</p>
<p>2- Then another filter intercepts the stream to do further processing with a regex parser (kubeParser).</p>
<p>3- After that, another filter removes the details added by containerd, using a Lua script (filters.lua, shown below).</p>
<pre><code>fluent-bit.conf: |-
  [SERVICE]
      HTTP_Server    On
      HTTP_Listen    0.0.0.0
      HTTP_PORT      2020
      Flush          1
      Daemon         Off
      Log_Level      warn
      Parsers_File   parsers.conf

  [INPUT]
      Name              tail
      Tag               kube.*
      Path              /var/log/containers/*.log
      multiline.Parser  multilineKubeParser
      Exclude_Path      /var/log/containers/*_ABC-logging_*.log
      DB                /run/fluent-bit/flb_kube.db
      Mem_Buf_Limit     5MB

  [FILTER]
      Name                 kubernetes
      Match                kube.*
      Kube_URL             https://kubernetes.default.svc:443
      Merge_Log            On
      Merge_Parser         kubeParser
      K8S-Logging.Parser   Off
      K8S-Logging.Exclude  On

  [FILTER]
      Name    lua
      Match   kube.*
      call    remove_dummy
      Script  filters.lua

  [Output]
      Name                  grafana-loki
      Match                 kube.*
      Url                   http://loki:3100/api/prom/push
      TenantID              ""
      BatchWait             1
      BatchSize             1048576
      Labels                {job="fluent-bit"}
      RemoveKeys            kubernetes
      AutoKubernetesLabels  false
      LabelMapPath          /fluent-bit/etc/labelmap.json
      LineFormat            json
      LogLevel              warn

labelmap.json: |-
  {
    "kubernetes": {
      "container_name": "container",
      "host": "node",
      "labels": {
        "app": "app",
        "release": "release"
      },
      "namespace_name": "namespace",
      "pod_name": "instance"
    },
    "stream": "stream"
  }

parsers.conf: |-
  [PARSER]
      Name         kubeParser
      Format       regex
      Regex        /^([^ ]*).* (?<timeStamp>[^a].*) ([^ ].*)\[(?<requestId>[^\]]*)\] (?<severity>[^ ]*) (?<message>[^ ].*)$/
      Time_Key     time
      Time_Format  %Y-%m-%dT%H:%M:%S.%L%z
      Time_Keep    On
      Time_Offset  +0200

  [MULTILINE_PARSER]
      name           multilineKubeParser
      type           regex
      flush_timeout  1000
      rule   "start_state"  "/[^ ]* stdout .\s+\W*\w+\d\d\d\d-\d\d-\d\d \d\d\:\d\d\:\d\d,\d\d\d.*$/"    "cont"
      rule   "cont"         "/[^ ]* stdout .\s+(?!\W+\w+\d\d\d\d-\d\d-\d\d \d\d\:\d\d\:\d\d,\d\d\d).*$/"  "cont"

filters.lua: |-
  function remove_dummy(tag, timestamp, record)
    new_log = string.gsub(record["log"], "%d+-%d+-%d+T%d+:%d+:%d+.%d+Z%sstdout%sF%s", "")
    new_record = record
    new_record["log"] = new_log
    return 2, timestamp, new_record
  end
</code></pre>
<p>As I mentioned this is a workaround till I can find any other/better solution.</p>
|
<p>I'm just trying to set the log level of Guacamole to DEBUG.. I've successfully set the logging level for <code>guacd</code> and tried multiple environment variables for the guacamole container but it's still not working.. I've found a <code>logging.properties</code> file in the Guacamole container at path: <code>/home/guacamole/tomcat/conf/logging.properties</code>. Is it possible to configure the logging file path so I can add a custom <code>logging.properties</code> file?</p>
<p>The current state of the <code>deployment.yml</code> file looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code># apiVersion: v1
# kind: ConfigMap
# metadata:
# name: logback
# namespace: $NAMESPACE
# data:
# logback.xml: |
# <configuration>
#
# <!-- Appender for debugging -->
# <appender name="GUAC-DEBUG" class="ch.qos.logback.core.ConsoleAppender">
# <encoder>
# <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
# </encoder>
# </appender>
#
# <appender name="GUAC-DEBUG-2" class="org.apache.juli.ClassLoaderLogManager">
# <encoder>
# <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
# </encoder>
# </appender>
#
# <!-- Log at DEBUG level -->
# <root level="debug">
# <appender-ref ref="GUAC-DEBUG"/>
# </root>
#
# <root level="debug">
# <appender-ref ref="GUAC-DEBUG-2"/>
# </root>
#
# </configuration>
apiVersion: v1
kind: ConfigMap
metadata:
name: logging
namespace: $NAMESPACE
data:
logging.properties: |
handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, 3manager.org.apache.juli.AsyncFileHandler, 4host-manager.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler
.handlers = 1catalina.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
1catalina.org.apache.juli.AsyncFileHandler.level = FINE
1catalina.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
1catalina.org.apache.juli.AsyncFileHandler.prefix = catalina.
1catalina.org.apache.juli.AsyncFileHandler.encoding = UTF-8
2localhost.org.apache.juli.AsyncFileHandler.level = FINE
2localhost.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
2localhost.org.apache.juli.AsyncFileHandler.prefix = localhost.
2localhost.org.apache.juli.AsyncFileHandler.encoding = UTF-8
3manager.org.apache.juli.AsyncFileHandler.level = FINE
3manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
3manager.org.apache.juli.AsyncFileHandler.prefix = manager.
3manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
4host-manager.org.apache.juli.AsyncFileHandler.level = FINE
4host-manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.
4host-manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = org.apache.juli.OneLineFormatter
java.util.logging.ConsoleHandler.encoding = UTF-8
############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = FINE
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.AsyncFileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = FINE
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.AsyncFileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = FINE
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.AsyncFileHandler
# For example, set the org.apache.catalina.util.LifecycleBase logger to log
# each component that extends LifecycleBase changing state:
#org.apache.catalina.util.LifecycleBase.level = FINE
# To see debug messages in TldLocationsCache, uncomment the following line:
#org.apache.jasper.compiler.TldLocationsCache.level = FINE
# To see debug messages for HTTP/2 handling, uncomment the following line:
#org.apache.coyote.http2.level = FINE
# To see debug messages for WebSocket handling, uncomment the following line:
#org.apache.tomcat.websocket.level = FINE
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: guacamole
namespace: $NAMESPACE
labels:
app: guacamole
spec:
replicas: 1
selector:
matchLabels:
app: guacamole
template:
metadata:
labels:
app: guacamole
spec:
containers:
- name: guacd
image: docker.io/guacamole/guacd:$GUACAMOLE_GUACD_VERSION
env:
- name: GUACD_LOG_LEVEL
value: "debug"
ports:
- containerPort: 4822
securityContext:
runAsUser: 1000
runAsGroup: 1000
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
- name: guacamole
image: docker.io/guacamole/guacamole:$GUACAMOLE_GUACAMOLE_VERSION
env:
- name: GUACD_HOSTNAME
value: "localhost"
- name: GUACD_PORT
value: "4822"
- name: POSTGRES_HOSTNAME
value: "database-url.nl"
- name: POSTGRES_PORT
value: "5432"
- name: POSTGRES_DATABASE
value: "guacamole"
- name: POSTGRES_USER
value: "guacamole_admin"
- name: POSTGRES_PASSWORD
value: "guacamoleadmin"
- name: HOME
value: "/home/guacamole"
- name: GUACAMOLE_LOG_LEVEL
value: "debug"
- name: JAVA_TOOL_OPTIONS
value: "-Djava.util.logging.config.file=/home/guacamole/logging.properties"
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
volumeMounts:
# - name: logback
# mountPath: /home/guacamole/logback.xml
# subPath: logback.xml
- name: logging
mountPath: /home/guacamole/logging.properties
subPath: logging.properties
securityContext:
runAsUser: 1001
runAsGroup: 1001
allowPrivilegeEscalation: false
runAsNonRoot: true
volumes:
# - name: logback
# configMap:
# name: logback
- name: logging
configMap:
name: logging
</code></pre>
<p>UPDATE 1:
According to <a href="https://guacamole.apache.org/doc/0.9.6/gug/configuring-guacamole.html" rel="nofollow noreferrer">this page</a> it's possible to add a custom logging configuration by adding the <code>logback.xml</code> file to the GUACAMOLE HOME folder, but this didn't fix it. I've added this <code>logback.xml</code> file via a ConfigMap.</p>
<p>UPDATE 2:
I can see in the logging of the Guacamole container it adds the following Java Option: <code>-Djava.util.logging.config.file=/home/guacamole/tomcat/conf/logging.properties</code>. So I added the following environment variable to edit this config path and added my own <code>logging.properties</code> file via a ConfigMap:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: JAVA_TOOL_OPTIONS
value: "-Djava.util.logging.config.file=/home/guacamole/logging.properties"
</code></pre>
<p>I've updated the <code>deployment.yml</code> file so you can see the current state of the file.. According to the logging it updates the argument correctly, but one line later it overwrites the same argument with the old path..</p>
<p>Anybody ideas? :')</p>
| <p>I got in contact <a href="https://issues.apache.org/jira/browse/GUACAMOLE-1646" rel="nofollow noreferrer">with Guacamole support</a>.</p>
<p>For the people who are experiencing the same issue: since the Guacamole team hasn't documented this yet, you can use the environment variable <code>LOGBACK_LEVEL</code>.</p>
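<p>For example, assuming the deployment and namespace from the question, the variable can also be set without editing the manifest, roughly like this:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl set env deployment/guacamole -n $NAMESPACE -c guacamole LOGBACK_LEVEL=debug
</code></pre>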
|
<p>When I try to create a pod in kubernetes with my image in my Harbor registry,I got an ErrImagePull Error, which looks like that:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned test/test-pod to ubuntu-s-2vcpu-2gb-ams3-01-slave01
Normal Pulling 9s kubelet Pulling image "my.harbor.com/test/nginx:1.18.0"
Warning Failed 9s kubelet Failed to pull image "my.harbor.com/test/nginx:1.18.0": rpc error: code = Unknown desc = failed to pull and unpack image "my.harbor.com/test/nginx:1.18.0": failed to resolve reference "my.harbor.com/test/nginx:1.18.0": failed to do request: Head https://my.harbor.com/v2/test/nginx/manifests/1.18.0: x509: certificate signed by unknown authority
Warning Failed 9s kubelet Error: ErrImagePull
Normal BackOff 8s kubelet Back-off pulling image "my.harbor.com/test/nginx:1.18.0"
Warning Failed 8s kubelet Error: ImagePullBackOff
</code></pre>
<p>I think the crucial problem is <code>x509: certificate signed by unknown authority</code>, but I really don't know what's wrong, since I <strong>copied my CA to both the Kubernetes master node and slave node</strong>, and <strong>they can both log in to harbor</strong> and run <code>docker pull my.harbor.com/test/nginx:1.18.0</code> to pull the image successfully.</p>
<p>I had been bothered days for this, any reply would be grateful.</p>
| <blockquote>
<p>I copied the ca.crt to /etc/docker/certs.d/my.harbor.com/</p>
</blockquote>
<p>This will make it work for the docker engine, which you've shown.</p>
<blockquote>
<p>along with my.harbor.cert and my.harbor.com.key</p>
</blockquote>
<p>I'd consider that a security violation and no longer trust the secret key for your harbor host. The private key should never need to be copied off of the host.</p>
<blockquote>
<p>and I also copied the ca.crt to /usr/local/share/ca-certificates/ and run command update-ca-certificates to update.</p>
</blockquote>
<p>That's the step that should have resolved this.</p>
<p>You can verify that you loaded the certificate with:</p>
<pre class="lang-bash prettyprint-override"><code>openssl s_client -connect my.harbor.com:443 -showcerts </dev/null
</code></pre>
<p>If the output for that doesn't include a message like <code>Verification: OK</code>, then you didn't configure the host certificates correctly and need to double check the steps for your Linux distribution. It's important to check this on each of your nodes. If you only update the manager and pull your images from a worker, that worker will still encounter TLS errors.</p>
<p>If <code>openssl</code> shows a successful verification, then check your Kubernetes node. Depending on the CRI, it could be caching old certificate data and need to be restarted to detect the change on the host.</p>
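<p>For example, on a Debian/Ubuntu node running containerd, refreshing the CA store (as the question already does) and then restarting the runtime would look roughly like this sketch:</p>
<pre class="lang-bash prettyprint-override"><code>sudo cp ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
sudo systemctl restart containerd
</code></pre>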
<blockquote>
<p>As for CRI, I don't know what is it</p>
</blockquote>
<p>Container Runtime Interface, part of your Kubernetes install. By default, this is <code>containerd</code> on many Kubernetes distributions. <code>containerd</code> and other CRI's (except for <code>docker-shim</code>) will not look at the docker configuration.</p>
|
<h1>Actual Issue:</h1>
<p>Unable to start the Kubernetes API, due to which I am unable to initiate kube commands like:
kubectl version
kubectl get nodes</p>
<pre><code>/home/ubuntu# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port ?
</code></pre>
<h1>Background of the issue:</h1>
<p>Docker is installed.
Using below, kube components are installed:</p>
<pre><code>apt-get update && apt-get install -y kubeadm kubelet kubectl
</code></pre>
<p>But, when executing <code>kubeadm init --apiserver-advertise-address=$myip --ignore-preflight-errors=all:</code></p>
<pre><code>I0408 09:09:07.316109 1 client.go:352] scheme "" not registered, fallback to default scheme
I0408 09:09:07.319904 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
I0408 09:09:07.323010 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0408 09:09:07.332669 1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0408 09:09:08.325625 1 client.go:352] parsed scheme: ""
I0408 09:09:08.325650 1 client.go:352] scheme "" not registered, fallback to default scheme
I0408 09:09:08.325707 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
I0408 09:09:08.325768 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0408 09:09:08.326158 1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
</code></pre>
<p>Getting the above in the kube API container logs. This is a fresh install. Also tried:</p>
<pre><code>sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
</code></pre>
<h1>Expected Results:</h1>
<p>kubectl version: should give only the version details without any connection issue message.
kubectl get nodes: should give the details of the master node and its status.</p>
| <p>I spent a couple of hours on that.<br />
In my case, I was using Ubuntu 22.04 and Kubernetes 1.24.</p>
<p>The API was restarting all the time, and I didn't find anything in the kubelet logs:</p>
<pre><code>service kubelet status
journalctl -xeu kubelet
</code></pre>
<p>I checked the API logs through:</p>
<pre><code>/var/log/containers/kube-apiserver-XXXX
</code></pre>
<p>I saw the same error:</p>
<blockquote>
<p>grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379</p>
</blockquote>
<p><strong>Solution:</strong></p>
<pre><code>containerd config default | tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
service containerd restart
service kubelet restart
</code></pre>
<p><strong>Explanation:</strong><br />
The problem is <code>cgroupv2</code>, introduced with Ubuntu 21.04 and above (Debian since version 11).<br />
You need to tell <code>containerd</code> to use the <code>SystemdCgroup</code> driver via the config file (<code>/etc/containerd/config.toml</code>).<br />
I'm using the default config file that containerd provides (<code>containerd config default | sudo tee /etc/containerd/config.toml</code>).<br />
Don't enable systemd_cgroup inside the "io.containerd.grpc.v1.cri" section, because the plugin doesn't seem to support this anymore (the status of the service will print the following log:<br />
"failed to load plugin io.containerd.grpc.v1.cri" error="invalid pluginconfig: systemd_cgroup only works for runtime io.containerd.runtime.v1.linux")<br />
You need to enable the SystemdCgroup(=true) flag inside the section:</p>
<pre><code>[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]"
</code></pre>
<p>Restart the <code>containerd-service</code> and then the <code>kubelet-service</code> or reboot your machine and then it should work as expected.</p>
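<p>A quick way to confirm that the change took effect is to grep the generated config, for example:</p>
<pre class="lang-bash prettyprint-override"><code>grep -n 'SystemdCgroup' /etc/containerd/config.toml
# should print: SystemdCgroup = true
</code></pre>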
<p>Credit: Dennis from slack</p>
|
<p>I've searched but nothing has helped me through.</p>
<p>My setup:<br />
k8s - v1.20.2.<br />
calico - 3.16.6.<br />
pod-cidr = 10.214.0.0/16.<br />
service-cidr = 10.215.0.1/16.</p>
<p>Installed by kubespray with this one <a href="https://kubernetes.io/ko/docs/setup/production-environment/tools/kubespray" rel="noreferrer">https://kubernetes.io/ko/docs/setup/production-environment/tools/kubespray</a></p>
<p><a href="https://i.stack.imgur.com/5w7WH.png" rel="noreferrer">pod restarts again and again</a>.<br />
<a href="https://i.stack.imgur.com/ABvUf.png" rel="noreferrer">ingress-nginx-controller pod describe</a></p>
<p>[dns-autoscaler pod logs]</p>
<pre><code>github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to list *v1.Node: Get https://10.215.0.1:443/api/v1/nodes: dial tcp 10.215.0.1:443: i/o timeout
</code></pre>
<p>[dns-autoscaler pod describe]</p>
<pre><code>kubelet Readiness probe failed: Get "http://10.214.116.129:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>[coredns pod logs]</p>
<pre><code>pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.215.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.215.0.1:443: i/o timeout
</code></pre>
<p>[coredns pod describe]</p>
<pre><code>Get "http://10.214.122.1:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>And I tried to install ingress-nginx-controller it got me logs and describe.<br />
[ingress-controller logs]</p>
<pre><code>W0106 04:17:16.715661 6 flags.go:243] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0106 04:17:16.715911 6 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0106 04:17:16.716200 6 main.go:182] Creating API client for https://10.215.0.1:
</code></pre>
<p>[ingress-controller describe]</p>
<pre><code>Liveness probe failed: Get "https://10.214.233.2:8443/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>All those pods are struggling with Readiness/Liveness probe failed: Get "http://10.214.116.155:10254/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers).</p>
<p>Calico is Running. and i checked pod to pod communication(OK).<br />
<a href="https://i.stack.imgur.com/ir5w3.png" rel="noreferrer">calico is Running</a></p>
<p>[kubectl get componentstatuses]</p>
<pre><code>controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
</code></pre>
<p><a href="https://i.stack.imgur.com/dtoJ3.png" rel="noreferrer">kubectl get componentstatuses</a>
I followed <a href="https://stackoverflow.com/questions/64296491/how-to-resolve-scheduler-and-controller-manager-unhealthy-state-in-kubernetes">How to resolve scheduler and controller-manager unhealthy state in Kubernetes</a>
and now scheduler and controller-manager are healthy.</p>
<p>[kubectl get nodes]</p>
<pre><code>Nodes are ready.
</code></pre>
<p>What did I do wrong? T.T<br />
Thanks in advance.</p>
| <p>Experienced this issue when deploying an app to Kubernetes.</p>
<blockquote>
<p>Warning Unhealthy 10m (x3206 over 3d16h) kubelet Liveness probe failed: Get "http://10.2.0.97:80/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)</p>
</blockquote>
<p>I did an exec into the pod:</p>
<pre><code>kubectl exec <pod-name> -it --namespace default /bin/bash
</code></pre>
<p>And then I ran a curl request to the IP and port of the pod:</p>
<pre><code>curl 10.2.0.97:80
</code></pre>
<p>And it returned a successful response. But the liveness probe was still failing to execute successfully.</p>
<p><strong>Here's how I solved it:</strong></p>
<p>All I had to do was to increase the <code>timeoutSeconds</code> to <code>10</code>:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 300
  periodSeconds: 20
  timeoutSeconds: 10
</code></pre>
<p>After which the liveness probe started executing successfully</p>
<p>Same can be done for the readiness probe:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 30
  periodSeconds: 20
  timeoutSeconds: 10
</code></pre>
<p><strong>Reference</strong>: <a href="https://github.com/kubernetes/kubernetes/issues/89898#issuecomment-1028654403" rel="noreferrer">Sometime Liveness/Readiness Probes fail because of net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting head</a></p>
|
<p>Is it possible to create resource:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: projectcalico.org/v3
kind: HostEndpoint
</code></pre>
<p>using calico operator?
I want to get rid of <code>calicoctl</code>.</p>
| <p>It is possible only with <code>calicoctl</code> to create a host endpoint resource.</p>
<p>As mentioned in the <a href="https://projectcalico.docs.tigera.io/reference/host-endpoints/objects" rel="nofollow noreferrer">document</a>:</p>
<blockquote>
<p>For each host endpoint that you want Calico to secure, you’ll need to
create a host endpoint object in etcd. Use the <code>calicoctl create</code>
command to create a host endpoint resource (HostEndpoint).</p>
</blockquote>
<p>There are two ways to specify the interface that a host endpoint should refer to. You can either specify the name of the interface or its expected IP address. In either case, you’ll also need to know the name given to the Calico node running on the host that owns the interface; in most cases this will be the same as the hostname of the host.</p>
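<p>For illustration, a minimal host endpoint created via <code>calicoctl</code> might look roughly like the sketch below; the endpoint name, node name, interface name, label and IP are placeholders you would replace with your own values:</p>
<pre class="lang-bash prettyprint-override"><code>calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: my-host-eth0
  labels:
    role: k8s-worker
spec:
  node: my-host
  interfaceName: eth0
  expectedIPs: ["10.0.0.10"]
EOF
</code></pre>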
|
<p><strong>NOTE</strong>: this is I think a bash question, not kubernetes or kubectl</p>
<p>I can run the following:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods -n myspace -w
</code></pre>
<p>which will "watch" and provide a stream of lines with e.g. pod name, ready count, status, restarts, and age</p>
<p>similarly I can run</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods -n myspace -w -o json
</code></pre>
<p>which will provide more fine-grained change info. I note that the end of a json block appears to be an unindented <code>}</code> and an unindented <code>{</code> next begins the following block.</p>
<p>my question is, how could I pipe the above to write individual files based on either a new line, or an unindented <code>{</code>?</p>
<p>If there <em>is</em> a native kubernetes way to do that that <em>does not</em> involve extensions or plugins this would be a viable solution though I would prefer a pure bash command-line solution.</p>
| <p>You could convert each object into a single line json object with <code>jq</code>:</p>
<pre><code>i=0;
while read line; do
  i=$(($i+1))
  echo "$line" > "$i.json"
done < <(kubectl get pods -w -o json | jq -rc '.')
</code></pre>
|
<p>In <strong>command line</strong> we can use <code>kubectl apply --server-dry-run -f example-pod.yml</code> to make a trial run of <code>yml</code> file without persisting it.
Can we do smth similar from <strong>Java</strong>, using <code>io.fabric8.kubernetes</code> (Java client for kubernetes) ?</p>
| <p>DryRun has been implemented with version <code>5.2.0</code> according to this issue <a href="https://github.com/fabric8io/kubernetes-client/issues/2588" rel="nofollow noreferrer">https://github.com/fabric8io/kubernetes-client/issues/2588</a>.</p>
<p>It can be used like this on many resources:</p>
<pre class="lang-java prettyprint-override"><code>petClient.inNamespace(currentNamespace).withName("dry-run-delete").dryRun().delete();
</code></pre>
|
<p>I am creating a hyperledger fabric network using the following hyperledger fabric operator for kubernetes <a href="https://github.com/hyperledger-labs/hlf-operator" rel="nofollow noreferrer">https://github.com/hyperledger-labs/hlf-operator</a> I have my cluster configured in aws eks and it is currently running 3 nodes. I am following the documentation and so far all the steps of the implementation are working without problem, but when installing my chaincode it shows me the following message:</p>
<pre><code>'InstallChaincode': could not build chaincode: docker build failed: docker build is disabled
</code></pre>
<p>I validated and changed Docker permissions, but I don't understand what I am missing so that it can work and install my chaincode.</p>
<p>I think it may be a permissions error in EKS; I am also validating the permissions.</p>
| <p>I encountered the same problem and I finally solved it. The problem is when you create your peer node right now (as of July 28, 2022), the version defaults to <code>2.3.0-v0.0.2</code> (you can find this <code> kubectl hlf peer create --help</code> and see the description next to the <code>--version</code> flag). This peer version happens to be incompatible when deploying <code>ccaas</code> - chaincode as a service. So, the solution is to manually override the version using the <code>--version</code> flag while creating the peer node. Peer version <code>2.4.1-v0.0.4</code> solved this for me.</p>
<p>Please see the below command while creating a <code>peer</code> node for <code>org1</code>.</p>
<pre><code>kubectl hlf peer create --statedb=couchdb --storage-class=standard --enroll-id=org1-peer --mspid=Org1MSP --enroll-pw=peerpw --capacity=5Gi --name=org1-peer0 --ca-name=org1-ca.fabric --version=2.4.1-v0.0.4 --namespace=fabric
</code></pre>
<p>Note the above steps apply only when you are using the peer image from <code>quay.io/kfsoftware/fabric-peer</code> which is the default image. If you want to use other images use the <code>--image</code> tag. Repeat the same steps while creating every peer node. This should solve your problem. Hope this helps!</p>
|
<p>I am using Kubernetes version 1.24, I have created a secret for my service account manually, but when I run <code>kubectl get serviceaccounts</code>, it is showing that I do not have any secrets for that service account?</p>
| <p>If you are on K8s version <strong>1.24</strong></p>
<p>The <strong>serviceaccount</strong> won't create the <strong>secret</strong> automatically.</p>
<p>You have to create it manually.</p>
<pre><code>kubectl create sa <serviceaccount-name>
</code></pre>
<p><strong>Example</strong> :</p>
<pre><code>apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: token-secret
  annotations:
    kubernetes.io/service-account.name: "<SA name>"
</code></pre>
<p>If you just want to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount" rel="nofollow noreferrer">create the</a> <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount" rel="nofollow noreferrer">token</a>, you can use: <code>kubectl create token <Name></code></p>
<p>Read more about it : <a href="https://medium.com/@harsh.manvar111/k8s-v1-24-is-unable-to-create-a-serviceaccount-secret-798f8454e6e7" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/k8s-v1-24-is-unable-to-create-a-serviceaccount-secret-798f8454e6e7</a></p>
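<p>Once the Secret manifest above is applied, the token controller populates it and the token can be read back. A sketch (the file name <code>token-secret.yaml</code> is just a placeholder for the manifest above):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -f token-secret.yaml
kubectl get secret token-secret -o jsonpath='{.data.token}' | base64 -d
</code></pre>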
|
<p>I am new to microk8s. It's a wonderful and easy approach to setting up k8s; just one thing is quite annoying, which is having to prefix every <strong>kubectl</strong> command with <strong>microk8s</strong>. Is there any way to use kubectl directly or make it an alias?</p>
<p>For example, to run a kubectl command I have to use microk8s every time ==> microk8s kubectl get pods</p>
<p>Thanks!</p>
| <p>Yeah, there is a way to create an alias for kubectl in microk8s. Run this command:</p>
<p><code>sudo snap alias microk8s.kubectl kubectl</code></p>
<p>and you will be able to use kubectl directly.</p>
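<p>Alternatively, if you prefer a plain shell alias instead of the snap alias, something like this in your <code>~/.bashrc</code> has the same effect:</p>
<pre class="lang-bash prettyprint-override"><code>echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc
source ~/.bashrc
</code></pre>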
|
<p>I am trying to set up a load test for an endpoint. This is what I have followed so far:</p>
<h2><code>Dockerfile</code></h2>
<pre><code>FROM python:3.8
# Add the external tasks directory into /tasks
WORKDIR /src
ADD requirements.txt .
RUN pip install --no-cache-dir --upgrade locust==2.10.1
ADD run.sh .
ADD load_test.py .
ADD load_test.conf .
# Expose the required Locust ports
EXPOSE 5557 5558 8089
# Set script to be executable
RUN chmod 755 run.sh
# Start Locust using LOCUS_OPTS environment variable
CMD ["bash", "run.sh"]
# Modified from:
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/docker/Dockerfile
</code></pre>
<h2><code>run.sh</code></h2>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
LOCUST="locust"
LOCUS_OPTS="--config=load_test.conf"
LOCUST_MODE=${LOCUST_MODE:-standalone}
if [[ "$LOCUST_MODE" = "master" ]]; then
LOCUS_OPTS="$LOCUS_OPTS --master"
elif [[ "$LOCUST_MODE" = "worker" ]]; then
LOCUS_OPTS="$LOCUS_OPTS --worker --master-host=$LOCUST_MASTER_HOST"
fi
echo "${LOCUST} ${LOCUS_OPTS}"
$LOCUST $LOCUS_OPTS
# Copied from
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/docker/locust/run.sh
</code></pre>
<p>This is how I have written the load test locust script:</p>
<pre class="lang-py prettyprint-override"><code>import json
from locust import HttpUser, constant, task
class CategorizationUser(HttpUser):
wait_time = constant(1)
@task
def predict(self):
payload = json.dumps(
{
"text": "Face and Body Paint washable Rubies Halloween item 91#385"
}
)
_ = self.client.post("/predict", data=payload)
</code></pre>
<p>I am invoking that with a configuration:</p>
<pre><code>locustfile = load_test.py
headless = false
users = 1000
spawn-rate = 1
run-time = 5m
host = IP
html = locust_report.html
</code></pre>
<p>So, after building and pushing the Docker image and creating a k8s cluster on GKE, I am deploying it. This is what the <code>deployment.yaml</code> looks like:</p>
<pre class="lang-yaml prettyprint-override"><code># Copied from
# https://github.com/scrollocks/locust-loadtesting/blob/master/locust/kubernetes/templates/deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: default
name: locust-master-deployment
labels:
name: locust
role: master
spec:
replicas: 1
selector:
matchLabels:
name: locust
role: master
template:
metadata:
labels:
name: locust
role: master
spec:
containers:
- name: locust
image: gcr.io/PROJECT_ID/IMAGE_URI
imagePullPolicy: Always
env:
- name: LOCUST_MODE
value: master
- name: LOCUST_LOG_LEVEL
value: DEBUG
ports:
- name: loc-master-web
containerPort: 8089
protocol: TCP
- name: loc-master-p1
containerPort: 5557
protocol: TCP
- name: loc-master-p2
containerPort: 5558
protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: default
name: locust-worker-deployment
labels:
name: locust
role: worker
spec:
replicas: 2
selector:
matchLabels:
name: locust
role: worker
template:
metadata:
labels:
name: locust
role: worker
spec:
containers:
- name: locust
image: gcr.io/PROJECT_ID/IMAGE_URI
imagePullPolicy: Always
env:
- name: LOCUST_MODE
value: worker
- name: LOCUST_MASTER
value: locust-master-service
- name: LOCUST_LOG_LEVEL
value: DEBUG
</code></pre>
<p>After deployment, I am exposing the required ports like so:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
--type NodePort \
--port 5558 \
--target-port 5558 \
--name locust-5558
kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
--type NodePort \
--port 5557 \
--target-port 5557 \
--name locust-5557
kubectl expose pod locust-master-deployment-f9d4c4f59-8q6wk \
--type LoadBalancer \
--port 80 \
--target-port 8089 \
--name locust-web
</code></pre>
<p>The cluster and the nodes provision successfully. But the moment I hit the IP of <code>locust-web</code>, I am getting:</p>
<p><a href="https://i.stack.imgur.com/Othog.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Othog.png" alt="enter image description here" /></a></p>
<p>Any suggestions on how to resolve the bug?</p>
| <p>Probably the environment variables set by k8s are colliding with Locust’s (LOCUST_WEB_PORT specifically). Change your setup so that no containers are named ”locust”.</p>
<p>See <a href="https://github.com/locustio/locust/issues/1226" rel="nofollow noreferrer">https://github.com/locustio/locust/issues/1226</a> for a similar issue.</p>
|
<p>I'm trying to set up a K3s cluster. When I had a single master and agent setup cert-manager had no issues. Now I'm trying a 2 master setup with embedded etcd. I opened TCP ports <code>6443</code> and <code>2379-2380</code> for both VMs and did the following:</p>
<pre class="lang-none prettyprint-override"><code>VM1: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --cluster-init
VM2: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --server https://MASTER_IP:6443
</code></pre>
<pre class="lang-none prettyprint-override"><code># k3s kubectl get nodes
NAME STATUS ROLES AGE VERSION
VM1 Ready control-plane,etcd,master 130m v1.22.7+k3s1
VM2 Ready control-plane,etcd,master 128m v1.22.7+k3s1
</code></pre>
<p>Installing cert-manager works fine:</p>
<pre class="lang-none prettyprint-override"><code># k3s kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
# k3s kubectl get pods --namespace cert-manager
NAME READY STATUS
cert-manager-b4d6fd99b-c6fpc 1/1 Running
cert-manager-cainjector-74bfccdfdf-gtmrd 1/1 Running
cert-manager-webhook-65b766b5f8-brb76 1/1 Running
</code></pre>
<p>My manifest has the following definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: letsencrypt-account-key
solvers:
- selector: {}
http01:
ingress: {}
</code></pre>
<p>Which results in the following error:</p>
<pre class="lang-none prettyprint-override"><code># k3s kubectl apply -f manifest.yaml
Error from server (InternalError): error when creating "manifest.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded
</code></pre>
<p>I tried disabling both firewalls, waiting a day, reset and re-setup, but the error persists. Google hasn't been much help either. The little info I can find goes over my head for the most part and no tutorial seems to do any extra steps.</p>
| <p>Try to specify the proper ingress class name in your Cluster Issuer, like this:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
</code></pre>
<p>Also, make sure that you have the cert manager annotation and the tls secret name specified in your Ingress like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  ...
spec:
  tls:
    - hosts:
        - domain.com
      secretName: letsencrypt-account-key
</code></pre>
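<p>After applying the issuer and the ingress, you can watch the ACME flow progress through the cert-manager resources, for example:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get clusterissuer letsencrypt
kubectl get certificate,certificaterequest,order,challenge --all-namespaces
</code></pre>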
|
<p>When I want to restart a Kubernetes (<code>v1.21.2</code>) StatefulSet pod, the pod gets stuck in the Terminating status, and the log shows this:</p>
<pre><code>error killing pod: failed to "KillPodSandbox" for "8aafe99f-53c1-4bec-8cb8-abd09af1448f" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to check network namespace closed: remove netns: unlinkat /var/run/netns/cni-f9ccb1de-ed43-dff6-1b86-1260e07178e6: device or resource busy"
</code></pre>
<p>The pod has been terminating for hours but is still stuck. Why did this happen? What should I do to fix this problem?</p>
| <p>I think force deletion can be a workaround for this issue.</p>
<p>In order to delete the affected pod that is in the terminating state, please refer to the <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#delete-pods" rel="nofollow noreferrer">documentation</a>. In case the pod still does not get deleted then you can do the force deletion by following <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#force-deletion" rel="nofollow noreferrer">documentation</a>.</p>
<p>Please note that when you force delete a StatefulSet pod, you are asserting that the Pod in question will never again make contact with other Pods in the StatefulSet and its name can be safely freed up for a replacement to be created.</p>
<p>You can also try these workarounds to quickly mitigate this</p>
<ol>
<li>Run the command below to remove all pods in the terminating state.</li>
</ol>
<blockquote>
<p>for p in $(kubectl get pods | grep Terminating | awk '{print $1}'); do
kubectl delete pod $p --grace-period=0 --force;done</p>
</blockquote>
<p>2. Set the finalizer value in the deployment YAML to null.</p>
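<p>For instance, clearing the finalizers on the stuck pod directly (the pod name is a placeholder) can be done with a patch like this:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl patch pod <pod-name> -p '{"metadata":{"finalizers":null}}'
</code></pre>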
|
<p>I have a cli app written in NodeJS [not by me].</p>
<p>I want to deploy this on a k8s cluster like I have done many times with web servers.</p>
<p>I have not deployed something like this before, so I am in a kind of a loss.</p>
<p>I have worked with dockerized cli apps [like Terraform] before, and i know how to use them in a CICD.</p>
<p>But how should I deploy them in a pod so they are always available for usage from another app in the cluster?</p>
<p>Or is there a completely different approach that I need to consider?</p>
<p>#EDIT#</p>
<p>I am using this in the end of my Dockerfile ..</p>
<pre><code># the main executable
ENTRYPOINT ["sleep", "infinity"]
# a default command
CMD ["mycli help"]
</code></pre>
<p>That way the pod does not restart and the cli inside is waiting for commands like <code>mycli do this</code></p>
<p>Is it a <code>hacky</code> way that is frowned upon or a legit solution?</p>
| <p>Your edit is one solution. Another one, if you do not want to or cannot change the Docker image, is to <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">Define a Command for a Container</a> to loop infinitely; this would achieve the same as the Dockerfile ENTRYPOINT but without having to rebuild the image.</p>
<p>Here's an example of such implementation:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
    - name: command-demo-container
      image: debian
      command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
  restartPolicy: OnFailure
</code></pre>
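<p>With a pod kept alive this way (using your own image that contains the CLI instead of <code>debian</code>), you or another workload can then invoke the tool on demand, for example:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it command-demo -- mycli do this
</code></pre>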
<p>As for your question about whether this is a legit solution, this is hard to answer; I would say it depends on what your application is designed to do. Kubernetes Pods are designed to be ephemeral, so a good solution would be one that runs until the job is completed; for a web server, for example, the job is never completed because it should be constantly listening for requests.</p>
|
<p>I am currently using the following to attempt to spread Kubernetes pods in a given deployment across all Kubernetes nodes evenly:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- api
</code></pre>
<p>However, I noticed that a new attribute <code>topologySpreadConstraints</code> was recently added to Kubernetes. What's the advantage of switching from using <code>affinity.podAntiAffinity</code> to <code>topologySpreadConstraints</code> in Kubernetes deployments? Any reasons to switch? What would that syntax look like to match what I am currently doing above?</p>
| <h2>Summary</h2>
<p>TL;DR - The advantage of switching to <code>topologySpreadConstraints</code> is that you will be able to be more expressive about the <code>topology</code> or the <code>structure</code> of your underlying infrastructure for pod scheduling. Think of this is as a superset of what <code>affinity</code> can do.</p>
<p>One concept is not the replacement for the other, but both can be used for different purposes. You can combine <code>pod/nodeAffinity</code> with <code>topologySpreadConstraints</code> and they will be ANDed by the Kubernetes scheduler when scheduling pods. In short, <code>pod/nodeAffinity</code> is for <em>linear</em> topologies (all nodes on the same level) and <code>topologySpreadConstraints</code> are for <em>hierarchical</em> topologies (nodes spread across logical domains of topology). And when combined, the scheduler ensures that <em>both</em> are respected and <em>both</em> are used to ensure certain criteria, like high availability of your applications.</p>
<p>Keep on reading for more details!</p>
<h2>Affinities vs Topology Spread</h2>
<p>With <code>affinities</code> you can decide which <code>nodes</code> your <code>pods</code> are scheduled onto, based on a <code>node</code> <code>label</code> (in your case <code>kubernetes.io/hostname</code>).</p>
<p>With <code>topologySpreadConstraints</code> you can decide which <code>nodes</code> your <code>pods</code> are scheduled onto using a wider set of labels that define your <code>topology domain</code>. So, this is a generalisation of the simple <code>affinity</code> concept where all your nodes are "on the same topology level" - logically speaking - and on smaller scales, this is a simplified view of managing pod scheduling.</p>
<h2>An Example</h2>
<p>A <code>topology domain</code> is simply a logical unit of your infrastructure. Imagine you have a cluster with 10 nodes, that are logically on the same level and your <code>topology domain</code> represents simply a flat topology where all those nodes are at the same level.</p>
<pre><code>node1, node2, node3 ... node10
</code></pre>
<p>Now, imagine your cluster grows to have 20 nodes, 10 nodes in one Availability Zone (in your cloud provider) and 10 on another AZ. Now, your <code>topology domain</code> can be an Availability Zone and therefore, all the nodes are <em>not</em> at the same level, your topology has become "multi-zonal" now and instead of having 20 nodes in the same topology, you now have 20 nodes, 10 in <em>each</em> topological domain (AZ).</p>
<pre><code>AZ1 => node1, node2, node3 ... node10
AZ2 => node11, node12, node13 ... node20
</code></pre>
<p>Imagine it grows further to 40 nodes, 20 in each <code>region</code>, where each region can have 2 AZs (10 nodes each). A "multi-regional" topology with 2 types of <code>topology domains</code>, an AZ and a region. It now looks something like:</p>
<pre><code>Region1: => AZ1 => node1, node2, node3 ... node10
=> AZ2 => node11, node12, node13 ... node20
Region2: => AZ1 => node21, node22, node23 ... node30
=> AZ2 => node31, node32, node33 ... node40
</code></pre>
<p>Now, here's an idea. When scheduling your workload pods, you would like the scheduler to be aware of the topology of your underlying infrastructure that provides your Kubernetes nodes. This can be your own data center, a cloud provider etc. This is because you would like to ensure, for instance, that:</p>
<ol>
<li>You get an equal number of pods across regions, so your multi-regional application has similar capacities.</li>
<li>You can have an equal number of pods across AZs within a region, so AZs are not overloaded.</li>
<li>You have scaling constraints where you would like to prefer to scale an application equally across regions and so on.</li>
</ol>
<p>In order for the Kubernetes scheduler to be aware of this underlying topology that <em>you have setup</em>, you can use the <code>topologySpreadConstraints</code> to tell the scheduler how to interpret a "list of nodes" that it sees. Because remember, to the scheduler, all the nodes are just a flat list of nodes, there is no concept of a topology. You <em>can</em> build a topology by attaching special labels to nodes called <code>topologyKey</code> labels. For example, you will label <strong><em>each node</em></strong> in your cluster to make Kubernetes scheduler understand what kind of underlying "topology" you have. Like,</p>
<pre><code>node1 => az: 1, region: 1
...
node11 => az: 2, region: 1
...
node21 => az: 1, region: 2
...
node31 => az: 2, region: 2
</code></pre>
<p>Now each node has been configured to be a part of "two" topological domains; each node must be in an <code>AZ</code> and in a <code>Region</code>. So, you can start configuring your <code>topologySpreadConstraints</code> to make the scheduler spread pods across regions, AZs etc. (your <code>topology domains</code>) and meet your requirements.</p>
<p>This is a very common use case that a lot of organisations implement with their workloads in order to ensure high availability of applications when they grow very large and become multi-regional, for instance. You can read more about <code>topologySpreadConstraints</code> <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/" rel="noreferrer">here</a>.</p>
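<p>To answer the syntax question: a rough equivalent of the pod anti-affinity in your example (spreading pods labelled <code>app: api</code> across hostnames) could look like the sketch below. <code>maxSkew: 1</code> and <code>whenUnsatisfiable: ScheduleAnyway</code> are assumptions on my part; <code>ScheduleAnyway</code> makes it a soft constraint, similar to your <code>preferredDuringSchedulingIgnoredDuringExecution</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>  spec:
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: api
</code></pre>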
|
<p>I'm following Les Jackson's <a href="https://www.youtube.com/watch?v=DgVjEo3OGBI" rel="nofollow noreferrer">tutorial</a> to microservices and got stuck at 05:30:00 while creating a deployment for a ms sql server. I've written the deployment file just as shown on the yt video:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mssql-depl
spec:
replicas: 1
selector:
matchLabels:
app: mssql
template:
metadata:
labels:
app: mssql
spec:
containers:
- name: mssql
image: mcr.microsoft.com/mssql/server:2017-latest
ports:
- containerPort: 1433
env:
- name: MSSQL_PID
value: "Express"
- name: ACCEPT_EULA
value: "Y"
- name: SA_PASSWORD
valueFrom:
secretKeyRef:
name: mssql
key: SA_PASSWORD
volumeMounts:
- mountPath: /var/opt/mssql/data
name: mssqldb
volumes:
- name: mssqldb
persistentVolumeClaim:
claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
name: mssql-clusterip-srv
spec:
type: ClusterIP
selector:
app: mssql
ports:
- name: mssql
protocol: TCP
port: 1433 # this is default port for mssql
targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
name: mssql-loadbalancer
spec:
type: LoadBalancer
selector:
app: mssql
ports:
- protocol: TCP
port: 1433 # this is default port for mssql
targetPort: 1433
</code></pre>
<p>The persistent volume claim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mssql-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 200Mi
</code></pre>
<p>But when I apply this deployment, the pod ends up with ImagePullBackOff status:</p>
<pre><code>commands-depl-688f77b9c6-vln5v 1/1 Running 0 2d21h
mssql-depl-5cd6d7d486-m8nw6 0/1 ImagePullBackOff 0 4m54s
platforms-depl-6b6cf9b478-ktlhf 1/1 Running 0 2d21h
</code></pre>
<p><strong>kubectl describe pod</strong></p>
<pre><code>Name: mssql-depl-5cd6d7d486-nrrkn
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Thu, 28 Jul 2022 12:09:34 +0200
Labels: app=mssql
pod-template-hash=5cd6d7d486
Annotations: <none>
Status: Pending
IP: 10.1.0.27
IPs:
IP: 10.1.0.27
Controlled By: ReplicaSet/mssql-depl-5cd6d7d486
Containers:
mssql:
Container ID:
Image: mcr.microsoft.com/mssql/server:2017-latest
Image ID:
Port: 1433/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MSSQL_PID: Express
ACCEPT_EULA: Y
SA_PASSWORD: <set to the key 'SA_PASSWORD' in secret 'mssql'> Optional: false
Mounts:
/var/opt/mssql/data from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xqzks (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mssqldb:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mssql-claim
ReadOnly: false
kube-api-access-xqzks:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m42s default-scheduler Successfully assigned default/mssql-depl-5cd6d7d486-nrrkn to docker-desktop
Warning Failed 102s kubelet Failed to pull image "mcr.microsoft.com/mssql/server:2017-latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 102s kubelet Error: ErrImagePull
Normal BackOff 102s kubelet Back-off pulling image "mcr.microsoft.com/mssql/server:2017-latest"
Warning Failed 102s kubelet Error: ImagePullBackOff
Normal Pulling 87s (x2 over 3m41s) kubelet Pulling image "mcr.microsoft.com/mssql/server:2017-latest"
</code></pre>
<p>In the events it shows</p>
<blockquote>
<p>"rpc error: code = Unknown desc = context deadline exceeded"</p>
</blockquote>
<p>But it doesn't tell me anything and resources on troubleshooting this error don't include such error.</p>
<p>I'm using kubernetes on docker locally.
I've researched that this issue can happen when pulling the image from a private registry, but this is a public one, right <a href="https://hub.docker.com/_/microsoft-mssql-server" rel="nofollow noreferrer">here</a>. I copy-pasted the image path to be sure and tried different MS SQL versions, but to no avail.</p>
<p>Can someone be so kind and show me the right direction I should go / what should I try to get this to work? It worked just fine on the video :(</p>
| <p>I fixed it by manually pulling the image via <code>docker pull mcr.microsoft.com/mssql/server:2017-latest</code> and then deleting and re-applying the deployment.</p>
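<p>For reference, the steps were roughly as follows (assuming the manifest file is called <code>mssql-depl.yaml</code>):</p>
<pre><code>docker pull mcr.microsoft.com/mssql/server:2017-latest
kubectl delete -f mssql-depl.yaml
kubectl apply -f mssql-depl.yaml
</code></pre>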
|
<p>I'm trying to run Keycloak 18.0.1 as a StatefulSet with the bitnami Helm chart on my Azure AKS Kubernetes cluster. Traefik 2.7 is the Ingress Controller and an external Postgres Database is used. Keycloak is in "proxy"-mode "edge" and doesn't need to handle SSL, because it's handled by traefik, cert-manager & Let's encrypt.</p>
<p>I'm trying to switch it to production mode:</p>
<pre><code>2022-07-29 22:43:21,460 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-metrics, vault, vertx]
2022-07-29 22:43:21,466 WARN [org.keycloak.quarkus.runtime.KeycloakMain] (main) Running the server in development mode. DO NOT use this configuration in production.
</code></pre>
<p>Therefore I tried using the following values during helm chart installation:</p>
<pre><code>
cache:
enabled: true
auth:
adminUser: ****
adminPassword: ****
managementUser: ****
managementPassword: ****
proxy: edge
postgresql:
enabled: false
externalDatabase:
host: ****
port: 5432
user: ****
password: ****
database: keycloak
resources:
requests:
cpu: 0.5
memory: 512Mi
limits:
cpu: 1
memory: 1Gi
extraEnvVars:
- name: KEYCLOAK_PRODUCTION
value: "true"
- name: KC_HOSTNAME
value: "<external host name>"
- name: KC_HOSTNAME_STRICT_HTTPS
value: "false"
</code></pre>
<p>As soon as I add the env vars for production, I'm getting the following error:</p>
<pre><code>at org.h2.jdbcx.JdbcDataSource.getXAConnection(JdbcDataSource.java:352)
at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:216)
at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:513)
at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:494)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at io.agroal.pool.util.PriorityScheduledExecutor.beforeExecute(PriorityScheduledExecutor.java:75)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1126)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2022-07-29 18:27:20,885 WARN [io.agroal.pool] (agroal-11) Datasource '<default>': No suitable driver found for jdbc:postgresql://***********:5432/keycloak?currentSchema=public
</code></pre>
<p>It seems that the chart wants to go back to the integrated H2 database?</p>
<p>The second problem is the Infinispan cache:</p>
<p><code>[org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled</code></p>
<p>How can I enable this cache to make the chart work with multiple replicas?</p>
<p>Any help is appreciated!</p>
<p>Thanks,
Pascal</p>
| <p>Found the solution to enable production mode:</p>
<pre><code> - name: KEYCLOAK_EXTRA_ARGS
value: "--auto-build"
</code></pre>
<p>The error:</p>
<pre><code>[org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
</code></pre>
<p>however still remains.</p>
|
<p>I am fairly new to Kubernetes, and I think I understand the basics of provisioning nodes and setting memory limits for pods. Here's the problem I have: my application can require dramatically different amounts of memory, depending on the input (and there is no fool-proof way to predict it). Some jobs require 50MB, some require 50GB. How can I set up my K8s deployment to handle this situation?</p>
<p>I have one strategy that I'd like to try out, but I don't know how to do it: start with small instances (nodes with not a lot of memory), and if the job fails with out-of-memory, then automatically send it to increasingly bigger instances until it succeeds. How hard would this be to implement in Kubernetes?</p>
<p>Thanks!</p>
| <p>Natively, K8S supports horizontal autoscaling, i.e. automatically deploying more replicas of a deployment based on a chosen metric like CPU usage, memory usage etc.: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaling</a></p>
<p>What you are describing here though is vertical scaling. It is not supported out of the box, but there is a subproject that seems to be able to fulfill your requirements: <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">vertical-pod-autoscaler</a></p>
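<p>A minimal sketch of such a VPA object (the Deployment name <code>batch-worker</code> is an assumption): in <code>Auto</code> mode the autoscaler can evict pods and recreate them with larger requests after repeated OOM kills, which is close to the escalation strategy you describe.</p>
<pre><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: batch-worker-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: batch-worker
  updatePolicy:
    updateMode: "Auto"
</code></pre>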
|
<p>In my kubernetes cluster, there are multiple deployments in a namespace.
For a specific deployment, there is a need to not allow "CrashLoopBackoff" pods to exist.
So basically, when any pod gets to this state, I would want it to be deleted and later a new pod to be created which is already handled by the ReplicaSet.</p>
<p>I tried writing a custom controller, with the idea that the SharedInformer would notify me about the Pod's state so I could delete it from that loop.
However, this introduces a dependency on the pod that the custom controller itself runs in.</p>
<p>I also tried searching for any option to be configured in the manifest itself, but could not find any.</p>
<p>I am pretty new to Kubernetes, so I need help with the implementation of this behaviour.</p>
| <p>Firstly, you should address the reason <em>why</em> the pod has entered the <code>CrashLoopBackOff</code> state rather than just delete it. If you do this, you'll potentially just recreate the problem again and you'll be deleting pods repeatedly. For example, if your pod is trying to access an external DB and that DB is down, it'll CrashLoop, and deleting and restarting the pod won't help fix that.</p>
<p>Secondly, if you want to do this deleting in an automated manner, an easy way would be to run a <code>CronJob</code> resource that goes through your deployment and deletes the <code>CrashLoop</code>ed pods. You could set the cronjob to run once an hour or whatever schedule you wish.</p>
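<p>A rough, non-hardened sketch of such a CronJob is below. The label selector <code>app=my-app</code>, the hourly schedule, the <code>pod-cleaner</code> service account and the <code>bitnami/kubectl</code> image are all assumptions; the service account also needs RBAC permissions to list and delete pods in the namespace.</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: crashloop-cleaner
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner
          restartPolicy: Never
          containers:
          - name: cleaner
            image: bitnami/kubectl
            command:
            - /bin/sh
            - -c
            # list pods of the deployment whose STATUS column is CrashLoopBackOff
            # and delete them; xargs -r does nothing when the list is empty
            - >
              kubectl get pods -l app=my-app --no-headers
              | awk '$3=="CrashLoopBackOff" {print $1}'
              | xargs -r kubectl delete pod
</code></pre>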
|
<p>Using different tools (kubent for example) I see that I have deprecated API in my cluster. For example</p>
<pre><code>Type: Ingress Name: kibana API: networking.k8s.io/v1beta1
</code></pre>
<p>But when I open Ingress itself, I can see this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
managedFields:
- manager: Go-http-client
operation: Update
apiVersion: networking.k8s.io/v1beta1
</code></pre>
<p>So, it shows that API of my Ingress is actually "v1", not "beta". But "managedFields" section indeed has "v1beta1" API. According to official <a href="https://kubernetes.io/docs/reference/using-api/server-side-apply/" rel="nofollow noreferrer">documentation</a>, this is server side API that should not be edited by user.</p>
<p>So, my question is - should/can I do anything with deprecated API in this "managedField"? Will there be any issues during upgrade to next k8s version? Because currently my GCP console shows that there will be problems.</p>
| <p>There will be no issue when upgrading your Kubernetes cluster to the latest version even if you have a deprecated API version in the <code>managedFields</code> section of the Ingress configuration. The reason why you still see <strong>“v1beta1”</strong> in the UI is that different parts of GKE rely on both versions (v1 and v1beta1).</p>
<p>Between Kubernetes versions 1.19 and 1.21, both the <code>networking.k8s.io/v1</code> and <code>extensions/v1beta1</code> endpoints are supported. They are functionally identical, and it is down to the given UI's preference which version is displayed, so it won’t affect the functionality of your Ingress. As noted in the <a href="https://cloud.google.com/kubernetes-engine/docs/deprecations/apis-1-22#ingress-v122" rel="nofollow noreferrer">GKE deprecation notes</a>, clusters on versions 1.22 and later stopped serving the extensions/v1beta1 and networking.k8s.io/v1beta1 Ingress APIs.</p>
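<p>If you want to double-check what your cluster actually serves, a couple of read-only commands (the Ingress name <code>kibana</code> is taken from your example):</p>
<pre><code># which networking API versions the API server still serves
kubectl api-versions | grep networking.k8s.io

# the version the object is returned as when you read it
kubectl get ingress kibana -o yaml | grep ^apiVersion
</code></pre>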
|
<p>I am trying to isolate my pods in namespace from other namespaces. I have tried to create a NetworkPolicy:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
</code></pre>
<p>This NetworkPolicy successfully isolates pods in my namespace from other namespaces. But this policy, once applied, also blocks all external traffic to these pods. Is there any method to block only traffic from other namespaces while still allowing all external traffic to the pods?</p>
| <p>In my case I had the same problem, and the answer in this link <a href="https://stackoverflow.com/a/56860217/7324872">https://stackoverflow.com/a/56860217/7324872</a> is great.</p>
<p>Please create 2 network policies:</p>
<p><strong>deny-from-other-namespaces</strong></p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-from-other-namespaces
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
</code></pre>
<p>And <strong>web-allow-external</strong></p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: web-allow-external
spec:
podSelector:
matchLabels:
app: <label>
ingress:
- {}
</code></pre>
<p>The two NetworkPolicies do not conflict with each other: policies are additive, so traffic allowed by either of them is allowed.</p>
|
<p>Fairly new to kubernetes, I have a question about setting <code>limits</code> and <code>requests</code> with regards to <code>CPU</code> and <code>Memory</code>. If I have a few applications in my cluster but none of these applications are in production yet, how do I determine the correct way to set the <code>limits</code> and <code>requests</code> for the applications in the cluster with no known prior data?</p>
<p>I tried the <code>kubectl top pods -n <namespace></code> but this only gave me the current usage, wondering if anyone had any suggestions of a logical way to set the <code>limits</code> and <code>requests</code>?</p>
| <p>To determine request and limits, you can try the vertical pod autoscaler : <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler</a></p>
<p>You have two main modes :</p>
<ul>
<li><code>off</code>: only gives information (recommendations) about requests and limits during the pod's life (see the example below the list).</li>
<li><code>recreate</code>: recreates the pod with new requests and limits (when needed).</li>
</ul>
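<p>For example, with a VPA created in <code>off</code> mode against your deployment, the recommendations can be read back like this (the VPA name and namespace are assumptions):</p>
<pre><code>kubectl describe vpa my-app-vpa -n my-namespace
# look under Status -> Recommendation -> Container Recommendations for the
# "Lower Bound", "Target" and "Upper Bound" values, which you can use as
# starting points for requests (Target) and limits (Upper Bound)
</code></pre>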
|
<p>I created disk in azure k8s cluster (4Gb standard HDD)
I am using code PV</p>
<p><a href="https://pastebin.com/HysrzFyB" rel="nofollow noreferrer">Pv file</a></p>
<p>Then I am creating PVC:</p>
<p><a href="https://pastebin.com/r7T4KZEv" rel="nofollow noreferrer">PVC yaml</a></p>
<p>Attach my volume to Pod:</p>
<p><a href="https://pastebin.com/z8MXNHXF" rel="nofollow noreferrer">Pod volume attache</a></p>
<p>But when I checked the status of my Pod, I got an error:</p>
<pre><code>root@k8s:/home/azureuser/k8s# kubectl get describe pods mypod
error: the server doesn't have a resource type "describe"
root@k8s:/home/azureuser/k8s# kubectl describe pods mypod
Name: mypod
Namespace: default
Priority: 0
Node: aks-agentpool-37412589-vmss000000/10.224.0.4
Start Time: Wed, 03 Aug 2022 10:34:45 +0000
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
mypod:
Container ID:
Image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 250m
memory: 256Mi
Requests:
cpu: 100m
memory: 128Mi
Environment: <none>
Mounts:
/mnt/azure from azure (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nq9q2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
azure:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-azuredisk
ReadOnly: false
kube-api-access-nq9q2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m2s default-scheduler Successfully assigned default/mypod to aks-agentpool-36264904-vmss000000
Warning FailedAttachVolume 53s (x8 over 2m1s) attachdetach-controller AttachVolume.Attach failed for volume "pv-azuredisk" : rpc error: code = InvalidArgument desc = Volume capability not supported
</code></pre>
<p>Could you please help with advice, how I can solve this issue: <code>Warning FailedAttachVolume 53s (x8 over 2m1s) attachdetach-controller AttachVolume.Attach failed for volume "pv-azuredisk" : rpc error: code = InvalidArgument desc = Volume capability not supported</code></p>
| <blockquote>
<p>...Volume capability not supported</p>
</blockquote>
<p>Try updating your PV:</p>
<pre><code>...
accessModes:
- ReadWriteOnce # <-- ReadWriteMany is not supported by disk.csi.azure.com
...
</code></pre>
<p>ReadWriteMany is supported by file.csi.azure.com (Azure Files).</p>
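<p>Note that <code>accessModes</code> on an already-created PVC cannot be changed in place, so you will most likely have to recreate the objects. A sketch, assuming your manifests are in <code>pv.yaml</code>, <code>pvc.yaml</code> and <code>pod.yaml</code>:</p>
<pre><code>kubectl delete -f pod.yaml -f pvc.yaml -f pv.yaml
# change accessModes to ReadWriteOnce in both pv.yaml and pvc.yaml, then:
kubectl apply -f pv.yaml -f pvc.yaml -f pod.yaml
</code></pre>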
|
<p>I have a Java application running in Kubernetes which listens on port 8080. When I connect JProfiler to the JVM and run a few requests sequentially, everything works fine. But as soon as I fire some load using JMeter, my application stops responding on port 8080 and I get request timeouts.</p>
<p>When JProfiler is detached from the JVM, everything starts working fine again.
I explored a lot but couldn't find any help regarding what in JProfiler is preventing my application from responding.</p>
| <p>From the feedback you have sent me by email, the overhead becomes noticeable when you switch on allocation recording. Just with CPU and probe recording you don't experience any problem.</p>
<p>Allocation recording is an expensive operation that should only be used when you have a related problem. The added overhead can be reduced by reducing the allocation sampling rate.</p>
|
<p>I am running a 2-node K8s cluster on OVH Bare Metal Servers. I've set up <strong>MetalLB</strong> and <strong>Nginx-Ingress</strong>. The 2 servers both have public IPs and are not in the same network segment. I've used one of the IPs as the entrypoint for the LB. For the deployments, I created 3 nginx containers & services to test the forwarding.
When I use host based routing, the endpoints are reachable via the internet, but when I use path based forwarding, only the / path is reachable. For the rest, I get the default backend.
My host based Ingress resource:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource-2
spec:
ingressClassName: nginx
rules:
- host: nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deploy-main
port:
number: 80
- host: blue.nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deploy-blue
port:
number: 80
- host: green.nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deploy-green
port:
number: 80
</code></pre>
<p>The path based Ingress resource:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource-3
spec:
ingressClassName: nginx
rules:
- host: nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx
port:
number: 80
- path: /blue
pathType: Prefix
backend:
service:
name: nginx-deploy-blue
port:
number: 80
- path: /green
pathType: Prefix
backend:
service:
name: nginx-deploy-green
port:
number: 80
</code></pre>
<p>The endpoints are all reachable in both cases</p>
<pre><code># kubectl describe ing ingress-resource-2
Name: ingress-resource-2
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
nginx.example.com
/ nginx:80 (192.168.107.4:80)
blue.nginx.example.com
/ nginx-deploy-blue:80 (192.168.164.212:80)
green.nginx.example.com
/ nginx-deploy-green:80 (192.168.164.213:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 13m nginx-ingress-controller Configuration for default/ingress-resource-2 was added or updated
</code></pre>
<pre><code># kubectl describe ing ingress-resource-3
Name: ingress-resource-3
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
nginx.example.com
/ nginx:80 (192.168.107.4:80)
/blue nginx-deploy-blue:80 (192.168.164.212:80)
/green nginx-deploy-green:80 (192.168.164.213:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 109s nginx-ingress-controller Configuration for default/ingress-resource-3 was added or updated
</code></pre>
<p>Getting the Nginx-Ingress logs:</p>
<pre><code># kubectl -n nginx-ingress logs pod/nginx-ingress-6947fb84d4-m9gkk
W0803 17:00:48.516628 1 flags.go:273] Ignoring unhandled arguments: []
I0803 17:00:48.516688 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.0 PlusFlag=false
I0803 17:00:48.516692 1 flags.go:191] Commit=979db22d8065b22fedb410c9b9c5875cf0a6dc66 Date=2022-07-12T08:51:24Z DirtyState=false Arch=linux/amd64 Go=go1.18.3
I0803 17:00:48.527699 1 main.go:210] Kubernetes version: 1.24.3
I0803 17:00:48.531079 1 main.go:326] Using nginx version: nginx/1.23.0
2022/08/03 17:00:48 [notice] 26#26: using the "epoll" event method
2022/08/03 17:00:48 [notice] 26#26: nginx/1.23.0
2022/08/03 17:00:48 [notice] 26#26: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/08/03 17:00:48 [notice] 26#26: OS: Linux 5.15.0-41-generic
2022/08/03 17:00:48 [notice] 26#26: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/08/03 17:00:48 [notice] 26#26: start worker processes
2022/08/03 17:00:48 [notice] 26#26: start worker process 27
2022/08/03 17:00:48 [notice] 26#26: start worker process 28
2022/08/03 17:00:48 [notice] 26#26: start worker process 29
2022/08/03 17:00:48 [notice] 26#26: start worker process 30
2022/08/03 17:00:48 [notice] 26#26: start worker process 31
2022/08/03 17:00:48 [notice] 26#26: start worker process 32
2022/08/03 17:00:48 [notice] 26#26: start worker process 33
2022/08/03 17:00:48 [notice] 26#26: start worker process 34
I0803 17:00:48.543403 1 listener.go:54] Starting Prometheus listener on: :9113/metrics
2022/08/03 17:00:48 [notice] 26#26: start worker process 35
2022/08/03 17:00:48 [notice] 26#26: start worker process 37
I0803 17:00:48.543712 1 leaderelection.go:248] attempting to acquire leader lease nginx-ingress/nginx-ingress-leader-election...
2022/08/03 17:00:48 [notice] 26#26: start worker process 38
...
2022/08/03 17:00:48 [notice] 26#26: start worker process 86
I0803 17:00:48.645253 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"delivery-ingress", UID:"23f93b2d-c3c8-48eb-a2a1-e2ce0453677f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1527358", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/delivery-ingress was added or updated
I0803 17:00:48.645512 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.646550 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.646629 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"delivery-ingress", UID:"23f93b2d-c3c8-48eb-a2a1-e2ce0453677f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1527358", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/delivery-ingress was added or updated
I0803 17:00:48.646810 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.646969 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.647259 1 event.go:285] Event(v1.ObjectReference{Kind:"Secret", Namespace:"nginx-ingress", Name:"default-server-secret", UID:"d8271053-2785-408f-b87b-88b9bb9fc488", APIVersion:"v1", ResourceVersion:"1612716", FieldPath:""}): type: 'Normal' reason: 'Updated' the special Secret nginx-ingress/default-server-secret was updated
2022/08/03 17:00:48 [notice] 26#26: signal 1 (SIGHUP) received from 88, reconfiguring
2022/08/03 17:00:48 [notice] 26#26: reconfiguring
2022/08/03 17:00:48 [notice] 26#26: using the "epoll" event method
2022/08/03 17:00:48 [notice] 26#26: start worker processes
2022/08/03 17:00:48 [notice] 26#26: start worker process 89
2022/08/03 17:00:48 [notice] 26#26: start worker process 90
...
2022/08/03 17:00:48 [notice] 26#26: start worker process 136
2022/08/03 17:00:48 [notice] 27#27: gracefully shutting down
2022/08/03 17:00:48 [notice] 27#27: exiting
2022/08/03 17:00:48 [notice] 35#35: gracefully shutting down
2022/08/03 17:00:48 [notice] 31#31: exiting
2022/08/03 17:00:48 [notice] 38#38: gracefully shutting down
2022/08/03 17:00:48 [notice] 32#32: exiting
2022/08/03 17:00:48 [notice] 30#30: exiting
2022/08/03 17:00:48 [notice] 40#40: gracefully shutting down
2022/08/03 17:00:48 [notice] 35#35: exiting
2022/08/03 17:00:48 [notice] 45#45: gracefully shutting down
2022/08/03 17:00:48 [notice] 40#40: exiting
2022/08/03 17:00:48 [notice] 48#48: gracefully shutting down
2022/08/03 17:00:48 [notice] 47#47: exiting
2022/08/03 17:00:48 [notice] 57#57: gracefully shutting down
2022/08/03 17:00:48 [notice] 52#52: exiting
2022/08/03 17:00:48 [notice] 55#55: gracefully shutting down
2022/08/03 17:00:48 [notice] 55#55: exiting
2022/08/03 17:00:48 [notice] 51#51: gracefully shutting down
2022/08/03 17:00:48 [notice] 51#51: exiting
2022/08/03 17:00:48 [notice] 31#31: exit
2022/08/03 17:00:48 [notice] 34#34: gracefully shutting down
2022/08/03 17:00:48 [notice] 34#34: exiting
2022/08/03 17:00:48 [notice] 41#41: exiting
2022/08/03 17:00:48 [notice] 49#49: gracefully shutting down
....
2022/08/03 17:00:48 [notice] 49#49: exiting
2022/08/03 17:00:48 [notice] 57#57: exit
.....
2022/08/03 17:00:48 [notice] 43#43: exit
2022/08/03 17:00:48 [notice] 58#58: gracefully shutting down
2022/08/03 17:00:48 [notice] 38#38: exiting
2022/08/03 17:00:48 [notice] 53#53: gracefully shutting down
2022/08/03 17:00:48 [notice] 48#48: exiting
2022/08/03 17:00:48 [notice] 59#59: gracefully shutting down
2022/08/03 17:00:48 [notice] 58#58: exiting
2022/08/03 17:00:48 [notice] 62#62: gracefully shutting down
2022/08/03 17:00:48 [notice] 60#60: gracefully shutting down
2022/08/03 17:00:48 [notice] 53#53: exiting
2022/08/03 17:00:48 [notice] 61#61: gracefully shutting down
2022/08/03 17:00:48 [notice] 63#63: gracefully shutting down
2022/08/03 17:00:48 [notice] 64#64: gracefully shutting down
2022/08/03 17:00:48 [notice] 59#59: exiting
2022/08/03 17:00:48 [notice] 65#65: gracefully shutting down
2022/08/03 17:00:48 [notice] 62#62: exiting
2022/08/03 17:00:48 [notice] 60#60: exiting
2022/08/03 17:00:48 [notice] 66#66: gracefully shutting down
2022/08/03 17:00:48 [notice] 67#67: gracefully shutting down
2022/08/03 17:00:48 [notice] 63#63: exiting
2022/08/03 17:00:48 [notice] 68#68: gracefully shutting down
2022/08/03 17:00:48 [notice] 64#64: exiting
2022/08/03 17:00:48 [notice] 61#61: exiting
2022/08/03 17:00:48 [notice] 69#69: gracefully shutting down
2022/08/03 17:00:48 [notice] 65#65: exiting
2022/08/03 17:00:48 [notice] 66#66: exiting
2022/08/03 17:00:48 [notice] 71#71: gracefully shutting down
2022/08/03 17:00:48 [notice] 70#70: gracefully shutting down
2022/08/03 17:00:48 [notice] 67#67: exiting
...
2022/08/03 17:00:48 [notice] 65#65: exit
2022/08/03 17:00:48 [notice] 73#73: gracefully shutting down
...
2022/08/03 17:00:48 [notice] 74#74: exiting
2022/08/03 17:00:48 [notice] 83#83: gracefully shutting down
2022/08/03 17:00:48 [notice] 72#72: exiting
2022/08/03 17:00:48 [notice] 77#77: gracefully shutting down
2022/08/03 17:00:48 [notice] 77#77: exiting
2022/08/03 17:00:48 [notice] 77#77: exit
I0803 17:00:48.780547 1 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"nginx-ingress", Name:"nginx-config", UID:"961b1b89-3765-4eb8-9f5f-cfd8212012a8", APIVersion:"v1", ResourceVersion:"1612730", FieldPath:""}): type: 'Normal' reason: 'Updated' Configuration from nginx-ingress/nginx-config was updated
I0803 17:00:48.780573 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"delivery-ingress", UID:"23f93b2d-c3c8-48eb-a2a1-e2ce0453677f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1527358", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/delivery-ingress was added or updated
I0803 17:00:48.780585 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 72
2022/08/03 17:00:48 [notice] 26#26: worker process 72 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 30
2022/08/03 17:00:48 [notice] 26#26: worker process 30 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 35 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 77 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 73
2022/08/03 17:00:48 [notice] 26#26: worker process 73 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 37
2022/08/03 17:00:48 [notice] 26#26: worker process 29 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 32 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 37 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 38 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 41 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 47 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 49 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 63 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 64 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 75 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 47
2022/08/03 17:00:48 [notice] 26#26: worker process 34 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 43 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 48 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 53 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 54 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 59 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 61 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 66 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 55
2022/08/03 17:00:48 [notice] 26#26: worker process 50 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 55 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 83
2022/08/03 17:00:48 [notice] 26#26: worker process 28 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 31 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 42 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 51 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 52 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 56 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 62 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 68 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 71 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 83 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 33
2022/08/03 17:00:48 [notice] 26#26: worker process 33 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 58
2022/08/03 17:00:48 [notice] 26#26: worker process 58 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 57
2022/08/03 17:00:48 [notice] 26#26: worker process 27 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 57 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 40
2022/08/03 17:00:48 [notice] 26#26: worker process 40 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 45 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 60 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 65 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 67 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 69 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 70 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 74 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 86 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
</code></pre>
<p>I'm not sure what the issue is, and I can't figure out why it's working when I use different hosts, and not working when I try to use different paths.</p>
<p>I thought it could be resource limits, but I only have the requests, no limits. There is already a default IngressClass</p>
<p>I installed the ingress controller via manifests following the steps <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">here</a></p>
<p>Update: To add the deployments running in the cluster.</p>
<pre><code># nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx
spec:
replicas: 1
selector:
matchLabels:
run: nginx-main
template:
metadata:
labels:
run: nginx-main
spec:
containers:
- image: nginx
name: nginx
</code></pre>
<pre><code># nginx-deploy-green.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx-deploy-green
spec:
replicas: 1
selector:
matchLabels:
run: nginx-green
template:
metadata:
labels:
run: nginx-green
spec:
volumes:
- name: webdata
emptyDir: {}
initContainers:
- name: web-content
image: busybox
volumeMounts:
- name: webdata
mountPath: "/webdata"
command: ["/bin/sh", "-c", 'echo "<h1>I am <font color=green>GREEN</font></h1>" > /webdata/index.html']
containers:
- image: nginx
name: nginx
volumeMounts:
- name: webdata
mountPath: "/usr/share/nginx/html"
</code></pre>
<pre><code># nginx-deploy-blue.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx-deploy-blue
spec:
replicas: 1
selector:
matchLabels:
run: nginx-blue
template:
metadata:
labels:
run: nginx-blue
spec:
volumes:
- name: webdata
emptyDir: {}
initContainers:
- name: web-content
image: busybox
volumeMounts:
- name: webdata
mountPath: "/webdata"
command: ["/bin/sh", "-c", 'echo "<h1>I am <font color=blue>BLUE</font></h1>" > /webdata/index.html']
containers:
- image: nginx
name: nginx
volumeMounts:
- name: webdata
mountPath: "/usr/share/nginx/html"
</code></pre>
| <p>Based on the comments from zer0's answer, try:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource-3
annotations:
nginx.ingress.kubernetes.io/rewrite-target: / # <-- add
spec:
ingressClassName: nginx
...
</code></pre>
<p>A page with a different font color should be the response when you browse to <code>http://nginx.example.com/blue</code> or <code>/green</code>.</p>
|
<p>I am trying to implement ingress for gRPC services. When i use single service it works fine but as soon as i integrate second service i am unable to call it.</p>
<p>Below is the code for this. Both of my services are exposed as NodePort.</p>
<pre><code>#### Ingress ####
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/backend-protocol-version: GRPC
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
alb.ingress.kubernetes.io/target-type: instance
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internal
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
alb.ingress.kubernetes.io/certificate-arn: <certificate-arn>
labels:
app: grpcserver
name: grpcserver
spec:
rules:
- host: grpc-services.dev.test.com
http:
paths:
- backend:
service:
name: users
port:
number: 9090
path: /
pathType: Prefix
</code></pre>
<p>This is the result</p>
<pre><code>❯ grpcurl grpc-services.dev.test.com:443 list
grpc.health.v1.Health
grpc.reflection.v1alpha.ServerReflection
user.UserService
</code></pre>
<p>Now when i add the code for emails service like below</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/backend-protocol-version: GRPC
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
alb.ingress.kubernetes.io/target-type: instance
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internal
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
alb.ingress.kubernetes.io/certificate-arn: <certificate-arn>
labels:
app: grpcserver
name: grpcserver
spec:
rules:
- host: grpc-services.dev.test.com
http:
paths:
- backend:
service:
name: emails
port:
number: 9090
path: /emails
pathType: Prefix
- backend:
service:
name: users
port:
number: 9090
path: /
pathType: Prefix
</code></pre>
<p>Now when i try to access this i get error</p>
<pre><code>❯ grpcurl grpc-services.dev.test.com:443/emails list
Failed to dial target host "grpc-services.dev.test.com:443/emails": dial tcp: lookup tcp/443/emails: nodename nor servname provided, or not known
also tried
❯ grpcurl grpc-services.dev.test.com/emails:443 list
Failed to dial target host "grpc-services.dev.test.com/emails:443": dial tcp: lookup grpc-services.dev.test.com/emails: no such host
</code></pre>
<p>I was expecting it to work as HTTP, maybe i am missing some thing.</p>
<p>Please help me.</p>
<p>Thanks,</p>
| <p>gRPC uses paths in the <a href="https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests" rel="nofollow noreferrer">style of <code>/the.service.Name/methodName</code></a>. So use the service name in the <code>path</code> to separate traffic. To send all <code>UserService</code> traffic to the <code>users</code> service, use the fully-qualified service name with a forward slash on either side.</p>
<pre class="lang-yaml prettyprint-override"><code> - host: grpc-services.dev.test.com
http:
paths:
- backend:
service:
name: users
port:
number: 9090
path: /user.UserService/
pathType: Prefix
</code></pre>
<p>You didn't mention the service name for the emails service, so I can't provide the example configuration for it.</p>
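<p>Purely as an illustration: if the emails proto service were called, say, <code>email.EmailService</code> (a hypothetical name), the extra rule would look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>        - backend:
            service:
              name: emails
              port:
                number: 9090
          path: /email.EmailService/
          pathType: Prefix
</code></pre>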
|
<p>I have a Python process that I want to fire up every <em>n</em> minutes in a Kubernetes cronjob and read a number of messages (say 5) from a queue, and then process/convert some files and run analysis on results based on these queue messages. If the process is still running after <em>n</em> minutes, I don't want to start a new process. In total, I would like a number of these (say 3) of these to be able to run at the same time, however, there can never be more than 3 processes running at the same time. To try and implement this, I tried the following (simplified):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: some-job
namespace: some-namespace
spec:
schedule: "*/5 * * * *"
concurrencyPolicy: "Forbid"
jobTemplate:
spec:
parallelism: 3
template:
spec:
containers:
- name: job
image: myimage:tag
imagePullPolicy: Always
command: ['python', 'src/run_job.py']
</code></pre>
<p>Now what this amounts to is a maximum of three processes running at the same time due to 'parallelism' being 3, and concurrencyPolicy being "Forbid", even if the processes go over the 5 minute mark.</p>
<p>The problem I specifically have is that one pod (e.g. pod 1) can take longer than the other two to finish, which means that pod 2 and 3 might finish after a minute, while pod one only finishes after 10 minutes due to processing of larger files from the queue.</p>
<p>Where I thought that <code>parallelism: 3</code> would cause pod 2 and 3 to be deleted and replaced after finishing (when new cron interval hits), they are not and have to wait for pod 1 to finish before starting three new pods when the cron interval hits again.</p>
<p>When I think about it, this functionality makes sense given the specification and meaning of what a cronjob is. However, I would like to know if it would be able to have these pods/processes not be dependent on one another for restart without having to define a duplicate cronjob, all running one process.</p>
<p>Otherwise, maybe I would like to know if it's possible to easily launch more duplicate cronjobs without copying them into multiple manifests.</p>
| <p>Duplicate cronjobs seem to be the way to achieve what you are looking for: produce 3 duplicates, each running a single job at a time. You could template the job manifest and produce multiple copies, as in the following example. The example is not in your problem context, but you can get the idea.
<a href="http://kubernetes.io/docs/tasks/job/parallel-processing-expansion" rel="nofollow noreferrer">http://kubernetes.io/docs/tasks/job/parallel-processing-expansion</a></p>
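<p>A rough sketch of doing that without Helm, assuming a <code>cronjob-template.yaml</code> in which <code>metadata.name</code> contains the placeholder <code>$INSTANCE</code> (e.g. <code>some-job-$INSTANCE</code>) and each CronJob keeps <code>parallelism: 1</code>:</p>
<pre><code>for i in 1 2 3; do
  INSTANCE=$i envsubst < cronjob-template.yaml | kubectl apply -f -
done
</code></pre>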
|
<p>While reading about ingress and ingress controller in k8s, it is documented that there is a load balancer in the ingress controller.</p>
<p>From my understanding, the ingress controller only accepts external requests and forwards them to the target service (the load balancing between the pods actually happens here), so there is no need to balance the load at the level of the ingress controller!</p>
<p>Can anyone clarify the correct scenario?</p>
| <p>You have the right mental model of the implementation of ingress, though it is not what is actually happening:</p>
<pre><code>Ingress -> Service -> 1..n Endpoints (Pods)
</code></pre>
<p>In practice this can lead to problems, though: If you want to influence the actual load balancing and use additional information (like a cookie for sticky sessions) the Ingress must make the decision itself and can not delegate this to the service.</p>
<p>Therefore usually an Ingress queries the endpoints of a service and handles http request balancing itself.</p>
<p>You can observe this actually: Create a small app that echos the hostname (pod name), for example using PHP and a webserver that allows http keepalive. When exposing through LoadBalancer or NodePort you will notice that reloading in the browser does not change the pod per request. cURL will show different pod names per request. This is due to the re-use of the TCP connection so the Service does not re-balance.</p>
<p>If you use an Ingress you will notice that the requests are load balanced across the Pods.</p>
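<p>One way to observe this yourself with curl (hostnames and ports are placeholders): a single curl invocation given several URLs re-uses the TCP connection, much like a browser with keep-alive.</p>
<pre><code># via the Service (NodePort): same pod name repeated, the Service does not re-balance the open connection
curl -s http://<node-ip>:<node-port>/ http://<node-ip>:<node-port>/ http://<node-ip>:<node-port>/

# via the Ingress: typically different pod names, the Ingress balances per HTTP request
curl -s http://echo.example.com/ http://echo.example.com/ http://echo.example.com/
</code></pre>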
|
<p>If I want to run multiple replicas of some container that requires a one off initialisation task, is there a standard or recommended practice?</p>
<p>Possibilities:</p>
<ul>
<li>Use a StatefulSet even if it isn't necessary after initialisation, and have init containers which check to see if they are on the first pod in the set and do nothing otherwise. (If a StatefulSet is needed for other reasons anyway, this is almost certainly the simplest answer.)</li>
<li>Use init containers which use leader election or some similar method to pick only one of them to do the initialisation.</li>
<li>Use init containers, and make sure that multiple copies can safely run in parallel. Probably ideal, but not always simple to arrange. (Especially in the case where a pod fails randomly during a rolling update, and a replacement old pod runs its init at the same time as a new pod is being started.)</li>
<li>Use a separate Job (or a separate Deployment) with a single replica. Might make the initialisation easy, but makes managing the dependencies between it and the main containers in a CI/CD pipeline harder (we're not using Helm, but this would be something roughly comparable to a post-install/post-upgrade hook).</li>
</ul>
| <p>We effectively ended up with a Job that does the initialization task and creates a secret that the Deployment replicas have mounted as a volume, blocking them until the Job has completed. We're using ArgoCD without sync waves.
(There are complications with patching the Job name whenever its spec is updated because Jobs are immutable, but they aren't directly relevant to the original question.)</p>
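<p>For reference, a minimal sketch of that pattern on the Deployment side (all names are made up): the pods mount a Secret that only the init Job creates, so kubelet keeps them in <code>ContainerCreating</code> until the Job has written it.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest
          volumeMounts:
            - name: init-done
              mountPath: /etc/init-done
              readOnly: true
      volumes:
        - name: init-done
          secret:
            secretName: init-completed  # created by the one-off init Job
            optional: false             # pods will not start until the secret exists
</code></pre>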
|
<p>I am installing CNI using null_resource in terraform. Now if the CNI is already installed the terraform script fails with error:
<code>exit status 254. Output: │ An error occurred (ResourceInUseException) when calling the CreateAddon │ operation: Addon already exists.</code>
How can I make terraform continue with execution if the CNI is already installed, rather than failing.</p>
<p>Below is my Configuration for installing CNI:</p>
<pre><code>### Installing CNI Addon ###
resource "null_resource" "install-CNI" {
provisioner "local-exec" {
when = create
interpreter = ["bash", "-c"]
command = <<EOT
aws eks create-addon \
--cluster-name ${data.aws_eks_cluster.Custom_Dev-cluster-deploy.name} \
--addon-name vpc-cni \
--addon-version v1.11.2-eksbuild.1 \
--service-account-role-arn ${aws_iam_role.Custom_Dev-cluster.arn} \
--resolve-conflicts OVERWRITE
EOT
}
triggers = {
"before" = null_resource.eks-config-file.id
}
}
</code></pre>
| <p>You can handle the error based on the response: if the command output contains <code>Addon already exists</code> you can <code>exit 0</code>, and return an error for anything else (which could be an <code>aws cli</code> permission issue or a wrong command).</p>
<pre><code>resource "null_resource" "install-CNI" {
provisioner "local-exec" {
when = create
interpreter = ["bash", "-c"]
command = <<EOT
RESULT=$(aws eks create-addon --cluster-name ${data.aws_eks_cluster.Custom_Dev-cluster-deploy.name} --addon-name vpc-cni --addon-version v1.11.2-eksbuild.1 --service-account-role-arn ${aws_iam_role.Custom_Dev-cluster.arn} --resolve-conflicts OVERWRITE 2>&1)
if [ $? -eq 0 ]
then
echo "Addon installed successfully $RESULT"
exit 0
elif [[ "$RESULT" =~ .*"Addon already exists".* ]]
then
echo "Plugin already exists $RESULT" >&2
exit 0
else
echo "Encounter error $RESULT" >&2
exit 1
fi
EOT
}
triggers = {
"before" = null_resource.eks-config-file.id
}
}
</code></pre>
|
<p>While provisioning a Kubernetes cluster using kubeadm, I get my nodes tagged as "none". I would like to know if there is an option to add a role name manually for the node.</p>
| <p>You can label the node by <code>kubectl label node <name> key=value</code></p>
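<p>For the ROLES column specifically, kubectl looks at labels with the <code>node-role.kubernetes.io/</code> prefix, so a label along these lines should work (node and role names are examples):</p>
<pre><code>kubectl label node worker-1 node-role.kubernetes.io/worker=worker
kubectl get nodes   # ROLES should now show "worker" for that node
</code></pre>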
|
<p>I have an ASP.NET Core Worker Service which gets some information like thread count, batch size etc. from <code>appsettings.json</code>. These settings only take effect on startup and cannot be changed while the service is running.
The application will be running in a kubernetes cluster.</p>
<p>What is the best practice for changing these settings while deployed in production?
What I need to have done is, changing the settings and restart the service.</p>
<p>I can come up with these solution:</p>
<ol>
<li>Mapping <code>appsetttings.json</code> to a volume, to be able to change it, so that when container is restarted the appsettings is not reverted.
<ul>
<li>Do I need to move the <code>appsettings.json</code> file to another folder then root? Is there something about not mapping the running application to a volume?</li>
</ul>
</li>
<li>Update the containers environment variables to override settings.
<ul>
<li>Will these settings be reverted back when the container is restarted?)</li>
</ul>
</li>
</ol>
<p>Any other ideas?</p>
| <p>In Kubernetes, the suggested way of handling this type of non-confidential configuration settings is to use the ConfigMap API:</p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/configmap/</a></p>
<p>This gives you 1 MB, which should be fine for a typical appsettings file.</p>
<p>We're advised to use YAML format instead of JSON here, and an example from the link could be:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-app-settings
data:
MyKey: "my-value"
MySecondKey: "my other value"
# file-like keys, completely optional naming
# the app.properties and the operation.properties are just examples
app.properties: |
obj.property=value1, value2
obj2.property-with-dash=value 3
operation.properties: |
obj3.property=value3, value1
obj4=value4, value 5
</code></pre>
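<p>To get those values into the worker service you can mount the ConfigMap as files (or expose it via <code>envFrom</code>). A minimal sketch, assuming a hypothetical deployment named <code>my-worker</code> and the <code>my-app-settings</code> ConfigMap from above:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-worker
spec:
  selector:
    matchLabels:
      app: my-worker
  template:
    metadata:
      labels:
        app: my-worker
    spec:
      containers:
      - name: my-worker
        image: my-worker:latest
        volumeMounts:
        - name: settings
          mountPath: /app/config   # point the .NET configuration builder at this folder
          readOnly: true
      volumes:
      - name: settings
        configMap:
          name: my-app-settings
</code></pre>
<p>After editing the ConfigMap, a <code>kubectl rollout restart deployment/my-worker</code> restarts the pods so the service reads the new settings on startup.</p>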
|
<p>I am using Azure devops to deploy services to AKS. There are a few instances where, even though the pods weren't started (they were in a crashed state), the pipeline still shows success. I want to make the task/pipeline fail if the deployment isn't successfully rolled out. I tried to use</p>
<pre><code>kubectl rollout status deployment name --namespace nsName
</code></pre>
<p>but it's just stating the status. Even if there is an error in the deployment.yaml, the task simply says that there is an error while the rollout status still reports a successful rollout. Is there a way I can make the pipeline fail when there is an error in the deployment or the pods aren't created?</p>
<p>My Yaml Task</p>
<pre><code>- task: AzureCLI@2
inputs:
azureSubscription: ${{ parameters.svcconn }}
scriptType: 'pscore'
scriptLocation: 'inlineScript'
inlineScript: |
az aks get-credentials -n $(clusterName) -g $(clusterRG)
kubectl apply -f '$(Pipeline.Workspace)/Manifest/deployment.yaml' --namespace=${{ parameters.NS }}
kubectl rollout status deployment ***svc-deployment --namespace=${{ parameters.NS }}
</code></pre>
| <p>Based on the approach from @DevUtkarsh, I am able to fail my pipeline when the pods are not in a running state, thanks to <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a>. Basically, I am doing two things: 1. validate whether the right version is deployed, and 2. validate that the pods are in a running state.</p>
<pre><code>inlineScript: |
$imageVersion = kubectl get deployment <deployment_name> --namespace ${{ parameters.kubernetesNS }} -o=jsonpath='{$.spec.template.spec.containers[:1].image}'
# validate whether the right version is deployed
if ( "$imageVersion" -ne "$(expectedImageVersion)" )
{
"invalid image version"
exit 1
}
#validate whether minimum number of pods are running
$containerCount = (kubectl get pods --selector=app=<appname> --field-selector=status.phase=Running --namespace ${{ parameters.kubernetesNS }}).count
$containerCount
if ( $containerCount -lt 4 )
{
"pods are not in running state"
exit 1
}
</code></pre>
<p>Though there is still one issue: even though the total number of running pods is 3, the count still shows 4 because it includes the header line (adding <code>--no-headers</code> to the <code>kubectl get pods</code> command would avoid that).</p>
|
<p>I'm on macOS and I'm using <code>minikube</code> with the <code>hyperkit</code> driver: <code>minikube start --driver=hyperkit</code></p>
<p>and everything seems ok...</p>
<p>with <code>minikube status</code>:</p>
<pre><code>minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
</code></pre>
<p>with <code>minikube version</code>:</p>
<pre><code>minikube version: v1.24.0
</code></pre>
<p>with <code>kubectl version</code>:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>and with <code>kubectl get no</code>:</p>
<pre><code>NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 13m v1.22.3
</code></pre>
<p>My problem is that when I deploy anything, it won't pull any image...</p>
<p>for instance:</p>
<pre><code>kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
</code></pre>
<p>then <code>kubectl get pods</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
hello-minikube-6ddfcc9757-nfc64 0/1 ImagePullBackOff 0 13m
</code></pre>
<p>Then I tried to figure out what the problem is:</p>
<pre><code>k describe pod/hello-minikube-6ddfcc9757-nfc64
</code></pre>
<p>here is the result:</p>
<pre><code>Name: hello-minikube-6ddfcc9757-nfc64
Namespace: default
Priority: 0
Node: minikube/192.168.64.8
Start Time: Sun, 16 Jan 2022 10:49:27 +0330
Labels: app=hello-minikube
pod-template-hash=6ddfcc9757
Annotations: <none>
Status: Pending
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/hello-minikube-6ddfcc9757
Containers:
echoserver:
Container ID:
Image: k8s.gcr.io/echoserver:1.4
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5qql (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-k5qql:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned default/hello-minikube-6ddfcc9757-nfc64 to minikube
Normal Pulling 16m (x4 over 18m) kubelet Pulling image "k8s.gcr.io/echoserver:1.4"
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "k8s.gcr.io/echoserver:1.4": rpc error: code = Unknown desc = Error response from daemon: Get "https://k8s.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 16m (x4 over 18m) kubelet Error: ErrImagePull
Warning Failed 15m (x6 over 18m) kubelet Error: ImagePullBackOff
Normal BackOff 3m34s (x59 over 18m) kubelet Back-off pulling image "k8s.gcr.io/echoserver:1.4"
</code></pre>
<p>then tried to get some logs!:</p>
<p><code>k logs pod/hello-minikube-6ddfcc9757-nfc64</code> and <code>k logs deploy/hello-minikube</code></p>
<p>both return the same result:</p>
<pre><code>Error from server (BadRequest): container "echoserver" in pod "hello-minikube-6ddfcc9757-nfc64" is waiting to start: trying and failing to pull image
</code></pre>
<p>this deployment was an example from <a href="https://minikube.sigs.k8s.io/docs/start/" rel="noreferrer">minikube documentation</a></p>
<p>but I have no idea why it doesn't pull any image...</p>
| <p>I had exactly the same problem.
I found out that my internet connection was slow;
the timeout to pull an image is <code>120</code> seconds, so kubectl <strong>could not</strong> pull the image in under <code>120</code> seconds.</p>
<p>First, use minikube to pull the image you need,
for example:</p>
<pre><code>minikube image load k8s.gcr.io/echoserver:1.4
</code></pre>
<p>and then everything will work because now kubectl will use the image that is stored locally.</p>
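<p>You can double-check that the image is now present inside the minikube runtime before re-creating the deployment, for example:</p>
<pre><code>minikube image ls | grep echoserver
</code></pre>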
|
<p>First, the yaml file is right, because I can use it directly to create a mysql cluster in kubernetes.<br />
But when I try to create a mysql cluster through the kubernetes api for java, an error occurred.<br />
The command in the yaml file cannot be recognized by the process.<br />
The key component of the yaml file is as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>initContainers:
- name: init-mysql
image: mysql:5.7.33
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: password
command:
- bash
- "-c"
- |
set -ex
[[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
if [[ ${ordinal} -eq 0 ]]; then
cp /mnt/config-map/master.cnf /mnt/conf.d
else
cp /mnt/config-map/slave.cnf /mnt/conf.d
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
</code></pre>
<p>and the java code is as follows</p>
<pre class="lang-java prettyprint-override"><code>........
........
.withInitContainers(new V1ContainerBuilder()
.withName("init-mysql")
.withImage("mysql:5.7.33")
.withEnv(env)
.withCommand("bash",
"\"-c\"",
"|",
"set -ex",
"[[ $(hostname) =~ -([0-9]+)$ ]] || exit 1",
"ordinal=${BASH_REMATCH[1]}",
"echo [mysqld] > /mnt/conf.d/server-id.cnf",
"echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf",
"if [[ ${ordinal} -eq 0 ]]; then",
" cp /mnt/config-map/master.cnf /mnt/conf.d",
"else",
" cp /mnt/config-map/slave.cnf /mnt/conf.d",
"fi"
)
.withVolumeMounts(new V1VolumeMountBuilder()
.withName("conf")
.withMountPath("/mnt/conf.d")
.build(),
new V1VolumeMountBuilder()
.withName("config-map")
.withMountPath("/mnt/config-map")
.build()
)
.build(),
......
......
</code></pre>
<p>The java code and the yaml file look the same, but when I execute it, an error occurred. The result of kubectl logs is as follows:</p>
<pre class="lang-bash prettyprint-override"><code>bash: "-c": No such file or directory
</code></pre>
<p>So I think it may be caused by incorrect params passed to the withCommand function.<br />
How can I fix it? Thank you.</p>
| <p>In your Java code, you are explicitly including double quotes in the string, <code>withCommand("bash", "\"-c\"", ...)</code>. That's causing the container command to execute something similar to <code>bash '"-c"' ...</code> where the quotes are part of the argument. In turn, that doesn't start with a hyphen, so bash interprets it as a script to run, but when there isn't a local file named exactly <code>"-c"</code> including the quotes as part of the filename, you get that error.</p>
<p>The answer to your immediate question is just to remove those extra quotes</p>
<pre class="lang-java prettyprint-override"><code>.withCommand("bash",
"-c", // no extra quotes here
...)
</code></pre>
<p>There's a second half of your question about the YAML syntax: why does what you show work? I believe it would also work to remove the double quotes here</p>
<pre class="lang-yaml prettyprint-override"><code>command:
- bash
- -c
- ...
</code></pre>
<p>The trick here is that YAML has three different kinds of inline strings ("flow scalars"). <a href="https://yaml.org/spec/1.2.2/#733-plain-style" rel="nofollow noreferrer">Plain strings</a> have no quotes but allow no escaping either; <a href="https://yaml.org/spec/1.2.2/#732-single-quoted-style" rel="nofollow noreferrer">single-quoted strings</a> have <code>'single quotes'</code> but very limited escaping options; and <a href="https://yaml.org/spec/1.2.2/#731-double-quoted-style" rel="nofollow noreferrer">double-quoted strings</a> have <code>"double quotes"</code> and also can represent any string through escaping. <a href="https://yaml.org/spec/1.2.2/#81-block-scalar-styles" rel="nofollow noreferrer">Block scalars</a> provide a further kind of string for multi-line text.</p>
<pre class="lang-yaml prettyprint-override"><code>These are all the same:
- the same
- 'the same'
- "the same"
- "the\u0020same"
- >-
the same
</code></pre>
<p>In some cases you need a particular quoting style to work around YAML syntax issues. In your case, the <code>-c</code> option looks somewhat similar to the YAML <code>- item</code> list syntax (though the list syntax <em>requires</em> a space after the hyphen) so perhaps the original YAML author chose to quote it to make it unambiguous. It shouldn't be required here though.</p>
<p>The YAML parser will remove the quotes before the application sees them. In the Kubernetes case, this means the quotes around <code>"-c"</code> are removed at the YAML layer and the actual command list contains <code>[bash, -c, ...]</code> with no quotes.</p>
|
<p>I have a Kubernetes namespace with limited privileges which excludes the creation of <code>ClusterRole</code> and <code>ClusterRoleBinding</code>.
I want to monitor the resource consumption and pod-related metrics on the namespace level.</p>
<p>E.g., pod health and status, new pod creation, pod restarts, etc.</p>
<p>Although I can create an application-level metrics endpoint for custom metrics by exposing <code>/metrics</code> and adding the annotation <code>prometheus.io/scrape: 'true'</code>.</p>
<p>But is there a way to get <code>resource consumption and pod-related metrics</code> on the namespace level without <code>Cluster Role</code> and <code>ClusterRoleBinding</code>?</p>
| <p>It is possible to get namespace level entities from kube-state-metrics.</p>
<ol>
<li><p>Pull the helm chart for kube-state-metrics:
<a href="https://bitnami.com/stack/kube-state-metrics/helm" rel="nofollow noreferrer">https://bitnami.com/stack/kube-state-metrics/helm</a></p>
</li>
<li><p>Edit the values.yaml file and make the following changes:</p>
<pre><code> rbac:
create: false
useClusterRole: false
collectors:
- configmaps
- cronjobs
- daemonsets
- deployments
- endpoints
- horizontalpodautoscalers
- ingresses
- jobs
- limitranges
- networkpolicies
- poddisruptionbudgets
- pods
- replicasets
- resourcequotas
- services
- statefulsets
namespace: <current-namespace>
</code></pre>
</li>
<li><p>In the prometheus ConfigMap, add a job with the following configurations:</p>
<pre><code> - job_name: 'kube-state-metrics'
scrape_interval: 1s
scrape_timeout: 500ms
static_configs:
- targets: ['{{ .Values.kube_state_metrics.service.name }}:8080']
</code></pre>
</li>
<li><p>Create a role binding:</p>
<pre><code> apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kube-state-metrics
namespace: <current-namespace>
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- kind: ServiceAccount
name: kube-state-metrics
namespace: <current-namespace>
</code></pre>
</li>
</ol>
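<p>Once Prometheus is scraping it, namespace-scoped pod metrics such as <code>kube_pod_status_phase</code> and <code>kube_pod_container_status_restarts_total</code> become available. A couple of example queries (the namespace value is a placeholder):</p>
<pre><code># pods currently running in the namespace
kube_pod_status_phase{namespace="<current-namespace>", phase="Running"}

# container restarts over the last hour
increase(kube_pod_container_status_restarts_total{namespace="<current-namespace>"}[1h])
</code></pre>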
|
<p>I have a managed MariaDB with SSL enabled deployed in Azure, and I created a service of type "external" named "mysql" within my k8s cluster.</p>
<p>Then I created a secret like the following:</p>
<pre><code>kubectl create secret generic ca-cert --from-file=ca-cert=./BaltimoreCyberTrustRoot.crt.pem -n app
</code></pre>
<p>PS: here is where I got <code>BaltimoreCyberTrustRoot.crt.pem</code> from:</p>
<p><code>wget https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem</code></p>
<p>Then I deployed WordPress:</p>
<pre><code>helm template wp azure-marketplace/wordpress -n app --create-namespace -f values.yml
</code></pre>
<p>where values.yml looks like :</p>
<pre><code>##############################PART1########################
#pvc wordpress
persistence:
enabled: false
#pvc mariadb
mariadb:
enabled: false
externalDatabase:
host: mysql
port: 3306
user: benighil@benighil
password: "SomePassword"
database: bitnami_wordpress
##############################PART2########################
extraEnvVars:
- name: "WORDPRESS_DATABASE_SSL_CA_FILE"
value: /tmp/ca-cert
## Additional volume mounts
## Example: Mount CA file
extraVolumeMounts:
- name: ca-cert
mountPath: /tmp
## Additional volumes
## Example: Add secret volume
extraVolumes:
- name: ca-cert
secret:
secretName: ca-cert
</code></pre>
<p>But the pod logs give:</p>
<pre><code>wordpress 22:08:07.00 ERROR ==> Could not connect to the database
</code></pre>
<p><strong>NOTE1:</strong> When i exec into pod, and do : <code>env | grep WORDPRESS_DATABASE_SSL_CA_FILE</code> it gives : <code>WORDPRESS_DATABASE_SSL_CA_FILE=/tmp/ca-cert</code> and when i do <code>cat /tmp/ca-cert</code> it gives its content normally.</p>
<p><strong>NOTE2:</strong> The credentials are CORRECT, because when i desable SSL from MariaDB, and delete the whole PART2 from <code>values.yml</code> then it works fine!</p>
<p>Any help please?</p>
| <p>So first make sure that the DB exists on the Azure MariaDB server. The second thing is that the <code>/tmp</code> path is used by the daemon itself, so certs should not be mounted there; mount them somewhere the daemon can read instead.</p>
<pre><code>wordpress 04:19:09.91 INFO ==> Persisting WordPress installation
/opt/bitnami/scripts/libpersistence.sh: line 51: /tmp/perms.acl: Read-only file system
</code></pre>
<p>So make the changes below and it should work:</p>
<pre><code>extraEnvVars:
- name: "WORDPRESS_DATABASE_SSL_CA_FILE"
value: /opt/bitnami/wordpress/tmp/ca-cert
- name: WORDPRESS_ENABLE_DATABASE_SSL
value: "yes"
## Additional volume mounts
## Example: Mount CA file
extraVolumeMounts:
- name: ca-cert
mountPath: /opt/bitnami/wordpress/tmp
</code></pre>
<p>Otherwise you will have to set extra params for the same path:</p>
<pre><code> containerSecurityContext:
enabled: true
privileged: false
allowPrivilegeEscalation: false
## Requires mounting an `extraVolume` of type `emptyDir` into /tmp
##
readOnlyRootFilesystem: false
capabilities:
drop:
- ALL
</code></pre>
|
<p>In EKS I am trying to use SecretProviderClass to provide secrets as environment variables to containers. I can see the secret mounted inside the container but no combination of keys/names is allowing me to use it as an environment variable. Inside the container I can
<code>cat /mnt/secrets-store/awscredentials</code>
and see the output:</p>
<pre><code>{"accesskey":"ABCDE12345","secretkey":"a/long/redacted5tring"}
</code></pre>
<p>My SecretProviderClass is below</p>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: aws-secrets
namespace: default
spec:
provider: aws
parameters:
objects: |
- objectName: "path/to/service/awscredentials"
objectType: secretsmanager
objectAlias: awscredentials
secretObjects:
- secretName: awscredentials
type: Opaque
data:
- objectName: accesskeyalias
key: accesskey
- objectName: secretkeyalias
key: secretkey
</code></pre>
<p>and my deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myservice
labels:
team: devops
spec:
replicas: 1
selector:
matchLabels:
app: myservice
template:
metadata:
labels:
app: myservice
spec:
serviceAccountName: myservice
volumes:
- name: secrets-store-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "aws-secrets"
containers:
- name: myservice
image: someimage:2
volumeMounts:
- name: secrets-store-inline
mountPath: "/mnt/secrets-store"
readOnly: true
env:
- name: AWS_ACCESS_KEY
valueFrom:
secretKeyRef:
name: awscredentials
key: accesskey
</code></pre>
<p>When I run the deployment without reference to the SecretKeyRef the container runs and I can see the secret under <code>/mnt/secrets-store/awscredentials</code>. However, trying to set the environment variable results in the pod stuck in Pending state and the message:
<code>Error: secret "awscredentials" not found</code>
I reckon I have mixed up the name and keys somewhere but I've spent hours trying every combination I can think of. What am I missing?</p>
| <p>I eventually got this sorted. I had followed the AWS documentation for installing the driver which included using a helm chart i.e.</p>
<pre><code>helm install -n kube-system csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver
</code></pre>
<p>However, the documentation failed to point out that with this helm chart you need to specifically set a value <code>syncSecret.enabled=true</code> - as in the code below.</p>
<pre><code>helm install -n kube-system csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --set syncSecret.enabled=true
</code></pre>
<p>I used helm to uninstall the secrets-store-csi-driver, then re-installed with <code>syncSecret.enabled=true</code> and immediately my secretsmanager secret was available via <code>kubectl get secrets -n default</code>.
So if you can see the secrets inside the container in the mounted volume but you can't set them as environment variables you should check that you installed with this value so that the k8s secret is created. Otherwise the secretObjects > secretName section is not actioned.</p>
|
<p>I'm trying to follow the instructions at <a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md" rel="noreferrer">https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md</a> to create a Kubernetes Dashboard token. However, when I run the specified command, I get an error</p>
<pre><code>% kubectl -n kubernetes-dashboard create token admin-user
Error: must specify one of -f and -k
error: unknown command "token admin-user"
See 'kubectl create -h' for help and examples
</code></pre>
<p>If I jump back <a href="https://github.com/kubernetes/dashboard/commit/f78bd53d7561eb614308b1aff5a134668244f2a4#diff-8c263c6ea3af03864ae35dfca688964ee8b03f118a59b904b409c09a34b1309f" rel="noreferrer">in the doc history</a>, I see a different, more verbose command that I can run</p>
<pre><code>% kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
</code></pre>
<p>This seems to work OK, and the PR for the doc change mentions "version 1.24" but doesn't mention what piece of software version 1.24 refers to (<code>kubectl</code>? The Dashboard? Kubernetes itself? <code>kind</code>? Something else?)</p>
<p>So what's going on with that first command? Why doesn't it work?</p>
| <p>If your version is lower than 1.24, please run the following command.</p>
<pre><code>kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
</code></pre>
<p>This works in my case. Thanks.</p>
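<p>For reference (assuming the "version 1.24" in the docs refers to kubectl/Kubernetes 1.24, where the <code>create token</code> subcommand was introduced), you can check your client version and, once on 1.24 or later, use the short command from the current docs:</p>
<pre><code>kubectl version --short
kubectl -n kubernetes-dashboard create token admin-user
</code></pre>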
|
<blockquote>
<p>error: resource mapping not found for name: "ingress-srv" namespace: "" from "ingress-srv.yaml": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
ensure CRDs are installed first</p>
</blockquote>
<p>I am new to Kubernetes. I was setting up ingress-nginx on minikube and it installed successfully, but when I run kubectl apply -f filename it gives the above error.</p>
<p>Here is the code,
filename: <strong>ingress-srv.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: posts.com
http:
paths:
- path: /posts
pathType: Prefix
backend:
serviceName: posts-clusterip-srv
servicePort: 4000
</code></pre>
| <p>The resource type specified in your manifest, <code>networking.k8s.io/v1beta1 Ingress</code>, was removed in Kubernetes v1.22 and replaced by <code>networking.k8s.io/v1 Ingress</code> (see the <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingress-v122" rel="noreferrer">deprecation guide</a> for details). If your cluster's Kubernetes server version is 1.22 or higher (which I suspect it is) trying to create an Ingress resource from your manifest will result in exactly the error you're getting.</p>
<p>You can check your cluster's Kubernetes server version (as Kamol Hasan points out) using the command <code>kubectl version --short</code>.</p>
<p>If the version is indeed 1.22 or higher, you'll need to modify your YAML file so that its format is valid in the new version of the API. This <a href="https://github.com/kubernetes/kubernetes/pull/89778" rel="noreferrer">pull request</a> summarises the format differences. In your case, <code>ingress-srv.yaml</code> needs to be changed to:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: posts.com
http:
paths:
- path: /posts
pathType: Prefix
backend:
service:
name: posts-clusterip-srv
port:
number: 4000
</code></pre>
|
<p>I've created a secret okay by doing this...<code>kubectl create secret generic <namespace> <secret-name> --from-literal=value1=xxxx --from-literal=value2=xxxx --from-literal=value3=xxxx</code></p>
<p>When I do a get command I get</p>
<pre><code> apiVersion: v1
data:
value1: xxxx
value2: xxxx
value3: xxxx
kind: Secret
metadata:
creationTimestamp: <time>
name: <secret-name>
namespace: <namespace>
resourceVersion: <version number>
uid: <alpha-numeric>
type: Opaque
</code></pre>
<p>...the thing is... I was expecting it to automatically include an annotations section below where it says metadata, so that it would look more like:</p>
<pre><code> apiVersion: v1
data:
value1: xxxx
value2: xxxx
value3: xxxx
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"value1":<value1>,"value2":<value2> ,"value3" <value3>},"kind":"Secret","metadata":{"annotations":{},"name":"<secret-name>","namespace":"
<namespace>"},"type":"Opaque"}
creationTimestamp: <time>
name: <secret-name>
namespace: <namespace>
resourceVersion: <version number>
uid: <alpha-numeric>
type: Opaque
</code></pre>
<p>Is this ONLY possible if you add the secret from a file, or is there a way you can add this annotation information via the string literal? I've been searching the internet but the only solution I can find is via a file... not through a string as such... can anybody help?</p>
| <blockquote>
<p>Is this ONLY possible if you add the secret from a file</p>
</blockquote>
<p>Yes: this annotation is used to compare the live manifest with the manifest in the file.
<strong>But we can annotate the secret even if it's created without a manifest file.</strong></p>
<blockquote>
<p>The kubectl apply command writes the contents of the configuration file to the kubectl.kubernetes.io/last-applied-configuration annotation. This is used to identify fields that have been removed from the configuration file and need to be cleared from the live configuration.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#merge-patch-calculation" rel="nofollow noreferrer">merge-patch-calculation</a></p>
<p>So, for example, if we created a secret like:</p>
<pre><code> kubectl create secret generic test --from-literal=value1=xxxx
</code></pre>
<p>and now we want to patch this secret with a manifest file, we will get a warning: <code>"Warning: resource secrets/test is missing the kubectl.kubernetes.io/last-applied-configuration"</code></p>
<pre><code>apiVersion: v1
data:
value1: eHh4eA==
kind: Secret
metadata:
name: test
namespace: playground-account-app
type: Opaque
</code></pre>
<p>and if we try to apply this we will get the warning:</p>
<pre><code>Warning: resource secrets/test is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
</code></pre>
<p><strong>Avoiding this warning</strong>:</p>
<p>The above warning appears because the annotation is missing, but we can annotate the object in Kubernetes to fix and avoid it.</p>
<p><a href="https://kubernetes.io/blog/2021/04/20/annotating-k8s-for-humans/#kubernetes-annotations" rel="nofollow noreferrer">kubernetes-annotations</a></p>
<pre><code> kubectl annotate secret test --overwrite kubectl.kubernetes.io/last-applied-configuration='{"apiVersion":"v1","data":{"value1":"eHh4eA=="},"kind":"Secret","metadata":{"annotations":{},"name":"test","namespace":"namespace-name"},"type":"Opaque"}'
</code></pre>
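<p>As a side note (a sketch, not required): creating the secret with the <code>--save-config</code> flag writes the <code>last-applied-configuration</code> annotation at creation time, without needing a manifest file:</p>
<pre><code>kubectl create secret generic test --from-literal=value1=xxxx --save-config
</code></pre>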
|
<p>Since NFS has a single-point-of-failure issue, I am thinking of building a storage layer using S3 or Google Cloud Storage as a PersistentVolume in my local k8s cluster.</p>
<p>After a lot of googling, I still cannot find a way. I have tried using s3 fuse to mount the volume locally, and then creating a PV by specifying the hostPath. However, a lot of my pods (for example airflow, jenkins) complained about no write permission, or said "version being changed".</p>
<p>Could someone help figure out the right way to mount an S3 or GCS bucket as a PersistentVolume from a local cluster without using AWS or GCP?</p>
| <p>S3 is not a file system and is not intended to be used in this way.</p>
<p>I do not recommend using S3 this way, because in my experience FUSE drivers are very unstable, and with I/O operations you will easily ruin your mounted disk and get stuck in a <a href="https://stackoverflow.com/questions/37476114/s3fs-throws-error-as-transport-endpoint-is-not-connected">Transport endpoint is not connected</a> nightmare for you and your infrastructure users. It may also lead to high CPU usage and RAM leakage.</p>
<p>Useful crosslinks:</p>
<ul>
<li><p><a href="https://stackoverflow.com/questions/51673011/how-to-mount-s3-bucket-on-kubernetes-container-pods/51677039">How to mount S3 bucket on Kubernetes container/pods?</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/14010115/amazon-s3-with-s3fs-and-fuse-transport-endpoint-is-not-connected/14010199">Amazon S3 with s3fs and fuse, transport endpoint is not connected</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/10801158/how-stable-is-s3fs-to-mount-an-amazon-s3-bucket-as-a-local-directory">How stable is s3fs to mount an Amazon S3 bucket as a local directory</a></p>
</li>
</ul>
|
<p>I've just started learning about kubernetes and yaml files.</p>
<p>Although the kubernetes documentation provides lots of examples,</p>
<p>one thing I couldn't manage to find was how each different <strong>Kind</strong> (Deployment, Pod, Service, ConfigMap, etc.) requires a different yaml structure and keys.</p>
<p>So I would like to know: how do I find out all the child keys that a given parent key expects to have, and the data type each key expects (for example, when it expects a list of something)?</p>
<p>I really would appreciate some help with that, thank you.</p>
| <p><a href="https://kubernetes.io/docs/reference/kubernetes-api/" rel="nofollow noreferrer">It's documented on the Kubernetes website.</a> However it may be a bit cumbersome to learn it like this, and you may prefer to learn mostly by looking at examples and existing yaml files.</p>
<p>Also, a good IDE that supports Kubernetes can help you write the files.</p>
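<p>You can also discover the expected child keys and their types directly from the cluster with <code>kubectl explain</code>, drilling down with dot notation, for example:</p>
<pre><code>kubectl explain deployment.spec
kubectl explain deployment.spec.template.spec.containers --recursive
</code></pre>
<p>The output lists each field, its type (string, integer, list of objects, etc.) and a short description taken from the API schema.</p>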
|
<p>With my team, we're currently building an API using FastAPI and we're really struggling to get good performances out of it once deployed to Kubernetes. We're using async calls as much as possible but overall sitting at ~8RPS / pod to stay under our SLA of P99 200ms.</p>
<p>For resources, we assign the following:</p>
<pre><code>resources:
limits:
cpu: 1
memory: 800Mi
requests:
cpu: 600m
memory: 100Mi
</code></pre>
<p>Surprisingly, such performance drops don't occur when running load tests on the API running locally in a Docker container. There we easily get ~200RPS on a single container with 120ms latency at P99...</p>
<p>Would anyone have an idea of what could go wrong in there and where I could start looking to find the bottleneck?</p>
<p>Cheers!</p>
| <p>First, try to request at least 1 CPU for your API. If there are no available CPUs on the node, the pod will only use the reserved (requested) amount of CPU, which is 600m; so if you have another application with requests cpu=400m, for example, kubernetes will run both applications on the same CPU, with 60% of the time for the API and 40% for the second application. Docker on your local machine, by contrast, uses a full CPU (maybe more).</p>
<p>If you are using Uvicorn with multiple workers, you can also increase the CPU limit to at least 2.</p>
<pre class="lang-yaml prettyprint-override"><code> Resources:
limits:
processor: 2
memory: 800Mi
requests:
processor: 1
memory: 100Mi
</code></pre>
<p>Finally, there is a difference between your local machine's CPUs and the kubernetes cluster's CPUs; if you want good performance, you can test different CPU types and choose the most suitable one in terms of cost.</p>
|
<p>I have a Neo4j database running on Kubernetes. I want to make scheduled backups for the database. I know that Neo4j provides <a href="https://neo4j.com/docs/operations-manual/current/backup-restore/" rel="nofollow noreferrer">a set of tools for backup and restore</a>. However, Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/" rel="nofollow noreferrer"><code>VolumeSnapshot</code></a> also looks viable for backup and restore.</p>
<p>I wonder if it's a good idea to use Kubernetes <code>VolumeSnapshot</code> to backup/restore Neo4j databases? Will it cause errors like inconsistency database status or faulty disk problem? Thanks.</p>
| <p>Generally, if it is not supported by the database, then it is a bad idea.</p>
<p>Think of your database as being stored across:</p>
<ol>
<li>Database files on disk</li>
<li>Page cache (in volatile memory)</li>
<li>Write ahead transaction logs on disk</li>
</ol>
<p>A volume snapshot would not save enough information to get a consistent state of your database (unless the database is gracefully shut down).</p>
<p>Use the set of tools provided for backup/restore</p>
|
<p>So we have a kubernetes cluster running some pods with celery workers. We are using python3.6 to run those workers and celery version is 3.1.2 (I know, really old, we are working on upgrading it). We have also setup some autoscaling mechanism to add more celery workers on the fly.</p>
<p>The problem is the following. Let's say we have 5 workers at any given time. Then a lot of tasks come in, increasing the CPU/RAM usage of the pods. That triggers an autoscaling event, adding, let's say, two more celery worker pods. Those two new celery workers then take some long-running tasks. Before they finish running those tasks, kubernetes creates a downscaling event, killing those two workers, and killing those long-running tasks too.</p>
<p>Also, for legacy reasons, we do not have a retry mechanism if a task is not completed (and we cannot implement one right now).</p>
<p>So my question is, is there a way to tell kubernetes to wait for the celery worker to have run all of its pending tasks? I suppose the solution must also include some way to notify the celery worker to stop receiving new tasks. Right now I know that Kubernetes has some scripts to handle this kind of situation, but I do not know what to write in those scripts because I do not know how to make the celery worker stop receiving tasks.</p>
<p>Any idea?</p>
| <p>I wrote a <a href="https://blog.dy.engineering/hpa-for-celery-workers-6efd82444aee" rel="nofollow noreferrer">blog post</a> exactly on that topic - check it out.</p>
<p>When Kubernetes decides to kill a pod, it first sends a SIGTERM signal, so your application has time to shut down gracefully; after that, if your application hasn't ended, Kubernetes will kill it by sending a SIGKILL signal.</p>
<p>This period between SIGTERM and SIGKILL can be tuned by <code>terminationGracePeriodSeconds</code> (more about it <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution" rel="nofollow noreferrer">here</a>).</p>
<p>In other words, if your longest task takes 5 minutes, make sure to set this value to something higher than 300 seconds.</p>
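<p>A minimal sketch of where that field goes in a worker Deployment (names and the image are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-worker
spec:
  selector:
    matchLabels:
      app: celery-worker
  template:
    metadata:
      labels:
        app: celery-worker
    spec:
      terminationGracePeriodSeconds: 360   # longer than the longest task (300s in the example above)
      containers:
      - name: worker
        image: my-celery-worker:latest
</code></pre>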
<p>Celery handles those signals for you, as you can see <a href="https://docs.celeryq.dev/en/stable/userguide/workers.html#stopping-the-worker" rel="nofollow noreferrer">here</a> (I guess it is relevant for your version as well):</p>
<p>Shutdown should be accomplished using the TERM signal.</p>
<blockquote>
<p>When shutdown is initiated the worker will finish all currently
executing tasks before it actually terminates. If these tasks are
important, you should wait for it to finish before doing anything
drastic, like sending the KILL signal.</p>
</blockquote>
<p>As explained in the docs, you can set the <code>acks_late=True</code> <a href="https://docs.celeryq.dev/en/stable/reference/celery.app.task.html#celery.app.task.Task.acks_late" rel="nofollow noreferrer">configuration</a> so the task will run again if it stopped accidentally.</p>
<p>Another thing that I didn't find documentation for (though I'm almost sure I saw it somewhere): a Celery worker won't receive new tasks after getting a SIGTERM, so you should be safe to terminate the worker (it might require setting <code>worker_prefetch_multiplier = 1</code> as well).</p>
|
<p>I am running a 2-node K8s cluster on OVH Bare Metal Servers. I've set up <strong>MetalLB</strong> and <strong>Nginx-Ingress</strong>. The 2 servers both have public IPs and are not in the same network segment. I've used one of the IPs as the entrypoint for the LB. For the deployments, I created 3 nginx containers & services to test the forwarding.
When I use host-based routing, the endpoints are reachable via the internet, but when I use path-based forwarding, only the / path is reachable. For the rest, I get the default backend.
My host-based Ingress resource:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource-2
spec:
ingressClassName: nginx
rules:
- host: nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deploy-main
port:
number: 80
- host: blue.nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deploy-blue
port:
number: 80
- host: green.nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deploy-green
port:
number: 80
</code></pre>
<p>The path based Ingress resource:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource-3
spec:
ingressClassName: nginx
rules:
- host: nginx.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx
port:
number: 80
- path: /blue
pathType: Prefix
backend:
service:
name: nginx-deploy-blue
port:
number: 80
- path: /green
pathType: Prefix
backend:
service:
name: nginx-deploy-green
port:
number: 80
</code></pre>
<p>The endpoints are all reachable in both cases</p>
<pre><code># kubectl describe ing ingress-resource-2
Name: ingress-resource-2
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
nginx.example.com
/ nginx:80 (192.168.107.4:80)
blue.nginx.example.com
/ nginx-deploy-blue:80 (192.168.164.212:80)
green.nginx.example.com
/ nginx-deploy-green:80 (192.168.164.213:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 13m nginx-ingress-controller Configuration for default/ingress-resource-2 was added or updated
</code></pre>
<pre><code># kubectl describe ing ingress-resource-3
Name: ingress-resource-3
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
nginx.example.com
/ nginx:80 (192.168.107.4:80)
/blue nginx-deploy-blue:80 (192.168.164.212:80)
/green nginx-deploy-green:80 (192.168.164.213:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 109s nginx-ingress-controller Configuration for default/ingress-resource-3 was added or updated
</code></pre>
<p>Getting the Nginx-Ingress logs:</p>
<pre><code># kubectl -n nginx-ingress logs pod/nginx-ingress-6947fb84d4-m9gkk
W0803 17:00:48.516628 1 flags.go:273] Ignoring unhandled arguments: []
I0803 17:00:48.516688 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.0 PlusFlag=false
I0803 17:00:48.516692 1 flags.go:191] Commit=979db22d8065b22fedb410c9b9c5875cf0a6dc66 Date=2022-07-12T08:51:24Z DirtyState=false Arch=linux/amd64 Go=go1.18.3
I0803 17:00:48.527699 1 main.go:210] Kubernetes version: 1.24.3
I0803 17:00:48.531079 1 main.go:326] Using nginx version: nginx/1.23.0
2022/08/03 17:00:48 [notice] 26#26: using the "epoll" event method
2022/08/03 17:00:48 [notice] 26#26: nginx/1.23.0
2022/08/03 17:00:48 [notice] 26#26: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/08/03 17:00:48 [notice] 26#26: OS: Linux 5.15.0-41-generic
2022/08/03 17:00:48 [notice] 26#26: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/08/03 17:00:48 [notice] 26#26: start worker processes
2022/08/03 17:00:48 [notice] 26#26: start worker process 27
2022/08/03 17:00:48 [notice] 26#26: start worker process 28
2022/08/03 17:00:48 [notice] 26#26: start worker process 29
2022/08/03 17:00:48 [notice] 26#26: start worker process 30
2022/08/03 17:00:48 [notice] 26#26: start worker process 31
2022/08/03 17:00:48 [notice] 26#26: start worker process 32
2022/08/03 17:00:48 [notice] 26#26: start worker process 33
2022/08/03 17:00:48 [notice] 26#26: start worker process 34
I0803 17:00:48.543403 1 listener.go:54] Starting Prometheus listener on: :9113/metrics
2022/08/03 17:00:48 [notice] 26#26: start worker process 35
2022/08/03 17:00:48 [notice] 26#26: start worker process 37
I0803 17:00:48.543712 1 leaderelection.go:248] attempting to acquire leader lease nginx-ingress/nginx-ingress-leader-election...
2022/08/03 17:00:48 [notice] 26#26: start worker process 38
...
2022/08/03 17:00:48 [notice] 26#26: start worker process 86
I0803 17:00:48.645253 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"delivery-ingress", UID:"23f93b2d-c3c8-48eb-a2a1-e2ce0453677f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1527358", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/delivery-ingress was added or updated
I0803 17:00:48.645512 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.646550 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.646629 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"delivery-ingress", UID:"23f93b2d-c3c8-48eb-a2a1-e2ce0453677f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1527358", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/delivery-ingress was added or updated
I0803 17:00:48.646810 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.646969 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
I0803 17:00:48.647259 1 event.go:285] Event(v1.ObjectReference{Kind:"Secret", Namespace:"nginx-ingress", Name:"default-server-secret", UID:"d8271053-2785-408f-b87b-88b9bb9fc488", APIVersion:"v1", ResourceVersion:"1612716", FieldPath:""}): type: 'Normal' reason: 'Updated' the special Secret nginx-ingress/default-server-secret was updated
2022/08/03 17:00:48 [notice] 26#26: signal 1 (SIGHUP) received from 88, reconfiguring
2022/08/03 17:00:48 [notice] 26#26: reconfiguring
2022/08/03 17:00:48 [notice] 26#26: using the "epoll" event method
2022/08/03 17:00:48 [notice] 26#26: start worker processes
2022/08/03 17:00:48 [notice] 26#26: start worker process 89
2022/08/03 17:00:48 [notice] 26#26: start worker process 90
...
2022/08/03 17:00:48 [notice] 26#26: start worker process 136
2022/08/03 17:00:48 [notice] 27#27: gracefully shutting down
2022/08/03 17:00:48 [notice] 27#27: exiting
2022/08/03 17:00:48 [notice] 35#35: gracefully shutting down
2022/08/03 17:00:48 [notice] 31#31: exiting
2022/08/03 17:00:48 [notice] 38#38: gracefully shutting down
2022/08/03 17:00:48 [notice] 32#32: exiting
2022/08/03 17:00:48 [notice] 30#30: exiting
2022/08/03 17:00:48 [notice] 40#40: gracefully shutting down
2022/08/03 17:00:48 [notice] 35#35: exiting
2022/08/03 17:00:48 [notice] 45#45: gracefully shutting down
2022/08/03 17:00:48 [notice] 40#40: exiting
2022/08/03 17:00:48 [notice] 48#48: gracefully shutting down
2022/08/03 17:00:48 [notice] 47#47: exiting
2022/08/03 17:00:48 [notice] 57#57: gracefully shutting down
2022/08/03 17:00:48 [notice] 52#52: exiting
2022/08/03 17:00:48 [notice] 55#55: gracefully shutting down
2022/08/03 17:00:48 [notice] 55#55: exiting
2022/08/03 17:00:48 [notice] 51#51: gracefully shutting down
2022/08/03 17:00:48 [notice] 51#51: exiting
2022/08/03 17:00:48 [notice] 31#31: exit
2022/08/03 17:00:48 [notice] 34#34: gracefully shutting down
2022/08/03 17:00:48 [notice] 34#34: exiting
2022/08/03 17:00:48 [notice] 41#41: exiting
2022/08/03 17:00:48 [notice] 49#49: gracefully shutting down
....
2022/08/03 17:00:48 [notice] 49#49: exiting
2022/08/03 17:00:48 [notice] 57#57: exit
.....
2022/08/03 17:00:48 [notice] 43#43: exit
2022/08/03 17:00:48 [notice] 58#58: gracefully shutting down
2022/08/03 17:00:48 [notice] 38#38: exiting
2022/08/03 17:00:48 [notice] 53#53: gracefully shutting down
2022/08/03 17:00:48 [notice] 48#48: exiting
2022/08/03 17:00:48 [notice] 59#59: gracefully shutting down
2022/08/03 17:00:48 [notice] 58#58: exiting
2022/08/03 17:00:48 [notice] 62#62: gracefully shutting down
2022/08/03 17:00:48 [notice] 60#60: gracefully shutting down
2022/08/03 17:00:48 [notice] 53#53: exiting
2022/08/03 17:00:48 [notice] 61#61: gracefully shutting down
2022/08/03 17:00:48 [notice] 63#63: gracefully shutting down
2022/08/03 17:00:48 [notice] 64#64: gracefully shutting down
2022/08/03 17:00:48 [notice] 59#59: exiting
2022/08/03 17:00:48 [notice] 65#65: gracefully shutting down
2022/08/03 17:00:48 [notice] 62#62: exiting
2022/08/03 17:00:48 [notice] 60#60: exiting
2022/08/03 17:00:48 [notice] 66#66: gracefully shutting down
2022/08/03 17:00:48 [notice] 67#67: gracefully shutting down
2022/08/03 17:00:48 [notice] 63#63: exiting
2022/08/03 17:00:48 [notice] 68#68: gracefully shutting down
2022/08/03 17:00:48 [notice] 64#64: exiting
2022/08/03 17:00:48 [notice] 61#61: exiting
2022/08/03 17:00:48 [notice] 69#69: gracefully shutting down
2022/08/03 17:00:48 [notice] 65#65: exiting
2022/08/03 17:00:48 [notice] 66#66: exiting
2022/08/03 17:00:48 [notice] 71#71: gracefully shutting down
2022/08/03 17:00:48 [notice] 70#70: gracefully shutting down
2022/08/03 17:00:48 [notice] 67#67: exiting
...
2022/08/03 17:00:48 [notice] 65#65: exit
2022/08/03 17:00:48 [notice] 73#73: gracefully shutting down
...
2022/08/03 17:00:48 [notice] 74#74: exiting
2022/08/03 17:00:48 [notice] 83#83: gracefully shutting down
2022/08/03 17:00:48 [notice] 72#72: exiting
2022/08/03 17:00:48 [notice] 77#77: gracefully shutting down
2022/08/03 17:00:48 [notice] 77#77: exiting
2022/08/03 17:00:48 [notice] 77#77: exit
I0803 17:00:48.780547 1 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"nginx-ingress", Name:"nginx-config", UID:"961b1b89-3765-4eb8-9f5f-cfd8212012a8", APIVersion:"v1", ResourceVersion:"1612730", FieldPath:""}): type: 'Normal' reason: 'Updated' Configuration from nginx-ingress/nginx-config was updated
I0803 17:00:48.780573 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"delivery-ingress", UID:"23f93b2d-c3c8-48eb-a2a1-e2ce0453677f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1527358", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/delivery-ingress was added or updated
I0803 17:00:48.780585 1 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-resource-3", UID:"66ed1c4b-54ae-4880-bf08-49029a93e365", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1622747", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/ingress-resource-3 was added or updated
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 72
2022/08/03 17:00:48 [notice] 26#26: worker process 72 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 30
2022/08/03 17:00:48 [notice] 26#26: worker process 30 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 35 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 77 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 73
2022/08/03 17:00:48 [notice] 26#26: worker process 73 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 37
2022/08/03 17:00:48 [notice] 26#26: worker process 29 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 32 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 37 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 38 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 41 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 47 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 49 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 63 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 64 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 75 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 47
2022/08/03 17:00:48 [notice] 26#26: worker process 34 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 43 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 48 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 53 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 54 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 59 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 61 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 66 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 55
2022/08/03 17:00:48 [notice] 26#26: worker process 50 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 55 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 83
2022/08/03 17:00:48 [notice] 26#26: worker process 28 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 31 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 42 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 51 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 52 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 56 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 62 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 68 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 71 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 83 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 33
2022/08/03 17:00:48 [notice] 26#26: worker process 33 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 58
2022/08/03 17:00:48 [notice] 26#26: worker process 58 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 57
2022/08/03 17:00:48 [notice] 26#26: worker process 27 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 57 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
2022/08/03 17:00:48 [notice] 26#26: signal 17 (SIGCHLD) received from 40
2022/08/03 17:00:48 [notice] 26#26: worker process 40 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 45 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 60 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 65 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 67 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 69 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 70 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 74 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: worker process 86 exited with code 0
2022/08/03 17:00:48 [notice] 26#26: signal 29 (SIGIO) received
</code></pre>
<p>I'm not sure what the issue is, and I can't figure out why it's working when I use different hosts, and not working when I try to use different paths.</p>
<p>I thought it could be resource limits, but I only have the requests, no limits. There is already a default IngressClass</p>
<p>I installed the ingress controller via manifests following the steps <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">here</a></p>
<p>Update: To add the deployments running in the cluster.</p>
<pre><code># nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx
spec:
replicas: 1
selector:
matchLabels:
run: nginx-main
template:
metadata:
labels:
run: nginx-main
spec:
containers:
- image: nginx
name: nginx
</code></pre>
<pre><code># nginx-deploy-green.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx-deploy-green
spec:
replicas: 1
selector:
matchLabels:
run: nginx-green
template:
metadata:
labels:
run: nginx-green
spec:
volumes:
- name: webdata
emptyDir: {}
initContainers:
- name: web-content
image: busybox
volumeMounts:
- name: webdata
mountPath: "/webdata"
command: ["/bin/sh", "-c", 'echo "<h1>I am <font color=green>GREEN</font></h1>" > /webdata/index.html']
containers:
- image: nginx
name: nginx
volumeMounts:
- name: webdata
mountPath: "/usr/share/nginx/html"
</code></pre>
<pre><code># nginx-deploy-blue.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx-deploy-blue
spec:
replicas: 1
selector:
matchLabels:
run: nginx-blue
template:
metadata:
labels:
run: nginx-blue
spec:
volumes:
- name: webdata
emptyDir: {}
initContainers:
- name: web-content
image: busybox
volumeMounts:
- name: webdata
mountPath: "/webdata"
command: ["/bin/sh", "-c", 'echo "<h1>I am <font color=blue>BLUE</font></h1>" > /webdata/index.html']
containers:
- image: nginx
name: nginx
volumeMounts:
- name: webdata
mountPath: "/usr/share/nginx/html"
</code></pre>
| <p>I believe the reason is that the <code>nginx</code> services you have deployed for the "blue" and "green" deployments are not serving anything on the <code>/blue</code> and <code>/green</code> endpoints. You need to create these endpoints inside <code>nginx</code> so that they become valid. Your ingress objects appear fine.</p>
<p>That's also why you see <code>/</code> working in all cases since by default <code>nginx</code> is hosting a web page on the path <code>/</code> but not on the paths <code>/blue</code> and <code>/green</code>. Simply creating the respective endpoints in your services for <code>nginx</code> should fix the problem.</p>
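<p>One way to create those endpoints, reusing the init-container pattern from the question (a sketch; the subdirectory layout is just one option), is to write the page under a <code>blue/</code> subdirectory so nginx serves it at <code>/blue</code>:</p>
<pre><code>      initContainers:
      - name: web-content
        image: busybox
        volumeMounts:
        - name: webdata
          mountPath: "/webdata"
        command: ["/bin/sh", "-c", 'mkdir -p /webdata/blue && echo "<h1>I am <font color=blue>BLUE</font></h1>" > /webdata/blue/index.html']
</code></pre>
<p>With the same change for <code>green/</code>, requests to <code>/blue</code> and <code>/green</code> then hit real content instead of nginx's 404.</p>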
|
<p>When k8s job keeps recreating pods within 10 seconds, what is the best way to get the related logs and find out the root cause? I tried to use</p>
<pre><code>kubectl logs <pod> -n <namespace>
</code></pre>
<p>and always got <em><strong>Error from server (NotFound): pods "XXXX" not found</strong></em> as it already got deleted and a new pod got created in the meantime.</p>
| <p>If the pod "restarting/crashloop" then you can get logs from the <code>previous</code> run.</p>
<pre><code>kubectl logs -n <namespace> <pod> --previous
</code></pre>
<p>This will show you the logs of the last run of the pod before it crashed. It is a handy feature in case you want to figure out why the pod crashed in the first place.</p>
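<p>Since the Job keeps replacing the pod, you can also address the logs by the Job name, or find the current pod via the <code>job-name</code> label that the Job controller puts on its pods, instead of guessing the exact pod name (placeholders in angle brackets):</p>
<pre><code>kubectl get pods -n <namespace> -l job-name=<job-name>
kubectl logs -n <namespace> job/<job-name>
</code></pre>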
<p>For the long term you should consider a proper logging solution, as pods should be treated as disposable objects and you should not rely on this.</p>
<p>One of the best tools that I have used so far without any issue is <a href="https://docs.datadoghq.com/containers/kubernetes/log/?tab=daemonset" rel="nofollow noreferrer">Kubernetes Log collection using Datadog</a>.</p>
|
<p>I am using Azure devops to deploy services to AKS. There are few instances that even though the pods weren't started(in crashed state) the pipeline still shows it as success. I want to make the task/pipeline to fail if the deployment aren't successfully rolled out. I tried to use</p>
<pre><code>kubectl rollout status deployment name --namespace nsName
</code></pre>
<p>but its just stating the status. Even if there is an error in the deployment.yaml the task simply say that there is error and the rollout status is saying successfully rollout. Is there a way I make the pipeline fail when there is an error in the deployment or the pods aren't created ?</p>
<p>My Yaml Task</p>
<pre><code>- task: AzureCLI@2
inputs:
azureSubscription: ${{ parameters.svcconn }}
scriptType: 'pscore'
scriptLocation: 'inlineScript'
inlineScript: |
az aks get-credentials -n $(clusterName) -g $(clusterRG)
kubectl apply -f '$(Pipeline.Workspace)/Manifest/deployment.yaml' --namespace=${{ parameters.NS }}
kubectl rollout status deployment ***svc-deployment --namespace=${{ parameters.NS }}
</code></pre>
| <p>Your <code>kubectl</code> commands are applied successfully to the cluster, which means the YAML manifests are updated in the cluster, so you will not see an error from the apply or rollout commands in your Azure pipelines.</p>
<p>It's your Kubernetes controller manager that is not able to apply the changes from the updated YAML.</p>
<p>In your inline script, you can add some sleep time and then read the pod running status.</p>
<p>You can get a pod name if you have added an app label in your deployment-</p>
<p><code>POD=$(kubectl get pod -l app=my-app -o jsonpath="{.items[0].metadata.name}")</code></p>
<p>For that pod then you can pull the running status -</p>
<p><code>STATUS=$(kubectl get pods -n default $POD -o jsonpath="{.status.phase}")</code></p>
<p>You can then evaluate the <code>STATUS</code> variable and fail the step when the pod is not healthy, which in turn fails the pipeline.</p>
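<p>A minimal bash sketch of that check (adapt it to the pscore script used in your task):</p>
<pre class="lang-bash prettyprint-override"><code># fail the step - and therefore the pipeline - when the pod is not Running
if [ "$STATUS" != "Running" ]; then
  echo "Pod $POD is in status $STATUS"
  exit 1
fi
</code></pre>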
|
<p>Is there any way to set label on secret created by ServiceAccount? For now it is the only secret I'm not able to configure with label.</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: forum-sa
</code></pre>
<p><a href="https://i.stack.imgur.com/qZY9D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qZY9D.png" alt="enter image description here" /></a></p>
| <p>You can manage the secret directly with kubectl; see the <a href="https://www.ibm.com/docs/en/cloud-paks/cp-management/2.0.0?topic=kubectl-using-service-account-tokens-connect-api-server#kube" rel="nofollow noreferrer">documentation</a> on using service account tokens for background.</p>
<p>kubectl get secret --namespace={namespace}</p>
<p>The command above lists the secrets in the namespace so you can find the one generated for the service account; once you know its name you can label it with kubectl, as sketched below.</p>
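<p>A minimal sketch, assuming the generated secret is called <code>forum-sa-token-xxxxx</code> (the suffix is random, so look it up with the command above) and using a placeholder label:</p>
<pre class="lang-bash prettyprint-override"><code># add a label to the secret generated for the service account
kubectl label secret forum-sa-token-xxxxx app=forum --namespace={namespace}
</code></pre>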
|
<p>In the following task I want to make ansible <strong>not</strong> to fail when <code>stderr</code> contains <code>the object has been modified</code> or <code>have a resource type</code>.</p>
<pre><code>- name: List nodes
delegate_to: localhost
shell: kubectl get nodes
run_once: true
register: output_error
failed_when: (output_error.stderr_lines | length > 0) and ("'the object has been modified' not in output_error.stderr or 'have a resource type' not in output_error.stderr")
</code></pre>
<p>However I am reveiving the following error message:</p>
<pre><code>kubectl get nodes", "delta": "0:00:00.498169", "end": "2022-08-05 11:08:01.386024", "failed_when_result": true, "item": "nodes", "rc": 0, "start": "2022-08-05 11:08:00.887855", "stderr": "error: the server doesn't have a resource type \"nodes\"", "stderr_lines": ["error: the server doesn't have a resource type \"nodes\""], "stdout": "", "stdout_lines": []}
</code></pre>
| <p>Use a regex which includes both alternatives:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: List nodes
delegate_to: localhost
shell: kubectl get nodes
run_once: true
register: output_error
failed_when:
- output_error.stderr_lines | length > 0
- output_error.stderr is not regex('the object has been modified|have a resource type')
</code></pre>
<p>For easier reading and maintenance you can set your positive matches in a list and join them together in an expression.</p>
<pre class="lang-yaml prettyprint-override"><code>- name: List nodes
vars:
acceptable_err_matches:
- the object has been modified
- have a resource type
    match_regex: "{{ acceptable_err_matches | map('regex_escape') | join('|') }}"
delegate_to: localhost
shell: kubectl get nodes
run_once: true
register: output_error
failed_when:
- output_error.stderr_lines | length > 0
- output_error.stderr is not regex(match_regex)
</code></pre>
|
<p>With my team, we're currently building an API using FastAPI and we're really struggling to get good performances out of it once deployed to Kubernetes. We're using async calls as much as possible but overall sitting at ~8RPS / pod to stay under our SLA of P99 200ms.</p>
<p>For resources, we assign the following:</p>
<pre><code>resources:
limits:
cpu: 1
memory: 800Mi
requests:
cpu: 600m
memory: 100Mi
</code></pre>
<p>Surprisingly, such performance drops don't occur when running load tests on the API running locally in a Docker container. There we easily get ~200RPS on a single container with 120ms latency at P99...</p>
<p>Would anyone have an idea of what could go wrong in there and where I could start looking to find the bottleneck?</p>
<p>Cheers!</p>
| <p><strike>It finally appeared that our performance issues were caused by the non-usage of <code>gunicorn</code> and only <code>uvicorn</code> (even though <a href="https://fastapi.tiangolo.com/id/deployment/docker/#replication-number-of-processes" rel="nofollow noreferrer">FastAPI's author recommends against this in his documentation</a>). On the other hand, <a href="https://www.uvicorn.org/deployment/#using-a-process-manager" rel="nofollow noreferrer">Uvicorn authors are recommending the other way round in their docs</a>, i.e., using <code>gunicorn</code>. We followed that advice and our performance issues were gone.</p>
<p>As suggested by people in this thread, setting more CPUs in the requests of our PodSpec was also part of the solution.</strike></p>
<p>EDIT: In the end, we finally discovered that the performance issues were caused by our implementation of OpenTelemetry over FastAPI using the opentelemetry-instrument CLI. The latter was causing a lot of overhead and blocking calls in FastAPI's async event loop. Performance is now super stable using either gunicorn or uvicorn. We are still using gunicorn with multiple workers but we are also planning to move back to uvicorn single-process and scale more dynamically.</p>
|
<p>I have new Civo cluster,I downloaded config</p>
<pre><code> kubectl config --kubeconfig=civo-enroute-demo-kubeconfig
</code></pre>
<p>does not work</p>
<pre><code>Modify kubeconfig files using subcommands like "kubectl config set current-context my-context"
The loading order follows these rules:
1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes
place.
2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for
your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When
a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the
last file in the list.
3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
</code></pre>
<p>Output config view</p>
<pre><code>kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://00.220.23.220:6443
name: enroute-demo
contexts:
- context:
cluster: enroute-demo
user: enroute-demo
name: enroute-demo
current-context: enroute-demo
kind: Config
preferences: {}
users:
- name: enroute-demo
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
</code></pre>
<p>How should the kubectl command look like?</p>
| <p>You can use a kubeconfig file with kubectl as</p>
<pre><code>kubectl --kubeconfig=config_file.yaml command
</code></pre>
<p>Example:</p>
<pre><code>kubectl --kubeconfig=civo-enroute-demo-kubeconfig.yaml get nodes
</code></pre>
<p>Alternatively, you can export <code>KUBECONFIG</code> and then use <code>kubectl</code> directly.</p>
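<p>For example (assuming the downloaded kubeconfig file sits in the current directory):</p>
<pre class="lang-bash prettyprint-override"><code># point kubectl at the downloaded kubeconfig for this shell session
export KUBECONFIG=$PWD/civo-enroute-demo-kubeconfig
kubectl get nodes
</code></pre>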
|
<p>I created a k8s eks cluster and all of it's resources within terraform but ran into some issues and deleted the cluster through the aws UI. Now when I try to run terraform I'm getting this error <code>Error: Get "http://localhost/api/v1/persistentvolumes/efs-jupyter-shared-v2": dial tcp 127.0.0.1:80: connect: connection refused</code> for each of the resources. I tried commenting out all of the resources within terraform and running, and still got the same error. Any ideas how to proceed?</p>
| <p>You need to clean the terraform state:</p>
<p>First step is listing all the resources in the state:</p>
<pre class="lang-bash prettyprint-override"><code>terraform state list
</code></pre>
<p>Then you can delete the resources you deleted in AWS dashboard:</p>
<pre class="lang-bash prettyprint-override"><code>terraform state rm <resource_name>
</code></pre>
<p>Finally you can run your terraform code to recreate the resources.</p>
|
<p>In our kubernetes cluster setup, we normally have 2 types of setups: <code>prod</code> and <code>dev</code>. The clusters have tags/labels as well as env variables to identify them as such.</p>
<p>Some of our images are designed for dev but not prod and vice-versa (the differences are in the custom binaries that are installed within the org). Right now, when the pods start, they do not fail at startup but are functionally useless because the correct binaries are missing. Is there a way to prevent a prod pod from starting with an image named dev (and vice-versa)?</p>
| <p>Kubernetes provides a few ways to enforce your organizational policies, some are as follow:</p>
<p><strong>using ImagePolicyWebhook:</strong></p>
<p>You can use <code>ImagePolicyWebhook</code> to enforce your image usage policies by doing <code>ImageReview.</code> More info <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Using validation webhook:</strong></p>
<p>A <code>validation webhook</code> intercepts the request for pod creation, You can write your business logic in the webhook code, and it will enforce using the admission review. You can find more <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="nofollow noreferrer">here</a> and <a href="https://technekey.com/what-is-validation-webhook-in-kubernetes/" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Using OPA Gatekeeper:</strong></p>
<p>Alternatively, you may use <code>OPA Gatekeeper</code> for policy enforcement. You can get more info <a href="https://kubernetes.io/blog/2019/08/06/opa-gatekeeper-policy-and-governance-for-kubernetes/" rel="nofollow noreferrer">here</a>.</p>
<p>Note that each option listed above is a huge topic on its own. You will need to choose the option best suited to your use case.</p>
|
<p>We have an AKS test cluster with <em>four</em> Windows worker nodes and a Deployment with a replica count of <em>two</em>. The corresponding Pod spec does not specify any resource requests and limits (thus, the resulting Pods are in the BestEffort QoS class).</p>
<p>In order to conduct a performance test, we scaled all other Deployments on those worker nodes to 0 replicas and deleted all remaining Pods on the nodes. Only the system Pods created by AKS DaemonSets itself (in the <code>kube-system</code> namespace) remained. We then created the Deployment mentioned above.</p>
<p>We had assumed that the default Kubernetes scheduler would place the two replicas on different nodes by default, or at least choose nodes randomly. However, the scheduler always chose the same node to place both replicas on, no matter how often we deleted the Pods or scaled the Deployment to 0 and back again to 2. Only after we tainted that node as <code>NoSchedule</code>, did the scheduler choose another node.</p>
<p>I know I could configure anti-affinities or topology spread constraints to get a better spreading of my Pods. But in the <em>Cloud Native DevOps with Kubernetes</em> book, I read that the scheduler actually does a very good job by default and one should only use those features if absolutely necessary. (Instead maybe using the descheduler if the scheduler is forced to make bad decisions.)</p>
<p>So, I would like to understand why the behavior we observed would happen. From the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation" rel="nofollow noreferrer">docs</a>, I've learned that the scheduler first filters the nodes for fitting ones. In this case, all of them should fit, as all are configured identically. It then scores the nodes, choosing randomly if all have the same score. Why would one node always win that scoring?</p>
<p>Follow-up question: Is there some way how I could reconstruct the scheduler's decision logic in AKS? I can see <code>kube-scheduler</code> logs in Container Insights, but they don't contain any information regarding scheduling, just some operative stuff.</p>
| <p>I <em>believe</em> that the scheduler is aware of which Nodes already have the container images pulled down, and will give them preference to avoid the image pull (and thus faster start time)</p>
<p>Short of digging up the source code as proof, I would guess one could create a separate Pod (for this purpose, I literally mean <code>kind: Pod</code>), force it onto one of the other Nodes via <code>nodeName:</code>, then after the Pod has been scheduled and attempted to start, delete the Pod and scale up your Deployment</p>
<p>I would then expect the new Deployment managed Pod to arrive on that other Node because it by definition has less resources in use but also has the container image required</p>
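<p>A rough sketch of that experiment; the pod name, node name, and image are placeholders you would replace with your own values:</p>
<pre class="lang-bash prettyprint-override"><code># pin a throwaway pod to another node so the container image gets pulled there
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: image-warmer
spec:
  nodeName: <other-node-name>
  restartPolicy: Never
  containers:
  - name: warm
    image: <your-deployment-image>
    # assumes the image has a shell; adjust the command otherwise
    command: ["sh", "-c", "sleep 5"]
EOF

# once the image has been pulled, remove the pod and scale the Deployment back up
kubectl delete pod image-warmer
</code></pre>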
|
<p>I am running a GKE Cluster with the GCP Load Balancer as my Ingress Controller.</p>
<p>However, I noticed that a specific request to the service hosted in this GKE Cluster was being rejected with a 502 error.</p>
<p>I checked the GCP Loadbalancer logs and I was able to see a return with <code>statusDetails: "failed_to_pick_backend"</code>.</p>
<p>The health field is saying that my backend is healthy. Just to make sure I changed the health check type from HTTP to TCP to see if anything would change but it kept green.</p>
<p>So, what can I be missing if my GCP Loadbalancer is saying that my backend is reachable but at the same time it returns me an failed to pick backend message?</p>
<p>I really appreciate some help on that.</p>
<p>My ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-app
annotations:
kubernetes.io/ingress.global-static-ip-name: my-app
networking.gke.io/managed-certificates: my-app-managed-cert
kubernetes.io/ingress.class: "gce"
kubernetes.io/ingress.allow-http: "false"
spec:
defaultBackend:
service:
name: my-app-svc
port:
number: 3000
</code></pre>
<p>My service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-app-svc
spec:
selector:
app: my-app
ports:
- protocol: TCP
port: 3000
targetPort: 3000
type: NodePort
</code></pre>
<p>Running <code>kubectl describe pod <my-pod></code> I can see</p>
<pre><code>Conditions:
Type Status
cloud.google.com/load-balancer-neg-ready True
Initialized True
Ready True
ContainersReady True
PodScheduled True
</code></pre>
| <p>Can you verify the timers of your health check? If the check interval or timeout is set to 1 second, or lower than the values used by the other health checks, the <code>failed_to_pick_backend</code> error will typically occur.</p>
<p>I recommend changing the timer to the default value of 5 seconds or higher and then testing a new deployment. You can find more details about health check timers at the link below [1].</p>
<p>[1] <a href="https://cloud.google.com/load-balancing/docs/health-check-concepts" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/health-check-concepts</a></p>
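<p>If it helps, you can inspect the timers of the health checks the load balancer is actually using (the names are generated by GKE, so list them first):</p>
<pre class="lang-bash prettyprint-override"><code># list all health checks in the project, then inspect the one attached to your backend
gcloud compute health-checks list
gcloud compute health-checks describe <HEALTH_CHECK_NAME>
</code></pre>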
|
<p>Has anyone deployed QUIC with a supported/official Kubernetes Ingress Controller? I have been using the NGINX ingress so far and support for QUIC seems to be NOT there at present.</p>
| <p><a href="https://bunny.net/academy/cdn/what-is-http-3-quic-protocol/" rel="nofollow noreferrer">QUIC/HTTP3</a> is not yet available in <code>stable Nginx</code>, and making it available in the Nginx ingress controller will definitely take time.</p>
<blockquote>
<p>Product Management tells me that we hope to fully support QUIC and HTTP/3 by the end of 2022. Note that this is not a promise :-)</p>
</blockquote>
<p><a href="https://www.nginx.com/blog/our-roadmap-quic-http-3-support-nginx/" rel="nofollow noreferrer">our-roadmap-quic-http-3-support-nginx</a></p>
<p>QUIC/HTTP3 will be targeted in the Nginx ingress controller once the nginx release becomes stable.</p>
<blockquote>
<p>we have just upgraded to nginx 1.20.1 and from here <a href="https://quic.nginx.org/README" rel="nofollow noreferrer">https://quic.nginx.org/README</a> it looks that <strong>HTTP/3 is still experimental so not until there is a stable release will we be implementing HTTP/3.</strong></p>
</blockquote>
<p>Here is the feature request <a href="https://github.com/kubernetes/ingress-nginx/issues/4760" rel="nofollow noreferrer">[FEATURE REQUEST] HTTP/3 support</a></p>
<p>You can try the <a href="https://www.nginx.com/blog/our-roadmap-quic-http-3-support-nginx/#How-You-Can-Help" rel="nofollow noreferrer">nginx docker image that shows how to build with QUIC support</a>, try to use a Network Load Balancer in front of it, and hope that works.</p>
|
<p>minikube in mac os is not able to pull docker images from docker repository.</p>
<p>Trying to run spark on k8s</p>
<pre><code>spark-submit --master k8s://https://ip:port --deploy-mode cluster --name test-run-spark --conf spark.kubernetes.container.image=Docker-image --conf spark.kubernetes.driver.container.image=docker4tg/Docker-image --conf spark.kubernetes.executor.container.image=docker4tg/Docker-image --conf spark.kubernetes.driver.pod.name=test-run-spark --class [class] --num-executors 1 --executor-memory 512m --driver-memory 512m --driver-cores 2 --executor-cores 2 --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark local:///[jar].jar
</code></pre>
<p>Using the same docker image, I'm able to pull it locally, but the k8s pods are not able to. <strong>Here's the catch: only a few tags of the same image worked. I moved the mkdir command up or down to change the hash, made no logical changes, and it worked fine for 3 or 4 tags, and the application ran successfully. I could not understand this.</strong></p>
<p>Please help me to figure out the issue.</p>
<p>Dockerfile</p>
<pre><code>
FROM ubuntu:18.04
ARG SPARKVERSION=tmpsVersion
ARG HADOOPVERSION=tmpHVersion
ENV SPARK_VERSION=$SPARKVERSION
ENV HADOOP_VERSION=$HADOOPVERSION
RUN sed -i s/http/ftp/ /etc/apt/sources.list && apt-get update -y
RUN apt-get install wget -y
RUN apt-get install openjdk-8-jdk -y
RUN sed -i s/http/ftp/ /etc/apt/sources.list && apt-get update -y
RUN mkdir -p /opt/spark/work-dir
WORKDIR /opt/spark/work-dir
RUN wget -O spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz
RUN tar -xzvf spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz -C /opt/spark/
RUN rm -rf spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz
RUN mv -f /opt/spark/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}/* /opt/spark/
RUN rm -rf /opt/spark/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}
ENV SPARK_HOME=/opt/spark
ENV PATH="${SPARK_HOME}/bin:${PATH}"
RUN mkdir -p /opt/spark/data-jars/
COPY [jar.jar] /opt/spark/data-jars/
ENTRYPOINT [ "/opt/spark/kubernetes/dockerfiles/spark/entrypoint.sh" ]
</code></pre>
<p>ERROR:</p>
<pre><code>kubectl get pods; kubectl logs test-run-spark
NAME READY STATUS RESTARTS AGE
test-run-spark 0/1 ImagePullBackOff 0 2m36s
Error from server (BadRequest): container "spark-kubernetes-driver" in pod "test-run-spark" is waiting to start: trying and failing to pull image
</code></pre>
<p>Kindly help me with this guys</p>
| <p>Your <code>minikube</code> environment is isolated from your host, so if you already have the image on your host or you can pull it, it doesn't mean you can do the same thing in <code>minikube</code>.</p>
<p>If you want to build the image in <code>minikube</code> context:</p>
<pre class="lang-bash prettyprint-override"><code># export minikube docker config
eval $(minikube docker-env)
# build your image directly in minikube
docker build
</code></pre>
<p>If you have the image locally, you can load it to <code>minikube</code> by:</p>
<pre class="lang-bash prettyprint-override"><code>minikube image load IMAGE_NAME
</code></pre>
<p>And if you want to let <code>minikube</code> pull the images from a private remote registry (ex: dockerhub), you can follow <a href="https://minikube.sigs.k8s.io/docs/handbook/registry/" rel="nofollow noreferrer">these instructions</a> to add the registry creds to your <code>minikube</code>.</p>
|
<p>There are two fields in k8s: requests and limits. I want to know whether GKE charges us based on requests or on limits.</p>
<p>If the request is 1Gi and the limit is 2Gi,
will I be charged for 1Gi or 2Gi?</p>
| <p>There are two different modes of operation in Google Kubernetes Engine: Autopilot (easier to manage, but less flexible) and Standard. They're billed <a href="https://cloud.google.com/kubernetes-engine/pricing#google-kubernetes-engine-pricing" rel="nofollow noreferrer">differently</a>.</p>
<p>In Standard mode, you're essentially <strong>billed for Compute Engine instances</strong> used in your cluster. That means your requests and limits are only used indirectly, as you're expected to be responsible for setting up your cluster so that it's scaled according to those. When doing this, you should remember that some of each node's resources <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#node_allocatable" rel="nofollow noreferrer">are required to run the GKE and Kubernetes node components</a> necessary to make that node function as part of your cluster.</p>
<p>In Autopilot mode, you're <strong>billed for resources</strong> - CPU, memory, ephemeral storage - requested by your currently scheduled Pods. The catch is that each Pod in Autopilot mode is <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#allowable_resource_ranges" rel="nofollow noreferrer">considered to be a Guaranteed QoS Class Pod</a>:</p>
<blockquote>
<p>Autopilot automatically sets resource limits equal to requests if you
do not have resource limits specified. If you do specify resource
limits, your limits will be overridden and set to be equal to the
requests.</p>
</blockquote>
<p>To be more specific, in your example 2 Gi limit will be overridden and set to 1 Gi, the same as request. You'll be billed accordingly.</p>
|
<p>I have a kubernetes pod configuration with a named volume and want to run it via <code>podman play kube</code> which fails for an unknown reason:</p>
<p><code>podman play kube kubernetes.yml</code>:</p>
<pre><code>Error: kubernetes.yml: Volume mount database-data-volume specified for container but not configured in volumes
</code></pre>
<hr />
<p>The error indicates that the volume does not exist, but it's there:</p>
<pre><code>> podman volume list
DRIVER VOLUME NAME
local database-data-volume
</code></pre>
<p><code>kubernetes.yml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
...
spec:
containers:
...
- image: it.impl/h2-database:2.1.214-0
name: database
ports:
- containerPort: 8082
hostPort: 8082
- containerPort: 9092
hostPort: 9092
volumeMounts:
- mountPath: /opt/h2-database/data
name: database-data-volume
volumes:
- persistentVolumeClaim:
claimName: database-data-volume
restartPolicy: Never
</code></pre>
| <p>Your <code>volume</code> is missing a name (this would fail on Kubernetes as well). The <code>volumes</code> section maps a volume name to some sort of volume definition; when you write:</p>
<pre><code> volumes:
- persistentVolumeClaim:
claimName: database-data-volume
</code></pre>
<p>You have a volume definition but no volume name. You need:</p>
<pre><code> volumes:
- name: database-data-volume
persistentVolumeClaim:
claimName: database-data-volume
</code></pre>
|
<p>I'm setting up an ingress for the application resides in the AKS. But ran into a problem on binding the certificate to the ingress.</p>
<p>As you can see below, I am trying to reference <code>ingress-cert</code> from the KV and use it in the ingress through <code>SecretProviderClass</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: {{ include "secretProvider.name" . }}
spec:
provider: azure
secretObjects:
- secretName: ingress-tls-csi
type: kubernetes.io/tls
data:
- objectName: ingress-cert
key: tls.key
- objectName: ingress-cert
key: tls.crt
parameters:
usePodIdentity: "false"
useVMManagedIdentity: "true"
userAssignedIdentityID: {{ .Values.keyVault.identity }}
keyvaultName: {{ .Values.keyVault.name }}
objects: |
array:
- |
objectName: ingress-cert
objectType: secret
tenantId: {{ .Values.keyVault.tenant }}
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "ingress.name" . }}
annotations:
kubernetes.io/ingress.class: azure/application-gateway
kubernetes.io/ingress.allow-http: "false"
appgw.ingress.kubernetes.io/override-frontend-port: "443"
spec:
tls:
- hosts:
- {{ .Values.ingress.host }}
secretName: ingress-tls-csi
# Property `rules` is omitted
</code></pre>
<p>It's working fine when accessing other secrets from pods through env but for the ingress this is the output on describing it:</p>
<pre class="lang-bash prettyprint-override"><code>Name: my-ingress
Namespace: my-namespace
Address: x.x.x.x
Default backend: default-http-backend:80
TLS:
ingress-tls-csi terminates my.example.com
Rules:
Host Path Backends
---- ---- --------
Annotations: appgw.ingress.kubernetes.io/override-frontend-port: 443
kubernetes.io/ingress.allow-http: false
kubernetes.io/ingress.class: azure/application-gateway
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning SecretNotFound 3m3s (x2 over 3m10s) azure/application-gateway Unable to find the secret associated to secretId: [my-namespace/ingress-tls-csi]
</code></pre>
<hr />
<p>I've set up the KV integration by following the <a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver" rel="nofollow noreferrer">Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster</a> documentation.</p>
<p>But upon following the <a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-nginx-tls" rel="nofollow noreferrer">Set up Secrets Store CSI Driver to enable NGINX Ingress Controller with TLS</a> as a hint on how to implement the same with AGIC. I noticed that the certificate is added as a secret inside the AKS which is then referenced inside the ingress with <code>secretName: ingress-tls-csi</code>.</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get secret -n $NAMESPACE
NAME TYPE DATA AGE
ingress-tls-csi kubernetes.io/tls 2 1m34s
</code></pre>
<p>I assume that ingress can't reference the secret directly from <code>SecretProviderClass</code> as the example in the documentation need to use the <code>ingress-tls-csi</code> as a secret object which I assumed (again) created by <code>ingress-nginx</code> chart.</p>
<p>My question is how can I implement the same as the <code>ingress-nginx</code> example with AGIC?</p>
<hr />
<p>Additional information:</p>
<ul>
<li>I used AGIC with Azure CNI networking.</li>
<li>Ingress is currently working with manually added certificate with <code>kubectl</code> command. The reason I need to use the one from KV is the AKS will also be used by other people deploying under the same domain but different namespace and I think it's a bad idea to give direct access to certificate's private key.</li>
</ul>
| <p>As I couldn't find a way to integrate the Ingress with the Azure Key Vault, I've implemented a workaround with GitHub Actions to retrieve the certificate and add it to the AKS. Because most of them are bash commands, the workaround isn't exclusive to GitHub Actions.</p>
<pre class="lang-yaml prettyprint-override"><code>name: Assign Certificate from KV to AKS
on:
workflow_dispatch:
jobs:
build-push:
    name: Assign Certificate from KV to AKS
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
# Azure Authentication
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Authenticate and set context to AKS cluster
run: az aks get-credentials --name "my-aks-cluster" --resource-group "my-rg" --admin
- uses: azure/setup-helm@v3
with:
token: ${{ secrets.PAT }}
# Retrieve certificate and private key from Azure Key Vault
- name: Download certificate and private key in PKCS#12 format
run: |
az keyvault secret download --name my-certificate \
--vault-name my-kv \
--encoding base64 \
-f certificate.pfx
- name: Extract private key in RSA format
run: |
openssl pkcs12 -in certificate.pfx -nocerts -nodes -passin pass: | openssl rsa -out private.key
- name: Extract certificate
run: |
openssl pkcs12 -in certificate.pfx -clcerts -nokeys -passin pass: -out certificate.crt
- name: Deploy Kubernetes Configuration
run: |
helm upgrade --install release-name chart-name \
-n my-namespace \
--set-file mychart.ingress.key=private.key \
--set-file mychart.ingress.certificate=certificate.crt
</code></pre>
<p>The brief explanation for the above:</p>
<ol>
<li>Download the certificate and private key in <code>.pfx</code> format</li>
<li>Use OpenSSL to extract into <code>.crt</code> and <code>.key</code></li>
<li>Pass the extracted files into <code>helm install/upgrade</code> using <code>--set-file</code></li>
</ol>
<p>Here is the secret and ingress configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: ingress-tls
type: kubernetes.io/tls
data:
tls.crt: {{ .Values.ingress.certificate | b64enc }}
tls.key: {{ .Values.ingress.key | b64enc }}
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "ingress.name" . }}
annotations:
kubernetes.io/ingress.class: azure/application-gateway
kubernetes.io/ingress.allow-http: "false"
appgw.ingress.kubernetes.io/override-frontend-port: "443"
spec:
tls:
- hosts:
- {{ .Values.ingress.host }}
secretName: ingress-tls
# Property `rules` is omitted
</code></pre>
<p>No additional modification to the <code>SecretProviderClass</code> is required.</p>
<p>I hope this is just a workaround because it'd be nicer if the Ingress can directly integrate with Azure Key Vault.</p>
|
<p>kubernetes cannot pull a public image. Standard images like nginx are downloading successfully, but my pet project is not downloading. I'm using minikube for launch kubernetes-cluster</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: api-gateway-deploumnet
labels:
app: api-gateway
spec:
replicas: 3
selector:
matchLabels:
app: api-gateway
template:
metadata:
labels:
app: api-gateway
spec:
containers:
- name: api-gateway
image: creatorsprodhouse/api-gateway:latest
imagePullPolicy: Always
ports:
- containerPort: 80
</code></pre>
<p>when I try to create a deployment I get an error that kubernetes cannot download my public image.</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl get pods
</code></pre>
<p>result:</p>
<pre class="lang-js prettyprint-override"><code>NAME READY STATUS RESTARTS AGE
api-gateway-deploumnet-599c784984-j9mf2 0/1 ImagePullBackOff 0 13m
api-gateway-deploumnet-599c784984-qzklt 0/1 ImagePullBackOff 0 13m
api-gateway-deploumnet-599c784984-csxln 0/1 ImagePullBackOff 0 13m
</code></pre>
<pre class="lang-bash prettyprint-override"><code>$ kubectl logs api-gateway-deploumnet-599c784984-csxln
</code></pre>
<p>result</p>
<pre><code>Error from server (BadRequest): container "api-gateway" in pod "api-gateway-deploumnet-86f6cc5b65-xdx85" is waiting to start: trying and failing to pull image
</code></pre>
<p>What could be the problem? The standard images are downloading but my public one is not. Any help would be appreciated.</p>
<p><strong>EDIT 1</strong></p>
<pre><code>$ api-gateway-deploumnet-599c784984-csxln
</code></pre>
<p>result:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m22s default-scheduler Successfully assigned default/api-gateway-deploumnet-849899786d-mq4td to minikube
Warning Failed 3m8s kubelet Failed to pull image "creatorsprodhouse/api-gateway:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 3m8s kubelet Error: ErrImagePull
Normal BackOff 3m7s kubelet Back-off pulling image "creatorsprodhouse/api-gateway:latest"
Warning Failed 3m7s kubelet Error: ImagePullBackOff
Normal Pulling 2m53s (x2 over 8m21s) kubelet Pulling image "creatorsprodhouse/api-gateway:latest"
</code></pre>
<p><strong>EDIT 2</strong></p>
<p>If I try to download a separate docker image, it's fine</p>
<pre><code>$ docker pull creatorsprodhouse/api-gateway:latest
</code></pre>
<p>result:</p>
<pre><code>Digest: sha256:e664a9dd9025f80a3dd60d157ce1464d4df7d0f8a00538e6a137d44f9f9f12aa
Status: Downloaded newer image for creatorsprodhouse/api-gateway:latest
docker.io/creatorsprodhouse/api-gateway:latest
</code></pre>
<p><strong>EDIT 3</strong>
After advice to restart minikube</p>
<pre><code>$ minikube stop
$ minikube delete --purge
$ minikube start --cni=calico
</code></pre>
<p>I started the pods.</p>
<pre><code>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m28s default-scheduler Successfully assigned default/api-gateway-deploumnet-849899786d-bkr28 to minikube
Warning FailedCreatePodSandBox 4m27s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1" network for pod "api-gateway-deploumnet-849899786d-bkr28": networkPlugin cni failed to set up pod "api-gateway-deploumnet-849899786d-bkr28_default" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1" network for pod "api-gateway-deploumnet-849899786d-bkr28": networkPlugin cni failed to teardown pod "api-gateway-deploumnet-849899786d-bkr28_default" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.34 -j CNI-57e7da7379b524635074e6d0 -m comment --comment name: "crio" id: "7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-57e7da7379b524635074e6d0':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
</code></pre>
| <p>I could not solve the problem with the approaches that were suggested. However, it worked when <strong>I ran minikube with a different driver</strong>:</p>
<pre><code>$ minikube start --driver=none
</code></pre>
<p><em><strong>--driver=none</strong></em> means that the cluster will run on your host instead of the standard <em><strong>--driver=docker</strong></em> which runs the cluster in docker.</p>
<p>It is better to run minikube with <em><strong>--driver=docker</strong></em> as it is safer and easier, but it didn't work for me as I could not download my images. For me personally it is ok to use <em><strong>--driver=none</strong></em> although it is a bit dangerous.</p>
<p>In general, if anyone knows what the problem is, please answer my question. In the meantime you can try to run minikube cluster on your host with the command I mentioned above.</p>
<p>In any case, thank you very much for your attention!</p>
|
<p>Should I deploy traefik 1.7.x as a DaemonSet or as a Deployment in GKE (Google Kubernetes Engine)?</p>
<h3>Environment Description</h3>
<p>Kubernetes clusters with node autoscaler in Google cloud, hosting several production clusters.
Clusters can extend up to 90 nodes (minimum is 6 nodes), currently we have <code>traefik</code> pod deployed with 10 replicas in each cluster (we use kustomize to deploy the same manifests in all clusters).</p>
<p>We notice slow response times in the cluster that has 18 nodes (<code>europe-west1</code> region), compared to our cluster in the <code>australia-southeast1</code> region, which has 6 nodes. Both clusters have 10 replicas of traefik.</p>
<h3>Deployment Specs</h3>
<p>traefik.toml:</p>
<pre><code> [kubernetes]
# all namespaces!
namespaces = []
</code></pre>
<p>Service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: traefik
name: traefik-ingress
namespace: ingress-traefik
spec:
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
- name: https
port: 443
protocol: TCP
targetPort: 443
selector:
app: traefik
sessionAffinity: None
type: LoadBalancer
loadBalancerIP: {{LOAD_BALANCER_IP}}
</code></pre>
<p>Deployment.yaml</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: traefik
name: traefik
namespace: ingress-traefik
spec:
replicas: 10
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
containers:
- args:
- --configfile=/config/traefik.toml
image: traefik:1.7.9-alpine
</code></pre>
<h3>Questions</h3>
<ol>
<li>In this scenario (using GKE node autoscaler) what would be the optimal configuration for our clusters? Using Deployment or a DaemonSet for traefik?</li>
<li>Does the amount of traefik pods has effect on response time according to the cluster size (node count)?</li>
<li>Does routing inside the cluster (hops between pod, service and nodes networks) is easier for traefik when using a DaemonSet (pod for each node) or by using a deployment of several replicas for the whole cluster? (We use K8S namespaces for each of our https service and traefik has its own namespace).</li>
</ol>
| <h2>Deploy Traefik using a Deployment or DaemonSet?</h2>
<p>It is possible to use Traefik with a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment" rel="nofollow noreferrer">Deployment</a> or a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> object, whereas both options have their own pros and cons:</p>
<ul>
<li><p>The scalability can be much better when using a Deployment, because you will have a Single-Pod-per-Node model when using a DaemonSet, whereas you may need less replicas based on your environment when using a Deployment.</p>
</li>
<li><p>DaemonSets automatically scale to new nodes, when the nodes join the cluster, whereas Deployment pods are only scheduled on new nodes if required.</p>
</li>
<li><p>DaemonSets ensure that only one replica of pods run on any single node. Deployments require affinity settings if you want to ensure that two pods don't end up on the same node.</p>
</li>
<li><p>DaemonSets can be run with the NET_BIND_SERVICE capability, which will allow it to bind to port 80/443/etc on each host. This will allow bypassing the kube-proxy, and reduce traffic hops. Note that this is against the Kubernetes Best Practices <a href="https://kubernetes.io/docs/concepts/configuration/overview/#services" rel="nofollow noreferrer">Guidelines</a> , and raises the potential for scheduling/scaling issues. Despite potential issues, this remains the choice for most ingress controllers.</p>
</li>
</ul>
<h3>There are some significant differences between using Deployments and DaemonSets:</h3>
<ul>
<li><p>The Deployment has easier up and down scaling possibilities. It can implement full pod lifecycle and supports rolling updates from Kubernetes 1.2. At least one Pod is needed to run the Deployment.</p>
</li>
<li><p>The DaemonSet automatically scales to all nodes that meets a specific selector and guarantees to fill nodes one at a time. Rolling updates are fully supported from Kubernetes 1.7 for DaemonSets as well.</p>
</li>
</ul>
<p>You might want to have a look at additional <a href="https://doc.traefik.io/traefik/" rel="nofollow noreferrer">traefik docs</a></p>
|
<p>I have multiple persistent volumes which need to be shrunk to reduce hosting costs. I already figured out that Kubernetes does not provide such an option. I also tried to clone or restore the volumes from a snapshot to a new, smaller volume - with the same result (<code>requested volume size XXX is less than the size XXX for the source snapshot</code>).</p>
<p>Nevertheless I need a solution or workaround to get this done.</p>
<p>The cluster is deployed with Rancher and the volumes are mounted to a Ceph Cluster. Everything is provided by an external hoster.</p>
| <p>Finally I achieved what I needed with the following steps (still tricky, manual work):</p>
<ul>
<li>Stop the running pod (otherwise you cannot use the volume in the next steps)</li>
<li>Create a new PVC with the desired capacity (ensure that the spec and labels match the existing PVC)</li>
<li>Run this Job <a href="https://github.com/edseymour/pvc-transfer" rel="nofollow noreferrer">https://github.com/edseymour/pvc-transfer</a>
<ul>
<li>In the spec of the <code>job-template.yaml</code> set the source and destination volume</li>
</ul>
</li>
<li>Set the ReclaimPolicy on the newly created PV to Retain (see the command sketch after this list). This will ensure that the PV won't be deleted after we delete the temporary PVC in the next step</li>
<li>Delete the source and destination pvc</li>
<li>Create a new pvc with the old name and the new storage capacity</li>
<li>On the new pv point the claimRef to the new pvc</li>
</ul>
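<p>A command sketch of the reclaim-policy step above; the PV name is a placeholder:</p>
<pre class="lang-bash prettyprint-override"><code># mark the newly created PV as Retain so it survives deleting the temporary PVC
kubectl patch pv <new-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# afterwards, adjust spec.claimRef on the PV so it points at the re-created PVC
kubectl edit pv <new-pv-name>
</code></pre>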
|
<p>My cluster is running on-prem. Currently when I try to ping the external IP of service type LoadBalancer assigned to it from Metal LB. I get a reply from one of the VM's hosting the pods - <strong>Destination Host unreachable</strong>. Is this because the pods are on an internal kubernetes network(I am using calico) and cannot be pinged. A detailed explanation of the scenario can help to understand it better. Also all the services are performing as expected. I am just curious to know the exact reason behind this since I am new at this. Any help will be much appreciated. Thank you</p>
| <p>The LoadbalancerIP or the External SVC IP will never be pingable.</p>
<p>When you define a service of type LoadBalancer, you are saying: I would like to listen on, for example, TCP port 8080 on this SVC.</p>
<p>And that is the only thing your External SVC IP will respond to.</p>
<p>A ping consists of ICMP packets, which do not match the TCP port 8080 destination.</p>
<p>You can do an <code>nc -v <ExternalIP> 8080</code> to test it.</p>
<p>OR</p>
<p>use a tool like <code>mtr</code> and pass --tcp --port 8080 to do your tests</p>
|
<p>I've experienced a surprising behavior when playing around with Kubernetes and I wanted to know if there is any good explanation behind it.</p>
<p>I've noticed that when two Kubernetes deployments are created with the same labels, and with the same <code>spec.selector</code>, the deployments still function correctly, even though using the same selector "should" cause them to be confused regarding which pods is related to each one.</p>
<p>Example configurations which present this -</p>
<p><code>example_deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
extra_label: one
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p><code>example_deployment_2.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment-2
labels:
app: nginx
extra_label: two
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p>I expected the deployments not to work correctly, since they will select pods from each other and assume it is theirs.
The actual result is that the deployments seem to be created correctly, but entering the deployment from <code>k9s</code> returns all of the pods. This is true for both deployments.</p>
<p>Can anyone please shed light regarding why this is happening? Is there additional internal filtering in Kubernetes to to prevent pods which were not really created by the deployment from being associated with it?</p>
<p>I'll note that I've seen this behavior in AWS and have reproduced it in Minikube.</p>
| <p>When you create a K8S Deployment, K8S creates a ReplicaSet to manage the pods; this ReplicaSet then creates the pods based on the number of replicas you provided or that was patched by the HPA. In addition to the labels and annotations you provide, the ReplicaSet adds <code>ownerReferences</code> which contain its name and uid, so even if you have 4 pods with the same labels, each pair of pods will have a different <code>ownerReferences</code> entry used by its own ReplicaSet to manage them:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: <replicaset name>
uid: <replicaset uid>
...
</code></pre>
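<p>If you want to see this on a live cluster, one quick way (the pod name is a placeholder) is:</p>
<pre class="lang-bash prettyprint-override"><code># print the kind and name of the controller that owns a given pod
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}'
</code></pre>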
|
<p>I noticed that a new cluster role - "eks:cloud-controller-manager" - appeared in our EKS cluster. We never created it. I tried to find the origin/creation of this cluster role but was not able to find it.</p>
<p>any idea what does "eks:cloud-controller-manager" cluster role does in EKS cluster?</p>
<p><code>$ kubectl get clusterrole eks:cloud-controller-manager -o yaml</code></p>
<pre><code>kind: ClusterRole
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"eks:cloud-controller-manager"},"rules":[{"apiGroups":[""],"resources":["events"],"verbs":["create","patch","update"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["*"]},{"apiGroups":[""],"resources":["nodes/status"],"verbs":["patch"]},{"apiGroups":[""],"resources":["services"],"verbs":["list","patch","update","watch"]},{"apiGroups":[""],"resources":["services/status"],"verbs":["list","patch","update","watch"]},{"apiGroups":[""],"resources":["serviceaccounts"],"verbs":["create","get"]},{"apiGroups":[""],"resources":["persistentvolumes"],"verbs":["get","list","update","watch"]},{"apiGroups":[""],"resources":["endpoints"],"verbs":["create","get","list","watch","update"]},{"apiGroups":["coordination.k8s.io"],"resources":["leases"],"verbs":["create","get","list","watch","update"]},{"apiGroups":[""],"resources":["serviceaccounts/token"],"verbs":["create"]}]}
creationTimestamp: "2022-08-02T00:25:52Z"
name: eks:cloud-controller-manager
resourceVersion: "762242250"
uid: 34e568bb-20b5-4c33-8a7b-fcd081ae0a28
rules:
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- '*'
- apiGroups:
- ""
resources:
- serviceaccounts/token
verbs:
  - create
</code></pre>
<p>I tried to find this object in our Gitops repo but could not find it.</p>
| <p>This role is created by AWS when you provision the cluster. This role is for the AWS <a href="https://kubernetes.io/docs/concepts/architecture/cloud-controller/" rel="nofollow noreferrer">cloud-controller-manager</a> to integrate AWS services (eg. CLB/NLB, EBS) with Kubernetes. You will also find other roles like eks:fargate-manager to integrate with Fargate.</p>
|
<p>I am learning kubernetes and created first pod using below command</p>
<pre><code>kubectl run helloworld --image=<image-name> --port=8080
</code></pre>
<p>The Pod creation was successful.
But since it is neither a ReplicationController nor a Deployment, how can I expose it as a service? Please advise.</p>
| <p>Please refer to the documentation of the <strong>Kubernetes Service concept</strong>: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/</a>.
At the end of the page, there is also an interactive tutorial using minikube.</p>
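<p>For a quick start, something along these lines should work for the pod from the question (the service name and type are assumptions):</p>
<pre class="lang-bash prettyprint-override"><code># create a Service that selects the pod's labels and forwards port 8080
kubectl expose pod helloworld --port=8080 --target-port=8080 --name=helloworld-svc --type=NodePort
</code></pre>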
|
<p>I am learning kubernetes and created first pod using below command</p>
<pre><code>kubectl run helloworld --image=<image-name> --port=8080
</code></pre>
<p>The Pod creation was successful.
But since it is neither a ReplicationController nor a Deployment, how can I expose it as a service? Please advise.</p>
| <p>You can create the service with the same set of selector and labels</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
    run: helloworld   # kubectl run adds the label run=<name> to the pod it creates
ports:
- protocol: TCP
port: 80
      targetPort: 8080
</code></pre>
<p>If the selector matches the pod's labels, the Service will route traffic to the pod and you can expose it.</p>
<p>ref : <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
|
<p>We are using helm charts to deploy our charts in Openshift.</p>
<p>This is our workflow:</p>
<ul>
<li>We create a new version of the helm and docker image at each sprint/ e.g 1.0.0 (saving them in a snapshot artifactory)</li>
<li>During the sprint we build several times the helm chart and the docker image and push them in our snapshot artifactory every time.</li>
<li>Once the helm chart and the docker image are published, we automatically deploy our chart in our test environment</li>
<li>once we are ready we create the production version of the charts and the docker image: we basically publish the helm chart and the docker image in a release artifactory with the same version. From now on the helm chart and the docker images are immutable</li>
<li>Now we deploy in PROD</li>
</ul>
<p>The issue is that usually the helm-chart does not change but the docker image v1.0.0 (snapshot) may change several times during the sprint therefore when we try to upgrade the helm chart in our test env, helm does not detect any change and then the application is not updated.</p>
<p>To solve this situation, currently, every time that we have to deploy in the test environment, we uninstall the application and re install the helm chart (with the image pull policy == always)</p>
<p>I was wondering if there is a way to modify our helm chart in order to force it to redeploy when we build a new version.
e.g we tried to add an annotation in the deployment.yaml : build-time: {{ now }} but this changes every time so the helm chart is always redeployed in the test environment (and usually is fine but not if we trigger a manual re-deploy of all our components).</p>
<p>Is it possible for example to provide a parameter during the <code>helm package</code> command?
Something like <code>helm package --set package-time=timestamp</code> and then we could save this value as annotation.</p>
<p>Any better solution?</p>
| <p>In addition to your functional tag (e.g. v1.0.0), add a second tag to your image with something unique, for example the git-sha of the commit used at build time.</p>
<p>The functional tag is a "floating" tag: it will move from one image to the other, but the "git-sha" tag will be unique.</p>
<p>One way to get a short version of git-sha:</p>
<pre><code>git log -n 1 --pretty=format:'%h'
</code></pre>
<p>In your deployment, specify the "git-sha" tag for your image in your helm chart as a variable</p>
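<p>A minimal sketch, assuming the chart exposes an <code>image.tag</code> value (the release and chart names are placeholders):</p>
<pre class="lang-bash prettyprint-override"><code># compute the short commit sha and pass it to helm as the image tag
GIT_SHA=$(git log -n 1 --pretty=format:'%h')
helm upgrade --install my-release ./my-chart --set image.tag="${GIT_SHA}"
</code></pre>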
<p>The functional tag/version could be read from a single-line file in your source, so when you are ready to push v2.0.0, just change the file and commit it.</p>
|
<p>Error : Failed to pull image "busybox": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: <a href="https://www.docker.com/increase-rate-limit" rel="nofollow noreferrer">https://www.docker.com/increase-rate-limit</a></p>
<p>To fix this I have added the login credentials</p>
<pre><code>apiVersion: v1
data:
.dockerconfigjson: xxxx
kind: Secret
metadata:
creationTimestamp: null
name: pullsecretgeneric
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>and also added to the deployment yaml</p>
<pre><code>template:
metadata:
labels:
component: {{ .component }}
spec:
imagePullSecrets:
- name: pullsecretgeneric
- name: pullsecret
</code></pre>
<p>Then I use helm install to do the installation,
but I still get this error.
Do I need to add the secret somewhere else?</p>
<p>Config.json</p>
<pre><code>{
"auths": {
"https://registry.hub.docker.com": {
"auth": "xxxx"
}
}
}
</code></pre>
<p>Any pointers to fix this</p>
| <p><strong>1 .</strong> On November 20, 2020, rate limits on anonymous and free authenticated use of Docker Hub went into effect. Anonymous and free Docker Hub users are limited to 100 and 200 container image pull requests per six hours, respectively. You can read <a href="https://docs.docker.com/docker-hub/download-rate-limit/" rel="nofollow noreferrer">here</a> for more detailed information.</p>
<p>As stated in the <a href="https://www.docker.com/increase-rate-limits/" rel="nofollow noreferrer">documentation</a> :</p>
<blockquote>
<p>The rate limits of 100 container image requests per six hours for anonymous usage, and 200 container image requests per six hours for free Docker accounts are now in effect. Image requests exceeding these limits will be denied until the six hour window elapses.</p>
</blockquote>
<p>So as a workaround you can either:</p>
<ul>
<li>Reduce your pull rate.</li>
<li>Upgrade your membership.</li>
<li>Setup your own docker proxy to cache containers locally</li>
</ul>
<p>To overcome docker hub pull rate limit refer to the <a href="https://container-registry.com/posts/overcome-docker-hub-rate-limit/" rel="nofollow noreferrer">documentation</a> and also refer to the <a href="https://stackoverflow.com/a/65020370/15745153">stackpost</a>.</p>
<p><strong>2 .</strong> Another workaround is to pull the image locally once, push it to your local docker repository and then update your image properties to point to your local repository.</p>
<p>You have to pull the images locally using your credentials and push it to your local (internally hosted) docker repository. Once pushed, update the deployment.yaml file with updated image link.</p>
<p><code>image: <LOCAL DOCKER REPO URL>/busybox</code>.</p>
<p><strong>3 .</strong> If there was no issue with Docker login, and you are able to download docker images with docker pull but get an error when creating a pod with the image, then create a private docker registry; a command sketch follows the list below.</p>
<ul>
<li>Create and run a private docker registry</li>
<li>Download busybox image from public docker hub</li>
<li>Create a tag for busybox before pushing it to private registry</li>
<li>Push to registry</li>
<li>Now create a pod, it will be created successfully.</li>
</ul>
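<p>A rough sketch of those steps; the registry address <code>localhost:5000</code> is an assumption:</p>
<pre class="lang-bash prettyprint-override"><code># run a private registry locally
docker run -d -p 5000:5000 --name registry registry:2

# pull busybox once from Docker Hub, tag it for the private registry, and push it
docker pull busybox
docker tag busybox localhost:5000/busybox
docker push localhost:5000/busybox

# then reference image: localhost:5000/busybox in the pod spec
</code></pre>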
<p>Refer to the <a href="https://stackoverflow.com/a/70093649/15745153">stackpost</a> for more information.</p>
|
<p>What are the packages inside Kubernetes ConstraintTemplate policy definition? and can we create our own packages using rego?
I have a use case wherein I want to disallow the use of a <strong>stable</strong> image tag for any deployments. I have seen the Github repository of OPA gatekeeper samples and I see they have used packages while defining policies.</p>
| <blockquote>
<p>Packages group the rules defined in one or more modules into a
particular namespace. Because rules are namespaced they can be safely
shared across projects.</p>
</blockquote>
<p>You can try this <strong>ConstraintTemplate</strong>; adjust it as needed to block images that use the <strong>stable</strong> tag:</p>
<pre><code>apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8sblocklatesttag
annotations:
description: Blocks images with the "stable" tag.
spec:
crd:
spec:
names:
kind: K8sBlockLatestTag
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8sblocklatesttag
violation[{"msg": msg, "details": {}}]{
input.review.object.kind == "Pod"
imagename := input.review.object.spec.containers[_].image
          endswith(imagename, ":stable")
          msg := "Image with \"stable\" tag is not allowed"
}
</code></pre>
|
<p>I am working on building a project using Node.js which will require me to have an application that can deploy to Kubernetes. The service I am working on will take some Kubernetes manifests, add some ENV variables into them, and then would deploy those resources.</p>
<p>I have some code that can create and destroy a namespace for me using the SDK and <code>createNamespace</code> and <code>deleteNamespace</code>. This part works how I want it to, ie without needing a Kubernetes YAML file. I would like to use the SDK for creating a deployment as well however I can't seem to get it to work. I found a code example of <code>createNamespacedDeployment</code> however using version <code>0.13.2</code> of the SDK I am unable to get that working. I get this error message when I run the example code I found.</p>
<pre><code>k8sApi.createNamespacedDeployment is not a function
</code></pre>
<p>I have tried to check over the git repo for the SDK though it is massive and I've yet to find anything in it that would allow me to define a deployment in my Node.js code, closest I have found is a pod deployment however that won't work for me, I need a deployment.</p>
<p>How can I create a deployment via Node.js and have it apply to my Kubernetes cluster?</p>
| <p>Management of deployments is handled by the <a href="https://kubernetes-client.github.io/javascript/classes/appsv1api.appsv1api-1.html" rel="nofollow noreferrer">AppsV1Api</a> class:</p>
<pre><code>const fs = require('fs');
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

// CoreV1Api only covers core/v1 objects (Pods, Services, ...); Deployments live in apps/v1
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
const appsApi = kc.makeApiClient(k8s.AppsV1Api);

// Read the manifest from disk and create the Deployment (run this inside an async function)
const deploymentYamlString = fs.readFileSync('./deployment.yaml', { encoding: 'utf8'});
const deployment = k8s.loadYaml(deploymentYamlString);
const res = await appsApi.createNamespacedDeployment('default', deployment);
</code></pre>
<p>Generally, you can find the relevant API class for managing a Kubernetes object by its apiVersion, eg: Deployment -> <code>apiVersion: apps/v1</code> -> <code>AppsV1Api</code>, CronJob -> <code>apiVersion: batch/v1</code> -> <code>BatchV1Api</code>.</p>
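<p>As a hedged illustration of that mapping, reusing <code>kc</code>, <code>k8s</code> and <code>fs</code> from above (the <code>job.yaml</code> path is hypothetical):</p>
<pre><code>// Job -> apiVersion: batch/v1 -> BatchV1Api
const batchApi = kc.makeApiClient(k8s.BatchV1Api);
const job = k8s.loadYaml(fs.readFileSync('./job.yaml', { encoding: 'utf8' }));
await batchApi.createNamespacedJob('default', job);
</code></pre>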
|
<p>I made a Kafka and zookeeper as a statefulset and exposed Kafka to the outside of the cluster. However, whenever I try to delete the Kafka statefulset and re-create one, the data seemed to be gone? (when I tried to consume all the message using <code>kafkacat</code>, the old messages seemed to be gone) even if it is using the same PVC and PV. I am currently using EBS as my persistent volume. </p>
<p>Can someone explain to me what is happening to PV when I delete the statefulset? Please help me.</p>
<p>By default, Kubernetes does not delete <code>PersistentVolumeClaims</code> or the bound <code>PersistentVolume</code> objects when you scale a <code>StatefulSet</code> down or delete it.</p>
<p>Retaining <code>PersistentVolumeClaims</code> is the default behavior, but you can configure the <code>StatefulSet</code> to delete them via the <code>persistentVolumeClaimRetentionPolicy</code> field.</p>
<p>This example shows part of a <code>StatefulSet</code> manifest in which the retention policy deletes the <code>PersistentVolumeClaim</code> when the <code>StatefulSet</code> is scaled down, and retains it when the <code>StatefulSet</code> is deleted.</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: quiz
spec:
persistentVolumeClaimRetentionPolicy:
whenScaled: Delete
whenDeleted: Retain
</code></pre>
<p>Make sure your StatefulSet manifest and Kafka cluster are configured properly.</p>
<p>NOTE</p>
<blockquote>
<p>If you want to delete a StatefulSet but keep the Pods and the
PersistentVolumeClaims, you can use the --cascade=orphan option. In
this case, the PersistentVolumeClaims will be preserved even if the
retention policy is set to Delete.</p>
</blockquote>
<p><a href="https://www.manning.com/books/kubernetes-in-action-second-edition" rel="nofollow noreferrer">Marko Lukša "Kubernetes in Action, Second Edition"</a></p>
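<p>For example, a hedged sketch of such a deletion, assuming the StatefulSet is named <code>kafka</code>:</p>
<pre><code># Delete the StatefulSet but keep its Pods and PersistentVolumeClaims
kubectl delete statefulset kafka --cascade=orphan
</code></pre>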
|
<p>I'm trying to create a new <strong>Kubernetes Service Connection</strong> for <strong>Azure DevOps</strong>, but when I try to create it I get the error:</p>
<blockquote>
<p>You don’t appear to have an active Azure subscription</p>
</blockquote>
<p>I've tried a few ways to fix the issue but it's not working.</p>
| <p><strong>Here's how I solved it</strong>:</p>
<p>I simply went to <strong>Azure DevOps</strong> > <strong>Project</strong> > <strong>Project settings</strong></p>
<p><a href="https://i.stack.imgur.com/4EkFj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4EkFj.png" alt="enter image description here" /></a></p>
<p>Next, I went to <strong>Permissions</strong> > <strong>Endpoint Administrators</strong> > <strong>Members</strong></p>
<p><a href="https://i.stack.imgur.com/6FUhY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6FUhY.png" alt="enter image description here" /></a></p>
<p>Then I added my user to one of the groups under the <strong>Endpoint Administrators</strong> group.
You are also allowed to add your user directly, but permissions are better managed in groups and not individually.</p>
<p>Since the permission updates might take some minutes to take effect in the current web browser window, I logged in to <strong>Azure DevOps</strong> using a <strong>New incognito window</strong> of my web browser; this time I was able to create a new <strong>Kubernetes Service Connection</strong>.</p>
|
<p>I have installed loki and grafana via helm charts to my cloud hosted k8s cluster. How do I send the loki metrics to grafana cloud? I know I should edit the promtail config file but how do I locate and view/edit helm chart files?</p>
<p>My installation procedure for loki+grafana are:</p>
<pre><code>helm upgrade --install loki --namespace=monitoring grafana/loki-stack
helm upgrade --install grafana --namespace=monitoring grafana/grafana
kubectl port-forward service/grafana 3000:80 -n monitoring
</code></pre>
<p>This installation of loki only exposes loki to grafana locally. I wish to enter the configuration of loki (or more specifically promtail) so that I can send the loki logs to my grafana cloud account and monitor the logs from grafana cloud as well.
I have done this in a non-k8s setting by modifying the promtail-config.yaml which is referenced to send the logs to grafana cloud. Adding the grafana cloud url to the promtail config such as:</p>
<pre><code>clients:
- url: http://loki:3100/loki/api/v1/push
- url: https://123456:[email protected]/api/prom/push
</code></pre>
<p>When I explore grafana cloud > intergrations and connections > loki hosted logs > k8s cluster. I receive the following instructions</p>
<blockquote>
<p>Your configuration, complete with your
API key, has been generated below. Copy and paste this code to
promtail/config.yaml to send data using promtail.</p>
</blockquote>
<p>The code snippet:</p>
<pre><code>curl -fsS https://raw.githubusercontent.com/grafana/loki/master/tools/promtail.sh | sh -s 123456 eB9... logs-prod3.grafana.net default | kubectl apply --namespace=default -f -
</code></pre>
<p>I am new to helm charts, so I do not know how to view the helm config files and where they are stored. Or how to obtain the <code>promtail/config.yaml</code> file from helm or the k8s cluster.</p>
<p>I'm not an expert in Loki, but I downloaded the default values.yaml that the chart uses, and it has an option for the Promtail configuration which should help you (shown below). Looking at the commands above, Helm used the default values to install the chart; you can still modify them in two ways.</p>
<p>1st: Run the below to get the values.yaml that was used to install the chart by default, modify the Promtail configuration and then upgrade the installation using the modified values.yaml</p>
<pre><code># Get the values that were used to install the release named "loki"
helm get values loki -n monitoring > values.installed.loki.yaml
# Modify values.installed.loki.yaml; there are a few promtail settings you may need to adjust, e.g.:
promtail:
enabled: true
config:
lokiAddress: http://{{ .Release.Name }}:3100/loki/api/v1/push
# Once done, upgrade the chart with the modified values using the -f flag
helm upgrade --install loki --namespace=monitoring grafana/loki-stack -f values.installed.loki.yaml
</code></pre>
<p>2nd: Get the default values.yaml as shown below which is used for Loki installation, modify the promtail configuration and upgrade the installation as shown below</p>
<pre><code>helm show values grafana/loki-stack > values.loki.yaml
# Modify values.loki.yaml for the promtail configuration, e.g.:
promtail:
enabled: true
config:
lokiAddress: http://{{ .Release.Name }}:3100/loki/api/v1/push
# Then run an upgrade on the existing installation with the new values file
helm upgrade --install loki --namespace=monitoring grafana/loki-stack -f values.loki.yaml
</code></pre>
|
<p>I'm trying to create an EKS cluster with terraform and configure it thorugh kubectl and istio basic following this guides:</p>
<p><a href="https://rtfm.co.ua/en/aws-elastic-kubernetes-service-running-alb-ingress-controller/" rel="nofollow noreferrer">alb-ingress-controller</a></p>
<p><a href="https://rtfm.co.ua/en/istio-external-aws-application-loadbalancer-and-istio-ingress-gateway/" rel="nofollow noreferrer">istio-alb</a></p>
<p>However when trying to deploy the alb, it does not create any alb on aws.</p>
<p>Running <code>kubectl get ingress -n istio-system</code>, I get:</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
alb <none> * 80 4s
</code></pre>
<p>I'm unable to debug it, as I can't find any log telling me why the ALB is not deployed. Has anyone come across the same issue? Or does anyone have any clues on how to pinpoint it?</p>
<p>Follow config files used:</p>
<p>ingress-alb.yaml</p>
<pre><code> ---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: alb
namespace: istio-system
annotations:
# create AWS Application LoadBalancer
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/certificate-arn: "***"
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
external-dns.alpha.kubernetes.io/hostname: "****"
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: ssl-redirect
servicePort: use-annotation
- path: /*
backend:
serviceName: istio-ingressgateway
servicePort: 80
</code></pre>
<p><code>kubectl -n istio-system get svc istio-ingressgateway</code></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway NodePort ************** <none> 15021:30012/TCP,80:31684/TCP,443:30689/TCP 132m
</code></pre>
<p>eks_cluster.tf</p>
<pre><code>data "aws_eks_cluster" "eks" {
name = module.eks_cluster.cluster_id
}
data "aws_eks_cluster_auth" "eks" {
name = module.eks_cluster.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.eks.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.eks.token
}
module "eks_cluster" {
source = "terraform-aws-modules/eks/aws"
cluster_version = "1.18"
cluster_name = var.eks.cluster_name
vpc_id = *****
subnets = *****
cluster_endpoint_private_access = true
enable_irsa = true
worker_groups = [
{
name = "worker-group-1"
instance_type = "t3a.medium"
asg_min_size = 1
asg_max_size = 3
asg_desired_capacity = 2
root_volume_type = "gp3"
root_volume_size = 20
}
]
map_users = [{
userarn = "***"
username = "****"
groups = ["****"]
}]
}
</code></pre>
<p>I managed to make the ALB work with the following versions:</p>
<pre><code>EKS - 1.22
aws-load-balancer-controller - v2.4.1
istioctl - 1.14.2
</code></pre>
<p>First, I changed the istio-ingressgateway Service type from LoadBalancer to NodePort.</p>
<pre><code>istioctl install --set profile=default --set values.gateways.istio-ingressgateway.type=NodePort -y
</code></pre>
<p>Then create the Ingress. Here is mine; I only configured it to support port 80:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: istio-alb
namespace: istio-system
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
alb.ingress.kubernetes.io/subnets: subnet-aaaaaaa, subnet-bbbbbbbb, subnet-cccccc
alb.ingress.kubernetes.io/security-groups: sg-xxxxxxxxxx
alb.ingress.kubernetes.io/healthcheck-path: /healthz/ready
alb.ingress.kubernetes.io/healthcheck-port: "32560"
spec:
rules:
- http:
paths:
- backend:
service:
name: istio-ingressgateway
port:
number: 80
pathType: ImplementationSpecific
</code></pre>
<p>I believe adding the security group annotation will solve your issue.</p>
|
<p>I want to run Python code inside a pod. The pod is created by airflow that I don't control.</p>
<p>I want to somehow get the name of the pod I'm running in.</p>
<p>How can it be done?</p>
<p>You can tell Kubernetes to mount an env variable for you:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
<p>and then in python you can access it like:</p>
<pre class="lang-py prettyprint-override"><code>import os
pod_name = os.environ['MY_POD_NAME']
</code></pre>
<hr />
<p>Or you can just open and read <code>/etc/hostname</code>:</p>
<pre class="lang-py prettyprint-override"><code># /etc/hostname contains the pod name followed by a trailing newline
with open('/etc/hostname') as f:
    pod_name = f.read().strip()
</code></pre>
|
<p>All.</p>
<p>I have questions about the purpose of the root directory for containerd, docker, and kubeadm.
I've been trying to change the root directory from the default to a specific directory.</p>
<p>As far as I known, there are 3 types of root directory like below.</p>
<ul>
<li>docker : --data-root /var/lib/docker</li>
<li>containerd : --root /var/lib/containerd</li>
<li>kubeadm : --root-dir /var/lib/kubeadm</li>
</ul>
<p>Please anyone let me know exactly about what those directories are used for.</p>
<p>Thanks.</p>
<p>These are the implementation-specific data directories for these components. With Docker, all your containers, images and their related metadata are stored in <code>/var/lib/docker</code>; likewise, containerd stores all of its container-related objects in <code>/var/lib/containerd</code>. The <code>--root-dir</code> flag belongs to the kubelet (default <code>/var/lib/kubelet</code>) and holds the pod, volume and plugin state that the kubelet manages.</p>
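<p>If your goal is to relocate these directories, here is a hedged sketch of the usual knobs; the <code>/data/...</code> paths are placeholders, and you should stop the services and migrate any existing data before switching:</p>
<pre><code># Docker: /etc/docker/daemon.json
{
  "data-root": "/data/docker"
}

# containerd: /etc/containerd/config.toml (top-level setting)
root = "/data/containerd"

# kubelet: pass --root-dir=/data/kubelet to the kubelet
</code></pre>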
|
<p>I am deploying a monitoring stack from the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer"><code>kube-prometheus-stack</code></a> helm chart and I am trying to configure alertmanager so that it has my custom configuration for alerting in a Slack channel.</p>
<p>The configuration in the pod is loaded from <code>/etc/alertmanager/config/alertmanager.yaml</code>.
From the pod description, this file is loaded from a secret automatically generated:</p>
<pre class="lang-yaml prettyprint-override"><code>...
volumeMounts:
- mountPath: /etc/alertmanager/config
name: config-volume
...
volumes:
- name: config-volume
secret:
defaultMode: 420
secretName: alertmanager-prometheus-community-kube-alertmanager-generated
</code></pre>
<p>If I inspect the secret, it contains the default configuration found in the default values in <code>alertmanager.config</code>, which I intend to overwrite.</p>
<p>If I pass the following configuration to alertmanager to a fresh installation of the chart, it does not create the alertmanager pod:</p>
<pre class="lang-yaml prettyprint-override"><code>alertmanager:
config:
global:
resolve_timeout: 5m
route:
group_by: ['job', 'alertname', 'priority']
group_wait: 10s
group_interval: 1m
routes:
- match:
alertname: Watchdog
receiver: 'null'
- receiver: 'slack-notifications'
continue: true
receivers:
- name: 'slack-notifications'
slack-configs:
- slack_api_url: <url here>
title: '{{ .Status }} ({{ .Alerts.Firing | len }}): {{ .GroupLabels.SortedPairs.Values | join " " }}'
text: '<!channel> {{ .CommonAnnotations.summary }}'
channel: '#mychannel'
</code></pre>
<p>First of all, if I don't pass any configuration in the <code>values.yaml</code>, the alertmanager pod is successfully created.</p>
<p>How can I properly overwrite alertmanager's configuration so it mounts the correct file with my custom configuration into <code>/etc/alertmanger/config/alertmanager.yaml</code>?</p>
<p>The alertmanager config requires certain non-default fields to overwrite the default, and it appears to fail silently: a wrong configuration leads to the pod not applying the configuration at all (<a href="https://github.com/prometheus-community/helm-charts/issues/1998" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/issues/1998</a>). What worked for me was to configure alertmanager carefully and add a Watchdog child route together with the <code>null</code> receiver:</p>
<pre><code>route:
group_by: [ '...' ]
group_wait: 30s
group_interval: 10s
repeat_interval: 10s
receiver: 'user1'
routes:
- match:
alertname: Watchdog
receiver: 'null'
receivers:
- name: 'null'
- ...
</code></pre>
|
<p>I created PV as follows:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: foo-pv
spec:
storageClassName: "my-storage"
claimRef:
name: foo-pvc
namespace: foo
</code></pre>
<p>Why do we need to give storageClassName in a PV? When the StorageClass creates the PV, why give storageClassName in the PV?</p>
<p>Can someone help me to understand this?</p>
| <p>According to Kubernetes Official documentation:</p>
<p><strong>Why do we need to give storageClassName in a PV?</strong></p>
<blockquote>
<p>Each <em>StorageClass</em> contains the fields <em>provisioner, parameters, and
reclaimPolicy</em>, which are used when a PersistentVolume belonging to
the class needs to be dynamically provisioned.</p>
<p>The name of a StorageClass object is significant, and is how users can
request a particular class. Administrators set the name and other
parameters of a class when first creating <em>StorageClass</em> objects, and
the objects cannot be updated once they are created.</p>
</blockquote>
<p><strong>When the StorageClass creates the PV, why give storageClassName in the PV?</strong></p>
<blockquote>
<p>A PersistentVolume (PV) is a piece of storage in the cluster that has
been provisioned by an administrator or dynamically provisioned using
<em>Storage Classes</em>. It is a resource in the cluster just like a node is a cluster resource.</p>
<p>Cluster administrators need to be able to offer a variety of
PersistentVolumes that differ in more ways than size and access modes,
without exposing users to the details of how those volumes are
implemented. For these needs, there is the <em>StorageClass</em> resource.</p>
</blockquote>
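<p>A minimal sketch of why the class name matters for binding: a PVC requests a class by name, and a statically created PV advertises the class it belongs to, so the two bind only when the names match. The names below reuse those from the question; the hostPath and size are illustrative assumptions:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  storageClassName: my-storage   # class this PV belongs to
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/foo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  storageClassName: my-storage   # must match for the claim to bind to this PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>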
<p>If you wish to know more about Storage class resources, please follow <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">this link</a>, or <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">this one</a> to know more about Persistent Volumes.</p>
|
<p>I want to implement a conditional template for Deployment resources through Helm which can be enabled or disabled as per the environment. Something like the below, which is not working. Or can we achieve the same through a different method?</p>
<pre><code>resources:
enabled: true
requests:
cpu: 100m
memory: 128Mi
</code></pre>
<p>You can add a condition in the deployment template:</p>
<pre><code>{{- if .Values.resources_limit.enabled }}
resources:
{{- toYaml .Values.resources_limit.resources | nindent 12 }}
{{- end }}
</code></pre>
<p>and the value file should be like this</p>
<pre><code>resources_limit:
enabled: true
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
</code></pre>
<p>To disable it, for example in <code>develop-values.yaml</code>:</p>
<pre><code>resources_limit:
enabled: false
</code></pre>
|
<p>I want to implement a conditional template for Deployment resources through Helm which can be enabled or disabled as per the environment. Something like the below, which is not working. Or can we achieve the same through a different method?</p>
<pre><code>resources:
enabled: true
requests:
cpu: 100m
memory: 128Mi
</code></pre>
<p>You can also rely directly on the <strong>resources</strong> value, without adding an <strong>if</strong> condition or introducing a new variable in <code>values.yaml</code>:</p>
<pre><code>resources:
{{- toYaml .Values.resources | nindent 12 }}
</code></pre>
<p><strong>values.yaml</strong>: if values are added in <code>values.yaml</code> they will be applied to the template; otherwise they are simply ignored.</p>
<pre><code>resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
</code></pre>
<p><strong>Disable it</strong></p>
<pre><code>resources: {}
</code></pre>
<p>For ref : <a href="https://opensource.com/article/20/5/helm-charts" rel="nofollow noreferrer">https://opensource.com/article/20/5/helm-charts</a></p>
|
<p>I have a k8s cluster with three masters, two workers, and an external haproxy, and I use flannel as the CNI.
The CoreDNS pods have problems: their status is Running, but they don't become Ready.</p>
<p>Coredns log</p>
<p><img src="https://i.stack.imgur.com/lFNGV.png" alt="1" /></p>
<p>I get the logs of this pod, and I get this message:</p>
<blockquote>
<p>[INFO] plugin/ready: Still waiting on: "Kubernetes."</p>
</blockquote>
<p>What I did to try to solve this problem, without any result:<br />
1- check ufw and disable it.<br />
2- check IPtables and flush them.<br />
3- check Kube-proxy logs.<br />
4- check the haproxy; it is accessible from outside and from all servers in the cluster.<br />
5- check the nodes' network.<br />
6- reboot all servers at the end. :))</p>
<p>I get describe po :</p>
<p><a href="https://i.stack.imgur.com/1S5ux.png" rel="nofollow noreferrer">describe pod</a></p>
| <h3>Lets see your CoreDNS works at all?</h3>
<ul>
<li>You can create a simple pod, exec into it, and from there curl Services via <code>IP:PORT</code> & <code>Service-name:PORT</code>
<pre class="lang-bash prettyprint-override"><code>kubectl run -it --rm test-nginx-svc --image=nginx -- bash
</code></pre>
</li>
</ul>
<ol>
<li>IP:PORT
<pre class="lang-bash prettyprint-override"><code>curl http://<SERVICE-IP>:8080
</code></pre>
</li>
<li>DNS
<pre class="lang-bash prettyprint-override"><code>curl http://nginx-service:8080
</code></pre>
</li>
</ol>
<p>If you couldn't curl your service via <code>Service-name:PORT</code> then you probably have a DNS Issue....</p>
<hr />
<h4>CoreDNS</h4>
<p>Service Name Resolution Problems?</p>
<ul>
<li>Check CoreDNS Pods are running and accessible?</li>
<li>Check CoreDNS logs</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl run -it test-nginx-svc --image=nginx -- bash
</code></pre>
<ul>
<li>Inside the Pod
<pre class="lang-bash prettyprint-override"><code>cat /etc/resolv.conf
</code></pre>
</li>
<li>The result would look like:
<pre><code>nameserver 10.96.0.10 # IP address of CoreDNS
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
</li>
</ul>
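<p>As an additional hedged check of Service name resolution, you can run a one-off DNS lookup; <code>busybox:1.28</code> is assumed here because its <code>nslookup</code> implementation is known to behave well for this test:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
</code></pre>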
<hr />
<h3>If it is NOT working:</h3>
<p>I suggest trying to re-install it via the <a href="https://github.com/flannel-io/flannel" rel="nofollow noreferrer">official docs</a> or the <a href="https://github.com/flannel-io/flannel/issues/1426" rel="nofollow noreferrer">helm chart</a></p>
<p>OR</p>
<p>Try other CNIs like <a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/" rel="nofollow noreferrer">weave</a></p>
<hr />
<p><a href="https://github.com/alifiroozi80/CKA/tree/main/CKA#coredns" rel="nofollow noreferrer">Source</a></p>
|
<p>GCP allows the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts. This binding allows the Kubernetes service account to act as the IAM service account.</p>
<pre><code>gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"
</code></pre>
<p>We would like to create the same via Terraform resource and we tried this way, refer: <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_service_account_iam#google_service_account_iam_binding" rel="nofollow noreferrer">article</a></p>
<pre><code>resource "google_service_account_iam_binding" "service-account-iam" {
service_account_id = "GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com"
role = "roles/iam.workloadIdentityUser"
members = [
"serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]",
]
}
</code></pre>
<p>But we received the below error:</p>
<blockquote>
<p>Error: "service_account_id" ("[email protected]") doesn't match regexp "projects/(?:(?:[-a-z0-9]{1,63}\.)<em>(?:a-z?):)?(?:[0-9]{1,19}|(?:a-z0-9?)|-)/serviceAccounts/((?:(?:[-a-z0-9]{1,63}\.)</em>(?:a-z?):)?(?:[0-9]{1,19}|(?:a-z0-9?))@[a-z]+.gserviceaccount.com$|[0-9]{1,20}[email protected]|a-z@[-a-z0-9\.]{1,63}\.iam\.gserviceaccount\.com$)"</p>
</blockquote>
<p>What's wrong here?</p>
<p><code>service_account_id</code> must be the fully-qualified name of the service account to apply the policy to, not just its email:</p>
<p><code>projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_EMAIL</code></p>
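<p>A hedged sketch using the same placeholders as in the question:</p>
<pre><code>resource "google_service_account_iam_binding" "service-account-iam" {
  service_account_id = "projects/GSA_PROJECT/serviceAccounts/GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com"
  role               = "roles/iam.workloadIdentityUser"

  members = [
    "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]",
  ]
}
</code></pre>
<p>If the service account is itself managed in Terraform, the <code>name</code> attribute of a <code>google_service_account</code> resource already returns this fully-qualified form and can be referenced directly.</p>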
|
<p>We are using the Go <a href="https://pkg.go.dev/helm.sh/helm/v3/pkg/action" rel="nofollow noreferrer">helm.sh/helm/v3/pkg/action</a> package to install Helm charts.
We are able to pass installation values to Helm chart <code>Run</code> functions via <code>map[string]interface{}</code>, as in <a href="https://stackoverflow.com/questions/45692719/samples-on-kubernetes-helm-golang-client">Samples on kubernetes helm golang client</a>:</p>
<pre class="lang-golang prettyprint-override"><code>rel, err := client.Run(myChart, vals)
if err != nil {
log.Println(err)
panic(err)
}
</code></pre>
<p>But we have a <code>values.yaml</code> file also which was passed as <code>-f values.yaml</code> when installing charts from the CLI.
Is there a way to pass these <code>values.yaml</code> files via the action Go package during installation (<code>client.Run()</code>)?</p>
<p>Or do we need to unmarshal the YAML file and pass that also as map:</p>
<pre class="lang-golang prettyprint-override"><code>data2 := make(map[string]interface{})
yfile2, err := ioutil.ReadFile("./utils/values.yaml")
fmt.Println(err)
err = yaml.Unmarshal(yfile2, &data2)
</code></pre>
| <p>One straightforward thing to do could be to reuse the <a href="https://pkg.go.dev/helm.sh/helm/[email protected]/pkg/cli/values" rel="nofollow noreferrer">helm.sh/helm/v3/pkg/cli/values</a> package. This has the logic to handle the <code>-f</code> option (and also <code>--set</code> and its variants) and return a unified <code>map[string]interface{}</code>. This could look like:</p>
<pre class="lang-golang prettyprint-override"><code>import (
"helm.sh/helm/v3/pkg/cli"
"helm.sh/helm/v3/pkg/cli/values"
"helm.sh/helm/v3/pkg/getter"
)
envSettings := cli.New()
providers := getter.All(envSettings)
options := values.Options{
ValueFiles: []string{"./utils/values.yaml"},
}
// MergeValues reads and merges the value files/sets and returns (map[string]interface{}, error)
theValues, err := options.MergeValues(providers)
if err != nil {
    panic(err)
}
</code></pre>
<p>Now <code>theValues</code> is the <code>map[string]interface{}</code> that results from reading those files. You can customize the values further if required (as Go data) and then pass that to <code>install.Run()</code>.</p>
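<p>For example, reusing <code>client</code> and <code>myChart</code> from the question, the merged map is passed exactly like the inline values:</p>
<pre class="lang-golang prettyprint-override"><code>rel, err := client.Run(myChart, theValues)
if err != nil {
    panic(err)
}
</code></pre>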
|
<p>I can't find where the kubelet logs are located for Docker Desktop (Windows). There's a similar question <a href="https://stackoverflow.com/questions/34113476/where-are-the-kubernetes-kubelet-logs-located">here</a>, but the answers all refer to linux/kind installs of kubernetes.</p>
| <p><em>I get that this is an old question, but it does seem to get some traffic, has no accepted answer and is the only of this kind on SO - here is my take on this problem and a solution:</em></p>
<h2>The process</h2>
<p>As already pointed out by @acid_fuji, everything is running inside a VM we need to access, to get the required information. As most things in Linux have some representation in the filesystem, and tools like <code>ps</code> use <em>procfs</em> to query the requested information, mounting the VMs root directory (<code>/</code>) ist sufficient for introspection and quite easy:</p>
<p>Run a container and mount the VMs root directory <code>/</code> to <code>/host</code>:</p>
<pre class="lang-bash prettyprint-override"><code>docker run --rm -it -v /:/host alpine
</code></pre>
<p>In that container, get a shell with the VMs root directory in <code>/host</code> as root <code>/</code>:</p>
<pre class="lang-bash prettyprint-override"><code>chroot /host
</code></pre>
<p>From this shell with changed root directory, tools like <code>ps</code> will return information about the VM and not your container anymore. Next step is to find some information about the running <code>kubelet</code>. Get a tree formatted list of all running processes in the VM with their command line and highlight <em>kubelet</em>:</p>
<pre class="lang-bash prettyprint-override"><code>ps -ef --forest | grep --color -E 'kubelet|$'
</code></pre>
<p>Reduced to the relevant portions, the result will look something like this:</p>
<pre><code>UID PID PPID C STIME TTY TIME CMD
...
root 30 1 0 Aug11 ? 00:05:09 /usr/bin/memlogd -fd-log 3 -fd-query 4 -max-lines 5000 -max-line-len 1024
...
root 543 1 0 Aug11 ? 00:00:16 /usr/bin/containerd-shim-runc-v2 -namespace services.linuxkit -id docker -address /run/containerd/containerd.sock
root 563 543 0 Aug11 ? 00:00:02 \_ /usr/bin/docker-init /usr/bin/entrypoint.sh
root 580 563 0 Aug11 ? 00:00:00 | \_ /bin/sh /usr/bin/entrypoint.sh
root 679 580 0 Aug11 ? 00:00:00 | \_ /usr/bin/logwrite -n lifecycle-server /usr/bin/lifecycle-server
root 683 679 0 Aug11 ? 00:00:35 | \_ /usr/bin/lifecycle-server
...
root 1539 683 0 Aug11 ? 00:00:01 | \_ /usr/bin/logwrite -n kubelet kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config /etc/kubeadm/kubelet.yaml --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --hostname-override=docker-desktop --container-runtime=remote --container-runtime-endpoint unix:///var/run/cri-dockerd.sock
root 1544 1539 2 Aug11 ? 00:38:38 | \_ kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config /etc/kubeadm/kubelet.yaml --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --hostname-override=docker-desktop --container-runtime=remote --container-runtime-endpoint unix:///var/run/cri-dockerd.sock
</code></pre>
<p>We find <code>kubelet</code> as PID <strong>1544</strong>, spawned by <code>logwrite</code> (PID <strong>1539</strong>). Issuing <code>logwrite --help</code> we find the <code>-n</code> flag to set the name to appear in logs for the launched instance, and <code>logwrite</code> to be sending its logs to <code>memlogd</code>, which we find as PID <strong>30</strong>.</p>
<p>Knowing what to search for I found this blog post <a href="https://www.docker.com/blog/capturing-logs-in-docker-desktop/" rel="nofollow noreferrer">Capturing Logs in Docker Desktop</a> describing how logging in <em>Docker Desktop</em> is implemented. What we can learn:</p>
<p><code>logwrite</code> and <code>memlogd</code> are part of <a href="https://github.com/linuxkit/linuxkit" rel="nofollow noreferrer">Linuxkit</a>, and LinuxKit provides binaries of their packages as containers, with the logging infrastructure as <a href="https://hub.docker.com/r/linuxkit/memlogd" rel="nofollow noreferrer"><code>linuxkit/memlogd</code></a>. <code>memlogd</code> has a socket to query for logs at <code>/run/guest-services/memlogdq.sock</code>.</p>
<p><em>Be aware that the <code>latest</code> tag on <code>linuxkit/memlogd</code> is quite useless; pick a specific tag matching the version running in the VM, and just try another tag if the next step errors.</em></p>
<h2>The solution</h2>
<p>Run <code>logread</code> from <a href="https://hub.docker.com/r/linuxkit/memlogd" rel="nofollow noreferrer"><code>linuxkit/memlogd</code></a> and mount the VMs <code>/run/guest-services/memlogdq.sock</code> to the <code>logread</code>s expected default location at <code>/var/run/memlogdq.sock</code>. Tell <code>logread</code> to either show only new entries and follow, using <code>-f</code>, or dump all existing entries and follow with <code>-F</code>. Pipe this through something like grep, or a powershell equivalent and filter for <code>kubelet</code>:</p>
<pre class="lang-bash prettyprint-override"><code># example assuming presence grep on windows
docker run --rm -it -v /run/guest-services/memlogdq.sock:/var/run/memlogdq.sock linuxkit/memlogd:014f86dce2ea4bb2ec13e92ae5c1e854bcefec40 /usr/bin/logread -F | grep -i kubelet
</code></pre>
<p>Tag <code>014f86dce2ea4bb2ec13e92ae5c1e854bcefec40</code> is working with <em>Docker Desktop <code>v4.11.1</code></em>; just test which container version/tag works with your docker version.</p>
|
<p>How can I give shutdownDelay a default value of 50 seconds in my template?</p>
<pre><code> lifecycle:
preStop:
{{ toYaml .Values.lifecycleHooks.preStop | indent 14 }}
{{- else if gt (int .Values.shutdownDelay) 0 }}
preStop:
exec:
command:
- sleep
- "{{ int .Values.shutdownDelay }}"
</code></pre>
<p>I tried a few combinations but nothing is working.</p>
<p>For example: if shutdownDelay isn't specified in values.yaml, 50 should be used as the default value.</p>
| <p>Ordinarily I'd recommend the <a href="https://docs.helm.sh/docs/chart_template_guide/function_list/#default" rel="nofollow noreferrer"><code>default</code></a> function</p>
<pre class="lang-none prettyprint-override"><code>... (gt (.Values.shutdownDelay | default 50) 0) ...
</code></pre>
<p>The problem here is that <code>0</code> is an expected value, but it's logical "false" for purposes of <code>default</code>, so you won't be able to explicitly specify no delay. You can't easily tell <code>0</code> and <code>nil</code> or "absent" apart with simple conditionals like this.</p>
<p>The "big hammer" here is the <a href="https://docs.helm.sh/docs/chart_template_guide/function_list/#ternary" rel="nofollow noreferrer"><code>ternary</code></a> function, which acts like the inline <code>condition ? true_expr : false_expr</code> conditional expression in C-like languages. You can use this to select one value if the value is absent and another if it's present.</p>
<pre class="lang-none prettyprint-override"><code>{{- $delay := hasKey .Values "shutdownDelay" | ternary .Values.shutdownDelay 50 }}
{{- if gt $delay 0 }}
...
{{- end }}
</code></pre>
<p>If you decide to use a more complex expression inside <code>ternary</code>, remember that it is an ordinary function and is not "short-circuiting"; the condition and both values are always evaluated even though only one will actually be returned, so your expression will need to pass the various type constraints even if the value is absent.</p>
|
<p>I'm new to k8 and i'm trying to figure out how to deploy my first docker image on minikube.</p>
<p>My k8.yaml file is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-docker-image
image: my-docker-image:1.0
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: default
spec:
type: LoadBalancer
selector:
app: my-service
ports:
- port: 8080
targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: my-ingress
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
number: 8080
</code></pre>
<p>Everything seems fine to me; however, I'm not able to reach my service on the cluster.
I tried to create a tunnel using the <code>minikube tunnel</code> command, and I get this result when I execute <code>kubectl get services</code>:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service LoadBalancer 10.109.154.236 127.0.0.1 8080:30558/TCP 2m53s
</code></pre>
<p>However, if I try to call my service at 127.0.0.1:30588, the host is unreachable.</p>
<p>Can someone help me?</p>
<p>There is an issue in the Service selector as well, so first we need to fix it: the selector should match the Deployment's pod label</p>
<pre><code> replicas: 1
selector:
matchLabels:
app: my-app
</code></pre>
<p>and the Service should refer to this selector; that is, the selector in the Service should be <code>app: my-app</code>, the same label used by the Deployment above:</p>
<pre><code> type: LoadBalancer
selector:
app: my-app
ports:
- port: 8080
targetPort: 8080
</code></pre>
<p>to access from the host</p>
<pre><code>minikube service my-service
</code></pre>
<p><a href="https://i.stack.imgur.com/4Kjj0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Kjj0.png" alt="enter image description here" /></a></p>
<p>and here you go</p>
<pre><code>kubectl delete -f myapp.yaml
kubectl apply -f myapp.yaml
</code></pre>
<p>deployment manifest</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-docker-image
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: default
spec:
type: LoadBalancer
selector:
app: my-app
ports:
- port: 80
targetPort: 80
</code></pre>
<p>Also worth considering the service type for Minikube.</p>
<p><a href="https://stackoverflow.com/questions/41509439/whats-the-difference-between-clusterip-nodeport-and-loadbalancer-service-types">What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?</a></p>
|
<p>In Kubernetes secrets I can only retrieve creation timestamp</p>
<pre><code>metadata:
creationTimestamp:
</code></pre>
<p>However I don't find a way to check modified/updated timestamp</p>
<p>Is this a missing feature, or am I just not able to find it?</p>
<p>Indeed, while Kubernetes metadata usually includes creationTimestamp, there is nothing tracking a previous or last-modification timestamp. For Deployments, you could track changes using their ReplicaSets. Or, querying etcd directly, you may find previous revisions of any object that were not yet removed by etcd compactions.</p>
<p>One reliable way to keep track of these would be to enable <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/" rel="nofollow noreferrer">Kubernetes audit logs</a>. Then you may track who creates, updates or deletes objects.</p>
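<p>As a minimal sketch, assuming you can set the API server's audit flags (e.g. <code>--audit-policy-file</code> and <code>--audit-log-path</code>), an audit policy that records metadata for every write on Secrets would look like this; each matching event then carries a timestamp, the verb and the user:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata            # log who/what/when, but not the secret payload
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""              # core API group
        resources: ["secrets"]
</code></pre>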
<p>If you already use EFK or a similar logging stack, you should be able to store audit logs alongside your cluster logs; better to use a distinct index pattern. Note there can be a huge volume of logs to keep: another option is to export those logs to S3, then use something like Athena or Hive to query that data.</p>
|
<p>When I went through some of the tutorials online, it showed that only the worker nodes have the container runtime.
But from my understanding, it seems master nodes also run some pods such as <a href="https://stackoverflow.com/questions/58481709/why-kubelet-is-running-on-kubernetes-master-node#:%7E:text=The%20reason%20is%20that%20kubeadm,to%20provide%20the%20infrastructure%20pods.">etcd and the api server components</a> to ensure the cluster functions properly and thus has kubelet.
Can anyone please correct me if I'm wrong and answer my question if possible?</p>
| <p><code>Master</code> nodes have <code>CRI</code> too, verify it using: <code>kubectl get nodes -o wide</code>.</p>
<p>When a Kubernetes cluster is first set up, a <code>Taint</code> is set on the master node. This automatically prevents regular workload pods (those without a matching toleration) from being scheduled on that node; the control-plane components themselves run as static pods managed directly by the kubelet, or tolerate the taint. It's still definitely possible to run pods on the master node. However, <code>best practice</code> is not to deploy application workloads on a master server.</p>
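<p>A quick, hedged way to see this on your own cluster; the node name is a placeholder, and the exact taint key depends on the Kubernetes version:</p>
<pre><code>kubectl describe node <control-plane-node-name> | grep Taints
# Taints: node-role.kubernetes.io/control-plane:NoSchedule
# (older releases use node-role.kubernetes.io/master:NoSchedule)
</code></pre>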
<p>In terms of tutorials, I believe it's just to keep things simple.</p>
|
<p>I have created an image of my app using docker and when I run the docker image on my local machine it runs successfully, but the same image is erroring out when running using kubernetes.
The error is: <code>exec /docker-entrypoint.sh: exec format error</code>
Upon surfing the net for possible solutions I found the suggestion to add the shebang at the start of the <code>docker-entrypoint.sh</code>. But this is a default .sh file as I've not specified any ENTRYPOINT in my dockerfile. I am not able to understand where to add the shebang.</p>
<p>Here is my dockerfile:</p>
<pre><code># pull the official base image
FROM node:18-alpine as builder
ENV NODE_ENV production
# set working direction
WORKDIR /app
# install application dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm i --production
# add app
COPY . .
# Build the app
RUN npm run build
# Bundle static assets with nginx
FROM nginx:1.23.1-alpine as production
ENV NODE_ENV production
# Copy built assets from builder
COPY --from=builder /app/build /usr/share/nginx/html
# Add your nginx.conf
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Expose port
EXPOSE 80
# Start nginx
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>Some information about the image created:</p>
<pre><code>"Architecture": "arm64",
"Variant": "v8",
"Os": "linux",
"Cmd": ["nginx", "-g", "daemon off;"],
"Entrypoint": ["/docker-entrypoint.sh"]
</code></pre>
<p>Try something like this in your <strong>Dockerfile</strong>:</p>
<pre><code>FROM --platform=linux/amd64 node:18-alpine as builder
</code></pre>
<p>Build the image and run it on K8s.</p>
<p>It could be that the image was built on an ARM machine (e.g. an Apple Silicon Mac), as the <code>arm64</code> architecture in your image inspect suggests, while your K8s cluster nodes do not support that architecture.</p>
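<p>Note that in a multi-stage build the architecture of the final image comes from the last stage (the nginx one here), so the platform has to apply there as well. A hedged alternative is to build the whole image for the target platform with buildx; the image name below is a placeholder:</p>
<pre><code>docker buildx build --platform linux/amd64 -t my-app:latest .
</code></pre>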
|
<p>I installed the <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">kube-prometheus-0.9.0</a>, and want to deploy a sample application on which to test the Prometheus metrics autoscaling, with the following resource manifest file: (hpa-prome-demo.yaml)</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hpa-prom-demo
spec:
selector:
matchLabels:
app: nginx-server
template:
metadata:
labels:
app: nginx-server
spec:
containers:
- name: nginx-demo
image: cnych/nginx-vts:v1.0
resources:
limits:
cpu: 50m
requests:
cpu: 50m
ports:
- containerPort: 80
name: http
---
apiVersion: v1
kind: Service
metadata:
name: hpa-prom-demo
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "80"
prometheus.io/path: "/status/format/prometheus"
spec:
ports:
- port: 80
targetPort: 80
name: http
selector:
app: nginx-server
type: NodePort
</code></pre>
<p>For testing purposes, I used a NodePort Service, and luckily I can get the HTTP response after applying the deployment. Then I installed
Prometheus Adapter via Helm chart, creating a new <code>hpa-prome-adapter-values.yaml</code> file to override the default values, as follows.</p>
<pre class="lang-yaml prettyprint-override"><code>rules:
default: false
custom:
- seriesQuery: 'nginx_vts_server_requests_total'
resources:
overrides:
kubernetes_namespace:
resource: namespace
kubernetes_pod_name:
resource: pod
name:
matches: "^(.*)_total"
as: "${1}_per_second"
metricsQuery: (sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))
prometheus:
url: http://prometheus-k8s.monitoring.svc
port: 9090
</code></pre>
<p>I added a custom rule and specified the address of Prometheus, then installed Prometheus Adapter with the following command.</p>
<pre class="lang-sh prettyprint-override"><code>$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring -f hpa-prome-adapter-values.yaml
NAME: prometheus-adapter
LAST DEPLOYED: Fri Jan 28 09:16:06 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
</code></pre>
<p>Finally the adatper was installed successfully, and can get the http response, as follows.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get po -nmonitoring |grep adapter
prometheus-adapter-665dc5f76c-k2lnl 1/1 Running 0 133m
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
<p>But it was supposed to be like this,</p>
<pre class="lang-json prettyprint-override"><code>$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "pods/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
<p>Why can't I get the metric <code>pods/nginx_vts_server_requests_per_second</code>? As a result, the query below also failed.</p>
<pre class="lang-sh prettyprint-override"><code> kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
Error from server (NotFound): the server could not find the metric nginx_vts_server_requests_per_second for pods
</code></pre>
<p>Could anybody please help? Many thanks.</p>
| <p><strong>ENV</strong>:</p>
<ol>
<li>All Prometheus charts installed via helm from <code>prometheus-community https://prometheus-community.github.io/helm-charts</code></li>
<li>k8s cluster enabled by docker for mac</li>
</ol>
<p><strong>Solution</strong>:<br />
I met the same problem. From the Prometheus UI, I found the metric had a <code>namespace</code> label but no <code>pod</code> label, as below.</p>
<pre><code>nginx_vts_server_requests_total{code="1xx", host="*", instance="10.1.0.19:80", job="kubernetes-service-endpoints", namespace="default", node="docker-desktop", service="hpa-prom-demo"}
</code></pre>
<p>I thought Prometheus may <strong>NOT</strong> use <code>pod</code> as a label, so I checked the Prometheus config and found:</p>
<pre><code>121 - action: replace
122 source_labels:
123 - __meta_kubernetes_pod_node_name
124 target_label: node
</code></pre>
<p>Then I searched
<a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/latest/configuration/configuration/</a> and added a similar block as below under every <code>__meta_kubernetes_pod_node_name</code> occurrence I found (i.e. 2 places):</p>
<pre><code>125 - action: replace
126 source_labels:
127 - __meta_kubernetes_pod_name
128 target_label: pod
</code></pre>
<p>After a while the ConfigMap reloaded, and both the UI and the API could find the <code>pod</code> label:</p>
<pre><code>$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "pods/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
{
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
|
<p>I am running my <strong>minikube as a Docker container (Docker driver).</strong>
I am trying to expose my service to the outside world using NodePort.</p>
<p>This is my yaml file.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: docker-hello-world
labels:
app: docker-hello-world
spec:
selector:
matchLabels:
app: docker-hello-world
replicas: 3
template:
metadata:
labels:
app: docker-hello-world
spec:
containers:
- name: docker-hello-world
image: scottsbaldwin/docker-hello-world:latest
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: docker-hello-world-svc
spec:
selector:
app: docker-hello-world
ports:
- port: 8088
targetPort: 80
type: NodePort
</code></pre>
<p>I've read a lot about <em><strong>NodePort: we require the node IP to access the service.</strong></em>
I am able to access my service using <em><strong>minikube service docker-hello-world-svc --url</strong></em>,
which gives me a URL to access the service, <em><code>http://127.0.0.1:52526</code></em>, but here the port number is different from the NodePort.
My service is running successfully.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-hello-world-svc NodePort 10.109.146.181 <none> 8088:30934/TCP 65m
</code></pre>
<p>I want to access my service from outside the cluster using NodePort, but my nodes do not have any external IP:</p>
<pre><code>kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane 5h9m v1.24.3 192.168.49.2 <none> Ubuntu 20.04.4 LTS 5.10.104-linuxkit docker://20.10.17
</code></pre>
<p>I have already read that I need an ingress controller to access the service, but I want to test it using <strong>NodePort</strong>.</p>
<p><strong>Is there any workaround so I can access my service using only <em>NodePort</em>, with minikube running as a Docker container</strong>?</p>
<p>The status of minikube does not show kubectl:</p>
<pre><code>>>minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
</code></pre>
| <p>You don't need an external IP when using NodePort, you can use <code>minikube ip</code> to get the minikube node address, and then connect to the respective <code>nodePort</code>:</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-hello-world-svc NodePort 10.108.66.235 <none> 8088:31550/TCP 2m15s
$ minikube ip
192.168.49.2
$ curl 192.168.49.2:31550
<h1>Hello webhook world from: docker-hello-world-cc79bf486-k4lm8</h1>
</code></pre>
<hr />
<p>Another alternative is to use a <code>LoadBalancer</code> service, and use <code>minikube tunnel</code> to connect to it using the internal port:</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-hello-world-svc LoadBalancer 172.16.58.95 172.16.58.95 8088:32624/TCP 10s
$ curl 172.16.58.95:8088
<h1>Hello webhook world from: docker-hello-world-cc79bf486-dg5s9</h1>
</code></pre>
<p>Notes:</p>
<ul>
<li>You will only get an <code>EXTERNAL-IP</code> after running <code>minikube tunnel</code> in a separate terminal (it's a foreground process)</li>
<li>I recommend that you run Minikube using a less common IP range for the services, preventing conflicts with other network routes:</li>
</ul>
<pre><code>minikube start --service-cluster-ip-range=172.16.0.0/16
</code></pre>
|