prompt | response
---|---|
<p>I have a link to a public URL in the format of <code>https://storage.googleapis.com/companyname/foldername/.another-folder/file.txt</code></p>
<p>I want to create an ingress rule that creates a path to this public file, so that whenever someone opens a specific URL, e.g., <a href="https://myapp.mydomain.com/.another-folder/myfile.txt" rel="nofollow noreferrer">https://myapp.mydomain.com/.another-folder/myfile.txt</a>, it serves the file above.</p>
<p>I tried a few different ingress rules such as:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: googlestoragebucket
spec:
externalName: storage.googleapis.com
ports:
- name: https
port: 443
protocol: TCP
targetPort: 443
type: ExternalName
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: staging-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: staging-static-ip
kubernetes.io/ingress.class: gce
spec:
defaultBackend:
service:
name: website-frontend
port:
number: 80
rules:
- host: myapp.mydomain.com
http:
paths:
- path: /.another-folder/
pathType: Prefix
backend:
service:
name: googlestoragebucket
port:
number: 443
- pathType: ImplementationSpecific
backend:
service:
name: myactual-app
port:
number: 80
</code></pre>
<p>But I couldn't make it work. In this case I've got an error: <code>Translation failed: invalid ingress spec: service "staging/googlestoragebucket" is type "ExternalName", expected "NodePort" or "LoadBalancer</code></p>
<p>I'm open to any other solution that achieves the same result in the context of GCP and Kubernetes.</p>
<p>Do you have any ideas?</p>
<p>Looking forward to your suggestions.</p>
| <p>I think you should be able to do it via a Cloud External Load Balancer:</p>
<p>Here is some information about that:</p>
<p><a href="https://cloud.google.com/load-balancing/docs/https/ext-load-balancer-backend-buckets" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/https/ext-load-balancer-backend-buckets</a></p>
<p>Then you can point the ingress to that load balancer:
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features</a></p>
<p>Another option is to use a proxy like Nginx; there is an issue on GitHub about this: <a href="https://github.com/kubernetes/ingress-nginx/issues/1809" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/1809</a></p>
|
<p>When trying to deploy Clickhouse operator on Kubernetes, by default access_management is commented out in users.xml file. Is there a way to uncomment it when installing kubernetes operator?</p>
<p>Clickhouse Operator deployment:</p>
<pre><code>kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/0.18.3/deploy/operator/clickhouse-operator-install-bundle.yaml
</code></pre>
<p>I have tried to do that through "ClickHouseInstallation" but that didn't work.</p>
<p>Furthermore, Clickhouse operator source code doesn't contain parameter for access_management</p>
| <p>Look at <code>kubectl explain chi.spec.configuration.files</code> and <code>kubectl explain chi.spec.configuration.users</code>.</p>
<p>Try:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
name: access-management-example
spec:
configuration:
files:
users.d/access_management.xml: |
<clickhouse><users>
<default><access_management>1</access_management></default>
</users></clickhouse>
</code></pre>
<p>Note that you will need to take care of replicating the RBAC objects yourself when the cluster layout changes (e.g., on scale-up).</p>
|
<p>I have a Terraform config that (among other resources) creates a Google Kubernetes Engine cluster on Google Cloud. I'm using the <code>kubectl</code> provider to add YAML manifests for a ManagedCertificate and a FrontendConfig, since these are not part of the kubernetes or google providers.
This works as expected when applying the Terraform config from my local machine, but when I try to execute it in our CI pipeline, I get the following error for both of the <code>kubectl_manifest</code> resources:</p>
<pre><code>Error: failed to create kubernetes rest client for read of resource: Get "http://localhost/api?timeout=32s": dial tcp 127.0.0.1:80: connect: connection refused
</code></pre>
<p>Since I'm only facing this issue during CI, my first guess is that the service account is missing the right scopes, but as far as I can tell, all scopes are present. Any suggestions and ideas are greatly appreciated!</p>
| <p>Fixed the issue by adding <code>load_config_file = false</code> to the <code>kubectl</code> provider config. My provider config now looks like this:</p>
<pre><code>data "google_client_config" "default" {}
provider "kubernetes" {
host = "https://${endpoint from GKE}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(CA certificate from GKE)
}
provider "kubectl" {
host = "https://${endpoint from GKE}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(CA certificate from GKE)
load_config_file = false
}
</code></pre>
|
<p>If I do</p>
<pre><code>POD=$($KUBECTL get pod -lsvc=app,env=production -o jsonpath="{.items[0].metadata.name}")
kubectl debug -it --image=mpen/tinker "$POD" -- zsh -i
</code></pre>
<p>I can get into a shell running inside my pod, but I want access to the filesystem for a container I've called "php". I think this should be at <code>/proc/1/root/app</code> but that directory doesn't exist. For reference, my Dockerfile has:</p>
<pre><code>WORKDIR /app
COPY . .
</code></pre>
<p>So all the files should be in the root <code>/app</code> directory.</p>
<p>If I add <code>--target=php</code> then I get permission denied:</p>
<pre><code>❯ cd /proc/1/root
cd: permission denied: /proc/1/root
</code></pre>
<p>How do I get access to the files?</p>
| <p>Reading through <a href="https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/" rel="nofollow noreferrer">the documentation</a>, using <code>kubectl debug</code> won't give you access to the filesystem in another container.</p>
<p>The simplest option may be to use <code>kubectl exec</code> to start a shell inside an existing container. There are some cases in which this isn't an option (for example, some containers contain only a single binary, and won't have a shell or other common utilities available), but a php container will typically have a complete filesystem.</p>
<p>In this case, you can simply:</p>
<pre><code>kubectl exec -it $POD -- sh
</code></pre>
<p>You can replace <code>sh</code> by <code>bash</code> or <code>zsh</code> depending on what shells are available in the existing image.</p>
<hr />
<p>The linked documentation provides several other debugging options, but all involve working on <em>copies of</em> the pod.</p>
|
<p>I learnt that to run a container as rootless, you need to specify either the SecurityContext:runAsUser 1000 or specify the USER directive in the DOCKERFILE.</p>
<p>Question on this is that there is no UID 1000 on the Kubernetes/Docker host system itself.</p>
<p>I learnt before that Linux user namespacing allows a user to have a different UID outside its original namespace.</p>
<p>Hence, how does UID 1000 exist under the hood? Did the original root (UID 0) create a new user namespace which is represented by UID 1000 in the container?</p>
<p>What happens if we specify UID 2000 instead?</p>
| <p>Hope this answer helps you</p>
<blockquote>
<p>I learnt that to run a container as rootless, you need to specify
either the SecurityContext:runAsUser 1000 or specify the USER
directive in the DOCKERFILE</p>
</blockquote>
<p>You are correct, except that <code>runAsUser</code> is not limited to <code>1000</code>: you can specify any UID. Keep in mind that whichever UID you use (<code>runAsUser: UID</code>), that UID should already exist in the image!</p>
<hr />
<p>Often, base images will already have a user created and available but leave it up to the development or deployment teams to leverage it. For example, the official Node.js image comes with a user named node at UID <code>1000</code> that you can run as, but they do not explicitly set the current user to it in their Dockerfile. We will either need to configure it at runtime with a <code>runAsUser</code> setting or change the current user in the image using a <code>derivative Dockerfile</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>runAsUser: 1001 # hardcode user to non-root if not set in Dockerfile
runAsGroup: 1001 # hardcode group to non-root if not set in Dockerfile
runAsNonRoot: true # hardcode to non-root. Redundant to above if Dockerfile is set USER 1000
</code></pre>
<p>Remember that <code>runAsUser</code> and <code>runAsGroup</code> help <strong>ensure</strong> container processes do not run as the <code>root</code> user, but don’t rely on the <code>runAsUser</code> or <code>runAsGroup</code> settings alone to guarantee this. Be sure to also set <code>runAsNonRoot: true</code>.</p>
<hr />
<p>Here is full example of <code>securityContext</code>:</p>
<pre class="lang-yaml prettyprint-override"><code># generic pod spec that's usable inside a deployment or other higher level k8s spec
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
# basic container details
- name: my-container-name
# never use reusable tags like latest or stable
image: my-image:tag
# hardcode the listening port if Dockerfile isn't set with EXPOSE
ports:
- containerPort: 8080
protocol: TCP
readinessProbe: # I always recommend using these, even if your app has no listening ports (this affects any rolling update)
httpGet: # Lots of timeout values with defaults, be sure they are ideal for your workload
path: /ready
port: 8080
livenessProbe: # only needed if your app tends to go unresponsive or you don't have a readinessProbe, but this is up for debate
httpGet: # Lots of timeout values with defaults, be sure they are ideal for your workload
path: /alive
port: 8080
resources: # Because if limits = requests then QoS is set to "Guaranteed"
limits:
memory: "500Mi" # If container uses over 500MB it is killed (OOM)
#cpu: "2" # Not normally needed, unless you need to protect other workloads or QoS must be "Guaranteed"
requests:
memory: "500Mi" # Scheduler finds a node where 500MB is available
cpu: "1" # Scheduler finds a node where 1 vCPU is available
# per-container security context
# lock down privileges inside the container
securityContext:
allowPrivilegeEscalation: false # prevent sudo, etc.
privileged: false # prevent acting like host root
terminationGracePeriodSeconds: 600 # default is 30, but you may need more time to gracefully shutdown (HTTP long polling, user uploads, etc)
# per-pod security context
# enable seccomp and force non-root user
securityContext:
seccompProfile:
type: RuntimeDefault # enable seccomp and the runtimes default profile
runAsUser: 1001 # hardcode user to non-root if not set in Dockerfile
runAsGroup: 1001 # hardcode group to non-root if not set in Dockerfile
runAsNonRoot: true # hardcode to non-root. Redundant to above if Dockerfile is set USER 1000
</code></pre>
<hr />
<p>sources:</p>
<ul>
<li><a href="https://github.com/BretFisher/podspec" rel="nofollow noreferrer">Kubernetes Pod Specification Good Defaults</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context" rel="nofollow noreferrer">Configure a Security Context for a Pod or Container</a></li>
<li><a href="https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/" rel="nofollow noreferrer">10 Kubernetes Security Context settings you should understand</a></li>
</ul>
|
<p>Is there any way to perform an update action on all pods simultaneously?</p>
<p>We have a process running in kubernetes as a stateful set where we want to update all the pods at the same time. We cannot seem to find a configuration for that. I am aware of <code>rollingUpdate</code>, which only updates one pod at a time.</p>
<p>This is what we have currently</p>
<pre><code> updateStrategy:
rollingUpdate:
partition: 2
type: RollingUpdate
</code></pre>
<p>I also tried with <code>maxUnavailable</code>, but it still did not work.
Is there any other hack to get this done?</p>
| <p>There is no native alternative for updating all pods simultaneously when using StatefulSets.</p>
<p>The closest thing to it is to use the <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#parallel-pod-management" rel="nofollow noreferrer">Parallel Pod Management policy</a>, but it only affects the behavior for scaling operations (including initial setup) and doesn't work for updates.</p>
<hr />
<p>However, the OpenKruise project provides an extended component suite whose <a href="https://openkruise.io/docs/next/user-manuals/advancedstatefulset/" rel="nofollow noreferrer">Advanced StatefulSet</a> supports more flexible update strategies.</p>
<p>Here is a minimal working example that will <strong>upgrade all pods at once</strong>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps.kruise.io/v1beta1
kind: StatefulSet
metadata:
name: sample
spec:
replicas: 5
serviceName: fake-service
selector:
matchLabels:
app: sample
template:
metadata:
labels:
app: sample
spec:
readinessGates:
- conditionType: InPlaceUpdateReady
containers:
- name: main
image: nginx:alpine
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 100%
</code></pre>
<p>Note this will <strong>certainly cause downtime</strong>, but you can adjust to something like <code>maxUnavailable: 50%</code> to make it more resilient.</p>
|
<p>I want to set some proxy (http, https, etc.) config on my k8s container. That k8s container will run some user scripts, which may also run some <code>sudo</code> commands.</p>
<p>What is the best way to pass this proxy config in my k8s container.</p>
<p>One approach I was trying was passing the proxy config as env variable to the container in my deployment manifest file.</p>
<p>Drawback: it is set only for the default user and not for the root user, so <code>sudo</code> commands will fail. The user/container start script would need to set the proxy config for the root user inside the container.</p>
<p>Any other alternative or clean way of passing env variable to all users in the container?</p>
<hr />
<p>Another approach I am using now is <code>env_keep</code>: I add the proxy config environment variables to <code>env_keep</code>, so when a command is executed with <code>sudo</code> in the pod, it keeps the default user's environment variables that I set via the deployment manifest.</p>
<hr />
<p>Is there any other alternatives ?</p>
| <p>Best practice is to run the <code>init container</code> as <strong>root</strong>.</p>
<p>Then, once your startup script is done, you can run your main container as a non-root user.</p>
<p>Init container : <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
<p><strong>Update:</strong></p>
<p>You can also use the <code>sudo -E</code></p>
<pre><code>-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their
existing environment variables. The security policy may return an error
if the user does not have permission to preserve the environment.
</code></pre>
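<p>For illustration, a minimal sketch (the proxy address is an assumption, not from the question) of passing the proxy settings as container environment variables in the deployment manifest, which <code>sudo -E</code> will then preserve for the root user:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
  - name: HTTP_PROXY
    value: "http://proxy.example.com:3128"   # assumed proxy endpoint
  - name: HTTPS_PROXY
    value: "http://proxy.example.com:3128"   # assumed proxy endpoint
  - name: NO_PROXY
    value: "localhost,127.0.0.1,.svc,.cluster.local"
</code></pre>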
|
<p>Is there any shorter alias on the kubectl/oc for deployments? In OpenShift you have deployment configurations and you can access them using their alias <code>dc</code>.</p>
<p>Writing <code>deployment</code> all the time takes too much time. Any idea how to shorten that without setting a local alias on each machine?</p>
<p>Reality:</p>
<pre><code>kubectl get deployment/xyz
</code></pre>
<p>Dream:</p>
<pre><code>kubectl get d/xyz
</code></pre>
| <p>All of the above answers are correct and I endorse the idea of using aliases: I have several myself. But the question was fundamentally about shortnames of API resources, like <code>dc</code> for <code>deploymentconfig</code>.</p>
<p>And the answer to that question is to use <code>oc api-resources</code> (or <code>kubectl api-resources</code>). Each API Resource also includes any SHORTNAMES that are available. For example, the results for me of <code>oc api-resources |grep deploy</code> on OpenShift 4.10 is:</p>
<pre><code>➜oc api-resources |grep deploy
deployments deploy apps/v1 true Deployment
deploymentconfigs dc apps.openshift.io/v1 true DeploymentConfig
</code></pre>
<p>Thus we can see that the previously given answer of "deploy" is a valid SHORTNAME of deployments. But it's also useful for just browsing the list of other available abbreviations.</p>
<p>I'll also make sure that you are aware of <code>oc completion</code>. For example, <code>source <(oc completion zsh)</code> for zsh. You say you have multiple devices, so you may not want to set up aliases, but completions are always easy to add. That way you should never have to type more than a few characters and then autocomplete the rest.</p>
|
<p>After I installed Prometheus using Helm in my Kubernetes cluster, the pod shows an error like this:</p>
<pre><code>0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
</code></pre>
<p>This is the pod YAML:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-prometheus-1660560589-node-exporter-n7rzg
generateName: kube-prometheus-1660560589-node-exporter-
namespace: reddwarf-monitor
uid: 73986565-ccd8-421c-bcbb-33879437c4f3
resourceVersion: '71494023'
creationTimestamp: '2022-08-15T10:51:07Z'
labels:
app.kubernetes.io/instance: kube-prometheus-1660560589
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: node-exporter
controller-revision-hash: 65c69f9b58
helm.sh/chart: node-exporter-3.0.8
pod-template-generation: '1'
ownerReferences:
- apiVersion: apps/v1
kind: DaemonSet
name: kube-prometheus-1660560589-node-exporter
uid: 921f98b9-ccc9-4e84-b092-585865bca024
controller: true
blockOwnerDeletion: true
status:
phase: Pending
conditions:
- type: PodScheduled
status: 'False'
lastProbeTime: null
lastTransitionTime: '2022-08-15T10:51:07Z'
reason: Unschedulable
message: >-
0/1 nodes are available: 1 node(s) didn't have free ports for the
requested pod ports.
qosClass: BestEffort
spec:
volumes:
- name: proc
hostPath:
path: /proc
type: ''
- name: sys
hostPath:
path: /sys
type: ''
- name: kube-api-access-9fj8v
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
name: kube-root-ca.crt
items:
- key: ca.crt
path: ca.crt
- downwardAPI:
items:
- path: namespace
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
defaultMode: 420
containers:
- name: node-exporter
image: docker.io/bitnami/node-exporter:1.3.1-debian-11-r23
args:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--web.listen-address=0.0.0.0:9100'
- >-
--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
- >-
--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
ports:
- name: metrics
hostPort: 9100
containerPort: 9100
protocol: TCP
resources: {}
volumeMounts:
- name: proc
readOnly: true
mountPath: /host/proc
- name: sys
readOnly: true
mountPath: /host/sys
- name: kube-api-access-9fj8v
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
livenessProbe:
httpGet:
path: /
port: metrics
scheme: HTTP
initialDelaySeconds: 120
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 6
readinessProbe:
httpGet:
path: /
port: metrics
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 6
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1001
runAsNonRoot: true
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: kube-prometheus-1660560589-node-exporter
serviceAccount: kube-prometheus-1660560589-node-exporter
hostNetwork: true
hostPID: true
securityContext:
fsGroup: 1001
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- k8smasterone
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/instance: kube-prometheus-1660560589
app.kubernetes.io/name: node-exporter
namespaces:
- reddwarf-monitor
topologyKey: kubernetes.io/hostname
schedulerName: default-scheduler
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
- key: node.kubernetes.io/disk-pressure
operator: Exists
effect: NoSchedule
- key: node.kubernetes.io/memory-pressure
operator: Exists
effect: NoSchedule
- key: node.kubernetes.io/pid-pressure
operator: Exists
effect: NoSchedule
- key: node.kubernetes.io/unschedulable
operator: Exists
effect: NoSchedule
- key: node.kubernetes.io/network-unavailable
operator: Exists
effect: NoSchedule
priority: 0
enableServiceLinks: true
preemptionPolicy: PreemptLowerPriority
</code></pre>
<p>I have checked the host machine and found that port 9100 is free, so why does it still say there is no free port for this pod? What should I do to avoid this problem? This is the host port 9100 check command:</p>
<pre><code>[root@k8smasterone grafana]# lsof -i:9100
[root@k8smasterone grafana]#
</code></pre>
<p>this is the pod describe info:</p>
<pre><code>➜ ~ kubectl describe pod kube-prometheus-1660560589-node-exporter-n7rzg -n reddwarf-monitor
Name: kube-prometheus-1660560589-node-exporter-n7rzg
Namespace: reddwarf-monitor
Priority: 0
Node: <none>
Labels: app.kubernetes.io/instance=kube-prometheus-1660560589
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=node-exporter
controller-revision-hash=65c69f9b58
helm.sh/chart=node-exporter-3.0.8
pod-template-generation=1
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: DaemonSet/kube-prometheus-1660560589-node-exporter
Containers:
node-exporter:
Image: docker.io/bitnami/node-exporter:1.3.1-debian-11-r23
Port: 9100/TCP
Host Port: 9100/TCP
Args:
--path.procfs=/host/proc
--path.sysfs=/host/sys
--web.listen-address=0.0.0.0:9100
--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
Liveness: http-get http://:metrics/ delay=120s timeout=5s period=10s #success=1 #failure=6
Readiness: http-get http://:metrics/ delay=30s timeout=5s period=10s #success=1 #failure=6
Environment: <none>
Mounts:
/host/proc from proc (ro)
/host/sys from sys (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9fj8v (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
proc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
sys:
Type: HostPath (bare host directory volume)
Path: /sys
HostPathType:
kube-api-access-9fj8v:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m54s (x233 over 3h53m) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
</code></pre>
<p>this is the netstat:</p>
<pre><code>[root@k8smasterone ~]# netstat -plant |grep 9100
[root@k8smasterone ~]#
</code></pre>
<p>I also tried to allow the pods to run on the master node by adding this config:</p>
<pre><code>tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
</code></pre>
<p>but it still did not fix the problem.</p>
| <p>When you configure your pod with <code>hostNetwork: true</code>, the containers running in this pod can directly see the network interfaces of the host machine where the pod was started.</p>
<p>The container port will be exposed on the external network at <code>hostIP:hostPort</code>, where <code>hostPort</code> is the port requested via the <code>hostPort</code> field of the configuration.</p>
<p>To bypass your problem, you have two options:</p>
<ul>
<li>setting <code>hostNetwork: false</code> (see the sketch after this list)</li>
<li>choosing a different <code>hostPort</code> (preferably in the range 49152 to 65535)</li>
</ul>
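<p>A minimal sketch of the first option, showing only the relevant pod-spec fields (how you set them in practice depends on your Helm chart's values):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  hostNetwork: false          # use the pod network instead of the host network
  containers:
    - name: node-exporter
      ports:
        - name: metrics
          containerPort: 9100 # no hostPort needed once hostNetwork is false
</code></pre>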
|
<p>Defining the <strong>defaultMode</strong> in a Kubernetes <strong>volume</strong> field within a <strong>deployment</strong> element can become quite tricky.</p>
<p>It expects a decimal integer, corresponding to the octal UNIX permission bits.</p>
<p>As an example, to mount the ConfigMap with permissions r-------- (owner read-only), you'd need to specify 256.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: foo
namespace: foo
spec:
replicas: 1
selector:
matchLabels:
app: foo
template:
metadata:
labels:
app: foo
spec:
containers:
- image: php-fpm:latest
volumeMounts:
- name: phpini
mountPath: /usr/local/etc/php/conf.d/99-settings.ini
readOnly: true
subPath: 99-settings.ini
volumes:
- configMap:
defaultMode: 256
name: phpini-configmap
optional: false
name: phpini
---
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: foo
namespace: foo
name: phpini-configmap
data:
99-settings.ini: |
; Enable Zend OPcache extension module
zend_extension = opcache
</code></pre>
| <p>Use the following table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>unix octal</th>
<th>unix readable</th>
<th>binary equivalent</th>
<th>defaultMode</th>
</tr>
</thead>
<tbody>
<tr>
<td>400</td>
<td>r--------</td>
<td>100000000</td>
<td>256</td>
</tr>
<tr>
<td>440</td>
<td>r--r-----</td>
<td>100100000</td>
<td>288</td>
</tr>
<tr>
<td>444</td>
<td>r--r--r--</td>
<td>100100100</td>
<td>292</td>
</tr>
<tr>
<td>600</td>
<td>rw-------</td>
<td>110000000</td>
<td>384</td>
</tr>
<tr>
<td>640</td>
<td>rw-r-----</td>
<td>110100000</td>
<td>416</td>
</tr>
<tr>
<td>660</td>
<td>rw-rw----</td>
<td>110110000</td>
<td>432</td>
</tr>
<tr>
<td>664</td>
<td>rw-rw-r--</td>
<td>110110100</td>
<td>436</td>
</tr>
<tr>
<td>666</td>
<td>rw-rw-rw-</td>
<td>110110110</td>
<td>438</td>
</tr>
<tr>
<td>700</td>
<td>rwx------</td>
<td>111000000</td>
<td>448</td>
</tr>
<tr>
<td>770</td>
<td>rwxrwx---</td>
<td>111111000</td>
<td>504</td>
</tr>
<tr>
<td>777</td>
<td>rwxrwxrwx</td>
<td>111111111</td>
<td>511</td>
</tr>
</tbody>
</table>
</div>
<p>A more direct way to do this is to use a base8 to base10 converter like <a href="http://www.unitconversion.org/numbers/base-8-to-base-10-conversion.html" rel="noreferrer">this one</a></p>
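<p>Alternatively, if you apply the manifest as YAML (not JSON), you should be able to skip the decimal conversion entirely by using an octal integer literal, since YAML supports octal notation while JSON does not. A sketch based on the volume above:</p>
<pre class="lang-yaml prettyprint-override"><code>volumes:
  - configMap:
      defaultMode: 0400   # octal literal, equivalent to decimal 256 (r--------)
      name: phpini-configmap
      optional: false
    name: phpini
</code></pre>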
|
<p>In a container-based environment such as Kubernetes, the UseContainerSupport JVM feature is handy as it allows configuring the heap size as a percentage of container memory via options such as -XX:MaxRAMPercentage instead of a static value via -Xmx. This way you don't have to adjust your JVM options every time the container memory limit changes, potentially allowing use of vertical autoscaling. The primary goal is hitting a Java OutOfMemoryError rather than running out of memory at the container level (e.g. K8s OOMKilled).</p>
<p>That covers heap memory. In applications that use a significant amount of direct memory via NIO (e.g. gRPC/Netty), what are the options for this? The main option I could find is -XX:MaxDirectMemorySize, but this takes a static value similar to -Xmx.</p>
| <p>There's no similar switch for MaxDirectMemorySize as far as I know.
But by default (if you don't specify <code>-XX:MaxDirectMemorySize</code>) the limit is the same as for <code>MaxHeapSize</code>.
That means that if you set <code>-XX:MaxRAMPercentage</code>, the same limit applies to the direct memory as well.</p>
<p>Note that you cannot verify this simply via <code>-XX:+PrintFlagsFinal</code>, because that prints 0:</p>
<pre><code>java -XX:MaxRAMPercentage=1 -XX:+PrintFlagsFinal -version | grep 'Max.*Size'
...
uint64_t MaxDirectMemorySize = 0 {product} {default}
size_t MaxHeapSize = 343932928 {product} {ergonomic}
...
openjdk version "17.0.2" 2022-01-18
...
</code></pre>
<p>See also <a href="https://dzone.com/articles/default-hotspot-maximum-direct-memory-size" rel="nofollow noreferrer">https://dzone.com/articles/default-hotspot-maximum-direct-memory-size</a> and <a href="https://stackoverflow.com/questions/53543062/replace-access-to-sun-misc-vm-for-jdk-11">Replace access to sun.misc.VM for JDK 11</a></p>
<p>My own experiments here: <a href="https://github.com/jumarko/clojure-experiments/pull/32" rel="nofollow noreferrer">https://github.com/jumarko/clojure-experiments/pull/32</a></p>
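<p>For context, a minimal sketch of how the percentage-based flag is typically passed to a container in Kubernetes (image name and values are placeholders): the JVM picks up <code>JAVA_TOOL_OPTIONS</code> automatically, and the percentage is applied against the container memory limit, which per the above also bounds the default direct memory limit.</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
  - name: app
    image: my-java-app:tag              # placeholder image
    env:
      - name: JAVA_TOOL_OPTIONS
        value: "-XX:MaxRAMPercentage=75.0"
    resources:
      limits:
        memory: "2Gi"                   # the percentage is computed from this limit
</code></pre>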
|
<p>I am using active FTP to transfer files (via the <strong>PORT</strong> command). I can initiate active FTP sessions using the <strong>LoadBalancer IP</strong> and the LoadBalancer Service <strong>Target Port</strong>. I tried a similar approach to initiate an active FTP session using the <strong>Node External IP</strong> and <strong>Node Port</strong>, but I am not able to do it. I am using the npm.js <strong>basic-ftp</strong> module for it. The code for my connection is given below:</p>
<pre><code>let client = new ftp.Client(ftpTimeout * 1000);
client.prepareTransfer = prepareTransfer;
</code></pre>
<p>And prepareTransfer has implementation like:</p>
<pre><code>export async function prepareTransfer(ftp: FTPContext): Promise<FTPResponse> {
// Gets the ip address of either LoadBalancer(for LoadBalancer service) or Node(For NodePort Service)
const ip = await getIp();
// Gets a TargetPort for LoadBalancer service or Node Port for NodePort service
const port = await getFtpPort();
// Example command: PORT 192,168,150,80,14,178
// The first four octets are the IP address while the last two octets comprise the
//port that will be used for the data connection.
// To find the actual port multiply the fifth octet by 256 and then add the sixth
//octet to the total.
// Thus in the example above the port number is ( (14*256) + 178), or 3762
const p1 = Math.floor(port / 256);
const p2 = port % 256;
const command = `PORT ${ip.replace(/\./g, ',')},${p1},${p2}`;
// https://github.com/patrickjuchli/basic-ftp/issues/195
// Data socket pipes before the connection so use the mock socket.
const mockSocket = new MockSocket();
ftp.dataSocket = mockSocket as any;
let ftpResponse = new Promise<FTPResponse>((resolve, reject) => {
let response: FTPResponse;
const server = createServer(function (socket) {
console.log('FTP Client connected');
mockSocket.setSocket(socket, server, port);
});
server.on('error', err => {
reject(err);
});
server.listen(port, async () => {
console.log(`socket server for FTP started at port ${port}`);
// send the port request
response = await ftp.request(command);
if (response.code !== 200) {
reject('Could not connect');
server.close();
}
resolve(response);
});
});
return ftpResponse;
}
</code></pre>
<p>Any idea why NodePort is not working here?</p>
<p>Helm Chart for NodePort:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: "ftp-service"
spec:
type: NodePort
externalTrafficPolicy: Local
selector:
statefulset.kubernetes.io/pod-name: "pod1"
ports:
- protocol: TCP
name: ftp-8000
port: 8000
targetPort: 8000
nodePort: 30014
</code></pre>
<p>Helm chart for loadbalancer:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: "ftp-service-load"
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
statefulset.kubernetes.io/pod-name: "pod1"
ports:
- protocol: TCP
name: ftp-8000
port: 8000
targetPort: 8000
nodePort: 30014
</code></pre>
| <p>OK, I finally figured out the solution. I am using <strong>Azure Kubernetes Service (AKS)</strong>. You need to add an inbound port rule for the appropriate <code>NodePort</code> (under <strong>Virtual machine scale sets</strong>, selecting the right node pool whose public IPs are enabled, under the <strong>Networking</strong> tab). You also need to set <code>externalTrafficPolicy</code> to <code>Cluster</code>, or simply remove the line <code>externalTrafficPolicy: Local</code> from the Helm chart.</p>
|
<p>I found a mention of an agent node in the AKS documentation, but I'm not finding the definition of it. Can anyone please explain it to me? I also want to know whether it is an Azure concept or a Kubernetes concept.</p>
<p>Regards,</p>
| <p>In Kubernetes the term <code>node</code> refers to a compute node. Depending on the role of the node it is usually referred to as <code>control plane node</code> or <code>worker node</code>. From the <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p><strong>A Kubernetes cluster consists of a set of worker machines, called nodes</strong>, that run containerized applications. Every cluster has at least one worker node.</p>
<p>The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.</p>
</blockquote>
<p><code>Agent nodes</code> in AKS refers to the worker nodes (which should not be confused with the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">Kubelet</a>, which is the primary "node agent" that runs on each worker node)</p>
|
<p>I'm testing a database insert statement similar to the following which works locally but not after deployment to a kubernetes cluster connected to a managed database host:</p>
<pre><code>func Insert(w http.ResponseWriter, r *http.Request) {
db := dbConn()
//If it's a post request, assign a variable to the value returned in each field of the New page.
if r.Method == "POST" {
email := r.FormValue("email")
socialNetwork := r.FormValue("social_network")
socialHandle := r.FormValue("social_handle")
createdOn := time.Now().UTC()
//prepare a query to insert the data into the database
insForm, err := db.Prepare(`INSERT INTO public.users(email, social_network, social_handle) VALUES ($1,$2, $3)`)
//check for and handle any errors
CheckError(err)
//execute the query using the form data
_, err = insForm.Exec(email, socialNetwork, socialHandle)
CheckError(err)
//print out added data in terminal
log.Println("INSERT: email: " + email + " | social network: " + socialNetwork + " | social handle : " + socialHandle + " | created on: " + createdOn.String() + " | createdOn is type: " + reflect.TypeOf(createdOn).String())
sendThanks(socialHandle, email)
}
defer db.Close()
//redirect to the index page
http.Redirect(w, r, "/thanks", 301)
}
</code></pre>
<p>I've configured a deployment as follows with a corresponding secrets object:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: novvsworld
namespace: novvsworld
spec:
replicas: 1
selector:
matchLabels:
app: novvsworld
template:
metadata:
labels:
app: novvsworld
spec:
containers:
- name: novvsworld
image: my.registry.com/registry/novvsworld:latest
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 3000
env:
- name: DBHOST
valueFrom:
secretKeyRef:
name: novvworld-secrets
key: DBHOST
- name: DBPORT
valueFrom:
secretKeyRef:
name: novvworld-secrets
key: DBPORT
- name: DBUSER
valueFrom:
secretKeyRef:
name: novvworld-secrets
key: DBUSER
- name: DBPASS
valueFrom:
secretKeyRef:
name: novvworld-secrets
key: DBPASS
- name: DBSSLMODE
valueFrom:
secretKeyRef:
name: novvworld-secrets
key: DBSSLMODE
- name: SENDGRID_API_KEY
valueFrom:
secretKeyRef:
name: novvworld-secrets
key: SENDGRID_API_KEY
</code></pre>
<p>The value of 'DBSSLMODE' is currently set to "disabled" in the secrets file.</p>
<p>When testing the insert statement by inputting data through the front end, the following panic is returned:</p>
<p><code>022/08/15 18:50:58 http: panic serving 10.244.0.38:47590: pq: no pg_hba.conf entry for host "167.172.231.113", user "novvsworld", database "novvsworld", no encryption </code></p>
<p>Am I missing an additional configuration for the encryption and shouldn't setting the sslmode to disabled bypass this?</p>
| <blockquote>
<p>Am I missing an additional configuration for the encryption and shouldn't setting the sslmode to disabled bypass this?</p>
</blockquote>
<p>Yes, and that is the problem. The client refuses to use SSL. While the server (configuration not shown, but can be inferred from the error) refuses to proceed <em>without</em> SSL.</p>
<p>As long as both sides make incompatible demands and refuse to compromise, nothing can get done.</p>
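<p>Assuming your connection string passes <code>DBSSLMODE</code> straight through to lib/pq's <code>sslmode</code> parameter, a minimal sketch of the client-side fix is to enable SSL in the Secret (lib/pq expects values such as <code>require</code> or <code>verify-full</code>; note that the keyword for turning SSL off is <code>disable</code>, not <code>disabled</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: novvworld-secrets
type: Opaque
stringData:
  # other keys (DBHOST, DBPORT, DBUSER, DBPASS, ...) omitted from this sketch
  DBSSLMODE: require   # was "disabled"; tells lib/pq to connect over SSL
</code></pre>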
|
<p>Is it possible to write an existing environment variable into a file from a Kubernetes deployment.yaml file?</p>
<p>The background: I've already parsed a json containing secrets. Now, I'd like to store that secret in a local file.</p>
<p>So far, I've tried something like this:</p>
<pre><code> lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c"],
args: ["echo $PRIVATE_KEY > /var/private.key"]
</code></pre>
<p>( I've setup /var/ as an empty writeVolume. )</p>
<p>Or perhaps there is a completely different way to do this, such as storing the secret in its own, separate Secret object?</p>
| <p>Rather than using <code>postStart</code> , I'd suggest you use an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a>, the <code>postStart</code> hook doesn't guarantee that it will be executed before the container <code>ENTRYPOINT</code>.</p>
<p>You can define your environment variables in your deployment manifest, by setting static values or referencing a <code>configMap</code> or <code>secret</code>. Your init container would run a bash script that writes the content of each variable to a file.</p>
<p>A second approach would be to mount a <code>configMap</code> as a volume inside your pod, e.g.:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
SPECIAL_LEVEL: very
SPECIAL_TYPE: charm
</code></pre>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "ls /etc/config/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: special-config
restartPolicy: Never
</code></pre>
<p>That would create two files inside <code>/etc/config</code>, named as the key defined in your <code>configMap</code> with the content of its value.</p>
|
<p>I am trying to run an application locally on k8s but I am not able to reach it.</p>
<p>Here is my deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: listings
labels:
app: listings
spec:
replicas: 2
selector:
matchLabels:
app: listings
template:
metadata:
labels:
app: listings
spec:
containers:
- image: mydockerhub/listings:latest
name: listings
envFrom:
- secretRef:
name: listings-secret
- configMapRef:
name: listings-config
ports:
- containerPort: 8000
name: django-port
</code></pre>
<p>and this is my service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: listings
labels:
app: listings
spec:
type: NodePort
selector:
app: listings
ports:
- name: http
port: 8000
targetPort: 8000
nodePort: 30036
protocol: TCP
</code></pre>
<p>At this stage, I don't want to use other methods like ingress or ClusterIP, or load balancer. I want to make nodePort work because I am trying to learn.</p>
<p>When I run <code>kubectl get svc -o wide</code> I see</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
listings NodePort 10.107.77.231 <none> 8000:30036/TCP 28s app=listings
</code></pre>
<p>When I run <code>kubectl get node -o wide</code> I see</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane,master 85d v1.23.3 192.168.49.2 <none> Ubuntu 20.04.2 LTS 5.10.16.3-microsoft-standard-WSL2 docker://20.10.12
</code></pre>
<p>and when I run <code>minikube ip</code> it shows <code>192.168.49.2</code></p>
<p>When I try to open <code>http://192.168.49.2:30036/health</code>, it does not open: <code>This site can’t be reached</code>.</p>
<p>How should I expose my application externally?</p>
<p>Note that I have created the required ConfigMap and Secret objects. Also note that this is a simple Django RESTful application: if you hit the /health endpoint, it returns success, and that's it, so there is no problem with the application itself.</p>
| <p>That is because your local machine and minikube are not in the same network segment, so you must do something more to access a minikube service on Windows.</p>
<p>First</p>
<pre><code>$ minikube service list
</code></pre>
<p>That will show your service details, including name, URL, nodePort, and targetPort.</p>
<p>Then</p>
<pre><code>$ minikube service --url listings
</code></pre>
<p>It will open a port listening on your Windows machine that forwards the traffic to the minikube node port.</p>
<p>Or you can use the <code>kubectl port-forward</code> command to expose the service on a host port, like:</p>
<pre><code>kubectl port-forward --address 0.0.0.0 -n default service/listings 30036:8000
</code></pre>
<p>Then try with <code>http://localhost:30036/health</code></p>
|
<p>This is my <code>~/.kube/config</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
clusters:
- cluster:
server: https://192.168.10.190:6443
name: cluster-1
- cluster:
server: https://192.168.99.101:8443
name: cluster-2
contexts:
- context:
cluster: cluster-1
user: kubernetes-admin-1
name: cluster-1
- context:
cluster: cluster-2
user: kubernetes-admin-2
name: cluster-2
kind: Config
preferences: {}
users:
- name: kubernetes-admin-1
user:
client-certificate: /home/user/.minikube/credential-for-cluster-1.crt
client-key: /home/user/.minikube/credential-for-cluster-1.key
- name: kubernetes-admin-2
user:
client-certificate: /home/user/.minikube/credential-for-cluster-2.crt
client-key: /home/user/.minikube/credential-for-cluster-2.key
</code></pre>
<hr />
<p>My understanding is, <code>cluster-1</code> & <code>cluster-2</code> are kubernetes physical clusters (<code>Control Plane</code>).</p>
<p>Each physical cluster has multiple virtual clusters (<code>Namespaces</code>)</p>
<p>If my understanding is correct, then with the above <code>kubeConfig</code>, what is the <code>kubectl</code> syntax to <strong>get all the namespaces in each cluster</strong>?</p>
| <p>Short answer: you can get all namespaces across the clusters defined in your <code>kubeconfig</code> with:</p>
<pre><code>for context in $(kubectl config view -o jsonpath='{.clusters[*].name}'); do
kubectl config use-context $context ;
kubectl get ns;
done
#or
for context in $(kubectl config view -o jsonpath='{.clusters[*].name}'); do kubectl config use-context $context ;kubectl get ns;done
</code></pre>
<p>You can get <strong>all namespaces</strong> from a single cluster using the command below <strong>(current context)</strong>:</p>
<pre><code>kubectl get namespace
</code></pre>
<p>The above returns the <code>namespace</code> objects in the current context. Since you have two clusters, you will need two different contexts to get all the namespaces from both clusters.</p>
<blockquote>
<p>A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: <strong>cluster, namespace, and user</strong>. By default, the kubectl command-line tool uses parameters from the <strong>current context to communicate with the cluster</strong>.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#context" rel="nofollow noreferrer">organize-cluster-access-kubeconfig-context</a></p>
<p>A namespace is simply an <strong>isolation</strong> boundary for resources. For example,</p>
<p>you cannot create <strong>two deployments</strong> with the same name in a single namespace, because those resources are <strong>namespace scoped</strong>,</p>
<p>but you can deploy deployments with the same name under the <code>develop</code>, <code>stage</code> and <code>production</code> namespaces.</p>
<p><a href="https://i.stack.imgur.com/EXoY6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EXoY6.png" alt="enter image description here" /></a></p>
<p><a href="https://belowthemalt.com/2022/04/09/kubernetes-namespaces/" rel="nofollow noreferrer">kubernetes-namespaces</a></p>
|
<p>I am new to Argo and following the Quickstart templates and would like to deploy the HTTP template as a workflow.</p>
<p>I create my cluster as so:</p>
<pre class="lang-bash prettyprint-override"><code>minikube start --driver=docker --cpus='2' --memory='8g'
kubectl create ns argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml
</code></pre>
<p>I then apply the HTTP template <code>http_template.yaml</code> from the docs:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: http-template-
spec:
entrypoint: main
templates:
- name: main
steps:
- - name: get-google-homepage
template: http
arguments:
parameters: [ { name: url, value: "https://www.google.com" } ]
- name: http
inputs:
parameters:
- name: url
http:
timeoutSeconds: 20 # Default 30
url: "{{inputs.parameters.url}}"
method: "GET" # Default GET
headers:
- name: "x-header-name"
value: "test-value"
# Template will succeed if evaluated to true, otherwise will fail
# Available variables:
# request.body: string, the request body
# request.headers: map[string][]string, the request headers
# response.url: string, the request url
# response.method: string, the request method
# response.statusCode: int, the response status code
# response.body: string, the response body
# response.headers: map[string][]string, the response headers
successCondition: "response.body contains \"google\"" # available since v3.3
body: "test body" # Change request body
</code></pre>
<p><code>argo submit -n argo http_template.yaml --watch</code></p>
<p>However, I get the following error:</p>
<pre><code>Name: http-template-564qp
Namespace: argo
ServiceAccount: unset (will run with the default ServiceAccount)
Status: Error
Message: failed to get token volumes: service account argo/default does not have any secrets
</code></pre>
<p>I'm not clear on why this doesn't work given it's straight from the Quickstart documentation. Help would be appreciated.</p>
| <p>It seems your default serviceaccount is missing a credential (kubernetes secret)</p>
<p>You can verify which secret the service account references by running
<code>kubectl get serviceaccount -n default default -o yaml</code>:</p>
<pre><code>kubectl get serviceaccount -n default default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2022-02-10T10:48:54Z"
name: default
namespace: default
resourceVersion: "*******"
uid: ********************
secrets:
- name: default-token-*****
</code></pre>
<p>Now you should be able to find the secret which is attached to the serviceaccount</p>
<p><code>kubectl get secret -n default default-token-***** -o yaml</code></p>
<p>Or you can just run</p>
<p><code>kubectl get secret -n default</code></p>
<p>To see all secrets in the respective namespace (in this example, default)</p>
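<p>If no such secret exists at all (the default on Kubernetes 1.24+, where ServiceAccount token Secrets are no longer auto-created), a minimal sketch of creating one manually for the service account named in the error (<code>argo/default</code>); the secret name is an assumption:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: default-token       # assumed name
  namespace: argo
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
</code></pre>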
|
<p>Is it possible to write an existing environment variable into a file from a Kubernetes deployment.yaml file?</p>
<p>The background: I've already parsed a json containing secrets. Now, I'd like to store that secret in a local file.</p>
<p>So far, I've tried something like this:</p>
<pre><code> lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c"],
args: ["echo $PRIVATE_KEY > /var/private.key"]
</code></pre>
<p>( I've setup /var/ as an empty writeVolume. )</p>
<p>Or perhaps there is a completely different way to do this, such as storing the secret in its own, separate Secret object?</p>
| <p>Usually when we need to read some secrets from a secret manager, we use an init container, and we create an <code>emptyDir</code> volume shared between the containers so the init container can write the secrets and the other containers can read them. This way you can use a different docker image with the secret manager dependencies and creds, without installing those dependencies or providing the creds to the main container:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
initContainers:
- name: init-container
image: alpine
command:
- /bin/sh
- -c
- 'echo "test_value" > /mnt/volume/var.txt'
volumeMounts:
- mountPath: /mnt/volume
name: shared-storage
containers:
- image: alpine
name: test-container
command:
- /bin/sh
- -c
- 'READ_VAR=$(cat /mnt/volume/var.txt) && echo "main_container: ${READ_VAR}"'
volumeMounts:
- mountPath: /mnt/volume
name: shared-storage
volumes:
- name: shared-storage
emptyDir: {}
</code></pre>
<p>Here is the log:</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl logs test-pd
main_container: test_value
</code></pre>
|
<p>Lately I have been configuring a k8s cluster composed of 3 nodes (master, worker1 and worker2) that will host a UDP application (8 replicas of it). Everything is done and the cluster is working very well, but there is one problem.</p>
<p>Basically there is a Deployment which describes the Pod and it looks like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: <name>
labels:
app: <app_name>
spec:
replicas: 8
selector:
matchLabels:
app: <app_name>
template:
metadata:
labels:
app: <app_name>
spec:
containers:
- name: <name>
image: <image>
ports:
- containerPort: 6000
protocol: UDP
</code></pre>
<p>There is also a Service which is used to access to the UDP application:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: <service_name>
labels:
app: <app_name>
spec:
type: NodePort
ports:
- port: 6000
protocol: UDP
nodePort: 30080
selector:
app: <app_name>
</code></pre>
<p>When i try to access to the service 2 different scenarios may occur:</p>
<ul>
<li>The request is assigned to a POD that is in the same node that received the request</li>
<li>The request is assigned to a POD that is in the other node</li>
</ul>
<p>In the second case the request arrives correctly at the Pod, but with a source IP that ends in 0 (for example 10.244.1.0), so the response will never be delivered correctly.</p>
<p>I can't figure it out; I have really tried everything, but this problem remains. For the moment, to make the cluster work properly, I added <code>externalTrafficPolicy: Local</code> and <code>internalTrafficPolicy: Local</code> to the Service. This way the requests stay local, so when a request is sent to worker1 it is assigned to a Pod running on worker1, and the same for worker2.</p>
<p>Do you have any ideas about the problem?
Thanks to everyone.</p>
| <p>Have you confirmed that the response is not delivered correctly for your second scenario? The source IP address in that case should be the one of the node where the request first arrived.</p>
<p>I am under the impression that you are assuming that since the IP address ends in 0 this is necessarily a network address, and that could be a wrong assumption, as it depends on the <a href="https://www.hacksplaining.com/glossary/netmasks#:%7E:text=Netmasks%20(or%20subnet%20masks)%20are,Internet%20Protocol%20(IP)%20address." rel="nofollow noreferrer">Netmask</a> configured for the Subnetwork where the nodes are allocated; for example, if the nodes are in the Subnet 10.244.0.0/23, then the network address is 10.244.0.0, and 10.244.1.0 is just another usable address that can be assigned to a node.</p>
<p>Now, if your application needs to preserve the client's IP address, then that could be an issue since, by default, the source IP seen in the target container is not the original source IP of the client. In this case, additionally to configuring the <code>externalTrafficPolicy</code> as Local, you would need to configure a <code>healthCheckNodePort</code> as specified in the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">Preserving the client source IP</a> documentation.</p>
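<p>For reference, a minimal sketch based on the linked documentation; note that <code>healthCheckNodePort</code> is only honored for <code>type: LoadBalancer</code> Services, and the name, selector and port values below are placeholders:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: udp-app-service          # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  healthCheckNodePort: 32000     # optional; Kubernetes allocates one if omitted
  selector:
    app: udp-app                 # placeholder selector
  ports:
    - port: 6000
      targetPort: 6000
      protocol: UDP
</code></pre>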
|
<pre><code>2022-08-17T16:14:15.5682728Z error: error validating "deployment.yml": error validating data: ValidationError(HorizontalPodAutoscaler.spec.metrics[1].resource.target.averageUtilization): invalid type for io.k8s.api.autoscaling.v2.MetricTarget.averageUtilization: got "string", expected "integer"; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<pre><code>---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: hpa-xyz
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: StatefulSet
name: XYZ
minReplicas: ${MinRequestForwarderReplicas}
maxReplicas: ${MaxRequestForwarderReplicas}
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 75
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 1500Mi
</code></pre>
<p>I tried the memory "averageUtilization" values with 1.5Gi and "1.5Gi" with double quotes. Can anyone check and let me know if I am missing something here?</p>
| <pre><code>averageUtilization:
</code></pre>
<p>is represented as a percentage of the requested memory, so a value with an <code>Mi</code> suffix such as <code>1500Mi</code> is not valid there.</p>
<p>It should be.</p>
<pre><code>averageUtilization: 65
</code></pre>
<p>Specify <code>averageUtilization</code> and a target average memory utilization over all the pods, represented as a percent of requested memory. The target pods must have memory requests configured.</p>
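<p>If what you actually want is an absolute memory target rather than a percentage, a sketch of the alternative is to use <code>AverageValue</code>, which accepts a quantity:</p>
<pre class="lang-yaml prettyprint-override"><code>- type: Resource
  resource:
    name: memory
    target:
      type: AverageValue
      averageValue: 1500Mi
</code></pre>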
|
<p>I want to deploy some pods (from deployment X) only on nodes where pods from deployment Y do not exist.
Is there a way to do something like that?
My first idea is to use node affinity and deploy X on nodes with label X and Y where there is no X, but I would like to avoid that.</p>
| <p>You can use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">inter-pod affinity and anti-affinity</a>. Here is the relevant snippet from the documentation, followed by a minimal sketch:</p>
<blockquote>
<p>Inter-pod affinity and anti-affinity allow you to constrain which
nodes your Pods can be scheduled on based on the labels of Pods
already running on that node, instead of the node labels.</p>
<p>Inter-pod affinity and anti-affinity rules take the form "this Pod
should (or, in the case of anti-affinity, should not) run in an X if
that X is already running one or more Pods that meet rule Y", where X
is a topology domain like node, rack, cloud provider zone or region,
or similar and Y is the rule Kubernetes tries to satisfy.</p>
</blockquote>
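<p>A minimal sketch for your case (the label names are assumptions): give deployment Y's pods a label, and add a required anti-affinity rule to deployment X's pod template so that X is never scheduled on a node that already runs a Y pod:</p>
<pre class="lang-yaml prettyprint-override"><code># pod template of deployment X (assumed labels and image)
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: deployment-y          # label carried by deployment Y's pods
          topologyKey: kubernetes.io/hostname
  containers:
    - name: x
      image: my-x-image:tag              # placeholder
</code></pre>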
|
<p>I am trying to retrieve the hostname of my Application Load Balancer that I configured as an ingress.</p>
<p>The scenario currently is: I am deploying a Helm chart using Terraform, and have configured an ALB as the ingress. The ALB and the Helm chart were deployed normally and are working; however, I need to retrieve the hostname of this ALB to create a Route53 record pointing to it. When I try to retrieve this information, it returns null values.</p>
<p>According to terraform's own documentation, the correct way is as follows:</p>
<pre><code>data "kubernetes_ingress" "example" {
metadata {
name = "terraform-example"
}
}
resource "aws_route53_record" "example" {
zone_id = data.aws_route53_zone.k8.zone_id
name = "example"
type = "CNAME"
ttl = "300"
records = [data.kubernetes_ingress.example.status.0.load_balancer.0.ingress.0.hostname]
}
</code></pre>
<p>I did exactly as in the documentation (even the provider version is the latest), here is an excerpt of my code:</p>
<pre><code># Helm release resource
resource "helm_release" "argocd" {
name = "argocd"
repository = "https://argoproj.github.io/argo-helm"
chart = "argo-cd"
namespace = "argocd"
version = "4.9.7"
create_namespace = true
values = [
templatefile("${path.module}/settings/helm/argocd/values.yaml", {
certificate_arn = module.acm_certificate.arn
})
]
}
# Kubernetes Ingress data to retrieve de ingress hostname from helm deployment (ALB Hostname)
data "kubernetes_ingress" "argocd" {
metadata {
name = "argocd-server"
namespace = helm_release.argocd.namespace
}
depends_on = [
helm_release.argocd
]
}
# Route53 record creation
resource "aws_route53_record" "argocd" {
name = "argocd"
type = "CNAME"
ttl = 600
zone_id = aws_route53_zone.r53_zone.id
records = [data.kubernetes_ingress.argocd.status.0.load_balancer.0.ingress.0.hostname]
}
</code></pre>
<p>When I run <code>terraform apply</code> I get the following error:</p>
<pre><code>╷
│ Error: Attempt to index null value
│
│ on route53.tf line 67, in resource "aws_route53_record" "argocd":
│ 67: records = [data.kubernetes_ingress.argocd.status.0.load_balancer.0.ingress.0.hostname]
│ ├────────────────
│ │ data.kubernetes_ingress.argocd.status is null
│
│ This value is null, so it does not have any indices.
</code></pre>
<p>My ingress configuration (deployed by Helm Release):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-server
namespace: argocd
uid: 646f6ea0-7991-4a13-91d0-da236164ac3e
resourceVersion: '4491'
generation: 1
creationTimestamp: '2022-08-08T13:29:16Z'
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
helm.sh/chart: argo-cd-4.9.7
annotations:
alb.ingress.kubernetes.io/backend-protocol: HTTPS
alb.ingress.kubernetes.io/certificate-arn: >-
arn:aws:acm:us-east-1:124416843011:certificate/7b79fa2c-d446-423d-b893-c8ff3d92a5e1
alb.ingress.kubernetes.io/group.name: altb-devops-eks-support-alb
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
alb.ingress.kubernetes.io/load-balancer-name: altb-devops-eks-support-alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/tags: >-
Name=altb-devops-eks-support-alb,Stage=Support,CostCenter=Infrastructure,Project=Shared
Infrastructure,Team=DevOps
alb.ingress.kubernetes.io/target-type: ip
kubernetes.io/ingress.class: alb
meta.helm.sh/release-name: argocd
meta.helm.sh/release-namespace: argocd
finalizers:
- group.ingress.k8s.aws/altb-devops-eks-support-alb
managedFields:
- manager: controller
operation: Update
apiVersion: networking.k8s.io/v1
time: '2022-08-08T13:29:16Z'
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.: {}
v:"group.ingress.k8s.aws/altb-devops-eks-support-alb": {}
- manager: terraform-provider-helm_v2.6.0_x5
operation: Update
apiVersion: networking.k8s.io/v1
time: '2022-08-08T13:29:16Z'
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:alb.ingress.kubernetes.io/backend-protocol: {}
f:alb.ingress.kubernetes.io/certificate-arn: {}
f:alb.ingress.kubernetes.io/group.name: {}
f:alb.ingress.kubernetes.io/listen-ports: {}
f:alb.ingress.kubernetes.io/load-balancer-name: {}
f:alb.ingress.kubernetes.io/scheme: {}
f:alb.ingress.kubernetes.io/tags: {}
f:alb.ingress.kubernetes.io/target-type: {}
f:kubernetes.io/ingress.class: {}
f:meta.helm.sh/release-name: {}
f:meta.helm.sh/release-namespace: {}
f:labels:
.: {}
f:app.kubernetes.io/component: {}
f:app.kubernetes.io/instance: {}
f:app.kubernetes.io/managed-by: {}
f:app.kubernetes.io/name: {}
f:app.kubernetes.io/part-of: {}
f:helm.sh/chart: {}
f:spec:
f:rules: {}
- manager: controller
operation: Update
apiVersion: networking.k8s.io/v1
time: '2022-08-08T13:29:20Z'
fieldsType: FieldsV1
fieldsV1:
f:status:
f:loadBalancer:
f:ingress: {}
subresource: status
selfLink: /apis/networking.k8s.io/v1/namespaces/argocd/ingresses/argocd-server
status:
loadBalancer:
ingress:
- hostname: >-
internal-altb-devops-eks122-support-alb-1845221539.us-east-1.elb.amazonaws.com
spec:
rules:
- host: argocd.altb.co
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: argocd-server
port:
number: 80
</code></pre>
| <p>The Terraform data source for Ingress is <code>kubernetes_ingress_v1</code>:
<a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/ingress_v1" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/ingress_v1</a></p>
<pre><code>data "kubernetes_ingress_v1" "argocd" {
metadata {
name = "argocd-server"
namespace = helm_release.argocd.namespace
}
depends_on = [
helm_release.argocd
]
}
</code></pre>
<p>This should work.</p>
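<p>As a small follow-up sketch, the Route53 record then has to reference the new data source address as well; otherwise it will keep pointing at the old <code>kubernetes_ingress</code> data source:</p>
<pre><code>resource "aws_route53_record" "argocd" {
  name    = "argocd"
  type    = "CNAME"
  ttl     = 600
  zone_id = aws_route53_zone.r53_zone.id
  records = [data.kubernetes_ingress_v1.argocd.status.0.load_balancer.0.ingress.0.hostname]
}
</code></pre>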
|
<p>I am running the latest Rancher docker image on a Mac M1 laptop, but the container failed to start.
The command I am using is sudo docker run -d -p 80:80 -p 443:443 --privileged rancher/rancher.</p>
<p>Below are the versions for my environment:</p>
<p>$ docker --version
Docker version 20.10.13, build a224086</p>
<p>$ uname -a
Darwin Joeys-MBP 21.3.0 Darwin Kernel Version 21.3.0: Wed Jan 5 21:37:58 PST 2022; root:xnu-8019.80.24~20/RELEASE_ARM64_T6000 arm64</p>
<p>$ docker images|grep rancher
rancher/rancher latest f09cdb8a8fba 3 weeks ago 1.39GB</p>
<p>Below are the logs from the container.</p>
<pre><code>$ docker logs -f 8d21d7d19b21
2022/04/28 03:34:00 [INFO] Rancher version v2.6.4 (4b4e29678) is starting
2022/04/28 03:34:00 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/04/28 03:34:00 [INFO] Listening on /tmp/log.sock
2022/04/28 03:34:00 [INFO] Waiting for k3s to start
2022/04/28 03:34:01 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/04/28 03:34:03 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/04/28 03:34:05 [INFO] Running in single server mode, will not peer connections
2022/04/28 03:34:05 [INFO] Applying CRD features.management.cattle.io
2022/04/28 03:34:05 [INFO] Waiting for CRD features.management.cattle.io to become available
2022/04/28 03:34:05 [INFO] Done waiting for CRD features.management.cattle.io to become available
2022/04/28 03:34:08 [INFO] Applying CRD navlinks.ui.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusters.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD apiservices.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD settings.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD preferences.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD features.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusterrepos.catalog.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD operations.catalog.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD apps.catalog.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD fleetworkspaces.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD bundles.fleet.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusters.fleet.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD managedcharts.management.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusters.provisioning.cattle.io
2022/04/28 03:34:08 [INFO] Applying CRD clusters.provisioning.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD rkeclusters.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD rkebootstraps.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD rkebootstraptemplates.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD custommachines.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD etcdsnapshots.rke.cattle.io
2022/04/28 03:34:09 [INFO] Applying CRD clusters.cluster.x-k8s.io
2022/04/28 03:34:09 [INFO] Applying CRD machinedeployments.cluster.x-k8s.io
2022/04/28 03:34:09 [INFO] Applying CRD machinehealthchecks.cluster.x-k8s.io
2022/04/28 03:34:09 [INFO] Applying CRD machines.cluster.x-k8s.io
2022/04/28 03:34:09 [INFO] Applying CRD machinesets.cluster.x-k8s.io
2022/04/28 03:34:09 [INFO] Waiting for CRD machinesets.cluster.x-k8s.io to become available
2022/04/28 03:34:09 [INFO] Done waiting for CRD machinesets.cluster.x-k8s.io to become available
2022/04/28 03:34:09 [INFO] Creating CRD authconfigs.management.cattle.io
2022/04/28 03:34:09 [INFO] Creating CRD groupmembers.management.cattle.io
2022/04/28 03:34:09 [INFO] Creating CRD groups.management.cattle.io
2022/04/28 03:34:09 [INFO] Creating CRD tokens.management.cattle.io
2022/04/28 03:34:09 [INFO] Creating CRD userattributes.management.cattle.io
2022/04/28 03:34:09 [INFO] Creating CRD users.management.cattle.io
2022/04/28 03:34:09 [INFO] Waiting for CRD tokens.management.cattle.io to become available
2022/04/28 03:34:10 [INFO] Done waiting for CRD tokens.management.cattle.io to become available
2022/04/28 03:34:10 [INFO] Waiting for CRD userattributes.management.cattle.io to become available
2022/04/28 03:34:10 [INFO] Done waiting for CRD userattributes.management.cattle.io to become available
2022/04/28 03:34:10 [INFO] Waiting for CRD users.management.cattle.io to become available
2022/04/28 03:34:11 [INFO] Done waiting for CRD users.management.cattle.io to become available
2022/04/28 03:34:11 [INFO] Creating CRD clusterroletemplatebindings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD apps.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD catalogs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD apprevisions.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD dynamicschemas.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD catalogtemplates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD pipelineexecutions.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD etcdbackups.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD pipelinesettings.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD globalrolebindings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD pipelines.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD catalogtemplateversions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD globalroles.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD sourcecodecredentials.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clusteralerts.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clusteralertgroups.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD sourcecodeproviderconfigs.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD kontainerdrivers.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD nodedrivers.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clustercatalogs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD sourcecoderepositories.project.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clusterloggings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD nodepools.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD nodetemplates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clusteralertrules.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clustermonitorgraphs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clusterscans.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD nodes.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD podsecuritypolicytemplateprojectbindings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD composeconfigs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD podsecuritypolicytemplates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD multiclusterapps.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectnetworkpolicies.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD multiclusterapprevisions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectroletemplatebindings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD monitormetrics.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projects.management.cattle.io
2022/04/28 03:34:11 [INFO] Waiting for CRD sourcecodecredentials.project.cattle.io to become available
2022/04/28 03:34:11 [INFO] Creating CRD rkek8ssystemimages.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD notifiers.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD rkek8sserviceoptions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectalerts.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD rkeaddons.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectalertgroups.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD roletemplates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectcatalogs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectloggings.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD samltokens.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectalertrules.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clustertemplates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD projectmonitorgraphs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD clustertemplaterevisions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD cisconfigs.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD cisbenchmarkversions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD templates.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD templateversions.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD templatecontents.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD globaldnses.management.cattle.io
2022/04/28 03:34:11 [INFO] Creating CRD globaldnsproviders.management.cattle.io
2022/04/28 03:34:11 [INFO] Waiting for CRD nodetemplates.management.cattle.io to become available
2022/04/28 03:34:12 [INFO] Waiting for CRD projectalertgroups.management.cattle.io to become available
2022/04/28 03:34:12 [FATAL] k3s exited with: exit status 1
</code></pre>
| <p>I would recommend trying to run it with a specific tag, i.e. <code>rancher/rancher:v2.6.6</code>.</p>
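<p>For example, reusing the command from the question with a pinned tag (swap in whichever release you want to test):</p>
<pre><code>sudo docker run -d -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.6
</code></pre>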
<p>Some other things that may interfere: what size setup are you running on?
The minimum requirements are currently 2 CPUs and 4 GB of RAM.</p>
<p>Also, you can try their docker install scripts and check out other documentation here: <a href="https://rancher.com/docs/rancher/v2.6/en/installation/requirements/installing-docker/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.6/en/installation/requirements/installing-docker/</a></p>
<p>Edit: noticed you're running on ARM. There is additional documentation for running rancher on ARM here: <a href="https://rancher.com/docs/rancher/v2.5/en/installation/resources/advanced/arm64-platform/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.5/en/installation/resources/advanced/arm64-platform/</a></p>
|
<p>I am installing nginx ingress controller through helm chart and pods are not coming up. Got some issue with the permission.</p>
<p>Chart link - <a href="https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx</a></p>
<p>I am using latest version 4.2.1</p>
<p>I did the debugging as described here <a href="https://github.com/kubernetes/ingress-nginx/issues/4061" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/4061</a>
and also tried to run as the root user (<strong>runAsUser: 0</strong>).</p>
<p>I think I got this issue after the cluster upgrade from 1.19 to 1.22. Previously it was working fine.</p>
<p>Any suggestion on what I need to do to fix this?</p>
<blockquote>
<p>unexpected error storing fake SSL Cert: could not create PEM
certificate file
/etc/ingress-controller/ssl/default-fake-certificate.pem: open
/etc/ingress-controller/ssl/default-fake-certificate.pem: permission
denied</p>
</blockquote>
| <p>You obviously have a permission problem. Looking at the chart you specified, there are multiple values of <code>runAsUser</code> for the different components.</p>
<pre><code>controller.image.runAsUser: 101
controller.admissionWebhooks.patch.runAsUser: 2000
defaultBackend.image.runAsUser: 65534
</code></pre>
<p>I'm not sure why these are different, but if possible, try deleting your existing release and doing a fresh install of the chart.</p>
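<p>A rough sketch of that, assuming the release is called <code>ingress-nginx</code> and lives in the <code>ingress-nginx</code> namespace (adjust both to your actual setup):</p>
<pre><code># remove the existing release
helm uninstall ingress-nginx --namespace ingress-nginx

# re-install the chart from the official repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
</code></pre>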
<p>If the issue still persists, check the deployment / pod events and see if the cluster alerts you about something.</p>
<p>Also worth noting, there were breaking changes to the <code>Ingress</code> resource in 1.22.
Check <a href="https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/#what-to-do" rel="nofollow noreferrer">this link</a> and <a href="https://kubernetes.io/blog/2021/08/04/kubernetes-1-22-release-announcement/#major-changes" rel="nofollow noreferrer">this one</a> from the official release notes.</p>
|
<p>I have a docker image that I want to run inside my django code. Inside that image there is an executable that I have written in C++ that writes its output to google cloud storage. Normally when I run the django code like this:</p>
<pre><code>container = client.V1Container(name=container_name, command=["//usr//bin//sleep"], args=["3600"], image=container_image, env=env_list, security_context=security)
</code></pre>
<p>And manually go inside the container to run this:</p>
<pre><code>gcloud container clusters get-credentials my-cluster --region us-central1 --project proj_name && kubectl exec pod-id -c jobcontainer -- xvfb-run -a "path/to/exe"
</code></pre>
<p>It works as intended and gives off the output to cloud storage. (I need to use a virtual monitor so I'm using xvfb first). However I must call this through django like this:</p>
<pre><code>container = client.V1Container(name=container_name, command=["xvfb-run"], args=["-a","\"path/to/exe\""], image=container_image, env=env_list, security_context=security)
</code></pre>
<p>But when I do this, the job gets created but never finishes and does not give off an output to the storage. When I go inside my container to run <code>ps aux</code> I get this output:</p>
<pre><code>USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 2888 1836 ? Ss 07:34 0:00 /bin/sh /usr/bin/xvfb-run -a "path/to/exe"
root 16 0.0 1.6 196196 66256 ? S 07:34 0:00 Xvfb :99 -screen 0 1280x1024x24 -nolisten tcp -auth /tmp/xvfb-run.r5gaBO/Xauthority
root 35 0.0 0.0 7016 1552 ? Rs 10:31 0:00 ps aux
</code></pre>
<p>It looks like it's stuck inside my code, but my code does not have a loop that it could get stuck inside; perhaps there is an error occurring (I don't think so, since the exact same command works when typed manually). If there is an error, how can I see the console output? Why does my code get stuck, and how can I get my desired output? Could there be an error caused by permissions (the code does a lot of stuff that requires permissions, like writing to storage and reading files inside the pod, but as mentioned it works normally when I run it via the command line)?</p>
| <p>Apparently for anyone having a similar issue, we fixed it by adding the command we want to run at the end of the <code>Dockerfile</code> instead of passing it as a parameter inside django's container call like this:</p>
<pre><code>CMD ["entrypoint.sh"]
</code></pre>
<p>entrypoint.sh:</p>
<pre><code>xvfb-run -a "path/to/exe"
</code></pre>
<p>Instead of calling it inside django like we did before and simply removing the command argument from the container call so it looked like this:</p>
<pre><code>container = client.V1Container(name=container_name, image=container_image, env=env_list, stdin=True, security_context=security)
</code></pre>
|
<p>In a microservices environment deployed to a Kubernetes cluster, why would we use an API gateway (for example Spring Cloud Gateway) if Kubernetes supplies the same service with Ingress?</p>
| <p>An Ingress controller creates one Kubernetes service that gets exposed as a LoadBalancer. For a simple mental model, you can think of an ingress as an Nginx server that just forwards traffic to services based on a rule set. An ingress does not have as much functionality as an API gateway; some ingress controllers don't support authentication, rate limiting, application routing, security, merging of requests and responses, and other add-on/plugin options.</p>
<p>An API gateway can also do simple routing, but it is mostly used when you need more flexibility, security and configuration options. While multiple teams or projects can share a set of Ingress controllers, or Ingress controllers can be specialized on a per-environment basis, there are reasons you might choose to deploy a dedicated API gateway inside Kubernetes rather than leveraging the existing Ingress controller. Using both an Ingress controller and an API gateway inside Kubernetes can give organizations the flexibility to meet their business requirements.</p>
<p>For accessing a database:</p>
<p>If the database and cluster are somewhere in the cloud, you could use the internal database IP. If not, you should provide the IP of the machine where the database is hosted.</p>
<p>You can also refer to this <a href="https://medium.com/@ManagedKube/kubernetes-access-external-services-e4fd643e5097" rel="nofollow noreferrer">Kubernetes Access External Services</a> article.</p>
|
<p>This question has been asked before; I've been trying plenty of examples over the past two days to try and configure this, with no luck, so I am posting my environment in the hope of some help.</p>
<p><strong>Problem</strong> <br />
Nextjs environment variables are all undefined after deploying to kubernetes using Terraform</p>
<p><strong>Expected Result</strong> <br /></p>
<pre><code>staging: NEXT_PUBLIC_APIROOT=https://apis-staging.mywebsite.com
production: NEXT_PUBLIC_APIROOT=https://apis.mywebsite.com
</code></pre>
<p>The secrets are stored in github actions. I have a terraform setup that deploys my application to my staging and production clusters; a snippet is below:</p>
<pre><code>env:
ENV: staging
PROJECT_ID: ${{ secrets.GKE_PROJECT_STAG }}
GOOGLE_CREDENTIALS: ${{ secrets.GOOGLE_CREDENTIALS_STAG }}
GKE_SA_KEY: ${{ secrets.GKE_SA_KEY_STAG }}
NEXT_PUBLIC_APIROOT: ${{ secrets.NEXT_PUBLIC_APIROOT_STAGING }}
</code></pre>
<p>I have an additional step to manually create a .env file as well</p>
<pre><code> - name: env-file
run: |
touch .env.local
echo NEXT_PUBLIC_APIROOT: ${{ secrets.NEXT_PUBLIC_APIROOT_STAGING }} >> .env.local
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM node:16-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json .npmrc ./
RUN npm ci
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:16-alpine AS runner
WORKDIR /app
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
</code></pre>
<p><strong>next.config.js</strong></p>
<pre><code>module.exports = withBundleAnalyzer({
publicRuntimeConfig: {
NEXT_PUBLIC_APIROOT: process.env.NEXT_PUBLIC_APIROOT,
},
output: 'standalone',
webpack: (config, { dev, isServer }) => {
if (dev && isServer) {
const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin')
config.plugins.push(
new ForkTsCheckerWebpackPlugin({
eslint: {
files: './src/**/*.{ts,tsx,js,jsx}',
},
})
)
}
return config
},
})
</code></pre>
<p>Anybody have experience with this issue?</p>
| <p>I wanna start by saying that I am by no means an expert in NextJS. Therefore, I tried approaching your problem under the following assumptions:</p>
<ul>
<li>the issue is not necessarily related to NextJS, as it appears to be related to Kubernetes resource deployment and management</li>
<li>leverage the default <code>next.config.js</code> mechanism that automatically loads environment variables from <code>.env.local</code></li>
<li>use a GKE cluster to use the same deployment target (region: <code>us-central1-c</code>)</li>
</ul>
<p>My first step was to create a dummy NextJS application with a single API endpoint that simply prints one of the environment variables that I am trying to set when deploying the workload to Kubernetes. When it comes to the Dockerfile, I used the exact same image that you provided. Please find below the relevant files from my dummy app:</p>
<p><strong>pages/api/test.js</strong></p>
<pre><code>export default function handler(req, res) {
res.status(200).json(process.env.NEXT_PUBLIC_APIROOT)
}
</code></pre>
<p><strong>next.config.js</strong></p>
<pre><code>const withBundleAnalyzer = require('@next/bundle-analyzer')({
enabled: true,
});
module.exports = withBundleAnalyzer({
publicRuntimeConfig: {
NEXT_PUBLIC_APIROOT: process.env.NEXT_PUBLIC_APIROOT,
},
output: 'standalone'
})
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM node:16-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:16-alpine AS runner
WORKDIR /app
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["npm", "start"]
</code></pre>
<p>There is a single change that I've done in the Dockerfile and that is updating the CMD entry so that the application starts via the <code>npm start</code> command.</p>
<p>As per the official <a href="https://nextjs.org/docs/basic-features/environment-variables" rel="nofollow noreferrer">documentation</a>, NextJS will try to look for <code>.env.local</code> in the app root folder and load those environment variables in <code>process.env</code>.</p>
<p>Therefore, I created a YAML file with Kubernetes resources that will be used to create the deployment setup.</p>
<p><strong>nextjs-app-setup.yaml</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nextjs-app-config
data:
.env.local: |-
NEXT_PUBLIC_APIROOT=hello_i_am_an_env_variable
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nextjs-app
labels:
app: nextjs-app
spec:
replicas: 1
selector:
matchLabels:
app: nextjs-app
template:
metadata:
labels:
app: nextjs-app
spec:
containers:
- name: nextjs-app
image: public.ecr.aws/u4x8r8g3/nextjs-app:latest
ports:
- containerPort: 3000
volumeMounts:
- name: nextjs-app-config
mountPath: "/app/.env.local"
subPath: ".env.local"
readOnly: true
volumes:
- name: nextjs-app-config
configMap:
name: nextjs-app-config
---
apiVersion: v1
kind: Service
metadata:
name: nextjs-service
spec:
selector:
app: nextjs-app
ports:
- protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>There are multiple things happening in the above configuration:</p>
<ul>
<li>Define a ConfigMap resource that will hold all of the required environment variables that the NextJS application will require. There is a single entry for <code>.env.local</code> that will hold all of the environment variables and will be mounted as a file in the application pod</li>
<li>Define a Deployment resource for the NextJS application. The most important section here is the <code>volumes</code> and <code>volumeMounts</code> blocks. Here, I am mounting the <code>.env.local</code> entry from the ConfigMap that was defined on the <code>/app/.env.local</code> path</li>
<li>Define a Service resource to be able to interact with the NextJS application</li>
</ul>
<p>After connecting to the GKE cluster via kubectl, I applied the configuration via <code>kubectl apply -f nextjs-app-setup.yaml</code>.</p>
<p>To connect to the service from my local workstation, I executed <code>kubectl port-forward service/nextjs-service 3000:3000</code>. Then I navigated in my browser to <code>localhost:3000/api/test</code> and can see the value that I set in the ConfigMap as the output.</p>
<p>Disclaimer: I understand that your setup might involve some additional components especially when it comes to CI/CD and Infrastructure-as-Code, but my answer here should at least provide you with an approach to accessing environment variables in your containerized NextJS workloads. If you still get <code>undefined</code> values, my assumption is that it would most likely be related to how you are configuring them in your CI/CD pipeline, but that would be a different issue that is not related to NextJS or Kubernetes.</p>
|
<p>I have two servers with a k3s setup. I have installed the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">k8s-dashboard</a> on server1.</p>
<p>I have setup <a href="https://stackoverflow.com/questions/36306904/configure-kubectl-command-to-access-remote-kubernetes-cluster-on-azure">clustering</a> in server1 i.e. I can access k8s resources of server2 from server1.</p>
<pre><code>kubectl config set-cluster server2 --server=https://{IP_OF_SERVER2}:6443
kubectl config set-context server2 --cluster=server2
kubectl config use-context server2
</code></pre>
<p>But I want to access all resources of server2 from k8s dashboard of server1.</p>
<p>Is this possible to do?</p>
| <p>First, the Kubernetes dashboard needs to query the <a href="https://github.com/kubernetes-sigs/dashboard-metrics-scraper" rel="nofollow noreferrer">dashboard-metrics-scraper</a>, so you will need to install that before linking the dashboard UI with the <code>scraper</code>.</p>
<p>Second, from the code, it does not look like it accepts an array; it accepts a single string.</p>
<pre><code># Metrics Scraper sidecar host for dashboard
K8S_DASHBOARD_SIDECAR_HOST=${K8S_DASHBOARD_SIDECAR_HOST:-"http://localhost:8000"}
</code></pre>
<p><a href="https://github.com/kubernetes/dashboard/blob/1148f7ba9f9eadd719e53fa3bc8bde5b7cfdb395/aio/develop/run-npm-on-container.sh#L62" rel="nofollow noreferrer">Scraper sidecar</a></p>
<p><a href="https://github.com/kubernetes/dashboard/blob/1148f7ba9f9eadd719e53fa3bc8bde5b7cfdb395/aio/develop/run-npm-on-container.sh#L98" rel="nofollow noreferrer">docker-env</a></p>
<p>So you would need to deploy the Metrics Scraper sidecar on cluster 2, expose the service, and you might need two instances of the dashboard.</p>
<p>So it is better to create a dashboard on each cluster.</p>
|
<p>I need to create a Kubernetes clientset using a token extracted from JSON service account key file.</p>
<p>I explicitly provide this token inside the config, however it still looks for Google Application-Default credentials, and crashes because it cannot find them.</p>
<p>Below is my code:</p>
<pre><code>package main
import (
"context"
"encoding/base64"
"fmt"
"io/ioutil"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
gke "google.golang.org/api/container/v1"
"google.golang.org/api/option"
"k8s.io/client-go/kubernetes"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/clientcmd/api"
)
const (
projectID = "my_project_id"
clusterName = "my_cluster_name"
scope = "https://www.googleapis.com/auth/cloud-platform"
)
func main() {
ctx := context.Background()
// Read JSON key and extract the token
data, err := ioutil.ReadFile("sa_key.json")
if err != nil {
panic(err)
}
creds, err := google.CredentialsFromJSON(ctx, data, scope)
if err != nil {
panic(err)
}
token, err := creds.TokenSource.Token()
if err != nil {
panic(err)
}
fmt.Println("token", token.AccessToken)
// Create GKE client
tokenSource := oauth2.StaticTokenSource(token)
gkeClient, err := gke.NewService(ctx, option.WithTokenSource(tokenSource))
if err != nil {
panic(err)
}
// Create a dynamic kube config
inMemKubeConfig, err := createInMemKubeConfig(ctx, gkeClient, token, projectID)
if err != nil {
panic(err)
}
// Use it to create a rest.Config
config, err := clientcmd.NewNonInteractiveClientConfig(*inMemKubeConfig, clusterName, &clientcmd.ConfigOverrides{CurrentContext: clusterName}, nil).ClientConfig()
if err != nil {
panic(err)
}
// Create the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err) // this where the code crashes because it can't find the Google ADCs
}
fmt.Printf("clientset %+v\n", clientset)
}
func createInMemKubeConfig(ctx context.Context, client *gke.Service, token *oauth2.Token, projectID string) (*api.Config, error) {
k8sConf := api.Config{
APIVersion: "v1",
Kind: "Config",
Clusters: map[string]*api.Cluster{},
AuthInfos: map[string]*api.AuthInfo{},
Contexts: map[string]*api.Context{},
}
// List all clusters in project with id projectID across all zones ("-")
resp, err := client.Projects.Zones.Clusters.List(projectID, "-").Context(ctx).Do()
if err != nil {
return nil, err
}
for _, f := range resp.Clusters {
name := fmt.Sprintf("gke_%s_%s_%s", projectID, f.Zone, f.Name) // My custom naming convention
cert, err := base64.StdEncoding.DecodeString(f.MasterAuth.ClusterCaCertificate)
if err != nil {
return nil, err
}
k8sConf.Clusters[name] = &api.Cluster{
CertificateAuthorityData: cert,
Server: "https://" + f.Endpoint,
}
k8sConf.Contexts[name] = &api.Context{
Cluster: name,
AuthInfo: name,
}
k8sConf.AuthInfos[name] = &api.AuthInfo{
Token: token.AccessToken,
AuthProvider: &api.AuthProviderConfig{
Name: "gcp",
Config: map[string]string{
"scopes": scope,
},
},
}
}
return &k8sConf, nil
}
</code></pre>
<p>and here is the error message:</p>
<pre><code>panic: cannot construct google default token source: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
</code></pre>
| <p>Here's what worked for me.</p>
<p>It is based on this <a href="https://gist.github.com/ahmetb/548059cdbf12fb571e4e2f1e29c48997" rel="nofollow noreferrer">gist</a>
and it's exactly what I was looking for. It uses an <code>oauth2.TokenSource</code> object which can be fed with a variety of token types so it's quite flexible.</p>
<p>It took me a long time to find this solution so I hope this helps somebody!</p>
<pre><code>package main
import (
"context"
"encoding/base64"
"fmt"
"io/ioutil"
"log"
"net/http"
gke "google.golang.org/api/container/v1"
"google.golang.org/api/option"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)
const (
googleAuthPlugin = "gcp"
projectID = "my_project"
clusterName = "my_cluster"
zone = "my_cluster_zone"
scope = "https://www.googleapis.com/auth/cloud-platform"
)
type googleAuthProvider struct {
tokenSource oauth2.TokenSource
}
// These functions are needed even if we don't use them,
// so that googleAuthProvider satisfies the rest.AuthProvider interface.
func (g *googleAuthProvider) WrapTransport(rt http.RoundTripper) http.RoundTripper {
return &oauth2.Transport{
Base: rt,
Source: g.tokenSource,
}
}
func (g *googleAuthProvider) Login() error { return nil }
func main() {
ctx := context.Background()
// Extract a token from the JSON SA key
data, err := ioutil.ReadFile("sa_key.json")
if err != nil {
panic(err)
}
creds, err := google.CredentialsFromJSON(ctx, data, scope)
if err != nil {
panic(err)
}
token, err := creds.TokenSource.Token()
if err != nil {
panic(err)
}
tokenSource := oauth2.StaticTokenSource(token)
// Authenticate with the token
// If it's nil use Google ADC
if err := rest.RegisterAuthProviderPlugin(googleAuthPlugin,
func(clusterAddress string, config map[string]string, persister rest.AuthProviderConfigPersister) (rest.AuthProvider, error) {
var err error
if tokenSource == nil {
tokenSource, err = google.DefaultTokenSource(ctx, scope)
if err != nil {
return nil, fmt.Errorf("failed to create google token source: %+v", err)
}
}
return &googleAuthProvider{tokenSource: tokenSource}, nil
}); err != nil {
log.Fatalf("Failed to register %s auth plugin: %v", googleAuthPlugin, err)
}
gkeClient, err := gke.NewService(ctx, option.WithTokenSource(tokenSource))
if err != nil {
panic(err)
}
    clientset, err := getClientSet(ctx, gkeClient, projectID, clusterName)
if err != nil {
panic(err)
}
// Demo to make sure it works
pods, err := clientset.CoreV1().Pods("").List(ctx, metav1.ListOptions{})
if err != nil {
panic(err)
}
log.Printf("There are %d pods in the cluster", len(pods.Items))
for _, pod := range pods.Items {
fmt.Println(pod.Name)
}
}
func getClientSet(ctx context.Context, client *gke.Service, projectID, name string) (*kubernetes.Clientset, error) {
// Get cluster info
cluster, err := client.Projects.Zones.Clusters.Get(projectID, zone, name).Context(ctx).Do()
if err != nil {
panic(err)
}
// Decode cluster CA certificate
cert, err := base64.StdEncoding.DecodeString(cluster.MasterAuth.ClusterCaCertificate)
if err != nil {
return nil, err
}
// Build a config using the cluster info
config := &rest.Config{
TLSClientConfig: rest.TLSClientConfig{
CAData: cert,
},
Host: "https://" + cluster.Endpoint,
AuthProvider: &clientcmdapi.AuthProviderConfig{Name: googleAuthPlugin},
}
return kubernetes.NewForConfig(config)
}
</code></pre>
|
<p>I have been trying to implement istio authorization using Oauth2 and keycloak. I have followed few articles related to this <a href="https://medium.com/@senthilrch/api-authentication-using-istio-ingress-gateway-oauth2-proxy-and-keycloak-part-2-of-2-dbb3fb9cd0d0" rel="nofollow noreferrer">API Authentication: Configure Istio IngressGateway, OAuth2-Proxy and Keycloak</a>, <a href="https://istio.io/latest/docs/reference/config/security/authorization-policy/" rel="nofollow noreferrer">Authorization Policy</a></p>
<p><strong>Expected output:</strong> My idea is to implement keycloak authentication where oauth2 is used as an external auth provider in the istio ingress gateway.
When a user tries to access my app at <code><ingress host>/app</code>, it should automatically redirect to the keycloak login page.</p>
<p>How do I properly redirect the page to the keycloak login screen for authentication?</p>
<p><strong>Problem:</strong>
When I try to access <code><ingress host>/app</code>, the page takes 10 seconds to load and then gives a 403 access denied.
If I remove the authorization policy (<em>kubectl delete -f authorization-policy.yaml</em>) within those 10 seconds, it redirects to the login screen (<em>keycloak</em>).</p>
<p>oauth2.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: oauth-proxy
name: oauth-proxy
spec:
type: NodePort
selector:
app: oauth-proxy
ports:
- name: http-oauthproxy
port: 4180
nodePort: 31023
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: oauth-proxy
name: oauth-proxy
spec:
replicas: 1
selector:
matchLabels:
app: "oauth-proxy"
template:
metadata:
labels:
app: oauth-proxy
spec:
containers:
- name: oauth-proxy
image: "quay.io/oauth2-proxy/oauth2-proxy:v7.2.0"
ports:
- containerPort: 4180
args:
- --http-address=0.0.0.0:4180
- --upstream=http://test-web-app:3000
- --set-xauthrequest=true
- --pass-host-header=true
- --pass-access-token=true
env:
# OIDC Config
- name: "OAUTH2_PROXY_PROVIDER"
value: "keycloak-oidc"
- name: "OAUTH2_PROXY_OIDC_ISSUER_URL"
value: "http://192.168.1.2:31020/realms/my_login_realm"
- name: "OAUTH2_PROXY_CLIENT_ID"
value: "my_nodejs_client"
- name: "OAUTH2_PROXY_CLIENT_SECRET"
value: "JGEQtkrdIc6kRSkrs89BydnfsEv3VoWO"
# Cookie Config
- name: "OAUTH2_PROXY_COOKIE_SECURE"
value: "false"
- name: "OAUTH2_PROXY_COOKIE_SECRET"
value: "ZzBkN000Wm0pQkVkKUhzMk5YPntQRUw_ME1oMTZZTy0="
- name: "OAUTH2_PROXY_COOKIE_DOMAINS"
value: "*"
# Proxy config
- name: "OAUTH2_PROXY_EMAIL_DOMAINS"
value: "*"
- name: "OAUTH2_PROXY_WHITELIST_DOMAINS"
value: "*"
- name: "OAUTH2_PROXY_HTTP_ADDRESS"
value: "0.0.0.0:4180"
- name: "OAUTH2_PROXY_SET_XAUTHREQUEST"
value: "true"
- name: OAUTH2_PROXY_PASS_AUTHORIZATION_HEADER
value: "true"
- name: OAUTH2_PROXY_SSL_UPSTREAM_INSECURE_SKIP_VERIFY
value: "true"
- name: OAUTH2_PROXY_SKIP_PROVIDER_BUTTON
value: "true"
- name: OAUTH2_PROXY_SET_AUTHORIZATION_HEADER
value: "true"
</code></pre>
<p>keycloak.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: keycloak
spec:
type: NodePort
selector:
app: keycloak
ports:
- name: http-keycloak
port: 8080
nodePort: 31020
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
spec:
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:17.0.0
ports:
- containerPort: 8080
args: ["start-dev"]
env:
- name: KEYCLOAK_ADMIN
value: "admin"
- name: KEYCLOAK_ADMIN_PASSWORD
value: "admin"
</code></pre>
<p>istio-operator.yaml</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
accessLogFile: /dev/stdout
extensionProviders:
- name: "oauth2-proxy"
envoyExtAuthzHttp:
service: "oauth-proxy.default.svc.cluster.local"
port: "4180" # The default port used by oauth2-proxy.
includeHeadersInCheck: ["authorization", "cookie","x-forwarded-access-token","x-forwarded-user","x-forwarded-email","x-forwarded-proto","proxy-authorization","user-agent","x-forwarded-host","from","x-forwarded-for","accept","x-auth-request-redirect"] # headers sent to the oauth2-proxy in the check request.
headersToUpstreamOnAllow: ["authorization", "path", "x-auth-request-user", "x-auth-request-email", "x-auth-request-access-token","x-forwarded-access-token"] # headers sent to backend application when request is allowed.
headersToDownstreamOnDeny: ["content-type", "set-cookie"] # headers sent back to the client when request is denied.
</code></pre>
<p>gateway.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: test-gateway
namespace : istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
</code></pre>
<p>virtual-service.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: gateway-vs
spec:
hosts:
- '*'
gateways:
- istio-system/test-gateway
http:
- match:
- uri:
prefix: /oauth2
route:
- destination:
host: oauth-proxy.default.svc.cluster.local
port:
number: 4180
- match:
- uri:
prefix: /app
route:
- destination:
host: test-web-app.default.svc.cluster.local
port:
number: 3000
</code></pre>
<p>authorization-policy.yaml</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: example-auth-policy
spec:
action: CUSTOM
provider:
name: "oauth2-proxy"
rules:
- to:
- operation:
paths: ["/app"]
notPaths: ["/oauth2/*"]
</code></pre>
| <p>The redirection issue was solved by updating the authorization policy:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: example-auth-policy
namespace: istio-system
spec:
action: CUSTOM
provider:
name: "oauth2-proxy"
rules:
- to:
- operation:
paths: ["/app"]
notPaths: ["/oauth2/*"]
selector:
matchLabels:
app: istio-ingressgateway
</code></pre>
<ul>
<li>Added <code>istio-system</code> namespace instead of workload namespace (it was default in my case)</li>
<li>Forgot to add <code>matchLabels</code>.</li>
</ul>
|
<p>I'm trying to implement the EFK stack (with Fluent Bit) in my k8s cluster. The log file I would like to parse is sometimes single-line and sometimes multiline:</p>
<pre><code>2022-03-13 13:27:04 [-][-][-][error][craft\db\Connection::open] SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known
2022-03-13 13:27:04 [-][-][-][info][application] $_GET = []
$_POST = []
$_FILES = []
$_COOKIE = [
'__test1' => 'x'
'__test2' => 'x2'
]
$_SERVER = [
'__test3' => 'x3'
'__test2' => 'x3'
]
</code></pre>
<p>When I check the captured logs in Kibana I see that all multiline logs are split into single lines, which is of course not what we want. I'm trying to configure a parser in the Fluent Bit config which will interpret a multiline log as one entry, unfortunately with no success.</p>
<p>I've tried this:</p>
<pre><code>[PARSER]
Name MULTILINE_MATCH
Format regex
Regex ^\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2} \[-]\[-]\[-]\[(?<level>.*)\]\[(?<where>.*)\] (?<message>[\s\S]*)
Time_Key time
Time_Format %b %d %H:%M:%S
</code></pre>
<p>In k8s all fluent bit configurations are stored in config map. So here's my whole configuration of fluent bit (the multiline parser is at the end):</p>
<pre><code>kind: ConfigMap
metadata:
name: fluent-bit
namespace: efk
labels:
app: fluent-bit
data:
# Configuration files: server, input, filters and output
# ======================================================
fluent-bit.conf: |
[SERVICE]
Flush 1
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
@INCLUDE input-kubernetes.conf
@INCLUDE filter-kubernetes.conf
@INCLUDE output-elasticsearch.conf
input-kubernetes.conf: |
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 5MB
Skip_Long_Lines On
Refresh_Interval 10
filter-kubernetes.conf: |
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
output-elasticsearch.conf: |
[OUTPUT]
Name es
Match *
Host ${FLUENT_ELASTICSEARCH_HOST}
Port ${FLUENT_ELASTICSEARCH_PORT}
Logstash_Format On
Replace_Dots On
Retry_Limit False
parsers.conf: |
[PARSER]
Name apache
Format regex
Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name apache2
Format regex
Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name apache_error
Format regex
Regex ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$
[PARSER]
Name nginx
Format regex
Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name json
Format json
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep On
[PARSER]
Name syslog
Format regex
Regex ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
Time_Key time
Time_Format %b %d %H:%M:%S
[PARSER]
Name MULTILINE_MATCH
Format regex
Regex ^\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2} \[-]\[-]\[-]\[(?<level>.*)\]\[(?<where>.*)\] (?<message>[\s\S]*)
Time_Key time
Time_Format %b %d %H:%M:%S
</code></pre>
| <p>Starting from Fluent Bit v1.8, you can use the <code>multiline.parser</code> option as below;
the <code>docker</code> and <code>cri</code> multiline parsers are predefined in Fluent Bit.</p>
<pre><code>[INPUT]
Name tail
Path /var/log/containers/*.log
multiline.parser docker, cri
Tag kube.*
Mem_Buf_Limit 5MB
Skip_Long_Lines On
</code></pre>
<p><a href="https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-and-containers-v1.8" rel="nofollow noreferrer">https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-and-containers-v1.8</a></p>
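<p>Note that the built-in <code>docker</code> and <code>cri</code> parsers only reassemble lines that were split by the container runtime. If the application log itself spans multiple lines, as in the question, you can additionally define a custom multiline parser and apply it with the <code>multiline</code> filter. A rough sketch, assuming every new entry starts with a <code>YYYY-MM-DD HH:MM:SS</code> timestamp; the parser name, regexes and flush timeout are illustrative and should be checked against the Fluent Bit multiline-parsing docs:</p>
<pre><code>[MULTILINE_PARSER]
    name          multiline-app
    type          regex
    flush_timeout 1000
    # rules:  "state name"   "regex pattern"   "next state"
    rule      "start_state"  "/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} /"        "cont"
    rule      "cont"         "/^(?!\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} ).+/"  "cont"

[FILTER]
    name                  multiline
    match                 kube.*
    multiline.key_content log
    multiline.parser      multiline-app
</code></pre>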
|
<p>Having set up Kibana and a fleet server, I have now attempted to add APM.
When going through the general setup, I always get an error, no matter what is done:</p>
<pre><code>failed to listen:listen tcp *.*.*.*:8200: bind: can't assign requested address
</code></pre>
<p>This happens when following the APM setup steps after having created the fleet server.
This is all being launched in Kubernetes, and the documentation has been gone through several times to no avail.</p>
<p>We did discover that we can hit the</p>
<blockquote>
<p>/intake/v2/events</p>
</blockquote>
<p>etc. endpoints when shelled into the container, but get a 404 for everything else. It's close but no cigar so far following the instructions.</p>
| <p>As it turned out, the general walkthrough is soon to be deprecated in its current form.
Setup is far simpler in a Helm file, where it's actually possible to configure Kibana with a package reference for your named APM service.</p>
<blockquote>
<pre><code>xpack.fleet.packages:
- name: system
version: latest
- name: elastic_agent
version: latest
- name: fleet_server
version: latest
- name: apm
version: latest
</code></pre>
</blockquote>
<pre><code> xpack.fleet.agentPolicies:
- name: Fleet Server on ECK policy
id: eck-fleet-server
is_default_fleet_server: true
namespace: default
monitoring_enabled:
- logs
- metrics
unenroll_timeout: 900
package_policies:
- name: fleet_server-1
id: fleet_server-1
package:
name: fleet_server
- name: Elastic Agent on ECK policy
id: eck-agent
namespace: default
monitoring_enabled:
- logs
- metrics
unenroll_timeout: 900
is_default: true
package_policies:
- name: system-1
id: system-1
package:
name: system
- package:
name: apm
name: apm-1
inputs:
- type: apm
enabled: true
vars:
- name: host
value: 0.0.0.0:8200
</code></pre>
<p>Making sure these are set in the Kibana Helm file will allow any spun-up fleet server to automatically register as having APM.</p>
<p>The missing key in seemingly all the documentation is the need for an APM service.
The simplest example of this is here:</p>
<p><a href="https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.3/config/recipes/elastic-agent/fleet-apm-integration.yaml" rel="nofollow noreferrer">Example yaml scripts</a></p>
|
<p>We have created kubernetes dashboard using below command.</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
kubectl patch svc -n kubernetes-dashboard kubernetes-dashboard --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
</code></pre>
<p>created dashboard-adminuser.yaml file like below.</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
</code></pre>
<p>Created ClusterRoleBinding.yaml file like below</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
</code></pre>
<p>And then we ran the below commands; at the end we got a token to log in to the dashboard.</p>
<pre><code>kubectl apply -f dashboard-adminuser.yaml
kubectl apply -f ClusterRoleBinding.yaml
kubectl -n kubernetes-dashboard create token admin-user
</code></pre>
<p>But the problem is that the token we generated expired in one hour. We couldn't use the same token again once the dashboard logged out.</p>
<p>So can we create a token without expiry, or at least with a minimum of 6 months?</p>
<p>What is the command/procedure to create a token for long time use?</p>
<p>And one more thing: right now we are accessing the kubernetes dashboard externally like below.</p>
<p>https://server_ip_address:PORT_NUMBER</p>
<p>Now we want to open the kubernetes dashboard using our website URL like below, and it should log in to the dashboard automatically.</p>
<p><a href="https://my-domain-name.com/kubernetes-dashboard/%7Bkubernetes-dashboard-goto-url%7D" rel="noreferrer">https://my-domain-name.com/kubernetes-dashboard/{kubernetes-dashboard-goto-url}</a></p>
| <p>You can set the <code>--duration</code> flag (shown here from the help output):</p>
<pre><code> --duration=0s:
Requested lifetime of the issued token. The server may return a token with a longer or shorter lifetime.
</code></pre>
<p>So this should work (substitute the lifetime you want):</p>
<pre><code>kubectl -n kubernetes-dashboard create token admin-user --duration=720h
</code></pre>
<p>You can check the further options with:</p>
<pre><code>kubectl create token --help
</code></pre>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-" rel="nofollow noreferrer">kubectl-commands: token</a></p>
<p>After playing around with the token, it seems like the maximum expiration is 720h.</p>
<pre><code>kubectl create token default --duration=488h --output yaml
</code></pre>
<p>and the output shows</p>
<pre><code>kind: TokenRequest
metadata:
creationTimestamp: null
spec:
audiences:
- https://container.googleapis.com/v1/projects/test/clusters/test
boundObjectRef: null
expirationSeconds: **172800**
status:
expirationTimestamp: "2022-08-21T12:37:02Z"
token: eyJhbGciOiJSUzI1N....
</code></pre>
<p>So the other option is to go with kubeconfig as the dashboard also accepts config.</p>
<p><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#kubeconfig" rel="nofollow noreferrer">dashboard-auth-kubeconfig</a></p>
|
<p>I am using EKS Fargate and created a fargate profile based on this doc: <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html</a>.</p>
<p>In the doc it says Fargate is used to allocate what pods are deployed to Fargate instead of Nodegroup or EC2. So my question is should I always have one Fargate profile in one cluster? Is there any reason to have more than 1?</p>
| <p>As of Aug 17th 2022, EKS now supports wildcards in Fargate Profile selectors. This means you can now run workloads from various Kubernetes namespaces with a single Fargate Profile. Previously, you would have had to specify every namespace, and were limited to just 5 namespace selectors or label pairs.</p>
<p><a href="https://aws.amazon.com/about-aws/whats-new/2022/08/wildcard-support-amazon-eks-fargate-profile-selectors/" rel="nofollow noreferrer">https://aws.amazon.com/about-aws/whats-new/2022/08/wildcard-support-amazon-eks-fargate-profile-selectors/</a></p>
<p>For example, now you can use selectors like <code>*-staging</code> to match namespaces ending in <code>-staging</code>. You can also use <code>?</code> to match single characters. <code>app?</code> would match <code>appA</code> and <code>appB</code>.</p>
<pre><code>eksctl create fargateprofile \
--cluster my-cluster \
--name my-fargate-profile \
--namespace *-staging
</code></pre>
|
<p>In trying to securely install metrics-server on Kubernetes, I'm having problems.</p>
<p>It seems like the metrics-server pod is unable to successfully make requests to the Kubelet API on its <code>10250</code> port.</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 0/1 1 0 16h
</code></pre>
<p>The Metrics Server deployment never becomes ready and it repeats the same sequence of error logs:</p>
<pre class="lang-sh prettyprint-override"><code>I0522 01:27:41.472946 1 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0522 01:27:41.798068 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0522 01:27:41.798092 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0522 01:27:41.798068 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/front-ca/front-proxy-ca.crt"
I0522 01:27:41.798107 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0522 01:27:41.798240 1 secure_serving.go:266] Serving securely on [::]:4443
I0522 01:27:41.798265 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0522 01:27:41.798284 1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed
I0522 01:27:41.898439 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0522 01:27:55.297497 1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.106:10250/metrics/resource\": context deadline exceeded" node="system76-pc"
E0522 01:28:10.297872 1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.106:10250/metrics/resource\": context deadline exceeded" node="system76-pc"
I0522 01:28:10.325613 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0522 01:28:20.325231 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0522 01:28:25.297750 1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.106:10250/metrics/resource\": context deadline exceeded" node="system76-pc"
</code></pre>
<p>I'm running Kubernetes deployed with <code>kubeadm</code> version 1.23.4 and I'm trying to securely use metrics-server.</p>
<p>I'm looking for advice that could help with:</p>
<ol>
<li>How can I accurately diagnose the problem?</li>
<li>Alternatively, which configuration seems most fruitful to check first?</li>
<li>Anything that will help with my mental model of which certificates and keys I need to configure explicitly and what is being handled automatically.</li>
</ol>
<p>So far, I have tried to validate that I can retrieve API metrics:</p>
<p><code>kubectl get --raw /api/v1/nodes/system76-pc/proxy/stats/summary</code></p>
<pre class="lang-json prettyprint-override"><code>{
"node": {
"nodeName": "system76-pc",
"systemContainers": [
{
"name": "kubelet",
"startTime": "2022-05-20T01:51:28Z",
"cpu": {
"time": "2022-05-22T00:48:40Z",
"usageNanoCores": 59453039,
"usageCoreNanoSeconds": 9768130002000
},
"memory": {
"time": "2022-05-22T00:48:40Z",
"usageBytes": 84910080,
"workingSetBytes": 84434944,
"rssBytes": 67149824,
"pageFaults": 893055,
"majorPageFaults": 290
}
},
{
"name": "runtime",
"startTime": "2022-05-20T00:33:24Z",
"cpu": {
"time": "2022-05-22T00:48:37Z",
"usageNanoCores": 24731571,
"usageCoreNanoSeconds": 3955659226000
},
"memory": {
"time": "2022-05-22T00:48:37Z",
"usageBytes": 484306944,
"workingSetBytes": 242638848,
"rssBytes": 84647936,
"pageFaults": 56994074,
"majorPageFaults": 428
}
},
{
"name": "pods",
"startTime": "2022-05-20T01:51:28Z",
"cpu": {
"time": "2022-05-22T00:48:37Z",
"usageNanoCores": 292818104,
"usageCoreNanoSeconds": 45976001446000
},
"memory": {
"time": "2022-05-22T00:48:37Z",
"availableBytes": 29648396288,
"usageBytes": 6108573696,
</code></pre>
<p><code>kubectl get --raw /api/v1/nodes/system76-pc/proxy/metrics/resource</code></p>
<pre class="lang-sh prettyprint-override"><code># HELP container_cpu_usage_seconds_total [ALPHA] Cumulative cpu time consumed by the container in core-seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container="alertmanager",namespace="flux-system",pod="alertmanager-prometheus-stack-kube-prom-alertmanager-0"} 108.399948 1653182143362
container_cpu_usage_seconds_total{container="calico-kube-controllers",namespace="kube-system",pod="calico-kube-controllers-56fcbf9d6b-n87ts"} 206.442768 1653182144294
container_cpu_usage_seconds_total{container="calico-node",namespace="kube-system",pod="calico-node-p6pxk"} 6147.643669 1653182155672
container_cpu_usage_seconds_total{container="cert-manager",namespace="cert-manager",pod="cert-manager-795d7f859d-8jp4f"} 134.583294 1653182142601
container_cpu_usage_seconds_total{container="cert-manager",namespace="cert-manager",pod="cert-manager-cainjector-5fcddc948c-vw4zz"} 394.286782 1653182151252
container_cpu_usage_seconds_total{container="cert-manager",namespace="cert-manager",pod="cert-manager-webhook-5b64f87794-pl7fb"} 404.53758 1653182140528
container_cpu_usage_seconds_total{container="config-reloader",namespace="flux-system",pod="alertmanager-prometheus-stack-kube-prom-alertmanager-0"} 6.01391 1653182139771
container_cpu_usage_seconds_total{container="config-reloader",namespace="flux-system",pod="prometheus-prometheus-stack-kube-prom-prometheus-0"} 42.706567 1653182130750
container_cpu_usage_seconds_total{container="controller",namespace="flux-system",pod="sealed-secrets-controller-5884bbf4d6-mql9x"} 43.814816 1653182144648
container_cpu_usage_seconds_total{container="controller",namespace="ingress-nginx",pod="ingress-nginx-controller-f9d6fc8d8-sgwst"} 645.109711 1653182141169
container_cpu_usage_seconds_total{container="coredns",namespace="kube-system",pod="coredns-64897985d-crtd9"} 380.682251 1653182141861
container_cpu_usage_seconds_total{container="coredns",namespace="kube-system",pod="coredns-64897985d-rpmxk"} 365.519839 1653182140533
container_cpu_usage_seconds_total{container="dashboard-metrics-scraper",namespace="kubernetes-dashboard",pod="dashboard-metrics-scraper-577dc49767-cbq8r"} 25.733362 1653182141877
container_cpu_usage_seconds_total{container="etcd",namespace="kube-system",pod="etcd-system76-pc"} 4237.357682 1653182140459
container_cpu_usage_seconds_total{container="grafana",namespace="flux-system",pod="prometheus-stack-grafana-757f9b9fcc-9f58g"} 345.034245 1653182154951
container_cpu_usage_seconds_total{container="grafana-sc-dashboard",namespace="flux-system",pod="prometheus-stack-grafana-757f9b9fcc-9f58g"} 123.480584 1653182146757
container_cpu_usage_seconds_total{container="grafana-sc-datasources",namespace="flux-system",pod="prometheus-stack-grafana-757f9b9fcc-9f58g"} 35.851112 1653182145702
container_cpu_usage_seconds_total{container="kube-apiserver",namespace="kube-system",pod="kube-apiserver-system76-pc"} 14166.156638 1653182150749
container_cpu_usage_seconds_total{container="kube-controller-manager",namespace="kube-system",pod="kube-controller-manager-system76-pc"} 4168.427981 1653182148868
container_cpu_usage_seconds_total{container="kube-prometheus-stack",namespace="flux-system",pod="prometheus-stack-kube-prom-operator-54d9f985c8-ml2qj"} 28.79018 1653182155583
container_cpu_usage_seconds_total{container="kube-proxy",namespace="kube-system",pod="kube-proxy-gg2wd"} 67.215459 1653182155156
container_cpu_usage_seconds_total{container="kube-scheduler",namespace="kube-system",pod="kube-scheduler-system76-pc"} 579.321492 1653182147910
container_cpu_usage_seconds_total{container="kube-state-metrics",namespace="flux-system",pod="prometheus-stack-kube-state-metrics-56d4759d67-h6lfv"} 158.343644 1653182153691
container_cpu_usage_seconds_total{container="kubernetes-dashboard",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-69dc48777b-8cckh"} 78.231809 1653182139263
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="helm-controller-dfb4b5478-7zgt6"} 338.974637 1653182143679
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="image-automation-controller-77fd9657c6-lg44h"} 280.841645 1653182154912
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="image-reflector-controller-86db8b6f78-5rz58"} 2909.277578 1653182144081
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="kustomize-controller-cd544c8f8-hxvk6"} 596.392781 1653182152714
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="notification-controller-d9cc9bf46-2jhbq"} 244.387967 1653182142902
container_cpu_usage_seconds_total{container="manager",namespace="flux-system",pod="source-controller-84bfd77bf8-r827h"} 541.650877 1653182148963
container_cpu_usage_seconds_total{container="metrics-server",namespace="flux-system",pod="metrics-server-55bc5f774-zznpb"} 174.229886 1653182146946
container_cpu_usage_seconds_total{container="nfs-subdir-external-provisioner",namespace="flux-system",pod="nfs-subdir-external-provisioner-858745f657-zcr66"} 244.061329 1653182139840
container_cpu_usage_seconds_total{container="node-exporter",namespace="flux-system",pod="prometheus-stack-prometheus-node-exporter-wj2fx"} 29.852036 1653182148779
container_cpu_usage_seconds_total{container="prometheus",namespace="flux-system",pod="prometheus-prometheus-stack-kube-prom-prometheus-0"} 7141.611234 1653182154042
# HELP container_memory_working_set_bytes [ALPHA] Current working set of the container in bytes
# TYPE container_memory_working_set_bytes gauge
container_memory_working_set_bytes{container="alertmanager",namespace="flux-system",pod="alertmanager-prometheus-stack-kube-prom-alertmanager-0"} 2.152448e+07 1653182143362
</code></pre>
<p>metric-server config:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
containers:
- args:
- --secure-port=4443
- --cert-dir=/tmp
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-preferred-address-types=Hostname
- --requestheader-client-ca-file=/front-ca/front-proxy-ca.crt
- --kubelet-certificate-authority=/ca/ca.crt
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: tmp
- mountPath: /front-ca
name: front-proxy-ca-dir
- mountPath: /ca
name: ca-dir
dnsPolicy: ClusterFirst
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: metrics-server
serviceAccountName: metrics-server
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: tmp
- configMap:
defaultMode: 420
name: front-proxy-ca
name: front-proxy-ca-dir
- configMap:
defaultMode: 420
name: kubelet-ca
name: ca-dir
</code></pre>
<p>kube-apiserver config:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.1.106:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=192.168.1.106
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: k8s.gcr.io/kube-apiserver:v1.23.4
</code></pre>
| <p>In my case I had the same issue with metrics-server because the master node had only 1 OCPU. Use at least 2.</p>
|
<p>I am using nginx-ingress in my cluster to expose certain services. I have an "auth" service that handles authentication, which I am trying to setup through nginx. Currently the service has a very simple GET endpoint, that always responds with a <code>UserId</code> header and tries to set two cookies:</p>
<pre class="lang-js prettyprint-override"><code>// This is implemented on Nest.js which uses express.js
@Get('*')
auth(@Res() res: Response): void {
res.header('UserId', '1')
res.cookie('key', 'value')
res.cookie('x', 'y')
res.status(200).send('hello')
}
</code></pre>
<p>I can confirm that both cookies are being set when I manually send a request to that endpoint, but when I set it as an annotation to the ingress:</p>
<pre><code>nginx.ingress.kubernetes.io/auth-url: http://auth.dev.svc.cluster.local
</code></pre>
<p>and send a request through the ingress, only one of the cookies is forwarded to the Response (the first one <code>key=value</code>). I am not familiar with the nginx configuration, is there something I am supposed to change to make this work, so that both cookies are set?</p>
<p>I found <a href="https://github.com/kubernetes/ingress-nginx/issues/8183" rel="nofollow noreferrer">this issue</a> on GitHub, but it seems to be about OAuth2 there is no clear explanation on what I am supposed to change.</p>
| <p>I couldn't find a way to make this work with the <code>Set-Cookie</code> header. Not sure if there is a better way, but here is a workaround:</p>
<p>I added a snippet for the <code>location</code> block that converts two headers to cookies:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
auth_request_set $auth_cookie1 $upstream_http_x_header1;
auth_request_set $auth_cookie2 $upstream_http_x_header2;
add_header Set-Cookie $auth_cookie1;
add_header Set-Cookie $auth_cookie2;
</code></pre>
<p>And the <code>auth()</code> endpoint now responds with the <code>X-Header1</code> and <code>X-Header2</code> headers:</p>
<pre class="lang-js prettyprint-override"><code>import { serialize } from 'cookie'
@Get('*')
auth(@Res() res: Response): void {
res.header('UserId', '1')
res.header('X-Header1', serialize('key', 'value'))
res.header('X-Header2', serialize('x', 'y'))
res.status(200).send('hello')
}
</code></pre>
<p>Everything seems to be working well and this solution is similar to how nginx is adding the Set-Cookie header which doesn't support multiple cookies. The code below is copied from the <code>nginx.conf</code> file in the <code>nginx-controller</code> pod that <code>nginx-ingress</code> creates.</p>
<pre><code>auth_request_set $auth_cookie $upstream_http_set_cookie;
add_header Set-Cookie $auth_cookie;
</code></pre>
|
<p>What do you use instead of kubectl get ComponentStatus?</p>
<pre><code>kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
</code></pre>
| <p>Yes, this API is deprecated. It provided the status of the <code>etcd, kube-scheduler, and kube-controller-manager</code> components, which you can now get through kubectl or the <code>/livez</code> and <code>/readyz</code> endpoints.</p>
<p>So you can try:</p>
<pre><code>kubectl get --raw='/readyz?verbose'
#local cluster
curl -k https://localhost:6443/livez?verbose
</code></pre>
<p><strong>output</strong></p>
<pre><code>[+]ping ok
[+]log ok
[+]etcd ok
[+]informer-sync ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]shutdown ok
readyz check passed
</code></pre>
<blockquote>
<p>The current state of this API is problematic, and requires reversing the actual data flow (it requires the API server to call to its clients), and is not functional across deployment topologies.</p>
<p>It should be clearly marked as deprecated.</p>
</blockquote>
<p><a href="https://github.com/kubernetes/kubernetes/pull/93570" rel="nofollow noreferrer">Mark componentstatus as deprecated </a></p>
<blockquote>
<p>The Kubernetes API server provides 3 API endpoints (healthz, livez and readyz) to indicate the current status of the API server. The healthz endpoint is deprecated (since Kubernetes v1.16), and you should use the more specific livez and readyz endpoints instead.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/using-api/health-checks/" rel="nofollow noreferrer">using-api-health-checks</a></p>
|
<p>I want to create a replica set of MongoDB pods, and once the pods are in a running state I want to create a collection on every MongoDB instance. Here is the code:</p>
<pre><code>metadata:
name: mongodb-standalone
spec:
replicas: 3
selector:
matchLabels:
app: database
template:
metadata:
labels:
app: database
selector: mongodb-standalone
spec:
containers:
- name: mongodb-standalone
image: mongo:4.0.8
lifecycle:
postStart:
exec:
command: ["mongo --eval 'db.createCollection(\"Profile\");' test"]
</code></pre>
<p>Still this code is not working.</p>
| <p>You can use a ConfigMap and mount the DB creation script into the container's init directory.</p>
<blockquote>
<p>When a container is started for the first time it will execute files with extensions .sh and .js that are found in <code>/docker-entrypoint-initdb.d.</code> Files will be executed in alphabetical order. .js files will be executed by mongo using the database specified by the MONGO_INITDB_DATABASE variable, if it is present, or test otherwise. You may also switch databases within the .js script.</p>
</blockquote>
<p>create file <code>create_db.js</code></p>
<pre><code>db.createCollection("user")
db.createCollection("movies")
db.user.insert({name: "Ada Lovelace", age: 205})
db.movies.insertMany( [
{
title: 'Titanic',
year: 1997,
genres: [ 'Drama', 'Romance' ]
},
{
title: 'Spirited Away',
year: 2001,
genres: [ 'Animation', 'Adventure', 'Family' ]
},
{
title: 'Casablanca',
genres: [ 'Drama', 'Romance', 'War' ]
}
] )
</code></pre>
<p>create configmap</p>
<pre><code>kubectl create configmap create-db-configmap --from-file=./create_db.js
</code></pre>
<p>now we are all set, create deployment and check the magic</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mongo
name: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mongo
spec:
containers:
- image: mongo
name: mongo
args: ["--dbpath","/data/db"]
livenessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
env:
- name: MONGO_INITDB_DATABASE
value: demodb
- name: MONGO_INITDB_ROOT_USERNAME
value: "root"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "password"
volumeMounts:
- name: "mongo-data-dir"
mountPath: "/data/db"
- name: "init-database"
mountPath: "/docker-entrypoint-initdb.d/"
volumes:
- name: "mongo-data-dir"
- name: "init-database"
configMap:
name: create-db-configmap
</code></pre>
<p>you can find complete example <a href="https://github.com/Adiii717/kubernetes-mongo-db-init" rel="nofollow noreferrer">here</a></p>
<p><a href="https://i.stack.imgur.com/GHeFR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GHeFR.png" alt="enter image description here" /></a></p>
|
<p>I tried to run Kafka in Raft mode (zookeeper-less) in Kubernetes and everything worked fine with this configuration:</p>
<p>I am curious how to change the provided configuration to run with a replication factor of 3, for instance.</p>
<p>There was a fruitful topic <a href="https://github.com/bitnami/bitnami-docker-kafka/issues/159" rel="nofollow noreferrer">on GitHub</a>, but no one provided a Kafka KRaft-mode setup with replication.</p>
<p>Statefulset</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: kafka-statefulset
namespace: kafka
labels:
app: kafka-cluster
spec:
serviceName: kafka-svc
replicas: 1
selector:
matchLabels:
app: kafka-cluster
template:
metadata:
labels:
app: kafka-cluster
spec:
containers:
- name: kafka-container
image: 'bitnami/kafka:latest'
ports:
- containerPort: 9092
- containerPort: 9093
env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CFG_NODE_ID
value: "1"
- name: KAFKA_ENABLE_KRAFT
value: "yes"
- name: KAFKA_CFG_PROCESS_ROLES
value: "broker,controller"
- name: KAFKA_CFG_CONTROLLER_LISTENER_NAMES
value: "CONTROLLER"
- name: KAFKA_CFG_LISTENERS
value: "CLIENT://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094"
- name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
value: "CONTROLLER:PLAINTEXT,CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT"
- name: KAFKA_CFG_INTER_BROKER_LISTENER_NAME
value: "CLIENT"
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: "CLIENT://kafka-statefulset-0.kafka-svc.kafka.svc.cluster.local:9092,EXTERNAL://127.0.0.1:9094"
- name: KAFKA_CFG_CONTROLLER_QUORUM_VOTERS
value: "[email protected]:9093"
- name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
value: "false"
- name: KAFKA_DEFAULT_REPLICATION_FACTOR
value: "1"
- name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
</code></pre>
<p>Headless service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: kafka-svc
labels:
app: kafka-cluster
spec:
clusterIP: None
ports:
- name: '9092'
port: 9092
protocol: TCP
targetPort: 9092
selector:
app: kafka-cluster
</code></pre>
| <p>Finally, I deployed Kafka in KRaft mode with a replication factor of 3 in Kubernetes. I used the guidelines from this <a href="https://learnk8s.io/kafka-ha-kubernetes#deploying-a-3-node-kafka-cluster-on-kubernetes" rel="nofollow noreferrer">article</a>, which has a very comprehensive description of how this setup works. I went through the <strong>doughgle/kafka-kraft</strong> image on Docker Hub, and there is a link to their GitHub repo where you can find a script:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
NODE_ID=${HOSTNAME:6}
LISTENERS="PLAINTEXT://:9092,CONTROLLER://:9093"
ADVERTISED_LISTENERS="PLAINTEXT://kafka-$NODE_ID.$SERVICE.$NAMESPACE.svc.cluster.local:9092"
CONTROLLER_QUORUM_VOTERS=""
for i in $( seq 0 $REPLICAS); do
if [[ $i != $REPLICAS ]]; then
CONTROLLER_QUORUM_VOTERS="$CONTROLLER_QUORUM_VOTERS$i@kafka-$i.$SERVICE.$NAMESPACE.svc.cluster.local:9093,"
else
CONTROLLER_QUORUM_VOTERS=${CONTROLLER_QUORUM_VOTERS::-1}
fi
done
mkdir -p $SHARE_DIR/$NODE_ID
if [[ ! -f "$SHARE_DIR/cluster_id" && "$NODE_ID" = "0" ]]; then
CLUSTER_ID=$(kafka-storage.sh random-uuid)
echo $CLUSTER_ID > $SHARE_DIR/cluster_id
else
CLUSTER_ID=$(cat $SHARE_DIR/cluster_id)
fi
sed -e "s+^node.id=.*+node.id=$NODE_ID+" \
-e "s+^controller.quorum.voters=.*+controller.quorum.voters=$CONTROLLER_QUORUM_VOTERS+" \
-e "s+^listeners=.*+listeners=$LISTENERS+" \
-e "s+^advertised.listeners=.*+advertised.listeners=$ADVERTISED_LISTENERS+" \
-e "s+^log.dirs=.*+log.dirs=$SHARE_DIR/$NODE_ID+" \
/opt/kafka/config/kraft/server.properties > server.properties.updated \
&& mv server.properties.updated /opt/kafka/config/kraft/server.properties
kafka-storage.sh format -t $CLUSTER_ID -c /opt/kafka/config/kraft/server.properties
exec kafka-server-start.sh /opt/kafka/config/kraft/server.properties
</code></pre>
<p>This script is necessary to set the proper configuration for each pod/broker individually.</p>
<p>Then I built my own image with the latest version of Kafka, Scala and openjdk 17:</p>
<pre><code>FROM openjdk:17-bullseye
ENV KAFKA_VERSION=3.3.1
ENV SCALA_VERSION=2.13
ENV KAFKA_HOME=/opt/kafka
ENV PATH=${PATH}:${KAFKA_HOME}/bin
LABEL name="kafka" version=${KAFKA_VERSION}
RUN wget -O /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz https://downloads.apache.org/kafka/${KAFKA_VERSION}/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz \
&& tar xfz /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz -C /opt \
&& rm /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz \
&& ln -s /opt/kafka_${SCALA_VERSION}-${KAFKA_VERSION} ${KAFKA_HOME} \
&& rm -rf /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz
COPY ./entrypoint.sh /
RUN ["chmod", "+x", "/entrypoint.sh"]
ENTRYPOINT ["/entrypoint.sh"]
</code></pre>
<p>and here is the Kubernetes configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: kafka-kraft
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: kafka-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: '/path/to/dir'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: kafka-pv-claim
namespace: kafka-kraft
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
---
apiVersion: v1
kind: Service
metadata:
name: kafka-svc
labels:
app: kafka-app
namespace: kafka-kraft
spec:
clusterIP: None
ports:
- name: '9092'
port: 9092
protocol: TCP
targetPort: 9092
selector:
app: kafka-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: kafka
labels:
app: kafka-app
namespace: kafka-kraft
spec:
serviceName: kafka-svc
replicas: 3
selector:
matchLabels:
app: kafka-app
template:
metadata:
labels:
app: kafka-app
spec:
volumes:
- name: kafka-storage
persistentVolumeClaim:
claimName: kafka-pv-claim
containers:
- name: kafka-container
image: me/kafka-kraft
ports:
- containerPort: 9092
- containerPort: 9093
env:
- name: REPLICAS
value: '3'
- name: SERVICE
value: kafka-svc
- name: NAMESPACE
value: kafka-kraft
- name: SHARE_DIR
value: /mnt/kafka
- name: CLUSTER_ID
value: oh-sxaDRTcyAr6pFRbXyzA
- name: DEFAULT_REPLICATION_FACTOR
value: '3'
- name: DEFAULT_MIN_INSYNC_REPLICAS
value: '2'
volumeMounts:
- name: kafka-storage
mountPath: /mnt/kafka
</code></pre>
<p>I am not 100% sure whether this setup behaves exactly like a stable ZooKeeper-based setup, but it is currently sufficient for my testing phase.</p>
<p><strong>UPDATE:</strong>
Kafka KRaft is production-ready as of release 3.3.1.</p>
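<p>To sanity-check the replication factor, you can exec into one of the brokers and create and describe a test topic (the topic name and bootstrap address below are just examples):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it kafka-0 -n kafka-kraft -- bash
kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic replication-test --partitions 3 --replication-factor 3
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic replication-test
# each partition should report 3 replicas and 3 brokers in the ISR
</code></pre>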
|
<p>In the namespace, I have multiple applications deployed. I would like to roll out deployments based on a selector. Can someone please share how to achieve this?</p>
<p>Thanks</p>
| <p>You can achieve that by:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl rollout status deployment --selector="key=value"
</code></pre>
<p>However, this argument was only added in kubectl <a href="https://github.com/kubernetes/kubernetes/pull/99758" rel="nofollow noreferrer">v1.24</a>, so if you have a lower version you will need to upgrade.</p>
|
<p>In kubernetes (I am using minikube) I have deployed the following deployment using <code>kubectl apply -f nginx-deployment</code>:</p>
<pre><code># nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p>I get <code>deployment.apps/nginx-deployment created</code> as an output, and when I run <code>kubectl get deployment</code> I get:</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 22s
</code></pre>
<p>I have also deployed the following service file using <code>kubectl apply -f nginx-service.yml</code> command</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: NodePort
selector:
app: nginx
ports:
- name: "http"
port: 80
targetPort: 80
nodePort: 30080
</code></pre>
<p>The output is <code>service/nginx-service created</code> and the output of <code>kubectl get service</code> is:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 127d
nginx-service NodePort 10.99.253.196 <none> 80:30080/TCP 75s
</code></pre>
<p>However, when I try to access the app by entering <code>10.99.253.196</code> into the browser, it doesn't load and when I try localhost:30080 it says <code>Unable to connect</code>. Could someone help me to understand why this is happening/provide further directions for troubleshooting?</p>
| <p>Since you are using minikube you might need to run <code>minikube service nginx-service --url</code>, this will create a tunnel to the cluster and expose the service.</p>
|
<p>I have installed k8s 1.24 version and containerd (containerd://1.5.9) is the CR for my setup (ubuntu 20.04).</p>
<p>I have also installed docker on my VM and have added my private repository under /etc/docker/daemon.json with the following changes:</p>
<pre><code>{ "insecure-registries" : ["myPvtRepo.com:5028"] }
</code></pre>
<p>When I am running <code>docker pull myPvtRepo:123/image</code> after login to my pvt repo by using <code>docker login myPvtRepo:123</code> command, I am able to pull the images while running the same command with <code>crictl pull myPvtRepo:123/image</code>, I am facing:</p>
<blockquote>
<p>E0819 06:49:01.200489 162610 remote_image.go:218] "PullImage from
image service failed" err="rpc error: code = Unknown desc = failed to
pull and unpack image "myPvtRepo.com:5028/centos:latest": failed to
resolve reference "myPvtRepo.com:5028/centos:latest": failed to do
request: Head <a href="https://myPvtRepo.com::5028/v2/centos/manifests/latest" rel="nofollow noreferrer">https://myPvtRepo.com::5028/v2/centos/manifests/latest</a>:
x509: certificate signed by unknown authority"
image="myPvtRepo.com::5028/centos:latest" FATA[0000] pulling image:
rpc error: code = Unknown desc = failed to pull and unpack image
"myPvtRepo.com::5028/centos:latest": failed to resolve reference
"myPvtRepo.com:5028/centos:latest": failed to do request: Head
<a href="https://myPvtRepo.com::5028/v2/centos/manifests/latest" rel="nofollow noreferrer">https://myPvtRepo.com::5028/v2/centos/manifests/latest</a>: x509:
certificate signed by unknown authority</p>
</blockquote>
<p>FYI, I have modified /etc/containerd/config.toml with below content.</p>
<pre><code>version = 2
[plugin."io.containerd.grpc.v1.cri".registry.configs."myPvtRepo.com:5028".tls]
insecure_skip_verify = true
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://myPvtRepo.com:5028", "https://myPvtRepo.com:5038", "https://myPvtRepo.com:5037",
"https://myPvtRepo.com:5039"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."IP:5000"]
endpoint = ["http://IP:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."IP:5000"]
endpoint = ["http://IP:5000"]
</code></pre>
<p>I have also modified containerd's endpoint to point to containerd's sock.</p>
<p>Can you please help me understand why I am still getting this issue, and how to fix it, even after setting <code>insecure_skip_verify = true</code> for my private repository and restarting the containerd service?</p>
| <p>I got a solution:</p>
<pre><code>cd /usr/local/share/ca-certificates/
curl -L --remote-name http://your-artifacts.com/xyz-bundle.crt
/usr/sbin/update-ca-certificates
</code></pre>
<p>This one work for me.</p>
<p>Also make sure to update your endpoints under /etc/crictl.yaml</p>
<pre><code>runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: ""
timeout: 0
debug: false
pull-image-on-create: false
disable-pull-on-run: false
</code></pre>
|
<p>I am using terraform to deploy a kube cluster to Google Kubernetes Engine.</p>
<p>Here is my ingress config - both http and https are working but I want http to auto redirect to https</p>
<pre><code>resource "kubernetes_ingress_v1" "ingress" {
wait_for_load_balancer = true
metadata {
name = "ingress"
}
spec {
default_backend {
service {
name = kubernetes_service.frontend_service.metadata[0].name
port {
number = 80
}
}
}
rule {
http {
path {
backend {
service {
name = kubernetes_service.api_service.metadata[0].name
port {
number = 80
}
}
}
path = "/api/*"
}
path {
backend {
service {
name = kubernetes_service.api_service.metadata[0].name
port {
number = 80
}
}
}
path = "/api"
}
}
}
tls {
secret_name = "tls-secret"
}
}
depends_on = [kubernetes_secret_v1.tls-secret, kubernetes_service.frontend_service, kubernetes_service.api_service]
}
</code></pre>
<p>How can I configure the ingress to auto redirect from http to https?</p>
| <p>The following worked for me - I got my hints from <a href="https://github.com/hashicorp/terraform-provider-kubernetes/issues/1326#issuecomment-910374103" rel="nofollow noreferrer">https://github.com/hashicorp/terraform-provider-kubernetes/issues/1326#issuecomment-910374103</a></p>
<pre><code>
resource "kubectl_manifest" "app-frontend-config" {
wait_for_rollout = true
yaml_body = yamlencode({
apiVersion = "networking.gke.io/v1beta1"
kind = "FrontendConfig"
metadata = {
name = "ingress-fc"
}
spec = {
redirectToHttps = {
enabled = true
}
}
})
}
resource "kubernetes_ingress_v1" "ingress" {
wait_for_load_balancer = true
metadata {
name = "ingress"
annotations = {
"networking.gke.io/v1beta1.FrontendConfig" = kubectl_manifest.app-frontend-config.name
}
}
spec {
default_backend {
service {
name = kubernetes_service.frontend_service.metadata[0].name
port {
number = 80
}
}
}
rule {
http {
path {
backend {
service {
name = kubernetes_service.api_service.metadata[0].name
port {
number = 80
}
}
}
path = "/api/*"
}
path {
backend {
service {
name = kubernetes_service.api_service.metadata[0].name
port {
number = 80
}
}
}
path = "/api"
}
}
}
tls {
secret_name = "tls-secret"
}
}
depends_on = [kubernetes_secret_v1.tls-secret, kubernetes_service.frontend_service, kubernetes_service.api_service]
}
</code></pre>
<p>You also need to declare an additional provider in your <code>terraform</code> block's <code>required_providers</code>:</p>
<pre><code>
kubectl = {
source = "gavinbunney/kubectl"
version = ">= 1.14.0"
}
</code></pre>
<p>Do not forget to initialise the kubectl provider</p>
<pre><code>
provider "kubectl" {
host = "https://${google_container_cluster.primary.endpoint}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
load_config_file = false
}
</code></pre>
|
<p>I am using celery with a fastAPI.</p>
<p>I am getting <strong>Can't decode message body: ContentDisallowed('Refusing to deserialize untrusted content of type json (application/json)')</strong> while running in Docker. When running the same on a local machine without Docker, there is no issue.</p>
<p>The configuration for the same is as below.</p>
<pre><code>celery_app = Celery('cda-celery-tasks',
broker=CFG.BROKER_URL,
backend=CFG.BACKEND_URL,
include=['src.tasks.tasks']
)
celery_app.conf.task_serializer = 'pickle'
celery_app.conf.result_serializer = 'pickle'
celery_app.conf.accept_content = ['pickle']
celery_app.conf.enable_utc = True
</code></pre>
<p>While Running in docker I am getting the error continuously</p>
<pre><code>FROM python:3.8
WORKDIR /app
COPY . .
RUN pip3 install poetry
ENV PATH="/root/.poetry/bin:$PATH"
RUN poetry install
</code></pre>
<p>the celery is started using the following command from kubernetes.</p>
<p><code>poetry run celery -A src.infrastructure.celery_application worker --loglevel=INFO --concurrency 2</code></p>
<p>While running I am getting the error continuously</p>
<p>Can't decode message body: ContentDisallowed('Refusing to deserialize untrusted content of type json (application/json)')</p>
<pre><code>body: '{"method": "enable_events", "arguments": {}, "destination": null, "pattern": null, "matcher": null}' (99b)
Traceback (most recent call last):
File "/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/messaging.py", line 620, in _receive_callback
decoded = None if on_m else message.decode()
File "/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/message.py", line 194, in decode
self._decoded_cache = self._decode()
File "/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/message.py", line 198, in _decode
return loads(self.body, self.content_type,
File "/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/serialization.py", line 242, in loads
raise self._for_untrusted_content(content_type, 'untrusted')
kombu.exceptions.ContentDisallowed: Refusing to deserialize untrusted content of type json (application/json)
</code></pre>
<p>Could someone please tell me the possible cause and solution to manage the same? If I've missed anything, over- or under-emphasized a specific point, please let me know in the comments. Thank you so much in advance for your time.</p>
| <p>Configuring the celery_app with the accept_content type seems to fix the issue:</p>
<pre><code>celery_app.conf.accept_content = ['application/json', 'application/x-python-serialize', 'pickle']
</code></pre>
|
<p>I have some amount of traffic that can boost the CPU usage up to 180%. I tried using a single pod, which works, but the response was extremely slow. When I configured my HPA with cpu=80%, min=1 and max={2 or more}, I hit connection refused while the HPA was creating more pods. When I put a larger value for min (i.e. min = 3), the connection-refused errors went away, but then there are too many idle pods when traffic is low. Is there any way to stop putting a pod online until it has completely started?</p>
| <blockquote>
<p>I hit connection refused when HPA was creating more pods</p>
</blockquote>
<p>Kubernetes uses the readinessProbe to determine whether to route clients to a Pod. If the readinessProbe for a Pod is not successful, then any Service whose selectors match that Pod will not take it into consideration.</p>
<p>If there is no readinessProbe defined, or if it was misconfigured, Pods that are still starting up may end up serving client requests. Connection refused could suggest there was no process listening yet for incoming connections.</p>
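<p>As a rough sketch, a readinessProbe on the container could look like this (the path and port are assumptions — use whatever health endpoint your application actually exposes):</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: app
  image: my-app:latest          # placeholder image
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz            # assumed health endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 3
</code></pre>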
<p>Please share your deployment/statefulset/..., if you need further assistance setting this up.</p>
|
<p>I have read many links similar to my issue, but none of them were helping me to resolve the issue.</p>
<p><strong>Similar Links</strong>:</p>
<ol>
<li><a href="https://github.com/containerd/containerd/issues/7219" rel="noreferrer">Failed to exec into the container due to permission issue after executing 'systemctl daemon-reload'</a></li>
<li><a href="https://github.com/opencontainers/runc/issues/3551" rel="noreferrer">OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown</a></li>
<li><a href="https://stackoverflow.com/questions/73379718/ci-runtime-exec-failed-exec-failed-unable-to-start-container-process-open-de">CI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown</a></li>
<li><a href="https://github.com/moby/moby/issues/43969" rel="noreferrer">OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown</a></li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=277995" rel="noreferrer">Fail to execute docker exec</a></li>
<li><a href="https://github.com/docker/for-linux/issues/246" rel="noreferrer">OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "open /proc/self/fd: no such file or directory": unknown</a></li>
</ol>
<p><strong>Problem Description</strong>:</p>
<p>I have created a new Kubernetes cluster using <code>Kubespray</code>. When I wanted to execute some commands in one of the containers, I faced the following error:</p>
<h6>Executed Command</h6>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it -n rook-ceph rook-ceph-tools-68d847b88d-7kw2v -- sh
</code></pre>
<h6>Error:</h6>
<blockquote>
<p>OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/1: operation not permitted: unknown
command terminated with exit code 126</p>
</blockquote>
<p>I have also logged in to the node, which runs the pod, and try executing the container using <code>docker exec</code> command, but the error was not changed.</p>
<p><strong>Workarounds</strong>:</p>
<ul>
<li><p>As I have found, the error code (126) implies that the permissions are insufficient, but I haven't faced this kind of error (like executing <code>sh</code>) in Docker or Kubernetes.</p>
</li>
<li><p>I have also checked whether <code>SELinux</code> is enabled or not (as it has been said in the 3rd link).</p>
<pre class="lang-bash prettyprint-override"><code>apt install policycoreutils
sestatus
# Output
SELinux status: disabled
</code></pre>
</li>
<li><p>In the 5th link, it was said to check whether you have updated the kernel, and I didn't upgrade anything on the nodes.</p>
<pre class="lang-bash prettyprint-override"><code>id; stat /dev/pts/0
# output
uid=0(root) gid=0(root) groups=0(root)
File: /dev/pts/0
Size: 0 Blocks: 0 IO Block: 1024 character special file
Device: 18h/24d Inode: 3 Links: 1 Device type: 88,0
Access: (0600/crw-------) Uid: ( 0/ root) Gid: ( 5/ tty)
Access: 2022-08-21 12:01:25.409456443 +0000
Modify: 2022-08-21 12:01:25.409456443 +0000
Change: 2022-08-21 11:54:47.474457646 +0000
Birth: -
</code></pre>
</li>
<li><p>I also tried <code>/bin/sh</code> and <code>/bin/bash</code> instead of <code>sh</code>, but neither worked and the same error occurred.</p>
</li>
</ul>
<p>Can anyone help me to find the root cause of this problem and then solve it?</p>
| <p>This issue may relate to docker, first drain your node.</p>
<pre><code>kubectl drain <node-name>
</code></pre>
<p>Second, SSH to the node and restart docker service.</p>
<pre><code>systemctl restart docker.service
</code></pre>
<p>At the end try to execute your command.</p>
|
<p>The grafana helm chart spawns a service on a Classic Load Balancer. I have the AWS load balancer webhook installed, and I'd like to overwrite the annotations on the Grafana service. I'm attempting the following:</p>
<pre><code>helm install grafana grafana/grafana \
--namespace grafana \
--set persistence.storageClassName="gp2" \
--set persistence.enabled=true \
--set adminPassword='abc' \
--values grafana.yaml \
--set service.type=LoadBalancer \
--set nodeSelector.app=prometheus \
--set nodeSelector.k8s-app=metrics-server \
--set service.annotations."service\.beta.kubernetes\.io/aws-load-balancer-nlb-target-type"=ip \
--set service.annotations."service\.beta.kubernetes\.io/aws-load-balancer-type"=external
</code></pre>
<p>but, after trying multiple permutations, I continue to get:</p>
<pre><code>Error: INSTALLATION FAILED: YAML parse error on grafana/templates/service.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal object into Go struct field .metadata.annotations of type string
</code></pre>
<p>What is the correct way of doing this?</p>
| <p>there is an issue in the annotation, you are missing escape character for <code>beta\.kubernetes</code></p>
<p>try this and it should work.</p>
<pre><code>--set service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-nlb-target-type"=ip \
--set service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"=external
</code></pre>
|
<p>I am running 4 replicas of the <em>bf-v</em> instance. I am using <strong>ClientIP</strong> as a sessionAffinity.</p>
<p>I want to distribute requests based on the client IP address, but also distribute client IPs evenly (round-robin based) across replicas, so that pods end up with the same number of clients. Is there any way to achieve this kind of distribution? (The default round-robin gets affected by session affinity.)</p>
<p>Thanks :)</p>
<p>svc.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: bf-v
name: bf-v
spec:
ports:
- port: 80
type: LoadBalancer
selector:
app: bf-v
sessionAffinity: "ClientIP"
</code></pre>
| <p>Two options:</p>
<p>1- Use <strong>IPVS</strong> mode with <strong>ipvs-scheduler=sh</strong> (sh -> source hashing)</p>
<p>2- Use <strong>ClientIP</strong> affinity in your svc.yaml with <strong>ipvs-scheduler=rr</strong> (rr -> round robin)</p>
|
<p>I am confused about <strong>Nginx Ingress</strong> vs an <strong>Nginx deployment</strong> (container) in Kubernetes,
since both control incoming requests to the cluster.
Say I deploy a web app container and an nginx container in one pod and everything works perfectly. If I then deploy another application and use Nginx Ingress to route incoming requests,
<strong>who controls the incoming requests</strong>: Nginx Ingress or that nginx container?
<strong>Thanks</strong></p>
| <p>Let's abstract ourselves from nginx. We should distinguish the webserver running alongside an application, from the reverse proxy routing client requests to that application.</p>
<p>In Kubernetes, you may deploy applications based on some Nginx, lighttpd, Apache, ... webserver. Sometimes complex configurations routing clients to different bits composing your application (eg: nodejs backend for an API, static assets, php/smarty frontend ...).</p>
<p>While most Kubernetes clusters would come with an "Ingress Controller". A Controller in Kubernetes refers to some software integrating with your cluster API. An Ingress Controller watches for "Ingress" objects, and configures itself proxying client requests to "Services" within your cluster.</p>
<p>Answering "who controls incoming requests", then: a little bit of both. Your Ingress Controller is the proxy exposed to clients connecting to an application in your cluster. The webserver running in your application deployment serves requests proxied by your ingress controller.</p>
<p>And why would we do this: consider Kubernetes comes with an SDN. Services and Pods in your cluster are usually not reachable from clients that would not be part of your cluster network. Ingress controllers is a convenient way to allow end-users of a cluster to expose their own applications, in a somewhat-generic way, managing their own Ingresses. While cluster administrators would make sure traffic can reach your application, setting up the actual Ingress Controller.</p>
|
<p>I'm attempting to configure AKS, and I have the below setup</p>
<p><a href="https://i.stack.imgur.com/n6FfO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n6FfO.png" alt="enter image description here" /></a></p>
<p>I want to enable HTTPS between Nginx Kubernetes Ingress Controller & Asp.Net Core 6.0 WebAPI PODs, like</p>
<p><a href="https://i.stack.imgur.com/XGalr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XGalr.png" alt="enter image description here" /></a></p>
<p>How do I setup this? Where do I store the WebAPI SSL certificate?</p>
| <p>Reference <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="nofollow noreferrer">documentation for annotation</a> to set the ingress backend to <code>HTTPS</code>:</p>
<pre><code>nginx.ingress.kubernetes.io/backend-protocol: HTTPS
</code></pre>
<p>Follow the guidance <a href="https://learn.microsoft.com/en-us/aspnet/core/security/docker-https?view=aspnetcore-6.0" rel="nofollow noreferrer">here</a> to setup SSL certs for your WebAPI pods.</p>
<p>The certs can be stored in kubernetes generic <code>secret</code> and can be mounted onto the pods as <code>volumes</code>. In production, the AKS secret storage could be <a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver" rel="nofollow noreferrer">backed by Azure KeyVault</a>, so the cert would be really stored in the KeyVault.</p>
<p>For your test environment, here is how you create secret:</p>
<pre><code>kubectl create secret generic webapi-cert-secret
--from-file=cert=yourcert.pfx
--from-literal=pass='yourcertpasswd'
</code></pre>
<p>Then mount into your pod/deployment definition (truncated for brevity):</p>
<pre><code> env:
- name: Kestrel__Certificates__Default__Path
value: /certs/aspnet-cert.pfx
- name: Kestrel__Certificates__Default__Password
valueFrom:
secretKeyRef:
name: webapi-cert-secret
key: pass
volumeMounts:
- name: certsvolume
mountPath: /certs/aspnet-cert.pfx
subPath: aspnet-cert.pfx
readOnly: true
volumes:
- name: certsvolume
secret:
secretName: webapi-cert-secret
items:
- key: cert
path: aspnet-cert.pfx
</code></pre>
|
<p>In a nutshell, most of our apps are configured with the following <code>strategy</code> in the Deployment - </p>
<pre><code> strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
</code></pre>
<p>The Horizonatal Pod Autoscaler is configured as so </p>
<pre><code>spec:
maxReplicas: 10
minReplicas: 2
</code></pre>
<p>Now when our application was redeployed, instead of running a rolling update, it instantly terminated 8 of our pods and dropped the number of pods to <code>2</code> which is the min number of replicas available. This happened in a fraction of a second as you can see here.</p>
<p><a href="https://i.stack.imgur.com/V7AVN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/V7AVN.png" alt="enter image description here"></a></p>
<p>Here is the output of <code>kubectl get hpa</code> - </p>
<p><a href="https://i.stack.imgur.com/ehlyV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ehlyV.png" alt="enter image description here"></a></p>
<p>As <code>maxUnavailable</code> is 25%, shouldn't only about 2-3 pods go down at max ? Why did so many pods crash at once ? It seems as though rolling update is useless if it works this way.</p>
<p>What am I missing ?</p>
| <p>In our case we added the <code>replicas</code> field a while ago and forgot to remove it when we added the HPA. The HPA does not play nice with the <code>replicas</code> field during deployments, so if you have an HPA, remove the <code>replicas</code> field. See <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#migrating-deployments-and-statefulsets-to-horizontal-autoscaling" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#migrating-deployments-and-statefulsets-to-horizontal-autoscaling</a></p>
<blockquote>
<p>When an HPA is enabled, it is recommended that the value of spec.replicas of the Deployment and / or StatefulSet be removed from their manifest(s). If this isn't done, any time a change to that object is applied, for example via kubectl apply -f deployment.yaml, this will instruct Kubernetes to scale the current number of Pods to the value of the spec.replicas key. This may not be desired and could be troublesome when an HPA is active.</p>
</blockquote>
<blockquote>
<p>Keep in mind that the removal of spec.replicas may incur a one-time degradation of Pod counts as the default value of this key is 1 (reference Deployment Replicas). Upon the update, all Pods except 1 will begin their termination procedures. Any deployment application afterwards will behave as normal and respect a rolling update configuration as desired.</p>
</blockquote>
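<p>In practice that means leaving <code>spec.replicas</code> out of the Deployment manifest entirely and letting the HPA own the count — a sketch (names and numbers are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  # no spec.replicas here - the HPA manages the pod count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest   # placeholder image
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
</code></pre>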
|
<p>In Azure Kubernetes I want to have a pod with Jenkins in the default namespace that needs to read a secret from my application workspace.</p>
<p>When I tried I get the next error:</p>
<pre><code>Error from server (Forbidden): secrets "myapp-mongodb" is forbidden: User "system:serviceaccount:default:jenkinspod" cannot get resource "secrets" in API group "" in the namespace "myapp"
</code></pre>
<p>How can I give this Jenkins pod access to read secrets in the 'myapp' namespace?</p>
| <p><code>secret</code> is a namespaced resource and can be accessed via proper rbac permissions. However any improper rbac permissions may lead to leakage.</p>
<p>You must <code>role bind</code> the pod's associated service account. Here is a complete example. I have created a new service account for role binding in this example. However, you can use the default <code>service account</code> if you want.</p>
<p>step-1: create a namespace called <code>demo-namespace</code></p>
<pre><code>kubectl create ns demo-namespace
</code></pre>
<p>step-2: create a secret in <code>demo-namespace</code>:</p>
<pre><code>kubectl create secret generic other-secret -n demo-namespace --from-literal foo=bar
secret/other-secret created
</code></pre>
<p>step-3: Create a service account (<code>my-custom-sa</code>) in the <code>default</code> namespace.</p>
<pre><code>kubectl create sa my-custom-sa
</code></pre>
<p>step-4: Validate that, by default, the service account you created in the last step has no access to the secrets present in <code>demo-namespace</code>.</p>
<pre><code>kubectl auth can-i get secret -n demo-namespace --as system:serviceaccount:default:my-custom-sa
no
</code></pre>
<p>step-5: Create a cluster role with permissions to <code>get</code> and <code>list</code> secrets in the <code>demo-namespace</code> namespace.</p>
<pre><code>kubectl create clusterrole role-for-other-user --verb get,list --resource secret
clusterrole.rbac.authorization.k8s.io/role-for-other-user created
</code></pre>
<p>step-6: Create a rolebinding to bind the cluster role created in the last step.</p>
<pre><code> kubectl create rolebinding role-for-other-user -n demo-namespace --serviceaccount default:my-custom-sa --clusterrole role-for-other-user
rolebinding.rbac.authorization.k8s.io/role-for-other-user created
</code></pre>
<p>step-7: Validate that the service account in the default namespace now has access to the secrets of <code>demo-namespace</code> (note the difference from step 4).</p>
<pre><code>kubectl auth can-i get secret -n demo-namespace --as system:serviceaccount:default:my-custom-sa
yes
</code></pre>
<p>step-8: Create a pod in the default namespace and mount the service account you created earlier.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: my-pod
name: my-pod
spec:
serviceAccountName: my-custom-sa
containers:
- command:
- sleep
- infinity
image: bitnami/kubectl
name: my-pod
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
</code></pre>
<p>step-9: Validate that you can read the secret of <code>demo-namespace</code> from the pod in the default namespace.</p>
<pre><code> curl -sSk -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/demo-namespace/secrets
{
"kind": "SecretList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "668709"
},
"items": [
{
"metadata": {
"name": "other-secret",
"namespace": "demo-namespace",
"uid": "5b3b9dba-be5d-48cc-ab16-4e0ceb3d1d72",
"resourceVersion": "662043",
"creationTimestamp": "2022-08-19T14:51:15Z",
"managedFields": [
{
"manager": "kubectl-create",
"operation": "Update",
"apiVersion": "v1",
"time": "2022-08-19T14:51:15Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:data": {
".": {},
"f:foo": {}
},
"f:type": {}
}
}
]
},
"data": {
"foo": "YmFy"
},
"type": "Opaque"
}
]
}
</code></pre>
|
<p>I have deployed an nginx ingress controller in my EKS cluster. I want to add more security to my nginx deployment, i.e. add the <a href="https://content-security-policy.com/examples/nginx/" rel="nofollow noreferrer">content-security-policy</a> header and the headers below:</p>
<pre><code>X-Frame-Options:
Content-Security-Policy:
X-Content-Type-Options:
X-XSS-Protection:
</code></pre>
<p>Is there any document I can follow to do this? Please help.</p>
<p>I added them in the configmap, but it turns out that didn't help either.</p>
<p>Thanks</p>
| <p>You can try this:</p>
<pre><code>ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "X-Frame-Options: Deny";
more_set_headers "X-Xss-Protection: 1; mode=block";
more_set_headers "X-Content-Type-Options: nosniff";
more_clear_headers "Cache-Control";
more_set_headers "Cache-Control: must-revalidate";
proxy_set_header l5d-dst-override
</code></pre>
<p>Here is the list of all <a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md" rel="nofollow noreferrer">nginx-configuration-annotations</a></p>
<p>This fixed my issue; I verified the headers using <a href="https://www.zaproxy.org/" rel="nofollow noreferrer">zaproxy</a>.</p>
|
<p>I am writing an Operator for Kubernetes using the Operator SDK and have a question about logging (I am still rather new to Go).</p>
<p>I have setup a logger in my controller that I have attached a UUID to as a Trace ID</p>
<pre><code>func (r *MyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
myLog = log.FromContext(ctx).WithValues("traceID", uuid.NewUUID())
myCtx := log.IntoContext(ctx, myLog)
</code></pre>
<p>However I have a Library that is setup at the same time as the controllers and passed in via the struct.</p>
<pre><code>type MyReconciler struct {
MyBase
MyClient MyGreatClient
}
</code></pre>
<p>And each method on the client take the ctx as a parameter.</p>
<p>As I want to use the logger in the ctx as it contains my trace ID do I need to add this line to every method in the client or is there a better way to do it?</p>
<pre><code>func (mgc *MyGreatClient) SomethingExists(ctx context.Context, something string) bool {
myGreatClientLog = log.FromContext(ctx).WithName("MyGreatClient")
myGreatClientLog.Info("Checking Something Exists", "Something", something)
</code></pre>
<p>Is there a better way to achieve what I am wanting to do? It doesn't appear that <code>"sigs.k8s.io/controller-runtime/pkg/log"</code> (of type <code>logr.Logger</code>) supports just logging a context like zap.</p>
<pre><code> myappcontex.Logger(ctx).Info("did something awesome")
</code></pre>
<p>Any help on doing this the idiomatic way is appreciated</p>
| <p>Not at all sure of this answer and I too wonder why logging and logging sinks are built so complex (refer <a href="https://dave.cheney.net/2015/11/05/lets-talk-about-logging" rel="nofollow noreferrer">https://dave.cheney.net/2015/11/05/lets-talk-about-logging</a> which I found reffered in logr <a href="https://pkg.go.dev/github.com/go-logr/[email protected]" rel="nofollow noreferrer">https://pkg.go.dev/github.com/go-logr/[email protected]</a> !);</p>
<p>This is how I logged in a generated <code>kubebuilder</code> operator controller</p>
<pre><code>log.Log.Info("Pod Image is set", "PodImageName", testOperator.Spec.PodImage)
</code></pre>
<p>Output:</p>
<pre><code>1.6611775636957748e+09 INFO Pod Image is set {"PodImageName": "alexcpn/run_server:1.2"}
</code></pre>
<p>and with this</p>
<pre><code>log.FromContext(ctx).Info("Pod Image is ", "PodImageName", testOperator.Spec.PodImage)
</code></pre>
<p>Output is</p>
<pre><code>1.6611801111484244e+09 INFO Pod Image is {"controller": "testoperartor", "controllerGroup": "grpcapp.mytest.io", "controllerKind": "Testoperartor", "testoperartor": {"name":"testoperartor-sample","namespace":"default"}, "namespace": "default", "name": "testoperartor-sample", "reconcileID": "ffa3a957-c14f-4ec9-8cf9-767c38fc26ee", "PodImageName": "alexcpn/run_server:1.2"}
</code></pre>
<p>The controller uses Golang Logr</p>
<p><code>All logging in controller-runtime is structured, using a set of interfaces defined by a package called logr (https://pkg.go.dev/github.com/go-logr/logr). The sub-package zap provides helpers for setting up logr backed by Zap (go.uber.org/zap) </code> <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/log#DelegatingLogSink" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/log#DelegatingLogSink</a></p>
<p>And I can see that it sets Zap logging in main</p>
<pre><code>ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
</code></pre>
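<p>To avoid repeating the <code>log.FromContext(...)</code> line in every client method, one option is a tiny helper inside the client package — a minimal sketch only (the package and function names here are assumptions, not from the question):</p>
<pre><code>// logger.go — hypothetical helper for MyGreatClient.
// log.FromContext returns the logr.Logger the reconciler stored in the context
// (including the traceID value), so each method only needs one short call.
package mygreatclient

import (
	"context"

	"github.com/go-logr/logr"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

func loggerFrom(ctx context.Context) logr.Logger {
	return log.FromContext(ctx).WithName("MyGreatClient")
}
</code></pre>
<p>Each client method can then start with something like <code>myLog := loggerFrom(ctx)</code> and the trace ID set by the reconciler is carried along automatically.</p>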
|
<p>While I try to add my k8s cluster running in an Azure VM, it shows an error like:
error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first</p>
<p>Here is the output for my command executed</p>
<pre><code>root@kubeadm-master:~# curl --insecure -sfL https://104.211.32.151:8443/v3/import/lqkbhj6gwg9xcb5j8pnqcmxhtdg6928wmb7fj2n9zv95dbxsjq8vn9.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
secret/cattle-credentials-e558be7 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/cattle-cluster-agent created
daemonset.apps/cattle-node-agent created
error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
</code></pre>
<p>ensure CRDs are installed first</p>
| <p>I was also facing the same issue, so I changed the API version for the <code>cattle-admin-binding</code> from beta to stable as below:</p>
<p>Old value:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
</code></pre>
<p>Changed to:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
</code></pre>
<p>Though I ran into some other issues later, the above error was gone.</p>
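<p>If you want to confirm which RBAC API versions your cluster still serves before editing the manifest, you can check with:</p>
<pre><code>kubectl api-versions | grep rbac.authorization.k8s.io
# rbac.authorization.k8s.io/v1   (v1beta1 was removed in Kubernetes 1.22)
</code></pre>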
|
<p>An action item from the <strong>security scan</strong> is to implement <strong>HSTS</strong> header in ASP.Net Core 6.0 WebAPI.</p>
<p>A WebAPI application is deployed on AKS using Application Gateway Ingress Controller. SSL termination occurs at the Application Gateway. Application Gateway Ingress Controllers and PODs communicate using HTTP.</p>
<p><a href="https://i.stack.imgur.com/mF57L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mF57L.png" alt="enter image description here" /></a></p>
<p>In this case, is it necessary to implement HSTS? In that case, what infrastructure requirements are needed?</p>
| <p>The <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security" rel="nofollow noreferrer">HSTS header</a> is a browser only instruction. It informs browsers that the site should only be accessed using HTTPS, and that any future attempts to access it using HTTP should automatically be converted to HTTPS.</p>
<blockquote>
<p>In this case, is it necessary to implement HSTS?</p>
</blockquote>
<p>If your application hosted in AKS is a web application which will load in browser then, yes. However, as you mentioned, if it is only an API then it does not make much sense.</p>
<p>This is also <a href="https://learn.microsoft.com/en-us/aspnet/core/security/enforcing-ssl?view=aspnetcore-6.0&tabs=visual-studio" rel="nofollow noreferrer">documented on MSDN</a>:</p>
<blockquote>
<p>HSTS is generally a browser only instruction. Other callers, such as
phone or desktop apps, do not obey the instruction. Even within
browsers, a single authenticated call to an API over HTTP has risks on
insecure networks. The secure approach is to configure API projects to
only listen to and respond over HTTPS.</p>
</blockquote>
<p>That said, assuming your application is a web application, to implement it with AGIC, you will have to first configure rewrite ruleset on the app gateway. This can be done from portal or with PowerShell:</p>
<pre><code># Create RuleSet
$responseHeaderConfiguration = New-AzApplicationGatewayRewriteRuleHeaderConfiguration -HeaderName "Strict-Transport-Security" -HeaderValue "max-age=31536000; includeSubDomains; preload"
$actionSet = New-AzApplicationGatewayRewriteRuleActionSet -ResponseHeaderConfiguration $responseHeaderConfiguration
$rewriteRule = New-AzApplicationGatewayRewriteRule -Name HSTSHeader -ActionSet $actionSet
$rewriteRuleSet = New-AzApplicationGatewayRewriteRuleSet -Name SecurityHeadersRuleSet -RewriteRule $rewriteRule
# apply the ruleset to your app gateway
$appgw = Get-AzApplicationGateway -Name "yourgw" -ResourceGroupName "yourgw-rg"
Add-AzApplicationGatewayRewriteRuleSet -ApplicationGateway $appgw -Name $rewriteRuleSet.Name -RewriteRule $rewriteRuleSet.RewriteRules
Set-AzApplicationGateway -ApplicationGateway $appgw
</code></pre>
<p>Next, to map the RuleSet to your ingress path, use the <a href="https://azure.github.io/application-gateway-kubernetes-ingress/annotations/#rewrite-rule-set" rel="nofollow noreferrer">annotation</a> on your ingress definition to reference the Ruleset:</p>
<pre><code>appgw.ingress.kubernetes.io/rewrite-rule-set: SecurityHeadersRuleSet
</code></pre>
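<p>For reference, a hedged sketch of where that annotation sits on an Ingress object (the name, host and service below are placeholders):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/rewrite-rule-set: SecurityHeadersRuleSet   # same name as in the PowerShell above
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
</code></pre>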
|
<p>I have configured an <a href="https://github.com/elastic/cloud-on-k8s/blob/main/config/recipes/beats/filebeat_autodiscover.yaml" rel="nofollow noreferrer">Elastic ECK Beat with autodiscover</a> enabled for all pod logs, but I need to add logs from a specific pod log file too; from this path <code>/var/log/traefik/access.log</code> inside the container. I've tried with module and log config but still nothing works.</p>
<p>The access.log file exists on the pods and contains data.
The filebeat index does not show any data from this log.file.path</p>
<p>Here is the Beat yaml:</p>
<pre><code>---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: filebeat
namespace: elastic
spec:
type: filebeat
version: 8.3.1
elasticsearchRef:
name: elasticsearch
kibanaRef:
name: kibana
config:
filebeat:
autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
hints:
enabled: true
default_config:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
templates:
- condition.contains:
kubernetes.pod.name: traefik
config:
- module: traefik
access:
enabled: true
var.paths: [ "/var/log/traefik/*access.log*" ]
processors:
- add_cloud_metadata: {}
- add_host_metadata: {}
daemonSet:
podTemplate:
spec:
serviceAccountName: filebeat
automountServiceAccountToken: true
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true # Allows to provide richer host metadata
containers:
- name: filebeat
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
volumeMounts:
- name: varlogcontainers
mountPath: /var/log/containers
- name: varlogpods
mountPath: /var/log/pods
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
- name: varlog
mountPath: /var/log
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumes:
- name: varlogcontainers
hostPath:
path: /var/log/containers
- name: varlogpods
hostPath:
path: /var/log/pods
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
namespace: elastic
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
- nodes
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: elastic
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
namespace: elastic
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elastic
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Here is the module loaded from Filebeat Logs:</p>
<pre><code>...
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.337Z","log.logger":"esclientleg","log.origin":{"file.name":"eslegclient/connection.go","file.line":291},"message":"Attempting to connect to Elasticsearch version 8.3.1","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.352Z","log.logger":"modules","log.origin":{"file.name":"fileset/modules.go","file.line":108},"message":"Enabled modules/filesets: traefik (access)","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.353Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/traefik/*access.log*]","service.name":"filebeat","input_id":"fa247382-c065-40ca-974e-4b69f14c3134","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.355Z","log.logger":"modules","log.origin":{"file.name":"fileset/modules.go","file.line":108},"message":"Enabled modules/filesets: traefik (access)","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.355Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/traefik/*access.log*]","service.name":"filebeat","input_id":"6883d753-f149-4a68-9499-fe039e0de899","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.437Z","log.origin":{"file.name":"input/input.go","file.line":134},"message":"input ticker stopped","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.439Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/containers/*9a1680222e867802388f649f0a296e076193242962b28eb7e0e575bf68826d85.log]","service.name":"filebeat","input_id":"3c1fffae-0213-4889-b0e7-5dda489eeb51","ecs.version":"1.6.0"}
...
</code></pre>
| <p>Docker logging is based on the stdout/stderr output of a container. If you only write into a log file inside a container it will never be picked up by Docker logging and can therefore also not be processed by your Filebeat setup.</p>
<p>Instead, ensure that all logs generated by your containers are sent to stdout. In your example that would mean starting the Traefik pod with <code>--accesslogsfile=/dev/stdout</code>, so the access logs go to stdout instead of the log file.</p>
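<p>A minimal sketch of what that could look like in the Traefik container spec — the exact flag name depends on your Traefik version (v1 uses <code>--accessLogsFile</code>, v2 uses <code>--accesslog.filepath</code>), so treat this as an assumption to verify:</p>
<pre><code># hypothetical excerpt of a Traefik v2 container spec
containers:
  - name: traefik
    image: traefik:v2.8
    args:
      - --accesslog=true
      - --accesslog.filepath=/dev/stdout   # access logs to stdout, so Filebeat's container input picks them up
</code></pre>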
|
<p>We're using Gitlab Runner with Kubernetes executor and we were thinking about what I think is currently not possible. We want to assign the Gitlab Runner daemon's pod to a specific node group's worker with instance type X and the jobs' pods to a different node group Y worker nodes as these usually require more computation resources than the Gitlab Runner's pod.</p>
<p>The motivation is to save costs: the node where the GitLab Runner daemon is always running should be a cheap instance type, while the jobs, which usually need more computation capacity, can run on different instance types that are started by the Cluster Autoscaler and destroyed when no jobs are present.</p>
<p>I made an investigation about this feature, and the available way to assign the pods to specific nodes is to use the node selector or node affinity, but the rules included in these two configuration sections are applied to all the pods of the Gitlab Runner, the main pod and the jobs pods. The proposal is to make it possible to apply two separate configurations, one for the Gitlab Runner's pod and one for the jobs' pods.</p>
<p>The current existing config consists of the node selector and nodes/pods affinity, but as I mentioned these apply globally to all the pods and not to specified ones as we want in our case.</p>
<p>Gitlab Runner Kubernetes Executor Config: <a href="https://docs.gitlab.com/runner/executors/kubernetes.html" rel="nofollow noreferrer">https://docs.gitlab.com/runner/executors/kubernetes.html</a></p>
| <p>This problem is solved! After a further investigation I found that Gitlab Runner's Helm chart provide 2 <code>nodeSelector</code> features, to exactly do what I was looking for, 1 for the main pod which represents the Gitlab Runner pod and the other one for the Gitlab Runner's jobs pods. Below I show a sample of the Helm chart in which I set beside each <code>nodeSelector</code> its domain and the pod that it affects.</p>
<p>Note that the first level <code>nodeSelector</code> is the one that affects the main Gitlab Runner pod, and the <code>runners.kubernetes.node_selector</code> is the one that affects the Gitlab Runner's jobs pods.</p>
<pre class="lang-yaml prettyprint-override"><code>gitlabUrl: https://gitlab.com/
...
nodeSelector:
gitlab-runner-label-example: label-values-example-0
...
runnerRegistrationToken: ****
...
runners:
config:
[[runners]]
name = "gitlabRunnerExample"
executor = "kubernetes"
environment = ["FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=true"]
[runners.kubernetes]
...
[runners.kubernetes.node_selector]
"gitlab-runner-label-example" = "label-values-example-1"
[runners.cache]
...
[runners.cache.s3]
...
...
</code></pre>
|
<p>I am applying the app-of-apps with Argo CD on my application deployments, where I have a directory with the applications definitions, and then a directory with resource definitions and a <code>kustomization.yaml</code> file. When a new version is released, all I do is run <code>kustomize set image ...</code> in a pipeline that will issue an autocommit and Argo will pick it up.</p>
<p>I currently have the following structure of files and it is repeated for other environments, like staging and dev.</p>
<pre class="lang-sh prettyprint-override"><code>deployments
├── production
│ ├── app-1
│ │ ├── kustomization.yaml
| | └── deployment.yaml
│ ├── app-2
│ │ ├── kustomization.yaml
| | └── deployment.yaml
└───└── apps
├── app1.yaml
└── app2.yaml
</code></pre>
<p>I have now decided to throw myself into the Helm world and create charts for each application with the required resource definitions. Then, in each environment folder I will keep an appropriate <code>values.yaml</code> file to override the proper values for each environment's application deployment.</p>
<p>I would like to have the same flow as before, where the pipeline updates the new image tag (this time in the <code>values.yaml</code> file), creates the autocommit and Argo will sync it.</p>
<p>Is it possible to somehow do a <code>kustomize set image...</code> in each of the <code>values.yaml</code> file accordingly? Or what would be a smarter approach here?</p>
| <p>In my case, I implemented a simple GitHub action that fixes a YAML file in another repo and then commits it.</p>
<p>I have two kinds of github repositories, the one is for application development and the other for storing k8s manifests which app-of-apps pattern is applied.</p>
<p>There is a github action for CI/CD in my development repository triggered when dev branches are merged to the 'main'.</p>
<p>It builds a new docker image and publishes it to Docker Hub (or AWS ECR) with a version tag, then updates values.yaml with that tag in the k8s manifests repository's helm chart via another GitHub action (<a href="https://github.com/alphaprime-dev/fix-yaml-in-another-repo/blob/main/action.yml" rel="nofollow noreferrer">'fix-yaml-in-another-repo'</a>).</p>
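<p>If you prefer not to write a custom action, a rough sketch of the equivalent pipeline step — assuming <code>yq</code> v4 and a <code>.image.tag</code> key in the chart's values file; the repo URL, chart path and <code>NEW_TAG</code> variable are placeholders to adapt:</p>
<pre><code># clone the manifests repo, bump the tag, commit and push
git clone https://github.com/your-org/k8s-manifests.git
cd k8s-manifests
yq -i ".image.tag = \"${NEW_TAG}\"" charts/my-app/values.yaml
git commit -am "chore: bump my-app image to ${NEW_TAG}"
git push
</code></pre>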
|
<p>I am trying to run a local cluster on Mac with M1 chip using Minikube (Docker driver). I enabled ingress addon in Minikube, I have a separate terminal in which I'm running <code>minikube tunnel</code> and I enabled Minikube dashboard, which I want to expose using Ingress.
This is my configuration file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
spec:
rules:
- host: dashboard.com
http:
paths:
- backend:
service:
name: kubernetes-dashboard
port:
number: 80
pathType: Prefix
path: /
</code></pre>
<p>I also put "dashboard.com" in my /etc/hosts file and it's actually resolving to the right IP, but it's not responding when I put "http://dashboard.com" in a browser or when I try to ping it and I always receive a timeout.</p>
<p>NOTE: when I run <code>minikube tunnel</code> I get</p>
<pre><code>❗ The service/ingress dashboard-ingress requires privileged ports to be exposed: [80 443]
🔑 sudo permission will be asked for it.
</code></pre>
<p>I insert my sudo password and then nothing gets printed afterwards. I'm not sure if this is an issue or the expected behavior.</p>
<p>What am I doing wrong?</p>
| <p>I had the same behavior, and apparently what's needed for <code>minikube tunnel</code> to work is to map "127.0.0.1" in <code>/etc/hosts</code>, instead of the output from <code>minikube ip</code> or the ingress description.
That fixed it for me.</p>
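<p>For completeness, the <code>/etc/hosts</code> entry for the hostname used in the question would then be:</p>
<pre><code>127.0.0.1 dashboard.com
</code></pre>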
|
<p>I'm trying to create a ConfigMap with ArgoCD.</p>
<p>I've created a <code>volumes.yaml</code> file as such</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: persistent-volumes-argo
labels:
grafana_dashboard: "1"
project: "foo"
data:
kubernetes.json: |
{{ .Files.Get "dashboards/persistent-volumes.json" | indent 4 }}
</code></pre>
<p>But ArgoCD doesn't seem to be able to read the data, the way a standard Helm deployment would.</p>
<p>I've tried adding the data directly into the ConfigMap as such</p>
<p>(Data omitted for brevity)</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: persistent-volumes-argo
labels:
grafana_dashboard: "1"
project: "foo"
data:
kubernetes.json: |
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"limit": 100,
"name": "Annotations & Alerts",
"showIn": 0,
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": 13646,
"graphTooltip": 0,
"iteration": 1659421503107,
"links": [],
"panels": [
{
"collapsed": false,
"datasource": null,
"fieldConfig": {
"defaults": {},
"overrides": []
},
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 26,
"panels": [],
"title": "Alerts",
"type": "row"
},
{
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"noValue": "--",
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "semi-dark-red",
"value": null
},
{
"color": "light-green",
"value": -0.0001
},
{
"color": "semi-dark-red",
"value": 0.0001
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 8,
"x": 0,
"y": 1
},
"id": 21,
"options": {
"colorMode": "background",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"text": {},
"textMode": "auto"
},
"pluginVersion": "8.0.3",
"targets": [
{
"expr": "count (max by (persistentvolumeclaim,namespace) (kubelet_volume_stats_used_bytes{namespace=~\"${k8s_namespace}\"} ) and (max by (persistentvolumeclaim,namespace) (kubelet_volume_stats_used_bytes{namespace=~\"${k8s_namespace}\"} )) / (max by (persistentvolumeclaim,namespace) (kubelet_volume_stats_capacity_bytes{namespace=~\"${k8s_namespace}\"} )) >= (${warning_threshold} / 100)) or vector (0)",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "PVCs Above Warning Threshold",
"type": "stat"
},
{
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"decimals": 0,
"mappings": [],
"noValue": "--",
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "semi-dark-red",
"value": null
},
{
"color": "light-green",
"value": -0.0001
},
{
"color": "semi-dark-red",
"value": 0.0001
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 8,
"x": 8,
"y": 1
},
"id": 24,
"options": {
"colorMode": "background",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"text": {},
"textMode": "auto"
},
"pluginVersion": "8.0.3",
"targets": [
{
"expr": "count((kube_persistentvolumeclaim_status_phase{namespace=~\"${k8s_namespace}\",phase=\"Pending\"}==1)) or vector(0)",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "PVCs in Pending State",
"transformations": [
{
"id": "organize",
"options": {}
}
],
"type": "stat"
},
{
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"decimals": 0,
"mappings": [],
"noValue": "--",
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "semi-dark-red",
"value": null
},
{
"color": "light-green",
"value": -0.0001
},
{
"color": "semi-dark-red",
"value": 0.0001
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 8,
"x": 16,
"y": 1
},
"id": 23,
"options": {
"colorMode": "background",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"text": {},
"textMode": "auto"
},
"pluginVersion": "8.0.3",
"targets": [
{
"expr": "count((kube_persistentvolumeclaim_status_phase{namespace=~\"${k8s_namespace}\",phase=\"Lost\"}==1)) or vector(0)",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "PVCs in Lost State",
"transformations": [
{
"id": "organize",
"options": {}
}
],
"type": "stat"
},
{
"collapsed": false,
"datasource": null,
"fieldConfig": {
"defaults": {},
"overrides": []
},
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 5
},
"id": 17,
"panels": [],
"title": "Usage statistics",
"type": "row"
},
{
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": null,
"displayMode": "auto",
"filterable": false
},
"mappings": [],
"noValue": "--",
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "light-green",
"value": null
}
]
},
"unit": "none"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "Used (%)"
},
"properties": [
{
"id": "custom.displayMode",
"value": "gradient-gauge"
},
{
"id": "thresholds",
"value": {
"mode": "absolute",
"steps": [
{
"color": "light-green",
"value": null
},
{
"color": "semi-dark-yellow",
"value": 70
},
{
"color": "dark-red",
"value": 80
}
]
}
},
{
"id": "decimals",
"value": 1
}
]
},
{
"matcher": {
"id": "byName",
"options": "Status"
},
"properties": [
{
"id": "custom.displayMode",
"value": "color-background"
},
{
"id": "mappings",
"value": [
{
"options": {
"0": {
"text": "Bound"
},
"1": {
"text": "Pending"
},
"2": {
"text": "Lost"
}
},
"type": "value"
}
]
},
{
"id": "thresholds",
"value": {
"mode": "absolute",
"steps": [
{
"color": "light-green",
"value": null
},
{
"color": "light-green",
"value": 0
},
{
"color": "semi-dark-orange",
"value": 1
},
{
"color": "semi-dark-red",
"value": 2
}
]
}
},
{
"id": "noValue",
"value": "--"
},
{
"id": "custom.align",
"value": "center"
}
]
},
{
"matcher": {
"id": "byName",
"options": "Namespace"
},
"properties": [
{
"id": "custom.width",
"value": 120
}
]
},
{
"matcher": {
"id": "byName",
"options": "Status"
},
"properties": [
{
"id": "custom.width",
"value": 80
}
]
},
{
"matcher": {
"id": "byName",
"options": "Capacity (GiB)"
},
"properties": [
{
"id": "custom.width",
"value": 120
}
]
},
{
"matcher": {
"id": "byName",
"options": "Used (GiB)"
},
"properties": [
{
"id": "custom.width",
"value": 120
}
]
},
{
"matcher": {
"id": "byName",
"options": "Available (GiB)"
},
"properties": [
{
"id": "custom.width",
"value": 120
}
]
},
{
"matcher": {
"id": "byName",
"options": "StorageClass"
},
"properties": [
{
"id": "custom.width",
"value": 150
}
]
},
{
"matcher": {
"id": "byName",
"options": "PersistentVolumeClaim"
},
"properties": [
{
"id": "custom.width",
"value": 370
}
]
}
]
},
"gridPos": {
"h": 12,
"w": 24,
"x": 0,
"y": 6
},
"id": 29,
"interval": "",
"options": {
"frameIndex": 2,
"showHeader": true,
"sortBy": [
{
"desc": false,
"displayName": "PersistentVolumeClaim"
}
]
},
"pluginVersion": "8.0.3",
"targets": [
{
"expr": " sum by (persistentvolumeclaim,namespace,storageclass,volumename) (kube_persistentvolumeclaim_info{namespace=~\"${k8s_namespace}\"})",
"format": "table",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
},
{
"expr": "sum by (persistentvolumeclaim) (kubelet_volume_stats_capacity_bytes{namespace=~\"${k8s_namespace}\"}/1024/1024/1024)",
"format": "table",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "B"
},
{
"expr": "sum by (persistentvolumeclaim) (kubelet_volume_stats_used_bytes{namespace=~\"${k8s_namespace}\"}/1024/1024/1024)",
"format": "table",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "C"
},
{
"expr": "sum by (persistentvolumeclaim) (kubelet_volume_stats_available_bytes{namespace=~\"${k8s_namespace}\"}/1024/1024/1024)",
"format": "table",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "D"
},
{
"expr": "sum(kube_persistentvolumeclaim_status_phase{namespace=~\"${k8s_namespace}\",phase=~\"(Pending|Lost)\"}) by (persistentvolumeclaim) + sum(kube_persistentvolumeclaim_status_phase{namespace=~\"${k8s_namespace}\",phase=~\"(Lost)\"}) by (persistentvolumeclaim)",
"format": "table",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "E"
},
{
"expr": "sum by (persistentvolumeclaim) (kubelet_volume_stats_used_bytes{namespace=~\"${k8s_namespace}\"}/kubelet_volume_stats_capacity_bytes{namespace=~\"${k8s_namespace}\"} * 100)",
"format": "table",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "F"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Persistent Volume Claim",
"transformations": [
{
"id": "seriesToColumns",
"options": {
"byField": "persistentvolumeclaim"
}
},
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Time 1": true,
"Time 2": true,
"Time 3": true,
"Time 4": true,
"Time 5": true,
"Time 6": true,
"Value #A": true
},
"indexByName": {},
"renameByName": {
"Time 1": "",
"Time 2": "",
"Time 3": "",
"Time 4": "",
"Time 5": "",
"Time 6": "",
"Value #A": "",
"Value #B": "Capacity (GiB)",
"Value #C": "Used (GiB)",
"Value #D": "Available (GiB)",
"Value #E": "Status",
"Value #F": "Used (%)",
"namespace": "Namespace",
"persistentvolumeclaim": "PersistentVolumeClaim",
"storageclass": "StorageClass",
"volumename": "PhysicalVolume"
}
}
}
],
"type": "table"
},
{
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {
"align": null,
"displayMode": "auto",
"filterable": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 24,
"x": 0,
"y": 18
},
"id": 7,
"options": {
"showHeader": true,
"sortBy": [
{
"desc": true,
"displayName": "Status"
}
]
},
"pluginVersion": "8.0.3",
"targets": [
{
"expr": "kube_storageclass_info",
"format": "table",
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Storage Class",
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value": true,
"__name__": true,
"app_kubernetes_io_instance": true,
"app_kubernetes_io_name": true,
"instance": true,
"job": true,
"kubernetes_namespace": true,
"kubernetes_pod_name": true,
"pod_template_hash": true
},
"indexByName": {
"Time": 1,
"Value": 13,
"__name__": 2,
"app_kubernetes_io_instance": 3,
"app_kubernetes_io_name": 4,
"instance": 5,
"job": 6,
"kubernetes_namespace": 7,
"kubernetes_pod_name": 8,
"pod_template_hash": 9,
"provisioner": 10,
"reclaimPolicy": 11,
"storageclass": 0,
"volumeBindingMode": 12
},
"renameByName": {
"provisioner": "Provisioner",
"reclaimPolicy": "ReclaimPolicy",
"storageclass": "StorageClass",
"volumeBindingMode": "VolumeBindingMode"
}
}
},
{
"id": "groupBy",
"options": {
"fields": {
"Provisioner": {
"aggregations": [],
"operation": "groupby"
},
"ReclaimPolicy": {
"aggregations": [],
"operation": "groupby"
},
"StorageClass": {
"aggregations": [],
"operation": "groupby"
},
"VolumeBindingMode": {
"aggregations": [],
"operation": "groupby"
}
}
}
}
],
"type": "table"
},
{
"collapsed": false,
"datasource": null,
"fieldConfig": {
"defaults": {},
"overrides": []
},
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 23
},
"id": 15,
"panels": [],
"title": "Graphical usage data ",
"type": "row"
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 24,
"x": 0,
"y": 24
},
"hiddenSeries": false,
"id": 9,
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": true,
"min": true,
"rightSide": true,
"show": true,
"total": false,
"values": true
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "8.0.3",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "(max by (persistentvolumeclaim,namespace) (kubelet_volume_stats_used_bytes{namespace=~\"${k8s_namespace}\"}))",
"interval": "",
"legendFormat": "{{namespace}} ({{persistentvolumeclaim}})",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "All Running PVCs Used Bytes",
"tooltip": {
"shared": true,
"sort": 2,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "Date & time",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"collapsed": true,
"datasource": null,
"fieldConfig": {
"defaults": {},
"overrides": []
},
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 36
},
"id": 19,
"panels": [
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 24,
"x": 0,
"y": 41
},
"hiddenSeries": false,
"id": 11,
"legend": {
"alignAsTable": true,
"avg": true,
"current": false,
"max": false,
"min": false,
"rightSide": true,
"show": true,
"total": false,
"values": true
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "rate(kubelet_volume_stats_used_bytes{namespace=~\"${k8s_namespace}\"}[1h])",
"instant": false,
"interval": "",
"legendFormat": "{{namespace}} ({{persistentvolumeclaim}})",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Hourly Volume Usage Rate",
"tooltip": {
"shared": true,
"sort": 2,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "binBps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "Date & time",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 24,
"x": 0,
"y": 48
},
"hiddenSeries": false,
"id": 12,
"legend": {
"alignAsTable": true,
"avg": true,
"current": false,
"max": false,
"min": false,
"rightSide": true,
"show": true,
"total": false,
"values": true
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "rate(kubelet_volume_stats_used_bytes{namespace=~\"${k8s_namespace}\"}[1d])",
"interval": "",
"legendFormat": "{{namespace}} ({{persistentvolumeclaim}})",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Daily Volume Usage Rate",
"tooltip": {
"shared": true,
"sort": 2,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "binBps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "Date & time",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 24,
"x": 0,
"y": 55
},
"hiddenSeries": false,
"id": 13,
"legend": {
"alignAsTable": true,
"avg": true,
"current": false,
"max": false,
"min": false,
"rightSide": true,
"show": true,
"total": false,
"values": true
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
}
}
</code></pre>
<p>But this errors with <code>rpc error: code = FailedPrecondition desc = Failed to unmarshal "volumes.yaml": <nil></code></p>
<p>Is there a way to pass in json data when creating a ConfigMap with ArgoCD, either as a template or by dumping the data in the file?</p>
| <p>To create configmap with argocd and helm</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "helm-chart.fullname" . }}-configmap
data:
config.json : |
{{ toJson .Values.configmap | indent 4 }}
</code></pre>
<p>and the value file should be like this, which is our JSON but converted to YAML</p>
<pre><code>configmap:
json:
- rigid
- better for data interchange
yaml:
- slim and flexible
- better for configuration
object:
key: value
array:
- null_value:
- boolean: true
- integer: 1
- alias: &example aliases are like variables
- alias: *example
paragraph: >
Blank lines denote
paragraph breaks
content: |-
Or we
can auto
convert line breaks
to save space
alias: &foo
bar: baz
alias_reuse: *foo
</code></pre>
<p><a href="https://www.json2yaml.com/" rel="nofollow noreferrer">json2yaml</a></p>
<p>A complete demo app can be found <a href="https://github.com/Adiii717/argocd-demo-app" rel="nofollow noreferrer">here</a></p>
<p>A very quick way to create app with the above configmap and demo app</p>
<pre><code>argocd app create demo-app --repo https://github.com/Adiii717/argocd-demo-app.git --path helm-chart --dest-namespace default --dest-server https://kubernetes.default.svc --helm-set replicaCount=2
</code></pre>
<p><a href="https://i.stack.imgur.com/1Eftq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Eftq.png" alt="enter image description here" /></a>
Make sure you also set</p>
<pre><code>export ARGOCD_AUTH_TOKEN="token"
export ARGOCD_OPTS="--grpc-web"
export ARGOCD_SERVER="argocd.example.com"
</code></pre>
<p>added section for the <a href="https://github.com/Adiii717/argocd-demo-app#configmap-creation" rel="nofollow noreferrer">configmap</a></p>
|
<p>I want to deploy WSO2 API Manager with Kubernetes.
Should I use Google Cloud?
Is there another way?</p>
| <p>The helm charts <a href="https://github.com/wso2/kubernetes-apim" rel="nofollow noreferrer">1</a> for APIM can be deployed on GKE, AKS, EKS, etc. You can even deploy the all-in-one simple deployment pattern <a href="https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single" rel="nofollow noreferrer">2</a> in a local Kubernetes cluster like minikube, etc.</p>
<p>You might have to use a cloud provider for more advanced patterns since they require more resources to run.</p>
<p>All these charts are there as samples to get an idea about the deployment patterns. It is not recommended to deploy those as it is in real production scenarios as the resource requirements and infrastructure vary according to the use cases.</p>
<p><a href="https://github.com/wso2/kubernetes-apim" rel="nofollow noreferrer">1</a> - <a href="https://github.com/wso2/kubernetes-apim" rel="nofollow noreferrer">https://github.com/wso2/kubernetes-apim</a></p>
<p><a href="https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single" rel="nofollow noreferrer">2</a> - <a href="https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single" rel="nofollow noreferrer">https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single</a></p>
|
<p>I am trying to add livenessprobe for <a href="https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1" rel="nofollow noreferrer">snapshot-controller</a>. snapshot-controller deployment mentioned <a href="https://github.com/kubernetes-csi/external-snapshotter/blob/v6.0.1/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml" rel="nofollow noreferrer">here</a> to be specific. I tried to exec into controller pod to see what can be used for liveness probe. But not able to get into the pod. Has someone tried adding probes for snapshot-controller.</p>
| <p>In the github repository you shared: <a href="https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1#snapshot-controller-command-line-options" rel="nofollow noreferrer">https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1#snapshot-controller-command-line-options</a></p>
<h4>Important optional arguments that are highly recommended to be used</h4>
<blockquote>
<p>--http-endpoint: The TCP network address where the HTTP server for diagnostics, including metrics and leader election health check, will listen (example: :8080 which corresponds to port 8080 on local host). The default is empty string, which means the server is disabled.</p>
</blockquote>
<p>You should be able to use this, if you do enable this option. At which point, you could query port 8080, path /metrics for example.</p>
<p>Note: if you need to troubleshoot something like this and do not have a shell in your container image. You could use <code>kubectl debug</code>, see: <a href="https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#debugging-using-a-copy-of-the-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#debugging-using-a-copy-of-the-pod</a></p>
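<p>Putting that together, a hedged sketch of what the deployment change could look like — verify the flag, port and path against the snapshot-controller version you actually run, as this is not the upstream manifest:</p>
<pre><code># excerpt of the snapshot-controller Deployment spec (assumption, not the official setup yaml)
containers:
  - name: snapshot-controller
    image: registry.k8s.io/sig-storage/snapshot-controller:v6.0.1
    args:
      - --v=5
      - --leader-election=true
      - --http-endpoint=:8080     # enables the diagnostics HTTP server
    ports:
      - name: http
        containerPort: 8080
    livenessProbe:
      httpGet:
        path: /metrics            # served once --http-endpoint is enabled
        port: http
      initialDelaySeconds: 10
      periodSeconds: 30
</code></pre>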
|
<p>I need help with an EKS managed node group.
I've created a cluster with one additional security group. Inside this cluster I've created a managed node group. All the code is written in Terraform. Once the managed node group creates a new instance, only one security group is attached (the SG created by AWS). Is there a way to also attach the additional security group to the instances?</p>
<p>Thanks in advance for help!</p>
| <p>You can create a custom launch template to define your own security group for the node group. You can then reference the launch template in your terraform <code>aws_eks_node_group</code> resource.</p>
<pre class="lang-bash prettyprint-override"><code> launch_template {
name = aws_launch_template.your_eks_launch_template.name
version = aws_launch_template.your_eks_launch_template.latest_version
}
</code></pre>
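<p>A hedged Terraform sketch of that launch template with an extra security group attached (resource names are placeholders; adjust them to your own configuration):</p>
<pre><code>resource "aws_launch_template" "your_eks_launch_template" {
  name = "eks-managed-ng"

  # attach your additional SG next to the EKS cluster security group
  vpc_security_group_ids = [
    aws_security_group.additional.id,
    aws_eks_cluster.this.vpc_config[0].cluster_security_group_id,
  ]
}
</code></pre>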
|
<p>I am executing a command that gives me the CPU limit:</p>
<pre><code>kubectl get pods -o=jsonpath='{.items[*]..resources.limits.cpu}' -A
</code></pre>
<p>How can I modify the command to also show the pod name and the memory limit?</p>
| <p>You can format the jsonpath like this.</p>
<pre><code>kubectl get pods -Ao jsonpath='{range .items[*]}{"name: "}{@.metadata.name}{" cpu: "}{@..resources.limits.cpu}{" memory: "}{@..resources.limits.memory}{"\n"}{"\n"}{end}'
</code></pre>
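<p>If you prefer a tabular view, the same information can be pulled with <code>custom-columns</code> (an alternative, not part of the original answer):</p>
<pre><code>kubectl get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_LIMIT:.spec.containers[*].resources.limits.cpu,MEM_LIMIT:.spec.containers[*].resources.limits.memory'
</code></pre>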
|
<p>So I had a ConfigMap with a json configuration file in it, like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-config-map
data:
config.json: |+
{
"some-url": "{{ .Values.myApp.someUrl }}"
}
</code></pre>
<p>But I've moved to having my config files outside the ConfigMap's yaml, and just referencing them there, like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-config-map
data:
config.json: |-
{{ .Files.Get .Values.myApp.configFile | indent 4 }}
</code></pre>
<p>But now I want my json to look like the following</p>
<pre class="lang-json prettyprint-override"><code>{
"some-url": "{{ .Values.myApp.someUrl }}"
}
</code></pre>
<p>The only thing I tried is what I just showed. I'm not even sure how to search for this answer.</p>
<p>Is it even possible?</p>
| <p>At the time of reading the file, its content is a string. It's not evaluated as template, and therefore you cannot use variables like you do.</p>
<p>However, helm has a function for this purpose specifically called <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function" rel="nofollow noreferrer">tpl</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-config-map
data:
config.json: |-
{{ tpl (.Files.Get .Values.myApp.configFile) $ | indent 4 }}
</code></pre>
<p>The tpl function takes a template string and renders it with some context. This is useful when you have template snippets in your values file or, like in your case, in a file's content.</p>
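<p>For completeness, the values referenced above might look something like this (the path and URL are placeholders, assumed from the question's references):</p>
<pre><code># assumed values.yaml layout
myApp:
  configFile: "configs/config.json"   # read by .Files.Get, relative to the chart root
  someUrl: "https://example.com"      # substituted by tpl inside config.json
</code></pre>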
|
<p>Unfortunately I am unable to paste configs or <code>kubectl</code> output, but please bear with me.</p>
<p>Using helm to deploy a series of containers to K8s 1.14.6, all containers are deploying successfully <strong>except</strong> for those that have <code>initContainer</code> sections defined within them.</p>
<p>In these failing deployments, their templates define <code>container</code> and <code>initContainer</code> stanzas that reference the same <code>persistent-volume</code> (and associated <code>persistent-volume-claim</code>, both defined elsewhere).</p>
<p>The purpose of the <code>initContainer</code> is to copy persisted files from a mounted drive location into the appropriate place before the main <code>container</code> is established.</p>
<p>Other containers (without <code>initContainer</code> stanzas) mount properly and run as expected.</p>
<p>These pods which have <code>initContainer</code> stanzas, however, report "failed to initialize" or "CrashLoopBackOff" as they continually try to start up. The <code>kubectl describe pod</code> of these pods gives only a Warning in the events section that "pod has unbound immediate PersistentVolumeClaims." The <code>initContainer</code> section of the pod description says it has failed because "Error" with no further elaboration.</p>
<p>When looking at the associated <code>pv</code> and <code>pvc</code> entries from <code>kubectl</code>, however, none are left pending, and all report "Bound" with no Events to speak of in the description.</p>
<p>I have been able to find plenty of articles suggesting fixes when your <code>pvc</code> list shows Pending claims, yet none so far that address this particular set of circumstance when all <code>pvc</code>s are bound.</p>
| <p>When a PVC is "Bound", this means that you do have a PersistentVolume object in your cluster, whose claimRef refers to that PVC (and usually: that your storage provisioner is done creating the corresponding volume in your storage backend).</p>
<p>When a volume is "not bound", in one of your Pod, this means the node where your Pod was scheduled is unable to attach your persistent volume. If you're sure there's no mistake in your Pods volumes, you should then check logs for your csi volumes attacher pod, when using CSI, or directly in nodes logs when using some in-tree driver.</p>
<p>While the crashLoopBackOff thing is something else. You should check for logs of your initContainer: <code>kubectl logs -c <init-container-name> -p</code>. From your explanation, I would suppose there's some permission issues when copying files over.</p>
|
<p>I’ve built a service that lives in a Docker container. As part of it’s required behavior, when receiving a gRPC request, it needs to send an email as a side effect. So imagine something like</p>
<pre><code>service MyExample {
rpc ProcessAndSendEmail(MyData) returns (MyResponse) {}
}
</code></pre>
<p>where there’s an additional emission (adjacent to the request/response pattern) of an email message.</p>
<p>On a “typical” server deployment, I might have a postfix running ; if I were using a service, I’d just dial it’s SMTP endpoint. I don’t have either readily available in this case.</p>
<p>As I’m placing my service in a container and would like to deploy to kubernetes, I’m wondering what solutions work best? There may be a simple postfix-like Docker image I can deploy... I just don’t know.</p>
| <p>There's several docker mailservers:</p>
<ul>
<li><a href="https://github.com/docker-mailserver/docker-mailserver" rel="nofollow noreferrer">https://github.com/docker-mailserver/docker-mailserver</a></li>
<li><a href="https://github.com/Mailu/Mailu" rel="nofollow noreferrer">https://github.com/Mailu/Mailu</a></li>
<li><a href="https://github.com/bokysan/docker-postfix" rel="nofollow noreferrer">https://github.com/bokysan/docker-postfix</a></li>
</ul>
<p>Helm charts:</p>
<ul>
<li><a href="https://github.com/docker-mailserver/docker-mailserver-helm" rel="nofollow noreferrer">https://github.com/docker-mailserver/docker-mailserver-helm</a></li>
<li><a href="https://github.com/Mailu/helm-charts" rel="nofollow noreferrer">https://github.com/Mailu/helm-charts</a></li>
<li><a href="https://github.com/bokysan/docker-postfix/tree/master/helm" rel="nofollow noreferrer">https://github.com/bokysan/docker-postfix/tree/master/helm</a></li>
</ul>
<p>Note that the top answer in <a href="https://www.reddit.com/r/kubernetes/comments/uf4r8v/easy_to_deploy_mail_server/" rel="nofollow noreferrer">this reddit thread</a> recommends signing up for a managed mail provider instead of trying to self-host your own.</p>
|
<p>I have the following deployments: one for a Django API and the other for Celery. When I run the command to get the resource consumption of the pods, it only returns those of Celery and not those of the API. What are potential reasons for this, given that the same configuration works well on another cluster?</p>
<p>Kubernetes Server Version: v1.22.5</p>
<p><a href="https://i.stack.imgur.com/WKEK9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WKEK9.png" alt="enter image description here" /></a></p>
<p><strong>EDIT: Added logs of metrics server</strong></p>
<pre><code>I0824 13:28:05.498602 1 serving.go:342] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0824 13:28:06.269888 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0824 13:28:06.269917 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0824 13:28:06.269966 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0824 13:28:06.269981 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0824 13:28:06.270005 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0824 13:28:06.270025 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0824 13:28:06.270512 1 secure_serving.go:266] Serving securely on [::]:8443
I0824 13:28:06.270577 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key"
I0824 13:28:06.270593 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0824 13:28:06.270852 1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed
I0824 13:28:06.277601 1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0824 13:28:06.371038 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0824 13:28:06.371093 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0824 13:28:06.371932 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
</code></pre>
| <p>I upgraded my cluster to 1.23.4 and my HPAs to autoscaling/v2 api and now everything works fine, I suppose my problem was related to this <a href="https://github.com/kubernetes/kubernetes/issues/76292" rel="nofollow noreferrer">issue</a> because my api pod had an initContainer.</p>
|
<p><strong>Scenario:</strong></p>
<p>I have a k8s cluster with <code>host01</code>, <code>host02</code> and <code>host03</code>.</p>
<p>I'm running master on <code>host01</code> and slaves on <code>host02</code> and <code>host03</code>, and I want to open a port for client connections on master, ie I want to open: <code>host01:32702</code>.</p>
<p>Using following service.yaml, I managed to open <code>host01:32702</code>. To my surprise <code>host02:32702</code> and <code>host03:32702</code> are also open.</p>
<pre><code>kind: Service
# ...
spec:
type: NodePort
ports:
- name: dashboard
port: 8265
targetPort: 8265
nodePort: 32702
selector:
podonhost01: yes
</code></pre>
<p><strong>Questions:</strong></p>
<ul>
<li>Is this intended NodePort behavior? Is the selector not working?</li>
<li>If yes, then why is this the intended NodePort behavior? If I have 1000s of hosts, will a single service open port 32702 on all of them?</li>
<li>How do I only open port 32702 on <code>host01</code> and not the rest of hosts?</li>
</ul>
| <p>You can use the <strong>externalIP</strong>,</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 9376
externalIPs:
- HOST-01 IP
</code></pre>
<p>Ref : <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#external-ips</a></p>
|
<p>I just installed ingress controller in an aks cluster using this deployment resource :</p>
<blockquote>
<p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml</a></p>
</blockquote>
<p>specific for azure.</p>
<p>So far everything works fine; the issue I am having is that I get this error on my certificate:</p>
<blockquote>
<p>Kubernetes Ingress Controller Fake Certificate</p>
</blockquote>
<p>I know I followed all the steps as I should, but I can't figure out why my certificate says that. I would appreciate it if anyone could help guide me toward a possible fix for the issue.</p>
<p>issuer manifest</p>
<blockquote>
</blockquote>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
name: TargetPods-6dc98445c4-jr6pt
spec:
tls:
- hosts:
- test.domain.io
secretName: TargetPods-tls
rules:
- host: test.domain.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: TargetPod-6dc98445c4-jr6pt
port:
number: 80
</code></pre>
<p>Below is the result of : kubectl get secrets -n ingress-nginx</p>
<pre><code>> NAME TYPE DATA AGE
default-token-dh88n kubernetes.io/service-account-token 3 45h
ingress-nginx-admission Opaque 3 45h
ingress-nginx-admission-token-zls6p kubernetes.io/service-account-token 3 45h
ingress-nginx-token-kcvpf kubernetes.io/service-account-token 3 45h
</code></pre>
<p>also the secrets from cert-manager : kubectl get secrets -n cert-manager</p>
<pre><code>> NAME TYPE DATA AGE
cert-manager-cainjector-token-2m8nw kubernetes.io/service-account-token 3 46h
cert-manager-token-vghv5 kubernetes.io/service-account-token 3 46h
cert-manager-webhook-ca Opaque 3 46h
cert-manager-webhook-token-chz6v kubernetes.io/service-account-token 3 46h
default-token-w2jjm kubernetes.io/service-account-token 3 47h
letsencrypt-cluster-issuer Opaque 1 12h
letsencrypt-cluster-issuer-key Opaque 1 45h
</code></pre>
<p>Thanks in advance</p>
| <p>You're seeing this as it is the default out of the box TLS certificate. You should replace this with your own certificate.</p>
<p>Here is some information in the <a href="https://github.com/kubernetes/ingress-nginx/blob/c6a8ad9a65485b1c4593266ab067dc33f3140c4f/docs/user-guide/tls.md#default-ssl-certificate" rel="nofollow noreferrer">documentation</a></p>
<p>You essentially want to create a TLS certificate (try <a href="https://shocksolution.com/2018/12/14/creating-kubernetes-secrets-using-tls-ssl-as-an-example/" rel="nofollow noreferrer">this</a> method if you are unfamiliar) and then add --default-ssl-certificate=default/XXXXX-tls to the nginx-controller deployment in your yaml. You can add this as an argument; search for "/nginx-ingress-controller" in your yaml and that'll take you to the relevant section.</p>
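<p>As a hedged sketch (the secret and file names are placeholders), that could look like:</p>
<pre><code># create a secret from your own certificate/key
kubectl create secret tls my-tls-cert --cert=tls.crt --key=tls.key -n default

# then add this argument to the nginx-ingress-controller container:
#   --default-ssl-certificate=default/my-tls-cert
</code></pre>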
|
<p>I want to share multiple volumes using PersistentVolume reqource of kubernetes.</p>
<p>I want to share "/opt/*" folders in pod. But not the "/opt":</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: demo
namespace: demo-namespace
labels:
app: myApp
chart: "my-app"
name: myApp
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: "myApp-data"
hostPath:
path: /opt/*
</code></pre>
<p>But in the pod I am not able to see the shared volume. If I share only the "/opt" folder then it is shown in the pod.</p>
<p>Is there anything I am missing?</p>
| <p>If you want to share a folder among several pods, deployments or statefulsets, you should create a PersistentVolumeClaim whose access mode is ReadWriteMany. So here is an example of a PersistentVolumeClaim with the ReadWriteMany mode:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 3Gi
</code></pre>
<p>then in your pods you should use it as below ...</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod01
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: c01
image: alpine
volumeMounts:
- mountPath: "/opt"
name: task-pv-storage
</code></pre>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod02
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: c02
image: alpine
volumeMounts:
- mountPath: "/opt"
name: task-pv-storage
</code></pre>
|
<p>How to Stop/Start application deployed in the ArgoCD?</p>
<p>I see only <em>Delete</em> or <em>Sync</em> or deploy/redeploy options. I got running server applications and I'd like to stop (shutdown) temporarily their functionality in the cluster. Or I'm missing something in the concept?</p>
<p>Do I need to implement some kind of custom interface for my server applications to make start/stop functionality possible and communicate with my apps directly? (so it is out of ArgoCD responsibility - i.e. it is <em>not</em> like Linux service management system - I need to implement this by myself at application level)</p>
| <p>You can set the replica count to 0 so no pod will be created, without having to update your application code or remove the application from argocd.</p>
<p>You need to edit the definition of your deployment, setting the <code>replicas</code> to <code>0</code> like so:</p>
<pre><code>apiVersion: ...
kind: Deployment
spec:
replicas: 0
...
</code></pre>
<p>This can be done in 2 ways:</p>
<ul>
<li>You can commit the changes in your config and sync argocd so they get applied,</li>
<li>Or you can do this directly from the argocd UI:
<ul>
<li>First disable the auto-sync (<code>App Details</code> > <code>Summary</code> > <code>Disable auto-sync</code>) so your changes don't get overwritten</li>
<li>Then edit the desired manifest of your deployment directly in the UI</li>
<li>When you want to rollback this change and re-deploy your app, simply sync and you will get your old config back</li>
</ul>
</li>
</ul>
|
<p>I'm attempting to configure AKS, and I've installed <strong>Istio Gateway,</strong> which interns created an Azure Load Balancer, to make the overall traffic flow to be as shown below.</p>
<p><a href="https://i.stack.imgur.com/lZdnv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lZdnv.png" alt="enter image description here" /></a></p>
<p>In my opinion, Azure Load Balancer is not required, <strong>Istio Gateway</strong> should connect directly to Azure Application Gateway, as shown below</p>
<p><a href="https://i.stack.imgur.com/JgHu0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JgHu0.png" alt="enter image description here" /></a></p>
<p>Is this doable? If so, can I get any reference?</p>
| <p>From <a href="https://istio.io/latest/docs/reference/config/networking/gateway/" rel="nofollow noreferrer">istio documentation</a> : <code>Gateway describes a load balancer operating at the edge of the mesh [...]</code>, which means it's the point of entry (endpoint) to your mesh network. Even though it's virtual, it still needs some kind of underlying infrastructure (internal load balancer in your case) to host that load balancing service.</p>
<p>Now it's possible to configure your own ingress-gateway (<a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/</a>), but it's usually much simpler (IMHO) to just use the one from your cloud provider, unless you have a specific use case.</p>
|
<p>In the <a href="https://open-vsx.org/extension/ms-kubernetes-tools/vscode-kubernetes-tools" rel="nofollow noreferrer">VS Code Kubernetes Extension</a>, I am getting an error when I try to Access resources in my cluster.</p>
<p>I have updated my ~/.kube/config with the correct data and general format</p>
<h2>.kube/config</h2>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: M1ekNDQWMrZ0F3SUJBZ0lCQURB...
server: https://{yadayada}.gr7.us-east-1.eks.amazonaws.com
name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
contexts:
- context:
cluster: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
user: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
current-context: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- us-east-1
- eks
- get-token
- --cluster-name
- eventplatform
command: aws
</code></pre>
<h2>ERROR</h2>
<p><a href="https://i.stack.imgur.com/LoABO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LoABO.png" alt="enter image description here" /></a></p>
| <p>The solution was to add my AWS credential ENV variables:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: M1ekNDQWMrZ0F3SUJBZ0lCQURB...
server: https://{yadayada}.gr7.us-east-1.eks.amazonaws.com
name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
contexts:
- context:
cluster: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
user: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
current-context: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- us-east-1
- eks
- get-token
- --cluster-name
- eventplatform
command: aws
env:
- name: AWS_ACCESS_KEY_ID
value: {SOME_VALUES}
- name: AWS_SECRET_ACCESS_KEY
value: {SOME_OTHER_VALUES}
- name: AWS_SESSION_TOKEN
value: {SOME_OTHER_OTHER_VALUES}
</code></pre>
|
<p>I'm facing the below mentioned issue while using DHCP IPAM plugin + Macvlan + Multus for the additional interface creation inside my pod and assigning IP from DHCP server.</p>
<p>I actually went through the related issues around this problem and tried all the solutions/different configurations mentioned there. But none of them were working so far. The documentation for CNI plugin w.r.t DHCP usage also not quite clear.</p>
<p><strong>Related Issues:</strong></p>
<ol>
<li><a href="https://github.com/k8snetworkplumbingwg/multus-cni/issues/291" rel="nofollow noreferrer">https://github.com/k8snetworkplumbingwg/multus-cni/issues/291</a></li>
<li><a href="https://github.com/containernetworking/plugins/issues/587" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/issues/587</a></li>
<li><a href="https://github.com/containernetworking/plugins/issues/371" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/issues/371</a></li>
<li><a href="https://github.com/containernetworking/plugins/issues/440" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/issues/440</a></li>
<li><a href="https://github.com/containernetworking/cni/issues/398" rel="nofollow noreferrer">https://github.com/containernetworking/cni/issues/398</a></li>
<li><a href="https://github.com/containernetworking/cni/issues/225" rel="nofollow noreferrer">https://github.com/containernetworking/cni/issues/225</a></li>
<li><a href="https://github.com/containernetworking/plugins/issues/371" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/issues/371</a></li>
</ol>
<p><strong>Solutions Suggested:</strong></p>
<ol>
<li><a href="https://github.com/containernetworking/plugins/pull/577" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/pull/577</a></li>
</ol>
<p><strong>DHCP Daemon Logs:</strong></p>
<pre><code>[root@test-node cni_plugins]# ./dhcp daemon -broadcast=true
2022/06/09 12:00:03 ac7d57597540992a1af43455da24b3210561ce12b164820ee18f583a304a/test_net_attach1/net1: acquiring lease
2022/06/09 12:00:03 Link "net1" down. Attempting to set up
2022/06/09 12:00:03 network is down
2022/06/09 12:00:03 retrying in 2.881018 seconds
2022/06/09 12:00:16 no DHCP packet received within 10s
2022/06/09 12:00:16 retrying in 2.329120 seconds
2022/06/09 12:00:29 no DHCP packet received within 10s
2022/06/09 12:00:29 retrying in 1.875428 seconds
</code></pre>
<p><strong>NetworkAttachmentDefinition:</strong></p>
<pre><code>apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: "test1"
annotations:
k8s.v1.cni.cncf.io/resourceName: intel.com/test_net_attach1
spec:
config:
'{
"name": "test_net_attach1",
"type": "macvlan",
"master": "ens2f0",
"ipam": {
"type": "dhcp"
}
}'
</code></pre>
<p><strong>Test Pod YAML:</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dhcp-test1
annotations:
k8s.v1.cni.cncf.io/networks: test1
spec:
nodeSelector:
deploy: vm1
containers:
- name: c1
image: centos:latest
command: ["/bin/sh","-c"]
args: ["sleep infinity"]
</code></pre>
<p><strong>DHCP SERVER Logs:</strong></p>
<pre><code>Internet Systems Consortium DHCP Server 4.4.1
Copyright 2004-2018 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /data/dhcpd.conf
Database file: /data/dhcpd.leases
PID file: /var/run/dhcpd.pid
Wrote 0 leases to leases file.
Listening on LPF/ens2f0/b4:96:91:a5:8c:82/192.168.231.0/24
Sending on LPF/ens2f0/b4:96:91:a5:8c:82/192.168.231.0/24
Sending on Socket/fallback/fallback-net
Server starting service.
DHCPDISCOVER from 4a:63:ad:6d:9a:44 via ens2f0
DHCPOFFER on 192.168.231.120 to 4a:63:ad:6d:9a:44 via ens2f0
DHCPDISCOVER from 4a:63:ad:6d:9a:44 via ens2f0
DHCPOFFER on 192.168.231.120 to 4a:63:ad:6d:9a:44 via ens2f0
</code></pre>
<p><strong>Observations:</strong></p>
<ol>
<li>DHCP request and response are seen by tcpdump on the interface but the dhcp daemon says none was received.</li>
<li>Looks like the DHCP daemon is not picking up the offer.</li>
</ol>
<p><strong>ENVIRONMENT:</strong>
CNI: cni-plugins-linux-amd64-v1.1.1.tgz</p>
<p>Could someone please share the working configurations for the below mentioned items</p>
<ol>
<li>Network attachment definition</li>
<li>Sample Pod YAML</li>
<li>How to run DHCP Daemon?</li>
</ol>
<p>for the better documentation and for the new comers who is facing issues around this? Please let me know if you needed anything or if I'm missing anything?</p>
| <p>I started the daemon on the server node with:</p>
<pre><code>cd /opt/cni/bin
sudo ./dhcp daemon
</code></pre>
<p>In a couple of seconds my pod received its IP address from my DHCP server.</p>
|
<p><a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/" rel="nofollow noreferrer">This doc</a> explains how to run multiple schedulers, but I am not able to understand how the default scheduler is decided. Is it based on the <code>--leader-elect</code> option?</p>
<p>Can I tell Kubernetes to use my-custom-scheduler as the default scheduler instead of kube-scheduler?
Is there another way to specify the scheduler other than <code>schedulerName</code> in the Pod/Deployment spec?</p>
| <p>If you can't remove/replace the default scheduler in the control plane (e.g. if you are on a managed k8s platform), you can use <a href="https://open-policy-agent.github.io/gatekeeper/website/docs/" rel="nofollow noreferrer">GateKeeper</a> — the replacement for OPA — or another policy agent to write mutations that are applied to some or all pods on your cluster.</p>
<p>For example:</p>
<pre><code>apiVersion: mutations.gatekeeper.sh/v1beta1
kind: Assign
metadata:
name: pod-scheduler-name
spec:
applyTo:
- groups: ""
kinds: ["Pod"]
versions: ["v1"]
match:
kinds:
- apiGroups: ["*"]
kinds: ["Pod"]
# Adjust this to a label that is present on the pods of your custom scheduler.
# It's important that you leave your custom scheduler to be itself scheduled by the
# default scheduler, as otherwise if all pods of your custom scheduler somehow get
# terminated, they won't be able to start up again due to not being scheduled.
labelSelector:
matchExpressions:
- key: app
operator: NotIn
values: ["my-scheduler"]
location: "spec.schedulerName"
# Adjust this to match the desired profile name from your scheduler's configuration.
parameters:
assign:
value: my-scheduler
</code></pre>
|
<p>When we launch the EKS Cluster using the below manifest, it is creating ALB. We have a default ALB that we are using, let's call it EKS-ALB. The Hosted zone is routing traffic to this EKS-ALB. We gave tag <strong>ingress.k8s.aws/resource:LoadBalancer, ingress.k8s.aws/stack:test-alb, elbv2.k8s.aws/cluster: EKS</strong>. But when we delete the manifest, it is deleting the default ALB and we need to reconfigure hosted zone again with New ALB which will get created in next deployment. Is there any way to block Ingress-controller not deleting ALB, but only deleting the listeners and Target Group?</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-nginx-rule
namespace: test
annotations:
alb.ingress.kubernetes.io/group.name: test-alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: instance
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-path: /index.html
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/security-groups: eks-test-alb-sg
spec:
ingressClassName: alb
rules:
- host: test.eks.abc.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: test-svc
port:
number: 5005
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-dep
namespace: test
labels:
app: test
spec:
replicas: 1
restartPolicy:
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: Imagepath
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5005
resources:
requests:
memory: "256Mi"
cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
name: test-svc
namespace: test
labels:
app: test
spec:
type: NodePort
ports:
- port: 5005
targetPort: 80
protocol: TCP
selector:
app: test
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: test-scaler
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: test-dep
minReplicas: 1
maxReplicas: 5
targetCPUUtilizationPercentage: 60
---
</code></pre>
| <p>In order to keep the existing ALB from being deleted while the group.name annotation is enabled, we need to meet the following conditions:</p>
<ol>
<li>ALB should be tagged with below 3 tags:</li>
</ol>
<pre><code>alb.ingress.kubernetes.io/group.name: test-alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: instance
</code></pre>
<ol start="2">
<li>Create a dummy ingress with the same group name with the below manifest.</li>
</ol>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-nginx-rule
namespace: test
annotations:
alb.ingress.kubernetes.io/group.name: test-alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: instance
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-path: /index.html
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/security-groups: eks-test-alb-sg
spec:
ingressClassName: alb
rules:
- host: dummy.eks.abc.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: test-svc
port:
number: 5005
</code></pre>
<p>After deploying the above manifest, an ingress will be created using the same ALB, and the listener will have a rule saying that if the host is dummy.eks.abc.com, it will return 443. It's a create-and-forget type of manifest, so after creating this ingress, even if we delete all the other running deployments and services (except the dummy manifest above), the ALB will remain.</p>
|
<p>My goal is to have a kubernetes cluster running with stateless replicas of some kind of frontend which can schedule jobs.</p>
<p>Afterwards my plan is to have multiple runners (as pods) which are polling for scheduled jobs.
Once they receive job data they should launch job executors (also as a pod) on demand.</p>
<p>This should look somehow like this:</p>
<pre><code> pod A pod B
________ __________
| | kube | |
| runner | ------> | executor |
|________| |__________|
.
.
pod A' . pod B'
________ __________
| | kube | |
| runner | ------> | executor |
|________| |__________|
</code></pre>
<p>Basically I am kind of inspired by the gitlab architecture.
<a href="https://docs.gitlab.com/runner/executors/kubernetes.html#kubernetes-executor-interaction-diagram" rel="nofollow noreferrer">https://docs.gitlab.com/runner/executors/kubernetes.html#kubernetes-executor-interaction-diagram</a></p>
<p>Therefore it would be really awesome if you could give me a hint where to start with.
I was considering to use the python api of the cluster but in this I have to somehow make the cluster administration available to the runner.</p>
<p>So I'm really thankful for any idea or approach how to realize such an architecture of pod lifecycling.</p>
<p>Do you see any security risks with such an approach or is this reasonable securitywise?</p>
<p>Thank you very much for your help and hints</p>
| <p>Here is a <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">list of available kubernetes api clients</a>. You will use a <strong><a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="nofollow noreferrer">service account</a></strong> to make authorized calls with your api client against the <strong>kube-apiserver</strong> running on the <strong>controlplane</strong> node. I made a first idea draft for your architecture; maybe it is inspiring to you. Good luck with your project! As an alternative to the database, you may want to use a message queue.</p>
<p><a href="https://i.stack.imgur.com/SFBOV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SFBOV.png" alt="enter image description here" /></a></p>
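<p>To give an idea of the runner-to-executor step, here is a minimal sketch using the official Python client. It assumes the runner runs in-cluster with a service account that is allowed to create pods in a <code>jobs</code> namespace; the namespace, labels and image are placeholders, not part of your setup.</p>
<pre><code>from kubernetes import client, config

config.load_incluster_config()   # authenticate with the pod's service account token
core = client.CoreV1Api()

def launch_executor(job_id: str, command: list):
    """Create a one-off executor pod for a scheduled job (sketch, not production code)."""
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name=f"executor-{job_id}",
            labels={"app": "executor", "job-id": job_id},
        ),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="executor",
                    image="my-registry/executor:latest",  # placeholder image
                    command=command,
                )
            ],
        ),
    )
    return core.create_namespaced_pod(namespace="jobs", body=pod)

# e.g. launch_executor("123", ["python", "run_job.py"])
</code></pre>
<p>Security-wise, scope the RBAC Role of that service account to just the verbs and namespace the runner needs (create/get/watch/delete pods), so a compromised runner cannot administer the rest of the cluster.</p>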
|
<p>Is there a way to use Helm to show available chart updates for installed charts?</p>
<p>For example I have a "web-app" chart installed as "test" with version 1.2.4, but in my repo 1.2.7 is available:</p>
<pre><code># helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
test default 1 2020-06-04 07:33:07.822952298 +0000 UTC deployed web-app-1.2.4 0.0.1
# helm search repo myrepo
NAME CHART VERSION APP VERSION DESCRIPTION
myrepo/ingress 0.1.0 1.16.0 A Helm chart for Kubernetes
myrepo/sandbox 1.2.3 1.16.0 A Helm chart for Kubernetes
myrepo/web-app 1.2.7 0.0.1 A Helm chart for Kubernetes
</code></pre>
<p>My goal is to write a script to send notifications of any charts that need updating so that I can review and run updates. I'd be happy to hear about any devOps style tools that do this,</p>
| <p>As of August 28th 2022, there is no way of knowing which repository an already installed helm chart came from.</p>
<p>If you want to be able to do some sort of automation, currently you need to track the information of which chart came from which repo externally.<br />
Examples would be: storing configuration in Source Control, Installing charts as argo apps (if you're using argocd), a combination of both, etc.</p>
<p>Now since this question doesn't describe the use of any of these methods, I'll just make an assumption and give an example based on of the methods I mentioned.</p>
<p>Let's say you store all of the helm charts as dependencies of some local chart in your source control.</p>
<p>An example would be a <code>Chart.yaml</code> that looks something like this:</p>
<pre><code>apiVersion: v2
name: chart-of-charts
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
dependencies:
- name: some-chart
version: 0.5.1
repository: "https://somechart.io"
- name: web-app
version: 0.2.2
repository: "https://myrepo.io"
</code></pre>
<p>What you could do in this case is traverse through the dependencies and perform a lookup to compare the versions in the .yaml vs versions available.</p>
<p>An example of a bash script:</p>
<pre><code>#!/bin/bash
# requires:
# - helm
# - yq (https://github.com/mikefarah/yq)
chart=Chart.yaml
length=$(yq '.dependencies | length' $chart)
for i in $(seq 1 $length); do
iter=$(($i-1))
repo=$(yq .dependencies[$iter].repository $chart)
name=$(yq .dependencies[$iter].name $chart)
version=$(yq .dependencies[$iter].version $chart)
# only if this app points to an external helm chart
if helm repo add "repo$iter" $repo > /dev/null 2>&1
then
available_version=$(helm search repo "repo$iter/$name" --versions | sed -n '2p' | awk '{print $2}')
if [ "$available_version" != "$version" ]; then
echo APP: $(echo $chart | sed 's|/Chart.yaml||')
echo repository: $repo
echo chart name: $name
echo current version: $version Available version: $available_version
echo
fi
fi
done
</code></pre>
|
<p>community.</p>
<p>I have doubts about the use of the HorizontalPodAutoscaler (HPA) in Kubernetes. What are the best practices for using HPA, especially when choosing maxReplicas? As an example, if I have a cluster with 3 worker nodes running a single app and I set up the HPA to scale up to 20 pods, is it good practice to scale to 3x more pods than the available nodes? Or is scaling the pods up to the same quantity as the available worker nodes in the cluster a better approach?</p>
<p>Thank you in advance</p>
| <p>First of all, you need to test your application and decide on reasonable resources per pod ("requests" and "limits").</p>
<p>After setting the limit per pod, you know how many pods your cluster can maintain.
<code>For example, if you have 10 CPU and 10 Gi of memory free across the cluster and you set the limit per pod to 1 CPU and 1 Gi of memory, then you can run up to 10 pods.</code></p>
<p>Then it's time to run your load test: fire the expected traffic at its maximum with the lowest number of pods you're planning to run for the normal/daily traffic. Gradually start up new pods and check whether you can handle the high traffic or still need to add more pods. Repeat this until you reach an appropriate number of pods; that gives you the maximum number of pods to configure in your HPA.</p>
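<p>Once you have those numbers, the HPA itself is straightforward. A minimal sketch (the deployment name, replica counts and CPU target below are placeholders you would replace with the values validated in your load test):</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2          # enough for the normal/daily traffic
  maxReplicas: 10         # the maximum you validated in the load test
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
</code></pre>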
|
<p>Im trying to format my yml to be more readable.
I have an if statement that is really long with a bunch of and/ors and I would like to be able to spread it across multiple lines
So something along the lines of</p>
<pre><code>{{-if or
(eq 'abc' .values.foo)
(eq 'def' . values.bar)
}}
Def:'works'
{{- end}}
</code></pre>
<p>But this throws up errors for incomplete if statement.
Is there some special character or syntax I can use to achieve the above?</p>
| <p>Helm (Go templates) supports breaking an action across multiple lines without any special continuation character.</p>
<p>Missing a space between <code>{{</code> and <code>if</code>.</p>
<p>There is an extra space between <code>.</code> and <code>values</code>.</p>
<p>String constants require double quotes.</p>
<p>demo:</p>
<p>values.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>foo: xxx
bar: yyy
</code></pre>
<p>templates/cm.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: test
labels:
{{- include "test.labels" . | nindent 4 }}
data:
cfg: |-
{{- if or
(eq "abc" .Values.foo)
(eq "def" .Values.bar)
}}
if
{{- else }}
else
{{- end }}
</code></pre>
<p>cmd</p>
<pre class="lang-bash prettyprint-override"><code>helm template --debug test .
</code></pre>
<p>output</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: test
data:
cfg: |-
else
</code></pre>
|
<p>I use this command to install and enable Kubernetes dashboard on a remote host:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
kubectl proxy --address='192.168.1.132' --port=8001 --accept-hosts='^*$'
http://192.168.1.132:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
</code></pre>
<p>But I get:</p>
<pre><code>Insecure access detected. Sign in will not be available. Access Dashboard securely over HTTPS or using localhost. Read more here .
</code></pre>
<p>Is it possible to enable SSL connection on the Kubernetes host so that I can access it without this warning message and enable login?</p>
| <p>From the service definition</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
</code></pre>
<p>Which exposes port 443 (aka https). So it's already preconfigured. First, use https instead of http in your URL.</p>
<p>Then, instead of doing a <code>kubectl proxy</code>, why not simply</p>
<pre><code>kubectl port-forward -n kubernetes-dashboard services/kubernetes-dashboard 8001:443
</code></pre>
<p>Access endpoint via <a href="https://127.0.0.1:8001/#/login" rel="noreferrer">https://127.0.0.1:8001/#/login</a></p>
<p>Now it's going to give the typical "certificate not signed" warning since the certificates are self-signed (arg <code>--auto-generate-certificates</code> in the deployment definition). Just skip it in your browser. See an article like <a href="https://vmwire.com/2022/02/07/running-kubernetes-dashboard-with-signed-certificates/" rel="noreferrer">https://vmwire.com/2022/02/07/running-kubernetes-dashboard-with-signed-certificates/</a> if you need to configure a signed certificate.</p>
|
<p>I've placed a docker compose file <strong>project.yaml</strong> at the location /etc/project/project.yaml</p>
<p>The file as well as the project directory have the same file permissions, i.e. -rxwrxxrwx,
but when I run docker-compose</p>
<pre><code>sudo docker-compose -f ./project.yaml up -d
</code></pre>
<p>it errors out with the following:
Cannot find the file ./project.yaml</p>
<p>I have checked various times and it seems there is no permission issue. Can anyone tell why we have this problem and what would be the solution</p>
| <p>Besides using the full path, as <a href="https://stackoverflow.com/questions/73500671/docker-compose-cannot-find-the-yaml-file#comment129801726_73500671">commented</a> by <a href="https://stackoverflow.com/users/14312225/quoc9x">quoc9x</a>, double-check your current working directory when you call the command with a relative path like <code>./project.yaml</code>.</p>
<p>If you are not in the right folder, that would explain the error message.</p>
|
<p>I want to make a container that is able to transfer files between itself and other containers on the cluster. I have multiple containers that are responsible for executing a task, and they are waiting to get an input file to do so. I want a separate container to be responsible for handling files before and after the task is executed by the other containers. As an example:</p>
<ol>
<li>have all files on the file manager container.</li>
<li>let the file manager container automatically copy a file to a task executing container.</li>
<li>let task executing container run the task.</li>
<li>transfer the output of the task executing container to the file manager container.</li>
</ol>
<p>And i want to do this automatically, so that for example 400 input files can be processed to output files in this way. What would be the best way to realise such a process with kubernetes? Where should I start?</p>
| <p>A simple approach would be to set up an NFS share or use a managed file system like AWS EFS.</p>
<p>You can mount the file system or NFS volume directly into the pods using the <strong>ReadWriteMany</strong> access mode.</p>
<p><strong>ReadWriteMany</strong> - multiple pods can access the single file system.</p>
<p>If you don't want to use the Managed service like EFS or so you can also set up the file system on <strong>K8s</strong> checkout the <strong>MinIO</strong> : <a href="https://min.io/" rel="nofollow noreferrer">https://min.io/</a></p>
<p>All files will be saved in the <strong>File system</strong> and as per <strong>POD</strong> requirement, it can simply access it from the file system.</p>
<p>You can create different directories to separate the outputs.</p>
<p>If you only need read operations, meaning all pods may only read the files, you can also set up the <code>ReadOnlyMany</code> access mode.</p>
<p>If you are on GCP, you can check out this nice document: <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">https://cloud.google.com/filestore/docs/accessing-fileshares</a></p>
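<p>As a rough sketch, the shared claim could look like this (the <code>storageClassName</code> is an assumption; use whatever class your NFS/EFS/Filestore/MinIO driver provides), and both the file-manager pod and the task-executing pods would mount it as a regular <code>persistentVolumeClaim</code> volume:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes:
    - ReadWriteMany        # file manager and task executors can all mount it
  storageClassName: efs-sc # placeholder: the RWX-capable class in your cluster
  resources:
    requests:
      storage: 10Gi
</code></pre>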
|
<p>I've just migrated to M1 Macbook and tried to deploy couchbase using Couchbase Helm Chart on Kubernetes. <a href="https://docs.couchbase.com/operator/current/helm-setup-guide.html" rel="nofollow noreferrer">https://docs.couchbase.com/operator/current/helm-setup-guide.html</a></p>
<p>But, couchbase server pod fails with message below</p>
<blockquote>
<p>Readiness probe failed: dial tcp 172.17.0.7:8091: connect: connection
refused</p>
</blockquote>
<p>Pod uses image: couchbase/server:7.0.2</p>
<p>Error from log file:</p>
<pre><code>Starting Couchbase Server -- Web UI available at http://<ip>:8091
and logs available in /opt/couchbase/var/lib/couchbase/logs
runtime: failed to create new OS thread (have 2 already; errno=22)
fatal error: newosproc
runtime stack:
runtime.throw(0x4d8d66, 0x9)
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/panic.go:596 +0x95
runtime.newosproc(0xc420028000, 0xc420038000)
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/os_linux.go:163 +0x18c
runtime.newm(0x4df870, 0x0)
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:1628 +0x137
runtime.main.func1()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:126 +0x36
runtime.systemstack(0x552700)
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/asm_amd64.s:327 +0x79
runtime.mstart()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:1132
goroutine 1 [running]:
runtime.systemstack_switch()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/asm_amd64.s:281 fp=0xc420024788 sp=0xc420024780
runtime.main()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:127 +0x6c fp=0xc4200247e0 sp=0xc420024788
runtime.goexit()
/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc4200247e8 sp=0xc4200247e0
{"init terminating in do_boot",{{badmatch,{error,{{shutdown,{failed_to_start_child,encryption_service,{port_terminated,normal}}},{ns_babysitter,start,[normal,[]]}}}},[{ns_babysitter_bootstrap,start,0,[{file,"src/ns_babysitter_bootstrap.erl"},{line,23}]},{init,start_em,1,[]},{init,do_boot,3,[]}]}}
init terminating in do_boot ({{badmatch,{error,{{_},{_}}}},[{ns_babysitter_bootstrap,start,0,[{_},{_}]},{init,start_em,1,[]},{init,do_boot,3,[]}]})
</code></pre>
<p>Any help would be appreciated.</p>
| <p>It seems ARM64 version of Couchbase Server for MacOS has become available since Couchbase Server 7.1.1.</p>
<p>So, I ran the command below to install couchbase.</p>
<pre><code>helm install couchbasev1 --values myvalues.yaml couchbase/couchbase-operator
</code></pre>
<p>myvalues.yaml:</p>
<pre><code>cluster:
image: couchbase/server:7.1.1
</code></pre>
<p>
And it worked.</p>
|
<p>I have created an Openshift cluster where my pods and services are running. Before creating an ingress, i deployed a Kong ingress controller that auto provisioned a Azure load balancer.</p>
<p><a href="https://i.stack.imgur.com/wp3Vr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wp3Vr.png" alt="enter image description here" /></a></p>
<p>From my understanding, Kong uses Nginx that can be configured to be used as a load balancer itself. Why does it need a Load balancer to be provisioned by Cloud infrastructure on which kubernetes cluster is running?</p>
| <p>You might have deployed the Kong Ingress controller with the service <code>type : LoadBalancer</code></p>
<p>The <strong>LoadBalancer</strong> service type is mainly used to expose a <code>Kubernetes</code> service using the <strong>cloud</strong> provider's load balancer.</p>
<p>ref : <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer</a></p>
<p>You can change the Kong proxy service type to <strong>ClusterIP</strong> and it will still forward requests as expected.</p>
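<p>For example (the service name <code>kong-proxy</code> and namespace <code>kong</code> are assumptions; check the actual names in your install):</p>
<pre><code># find the service that is currently type LoadBalancer
kubectl get svc -n kong

# open it and change spec.type from LoadBalancer to ClusterIP
kubectl edit svc kong-proxy -n kong
</code></pre>
<p>Once the type is no longer LoadBalancer, the cloud load balancer is released and the proxy is only reachable from inside the cluster, so you would need some other way to bring external traffic to it. If Kong was installed with its Helm chart, the same is usually achieved by setting the proxy service type value to ClusterIP.</p>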
|
<p><strong>I am new to helm and kubernetes.</strong></p>
<p>My current requirement is to use setup multiple services using a common helm chart.</p>
<p>Here is the scenario.</p>
<ol>
<li><p>I have a common docker image for all of the services</p>
</li>
<li><p>for each of the services there are different commands to run. <strong>In total there are more than 40 services.</strong></p>
<p>Example</p>
</li>
</ol>
<blockquote>
<pre><code>pipenv run python serviceA.py
pipenv run python serviceB.py
pipenv run python serviceC.py
and so on...
</code></pre>
</blockquote>
<p>Current state of helm chart I have is</p>
<pre><code>demo-helm
|- Chart.yaml
|- templates
|- deployment.yaml
|- _helpers.tpl
|- values
|- values-serviceA.yaml
|- values-serviceB.yaml
|- values-serviceC.yaml
and so on ...
</code></pre>
<p>Now, since I want to use the same helm chart and deploy multiple services. How should I do it?</p>
<p>I used following command <code>helm install demo-helm . -f values/values-serviceA.yaml -f values-serviceB.yaml</code> but it only does a deployment for values file provided at the end.</p>
<p>Here is my <code>deployment.yaml</code> file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "helm.fullname" . }}
labels:
{{- include "helm.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "helm.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "helm.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: {{- toYaml .Values.command |nindent 12}}
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: secrets
mountPath: "/usr/src/app/config.ini"
subPath: config.ini
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: secrets
secret:
secretName: sample-application
defaultMode: 0400
</code></pre>
<p><strong>Update.</strong></p>
<p>Since my requirement has been updated to add all the values for services in a single file I am able to do it by following.</p>
<p><code>deployment.yaml</code></p>
<pre><code>{{- range $service, $val := .Values.services }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $service }}
labels:
app: {{ .nameOverride }}
spec:
replicas: {{ .replicaCount }}
selector:
matchLabels:
app: {{ .nameOverride }}
template:
metadata:
labels:
app: {{ .nameOverride }}
spec:
imagePullSecrets:
- name: aws-ecr
containers:
- name: {{ $service }}
image: "image-latest-v3"
imagePullPolicy: IfNotPresent
command: {{- toYaml .command |nindent 12}}
resources:
{{- toYaml .resources | nindent 12 }}
volumeMounts:
- name: secrets
mountPath: "/usr/src/app/config.ini"
subPath: config.ini
volumes:
- name: secrets
secret:
secretName: {{ .secrets }}
defaultMode: 0400
{{- end }}
</code></pre>
<p>and <code>values.yaml</code></p>
<pre><code>services:
#Services for region1
serviceA-region1:
nameOverride: "serviceA-region1"
fullnameOverride: "serviceA-region1"
command: ["bash", "-c", "python serviceAregion1.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
#Services for region2
serviceA-region2:
nameOverride: "serviceA-region2"
fullnameOverride: "serviceA-region2"
command: ["bash", "-c", "python serviceAregion2.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
</code></pre>
<p>Now I want to know will the following configuration work with the changes I am posting below for both <code>values.yaml</code></p>
<pre><code>services:
region:
#Services for region1
serviceA-region1:
nameOverride: "serviceA-region1"
fullnameOverride: "serviceA-region1"
command: ["bash", "-c", "python serviceAregion1.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
region:2
#Services for region2
serviceA-region2:
nameOverride: "serviceA-region2"
fullnameOverride: "serviceA-region2"
command: ["bash", "-c", "python serviceAregion2.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
</code></pre>
<p>and <code>deployment.yaml</code></p>
<pre><code>{{- range $region, $val := .Values.services.region }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $region }}-{{ .nameOverride }}
labels:
app: {{ .nameOverride }}
spec:
replicas: {{ .replicaCount }}
selector:
matchLabels:
app: {{ .nameOverride }}
template:
metadata:
labels:
app: {{ .nameOverride }}
spec:
imagePullSecrets:
- name: aws-ecr
containers:
- name: {{ $region }}-{{ .nameOverride }}
image: "image-latest-v3"
imagePullPolicy: IfNotPresent
command: {{- toYaml .command |nindent 12}}
resources:
{{- toYaml .resources | nindent 12 }}
volumeMounts:
- name: secrets
mountPath: "/usr/src/app/config.ini"
subPath: config.ini
volumes:
- name: secrets
secret:
secretName: {{ .secrets }}
defaultMode: 0400
{{- end }}
</code></pre>
| <p>I can recommend you try a helmfile-based approach. I prefer a 3-file approach.
What you'll need :</p>
<ul>
<li>helmfile-init.yaml: contains YAML instructions that you might need to use for creating and configuring namespaces etc.</li>
<li>helmfile-backend.yaml: contains all the releases you need to deploy (service1, service2 ...)</li>
<li>helmfile.yaml: paths to the above-mentioned (helmfile-init, helmfile-backend YAML files)</li>
<li>a deployment spec file (app_name.json): a specification file that contains all the information regarding the release (release-name, namespace, helm chart version, application-version, etc.)</li>
</ul>
<p>Helmfile has made my life a little bit breezier when deploying multiple applications.</p>
<p>You can refer to the official docs <a href="https://helmfile.readthedocs.io/en/latest/" rel="nofollow noreferrer">here</a> or the <a href="https://lyz-code.github.io/blue-book/devops/helmfile/" rel="nofollow noreferrer">Blue Books</a> if you have Github access on your machine.</p>
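<p>As a rough sketch of the layout (chart paths, release names, namespaces and values files below are placeholders, not something prescribed by helmfile):</p>
<pre><code># helmfile.yaml
helmfiles:
  - path: helmfile-init.yaml      # namespaces, CRDs, other one-time setup
  - path: helmfile-backend.yaml   # the 40+ application releases

# helmfile-backend.yaml
releases:
  - name: service-a-region1
    namespace: apps
    chart: ./demo-helm            # your common chart
    values:
      - values/values-serviceA-region1.yaml
  - name: service-a-region2
    namespace: apps
    chart: ./demo-helm
    values:
      - values/values-serviceA-region2.yaml
</code></pre>
<p>Then a single <code>helmfile apply</code> (or <code>helmfile -l name=service-a-region1 apply</code> for one release) installs or upgrades everything, with each release getting its own values file for the shared chart.</p>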
|
<p>I have created the below 'pod` in default namespace</p>
<pre><code> kubectl run myhttpd --image="docker.io/library/nginx:latest" --restart=Never -l app=httpd-server --port 80
</code></pre>
<p>I was creating another Pod on a different <code>namespace</code> to check the connectivity on <code>port 80</code> on <code>default namespace</code> with the below <code>command</code></p>
<pre><code>kubectl run cli-httpd --rm -it --image=busybox --restart=Never -l app=myhttpd -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 100.64.9.198 (IP of application in default namespace)
</code></pre>
<p>In order to allow the connectivity between both the namespace , I have created the below <code>Pod network policy</code></p>
<pre><code> apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-port-ingress-80
namespace: default
spec:
podSelector:
matchLabels:
app: myhttpd
policyTypes:
- Ingress
ingress:
- from:
- ipBlock:
cidr: 10.64.8.0/22
ports:
- protocol: TCP
port: 80
</code></pre>
<p><code>10.64.8.0/22</code> is the Pods network range.</p>
<p>But the connectivity is timing out. Please suggest how to allow this connectivity.</p>
| <p>In NetworkPolicy, the ipBlock is usually meant to allow communications from outside your SDN.</p>
<p>What you want to do is to filter based on pod labels.</p>
<p>Having started your test pod, check for its labels</p>
<pre><code>kubectl get pods --show-labels
</code></pre>
<p>Pick one that identify your Pod, while not matching anything else, then fix your NetworkPolicy. Should look something like:</p>
<pre><code>spec:
ingress:
- from:
- podSelector: # assuming client pod belongs to same namespace as application
matchLabels:
app: my-test # netpol allows connections from any pod with label app=my-test
ports:
- port: 80 # netpol allows connections to port 80 only
protocol: TCP
podSelector:
matchLabels:
app: myhttpd # netpol applies to any pod with label app=myhttpd
policyTypes:
- Ingress
</code></pre>
<p>While ... I'm not certain what the NetworkPolicy specification says regarding ipBlocks (can they refer to SDN ranges?) ... depending on your SDN, I guess your configuration "should" work, in some cases, maybe. Maybe your issue is only related to label selectors?</p>
<p>Note, allowing connections from everywhere, I would use:</p>
<pre><code>spec:
ingress:
- {}
....
</code></pre>
|
<p>From a certain PVC, I'm trying to get the volume id from the metadata of the PV associated with the PVC using <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Api</a>.</p>
<p>I'm able to describe PVC with <code>read_namespaced_persistent_volume_claim</code> function and obtain the PV name <code>spec.volume_name</code>. Now I need to go deeper and get the <code>Source.VolumeHandle</code> attribute from the PV metadata to get de EBS Volume Id and obtain the volume status from aws, but I can't find a method to describe pv from the python api.</p>
<p>Any help?</p>
<p>Thanks</p>
| <p>While <code>PersistentVolumeClaims</code> are namespaced, <code>PersistentVolumes</code> are not. Looking at the available methods in the V1 API...</p>
<pre><code>>>> v1 = client.CoreV1Api()
>>> print('\n'.join([x for x in dir(v1) if x.startswith('read') and 'volume' in x]))
read_namespaced_persistent_volume_claim
read_namespaced_persistent_volume_claim_status
read_namespaced_persistent_volume_claim_status_with_http_info
read_namespaced_persistent_volume_claim_with_http_info
read_persistent_volume
read_persistent_volume_status
read_persistent_volume_status_with_http_info
read_persistent_volume_with_http_info
</code></pre>
<p>...it looks like <code>read_persistent_volume</code> is probably what we want. Running <code>help(v1.read_persistent_volume)</code> gives us:</p>
<pre><code>read_persistent_volume(name, **kwargs) method of kubernetes.client.api.core_v1_api.CoreV1Api instance
read_persistent_volume
read the specified PersistentVolume
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.read_persistent_volume(name, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: name of the PersistentVolume (required)
:param str pretty: If 'true', then the output is pretty printed.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: V1PersistentVolume
If the method is called asynchronously,
returns the request thread.
</code></pre>
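<p>Putting it together, a small sketch (the claim name and namespace are placeholders): for a CSI-provisioned EBS volume the ID is in <code>spec.csi.volume_handle</code>, while older in-tree EBS volumes expose it under <code>spec.aws_elastic_block_store.volume_id</code>.</p>
<pre><code>from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

pvc = v1.read_namespaced_persistent_volume_claim("my-claim", "my-namespace")
pv = v1.read_persistent_volume(pvc.spec.volume_name)

if pv.spec.csi:                           # CSI driver, e.g. ebs.csi.aws.com
    volume_id = pv.spec.csi.volume_handle
elif pv.spec.aws_elastic_block_store:     # legacy in-tree EBS volume
    volume_id = pv.spec.aws_elastic_block_store.volume_id
else:
    volume_id = None

print(volume_id)
</code></pre>
<p>With that ID you can then query the EBS volume status from boto3 or the AWS CLI.</p>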
|
<p>I am planning to have a Kubernetes cluster for production using Ingress for external requests.</p>
<p>I have an elastic database that is not going to be part of the Kubernetes cluster. I have a microservice in the Kubernetes cluster that communicates with the elastic database through HTTP (Get,Post etc).</p>
<p>Should I create another NodePort Service in order to communicate with the elastic database or should it be through the Ingress controller as it's an HTTP request? if both are valid options please let me know what is better to use and why</p>
| <blockquote>
<p>Should I create another NodePort Service in order to communicate with
the elastic database or should it be through the Ingress controller as
it's an HTTP request?</p>
</blockquote>
<p>There is no requirement of it if your k8s cluster is a public, microservices will be able to send requests to the <strong>Elasticsearch</strong> database.</p>
<p><strong>Ingress</strong> and <strong>Egress</strong> endpoints might not be the same point in <strong>K8s</strong>.</p>
<blockquote>
<p>I have a microservice in the Kubernetes cluster that communicates with
the elastic database through HTTP (Get,Post etc).</p>
</blockquote>
<p>There may be some misunderstanding: Ingress is only for incoming requests. It does not mean that when you run a microservice on Kubernetes, its outgoing HTTP (egress) requests will go through the same path.</p>
<p>If your microservice is running on the K8s cluster, it will use the IP of the node on which the pod is running as its outgoing IP.</p>
<p>You can verify this quickly using <strong>kubectl exec</strong> command</p>
<pre><code>kubectl exec -it <Any POD name> -n <namespace name> -- /bin/bash
</code></pre>
<p>Run the command now</p>
<pre><code>curl https://ifconfig.me
</code></pre>
<p>The above command will respond with the IP from which requests leave your cluster; it will be the IP of the node on which your pod is scheduled.</p>
<p><strong>Extra</strong></p>
<p>So you can manage the <strong>ingress</strong> for incoming traffic, and no extra config is required for <strong>egress</strong> traffic. But if you want to whitelist a single IP in the Elasticsearch database, then you have to set up a <strong>NAT gateway</strong>.</p>
<p>All traffic from the K8s microservices will then go out from a <strong>single IP</strong> (the NAT gateway's IP), which will be a different IP from the <strong>Ingress IP</strong>.</p>
<p>If you are on GCP, here is terraform script to setup the NAT gateway also : <a href="https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway" rel="nofollow noreferrer">https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway</a></p>
<p>You will get a good idea by looking at the diagram in the above link.</p>
|
<p>I noticed a strange behavior while experimenting with <code>kubectl run</code> :</p>
<ul>
<li><p>When the command to be executed is passed as option flag <code>--command -- /bin/sh -c "ls -lah"</code> > <strong>OK</strong></p>
<pre><code>kubectl run nodejs --image=node:lts-alpine \
--restart=Never --quiet -i --rm \
--command -- /bin/sh -c "ls -lah"
</code></pre>
</li>
<li><p>When command to be executed is passed in <code>--overrides</code> with <code>"command": [ "ls", "-lah" ]</code> > <strong>OK</strong></p>
<pre><code>kubectl run nodejs --image=node:lts-alpine \
--restart=Never \
--overrides='
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "nodejs"
},
"spec": {
"volumes": [
{
"name": "host-volume",
"hostPath": {
"path": "/home/dferlay/Sources/df-sdc/web/themes/custom/"
}
}
],
"containers": [
{
"name": "nodejs",
"image": "busybox",
"command": [
"ls",
"-lah"
],
"workingDir": "/app",
"volumeMounts": [
{
"name": "host-volume",
"mountPath": "/app"
}
],
"terminationMessagePolicy": "FallbackToLogsOnError",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Never",
"securityContext": {
"runAsUser": 1000,
"runAsGroup": 1000
}
}
}
' \
--quiet -i --rm
</code></pre>
</li>
<li><p>When the command to be executed is passed as option flag <code>--command -- /bin/sh -c "ls -lah"</code> and <code>--overrides</code> is used for something else (volume for instance) > <strong>KO</strong></p>
<pre><code>kubectl run nodejs --image=node:lts-alpine --restart=Never \
--overrides='
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "nodejs"
},
"spec": {
"volumes": [
{
"name": "host-volume",
"hostPath": {
"path": "/home/dferlay/Sources/df-sdc/web/themes/custom/"
}
}
],
"containers": [
{
"name": "nodejs",
"image": "busybox",
"workingDir": "/app",
"volumeMounts": [
{
"name": "host-volume",
"mountPath": "/app"
}
],
"terminationMessagePolicy": "FallbackToLogsOnError",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Never",
"securityContext": {
"runAsUser": 1000,
"runAsGroup": 1000
}
}
}
' \
--quiet -i --rm --command -- /bin/sh -c "ls -lah"
</code></pre>
</li>
</ul>
<p>So it looks like using <code>--overrides</code> prevents <code>--command</code> to be used.</p>
<p>However, I precisely need to use <code>--command</code> to bypass the array format expected by <code>--overrides</code> (ie. <code>"command": [ "ls", "-lah" ]</code>) because in my use case the command is a placeholder and cannot be known in advance.</p>
<ul>
<li>How can I do that ? Is there something I'm missing ?</li>
</ul>
<p>FYI: <code>kubectl version=v1.23.1+k3s2</code></p>
| <p>You can bypass the array format by using the <code>args</code> field:</p>
<pre><code>"command": [
"sh",
"-c"
],
"args": [ "pwd && id && node YOUR_COMMAND" ]
</code></pre>
|
<p>Is there a way to assign pods to the nodes in a particular nodegroup without labeling each node in the nodegroup?</p>
<p>E.g.:</p>
<p>Suppose I have two nodegroup <code>NG1</code> and <code>NG2</code>, and I have two apps <code>A1</code> and <code>A2</code></p>
<p>I want to assign pods of app <code>A1</code> to nodegroup <code>NG1</code> and pods of app <code>A2</code> to nodegroup <code>A2</code>. (I don't want to assign labels to each node in the nodegroup manually and then use nodeselector)</p>
| <p>You can use some of the default node labels if they are present and not the same across the two node pools:</p>
<pre><code>failure-domain.beta.kubernetes.io/zone
failure-domain.beta.kubernetes.io/region
beta.kubernetes.io/instance-type
beta.kubernetes.io/os
beta.kubernetes.io/arch
</code></pre>
<p>For example, if both of your Node pool running the different type of instances, you can use the <code>beta.kubernetes.io/instance-type</code></p>
<p><strong>Example</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: with-node-affinity
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/instance-type
operator: In
values:
- Node Type
- Node Type
containers:
- name: with-node-affinity
image: registry.k8s.io/pause:2.0
</code></pre>
<p>You can also use <code>topology.kubernetes.io/zone</code> if the node pools are in different zones:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: with-node-affinity
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- antarctica-east1
- antarctica-west1
containers:
- name: with-node-affinity
image: registry.k8s.io/pause:2.0
</code></pre>
<p><strong>Update</strong></p>
<p>If all those labels are the <strong>same</strong>, you can try the command below, which selects and labels all nodes from a specific node group (<code>alpha.eksctl.io/nodegroup-name=ng-1</code>):</p>
<pre><code>kubectl label nodes -l alpha.eksctl.io/nodegroup-name=ng-1 new-label=foo
</code></pre>
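<p>Once the nodes carry that label, the pods of app A1 can target the node group with a plain <code>nodeSelector</code> (the label key/value below are the ones from the command above; use your own):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: a1-pod
spec:
  nodeSelector:
    new-label: foo     # label applied by the kubectl label command above
  containers:
  - name: a1
    image: registry.k8s.io/pause:2.0
</code></pre>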
|
<p>I'm using minikube on a Fedora based machine to run a simple mongo-db deployment on my local machine but I'm constantly getting <code>ImagePullBackOff</code> error. Here is the yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
labels:
app: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
spec:
selector:
app: mongodb
ports:
- protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p>I tried to pull the image locally by using <code>docker pull mongo</code>, <code>minikube image pull mongo</code> & <code>minikube image pull mongo-express</code> several times while restarting docker and minikube several times.</p>
<p>Logging into dockerhub (both in the browser and through the terminal) didn't work.</p>
<p>I also tried to log into docker using the <code>docker login</code> command, then modified my <code>/etc/resolv.conf</code> by adding <code>nameserver 8.8.8.8</code> and restarted docker using <code>sudo systemctl restart docker</code>, but even that failed to work.</p>
<p>On running <code>kubectl describe pod</code> command I get this output:</p>
<pre><code>Name: mongodb-deployment-6bf8f4c466-85b2h
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 29 Aug 2022 23:04:12 +0530
Labels: app=mongodb
pod-template-hash=6bf8f4c466
Annotations: <none>
Status: Pending
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/mongodb-deployment-6bf8f4c466
Containers:
mongodb:
Container ID:
Image: mongo
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MONGO_INITDB_ROOT_USERNAME: <set to the key 'mongo-root-username' in secret 'mongodb-secret'>
Optional: false
MONGO_INITDB_ROOT_PASSWORD: <set to the key 'mongo-root-password' in secret 'mongodb-secret'>
Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vlcxl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-vlcxl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
</code></pre>
<hr />
<pre><code> Normal Scheduled 22m default-scheduler Successfully assigned default/mongodb-deployment-6bf8f4c466-85b2h to minikube
Warning Failed 18m (x2 over 20m) kubelet Failed to pull image "mongo:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 18m (x2 over 20m) kubelet Error: ErrImagePull
Normal BackOff 17m (x2 over 20m) kubelet Back-off pulling image "mongo:latest"
Warning Failed 17m (x2 over 20m) kubelet Error: ImagePullBackOff
Normal Pulling 17m (x3 over 22m) kubelet Pulling image "mongo:latest"
Normal SandboxChanged 11m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulling 3m59s (x4 over 11m) kubelet Pulling image "mongo:latest"
Warning Failed 2m (x4 over 9m16s) kubelet Failed to pull image "mongo:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 2m (x4 over 9m16s) kubelet Error: ErrImagePull
Normal BackOff 83s (x7 over 9m15s) kubelet Back-off pulling image "mongo:latest"
Warning Failed 83s (x7 over 9m15s) kubelet Error: ImagePullBackOff
</code></pre>
<p>PS: Ignore any spacing errors</p>
| <p>I think your internet connection is slow. The timeout to pull an image is <code>120</code> seconds, and the kubelet could not pull the image in under <code>120</code> seconds.</p>
<p>First, pull the image via <code>Docker</code></p>
<pre class="lang-bash prettyprint-override"><code>docker image pull mongo
</code></pre>
<p>Then load the downloaded image to <code>minikube</code></p>
<pre class="lang-bash prettyprint-override"><code>minikube image load mongo
</code></pre>
<p>And then everything should work, because the kubelet will now use the image that is stored locally.</p>
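<p>As a quick sanity check, you can confirm the image actually landed inside minikube. Also note that with no tag (or <code>:latest</code>) Kubernetes defaults <code>imagePullPolicy</code> to <code>Always</code>, which still forces a registry lookup; a rough patch to prefer the local image is sketched below (the Deployment and container names are taken from your pod description, so adjust if they differ):</p>
<pre class="lang-bash prettyprint-override"><code># confirm the image is present in minikube's runtime
minikube image ls | grep mongo

# use the locally loaded image instead of re-pulling from the registry
kubectl patch deployment mongodb-deployment --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'
</code></pre>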
|
<p>This is my <code>ClusterRoleBinding</code> and <code>ClusterRole</code> defination</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: my-namespaces
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: bootstrap
subjects:
- kind: ServiceAccount
name: executors
namespace: bootstrap
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: bootstrap
rules:
- apiGroups:
- '*'
resources:
- namespaces
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
</code></pre>
<p>The service account</p>
<pre><code>[node1 ~]$ kubectl get sa executors -n bootstrap -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2022-08-30T19:51:17Z"
name: executors
namespace: bootstrap
resourceVersion: "2209"
uid: 488f5a2d-c44d-4db1-8d18-11a4f0206952
secrets:
- name: executors-token-2b2wl
</code></pre>
<p>The test Config</p>
<pre><code>[node1 ~]$ kubectl create namespace test --as=executors
Error from server (Forbidden): namespaces is forbidden: User "executors" cannot create resource "namespaces" in API group "" at the cluster scope
</code></pre>
<pre><code>[node1 ~]$ kubectl auth can-i create namespace --as=executors
Warning: resource 'namespaces' is not namespace scoped
no
</code></pre>
<p>Why am I getting the above error? I did follow the Kubernetes docs on ClusterRoleBinding here.</p>
| <p>Try this and let me know how it goes.</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: my-namespaces
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: bootstrap
subjects:
- kind: ServiceAccount
name: executors
namespace: bootstrap
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: bootstrap
rules:
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
</code></pre>
<p>I see that in my cluster the ClusterRole <code>system:controller:namespace-controller</code> has <code>apiGroups</code> of <code>''</code> instead of the <code>'*'</code> seen in your original ClusterRole.</p>
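<p>Also note that <code>--as=executors</code> impersonates a plain user named <code>executors</code>, not the ServiceAccount the ClusterRoleBinding targets. A rough way to test against the ServiceAccount itself:</p>
<pre class="lang-bash prettyprint-override"><code># impersonate the ServiceAccount rather than a plain user
kubectl auth can-i create namespaces \
  --as=system:serviceaccount:bootstrap:executors
</code></pre>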
|
<p>Can you please assist? When deploying, we are getting ImagePullBackOff for our pods.</p>
<p>Running <code>kubectl get &lt;pod-name&gt; -n namespace -o yaml</code>, I get the error below.</p>
<pre class="lang-yaml prettyprint-override"><code>containerStatuses:
- image: mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644
imageID: ""
lastState: {}
name: dmd-base
ready: false
restartCount: 0
started: false
state:
waiting:
message: Back-off pulling image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644"
reason: ImagePullBackOff
hostIP: x.x.x.53
phase: Pending
podIP: x.x.x.237
</code></pre>
<p>And when running <code>kubectl describe pod &lt;pod-name&gt; -n namespace</code>, I get the following error information:</p>
<pre class="lang-none prettyprint-override"><code> Normal Scheduled 85m default-scheduler Successfully assigned dmd-int/app-app-base-5b4b75756c-lrcp6 to aks-agentpool-35064155-vmss00000a
Warning Failed 85m kubelet Failed to pull image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
[rpc error: code = Unknown desc = failed to pull and unpack image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to resolve reference "mycontainer-registry.io/commpany/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to do request: Head "https://mycontainer-registry.azurecr.io/v2/company/my-app/manifests/1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
dial tcp: lookup mycontainer-registry.azurecr.io on [::1]:53: read udp [::1]:56109->[::1]:53: read: connection refused,
rpc error: code = Unknown desc = failed to pull and unpack image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to resolve reference "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to do request: Head "https://mycontainer-registry.io/v2/company/my-app/manifests/1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
dial tcp: lookup mycontainer-registry.io on [::1]:53: read udp [::1]:60759->[::1]:53: read: connection refused]`
</code></pre>
<p>From the described logs I can see that the issue is connectivity-related, but I can't tell where the connectivity problem is. We are running our apps in a Kubernetes cluster on Azure.</p>
<p>If anyone has come across this issue, can you please assist? The application has been running successfully for the past months; we only got this issue this morning.</p>
| <p>There is a known Azure outage affecting multiple regions today: a DNS issue that also affects image pulls. See <a href="https://status.azure.com/en-us/status" rel="noreferrer">https://status.azure.com/en-us/status</a>.</p>
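<p>If you want to rule out anything cluster-side while the outage is ongoing, a rough in-cluster DNS check looks like this (substitute your real registry hostname for the redacted one in your logs):</p>
<pre class="lang-bash prettyprint-override"><code># run a throwaway pod and try to resolve the registry hostname
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup mycontainer-registry.azurecr.io
</code></pre>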
|
<p>I'm trying to install Kubernetes with dashboard but I get the following issue:</p>
<pre><code>test@ubuntukubernetes1:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-ksc9n 0/1 CrashLoopBackOff 14 (2m15s ago) 49m
kube-system coredns-6d4b75cb6d-27m6b 0/1 ContainerCreating 0 4h
kube-system coredns-6d4b75cb6d-vrgtk 0/1 ContainerCreating 0 4h
kube-system etcd-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kube-system kube-apiserver-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kube-system kube-controller-manager-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kube-system kube-proxy-6v8w6 1/1 Running 1 (106m ago) 4h
kube-system kube-scheduler-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h
kubernetes-dashboard dashboard-metrics-scraper-7bfdf779ff-dfn4q 0/1 Pending 0 48m
kubernetes-dashboard dashboard-metrics-scraper-8c47d4b5d-9kh7h 0/1 Pending 0 73m
kubernetes-dashboard kubernetes-dashboard-5676d8b865-q459s 0/1 Pending 0 73m
kubernetes-dashboard kubernetes-dashboard-6cdd697d84-kqnxl 0/1 Pending 0 48m
test@ubuntukubernetes1:~$
</code></pre>
<p>Log files:</p>
<pre><code>test@ubuntukubernetes1:~$ kubectl logs --namespace kube-flannel kube-flannel-ds-ksc9n
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I0808 23:40:17.324664 1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0808 23:40:17.324753 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0808 23:40:17.547453 1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-ksc9n': pods "kube-flannel-ds-ksc9n" is forbidden: User "system:serviceaccount:kube-flannel:flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"
test@ubuntukubernetes1:~$
</code></pre>
<p>Do you know how this issue can be solved? I tried the following installation:</p>
<pre><code>Swapoff -a
Remove following line from /etc/fstab
/swap.img none swap sw 0 0
sudo apt update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
sudo apt install apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
sudo mv ~/kubernetes.list /etc/apt/sources.list.d
sudo apt update
sudo apt install kubeadm kubelet kubectl kubernetes-cni
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
kubectl proxy --address 192.168.1.133 --accept-hosts '.*'
</code></pre>
<p>Can you advise?</p>
| <p>I had the same situation on a new deployment today. Turns out, the kube-flannel-rbac.yml file had the wrong namespace. It's now 'kube-flannel', not 'kube-system', so I modified it and re-applied.</p>
<p>I also added a 'namespace' entry under each 'name' entry in kube-flannel.yml, except under the roleRef heading (it threw an error when I added it there). All pods came up as 'Running' after the new yml was applied.</p>
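<p>As a rough sketch of the steps (the <code>sed</code> assumes your copy of the rbac manifest still references <code>kube-system</code>):</p>
<pre class="lang-bash prettyprint-override"><code># download the rbac manifest so it can be edited
curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

# switch the namespace from kube-system to kube-flannel
sed -i 's/namespace: kube-system/namespace: kube-flannel/g' kube-flannel-rbac.yml

# re-apply and watch the flannel pod come up
kubectl apply -f kube-flannel-rbac.yml
kubectl -n kube-flannel get pods -w
</code></pre>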
|
<p>Good day!</p>
<p>I am facing a strange problem. I have a standard deployment that uses a public image. But when I create it, I get the error <strong>ImagePullBackOff</strong></p>
<pre><code>$ kubectl get pods
</code></pre>
<p>result</p>
<pre><code>api-gateway-deployment-74968fbf5c-cvqwj 0/1 ImagePullBackOff 0 6h23m
api-gateway-gateway-deployment-74968fbf5c-hpdxb 0/1 ImagePullBackOff 0 6h23m
api-gateway-gateway-deployment-74968fbf5c-rctv6 0/1 ImagePullBackOff 0 6h23m
</code></pre>
<p>my deployment</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: api-gateway-deployment
labels:
app: api-gateway-deployment
spec:
replicas: 3
selector:
matchLabels:
app: api-gateway-deployment
template:
metadata:
labels:
app: api-gateway-deployment
spec:
containers:
- name: api-gateway-node
image: creatorsprodhouse/api-gateway:latest
imagePullPolicy: Always
ports:
- containerPort: 80
</code></pre>
<p>I am using the docker driver; is there anything I could be doing wrong?</p>
<pre><code>minikube start --driver=docker
</code></pre>
| <p>I think your internet connection is slow. The image pull timeout is <code>120</code> seconds, and the kubelet could not pull the image in under <code>120</code> seconds.</p>
<p>First, pull the image via <code>Docker</code></p>
<pre class="lang-bash prettyprint-override"><code>docker image pull creatorsprodhouse/api-gateway:latest
</code></pre>
<p>Then load the downloaded image to <code>minikube</code></p>
<pre class="lang-bash prettyprint-override"><code>minikube image load creatorsprodhouse/api-gateway:latest
</code></pre>
<p>And then everything should work, because the kubelet will now use the image that is stored locally.</p>
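<p>One caveat: your Deployment sets <code>imagePullPolicy: Always</code>, which makes the kubelet contact the registry on every start even when the image is already loaded. A rough patch to prefer the locally loaded image (names taken from your manifest):</p>
<pre class="lang-bash prettyprint-override"><code># confirm the image is inside minikube
minikube image ls | grep api-gateway

# stop forcing a registry pull so the loaded image is used
kubectl patch deployment api-gateway-deployment --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'
</code></pre>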
|
<p>We are running Grafana on EKS Kubernetes v1.21 as a Helm deployment behind a Traefik reverse proxy.</p>
<p>Grafana version: <code>v9.0.3</code></p>
<p>Recently, Grafana has been posting this same log message every minute without fail:</p>
<pre><code>2022-08-24 15:52:47
logger=context traceID=00000000000000000000000000000000 userId=0 orgId=0 uname= t=2022-08-24T13:52:47.293094029Z level=info msg="Request Completed" method=GET path=/api/live/ws status=401 remote_addr=10.1.3.153 time_ms=4 duration=4.609805ms size=27 referer= traceID=00000000000000000000000000000000
2022-08-24 15:52:47
logger=context traceID=00000000000000000000000000000000 t=2022-08-24T13:52:47.290478899Z level=error msg="Failed to look up user based on cookie" error="user token not found"
</code></pre>
<p>I can't confirm whether these two log messages are related but I believe they are.</p>
<p>I cannot find any user with id <code>0</code>.</p>
<p>Another log error I see occasionally is</p>
<pre><code>2022-08-24 15:43:43
logger=ngalert t=2022-08-24T13:43:43.020553296Z level=error msg="unable to fetch orgIds" msg="context canceled"
</code></pre>
<p>What I can see is that the <code>remote_addr</code> refers to the node in our cluster that Grafana is deployed on.</p>
<p>Can anyone explain why this is continually hitting the endpoint shown?</p>
<p>Thanks!</p>
| <p>The Grafana Live feature is real-time messaging that uses websockets. It is used in Grafana for notifying on events like someone else is editing the same dashboard as you. It can also be used for streaming data directly to Grafana. <a href="https://grafana.com/docs/grafana/latest/setup-grafana/set-up-grafana-live/" rel="nofollow noreferrer">Docs here</a></p>
<p>You can either turn off Grafana Live or configure your proxy to allow websockets.</p>
<ul>
<li><a href="https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#max_connections" rel="nofollow noreferrer">Turn it off by setting config option <code>max_connections</code> to zero</a></li>
<li><a href="https://grafana.com/tutorials/run-grafana-behind-a-proxy/" rel="nofollow noreferrer">Instructions on how to configure the Traefik proxy with Grafana</a></li>
<li><a href="https://grafana.com/docs/grafana/latest/setup-grafana/set-up-grafana-live/#configure-grafana-live" rel="nofollow noreferrer">Setup guide for Grafana Live</a></li>
</ul>
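<p>For the first option in the list above, a minimal sketch for a Helm-managed install might look like the command below. It assumes the official <code>grafana/grafana</code> chart (where custom ini settings live under the <code>grafana.ini</code> values key) and a release named <code>grafana</code>, so adjust both for your setup:</p>
<pre class="lang-bash prettyprint-override"><code># disable Grafana Live by allowing zero websocket connections
helm upgrade grafana grafana/grafana --reuse-values \
  --set 'grafana\.ini.live.max_connections=0'
</code></pre>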
|