Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I'm currently running Sentry in Kubernetes with automatic certificate generation using Let's Encrypt and cert-manager. When Sentry attempts to send an error to the Sentry server, the following error is thrown: </p>
<pre><code>urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)> (url: https://example.host.com/)
</code></pre>
<p>I have verified that the correct Python packages for 2.7.15 have been installed, including <code>certifi</code> and <code>urllib2</code> along with their dependencies. </p>
<p>Turning off TLS Verification works, but this is a last resort. Security is very important even though this is an internally hosted service. </p>
| Norman Shipman | <p>It has been my experience that even the most up-to-date <code>ca-certificates</code> packages sometimes don't contain all <a href="https://letsencrypt.org/certificates/" rel="nofollow noreferrer">3 Let's Encrypt certificates</a>. The solution(?) is to download them into the "user-controlled" certificate directory (often <code>/usr/local/share/ca-certificates</code>) and then re-run <a href="http://manpages.ubuntu.com/manpages/bionic/en/man8/update-ca-certificates.8.html" rel="nofollow noreferrer"><code>update-ca-certificates</code></a>:</p>
<pre class="lang-sh prettyprint-override"><code># the first one very likely is already in your chain,
# but including it here won't hurt anything
for i in isrgrootx1.pem.txt lets-encrypt-x3-cross-signed.pem.txt letsencryptauthorityx3.pem.txt
do
curl -vko /usr/local/share/ca-certificates/`basename $i .pem.txt`.crt \
https://letsencrypt.org/certs/$i
done
update-ca-certificates
</code></pre>
<p>The ideal outcome would be to do that process for every Node in your cluster, and then volume mount the <em>actual</em> ssl directory into the containers, so every container benefits from the latest certificates. However, I would guess just doing it in the affected containers could work, too.</p>
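<p>A minimal sketch of that mount, assuming a Debian/Ubuntu-style image where the merged bundle lives under <code>/etc/ssl/certs</code> (the container name is a placeholder; adjust the paths for your distribution):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
  - name: sentry
    # ... image, env, etc. ...
    volumeMounts:
    - name: host-ca-certs
      mountPath: /etc/ssl/certs
      readOnly: true
  volumes:
  - name: host-ca-certs
    hostPath:
      path: /etc/ssl/certs
      type: Directory
</code></pre>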
| mdaniel |
<p>I am using the auditSink object in order to get the audit logs.
I didn't find any documentation/API regarding a retry option for audit logs.
What happens in case the web server / service is not available?</p>
<p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#auditsink-v1alpha1-auditregistration-k8s-io" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#auditsink-v1alpha1-auditregistration-k8s-io</a></p>
| inza | <p><a href="https://github.com/kubernetes/kubernetes/blob/v1.18.3/staging/src/k8s.io/apiserver/plugin/pkg/audit/dynamic/factory.go#L34" rel="nofollow noreferrer">The fine source</a> implies there is a retry mechanism, and thus the need for configuring its backoff, but aside from whatever you can find by surfing around in the source, I don't know that any promises have been made about deliverability. If you need such guarantees, you may be happier sending audit events to stdout or to disk and then egressing them the way you would with any other log content.</p>
| mdaniel |
<p>I was trying to use prometheus to do monitoring in kubernetes. We have some metrics stored in an external postgres database, so first I would like to install a postgres exporter. I used this helm chart to install it: <a href="https://github.com/helm/charts/tree/master/stable/prometheus-postgres-exporter" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/prometheus-postgres-exporter</a>
And filled values.yaml with my database info. After installing, it provided me with the instruction below: </p>
<pre><code>NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus-postgres-exporter,release=veering-seastar" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
</code></pre>
<p>But when I try to forward the port, I got connection refused:</p>
<pre><code>Handling connection for 8080
E0801 19:51:02.781508 22099 portforward.go:331] an error occurred forwarding 8080 -> 80: error forwarding port 80 to pod 37a502b22a15fefcbddd3907669a448c99e4927515fa6cdd6fd87ef774993b6b, uid : exit status 1: 2018/08/02 02:51:02 socat[32604] E connect(5, AF=2 127.0.0.1:80, 16): Connection refused
</code></pre>
<p>However the pod is working properly when I do kubectl describe, and there are only three lines of logs there:</p>
<pre><code>time="2018-08-02T01:08:45Z" level=info msg="Established new database connection." source="postgres_exporter.go:995"
time="2018-08-02T01:08:45Z" level=info msg="Semantic Version Changed: 0.0.0 -> 9.5.12" source="postgres_exporter.go:925"
time="2018-08-02T01:08:46Z" level=info msg="Starting Server: :9187" source="postgres_exporter.go:1137"
</code></pre>
<p>Is there anything I'm missing here to get it working and be able to see the metrics via port forwarding?</p>
| Lavender | <blockquote>
<p>time="2018-08-02T01:08:46Z" level=info msg="Starting Server: :9187" source="postgres_exporter.go:1137"</p>
</blockquote>
<p>It looks like the <a href="https://github.com/helm/charts/blob/master/stable/prometheus-postgres-exporter/templates/NOTES.txt#L13" rel="nofollow noreferrer">chart notes text</a> just has a copy-paste error, since the port number is not <code>:80</code> but rather <code>:9187</code>, which is great because it squares with the <a href="https://github.com/prometheus/prometheus/wiki/Default-port-allocations" rel="nofollow noreferrer">postgresql exporter port</a> in their registry.</p>
<p>So, it should be:</p>
<pre><code>kubectl port-forward $POD_NAME 9187:9187 &
sleep 2
curl localhost:9187/metrics
</code></pre>
| mdaniel |
<p>I am looking to have a dynamic <code>etcd</code> cluster running inside my k8s cluster. The best way I can think of doing it dynamically (no hardcoded addresses, names, etc.) is to use DNS discovery, with the internal k8s DNS (CoreDNS).</p>
<p>I find detached information about <code>SRV</code> records created for services in k8s, and some explanations on how <code>etcd</code> DNS discovery works, but no complete howto.</p>
<p>For example:</p>
<ul>
<li>how does k8s name <code>SRV</code> entries?</li>
<li>should they be named with a specific way for <code>etcd</code> to be able to find them?</li>
<li>should any special CoreDNS setting be set?</li>
</ul>
<p>Any help on that would be greatly appreciated.</p>
<p>references:</p>
<ul>
<li><a href="https://coreos.com/etcd/docs/latest/v2/clustering.html#dns-discovery" rel="nofollow noreferrer">https://coreos.com/etcd/docs/latest/v2/clustering.html#dns-discovery</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></li>
</ul>
| Ehud Kaldor | <blockquote>
<p>how does k8s name SRV entries?</p>
</blockquote>
<p>via the <code>Service.ports[].name</code>, which is why almost everything in kubernetes has to be a DNS-friendly name: because a lot of times, it does put them in DNS for you.</p>
<p>A Pod that has <code>dig</code> or a new enough <code>nslookup</code> will then show you:</p>
<pre><code>$ dig SRV kubernetes.default.svc.cluster.local.
</code></pre>
<p>and you'll see the names of the ports that the <code>kubernetes</code> <code>Service</code> is advertising.</p>
<blockquote>
<p>should they be named with a specific way for etcd to be able to find them?</p>
</blockquote>
<p>Yes, as one can see in the page you linked to, they need to be named one of these four:</p>
<ul>
<li><code>_etcd-client</code></li>
<li><code>_etcd-client-ssl</code></li>
<li><code>_etcd-server</code></li>
<li><code>_etcd-server-ssl</code></li>
</ul>
<p>so something like this on the kubernetes side:</p>
<pre><code>ports:
- name: etcd-client
port: 2379
containerPort: whatever
- name: etcd-server
port: 2380
containerPort: whatever
</code></pre>
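<p>Spelled out as a complete (hypothetical) headless <code>Service</code> -- the <code>etcd</code> name and <code>app: etcd</code> selector are assumptions, substitute your own -- that would be:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: etcd
spec:
  clusterIP: None   # headless, so the SRV targets resolve to the individual Pods
  selector:
    app: etcd
  ports:
  - name: etcd-client
    port: 2379
  - name: etcd-server
    port: 2380
</code></pre>
<p>kubernetes then publishes the SRV records as <code>_etcd-client._tcp.etcd.default.svc.cluster.local</code> and <code>_etcd-server._tcp.etcd.default.svc.cluster.local</code> (assuming the <code>default</code> namespace), which is the domain you would hand to etcd's <code>--discovery-srv</code> flag.</p>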
| mdaniel |
<pre><code>from os import getenv, listdir, path
from kubernetes import client, config
from kubernetes.client.rest import ApiException  # needed for the except ApiException below
from kubernetes.stream import stream
import constants, logging
from pprint import pprint


def listdir_fullpath(directory):
    return [path.join(directory, file) for file in listdir(directory)]


def active_context(kubeConfig, cluster):
    config.load_kube_config(config_file=kubeConfig, context=cluster)


def kube_exec(command, apiInstance, podName, namespace, container):
    response = None
    execCommand = [
        '/bin/bash',
        '-c',
        command]
    try:
        response = apiInstance.read_namespaced_pod(name=podName,
                                                   namespace=namespace)
    except ApiException as e:
        if e.status != 404:
            print(f"Unknown error: {e}")
            exit(1)
    if not response:
        print("Pod does not exist")
        exit(1)
    try:
        response = stream(apiInstance.connect_get_namespaced_pod_exec,
                          podName,
                          namespace,
                          container=container,
                          command=execCommand,
                          stderr=True,
                          stdin=False,
                          stdout=True,
                          tty=False,
                          _preload_content=True)
    except Exception as e:
        print("error in executing cmd")
        exit(1)
    pprint(response)


if __name__ == '__main__':
    configPath = constants.CONFIGFILE
    kubeConfigList = listdir_fullpath(configPath)
    kubeConfig = ':'.join(kubeConfigList)
    active_context(kubeConfig, "ort.us-west-2.k8s.company-foo.net")
    apiInstance = client.CoreV1Api()
    kube_exec("whoami", apiInstance, "podname-foo", "namespace-foo", "container-foo")
</code></pre>
<p>I run this code
and the response I get from running <code>whoami</code> is <code>'java\n'</code>.
How can I run as root? Also, I can't find a good doc for this client anywhere (the docs on the git repo are pretty horrible); if you can link me to any it would be awesome</p>
<p>EDIT: I just tried on a couple of different pods and containers, looks like some of them default to root, would still like to be able to choose my user when I run a command so question is still relevant</p>
| ConscriptMR | <blockquote>
<p>some of them default to root, would still like to be able to choose my user when I run a command so question is still relevant</p>
</blockquote>
<p>You have influence over the UID (not the user directly, as far as I know) when you <em>launch</em> the Pod, but from that point forward, there is no equivalent to <code>docker exec -u</code> in kubernetes -- you can attach to the Pod, running as whatever UID it was launched as, but you cannot change the UID</p>
<p>I would hypothesize that's a security concern in locked down clusters, since one would not want someone with kubectl access to be able to elevate privileges</p>
<p>If you need to run as <code>root</code> in your container, then you should change the value of <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="nofollow noreferrer"><code>securityContext: runAsUser: 0</code></a> and then drop privileges for running your main process. That way new commands (spawned by your <code>exec</code> command) will run as root, just as your initial <code>command:</code> does</p>
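<p>A minimal sketch of that (the container name and image are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
  - name: your-app
    image: your-image:tag
    securityContext:
      runAsUser: 0   # exec'd processes inherit this UID, i.e. root
    # the entrypoint is then responsible for dropping privileges
    # (gosu, su-exec, setpriv, ...) before starting the main process
</code></pre>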
| mdaniel |
<p>Kubernetes documentation is saying that multi-zone clusters are supported, but not the multi-region ones. At the same time Kubernetes has support for both <code>failure-domain/zone</code> and <code>failure-domain/region</code>.</p>
<p>What are the downsides of having my Kubernetes clusters to be multi-zone and multi-region at the same time? Is it only latency and if so what are the required latency numbers for it to be reliable?</p>
<p>On a plus side I see service discovery and being able to deploy applications across multiple regions without any extra tooling on top of it.</p>
<p>I know there's federation v1 and v2 being worked on but it seems to be adding a lot of complexity and v2 is far from being production ready.</p>
| Maklaus | <p>This is speculative, but it's <em>informed</em> speculation, so hopefully that means it'll still be helpful</p>
<p>Let's take two things that kubernetes does and extrapolate them into a multi-region cluster:</p>
<ul>
<li>load balancer membership -- at least on AWS, there is no mechanism for adding members of a different region to a load balancer, meaning <code>type: LoadBalancer</code> could not assign all <code>Pod</code>s to the <code>Service</code></li>
<li>persistent volume attachment -- similarly on AWS, there is no mechanism for attaching EBS volumes across even availability zones, to say nothing of across regions</li>
</ul>
<p>For each of those, one will absolutely be able to find "yes, but!" scenarios to demonstrate a situation where these restrictions won't matter. However, since kubernetes is trying to solve for the general case, in a cloud-agnostic way, that's my strong suspicion why they would recommend against even trying a multi-region cluster -- regardless of whether it happens to work for your situation right now.</p>
| mdaniel |
<p>Please find the GitLab repo for the Terraform scripts which we are using:
<a href="https://gitlab.com/komati.udaykiran/gkewithterraform" rel="nofollow noreferrer">https://gitlab.com/komati.udaykiran/gkewithterraform</a>
Running <code>terraform plan</code> gives the below error for the <code>all-in-one.yaml</code> file for Elasticsearch.</p>
<pre><code>Error: Error in function call
on kubernetes.tf line 49, in locals:
49: resource_list = yamldecode(file("${path. module}/all-in-one.yaml")).items
|----------------
| path.module is "."
Call to function "yamldecode" failed: on line 458, column 1: unexpected extra
content after value.
</code></pre>
<p><a href="https://i.stack.imgur.com/k7sAy.png" rel="nofollow noreferrer">enter image description here</a></p>
| udaykiran komati | <p>As is described in <a href="https://www.terraform.io/docs/configuration/functions/yamldecode.html" rel="nofollow noreferrer">the fine manual</a>:</p>
<blockquote>
<p>Only one YAML document is permitted. If multiple documents are present in the given string then this function will return an error.</p>
</blockquote>
<p>and one can trivially reproduce your error message:</p>
<pre><code> content = yamldecode("---\nhello: world\n---\ntoo: bad\n")
</code></pre>
<pre><code> on main.tf line 14, in resource "local_file" "example":
14: content = yamldecode("---\nhello: world\n---\ntoo: bad\n")
Call to function "yamldecode" failed: on line 2, column 1: unexpected extra
content after value.
</code></pre>
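<p>Since the elasticsearch <code>all-in-one.yaml</code> is a multi-document file, one workaround is to split on the document separator and decode each piece individually; a sketch (the naive <code>split</code> will break if a document body itself contains a line that is exactly <code>---</code>):</p>
<pre><code>locals {
  documents = [
    for doc in split("\n---\n", file("${path.module}/all-in-one.yaml")) : doc
    if trimspace(doc) != ""
  ]
  resource_list = [for doc in local.documents : yamldecode(doc)]
}
</code></pre>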
| mdaniel |
<p>I'm trying to enable efk in my kubernetes cluster. I find a file about fluentd's config: <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml</a> </p>
<p>In this file, there's:</p>
<pre><code><filter kubernetes.**>
@id filter_kubernetes_metadata
@type kubernetes_metadata
</filter>
# Fixes json fields in Elasticsearch
<filter kubernetes.**>
@id filter_parser
@type parser
key_name log
reserve_data true
remove_key_name_field true
<parse>
@type multi_format
<pattern>
format json
</pattern>
<pattern>
format none
</pattern>
</parse>
</filter>
</code></pre>
<p>I want to use different parsers for different deployments. So I wonder:</p>
<ol>
<li><p>what's 'kubernetes.**' in kubernetes? Is it the name of a deployment or label of a deployment? </p></li>
<li><p>In docker-compose file, we can tag on different containers and use the tag in fluentd's 'filter'. In kubernetes, is there any similar way?</p></li>
</ol>
<p>Thanks for your help!</p>
| user9345277 | <p>It isn't related to kubernetes, or to deployments; it is <code>fluentd</code> syntax that represents the top-level <code>kubernetes</code> "tag" and all its subkeys that are published as an event, as one can see <a href="https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter#example-inputoutput" rel="nofollow noreferrer">here</a></p>
| mdaniel |
<p>I have a cluster provisioned using KubeSpray on AWS. It has two bastions, one controller, one worker, and one etcd server.</p>
<p>I am seeing endless messages in the APISERVER logs:</p>
<pre><code>http: TLS handshake error from 10.250.227.53:47302: EOF
</code></pre>
<p>They come from two IP addresses, <code>10.250.227.53</code> and <code>10.250.250.158</code>. The port numbers change every time.</p>
<p>None of the cluster nodes correspond to those two IP addresses. The subnet cidr ranges are shown below.</p>
<p><a href="https://i.stack.imgur.com/VlBMs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VlBMs.png" alt="Two private and two public cluster subnets" /></a></p>
<p>The cluster seems stable. This behavior does not seem to have any negative effect. But I don't like having random HTTPS requests.</p>
<p>How can I debug this issue?</p>
| David Medinets | <p>They're from the health check configured on the AWS ELB; you can stop those messages by <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html#update-health-check-config" rel="nofollow noreferrer">changing the health check configuration</a> to be <code>HTTPS:6443/healthz</code> instead of the likely <a href="https://github.com/kubernetes-sigs/kubespray/blob/v2.13.2/contrib/terraform/aws/modules/elb/main.tf#L45" rel="nofollow noreferrer"><code>TCP</code> one it is using now</a></p>
<blockquote>
<p>How can I debug this issue?</p>
</blockquote>
<p>Aside from just generally being cognizant of how your cluster was installed, and then observing that those connections come at regular intervals, I would further bet that those two IP addresses belong to the two ENIs that are allocated to the ELB in each public subnet (they'll show up in the Network Interfaces list on the console as "owner: elasticloadbalancer" or something similar)</p>
| mdaniel |
<p>I'm trying to find a quick command to redeploy a pod in Kubernetes.</p>
<ol>
<li>Currently, I pushed my docker image to google cloud.</li>
<li>Then I list my pods</li>
</ol>
<pre><code>kubectl -n pringadi get pods
NAME READY STATUS RESTARTS AGE
audit-server-757ffd5dd-ztn5s 1/1 Running 0 38m
configs-service-75c98f68c7-q928q 1/1 Running 0 36m
</code></pre>
<ol start="3">
<li>I edit the deployment config
<code>kubectl edit deployment audit-server</code>.</li>
<li>Update/change the image name.</li>
<li>Save and exit.</li>
</ol>
<p>Kubernetes immediately recognizes the change and redeploys <code>audit-server</code>.</p>
<p><strong>Question:</strong>
What if I pushed my docker image (a newer image) to google cloud with the same name (Step 4) and just want to redeploy the <code>audit-server</code> based on the current image? Is there a command for that? It's a tedious job to keep editing the deployment config (Step 3)</p>
| RonPringadi | <p>It's wholly unclear what you're trying to do, but taking a guess:</p>
<p><code>kubectl set image deploy audit-server "*=us.gcr.io/whatever/whateverelse:12345"</code> will bump the image in your deployment without having to invoke your editor</p>
<p>Alternatively, use something like <a href="https://github.com/GoogleContainerTools/skaffold#readme" rel="nofollow noreferrer">skaffold</a> or its competitors to continuously push and reload a Pod for development</p>
| mdaniel |
<p>I want to have 2 apps running on kubernetes, I wonder if I can do 2 subdomains using nginx ingress controller.</p>
<p>For example: <code>app1.localhost:8181/cxf</code> and <code>app2.localhost:8181/cxf</code>
each one of those will have different services.</p>
<p>How can I do that?</p>
<p>Some more context here:</p>
<p>EDIT:</p>
<p>Note:mysql is working fine so im not posting the yaml's here so it doesn't get too long.</p>
<p>Note too that im using karaf with a kar.(that will be my app)</p>
<p>I was thinking that maybe I should have 2 nodes? one with mysql and app1 and the other one with mysql and app2? so in one I could access <code>app1.localhost/cxf</code> services and in the other <code>app2.localhost/cxf</code> services... maybe doesn't make much sense... and I was reading that I need kubeadm for that, and there is no way to install it on windows. I think I must use minikube for that instead?</p>
<p>These are my yaml's:</p>
<p><strong>The load balancer:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: lb-service
spec:
type: LoadBalancer
selector:
app: app1
ports:
- protocol: TCP
name: app1
port: 3306
targetPort: 3306
- protocol: TCP
name: app1-8080
port: 8080
targetPort: 8080
- protocol: TCP
name: app1-8101
port: 8101
targetPort: 8101
- protocol: TCP
name: app1-8181
port: 8181
targetPort: 8181
status:
loadBalancer:
ingress:
- hostname: localhost
</code></pre>
<p><strong>app1:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app1-service
spec:
ports:
- port: 8101
selector:
app: app1
clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: app1-deployment
spec:
selector:
matchLabels:
app: app1
replicas: 1
template:
metadata:
labels:
app: app1
spec:
containers:
- name: app1
image: app1:latest
</code></pre>
<p><strong>app2:</strong> is the same as app1 but in a different version (older services)</p>
<p><strong>ingress-resource:</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: apps-ingress
#annotations:
#nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: app1.localhost # tried app1.127-0-0-1.sslip.io ass answered below too.
http:
paths:
- path: /
backend:
serviceName: app1-service
servicePort: 8181
- host: app2.localhost
http:
paths:
- path: /
backend:
serviceName: app2-service
servicePort: 8181
</code></pre>
<p>I should be able to access app1 version in <code>app1.localhost:8181/cxf</code>, and app2 version in <code>app2.localhost:8181/cxf</code></p>
<p>There is another doubt I have: shouldn't I be able to create another loadBalancer? I wanted to, so the selector would be app2 in that loadBalancer, but since I already have one, the new one just stays <code><pending></code> until I remove the first one.</p>
<p>That would make some sense, since if I have 2 replicas of app1, and 2 replicas of app2, there should be a loadBalancer for each app right?</p>
<p>Note that I installed the nginx ingress-controller using helm too, since the ingress-resource would not work otherwise, at least that's what I have read.</p>
<p>By installing that, it installed the nginx load balancer too, and this one didn't go to pending. Do I need to use the nginx loadBalancer? Or can I delete it and use the kubernetes type loadBalancer?</p>
<p>Huum, im missing something here...</p>
<p>Thanks for your time!</p>
| Tiago Machado | <blockquote>
<p>I want to have 2 apps running on kubernetes, I wonder if I can do 2 subdomains using nginx ingress controller.</p>
</blockquote>
<p>Yes, you just need any number of DNS records which point to your Ingress controller's IP (you used 127.0.0.1, so that's what I'll use for these examples, but you can substitute whatever IP is relevant). That's the whole point of an Ingress resource: to use the <code>host:</code> header to dispatch requests into the cluster</p>
<p>I found <a href="https://moss.sh/free-wildcard-dns-services/" rel="nofollow noreferrer">a list of wildcard DNS providers</a> of which I confirmed that <code>app1.127-0-0-1.sslip.io</code> and <code>app2.127-0-0-1.sslip.io</code> do as expected</p>
<p>Thus:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: app1-and-2
spec:
rules:
- host: app1.127-0-0-1.sslip.io
http:
paths:
- path: /
backend:
serviceName: app1-backend
servicePort: 8181 # <-- or whatever your Service port is
# then you can repeat that for as many hosts as you wish
- host: app2.127-0-0-1.sslip.io
http:
paths:
- path: /
backend:
serviceName: app2-backend
servicePort: 8181
</code></pre>
| mdaniel |
<p>When I run <code>kubectl get svc -n kube-system</code> it tells me:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP xx.xx.xx.xx <none> 53/UDP,53/TCP 13h
</code></pre>
<p>But when I try to <code>kubectl edit svc/kube-dns -n kube-system</code>:</p>
<blockquote>
<p>error: services "kube-dns" is invalid</p>
<p>A copy of your changes has been stored to "/tmp/kubectl-edit-4p5gn.yaml"</p>
<p>error: Edit cancelled, no valid changes were saved.</p>
</blockquote>
<p>I am unable to change it to a LoadBalancer...any ideas?</p>
<p>I also tried to create a new kube-dns but I am unable to get an external-ip; it stays stuck in pending state.</p>
<pre><code>kind: Service
metadata:
name: kubedns-bkp
namespace: kube-system
labels:
k8s-app: kube-dns
spec:
type: LoadBalancer
ports:
- port: 53
protocol: UDP
selector:
k8s-app: kube-dns
</code></pre>
<p><code>kubectl get svc -n kube-system</code> reports:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubedns-bkp LoadBalancer xx.xx.xx.xx <pending> 53:32115/UDP 5h
</code></pre>
<p>Note: I have created k8s cluster with ELB integration, for other services I successfully get external IPs.</p>
| Srinivasa Reddy | <p>So, two things here:</p>
<ol>
<li>As they advised you in the yaml validation errors that you chose not to share with us, one cannot change the <code>type:</code> of an existing <code>Service</code>; you have to create a new one, or delete the existing one and recreate it.</li>
<li>However, I would strongly, strongly, <em>strongly</em> advise against deleting the <code>kube-dns</code> <code>Service</code> -- you are more than welcome to create a new <code>Service</code> of <code>type: LoadBalancer</code> and point it at the same <code>selector:</code> as <code>kube-dns</code> is using. That way anyone who wishes to use the load balanced service can, but the things in the cluster who depend on <code>kube-dns</code> being a <code>ClusterIP</code> with (likely) that existing xx.xx.xx.xx value can continue as before.</li>
</ol>
| mdaniel |
<p>I am deploying a EKS cluster to AWS and using alb ingress controller points to my K8S service. The ingress spec is shown as below.</p>
<p>There are two targets <code>path: /*</code> and <code>path: /es/*</code>. And I also configured <code>alb.ingress.kubernetes.io/auth-type</code> to use <code>cognito</code> as authentication method.</p>
<p>My question is how can I configure different <code>auth-type</code> for different target? I'd like to use <code>cognito</code> for <code>/*</code> and <code>none</code> for <code>/es/*</code>. How can I achieve that?</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: sidecar
namespace: default
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/group.name: sidecar
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.order: '1'
alb.ingress.kubernetes.io/healthcheck-path: /health
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
# Auth
alb.ingress.kubernetes.io/auth-type: cognito
alb.ingress.kubernetes.io/auth-idp-cognito: '{"userPoolARN":"xxxx","userPoolClientID":"xxxx","userPoolDomain":"xxxx"}'
alb.ingress.kubernetes.io/auth-scope: 'email openid aws.cognito.signin.user.admin'
alb.ingress.kubernetes.io/certificate-arn: xxxx
spec:
rules:
- http:
paths:
- path: /es/*
backend:
serviceName: sidecar-entrypoint
servicePort: 8080
- path: /*
backend:
serviceName: server-entrypoint
servicePort: 8081
</code></pre>
| Joey Yi Zhao | <p>This question comes up a lot, so I guess it needs to be PR-ed into their documentation.</p>
<p>Ingress resources are cumulative, so you can separate your paths into two separate Ingress resources in order to annotate each one differently. They will be combined with all other Ingress resources across the entire cluster to form the final config</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: sidecar-star
namespace: default
annotations:
kubernetes.io/ingress.class: alb
# ... and the rest ...
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: server-entrypoint
servicePort: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: sidecar-es
namespace: default
annotations:
kubernetes.io/ingress.class: alb
# ... and the rest ...
spec:
rules:
- http:
paths:
- path: /es/*
backend:
serviceName: sidecar-entrypoint
servicePort: 8080
</code></pre>
| mdaniel |
<p>I am trying to execute a query on postgres pod in k8s via bash script but cannot get results when i select a large number of columns. Here is my query:</p>
<pre><code>kubectl exec -it postgres-pod-dcd-wvd -- bash -c "psql -U postgres -c \"Select json_build_object('f_name',json_agg(f_name),'l_name',json_agg(l_name),'email',json_agg(email),'date_joined',json_agg(date_joined),'dep_name',json_agg(dep_name),'address',json_agg(address),'zip_code',json_agg(zip_code),'city',json_agg(city), 'country',json_agg(country)) from accounts WHERE last_name='ABC';\""
</code></pre>
<p>When I reduce the number of columns to be selected in the query, I get the results, but if I use all the column names, the query just hangs indefinitely. What could be wrong here?</p>
<p>Update:</p>
<p>I tried using the query as :</p>
<pre><code>kubectl exec -it postgres-pod-dcd-wvd -- bash -c "psql -U postgres -c \"Select last_name,first_name,...(other column names).. row_to_json(accounts) from register_account WHERE last_name='ABC';\""
</code></pre>
<p>But this also hangs.</p>
| devcloud | <blockquote>
<p>When i try from inside the pod, It works but i need to execute it via bash script</p>
</blockquote>
<p>Means it is almost certainly the results pagination; when you run <code>exec -t</code> it sets up a TTY in the Pod, just like you were connected interactively, so it is likely waiting for you to press space or "n" for the next page</p>
<p>You can disable the pagination with <code>env PAGER=cat psql -c "select ..."</code> or use the <a href="https://www.postgresql.org/docs/9.5/app-psql.html" rel="nofollow noreferrer"><code>--pset pager=off</code></a> as in <code>psql --pset pager=off -c "Select ..."</code></p>
<p>Also, there's no need to run <code>bash -c</code> unless your <code>.bashrc</code> is setting some variables or otherwise performing work in the Pod. Using <code>exec -- psql</code> should work just fine, all other things being equal. You <em>will</em> need to use the <code>env</code> command if you want to go with the <code>PAGER=cat</code> approach, because <code>$ ENV=var some_command</code> <em>is</em> shell syntax, and thus cannot be fed directly into <code>exec</code></p>
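<p>Putting that together with the command from the question (pod name unchanged, column list abbreviated), a sketch:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec postgres-pod-dcd-wvd -- \
  psql -U postgres --pset pager=off \
  -c "Select json_build_object('f_name',json_agg(f_name), ...) from accounts WHERE last_name='ABC';"
</code></pre>
<p>Note the absence of <code>-it</code> and of <code>bash -c</code></p>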
| mdaniel |
<p>I need to be able to assign custom environment variables to each replica of a pod. One variable should be some random uuid, another a unique number. How is it possible to achieve this? I'd prefer continue using "Deployment"s with replicas. If this is not feasible out of the box, how can it be achieved by customizing replication controller/controller manager? Are there hooks available to achieve this?</p>
| rubenhak | <blockquote>
<p>If this is not feasible out of the box, how can it be achieved by customizing replication controller/controller manager? Are there hooks available to achieve this?</p>
</blockquote>
<p>Your best bet is a mixture of an <code>initContainer:</code> and/or a custom -- possibly overridden -- entrypoint <code>command:</code>. The Pods are all going to be carbon copies of each other, except for their names and a few other trivial changes. Any per-Pod specific behavior is the responsibility of the containers in the Pod themselves.</p>
<pre><code>containers:
- image: whatever
command:
- bash
- -c
- |
export RANDOM_UUID=`uuidgen`
export UNIQ=$(/usr/bin/generate-some-awesome-sauce)
exec /usr/local/bin/dockerfile-entrypoint.sh or whatever else
</code></pre>
| mdaniel |
<p>I’m packaging a Python app for use within a Kubernetes cluster. In the code base this method exists :</p>
<pre><code>    def get_pymongo_client(self):
        username = 'test'
        password = 'test'
        url = 'test'
        conn_str = "mongodb+srv://" + username + ":" + password + "/" + url
        return pymongo.MongoClient(conn_str)
</code></pre>
<p>I’m attempting to secure the username, password & URL fields so that they are not viewable within the src code. For this, I plan to use secrets.</p>
<p>The URL <a href="https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/" rel="noreferrer">https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/</a> details how to create a secret. But I’m not sure how to read the secret from the Python app.</p>
<p>.Dockerfile for my app:</p>
<pre><code>#https://docs.docker.com/language/python/build-images/
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
</code></pre>
<p>Reading <a href="https://stackoverflow.com/questions/65447044/python-flask-application-access-to-docker-secrets-in-a-swarm">Python flask application access to docker secrets in a swarm</a> details the use of secrets in a docker-compose file, is this also required for Kubernetes? What steps are involved in order to read secret parameters from the Python src code file?</p>
| blue-sky | <p>The traditional way is via environment variable</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
containers:
- name: your-app
# ...
env:
- name: PYMONGO_USERNAME
valueFrom:
secretKeyRef:
name: your-secret-name-here
key: PYMONGO_USERNAME
</code></pre>
<p>Or you can make that yaml less chatty by using a well-formed Secret and the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#container-v1-core" rel="noreferrer">"envFrom:" field</a></p>
<pre class="lang-yaml prettyprint-override"><code>kind: Secret
metadata:
name: pymongo
stringData:
PYMONGO_USERNAME: test
PYMONGO_PASSWORD: sekrit
---
spec:
containers:
- name: your-app
envFrom:
- secretRef:
name: pymongo
# and now the pod has all environment variables matching the keys in the Secret
</code></pre>
<p>and then your code would just read it from its environment as normal</p>
<pre class="lang-py prettyprint-override"><code> def get_pymongo_client(self):
username = os.getenv('PYMONGO_USERNAME')
password = os.getenv('PYMONGO_PASSWORD')
# etc
</code></pre>
<p>An alternative, but similar idea, is to <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretvolumesource-v1-core" rel="noreferrer">mount the Secret onto the filesystem</a>, and then read in the values as if they were files</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
containers:
- name: your-app
env:
# this part is 100% optional, but allows for easier local development
- name: SECRETS_PATH
value: /secrets
volumeMounts:
- name: pymongo
mountPath: /secrets
volumes:
- name: pymongo
secret:
secretName: your-secret-name-here
</code></pre>
<p>then:</p>
<pre class="lang-py prettyprint-override"><code> def get_pymongo_client(self):
sec_path = os.getenv('SECRETS_PATH', './secrets')
with open(os.path.join(sec_path, 'PYMONGO_USERNAME')) as fh:
username = fh.read()
</code></pre>
| mdaniel |
<p>Can we apply/delete kubernetes YAML files in IntelliJ IDEA/Visual Studio Code by right clicking on the YAML file and then choosing apply/delete/..?</p>
| user674669 | <p>The <a href="https://www.jetbrains.com/help/idea/settings-tools-external-tools.html" rel="nofollow noreferrer">External Tools</a> feature will do what you're asking; you can create an "external tool" named <code>kubectl-apply</code> with any of the variables they support, and then invoke it via any number of built-in mechanisms</p>
<p><a href="https://i.stack.imgur.com/v1SXg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v1SXg.png" alt="the IntelliJ External Tools configuration dialog"></a></p>
| mdaniel |
<p>I am bit confused with commands in kubectl. I am not sure when I can use the commands directly like</p>
<p><code>command: ["command"] or -- some_command</code></p>
<p>vs</p>
<p><code>command: [/bin/sh, -c, "command"] or -- /bin/sh -c some_command</code></p>
| BrownTownCoder | <blockquote>
<p>I am bit confused with commands in kubectl. I am not sure when I can use the commands directly</p>
</blockquote>
<p>Thankfully the distinction is easy(?): every <code>command:</code> is fed into the <a href="https://en.wikipedia.org/wiki/Exec_(system_call)" rel="nofollow noreferrer"><code>exec</code> system call</a> (or its golang equivalent); so if your container contains a binary that the kernel can successfully execute, you are welcome to use it in <code>command:</code>; if it is a shell built-in, shell alias, or otherwise requires <code>sh</code> (or <code>python</code> or whatever) to execute, then you must be explicit to the container runtime about that distinction</p>
<p>If it helps any, the <code>command:</code> syntax of kubernetes <code>container:</code>s are the <em>equivalent</em> of <a href="https://docs.docker.com/engine/reference/builder/#entrypoint" rel="nofollow noreferrer"><code>ENTRYPOINT ["",""]</code></a> line of Dockerfile, not <code>CMD ["", ""]</code> and for sure not <code>ENTRYPOINT echo this is fed to /bin/sh for you</code>.</p>
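<p>As a concrete illustration (nginx is just an example image; the second form is only needed because that line uses shell features):</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: direct-exec
  image: nginx
  # "nginx" is a real binary in the image, so it is fed straight to exec()
  command: ["nginx", "-g", "daemon off;"]
- name: needs-a-shell
  image: nginx
  # $HOME expansion and && chaining are shell features, so the shell must be explicit
  command: ["/bin/sh", "-c", "echo starting in $HOME && exec sleep infinity"]
</code></pre>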
| mdaniel |
<p>Is it possible to send a http Rest request to another K8 Pod that belongs to the same Service in Kubernetes? </p>
<p>E.g.
Service name = UserService, 2 Pods (replicas = 2)</p>
<pre><code>Pod 1 --> Pod 2 //using pod ip not load balanced hostname
Pod 2 --> Pod 1
</code></pre>
<p>The connection is over Rest <code>GET 1.2.3.4:7079/user/1</code></p>
<p>The value for host + port is taken from <code>kubectl get ep</code></p>
<p>Both of the pod IP's work successfully outside of the pods but when I do a <code>kubectl exec -it</code> into the pod and make the request via CURL, it returns a 404 not found for the endpoint. </p>
<p><strong>Q</strong> What I would like to know is: is it possible to make a request to another K8 Pod that is in the same Service? </p>
<p><strong>Q</strong> Why am I able to get a successful <code>ping 1.2.3.4</code>, but not hit the Rest API? </p>
<p><strong>below is my config files</strong></p>
<pre><code> #values.yml
replicaCount: 1
image:
repository: "docker.hosted/app"
tag: "0.1.0"
pullPolicy: Always
pullSecret: "a_secret"
service:
name: http
type: NodePort
externalPort: 7079
internalPort: 7079
ingress:
enabled: false
</code></pre>
<h1>deployment.yml</h1>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "app.fullname" . }}
labels:
app: {{ template "app.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: {{ template "app.name" . }}
release: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MY_POD_PORT
value: "{{ .Values.service.internalPort }}"
ports:
- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
httpGet:
path: /actuator/alive
port: {{ .Values.service.internalPort }}
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /actuator/ready
port: {{ .Values.service.internalPort }}
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
imagePullSecrets:
      - name: {{ .Values.image.pullSecret }}
</code></pre>
<h1>service.yml</h1>
<pre><code>kind: Service
metadata:
name: {{ template "app.fullname" . }}
labels:
app: {{ template "app.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.externalPort }}
targetPort: {{ .Values.service.internalPort }}
protocol: TCP
name: {{ .Values.service.name }}
selector:
app: {{ template "app.name" . }}
release: {{ .Release.Name }}
</code></pre>
<h1>executed from master</h1>
<p><a href="https://i.stack.imgur.com/bizoF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bizoF.png" alt="executed from k8 master"></a> </p>
<h1>executed from inside a pod of the same MicroService</h1>
<p><a href="https://i.stack.imgur.com/LaMRn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LaMRn.png" alt="executed from inside a pod of the same MicroService"></a></p>
| M_K | <blockquote>
<p>Is it possible to send a http Rest request to another K8 Pod that belongs to the same Service in Kubernetes?</p>
</blockquote>
<p>For sure, yes, that's actually exactly why every Pod in the cluster has a cluster-wide routable address. You can programmatically ask kubernetes for the list of the Pod's "peers" by requesting the <code>Endpoint</code> object that is named the same as the <code>Service</code>, then subtract out your own Pod's IP address. It seems like you kind of knew that from <code>kubectl get ep</code>, but then you asked the question, so I thought I would be explicit that your experience wasn't an accident.</p>
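<p>For example, with a Service named <code>user-service</code> (name assumed), the peer addresses are visible with:</p>
<pre><code>kubectl get endpoints user-service -o jsonpath='{.subsets[*].addresses[*].ip}'
</code></pre>
<p>and the same <code>Endpoints</code> object is readable from inside a Pod via the API server, given a ServiceAccount with permission to <code>get endpoints</code></p>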
<blockquote>
<p>Q Why am I able to get a successful ping 1.2.3.4, but not hit the Rest API?</p>
</blockquote>
<p>We can't help you troubleshoot your app without some app logs, but the fact that you got a 404 and not "connection refused" or 504 or such means your <strong>connectivity</strong> worked fine, it's just the <em>app</em> that is broken.</p>
| mdaniel |
<p>Hello, I've been learning about Kubernetes, and in a YAML file I found <code>k8s-app</code> used as a label, but when looking for an answer I really didn't find an accurate one.
Please, if someone knows what <code>k8s-app</code> stands for in a YAML file, help.</p>
| nessHaf | <p>It doesn't stand for anything; it's just an older convention that has been superseded by the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels" rel="noreferrer">new <code>app.kubernetes.io/</code> nomenclature</a></p>
<p>The old style was likely replaced by the new "namespaced" style to allow users to have their own <code>k8s-app:</code> or <code>instance:</code> or other "common" names without colliding with the labels that were used by the Deployment controllers for managing Pod lifecycles</p>
<p>tl;dr = it's not important what the text is, it's import that the text match up in the multiple places that reference it, as those labels are a contract between a few moving parts</p>
| mdaniel |
<p>I am getting the below error in my deployment pipeline</p>
<pre><code>Error: YAML parse error on cnhsst/templates/deployment.yaml: error converting YAML to JSON: yaml: line 38: did not find expected key
</code></pre>
<p>The yml file corresponding to this error is below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "fullname" . }}
namespace: {{ .Values.namespace }}
labels:
app: {{ template "fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ template "fullname" . }}
release: "{{ .Release.Name }}"
# We dont need a large deployment history limit as Helm keeps it's own
# history
revisionHistoryLimit: 2
template:
metadata:
namespace: {{ .Values.namespace }}
labels:
app: {{ template "fullname" . }}
release: "{{ .Release.Name }}"
annotations:
recreatePods: {{ randAlphaNum 8 | quote }}
spec:
containers:
- name: {{ template "fullname" . }}
image: {{ template "docker-image" . }}
imagePullPolicy: Always
ports:
# The port that our container listens for HTTP requests on
- containerPort: {{ default 8000 .Values.portOverride }}
name: http
{{- if .Values.resources }}
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- end }}
{{- if and (.Values.livenessProbe) (.Values.apipod)}}
livenessProbe:
{{ toYaml .Values.livenessProbe | indent 10 }}
{{- end }}
{{- if and (.Values.readinessProbe) (.Values.apipod)}}
readinessProbe:
{{ toYaml .Values.readinessProbe | indent 10 }}
{{- end }}
imagePullSecrets:
- name: regcred
securityContext:
runAsNonRoot: true
runAsUser: 5000
runAsGroup: 5000
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- {{ template "fullname" . }}
topologyKey: failure-domain.beta.kubernetes.io/zone
</code></pre>
<p>I have been stuck with this issue for a few hours. I have gone through numerous posts, tried online tools trying to figure out syntax errors, but unfortunately no luck. If anyone is able to point out the issue, that would be really great.</p>
| user264953 | <p>You can see the mismatched indentation under <code>regcred</code>:</p>
<pre class="lang-yaml prettyprint-override"><code> imagePullSecrets:
- name: regcred
# <-- indented "-"
#VVV not indented
securityContext:
runAsNonRoot: true
</code></pre>
<p>which, as luck would have it, is the 38th line in the output YAML</p>
<pre><code>$ helm template --debug my-chart . 2>&1| sed -e '1,/^apiVersion:/d' | sed -ne 38p
securityContext:
</code></pre>
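<p>Aligning <code>securityContext:</code> as a sibling of <code>imagePullSecrets:</code> (assuming it is meant to be the Pod-level securityContext) clears the parse error:</p>
<pre class="lang-yaml prettyprint-override"><code>      imagePullSecrets:
        - name: regcred
      securityContext:
        runAsNonRoot: true
        runAsUser: 5000
        runAsGroup: 5000
</code></pre>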
| mdaniel |
<p>I have a Docker Enterprise k8 bare metal cluster running on Centos8, and following the official docs to install NGINX using manifest files from GIT: <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/</a></p>
<p>The pod seems to be running:</p>
<pre><code>kubectl -n nginx-ingress describe pod nginx-ingress-fzr2j
Name: nginx-ingress-fzr2j
Namespace: nginx-ingress
Priority: 0
Node: server.example.com/172.16.1.180
Start Time: Sun, 16 Aug 2020 16:48:49 -0400
Labels: app=nginx-ingress
controller-revision-hash=85879fb7bc
pod-template-generation=2
Annotations: kubernetes.io/psp: privileged
Status: Running
IP: 192.168.225.27
IPs:
IP: 192.168.225.27
</code></pre>
<p>But my issue is that the IP address it has selected is 192.168.225.27. This is a second network on this server. How do I tell nginx to use the 172.16.1.180 address that it has in the Node: part?
The DaemonSet config is:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx-ingress
namespace: nginx-ingress
spec:
selector:
matchLabels:
app: nginx-ingress
template:
metadata:
labels:
app: nginx-ingress
#annotations:
#prometheus.io/scrape: "true"
#prometheus.io/port: "9113"
spec:
serviceAccountName: nginx-ingress
containers:
- image: nginx/nginx-ingress:edge
imagePullPolicy: Always
name: nginx-ingress
ports:
- name: http
containerPort: 80
hostPort: 80
- name: https
containerPort: 443
hostPort: 443
- name: readiness-port
containerPort: 8081
#- name: prometheus
#containerPort: 9113
readinessProbe:
httpGet:
path: /nginx-ready
port: readiness-port
periodSeconds: 1
securityContext:
allowPrivilegeEscalation: true
runAsUser: 101 #nginx
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
args:
- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
- -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
</code></pre>
<p>I can't see any configuration option for which IP address to bind to.</p>
| gmate2008 | <p>The thing you are likely looking for is <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#podspec-v1-core" rel="nofollow noreferrer"><code>hostNetwork: true</code></a>, which:</p>
<blockquote>
<p>Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>spec:
template:
spec:
hostNetwork: true
containers:
- image: nginx/nginx-ingress:edge
name: nginx-ingress
</code></pre>
<p>You would only then need to specify a bind address if it bothered you having the Ingress controller bound to all addresses on the host. If that's still a requirement, you can have the Node's IP injected via the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables" rel="nofollow noreferrer"><code>valueFrom:</code> mechanism</a>:</p>
<pre><code>...
containers:
- env:
- name: MY_NODE_IP
valueFrom:
fieldRef:
        fieldPath: status.hostIP
</code></pre>
| mdaniel |
<p>Kubectl provides a nice way to convert environment variable files into secrets using:</p>
<pre><code>$ kubectl create secret generic my-env-list --from-env-file=envfile
</code></pre>
<p>Is there any way to achieve this in Helm? I tried the below snippet but the result was quite different:</p>
<pre><code>kind: Secret
metadata:
name: my-env-list
data:
{{ .Files.Get "envfile" | b64enc }}
</code></pre>
| Sayon Roy Choudhury | <p>It appears kubectl just does the simple thing and only <a href="https://github.com/kubernetes/kubernetes/blob/v1.22.0/staging/src/k8s.io/kubectl/pkg/cmd/util/env_file.go#L56" rel="nofollow noreferrer">splits on a single <code>=</code> character</a> so the Helm way would be to replicate that behavior (helm has <code>regexSplit</code> which will suffice for our purposes):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
data:
{{ range .Files.Lines "envfile" }}
{{ if . }}
{{ $parts := regexSplit "=" . 2 }}
{{ index $parts 0 }}: {{ index $parts 1 | b64enc }}
{{ end }}
{{ end }}
</code></pre>
<p>that <code>{{ if . }}</code> is because <code>.Files.Lines</code> returned an empty string which of course doesn't comply with the pattern</p>
<p>Be aware that kubectl's version accepts <a href="https://github.com/kubernetes/kubernetes/blob/v1.22.0/staging/src/k8s.io/kubectl/pkg/cmd/util/env_file.go#L65-L66" rel="nofollow noreferrer">barewords looked up from the environment</a> which helm has no support for doing, so if your <code>envfile</code> is formatted like that, this specific implementation will fail</p>
| mdaniel |
<p>I'm setting up Airflow in Kubernetes Engine, and I now have the following (running) pods:</p>
<ul>
<li>postgres (with a mounted <code>PersistentVolumeClaim</code>)</li>
<li>flower</li>
<li>web (airflow dashboard)</li>
<li>rabbitmq</li>
<li>scheduler</li>
<li>worker</li>
</ul>
<p>From Airflow, I'd like to run a task starting a pod which - in this case - downloads some file from an SFTP server. However, the <code>KubernetesPodOperator</code> in Airflow which should start this new pod can't run, because the kubeconfig cannot be found.</p>
<p>The Airflow worker is configured as below. The other Airflow pods are exactly the same apart from different <code>args</code>.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: worker
spec:
replicas: 1
template:
metadata:
labels:
app: airflow
tier: worker
spec:
restartPolicy: Always
containers:
- name: worker
image: my-gcp-project/kubernetes-airflow-in-container-registry:v1
imagePullPolicy: IfNotPresent
env:
- name: AIRFLOW_HOME
value: "/usr/local/airflow"
args: ["worker"]
</code></pre>
<p>The <code>KubernetesPodOperator</code> is configured as follows:</p>
<pre class="lang-py prettyprint-override"><code>maybe_download = KubernetesPodOperator(
task_id='maybe_download_from_sftp',
image='some/image:v1',
namespace='default',
name='maybe-download-from-sftp',
arguments=['sftp_download'],
image_pull_policy='IfNotPresent',
dag=dag,
trigger_rule='dummy',
)
</code></pre>
<p>The following error shows there's no kubeconfig on the pod.</p>
<pre><code>[2019-01-24 12:37:04,706] {models.py:1789} INFO - All retries failed; marking task as FAILED
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp Traceback (most recent call last):
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/bin/airflow", line 32, in <module>
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp args.func(args)
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/utils/cli.py", line 74, in wrapper
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp return f(*args, **kwargs)
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py", line 490, in run
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp _run(args, dag, ti)
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py", line 406, in _run
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp pool=args.pool,
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 74, in wrapper
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp return func(*args, **kwargs)
[2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1659, in _run_raw_task
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp result = task_copy.execute(context=context)
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/kubernetes_pod_operator.py", line 90, in execute
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp config_file=self.config_file)
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/contrib/kubernetes/kube_client.py", line 51, in get_kube_client
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp return _load_kube_config(in_cluster, cluster_context, config_file)
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/contrib/kubernetes/kube_client.py", line 38, in _load_kube_config
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp config.load_kube_config(config_file=config_file, context=cluster_context)
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/airflow/.local/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 537, inload_kube_config
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp config_persister=config_persister)
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/airflow/.local/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 494, in_get_kube_config_loader_for_yaml_file
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp with open(filename) as f:
[2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/airflow/.kube/config'
[2019-01-24 12:37:08,300] {logging_mixin.py:95} INFO - [2019-01-24 12:37:08,299] {jobs.py:2627} INFO - Task exited with return code 1
</code></pre>
<p>I'd like the pod to start and "automatically" contain the context of the Kubernetes cluster it's in - if that makes sense. I feel like I'm missing something fundamental. Could anyone help?</p>
| bartcode | <p>As is described in <a href="https://airflow.apache.org/kubernetes.html#airflow.contrib.operators.kubernetes_pod_operator.KubernetesPodOperator" rel="nofollow noreferrer">The Fine Manual</a>, you will want <code>in_cluster=True</code> to advise KPO that it is, in fact, in-cluster.</p>
<p>I would actually recommend filing a bug with Airflow because Airflow can <em>trivially</em> detect the fact that it is running inside the cluster, and should have a much more sane default than your experience.</p>
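<p>Applied to the operator from the question, that is just one extra keyword argument (everything else unchanged):</p>
<pre class="lang-py prettyprint-override"><code>maybe_download = KubernetesPodOperator(
    task_id='maybe_download_from_sftp',
    image='some/image:v1',
    namespace='default',
    name='maybe-download-from-sftp',
    arguments=['sftp_download'],
    image_pull_policy='IfNotPresent',
    in_cluster=True,  # use the ServiceAccount token mounted into the worker Pod
    dag=dag,
    trigger_rule='dummy',
)
</code></pre>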
| mdaniel |
<p>I am using skywalking 6.5.0 to monitor my apps in kubernetes cluster, this is my skywalking ui yaml config:</p>
<pre><code>{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "oap",
"namespace": "fat",
"selfLink": "/apis/extensions/v1beta1/namespaces/fat/deployments/oap",
"uid": "41438118-5ae4-4da2-b3d5-6e082263e360",
"resourceVersion": "44426777",
"generation": 52,
"creationTimestamp": "2020-02-28T02:53:28Z",
"labels": {
"app": "oap",
"release": "skywalking"
},
"annotations": {
"deployment.kubernetes.io/revision": "14",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"oap\",\"namespace\":\"dabai-fat\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"app\":\"oap\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"oap\",\"release\":\"skywalking\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"JAVA_OPTS\",\"value\":\"-Xmx2g -Xms2g\"},{\"name\":\"SW_CLUSTER\",\"value\":\"standalone\"},{\"name\":\"SKYWALKING_COLLECTOR_UID\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.uid\"}}},{\"name\":\"SW_STORAGE\",\"value\":\"elasticsearch\"},{\"name\":\"SW_STORAGE_ES_CLUSTER_NODES\",\"value\":\"172.30.184.10:9200\"},{\"name\":\"SW_NAMESPACE\",\"value\":\"dabai-fat\"},{\"name\":\"SW_ES_USER\",\"value\":\"elastic\"},{\"name\":\"SW_ES_PASSWORD\",\"value\":\"XXXXXX\"}],\"image\":\"registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/skywalking-oap-server:6.5.0\",\"imagePullPolicy\":\"Always\",\"livenessProbe\":{\"initialDelaySeconds\":15,\"periodSeconds\":20,\"tcpSocket\":{\"port\":12800}},\"name\":\"oap\",\"ports\":[{\"containerPort\":11800,\"name\":\"grpc\"},{\"containerPort\":12800,\"name\":\"rest\"}],\"readinessProbe\":{\"initialDelaySeconds\":15,\"periodSeconds\":20,\"tcpSocket\":{\"port\":12800}},\"resources\":{\"limits\":{\"memory\":\"2Gi\"},\"requests\":{\"memory\":\"1Gi\"}}}],\"imagePullSecrets\":[{\"name\":\"regcred\"}],\"serviceAccountName\":\"skywalking-oap-sa\"}}}}\n"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "oap"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "oap",
"release": "skywalking"
},
"annotations": {
"kubectl.kubernetes.io/restartedAt": "2020-04-18T18:30:58+08:00"
}
},
"spec": {
"containers": [
{
"name": "oap",
"image": "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/skywalking-oap-server:6.5.0",
"ports": [
{
"name": "grpc",
"containerPort": 11800,
"protocol": "TCP"
},
{
"name": "rest",
"containerPort": 12800,
"protocol": "TCP"
}
],
"env": [
{
"name": "JAVA_OPTS",
"value": "-Xmx2g -Xms2g"
},
{
"name": "SW_CLUSTER",
"value": "standalone"
},
{
"name": "SKYWALKING_COLLECTOR_UID",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.uid"
}
}
},
{
"name": "SW_STORAGE",
"value": "mysql"
},
{
"name": "SW_JDBC_URL",
"value": "jdbc:mysql://45.131.218.134:3309/report?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&transformedBitIsBoolean=true&useSSL=false&verifyServerCertificate=false"
},
{
"name": "SW_NAMESPACE",
"value": "fat"
},
{
"name": "SW_DATA_SOURCE_USER",
"value": "root"
},
{
"name": "SW_DATA_SOURCE_PASSWORD",
"value": "uwesGwew2rewd109dskhgwugPD"
}
],
"resources": {
"limits": {
"memory": "2Gi"
},
"requests": {
"memory": "1Gi"
}
},
"livenessProbe": {
"tcpSocket": {
"port": 12800
},
"initialDelaySeconds": 15,
"timeoutSeconds": 1,
"periodSeconds": 20,
"successThreshold": 1,
"failureThreshold": 3
},
"readinessProbe": {
"tcpSocket": {
"port": 12800
},
"initialDelaySeconds": 15,
"timeoutSeconds": 1,
"periodSeconds": 20,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "skywalking-oap-sa",
"serviceAccount": "skywalking-oap-sa",
"securityContext": {},
"imagePullSecrets": [
{
"name": "regcred"
}
],
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": "25%",
"maxSurge": "25%"
}
},
"revisionHistoryLimit": 10,
"progressDeadlineSeconds": 600
},
"status": {
"observedGeneration": 52,
"replicas": 1,
"updatedReplicas": 1,
"unavailableReplicas": 1,
"conditions": [
{
"type": "Progressing",
"status": "True",
"lastUpdateTime": "2020-08-20T13:34:42Z",
"lastTransitionTime": "2020-04-02T03:01:31Z",
"reason": "NewReplicaSetAvailable",
"message": "ReplicaSet \"oap-7cffc4c77d\" has successfully progressed."
},
{
"type": "Available",
"status": "False",
"lastUpdateTime": "2020-08-20T13:34:52Z",
"lastTransitionTime": "2020-08-20T13:34:52Z",
"reason": "MinimumReplicasUnavailable",
"message": "Deployment does not have minimum availability."
}
]
}
}
</code></pre>
<p>when the pod starts, the log output looks like this:</p>
<pre><code>java.lang.RuntimeException: Failed to get driver instance for jdbcUrl=jdbc:mysql://45.131.218.134:3309/report?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&transformedBitIsBoolean=true&useSSL=false&verifyServerCertificate=false
at com.zaxxer.hikari.util.DriverDataSource.<init>(DriverDataSource.java:110) ~[HikariCP-3.1.0.jar:?]
at com.zaxxer.hikari.pool.PoolBase.initializeDataSource(PoolBase.java:334) ~[HikariCP-3.1.0.jar:?]
at com.zaxxer.hikari.pool.PoolBase.<init>(PoolBase.java:109) ~[HikariCP-3.1.0.jar:?]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:108) ~[HikariCP-3.1.0.jar:?]
at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:81) ~[HikariCP-3.1.0.jar:?]
at org.apache.skywalking.oap.server.library.client.jdbc.hikaricp.JDBCHikariCPClient.connect(JDBCHikariCPClient.java:44) ~[library-client-6.5.0.jar:6.5.0]
at org.apache.skywalking.oap.server.storage.plugin.jdbc.mysql.MySQLStorageProvider.start(MySQLStorageProvider.java:123) ~[storage-jdbc-hikaricp-plugin-6.5.0.jar:6.5.0]
at org.apache.skywalking.oap.server.library.module.BootstrapFlow.start(BootstrapFlow.java:61) ~[library-module-6.5.0.jar:6.5.0]
at org.apache.skywalking.oap.server.library.module.ModuleManager.init(ModuleManager.java:67) ~[library-module-6.5.0.jar:6.5.0]
at org.apache.skywalking.oap.server.starter.OAPServerStartUp.main(OAPServerStartUp.java:43) [server-starter-6.5.0.jar:6.5.0]
Caused by: java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315) ~[?:1.8.0_181]
at com.zaxxer.hikari.util.DriverDataSource.<init>(DriverDataSource.java:103) ~[HikariCP-3.1.0.jar:?]
... 9 more
</code></pre>
<p>I read the official SkyWalking issue, which tells me that because the MySQL JDBC driver is GPL-licensed and SkyWalking is Apache-licensed, I must add the JDBC driver myself. But how do I add the JDBC driver jar into the image file? I have no idea.</p>
| Dolphin | <blockquote>
<p>how to add the jdbc driver jar into the image file?</p>
</blockquote>
<p>One way would be an <code>initContainer:</code> and then artificially inject the jdbc driver via <a href="https://wiki.openjdk.java.net/display/mlvm/BootClassPath" rel="nofollow noreferrer"><code>-Xbootclasspath</code></a></p>
<pre class="lang-yaml prettyprint-override"><code>initContainers:
- name: download
image: busybox:latest
command:
- wget
- -O
- /foo/jdbc.jar
- https://whatever-the-jdbc-url-jar-is-goes-here
volumeMounts:
- name: tmp
mountPath: /foo
containers:
- env:
- name: JAVA_OPTS
value: -Xmx2g -Xbootclasspath/a:/foo/jdbc.jar
volumeMounts:
- name: tmp
mountPath: /foo
volumes:
- name: tmp
emptyDir: {}
</code></pre>
<p>a similar, although slightly riskier way, is to find a path that is already on the classpath of the image, and attempt to volume mount the jar path into that directory</p>
<p>All of this seems kind of moot given that your image looks like one that is custom built, and therefore the correct action is to update the <code>Dockerfile</code> for it to download the jar at build time</p>
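<p>For the "bake it into the image at build time" route, a minimal sketch could look like the following; note that the <code>/skywalking/oap-libs</code> destination assumes your custom image keeps the upstream SkyWalking layout, and the connector version/URL is only an example -- adjust both (and mind the GPL licensing implications) for your situation:</p>
<pre class="lang-sh prettyprint-override"><code># hypothetical: extend the existing image with the MySQL JDBC driver baked in
cat > Dockerfile.jdbc <<'EOF'
FROM registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/skywalking-oap-server:6.5.0
# drop the MySQL connector into the OAP library directory
ADD https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.21/mysql-connector-java-8.0.21.jar /skywalking/oap-libs/
# ADD-from-URL creates the file with 0600 permissions, so open it up for a non-root runtime user
RUN chmod 644 /skywalking/oap-libs/mysql-connector-java-8.0.21.jar
EOF
docker build -f Dockerfile.jdbc -t <your-registry>/skywalking-oap-mysql:6.5.0 .
docker push <your-registry>/skywalking-oap-mysql:6.5.0
</code></pre>
<p>and then point the Deployment's <code>image:</code> at the new tag.</p>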
| mdaniel |
<p>My code is below along with all the logs. In short my init pod seems to be attempting to run my setup.sh file, which is in a configmap, before it's mounted into the init pod. Does anyone have any guidance as to what the issue could be?</p>
<p>deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: two-containers
labels:
app: stockai
spec:
selector:
matchLabels:
app: stockai
template:
metadata:
labels:
app: stockai
spec:
volumes:
- name: shared-data
emptyDir: {}
initContainers:
- name: init-myservice
image: alpine
command:
- "sh -c 'sleep 60; /app/setup.sh'"
volumeMounts:
- name: shared-data
mountPath: /pod-data
</code></pre>
<p>configmap</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
annotations:
name: stock-ai-init-config
name: stock-ai-init-config
namespace: trading
data:
setup.sh: |
apk update
apk upgrade
apk add git
git clone [email protected]:****/****/****
</code></pre>
<p>pod preset</p>
<pre><code>apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
name: stock-ai-init
spec:
selector:
matchLabels:
app: stockai
volumeMounts:
- name: setup
mountPath: "/app/setup.sh"
subPath: "setup.sh"
volumes:
- name: setup
configMap:
name: stock-ai-init-config
defaultMode: 0777
</code></pre>
<p>kubectl describe output</p>
<pre><code>$ kubectl describe po two-containers-6d5f4b4d85-blxqj
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m22s default-scheduler Successfully assigned trading/two-containers-6d5f4b4d85-blxqj to minikube
Normal Created 4m32s (x4 over 5m19s) kubelet, minikube Created container init-myservice
Warning Failed 4m32s (x4 over 5m19s) kubelet, minikube Error: failed to start container "init-myservice": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"sh -c 'sleep 60; /app/setup.sh'\": stat sh -c 'sleep 60; /app/setup.sh': no such file or directory": unknown
Normal Pulling 3m41s (x5 over 5m21s) kubelet, minikube Pulling image "alpine"
Normal Pulled 3m40s (x5 over 5m19s) kubelet, minikube Successfully pulled image "alpine"
Warning BackOff 10s (x23 over 5m1s) kubelet, minikube Back-off restarting failed container
</code></pre>
| user3625941 | <p><code>command:</code> does not work like <code>docker run</code>, it is the kubernetes equivalent of the <code>CMD ["", ""]</code> in a Dockerfile and is fed to <strong>exec</strong>, not to <strong>sh</strong>; thus what you want is:</p>
<pre class="lang-yaml prettyprint-override"><code> command:
- sh
- -c
- 'sleep 60; /app/setup.sh'
</code></pre>
| mdaniel |
<p>Created a local cluster using <strong>Vagrant</strong> + <strong>Ansible</strong> + <strong>VirtualBox</strong>. Manually deploying works fine, but when using <strong>Helm</strong>:</p>
<pre><code>:~$helm install stable/nginx-ingress --name nginx-ingress-controller --set rbac.create=true
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.0.52.15:10250: i/o timeout
</code></pre>
<p>Kubernetes cluster info:</p>
<pre><code>:~$kubectl get nodes,po,deploy,svc,ingress --all-namespaces -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/ubuntu18-kube-master Ready master 32m v1.13.3 10.0.51.15 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
node/ubuntu18-kube-node-1 Ready <none> 31m v1.13.3 10.0.52.15 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/nginx-server 1/1 Running 0 40s 10.244.1.5 ubuntu18-kube-node-1 <none> <none>
default pod/nginx-server-b8d78876d-cgbjt 1/1 Running 0 4m25s 10.244.1.4 ubuntu18-kube-node-1 <none> <none>
kube-system pod/coredns-86c58d9df4-5rsw2 1/1 Running 0 31m 10.244.0.2 ubuntu18-kube-master <none> <none>
kube-system pod/coredns-86c58d9df4-lfbvd 1/1 Running 0 31m 10.244.0.3 ubuntu18-kube-master <none> <none>
kube-system pod/etcd-ubuntu18-kube-master 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-apiserver-ubuntu18-kube-master 1/1 Running 0 30m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-controller-manager-ubuntu18-kube-master 1/1 Running 0 30m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-jffqn 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-flannel-ds-amd64-vc6p2 1/1 Running 0 31m 10.0.52.15 ubuntu18-kube-node-1 <none> <none>
kube-system pod/kube-proxy-fbgmf 1/1 Running 0 31m 10.0.52.15 ubuntu18-kube-node-1 <none> <none>
kube-system pod/kube-proxy-jhs6b 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/kube-scheduler-ubuntu18-kube-master 1/1 Running 0 31m 10.0.51.15 ubuntu18-kube-master <none> <none>
kube-system pod/tiller-deploy-69ffbf64bc-x8lkc 1/1 Running 0 24m 10.244.1.2 ubuntu18-kube-node-1 <none> <none>
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
default deployment.extensions/nginx-server 1/1 1 1 4m25s nginx-server nginx run=nginx-server
kube-system deployment.extensions/coredns 2/2 2 2 32m coredns k8s.gcr.io/coredns:1.2.6 k8s-app=kube-dns
kube-system deployment.extensions/tiller-deploy 1/1 1 1 24m tiller gcr.io/kubernetes-helm/tiller:v2.12.3 app=helm,name=tiller
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 32m <none>
default service/nginx-server NodePort 10.99.84.201 <none> 80:31811/TCP 12s run=nginx-server
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 32m k8s-app=kube-dns
kube-system service/tiller-deploy ClusterIP 10.99.4.74 <none> 44134/TCP 24m app=helm,name=tiller
</code></pre>
<p>Vagrantfile:</p>
<pre><code>...
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
$hosts.each_with_index do |(hostname, parameters), index|
ip_address = "#{$subnet}.#{$ip_offset + index}"
config.vm.define vm_name = hostname do |vm_config|
vm_config.vm.hostname = hostname
vm_config.vm.box = box
vm_config.vm.network "private_network", ip: ip_address
vm_config.vm.provider :virtualbox do |vb|
vb.gui = false
vb.name = hostname
vb.memory = parameters[:memory]
vb.cpus = parameters[:cpus]
vb.customize ['modifyvm', :id, '--macaddress1', "08002700005#{index}"]
vb.customize ['modifyvm', :id, '--natnet1', "10.0.5#{index}.0/24"]
end
end
end
end
</code></pre>
<p>Workaround for the <strong>VirtualBox</strong> issue: set different <strong>macaddress</strong> and <strong>internal_ip</strong> values.</p>
<p>It would be interesting to find a solution that can be placed in one of the configuration files (Vagrantfile, Ansible roles). Any ideas on the problem?</p>
| EnjoyLife | <blockquote>
<p><code>Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.0.52.15:10250: i/o timeout</code></p>
</blockquote>
<p>You're getting bitten by a very common kubernetes-on-Vagrant bug: the kubelet believes its IP address is <code>eth0</code>, which is the <strong>NAT</strong> interface in Vagrant, versus using (what I hope you have) <a href="https://github.com/kubernetes-sigs/kubespray/blob/v2.8.2/Vagrantfile#L161" rel="nofollow noreferrer">the <code>:private_network</code> address</a> in your <code>Vagrantfile</code>. Thus, since all kubelet interactions happen directly to it (and not through the API server), things like <code>kubectl exec</code> and <code>kubectl logs</code> will fail in exactly the way you see.</p>
<p>The solution is to force kubelet to bind to the private network interface, or I guess you could switch your <code>Vagrantfile</code> to use the <a href="https://www.vagrantup.com/docs/networking/public_network.html#default-network-interface" rel="nofollow noreferrer">bridge network</a>, if that's an option for you -- just so long as the interface isn't the NAT one.</p>
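<p>As a sketch of the first option: on kubeadm-provisioned Ubuntu boxes like these, one common way to pin the kubelet to the host-only interface is via <code>KUBELET_EXTRA_ARGS</code> in <code>/etc/default/kubelet</code> (the placeholder IP is whatever <code>private_network</code> address the Vagrantfile assigned to that particular VM):</p>
<pre class="lang-sh prettyprint-override"><code># run on each VM, substituting that VM's private_network address
echo 'KUBELET_EXTRA_ARGS="--node-ip=<private-ip-of-this-vm>"' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>
<p>The same line can of course be templated from your Ansible role or Vagrant provisioner so that it lives in the provisioning configuration rather than being typed by hand.</p>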
| mdaniel |
<p>I have a Kubernetes question.</p>
<p>We have a master pod that can deploy other pods depending on the REST endpoint called. For example, if someone calls "/start_work" endpoint, it can deploy a worker pod to do the work related with this request.</p>
<p>This master pod is deployed with default ServiceAccount, and to allow it to deploy other pods, we had to give it cluster-admin access. We used ClusterRoleBinding to tie the default ServiceAccount to a cluster-admin role.</p>
<p>However, we have a more challenging problem now where our master pod is running in one cluster, but the worker pod needs to be deployed in another cluster. Does this sound achievable ? Giving the default ServiceAccount cluster-admin access can't help us if we're talking about another cluster, right?</p>
<p>Has anyone done this before? How did you achieve this ?</p>
<p>Thanks a ton.</p>
| Sunny Patel | <blockquote>
<p>However, we have a more challenging problem now where our master pod is running in one cluster, but the worker pod needs to be deployed in another cluster. Does this sound achievable ?</p>
</blockquote>
<p>Certainly, yes; you are free to provide credentials via any number of supported mechanisms that would give the Pod a well-formed KUBECONFIG that can talk to the remote cluster at whatever access level of your comfort. By <em>default</em> the injected <code>ServiceAccount</code> is trusted by only its own cluster, but there are seemingly infinite ways of providing the component parts or a fully formed KUBECONFIG into a Pod's filesystem, and then you're off to the races</p>
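<p>As one hedged sketch of that idea: package a kubeconfig for the remote cluster as a Secret in the cluster where the master pod runs, mount it, and point <code>KUBECONFIG</code> at it (the file path and names below are purely illustrative):</p>
<pre class="lang-sh prettyprint-override"><code># create the Secret next to the "master" pod
kubectl create secret generic remote-cluster-kubeconfig \
  --from-file=config=/path/to/remote-cluster-kubeconfig
# then, in the master pod's spec, mount that Secret (e.g. at /etc/remote-kube) and set
# the environment variable KUBECONFIG=/etc/remote-kube/config so its kubernetes client
# targets the remote cluster instead of the local one
</code></pre>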
<blockquote>
<p>Giving the default ServiceAccount cluster-admin access can't help us if we're talking about another cluster, right?</p>
</blockquote>
<p>That depends <em>(is always the answer!)</em> on whether the two clusters share a common CA root; if the answer is yes, then yes, <code>cluster-admin</code> on one will become <code>cluster-admin</code> on both. The <code>Subject</code> is determined by the <code>CN=</code> (and sometimes <code>OU=</code>/<code>O=</code>) of the presented x509 certificate, and its validity is determined by the chain-of-trust between the presented certificate and the api-server of the cluster</p>
| mdaniel |
<p>I'm trying to achieve a header-routing ingress rule with nginx. Why? Because <em>the same path</em> should go to a <em>different backend</em> based on <em>headers</em>. Here is what I've tried:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: api-mutli-back
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
set $dataflag 0;
if ( $http_content_type ~ "multipart\/form-data.*" ){
set $dataflag 1;
}
if ( $dataflag = 1 ){
set $service_name "backend-data";
}
spec:
rules:
- host: example.com
http:
paths:
- backend:
serviceName: backend-default
servicePort: 80
path: /api
</code></pre>
<p>But the logs of nginx output this error:</p>
<pre><code>unknown directive "set $service_name backend-data" in /tmp/nginx-cfg864446123:1237
</code></pre>
<p>which seems illogical to me... If I check the configuration generated by nginx, each rule generates a location with something like this at the beginning:</p>
<pre><code>[...]
location ~* "^/api" {
set $namespace "my-namespace";
set $ingress_name "api-multi-back";
set $service_name "backend-default";
[...]
</code></pre>
<p>What am I doing wrong? Isn't it possible to redefine the <strong>service_name</strong> variable with the <strong>configuration-snippet</strong> annotation? Is there any other method?</p>
<p>Edit: My error on the nginx side was due to the lack of exact spaces between <em>set $service_name</em> and <em>backend-data</em>. After fixing that, nginx generated the configuration correctly, but it still does not route to the other kubernetes service.</p>
| Kelindil | <p>You got bitten by a YAML-ism:</p>
<p>The indentation of your 2nd <code>if</code> block isn't the same as the indentation of the others, and thus YAML thinks you are starting a new key under <code>annotations:</code></p>
<p>You have</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
name: api-mutli-back
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
set $dataflag 0;
if ( $http_content_type ~ "multipart\/form-data.*" ){
set $dataflag 1;
}
if ( $dataflag = 1 ){
set $service_name "backend-data"
}
</code></pre>
<p>but you should have:</p>
<pre><code>metadata:
name: api-mutli-back
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
set $dataflag 0;
if ( $http_content_type ~ "multipart\/form-data.*" ){
set $dataflag 1;
}
if ( $dataflag = 1 ){
set $service_name "backend-data"
}
</code></pre>
| mdaniel |
<p>I have set up a local kubernetes cluster using vagrant, and have assigned 2 network interfaces (public and private) to each vagrant box.</p>
<p>kubectl get nodes -o wide</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kubemaster Ready master 14h v1.12.2 192.168.33.10 <none>
Ubuntu 16.04.5 LTS 4.4.0-137-generic docker://17.3.2
kubenode2 Ready <none> 14h v1.12.2 10.0.2.15 <none>
Ubuntu 16.04.5 LTS 4.4.0-138-generic docker://17.3.2
</code></pre>
<p>While initiating kubeadm on master, i ran ip advertise and gave ip as 192.168.33.10 of master.</p>
<p>My reall issue was i am not able to login to any pod. </p>
<pre><code>kubectl exec -ti web /bin/bash
</code></pre>
<blockquote>
<p>error: unable to upgrade connection: pod does not exist</p>
</blockquote>
| batman | <p>It's because vagrant, in its default configuration, will have a NAT <code>public_network</code>, usually eth0, and then any additional network interfaces -- such as what is likely a host-only interface on 192.168.33.10</p>
<p>You need to change the kubelet configuration -- and possibly your CNI provider -- to bind and advertise the IP address of <code>kubenode2</code> that's in a subnet your machine can reach. Unidirectional traffic from <code>kubenode2</code> can likely reach <code>kubemaster</code> over the NAT IP, but almost by definition your machine cannot reach anything behind the NAT IP, thus the connection failure when trying to reach the kubelet port</p>
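<p>If this is a kubeadm-built cluster (the deb-packaged kubelet reads <code>/etc/default/kubelet</code>), a hand-rolled sketch of that fix on <code>kubenode2</code> looks like the following -- substitute the address of the host-only/private interface, not the 10.0.2.15 NAT one:</p>
<pre class="lang-sh prettyprint-override"><code># on kubenode2: tell the kubelet which address to register and serve on
echo 'KUBELET_EXTRA_ARGS="--node-ip=<kubenode2-host-only-ip>"' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet
# afterwards the node should report that address as its INTERNAL-IP
kubectl get nodes -o wide
</code></pre>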
| mdaniel |
<p>I have 2 services: one serves up a REST API and the other serves up static content via an nginx web server.
I can retrieve the static content from the pod running the nginx web server via the ingress controller using https, provided that I <strong>don't</strong> use the following annotation within the ingress yaml</p>
<p><code>nginx.ingress.kubernetes.io/backend-protocol: HTTPS</code></p>
<p>However, the backend API service no longer works. If I add that annotation back, the backend service URL <code>https://fqdn/restservices/engine-rest/v1/api</code> works but the front end <code>https://fqdn/</code> web server throws a 502.</p>
<p>Ingress</p>
<pre><code>Ingress
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: ingress
namespace: namespace-abc
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
rules:
- http:
paths:
- path: /restservices/engine-rest/v1
backend:
serviceName: a
servicePort: 8080
- path: /
backend:
serviceName: b
servicePort: 8011
</code></pre>
<p>Service API</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: a
namespace: namespace-abc
labels:
app: a
version: 1
spec:
ports:
- name: https
protocol: TCP
port: 80
targetPort: 8080
nodePort: 31019
selector:
app: a
version: 1
clusterIP: <cluster ip>
type: LoadBalancer
sessionAffinity: ClientIP
externalTrafficPolicy: Cluster
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10800
</code></pre>
<p>Service UI</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: b
namespace: namespace-abc
labels:
app: b
version: 1
annotations:
spec:
ports:
- name: http
protocol: TCP
port: 8011
targetPort: 8011
nodePort: 32620
selector:
app: b
version: 1
clusterIP: <cluster ip>
type: LoadBalancer
sessionAffinity: None
externalTrafficPolicy: Cluster
</code></pre>
| Jay Steven Hamilton | <p>If your problem is that adding <code>nginx.ingress.kubernetes.io/backend-protocol: HTTPS</code> makes service-A work but fails service-B, and removing it makes service-A fail but works for service-B, then the solution is to create two different Ingress objects so they can be annotated independently</p>
<pre class="lang-yaml prettyprint-override"><code>---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: ingress-a
namespace: namespace-abc
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
rules:
- http:
paths:
- path: /restservices/engine-rest/v1
backend:
serviceName: a
servicePort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: ingress-b
namespace: namespace-abc
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: b
servicePort: 8011
</code></pre>
| mdaniel |
<p>I wish to run a <a href="https://www.drone.io/" rel="nofollow noreferrer">Drone</a> CI/CD pipeline on a Raspberry Pi, including a stage to update a Kubernetes Deployment. Unfortunately, all the pre-built solutions that I've found for doing so (<a href="https://github.com/sinlead/drone-kubectl" rel="nofollow noreferrer">e.g. 1</a>, <a href="https://github.com/honestbee/drone-kubernetes" rel="nofollow noreferrer">e.g. 2</a>) are not built for <code>arm64</code> architecture, so I believe I need to build my own.</p>
<p>I am attempting to adapt the commands from <a href="https://github.com/sinlead/drone-kubectl/blob/master/init-kubectl" rel="nofollow noreferrer">here</a> (see also <a href="https://github.com/sinlead/drone-kubectl" rel="nofollow noreferrer">README.md</a>, which describes the authorization required), but my attempt to contact the cluster still fails with authorization problems:</p>
<pre class="lang-bash prettyprint-override"><code>$ cat service-account-definition.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: drone-demo-service-account
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: drone-demo-service-account-clusterrolebinding
subjects:
- kind: ServiceAccount
name: drone-demo-service-account
namespace: default
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f service-account-definition.yaml
serviceaccount/drone-demo-service-account created
clusterrolebinding.rbac.authorization.k8s.io/drone-demo-service-account-clusterrolebinding created
$ kubectl get serviceaccount drone-demo-service-account
NAME SECRETS AGE
drone-demo-service-account 1 10s
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.ca\.crt}' > secrets/cert
$ head -c 10 secrets/cert
LS0tLS1CRU%
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.token}' | base64 > secrets/token
$ head -c 10 secrets/token
WlhsS2FHSk%
$ cat Dockerfile
FROM busybox
COPY . .
CMD ["./script.sh"]
$ cat script.sh
#!/bin/sh
server=$(cat secrets/server) # Pre-filled
cert=$(cat secrets/cert)
# Added this `tr` call, which is not present in the source I'm working from, after noticing that
# the file-content contains newlines
token=$(cat secrets/token | tr -d '\n')
echo "DEBUG: server is $server, cert is $(echo $cert | head -c 10)..., token is $(echo $token | head -c 10)..."
# Cannot depend on the binami/kubectl image (https://hub.docker.com/r/bitnami/kubectl), because
# it's not available for arm64 - https://github.com/bitnami/charts/issues/7305
wget https://storage.googleapis.com/kubernetes-release/release/v1.19.2/bin/linux/arm64/kubectl
chmod +x kubectl
./kubectl config set-credentials default --token=$token
echo $cert | base64 -d > ca.crt
./kubectl config set-cluster default --server=$server --certificate-authority=ca.crt
./kubectl config set-context default --cluster=default --user=default
./kubectl config use-context default
echo "Done with setup, now cat-ing .kube/config"
echo
cat $HOME/.kube/config
echo "Attempting to get pods"
echo
./kubectl get pods
$ docker build -t stack-overflow-testing . && docker run stack-overflow-testing
Sending build context to Docker daemon 10.75kB
Step 1/3 : FROM busybox
---> 3c277069c6ae
Step 2/3 : COPY . .
---> 74c6a132d255
Step 3/3 : CMD ["./script.sh"]
---> Running in dc55f33f74bb
Removing intermediate container dc55f33f74bb
---> dc68a5d6ba9b
Successfully built dc68a5d6ba9b
Successfully tagged stack-overflow-testing:latest
DEBUG: server is https://rassigma.avril:6443, cert is LS0tLS1CRU..., token is WlhsS2FHSk...
Connecting to storage.googleapis.com (142.250.188.16:443)
wget: note: TLS certificate validation not implemented
saving to 'kubectl'
kubectl 18% |***** | 7118k 0:00:04 ETA
kubectl 43% |************* | 16.5M 0:00:02 ETA
kubectl 68% |********************** | 26.2M 0:00:01 ETA
kubectl 94% |****************************** | 35.8M 0:00:00 ETA
kubectl 100% |********************************| 38.0M 0:00:00 ETA
'kubectl' saved
User "default" set.
Cluster "default" set.
Context "default" created.
Switched to context "default".
Done with setup, now cat-ing .kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority: /ca.crt
server: https://rassigma.avril:6443
name: default
contexts:
- context:
cluster: default
user: default
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
user:
token: WlhsS2FHSkhZM[...REDACTED]
Attempting to get pods
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>If I copy the <code>~/.kube/config</code> from my laptop to the docker container, <code>kubectl</code> commands succeed as expected - so, this isn't a networking issue, just an authorization one. I do note that my laptop-based <code>~/.kube/config</code> lists <code>client-certificate-data</code> and <code>client-key-data</code> rather than <code>token</code> under <code>users: user:</code>, but I suspect that's because my base config is recording a non-service-account.</p>
<p>How can I set up <code>kubectl</code> to authorize as a service account?</p>
<p>Some reading I have done that didn't answer the question for me:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">kubenetes documentation on AuthN/AuthZ</a></li>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/kubernetes-service-accounts" rel="nofollow noreferrer">Google Kubernetes Engine article on service accounts</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">Configure Service Accounts for Pods</a> (this described how to create and associate the accounts, but not how to act as them)</li>
<li>Two blog posts (<a href="https://medium.com/the-programmer/working-with-service-account-in-kubernetes-df129cb4d1cc_" rel="nofollow noreferrer">1</a>, <a href="https://www.magalix.com/blog/building-a-cd-pipeline-with-drone-ci-and-kubernetes" rel="nofollow noreferrer">2</a>) that refer to Service Accounts</li>
</ul>
| scubbo | <p>It appears you have used <code>| base64</code> instead of <code>| base64 --decode</code></p>
| mdaniel |
<p>When attempting to install ElasticSearch for Kubernetes on a PKS instance I am running into an issue where after running <code>kubectl get events --all-namespaces</code> I see <code>create Pod logging-es-default-0 in StatefulSet logging-es-default failed error: pods "logging-es-default-0" is forbidden: SecurityContext.RunAsUser is forbidden</code>. Does this have something to do with a pod security policy? Is there any way to be able to deploy ElasticSearch to Kubernetes if privileged containers are not allowed?</p>
<p>Edit: here is the values.yml file that I am passing into the elasticsearch helm chart.</p>
<pre><code>---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.master=true
roles:
master: "true"
ingest: "true"
data: "true"
replicas: 3
minimumMasterNodes: 2
esMajorVersion: ""
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
# log4j2.properties: |
# key = value
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.4.1"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# additionals labels
labels: {}
esJavaOpts: "-Xmx1g -Xms1g"
resources:
requests:
cpu: "100m"
memory: "2Gi"
limits:
cpu: "1000m"
memory: "2Gi"
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
sidecarResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
networkHost: "0.0.0.0"
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 30Gi
rbac:
create: false
serviceAccountName: ""
podSecurityPolicy:
create: false
name: ""
spec:
privileged: false
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- persistentVolumeClaim
persistence:
enabled: true
annotations: {}
extraVolumes: ""
# - name: extras
# emptyDir: {}
extraVolumeMounts: ""
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraInitContainers: ""
# - name: do-something
# image: busybox
# command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
protocol: http
httpPort: 9200
transportPort: 9300
service:
labels: {}
labelsHeadless: {}
type: ClusterIP
nodePort: ""
annotations: {}
httpPortName: http
transportPortName: transport
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
fsGroup: null
runAsUser: null
# The following value is deprecated,
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""
securityContext:
capabilities: null
# readOnlyRootFilesystem: true
runAsNonRoot: null
runAsUser: null
# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publically expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
nameOverride: ""
fullnameOverride: ""
# https://github.com/elastic/helm-charts/issues/63
masterTerminationFix: false
lifecycle: {}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
# postStart:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
sysctlInitContainer:
enabled: false
keystore: []
</code></pre>
<p>The values listed above produce the following error:</p>
<pre><code>create Pod elasticsearch-master-0 in StatefulSet elasticsearch-master failed error: pods "elasticsearch-master-0" is forbidden: SecurityContext.RunAsUser is forbidden
</code></pre>
<p>Solved: I learned that my istio deployment was causing issues when attempting to deploy any other service into my cluster. I had wrongly assumed that istio, together with my cluster security policies, was not the cause of my issue.</p>
| nfickas | <blockquote>
<p>is forbidden: <code>SecurityContext.RunAsUser</code> is forbidden. Does this have something to do with a pod security policy?</p>
</blockquote>
<p>Yes, that's exactly what it has to do with</p>
<p>Evidently the <code>StatefulSet</code> has included a <code>securityContext:</code> stanza, but your cluster administrator forbids such an action</p>
<blockquote>
<p>Is there any way to be able to deploy ElasticSearch to Kubernetes if privileged containers are not allowed?</p>
</blockquote>
<p>That's not exactly what's going on here -- it's not the "privileged" part that is causing you problems -- it's the <code>PodSpec</code> requesting to run the container as a user other than the one in the docker image. In fact, I would actually be very surprised if any modern elasticsearch docker image requires modifying the user at all, since all the recent ones do not run as <code>root</code> to begin with</p>
<p>Remove that <code>securityContext:</code> stanza from the <code>StatefulSet</code> and report back what new errors arise (if any)</p>
| mdaniel |
<p>I have setup a kubernetes cluster using kubeadm.</p>
<p><strong>Environment</strong></p>
<ol>
<li>Master node installed in a PC with public IP.</li>
<li>Worker node behind NAT address (the interface has local internal IP, but needs to be accessed using the public IP)</li>
</ol>
<p><strong>Status</strong></p>
<p>The worker node is able to join the cluster, and when running</p>
<pre><code>kubectl get nodes
</code></pre>
<p>the status of the node is ready. </p>
<p>Kubernetes can deploy and run pods on that node.</p>
<p><strong>Problem</strong></p>
<p>The problem that I have is that I'm not able to access the pods deployed on that node. For example, if I run </p>
<pre><code>kubectl logs <pod-name>
</code></pre>
<p>where pod-name is the name of a pod deployed on the worker node, I have this error:</p>
<pre><code>Error from server: Get https://192.168.0.17:10250/containerLogs/default/stage-bbcf4f47f-gtvrd/stage: dial tcp 192.168.0.17:10250: i/o timeout
</code></pre>
<p>because it is trying to use the local IP 192.168.0.17, which is not accessible externally.</p>
<p>I have seen that the node had this annotation:</p>
<pre><code>flannel.alpha.coreos.com/public-ip: 192.168.0.17
</code></pre>
<p>So, I have tried to modify the annotation, setting the external IP, in this way:</p>
<pre><code>flannel.alpha.coreos.com/public-ip: <my_externeal_ip>
</code></pre>
<p>and I see that the node is correctly annotated, but it is still using 192.168.0.17.</p>
<p>Is there something else that I have to setup in the worker node or in the cluster configuration?</p>
| Davide | <p><em>there were a metric boatload of Related questions in the sidebar, and I'm about 90% certain this is a FAQ, but can't be bothered to triage the Duplicate</em></p>
<blockquote>
<p>Is there something else that I have to setup in the worker node or in the cluster configuration?</p>
</blockquote>
<p>No, that situation is not a misconfiguration of your worker Node, nor your cluster configuration. It is just a side-effect of the way kubernetes handles Pod-centric traffic. It does mean that if you choose to go forward with that setup, you will not be able to use <code>kubectl exec</code> nor <code>kubectl logs</code> (and I think <code>port-forward</code>, too) since those commands do not send traffic through the API server, rather it directly contacts the <code>kubelet</code> port on the Node which hosts the Pod you are interacting with. That's primarily to offload the traffic from traveling through the API server, but can also be a scaling issue if you have a sufficiently large number of exec/log/port-foward/etc commands happening simultaneously, since TCP ports are not infinite.</p>
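<p>A quick sanity check (just a sketch; any TCP client will do) is to verify whether that kubelet address and port are reachable from outside the worker at all:</p>
<pre class="lang-sh prettyprint-override"><code># if this cannot connect (e.g. when run from the master), exec/logs/port-forward
# against Pods on that node will time out in exactly the way shown above
nc -vz -w 3 192.168.0.17 10250
</code></pre>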
<p>I think it is <em>theoretically</em> possible to have your workstation join the overlay network, since by definition it's not related to the outer network, but I don't have a ton of experience with trying to get an overlay to play nice-nice with NAT, so that's the "theoretically" part.</p>
<p>I have personally gotten Wireguard to work across NAT, meaning you could VPN into your Node's network, but it was some gear turning, and is likely more trouble than it's worth.</p>
| mdaniel |
<p>I have a repetitive task that I do while testing which entails connecting to a cassandra pod and running a couple of CQL queries.</p>
<p>Here's the "manual" approach:</p>
<ol>
<li><p>On cluster controller node, I exec a shell on the pod using kubectl:<br />
<code>kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- /bin/bash</code></p>
</li>
<li><p>Once in the pod I execute cqlsh:<br />
<code>cqlsh $(hostname -i) -u myuser</code><br />
and then enter password interactively</p>
</li>
<li><p>I execute my cql queries interactively</p>
</li>
</ol>
<p>Now, I'd like to have a bash script to automate this. My intent is to run cqlsh directly, via kubectl exec.</p>
<p>The problem I have is that apparently I cannot use a shell variable within the "command" section of kubectl exec. And I will need shell variables to store a) the pod's IP, b) an id which is the input to my first query, and c) intermediate query results (the two latter ones are not added to script yet).</p>
<p>Here's what I have so far, using a dummy CQL query for now:</p>
<pre><code>#!/bin/bash
CASS_IP=$(kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- /usr/bin/hostname -i)
echo $CASS_IP # This prints out the IP address just fine, say 192.168.79.208
# The below does not work, errors provided below
kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- /opt/cassandra/bin/cqlsh $CASS_IP -u myuser -p 'mypass' -e 'SELECT now() FROM system.local;'
# The below works just fine and returns the CQL query output
kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- /opt/cassandra/bin/cqlsh 192.168.79.208 -u myuser -p 'mypass' -e 'SELECT now() FROM system.local;'
</code></pre>
<p>The output from the above is as follows, where IP is echoed, first exec'd cqlsh breaks, and second succeeds:</p>
<pre><code>192.168.79.208
Warning: Timezone defined and 'pytz' module for timezone conversion not installed. Timestamps will be displayed in UTC timezone.
Traceback (most recent call last):
File "/opt/cassandra/bin/cqlsh.py", line 2357, in <module>
main(*read_options(sys.argv[1:], os.environ))
File "/opt/cassandra/bin/cqlsh.py", line 2326, in main
encoding=options.encoding)
File "/opt/cassandra/bin/cqlsh.py", line 463, in __init__
load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
File "/opt/cassandra/bin/../lib/cassandra-driver-internal-only-3.25.0.zip/cassandra-driver-3.25.0/cassandra/policies.py", line 425, in __init__
File "/opt/cassandra/bin/../lib/cassandra-driver-internal-only-3.25.0.zip/cassandra-driver-3.25.0/cassandra/policies.py", line 426, in <listcomp>
File "/usr/lib64/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
command terminated with exit code 1
Warning: Timezone defined and 'pytz' module for timezone conversion not installed. Timestamps will be displayed in UTC timezone.
system.now()
--------------------------------------
e78e75c0-0d3e-11ed-8825-1de1a1b1c128
(1 rows)
</code></pre>
<p>Any ideas how to get around this? I've been researching this for quite a while now, but I'm stuck...</p>
| James | <p>This is a very, very FAQ: the <code>kubectl exec</code> is, as its name says, using <code>exec(3)</code> versus <code>system(3)</code> -- which in your case wouldn't work anyway because the <code>$</code> in your <code>kubectl exec</code> would be interpreted by <strong>your</strong> shell not the <strong>pod's</strong> shell</p>
<p>but thankfully the solution is the same to both problems: create your own <code>system(3)</code> by wrapping the command in a <code>sh -c</code> invocation (or <code>bash -c</code> if you have bash-isms and bash is available inside the pod):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- sh -c '/opt/cassandra/bin/cqlsh $(hostname -i) -u myuser -p "mypass" -e "SELECT now() FROM system.local;"'
</code></pre>
<p>as always, be cognizant of the "outer" versus "inner" quoting, especially if your "mypass" or the <code>-e</code> statement contains shell meta-characters</p>
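<p>If you later need to feed locally-computed values (your id, or intermediate results) into that inner shell, one way to avoid further quoting gymnastics is to pass them as positional parameters -- a sketch, with a made-up keyspace/table:</p>
<pre class="lang-sh prettyprint-override"><code>ID=12345   # computed locally, e.g. from a previous query
kubectl exec pod/my-app-cassandra-pod-name -it --namespace myns -- \
  sh -c '/opt/cassandra/bin/cqlsh $(hostname -i) -u myuser -p "mypass" -e "SELECT * FROM my_ks.my_table WHERE id = $1;"' cql-wrapper "$ID"
</code></pre>
<p>Here <code>"$ID"</code> is expanded by your local shell, while <code>$1</code> and <code>$(hostname -i)</code> are expanded by the pod's shell (the word <code>cql-wrapper</code> just fills the <code>$0</code> slot).</p>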
| mdaniel |
<p>We have an application deployed in a K8S pod and all logs are being monitored in an ELK stack. Now we have one application which uses an external *.jar that writes its logs to a file on a path local to the container. How can I send these logs to the kubernetes console so that they end up in elasticsearch monitoring?</p>
<p>Any help is much appreciated!</p>
| Baharul | <blockquote>
<p>Now we have one application which uses an external *.jar that writes its logs to a file on a path local to the container. How can I send these logs to the kubernetes console so that they end up in elasticsearch monitoring?</p>
</blockquote>
<p>There are three ways, in increasing order of complexity:</p>
<ol>
<li>Cheat and symlink the path it tries to log to as <code>/dev/stdout</code> (or <code>/proc/1/fd/0</code>); sometimes it works and it's super cheap, but if the logging system tries to seek to the end of the file, or rotate it, or catches on that it's not actually a "file", then you'll have to try other tricks</li>
<li>If the app uses a "normal" logging framework, such as log4j, slf4j, logback, etc, you have a better-than-average chance of being able to influence the app's logging behavior via some well placed configuration files or in some cases environment variables</li>
<li>Actually, you know, ask your developers to configure their application according to the <a href="https://12factor.net/" rel="nofollow noreferrer">12 Factor App</a> principles and log to stdout (and stderr!) like a sane app</li>
</ol>
<p>Without more specifics we can't offer more specific advice, but that's the gist of it</p>
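<p>For the first option, the trick is typically applied when the image is built or in its entrypoint -- a minimal sketch, where the log path is only an example of wherever that jar insists on writing:</p>
<pre class="lang-sh prettyprint-override"><code># e.g. in the image's entrypoint script, before the JVM starts
mkdir -p /var/log/myapp
ln -sf /dev/stdout /var/log/myapp/application.log
</code></pre>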
| mdaniel |
<p>I need to check certificate validity for a K8S cluster, e.g. to use the alertmanager to notify when the
certificate is about to expire and send a suitable notification.</p>
<p>I found this <a href="https://github.com/ribbybibby/ssl_exporter" rel="nofollow noreferrer">repo</a> but I'm not sure how to configure it: what is the target, and how do I achieve this?</p>
<p><a href="https://github.com/ribbybibby/ssl_exporter" rel="nofollow noreferrer">https://github.com/ribbybibby/ssl_exporter</a></p>
<p>which based on the black-box exporter </p>
<p><a href="https://github.com/prometheus/blackbox_exporter" rel="nofollow noreferrer">https://github.com/prometheus/blackbox_exporter</a></p>
<pre><code>
- job_name: "ssl"
metrics_path: /probe
static_configs:
- targets:
- 127.0.0.1
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- target_label: __address__
replacement: 127.0.0.1:9219 # SSL exporter.
</code></pre>
<p>I want to check the current K8S cluster (where Prometheus is deployed), to see whether the certificate is valid or not.
What should I put there <strong>inside the</strong> <strong>target</strong> to make it work?</p>
<p>Do I need to expose something in the cluster?</p>
<p><strong>update</strong>
This is where our certificate is located in the system:</p>
<pre><code> tls:
mode: SIMPLE
privateKey: /etc/istio/bide-tls/tls.key
serverCertificate: /etc/istio/bide-tls/tls.crt
</code></pre>
<p>My scenario is:</p>
<p>Prometheus and the ssl_exporter are in the same cluster, and the certificate which they need to check is also in that same cluster (see the config above).</p>
| JME | <blockquote>
<p>What should I put there inside the target to make it work? </p>
</blockquote>
<p>I think the <a href="https://github.com/ribbybibby/ssl_exporter/tree/v0.6.0#targets" rel="nofollow noreferrer">"Targets" section of the readme</a> is clear: it contains the endpoints that you wish the monitor to report on:</p>
<pre><code>static_configs:
- targets:
- kubernetes.default.svc.cluster.local:443
- gitlab.com:443
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- target_label: __address__
# rewrite to contact the SSL exporter
replacement: 127.0.0.1:9219
</code></pre>
<blockquote>
<p>Do I need to expose something in the cluster?</p>
</blockquote>
<p>Depends on if you want to report on <strong>internal</strong> certificates, or whether the <code>ssl_exporter</code> can reach the endpoints you want. For example, in the snippet above, I used the KubeDNS name <code>kubernetes.default.svc.cluster.local</code> with the assumption that <code>ssl_exporter</code> is running as a Pod within the cluster. If that doesn't apply to you, the you would want to change that endpoint to be <code>k8s.my-cluster-dns.example.com:6443</code> or whatever your kubernetes API is listening upon that your <code>kubectl</code> can reach.</p>
<p>Then, in the same vein, if both prometheus and your ssl_exporter are running inside the cluster, then you would change <code>replacement:</code> to be the <code>Service</code> IP address that is backed by your ssl_exporter Pods. If prometheus is outside the cluster and ssl_monitor is inside the cluster, then you'll want to create a <code>Service</code> of <code>type: NodePort</code> so you can point your prometheus at one (or all?) of the Node IP addresses and the <code>NodePort</code> upon which ssl_exporter is listening</p>
<p>The only time one would use the literal <code>127.0.0.1:9219</code> is if prometheus and the ssl_exporter are running on the same machine or in the same Pod, since that's the only way that 127.0.0.1 is meaningful from prometheus's point of view</p>
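<p>Concretely, for the "everything in the same cluster" scenario you describe, one hedged sketch is a plain ClusterIP Service in front of the ssl_exporter pods (the label selector is an assumption -- match it to however you deployed the exporter), and then point <code>replacement:</code> at that Service's DNS name and port instead of <code>127.0.0.1:9219</code>:</p>
<pre class="lang-sh prettyprint-override"><code>cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ssl-exporter
spec:
  selector:
    app: ssl-exporter   # assumption: adjust to the labels on your ssl_exporter pods
  ports:
  - name: http
    port: 9219
    targetPort: 9219
EOF
</code></pre>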
| mdaniel |
<pre><code>kubectl get -n istio-system secret istio-ca -ogo-template='{{index .data "tls.crt"}}' | base64 -d > ca.pem
</code></pre>
<p>How do I run the above command in an ansible playbook?</p>
<p>I am trying to use it as follows:</p>
<pre><code>- name: Apply secret istio-ca
shell: kubectl get -n istio-system secret istio-ca -ogo-template='{{index .data "tls.crt"}}' | base64 -d > ca.pem
register: sout
</code></pre>
<p>but this gives me an error as follows:</p>
<pre><code>fatal: [172.31.20.135]: FAILED! => {"msg": "template error while templating string: expected token 'end of print statement', got 'string'. String: kubectl get -n istio-system secret istio-ca -ogo-template='{{index .data \"tls.crt\"}}' | base64 -d > ca.pem"}
</code></pre>
| Dhananjay Gahiwade | <p>I swear this has been answered a thousand times, but I can't immediately find the golang/helm/kubectl specific one in the <a href="https://stackoverflow.com/search?q=%5Bansible%5D+end+of+print+statement">thousands of this same error</a></p>
<p>The problem is that jinja2 uses <code>{{</code> as its escape syntax, but golang text templating uses <code>{{</code> as its escape syntax, and because ansible does not know you mean the golang version, it tried to evaluate your go-template as if it was jinja2 and kaboom</p>
<p>There are two paths out of that situation: <a href="https://jinja.palletsprojects.com/en/3.0.x/templates/#escaping" rel="noreferrer"><code>{% raw %}</code> and <code>{% endraw %}</code></a> or having an outer jinja2 expression that resolves to the inner golang expression</p>
<pre class="lang-yaml prettyprint-override"><code>- debug:
msg: kubectl get {% raw %}-ogo-template={{ awesome }}{% endraw %}
- debug:
msg: kubectl get -ogo-template={{"{{"}} awesome {{"}}"}}
</code></pre>
| mdaniel |
<p>I have a Helm chart with <code>values.yaml</code> containing:</p>
<pre class="lang-yaml prettyprint-override"><code># Elided
tolerations: []
</code></pre>
<p>I'm trying to pass the tolerations via the command line but it always removes the quotes (or adds double quotes inside single quotes) despite all the below attempts. As a result it <strong>fails</strong> on install saying it expected a string.</p>
<pre class="lang-sh prettyprint-override"><code># Attempt 0
helm install traefik traefik/traefik --set tolerations[0].key=CriticalAddonsOnly --set tolerations[0].value="true" --set tolerations[0].operator=Equal --set tolerations[0].effect=NoExecute
# Attempt 1
helm install traefik traefik/traefik --set tolerations[0].key=CriticalAddonsOnly --set "tolerations[0].value="true"" --set tolerations[0].operator=Equal --set tolerations[0].effect=NoExecute
# Attempt 2
helm install traefik traefik/traefik --set tolerations[0].key=CriticalAddonsOnly --set "tolerations[0].value=\"true\"" --set tolerations[0].operator=Equal --set tolerations[0].effect=NoExecute
# Attempt 3
helm install traefik traefik/traefik --set tolerations[0].key=CriticalAddonsOnly --set tolerations[0].value="\"true\"" --set tolerations[0].operator=Equal --set tolerations[0].effect=NoExecute
# Attempt 4
helm install traefik traefik/traefik --set tolerations[0].key=CriticalAddonsOnly --set tolerations[0].value='"true"' --set tolerations[0].operator=Equal --set tolerations[0].effect=NoExecute
</code></pre>
<p>They all end up creating a yaml with <code>value: true</code> or <code>value: '"true"'</code>, neither of which will install.</p>
| Don Rhummy | <p>There appears to be two answers: the exceptionally verbose one that you're trying has a solution, or the more succinct one which doesn't prompt stack overflow questions for future readers to understand:</p>
<p>Helm offers <a href="https://helm.sh/docs/helm/helm_install/#synopsis" rel="noreferrer"><code>--set-string</code></a> which is the interpolation-free version of <code>--set</code></p>
<pre class="lang-sh prettyprint-override"><code>helm install traefik traefik/traefik \
--set tolerations[0].key=CriticalAddonsOnly \
--set-string tolerations[0].value=true \
--set tolerations[0].operator=Equal \
--set tolerations[0].effect=NoExecute
</code></pre>
<p>However, as you experienced, that <code>--set</code> syntax is designed for the simplest cases only, for more complex cases <code>--values</code> is the correct mechanism. You can read them from stdin if created a temporary yaml file is too much work</p>
<pre class="lang-sh prettyprint-override"><code>printf 'tolerations: [{key: CriticalAddonsOnly, value: "true", operator: Equal, effect: NoExecute}]\n' | \
helm install traefik traefik/traefik --values /dev/stdin
</code></pre>
| mdaniel |
<p>I was trying out a spring boot microservice deployment on a kubernetes cluster using a Helm chart. But I noticed a strange issue: <strong>my spring boot application starts but shuts down immediately afterwards</strong>.</p>
<p>Here are the logs</p>
<pre><code>Started JhooqK8sApplication in 3.431 seconds (JVM running for 4.149)
2020-06-25 20:57:24.460 INFO 1 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2020-06-25 20:57:24.469 INFO 1 --- [extShutdownHook] o.e.jetty.server.AbstractConnector : Stopped ServerConnector@548a102f{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2020-06-25 20:57:24.470 INFO 1 --- [extShutdownHook] org.eclipse.jetty.server.session : node0 Stopped scavenging
2020-06-25 20:57:24.474 INFO 1 --- [extShutdownHook] o.e.j.s.h.ContextHandler.application : Destroying Spring FrameworkServlet 'dispatcherServlet'
2020-06-25 20:57:24.493 INFO 1 --- [extShutdownHook] o.e.jetty.server.handler.ContextHandler : Stopped o.s.b.w.e.j.JettyEmbeddedWebAppContext@56528192{application,/,[file:///tmp/jetty-docbase.4637295322181051129.8080/],UNAVAILABLE}
</code></pre>
<p>Spring Boot Version : <strong>2.2.7.RELEASE</strong>
Docker Hub Public image for spring boot : <strong>rahulwagh17/kubernetes:jhooq-k8s-springboot-jetty</strong></p>
<p>One strange thing I noticed: when I use kubectl commands manually to create the deployment and service, the Spring Boot deployment works perfectly fine.</p>
<pre><code>vagrant@kmaster:~$ kubectl create deployment demo --image=rahulwagh17/kubernetes:jhooq-k8s-springboot-jetty
vagrant@kmaster:~$ kubectl expose deployment demo --type=LoadBalancer --name=demo-service --external-ip=1.1.1.1 --port=8080
</code></pre>
<p>(I followed this guide for deploying spring boot on kubernete - <a href="https://jhooq.com/deploy-spring-boot-microservices-on-kubernetes/" rel="nofollow noreferrer">Deploy spring boot on kubernetes cluster</a>)</p>
<p>I am just wondering: is there something wrong with Spring Boot or with my Helm setup?</p>
<p>Here is my helm templates -</p>
<pre><code>---
# Source: springboot/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: RELEASE-NAME-springboot
labels:
helm.sh/chart: springboot-0.1.0
app.kubernetes.io/name: springboot
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "1.16.0"
app.kubernetes.io/managed-by: Helm
---
# Source: springboot/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: RELEASE-NAME-springboot
labels:
helm.sh/chart: springboot-0.1.0
app.kubernetes.io/name: springboot
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "1.16.0"
app.kubernetes.io/managed-by: Helm
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
app.kubernetes.io/name: springboot
app.kubernetes.io/instance: RELEASE-NAME
---
# Source: springboot/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: RELEASE-NAME-springboot
labels:
helm.sh/chart: springboot-0.1.0
app.kubernetes.io/name: springboot
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "1.16.0"
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: springboot
app.kubernetes.io/instance: RELEASE-NAME
template:
metadata:
labels:
app.kubernetes.io/name: springboot
app.kubernetes.io/instance: RELEASE-NAME
spec:
serviceAccountName: RELEASE-NAME-springboot
securityContext:
{}
containers:
- name: springboot
securityContext:
{}
image: "rahulwagh17/kubernetes:jhooq-k8s-springboot-jetty"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{}
---
# Source: springboot/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
name: "RELEASE-NAME-springboot-test-connection"
labels:
helm.sh/chart: springboot-0.1.0
app.kubernetes.io/name: springboot
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "1.16.0"
app.kubernetes.io/managed-by: Helm
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['RELEASE-NAME-springboot:80']
restartPolicy: Never
</code></pre>
| Rahul Wagh | <blockquote>
<p>2020-06-25 20:57:24.469 INFO 1 --- [extShutdownHook] o.e.jetty.server.AbstractConnector : Stopped ServerConnector@548a102f{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code> ports:
- name: http
containerPort: 80
</code></pre>
<p>It appears the liveness probe (configured to contact the port named <code>http</code>) is killing your Pod since your container appears to be listening on :8080 but you've told kubernetes that it's listening on :80</p>
<p>Since a <code>kubectl</code> created deployment will not have any such specificity, kubernetes won't use a liveness probe and there you are</p>
<p>You can usually configure the spring application via an environment variable if you want to test that theory:</p>
<pre class="lang-yaml prettyprint-override"><code> containers:
- name: springboot
env:
- name: SERVER_PORT
value: '80'
# and its friend, which is the one that
# you should be using for liveness and readiness
- name: MANAGEMENT_SERVER_PORT
value: '8080'
securityContext:
{}
image: "rahulwagh17/kubernetes:jhooq-k8s-springboot-jetty"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
</code></pre>
| mdaniel |
<p>I'm trying to run the flyway <code>docker image 7.3.2</code> against a Postgres DB on Kubernetes:</p>
<p>When i run the job my output is:</p>
<pre><code>Flyway Community Edition 7.3.2 by Redgate
ERROR:
Unable to obtain connection from database (jdbc:postgresql://xxx.eu-west-2.rds.amazonaws.com:5432/xxx flyway.user=postgres flyway.password=****************) for user 'null': The server requested password-based authentication, but no password was provided.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08004
Error Code : 0
Message : The server requested password-based authentication, but no password was provided.
Caused by: org.postgresql.util.PSQLException: The server requested password-based authentication, but no password was provided.
</code></pre>
<p>the settings it outputs are correct and should enable a connection.</p>
<p>I pass in my <code>flyway.conf</code> via a configmap which is:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: flyway-configmap
data:
flyway.conf:
flyway.url=jdbc:postgresql://xxx.eu-west-2.rds.amazonaws.com:5432/xxx
flyway.user=postgres
flyway.password=xxx
</code></pre>
<p>Is anyone able to point out what I'm doing wrong?</p>
| Staggerlee011 | <p>If that is literally your <code>ConfigMap</code>, then it is missing the <code>|</code> character after the <code>:</code> which would make that yaml key into <a href="https://yaml.org/spec/1.2/spec.html#style/block/literal" rel="nofollow noreferrer">a newline delimited scalar</a>. That theory also squares up with your error message showing that the <strong>entire thing</strong> is taken as the value of <code>flyway.url</code></p>
<p>What you want:</p>
<pre class="lang-yaml prettyprint-override"><code> flyway.conf: |
flyway.url=jdbc:postgresql://xxx.eu-west-2.rds.amazonaws.com:5432/xxx
flyway.user=postgres
flyway.password=xxx
</code></pre>
| mdaniel |
<p>I know there are many questions around this but I didn't find any with a real answer.</p>
<p>My helm chart has dependencies on other helm charts and I need to override their values with my <code>.Release.Name</code> and <code>.Release.Namespace</code>.</p>
<p><strong>My requeriments.yaml</strong></p>
<pre><code>dependencies:
- name: keycloak
alias: keycloak-config
repository: https://my-repository.com/
version: 1.0.0
- name: kong
alias: kong-config
repository: https://my-repository.com/
version: 1.0.0
</code></pre>
<p><strong>On my values.yaml</strong></p>
<pre><code>kong-config:
websso:
service:
fullnameOverride: "my-helm._RELEASE_NAMESPACE_.svc.cluster.local"
ckngOauth2Opts: "--data config.post_logout_redirect_uri=/_RELEASE_NAME_
--data config.logout_path=/_RELEASE_NAME_/logout"
</code></pre>
<p>I basically need to use <code>{{ .Release.Name }}</code> where I have <code>_RELEASE_NAME_</code> and <code>{{ .Release.Namespace }}</code> where I have <code>_RELEASE_NAMESPACE_</code>.</p>
<p>I already tried:</p>
<ul>
<li><code>{{ .Release.Name }}</code> and <code>{{ .Release.Namespace }}</code></li>
<li><code>$RELEASE_NAME</code> and <code>$RELEASE_NAMESPACE</code></li>
<li><code>${RELEASE_NAME}</code> and <code>${RELEASE_NAMESPACE}</code></li>
</ul>
<p>but nothing works.</p>
<p>Note that I really need to set those values in <code>values.yaml</code>. I don't have access to the dependencies' code, so I can't change them to set those values there.</p>
<p>How can I solve this?</p>
| Ninita | <p>While it does not appear that helm, itself, can do that, <a href="https://github.com/roboll/helmfile#readme" rel="nofollow noreferrer">helmfile</a> can via either its <a href="https://github.com/roboll/helmfile#helmfile--kustomize" rel="nofollow noreferrer">integration with kustomize</a> or with its <a href="https://github.com/roboll/helmfile#hooks" rel="nofollow noreferrer"><code>prepare</code> hook</a>. I'll show the <code>prepare</code> hook because it's much shorter</p>
<pre class="lang-yaml prettyprint-override"><code>releases:
- name: kong-config
chart: whatever/kong
version: 1.0.0
values:
- ./generated-values.yaml
hooks:
- events: ['prepare']
command: bash
args:
- -c
- |
printf 'websso:\n service:\n fullnameOverride: my-helm.{{`{{ .Release.Namespace }}`}}.svc.cluster.local\n' > generated-values.yaml
</code></pre>
| mdaniel |
<ol>
<li>Figure out what is the correct way to scale up the remote function.</li>
<li>Figure out the scaling relations between replicas of the remote function, the Flink <code>parallelism.default</code> configuration, ingress topic partition counts, and message partition keys. What are the design intentions behind this?</li>
</ol>
<p>As the docs suggest, one of the benefits of Flink Stateful Functions remote functions is that they can scale independently of the Flink workers and task parallelism. To understand more about how messages are sent to the remote function processes, I tried the following scenarios.</p>
<p><strong>Preparation</strong></p>
<ol>
<li>Use <a href="https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s" rel="nofollow noreferrer">https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s</a> this for my experiment.</li>
<li>Modify the <a href="https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s/03-functions/functions.py" rel="nofollow noreferrer">https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s/03-functions/functions.py</a> to the following to check the logs how things are parallelized in practice</li>
</ol>
<pre class="lang-py prettyprint-override"><code>...
functions = StatefulFunctions()
@functions.bind(typename="example/hello")
async def hello(context: Context, message: Message):
arg = message.raw_value().decode('utf-8')
hostname = os.getenv('HOSTNAME')
for _ in range(10):
print(f"{datetime.utcnow()} {hostname}: Hello from {context.address.id}: you wrote {arg}!", flush=True)
time.sleep(1)
...
</code></pre>
<ol start="3">
<li>Play around the <code>parallelism.default</code> in the flink.conf, replicas count in the functions deployment configuration as well different partitioning configurations in the ingress topic: <code>names</code></li>
</ol>
<p><strong>Observations</strong></p>
<ol>
<li>When sending messages with the same partition key, everything seems to run sequentially. Meaning if I send 5 messages like "key1:message1", "key1:message2", "key1:message3", "key1:message4", "key1:message5", I can see that only one of the pods is getting requests even though I have more replicas (5 replicas configured) of the remote function in the deployment. Regardless of how I configure the parallelism or increase the ingress topic partition count, the behavior stays the same.</li>
<li>When sending messages with 10 partition keys (the topic is configured with 5 partitions, parallelism is configured to 5, and the remote function has 5 replicas), which replicas of the remote function receive the requests seems to be random. Sometimes 5 of them receive requests at the same time so that all 5 can work together, but sometimes only 2 of them are utilized and the other 3 are just waiting.</li>
<li>It seems parallelism determines the number of consumers in the same consumer group subscribing to the ingress topic. I suspect that if I configure more parallelism than the number of partitions in the ingress topic, the extra parallelism will just stay idle.</li>
</ol>
<p><strong>My Expectations</strong></p>
<ol>
<li>What I really expect is that all 5 remote function replicas should always be fully utilized as long as there is still a backlog in the ingress topic.</li>
<li>When the ingress topic is configured with multiple partitions, each partition should be batched separately and multiplexed with the batches of the other parallel consumers, so that all of the remote function processes are utilized.</li>
</ol>
<p>Can some Flink expert help me understand the above behavior and design intentions more?</p>
| joeyinso | <p>There are two things happening here...</p>
<ol>
<li>Each partition in your topic is assigned to a sub-task. This is done round-robin, so if you have 5 topic partitions and 5 sub-tasks (your parallelism) then every sub-task is reading from a single different topic partition.</li>
<li>Records being read from the topic are keyed and distributed (what Flink calls partitioning). If you only have one unique key, then every record is sent to the same sub-task, and thus only one sub-task is getting any records. Any time you have low key cardinality relative to the number of sub-tasks you can get a skewed distribution of data (see the sketch after this list).</li>
</ol>
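<p>For intuition only (this is <em>not</em> Flink's actual key-group hashing), a tiny shell sketch using <code>cksum</code> as a stand-in hash shows why a single key never spreads across sub-tasks: any deterministic hash maps the same key to the same bucket every time.</p>
<pre class="lang-sh prettyprint-override"><code># toy model: key -> sub-task index, with parallelism 5
parallelism=5
for key in key1 key1 key1 key2 key3; do
  bucket=$(( $(printf '%s' "$key" | cksum | cut -d' ' -f1) % parallelism ))
  echo "$key -> sub-task $bucket"
done
</code></pre>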
<p>Usually in Statefun you'd scale up processing by having more parallel functions, versus scaling up the number of task managers that are running.</p>
| kkrugler |
<p>I am trying to get the status of pods running on a k8s cluster.
I went through this document, which <a href="https://docs.ansible.com/ansible/latest/modules/k8s_module.html" rel="nofollow noreferrer">states</a>: "<code>Use the OpenShift Python client</code>"</p>
<p>Does that mean Openshift python client needs to be installed on the Master of K8 cluster or on the machine where ansible is installed and ansible scripts being invoked?</p>
<p>( I have installed openshift client on the ansible server- however, still getting error that openshift client is not installed)</p>
| pythondev | <blockquote>
<p>I have installed openshift client on the ansible server- however, still getting error that openshift client is not installed</p>
</blockquote>
<p>The answer is the same for every ansible module's dependency: it must be in the python that is configured as <code>ansible_python_interpreter</code> for the host against which that module is running. So, if your module is connecting to the k8s master, it must be in its python, if it's running against localhost, then it must be in the python you are using locally.</p>
<p>Be aware that "the python you are using locally" can be different from "the python that <strong>ansible</strong> is running under," especially if you have installed ansible via Homebrew or in its own virtualenv.</p>
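<p>As a quick sanity check (a sketch; <code>ansible_playbook_python</code> is a built-in magic variable in Ansible 2.8+, and the interpreter path below is just a placeholder for whatever the first command prints), you can see which python the controller uses and install the client into exactly that interpreter:</p>
<pre class="lang-sh prettyprint-override"><code># which python the ansible controller process itself runs under
ansible localhost -m debug -a 'msg={{ ansible_playbook_python }}'

# then install the dependency into that exact interpreter, for example:
/usr/bin/python3 -m pip install openshift
</code></pre>
<p>If the module runs against a remote host (e.g. the k8s master) instead of localhost, the same applies to that host's <code>ansible_python_interpreter</code>.</p>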
| mdaniel |
<p>I'm struggling to expose my app over the Internet when deployed to AWS EKS.</p>
<p>I have created a deployment and a service, I can see both of these running when using kubectl. I can see that the app has successfully connected to an external database as it runs a script on startup that initialises said database.</p>
<p>My issue is arising when trying to access the app over the internet. I have tried accessing the cluster endpoint and I am getting this error:</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User "system:anonymous" cannot get path "/"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
</code></pre>
<p>However, if I access the "/readyz" path I get "ok" returned.
"/version" returns the following:</p>
<pre><code>{
"major": "1",
"minor": "16+",
"gitVersion": "v1.16.8-eks-e16311",
"gitCommit": "e163110a04dcb2f39c3325af96d019b4925419eb",
"gitTreeState": "clean",
"buildDate": "2020-03-27T22:37:12Z",
"goVersion": "go1.13.8",
"compiler": "gc",
"platform": "linux/amd64"
}
</code></pre>
<p>My deployment.yml file contains the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: client
labels:
app: client
spec:
replicas: 1
selector:
matchLabels:
app: client
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: image/repo
ports:
- containerPort: 80
imagePullPolicy: Always
</code></pre>
<p>My service.yml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: client
labels:
run: client
spec:
type: LoadBalancer
ports:
- name: "80"
port: 80
targetPort: 80
protocol: TCP
selector:
run: client
</code></pre>
<p>I can see the Load Balancer has been created in the AWS console and I have tried updating the security group of the LB to be able to talk to the cluster endpoint. The LB dashboard is showing the one attached instance is 'OutOfService' and also under the monitoring tab, I can see one Unhealthy Host.</p>
<p>I've tried accessing the Load Balancer endpoint as provided in the EC2 area of the console (this matches what is returned from <code>kubectl get services</code> as the <code>EXTERNAL-IP</code> of the LB service) and I'm getting an empty response from there.</p>
<pre><code>curl XXXXXXX.eu-west-2.elb.amazonaws.com:80
curl: (52) Empty reply from server
</code></pre>
<p>This is the same when accessing in a web browser.</p>
<p>I seem to be going round in circles with this one; any help at all would be greatly appreciated.</p>
| SteveJDB | <blockquote>
<p>I've tried accessing the Load Balancer endpoint</p>
</blockquote>
<p>You are accessing the <strong>EKS</strong> URL, which is the kubernetes apiserver endpoint, and not the LoadBalancer that was (hopefully) created for your <code>client</code> <code>Service</code></p>
<p>You will want to <code>kubectl get -o wide svc client</code> and if it was successful in provisioning a LoadBalancer for you, then its URL will appear in the output. You can get more details about that situation by <code>kubectl describe svc client</code>, which will include any events that affected it during provisioning</p>
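<p>For example, assuming the <code>Service</code> lives in the default namespace, a quick way to see what was provisioned:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get svc client -o wide   # the EXTERNAL-IP column shows the ELB hostname, if one was created
kubectl describe svc client      # the Events section explains any provisioning failure
kubectl get endpoints client     # an empty ENDPOINTS list means the selector matches no Pods
</code></pre>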
| mdaniel |
<p>I have databases deployed as StatefulSets on my Kubernetes cluster. I would like to know how I can create alerts (send an email) when Persistent Volumes are 80% full.</p>
<p>P.S: This k8s cluster is deployed using Rancher v2.4</p>
| Mohamed | <p>You will need to monitor your volumes from Prometheus, the link above from Manoj is a good start or visit <a href="https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepersistentvolumefillingup" rel="nofollow noreferrer">kubepersistentvolumefillingup</a></p>
<p>After prometheus is happy, you can configure alert manager to generate email alerts.</p>
<p>Good luck.</p>
<p>There is a helm chart to get you started. <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">kube-prometheus</a></p>
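<p>As a rough sketch of the alert itself (assuming kube-prometheus / the prometheus-operator CRDs are installed, kubelet volume stats are being scraped, and your Prometheus's <code>ruleSelector</code> will pick this rule up; the names, namespace, and the 80% threshold are just examples):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pvc-usage
  namespace: monitoring
spec:
  groups:
  - name: pvc-usage
    rules:
    - alert: PersistentVolumeFillingUp
      # fires when less than 20% of the volume is free, i.e. more than 80% used
      expr: kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes < 0.20
      for: 5m
      labels:
        severity: warning
EOF
</code></pre>
<p>Emailing is then a matter of adding an email receiver and a matching route to the Alertmanager configuration.</p>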
| guycole |
<p>I created a small cluster with GPU nodes on GKE like so:</p>
<pre class="lang-sh prettyprint-override"><code># create cluster and CPU nodes
gcloud container clusters create clic-cluster \
--zone us-west1-b \
--machine-type n1-standard-1 \
--enable-autoscaling \
--min-nodes 1 \
--max-nodes 3 \
--num-nodes 2
# add GPU nodes
gcloud container node-pools create gpu-pool \
--zone us-west1-b \
--machine-type n1-standard-2 \
--accelerator type=nvidia-tesla-k80,count=1 \
--cluster clic-cluster \
--enable-autoscaling \
--min-nodes 1 \
--max-nodes 2 \
--num-nodes 1
</code></pre>
<p>When I submit a GPU job it successfully ends up running on the GPU node. However, when I submit a second job I get an <code>UnexpectedAdmissionError</code> from kubernetes:</p>
<blockquote>
<p>Update plugin resources failed due to requested number of devices
unavailable for nvidia.com/gpu. Requested: 1, Available: 0, which is
unexpected.</p>
</blockquote>
<p>I would have expected the cluster to start the second GPU node and place the job there. Any idea why this didn't happen? My job spec looks roughly like this:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: <job_name>
spec:
template:
spec:
initContainers:
- name: decode
image: "<decoder_image>"
resources:
limits:
nvidia.com/gpu: 1
command: [...]
[...]
containers:
- name: evaluate
image: "<evaluation_image>"
command: [...]
</code></pre>
| Lucas | <p>The resource constraint needs to be added to the <code>containers</code> spec as well:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: <job_name>
spec:
template:
spec:
initContainers:
- name: decode
image: "<decoder_image>"
resources:
limits:
nvidia.com/gpu: 1
command: [...]
[...]
containers:
- name: evaluate
image: "<evaluation_image>"
resources:
limits:
nvidia.com/gpu: 1
command: [...]
</code></pre>
<p>I only required a GPU in one of the <code>initContainers</code>, but this seems to confuse the scheduler. Now autoscaling and scheduling works as expected.</p>
| Lucas |
<p>I'm quite new to Kubernetes, but so far I was able to configure an AKS (Azure Kubernetes Service) cluster. I have multiple namespaces for my services (dev, stage, prod) and configured an Ingress service using nginx (<strong>in its own namespace 'ingress-nginx'</strong>). The setup works perfectly with HTTP. </p>
<p>My problems started when I tried to use HTTPS. First I installed cert-manager by using <a href="https://cert-manager.io/docs/installation/kubernetes/" rel="nofollow noreferrer"><strong>this</strong></a> script. It <strong>created its own namespace again: 'cert-manager'</strong>. <strong>I was not using Helm, just the regular manifest.</strong> I also followed the MS Azure <a href="https://cert-manager.io/docs/configuration/acme/dns01/azuredns/" rel="nofollow noreferrer"><strong>DNS config</strong></a>.</p>
<p>Everything seems correct: I have my services, secrets, ClusterIssuer, etc. Even the challenge is created in the Azure DNS zone; you can see it on the Azure portal. But I did not get any certificate.</p>
<p><a href="https://i.stack.imgur.com/pMHQI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pMHQI.png" alt="enter image description here"></a></p>
<p>ClusterIssuer config:</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: 8b3s-org-letsencrypt
spec:
acme:
#server: https://acme-v02.api.letsencrypt.org/directory
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: <[email protected]>
privateKeySecretRef:
name: 8b3s-org-letsencrypt-key
solvers:
- selector:
dns01:
azuredns:
clientID: ....
clientSecretSecretRef:
# The following is the secret we created in Kubernetes. Issuer will use this to present challenge to Azure DNS.
name: azuredns-config
key: client-secret
subscriptionID: ....
tenantID: "...."
resourceGroupName: Web
hostedZoneName: 8b3s.org
# Azure Cloud Environment, default to AzurePublicCloud
environment: AzurePublicCloud
</code></pre>
<p>Ingress config:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: 8b3s-virtual-host-ingress
namespace: ingress-nginx
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /
cert-manager.io/cluster-issuer: "8b3s-org-letsencrypt"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/tls-acme: "true"
spec:
tls:
- hosts:
- '*.8b3s.org'
secretName: 8b3s-org-letsencrypt-tls
rules:
- host: dev.8b3s.org
http:
paths:
- path: /
backend:
serviceName: the8b3swebsite-development-ext
servicePort: 8080
</code></pre>
<p>So the issue is that every config seems OK but there is no cert at all. <strong>I only got an 'Opaque' secret with 'tls.key: 1679 bytes' as '8b3s-org-letsencrypt-key' in the 'cert-manager' namespace.</strong></p>
<p><a href="https://i.stack.imgur.com/wdZer.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wdZer.png" alt="enter image description here"></a></p>
<p>The <strong>cert secret got created in the 'ingress-nginx' namespace as '8b3s-org-letsencrypt-tls' with type 'kubernetes.io/tls', but with 'ca.crt: 0 bytes' and 'tls.crt: 0 bytes'</strong>.</p>
<p><a href="https://i.stack.imgur.com/6LWoR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6LWoR.png" alt="enter image description here"></a></p>
<p>I was also looking into the cert-manager pods' output and found these 2 log lines odd (if needed, I have the full logs):</p>
<pre><code>I0222 22:51:00.067791 1 acme.go:201] cert-manager/controller/certificaterequests-issuer-acme/sign "msg"="acme Order resource is not in a ready state, waiting..." "related_resource_kind"="Order" "related_resource_name"="8b3s-org-letsencrypt-tls-1807766204-3808299645" "related_resource_namespace"="ingress-nginx" "resource_kind"="CertificateRequest" "resource_name"="8b3s-org-letsencrypt-tls-1807766204" "resource_namespace"="ingress-nginx"
I0222 22:51:00.068069 1 sync.go:129] cert-manager/controller/orders "msg"="Creating additional Challenge resources to complete Order" "resource_kind"="Order" "resource_name"="8b3s-org-letsencrypt-tls-1807766204-3808299645" "resource_namespace"="ingress-nginx"
E0222 22:51:01.876182 1 sync.go:184] cert-manager/controller/challenges "msg"="propagation check failed" "error"="DNS record for \"8b3s.org\" not yet propagated" "dnsName"="8b3s.org" "resource_kind"="Challenge" "resource_name"="8b3s-org-letsencrypt-tls-1807766204-3808299645-481622463" "resource_namespace"="ingress-nginx" "type"="dns-01"
</code></pre>
<p>Any idea what can be the problem?</p>
<p><strong>UPDATE:</strong>
Added Azure NS records to my domain DNS as suggested. Waited an hour or so but there is no effect... Deleted existing CA secrets, restarted all Nginx, cert-manager Pods. And noticed the following error:</p>
<pre><code>E0225 10:14:03.671099 1 util.go:71] cert-manager/controller/certificaterequests/handleOwnedResource "msg"="error getting order referenced by resource" "error"="certificaterequest.cert-manager.io \"8b3s-org-letsencrypt-tls-1807766204\" not found" "related_resource_kind"="CertificateRequest" "related_resource_name"="8b3s-org-letsencrypt-tls-1807766204" "related_resource_namespace"="ingress-nginx" "resource_kind"="Order" "resource_name"="8b3s-org-letsencrypt-tls-1807766204-3808299645" "resource_namespace"="ingress-nginx"
E0225 10:14:03.674679 1 util.go:71] cert-manager/controller/certificates/handleOwnedResource "msg"="error getting order referenced by resource" "error"="certificate.cert-manager.io \"8b3s-org-letsencrypt-tls\" not found" "related_resource_kind"="Certificate" "related_resource_name"="8b3s-org-letsencrypt-tls" "related_resource_namespace"="ingress-nginx" "resource_kind"="CertificateRequest" "resource_name"="8b3s-org-letsencrypt-tls-1807766204" "resource_namespace"="ingress-nginx"
</code></pre>
| Major | <p>The <a href="https://whois.domaintools.com/8b3s.org" rel="nofollow noreferrer">whois record</a> for your domain shows that it is still pointed at <code>NS57.DOMAINCONTROL.COM</code> and not the 4 Azure DNS resolvers you show in your screenshot. Thus, Let's Encrypt has no way of knowing they should use Azure to look up that <code>_acme-challenge</code> record, and it fails.</p>
| mdaniel |
<p>I have this yaml for an Ingress:</p>
<pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: app
namespace: ingress-controller
... omitted for brevity ...
spec:
rules:
- host: ifs-alpha-kube-001.example.com
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: service-nodeport
servicePort: 80
- path: /
pathType: ImplementationSpecific
backend:
serviceName: service-nodeport
servicePort: 443
status:
loadBalancer:
ingress:
- {}
</code></pre>
<p>In the above I set ...</p>
<pre><code> - host: ifs-alpha-kube-001.example.com
</code></pre>
<p>That host just happens to be one of my nodes. I have three nodes. I am pretty certain that this is incorrect. The ingress works, but if I shut down ifs-alpha-kube-001 the ingress stops working. What should I set <code>host</code> to if I want a high availability cluster?</p>
<p>Thanks</p>
| Red Cricket | <blockquote>
<p>What should I set host if I want a high availability cluster?</p>
</blockquote>
<p>The idea behind the Ingress resource is using the <em>brower's</em> <code>host:</code> HTTP header (which is sent for every request HTTP/1.1 and newer) for virtual hosting, so you can create <em>one</em> load balancer, but point all of your DNS records at the one host -- versus having to create a new load balancer for every <code>Service</code> in your cluster</p>
<p>Thus, the <code>host:</code> header would be whatever DNS name you wished for the outside world to be able to reach your <code>Service</code> as; for example, if you have a website and a reporting web-app in your cluster, one <code>host:</code> might be <code>www.example.com</code> and the other <code>host:</code> might be <code>reports.example.com</code> but both would be CNAME records for <code>my-k8s-lb.example.com</code></p>
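<p>For illustration (the Service names here are made up), a single Ingress fanning those two hostnames out to two different Services looks roughly like this:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: virtual-hosting-example
spec:
  rules:
  - host: www.example.com        # the browser sends "Host: www.example.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: website   # hypothetical Service names
          servicePort: 80
  - host: reports.example.com    # same load balancer, different Host header
    http:
      paths:
      - path: /
        backend:
          serviceName: reports
          servicePort: 80
EOF
</code></pre>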
| mdaniel |
<p>I use <code>configMap</code> to feed the <code>init.sql</code> script to initialise my Mysql container in the Kubernetes pod. While that works for most cases I am struggling to convert a larger <code>init.sql</code> file which is 12.2 MB.
I use the following in <code>deployment.yaml</code> to mount the configMap.</p>
<pre><code>volumeMounts:
- name: init-volume
mountPath: /docker-entrypoint-initdb.d
</code></pre>
<pre><code>volumes:
- name: init-volume
configMap:
defaultMode: 420
name: init-volume
</code></pre>
<p>The command I use to create the configMap </p>
<pre><code>kubectl create configmap init-volume --from-file=init.sql
</code></pre>
<p>I get the error </p>
<pre><code>Error from server (RequestEntityTooLarge): Request entity too large: limit is 3145728
</code></pre>
<p>How can I increase this limit/or any other alternate to initialise my database?</p>
| Vishakha Lall | <blockquote>
<p>How can I increase this limit/or any other alternate to initialise my database?</p>
</blockquote>
<p>The <code>.d</code> nomenclature in that path means it will load <strong>all</strong> files found therein, so the solution is to chop up the <code>init.sql</code> into smaller chunks, name them so that they apply in the right order when viewed with <code>/bin/ls -1</code>, and stick each one into a configmap:</p>
<pre class="lang-sh prettyprint-override"><code>i=1
for fn in *.sql; do
kubectl create configmap init-sql-$i --from-file="$fn"
i=$(( i + 1 ))
done
</code></pre>
<p>then volume mount them into the directory</p>
<pre class="lang-yaml prettyprint-override"><code>volumeMounts:
- name: init-file-1
mountPath: /docker-entrypoint-initdb.d/file-1.sql
- name: init-file-2
mountPath: /docker-entrypoint-initdb.d/file-2.sql
# and so forth
</code></pre>
| mdaniel |
<p>It is possible to get all the pods on the cluster:</p>
<pre><code>kubectl get pod --all-namespaces -o wide
</code></pre>
<p>It is also possible to get all pods on the cluster with a specific label:</p>
<pre><code>kubectl get pod --all-namespaces -o wide --selector some.specific.pod.label
</code></pre>
<p>It is even possible to get all pods on the specific node of the cluster:</p>
<pre><code>kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node>
</code></pre>
<p>The question is, how to get all pods from the namespaces with a particular label?</p>
<p>e.g. <code>kubectl get pod --namespace-label some.specific.namespace.label -o wide</code> (pseudocode)</p>
| Ilya Buziuk | <p>One cannot do that operation in one shot, because labels on <code>Namespace</code> objects are not propagated down upon their child objects. Since <code>kubectl</code> is merely doing a <code>GET</code> on <code>/api/v1/whatevers</code> there is no obvious way to make a REST request to two endpoints at once and join them together.</p>
<p>You'll want either a <code>for</code> loop in shell, or to use one of the many API client bindings to create a program that does the <code>Namespace</code> fetch, then a <code>Pod</code> fetch for those matching <code>Namespaces</code>; for example:</p>
<pre class="lang-sh prettyprint-override"><code>for n in $(kubectl get ns --selector some.specific.namespace.label -o name); do
# it's possible kubectl -n will accept the "namespace/foo" output of -o name
# or one can even -o go-template='{{ range .items }}{{ .metadata.name }} {{ end }}'
n=${n##namespace/}
kubectl -n "$n" get pods -o wide
done
</code></pre>
| mdaniel |
<p>We have set logger as STDOUT in the rails configuration.</p>
<pre><code> config.log_level = :info
config.logger = Logger.new(STDOUT)
</code></pre>
<p>We are expecting these logs in kubectl logs as well as datadog logs but STDOUT is not showing up there. We tried below code to test it.</p>
<pre><code>def method_name
system('echo testing logging') - this shows up in kubectl/datadog logs
Rails.logger.info('STDOUT - testing logging') - this does not show up in kubectl/datadog log
end
</code></pre>
| Priya | <p>Try to use the default config and make sure to set the environment variable <code>RAILS_LOG_TO_STDOUT=true</code>, for your deployment/replica set, and in production mode (<code>RAILS_ENV=production</code>). (In dev mode it always logs to console per default).</p>
<p>Actually, the official rails docker images used to have that set, but the newer recommended ruby docker images - of course - do not have Rails specific environment variables set.</p>
<p>(more: search for <em>RAILS_LOG_TO_STDOUT</em> in the <a href="https://guides.rubyonrails.org/v5.1/5_0_release_notes.html" rel="nofollow noreferrer">release notes here</a>, and see <a href="https://github.com/rails/rails/pull/23734" rel="nofollow noreferrer">PR here</a>)</p>
| Michael W. |
<p>I have a configMap that loads properties files for my Spring Boot application.
My configMap is mounted as a volume and my Spring Boot app reads from that volume.</p>
<p>my typical property files are:</p>
<pre><code>application-dev1.yml has
integrations-queue-name=integration-dev1
search-queue-name=searchindex-dev1
application-dev2.yml
integrations-queue-name=integration-dev2
search-queue-name=searchindex-dev1
application-dev3.yml
integrations-queue-name=integration-dev3
search-queue-name=searchindex-dev1
</code></pre>
<p>My goal is to have 1 properties file </p>
<pre><code>application-env.yml
integrations-queue-name=integration-{env}
search-queue-name=searchindex-{env}
</code></pre>
<p>I want to do parameter substitution of env with the profile that is active for my service.</p>
<p>Is it possible to do parameter substitution in configMaps from my spring boot application running in the pod? I am lookin for something similar to maven-resource-plugin that can be done run time.</p>
| Praveen Kumar | <p>If it's just those two, then likely you will get more mileage out of using the <code>SPRING_APPLICATION_JSON</code> environment variable, which should supersede anything in the configmap:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: my-spring-app
image: whatever
env:
- name: ENV_NAME
value: dev2
- name: SPRING_APPLICATION_JSON
value: |
{"integrations-queue-name": "integration-$(ENV_NAME)",
"search-queue-name": "searchindex-$(ENV_NAME)"}
</code></pre>
<p>materializes as:</p>
<pre><code>$ kubectl exec my-spring-pod -- printenv
ENV_NAME=dev2
SPRING_APPLICATION_JSON={"integrations-queue-name": "integration-dev2",
"search-queue-name": "searchindex-dev2"}
</code></pre>
| mdaniel |
<p>An HACMP cluster provides high availability with IBM LPARs or within AIX physical boxes.</p>
<p>Similarly, </p>
<p>MSCS cluster service in windows Virtual machine</p>
<p>Veritas cluster for Linux/Windows Virtual machine</p>
<hr>
<p>How is a Kubernetes cluster different from these cluster services?</p>
| overexchange | <p><strong>Key Differences</strong></p>
<p><img src="https://i.stack.imgur.com/EyQsk.png" width="500"></p>
<p><strong>The TL;DR Backstory</strong></p>
<p>Clustering = teaming up multiple cooperating servers to accomplish something that none of the individual servers ("nodes") could accomplish on their own.</p>
<p>The cluster products you mention--HACMP, MSCS, etc.--were designed in the 1990s (and evolved over time) primarily to provide higher app/service availability than any single server could guarantee. Given appropriate cluster-enabled apps, databases, and middleware, should one server in a cluster go down or suffer a serious fault, the app/service would continue operating on remaining nodes without interruption. In the best case, this can almost eliminate either unplanned or planned downtime. </p>
<p>Kubernetes clusters have some high-availability features, but start with a very different worldview--one 20 years later from where HACMP and friends started. In IT, 20 years = multiple entire generations. Kubernetes and similar clusters (e.g. Docker Swarm) expect each server to host multiple "containers" (packaged workloads) rather than a single app/workload. Operating system containers are a lightweight form of app/system/service virtualization than basically didn't exist for mainstream applications for most of the HA clusters' lifetimes. </p>
<p>The abstractions and capabilities of any platform evolves to match problems expected on common workloads. For Kubernetes, this means multiple- or many-workloads possible per server, a great many updates during an app/service's lifetime, networking being the primary means of software connectivity, and <em>intense</em> dynamism / constant flux of where apps/services live. Those were not expectations, design criteria, or common realities of the HA clusters or the software they run. In addition to the many abstractions provided by containers (e.g. Docker) vs. base operating systems, Kubernetes provides many abstractions and tools for "orchestrating" many apps/services concurrently and dynamically across large clusters of servers. E.g. Pods (groups of multiple containers operated together) and StatefulSets (for managing shared persistent state). HA clusters include some concepts/facilities that go beyond single servers (e.g. service definitions, connection topologies, heartbeats, failover policies). These could be considered ancestral forms of container and Kubernetes facilities. But platforms like Kubernetes that came after the Internet, scale-out, virtualization, cloud, and DevOps revolutions address massively greater scale and dynamism than any 1980s- or 1990s-born HA clusters ever would. </p>
<p>If HA clusters were horse-drawn carts of the agrarian age, Kubernetes would be modern tractor-trailers running on interstate highways. Both enable "getting to market," albeit at very different levels of scale, with very different expectations and infrastructure. </p>
<p>Finally, because Kubernetes focuses on scale and dynamism, many of its workloads are not thoroughly optimized for availability--at least not in the same "it must stay running, always and forever!" way that is the very point of HA clusters. </p>
| Jonathan Eunice |
<p>I'm new to Kubernetes and as a tutorial for myself I've been working on deploying a basic project to Kubernetes with helm (v3).
I have an image in AWS's ECR as well as a local helm chart for this project.
However, I am struggling to run my image with Kubernetes.</p>
<p>My image is set up correctly. If I try something like <code>docker run my_image_in_ecr</code> locally it behaves as expected (after configuring my IAM access credentials locally).
My helm chart is properly linted and in my image map, it specifies:</p>
<pre><code>image:
repository: my_image_in_ecr
tag: latest
pullPolicy: IfNotPresent
</code></pre>
<p>When I try to use helm to deploy though, I'm running into issues.
My understanding is to run my program with helm, I should:</p>
<ol>
<li><p>Run helm install on my chart</p></li>
<li><p>Run the image inside my new kubernetes pod</p></li>
</ol>
<p>But when I look at my kubernetes pods, it looks like they never get up and running.</p>
<pre><code>hello-test1-hello-world-54465c788c-dxrc7 0/1 ImagePullBackOff 0 49m
hello-test2-hello-world-8499ddfb76-6xn5q 0/1 ImagePullBackOff 0 2m45s
hello-test3-hello-world-84489658c4-ggs89 0/1 ErrImagePull 0 15s
</code></pre>
<p>The logs for these pods look like this:</p>
<pre><code>Error from server (BadRequest): container "hello-world" in pod "hello-test3-hello-world-84489658c4-ggs89" is waiting to start: trying and failing to pull image
</code></pre>
<p>Since I don't know how to set up imagePullSecrets properly with Kubernetes I was expecting this to fail. But I was expecting a different error message such as bad auth credentials. </p>
<ol>
<li>How can I resolve the error in image pulling? Is this issue not even related to the fact that my image is in ecr?</li>
<li>How can I properly set up credentials (such as imagePullSecrets) to authorize pulling the image from ECR? I have followed some guides such as <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">this one</a> and <a href="https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry" rel="nofollow noreferrer">this one</a> but am confused about how to translate this information into a proper authorization configuration for ECR.</li>
</ol>
| Thomas Scruggs | <blockquote>
<p>How can I properly set up credentials (such as imagePullSecrets) to authorize pulling the image from ecr?</p>
</blockquote>
<p>The traditional way is to grant the Node an <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html" rel="nofollow noreferrer">instance role</a> that includes <a href="https://github.com/kubernetes-sigs/kubespray/blob/v2.11.0/contrib/aws_iam/kubernetes-minion-policy.json#L34-L40" rel="nofollow noreferrer"><code>ecr:*</code> IAM Permissions</a> , ensure you have <code>--cloud-provider=aws</code> set on <code>apiserver</code>, <code>controller-manager</code>, and <code>kubelet</code> (which if you are doing anything with kubernetes inside AWS you will for sure want to enable and configure correctly), and <code>kubelet</code> will then automatically coordinate with ECR to Just Work™</p>
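<p>If you would rather not hand-write the policy, AWS ships a managed read-only ECR policy that can be attached to the node instance role (the role name below is a placeholder for your worker nodes' role):</p>
<pre class="lang-sh prettyprint-override"><code>aws iam attach-role-policy \
  --role-name my-k8s-node-instance-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
</code></pre>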
<p>That information was present on the page you cited, under the heading <a href="https://kubernetes.io/docs/concepts/containers/images/#using-amazon-elastic-container-registry" rel="nofollow noreferrer">Using Amazon Elastic Container Registry</a> but it isn't clear if you read it and didn't understand, or read it and it doesn't apply to you, or didn't get that far down the page</p>
| mdaniel |
<p>I would like to template values in my Chart.yaml file.
For example, <code>version: {{ .Values.version }}</code> instead of <code>version: 0.1.0</code></p>
<p>For other yaml files, the above would work. However, it's my understanding that Helm treats the Chart.yaml differently and <strong>the Chart.yaml file is not run through the templating engine</strong>.</p>
<p>Does anyone know a workaround?</p>
<p>The actual error I get if I try to helm lint this (with <code>version: 0.1.0</code> as an entry in my values.yaml file) is:
<code>error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.version":interface {}(nil)}</code></p>
| Thomas Scruggs | <p>You are thinking of the problem backward: specify the version in <code>Chart.yaml</code> and <em>derive</em> the version in wherever you are using it in the templates; you can't have a dynamic version in the <code>Chart.yaml</code> because <code>helm repo index .</code> does not accept <code>--set</code> or any such flag and thus couldn't construct the tgz to upload</p>
<p>Thus, given a <code>Chart.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
name: my-awesome-chart
appVersion: 0.1.0
version: 1.2.3
</code></pre>
<p>and a <code>Deployment.yaml</code> template:</p>
<pre><code>{{ $myTag := .Chart.Version }}
{{/* or, you can use .Chart.AppVersion */}}
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- image: docker.example.com:{{ $myTag }}
# produces: docker.example.com:1.2.3
</code></pre>
| mdaniel |
<p>I'm trying to write simple ansible playbook that would be able to execute some arbitrary command against the pod (container) running in kubernetes cluster.</p>
<p>I would like to utilise kubectl connection plugin: <a href="https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html" rel="nofollow noreferrer">https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html</a> but having struggle to figure out how to actually do that.</p>
<p>Couple of questions:</p>
<ol>
<li>Do I need to first have inventory for k8s defined? Something like: <a href="https://docs.ansible.com/ansible/latest/plugins/inventory/k8s.html" rel="nofollow noreferrer">https://docs.ansible.com/ansible/latest/plugins/inventory/k8s.html</a>. My understanding is that I would define kube config via inventory which would be used by the kubectl plugin to actually connect to the pods to perform specific action.</li>
<li>If yes, is there any example of arbitrary command executed via kubectl plugin (but not via shell plugin that invokes kubectl on some remote machine - this is not what I'm looking for)</li>
</ol>
<p>I'm assuming that, during the ansible-playbook invocation, I would point to k8s inventory.</p>
<p>Thanks.</p>
| Bakir Jusufbegovic | <blockquote>
<p>I would like to utilise kubectl connection plugin: <a href="https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html" rel="nofollow noreferrer">https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html</a> but having struggle to figure out how to actually do that.</p>
</blockquote>
<p>The <a href="https://docs.ansible.com/ansible/2.9/plugins/connection.html#using-connection-plugins" rel="nofollow noreferrer">fine manual</a> describes how one uses connection plugins, and while it is <a href="https://docs.ansible.com/ansible/2.9/reference_appendices/playbooks_keywords.html#task" rel="nofollow noreferrer">possible to use in in tasks</a>, that is unlikely to make any sense unless your inventory <em>started</em> with Pods.</p>
<p>The way I have seen that connection used is to start by identifying the Pods against which you might want to take action, and then run a playbook against a unique group for that purpose:</p>
<pre><code>- hosts: all
tasks:
- set_fact:
# this is *just an example for brevity*
# in reality you would use `k8s:` or `kubectl get -o name pods -l my-selector=my-value` to get the pod names
pod_names:
- nginx-12345
- nginx-3456
- add_host:
name: '{{ item }}'
groups:
- my-pods
with_items: '{{ pod_names }}'
- hosts: my-pods
connection: kubectl
tasks:
# and now you are off to the races
- command: ps -ef
# watch out if the Pod doesn't have a working python installed
# as you will have to use raw: instead
# (and, of course, disable "gather_facts: no")
- raw: ps -ef
</code></pre>
| mdaniel |
<p>I have an nginx server outside Kubernetes: <code>nginx -> nginx ingress</code>. I want to know how to add a custom health check path <code>/health/status</code> to the nginx ingress.</p>
| quanwei li | <p><em>This question is almost certainly solving the wrong problem, but in the spirit of answering what was asked:</em></p>
<p>You can expose the Ingress <code>/healthz</code> to the outside world:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-health
spec:
type: ClusterIP
selector: # whatever
ports:
- name: healthz
port: 80
targetPort: 10254
---
apiVersion: extensions/v1beta1
kind: Ingress
spec:
rules:
- host: elb-1234.example.com
http:
path: /healthz
backend:
serviceName: ingress-nginx-health
servicePort: healthz
</code></pre>
<p>Because if your Ingress controller falls over, it will for sure stop answering its own healthz check</p>
| mdaniel |
<p>I'm using helm and given a yaml object I want to flatten it while applying some recursive formatting.</p>
<p>Given this:</p>
<pre class="lang-yaml prettyprint-override"><code>some_map:
with: different
indentation:
levels: and
nested:
sub:
maps: "42"
and_more:
maps: 42
</code></pre>
<p>I want to (for example) get this:</p>
<pre class="lang-ini prettyprint-override"><code>some_map.with="different"
some_map.indentation.levels="and"
some_map.nested.sub.maps="42"
some_map.nested.and_more.maps=42
</code></pre>
<p>I haven't read anything about recursive looping in the helm docs, keep in mind that the format of the recursion in the example ( "%v.%v" if !root else "%v=%v" ) may vary.</p>
| Roberto P. Romero | <p>Yes, it seems that <code>{{ define</code> supports recursive use of <code>{{ include</code>, although unknown to what depth</p>
<p>The PoC I whipped up to see if it could work</p>
<pre><code>{{- define "bob" -}}
{{- $it := . -}}
{{- $knd := kindOf . -}}
{{- if eq $knd "map" }}
{{- range (keys .) }}
{{- $k := . }}
{{- $v := get $it . }}
{{- $vk := kindOf $v }}
{{- if eq $vk "map" }}
{{- printf "%s." $k }}
{{- include "bob" $v }}
{{- else }}
{{- printf "%s=%s\n" $k (toJson $v) }}
{{- end }}
{{- end }}
{{- else }}
{{ toJson . }}#k({{ $knd }})
{{- end }}
{{- end -}}
</code></pre>
<p>invoked as</p>
<pre class="lang-yaml prettyprint-override"><code>{{ $fred := dict
"alpha" (dict "a0" "a0ch0")
"beta" (dict "beta0" (dict "beta00" 1234))
"charlie" (list "ch0" "ch1" "ch2") }}
data:
theData: |
{{ toJson $fred | indent 4 }}
toml: |
{{ include "bob" $fred | indent 4 }}
</code></pre>
<p>produced</p>
<pre class="lang-yaml prettyprint-override"><code>data:
theData: |
{"alpha":{"a0":"a0ch0"},"beta":{"beta0":{"beta00":1234}},"charlie":["ch0","ch1","ch2"]}
toml: |
alpha.a0="a0ch0"
beta.beta0.beta00=1234
charlie=["ch0","ch1","ch2"]
</code></pre>
<p>Also, your cited example seems to make reference to the outermost variable name, which I don't think helm knows about, so you'd need an artificial wrapper <code>dict</code> in order to get that behavior: <code>{{ include "toToml" (dict "some_map" .Values.some_map) }}</code></p>
| mdaniel |
<p>I'm pretty new to Kubernetes and I have to create a pod using Kubernetes <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">python-client</a>.
So, to experiment, I'm trying to run the <a href="https://github.com/kubernetes-client/python/tree/master/examples/notebooks" rel="nofollow noreferrer">example notebooks</a> provided by the project without any changes, to see how things work.</p>
<p>Starting with <a href="https://github.com/kubernetes-client/python/blob/master/examples/notebooks/intro_notebook.ipynb" rel="nofollow noreferrer">intro_notebook.ipynb</a> at 3rd step I get an error:</p>
<pre class="lang-none prettyprint-override"><code>ValueError: Invalid value for `selector`, must not be `None`
</code></pre>
<p>Here is the code:</p>
<pre><code>from kubernetes import client, config
</code></pre>
<p>The only part I've changed is the second cell, because I'm running Kubernetes using Ubuntu's <code>microk8s</code>:</p>
<pre class="lang-py prettyprint-override"><code>config.load_kube_config('/var/snap/microk8s/current/credentials/client.config')
</code></pre>
<pre class="lang-py prettyprint-override"><code>api_instance = client.AppsV1Api()
dep = client.V1Deployment()
spec = client.V1DeploymentSpec() # <<< At this line I hit the error!
</code></pre>
<p>Complete Traceback:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-9-f155901e8381> in <module>
1 api_instance = client.AppsV1Api()
2 dep = client.V1Deployment()
----> 3 spec = client.V1DeploymentSpec()
~/.local/share/virtualenvs/jupyter-2hJZ4kgI/lib/python3.8/site-packages/kubernetes/client/models/v1_deployment_spec.py in __init__(self, min_ready_seconds, paused, progress_deadline_seconds, replicas, revision_history_limit, selector, strategy, template)
76 if revision_history_limit is not None:
77 self.revision_history_limit = revision_history_limit
---> 78 self.selector = selector
79 if strategy is not None:
80 self.strategy = strategy
~/.local/share/virtualenvs/jupyter-2hJZ4kgI/lib/python3.8/site-packages/kubernetes/client/models/v1_deployment_spec.py in selector(self, selector)
215 """
216 if selector is None:
--> 217 raise ValueError("Invalid value for `selector`, must not be `None`") # noqa: E501
218
219 self._selector = selector
ValueError: Invalid value for `selector`, must not be `None`
</code></pre>
<p>I'm running <code>python3.8</code>:</p>
<pre class="lang-sh prettyprint-override"><code>$ pip freeze | grep -e kuber
kubernetes==11.0.0
$ snap list microk8s
Name Version Rev Tracking Publisher Notes
microk8s v1.18.6 1551 latest/stable canonical✓ classic
</code></pre>
<h2>Update</h2>
<p>Downgrading microk8s to v1.15.11 did not solve the issue.</p>
| FooBar | <p>It's because they changed the validation to happen in the constructor, rather that later -- which is not what the notebook was expecting. <s>They</s> You just need to move those inner assignments up to be valid before constructing the <code>Deployment</code>:</p>
<pre class="lang-py prettyprint-override"><code>name = "my-busybox"
spec = client.V1DeploymentSpec(
selector=client.V1LabelSelector(match_labels={"app":"busybox"}),
template=client.V1PodTemplateSpec(),
)
container = client.V1Container(
image="busybox:1.26.1",
args=["sleep", "3600"],
name=name,
)
spec.template.metadata = client.V1ObjectMeta(
name="busybox",
labels={"app":"busybox"},
)
spec.template.spec = client.V1PodSpec(containers = [container])
dep = client.V1Deployment(
metadata=client.V1ObjectMeta(name=name),
spec=spec,
)
</code></pre>
| mdaniel |
<p>I deployed a MySQL monitor application image in a Kubernetes cluster which runs as a non-root user. When I tried to mount a path to make the data persistent, it overrides the directory (creates a new directory, deleting everything inside that path) in which my application configuration files have to be present. Even though I tried using an init container, I am still not able to mount it correctly.</p>
<pre><code>my docker file:
FROM centos:7
ENV DIR /binaries
ENV PASS admin
WORKDIR ${DIR}
COPY libstdc++-4.8.5-39.el7.x86_64.rpm ${DIR}
COPY numactl-libs-2.0.12-3.el7.x86_64.rpm ${DIR}
COPY mysqlmonitor-8.0.18.1217-linux-x86_64-installer.bin ${DIR}
RUN yum install -y libaio && yum -y install gcc && yum -y install gcc-c++ && yum -y install compat-libstdc++-33 && yum -y install libstdc++-devel && yum -y install elfutils-libelf-devel && yum -y install glibc-devel && yum -y install libaio-devel && yum -y install sysstat
RUN yum install -y gcc && yum install -y make && yum install -y apr-devel && yum install -y openssl-devel && yum install -y java
RUN rpm -ivh numactl-libs-2.0.12-3.el7.x86_64.rpm
RUN useradd sql
RUN chown sql ${DIR}
RUN chmod 777 ${DIR}
RUN chmod 755 /home/sql
USER sql
WORKDIR ${DIR}
RUN ./mysqlmonitor-8.0.18.1217-linux-x86_64-installer.bin --installdir /home/sql/mysql/enterprise/monitor --mode unattended --tomcatport 18080 --tomcatsslport 18443 --adminpassword ### --dbport 13306
RUN rm -rf /binaries/*
VOLUME /home/mysql/mysql/enterprise/monitor/mysql/data
ENTRYPOINT ["/bin/bash", "-c", "/home/sql/mysql/enterprise/monitor/mysqlmonitorctl.sh start && tail -f /home/sql/mysql/enterprise/monitor/apache-tomcat/logs/mysql-monitor.log"]
</code></pre>
<pre><code>my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
name: mypod
spec:
replicas: 1
selector:
matchLabels:
app: mem
template:
metadata:
labels:
app: mem
spec:
containers:
- name: mem
image: 22071997/mem
command:
volumeMounts:
- mountPath: /home/sql/mysql/enterprise/monitor/mysql/data
name: volume
volumes:
- name: volume
persistentVolumeClaim:
claimName: mem-pvc1
initContainers:
- name: permissionsfix
image: alpine:latest
command: ["/bin/sh", "-c"]
args:
- chown -R 1000:1000 /home/sql/mysql/enterprise/monitor/ && chmod -R 777 /home/sql/mysql/enterprise/monitor/ ;
volumeMounts:
- name: volume
mountPath: /home/sql/mysql/enterprise/monitor
</code></pre>
<pre><code>output:
[sql@mypod-775764db45-bzs8n enterprise]$ cd monitor/mysql
[sql@mypod-775764db45-bzs8n mysql]$ ls
LICENSE LICENSE.router README.meb bin docs lib my-large.cnf my-small.cnf new runtime support-files var
LICENSE.meb README README.router data include man my-medium.cnf my.cnf run share tmp
[sql@mypod-775764db45-bzs8n mysql]$ cd data
[sql@mypod-775764db45-bzs8n data]$ ls
mypod-775764db45-bzs8n.err
</code></pre>
| Sowmiya | <p>This doesn't seem related to mounting as a non-root user, but more so that mounting a volume over an existing directory will result in that directory looking as if it is empty (or containing whatever happens to be on the volume already). If you have configuration stored on a non-volume that you would like to be on the volume, then you will need to mount the volume to a different location (so it doesn't overwrite your local configuration) and copy that configuration to the mounted volume location. You can do this in an init container, but be careful not to overwrite the volume contents on every startup of the container.</p>
| snormore |
<p>We have an AKS test cluster with <em>four</em> Windows worker nodes and a Deployment with a replica count of <em>two</em>. The corresponding Pod spec does not specify any resource requests and limits (thus, the resulting Pods are in the BestEffort QoS class).</p>
<p>In order to conduct a performance test, we scaled all other Deployments on those worker nodes to 0 replicas and deleted all remaining Pods on the nodes. Only the system Pods created by AKS DaemonSets itself (in the <code>kube-system</code> namespace) remained. We then created the Deployment mentioned above.</p>
<p>We had assumed that the default Kubernetes scheduler would place the two replicas on different nodes by default, or at least choose nodes randomly. However, the scheduler always chose the same node to place both replicas on, no matter how often we deleted the Pods or scaled the Deployment to 0 and back again to 2. Only after we tainted that node as <code>NoSchedule</code>, did the scheduler choose another node.</p>
<p>I know I could configure anti-affinities or topology spread constraints to get a better spreading of my Pods. But in the <em>Cloud Native DevOps with Kubernetes</em> book, I read that the scheduler actually does a very good job by default and one should only use those features if absolutely necessary. (Instead maybe using the descheduler if the scheduler is forced to make bad decisions.)</p>
<p>So, I would like to understand why the behavior we observed would happen. From the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation" rel="nofollow noreferrer">docs</a>, I've learned that the scheduler first filters the nodes for fitting ones. In this case, all of them should fit, as all are configured identically. It then scores the nodes, choosing randomly if all have the same score. Why would one node always win that scoring?</p>
<p>Follow-up question: Is there some way how I could reconstruct the scheduler's decision logic in AKS? I can see <code>kube-scheduler</code> logs in Container Insights, but they don't contain any information regarding scheduling, just some operative stuff.</p>
| Fabian Schmied | <p>I <em>believe</em> that the scheduler is aware of which Nodes already have the container images pulled down, and will give them preference to avoid the image pull (and thus faster start time)</p>
<p>Short of digging up the source code as proof, I would guess one could create a separate Pod (for this purpose, I literally mean <code>kind: Pod</code>), force it onto one of the other Nodes via <code>nodeName:</code>, then after the Pod has been scheduled and attempted to start, delete the Pod and scale up your Deployment</p>
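<p>A minimal sketch of such a throwaway Pod, where the node name and image are placeholders you would substitute with your own:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: prepull-my-image                      # throwaway name
spec:
  nodeName: aks-nodepool1-12345-vmss000001    # the other node you want the image pulled onto
  restartPolicy: Never
  containers:
    - name: prepull
      image: your-registry.example.com/your-app:tag   # the same image the Deployment uses
      command: ["/bin/sh", "-c", "exit 0"]
</code></pre>
<p>Even if the container exits (or fails to find a shell) immediately, the kubelet on that node will have pulled the image, which is the only side effect we care about here.</p>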
<p>I would then expect the new Deployment managed Pod to arrive on that other Node because it by definition has less resources in use but also has the container image required</p>
| mdaniel |
<p>We have 8 Java microservices talking to each other in a Kubernetes cluster. Each microservice is bundled with an auth library which intercepts and validates/renews the JWT token for each REST request to controllers. </p>
<p>Scenario:
From the frontend, we get the access token for the first time and authentication is successful. Let's say</p>
<ol>
<li>Frontend hit 'Microservice A' with access token - Successful</li>
<li>'Microservice A' internally hits 'Microservice B' via restTemplate.
My 'Microservice B' also needs logged in user details.</li>
</ol>
<p>Issue: I have to pass the same access token from 'A' to 'B', but I am not able to get the access token in the Controller/Service logic, only in the filters where the token is being validated. I can get the token in REST Controllers by adding the following argument to all REST methods in the controller:</p>
<pre><code>@RequestHeader (name="Authorization") String token
</code></pre>
<p>But I don't want to go with this approach, as I would have to pass this token everywhere till the end and declare this argument in all APIs.</p>
<p>I want to get token from TokenStore by passing authentication object. We are using Oauth2 and I checked the code in library, There are many tokenStore providers.</p>
<p>In DefaultTokenServices.java class, I am calling </p>
<pre><code>Authentication auth = SecurityContextHolder.getContext().getAuthentication() // Passed this auth to tokenStore
String token = tokenStore.getAccessToken(auth).getValue(); // NullPointerException
</code></pre>
<p>My code is going through the JWTTokenStore provider, which is returning null. I checked: there is a provider called InMemoryTokenStore.class which actually extracts the token from the store, but my flow is not going into the in-memory implementation. </p>
<p>Is there any way I can get token afterwards without grabbing it in controller via arguments? or how can I enable/use inMemoryTokenStore?</p>
<p>Also, can you recommend something better for Kubernetes inter-service authentication?</p>
<p>TIA</p>
| Roobal Jindal | <p>It looks like you're using Spring (and Spring Security), so I believe the relevant part of the docs is the part on <a href="https://docs.spring.io/spring-security/site/docs/current/reference/html5/#bearer-token-propagation" rel="nofollow noreferrer">Bearer Token Propagation</a>.</p>
<p>Its recommendation is to use a <code>WebClient</code> (the recommended replacement for <code>RestTemplate</code> as of Spring 5) that uses the provided <code>ServletBearerExchangeFilterFunction</code> to automagically propagate the JWT token from the incoming request into the outgoing request:</p>
<pre><code>@Bean
public WebClient rest() {
return WebClient.builder()
.filter(new ServletBearerExchangeFilterFunction())
.build();
}
</code></pre>
<p>On <code>RestTemplate</code>, the docs say:</p>
<blockquote>
<p>"There is no dedicated support for RestTemplate at the moment, but you can achieve propagation quite simply with your own interceptor"</p>
</blockquote>
<p>and the following example is provided:</p>
<pre><code>@Bean
RestTemplate rest() {
RestTemplate rest = new RestTemplate();
rest.getInterceptors().add((request, body, execution) -> {
Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
if (authentication == null) {
return execution.execute(request, body);
}
if (!(authentication.getCredentials() instanceof AbstractOAuth2Token)) {
return execution.execute(request, body);
}
AbstractOAuth2Token token = (AbstractOAuth2Token) authentication.getCredentials();
request.getHeaders().setBearerAuth(token.getTokenValue());
return execution.execute(request, body);
});
return rest;
}
</code></pre>
<p>I don't believe you need to be looking at <code>TokenStore</code>s if all you're trying to do is propagate the token. Remember everything relevant about a JWT should be inside the token itself. (Which is why <a href="https://docs.spring.io/spring-security/oauth/apidocs/org/springframework/security/oauth2/provider/token/store/JwtTokenStore.html" rel="nofollow noreferrer">the doc for the JwtTokenStore</a> explains that it doesn't actually store anything, but just pulls info out of the token, and will return null for some methods, including the <code>getAccessToken()</code> method you're calling.)</p>
| Graham Lea |
<p>Given a bash function in .bashrc such as</p>
<pre><code>kgp () {
kubectl get po -n $1 $2
}
</code></pre>
<p>Is it possible to have kubectl auto complete work for k8s resources such as namespaces/pods? As an example if I use</p>
<pre><code>kubectl get po -n nsprefix podprefix
</code></pre>
<p>I can tab auto complete the prefix. Whereas with the positional parameters when I call</p>
<pre><code>kgp nsprefix podprefix
</code></pre>
<p>I have to type out the entire resource name.</p>
| Kelvin Baumgart | <p>Yes, that's because bash-completion only understands <em>known commands</em>, not aliases or new functions that you have made up. You will experience the same thing with a trivial example of <code>alias whee=/bin/ls</code> and then <code>whee <TAB></code> will do nothing because it doesn't "recurse" into that alias, and <em>for sure</em> does not attempt to call your function in order to find out what arguments it could possibly accept. That could potentially be catastrophic</p>
<p>You're welcome to create a new <a href="https://www.gnu.org/software/bash/manual/html_node/Programmable-Completion.html#Programmable-Completion" rel="nofollow noreferrer"><code>complete</code></a> handler for your custom <code>kgp</code>, but that's the only way you'll get the desired behavior</p>
<pre class="lang-bash prettyprint-override"><code>_kgp_completer() {
local cur prev words cword
COMPREPLY=()
_get_comp_words_by_ref -n : cur prev words cword
if [[ $cword == 1 ]] && [[ -z "$cur" ]]; then
COMPREPLY=( $(echo ns1 ns2 ns3) )
elif [[ $cword == 2 ]] && [[ -z "$cur" ]]; then
COMPREPLY=( $(echo pod1 pod2 pod3) )
fi
echo "DEBUG: cur=$cur prev=$prev words=$words cword=$cword COMPREPLY=${COMPREPLY[@]}" >&2
}
complete -F _kgp_completer kgp
</code></pre>
| mdaniel |
<p>I'm trying to run a query with kubectl, as follows:</p>
<pre><code>kubectl -n employeesns exec -ti employeedpoddb-0 -- psql -d db_people -U postgres
-c 'create extension if not exists dblink;'
-c 'SELECT dbemployees."empId" , dbemployees."createdAt" , dbemployees."updatedAt"
from "users" as "dbemployees"
WHERE dbemployees."empId" not in (
SELECT "empId"
FROM dblink('dbname=peopledb','SELECT "empId" FROM employees')
AS dbpeople("empId" varchar)
)'
</code></pre>
<p>However I get</p>
<pre><code>ERROR: syntax error at or near "SELECT"
LINE 1: ...SELECT "empId" FROM dblink(dbname=peopledb,SELECT
^
command terminated with exit code 1
</code></pre>
<p>How can we execute multiline SQL query with Kubectl ?</p>
| JAN | <p>It's because your inner <code>'</code> is not escaped; you'll see the same thing locally</p>
<pre><code>$ echo 'hello 'world' from shell'
</code></pre>
<p>you just need to escape those inner quotes, or change the outer to <code>"</code> and then escape those usages, based on your needs</p>
<pre class="lang-sh prettyprint-override"><code>-c 'SELECT dbemployees."empId" , dbemployees."createdAt" , dbemployees."updatedAt"
from "users" as "dbemployees"
WHERE dbemployees."empId" not in (
SELECT "empId"
FROM dblink('\''dbname=peopledb'\'','\''SELECT "empId" FROM employees'\'')
AS dbpeople("empId" varchar)
)'
</code></pre>
| mdaniel |
<p>I'm dockerizing a Laravel application; my image is based on an Apache image and is hosted in AKS, where I'm mounting an Azure Files share with images inside /public/images. The problem is that Apache adds headers inside the image content, resulting in corrupted images.</p>
<p><a href="https://i.stack.imgur.com/vHkrz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vHkrz.png" alt="enter image description here" /></a></p>
<p>even if I exec inside the pod itself and try curl localhost, I get the same problem so I'm sure it's not a problem with routing or my ingress</p>
<pre><code> FROM php:7.3-apache
#install all the system dependencies and enable PHP modules
RUN apt-get update -y && apt-get install -y libmcrypt-dev openssl
RUN apt-get update && apt-get install -y libmcrypt-dev \
&& pecl install mcrypt-1.0.2 \
&& docker-php-ext-enable mcrypt
RUN docker-php-ext-install pdo mbstring
RUN apt-get install -y \
libzip-dev \
zip \
&& docker-php-ext-install zip
RUN apt-get install -y libfreetype6-dev libjpeg62-turbo-dev libpng-dev && \
docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/
RUN docker-php-ext-install gd
RUN docker-php-ext-install mysqli pdo pdo_mysql
# RUN apt-get install wget
RUN apt-get update; apt-get install curl -y
#install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer
#set our application folder as an environment variable
ENV APP_HOME /var/www/html
#change uid and gid of apache to docker user uid/gid
RUN usermod -u 1000 www-data && groupmod -g 1000 www-data
#change the web_root to laravel /var/www/html/public folder
#RUN sed -i -e "s/html/html\/public/g" /etc/apache2/sites-enabled/000-default.conf
COPY vhost.conf /etc/apache2/sites-available/000-default.conf
RUN echo "EnableSendfile off" >> /etc/apache2/apache2.conf
# enable apache module rewrite
RUN a2enmod rewrite
#copy source files and run composer
COPY . $APP_HOME
# install all PHP dependencies
RUN composer install --no-interaction
#change ownership of our applications
RUN chown -R www-data:www-data $APP_HOME
</code></pre>
<p>Next, I use a regular deployment yaml file to push this to Kubernetes with the following volume mounts:</p>
<pre><code>volumeMounts:
- name: sessions
mountPath: /var/www/html/storage/framework/sessions
- name: cache
mountPath: /var/www/html/storage/framework/cache
- name: views
mountPath: /var/www/html/storage/framework/views
- name: images
mountPath: /var/www/html/public/images
</code></pre>
<pre><code>volumes:
  - name: sessions
    azureFile:
      secretName: appmnt
      shareName: sessions
      readOnly: false
  - name: cache
    azureFile:
      secretName: appmnt
      shareName: cache
      readOnly: false
  - name: views
    azureFile:
      secretName: appmnt
      shareName: views
      readOnly: false
  - name: images
    azureFile:
      secretName: appmnt
      shareName: images
      readOnly: false
</code></pre>
<p>Now the problem is, if I try to access a static file from the images folder, for example using a URL like "https://www.somedomain.com/images/somefile.png",</p>
<p>the file will be downloaded but Apache will attach the above headers to the content, resulting in corruption.</p>
<p>The web application works perfectly fine, except for any files inside the volume mounts.</p>
<p>If I do "kubectl exec -it podname -- bash" and browse the files, I can see the volume mounts are working fine. Also, if I try to upload files from the application interface, the file gets written the right way inside the folder; the only problem is with browsing the file.</p>
| Stacker | <p>We fixed the issue, simply in the vhost.conf, we needed to turn off EnableMMAP</p>
<pre><code>EnableMMAP off
</code></pre>
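<p>Since the Dockerfile above already appends <code>EnableSendfile off</code> to the Apache config, one way to bake this in (assuming you don't want to edit vhost.conf directly) would be an analogous line:</p>
<pre><code>RUN echo "EnableMMAP off" >> /etc/apache2/apache2.conf
</code></pre>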
| Stacker |
<p>I need to deploy a DaemonSet in Kubernetes, but each pod on different nodes requires different memory and CPU requests for different hardware types.</p>
| che yang | <p><em>Since you have asked such an imprecise question, you're going to get an imprecise answer -- update your question with more specifics and you'll get a better answer</em></p>
<p>Using helm can help you with that problem, since the manifests are subject to golang template evaluation; thus:</p>
<pre><code># values.yaml
instance_type: m5.large
---
# templates/deployment.yaml
{{ $mem := "2Gi" }}
{{ if (hasSuffix ".xlarge" .Values.instance_type) }}
{{ $mem = "4Gi" }}
{{ end }}
spec:
template:
spec:
containers:
- resources:
requests:
memory: {{ $mem }}
</code></pre>
<p>then install it and the user can choose the size Node they have:</p>
<pre><code>$ helm install --set instance_type=r5.xlarge my-release my/chart
</code></pre>
<hr>
<p>If, instead, you mean that you have a mixed set of instances and you want your <strong>one</strong> Deployment to adjust its memory settings according to the headroom available on its target Node, then you'll want a <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook" rel="nofollow noreferrer">Mutating Admission Webhook</a> which can use whatever business rules you want to adjust the <code>resource:</code> field of the soon-to-be-scheduled Pod to set its resources as you see fit. You can use the <a href="https://github.com/kubernetes/autoscaler/tree/cluster-autoscaler-1.17.1/vertical-pod-autoscaler#vertical-pod-autoscaler" rel="nofollow noreferrer">vertical pod autoscaler</a> as a source of inspiration, since they're doing roughly the same thing just over a different timescale</p>
| mdaniel |
<p>In a multi-master kubernetes cluster, do only one master schedule and are the other masters in a standby mode? Does all the masters coordinate and schedule pods and etc?</p>
| jarge | <p>That's mostly correct; one can see that process by looking at the logs of all the <code>controller-manager</code> Pods and observing:</p>
<blockquote>
<p>I1021 00:09:49.283273 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...</p>
</blockquote>
<p>in some of them, and more "working" messages in just one of them:</p>
<blockquote>
<p>I1021 02:12:51.779698 1 cleaner.go:181] Cleaning CSR "csr-rf8vh" as it is more than 1h0m0s old and approved.</p>
</blockquote>
<p>I say "mostly correct" because to the best of my knowledge all <code>apiserver</code> Pods are actively doing work -- or at the very least they service HTTPS traffic and emit Audit logs -- but the rest of the HA components use that leader lease pattern</p>
<p>As for your 2nd question</p>
<blockquote>
<p>Does all the masters coordinate and schedule pods and etc?</p>
</blockquote>
<p>no, that's the job of the <code>scheduler</code> Pods, which, again, use that leader lease pattern in order to avoid competing scheduling decisions</p>
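<p>If you want to see who currently holds the lock, and assuming your control plane version records leader election in <code>Lease</code> objects (older versions store the same record as an annotation on an Endpoints or ConfigMap object), something like this shows the current holder in the <code>holderIdentity</code> field:</p>
<pre><code>kubectl -n kube-system get lease kube-controller-manager -o yaml
kubectl -n kube-system get lease kube-scheduler -o yaml
</code></pre>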
| mdaniel |
<p>Whenever I do a "docker ps -a", I see two containers corresponding to a pod, even though the pod has only one container. Typically, the two containers listed under "docker ps" have the following prefixes:
<strong>k8s_POD_kubernetes-<POD_NAME> and k8s_kubernetes-<POD_NAME></strong></p>
<p>Can someone help me understand why we see two entries in "docker ps" ?</p>
| Hemanth | <p>The <code>_POD_</code> one is the only one with the Pod's IP address, the others are every <em>workload</em> container from the PodSpec's <code>container:</code> and <code>initContainer:</code> arrays, since one of the contracts of Kubernetes is that <a href="https://kubernetes.io/docs/concepts/workloads/pods/#workload-resources-for-managing-pods" rel="nofollow noreferrer">all containers in a Pod share the same network identity</a></p>
<p>The nitty gritty of that involves the different namespaces in the Linux kernel that make "containers" operate, with cgroups for cpu, memory, process ids, and network stack. Just like <code>nsenter</code> allows the host to temporarily switch its cgroup into a container's cgroup, so does the container runtime mechanism have the "sibling" containers in a Pod switch into the allocated networking cgroup of that "sandbox" container, otherwise traffic sent from <code>container[0]</code> and <code>container[1]</code> would appear as different hosts, violating that network identity promise</p>
<p>That's also why a container within a Pod can restart without the Pod losing its IP address and <code>.metadata.name</code> because only the workload containers are restarted, the <code>_POD_</code> version remains running. It's also why you'll always see <a href="https://github.com/kubernetes-sigs/kind/blob/v0.12.0/images/base/files/etc/containerd/config.toml#L30" rel="nofollow noreferrer"><code>k8s.gcr.io/pause</code> images</a> in your Node's <code>docker images</code> list, because that container is designed to "do nothing" except exist</p>
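<p>As a rough illustration, and assuming your nodes use the dockershim's usual <code>io.kubernetes.*</code> container labels, you can list both entries belonging to one Pod with something like:</p>
<pre><code>docker ps --filter "label=io.kubernetes.pod.name=<POD_NAME>" \
  --format 'table {{.ID}}\t{{.Names}}\t{{.Label "io.kubernetes.container.name"}}'
</code></pre>
<p>The sandbox entry shows up with the container name <code>POD</code>, while the other rows are your workload containers.</p>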
| mdaniel |
<p>I am looking for a recipe using Terraform to create a Kubernetes cluster on AWS using Fargate. I cannot find any end-to-end documentation to do this.</p>
<p>I am using SSO, and so terraform needs to use my AWS credentials to do this.</p>
<p>No example I can find addresses using AWS credentials and Fargate.</p>
<p>If anyone has done this and has a recipe for all of the above, please share.</p>
| user10664542 | <p>You can use the popular module <a href="https://github.com/terraform-aws-modules/terraform-aws-eks" rel="nofollow noreferrer">terraform-aws-eks</a> for that. It supports EKS on Fargate as well. Since it is open source, you can also have a look at exactly how to create such clusters if you want to fork and customize the module, or create your own from scratch.</p>
<p>Example use for Fargate EKS from its docs:</p>
<pre><code>module "eks" {
source = "../.."
cluster_name = local.cluster_name
cluster_version = "1.17"
subnets = module.vpc.private_subnets
tags = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
vpc_id = module.vpc.vpc_id
fargate_profiles = {
example = {
namespace = "default"
# Kubernetes labels for selection
# labels = {
# Environment = "test"
# GithubRepo = "terraform-aws-eks"
# GithubOrg = "terraform-aws-modules"
# }
# using specific subnets instead of all the ones configured in eks
# subnets = ["subnet-0ca3e3d1234a56c78"]
tags = {
Owner = "test"
}
}
}
map_roles = var.map_roles
map_users = var.map_users
map_accounts = var.map_accounts
}
</code></pre>
| Marcin |
<p>I try to run this command with a service account from Jenkins:
<code>kubectl rollout history deployment.v1.apps/config-service-deployment</code>
The command fails with the following error:</p>
<pre><code>Error from server (NotFound): namespaces "build" not found
</code></pre>
<p>I would like to mention, we have only one namespace: <strong>default</strong>; </p>
<p>This is the service account:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2019-09-09T05:50:56Z"
name: jenkins-user
namespace: default
resourceVersion: "387323"
selfLink: /api/v1/namespaces/default/serviceaccounts/jenkins-user
uid: ********
secrets:
- name: ********
</code></pre>
<p>If I log in from bash and use the default account, the command runs successfully and the history is returned.
The service account is working for creating new deployments and services. The only issue is, I can't get the rollout history.</p>
<p>What do I miss?</p>
| Zsolt Tolvaly | <p>You can side-step all doubt about what namespace is in the global <code>$KUBECONFIG</code> by being explicit about the namespace in which the deployment is happening:</p>
<p><code>kubectl -n default rollout history deploy/config-service-deployment</code></p>
| mdaniel |
<p>I want to set <code>slave.extraVolumes</code> as below. </p>
<pre><code>helm install my-db --set replication.enabled=true,slave.extraVolumes={"db-disk-1","db-disk-2"} bitnami/postgresql -n development
</code></pre>
<p>But it gives an error:</p>
<pre><code>Error: expected at most two arguments, unexpected arguments: bitnami/postgresql
</code></pre>
<p>Already tested ways:</p>
<pre><code>helm install my-db --set replication.enabled=true,slave.extraVolumes={db-disk-1,db-disk-2} bitnami/postgresql -n development
Error: expected at most two arguments, unexpected arguments: bitnami/postgresql
helm install my-db --set replication.enabled=true,slave.extraVolumes="db-disk-1\,db-disk-2" bitnami/postgresql -n development
Error: YAML parse error on postgresql/templates/statefulset-slaves.yaml: error converting YAML to JSON: yaml: line 115: could not find expected ':'
</code></pre>
| Padmasankha | <p>There are (at least) three things going on:</p>
<ul>
<li>the <a href="https://github.com/bitnami/charts/blob/master/bitnami/postgresql/templates/statefulset-slaves.yaml#L271-L273" rel="nofollow noreferrer"><code>slave.extraVolumes</code> is a <strong>list</strong> of <code>Volume</code> structures</a>, so just providing two names won't get it done</li>
<li>you are using characters that are meaningful to the shell without quoting them</li>
<li>but in the end it doesn't matter because you cannot represent complex structures using only <code>--set</code> syntax, you'll need <code>--values</code> with either a file or a <a href="https://www.gnu.org/software/bash/manual/html_node/Process-Substitution.html#Process-Substitution" rel="nofollow noreferrer">process substitution</a></li>
</ul>
<pre class="lang-shell prettyprint-override"><code>helm install my-db \
--set replication.enabled=true \
--values <(echo '{
"slave": {
"extraVolumes": [
{
"name": "db-disk-1",
"emptyDir": {}
},
{
"name": "db-disk-2",
"emptyDir": {}
}
]
}
}') \
bitnami/postgresql -n development
</code></pre>
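<p>Equivalently, you can put the same structure in a plain file (the filename here is just an example) and pass it with <code>--values</code>:</p>
<pre class="lang-yaml prettyprint-override"><code># slave-volumes.yaml
slave:
  extraVolumes:
    - name: db-disk-1
      emptyDir: {}
    - name: db-disk-2
      emptyDir: {}
</code></pre>
<pre class="lang-shell prettyprint-override"><code>helm install my-db --set replication.enabled=true --values slave-volumes.yaml bitnami/postgresql -n development
</code></pre>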
| mdaniel |
<p>I run a LoadBalancer on AWS, and when I tried to get an external IP I got something like this:</p>
<pre><code>a86a863a4bea9807-1478376474.us-west-2.elb.amazonaws.com
</code></pre>
<p>Is it possible to get a normal IP?</p>
| AlexWhite | <p>Yes you can get the IP, e.g. using <code>dig</code> or <code>drill</code> commands:</p>
<pre><code>drill a86a863a4bea9807-1478376474.us-west-2.elb.amazonaws.com
</code></pre>
<p>But the IPs returned are <strong>not static IP</strong> addresses. If you require static IP addresses for your load balancer, you should use either a Network Load Balancer, or add a <a href="https://docs.aws.amazon.com/global-accelerator/latest/dg/about-accelerators.alb-accelerator.html" rel="nofollow noreferrer">global accelerator</a> to the Application Load Balancer.</p>
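<p>If that load balancer is created by a Kubernetes <code>Service</code>, one way to get an NLB (and therefore static IPs per subnet) is the load-balancer-type annotation; this is a sketch with placeholder names, assuming the in-tree AWS cloud provider:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
</code></pre>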
| Marcin |
<p>Here's what my Dockerfile looks like for an image I'm creating.</p>
<pre><code>FROM python:3.7-alpine
COPY requirements.txt /
RUN pip install -r /requirements.txt
ENV U_PATH="a"
WORKDIR $U_PATH
</code></pre>
<p>I override the env variable <code>U_PATH</code> when I call it using <code>docker run -it -e U_PATH=/mnt temp:v1 /bin/sh</code> but the <code>WORKDIR</code> is set during build time and I cannot change that during runtime.</p>
<p>Is there any way to dynamically set the working directory at runtime by passing an env variable?</p>
| him229 | <p>While not an environment variable, don't forget you can alter the working directory of a Pod's container via the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#container-v1-core" rel="nofollow noreferrer"><code>workingDir:</code> PodSpec field</a></p>
<pre><code>containers:
- name: foo
image: 'temp:v1'
workingDir: /mnt
</code></pre>
| mdaniel |
<p>Hey I'm new to CI/CD with gitlab and I am a bit confused.</p>
<p>I got a Kubernetes cluster connected to a Gitlab instance to run CI/CD pipelines.
There is a gitlab runner with a kubernetes executor; from what I understand, this means there is a pod which runs the pipelines.</p>
<p>A look with <code>kubectl get pods -n gitlab-runner</code> supports that (now there is some other issue, but normally it is <code>1/1 running</code>):</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
gitlab-runner gitlab-runner-gitlab-runner-6b7bf4d766-9t4k6 0/1 Running 248 29d
</code></pre>
<p>The CI/CD pipelines call commands like <code>kubectl apply -f [...]</code> to create new deployments and pods.
But why does that work?
If the pipeline commands are run in the pod, modifications to the host cluster config should be impossible, right?
I thought the whole point of containerization is that guests can't modify the host.</p>
<p>Where is the flaw in my logic?</p>
| iaquobe | <blockquote>
<p>I thought the whole point of containerization is that guests can't modify the host.</p>
</blockquote>
<p>You are overlooking <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">the <code>serviceAccount</code></a> that is <em>optionally</em> injected into every Pod, and those <code>ServiceAccount</code> objects can be bound to <code>Role</code> or <code>ClusterRole</code> objects to optionally grant them privileges to operate against the kubernetes API, which is exposed in-cluster on the well known DNS address <code>https://kubernetes.default.svc.cluster.local</code></p>
<p>So, yes, they mostly can't modify the <em>host</em> but kubernetes is an orchestration engine, so the GitLab runner can <em>request</em> that a new Pod spin up within the cluster, which so long as the GitLab runner has the correct kubernetes credentials will be taken just as seriously as if a user had requested the same action from kubectl</p>
<p>Another way to look at this problem is that you would have equal success if you ran the gitlab-runner <em>outside</em> of kubernetes, but provided it with credentials to the cluster, you'd just then have the problem of running another VM outside of your existing cluster infrastructure, coming with all the maintenance burdens that always comes with</p>
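<p>To make that concrete, the runner's ServiceAccount only has whatever rights someone granted it; a deliberately narrow, illustrative binding (all names here are made up) might look like:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: my-app                  # the namespace the pipeline deploys into
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner-deployer
  namespace: my-app
subjects:
  - kind: ServiceAccount
    name: gitlab-runner-gitlab-runner   # the runner's ServiceAccount
    namespace: gitlab-runner
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>If instead the runner's ServiceAccount was bound to <code>cluster-admin</code> (or similar), then <code>kubectl apply</code> from inside a job pod can do essentially anything in the cluster, which is exactly what you are observing.</p>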
| mdaniel |
<p>I went through the steps listed here: <a href="https://kubernetes.io/docs/setup/production-environment/tools/kops/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kops/</a></p>
<p>After moving the kops file to /usr/local/bin/ and renaming to kops, I tried to confirm if it was in fact installed and executable by trying 'kops --help' and 'kops --version'/'kops version' and neither command worked. Any idea what the issue might be?</p>
<p>Edit: Here's what I did step by step</p>
<ol>
<li><p>curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s <a href="https://api.github.com/repos/kubernetes/kops/releases/latest" rel="nofollow noreferrer">https://api.github.com/repos/kubernetes/kops/releases/latest</a> | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64</p>
</li>
<li><p>sudo chmod +x kops-darwin-amd64</p>
</li>
<li><p>sudo mv kops-darwin-amd64 /usr/local/bin/kops</p>
</li>
</ol>
<p>It's a t2.micro Ubuntu 20.04 EC2 Instance.</p>
<p>Tried to confirm if kops was properly installed and executable by entering 'kops --help' and 'kops --version' and also 'kops version' but they all return this error:</p>
<p><code>-bash: /usr/local/bin/kops: cannot execute binary file: Exec format error</code></p>
| codestein | <p>I think it's because you are using <code>kops-darwin-amd64</code>, which is for macOS. You should be using <code>kops-linux-amd64</code> instead for Linux.</p>
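<p>Assuming nothing else about your setup changes, repeating your steps with the Linux binary should work:</p>
<pre><code>curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
sudo chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
kops version
</code></pre>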
| Marcin |
<p>I want to delete all the files under the volume directory. The directory is inside the Kubernetes pod. So I am using the exec command.</p>
<p>My command - </p>
<pre><code>kubectl exec $POD -- rm -rf /usr/local/my-app/volume/*
</code></pre>
<p>The above command is not working; there is no output on the terminal. I tried the below command and it works -</p>
<pre><code>kubectl exec $POD -- rm -rf /usr/local/my-app/volume
</code></pre>
<p>But it will delete the directory. I can't delete the directory because it is using for mounting purpose.</p>
<p>How can I achieve the above functionalities?</p>
<p>Thanks</p>
| lucy | <p>That's because the wildcard expansion is happening on <strong>your machine</strong> and not the Pod; what you want is to have the shell glob expand on the Pod, which one can accomplish via</p>
<pre class="lang-shell prettyprint-override"><code>kubectl exec $POD -- sh -c 'rm -rf /usr/local/my-app/volume/*'
</code></pre>
| mdaniel |
<p>I am trying to create a centralized file-based repository where I can upload all the configuration files needed for an application to run, which is deployed as a pod inside Kubernetes. Any suggestions on achieving this functionality? Can the file-based repository version the uploaded files?</p>
<p>I see that s3fs-fuse can be used to achieve this, but as far as I can tell, it won't support versioning the added config files in the S3 bucket.
<a href="https://github.com/s3fs-fuse/s3fs-fuse" rel="nofollow noreferrer">https://github.com/s3fs-fuse/s3fs-fuse</a></p>
<p>Any other suggestion ?</p>
| babs84 | <p>You could use <a href="https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html" rel="nofollow noreferrer">elastic file system</a> which is <a href="http://Amazon%20EKS%20Announces%20Support%20for%20the%20Amazon%20EFS%20CSI%20Driver" rel="nofollow noreferrer">supported</a> by EKS:</p>
<blockquote>
<p>Applications running in Kubernetes can use EFS file systems to <strong>share data between pods</strong> in a scale-out group, or with other applications running within or outside of Kubernetes. EFS can also help Kubernetes applications be highly available because all data written to EFS is written to multiple AWS Availability zones. If a Kubernetes pod is terminated and relaunched, the <strong>CSI driver will reconnect the EFS file system</strong>, even if the pod is relaunched in a different AWS Availability Zone.</p>
</blockquote>
<p>But it's not S3 and it <strong>does not have versioning</strong> of files as S3 does. You would have to add such functionality yourself, e.g. by keeping everything in a git repository on the EFS file system.</p>
| Marcin |
<p>In my application, I have a control plane component which spawns Jobs on my k8s cluster. I'd like to be able to pass in a dynamically generated (but read-only) config file to each Job. The config file will be different for each Job.</p>
<p>One way to do that would be to create, for each new Job, a ConfigMap containing the desired contents of the config file, and then set the ConfigMap as a VolumeMount in the Job spec when launching the Job. But now I have two entities in the cluster which are semantically tied together but don't share a lifetime, i.e. if the Job ends, the ConfigMap won't automatically go away.</p>
<p>Is there a way to directly "mount a string" into the Job's Pod, without separately creating some backing entity like a ConfigMap to store it? I could pass it in as an environment variable, I guess, but that seems fragile due to length restrictions.</p>
| kini | <p>The way that is traditionally done is via an <code>initContainer</code> and an <code>emptyDir</code> <code>volumeMount</code> that allows the two containers to "communicate" over a private shared piece of disk:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
initContainers:
- name: config-gen
image: docker.io/library/busybox:latest
command:
- /bin/sh
- -ec
# now you can use whatever magick you wish to generate the config
- |
echo "my-config: is-generated" > /generated/sample.yaml
echo "some-env: ${SOME_CONFIG}" >> /generated/sample.yaml
env:
- name: SOME_CONFIG
value: subject to injection like any other kubernetes env var
volumeMounts:
- name: shared-space
mountPath: /generated
containers:
- name: main
image: docker.example.com:1234
# now you can do whatever you want with the config file
command:
- /bin/cat
- /config/sample.yaml
volumeMounts:
- name: shared-space
mountPath: /config
volumes:
- name: shared-space
emptyDir: {}
</code></pre>
| mdaniel |
<p>I have a pod up and running and we have fluentd configurated in the cluster to scrap the pod logs and push it to Elastic via Logstash.</p>
<p>I need to do some testing where I am executing a process (<code>spark-submit</code>) manually on this pod and testing that the logs are being parsed correctly. Since I am running this manually, they are not being fed to the pod logs and are not appearing in Elastic. Is there any workaround to perform this testing?</p>
| adesai | <p>The "pod logs" are whatever gets written to <code>/proc/self/fd/1</code> and <code>/proc/self/fd/2</code> of the <em>container's</em> process-id (which is often <code>1</code> but not mandatory); if you have used <code>kubectl exec</code> to get into the pod, you'll want to redirect the process's output to those file descriptors (I'll use <code>$c_pid</code> as substitution for whatever pid it is):</p>
<pre class="lang-bash prettyprint-override"><code>spark-submit ..."$@"... >/proc/${c_pid}/fd/1 2>/proc/${c_pid}/fd/2
</code></pre>
<p>It is also possible to use <code>tee</code> if you want to follow along with the output, but that's more complex. The short version is to just smash together stdout and stderr and then tee that into the container's stdout</p>
<pre class="lang-bash prettyprint-override"><code>spark-submit ..."$@"... 2>&1 | tee /proc/${c_pid}/fd/1
</code></pre>
| mdaniel |
<p>I wrote a service to retrieve some information from the Kubernetes cluster. Below is a snippet from the <code>kubernetes_service.py</code> file that works perfectly when I run it on my local machine.</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes.client.rest import ApiException
from kubernetes import client, config
from exceptions.logs_not_found_exceptions import LogsNotFound
import logging
log = logging.getLogger("services/kubernetes_service.py")
class KubernetesService:
def __init__(self):
super().__init__()
config.load_kube_config()
self.api_instance = client.CoreV1Api()
def get_pods(self, body):
try:
api_response = self.api_instance.list_namespaced_pod(namespace=body['namespace'])
dict_response = api_response.to_dict()
pods = []
for item in dict_response['items']:
pods.append(item['metadata']['name'])
log.info(f"Retrieved the pods: {pods}")
return pods
except ApiException as e:
raise ApiException(e)
def get_logs(self, body):
try:
api_response = self.api_instance.read_namespaced_pod_log(name=body['pod_name'], namespace=body['namespace'])
tail_logs = api_response[len(api_response)-16000:]
log.info(f"Retrieved the logs: {tail_logs}")
return tail_logs
except ApiException:
raise LogsNotFound(body['namespace'], body['pod_name'])
</code></pre>
<p>When creating the docker image using Dockerfile, it also installed kubectl. Below is my Dockerfile.</p>
<pre><code>FROM python:3.8-alpine
RUN mkdir /app
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt && rm requirements.txt
RUN apk add curl openssl bash --no-cache
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
COPY . .
EXPOSE 8087
ENTRYPOINT [ "python", "bot.py"]
</code></pre>
<p>To grant the container permissions to run the command <code>kubectl get pods</code> I added the role in the deployment.yml file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: pyhelper
spec:
selector:
app: pyhelper
ports:
- protocol: "TCP"
port: 8087
targetPort: 8087
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: pyhelper
spec:
selector:
matchLabels:
app: pyhelper
replicas: 1
template:
metadata:
labels:
app: pyhelper
spec:
serviceAccountName: k8s-101-role
containers:
- name: pyhelper
image: **********
imagePullPolicy: Always
ports:
- containerPort: 8087
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: k8s-101-role
subjects:
- kind: ServiceAccount
name: k8s-101-role
namespace: ind-iv
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8s-101-role
</code></pre>
<p>At the start up of the container it returns the error <code>kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found</code> at the line <code>config.load_kube_config()</code> in the <code>kubernetes_service.py</code> file. I checked the config file by running the command <code>kubectl config view</code> and the file is indeed empty. What am I doing wrong here?
Empty config file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
</code></pre>
<p>Also tried to run the command <code>kubectl get pods</code> in the shell of the container and it successfully returned the pods.</p>
| Lucas Scheepers | <p>I believe you'll want <a href="https://github.com/kubernetes-client/python-base/blob/3aa8b4c94282707a20482f71e86624f3b39a2cc6/config/__init__.py#L24" rel="nofollow noreferrer"><code>kubernetes.config.load_config</code></a> which differs from the <code>load_kube_config</code> you're currently using in that the package-level one looks for any <code>$HOME/.kube/config</code> as you expected, but <em>then</em> falls back to the in-cluster config as the <code>ServiceAccount</code> usage expects</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes.config import load_config
class KubernetesService:
def __init__(self):
super().__init__()
load_config()
</code></pre>
| mdaniel |
<p>According to the [documentation][1] Kubernetes variables are expanded using the previous defined environment variables in the container using the syntax $(VAR_NAME). The variable can be used in the container's entrypoint.</p>
<p>For example:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: MESSAGE
value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
</code></pre>
<p>Is this possible though to use bash expansion aka <code>${Var1:-${Var2}}</code> inside the container's entrypoint for the kubernetes environment variables E.g.</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: Var1
value: "hello world"
- name: Var2
value: "no hello"
command: ['bash', '-c', "echo ${Var1:-$Var2}"]
</code></pre>
| alixander | <blockquote>
<p>Is this possible though to use bash expansion aka <code>${Var1:-${Var2}}</code> inside the container's entrypoint ?</p>
</blockquote>
<p>Yes, by using</p>
<pre><code>command:
- /bin/bash
- "-c"
- "echo ${Var1:-${Var2}}"
</code></pre>
<p>but not otherwise -- kubernetes is not a wrapper for bash, it use the Linux <code>exec</code> system call to launch programs inside the container, and so the only way to get bash behavior is to launch bash</p>
<p>That's also why they chose <code>$()</code> syntax for their environment interpolation so it would be different from the <code>${}</code> style that a shell would use, although this question comes up so much that one might wish they had not gone with <code>$</code> anything to avoid further confusing folks</p>
| mdaniel |
<p>In Jenkins, jobs A and B are both executed on the same machine against two different clusters. When the "kubectl config use-context" command is entered in both jobs, they error out with the following error. How can this be handled?</p>
<p>It looks like use-context changes the file, and doing it at the same time from two jobs causes issues.</p>
<p>On job A:</p>
<ul>
<li>kubectl config use-context arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/ABC
error: error loading config file "/home/ubuntu/.kube/config": yaml: line 29: could not find expected ':'</li>
</ul>
<p>On job B:</p>
<ul>
<li>kubectl config use-context arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/CBD
error: error loading config file "/home/ubuntu/.kube/config": yaml: line 29: could not find expected ':'</li>
</ul>
| harishb | <p>You don't need to issue a "use-context" (which yes, does write to the <code>$KUBECONFIG</code>) -- kubectl has the <code>--context</code> argument that allows you to specify the context to use per invocation:</p>
<pre><code># job A
$ kubectl --context "arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/ABC" get nodes
# job B
$ kubectl --context "arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/CBD" get nodes
</code></pre>
<p>However, if you have <em>a lot</em> of those commands, that can get tedious. In that case, you may be happier merely copying the original <code>$KUBECONFIG</code> and then setting the <code>KUBECONFIG</code> env-var in the job to point to your local, effectively disposable, one:</p>
<pre class="lang-sh prettyprint-override"><code>cp ${KUBECONFIG:-$HOME/.kube/config} job-X.kubeconfig
export KUBECONFIG=$PWD/job-X.kubeconfig
# some copies of kubectl whine if the permissions are too broad
chmod 0600 $KUBECONFIG
# now your use-context is safe to perform
kubectl config use-context "arn:aws:eks:us-west-2:XYZXYZXYZ:cluster/ABC"
kubectl get nodes
</code></pre>
| mdaniel |
<p>Why are the default load balancer ports 80 and 443 considered TCP ports? I want to test stickiness as shown in the <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#sticky-sessions" rel="nofollow noreferrer">aws docs</a>, either through a yaml file or through the aws console.</p>
<p>I was using nginx ingress and moved to default load balancer to test stickiness but I see the error <code>Stickiness options not available for TCP protocols</code></p>
<p><a href="https://i.stack.imgur.com/0mt0A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0mt0A.png" alt="enter image description here" /></a></p>
<p>I even tried specifying protocol <code>https</code> but it doesn't accept. It only allows <code>"SCTP", "TCP", "UDP"</code>.</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: httpd
labels:
app: httpd-service
namespace: test-web-dev
spec:
#type: LoadBalancer
selector:
app: httpd
ports:
- name: port-80
port: 80
targetPort: 80
- name: port-443
port: 443
targetPort: 443
- name: port-1234
port: 1234
protocol: TCP
targetPort: 1234
</code></pre>
<p>When I try ingress, I disable the service type <code>Loadbalancer</code> above</p>
<p><code>nginx-ingress-lb-service.yml</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: ingress-nginx
data:
1234: "test-web-dev/httpd:1234"
---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
- name: port-1234
port: 1234
protocol: TCP
targetPort: 1234
---
</code></pre>
| user630702 | <p>Stickiness requires a listener which operates in <strong>layer 7</strong> of the <a href="https://en.wikipedia.org/wiki/OSI_model" rel="nofollow noreferrer">OSI model</a>, which in the case of CLB is provided by <code>http</code> and <code>https</code> listeners.</p>
<p>Since you are using a <code>TCP</code> listener, which operates in <strong>layer 4</strong>, stickiness is not supported. Thus, if you want to use sticky sessions, you <strong>must change to</strong> <code>http</code> or <code>https</code> listeners.</p>
<p><code>UDP</code> and <code>SCTP</code> are invalid listeners for CLB. It only supports <code>TCP</code>, <code>HTTP</code>, <code>HTTPS</code> and <code>SSL</code>.</p>
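<p>If that CLB is being created by your Kubernetes <code>Service</code> of type LoadBalancer, the listener protocol is normally driven by service annotations; as a sketch (assuming the in-tree AWS cloud provider), something like this should give you HTTP listeners, after which stickiness becomes configurable:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: httpd
  namespace: test-web-dev
  annotations:
    # ask the cloud provider for HTTP listeners instead of plain TCP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: httpd
  ports:
    - name: port-80
      port: 80
      targetPort: 80
</code></pre>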
| Marcin |
<p>I have 2 services. Service A and Service B. They correspond to deployments dA and dB. </p>
<p>I set up my cluster and start both services/deployments. Service A is reachable from the external world. External World --> Service A <--> Service B.</p>
<p>How can I scale dB (change replicaCount and run kubectl apply OR kubectl scale) from within Service A that is responding to a user request. </p>
<p>For example, if a user that is being served by Service A wants some extra resource in my app, I would like to provide that by adding an extra pod to dB. How do I do this programmatically? </p>
| crossvalidator | <p>Every <code>Pod</code>, unless it opts out, has a <code>ServiceAccount</code> token injected into it, which enables it to interact with the kubernetes API according to the <code>Role</code> associated with the <code>ServiceAccount</code></p>
<p>Thus, one can use any number of kubernetes libraries -- most of which are "in cluster" aware, meaning they don't need any further configuration to know about that injected <code>ServiceAccount</code> token and how to use it -- to issue scale events against any resource the <code>ServiceAccount</code>'s <code>Role</code> is authorized to use</p>
<p>You can make it as simple or as complex as you'd like, but the tl;dr is akin to:</p>
<pre class="lang-sh prettyprint-override"><code>curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     --header "Accept: application/json" \
     --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/namespaces
</code></pre>
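<p>And since the question was about scaling specifically, the same token can drive the <code>scale</code> subresource of the Deployment; a rough sketch (the namespace and Deployment name are placeholders, and the ServiceAccount's Role must allow patching <code>deployments/scale</code>):</p>
<pre class="lang-sh prettyprint-override"><code>curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
     --header "Content-Type: application/merge-patch+json" \
     --request PATCH \
     --data '{"spec":{"replicas":3}}' \
     https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/apis/apps/v1/namespaces/default/deployments/dB/scale
</code></pre>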
| mdaniel |
<p>I am trying to implement a liveness probe through C# code (.NET Core framework). I simply want to run a curl command inside the container <a href="https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/probe/exec-liveness.yaml" rel="nofollow noreferrer">like this</a>. Below is the code snippet:</p>
<pre><code>IList<string> command = new List<string>();
V1Probe livnessconfig = null;
command.Add("curl http://localhost:5001/checkhealth/");
V1ExecAction execommand = new V1ExecAction(command);
livnessconfig = new V1Probe { Exec = execommand, InitialDelaySeconds = 10, PeriodSeconds = 10, TimeoutSeconds = 5, FailureThreshold = 3 };
</code></pre>
<p><strong>But getting this error in pod description:</strong></p>
<blockquote>
<p>Liveness probe errored: rpc error: code = Unknown desc = failed to
exec in container: failed to start exec
"a80d33b5b2046b8e606ed622da7085013a725": OCI runtime exec failed: exec
failed: container_linux.go:380: starting container process caused:
exec: "curl http://localhost:5001/checkhealth/": stat curl
http://localhost:5001/checkhealth/</p>
</blockquote>
<p>Can someone let me know whether this is the correct way to provide a command to V1ExecAction? Its metadata implementation in k8s.Models shows that V1ExecAction takes the command as a List:</p>
<pre><code> #region Assembly KubernetesClient, Version=3.0.0.0, Culture=neutral, PublicKeyToken=a0f90e8c9af122d
using Newtonsoft.Json;
using System.Collections.Generic;
namespace k8s.Models
{
//
// Summary:
// ExecAction describes a "run in container" action.
public class V1ExecAction
{
//
// Summary:
// Initializes a new instance of the V1ExecAction class.
public V1ExecAction();
//
// Summary:
// Initializes a new instance of the V1ExecAction class.
//
// Parameters:
// command:
// Command is the command line to execute inside the container, the working directory
// for the command is root ('/') in the container's filesystem. The command is simply
// exec'd, it is not run inside a shell, so traditional shell instructions ('|',
// etc) won't work. To use a shell, you need to explicitly call out to that shell.
// Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
public V1ExecAction(IList<string> command = null);
//
// Summary:
// Gets or sets command is the command line to execute inside the container, the
// working directory for the command is root ('/') in the container's filesystem.
// The command is simply exec'd, it is not run inside a shell, so traditional shell
// instructions ('|', etc) won't work. To use a shell, you need to explicitly call
//out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
[JsonProperty(PropertyName = "command")]
public IList<string> Command { get; set; }
} }
</code></pre>
| solveit | <p>You have confused the Exec form with the Shell form; you can either change your <code>command</code> to use a shell explicitly, or fix the invocation to be compatible with exec. That's what the <code>stat</code> response was trying to tell you: there is no such <em>file</em> named <code>curl http...</code></p>
<h2>Using sh</h2>
<pre class="lang-cs prettyprint-override"><code>command.Add("sh");
command.Add("-c");
command.Add("curl http://localhost:5001/checkhealth/");
</code></pre>
<h2>Using native exec</h2>
<pre class="lang-cs prettyprint-override"><code>command.Add("curl");
command.Add("http://localhost:5001/checkhealth/");
</code></pre>
<p>While this wasn't what you asked, the next problem you're going to experience is that curl only varies its exit status based on whether it could connect, not 200 versus non-200 HTTP status codes. You will want the <code>--fail</code> argument to have curl vary its return code based on the HTTP status</p>
<p>You may also want to also include <code>--silent</code> since that health check command output shows up in the kubelet logs and in <code>kubectl describe pod</code></p>
| mdaniel |
<p>I am trying to delete multiple ConfigMaps at once using a label. With <code>kubectl</code>, I would do it as follows:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete cm -l application=my-app
</code></pre>
<p>Kubeclient offers the <code>delete_config_map</code> method, but it requires a name.</p>
<pre class="lang-rb prettyprint-override"><code># `k` is an instance of Kubeclient::Client
k.delete_config_map('my-config-map')
</code></pre>
<hr />
<p><strong>Is there a way to achieve the same behavior as the CLI here?</strong></p>
| Richard-Degenne | <p>The way <code>kubectl</code> does operations upon labeled, versus named, resources is that it actually does that in two phases: <code>get -o name $resourceType -l ...</code> and then the actual requested operation upon <code>${those_resource_names}</code></p>
<p>One can run <code>kubectl --v=10</code> (or the <code>v</code> of your choice) to see it in action</p>
<p>Since that behavior is a feature of <code>kubectl</code> and not the kubernetes API itself, it means anyone trying to replicate that handy feature will need to replicate the two-phase approach also</p>
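<p>So with Kubeclient you would do the same two phases yourself; a sketch along these lines (method and option names as I understand the kubeclient gem, so verify them against your client version):</p>
<pre class="lang-rb prettyprint-override"><code># phase 1: list the ConfigMaps matching the label
config_maps = k.get_config_maps(namespace: 'default', label_selector: 'application=my-app')

# phase 2: delete each one by name
config_maps.each do |cm|
  k.delete_config_map(cm.metadata.name, cm.metadata.namespace)
end
</code></pre>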
| mdaniel |
<p>LetsEncrypt not verifying via Kubernetes ingress and loadbalancer in AWS EKS</p>
<p>ClientIssuer</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
namespace: cert-manager
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: [email protected]
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p>Ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: echo-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
tls:
- hosts:
- echo0.site.com
secretName: echo-tls
rules:
- host: echo0.site.com
http:
paths:
- backend:
serviceName: echo0
servicePort: 80
</code></pre>
<p>Events</p>
<pre><code>12m Normal IssuerNotReady certificaterequest/echo-tls-3171246787 Referenced issuer does not have a Ready status condition
12m Normal GeneratedKey certificate/echo-tls Generated a new private key
12m Normal Requested certificate/echo-tls Created new CertificateRequest resource "echo-tls-3171246787"
4m29s Warning ErrVerifyACMEAccount clusterissuer/letsencrypt-staging Failed to verify ACME account: context deadline exceeded
4m29s Warning ErrInitIssuer clusterissuer/letsencrypt-staging Error initializing issuer: context deadline exceeded
</code></pre>
<p>kubectl describe certificate</p>
<pre><code>Name: echo-tls
Namespace: default
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1alpha3
Kind: Certificate
Metadata:
Creation Timestamp: 2020-04-04T23:57:22Z
Generation: 1
Owner References:
API Version: extensions/v1beta1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: echo-ingress
UID: 1018290f-d7bc-4f7c-9590-b8924b61c111
Resource Version: 425968
Self Link: /apis/cert-manager.io/v1alpha3/namespaces/default/certificates/echo-tls
UID: 0775f965-22dc-4053-a6c2-a87b46b3967c
Spec:
Dns Names:
echo0.site.com
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-staging
Secret Name: echo-tls
Status:
Conditions:
Last Transition Time: 2020-04-04T23:57:22Z
Message: Waiting for CertificateRequest "echo-tls-3171246787" to complete
Reason: InProgress
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal GeneratedKey 18m cert-manager Generated a new private key
Normal Requested 18m cert-manager Created new CertificateRequest resource "echo-tls-3171246787"
</code></pre>
<p>Been going at this for a few days now. I have tried with different domains, but end up with the same results. Am I missing any steps here? It is based off of this tutorial <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">here</a>.</p>
<p>Any help would be appreciated.</p>
| teej2542 | <p>Usually with golang applications the error <code>context deadline exceeded</code> means the connection timed out. That sounds like the <code>cert-manager</code> pod was not able to reach the ACME API, which can happen if your cluster has outbound firewalls, and/or does not have a NAT or Internet Gateway attached to the subnets.</p>
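<p>One quick way to test that theory from inside the cluster (assuming you can pull a small image like <code>curlimages/curl</code>) is a throwaway pod that tries to reach the ACME directory; if this hangs and times out, the problem is cluster egress rather than cert-manager itself:</p>
<pre><code>kubectl run acme-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sS --max-time 10 https://acme-staging-v02.api.letsencrypt.org/directory
</code></pre>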
| mdaniel |
<p>We're moving a legacy app to Kubernetes. We will have many instances of it running (a Kubernetes namespace per customer), so we want to automate our application upgrade process.</p>
<p>Kubernetes has well established patterns for rolling upgrades, but I can't use them (yet). My application requires the following process:</p>
<ol>
<li>all existing pods are deleted</li>
<li>a database upgrade job (Kubernetes Job) runs and completes successfully</li>
<li>new pods are created</li>
</ol>
<p>We define the pods through a Deployment. The database upgrade job is idempotent as long as we never run more than one at a time.</p>
<p>I assume my workflow is not an uncommon one for legacy applications, and yet I can't find any established patterns or tools preconfigured. Is there something out there? If I do have to write my own operator (or use something like Kudo), what is the best set of steps for it to perform?</p>
| Jesse McDowell | <p>Yes, there is an existing process for that:</p>
<ol>
<li><p>Use the <code>kubectl scale</code> command to scale down the existing Deployment to zero replicas: <code>kubectl scale --replicas=0 deploy/my-legacy-deployment</code></p>
</li>
<li><p>Wait for that to stabilize (there's your requested downtime ;-)</p>
<p>Using <code>kubectl wait</code> will be helpful, although I personally don't have experience introducing downtime in order to know the remaining arguments to <code>wait</code> to suggest here</p>
<p>You can also use something like <code>while true; do [[ 0 -eq $(kubectl get pods -o name -l legacy-deployment-selector | wc -l) ]] && break; done</code> to stall until there are no pods remaining</p>
</li>
<li><p>Run your database job, or migrations of your choosing</p>
</li>
<li><p>Deploy the new version as you normally would; depending on the tool you use, this may or may not actually influence the currently zero-scaled Deployment</p>
<p>For example, <code>kubectl set image deploy/my-legacy-deployment "*=example.com/my/new/image"</code> will leave the Deployment at zero replicas</p>
<p>But a <code>helm install --upgrade legacy whatever-else</code> may very well set the Deployment's replicas to the value found in the chart</p>
</li>
<li><p>If your tool has not yet scaled up the new Deployment, you can now set it back to the desired value with the opposite command: <code>kubectl scale --replicas=3 deploy/my-legacy-deployment</code></p>
</li>
</ol>
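<p>A possible form of that <code>kubectl wait</code> step, assuming the Deployment's pods carry a label such as <code>app=my-legacy-app</code> (substitute your own selector):</p>
<pre class="lang-sh prettyprint-override"><code># blocks until every matching pod has been deleted, or gives up after two minutes
kubectl wait --for=delete pod -l app=my-legacy-app --timeout=120s
</code></pre>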
| mdaniel |
<p>I'm working with Airflow DAGs and Kubernetes for the first time.
I have a Python script that connects to AWS S3 and reads some files. This works fine if I run it in a Docker container using bash. But when I try to run the same Docker image from an Airflow task using a K8s pod, I get the following error (I replaced some sensitive values with XXXXX):</p>
<pre><code> [2022-02-08 22:48:55,795] {kubernetes_pod.py:365} INFO - creating pod with labels {'dag_id': 'ECO_CELLS_POLYGON_STORES', 'task_id': 'process_AR', 'execution_date': '2022-02-08T224216.4628350000-e866f2011', 'try_number': '1'} and launcher <airflow.providers.cncf.kubernetes.utils.pod_launcher.PodLauncher object at 0x7f649be71410>
[2022-02-08 22:48:55,812] {pod_launcher.py:93} ERROR - Exception when attempting to create Namespaced Pod: {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"annotations": {},
"labels": {
"airflow_version": "2.0.0-astro.8",
"kubernetes_pod_operator": "True",
"dag_id": "ECO_CELLS_POLYGON_STORES",
"task_id": "process_AR",
"execution_date": "2022-02-08T224216.4628350000-e866f2011",
"try_number": "1"
},
"name": "k8s-pod-ml-operator.3aada8ada8df491ea63e9319bf779d10",
"namespace": "default"
},
"spec": {
"affinity": {},
"containers": [
{
"args": [],
"command": [
"python",
"main.py"
],
"env": {
"AWS_ACCESS_KEY_ID": "XXXXXX",
"AWS_SECRET_ACCESS_KEY": "***",
"AWS_BUCKET_NAME": "XXXXXX-dev",
"SNOWFLAKE_SERVER": "XXXXXX",
"SNOWFLAKE_LOGIN": "XXXXXX",
"SNOWFLAKE_PASSWORD": "***",
"SNOWFLAKE_ACCOUNT": "XXXXXX",
"SNOWFLAKE_DATABASE": "XXXXXX",
"SNOWFLAKE_WAREHOUSE": "XXXXXX",
"COUNTRY": "AR",
"S3_PROJECT": "ecom_polygon_stores",
"S3_TEAM_VERTICAL": "ecommerce"
},
"envFrom": [],
"image": "ecom_polygon_stores:v1.0.7",
"imagePullPolicy": "Never",
"name": "base",
"ports": [],
"resources": {},
"volumeMounts": []
}
],
"hostNetwork": false,
"imagePullSecrets": [],
"initContainers": [],
"restartPolicy": "Never",
"securityContext": {},
"serviceAccountName": "default",
"tolerations": [],
"volumes": []
}
}
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/utils/pod_launcher.py", line 89, in run_pod_async
body=sanitized_pod, namespace=pod.metadata.namespace, **kwargs
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 6174, in create_namespaced_pod
(data) = self.create_namespaced_pod_with_http_info(namespace, body, **kwargs) # noqa: E501
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 6265, in create_namespaced_pod_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 345, in call_api
_preload_content, _request_timeout)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 176, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 388, in request
body=body)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 278, in POST
body=body)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 231, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '319700db-6333-4a4d-885c-1f45a0cd13a3', 'X-Kubernetes-Pf-Prioritylevel-Uid': '4d5b12e4-65e9-4ab9-ad63-de6f29ca0b6d', 'Date': 'Tue, 08 Feb 2022 22:48:55 GMT', 'Content-Length': '487'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod in version \"v1\" cannot be handled as a Pod: v1.Pod.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: decode slice: expect [ or n, but found {, error found in #10 byte of ...|, \"env\": {\"AWS_ACCES|..., bigger context ...|s\": [], \"command\": [\"python\", \"main.py\"], \"env\": {\"AWS_ACCESS_KEY_ID\": \"AXXXXXXXXXXXXXXXXXX6\", \"AWS_|...","reason":"BadRequest","code":400}
</code></pre>
<p>I'm not sure where to go from here... From what I'm reading the error says it expected a [ instead of a { in "env": {"AWS_ACCESS_KEY_ID"... But I'm not sure how to correct that since I pass those parameters like this:</p>
<pre><code> self.env_vars = {
'AWS_ACCESS_KEY_ID': s3_connection.login,
'AWS_SECRET_ACCESS_KEY': s3_connection.password,
'AWS_BUCKET_NAME': bucket_name,
'SNOWFLAKE_SERVER': str(snowflake_connection.host),
'SNOWFLAKE_LOGIN': str(snowflake_connection.login),
'SNOWFLAKE_PASSWORD': str(snowflake_connection.password),
'SNOWFLAKE_ACCOUNT': str(snowflake_connection.extra_dejson['account']),
'SNOWFLAKE_DATABASE': str(snowflake_connection.extra_dejson['database']),
'SNOWFLAKE_WAREHOUSE': str(snowflake_connection.extra_dejson['warehouse']),
'COUNTRY': code,
'S3_PROYECT': s3_project,
'S3_TEAM_VERTICAL': s3_team_vertical
}
</code></pre>
<p>Any suggestions?</p>
| Alain | <p>Your <code>env:</code> is malformed; one can see this in two different ways: (1) <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#container-v1-core" rel="nofollow noreferrer"><code>env:</code> in the <code>PodSpec</code></a> is a <strong>list</strong> of <code>{name: "", value: ""}</code> items (2) the structure emitted in the error message is malformed regardless: <code>"env": {"AWS_ACCESS_KEY_ID": "ASIAWCMTKGYGDU6KEOD6", "AWS_|...</code> as there is no such shape of data as <code>{"":"",""</code></p>
<p>I don't have any Airflow reference documentation links handy, but you'd want to check them to ensure <code>self.env_vars</code> is what Airflow expects it to be, since python places the entire burden of correctness upon the programmer</p>
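<p>For reference, the PodSpec expects <code>env</code> to be a list of name/value objects rather than a mapping — roughly this shape (values elided):</p>
<pre class="lang-json prettyprint-override"><code>"env": [
  {"name": "AWS_ACCESS_KEY_ID", "value": "XXXXXX"},
  {"name": "AWS_BUCKET_NAME", "value": "XXXXXX-dev"}
]
</code></pre>
<p>so whatever Airflow builds out of <code>self.env_vars</code> has to end up in that form.</p>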
| mdaniel |
<p>I'm interested in whether I can run k8s with a publicly available control plane and worker nodes on a network behind a firewall (which is an edge/IoT deployment use-case). The main concern, as I understand it, is communication between the apiserver and kubelet/kube-proxy. Can it be configured as only node -> master communication? How can I achieve this?</p>
<p>I could not find precise info besides this short note in the kubelet <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">reference</a>:
<code>HTTP endpoint: HTTP endpoint passed as a parameter on the command line. This endpoint is checked every 20 seconds (also configurable with a flag).</code></p>
<p>For kube-proxy I could not find any info.</p>
<p>I'm also new to golang so analyzing the k8s source code is for now beyond my skill. Any help appreciated :)</p>
| Piotr Kozimor | <blockquote>
<p>Can it be configured as only node -> master communication? How can I achieve this?</p>
</blockquote>
<p>I would guess only trying it will prove for sure that the apiserver doesn't need to contact the <code>kubelet</code>. However, related to that: be aware that in such a setup, <code>kubectl exec</code> and <code>kubectl logs</code> will no longer function, because servicing those commands requires the API server to open a connection to port 10250 on the kubelet, rather than relying only on node -> master traffic</p>
<p>As for kube-proxy, <a href="https://github.com/kubernetes/kubernetes/blob/v1.17.0/cluster/addons/kube-proxy/kube-proxy-ds.yaml#L46-L47" rel="nofollow noreferrer">it appears</a> it uses the in-cluster <code>$KUBERNETES_SERVICE_HOST</code> which will be the <code>.1</code> IP of the <code>Service</code> CIDR and will use the software defined network to reach the apiserver. Although there are <a href="https://github.com/kubernetes/kubernetes/blob/v1.17.0/cluster/gce/manifests/kube-proxy.manifest#L60-L61" rel="nofollow noreferrer">other configurations</a> which volume mount a <code>kubeconfig</code> from the host, so I guess the ultimate answer will depend on how you installed your cluster.</p>
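<p>If your cluster was set up with kubeadm, the kubeconfig that kube-proxy uses is typically stored in a ConfigMap, so you can inspect which apiserver address it points at — a sketch assuming the default <code>kube-proxy</code> ConfigMap name:</p>
<pre class="lang-sh prettyprint-override"><code># shows the kubeconfig (and therefore the server address) kube-proxy was given
kubectl -n kube-system get configmap kube-proxy -o yaml
</code></pre>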
| mdaniel |
<p>I was playing around in minikube and installed the wrong version of istio. I ran:</p>
<pre><code>kubectl apply -f install/kubernetes/istio-demo-auth.yaml
</code></pre>
<p>instead of:</p>
<pre><code>kubectl apply -f install/kubernetes/istio-demo.yaml
</code></pre>
<p>I figured I would just undo it and install the right one.</p>
<p>But I cannot seem to find an <code>unapply</code> command.</p>
<p><strong>How do I <em>undo</em> a "kubectl apply" command?</strong></p>
| Vaccano | <p>One way would be <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete" rel="noreferrer"><code>kubectl delete -f <filename></code></a> but it implies a few things:</p>
<ol>
<li><p>The resources were first created. It simply removes all of those, if you really want to "revert to the previous state" I'm not sure there are built-in tools in Kubernetes to do that (so you really would restore from a backup, if you have one)</p></li>
<li><p>The containers did not modify the host machines: containers may mount root filesystem and change it, or kernel subsystems (iptables, etc). The <code>delete</code> command would not revert it either, and in that case you really need to check the documentation for the product to see if they offer any official way to guarantees a proper cleanup</p></li>
</ol>
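<p>Applied to your case, that would presumably be:</p>
<pre class="lang-sh prettyprint-override"><code># removes the resources created by the mistaken apply
kubectl delete -f install/kubernetes/istio-demo-auth.yaml
</code></pre>
<p>after which you can apply <code>install/kubernetes/istio-demo.yaml</code> as originally intended.</p>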
| zerkms |
<p>I have a Kubernetes cluster that uses an Ingress to forward traffic to a frontend React app and a backend Flask app. My problem is that the React app only works if the rewrite-target annotation is not set, and the Flask app only works if it is.</p>
<p>How can I make my Flask app accessible without setting this value (commented out in the YAML below)?</p>
<p>Here is the Ingress resource:</p>
<pre><code>metadata:
name: thesis-ingress
namespace: thesis
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/add-base-url: "true"
# nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
tls:
- hosts:
- thesis
secretName: ingress-tls
rules:
- host: thesis.info
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 3000
- path: /backend
pathType: Prefix
backend:
service:
name: backend
port:
number: 5000
</code></pre>
| EoinHanan | <p>Your question didn't specify, but I'm guessing your capture group was to rewrite <code>/backend/(.+)</code> to <code>/$1</code>; on that assumption:</p>
<p>Be aware that annotations are per-Ingress, but all Ingress resources are unioned across the cluster to comprise the whole of the configuration. Thus, if you need one rewrite and one without, just create two Ingress resources</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
name: thesis-frontend
namespace: thesis
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/add-base-url: "true"
nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
tls:
- hosts:
- thesis
secretName: ingress-tls
rules:
- host: thesis.info
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 3000
---
metadata:
name: thesis-backend
namespace: thesis
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/add-base-url: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
tls:
- hosts:
- thesis
secretName: ingress-tls
rules:
  - host: thesis.info
    http:
      paths:
      - path: /backend/(.+)
        backend:
          service:
            name: backend
            port:
              number: 5000
</code></pre>
| mdaniel |
<p>I'm new to Kubernetes and I was trying to deploy a nodejs service to it. For that I created a Docker image, uploaded it to Docker Hub, and finally created a deployment file that contains all the required configuration.
The deployment file is shown below. I then executed the command 'kubectl apply -f deployment_local.yaml' and came across this error: "*spec.template.metadata.labels:Invalid value map[string]string{"app":"nodejs\u00a0\u00a0"}:<code>selector</code> does not match template <code>labels</code>"</p>
<p>I'm trying to fix this but have not managed to. Please help me understand this error; I've been struggling with it for a long time.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs-deployment
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: nodejs
template:
metadata:
labels:
app: nodejs
spec:
containers:
- name: nodeapp
image: lucasseabra/nodejs-starter
---
apiVersion: v1
kind: Service
metadata:
name: nodejs-entrypoint
namespace: default
spec:
type: NodePort
selector:
app: nodejs
ports:
- port: 3000
targetPort: 3000
nodePort: 30001
</code></pre>
| Lucas Seabra | <p>As the error message was trying to tell you, there are two <a href="https://en.wikipedia.org/wiki/Non-breaking_space#Encodings" rel="nofollow noreferrer">"non-breaking space" characters</a> after <code>nodejs</code>: <code>map[string]string{"app":"nodejs\u00a0\u00a0"}</code></p>
<p>I would guess it was a side-effect of copy-pasting from a webpage</p>
<p>If you even do a "select all" on your posted question here, you'll see that SO has converted the two characters into normal spaces, but they do show up in the selection extension past the "nodejs" text</p>
<p>If your editor is not able to show you the characters, then either manually retype the labels, or try copying this (which is just yours but with trailing spaces removed)</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs-deployment
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: nodejs
template:
metadata:
labels:
app: nodejs
spec:
containers:
- name: nodeapp
image: lucasseabra/nodejs-starter
---
apiVersion: v1
kind: Service
metadata:
name: nodejs-entrypoint
namespace: default
spec:
type: NodePort
selector:
app: nodejs
ports:
- port: 3000
targetPort: 3000
nodePort: 30001
</code></pre>
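<p>If you want to confirm the invisible characters really are in your file, GNU grep can search for the UTF-8 encoding of a non-breaking space (0xC2 0xA0) — assuming your manifest is saved as <code>deployment.yaml</code>:</p>
<pre class="lang-sh prettyprint-override"><code># -P enables Perl-style escapes; prints any line containing a non-breaking space
grep -nP '\xC2\xA0' deployment.yaml
</code></pre>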
| mdaniel |
<p>When deploying Spinnaker to EKS via <code>hal deploy apply</code>, Spinnaker Clouddriver pod goes to <code>CrashLoopBackOff</code> with the following error,</p>
<blockquote>
<p>Factory method 'awsProvider' threw exception; nested exception is java.lang.NullPointerException: Cannot get property 'name' on null object</p>
</blockquote>
<p>My Halyard config is as follows:</p>
<pre><code>currentDeployment: default
deploymentConfigurations:
- name: default
version: 1.17.6
providers:
appengine:
enabled: false
accounts: []
aws:
enabled: true
accounts:
- name: my-account
requiredGroupMembership: []
providerVersion: V1
permissions: {}
accountId: '010101010101' # my account id here
regions: []
assumeRole: Spinnaker-Clouddriver-Role
lifecycleHooks: []
primaryAccount: my-account
bakeryDefaults:
baseImages: []
defaultKeyPairTemplate: '{{name}}-keypair'
defaultRegions:
- name: us-east-1
defaults:
iamRole: BaseIAMRole
</code></pre>
<p>My <code>Spinnaker-Clouddriver-Role</code> IAM role has full permissions at the moment. How can I get this resolved?</p>
<hr>
<p>This is the full log <a href="https://gist.github.com/agentmilindu/cfbebffe46b93458df8158f9355e4041" rel="nofollow noreferrer">https://gist.github.com/agentmilindu/cfbebffe46b93458df8158f9355e4041</a></p>
| Milindu Sanoj Kumarage | <p><em>This is more or less a guess, since you didn't include one iota of version information about your spinnaker setup, but...</em></p>
<p>According to <a href="https://gist.github.com/agentmilindu/cfbebffe46b93458df8158f9355e4041#file-spinnaker-error-log-L128" rel="nofollow noreferrer"><code>at com.netflix.spinnaker.clouddriver.aws.provider.agent.ReservationReportCachingAgent$_determineVpcOnlyAccounts_closure2.doCall(ReservationReportCachingAgent.groovy:117) ~[clouddriver-aws.jar:na]</code></a> in your gist, which corresponds to <a href="https://github.com/spinnaker/clouddriver/blob/version-6.5.2/clouddriver-aws/src/main/groovy/com/netflix/spinnaker/clouddriver/aws/provider/agent/ReservationReportCachingAgent.groovy#L117" rel="nofollow noreferrer"><code>getAmazonEC2(credentials, credentials.regions[0].name)</code> in version 6.5.2</a></p>
<p>it appears they do not tolerate having an empty <code>regions: []</code> like you do; thus:</p>
<pre><code>aws:
enabled: true
accounts:
- name: my-account
# ... snip ...
# vvv-- update this list
regions:
- name: us-east-1
</code></pre>
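<p>If memory serves, halyard can also set the region list for you instead of hand-editing the file — I'm going from recollection here, so verify the exact flag with <code>hal config provider aws account edit --help</code> first:</p>
<pre class="lang-sh prettyprint-override"><code># adds us-east-1 to the account's region list, then redeploy
hal config provider aws account edit my-account --regions us-east-1
hal deploy apply
</code></pre>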
| mdaniel |
<p>For example</p>
<pre><code>- host: "domain.com"
http:
paths:
- path: /?(.*) # want to rewrite this with /$1
backend:
serviceName: RELEASE-NAME-svcname1
servicePort: 80
- path: /test/?(.*) # want to skip rewrite
backend:
serviceName: RELEASE-NAME-svcname2
servicePort: 80
</code></pre>
<p>Any way to handle this in a single ingress ?</p>
| Wakeupcolumn | <blockquote>
<p>Any way to handle this in a single ingress?</p>
</blockquote>
<p>Not in a single Ingress resource, no, but it will work fine with a single ingress controller.</p>
<p>The reason you need to create two separate Ingress resources is so that you can apply the annotation to one but not the other; all Ingress resources across the whole cluster are unioned together, then grouped by virtual host, in the ultimate emitted nginx.conf</p>
<pre class="lang-yaml prettyprint-override"><code>...
metadata:
name: ingress-svc-1
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
...
- host: "domain.com"
http:
paths:
- path: /?(.*) # want to rewrite this with /$1
backend:
serviceName: RELEASE-NAME-svcname1
servicePort: 80
---
...
metadata:
name: ingress-svc-2
spec:
...
- host: "domain.com"
http:
paths:
- path: /test/?(.*)
backend:
serviceName: RELEASE-NAME-svcname2
servicePort: 80
</code></pre>
| mdaniel |
<p>Is it possible to configure k8s in a way that empty secrets are not possible?</p>
<p>I had a problem in a service where the secret somehow got overwritten with an empty one (zero bytes) and thereby my service malfunctioned. I see no advantage in having an empty secret at any time and would like to prevent empty secrets altogether.</p>
<p>Thanks for your help!</p>
| Simon Frey | <p>While it's not a simple answer to implement, as best I can tell what you are looking for is an <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">Admission Controller</a>, with a very popular one being <a href="https://open-policy-agent.github.io/gatekeeper/website/docs/" rel="nofollow noreferrer">OPA Gatekeeper</a></p>
<p>The theory is that kubernetes, as a platform, does not understand your business requirement to keep mistakes from overwriting Secrets. But OPA as a policy rules engine allows you to specify those things without requiring the upstream kubernetes to adopt those policies for everyone</p>
<p>An alternative is to <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">turn on audit logging</a> and track down the responsible party for re-education</p>
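<p>If you go the audit-logging route, a minimal policy sketch that records writes to Secrets (metadata only, so the secret values themselves are not logged) might look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: ""
    resources: ["secrets"]
</code></pre>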
<p>A further alternative is to correctly scope RBAC Roles to actually deny writes to Secrets except for those credentials that are known to be trusted</p>
| mdaniel |
<p>As per the kubectl documentation, kubectl apply can read from a file or from stdin. My use case is that service/deployment JSON strings are produced at runtime and I have to deploy them to clusters using nodejs. Of course, I could create files and just do kubectl apply -f thefilename, but I don't want to create files. Is there any approach where I can do something like the below:</p>
<pre><code>kubectl apply "{"apiVersion": "extensions/v1beta1","kind": "Ingress"...}"
</code></pre>
<p>For the record, I am using the node_ssh library.</p>
| CuteBoy | <pre><code>echo 'your manifest' | kubectl create -f -
</code></pre>
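<p>For multi-line manifests a heredoc avoids quoting headaches — the resource below is just a placeholder:</p>
<pre class="lang-sh prettyprint-override"><code>cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: placeholder-example
data:
  key: value
EOF
</code></pre>
<p>The same idea applies from Node.js: feed the JSON string to kubectl's stdin instead of writing a temporary file.</p>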
<p>Reference:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply</a></li>
</ul>
| zerkms |
<p>There is an official manifest for the deployment of the daemonset <a href="https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds.yaml</a></p>
<p>Line 49 defines the volume <code>varlibdockercontainers</code>.</p>
<p>I don't understand why the Fluent-bit needs to read data from the folder <code>/var/lib/docker/containers</code>.</p>
| Maksim | <blockquote>
<p>I don't understand why the Fluent-bit needs to read data from the folder <code>/var/lib/docker/containers</code>.</p>
</blockquote>
<p>Because that is where docker stores its <code>${container_id}-json.log</code> file when using the <a href="https://docs.docker.com/config/containers/logging/json-file/" rel="nofollow noreferrer"><code>json-file</code> logging driver</a>, which is (AFAIK) the default. There are more details <a href="https://stackoverflow.com/questions/44579227/where-is-the-docker-json-file-logging-driver-writing-files-to">in this related question</a></p>
<p>Therefore, in order for fluent to transmit logs, it does (effectively) <code>tail -f $the_log_filename | jq -r .log</code> and those are the container's logs. If you want to see the <em>actual</em> implementation, it seems to be in <a href="https://github.com/fluent/fluent-bit/blob/v1.8.10/plugins/in_docker/docker.h#L40" rel="nofollow noreferrer"><code>docker.h</code></a> and its <code>docker.c</code> peer</p>
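<p>To see the shape of what gets tailed, you can look at one of those files directly on a node (the path and id below are illustrative, and the file typically needs root to read):</p>
<pre class="lang-sh prettyprint-override"><code># each line is one JSON record wrapping a single line of the container's output
sudo head -1 /var/lib/docker/containers/<container-id>/<container-id>-json.log
# {"log":"hello from the app\n","stream":"stdout","time":"2021-11-24T10:00:00.000000000Z"}
</code></pre>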
| mdaniel |
<p>Is it possible to use pipe output as input for grep or git grep? The data I'm trying to pass to grep/git grep is the following:</p>
<pre><code> kubectl get namespace -o name -l app.kubernetes.io/instance!=applications | cut -f2 -d "/"
argocd
default
kube-node-lease
kube-public
kube-system
nsx-system
pks-system
</code></pre>
<p>I've tried to extend the command but this results in an error:</p>
<pre><code> kubectl get namespace -o name -l app.kubernetes.io/instance!=applications | cut -f2 -d "/" | xargs git grep -i
fatal: ambiguous argument 'default': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
</code></pre>
<p>Using just grep results in:</p>
<pre><code> kubectl get namespace -o name -l app.kubernetes.io/instance!=applications | cut -f2 -d "/" | xargs grep -i
grep: default: No such file or directory
grep: kube-node-lease: No such file or directory
grep: kube-public: No such file or directory
grep: kube-system: No such file or directory
grep: nsx-system: No such file or directory
grep: pks-system: No such file or directory
</code></pre>
<p>The issue I'm facing with plain grep in this particular case is that even if I solely use grep within my directory, it takes ages until it's done, whereas git grep finishes within seconds. Unless I'm doing something terribly wrong that would explain the slow grep results, getting git grep to work would be preferred.</p>
<p>I've found this other Stackoverflow <a href="https://stackoverflow.com/questions/9754236/git-grep-gives-unknown-revision-or-path-not-in-the-working-tree">Question</a> that somewhat explains what the issue is, but I don't know how to "process" the output into git grep properly.</p>
| MikeK | <p>The problem is that (as your screenshot shows) the result is multiple terms which I'm guessing you want to be <em>OR</em>-ed together, and not searching for the first term in the files identified by the last terms (which is what the current xargs command does)</p>
<p>Since OR in regex is via the <code>|</code> character, you can use <code>xargs echo</code> to fold the vertical list into a space delimited horizontal list then replace the spaces with <code>|</code> and be pretty close to what you want</p>
<pre class="lang-sh prettyprint-override"><code>printf 'alpha\nbeta\ncharlie\n' | xargs echo | tr ' ' '|' | xargs git grep -i
</code></pre>
<p>although due to the folding operation, that command is an xargs of one line, and thus would be conceptually easier to reason about using just normal <code>$()</code> interpolation:</p>
<pre class="lang-sh prettyprint-override"><code>git grep -i $(printf 'alpha\nbeta\ncharlie\n' | xargs echo | tr ' ' '|')
</code></pre>
<p>The less "whaaa" shell pipeline would be to use <code>kubectl get -o go-template=</code> to actually emit a pipe-delimited list and feed that right into xargs (or <code>$()</code>), bypassing the need to massage the output text first</p>
| mdaniel |