Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I have recently installed Airflow 2.1.3 using the apache-airflow Helm repo on an Azure AKS cluster. After the installation, however, the DAG files are not displayed in the UI. The reason could be that the scheduler keeps getting terminated. Below is the error. Can anyone please help me with this issue?</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>[2021-10-28 05:16:49,322] {manager.py:254} INFO - Launched DagFileProcessorManager with pid: 1268
[2021-10-28 05:16:49,339] {settings.py:51} INFO - Configured default timezone Timezone('UTC')
[2021-10-28 05:17:39,997] {manager.py:414} ERROR - DagFileProcessorManager (PID=1268) last sent a heartbeat 50.68 seconds ago! Restarting it
[2021-10-28 05:17:39,998] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 1268
[2021-10-28 05:17:40,251] {process_utils.py:66} INFO - Process psutil.Process(pid=1268, status='terminated', exitcode=0, started='05:16:48') (1268) terminated with exit code 0
[2021-10-28 05:17:40,256] {manager.py:254} INFO - Launched DagFileProcessorManager with pid: 1313
[2021-10-28 05:17:40,274] {settings.py:51} INFO - Configured default timezone Timezone('UTC')</code></pre>
</div>
</div>
</p>
| Sumith08 | <p>I have previously been able to fix this by setting a higher value in <strong>airflow.cfg</strong> for <code>scheduler_health_check_threshold</code></p>
<p>For Ex:<br />
<code>scheduler_health_check_threshold = 240</code></p>
<p>Also, ensure that <code>orphaned_tasks_check_interval</code> is greater than the value that you set for <code>scheduler_health_check_threshold</code></p>
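<p>As a minimal sketch (assuming the Airflow 2.x layout, where both settings live under the <code>[scheduler]</code> section of <strong>airflow.cfg</strong>; the 300 below is only an illustrative value that satisfies the "greater than" condition):</p>
<pre><code>[scheduler]
scheduler_health_check_threshold = 240
orphaned_tasks_check_interval = 300
</code></pre>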
| Vinay Kulkarni |
<p>I'm trying to use internal service DNS for service-to-service HTTP communication.</p>
<p>If I curl the pod from another deployment's pod it works, but I am unable to use the same DNS name from my Golang net/http service.</p>
<pre><code>2023/01/27 15:48:37 oauth2.go:90: oauth2 url http://hydra-admin.microservices.svc.cluster.local:4445/oauth2/auth/requests/login/accept
2023/01/27 15:48:37 oauth2.go:101: Unable to make http request Put "http://localhost:4445/admin/oauth2/auth/requests/login/accept?login_challenge=b569006c8b834a298cf1cd72e2424953": dial tcp [::1]:4445: connect: connection refused
</code></pre>
<p>hydra-admin.microservices.svc.cluster.local is resolved to localhost when the API call is made</p>
<p>but curl works as you see below</p>
<pre><code>/ # curl -X PUT http://hydra-admin:4445/admin/oauth2/auth/requests/login/accept?login_challenge=6f51146e49c54b739de8a37b25a72349
{"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed. Unable to decode body because: EOF"}
</code></pre>
<p>What am I missing here?</p>
| Hariharan Sivakumar | <p>Per my comment, how you build your <code>go</code> executable has an effect on how it behaves within a <code>k8s</code> environment. Are you using a <code>scratch</code> image or a <code>CGO_ENABLED=1</code> image?</p>
<p>From the <code>net</code> package <a href="https://pkg.go.dev/net#hdr-Name_Resolution" rel="nofollow noreferrer">docs</a> there's a caveat on DNS behavior:</p>
<blockquote>
<p>By default the pure Go resolver is used, because a blocked DNS request
consumes only a goroutine, while a blocked C call consumes an
operating system thread. When cgo is available, the cgo-based resolver
is used instead under a variety of conditions:</p>
</blockquote>
<blockquote>
<p>... when /etc/resolv.conf or /etc/nsswitch.conf specify the use of features
that the Go resolver does not implement, <em><strong>and when the name being
looked up ends in .local</strong></em> or is an mDNS name.</p>
</blockquote>
<p>So I would suggest - to maximize your success rate for both external & internal DNS requests - building your <code>go</code> executable for <code>k8s</code> like so:</p>
<pre><code>CGO_ENABLED=1 go build -tags netgo
</code></pre>
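<p>A minimal two-stage <code>Dockerfile</code> sketch of that approach (the image tags and binary name are placeholders, not taken from your project); note the runtime stage uses a small distro image rather than <code>scratch</code>, since a <code>CGO_ENABLED=1</code> binary is normally dynamically linked against libc:</p>
<pre><code># build stage
FROM golang:1.19 AS build
WORKDIR /src
COPY . .
# build with cgo enabled and the netgo tag, as suggested above
RUN CGO_ENABLED=1 go build -tags netgo -o /out/app .

# runtime stage: keep libc available instead of using scratch
FROM debian:bullseye-slim
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
</code></pre>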
| colm.anseo |
<p>When I try to deploy an application in Kubernetes using images from my private Docker registry on the same server (master node), I receive the following error:</p>
<blockquote>
<p>Failed to pull image
"0.0.0.0:5000/continuous-delivery-tutorial:5ec98331a69ec5e6f818583d4506d361ff4de89b-2020-02-12-14-37-03":
rpc error: code = Unknown desc = Error response from daemon: Get
<a href="https://0.0.0.0:5000/v2/" rel="nofollow noreferrer">https://0.0.0.0:5000/v2/</a>: http: server gave HTTP response to HTTPS
client</p>
</blockquote>
<p>When I run <code>docker system info</code> I can see my registry listed as an insecure registry:</p>
<p><a href="https://i.stack.imgur.com/y5bqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y5bqe.png" alt="enter image description here"></a></p>
<p>I run my registry by following command:</p>
<pre><code>docker run -d -p 5000:5000 --restart=always --name registry -v $PWD/docker_reg_certs:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key registry:2
</code></pre>
<p>Thank you for any advice</p>
| Mariyo | <p>you need to add your hostname to the list of allowed insecure registries in <code>/etc/docker/daemon.json</code>, for example:</p>
<pre class="lang-json prettyprint-override"><code>{
"insecure-registries" : ["your-computer-hostname:5000"]
}
</code></pre>
<p>(This file is supposed to contain one JSON object, so if it's not empty, add the <code>insecure-registries</code> property to the existing object instead of creating a new one. Also remember to restart your Docker daemon afterwards.)<br/>
<br/>
Also, you should not use <code>0.0.0.0</code> as it is not a real address. Use your hostname instead when specifying the image, like <code>your-computer-hostname:5000/continuous-delivery-tutorial:5ec98331a69ec5e6f818583d4506d361ff4de89b-2020-02-12-14-37-03</code></p>
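<p>For example, to restart the daemon after editing the file on a systemd-based host:</p>
<pre><code>sudo systemctl restart docker
</code></pre>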
| morgwai |
<p>I'm struggling to hunt down the cause of this error, though it seems like a string/bool/Dict or some value is not where ConfigMap wants it. I've verified that the JSON I'm passing is valid, so that's where I started, and everything was legal.</p>
<pre><code>- name: Create appsettings ConfigMap
k8s:
state: "{{ service_state }}"
kubeconfig: "{{ tempdir.path }}/.kubeconfig"
resource_definition:
apiVersion: v1
kind: ConfigMap
metadata:
name: appsettingconf
namespace: "{{ cust_reports_namespace }}"
data:
"app-settings.json": "{{ lookup('template', 'appsettings.json.j2') }}"
</code></pre>
<p>ERROR:
1.ConfigMap.Data: ReadString: expects \\\\\" or n, but found {, error found in #10 byte of ...|gs.json\\\\\":{\\\\\"applicati|..., bigger context ...|{\\\\\"apiVersion\\\\\":\\\\\"v1\\\\\",\\\\\"data\\\\\":{\\\\\"app-settings.json\\\\\":{\\\\\"applicationSettings\\\\\":{\\\\\"reportingApiUrl\\\\\":\\\\\"http://a|...\",\"reason\":\"Invalid\",\"details\":{\"causes\":[{\"reason\":\"FieldValueInvalid\",\"message\":\"Invalid value: \\\\\"{\\\\\\\</p>
| inbinder | <p>Solved: piping the template lookup through <code>tojson | b64encode</code> fixed it. Always something simple....</p>
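<p>A sketch of how that filter chain could be applied to the task above (the use of <code>binaryData</code> is my assumption, since that is where Kubernetes expects base64-encoded values; the rest is unchanged from the question):</p>
<pre><code>    resource_definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: appsettingconf
        namespace: "{{ cust_reports_namespace }}"
      binaryData:
        "app-settings.json": "{{ lookup('template', 'appsettings.json.j2') | tojson | b64encode }}"
</code></pre>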
| inbinder |
<p>I have a Jenkins pipeline using the kubernetes plugin to run a <a href="https://github.com/docker-library/docker/blob/65fab2cd767c10f22ee66afa919eda80dbdc8872/18.09/dind/Dockerfile" rel="nofollow noreferrer">docker in docker</a> container and build images:</p>
<pre><code>pipeline {
agent {
kubernetes {
label 'kind'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
name: dind
...
</code></pre>
<p>I also have a pool of persistent volumes in the jenkins namespace each labelled <code>app=dind</code>. I want one of these volumes to be picked for each pipeline run and used as <code>/var/lib/docker</code> in my dind container in order to cache any image pulls on each run. I want to have a pool and caches, not just a single one, as I want multiple pipeline runs to be able to happen at the same time. How can I configure this?</p>
<p>This can be achieved natively in kubernetes by creating a persistent volume claim as follows:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dind
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector:
matchLabels:
app: dind
</code></pre>
<p>and mounting it into the Pod, but I'm not sure how to configure the pipeline to create and cleanup such a persistent volume claim.</p>
| dippynark | <p>First of all, I think the way you expect this to work natively in Kubernetes wouldn't actually work. You either have to re-use the same PVC, which will make build pods access the same PV concurrently, or, if you want to have a PV per build, your PVs will be stuck in <code>Released</code> status and not automatically available for new PVCs.</p>
<p>There is more details and discussion available here <a href="https://issues.jenkins.io/browse/JENKINS-42422" rel="nofollow noreferrer">https://issues.jenkins.io/browse/JENKINS-42422</a>.</p>
<p>It so happens that I wrote two simple controllers - automatic PV releaser (that would find and make <code>Released</code> PVs <code>Available</code> again for new PVCs) and dynamic PVC provisioner (for Jenkins Kubernetes plugin specifically - so you can define a PVC as annotation on a Pod). Check it out here <a href="https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers" rel="nofollow noreferrer">https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers</a>. There is a full <code>Jenkinsfile</code> example here <a href="https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers/tree/main/examples/jenkins-kubernetes-plugin-with-build-cache" rel="nofollow noreferrer">https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers/tree/main/examples/jenkins-kubernetes-plugin-with-build-cache</a>.</p>
| Dee |
<p>I am trying to write a .net core application to run in a kubernetes pod. This application needs to know if the cluster has been unable to schedule any pods. </p>
<p>I have tried getting deployment data from </p>
<pre><code>kubectl get --raw /apis/apps/v1/namespaces/default/deployments
</code></pre>
<p>I can see the <code>unavailableReplicas</code> number and the <code>MinimumReplicasUnavailable</code> message. </p>
<p>Are these valid metrics to watch for the cluster status? </p>
<p>Is there a way to query the cluster as a whole instead of by deployment? </p>
| leemicw | <p>If you are looking for the images in each node in the cluster you can try</p>
<pre><code>kubectl get nodes -o json
</code></pre>
<p>which will return a JSON object, or use <strong>--field-selector</strong> as shown below to list the pods stuck in Pending:</p>
<pre><code>kubectl get pods --all-namespaces --field-selector=status.phase==Pending
</code></pre>
<p>and using the API:</p>
<pre><code>kubectl get --raw /api/v1/pods?fieldSelector=status.phase==Pending
</code></pre>
| Bimal |
<p>I have a Kubernetes cluster with 3 nodes:</p>
<pre><code>[root@ops001 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
azshara-k8s01 Ready <none> 143d v1.15.2
azshara-k8s02 Ready <none> 143d v1.15.2
azshara-k8s03 Ready <none> 143d v1.15.2
</code></pre>
<p>After I deployed some pods, I found that only one node, <code>azshara-k8s03</code>, could resolve DNS; the other two nodes could not. This is my azshara-k8s03 host node /etc/resolv.conf:</p>
<pre><code>options timeout:2 attempts:3 rotate single-request-reopen
; generated by /usr/sbin/dhclient-script
nameserver 100.100.2.136
nameserver 100.100.2.138
</code></pre>
<p>This is the /etc/resolv.conf on the other 2 nodes:</p>
<pre><code>nameserver 114.114.114.114
</code></pre>
<p>Should I keep them the same? What should I do to make DNS work fine on all 3 nodes?</p>
| Dolphin | <p>Did you check whether <code>114.114.114.114</code> is actually reachable from your nodes? If not, change it to something that actually is ;-]</p>
<p>Also check which <code>resolv.conf</code> your kubelets actually use: it is often something other than <code>/etc/resolv.conf</code>. Run <code>ps ax |grep kubelet</code>, check the value of the <code>--resolv-conf</code> flag and see if the DNS servers in that file work correctly.</p>
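<p>For example (the path in the second command is whatever <code>--resolv-conf</code> points to on your node, a placeholder here):</p>
<pre><code>ps ax | grep kubelet     # look for a --resolv-conf=<path> argument
cat <path-from-above>    # verify the nameservers listed there are reachable
</code></pre>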
<p><strong>update:</strong></p>
<p>What names are failing to resolve on the 2 problematic nodes? Are these public names or internal only? If they are internal only, then 114.114.114.114 will not know about them. <code>100.100.2.136</code> and <code>100.100.2.138</code> are not reachable for me: are they your internal DNS servers? If so, try to change <code>/etc/resolv.conf</code> on the 2 nodes that don't work to be the same as on the one that works.</p>
| morgwai |
<p>I'm using the newest springdoc library to create one common endpoint with all Swagger configurations in one place. There are a bunch of microservices deployed in Kubernetes, so having the documentation in one place would be convenient. The easiest way to do that is by using something like this (<a href="https://springdoc.org/faq.html#how-can-i-define-groups-using-applicationyml" rel="nofollow noreferrer">https://springdoc.org/faq.html#how-can-i-define-groups-using-applicationyml</a>):</p>
<pre><code>springdoc:
api-docs:
enabled: true
swagger-ui:
disable-swagger-default-url: true
urls:
- name: one-service
url: 'http://one.server/v3/api-docs'
- name: second-service
url: 'http://second.server/v3/api-docs'
</code></pre>
<p>and it works great, I can choose from the list in the upper right corner.
The problem is that it doesn't work through a proxy. According to the documentation I need to set some headers (<a href="https://springdoc.org/faq.html#how-can-i-deploy-springdoc-openapi-ui-behind-a-reverse-proxy" rel="nofollow noreferrer">https://springdoc.org/faq.html#how-can-i-deploy-springdoc-openapi-ui-behind-a-reverse-proxy</a>) and that works for a single service called directly. But when I try the grouping described above, the headers are not passed to one-service or second-service, and they generate documentation pointing to localhost.</p>
<p>I suspect I need to use grouping (<a href="https://springdoc.org/faq.html#how-can-i-define-multiple-openapi-definitions-in-one-spring-boot-project" rel="nofollow noreferrer">https://springdoc.org/faq.html#how-can-i-define-multiple-openapi-definitions-in-one-spring-boot-project</a>) but I'm missing a good example of how to achieve a similar effect (grouping documentation from different microservices). The examples show only one external address, or grouping of local endpoints. I hope that, using this approach, I'll be able to pass the headers.</p>
| Marx | <p>The properties <code>springdoc.swagger-ui.urls.*</code> are suitable for configuring external <code>/v3/api-docs</code> URLs, for example if you want to aggregate all the endpoints of other services inside one single application.</p>
<p>It will not inherit the proxy configuration, but it will use the server URLs defined in <a href="http://one.server/v3/api-docs" rel="nofollow noreferrer">http://one.server/v3/api-docs</a> and <a href="http://second.server/v3/api-docs" rel="nofollow noreferrer">http://second.server/v3/api-docs</a>.</p>
<p>If you want to have a proxy in front of your services, it's up to each service to expose the correct server URLs.</p>
<p>If you want it to work out of the box, you can use a solution that handles proxying and routing, like <a href="https://piotrminkowski.com/2020/02/20/microservices-api-documentation-with-springdoc-openapi/" rel="nofollow noreferrer">spring-cloud-gateway</a></p>
| brianbro |
<p>When trying to deploy the ClickHouse operator on Kubernetes, access_management is commented out by default in the users.xml file. Is there a way to uncomment it when installing the Kubernetes operator?</p>
<p>Clickhouse Operator deployment:</p>
<pre><code>kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/0.18.3/deploy/operator/clickhouse-operator-install-bundle.yaml
</code></pre>
<p>I have tried to do that through "ClickHouseInstallation" but that didn't work.</p>
<p>Furthermore, the ClickHouse operator source code doesn't contain a parameter for access_management.</p>
| Oktay Alizada | <p>Look at <code>kubectl explain chi.spec.configuration.files</code> and <code>kubectl explain chi.spec.configuration.users</code>.</p>
<p>Try:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
name: access-management-example
spec:
configuration:
files:
users.d/access_management.xml: |
<clickhouse><users>
<default><access_management>1</access_management></default>
</users></clickhouse>
</code></pre>
<p>Note that you will have to take care of replicating these RBAC objects yourself when changing the cluster layout (scale-up).</p>
| Slach |
<p>Whenever I switch to Windows Containers in Docker for Windows, the Kubernetes option is missing after restart.</p>
<p><a href="https://i.stack.imgur.com/3pYM1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3pYM1.png" alt="enter image description here"></a></p>
<p>I'm running windows 10 Enterprise.</p>
<p>Is it possible to create a Windows container (for a .NET Framework app) and deploy it to Kubernetes? Or are only Linux containers supported on Kubernetes (meaning only .NET Standard/Core would work)?</p>
| ShaneKm | <p>You can run Windows nodes and containers on Kubernetes but the Kubernetes master (control plane) has to be Linux. </p>
<p><a href="https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/</a></p>
<p>Similarly, you need to be running Windows Server 2019 for Kubernetes GA on Windows - Windows 10/Server 2016 is possible but it is considered beta</p>
| KeepCalmAndCarryOn |
<p>I am currently working on a monitoring service that will monitor Kubernetes' deployments and their pods. I want to notify users when a deployment is not running the expected amount of replicas and also when pods' containers restart unexpectedly. This may not be the right things to monitor and I would greatly appreciate some feedback on what I should be monitoring. </p>
<p>Anyways, the main question is the differences between all of the <em>Statuses</em> of pods. And when I say <em>Statuses</em> I mean the Status column when running <code>kubectl get pods</code>. The statuses in question are:</p>
<pre><code>- ContainerCreating
- ImagePullBackOff
- Pending
- CrashLoopBackOff
- Error
- Running
</code></pre>
<p>What causes pod/containers to go into these states? <br/>
For the first four Statuses, are these states recoverable without user interaction? <br/>
What is the threshold for a <code>CrashLoopBackOff</code>? <br/>
Is <code>Running</code> the only status that has a <code>Ready Condition</code> of True? <br/> <br/>
Any feedback would be greatly appreciated! <br/> <br/>
Also, would it be bad practice to use <code>kubectl</code> in an automated script for monitoring purposes? For example, every minute log the results of <code>kubectl get pods</code> to Elasticsearch?</p>
| Sam | <p>You can see the pod lifecycle details in k8s <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">documentation</a>.
The recommended way of monitoring a Kubernetes cluster and its applications is with <a href="https://prometheus.io/" rel="nofollow noreferrer">prometheus</a>.</p>
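<p>For example, assuming <code>kube-state-metrics</code> is being scraped by your Prometheus, the two conditions you mention (a deployment running fewer replicas than expected, and containers restarting) can be expressed roughly as:</p>
<pre><code># deployments with fewer available replicas than desired
kube_deployment_spec_replicas != kube_deployment_status_replicas_available

# containers that restarted within the last 15 minutes
increase(kube_pod_container_status_restarts_total[15m]) > 0
</code></pre>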
| Bimal |
<p>I'm trying to set up a bare metal Kubernetes cluster. I have the basic cluster set up, no problem, but I can't seem to get MetalLB working correctly to expose an external IP to a service.</p>
<p>My end goal is to be able to deploy an application with 2+ replicas and have a single IP/Port that I can reference in order to hit any of the running instances.</p>
<p>So far, what I've done (to test this out,) is:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
kubectl apply -f metallb-layer-2.yml
kubectl run nginx --image=nginx --port=80
kubectl expose deployment nginx --type=LoadBalancer --name=nginx-service
</code></pre>
<p>metallb-layer-2.yml:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: k8s-master-ip-space
protocol: layer2
addresses:
- 192.168.0.240-192.168.0.250
</code></pre>
<p>and then when I run <code>kubectl get svc</code>, I get:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service LoadBalancer 10.101.122.140 <pending> 80:30930/TCP 9m29s
</code></pre>
<p>No matter what I do, I can't get the service to have an external IP. Does anyone have an idea?</p>
<p><strong>EDIT:</strong> After finding another post about using NodePort, I did the following:</p>
<pre><code>iptables -A FORWARD -j ACCEPT
</code></pre>
<p>found <a href="https://stackoverflow.com/questions/46667659/kubernetes-cannot-access-nodeport-from-other-machines">here</a>.</p>
<p>Now, unfortunately, when I try to curl the nginx endpoint, I get:</p>
<pre><code>> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service LoadBalancer 10.101.122.140 192.168.0.240 80:30930/TCP 13h
> curl 192.168.0.240:30930
curl: (7) Failed to connect to 192.168.0.240 port 30930: No route to host
> curl 192.168.0.240:80
curl: (7) Failed to connect to 192.168.0.240 port 80: No route to host
</code></pre>
<p>I'm not sure what exactly this means now. </p>
<p><strong>EDIT 2:</strong>
When I do a TCP Dump on the worker where the pod is running, I get:</p>
<pre><code>15:51:44.705699 IP 192.168.0.223.54602 > 192.168.0.240.http: Flags [S], seq 1839813500, win 29200, options [mss 1460,sackOK,TS val 375711117 ecr 0,nop,wscale 7], length 0
15:51:44.709940 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
15:51:45.760613 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
15:51:45.775511 IP 192.168.0.223.54602 > 192.168.0.240.http: Flags [S], seq 1839813500, win 29200, options [mss 1460,sackOK,TS val 375712189 ecr 0,nop,wscale 7], length 0
15:51:46.800622 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
15:51:47.843262 IP 192.168.0.223.54602 > 192.168.0.240.http: Flags [S], seq 1839813500, win 29200, options [mss 1460,sackOK,TS val 375714257 ecr 0,nop,wscale 7], length 0
15:51:47.843482 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
15:51:48.880572 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
15:51:49.774953 ARP, Request who-has 192.168.0.240 tell 192.168.0.223, length 46
15:51:49.920602 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28
</code></pre>
| Pete.Mertz | <p>After going through this with the MetalLB maintainer, he was able to figure out that the issue was Debian Buster's new nftables firewall. To disable it and switch back to the legacy backends:</p>
<pre><code># update-alternatives --set iptables /usr/sbin/iptables-legacy
# update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
# update-alternatives --set arptables /usr/sbin/arptables-legacy
# update-alternatives --set ebtables /usr/sbin/ebtables-legacy
</code></pre>
<p>and restart the nodes in the cluster!</p>
| Pete.Mertz |
<p>I'm new to Kubernetes and I am studying the performance of LoadBalancer and NodePort services. In my research I can't find anything about which is better/faster. Is there any way to know which is faster or gives the best performance? </p>
| R.O.O.T | <p>Generally, there should be no visible difference in performance between node-port and load-balancer type services: all the load balancers do after all is relaying traffic, so if they are located close enough to the cluster itself (and I'd bet all providers such as eks, gke, aks do so) then you can expect about a 1ms increased latency max. And if the load-balancers are setup on the cluster itself or using the BGP router that routes traffic to a given cluster, then there will be no difference in latency at all. </p>
<p>The main advantage of using load-balancer type over node-port is that it gives you a single stable VIP for your service, while in case of node-port the set of IPs on which your service is available will be changing as nodes in your cluster go down and up or are added or removed.</p>
| morgwai |
<p>I was able to set up the Kubernetes cluster on CentOS 7 with one master and two worker nodes. However, when I try to deploy a pod with nginx, the pod stays in ContainerCreating forever and doesn't seem to get out of it.</p>
<p>For the pod network I am using Calico.
Can you please help me resolve this issue? For some reason I don't feel satisfied moving forward without resolving it; I have been checking forums etc. for the last two days, and this is the last resort, so I am reaching out to you.</p>
<pre><code>[root@kube-master ~]# kubectl get pods --all-namespaces
[get pods result][1]
</code></pre>
<p>However when I run describe pods I see the below error for the nginx container under events section.</p>
<pre><code>Warning FailedCreatePodSandBox 41s (x8 over 11m) kubelet,
kube-worker1 (combined from similar events): Failed to create pod
sandbox: rpc error: code = Unknown desc = failed to set up sandbox
container
"ac77a42270009cba0c508e2fd82a84d6caef287bdb117d288d5193960b52abcb"
network for pod "nginx-6db489d4b7-2r4d2": networkPlugin cni failed to
set up pod "nginx-6db489d4b7-2r4d2_default" network: unable to connect
to Cilium daemon: failed to create cilium agent client after 30.000000
seconds timeout: Get http:///var/run/cilium/cilium.sock/v1/config:
dial unix /var/run/cilium/cilium.sock: connect: no such file or
directory
</code></pre>
<p>Hope you can help here.</p>
<p><strong>Edit 1:</strong></p>
<p>The ip address of the master VM is <code>192.168.40.133</code></p>
<p>Used the below command to initialize the kubeadm:
<code>kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address 192.168.40.133</code></p>
<p>Used the below command to install the pod network:
<code>kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml</code></p>
<p>The <code>kubeadm init</code> above gave me the join command that I used to join the workers into the cluster.</p>
<p>All the VMs are connected to host and bridged network adapters.</p>
| Dileep Kumar Manduva | <p>your pod subnet (specified by <code>--pod-network-cidr</code>) clashes with the network your VMs are located in: these 2 have to be distinct. Use something else for the pod subnet, for example <code>10.244.0.0/16</code> and then edit calico.yaml before applying it as described in <a href="https://docs.projectcalico.org/getting-started/kubernetes/installation/calico#installing-with-the-kubernetes-api-datastore50-nodes-or-less" rel="nofollow noreferrer">the official docs</a>:</p>
<pre><code>POD_CIDR="10.244.0.0/16"
kubeadm init --pod-network-cidr=${POD_CIDR} --apiserver-advertise-address 192.168.40.133
curl https://docs.projectcalico.org/manifests/calico.yaml -O
sed -i -e "s?192.168.0.0/16?${POD_CIDR}?g" calico.yaml
kubectl apply -f calico.yaml
</code></pre>
<p>hope this helps :)</p>
<p>note: you don't really need to specify <code>--apiserver-advertise-address</code> flag: <code>kubeadm</code> will detect correctly the main IP of the machine most of the time.</p>
| morgwai |
<p>I created an ingress to expose my internal service.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: app-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: example.com
http:
paths:
- path: /app
backend:
serviceName: my-app
servicePort: 80
</code></pre>
<p>But when I try to get this ingress, it shows that it has no IP address.</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
app-ingress example.com 80 10h
</code></pre>
<p>The service is shown below.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-app
spec:
selector:
app: my-app
ports:
- name: my-app
nodePort: 32000
port: 3000
targetPort: 3000
type: NodePort
</code></pre>
| ccd | <p>Note: I'm guessing because of the <a href="https://stackoverflow.com/q/59999345/1220560">other question you asked</a> that you are trying to create an ingress on a manually created cluster with <code>kubeadm</code>.</p>
<p>As described in <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites" rel="nofollow noreferrer">the docs</a>, in order for ingress to work, you need to install <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">ingress controller</a> first. An ingress object itself is merely a configuration slice for the installed ingress controller.</p>
<p><a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">Nginx based controller</a> is one of the most popular choice. Similarly to services, in order to get a single failover-enabled VIP for your ingress, you need to use <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>. Otherwise you can deploy ingress-nginx over a node port: see details <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">here</a></p>
<p>Finally, <code>servicePort</code> in your ingress object should be 3000, same as <code>port</code> of your service.</p>
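<p>For example, the ingress from the question with only the port corrected:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: my-app
          servicePort: 3000
</code></pre>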
| morgwai |
<p>I am using go-client to access k8s resources in my environment. There are APIs to get/list pods, namespaces, etc.</p>
<p>How do I access the pod that I am currently running on?</p>
| Kaushal Agrawal | <p>You can <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Expose Pod Information to Containers Through Environment Variables</a> using <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables" rel="nofollow noreferrer">pod fields</a>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dapi-envars-fieldref
spec:
containers:
- name: test-container
...
...
env:
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MY_POD_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
restartPolicy: Never
</code></pre>
<p>then simply look up these env vars in your Go code:</p>
<pre><code>log.Printf("MY_POD_NAME: %q", os.Getenv("MY_POD_NAME"))
</code></pre>
| colm.anseo |
<p>I am learning and experimenting with Prometheus metrics and Grafana dashboards. The information is coming from a Kubernetes cluster.</p>
<p>I am struggling with figuring out how to collect together information about pods, that is coming from multiple metrics. I believe the metrics involved are all related in some way. They are all in the <code>kube_pod...</code> "family".</p>
<h2>Background</h2>
<p>I've used a technique like the following that works for a simple metric to metric case:</p>
<pre><code>(metric_A) + on(<common_label>) group_left(<metric_B_label>, ...) (0 * metric_B)
</code></pre>
<p>This allows me to associate a label from the right side that is absent from the left side via a common label. There is no arithmetic involved, so the add and multiply really do nothing. The <code>on</code> (or <code>ignoring</code>) operator apparently requires a binary operator between the left and right sides. This seems really clunky, but it does work and I know of no other way to achieve this.</p>
<p>Here's a concrete example:</p>
<pre><code>(kube_pod_status_phase != 0) + on (pod) group_left (node) (0 * kube_pod_info)
</code></pre>
<p>The <code>kube_pod_status_phase</code> provides the <code>phase</code> (and, of course, <code>pod</code>) of each pod (Running, Failed, etc.), but does not have the node information. The <code>kube_pod_info</code> has the <code>node</code> label and a matching <code>pod</code> label. Using the query above provides a collection of pods, their current phase and which node they're associated with.</p>
<h2>Problem</h2>
<p>My current task is to collect the following information:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Node</th>
<th>Pod</th>
<th>Status</th>
<th>Created</th>
<th>Age</th>
</tr>
</thead>
<tbody>
<tr>
<td>node_1</td>
<td>pod_A_1</td>
<td>Running</td>
<td>mm/dd/yyyy hh:mm:ss</td>
<td>{x}d{y}h</td>
</tr>
<tr>
<td>node_1</td>
<td>pod_B_1</td>
<td>Running</td>
<td>mm/dd/yyyy hh:mm:ss</td>
<td>{x}d{y}h</td>
</tr>
<tr>
<td>node_2</td>
<td>pod_C_1</td>
<td>Pending</td>
<td>mm/dd/yyyy hh:mm:ss</td>
<td>{x}d{y}h</td>
</tr>
<tr>
<td>node_3</td>
<td>pod_A_2</td>
<td>Running</td>
<td>mm/dd/yyyy hh:mm:ss</td>
<td>{x}d{y}h</td>
</tr>
<tr>
<td>node_3</td>
<td>pod_B_2</td>
<td>Failed</td>
<td>mm/dd/yyyy hh:mm:ss</td>
<td>{x}d{y}h</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>My plan is to get the status (phase) from the <code>kube_pod_status_phase</code> metric, the created date/time and the age from the <code>kube_pod_start_time</code> metric and include the corresponding node from the <code>kube_pod_info</code> metric. The age is calculated as <code>time() - kube_pod_start_time</code>.</p>
<p>Another detail that complicates this is that the <code>phase</code> and <code>node</code> are <em>labels</em> in their respective metrics, while the created date/time and age are the "result" of running the queries (i.e. they are <em>not</em> labels). This has been causing me problems in several attempts.</p>
<p>I tried seeing if I could somehow chain together queries, but in addition to being incredibly ugly and complicated, I couldn't get the result values (created date and age) to be included in the results that I managed to get to work.</p>
<p>If anyone knows how this could be done, I would very much appreciate knowing about it.</p>
| Joseph Gagnon | <p>I was able to find a method that does what I need, thanks to the comments from <a href="https://stackoverflow.com/users/21363224/markalex">https://stackoverflow.com/users/21363224/markalex</a> that got me on the right track.</p>
<p>I ended up creating 3 queries:</p>
<pre><code>(kube_pod_status_phase{namespace=~"runner.*"} != 0) + on (pod) group_left (node) (0 * kube_pod_info)
time() - kube_pod_start_time{namespace=~"runner.*"}
kube_pod_start_time{namespace=~"runner.*"}
</code></pre>
<p>Then joining them together with a "Join by field" transform on the <code>pod</code> label. Finally, used an "Organize fields" transform to hide the columns I don't care about as well as some re-ordering.</p>
| Joseph Gagnon |
<p>As I'm playing around with K8s deployment and Gitlab CI my deployment got stuck with the state <code>ContainerStarting</code>.</p>
<p>To reset that, I deleted the K8s namespace using <code>kubectl delete namespaces my-namespace</code>.</p>
<p>Now my Gitlab runner shows me </p>
<pre><code>$ ensure_namespace
Checking namespace [MASKED]-docker-3
error: the server doesn't have a resource type "namespace"
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>I think that has something to do with RBAC and most likely Gitlab created that namespace with some arguments and permissions (but I don't know exactly when and how that happens), which are missing now because of my deletion.</p>
<p>Anybody got an idea on how to fix this issue?</p>
| Hans Höchtl | <p>In my case I had to delete the namespace in the GitLab database, so GitLab would re-add the service account and namespace:</p>
<p>On the gitlab machine or task runner enter the PostgreSQL console:</p>
<pre><code>gitlab-rails dbconsole -p
</code></pre>
<p>Then select the database:</p>
<pre><code>\c gitlabhq_production
</code></pre>
<p>Next step is to find the namespace that was deleted:</p>
<pre><code>SELECT id, namespace FROM clusters_kubernetes_namespaces;
</code></pre>
<p>Take the id of the namespace to delete it:</p>
<pre><code>DELETE FROM clusters_kubernetes_namespaces WHERE id IN (6,7);
</code></pre>
<p>Now you can restart the pipeline and the namespace and service account will be readded.</p>
| scasei |
<p>I have a .NET Core Web API hosted in Kubernetes as a Pod. It is also exposed as a Service.
I have created a Dev SSL certificate and it produced an aspnetapp.pfx file.</p>
<p>Here is a snippet of my Docker file:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 443
ENV ASPNETCORE_URLS=https://+:443
ENV ASPNETCORE_HTTPS_PORT=443
ENV ASPNETCORE_Kestrel__Certificates__Default__Password={password}
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=APIGateway/Certificates/aspnetapp.pfx
</code></pre>
<p>When I run the app in Kubernetes I receive an error in the container logs, and the container is failing to start:</p>
<pre><code>error:2006D002:BIO routines:BIO_new_file:system lib
</code></pre>
<p>I know its able to find the SSL certificate but, its throwing the above error.</p>
<p>Please help!:)</p>
| Sach K | <p>I just ran into this same problem and even though things were working fine previously, <em>something</em> was updated (possibly .NET 6.0.402) which caused a problem.</p>
<p>What I noticed is that my exported dev cert pfx in the Docker container had its permissions set to:</p>
<pre><code>-rw------- 1 root root 2383 Oct 18 14:40 cert.pfx
</code></pre>
<p>In my Dockerfile, I export the dotnet dev cert and run a chmod to add read permissions for everyone:</p>
<pre><code>RUN dotnet dev-certs https --clean && dotnet dev-certs https --export-path /app/publish/cert.pfx -p {password}
RUN chmod 644 /app/publish/cert.pfx
</code></pre>
<p>This resulted in permissions which were the same as my appsettings files:</p>
<pre><code>-rw-r--r-- 1 root root 535 Oct 18 14:11 appsettings.Development.json
-rw-r--r-- 1 root root 331 Sep 27 18:13 appsettings.json
-rw-r--r-- 1 root root 2383 Oct 18 14:40 cert.pfx
</code></pre>
<p>That fixed the error for me.</p>
| BearsEars |
<p>Every deepcopy file generated by <code>make</code> with kubebuilder has a <code>// +build !ignore_autogenerated</code> build tag directive at the top.</p>
<pre><code>//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
</code></pre>
<p>Why is this specific build tag directive added to these generated files? What's its purpose?</p>
| Jonathan Innis | <p>It's used by <code>controller-gen</code> to identify files it generated, it will only overwrite those.</p>
<p>E.g. edit a generated <code>zz_generated.deepcopy.go</code> and run <code>make generate</code> => the file is overwritten.</p>
<p>Now edit the file again, also remove the two lines with the build constraints (the <code>go:build</code> line is for Go >= 1.17, the <code>+build</code> line for older versions IIRC) and run <code>make generate</code> again => your changes to the file have not been overwritten this time.</p>
| Jürgen Kreileder |
<p>First off, I am totally new to deploying CICD builds.</p>
<p>I started with a successful setup of Jenkins X on an AWS EKS Cluster via this
<a href="https://aws.amazon.com/blogs/opensource/continuous-delivery-eks-jenkins-x/#" rel="nofollow noreferrer">guide</a>.</p>
<p>I am able to run the pipeline via GitHub and it builds successfully on a normal jx quickstart.</p>
<p>Problems arose when I started pushing my node express application.</p>
<p>On an alpine node base, my dockerfile looked like this:</p>
<pre><code>FROM node:10.15.3-alpine
RUN mkdir -p /app/node_modules && chown -R node:node /app
WORKDIR /app
COPY package*.json ./
RUN npm ci --prod
FROM alpine:3.7
COPY --from=0 /usr/bin/node /usr/bin/
COPY --from=0 /usr/lib/libgcc* /usr/lib/libstdc* /usr/lib/
WORKDIR /app
COPY --from=0 /app .
EXPOSE 3000
CMD ["node", "server.js"]
</code></pre>
<p>And it terminated with an error:</p>
<pre><code>Step 5/14 : RUN npm ci --prod
---> Running in c7f038a80dcc
[91mnpm[0m[91m ERR! code EAI_AGAIN
[0m[91mnpm ERR! errno EAI_AGAIN
[0m[91mnpm ERR![0m[91m request to https://registry.npmjs.org/express/-/express-4.16.4.tgz failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org registry.npmjs.org:443
[0mtime="2019-03-28T08:26:00Z" level=fatal msg="build failed: building [community]: build artifact: The command '/bin/sh -c npm ci --prod' returned a non-zero code: 1"
</code></pre>
<p>I tried using a non alpine base and this was how it looked:</p>
<pre><code>FROM node:10-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
ENV PORT 3000
EXPOSE 3000
CMD ["npm", "start"]
</code></pre>
<p>But then the problem was that the build hangs (or takes very long) when it hits the <code>RUN npm install</code> step.</p>
<p>I have scoured for possible answers and duplicate questions but to no avail. So I last resorted into asking it here.</p>
<p>I don't have an idea of what's going on honestly.</p>
| VeeBee | <p>I managed to solve this issue by enabling docker bridge network when bootstrapping EKS worker nodes.</p>
<pre><code>#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --enable-docker-bridge true 'your-cluster-name'
</code></pre>
<p>More detail in this Github issue: <a href="https://github.com/awslabs/amazon-eks-ami/issues/183" rel="nofollow noreferrer">https://github.com/awslabs/amazon-eks-ami/issues/183</a></p>
| Viet Tran |
<p>Is there a way to configure Portainer's dashboard to show Minikube's docker?</p>
<p><strong>Portainer</strong></p>
<p>Installed in the local docker (toolbox), on VM under windows 7;
the dashboard connection to the local (inside) docker is working fine.</p>
<p><code>docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer</code></p>
<p><strong>Minikube</strong></p>
<p>Installed in another VM on the same machine with a different port.</p>
<ul>
<li>I've created a new Portainer Endpoint using the portainer UI</li>
<li>Set the Endpoint URL (minikubeIp:2375)</li>
<li>Selected TLS and point to the path of the cert files</li>
</ul>
<p><code>c:/users/<myusername>/.minikube/certs</code></p>
<p>but keep getting an error on the dashboard tab:</p>
<blockquote>
<p>Failed to load resource: the server responded with a status of 502 (Bad Gateway)</p>
</blockquote>
<p>I'm getting the same error also when configuring the Endpoint <em>without</em> TLS.</p>
<p>Is it <em>possible</em> to configure Portainer to work with Minikube's Docker?</p>
| Yuval Simhon | <p>Are you sure that the Docker API is exposed in the Minikube configuration?</p>
<blockquote>
<p>Failed to load resource: the server responded with a status of 502 (Bad Gateway)</p>
</blockquote>
<p>This error is generally raised when Portainer cannot proxy requests to the Docker API.</p>
<p>A simple way to verify that would be to use the Docker CLI and check if Minikube's Docker API is exposed:</p>
<p><code>docker -H minikubeIp:2375 info</code></p>
<p>If this is returning a connection error, that means that the Docker API is not exposed and thus, Portainer will not be able to connect to it.</p>
| Tony |
<p>I need to download a chart which is located in an external OCI repository. When I download it by clicking on the link of the chart and version and providing the user and password, it works, but not with the following code. This is what I tried, and I get an error.</p>
<p><strong>failed to download "https://fdr.cdn.repositories.amp/artifactory/control-1.0.0.tgz" at version "1.0.0" (hint: running <code>helm repo update</code> may help)</strong>. If I click on the above link it asks for user and password (in the browser), and when I provide them (the same as in the code) the chart is <strong>downloaded</strong>. Any idea why it's not working with the code?</p>
<p>This is what I tried</p>
<pre><code> package main
import (
"fmt"
"os"
"helm.sh/helm/v3/pkg/action"
"helm.sh/helm/v3/pkg/cli"
"helm.sh/helm/v3/pkg/repo"
)
var config *cli.EnvSettings
func main() {
config = cli.New()
re := repo.Entry{
Name: "control",
URL: "https://fdr.cdn.repositories.amp/artifactory/control",
Username: "myuser",
Password: "mypass",
}
file, err := repo.LoadFile(config.RepositoryConfig)
if err != nil {
fmt.Println(err.Error())
}
file.Update(&re)
file.WriteFile(config.RepositoryConfig, os.ModeAppend)
co := action.ChartPathOptions{
InsecureSkipTLSverify: false,
RepoURL: "https://fdr.cdn.repositories.amp/artifactory/control",
Username: "myuser",
Password: "mypass",
Version: "1.0.0",
}
fp, err := co.LocateChart("control", config)
if err != nil {
fmt.Println(err.Error())
}
fmt.Println(fp)
}
</code></pre>
<p>While <strong>debugging</strong> the code I found <strong>where the error is coming from</strong>: <a href="https://github.com/helm/helm/blob/release-3.6/pkg/downloader/chart_downloader.go#L352" rel="nofollow noreferrer">https://github.com/helm/helm/blob/release-3.6/pkg/downloader/chart_downloader.go#L352</a>.
It is trying to find a cache which doesn't exist on my laptop. How could I disable it, or is there some other solution to make it work?</p>
| Jenney | <p>I think you need to update your repository before locating the chart.</p>
<p><a href="https://github.com/helm/helm/blob/main/cmd/helm/repo_update.go#L64-L89" rel="nofollow noreferrer">This</a> is the code the CLI uses to update the repositories.</p>
<p>And <a href="https://github.com/helm/helm/blob/main/cmd/helm/repo_update.go#L64-L89" rel="nofollow noreferrer">this</a> is the function that performs the update on the repositories.</p>
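<p>A rough sketch of what that could look like in your snippet, just before <code>LocateChart</code> (this reuses your <code>re</code> and <code>config</code> variables, needs an extra import of <code>helm.sh/helm/v3/pkg/getter</code>, and mirrors what <code>helm repo update</code> does; I have not run it against your repository):</p>
<pre class="lang-golang prettyprint-override"><code>	// refresh the repository index so LocateChart can find "control" at version 1.0.0
	chartRepo, err := repo.NewChartRepository(&re, getter.All(config))
	if err != nil {
		fmt.Println(err.Error())
	}
	if _, err := chartRepo.DownloadIndexFile(); err != nil {
		fmt.Println(err.Error())
	}
	// ... then call co.LocateChart("control", config) as before
</code></pre>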
| Jorge Villaverde |
<p><strong>Goal</strong></p>
<p>I want to allow unauthenticated access to the following OIDC endpoints in my K3s cluster (from other pods inside the cluster mostly but access from outside is also acceptable):</p>
<ul>
<li><a href="https://kubernetes.default.svc/.well-known/openid-configuration" rel="nofollow noreferrer">https://kubernetes.default.svc/.well-known/openid-configuration</a></li>
<li><a href="https://kubernetes.default.svc/openid/v1/jwks" rel="nofollow noreferrer">https://kubernetes.default.svc/openid/v1/jwks</a></li>
</ul>
<p><strong>Problem</strong></p>
<p>By default, Kubernetes requires an authorization token for accessing those endpoints and despite my efforts to enable unauthenticated access, I cannot seem to get the unauthenticated access to work.</p>
<p><strong>What I have tried</strong></p>
<p>According to the Kubernetes documentation on <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery" rel="nofollow noreferrer">Service account issuer discovery</a>, one must create a ClusterRoleBinding that maps the ClusterRole <code>system:service-account-issuer-discovery</code> to the Group <code>system:unauthenticated</code>.</p>
<p>I also found this helpful example for a different use-case but they're exposing the OIDC endpoints anonymously just like I want to do:
<a href="https://banzaicloud.com/blog/kubernetes-oidc/" rel="nofollow noreferrer">OIDC issuer discovery for Kubernetes service accounts</a></p>
<p>Based on both of those, I created my ClusterRoleBinding with:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl create clusterrolebinding service-account-issuer-discovery-unauthenticated --clusterrole=system:service-account-issuer-discovery --group=system:unauthenticated
</code></pre>
<p>This results in the following spec:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: "2022-11-28T15:24:41Z"
name: service-account-issuer-discovery-unauthenticated
resourceVersion: "92377634"
uid: 75402324-a8cf-412f-923e-a7a87ed082c2
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:service-account-issuer-discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:unauthenticated
</code></pre>
<p>I also confirmed RBAC is enabled by running this which showed an entry:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl api-versions | grep 'rbac.authorization.k8s.io/v1'
</code></pre>
<p>Unfortunately, despite creating that ClusterRoleBinding, it still seems to be requiring an authorization token to access the OIDC endpoints. For example, when I call from outside the cluster:</p>
<pre class="lang-bash prettyprint-override"><code>curl -vk https://my-cluster-hostname:6443/.well-known/openid-configuration
</code></pre>
<p>Output:</p>
<pre class="lang-json prettyprint-override"><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
</code></pre>
<p>Accessing from the Ingress controller inside the pod (by execing into it), I get the same error.</p>
<pre class="lang-bash prettyprint-override"><code>curl -vk https://kubernetes.default.svc/.well-known/openid-configuration
</code></pre>
<p>I am using K3s for this cluster... Is there something special about K3s that I have not accounted for? Do I need to do something to get this ClusterRoleBinding to take effect? Something else I missed?</p>
| BearsEars | <p>The issue was that the <code>--anonymous-auth</code> api server setting is set to <code>false</code> by default.</p>
<p>I was able to adjust this with my already-installed K3s server nodes by editing the systemd unit for the k3s service. Step-by-step guide:</p>
<p>Edit the systemd unit with:</p>
<pre class="lang-bash prettyprint-override"><code>sudo vim /etc/systemd/system/k3s.service
</code></pre>
<p>You'll see the K3s unit:</p>
<pre><code>[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target
[Install]
WantedBy=multi-user.target
[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
server \
'--cluster-init' \
'--disable=traefik' \
'--write-kubeconfig-mode' \
'644' \
</code></pre>
<p>Add the last two lines to the <code>ExecStart</code>:</p>
<pre><code>ExecStart=/usr/local/bin/k3s \
server \
'--cluster-init' \
'--disable=traefik' \
'--write-kubeconfig-mode' \
'644' \
'--kube-apiserver-arg' \
'--anonymous-auth=true' \
</code></pre>
<p>Reload the systemd unit:</p>
<pre class="lang-bash prettyprint-override"><code>sudo systemctl daemon-reload
</code></pre>
<p>Finally, restart the k3s service:</p>
<pre class="lang-bash prettyprint-override"><code>sudo systemctl restart k3s
</code></pre>
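<p>After the restart, the discovery endpoint from the question should respond without a bearer token:</p>
<pre class="lang-bash prettyprint-override"><code>curl -vk https://my-cluster-hostname:6443/.well-known/openid-configuration
</code></pre>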
<p>References:</p>
<ul>
<li><a href="https://docs.k3s.io/security/hardening-guide#control-plane-execution-and-arguments" rel="nofollow noreferrer">Control Plane Execution and Arguments</a> - You can see the default for K3s there</li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles" rel="nofollow noreferrer">API discovery roles</a> - also mentions this flag at the top</li>
</ul>
| BearsEars |
<p>I have seen that PATCH request is supported in Kubernetes REST API reference manual from this link: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#patch-ingress-v1beta1-networking-k8s-io" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#patch-ingress-v1beta1-networking-k8s-io</a></p>
<pre><code>HTTP Request
PATCH /apis/networking.k8s.io/v1beta1/namespaces/{namespace}/ingresses/{name}
</code></pre>
<p>However, I get an HTTP 415 Unsupported Media Type error when sending a PATCH request to the Kubernetes cluster through the Kubernetes REST API server in Postman.</p>
<p>I want to partially update our specified ingress from outside the cluster. For this purpose, I added a snapshot from my attempt.</p>
<p><a href="https://i.stack.imgur.com/UK1kR.png" rel="nofollow noreferrer">Kubernetes REST API server Ingress PATCH Request</a></p>
<p>Our Kubernetes version is:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Patching JSON:</p>
<pre><code>{
"apiVersion": "networking.k8s.io/v1beta1",
"kind": "Ingress",
"metadata": {
"name": "demo-ingress",
"annotations": {
"nginx.org/rewrites": "serviceName=demo-service-235 rewrite=/"
}
},
"spec": {
"rules": [
{
"host": "www.demodeployment.com",
"http": {
"paths": [
{
"path": "/235/",
"backend": {
"serviceName": "demo-service-235",
"servicePort": 8088
}
}
]
}
}
]
}
}
</code></pre>
<p>I can use GET, POST, PUT and DELETE successfully, but with a PATCH request I can't get the same results. What could be the root cause of the problem? What are your ideas?</p>
| thenextgeneration | <p>I solved this same issue by setting the following header:</p>
<pre><code>"Content-Type": "application/strategic-merge-patch+json"
</code></pre>
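<p>For example, a request with that header set (the API server address and token are placeholders; the body only patches the annotation from the question):</p>
<pre><code>curl -k -X PATCH \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -d '{"metadata":{"annotations":{"nginx.org/rewrites":"serviceName=demo-service-235 rewrite=/"}}}' \
  https://<api-server>/apis/networking.k8s.io/v1beta1/namespaces/default/ingresses/demo-ingress
</code></pre>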
| Ben Elgar |
<p>I have tried to expose my micro-service to the internet with AWS EC2, using the deployment and service YAML files below.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
strategy: {}
template:
metadata:
labels:
app: my-app
spec:
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
containers:
- name: my-app
image: XXX
ports:
- name: my-app
containerPort: 3000
resources: {}
---
apiVersion: v1
kind: Service
metadata:
name: my-app
spec:
selector:
app: my-app
ports:
- name: my-app
nodePort: 32000
port: 3000
targetPort: 3000
type: NodePort
</code></pre>
<p>I also created an ingress resource.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: app-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: example.myApp.com
http:
paths:
- path: /my-app
backend:
serviceName: my-app
servicePort: 80
</code></pre>
<p>As the last step I opened port 80 in the AWS dashboard. How should I choose the ingress controller to realize my intent?</p>
| ccd | <p><code>servicePort</code> should be 3000, the same as <code>port</code> in your service object.</p>
<p>Note however that setting up a cluster with kubeadm on AWS is not the best way to go: EKS provides you optimized, well-configured clusters with external load-balancers and ingress controllers.</p>
| morgwai |
<p>I'm trying to write a controller and I'm having a few issues writing tests. </p>
<p>I've used some code from the k8s HPA in my controller and I'm seeing something weird when using the <code>testrestmapper</code>.</p>
<p>basically when running this <a href="https://github.com/kubernetes/kubernetes/blob/c6ebd126a77e75e6f80e1cd59da6b887e783c7c4/pkg/controller/podautoscaler/horizontal_test.go#L852" rel="nofollow noreferrer">test</a> with a breakpoint <a href="https://github.com/kubernetes/kubernetes/blob/7498c14218403c9a713f9e0747f2c6794a0da9c7/pkg/controller/podautoscaler/horizontal.go#L512" rel="nofollow noreferrer">here</a> I see the mappings are returned. </p>
<p>When I do the same the mappings are not returned. </p>
<p>What magic is happening here?</p>
<p>The following test fails</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/api/meta/testrestmapper"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/kubernetes/pkg/api/legacyscheme"
"testing"
)
func TestT(t *testing.T) {
mapper := testrestmapper.TestOnlyStaticRESTMapper(legacyscheme.Scheme)
gk := schema.FromAPIVersionAndKind("apps/v1", "Deployment").GroupKind()
mapping, err := mapper.RESTMapping(gk)
assert.NoError(t, err)
assert.NotNil(t, mapping)
}
</code></pre>
| Mark | <p>I think this is because you are missing an import of <code>_ "k8s.io/kubernetes/pkg/apis/apps/install"</code>.</p>
<p>Without importing this path, there are no API groups or versions registered with the <code>schema</code> you are using to obtain the REST mapping.</p>
<p>By importing the path, the API group will be registered, allowing the call to <code>schema.FromAPIVersionAndKind("apps/v1", "Deployment").GroupKind()</code> to return a valid GroupKind.</p>
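<p>A sketch of the full test with the blank install import added (import paths assumed to match the original snippet):</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"k8s.io/apimachinery/pkg/api/meta/testrestmapper"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/kubernetes/pkg/api/legacyscheme"
	_ "k8s.io/kubernetes/pkg/apis/apps/install" // registers the apps group/versions with legacyscheme.Scheme
)

func TestT(t *testing.T) {
	mapper := testrestmapper.TestOnlyStaticRESTMapper(legacyscheme.Scheme)
	gk := schema.FromAPIVersionAndKind("apps/v1", "Deployment").GroupKind()
	mapping, err := mapper.RESTMapping(gk)
	assert.NoError(t, err)
	assert.NotNil(t, mapping)
}
</code></pre>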
| James Munnelly |
<p>I have my company S3 (<code>companys3</code>) bucket with multiple files, for example <code>file1</code>, <code>file2</code> and <code>file3</code>, and a client S3 bucket (<code>clients3</code>) with some files that I don't know.</p>
<p>What I want is a solution for copying only <code>file2</code> from <code>companys3</code> to <code>clients3</code>.</p>
<p>I found solutions for copying/cloning whole buckets, but couldn't find any that copy only specific files.</p>
<p>Until now we have copied files through Kubernetes pods, but the files have become too large to handle this way (over 20 GB for a single file), so I am searching for a solution that allows us to stop using Kubernetes pods to transfer files to clients.</p>
| Adam Tomaszewski | <p>You can use S3 command line (awscli).</p>
<pre><code>aws s3 cp s3://COMPANY_BUCKET/filename s3://CLIENT_BUCKET/filename
</code></pre>
| Vikyol |
<p>I've got a k8s cronjob that consists of an init container and one main container. If the init container fails, the main container never gets started and the Pod stays in "PodInitializing" indefinitely.</p>
<p>My intent is for the job to fail if the init container fails.</p>
<pre><code>---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: job-name
namespace: default
labels:
run: job-name
spec:
schedule: "15 23 * * *"
startingDeadlineSeconds: 60
concurrencyPolicy: "Forbid"
successfulJobsHistoryLimit: 30
failedJobsHistoryLimit: 10
jobTemplate:
spec:
# only try twice
backoffLimit: 2
activeDeadlineSeconds: 60
template:
spec:
initContainers:
- name: init-name
image: init-image:1.0
restartPolicy: Never
containers:
- name: some-name
image: someimage:1.0
restartPolicy: Never
</code></pre>
<p>a kubectl on the pod that's stuck results in: </p>
<pre><code>Name: job-name-1542237120-rgvzl
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: my-node-98afffbf-0psc/10.0.0.0
Start Time: Wed, 14 Nov 2018 23:12:16 +0000
Labels: controller-uid=ID
job-name=job-name-1542237120
Annotations: kubernetes.io/limit-ranger:
LimitRanger plugin set: cpu request for container elasticsearch-metrics; cpu request for init container elasticsearch-repo-setup; cpu requ...
Status: Failed
IP: 10.0.0.0
Controlled By: Job/job-1542237120
Init Containers:
init-container-name:
Container ID: docker://ID
Image: init-image:1.0
Image ID: init-imageID
Port: <none>
Host Port: <none>
State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 14 Nov 2018 23:12:21 +0000
Finished: Wed, 14 Nov 2018 23:12:32 +0000
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wwl5n (ro)
Containers:
some-name:
Container ID:
Image: someimage:1.0
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wwl5n (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
</code></pre>
| Anderson | <p>To try and figure this out I would run the command:</p>
<p><code>kubectl get pods</code> - Add the namespace param if required.</p>
<p>Then copy the pod name and run:</p>
<p><code>kubectl describe pod {POD_NAME}</code></p>
<p>That should give you some information as to why it's stuck in the initializing state.</p>
| ajtrichards |
<p>I am using Keycloak as my identity provider for kubernetes. I am using kubelogin to get the token. The token seems to work but I am getting the below error. I think there is some issue in the ClusterRoleBinding which is not allowing it to work. </p>
<ul>
<li>Whats the error</li>
</ul>
<pre><code>Error from server (Forbidden): pods is forbidden: User "test" cannot list resource "pods" in API group "" in the namespace "default"
</code></pre>
<p><strong>Additional Information</strong></p>
<ul>
<li>Api Manifest</li>
</ul>
<pre><code> - --oidc-issuer-url=https://test1.example.com/auth/realms/kubernetes
- --oidc-username-claim=preferred_username
- --oidc-username-prefix=-
- --oidc-groups-claim=groups
- --oidc-client-id=kubernetes
- --oidc-ca-file=/etc/ssl/certs/ca.crt
</code></pre>
<ul>
<li>Cluster role and cluster role binding</li>
</ul>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cluster-admin
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: admin-rolebinding
subjects:
- kind: User
name: //test1.example.com.com/auth/realms/kubernetes#23fd6g03-e03e-450e-8b5d-07b19007c443
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Is there anything I am missing to get this to work?</p>
| Shash | <p>After digging a lot I could find the issue. Rather than adding the keycloak url for the user, we have to use the user name itself. Here is the example yaml</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cluster-admin
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: admin-rolebinding
subjects:
- kind: User
name: test
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
</code></pre>
| Shash |
<p>I wrote a Golang program which fetches values from environment variables set in my system using <code>export VAR_NAME=somevalue</code>.</p>
<pre><code>cloudType = os.Getenv("CLOUD_TYPE")
clusterRegion = os.Getenv("CLUSTER_REGION")
clusterType = os.Getenv("CLUSTER_TYPE")
clusterName = os.Getenv("CLUSTER_NAME")
clusterID = os.Getenv("CLUSTER_ID")
</code></pre>
<p>As mentioned above, my program tries to fetch values from environment variables set in the system using the Getenv func. The program works fine when I run it locally and it fetches the values from the env variables. But when I built an image and ran it inside a pod, it was not able to fetch values from the env vars; it gives empty values. Is there any way to access the local env vars from the pod?</p>
| Sathya | <p>Make a yaml file like this to define a config map</p>
<pre><code>apiVersion: v1
data:
CLOUD_TYPE: "$CLOUD_TYPE"
CLUSTER_REGION: "$CLUSTER_REGION"
CLUSTER_TYPE: "$CLUSTER_TYPE"
CLUSTER_NAME: "$CLUSTER_NAME"
CLUSTER_ID: "$CLUSTER_ID"
kind: ConfigMap
metadata:
creationTimestamp: null
name: foo
</code></pre>
<p>Ensure your config vars are set then apply it to your cluster, with env substitution first</p>
<pre><code>envsubst < foo.yaml | kubectl apply -f -
</code></pre>
<p>Then in the pod definition use the config map</p>
<pre><code>spec:
containers:
- name: mypod
envFrom:
- configMapRef:
name: foo
</code></pre>
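<p>With that in place, the <code>os.Getenv</code> calls in the Go program should work unchanged inside the pod, since <code>envFrom</code> injects each key of the ConfigMap as an environment variable into the container.</p>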
| Vorsprung |
<p>I have configured Docker Hub to build a new image with tags <code>latest</code> and <code>dev-<version></code> whenever a new tag <code><version></code> appears in GitHub. I have no idea how to configure Tekton or any other cloud-native tool to automatically deploy new images as they become available in the registry.</p>
<p>Here's my k8s configuration:</p>
<pre><code>apiVersion: v1
kind: List
items:
- apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-app-ingress
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-app-service
port:
number: 80
- apiVersion: v1
kind: Service
metadata:
name: my-app-service
spec:
ports:
- port: 80
targetPort: 8000
selector:
app: my-app
type: LoadBalancer
- apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-local-deployment
labels:
app: my-app
type: web
spec:
replicas: 2
minReadySeconds: 15
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
imagePullSecrets:
- name: regcred
containers:
- name: backend
image: zuber93/my-app:dev-latest
imagePullPolicy: IfNotPresent
envFrom:
- secretRef:
name: my-app-local-secret
ports:
- containerPort: 8000
readinessProbe:
httpGet:
path: /flvby
port: 8000
initialDelaySeconds: 10
periodSeconds: 5
- name: celery
image: zuber93/my-app:dev-latest
imagePullPolicy: IfNotPresent
workingDir: /code
command: [ "/code/run/celery.sh" ]
envFrom:
- secretRef:
name: my-app-local-secret
- name: redis
image: redis:latest
imagePullPolicy: IfNotPresent
</code></pre>
| Vassily | <p>Short answer is:
Either set up a webhook from dockerhub (<a href="https://docs.docker.com/docker-hub/webhooks/" rel="nofollow noreferrer">https://docs.docker.com/docker-hub/webhooks/</a>) to tekton using triggers.</p>
<p>Or (depends on your security and if your cluster is reachable from www or not)</p>
<p>Poll dockerhub and trigger tekton upon new image detection.
(This can be done in many different ways, simple instant service, scheduled cronjob, etc in k8s)</p>
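<p>For the pull approach, a rough shell sketch of what such a poller could run (assumptions: the repository is public, the Docker Hub v2 tags endpoint returns the most recently pushed tag first, and here it patches the deployment directly for brevity instead of hitting a Tekton EventListener):</p>
<pre class="lang-sh prettyprint-override"><code># look up the newest tag of the image
LATEST=$(curl -s "https://hub.docker.com/v2/repositories/zuber93/my-app/tags?page_size=1" \
  | jq -r '.results[0].name')

# roll the deployment to the new tag (both containers use the same image)
kubectl set image deployment/my-app-local-deployment \
  backend=zuber93/my-app:"$LATEST" \
  celery=zuber93/my-app:"$LATEST"
</code></pre>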
<p>So, you choose push or pull. ;)</p>
<p>I would ask "why not trigger from your git repo directly?"</p>
| MrSimpleMind |
<p>I have configured keycloak for Kubernetes RBAC. </p>
<ul>
<li>user having access to get pods</li>
</ul>
<pre><code>vagrant@haproxy:~/.kube$ kubectl auth can-i get pods --user=oidc
Warning: the server doesn't have a resource type 'pods'
yes
</code></pre>
<pre><code>vagrant@haproxy:~/.kube$ kubectl get pods --user=oidc
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>my kubeconfig file for the user looks like below</p>
<pre class="lang-yaml prettyprint-override"><code>users:
- name: oidc
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- oidc-login
- get-token
- --oidc-issuer-url=https://test.example.com/auth/realms/kubernetes
- --oidc-client-id=kubernetes
- --oidc-client-secret=e479f74d-d9fd-415b-b1db-fd7946d3ad90
- --username=test
- --grant-type=authcode-keyboard
command: kubectl
</code></pre>
<p>Is there anyway to get this to work?</p>
| Shash | <p>The issue was with the IP address of the cluster. You might have to configure the DNS name instead of the IP address.</p>
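<p>In practice (assuming the mismatch is between the address in the kubeconfig and what the API server certificate covers), that means pointing the cluster entry at the DNS name, e.g.:</p>
<pre class="lang-yaml prettyprint-override"><code>clusters:
- cluster:
    certificate-authority-data: ...
    server: https://your-cluster-dns-name:6443
  name: kubernetes
</code></pre>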
| Shash |
<p>Consider a microservice <code>X</code> which is containerized and deployed in a kubernetes cluster. <code>X</code> communicates with a Payment Gateway <code>PG</code>. However, the payment gateway requires a static IP for services contacting it as it maintains a whitelist of IP addresses which are authorized to access the payment gateway. One way for <code>X</code> to contact <code>PG</code> is through a third party proxy server like <code>QuotaGuard</code> which will provide a static IP address to service <code>X</code> which can be whitelisted by the Payment Gateway.
However, is there an inbuilt mechanism in kubernetes which can enable a service deployed in a kube-cluster to obtain a static IP address?</p>
| adi | <p>there's no mechanism in Kubernetes for this yet.</p>
<p>other possible solutions:</p>
<ul>
<li><p>if nodes of the cluster are in a private network behind a NAT then just add your network's default gateway to the PG's whitelist.</p></li>
<li><p>if whitelist can accept a cidr apart from single IPs (like <code>86.34.0.0/24</code> for example) then add your cluster's network cidr to the whitelist</p></li>
</ul>
<p>If every node of the cluster has a public IP and you can't add a cidr to the whitelist then it gets more complicated:</p>
<ul>
<li><p>a naive way would be to add every node's IP to the whitelist, but it doesn't scale beyond tiny clusters with just a few nodes.</p></li>
<li><p>if you have access to administrating your network, then even though nodes have public IPs, you can set up a NAT for the network anyway that targets only packets with PG's IP as a destination (a minimal SNAT rule is sketched after this list).</p></li>
<li><p>if you don't have administrative access to the network, then another way is to allocate a machine with a static IP somewhere and make it act as a proxy using iptables NAT, similar to the above. This introduces a single point of failure though. In order to make it highly available, you could deploy it on a kubernetes cluster again with a few (2-3) replicas (this can be the same cluster where X is running: see below). The replicas, instead of using their node's IP to communicate with PG, would share a VIP using <a href="https://www.keepalived.org/" rel="nofollow noreferrer">keepalived</a> that would be added to PG's whitelist. (you can have a look at <a href="https://www.reddit.com/r/kubernetes/comments/92xhum/easykeepalived_simple_load_balancing_for/" rel="nofollow noreferrer">easy-keepalived</a> and either try to use it directly or learn from it how it does things). This requires high privileges on the cluster: you need to be able to grant the pods of your proxy <code>NET_ADMIN</code> and <code>NET_RAW</code> capabilities in order for them to be able to add iptables rules and set up a VIP.</p></li>
</ul>
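<p>A minimal sketch of such a SNAT rule on the NAT box (all addresses are placeholders: <code>eth0</code> is the egress interface, <code>198.51.100.20</code> stands for PG's IP and <code>203.0.113.10</code> for the whitelisted static IP):</p>
<pre class="lang-sh prettyprint-override"><code>iptables -t nat -A POSTROUTING -o eth0 -d 198.51.100.20 \
  -j SNAT --to-source 203.0.113.10
</code></pre>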
<p><strong>update:</strong></p>
<p>While waiting for builds and deployments during last few days, I've polished my old VIP-iptables scripts that I used to use as a replacement for external load-balancers on bare-metal clusters, so now they can be used as well to provide egress VIP as described in the last point of my original answer. You can give them a try: <a href="https://github.com/morgwai/kevip" rel="nofollow noreferrer">https://github.com/morgwai/kevip</a></p>
| morgwai |
<p>Say we have a couple of clusters on Amazon EKS. We have a new user or new machine that needs .kube/config to be populated with the latest cluster info.</p>
<p>Is there some easy way we get the context info from our clusters on EKS and put the info in the .kube/config file? something like:</p>
<pre><code>eksctl init "cluster-1-ARN" "cluster-2-ARN"
</code></pre>
<p>so after some web-sleuthing, I heard about:</p>
<pre><code>aws eks update-kubeconfig
</code></pre>
<p>I tried that, and I get this:</p>
<pre><code>$ aws eks update-kubeconfig
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: argument --name is required
</code></pre>
<p>I would think it would just update for all clusters then, but it doesn't. So I put in the cluster names/ARNs, like so:</p>
<pre><code>aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1
aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster
</code></pre>
<p>but then I get:</p>
<pre><code>kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1.
kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster.
</code></pre>
<p>hmmm this is kinda dumb 😒 those cluster names exist..so what 🤷 do I do now</p>
| Alexander Mills | <p>So yeah those clusters I named don't actually exist. I discovered that via:</p>
<pre><code> aws eks list-clusters
</code></pre>
<p>Ultimately, however, I still feel strongly that someone needs to make a tool that can just update your config with all the clusters that exist instead of having you name them.</p>
<p>So to do this programmatically, it would be:</p>
<pre><code>aws eks list-clusters | jq -r '.clusters[]' | while read c; do
aws eks update-kubeconfig --name "$c"
done;
</code></pre>
| Alexander Mills |
<p>I am trying to spin up a third-party service that accepts connections in 4 different ports:</p>
<p>x-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: x-deployment
labels:
app: x
...
ports:
- containerPort: 8000 # HttpGraphQLServer
- containerPort: 8001 # WebSocketServer
- containerPort: 8020 # JsonRpcServer
- containerPort: 8030 # HttpIndexingServer
livenessProbe:
tcpSocket:
port: 8020
</code></pre>
<p>x-service.yaml</p>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: x-rpc-config
spec:
healthCheck:
checkIntervalSec: 7
timeoutSec: 3
healthyThreshold: 2
unhealthyThreshold: 2
type: HTTP2
port: 8020
---
apiVersion: v1
kind: Service
metadata:
name: x-service
annotations:
beta.cloud.google.com/backend-config: '{"default": "x-rpc-config"}'
spec:
selector:
app: x
ports:
- name: graphql
port: 8000
targetPort: 8000
- name: subscription
port: 8001
targetPort: 8001
- name: indexing
port: 8030
targetPort: 8030
- name: jrpc
port: 8020
targetPort: 8020
type: NodePort
</code></pre>
<p>ingress.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: backend-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: backend-dev-ip-address
networking.gke.io/managed-certificates: backend-certificate
spec:
rules:
- host: x.dev.domain.io
http:
paths:
- path: /rpc
backend:
serviceName: x-service
servicePort: 8020
- path: /idx
backend:
serviceName: x-service
servicePort: 8030
- path: /ws
backend:
serviceName: x-service
servicePort: 8001
- path: /*
backend:
serviceName: x-service
servicePort: 8000
</code></pre>
<p>By default the GKE LoadBalancer runs the Health Check on HTTP:80. If I spin up the backend service (<code>x-service.yaml</code>) without the BackendConfig (<code>x-rpc-config</code>), it is able to detect only 2 healthy backend services, both with HTTP ports: 8000 and 8030. However, the backend services listening on ports 8020 (RPC) and 8001 (WS) are not considered healthy. I believe it happens because of the protocol type, so I've created the BackendConfig (<code>x-rpc-config</code>) to run a TCP Health Check instead, using the HTTP2 protocol for port 8020 - which is where the livenessProbe is pointing to.</p>
<p>The pods and services are created properly, but the Load Balancer still fails to detect them as healthy services. The console simply shows the following warning:</p>
<blockquote>
<p>Some backend services are in UNHEALTHY state</p>
</blockquote>
<p>The goal is to open up port 8020 (RPC) but also keep 8000 (HTTP) working. Is it possible? Do I need another type of Load Balancer or is it just a config issue?</p>
<p>I could not find any example of HealthCheck config for multiple ports with different protocols under the same service. It is probably an anti-pattern?</p>
<p>Thanks in advance.</p>
| fforbeck | <p><strong>Solution</strong></p>
<p>Instead of using an <code>Ingress</code>, which will launch a <code>HTTP/HTTPs Load Balancer</code> on GCP by default, I've changed the <code>Service</code> to work as a <code>LoadBalancer</code> with a custom <code>HTTP2</code> health check config. By default this configuration will spin up a <code>TCP Load Balancer</code> on GCP. For instance:</p>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: rpc-config
spec:
healthCheck:
checkIntervalSec: 10
timeoutSec: 3
healthyThreshold: 2
unhealthyThreshold: 2
type: HTTP2
port: 8020
---
apiVersion: v1
kind: Service
metadata:
name: x-service
annotations:
cloud.google.com/app-protocols: '{"rpc-a":"HTTP2", "rpc-b":"HTTP2", "rpc-c":"HTTP2"}'
beta.cloud.google.com/backend-config: '{"default": "rpc-config"}'
spec:
selector:
app: x-node
ports:
- name: rpc-a
port: 5001
protocol: TCP
targetPort: 5001
- name: rpc-b
port: 8020
protocol: TCP
targetPort: 8020
- name: rpc-c
port: 8000
protocol: TCP
targetPort: 8000
type: LoadBalancer
</code></pre>
<p>The next step is to enable SSL for the TCP LB. I saw GCP has an SSL Proxy LB that might solve it; I just need to figure out the proper configuration for that, which I could not find in their docs.</p>
| fforbeck |
<p>I'm confused about converting nanocores to CPU cores.
My formula is:</p>
<p><code>nanocore / 1000000000 = CORE (1000000000 is 1 billion)</code></p>
<p>If my Linux OS has 10 cores
and I want to calculate the CPU usage percentage, is this the right formula:</p>
<p><code>(nanocore / 1000000000) / 10 * 100 = percentage cpu usage ?</code></p>
<p>Is it right?</p>
| Đặng Lực | <p>Yes, a core is equal to 1e9 (1,000,000,000) nanocores.</p>
<p>You can test this by seeing that, for example in Datadog, a metric like <code>(kubernetes.cpu.usage.total / 1000000000) / kubernetes_state.node.cpu_allocatable * 100</code> gives you a percentage that should be capped at 100%. The 1st metric is measured in nanocores, and the 2nd is measured in cores (<a href="https://docs.datadoghq.com/containers/kubernetes/data_collected/" rel="nofollow noreferrer">see definition here</a>).</p>
<p>Here's another answer that aligns with this: <a href="https://discuss.kubernetes.io/t/metric-server-cpu-and-memory-units/7497/2" rel="nofollow noreferrer">link</a></p>
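<p>As a worked example: a pod using 2,500,000,000 nanocores on a 10-core node works out to (2,500,000,000 / 1,000,000,000) / 10 * 100 = 25% CPU usage.</p>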
| jwayne |
<p>new to GKE and kubernetes just trying to get a simple project up and running. Here's what I'm trying to accomplish in GKE in a single cluster, single node pool, and single namespace:</p>
<p>nginx deployment behind LoadBalancer service accepting Http traffic on port 80 passing it on port 8000 to</p>
<p>front-end deployment (python Django) behind ClusterIP service accepting traffic on port 8000. </p>
<p>The front-end is already successfully communicating with a StatefulSet running Postgres database. The front-end was seen successfully serving Http (gunicorn) before I switched it's service from LoadBalancer to ClusterIP.</p>
<p>I don't know how to properly set up the Nginx configuration to pass traffic to the ClusterIP service for the front-end deployment. What I have is not working.</p>
<p>Any advice/suggestions would be appreciated. Here are the setup files:</p>
<p>nginx - etc/nginx/conf.d/nginx.conf</p>
<pre><code>upstream front-end {
server front-end:8000;
}
server {
listen 80;
client_max_body_size 2M;
location / {
proxy_pass http://front-end;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
}
location /static/ {
alias /usr/src/app/static/;
}
}
</code></pre>
<p>nginx deployment/service</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: "web-nginx"
labels:
app: "nginx"
spec:
type: "LoadBalancer"
ports:
- port: 80
name: "web"
selector:
app: "nginx"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "nginx"
namespace: "default"
labels:
app: "nginx"
spec:
replicas: 1
selector:
matchLabels:
app: "nginx"
template:
metadata:
labels:
app: "nginx"
spec:
containers:
- name: "my-nginx"
image: "us.gcr.io/my_repo/my_nginx_image" # this is nginx:alpine + my staicfiles & nginx.conf
ports:
- containerPort: 80
args:
- /bin/sh
- -c
- while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"
</code></pre>
<p>front-end deployment/service</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: "front-end"
labels:
app: "front-end"
spec:
type: "ClusterIP"
ports:
- port: 8000
name: "django"
targetPort: 8000
selector:
app: "front-end"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "front-end"
namespace: "default"
labels:
app: "front-end"
spec:
replicas: 1
selector:
matchLabels:
app: "front-end"
template:
metadata:
labels:
app: "front-end"
spec:
containers:
- name: "myApp"
image: "us.gcr.io/my_repo/myApp"
ports:
- containerPort: 8000
args:
- /bin/sh
- -c
- python manage.py migrate && gunicorn smokkr.wsgi:application --bind 0.0.0.0:8000
---
</code></pre>
| konkrer | <p>Kubernetes ingress is the way to go about this. GKE uses Google cloud load balancer behind the scenes to provision your Kubernetes ingress resource; so when you create an Ingress object, the GKE ingress controller creates a Google Cloud HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services.</p>
<p>This way you get access to some custom resource types from Google like <code>ManagedCertificates</code> and <code>staticIP</code> addresses, which could be associated with the ingress in kubernetes to achieve loadbalancing between services or between clients and services. </p>
<p>Follow the documentation here to understand how to setup HTTP(s) load balancing with GKE using K8s ingress - <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/ingress</a></p>
<p>This tutorial is really helpful too - </p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer</a></p>
| automaticAllDramatic |
<p>I have an Asp.Net Core application that is configured to connect to Azure KeyVault using Visual Studio 2019 Connected Services:</p>
<p><a href="https://learn.microsoft.com/en-us/azure/key-vault/general/vs-key-vault-add-connected-service" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/key-vault/general/vs-key-vault-add-connected-service</a></p>
<p>I containerized the application with Docker and deployed it into Kubernetes as a Pod.
The KeyVault connection is not working, probably because the Managed Identity is not set up.</p>
<p>I tried:</p>
<ol>
<li>Added the Kubernetes agent Managed Identity to the KeyVault Access policies, like I would do with App Services or Container Services, but it does not allow the connection.</li>
<li>Followed the docs here: <a href="https://learn.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes</a></li>
</ol>
<p>I wonder if the "Azure Key Vault provider for the Secrets Store CSI driver on Kubernetes" is the right way to use KeyVault from a pod, or if there is a simpler solution like a direct connection.</p>
| Francesco Cristallo | <p>The solution, for whoever is in my situation, is to use <a href="https://github.com/Azure/aad-pod-identity" rel="nofollow noreferrer">AAD-Pod Identity</a></p>
<p>There is no need to attach a CSI Driver unless you need the Secrets in the Kubernetes configuration, want total control on custom configurations, or have the cluster outside Azure.</p>
<p>For Asp.Net Core applications deployed to AKS, the easiest way is to use Managed Identities, and to provide that to your Kubernetes Cluster you need AAD-Pod identity.</p>
<p>There is not a documentation page yet, but following the Get Started instructions on GitHub is enough to get it going.</p>
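<p>A rough sketch of the two CRDs involved, as described in that guide (field names from memory, and the resource ID, client ID and selector are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: app-identity
spec:
  type: 0                 # 0 = user-assigned managed identity
  resourceID: RESOURCE_ID_OF_THE_MANAGED_IDENTITY
  clientID: CLIENT_ID_OF_THE_MANAGED_IDENTITY
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: app-identity-binding
spec:
  azureIdentity: app-identity
  selector: app-identity-selector   # pods opt in via the label aadpodidbinding: app-identity-selector
</code></pre>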
| Francesco Cristallo |
<p>I am not able to see any log output when deploying a very simple Pod:</p>
<p>myconfig.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: counter
spec:
containers:
- name: count
image: busybox
args: [/bin/sh, -c,
'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
</code></pre>
<p>then</p>
<pre><code>kubectl apply -f myconfig.yaml
</code></pre>
<p>This was taken from this official tutorial: <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes</a> </p>
<p>The pod appears to be running fine:</p>
<pre><code>kubectl describe pod counter
Name: counter
Namespace: default
Node: ip-10-0-0-43.ec2.internal/10.0.0.43
Start Time: Tue, 20 Nov 2018 12:05:07 -0500
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"counter","namespace":"default"},"spec":{"containers":[{"args":["/bin/sh","-c","i=0...
Status: Running
IP: 10.0.0.81
Containers:
count:
Container ID: docker://d2dfdb8644b5a6488d9d324c8c8c2d4637a460693012f35a14cfa135ab628303
Image: busybox
Image ID: docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done
State: Running
Started: Tue, 20 Nov 2018 12:05:08 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r6tr6 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-r6tr6:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r6tr6
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned counter to ip-10-0-0-43.ec2.internal
Normal SuccessfulMountVolume 16m kubelet, ip-10-0-0-43.ec2.internal MountVolume.SetUp succeeded for volume "default-token-r6tr6"
Normal Pulling 16m kubelet, ip-10-0-0-43.ec2.internal pulling image "busybox"
Normal Pulled 16m kubelet, ip-10-0-0-43.ec2.internal Successfully pulled image "busybox"
Normal Created 16m kubelet, ip-10-0-0-43.ec2.internal Created container
Normal Started 16m kubelet, ip-10-0-0-43.ec2.internal Started container
</code></pre>
<p>Nothing appears when running:</p>
<pre><code>kubectl logs counter --follow=true
</code></pre>
| seenickcode | <p>I found the issue. The AWS tutorial here docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the required security groups so that one can properly see logs. I basically opened up all traffic and ports for my k8s worker nodes (EC2 instances) and things work now.</p>
| seenickcode |
<pre class="lang-sh prettyprint-override"><code>➜ k get pods -n edna
NAME READY STATUS RESTARTS AGE
airflow-79d5f59644-dd4k7 1/1 Running 0 16h
airflow-worker-67bcf7844b-rq7r8 1/1 Running 0 22h
backend-65bcb6546-wvvqj 1/1 Running 0 2d16h
</code></pre>
<p>so airflow running in <strong>airflow-79d5f59644-dd4k7</strong> pod is trying to get logs extracted from the airflow worker (celery/python, which runs a simple flask based webserver handling logs) and it can't because domain name <strong>airflow-worker-67bcf7844b-rq7r8</strong> is not resolved inside <strong>airflow-79d5f59644-dd4k7</strong></p>
<pre><code>*** Log file does not exist: /usr/local/airflow/logs/hello_world/hello_task/2020-07-14T22:05:12.123747+00:00/1.log
*** Fetching from: http://airflow-worker-67bcf7844b-rq7r8:8793/log/hello_world/hello_task/2020-07-14T22:05:12.123747+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-67bcf7844b-rq7r8', port=8793): Max retries exceeded with url: /log/hello_world/hello_task/2020-07-14T22:05:12.123747+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd37d6a9790>: Failed to establish a new connection: [Errno -2] Name or service not known'))
</code></pre>
<p>How can I make this work?</p>
<p>I understand that Airflow has remote logging to S3, but is there a way to route requests by the pods' random hostnames?</p>
<p>I have created a NodePort Service, but Airflow has NO idea about the DNS name for that service and is trying to access the logs by the hostname of the Airflow worker (reported back by Celery).</p>
<pre><code>➜ k get pods -n edna
NAME READY STATUS RESTARTS AGE
airflow-79d5f59644-dd4k7 1/1 Running 0 16h
airflow-worker-67bcf7844b-rq7r8 1/1 Running 0 22h
backend-65bcb6546-wvvqj 1/1 Running 0 2d17h
kubectl get pods -n edna -l app=edna-airflow-worker \
-o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
'Tipz:' kgp -n edna -l app=edna-airflow-worker \ -o go-template='{{range .items}}{{.status.podIP}}{{" "}}{{end}}'
10.0.101.120
</code></pre>
<p>Get inside the <strong>airflow-79d5f59644-dd4k7</strong> pod</p>
<pre><code>k exec -ti -n edna airflow-79d5f59644-dd4k7 bash
🐳 [DEV] airflow-79d5f59644-dd4k7 app # curl -L http://airflow-worker-67bcf7844b-rq7r8:8793/log/hello_world/hello_task/2020-07-14T21:59:01.400678+00:00/1.log
curl: (6) Could not resolve host: airflow-worker-67bcf7844b-rq7r8; Unknown error
🐳 [DEV] airflow-79d5f59644-dd4k7 app # curl -L http://10.0.101.120:8793/log/hello_world/hello_task/2020-07-14T21:59:01.400678+00:00/1.log
[2020-07-14 21:59:07,257] {{taskinstance.py:669}} INFO - Dependencies all met for <TaskInstance: hello_world.hello_task 2020-07-14T21:59:01.400678+00:00 [queued]>
[2020-07-14 21:59:07,341] {{taskinstance.py:669}} INFO - Dependencies all met for <TaskInstance: hello_world.hello_task 2020-07-14T21:59:01.400678+00:00 [queued]>
[2020-07-14 21:59:07,342] {{taskinstance.py:879}} INFO -
--------------------------------------------------------------------------------
[2020-07-14 21:59:07,342] {{taskinstance.py:880}} INFO - Starting attempt 1 of 1
[2020-07-14 21:59:07,342] {{taskinstance.py:881}} INFO -
--------------------------------------------------------------------------------
[2020-07-14 21:59:07,348] {{taskinstance.py:900}} INFO - Executing <Task(PythonOperator): hello_task> on 2020-07-14T21:59:01.400678+00:00
[2020-07-14 21:59:07,351] {{standard_task_runner.py:53}} INFO - Started process 5795 to run task
[2020-07-14 21:59:07,912] {{logging_mixin.py:112}} INFO - Running %s on host %s <TaskInstance: hello_world.hello_task 2020-07-14T21:59:01.400678+00:00 [running]> airflow-worker-67bcf7844b-rq7r8
[2020-07-14 21:59:07,989] {{logging_mixin.py:112}} INFO - Hello world! This is really cool!
[2020-07-14 21:59:07,989] {{python_operator.py:114}} INFO - Done. Returned value was: Hello world! This is really cool!
[2020-07-14 21:59:08,161] {{taskinstance.py:1065}} INFO - Marking task as SUCCESS.dag_id=hello_world, task_id=hello_task, execution_date=20200714T215901, start_date=20200714T215907, end_date=20200714T215908
[2020-07-14 21:59:17,070] {{logging_mixin.py:112}} INFO - [2020-07-14 21:59:17,070] {{local_task_job.py:103}} INFO - Task exited with return code 0
🐳 [DEV] airflow-79d5f59644-dd4k7 app #
</code></pre>
| DmitrySemenov | <p><strong>Solution</strong></p>
<p>Provide the following ENV <strong>AIRFLOW__CORE__HOSTNAME_CALLABLE</strong> for the deployment.yaml of the worker pod:</p>
<pre><code>env:
- name: AIRFLOW__CORE__HOSTNAME_CALLABLE
value: 'airflow.utils.net:get_host_ip_address'
</code></pre>
<p>Or just change airflow.cfg</p>
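<p>The equivalent airflow.cfg change would presumably be (the env var maps to the <code>[core]</code> section by Airflow's naming convention):</p>
<pre><code>[core]
hostname_callable = airflow.utils.net:get_host_ip_address
</code></pre>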
<p>Airflow will then access the logs by the IP of the pod, and things work as long as your pod exposes port <strong>8793</strong>.</p>
| DmitrySemenov |
<p>I have an Angular 7 web app, and I am trying to consume a REST API deployed in Kubernetes whose port is not open to the Internet.</p>
<p>I tried the HTTP client module, but this executes client side, so there is no way to reach the service that is running in Kubernetes.</p>
<p>Is it possible to consume this without exposing the service to the Internet?</p>
| Pedro Sosa | <p>Your Angular application is running on your clients so you have to publicly expose the REST API in order to consume it from there. If you only want to expose the API for specific IPs (if your Angular application should only work within your intranet for example), then you can use an ingress controller (e. g. nginx) and configure it with annotations. Example:</p>
<pre><code>nginx.ingress.kubernetes.io/whitelist-source-range: <YourNetworkCIDR>
</code></pre>
| Martin Brandl |
<p>I’m migrating from a GitLab managed Kubernetes cluster to a self managed cluster. In this self managed cluster need to install nginx-ingress and cert-manager. I have already managed to do the same for a cluster used for review environments. I use the latest Helm3 RC to managed this, so I won’t need Tiller.</p>
<p>So far, I ran these commands:</p>
<pre class="lang-sh prettyprint-override"><code># Add Helm repos locally
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add jetstack https://charts.jetstack.io
# Create namespaces
kubectl create namespace managed
kubectl create namespace production
# Create cert-manager crds
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
# Install Ingress
helm install ingress stable/nginx-ingress --namespace managed --version 0.26.1
# Install cert-manager with a cluster issuer
kubectl apply -f config/production/cluster-issuer.yaml
helm install cert-manager jetstack/cert-manager --namespace managed --version v0.11.0
</code></pre>
<p>This is my <code>cluster-issuer.yaml</code>:</p>
<pre><code># Based on https://docs.cert-manager.io/en/latest/reference/issuers.html#issuers
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: XXX # This is an actual email address in the real resource
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- selector: {}
http01:
ingress:
class: nginx
</code></pre>
<p>I installed my own Helm chart named <code>docs</code>. All resources from the Helm chart are installed as expected. Using cURL, I can fetch the page over HTTP. Google Chrome redirects me to an HTTPS page with an invalid certificate though.</p>
<p>The additional following resources have been created:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get secrets
NAME TYPE DATA AGE
docs-tls kubernetes.io/tls 3 18m
$ kubectl get certificaterequests.cert-manager.io
NAME READY AGE
docs-tls-867256354 False 17m
$ kubectl get certificates.cert-manager.io
NAME READY SECRET AGE
docs-tls False docs-tls 18m
$ kubectl get orders.acme.cert-manager.io
NAME STATE AGE
docs-tls-867256354-3424941167 invalid 18m
</code></pre>
<p>It appears everything is blocked by the cert-manager order in an invalid state. Why could it be invalid? And how do I fix this?</p>
| Remco Haszing | <p>It turns out that in addition to a correct DNS <code>A</code> record for <code>@</code>, there were some <code>AAAA</code> records that pointed to an IPv6 address I don’t know. Removing those records and redeploying resolved the issue for me.</p>
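<p>To check for stray records, standard <code>dig</code> queries (replace the hostname with your own) show what the ingress name resolves to:</p>
<pre class="lang-sh prettyprint-override"><code>dig +short A docs.example.com
dig +short AAAA docs.example.com
</code></pre>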
| Remco Haszing |
<p>Our cluster has 3 elasticsearch data pods / 3 master pods / 1 client and 1 exporter. The problem is the alert "Elasticsearch unassigned shards due to circuit breaking exception". You can check further about this in this <a href="https://stackoverflow.com/questions/64606301/elasticsearch-unassigned-shards-circuitbreakingexceptionparent-data-too-large">question</a></p>
<p>Now, by making the curl call http://localhost:9200/_nodes/stats, I've figured that the heap usage is average across data pods.</p>
<p>The heap_used_percent of elasticsearch-data-0, 1 and 2 are 68%, 61% and 63% respectively.</p>
<p>I made below API calls and can see the shards are almost evenly distributed.</p>
<blockquote>
<p>curl -s http://localhost:9200/_cat/shards |grep elasticsearch-data-0 |wc -l</p>
</blockquote>
<pre><code>145
</code></pre>
<blockquote>
<p>curl -s http://localhost:9200/_cat/shards |grep elasticsearch-data-1
|wc -l</p>
</blockquote>
<pre><code>145
</code></pre>
<blockquote>
<p>curl -s http://localhost:9200/_cat/shards |grep elasticsearch-data-2
|wc -l</p>
</blockquote>
<pre><code>142
</code></pre>
<p>Below is the output of allocate explain curl call</p>
<blockquote>
<p>curl -s http://localhost:9200/_cluster/allocation/explain | python -m
json.tool</p>
</blockquote>
<pre><code>{
"allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
"can_allocate": "no",
"current_state": "unassigned",
"index": "graph_24_18549",
"node_allocation_decisions": [
{
"deciders": [
{
"decider": "max_retry",
"decision": "NO",
"explanation": "shard has exceeded the maximum number of retries [50] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-31T09:18:44.115Z], failed_attempts[50], delayed=false, details[failed shard on node [nodeid1]: failed to perform indices:data/write/bulk[s] on replica [graph_24_18549][0], node[nodeid1], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=someid], unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-31T09:16:42.146Z], failed_attempts[49], delayed=false, details[failed shard on node [nodeid2]: failed to perform indices:data/write/bulk[s] on replica [graph_24_18549][0], node[nodeid2], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=someid2], unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-31T09:15:05.849Z], failed_attempts[48], delayed=false, details[failed shard on node [nodeid1]: failed to perform indices:data/write/bulk[s] on replica [tsg_ngf_graph_1_mtermmetrics1_vertex_24_18549][0], node[nodeid1], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=someid3], unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-31T09:11:50.143Z], failed_attempts[47], delayed=false, details[failed shard on node [nodeid2]: failed to perform indices:data/write/bulk[s] on replica [graph_24_18549][0], node[o_9jyrmOSca9T12J4bY0Nw], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=someid4], unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-31T09:08:10.182Z], failed_attempts[46], delayed=false, details[failed shard on node [nodeid1]: failed to perform indices:data/write/bulk[s] on replica [graph_24_18549][0], node[nodeid1], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=someid6], unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-31T09:07:03.102Z], failed_attempts[45], delayed=false, details[failed shard on node [nodeid2]: failed to perform indices:data/write/bulk[s] on replica [graph_24_18549][0], node[nodeid2], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=someid7], unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-31T09:05:53.267Z], failed_attempts[44], delayed=false, details[failed shard on node [nodeid2]: failed to perform indices:data/write/bulk[s] on replica [graph_24_18549][0], node[nodeid2], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=someid8], unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-31T09:04:24.507Z], failed_attempts[43], delayed=false, details[failed shard on node [nodeid1]: failed to perform indices:data/write/bulk[s] on replica [graph_24_18549][0], node[nodeid1], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=someid9], unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-31T09:03:02.018Z], failed_attempts[42], delayed=false, details[failed shard on node [nodeid2]: failed to perform indices:data/write/bulk[s] on replica [graph_24_18549][0], node[nodeid2], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=someid10], unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-31T09:01:38.094Z], failed_attempts[41], delayed=false, details[failed shard on node [nodeid1]: failed recovery, failure RecoveryFailedException[[graph_24_18549][0]: Recovery failed from {elasticsearch-data-2}{}{} into {elasticsearch-data-1}{}{}{IP}{IP:9300}]; nested: RemoteTransportException[[elasticsearch-data-2][IP:9300][internal:index/shard/recovery/start_recovery]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be 
[2012997826/1.8gb], which is larger than the limit of [1972122419/1.8gb], real usage: [2012934784/1.8gb], new bytes reserved: [63042/61.5kb]]; ], allocation_status[no_attempt]], expected_shard_size[4338334540], failure RemoteTransportException[[elasticsearch-data-0][IP:9300][indices:data/write/bulk[s][r]]]; nested: AlreadyClosedException[engine is closed]; ], allocation_status[no_attempt]], expected_shard_size[5040039519], failure RemoteTransportException[[elasticsearch-data-1][IP:9300][indices:data/write/bulk[s][r]]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [2452709390/2.2gb], which is larger than the limit of [1972122419/1.8gb], real usage: [2060112120/1.9gb], new bytes reserved: [392597270/374.4mb]]; ], allocation_status[no_attempt]], expected_shard_size[2606804616], failure RemoteTransportException[[elasticsearch-data-0][IP:9300][indices:data/write/bulk[s][r]]]; nested: AlreadyClosedException[engine is closed]; ], allocation_status[no_attempt]], expected_shard_size[4799579998], failure RemoteTransportException[[elasticsearch-data-0][IP:9300][indices:data/write/bulk[s][r]]]; nested: AlreadyClosedException[engine is closed]; ], allocation_status[no_attempt]], expected_shard_size[4012459974], failure RemoteTransportException[[elasticsearch-data-1][IP:9300][indices:data/write/bulk[s][r]]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [2045921066/1.9gb], which is larger than the limit of [1972122419/1.8gb], real usage: [1770141176/1.6gb], new bytes reserved: [275779890/263mb]]; ], allocation_status[no_attempt]], expected_shard_size[3764296412], failure RemoteTransportException[[elasticsearch-data-0][IP:9300][indices:data/write/bulk[s][r]]]; nested: AlreadyClosedException[engine is closed]; ], allocation_status[no_attempt]], expected_shard_size[2631720247], failure RemoteTransportException[[elasticsearch-data-1][IP:9300][indices:data/write/bulk[s][r]]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [2064366222/1.9gb], which is larger than the limit of [1972122419/1.8gb], real usage: [1838754456/1.7gb], new bytes reserved: [225611766/215.1mb]]; ], allocation_status[no_attempt]], expected_shard_size[3255872204], failure RemoteTransportException[[elasticsearch-data-0][IP:9300][indices:data/write/bulk[s][r]]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [2132674062/1.9gb], which is larger than the limit of [1972122419/1.8gb], real usage: [1902340880/1.7gb], new bytes reserved: [230333182/219.6mb]]; ], allocation_status[no_attempt]], expected_shard_size[2956220256], failure RemoteTransportException[[elasticsearch-data-1][IP:9300][indices:data/write/bulk[s][r]]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [2092139364/1.9gb], which is larger than the limit of [1972122419/1.8gb], real usage: [1855009224/1.7gb], new bytes reserved: [237130140/226.1mb]]; ], allocation_status[no_attempt]]]"
},
{
"decider": "same_shard",
"decision": "NO",
"explanation": "the shard cannot be allocated to the same node on which a copy of the shard already exists [[graph_24_18549][0], node[nodeid2], [P], s[STARTED], a[id=someid]]"
}
],
"node_decision": "no",
"node_id": "nodeid2",
"node_name": "elasticsearch-data-2",
"transport_address": "IP:9300"
}
</code></pre>
<p>What needs to be done now? I don't see the heap shooting up. I've already tried the API below, which helps and assigns all unassigned shards, but the problem reoccurs every couple of hours.</p>
<blockquote>
<p>curl -XPOST ':9200/_cluster/reroute?retry_failed=true'</p>
</blockquote>
| Gokul | <p>Which Elasticsearch version are you using? <code>7.9.1</code> and <code>7.10.1</code> have better handling of <a href="https://github.com/elastic/elasticsearch/pull/55633" rel="nofollow noreferrer">retrying replication that failed due to CircuitBreakingException</a> and better <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.10/index-modules-indexing-pressure.html" rel="nofollow noreferrer">indexing pressure</a> management.</p>
<p>I would recommend trying to <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.x/setup-upgrade.html" rel="nofollow noreferrer">upgrade your cluster</a>. Version 7.10.1 seems to have fixed this issue for me. See more: <a href="https://discuss.elastic.co/t/help-with-unassigned-shards-circuitbreakingexception-values-less-than-1-bytes-are-not-supported/257441" rel="nofollow noreferrer">Help with unassigned shards / CircuitBreakingException / Values less than -1 bytes are not supported</a></p>
| Ricardo |
<p>I set up K3s on a server with:</p>
<pre><code>curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --cluster-init --disable=traefik --write-kubeconfig-mode 644" sh -s -
</code></pre>
<p>Then I grabbed the kube config from <code>/etc/rancher/k3s/k3s.yaml</code> and copied it to my local machine so I can interact with the cluster from my machine rather than from the server node I installed K3s on. I had to swap out the references to 127.0.0.1 and change them to the actual hostname of the server I installed K3s on, but other than that it worked.</p>
<p>I then hooked up 2 more server nodes to the cluster for a High Availability setup using:</p>
<pre><code>curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --server {https://{hostname or IP of server 1}:6443 --disable=traefik --write-kubeconfig-mode 644" sh -s -
</code></pre>
<p>Now on my local machine again I run <code>kubectl get pods</code> (for example) and that works but I want a highly available setup so I placed a TCP Load Balancer (NGINX actually) in front of my cluster. Now I am trying to connect to the Kubernetes API through that proxy / load balancer and unfortunately, since my <code>~/.kube/config</code> has a client certificate for authentication, this no longer works because my load balancer / proxy that lives in front of that server cannot pass my client cert on to the K3s server.</p>
<p>My <code>~/.kube/config</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: {omitted}
server: https://my-cluster-hostname:6443
name: default
contexts:
- context:
cluster: default
user: default
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
user:
client-certificate-data: {omitted}
client-key-data: {omitted}
</code></pre>
<p>I also grabbed that client cert and key in my kube config, exported it to a file, and hit the API server with curl and it works when I directly hit the server nodes but NOT when I go through my proxy / load balancer.</p>
<p>What I would like to do instead of using the client certificate approach is use <code>token</code> authentication as my proxy would not interfere with that. However, I am not sure how to get such a token. I read the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Kubernetes Authenticating guide</a> and specifically I tried creating a new service account and getting the token associated with it as described in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens" rel="nofollow noreferrer">Service Account Tokens</a> section but that also did not work. I also dug through <a href="https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/" rel="nofollow noreferrer">K3s server config options</a> to see if there was any mention of static token file, etc. but didn't find anything that seemed likely.</p>
<p>Is this some limitation of K3s or am I just doing something wrong (likely)?</p>
<p>My <code>kubectl version</code> output:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7+k3s1", GitCommit:"ac70570999c566ac3507d2cc17369bb0629c1cc0", GitTreeState:"clean", BuildDate:"2021-11-29T16:40:13Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| BearsEars | <p>I figured out an approach that works for me by reading through the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="noreferrer">Kubernetes Authenticating Guide</a> in more detail. I settled on the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens" rel="noreferrer">Service Account Tokens</a> approach as it says:</p>
<blockquote>
<p>Normally these secrets are mounted into pods for in-cluster access to
the API server, but can be used from outside the cluster as well.</p>
</blockquote>
<p>My use is for outside the cluster.</p>
<p>First, I created a new <code>ServiceAccount</code> called <code>cluster-admin</code>:</p>
<pre><code>kubectl create serviceaccount cluster-admin
</code></pre>
<p>I then created a <code>ClusterRoleBinding</code> to assign cluster-wide permissions to my ServiceAccount (I named this <code>cluster-admin-manual</code> because K3s already had created one called <code>cluster-admin</code> that I didn't want to mess with):</p>
<pre><code>kubectl create clusterrolebinding cluster-admin-manual --clusterrole=cluster-admin --serviceaccount=default:cluster-admin
</code></pre>
<p>Now you have to get the <code>Secret</code> that is created for you when you created your <code>ServiceAccount</code>:</p>
<pre><code>kubectl get serviceaccount cluster-admin -o yaml
</code></pre>
<p>You'll see something like this returned:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2021-12-20T15:55:55Z"
name: cluster-admin
namespace: default
resourceVersion: "3973955"
uid: 66bab124-8d71-4e5f-9886-0bad0ebd30b2
secrets:
- name: cluster-admin-token-67jtw
</code></pre>
<p>Get the Secret content with:</p>
<pre><code>kubectl get secret cluster-admin-token-67jtw -o yaml
</code></pre>
<p>In that output you will see the <code>data/token</code> property. This is a base64 encoded JWT bearer token. Decode it with:</p>
<pre><code>echo {base64-encoded-token} | base64 --decode
</code></pre>
<p>Now you have your bearer token and you can add a user to your <code>~/.kube/config</code> with the following command. You can also paste that JWT into <a href="https://jwt.io/" rel="noreferrer">jwt.io</a> to take a look at the properties and make sure you base64 decoded it properly.</p>
<pre><code>kubectl config set-credentials my-cluster-admin --token={token}
</code></pre>
<p>Then make sure your existing <code>context</code> in your <code>~/.kube/config</code> has the user set appropriately (I did this manually by editing my kube config file but there's probably a <code>kubectl config</code> command for it). For example:</p>
<pre class="lang-yaml prettyprint-override"><code>- context:
cluster: my-cluster
user: my-cluster-admin
name: my-cluster
</code></pre>
<p>My user in the kube config looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: my-cluster-admin
user:
token: {token}
</code></pre>
<p>Now I can authenticate to the cluster using the token instead of relying on a transport layer specific mechanism (TLS with Mutual Auth) that my proxy / load-balancer does not interfere with.</p>
<p>Other resources I found helpful:</p>
<ul>
<li><a href="https://medium.com/devops-mojo/kubernetes-role-based-access-control-rbac-overview-introduction-rbac-with-kubernetes-what-is-2004d13195df" rel="noreferrer">Kubernetes — Role-Based Access Control (RBAC) Overview by Anish Patel</a></li>
</ul>
| BearsEars |
<p>I'm on Ubuntu linux VM and trying to run minikube on it.</p>
<p>I installed kubectl via homebrew and then installed minikube by following below installation guides:<br>
kubectl: <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux" rel="noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux</a><br>
minikube: <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="noreferrer">https://kubernetes.io/docs/tasks/tools/install-minikube/</a></p>
<p>I started minikube as <code>sudo minikube start --driver=none</code> which has the following output:<br>
<a href="https://i.stack.imgur.com/h1QdZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/h1QdZ.png" alt="enter image description here"></a></p>
<p>When I run the command: <code>kubectl get pods</code>, I get an error:</p>
<pre><code>Error in configuration:
* unable to read client-cert /home/jenkins/.minikube/profiles/minikube/client.crt for minikube due to open /home/jenkins/.minikube/profiles/minikube/client.crt: permission denied
* unable to read client-key /home/jenkins/.minikube/profiles/minikube/client.key for minikube due to open /home/jenkins/.minikube/profiles/minikube/client.key: permission denied
</code></pre>
<p>The user I'm installing all above is <code>/home/jenkins</code>. I'm not sure what's going wrong. Can someone help?</p>
| Abhijeet Vaikar | <p>From <a href="https://github.com/kubernetes/minikube/issues/8363#issuecomment-637892712" rel="nofollow noreferrer">#8363</a> shared by <a href="https://stackoverflow.com/a/70451125/1235675">this answer</a>, I did <code>nano ~/.kube/config</code> and corrected the paths:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/username/.minikube/ca.crt
server: https://192.168.64.3:8443
name: minikube
contexts:
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
users:
- name: minikube
user:
client-certificate: /home/username/.minikube/profiles/client.crt
client-key: /home/username/.minikube/profiles/client.key
</code></pre>
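<p>If the underlying problem is that the files under <code>~/.minikube</code> are owned by root (which can happen when minikube is started with <code>sudo</code>), an alternative sketch is to fix the ownership instead of editing paths (adjust the user and home directory to yours):</p>
<pre><code>sudo chown -R $USER $HOME/.minikube
chmod -R u+wrx $HOME/.minikube
</code></pre>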
| piouson |
<p>I want a job to trigger every 15 minutes but it is consistently triggering every 30 minutes.</p>
<p><strong>UPDATE:</strong></p>
<p>I've simplified the problem by just running:</p>
<pre><code>kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
</code></pre>
<p>As specified in the docs here: <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/</a></p>
<p>and yet the job still refuses to run on time.</p>
<pre><code>$ kubectl get cronjobs
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello */1 * * * * False 1 5m 30m
hello2 */1 * * * * False 1 5m 12m
</code></pre>
<p>It took 25 minutes for the command line created cronjob to run and 7 minutes for the cronjob created from yaml. They were both finally scheduled at the same time so it's almost like etcd finally woke up and did something?</p>
<p><strong>ORIGINAL ISSUE:</strong></p>
<p>When I drill into an active job I see <code>Status: Terminated: Completed</code> but
<code>Age: 25 minutes</code> or something greater than 15. </p>
<p>In the logs I see that the python script meant to run has completed its final print statement. The script takes about ~2 min to complete based on its output file in s3. Then no new job is scheduled for 28 more minutes.</p>
<p>I have tried with different configurations:</p>
<p><code>Schedule: */15 * * * *</code> AND <code>Schedule: 0,15,30,45 * * * *</code></p>
<p>As well as</p>
<p><code>Concurrency Policy: Forbid</code> AND <code>Concurrency Policy: Replace</code></p>
<p>What else could be going wrong here?</p>
<p>Full config with identifying lines modified:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
labels:
type: f-c
name: f-c-p
namespace: extract
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
template:
metadata:
creationTimestamp: null
labels:
type: f-c
spec:
containers:
- args:
- /f_c.sh
image: identifier.amazonaws.com/extract_transform:latest
imagePullPolicy: Always
env:
- name: ENV
value: prod
- name: SLACK_TOKEN
valueFrom:
secretKeyRef:
key: slack_token
name: api-tokens
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: aws_access_key_id
name: api-tokens
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: aws_secret_access_key
name: api-tokens
- name: F_ACCESS_TOKEN
valueFrom:
secretKeyRef:
key: f_access_token
name: api-tokens
name: s-f-c
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
schedule: '*/15 * * * *'
successfulJobsHistoryLimit: 1
suspend: false
status: {}
</code></pre>
| ProGirlXOXO | <p>After running these jobs in a test cluster I discovered that external circumstances prevented them from running as intended.</p>
<p>On the original cluster there were ~20k scheduled jobs. The built-in scheduler for Kubernetes is not yet capable of handling this volume consistently.</p>
<p>The maximum number of jobs that can be reliably run (within a minute of the time intended) may depend on the size of your master nodes.</p>
| ProGirlXOXO |
<p>I've set up some services and ingresses to try out SSL termination. I had no problem at all with <code>LoadBalancer</code> and <code>NodePort</code> services as backends, but it's not working at all with a <code>ClusterIP</code> service.</p>
<p>Although the Ingress' backend is described as healthy, I get an HTTP error that does not come from my application.</p>
<pre><code>$ kubectl describe ing nginx-cluster-ssl-ingress
Name: nginx-cluster-ssl-ingress
Namespace: default
Address: X.X.X.X
Default backend: nginx-cluster-svc:80 (...)
TLS:
ssl-certificate terminates
Rules:
Host Path Backends
---- ---- --------
Annotations:
https-target-proxy: k8s-tps-default-nginx-cluster-ssl-ingress
static-ip: k8s-fw-default-nginx-cluster-ssl-ingress
target-proxy: k8s-tp-default-nginx-cluster-ssl-ingress
url-map: k8s-um-default-nginx-cluster-ssl-ingress
backends: {"k8s-be-30825":"HEALTHY"}
forwarding-rule: k8s-fw-default-nginx-cluster-ssl-ingress
https-forwarding-rule: k8s-fws-default-nginx-cluster-ssl-ingress
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
28m 28m 1 {loadbalancer-controller } Normal ADD default/nginx-cluster-ssl-ingress
27m 27m 1 {loadbalancer-controller } Normal CREATE ip: X.X.X.X
</code></pre>
<p>The HTTP error is the following:</p>
<pre><code>$ curl http://X.X.X.X/
default backend - 404%
</code></pre>
<p>My question is quite simple: is it supposed to work with ClusterIP services? If it is supposed to, as more or less written in the documentation, where should I look to resolve this issue?</p>
<p>Thank you!</p>
| Samuel ROZE | <p><a href="https://github.com/kubernetes/kubernetes/issues/26508#issuecomment-222376886">The native GKE Ingress controller does not support <code>ClusterIP</code>; only <code>NodePort</code> works.</a></p>
<p>Non-native Ingress controllers such as the nginx one do work with <code>ClusterIP</code> services.</p>
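<p>If you want to keep the native GKE controller, a minimal sketch of the backend Service as <code>NodePort</code> would be (the name and selector below are assumptions based on the Ingress above):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-cluster-svc
spec:
  type: NodePort
  selector:
    app: nginx-cluster
  ports:
  - port: 80
    targetPort: 80
</code></pre>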
| Samuel ROZE |
<p>I want to set up a web app using three components that I already have:</p>
<ol>
<li>Domain name registered on domains.google.com</li>
<li>Frontend web app hosted on Firebase Hosting and served from <code>example.com</code></li>
<li>Backend on Kubernetes cluster behind Load Balancer with external static IP <code>1.2.3.4</code></li>
</ol>
<p>I want to serve the backend from <code>example.com/api</code> or <code>api.example.com</code></p>
<p>My best guess is to use Cloud DNS to connect IP adress and subdomain (or URL)</p>
<ul>
<li><code>1.2.3.4</code> -> <code>api.exmple.com</code></li>
<li><code>1.2.3.4</code> -> <code>example.com/api</code></li>
</ul>
<p>The problem is that Cloud DNS uses custom name servers, like this:</p>
<pre><code>ns-cloud-d1.googledomains.com
</code></pre>
<p>So if I set Google default name servers I can reach Firebase hosting only, and if I use custom name servers I can reach only Kubernetes backend.</p>
<p>What is a proper way to be able to reach both api.example.com and example.com?</p>
<p>edit:
As a temporary workaround I'm combining two default name servers and two custom name servers from Cloud DNS, like this:</p>
<ul>
<li><code>ns-cloud-d1.googledomains.com</code> (custom)</li>
<li><code>ns-cloud-d2.googledomains.com</code> (custom)</li>
<li><code>ns-cloud-b1.googledomains.com</code> (default)</li>
<li><code>ns-cloud-b2.googledomains.com</code> (default)</li>
</ul>
<p>But if someone knows the proper way to do it - please post the answer.</p>
| Zhorzh Alexandr | <p><strong>Approach 1:</strong></p>
<pre><code>example.com --> Firebase Hosting (A record)
api.example.com --> Kubernetes backend
</code></pre>
<p>Pro: Super-simple</p>
<p>Con: CORS request needed by browser before API calls can be made.</p>
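<p>For Approach 1, creating the A record in Cloud DNS is roughly the following (the zone name is an assumption; the IP is the load balancer's static IP from the question):</p>
<pre><code>gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add 1.2.3.4 --name=api.example.com. --ttl=300 --type=A --zone=my-zone
gcloud dns record-sets transaction execute --zone=my-zone
</code></pre>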
<p><strong>Approach 2:</strong></p>
<pre><code>example.com --> Firebase Hosting via k8s ExternalName service
example.com/api --> Kubernetes backend
</code></pre>
<p>Unfortunately from my own efforts to make this work with service <code>type: ExternalName</code> all I could manage is to get infinitely redirected, something which I am still unable to debug.</p>
<p><strong>Approach 3:</strong></p>
<pre><code>example.com --> Google Cloud Storage via NGINX proxy to redirect paths to index.html
example.com/api --> Kubernetes backend
</code></pre>
<p>You will need to deploy the static files to Cloud Storage, with an NGINX proxy in front if you want SPA-like redirection to index.html for all routes. This approach does not use Firebase Hosting altogether.</p>
<p>The complication lies in the /api redirect which depends on which Ingress you are using.</p>
<p>Hope that helps.</p>
| Jonathan Lin |
<p>I'm trying to create a quicklab on GCP to implement CI/CD with Jenkins on GKE, I created a <strong>Multibranch Pipeline</strong>. When I push the modified script to git, Jenkins kicks off a build and fails with the following error:</p>
<blockquote>
<pre><code> Branch indexing
> git rev-parse --is-inside-work-tree # timeout=10
Setting origin to https://source.developers.google.com/p/qwiklabs-gcp-gcpd-502b5f86f641/r/default
> git config remote.origin.url https://source.developers.google.com/p/qwiklabs-gcp-gcpd-502b5f86f641/r/default # timeout=10
Fetching origin...
Fetching upstream changes from origin
> git --version # timeout=10
> git config --get remote.origin.url # timeout=10
using GIT_ASKPASS to set credentials qwiklabs-gcp-gcpd-502b5f86f641
> git fetch --tags --progress -- origin +refs/heads/*:refs/remotes/origin/*
Seen branch in repository origin/master
Seen branch in repository origin/new-feature
Seen 2 remote branches
Obtained Jenkinsfile from 4bbac0573482034d73cee17fa3de8999b9d47ced
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
Waiting for next available executor
Agent sample-app-f7hdx-n3wfx is provisioned from template Kubernetes Pod Template
---
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
buildUrl: "http://cd-jenkins:8080/job/sample-app/job/new-feature/1/"
labels:
jenkins: "slave"
jenkins/sample-app: "true"
name: "sample-app-f7hdx-n3wfx"
spec:
containers:
- command:
- "cat"
image: "gcr.io/cloud-builders/kubectl"
name: "kubectl"
tty: true
volumeMounts:
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
- command:
- "cat"
image: "gcr.io/cloud-builders/gcloud"
name: "gcloud"
tty: true
volumeMounts:
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
- command:
- "cat"
image: "golang:1.10"
name: "golang"
tty: true
volumeMounts:
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
- env:
- name: "JENKINS_SECRET"
value: "********"
- name: "JENKINS_TUNNEL"
value: "cd-jenkins-agent:50000"
- name: "JENKINS_AGENT_NAME"
value: "sample-app-f7hdx-n3wfx"
- name: "JENKINS_NAME"
value: "sample-app-f7hdx-n3wfx"
- name: "JENKINS_AGENT_WORKDIR"
value: "/home/jenkins/agent"
- name: "JENKINS_URL"
value: "http://cd-jenkins:8080/"
image: "jenkins/jnlp-slave:alpine"
name: "jnlp"
volumeMounts:
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
nodeSelector: {}
restartPolicy: "Never"
serviceAccountName: "cd-jenkins"
volumes:
- emptyDir: {}
name: "workspace-volume"
Running on sample-app-f7hdx-n3wfx in /home/jenkins/agent/workspace/sample-app_new-feature
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
java.lang.IllegalStateException: Jenkins.instance is missing. Read the documentation of Jenkins.getInstanceOrNull to see what you are
doing wrong.
at jenkins.model.Jenkins.get(Jenkins.java:772)
at hudson.model.Hudson.getInstance(Hudson.java:77)
at com.google.jenkins.plugins.source.GoogleRobotUsernamePassword.areOnMaster(GoogleRobotUsernamePassword.java:146)
at com.google.jenkins.plugins.source.GoogleRobotUsernamePassword.readObject(GoogleRobotUsernamePassword.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1975)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at hudson.remoting.UserRequest.deserialize(UserRequest.java:290)
at hudson.remoting.UserRequest.perform(UserRequest.java:189)
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from 10.8.2.12/10.8.2.12:53086
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:957)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283)
at com.sun.proxy.$Proxy88.addCredentials(Unknown Source)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl.addCredentials(RemoteGitImpl.java:200)
at hudson.plugins.git.GitSCM.createClient(GitSCM.java:845)
at hudson.plugins.git.GitSCM.createClient(GitSCM.java:813)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Caused: java.lang.Error: Failed to deserialize the Callable object.
at hudson.remoting.UserRequest.perform(UserRequest.java:195)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:97)
Caused: java.io.IOException: Remote call on JNLP4-connect connection from 10.8.2.12/10.8.2.12:53086 failed
at hudson.remoting.Channel.call(Channel.java:963)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283)
Caused: hudson.remoting.RemotingSystemException
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:299)
at com.sun.proxy.$Proxy88.addCredentials(Unknown Source)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl.addCredentials(RemoteGitImpl.java:200)
at hudson.plugins.git.GitSCM.createClient(GitSCM.java:845)
at hudson.plugins.git.GitSCM.createClient(GitSCM.java:813)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
</code></pre>
</blockquote>
| Andre12 | <p>This issue has now been fixed. Please update the Google Authenticated Source Plugin to version 0.4.</p>
<p><a href="https://github.com/jenkinsci/google-source-plugin/pull/9" rel="nofollow noreferrer">https://github.com/jenkinsci/google-source-plugin/pull/9</a></p>
| viglesiasce |
<p>I have the following definitions in my custom namespace:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: test-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test
rules:
- apiGroups: [""]
resources: ["pods", "pods/exec"]
verbs: ["get", "list", "delete", "patch", "create"]
- apiGroups: ["extensions", "apps"]
resources: ["deployments", "deployments/scale"]
verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: test
subjects:
- kind: User
name: test-sa
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: test
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Running <code>describe role test</code></p>
<pre><code>Name: test
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"test","namespace":"test-namesapce...
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods/exec [] [] [get list delete patch create]
pods [] [] [get list delete patch create]
deployments.apps/scale [] [] [get list delete patch create]
deployments.apps [] [] [get list delete patch create]
deployments.extensions/scale [] [] [get list delete patch create]
deployments.extensions [] [] [get list delete patch create]
</code></pre>
<p>When I'm trying to run the command <code>kubectl get pods</code> in a pod that is using this service account, I'm getting the following error:</p>
<blockquote>
<p>Error from server (Forbidden): pods is forbidden: User
"system:serviceaccount:test-namespace:test-sa" cannot list resource
"pods" in API group "" in the namespace "test-namespace"</p>
</blockquote>
<p>Where is that misconfigured?</p>
| Mugen | <p>The problem was with the <code>subjects</code> of <code>RoleBinding</code>. The correct definition would be:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: test
subjects:
- kind: ServiceAccount
name: test-sa
roleRef:
kind: Role
name: test
apiGroup: rbac.authorization.k8s.io
</code></pre>
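<p>You can verify the binding without exec-ing into the pod with something like this (namespace and service account names taken from the error message):</p>
<pre><code>kubectl auth can-i list pods -n test-namespace \
  --as=system:serviceaccount:test-namespace:test-sa
</code></pre>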
| Mugen |
<p>I have created a new docker image that I want to use to replace the current docker image. The application is on the kubernetes engine on google cloud platform.</p>
<p>I believe I am supposed to use the gcloud container clusters update command. Although, I struggle to see how it works and how I'm supposed to replace the old docker image with the new one.</p>
| Peterbarr | <p>You may want to use <code>kubectl</code> in order to interact with your GKE cluster. The method of updating the image depends on how the Pod / container was created.</p>
<p>For some example commands, see <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources</a></p>
<p>For example, <code>kubectl set image deployment/frontend www=image:v2</code> will do a rolling update of the "www" containers of the "frontend" deployment, updating the image.</p>
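<p>A rough end-to-end sketch, assuming the new image has been pushed to Container Registry (the deployment, container, and image names below are placeholders to replace with yours):</p>
<pre><code>docker push gcr.io/my-project/my-app:v2
kubectl set image deployment/my-app my-app=gcr.io/my-project/my-app:v2
kubectl rollout status deployment/my-app
</code></pre>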
<p>Getting up and running on GKE: <a href="https://cloud.google.com/kubernetes-engine/docs/quickstart" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/quickstart</a></p>
| Keilo |
<p>I have a project that needs to update a CloneSet YAML.</p>
<p>YAML document something like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
generation: 1
...
spec:
...
...
volumeClaimTemplates:
- metadata:
creationTimestamp: null
name: data1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 60G
- metadata:
creationTimestamp: null
name: data2
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 60G
status:
availableReplicas: 1
</code></pre>
<p>I want to patch volumeClaimTemplates based on metadata.name</p>
<p>If name is not specified, the storage for <strong>all volumeClaimTemplates</strong> is updated; if name is specified and matches, the storage for that specific name is updated; if name is specified but doesn't match, an error should be returned.</p>
<pre class="lang-sh prettyprint-override"><code>matchVolumeTemplateName=${1-all}
storage="$2"
if [[ ${matchVolumeTemplateName} == "all" ]]; then
./bin/yq e -i ".spec.volumeClaimTemplates[].spec.resources.requests.storage = \"${storage}\"" "cloneset_modifystorage.yml"
else
./bin/yq e -i ".spec.volumeClaimTemplates[] | select(.metadata.name == "\"${matchVolumeTemplateName}\"").spec.resources.requests.storage |= \"${storage}\"" "cloneset_modifystorage.yml"
fi
</code></pre>
<p>However, with the above code, <strong>only part of the YAML data</strong> will be output if <code>a match is found</code> for the specified name, and the file will be <strong>empty</strong> if the name doesn't match, for example <code>matchVolumeTemplateName=data3</code></p>
<pre class="lang-yaml prettyprint-override"><code># cloneset_modifystorage.yml
# matchVolumeTemplateName = data1 storage = 90G
# Other data of K8S YAML is lost
metadata:
creationTimestamp: null
name: data1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 90G
</code></pre>
<p>As a result, other K8s data is <strong>missing</strong>, and an empty file is printed <strong>instead of an error</strong> when the name match fails.</p>
<p>I really appreciate any help with this.</p>
| moluzhui | <p>You're just missing brackets - so it's first filtering the yaml and then updating (instead of updating a specific portion of the yaml):</p>
<pre class="lang-sh prettyprint-override"><code>./bin/yq e -i "(.spec.volumeClaimTemplates[] | select(.metadata.name == "\"${matchVolumeTemplateName}\"").spec.resources.requests.storage) |= \"${storage}\"" "cloneset_modifystorage.yml"
</code></pre>
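<p>For the "error when the name doesn't match" part of the question, one possible sketch (not the only way) is to check for a match first and bail out:</p>
<pre class="lang-sh prettyprint-override"><code>if ! ./bin/yq e ".spec.volumeClaimTemplates[] | select(.metadata.name == \"${matchVolumeTemplateName}\") | .metadata.name" "cloneset_modifystorage.yml" | grep -q .; then
  echo "no volumeClaimTemplate named ${matchVolumeTemplateName}" >&2
  exit 1
fi
</code></pre>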
<p>Disclosure: I wrote yq</p>
| mike.f |
<p>Below is my <code>testfile.yaml</code>:</p>
<pre><code>---
kind: Pod
metadata:
name: amazing-application
---
kind: Deployment
metadata:
name: amazing-deployment
---
kind: Service
metadata:
name: amazing-deployment
---
kind: Service
metadata:
name: tea-service
</code></pre>
<p>My goal is to split this into 4 files where the filename is <code>.metadata.name</code> and the dir that file goes into is <code>.kind</code>.</p>
<p>I have achieved what I want with this:</p>
<pre><code>for kind in $(yq e '.kind' testfile.yaml | awk '!/^(---)/' | uniq);
do
mkdir "$kind"
cd "$kind"
yq 'select(.kind == "'$kind'")' ../testfile.yaml | yq -s '.metadata.name'
cd ..;
done
</code></pre>
<p>What I want to know is how to get a unique-together mapping, or somehow use multiple criteria to split the testfile, rather than going through the loop.</p>
<p>Is there a way to use <code>yq</code> and <code>-s</code> or <code>select</code> to select where kind and metadata.name are unique together in that individual document (document as in separated by '---')?</p>
<p>Because if you do <code>yq -s '.kind' testfile.yaml</code> it will yield three yaml files, not four. Same for <code>yq -s '.metadata.name' testfile.yaml</code>; we get three files as not all <code>name</code> are unique - one gets lost.</p>
| noblerthanoedipus | <p>There are a few ways you can do this direct in yq.</p>
<p>First off, you can use string concatenation with another property to come up with a unique filename:</p>
<pre><code>yq -s '(.kind | downcase) + "_" + .metadata.name' testfile.yaml
</code></pre>
<p>That will create files like:</p>
<pre><code>deployment_amazing-deployment.yml
pod_amazing-application.yml
service_amazing-deployment.yml
service_tea-service.yml
</code></pre>
<p>Or you can use the built in $index to make the filenames unique:</p>
<pre><code>yq -s '.metadata.name + "_" + $index'
</code></pre>
<p>Which will create:</p>
<pre><code>amazing-application_0.yml
amazing-deployment_1.yml
amazing-deployment_2.yml
tea-service_3.yml
</code></pre>
<p>Disclaimer: I wrote yq</p>
| mike.f |
<p>I'm trying to get into Istio on Kubernetes, but it seems I am missing either some fundamentals or I am doing things back to front. I am quite experienced with Kubernetes, but Istio and its VirtualService confuse me a bit.</p>
<p>I created 2 deployments (helloworld-v1/helloworld-v2). Both have the same image, the only thing thats different is the environment variables - which output either version: "v1" or version: "v2". I am using a little testcontainer i wrote which basically returns the headers i got into the application. A kubernetes service named "helloworld" can reach both.</p>
<p>I created a Virtualservice and a Destinationrule</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld
spec:
hosts:
- helloworld
http:
- route:
- destination:
host: helloworld
subset: v1
weight: 90
- destination:
host: helloworld
subset: v2
weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: helloworld
spec:
host: helloworld
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
</code></pre>
<p>According to the docs, not mentioning any gateway means the internal "mesh" gateway should be used.
Sidecar containers are successfully attached:</p>
<pre><code>kubectl -n demo get all
NAME READY STATUS RESTARTS AGE
pod/curl-6657486bc6-w9x7d 2/2 Running 0 3h
pod/helloworld-v1-d4dbb89bd-mjw64 2/2 Running 0 6h
pod/helloworld-v2-6c86dfd5b6-ggkfk 2/2 Running 0 6h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/helloworld ClusterIP 10.43.184.153 <none> 80/TCP 6h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/curl 1 1 1 1 3h
deployment.apps/helloworld-v1 1 1 1 1 6h
deployment.apps/helloworld-v2 1 1 1 1 6h
NAME DESIRED CURRENT READY AGE
replicaset.apps/curl-6657486bc6 1 1 1 3h
replicaset.apps/helloworld-v1-d4dbb89bd 1 1 1 6h
replicaset.apps/helloworld-v2-6c86dfd5b6 1 1 1 6h
</code></pre>
<p>Everything works quite fine when I access the application from "outside" (istio-ingressgateway): v2 is called one time, v1 nine times:</p>
<pre><code>curl --silent -H 'host: helloworld' http://localhost
{"host":"helloworld","user-agent":"curl/7.47.0","accept":"*/*","x-forwarded-for":"10.42.0.0","x-forwarded-proto":"http","x-envoy-internal":"true","x-request-id":"a6a2d903-360f-91a0-b96e-6458d9b00c28","x-envoy-decorator-operation":"helloworld:80/*","x-b3-traceid":"e36ef1ba2229177e","x-b3-spanid":"e36ef1ba2229177e","x-b3-sampled":"1","x-istio-attributes":"Cj0KF2Rlc3RpbmF0aW9uLnNlcnZpY2UudWlkEiISIGlzdGlvOi8vZGVtby9zZXJ2aWNlcy9oZWxsb3dvcmxkCj8KGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBIjEiFoZWxsb3dvcmxkLmRlbW8uc3ZjLmNsdXN0ZXIubG9jYWwKJwodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USBhIEZGVtbwooChhkZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWUSDBIKaGVsbG93b3JsZAo6ChNkZXN0aW5hdGlvbi5zZXJ2aWNlEiMSIWhlbGxvd29ybGQuZGVtby5zdmMuY2x1c3Rlci5sb2NhbApPCgpzb3VyY2UudWlkEkESP2t1YmVybmV0ZXM6Ly9pc3Rpby1pbmdyZXNzZ2F0ZXdheS01Y2NiODc3NmRjLXRyeDhsLmlzdGlvLXN5c3RlbQ==","content-length":"0","version":"v1"}
"version": "v1",
"version": "v1",
"version": "v2",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",
</code></pre>
<p>But as soon as i do the curl from within a pod (in this case just byrnedo/alpine-curl) against the service things start to get confusing:</p>
<pre><code>curl --silent -H 'host: helloworld' http://helloworld.demo.svc.cluster.local
{"host":"helloworld","user-agent":"curl/7.61.0","accept":"*/*","version":"v1"}
"version":"v2"
"version":"v2"
"version":"v1"
"version":"v1"
"version":"v2"
"version":"v2"
"version":"v1"
"version":"v2“
"version":"v1"
</code></pre>
<p>Not only do I miss all the Istio attributes (which I understand for service-to-service communication, since as I understand it they are set when the request first enters the mesh via the gateway), but the balance looks to me like the default 50:50 balance of a Kubernetes service.</p>
<p>What do I have to do to achieve the same 1:9 balance for inter-service communication? Do I have to create a second, "internal" gateway to use instead of the service FQDN? Did I miss a definition? Should calling a service FQDN from within a pod respect the VirtualService routing?</p>
<p>The Istio version used is 1.0.1, the Kubernetes version is v1.11.1.</p>
<p><strong>UPDATE</strong>
I deployed the sleep pod as suggested (this time not relying on the auto-injection of the demo namespace), but manually as described in the sleep sample.</p>
<pre><code>kubectl -n demo get deployment sleep -o wide
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
sleep 1 1 1 1 2m
sleep,istio-proxy tutum/curl,docker.io/istio/proxyv2:1.0.1 app=sleep
</code></pre>
<p>I also changed the VirtualService to 0/100 to see if it works at first glance. Unfortunately this did not change much:</p>
<pre><code>export SLEEP_POD=$(kubectl get -n demo pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user- agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v1"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v1"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"
</code></pre>
| sapien99 | <p>Found the solution: one of the prerequisites (which I forgot) is that proper routing requires named ports: see <a href="https://istio.io/docs/setup/kubernetes/spec-requirements/" rel="noreferrer">https://istio.io/docs/setup/kubernetes/spec-requirements/</a>. </p>
<p>Wrong:</p>
<pre><code>spec:
ports:
- port: 80
protocol: TCP
targetPort: 3000
</code></pre>
<p>Right:</p>
<pre><code>spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 3000
</code></pre>
<p>After using the name <code>http</code>, everything works like a charm.</p>
| sapien99 |
<p>There are two fields in k8s: requests and limits. I want to know whether GKE charges us based on requests or limits.</p>
<p>If the request is 1Gi and the limit is 2Gi,
will I be billed for 1Gi or 2Gi?</p>
| Akshit Bansal | <p>There are two different modes of operation in Google Kubernetes Engine: Autopilot (easier to manage, but less flexible) and Standard. They're billed <a href="https://cloud.google.com/kubernetes-engine/pricing#google-kubernetes-engine-pricing" rel="nofollow noreferrer">differently</a>.</p>
<p>In Standard mode, you're essentially <strong>billed for Compute Engine instances</strong> used in your cluster. That means your requests and limits are only used indirectly, as you're expected to be responsible for setting up your cluster so that it's scaled according to those. When doing this, you should remember that some of each node's resources <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#node_allocatable" rel="nofollow noreferrer">are required to run the GKE and Kubernetes node components</a> necessary to make that node function as part of your cluster.</p>
<p>In Autopilot mode, you're <strong>billed for resources</strong> - CPU, memory, ephemeral storage - requested by your currently scheduled Pods. The catch is that each Pod in Autopilot mode is <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#allowable_resource_ranges" rel="nofollow noreferrer">considered to be a Guaranteed QoS Class Pod</a>:</p>
<blockquote>
<p>Autopilot automatically sets resource limits equal to requests if you
do not have resource limits specified. If you do specify resource
limits, your limits will be overridden and set to be equal to the
requests.</p>
</blockquote>
<p>To be more specific, in your example the 2 Gi limit will be overridden and set to 1 Gi, the same as the request. You'll be billed accordingly.</p>
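<p>In other words, on Autopilot a spec like the following is effectively treated, and billed, as 1Gi (a sketch of the resources block only):</p>
<pre><code>resources:
  requests:
    memory: 1Gi
  limits:
    memory: 2Gi   # overridden down to 1Gi by Autopilot
</code></pre>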
| raina77ow |
<p>When I define e.g. a deployment in Kubernetes there is a section with a list of containers and each of them contains an array of ports, e.g.:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: my-nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>Now the documentation <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#container-v1-core" rel="noreferrer">here</a> explicitly says it does not affect connectivity:</p>
<blockquote>
<p>List of ports to expose from the container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port
here DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network. Cannot be updated.</p>
</blockquote>
<p>Now it seems it does not really affect anything and is only informational, but what does that really mean, where is that used?</p>
<p>One use I have found is that if a port defines a name, it can be referenced from a service by that name.</p>
<p>Is that it, or are there other uses for this specification?</p>
| Ilya Chernomordik | <p>As you quote the documentation, </p>
<blockquote>
<p>List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.</p>
</blockquote>
<p>the purpose of defining the <code>containerPorts</code> is purely for documentation. It is only used by other developers to understand the port that the container listens on. Kubernetes borrows this idea from Docker, which does the same with the <code>EXPOSE</code> command, as mentioned <a href="https://docs.docker.com/engine/reference/builder/#expose" rel="noreferrer">here</a>.</p>
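<p>The one practical effect the question already hints at is port naming: a named <code>containerPort</code> can be referenced by name from a Service's <code>targetPort</code>. A minimal sketch:</p>
<pre><code># in the Pod template
ports:
- name: http
  containerPort: 80
---
# in the Service
spec:
  ports:
  - port: 80
    targetPort: http   # refers to the named containerPort
</code></pre>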
| Malathi |
<p>What could be the reason of Deployment not being able to see config files?</p>
<p>This is a part from Deployment</p>
<pre><code>command: ["bundle", "exec", "puma", "-C", "config/puma.rb"]
already tried with ./config/.. and using args instead of command
</code></pre>
<p>I'm getting <code>Errno::ENOENT: No such file or directory @ rb_sysopen - config/puma.rb</code></p>
<p>Everything used to work fine with docker-compose</p>
<p><strong>When I keep the last line (CMD) from the Dockerfile below and omit the <code>command:</code> in Deployment, everything works fine</strong> but, to reuse the image for sidekiq, I need to provide config files.</p>
<p>Dockerfile</p>
<pre><code>FROM ruby:2.7.2
RUN apt-get update -qq && apt-get install -y build-essential ca-certificates libpq-dev nodejs postgresql-client yarn vim -y
ENV APP_ROOT /var/www/app
RUN mkdir -p $APP_ROOT
WORKDIR $APP_ROOT
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
COPY public public/
RUN gem install bundler
RUN bundle install
# tried this
COPY config config/
COPY . .
EXPOSE 9292
# used to have this line but I want to reuse the image
# CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
</code></pre>
<p><strong>error message</strong></p>
<pre><code>bundler: failed to load command: puma (/usr/local/bundle/bin/puma)
Errno::ENOENT: No such file or directory @ rb_sysopen - config/puma.rb
</code></pre>
<p><strong>upd</strong></p>
<p>It seems that the issue was related to wrong paths and a misunderstanding of the command and args fields. The following config worked for me. It's also possible there were cache issues with docker (this happened to me earlier).</p>
<pre><code> command:
- bundle
- exec
- puma
args:
- "-C"
- "config/puma.rb"
</code></pre>
| kirqe | <p>For some reason providing commands inside of <code>values.yaml</code> doesn't seem to work properly.</p>
<p>But it works when commands are provided through template.</p>
<p>There's the following section in the <code>app/templates/deployment.yaml</code> of my app. Everything works fine now.</p>
<pre><code> containers:
- name: {{ .Values.app.name }}
image: {{ .Values.app.container.image }}
command:
- bundle
- exec
- puma
args:
- "-C"
- "config/puma.rb"
</code></pre>
<p>I have also found this k8s rails demo <a href="https://github.com/lewagon/rails-k8s-demo/blob/master/helm/templates/deployments/sidekiq.yaml" rel="nofollow noreferrer">https://github.com/lewagon/rails-k8s-demo/blob/master/helm/templates/deployments/sidekiq.yaml</a></p>
<p>As you can see, the commands section is provided through <code>templates/../name.yaml</code> rather than <code>values.yaml</code>.</p>
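<p>If you do want to keep the command in <code>values.yaml</code>, a pattern that generally works is to render it in the template with <code>toJson</code> (the key names below are just an example, not from the original chart):</p>
<pre><code># values.yaml
app:
  command: ["bundle", "exec", "puma"]
  args: ["-C", "config/puma.rb"]

# templates/deployment.yaml
command: {{ toJson .Values.app.command }}
args: {{ toJson .Values.app.args }}
</code></pre>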
| kirqe |
<p>I have a VPC network with a subnet in the range 10.100.0.0/16, in which the nodes reside. There is a route and firewall rules applied to the range 10.180.102.0/23, which routes and allows traffic going to/coming from a VPN tunnel.</p>
<p>If I deploy a node in the 10.100.0.0/16 range, I can ping my devices in the 10.180.102.0/23 range. However, the pod running inside that node cannot ping the devices in the 10.180.102.0/23 range. I assume it has to do with the fact that the pods live in a different IP range(10.12.0.0/14).</p>
<p>How can I configure my networking so that I can ping/communicate with the devices living in the 10.180.102.0/23 range?</p>
| Henke | <p>I don't quite remember exactly how I solved this, but I'm posting what I have to help @tdensmore.</p>
<p>You have to edit the ip-masq-agent (an agent running on GKE that masquerades the IPs). This configuration is responsible for letting the pods inside the nodes reach other parts of the GCP VPC network, more specifically the VPN. So it allows pods to communicate with the devices that are accessible through the VPN.</p>
<p>First of all we're gonna be working inside the <code>kube-system</code> namespace, where we're gonna put the configmap that configures our ip-masq-agent. Put this in a <code>config</code> file:</p>
<pre><code>nonMasqueradeCIDRs:
- 10.12.0.0/14 # The IPv4 CIDR the cluster is using for Pods (required)
- 10.100.0.0/16 # The IPv4 CIDR of the subnetwork the cluster is using for Nodes (optional, works without but I guess its better with it)
masqLinkLocal: false
resyncInterval: 60s
</code></pre>
<p>and run <code>kubectl create configmap ip-masq-agent --from-file config --namespace kube-system</code></p>
<p>afterwards, configure the ip-masq-agent, put this in a <code>ip-masq-agent.yml</code> file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: ip-masq-agent
namespace: kube-system
spec:
template:
metadata:
labels:
k8s-app: ip-masq-agent
spec:
hostNetwork: true
containers:
- name: ip-masq-agent
image: gcr.io/google-containers/ip-masq-agent-amd64:v2.4.1
args:
- --masq-chain=IP-MASQ
# To non-masquerade reserved IP ranges by default, uncomment the line below.
# - --nomasq-all-reserved-ranges
securityContext:
privileged: true
volumeMounts:
- name: config
mountPath: /etc/config
volumes:
- name: config
configMap:
# Note this ConfigMap must be created in the same namespace as the daemon pods - this spec uses kube-system
name: ip-masq-agent
optional: true
items:
# The daemon looks for its config in a YAML file at /etc/config/ip-masq-agent
- key: config
path: ip-masq-agent
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
- key: "CriticalAddonsOnly"
operator: "Exists"
</code></pre>
<p>and run <code>kubectl -n kube-system apply -f ip-masq-agent.yml</code></p>
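<p>To sanity-check the result, you can verify the agent pods are up and then try reaching a VPN-side device from a throwaway pod (the target IP below is just an example from the 10.180.102.0/23 range):</p>
<pre><code>kubectl -n kube-system get pods -l k8s-app=ip-masq-agent
kubectl run -it --rm pingtest --image=busybox --restart=Never -- ping -c 3 10.180.102.10
</code></pre>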
<p>Note: it has been a long time since I've done this; there is more info at this link: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent</a></p>
| Henke |
<p>I have one spring boot microservice running on docker container, below is the Dockerfile</p>
<pre><code>FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
CMD chmod +R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
</code></pre>
<p>here my configuration stores in external folder, i.e <code>/config/console-server.yml</code> and when I started the application, internally it will load the config (spring boot functionality).</p>
<p>Now I want to separate this configuration using configmap, for that I simply created one configmap and storing all the configuration details.</p>
<blockquote>
<p>kubectl create configmap console-configmap
--from-file=./config/console-server.yml</p>
<p>kubectl describe configmap console-configmap</p>
</blockquote>
<p>below are the description details:</p>
<pre><code>Name: console-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
console-server.yml:
----
server:
http:
port: 8385
compression:
enabled: true
mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
min-response-size: 2048
---
spring:
thymeleaf:
prefix: classpath:/static
application:
name: console-service
profiles:
active: native
servlet:
multipart:
max-file-size: 30MB
max-request-size: 30MB
---
host:
gateway: http://apigateway:4000
webhook: http://localhost:9000
</code></pre>
<p>my deployment yml is:</p>
<pre><code>apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: consoleservice1
spec:
selector:
matchLabels:
app: consoleservice
replicas: 1 # tells deployment to run 3 pods matching the template
template: # create pods using pod definition in this template
metadata:
labels:
app: consoleservice
spec:
containers:
- name: consoleservice
image: ms-console
ports:
- containerPort: 8384
imagePullPolicy: Always
envFrom:
- configMapRef:
name: console-configmap
imagePullSecrets:
- name: regcresd
</code></pre>
<p>My doubt is: I commented out the config folder in the Dockerfile, so when the pods run, the application throws an exception because there is no configuration. How do I inject this console-configmap into my deployment? I have already shared what I tried, but I'm getting the same issues.</p>
| Chintamani | <p>First of all, how are you consuming the .yml file in your application? If you consume your yml file contents as environment variables, your config should just work fine. But I suspect that you want to consume the contents from the config file inside the container. If that is the case you have to create a volume out of the configmap as follows:</p>
<pre><code>
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: consoleservice1
spec:
selector:
matchLabels:
app: consoleservice
replicas: 1 # tells deployment to run 3 pods matching the template
template: # create pods using pod definition in this template
metadata:
labels:
app: consoleservice
spec:
containers:
- name: consoleservice
image: ms-console
ports:
- containerPort: 8384
imagePullPolicy: Always
volumeMounts:
- mountPath: /app/config
name: config
volumes:
- name: config
configMap:
name: console-configmap
imagePullSecrets:
- name: regcresd
</code></pre>
<p>The file will be available in the path <code>/app/config/console-server.yml</code>. You have to modify it as per your needs.</p>
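<p>One more note: your Dockerfile sets <code>WORKDIR deploy/</code> and the app seems to read its config from the relative <code>config/</code> folder, so you may want to mount the ConfigMap there instead of <code>/app/config</code> (a sketch, adjust the path to your layout):</p>
<pre><code>        volumeMounts:
        - mountPath: /deploy/config
          name: config
</code></pre>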
| Malathi |
<p>Using the Python client, I've written a function to evict all pods on a node. How can I monitor/watch for all pods to be fully evicted?</p>
<p>I'm using the create_namespaced_pod_eviction method to evict all pods on a single node. While this works, it doesn't wait for the process to finish before continuing. I need the eviction process to be 100% complete before moving on in my script. How can I monitor the status of this process? Similar to kubectl, I'd like my function to wait for each pod to evict before returning.</p>
<pre class="lang-py prettyprint-override"><code># passes list of pods to evict function
def drain_node(self):
print("Draining node", self._node_name)
self.cordon_node()
pods = self._get_pods()
response = []
for pod in pods:
response.append(self._evict_pod(pod))
return response
# calls the eviction api on each pod
def _evict_pod(self, pod, delete_options=None):
name = pod.metadata.name
namespace = pod.metadata.namespace
body = client.V1beta1Eviction(metadata=client.V1ObjectMeta(name=name, namespace=namespace))
response = self._api.create_namespaced_pod_eviction(name, namespace, body)
return response
# gets list of pods to evict
def _get_pods(self):
all_pods = self._api.list_pod_for_all_namespaces(watch=True, field_selector='spec.nodeName=' + self._node_name)
user_pods = [p for p in all_pods.items
if (p.metadata.namespace != 'kube-system')]
return user_pods
</code></pre>
| N. Alston | <p>As given in this <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#create_namespaced_pod_eviction" rel="nofollow noreferrer">link</a>, the <code>create_namespaced_pod_eviction</code> call returns a <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1beta1Eviction.md" rel="nofollow noreferrer"><code>V1beta1Eviction</code></a> object. It has an <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1ObjectMeta.md" rel="nofollow noreferrer"><code>ObjectMeta</code></a> object that contains the <code>deletion_timestamp</code> field. Perhaps you can use that to determine if the pod is already deleted. Alternatively, polling the pod status might give the same ObjectMeta.</p>
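<p>A minimal polling sketch on top of your existing class (this reuses your <code>_get_pods</code> helper; the timeout values are arbitrary):</p>
<pre class="lang-py prettyprint-override"><code>import time

def wait_for_evictions(self, timeout=300, interval=5):
    """Block until no user pods remain on the node, or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if not self._get_pods():
            return True
        time.sleep(interval)
    return False
</code></pre>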
| Malathi |
<pre><code>echo "apiVersion: v1
kind: Node
metadata:
name: host-cluster-control-plane-64j47
labels:
beta.kubernetes.io/arch: amd64
" | yq -o p
</code></pre>
<p>Result:</p>
<pre><code>apiVersion = v1
kind = Node
metadata.name = host-cluster-control-plane-64j47
metadata.labels.beta.kubernetes.io/arch = amd64
</code></pre>
<p>That's almost what I want. I am looking for the key to get values.</p>
<p>I could use <code>metadata.name</code> like this:</p>
<pre><code>echo "apiVersion: v1
kind: Node
metadata:
name: host-cluster-control-plane-64j47
labels:
beta.kubernetes.io/arch: amd64
" | yq '.metadata.name'
</code></pre>
<p>But the <code>-o p</code> option of <code>yq</code> does not quote the key, if needed.</p>
<p>I can't use <code>metadata.labels.beta.kubernetes.io/arch</code> as key, since the correct syntax is <code>metadata.labels["beta.kubernetes.io/arch"]</code>.</p>
<p>Is there an automated way to get the keys of a yaml file so that I can use the keys in <code>yq</code> (or <code>jq</code>)?</p>
<p>The desired output would be something like this:</p>
<pre><code>apiVersion = v1
kind = Node
metadata.name = host-cluster-control-plane-64j47
metadata.labels["beta.kubernetes.io/arch"] = amd64
</code></pre>
<p>I am looking for the valid key, because I want to create a second command line to select these values.</p>
<p>For example:</p>
<pre><code>❯ k get nodes -o yaml | yq '.items[].metadata.labels["beta.kubernetes.io/arch"]'
amd64
amd64
amd64
</code></pre>
| guettli | <p>You can get close by doing something like:</p>
<pre class="lang-bash prettyprint-override"><code>yq '(.. | key | select(test("\."))) |= ("[\"" + . + "\"]")' file.yaml -op
apiVersion = v1
kind = Node
metadata.name = host-cluster-control-plane-64j47
metadata.labels.["beta.kubernetes.io/arch"] = amd64
</code></pre>
<p>Or you could do:</p>
<pre><code>yq '(.. | key | select(test("\."))) |= sub("\.", "\.")' file.yaml -op
apiVersion = v1
kind = Node
metadata.name = host-cluster-control-plane-64j47
metadata.labels.beta\\.kubernetes\\.io/arch = amd64
</code></pre>
<p>BTW - I'm not sure how it's supposed to be escaped in property files; I'd be willing to update yq to do it natively if someone raises a bug with details on GitHub...</p>
<p>Disclaimer: I wrote yq</p>
| mike.f |
<p>Does anyone know if there is a way to define static selectors based on the namespace name instead of label selectors? The reason is that some of the namespaces are created by an operator and I don't have any control over the labels.</p>
<p>Thanks
Essey</p>
| Erkan | <p>Each namespace has the so-called <a href="https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetes-io-metadata-name" rel="nofollow noreferrer">well-known label</a> <code>kubernetes.io/metadata.name</code></p>
<p>So your <code>namespaceSelector</code> can be something like:</p>
<pre class="lang-yaml prettyprint-override"><code>namespaceSelector:
matchExpressions:
- key: kubernetes.io/metadata.name
operator: "In"
values:
- "staging"
- "demo"
</code></pre>
| mac |
<p>I've an app, packaged as docker image. The app has some default plugins, installed in <code>/opt/myapp/plugins/</code>. I want to install some additional plugins, which basically means copying the plugins to the app's aforementioned path:</p>
<pre class="lang-sh prettyprint-override"><code>cp /path/to/more-plugins/* /opt/myapp/plugins/
</code></pre>
<p>How do I do that? Is <code>initContainers</code> helpful in such cases? Does init-container have access to the filesystem of the app-container, so that the above command can be executed? I tried to use <code>busybox</code> in <code>initContainers</code> and ran <code>ls -lR /opt/myapp</code> just to see if such a path even exists. But it seems, init container doesn't have access to the app-filesystem. </p>
<p>So what are the different solutions to this problem? And what is the best one?</p>
| Nawaz | <p>I'm not a container expert, but my understanding is that the best way to do this is to create a new container, based on your current docker image, that copies the files into place. Your image, with your plugins and configs, is what k8s loads and manages.</p>
<pre><code>FROM my/oldimage:1.7.1
COPY more-plugins/* /opt/myapp/plugins/
</code></pre>
<pre><code>$ docker build -t my/newimage:1.7.1 .
</code></pre>
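<p>Then push the image and point your Deployment at the new tag (the registry, deployment, and container names below are placeholders):</p>
<pre><code>$ docker push my/newimage:1.7.1
$ kubectl set image deployment/myapp myapp=my/newimage:1.7.1
</code></pre>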
| PaulProgrammer |
<p>I have already prepared my docker image.
My Dockerfile : </p>
<pre><code>FROM python:3.7-alpine
# Creating Application Source Code Directory
RUN mkdir -p /FogAPP/src
# Setting Home Directory for containers
WORKDIR /FogAPP/src
# Copying src code to Container
COPY fogserver.py /FogAPP/src
# Application Environment variables
ENV APP_ENV development
# Exposing Ports
EXPOSE 31700
# Setting Persistent data
VOLUME ["/app-data"]
#Running Python Application
CMD ["python", "fogserver.py"]
</code></pre>
<p>My source code fogserver.py (socket programming) :</p>
<pre><code>import socket
from datetime import datetime
import os
def ReceiveDATA():
hostname = socket.gethostname()
i=0
host = socket.gethostbyname(hostname)
port = 31700
while True:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # Create a socket object
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port)) # Bind to the port
s.listen(10) # Accepts up to 10 clientections.
print("############################# ",i+1," #################################")
print('Server listening.... on '+ str(host))
client, address = s.accept()
print('Connection from : ',address[0])
i+=1
date=str(datetime.now())
date=date.replace('-', '.')
date=date.replace(' ', '-')
date=date.replace(':', '.')
PATH = 'ClientDATA-'+date+'.csv'
print(date+" : File created")
f = open(PATH,'wb') #open in binary
# receive data and write it to file
l = client.recv(1024)
while (l):
f.write(l)
l = client.recv(1024)
f.close()
dt=str(datetime.now())
dt=dt.replace('-', '.')
dt=dt.replace(' ', '-')
dt=dt.replace(':', '.')
print(dt+' : '+'Successfully get the Data')
feedback = dt
client.send(feedback.encode('utf-8'))
client.close()
s.close()
if __name__ == '__main__':
ReceiveDATA()
</code></pre>
<p>My kubernetes cluster is ready : </p>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
rpimanager Ready master 3d23h v1.15.0
rpiworker1 Ready worker 3d23h v1.15.0
rpiworker2 Ready worker 3d23h v1.15.0
</code></pre>
<p>Then I have deployed the docker image in 2 pods through the kubernetes dashboard :</p>
<pre><code>kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cluster-fogapp NodePort 10.101.194.192 <none> 80:31700/TCP 52m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d23h
</code></pre>
<p>So the docker image is actually running in two pods:</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
cluster-fogapp-c987dfffd-6zc2x 1/1 Running 0 56m
cluster-fogapp-c987dfffd-gq5k4 1/1 Running 0 56m
</code></pre>
<p>I also have client source code, which also uses socket programming. Here I ran into a problem: which address of the server in the cluster should I use?</p>
<p>This is my client code source : </p>
<pre><code>
host = "????????????"#Which Address should I set
port = 31700
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
PATH = GenerateDATA()
f = open (PATH, "rb")
l = f.read(1024)
while (l):
s.send(l)
l = f.read(1024)
print(dt+' : '+'Done sending')
</code></pre>
<p>I have tried the address of the master node and I get a Connection refused error.</p>
<p>I would just like to clarify that I am working on a cluster composed of Raspberry Pi 3 boards, and the client is on my own PC. The PC and the Raspberry Pi boards are connected to the same local network.</p>
<p>Thank you for helping me. </p>
| Abid Omar | <p>You can access the service with the worker node's IP, since you exposed the service as NodePort.</p>
<p><code>WorkerNode:<NodePort></code></p>
<p>The problem with this approach is that if any of the nodes are dead, you might face issues. The ideal solution is to expose the service as LoadBalancer, so that you can access the service outside the cluster with an external IP or DNS.</p>
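<p>To find the values to plug into the client, something like this should work (whether you use the node's internal or external IP depends on your network setup):</p>
<pre><code>kubectl get nodes -o wide        # pick a worker node IP
kubectl get svc cluster-fogapp   # 80:31700/TCP -> the NodePort is 31700
</code></pre>
<p>Then in the client code, <code>host</code> would be that worker node IP and <code>port = 31700</code>.</p>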
| Malathi |
<p>I provisioned alertmanager using Helm (and ArgoCD).
I need to insert the smtp_auth_password value, but not as plain text.</p>
<pre><code>smtp_auth_username: 'apikey'
smtp_auth_password: $API_KEY
</code></pre>
<p>How can I achieve this? I heard about "external secrets", but is this the easiest way?</p>
| Tomer Aharon | <h3>Solution</h3>
<p>If you use <code>prometheus-community/prometheus</code>, which includes this alertmanager <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/alertmanager" rel="nofollow noreferrer">chart</a> as a dependency, then you can do the following:</p>
<p><strong>create secret</strong> in the same namespace where your alertmanager pod is running:</p>
<pre class="lang-bash prettyprint-override"><code>k create secret generic alertmanager-secrets \
--from-literal="opsgenie-api-key=YOUR-OPSGENIE-API-KEY" \
--from-literal="slack-api-url=https://hooks.slack.com/services/X03R2856W/A14T19TKEGM/...."
</code></pre>
<p><strong>mount that secret</strong> via use of extraSecretMounts</p>
<pre class="lang-yaml prettyprint-override"><code>alertmanager:
enabled: true
service:
annotations:
prometheus.io/scrape: "true"
# contains secret values for opsgenie and slack receivers
extraSecretMounts:
- name: secret-files
mountPath: /etc/secrets
subPath: ""
secretName: alertmanager-secrets
readOnly: true
</code></pre>
<p><strong>use them in your receivers</strong>:</p>
<pre class="lang-yaml prettyprint-override"><code>receivers:
- name: slack-channel
slack_configs:
- channel: '#client-ccf-ccl-alarms'
api_url_file: /etc/secrets/slack-api-url <-------------------THIS
title: '{{ template "default.title" . }}'
text: '{{ template "default.description" . }}'
pretext: '{{ template "slack.pretext" . }}'
color: '{{ template "slack.color" . }}'
footer: '{{ template "slack.footer" . }}'
send_resolved: true
actions:
- type: button
text: "Query :mag:"
url: '{{ template "alert_query_url" . }}'
- type: button
text: "Silence :no_bell:"
url: '{{ template "alert_silencer_url" . }}'
- type: button
text: "Karma UI :mag:"
url: '{{ template "alert_karma_url" . }}'
- type: button
text: "Runbook :green_book:"
url: '{{ template "alert_runbook_url" . }}'
- type: button
text: "Grafana :chart_with_upwards_trend:"
url: '{{ template "alert_grafana_url" . }}'
- type: button
text: "KB :mag:"
url: '{{ template "alert_kb_url" . }}'
- name: opsgenie
opsgenie_configs:
- send_resolved: true
api_key_file: /etc/secrets/opsgenie-api-key <-------------------THIS
message: '{{ template "default.title" . }}'
description: '{{ template "default.description" . }}'
source: '{{ template "opsgenie.default.source" . }}'
priority: '{{ template "opsgenie.default.priority" . }}'
tags: '{{ template "opsgenie.default.tags" . }}'
</code></pre>
<p>If you want to use email functionality of <a href="https://prometheus.io/docs/alerting/latest/configuration/#email_config" rel="nofollow noreferrer">email_config</a>
then simply use the same approach with:</p>
<pre><code>[ auth_password_file: <string> | default = global.smtp_auth_password_file ]
</code></pre>
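<p>For the SMTP case in the question, a sketch of a receiver that reads the API key from a mounted secret file (the recipient, smarthost and mount path below are assumptions, following the same <code>extraSecretMounts</code> pattern as above):</p>
<pre class="lang-yaml prettyprint-override"><code>receivers:
  - name: email
    email_configs:
      - to: 'team@example.com'                 # hypothetical recipient
        from: 'alertmanager@example.com'
        smarthost: 'smtp.sendgrid.net:587'
        auth_username: 'apikey'
        # read the secret from the file mounted via extraSecretMounts,
        # instead of putting it in the config as plain text
        auth_password_file: /etc/secrets/smtp-password
</code></pre>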
| DmitrySemenov |
<p>When DNS service discovery in Kubernetes already helps us establish connections between services, what is the need for a service mesh like Istio in Kubernetes?</p>
| Bala | <p>In addition to the service discovery, few other things are available when you use Istio on top of Kubernetes:</p>
<ul>
<li>Blue-green / canary deployments with request routing to the new release based on a percentage value given by the user (see the sketch after this list)</li>
<li>Rate limiting</li>
<li>Integration with grafana and prometheus for monitoring </li>
<li>Service graph visualisation with Kiali</li>
<li>Circuit breaking</li>
<li>Tracing and metrics without having to instrument your applications</li>
<li>You will be able to secure your connections at mesh level via mTLS</li>
</ul>
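<p>As an illustration of the first point, a minimal sketch of weighted routing with a <code>VirtualService</code> (the service name <code>reviews</code> and the subsets are made up here) — something plain Kubernetes DNS-based service discovery cannot do by itself:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews             # hypothetical service name
  http:
    - route:
        - destination:
            host: reviews
            subset: v1    # subsets would be defined in a DestinationRule
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
</code></pre>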
<p>You can read more about the advantages of <a href="https://medium.com/google-cloud/istio-why-do-i-need-it-18d122838ee3" rel="nofollow noreferrer">having Istio in your cluster here</a>.</p>
| Malathi |
<p>trying to get started testing kubernetes with inspec using: <a href="https://github.com/bgeesaman/inspec-k8s" rel="nofollow noreferrer">https://github.com/bgeesaman/inspec-k8s</a></p>
<p>Im running it from the <code>make</code> and <code>docker</code> image found here: <a href="https://github.com/bgeesaman/inspec-k8s-sample" rel="nofollow noreferrer">https://github.com/bgeesaman/inspec-k8s-sample</a></p>
<p>I have multiple <code>eks</code> clusters and a local <code>docker-desktop</code> cluster. When i try and connect to any of them via: <code>inspec exec . -t k8s://docker-desktop</code> (Matching the kubeconfig -name: xxx to the value put after <code>k8s://</code>) I always get the same error:</p>
<pre><code># inspec exec -t k8s://docker-desktop
Traceback (most recent call last):
20: from /usr/local/bundle/bin/inspec:23:in `<main>'
19: from /usr/local/bundle/bin/inspec:23:in `load'
18: from /usr/local/bundle/gems/inspec-bin-4.18.51/bin/inspec:11:in `<top (required)>'
17: from /usr/local/bundle/gems/inspec-4.18.51/lib/inspec/base_cli.rb:35:in `start'
16: from /usr/local/bundle/gems/thor-0.20.3/lib/thor/base.rb:466:in `start'
15: from /usr/local/bundle/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
14: from /usr/local/bundle/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
13: from /usr/local/bundle/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
12: from /usr/local/bundle/gems/inspec-4.18.51/lib/inspec/cli.rb:284:in `exec'
11: from /usr/local/bundle/gems/inspec-4.18.51/lib/inspec/cli.rb:284:in `new'
10: from /usr/local/bundle/gems/inspec-4.18.51/lib/inspec/runner.rb:78:in `initialize'
9: from /usr/local/bundle/gems/inspec-4.18.51/lib/inspec/runner.rb:86:in `configure_transport'
8: from /usr/local/bundle/gems/inspec-4.18.51/lib/inspec/backend.rb:53:in `create'
7: from /usr/local/bundle/gems/train-kubernetes-0.1.6/lib/train-kubernetes/transport.rb:9:in `connection'
6: from /usr/local/bundle/gems/train-kubernetes-0.1.6/lib/train-kubernetes/transport.rb:9:in `new'
5: from /usr/local/bundle/gems/train-kubernetes-0.1.6/lib/train-kubernetes/connection.rb:13:in `initialize'
4: from /usr/local/bundle/gems/train-kubernetes-0.1.6/lib/train-kubernetes/connection.rb:36:in `parse_kubeconfig'
3: from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/client.rb:40:in `config'
2: from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/transport.rb:81:in `config'
1: from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/transport.rb:117:in `token_from_exec'
/usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/transport.rb:117:in ``': No such file or directory - aws (Errno::ENOENT)
</code></pre>
<p>I thought it was because of <code>eks</code> kubeconfigs being linked to the aws profile. But i get the same error for docker-desktop as well.</p>
<p>I tried updating the <code>Makefile</code> <code>COMMAND</code> with: <code>COMMAND=docker run --rm -it -v `pwd`:$(WORKDIR) -v $(HOME)/.kube:/root/.kube:ro -v $(HOME)/.aws:/root/.aws:ro</code></p>
<p>After the error ends with <code>No such file or directory - aws</code> but no joy.</p>
<p>Any ideas how to resolve or progress?</p>
<p>Thanks</p>
<p>Small update: it did start running after making sure the names were correct, but then stopped again.</p>
<p>I had connected to docker-desktop (it wasn't running when I originally ran it),
and I had connected to an eks cluster.</p>
<p>I did a <code>vi controls/basic.rb</code> to start playing with my tests, and it started erroring again.</p>
<p>I thought it might be erroring due to a syntax problem with my changes, so I did a new <code>make build</code>, but still no joy :(</p>
<p>I have also tried updating the chef/inspec image to the latest 4.26, but this breaks the dockerfile as it doesn't have <code>apk</code> anymore.</p>
| Staggerlee011 | <p>OK, I don't fully understand why, but I can get it to run:</p>
<p>It looks to be linked to my use of <code>kubectx</code>. If I set <code>kubectx</code> to <code>docker-desktop</code> and then run the docker image, it works. If it is set to anything else, it doesn't.</p>
| Staggerlee011 |
<p>I'm running a Azure AKS cluster with both Windows and Linux VMs. </p>
<p>I can curl the cluster service by name from a pod in the Istio namespace, so I know TCP to the pod works. I believe I need to inform my Virtual Service in some way to <em>not</em> route through the envoy proxy, but just forward requests directly to the k8s service endpoint - similar to as if it were a VM external to the mesh. I do have TLS terminating at the gateway - the k8s service itself is just exposed inside the cluster on port 80.</p>
<p>Currently, there is no envoy sidecar for Windows containers, but from k8s perspective, this is just another service in the same cluster Istio is running in.</p>
<p>http-gateway.yaml </p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
annotations:
name: http-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- "*.myapp.com"
port:
number: 80
name: http-80-gateway
protocol: HTTP
tls:
httpsRedirect: true # sends 301 redirect for http requests
- hosts:
- "*.myapp.com"
port:
number: 443
name: https-443-gateway
protocol: HTTPS
tls:
credentialName: cert-azure-dns
privateKey: sds
serverCertificate: sds
mode: SIMPLE
</code></pre>
<p>virtual-service.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myapp-vsvc
namespace: myapp-ns
spec:
hosts:
- foo #external DNS name is foo.myapp.com; gateway host for HTTPS is '*.myapp.com'
gateways:
- istio-system/http-gateway
http:
- route:
- destination:
host: myapp-svc.myapp-ns.svc.cluster.local
port:
number: 80
</code></pre>
<p>Attempting an <a href="https://istio.io/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services" rel="nofollow noreferrer">Envoy Passthrough</a> I've added a ServiceEntry like the following:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: myapp-se
namespace: myapp-ns
spec:
hosts:
- myapp-svc.myapp-ns.svc.cluster.local
ports:
- number: 80
name: http
protocol: HTTP
resolution: DNS
location: MESH_EXTERNAL
</code></pre>
<p>The server response is a 404 with a "server" header value of "istio-envoy". </p>
<p>DNS is resolving to the gateway correctly and the acme cert is valid - so this error usually indicates I've made it to the Virtual Service, but haven't been routed to a cluster service. In Kiali, there are no Istio validation errors on any of my yaml definitions: virtual service, service entry or gateway.</p>
<p>My global.outboundTrafficPolicy.mode is set to "ALLOW_ANY". </p>
<p>I wonder if declaring "EXTERNAL_MESH" for a cluster service is a problem? Istio knows the k8s service exists, so is it trying to give priority to routing to the envoy sidecar and ignoring my service entry registration?</p>
<p>There is an <a href="https://istio.io/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services" rel="nofollow noreferrer">option to bypass envoy altogether</a> for specific IP ranges, which would be an option if I could somehow set a static IP on this particular cluster service. I want to bypass envoy for ingress to this one cluster service.</p>
| Matthew | <p>I could have sworn I tried this before, but apparently all I needed to provide was a simple Virtual Service <em>without</em> any Destination Rule or Service Entry. </p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myapp-vsvc
namespace: myapp-ns
spec:
hosts:
- foo.myapp.com
gateways:
- istio-system/http-gateway
http:
- route:
- destination:
host: myapp-svc.myapp-ns.svc.cluster.local
port:
number: 80
</code></pre>
| Matthew |
<p>My private AKS Cluster is accessible only to the root user using <code>kubectl</code> on a jumphost. But for a non-root user it throws below error message:</p>
<pre class="lang-bash prettyprint-override"><code>someuser@jump-vm$ kubectl get pods -A
Error from server (Forbidden): pods is forbidden: User "XX-XX-XX-XX-XX" cannot list resource "XX" in API group " " at the cluster scope
</code></pre>
<p>How to resolve this error?</p>
| Rajesh Swarnkar | <p>It seems the Azure VM from which the private AKS cluster was being accessed was set to restart automatically, which caused some issue with <code>kubectl</code> or <code>kubelogin</code>.</p>
<p>I followed the steps below for both the root and the non-root user, and afterwards <code>kubectl</code> worked successfully.</p>
<pre class="lang-bash prettyprint-override"><code>root@jump-vm# cd ~ && cd .kube/
root@jump-vm# rm -r cache && rm config
root@jump-vm# az login --tenant <tenant-id>
root@jump-vm# az account set --subscription <subscription-id>
root@jump-vm# az aks get-credentials --resource-group <resource-group-name> --name <aks-clutser-name>
root@jump-vm# kubelogin convert-kubeconfig -l azurecli
someuser@jump-vm$ cd ~ && cd .kube/
someuser@jump-vm$ rm -r cache && rm config
someuser@jump-vm$ az login --tenant <tenant-id>
someuser@jump-vm$ az account set --subscription <subscription-id>
someuser@jump-vm$ az aks get-credentials --resource-group <resource-group-name> --name <aks-clutser-name>
someuser@jump-vm$ kubelogin convert-kubeconfig -l azurecli
</code></pre>
| Rajesh Swarnkar |
<p>There are Kubernetes RBAC in Amazon EKS with Pulumi instructions for TypeScript.</p>
<pre class="lang-js prettyprint-override"><code>const vpc = new awsx.ec2.Vpc("vpc", {});
const cluster = new eks.Cluster("eks-cluster", {
vpcId : vpc.id,
subnetIds : vpc.publicSubnetIds,
instanceType : "t2.medium",
nodeRootVolumeSize: 200,
desiredCapacity : 1,
maxSize : 2,
minSize : 1,
deployDashboard : false,
vpcCniOptions : {
warmIpTarget : 4,
},
roleMappings : [
// Provides full administrator cluster access to the k8s cluster
{
groups : ["system:masters"],
roleArn : clusterAdminRole.arn,
username : "pulumi:admin-usr",
},
// Map IAM role arn "AutomationRoleArn" to the k8s user with name "automation-usr", e.g. gitlab CI
{
groups : ["pulumi:automation-grp"],
roleArn : AutomationRole.arn,
username : "pulumi:automation-usr",
},
// Map IAM role arn "EnvProdRoleArn" to the k8s user with name "prod-usr"
{
groups : ["pulumi:prod-grp"],
roleArn : EnvProdRole.arn,
username : "pulumi:prod-usr",
},
],
});
</code></pre>
<blockquote>
<p>Kubernetes RBAC in AWS EKS with open source Pulumi packages | Pulumi <a href="https://www.pulumi.com/blog/simplify-kubernetes-rbac-in-amazon-eks-with-open-source-pulumi-packages/" rel="nofollow noreferrer">https://www.pulumi.com/blog/simplify-kubernetes-rbac-in-amazon-eks-with-open-source-pulumi-packages/</a></p>
</blockquote>
<p>I'm looking for how to achieve this with .NET C#.
It looks like the eks roleMappings extension is only available for TypeScript, so in C# I may need to construct the ConfigMap manifest myself with Pulumi.Kubernetes?</p>
<blockquote>
<p><a href="https://github.com/pulumi/pulumi-aws/blob/c672e225a765b11b07ea23e7b1b411483d7f38da/sdk/dotnet/Eks/Cluster.cs" rel="nofollow noreferrer">https://github.com/pulumi/pulumi-aws/blob/c672e225a765b11b07ea23e7b1b411483d7f38da/sdk/dotnet/Eks/Cluster.cs</a></p>
<p><a href="https://github.com/pulumi/pulumi-eks" rel="nofollow noreferrer">https://github.com/pulumi/pulumi-eks</a></p>
</blockquote>
| guitarrapc | <p>The <code>pulumi-eks</code> package is currently only available in TypeScript. There is a plan to bring it to all languages later this year, but for now you basically have two options:</p>
<ol>
<li><p>Use TypeScript. If needed, break down your complete deployment into multiple stacks. The stack that defines the EKS package would be in TypeScript, while other stacks can be in C#.</p></li>
<li><p>Refer to the <code>pulumi-eks</code> implementation that you linked above and transfer that code manually to C#. This is non-trivial work, so be careful with your feasibility estimation (a minimal sketch of the ConfigMap part is shown after this list).</p></li>
</ol>
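<p>As a rough illustration of option 2: the part of <code>pulumi-eks</code> that applies <code>roleMappings</code> ultimately writes the <code>aws-auth</code> ConfigMap in <code>kube-system</code>. A hedged sketch of doing that directly with Pulumi.Kubernetes in C# (the role variables, the provider, and the exact YAML content are assumptions — and note that the node instance role mapping that EKS itself relies on must also be kept in this ConfigMap, or nodes will lose access):</p>
<pre class="lang-cs prettyprint-override"><code>using Pulumi;
using Pulumi.Kubernetes.Core.V1;
using Pulumi.Kubernetes.Types.Inputs.Core.V1;
using Pulumi.Kubernetes.Types.Inputs.Meta.V1;

// clusterAdminRole / automationRole are hypothetical Aws.Iam.Role resources
// defined elsewhere in the stack; k8sProvider is a Kubernetes provider
// pointed at the new EKS cluster's kubeconfig.
var awsAuth = new ConfigMap("aws-auth", new ConfigMapArgs
{
    Metadata = new ObjectMetaArgs
    {
        Name = "aws-auth",
        Namespace = "kube-system",
    },
    Data =
    {
        // mapRoles is plain YAML embedded as a string
        { "mapRoles", Output.Format($@"
- rolearn: {clusterAdminRole.Arn}
  username: pulumi:admin-usr
  groups:
    - system:masters
- rolearn: {automationRole.Arn}
  username: pulumi:automation-usr
  groups:
    - pulumi:automation-grp
") },
    },
}, new CustomResourceOptions { Provider = k8sProvider });
</code></pre>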
| Mikhail Shilkov |
<p>I am running celery flower on port inside Kubernetes with nginx-ingress controller</p>
<p>I want to do a rewrite where requests to /flower/(.*) request goes to /$1 according to their documentation:
<a href="https://flower.readthedocs.io/en/latest/config.html?highlight=nginx#url-prefix" rel="nofollow noreferrer">https://flower.readthedocs.io/en/latest/config.html?highlight=nginx#url-prefix</a></p>
<pre><code>server {
listen 80;
server_name example.com;
location /flower/ {
rewrite ^/flower/(.*)$ /$1 break;
proxy_pass http://example.com:5555;
proxy_set_header Host $host;
}
}
</code></pre>
<p>I have come up with the following ingress.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: backend-airflow-ingress
namespace: edna
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/use-regex: "true"
ingress.kubernetes.io/rewrite-target: /$2
# nginx.ingress.kubernetes.io/app-root: /flower
spec:
rules:
- host:
http:
paths:
- path: /flower(/|$)(.*)
backend:
serviceName: airflow-flower-service
servicePort: 5555
</code></pre>
<p>Inside POD running flower, I successfully get</p>
<pre class="lang-sh prettyprint-override"><code>curl localhost:5555/dashboard
</code></pre>
<p>However if get into the POD running Nginx controller, then it fails.</p>
<pre><code>curl localhost/flower/dashboard
</code></pre>
<p>I get response by the flower:</p>
<pre class="lang-html prettyprint-override"><code><div class="span12">
<p>
Error, page not found
</p>
</div>
</code></pre>
<p>this is what I see inside nginx.conf in nginx-controller pod</p>
<pre><code>server {
server_name _ ;
listen 80 default_server reuseport backlog=511 ;
listen 443 default_server reuseport backlog=511 ssl http2 ;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
location ~* "^/flower(/|$)(.*)" {
set $namespace "edna";
set $ingress_name "backend-airflow-ingress";
set $service_name "";
set $service_port "";
set $location_path "/flower(/|${literal_dollar})(.*)";
rewrite_by_lua_block {
lua_ingress.rewrite({
force_ssl_redirect = false,
ssl_redirect = true,
force_no_ssl_redirect = false,
use_port_in_redirects = false,
})
balancer.rewrite()
plugins.run()
}
# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
# other authentication method such as basic auth or external auth useless - all requests will be allowed.
#access_by_lua_block {
#}
header_filter_by_lua_block {
lua_ingress.header()
plugins.run()
}
body_filter_by_lua_block {
}
</code></pre>
| DmitrySemenov | <p><strong>Ok figured this out</strong></p>
<pre><code>ingress.kubernetes.io/rewrite-target: /$2
</code></pre>
<p>in my case should be the nginx-prefixed annotation instead:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$2
</code></pre>
| DmitrySemenov |
<p>I have the bare metall kubernetes pod running tomcat application on port 8085. If it would be common server, the app would be accessible via <code>http://<server-ip>:8085/app</code>. My goal is to expose the tomcat on Kubernetes node's address and the same port as used in tomcat.</p>
<p>I am able to expose and access app using Node Port service - but it is inconvenient that port is always different.
I tried to setup traefik ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-tag2
spec:
rules:
- host: kubernetes.example.com #in my conf I use node's domain name
http:
paths:
- path: /test
backend:
serviceName: test-tag2
servicePort: 8085
</code></pre>
<p>And I can see result in Traefik's dashboard, but still if I navigate to <code>http://kubernetes.example.com/test/app</code> I get nothing.</p>
<p>I've tried a bunch of ways to configure that and still no luck.
Is it actually possible to expose my pod in this way?</p>
| Anton | <p>Did you try specifying a nodePort value in the service yaml? If specified, Kubernetes will create the service on the specified NodePort. If the nodePort is not available, Kubernetes doesn't create the service.</p>
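<p>A minimal sketch of such a service (the name and selector label are assumptions; also note that by default the apiserver only allows NodePorts in the 30000-32767 range, so using 8085 requires changing <code>--service-node-port-range</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
spec:
  type: NodePort
  selector:
    app: tomcat          # assumed pod label
  ports:
    - port: 8085
      targetPort: 8085
      nodePort: 8085     # must be inside the cluster's NodePort range
</code></pre>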
<p>Refer to this answer for more details:
<a href="https://stackoverflow.com/a/43944385/1237402">https://stackoverflow.com/a/43944385/1237402</a></p>
| Malathi |
<p>I have launched a <strong>private</strong> <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html" rel="nofollow noreferrer">GKE cluster</a> using terraform resource <code>"google_container_cluster"</code> with a <code>private_cluster_config</code> block in it.</p>
<p>I have added <code>master_authorized_networks_config</code> to allow my own IP address in authorized networks for the GKE.</p>
<p>And I have added <a href="https://www.terraform.io/docs/providers/kubernetes/r/namespace.html" rel="nofollow noreferrer">k8s namespace</a> using terraform resource <code>"kubernetes_namespace"</code>.</p>
<p>I have also set all google, kubernetes providers, k8s token, cluster_ca_certificate etc correctly and the namespace was indeed provisioned by this terraform.</p>
<hr>
<pre><code>resource "google_container_cluster" "k8s_cluster" {
# .....
# .....
private_cluster_config {
enable_private_nodes = true
enable_private_endpoint = false
master_ipv4_cidr_block = "172.16.0.0/28"
}
ip_allocation_policy { } # enables VPC-native
master_authorized_networks_config {
cidr_blocks {
{
cidr_block = "0.0.0.0/0"
display_name = "World"
}
}
}
# .....
# .....
}
</code></pre>
<pre><code>data "google_client_config" "google_client" {}
data "google_container_cluster" "k8s_cluster" {
name = google_container_cluster.k8s_cluster.name
location = var.location
}
provider "kubernetes" {
# following this example https://www.terraform.io/docs/providers/google/d/datasource_client_config.html#example-usage-configure-kubernetes-provider-with-oauth2-access-token
version = "1.11.1"
load_config_file = false
host = google_container_cluster.k8s_cluster.endpoint
token = data.google_client_config.google_client.access_token
cluster_ca_certificate = base64decode(
data.google_container_cluster.k8s_cluster.master_auth.0.cluster_ca_certificate
)
}
resource "kubernetes_namespace" "namespaces" {
depends_on = [google_container_node_pool.node_pool]
for_each = ["my-ns"]
metadata {
name = each.value
}
}
</code></pre>
<hr>
<p>Then I ran <code>terraform apply</code> and the namespace was created fine ✅✅✅</p>
<pre><code>kubernetes_namespace.namespaces["my-ns"]: Creating...
kubernetes_namespace.namespaces["my-ns"]: Creation complete after 1s [id=my-ns]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
</code></pre>
<hr>
<p>However, when I run <code>terraform apply</code> or <code>terraform plan</code> again and terraform is trying to refresh the namespace resource, </p>
<pre><code>data.google_container_cluster.k8s_cluster: Refreshing state...
kubernetes_namespace.namespaces["my-ns"]: Refreshing state... [id=my-ns]
</code></pre>
<p>it's throwing the following error <strong>intermittently</strong>. ❌ ❌ ❌</p>
<pre><code>Error: Get http://localhost/api/v1/namespaces/my-ns: dial tcp 127.0.0.1:80: connect: connection refused
</code></pre>
<p>It's sometimes passing and sometimes failing - <strong>intermittently</strong>.</p>
<hr>
<p>Where would you advise I should look into to fix this intermittent error?</p>
| Rakib | <p>In my case, the source of the issue was <a href="https://www.terraform.io/docs/commands/import.html#provider-configuration" rel="nofollow noreferrer">this</a>:</p>
<blockquote>
<p>The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs</p>
</blockquote>
<p>In your case, your kubernetes <code>provider</code> block has several config options that are variables:</p>
<pre><code> host = google_container_cluster.k8s_cluster.endpoint
token = data.google_client_config.google_client.access_token
</code></pre>
<p>My workaround was to create a kubeconfig.yaml file and temporarily replace the provider config with something like the following:</p>
<pre><code>provider "kubernetes" {
config_path = "kubeconfig.yaml"
}
</code></pre>
<p>This allowed me to run the import, and then I restored the previous variable-based config.</p>
| kenske |
<p>I have three instances for kubernetes cluster and three instances for mongo cluster as shown here:</p>
<p><a href="https://i.stack.imgur.com/uOfTE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uOfTE.png" alt="enter image description here"></a></p>
<p>I can access my mongo cluster from app console and other compute instances using uri like this:</p>
<pre><code>mongo mongodb:root:passwd@mongodb-1-servers-vm-0:27017,mongodb-1-servers-vm-1:27017/devdb?replicaSet=rs0
</code></pre>
<p>I also tried replacing instance names with internal and external ip addresses, but that didn't help it either. </p>
<p>But the same command does not work from instances inside the kubernetes cluster. I assume that I have to configure some kind of permissions for my kubernetes cluster to access the compute instances? Can someone help?</p>
| Ben | <p>Ok, I managed to find a solution, though I'm not sure it's the best one.</p>
<p>First we add firewall rules to allow mongodb traffic </p>
<pre><code>gcloud compute firewall-rules create allow-mongodb --allow tcp:27017
</code></pre>
<p>Then we use the external IPs to connect to mongodb from the kubernetes instances:</p>
<pre><code>mongodb:root:passwd@<ip1>:27017,<ip2>:27017/devdb?replicaSet=rs0
</code></pre>
| Ben |
<p>I have a got a kubernetes mysql pod which is exposed as a nodePort like shown below</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: demo-mysql
labels:
app: demo
spec:
type: NodePort
ports:
- port: 3306
nodePort: 32695
</code></pre>
<p>I am trying to access this mysql server using the command below</p>
<pre><code>mysql -u root -h 117.213.118.86 -p 32695
</code></pre>
<p>but I get this error</p>
<pre><code>ERROR 2003 (HY000): Can't connect to MySQL server on '117.213.118.86' (111)
</code></pre>
<p>What am I doing wrong here ?</p>
| jeril | <p>If you want to connect to a remote mysql service, you have to specify an Endpoints object that holds the remote service's IP address, like this: </p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
name: demo-mysql
subsets:
- addresses:
- ip: 192.0.2.42
ports:
- port: 3306
</code></pre>
<p>More details <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">here</a>.</p>
| Malathi |
<p>I set up a simple redis ClusterIP service to be accessed by a php LoadBalancer service inside the Cluster. The php log shows the connection timeout error. The redis service is not accessible. </p>
<pre><code>'production'.ERROR: Operation timed out {"exception":"[object] (RedisException(code: 0):
Operation timed out at /var/www/html/vendor/laravel/framework/src/Illuminate/Redis
/html/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php(109):
Redis->connect('redis-svc', '6379', 0, '', 0, 0)
</code></pre>
<p>My redis service is quite simple so I don't know what went wrong:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
io.kompose.service: redis
name: redis
spec:
replicas: 1
strategy: {}
template:
metadata:
labels:
io.kompose.service: redis
spec:
containers:
- image: redis:alpine
name: redis
resources: {}
ports:
- containerPort: 6379
restartPolicy: Always
status: {}
---
kind: Service
apiVersion: v1
metadata:
name: redis-svc
spec:
selector:
app: redis
ports:
- protocol: TCP
port: 6379
targetPort: 6379
type: ClusterIP
</code></pre>
<p>I verified redis-svc is running, so why can't it be accessed by the other service?</p>
<pre><code>kubectl get service redis-svc git:k8s*
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-svc ClusterIP 10.101.164.225 <none> 6379/TCP 22m
</code></pre>
<p>This SO question, <a href="https://stackoverflow.com/questions/50852542/kubernetes-cannot-ping-another-service">kubernetes cannot ping another service</a>, said ping doesn't work with a service's cluster IP (indeed), so how do I verify whether redis-svc can be accessed or not?</p>
<p>---- update ----</p>
<p>My first question was a silly mistake, but I still don't know how to verify whether the service can be accessed or not (by its name). For example, I changed the service name to be the same as the deployment name and I found php failed to access redis again.</p>
<p><code>kubectl get endpoints</code> did not help now.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: redis
...
status: {}
---
kind: Service
apiVersion: v1
metadata:
name: redis
...
</code></pre>
<p>my php is another service with env set the redis's service name</p>
<pre><code>spec:
containers:
- env:
- name: REDIS_HOST # the php code access this variable
value: redis-svc #changed to "redis" when redis service name changed to "redis"
</code></pre>
<p>----- update 2------ </p>
<p>The reason I can't set my redis service name to "redis" is because "<a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">kubelet adds a set of environment variables for each active Service</a>", so with the name "redis", there will be a <code>REDIS_PORT=tcp://10.101.210.23:6379</code> which overwrites my own <code>REDIS_PORT=6379</code>.
But my php just expects the value of REDIS_PORT to be 6379.</p>
| Qiulang | <p>I ran the yaml configuration given by you and it created the deployment and service. However when I run the below commands:</p>
<pre class="lang-sh prettyprint-override"><code>>>> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d14h
redis-svc ClusterIP 10.105.31.201 <none> 6379/TCP 109s
>>>> kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.99.116:8443 5d14h
redis-svc <none> 78s
</code></pre>
<p>As you see, the endpoints for redis-svc are none, which means that the service doesn't have an endpoint to connect to. You are using the selector label <code>app: redis</code> in the redis-svc, but <strong>the pods don't have the selector label defined in the service.</strong> Adding the label <code>app: redis</code> to the pod template will make it work. The complete working yaml configuration of the deployment will look like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
io.kompose.service: redis
name: redis
spec:
replicas: 1
strategy: {}
template:
metadata:
labels:
io.kompose.service: redis
app: redis
spec:
containers:
- image: redis:alpine
name: redis
resources: {}
ports:
- containerPort: 6379
restartPolicy: Always
status: {}
</code></pre>
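<p>To verify the fix, you can check that the Service now has endpoints and try to reach it by name from a throwaway pod (the test pod name below is arbitrary):</p>
<pre class="lang-sh prettyprint-override"><code># the ENDPOINTS column should now show the redis pod's IP
kubectl get endpoints redis-svc

# connect to the service by its DNS name from inside the cluster
kubectl run redis-test --rm -it --image=redis:alpine -- redis-cli -h redis-svc ping
# expected reply: PONG
</code></pre>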
| Malathi |
<p>Ubuntu 16.04 LTS, Docker 17.12.1, Kubernetes 1.10.0 </p>
<p>Kubelet not starting:</p>
<p><em>Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a</em></p>
<p><em>Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Failed with result 'exit-code'.</em></p>
<p>Note: No issue with v1.9.1</p>
<p><strong>LOGS:</strong></p>
<pre><code>Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.518085 20051 docker_service.go:249] Docker Info: &{ID:WDJK:3BCI:BGCM:VNF3:SXGW:XO5G:KJ3Z:EKIH:XGP7:XJGG:LFBL:YWAJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:btrfs DriverStatus:[[Build Version Btrfs v4.15.1] [Library Vers
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.521232 20051 docker_service.go:262] Setting cgroupDriver to cgroupfs
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.532834 20051 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.533812 20051 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.05.0-ce, apiVersion: 1.37.0
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.534071 20051 csi_plugin.go:61] kubernetes.io/csi: plugin initializing...
Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.534846 20051 kubelet.go:903] Accelerators feature is deprecated and will be removed in v1.11. Please use device plugins instead. They can be enabled using the DevicePlugins feature gate.
Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.535035 20051 kubelet.go:909] GPU manager init error: couldn't get a handle to the library: unable to open a handle to the library, GPU feature is disabled.
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535082 20051 server.go:129] Starting to listen on 0.0.0.0:10250
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.535164 20051 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535189 20051 server.go:944] Started kubelet
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.535555 20051 event.go:209] Unable to write event: 'Post https://10.50.50.201:8001/api/v1/namespaces/default/events: dial tcp 10.50.50.201:8001: getsockopt: connection refused' (may retry after sleeping)
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535825 20051 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536202 20051 status_manager.go:140] Starting to sync pod status with apiserver
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536253 20051 kubelet.go:1782] Starting kubelet main sync loop.
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536285 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536464 20051 volume_manager.go:247] Starting Kubelet Volume Manager
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536613 20051 desired_state_of_world_populator.go:129] Desired state populator starts to run
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.538574 20051 server.go:299] Adding debug handlers to kubelet server.
Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.538664 20051 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.539199 20051 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.636465 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.636795 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.638630 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.638954 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.836686 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.839219 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.841028 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.841357 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.236826 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.241590 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.245081 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.245475 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.492206 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://10.50.50.201:8001/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.493216 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.50.50.201:8001/api/v1/pods?fieldSelector=spec.nodeName%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: co
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.494240 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://10.50.50.201:8001/api/v1/nodes?fieldSelector=metadata.name%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connecti
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.036893 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.045705 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.047489 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.047787 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.413319 20051 event.go:209] Unable to write event: 'Post https://10.50.50.201:8001/api/v1/namespaces/default/events: dial tcp 10.50.50.201:8001: getsockopt: connection refused' (may retry after sleeping)
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.492781 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://10.50.50.201:8001/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.493560 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.50.50.201:8001/api/v1/pods?fieldSelector=spec.nodeName%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: co
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.494574 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://10.50.50.201:8001/api/v1/nodes?fieldSelector=metadata.name%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connecti
Jun 22 06:45:57 dev-master hyperkube[20051]: W0622 06:45:57.549477 20051 manager.go:340] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.659932 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661447 20051 cpu_manager.go:155] [cpumanager] starting with none policy
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661459 20051 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661468 20051 policy_none.go:42] [cpumanager] none policy: Start
Jun 22 06:45:57 dev-master hyperkube[20051]: W0622 06:45:57.661523 20051 fs.go:539] stat failed on /dev/loop10 with error: no such file or directory
Jun 22 06:45:57 dev-master hyperkube[20051]: F0622 06:45:57.661535 20051 kubelet.go:1359] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 126 in cached partitions map
Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Failed with result 'exit-code'.
</code></pre>
| Hari | <p>Run the following command on all your nodes. It worked for me.</p>
<pre><code> swapoff -a
</code></pre>
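<p>To keep swap disabled across reboots, also comment out the swap entries in <code>/etc/fstab</code>, for example:</p>
<pre><code>sudo swapoff -a
# keep swap off after the next reboot by commenting out swap lines in fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab
</code></pre>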
| user1188867 |
<p>I'm developing a Pulumi ComponentResource named CopyPostgresql in Typescript.</p>
<p>CopyPostgreSql is a Kubernetes job that copies, in streaming fashion, the content of a source Postgresql database to a target Postgresql database. The options of CopyPostgreSql include the properties source and target. Both are of type DatabaseInput.</p>
<pre><code>export interface DatabaseInput {
readonly port: Input<number>;
readonly user: Input<string>;
readonly password: Input<string>;
readonly host: Input<string>;
readonly dbname: Input<string>;
}
</code></pre>
<p>So, I want to use port as the value of another property from another component, but that other property is of type Input< string >.</p>
<p>How can I apply (or transform) a value of type Input< number > to Input< string >? And in general: in Pulumi, is there an equivalent of pulumi.Output.apply for transforming pulumi.Input values?</p>
| gabomgp | <p>You can do <code>pulumi.output(inputValue).apply(f)</code>.</p>
<p>So, you can flow them back and forth:</p>
<pre><code>const input1: pulumi.Input<string> = "hi";
const output1 = pulumi.output(input1);
const output2 = output1.apply(s => s.toUpperCase());
const input2: pulumi.Input<string> = output2;
</code></pre>
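<p>For the number-to-string case in the question, the same pattern applies (assuming <code>port</code> is the <code>Input<number></code> you want to convert):</p>
<pre><code>const port: pulumi.Input<number> = 5432;
// wrap the Input in an Output, transform it, and the resulting
// Output<string> can be used wherever an Input<string> is expected
const portAsString: pulumi.Input<string> = pulumi.output(port).apply(p => p.toString());
</code></pre>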
| Mikhail Shilkov |
<p>I have a Pod with two containers.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: my-container
image: google/my-container:v1
- name: third-party
image: google/third-party:v1
</code></pre>
<p>One container is my image and the second is a third-party image whose stdout/stderr I can't control.<br>
I need my-container to be able to access the logs written by the third-party container.<br>
Inside "my-container" I want to collect all the stdout and stderr from the "third-party" container, add some metadata and write it with my logger.</p>
<p>I cant use a privileged container with volumeMounts.</p>
<p>If I could do something like this it was great.</p>
<pre><code> containers:
- name: my-container
image: google/my-container:v1
volumeMounts:
- name: varlog
mountPath: /var/log
- name: third-party
image: google/third-party:v1
stdout: /var/log/stdout
stderr: /var/log/stderr
volumes:
- name: varlog
emptyDir: {}
</code></pre>
| nassi.harel | <p>Based on the <a href="https://docs.docker.com/v17.09/engine/admin/logging/overview/" rel="nofollow noreferrer">logging driver</a> specified for docker, docker tracks the containers' logs. The default logging driver of docker is <code>json-file</code>, which writes the container's <code>stdout</code> and <code>stderr</code> to files on the host machine that runs docker; Kubernetes then links these files into the <code>/var/log/containers</code> folder.</p>
<p>In the case of kubernetes, the logs will therefore be available in each worker node's <code>/var/log/containers</code> folder. </p>
<p>Probably what you are looking for is the <a href="https://github.com/fluent/fluentd-kubernetes-daemonset" rel="nofollow noreferrer">fluentd daemonset</a>, which runs on each worker node and helps you move the logs to S3, CloudWatch or Elasticsearch. There are many sinks provided with fluentd; you can use one that suits your needs. I hope this is what you want to do with your <code>my-container</code>. </p>
| Malathi |
<p>I'm trying to use a ytt overlay to replace the <code>objectName</code> in my secret class following <a href="https://carvel.dev/ytt/#gist:https://gist.github.com/cppforlife/7633c2ed0560e5c8005e05c8448a74d2" rel="nofollow noreferrer">this gist example replacing only part of a multi-line string</a>, but it ends up appending a new item instead of replacing the existing one. How can I get it to work for this case?</p>
<h4>Input Files</h4>
<p>db_secret.yaml</p>
<pre><code>kind: SecretProviderClass
metadata:
namespace: default
name: db_credentials
spec:
provider: aws
parameters:
objects: |
- objectName: TO_BE_REPLACED_BY_YTT
objectType: "secretsmanager"
jmesPath:
- path: username
objectAlias: dbusername
- path: password
objectAlias: dbpassword
</code></pre>
<p>overlay.yaml</p>
<pre><code>#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:yaml", "yaml")
#@ load("@ytt:data", "data")
#@ def updates():
- objectName: #@ data.values.db_secret_name
#@ end
#@overlay/match by=overlay.subset({"kind": "SecretProviderClass", "metadata": {"name": "db_credentials"}})
---
spec:
provider: aws
parameters:
#@overlay/replace via=lambda a,_: yaml.encode(overlay.apply(yaml.decode(a), updates()))
objects:
</code></pre>
<p>values-staging.yaml</p>
<pre><code>db_secret_name: db-secret-staging
</code></pre>
<h4>ytt output:</h4>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
namespace: default
name: db_credentials
spec:
provider: aws
parameters:
objects: |
- objectName: TO_BE_REPLACED_BY_YTT
objectType: secretsmanager
jmesPath:
- path: username
objectAlias: dbusername
- path: password
objectAlias: dbpassword
- objectName: db-secret-staging
</code></pre>
| Mai Lubega | <p>It's key to notice that the YAML value being overlayed is itself an array. It's an array of one item, but an array nonetheless.</p>
<p>In order to reach the map that has the <code>objectName</code> map item, you need to first match its parent: that containing array item.</p>
<p>The most durable way to do that is by selecting for the array item that has a map that contains the <code>objectName</code> key. You can say that like this:</p>
<pre><code> #@ def updates():
+ #@overlay/match by=lambda idx,left,right: "objectName" in left
- objectName: #@ data.values.db_secret_name
#@ end
</code></pre>
<p>can be read: "in the array value being overlay'ed (aka 'left'), find the array item whose value has a map whose keys include the string "objectName"... merge the value of <em>this</em> array item (i.e. the map in the array item within this overlay) into <em>that</em> matched map."</p>
<p><em>(Playground: <a href="https://carvel.dev/ytt/#gist:https://gist.github.com/pivotaljohn/9593f971ac5962055ff38c5eeaf1df11" rel="nofollow noreferrer">https://carvel.dev/ytt/#gist:https://gist.github.com/pivotaljohn/9593f971ac5962055ff38c5eeaf1df11</a>)</em></p>
<p>When working with overlays, it can be helpful to visualize the tree of values. There are some nice examples in the docs: <a href="https://carvel.dev/ytt/docs/v0.40.0/yaml-primer/" rel="nofollow noreferrer">https://carvel.dev/ytt/docs/v0.40.0/yaml-primer/</a></p>
<p>Also, there was a recent-ish vlog post that has been reported to help folks level up using <code>ytt</code> overlays: <a href="https://carvel.dev/blog/primer-on-ytt-overlays/" rel="nofollow noreferrer">https://carvel.dev/blog/primer-on-ytt-overlays/</a></p>
| JTigger |
<p>Running Ubuntu 18.04 </p>
<p>kubectl : 1.10</p>
<p>Google Cloud SDK 206.0.0
alpha 2018.06.18
app-engine-python 1.9.70
app-engine-python-extras 1.9.70
beta 2018.06.18
bq 2.0.34
core 2018.06.18
gsutil 4.32</p>
<pre><code>helm init
$HELM_HOME has been configured at /home/jam/snap/helm/common.
Error: error installing: Post https://<ip>/apis/extensions/v1beta1/namespaces/kube-system/deployments: error executing access token command "/usr/lib/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=fork/exec /usr/lib/google-cloud-sdk/bin/gcloud: no such file or directory output= stderr=
</code></pre>
<p>I have copy pasted the command and it runs fine</p>
<p>Any help ? </p>
| hounded | <p>In my case, <code>/snap/google-cloud-sdk/127/bin/gcloud</code> was called.</p>
<p>I suppose I didn't do it "right", but I just linked my <code>gcloud</code> to the file <code>helm</code> wanted to run.</p>
<pre><code>sudo mkdir -p /snap/google-cloud-sdk/127/bin
sudo ln -s /usr/bin/gcloud /snap/google-cloud-sdk/127/bin/gcloud
</code></pre>
<p>After that, <code>helm</code> was able to find <code>gcloud</code>.</p>
| denis.peplin |
<p>I have a WebAPI application deployed in Kubernetes, and when the API is accessed, we need to log the system IP from which it was accessed. In short, I need to get the client IP / system IP from which the API gets invoked. In order to get the IP address, I am using</p>
<pre><code>HttpContext.Connection.RemoteIpAddress.MapToIPv4().ToString()
</code></pre>
<p>but it is always returning the Node IP Address of Kubernetes instead of client IP Address. The Service abstraction / service that are created in Kubernetes is of type "ClusterIP". </p>
<p>Is it possible to get the client IP with a service of type ClusterIP?.</p>
| user2003780 | <p>As per the <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">link</a> given by <a href="https://stackoverflow.com/users/201306/maciek-sawicki">Maciek Sawicki</a>, services of type ClusterIP are only accessible from within the cluster, so the traffic to such services comes either from nodes or from other pods.</p>
<p>However, if you want to log the IP address of the client, change the service type to NodePort or LoadBalancer and then set <code>service.spec.externalTrafficPolicy</code> to the value <code>Local</code>, as described in the above link.</p>
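<p>A minimal sketch of such a service (the name, selector and ports below are assumptions for your WebAPI deployment):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: webapi
spec:
  type: LoadBalancer            # or NodePort
  externalTrafficPolicy: Local  # preserves the original client source IP
  selector:
    app: webapi                 # assumed pod label
  ports:
    - port: 80
      targetPort: 80
</code></pre>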
| Malathi |
<p>We have big problems with setting limits/requests in our web django servers/python processors/celery workers. Our current strategy is to look at the usage of the graphs in the last 7 days:</p>
<p>1) get raw average excluding peak</p>
<p>2) add 30% buffer</p>
<p>3) set limits 2 time requests</p>
<p>It works more or less, but when a service's code changes, the limits that were set before are no longer valid. What are the other strategies?</p>
<p>How would you set up limits/requests for those graphs:</p>
<p>1) Processors:</p>
<p><a href="https://i.stack.imgur.com/si4CG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/si4CG.png" alt="enter image description here"></a></p>
<p>2) Celery-beat</p>
<p><a href="https://i.stack.imgur.com/ny7BE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ny7BE.png" alt="enter image description here"></a></p>
<p>3) Django (artifacts probably connected somehow to rollouts)</p>
<p><a href="https://i.stack.imgur.com/KDmmB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KDmmB.png" alt="enter image description here"></a></p>
| sacherus | <p>I would suggest you start with the average CPU and memory values that the application takes and then enable <strong>auto scaling</strong>. Kubernetes has multiple kinds of autoscaling:</p>
<ul>
<li>Horizontal pod autoscaler</li>
<li>Vertical pod autoscaler</li>
</ul>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal pod autoscaling</a> is commonly used these days. The HPA will automatically create new pods if the pods' CPU or memory usage exceeds the threshold you set.</p>
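<p>A minimal sketch of an HPA (assuming a Deployment named <code>django-web</code>, which is not from the question) could be:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: django-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: django-web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # scale out above 70% of the CPU request
</code></pre>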
<p>Monitor new releases before deployment and see why exactly the new release needs more memory. Troubleshoot and try to reduce the resource consumption. If that is not possible, update the resource requests and limits with the new CPU and memory values.</p>
| Malathi |
<p>When using the Kubernetes <a href="https://pkg.go.dev/k8s.io/client-go/kubernetes/fake" rel="nofollow noreferrer">Fake Client</a> to write unit tests, I noticed that it fails to create two identical objects which have their <a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#ObjectMeta" rel="nofollow noreferrer"><code>ObjectMeta.GenerateName</code></a> field set to some string. A real cluster accepts this specification and generates a unique name for each object.</p>
<p>Running the following test code:</p>
<pre><code>package main
import (
"context"
"testing"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/fake"
)
func TestFake(t *testing.T) {
ctx := context.Background()
client := fake.NewSimpleClientset()
_, err := client.CoreV1().Secrets("default").Create(ctx, &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "generated",
},
StringData: map[string]string{"foo": "bar"},
}, metav1.CreateOptions{})
assert.NoError(t, err)
_, err = client.CoreV1().Secrets("default").Create(ctx, &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "generated",
},
StringData: map[string]string{"foo": "bar"},
}, metav1.CreateOptions{})
assert.NoError(t, err)
}
</code></pre>
<p>fails with</p>
<pre><code>--- FAIL: TestFake (0.00s)
/Users/mihaitodor/Projects/kubernetes/main_test.go:44:
Error Trace: main_test.go:44
Error: Received unexpected error:
secrets "" already exists
Test: TestFake
FAIL
FAIL kubernetes 0.401s
FAIL
</code></pre>
| Mihai Todor | <p>According to <a href="https://github.com/kubernetes/client-go/issues/439#issuecomment-403867107" rel="nofollow noreferrer">this</a> GitHub issue comment:</p>
<blockquote>
<p>the fake clientset doesn't attempt to duplicate server-side behavior
like validation, name generation, uid assignment, etc. if you want to
test things like that, you can add reactors to mock that behavior.</p>
</blockquote>
<p>To add the required reactor, we can insert the following code before creating the <code>corev1.Secret</code> objects:</p>
<pre><code>// Additional imports needed (beyond those already in the test):
//   k8sTesting "k8s.io/client-go/testing"
//   "k8s.io/apimachinery/pkg/runtime"
//   "k8s.io/apiserver/pkg/storage/names"
client.PrependReactor(
"create", "*",
func(action k8sTesting.Action) (handled bool, ret runtime.Object, err error) {
ret = action.(k8sTesting.CreateAction).GetObject()
meta, ok := ret.(metav1.Object)
if !ok {
return
}
if meta.GetName() == "" && meta.GetGenerateName() != "" {
meta.SetName(names.SimpleNameGenerator.GenerateName(meta.GetGenerateName()))
}
return
},
)
</code></pre>
<p>There are a few gotchas in there:</p>
<ul>
<li>The <code>Clientset</code> contains an embedded <a href="https://pkg.go.dev/k8s.io/client-go/testing#Fake" rel="nofollow noreferrer"><code>Fake</code></a> structure which has the <a href="https://pkg.go.dev/k8s.io/client-go/testing#Fake.PrependReactor" rel="nofollow noreferrer"><code>PrependReactor</code></a> method we need to call for this use case (there are a few others). This code <a href="https://github.com/kubernetes/client-go/blob/d6c83109f030902f150f03f252311d2749cb6094/testing/fake.go#L140-L145" rel="nofollow noreferrer">here</a> is invoked when creating such objects.</li>
<li>The <code>PrependReactor</code> method has 3 parameters: <code>verb</code>, <code>resource</code> and <code>reaction</code>. For <code>verb</code>, <code>resource</code>, I couldn't find any named constants, so, in this case, "create" and "secrets" (strange that it's not "secret") seem to be the correct values for them if we want to be super-specific, but setting <code>resource</code> to "*" should be acceptable in this case.</li>
<li>The <code>reaction</code> parameter is of type <a href="https://pkg.go.dev/k8s.io/client-go/testing#ReactionFunc" rel="nofollow noreferrer">ReactionFunc</a>, which takes an <a href="https://pkg.go.dev/k8s.io/client-go/testing#Action" rel="nofollow noreferrer"><code>Action</code></a> as a parameter and returns <code>handled</code>, <code>ret</code> and <code>err</code>. After some digging, I noticed that the <code>action</code> parameter will be cast to <a href="https://pkg.go.dev/k8s.io/client-go/testing#CreateAction" rel="nofollow noreferrer"><code>CreateAction</code></a>, which has the <code>GetObject()</code> method that returns a <code>runtime.Object</code> instance, which can be cast to <a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Object" rel="nofollow noreferrer"><code>metav1.Object</code></a>. This interface allows us to get and set the various metadata fields of the underlying object. After setting the object <code>Name</code> field as needed, we have to return <code>handled = false</code>, <code>ret = mutatedObject</code> and <code>err = nil</code> to instruct the calling code to execute the remaining reactors.</li>
<li>Digging through the <code>apiserver</code> code, I noticed that the <code>ObjectMeta.Name</code> field <a href="https://github.com/kubernetes/apiserver/blob/f0b4663d4cd5caceddb64fd239053d29208104cd/pkg/registry/rest/create.go#L112-L114" rel="nofollow noreferrer">is generated</a> from the <code>ObjectMeta.GenerateName</code> field using the <a href="https://pkg.go.dev/k8s.io/apiserver/pkg/storage/names" rel="nofollow noreferrer"><code>names.SimpleNameGenerator.GenerateName</code></a> utility.</li>
</ul>
| Mihai Todor |
<p>I am new to Kubernetes and I am trying to create a simple front-end back-end application where front-end and back-end will have its own services. For some reason, I am not able to access back-end service by its name from front-end service.</p>
<p>Just because of simplicity, front-end service can be created like:<br/>
<code>kubectl run curl --image=radial/busyboxplus:curl -i --tty</code></p>
<p>When I do a nslookup I get the following:<br/></p>
<pre><code>[ root@curl-66bdcf564-rbx2k:/ ]$ nslookup msgnc-travel
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: msgnc-travel
Address 1: 10.100.171.209 msgnc-travel.default.svc.cluster.local
</code></pre>
<p>Service is available by its name msgnc-travel, but when I try to do curl it:<br/>
<code>curl msgnc-travel</code><br/>
it just keeps on waiting and no response is received. I have also tried <br/>
<code>curl 10.100.171.209</code> and <code>curl msgnc-travel.default.svc.cluster.local</code> but I have the same behaviour
<br/><br/></p>
<p>Any ideas why is this issue occurring? <br/></p>
<p>I have successfully managed to do a "workaround" by using Ingress, but I am curious why can't I access my Spring Boot backend service directly just by providing its name?</p>
<p><strong>deployment.yml</strong> looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: msgnc-travel-deployment
labels:
name: msgnc-travel-deployment
app: msgnc-travel-app
spec:
template:
metadata:
name: msgnc-travel-pod
labels:
name: msgnc-travel-pod
app: msgnc-travel-app
spec:
containers:
- name: msgnc-travel
image: bdjordjevic/msgnc-travel
ports:
- containerPort: 8080
replicas: 1
selector:
matchExpressions:
- {key: name, operator: In, values: [msgnc-travel-pod]}
- {key: app, operator: In, values: [msgnc-travel-app]}
</code></pre>
<p><strong>service.yml</strong> looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: msgnc-travel
labels:
name: msgnc-travel-service
app: msgnc-travel-app
spec:
ports:
- port: 8080
targetPort: 8080
selector:
name: msgnc-travel-pod
app: msgnc-travel-app
</code></pre>
| Boban Djordjevic | <p>You are defining the service to listen at port 8080. So you are supposed to execute <code>curl msgnc-travel:8080</code>.</p>
<p>I tried running wget and this is the output I got:</p>
<pre class="lang-sh prettyprint-override"><code>wget msgnc-travel:8080
Connecting to msgnc-travel:8080 (10.98.81.45:8080)
wget: server returned error: HTTP/1.1 404
</code></pre>
| Malathi |
<p>What is the clean way to deploy a <strong>pod</strong> using kubernetes client api in Java ?</p>
<blockquote>
<p>import io.kubernetes.client.ApiClient;</p>
</blockquote>
| Smaillns | <pre class="lang-java prettyprint-override"><code>import io.kubernetes.client.ApiClient;
import io.kubernetes.client.ApiException;
import io.kubernetes.client.Configuration;
import io.kubernetes.client.apis.CoreV1Api;
import io.kubernetes.client.models.V1Container;
import io.kubernetes.client.models.V1ObjectMeta;
import io.kubernetes.client.models.V1Pod;
import io.kubernetes.client.models.V1PodSpec;
import io.kubernetes.client.util.Config;
import java.io.IOException;
import java.util.Collections;

public class Example {
    public static void main(String[] args) throws IOException, ApiException {
        // Load kubeconfig (or in-cluster config) and register it as the default client
        ApiClient client = Config.defaultClient();
        Configuration.setDefaultApiClient(client);
        CoreV1Api api = new CoreV1Api();

        // Build a minimal pod template: one container running nginx
        V1Pod podTemplate = new V1Pod()
                .metadata(new V1ObjectMeta().name("example-pod"))
                .spec(new V1PodSpec().containers(Collections.singletonList(
                        new V1Container().name("example-container").image("nginx"))));

        // Create the pod in the "default" namespace.
        // Note: the trailing optional parameters of createNamespacedPod differ
        // between client versions, so adjust them to match your dependency.
        V1Pod pod = api.createNamespacedPod("default", podTemplate, null, null, null);
        System.out.println("pod status : " + pod.getStatus().getPhase());
    }
}
</code></pre>
<p>The exact <code>createNamespacedPod</code> parameters vary between client versions, but this should give you a gist of getting started. </p>
<p>A sample medium post that describes using java client of kubernetes is <a href="https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-toc-84d751876650" rel="nofollow noreferrer">here</a></p>
| Malathi |
<p>I have a Kubernetes Cluster running on Azure (AKS / ACS).
I created the cluster using the Portal. There an aadClient was created automatically with a client secret that has now expired.</p>
<p>Can somebody please tell me how to set the new client secret which I already created?</p>
<p>Right now AKS is not able to update load balancer values or mount persistent storage.</p>
<p>Thank you!</p>
| Balo | <p>AKS client credentials can be updated via command:</p>
<pre><code>az aks update-credentials \
--resource-group myResourceGroup \
--name myAKSCluster \
--reset-service-principal \
--service-principal $SP_ID \
--client-secret $SP_SECRET
</code></pre>
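<p>If you first need to look up the existing service principal ID and generate a new secret, something along these lines should work (resource group and cluster names are placeholders):</p>
<pre><code>SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query servicePrincipalProfile.clientId -o tsv)
# Generate a new secret for that service principal.
# Note: the parameter name (--id vs. --name) differs between Azure CLI versions.
SP_SECRET=$(az ad sp credential reset --id "$SP_ID" --query password -o tsv)
</code></pre>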
<p>Official documentation: <a href="https://learn.microsoft.com/en-us/azure/aks/update-credentials#update-aks-cluster-with-new-credentials" rel="noreferrer">https://learn.microsoft.com/en-us/azure/aks/update-credentials#update-aks-cluster-with-new-credentials</a></p>
| azmelanar |
<p>After reading through Kubernetes documents such as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">deployment</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/service/#motivation" rel="noreferrer">service</a> and <a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes" rel="noreferrer">this</a>, I still do not have a clear idea what the purpose of a service is. </p>
<p>It seems that the service is used for 2 purposes:</p>
<ol>
<li>expose the deployment to the outside world (e.g using LoadBalancer), </li>
<li>expose one deployment to another deployment (e.g. using ClusterIP services). </li>
</ol>
<p>Is this the case? And what about the Ingress?</p>
<p>------ update ------</p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="noreferrer">Connect a Front End to a Back End Using a Service</a> is a good example of the service working with the deployment.</p>
| Qiulang | <p><strong>Service</strong></p>
<p>A deployment consists of one or more pods and replicas of pods. Let's say we have 3 replicas of pods running in a deployment. Now let's assume there is no service. How do other pods in the cluster access these pods? Through the IP addresses of these pods. What happens if one of the pods goes down? Kubernetes brings up another pod. Now the IP address list of these pods changes, and all the other pods need to keep track of it. The same is the case when autoscaling is enabled: the number of pods increases or decreases based on demand. Services exist to avoid this problem: a service provides a stable virtual IP and DNS name and keeps track of the changing set of pod IPs behind a deployment. </p>
<p>And yes, also regarding the uses that you posted in the question.</p>
<p><strong>Ingress</strong></p>
<p>Ingress provides a single point of entry for the various services in your cluster. Take a simple scenario: your cluster has two services, one for the web app and another for a documentation service. If you use services alone (without ingress), you need to maintain two load balancers, which also costs more. To avoid this, an ingress sits in front of the services and routes traffic to them based on the host and path rules defined in the ingress. A minimal example is sketched below.</p>
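<p>For illustration, a minimal ingress routing two paths to two services could look like this (the service names <code>webapp</code> and <code>docs</code> are made up for this example):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp
            port:
              number: 80
      - path: /docs
        pathType: Prefix
        backend:
          service:
            name: docs
            port:
              number: 80
</code></pre>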
| Malathi |
<p>I have a classic microservice architecture. So, there are different applications, and each application may have <code>1..N</code> instances. The system is deployed to <code>Kubernetes</code>. So, we have many different <code>pods</code>, which can start and stop at any time.</p>
<p>I want to implement <a href="https://www.confluent.io/blog/transactions-apache-kafka/" rel="nofollow noreferrer">read-process-write</a> pattern, so I need Kafka transactions. </p>
<p>To configure transactions, I need to set a <code>transaction id</code> for each Kafka producer. (Actually, I need a <code>transaction-id-prefix</code>, because I use Spring for my applications, and that is the <code>API</code> it exposes). These <code>IDs</code> have to stay the same after an application is restarted.</p>
<p>So, how do I choose Kafka transaction ids for several applications hosted in Kubernetes?</p>
| Denis | <p>If the consumer starts the transaction (read-process-write) then the transaction id prefix must be the same for all instances of the same app (so that zombie fencing works correctly after a rebalance). The actual transaction id used is <code><prefix><group>.<topic>.<partition></code>.</p>
<p>If you have multiple apps, they should have unique prefixes (although if they consume from different topics, they will be unique anyway).</p>
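<p>For example, with Spring Boot each application can set its own prefix in <code>application.yml</code> (the prefix value shown here is just an illustration):</p>
<pre><code>spring:
  kafka:
    producer:
      # unique per application; keep it stable across restarts
      transaction-id-prefix: order-service-tx-
</code></pre>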
<p>For producer-only transactions, the prefix must be unique in each instance (to prevent kafka fencing the producers).</p>
<p><strong>EDIT</strong></p>
<p>Note that <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-447%3A+Producer+scalability+for+exactly+once+semantics" rel="nofollow noreferrer">KIP-447</a> changed all this; it is no longer necessary (when using <code>EOSMode.V2</code> aka <code>BETA</code>) to keep the tranactional id the same - consumer metadata is used for fencing instead.</p>
| Gary Russell |
<p>I deployed my first container, I got info:</p>
<pre><code>deployment.apps/frontarena-ads-deployment created
</code></pre>
<p>but then I saw my container creation is stuck in Waiting status.
Then I checked the events using <code>kubectl describe pod frontarena-ads-deployment-5b475667dd-gzmlp</code> and saw a MountVolume error which I cannot figure out:</p>
<blockquote>
<p>Warning FailedMount 9m24s kubelet MountVolume.SetUp
failed for volume "ads-filesharevolume" : mount failed: exit status 32 Mounting command:
systemd-run Mounting arguments: --description=Kubernetes transient
mount for
/var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume
--scope -- mount -t cifs -o username=frontarenastorage,password=mypassword,file_mode=0777,dir_mode=0777,vers=3.0
//frontarenastorage.file.core.windows.net/azurecontainershare
/var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume
Output: Running scope as unit
run-rf54d5b5f84854777956ae0e25810bb94.scope. mount error(115):
Operation now in progress Refer to the mount.cifs(8) manual page (e.g.
man mount.cifs)</p>
</blockquote>
<p>Before I run the deployment I created a secret in Azure, using the already created azure file share, which I referenced within the YAML.</p>
<pre><code>$AKS_PERS_STORAGE_ACCOUNT_NAME="frontarenastorage"
$STORAGE_KEY="mypassword"
kubectl create secret generic fa-fileshare-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
</code></pre>
<p>In that file share I have folders and files which I need to mount and I reference <strong>azurecontainershare</strong> in YAML:
<a href="https://i.stack.imgur.com/Yncpx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yncpx.png" alt="enter image description here" /></a></p>
<p>My YAML looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontarena-ads-deployment
labels:
app: frontarena-ads-deployment
spec:
replicas: 1
template:
metadata:
name: frontarena-ads-aks-test
labels:
app: frontarena-ads-aks-test
spec:
containers:
- name: frontarena-ads-aks-test
image: faselect-docker.dev/frontarena/ads:test1
imagePullPolicy: Always
ports:
- containerPort: 9000
volumeMounts:
- name: ads-filesharevolume
mountPath: /opt/front/arena/host
volumes:
- name: ads-filesharevolume
azureFile:
secretName: fa-fileshare-secret
shareName: azurecontainershare
readOnly: false
imagePullSecrets:
- name: fa-repo-secret
selector:
matchLabels:
app: frontarena-ads-aks-test
</code></pre>
| vel | <p>The issue was caused by the AKS cluster and the Azure File Share being deployed in different Azure regions. If they are in the same region, this issue does not occur.</p>
| Veljko |
<p>Is it possible to start or stop Kubernetes pods based on some events, like a Kafka event?</p>
<p>For example, if there is an event indicating that some work is complete, I want to bring a pod down or up based on it. In my case, the minimum replicas of the pods keep running even though they are not required for most of the day.</p>
| Ashish Abrol | <p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaling</a> based on <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics" rel="nofollow noreferrer">custom metrics</a> is the option you are looking for.</p>
<p>You instrument your code to expose a custom Prometheus metric; in your case, one that reports the number of messages available for processing at a point in time. Then use that custom metric to scale the pods, as sketched below.</p>
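<p>As a rough sketch, assuming a metrics adapter (for example prometheus-adapter) exposes a per-pod metric named <code>messages_ready</code> (both the metric and deployment names below are made up), the autoscaler could look like:</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: messages_ready
      target:
        type: AverageValue
        averageValue: "5"
</code></pre>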
| Malathi |
<p>I have been following the tutorial here:
<a href="https://learn.microsoft.com/en-us/azure/application-gateway/tutorial-ingress-controller-add-on-new#code-try-4" rel="nofollow noreferrer">MS Azure</a></p>
<p>That part is fine. However, when deploying a local config file I get a "502 Bad Gateway" error, even though this config has worked as expected before.</p>
<p>Can anyone see anything obvious with this? At this point I don't know where to start.</p>
<p>I am trying to use the ingress controller that is Application Gateway, and then add deployments and apply additional ingress rules.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: one-api
namespace: default
annotations:
imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
replicas: 3
selector:
matchLabels:
run: one-api
template:
metadata:
labels:
run: one-api
spec:
containers:
- image: gcr.io/google-samples/hello-app:1.0
imagePullPolicy: IfNotPresent
name: one-api
ports:
- containerPort: 80
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: one-api
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: one-api
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: two-api
namespace: default
annotations:
imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
replicas: 3
selector:
matchLabels:
run: two-api
template:
metadata:
labels:
run: two-api
spec:
containers:
- image: gcr.io/google-samples/hello-app:1.0
imagePullPolicy: IfNotPresent
name: two-api
ports:
- containerPort: 80
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: two-api
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: two-api
type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: strata-2022
labels:
app: my-docker-apps
annotations:
kubernetes.io/ingress.class: azure/application-gateway
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: one-api
port:
number: 80
- path: /two-api
pathType: Prefix
backend:
service:
name: two-api
port:
number: 80
</code></pre>
<p>Output of: <code>kubectl describe ingress strata-2022</code></p>
<pre><code>Name: strata-2022
Labels: app=my-docker-apps
Namespace: default
Address: 51.142.191.83
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/ one-api:80 (10.224.0.15:80,10.224.0.59:80,10.224.0.94:80)
/two-api two-api:80 (10.224.0.13:80,10.224.0.51:80,10.224.0.82:80)
Annotations: kubernetes.io/ingress.class: azure/application-gateway
Events: <none>
</code></pre>
<p>Commands used to create AKS using Azure CLI.</p>
<p><code>az aks create -n myCluster -g david-tutorial --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name testApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys</code></p>
<p>// Get credentials and switch to this context</p>
<p><code>az aks get-credentials -n myCluster -g david-tutorial</code></p>
<p>// This line is from the tutorial -- this works as expected</p>
<p>//kubectl apply -f <a href="https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml</a></p>
<p>// This is what I ran. It works locally</p>
<p><code>kubectl apply -f new-deploy.yaml</code></p>
<p>// Get address</p>
<p><code>kubectl get ingress</code></p>
<p><code>kubectl get configmap</code></p>
| user3067684 | <p>I tried recreating the same setup on my end, and I could identify the following issue right after running the same <code>az aks</code> create command: All the instances in one or more of your backend pools are unhealthy.
<a href="https://i.stack.imgur.com/ZOAsO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZOAsO.png" alt="All the instances in one or more of your backend pools are unhealthy" /></a></p>
<p>Since this appeared to indicate that the backend pools are unreachable, it was strange at first so I tried to look at the logs of one of the pods based on the hello-app images you were using and noticed this right away:</p>
<pre><code>> kubectl logs one-api-77f9b4b9f-6sv6f
2022/08/12 00:22:04 Server listening on port 8080
</code></pre>
<p>Hence, my immediate thought was that maybe in the Docker image that you are using, nothing is configured to listen on port <code>80</code>, which is the port you are using in your kubernetes resources definition.</p>
<p>After updating your Deployment and Service definitions to use port <code>8080</code> instead of <code>80</code>, everything worked perfectly fine and I started getting the following response in my browser:</p>
<pre><code>Hello, world!
Version: 1.0.0
Hostname: one-api-d486fbfd7-pm8kt
</code></pre>
<p>Below you can find the updated YAML file that I used to successfully deploy all the resources:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: one-api
namespace: default
annotations:
imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
replicas: 3
selector:
matchLabels:
run: one-api
template:
metadata:
labels:
run: one-api
spec:
containers:
- image: gcr.io/google-samples/hello-app:1.0
imagePullPolicy: IfNotPresent
name: one-api
ports:
- containerPort: 8080
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: one-api
namespace: default
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
run: one-api
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: two-api
namespace: default
annotations:
imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
replicas: 3
selector:
matchLabels:
run: two-api
template:
metadata:
labels:
run: two-api
spec:
containers:
- image: gcr.io/google-samples/hello-app:1.0
imagePullPolicy: IfNotPresent
name: two-api
ports:
- containerPort: 8080
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: two-api
namespace: default
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
run: two-api
type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: strata-2022
labels:
app: my-docker-apps
annotations:
kubernetes.io/ingress.class: azure/application-gateway
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: one-api
port:
number: 8080
- path: /two-api
pathType: Prefix
backend:
service:
name: two-api
port:
number: 8080
</code></pre>
| vladzam |
<p>I have imported some docker images to microk8s image cache for local kubernetes deployment using, </p>
<pre><code>microk8s.ctr -n k8s.io image import <docker_image_name>.tar
</code></pre>
<p>how can I remove some of the unwanted images from this cache? Is there any command for that?</p>
<p>Thanks in advance!</p>
| anju | <p>If you want to delete all custom added images from the built-in library, you can do this:</p>
<pre class="lang-sh prettyprint-override"><code># get all images that start with localhost:32000, output the results into image_ls file
sudo microk8s ctr images ls name~='localhost:32000' | awk {'print $1'} > image_ls
# loop over file, remove each image
cat image_ls | while read line || [[ -n $line ]];
do
microk8s ctr images rm $line
done;
</code></pre>
<p>Yes, I've used David Castro's comment as a base, but since it did not work for MicroK8s, I needed to adapt it a bit.</p>
| Marco |
<p>I'm trying to understand the concepts of ingress and ingress controllers in kubernetes. But I'm not so sure what the end product should look like. Here is what I don't fully understand:</p>
<p>Given I have a running Kubernetes cluster somewhere with a master node which runs the control plane and the etcd database. Besides that I have 3 worker nodes - each of the worker nodes has a public IPv4 address with a corresponding DNS A record (<code>worker{1,2,3}.domain.tld</code>) and I have full control over my DNS server. I want my users to access my web application via <code>www.domain.tld</code>. So I point the <code>www</code> CNAME to one of the worker nodes (I saw that my ingress controller got scheduled to worker1, so I point it to <code>worker1.domain.tld</code>).</p>
<p>Now I schedule a workload consisting of 2 frontend pods and 1 database pod, with 1 service for the frontend and 1 service for the database. From what I understand right now, I need an ingress controller pointing to the frontend service to achieve some kind of load balancing. Two questions here:</p>
<ol>
<li><p>Isn't running the ingress controller on only one worker node pointless for internally load balancing the two frontend pods via its service? Is it best practice to run an ingress controller on every worker node in the cluster?</p></li>
<li><p>For whatever reason the worker which runs the ingress controller dies and it gets rescheduled to another worker. So the ingress point will be at another IPv4 address, right? For a user who tries to access the frontend via <code>www.domain.tld</code>, this DNS entry has to be updated, right? How so? Do I need to run a specific kubernetes-aware DNS server somewhere? I don't understand the connection between the DNS server and the kubernetes cluster.</p></li>
</ol>
<p>Bonus question: If I run more ingress controller replicas (spread across multiple workers) do I do a DNS round-robin based approach here with multiple IPv4 addresses bound to one DNS entry? Or what's the best solution to achieve HA? I'd rather not use load-balancing IP addresses where the workers share the same IP address.</p>
| Jens Kohl | <blockquote>
<p>Given I have a running Kubernetes cluster somewhere with a master
node which runs the control plane and the etcd database. Besides that
I have 3 worker nodes - each of the worker nodes has a public
IPv4 address with a corresponding DNS A record
(worker{1,2,3}.domain.tld) and I have full control over my DNS server. I
want my users to access my web application via www.domain.tld. So I
point the www CNAME to one of the worker nodes (I saw that my
ingress controller got scheduled to worker1, so I point it to
worker1.domain.tld).</p>
<p>Now I schedule a workload consisting of 2 frontend pods and 1
database pod, with 1 service for the frontend and 1 service for the
database. From what I understand right now, I need an ingress
controller pointing to the frontend service to achieve some kind of
load balancing. Two questions here:</p>
<ol>
<li>Isn't running the ingress controller on only one worker node pointless for internally load balancing the two frontend pods via its
service? Is it best practice to run an ingress controller on every
worker node in the cluster?</li>
</ol>
</blockquote>
<p>Yes, it's a good practice. Having multiple pods for the load balancer is important to ensure high availability. For example, if you run the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">ingress-nginx controller</a>, you should probably deploy it to multiple nodes.</p>
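<p>For instance, with a typical ingress-nginx installation you could simply scale up the controller deployment (the namespace and deployment name below are the common defaults, adjust them to your install):</p>
<pre><code>kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas=3
</code></pre>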
<blockquote>
<ol start="2">
<li>For whatever reason the worker which runs the ingress controller dies and it gets rescheduled to another worker. So the ingress point
will be at another IPv4 address, right? For a user who tries
to access the frontend via www.domain.tld, this DNS entry
has to be updated, right? How so? Do I need to run a specific
kubernetes-aware DNS server somewhere? I don't understand the
connection between the DNS server and the kubernetes cluster.</li>
</ol>
</blockquote>
<p>Yes, the IP will change. And yes, this needs to be updated in your DNS server.</p>
<p>There are a few ways to handle this:</p>
<ol>
<li><p>Assume clients will deal with outages. You can list all load balancer nodes in round-robin and assume clients will fall back. This works with some protocols, but mostly implies timeouts and problems and should generally not be used, especially since you still need to update the records by hand when k8s decides to create/remove LB entries.</p></li>

<li><p>Configure an external DNS server automatically. This can be done with the <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">external-dns</a> project, which can sync against most of the popular DNS servers, including standard RFC2136 dynamic updates but also cloud providers like Amazon, Google, Azure, etc.</p></li>
</ol>
<blockquote>
<p>Bonus question: If I run more ingress controller replicas (spread
across multiple workers) do I do a DNS round-robin based approach here
with multiple IPv4 addresses bound to one DNS entry? Or what's the
best solution to achieve HA? I'd rather not use load-balancing
IP addresses where the workers share the same IP address.</p>
</blockquote>
<p>Yes, you should basically do DNS round-robin. I would assume <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">external-dns</a> would do the right thing here as well.</p>
<p>Another alternative is to do some sort of <a href="https://en.wikipedia.org/wiki/Equal-cost_multi-path_routing" rel="noreferrer">ECMP</a>. This can be accomplished by having both load balancers "announce" the same IP space. That is an advanced configuration, however, which may not be necessary. There are interesting tradeoffs between BGP/ECMP and DNS updates, see <a href="https://blogs.dropbox.com/tech/2018/10/dropbox-traffic-infrastructure-edge-network/" rel="noreferrer">this dropbox engineering post</a> for a deeper discussion about those.</p>
<p>Finally, note that CoreDNS is looking at <a href="https://github.com/coredns/coredns/issues/1851" rel="noreferrer">implementing public DNS records</a> which could resolve this natively in Kubernetes, without external resources.</p>
| anarcat |
<p>I want to create labels for each node under a node pool, but I don't find this option in the Azure CLI. Can someone help me with this?</p>
<p>Expected:
being able to label node pools, which can help with autoscaling and pod scheduling.</p>
| Sai Prasanth | <p>Adding labels is a feature that is still in progress (tracked here: <a href="https://github.com/Azure/AKS/issues/1088" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/1088</a>).</p>
<p>However, you can add Taints using ARM: <a href="https://learn.microsoft.com/en-us/rest/api/aks/agentpools/createorupdate#examples" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/rest/api/aks/agentpools/createorupdate#examples</a> or Terraform: <a href="https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html#node_taints" rel="nofollow noreferrer">https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html#node_taints</a> (looks like the CLI still lacks the functionality).</p>
| Alessandro Vozza |
<p>I have 2 different namespaces: <strong>prod-01</strong> and <strong>prod-02</strong>. What I want to do is build a copy of my <strong>prod-01</strong> in the <strong>prod-02</strong> namespace, keeping the same names for its pvcs, so that I don't have to maintain 2 sets of charts, one for each namespace. </p>
<p>Here's how it looks like:</p>
<pre><code>$ kubectl get ns | grep prod
prod-01 Active 178d
prod-02 Active 8d
$
</code></pre>
<p>As shown below, I have 2 pairs of pv's for each namespace:</p>
<pre><code>$ kubectl get pv -o wide | grep prod
prod-01-db-pv 50Gi RWX Retain Bound prod-01/app-db 164d
prod-01-nosql-db-pv 5Gi RWX Retain Bound prod-01/app-nosql-db 149d
prod-02-db-pv 50Gi RWX Retain Available prod-02/app-db 41m
prod-02-nosql-db-pv 5Gi RWX Retain Available prod-02/app-nosql-db 19m
$
</code></pre>
<p>Here's how pvc's for <strong>prod-01</strong> are being displayed:</p>
<pre><code>$ kubectl get pvc --namespace=prod-01
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
app-db Bound prod-01-db-pv 50Gi RWX 164d
app-nosql-db Bound prod-01-nosql-db-pv 5Gi RWX 149d
$
</code></pre>
<p>And here's what I'm trying to accomplish:</p>
<pre><code>$ kubectl get pvc --namespace=prod-02
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
app-db Pending prod-02-db-pv 0 2m
app-nosql-db Pending prod-02-nosql-db-pv 0 24m
$
</code></pre>
<p>As shown above, the pvc's for <strong>prod-02</strong> namespace are stuck forever with <strong>Pending</strong> status. </p>
<p>Then, when I change the pvc names on <strong>prod-02</strong> to anything different, they bind as expected.</p>
<p>Which leads me to think I can't use the same names on pvc's even when they are in different namespaces and pointing to different pv's ... However, when searching the documentation, I could not find any evidence of this issue, and was wondering if I could be missing something. </p>
<p>So to put it simply, can I have multiple pvc's with the same name across different namespaces (considering that they are using different pv's)?</p>
<hr>
<p><strong>Update:</strong> result of <code>kubectl describe pvc</code> </p>
<pre><code>$ kubectl describe pvc app-db --namespace=prod-02
Name: app-db
Namespace: prod-02
StorageClass:
Status: Pending
Volume: prod-02-db-pv
Labels: <none>
Annotations: <none>
Finalizers: []
Capacity: 0
Access Modes:
Events: <none>
$
</code></pre>
<p>Also here's the output of <code>kubectl get pvc</code>:</p>
<pre><code>$ kubectl get pvc app-db --namespace=prod-02 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: 2018-09-22T22:00:34Z
name: app-db
namespace: prod-02
resourceVersion: "43027607"
selfLink: /api/v1/namespaces/prod-02/persistentvolumeclaims/app-db
uid: ee81b951-beb2-11e8-b972-005056bbded7
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50Gi
volumeName: prod-02-db-pv
status:
phase: Pending
$
</code></pre>
<p>And here are some details about the pv too:</p>
<pre><code>$ kubectl get pv prod-02-db-pv --namespace=prod-02 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
creationTimestamp: 2018-09-22T21:15:19Z
name: prod-02-db-pv
resourceVersion: "43020171"
selfLink: /api/v1/persistentvolumes/prod-02-db-pv
uid: 9c07d7a6-beac-11e8-b972-005056bbded7
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 50Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: app-db
namespace: prod-02
nfs:
path: /nfs_server/prod02/db
server: 158.87.52.35
persistentVolumeReclaimPolicy: Retain
status:
phase: Available
$
</code></pre>
<hr>
<p>Thanks in advance for the help!</p>
| silveiralexf | <p>PVC is a namespaced resource but PV is not, i.e., you can have multiple PVCs with the same name across different namespaces.</p>
<p>There might be issues in the way you have configured the pv.</p>
<p>Can you make sure you are using the right ip address in pv configuration just under <code>nfs</code> attribute:</p>
<pre><code>nfs:
path: /nfs_server/prod01/db
server: 158.87.52.35
</code></pre>
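<p>To illustrate the first point, the following trimmed sketch shows two claims with the same name in different namespaces, each pointing at its own PV via <code>volumeName</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-db
  namespace: prod-01
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 50Gi
  volumeName: prod-01-db-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-db
  namespace: prod-02
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 50Gi
  volumeName: prod-02-db-pv
</code></pre>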
| Harish Ambady |
<p>I am now trying to implement a new system. My system will be divided into 2 clusters. The first is for computing jobs; it will be changed by CI/CD heavily and very frequently. The split protects against my juniors' accidents and also saves cost, because a compute node does not need <code>100GB</code> of disk like the <code>database</code> does.</p>
<p>Now I am setting up my <a href="https://github.com/helm/charts/tree/master/stable/mongodb-replicaset" rel="nofollow noreferrer"><code>mongo-replicaset</code></a> using <code>helm</code>. My configuration works fine. Here is my terminal log during the installation.</p>
<p>Install with <code>100GB</code> per node. There are 3 nodes.</p>
<pre class="lang-sh prettyprint-override"><code>$ gcloud container clusters create elmo --disk-size=100GB --enable-cloud-logging --enable-cloud-monitoring
</code></pre>
<p>I have changed username and password in the <a href="https://gist.github.com/elcolie/109910871acf5e919e375d75dda85b40" rel="nofollow noreferrer"><code>values.yaml</code></a></p>
<pre class="lang-sh prettyprint-override"><code>mongodbUsername: myuser
mongodbPassword: mypassword
</code></pre>
<p>However, when I jump into the pod, it does not require me to do any authentication. I can execute <code>show dbs</code>:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl exec -it ipman-mongodb-replicaset-0 mongo
MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("966e85fd-8857-46ac-a2a4-a8b560e37104") }
MongoDB server version: 4.0.6
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
2019-03-20T12:15:51.266+0000 I STORAGE [main] In File::open(), ::open for '//.mongorc.js' failed with Unknown error
Server has startup warnings:
2019-03-20T11:36:03.768+0000 I STORAGE [initandlisten]
2019-03-20T11:36:03.768+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-03-20T11:36:03.768+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-03-20T11:36:05.082+0000 I CONTROL [initandlisten]
2019-03-20T11:36:05.082+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-03-20T11:36:05.082+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-03-20T11:36:05.083+0000 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
rs0:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
</code></pre>
<p>I can see 2 services running for <code>mongodb-replicaset</code>:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl describe svc ipman-mongodb-replicaset
Name: ipman-mongodb-replicaset
Namespace: default
Labels: app=mongodb-replicaset
chart=mongodb-replicaset-3.9.2
heritage=Tiller
release=ipman
Annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints: true
Selector: app=mongodb-replicaset,release=ipman
Type: ClusterIP
IP: None
Port: mongodb 27017/TCP
TargetPort: 27017/TCP
Endpoints: 10.60.1.5:27017,10.60.2.7:27017,10.60.2.8:27017
Session Affinity: None
Events: <none>
$ kubectl describe svc ipman-mongodb-replicaset-client
Name: ipman-mongodb-replicaset-client
Namespace: default
Labels: app=mongodb-replicaset
chart=mongodb-replicaset-3.9.2
heritage=Tiller
release=ipman
Annotations: <none>
Selector: app=mongodb-replicaset,release=ipman
Type: ClusterIP
IP: None
Port: mongodb 27017/TCP
TargetPort: 27017/TCP
Endpoints: 10.60.1.5:27017,10.60.2.7:27017,10.60.2.8:27017
Session Affinity: None
Events: <none>
</code></pre>
<p>I have seen <a href="https://stackoverflow.com/questions/52006612/expose-database-to-deployment-on-gke">here</a> and <a href="https://stackoverflow.com/questions/42040238/how-to-expose-nodeport-to-internet-on-gce">here</a>. I have 3 IP addresses. Which one should I use?</p>
<p>I think <code>LoadBalancer</code> might not fit my need because it is normally used with a <code>backend</code> service to balance load between nodes. In my case, the <code>master</code> does the writing and the <code>replica</code> does the reading.</p>
<pre class="lang-sh prettyprint-override"><code>$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-elmo-default-pool-c5dc6e86-1j8v asia-southeast1-a n1-standard-1 10.148.0.59 35.197.148.201 RUNNING
gke-elmo-default-pool-c5dc6e86-5hs4 asia-southeast1-a n1-standard-1 10.148.0.57 35.198.217.71 RUNNING
gke-elmo-default-pool-c5dc6e86-wh0l asia-southeast1-a n1-standard-1 10.148.0.58 35.197.128.107 RUNNING
</code></pre>
<p><strong>Question:</strong></p>
<ol>
<li><p>Why is my <code>username:password</code> not taken into account during authentication?</p></li>
<li><p>How can I expose my <code>mongo</code> server and let clients coming from the internet use my database by using</p></li>
</ol>
<pre class="lang-sh prettyprint-override"><code>mongo -u <user> -p <pass> --host kluster.me.com --port 27017
</code></pre>
<p>I have checked the <code>helm chart</code> documentation. I worry that I am using <code>k8s</code> in the wrong way, therefore I decided to ask here.</p>
| joe | <p>I cannot answer about the password issue, but using a separate cluster for your DB might not be the best option. By creating a separate cluster you are forced to expose your sensitive database to the world. This is not ideal.</p>
<p>I recommend you deploy your mongo on your existing cluster. This way you can have your computing workloads connect to your mongo simply by using the service name as the hostname.</p>
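<p>For example, from another pod in the same namespace, the connection string would look roughly like this (assuming the release name <code>whatever</code> used in the helm command below; the exact hostnames follow the chart's headless service naming, and the database name is just an illustration):</p>
<pre><code>mongodb://myuser:mypassword@whatever-mongodb-replicaset-0.whatever-mongodb-replicaset.default.svc.cluster.local:27017/mydb?replicaSet=rs0
</code></pre>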
<p>If you need a bigger drive for your mongo, simply use a persistent disk and specify the size when you create your mongo installation using helm.</p>
<p>For example:</p>
<pre><code>helm install stable/mongodb-replicaset --name whatever --set persistentVolume.size=100Gi
</code></pre>
<p>In your <code>values.yaml</code> file, you have a section called <code>persistence</code> when it should be called <code>persistentVolume</code>.</p>
<p>I recommend that your <code>values.yaml</code> only contains the values you want to change and not everything.</p>
| roychri |
<p>Currently, I am facing an issue with my application: it does not become healthy due to the kubelet not being able to perform a successful health check.</p>
<p>From pod describe:</p>
<pre><code> Warning Unhealthy 84s kubelet Startup probe failed: Get "http://10.128.0.208:7777/healthz/start": dial tcp 10.128.0.208:7777: connect: connection refused
Warning Unhealthy 68s (x3 over 78s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 500
</code></pre>
<p>Now, I find this strange, as I can run the health check fine from the worker node where the kubelet is running. So I am wondering what the difference is between running the health check from the worker node via curl and the kubelet doing it.</p>
<p>Example:</p>
<pre><code>From worker node where the kubelet is running:
sh-4.4# curl -v http://10.128.0.208:7777/healthz/readiness
* Trying 10.128.0.208...
* TCP_NODELAY set
* Connected to 10.128.0.208 (10.128.0.208) port 7777 (#0)
> GET /healthz/readiness HTTP/1.1
> Host: 10.128.0.208:7777
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 0
< Connection: close
<
* Closing connection 0
sh-4.4#
</code></pre>
<p>Can I somehow trace when the kubelet is sending the health probe check? Or maybe get into the kubelet and send it myself from there?</p>
<p>One more thing to mention: my pod has an istio-proxy container inside. It looks like the traffic from the kubelet gets blocked by this istio-proxy.</p>
<p>Setting the following annotation in my deployment:</p>
<pre><code> "rewriteAppHTTPProbe": true
</code></pre>
<p>does not help for the kubelet. It did help to get a 200 OK when running the curl command from the worker node.</p>
<p>Maybe also worth noting: we are using the istio-cni plugin to inject the istio sidecar. Not sure whether that makes a difference compared to the former approach of injecting using istio-init ...</p>
<p>Any suggestions are welcome :).
Thanks.</p>
| opstalj | <p>The issue looks to be that the istio-cni plugin changes the iptables rules so that the kubelet's health probe gets redirected towards the application. However, the redirect goes to localhost on the probe port, and the application is not listening there for the health probes, only on the pod IP.</p>
<p>After changing the iptables rules to a more proper redirect, the health probe got a 200 OK response and the pod became healthy.</p>
| opstalj |
<p>I want to allow a kubernetes cluster, all the pods running in it, to access my ec2 machine.</p>
<p>This means I have to allow a particular IP or a range of IPs in the security group of my ec2 machine.</p>
<p>But what is that one IP or a range of IPs that I'd have to enter in the security group of EC2 machine?</p>
| Aviral Srivastava | <p>The pods in kubernetes run on worker nodes, which are nothing but ec2 instances with their own security group. If you want your ec2 instance which is outside the cluster to accept connections from pods in the kubernetes cluster, you can <strong>add an inbound rule to the ec2 instance's security group with the worker nodes' security group as the source</strong>. </p>
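<p>For example, with the AWS CLI (the security group IDs and port below are placeholders):</p>
<pre><code># sg-11111111 = security group of the external EC2 instance (placeholder)
# sg-22222222 = security group of the worker nodes (placeholder)
aws ec2 authorize-security-group-ingress \
    --group-id sg-11111111 \
    --protocol tcp \
    --port 5432 \
    --source-group sg-22222222
</code></pre>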
<p>Also, why do the pods in the kubernetes cluster want to access an ec2 instance outside the cluster? You could also bring the ec2 instance within your kubernetes cluster and, if need be, expose the ec2 instance's process via a kubernetes service.</p>
| Malathi |