Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I have tried to list pods based on labels</p>
<pre><code> // Kubernetes client - package kubernetes
clientset := kubernetes.NewForConfigOrDie(config)
// create a temp list for storage
var podslice []string
// Get pods -- package metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
pods, _ := clientset.CoreV1().Pods("").List(metav1.ListOptions{})
for _, p := range pods.Items {
    fmt.Println(p.GetName())
}
</code></pre>
<p>This is the equivalent of</p>
<pre><code>kubectl get po
</code></pre>
<p>Is there a way to do the following in Go?</p>
<pre><code>kubectl get po -l app=foo
</code></pre>
<p>Thanks in advance.</p>
| Pradeep Padmanaban C | <p>You may just be able to set this using the <code>ListOptions</code> parameter.</p>
<pre><code>listOptions := metav1.ListOptions{
    LabelSelector: "app=foo",
}
pods, _ := clientset.CoreV1().Pods("").List(listOptions)
</code></pre>
<p>If you have multiple labels, you may be able to do this via the <code>labels</code> library, as in the following untested code:</p>
<pre><code>import "k8s.io/apimachinery/pkg/labels"
labelSelector := metav1.LabelSelector{MatchLabels: map[string]string{"app": "foo"}}
listOptions := metav1.ListOptions{
LabelSelector: labels.Set(labelSelector.MatchLabels).String(),
}
pods, _ := clientset.CoreV1().Pods("").List(listOptions)
</code></pre>
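<p>The snippets above match older client-go versions; since client-go v0.18 the <code>List</code> call also takes a <code>context.Context</code>. A minimal, untested sketch combining that with a selector built from a map of labels (the namespace and label values here are placeholders):</p>
<pre><code>import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/labels"
    "k8s.io/client-go/kubernetes"
)

func listPodsByLabel(clientset *kubernetes.Clientset) error {
    // Build "app=foo,env=prod" from a map of labels.
    selector := labels.Set(map[string]string{"app": "foo", "env": "prod"}).String()

    pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
        LabelSelector: selector,
    })
    if err != nil {
        return err
    }
    for _, p := range pods.Items {
        fmt.Println(p.GetName())
    }
    return nil
}
</code></pre>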
| Sankar |
<p>I have a simple Spring Boot application, deployed on k8s (2 pods).
Briefly, this application takes messages from a producer and processes them for the consumer. Nothing complicated.</p>
<p>Update:</p>
<ul>
<li>java version: 1.8.172</li>
<li>javaMemoryOpts: -Xmx2048m -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap</li>
</ul>
<p><a href="https://i.stack.imgur.com/tgbcA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tgbcA.png" alt="memory monitoring"></a>
This is the memory consumption for one of the 2 pods.</p>
<ul>
<li>blue line - requested memory by k8s</li>
<li>orange line - working set</li>
<li>green line - service usage </li>
<li>yellow line - memory limit by k8s</li>
</ul>
<p>The issue is the high memory usage despite the simplicity of the service.
I profiled the app, and the service itself seems fine: just about 60 threads, no memory leaks and so on.</p>
<p>The used memory never exceeds the k8s limit, even though it gets very close to it (no OOM).
Of course, I could add more pods and the consumption would even out, but I don't think that's the right way.</p>
<p>One thing confusing me is why the used memory is always above the requested amount, even at the start.</p>
<p>Actually, I don't know what's going wrong with it. Does anybody have ideas, or maybe know how to reduce the app's memory usage?</p>
| yevtsy | <p>The answer from mk_sta is helpful and all the information you need is probably in those documents, but I feel it is worth summarising the main points in a full answer.</p>
<p>Your -Xmx2048m (2Gb) option is setting the maximum heap size, but the application will use more memory than this - the Metaspace, the Garbage Collector and lots of other overhead (this is "off heap" memory).</p>
<p>No matter how simple your App, Java will use the available heap size. A 3 line app outputting random Strings, if given a 2Gb heap, will eventually use it all. So it doesn't matter if your Spring Boot app is "simple" - the heap will grow until it reaches the maximum and then you'll see a garbage collection - these are the jagged teeth in your green line.</p>
<p>So these two things together probably explains why you're seeing an upper bound of about 3.8Gb on memory usage.</p>
<p>The graphs you're plotting are probably showing a well behaved application, so stop worrying about memory leaks. I can't tell from the picture if these are minor or major collections, i.e. I can't infer from the picture how small you can risk shrinking the Xmx.</p>
<p>Although you say your Spring Boot app is "simple", there's no way of knowing how complex it really is without seeing its pom. But the document linked from mk_sta here...</p>
<p><a href="https://github.com/dsyer/spring-boot-memory-blog/blob/master/cf.md" rel="nofollow noreferrer">https://github.com/dsyer/spring-boot-memory-blog/blob/master/cf.md</a></p>
<p>...is a very useful one because it shows some good defaults for "small" Spring Boot apps - e.g. an app using Freemarker to generate non-static content can run happily with a 32Mb heap.</p>
<p>So the easy answer is to try shrinking your -Xmx down to as small as you can before you see garbage collection thrashing. You too might be able to get it down to as little as 32Mb.</p>
<p>Then, you can use these findings to set some sensible values for the resource limit and resource request in your K8S manifests. You'll need a lot more than the heap size (e.g. the 32Mb above) - as stated in that doc, Spring Boot apps are much happier with container memory of 512Mb or 1Gb. You might get away with 256Mb, but that is tight.</p>
| Dick Chesterwood |
<p>I have created a cron job in Kubernetes, and I have an SSH key in one of the pod's directories. When I execute the command from the command line it works fine, but when the cron job is triggered manually, it does not recognize the .ssh folder.</p>
<pre><code>scp -i /srv/batch/source/.ssh/id_rsa user@server:/home/data/openings.csv /srv/batch/source
</code></pre>
<p><a href="https://i.stack.imgur.com/JxtiG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JxtiG.png" alt="enter image description here"></a></p>
| Sanjay Chintha | <p>When you log into a remote host from your container, the remote host key is unknown to your SSH client inside the container.</p>
<p>Usually, you're asked to confirm its fingerprint:</p>
<pre><code>The authenticity of host ***** can't be established.
RSA key fingerprint is *****.
Are you sure you want to continue connecting (yes/no)?
</code></pre>
<p>But as there is no interactive shell, the SSH client fails.</p>
<p>Two solutions:</p>
<ul>
<li>add the host key to the file <code>~/.ssh/known_hosts</code> in the container (for example with <code>ssh-keyscan</code> when building the image)</li>
<li><p>disable host key check (<strong>Dangerous as no remote host authentication is performed</strong>)</p>
<p><code>ssh -o "StrictHostKeyChecking=no" user@host</code></p></li>
</ul>
| Kartoch |
<p>We have an issue in an AKS cluster running Kubernetes 1.13.5. The symptoms are:</p>
<ul>
<li>Pods are randomly restarted</li>
<li>The "Last State" is "Terminated", the "Reason" is "Error" and the "Exit Code" is "137"</li>
<li>The pod events show no errors, either related to lack of resources or failed liveness checks</li>
<li>The docker container shows "OOMKilled" as "false" for the stopped container</li>
<li>The linux logs show no OOM killed pods</li>
</ul>
<p>The issues we are experiencing match those described in <a href="https://github.com/moby/moby/issues/38768" rel="nofollow noreferrer">https://github.com/moby/moby/issues/38768</a>. However, I can find no way to determine whether the version of Docker run on the AKS nodes is affected by this bug, because AKS seems to use a custom build of Docker whose version is something like 3.0.4, and I can't find any relationship between these custom version numbers and the upstream Docker releases.</p>
<p>Does anyone know how to match internal AKS Docker build numbers to upstream Docker releases, or better yet how someone might prevent pods from being randomly killed?</p>
<p><strong>Update</strong></p>
<p>This is still an ongoing issue, and I thought I would document how we debugged it for future AKS users.</p>
<p>This is the typical description of a pod with a container that has been killed with an exit code of 137 (i.e. 128 + 9, a SIGKILL). The common factors are the <code>Last State</code> set to <code>Terminated</code>, the <code>Reason</code> set to <code>Error</code>, <code>Exit Code</code> set to 137 and no events.</p>
<pre><code>Containers:
octopus:
Container ID: docker://3a5707ab02f4c9cbd66db14d1a1b52395d74e2a979093aa35a16be856193c37a
Image: index.docker.io/octopusdeploy/linuxoctopus:2019.5.10-hosted.462
Image ID: docker-pullable://octopusdeploy/linuxoctopus@sha256:0ea2a0b2943921dc7d8a0e3d7d9402eb63b82de07d6a97cc928cc3f816a69574
Ports: 10943/TCP, 80/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 08 Jul 2019 07:51:52 +1000
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Thu, 04 Jul 2019 21:04:55 +1000
Finished: Mon, 08 Jul 2019 07:51:51 +1000
Ready: True
Restart Count: 2
...
Events: <none>
</code></pre>
<p>The lack of events is caused by the event TTL set in Kubernetes itself, which results in the events expiring. However, with Azure monitoring enabled, we can see that there were no events around the time of the restart other than the container starting again.</p>
<p><a href="https://i.stack.imgur.com/1H4uT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1H4uT.png" alt="enter image description here"></a></p>
<p>In our case, running <code>kubectl logs octopus-i002680-596954c5f5-sbrgs --previous --tail 500 -n i002680</code> shows no application errors before the restart.</p>
<p>Running <code>docker ps --all --filter 'exited=137'</code> on the Kubernetes node hosting the pod shows the container 593f857910ff with an exit code of 137.</p>
<pre><code>Enable succeeded:
[stdout]
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
20930700810f 7c23e4d2be70 "./install.sh " 14 hours ago Exited (137) 12 hours ago k8s_octopus_octopus-i002525-55f69565f8-s488l_i002525_b08125ab-9e2e-11e9-99be-422b98e8f214_2
593f857910ff 7c23e4d2be70 "./install.sh " 4 days ago Exited (137) 25 hours ago k8s_octopus_octopus-i002680-596954c5f5-sbrgs_i002680_01eb1b4d-9e03-11e9-99be-422b98e8f214_1
d792afb85c6f 7c23e4d2be70 "./install.sh " 4 days ago Exited (137) 4 days ago k8s_octopus_octopus-i002521-76bb77b5fd-twsdx_i002521_035093c5-9e2e-11e9-99be-422b98e8f214_0
0361bc71bf14 7c23e4d2be70 "./install.sh " 4 days ago Exited (137) 2 days ago k8s_octopus_octopus-i002684-769bd954-f89km_i002684_d832682d-9e03-11e9-99be-422b98e8f214_0
[stderr]
</code></pre>
<p>Running <code>docker inspect 593f857910ff | jq .[0] | jq .State</code> shows the container was not <code>OOMKilled</code>.</p>
<pre><code>Enable succeeded:
[stdout]
{
  "Status": "exited",
  "Running": false,
  "Paused": false,
  "Restarting": false,
  "OOMKilled": false,
  "Dead": false,
  "Pid": 0,
  "ExitCode": 137,
  "Error": "",
  "StartedAt": "2019-07-04T11:04:55.037288884Z",
  "FinishedAt": "2019-07-07T21:51:51.080928603Z"
}
[stderr]
</code></pre>
| Phyxx | <p>This issue appears to have been resolved by updating to AKS 1.13.7, which includes an update to Moby 3.0.6. Since updating a few days ago we have not seen containers killed in the manner described in the Docker bug at <a href="https://github.com/moby/moby/issues/38768" rel="nofollow noreferrer">https://github.com/moby/moby/issues/38768</a>.</p>
| Phyxx |
<p>I'm trying out a very simple Istio setup on a Docker Desktop Kubernetes installation.</p>
<p>I have 2 Spring boot micro services and have deployed these 2 services in my K8s "cluster" without any replication. All I have in my YAML file is the Service and Deployment for both services.</p>
<p>I have installed istio and I can see there are 2 containers in my pod. One is the spring boot application, the other is the istio sidecar.</p>
<p>I am making a rest call from service 2 to service 1 like this and it works fine.</p>
<pre><code>restTemplate.exchange("http://service1:8080/getSomeString", HttpMethod.GET, null, String.class, new Object()).getBody();
</code></pre>
<p>However, if I now disable sidecar injection and redeploy my services, it still works fine. Basically it is Kubernetes that is resolving where service1 is and completing the REST call, and not Istio.</p>
<p>How do I do service discovery using istio ?</p>
| ViV | <p>Istio is a Service Mesh, as such it isn't responsible for service discovery. A service mesh adds functionality to the Service -> Service traffic (monitoring, routing, etc). So when running on a Kubernetes cluster, Kubernetes continues to be responsible for service discovery, as you've observed.</p>
<p>As Arghya's answer states, with Istio you can apply a VirtualService on top of this, which allows you to add "clever" extra features such as custom routing, but this in no way replaces or changes the functionality of the underlying Kubernetes service discovery.</p>
<p>In my opinion, VirtualService is a confusing term because it sounds like it's somehow replacing Kubernetes' existing features. I prefer to think of a VirtualService as "Custom Routing".</p>
<p>By the way, you only need a virtualservice if you need one. By which I mean, you might have 1,000 services defined in your cluster (using the normal Kubernetes Service construct). But perhaps you want to apply custom routing rules to just one service - that's fine, you just define 1 VirtualService in Istio to handle that. </p>
| Dick Chesterwood |
<p>I am trying to copy the first container argument in a pod definition to an environment variable. This is my pod template file.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: test
image: nginx
command: [ "/bin/bash", "-c", "--" ]
args: ["sleep 30"]
env:
- name: "TEST_ENV"
valueFrom:
fieldRef:
fieldPath: spec.containers[0].args[0]
restartPolicy: Never
</code></pre>
<p>But on executing <code>kubectl apply</code>, I get the following error.</p>
<p><code>The Pod "myapp-pod" is invalid: spec.containers[0].env[0].valueFrom.fieldRef.fieldPath: Invalid value: "spec.containers[0].args[0]": error converting fieldPath: field label not supported: spec.containers[0].args[0]</code></p>
<p>What is the correct way to reference this argument?</p>
| Harshith Bolar | <p>Only a limited number of fields are available using <code>fieldRef</code>.</p>
<p>From <a href="https://kubernetes.io/docs/concepts/workloads/pods/downward-api/#available-fields" rel="nofollow noreferrer">the documentation</a>:</p>
<blockquote>
<p>Only some Kubernetes API fields are available through the downward API. This section lists which fields you can make available.</p>
<h2>Information available via fieldRef</h2>
<ul>
<li><code>metadata.name</code></li>
<li><code>metadata.namespace</code></li>
<li><code>metadata.uid</code></li>
<li><code>metadata.annotations['<KEY>']</code></li>
<li><code>metadata.labels['<KEY>']</code></li>
</ul>
<p>The following information is available through environment variables but not as a downwardAPI volume fieldRef:</p>
<ul>
<li><code>spec.serviceAccountName</code></li>
<li><code>spec.nodeName</code></li>
<li><code>status.hostIP</code></li>
<li><code>status.podIP</code></li>
</ul>
<p>The following information is available through a downwardAPI volume fieldRef, but not as environment variables:</p>
<ul>
<li><code>metadata.labels</code></li>
<li><code>metadata.annotations</code></li>
</ul>
</blockquote>
<hr />
<p>In any case, since the container arguments are available to whatever service you're running inside the container, you should be able to retrieve them from your code's argument array/list/argv/etc.</p>
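<p>For example, a minimal, untested sketch of reading the arguments from inside the container process (shown in Go purely for illustration; the same idea applies to <code>sys.argv</code>, <code>argv[]</code>, etc. in other languages):</p>
<pre><code>package main

import (
    "fmt"
    "os"
)

func main() {
    // os.Args[0] is the binary name; os.Args[1:] are the container args,
    // so the process itself can copy them into whatever configuration or
    // environment it needs at startup.
    for i, arg := range os.Args[1:] {
        fmt.Printf("arg[%d] = %s\n", i, arg)
    }
}
</code></pre>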
| larsks |
<p>I am new to Istio and I have learned a lot and applied to my project which consist of many Microservices. I am stuck in Authentication when it comes to using Istio</p>
<p>So the issue is this: Istio offers authentication that involves using OAuth with Google, Auth0 or any other provider. Once we do this, we can set up an AuthPolicy and define which microservices we want it to apply to. I have attached my auth policy YAML and it works fine. Now my project at work requires me to use custom auth as well. In other words, I have one microservice that handles authentication. This auth microservice has four endpoints: /login, /signup, /logout and /auth. Normally, in my application, I would call /auth as a middleware before I make any other call, to make sure the user is logged in. /auth in my microservice reads the JWT token I stored in a cookie when I logged in in the first place and checks whether it is valid. Now my question is how to add my custom authentication rather than using OAuth. As you know, the auth policy.yaml I attached triggers the auth check at the sidecar proxy level, so I don't need to direct my traffic to the ingress gateway; that means my gateway takes care of mTLS while the sidecar takes care of the JWT auth check. So how do I plug my custom auth into policy.yaml, or achieve this another way, such that I don't need to redirect all my traffic to the ingress gateway?</p>
<p>In short, please help me with how to add my custom JWT auth check in policy.yaml as in the picture, or any other way, and if required modify my auth [micro-service][1] code too. People suggest redirecting traffic to the ingress gateway and adding Envoy filter code there which would redirect traffic to the auth microservice. But I don't want to have to redirect all my calls to the ingress gateway and run an Envoy filter there. I want to achieve what Istio is already doing by defining the policy YAML, where the JWT auth check happens at the sidecar proxy level using policy.yaml, so we don't redirect traffic to the ingress gateway.</p>
<p>Note: all my microservices are ClusterIP and only my front end is exposed outside.
Looking forward to your help/advice.</p>
<p>Here's my code for the auth policy.yaml:</p>
<pre><code>apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: reshub
spec:
targets:
- name: hotelservice // auth check when ever call is made to this microservice
peers:
- mtls: {}
origins:
- jwt:
issuer: "https://rshub.auth0.com/"
jwksUri: "https://rshub.auth0.com/.well-known/jwks.json"
principalBinding: USE_ORIGIN
</code></pre>
<p>Here's the code for my auth microservice, just to show my current logic for checking the JWT:</p>
<pre><code>@app.route('/auth/varifyLoggedInUser', methods=['POST'])
def varifyLoggedInUser():
    isAuthenticated = False
    users = mongo.db.users
    c = request.cookies.get('token')
    token = request.get_json()['cookie']
    print(token)
    if token:
        decoded_token = decode_token(token)
        user_identity = decoded_token['identity']['email']
        user = users.find_one({'email': user_identity, 'token': token})
        if user:
            isAuthenticated = True
    return jsonify({'isAuthenticated': isAuthenticated, 'token': c})
</code></pre>
| BoeingK8 | <p>Try the AuthService project here which seems to aim to improve this area of Istio, which is at the moment pretty deficient IMO:</p>
<p><a href="https://github.com/istio-ecosystem/authservice" rel="nofollow noreferrer">https://github.com/istio-ecosystem/authservice</a></p>
<p>I think the Istio docs imply that it supports more than it really does - Istio will accept and validate JWT tokens for <strong>authorization</strong> but it provides nothing in the way of <strong>authentication</strong>.</p>
| Dick Chesterwood |
<p>I want to know how much volume is available to each of the <code>EC2</code> instances in an <code>EKS</code> cluster.</p>
<p>According to <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html" rel="nofollow noreferrer">this page</a>, there are two types of <code>AMI</code>s:</p>
<ol>
<li><code>EBS-backed AMI</code>s with <code>16 TiB</code> available volume.</li>
<li><code>instance store-backed AMI</code>s with <code>10 GiB</code> available volume.</li>
</ol>
<p>Which one of them does the workers' AMI belong to?</p>
<p>I create my EKS cluster using this <a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest" rel="nofollow noreferrer">terraform module</a>:</p>
<pre><code>module "eks" {
...
worker_groups = [
{
name = "worker-group-1"
instance_type = "t2.small"
asg_desired_capacity = 2
}
]
}
</code></pre>
| HsnVahedi | <p>You’re using instances of type T2.small. This instance type is EBS-backed only and doesn’t have an instance store option.</p>
<p>According to the documentation that you mentioned, the size limit for an EBS-backed instance’s root device is 16 TiB. The actual size however depends on the volume sizes that you configure for the instances (I’m not sure but I think it defaults to 20 GiB). You can also add multiple EBS volumes to exceed the 16 TiB limit if needed.</p>
| Dennis Traub |
<p>I have a few kubernetes service accounts.</p>
<p>I want to log in to the Kubernetes dashboard.</p>
<pre><code>$kubectl get sa -n kubernetes-dashboard
NAME SECRETS AGE
whitebear 0 9m37s
default 0 15m
kubernetes-dashboard 0 15m
</code></pre>
<p>However, the service account doesn't have a token.</p>
<pre><code>$kubectl describe sa whitebear -n kubernetes-dashboard
Name: whitebear
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
</code></pre>
<p>How can I create the token for the account?</p>
<p>I am using Docker for Mac, in a local environment.</p>
<p>Thank you very much.</p>
<p><strong>Solution</strong></p>
<p>Thanks to @Sai Chandini Routhu!!</p>
<p>I made a token and logged in successfully:</p>
<pre><code>kubectl create token default
</code></pre>
<p>However, it was not enough to use the dashboard.</p>
<p>I made a cluster role:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: service-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["*"]
    verbs: ["*"]
</code></pre>
<p>Then I bound this role to my account:</p>
<pre><code>kubectl create clusterrolebinding service-reader-pod \
--clusterrole=service-reader \
--serviceaccount=default:whitebear
</code></pre>
<p>Now I can log in and operate the dashboard!</p>
| whitebear | <p>Token Secrets are no longer generated by default for ServiceAccounts since Kubernetes 1.24. To create a long-lived ServiceAccount token in a Secret, see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount" rel="nofollow noreferrer">this documentation</a>, which says:</p>
<blockquote>
<p>If you want to obtain an API token for a ServiceAccount, you create a new Secret with a special annotation, <code>kubernetes.io/service-account.name</code>.</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-secret
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
EOF
</code></pre>
<p>If you view the Secret using:</p>
<pre><code>kubectl get secret/build-robot-secret -o yaml
</code></pre>
<p>you can see that the Secret now contains an API token for the "build-robot" ServiceAccount.</p>
</blockquote>
| larsks |
<p>I just installed Helm v3.4.2 and the command below prints many packages as DEPRECATED in the description:</p>
<p><code>helm search repo stable</code></p>
<p>Output:</p>
<pre><code>stable/acs-engine-autoscaler 2.2.2 2.1.1 DEPRECATED Scales worker nodes within agent pools
stable/aerospike 0.3.5 v4.5.0.5 DEPRECATED A Helm chart for Aerospike in Kubern...
stable/airflow 7.13.3 1.10.12 DEPRECATED - please use: https://github.com/air...
stable/ambassador 5.3.2 0.86.1 DEPRECATED A Helm chart for Datawire Ambassador
...
</code></pre>
<p>Why are only 18 of 284 packages not deprecated?</p>
<p>Does that mean that for these packages we have to add external repositories?</p>
| user2668735 | <p>The underlying reason "why" is that the CNCF no longer wanted to pay the costs of hosting a single monolithic repository:</p>
<p><a href="https://www.cncf.io/blog/2020/10/07/important-reminder-for-all-helm-users-stable-incubator-repos-are-deprecated-and-all-images-are-changing-location/" rel="noreferrer">https://www.cncf.io/blog/2020/10/07/important-reminder-for-all-helm-users-stable-incubator-repos-are-deprecated-and-all-images-are-changing-location/</a></p>
<p>This means that the charts are now scattered across various repositories, hosted by a range of organisations.</p>
<p>The Artifact Hub aggregates these so you can search them:</p>
<p><a href="https://artifacthub.io/packages/search?page=1&ts_query_web=mysql" rel="noreferrer">https://artifacthub.io/packages/search?page=1&ts_query_web=mysql</a></p>
<p>We're now in a very confusing situation where if you want to install a package, you're very likely to find several different repositories hosting different versions and variants, and you need to decide which one to trust and go for.</p>
<p>Very likely many of these repos will get deprecated themselves.</p>
<p>It's all a bit wild west right now, and it's a shame there is no longer a single "stable" one-stop shop.</p>
| Dick Chesterwood |
<p>I have an application with multiple services called from a primary application service. I understand the basics of doing canary and A/B deployments; however, all the examples I see show a round robin where each request switches between versions.</p>
<p>What I'd prefer is that once a given user/session is associated with a certain version it stays that way to avoid giving a confusing experience to the user. </p>
<p>How can this be achieved with Kubernetes or Istio/Envoy?</p>
| gunygoogoo | <p>We've been grappling with this because we want to deploy test microservices into production and expose them only if the first request contains a "dark release" header.</p>
<p>As mentioned by Jonas, cookies and header values can in theory be used to achieve what you're looking for. It's very easy to achieve if the service that you are canarying is on the edge, and your user is accessing it directly.</p>
<p>The problem is, you mention you have multiple services. If you have a chain where the user accesses edge service A which is then making calls to service B, service C etc, the headers or cookies will not be propagated from one service to another.</p>
<p>This is the same problem that we hit when trying to do distributed tracing. The Istio documents currently have this FAQ:</p>
<p><a href="https://istio.io/faq/distributed-tracing/#istio-copy-headers" rel="nofollow noreferrer">https://istio.io/faq/distributed-tracing/#istio-copy-headers</a></p>
<p>The long and short of that is that you will have to do header propagation manually. Luckily most of my microservices are built on Spring Boot and I can achieve header propagation with a simple 5-line class that intercepts all outgoing calls. But it is nonetheless invasive and has to be done everywhere. The antithesis of a service mesh.</p>
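<p>Purely for illustration, here is an untested sketch of that manual propagation outside Spring, written in Go: it copies a few headers (the standard B3 tracing set plus a hypothetical <code>x-dark-release</code> flag) from the inbound request onto an outgoing one. The exact header list is an assumption and depends on your routing rules:</p>
<pre><code>import "net/http"

// headersToPropagate lists the inbound headers copied onto every outgoing
// request; adjust to whatever your VirtualService rules match on.
var headersToPropagate = []string{
    "x-request-id", "x-b3-traceid", "x-b3-spanid",
    "x-b3-parentspanid", "x-b3-sampled", "x-dark-release",
}

// propagateHeaders copies selected headers from the request we received
// to the request we are about to send to the next service in the chain.
func propagateHeaders(in *http.Request, out *http.Request) {
    for _, name := range headersToPropagate {
        if v := in.Header.Get(name); v != "" {
            out.Header.Set(name, v)
        }
    }
}
</code></pre>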
<p>It's possible there is a clever way around this but it's hard to infer from the docs what is possible and what isn't. I've seen a few github issues raised by Istio developers to address this but every one I've seen has gone stale after initial enthusiasm.</p>
| Dick Chesterwood |
<p>I'm working with Kubernetes 1.26 on the server side (EKS) and kubectl client 1.27.1.</p>
<p>I have a job define in this way:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: build
spec:
  template:
    spec:
      restartPolicy: Never
      volumes:
        .....
      containers:
        - name: build-tool
          ....
</code></pre>
<p>My pod dies because it is OOMKilled or for some other reason, and then Kubernetes launches another pod.
Why?</p>
<p>It is not supposed to be restarted.</p>
<p>Related reading:</p>
<ul>
<li><a href="https://www.aquasec.com/cloud-native-academy/kubernetes-101/kubernetes-jobs/" rel="nofollow noreferrer">https://www.aquasec.com/cloud-native-academy/kubernetes-101/kubernetes-jobs/</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/job/</a></li>
</ul>
| david.perez | <p>I think you missed this section of the documentation:</p>
<blockquote>
<p>An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node (node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the .spec.template.spec.restartPolicy = "Never". <strong>When a Pod fails, then the Job controller starts a new Pod.</strong> This means that your application needs to handle the case when it is restarted in a new pod. In particular, it needs to handle temporary files, locks, incomplete output and the like caused by previous runs.</p>
</blockquote>
<p>The value of <code>spec.template.spec.restartPolicy</code> effects the response to failed <em>containers</em> in your Pod (<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">docs</a>), but is not relevant to failures of the pod itself.</p>
<p>You can control how the Job controller responds to a failed <em>Pod</em> by setting a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-failure-policy" rel="nofollow noreferrer">podFailurePolicy</a>.</p>
| larsks |
<p>Is it possible to promote a Kubernetes worker node to master to quickly recover from the loss of a master (1 of 3) and restore safety to the cluster? Preferably without disrupting all the pods already running on it. Bare metal deployment. Tx.</p>
| David Tinker | <p>It doesn't look like a worker node can be promoted to master in general. However it is easy to sort out for a specific case:</p>
<ul>
<li>Control plane node disappears from the network</li>
<li>Node is manually drained and deleted: <code>k drain node2.example.com --ignore-daemonsets --delete-local-data && k delete node node2.example.com</code></li>
<li>Some time later it reboots and rejoins the cluster</li>
</ul>
<p>Check that it has rejoined the etcd cluster:</p>
<pre><code># k exec -it etcd-node1.example.com -n kube-system -- /bin/sh
# etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key \
member list
506801cdae97607b, started, node1.example.com, https://65.21.128.36:2380, https://xxx:2379, false
8051adea81dc4c6a, started, node2.example.com, https://95.217.56.177:2380, https://xxx:2379, false
ccd32aaf544c8ef9, started, node3.example.com, https://65.21.121.254:2380, https://xxx:2379, false
</code></pre>
<p>If it is part of the cluster then re-label it:</p>
<pre><code>k label node node2.example.com node-role.kubernetes.io/control-plane=
k label node node2.example.com node-role.kubernetes.io/master=
</code></pre>
| David Tinker |
<p>What's the best approach to providing a .kube/config file to a REST service deployed on Kubernetes?</p>
<p>This will enable my service to (for example) use the Kubernetes client API.</p>
<p>R</p>
| Raster R | <p>Create service account:</p>
<pre><code>kubectl create serviceaccount example-sa
</code></pre>
<p>Create a role:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: example-role
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
</code></pre>
<p>Create role binding:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role-binding
  namespace: default
subjects:
  - kind: "ServiceAccount"
    name: example-sa
    namespace: default
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Create a pod using <code>example-sa</code>:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  containers:
    - name: secret-access-container
      image: example-image
</code></pre>
<p>The most important line in the pod definition is <code>serviceAccountName: example-sa</code>. After creating the service account and adding this line to your pod's definition, you will be able to access your API access token at <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code>.</p>
<p><a href="https://developer.ibm.com/recipes/tutorials/service-accounts-and-auditing-in-kubernetes/" rel="nofollow noreferrer">Here</a> you can find a little bit more detailed version of the above example.</p>
| Maciek Sawicki |
<p>We are just getting started with k8s (bare metal on Ubuntu 20.04). Is it possible for ingress traffic arriving at a host for a load balanced service to go to a pod running on that host (if one is available)?</p>
<p>We have some apps that use client side consistent hashing (using customer ID) to select a service instance to call. The service instances are stateless but maintain in memory ML models for each customer. So it is useful (but not essential) to have repeated requests for a given customer go to the same service. Then we can just use antiAffinity to have one pod per host.</p>
<p>Our existing service discovery mechanism lets the clients find all the instances of the service and the nodes they are running on. All our k8s nodes are running the Nginx ingress controller.</p>
| David Tinker | <p>I finally got this figured out. This was way harder than it should be IMO! <strong>Update: It's not working. Traffic frequently goes to the wrong pod.</strong></p>
<p>The service needs <code>externalTrafficPolicy: Local</code> (see <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">docs</a>).</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: starterservice
spec:
  type: LoadBalancer
  selector:
    app: starterservice
  ports:
    - port: 8168
  externalTrafficPolicy: Local
</code></pre>
<p>The Ingress needs <code>nginx.ingress.kubernetes.io/service-upstream: "true"</code> (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#service-upstream" rel="nofollow noreferrer">service-upstream docs</a>).</p>
<p>The <code>nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com"</code> bit is because our service discovery updates DNS so each instance of the service includes the name of the host it is running on in its DNS name.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: starterservice
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com"
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
    - host: starterservice.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: starterservice
                port:
                  number: 8168
</code></pre>
<p>So now a call <code>https://starterservice-foo.example.com</code> will go to the instance running on k8s host foo.</p>
| David Tinker |
<p>I have a simple Helm chart. I have a <code>labels:</code> block that I need to reference in a <code>Deployment</code>.</p>
<p>Here's my <code>values.yaml</code></p>
<pre><code>labels:
  app: test-app
  group: test-group
  provider: test-provider
</code></pre>
<p>And in <code>templates/deployment.yaml</code> I need to add the whole <code>labels</code> block above. So I did:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ include "accountmasterdata.fullname" . }}
  namespace: {{ .Values.namespace }}
  labels:
    {{ .Values.labels | nindent 4 }}
    {{- include "accountmasterdata.labels" . | nindent 4 }}
</code></pre>
<p>But I get the following error</p>
<blockquote>
<p>wrong type for value; expected string; got map[string]interface {}</p>
</blockquote>
<p>Can someone help me with two things:</p>
<ol>
<li><p>How can I solve this issue?</p>
</li>
<li><p>In the line where it says <code>{{- include "accountmasterdata.labels" . | nindent 4 }} </code>, where can I see the <code>accountmasterdata.labels</code> values? And how do I override those?</p>
</li>
</ol>
<p>Thank you!</p>
| Jananath Banuka | <p>Iterating over a mapping is covered in the "<a href="https://helm.sh/docs/chart_template_guide/variables/" rel="nofollow noreferrer">Variables</a>" documentation:</p>
<blockquote>
<p>For data structures that have both a key and a value, we can use range to get both. For example, we can loop through .Values.favorite like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  {{- range $key, $val := .Values.favorite }}
  {{ $key }}: {{ $val | quote }}
  {{- end }}
</code></pre>
</blockquote>
<p>So in your template, you would handle the value of <code>.Values.labels</code> like this:</p>
<pre><code>  labels:
    {{- range $name, $value := .Values.labels }}
    {{ $name | quote }}: {{ $value | quote }}
    {{- end -}}
</code></pre>
<hr />
<blockquote>
<p>And in the line where it says {{- include "accountmasterdata.labels" . | nindent 4 }} , where I can see the accountmasterdata.labels values? And how to override those?</p>
</blockquote>
<p>Is this a template you are writing? If so, where have you defined these values? Presumably in your <code>templates/</code> directory there exists a file that includes something like:</p>
<pre><code>{{- define "accountmasterdata.labels" -}}
...
{{- end -}}
</code></pre>
<p>The contents of that block are what will get inserted at the point of reference.</p>
<hr />
<p>Lastly, in your template you have:</p>
<pre><code>namespace: {{ .Values.namespace }}
</code></pre>
<p>But you probably want to use <code>.Release.Namespace</code> instead:</p>
<pre><code>namespace: {{ .Release.Namespace | quote }}
</code></pre>
<hr />
<p>With the above changes in place, I end up with:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ include "accountmasterdata.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels:
    {{- range $name, $value := .Values.labels }}
    {{ $name | quote }}: {{ $value | quote }}
    {{- end -}}
    {{- include "accountmasterdata.labels" . | nindent 4 }}
</code></pre>
| larsks |
<p>I've been experimenting with <a href="https://github.com/heptio/contour" rel="nofollow noreferrer">contour</a> as an alternative ingress controller on a test GKE kubernetes cluster.</p>
<p>Following the contour <a href="https://github.com/heptio/contour/blob/master/docs/deploy-options.md" rel="nofollow noreferrer">deployment docs</a> with a few modifications, I've got a working setup serving test HTTP responses.</p>
<p>First, I created a "helloworld" pod that serves http responses, exposed via a NodePort service and an ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: "helloworld-http"
          image: "nginxdemos/hello:plain-text"
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - helloworld
                topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld-svc
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: helloworld
  sessionAffinity: None
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld-ingress
spec:
  backend:
    serviceName: helloworld-svc
    servicePort: 80
</code></pre>
<p>Then, I created a deployment for <code>contour</code> that's directly copied from their docs:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: heptio-contour
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: contour
namespace: heptio-contour
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: contour
name: contour
namespace: heptio-contour
spec:
selector:
matchLabels:
app: contour
replicas: 2
template:
metadata:
labels:
app: contour
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9001"
prometheus.io/path: "/stats"
prometheus.io/format: "prometheus"
spec:
containers:
- image: docker.io/envoyproxy/envoy-alpine:v1.6.0
name: envoy
ports:
- containerPort: 8080
name: http
- containerPort: 8443
name: https
command: ["envoy"]
args: ["-c", "/config/contour.yaml", "--service-cluster", "cluster0", "--service-node", "node0", "-l", "info", "--v2-config-only"]
volumeMounts:
- name: contour-config
mountPath: /config
- image: gcr.io/heptio-images/contour:master
imagePullPolicy: Always
name: contour
command: ["contour"]
args: ["serve", "--incluster"]
initContainers:
- image: gcr.io/heptio-images/contour:master
imagePullPolicy: Always
name: envoy-initconfig
command: ["contour"]
args: ["bootstrap", "/config/contour.yaml"]
volumeMounts:
- name: contour-config
mountPath: /config
volumes:
- name: contour-config
emptyDir: {}
dnsPolicy: ClusterFirst
serviceAccountName: contour
terminationGracePeriodSeconds: 30
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: contour
topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
name: contour
namespace: heptio-contour
spec:
ports:
- port: 80
name: http
protocol: TCP
targetPort: 8080
- port: 443
name: https
protocol: TCP
targetPort: 8443
selector:
app: contour
type: LoadBalancer
---
</code></pre>
<p>The default and heptio-contour namespaces now look like this:</p>
<pre><code>$ kubectl get pods,svc,ingress -n default
NAME READY STATUS RESTARTS AGE
pod/helloworld-7ddc8c6655-6vgdw 1/1 Running 0 6h
pod/helloworld-7ddc8c6655-92j7x 1/1 Running 0 6h
pod/helloworld-7ddc8c6655-mlvmc 1/1 Running 0 6h
pod/helloworld-7ddc8c6655-w5g7f 1/1 Running 0 6h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/helloworld-svc NodePort 10.59.240.105 <none> 80:31481/TCP 34m
service/kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 7h
NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/helloworld-ingress * y.y.y.y 80 34m
$ kubectl get pods,svc,ingress -n heptio-contour
NAME READY STATUS RESTARTS AGE
pod/contour-9d758b697-kwk85 2/2 Running 0 34m
pod/contour-9d758b697-mbh47 2/2 Running 0 34m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/contour LoadBalancer 10.59.250.54 x.x.x.x 80:30882/TCP,443:32746/TCP 34m
</code></pre>
<p>There's 2 publicly routable IP addresses:</p>
<ul>
<li>x.x.x.x - a GCE TCP load balancer that forwards to the contour pods</li>
<li>y.y.y.y - a GCE HTTP load balancer that forwards to the helloworld pods via the helloworld-ingress</li>
</ul>
<p>A <code>curl</code> on both public IPs returns a valid HTTP response from the helloworld pods.</p>
<pre><code># the TCP load balancer
$ curl -v x.x.x.x
* Rebuilt URL to: x.x.x.x/
* Trying x.x.x.x...
* TCP_NODELAY set
* Connected to x.x.x.x (x.x.x.x) port 80 (#0)
> GET / HTTP/1.1
> Host: x.x.x.x
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< server: envoy
< date: Mon, 07 May 2018 14:14:39 GMT
< content-type: text/plain
< content-length: 155
< expires: Mon, 07 May 2018 14:14:38 GMT
< cache-control: no-cache
< x-envoy-upstream-service-time: 1
<
Server address: 10.56.4.6:80
Server name: helloworld-7ddc8c6655-w5g7f
Date: 07/May/2018:14:14:39 +0000
URI: /
Request ID: ec3aa70e4155c396e7051dc972081c6a
# the HTTP load balancer
$ curl http://y.y.y.y
* Rebuilt URL to: y.y.y.y/
* Trying y.y.y.y...
* TCP_NODELAY set
* Connected to y.y.y.y (y.y.y.y) port 80 (#0)
> GET / HTTP/1.1
> Host: y.y.y.y
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.8
< Date: Mon, 07 May 2018 14:14:24 GMT
< Content-Type: text/plain
< Content-Length: 155
< Expires: Mon, 07 May 2018 14:14:23 GMT
< Cache-Control: no-cache
< Via: 1.1 google
<
Server address: 10.56.2.8:80
Server name: helloworld-7ddc8c6655-mlvmc
Date: 07/May/2018:14:14:24 +0000
URI: /
Request ID: 41b1151f083eaf30368cf340cfbb92fc
</code></pre>
<p>Is it by design that I have two public IPs? Which one should I use for customers? Can I choose based on my preference between a TCP and HTTP load balancer? </p>
| James Healy | <p>You probably have the GLBC ingress controller configured (<a href="https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller</a>)</p>
<p>Could you try using the following ingress definition?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "contour"
  name: helloworld-ingress
spec:
  backend:
    serviceName: helloworld-svc
    servicePort: 80
</code></pre>
<p>If you would like to be sure that your traffic goes via Contour, you should use the <code>x.x.x.x</code> IP.</p>
| Maciek Sawicki |
<p>item.Status.ContainerStatuses.RestartCount doesn't exist; I cannot find it. Reinstalling or updating the NuGet package did not work either.</p>
<p>Below I have added the problem I have and the package I use. Sorry if my English is a bit rusty.</p>
<p><a href="https://i.stack.imgur.com/hKkTa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hKkTa.png" alt="the package I use" /></a></p>
<p><a href="https://i.stack.imgur.com/e2Hty.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e2Hty.png" alt="the problem I have" /></a></p>
| Lukas Zauner | <p><code>ContainerStatuses</code> is a collection of <code>ContainerStatus</code>, not a <code>ContainerStatus</code> itself. You must choose from which container you want the <code>RestartCount</code>, for example:</p>
<pre><code> // pick the container you are interested in, e.g. the first one
 int restarts = item.Status.ContainerStatuses[0].RestartCount;
</code></pre>
| Gusman |
<p>Could someone please help me with this?
I would like to understand a bit about apiGroups & their usage in the Role definition below.</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example.com-superuser
rules:
  - apiGroups: ["example.com"]
    resources: ["*"]
    verbs: ["*"]
</code></pre>
<p>I was going through RBAC in Kubernetes. <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a>
The above example is from this link.</p>
| bala sreekanth | <p>An api group groups a set of resource types in a common namespace. For example, resource types related to Ingress services are grouped under the <code>networking.k8s.io</code> api group:</p>
<pre><code>$ kubectl api-resources --api-group networking.k8s.io
NAME SHORTNAMES APIVERSION NAMESPACED KIND
ingressclasses networking.k8s.io/v1 false IngressClass
ingresses ing networking.k8s.io/v1 true Ingress
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
</code></pre>
<p>It is possible to have two different resource types that have the same short name in different resource groups. For example, in my OpenShift system there are two different groups that provide a <code>Subscription</code> resource type:</p>
<pre><code>$ kubectl api-resources | awk '$NF == "Subscription" {print}'
subscriptions appsub apps.open-cluster-management.io/v1 true Subscription
subscriptions sub,subs operators.coreos.com/v1alpha1 true Subscription
</code></pre>
<p>If I am creating a role, I need to specify to <em>which</em> <code>Subscription</code> I want to grant access. This:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: allow-config-access
rules:
  - apiGroups:
      - operators.coreos.com
    resources:
      - subscriptions
    verbs: ["*"]
</code></pre>
<p>Provides access to different resources than this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: allow-config-access
rules:
  - apiGroups:
      - apps.open-cluster-management.io
    resources:
      - subscriptions
    verbs: ["*"]
</code></pre>
| larsks |
<p>I'm trying to communicate via grpc between two microservices internally on kubernetes, but I'm getting a connection refused error.</p>
<p>These are the yaml files of the services that are trying to communicate.</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: source-api
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
name: source-api
spec:
ports:
- name: grpc-server
port: 8081
protocol: TCP
targetPort: 8081
- name: http
port: 8080
protocol: TCP
targetPort: 8080
selector:
app.kubernetes.io/name: source-api
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: source-api
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
name: source-api
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: source-api
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
template:
metadata:
labels:
app.kubernetes.io/name: source-api
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
spec:
containers:
- env:
- name: QUARKUS_GRPC_CLIENTS_STORE_PORT
value: "8081"
- name: QUARKUS_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
key: datasourcePassword
name: liop
- name: KAFKA_BOOTSTRAP_SERVERS
value: kafka-service:9092
- name: QUARKUS_DATASOURCE_USERNAME
valueFrom:
secretKeyRef:
key: datasourceUsername
name: liop
- name: QUARKUS_HTTP_PORT
value: "8080"
- name: QUARKUS_GRPC_SERVER_PORT
value: "8081"
- name: QUARKUS_GRPC_SERVER_HOST
value: localhost
- name: QUARKUS_DATASOURCE_JDBC_URL
value: jdbc:mysql://mysql:3306/product
- name: QUARKUS_GRPC_CLIENTS_STORE_HOST
value: store-api
image: tools_source-api:latest
imagePullPolicy: Never
name: source-api
ports:
- containerPort: 8081
name: grpc-server
protocol: TCP
- containerPort: 8080
name: http
protocol: TCP
imagePullSecrets:
- name: gitlab-registry
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: product-api
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
name: product-api
spec:
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 8080
selector:
app.kubernetes.io/name: product-api
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: product-api
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
name: product-api
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: product-api
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
template:
metadata:
labels:
app.kubernetes.io/name: product-api
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
spec:
containers:
- env:
- name: KAFKA_BOOTSTRAP_SERVERS
value: kafka-service:9092
- name: QUARKUS_DATASOURCE_JDBC_URL
value: jdbc:mysql://mysql:3306/product
- name: QUARKUS_GRPC_CLIENTS_IMAGE_PORT
value: "8081"
- name: QUARKUS_GRPC_CLIENTS_SOURCE_HOST
value: source-api
- name: QUARKUS_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
key: datasourcePassword
name: liop
- name: QUARKUS_DATASOURCE_USERNAME
valueFrom:
secretKeyRef:
key: datasourceUsername
name: liop
- name: QUARKUS_GRPC_CLIENTS_SOURCE_PORT
value: "8081"
- name: QUARKUS_GRPC_CLIENTS_IMAGE_HOST
value: media-api
image: tools_product-api:latest
imagePullPolicy: Always
name: product-api
ports:
- containerPort: 8080
name: http
protocol: TCP
imagePullSecrets:
- name: gitlab-registry
</code></pre>
<p>This is the yaml of my API-gateway, which does correctly communicate via HTTP with the microservices:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/version: latest
app.kubernetes.io/part-of: liop
app.kubernetes.io/name: api-gateway
name: api-gateway
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/version: latest
app.kubernetes.io/part-of: liop
app.kubernetes.io/name: api-gateway
template:
metadata:
labels:
app.kubernetes.io/version: latest
app.kubernetes.io/part-of: liop
app.kubernetes.io/name: api-gateway
spec:
containers:
- env:
- name: product_api
value: http://product-api:8080/api/products/v1/
- name: source_api
value: http://source-api:8080/api/sources/v1/
- name: store_api
value: http://store-api:8080/api/stores/v1/
- name: report_api
value: http://report-api:8080/api/reports/v1/
- name: category_api
value: http://category-api:8080/api/categories/v1/
- name: AUTH0_ISSUER_URL
value: xxxx
- name: AUTH0_AUDIENCE
value: xxxxxxx
- name: PORT
value: "7000"
image: tools_webgateway:latest
imagePullPolicy: Never
name: api-gateway
ports:
- containerPort: 7000
hostPort: 7000
name: http
protocol: TCP
imagePullSecrets:
- name: gitlab-registry
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: api-gateway
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
name: api-gateway
spec:
ports:
- name: http
port: 7000
protocol: TCP
targetPort: 7000
selector:
app.kubernetes.io/name: api-gateway
app.kubernetes.io/part-of: liop
app.kubernetes.io/version: latest
</code></pre>
<p>Error the product-api throws:</p>
<pre><code>Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(Status.java:533)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:478)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.quarkus.grpc.runtime.supports.IOThreadClientInterceptor$1$1.onClose(IOThreadClientInterceptor.java:68)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:617)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:803)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:782)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: source-api/10.101.237.82:8081
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
</code></pre>
<p>The source-api:</p>
<pre><code>source-api ClusterIP 10.101.237.82 <none> 8081/TCP,8080/TCP
</code></pre>
<pre><code>2021-04-01 06:38:08,973 INFO [io.qua.grp.run.GrpcServerRecorder] (vert.x-eventloop-thread-1) gRPC Server started on localhost:8081 [SSL enabled: false]
</code></pre>
<p>Using grpcurl internally also gives me a connection refused error. But port-forwarding source-api:8081 does allow me to make requests.</p>
| MrDoekje | <p>QUARKUS_GRPC_SERVER_HOST should be 0.0.0.0 instead of localhost, so the gRPC server binds to all interfaces and is reachable from other pods instead of only from inside its own container (the log line above shows it started on <code>localhost:8081</code>).</p>
| Luca Burgazzoli |
<p>I am trying to access a service listening on a port running on every node in my bare metal (Ubuntu 20.04) cluster from inside a pod. I can use the real IP address of one of the nodes and it works. However, I need pods to connect to the port on their own node. I can't use '127.0.0.1' inside a pod.</p>
<p>More info: I am trying to wrangle a bunch of existing services into k8s. We use an old version of Consul for service discovery and have it running on every node providing DNS on 8600. I figured out how to edit the coredns Corefile to add a consul { } block so lookups for .consul work.</p>
<pre><code>consul {
errors
cache 30
forward . 157.90.123.123:8600
}
</code></pre>
<p>However I need to replace that IP address with the "address of the node the coredns pod is running on".</p>
<p>Any ideas? Or other ways to solve this problem? Tx.</p>
| David Tinker | <p>Comment from @mdaniel worked. Tx.</p>
<p>Edit coredns deployment. Add this to the container after volumeMounts:</p>
<pre><code>env:
- name: K8S_NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
</code></pre>
<p>Edit coredns config map. Add to bottom of the Corefile:</p>
<pre><code>consul {
errors
cache 30
forward . {$K8S_NODE_IP}:8600
}
</code></pre>
<p>Check that DNS is working</p>
<pre><code>kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash
nslookup myservice.service.consul
nslookup www.google.com
exit
</code></pre>
| David Tinker |
<p>I have setup a kubernetes cluster locally by minikube. I have the Postgres service running in the cluster. I am trying to run a Flask app that connects to the Postgres database using psycopg2, fetch records and expose them on a REST endpoint.</p>
<p>I am getting this erorr in gunicorn logs -</p>
<pre><code>[2022-12-12 18:49:41 +0000] [10] [ERROR] Error handling request /popular/locations
File "/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "postgres-postgresql.kafkaplaypen.svc.cluster.local:5432" to address: Name or service not kn>
</code></pre>
<p>I installed Postgres on the cluster using <code>helm install postgres bitnami/postgresql</code>. Here is some useful info helm showed me about my Postgres deployment -</p>
<pre><code>PostgreSQL can be accessed via port 5432 on the following DNS names from within your cluster:
postgres-postgresql.kafkaplaypen.svc.cluster.local - Read/Write connection
To get the password for "postgres" run:
export POSTGRES_PASSWORD=$(kubectl get secret --namespace kafkaplaypen postgres-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
To connect to your database run the following command:
kubectl run postgres-postgresql-client --rm --tty -i --restart='Never' --namespace kafkaplaypen --image docker.io/bitnami/postgresql:15.1.0-debian-11-r7 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
--command -- psql --host postgres-postgresql -U postgres -d postgres -p 5432
> NOTE: If you access the container using bash, make sure that you execute "/opt/bitnami/scripts/postgresql/entrypoint.sh /bin/bash" in order to avoid the error "psql: local user with ID 1001} does not exist"
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace kafkaplaypen svc/postgres-postgresql 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
</code></pre>
<p>Here is the code for my flask app -</p>
<pre><code>app = Flask(__name__)
app.config["DEBUG"] = True
def get_db_connection():
conn = psycopg2.connect(host='postgres-postgresql.kafkaplaypen.svc.cluster.local:5432',
database=os.environ['DB_NAME'],
user=os.environ['DB_USERNAME'],
password=os.environ['DB_PASSWORD'])
return conn
@app.route('/popular/locations')
def get_popular_locations():
conn = get_db_connection()
cur = conn.cursor()
cur.execute('SELECT * FROM tolocations;')
data = cur.fetchall()
cur.close()
conn.close()
return data
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8081, debug=True)
</code></pre>
<p>Using the following command to run flask pod -</p>
<p><code>kubectl run flaskapp -i --image flaskapp:latest --image-pull-policy Never --restart Never --namespace kafkaplaypen --env="DB_NAME=postgres" --env="DB_USERNAME=postgres" --env="DB_PASSWORD=$POSTGRES_PASSWORD"</code></p>
<p>Also, adding the Dockerfile for my Flask app in case it is useful:</p>
<pre><code>FROM python:3.7-slim
WORKDIR /app
RUN apt-get update && apt-get install -y curl nano
RUN pip3 install flask psycopg2-binary gunicorn
COPY rest /app
EXPOSE 8081
CMD gunicorn -b 0.0.0.0:8081 --log-file /app/logs/gunicorn.log --log-level DEBUG src.expose:app
</code></pre>
| Masquerade | <p>Looking at <a href="https://www.psycopg.org/docs/module.html" rel="nofollow noreferrer">the documentation</a>, it looks like you need to remove the port from the <code>host</code> argument and use the <code>port</code> argument:</p>
<pre><code>def get_db_connection():
conn = psycopg2.connect(host="postgres-postgresql.kafkaplaypen.svc.cluster.local",
port=5432,
database=os.environ["DB_NAME"],
user=os.environ["DB_USERNAME"],
password=os.environ["DB_PASSWORD"])
return conn
</code></pre>
<p>...or just drop the port, since you're using the default.</p>
| larsks |
<p>How can I use an ingress to proxy a URL that is external to Kubernetes?
Previously I used nginx as a proxy; the configuration was as follows.</p>
<pre><code> location /index.html {
proxy_next_upstream http_502 http_504 error timeout invalid_header;
proxy_pass http://172.19.2.2:8080/index.html;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_intercept_errors off;
proxy_connect_timeout 900000;
proxy_send_timeout 900000;
proxy_read_timeout 900000;
add_header Cache-Control 'no-cache';
add_header Access-Control-Allow-Origin *;
add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
</code></pre>
<p><code>http://172.19.2.2:8080/index.html</code> is a service outside of Kubernetes.</p>
<p>How can I achieve the effect of the above nginx configuration proxy in ingress?</p>
<h3>kubernetes version info</h3>
<pre><code>Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<h3>ingress version info</h3>
<p><code>0.20.0</code></p>
<h3>ingress configuration</h3>
<p>Configuration without external url</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dev-yilin-web-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- dev-yilin.example.com
secretName: example-ingress-secret
rules:
- host: dev-yilin.example.com
http:
paths:
- path: /yilin
backend:
serviceName: dev-yilin-web
servicePort: 8080
</code></pre>
| liyao | <p>Here is a nice article from Google Cloud about how to create services for external endpoints: <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services</a></p>
<p>Then all you would have to add is a rules entry with the new service and port you configured. As for how Kubernetes handles this, it returns a DNS name/IP (depending on which method you configured for your endpoint), and then, as far as I understand, the ingress handles this like any other request.</p>
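<p>As a rough sketch (the service name here is made up; the IP and port are taken from your nginx config), the "Service without a selector plus manual Endpoints" approach from that article would look something like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: external-index
spec:
  ports:
    - port: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-index   # must match the Service name
subsets:
  - addresses:
      - ip: 172.19.2.2   # the server outside the cluster
    ports:
      - port: 8080
</code></pre>
<p>You could then reference <code>serviceName: external-index</code> / <code>servicePort: 8080</code> in an extra rule of your existing Ingress.</p>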
<p>Hope this helps.</p>
| cewood |
<p>I created a managed Postgres database in Google Cloud. This database got an external IP address.
In a second step I created a Kubernetes cluster.
From inside the cluster I want to access this external database. Therefore I created a service without a label selector but with an external endpoint pointing to my Postgres database.</p>
<p>I had to allow access to the Postgres database from the (three) cluster nodes. I configured that in the Google Cloud Console (SQL).</p>
<p>My first question: Is this the right way to integrate an external database? Especially this IP access configuration?</p>
<p>To test my connection against the database, my first try was to establish port forwarding from my local host. My idea was to access this database via my database IDE (DataGrip). However, when trying to establish the port forwarding I get the following error:</p>
<pre><code>error: cannot attach to *v1.Service: invalid service 'postgres-bla-bla': Service is defined without a selector
</code></pre>
<p>Second question: How to access this service locally? </p>
<p>In a third step I created a pod with 'partlab/ubuntu-postgresql' docker-image. I did a 'kctrl exec -it ... ' and could access my Postgres database with</p>
<pre><code>psql -h postgres-bla-bla ...
</code></pre>
<p>So basically it works. But I'm sure my solution has some flaws.
What can I do better? How to fix the problem from question 2?</p>
| Thomas Seehofchen | <p>The problem was discussed <a href="https://github.com/txn2/kubefwd/issues/35" rel="noreferrer">here</a> and there is a solution to set up port forwarding to a service without selector/pod (e.g. ExternalName service) by deploying a proxy pod inside K8s:</p>
<pre><code>kubectl -n production run mysql-tunnel-$USER -it --image=alpine/socat --tty --rm --expose=true --port=3306 tcp-listen:3306,fork,reuseaddr tcp-connect:your-internal-mysql-server:3306
kubectl -n production port-forward svc/mysql-tunnel-$USER 3310:3306
</code></pre>
<p>In the example above the MySQL server at <code>your-internal-mysql-server:3306</code> will be available on <code>localhost:3310</code> on your machine.</p>
| Aldekein |
<p>Is there anyway to get all logs from pods in a specific namespace running a dynamic command like a combination of awk and xargs?</p>
<pre><code> kubectl get pods | grep Running | awk '{print $1}' | xargs kubectl logs | grep value
</code></pre>
<p>I have tried the command above but it fails as if <code>kubectl logs</code> is missing the pod name:</p>
<blockquote>
<p>error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples</p>
</blockquote>
<p>Do you have any suggestion about how to get all logs from Running pods?</p>
| placplacboom | <p>Think about what your pipeline is doing:</p>
<p>The <code>kubectl logs</code> command takes as an argument a <em>single</em> pod name, but through your use of <code>xargs</code> you're passing it <em>multiple</em> pod names. Make liberal use of the <code>echo</code> command to debug your pipelines; if I have these pods in my current namespace:</p>
<pre><code>$ kubectl get pods -o custom-columns=name:.metadata.name
name
c069609c6193930cd1182e1936d8f0aebf72bc22265099c6a4af791cd2zkt8r
catalog-operator-6b8c45596c-262w9
olm-operator-56cf65dbf9-qwkjh
operatorhubio-catalog-48kgv
packageserver-54878d5cbb-flv2z
packageserver-54878d5cbb-t9tgr
</code></pre>
<p>Then running this command:</p>
<pre><code>kubectl get pods | grep Running | awk '{print $1}' | xargs echo kubectl logs
</code></pre>
<p>Produces:</p>
<pre><code>kubectl logs catalog-operator-6b8c45596c-262w9 olm-operator-56cf65dbf9-qwkjh operatorhubio-catalog-48kgv packageserver-54878d5cbb-flv2z packageserver-54878d5cbb-t9tgr
</code></pre>
<hr />
<p>To do what you want, you need to arrange to call <code>kubectl logs</code> multiple times with a single argument. You can do that by adding <code>-n1</code> to your <code>xargs</code> command line. Keeping the <code>echo</code> command, running this:</p>
<pre><code>kubectl get pods | grep Running | awk '{print $1}' | xargs -n1 echo kubectl logs
</code></pre>
<p>Gets us:</p>
<pre><code>kubectl logs catalog-operator-6b8c45596c-262w9
kubectl logs olm-operator-56cf65dbf9-qwkjh
kubectl logs operatorhubio-catalog-48kgv
kubectl logs packageserver-54878d5cbb-flv2z
kubectl logs packageserver-54878d5cbb-t9tgr
</code></pre>
<p>That looks more reasonable. If we drop the echo and run:</p>
<pre><code>kubectl get pods | grep Running | awk '{print $1}' | xargs -n1 kubectl logs | grep value
</code></pre>
<p>Then you will get the result you want. You may want to add the <code>--prefix</code> argument to <code>kubectl logs</code> so that you know which pod generated the match:</p>
<pre><code>kubectl get pods | grep Running | awk '{print $1}' | xargs -n1 kubectl logs --prefix | grep value
</code></pre>
<hr />
<p>Not directly related to your question, but you can lose that <code>grep</code>:</p>
<pre><code>kubectl get pods | awk '/Running/ {print $1}' | xargs -n1 kubectl logs --prefix | grep value
</code></pre>
<p>And even lose the <code>awk</code>:</p>
<pre><code>kubectl get pods --field-selector=status.phase==Running -o name | xargs -n1 kubectl logs --prefix | grep value
</code></pre>
| larsks |
<p>I need to grab some pod information which will be used for some unit tests that will be run in-cluster. I need all the information which <code>kubectl describe po</code> gives, but from an in-cluster API call. </p>
<p>I have some working code which makes an API call to apis/metrics.k8s.io/v1beta1/pods, and I have installed the metrics-server on minikube for testing; it is all working and gives me output like this: </p>
<pre><code>Namespace: kube-system
Pod name: heapster-rgnlj
SelfLink: /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/heapster-rgnlj
CreationTimestamp: 2019-09-10 12:27:13 +0000 UTC
Window: 30s
Timestamp: 2019-09-10 12:26:23 +0000 UTC
Name: heapster
Cpu usage: 82166n
Mem usage: 19420Ki
</code></pre>
<pre><code>...
func getMetrics(clientset *kubernetes.Clientset, pods *PodMetricsList) error {
data, err := clientset.RESTClient().Get().AbsPath("apis/metrics.k8s.io/v1beta1/pods").DoRaw()
if err != nil {
return err
}
err = json.Unmarshal(data, &pods)
return err
}
func main() {
config, err := rest.InClusterConfig()
if err != nil {
fmt.Println(err)
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
fmt.Println(err)
}
var pods PodMetricsList
err = getMetrics(clientset, &pods)
if err != nil {
fmt.Println(err)
}
for _, m := range pods.Items {
fmt.Print("Namespace: ", m.Metadata.Namespace, "\n", "Pod name: ", m.Metadata.Name, "\n", )
fmt.Print("SelfLink: ", m.Metadata.SelfLink, "\n", "CreationTimestamp: ", m.Metadata.CreationTimestamp, "\n", )
fmt.Print("Window: ", m.Window, "\n", "Timestamp: ", m.Timestamp, "\n", )
for _, c := range m.Containers {
fmt.Println("Name:", c.Name)
fmt.Println("Cpu usage:", c.Usage.CPU)
fmt.Println("Mem usage:", c.Usage.Memory, "\n")
...
</code></pre>
<p>As I say, what I really need is what you'd get with a 'describe pods' type call. Having looked through the Kubernetes source, this NodeDescriber looks like the right type of function, but I'm slightly at a loss as to how to integrate/implement it to get the desired results. </p>
<p>kubernetes/pkg/printers/internalversion/describe.go</p>
<p>Line 2451 in 4f2d7b9</p>
<p>func (d *NodeDescriber) Describe(namespace, name string, describerSettings...etc)</p>
<p>I'm new to Go and not particularly familiar with kubernetes.
Any pointers as to how to go about it would be greatly appreciated.</p>
| sensedata1 | <p>Looking at the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/versioned/describe.go#L668" rel="nofollow noreferrer">describePod</a> and <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L693" rel="nofollow noreferrer">Describe</a> funcs from staging/src/k8s.io/kubectl/pkg/describe/versioned/describe.go should give you a better picture of how to do this. And since <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L693" rel="nofollow noreferrer">Describe</a> and <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/versioned/describe.go#L628" rel="nofollow noreferrer">PodDescriber</a> are public, you can reuse these for your use case.</p>
<p>You could couple this with a <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#CoreV1Client" rel="nofollow noreferrer">CoreV1Client</a> which has a <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#CoreV1Client.Pods" rel="nofollow noreferrer">Pods</a> func, that returns a <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#PodInterface" rel="nofollow noreferrer">PodInterface</a> that has a <a href="https://godoc.org/k8s.io/api/core/v1#PodList" rel="nofollow noreferrer">List</a> func which would return a list of <a href="https://godoc.org/k8s.io/api/core/v1#Pod" rel="nofollow noreferrer">Pod</a> objects for the given namespace.</p>
<p>Those pod objects will provide the Name needed for the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L693" rel="nofollow noreferrer">Describe</a> func, the Namespace is already known, and the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/interface.go#L49" rel="nofollow noreferrer">describe.DescriberSettings</a> is just a struct type that you could inline to enable showing events in the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L693" rel="nofollow noreferrer">Describe</a> output.</p>
<p>Using the <a href="https://godoc.org/k8s.io/api/core/v1#PodList" rel="nofollow noreferrer">List</a> func will only list the pods that one time. If you're interested in having this list be updated regularly, you might want to look at the Reflector and Informer patterns; both of which are largely implemented in the <a href="https://godoc.org/k8s.io/client-go/tools/cache" rel="nofollow noreferrer">tools/cache</a> package, and the docs briefly explain this concept in the <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">Efficient detection of changes</a> section.</p>
<p>Hope this helps.</p>
| cewood |
<p>I believe that I must create multiple <code>Ingress</code> resources to achieve the desired effect, but must ask, is it possible to have multiple rewrite annotations with nginx (community) controller for the same host?</p>
<p>I have the following, but it won't work, as I understand it, because there is no way to link the rewrite rule to the path explicitly. In this case, I suppose it could use the fact that there are different numbers of capture groups, but that wouldn't always be the case. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: api-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: https://staging.example.com/keyword-lnk/badnewsbears/$1/$2
nginx.ingress.kubernetes.io/rewrite-target: https://staging.example.com/lnk/$2
certmanager.k8s.io/cluster-issuer: staging-letsencrypt-prod
spec:
rules:
- host: staging.example.com
http:
paths:
- path: /
backend:
serviceName: api-service
servicePort: http
- host: badnewsbears.example.com
http:
paths:
- backend:
serviceName: api-service
servicePort: http
path: ^(/a/)(.*)$
- backend:
serviceName: api-service
servicePort: http
path: ^/([tbc])/(.*)
tls:
- hosts:
- staging.example.com
- badnewsbears.example.com
secretName: letsencrypt-prod-issuer-account-key
# ^ must be set to the key matching the annotation
# certmanager.k8s.io/cluster-issuer above
</code></pre>
<p>The goal is to have requests to <code>staging.example.com</code> not have rewrites, but requests to <code>badnewsbears.example.com/t/soup</code> rewrite to <code>https://staging.example.com/keyword-lnk/badnewsbears/t/soup</code>, while
<code>badnewsbears.example.com/a/soup</code> yields <code>https://staging.example.com/lnk/$2</code></p>
<p>Is there a way to specify a mapping of rewrite target->path in the <code>Ingress</code> (or elsewhere), or will I have to separate out the different rewrite rules for the same host into different <code>Ingress</code> resources?</p>
| Ben | <p>TL;DR; you're not really meant to be able to configure multiple rewrites for the kubernetes/ingress-nginx controller type. Although it is possible to hack this together in a limited fashion using regex based rewrites with capture groups, as explained in this answer I posted to <a href="https://stackoverflow.com/a/57822415/207488">How to proxy_pass with nginx-ingress?
</a>.</p>
<p>Hope this helps.</p>
| cewood |
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Looking at the <a href="https://github.com/kubernetes/charts/tree/master/stable/sonarqube" rel="noreferrer">Sonarqube helm chart</a></p>
<p><strong>requirements.yaml</strong></p>
<pre><code>dependencies:
- name: sonarqube
version: 0.5.0
repository: https://kubernetes-charts.storage.googleapis.com/
</code></pre>
<p>Trying to install the latest version of the java plugin:</p>
<p><strong>values.yaml</strong></p>
<pre><code>plugins:
install:
- "http://central.maven.org/maven2/org/sonarsource/java/sonar-java-plugin/5.3.0.13828/sonar-java-plugin-5.3.0.13828.jar"
</code></pre>
<p>However, I am getting an error on the init container:</p>
<pre><code>$ kubectl logs sonarqube-sonarqube-7b5dfd84cf-sglk5 -c install-plugins
sh: /opt/sonarqube/extensions/plugins/install_plugins.sh: Permission denied
</code></pre>
<hr>
<pre><code>$ kubectl describe po sonarqube-sonarqube-7b5dfd84cf-sglk5
Name: sonarqube-sonarqube-7b5dfd84cf-sglk5
Namespace: default
Node: docker-for-desktop/192.168.65.3
Start Time: Thu, 19 Apr 2018 15:22:04 -0500
Labels: app=sonarqube
pod-template-hash=3618984079
release=sonarqube
Annotations: <none>
Status: Pending
IP: 10.1.0.250
Controlled By: ReplicaSet/sonarqube-sonarqube-7b5dfd84cf
Init Containers:
install-plugins:
Container ID: docker://b090f52b95d36e03b8af86de5a6729cec8590807fe23e27689b01e5506604463
Image: joosthofman/wget:1.0
Image ID: docker-pullable://joosthofman/wget@sha256:74ef45d9683b66b158a0acaf0b0d22f3c2a6e006c3ca25edbc6cf69b6ace8294
Port: <none>
Command:
sh
-c
/opt/sonarqube/extensions/plugins/install_plugins.sh
State: Waiting
Reason: CrashLoopBackOff
</code></pre>
<p><strong>Is there a way to <code>exec</code> into the into the init container?</strong></p>
<p>My attempt:</p>
<pre><code>$ kubectl exec -it sonarqube-sonarqube-7b5dfd84cf-sglk5 -c install-plugins sh
error: unable to upgrade connection: container not found ("install-plugins")
</code></pre>
<hr>
<p><strong>Update</strong></p>
<p>With @WarrenStrange's suggestion:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sonarqube-postgresql-59975458c6-mtfjj 1/1 Running 0 11m
sonarqube-sonarqube-685bd67b8c-nmj2t 1/1 Running 0 11m
$ kubectl get pods sonarqube-sonarqube-685bd67b8c-nmj2t -o yaml
...
initContainers:
- command:
- sh
- -c
- 'mkdir -p /opt/sonarqube/extensions/plugins/ && cp /tmp/scripts/install_plugins.sh
/opt/sonarqube/extensions/plugins/install_plugins.sh && chmod 0775 /opt/sonarqube/extensions/plugins/install_plugins.sh
&& /opt/sonarqube/extensions/plugins/install_plugins.sh '
image: joosthofman/wget:1.0
imagePullPolicy: IfNotPresent
name: install-plugins
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/sonarqube/extensions
name: sonarqube
subPath: extensions
- mountPath: /tmp/scripts/
name: install-plugins
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-89d9n
readOnly: true
...
</code></pre>
<p>Create a new pod manifest extracted from the init container manifest. Replace the command with <code>sleep 6000</code> and execute the commands. This allows you to poke around.</p>
| Eric Francis | <p>The issue is that the container does not exist (see the CrashLoopBackOff).</p>
<p>One of the things that I do with init containers (assuming you have the source) is to put a sleep 600 on failure in the entrypoint. At least for debugging. This lets you exec into the container to poke around to see the cause of the failure.</p>
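<p>As a concrete sketch of that debugging approach (reusing the init container spec shown in the question), you can override the init container's command so it just sleeps, then exec into it while it is "running":</p>
<pre><code>initContainers:
  - name: install-plugins
    image: joosthofman/wget:1.0
    # Temporarily replace the real command with a sleep so the
    # container stays alive long enough to exec into it.
    command: ["sh", "-c", "sleep 6000"]
</code></pre>
<pre><code>kubectl exec -it <pod-name> -c install-plugins sh
</code></pre>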
| Warren Strange |
<p>I have job failure alerts in Prometheus, and they resolve themselves about 2 hours after I get the alert, even though the underlying failure is not actually resolved. How come Prometheus resolves it? Just so you know, this is only happening with this job alert.</p>
<p>Job Alert:</p>
<pre><code> - alert: Failed Job Status
expr: increase(kube_job_status_failed[30m]) > 0
for: 1m
labels:
severity: warning
annotations:
identifier: '{{ $labels.namespace }} {{ $labels.job_name }}'
description: '{{ $labels.namespace }} - {{ $labels.job_name }} Failed'
</code></pre>
<p>An example of the alert:</p>
<pre><code>At 3:01 pm
[FIRING:1] Failed Job Status @ <environment-name> <job-name>
<environment-name> - <job-name> Failed
At 5:01 pm
[RESOLVED]
Alerts Resolved:
- <environment-name> - <job-name>: <environment-name> - <job-name> Failed
</code></pre>
<p>Here are the related pods; as can be seen, nothing appears to have been resolved.</p>
<p><a href="https://i.stack.imgur.com/P2Gaz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P2Gaz.png" alt="Scription here" /></a></p>
<p>Thanks for your help in advance!</p>
| cosmos-1905-14 | <p><code>kube_job_status_failed</code> is a gauge representing the number of failed job pods at a given time. The expression <code>increase(kube_job_status_failed[30m]) > 0</code> asks the question: "have there been new failures in the last 30 minutes?" If there haven't, it won't be true, even if old failures remain in the Kubernetes API.</p>
<p>A refinement of this approach is <code>sum(rate(kube_job_status_failed[5m])) by (namespace, job_name) > 0</code>, plus an alert manager configuration to <em>not send resolved notices</em> for this alert. This is because a job pod failure is an event that can't be reversed - the job could be retried, but the pod can't be un-failed so resolution only means the alert has "aged out" or the pods have been deleted.</p>
<p>An expression that looks at the current number of failures recorded in the API server is <code>sum(kube_job_status_failed) by (namespace, job_name) > 0</code>. An alert based on this could be "resolved", but only by the <code>Job</code> objects being removed from the API (which doesn't necessarily mean that a process has succeeded...)</p>
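<p>Plugging the refined expression into the rule format used in the question would look roughly like this (a sketch; the <code>send_resolved: false</code> receiver option on the Alertmanager side is what suppresses the resolved notices, and the receiver names are illustrative):</p>
<pre><code># Alerting rule: fire on newly failed job pods, per namespace/job
- alert: Failed Job Status
  expr: sum(rate(kube_job_status_failed[5m])) by (namespace, job_name) > 0
  labels:
    severity: warning
  annotations:
    description: '{{ $labels.namespace }} - {{ $labels.job_name }} Failed'

# Alertmanager receiver (e.g. for Slack): do not send "resolved" messages
receivers:
  - name: job-failures
    slack_configs:
      - channel: '#alerts'
        send_resolved: false
</code></pre>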
| Jason S |
<p>I am trying to understand how we can create circuit breakers for Cloud Run services. Unlike on GKE, where we use an Istio-style service mesh, how do we implement the same thing on Cloud Run?</p>
| Aadesh kale | <p>On GKE you'd <a href="https://cloud.google.com/traffic-director/docs/configure-advanced-traffic-management#circuit-breaking" rel="nofollow noreferrer">set up a circuit breaker</a> to prevent overloading your legacy backend systems from a surge in requests.</p>
<p>To accomplish the same on Cloud Run or Cloud Functions, you can set a <a href="https://cloud.google.com/run/docs/configuring/max-instances" rel="nofollow noreferrer">maximum number of instances</a>. From that documentation:</p>
<blockquote>
<p>Specifying maximum instances in Cloud Run allows you to limit the scaling of your service in response to incoming requests, although this maximum setting can be exceeded for a brief period due to circumstances such as <a href="https://cloud.google.com/run/docs/about-instance-autoscaling#spikes" rel="nofollow noreferrer">traffic spikes</a>. Use this setting as a way to control your costs or to limit the number of connections to a backing service, such as to a database.</p>
</blockquote>
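<p>For example, a sketch of how you might cap a (hypothetical) service at 5 instances with the gcloud CLI:</p>
<pre><code># Limit the service to at most 5 container instances
gcloud run services update my-service --max-instances=5 --region=us-central1
</code></pre>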
| Frank van Puffelen |
<p>In this application, nodejs pods are running inside Kubernetes, and MongoDB itself is sitting outside on the host as localhost.</p>
<p>This is indeed not a good design, but it is only for the dev environment. In production a separate MongoDB server will be there, so there will be the option to use a non-loopback IP in the endpoint, and it will not be a problem in production.</p>
<p>Have considered following options for dev environment</p>
<ol>
<li><p>Use localhost connect string to connect to mongodb, but it will refer to pod's own localhost not host's localhost</p>
</li>
<li><p>Use headless service and provide localhost ip and port in endpoint. However endpoint doesn't allow loopback</p>
</li>
</ol>
<p>Please suggest if there is a way to access the MongoDB database on the host's localhost from inside the cluster (pod / nodejs application).</p>
| GLK | <p>I'm running on docker for windows, and for me just using <code>host.docker.internal</code> instead of <code>localhost</code> seems to work fine.</p>
<p>For example, my mongodb connection string looks like this:</p>
<pre><code>mongodb://host.docker.internal:27017/mydb
</code></pre>
<p>As an aside, my <code>hosts</code> file includes the following lines (which I didn't add, I guess the <code>docker desktop</code> installation did that):</p>
<pre><code># Added by Docker Desktop
192.168.1.164 host.docker.internal
192.168.1.164 gateway.docker.internal
</code></pre>
| joniba |
<p>I just started to use Rancher, so please correct me on any wrong terminology.</p>
<p>Earlier I was using minikube on a MacBook, which provides SSH easily via <code>minikube ssh</code> for troubleshooting. As I am a newbie to Rancher Desktop, I wanted to SSH into the Rancher Desktop node similarly to minikube.</p>
<p>I googled for the same but unfortunately didn't get any fruitful answer. Thanks in advance.</p>
| Ashish Kumar | <p>On recent versions (1.3 on) you can use the <code>rdctl</code> utility, which ships with Rancher Desktop, and run <code>rdctl shell COMMAND</code> or <code>rdctl shell</code> to ssh into the VM.</p>
| Eric |
<p>I wanted to check my mongo database, as I am not sure which database my application is putting the data into. Basically I have configured a very simple tasks application using a Python Flask API (deployed in GKE) which in turn connects to a mongo database in the same GKE cluster. The application works fine.
I referred to this
<a href="https://levelup.gitconnected.com/deploy-your-first-flask-mongodb-app-on-kubernetes-8f5a33fa43b4" rel="nofollow noreferrer">link</a></p>
<p>Below is my application yaml file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: tasksapp
labels:
app: tasksapp
spec:
replicas: 1
selector:
matchLabels:
app: tasksapp
template:
metadata:
labels:
app: tasksapp
spec:
containers:
- name: tasksapp
image: myimage/1.0
ports:
- containerPort: 5000
imagePullPolicy: Always
</code></pre>
<p>The Python code section which connects to the database is below. This does not have a username/password. I am not sure which database it is connecting to (even though in the code it says 'dev'). This is why I wanted to check the MongoDB pod.</p>
<pre><code>from bson.objectid import ObjectId
import socket
app = Flask(__name__)
app.config["MONGO_URI"] = "mongodb://mongo:27017/dev"
mongo = PyMongo(app)
db = mongo.db
@app.route("/")
def index():
hostname = socket.gethostname()
return jsonify(
message="Welcome to Tasks app! I am running inside {} pod!".format(hostname)
)
</code></pre>
<p>the mongo db deployment yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo
spec:
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: mongo
ports:
- containerPort: 27017
volumeMounts:
- name: storage
mountPath: /data/db
volumes:
- name: storage
persistentVolumeClaim:
claimName: mongo-pvc
</code></pre>
<p>The issue is that when I log into the mongo pod, the <code>mongo</code> command is not recognized, as shown below:</p>
<pre><code>kubectl exec -it mongo-869f6488c8-jnkgp -- /bin/bash
root@mongo-869f6488c8-jnkgp:/# cd home
root@mongo-869f6488c8-jnkgp:/home# ls
root@mongo-869f6488c8-jnkgp:/home# mongo
bash: mongo: command not found
</code></pre>
| bionics parv | <p>First things first, it is not advisable to run MongoDB as a <code>Deployment</code>, also some folks do refrain using Stateful apps in Kubernetes in general such as Databases because it can lead into many problems and it is a hard thing to manage. Some managed services really helps such as Atlas for MongoDB.</p>
<p>Considering this environment is for <strong>testing purposes</strong> or <strong>self managed</strong>, I'd run Mongo using either a helm chart <strong>(when in dev mode)</strong> such as <a href="https://github.com/bitnami/charts/tree/main/bitnami/mongodb/#installing-the-chart" rel="nofollow noreferrer">Bitnami's</a> or a more elegant and advanced Operator such as <a href="https://github.com/mongodb/mongodb-kubernetes-operator" rel="nofollow noreferrer">MongoDB Community Operator</a> <strong>(while in Production and if I really know what I am doing and care for retain mode of PVCs, constant backups, etc.)</strong></p>
<p>Aside from this, for your actual use case, it depends on the <code>namespace</code> you are deploying your app and Mongo's <code>StatefulSet/Deployment/Pod</code> into.</p>
<p>If you are just deploying it in the <code>default</code> namespace, make sure to point the MongoDB hostname in your Python app to <code><mongodb-svcname>.default.svc.cluster.local</code> on MongoDB's port, which normally defaults to <code>27017</code>.</p>
<p>So, again, if your Service is called <code>mongodb-svc</code>, then in order for your Python app to be able to connect to it you'd use <code>mongodb-svc.default.svc.cluster.local:27017</code> as the hostname.</p>
<p>Make sure to match both <code><service-name></code> and <code><namespace></code> and don't forget the <strong>Port</strong> as well</p>
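<p>As an illustration (a minimal sketch, assuming the labels from the Deployment in the question and that the Flask app runs in the same namespace), a ClusterIP Service named <code>mongo</code> would make the <code>mongodb://mongo:27017/dev</code> URI in the code resolve:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo          # DNS name the Flask app uses ("mongo")
spec:
  selector:
    app: mongo         # matches the labels on the mongo Deployment's pods
  ports:
    - port: 27017
      targetPort: 27017
</code></pre>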
| Ernani Joppert |
<p>The EKS server endpoint is <strong>xxxxxxxxxxx.xxx.eks.amazonaws.com</strong> and I've created a YAML file with a deployment and a service object. </p>
<pre><code>[ec2-user@ip-]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fakeserver NodePort 10.100.235.246 <none> 6311:30002/TCP 1h
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 1d
</code></pre>
<p>When I browse to <strong>xxxxxxxxxxx.xxx.eks.amazonaws.com:30002</strong> it takes too long to respond. The security groups allow all traffic in their inbound rules. </p>
| Ratul | <p>You should be using your Worker Node's IP (one of the nodes if you have more than one), not the EKS server endpoint. The EKS server endpoint is the master plane, meant to process requests pertaining to creating/deleting pods, etc.</p>
<p>You also need to make sure that the Security Group of your Node's will allow the traffic.</p>
<p>With this in place you should be able to make the request to your NodePort service.
For Example:</p>
<p><a href="http://your-workernodeIp:NodePortNumber" rel="nofollow noreferrer">http://your-workernodeIp:NodePortNumber</a></p>
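<p>To find a worker node IP to use, something like the following should work (a quick sketch; use the EXTERNAL-IP column if you are connecting from outside the VPC):</p>
<pre><code>kubectl get nodes -o wide
</code></pre>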
| nacho10f |
<pre><code>echo "import /opt/tomcat/webapps/identityiq/WEB-INF/config/BLANKIdentity.xml" | /opt/tomcat/webapps/identityiq/WEB-INF/bin/iiq console)
</code></pre>
<p>This command works if I <code>kubectl exec -it [container] -n [namespace] -- /bin/bash</code> into the pod and then run it from the console.</p>
<p>However, if I try to: <code>kubectl exec [container] -n [namespace] -- echo "import /opt/tomcat/webapps/identityiq/WEB-INF/config/BLANKIdentity.xml" | /opt/tomcat/webapps/identityiq/WEB-INF/bin/iiq console</code></p>
<p>..it claims not to be able to find 'iiq'.</p>
<p>I've tried variations on relative vs. absolute pathing and am currently just defaulting to absolute paths to make sure nothing is getting lost in translation there.</p>
<p>I've also tried variations like: <code>kubectl exec [container] -n [namespace] -- /bin/bash <(echo "import /opt/tomcat/webapps/identityiq/WEB-INF/config/BLANKIdentity.xml" | /opt/tomcat/webapps/identityiq/WEB-INF/bin/iiq console)</code></p>
<p>any suggestions?</p>
| thepip3r | <p>When you run <code>kubectl exec ... <somecommand> | <anothercommand></code>, anything after the <code>|</code> is execute <strong>on your local host</strong>, not inside the remote container. It's just a regular shell <code>a | b | c</code> pipeline, where <code>a</code> in this case is your <code>kubectl exec</code> command.</p>
<p>If you want to run a shell pipeline inside the remote container, you'll need to ensure you pass the entire command line <strong>to the remote container</strong>, for example like this:</p>
<pre><code>kubectl exec somepod -- sh -c '
echo "import /opt/tomcat/webapps/identityiq/WEB-INF/config/BLANKIdentity.xml" |
/opt/tomcat/webapps/identityiq/WEB-INF/bin/iiq console
'
</code></pre>
<p>Here, we're passing the pipeline as an argument to the <code>sh -c</code> command in the pod.</p>
| larsks |
<p>Can EKS Fargate be used in a private EKS cluster which has no outbound internet access?</p>
<p>According to the AWS documentation, the aws-alb-ingress controller is not supported for private EKS clusters with no outbound internet access:</p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html</a></p>
<blockquote>
<p>AWS Fargate is supported with private clusters. You must include the
STS VPC endpoint. For more information, see VPC endpoints for private
clusters. You must use a third-party ingress controller with AWS
Fargate because the ALB Ingress Controller on Amazon EKS does not work
in private clusters and because Classic Load Balancers and Network
Load Balancers are not supported on pods running on Fargate.</p>
</blockquote>
<p>Unfortunately AWS provides no suggestions here on what the third-party options would be. I have not been able to find any information specific to EKS Fargate Private Clusters.</p>
<p>Questions:</p>
<p>1.) Is there an open source ingress controller that uses ALB that would work for Fargate?</p>
<p>2.) Is there a specific reason why the aws-alb-ingress controller will not work in a private cluster? I might be able to request outbound access for specific ports, if that is the issue, but AWS does not provide any detail on this.</p>
| ramen123 | <p>That paragraph in the docs has changed as-of mid/end October, and now says</p>
<blockquote>
<p>AWS Fargate is supported with private clusters. You must include the STS VPC endpoint. For more information, see VPC endpoints for private clusters. <em>You can use the AWS load balancer controller to deploy AWS Application Load Balancers and Network Load Balancers with. The controller supports network load balancers with IP targets, which are required for use with Fargate. For more information, see <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">Application load balancing on Amazon EKS</a> and <a href="https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html#load-balancer-ip" rel="nofollow noreferrer">Load balancer – IP targets</a>.</em></p>
</blockquote>
<p>I <em>emphasised</em> the changed part.</p>
<p>So you now <em>can</em> use ALB-based <code>Ingress</code> with private clusters, and the <a href="https://github.com/aws/containers-roadmap/issues/981#issuecomment-715571153" rel="nofollow noreferrer">newly-introduced IP-target mode for <code>LoadBalancer</code> <code>Service</code></a> supports private clusters too.</p>
<p>Note that this requires <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/" rel="nofollow noreferrer">AWS Load Balancer Controller</a>, which is the new version of aws-alb-ingress-controller.</p>
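<p>As a rough sketch of what that looks like in practice (annotation names are from the AWS Load Balancer Controller; the service name and paths are placeholders), an ALB-backed Ingress for Fargate needs IP targets because there are no node ports to register:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    # internal scheme keeps the ALB inside the VPC for a private cluster
    alb.ingress.kubernetes.io/scheme: internal
    # Fargate pods must be registered by pod IP, not instance/node port
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: my-app
              servicePort: 80
</code></pre>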
| TBBle |
<p>I have a 3rd party docker image that I want to use (<a href="https://github.com/coreos/dex/releases/tag/v2.10.0" rel="nofollow noreferrer">https://github.com/coreos/dex/releases/tag/v2.10.0</a>). I need to inject some customisation into the pod (CSS stylesheet and PNG images). </p>
<p>I haven't found a suitable way to do this yet. Configmap binaryData is not available before v1.10 (or 9, can't remember off the top of my head). I could create a new image and <code>COPY</code> the PNG files into the image, but I don't want the overhead of maintaining this new image - far safer to just use the provided image. </p>
<p>Is there an easy way of injecting these 2/3 files I need into the pod I create?</p>
| agentgonzo | <p>One way would be to mount 1 or more volumes into the desired locations within the pod, seemingly <code>/web/static</code>. This however would overwrite the entire directly so you would need to supply all the files not just those you wish to overwrite.</p>
<p>Example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- image: dex:2.10.0
name: dex
volumeMounts:
- mountPath: /web/static # the mount location within the container
name: dex-volume
volumes:
- name: dex-volume
hostPath:
path: /destination/on/K8s/node # path on host machine
</code></pre>
<p>There are a number of storage types for different cloud providers, so take a look at <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/</a> and see if there's something a little more specific to your environment rather than storing on disk.</p>
<p>For what it's worth, creating your own image would probably be the simplest solution.</p>
| justcompile |
<p>I am writing a bash script and I need the kubectl command to get all the namespaces in my cluster based on a particular label.</p>
<pre><code>kubectl get ns -l=app=backend
</code></pre>
<p>When I run the command above I get:
<code>no resources found</code></p>
| Philcz | <blockquote>
<p>only the pods in the ns have that label. wondering if there's a way I can manipulate kubectl to output only the ns of the pods that have that label</p>
</blockquote>
<p>You can combine a few commands to do something like:</p>
<pre><code>kubectl get pods -A -l app=backend -o json |
jq -r '.items[]|.metadata.namespace' |
sort -u
</code></pre>
<p>This gets a list of all pods in all namespaces that match the label selector; uses <code>jq</code> to extract the namespace name from each pod, and then uses <code>sort -u</code> to produce a unique list.</p>
<hr />
<p>You can actually do this without <code>jq</code> by using the <code>go-template</code> output format, but for me that always means visiting the go template documentation:</p>
<pre><code>kubectl get pods -A -l app=backend \
-o go-template='{{range .items}}{{.metadata.namespace}}{{"\n"}}{{end}}' |
sort -u
</code></pre>
| larsks |
<p>I deployed an EKS cluster and I'd like to add more IAM users to it. I read this doc <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html</a> and it mentions how to map IAM users or roles to k8s, but it doesn't say how to map an IAM group. Is it not supported, or is there a way to do that? I don't want to map many users one by one. When a new user joins the team, I just want to move them to the IAM group without changing anything in EKS.</p>
| Joey Yi Zhao | <p>You can't. You can only map roles and users. Directly from the <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">documentation</a> you linked:</p>
<blockquote>
<ol start="3">
<li>Add your IAM users, roles, or AWS accounts to the configMap. You cannot add IAM groups to the configMap.</li>
</ol>
</blockquote>
<p>The easiest workaround would be to have a different IAM role for each group and only grant that group the ability to assume that role.</p>
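<p>Roughly, that means adding one role per team to the aws-auth ConfigMap and letting IAM group membership control who may assume it (the ARN and group names below are made up):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # One entry per team role; IAM users get access by being allowed
    # to assume this role via their IAM group's policy.
    - rolearn: arn:aws:iam::111122223333:role/eks-dev-team
      username: dev-team:{{SessionName}}
      groups:
        - dev-team
</code></pre>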
| Mark Loeser |
<p>I have an application with multiple services called from a primary application service. I understand the basics of doing canary and A/B deployments, however all the examples I see show a round robin where each request switches between versions. </p>
<p>What I'd prefer is that once a given user/session is associated with a certain version it stays that way to avoid giving a confusing experience to the user. </p>
<p>How can this be achieved with Kubernetes or Istio/Envoy?</p>
| gunygoogoo | <p>You can do this with Istio using <a href="https://istio.io/docs/tasks/traffic-management/request-routing/#route-based-on-user-identity" rel="nofollow noreferrer">Request Routing - Route based on user identity</a> but I don't know how mature the feature is. It may also be possible to route based on cookies or header values.</p>
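<p>For illustration, a sketch of the cookie-based variant (service and subset names are placeholders; the subsets would be defined in a matching DestinationRule):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
    - my-service
  http:
    # Requests carrying the canary cookie stick to v2
    - match:
        - headers:
            cookie:
              regex: "^(.*?;)?(version=v2)(;.*)?$"
      route:
        - destination:
            host: my-service
            subset: v2
    # Everyone else stays on v1
    - route:
        - destination:
            host: my-service
            subset: v1
</code></pre>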
| Jonas |
<p>I want to auto-scale my pod in <strong>Kubernetes</strong>. After some research I understand that I should use <strong>heapster</strong> for monitoring. What tested document can you suggest?
How can I test it?
I know I should use some stress test, but does anyone have a document about it?
Thanks</p>
| yasin lachini | <p>Heapster is EOL. <a href="https://github.com/kubernetes-retired/heapster" rel="nofollow noreferrer">https://github.com/kubernetes-retired/heapster</a></p>
<blockquote>
<p>RETIRED: Heapster is now retired. See the deprecation timeline for more information on support. We will not be making changes to Heapster.</p>
</blockquote>
<p>The following are potential migration paths for Heapster functionality:</p>
<pre><code>For basic CPU/memory HPA metrics: Use metrics-server.
For general monitoring: Consider a third-party monitoring pipeline that can gather Prometheus-formatted metrics. The kubelet exposes all the metrics exported by Heapster in Prometheus format. One such monitoring pipeline can be set up using the Prometheus Operator, which deploys Prometheus itself for this purpose.
For event transfer: Several third-party tools exist to transfer/archive Kubernetes events, depending on your sink. heptiolabs/eventrouter has been suggested as a general alternative.
</code></pre>
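<p>To answer the "how can I test it" part: with metrics-server installed, a rough sketch of a CPU-based HPA test looks like this (the php-apache example is the one commonly used in the Kubernetes docs; substitute your own deployment and service):</p>
<pre><code># Create an HPA targeting 50% CPU, scaling between 1 and 10 replicas
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

# Generate load against the service from a throwaway pod
kubectl run -i --tty load-generator --image=busybox -- /bin/sh -c \
  "while true; do wget -q -O- http://php-apache; done"

# Watch the HPA react
kubectl get hpa -w
</code></pre>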
| dmcgill50 |
<p>I am trying to follow this tutorial:
<a href="https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-ubuntu-18-04" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-ubuntu-18-04</a></p>
<p>Important difference:
I need to run the <code>master</code> on a specific node, and the <code>worker</code> nodes are <em>from different</em> regions on AWS. </p>
<p>So it all went well until I wanted to join the nodes (step 5). The command succeeded but <code>kubectl get nodes</code> still only showed the <code>master</code> node.</p>
<p>I looked at the <code>join</code> command and it contained the <code>master</code>'s <strong>private</strong> IP address:
<code>join 10.1.1.40</code>. I guess that cannot work if the workers are in a different region (note: later we will probably need to add nodes from different providers too, so if there is no important security threat, it should work via public IPs).</p>
<p>So while <code>kubeadm init --pod-network-cidr=10.244.0.0/16</code> initialized the cluster, it did so with this internal IP, so I then tried with
<code>kubeadm init --apiserver-advertise-address <Public-IP-Addr> --apiserver-bind-port 16443 --pod-network-cidr=10.244.0.0/16</code></p>
<p>But then it always hangs, and init does not complete. The kubelet log prints lots of </p>
<p><code>E0610 19:24:24.188347 1051920 kubelet.go:2267] node "ip-x-x-x-x" not found</code></p>
<p>where "ip-x-x-x-x" seems to be the master's node hostname on AWS.</p>
| transient_loop | <p>I think what made it work is that I set the master's hostname to its <strong>public</strong> DNS name, and then used that as <code>--control-plane-endpoint</code> argument..., without <code>--apiserver-advertise-address</code> (but with the <code>--apiserver-bind-port</code> as I need to run it on another port).</p>
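<p>Roughly, the init command then becomes something like this (the DNS name is a placeholder for the master's public DNS name):</p>
<pre><code>kubeadm init \
  --control-plane-endpoint "master-public-dns-name:16443" \
  --apiserver-bind-port 16443 \
  --pod-network-cidr=10.244.0.0/16
</code></pre>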
<p>Need to have it run longer to confirm but so far looks good.</p>
| transient_loop |
<p>I am trying to install velero and minio for my k8s cluster. I have one master and 2 worker nodes.
I have an issue with the NodePort service.</p>
<p>Overall the pods are working and the NodePort service is also running, but when I try to access the Minio dashboard from the browser it changes the port number. I thought the issue was with my service, so I also created another <a href="https://stackoverflow.com/questions/75234024/why-my-nodeport-service-change-its-port-number/75234616?noredirect=1#comment132760590_75234616">question</a> for that.</p>
<p>Actual problem is with Console port.</p>
<p>When I run <code>kubectl logs minio-8649b94fb5-8cr2k -n velero</code> I see this information .</p>
<pre><code>WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-01-25T00-19-54Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: http://10.244.2.136:9000 http://127.0.0.1:9000
Console: http://10.244.2.136:37269 http://127.0.0.1:37269
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
</code></pre>
<p>The port number of</p>
<p><code>Console: http://10.244.2.136:37269 http://127.0.0.1:37269</code> is different than the port of Node Port service.</p>
<p>This is my NodePort Service</p>
<pre><code>master-k8s@masterk8s-virtual-machine:~/velero-v1.2.0-darwin-amd64$ kubectl get svc -n velero
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio NodePort 10.97.197.54 <none> 9000:30480/TCP 82m
</code></pre>
<p>When I open the URL in the browser with the service port number, it switches to the console port and I am not able to access my application.</p>
<p><strong>What I have done to fix this:</strong></p>
<ol>
<li>I try to use Load balancer services and still not able to access the Application.</li>
<li>I also assign the NodePort in service.yaml file It stop changing the port but I am not able to access the Application.</li>
<li>I change the Ports in application and service still the same issue.</li>
<li>Log have noting but this information.</li>
<li>I try to install minio directly on my Ubuntu VM and it was successful.</li>
</ol>
<p><strong>What do I want:</strong></p>
<p>I have done everything I could. I can't find any issue like this or any information related to this topic. Any advice to fix this issue would be very helpful. How can I change the console port or make it match the service port?</p>
<p><strong>Updated yaml</strong></p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: minio
name: minio
labels:
component: minio
spec:
strategy:
type: Recreate
selector:
matchLabels:
component: minio
template:
metadata:
labels:
component: minio
spec:
volumes:
- name: storage
emptyDir: {}
- name: config
emptyDir: {}
containers:
- name: minio
image: minio/minio:latest
imagePullPolicy: IfNotPresent
args:
- server
- /storage
- --console-address=:9090
- --config-dir=/config
env:
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
ports:
- containerPort: 9000
name: api
- containerPort: 9090
name: console
volumeMounts:
- name: storage
mountPath: "/storage"
- name: config
mountPath: "/config"
---
apiVersion: v1
kind: Service
metadata:
namespace: minio
name: minio-service
labels:
component: minio
spec:
# ClusterIP is recommended for production environments.
# Change to NodePort if needed per documentation,
# but only if you run Minio in a test/trial environment, for example with Minikube.
type: LoadBalancer
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
component: minio
---
apiVersion: batch/v1
kind: Job
metadata:
namespace: minio
name: minio-setup
labels:
component: minio
spec:
template:
metadata:
name: minio-setup
spec:
restartPolicy: OnFailure
volumes:
- name: config
emptyDir: {}
containers:
- name: mc
image: minio/mc:latest
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
volumeMounts:
- name: config
mountPath: "/config"
</code></pre>
| tauqeerahmad24 | <p>We need to tell Minio to use a static console port. We can do that by providing the <code>--console-address</code> argument to the <code>minio server</code> command. Here's an example Deployment that I have used to run Minio locally:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: minio
name: minio
namespace: minio
spec:
replicas: 1
selector:
matchLabels:
app: minio
strategy:
type: Recreate
template:
metadata:
labels:
app: minio
spec:
containers:
- command:
- minio
- server
- /data
- --console-address=:9090
envFrom:
- secretRef:
name: minio-creds-9d9kmkc4m4
image: quay.io/minio/minio:latest
name: minio
ports:
- containerPort: 9000
name: api
- containerPort: 9090
name: console
volumeMounts:
- mountPath: /data
name: minio-data
volumes:
- name: minio-data
persistentVolumeClaim:
claimName: minio-data
</code></pre>
<p>This runs <code>minio server /data --console-address=:9090</code>; when Minio starts up, we see in the logs:</p>
<pre><code>Status: 1 Online, 0 Offline.
API: http://10.244.0.11:9000 http://127.0.0.1:9000
Console: http://10.244.0.11:9090 http://127.0.0.1:9090
</code></pre>
<p>Now that we have a static port, we can set up the NodePort service you want:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: minio
name: minio
namespace: minio
spec:
ports:
- name: api
nodePort: 30900
port: 9000
protocol: TCP
targetPort: api
- name: console
nodePort: 30990
port: 9090
protocol: TCP
targetPort: console
selector:
app: minio
type: NodePort
</code></pre>
<p>This exposes the API on port 30900 and the console on port 30990.</p>
<hr />
<p>You can find my complete test including manifests and deployment instructions at <a href="https://github.com/larsks/k8s-example-minio/" rel="nofollow noreferrer">https://github.com/larsks/k8s-example-minio/</a>.</p>
| larsks |
<p>Google has this cool tool <code>kubemci</code> - <code>Command line tool to configure L7 load balancers using multiple kubernetes clusters</code> - with which you can basically have an HA multi-region Kubernetes setup. Which is kind of cool.</p>
<p>But let's say we have an basic architecture like this:</p>
<ul>
<li>Front end is implemented as SPA and uses json API to talk to backend</li>
<li>Backend is a set of microservices which use PostgreSQL as a DB storage engine.</li>
</ul>
<p>So I can create two Kubernetes Clusters on GKE, put both backend and frontend on them (e.g. let's say in London and Belgium) and all looks fine. </p>
<p>Until we think about the database. PostgreSQL is single-master only, so it must be placed in one of the regions only. And if the backend in the London region starts to talk to PostgreSQL in the Belgium region, the performance will really be poor considering the 6ms+ latency between those regions. </p>
<p>So that whole HA setup kind of doesn't make any sense? Or am I missing something? One option to slightly mitigate the issue would be to have a read-only replica in the "slave" region and direct read-only queries there (is that even possible with PostgreSQL?)</p>
| gerasalus | <p>This is a classic architecture scenario that has no easy solution. Making data available in multiple regions is a challenging problem that major companies spend a lot of time and money to solve.</p>
<ul>
<li><p>PostgreSQL does not natively support multi-master writes. Your idea of a replica located in the other region with logic in your app to read and write to the correct database would work. This will give you fast local reads, but slower writes in one region. It's also more complicated code in you app and more work to handle failover of the master. Bandwidth and costs can also be problems with heavy updates.</p></li>
<li><p>Use 3rd-party solutions for multi-master Postgres (like <a href="https://www.2ndquadrant.com/en/resources/postgres-bdr-2ndquadrant/" rel="nofollow noreferrer">Postgres-BDR by 2nd Quadrant</a>) to offload the work to the database layer. This can get expensive and your application still has to manage data conflicts from two regions overwriting the same data at the same time.</p></li>
<li><p>Choose another database that supports multi-regional replication with multi-master writes. <a href="http://cassandra.apache.org/" rel="nofollow noreferrer">Cassandra</a> (or <a href="https://www.scylladb.com/" rel="nofollow noreferrer">ScyllaDB</a>) is a good choice, or hosted options like <a href="https://cloud.google.com/spanner/" rel="nofollow noreferrer">Google Spanner</a>, <a href="https://learn.microsoft.com/en-us/azure/cosmos-db/introduction" rel="nofollow noreferrer">Azure CosmosDB</a>, <a href="https://aws.amazon.com/dynamodb/global-tables/" rel="nofollow noreferrer">AWS DynamoDB Global Tables</a>, and others. An interesting option is <a href="https://www.cockroachlabs.com/" rel="nofollow noreferrer">CockroachDB</a> which supports the PostgreSQL protocol but is a scalable relational database and supports multiple regions.</p></li>
<li><p>If none of these options work, you'll have to create your own replication system. Some companies do this with an event-sourced / CQRS architecture where every write is a message sent to a central log, then applied in every location. This is more work but provides the most flexibility. At this point you're also basically building your own database replication system.</p></li>
</ul>
| Mani Gandham |
<p>I am getting below error in Jenkins while deploying to kubernetes cluster:</p>
<blockquote>
<p>ERROR: ERROR: java.lang.RuntimeException: io.kubernetes.client.openapi.ApiException: java.net.UnknownHostException: **.azmk8s.io: Name or service not known
hudson.remoting.ProxyException: java.lang.RuntimeException: io.kubernetes.client.openapi.ApiException: java.net.UnknownHostException:</p>
</blockquote>
<p>Tried to deploy with below jenkins pipeline snippet:</p>
<pre><code>kubernetesDeploy(
configs: 'deploymentFile.yaml',
kubeconfigId: 'Kubeconfig',
enableConfigSubstitution: true
)
</code></pre>
<p>Please suggest</p>
| Anil Kumar P | <p>Have you deployed AKS private cluster (<a href="https://learn.microsoft.com/en-us/azure/aks/private-clusters" rel="nofollow noreferrer">document</a>)? If so, jenkins needs to be in the private network to access k8s cluster. </p>
<p>If this is not private cluster, check the network setting of jenkins to see it is able to connect to internet. also check the DNS setting of the jenkins box as the error which you have shard is DNS error.</p>
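<p>For example, a quick way to verify this from the Jenkins host is the rough sketch below (the FQDN is a placeholder — use the API server address from your kubeconfig). Even an authentication error from the second command would prove that name resolution and connectivity are fine:</p>
<pre><code># run on the Jenkins controller/agent that executes the deploy step
nslookup mycluster-dns-12345678.hcp.westeurope.azmk8s.io
curl -vk https://mycluster-dns-12345678.hcp.westeurope.azmk8s.io:443/version
</code></pre>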
| Atul |
<p>I am using Azure Kubernetes. I installed Istio 1.6.1. It installed the Istio-ingressgateway with LoadBalancer. I don't want to use Istio ingressgateway because I want to kong ingress. </p>
<p>I tried to run below command to change istio-ingress services from LoadBalancer to ClusterIP but getting errors.</p>
<pre><code>$ kubectl patch svc istio-ingressgateway -p '{"spec": {"ports": "type": "ClusterIP"}}' -n istio-system
Error from server (BadRequest): invalid character ':' after object key:value pair
</code></pre>
<p>Not sure if I can make the changes and delete and re-create istio-ingress service?</p>
| Vikas Kalra | <p>The better option would be to reinstall istio without ingress controller. Do not install default profile in istio as it will install ingress controller along with other component. Check the various settings as mentioned in the installation page of <a href="https://istio.io/latest/docs/setup/install/istioctl/" rel="nofollow noreferrer">istio</a> and disable ingress controller.</p>
<p>Also check the documentation on using Istio and Kong together on this k8s <a href="https://kubernetes.io/blog/2020/03/18/kong-ingress-controller-and-istio-service-mesh/" rel="nofollow noreferrer">page</a> and see what needs to be done during the Kong installation in order to enable communication between Kong and the other services.</p>
| Atul |
<p>We have one cluster where it seems that namespaces never want to be deleted completely, and now I can't re-create the custom-metrics namespace to be able to collect custom metrics to properly set up HPA. I fully understand that I can create another namespace with all the custom-metrics resources, but I am a little concerned about the overall health of the cluster, given that the namespaces get stuck in the "Terminating" state</p>
<pre><code>$ kubectl get ns
NAME STATUS AGE
cert-manager Active 14d
custom-metrics Terminating 7d
default Active 222d
nfs-share Active 15d
ingress-nginx Active 103d
kube-public Active 222d
kube-system Active 222d
lb Terminating 4d
monitoring Terminating 6d
production Active 221d
</code></pre>
<p>I already tried to export the namespaces to JSON, delete the finalizers and re-create them using the edited JSON files. I also tried to kubectl edit ns custom-metrics and delete the "- kubernetes" finalizer, all to no avail.</p>
<p>Does anyone have any other recommendations on how else I can try to destroy these "stuck" namespaces?</p>
<p>curl to <a href="https://master-ip/api/v1/namespace/...../finalize" rel="noreferrer">https://master-ip/api/v1/namespace/...../finalize</a> doesn't seem to work on Google Kubernetes Engine for me, I'm assuming these operations are not allowed on GKE cluster</p>
<p>Trying things like doesn't work as well:</p>
<pre><code>$ kubectl delete ns custom-metrics --grace-period=0 --force
</code></pre>
<blockquote>
<p>warning: Immediate deletion does not wait for confirmation that the
running resource has been terminated. The resource may continue to run
on the cluster indefinitely. Error from server (Conflict): Operation
cannot be fulfilled on namespaces "custom-metrics": The system is
ensuring all content is removed from this namespace. Upon completion,
this namespace will automatically be purged by the system.</p>
</blockquote>
<p>and there are no resources listed in this namespace at all:
<code>kubectl get all -n custom-metrics</code> or looping through all api-resources in this namespace shows no resources exist at all:
<code>kubectl api-resources --namespaced=true -o name | xargs -n 1 kubectl get -n custom-metrics</code></p>
| Alex Smirnov | <p>I did something similar to rahul.tripathi except the curl did not work for me - I followed <a href="https://medium.com/@craignewtondev/how-to-fix-kubernetes-namespace-deleting-stuck-in-terminating-state-5ed75792647e" rel="noreferrer">https://medium.com/@craignewtondev/how-to-fix-kubernetes-namespace-deleting-stuck-in-terminating-state-5ed75792647e</a> which does the following:</p>
<pre><code>NAMESPACE=
kubectl get namespace $NAMESPACE -o json > $NAMESPACE.json
sed -i -e 's/"kubernetes"//' $NAMESPACE.json
kubectl replace --raw "/api/v1/namespaces/$NAMESPACE/finalize" -f ./$NAMESPACE.json
</code></pre>
<p>Voila! Namespace is deleted</p>
<p>Update: One-liner version of this solution (requires jq)</p>
<pre><code>NAMESPACE= ; kubectl get namespace $NAMESPACE -o json | jq 'del(.spec.finalizers[0])' | kubectl replace --raw "/api/v1/namespaces/$NAMESPACE/finalize" -f -
</code></pre>
<p>Update #2: Terraform version</p>
<pre><code>resource "kubernetes_namespace" "this" {
for_each = toset( var.namespaces )
metadata {
name = each.key
}
provisioner "local-exec" {
when = destroy
command = "nohup ${path.module}/namespace-finalizer.sh ${each.key} 2>&1 &"
}
}
</code></pre>
<p>namespace-finalizer.sh</p>
<pre><code>sleep 30; kubectl get namespace $1 && kubectl get namespace $1 -o json | jq 'del(.spec.finalizers[0])' | kubectl replace --raw "/api/v1/namespaces/$1/finalize" -f -
</code></pre>
| saranicole |
<p>I want to remove a few environment variables in a container with kustomize. Is that possible? When I patch, it just adds them, as you may know.</p>
<p>If it's not possible, can we replace the environment variable name and the secret key name/key pair altogether?</p>
<pre><code> containers:
- name: container1
env:
- name: NAMESPACE
valueFrom:
secretKeyRef:
name: x
key: y
</code></pre>
<p>Any help on this will be appreciated! Thanks!</p>
| cosmos-1905-14 | <p>If you're looking remove that <code>NAMESPACE</code> variable from the manifest, you can use the special <code>$patch: delete</code> directive to do so.</p>
<p>If I start with this Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
template:
spec:
containers:
- name: example
image: docker.io/traefik/whoami:latest
env:
- name: ENV_VAR_1
valueFrom:
secretKeyRef:
name: someSecret
key: someKeyName
- name: ENV_VAR_2
value: example-value
</code></pre>
<p>If I write in my <code>kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
patches:
- patch: |
apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
template:
spec:
containers:
- name: example
env:
- name: ENV_VAR_1
$patch: delete
</code></pre>
<p>Then the output of <code>kustomize build</code> is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
template:
spec:
containers:
- env:
- name: ENV_VAR_2
value: example-value
image: docker.io/traefik/whoami:latest
name: example
</code></pre>
<p>Using a strategic merge patch like this has an advantage over a JSONPatch style patch like Nijat's answer because it doesn't depend on the order in which the environment variables are defined.</p>
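<p>For comparison, a JSONPatch-style version of the same removal might look like the sketch below; note how it has to address the variable by its numeric index in the <code>env</code> list, which is exactly what makes it order-dependent:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
patches:
  - target:
      kind: Deployment
      name: example
    patch: |
      # removes the first env entry (ENV_VAR_1) -- breaks if the order ever changes
      - op: remove
        path: /spec/template/spec/containers/0/env/0
</code></pre>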
| larsks |
<p>I'm trying to write logs to an Elasticsearch index from a Kubernetes cluster. Fluent-bit is being used to read stdout and it enriches the logs with metadata including pod labels. A simplified example log object is</p>
<pre><code>{
"log": "This is a log message.",
"kubernetes": {
"labels": {
"app": "application-1"
}
}
}
</code></pre>
<p>The problem is that a few other applications deployed to the cluster have labels of the following format:</p>
<pre><code>{
"log": "This is another log message.",
"kubernetes": {
"labels": {
"app.kubernetes.io/name": "application-2"
}
}
}
</code></pre>
<p>These applications are installed via Helm charts and the newer ones are following the label and selector conventions as laid out <a href="https://github.com/helm/charts/blob/master/REVIEW_GUIDELINES.md#metadata" rel="noreferrer">here</a>. The naming convention for labels and selectors was updated in Dec 2018, seen <a href="https://github.com/helm/charts/commit/7458584650756b9ba8e9380ac727e89fe5ae285c#diff-9b00c37f18653dae7f2730865280ae4c" rel="noreferrer">here</a>, and not all charts have been updated to reflect this.</p>
<p>The end result of this is that depending on which type of label format makes it into an Elastic index first, trying to send the other type in will throw a mapping exception. If I create a new empty index and send in the namespaced label first, attempting to log the simple <code>app</code> label will throw this exception:</p>
<pre><code>object mapping for [kubernetes.labels.app] tried to parse field [kubernetes.labels.app] as object, but found a concrete value
</code></pre>
<p>The opposite situation, posting the namespaced label second, results in this exception:</p>
<pre><code>Could not dynamically add mapping for field [kubernetes.labels.app.kubernetes.io/name]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text].
</code></pre>
<p>What I suspect is happening is that Elasticsearch sees the periods in the field name as JSON dot notation and is trying to flesh it out as an object. I was able to find <a href="https://github.com/elastic/elasticsearch/commit/aed1f68e494c65ad50b98a3e0a7a2b6a794b2965" rel="noreferrer">this PR</a> from 2015 which explicitly disallows periods in field names however it seems to have been reversed in 2016 with <a href="https://github.com/elastic/elasticsearch/pull/17759" rel="noreferrer">this PR</a>. There is also this multi-year <a href="https://discuss.elastic.co/t/field-name-cannot-contain/33251/48" rel="noreferrer">thread</a> from 2015-2017 discussing this issue but I was unable to find anything recent involving the latest versions.</p>
<p>My current thought on moving forward is to standardize the Helm charts we are using so that all of the labels follow the same convention. This seems like a band-aid on the underlying issue, though, which is that I feel like I'm missing something obvious in the configuration of Elasticsearch and dynamic field mappings.</p>
<p>Any help here would be appreciated.</p>
| rpf3 | <p>I opted to use the Logstash mutate filter with the <code>rename</code> option as described here:</p>
<p><a href="https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-rename" rel="nofollow noreferrer">https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-rename</a></p>
<p>The end result looked something like this:</p>
<pre><code>filter {
  mutate {
    rename => {
      "[kubernetes][labels][app]" => "[kubernetes][labels][app.kubernetes.io/name]"
      "[kubernetes][labels][chart]" => "[kubernetes][labels][helm.sh/chart]"
    }
  }
}
</code></pre>
| rpf3 |
<p>I have the following chartInflator.yml file:</p>
<pre><code>apiVersion: builtin
kind: ChartInflator
metadata:
name: project-helm-inflator
chartName: helm-k8s
chartHome: ../../../helm-k8s/
releaseName: project-monitoring-chart
values: ../../values.yaml
releaseNamespace: project-monitoring-ns
</code></pre>
<p>When I ran it using this, I got the error message below:</p>
<pre><code>$ kustomize build .
Error: loading generator plugins: failed to load generator: plugin HelmChartInflationGenerator.builtin.[noGrp]/project-helm-inflator.[noNs] fails configuration: chart name cannot be empty
</code></pre>
<p>Here is my project structure:</p>
<pre><code>project
- helm-k8s
- values.yml
- Chart.yml
- templates
- base
- project-namespace.yml
- grafana
- grafana-service.yml
- grafana-deployment.yml
- grafana-datasource-config.yml
- prometheus
- prometheus-service.yml
- prometheus-deployment.yml
- prometheus-config.yml
- prometheus-roles.yml
- kustomization.yml
- prod
- kustomization.yml
- test
- kustomization.yml
</code></pre>
| joesan | <p>I think you may have found some outdated documentation for the helm chart generator. The canonical documentation for this is <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md" rel="nofollow noreferrer">here</a>. Reading that implies several changes:</p>
<ol>
<li><p>Include the inflator directly in your <code>kustomization.yaml</code> in the <code>helmCharts</code> section.</p>
</li>
<li><p>Use <code>name</code> instead of <code>chartName</code>.</p>
</li>
<li><p>Set <code>chartHome</code> in the <code>helmGlobals</code> section rather than per-chart.</p>
</li>
</ol>
<p>That gets us something like this in our <code>kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmGlobals:
chartHome: ../../../helm-k8s/
helmCharts:
- name: helm-k8s
releaseName: project-monitoring-chart
values: ../../values.yaml
releaseNamespace: project-monitoring-ns
</code></pre>
<p>I don't know if this will actually work -- you haven't provided a reproducer in your question, and I'm not familiar enough with Helm to whip one up on the spot -- but I will note that your project layout is highly unusual. You appear to be trying to use Kustomize to deploy a Helm chart that <em>contains</em> your kustomize configuration, and it's not clear what the benefit is of this layout vs. just creating a helm chart and then using kustomize to inflate it from <em>outside</em> of the chart templates directory.</p>
<p>You may need to add <code>--load-restrictor LoadRestrictionsNone</code> when calling <code>kustomize build</code> for this to work; by default, the <code>chartHome</code> location must be contained by the same directory that contains your <code>kustomization.yaml</code>.</p>
<hr />
<p><strong>Update</strong></p>
<p>To make sure things are clear, this is what I'm recommending:</p>
<ol>
<li><p>Remove the kustomize bits from your helm chart, so that it looks <a href="https://github.com/larsks/open-electrons-deployments/tree/less-weird-layout/open-electrons-monitoring" rel="nofollow noreferrer">like this</a>.</p>
</li>
<li><p>Publish your helm charts somewhere. I've set up github pages for that repository and published the charts at <a href="http://oddbit.com/open-electrons-deployments/" rel="nofollow noreferrer">http://oddbit.com/open-electrons-deployments/</a>.</p>
</li>
<li><p>Use kustomize to deploy the chart with transformations. Here we add a <code>-prod</code> suffix to all the resources:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: open-electrons-monitoring
repo: http://oddbit.com/open-electrons-deployments/
nameSuffix: -prod
</code></pre>
</li>
</ol>
| larsks |
<p>Referencing the bookinfo yaml here:
My gateway looks like:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
name: https
number: 443
protocol: https
tls:
mode: PASSTHROUGH
hosts:
- "*"
</code></pre>
<p>Configuring it to accept https from all host. However, in the VirtualService, I want to achieve a URL match based routing. This is how my current configuration for VS looks.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
tls:
- match:
- uri:
prefix: /productpage
- port: 443
sniHosts:
- "*"
route:
- destination:
host: productpage
port:
number: 9080
</code></pre>
<p>On deploying it fails with the error, "TLS match must have at least one SNI host". The same VS configuration works if I remove the uri match criteria.</p>
<p>Is there a way to have URI match based routing for TLS while keeping generic sniHosts (as my host is common and I need to route to a particular app based on url prefixes)?</p>
| Jim | <p>In Istio, VirtualService TLS Match does not contains URI based routing (<a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#TLSMatchAttributes" rel="nofollow noreferrer">link</a>) . TLS is kind of opaque connection which can perform only host based routing (as hostname is present in the client hello tcp handshake).</p>
<p>In order to achieve path based routing, you will need to terminate the TLS as the gateway level and perform routing based on http. HTTP messages are transparent messages where L7 routing can be applied by istio or any other intermitient layer.</p>
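<p>As a rough sketch of that first option (not a drop-in config — it assumes a TLS secret named <code>bookinfo-cert</code> with your cert and key exists in the istio-system namespace), the gateway terminates TLS with mode SIMPLE and the VirtualService can then use an http match for the path:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE                   # terminate TLS here instead of PASSTHROUGH
      credentialName: bookinfo-cert  # assumed secret holding the cert/key
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:                              # HTTP routing becomes possible after termination
  - match:
    - uri:
        prefix: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080
</code></pre>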
<p>Another alternative is to use nginx or any other reverse proxy (which performs the SSL termination and routes the call to the appropriate service directly). In short, in order to perform L7-based routing (path-based being one of them), you will need to decrypt the request (TLS termination), whether that is done at the Istio end or the application end.</p>
| Atul |
<p>I'm trying to have a rule listening to a specific path containing a dollar sign like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: metadata-ingress
annotations:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/use-regex: "false"
spec:
ingressClassName: public
tls:
- hosts:
- mydomain.com
rules:
- host: mydomain.com
http:
paths:
- path: /api/v2/$metadata
pathType: Prefix
backend:
service:
name: busybox
port:
number: 8280
</code></pre>
<p>I don't want any url rewrite or anything fancy, just want this specific path to be caught and forwarded to this service.</p>
<p>Without the "$" it works.</p>
<p>I thought disabling regex with <code>use-regex: "false"</code> would fix it, but no.</p>
<p>I also tried using the url encoded value for $ : %24metadata but it doesn't help either.</p>
<p>I also tried to use "exact" instead of "prefix" as the pathType but no.</p>
| Dunge | <p>I can't reproduce your problem, but I thought I walk through my test setup and you can tell me if anything is different. For the purpose of testing different paths, I have two deployments using the <a href="https://hub.docker.com/r/traefik/whoami" rel="nofollow noreferrer"><code>traefik/whoami</code></a> image (this just provides a useful endpoint that shows us -- among other things -- the hostname and path involved in the request).</p>
<p>That looks like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: example
component: app1
name: example-app1
spec:
ports:
- name: http
port: 80
targetPort: http
selector:
app: example
component: app1
---
apiVersion: v1
kind: Service
metadata:
labels:
app: example
component: app2
name: example-app2
spec:
ports:
- name: http
port: 80
targetPort: http
selector:
app: example
component: app2
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: example
component: app1
name: example-app1
spec:
selector:
matchLabels:
app: example
component: app1
template:
metadata:
labels:
app: example
component: app1
spec:
containers:
- image: docker.io/traefik/whoami:latest
name: whoami
ports:
- containerPort: 80
name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: example
component: app2
name: example-app2
spec:
selector:
matchLabels:
app: example
component: app2
template:
metadata:
labels:
app: example
component: app2
spec:
containers:
- image: docker.io/traefik/whoami:latest
name: whoami
ports:
- containerPort: 80
name: http
</code></pre>
<p>I've also deployed the following Ingress resource, which looks mostly like yours, except I've added a second <code>paths</code> config so that we can compare requests that match <code>/api/v2/$metadata</code> vs those that do not:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: house
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
name: example
spec:
ingressClassName: nginx
rules:
- host: example.apps.infra.house
http:
paths:
- backend:
service:
name: example-app1
port:
name: http
path: /
pathType: Prefix
- backend:
service:
name: example-app2
port:
name: http
path: /api/v2/$metadata
pathType: Prefix
tls:
- hosts:
- example.apps.infra.house
secretName: example-cert
</code></pre>
<p>With these resources in place, a request to <code>https://example.apps.infra.house/</code> goes to <code>app1</code>:</p>
<pre><code>$ curl -s https://example.apps.infra.house/ | grep Hostname
Hostname: example-app1-596fcf48bd-dqhvc
</code></pre>
<p>Whereas a request to <code>https://example.apps.infra.house/api/v2/$metadata</code> goes to <code>app2</code>:</p>
<pre><code>$ curl -s https://example.apps.infra.house/api/v2/\$metadata | grep Hostname
Hostname: example-app2-8675dc9b45-6hg7l
</code></pre>
<p>So that all seems to work.</p>
<hr />
<p>We can, if we are so inclined, examine the nginx configuration that results from that Ingress. On my system, the nginx ingress controller runs in the <code>nginx-ingress</code> namespace:</p>
<pre><code>$ kubectl -n nginx-ingress get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
ingress-nginx-controller 1/1 1 1 8d
</code></pre>
<p>The configuration lives in <code>/etc/nginx/nginx.conf</code> in the container. We can <code>cat</code> the file to stdout and look for the relevant directives:</p>
<pre><code>$ kubectl -n nginx-ingress exec deploy/ingress-nginx-controller cat /etc/nginx/nginx.conf
...
location /api/v2/$metadata/ {
...
}
...
</code></pre>
<hr />
<p>Based on your comment, the following seems to work:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: "/$2"
cert-manager.io/cluster-issuer: house
spec:
ingressClassName: nginx
tls:
- hosts:
- example.apps.infra.house
secretName: example-cert
rules:
- host: example.apps.infra.house
http:
paths:
- path: /app1(/|$)(.*)
pathType: Prefix
backend:
service:
name: example-app1
port:
name: http
# Note the use of single quotes (') here; this is
# important; using double quotes we would need to
# write `\\$` instead of `\$`.
- path: '/api/v2/\$metadata'
pathType: Prefix
backend:
service:
name: example-app2
port:
name: http
</code></pre>
<p>The resulting <code>location</code> directives look like:</p>
<pre><code>location ~* "^/api/v2/\$metadata" {
...
}
location ~* "^/app1(/|$)(.*)" {
...
}
</code></pre>
<p>And a request for the <code>$metadata</code> path succeeds.</p>
| larsks |
<p>I have a web solution (angular application connecting to rest services) deployed in Kubernetes. I don't use any http sessions in my solution.</p>
<p>On upgrade of my rest services, I need to have both my pods with rest service version 1 and with rest service with version 2 available. Is there any way to setup a gateway/router where I can configure my endpoints dynamically?</p>
<p>I want <code>/myendpoint?version=1</code> to route the traffic to the group of PODs with version 1, and <code>/myendpoint?version=2</code> to route the traffic to the other group of PODs.</p>
<p>I must be able to dynamically add new endpoints without stopping the service. </p>
| Elena | <h2>Separate components by deployment cycle</h2>
<p>I would recommend separating the <strong>frontend</strong> app and the REST <strong>backend</strong>. (I don't know if you have this already.)</p>
<p>With this separation, you can roll out new versions independently, with a deployment cycle for each app.</p>
<h2>Two Deployment or N-1 compatibility</h2>
<p>In addition, if you want to have multiple versions of the same app available for a longer period, you can deploy them in two different <code>Deployment</code>s</p>
<p>e.g. two <code>Deployment</code>s, each with its own <code>Service</code>, and one <code>Ingress</code> that routes to both.</p>
<pre><code>kind: Ingress
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /v1/*
backend:
serviceName: service-v1
servicePort: 8080
- path: /v2/*
backend:
serviceName: service-v2
servicePort: 8080
</code></pre>
<p>Or you can have <strong>N-1</strong> compatibility, so version 2 implements both <code>/v1/</code> and <code>/v2/</code> API.</p>
<h2>Consider using CDN for static assets</h2>
<p>It is usually recommended to deploy the frontend on a CDN since it is <strong>static</strong> content. Sometimes your JavaScript refers to other JavaScript files using <a href="https://www.keycdn.com/support/what-is-cache-busting" rel="nofollow noreferrer"><em>cache busting</em></a>; such a setup is much easier to handle if all your static content is available from a CDN. </p>
| Jonas |
<p>I'm reading through <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a>, but it is not giving any concrete commands and it is mostly focusing when we want to create everything from scratch. It's also explaining auth for engineers using Kubernetes.</p>
<p>I have an existing deployment and service (with exposed external IP) and would like to create the simplest possible authentication (preferably token based) for an external user accessing the exposed IP. I can't add authentication to the services since I don't have access to their code. If somebody could help me with some commands I would be grateful.</p>
| user2085124 | <p>The documentation which referred is for authentication with k8s (for api accesses). This is not for application layer authentication.</p>
<p>However I can suggest one way to implement application layer authentication without changing the service at all. You can redirect the traffic to nginx (or any other reverse proxy) which can perform the authentication and redirect the authenticated user to service directly. It can also perform some kind of authorization too.</p>
<p>There are various resources available which can help you choose various authentication mechanism available in nginx such as password file based mechanism (<a href="https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/" rel="nofollow noreferrer">link</a>) or JWT based authentication (<a href="https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-jwt-authentication/" rel="nofollow noreferrer">link</a>)</p>
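<p>As a rough sketch of the password-file approach (the upstream service name and port below are assumptions — point it at your own Service's cluster DNS name), the nginx configuration could look roughly like this:</p>
<pre><code>server {
    listen 80;

    location / {
        # prompt for credentials stored in an htpasswd file
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # forward authenticated requests to the existing, unmodified service
        proxy_pass http://my-service.default.svc.cluster.local:80;
    }
}
</code></pre>
<p>The htpasswd file can be generated with <code>htpasswd -c .htpasswd username</code> and mounted into the nginx pod (e.g. from a Secret); the original service then only needs to be reachable from nginx instead of being exposed externally.</p>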
| Atul |
<p>I'm porting a node/react/webpack app to k8s, and am trying to configure a development environment that makes use of the hot-reloading feature of webpack. I'm hitting an error when running this with a shared volume on <code>minikube</code>: </p>
<pre><code>ERROR in ./~/css-loader!./~/sass-loader/lib/loader.js?{"data":"$primary: #f9427f;$secondary: #171735;$navbar-back-rotation: 0;$navbar-link-rotation: 0;$login-background: url('/images/login-background.jpg');$secondary-background: url('/images/secondary-bg.jpg');"}!./src/sass/style.sass
Module build failed: Error: Node Sass does not yet support your current environment: Linux 64-bit with Unsupported runtime (67)
For more information on which environments are supported please see:
</code></pre>
<p>Running the code in the container by itself (mostly) works--it starts up without errors and serves the page via <code>docker run -it --rm --name=frontend --publish=3000:3000 <container hash></code></p>
<pre><code>#Dockerfile
FROM node:latest
RUN mkdir /code
ADD . /code/
WORKDIR /code/
RUN yarn cache clean && yarn install --non-interactive && npm rebuild node-sass
CMD npm run dev-docker
</code></pre>
<p>where <code>dev-docker</code> in <code>package.json</code> is <code>NODE_ENV=development npm run -- webpack --progress --hot --watch</code></p>
<p>In the following, commenting out the <code>volumeMounts</code> key eliminates the error. </p>
<pre><code># deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: dev
name: web
labels:
app: web
spec:
replicas: 1
selector:
matchLabels:
app: frontend-container
template:
metadata:
labels:
app: frontend-container
spec:
volumes:
- name: frontend-repo
hostPath:
path: /Users/me/Projects/code/frontend
containers:
- name: web-container
image: localhost:5000/react:dev
ports:
- name: http
containerPort: 3000
protocol: TCP
volumeMounts:
- name: frontend-repo
mountPath: /code
env:
... # redacted for simplicity, assume works
</code></pre>
<p>Based on what I've found elsewhere, I believe that the OS-native bindings used by <code>node-sass</code> are interfering between host and container when the shared volume is introduced. That is, the image build process creates the bindings that would work for the container, but those are overwritten when the shared volume is mounted. </p>
<p>Is this understanding correct? How do I best structure things so that a developer can work on their local repo and see those changes automatically reflected in the cluster instance, without rebuilding images?</p>
| Ben | <p>My hypothesis was borne out--the node modules were being built for the container, but overwritten by the <code>volumeMount</code>. The approach that worked best at this point was to do the requirements building as the entrypoint of the container, so that it would run when the container started up, rather than only at build time. </p>
<pre><code># Dockerfile
# install dependencies at container start (after the volume is mounted), not at build time
CMD yarn cache clean && yarn install --non-interactive --force && npm run dev-docker
</code></pre>
| Ben |
<p>I have some stateless applications where I want one pod to be scheduled on each node (limited by a node selector). If I have 3 nodes in the cluster and one goes down then I should still have 2 pods (one on each node).</p>
<p>This is exactly what DaemonSets do, but DaemonSets have a couple of caveats to their usage (such as not supporting node draining, and tools such as Telepresence not supporting them). So I would like to emulate the behaviour of DaemonSets using Deployments.</p>
<p>My first idea was to use horizontal pod autoscaler with custom metrics, so the desired replicas would be equal to the number of nodes. But even after implementing this, it still wouldn't guarantee that one pod would be scheduled per node (I think?).</p>
<p>Any ideas on how to implement this?</p>
| bcoughlan | <h1>Design for Availability</h1>
<blockquote>
<p>If I have 3 nodes in the cluster and one goes down then I should still have 2 pods (one on each node).</p>
</blockquote>
<p>I understand this as that you want to design your cluster for <strong>Availability</strong>. So the most important thing is that your replicas (pods) is spread on <em>different</em> nodes, to reduce the effect if a node goes down.</p>
<h2>Schedule pods on different nodes</h2>
<p>Use <code>PodAntiAffinity</code> and <code>topologyKey</code> for this.</p>
<blockquote>
<p>deploy the redis cluster so that no two instances are located on the same host. </p>
</blockquote>
<p>See <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#never-co-located-in-the-same-node" rel="nofollow noreferrer">Kubernetes documentation: Never co-located in the same node</a> and the <a href="https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure" rel="nofollow noreferrer">ZooKeeper High Availability example</a></p>
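<p>As a minimal sketch of what that looks like in a Deployment template (label values and image are illustrative):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # never schedule two pods with label app=my-app on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: my-app:latest
</code></pre>
<p>With <code>requiredDuringSchedulingIgnoredDuringExecution</code> the number of schedulable replicas can never exceed the number of matching nodes; the <code>preferred...</code> variant lets scheduling still succeed when no free node is available.</p>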
| Jonas |
<p>I have read about the various ways to run tasks periodically in a K8s cluster, but none of them seem to work well for this specific case. I have a deployment "my-depl" that can run an arbitrary number of pods and the task needs to execute periodically inside each pod (basically a shell command that "nudges" the main application once a week or so).</p>
<p>The Kubernetes Cronjob functionality starts a task in its own container. This K8s task does not know how many pods are currently running for "my-depl" and cannot run anything in those pods. Conceivably, I could run kubectl within this K8s Cronjob, but that seems incredibly hacky and dangerous.</p>
<p>The second alternative would be to have crond (or an alternative tool like <a href="https://github.com/dshearer/jobber" rel="nofollow noreferrer">Jobber</a> or <a href="https://github.com/ess/cronenberg" rel="nofollow noreferrer">Cronenberg</a>) run as part of the pod. But that would mean that two processes are running and the container might not die, if only the cron process dies.</p>
<p>The third option is to run a multi-process container via a special init process like <a href="https://github.com/just-containers/s6-overlay" rel="nofollow noreferrer">s6-overlay</a>. This can be made to die if one of the child processes dies, but it seems fairly involved and hardly a first-class feature.</p>
<p>The fourth option I could think of was "don't do this, it's stupid. Redesign your application so it doesn't need to be 'nudged' once a week". That's a sound suggestion, but a lot of work and I need at least a temporary solution in the meantime.</p>
<p>So, does anyone have a better idea than those detailed here?</p>
| ulim | <p>I think the simplest solution is to run <code>crond</code> (or an alternative of your choice) in a sidecar container (that is, another container in the same pod). Recall that all containers in a pod share the same network namespace, so <code>localhost</code> is the same thing for all containers.</p>
<p>This means your cron container can happily run a <code>curl</code> or <code>wget</code> command (or whatever else is necessary) to ping your API over the local port.</p>
<p>For example, something like this, in which our cron task simply runs <code>wget</code> against the web server running in the <code>api</code> container:</p>
<pre><code>apiVersion: v1
data:
root: |
* * * * * wget -O /tmp/testfile http://127.0.0.1:8080 2> /tmp/testfile.err
kind: ConfigMap
metadata:
labels:
app: cron-example
name: crontabs-ghm86fgddg
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: cron-example
name: cron-example
spec:
replicas: 1
selector:
matchLabels:
app: cron-example
template:
metadata:
labels:
app: cron-example
spec:
containers:
- image: docker.io/alpinelinux/darkhttpd:latest
name: api
- command:
- /bin/sh
- -c
- |
crontab /data/crontabs/root
exec crond -f -d0
image: docker.io/alpine:latest
name: cron
volumeMounts:
- mountPath: /data/crontabs
name: crontabs
volumes:
- configMap:
name: crontabs-ghm86fgddg
name: crontabs
</code></pre>
| larsks |
<p>I started last month in a new company. Where I will be responsible for the infrastructure and the backend of the SAAS.</p>
<p>We currently have one droplet/instance per customer. In the current phase of the company it is a good choice. But in the future, when the number of instances grows, it will be difficult to maintain. At the moment there are 150 instances online, each with 1 CPU and 1GB memory. </p>
<p>Our customers only use the environments for moments a week, a month or a year. So most of the time, they do nothing. So we want to change that. I am thinking of Kubernetes, Docker Swarm or another tool. </p>
<p>What advice can you give us? Should we make the step to Kubernetes or Docker Swarm, or stay with the droplets / VMs at DigitalOcean, AWS or GCP? </p>
<p>If we move to AWS or GCP our average price will go up from 5$ p/m to above the 10$ p/m.</p>
<p>We want to take the next step to lower the waste of resources while also keeping an eye on the monthly bill. In my mind, it would be better to have 2 or 3 bigger VMs running Kubernetes or Docker Swarm to lower the monthly bill and reduce our reserved resources. </p>
<p>What do you think? </p>
| Michael Tijhuis | <p>If you are serious about scaling, then you should <strong>rethink your application architecture</strong>. The most expensive part of computing is memory (RAM), so having dedicated memory per-customer will not allow you to scale.</p>
<p>Rather than keeping customers separate by using droplets, you should move this logical separation to the <strong>data layer</strong>. So, every customer can use the same horizontally-scaled compute servers and databases, but the software separates their data and access based on a User Identifier in the database.</p>
<p>Think for a moment... does <strong>Gmail</strong> keep RAM around for each specific customer? No, everybody uses the same compute and database, but the software separates their messages from other users. This allows them to scale to huge numbers of customers without assigning per-customer resources.</p>
<p>Here's another couple of examples...</p>
<p><strong>Atlassian</strong> used to have exactly what you have. Each JIRA Cloud customer would be assigned their own virtual machine with CPU, RAM and a database. They had to grow their data center to a crazy size, and it was Expensive!</p>
<p>They then embarked on a journey to move to multi-tenancy, first by separating the databases from each customer (and using a common pool of databases), then by moving to shared microservices and eventually they removed all per-customer resources.</p>
<p>See:</p>
<ul>
<li><a href="https://techcrunch.com/2018/04/02/atlassians-two-year-cloud-journey/" rel="noreferrer">Atlassian’s two-year cloud journey | TechCrunch</a></li>
<li><a href="https://www.geekwire.com/2018/atlassian-moved-jira-confluence-users-amazon-web-services-learned-along-way/" rel="noreferrer">How Atlassian moved Jira and Confluence users to Amazon Web Services, and what it learned along the way – GeekWire</a></li>
<li><a href="https://confluence.atlassian.com/cloud/atlassian-cloud-architecture-973494795.html" rel="noreferrer">Atlassian cloud architecture - Atlassian Documentation</a></li>
</ul>
<p><strong>Salesforce</strong> chose to go multi-tenant from the very beginning. They defined the concept of SaaS and used to call themselves the "cloud" (before Cloud Computing as we know it now). While their systems are sharded to allow scale, multiple customers share the same resources within a shard. The separation of customer data is done at the database-level.</p>
<p>See:</p>
<ul>
<li><a href="https://engineering.salesforce.com/the-magic-of-multitenancy-2daf71d99735" rel="noreferrer">The Magic of Multitenancy - Salesforce Engineering</a></li>
<li><a href="https://developer.salesforce.com/page/Multi_Tenant_Architecture" rel="noreferrer">Multi Tenant Architecture - developer.force.com</a></li>
</ul>
<p><strong>Bottom line:</strong> Sure, you can try to optimize around the current architecture by using containers, but if you want to get serious about scale (I'm talking 10x or 100x), then you need to <strong>re-think the architecture</strong>.</p>
| John Rotenstein |
<p>Please correct me if I'm wrong. Up to my understanding, advantage of containers is that those don't need to supply the whole OS and they also don't need to execute it (but instead they utilize underlying OS which is provided e.g. by Docker) which leads to the saving of the resources -especially the memory.</p>
<p>My question is: do I need to pay attention when choosing base image for my containers that I plan to deploy in Azure k8s service (AKS) which currently supports Ubuntu 18.04? Should I choose only from those base images that explicitly support Ubuntu 18.04 or can I go for any base Linux image (e.g. Alpine) and all will still work fine once deployed in AKS?
I guess that as long as my base image is compatible (same Linux kernel version) with the Linux kernel available in AKS, I should be fine. Is my assumption correct?</p>
| Gladik | <p>Short answer: you can pick any base image that's compatible with whatever is running inside your container.</p>
<blockquote>
<p>advantage of containers is that those don't need to supply the whole OS and they also don't need to execute it (but instead they utilize underlying OS which is provided e.g. by Docker)</p>
</blockquote>
<p>More precisely, containers do not run their own OS <em>kernel</em>. They do run their own copy of the part of the OS that runs in user space.</p>
<p>Practically speaking, kernel version compatibility is rarely a concern.</p>
| Max |
<p>I want to pass a certificate which is stored in the cluster as a secret. I have this piece of code failing:</p>
<pre><code>method(param1, param2, os.environ['CERTIFICATE']){
# param1: Does something
# param2: Does something
# param3: local path to pem cert used for auth
}
</code></pre>
<p>Error is that "File name too long: '---BEGIN PRIVATE KEY---...........'". I think - param3 requires a file path, but because I pass the certificate content directly as an environment variable, and not a file path which references the cert - it fails.</p>
<p>Not sure if mounting the secret as volume would make any difference. The cert is stored as follows, I only need tls.key:</p>
<pre><code> Type: kubernetes.io/tls
Data
====
tls.crt: 1880 bytes
tls.key: 5204 bytes
</code></pre>
| HC LW | <p>Why are you exposing the private key in an environment variable if your application expects a file? Just <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-pod-that-has-access-to-the-secret-data-through-a-volume" rel="nofollow noreferrer">mount the secret as a file</a> instead. If you have a Secret named "my-secret" with keys <code>tls.crt</code> and <code>tls.key</code>, then you can do something like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: example
spec:
volumes:
- name: my-secret
secret:
secretName: my-secret
containers:
- name: example
image: my-image
volumeMounts:
- name: my-secret
mountPath: "/certificate"
</code></pre>
<p>This would result in the container having the certificate and key mounted at <code>/certificate/tls.crt</code> and <code>/certificate/tls.key</code>.</p>
| larsks |
<p>I have a Security-group <code>Apple</code> which is attached to my application EKS worker nodes i.e. EC2 instances.
I have another Security-group <code>Mango</code> which is attached to the database EC2 instance, also EKS cluster.</p>
<p>When I whitelist <code>Apple</code> Security-group in <code>Mango</code>, the applications in my EKS cannot access the db.</p>
<p>But when I explicitely whitelist the IP's of the worker nodes i.e. the EC2 instances, the applications can access the database.</p>
<p>Why does this work? Shouldn't whitelisting the attached Security-group solve my use case?
Please help.</p>
| Red Bottle | <p>When the rules of a Security Group refer to another Security Group, traffic will be permitted but <strong>only for the private IP address of instances</strong>.</p>
<p>If you are referring to the public IP address of the EC2 instance with the database, then the traffic goes out of the Internet Gateway and back 'into' the VPC. The source identity is therefore lost and the Security Group will not permit the inbound traffic.</p>
<p>You should communicate within the VPC by using private IP addresses.</p>
| John Rotenstein |
<p>I am doing some practice exercises. In one of the test cases I tried to override the port value. While doing that I get an error: <strong>replace operation does not apply: doc is missing key: /spec/ports/port: missing value.</strong></p>
<pre><code>patches:
- target:
kind: Service
name: cdq-ui-service
patch: |
- op: replace
path: /spec/ports/0/port
value: 8080
</code></pre>
<p>While applying the kustomization in Kubernetes with the patch, that path should override the port value.</p>
| Vijay | <p>You are trying to modify <code>/spec/ports/port</code>, but there is no such path in a Kubernetes service. Recall that a service looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
</code></pre>
<p>The path <code>/spec/ports</code> is a <strong>list</strong>, not a <strong>dictionary</strong>. You could patch <code>/spec/ports/0/port</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
patches:
- target:
kind: Service
name: my-service
patch: |
- op: replace
path: /spec/ports/0/port
value: 8080
</code></pre>
<p>Which would result in:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 9376
selector:
app.kubernetes.io/name: MyApp
</code></pre>
| larsks |
<blockquote>
<p>I have connected a Kubernetes cluster in Gitlab as a project cluster
<a href="https://gitlab.com/steinKo/gitlabcicd/-/clusters/142291" rel="nofollow noreferrer">https://gitlab.com/steinKo/gitlabcicd/-/clusters/142291</a>
I want to access this cluster in Pulumi.
Following the documentation, can I use "Look up an Existing ProjectCluster Resource"?
Could I use the API call</p>
</blockquote>
<pre><code>public static get(name: string, id: Input<ID>, state?: ProjectClusterState, opts?: CustomResourceOptions): ProjectCluster
</code></pre>
<blockquote>
<p>I wrote</p>
</blockquote>
<pre><code>import * as gitlab from "@pulumi/gitlab";
const cluster = gitlab.get("gitlabcicd", ???)
</code></pre>
<blockquote>
<p>Then I get an error message: Property get does not exist.
How do I use the get API?
Where do I find the id?</p>
</blockquote>
| stein korsveien | <p>You can get access to the cluster using the following code:</p>
<pre class="lang-js prettyprint-override"><code>import * as gitlab from "@pulumi/gitlab";
const projectCluster = gitlab.ProjectCluster.get("mycluster", "clusterid");
</code></pre>
<p>where <code>mycluster</code> is the name you're giving it in your Pulumi program and <code>clusterid</code> is the ID in GitLab of the cluster.</p>
<p>You can get the ID of the cluster using the GitLab API: <a href="https://docs.gitlab.com/ee/api/project_clusters.html" rel="nofollow noreferrer">https://docs.gitlab.com/ee/api/project_clusters.html</a></p>
<p>Please note that this will not allow you to make changes to the cluster (as you're not importing it into the Pulumi program), but it will give you information about the cluster itself.</p>
<p>If you wanted to start managing the cluster in your Pulumi program, then you can import it using the CLI by running this command: <code>pulumi import gitlab:index/projectCluster:ProjectCluster bar projectid:clusterid</code> which will give you the correct code to copy and paste into your Pulumi program at which point you can start managing it.</p>
| Piers Karsenbarg |
<p>I would like to use <code>kubectl</code> to print out all key-value pairs in my Secrets. I cannot figure out how to do this in one line with the <code>-o --jsonpath</code> flag or by piping into <code>jq</code>. I could certainly make a script to do this but I feel there must be a better way, given that the kubernetes GUI is pretty straightforward and liberal when it comes to letting you view Secrets.</p>
<p>Say I create secret like so:</p>
<p><code>kubectl create secret generic testsecret --from-literal=key1=val1 --from-literal=key2=val2</code></p>
<p>Now I can run <code>kubectl get secret testsecret -o json</code> to get something like:</p>
<pre><code>{
"apiVersion": "v1",
"data": {
"key1": "dmFsMQ==",
"key2": "dmFsMg=="
},
...
}
</code></pre>
<p>I can do something like</p>
<p><code>kubectl get secret testsecret -o jsonpath='{.data}'</code> </p>
<p>or </p>
<p><code>kubectl get secret testsecret -o json | jq '.data'</code></p>
<p>to get my key-value pairs in <em>non-list</em> format then I'd have to <code>base64 --decode</code> the values.</p>
<p>What is the easiest way to get a clean list of all my key-value pairs? Bonus points for doing this across all Secrets (as opposed to just one specific one, as I did here).</p>
| s g | <p>I read this question as asking for how to decode <em>all secrets</em> in one go. I built on the accepted answer to produce a one-liner to do this:</p>
<pre><code>kubectl get secrets -o json | jq '.items[] | {name: .metadata.name,data: .data|map_values(@base64d)}'
</code></pre>
<p>This has the added benefit of listing the name of the secret along with the decoded values for readability.</p>
| saranicole |
<p>In dag script</p>
<pre><code>PARAM = {
'key1' : 'value1',
'key2' : 'value2'
}
t1 = KubernetesPodOperator(
task_id=task_id,
name=task_name,
cmds=["pipenv", "run", "python3", "myscript.py"],
env_vars={
'GCS_PROJECT': GCS_PROJECT,
'GCS_BUCKET': GCS_BUCKET,
'PARAM': PARAM # this line throws an error
},
image=docker_image
)
</code></pre>
<p>I failed to pass a dictionary (PARAM) in the case above. I tried to pass two lists (a list of keys and a list of values so that I can zip them later) but it didn't work too. The error message is something like this</p>
<pre><code>kubernetes.client.exceptions.ApiException: (400)
Reason: Bad Request
HTTP response headers: ......
HTTP response body: {
......
"apiVersion":"v1",
"metadata":{},
"status":"Failure",
"message":"Pod in version \"v1\" cannot be handled as a Pod: json: cannot be unmarshal object into Go struct field EnvVar.spec.containers.env.value of type string"
....
}
</code></pre>
<p>Is there a way in which I can pass PARAM?</p>
| zZzZ | <p>Environment variables are <em>strings</em>. You cannot pass a structured variable like a dictionary in an environment variable unless you first convert it into a string (e.g, by serializing it to JSON). You could do this:</p>
<pre><code>import json

t1 = KubernetesPodOperator(
task_id=task_id,
name=task_name,
cmds=["pipenv", "run", "python3", "myscript.py"],
env_vars={
'GCS_PROJECT': GCS_PROJECT,
'GCS_BUCKET': GCS_BUCKET,
'PARAM': json.dumps(PARAM),
},
image=docker_image,
)
</code></pre>
| larsks |
<p>I am running a Python application on a K8s cluster in stateless mode.</p>
<p>Right now we are using ConfigMaps and Secrets to store environment variable data. Using ConfigMaps and Secrets we add environment variables to the container OS, and the application reads them from the OS.</p>
<pre><code>app.config['test'] = os.environ.get("test")
</code></pre>
<p>To follow best practices we are planning to use Vault from HashiCorp. So can I populate the ConfigMap? Or directly add the values to the container OS so that the application can get them from the OS? There is no plan to use a volume to populate the variables, as we are using stateless images.</p>
<p>Can someone also please share a document or blog. Thanks in advance.</p>
| Harsh Manvar | <p>You can check <a href="https://learn.hashicorp.com/vault/identity-access-management/vault-agent-k8s" rel="nofollow noreferrer">Vault Agent with Kubernetes</a>. You may have to do some Kubernetes distro specific steps as well. </p>
| Kunal Deo |
<p>I am new to Kubernetes. I am looking to see if it's possible to hook into the container execution lifecycle events in the orchestration process, so that I can call an API to pass the details of the container and see if it is allowed to execute this container in the given environment, location etc.</p>
<p>An example check could be: a container can only be run in Europe or US data centers, so if someone tries to execute this container outside those regions' data centers, it should not be allowed.</p>
<p>Is this possible and what is the best way to achieve this?</p>
| Kiran | <p>You can possibly set up an <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook" rel="nofollow noreferrer">ImagePolicy</a> admission controller in the clusters, were you describes from what registers it is allowed to pull images.</p>
<p><a href="https://github.com/flavio/kube-image-bouncer" rel="nofollow noreferrer">kube-image-bouncer</a> is an example of an ImagePolicy admission controller</p>
<blockquote>
<p>A simple webhook endpoint server that can be used to validate the images being created inside of the kubernetes cluster.</p>
</blockquote>
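<p>For reference, wiring such a webhook into the API server is done through an admission configuration file passed via the kube-apiserver <code>--admission-control-config-file</code> flag — a rough sketch, where the kubeconfig path and TTL values are placeholders:</p>
<pre><code>apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      # kubeconfig pointing at the webhook endpoint (e.g. kube-image-bouncer)
      kubeConfigFile: /etc/kubernetes/image-policy/kubeconfig.yaml
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false   # reject images when the webhook is unreachable
</code></pre>
<p>Note that this requires access to the kube-apiserver flags, so it is mainly an option on self-managed clusters.</p>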
| Jonas |
<p>I have a deployment in which I want to populate a pod with config files without using a ConfigMap.</p>
| user9263173 | <p>You could also store your config files on a <code>PersistentVolume</code> and read those files at container startup. For more details on that topic please take a look at the K8S reference docs: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a> </p>
<p>Please note: I would not consider this good practice. I used this approach in the early beginning of a project where a legacy app was migrated to Kubernetes: The application consisted of tons of config files that were read by the application at startup.</p>
<p>Later on I switched to creating <code>ConfigMap</code>s from my configuration files, as the latter approach allows to store the K8S object (yaml file) in Git and I found managing/editing a <code>ConfigMap</code> way easier/faster, especially in a multi-node K8S environment:</p>
<p><code>kubectl create configmap app-config --from-file=./app-config1.properties --from-file=./app-config2.properties</code></p>
<p>If you go for the "config files in persistent volume" approach you need to take different aspects into account... e.g. how to bring your configuration files on that volume, potentially not on a single but multiple nodes, and how to keep them in sync.</p>
| Tommy Brettschneider |
<p>I've built an npm React app that connects to a REST backend using a given URL.
To run the app on Kubernetes, I've built the production bundle and put it into an nginx container.
The app starts nicely, but I want to make the backend URL configurable without having to rebuild the container image every time.
I don't know how to do that or where to search; any help would be appreciated.</p>
| Urr4 | <p>You have several methods to achieve your objective</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">Use environment variables</a></li>
</ul>
<pre><code> apiVersion: v1
kind: Pod
metadata:
name: pod-name
spec:
containers:
- name: envar-demo-container
image: my_image:my_version
env:
- name: BACKEND_URL
value: "http://my_backend_url"
</code></pre>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Using a configmap as a config file for your service</a></li>
<li>If the service is external, you can use a fixed name and register as a local kubernetes service: <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services</a></li>
</ul>
<p>Regards.</p>
| mdaguete |
<p>Is there any example Appsmith YAML for deploying Appsmith, since I don't want to use Helm in a prod environment?</p>
| Duanjax | <p>There's no official direct YAML files for deploying Appsmith on Kubernetes today. The Helm chart at helm.appsmith.com, and as documented at <a href="https://docs.appsmith.com/getting-started/setup/installation-guides/kubernetes" rel="nofollow noreferrer">https://docs.appsmith.com/getting-started/setup/installation-guides/kubernetes</a>, is the recommended way to install Appsmith on your cluster.</p>
<p>Asking as an engineering team member with Appsmith, can you elaborate a little on why avoid Helm for production please?</p>
| sharat87 |
<p>I've got a problem doing automatic heap dump to a mounted persistent volume in Microsoft Azure AKS (Kubernetes).</p>
<p>So the situation looks like this:</p>
<ul>
<li>Running the program with the parameter -Xmx200m causes an out of memory
exception</li>
<li>After building, pushing and deploying the docker image in AKS, the pod is killed and restarted after a few seconds</li>
<li>I get the message in hello.txt in the mounted volume, but no dump file is
created</li>
</ul>
<p>What could be the reason for such behaviour?</p>
<p>My test program looks like this:</p>
<pre><code>import java.io._
object Main {
def main(args: Array[String]): Unit = {
println("Before printing test info to file")
val pw = new PrintWriter(new File("/borsuk_data/hello.txt"))
pw.write("Hello, world")
pw.close
println("Before allocating to big Array for current memory settings")
val vectorOfDouble = Range(0, 50 * 1000 * 1000).map(x => 666.0).toArray
println("After creating to big Array")
}
}
</code></pre>
<p>My entrypoint.sh:</p>
<pre><code>#!/bin/sh
java -jar /root/scala-heap-dump.jar -Xmx200m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/scala-heap-dump.bin
</code></pre>
<p>My Dockerfile:</p>
<pre><code>FROM openjdk:jdk-alpine
WORKDIR /root
ADD target/scala-2.12/scala-heap-dump.jar /root/scala-heap-dump.jar
ADD etc/entrypoint.sh /root/entrypoint.sh
ENTRYPOINT ["/bin/sh","/root/entrypoint.sh"]
</code></pre>
<p>My deployment yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: scala-heap-dump
spec:
replicas: 1
template:
metadata:
labels:
app: scala-heap-dump
spec:
containers:
- name: scala-heap-dump-container
image: PRIVATE_REPO_ADDRESS/scala-heap-dump:latest
imagePullPolicy: Always
resources:
requests:
cpu: 500m
memory: "1Gi"
limits:
cpu: 500m
memory: "1Gi"
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
persistentVolumeClaim:
claimName: dynamic-persistence-volume-claim
dnsPolicy: ClusterFirst
hostNetwork: false
imagePullSecrets:
- name: regsecret
</code></pre>
<p>UPDATE:
As lawrencegripper pointed out the first issue was that pod was OOM killed due to memory limits in yaml. After changing memory to 2560Mi or higher (I've tried even such ridiculous values in yaml as CPU: 1000m and memory 5Gi) I don't get reason OOM killed. However, no dump file is created and different kind of message occurs under lastState terminated. The reason is: Error. Unfortunately this isn't very helpful. If anybody knows how to narrow it down, please help.</p>
<p>UPDATE 2:
I've added some println in code to have better understanding of what's going on. The logs for killed pod are:</p>
<pre><code>Before printing test info to file
Before allocating to big Array for current memory settings
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at scala.reflect.ManifestFactory$DoubleManifest.newArray(Manifest.scala:153)
at scala.reflect.ManifestFactory$DoubleManifest.newArray(Manifest.scala:151)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:285)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:283)
at scala.collection.AbstractTraversable.toArray(Traversable.scala:104)
at Main$.main(Main.scala:12)
at Main.main(Main.scala)
</code></pre>
<p>So as you can see program never reaches: println("After creating to big Array").</p>
| CodeDog | <p>I think the problem is the entrypoint.sh command. </p>
<pre><code>> java --help
Usage: java [options] <mainclass> [args...]
(to execute a class)
or java [options] -jar <jarfile> [args...]
(to execute a jar file)
</code></pre>
<p>Note that anything placed after the -jar option is passed as arguments to your application, not to the JVM.</p>
<p>Try:</p>
<pre><code>java -Xmx200m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/scala-heap-dump.bin -jar /root/scala-heap-dump.jar
</code></pre>
| DanLebrero |
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-emp
labels:
run: my-emp
spec:
ports:
– port: 80
protocol: TCP
targetPort: 8888
type: NodePort
selector:
run: my-emp
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-emp
spec:
replicas: 2
template:
metadata:
labels:
run: my-emp
spec:
containers:
– name: my-emp
image: kavisuresh/employee
ports:
– containerPort: 8888
</code></pre>
| harish hari | <p>The problem is that you have "–" (an <a href="https://www.fileformat.info/info/unicode/char/2013/index.htm" rel="nofollow noreferrer">en dash</a>) where you want "-" (a <a href="https://www.fileformat.info/info/unicode/char/2d/index.htm" rel="nofollow noreferrer">hyphen</a>).</p>
<p>I'm guessing you wrote this in a text editor that automatically does "smart" substitutions like <code>"</code> to <code>“</code>, and when you typed <code>-</code> you got <code>–</code> instead. If that's the case it will be worth your while to make sure those features are turned off, or switch to a "programmer's editor" like Visual Studio Code, Sublime Text, Atom, Vim, etc.</p>
<p>To fix the problem, replace the en dashes on lines 9, 28, and 31 with hyphens (and make sure your editor doesn't override them):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-emp
labels:
run: my-emp
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8888
type: NodePort
selector:
run: my-emp
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-emp
spec:
replicas: 2
template:
metadata:
labels:
run: my-emp
spec:
containers:
- name: my-emp
image: kavisuresh/employee
ports:
- containerPort: 8888
</code></pre>
| Jordan Running |
<p>The only requirement in Kubernetes networking docs is to open firewall between pods. How does pod to service connectivity works, as service cluster ip range and pod cidrs are different?</p>
| user6317694 | <p>Each Service has a <em>virtual IP</em> assigned. When a Pod communicates with a Service, kube-proxy on the local node replaces the virtual IP with the IP of one of the pods that back the service.</p>
<p><strong>An example:</strong>
Pod-A on Node-A wants to send a request to Service-B. Service-B is, for example, implemented by the pods with label app=b, in this case Pod-D and Pod-E on Node-C and Node-E. When Pod-A sends the request, kube-proxy changes the target IP from the virtual IP to the IP of Pod-D or Pod-E, and the request is routed to one of the pods that back Service-B.</p>
<pre><code>Layout:
Service-B with selector: app=b
Pod-D with label: app=b
Pod-E with label: app=b
</code></pre>
<p>Pod-A should address the Service's virtual IP, since pods <em>come and go</em> when new versions are deployed. The virtual IP is then translated to one of the pods that implement the Service.</p>
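<p>As a minimal sketch of the layout above (names, images and ports are illustrative), the Service and one of the backing pods could look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: b              # matches Pod-D and Pod-E
  ports:
  - port: 80            # the virtual (cluster) IP listens here
    targetPort: 8080    # the port the pods actually serve on
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-d
  labels:
    app: b
spec:
  containers:
  - name: app-b
    image: example/app-b:latest   # hypothetical image
    ports:
    - containerPort: 8080
</code></pre>
<p>Pod-A then simply calls <code>http://service-b</code> (or the Service's cluster IP) and kube-proxy picks one of the matching pods.</p>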
| Jonas |
<p>I am playing around with writing a CRD for kubernetes, and am having trouble getting the code-generator to work. In particular, generating deepcopy functions is failing for a particular struct, which has reference to a <code>batch.JobTemplateSpec</code>. Commenting out that line fixes my issue.</p>
<p>I've already tried various imports and registering the <code>JobTemplateSpec</code> in <code>register.go</code>, but that doesn't seem to be the solution.</p>
<p>In particular, the struct looks something like this:</p>
<pre class="lang-golang prettyprint-override"><code>type TestSpec struct {
Selector *metav1.LabelSelector `json:"selector,omitempty"`
//Commenting out the JobTemplate fixes my problem
JobTemplate batch.JobTemplateSpec `json:"jobTemplate,omitempty"`
}
</code></pre>
<p>What I end up getting is this error from the codegen script:</p>
<pre><code>Generating client codes...
Generating deepcopy funcs
F0411 18:54:09.409084 251 deepcopy.go:885] Hit an unsupported type invalid type for invalid type, from test/pkg/apis/test/v1.TestSpec
</code></pre>
<p>and the rest of code gen fails.</p>
| user3769061 | <p>I experienced this issue trying to replicate the steps in <a href="https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/" rel="nofollow noreferrer">https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/</a> and found that I needed to change directory to avoid this issue.</p>
<p>If I was at the root of my Go workspace, e.g. <code>$GOPATH/src</code>, I got the error you received. But if I changed to the project directory, e.g. <code>$GOPATH/src/github.com/openshift-evangelist/crd-code-generation</code>, the issue went away.</p>
| William Rose |
<p>I am developing a series of microservices using Spring Boot and plan to deploy them on Kubernetes.</p>
<p>Some of the microservices are composed of an API which writes messages to a kafka queue and a listener which listens to the queue and performs the relevant actions (e.g. write to DB etc, construct messsages for onward processing).</p>
<p>These services work fine locally but I am planning to run multiple instances of the microservice on Kubernetes. I'm thinking of the following options:</p>
<ol>
<li><p>Run multiple instances as is (i.e. each microservice serves as an API and a listener).</p></li>
<li><p>Introduce a FRONTEND, BACKEND environment variable. If the FRONTEND variable is true, do not configure the listener process. If the BACKEND variable is true, configure the listener process.
This way I can start scale how may frontend / backend services I need and also have the benefit of shutting down the backend services without losing requests.</p></li>
</ol>
<p>Any pointers, best practice or any other options would be much appreciated.</p>
| Swordfish | <p>You can do as you describe, with environment variables, or you may also be interested in building your app with different profiles/bean configuration and making two different images.</p>
<p>In both cases, you should use two different Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a> so you can scale and configure them independently.</p>
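<p>A rough sketch of the two-Deployment approach with an environment variable toggle (the variable names, image and replica counts are assumptions based on your description):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice-frontend
spec:
  replicas: 3                      # scale the API independently
  selector:
    matchLabels:
      app: myservice
      role: frontend
  template:
    metadata:
      labels:
        app: myservice
        role: frontend
    spec:
      containers:
      - name: myservice
        image: myservice:1.0       # same image for both roles
        env:
        - name: FRONTEND
          value: "true"
        - name: BACKEND
          value: "false"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice-backend
spec:
  replicas: 2                      # scale the listeners independently
  selector:
    matchLabels:
      app: myservice
      role: backend
  template:
    metadata:
      labels:
        app: myservice
        role: backend
    spec:
      containers:
      - name: myservice
        image: myservice:1.0
        env:
        - name: FRONTEND
          value: "false"
        - name: BACKEND
          value: "true"
</code></pre>
<p>Your Spring Boot configuration would then only register the listener beans when <code>BACKEND</code> is true, for example via a conditional property or a dedicated profile.</p>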
<p>You may also be interested in a <a href="https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/" rel="nofollow noreferrer">Leader Election pattern</a> where only <strong>one replica is active</strong> at a time, which is useful if it only makes sense for a single replica to process the events from a queue. Depending on your <em>availability</em> requirements, this can also be solved by simply running a single replica.</p>
| Jonas |
<p>An operator I'm building needs to talk to other Kubernetes clusters, are there any best practices on how to do that from within the operator that runs on Kubernetes?</p>
<p>Using <code>k8s.io/client-go/tools/clientcmd</code> package I can call <code>BuildConfigFromFlags</code> method passing <code>masterURL</code> and kubeconfig location. This works fine from outside Kubernetes, but within Kubernetes, can any assumptions be made about kubeconfig location? Or should some other API be used?</p>
<p>As a side note: I'm using <code>controller-runtime</code>'s <code>Client</code> API for talking to Kubernetes.</p>
| Galder Zamarreño | <p>Turns out it's quite easy to do, just call the following with the master URL and the token to access it:</p>
<pre><code>cfg, err := clientcmd.BuildConfigFromFlags(os.Getenv("MASTERURL"), os.Getenv("KUBECONFIG"))
cfg.BearerToken = os.Getenv("BEARERTOKEN")
</code></pre>
<p>It might also require:</p>
<pre><code>cfg.Insecure = true
</code></pre>
| Galder Zamarreño |
<p>Note: this is more a bash/shell script issue than Kubernetes.</p>
<p>So I have one replica of a pod, and so <code>kubectl get pods</code> will only return one active replica.</p>
<p>Conceptually what I want to do is simple. I want to do a <code>kubectl logs my-pod-nnn1</code> with a watcher, HOWEVER when that pod terminates I want to stream a message "my-pod-nnn1 terminated, now logging my-pod-nnn2", and then stream the new logs. This is a fairly complex process (for me) and I'm wondering what approach I could take, and if this is possible (or perhaps not necessary) with multi-threading of some kind (which I have not done). Thanks</p>
| Oliver Williams | <p>as a rough outline for what you'll need to do, if I've read your needs right</p>
<pre class="lang-sh prettyprint-override"><code>slm-log-continue() {
while true ; do
PODLINE=$(kubectl get pod | grep regional-wkspce | grep Running | awk '/^regional-wkspce-.*/{print $1}')
if [[ -z "$PODLINE" ]]; then
echo no pod currently active, waiting 5 seconds ...
sleep 5
else
echo "----- logging for $PODLINE -----"
kubectl logs -f $PODLINE
echo "----- pod $PODLINE disconnected -----"
fi
done
}
</code></pre>
<p>assuming kubectl logs terminates after the pod does and it's received the end of the logs ( I've not tested ), something like that should do what you need without any fancy multithreading. it will just find the current pod using whatever regex against the <code>get pods</code> output, extract the name and then spit out logs until it dies.</p>
| Michael Speer |
<p>I am trying to deploy an application to GCP on kubernetes, however, the deployment fails with the error <code>the job spec is invalid ... the field is immutable</code>.</p>
<p>In the migration job, I have a section of bash in the following format:</p>
<pre><code>args:
- |
/cloud_sql_proxy -instances=xxxxxxxxxxx:europe-west1:xxxxxxxxxxx=tcp:5432 -credential_file=/secrets/cloudsql/credentials.json -log_debug_stdout=true &
CHILD_PID=$!
(while true; do echo "waiting for termination file"; if [[ -f "/tmp/pod/main-terminated" ]]; then kill ; echo "Killed as the main container terminated."; fi; sleep 1; done) &
wait
if [[ -f "/tmp/pod/main-terminated" ]]; then exit 0; echo "Job completed. Exiting..."; fi
</code></pre>
<p>but when the file is executed, in the yaml on GCP I see that the command has been enclosed in quotes and then it returns the above mention error.</p>
| David Essien | <p>I got the message <code>the job spec is invalid ... the field is immutable</code> for a different reason and wanted to briefly share it here.</p>
<p>I was trying to apply this yaml file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
spec:
selector:
matchLabels:
app: application-name
...
</code></pre>
<p>Turns out that this yaml was going to replace a previous version of the same Deployment. When I ran <code>kubectl get deployment application-name -o yaml</code> I saw this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
spec:
selector:
matchLabels:
app: application-name
track: stable
...
</code></pre>
<p>Apparently the <code>spec.selector.matchLabels</code> is currently an array, and I was trying to replace that with a single string. My fix was deleting the deployment and re-deploying it.</p>
| Lukasvan3L |
<p>Let's say we have a number of Kubernetes configuration files in a folder <code>kubernetes</code> and we want to apply them all:</p>
<pre><code>kubectl apply -f kubernetes -n MyNamespace
</code></pre>
<p>Some of these files contain environment variables which need to be substituted first (no <a href="https://github.com/kubernetes/kubernetes/issues/23896" rel="noreferrer">templating</a> in Kubernetes). For instance, several of the deployment yamls contain something like:</p>
<pre><code>image: myregistry.com/myrepo:$TAG
</code></pre>
<p>For a single yaml file, this can be <a href="https://serverfault.com/questions/791715/using-environment-variables-in-kubernetes-deployment-spec">done</a> e.g. by using envsubst like this:</p>
<pre><code>envsubst < deploy.yml | kubectl apply -f -
</code></pre>
<p>What's the best way to do these substitutions for all the yaml files?</p>
<p>(Looping over the files in the folder and calling <code>envsubst</code> as above is one option, but I suspect that it would be preferrable to pass the entire folder to <code>kubectl</code> and not individual files)</p>
| Max | <p>This works:</p>
<pre><code>for f in *.yaml; do envsubst < $f | kubectl apply -f -; done
</code></pre>
| Max |
<p>I have a node.js Project which I run as Docker-Container in different environments (local, stage, production) and therefor configure it via <code>.env</code>-Files. As always advised I don't store the <code>.env</code>-Files in my remote repository which is Gitlab. My production- and stage-systems are run as kubernetes cluster.</p>
<p>What I want to achieve is an automated build via Gitlab's CI for different environments (e.g. stage) depending on the commit-branch (named stage as well), meaning when I push to origin/stage I want an Docker-image to be built for my stage-environment with the corresponding <code>.env</code>-File in it.</p>
<p>On my local machine it's pretty simple, since I have all the different <code>.env</code>-Files in the root-Folder of my app I just use this in my <code>Dockerfile</code></p>
<pre><code>COPY .env-stage ./.env
</code></pre>
<p>and everything is fine.</p>
<p>Since I don't store the .env-Files in my remote repo, this approach doesn't work, so I used Gitlab CI Variables and created a variable named <code>DOTENV_STAGE</code> of type <code>file</code> with the contents of my local <code>.env-stage</code> file.</p>
<p>Now my problem is: How do I get that content as .env-File inside the docker image that is going to be built by gitlab since that file is not yet a file in my repo but a variable instead?</p>
<p>I tried using <code>cp</code> (see below, also in the <code>before_script</code>-section) to just copy the file to an <code>.env</code>-File during the build process, but that obviously doesn't work.</p>
<p>My current build stage looks like this:</p>
<pre><code>image: docker:git
services:
- docker:dind
build stage:
only:
- stage
stage: build
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- cp $DOTENV_STAGE .env
- docker pull $GITLAB_IMAGE_PATH-$CI_COMMIT_BRANCH || true
- docker build --cache-from $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH --file=Dockerfile-$CI_COMMIT_BRANCH -t $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH:$CI_COMMIT_SHORT_SHA .
- docker push $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH
</code></pre>
<p>This results in </p>
<pre><code> Step 12/14 : COPY .env ./.env
COPY failed: stat /var/lib/docker/tmp/docker-builder513570233/.env: no such file or directory
</code></pre>
<p>I also tried <code>cp $DOTENV_STAGE .env</code> as well as <code>cp $DOTENV_STAGE $CI_BUILDS_DIR/.env</code> and <code>cp $DOTENV_STAGE $CI_PROJECT_DIR/.env</code> but none of them worked.</p>
<p>So the part I actually don't know is: Where do I have to put the file in order to make it available to docker during build?</p>
<p>Thanks</p>
| Daniel | <p>You should avoid copying the <code>.env</code> file into the container altogether. Rather, feed it in from outside at runtime. There's a dedicated property for that: <a href="https://docs.docker.com/compose/environment-variables/#the-env_file-configuration-option" rel="noreferrer">env_file</a>.</p>
<pre><code>web:
env_file:
- .env
</code></pre>
<p>You can store the contents of the <code>.env</code> file itself in a <a href="https://docs.gitlab.com/ee/ci/variables/#masked-variables" rel="noreferrer">Masked Variable</a> in the GitLab CI backend. Then dump it to a <code>.env</code> file in the runner and feed it to the Docker Compose pipeline.</p>
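<p>A rough sketch of how that could look in <code>.gitlab-ci.yml</code>, assuming <code>DOTENV_STAGE</code> is a file-type CI variable as in your setup (the job name and compose usage are illustrative):</p>
<pre><code>deploy stage:
  only:
    - stage
  stage: deploy
  script:
    # $DOTENV_STAGE is the path of the temp file GitLab creates for a file-type variable
    - cp "$DOTENV_STAGE" .env
    # docker-compose reads ./.env automatically, or reference it explicitly via env_file
    - docker-compose up -d
</code></pre>
<p>The key point is that the file only exists in the runner's workspace at deploy time and never gets baked into the image.</p>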
| jayarjo |
<p>I have a kubernetes deployment with the below spec that gets installed via helm 3.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: gatekeeper
spec:
replicas: 1
template:
spec:
containers:
- name: gatekeeper
image: my-gatekeeper-image:some-sha
args:
- --listen=0.0.0.0:80
- --client-id=gk-client
- --discovery-url={{ .Values.discoveryUrl }}
</code></pre>
<p>I need to pass the <code>discoveryUrl</code> value as a helm value, which is the public IP address of the <code>nginx-ingress</code> pod that I deploy via a different helm chart. I install the above deployment like below:</p>
<pre><code>helm3 install my-nginx-ingress-chart
INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
helm3 install my-gatekeeper-chart --set discovery_url=${INGRESS_IP}
</code></pre>
<p>This works fine, however, Now instead of these two <code>helm3 install</code>, I want to have a single helm3 install, where both the nginx-ingress and the gatekeeper deployment should be created.</p>
<p>I understand that in the <code>initContainer</code> of <code>my-gatekeeper-image</code> we can get the nginx-ingress ip address, but I am not able to understand how to set that as an environment variable or pass to the container spec.</p>
<p>There are some stackoverflow questions that mention that we can create a persistent volume or secret to achieve this, but I am not sure, how that would work if we have to delete them. I do not want to create any extra objects and maintain the lifecycle of them.</p>
| Sankar | <p>It is not possible to do this without mounting a persistent volume. But the creation of persistent volume can be backed by just an in-memory store, instead of a block storage device. That way, we do not have to do any extra lifecycle management. The way to achieve that is:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: gatekeeper
data:
gatekeeper.sh: |-
#!/usr/bin/env bash
set -e
INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].name}')
# Do other validations/cleanup
echo $INGRESS_IP > /opt/gkconf/discovery_url;
exit 0
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: gatekeeper
labels:
app: gatekeeper
spec:
replicas: 1
selector:
matchLabels:
app: gatekeeper
template:
metadata:
name: gatekeeper
labels:
app: gatekeeper
spec:
initContainers:
- name: gkinit
command: [ "/opt/gk-init.sh" ]
image: 'bitnami/kubectl:1.12'
volumeMounts:
- mountPath: /opt/gkconf
name: gkconf
- mountPath: /opt/gk-init.sh
name: gatekeeper
subPath: gatekeeper.sh
readOnly: false
containers:
- name: gatekeeper
image: my-gatekeeper-image:some-sha
# ENTRYPOINT of above image should read the
# file /opt/gkconf/discovery_url and then launch
# the actual gatekeeper binary
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /opt/gkconf
name: gkconf
volumes:
- name: gkconf
emptyDir:
medium: Memory
- name: gatekeeper
configMap:
name: gatekeeper
defaultMode: 0555
</code></pre>
| Sankar |
<p>I'm getting a <code>directory index of "/src/" is forbidden</code> error when setting up Docker Nginx configuration within Kubernetes. Here is the error from the Kubernetes logs.</p>
<pre><code>/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/03/11 02:23:24 [error] 23#23: *1 directory index of "/src/" is forbidden, client: 10.136.144.155, server: 0.0.0.0, request: "GET / HTTP/1.1", host: "10.42.3.4:80"
10.136.144.155 - - [11/Mar/2021:02:23:24 +0000] "GET / HTTP/1.1" 403 125 "-" "kube-probe/1.15"
</code></pre>
<p>My dockerfile to serve nginx for an Angular app is quite simple:</p>
<pre><code>FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx/conf.d /etc/nginx/
COPY dist /src
RUN ls /src
</code></pre>
<p>My nginx.conf file contains:</p>
<pre><code>worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name 0.0.0.0;
root /src;
charset utf-8;
include h5bp/basic.conf; # eg https://github.com/h5bp/server-configs-nginx
include modules/gzip.conf;
location =/index.html {
include modules/cors.conf;
}
location / {
try_files $uri$args $uri$args/ /index.html;
}
}
}
</code></pre>
<p>The Kubernetes deployment is using a Quay image. Do you think my error could be in the dockerfile, the nginx.conf file, or both?</p>
| stevetronix | <p>I solved my error by changing my Angular <code>build</code> <code>outputPath</code> to <code>"dist"</code> rather than the default <code>"dist/my-project"</code> that was configured with the Angular installation.</p>
<pre><code>"architect": {
"build": {
"builder": "@angular-devkit/build-angular:browser",
"options": {
"outputPath": "dist", // was previously "dist/my-project"
</code></pre>
| stevetronix |
<p>I want to disable basic auth only on a specific subpath of my App. How this can be done?</p>
<p>e.g.</p>
<p><strong>All subpaths should be basic auth secured:</strong></p>
<pre><code>/
</code></pre>
<p><strong>This path should be an exception and public reachable:</strong></p>
<pre><code>/#/public
</code></pre>
<p>ingress.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app
labels:
app: app
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
ingress.kubernetes.io/auth-type: basic
ingress.kubernetes.io/auth-secret: basic-auth
ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
rules:
- host: "<MYHOST>"
http:
paths:
- path: /
backend:
serviceName: app-service
servicePort: 80
tls:
- secretName: "app-secret"
hosts:
- "<MYHOST>"
</code></pre>
| Tim Schwalbe | <p>The path you're trying to use ( <code>/#/public</code> ) never reaches the server; the client only sends <code>/</code>. That's the reason why you are unable to disable the auth for it.</p>
<p>The symbol (#) is a separator for the URL fragment identifier. <a href="https://www.ietf.org/rfc/rfc2396.txt" rel="nofollow noreferrer">RFC 2396</a> explains it:</p>
<blockquote>
<p>The semantics of a fragment identifier is a property of the data
resulting from a retrieval action, regardless of the type of URI used
in the reference.</p>
</blockquote>
<p>If you tail the logs of your ingress pod you'll see the URL that reaches the backend.</p>
<p>As an additional note, if you need that path to reach the server you would have to URL-encode it as <code>/%23/public</code>, but that is a request with a different meaning.</p>
<p>Regards. </p>
| mdaguete |
<p>I tried to automate the rolling update when the configmap changes are made. But, I am confused about how can I verify if the rolling update is successful or not. I found out the command </p>
<pre><code>kubectl rollout status deployment test-app -n test
</code></pre>
<p>But I guess this is used when we are performing the rollback rather than for rolling update. What's the best way to know if the rolling update is successful or not?</p>
| programmingtech | <h2>ConfigMap generation and rolling update</h2>
<blockquote>
<p>I tried to automate the rolling update when the configmap changes are made</p>
</blockquote>
<p>It is a good practice to <strong>create new</strong> resources instead of <strong>mutating</strong> (update in-place). <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">kubectl kustomize</a> is supporting this <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configGeneration.md" rel="nofollow noreferrer">workflow</a>:</p>
<blockquote>
<p>The recommended way to change a deployment's configuration is to</p>
<ol>
<li>create a new configMap with a new name,</li>
<li>patch the deployment, modifying the name value of the appropriate configMapKeyRef field.</li>
</ol>
</blockquote>
<p>You can deploy using <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#configmapgenerator" rel="nofollow noreferrer">Kustomize</a> to automatically create a new ConfigMap every time you want to change content by using <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#configmapgenerator" rel="nofollow noreferrer">configMapGenerator</a>. The old ones can be <em>garbage collected</em> when not used anymore.</p>
<p>With Kustomize's configMapGenerator you get a generated <em>name</em>.</p>
<p>Example</p>
<pre><code>kind: ConfigMap
metadata:
name: example-configmap-2-g2hdhfc6tk
</code></pre>
<p>and this name gets reflected in your <code>Deployment</code>, which then triggers a new <em>rolling update</em>, but with a new ConfigMap while leaving the old one <em>unchanged</em>.</p>
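<p>A minimal <code>kustomization.yaml</code> that produces such a generated name could look like this (file and resource names are illustrative):</p>
<pre><code># kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: example-configmap-2
  files:
  - application.properties   # changing this file's content changes the generated hash suffix
</code></pre>
<p>Kustomize rewrites every reference to <code>example-configmap-2</code> in <code>deployment.yaml</code> to the hashed name, which is what triggers the rolling update.</p>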
<p>Deploy both <code>Deployment</code> and <code>ConfigMap</code> using</p>
<pre><code>kubectl apply -k <kustomization_directory>
</code></pre>
<p>When handling change this way, you are following the practices called <a href="https://www.digitalocean.com/community/tutorials/what-is-immutable-infrastructure" rel="nofollow noreferrer">Immutable Infrastructure</a>.</p>
<h2>Verify deployment</h2>
<p>To verify a successful deployment, you are right. You should use:</p>
<pre><code>kubectl rollout status deployment test-app -n test
</code></pre>
<p>and when leaving the old ConfigMap unchanged but creating a new ConfigMap for the new <em>ReplicaSet</em> it is clear which ConfigMap belongs to which ReplicaSet. </p>
<p>Also, a rollback will be easier to understand since the old and new ReplicaSets each use their own ConfigMap (on change of content).</p>
| Jonas |
<p>I want to implement custom logic to determine readiness for my pod, and I went over this: <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes.external-state" rel="noreferrer">https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes.external-state</a> and they mention an example property:
<code>management.endpoint.health.group.readiness.include=readinessState,customCheck</code></p>
<p>Question is - how do I override <code>customCheck</code>?
In my case I want to use HTTP probes, so the yaml looks like:</p>
<pre><code>readinessProbe:
initialDelaySeconds: 10
periodSeconds: 10
httpGet:
path: /actuator/health
port: 12345
</code></pre>
<p>So then again - where and how should I apply logic that would determine when the app is ready (just like the link above, i'd like to rely on an external service in order for it to be ready)</p>
| Hummus | <p>To expand KrzysztofS's answer:</p>
<p>First, create a custom health indicator like this:</p>
<pre class="lang-java prettyprint-override"><code>import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.boot.actuate.availability.ReadinessStateHealthIndicator;
import org.springframework.boot.availability.ApplicationAvailability;
import org.springframework.boot.availability.AvailabilityState;
import org.springframework.boot.availability.ReadinessState;
import org.springframework.stereotype.Component;
@Component
public class MyCustomReadinessIndicator extends ReadinessStateHealthIndicator {
private final AtomicBoolean ready = new AtomicBoolean();
public MyCustomReadinessIndicator(ApplicationAvailability availability) {
super(availability);
}
@Override
protected AvailabilityState getState(ApplicationAvailability applicationAvailability) {
return ready.get()
? ReadinessState.ACCEPTING_TRAFFIC
: ReadinessState.REFUSING_TRAFFIC;
}
public void markAsReady() {
if (ready.get()) {
throw new IllegalStateException("Already initialized");
}
ready.set(true);
}
}
</code></pre>
<p>This must be a bean, or Spring won't be able to discover it.</p>
<p><code>@Autowire</code> your component to your service or another component, and call its <code>markAsReady()</code> function when this indicator should switch into "ready" state.</p>
<p>Next, add the name of the bean<sup>1</sup> into "include" block for "readiness" group in your application.yaml file (if you're using application.properties, figure it out yourself).</p>
<pre class="lang-yaml prettyprint-override"><code>management:
endpoint:
health:
group:
readiness:
include: readinessState, myCustomReadinessIndicator
show-components: always
show-details: always
probes:
enabled: true
</code></pre>
<p>Next, try running your application and opening various Actuator endpoints.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Endpoint</th>
<th><code>ready</code> state of<br />your indicator</th>
<th>HTTP response<br />code</th>
<th>JSON response</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>/actuator/health</code></td>
<td><code>false</code></td>
<td>503</td>
<td><code>{</code><br /><code> "status": "OUT_OF_SERVICE",</code><br /><code> "components":</code><br /><code> {</code><br /><code> "myCustomReadinessIndicator":</code><br /><code> {</code><br /><code> "status": "OUT_OF_SERVICE"</code><br /><code> },</code><br /><code> "livenessState":</code><br /><code> {</code><br /><code> "status": "UP"</code><br /><code> },</code><br /><code> "readinessState":</code><br /><code> {</code><br /><code> "status": "UP"</code><br /><code> }</code><br /><code> },</code><br /><code> "groups": ["liveness", "readiness"]</code><br /><code>}</code></td>
</tr>
<tr>
<td><code>/actuator/health/liveness</code></td>
<td><code>false</code></td>
<td>200</td>
<td><code>{"status":"UP"}</code></td>
</tr>
<tr>
<td><code>/actuator/health/readiness</code></td>
<td><code>false</code></td>
<td>503</td>
<td><code>{</code><br /><code> "status": "OUT_OF_SERVICE",</code><br /><code> "components":</code><br /><code> {</code><br /><code> "myCustomReadinessIndicator":</code><br /><code> {</code><br /><code> "status": "OUT_OF_SERVICE"</code><br /><code> },</code><br /><code> "readinessState":</code><br /><code> {</code><br /><code> "status": "UP"</code><br /><code> }</code><br /><code> }</code><br /><code>}</code></td>
</tr>
<tr>
<td><code>/actuator/health</code></td>
<td><code>true</code></td>
<td>200</td>
<td><code>{</code><br /><code> "status": "UP",</code><br /><code> "components":</code><br /><code> {</code><br /><code> "myCustomReadinessIndicator":</code><br /><code> {</code><br /><code> "status": "UP"</code><br /><code> },</code><br /><code> "livenessState":</code><br /><code> {</code><br /><code> "status": "UP"</code><br /><code> },</code><br /><code> "readinessState":</code><br /><code> {</code><br /><code> "status": "UP"</code><br /><code> }</code><br /><code> },</code><br /><code> "groups": ["liveness", "readiness"]</code><br /><code>}</code></td>
</tr>
<tr>
<td><code>/actuator/health/liveness</code></td>
<td><code>true</code></td>
<td>200</td>
<td><code>{"status":"UP"}</code></td>
</tr>
<tr>
<td><code>/actuator/health/readiness</code></td>
<td><code>true</code></td>
<td>200</td>
<td><code>{</code><br /><code> "status": "UP",</code><br /><code> "components":</code><br /><code> {</code><br /><code> "myCustomReadinessIndicator":</code><br /><code> {</code><br /><code> "status": "UP"</code><br /><code> },</code><br /><code> "readinessState":</code><br /><code> {</code><br /><code> "status": "UP"</code><br /><code> }</code><br /><code> }</code><br /><code>}</code></td>
</tr>
</tbody>
</table>
</div>
<p>Now, you need to set <code>/actuator/health/readiness</code> as your readiness probe's path:</p>
<pre class="lang-yaml prettyprint-override"><code>readinessProbe:
initialDelaySeconds: 10
periodSeconds: 10
httpGet:
path: /actuator/health/readiness
port: 12345
</code></pre>
<p>Related: set liveness probe's path to <code>/actuator/health/liveness</code> not to <code>/actuator/health</code>, since <code>/actuator/health</code> will return 503 if your indicator isn't ready yet, even though <code>livenessState</code> is "UP".</p>
<hr />
<p><sup>1</sup> Bean name is usually camel-cased name of the class, starting with lowercase letter, but you can override it by providing name in component annotation: <code>@Component("overriddenName")</code></p>
| izogfif |
<p>I have a client that is calling the Kubernetes REST API using the library from <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">https://github.com/kubernetes-client/csharp</a>. When I pass in client certificate credentials, I get the following error:</p>
<pre><code>System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception. ---> System.ComponentModel.Win32Exception: The credentials supplied to the package were not recognized
at System.Net.SSPIWrapper.AcquireCredentialsHandle(SSPIInterface secModule, String package, CredentialUse intent, SCHANNEL_CRED scc)
at System.Net.Security.SslStreamPal.AcquireCredentialsHandle(CredentialUse credUsage, SCHANNEL_CRED secureCredential)
at System.Net.Security.SslStreamPal.AcquireCredentialsHandle(X509Certificate certificate, SslProtocols protocols, EncryptionPolicy policy, Boolean isServer)
at System.Net.Security.SecureChannel.AcquireClientCredentials(Byte[]& thumbPrint)
at System.Net.Security.SecureChannel.GenerateToken(Byte[] input, Int32 offset, Int32 count, Byte[]& output)
at System.Net.Security.SecureChannel.NextMessage(Byte[] incoming, Int32 offset, Int32 count)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.PartialFrameCallback(AsyncProtocolRequest asyncRequest)
</code></pre>
<p>How can I fix this?</p>
| Phyxx | <p>The trick to solving this was to configure the properties on the private key. In particular, the <code>X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable</code> flags needed to be set.</p>
<pre><code>var context = Yaml.LoadFromString<K8SConfiguration>(configFileInRawYaml);
var config = KubernetesClientConfiguration.BuildConfigFromConfigObject(context);
config.ClientCertificateKeyStoreFlags = X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable;
var client = new Kubernetes(config);
</code></pre>
| Phyxx |
<p>Have been trying to setup Kubeflow on bare metal (on prem etc) on a shared server i.e. not my laptop. I followed the <a href="https://www.kubeflow.org/docs/started/k8s/kfctl-k8s-istio/" rel="nofollow noreferrer">Kubeflow Deployment with kfctl_k8s_istio</a> setup instructions which all well.</p>
<p>Under "Access Kubeflow Dashboard" it says</p>
<blockquote>
<p>Refer Ingress Gateway guide.</p>
</blockquote>
<p>which just leads to more questions I don't know the answer to coz i didn't write the setup i.e.</p>
<ol>
<li>What is the ingress port for the UI? <code>kubectl get svc istio-ingressgateway -n istio-system</code> returns a hug list??</li>
<li>What do i do if the external IP is <code><none></code>? The server has an IP on the local network i.e. 192.168.1.69</li>
<li>I'm assuming <code>kfctl</code> didn't setup an external load balancer?</li>
<li>Whats the container that hosts the web UI? What should the <code>Gateway</code> and <code>VirtualService</code> yaml look like?</li>
</ol>
<p>I want to use Kubeflow and have to learn how Istio works? Why?</p>
| CpILL | <p>So, in the end I went with k3s as it is a one-liner to set up:</p>
<pre class="lang-sh prettyprint-override"><code>curl -sfL https://get.k3s.io | sh -
</code></pre>
<p>and there are <a href="https://rancher.com/docs/k3s/latest/en/installation/install-options/" rel="nofollow noreferrer">many options</a> which you can set with environment variables.</p>
<p>We were using GPUs and so needed to set up the <a href="https://github.com/NVIDIA/k8s-device-plugin" rel="nofollow noreferrer">NVIDIA device plugin for Kubernetes</a>.</p>
<p>We do all this now with Ansible scripts as we have a fleet of machines to manage.</p>
<p>Kubeflow is, like most Google projects, too bloated, and we're looking at <a href="https://dagster.io/" rel="nofollow noreferrer">Dagster</a> now as it's easy to develop with on your local setup.</p>
| CpILL |
<p>After learning that we should have used a <code>StatefulSet</code> instead of a <code>Deployment</code> in order to be able to attach the same persistent volume to multiple pods and especially pods on different nodes, I tried changing our config accordingly.</p>
<p>However, even when using the same name for the volume claim as before, it seems to be creating an entirely new volume instead of using our existing one, hence the application loses access to the existing data when run as a <code>StatefulSet</code>.</p>
<p>Here's the volume claim part of our current <code>Deployment</code> config:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>This results in a claim with the same name.</p>
<p>And here's the template for the <code>StatefulSet</code>:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>This results in new claims for every pod, with the pod name and an ID per claim, like e.g. <code>gitea-server-data-gitea-server-0</code>.</p>
<p>The new claims are now using a new volume instead of the existing one. So I tried specifying the existing volume explicitly, like so:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
volumeName: pvc-c87ff507-fd77-11e8-9a7b-420101234567
resources:
requests:
storage: 20Gi
</code></pre>
<p>However, this results in pods failing to be scheduled and the new claim being "pending" indefinitely:</p>
<blockquote>
<p>pod has unbound immediate PersistentVolumeClaims (repeated times)</p>
</blockquote>
<p>So the question is: how can we migrate the volume claim(s) in a way that allows us to use the existing persistent volume and access the current application data from a new <code>StatefulSet</code> instead of the current <code>Deployment</code>?</p>
<p>(In case it is relevant, we are using Kubernetes on GKE.)</p>
| raucao | <p>OK, so I spent quite some time trying out all kinds of different configs until finally learning that GCE persistent disks simply don't support <code>ReadWriteMany</code> to begin with.</p>
<p>The <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">GKE docs</a> go out of their way to never explicitly mention that you cannot actually mount <em>any</em> normal GKE persistent volume on multiple pods/nodes.</p>
<p>Apparently, the only way to get shared file storage between pods is to deploy either your own NFS/Gluster/etc. or to cough up a bunch of money and <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">use Google Cloud Filestore</a>, for which there is a GKE storage class, and which can indeed be mounted on multiple pods.</p>
<p>Unfortunately, that's not an option for this app, as Filestore pricing begins with 1TB minimum capacity at a whopping $0.20/GB/month, which means that <strong>the cheapest option available costs around $205 per month</strong>. We currently pay around $60/month, so that would more than triple our bill, simply to get rolling deployments without errors.</p>
| raucao |
<p>I am wondering about Kubernetes's secret management. I have a process that generates a lot of secrets that only need to live for a short while.</p>
<p>I would like for these secrets to come from Vault or a similar service in the future. However, for right now, I don't have the time to implement this. </p>
<p>If someone could provide me with the documentation or delineate the secret life cycle, it would be super helpful. Does Kubernetes have the means to garbage collect these secrets as it does with containers?</p>
<p>Likewise, I am wondering if there is a way to set cascading deletes when this one resource disappears, so does its secrets?</p>
| Aaron | <p>Kubernetes has no notion of secret lifetime.</p>
<ul>
<li><p>you can implement a <code>CronJob</code> in charge of checking and then deleting secrets in specific namespace(s) if they are older than a specific time (a rough sketch follows this list).</p></li>
<li><p>you can create all your secrets in a temporary namespace; destroying the namespace will destroy all the secrets associated with it.</p></li>
<li><p>use <code>Vault</code></p></li>
</ul>
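<p>As a rough sketch of the CronJob option from the list above (namespace, label, schedule and ServiceAccount are all assumptions; the ServiceAccount needs RBAC permission to list and delete secrets, and older clusters may need apiVersion <code>batch/v1beta1</code>):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: secret-cleanup
  namespace: temp-secrets
spec:
  schedule: "0 * * * *"                       # hourly
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: secret-cleanup  # must be allowed to list/delete secrets
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: bitnami/kubectl:latest
            # simplest variant: delete every secret carrying a "short-lived" label;
            # an age check would need extra scripting around creationTimestamp
            command:
            - kubectl
            - delete
            - secrets
            - --namespace=temp-secrets
            - -l
            - lifecycle=short-lived
</code></pre>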
| Kartoch |
<p>We are having problem with several deployments in our cluster that do not seem to be working. But I am a bit apprehensive in touching these, since they are part of the kube-system namespace. I am also unsure as what the correct approach to getting them into an OK state is. </p>
<p>I currently have two daemonsets that have warnings with the message </p>
<p><strong>DaemonSet has no nodes selected</strong> </p>
<p>See images below. Does anyone have any idea what the correct approach is? </p>
<p><a href="https://i.stack.imgur.com/paqrW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/paqrW.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/QfR3S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QfR3S.png" alt="enter image description here"></a></p>
| Andreas | <p>A DaemonSet creates a pod on each node of your Kubernetes cluster.</p>
<p>If the Kubernetes scheduler cannot schedule any pod, there are several possibilities:</p>
<ul>
<li>The pod spec has a memory request that is too high for the node's memory capacity; look at the value of <code>spec.containers[].resources.requests.memory</code></li>
<li>The nodes may have a taint, so the DaemonSet declaration must have a matching toleration, as sketched below this list (<a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">kubernetes documentation about taint and toleration</a>)</li>
<li>The pod spec may have a <code>nodeSelector</code> field (<a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="nofollow noreferrer">kubernetes documentation about node selector</a>)</li>
<li>The pod spec may have an enforced node affinity or anti-affinity (<a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">kubernetes documentation about node affinity</a>)</li>
<li>If <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">Pod Security Policies</a> are enabled on the cluster, a security policy may be blocking access to a resource that the pod needs to run</li>
</ul>
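<p>For the taint case, a hedged sketch of what the toleration could look like in the DaemonSet spec (the taint key shown is just an example; check <code>kubectl describe node</code> for the actual taints on your nodes):</p>
<pre><code>spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # example taint key, verify on your nodes
        operator: Exists
        effect: NoSchedule
</code></pre>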
<p>These are not the only possible causes. More generally, a good start would be to look at the events associated with the DaemonSet:</p>
<pre><code>> kubectl describe daemonsets NAME_OF_YOUR_DAEMON_SET
</code></pre>
| Kartoch |
<p>I have 2-3 machine learning models I am trying to host via Kubernetes. I don't get much usage on the models right now, but they are critical and need to be available when called upon.</p>
<p>I am providing access to the models via a flask app and am using a load balancer to route traffic to the flask app.</p>
<p>Everything typically works fine since requests are only made intermittently, but I've come to find that if multiple requests are made at the same time my pod crashes due to OOM. Isn't this the job of the load balancer? To make sure requests are routed appropriately? (in this case, route the next request after the previous ones are complete?)</p>
<p>Below is my deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: flask-service
labels:
run: flask-service
spec:
selector:
app: flask
ports:
- protocol: "TCP"
port: 5000
targetPort: 5000
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: flask
spec:
selector:
matchLabels:
app: flask
replicas: 1
template:
metadata:
labels:
app: flask
spec:
containers:
- name: flask
imagePullPolicy: Always
image: gcr.io/XXX/flask:latest
ports:
- containerPort: 5000
resources:
limits:
memory: 7000Mi
requests:
memory: 1000Mi
</code></pre>
| echan00 | <blockquote>
<p>Isn't this the job of the load balancer? To make sure requests are routed appropriately?</p>
</blockquote>
<p>Yes, you are right. But...</p>
<blockquote>
<p>replicas: 1</p>
</blockquote>
<p>You only run a single replica, so the load balancer has no other <em>instances</em> of your application to route to. Give it multiple instances.</p>
<blockquote>
<p>I've come to find that if multiple requests are made at the same time my pod crashes due to OOM</p>
</blockquote>
<p>It sounds like your application has very limited resources.</p>
<pre><code> resources:
limits:
memory: 7000Mi
requests:
memory: 1000Mi
</code></pre>
<p>When your application uses more than <code>7000Mi</code> it will get OOM-killed (also consider increasing the request value). If your app needs more, you can give it more memory (scale vertically) or add more instances (scale horizontally).</p>
<h2>Horizontal Pod Autoscaler</h2>
<blockquote>
<p>Everything typically works fine since requests are only made intermittently</p>
</blockquote>
<p>Consider using <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a>, it can scale up your application to more instances when you have more requests and scale down when there is less requests. This can be based on memory or CPU usage for example.</p>
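<p>A minimal sketch of an HPA targeting your <code>flask</code> Deployment (this uses the <code>autoscaling/v2</code> form; older clusters may need <code>v2beta2</code>, and the thresholds here are just placeholders):</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: flask
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flask
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70   # note: utilization is measured against the memory request
</code></pre>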
<h2>Use a queue</h2>
<blockquote>
<p>route the next request after the previous ones are complete?</p>
</blockquote>
<p>If this is the behavior you want, then you need to use a <em>queue</em>, e.g. RabbitMQ or Kafka, to process your requests one at a time.</p>
| Jonas |
<p>I am having difficulty getting a kubernetes livenessProbe exec command to work with environment variables.
My goal is for the liveness probe to monitor memory usage on the pod as well as also perform an httpGet health check.</p>
<p>"If container memory usage exceeds 90% of the resource limits OR the http response code at <code>/health</code> fails then the probe should fail."</p>
<p>The liveness probe is configured as follows:</p>
<pre><code>
livenessProbe:
exec:
command:
- sh
- -c
- |-
"used=$(awk '{ print int($1/1.049e+6) }' /sys/fs/cgroup/memory/memory.usage_in_bytes);
thresh=$(awk '{ print int( $1 / 1.049e+6 * 0.9 ) }' /sys/fs/cgroup/memory/memory.limit_in_bytes);
health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/health);
if [[ ${used} -gt ${thresh} || ${health} -ne 200 ]]; then exit 1; fi"
initialDelaySeconds: 240
periodSeconds: 60
failureThreshold: 3
timeoutSeconds: 10
</code></pre>
<p>If I exec into the (ubuntu) pod and run these commands they all work fine and do the job.</p>
<p>But when deployed as a livenessProbe the pod is constantly failing with the following warning:</p>
<pre><code>Events: │
│ Type Reason Age From Message │
│ ---- ------ ---- ---- ------- │
│ Warning Unhealthy 14m (x60 over 159m) kubelet (combined from similar events): Liveness probe failed: sh: 4: used=1608; │
│ thresh=2249; │
│ health=200; │
│ if [[ -gt || -ne 200 ]]; then exit 1; fi: not found
</code></pre>
<p>It looks as if the initial commands to probe memory and curl the health check endpoint all worked and populated environment variables but then those variable substitutions did not subsequently populate in the if statement so the probe never passes.</p>
<p>Any idea as to why? Or how this could be configured to work properly?
I know it's a little bit convoluted. Thanks in advance.</p>
| david_beauchamp | <p>Looks like the shell is seeing your whole command as a filename to execute.</p>
<p>I would remove the outer quotes</p>
<pre><code>livenessProbe:
exec:
command:
- sh
- -c
- |-
used=$(awk '{ print int($1/1.049e+6) }' /sys/fs/cgroup/memory/memory.usage_in_bytes);
thresh=$(awk '{ print int( $1 / 1.049e+6 * 0.9 ) }' /sys/fs/cgroup/memory/memory.limit_in_bytes);
health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/health);
if [[ ${used} -gt ${thresh} || ${health} -ne 200 ]]; then exit 1; fi
initialDelaySeconds: 240
periodSeconds: 60
failureThreshold: 3
timeoutSeconds: 10
</code></pre>
<p>You're already telling the YAML parser it's a multiline string</p>
| Andrew McGuinness |
<p>I have a StatefulSet that has 2 replicas. I want to create an endpoint to be able to reach any of this replica, passing it hostname id, and in a way that if I scale it to more replicas, the new pods need to be reachable.</p>
<p>I can do this creating an Ingress like this:</p>
<pre><code>apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
name: test-ingress
namespace: default
spec:
rules:
- host: appscode.example.com
http:
paths:
- path: /0
backend:
hostNames:
- web-0
serviceName: nginx-set
servicePort: '80'
- path: /1
backend:
hostNames:
- web-1
serviceName: nginx-set
servicePort: '80'
</code></pre>
<p>With this, a <code>GET</code> on <code>appscode.example.com/0</code> will be routed to <code>web-0</code> pod.
But, how can I do this in a dynamic way? If I change the replicas to 3, I will need to manually create a new path route to the pod <code>web-2</code> to be reachable.</p>
| fabriciols | <p>You need a program (an operator) listening to the Kubernetes API and patching the Ingress resource every time the number of pods in the StatefulSet changes.</p>
<p>Using go:</p>
<ul>
<li>watching a resource: <a href="https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-part-4-using-go-b1d0e3c1c899" rel="nofollow noreferrer">https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-part-4-using-go-b1d0e3c1c899</a></li>
<li>patching a resource: <a href="https://dwmkerr.com/patching-kubernetes-resources-in-golang/" rel="nofollow noreferrer">https://dwmkerr.com/patching-kubernetes-resources-in-golang/</a></li>
</ul>
| Kartoch |
<p>While running this command k</p>
<blockquote>
<p>kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml</p>
</blockquote>
<p>I am getting this error</p>
<blockquote>
<p>Error from server (NotFound): error when deleting
"samples/bookinfo/networking/bookinfo-gateway.yaml": the server could
not find the requested resource (delete gatewaies.networking.istio.io
bookinfo-gateway)</p>
</blockquote>
<p>Can someone please tell me how can i accept gatewaies plural ? or how to fix this error</p>
| Salman | <p>Upgrading to latest kubectl solved the issue</p>
| Salman |
<p>Using kubectl command line, is it possible to define the exact pod name?</p>
<p>I have tried with</p>
<pre><code>kubectl run $pod-name --image imageX
</code></pre>
<p>However, the resulted pod name is something like <code>$pod-name-xx-yyy-nnn</code>.
So without using a yaml file, can I define the pod name using kubectl CLI?</p>
| sqr | <p><code>kubectl run</code> creates a <em>Deployment</em> by default. A <em>Deployment</em> starts a <em>ReplicaSet</em> that manages the pods/replicas... and therefore has a generated <em>pod name</em>.</p>
<h2>Run pod</h2>
<p>To run a single pod you can add <code>--restart=Never</code> to the <code>kubectl run</code> command.</p>
<pre><code>kubectl run mypod --restart=Never --image=imageX
</code></pre>
| Jonas |
<p>We are on Kubernetes 1.9.0 and wonder if there is way to access an "ordinal index" of a pod with in its statefulset configuration file. We like to dynamically assign a value (that's derived from the ordinal index) to the pod's label and later use it for setting pod affinity (or antiaffinity) under spec.</p>
<p>Alternatively, is the pod's instance name available with in statefulset configfile? If so, we can hopefully extract ordinal index from it and dynamically assign to a label (for later use for affinity).</p>
| Raj N | <p><a href="https://github.com/kubernetes/kubernetes/issues/30427" rel="noreferrer">Right now</a> the only option is to extract index from host name</p>
<pre><code>lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "export INDEX=${HOSTNAME##*-}"]
</code></pre>
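<p>Note that an <code>export</code> in a <code>postStart</code> hook runs in its own shell, so the variable is not automatically visible to the main container process. A common workaround (a sketch only, with a hypothetical image and start script) is to derive the ordinal in the container command itself:</p>
<pre><code>containers:
- name: web
  image: my-image:v1                # hypothetical image
  command: ["/bin/sh", "-c"]
  args:
  - |
    export INDEX=${HOSTNAME##*-}    # web-0 -> 0, web-1 -> 1, ...
    exec /start-my-app --ordinal=$INDEX
</code></pre>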
| Maciek Sawicki |
<p>In my Kubernetes installation, I can see cAdvisor reports a measurement called "container_cpu_load_average_10s" for each pod/container. I get values such as 232, 6512 and so on.</p>
<p>So, what is the unit of measure for CPU load here? To me, "CPU Load" and "CPU Usage" are used interchangeably, so I can't understand why it's not a value between 0 and 100.</p>
<p><strong>UPDATE</strong>:</p>
<p>Here I put the related line from cAdvisor log:</p>
<pre><code>...
container_cpu_load_average_10s{container_name="",id="/system.slice/kubelet.service",image="",name="",namespace="",pod_name=""} 1598
...
</code></pre>
| Michel Gokan Khan | <p>It is the number of tasks. A very nice explanation can be found here: <a href="https://serverfault.com/questions/667078/high-cpu-utilization-but-low-load-average">https://serverfault.com/questions/667078/high-cpu-utilization-but-low-load-average</a></p>
| Michel Gokan Khan |
<p>I have an application Docker image which starts a MongoDB instance on a random port. When I create a Kubernetes Pod with the application image, the application initializes successfully and a MongoDB instance comes up on a random port as <strong>localhost:port</strong> without any error.</p>
<p>However, when I create a Kubernetes Deployment, the same application initialization fails inside the container with the error "mongodb can not be started as <strong>localhost:port</strong> can not be accessed".</p>
<p>Can anyone explain why the application initialization fails with a K8s Deployment but not with a K8s Pod? And how can I resolve this problem?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-app
image: my-app:v1
ports:
- containerPort: 8888 # Apps exposed port
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-dep
spec:
replicas: 2
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: my-app:v1
ports:
- containerPort: 8888 # Apps exposed port
</code></pre>
<p>Thanks</p>
| Jaraws | <p>Instead of listening on <code>localhost:port</code>, you should try configuring MongoDB to listen on <code>0.0.0.0:port</code>. This helped me when I had a similar issue with another app.</p>
<h2>Configure MongoDB to listen to all interfaces</h2>
<p>Your <code>mongod.conf</code></p>
<pre><code># /etc/mongod.conf
# Listen to local interface only. Comment out to listen on all interfaces.
bind_ip = 127.0.0.1
</code></pre>
<p>change to this</p>
<pre><code># /etc/mongod.conf
# Listen to local interface only. Comment out to listen on all interfaces.
# bind_ip = 127.0.0.1
</code></pre>
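<p>If rebuilding the image to edit <code>mongod.conf</code> is not convenient, the bind address can also be passed as a flag from the Deployment itself. A sketch with a placeholder container (the official <code>mongo</code> image forwards extra args to <code>mongod</code>):</p>
<pre><code>containers:
- name: mongodb
  image: mongo:4.4              # placeholder image/tag
  args: ["--bind_ip_all"]       # equivalent to --bind_ip 0.0.0.0
  ports:
  - containerPort: 27017
</code></pre>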
| Jonas |
<p>Can anybody let me know how to configure
Kubernetes pods to send alerts to a Slack channel?</p>
<p>Thanks in Advance
Rishabh Gupta</p>
| Rishabh Gupta | <p>Kubernetes doesn't provide out-of-the-box Slack integration.</p>
<p>There are few projects that you can use:</p>
<ul>
<li><p><a href="https://hub.kubeapps.com/charts/stable/kube-slack" rel="nofollow noreferrer">https://hub.kubeapps.com/charts/stable/kube-slack</a> - runs on Kubernetes, watches for events and sends pod failure notifications to Slack</p></li>
<li><p><a href="https://hub.kubeapps.com/charts/stable/kubewatch" rel="nofollow noreferrer">https://hub.kubeapps.com/charts/stable/kubewatch</a> - a similar project; depending on configuration it can be quite noisy</p></li>
</ul>
<p>If you need more complex monitoring you can use Prometheus and its Alertmanager: <a href="https://prometheus.io/docs/alerting/notification_examples/" rel="nofollow noreferrer">https://prometheus.io/docs/alerting/notification_examples/</a></p>
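<p>For the Prometheus route, the Slack side is configured in Alertmanager rather than in the pods themselves. A minimal sketch of an <code>alertmanager.yml</code> receiver (webhook URL and channel are placeholders):</p>
<pre><code>route:
  receiver: slack-notifications
receivers:
- name: slack-notifications
  slack_configs:
  - api_url: https://hooks.slack.com/services/T000/B000/XXXXXXXX   # placeholder webhook
    channel: '#k8s-alerts'
    send_resolved: true
</code></pre>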
| Maciek Sawicki |
<p>I used kubeadm to deploy my Kubernetes dashboard.
When I tried to deploy the <em>nginx-ingress-controller</em> in my dev namespace with the default service account, the <em>liveness</em> and readiness probes were failing with a status-code error.</p>
<p>nginx-ingress-controller image is</p>
<pre><code>gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15
</code></pre>
<p>I get the same error in the test namespace as well.
My logs show:</p>
<pre><code> Received SIGTERM, shutting down
shutting down controller queues
I1201 00:19:48.745970 7 nginx.go:237] stopping NGINX process...
I1201 00:19:48.746923 7 shared_informer.go:112] stop requested
E1201 00:19:48.746985 7 listers.go:63] Timed out waiting for caches to sync
[notice] 22#22: signal process started
shutting down Ingress controller...
Handled quit, awaiting pod deletion
I NGINX process has stopped
Exiting with 0
</code></pre>
<p>Why am I getting these failures at the cluster scope, and where is my mistake?</p>
| krish123 | <p>You're most likely allocating too few resources. Try removing the <code>resources</code> section for debugging.</p>
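<p>For example, you could start with the <code>resources</code> block commented out (or set generously) and only tighten it once the controller passes its probes. A sketch with placeholder values:</p>
<pre><code>containers:
- name: nginx-ingress-controller
  image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15
  # resources:            # re-enable and tune once the probes pass
  #   requests:
  #     cpu: 100m
  #     memory: 128Mi
  #   limits:
  #     cpu: 500m
  #     memory: 512Mi
</code></pre>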
| Ami Mahloof |