Question | QuestionAuthor | Answer | AnswerAuthor |
---|---|---|---|
<p>Is there a way to query from an API the resources under <code>Kubernetes Engine > Services & Ingress</code> in the GCloud console ?</p>
| cyberhippo | <p>It's a good question. The answer is slightly complicated.</p>
<p>Essentially, IIUC, you want to list the Kubernetes services and ingresses for your cluster(s). This functionality is provided by Kubernetes' API server rather than Kubernetes Engine itself.</p>
<p>So, you can do this various ways but, commonly (using the <a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer"><code>kubectl</code></a> command-line):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get services [--namespace=${NAMESPACE}]
kubectl get ingresses [--namespace=${NAMESPACE}]
</code></pre>
<p>If you've deployed e.g. the Kubernetes <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">Web UI</a> (formerly Dashboard), you should be able to enumerate the services|ingresses through it too.</p>
<p>You may also interact directly with your clusters' API servers to make the underlying REST API call that's made by <code>kubectl</code> using the above commands.</p>
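<p>For example, a rough sketch using <code>kubectl proxy</code> (default namespace and port assumed; on newer clusters Ingress lives under <code>networking.k8s.io/v1</code> instead):</p>
<pre class="lang-sh prettyprint-override"><code># Proxy the cluster's API server to localhost
kubectl proxy --port=8001 &

# List Services and Ingresses in the default namespace via the REST API
curl http://127.0.0.1:8001/api/v1/namespaces/default/services
curl http://127.0.0.1:8001/apis/networking.k8s.io/v1beta1/namespaces/default/ingresses
</code></pre>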
<p>For Kubernetes Engine, Cloud Console accesses two distinct APIs:</p>
<ol>
<li>The Kubernetes Engine API that provisions|manages clusters and is documented <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/" rel="nofollow noreferrer">here</a>, accessible through Console and <code>gcloud</code>.</li>
<li>The Kubernetes API that provisions|manages resources (e.g. Pods, Deployments, Services, Ingress etc.) that are owned by the cluster and are documented <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/" rel="nofollow noreferrer">here</a>, some (!) are accessible through Console. All are accessible directly or commonly using Kubernetes' command-line <code>kubectl</code>.</li>
</ol>
| DazWilkin |
<p>I created my Docker image (Python Flask).</p>
<p>How can I calculate what limits to set for memory and CPU?</p>
<p>Are there tools that run performance tests on Docker with different limits and then advise what the best limit values are?</p>
| Shurik | <p>With an application already running inside of a container, you can use <code>docker stats</code> to see the current utilization of CPU and memory. While there it little harm in setting CPU limits too low (it will just slow down the app, but it will still run), be careful to keep memory limits above the worst case scenario. When apps attempt to exceed their memory limit, they will be killed and usually restarted by a restart policy/orchestration tool. If the limit is set too low, you may find your app in a restart loop.</p>
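<p>For example, a quick sketch (the image name and limit values are placeholders to adjust for your app):</p>
<pre class="lang-sh prettyprint-override"><code># Observe current CPU/memory usage of running containers
docker stats --no-stream

# Re-run the container with limits, keeping memory headroom above the worst case observed
docker run -d --name myservice --memory=512m --cpus=0.5 my-flask-image
</code></pre>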
| BMitch |
<p>I have multiple files in an S3 bucket which I need to copy to one of the running Kubernetes pods under the /tmp path.
I need a reliable command or a tried-and-tested way to do the same.</p>
<p>Let's say my bucket name "learning" and pod name is "test-5c7cd9c-l6qng"</p>
| me25 | <p>AWS CLI commands "<a href="https://docs.aws.amazon.com/cli/latest/reference/s3api/get-object.html" rel="nofollow noreferrer">aws s3api get-object</a>" or "<a href="https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html" rel="nofollow noreferrer">aws s3 cp</a>" can be used to copy the data onto the Pod from S3. To make these calls <a href="https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys" rel="nofollow noreferrer">AWS Access Keys</a> are required. These keys provide the authentication to call the S3 service. "<a href="https://docs.aws.amazon.com/cli/latest/reference/configure/index.html" rel="nofollow noreferrer">aws configure</a>" command can be used to configure the Access Keys in the Pod.</p>
<p>Coming to K8S, an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Container</a> can be used to execute the above command before the actual application container starts. Instead of having the Access Keys directly written into the Pod which is not really safe, <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">K8S Secrets feature</a> can be used to pass/inject the Access Keys to the Pods.</p>
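<p>As a rough sketch (the Secret name, keys, volume and image below are placeholders, not a tested manifest), the Init Container part of the Pod spec could look like this:</p>
<pre><code>initContainers:
- name: fetch-from-s3
  image: amazon/aws-cli
  command: ["aws", "s3", "cp", "s3://learning/", "/tmp/", "--recursive"]
  env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: aws-credentials
        key: access-key-id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: aws-credentials
        key: secret-access-key
  volumeMounts:
  - name: tmp
    mountPath: /tmp
</code></pre>
<p>Here <code>/tmp</code> is backed by a shared volume (e.g. an <code>emptyDir</code>) mounted into both the Init Container and the application container, so the downloaded files are visible to the app.</p>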
<p>FYI ... the download can be done programmatically by using the <a href="https://aws.amazon.com/tools/" rel="nofollow noreferrer">AWS SDK</a> and the <a href="https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html" rel="nofollow noreferrer">S3Client</a> Interface for Java.</p>
| Praveen Sripati |
<p>I've a working ValidatingWebhookConfiguration and have been creating|approving CSRs with <code>certificates.k8s.io/v1beta1</code>.</p>
<p>I upgraded (MicroK8s) from 1.18 to 1.20 and received a warning that <code>certificates.k8s.io/v1beta1</code> is deprecated in 1.19+ and thought I'd try (without success) upgrading to <code>certificates.k8s.io/v1</code>.</p>
<p>Existing (working) CSR:</p>
<pre><code>apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: ${SERVICE}.${NAMESPACE}
spec:
groups:
- system:authenticated
request: $(cat ${FILENAME}.csr | base64 | tr -d '\n')
usages:
- digital signature
- key encipherment
- server auth
</code></pre>
<p>Upgrading the API generated an error:</p>
<pre><code>missing required field "signerName" in io.k8s.api.certificates.v1.CertificateSigningRequestSpec;
</code></pre>
<p>I read the CSR doc specifically the part about <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers" rel="nofollow noreferrer">Kubernetes signers</a> and, because my existing spec uses <code>server auth</code>, assumed I could use <code>kubernetes.io/kubelet-serving</code> since this is the only one that permits <code>server auth</code>.</p>
<pre><code>apiVersion: certificates.k8s.io/v1 <<--- UPGRADED
kind: CertificateSigningRequest
metadata:
name: ${SERVICE}.${NAMESPACE}
spec:
groups:
- system:authenticated
request: $(cat ${FILENAME}.csr | base64 | tr -d '\n')
signerName: kubernetes.io/kubelet-serving <<--- ADDED
usages:
- digital signature
- key encipherment
- server auth
</code></pre>
<p>However, I get errors trying to approve the CSR (as a cluster admin):</p>
<pre><code>kubectl certificate approve ${SERVICE}.${NAMESPACE}
certificatesigningrequest.certificates.k8s.io/${SERVICE}.${NAMESPACE} approved
kubectl get csr ${SERVICE}.${NAMESPACE}
NAME SIGNERNAME REQUESTOR CONDITION
${SERVICE}.${NAMESPACE} kubernetes.io/kubelet-serving admin Approved,Failed
</code></pre>
<blockquote>
<p><strong>NOTE</strong> <code>Approved</code> but <code>Failed</code></p>
</blockquote>
<p>And I'm unable to get the certificate (presumably because it <code>Failed</code>):</p>
<pre><code>kubectl get csr ${SERVICE}.${NAMESPACE} \
--output=jsonpath='{.status.certificate}'
</code></pre>
<p>How should I use the <code>certificates.k8s.io/v1</code> API?</p>
<h3>Update: 2021-01-06</h3>
<p>OK, so I realized I have more information on the "Failed" and this gives me something to investigate...</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get csr/${SERVICE}.${NAMESPACE} \
--output=jsonpath="{.status}" \
| jq .
</code></pre>
<p>Yields:</p>
<pre><code>{
"conditions": [
{
"lastTransitionTime": "2021-01-06T18:52:15Z",
"lastUpdateTime": "2021-01-06T18:52:15Z",
"message": "This CSR was approved by kubectl certificate approve.",
"reason": "KubectlApprove",
"status": "True",
"type": "Approved"
},
{
"lastTransitionTime": "2021-01-06T18:52:15Z",
"lastUpdateTime": "2021-01-06T18:52:15Z",
"message": "subject organization is not system:nodes",
"reason": "SignerValidationFailure",
"status": "True",
"type": "Failed"
}
]
}
</code></pre>
<h3>Update: 2021-01-07</h3>
<p>Thanks @PjoterS</p>
<pre class="lang-sh prettyprint-override"><code>ubectl describe csr/${SERVICE}.${NAMESPACE}
Name: eldlund.utopial
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certificates.k8s.io/v1","kind":"CertificateSigningRequest","metadata":{"annotations":{},"name":"eldlund.utopial"},"spec":{"groups":["system:authenticated"],"request":"LS0tLS1C...LS0tLS0K","signerName":"kubernetes.io/kubelet-serving","usages":["digital signature","key encipherment","server auth"]}}
CreationTimestamp: Thu, 07 Jan 2021 17:03:23 +0000
Requesting User: admin
Signer: kubernetes.io/kubelet-serving
Status: Pending
Subject:
Common Name: eldlund.utopial.svc
Serial Number:
Subject Alternative Names:
DNS Names: eldlund.utopial.svc
eldlund.utopial.svc.cluster.local
Events: <none>
</code></pre>
<h3>Signing w/ OpenSSL (rather than Kubernetes)</h3>
<p>I tried creating CA crt|key and then a service key|CSR and signing the service CSR with the CA but Kubernetes complains:</p>
<pre><code>x509: certificate is not valid for any names, but wanted to match ainsley.utopial.svc
</code></pre>
<p>Yet the certificate appears to contain both CN and SAN entries:</p>
<p><strong>DOESN'T WORK</strong></p>
<pre class="lang-sh prettyprint-override"><code>openssl x509 -in ${FILENAME}.crt --noout -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
6f:14:25:8c:...
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = Validating Webhook CA
Validity
Not Before: Jan 7 18:10:50 2021 GMT
Not After : Feb 6 18:10:50 2021 GMT
Subject: CN = ainsley.utopial.svc
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public-Key: (2048 bit)
Modulus:
00:ca:56:15:...
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:ainsley.utopial.svc, DNS:ainsley.utopial.svc.cluster.local
Signature Algorithm: sha256WithRSAEncryption
b2:ec:22:b6:...
</code></pre>
<blockquote>
<p><strong>NOTE</strong> <code>CN</code> is a DNS name above but an IP below ???</p>
</blockquote>
<p>Reverting to my working solution with <code>v1beta1</code> and changing the service name for completeness (<code>loi</code>), the Webhook succeeds and the certificate appears to be no different from the one shown above (except the different service name):</p>
<p><strong>WORKS</strong></p>
<pre class="lang-sh prettyprint-override"><code>Certificate:
Data:
Version: 3 (0x2)
Serial Number:
ff:b3:cb:11:...
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = 10.152.183.1
Validity
Not Before: Jan 7 18:18:45 2021 GMT
Not After : Jan 7 18:18:45 2022 GMT
Subject: CN = loi.utopial.svc
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public-Key: (2048 bit)
Modulus:
00:d2:cc:c2:...
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
keyid:E7:AE:3A:25:95:D2:F7:5B:C6:EA:50:56:07:E8:25:83:60:88:68:7A
X509v3 Subject Alternative Name:
DNS:loi.utopial.svc, DNS:loi.utopial.svc.cluster.local
Signature Algorithm: sha256WithRSAEncryption
48:a1:b2:e2:...
</code></pre>
| DazWilkin | <p><strong>Update</strong> Switched to <a href="https://cert-manager.io" rel="nofollow noreferrer">cert-manager</a> and everything working <a href="https://pretired.dazwilkin.com/posts/210108/" rel="nofollow noreferrer">well</a></p>
<p>I got it working but I'm unsure why what I'm now doing is correct.</p>
<p>And the <code>openssl</code> feels unwieldy (advice appreciated).</p>
<h3>Environment</h3>
<pre class="lang-sh prettyprint-override"><code>DIR=${PWD}/secrets
SERVICE="..."
NAMESPACE="..."
FILENAME="${DIR}/${SERVICE}.${NAMESPACE}"
</code></pre>
<h3>CA</h3>
<pre class="lang-sh prettyprint-override"><code>openssl req \
-nodes \
-new \
-x509 \
-keyout ${FILENAME}.ca.key \
-out ${FILENAME}.ca.crt \
-subj "/CN=Validating Webhook CA"
</code></pre>
<h3>Create (Webhook) Service</h3>
<p>Necessary to set the service certificate's CN to the IP</p>
<pre class="lang-sh prettyprint-override"><code>cat ./kubernetes/service.yaml \
| sed "s|SERVICE|${SERVICE}|g" \
| sed "s|NAMESPACE|${NAMESPACE}|g" \
| kubectl apply --filename=- --namespace=${NAMESPACE}
ENDPOINT=$(\
kubectl get service/${SERVICE} \
--namespace=${NAMESPACE} \
--output=jsonpath="{.spec.clusterIP}") && echo ${ENDPOINT}
</code></pre>
<h3>Create CSR</h3>
<p>Even though I include the CN and <code>alt_names</code> here, I must duplicate the SAN stuff (next step)</p>
<pre class="lang-sh prettyprint-override"><code>echo "[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ dn ]
commonName = ${ENDPOINT}
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = ${SERVICE}.${NAMESPACE}.svc
DNS.2 = ${SERVICE}.${NAMESPACE}.svc.cluster.local
" > ${FILENAME}.cfg
openssl req \
-nodes \
-new \
-sha256 \
-newkey rsa:2048 \
-keyout ${FILENAME}.key \
-out ${FILENAME}.csr \
-config ${FILENAME}.cfg
</code></pre>
<h3>Create CSR extension</h3>
<p>Unsure why I must duplicate (or separate) this content. If I omit this from the <code>openssl x509 -extfile</code>, the certificate contains no SAN extension.</p>
<pre class="lang-sh prettyprint-override"><code>printf "subjectAltName=DNS:${SERVICE}.${NAMESPACE}.svc,DNS:${SERVICE}.${NAMESPACE}.svc.cluster.local" > ${FILENAME}.ext
</code></pre>
<h3>Create service certificate</h3>
<p>How can I use a single CSR for everything rather than CSR+EXT?</p>
<pre class="lang-sh prettyprint-override"><code>openssl x509 \
-req \
-in ${FILENAME}.csr \
-extfile ${FILENAME}.ext \
-CA ${FILENAME}.ca.crt \
-CAkey ${FILENAME}.ca.key \
-CAcreateserial \
-out ${FILENAME}.crt
</code></pre>
<h3>Create (Webhook) Deployment</h3>
<p>The underlying implementation of the webhook needs the service's crt|key</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create secret tls ${SERVICE} \
--namespace=${NAMESPACE} \
--cert=${FILENAME}.crt \
--key=${FILENAME}.key
cat ./kubernetes/deployment.yaml \
| sed "s|SERVICE|${SERVICE}|g" \
| sed "s|NAMESPACE|${NAMESPACE}|g" \
| kubectl apply --filename=- --namespace=${NAMESPACE}
</code></pre>
<h3>Create Webhook</h3>
<p>Grab the CA certificate</p>
<pre class="lang-sh prettyprint-override"><code>CABUNDLE=$(openssl base64 -A <"${FILENAME}.ca.crt")
cat ./kubernetes/webhook.yaml \
| sed "s|SERVICE|${SERVICE}|g" \
| sed "s|NAMESPACE|${NAMESPACE}|g" \
| sed "s|CABUNDLE|${CABUNDLE}|g" \
| kubectl apply --filename=- --namespace=${NAMESPACE}
</code></pre>
| DazWilkin |
<p>I'm looking into using Skaffold.dev to improve my development experience with Kubernetes.</p>
<p>I've created a default .NET API project and it's autogenerated my docker file:</p>
<pre><code>#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["TestMicro/TestMicro.csproj", "TestMicro/"]
RUN dotnet restore "TestMicro/TestMicro.csproj"
COPY . .
WORKDIR "/src/TestMicro"
RUN dotnet build "TestMicro.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "TestMicro.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "TestMicro.dll"]
</code></pre>
<p>I have created a Kubernetes manifest and all is running ok using <code>kubectl apply</code>.</p>
<p>After installing skaffold, I ran <code>skaffold init</code> and it autogenerated this</p>
<pre><code>apiVersion: skaffold/v2beta8
kind: Config
metadata:
name: microservices-investigation
build:
artifacts:
- image: testmicro
context: src\Microservices\TestMicro
deploy:
kubectl:
manifests:
- k8s/TestMicro.yaml
</code></pre>
<p>However, when I run <code>skaffold run</code> I get the following:</p>
<pre><code>$ skaffold run
Generating tags...
- testmicro -> testmicro:bd61fc5-dirty
Checking cache...
- testmicro: Error checking cache.
failed to build: getting hash for artifact "testmicro": getting dependencies for "testmicro": file pattern [TestMicro/TestMicro.csproj] must match at least one file
</code></pre>
<p>I think this is because, when I run <code>docker build</code> from the CLI, I have to run <code>docker build -f Dockerfile ..</code> <a href="https://learn.microsoft.com/en-us/visualstudio/containers/container-build?view=vs-2019#docker-build" rel="nofollow noreferrer">see why here</a>.</p>
<p>I just can't figure out how to translate this into skaffold's yaml file. Any ideas?!</p>
| penguinflip | <p>In Skaffold, the artifact <code>context</code> (sometimes called the <em>workspace</em>) is the working directory when building the artifact. For Docker-based artifacts (the default artifact type), the artifact context is the root of the Docker build context, and the <code>Dockerfile</code> is expected to be in the root of the artifact's context by default. You can specify an alternative Dockerfile location, but it must live within the artifact context.</p>
<p>Normally <code>skaffold init</code> will create a <code>skaffold.yaml</code> where the artifact context is the directory containing a <code>Dockerfile</code>. So if I understand your situation, I think you normally run your <code>docker build -f Dockerfile ..</code> in <code>src/Microservices/TestMicro</code>. So you should be able to also run your <code>docker build</code> via:</p>
<pre><code>C:> cd ...\src\Microservices
C:> docker build -f TestMicro\Dockerfile .
</code></pre>
<p>So you need to change your artifact definition to the following:</p>
<pre><code>build:
artifacts:
- image: testmicro
context: src/Microservices
docker:
dockerfile: TestMicro/Dockerfile
</code></pre>
| Brian de Alwis |
<p>What is a recommended way to have the <code>gcloud</code> available from <strong>within</strong> a running App Engine web application?</p>
<p><strong>Background:</strong><br />
The <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Client</a> is using <code>subprocess</code> to execute <code>gcloud</code> (taken from <code>cmd-path</code> configured in <code>~/.kube/config</code>) to refresh access tokens. Because the web application is using the <code>kubernetes</code> python library to interact with a cluster, the <code>gcloud</code> command has to be available within the App Engine service. So this is <strong>not</strong> about running <code>gcloud</code> during a cloudbuild or other CI steps, but having access to <code>gcloud</code> inside the App Engine service.</p>
<p><strong>Possible solution:</strong><br />
During Cloud Build it is of course possible to execute the <a href="https://cloud.google.com/sdk/docs/install#linux" rel="nofollow noreferrer">gcloud install instructions for Linux</a> to make the tool available within the directory of the app, but are there better solutions?</p>
<p>Thanks!</p>
| samuirai | <p>IIUC the Python client for Kubernetes requires a Kubernetes config and you're using <code>gcloud container clusters get-credentials</code> to automatically create the config; the Python client for Kubernetes itself does not require <code>gcloud</code>.</p>
<p>I recommend a different approach that uses Google's API Client Library for GKE (Container) to programmatically create a Kubernetes Config that can be consumed by the Kubernetes Python Client from within App Engine. You'll need to ensure that the Service Account being used by your App Engine app has sufficient permissions.</p>
<p>Unfortunately, I've not done this using the Kubernetes Python client but I am doing this using the Kubernetes Golang client.</p>
<p>The approach is to use Google's Container API to get the GKE cluster's details.</p>
<p>APIs Explorer: <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters/get" rel="nofollow noreferrer"><code>clusters.get</code></a></p>
<p>Python API Client Library: <a href="https://googleapis.github.io/google-api-python-client/docs/dyn/container_v1.projects.locations.clusters.html#get" rel="nofollow noreferrer"><code>cluster.get</code></a></p>
<p>From the response (<a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters" rel="nofollow noreferrer">Cluster</a>), you can create everything you need to create a Kubernetes config that's acceptable to the Kubernetes client.</p>
<p>Here's a summary of the Golang code:</p>
<pre><code>ctx := context.Background()
containerService, _ := container.NewService(ctx)
name := fmt.Sprintf(
"projects/%s/locations/%s/clusters/%s",
clusterProject,
clusterLocation,
clusterName,
)
rqst := containerService.Projects.Locations.Clusters.Get(name)
resp, _ := rqst.Do()
cert, _ := base64.StdEncoding.DecodeString(resp.MasterAuth.ClusterCaCertificate)
server := fmt.Sprintf("https://%s", resp.Endpoint)
apiConfig := api.Config{
APIVersion: "v1",
Kind: "Config",
Clusters: map[string]*api.Cluster{
clusterName: {
CertificateAuthorityData: cert,
Server: server,
},
},
Contexts: map[string]*api.Context{
clusterName: {
Cluster: clusterName,
AuthInfo: clusterName,
},
},
AuthInfos: map[string]*api.AuthInfo{
clusterName: {
AuthProvider: &api.AuthProviderConfig{
Name: "gcp",
Config: map[string]string{
"scopes": "https://www.googleapis.com/auth/cloud-platform",
},
},
},
},
}
</code></pre>
| DazWilkin |
<p>We have a lot of big files (~ gigabytes) in our Google bucket. I would like to process these files and generate new ones. To be specific, these are JSON files, from which I want to extract one field and join some files into one.</p>
<p>I could write some scripts running as pods in Kubernetes, which would connect to the bucket and stream the data from there and back. But I find it ugly - is there something made specifically for data processing in buckets?</p>
| Vojtěch | <p>Smells like a Big Data problem.</p>
<p>Use Big Data software like <a href="http://spark.apache.org/" rel="nofollow noreferrer">Apache Spark</a> for processing the huge files. Since the data is already in Google Cloud, I would recommend <a href="https://cloud.google.com/dataproc/" rel="nofollow noreferrer">Google Cloud Dataproc</a>. Also, Big Data on K8S is a work in progress, so I would recommend leaving K8S aside for now and maybe using Big Data on K8S down the line in the future. More on Big Data on K8S (<a href="https://hortonworks.com/blog/bringing-cloud-native-architecture-to-big-data-in-the-data-center/" rel="nofollow noreferrer">here</a> and <a href="https://databricks.com/blog/2018/09/26/whats-new-for-apache-spark-on-kubernetes-in-the-upcoming-apache-spark-2-4-release.html" rel="nofollow noreferrer">here</a>).</p>
<p>With your solution (using K8S and hand-made code), all the fault tolerance has to be handled manually. But in the case of Apache Spark, fault tolerance (a node going down, network failures, etc.) is taken care of automatically.</p>
<p>To conclude, I would recommend forgetting about K8S for now and focusing on Big Data to solve the problem.</p>
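<p>For example, a sketch with Dataproc (the cluster name, region, bucket and job script are placeholders):</p>
<pre class="lang-sh prettyprint-override"><code># Create an ephemeral Dataproc cluster
gcloud dataproc clusters create json-processing --region=us-central1

# Submit a PySpark job that reads the JSON files from the bucket and writes the joined output back
gcloud dataproc jobs submit pyspark gs://my-bucket/extract_field.py \
    --cluster=json-processing --region=us-central1

# Delete the cluster when done
gcloud dataproc clusters delete json-processing --region=us-central1
</code></pre>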
| Praveen Sripati |
<p>Apologies if this is a duplicate; I haven't found a solution in similar questions.
I'm trying to upload a docker image to Google Kubernetes Engine.
I've done it successfully before, but I can't seem to find my luck this time around.</p>
<p>I have Google SDK set up locally with kubectl and my Google Account, which is project owner and has all required permissions.
When I use</p>
<pre><code>kubectl create deployment hello-app --image=gcr.io/{project-id}/hello-app:v1
</code></pre>
<p>I see the deployment on my GKE console, consistently crashing as it "cannot pull the image from the repository.ErrImagePull Cannot pull image '' from the registry".</p>
<p>It provides 4 recommendations, which I have by now triple checked:</p>
<ul>
<li>Check for spelling mistakes in the image names.</li>
<li>Check for errors when pulling the images manually (all fine in Cloud Shell)</li>
<li>Check the image pull secret settings
So, based on this <a href="https://blog.container-solutions.com/using-google-container-registry-with-kubernetes" rel="nofollow noreferrer">https://blog.container-solutions.com/using-google-container-registry-with-kubernetes</a>, I manually added 'gcr-json-key' from a new service account with project view permissions as well as 'gcr-access-token' to kubectl default service account.</li>
<li>Check the firewall for the cluster to make sure the cluster can connect to the ''. Afaik, this should not be an issue with a newly set up cluster.</li>
</ul>
<p>The pods themselve provide the following error code:</p>
<pre><code>Failed to pull image "gcr.io/{project id}/hello-app:v1":
[rpc error: code = Unknown desc = Error response from daemon:
Get https://gcr.io/v2/{project id}/hello-app/manifests/v1: unknown: Unable to parse json key.,
rpc error: code = Unknown desc = Error response from daemon:
Get https://gcr.io/v2/{project id}/hello-app/manifests/v1:
unauthorized: Not Authorized., rpc error: code = Unknown desc = Error response from daemon:
pull access denied for gcr.io/{project id}/hello-app,
repository does not exist or may require 'docker login': denied:
Permission denied for "v1" from request "/v2/{project id}/hello-app/manifests/v1".]
</code></pre>
<p>My question now, what am I doing wrong or how can I find out why my pods can't pull my image?</p>
<hr />
<p>Kubernetes default serviceaccount spec:</p>
<pre><code>kubectl get serviceaccount -o json
{
"apiVersion": "v1",
"imagePullSecrets": [
{
"name": "gcr-json-key"
},
{
"name": "gcr-access-token"
}
],
"kind": "ServiceAccount",
"metadata": {
"creationTimestamp": "2020-11-25T15:49:16Z",
"name": "default",
"namespace": "default",
"resourceVersion": "6835",
"selfLink": "/api/v1/namespaces/default/serviceaccounts/default",
"uid": "436bf59a-dc6e-49ec-aab6-0dac253e2ced"
},
"secrets": [
{
"name": "default-token-5v5fb"
}
]
}
</code></pre>
| Niels Uitterdijk | <p>It does take several steps and the blog post you referenced appears to cover them correctly. So I suspect your error is in one of the steps.</p>
<p>Couple of things:</p>
<ul>
<li><p>The error message says <code>Failed to pull image "gcr.io/{project id}/hello-app:v1"</code>. Did you edit the error message to remove your <code>{project id}</code>? If not, that's one problem.</p>
</li>
<li><p>My next concern is the second line: <code>Unable to parse json key</code>. This suggests that you created the secret incorrectly:</p>
</li>
</ul>
<ol>
<li>Create the service account and generate a key</li>
<li>Create the Secret <strong>exactly</strong> as shown: <code>kubectl create secret docker-registry gcr-json-key...</code> (in the <code>default</code> namespace unless <code>--namespace=...</code> differs)</li>
<li>Update the Kubernetes spec with <code>ImagePullSecrets</code></li>
</ol>
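<p>For reference, step 2 above typically looks like this (the key file path and email are placeholders):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create secret docker-registry gcr-json-key \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=any@example.com
</code></pre>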
<p>Because of the <code>ImagePullSecrets</code> requirement, I'm not aware of an alternative <code>kubectl run</code> equivalent, but you can try accessing your image using Docker from your host:</p>
<p>See: <a href="https://cloud.google.com/container-registry/docs/advanced-authentication#json-key" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/advanced-authentication#json-key</a></p>
<p>And then try <code>docker pull gcr.io/{project id}/hello-app:v1</code> ensuring that <code>{project id}</code> is replaced with the correct GCP Project ID.</p>
<p>This proves:</p>
<ul>
<li>The Service Account & Key are correct</li>
<li>The Container Image is correct</li>
</ul>
<p>That leaves, your creation of the Secret and your Kubernetes spec to test.</p>
<blockquote>
<p><strong>NOTE</strong> The Service Account IAM permission of <code>Project Viewer</code> is overly broad for GCR access, see the <a href="https://cloud.google.com/container-registry/docs/access-control#permissions_and_roles" rel="nofollow noreferrer">permissions</a></p>
<p>Use <code>StorageObject Viewer</code> (<code>roles/storage.objectViewer</code>) if the Service Account needs only to pull images.</p>
</blockquote>
| DazWilkin |
<p>I wanted to know whether it is possible to have a job in Kubernetes that will run every hour and delete certain pods. I need this as a temporary stopgap to fix an issue.</p>
| user1555190 | <p>Use a CronJob (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">1</a>, <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#cronjob-v1beta1-batch" rel="nofollow noreferrer">2</a>) to run the Job every hour.</p>
<p>The K8S API can be accessed from a Pod (<a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">3</a>) with proper permissions. When a Pod is created, the <code>default ServiceAccount</code> is assigned to it (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="nofollow noreferrer">4</a>). The <code>default ServiceAccount</code> has no RoleBinding, and hence neither the <code>default ServiceAccount</code> nor the Pod has permissions to invoke the API.</p>
<p>If a role (with permissions) is created and mapped to the <code>default ServiceAccount</code>, then all the Pods by default will get those permissions. So, it's better to create a new ServiceAccount instead of modifying the <code>default ServiceAccount</code>.</p>
<p>So, here are steps for RBAC (<a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">5</a>)</p>
<ul>
<li>Create a ServiceAccount</li>
<li>Create a Role with proper permissions (deleting pods)</li>
<li>Map the ServiceAccount with the Role using RoleBinding</li>
<li>Use the above ServiceAccount in the Pod definition</li>
<li>Create a pod/container with the code/commands to delete the pods</li>
</ul>
<p>I know it's a bit confusing, but that's the way K8S works.</p>
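<p>Putting it together, a minimal sketch (all names, the namespace, the label selector and the schedule are placeholders):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-cleaner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-cleaner
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-cleaner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-cleaner
subjects:
- kind: ServiceAccount
  name: pod-cleaner
  namespace: default
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pod-cleaner
spec:
  schedule: "0 * * * *"        # every hour
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl   # any image containing kubectl
            command: ["kubectl", "delete", "pods", "--selector=app=my-app"]
</code></pre>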
| Praveen Sripati |
<p>So, I have a specific chain of kubernetes and bash commands that I have to execute on gcloud, I'd like to know if there is any way to execute those commands automatically via scripting, without the need of actually having to open and interact with gcloud CLI. Maybe using a npm package but i don't know if there is a package for this usage.</p>
<p>I already have gcloud cli installed locally, should exec and/or spawn commands work? I've tried but ultimately failed.</p>
<p>TL;DR: I just want to know how to automate gcloud commands using code! Preferably on node.js, but i can learn it on another language too.</p>
| Isaac de Souza | <p>Please provide a little more detail as to what it is you're trying to automate as this will help guide recommendations.</p>
<p>There are several challenges automating subprocesses (e.g. invoking bash and it running <code>gcloud</code>) from within another program. High among these is that shell-scripting doesn't provide very strong parameter passing (and no typing beyond strings) and error handling is difficult.</p>
<p>For NodeJS you can use <a href="https://nodejs.org/api/child_process.html" rel="nofollow noreferrer"><code>child_process</code></a>. You'll need to ensure that the child process environment is able to access the <code>gcloud</code> binary and its configuration and that it can authenticate too. Since you're running a program to interact with <code>gcloud</code>, I recommend you use a service account to authenticate if you choose this approach and then you may need (if you run off-GCP) to provide a service account key to your code too.</p>
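<p>A minimal sketch with <code>child_process</code> (assuming <code>gcloud</code> is installed and already authenticated in the environment the Node process runs in):</p>
<pre class="lang-js prettyprint-override"><code>const { execFile } = require('child_process');

// Run a gcloud command and parse its JSON output
execFile('gcloud', ['container', 'clusters', 'list', '--format=json'], (error, stdout, stderr) => {
  if (error) {
    console.error('gcloud failed:', stderr);
    return;
  }
  const clusters = JSON.parse(stdout);
  console.log(clusters.map(c => c.name));
});
</code></pre>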
<p>There are alternatives although these require more work, they provide much more robust solutions:</p>
<ol>
<li>Use a definitive "infrastructure as code" tools (e.g. <a href="https://terraform.io" rel="nofollow noreferrer">terraform</a>). Google Cloud provides <a href="https://cloud.google.com/blog/products/devops-sre/google-cloud-templates-for-terraform-and-deployment-manager-now-available" rel="nofollow noreferrer">templates</a> for Terraform</li>
<li>Use Google Cloud SDKs (there are 2 flavors <a href="https://cloud.google.com/apis/docs/client-libraries-explained" rel="nofollow noreferrer">explained</a>) and these cover everything except Kubernetes itself (you get management of GKE clusters) and a NodeJS Kubernetes <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">client library</a></li>
</ol>
<p>If you add more details to your question, I can add more details to this answer.</p>
| DazWilkin |
<p>For my project I need a config file to connect my program to a database. Instead of writing the config data(for example username and password) hard coded in a document I want to save it as secure environment variables in kubernetes.</p>
<p>I programmed my application with python. The python application runs successfully in a docker container. I then created the secret variables via kubernetes. I wrote a .yaml file and specified the required container and image data.
That's the Kubernetes tutorial: <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data" rel="nofollow noreferrer">Link to Kubernetes tutorial</a></p>
<p>Now I want to access the secret environment variables I created in kubernetes. But how?</p>
<p>How can I read a secret environment variable with python? Do I have to initialize the environment variables in the docker file?</p>
<p>PS: I already tried things like this:</p>
<pre><code>import os
print(os.environ['USERNAME'])
</code></pre>
| William Sharlaag | <p>The following works and should prove this to you:</p>
<h3>1. Base64 encode e.g. username|password:</h3>
<pre class="lang-sh prettyprint-override"><code>U=$(echo -n "freddie" | base64)
P=$(echo -n "fridays" | base64)
</code></pre>
<p><strong>NB</strong> The host's environment variables are <code>U</code> and <code>P</code></p>
<p>Assuming</p>
<pre class="lang-sh prettyprint-override"><code>POD="p" # Or...
SECRET="s" # Or...
NAMESPACE="default" # Or...
</code></pre>
<h3>2. Create the secret:</h3>
<pre class="lang-sh prettyprint-override"><code>echo "
apiVersion: v1
kind: Secret
metadata:
name: ${SECRET}
data:
u: ${U}
p: ${P}
" | kubectl apply --filename=- --namespace=${NAMESPACE}
</code></pre>
<p><strong>NB</strong> The Secret's data contains values <code>u</code> and <code>p</code> (corresponding to <code>U</code> and <code>P</code>)</p>
<p>yields:</p>
<pre><code>secret/x created
</code></pre>
<p>And:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl describe secret/${SECRET} --namespace=${NAMESPACE}
</code></pre>
<p>yields:</p>
<pre><code>Name: ${SECRET}
Namespace: ${NAMESPACE}
Labels: <none>
Annotations:
Type: Opaque
Data
====
p: 7 bytes
u: 7 bytes
</code></pre>
<h3>3. Attach the secret's values to a Pod:</h3>
<pre class="lang-sh prettyprint-override"><code>echo "
apiVersion: v1
kind: Pod
metadata:
name: ${POD}
spec:
containers:
- name: test
image: python
command:
- python3
- -c
- \"import os;print(os.environ['USERNAME']);print(os.environ['PASSWORD'])\"
env:
- name: USERNAME
valueFrom:
secretKeyRef:
name: ${SECRET}
key: u
- name: PASSWORD
valueFrom:
secretKeyRef:
name: ${SECRET}
key: p
" | kubectl apply --filename=- --namespace=${NAMESPACE}
</code></pre>
<p><strong>NB</strong> The Pod's environment maps the Secret's <code>u</code>--><code>USERNAME</code>, <code>p</code>--><code>PASSWORD</code></p>
<p><strong>NB</strong> These variable name changes are to demonstrate the point; they may be the same</p>
<h3>4. Review the Pod's logs</h3>
<pre class="lang-sh prettyprint-override"><code>kubectl logs pod/${POD} --namespace=${NAMESPACE}
</code></pre>
<p>yields (in my case):</p>
<pre><code>freddie
fridays
</code></pre>
| DazWilkin |
<p>I want to execute a script before I run my container.</p>
<p>If I execute the script in the container like this:</p>
<pre><code> containers:
- name: myservice
image: myservice.azurecr.io/myservice:1.0.6019912
volumeMounts:
- name: secrets-store-inline
mountPath: "/mnt/secrets-store"
readOnly: true
command:
- '/bin/bash'
- '-c'
- 'ls /mnt/secrets-store;'
</code></pre>
<p>then that command replaces my entrypoint and the pod exits. How can I execute the command but then start the container after that?</p>
| Vladimir Bodurov | <p>A common way to do this is to use <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Containers</a> but I'm unsure what you're trying to run before you run the <code>ENTRYPOINT</code>.</p>
<p>You can apply the same volume mounts in the init container(s), if the init work requires changing state of the mounted file system content.</p>
<p>Another solution may be to run the <code>ENTRYPOINT</code>'s command as the last statement in the script.</p>
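<p>For that last option, a sketch (where <code>./myservice</code> stands in for whatever the image's original <code>ENTRYPOINT</code> runs):</p>
<pre><code>command:
- '/bin/bash'
- '-c'
- 'ls /mnt/secrets-store && exec ./myservice'
</code></pre>
<p>Using <code>exec</code> means the service replaces the shell as PID 1, so it still receives termination signals from Kubernetes.</p>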
| DazWilkin |
<p>I am a scientist who is exploring the use of Dask on Amazon Web Services. I have some experience with Dask, but none with AWS. I have a few large custom task graphs to execute, and a few colleagues who may want to do the same if I can show them how. I believe that I should be using <a href="https://dask.pydata.org/en/latest/setup/kubernetes-helm.html" rel="nofollow noreferrer">Kubernetes with Helm</a> because I fall into the <a href="https://dask.pydata.org/en/latest/setup/kubernetes.html" rel="nofollow noreferrer">"Try out Dask for the first time on a cloud-based system like Amazon, Google, or Microsoft Azure"</a> category.</p>
<ol>
<li>I also fall into the "Dynamically create a personal and ephemeral deployment for interactive use" category. Should I be trying native Dask-Kubernetes instead of Helm? It seems simpler, but it's hard to judge the trade-offs.</li>
<li>In either case, how do you provide Dask workers a uniform environment that includes your own Python packages (not on any package index)? <a href="https://dask.pydata.org/en/latest/setup/docker.html" rel="nofollow noreferrer">The solution I've found</a> suggests that packages need to be on a <code>pip</code> or <code>conda</code> index.</li>
</ol>
<p>Thanks for any help!</p>
| jkmacc | <h3>Use Helm or Dask-Kubernetes ?</h3>
<p>You can use either. Generally starting with Helm is simpler.</p>
<h3>How to include custom packages</h3>
<p>You can install custom software using pip or conda. They don't need to be on PyPI or the anaconda default channel. You can point pip or conda to other channels. Here is an example installing software using pip from github</p>
<pre><code>pip install git+https://github.com/username/repository@branch
</code></pre>
<p>For small custom files you can also use the <a href="http://dask.pydata.org/en/latest/futures.html#distributed.Client.upload_file" rel="nofollow noreferrer">Client.upload_file</a> method.</p>
| MRocklin |
<p>I've been experimenting with skaffold with a local minikube installation. It's nice to be able to develop your project on something that is as close as possible to production.</p>
<p>If I use the <a href="https://github.com/GoogleContainerTools/skaffold/tree/master/examples/getting-started" rel="noreferrer">getting-started example</a> provided on skaffold github repo, everything works just fine, my IDE (intellij idea) stops on the breakpoints and when I modify my code, the changes are reflected instantly.</p>
<p>Now, on my personal project, which is a bit more complicated than a simple main.go file, things don't work as expected. The IDE stops on the breakpoint, but hot code reloads are not happening: even though I see in the console that skaffold detected the changes made to that particular file, unfortunately the changes are not reflected/applied.</p>
<p>A docker file is used to build an image, the docker file is the following</p>
<pre><code>FROM golang:1.14 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app.o ./cmd/shortener/shortener.go
FROM alpine:3.12
COPY --from=builder /app.o ./
COPY --from=builder /app ./
EXPOSE 3000
ENV GOTRACEBACK=all
CMD ["./app.o"]
</code></pre>
<p>On kubernetes side, I'm creating a deployment and a service as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: url-shortener-deployment
spec:
selector:
matchLabels:
app: url-shortener
template:
metadata:
labels:
app: url-shortener
spec:
containers:
- name: url-shortener
image: url_shortener
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: url-shortener-service
spec:
selector:
app: url-shortener
ports:
- port: 3000
nodePort: 30000
type: NodePort
</code></pre>
<p>As for skaffold, here's the skaffold.yaml file:</p>
<pre><code>apiVersion: skaffold/v2beta5
kind: Config
metadata:
name: url-shortener
build:
artifacts:
- image: url_shortener
context: shortener
docker:
dockerfile: build/docker/Dockerfile.dev
noCache: false
deploy:
kubectl:
manifests:
- stack/mongo/mongo.yaml
- shortener/deployments/kubernetes/shortener.yaml
</code></pre>
<p>I've enabled verbose logging and I notice this in the output whenever I save (CTRL+S) a source code file.</p>
<pre><code>time="2020-07-05T22:51:08+02:00" level=debug msg="Found dependencies for dockerfile: [{go.mod /app true} {go.sum /app true} {. /app true}]"
time="2020-07-05T22:51:08+02:00" level=info msg="files modified: [shortener/internal/handler/rest/rest.go]"
</code></pre>
<p>I'm assuming that this means that the change has been detected.</p>
<p>Breakpoints work correctly in the IDE, but code swaps in Kubernetes don't seem to be happening.</p>
| Fouad | <p>The <em>debug</em> functionality deliberately disables Skaffold's <em>file-watching</em>, which rebuilds and redeploys containers on file change. The redeploy causes existing containers to be terminated, which tears down any ongoing debug sessions. It's really disorienting and aggravating to have your carefully-constructed debug session be torn down because you accidentally saved a change to a comment!</p>
<p>But we're looking at how to better support this more <em>iterative debugging</em> within Cloud Code.</p>
<p>If you're using Skaffold directly, we recently added the ability to re-enable file-watching via <code>skaffold debug --auto-build --auto-deploy</code> (present in v1.12).</p>
| Brian de Alwis |
<p>I have a Google Kubernetes Engine cluster running and I'm trying to deploy a mongo container. Everything works fine except when I try to use the argument "--wiredTigerCacheSizeGB"; the deployment then fails because the option is not recognized. I'm using the latest Mongo version (5.0) and I see nothing in the documentation saying that this should not work.</p>
<p>Here is the yml configuration of the POD creation:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
environment: test
role: mongo
serviceName: mongo
template:
metadata:
creationTimestamp: null
labels:
environment: test
role: mongo
namespace: default
spec:
containers:
- command:
- mongod
- --wiredTigerCacheSizeGB 2
image: mongo:5.0
imagePullPolicy: Always
name: mongo
ports:
- containerPort: 27017
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /data/db
name: mongo-persistent-storage
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 10
tolerations:
- effect: NoSchedule
key: dedicated
operator: Equal
value: backend
updateStrategy:
type: OnDelete
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
volume.beta.kubernetes.io/storage-class: standard
creationTimestamp: null
name: mongo-persistent-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
volumeMode: Filesystem
</code></pre>
| Hugo Sartori | <p>Does it work if you remove the <code>--wiredTigerCacheSizeGB</code> flag?</p>
<p><strike>I would be surprised.</strike></p>
<blockquote>
<p>It does appear to work (see below) but I can't explain why. I am surprised!</p>
</blockquote>
<p>If this is the correct <a href="https://github.com/docker-library/mongo/blob/master/5.0/Dockerfile" rel="nofollow noreferrer">Dockerfile</a> for the image, then it uses a Docker <code>CMD</code> to run <code>mongod</code>.</p>
<p>If so, you'd need to run the image on Kubernetes using <code>args</code> not <code>command</code> in order to correctly override the container image's <code>CMD</code> and <strong>not</strong> override the container image's <code>ENTRYPOINT</code>, i.e.</p>
<pre><code>containers:
- name: mongo
args:
- mongod
- --wiredTigerCacheSizeGB=2
</code></pre>
<blockquote>
<p><strong>NOTE</strong> The inclusion of <code>=</code> between the flag and value to avoid introducing YAML parsing issues.</p>
</blockquote>
<p>I tested this hypothesis using <code>podman</code>; you can replace <code>podman</code> with <code>docker</code> in what follows if you use <code>docker</code>:</p>
<pre class="lang-sh prettyprint-override"><code># Does not work: Override `ENTRYPOINT` with mongod+flag
# This is effectively what you're doing
podman run \
--interactive --tty --rm \
--entrypoint="mongod --wiredTigerCacheSizeGB=2" \
docker.io/mongo:5.0 \
Error: executable file `mongod --wiredTigerCacheSizeGB=2` not found in $PATH:
No such file or directory:
OCI runtime attempted to invoke a command that was not found
# Works: Override `CMD`
# This is what I thought should work
podman run \
--interactive --tty --rm \
docker.io/mongo:5.0 \
mongod \
--wiredTigerCacheSizeGB=2
# Works: Override `ENTRYPOINT` w/ mongod
# This is what I thought wouldn't work
podman run \
--interactive --tty --rm \
--entrypoint=mongod \
docker.io/mongo:5.0 \
--wiredTigerCacheSizeGB=2
</code></pre>
| DazWilkin |
<p>I have a Kubernetes cluster set up using Kubernetes Engine on GCP. I have also installed Dask using the Helm package manager. My data are stored in a Google Storage bucket on GCP.</p>
<p>Running <code>kubectl get services</code> on my local machine yields the following output</p>
<p><a href="https://i.stack.imgur.com/Uix0v.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uix0v.png" alt="enter image description here"></a></p>
<p>I can open the dashboard and jupyter notebook using the external IP without any problems. However, I'd like to develop a workflow where I write code in my local machine and submit the script to the remote cluster and run it there. </p>
<p><strong>How can I do this?</strong></p>
<p>I tried following the instructions in <a href="http://distributed.dask.org/en/latest/submitting-applications.html" rel="nofollow noreferrer">Submitting Applications</a> using <code>dask-remote</code>. I also tried exposing the scheduler using <code>kubectl expose deployment</code> with type LoadBalancer, though I do not know if I did this correctly. Suggestions are greatly appreciated.</p>
| PollPenn | <p>Yes, if your client and workers share the same software environment then you should be able to connect a client to a remote scheduler using the publicly visible IP.</p>
<pre><code>from dask.distributed import Client
client = Client('REDACTED_EXTERNAL_SCHEDULER_IP')
</code></pre>
| MRocklin |
<p>I develop in local k8s cluster with <code>minikube</code> and <code>skaffold</code>. Using Django and DRF for the API.</p>
<p>I'm working on a number of <code>models.py</code> files, and one thing that is starting to get annoying is that any time I run a <code>./manage.py</code> command (like <code>showmigrations</code>, <code>makemigrations</code>, etc.) it triggers a <code>skaffold</code> rebuild of the API nodes. It takes less than 10 seconds, but it's getting annoying nonetheless.</p>
<p>What should I exclude/include specifically from my <code>skaffold.yaml</code> to prevent this?</p>
<pre><code>apiVersion: skaffold/v2beta12
kind: Config
build:
artifacts:
- image: postgres
context: postgres
sync:
manual:
- src: "**/*.sql"
dest: .
docker:
dockerfile: Dockerfile.dev
- image: api
context: api
sync:
manual:
- src: "**/*.py"
dest: .
docker:
dockerfile: Dockerfile.dev
local:
push: false
deploy:
kubectl:
manifests:
- k8s/ingress/development.yaml
- k8s/postgres/development.yaml
- k8s/api/development.yaml
defaultNamespace: development
</code></pre>
| cjones | <p>It seems that <code>./manage.py</code> must be recording some state locally, and thus triggering a rebuild. You need to add these state files to your <code>.dockerignore</code>.</p>
<p>Skaffold normally logs at a <em>warning</em> level, which suppresses details of what triggers sync or rebuilds. Run Skaffold with <code>-v info</code> and you'll see more detail:</p>
<pre><code>$ skaffold dev -v info
...
[node] Example app listening on port 3000!
INFO[0336] files added: [backend/src/foo]
INFO[0336] Changed file src/foo does not match any sync pattern. Skipping sync
Generating tags...
- node-example -> node-example:v1.20.0-8-gc9335b0ad-dirty
INFO[0336] Tags generated in 80.293621ms
Checking cache...
- node-example: Not found. Building
INFO[0336] Cache check completed in 1.844615ms
Found [minikube] context, using local docker daemon.
Building [node-example]...
</code></pre>
| Brian de Alwis |
<p>I'm trying to improve my knowledge of GCP-GKE as a newbie and, along the way, I found a little concept that I don't quite understand yet. In GKE, there is a <strong>Service Account</strong> called <code>service-PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com</code> (where <code>PROJECT_NUM</code> is the ID of our project) and after several hours of googling, I couldn't find any article or definition about this stuff. So could you guys please explain to me:</p>
<ul>
<li>What is this <strong>Service Account</strong>? How was it created (and by whom)?</li>
<li>What is this thing for? How important is it in GKE?</li>
<li>What happens if we delete it? Could we re-create it manually?</li>
</ul>
<p>In fact, I found out that in GCP, we have some <strong>Service Accounts</strong> that have a "robot" suffix: <code>...robot.iam.gserviceaccount.com/</code> (like <code>@gcf-admin-robot.iam.gserviceaccount.com/</code>, <code>@serverless-robot-prod.iam.gserviceaccount.com</code>, etc). What could we say about this, please?</p>
<p>If I misunderstand something, please, point it out for me, I really appreciate that.</p>
<p>Thank you guys !!!</p>
| nxh6991 | <p><a href="https://cloud.google.com/iam/docs/service-accounts" rel="nofollow noreferrer">Service Accounts</a> aka "robots" contrast with user ("human") accounts and represent two forms of Google identity.</p>
<blockquote>
<p><strong>NOTE</strong> Robots was the original name for Service Accounts and is a more colorful description of the intent of these accounts, to run software.</p>
</blockquote>
<p>(Google) User accounts include consumer (Gmail) e.g. [email protected] and [email protected] (Workspace) accounts. User accounts are used by humans to interact with Google services and must be used (or a suitable delegate) to acccess user-owned content such as Workspace docs, sheets etc.</p>
<p>Software ("robots") generally should run as a Service Account <strong>not</strong> as a User account. In part, you can't easily run software using User accounts because the User OAuth flow is 3-legged and requires interacting with an OAuth Consent Screen to permit an app access to data.</p>
<p>There are two flavors of Service Account: Google-created|managed and User-created|managed. The difference is essentially the owner. If you create applications, generally you should create a Service Account for each app and run the app using its Service Account.</p>
<p>User-managed Service Accounts take the form <code>{something}@{project}.iam.gserviceaccount.com</code> where you get to define the value of <code>{something}</code> and the Google Project in which the Service Account is created (the project that owns the Service Account) is represented by <code>{project}</code> (actually the Project ID).</p>
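<p>For example, a sketch (the account ID and project are placeholders):</p>
<pre class="lang-sh prettyprint-override"><code>gcloud iam service-accounts create my-app \
  --project=my-project \
  --display-name="My App"

# Resulting identity: my-app@my-project.iam.gserviceaccount.com
</code></pre>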
<p>When Google provides app functionality, it also creates Service Accounts and often, Google "binds" these Service Accounts to your projects that use them in addition to defining the role that the Service Account has in your project.</p>
<p>Google-managed Service Accounts take the form <code>{something}@{label}.iam.gserviceaccount.com</code>. Unlike User-managed Service Accounts, Google uses more descriptive labels (<code>{label}</code>) to help explain the role of the Service Account.</p>
<blockquote>
<p><strong>NOTE</strong> With Google-managed Service Accounts <code>{something}</code> often includes the Project Number (not ID) of (your!) project for which the Google-managed account has been created.</p>
</blockquote>
<p>You <strong>cannot</strong> delete Google-managed Service Accounts because you(r Google account) does not own the Service Account.</p>
<p>You <strong>can</strong> (but <strong>should not</strong>) delete the role binding between one of your projects and a Google-managed Service Account. It may be possible for you to revert (recreate) the binding but you may not have permission to do this.</p>
| DazWilkin |
<p>I'd like to set up a dask cluster with a number of different types of workers e.g. normal workers, high-memory workers, GPU workers, ...</p>
<p>As I understand it I can manually create the workers and tag them with <a href="https://distributed.dask.org/en/latest/resources.html#worker-resources" rel="nofollow noreferrer"><code>resources</code></a>. What I'd like to do is create a cluster, specifying the min/max number of each type of worker and have it autoscale the exact number of each type of worker based on the number of tasks requesting each type of worker/resource.</p>
<p>Is this possible now or is this something which is on the roadmap (issue I can subscribe to)? </p>
| Dave Hirschfeld | <blockquote>
<p>Is this possible now </p>
</blockquote>
<p>As of 2020-02-15 this is not a supported behavior</p>
<blockquote>
<p>or is this something which is on the roadmap (issue I can subscribe to)?</p>
</blockquote>
<p>Dask isn't centrally managed, and so doesn't have a roadmap. I don't know of any issue about this today, but I wouldn't be surprised if one already existed.</p>
| MRocklin |
<p>I am new to skaffold, k8s, docker set and I've been having trouble building my application on a cluster locally.</p>
<p>I have a code repository that is trying to pull a private NPM package but when building it loses the .npmrc file or the npm secret.</p>
<pre><code>npm ERR! code E404
npm ERR! 404 Not Found - GET https://registry.npmjs.org/@sh1ba%2fcommon - Not found
npm ERR! 404
npm ERR! 404 '@sh1ba/common@^1.0.3' is not in the npm registry.
npm ERR! 404 You should bug the author to publish it (or use the name yourself!)
npm ERR! 404
npm ERR! 404 Note that you can also install from a
npm ERR! 404 tarball, folder, http url, or git url.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-06-02T06_08_57_246Z-debug.log
unable to stream build output: The command '/bin/sh -c npm install' returned a non-zero code: 1. Please fix the Dockerfile and try again..
</code></pre>
<p>Ideally I'd like to avoid hard coding the secret into the file and use a k8s environment variable to pass in the key to docker as a secret. I am able to (kind of) do it with the docker build command:</p>
<ul>
<li>with "--build-args" with npm secret (the not safe way)</li>
<li>with "--secret" with npm secret (the better way)</li>
<li>copying the .npmrc file directly, <code>npm install</code>ing and deleting it right after</li>
</ul>
<p>The issue arises when I try to build it using kubernetes/skaffold. After running, it doesn't seem like any of the args, env variables, or even the .npmrc file is found. When checking in the dockerfile for clues I was able to identify that nothing was being passed over from the manifest (args defined, .npmrc file, etc) to the dockerfile.</p>
<p>Below is the manifest for the application:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: auth
env:
- name: NPM_SECRET
valueFrom:
secretKeyRef:
name: npm-secret
key: NPM_SECRET
args: ["--no-cache", "--progress=plain", "--secret", "id=npmrc,src=.npmrc"]
</code></pre>
<p>Here's the code in the dockerfile:</p>
<pre><code># syntax=docker/dockerfile:1.2
# --------------> The build image
FROM node:alpine AS build
WORKDIR /app
COPY package*.json .
RUN --mount=type=secret,mode=0644,id=npmrc,target=/app/.npmrc \
npm install
# --------------> The production image
FROM node:alpine
WORKDIR /app
COPY package.json .
COPY tsconfig.json .
COPY src .
COPY prisma .
COPY --chown=node:node --from=build /app/node_modules /app/node_modules
COPY --chown=node:node . /app
RUN npm run build
CMD ["npm", "start"]
</code></pre>
<p>And also the skaffold file:</p>
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
- ./infra/k8s-dev/*
build:
local:
push: false
artifacts:
- image: auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
</code></pre>
<p>A few notes:</p>
<ul>
<li>I can't locate the .npmrc file regardless of where I copy and paste it (in auth, in the manifest, in skaffold and in the ~/ directories)</li>
<li>I would like to also make it semi-usable (pretty reusable) in production too so that I don't need to do a complete overhaul if possible (but if this is bad practice I'd like to hear more about it)</li>
<li>I've been able to make it work with buildArgs in the skaffold.yaml file but I'm not sure how that would translate into a production environment as I can't pass build args from kubernetes to docker (and I read that it isn't safe and that secrets should be used)</li>
<li>The args in the manifest are throwing this error too (server runs if the args are commented out):</li>
</ul>
<pre><code> - deployment/auth-depl: container auth terminated with exit code 9
- pod/auth-depl-85fb8975d8-4rh9r: container auth terminated with exit code 9
> [auth-depl-85fb8975d8-4rh9r auth] node: bad option: --progress=plain
> [auth-depl-85fb8975d8-4rh9r auth] node: bad option: --secret
- deployment/auth-depl failed. Error: container auth terminated with exit code 9.
</code></pre>
<p>Any insight would be amazing, I've been fiddling with this for far too long now.</p>
<p>Thank you!</p>
| Gerry Saporito | <p>Building and deploying an image to Kubernetes is at three levels:</p>
<ol>
<li>Your local system where you initiate the building of an image</li>
<li>The Docker build that populates the image and then stores that image somewhere</li>
<li>The Kubernetes cluster that loads and starts running that image</li>
</ol>
<p>Docker is not involved in #3. <em>(This is only partially true, since some clusters use Docker to run the containers too, but that's a hidden detail and is also changing.)</em></p>
<p>There are two places where you might communicate secrets:</p>
<ul>
<li>at image build time (steps #1 to #2): you can use Docker <code>--build-args</code> or mounting secrets with <code>--secret</code> (both require Buildkit)</li>
<li>at deployment time (step #3): you use Kubernetes secrets or config maps, which are configured separately from the image build</li>
</ul>
<p>Skaffold supports passing build-time secrets, like your npm password, with Docker's <code>--build-args</code> and <code>--secret</code> flags, though they are slightly renamed.</p>
<p><a href="https://skaffold.dev/docs/references/yaml/#build-artifacts-docker-buildArgs" rel="noreferrer"><code>buildArgs</code></a> supports Go-style templating, so you can reference environment variables like <code>MYSECRET</code> as <code>{{.MYSECRET}}</code>:</p>
<pre><code>build:
local:
useBuildkit: true
artifacts:
- image: auth
context: auth
docker:
buildArgs:
MYSECRET: "{{.MYSECRET}}"
</code></pre>
<p>Then you can reference <code>MYSECRET</code> within your <code>Dockerfile</code>:</p>
<pre><code>ARG MYSECRET
RUN echo MYSECRET=${MYSECRET}
</code></pre>
<p>Note that build-args are not propagated into your container unless you explicitly assign it via an <code>ENV MYSECRET=${MYSECRET}</code>.</p>
<p>If the secret is in a local file, you can use the <a href="https://skaffold.dev/docs/references/yaml/#build-artifacts-docker-secret" rel="noreferrer"><code>secret</code></a> field in the <code>skaffold.yaml</code>:</p>
<pre><code>build:
local:
useBuildkit: true
artifacts:
- image: auth
context: auth
docker:
secret:
id: npmrc
src: /path/to/.npmrc
</code></pre>
<p>and you'd then reference the secret as you are in your <code>Dockerfile</code>:</p>
<pre><code>RUN --mount=type=secret,mode=0644,id=npmrc,target=/app/.npmrc \
npm install
</code></pre>
<hr />
<p>Now in your <code>Deployment</code>, you're attempting to setting <code>args</code> for your container:</p>
<pre><code> args: ["--no-cache", "--progress=plain", "--secret", "id=npmrc,src=.npmrc"]
</code></pre>
<p>The <code>args</code> field overrides the <code>CMD</code> directive set in your image. This field is used to provide command-line arguments provided to your image's entrypoint, which is likely <code>node</code>. If you want to reference a secret in a running container on a cluster, you'd use a <code>Secret</code> or <code>ConfigMap</code>.</p>
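<p>For completeness, a minimal sketch (the secret name and mount path here are assumptions, not from your setup) of exposing a runtime secret as a file instead of an environment variable:</p>
<pre><code># fragment of a Pod/Deployment spec; "npm-secret" and the mount path are illustrative
volumes:
- name: npmrc
  secret:
    secretName: npm-secret
containers:
- name: auth
  volumeMounts:
  - name: npmrc
    mountPath: /etc/app/npmrc
    readOnly: true
</code></pre>
<p>Again, this only matters for secrets the running application needs; in your case <code>npm install</code> happens at build time, so the build-time mechanisms above are the relevant ones.</p>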
| Brian de Alwis |
<p>I am trying GCP and GKE (Google Kubernetes Engine).
1) I created a cluster.
2) I opened Cloud Shell and used the command "kubectl get nodes".</p>
<p>I get this error:
"The connection to the server localhost:8080 was refused - did you specify the right host or port?"</p>
<p>How can I solve this? Thanks.</p>
| gokhandincel | <p>You must have a local Kubernetes config file that is used by <code>kubectl</code> to access cluster(s).</p>
<p>Conventionally, the config file is called <code>config</code> (YAML) and is found in <code>${HOME}/.kube/config</code>.</p>
<p>Google provides a way to generate a config file (and context) for you. To do this run <code>gcloud container clusters get-credentials ...</code>. You'll need to fill in the blanks of the cluster name and probably the project, zone/region etc.:</p>
<pre class="lang-sh prettyprint-override"><code>gcloud container clusters get-credentials ${CLUSTER_NAME} \
--project=${PROJECT} \
--region=${REGION}
</code></pre>
<p>After running this command, you should be able to <code>more ${HOME}/.kube/config</code> and you should be able to access the cluster using e.g. <code>kubectl get nodes</code>.</p>
| DazWilkin |
<p>My code looks something like this</p>
<pre class="lang-py prettyprint-override"><code>def myfunc(param):
# expensive stuff that takes 2-3h
mylist = [...]
client = Client(...)
mgr = DeploymentMgr()
# ... setup stateful set ...
futures = client.map(myfunc, mylist, ..., resources={mgr.hash.upper(): 1})
client.gather(futures)
</code></pre>
<p>I have dask running on a Kubernetes cluster. At the start of the program I create a stateful set. This is done via <code>kubernetes.client.AppsV1Api()</code>. Then I wait for up to 30 minutes until all workers that I have requested are available. For this example, say I request 10 workers, but after 30 minutes, only 7 workers are available. Finally, I call <code>client.map()</code> and pass a function and a list to it. This list has 10 elements. However, dask will only use 7 workers to process this list! Even if after a couple of minutes the remaining 3 workers are available, dask does not assign any list elements to them even if none of the processing of the first elements has finished.</p>
<p>How can I change that behaviour of dask? Is there a a way of telling dask (or the scheduler of dask) to periodically check for newly arriving workers and distribute work more "correctly"? Or can I manually influence the distribution of these list elements?</p>
<p>Thank you.</p>
| r0f1 | <p>Dask will balance loads once it has a better understanding of how long the tasks will take. You can give an estimate of task length with the configuration values </p>
<pre class="lang-yaml prettyprint-override"><code>distributed:
scheduler:
default-task-durations:
myfunc: 1hr
</code></pre>
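<p>If editing the YAML configuration isn't convenient, the same hint can, I believe, be set programmatically before the work is submitted (a sketch; the dotted key mirrors the YAML above):</p>
<pre class="lang-py prettyprint-override"><code>import dask

# tell the scheduler to assume "myfunc" tasks take about an hour
dask.config.set({"distributed.scheduler.default-task-durations": {"myfunc": "1hr"}})
</code></pre>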
<p>Or, once Dask finishes one of these tasks, it will know how to make decisions around that task in the future.</p>
<p>I believe that this has also come up a couple of times on the GitHub issue tracker. You may want to search through <a href="https://github.com/dask/distributed/issues" rel="nofollow noreferrer">https://github.com/dask/distributed/issues</a> for more information.</p>
| MRocklin |
<p>We are going to provide customers a function by deploying and running a container in the customer's Kubernetes environment. After the job is done, we will clean up the container. Currently, the plan is to use the k8s default namespace, but I'm not sure whether it can be a concern for customers. I don't have much experience in the k8s field. Should we give customers an option to specify a namespace to run the container in, or just use the default namespace? I appreciate your suggestions!</p>
| vcycyv | <p>I would recommend you <strong>not</strong> use (!?) the default namespace for anything <strong>ever</strong>.</p>
<p>The following is more visceral than objective but it's drawn from many years' experience of Kubernetes. In 2016, a now former colleague and I blogged about the use of namespaces:</p>
<p><a href="https://kubernetes.io/blog/2016/08/kubernetes-namespaces-use-cases-insights/" rel="nofollow noreferrer">https://kubernetes.io/blog/2016/08/kubernetes-namespaces-use-cases-insights/</a></p>
<p><strong>NB</strong> since then, RBAC was added and it permits enforcing separation, securely.</p>
<p>Although it exists as a named (<code>default</code>) namespace, it behaves as if there is (the cluster has) no namespace. It may be (!?) that it was retcon'd into Kubernetes after namespaces were added.</p>
<p>Unless your context is defined to be a specific other namespace, <code>kubectl ...</code> behaves as <code>kubectl ... --namespace=default</code>. So, by accident it's easy to pollute and be impacted by pollution in this namespace. I'm sure your team will use code for your infrastructure but mistakes happen and "I forgot to specify the namespace" is easily done (and rarely wanted).</p>
<p>Using non-default namespaces becomes very intentional, explicit and, I think, precise. You must, for example (per @david-maze answer) be more intentional about RBAC for the namespace's resources.</p>
<p>Using namespaces is a mechanism that promotes multi-tenancy which is desired for separation of customers (business units, versions etc.)</p>
<p>You can't delete the default namespace but you can delete (and by consequence delete all the resources constrained by) any non-default namespace.</p>
<p>I'll think of more, I'm sure!</p>
<h3>Update</h3>
<ul>
<li>Corollary: generally don't constrain resources to <code>namespace</code> in specs but use e.g. <code>kubectl apply --filename=x.yaml --namespace=${NAMESPACE}</code></li>
</ul>
| DazWilkin |
<p>I am trying to make quick tweaks to the compute requests for a large number of deployments rather than having to get a PR approved if it turns out we need to change the numbers slightly. I need to be able to pass just one variable to the yaml file that gets applied.</p>
<p>Example below.</p>
<p>Shell Script:</p>
<pre><code>#!/bin/bash
APPCENTER=(pa nyc co nc md sc mi)
for ac in "${APPCENTER[@]}"
do
kubectl apply -f jobs-reqtweaks.yaml --env=APPCENTER=$ac
kubectl apply -f queue-reqtweaks.yaml --env=APPCENTER=$ac
kubectl apply -f web-reqtweaks.yaml --env=APPCENTER=$ac
done
</code></pre>
<p>YAML deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: example-app
app.kubernetes.io/managed-by: Helm
appcenter: $APPCENTER
chart: example-app-0.1.0
component: deployment-name
heritage: Tiller
release: example-app-fb-feature-tp-1121-fb-pod-resizing
name: $APPCENTER-deployment-name
namespace: example-app-fb-feature-tp-1121-fb-pod-resizing
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: example-app
appcenter: $APPCENTER
chart: example-app-0.1.0
component: deployment-name
heritage: Tiller
release: example-app-fb-feature-tp-1121-fb-pod-resizing
template:
metadata:
labels:
app: example-app
appcenter: $APPCENTER
chart: example-app-0.1.0
component: deployment-name
heritage: Tiller
release: example-app-fb-feature-tp-1121-fb-pod-resizing
spec:
containers:
name: deployment-name
resources:
limits:
memory: "400Mi"
cpu: "10m"
requests:
cpu: 11m
memory: 641Mi
</code></pre>
| Nathan McKaskle | <p><code>kubectl apply</code> doesn't accept an <code>--env</code> flag (<code>kubectl apply --help</code>).</p>
<p>You have various choices:</p>
<ol>
<li><code>sed</code> -- treats the YAML as a text file</li>
<li><a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer"><code>yq</code></a> -- treats YAML as YAML</li>
</ol>
<p>Define constants:</p>
<pre class="lang-sh prettyprint-override"><code># The set of YAML files you want to change
FILES=(
"jobs-reqtweaks.yaml"
"queue-reqtweaks.yaml"
"web-reqtweaks.yaml"
)
# The set of data-centers
APPCENTER=(pa nyc co nc md sc mi)
</code></pre>
<p>Then, either use <code>sed</code>:</p>
<pre class="lang-sh prettyprint-override"><code># Iterate over the data-centers
for ac in "${APPCENTER[@]}"
do
# Iterate over the files
for FILE in ${FILES[@]}
do
sed \
--expression="s|\$APPCENTER|${ac}|g" \
${FILE}| kubectl apply --filename=-
done
done
</code></pre>
<p>Or use <a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer"><code>yq</code></a>:</p>
<pre class="lang-sh prettyprint-override"><code># Iterate over the data-centers
for ac in "${APPCENTER[@]}"
do
# Iterate over the files
for FILE in ${FILES[@]}
do
UPDATE="with(select(documentIndex==0)
; .metadata.labels.appcenter=\"${ac}\"
| .metadata.name=\"${ac}-deployment-name\"
| .spec.selector.matchLabels.appcenter=\"${ac}\"
| .spec.template.metadata.labels.appcenter=\"${ac}\"
)"
yq eval "${UPDATE}" \
${FILE}| kubectl apply --filename=-
done
done
</code></pre>
| DazWilkin |
<p>Is there a way to make <code>skaffold dev</code> completely skip image building including the initial one? I have a prebuilt image. All I want skaffold to do is deploy the K8s Deployment YAML file and sync local files to it. I couldn't find a working example of how to do this. Closest was <a href="https://github.com/GoogleContainerTools/skaffold/tree/main/examples/hot-reload" rel="nofollow noreferrer">this one</a> but it assumes an initial image build.</p>
<p><code>skaffold dev --auto-build=false</code> stills builds.</p>
<p>my <code>skaffold.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: skaffold/v2beta26
kind: Config
build:
artifacts:
- image: gcr.io/my-project/my-repo
context: .
sync:
infer: ["**/*"]
deploy:
kubectl:
manifests:
- skaffold/*.yaml
</code></pre>
<pre><code>skaffold dev --auto-build=false --default-repo gcr.io/my-project
Listing files to watch...
- gcr.io/my-project/my-repo
Generating tags...
- gcr.io/my-project/my-repo -> gcr.io/my-project/my-repo:v0.7.4-182-gb47cd3b5-dirty
Checking cache...
- gcr.io/my-project/my-repo: Not found. Building
Starting build...
Building [gcr.io/my-project/my-repo]...
</code></pre>
<p>Update: Using skaffold v1.35.0 on an Ubuntu VM, deploying to remote GKE cluster version 1.20.10-gke.1600 created via Google Cloud Console. Storing images in Google Container Registry (gcr.io).</p>
| David Xia | <p>Skaffold's <code>build.local.tryImportMissing: true</code> setting will cause Skaffold to use a tagged image if it already exists. In your example above, Skaffold would look for <code>gcr.io/my-project/my-repo:v0.7.4-182-gb47cd3b5-dirty</code>.</p>
<p>You could combine <code>tryImportMissing</code> with <code>skaffold dev --tag {fixed-tag}</code> argument to override the tagging policy to specify a fixed tag.</p>
<p>And <code>skaffold dev</code> also supports <code>--auto-build=false --auto-deploy=false</code> to avoid re-building and re-deploying changed images.</p>
| Brian de Alwis |
<p>I'm trying to update a kubernetes template that we have so that I can pass in arguments such as <code>--db-config <value></code> when my container starts up.</p>
<p>This is obviously not right b/c there's not getting picked up</p>
<pre><code>...
containers:
- name: {{ .Chart.Name }}
...
args: ["--db-config", "/etc/app/cfg/db.yaml", "--tkn-config", "/etc/app/cfg/tkn.yaml"] <-- WHY IS THIS NOT WORKING
</code></pre>
| Catfish | <p>Here's an example showing your approach working:</p>
<p>main.go:</p>
<pre><code>package main
import "flag"
import "fmt"
func main() {
db := flag.String("db-config", "default", "some flag")
tk := flag.String("tk-config", "default", "some flag")
flag.Parse()
fmt.Println("db-config:", *db)
fmt.Println("tk-config:", *tk)
}
</code></pre>
<p>Dockerfile [simplified]:</p>
<pre><code>FROM scratch
ADD kube-flags /
ENTRYPOINT ["/kube-flags"]
</code></pre>
<p>Test:</p>
<pre><code>docker run kube-flags:180906
db-config: default
tk-config: default
docker run kube-flags:180906 --db-config=henry
db-config: henry
tk-config: default
</code></pre>
<p>pod.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- image: gcr.io/.../kube-flags:180906
imagePullPolicy: Always
name: test
args:
- --db-config
- henry
- --tk-config
- turnip
</code></pre>
<p>test:</p>
<pre><code>kubectl logs test
db-config: henry
tk-config: turnip
</code></pre>
| DazWilkin |
<p>I'm trying to containerize a python application, where I used the Kubernetes package. My Dockerfile is:</p>
<pre class="lang-bash prettyprint-override"><code>FROM python:3.10.6
ADD App_pod.py .
ADD config.yml ~/.kube/config
RUN pip3 install requests beautifulsoup4
RUN pip3 install kubernetes
RUN apt-get install -y ca-certificates curl
CMD ["python3","./App_pod.py"]
</code></pre>
<p>when I enter docker run -i -t run, it shows me the following error:</p>
<pre class="lang-none prettyprint-override"><code>> Traceback (most recent call last):
> File "//./App_pod.py", line 9, in <module>
> config.load_kube_config()
> File "/usr/local/lib/python3.10/site-packages/kubernetes/config/kube_config.py", line 813, in load_kube_config
> loader = _get_kube_config_loader(
> File "/usr/local/lib/python3.10/site-packages/kubernetes/config/kube_config.py", line 770, in _get_kube_config_loader
> raise ConfigException(
> kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
</code></pre>
<p>I tried to change the error text at the raise exception, but it still shows me the same error, so I guess maybe it doesn't detect the changes I make.</p>
<p>I used ADD command to copy the config file to the specific path, but it still shows the same error. I also tried to mount the volume but it can't be mounted since there is no container yet, because it needs first the conf file to be created. Does anyone face this problem and know how to fix it?</p>
| Mahmoud Abdelhafez | <p>A couple of issues:</p>
<ul>
<li><code>~</code> has a special meaning on Linux; it is a shortcut to the user's home directory. I think <code>~</code> is not available for the destination path of a Dockerfile <code>ADD</code> (or <code>COPY</code>) command. You should use a specific path instead</li>
<li>It's not advisable to include a file that includes credentials (such as a kubeconfig file) in a container. You should instead mount data such as this.</li>
<li>You're renaming the file: <code>config.yml</code> is the source but <code>config</code> is the destination, which may be problematic.</li>
</ul>
<p>You don't include the full Dockerfile so it's unclear what your <code>FROM</code> image is and you don't include the source of your <code>App_pod.py</code> file so it's unclear where it looks for the kubeconfig file, but...</p>
<ol>
<li>Generalize your code's use of the kubeconfig file by providing it as an argument or using an environment variable. This way, the location is dynamic and the code can work when run outside of a container or within</li>
<li>Mount the file into the container when you <code>run</code> the container</li>
</ol>
<p>Let's assume you change your code to accept a command-line argument for <code>config.yml</code> in the working directory, e.g.:</p>
<pre class="lang-bash prettyprint-override"><code>python3 App_pod.py ~/.kube/config
</code></pre>
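<p>For illustration, the corresponding change inside <code>App_pod.py</code> might look something like this (a sketch, assuming you use the official Python client; your actual file will differ):</p>
<pre class="lang-py prettyprint-override"><code>import sys

from kubernetes import client, config

# take the kubeconfig path from the command line instead of baking it into the image;
# with no argument, fall back to the client's default (~/.kube/config)
kubeconfig_path = sys.argv[1] if len(sys.argv) > 1 else None
config.load_kube_config(config_file=kubeconfig_path)

v1 = client.CoreV1Api()
print(v1.list_pod_for_all_namespaces(limit=5))
</code></pre>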
<p>Then, when you run the container, you will need to mount the <code>config.yml</code> into the container and reference it. In the following, I'm using different source and destination folders to demonstrate the point:</p>
<pre class="lang-bash prettyprint-override"><code>docker run \
--interactive --tty --rm \
--volume=~/.kube/config:/somewhere/.kube/config \
your-container /somewhere/.kube/config
</code></pre>
<p>You can use <code>~</code> in the <code>--volume</code> flag because <code>~</code> is meaningful on your (Linux) host. The file is mapped to <code>/somewhere/.kube/config</code> in the container and so your Python file needs to point to the container location (!) when you invoke it.</p>
<p>Also, so that you may use command-line parameters, I encourage you to use <code>ENTRYPOINT</code> instead of <code>CMD</code> to run your Python program:</p>
<pre><code>...
ENTRYPOINT ["python3","./App_pod.py"]
</code></pre>
| DazWilkin |
<p>I want to drain the node on shutdown and uncordon it on start. I wrote the unit file below but I am getting an error (OpenShift 3.11 and Kubernetes 1.11.0)</p>
<pre><code>[Unit]
Description=Drain Node at Shutdown
DefaultDependencies=no
Before=shutdown.target reboot.target halt.target
[Service]
Type=oneshot
ExecStart=/bin/sleep 60 && kubectl uncordon $HOSTNAME
ExecStop=kubectl drain $HOSTNAME --ignore-daemonsets --force --grace-period=30 && /bin/sleep 60
[Install]
WantedBy=halt.target reboot.target shutdown.target
</code></pre>
<p>It's giving me this error:</p>
<pre><code>error: no configuration has been provided
</code></pre>
<p>I set Environment variable but still no success</p>
<pre><code>[Service]
Environment="KUBECONFIG=$HOME/.kube/config"
</code></pre>
| ImranRazaKhan | <p>Following systemd unit is working, in ExecStop %H should be use for HOSTNAME</p>
<pre><code>[Unit]
Description=Drain Node at Shutdown
After=network.target glusterd.service
[Service]
Type=oneshot
Environment="KUBECONFIG=/root/.kube/config"
ExecStart=/bin/true
ExecStop=/usr/bin/kubectl drain %H --ignore-daemonsets --force --grace-period=30 --delete-local-data
TimeoutStopSec=200
# This service shall be considered active after start
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
</code></pre>
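<p>To activate it (assuming the unit is saved as, say, <code>/etc/systemd/system/drain-node.service</code>):</p>
<pre><code>systemctl daemon-reload
systemctl enable --now drain-node.service
</code></pre>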
| ImranRazaKhan |
<p>We have 1000s of pods running for our application. For some reason we need to restart 100s of them.</p>
<p>Is there any way we can do it in Kubernetes using kubectl or any other tool? Please advise. It should be a pure pod restart.</p>
| mcbss | <p>One way</p>
<pre><code>kubectl scale deploymennt <your-deployment-goes-here> --replicas=0
</code></pre>
<p>and then</p>
<pre><code>kubectl scale deploymennt <your-deployment-goes-here> --replicas=1000
</code></pre>
<p>Another way:</p>
<p>Write a script (a sketch follows the list below) that:</p>
<ol>
<li>will acquire a list of all active pods that belong to specific deployment</li>
<li>issue kubectl delete pod in a loop</li>
</ol>
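<p>A minimal sketch of such a script (the <code>app=my-app</code> label selector is an assumption; adjust it to match your deployment):</p>
<pre><code># list the pods of the deployment by label, then delete them one by one
kubectl get pods --selector=app=my-app --output=name | while read pod; do
  kubectl delete "${pod}"
done
</code></pre>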
<p>Yet another way (works if your pods belong to a dedicated namespace 'foo'):</p>
<pre><code>kubectl delete --all pods --namespace=foo
</code></pre>
| Mark Bramnik |
<p>I am trying to install Kubectl but when I type this in the terminal :</p>
<pre><code>kubectl get pods --namespace knative-serving -w
</code></pre>
<p>I got this :</p>
<pre><code>NAME READY STATUS RESTARTS AGE
activator-69b8474d6b-jvzvs 2/2 Running 0 2h
autoscaler-6579b57774-cgmm9 2/2 Running 0 2h
controller-66cd7d99df-q59kl 0/1 Pending 0 2h
webhook-6d9568d-v4pgk 1/1 Running 0 2h
controller-66cd7d99df-q59kl 0/1 Pending 0 2h
controller-66cd7d99df-q59kl 0/1 Pending 0 2h
controller-66cd7d99df-q59kl 0/1 Pending 0 2h
controller-66cd7d99df-q59kl 0/1 Pending 0 2h
controller-66cd7d99df-q59kl 0/1 Pending 0 2h
controller-66cd7d99df-q59kl 0/1 Pending 0 2h
</code></pre>
<p>I don't understand why <code>controller-66cd7d99df-q59kl</code> is still pending.</p>
<p>When I tried this : <code>kubectl describe pods -n knative-serving controller-66cd7d99df-q59kl</code> I got this :</p>
<pre><code>Name: controller-66cd7d99df-q59kl
Namespace: knative-serving
Node: <none>
Labels: app=controller
pod-template-hash=66cd7d99df
Annotations: sidecar.istio.io/inject=false
Status: Pending
IP:
Controlled By: ReplicaSet/controller-66cd7d99df
Containers:
controller:
Image: gcr.io/knative-releases/github.com/knative/serving/cmd/controller@sha256:5a5a0d5fffe839c99fc8f18ba028375467fdcd83cbee9c7015c1a58d01ca6929
Port: 9090/TCP
Limits:
cpu: 1
memory: 1000Mi
Requests:
cpu: 100m
memory: 100Mi
Environment: <none>
Mounts:
/etc/config-logging from config-logging (rw)
/var/run/secrets/kubernetes.io/serviceaccount from controller-token-d9l64 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-logging:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: config-logging
Optional: false
controller-token-d9l64:
Type: Secret (a volume populated by a Secret)
SecretName: controller-token-d9l64
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 40s (x98 over 2h) default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
</code></pre>
| Bruce Peter | <p>Please consider the comments above: you have <code>kubectl</code> installed correctly (it's working) and <code>kubectl describe pod/<pod></code> would help...</p>
<p>But, the information you provide appears sufficient for an answer:</p>
<p><code>FailedScheduling</code> because of <code>Insufficient cpu</code></p>
<p>The pod that you show (one of several) requests:</p>
<pre><code>cpu: 1
memory: 1000Mi
</code></pre>
<p>The cluster has insufficient capacity to deploy this pod (and apparently the others).</p>
<p>You should increase the number (and|or size) of the nodes in your cluster to accommodate the capacity needed for the pods.</p>
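<p>To see what capacity the nodes currently offer and, if this happens to be a GKE cluster (an assumption), one way to add nodes:</p>
<pre><code># how much CPU/memory each node can allocate to pods
kubectl describe nodes | grep -A 6 Allocatable

# only if the cluster is GKE; the cluster and node-pool names are placeholders
gcloud container clusters resize ${CLUSTER_NAME} --node-pool=default-pool --num-nodes=3
</code></pre>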
<p>You needn't delete these pods because, once the cluster's capacity increases, you should see these pods deploy successfully.</p>
| DazWilkin |
<p>To wait for a certain pod to be completed the command is </p>
<p><code>kubectl wait --for=condition=Ready pod/pod-name</code> </p>
<p>Similarly I want to wait for any one pod in the statefulset to be ready. I tried the command below which did not work,</p>
<p><code>kubectl wait --for=condition=Ready statefulset/statefulset-name</code> </p>
<p>What should the command options look like? </p>
| agirlwithnoname | <p>I used following and it works for me</p>
<pre><code>kubectl wait -l statefulset.kubernetes.io/pod-name=activemq-0 --for=condition=ready pod --timeout=-1s
</code></pre>
| ImranRazaKhan |
<p>I am evaluating a migration of an application working with docker-compose to Kubernates and came across two solutions: Kompose and compose-on-kubernetes.</p>
<p>I'd like to know their differences in terms of functionality/ease of use to make decision of which one is more suited.</p>
| staticdev | <p>Both product provide a migration path from docker-compose to Kubernetes, but they do it in a slightly different way.</p>
<ul>
<li>Compose on Kubernetes runs within your Kubernetes cluster and allows you to deploy your compose setup unchanged on the Kubernetes cluster.</li>
<li>Kompose translates your docker-compose files to a bunch of Kubernetes resources.</li>
</ul>
<p>Compose on Kubernetes is a good solution if you want to continue running with docker-compose in parallel to deploying on Kubernetes and so plan to keep the docker-compose format maintained.</p>
<p>If you're migrating completely to Kubernetes and don't plan to continue working with docker-compose, it's probably better to complete the migration using Kompose and use that as the starting point for maintaining the configuration directly as Kubernetes resources.</p>
| intellectronica |
<p>We wanted to build a Centralized Toggle Server leveraging Spring Cloud Config Server, but I've read a blog/article hinting that Spring Cloud Config is not Suited for Kubernetes Cloud environment (didn't give any reason why). Instead it is recommended to use Spring Kubernetes ConfigMaps for that. </p>
<p>Can some one shed the light on why Spring Cloud Config Server is not recommended for Kubernetes environment, if any? And advantages of using Spring Kubernetes ConfigMaps over Spring Cloud Config Server, if any?</p>
| user3495691 | <p>Here are some thoughts, a kind of comparison that that might help to decide:</p>
<p>IMO both can work generally speaking. Maybe you colleague could provide more insights on this (I'm not joking): what if there is something special in your particular environment that prevents Spring Cloud config from being even considered as an option.</p>
<ol>
<li><p>Once the property changes in spring cloud config, potentially beans having <code>@Refresh</code> scope can be reloaded without the need to re-load the application context. A kind of solution that you might benefit from if you're using spring. </p></li>
<li><p>In general, Spring Cloud Config can manage secrets (such as passwords); ConfigMaps can't, so you should use Kubernetes Secrets in that case.</p></li>
<li><p>On the other hand, Spring Cloud Config - requires a dedicated service. ConfigMaps is "native" in kubernetes.</p></li>
<li><p>When an application (business) microservice starts, it first contacts the Spring Cloud Config service; if it's not available, the application won't start correctly (technically it falls back to the other configuration sources supported by Spring Boot, like <code>application.properties</code>, etc.). If you have hundreds of microservices and hundreds of microservice instances, Cloud Config has to be available all the time, so you might need replicas of it, which is perfectly doable of course.</p></li>
<li><p>Spring Cloud Config works best if all your microservices use Java / Spring. ConfigMaps is a general purpose mechanism. Having said that, spring cloud config exposes REST interface so you can integrate.</p></li>
<li><p>Spring Cloud Config requires some files that can be either on a file system or in a git repository. So the "toggle" actually means a git commit and push. Kubernetes is usually used for "post-compile" environments, so it's possible that git is not even available there.</p></li>
<li><p>DevOps people probably are more accustomed to use Kubernetes tools, because its a "general purpose" solution.</p></li>
<li><p>Depending on your CI process some insights might come from CI people (regarding the configuration, env. variables that should be applied on CI tool, etc.) This highly varies from application to application so I believe you should talk to them as well.</p></li>
</ol>
| Mark Bramnik |
<p>For my project, I have to connect to a postgres Database in Google Cloud Shell using a series of commands:</p>
<p><code> gcloud config set project <project-name></code><br><code> gcloud auth activate-service-account <keyname>@<project-name>.iam.gserviceaccount.com --key-file=<filename>.json</code><br><code> gcloud container clusters get-credentials banting --region <region> --project <project></code><br><code> kubectl get pods -n <node></code><br><code> kubectl exec -it <pod-name> -n <node> bash</code><br><code> apt-get update</code><br><code> apt install postgresql postgresql-contrib</code><br><code> psql -h <hostname> -p <port> -d <database> -U <userId></code></p>
<p>I am a beginner to this and until now have just been running the scripts provided to me by copy-pasting.
But to make things easier, I have created a .bat file in the Shell editor with all the above commands and tried to run it using <code>bash <filename></code>.</p>
<p>But once the <code>kubectl exec -it <pod-name> -n <node> bash</code> command runs and new directory is opened like below, the rest of the commands do not run.</p>
<p><code> Defaulted container "<container>" out of: <node>, istio-proxy, istio-init (init)</code><br><code> root@<pod-name>:/#</code></p>
<p>So how can I make the shell run the rest of these scripts from the .bat file:</p>
<p><code> apt-get update</code><br><code> apt install postgresql postgresql-contrib</code><br><code> psql -h <hostname> -p <port> -d <database> -U <userId></code></p>
| Hemendra | <p>Cloud Shell is a Linux instance and default to the Bash shell.</p>
<p><code>BAT</code> commonly refers to Windows|DOS batch files.</p>
<p>On Linux, shell scripts are generally <code>.sh</code>.</p>
<p>Your script needs to be revised in order to pass the commands intended for the <code>kubectl exec</code> command to the Pod and not to the current script.</p>
<p>You can try (!) the following. It creates a Bash (sub)shell on the Pod and runs the commands listed after <code>-c</code> in it:</p>
<pre class="lang-sh prettyprint-override"><code>gcloud config set project <project-name>
gcloud auth activate-service-account <keyname>@<project-name>.iam.gserviceaccount.com \
--key-file=<filename>.json
gcloud container clusters get-credentials banting \
--region <region> \
--project <project>
kubectl get pods -n <node>
kubectl exec -it <pod-name> -n <node> bash -c "apt-get update && apt install postgresql postgresql-contrib && psql -h <hostname> -p <port> -d <database> -U <userId>"
</code></pre>
<p>However, I have some feedback|recommendations:</p>
<ol>
<li>It's unclear whether even this approach will work because your running <code>psql</code> but doing nothing with it. In theory, I think you could then pass a script to the <code>psql</code> command too but then your script is becoming very janky.</li>
<li>It is considered <strong>not</strong> good practice to install software in containers as you're doing. The recommendation is to create the image that you want to run beforehand and use that. It is recommended that containers be immutable</li>
<li>I encourage you to use long flags when you write scripts as short flags (<code>-n</code>) can be confusing whereas <code>--namespace=</code> is more clear (IMO). Yes, these take longer to type but your script is clearer as a result. When you're hacking on the command-line, short flags are fine.</li>
<li>I encourage you to <strong>not</strong> use <code>gcloud config set</code> e.g. <code>gcloud config set project ${PROJECT}</code>. This sets global values. And its use is confusing because subsequent commands use the values implicitly. Interestingly, you provide a good example of why this can be challenging. Your subsequent command <code>gcloud container clusters get-credentials --project=${PROJECT}</code> explicitly uses the <code>--project</code> flag (this is good) even though you've already implicitly set the value for <code>project</code> using <code>gcloud config set project</code>.</li>
</ol>
| DazWilkin |
<p>To use alerting by e-mail in Grafana, we have to set SMTP settings in grafana.ini.</p>
<p>On Ubuntu, we can easily run the grafana-prometheus-k8s stack with the command
<code>microk8s enable prometheus</code>.
However, how can we feed grafana.ini to Grafana running in a k8s pod?</p>
| Meng-Yuan Huang | <p>We can modify grafana k8s deployment manifest by <strong>volumeMounts</strong> to feed grafana.ini on our host to grafana running in a pod.</p>
<p>First, prepare your grafana.ini with SMTP settings. E.g.</p>
<pre><code>[smtp]
enabled = true
host = smtp.gmail.com:465
# Please change user and password to your ones.
user = [email protected]
password = your-password
</code></pre>
<p>Then, you can place this file on your host. E.g. <code>/home/mydir/grafana.ini</code></p>
<p>Modify the loaded grafana k8s deployment manifest:</p>
<pre><code>kubectl edit deployments.apps -n monitoring grafana
</code></pre>
<p>Add a new mount to <strong>volumeMounts</strong> (not the one in <code>kubectl.kubernetes.io/last-applied-configuration</code>):</p>
<pre><code> volumeMounts:
- mountPath: /etc/grafana/grafana.ini
name: mydir
subPath: grafana.ini
</code></pre>
<p>Add a new <strong>hostPath</strong> to <strong>volumes</strong>:</p>
<pre><code> volumes:
- hostPath:
path: /home/mydir
type: ""
name: mydir
</code></pre>
<p>Finally, restart the deployment:</p>
<pre><code>kubectl rollout restart -n monitoring deployment grafana
</code></pre>
<p>Run this command and use a web browser on your host to navigate to http://localhost:8080 to grafana web app:</p>
<pre><code>kubectl port-forward -n monitoring svc/grafana 8080:3000
</code></pre>
<p>Then, you can navigate to Alerting / Notification channels / Add channel to add an Email notification channel and test it!</p>
| Meng-Yuan Huang |
<p>I have Kubernetes running on two nodes and one application deployed on the two nodes (two pods, one per node). </p>
<p>It's a Spring Boot application. It uses OpenFeign for service discoverability. In the app i have a RestController defined and it has a few APIs and an @Autowired @Service which is called from inside the APIs.</p>
<p>Whenever I make a request to one of the APIs, Kubernetes uses some sort of load-balancing to route the traffic to one of the pods, and the app's RestController is called. This is fine and I want this to be load-balanced.</p>
<p>The problem happens once that API is called and it calls the @Autowired @Service. Somehow this too gets load-balanced and the call to the @Service might end up on the other node.</p>
<p>Here's an example:</p>
<ul>
<li>we have two nodes: node1, node2</li>
<li>we make a request to node1's IP address.
<ul>
<li>this might get load-balanced to node2 (this is fine)</li>
</ul></li>
<li>node1 gets the request and calls the @Autowired @Service</li>
<li>the call jumps to node2 (this is where the problem happens)</li>
</ul>
<p>And in code:<br>
Controller:</p>
<pre><code> @Autowired
private lateinit var userService: UserService
@PostMapping("/getUser")
fun uploadNewPC(@RequestParam("userId") userId: String): User {
        println(System.getenv("hostIP")) //123.45.67.01
return userService.getUser(userId)
}
</code></pre>
<p>Service:</p>
<pre><code>@Service
class UserService {
fun getUser(userId: String) : User {
        println(System.getenv("hostIP")) //123.45.67.02
...
}
}
</code></pre>
<p>I want the load-balancing to happen only on the REST requests not the internal calls of the app to its @Service components. How would i achieve this? Is there any configuration to the way Spring Boot's @service components operate in Kubernetes clusters? Can i change this?</p>
<p>Thanks in advance.</p>
<p>Edit: <br>
After some debugging I found that it wasn't the Service that was load-balanced to another node but the initial HTTP request, even though the request was specifically sent to the URL of node1... And since I was debugging both nodes at the same time, I didn't notice this.</p>
| Oliver Tasevski | <p>Well, I haven't used openfeign, but in my understanding it can loadbalance only REST requests indeed.</p>
<p>If I've got your question right, you say that when the REST controller calls the service component (UserService in this case) the network call is issued and this is undesirable.</p>
<p>In this case, I believe, following points for consideration will be beneficial:</p>
<ol>
<li><p>Spring boot has nothing to do with load balancing at this level by default, it should be a configured in the spring boot application somehow.</p></li>
<li><p>This also has nothing to do with the fact that this application runs in a Kubernetes environment, again its only a spring boot configuration. </p></li>
<li><p>Assuming, you have a <code>UserService</code> interface that obviously doesn't have any load balancing logic, spring boot must <strong>wrap</strong> it into some kind of proxy that adds these capabilities. So try to debug the application startup, place a breakpoint in the controller method and check out what is the actual type of the user service, again it must be some sort of proxy</p></li>
<li><p>If the assumption in 3 is correct, there must be some kind of bean post processor (possibly in spring.factories file of some dependency) class that gets registered within the application context. Probably if you'll create some custom method that will print all beans (Bean Post Processor is also a bean), you'll see the suspicious bean. </p></li>
</ol>
| Mark Bramnik |
<p>I am trying to get the name of the pod with the highest CPU utilization using a kubectl command.
I am able to retrieve the list using the following command, but I am unable to write a JSONPath query to fetch the name of the first pod from the output.
I appreciate any help in this regard. Thanks!</p>
<pre><code>kubectl top pod POD_NAME --sort-by=cpu
</code></pre>
| haridurgempudi | <p><code>kubectl top</code> doesn't appear to enable <code>--output</code> formatting and so no JSON and thus no JSONPath :-(</p>
<p>You can:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl top pod \
--sort-by=cpu \
--no-headers \
--namespace=${NAMESPACE} \
| head -n 1
</code></pre>
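<p>And if you want just the pod's name (the first column) rather than the whole line, you can add <code>awk</code>:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl top pod \
    --sort-by=cpu \
    --no-headers \
    --namespace=${NAMESPACE} \
    | head -n 1 \
    | awk '{print $1}'
</code></pre>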
<p>I think it would be useful to support <code>--output</code> for all <code>kubectl</code> commands and you may wish to submit a feature request for this.</p>
<blockquote>
<p><strong>NOTE</strong> Hmmm <a href="https://github.com/kubernetes/kubectl/issues/753" rel="nofollow noreferrer"><code>kubectl top</code> output format options</a></p>
</blockquote>
| DazWilkin |
<p>The k8s documentation <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/" rel="nofollow noreferrer">Versions in CustomResourceDefinitions</a> talks about how to upgrade CRD schema and CRD objects from one version to another.</p>
<p>However, what will happen when I update/change a CRD schema WITHOUT updating its version?</p>
<p>Fr example, I have a CRD schema <code>v1/Foo</code> and two field <code>A,B</code> (written in go):</p>
<pre class="lang-golang prettyprint-override"><code>// api version: v1
type FOOSpec struct {
A int `json:"a"`
B int `json:"b"`
}
</code></pre>
<p>I use kubebuilder to generate the CRD manifest and apply it in the k8s cluster. After that, The field A is deleted.</p>
<pre class="lang-golang prettyprint-override"><code>// still api version: v1
type FOOSpec struct {
B int `json:"b"`
}
</code></pre>
<p>Again I generate the CRD manifest and apply it in the k8s cluster. Surprisingly, k8s simply accepts the updated CRD schema and no warning is reported.</p>
<p>Why is that? What if I create a Foo1 object with the first schema and then create a Foo2 object with the second schema? The two objects have the same version (v1) but different schemas.</p>
<p>I expect k8s to report an error when the CRD schema is updated without upgrading the version.</p>
| Owe u. | <p>You've mostly answered your own question.</p>
<p>Custom Resource <strong>Definitions</strong> (CRDs) <strong>define</strong> Kubernetes resource types in the Kubernetes API.</p>
<p>In most schema evolutions, it is developers' responsibility to version such changes correctly. Kubernetes can't challenge the developer on the definition of schemas; it must accept the developers' intent (even if problematic).</p>
<p>The issue as you've observed, it that changing the definition of a Group<em>Version</em>Kind is problematic and therefore discouraged (in production).</p>
<p>If you create Resources against CRDs, then revise the CRD (without revising its Version) and create more Resources, you've introduced inconsistency:</p>
<ul>
<li>Removed fields are not accessible in <strong>any</strong> of the CRD's Resources</li>
<li>Added|Changed fields aren't present in pre-change CRD's Resources</li>
</ul>
<p>Errors will not arise until an operation (CRUD) is attempted (by the Kubernetes API server) against the Resource.</p>
<p>Kubernetes Kinds aren't SemVer versioned but follow a limited set of enumerations eg. v1alpha1, v1beta1, v1, v2 ... (see e.g. <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-priority" rel="nofollow noreferrer">Version priority</a>). During development I tend to use e.g. v1alpha1, make changes without bumping the version and manage the house-keeping. Once you publish a CRD, it's important to bump versions so that you do not break your users.</p>
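<p>For illustration, a hand-written CRD manifest sketch of what "bumping the version" (rather than silently mutating <code>v1</code>) looks like; the group and field names simply mirror the <code>Foo</code> example above and are not your real schema:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  names:
    kind: Foo
    plural: foos
  scope: Namespaced
  versions:
  - name: v1            # the original schema with fields a and b
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              a: {type: integer}
              b: {type: integer}
  - name: v2            # the revised schema without field a
    served: true
    storage: true       # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              b: {type: integer}
</code></pre>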
| DazWilkin |
<p>I'm pretty new to Prometheus and according to my understanding, there are many metrics already available in Prometheus. But I'm not able to see "http_requests_total" which is used in many examples in the list. Do we need to configure anything in order to avail these HTTP metrics?</p>
<p>My requirement is to calculate the no: of HTTP requests hitting the server at a time. So http_request_total or http_requests_in_flight metrics would be of great help for usage.</p>
<p>Can someone please guide me here on what to do next?</p>
| Savitha | <p>The <a href="https://prometheus.io" rel="nofollow noreferrer">documentation</a> is extensive and helpful.</p>
<p>See <a href="https://prometheus.io/docs/prometheus/latest/installation/#using-docker" rel="nofollow noreferrer">installation</a></p>
<p>If you have Docker, you can simply run:</p>
<pre class="lang-sh prettyprint-override"><code>docker run \
--interactive --tty --rm \
--publish=9090:9090 \
prom/prometheus
</code></pre>
<p>And then browse: <a href="http://localhost:9090" rel="nofollow noreferrer">http://localhost:9090</a>.</p>
<p>The default config is set to scrape <a href="http://localhost:9090/targets" rel="nofollow noreferrer">itself</a>.</p>
<p>You can list these <a href="http://localhost:9090/metrics" rel="nofollow noreferrer">metrics</a>.</p>
<p>And graph them, e.g. <a href="http://localhost:9090/graph?g0.expr=prometheus_http_requests_total&g0.tab=1&g0.stacked=0&g0.range_input=1h" rel="nofollow noreferrer"><code>prometheus_http_requests_total</code></a>.</p>
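<p>Once your own server exposes a counter such as <code>http_requests_total</code> and Prometheus scrapes it, the overall request rate can be graphed with a query along these lines (the metric and label names depend on your instrumentation, so treat this as a sketch):</p>
<pre><code>sum(rate(http_requests_total[5m]))
</code></pre>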
| DazWilkin |
<p>I need to clean up some Kubernetes namespaces (hello_namespace, second, my_namespace1, my_namespace45, my_namespace44, for example) and I do it with a Jenkins job.
I read with kubectl the namespaces I need to clean up and then I want to fire a job to delete each of them. My code should be something like this:</p>
<pre><code> pipeline {
agent { label 'master' }
stages {
stage('Clean e2e') {
steps {
script {
sh "kubectl get namespace |egrep 'my_namespace[0-9]+'|cut -f1 -d ' '>result.txt"
def output=readFile('result.txt').trim()
}
}
}
</code></pre>
<p>The output of this code will be the variable $output with the values:
my_namespace1
my_namespace45
my_namespace44
separated by newlines. Now I want to fire a job with the namespace as a parameter; how can I do that? (My problem is reading the output and firing an independent job for each namespace, roughly:
while (output.nextLine()) callJob.)
The job call should be like:</p>
<pre><code>build job: 'Delete temp Stage', parameters:
[string(name: 'Stage', value: "${env.stage_name}")]
</code></pre>
| Guel135 | <p>I already got it :)</p>
<pre><code> #!groovy
pipeline {
agent { label 'master' }
stages {
stage('Clean up stages') {
steps {
script {
sh '(kubectl get namespace |egrep "namespace[0-9]+"|cut -f1 -d " "|while read i;do echo -n $i";" ; done;)>result.txt'
def stages = readFile('result.txt').trim().split(';')
for (stage in stages) {
if (stage?.trim()) {
echo "deleting stage: $stage"
build job: 'Delete temp Stage', parameters:
[string(name: 'Stage', value: "$stage")]
}
}
}
}
}
}
}
</code></pre>
| Guel135 |
<p>I am doing some tutorials from <a href="https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes?continue=https%3A%2F%2Fdevelopers.google.com%2Flearn%2Fpathways%2Fjava-cloud-fundamentals%23codelab-https%3A%2F%2Fcodelabs.developers.google.com%2Fcodelabs%2Fcloud-springboot-kubernetes#5" rel="nofollow noreferrer">https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes?continue=https%3A%2F%2Fdevelopers.google.com%2Flearn%2Fpathways%2Fjava-cloud-fundamentals%23codelab-https%3A%2F%2Fcodelabs.developers.google.com%2Fcodelabs%2Fcloud-springboot-kubernetes#5</a>. I am using Google Cloud Console. When I type:</p>
<pre><code>kubectl version
</code></pre>
<p>I am getting an response:</p>
<pre><code>WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:20:07Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Unable to connect to the server: dial tcp 34.135.56.138:443: i/o timeout
</code></pre>
<p>What I do not like is the</p>
<pre><code>Unable to connect to the server: dial tcp 34.135.56.138:443: i/o timeout
</code></pre>
<p>I am getting this error also when running other <strong>kubectl</strong> commands, which stops my progress.
I am using only out of the box tools provided by Google Cloud.
What is causing this error in Google Cloud Console? Do I need to make additional Kubernetes configuration? (The tutorial did not mention anything like that)</p>
<hr />
<p>Running the command as suggested in the comments:</p>
<pre><code>gcloud container clusters get-credentials hello-java-cluster --zone=us-central1-c
</code></pre>
<p>I am getting an error:</p>
<pre><code>Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=404, message=Not found: projects/question-tracker/zones/us-central1-c/clusters/hello-java-cluster.
No cluster named 'hello-java-cluster' in question-tracker.
</code></pre>
<p>EDIT:
I have run the command: (instead of <strong>hello</strong>-java-cluster I ran it with previously created <strong>questy</strong>-java-cluster)</p>
<pre><code>(quizdev)$ gcloud container clusters get-credentials questy-java-cluster --zone=us-central1-c
</code></pre>
<p>And it ran without error giving result:</p>
<pre><code>Fetching cluster endpoint and auth data.
kubeconfig entry generated for questy-java-cluster.
</code></pre>
<p>After this operation kubectl version runs in 1 second and does not give the error.</p>
| fascynacja | <p>There are 2 steps that get blurred by <code>gcloud</code> into one.</p>
<p>The <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create" rel="nofollow noreferrer"><code>gcloud container clusters create</code></a> command not only creates the cluster but it also configures <code>kubectl</code> via kubeconfig (<code>~/.kube/config</code>) to access the cluster.</p>
<p>If you're unable to e.g. <code>kubectl get nodes</code> after <code>gcloud container clusters create</code> and you're confident the cluster created successfully (i.e. by checking Cloud Console), it may be that the kubeconfig file was not correctly configured.</p>
<p>Please confirm that the kubeconfig file in the default location (~/.kube/config) exists and contains data (<code>cluster</code>, <code>server</code>, <code>user</code>) entries corresponding to your cluster.</p>
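<p>One quick way to inspect what <code>kubectl</code> is currently configured to talk to:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl config current-context
kubectl config view --minify
</code></pre>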
<p>If that all looks good, you can try repeating the step that configures kubeconfig: <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials" rel="nofollow noreferrer"><code>gcloud container clusters get-credentials hello-java-cluster --zone=us-central1-c</code></a>. If there are any other errors running these commands, please edit your question and include the errors.</p>
<p>If you've changed the cluster's <code>{NAME}</code>, <code>{LOCATION}</code> or are using a different Google Cloud <code>{PROJECT}</code>, please use these (replacing with the correct values) in the above command:</p>
<pre class="lang-bash prettyprint-override"><code>gcloud container clusters get-credentials ${NAME} \
--location=${LOCATION} \
--project=${PROJECT}
</code></pre>
| DazWilkin |
<p>The REST API requests, <code>GET, POST, PUT</code> etc., to the Kubernetes API server are simple request/response interactions and easy to understand, such as <code>kubectl create <something></code>. I wonder how the API server serves pod logs when I do <code>kubectl logs -f <pod-name></code> (and similar operations like <code>kubectl attach <pod></code>). Is it just an HTTP response to a <code>GET</code> in a loop?</p>
| Ijaz Ahmad | <p>My advice is to always check what <code>kubectl</code> does under the cover, and for that use <code>-v=9</code> with your command. It will provide you with full request and responses that are going between the client and the server. </p>
| soltysh |
<p>I have a Kubernetes cluster with a pod running an instance of Open Telemetry Collector.</p>
<p>My .Net app inside Kubernetes exports traces to the Collector instance which in turn exports them to Elastic APM server. This work correctly if I use this config (<a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticexporter#sample-configuration-using-an-elastic-apm-secret-token" rel="nofollow noreferrer">described here</a>) for my Collector instance:</p>
<pre><code>exporters:
otlp/elastic:
endpoint: "xxx.elastic-cloud.com:443"
headers:
Authorization: "Bearer your-apm-secret-token"
</code></pre>
<p>To work in Kubernetes, I set this config in a ConfigMap. <strong>This work correctly but the issue is that this requires me to add a secret in the ConfigMap which I would like to avoid.</strong></p>
<p>To avoid this, I saw that you could add an <a href="https://www.elastic.co/guide/en/apm/get-started/current/open-telemetry-elastic.html#instrument-apps-apm-server" rel="nofollow noreferrer">OTEL_EXPORTER_OTLP_HEADERS environment variable</a> which will be used by the exporter. You could then pass the secrets through an environment variable in the container (not a perfect solution, but ok for me). This functionality seems to be implemented by the different OpenTelemetry SDKs (.Net, Java, <a href="https://opentelemetry-python.readthedocs.io/en/latest/exporter/otlp/otlp.html" rel="nofollow noreferrer">Python</a>, ...) but it doesn't seem to work with the Collector if I try to use the environment variable trick.</p>
<p>Any idea how I could do this with the Collector? Or any other trick to avoid passing the secret to the ConfigMap?</p>
| Absolom | <p>An <a href="https://github.com/open-telemetry/opentelemetry-collector/issues/2469" rel="nofollow noreferrer">issue</a> was entered for OpenTelemetry Collector that would solve my main concerns of using secrets in environment variables.</p>
<p>Until then, the author of the issue suggest environment variable expansion mechanism as a workaround.</p>
<p>So if you put your token in an environment variable ELASTIC_APM_TOKEN, then you could reference it in your ConfigMap like so:</p>
<pre><code>exporters:
otlp/elastic:
endpoint: "xxx.elastic-cloud.com:443"
headers:
Authorization: "Bearer $ELASTIC_APM_TOKEN"
</code></pre>
<p>The Collector will then replace $ELASTIC_APM_TOKEN with the value in your environment variable before applying the config.</p>
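<p>To get that environment variable into the Collector's container from a Kubernetes Secret, the container spec fragment could look roughly like this (the secret name and key are assumptions):</p>
<pre><code>env:
- name: ELASTIC_APM_TOKEN
  valueFrom:
    secretKeyRef:
      name: elastic-apm-secret
      key: token
</code></pre>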
| Absolom |
<p>I have this workflow where I write some code and a docker image is deployed under <code>latest</code>. Currently, it deploys to my container registry and then I run this <code>kubectl apply file.yaml</code> after the container deploys, but K8s doesn't seem to recognize that it needs to re-pull and rollout a new deployment with the newly pulled image.</p>
<p>How can I basically feed in the YAML spec of my deployments and just rollout restart the deployments?</p>
<p>Alternatively, is there a better approach? I unconditionally am rolling out deployment restarts on all my deployments this way.</p>
| Ryan | <p>@daniel-mann is correct to discourage the use of <code>:latest</code>.</p>
<p>Don't read the word 'latest' when you see the tag <code>latest</code>. It's a default tag and it breaks the ability to determine whether the image's content has changed.</p>
<p>A better mechanism is to tag your images by some invariant value... your code's hash, for example. This is what Docker does with its image hashes and that's the definitive best (but not easiest) way to identify images: <code>[[image]]@sha256:....</code>.</p>
<p>You can use some SemVer value. Another common mechanism is to use the code's git commit for its tag: <code>git rev-parse HEAD</code> or similar.</p>
<p>So, assuming you're now uniquely identify images by tags, how to update the Deployment? The docs provide various approaches:</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment</a></p>
<p>But these aren't good for robust deployments (lowercase-D). What you should also do is create unique Deployment manifests each time you change the image. Then, if you make a mistake and inadvertently deploy something untoward, you have a copy of what you did and you can correct it (making another manifest) and apply that. This is a principle behind immutable infrastructure.</p>
<p>So...</p>
<pre class="lang-sh prettyprint-override"><code>TAG=$(git rev-parse HEAD)
docker build \
--tag=${REPO}/${IMAGE}:${TAG} \
...
docker push ${REPO}/${IMAGE}:${TAG}
</code></pre>
<p>Then change the manifest (and commit the change to source control):</p>
<pre class="lang-sh prettyprint-override"><code>sed --in-place "s|image: IMAGE|image: ${REPO}/${IMAGE}:${TAG}|g" path/to/manifest.yaml
git add /path/to/manifest.yaml
git commit --message=...
</code></pre>
<p>Then apply the revised (but unique!) manifest to the cluster:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply \
--filename=/path/to/manifest.yaml \
--namespace=${NAMESPACE}
</code></pre>
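<p>If the script should also block until the new Pods are actually running (handy in CI), you can wait on the rollout afterwards; <code>yourapp</code> is a placeholder for your Deployment's name:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl rollout status \
  deployment/yourapp \
  --namespace=${NAMESPACE}
</code></pre>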
| DazWilkin |
<p>I am trying to find a way to disable <code>--basic-auth-file</code> on my cluster.</p>
<p>Can someone help me?</p>
| Danilo | <p>Based on your comments, you are using kops to deploy the cluster. In kops's case, you need to add the following lines to disable the --basic-auth-file flag.</p>
<pre><code>kops edit cluster --name <clustername> --state <state_path>
spec:
kubeAPIServer:
disableBasicAuth: true
</code></pre>
<p><em>spec</em> and <em>kubeAPIServer</em> are probably already present in your cluster config</p>
<p>To apply the change, you need to run</p>
<pre><code>kops update cluster --name <clustername> --state <state_path> <--yes>
</code></pre>
<p>and do a rolling upgrade</p>
<pre><code>kops rolling-update cluster --name <clustername> --state <state_path> <--yes>
</code></pre>
<p>If you run the commands without <em>--yes</em>, it will basically show you what it is going to do; with <em>--yes</em> it will apply the changes/roll the cluster.</p>
<p>Sadly, kops is a bit lacking in documentation on the options you can use in the cluster config YAML; the best I could find is their API definition:
<a href="https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/componentconfig.go#L246" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/componentconfig.go#L246</a></p>
| Lorant Fecske |
<p>I am using Spark <code>3.1.2</code> and have created a cluster with 4 executors each with 15 cores.</p>
<p>My total number of partitions therefore should be 60, yet only 30 are assigned.</p>
<p>The job starts as follows, requesting 4 executors</p>
<pre><code>21/12/23 23:51:11 DEBUG ExecutorPodsAllocator: Set total expected execs to {0=4}
</code></pre>
<p>A few mins later, it is still waiting for them</p>
<pre><code>21/12/23 23:53:13 DEBUG ExecutorPodsAllocator: ResourceProfile Id: 0 pod allocation status: 0 running, 4 unknown pending, 0 scheduler backend known pending, 0 unknown newly created, 0 scheduler backend known newly created.
21/12/23 23:53:13 DEBUG ExecutorPodsAllocator: Still waiting for 4 executors for ResourceProfile Id 0 before requesting more.
</code></pre>
<p>then finally 2 come up</p>
<pre><code>21/12/23 23:53:14 DEBUG ExecutorPodsWatchSnapshotSource: Received executor pod update for pod named io-getspectrum-data-acquisition-modelscoringprocessor-8b92877de9b4ab13-exec-1, action MODIFIED
21/12/23 23:53:14 DEBUG ExecutorPodsWatchSnapshotSource: Received executor pod update for pod named io-getspectrum-data-acquisition-modelscoringprocessor-8b92877de9b4ab13-exec-3, action MODIFIED
21/12/23 23:53:15 DEBUG ExecutorPodsAllocator: ResourceProfile Id: 0 pod allocation status: 2 running, 2 unknown pending, 0 scheduler backend known pending, 0 unknown newly created, 0 scheduler backend known newly created.
</code></pre>
<p>then a third</p>
<pre><code>21/12/23 23:53:17 DEBUG ExecutorPodsWatchSnapshotSource: Received executor pod update for pod named io-getspectrum-data-acquisition-modelscoringprocessor-8b92877de9b4ab13-exec-2, action MODIFIED
21/12/23 23:53:18 DEBUG ExecutorPodsAllocator: ResourceProfile Id: 0 pod allocation status: 3 running, 1 unknown pending, 0 scheduler backend known pending, 0 unknown newly created, 0 scheduler backend known newly created.
</code></pre>
<p>...and then finally the job proceeds</p>
<pre><code>21/12/23 23:53:30 DEBUG KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Launching task 0 on executor id: 1 hostname: 10.128.35.137.
21/12/23 23:53:33 INFO MyProcessor: Calculated partitions are read 45 write 1
</code></pre>
<p>I don't understand why it suddenly decides to proceed when we have 3 executors as opposed to waiting for the 4th.</p>
<p>I have gone through the Spark and Spark K8s configs I don't see an appropriate config to influence this behavior</p>
<p>Why does it proceed when we have 3 executors?</p>
| DJ180 | <p>Per <a href="https://spark.apache.org/docs/latest/configuration.html#scheduling" rel="nofollow noreferrer">Spark docs</a>, scheduling is controlled by these settings</p>
<blockquote>
<p><code>spark.scheduler.maxRegisteredResourcesWaitingTime</code> (default: 30s)<br>
Maximum amount of time to wait for resources to register before scheduling begins.</p>
<p><code>spark.scheduler.minRegisteredResourcesRatio</code> (default: 0.8 for KUBERNETES mode; 0.8 for YARN mode; 0.0 for standalone mode and Mesos coarse-grained mode)<br>
The minimum ratio of registered resources (registered resources / total expected resources) to wait for before scheduling begins; resources are executors in YARN and Kubernetes mode, and CPU cores in standalone and Mesos coarse-grained mode ('spark.cores.max' is the total expected resources for Mesos coarse-grained mode). Specified as a double between 0.0 and 1.0. Regardless of whether the minimum ratio of resources has been reached, the maximum amount of time it will wait before scheduling begins is controlled by <code>spark.scheduler.maxRegisteredResourcesWaitingTime</code>.</p>
</blockquote>
<p>In your case, it looks like the <code>WaitingTime</code> has been reached before all 4 executors registered, so scheduling began with the 3 that were available.</p>
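<p>If you would rather have the job wait for all 4 executors (at the cost of a potentially longer startup), you could raise these settings. A sketch; the values are illustrative:</p>
<pre><code>spark-submit \
  --conf spark.scheduler.minRegisteredResourcesRatio=1.0 \
  --conf spark.scheduler.maxRegisteredResourcesWaitingTime=300s \
  ...
</code></pre>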
| mazaneicha |
<p>My bash scripts with <code>kubectl</code> sometimes cause trouble when it changes context. After running the script user may accidentally end up make changes in wrong cluster. I would like to probe your wisdom when it comes to handling context in scripts.</p>
<p>Is it possible for the script to save old context and when done revert the old context back? (I thought about running <code>kubectl config get-contexts</code> find current context and set it back after script completes. But this may fail if user haven't saved contexts)</p>
<p>Other approach I am thinking is to save the value of KUBECONFIG env var, change it to a temp file, get credentials and restore the value when the script completes.</p>
<p>Before I go and reinvent the wheel I like to here how the power users are handling situation like this? Can you share your thoughts/ideas?</p>
| RandomQuests | <p>Generally, I think it's problematic to depend implicitly on global state that may be arbitrarily updated by other processes and users.</p>
<p>Even with multiple configuration files, there's still opacity as to which cluster, user, namespace, context are being used.</p>
<p>For a single user, <code>kubectl</code>'s configuration file provides the convenience of not having to retype flags for every command, and I think that should be its sole purpose.</p>
<p>In scripts, I think it's preferable (clearer|self-documenting) to be explicit and to include either <code>--context</code> or <code>--cluster</code>, <code>--user</code> (and possibly <code>--namespace</code>) every time.</p>
<p>This said, it is also advisable to use variables rather than hard-coded values, though that still leaves some room for error.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete deployment/primary-service
# vs
KUBECONFIG=sam-monday-morning-config.yaml \
kubectl delete deployment/primary-service
# vs
kubectl delete deployment/primary-service \
--cluster=test-cluster \
--namespace=test-namespace \
--user=test-user
</code></pre>
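<p>If a script does need to obtain credentials itself, another option is to point <code>KUBECONFIG</code> at a throw-away file for the duration of the script, so the caller's <code>~/.kube/config</code> (and its current-context) is never touched. A sketch; the <code>cp</code> stands in for however you actually obtain credentials, and the path is hypothetical:</p>
<pre class="lang-sh prettyprint-override"><code>#!/usr/bin/env bash
set -euo pipefail

# Scope this script to its own kubeconfig
TMP_KUBECONFIG="$(mktemp)"
trap 'rm -f "${TMP_KUBECONFIG}"' EXIT

# Populate it without mutating the user's config
cp /path/to/test-cluster.kubeconfig "${TMP_KUBECONFIG}"

KUBECONFIG="${TMP_KUBECONFIG}" kubectl get pods --namespace=test-namespace
</code></pre>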
| DazWilkin |
<p>I'm using <code>kubectl run</code> with environment parameters to create temporary docker containers for me (e.g. some forwarding for debugging purposes).
For several weeks now, <code>kubectl</code> has been complaining about <code>kubectl run</code> being deprecated. Unfortunately I can't find an appropriate replacement.</p>
<p>This is the old command:</p>
<pre><code>$KUBECTL run -i -t --attach=false --image djfaze/port-forward --env="REMOTE_HOST=$REMOTE_HOST" --env="REMOTE_PORT=$REMOTE_PORT" $POD_NAME
</code></pre>
<p>When issuing this, <code>kubectl</code> complains with this message:</p>
<blockquote>
<p><code>kubectl run --generator=deployment/apps.v1beta1</code> is DEPRECATED and will be removed in a future version. Use kubectl create instead.</p>
</blockquote>
<p>Any ideas how to replace this run command?</p>
| peez80 | <p>As the author of the problem let me explain a little bit the intention behind this deprecation. Just like Brendan explains in <a href="https://stackoverflow.com/a/52902113">his answer</a>, <code>kubectl run</code> per se is not being deprecated, only all the generators, except for the one that creates a Pod for you.</p>
<p>The reason for this change is twofold:</p>
<ol>
<li><p>The vast majority of input parameters for the <code>kubectl run</code> command are overwhelming for newcomers, as well as for old timers. It's not that easy to figure out what the result of your invocation will be. You need to take into consideration several passed options as well as the server version.</p></li>
<li><p>The code behind it is also a mess to maintain given the matrix of possibilities is growing faster than we can handle.</p></li>
</ol>
<p>That's why we're trying to move people away from using <code>kubectl run</code> for their daily workflows and convince them that using explicit <code>kubectl create</code> commands is more straightforward. Finally, we want newcomers who have played with docker or any other container engine, where a command runs a container, to have the same experience with Kubernetes, where <code>kubectl run</code> will just run a Pod in a cluster.</p>
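<p>For the command in the question, a rough equivalent with explicit commands might look like the following sketch (on current versions, plain <code>kubectl run</code> with the same flags would instead just create a single Pod):</p>
<pre><code>$KUBECTL create deployment $POD_NAME --image djfaze/port-forward
$KUBECTL set env deployment/$POD_NAME REMOTE_HOST=$REMOTE_HOST REMOTE_PORT=$REMOTE_PORT
</code></pre>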
<p>Sorry for the initial confusion and I hope this will clear things up.</p>
<p>UPDATE (2020/01/10): As of <a href="https://github.com/kubernetes/kubernetes/pull/87077" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/87077</a> <code>kubectl run</code> will ONLY create Pods. All generators will be removed entirely. </p>
| soltysh |
<p>I'm looking for more detailed guidance / other people's experience of using Npgsql in production with Pgbouncer.</p>
<p>Basically we have the following setup using GKE and Google Cloud SQL:</p>
<p><a href="https://i.stack.imgur.com/CeOGy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CeOGy.png" alt="enter image description here"></a></p>
<p>Right now - I've got npgsql configured as if pgbouncer wasn't in place, using a local connection pool. I've added pgbouncer as a deployment in my GKE cluster as Google SQL has very low max connection limits - and to be able to scale my application horizontally inside of Kubernetes I need to protect against overwhelming it.</p>
<p>My problem is one of reliability when one of the pgbouncer pods dies (due to a node failure or as I'm scaling up/down). </p>
<p>When that happens (1) all of the existing open connections from the client side connection pools in the application pods don't immediately close (2) - and basically result in exceptions to my application as it tries to execute commands. Not ideal!</p>
<p>As I see it (and looking at the advice at <code>https://www.npgsql.org/doc/compatibility.html</code>) I have three options.</p>
<ol>
<li><p><strong>Live with it, and handle retries of SQL commands within my application.</strong> Possible, but seems like a lot of effort and creates lots of possible bugs if I get it wrong.</p></li>
<li><p><strong>Turn on keep alives and let npgsql itself 'fail out' relatively quickly the bad connections when those fail.</strong> I'm not even sure if this will work or if it will cause further problems.</p></li>
<li><p><strong>Turn off client side connection pooling entirely.</strong> This seems to be the official advice, but I am loathe to do this for performance reasons, it seems very wasteful for Npgsql to have to open a connnection to pgbouncer for each session - and runs counter to all of my experience with other RDBMS like SQL Server.</p></li>
</ol>
<p>Am I on the right track with one of those options? Or am I missing something?</p>
| Kieran Benton | <p>You are generally on the right track and your analysis seems accurate. Some comments:</p>
<p>Option 2 (turning out keepalives) will help remove idle connections in Npgsql's pool which have been broken. As you've written your application will still some failures (as some bad idle connections may not be removed in time). There is no particular reason to think this would cause further problems - this should be pretty safe to turn on.</p>
<p>Option 3 is indeed problematic for perf, as a TCP connection to pgbouncer would have to be established every single time a database connection is needed. It will also not provide a 100% fail-proof mechanism, since pgbouncer may still drop out while a connection is in use.</p>
<p>At the end of the day, you're asking about resiliency in the face of arbitrary network/server failure, which isn't an easy thing to achieve. The only 100% reliable way to deal with this is in your application, via a dedicated layer which would retry operations when a transient exception occurs. You may want to look at <a href="https://github.com/App-vNext/Polly" rel="nofollow noreferrer">Polly</a>, and note that Npgsql helps our a bit by exposing an <a href="http://www.npgsql.org/doc/api/Npgsql.NpgsqlException.html#Npgsql_NpgsqlException_IsTransient" rel="nofollow noreferrer"><code>IsTransient</code></a> exception which can be used as a trigger to retry (Entity Framework Core also includes a similar "retry strategy"). If you do go down this path, note that transactions are particularly difficult to handle correctly.</p>
| Shay Rojansky |
<p>i have a lab environment with a bind server. The server manages the domain "lab.local" DNS Dynamic Update are configured. The lab client (windows and linux) are using the DNS server.</p>
<p>Now i would like do use a kubernetes cluster in our lab. Can i use the bind server with the zone "lab.local" with kubernetes? </p>
<p>For example: i would like to create a nginx pod and access it from my client over nginx.lab.local. I have looked at <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns</a> but i didn't find any Information how to use it with bind.</p>
| DanMar | <p>Once the nginx Pod has been created, it will have a internal IP by default, not addressable from your lab network (only other pods can access it).</p>
<p>To access it from the lab network, expose it as a Service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a>; it will then be reachable on a port of each node's (externally routable) IP address. Then add an entry in the bind server for that address so everyone can access it using the URL.</p>
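<p>A sketch of that flow, assuming an existing nginx Deployment; IPs and ports are illustrative:</p>
<pre><code># Expose the Deployment on a port of every node
kubectl expose deployment nginx --type=NodePort --port=80

# Find the assigned node port (e.g. 30080)
kubectl get service nginx

# Then add an A record in the lab.local zone pointing at a node IP,
# e.g. nginx.lab.local. IN A 192.168.1.10
# and reach the app at http://nginx.lab.local:30080
</code></pre>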
<p>There are other and better ways also of exposing a Service by using a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">Load Balancer</a> or an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>. For those who are new or getting started with K8S, exposing the Pod using NodePort is the easiest to see some quick results.</p>
| Praveen Sripati |
<p>I am working on a java service that basically creates files in a network file system to store data. It runs in a k8s cluster in a Ubuntu 18.04 LTS.
When we began to limit the memory in kubernetes (limits: memory: 3Gi), the pods began to be OOMKilled by kubernetes.</p>
<p>At the beginning we thought it was a memory leak in the Java process, but analyzing more deeply we noticed that the problem is the kernel memory.
We validated that by looking at the file /sys/fs/cgroup/memory/memory.kmem.usage_in_bytes</p>
<p>We isolated the case to only create files (without java) with the DD command like this:</p>
<pre><code>for i in {1..50000}; do dd if=/dev/urandom bs=4096 count=1 of=file$i; done
</code></pre>
<p>And with the dd command we saw that the same thing happened ( the kernel memory grew until OOM).
After k8s restarted the pod, I got doing a describe pod:</p>
<ul>
<li>Last State:Terminated</li>
<li>Reason: OOMKilled</li>
<li>Exit Code: 143</li>
</ul>
<p>Creating files causes the kernel memory to grow, and deleting those files causes the memory to decrease. But our service stores data, so it creates a lot of files continuously, until the pod is killed and restarted because it is OOMKilled.</p>
<p>We tested limiting the kernel memory using standalone Docker with the --kernel-memory parameter and it worked as expected. The kernel memory grew to the limit and did not rise anymore. But we did not find any way to do that in a Kubernetes cluster.
Is there a way to limit the kernel memory in a K8S environment?
Why does the creation of files cause the kernel memory to grow, and why is it not released?</p>
| Pablo Hadziatanasiu | <p>Thanks for all this info, it was very useful!</p>
<p>On my app, I solved this by creating a new side container that runs a cron job, every 5 minutes with the following command:</p>
<pre><code>echo 3 > /proc/sys/vm/drop_caches
</code></pre>
<p>(note that you need the side container to run in privileged mode)</p>
<p>It works nicely and has the advantage of being predictable: every 5 minutes, your memory cache will be cleared.</p>
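<p>A minimal sketch of such a sidecar, added alongside the main container under <code>spec.containers</code> (the name, image and 5-minute interval are illustrative); it must run privileged so it is allowed to write to <code>/proc/sys/vm/drop_caches</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: drop-caches
  image: busybox
  command: ["sh", "-c", "while true; do sleep 300; echo 3 > /proc/sys/vm/drop_caches; done"]
  securityContext:
    privileged: true
</code></pre>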
| Cyrille99 |
<p>I'm a k8s beginner, and struggling with the error below.</p>
<pre><code>E0117 18:24:47.596238 53015 portforward.go:400]
an error occurred forwarding 9999 -> 80: error forwarding port 80 to pod XXX,
uid : exit status 1: 2020/01/17 09:24:47 socat[840136] E connect(5, AF=2 127.0.0.1:80, 16): Connection refused
</code></pre>
<p>I don't even know what the error means, let alone its cause. Does anyone know in which situations this error occurs?</p>
<p>This error occurs while working through GCP's Deployment Manager tutorial, following the tutorial project GCP provides.</p>
<p><a href="https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/gke" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/gke</a></p>
<p>Error occurs when typing this command.</p>
<pre><code>curl localhost:9999
</code></pre>
<p>If anything is unclear or extra information is required, please let me know.
Thanks in advance!</p>
| Taruya | <p>The error is telling you that there's nothing listening on port <strong>80</strong> inside the pod. You should check the pod state:</p>
<pre><code>kubectl get pods
</code></pre>
<p>It will also tell you which port(s) the pod (its containers) is listening to. </p>
<p>Maybe it has crashed. Also check the log of the pod:</p>
<pre><code>kubectl logs <pod-name>
</code></pre>
<p>Btw. Google's Deployment Manager is a very special kind of a tool. Google itself suggests to use Terraform instead. It's nevertheless part of their certification exams.</p>
| HefferWolf |
<p>I have a running cluster on Google Cloud Kubernetes engine and I want to access that using kubectl from my local system.</p>
<p>I tried installing kubectl with gcloud but it didn't work. Then I installed kubectl using apt-get. When I try to see its version using kubectl version it says</p>
<p>Unable to connect to server EOF. I also don't have the file ~/.kube/config, and I'm not sure why. Can someone please tell me what I am missing here? How can I connect to the already running cluster in GKE?</p>
| tank | <p><code>gcloud container clusters get-credentials ...</code> will auth you against the cluster using your gcloud credentials.</p>
<p>If successful, the command adds appropriate configuration to <code>~/.kube/config</code> such that you can <code>kubectl</code>.</p>
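<p>For example (cluster name, zone and project are placeholders):</p>
<pre class="lang-sh prettyprint-override"><code>gcloud container clusters get-credentials my-cluster \
  --zone=us-central1-a \
  --project=my-project

kubectl get nodes
</code></pre>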
| DazWilkin |
<p>I'm trying to implement a Python script which collects and parses the image version and secretName from the pod manifest of each pod in 2 different Kubernetes clusters; if there are any differences between the 2 clusters, an alert should be sent. These metrics for the 2 clusters should then be scraped by an instance of Victoria Metrics.
The problem is this: if I check <code>kubectl describe pod_name</code>, its output contains the field secretName:</p>
<pre><code>Volumes:
cacert:
Type: Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
EndpointsName: glusterfs-cluster
Path: test/jvm/cert
ReadOnly: false
service-conf-secrets:
Type: Projected (a volume that contains injected data from multiple sources)
SecretName: example-app-1.25.01-57409t3
SecretOptionalName: <nil>
</code></pre>
<p>But if I use <code>kubernetes.client.CoreV1Api</code> and its function <code>list_pod_for_all_namespaces</code>, I can't find secretName in its output at all.</p>
<p>Where can I find and parse this field so I can produce Prometheus-format metrics from it?</p>
| Garamoff | <p>Here's an example.</p>
<p>I've includes comment references to the Python SDK's implementation of the Kubernetes types as well as the type hints for these types to help explain the use of the properties.</p>
<p>I've included the full enumeration of <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1VolumeProjection.md" rel="nofollow noreferrer"><code>V1VolumeProjection</code></a> names including <code>secret</code> (<a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1SecretProjection.md" rel="nofollow noreferrer"><code>V1SecretProjection</code></a>) for completeness.</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client,config
config.load_kube_config()
v1 = client.CoreV1Api()
# https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PodList.md
pod_list: client.V1PodList = v1.list_pod_for_all_namespaces(watch=False)
# Iterate over returned items (if any)
pods: list[client.V1Pod] = pod_list.items
for pod in pods:
metadata: client.V1ObjectMeta = pod.metadata
# https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PodSpec.md
spec: client.V1PodSpec = pod.spec
print(f"{metadata.namespace}/{metadata.name} [{pod.status.pod_ip}]")
# if pod.metadata.namespace=="shell-operator" and pod.metadata.name=="pods":
    # Iterate over volumes (if any)
# https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Volume.md
volumes: list[client.V1Volume] = spec.volumes
for volume in volumes:
if volume.projected:
projected: client.V1ProjectedVolumeSource = volume.projected
# https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1VolumeProjection.md
sources: list[client.V1VolumeProjection] = projected.sources
for source in sources:
# if source.config_map:
# if source.downward_api:
if source.secret:
# https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1SecretProjection.md
secret: client.V1SecretProjection = source.secret
print(secret.name)
items: list[client.V1KeyToPath] = secret.items
for i in items:
path: str = i.path
print(path)
# if source.service_account_token:
</code></pre>
| DazWilkin |
<p>Is there a way to make Kubernetes Pods aware of new file changes?</p>
<p>Let's say I have a Kubernetes (K8s) pod running with 4 replicas, and I also have a K8s PV created and attached to an external file system where we can modify files. Let's consider that the K8s pod is running
a Tomcat server with an application named test_app, which is located in the following directory inside the container:
tomcat/webapps/test_app/
Inside the test_app directory, I have a few sub-directories like below:
test_app/xml
test_app/properties
test_app/jsp</p>
<p>All these sub-directories are attached to a volume which is mounted on an external file system. Anyone who has access to the external file system can update the xml / properties / jsp files.
When these files are changed on the external file system, the changes are reflected inside the sub-directories test_app/xml, test_app/properties and test_app/jsp, since we have a PV attached. But these changes are not reflected in the web application unless we restart the Tomcat server. To restart the Tomcat server, we need to restart the pod.</p>
<p>So whenever someone makes changes to the files in the external file system, how do I make K8s aware that there are new changes which require the Pods to be restarted?
Is it even possible in Kubernetes right now?</p>
| babs84 | <p>If you are referring to file changes meaning changes to your application, the best practice is to bake a container image with your application code, and push a new container image when you need to deploy new code. You can do this by modifying your Kubernetes deployment to point to the latest digest hash.</p>
<p>For instance, in a deployment YAML file:</p>
<p><code>image: myimage@sha256:digest0</code></p>
<p>becomes</p>
<p><code>image: myimage@sha256:digest1</code></p>
<p>and then <code>kubectl apply</code> would be one way to do it.</p>
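<p>As a sketch (names and the digest are placeholders), you can either edit the manifest and re-apply it, or patch the image directly:</p>
<pre class="lang-sh prettyprint-override"><code># Re-apply an edited manifest
kubectl apply -f deployment.yaml

# ...or update just the image of an existing Deployment
kubectl set image deployment/myapp myapp=myimage@sha256:digest1
</code></pre>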
<p>You can read more about using container images with Kubernetes <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">here</a>.</p>
| Alex Watt |
<p>I've been following <a href="https://www.youtube.com/watch?v=DwlIn9zOcfc" rel="nofollow noreferrer">tutorial videos</a> and trying to understand to build a small minimalistic application. The videos I followed are pulling containers from the registries while I'm trying to test, build and deploy everything locally at the moment if possible. Here's my setup.</p>
<ol>
<li><p>I've the latest docker installed with Kubernetes enabled on mac OS.</p></li>
<li><p>A helloworld NodeJS application running with Docker and Docker Compose</p></li>
</ol>
<p><strong>TODO:</strong> I'd like to be able to start my instances, let's say 3 in the kubernetes cluster</p>
<hr>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM node:alpine
COPY package.json package.json
RUN npm install
COPY . .
CMD ["npm", "start"]
</code></pre>
<hr>
<p><strong>docker-compose.yml</strong></p>
<pre><code>version: '3'
services:
user:
container_name: users
build:
context: ./user
dockerfile: Dockerfile
</code></pre>
<hr>
<p>I'm creating a deployment file with the help of this <a href="https://www.mirantis.com/blog/introduction-to-yaml-creating-a-kubernetes-deployment/" rel="nofollow noreferrer">tutorial</a>; it may have problems since I'm merging information from both YouTube and the web link.</p>
<p>I'm creating a minimalistic YAML file just to get up and running; I will study other aspects like readiness and liveness later.</p>
<hr>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: user
spec:
selector:
app: user
ports:
- port: 8080
type: NodePort
</code></pre>
<p>Please review the above yml file for correctness, so the question is what do I do next?</p>
| AppDeveloper | <p>The snippets you provide are regrettably insufficient but you have the basics.</p>
<p>I had a Google for you for a tutorial and -- unfortunately -- nothing obvious jumped out. That doesn't mean that there isn't one, just that I didn't find it.</p>
<p>You've got the right idea and there are quite a few levels of technology to understand but, I commend your approach and think we can get you there.</p>
<ol>
<li>Let's start with a helloworld Node.JS tutorial</li>
</ol>
<p><a href="https://nodejs.org/en/docs/guides/getting-started-guide/" rel="nofollow noreferrer">https://nodejs.org/en/docs/guides/getting-started-guide/</a></p>
<ol start="2">
<li>Then you want to containerize this</li>
</ol>
<p><a href="https://nodejs.org/de/docs/guides/nodejs-docker-webapp/" rel="nofollow noreferrer">https://nodejs.org/de/docs/guides/nodejs-docker-webapp/</a></p>
<p>For #3 below, the last step here is:</p>
<pre class="lang-sh prettyprint-override"><code>docker build --tag=<your username>/node-web-app .
</code></pre>
<p>But, because you're using Kubernetes, you'll want to push this image to a public repo. This is so that, regardless of where your cluster runs, it will be able to access the container image.</p>
<p>Since the example uses DockerHub, let's continue using that:</p>
<pre class="lang-sh prettyprint-override"><code>docker push <your username>/node-web-app
</code></pre>
<p><strong>NB</strong> There's an implicit <code>https://docker.io/<your username>/node-web-app:latest</code> here</p>
<ol start="3">
<li>Then you'll need a Kubernetes cluster into which you can deploy your app</li>
</ol>
<ul>
<li>I think <a href="https://microk8s.io/" rel="nofollow noreferrer">microk8s</a> is excellent</li>
<li>I'm a former Googler but <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">Kubernetes Engine</a> is the benchmark (requires $$$)</li>
<li>Big fan of DigitalOcean too and it has <a href="https://www.digitalocean.com/products/kubernetes/" rel="nofollow noreferrer">Kubernetes</a> (also $$$)</li>
</ul>
<p>My advice is (except microk8s and minikube) don't ever run your own Kubernetes clusters; leave it to a cloud provider.</p>
<ol start="4">
<li>Now that you have all the pieces, I recommend you just:</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>kubectl run yourapp \
--image=<your username>/node-web-app:latest \
--port=8080 \
--replicas=1
</code></pre>
<p>I believe <code>kubectl run</code> is deprecated but use it anyway. It will create a Kubernetes Deployment (!) for you with 1 Pod (==replica). Feel free to adjust that value (perhaps <code>--replicas=2</code>) if you wish.</p>
<p>Once you've created a Deployment, you'll want to create a Service to make your app accessible (top of my head) this command is:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl expose deployment/yourapp --type=NodePort
</code></pre>
<p>Now you can query the service:</p>
<pre><code>kubectl get services/yourapp
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
yourapp NodePort 10.152.183.27 <none> 80:32261/TCP 7s
</code></pre>
<p><strong>NB</strong> The NodePort that's been assigned (in this case!) is <code>:32261</code> and so I can then interact with the app using <code>curl http://localhost:32261</code> (localhost because I'm using microk8s).</p>
<p><code>kubectl</code> is powerful. Another way to determine the NodePort is:</p>
<pre><code>kubectl get service/yourapp \
--output=jsonpath="{.spec.ports[0].nodePort}"
</code></pre>
<p>The advantage of the approach of starting from <code>kubectl run</code> is you can then easily determine the Kubernetes configuration that is needed to recreate this Deployment|Service by:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get deployment/yourapp \
  --output=yaml \
> ./yourapp.deployment.yaml
kubectl get service/yourapp \
  --output=yaml \
> ./yourapp.service.yaml
</code></pre>
<p>These commands will interrogate the cluster, retrieve the configuration for you and pump it into the files. It will include some instance data too but the gist of it shows you what you would need to recreate the deployment. You will need to edit this file.</p>
<p>But, you can test this by first deleting the deployment and the service and then recreating it from the configuration:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete deployment/yourapp
kubectl delete service/yourapp
kubectl apply --filename=./yourapp.deployment.yaml
kubectl apply --filename=./yourapp.service.yaml
</code></pre>
<p><strong>NB</strong> You'll often see multiple resource configurations merged into a single YAML file. This is perfectly valid YAML but you only ever see it used by Kubernetes. The format is:</p>
<pre><code>...
some: yaml
---
...
some: yaml
---
</code></pre>
<p>Using this you could merge the <code>yourapp.deployment.yaml</code> and <code>yourapp.service.yaml</code> into a single Kubernetes configuration.</p>
| DazWilkin |
<p>In case of cloud managed kubernetes, whether AWS EKS, Azure AKS or Google GKE, the option to use customer managed key always comes at the cost of storing the customer master key in the cloud provider's own vault/KMS (e.g. aws kms or azure vault). In this case the cloud provider still has access to customer encryption key (or at least it resides in the cloud environment).</p>
<p>What would be an ideal implementation for deploying the application in k8s environment and encrypting the storage with customer provided key but the knowledge of the keys should only be at customer side i.e. not stored anywhere inside the cloud provider due to privacy concerns?</p>
| devcloud | <p>You could use a 3rd party kubernetes storage provider like portworx that will take you across clusters and keep data encrypted.
<a href="https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-pvcs/create-encrypted-pvcs/" rel="nofollow noreferrer">https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-pvcs/create-encrypted-pvcs/</a></p>
| Illusionist |
<p>I have a question: I created my ConfigMap but I don't know how to apply/use it in k8s.
If I execute the command</p>
<pre><code>kubectl get configmap configmap-cas-properties -o yaml
</code></pre>
<p>I can see my ConfigMap. However, I can't find any information on how to add it to k8s.</p>
<p>I'm going to use it with my pod, where I have already set</p>
<pre><code> envFrom:
- configMapRef:
name: configmap-cas-app-properties
name: configmap-cas-properties restartPolicy: never
</code></pre>
| morla | <p>The ConfigMaps can be set as Environment variables which can be read in the application. Another way is to mount the ConfigMaps as a Volume in the Container and the application read the data from there. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Here</a> is the documentation from K8S on the same.</p>
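<p>A sketch of both approaches in a Pod spec, using the ConfigMap name from the question (the container name and image are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
  - name: app
    image: my-image
    # 1) every key in the ConfigMap becomes an environment variable
    envFrom:
    - configMapRef:
        name: configmap-cas-properties
    # 2) every key becomes a file under /etc/config
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: configmap-cas-properties
</code></pre>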
| Praveen Sripati |
<p>Just for training purpose, I'm trying to inject those env variables with this ConfigMap in my Wordpress and Mysql app by using a File with a Volume.</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: wordpress-mysql
namespace: ex2
data:
wordpress.conf: |
WORDPRESS_DB_HOST mysql
WORDPRESS_DB_USER admin
WORDPRESS_DB_PASSWORD "1234"
WORDPRESS_DB_NAME wordpress
WORDPRESS_DB_PREFIX wp_
mysql.conf: |
MYSQL_DATABASE wordpress
MYSQL_USER admin
MYSQL_PASSWORD "1234"
MYSQL_RANDOM_ROOT_PASSWORD "1"
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mysql
name: mysql
namespace: ex2
spec:
ports:
- port: 3306
protocol: TCP
targetPort: 3306
selector:
app: mysql
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
namespace: ex2
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
volumeMounts:
- name: config
mountPath: "/etc/env"
readOnly: true
ports:
- containerPort: 3306
protocol: TCP
volumes:
- name: config
configMap:
name: wordpress-mysql
---
apiVersion: v1
kind: Service
metadata:
labels:
app: wordpress
name: wordpress
namespace: ex2
spec:
ports:
- nodePort: 30999
port: 80
protocol: TCP
targetPort: 80
selector:
app: wordpress
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
namespace: ex2
spec:
replicas: 1
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
containers:
- image: wordpress
name: wordpress
volumeMounts:
- name: config
mountPath: "/etc/env"
readOnly: true
ports:
- containerPort: 80
protocol: TCP
volumes:
- name: config
configMap:
name: wordpress-mysql
</code></pre>
<p>When I deploy the app the mysql pod fails with this error:<br />
<code>kubectl -n ex2 logs mysql-56ddd69598-ql229</code></p>
<blockquote>
<p>2020-12-26 19:57:58+00:00 [ERROR] [Entrypoint]: Database is
uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD</p>
</blockquote>
<p>I don't understand because I have specified everything in my configMap. I also have tried by using <code>envFrom</code> and <code>Single Env Variables</code> and it works just fine. I'm just having an issue with <code>File in a Volume</code></p>
| Kevin | <p>@DavidMaze is correct; you're mixing two useful features.</p>
<p>Using <code>test.yaml</code>:</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: wordpress-mysql
data:
wordpress.conf: |
WORDPRESS_DB_HOST mysql
WORDPRESS_DB_USER admin
WORDPRESS_DB_PASSWORD "1234"
WORDPRESS_DB_NAME wordpress
WORDPRESS_DB_PREFIX wp_
mysql.conf: |
MYSQL_DATABASE wordpress
MYSQL_USER admin
MYSQL_PASSWORD "1234"
MYSQL_RANDOM_ROOT_PASSWORD "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test
labels:
app: test
spec:
replicas: 1
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- image: busybox
name: test
args:
- ash
- -c
- while true; do sleep 15s; done
volumeMounts:
- name: config
mountPath: "/etc/env"
readOnly: true
volumes:
- name: config
configMap:
name: wordpress-mysql
</code></pre>
<p>Then:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply --filename=./test.yaml
kubectl exec --stdin --tty deployment/test -- ls /etc/env
mysql.conf wordpress.conf
kubectl exec --stdin --tty deployment/test -- more /etc/env/mysql.conf
MYSQL_DATABASE wordpress
MYSQL_USER admin
MYSQL_PASSWORD "1234"
MYSQL_RANDOM_ROOT_PASSWORD "1"
</code></pre>
<blockquote>
<p><strong>NOTE</strong> the files are missing (and should probably include) <code>=</code> between the variable and its value e.g. <code>MYSQL_DATABASE=wordpress</code></p>
</blockquote>
<p>So, what you have is a ConfigMap that represents 2 files (<code>mysql.conf</code> and <code>wordpress.conf</code>) and, if you use e.g. <code>busybox</code> and mount the ConfigMap as a volume, you can see that it includes 2 files and that the files contain the configurations.</p>
<p>So, <strong>if</strong> you can run e.g. WordPress or MySQL and pass a configuration file to them, you're good. But what you probably want to do is reference the ConfigMap entries as environment variables, per @DavidMaze's suggestion, i.e. run Pods with environment variables set from the ConfigMap entries:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data</a></p>
| DazWilkin |
<p>I have many pods in a Kubernetes system with random names like wordpress.xxx.xx.</p>
<p>Here the list of <a href="https://i.stack.imgur.com/k7Jxw.png" rel="nofollow noreferrer">pods</a></p>
<p>I want to use one command with <code>kubectl cp</code> in order to copy files to all pods from one deployment.</p>
<p>In my case I don't want to use volumes, because when they mount onto a path the existing content of that folder is hidden.</p>
<p>How to do that, please?</p>
<p>Thank you for your answer.</p>
| Ngo Quyet | <p>The <code>kubectl cp</code> command copies files to a single container in a pod. To copy files to multiple pods easily, the shell script below can be used.</p>
<pre><code>for pod in `kubectl get pods -o=name | grep wordpress | sed "s/^.\{4\}//"`; do echo "copying to $pod"; kubectl cp file.txt $pod:/; done
</code></pre>
<p>or</p>
<pre><code>for pod in `kubectl get pods -o=name | grep wordpress | sed "s/^.\{4\}//"`
do
echo "copying file to $pod"
kubectl cp file.txt $pod:/
done
</code></pre>
<p>Both the scripts are same, single vs multi-line.</p>
| Praveen Sripati |
<p>I'm trying to run the open source cachet status page within Kubernetes via this tutorial <a href="https://medium.com/@ctbeke/setting-up-cachet-on-google-cloud-817e62916d48" rel="nofollow noreferrer">https://medium.com/@ctbeke/setting-up-cachet-on-google-cloud-817e62916d48</a> </p>
<p>2 docker containers (cachet/nginx) and Postgres are deployed to a pod on GKE but the cachet container fails with the following CrashLoopBackOff error
<a href="https://i.stack.imgur.com/VLy1z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VLy1z.png" alt="Crash error"></a></p>
<p>Within the <a href="https://github.com/CachetHQ/Docker/blob/master/docker-compose.yml" rel="nofollow noreferrer">docker-compose.yml file</a> it's set to APP_KEY=${APP_KEY:-null}, and I'm wondering if I didn't set an environment variable I should have.</p>
<p><a href="https://i.stack.imgur.com/bw696.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bw696.png" alt="Stack driver logs"></a></p>
<p>Any help with configuring the cachet docker file would be much appreciated! <a href="https://github.com/CachetHQ/Docker" rel="nofollow noreferrer">https://github.com/CachetHQ/Docker</a> </p>
| JJ Nace | <p>Yes, you need to generate a key.</p>
<p>In the <code>entrypoint.sh</code> you can see that the bash script generates a key for you:</p>
<p><a href="https://github.com/CachetHQ/Docker/blob/master/entrypoint.sh#L188-L193" rel="nofollow noreferrer">https://github.com/CachetHQ/Docker/blob/master/entrypoint.sh#L188-L193</a></p>
<p>It seems there's a bug in the Dockerfile here. Generate a key manually and then set it as an environment variable in your manifest.</p>
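<p>Cachet is a Laravel application, so the key is a <code>base64:</code>-prefixed, base64-encoded 32-byte value. A sketch of generating one (the container name is a placeholder, and the <code>artisan</code> variant assumes the container's working directory is the app root):</p>
<pre class="lang-sh prettyprint-override"><code># Generate a random key locally...
echo "base64:$(head -c 32 /dev/urandom | base64)"

# ...or use Laravel's own generator inside a running container
docker exec <cachet-container> php artisan key:generate --show
</code></pre>
<p>Set the result as <code>APP_KEY</code> in your Deployment (ideally via a Kubernetes Secret) and keep it stable across restarts, since it is used for encryption.</p>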
<p>There's a helm chart you can use in development here: <a href="https://github.com/apptio/helmcharts/blob/cachet/devel/cachet/templates/secrets.yaml#L12" rel="nofollow noreferrer">https://github.com/apptio/helmcharts/blob/cachet/devel/cachet/templates/secrets.yaml#L12</a></p>
| jaxxstorm |
<p>I have a certain processing task that I want to solve with kubernetes. The basic concept is that there is a certain number of items in a work queue that I want to process. Items can be added to the queue and are deleted as soon as a pod finished processing one.
The preferred work flow would be:</p>
<ul>
<li>Defining a maximum number of pods(e.g.40)</li>
<li>push items to queue(e.g. 20)</li>
<li>number of pods is created according to number of items in queue(=>20)</li>
<li>while the pods are still processing on the 20 items another 40 items are pushed to the queue resulting in 20 more pods that are created(maximum number is reached), and as soon as the first ones finish additional pods are created until the end of the queue is reached.</li>
</ul>
<p>Is there any built-in solution using kubectl? Using the job pattern I can define the number of parallel Pods, but those run all the time until success and are not scaled up based on other criteria.</p>
<p>Thanks for your help!</p>
| Fabian83 | <p>Use a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a>. You may have to define Custom Metrics for getting the number of items in the Queue and use it in HPA.</p>
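<p>A sketch of what such an HPA could look like once the queue length is exposed as an external metric; the metric name depends entirely on your metrics adapter, so everything here is a placeholder:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 1
  maxReplicas: 40
  metrics:
  - type: External
    external:
      metric:
        name: queue_messages_ready   # hypothetical metric from your adapter
      target:
        type: AverageValue
        averageValue: "1"            # aim for ~1 queued item per pod
</code></pre>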
| Praveen Sripati |
<p>My requirement is to monitor the helpdesk system of the company which is running inside the Kubernetes cluster, for example, URL <a href="https://xyz.zendesk.com" rel="nofollow noreferrer">https://xyz.zendesk.com</a></p>
<p>They provide their <a href="https://developer.zendesk.com/api-reference/status_api/status_api/" rel="nofollow noreferrer">API set</a> to monitor this efficiently.</p>
<p>We can easily check the status using <strong>curl</strong></p>
<pre><code>$ curl -s "https://status.zendesk.com/api/components/support?domain=xyz.zendesk.com" | jq '.active_incidents'
[]
</code></pre>
<p>The above output means success status according to <strong>zendesk</strong> documentation.</p>
<p>Now the main part is, the company uses Prometheus to monitor everything.</p>
<p>How can I have Prometheus check the success status from the output of this curl command?</p>
<p>I did some research already and found somewhat related threads <a href="https://stackoverflow.com/questions/68659153/prometheus-monitoring-command-output-in-container">here</a> and using <a href="https://github.com/prometheus/pushgateway" rel="nofollow noreferrer">pushgateway</a></p>
<p>Are they applicable to my requirement, or am I going down the wrong route?</p>
| vjwilson | <p>What you probably (!?) want is something that:</p>
<ol>
<li>Provides an HTTP(s) (e.g. <code>/metrics</code>) endpoint</li>
<li>Producing metrics in Prometheus' exposition format</li>
<li>From Zendesk's API</li>
</ol>
<blockquote>
<p><strong>NOTE</strong> <code>curl</code> only gives you #3</p>
</blockquote>
<p>There are some examples of solutions that appear to meet the requirements but none is from Zendesk:</p>
<p><a href="https://www.google.com/search?q=%22zendesk%22+prometheus+exporter" rel="nofollow noreferrer">https://www.google.com/search?q=%22zendesk%22+prometheus+exporter</a></p>
<p>There are >2 other lists of Prometheus exporters (neither contains Zendesk):</p>
<ul>
<li><a href="https://prometheus.io/docs/instrumenting/exporters/" rel="nofollow noreferrer">https://prometheus.io/docs/instrumenting/exporters/</a></li>
<li><a href="https://github.com/prometheus/prometheus/wiki/Default-port-allocations" rel="nofollow noreferrer">https://github.com/prometheus/prometheus/wiki/Default-port-allocations</a></li>
</ul>
<p>I recommend you contact Zendesk and ask whether there's a Prometheus Exporter already. It's surprising to not find one.</p>
<p>It is straightforward to write a Prometheus Exporter. Prometheus <a href="https://prometheus.io/docs/instrumenting/clientlibs/" rel="nofollow noreferrer">Client libraries</a> and Zendesk <a href="https://developer.zendesk.com/documentation/ticketing/api-clients/introduction/" rel="nofollow noreferrer">API client</a> are available and preferred. While it's possible, bash is probably sub-optimal.</p>
<p>If <strong>all</strong> you want to do is GET that static endpoint, get a 200 response code and confirm that the body is <code>[]</code>, you may be able to use Prometheus <a href="https://github.com/prometheus/blackbox_exporter" rel="nofollow noreferrer">Blackbox exporter</a></p>
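<p>A sketch of a Blackbox exporter module for that check; the module name and regexp are assumptions, and note the probe sees the raw JSON body, so it matches on the <code>active_incidents</code> field rather than on jq's output:</p>
<pre class="lang-yaml prettyprint-override"><code>modules:
  zendesk_support_status:
    prober: http
    http:
      valid_status_codes: [200]
      fail_if_body_not_matches_regexp:
      - '"active_incidents"\s*:\s*\[\]'
</code></pre>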
<blockquote>
<p><strong>NOTE</strong> Logging and monitoring tools often provide a higher-level tool that provides something analogous to a "universal translator", facilitating translation from 3rd-party systems' native logging|monitoring formats into some canonical form using config rather than code. Although in the logging space, <a href="https://www.fluentd.org/" rel="nofollow noreferrer">fluentd</a> is an example. To my knowledge, there is no such tool for Prometheus but I sense that there's an opportunity for someone to create one.</p>
</blockquote>
| DazWilkin |
<p>I have installed the Edge version of Docker for Windows 18.05.0-ce (Windows 10 Hyper-V) and enabled Kubernetes afterwards.<br>
On my other machine a kubectl context was created automatically, but on this new machine it was not.</p>
<pre><code>> kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
> kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
</code></pre>
<p>Can I somehow make Docker for Windows create the context?<br>
Or can I set it up manually?<br>
I am a little unsure how to get the information needed for the <code>kubectl config set-context</code> command.</p>
<p>I can run docker containers outside of Kubernetes.<br>
I see the Kubernetes containers running inside Docker.</p>
<pre><code>> docker ps
CONTAINER ID IMAGE COMMAND
8285ca0dd57a 353b8f1d102e "kube-scheduler --adβ¦"
3b25fdb0b7a6 40c8d10b2d11 "kube-controller-manβ¦"
e81db90fa68e e03746fe22c3 "kube-apiserver --adβ¦"
2f19e723e0eb 80cc5ea4b547 "/kube-dns --domain=β¦"
etc...
</code></pre>
| KasperT | <p>There is an issue with docker for windows when the <code>HOMEDRIVE</code> is set by a corporate policy.</p>
<p>If you set the <code>$KUBECONFIG</code> environment variable to <code>C:\Users\my_username\.kube\config</code> (make sure the <code>$HOME</code> environment variables expand, don't use <code>$HOME</code> itself.), it should work.</p>
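<p>For example, from PowerShell (the path is the expanded form of your home directory):</p>
<pre><code># current session only
$env:KUBECONFIG = "C:\Users\my_username\.kube\config"

# persist it for the user (takes effect in new shells)
setx KUBECONFIG "C:\Users\my_username\.kube\config"
</code></pre>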
<p>Further info: <a href="https://github.com/docker/for-win/issues/1651" rel="nofollow noreferrer">https://github.com/docker/for-win/issues/1651</a></p>
| jaxxstorm |
<p>I have RabbitMQ in my project, and I want the queues on one pod to also be on the other, and the information on one pod to be shared with the other pod. Is there a way to share the same volume so that both can read and write? I use GCloud.</p>
| Sermanes | <p>GCEPersistentDisk supports only ReadWriteOnce and ReadOnlyMany and not the ReadWriteMany access modes. So, it's not possible to share a volume across two containers in a RW mode. <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Here</a> is the documentation on the same.</p>
| Praveen Sripati |
<p>I made a Hadoop image based on CentOS using a Dockerfile. There are 4 nodes. I want to configure the cluster using ssh-copy-id. But an error has occurred.</p>
<pre><code>ERROR: ssh: connect to host [ip] port 22: Connection refused
</code></pre>
<p>How can I solve this problem?</p>
| K.k | <p><code>ssh</code> follows a client-server architecture. So, the <code>openssh-server</code> has to be installed in the container. Now <code>ssh-copy-id</code> and other commands should run if the ip address is routable.</p>
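<p>A sketch of the Dockerfile additions for a CentOS 7 base image (package names may differ on other bases, and you will still need a password or authorized key for the account you copy to):</p>
<pre><code>RUN yum install -y openssh-server openssh-clients && ssh-keygen -A

# run sshd in the foreground (or start it alongside your Hadoop daemons)
CMD ["/usr/sbin/sshd", "-D"]
</code></pre>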
| Praveen Sripati |
<p>how can I describe this command in yaml format?</p>
<pre><code>kubectl create configmap somename --from-file=./conf/nginx.conf
</code></pre>
<p>I'd expect to do something like the following yaml, but it doesn't work </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: somename
namespace: default
fromfile: ./conf/nginx.conf
</code></pre>
<p>any idea?</p>
| Maoz Zadok | <p>That won't work, because kubernetes isn't aware of the local file's path. You can simulate it by doing something like this:</p>
<pre><code>kubectl create configmap --dry-run=client somename --from-file=./conf/nginx.conf --output yaml
</code></pre>
<p>The <code>--dry-run</code> flag will simply show your changes on stdout, and not make the changes on the server. This will output a valid configmap, so if you pipe it to a file, you can use that:</p>
<pre><code>kubectl create configmap --dry-run=client somename --from-file=./conf/nginx.conf --output yaml | tee somename.yaml
</code></pre>
| jaxxstorm |
<p>I am trying to fully purge my kube env, but sometimes when I run <code>helm delete --purge</code> some pods don't delete.
<br>
<br>
Are there any issues with using <code>kubectl delete pods --grace-period=0 --force</code>?
Or will using this command over and over lead to any issues on my cluster or nodes?</p>
| user3292394 | <p>According to the K8S documentation <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete" rel="nofollow noreferrer">here</a>, depending on the application it might lead to corruption or inconsistency of the data, because of the duplication of pods until the node detects and kills one of them.</p>
<blockquote>
<p>Force deleting pods does not wait for confirmation that the pod's processes have been terminated, which can leave those processes running until the node detects the deletion and completes graceful deletion. If your processes use shared storage or talk to a remote API and depend on the name of the pod to identify themselves, force deleting those pods may result in multiple processes running on different machines using the same identification which may lead to data corruption or inconsistency. Only force delete pods when you are sure the pod is terminated, or if your application can tolerate multiple copies of the same pod running at once. Also, if you force delete pods the scheduler may place new pods on those nodes before the node has released those resources and causing those pods to be evicted immediately. </p>
</blockquote>
<p>So, it depends, if the pods are using any shared resources or not.</p>
| Praveen Sripati |
<p>I have set up a Kubernetes cluster using Kubernetes Engine on GCP to work on some data preprocessing and modelling using Dask. I installed Dask using Helm <a href="http://docs.dask.org/en/latest/setup/kubernetes-helm.html" rel="nofollow noreferrer">following these instructions</a>.</p>
<p>Right now, I see that there are two folders, <code>work</code> and <code>examples</code></p>
<p><a href="https://i.stack.imgur.com/7wIcs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7wIcs.png" alt="enter image description here"></a></p>
<p>I was able to execute the contents of the notebooks in the <code>example</code> folder confirming that everything is working as expected.</p>
<p>My questions now are as follows</p>
<ul>
<li>What are the suggested workflow to follow when working on a cluster? Should I just create a new notebook under <code>work</code> and begin prototyping my data preprocessing scripts? </li>
<li>How can I ensure that my work doesn't get erased whenever I upgrade my Helm deployment? Would you just manually move them to a bucket every time you upgrade (which seems tedious)? or would you create a simple vm instance, prototype there, then move everything to the cluster when running on the full dataset?</li>
</ul>
<p>I'm new to working with data in a distributed environment in the cloud so any suggestions are welcome.</p>
| PollPenn | <blockquote>
<p>What are the suggested workflow to follow when working on a cluster? </p>
</blockquote>
<p>There are many workflows that work well for different groups. There is no single blessed workflow.</p>
<blockquote>
<p>Should I just create a new notebook under work and begin prototyping my data preprocessing scripts?</p>
</blockquote>
<p>Sure, that would be fine.</p>
<blockquote>
<p>How can I ensure that my work doesn't get erased whenever I upgrade my Helm deployment? </p>
</blockquote>
<p>You might save your data to some more permanent store, like cloud storage, or a git repository hosted elsewhere.</p>
<blockquote>
<p>Would you just manually move them to a bucket every time you upgrade (which seems tedious)? </p>
</blockquote>
<p>Yes, that would work (and yes, it is)</p>
<blockquote>
<p>or would you create a simple vm instance, prototype there, then move everything to the cluster when running on the full dataset?</p>
</blockquote>
<p>Yes, that would also work.</p>
<h3>In Summary</h3>
<p>The Helm chart includes a Jupyter notebook server for convenience and easy testing, but it is no substitute for a full fledged long-term persistent productivity suite. For that you might consider a project like JupyterHub (which handles the problems you list above) or one of the many enterprise-targeted variants on the market today. It would be easy to use Dask alongside any of those.</p>
| MRocklin |
<p>We are trying to build a Kubernetes cluster on our private VMware infrastructure. I have the cluster up and running and an ingress running; however, I can't figure out how to route traffic to the ingress.</p>
<p>We are using Rancher 2.0.7. </p>
<p><strong>I would like to have the following setup if possible:</strong> </p>
<ol>
<li>DNSMadeEasy.com to handle DNS A Records (DNS to External IP)</li>
<li>Firewall we host (External IP to Static Private IP)</li>
<li>Kubernetes Ingress (Private IP to Cluster Load balanced Ingress)</li>
<li>Load Balanced Ingress (Ingress to Service with multiple instances)</li>
</ol>
<p>I can figure out the DNS and firewall routing, however I can't figure out how to set a static External IP address on the Ingress Load Balancer. </p>
<p>I can see you can specify a Host name in the Load balancer, however how does this become publicly available?
Could it be because we don;t have an external Load Balancer?<br>
What am I missing on setup of the Ingress/Load balancer?</p>
<p>Thank you in advance, I have spent about two weeks trying to get this to work.</p>
| Gneisler | <p>You need to be able to set the Ingress Service to <code>type=LoadBalancer</code>. With on-prem infrastructure, this typically requires you to have an external load balancer like an <a href="https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.0/" rel="nofollow noreferrer">F5</a>.</p>
<p>One option to have this working is to use <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLb</a></p>
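<p>A sketch of a layer-2 MetalLB configuration (this is the older ConfigMap-based format; the address range is a placeholder from your private network). MetalLB then hands out an address from that pool to any Service of <code>type=LoadBalancer</code>, and your firewall/DNS can point at that address:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.0.240-10.0.0.250
</code></pre>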
| jaxxstorm |
<p>I'm trying to add a custom function In Go template for parsing the time in PodStatus and getting the absolute time for it.</p>
<p>Example for the custom function:</p>
<pre><code>PodScheduled, _ := time.Parse(time.RFC3339, "2021-12-23T20:20:36Z")
Ready, _ := time.Parse(time.RFC3339, "2021-12-31T07:36:11Z")
difference := Ready.Sub(PodScheduled)
fmt.Printf("difference = %v\n", difference)
</code></pre>
<p>I can use the built-in functions.</p>
<p>How can I use a custom function with kubectl?</p>
<p>For example this lib:
<a href="https://github.com/Masterminds/sprig" rel="nofollow noreferrer">https://github.com/Masterminds/sprig</a></p>
<p>Thanks :)</p>
| Zipzap | <p>IIUC you have (at least) 3 options:</p>
<ol>
<li>Discouraged: Write your own client (instead of <code>kubectl</code>) that provides the functionality;</li>
<li>Encouraged: Use the shell to post-process the output of e.g. <code>kubectl get pods --output=json</code> by piping the result through:
<ul>
<li>Either your Golang binary that reads from standard input</li>
<li>Or better a general-purpose JSON processing tool like <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer"><code>jq</code></a> that would do this (and much more!)</li>
</ul>
</li>
<li>For completeness, <code>kubectl</code> supports output formatting (<code>--output=jsonpath...</code>); although <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer"><code>JSONPath</code></a> may be insufficient for this need;</li>
</ol>
<p>See jq's documentation for <a href="https://stedolan.github.io/jq/manual/#Dates" rel="nofollow noreferrer">Dates</a></p>
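<p>For example, a rough <code>jq</code> sketch (the pod name is a placeholder) that computes the seconds between the <code>PodScheduled</code> and <code>Ready</code> conditions of a pod:</p>
<pre><code>kubectl get pod my-pod --output=json | jq -r '
  .status.conditions
  | (map(select(.type=="Ready"))[0].lastTransitionTime | fromdate)
  - (map(select(.type=="PodScheduled"))[0].lastTransitionTime | fromdate)'
</code></pre>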
| DazWilkin |
<p>Somebody knows what source code and version were used to create docker image gcr.io/google_containers/kube2sky:<strong>1.15</strong>?</p>
<p>The latest version where I could find kube2sky in kubernetes repository is the branch release-1.2 in folder: <a href="https://github.com/kubernetes/kubernetes/tree/release-1.2/cluster/addons/dns/kube2sky" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/release-1.2/cluster/addons/dns/kube2sky</a></p>
<p>But looking at the kube2sky.go that is inside the docker container (doing docker exec) is not the same as the kube2sky.go of the github repository.</p>
<p>Please, can somebody help me?
Thanks.</p>
| Claudio Saavedra | <p>kube2sky is pretty old. It used to be stored in the kubernetes main repo, but was removed quite a while back.</p>
<p>You can find it here: <a href="https://github.com/kubernetes/kubernetes/tree/1c8140c2ac1fb7cb6ddbadc5e1efb3c0beefb8df/cluster/addons/dns/kube2sky" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/1c8140c2ac1fb7cb6ddbadc5e1efb3c0beefb8df/cluster/addons/dns/kube2sky</a></p>
<p>Version tag: <a href="https://github.com/kubernetes/kubernetes/blob/1c8140c2ac1fb7cb6ddbadc5e1efb3c0beefb8df/cluster/addons/dns/kube2sky/Makefile#L24" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/1c8140c2ac1fb7cb6ddbadc5e1efb3c0beefb8df/cluster/addons/dns/kube2sky/Makefile#L24</a></p>
<p>the last known version of kube2sky before it was replaced with kubedns seems to be here: <a href="https://github.com/kubernetes/kubernetes/blob/5762ebfc6318eabbe870b02239226ab74e2e699b/cluster/addons/dns/kube2sky/kube2sky.go" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/5762ebfc6318eabbe870b02239226ab74e2e699b/cluster/addons/dns/kube2sky/kube2sky.go</a></p>
<p>It was removed in this PR
<a href="https://github.com/kubernetes/kubernetes/pull/26335/files" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/26335/files</a></p>
| jaxxstorm |
<p>Kubernetes handles situations where there's a typo in the job spec (and therefore a container image can't be found) by leaving the job in a running state forever, so I've got a process that monitors job events to detect cases like this and deletes the job when one occurs.</p>
<p>I'd prefer to just stop the job so there's a record of it. Is there a way to stop a job?</p>
| Brent212 | <p>1) According to the K8S documentation <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>Finished Jobs are usually no longer needed in the system. Keeping them around in the system will put pressure on the API server. If the Jobs are managed directly by a higher level controller, such as CronJobs, the Jobs can be cleaned up by CronJobs based on the specified capacity-based cleanup policy.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#cronjobspec-v1beta1-batch" rel="nofollow noreferrer">Here</a> are the details for the failedJobsHistoryLimit property in the CronJobSpec.</p>
<p>This is another way of retaining the details of the failed job for a specific duration. The <code>failedJobsHistoryLimit</code> property can be set based on the approximate number of jobs run per day and the number of days the logs have to be retained. Agree that the Jobs will be still there and put pressure on the API server.</p>
<p>This is interesting. Once the job fails, as in the case of a typo in the image name, the pod gets deleted and the resources are not blocked or consumed anymore. It is not clear exactly what a <code>kubectl job stop</code> would achieve in this case. But when a Job with a proper image runs to success, I can still see the pod in <code>kubectl get pods</code>.</p>
<p>2) Another approach without using the CronJob is to specify the <code>ttlSecondsAfterFinished</code> as mentioned <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>Another way to clean up finished Jobs (either Complete or Failed) automatically is to use a TTL mechanism provided by a TTL controller for finished resources, by specifying the .spec.ttlSecondsAfterFinished field of the Job.</p>
</blockquote>
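<p>A minimal sketch of a Job using this (the image and command are just placeholders):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: example
        image: busybox
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never
</code></pre>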
| Praveen Sripati |
<p>To make ingress work <a href="https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078" rel="nofollow noreferrer">as far as I understand it</a> you need to create all services (that are using ingress controllers) as type NodePort.</p>
<p>Therefore is 2768 the service limit for ingress (maximum available NodePorts)?</p>
| Compendius | <p>Your understanding isn't necessarily correct.</p>
<p>It depends on your environment, cloud provider, ingress controller etc. Because the ingress controller is provisioned inside the cluster, all services that require an ingress can use ClusterIP, and the ingress controller will route traffic to them.</p>
<p>Again, depending on your platform, the only service that <em>needs</em> to be type=NodePort is the service attached to your ingress controller deployment. The rest can be ClusterIP.</p>
| jaxxstorm |
<p>When working on a local project, <code>from local_project.funcs import local_func</code> will fail in the cluster because <code>local_project</code> is not installed.</p>
<p>This forces me to develop everything in the same file.</p>
<p>Solutions? Is there a way to "import" the contents of the module into the working file so that the cluster doesn't need to import it?</p>
<p>Installing the <code>local_project</code> in the cluster is not development friendly because any change in an imported feature requires a cluster redeploy.</p>
<pre class="lang-py prettyprint-override"><code>import dask
from dask_kubernetes import KubeCluster, make_pod_spec
from local_project.funcs import local_func
pod_spec = make_pod_spec(
image="daskdev/dask:latest",
memory_limit="4G",
memory_request="4G",
cpu_limit=1,
cpu_request=1,
)
cluster = KubeCluster(pod_spec)
df = dask.datasets.timeseries()
df.groupby('id').apply(local_func) #fails if local_project not installed in cluster
</code></pre>
| Nuno Silva | <p>Typically the solution to this is to make your own docker image. If you have only a single file, or an egg or zip file then you might also look into the <code>Client.upload_file</code> method</p>
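<p>A minimal sketch of the <code>upload_file</code> approach, assuming you first package <code>local_project</code> as a zip or egg (the filename is illustrative); each change to the project then only needs a re-upload rather than a cluster redeploy:</p>
<pre class="lang-py prettyprint-override"><code>from dask.distributed import Client

client = Client(cluster)
# Ships the archive to every worker and adds it to their sys.path,
# so `from local_project.funcs import local_func` resolves remotely.
client.upload_file("local_project.zip")
</code></pre>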
| MRocklin |
<p>I am trying to add a new key value pair to existing set of Annotations to a running Pod using the below example code:</p>
<pre><code>import (
"fmt"
"context"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/kubernetes"
"k8s.io/klog"
)
const (
configPath = "/home/test/.kube/config"
)
func main() {
client, _ := connect()
pod, _ := client.CoreV1().Pods("default").Get(context.TODO(), "nginx-pod",metav1.GetOptions{})
fmt.Println(pod.Name)
annotations := map[string]string{
"foo":"bar",
}
pod.SetAnnotations(annotations)
for name, value := range pod.GetAnnotations() {
fmt.Println("name := ", name, "value =", value)
}
}
func connect() (*kubernetes.Clientset, error) {
restconfig, err := clientcmd.BuildConfigFromFlags("", configPath)
if err != nil {
klog.Exit(err.Error())
}
clientset, err := kubernetes.NewForConfig(restconfig)
if err != nil {
klog.Exit(err.Error())
}
return clientset, nil
}
</code></pre>
<p>When I run the above code and use "oc describe pods/nginx-pod" I don't see the annotation "foo: bar" under the annotations.
What's the right way to add new annotations to an existing pod?</p>
| Niranjan M.R | <p>You're going to want something along these lines:</p>
<pre class="lang-golang prettyprint-override"><code>...
pod.SetAnnotations(annotations)
client.
CoreV1().
Pods("default").
Update(context.TODO(), pod, metav1.UpdateOptions{})
</code></pre>
<p>See: <a href="https://pkg.go.dev/k8s.io/[email protected]/kubernetes/typed/core/v1#PodInterface" rel="nofollow noreferrer">PodInterface</a></p>
| DazWilkin |
<p>I want to expose my kubernetes cluster with minikube. </p>
<p>consider my tree</p>
<pre><code>.
βββ deployment.yaml
βββ Dockerfile
βββ server.js
βββ service.yaml
</code></pre>
<p>I build my docker image locally and am able to run all pods via </p>
<pre><code>kubectl create -f deployment.yaml
kubectl create -f service.yaml
</code></pre>
<p>However, when I run</p>
<pre><code> $ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
nodeapp LoadBalancer 10.110.106.83 <pending> 80:32711/TCP 9m
</code></pre>
<p>There is no external IP to be able to connect to the cluster. I tried to expose one pod but the external IP stays none. Why is there no external IP?</p>
<pre><code> $ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nodeapp
labels:
app: nodeapp
spec:
replicas: 2
selector:
matchLabels:
app: nodeapp
template:
metadata:
labels:
app: nodeapp
spec:
containers:
- name: hello-node
image: hello-node:v2
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
</code></pre>
<p>and </p>
<pre><code> cat service.yaml
kind: Service
apiVersion: v1
metadata:
name: nodeapp
spec:
selector:
app: nodeapp
ports:
- name: http
port: 80
targetPort: 3000
protocol: TCP
type: LoadBalancer
$ cat server.js
var http = require('http');
var handleRequest = function(request, response) {
console.log('Received request for URL: ' + request.url);
response.writeHead(200);
response.end('Hello User');
};
var www = http.createServer(handleRequest);
</code></pre>
| A.Dumas | <p>According to the K8S documentation <a href="https://kubernetes.io/docs/concepts/services-networking/#loadbalancer" rel="noreferrer">here</a>, <code>type=LoadBalancer</code> can be used on AWS, GCP and other supported clouds, not on Minikube.</p>
<blockquote>
<p>On cloud providers which support external load balancers, setting the type field to LoadBalancer will provision a load balancer for your Service.</p>
</blockquote>
<p>Specify the type as NodePort as mentioned <a href="https://kubernetes.io/docs/concepts/services-networking/#nodeport" rel="noreferrer">here</a> and the service will be exposed on a port on the Minikube node. Then the service can be accessed by using the URL from the host OS:</p>
<blockquote>
<p>minikube service nodeapp --url</p>
</blockquote>
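<p>For example, the <code>service.yaml</code> from the question only needs its type changed:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: nodeapp
spec:
  selector:
    app: nodeapp
  ports:
  - name: http
    port: 80
    targetPort: 3000
    protocol: TCP
  type: NodePort
</code></pre>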
| Praveen Sripati |
<p>We currently have 2 Kubernetes clusters:</p>
<ul>
<li><p>One setup with Kops running on AWS</p></li>
<li><p>One setup with Kubeadm running on our own hardware</p></li>
</ul>
<p>We want to combine them to only have a single cluster to manage. </p>
<p>The master could end up being on AWS or on our servers, both are fine.</p>
<p>We can't find a way to add nodes configured with one cluster to the other. </p>
<ul>
<li><p><code>kubeadm</code> is not made available on nodes setup with Kops, so we can't do eg <code>kubeadm token create --print-join-command</code></p></li>
<li><p>Kops doesn't seem to have utilities to let us add arbitrary nodes, see <a href="https://stackoverflow.com/questions/50248179/how-to-add-an-node-to-my-kops-cluster-node-in-here-is-my-external-instance">how to add an node to my kops cluster? (node in here is my external instance)</a></p></li>
</ul>
<p>This issue raises the same question but was left unanswered: <a href="https://github.com/kubernetes/kops/issues/5024" rel="nofollow noreferrer">https://github.com/kubernetes/kops/issues/5024</a></p>
| MasterScrat | <p>You can join the nodes manually, but this is really not a recommended way of doing things.</p>
<p>If you're using kubeadm, you probably already have all the relevant components installed on the workers to have them join in a valid way. What I'd say the process to follow is:</p>
<p>run <code>kubeadm reset</code> on the on-prem node in question</p>
<p>login to the kops node, and examine the kubelet configuration:</p>
<p><code>systemctl cat kubelet</code></p>
<p>In here, you'll see the kubelet config is specified at <code>/etc/sysconfig/kubelet</code>. You'll need to copy that file and ensure the on-prem node has it in its systemd startup config</p>
<p>Copy the relevant config over to the on-prem node. You'll need to remove any references to the AWS cloud provider stuff, as well as make sure the hostname is valid. Here's an example config I copied from a kops node, and modified:</p>
<pre><code>DAEMON_ARGS="--allow-privileged=true --cgroup-root=/ --cluster-dns=100.64.0.10 --cluster-domain=cluster.local --enable-debugging-handlers=true - --feature-gates=ExperimentalCriticalPodAnnotation=true --hostname-override=<my_dns_name> --kubeconfig=/var/lib/kubelet/kubeconfig --network-plugin=cni --node-labels=kops.k8s.io/instancegroup=onpremnodes,kubernetes.io/role=node,node-role.kubernetes.io/node= --non-masquerade-cidr=100.64.0.0/10 --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 --pod-manifest-path=/etc/kubernetes/manifests --register-schedulable=true --v=2 --cni-bin-dir=/opt/cni/bin/ --cni-conf-dir=/etc/cni/net.d/"
HOME="/root"
</code></pre>
<p>Also, examine the kubelet kubeconfig configuration (it should be at <code>/var/lib/kubelet/kubeconfig</code>). This is the config which tells the kubelet which API server to register with. Ensure that it exists on the on-prem node.</p>
<p>This should get your node joining the API. You may have to go through some debugging as you go through this process.</p>
<p>I really don't recommend doing this though, for the following reasons:</p>
<ul>
<li>Unless you use node-labels in a sane way, you're going to have issues provisioning cloud elements. The kubelet will interact with the AWS API regularly, so if you try to use a service of type LoadBalancer or any cloud volumes, you'll need to pin the workloads to specific nodes. You'll need to make heavy use of <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">taints and tolerations</a>.</li>
<li>Kubernetes workers aren't designed to connect over a WAN. You're probably going to see issues at some point with network latency, etc.</li>
<li>If you do choose to go down this route, you'll need to ensure you have TLS configured in both directions for the API <-> kubelet communication, or a VPN.</li>
</ul>
| jaxxstorm |
<p>Several weeks ago I asked <a href="https://stackoverflow.com/questions/71224960/commands-to-switch-kubectl-and-gcloud-back-and-forth-between-two-totally-separat">this question</a> and received a very helpful answer. The gist of that question was: "<em>how do I switch back and forth between two different K8s/GCP accounts on the same machine?</em>" I have 2 different K8s projects with 2 different emails (gmails) that live on 2 different GKE clusters in 2 different GCP accounts. And I wanted to know how to switch back and forth between them so that when I run <code>kubectl</code> and <code>gcloud</code> commands, I don't inadvertently apply them to the wrong project/account.</p>
<p>The answer was to basically leverage <code>kubectl config set-context</code> along with a script.</p>
<p>This question (today) is an extenuation of that question, a "Part 2" so to speak.</p>
<hr />
<p>I am confused about the <em>order</em> in which I:</p>
<ul>
<li>Set the K8s context (again via <code>kubectl config set-context ...</code>); and</li>
<li>Run <code>gcloud init</code>; and</li>
<li>Run <code>gcloud auth</code>; and</li>
<li>Can safely run <code>kubectl</code> and <code>gcloud</code> commands and be sure that I am hitting the right GKE cluster</li>
</ul>
<p>My <em>understanding</em> is that <code>gcloud init</code> only has to be run once to initialize the <code>gcloud</code> console on your system, which I have already done.</p>
<p>So my <em>thinking</em> here is that I should be able to do the following:</p>
<pre><code># 1. switch K8s context to Project 1
kubectl config set-context <context for GKE project 1>
# 2. authenticate w/ GCP so that now gcloud commands will only hit the GCP
# resources associated with Project 1 (and GCP Account 1)
gcloud auth
# 3. run a bunch of kubectl and gcloud commands for Project/GCP Account 1
# 4. switch K8s context to Project 2
kubectl config set-context <context for GKE project 2>
# 5. authenticate w/ GCP so that now gcloud commands will only hit the GCP
# resources associated with Project 2 (and GCP Account 2)
gcloud auth
# 6. run a bunch of kubectl and gcloud commands for Project/GCP Account 2
</code></pre>
<p><strong>Is my understanding here correct or is it more involved/complicated than this (and if so, why)?</strong></p>
| simplezarg | <p>I'll assume familiarity with the earlier <a href="https://stackoverflow.com/a/71225494/609290">answer</a></p>
<h3><code>gcloud</code></h3>
<p><a href="https://cloud.google.com/sdk/gcloud/reference/init" rel="nofollow noreferrer"><code>gcloud init</code></a> need only be run once per machine and only again if you really want to re-<code>init</code>'ialize the CLI (<code>gcloud</code>).</p>
<p><a href="https://cloud.google.com/sdk/gcloud/reference/auth/login" rel="nofollow noreferrer"><code>gcloud auth login ${ACCOUNT}</code></a> authenticates a (Google) (user or service) account and persists (on Linux by default in <code>${HOME}/.config/gcloud</code>) and renews the credentials.</p>
<p><a href="https://cloud.google.com/sdk/gcloud/reference/auth/list" rel="nofollow noreferrer"><code>gcloud auth list</code></a> lists the accounts that have been <code>gcloud auth login</code>. The results show which account is being used by default (<code>ACTIVE</code> with <code>*</code>).</p>
<p>Somewhat inconveniently, one way to switch the currently <code>ACTIVE</code> account is to change the <code>gcloud</code> <strong>global</strong> (every instance on the machine) configuration using <a href="https://cloud.google.com/sdk/gcloud/reference/config/set" rel="nofollow noreferrer"><code>gcloud config set account ${ACCOUNT}</code></a>.</p>
<h3><code>kubectl</code></h3>
<p>To facilitate using previously authenticated (i.e. <code>gcloud auth login ${ACCOUNT}</code>) credentials with Kubernetes Engine, Google provides the command <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials" rel="nofollow noreferrer"><code>gcloud container clusters get-credentials</code></a>. This uses the currently <code>ACTIVE</code> <code>gcloud</code> account to create a <code>kubectl</code> context that joins a Kubernetes Cluster with a <em>User</em> and possibly with a Kubernetes Namespace too. <code>gcloud container clusters get-credentials</code> makes changes to <code>kubectl</code> config (on Linux by default in <code>${HOME}/.kube/config</code>).</p>
<p>What is a <em>User</em>? See <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#users-in-kubernetes" rel="nofollow noreferrer">Users in Kubernetes</a>. Kubernetes Engine (via <code>kubectl</code>) wants (OpenID Connect) <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens" rel="nofollow noreferrer">Tokens</a>. And, conveniently, <code>gcloud</code> can provide these tokens for us.</p>
<p>How? Per previous <a href="https://stackoverflow.com/a/71225494/609290">answer</a></p>
<pre><code>user:
auth-provider:
config:
access-token: [[redacted]]
cmd-args: config config-helper --format=json
cmd-path: path/to/google-cloud-sdk/bin/gcloud
expiry: "2022-02-22T22:22:22Z"
expiry-key: '{.credential.token_expiry}'
token-key: '{.credential.access_token}'
name: gcp
</code></pre>
<p><code>kubectl</code> uses the configuration file to invoke <code>gcloud config config-helper --format=json</code> and extracts the <code>access_token</code> and <code>token_expiry</code> from the result. GKE can then use the <code>access_token</code> to authenticate the user. And, if necessary can renew the token using Google's token endpoint after expiry (<code>token_expiry</code>).</p>
<h3>Scenario</h3>
<p>So, how do you combine all of the above.</p>
<ol>
<li>Authenticate <code>gcloud</code> with all your Google accounts</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>ACCOUNT="[email protected]"
gcloud auth login ${ACCOUNT}
ACCOUNT="[email protected]"
gcloud auth login ${ACCOUNT} # Last will be the `ACTIVE` account
</code></pre>
<ol start="2">
<li>Enumerate these</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>gcloud auth list
</code></pre>
<p>Yields:</p>
<pre><code>ACTIVE ACCOUNT
       client1@example.com
*      client2@example.com # This is ACTIVE
To set the active account, run:
$ gcloud config set account `ACCOUNT`
</code></pre>
<ol start="3">
<li>Switch between users for <code>gcloud</code> commands</li>
</ol>
<blockquote>
<p><strong>NOTE</strong> This <strong>doesn't</strong> affect <code>kubectl</code></p>
</blockquote>
<p><strong>Either</strong></p>
<pre class="lang-sh prettyprint-override"><code>gcloud config set account [email protected]
gcloud auth list
</code></pre>
<p>Yields:</p>
<pre><code>ACTIVE ACCOUNT
*      client1@example.com # This is ACTIVE
       client2@example.com
</code></pre>
<p><strong>Or</strong> you can explicitly add <a href="https://cloud.google.com/sdk/gcloud/reference#--account" rel="nofollow noreferrer"><code>--account=${ACCOUNT}</code></a> to <strong>any</strong> <code>gcloud</code> command, e.g.:</p>
<pre class="lang-sh prettyprint-override"><code># Explicitly unset your account
gcloud config unset account
# This will work and show projects accessible to client1
gcloud projects list --account=client1@example.com
# This will work and show projects accessible to client2
gcloud projects list --account=client2@example.com
</code></pre>
<ol start="2">
<li>Create <code>kubectl</code> contexts for any|all your Google accounts (via <code>gcloud</code>)</li>
</ol>
<p><strong>Either</strong></p>
<pre class="lang-sh prettyprint-override"><code>ACCOUNT="[email protected]"
PROJECT="..." # Project accessible to ${ACCOUNT}
gcloud container clusters get-credentials ${CLUSTER} \
--ACCOUNT=${ACCOUNT} \
--PROJECT=${PROJECT} \
...
</code></pre>
<p><strong>Or</strong> equivalently using <code>kubectl config set-context</code> directly:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl config set-context ${CONTEXT} \
--cluster=${CLUSTER} \
--user=${USER}
</code></pre>
<p>But the <code>gcloud</code> command avoids having to look the values up yourself with e.g. <code>kubectl config get-clusters</code> and <code>kubectl config get-users</code>.</p>
<blockquote>
<p><strong>NOTE</strong> <code>gcloud container clusters get-credentials</code> uses derived names for contexts and GKE uses derived names for clusters. <strong>If you're confident</strong> you can edit <code>kubectl</code> config directly (or using <code>kubectl config</code> commands) to rename these cluster, context and user references to suit your needs.</p>
</blockquote>
<ol start="3">
<li>List <code>kubectl</code> contexts</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>kubectl config get-context
</code></pre>
<p>Yields:</p>
<pre><code>CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* client1 a-cluster client1
client2 b-cluster client2
</code></pre>
<ol start="4">
<li>Switch between <code>kubectl</code> contexts (clusters*users)</li>
</ol>
<blockquote>
<p><strong>NOTE</strong> This <strong>doesn't</strong> affect <code>gcloud</code></p>
</blockquote>
<p><strong>Either</strong></p>
<pre class="lang-sh prettyprint-override"><code>kubectl config use-context ${CONTEXT}
</code></pre>
<p><strong>Or</strong> you can explicitly add the <code>--context</code> flag to <strong>any</strong> <code>kubectl</code> command:</p>
<pre class="lang-sh prettyprint-override"><code># Explicitly unset default|current context
kubectl config unset current-context
# This will work and list deployments accessible to ${CONTEXT}
kubectl get deployments --context=${CONTEXT}
</code></pre>
| DazWilkin |
<p>I'm trying to write a k8s controller. Within the controller I want to parse the YAML file from GitHub to <code>unstructured. Unstructured</code>. After parsing, I want to track the status of the applied instance of <code>unstructured. Unstructured</code>. The tracking will try to catch if there's a specific key-value.</p>
<p>I failed to do so, since the <code>unstructured.Unstructured</code> doesn't have a method for getting status. Then I was trying to marshal it to JSON and find the status, but failed.</p>
<p>If you know a way to achieve these, it would be great.</p>
| Ian Zhang | <p>The unstructured package provides "Nested" functions.
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" rel="nofollow noreferrer">https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured</a></p>
<p>For status you would use:</p>
<pre><code>unstructured.NestedStringMap(myunstruct.Object, "status")
</code></pre>
<p>For the status message:</p>
<pre><code>unstructured.NestedString(myunstruct.Object, "status", "message")
</code></pre>
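<p>To catch a specific key-value pair, a minimal sketch could look like the following (the <code>status.phase</code> path and the <code>"Running"</code> value are only illustrative; adjust them to the field you are tracking):</p>
<pre><code>import (
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// hasDesiredStatus reports whether the applied object's status.phase equals "Running".
func hasDesiredStatus(obj *unstructured.Unstructured) bool {
    phase, found, err := unstructured.NestedString(obj.Object, "status", "phase")
    return err == nil && found && phase == "Running"
}
</code></pre>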
<p>See Chapter 4 of <strong>Programming Kubernetes</strong> by Stefan Schimanski and Michael Hausenblas for more discussion of the dynamic client.</p>
| user620143 |
<p>I created a pod with <code>kubectl create -f pod.xml</code> and <code>kubectl apply -f pod.xml</code> using the below yaml and I don't see any difference, a pod gets created with both the commands. The <a href="https://kubernetes.io/docs/concepts/overview/object-management-kubectl/overview/" rel="noreferrer">K8S document</a>, mentions imperative and declarative commands. But, still the create and apply behave the same way.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox
command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
</code></pre>
<p>What's the difference? Also, how is <code>kubectl apply</code> declarative and <code>kubectl create</code> imperative? Both of them take one or multiple yaml files with the
object details in it.</p>
| Praveen Sripati | <p>There is a subtle difference between <code>kubectl create</code> and <code>kubectl apply</code> commands.</p>
<p>1) The <code>kubectl create</code> command creates a new resource. So, if the command is run again it will throw an error, as resource names should be unique in a namespace.</p>
<pre><code>kubectl get pods
No resources found.
kubectl create -f pod.xml
pod/myapp-pod created
kubectl create -f pod.xml
Error from server (AlreadyExists): error when creating "pod.xml": pods "myapp-pod" already exists
</code></pre>
<p>2) The <code>kubectl apply</code> command applies the configuration to a resource. If the resource is not there then it will be created. The <code>kubectl apply</code> command can be run the second time as it simply applies the configuration as shown below. In this case, the configuration hasn't changed. So, the pod hasn't changed.</p>
<pre><code>kubectl delete pod/myapp-pod
pod "myapp-pod" deleted
kubectl apply -f pod.xml
pod/myapp-pod created
kubectl apply -f pod.xml
pod/myapp-pod unchanged
</code></pre>
<p>In <code>kubectl create</code> we specify a certain action, in this case <code>create</code>, and so it is <strong>imperative</strong>. In the <code>kubectl apply</code> command we specify the target state of the system and don't specify a certain action, and so it is <strong>declarative</strong>. We let the system decide what action to take: if the resource is not there it will create it, and if the resource is there it will apply the configuration to the existing resource.</p>
<p>From an execution perspective, there is no difference when a resource is created for the first time between <code>kubectl create</code> and <code>kubectl apply</code> as shown above. But, the second time the <code>kubectl create</code> will throw an error.</p>
<p>It took me some time to get around it, but it makes sense now.</p>
| Praveen Sripati |
<p>I want to start Kubernetes jobs on a GKE cluster from a Google Cloud Function (Firebase)</p>
<p>I'm using the Kubernetes node client <a href="https://github.com/kubernetes-client/javascript" rel="nofollow noreferrer">https://github.com/kubernetes-client/javascript</a></p>
<p>I've created a Kubernetes config file using `kubectl config view --flatten -o json'</p>
<p>and loaded it</p>
<pre><code>const k8s = require('@kubernetes/client-node');
const kc = new k8s.KubeConfig();
kc.loadFromString(config)
</code></pre>
<p>This works perfectly locally, but the problem is that when running on Cloud Functions the token can't be refreshed, so calls fail after a while.</p>
<p>My config k8s config files contains</p>
<pre><code> "user": {
"auth-provider": {
"name": "gcp",
"config": {
"access-token": "redacted-secret-token",
"cmd-args": "config config-helper --format=json",
"cmd-path": "/usr/lib/google-cloud-sdk/bin/gcloud",
"expiry": "2022-10-20T16:25:25Z",
"expiry-key": "{.credential.token_expiry}",
"token-key": "{.credential.access_token}"
}
}
</code></pre>
<p>I'm guessing the command path points to the gcloud SDK, which is used to get a new token when the current one expires. This works locally, but on Cloud Functions it doesn't, as there is no <code>/usr/lib/google-cloud-sdk/bin/gcloud</code>.</p>
<p>Is there a better way to authenticate or a way to access the gcloud binary from cloud functions?</p>
| patrick_corrigan | <p>I have a similar mechanism (using Cloud Functions to authenticate to Kubernetes Engine) albeit written in Go.</p>
<p>This approach uses Google's <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest" rel="nofollow noreferrer">Kubernetes Engine API</a> to get the cluster's credentials and construct the <code>KUBECONFIG</code> using the values returned. This is equivalent to:</p>
<pre class="lang-bash prettyprint-override"><code>gcloud container clusters get-credentials ...
</code></pre>
<p><a href="https://developers.google.com/apis-explorer" rel="nofollow noreferrer">APIs Explorer</a> has a Node.js <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters/get#examples" rel="nofollow noreferrer">example</a> for the above method. The example uses <a href="https://github.com/googleapis/google-api-nodejs-client" rel="nofollow noreferrer">Google's API Client Library for Node.JS for Kubernetes Engine</a> also see <a href="https://googleapis.dev/nodejs/googleapis/latest/container/index.html" rel="nofollow noreferrer">here</a>.</p>
<p>There's also a <a href="https://cloud.google.com/nodejs/docs/reference/container/latest" rel="nofollow noreferrer">Google Cloud Client Library for Node.js for Kubernetes Engine</a> and this includes <a href="https://cloud.google.com/nodejs/docs/reference/container/latest/container/v1.clustermanagerclient#_google_cloud_container_v1_ClusterManagerClient_getCluster_member_1_" rel="nofollow noreferrer"><code>getCluster</code></a> which (I assume) is equivalent. Confusingly there's <a href="https://cloud.google.com/nodejs/docs/reference/container/latest/container/v1.clustermanagerclient#_google_cloud_container_v1_ClusterManagerClient_getServerConfig_member_1_" rel="nofollow noreferrer"><code>getServerConfig</code></a> too and it's unclear from reading the API docs as to the difference between these methods.</p>
<p>Here's a link to the <a href="https://gist.github.com/DazWilkin/9506c0b9677d53a3e11b7457ed21cbe7" rel="nofollow noreferrer">gist</a> containing my Go code. It constructs a Kubernetes <code>Config</code> object that can then be used by the Kubernetes API to authenticate you to a cluster.</p>
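<p>A rough JavaScript sketch of the same idea, assuming the Cloud Function's service account has access to the cluster (the package names are real, but the wiring is illustrative rather than a definitive implementation):</p>
<pre><code>const container = require('@google-cloud/container');
const {GoogleAuth} = require('google-auth-library');
const k8s = require('@kubernetes/client-node');

async function getKubeConfig(project, location, cluster) {
  // Look up the cluster's endpoint and CA via the Kubernetes Engine API
  const gke = new container.v1.ClusterManagerClient();
  const name = `projects/${project}/locations/${location}/clusters/${cluster}`;
  const [info] = await gke.getCluster({name});

  // Get an access token for the function's runtime (service) account
  const auth = new GoogleAuth({scopes: ['https://www.googleapis.com/auth/cloud-platform']});
  const client = await auth.getClient();
  const {token} = await client.getAccessToken();

  // Build an in-memory KUBECONFIG that doesn't depend on the gcloud binary
  const kc = new k8s.KubeConfig();
  kc.loadFromOptions({
    clusters: [{name: cluster, server: `https://${info.endpoint}`, caData: info.masterAuth.clusterCaCertificate}],
    users: [{name: 'gcp', token: token}],
    contexts: [{name: 'gke', cluster: cluster, user: 'gcp'}],
    currentContext: 'gke',
  });
  return kc;
}
</code></pre>
<p>Because the token comes from the function's own credentials (and can be re-fetched on each invocation), there is no dependency on <code>/usr/lib/google-cloud-sdk/bin/gcloud</code>.</p>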
| DazWilkin |
<p>I want to deploy a few <code>Spring Boot</code> microservices on <code>Kubernetes</code> cluster. One of them is <em>authorization server</em> serving <code>OAuth 2.0</code> tokens. With current deployment (no k8s) only two services are visible to the outer world: <code>api-gateway</code> (Zuul) and <code>authorization-server</code> (Spring OAuth). The rest is hidden behind the <code>api-gateway</code>. During <code>k8s</code> deployment Zuul proxy probably will be substituted by Kubernetes Ingress. </p>
<p>Now the questions: </p>
<ul>
<li>Should I put <code>authorization-server</code> behind the Ingress or not?</li>
<li>What are pros and cons concerning these two solutions?</li>
<li>What are <em>best practices</em>?</li>
<li>Maybe I shouldn't get rid of Zuul at all?</li>
</ul>
| k13i | <p>Getting rid of Zuul is perfectly reasonable. The Ingress should be the only externally accessible component, providing access to the cluster through ingress rules.
So yes, authorization-server and microservices should be accessible through ingress.</p>
| Jeff |
<p>I am new to Kubernetes and looking for a better understanding of the difference between Kube-DNS and CoreDNS.</p>
<p>As I understand it the recommendation is to use the newer CoreDNS rather than the older Kube-DNS. </p>
<p>I have setup a small cluster using <code>kubeadm</code> and now I am a little confused about the difference between CoreDNS and Kube-DNS.</p>
<p>Using <code>kubectl get pods --all-namespaces</code> I can see that I have two CoreDNS pods running.</p>
<p>However using <code>kubectl get svc --all-namespaces</code> I also see that I have a service named <code>kube-dns</code> running in the <code>kube-system</code> namespace. When I inspect that with <code>kubectl describe svc/kube-dns -n kube-system</code> I can see that the <code>kube-dns</code> service links to coredns.</p>
<p>I am now wondering if I am actually running both kube-dns and coredns. Or else, why is that service called <code>kube-dns</code> and not <code>core-dns</code>?</p>
| lanoxx | <p>I have K8S 1.12. Do a describe of the dns pod.</p>
<blockquote>
<p>kubectl describe pod coredns-576cbf47c7-hhjrs --namespace=kube-system | grep -i "image:"</p>
<p>Image: k8s.gcr.io/coredns:1.2.2</p>
</blockquote>
<p>Looks like coredns is running. According to the documentation CoreDNS is default from K8S 1.11. For previous installations it's kube-dns.</p>
<p>The image is what's important; the rest is metadata (names, labels, etc.).</p>
<p>According to the K8S blog <a href="https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/" rel="noreferrer">here</a>.</p>
<blockquote>
<p>In Kubernetes 1.11, CoreDNS has reached General Availability (GA) for DNS-based service discovery, as an alternative to the kube-dns addon. This means that CoreDNS will be offered as an option in upcoming versions of the various installation tools. In fact, the kubeadm team chose to make it the default option starting with Kubernetes 1.11.</p>
</blockquote>
<p>Also, see this link for <a href="https://kubernetes.io/docs/tasks/administer-cluster/coredns/#installing-kube-dns-instead-of-coredns-with-kubeadm" rel="noreferrer">more</a> info.</p>
| Praveen Sripati |
<p>Currently I am working on a project where we have a single trusted master server, and multiple untrusted (physically in an unsecured location) hosts (which are all replicas of each other in different physical locations).</p>
<p>We are using Ansible to automate the setup and configuration management, however I am very unimpressed by how big a gap we have between our development and testing environments and the production environment, as well as the general complexity we have in the configuration of the network and of the containers themselves.</p>
<p>I'm curious if Kubernetes would be a good option for orchestrating this? Basically, multiple unique copies of the same pod(s) on all untrusted hosts must be kept running, and communication should be restricted between the hosts, and only allowed between specific containers in the same host and specific containers between the hosts and the main server. </p>
| kittydoor | <p>There's a little bit of a lack of info here. I'm going to make the following assumptions:</p>
<ul>
<li>K8s nodes are untrusted</li>
<li>K8s masters are trusted</li>
<li>K8s nodes <em>cannot</em> communicate with each other</li>
<li>Containers on the same host <em>can</em> communicate with each other</li>
</ul>
<p>Kubernetes <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">operates on the model that</a>:</p>
<blockquote>
<ul>
<li>all containers can communicate with all other containers without NAT</li>
<li>all nodes can communicate with all containers (and vice-versa) without NAT</li>
<li>the IP that a container sees itself as is the same IP that others see it as</li>
</ul>
</blockquote>
<p>Bearing this in mind, you're going to have some difficulty here doing what you want.</p>
<p>If you can change your <em>physical</em> network requirements, and ensure that all nodes can communicate with each other, you might be able to use <a href="https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/" rel="nofollow noreferrer">Calico's Network Policy</a> to segregate access at the pod level, but that depends entirely on your flexibility.</p>
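<p>As a rough sketch of what pod-level segregation looks like (the name and namespace are illustrative), you would typically start from a default-deny policy and then add narrow allow rules for the specific container-to-container flows you need:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: workloads
spec:
  podSelector: {}
  policyTypes:
  - Ingress
</code></pre>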
| jaxxstorm |
<p>I am currently getting my kubeconfig files for my GKE clusters via</p>
<pre><code>export KUBECONFIG=<config-path>
gcloud container clusters get-credentials cluster-name --region=region-name
</code></pre>
<p>Now I get the config files and I can use them.</p>
<p>However, for some applications it would be helpful to have hardcoded credentials and not those appearing here</p>
<pre><code>...
users:
- name: user-name
user:
auth-provider:
config:
access-token: <access-token>
cmd-args: config config-helper --format=json
cmd-path: /Users/user-name/google-cloud-sdk/bin/gcloud
expiry: "2022-08-13T18:27:44Z"
expiry-key: '{.credential.token_expiry}'
token-key: '{.credential.access_token}'
name: gcp
</code></pre>
<p>Is there an elegant way to do it? Could also be via a service account or whatever, I am open to any thoughts. The only thing that matters to me is to have a kubeconfig file that I can share and everyone can make use of it, once the user has it in his hands.</p>
| tobias | <p>See Google's post <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke" rel="nofollow noreferrer"><code>kubectl</code> auth changes in GKE v1.25</a> for changes to the way that <code>KUBECONFIG</code> files will authenticate to GKE clusters. Your <code>KUBECONFIG</code> uses the existing mechanism and you may want to consider migrating.</p>
<p>Google uses OAuth to <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens" rel="nofollow noreferrer">authenticate</a> to GKE Kubernetes clusters. By default, the config uses <code>gcloud</code> to obtain the currently auth'd user's access token (you can use a Service Account as well, see below).</p>
<p>The <code>KUBECONFIG</code> that you included in your question is how <code>kubectl</code> acquires <code>gcloud</code>'s (currently auth'd) user's access token using the <code>config-helper</code>. There's no better way to authenticate as a user if you want the benefits of using <code>gcloud</code>, but you could duplicate this functionality outside of <code>KUBECONFIG</code>.</p>
<p>See <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication#overview" rel="nofollow noreferrer">Authenticating to Kubernetes</a> for a well-documented set of alternatives approaches. These include environments with|without <code>gcloud</code>, using Service Accounts and running on Google Cloud (where you can obtain Service Account access tokens easily using Google's Metadata service) and running off Google Cloud. Any of these alternatives may address your need.</p>
| DazWilkin |
<p>The failing code runs inside a Docker container based on <code>python:3.6-stretch</code> debian.
It happens while Django moves a file from one Docker volume to another.</p>
<p>When I test on MacOS 10, it works without error. Here, the Docker containers are started with docker-compose and use regular Docker volumes on the local machine.</p>
<p>Deployed into Azure (AKS - Kubernetes on Azure), moving the file succeeds but copying the stats fails with the following error:</p>
<pre><code> File "/usr/local/lib/python3.6/site-packages/django/core/files/move.py", line 70, in file_move_safe
copystat(old_file_name, new_file_name)
File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: '/some/path/file.pdf'
</code></pre>
<p>The volumes on Azure are persistent volume claims with <code>ReadWriteMany</code> access mode.</p>
<p>Now, <code>copystat</code> is documented as:</p>
<blockquote>
<p>copystat() never returns failure.</p>
</blockquote>
<p><a href="https://docs.python.org/3/library/shutil.html" rel="nofollow noreferrer">https://docs.python.org/3/library/shutil.html</a></p>
<p>My questions are:</p>
<ul>
<li>Is this a "bug" because the documentation says that it should "never return failure"?</li>
<li>Can I savely try/except this error because the file in question is moved (it only fails later on, while trying to copy the stats)</li>
<li>Can I change something about the Azure settings that fix this? (probably not)</li>
</ul>
<p>Here some small test on the machine in Azure itself:</p>
<pre><code>root:/media/documents# ls -al
total 267
drwxrwxrwx 2 1000 1000 0 Jul 31 15:29 .
drwxrwxrwx 2 1000 1000 0 Jul 31 15:29 ..
-rwxrwxrwx 1 1000 1000 136479 Jul 31 16:48 orig.pdf
-rwxrwxrwx 1 1000 1000 136479 Jul 31 15:29 testfile
root:/media/documents# lsattr
--S-----c-jI------- ./orig.pdf
--S-----c-jI------- ./testfile
root:/media/documents# python
Python 3.6.6 (default, Jul 17 2018, 11:12:33)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shutil
>>> shutil.copystat('orig.pdf', 'testfile')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: 'orig.pdf'
>>> shutil.copystat('orig.pdf', 'testfile', follow_symlinks=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: 'orig.pdf'
>>>
</code></pre>
| Risadinha | <p>The following solution is a hotfix. It would have to be applied to <em>any</em> method that calls <code>copystat</code> directly or indirectly (or any shutil method that produces an ignorable <code>errno.ENOSYS</code>).</p>
<pre><code>import os
from unittest import mock

# LOGGER, file_field, GeneratedFile, fresh, content_type and name come from the
# surrounding project code.
if hasattr(os, 'listxattr'):
LOGGER.warning('patching listxattr to avoid ERROR 38 (errno.ENOSYS)')
# avoid "ERROR 38 function not implemented on Azure"
with mock.patch('os.listxattr', return_value=[]):
file_field.save(name=name, content=GeneratedFile(fresh, content_type=content_type), save=True)
else:
file_field.save(name=name, content=GeneratedFile(fresh, content_type=content_type), save=True)
</code></pre>
<p><code>file_field.save</code> is the Django method that calls the <code>shutil</code> code in question. It's the last location in my code before the error.</p>
| Risadinha |
<p>I'm trying to understand the relationship among Kubernetes pods and the cores and memory of my cluster nodes when using Dask.</p>
<p>My current setup is as follows:</p>
<ul>
<li>Kubernetes cluster using GCP's Kubernetes Engine</li>
<li>Helm package manager to install Dask on the cluster</li>
</ul>
<p>Each node has 8 cores and 30 gb of ram. I have 5 nodes in my cluster:</p>
<p><a href="https://i.stack.imgur.com/JnEFa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JnEFa.png" alt="cluster info"></a></p>
<p>I then scaled the number of pods to 50 by executing</p>
<pre><code>kubectl scale --replicas 50 deployment/nuanced-armadillo-dask-worker
</code></pre>
<p>When I initialize the client in Dask using <code>dask.distributed</code> I see the following</p>
<p><a href="https://i.stack.imgur.com/ZKoex.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZKoex.png" alt="dask distributed client info"></a></p>
<p>What puzzles me is that the client says that there are 400 cores and 1.58 tb of memory in my cluster (see screenshot). I suspect that by default each pod is being allocated 8 cores and 30 gb of memory, but how is this possible given the constraints on the actual number of cores and memory in each node?</p>
| PollPenn | <p>If you don't specify a number of cores or memory then every Dask worker tries to take up the entire machine on which it is running.</p>
<p>For the helm package you can specify the number of cores and amount of memory per worker by adding resource limits to your worker pod specification. These are listed in the configuration options of the chart.</p>
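<p>For example, a rough <code>values.yaml</code> override for the Dask Helm chart (the key names may differ slightly between chart versions) that pins each worker to 1 core and a few GB of memory, so 50 replicas can no longer claim more than the nodes actually have:</p>
<pre><code>worker:
  replicas: 50
  resources:
    limits:
      cpu: 1
      memory: 3G
    requests:
      cpu: 1
      memory: 3G
</code></pre>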
| MRocklin |
<p>I am trying to use <code>if/else-if/else</code> loop in helm chart. Basically, I want to add ENV configs in configfile based on the if/else condition. Below is the logic:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.applicationName }}-configmap
labels:
projectName: {{ .Values.applicationName }}
environment: {{ .Values.environment }}
type: configmap
data:
{{- if eq .Values.environment "production" }}
{{ .Files.Get "config-prod.yaml" | nindent 2}}
{{- else if eq .Values.environment "development" }}
{{ .Files.Get "config-dev.yaml" | nindent 2}}
{{- else }}
{{ .Files.Get "config-stage.yaml" | nindent 2}}
{{- end }}
</code></pre>
<p>But I am not getting the desired output and am facing an issue. Can anybody help me out with this?</p>
<p>Edit 1: I have added my modified configmap.yaml as per the suggestions; the helm install/template command gives the error <code>Error: YAML parse error on demo2/templates/configmap.yaml: error converting YAML to JSON: yaml: line 14: did not find expected key</code>.</p>
<p>Also, my config-prod and config-stage are being rendered as per the condition (if I give <code>environment: production</code> then config-prod.yaml is added, and if I give <code>environment: stage/null</code> then config-stage.yaml is added).</p>
| dharmendra kariya | <p>Your question would benefit from more specifics.</p>
<p>Please consider adding the following to your question:</p>
<ul>
<li>How are you trying this? What commands exactly did you run?</li>
<li>How are you "not getting the desired output"? What output did you get?</li>
</ul>
<p>Please also include:</p>
<ul>
<li>the relevant entries from your <code>values.yaml</code></li>
<li>the <code>config-dev.yaml</code> and <code>config-stage.yaml</code> files</li>
</ul>
<p>Have you run <a href="https://helm.sh/docs/helm/helm_template/" rel="nofollow noreferrer">helm template</a> to generate the templates that Helm would apply to your cluster? This would be a good way to diagnose the issue.</p>
<p>I wonder whether you're chomping too much whitespace.</p>
<p>And you should just chomp left, i.e. <code>{{- .... }}</code> rather than left+right <code>{{- ... -}}</code>.</p>
| DazWilkin |
<p>I have a scenario and was wondering the best way to structure it with Kustomize.</p>
<p>Say I have multiple environments: <code>dev</code>, <code>qa</code>, <code>prod</code></p>
<p>and say I have multiple DCs: <code>OnPrem</code>, <code>AWS</code>, <code>GCP</code></p>
<p>Let's say each DC above has a <code>dev</code>, <code>qa</code>, <code>prod</code> environment.</p>
<p>I have data that is per environment but also per DC. For example, apply this string to dev overlays but apply these, if AWS.</p>
<p>Is there a way to easily doing this without duplication. An example may be, say if it's AWS, I want to run an additional container in my pod, and if it's prod I want extra replicas. If it's GCP, I want a different image but if it's prod, I still want extra replicas.</p>
<p>The below example will have a lot of duplication. I've read you can do multiple bases. Maybe it makes sense to have an <code>AWS</code>, <code>GCP</code>, <code>OnPrem</code> base and then have a <code>dev</code>, <code>qa</code>, <code>prod</code> overlay and have multiple Kustomize files for each?</p>
<p>ie</p>
<pre><code>βββ base
βΒ Β βββ guestbook-ui-deployment.yaml
βΒ Β βββ guestbook-ui-svc.yaml
βΒ Β βββ kustomization.yaml
βββ overlay
βββ dev
βΒ Β βββ aws
βΒ Β βΒ Β βββ guestbook-ui-deployment.yaml
βΒ Β βΒ Β βββ kustomization.yaml
βΒ Β βββ gcp
βΒ Β βββ guestbook-ui-deployment.yaml
βΒ Β βββ kustomization.yaml
βββ qa
βββ aws
βΒ Β βββ guestbook-ui-deployment.yaml
βΒ Β βββ kustomization.yaml
βββ gcp
βββ guestbook-ui-deployment.yaml
βββ kustomization.yaml
</code></pre>
| CodyK | <p>I recommend having an overlay for each combination you want to build. e.g:</p>
<pre><code>βββ overlays
βββ aws-dev
βββ aws-qa
βββ gcp-dev
</code></pre>
<p>Then you can structure in different ways, such as using components:</p>
<pre><code>βββ components
βββ environments
β βββ dev
β βββ qa
βββ providers
βββ aws
βββ gcp
</code></pre>
<p>This makes sense because you usually don't create all combinations of possible environments, but only some that make sense to you.</p>
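<p>As a rough sketch (the file contents are illustrative), an overlay then just composes the base with the relevant components:</p>
<pre><code># components/providers/aws/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- path: guestbook-ui-deployment-patch.yaml

# overlays/aws-dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
components:
- ../../components/providers/aws
- ../../components/environments/dev
</code></pre>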
<p>More documentation: <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md</a></p>
| MartΓn Coll |
<p>We have a Rancher Kubernetes cluster where sometimes the pods get stuck in <code>terminating</code> status when we try to delete the corresponding deployment, as shown below.</p>
<pre><code>$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
...
storage-manager-deployment 1 0 0 0 1d
...
$ kubectl delete deployments storage-manager-deployment
kubectl delete deployments storage-manager-deployment
deployment.extensions "storage-manager-deployment" deleted
C-c C-c^C
$ kubectl get po
NAME READY STATUS RESTARTS AGE
...
storage-manager-deployment-6d56967cdd-7bgv5 0/1 Terminating 0 23h
...
$ kubectl delete pods storage-manager-deployment-6d56967cdd-7bgv5 --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "storage-manager-deployment-6d56967cdd-7bgv5" force deleted
C-c C-c^C
</code></pre>
<p>Both the delete commands (for the <code>deployment</code> and the <code>pod</code>) get stuck and need to be stopped manually.</p>
<p>We have tried both</p>
<p><code>kubectl delete pod NAME --grace-period=0 --force</code></p>
<p>and</p>
<p><code>kubectl delete pod NAME --now</code></p>
<p>without any luck.</p>
<p>We have also set <code>fs.may_detach_mounts=1</code>, so it seems that all the similar questions already on StackOverflow don't apply to our problem.</p>
<p>If we check the node on which the offending pod runs, it does not appear in the <code>docker ps</code> list.</p>
<p>Any suggestion?</p>
<p>Thanks</p>
| gruggie | <p>Check the pod's metadata for an array: 'finalizers'</p>
<pre><code>finalizers:
- cattle-system
</code></pre>
<p>If this exists, remove it, and the pod will terminate.</p>
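<p>For example, using the pod name from the question, the finalizers can be cleared with a patch:</p>
<pre><code>kubectl patch pod storage-manager-deployment-6d56967cdd-7bgv5 \
  --type=merge -p '{"metadata":{"finalizers":null}}'
</code></pre>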
| jaxxstorm |
<p>I am starting the implementation in the project where I work, but I had some doubts.</p>
<ol>
<li><p>I have a project with several profiles spring, and for each I may want to have a replicated amount.</p>
<p>Example:</p>
<ul>
<li>Dev and staging (1 replica)</li>
<li>Production (3 replicas)</li>
</ul>
<p>How should I handle this scenario, creating a deployment file for each profile?</p></li>
<li><p>Where do you usually keep Kubernetes .yml? In a "kubenetes" folder within the project, or a repository just to store these files?</p></li>
</ol>
| Rafael Dani da Cunha | <p>You should store them with your code in a build folder. If you are deploying on multiple platforms (AKS, EKS, GKE, OpenShift..) you could create a subfolder for each. </p>
<p>The amount of environment specific configuration should be kept to a bare minimum.
So I would recommend using some templating for the files in your CI/CD pipeline. (Helm for example)</p>
<p>If you don't want to worry about these files you could look into Spinnaker.</p>
| Jeff |
<p>Has anyone tried to add or update the Clusters from Google Kubernetes Engine through Python API?</p>
<p>I managed to do this for Compute instances, but the guide for Kubernetes Engine says its deprecated:</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.zones.clusters.nodePools/update" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.zones.clusters.nodePools/update</a></p>
<p>Tried it and it fails saying it does not find "labels":</p>
<blockquote>
<p>googleapiclient.errors.HttpError: <HttpError 400 when requesting
<a href="https://container.googleapis.com/v1/projects/testingproject/zones/us-east1/clusters/testing-cluster/resourceLabels?alt=json" rel="nofollow noreferrer">https://container.googleapis.com/v1/projects/testingproject/zones/us-east1/clusters/testing-cluster/resourceLabels?alt=json</a>
returned "Invalid JSON payload received. Unknown name "labels": Cannot
find field.". Details: "[{'@type':
'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations':
[{'description': 'Invalid JSON payload received. Unknown name
"labels": Cannot find field.'}]}]"></p>
</blockquote>
<p>My code is this:</p>
<pre><code>credentials = GoogleCredentials.get_application_default()
service = discovery.build('container', 'v1', credentials=credentials)
# Deprecated. The Google Developers Console [project ID or project
# number](https://developers.google.com/console/help/new/#projectnumber).
# This field has been deprecated and replaced by the name field.
project_id = 'testingproject' # TODO: Update placeholder value.
# Deprecated. The name of the Google Compute Engine
# [zone](/compute/docs/zones#available) in which the cluster
# resides.
# This field has been deprecated and replaced by the name field.
zone = 'us-east1' # TODO: Update placeholder value.
# Deprecated. The name of the cluster.
# This field has been deprecated and replaced by the name field.
cluster_id = 'testing-cluster' # TODO: Update placeholder value.
set_labels_request_body = {
'labels': 'value'
}
request = service.projects().zones().clusters().resourceLabels(projectId=project_id, zone=zone, clusterId=cluster_id, body=set_labels_request_body)
response = request.execute()
# TODO: Change code below to process the `response` dict:
pprint(response)
</code></pre>
<p>I want to update the Workload named 'matei-testing-2000-gke-ops' inside the cluster 'testing-cluster'.</p>
<p>Any ideas?
Thank you</p>
<p>Update: It does not find the labels because the name is resourceLabels. But I get the following error after:</p>
<blockquote>
<p>googleapiclient.errors.HttpError: <HttpError 400 when requesting
<a href="https://container.googleapis.com/v1/projects//zones//clusters//resourceLabels?alt=json" rel="nofollow noreferrer">https://container.googleapis.com/v1/projects//zones//clusters//resourceLabels?alt=json</a>
returned "Invalid value at 'resource_labels'
(type.googleapis.com/google.container.v1.SetLabelsRequest.ResourceLabelsEntry),
"value"". Details: "[{'@type':
'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations':
[{'field': 'resource_labels', 'description': 'Invalid value at
'resource_labels'
(type.googleapis.com/google.container.v1.SetLabelsRequest.ResourceLabelsEntry),
"value"'}]}]"></p>
</blockquote>
| happymatei | <p>I've <strike>not</strike> now tried this.</p>
<p>But IIUC, you'll need to:</p>
<ul>
<li><strike>ditch (or use defaults) for e.g. <code>project_id</code>, <code>zone</code> and <code>cluster_id</code> parameters of <code>resourceLabels</code></strike></li>
<li>add <code>name</code> to your body and it should be of the form: <code>projects/*/locations/*/clusters/*</code></li>
</ul>
<p>i.e.</p>
<pre><code>import os
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('container', 'v1', credentials=credentials)
PROJECT = os.getenv("PROJECT")
LOCATION = os.getenv("ZONE")
CLUSTER = os.getenv("CLUSTER")
NAME = "projects/{project}/locations/{location}/clusters/{cluster}".format(
project=project_id,
location=zone,
cluster=cluster_id)
# To update `resourceLabels` you must first fingerprint them
# To get the current `labelFingerprint`, you must `get` the cluster
request = service.projects().zones().clusters().get(
projectId=project_id,
zone=zone,
clusterId=cluster_id)
response = request.execute()
labelFingerprint = response["labelFingerprint"]
if "resourceLabels" in response:
print("Existing labels")
resourceLabels = response["resourceLabels"]
else:
print("No labels")
resourceLabels = {}
# Add|update a label
resourceLabels["dog"] = "freddie"
# Construct `resourceLabels` request
body = {
'name': NAME,
'resourceLabels': resourceLabels,
'labelFingerprint': labelFingerprint,
}
request = service.projects().zones().clusters().resourceLabels(
projectId=project_id,
zone=zone,
clusterId=cluster_id,
body=body)
# Do something with the `response`
response = request.execute()
</code></pre>
<ul>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.zones.clusters/get" rel="nofollow noreferrer"><code>clusters.get</code></a></li>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters#Cluster" rel="nofollow noreferrer">`clusters#CLuster</a></li>
</ul>
<p>And</p>
<pre class="lang-sh prettyprint-override"><code>gcloud container clusters describe ${CLUSTER} \
--zone=${ZONE} \
--project=${PROJECT} \
--format="value(labelFingerprint,resourceLabels)"
</code></pre>
<p>Before:</p>
<pre class="lang-sh prettyprint-override"><code>a9dc16a7
</code></pre>
<p>After:</p>
<pre class="lang-sh prettyprint-override"><code>b2c32ec0 dog=freddie
</code></pre>
| DazWilkin |
<p>Unable to configure Locality-prioritized load balancing.
There are two nodes with the labels:</p>
<pre><code> labels:
kubernetes.io/hostname: test-hw1
topology.kubernetes.io/region: us
topology.kubernetes.io/zone: wdc04
</code></pre>
<pre><code> labels:
kubernetes.io/hostname: test-hw2
topology.kubernetes.io/region: eu
topology.kubernetes.io/zone: fra02
</code></pre>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: cara
namespace: default
labels:
app: cara
spec:
selector:
app: cara
ports:
- name: http
appProtocol: http
targetPort: http
port: 8000
topologyKeys:
- "kubernetes.io/hostname"
- "topology.kubernetes.io/zone"
- "topology.kubernetes.io/region"
- "*"
</code></pre>
<p>DestinationRule:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: cara
spec:
host: cara.default.svc.cluster.local
trafficPolicy:
connectionPool:
http:
http1MaxPendingRequests: 1
outlierDetection:
consecutive5xxErrors: 2
baseEjectionTime: 5s
</code></pre>
<p>If a request arrives at the test-hw1 node, it still sometimes gets routed to the test-hw2 node for no apparent reason.</p>
| Jonas | <p>It turns out that the issue was in the NodePort service that accepts traffic from outside. Services load-balance traffic across the pods by default, so connections were sometimes routed to the <code>istio-ingressgateway</code> pod in the other region.
Simply adding <code>externalTrafficPolicy: Local</code> to the NodePort service that accepts the outside traffic solved the issue.</p>
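<p>For reference, a minimal sketch of the change (the service name, namespace, selector and ports here are assumptions based on a default Istio install, not taken from the question):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway     # assumed name of the NodePort service accepting outside traffic
  namespace: istio-system
spec:
  type: NodePort
  externalTrafficPolicy: Local   # keep traffic on the node that received it
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 8080
</code></pre>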
| Jonas |
<p>I recently learned about <code>helm</code> and how easy it is to deploy the whole <code>prometheus</code> stack for monitoring a Kubernetes cluster, so I decided to try it out on a staging cluster at my work.</p>
<p>I started by creating a dedicates namespace on the cluster for monitoring with:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create namespace monitoring
</code></pre>
<p>Then, with <code>helm</code>, I added the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">prometheus-community repo</a> with:</p>
<pre class="lang-sh prettyprint-override"><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
</code></pre>
<p>Next, I installed the chart with a <code>prometheus</code> release name:</p>
<pre class="lang-sh prettyprint-override"><code>helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
</code></pre>
<p>At this time I didn't pass any custom configuration because I'm still trying it out.</p>
<p>After the install is finished, it all looks good. I can access the prometheus dashboard with:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl port-forward prometheus-prometheus-kube-prometheus-prometheus-0 9090 -n monitoring
</code></pre>
<p>There, I see a bunch of pre-defined alerts and rules that are already monitoring the cluster, but the problem is that I don't quite understand how to create new rules to check the pods in the <code>default</code> namespace, where I actually have my services deployed.</p>
<p>I am looking at <code>http://localhost:9090/graph</code> to play around with the queries and I can't seem to use any that will give me metrics on my pods in the <code>default</code> namespace.</p>
<p>I am a bit overwhelmed by the amount of information, so I would like to know: what did I miss, or what am I doing wrong here?</p>
| everspader | <p>The Prometheus Operator includes several Custom Resource Definitions (CRDs), including <code>ServiceMonitor</code> (and <code>PodMonitor</code>). <code>ServiceMonitor</code>s are used to tell the Operator which Services to monitor.</p>
<p>I'm familiar with the Operator, although not the Helm deployment, but I suspect you'll want to create <code>ServiceMonitor</code>s to generate metrics for your apps in any namespace (including <code>default</code>).</p>
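<p>As a rough sketch (the app name, port name and <code>release</code> label are assumptions; with the kube-prometheus-stack chart the Operator usually only picks up ServiceMonitors whose labels match its selector, e.g. the Helm release name):</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                   # hypothetical name
  namespace: monitoring
  labels:
    release: prometheus          # must match the Operator's serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
    - default                    # scrape Services in the default namespace
  selector:
    matchLabels:
      app: my-app                # label on the Service exposing metrics
  endpoints:
  - port: metrics                # named port on the Service serving /metrics
    interval: 30s
</code></pre>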
<p>See: <a href="https://github.com/prometheus-operator/prometheus-operator#customresourcedefinitions" rel="nofollow noreferrer">https://github.com/prometheus-operator/prometheus-operator#customresourcedefinitions</a></p>
| DazWilkin |
<p>This EKS cluster has a private endpoint only. My end goal is to deploy Helm charts on EKS. I connect to an EC2 machine via SSM and I have already installed Helm and kubectl on that machine. The trouble is that in a private network, the AWS APIs can't be called. So, instead of calling <strong>aws eks update-kubeconfig --region region-code --name cluster-name</strong>, I have created the kubeconfig as shown below.</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
server: 1111111111111111.gr7.eu-west-1.eks.amazonaws.com
certificate-authority-data: JTiBDRVJU111111111
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: aws
name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
command: aws
args:
- "eks"
- "get-token"
- "--cluster-name"
- "this-is-my-cluster"
# - "--role-arn"
# - "role-arn"
# env:
# - name: AWS_PROFILE
# value: "aws-profile"
</code></pre>
<p>Getting the following error:</p>
<pre><code>I0127 21:24:26.336266 3849 loader.go:372] Config loaded from file: /tmp/.kube/config-eks-demo
I0127 21:24:26.337081 3849 round_trippers.go:435] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.21.2 (linux/amd64) kubernetes/d2965f0" 'http://1111111111111111.gr7.eu-west-1.eks.amazonaws.com/api?timeout=32s'
I0127 21:24:56.338147 3849 round_trippers.go:454] GET http://1111111111111111.gr7.eu-west-1.eks.amazonaws.com/api?timeout=32s in 30001 milliseconds
I0127 21:24:56.338171 3849 round_trippers.go:460] Response Headers:
I0127 21:24:56.338238 3849 cached_discovery.go:121] skipped caching discovery info due to Get "http://1111111111111111.gr7.eu-west-1.eks.amazonaws.com/api?timeout=32s": dial tcp 10.1.1.193:80: i/o timeout
</code></pre>
<p>There is connectivity in the VPC, there are no issues with NACLs, security groups, port 80.</p>
| Morariu | <p>That looks like this open EKS issue: <a href="https://github.com/aws/containers-roadmap/issues/298" rel="nofollow noreferrer">https://github.com/aws/containers-roadmap/issues/298</a></p>
<p>If that's the case, upvote it so that the product team can prioritize it. If you have Enterprise support, your TAM can help there as well.</p>
| Corey Cole |
<p>I have a k8s service of type ClusterIP. I need to change the following configuration via the CLI:</p>
<ol>
<li>the HTTP port to an HTTPS port</li>
<li>the port number</li>
<li>the type to LoadBalancer</li>
</ol>
<p>Is there a way to do this?</p>
| Ram | <p>You can't remove the existing port, but you <em>can</em> add the HTTPS port and also change the type using <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="noreferrer">kubectl patch</a>.</p>
<p>Example:</p>
<pre><code>kubectl patch svc <my_service> -p '{"spec": {"ports": [{"port": 443,"targetPort": 443,"name": "https"},{"port": 80,"targetPort": 80,"name": "http"}],"type": "LoadBalancer"}}'
</code></pre>
<p>If you don't want to create JSON on the command line, create a yaml file like so:</p>
<pre><code>ports:
- port: 443
targetPort: 443
name: "https"
- port: 80
targetPort: 80
name: "http"
type: LoadBalancer
</code></pre>
<p>And then do:</p>
<pre><code>kubectl patch svc <my_service> --patch "$(cat patch.yaml)"
</code></pre>
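<p>To confirm the result afterwards, something like the following should show the updated ports and type:</p>
<pre><code># Inspect the patched Service (replace <my_service> with your Service name)
kubectl get svc <my_service> --output=yaml
</code></pre>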
| jaxxstorm |
<p>I wish to run k6 in a container with a simple JavaScript load script from the local file system.
The commands below seem to have a syntax error:</p>
<pre><code>$ cat simple.js
import http from 'k6/http';
import { sleep } from 'k6';
export const options = {
vus: 10,
duration: '30s',
};
export default function () {
http.get('http://100.96.1.79:8080');
sleep(1);
}
$kubectl run k6 --image=grafana/k6 -- run - <simple.js
//OR
$kubectl run k6 --image=grafana/k6 run - <simple.js
</code></pre>
<p>in the k6 pod log, I got</p>
<pre><code>β time="2023-02-16T12:12:05Z" level=error msg="could not initialize '-': could not load JS test 'file:///-': no exported functions in s β
</code></pre>
<p>I guess this means the simple.js is not really passed to k6 this way?</p>
<p>thank you!</p>
| sqr | <p>I think you can't pipe (host) files into Kubernetes containers this way.</p>
<p>One way that it should work is to:</p>
<ol>
<li>Create a ConfigMap to represent your file</li>
<li>Apply a Pod config that mounts the ConfigMap file</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>NAMESPACE="..." # Or default
kubectl create configmap simple \
--from-file=${PWD}/simple.js \
--namespace=${NAMESPACE}
kubectl get configmap/simple \
--output=yaml \
--namespace=${NAMESPACE}
</code></pre>
<p>Yields:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: simple
data:
simple.js: |
import http from 'k6/http';
import { sleep } from 'k6';
export default function () {
http.get('http://test.k6.io');
sleep(1);
}
</code></pre>
<p><strong>NOTE</strong> You could just create e.g. <code>configmap.yaml</code> with the above YAML content and apply it.</p>
<p>Then with <code>pod.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: simple
spec:
containers:
- name: simple
image: docker.io/grafana/k6
args:
- run
- /m/simple.js
volumeMounts:
- name: simple
mountPath: /m
volumes:
- name: simple
configMap:
name: simple
</code></pre>
<p>Apply it:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl apply \
--filename=${PWD}/pod.yaml \
--namespace=${NAMESPACE}
</code></pre>
<p>Then, finally:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl logs pod/simple \
--namespace=${NAMESPACE}
</code></pre>
<p>Yields:</p>
<pre><code>
          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io
execution: local
script: /m/simple.js
output: -
scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)
running (00m01.0s), 1/1 VUs, 0 complete and 0 interrupted iterations
default [ 0% ] 1 VUs 00m01.0s/10m0s 0/1 iters, 1 per VU
running (00m01.4s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [ 100% ] 1 VUs  00m01.4s/10m0s  1/1 iters, 1 per VU
data_received..................: 17 kB 12 kB/s
data_sent......................: 542 B 378 B/s
http_req_blocked...............: avg=128.38ms min=81.34ms med=128.38ms max=175.42ms p(90)=166.01ms p(95)=170.72ms
http_req_connecting............: avg=83.12ms min=79.98ms med=83.12ms max=86.27ms p(90)=85.64ms p(95)=85.95ms
http_req_duration..............: avg=88.61ms min=81.28ms med=88.61ms max=95.94ms p(90)=94.47ms p(95)=95.2ms
{ expected_response:true }...: avg=88.61ms min=81.28ms med=88.61ms max=95.94ms p(90)=94.47ms p(95)=95.2ms
     http_req_failed................: 0.00%  ✓ 0        ✗ 2
http_req_receiving.............: avg=102.59Β΅s min=67.99Β΅s med=102.59Β΅s max=137.19Β΅s p(90)=130.27Β΅s p(95)=133.73Β΅s
http_req_sending...............: avg=67.76Β΅s min=40.46Β΅s med=67.76Β΅s max=95.05Β΅s p(90)=89.6Β΅s p(95)=92.32Β΅s
http_req_tls_handshaking.......: avg=44.54ms min=0s med=44.54ms max=89.08ms p(90)=80.17ms p(95)=84.62ms
http_req_waiting...............: avg=88.44ms min=81.05ms med=88.44ms max=95.83ms p(90)=94.35ms p(95)=95.09ms
http_reqs......................: 2 1.394078/s
iteration_duration.............: avg=1.43s min=1.43s med=1.43s max=1.43s p(90)=1.43s p(95)=1.43s
iterations.....................: 1 0.697039/s
vus............................: 1 min=1 max=1
vus_max........................: 1 min=1 max=1
</code></pre>
<p>Tidy:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl delete \
--filename=${PWD}/pod.yaml \
--namespace=${NAMESPACE}
kubectl delete configmap/simple \
--namespace=${NAMESPACE}
kubectl delete namespace/${NAMESPACE}
</code></pre>
| DazWilkin |
<p>I need to restrict pod egress traffic to external destinations. Pods should be able to access any destination on the internet, and all cluster-internal destinations should be denied.</p>
<p>This is what I tried and it is not passing validation:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
name: test
spec:
workloadSelector:
labels:
k8s-app: mypod
outboundTrafficPolicy:
mode: REGISTRY_ONLY
egress:
- hosts:
- 'default/*'
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: all-external
spec:
location: MESH_EXTERNAL
resolution: DNS
hosts:
- '*'
ports:
- name: http
protocol: HTTP
number: 80
- name: https
protocol: TLS
number: 443
</code></pre>
<p>Istio 1.11.4</p>
| Jonas | <p>I did it using <code>NetworkPolicy</code>. Allow traffic to Kubernetes- and Istio-related services (this could be made more restrictive than just matching the namespace):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: myapp-eg-system
spec:
podSelector:
matchLabels:
app: myapp
policyTypes:
- Egress
egress:
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: istio-system
</code></pre>
<p>Allow anything except cluster network IP space:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: myapp-eg-app
spec:
podSelector:
matchLabels:
app: myapp
policyTypes:
- Egress
egress:
- to:
# Restrict to external traffic
- ipBlock:
cidr: '0.0.0.0/0'
except:
- '172.0.0.0/8'
- podSelector:
matchLabels:
app: myapp
ports:
- protocol: TCP
</code></pre>
| Jonas |
<p>What is a Load Balancer? </p>
<blockquote>
<p>Load balancing improves the distribution of workloads across multiple
computing resources, such as computers, a computer cluster, network
links, central processing units, or disk drives</p>
</blockquote>
<h3>The NodePort</h3>
<p>NodePort is not a load balancer. (I know that <code>kube-proxy</code> load-balances the traffic among the pods once the traffic is inside the cluster.) What I mean is: the end user hits the <code>http://NODEIP:30111</code> URL (for example) to access the application. Even though the traffic is load-balanced among the pods, users still hit a single node, i.e. the "Node", which is a K8s minion, not a real load balancer, right?</p>
<h3>The Ingress Service</h3>
<p>It's the same here: imagine the ingress-controller is deployed, and the ingress service too. The sub-domain that we specify in the ingress service points to "a" node in the K8s cluster, and then the ingress-controller load-balances the traffic among the pods. Here too, end users hit a single node, which is a K8s minion, not a real load balancer, right?</p>
<h3>Load Balancer From Cloud Provider(for example AWS ELB)</h3>
<p>I have a doubt about how the cloud provider's LB does the load balancing. Does it really distribute the traffic to the appropriate nodes on which the pods are deployed, or does it just forward the traffic to a master node or minion?</p>
<p>If the above point is true, where does the true load balancing of traffic among the pods/appropriate nodes happen?</p>
<p>Can I implement true load balancing in K8s? I asked a related <a href="https://stackoverflow.com/questions/51531312/how-to-access-k8ss-flannel-network-from-outside">question here</a></p>
| Veerendra K | <blockquote>
<p>NodePort is not a load balancer.</p>
</blockquote>
<p>You're right about this in one way, yes it's not designed to be a load balancer.</p>
<blockquote>
<p>users still hit a single node, i.e. the "Node", which is a K8s minion, not a real load balancer, right?</p>
</blockquote>
<p>With NodePort, you <em>have</em> to hit a single node at any one time, but you have to remember that <code>kube-proxy</code> is running on ALL nodes. So you can hit the NodePort on any node in the cluster (even a node the workload isn't running on) and you'll still hit the endpoint you want to hit. This becomes important later.</p>
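<p>For illustration, here's a minimal NodePort Service sketch (the names and ports are just placeholders, not from the question); the same <code>nodePort</code> is opened by <code>kube-proxy</code> on every node, so any node's IP will forward the traffic:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80              # ClusterIP port
    targetPort: 8080      # container port
    nodePort: 30111       # opened on every node by kube-proxy
</code></pre>
<p>So <code>curl http://ANY-NODE-IP:30111</code> reaches the backing pods regardless of which node they run on.</p>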
<blockquote>
<p>The sub-domain that we specify in the ingress service points to "a" node in the K8s cluster</p>
</blockquote>
<p>No, this isn't how it works.</p>
<p>Your ingress controller needs to be exposed externally still. If you're using a cloud provider, a commonly used pattern is to expose your ingress controller with Service of <code>Type=LoadBalancer</code>. The LoadBalancing still happens with Services, but Ingress allows you to use that Service in a more user friendly way. Don't confuse ingress with loadbalancing.</p>
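<p>As a rough sketch, exposing an ingress controller that way looks something like this (the controller name, labels and ports here are assumptions, not taken from any particular chart):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # hypothetical controller Service
  namespace: kube-system
spec:
  type: LoadBalancer               # the cloud provider provisions an external LB
  selector:
    app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</code></pre>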
<blockquote>
<p>I have a doubt about how the cloud provider's LB does the load balancing. Does it really distribute the traffic to the appropriate nodes on which the pods are deployed, or does it just forward the traffic to a master node or minion?</p>
</blockquote>
<p>If you look at a provisioned service in Kubernetes, you'll see why it makes sense.</p>
<p>Here's a Service of Type LoadBalancer:</p>
<pre><code>kubectl get svc nginx-ingress-controller -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-controller LoadBalancer <redacted> internal-a4c8... 80:32394/TCP,443:31281/TCP 147d
</code></pre>
<p>You can see I've deployed an ingress controller with type LoadBalancer. This has created an AWS ELB, but also notice, like <code>NodePort</code> it's mapped port 80 on the ingress controller pod to port <code>32394</code>.</p>
<p>So, let's look at the actual LoadBalancer in AWS:</p>
<pre><code>aws elb describe-load-balancers --load-balancer-names a4c80f4eb1d7c11e886d80652b702125
{
"LoadBalancerDescriptions": [
{
"LoadBalancerName": "a4c80f4eb1d7c11e886d80652b702125",
"DNSName": "internal-a4c8<redacted>",
"CanonicalHostedZoneNameID": "<redacted>",
"ListenerDescriptions": [
{
"Listener": {
"Protocol": "TCP",
"LoadBalancerPort": 443,
"InstanceProtocol": "TCP",
"InstancePort": 31281
},
"PolicyNames": []
},
{
"Listener": {
"Protocol": "HTTP",
"LoadBalancerPort": 80,
"InstanceProtocol": "HTTP",
"InstancePort": 32394
},
"PolicyNames": []
}
],
"Policies": {
"AppCookieStickinessPolicies": [],
"LBCookieStickinessPolicies": [],
"OtherPolicies": []
},
"BackendServerDescriptions": [],
"AvailabilityZones": [
"us-west-2a",
"us-west-2b",
"us-west-2c"
],
"Subnets": [
"<redacted>",
"<redacted>",
"<redacted>"
],
"VPCId": "<redacted>",
"Instances": [
{
"InstanceId": "<redacted>"
},
{
"InstanceId": "<redacted>"
},
{
"InstanceId": "<redacted>"
},
{
"InstanceId": "<redacted>"
},
{
"InstanceId": "<redacted>"
},
{
"InstanceId": "<redacted>"
},
{
"InstanceId": "<redacted>"
},
{
"InstanceId": "<redacted>"
}
],
"HealthCheck": {
"Target": "TCP:32394",
"Interval": 10,
"Timeout": 5,
"UnhealthyThreshold": 6,
"HealthyThreshold": 2
},
"SourceSecurityGroup": {
"OwnerAlias": "337287630927",
"GroupName": "k8s-elb-a4c80f4eb1d7c11e886d80652b702125"
},
"SecurityGroups": [
"sg-8e0749f1"
],
"CreatedTime": "2018-03-01T18:13:53.990Z",
"Scheme": "internal"
}
]
}
</code></pre>
<p>The most important things to note here are:</p>
<p>The LoadBalancer is mapping port 80 in ELB to the NodePort:</p>
<pre><code>{
"Listener": {
"Protocol": "HTTP",
"LoadBalancerPort": 80,
"InstanceProtocol": "HTTP",
"InstancePort": 32394
},
"PolicyNames": []
}
</code></pre>
<p>You'll also see that there are multiple target <code>Instances</code>, not one:</p>
<pre><code>aws elb describe-load-balancers --load-balancer-names a4c80f4eb1d7c11e886d80652b702125 | jq '.LoadBalancerDescriptions[].Instances | length'
8
</code></pre>
<p>And finally, if you look at the number of nodes in my cluster, you'll see it's actually <em>all</em> the nodes that have been added to the LoadBalancer:</p>
<pre><code>kubectl get nodes -l "node-role.kubernetes.io/node=" --no-headers=true | wc -l
8
</code></pre>
<p>So, in summary - Kubernetes <em>does</em> implement true LoadBalancing with services (whether that be NodePort or LoadBalancer types) and the ingress just makes that service more accessible to the outside world</p>
| jaxxstorm |