Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---
<p>Say we have this in a deployment.yml</p>
<pre><code>containers:
- name: my_container
  imagePullPolicy: Always
  image: my_image:latest
</code></pre>
<p>and so redeployment might take the form of:</p>
<pre><code>kubectl set image deployment/my-deployment my_container=my_image
</code></pre>
<p>which I stole from here:</p>
<p><a href="https://stackoverflow.com/a/40368520/1223975">https://stackoverflow.com/a/40368520/1223975</a></p>
<p>My question is: is this the right way to do a rolling update? Will the above always make sure the deployment gets the new image? My deployment.yml might never change; it might just be <code>my_image:latest</code> forever, so how do I do rolling updates?</p>
| Alexander Mills | <p>I don't expect this to be an accepted answer. But I wanted to make it for the future as there <em>is</em> a command to do this in Kubernetes 1.15.</p>
<p>PR <a href="https://github.com/kubernetes/kubernetes/pull/76062" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/76062</a> added a command called <code>kubectl rollout restart</code>. It is part of Kubernetes 1.15. In the future you will be able to do:</p>
<pre><code>kubectl rollout restart deployment/my-deployment
</code></pre>
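<p>On clusters older than 1.15, a rough equivalent is to patch a throwaway timestamp annotation into the pod template, which is essentially what <code>rollout restart</code> does under the hood (a sketch, using the deployment name from the question):</p>
<pre><code>kubectl patch deployment my-deployment -p \
  '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}}}}}'
</code></pre>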
| Andy Shinn |
<h1>The situation</h1>
<p>I have a kubernetes pod stuck in "Terminating" state that resists pod deletions</p>
<pre><code>NAME                             READY   STATUS        RESTARTS   AGE
...
funny-turtle-myservice-xxx-yyy   1/1     Terminating   1          11d
...
</code></pre>
<p>Where <code>funny-turtle</code> is the name of the helm release that has since been deleted.</p>
<h1>What I have tried</h1>
<h3>try to delete the pod.</h3>
<p>Output: <code>pod "funny-turtle-myservice-xxx-yyy" deleted</code></p>
<p>Outcome: it still shows up in the same state. I also tried with <code>--force --grace-period=0</code>; same outcome, with an extra warning:</p>
<blockquote>
<p>warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.</p>
</blockquote>
<h3>try to read the logs (kubectl logs ...).</h3>
<p>Outcome: <code>Error from server (NotFound): nodes "ip-xxx.yyy.compute.internal" not found</code></p>
<h3>try to delete the kubernetes deployment.</h3>
<p>but it does not exist.</p>
<p>So I assume this pod somehow got "disconnected" from the AWS API, reasoning from the error message that <code>kubectl logs</code> printed.</p>
<p>I'll take any suggestions or guidance to explain what happened here and how I can get rid of it.</p>
<h3>EDIT 1</h3>
<p>Tried to see if the "ghost" node was still there (<code>kubectl delete node ip-xxx.yyy.compute.internal</code>) but it does not exist.</p>
| Yann Pellegrini | <p>Try removing the finalizers from the pod:</p>
<pre><code>kubectl patch pod funny-turtle-myservice-xxx-yyy -p '{"metadata":{"finalizers":null}}'
</code></pre>
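<p>To see whether finalizers are indeed what is blocking deletion, you can inspect them first (a quick check, using the pod name from the question):</p>
<pre><code>kubectl get pod funny-turtle-myservice-xxx-yyy -o jsonpath='{.metadata.finalizers}'
</code></pre>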
| jaxxstorm |
<p>I'm trying to get Let's Encrypt working on a K3s cluster of mine. I've been following the below tutorial but since it's more than a year old I'm using a later version of <code>cert-manager</code>.</p>
<p><a href="https://pascalw.me/blog/2019/07/02/k3s-https-letsencrypt.html" rel="nofollow noreferrer">https://pascalw.me/blog/2019/07/02/k3s-https-letsencrypt.html</a></p>
<p>I'm executing the following commands</p>
<pre><code>kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.1/cert-manager.crds.yaml
helm repo add jetstack https://charts.jetstack.io
kubectl create namespace cert-manager
helm install cert-manager jetstack/cert-manager --namespace cert-manager
echo "apiVersion: cert-manager.io/v1beta1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
email: [email protected]
privateKeySecretRef:
name: staging-issuer-account-key
server: https://acme-staging-v02.api.letsencrypt.org/directory
http01: {}
solvers:
- http01:
ingress:
class: traefik
selector: {}
" | kubectl apply --validate=false -f -
</code></pre>
<p>My generated ingress resource looks as the following</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: http01
    certmanager.k8s.io/issuer: letsencrypt-staging
    kubernetes.io/ingress.class: traefik
    meta.helm.sh/release-name: whoami-mn
    meta.helm.sh/release-namespace: whoami-mn-dev
  creationTimestamp: "2020-09-13T08:49:27Z"
  generation: 3
  labels:
    app.kubernetes.io/instance: whoami-mn
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: whoami-mn
    app.kubernetes.io/version: "0.1"
    helm.sh/chart: whoami-mn-0.4.0
  managedFields:
  - apiVersion: extensions/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: traefik
    operation: Update
    time: "2020-09-13T10:08:21Z"
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:certmanager.k8s.io/acme-challenge-type: {}
          f:certmanager.k8s.io/issuer: {}
          f:kubernetes.io/ingress.class: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/managed-by: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/version: {}
          f:helm.sh/chart: {}
      f:spec:
        f:rules: {}
        f:tls: {}
    manager: Go-http-client
    operation: Update
    time: "2020-09-13T12:37:32Z"
  name: whoami-mn
  namespace: whoami-mn-dev
  resourceVersion: "1127785"
  selfLink: /apis/extensions/v1beta1/namespaces/whoami-mn-dev/ingresses/whoami-mn
  uid: d4ff44a2-a45e-4ef4-ac53-e76c7603d91a
spec:
  rules:
  - host: whoami-mn.myhost.com
    http:
      paths:
      - backend:
          serviceName: whoami-mn
          servicePort: 8080
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - whoami-mn.myhost.com
    secretName: whoami-mn-tls
status:
  loadBalancer:
    ingress:
    - ip: 192.168.0.100
</code></pre>
<p>But the endpoint returns 404 and the logs from traefik contains multiple entries like the following</p>
<pre><code>{"level":"error","msg":"Error configuring TLS for ingress whoami-mn-dev/whoami-mn: secret whoami-mn-dev/whoami-mn-tls does not exist","time":"2020-09-13T14:44:10Z"}
</code></pre>
<p>Any clue about what I'm doing wrong? Also, please let me know if I should post anything else.</p>
| user672009 | <p>The key to get this working was annotating my ingress resource correctly.</p>
<pre><code>cert-manager.io/cluster-issuer: letsencrypt-staging
</code></pre>
<p>And NOT the following</p>
<pre><code>certmanager.k8s.io/issuer: letsencrypt-staging
</code></pre>
<p>Version 1.0.1 of cert-manager was used.</p>
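<p>For reference, a minimal sketch of where that annotation sits on the ingress (resource names taken from the question):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami-mn
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: traefik
</code></pre>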
| user672009 |
<p>I've created kubernetes cluster using kops</p>
<pre><code>kops create cluster \
--dns-zone=vpc.abc.in \
--master-zones=ap-southeast-1a,ap-southeast-1b,ap-southeast-1c \
--zones=ap-southeast-1a,ap-southeast-1b,ap-southeast-1c \
--node-count 3 \
--topology private \
--networking flannel-vxlan \
--node-size=t2.medium \
--master-size=t2.micro \
${NAME}
</code></pre>
<p>I'm using private topology and internal loadbalancer.</p>
<p>Whenever I create a service of type=LoadBalancer it creates a public-facing ELB and the URL is accessible publicly.</p>
<p>I want to deploy Elastic Search and kibana and make it available only inside VPN. We already have VPN setup.</p>
<p>How to make service accessible within the VPN?</p>
| prranay | <p>Add the following annotation to your service definition:</p>
<pre><code>service.beta.kubernetes.io/aws-load-balancer-internal: '"true"'
</code></pre>
<p>Full example:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: '"true"'
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: LoadBalancer
</code></pre>
<p>This will provision an internal ELB rather than an external one.</p>
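<p>Once the service is up, you can check that the provisioned load balancer is internal; the address reported in the <code>EXTERNAL-IP</code> column should be a private address inside your VPC:</p>
<pre><code>kubectl get service my-service -o wide
</code></pre>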
| jaxxstorm |
<p>I want to syslog from a container to the host Node - </p>
<p>Targeting fluentd (@127.0.0.1:5140) which runs on the node - <a href="https://docs.fluentd.org/input/syslog" rel="nofollow noreferrer">https://docs.fluentd.org/input/syslog</a></p>
<p>e.g. syslog from hello-server to the node (which hosts all of these namespaces)</p>
<p>I want to syslog output from hello-server container to fluentd running on node (@127.0.0.1:5140).</p>
<pre><code>kubectl get pods --all-namespaces
NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE
default       hello-server-7d8589854c-r4xfr                    1/1     Running   0          21h
kube-system   event-exporter-v0.2.4-5f7d5d7dd4-lgzg5           2/2     Running   0          6d6h
kube-system   fluentd-gcp-scaler-7b895cbc89-bnb4z              1/1     Running   0          6d6h
kube-system   fluentd-gcp-v3.2.0-4qcbs                         2/2     Running   0          6d6h
kube-system   fluentd-gcp-v3.2.0-jxnbn                         2/2     Running   0          6d6h
kube-system   fluentd-gcp-v3.2.0-k58x6                         2/2     Running   0          6d6h
kube-system   heapster-v1.6.0-beta.1-7778b45899-t8rz9          3/3     Running   0          6d6h
kube-system   kube-dns-autoscaler-76fcd5f658-7hkgn             1/1     Running   0          6d6h
kube-system   kube-dns-b46cc9485-279ws                         4/4     Running   0          6d6h
kube-system   kube-dns-b46cc9485-fbrm2                         4/4     Running   0          6d6h
kube-system   kube-proxy-gke-test-default-pool-040c0485-7zzj   1/1     Running   0          6d6h
kube-system   kube-proxy-gke-test-default-pool-040c0485-ln02   1/1     Running   0          6d6h
kube-system   kube-proxy-gke-test-default-pool-040c0485-w6kq   1/1     Running   0          6d6h
kube-system   l7-default-backend-6f8697844f-bxn4z              1/1     Running   0          6d6h
kube-system   metrics-server-v0.3.1-5b4d6d8d98-k7tz9           2/2     Running   0          6d6h
kube-system   prometheus-to-sd-2g7jc                           1/1     Running   0          6d6h
kube-system   prometheus-to-sd-dck2n                           1/1     Running   0          6d6h
kube-system   prometheus-to-sd-hsc69                           1/1     Running   0          6d6h
</code></pre>
<p>For some reason k8s does not allow us to use the built-in syslog driver <code>docker run --log-driver syslog</code>.</p>
<p>Also, k8s does not allow me to connect with the underlying host using <code>--network="host"</code>.</p>
<p>Has anyone tried anything similar? Maybe it would be easier to syslog remotely rather than trying to use the underlying syslog running on every node?</p>
| forestgreen | <p>What you are actually looking at is the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver" rel="nofollow noreferrer">Stackdriver Logging Agent</a>. According to the documentation at <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/#prerequisites" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/#prerequisites</a>:</p>
<blockquote>
<p>If you’re using GKE and Stackdriver Logging is enabled in your cluster, you cannot change its configuration, because it’s managed and supported by GKE. However, you can disable the default integration and deploy your own.</p>
</blockquote>
<p>The documentation then gives an example of running your own fluentd DaemonSet with a custom ConfigMap. You'd need to run your own fluentd so you could configure a syslog input per <a href="https://docs.fluentd.org/input/syslog" rel="nofollow noreferrer">https://docs.fluentd.org/input/syslog</a>.</p>
<p>Then, since the fluentd is running as a DaemonSet, you would configure a Service to expose it to other pods and allow them to connect to it. If you are running the official upstream DaemonSet from <a href="https://github.com/fluent/fluentd-kubernetes-daemonset" rel="nofollow noreferrer">https://github.com/fluent/fluentd-kubernetes-daemonset</a> then a service might look like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    k8s-app: fluentd-logging
  ports:
  - protocol: UDP
    port: 5140
    targetPort: 5140
</code></pre>
<p>Then your applications can log to <code>fluentd.kube-system:5140</code> (see using DNS at <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#dns</a>).</p>
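<p>As a quick smoke test, assuming the util-linux <code>logger</code> utility is available in the application container, you could send a test message through the Service:</p>
<pre><code>logger --server fluentd.kube-system --port 5140 --udp "hello from hello-server"
</code></pre>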
| Andy Shinn |
<p>I'm working on writing a custom controller for our kubernetes cluster that'll listen to node events and perform some operation on the node. I'm using the kubernetes client-go library and am able to capture kubernetes events whenever a node is attached or removed from the cluster. But is it possible to get the AWS instance details of a kubernetes node that has been created, like instance ID, tags, etc.? Thanks in advance.</p>
<p>PS: I have installed the kubernetes cluster using kops</p>
| Mathan Kumar | <p>On a Kubernetes node in AWS, you'll have some things populated as part of the node labels and various other parts of the node's metadata:</p>
<pre><code>kubectl get nodes -o json | jq '.items[].metadata.labels'
{
  "beta.kubernetes.io/arch": "amd64",
  "beta.kubernetes.io/instance-type": "c5.large",
  "beta.kubernetes.io/os": "linux",
  "failure-domain.beta.kubernetes.io/region": "us-east-1",
  "failure-domain.beta.kubernetes.io/zone": "us-east-1b",
  "kubernetes.io/hostname": "<hostname>",
  "kubernetes.io/role": "master",
  "manufacturer": "amazon_ec2",
  "node-role.kubernetes.io/master": "",
  "operatingsystem": "centos",
  "tier": "production",
  "virtual": "kvm"
}
</code></pre>
<p>The node information is in <code>client-go</code> in the <a href="https://github.com/kubernetes/client-go/blob/master/kubernetes/typed/core/v1/node.go" rel="nofollow noreferrer">node package here</a> using the <code>Get</code> method. Here's an example:</p>
<pre><code>// Assumes `config` is a *rest.Config, e.g. from clientcmd or rest.InClusterConfig().
client := kubernetes.NewForConfigOrDie(config)
list, err := client.CoreV1().Nodes().List(metav1.ListOptions{})
if err != nil {
    fmt.Fprintf(os.Stderr, "error listing nodes: %v", err)
    os.Exit(1)
}
for _, node := range list.Items {
    fmt.Printf("Node: %s\n", node.Name)
    node, err := client.CoreV1().Nodes().Get(node.Name, metav1.GetOptions{})
    if err != nil {
        fmt.Fprintf(os.Stderr, "error getting node: %v", err)
        os.Exit(1)
    }
    fmt.Println(node)
}
</code></pre>
<p><em>However</em> this is really probably not the way you want to go about it. If you're running this on a kops cluster in AWS, the node your workload is running on already has access to the AWS API and also the <a href="https://aws.amazon.com/iam/faqs/" rel="nofollow noreferrer">IAM role</a> needed to query node data.</p>
<p>With that in mind, please consider using the <a href="https://github.com/aws/aws-sdk-go" rel="nofollow noreferrer">AWS Go SDK</a> instead. You can query EC2 quite easily, here's an <a href="https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/go/example_code/ec2/describing_instances.go" rel="nofollow noreferrer">adapted example</a>:</p>
<pre><code>package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    // Load session from shared config
    sess := session.Must(session.NewSessionWithOptions(session.Options{
        SharedConfigState: session.SharedConfigEnable,
    }))

    // Create new EC2 client
    ec2Svc := ec2.New(sess)

    // Call to get detailed information on each instance
    result, err := ec2Svc.DescribeInstances(nil)
    if err != nil {
        fmt.Println("Error", err)
    } else {
        fmt.Println("Success", result)
    }
}
</code></pre>
| jaxxstorm |
<p>We need to access the kubelet logs on our Kubernetes node (which is in AWS) to investigate an issue we are facing regarding a Kubernetes error (see <a href="https://stackoverflow.com/questions/51475561/even-after-adding-additional-kubernetes-node-i-see-new-node-unused-while-gettin">Even after adding additional Kubernetes node, I see new node unused while getting error "No nodes are available that match all of the predicates:</a>).</p>
<p>Kubectl logs only gets logs from pods. To get kubelet logs, we need to ssh into the k8s node box (AWS EC2 box). While doing so we are getting the error "Permission denied (publickey)", which means we need to set a new SSH public key, as we may not have access to the one that was set earlier.</p>
<p>Question is if we set the new keys using kops as described in <a href="https://github.com/kubernetes/kops/blob/master/docs/security.md" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/security.md</a>, would we end up creating any harm to existing cluster? Would any of the existing services/access stop working? Or would this only impact manual ssh to the AWS EC2 machines?</p>
| mi10 | <p>You would need to update the kops cluster using <code>kops update cluster</code> first. However, this would not change the SSH key on any running nodes.</p>
<p>By modifying a cluster using <code>kops update cluster</code> you are simply modifying the <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html" rel="nofollow noreferrer">Launch Configurations</a> for the cluster. This will only take effect when new nodes are provisioned.</p>
<p>In order to rectify this, you'll need to cycle your infrastructure. The only way to do this is to delete the nodes and control plane nodes one by one from the ASG.</p>
<p>Once you delete a node from the ASG, it will be replaced by the new launch configuration with the new SSH key.</p>
<p>Before you delete a node from AWS, you should <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="nofollow noreferrer">drain it</a> first using <code>kubectl drain</code>:</p>
<pre><code>kubectl drain <nodename> --ignore-daemonsets --force
</code></pre>
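<p>After the drain completes, terminating the instance through the ASG (rather than a plain EC2 termination) lets the group replace it using the new launch configuration. A sketch with the AWS CLI; the instance ID here is a placeholder:</p>
<pre><code>aws autoscaling terminate-instance-in-auto-scaling-group \
  --instance-id i-0123456789abcdef0 \
  --no-should-decrement-desired-capacity
</code></pre>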
| jaxxstorm |
<p>I'm testing the latest version of the Elastic Stack (7.2.0) and I can't seem to connect Kibana to Elasticsearch, but when I roll back to 6.8.1 it works. Any ideas?</p>
<hr>
<h2>Kibana Deploy & Service</h2>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: *************
  labels:
    component: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.2.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elastic.****************:80
        ports:
        - containerPort: 5601
          name: kibana
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: *************
  labels:
    component: kibana
spec:
  selector:
    component: kibana
  ports:
  - port: 80
    protocol: "TCP"
    name: "http"
    targetPort: 5601
</code></pre>
<hr>
<p>I am using an ingress but Kibana completely ignores the ELASTICSEARCH_URL value when I try to deploy 7.2.0, whereas it works when I roll back to 6.8.1. I don't know if this method is no longer supported in 7.2.0; I've been all over trying to find some documentation but no luck.</p>
| user11765676 | <p>As of Kibana 7.0 <code>elasticsearch.url</code> is no longer valid and it is now <code>elasticsearch.hosts</code>: <a href="https://www.elastic.co/guide/en/kibana/7.x/breaking-changes-7.0.html#_literal_elasticsearch_url_literal_is_no_longer_valid" rel="nofollow noreferrer">https://www.elastic.co/guide/en/kibana/7.x/breaking-changes-7.0.html#_literal_elasticsearch_url_literal_is_no_longer_valid</a>.</p>
<p>The environment variables translate to these settings names. In this case, the new environment variable would be <code>ELASTICSEARCH_HOSTS</code>. See the example at <a href="https://www.elastic.co/guide/en/kibana/7.2/docker.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/kibana/7.2/docker.html</a>.</p>
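<p>Applied to the deployment from the question, the change would look something like this (the host value is the placeholder from the original manifest):</p>
<pre><code>env:
- name: ELASTICSEARCH_HOSTS
  value: http://elastic.****************:80
</code></pre>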
| Andy Shinn |
<p>I'm trying to connect one pod to another, but getting a connection refused error.</p>
<p>I only run:</p>
<ol>
<li><p>RavenDB Server</p>
<ul>
<li>Deployment which has:
<ul>
<li>ports:
<ul>
<li>containerPort:8080, protocol: TCP</li>
<li>containerPort:38888, protocol: TCP</li>
</ul></li>
</ul></li>
<li>Service:
<ul>
<li>ravendb-cluster01-service</li>
<li>clusterIP: None, ports: 8080 / 38888</li>
</ul></li>
</ul></li>
<li><p>RavenDB Client</p>
<ul>
<li>Connects to ravendb-cluster01-service.staging.svc.cluster.local:8080
<ul>
<li>Though fails with a connection refused error</li>
</ul></li>
</ul></li>
</ol>
<p>What doesn't work:</p>
<ul>
<li>Client cannot connect to server, connection refused</li>
</ul>
<p>What does work:</p>
<ul>
<li>when accessing the client pod using interactive shell: <code>docker -it ... -- bash</code>,
<ul>
<li>I can ping the service</li>
<li>and telnet to it</li>
</ul></li>
<li>when using <code>kubectl ... port-forward 8080:8080</code>, I can locally enjoy the database server, so the server is running</li>
</ul>
<p>Strangely enough, when accessing the container I'm able to connect to it, though the running script itself refuses to connect to the target pod.</p>
<p>It's pod-to-pod communication; I tagged the target server (RavenDB) with a headless service (no service IP address) so that the domain name resolves to the current IP address of the pod.</p>
<p>Any idea what I'm doing wrong?</p>
<p>Full config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: ravendb-cluster01
    tier: backend
  name: ravendb-cluster01
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ravendb-cluster01
      tier: backend
  template:
    metadata:
      labels:
        app: ravendb-cluster01
        tier: backend
      name: ravendb-cluster01
      namespace: staging
    spec:
      containers:
      - env:
        - name: RAVEN_ARGS
          value: --ServerUrl=http://ravendb-cluster01-service.staging.svc.cluster.local:8080
            --ServerUrl.Tcp=tcp://ravendb-cluster01-service.staging.svc.cluster.local:38888
            --PublicServerUrl=http://localhost:8080 --PublicServerUrl.Tcp=tcp://localhost:38888
            --DataDir=/ravendb/ --Setup.Mode=None --License.Eula.Accepted=true
        image: ravendb/ravendb-nightly:4.0.6-nightly-20180720-0400-ubuntu.16.04-x64
        name: ravendb
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 38888
          name: tcp
          protocol: TCP
        resources:
          limits:
            memory: 26000Mi
          requests:
            memory: 26000Mi
        volumeMounts:
        - mountPath: /ravendb/
          name: ravendb-cluster01-storage
      volumes:
      - gcePersistentDisk:
          fsType: ext4
          pdName: ravendb-cluster01-storage
        name: ravendb-cluster01-storage
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ravendb-cluster01-service
    tier: backend
  name: ravendb-cluster01-service
  namespace: staging
spec:
  clusterIP: None
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: tcp
    port: 38888
    protocol: TCP
    targetPort: 38888
  selector:
    app: ravendb-cluster01
    tier: backend
  sessionAffinity: None
  type: ClusterIP
</code></pre>
| user2331234 | <p>The issue appears to be your <code>PublicServerUrl</code> setting.</p>
<pre><code>--PublicServerUrl=http://localhost:8080 --PublicServerUrl.Tcp=tcp://localhost:38888
</code></pre>
<p>As per the RavenDB documentation:</p>
<blockquote>
<p>Set the URL to be accessible by clients and other nodes, regardless of which IP is used to access the server internally. This is useful when using a secured connection via https URL, or behind a proxy server.</p>
</blockquote>
<p>You either need to configure this to be the service name, or remove the option entirely. After reviewing the docs for <a href="https://ravendb.net/docs/article-page/4.0/csharp/server/configuration/core-configuration#serverurl" rel="nofollow noreferrer">ServerUrl</a> I would personally recommend updating your args to be something like this:</p>
<pre><code>value: --ServerUrl=http://0.0.0.0:8080
  --ServerUrl.Tcp=tcp://0.0.0.0:38888
  --PublicServerUrl=http://ravendb-cluster01-service.staging.svc.cluster.local:8080 --PublicServerUrl.Tcp=tcp://ravendb-cluster01-service.staging.svc.cluster.local:38888
  --DataDir=/ravendb/ --Setup.Mode=None --License.Eula.Accepted=true
</code></pre>
<p>You want the <code>ServerUrl</code> to listen on all interfaces, so binding it to <code>0.0.0.0</code> makes sense, while the <code>PublicServerUrl</code> should be the address clients actually use to reach the service.</p>
<p>The reason it works with both <code>port-forward</code> and from the local docker container is probably because RavenDB is listening on the loopback device, and both those methods of connection give you a local process inside the container, so the loopback device is accessible.</p>
| jaxxstorm |
<p>I have an Azure Kubernetes cluster with Velero installed. A Service Principal was created for Velero, per <a href="https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/tree/master#option-1-create-service-principal" rel="nofollow noreferrer">option 1 of the instructions</a>.</p>
<p>Velero was working fine until the credentials for the Service Principal were reset. Now the scheduled backups are failing.</p>
<pre><code>NAME                                  STATUS   ERRORS   WARNINGS   CREATED                     EXPIRES   STORAGE LOCATION   SELECTOR
daily-entire-cluster-20210727030055   Failed   0        0          2021-07-26 23:00:55 -0000   13d       default            <none>
</code></pre>
<p>How can I update the secret for Velero?</p>
| Codebling | <h1>1. Update credentials file</h1>
<p>First, update your credentials file (for most providers, this is <code>credentials-velero</code> and the contents are described in the plugin installation instructions: <a href="https://github.com/vmware-tanzu/velero-plugin-for-aws#set-permissions-for-velero" rel="nofollow noreferrer">AWS</a>, <a href="https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure#create-service-principal" rel="nofollow noreferrer">Azure</a>, <a href="https://github.com/vmware-tanzu/velero-plugin-for-gcp#option-1-set-permissions-with-a-service-account" rel="nofollow noreferrer">GCP</a>)</p>
<h1>2. Update secret</h1>
<p>Now update the velero secret. On linux:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl patch -n velero secret cloud-credentials -p '{"data": {"cloud": "'$(base64 -w 0 credentials-velero)'"}}'
</code></pre>
<ul>
<li><code>patch</code> tells <code>kubectl</code> to update a resource by merging the provided data</li>
<li><code>-n velero</code> tells <code>kubectl</code> to use the <code>velero</code> namespace</li>
<li><code>secret</code> is the resource type</li>
<li><code>cloud-credentials</code> is the name of the secret used by Velero to store credentials</li>
<li><code>-p </code> specifies that the next word is the patch data. It's more common to patch using JSON rather than YAML</li>
<li><code>'{"data": {"cloud": "<your-base64-encoded-secret-will-go-here>"}}'</code> this is the JSON data that matches the existing structure of the Velero secret in Kubernetes. <code><your-base64-encoded-secret-will-go-here></code> is a placeholder for the command we'll insert.</li>
<li><code>$(base64 -w 0 credentials-velero)</code> reads the file <code>credentials-velero</code> in the current directory, turns off word wrapping of the output (<code>-w 0</code>), BASE64-encodes the contents of the file, and inserts the result in the data.</li>
</ul>
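<p>To verify the update, you can decode the secret back out and compare it with your local file (a quick check):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get secret -n velero cloud-credentials -o jsonpath='{.data.cloud}' | base64 -d
</code></pre>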
| Codebling |
<p>I have created a deployment and a service on Google Kubernetes Engine. These are running on Cloud Compute instances.</p>
<p>I need to make my k8s application reachable from other Compute instances, but not from the outside world. That is because there are some legacy instances running outside the cluster and those cannot be migrated (yet, at least).</p>
<p>My understanding is that a <code>Service</code> makes the pod reachable from other cluster nodes, whereas an <code>Ingress</code> exposes the pod to the external traffic with an external IP.</p>
<p>What I need is something in the middle: I need to expose my pod outside the cluster, but only to other local Compute instances (in the same zone). I don't understand how I am supposed to do it.</p>
| rubik | <p>In Google Kubernetes Engine this is accomplished with a <code>LoadBalancer</code> type Service that is annotated to be an internal load balancer. The documentation for it is at <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing</a>.</p>
<p>Assuming you had a Deployment with label <code>app: echo-pod</code> that listened on port 8080 and you wanted to expose it as port 80 to GCE instances, the service would look something like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: echo-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: echo-pod
spec:
  type: LoadBalancer
  selector:
    app: echo-pod
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
</code></pre>
<p>It will take a moment to create the Service and internal load balancer. It will have an external IP once created:</p>
<pre><code>$ kubectl get services
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
echo-internal   LoadBalancer   10.4.11.165   10.128.0.35   80:31706/TCP   2m33s
kubernetes      ClusterIP      10.4.0.1      <none>        443/TCP        20m
</code></pre>
<p>The <code>10.128.0.35</code> IP is actually an <em>internal</em> IP address only accessible inside your VPC. From another GCE instance you can access it on the exposed port:</p>
<pre><code>$ curl http://10.128.0.35
Hostname: echo-deployment-5f55bb9855-hxl7b
</code></pre>
<p>Note: You need to have the "Load balancing" add-on enabled when you provisioned your cluster. But it is enabled by default and should be working unless you explicitly disabled the "Enable HTTP load balancing" option at cluster creation.</p>
| Andy Shinn |
<p>Question regarding AKS: each time I release via CD, Kubernetes gives a random IP address to my services.<br/>
I would like to know how to bind a domain to the IP.</p>
<p>Can someone give me some link or article to read?</p>
| Herman | <p>You have two options.</p>
<p>You can either deploy a Service with <code>type=LoadBalancer</code> which will provision a cloud load balancer. You can then point your DNS entry to that provisioned LoadBalancer with (for example) a CNAME.</p>
<p>More information on this can be found <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="noreferrer">here</a></p>
<p>Your second option is to use an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers" rel="noreferrer">Ingress Controller</a> with an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress Resource</a>. This offers much finer grained access via url parameters. You'll probably need to deploy your ingress controller pod/service with a service <code>Type=LoadBalancer</code> though, to make it externally accessible.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress" rel="noreferrer">Here's</a> an article which explains how to do ingress on Azure with the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">nginx-ingress-controller</a></p>
| jaxxstorm |
<p>Is there a simple <code>kubectl</code> command to take a <code>kubeconfig</code> file (that contains a cluster+context+user) and merge it into the ~/.kube/config file as an additional context?</p>
| Chad | <p>Do this:</p>
<pre><code>export KUBECONFIG=~/.kube/config:~/someotherconfig
kubectl config view --flatten
</code></pre>
<p>You can then pipe that out to a new file if needed.</p>
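<p>For example, to write the merged result to a new file and swap it in (writing directly to a file that is part of <code>KUBECONFIG</code> while it is being read can truncate it, hence the temporary file):</p>
<pre><code>KUBECONFIG=~/.kube/config:~/someotherconfig kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config
</code></pre>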
| jaxxstorm |
<p>Say, I have two namespaces k8s-app1 and k8s-app2</p>
<p>I can list all pods from specific namespace using the below command</p>
<pre><code>kubectl get pods -n <namespace>
</code></pre>
<p>We need to append the namespace to all commands to list objects from the respective namespaces. Is there a way to set a specific namespace and list objects without including the namespace explicitly?</p>
| P Ekambaram | <p>I like my answers short, to the point and with references to official documentation:</p>
<p><strong>Answer</strong>:</p>
<pre><code>kubectl config set-context --current --namespace=my-namespace
</code></pre>
<p><strong>From</strong>:</p>
<p><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a></p>
<pre><code># permanently save the namespace for all subsequent kubectl commands in that context.
kubectl config set-context --current --namespace=ggckad-s2
</code></pre>
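<p>You can verify which namespace is active for the current context with:</p>
<pre><code>kubectl config view --minify | grep namespace:
</code></pre>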
| PussInBoots |
<p>Is it possible to use the Ingress Controller function in Kubernetes without a load balancer (in DigitalOcean)?</p>
<p>Is there any other mechanism to allow a domain name to map to a Kubernetes service; for instance if I host two WordPress sites on a Kubernetes cluster:</p>
<p>==> WP Site 1: Node Port 80
==> WP Site 2: Node Port 8080</p>
<p>How does a domain name map to the container port 8080 without explicitly entering the port number?</p>
<p>Any help is appreciated.</p>
| Rutnet | <p>DNS doesn't support adding port numbers, you need an ingress controller (which essentially acts like a reverse proxy) to do this.</p>
<p>If you install the <a href="https://github.com/digitalocean/digitalocean-cloud-controller-manager" rel="noreferrer">digital ocean cloud controller manager</a> you'll be able to provision loadbalancers using services with type LoadBalancer. You can then deploy a standard ingress controller, like the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">nginx ingress controller</a> and give the service type=LoadBalancer.</p>
<p>This then becomes the ingress into your cluster, and you only have a single LoadBalancer, which keeps costs down.</p>
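<p>With the ingress controller in place, a single <code>Ingress</code> resource can then map each hostname to the right service and port, so clients never have to enter a port number. A sketch for the two WordPress sites (hostnames and service names are assumptions):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress-sites
spec:
  rules:
  - host: site1.example.com
    http:
      paths:
      - backend:
          serviceName: wp-site1
          servicePort: 80
  - host: site2.example.com
    http:
      paths:
      - backend:
          serviceName: wp-site2
          servicePort: 8080
</code></pre>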
| jaxxstorm |
<p>Workflow:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: my-workflow-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: configmap
        value: my-configmap
      - name: secret
        value: my-secret
  templates:
    - name: main
      steps:
        - - name: main
            templateRef:
              name: my-template
              template: main
            arguments:
              parameters:
                - name: configmap
                  value: "{{workflow.parameters.configmap}}"
                - name: secret
                  value: "{{workflow.parameters.secret}}"
</code></pre>
<p>Template:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: my-template
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: configmap
        parameters:
          - name: secret
      container:
        image: my-image:1.2.3
        envFrom:
          - configMapRef:
            name: "{{inputs.parameters.configmap}}"
          - secretRef:
            name: "{{inputs.parameters.secret}}"
</code></pre>
<p>When deployed through the Argo UI I receive the following error from Kubernetes when starting the pod:</p>
<pre><code>spec.containers[1].envFrom: Invalid value: \"\": must specify one of: `configMapRef` or `secretRef`
</code></pre>
<p>Using <code>envFrom</code> is supported and documented in the Argo documentation: <a href="https://argoproj.github.io/argo-workflows/fields/" rel="nofollow noreferrer">https://argoproj.github.io/argo-workflows/fields/</a>. Why is Kubernetes complaining here?</p>
| Boon | <p>As mentioned in the comments, there are a couple issues with your manifests. They're valid YAML, but that YAML does not deserialize into valid Argo custom resources.</p>
<ol>
<li>In the WorkflowTemplate, you have duplicated the <code>parameters</code> key in <code>spec.templates[0].inputs</code>.</li>
<li>In the WorkflowTemplate, you have placed the <code>configMapRef</code> and <code>secretRef</code> names at the same level as the keys. <code>configMapRef</code> and <code>secretRef</code> are objects, so the <code>name</code> key should be nested under each of those.</li>
</ol>
<p>These are the corrected manifests:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: my-template
spec:
entrypoint: main
templates:
- name: main
inputs:
parameters:
- name: configmap
- name: secret
container:
image: my-image:1.2.3
envFrom:
- configMapRef:
name: "{{inputs.parameters.configmap}}"
- secretRef:
name: "{{inputs.parameters.secret}}"
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: my-workflow-
spec:
entrypoint: main
arguments:
parameters:
- name: configmap
value: my-configmap
- name: secret
value: my-secret
templates:
- name: main
steps:
- - name: main
templateRef:
name: my-template
template: main
arguments:
parameters:
- name: configmap
value: "{{workflow.parameters.configmap}}"
- name: secret
value: "{{workflow.parameters.secret}}"
</code></pre>
<p>Argo Workflows supports <a href="https://github.com/argoproj/argo-workflows/blob/master/docs/ide-setup.md" rel="nofollow noreferrer">IDE-based validation</a> which should help you find/avoid these issues.</p>
| crenshaw-dev |
<p>I have an Nginx-based service which is configured to accept HTTPS only.
However the GKE ingress answers HTTP requests over HTTP. I know that GKE Ingress doesn't know how to enforce an HTTP -> HTTPS redirect, but is it possible to have it at least return HTTPS from the service?</p>
<pre><code>rules:
- http:
    paths:
    - path: /*
      backend:
        serviceName: dashboard-ui
        servicePort: 8443
</code></pre>
<p>UPDATE: I do have TLS configured on the GKE ingress and my K8S service. When a request comes in over HTTPS everything works nicely. But HTTP requests get HTTP responses. I implemented an HTTP->HTTPS redirect in my service, but it didn't help. In fact, for now all communication between the ingress and my service is HTTPS, because the service exposes only the HTTPS port.</p>
<p>SOLUTION - thanks to Paul Annetts: Nginx should check the original protocol inside the <em>HTTPS</em> block and redirect, like this:</p>
<pre><code>if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
</code></pre>
| Vitaly Karasik DevOps | <p>Yes, you can configure the GKE Kubernetes Ingress to both terminate HTTPS for external traffic, and also to use HTTPS internally between Google HTTP(S) Load Balancer and your service inside the GKE cluster.</p>
<p>This is documented <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">here</a>, but it is fairly complex.</p>
<p>For HTTPS to work you will need a TLS certificate and key.</p>
<p>If you have your own TLS certificate and key in the cluster as a secret, you can provide it using the <code>tls</code> section of <code>Ingress</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-2
spec:
  tls:
  - secretName: my-secret
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-metrics
          servicePort: 60000
</code></pre>
<p>You can also upload your TLS certificate and key directly to Google Cloud and provide a <code>ingress.gcp.kubernetes.io/pre-shared-cert</code> annotation that tells GKE Ingress to use it.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-psc-ingress
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-domain-tls-cert"
...
</code></pre>
<p>To use HTTPS for traffic inside Google Cloud, from the Load Balancer to your GKE cluster, you need the <code>cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'</code> annotation on your <code>NodePort</code> service. Note that your ports must be named for the HTTPS to work.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service-3
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  type: NodePort
  selector:
    app: metrics
    department: sales
  ports:
  - name: my-https-port
    port: 443
    targetPort: 8443
  - name: my-http-port
    port: 80
    targetPort: 50001
</code></pre>
<p>The load balancer itself doesn't support redirection from HTTP to HTTPS; you need to find another way to do that.</p>
<p>As you have NGINX as entry-point into your cluster, you can detect the protocol used to connect to the load-balancer with the <code>X-forwarded-Proto</code> HTTP header and do a redirect, something like this.</p>
<pre><code>if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
</code></pre>
| Paul Annetts |
<p>I came across an open source Kubernetes project <a href="https://github.com/kubernetes/kops" rel="noreferrer">KOPS</a> and AWS Kubernetes service EKS. Both these products allow installation of a Kubernetes cluster. However, I wonder why one would pick EKS over KOPS or vice versa if one has not run any of them earlier. </p>
<p>This question does not ask which one is better, but rather asks for a comparison.</p>
| Débora | <p>The two are largely the same. At the time of writing, the following are the differences I'm aware of between the two offerings:</p>
<p>EKS:</p>
<ul>
<li>Fully managed control plane from AWS - you have no control over the masters</li>
<li><a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html" rel="noreferrer">AWS native authentication IAM authentication with the cluster</a></li>
<li><a href="https://aws.amazon.com/blogs/opensource/networking-foundation-eks-aws-cni-calico/" rel="noreferrer">VPC level networking for pods</a> meaning you can use things like security groups at the cluster/pod level</li>
</ul>
<p>kops:</p>
<ul>
<li>Support for more Kubernetes features, such as <a href="https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md" rel="noreferrer">API server options</a></li>
<li>Auto provisioned nodes use the built in kops <code>node_up</code> tool</li>
<li>More flexibility over Kubernetes versions, EKS only has a few versions available right now</li>
</ul>
| jaxxstorm |
<p>How can you run an Akka Streams application in Argo and Kubernetes? I found documentation about Kubernetes and Akka Cluster, but I don't need an Akka cluster. Do I just need to run an ephemeral Akka application with many actors, or is an Akka cluster necessary?</p>
| javier_orta | <p>You can run an Akka Stream app on Kubernetes in a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> just like you can run it locally when testing.</p>
<p>If you need your application to scale to many Pods and handle a lot of input, it may be helpful to use an Akka Cluster to better distribute the work.</p>
<p>Argo Workflows and Akka Streams serve similar purposes: both connect a series of steps to transform data. Argo Workflows connects containers, and Akka Streams connects actors.</p>
<p>Depending on your use case, it might make sense to have an Argo Workflows step that runs an Akka Streams app (which runs a while and then exits). But it might be simpler to write the whole series of steps as either just an Akka Streams app (which is then run on a Deployment) or just an Argo Workflow (made up of non-Akka Streams containers).</p>
<p>tl;dr - There are a variety of "right" ways to run an Akka Streams app in Argo and/or Kubernetes. The "best" choice depends on your use case. At the end of the day, you either drop your Akka Streams container in the <code>image</code> field of either a Kubernetes Deployment or an Argo Workflow.</p>
| crenshaw-dev |
<p>I have created a single-master kubernetes <code>v1.9.0</code> cluster using the kubeadm command on a bare-metal server. Now I want to add two more masters and make it multi-master.</p>
<p>Is it possible to convert to a multi-master configuration? Is there a document available for this type of conversion?</p>
<p>I have found this link for <code>Kops</code>; not sure the same steps will work for other environments too.</p>
<p><a href="https://github.com/kubernetes/kops/blob/master/docs/single-to-multi-master.md" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/single-to-multi-master.md</a></p>
<p>Thanks
SR</p>
| sfgroups | <p>Yes, it's possible, but you may need to break your master setup temporarily.
You'll need to follow the instructions <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="noreferrer">here</a></p>
<p>In a nutshell:</p>
<p>Create a kubeadm config file. In that kubeadm config file you'll need to include the SAN for the loadbalancer you'll use. Example:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
api:
  controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://CP0_IP:2379"
      advertise-client-urls: "https://CP0_IP:2379"
      listen-peer-urls: "https://CP0_IP:2380"
      initial-advertise-peer-urls: "https://CP0_IP:2380"
      initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380"
    serverCertSANs:
    - CP0_HOSTNAME
    - CP0_IP
    peerCertSANs:
    - CP0_HOSTNAME
    - CP0_IP
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "192.168.0.0/16"
</code></pre>
<p>Copy the certificates created to your new nodes. All the certs under <code>/etc/kubernetes/pki/</code> should be copied</p>
<p>Copy the <code>admin.conf</code> from <code>/etc/kubernetes/admin.conf</code> to the new nodes</p>
<p>Example:</p>
<pre><code>USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:
done
</code></pre>
<p>Create your second kubeadm config file for the second node:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
api:
  controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://CP1_IP:2379"
      advertise-client-urls: "https://CP1_IP:2379"
      listen-peer-urls: "https://CP1_IP:2380"
      initial-advertise-peer-urls: "https://CP1_IP:2380"
      initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380,CP1_HOSTNAME=https://CP1_IP:2380"
      initial-cluster-state: existing
    serverCertSANs:
    - CP1_HOSTNAME
    - CP1_IP
    peerCertSANs:
    - CP1_HOSTNAME
    - CP1_IP
networking:
  # This CIDR is a calico default. Substitute or remove for your CNI provider.
  podSubnet: "192.168.0.0/16"
</code></pre>
<p>Replace the following variables with the correct addresses for this node:</p>
<ul>
<li>LOAD_BALANCER_DNS</li>
<li>LOAD_BALANCER_PORT</li>
<li>CP0_HOSTNAME</li>
<li>CP0_IP</li>
<li>CP1_HOSTNAME</li>
<li>CP1_IP</li>
</ul>
<p>Move the copied certs to the correct location</p>
<pre><code>USER=ubuntu # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
</code></pre>
<p>Now, you can start adding the master using <code>kubeadm</code></p>
<pre><code> kubeadm alpha phase certs all --config kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml
systemctl start kubelet
</code></pre>
<p>Join the node to the etcd cluster:</p>
<pre><code> CP0_IP=10.0.0.7
CP0_HOSTNAME=cp0
CP1_IP=10.0.0.8
CP1_HOSTNAME=cp1
KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
kubeadm alpha phase etcd local --config kubeadm-config.yaml
</code></pre>
<p>and then finally, add the controlplane:</p>
<pre><code> kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
kubeadm alpha phase controlplane all --config kubeadm-config.yaml
kubeadm alpha phase mark-master --config kubeadm-config.yaml
</code></pre>
<p>Repeat these steps for the third master, and you should be good.</p>
| jaxxstorm |
<p>I've launched a kubernetes cluster using kops. It was working fine until I started facing the following problem:</p>
<pre><code>kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>How do I solve this?
It looks like the kubernetes-apiserver is not running. How do I get it working?</p>
<pre><code>kubectl run nginx --image=nginx:1.10.0
error: failed to discover supported resources: Get http://localhost:8080/apis/apps/v1beta1?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
</code></pre>
<p>Please suggest</p>
| prranay | <p>Kubernetes uses a <code>$KUBECONFIG</code> file for connecting to clusters. It may be that when provisioning your kops cluster, it didn't write the file correctly. I can't be sure as you haven't provided enough info.</p>
<p>Assuming this is the issue, and you only have a single cluster, it can be resolved like so:</p>
<pre><code># Find your cluster name
kops get clusters
# set the clustername as a var
clustername=<clustername>
# export the KUBECONFIG variable, which kubectl uses to find the kubeconfig file
export KUBECONFIG=~/.kube/${clustername}
# download the kubeconfig file locally using kops
kops export kubecfg --name ${clustername} --config=~$KUBECONFIG
</code></pre>
<p>You can find more information about the <code>KUBECONFIG</code> file <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="noreferrer">here</a></p>
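<p>Once the kubeconfig is in place, you can confirm the API server is reachable again:</p>
<pre><code>kops validate cluster --name ${clustername}
kubectl get nodes
</code></pre>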
| jaxxstorm |
<p>I am learning Kubernetes and have deployed a headless service on Kubernetes (on AWS) which is exposed to the external world via an nginx ingress.</p>
<p>I want <code>nslookup <ingress_url></code> to directly return the IP addresses of the pods.
How do I achieve that?</p>
| Saurav Prakash | <p>If you declare a “headless” service with selectors, then the internal DNS for the service will be configured to return the IP addresses of its pods directly. This is a somewhat unusual configuration and you should also expect an effect on other, cluster internal, users of that service.</p>
<p>This is documented <a href="https://kubernetes.io/docs/concepts/services-networking/service/#with-selectors" rel="nofollow noreferrer">here</a>. Example:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  clusterIP: None
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
</code></pre>
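<p>You can observe the behaviour from inside the cluster by resolving the service name from any pod that has <code>nslookup</code> available; with <code>clusterIP: None</code> the lookup returns the pod IPs directly rather than a single service IP:</p>
<pre><code>kubectl exec -it <some-pod> -- nslookup my-service.default.svc.cluster.local
</code></pre>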
| Paul Annetts |
<p>I tried to convert the below working kubernetes manifest from</p>
<pre class="lang-yaml prettyprint-override"><code>##namespace
---
apiVersion: v1
kind: Namespace
metadata:
name: poc
##postgress
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: db
name: db
namespace: poc
spec:
replicas: 1
selector:
matchLabels:
app: db
template:
metadata:
labels:
app: db
spec:
containers:
- image: postgres
name: postgres
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: postgres
ports:
- containerPort: 5432
name: postgres
---
apiVersion: v1
kind: Service
metadata:
labels:
app: db
name: db
namespace: poc
spec:
type: ClusterIP
ports:
- name: "db-service"
port: 5432
targetPort: 5432
selector:
app: db
##adminer
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ui
name: ui
namespace: poc
spec:
replicas: 1
selector:
matchLabels:
app: ui
template:
metadata:
labels:
app: ui
spec:
containers:
- image: adminer
name: adminer
ports:
- containerPort: 8080
name: ui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: ui
name: ui
namespace: poc
spec:
type: NodePort
ports:
- name: "ui-service"
port: 8080
targetPort: 8080
selector:
app: ui
</code></pre>
<p>to</p>
<pre class="lang-js prettyprint-override"><code>import * as k8s from "@pulumi/kubernetes";
import * as kx from "@pulumi/kubernetesx";
//db
const dbLabels = { app: "db" };
const dbDeployment = new k8s.apps.v1.Deployment("db", {
spec: {
selector: { matchLabels: dbLabels },
replicas: 1,
template: {
metadata: { labels: dbLabels },
spec: {
containers: [
{
name: "postgres",
image: "postgres",
env: [{ name: "POSTGRES_USER", value: "postgres"},{ name: "POSTGRES_PASSWORD", value: "postgres"}],
ports: [{containerPort: 5432}]
}
]
}
}
}
});
const dbService = new k8s.core.v1.Service("db", {
metadata: { labels: dbDeployment.spec.template.metadata.labels },
spec: {
selector: dbLabels,
type: "ClusterIP",
ports: [{ port: 5432, targetPort: 5432, protocol: "TCP" }],
}
});
//adminer
const uiLabels = { app: "ui" };
const uiDeployment = new k8s.apps.v1.Deployment("ui", {
spec: {
selector: { matchLabels: uiLabels },
replicas: 1,
template: {
metadata: { labels: uiLabels },
spec: {
containers: [
{
name: "adminer",
image: "adminer",
ports: [{containerPort: 8080}],
}
]
}
}
}
});
const uiService = new k8s.core.v1.Service("ui", {
metadata: { labels: uiDeployment.spec.template.metadata.labels },
spec: {
selector: uiLabels,
type: "NodePort",
ports: [{ port: 8080, targetPort: 8080, protocol: "TCP" }]
}
});
</code></pre>
<p>With this, <code>pulumi up -y</code> succeeds without error, but the application is not fully up and running, because the <code>adminer</code> image is trying to reach the Postgres database at the hostname <code>db</code>. But it looks like Pulumi is changing the service name, as shown below:</p>
<p><a href="https://i.stack.imgur.com/syjmB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/syjmB.png" alt="enter image description here" /></a></p>
<p>My question here is: how do I make this work?
Is there a way in Pulumi to be strict with the naming?</p>
<p>Note- I know we can easily pass the hostname as an env variable to the <code>adminer</code> image but I am wondering if there is anything that can allow us to not change the name.</p>
| Samit Kumar Patel | <p>Pulumi automatically adds random strings to your resources to help with replacing resources. You can find more information about this in the <a href="https://www.pulumi.com/docs/troubleshooting/faq/#why-do-resource-names-have-random-hex-character-suffixes" rel="nofollow noreferrer">FAQ</a>.</p>
<p>If you'd like to disable this, you can override it using the <code>metadata</code>, like so:</p>
<pre class="lang-ts prettyprint-override"><code>import * as k8s from "@pulumi/kubernetes";
import * as kx from "@pulumi/kubernetesx";
//db
const dbLabels = { app: "db" };
const dbDeployment = new k8s.apps.v1.Deployment("db", {
spec: {
selector: { matchLabels: dbLabels },
replicas: 1,
template: {
metadata: { labels: dbLabels },
spec: {
containers: [
{
name: "postgres",
image: "postgres",
env: [{ name: "POSTGRES_USER", value: "postgres"},{ name: "POSTGRES_PASSWORD", value: "postgres"}],
ports: [{containerPort: 5432}]
}
]
}
}
}
});
const dbService = new k8s.core.v1.Service("db", {
metadata: {
name: "db", // explicitly set a name on the service
labels: dbDeployment.spec.template.metadata.labels
},
spec: {
selector: dbLabels,
type: "ClusterIP",
ports: [{ port: 5432, targetPort: 5432, protocol: "TCP" }],
}
});
</code></pre>
<p>With that said, it's not always best practice to hardcode names like this; you should, if possible, reference outputs from your resources and pass them to new resources.</p>
| jaxxstorm |
<p>I created a Kubernetes cluster on Google Cloud using:</p>
<pre><code>gcloud container clusters create my-app-cluster --num-nodes=1
</code></pre>
<p>Then I deployed my 3 apps (backend, frontend and a scraper) and created a load balancer. I used the following configuration file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-server
        image: gcr.io/my-app/server
        ports:
        - containerPort: 8009
        envFrom:
        - secretRef:
            name: my-app-production-secrets
      - name: my-app-scraper
        image: gcr.io/my-app/scraper
        ports:
        - containerPort: 8109
        envFrom:
        - secretRef:
            name: my-app-production-secrets
      - name: my-app-frontend
        image: gcr.io/my-app/frontend
        ports:
        - containerPort: 80
        envFrom:
        - secretRef:
            name: my-app-production-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: my-app-server-port
    protocol: TCP
    port: 8009
    targetPort: 8009
  - name: my-app-scraper-port
    protocol: TCP
    port: 8109
    targetPort: 8109
  - name: my-app-frontend-port
    protocol: TCP
    port: 80
    targetPort: 80
</code></pre>
<p>When typing <code>kubectl get pods</code> I get:</p>
<pre><code>NAME                                 READY   STATUS    RESTARTS   AGE
my-app-deployment-6b49c9b5c4-5zxw2   0/3     Pending   0          12h
</code></pre>
<p>When investigating in Google Cloud I see an "Unschedulable" state with an "insufficient cpu" error on the pod:</p>
<p><a href="https://i.stack.imgur.com/7boXc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7boXc.png" alt="Unschedulable state due to Insufficient cpu"></a></p>
<p>When going to Nodes section under my cluster in the Clusters page, I see 681 mCPU requested and 940 mCPU allocated:
<a href="https://i.stack.imgur.com/tLpKL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tLpKL.png" alt="enter image description here"></a></p>
<p>What is wrong? Why doesn't my pod start?</p>
| Naor | <p>Every container has a default CPU request (in GKE I’ve noticed it’s 0.1 CPU or 100m). Assuming these defaults you have three containers in that pod so you’re requesting another 0.3 CPU.</p>
<p>The node has 0.68 CPU (680m) requested by other workloads and a total limit (allocatable) on that node of 0.94 CPU (940m).</p>
<p>If you want to see what workloads are reserving that 0.68 CPU, you need to inspect the pods on the node. In the page on GKE where you see the resource allocations and limits per node, if you click the node it will take you to a page that provides this information.<br>
In my case I can see 2 pods of <code>kube-dns</code> taking 0.26 CPU each, amongst others. These are system pods that are needed to operate the cluster correctly. What you see will also depend on what add-on services you have selected, for example: HTTP Load Balancing (Ingress), Kubernetes Dashboard and so on.</p>
<p>Your pod would take the node's requested CPU to 0.98 (0.68 + 0.3), which is more than the 0.94 limit, and that is why your pod cannot start.</p>
<p>Note that the scheduling is based on the amount of CPU <em>requested</em> for each workload, not how much it actually uses, or the limit.</p>
<p>Your options:</p>
<ol>
<li>Turn off any add-on service which is taking CPU resource that you don't need.</li>
<li>Add more CPU resource to your cluster. To do that you will either need to change your node pool to use VMs with more CPU, or increase the number of nodes in your existing pool. You can do this in the GKE console or via the <code>gcloud</code> command line (a resize sketch follows the code example below).</li>
<li>Make explicit requests in your containers for less CPU that will override the defaults.</li>
</ol>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
containers:
- name: my-app-server
image: gcr.io/my-app/server
...
resources:
requests:
cpu: "50m"
- name: my-app-scraper
image: gcr.io/my-app/scraper
...
resources:
requests:
cpu: "50m"
- name: my-app-frontend
image: gcr.io/my-app/frontend
...
resources:
requests:
cpu: "50m"
</code></pre>
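<p>For option 2, a minimal sketch using <code>gcloud</code> (assuming the cluster name from the question and the default node pool; the node count is illustrative and flag names can vary by <code>gcloud</code> version):</p>
<pre><code>gcloud container clusters resize my-app-cluster --num-nodes=2
</code></pre>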
| Paul Annetts |
<p>I have 3 services in my ingress; the first 2 use the <code>default</code> namespace. The third service is the <strong>prometheus-server</strong> service, which is in the <code>ingress-nginx</code> namespace.
Now, I want to map my prometheus DNS to the service, but I am getting an error because the ingress can't find the prometheus service in the <code>default</code> namespace.</p>
<p>How to deal with non-default namespace in ingress definition?</p>
| Justinus Hermawan | <p>You will need to refer to your service in the other namespace with its full path, that is <code>prometheus-server.ingress-nginx.svc.cluster.local</code>.</p>
<p>You shouldn’t need a second Ingress to do this.</p>
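<p>As a sketch of one way to wire that full path in (the ExternalName indirection here is my assumption, not something from your setup): create an <code>ExternalName</code> service in the <code>default</code> namespace and point the Ingress at it.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: prometheus-server
  namespace: default
spec:
  type: ExternalName
  externalName: prometheus-server.ingress-nginx.svc.cluster.local
</code></pre>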
| Paul Annetts |
<p>I have a private Docker image registry running on a Linux VM (10.78.0.228:5000) and a Kubernetes master running on a different VM running Centos Linux 7.</p>
<p>I used the below command to create a POD:<br>
<code>kubectl create --insecure-skip-tls-verify -f monitorms-rc.yml</code></p>
<p>I get this:</p>
<blockquote>
<p>sample monitorms-mmqhm 0/1 ImagePullBackOff 0 8m</p>
</blockquote>
<p>and upon running:
<code>kubectl describe pod monitorms-mmqhm --namespace=sample</code></p>
<blockquote>
<p>Warning Failed Failed to pull image "10.78.0.228:5000/monitorms":
Error response from daemon: {"message":"Get
<a href="https://10.78.0.228:5000/v1/_ping" rel="noreferrer">https://10.78.0.228:5000/v1/_ping</a>: x509: certificate signed by unknown
authority"}</p>
</blockquote>
<p>Isn't Kubernetes supposed to ignore the server certificate for all operations during POD creation when the <code>--insecure-skip-tls-verify</code> is passed?</p>
<p>If not, how do I make it ignore the tls verification while pulling the docker image?</p>
<p><strong>PS:</strong></p>
<p><strong>Kubernetes version :</strong></p>
<p>Client Version: <code>v1.5.2</code>
Server Version: <code>v1.5.2</code></p>
<p>I have raised this issue here: <a href="https://github.com/kubernetes/kubernetes/issues/43924" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/43924</a></p>
| Rushil Paul | <p>The issue you're seeing is actually a docker issue. Using <code>--insecure-skip-tls-verify</code> is a valid arg to <code>kubectl</code>, but it only deals with the connection between <code>kubectl</code> and the kubernetes API server. The error you're seeing is actually because the docker daemon cannot log in to the private registry, because the cert it's using is unsigned.</p>
<p>Have a look at the <a href="https://docs.docker.com/registry/insecure/" rel="noreferrer">Docker insecure registry docs</a> and this should solve your problem.</p>
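<p>As a minimal sketch (assuming Docker's default configuration path; the registry address is taken from your question), you would mark the registry as insecure on every node that pulls from it, then restart the Docker daemon:</p>
<pre><code># /etc/docker/daemon.json on each Kubernetes node
{
  "insecure-registries": ["10.78.0.228:5000"]
}

# then restart the daemon
sudo systemctl restart docker
</code></pre>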
| jaxxstorm |
<p>I would like to be able to deploy the AWS EFS CSI Driver Helm chart hosted at <a href="https://kubernetes-sigs.github.io/aws-efs-csi-driver/" rel="nofollow noreferrer">AWS EFS SIG Repo</a> using Pulumi. With Source from <a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver" rel="nofollow noreferrer">AWS EFS CSI Driver Github Source</a>. I would like to avoid having almost everything managed with Pulumi except this one part of my infrastructure.</p>
<p>Below is the TypeScript class I created to manage interacting with the k8s.helm.v3.Release class:</p>
<pre class="lang-js prettyprint-override"><code>import * as k8s from '@pulumi/kubernetes';
import * as eks from '@pulumi/eks';
export default class AwsEfsCsiDriverHelmRepo extends k8s.helm.v3.Release {
constructor(cluster: eks.Cluster) {
super(`aws-efs-csi-driver`, {
chart: `aws-efs-csi-driver`,
version: `1.3.6`,
repositoryOpts: {
repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
},
namespace: `kube-system`,
}, { provider: cluster.provider });
}
}
</code></pre>
<p>I've tried several variations on the above code, chopping off the <code>-driver</code> in the name, removing <code>aws-cfs-csi-driver</code> from the <code>repo</code> property, changing to <code>latest</code> for the version.</p>
<p>When I do a <code>pulumi up</code> I get: <code>failed to pull chart: chart "aws-efs-csi-driver" version "1.3.6" not found in https://kubernetes-sigs.github.io/aws-efs-csi-driver/ repository</code></p>
<pre class="lang-sh prettyprint-override"><code>$ helm version
version.BuildInfo{Version:"v3.7.0", GitCommit:"eeac83883cb4014fe60267ec6373570374ce770b", GitTreeState:"clean", GoVersion:"go1.16.8"}
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ pulumi version
v3.24.1
</code></pre>
| Gary | <p>You're using the wrong version in your chart invocation.</p>
<p>The version you're selecting is the application version, i.e. the release version of the underlying application. You need to set the chart version; see <a href="https://helm.sh/docs/topics/charts/#charts-and-versioning" rel="nofollow noreferrer">here</a>, which is defined <a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/release-1.3/charts/aws-efs-csi-driver/Chart.yaml#L3" rel="nofollow noreferrer">here</a>.</p>
<p>the following works:</p>
<pre class="lang-js prettyprint-override"><code>const csiDrive = new kubernetes.helm.v3.Release("csi", {
chart: `aws-efs-csi-driver`,
version: `2.2.3`,
repositoryOpts: {
repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
},
namespace: `kube-system`,
});
</code></pre>
<p>If you want to use the existing code you have, try this:</p>
<pre class="lang-js prettyprint-override"><code>import * as k8s from '@pulumi/kubernetes';
import * as eks from '@pulumi/eks';
export default class AwsEfsCsiDriverHelmRepo extends k8s.helm.v3.Release {
constructor(cluster: eks.Cluster) {
super(`aws-efs-csi-driver`, {
chart: `aws-efs-csi-driver`,
version: `2.2.3`,
repositoryOpts: {
repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
},
namespace: `kube-system`,
}, { provider: cluster.provider });
}
}
</code></pre>
| jaxxstorm |
<p>I have an argocd ApplicationSet created. I have the following merge keys setup:</p>
<pre><code> generators:
- merge:
mergeKeys:
- path
generators:
- matrix:
generators:
- git:
directories:
- path: aws-ebs-csi-driver
- path: cluster-autoscaler
repoURL: >-
...
revision: master
- clusters:
selector:
matchLabels:
argocd.argoproj.io/secret-type: cluster
- list:
elements:
- path: aws-ebs-csi-driver
namespace: system
- path: cluster-autoscaler
namespace: system
</code></pre>
<p>Syncing the application set however generates:</p>
<pre><code> - lastTransitionTime: "2022-08-08T21:54:05Z"
message: the parameters from a generator were not unique by the given mergeKeys,
Merge requires all param sets to be unique. Duplicate key was {"path":"aws-ebs-csi-driver"}
reason: ApplicationGenerationFromParamsError
status: "True"
</code></pre>
<p>Any help is appreciated.</p>
| sebastian | <p>The matrix generator is producing one set of parameters for each combination of directory and cluster.</p>
<p>If there is more than one cluster, then there will be one parameter set with <code>path: aws-ebs-csi-driver</code> for each cluster.</p>
<p>The merge generator requires that each parameter used as a merge key be completely unique. That mode was the original design of the merge generator, but more modes may be supported in the future.</p>
<p>Argo CD v2.5 will support <a href="https://argo-cd.readthedocs.io/en/latest/operator-manual/applicationset/GoTemplate/" rel="nofollow noreferrer">go templated ApplicationSets</a>, which might provide an easier way to solve your problem.</p>
| crenshaw-dev |
<p>I want to reference the label's value in VirtualService's spec section inside k8s yaml file. I use ${metadata.labels[component]} to indicate the positions below. Is there a way to implement my idea?</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: istio-ingress-version
namespace: netops
labels:
component: version
spec:
hosts:
- "service.api.com"
gateways:
- public-inbound-gateway
http:
- match:
- uri:
prefix: /${metadata.labels[component]}/
headers:
referer:
regex: ^https://[^\s/]*a.api.com[^\s]*
rewrite:
uri: "/"
route:
- destination:
host: ${metadata.labels[component]}.3da.svc.cluster.local
- match:
- uri:
prefix: /${metadata.labels[component]}/
headers:
referer:
regex: ^https://[^\s/]*b.api.com[^\s]*
rewrite:
uri: "/"
route:
- destination:
host: ${metadata.labels[component]}.3db.svc.cluster.local
- match:
- uri:
prefix: /${metadata.labels[component]}/
rewrite:
uri: "/"
route:
- destination:
host: ${metadata.labels[component]}.3db.svc.cluster.local
</code></pre>
| Jeffrey | <p>This isn't a capability of Kubernetes itself; however, other tools exist that can help you with this scenario.</p>
<p>The main one of these is <a href="https://docs.helm.sh" rel="nofollow noreferrer">Helm</a>. It allows you to create variables that can be shared across several different YAML files, allowing you to share values or even fully parameterise your deployment.</p>
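<p>As a minimal sketch of what that could look like (assuming a Helm chart with a <code>component</code> value; the file names are illustrative):</p>
<pre><code># values.yaml
component: version
</code></pre>
<pre><code># templates/virtualservice.yaml (excerpt)
http:
  - match:
      - uri:
          prefix: /{{ .Values.component }}/
    route:
      - destination:
          host: {{ .Values.component }}.3da.svc.cluster.local
</code></pre>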
| Paul Annetts |
<p>I want to create a Security Policy in my Google Kubernetes setup such that Adaptive Protection is enabled against DDoS attacks on my application layer.</p>
<p>Reading pulumi documents, this is what I came up with:</p>
<pre><code>ddos_layer7_defense_policy_name = "ddos-layer7-defense-policy"
ddos_layer7_defense_policy = gcp.compute.SecurityPolicy(
resource_name=ddos_layer7_defense_policy_name,
description="Policy for enabling DDoS defence on L7",
name=ddos_layer7_defense_policy_name,
adaptive_protection_config=gcp.compute.SecurityPolicyAdaptiveProtectionConfigArgs(
layer7_ddos_defense_config=gcp.compute.SecurityPolicyAdaptiveProtectionConfigLayer7DdosDefenseConfigArgs(
enable=False, # enable DDoS defense
rule_visibility="STANDARD"
)
)
)
</code></pre>
<p>I read the <a href="https://www.pulumi.com/registry/packages/gcp/api-docs/compute/securitypolicy/#securitypolicyadaptiveprotectionconfiglayer7ddosdefenseconfig" rel="nofollow noreferrer">official documents</a>, and while they also denote <code>enable=True</code> as the first argument, my local Pulumi library (the one installed in the virtualenv) does not have that <code>kwarg</code>. However, when I look at the code, I can see the two flags being very much present.</p>
<p>Still, I get the invalid key error:</p>
<pre><code> error: gcp:compute/securityPolicy:SecurityPolicy resource 'ddos-layer7-defense-policy' has a problem: Invalid or unknown key. Examine values at 'SecurityPolicy.AdaptiveProtectionConfig.Layer7DdosDefenseConfig'.
</code></pre>
<p>Reading the source code is not helping either, as the signature matches what I am providing.</p>
<p>This problem is also unsolved by people working on pulumi, such as <a href="https://archive.pulumi.com/t/2764528/did-anyone-try-to-configure-cloud-armour-https-www-pulumi-co#62c18e79-b3f0-41d9-bb4f-7e84f8fc483d" rel="nofollow noreferrer">this</a>.</p>
<h1>Update 1: Setting <code>enable=True</code> and removing <code>rule_visibility</code> produces the same result.</h1>
| Aviral Srivastava | <p>The pulumi-gcp provider is derived from the Google Terraform provider.</p>
<p>There was a <a href="https://github.com/hashicorp/terraform-provider-google/issues/12554" rel="nofollow noreferrer">bug in this resource</a> in the Terraform provider which meant that it wasn't possible to manage these resources properly because the resource properties weren't being correctly sent to the API.</p>
<p>This was fixed in <a href="https://github.com/hashicorp/terraform-provider-google/pull/12661" rel="nofollow noreferrer">this</a> PR which was merged into <a href="https://github.com/hashicorp/terraform-provider-google/releases/tag/v4.39.0" rel="nofollow noreferrer">v4.39.0</a> of the Terraform provider.</p>
<p>This then propagated to the Pulumi provider in <a href="https://github.com/pulumi/pulumi-gcp/releases/tag/v6.40.0" rel="nofollow noreferrer">v6.40.0</a>.</p>
<p>It's likely you're not using <code>>6.40.0</code> of the Pulumi provider, which is why you're experiencing this issue. Try upgrading, and then reattempt.</p>
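<p>As a sketch of the upgrade (assuming the Python SDK from your snippet; the version pin is illustrative):</p>
<pre><code>pip install --upgrade "pulumi-gcp>=6.40.0"
pulumi up
</code></pre>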
| jaxxstorm |
<p>I need to supply a ConfigGroup with yaml. The yaml needs to be populated with values from Pulumi Outputs. My problem is that the yaml field in the code below only takes strings, and I cannot figure out a way to create this string from Outputs.</p>
<p>For example, imagine taking an Id of sorts from an Output and replacing foo.</p>
<p>Any ideas?</p>
<pre><code>import * as k8s from "@pulumi/kubernetes";
const example = new k8s.yaml.ConfigGroup("example", {
yaml: `
apiVersion: v1
kind: Namespace
metadata:
name: foo
`,
})
</code></pre>
| TomHells | <p>Any time you're needing to use an output value inside a string, you'll need to make sure the output is resolved using an <code>apply</code>.</p>
<p>In your case, it'd look a little bit like this:</p>
<pre class="lang-js prettyprint-override"><code>import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";
const nsName = pulumi.output("foo")
nsName.apply(name => {
const example = new k8s.yaml.ConfigGroup("example", {
yaml: `
apiVersion: v1
kind: Namespace
metadata:
name: ${name}
`
})
})
</code></pre>
<p>However, it's usually not desirable to create resources inside an apply like this, because it means previews aren't accurate. If you're just trying to create a namespace, it'd be highly recommended to use the kubernetes provider's namespace resource instead of <code>ConfigGroup</code> (although I recognise you're probably trying to get a working example for something else).</p>
<p>You might consider using <a href="https://www.pulumi.com/kube2pulumi/" rel="nofollow noreferrer">kube2pulumi</a> to convert whatever YAML you're installing, because once you're using the standard types, you can pass outputs to fields easily, like so:</p>
<pre class="lang-js prettyprint-override"><code>const newNs = new k8s.core.v1.Namespace("example", {
metadata: {
name: nsName
}
})
</code></pre>
| jaxxstorm |
<p>I am exploring Argo to orchestrate processing big data. I wish to kick off a workflow via REST call that divides a large data set among a number of machines with desired resources for processing. From an architectural perspective, how would I accomplish this? Is there a REST API or maybe some libraries for Node.js that I can use?</p>
| afriedman111 | <p>Argo 2.5 <a href="https://blog.argoproj.io/argo-workflows-v2-5-released-ce7553bfd84c" rel="nofollow noreferrer">introduces its own API</a>.</p>
<p>There are currently officially-supported <a href="https://github.com/argoproj/argo-workflows/blob/master/pkg/apiclient/apiclient.go" rel="nofollow noreferrer">Golang</a> and <a href="https://github.com/argoproj-labs/argo-client-java" rel="nofollow noreferrer">Java</a> clients. There is also a community-supported <a href="https://github.com/CermakM/argo-client-python" rel="nofollow noreferrer">Python</a> client. Updates will be available here: <a href="https://github.com/argoproj-labs/argo-client-gen" rel="nofollow noreferrer">https://github.com/argoproj-labs/argo-client-gen</a></p>
<p>Argo provides Swagger API specs, so it should be reasonably easy to generate clients for other languages.</p>
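<p>As a rough sketch of talking to the API directly (assuming an argo-server on its default port 2746; the exact request bodies are described by the Swagger specs, and the request file name here is hypothetical):</p>
<pre><code># list workflows in the "argo" namespace
curl http://localhost:2746/api/v1/workflows/argo

# submit a workflow; the JSON body wraps the manifest, e.g. {"workflow": {...}}
curl -X POST http://localhost:2746/api/v1/workflows/argo \
  -H 'Content-Type: application/json' \
  -d @create-workflow-request.json
</code></pre>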
| crenshaw-dev |
<p>I want to create a secret with Pulumi in TypeScript;
it should contain the following data:</p>
<pre><code> remote_write:
- url: "example.com"
basic_auth:
username: "user"
password: "XXX"
</code></pre>
<p>the code looks like:</p>
<pre><code> const databaseSecret = new k8s.core.v1.Secret(
"secret-config",
{
data: {
remote_write: [
{
url: "example.com",
basic_auth:
{
username: "user",
password: "XXX",
}
}
],
}
},
k8sOpts
);
</code></pre>
<p>But this shows the follwing error message:</p>
<blockquote>
<p>"Type '{ url: string; basic_auth: { username: string; password:
string; }; }[]' is not assignable to type 'Input'"</p>
</blockquote>
<p>I don't know how I can fix this.
How do I get such nested data into a secret?</p>
| robolott | <p>There are a couple of problems here.</p>
<p>Firstly: A Kubernetes secret takes an input with a key name, and then some string data. You're passing the key name as <code>remote_write</code> and then trying to pass a TypeScript object - you need to stringify it first. You can do take advantage of YAML being a superset of JSON to handle this:</p>
<pre class="lang-js prettyprint-override"><code>let secret = [
{
url: "example.com",
basic_auth: {
username: "user",
password: "XXX",
},
},
];
const databaseSecret = new k8s.core.v1.Secret("secret-config", {
data: {
remote_write: JSON.stringify(secret),
},
});
</code></pre>
<p>However, there's an additional problem here: Kubernetes expects objects in secrets to be base64 encoded, so you'll need to encode it first:</p>
<pre><code>const databaseSecret = new k8s.core.v1.Secret("secret-config", {
data: {
remote_write: Buffer.from(JSON.stringify(secret), 'binary').toString('base64'),
},
});
</code></pre>
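<p>As a side note (my addition): the <code>stringData</code> field accepts plain strings and lets Kubernetes do the base64 encoding for you, so this should be equivalent:</p>
<pre><code>const databaseSecret = new k8s.core.v1.Secret("secret-config", {
    stringData: {
        remote_write: JSON.stringify(secret),
    },
});
</code></pre>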
| jaxxstorm |
<p>I get the following error message whenever I run a pulumi command. I verified that my kubeconfig file is <code>apiVersion: v1</code>. I updated <code>client.authentication.k8s.io/v1alpha1</code> to <code>client.authentication.k8s.io/v1beta1</code> and still have the issue. What could be the reason for this error message?</p>
<pre><code>Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
</code></pre>
| Kaizendae | <p>The bug report for this issue is <a href="https://github.com/pulumi/pulumi-eks/issues/599" rel="noreferrer">here</a></p>
<p>The underlying cause is that the AWS cli shipped a breaking change in a minor version release. You can see this <a href="https://github.com/aws/aws-cli/issues/6920" rel="noreferrer">here</a></p>
<p>I'm assuming here you're using the <code>pulumi-eks</code> package in order to provision an EKS cluster greater than <code>v1.22</code>. The EKS package uses a resource provider to configure some EKS resources like the <code>aws-auth</code> config map, and this isn't the same transient kubeconfig you're referring to in <code>~/.kube/config</code></p>
<p>In order to fix this, you need to do the following:</p>
<ul>
<li>Ensure your <code>aws-cli</code> version is greater than <code>1.24.0</code> or <code>2.7.0</code></li>
<li>Ensure you've updated your <code>pulumi-eks</code> package in your language SDK package manager to greater than <code>0.40.0</code>. This will also mean updating the provider in your existing stack (a quick version-check sketch follows this list).</li>
<li>Ensure you have the version of <code>kubectl</code> installed locally that matches your cluster version that has been provisioned</li>
</ul>
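<p>A quick way to sanity-check the first two points (a sketch, assuming the Node.js SDK; adjust for your package manager):</p>
<pre><code>aws --version
npm install @pulumi/eks@latest
kubectl version --client
</code></pre>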
| jaxxstorm |
<p>I deployed prometheus server (+ kube state metrics + node exporter + alertmanager) through the <a href="https://github.com/helm/charts/tree/master/stable/prometheus" rel="nofollow noreferrer">prometheus helm chart</a> using the chart's default values, including the chart's default <a href="https://github.com/helm/charts/blob/ead4e79279a972ec71f6a58dd04ef4491686efbc/stable/prometheus/values.yaml#L796" rel="nofollow noreferrer">scrape_configs</a>. The problem is that I expect certain metrics to be coming from a particular job but instead are coming from a different one.</p>
<p>For example, <code>node_cpu_seconds_total</code> is being provided by the <code>kubernetes-service-endpoints</code> job but I expect it to come from the <code>kubernetes-nodes</code> job, i.e. <code>node-exporter</code>. The returned metric's values are accurate but the problem is I don't have the labels that would normally come from <code>kubernetes-nodes</code> (since <code>kubernetes-nodes</code> job has <code>role: node</code> vs <code>role: endpoint</code> for <code>kubernetes-service-endpoints</code>. I need these missing labels for advanced querying + dashboards.</p>
<p>Output of <code>node_cpu_seconds_total{mode="idle"}</code>:</p>
<p><code>
node_cpu_seconds_total{app="prometheus",chart="prometheus-7.0.2",component="node-exporter",cpu="0",heritage="Tiller",instance="10.80.20.46:9100",job="kubernetes-service-endpoints",kubernetes_name="get-prometheus-node-exporter",kubernetes_namespace="default",mode="idle",release="get-prometheus"} | 423673.44
node_cpu_seconds_total{app="prometheus",chart="prometheus-7.0.2",component="node-exporter",cpu="0",heritage="Tiller",instance="10.80.20.52:9100",job="kubernetes-service-endpoints",kubernetes_name="get-prometheus-node-exporter",kubernetes_namespace="default",mode="idle",release="get-prometheus"} | 417097.16
</code></p>
<p>There are no errors in the logs and I do have other <code>kubernetes-nodes</code> metrics such as <code>up</code> and <code>storage_operation_errors_total</code> so <code>node-exporter</code> is getting scraped.</p>
<p>I also verified manually that <code>node-exporter</code> has this particular metric, <code>node_cpu_seconds_total</code>, with <code>curl <node IP>:9100/metrics | grep node_cpu</code> and it has results.</p>
<p>Does the job order definition matter? Would one job override the other's metrics if they have the same name? Should I be dropping metrics for the <code>kubernetes-service-endpoints</code> job? I'm new to prometheus so any detailed help is appreciated.</p>
| ravishi | <p>I was able to figure out how to add the "missing" labels by navigating to the prometheus service-discovery status UI page. This page shows all the "Discovered Labels" that can be processed and kept through relabel_configs. What is processed/kept is shown next to "Discovered Labels" under "Target Labels". So then it was just a matter of modifying the <code>kubernetes-service-endpoints</code> job config in <code>scrape_configs</code> so that I could add more target labels. Below is exactly what I changed in the chart's <code>scrape_configs</code>. With this new config, I get <code>namespace</code>, <code>service</code>, <code>pod</code>, and <code>node</code> added to all metrics if the metric didn't already have them (see <code>honor_labels</code>).</p>
<pre><code> - job_name: 'kubernetes-service-endpoints'
+ honor_labels: true
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
- target_label: kubernetes_namespace
+ target_label: namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
- target_label: kubernetes_name
+ target_label: service
+ - source_labels: [__meta_kubernetes_pod_name]
+ action: replace
+ target_label: pod
+ - source_labels: [__meta_kubernetes_pod_node_name]
+ action: replace
+ target_label: node
</code></pre>
| ravishi |
<p>I want to manage different clusters of k8s,<br>
one called <code>production</code> for prod deployments,<br>
and another one called <code>staging</code> for other deployments and configurations.</p>
<p>How can I connect <code>helm</code> to the tiller in those 2 different clusters?<br>
Assume that I already have <code>tiller</code> installed and I have a configured ci pipeline.</p>
| itaied | <p>Helm will connect to the same cluster that <code>kubectl</code> is pointing to.</p>
<p>By setting multiple <code>kubectl</code> contexts and changing them with <code>kubectl config use-context [environment]</code> you can achieve what you want.</p>
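<p>For example (a sketch; the release and chart names are placeholders):</p>
<pre><code>kubectl config use-context staging
helm upgrade --install my-release ./my-chart

kubectl config use-context production
helm upgrade --install my-release ./my-chart
</code></pre>
<p>Helm also accepts a <code>--kube-context</code> flag if you prefer to target a context per command instead of switching globally.</p>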
<p>Of course you will need to set appropriate HELM_ environment values in your shell for each cluster including TLS certificates if you have them enabled.</p>
<p>Also it’s worth taking steps so that you don’t accidentally deploy to the wrong cluster by mistake.</p>
| Paul Annetts |
<p>I want, in one command with <strong>args</strong>, to configure a <code>kubeconfig</code> that is able to connect to a k8s cluster.</p>
<p>I tried the following which does not work.</p>
<pre><code>cfg:
mkdir ~/.kube
kube: cfg
touch config $(ARGS)
</code></pre>
<p>In the <strong>args the user</strong> should pass the config file <strong>content</strong> of the cluster (kubeconfig). </p>
<p>If there is a shorter way please let me know.</p>
<p><strong>update</strong></p>
<p>I've used the following which (from the answer) is partially solve the issue.</p>
<pre><code>kube: cfg
case "$(ARGS)" in \
("") printf "Please provide ARGS=/some/path"; exit 1;; \
(*) cp "$(ARGS)" /some/where/else;; \
esac
</code></pre>
<p>The problem is that <code>cfg</code> creates the dir even when the user does not provide the args, so on the <strong>second</strong> run (when the path is provided) the dir already exists and you get an error. Is there a way to avoid it? <strong>Something like: if the arg is not provided, don't run cfg.</strong></p>
| NSS | <p>I assume the user input is the pathname of a file. The <code>make</code> utility can take variable assignments as arguments, in the form of <code>make NAME=VALUE</code>. You refer to these in your <code>Makefile</code> as usual, with <code>$(NAME)</code>. So something like</p>
<pre><code>kube: cfg
case "$(ARGS)" in \
("") printf "Please provide ARGS=/some/path"; exit 1;; \
(*) cp "$(ARGS)" /some/where/else;; \
esac
</code></pre>
<p>called with</p>
<pre><code>make ARGS=/some/path/file kube
</code></pre>
<p>would then execute <code>cp /some/path/file /some/where/else</code>. If that is not what you were asking, please rephrase the question, providing exact details of what you want to do.</p>
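<p>Regarding the update about <code>cfg</code> failing on the second run (an aside): making the directory creation idempotent with <code>mkdir -p</code> avoids the error when the directory already exists.</p>
<pre><code>cfg:
	mkdir -p ~/.kube   # recipe line must start with a tab
</code></pre>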
| Jens |
<p>We are using a 3-master / 4-worker cluster setup. Recently, due to disk pressure, we had to add another worker to our cluster, and we would like to redistribute some of the existing persistent volume claims to the new worker.</p>
<p>Right now,</p>
<p>2 out of 4 Longhorn nodes are not schedulable due to insufficient disk space. I can cordon these 2 unschedulable nodes and manually delete some of the PVCs on them in order to re-create them on the new worker, but I was wondering if there is a way to automate this process. I think the provided image helps explain my question and what I'm trying to do. Thanks for your answers already!</p>
<p>Longhorn version : 0.8
<a href="https://i.stack.imgur.com/KKX4d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KKX4d.png" alt="enter image description here" /></a></p>
| Çağdaş Özgür | <p>I think it’s better for you to just create a new replica instead of deleting your PV/PVC.
Click on the longhorn volume that you want to move around to a different node (using the longhorn dashboard, in the volume menu).</p>
<p>In the context menu there should be option to <em>Update Replicas Count</em> if your volume is attached.</p>
<p>Increase the replica so longhorn create more replica in the healthy node. Wait for it to finish rebuilding. Let’s say previously you have 2 replicas, now you have 3 (one new replica in the healthy node). Reduce the replica count again to 2. Then delete your replica in the node you want to cordon.</p>
<p>So this way:</p>
<ul>
<li>No need to delete your PV/PVC then reattach</li>
<li>Zero downtime</li>
</ul>
<p>Also, longhorn has reached version 1 by now. It’s a good idea to upgrade.</p>
| lucernae |
<p>I'm developing an application in ASP.NET Core 2.1, and running it on a Kubernetes cluster. I've implemented authentication using OpenIDConnect, using Auth0 as my provider.</p>
<p>This all works fine. Actions or controllers marked with the <code>[Authorize]</code> attribute redirect anonymous user to the identity provider, they log in, redirects back, and Bob's your uncle. </p>
<p>The problems start occurring when I scale my deployment to 2 or more containers. When a user visits the application, they log in, and depending on what container they get served during the callback, authentication either succeeds or fails. Even in the case of authentication succeeding, repeatedly F5-ing will eventually redirect to the identity provider when the user hits a container they aren't authorized on.</p>
<p>My train of thought on this would be that, using cookie authentication, the user stores a cookie in their browser, that gets passed along with each request, the application decodes it and grabs the JWT, and subsequently the claims from it, and the user is authenticated. This makes the whole thing stateless, and therefore should work regardless of the container servicing the request. As described above however, it doesn't appear to actually work that way. </p>
<p>My configuration in <code>Startup.cs</code> looks like this:</p>
<pre><code>services.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
options.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
options.DefaultChallengeScheme = CookieAuthenticationDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect("Auth0", options =>
{
options.Authority = $"https://{Configuration["Auth0:Domain"]}";
options.ClientId = Configuration["Auth0:ClientId"];
options.ClientSecret = Configuration["Auth0:ClientSecret"];
options.ResponseType = "code";
options.Scope.Clear();
options.Scope.Add("openid");
options.Scope.Add("profile");
options.Scope.Add("email");
options.TokenValidationParameters = new TokenValidationParameters
{
NameClaimType = "name"
};
options.SaveTokens = true;
options.CallbackPath = new PathString("/signin-auth0");
options.ClaimsIssuer = "Auth0";
options.Events = new OpenIdConnectEvents
{
OnRedirectToIdentityProviderForSignOut = context =>
{
var logoutUri =
$"https://{Configuration["Auth0:Domain"]}/v2/logout?client_id={Configuration["Auth0:ClientId"]}";
var postLogoutUri = context.Properties.RedirectUri;
if (!string.IsNullOrEmpty(postLogoutUri))
{
if (postLogoutUri.StartsWith("/"))
{
var request = context.Request;
postLogoutUri = request.Scheme + "://" + request.Host + request.PathBase +
postLogoutUri;
}
logoutUri += $"&returnTo={Uri.EscapeDataString(postLogoutUri)}";
}
context.Response.Redirect(logoutUri);
context.HandleResponse();
return Task.CompletedTask;
},
OnRedirectToIdentityProvider = context =>
{
context.ProtocolMessage.SetParameter("audience", "https://api.myapp.com");
// Force the scheme to be HTTPS, otherwise we end up redirecting back to HTTP in production.
// They should seriously make it easier to make Kestrel serve over TLS in the same way ngninx does...
context.ProtocolMessage.RedirectUri = context.ProtocolMessage.RedirectUri.Replace("http://",
"https://", StringComparison.OrdinalIgnoreCase);
Debug.WriteLine($"RedirectURI: {context.ProtocolMessage.RedirectUri}");
return Task.FromResult(0);
}
};
});
</code></pre>
<p>I've spent hours trying to address this issue, and came up empty. The only thing I can think of that could theoretically work now is using sticky load balancing, but that's more applying a band-aid than actually fixing the problem.</p>
<p>One of the main reasons to use Kubernetes is its resilience and ability to handle scaling very well. As it stands, I can only scale my backing services, and my main application would have to run as a single pod. That's far from ideal.</p>
<p>Perhaps there is some mechanism somewhere that creates affinity with a specific instance that I'm not aware of?</p>
<p>I hope someone can point me in the right direction.</p>
<p>Thanks!</p>
| aevitas | <p>The cookie issued by authentication is encrypted via Data Protection. Data Protection by default is scoped to a particular application, or instance thereof. If you need to share an auth cookie between instances, you need to ensure that the data protection keys are persisted to a common location and that the application name is the same.</p>
<pre><code>services.AddDataProtection()
.PersistKeysToFileSystem(new DirectoryInfo(@"\\server\share\directory\"))
.SetApplicationName("MyApp");
</code></pre>
<p>You can find more info in the <a href="https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview" rel="noreferrer">docs</a>.</p>
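<p>In a Kubernetes setup a shared file path can be awkward; one common alternative (my suggestion, not tied to your setup) is persisting the keys to Redis. In newer ASP.NET Core versions the package is <code>Microsoft.AspNetCore.DataProtection.StackExchangeRedis</code> (older 2.x releases used <code>Microsoft.AspNetCore.DataProtection.Redis</code> with <code>PersistKeysToRedis</code>). A sketch:</p>
<pre><code>// using StackExchange.Redis; "redis:6379" is a hypothetical service address
var redis = ConnectionMultiplexer.Connect("redis:6379");
services.AddDataProtection()
    .PersistKeysToStackExchangeRedis(redis, "DataProtection-Keys")
    .SetApplicationName("MyApp");
</code></pre>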
| Chris Pratt |
<p>I'm looking to configure Redis for Sidekiq and Rails in k8s. Using Google Cloud Memory Store with an IP address. </p>
<p>I have a helm template like the following (with gcpRedisMemorystore specified separately) - My question is what does the Service object add to the system? Is it necessary or does the Endpoint provide all the needed access?</p>
<p>charts/app/templates/app-memorystore.service.yaml</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: app-memorystore
spec:
type: ClusterIP
clusterIP: None
ports:
- name: redis
port: {{ .Values.gcpredis.port }}
protocol: TCP
---
kind: Endpoints
apiVersion: v1
metadata:
name: app-memorystore
subsets:
- addresses:
- ip: {{ .Values.gcpredis.ip }}
ports:
- port: {{ .Values.gcpredis.port }}
name: redis
protocol: TCP
</code></pre>
| stujo | <p>Yes, you still need it.</p>
<p>Generally speaking, the Service is the name which is consumed by applications to connect to an Endpoint. Usually, a Service with a selector will automatically create a corresponding endpoint with the IP addresses of the Pods found by the selector.</p>
<p>When you define a Service without a selector, you need to create a corresponding Endpoints object of the same name so the Service has somewhere to go. This bit of information is in the documentation but a bit buried. At <a href="https://kubernetes.io/docs/concepts/services-networking/service/#without-selectors" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#without-selectors</a> it is mentioned in the second bullet point for headless services without selectors:</p>
<blockquote>
<p>For headless services that do not define selectors, the endpoints controller does not create Endpoints records. However, the DNS system looks for and configures either:</p>
<ul>
<li>CNAME records for ExternalName-type services.</li>
<li>A records for any Endpoints that share a name with the service, for all other types.</li>
</ul>
</blockquote>
| Andy Shinn |
<p>I am creating a kube cluster with GKE in terraform. I am creating the cluster from two modules, a cluster module and a nodepool module. I'd like to create a module for the master_authorized_networks_config so that each time a new cidr is added to it terraform doesn't destroy the original cluster. Is this possible? This is an example of the block of code that's in my cluster module that I would like to change without destroying the whole cluster. These IP addresses are what allow access to the cluster.</p>
<pre><code>master_authorized_networks_config {
cidr_blocks {
cidr_block = "123.456.789/32"
display_name = "megacorp-1-nat1"
}
cidr_blocks {
cidr_block = "34.69.69.69/32"
display_name = "megacorp-1-nat2"
}
cidr_blocks {
cidr_block = "123.456.333.333/32"
display_name = "vpn-test"
}
}
</code></pre>
<p>Adding another cidr destroys the original cluster. I don't want this.</p>
| TeeTee | <p>You can use <a href="https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks" rel="nofollow noreferrer">dynamic blocks</a> to achieve this.
In your terraform template, write:</p>
<pre><code>master_authorized_networks_config {
dynamic "cidr_blocks" {
for_each = var.authorized_networks
content {
cidr_block = cidr_blocks.key
display_name = cidr_blocks.value
}
}
}
</code></pre>
<p>Then you can put the allowed ip ranges in a variable of type map(string).
Example:</p>
<pre><code>authorized_networks= {
"14.170.140.29/32": "home",
"181.72.169.20/32": "office",
"181.73.147.199/32": "office Wi-Fi"
}
</code></pre>
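<p>For completeness, a sketch of the matching variable declaration (the description text is illustrative):</p>
<pre><code>variable "authorized_networks" {
  type        = map(string)
  description = "Authorized CIDR blocks, keyed by CIDR with display names as values"
  default     = {}
}
</code></pre>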
| Kristiaan |
<p>I followed this tutorial <a href="https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/</a></p>
<p>I got the error below when I try to create the pods:</p>
<pre><code>kubectl apply -k .
error: json: unknown field "metadata"
</code></pre>
<p>My kubectl version is as below:</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Below are some files that I created following the tutorial.</p>
<p>kustomization.yaml</p>
<pre><code>configMapGenerator:
- name: example-redis-config
files:
- redis-config
apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
containers:
- name: redis
image: redis:5.0.4
command:
- redis-server
- "/redis-master/redis.conf"
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: example-redis-config
items:
- key: redis-config
path: redis.conf
resources:
- redis-pod.yaml
</code></pre>
<p>redis-config </p>
<pre><code>maxmemory 2mb
maxmemory-policy allkeys-lru
</code></pre>
<p>redis-pod.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
containers:
- name: redis
image: redis:5.0.4
command:
- redis-server
- "/redis-master/redis.conf"
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: example-redis-config
items:
- key: redis-config
path: redis.conf
</code></pre>
<p>Any help?</p>
| wanghaoming | <p>I think you misinterpreted the <code>kustomization.yaml</code> instructions (which are confusing). You don't add the contents of <code>pods/config/redis-pod.yaml</code> to <code>kustomization.yaml</code>. You just download that file and add the <code>resources</code> snippet.</p>
<p>The resulting <code>kustomization.yaml</code> should look like:</p>
<pre><code>configMapGenerator:
- name: example-redis-config
files:
- redis-config
resources:
- redis-pod.yaml
</code></pre>
| Andy Shinn |
<p>Is it possible to send a http Rest request to another K8 Pod that belongs to the same Service in Kubernetes when Envoy is configured? </p>
<p><strong>Important</strong> : I have another question <a href="https://stackoverflow.com/questions/54410515/pod-to-pod-communication-within-a-service?r=SearchResults">here</a> that directed me to ask with Envoy specific tags.</p>
<p>E. G.
Service name = UserService , 2 Pods (replica = 2)</p>
<pre><code>Pod 1 --> Pod 2 //using pod ip not load balanced hostname
Pod 2 --> Pod 1
</code></pre>
<p>The connection is over Rest <code>GET 1.2.3.4:7079/user/1</code></p>
<p>The value for host + port is taken from <code>kubectl get ep</code></p>
<p>Both of the pod IPs work successfully outside of the pods, but when I do a <code>kubectl exec -it</code> into the pod and make the request via curl, it returns a 404 not found for the endpoint.</p>
<p><strong>Q</strong> What I would like to know is whether it is possible to make a request to another K8s Pod that is in the same Service.
<em>Answered: this is definitely possible</em>.</p>
<p><strong>Q</strong> Why am I able to get a successful <code>ping 1.2.3.4</code>, but not hit the Rest API? </p>
<p><strong>Q</strong> Is it possible to directly request a Pod IP from another Pod when Envoy is configured?</p>
<p>Please let me know what config files or output are needed to make progress, as I am a complete beginner with K8s. Thanks.</p>
<p><strong>below is my config files</strong></p>
<pre><code> #values.yml
replicaCount: 1
image:
repository: "docker.hosted/app"
tag: "0.1.0"
pullPolicy: Always
pullSecret: "a_secret"
service:
name: http
type: NodePort
externalPort: 7079
internalPort: 7079
ingress:
enabled: false
</code></pre>
<h1>deployment.yml</h1>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "app.fullname" . }}
labels:
app: {{ template "app.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: {{ template "app.name" . }}
release: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MY_POD_PORT
value: "{{ .Values.service.internalPort }}"
ports:
- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
httpGet:
path: /actuator/alive
port: {{ .Values.service.internalPort }}
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /actuator/ready
port: {{ .Values.service.internalPort }}
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
</code></pre>
<h1>service.yml</h1>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: {{ template "app.fullname" . }}
labels:
app: {{ template "app.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.externalPort }}
targetPort: {{ .Values.service.internalPort }}
protocol: TCP
name: {{ .Values.service.name }}
selector:
app: {{ template "app.name" . }}
release: {{ .Release.Name }}
</code></pre>
<h1>executed from master</h1>
<p><a href="https://i.stack.imgur.com/bizoF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bizoF.png" alt="executed from k8 master"></a> </p>
<h1>executed from inside a pod of the same MicroService</h1>
<p><a href="https://i.stack.imgur.com/LaMRn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LaMRn.png" alt="executed from inside a pod of the same MicroService"></a></p>
<p><strong>EDIT 2:</strong>
<strong>output from 'kubectl get -o yaml deployment '</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: 2019-01-29T20:34:36Z
generation: 1
labels:
app: msg-messaging-room
chart: msg-messaging-room-0.0.22
heritage: Tiller
release: msg-messaging-room
name: msg-messaging-room
namespace: default
resourceVersion: "25447023"
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/msg-messaging-room
uid: 4b283304-2405-11e9-abb9-000c29c7d15c
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app: msg-messaging-room
release: msg-messaging-room
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: msg-messaging-room
release: msg-messaging-room
spec:
containers:
- env:
- name: KAFKA_HOST
value: confluent-kafka-cp-kafka-headless
- name: KAFKA_PORT
value: "9092"
- name: MY_POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: MY_POD_PORT
value: "7079"
image: msg-messaging-room:0.0.22
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /actuator/alive
port: 7079
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: msg-messaging-room
ports:
- containerPort: 7079
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /actuator/ready
port: 7079
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: secret
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 2
conditions:
- lastTransitionTime: 2019-01-29T20:35:43Z
lastUpdateTime: 2019-01-29T20:35:43Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2019-01-29T20:34:36Z
lastUpdateTime: 2019-01-29T20:36:01Z
message: ReplicaSet "msg-messaging-room-6f49b5df59" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 2
replicas: 2
updatedReplicas: 2
</code></pre>
<p><strong>output from 'kubectl get -o yaml svc $the_service'</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2019-01-29T20:34:36Z
labels:
app: msg-messaging-room
chart: msg-messaging-room-0.0.22
heritage: Tiller
release: msg-messaging-room
name: msg-messaging-room
namespace: default
resourceVersion: "25446807"
selfLink: /api/v1/namespaces/default/services/msg-messaging-room
uid: 4b24bd84-2405-11e9-abb9-000c29c7d15c
spec:
clusterIP: 1.2.3.172.201
externalTrafficPolicy: Cluster
ports:
- name: http
nodePort: 31849
port: 7079
protocol: TCP
targetPort: 7079
selector:
app: msg-messaging-room
release: msg-messaging-room
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
| M_K | <p>What I posted on another question was: I disabled Istio injection before installing the service and then re-enabled it after installing the service, and now it's all working fine. The commands that worked for me were:</p>
<p><a href="https://i.stack.imgur.com/T8Wvm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T8Wvm.png" alt="enter image description here"></a></p>
| M_K |
<p>I deploy an ASP.NET Core application docker image to a Kubernetes cluster. My application uses NAudio to get a microphone stream from the user and send it to Google Speech-To-Text.</p>
<p>But after I deployed, getting the error below in Kubernetes logging:</p>
<blockquote>
<p>System.DllNotFoundException: Unable to load shared library
'Msacm32.dll' or one of its dependencies. In order to help diagnose
loading problems, consider setting the LD_DEBUG environment variable:
libMsacm32.dll: cannot open shared object file: No such file or
directory at NAudio.Wave.Compression.AcmInterop.acmStreamOpen2(IntPtr&
hAcmStream, IntPtr hAcmDriver, IntPtr sourceFormatPointer, IntPtr
destFormatPointer, WaveFilter waveFilter, IntPtr callback, IntPtr
instance, AcmStreamOpenFlags openFlags) at
NAudio.Wave.Compression.AcmStream..ctor(WaveFormat sourceFormat,
WaveFormat destFormat) at
NAudio.Wave.WaveFormatConversionProvider..ctor(WaveFormat
targetFormat, IWaveProvider sourceProvider) at
NAudio.Wave.WaveFormatConversionStream..ctor(WaveFormat targetFormat,
WaveStream sourceStream) at
Web.API.GoogleApi.GoogleSpeechSession.WriteBufferToStreamingContext(Byte[]
buffer) in /app/GoogleApi/GoogleSpeechSession.cs:line 385 at
Web.API.GoogleApi.GoogleSpeechSession.SubmitToGoogle(Byte[] buffer) in
/app/GoogleApi/GoogleSpeechSession.cs:line 406</p>
</blockquote>
<p>So, is there any way to deploy NAudio to Kubernetes, or do I have to change to another library?</p>
<p>Please help me if you know about it.
Thanks</p>
| Vo Dinh Duy | <p>Considering that Kubernetes only very recently began to support Windows containers, and that support is still so sketchy as to virtually make Windows containers still unsupported, I'd imagine you're running linux containers.</p>
<p>Something like an audio library is very often going to be platform-specific, using APIs provided by a particular operating system, drivers compatible only with a particular operating system, etc. I'd imagine that's the case here: your NAudio library only works in Windows. You need to find a library that is either cross-platform or will work in linux, if you're going to be using linux containers.</p>
<p>.NET Core 2.0 began allowing references to .NET Framework libraries as a convenience. There's tons of .NET Framework libraries and components out there, many of which are no longer updated, but yet are perfectly compatible with .NET Standard and thus .NET Core. However, being able to add the reference is not a guarantee that will will <em>actually</em> work, and in particular, work cross-platform.</p>
<p>For what it's worth, you should attempt to mimic your production environment as closely as possible in development. In particular here, if you're going to be deploying to linux containers in Kubernetes, then you should use linux containers in your dev environment (fully supported by Docker for Windows) and even actually use Kubernetes (built in to Docker for Windows).</p>
| Chris Pratt |
<p>For a .net core application, I need the internal IP address of the nginx ingress to trust the proxy and process its forwarded headers.</p>
<p>This is done with the following code in my application:</p>
<pre><code>forwardedHeadersOptions.KnownProxies.Add(IPAddress.Parse("10.244.0.16"));
</code></pre>
<p>Now it is hard-coded. But how can I get this IP address into an environment variable for my container?</p>
<p>It seems like the given IP address is the endpoint of the <code>ingress-nginx</code> service in the <code>ingress-nginx</code> namespace:</p>
<pre><code>❯ kubectl describe service ingress-nginx -n ingress-nginx
Name: ingress-nginx
Namespace: ingress-nginx
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/par...
Selector: app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type: LoadBalancer
IP: 10.0.91.124
LoadBalancer Ingress: 40.127.224.177
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30756/TCP
Endpoints: 10.244.0.16:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31719/TCP
Endpoints: 10.244.0.16:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32003
Events: <none>
</code></pre>
<p>FYI: this is my deployment:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: uwgazon-web
spec:
replicas: 1
paused: true
template:
metadata:
labels:
app: uwgazon-web
spec:
containers:
- name: uwgazon-web
image: uwgazon/web
ports:
- containerPort: 80
resources:
requests:
memory: "128Mi"
cpu: "250m"
limits:
memory: "256Mi"
cpu: "500m"
env:
- name: UWGAZON_RECAPTCHA__SITEKEY
valueFrom:
secretKeyRef:
name: uwgazon-recaptcha
key: client-id
- name: UWGAZON_RECAPTCHA__SERVERKEY
valueFrom:
secretKeyRef:
name: uwgazon-recaptcha
key: client-secret
- name: UWGAZON_MAILGUN__BASEADDRESS
valueFrom:
secretKeyRef:
name: uwgazon-mailgun
key: base-address
- name: UWGAZON_APPLICATIONINSIGHTS__INSTRUMENTATIONKEY
valueFrom:
secretKeyRef:
name: uwgazon-appinsights
key: instrumentationkey
- name: APPINSIGHTS_INSTRUMENTATIONKEY
valueFrom:
secretKeyRef:
name: uwgazon-appinsights
key: instrumentationkey
- name: UWGAZON_MAILGUN__APIKEY
valueFrom:
secretKeyRef:
name: uwgazon-mailgun
key: api-key
- name: UWGAZON_MAILGUN__TOADDRESS
valueFrom:
secretKeyRef:
name: uwgazon-mailgun
key: to-address
- name: UWGAZON_BLOG__NAME
valueFrom:
configMapKeyRef:
name: uwgazon-config
key: sitename
- name: UWGAZON_BLOG__OWNER
valueFrom:
configMapKeyRef:
name: uwgazon-config
key: owner
- name: UWGAZON_BLOG__DESCRIPTION
valueFrom:
configMapKeyRef:
name: uwgazon-config
key: description
</code></pre>
<p>And my ingress configuration</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: uwgazon-web-ingress
annotations:
cert-manager.io/issuer: "uwgazon-tls-issuer"
spec:
tls:
- hosts:
- uwgazon.sdsoftware.be
secretName: uwgazon-sdsoftware-be-tls
rules:
- host: uwgazon.sdsoftware.be
http:
paths:
- backend:
serviceName: uwgazon-web
servicePort: 80
</code></pre>
| Sander Declerck | <p>I found the solution to this, specific to ASP.NET Core.</p>
<p>First of all, you MUST whitelist the proxy, otherwise the forwarded headers middleware will not work.</p>
<p>I found out you can actually whitelist an entire network. That way, you are trusting everything inside your cluster. Kubernetes uses the 10.0.0.0/8 network (subnet mask 255.0.0.0). Trusting it can be done with the following code:</p>
<pre class="lang-cs prettyprint-override"><code>services.Configure<ForwardedHeadersOptions>(forwardedHeadersOptions =>
{
forwardedHeadersOptions.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
forwardedHeadersOptions.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("10.0.0.0"), 8));
});
</code></pre>
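<p>One detail worth spelling out (an addition on my part): the configured options only take effect once the forwarded-headers middleware is actually enabled, early in the request pipeline:</p>
<pre class="lang-cs prettyprint-override"><code>app.UseForwardedHeaders();
</code></pre>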
| Sander Declerck |
<p>I have a flask app with uwsgi and gevent.<br>
Here is my <code>app.ini</code>.
How could I write a readinessProbe and livenessProbe on Kubernetes to check the flask app?</p>
<pre><code>[uwsgi]
socket = /tmp/uwsgi.sock
chdir = /usr/src/app/
chmod-socket = 666
module = flasky
callable = app
master = false
processes = 1
vacuum = true
die-on-term = true
gevent = 1000
listen = 1024
</code></pre>
| Rukeith | <p>I think what you are really asking is "How to health check a uWSGI application". There are some example tools to do this. Particularly:</p>
<ul>
<li><a href="https://github.com/andreif/uwsgi-tools" rel="nofollow noreferrer">https://github.com/andreif/uwsgi-tools</a></li>
<li><a href="https://github.com/che0/uwping" rel="nofollow noreferrer">https://github.com/che0/uwping</a></li>
<li><a href="https://github.com/m-messiah/uwget" rel="nofollow noreferrer">https://github.com/m-messiah/uwget</a></li>
</ul>
<p>The <code>uwsgi-tools</code> project seems to have the most complete example at <a href="https://github.com/andreif/uwsgi-tools/issues/2#issuecomment-345195583" rel="nofollow noreferrer">https://github.com/andreif/uwsgi-tools/issues/2#issuecomment-345195583</a>. In a Kubernetes Pod spec context this might end up looking like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness
name: liveness-exec
spec:
containers:
- name: myapp
image: myimage
livenessProbe:
exec:
command:
- uwsgi_curl
- -H
- Host:host.name
- /path/to/unix/socket
- /health
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
<p>This would also assume your application responded to <code>/health</code> as a health endpoint.</p>
| Andy Shinn |
<p>I have a very simple ASP.NET Core app (C# Web Application with Docker Support for Linux), and when I build the docker image and try to run it on my local PC the following happens:
In docker, with my image called test, I type docker run test, at which point it states "Content root path: /app Now listening on: <a href="http://[::]:80" rel="nofollow noreferrer">http://[::]:80</a>".
And even though when I type docker ps I can see the process running, when I try to navigate to localhost:80 all I get is a long wait and then "This site can’t be reached, localhost refused to connect."</p>
<p>I typed </p>
<p><code>docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ec158cc3b344</code></p>
<p>which gave me the container's IP address, but even navigating directly to the container I either get "This site can’t be reached" if I navigate on port 80, or "Your connection was interrupted" if I try to access the IP directly.</p>
<p>I also tried to skip docker completely and deploy the image to Kubernetes to see if this would give me any luck, but instead when I try to access the service's External-IP (in this case localhost), I get the following:
"This page isn’t working, localhost didn’t send any data."</p>
<p>I also tried to use </p>
<p><code>kubectl get pods -o wide</code></p>
<p>and access the IPs of the pods directly, but this just gives me "This 10.1.0.32 page can’t be found", for example.</p>
<p>And in case you're wondering, this is my dockerfile and kubernetes deployment .yml:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Test/Test.csproj", "Test/"]
RUN dotnet restore "Test/Test.csproj"
COPY . .
WORKDIR "/src/Test"
RUN dotnet build "Test.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Test.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Test.dll"]
</code></pre>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
spec:
replicas: 3
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: <DockerEndpoint>.io/test:v5 #Sorry, can't include the real endpoint!
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: test
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
selector:
app: test
</code></pre>
<p>I also understand that .net core works in a weird way that doesn't allow it to expose its ports to the outside world unless you tell it to, but that, combined with my relative newness to the docker/kubernetes stack, is leaving me bewildered.</p>
<p>Does anybody have any idea how I can make a .net core app, any app, work with docker?</p>
<p>P.S. I am really using such a simple app that even if I create a brand new .net core app with docker support, and try to immediately build and run the basic .net core app, it doesn't work. I cannot make it work with literally any .net core app!</p>
| Tom Baker | <p>When it says listening on <code>http://[::]:80</code>, it's talking about localhost <em>in the container</em>. When you try to access it via <code>http://localhost</code> in your web browser running on your computer, <code>localhost</code> is your computer, <em>not</em> the container. You need to use the container's IP.</p>
<p>From your description, it sounds like you tried that as well, but there's no reason you should have any issues with that. You either didn't get the right IP or you did something else incorrect not detailed here.</p>
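<p>As a practical sketch that sidesteps the container IP entirely (assuming the image is tagged <code>test</code> as in the question), publish the port to the host; note that on Docker Desktop for Mac/Windows container IPs are generally not reachable from the host, so this is usually the way to go:</p>
<pre><code># map host port 8080 to container port 80, then browse http://localhost:8080
docker run -p 8080:80 test
</code></pre>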
| Chris Pratt |
<p>I have a multiplayer game based on microservices architecture which I am trying to figure how to <strong>scale horizontally</strong>. It is currently orchestrated in Docker Swarm but I am considering moving to Kubernetes.</p>
<p>Here are the details about the game:</p>
<ul>
<li>It is a table game with cards</li>
<li>Multiple players sit on the same table and play with each other</li>
</ul>
<p>As it works now, I have a single container that is responsible for all tables. When a player joins the table, he sits down and establishes a websocket connection that is routed to that particular container. All players on all tables are connected to the same container. The game logic and the game events can be easily pushed to all clients. </p>
<p>It's currently like that. <strong>All clients that sit on the same table have a connection to the same container</strong>, so it's easy to push dynamic game data back and forth.</p>
<pre><code>Client 1+
| Container A +Client 3
| +---------------+ |
+---> |---------------| <----+
|| Table 1 || |Client 4
Client 2+----> |---------------| <----+
|---------------|
|| Table 2 ||
|---------------|
|---------------|
|| Table 3 ||
+---------------+
| . |
| . |
| . |
+---------------+
</code></pre>
<p>However, when you try to scale this by just increasing the number of containers you run into the problem that clients sitting on the same table are connected to different containers. This means that every game action and all shared dynamic game data have to be updated in a database sitting between these containers. However this becomes increasingly hard to write and maintain:</p>
<pre><code> Container 1 Container 2
Client 1+ +-------------+ +-------------+ +Client 3
+----> |-------------| |-------------| <------+
|| Table 1 || || Table 1 ||
+----> |-------------| |-------------| <------+Client 4
 Client 2+              |-------------|  |-------------|
|| Table 2 || || Table 2 ||
+-------------+ +-------------+
| | | |
| | | |
| | | |
+----+--------+ +-----------+-+
| |
| |
| |
| +------------------------+ |
+> | Redis DB | <+
+------------------------+
</code></pre>
<p>Rather than designing the components like that, it would be much simpler to somehow route clients that have to sit on the same table to the same container. This is to avoid writing every player action and every public table update into the DB. It would look like this:</p>
<pre><code> Game Service
+-----------------+
Client 1+ | | + Client 3
| | Container 1 | |
+------> +-----------+ <-------+
| |-----------| |
Client 2 +-----> || Table 1 || <-------+ Client 4
| |-----------| |
| |-----------| |
| || Table 2 || |
| |-----------| |
| +-----------+ |
| |
| Container 2 |
| +-----------+ |
| |-----------| |
| || Table 3 || |
| |-----------| |
| |-----------| |
| || Table 4 || |
| |-----------| |
| +-----------+ |
| |
+-----------------+
</code></pre>
<p>Having the above architecture would dramatically decrease the complexity of the app. The problem is that <strong>connections coming from different clients have to be identified and routed to the correct container</strong>. I haven't found a way to do that. Is routing to specific containers within the service possible and with what tool?</p>
<p>What is the correct approach to use in my scenario?
Also, if manually routing requests to the target container is not a viable option, what would be the correct way to architect this service?</p>
| BabbevDan | <p>This can be achieved with the help of 3rd party tools, like Istio. </p>
<p><a href="https://istio.io/docs/tasks/traffic-management/request-routing/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/request-routing/</a></p>
<p>You will have to define VirtualServices depending on your config. For your game service you should use a StatefulSet; its pods get stable, addressable identities, so you can tell to which instance you need to route your traffic.</p>
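<p>To make that concrete, here is a minimal sketch (every name in it — the <code>game</code> host, the <code>x-table-id</code> header, the subset — is an assumption for illustration). The idea is that a DestinationRule defines one subset per StatefulSet pod (pods carry a <code>statefulset.kubernetes.io/pod-name</code> label), and the VirtualService matches a header carrying the table id:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: game-routes
spec:
  hosts:
  - game                     # the Service in front of the StatefulSet
  http:
  - match:
    - headers:
        x-table-id:          # clients stamp the table they want to join
          exact: "table-1"
    route:
    - destination:
        host: game
        subset: instance-0   # subset defined in a DestinationRule,
                             # selecting pod game-0 by its pod-name label
</code></pre>
<p>With that, every player of a given table lands on the same container, without sharing game state through a database.</p>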
| YoK |
<p>I am trying to check the status of a pod using the kubectl wait command, following this <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="nofollow noreferrer">documentation</a>.
Following is the command that I am trying:</p>
<pre><code>kubectl wait --for=condition=complete --timeout=30s -n d1 job/test-job1-oo-9j9kj
</code></pre>
<p>Following is the error that i am getting</p>
<pre><code>Kubectl error: status.conditions accessor error: Failure is of the type string, expected map[string]interface{}
</code></pre>
<p>and my <code>kubectl -o json</code> output can be accessed via this github <a href="https://github.com/msraju2009/kubernetes-tests/blob/master/kubernetes-json-output.json" rel="nofollow noreferrer">link</a>.</p>
<p>Can someone help me to fix the issue?</p>
| Auto-learner | <p>To wait until your pod is running, check for "condition=ready". In addition, prefer to filter by label, rather than specifying pod id. For example:</p>
<pre><code>$ kubectl wait --for=condition=ready pod -l app=netshoot
pod/netshoot-58785d5fc7-xt6fg condition met
</code></pre>
<p>Another option is <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-status-em-" rel="noreferrer">rollout status</a> - To wait until the deployment is done:</p>
<pre><code>$ kubectl rollout status deployment netshoot
deployment "netshoot" successfully rolled out
</code></pre>
<p>Both options work great in automation scripts when it is required to wait for an app to be installed. However, as @CallMeLaNN noted for the second option, a deployment being "rolled out" is not necessarily without errors.</p>
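<p>For the Job from the original question, the same label-based pattern should work once the Job reports standard conditions (the <code>app=test-job1</code> label is an assumption — use whatever labels your Job actually carries):</p>
<pre><code>$ kubectl wait --for=condition=complete --timeout=30s -n d1 job -l app=test-job1
</code></pre>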
| Noam Manos |
<p>I am trying to create a module in Terraform to create the basic resources in a Kubernetes cluster, this means a <code>cert-manager</code>, <code>ingress-nginx</code> (as the ingress controller) and a <code>ClusterIssuer</code> for the certificates. In this exact order.</p>
<p>The first two I am installing with a <code>helm_release</code> resource and the <code>cluster_issuer</code> via <code>kubernetes_manifest</code>.</p>
<p>I am getting the below error, which, after some Google searches, I found out that it's because the <code>cert-manager</code> installs the CRDs that the <code>ClusterIssuer</code> requires but at the <code>terraform plan</code> phase, since they are not installed yet, the manifest cannot detect the <code>ClusterIssuer</code>.</p>
<p>Then, I would like to know if there's a way to circumvent this issue but still create everything in the same configuration with only one <code>terraform apply</code>?</p>
<p>Note: I tried to use the depends_on arguments and also include a <code>time_sleep</code> block but it's useless because nothing is installed in the plan and that's where it fails</p>
<pre><code>| Error: Failed to determine GroupVersionResource for manifest
│
│ with module.k8s_base.kubernetes_manifest.cluster_issuer,
│ on ../../modules/k8s_base/main.tf line 37, in resource "kubernetes_manifest" "cluster_issuer":
│ 37: resource "kubernetes_manifest" "cluster_issuer" {
│
│ no matches for kind "ClusterIssuer" in group "cert-manager.io"
</code></pre>
<pre><code>resource "helm_release" "cert_manager" {
chart = "cert-manager"
repository = "https://charts.jetstack.io"
name = "cert-manager"
create_namespace = var.cert_manager_create_namespace
namespace = var.cert_manager_namespace
set {
name = "installCRDs"
value = "true"
}
}
resource "helm_release" "ingress_nginx" {
name = "ingress-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
create_namespace = var.ingress_nginx_create_namespace
namespace = var.ingress_nginx_namespace
wait = true
depends_on = [
helm_release.cert_manager
]
}
resource "time_sleep" "wait" {
create_duration = "60s"
depends_on = [helm_release.ingress_nginx]
}
resource "kubernetes_manifest" "cluster_issuer" {
manifest = {
"apiVersion" = "cert-manager.io/v1"
"kind" = "ClusterIssuer"
"metadata" = {
"name" = var.cluster_issuer_name
}
"spec" = {
"acme" = {
"email" = var.cluster_issuer_email
"privateKeySecretRef" = {
"name" = var.cluster_issuer_private_key_secret_name
}
"server" = var.cluster_issuer_server
"solvers" = [
{
"http01" = {
"ingress" = {
"class" = "nginx"
}
}
}
]
}
}
}
depends_on = [helm_release.cert_manager, helm_release.ingress_nginx, time_sleep.wait]
}
</code></pre>
| everspader | <p><a href="https://cert-manager.io/docs/installation/helm/#option-1-installing-crds-with-kubectl" rel="nofollow noreferrer">Official documentation</a> says to use <code>kubectl apply</code> before installing this with a helm chart, making it a two step process. Using Terraform, this would make it a 3 step process in that you have to apply a targeted section to create the cluster so you can have access to kubeconfig credentials, then run the kubectl apply command to install the CRDs, and finally run terraform apply again to install the helm chart and the rest of the IaC. This is even less ideal.</p>
<p>I would use the <code>kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.crds.yaml</code> in kubectl_manifest resources as the comment above suggests, but this is impossible since this does not link to a single yaml file but so many of them one would not be able to keep up with the changes. Unfortunately, there is no "kubectl_apply" terraform resource** for the helm chart to depend on those CRDs being installed first.</p>
<p>Despite all this wonkiness, there is a solution, and that is to use the helm_release resource twice. It requires creating a module and referencing a custom helm chart for the cert-issuer. It's not ideal given the amount of effort that has to be used to create it for custom needs, but once it's created, it's a reusable, modular solution.</p>
<pre><code>#
# Cert-manager
# main.tf
#
resource "helm_release" "cert_manager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
version = var.cert_manager_chart_version
namespace = var.cert_manager_namespace
create_namespace = true
set {
name = "installCRDs"
value = true
}
}
</code></pre>
<p>Reference to custom chart:</p>
<pre><code>#
# cert-issuer.tf
#
# Cert Issuer using Helm
resource "helm_release" "cert_issuer" {
name = "cert-issuer"
repository = path.module
chart = "cert-issuer"
namespace = var.namespace
set {
name = "fullnameOverride"
value = local.issuer_name
}
set {
name = "privateKeySecretRef"
value = local.issuer_name
}
set {
name = "ingressClass"
value = var.ingress_class
}
set {
name = "acmeEmail"
value = var.cert_manager_email
}
set {
name = "acmeServer"
value = var.acme_server
}
depends_on = [helm_release.cert_manager]
}
</code></pre>
<p>You can see that the above use of <code>helm_release</code> is referencing itself locally as the repository, which requires you to have a custom helm chart, like this:</p>
<pre><code># ./cluster-issuer/cluster-issuer.yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: {{ include "cert-issuer.fullname" . }}
namespace: {{ .Release.Namespace }}
spec:
acme:
# The ACME server URL
server: {{ .Values.acmeServer }}
email: {{ .Values.acmeEmail }}
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: {{ .Values.privateKeySecretRef }}
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: {{ .Values.ingressClass }}
</code></pre>
<p>For some reason, this avoids the dependency check terraform uses to throw the error and works fine to get this installed in a single <code>apply</code></p>
<p>This could be further simplified by creating a pure chart that does not rely on values.yaml values at all.</p>
<p>** Note: I think another workaround is to use a provisioner like 'local-exec' or 'remote-exec' after a cluster is created to run the kubectl apply command for the CRDs directly, but I haven't tested this yet. It would also still require that your provisioning environment have kubectl installed and .kubeconfig properly configured, creating a dependency tree.</p>
<p>Also, that is of course not fully working code. for a full example of the module to use or fork, see <a href="https://github.com/DeimosCloud/terraform-kubernetes-cert-manager" rel="nofollow noreferrer">this github repo</a>.</p>
| user658182 |
<p>I have an API that recently started receiving more traffic, about 1.5x. That also led to a doubling in the latency:</p>
<p><a href="https://i.stack.imgur.com/clJmx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/clJmx.png" alt="latency" /></a></p>
<p>This surprised me since I had setup autoscaling of both nodes and pods as well as GKE internal loadbalancing.</p>
<p>My external API passes the request to an internal server which uses a lot of CPU. And looking at my VM instances it seems like all of the traffic got sent to one of my two VM instances (a.k.a. Kubernetes nodes):</p>
<p><a href="https://i.stack.imgur.com/B1BhR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B1BhR.png" alt="CPU utilization per node" /></a></p>
<p>With loadbalancing I would have expected the CPU usage to be more evenly divided between the nodes.</p>
<p>Looking at my deployment there is one pod on the first node:</p>
<p><a href="https://i.stack.imgur.com/S0KYc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/S0KYc.png" alt="First pod on first node" /></a></p>
<p>And two pods on the second node:</p>
<p><a href="https://i.stack.imgur.com/DBSUR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DBSUR.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/ZVKg6.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZVKg6.png" alt="enter image description here" /></a></p>
<p>My service config:</p>
<pre><code>$ kubectl describe service model-service
Name: model-service
Namespace: default
Labels: app=model-server
Annotations: networking.gke.io/load-balancer-type: Internal
Selector: app=model-server
Type: LoadBalancer
IP Families: <none>
IP: 10.3.249.180
IPs: 10.3.249.180
LoadBalancer Ingress: 10.128.0.18
Port: rest-api 8501/TCP
TargetPort: 8501/TCP
NodePort: rest-api 30406/TCP
Endpoints: 10.0.0.145:8501,10.0.0.152:8501,10.0.1.135:8501
Port: grpc-api 8500/TCP
TargetPort: 8500/TCP
NodePort: grpc-api 31336/TCP
Endpoints: 10.0.0.145:8500,10.0.0.152:8500,10.0.1.135:8500
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UpdatedLoadBalancer 6m30s (x2 over 28m) service-controller Updated load balancer with new hosts
</code></pre>
<p>The fact that Kubernetes started a new pod seems like a clue that Kubernetes autoscaling is working. But the pods on the second VM do not receive any traffic. How can I make GKE balance the load more evenly?</p>
<h2>Update Nov 2:</h2>
<p>Goli's answer leads me to think that it has something to do with the setup of the model service. The service exposes both a REST API and a GRPC API but the GRPC API is the one that receives traffic.</p>
<p>There is a corresponding forwarding rule for my service:</p>
<pre><code>$ gcloud compute forwarding-rules list --filter="loadBalancingScheme=INTERNAL"
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
aab8065908ed4474fb1212c7bd01d1c1 us-central1 10.128.0.18 TCP us-central1/backendServices/aab8065908ed4474fb1212c7bd01d1c1
</code></pre>
<p>Which points to a backend service:</p>
<pre><code>$ gcloud compute backend-services describe aab8065908ed4474fb1212c7bd01d1c1
backends:
- balancingMode: CONNECTION
group: https://www.googleapis.com/compute/v1/projects/questions-279902/zones/us-central1-a/instanceGroups/k8s-ig--42ce3e0a56e1558c
connectionDraining:
drainingTimeoutSec: 0
creationTimestamp: '2021-02-21T20:45:33.505-08:00'
description: '{"kubernetes.io/service-name":"default/model-service"}'
fingerprint: lA2-fz1kYug=
healthChecks:
- https://www.googleapis.com/compute/v1/projects/questions-279902/global/healthChecks/k8s-42ce3e0a56e1558c-node
id: '2651722917806508034'
kind: compute#backendService
loadBalancingScheme: INTERNAL
name: aab8065908ed4474fb1212c7bd01d1c1
protocol: TCP
region: https://www.googleapis.com/compute/v1/projects/questions-279902/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/questions-279902/regions/us-central1/backendServices/aab8065908ed4474fb1212c7bd01d1c1
sessionAffinity: NONE
timeoutSec: 30
</code></pre>
<p>Which has a health check:</p>
<pre><code>$ gcloud compute health-checks describe k8s-42ce3e0a56e1558c-node
checkIntervalSec: 8
creationTimestamp: '2021-02-21T20:45:18.913-08:00'
description: ''
healthyThreshold: 1
httpHealthCheck:
host: ''
port: 10256
proxyHeader: NONE
requestPath: /healthz
id: '7949377052344223793'
kind: compute#healthCheck
logConfig:
enable: true
name: k8s-42ce3e0a56e1558c-node
selfLink: https://www.googleapis.com/compute/v1/projects/questions-279902/global/healthChecks/k8s-42ce3e0a56e1558c-node
timeoutSec: 1
type: HTTP
unhealthyThreshold: 3
</code></pre>
<p>List of my pods:</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
api-server-deployment-6747f9c484-6srjb 2/2 Running 3 3d22h
label-server-deployment-6f8494cb6f-79g9w 2/2 Running 4 38d
model-server-deployment-55c947cf5f-nvcpw 0/1 Evicted 0 22d
model-server-deployment-55c947cf5f-q8tl7 0/1 Evicted 0 18d
model-server-deployment-766946bc4f-8q298 1/1 Running 0 4d5h
model-server-deployment-766946bc4f-hvwc9 0/1 Evicted 0 6d15h
model-server-deployment-766946bc4f-k4ktk 1/1 Running 0 7h3m
model-server-deployment-766946bc4f-kk7hs 1/1 Running 0 9h
model-server-deployment-766946bc4f-tw2wn 0/1 Evicted 0 7d15h
model-server-deployment-7f579d459d-52j5f 0/1 Evicted 0 35d
model-server-deployment-7f579d459d-bpk77 0/1 Evicted 0 29d
model-server-deployment-7f579d459d-cs8rg 0/1 Evicted 0 37d
</code></pre>
<p>How do I A) confirm that this health check is in fact showing 2/3 backends as unhealthy? And B) configure the health check to send traffic to all of my backends?</p>
<h2>Update Nov 5:</h2>
<p>After finding that several pods had gotten evicted in the past because of too little RAM, I migrated the pods to a new nodepool. The old nodepool VMs had 4 CPU and 4GB memory, the new ones have 2 CPU and 8GB memory. That seems to have resolved the eviction/memory issues, but the loadbalancer still only sends traffic to one pod at a time.</p>
<p>Pod 1 on node 1:</p>
<p><a href="https://i.stack.imgur.com/rPQJ0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rPQJ0.png" alt="Pod 1 on node 1" /></a></p>
<p>Pod 2 on node 2:</p>
<p><a href="https://i.stack.imgur.com/dE4B5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dE4B5.png" alt="enter image description here" /></a></p>
<p>It seems like the loadbalancer is not splitting the traffic at all but just randomly picking one of the GRPC modelservers and sending 100% of traffic there. Is there some configuration that I missed which caused this behavior? Is this related to me using GRPC?</p>
| Johan Wikström | <p>Turns out the answer is that you <strong>cannot loadbalance gRPC requests using a GKE loadbalancer</strong>.</p>
<p>A GKE loadbalancer (as well as Kubernetes' default loadbalancer) picks a new backend every time a new TCP connection is formed. For regular HTTP 1.1 requests each request gets a new TCP connection and the loadbalancer works fine. For gRPC (which is based on HTTP 2), the TCP connection is only set up once and all requests are multiplexed on the same connection.</p>
<p>More details in this <a href="https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/" rel="nofollow noreferrer">blog post</a>.</p>
<p>To enable gRPC loadbalancing I had to:</p>
<ol>
<li>Install Linkerd</li>
</ol>
<pre><code>curl -fsL https://run.linkerd.io/install | sh
linkerd install | kubectl apply -f -
</code></pre>
<ol start="2">
<li>Inject the Linkerd proxy in <strong>both</strong> the receiving and sending pods:</li>
</ol>
<p><a href="https://i.stack.imgur.com/LO0Ii.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LO0Ii.png" alt="enter image description here" /></a></p>
<pre><code>kubectl apply -f api_server_deployment.yaml
kubectl apply -f model_server_deployment.yaml
</code></pre>
<ol start="3">
<li>After realizing that Linkerd would not work together with the GKE loadbalancer, I exposed the receiving deployment as a ClusterIP service instead.</li>
</ol>
<pre><code>kubectl expose deployment/model-server-deployment
</code></pre>
<ol start="4">
<li>Pointed the gRPC client to the ClusterIP service IP address I just created, and redeployed the client.
<a href="https://i.stack.imgur.com/AMFag.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AMFag.png" alt="enter image description here" /></a></li>
</ol>
<pre><code>kubectl apply -f api_server_deployment.yaml
</code></pre>
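<p>For completeness, a mesh-free alternative sketch: make the Service headless so DNS returns every pod IP, letting a gRPC client configured for <code>round_robin</code> balance on the client side (the selector label below is an assumption — match it to your deployment's pod labels):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: model-service-headless
spec:
  clusterIP: None        # headless: DNS A records for every ready pod
  selector:
    app: model-server
  ports:
  - name: grpc-api
    port: 8500
    targetPort: 8500
</code></pre>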
| Johan Wikström |
<p>I need to inject the container port from an environment variable inside my pod. How can I do that? </p>
<p>I have been through the documentation; links:
1. <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/</a>
2. <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
namespace: default
spec:
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: $(MY_CONTAINER_PORT)
env:
- name: MY_CONTAINER_PORT
value: 80
</code></pre>
<pre><code>error: error validating "nginx-pod-exposed-through-env.yaml": error validating data: ValidationError(Pod.spec.containers[0].ports[0].containerPort): invalid type for io.k8s.api.core.v1.ContainerPort.containerPort: got "string", expected "integer"; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
| Kunal Malhotra | <p>A way to accomplish this would be to use a templating tool such as <a href="https://get-ytt.io/" rel="nofollow noreferrer">ytt</a>. With ytt you would turn your manifest into a template like:</p>
<pre><code>#@ load("@ytt:data", "data")
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
namespace: default
spec:
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: #@ data.values.port
</code></pre>
<p>And then supply a <code>values.yml</code> like:</p>
<pre><code>#@data/values
---
port: 8080
</code></pre>
<p>Assuming the original template is named <code>test.yml</code> we could run <code>ytt</code> like so to generate the output:</p>
<pre><code>$ ytt -f test.yml -f values.yml
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
namespace: default
spec:
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 8080
</code></pre>
<p>The ytt utility then lets us override the data values one the command line with <code>--data-value</code> (or <code>-v</code> for short). An example changing to port 80:</p>
<pre><code>$ ytt -v port=80 -f test.yml -f values.yml
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
namespace: default
spec:
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>Your original question sounded like you wanted to use environment variables. This is supported with <code>--data-values-env</code>. An example using prefix <code>MYVAL</code>:</p>
<pre><code>$ export MYVAL_port=9000
$ ytt --data-values-env MYVAL -f test.yml -f values.yml
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
namespace: default
spec:
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 9000
</code></pre>
<p>You can then combine <code>ytt</code> and <code>kubectl</code> to create and apply resources:</p>
<pre><code>ytt --data-values-env MYVAL -f test.yml -f values.yml | kubectl apply -f-
</code></pre>
<p>Additional information on injecting data into ytt templates is at <a href="https://github.com/k14s/ytt/blob/develop/docs/ytt-data-values.md" rel="nofollow noreferrer">https://github.com/k14s/ytt/blob/develop/docs/ytt-data-values.md</a>.</p>
| Andy Shinn |
<p>I am trying to run a test pod with OpenShift CLI:</p>
<pre><code>$oc run nginx --image=nginx --limits=cpu=2,memory=4Gi
deploymentconfig.apps.openshift.io/nginx created
$oc describe deploymentconfig.apps.openshift.io/nginx
Name: nginx
Namespace: myproject
Created: 12 seconds ago
Labels: run=nginx
Annotations: <none>
Latest Version: 1
Selector: run=nginx
Replicas: 1
Triggers: Config
Strategy: Rolling
Template:
Pod Template:
Labels: run=nginx
Containers:
nginx:
Image: nginx
Port: <none>
Host Port: <none>
Limits:
cpu: 2
memory: 4Gi
Environment: <none>
Mounts: <none>
Volumes: <none>
Deployment #1 (latest):
Name: nginx-1
Created: 12 seconds ago
Status: New
Replicas: 0 current / 0 desired
Selector: deployment=nginx-1,deploymentconfig=nginx,run=nginx
Labels: openshift.io/deployment-config.name=nginx,run=nginx
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal DeploymentCreated 12s deploymentconfig-controller Created new replication controller "nginx-1" for version 1
Warning FailedCreate 1s (x12 over 12s) deployer-controller Error creating deployer pod: pods "nginx-1-deploy" is forbidden: failed quota: quota-svc-myproject: must specify limits.cpu,limits.memory
</code></pre>
<p>I get "must specify limits.cpu,limits.memory" error, despite both limits being present in the same <strong>describe</strong> output.</p>
<p>What might be the problem and how do I fix it?</p>
| Hohol | <p>I found a solution!</p>
<p>Part of the error message was "Error creating deployer pod". It means that the problem is not with my pod, but with the <strong>deployer pod</strong> which performs my pod deployment.
It seems the quota in my project affects deployer pods as well.
I couldn't find a way to set deployer pod limits with CLI, so I've made a DeploymentConfig.</p>
<pre><code>kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
name: "test-app"
spec:
template:
metadata:
labels:
name: "test-app"
spec:
containers:
- name: "test-app"
image: "nginxinc/nginx-unprivileged"
resources:
limits:
cpu: "2000m"
memory: "20Gi"
ports:
- containerPort: 8080
protocol: "TCP"
replicas: 1
selector:
name: "test-app"
triggers:
- type: "ConfigChange"
- type: "ImageChange"
imageChangeParams:
automatic: true
containerNames:
- "test-app"
from:
kind: "ImageStreamTag"
name: "nginx-unprivileged:latest"
strategy:
type: "Rolling"
resources:
limits:
cpu: "2000m"
memory: "20Gi"
</code></pre>
<p>As you can see, two sets of limits are specified here: for the container and for the deployment strategy.</p>
<p>With this configuration it worked fine!</p>
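<p>If you want to inspect the quota that was rejecting the deployer pod (the quota and namespace names below are taken from the error message in the question), something like this should show its limits and current usage:</p>
<pre><code>oc describe quota quota-svc-myproject -n myproject
</code></pre>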
| Hohol |
<p>I just lost access to my k3s.</p>
<p>I had the certs checked this week to see if they had been auto-updated... and it seems so:</p>
<pre><code>[root@vmpkube001 tls]# for crt in *.crt; do printf '%s: %s\n' "$(date --date="$(openssl x509 -enddate -noout -in "$crt"|cut -d= -f 2)" --iso-8601)" "$crt"; done | sort
2021-09-18: client-admin.crt
2021-09-18: client-auth-proxy.crt
2021-09-18: client-cloud-controller.crt
2021-09-18: client-controller.crt
2021-09-18: client-k3s-controller.crt
2021-09-18: client-kube-apiserver.crt
2021-09-18: client-kube-proxy.crt
2021-09-18: client-scheduler.crt
2021-09-18: serving-kube-apiserver.crt
2029-11-03: client-ca.crt
2029-11-03: request-header-ca.crt
2029-11-03: server-ca.crt
</code></pre>
<p>but the cli is broken:
<a href="https://i.stack.imgur.com/Y3fRC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y3fRC.png" alt="image" /></a></p>
<p>Same goes to the dashboard:</p>
<p><a href="https://i.stack.imgur.com/hUQuO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hUQuO.png" alt="image" /></a></p>
<p>The cluster "age" was about 380~something days.
I am running a "v1.18.12+k3s1" in a centos7 cluster.</p>
<p>I changed the date on the server to be able to execute kubectl again...
<a href="https://i.stack.imgur.com/s3dJS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s3dJS.png" alt="enter image description here" /></a>
The secrets are wrong... how do I update this?</p>
<p>Node logs:</p>
<pre><code>Nov 18 16:34:17 pmpnode001.agrotis.local k3s[6089]: time="2020-11-18T16:34:17.400604478-03:00" level=error msg="server https://127.0.0.1:33684/cacerts is not trusted: Get https://127.0.0.1:33684/cacerts: x509: certificate has expired or is not yet valid"
</code></pre>
<p>Not only that, but every case of this problem on the internet says something about kubeadm alpha certs. There is no kubeadm here, and the only "alpha" feature I have in kubectl is debug.</p>
| Techmago | <p>To work around this error, follow these steps:</p>
<p><strong>Step 1. Stop k3s</strong></p>
<pre><code>systemctl stop k3s.service
</code></pre>
<p><strong>Step 2. Stop time sync</strong></p>
<pre><code>hwclock --debug
timedatectl set-ntp 0
systemctl stop ntp.service
systemctl status systemd-timesyncd.service
</code></pre>
<p><strong>Step 3. Update date to <90 days from expiration</strong></p>
<pre><code>date $(date "+%m%d%H%M%Y" --date="90 days ago")
</code></pre>
<p><strong>Step 4. Restart k3s</strong></p>
<pre><code>systemctl start k3s.service
</code></pre>
<p>Just run this to test the cluster!</p>
<pre><code>kubectl get nodes
</code></pre>
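<p><strong>Step 5. Restore the clock</strong> — once k3s is up and has rotated the certificates, don't forget to re-enable time sync (mirroring step 2):</p>
<pre><code>timedatectl set-ntp 1
systemctl start ntp.service
</code></pre>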
| Mohsen Abasi |
<p>I have a question about following architecture, I could not find a clear cut answer in the Kubernetes documentation may be you can help me.</p>
<p>I have a service called 'OrchestrationService'; this service is dependent on 3 other services, 'ServiceA', 'ServiceB', 'ServiceC', to be able to do its job.</p>
<p>All these services have their Docker Images and deployed to Kubernetes.</p>
<p>Now, the 'OrchestrationService' will be the only one that is going to have contact with the outside world, so it would definitely have an external endpoint. My question is: do 'ServiceA', 'ServiceB', 'ServiceC' need one too, or would Kubernetes make those services available to 'OrchestrationService' via KubeProxy/LoadBalancer?</p>
<p>Thx for answers</p>
| posthumecaver | <p>No, you only expose OrchestrationService to the public; the other services A/B/C need to be cluster services. You create <code>selector</code> services for A/B/C so OrchestrationService can connect to them. OrchestrationService can be defined as <code>NodePort</code> with a fixed port, or you can use an ingress to route traffic to OrchestrationService.</p>
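<p>As a sketch, an internal service for one of the dependencies could look like this (the <code>app: service-a</code> label and ports are assumptions — use whatever labels and ports your pods actually carry); OrchestrationService would then reach it at <code>http://service-a</code> inside the cluster:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  type: ClusterIP   # reachable only from inside the cluster
  selector:
    app: service-a
  ports:
  - port: 80
    targetPort: 8080
</code></pre>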
| Akash Sharma |
<p>I am running an application on GKE. It works fine but I cannot figure out how to get the external IP of the service in a machine-readable format.
So I am searching for a gcloud or kubectl command that gives me only the external IP, or a URL of the format <code>http://192.168.0.2:80</code>, so that I can cut out the IP.</p>
| stm | <p>You can use the <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">jsonpath</a> output type to get the data directly without needing the additional <code>jq</code> to process the json:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get services \
--namespace ingress-nginx \
nginx-ingress-controller \
--output jsonpath='{.status.loadBalancer.ingress[0].ip}'
</code></pre>
<h4>NOTE</h4>
<p>Be sure to replace the namespace and service name, respectively, with yours.</p>
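<p>In a script you could then capture the IP and build the URL the question asks for (service and namespace are the ones from the example above — swap in your own):</p>
<pre><code>EXTERNAL_IP=$(kubectl get services \
  --namespace ingress-nginx \
  nginx-ingress-controller \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${EXTERNAL_IP}:80"
</code></pre>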
| kenny |
<p>I'm sorry if this is a very ignorant question but is it possible for Ambassador to truly handle CORS headers and pre-flight OPTION responses? </p>
<p>The docs (<a href="https://www.getambassador.io/reference/cors" rel="noreferrer">https://www.getambassador.io/reference/cors</a>) seem kind of ambiguous to me, if there are just hooks to prevent requests, or if it can truly respond on behalf of the services.</p>
<p>Here's my situation: I've got Ambassador in front of all the http requests to some microservices. For [reasons] we now need a separate domain to make requests into the same Ambassador. </p>
<p>I have an AuthService configured, and according to the docs "When you use external authorization, each incoming request is authenticated before routing to its destination, including pre-flight OPTIONS requests." Which makes perfect sense, and that's what I'm seeing. My AuthService is configured to allow things correctly and that seems to be working. The AuthService responds with the appropriate headers, but Ambassador seems to just ignore that and only cares if the AuthService responds with a 200 or not. (Which seems totally reasonable.)</p>
<p>I have this annotated on my ambassador module:</p>
<pre><code>getambassador.io/config: |
---
apiVersion: ambassador/v1
kind: Module
name: ambassador
config:
service_port: 8080
cors:
origins: [my domain here]
credentials: true
</code></pre>
<p>And that doesn't seem to do what I'd expect, which is handle the CORS headers and pre-flight... instead it forwards it on to the service to handle all the CORS stuff. </p>
| xbakesx | <p>Turns out, by specifying <code>headers: "Content-Type"</code> in the <code>cors</code> configuration, things just started to work. Apparently that's not as optional as I thought.</p>
<p>So this is now my module:</p>
<pre><code>getambassador.io/config: |
---
apiVersion: ambassador/v1
kind: Module
name: ambassador
config:
service_port: 8080
cors:
origins: [my domain here]
headers: "Content-Type"
credentials: true
</code></pre>
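<p>To sanity-check the pre-flight handling after applying this, a manual OPTIONS request against any mapped path should come back with the CORS headers (host and path here are placeholders):</p>
<pre><code>curl -i -X OPTIONS \
  -H "Origin: https://my.domain" \
  -H "Access-Control-Request-Method: GET" \
  https://my-ambassador-host/some-mapped-path/
</code></pre>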
| xbakesx |
<p>From the <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">official example</a> of <code>Kubernetes</code> documentation site on deploying a <code>Wordpress</code> application with mysql:</p>
<p>The service definition of <code>mysql</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
</code></pre>
<p>The deployment definition of <code>mysql</code></p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
</code></pre>
<p>My question is the following:</p>
<p>The <code>Deployment</code> definition, has a <code>matchLabel</code> <code>selector</code>, so that it will <strong>match</strong> the pod defined below that has the <code>app: wordpress</code> <strong>and</strong> <code>tier:mysql</code> labels.</p>
<p>Why the <code>Service</code> <code>selector</code> does not require a <code>matchLabel</code> directive for the same purpose? What is the "selection" of service performed upon?</p>
| pkaramol | <p>The <code>Service</code> is a concept that makes your container (in this case hosting wordpress) available on a given port. It maps an external port (the <code>Node's</code> port) to an internal port (the container/pod's port). It does this by using the <code>Pod's</code> networking capabilities. The selector is a way of specifying in the service which <code>Pod</code> the port should be opened on. As for the syntax: a Service selector only supports simple equality matches, so it is written as a plain label map, whereas a Deployment uses the newer LabelSelector type, which wraps the map in <code>matchLabels</code> (and also allows <code>matchExpressions</code>). The <code>Deployment</code> is actually just a way of grouping things together - the <code>Pod</code> itself holds the Wordpress container, and the port that's defined in the service is available through the <code>Pod</code> networking.</p>
<p>This is a simple explanation; there are different kinds of services.</p>
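<p>One way to see exactly which pods a given service selector picks up is to list pods by the same labels (using the labels from the question):</p>
<pre><code>kubectl get pods -l app=wordpress,tier=mysql
</code></pre>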
| Kyle |
<p>I created a single-node kubeadm cluster on bare-metal and after some research I would go for a host network approach (<a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network</a>), since NodePort is not an option due to network restrictions. </p>
<p>I tried installing nginx-ingress with helm chart through the command:</p>
<pre><code> helm install stable/nginx-ingress \
--set controller.hostNetwork=true
</code></pre>
<p>The problem is that it is creating a LoadBalancer service which is Pending forever and my ingress objects are not being routed:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/whopping-kitten-nginx-ingress-controller-5db858b48c-dp2j8 1/1 Running 0 5m34s
pod/whopping-kitten-nginx-ingress-default-backend-5c574f4449-dr4xm 1/1 Running 0 5m34s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6m43s
service/whopping-kitten-nginx-ingress-controller LoadBalancer 10.97.143.40 <pending> 80:30068/TCP,443:30663/TCP 5m34s
service/whopping-kitten-nginx-ingress-default-backend ClusterIP 10.106.217.96 <none> 80/TCP 5m34s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/whopping-kitten-nginx-ingress-controller 1/1 1 1 5m34s
deployment.apps/whopping-kitten-nginx-ingress-default-backend 1/1 1 1 5m34s
NAME DESIRED CURRENT READY AGE
replicaset.apps/whopping-kitten-nginx-ingress-controller-5db858b48c 1 1 1 5m34s
replicaset.apps/whopping-kitten-nginx-ingress-default-backend-5c574f4449 1 1 1 5m34s
</code></pre>
<p>Is there any other configuration that needs to be done to succeed in this approach?</p>
<p>UPDATE: here are the logs for the ingress-controller pod</p>
<pre><code>-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.24.1
Build: git-ce418168f
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
I0707 19:02:50.552631 6 flags.go:185] Watching for Ingress class: nginx
W0707 19:02:50.552882 6 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.10
W0707 19:02:50.556215 6 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0707 19:02:50.556368 6 main.go:205] Creating API client for https://10.96.0.1:443
I0707 19:02:50.562296 6 main.go:249] Running in Kubernetes cluster version v1.15 (v1.15.0) - git (clean) commit e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529 - platform linux/amd64
I0707 19:02:51.357524 6 main.go:102] Validated default/precise-bunny-nginx-ingress-default-backend as the default backend.
I0707 19:02:51.832384 6 main.go:124] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
W0707 19:02:53.516654 6 store.go:613] Unexpected error reading configuration configmap: configmaps "precise-bunny-nginx-ingress-controller" not found
I0707 19:02:53.527297 6 nginx.go:265] Starting NGINX Ingress controller
I0707 19:02:54.630002 6 event.go:209] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"staging-ingress", UID:"9852d27b-d8ad-4410-9fa0-57b92fdd6f90", APIVersion:"extensions/v1beta1", ResourceVersion:"801", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/staging-ingress
I0707 19:02:54.727989 6 nginx.go:311] Starting NGINX process
I0707 19:02:54.728249 6 leaderelection.go:217] attempting to acquire leader lease default/ingress-controller-leader-nginx...
W0707 19:02:54.729235 6 controller.go:373] Service "default/precise-bunny-nginx-ingress-default-backend" does not have any active Endpoint
W0707 19:02:54.729334 6 controller.go:797] Service "default/face" does not have any active Endpoint.
W0707 19:02:54.729442 6 controller.go:797] Service "default/test" does not have any active Endpoint.
I0707 19:02:54.729535 6 controller.go:170] Configuration changes detected, backend reload required.
I0707 19:02:54.891620 6 controller.go:188] Backend successfully reloaded.
I0707 19:02:54.891654 6 controller.go:202] Initial sync, sleeping for 1 second.
I0707 19:02:54.948639 6 leaderelection.go:227] successfully acquired lease default/ingress-controller-leader-nginx
I0707 19:02:54.949148 6 status.go:86] new leader elected: precise-bunny-nginx-ingress-controller-679b9557ff-n57mc
[07/Jul/2019:19:02:55 +0000]TCP200000.000
W0707 19:02:58.062645 6 controller.go:373] Service "default/precise-bunny-nginx-ingress-default-backend" does not have any active Endpoint
W0707 19:02:58.062676 6 controller.go:797] Service "default/face" does not have any active Endpoint.
W0707 19:02:58.062686 6 controller.go:797] Service "default/test" does not have any active Endpoint.
W0707 19:03:02.406151 6 controller.go:373] Service "default/precise-bunny-nginx-ingress-default-backend" does not have any active Endpoint
W0707 19:03:02.406188 6 controller.go:797] Service "default/face" does not have any active Endpoint.
W0707 19:03:02.406357 6 controller.go:797] Service "default/test" does not have any active Endpoint.
[07/Jul/2019:19:03:02 +0000]TCP200000.000
W0707 19:03:05.739438 6 controller.go:797] Service "default/face" does not have any active Endpoint.
W0707 19:03:05.739467 6 controller.go:797] Service "default/test" does not have any active Endpoint.
[07/Jul/2019:19:03:05 +0000]TCP200000.001
W0707 19:03:09.072793 6 controller.go:797] Service "default/face" does not have any active Endpoint.
W0707 19:03:09.072820 6 controller.go:797] Service "default/test" does not have any active Endpoint.
W0707 19:03:12.406121 6 controller.go:797] Service "default/face" does not have any active Endpoint.
W0707 19:03:12.406143 6 controller.go:797] Service "default/test" does not have any active Endpoint.
[07/Jul/2019:19:03:15 +0000]TCP200000.000
I0707 19:03:54.959607 6 status.go:295] updating Ingress default/staging-ingress status from [] to [{ }]
I0707 19:03:54.961925 6 event.go:209] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"staging-ingress", UID:"9852d27b-d8ad-4410-9fa0-57b92fdd6f90", APIVersion:"extensions/v1beta1", ResourceVersion:"1033", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/staging-ingress
</code></pre>
| staticdev | <p>@ijaz-ahmad-khan @vkr gave good ideas for solving the problem but the complete steps for setup are:</p>
<p>1) Install nginx-ingress with: </p>
<pre><code>helm install stable/nginx-ingress --set controller.hostNetwork=true,controller.service.type="",controller.kind=DaemonSet
</code></pre>
<p>2) In your deployments put:</p>
<pre><code>spec:
template:
spec:
hostNetwork: true
</code></pre>
<p>3) In all your Ingress objects put:</p>
<pre><code>metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
</code></pre>
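<p>4) To verify, curl any node's IP directly on port 80, setting the Host header your Ingress rule expects (the hostname here is a placeholder):</p>
<pre><code>curl -H "Host: myapp.example.com" http://NODE_IP/
</code></pre>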
| staticdev |
<p>On a MacBook Pro, I tried installing from a binary with curl and then with brew.</p>
<p>Both installs generate an error at the end of output:</p>
<pre><code>~ via 🐘 v7.1.23
➜ kubectl version --output=yaml
clientVersion:
buildDate: "2019-04-19T22:12:47Z"
compiler: gc
gitCommit: b7394102d6ef778017f2ca4046abbaa23b88c290
gitTreeState: clean
gitVersion: v1.14.1
goVersion: go1.12.4
major: "1"
minor: "14"
platform: darwin/amd64
error: unable to parse the server version: invalid character '<' looking for beginning of value
</code></pre>
<p>Is there a way to fix this?</p>
| Stephane Gosselin | <p>I think there is another application listening on 8080 port. By default, <code>kubectl</code> will try to connect on localhost:8080 if no <code>server</code> is passed.</p>
<p>If you have deployed kubernetes <code>apiserver</code> on some other machine or port, pass <code>--server=IP:PORT</code> to <code>kubectl</code>. </p>
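<p>That error suggests whatever answered on that port returned HTML (hence the leading '<') rather than the JSON kubectl expects. To see what is actually listening there, and which server kubectl is configured to use:</p>
<pre><code>curl -s http://localhost:8080 | head
kubectl config view --minify
</code></pre>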
| Akash Sharma |
<p>After learning about arguments that can be passed to a Java 8 Virtual Machine to make it container-aware (i.e. -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap), I am trying to add these arguments to my Kubernetes deployment for a Spring Boot service.</p>
<p>In the containers section of my deployment YAML file, I have the following:</p>
<pre>
resources:
requests:
memory: "256Mi"
cpu: "50m"
limits:<br/>
memory: "512Mi"
cpu: "200m"
env:
- name: JVM_OPTS
value: "-Xms256M -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1"
</pre>
<p>In my Dockerfile, I have:</p>
<pre>
ENV JVM_OPTS="-Xmx256M"
ENV JVM_ARGS="-Dspring.profiles.active=kubernetes"
EXPOSE 8080
ENTRYPOINT [ "sh", "-c", "java $JVM_ARGS $JVM_OPTS -jar testservice.jar" ]
</pre>
<p>I can't seem to figure out why the max heap size does not get set properly:</p>
<pre>
$ kubectl exec test-service-deployment-79c9d4bd54-trxgj -c test-service -- java -XshowSettings:vm -version'
VM settings:
Max. Heap Size (Estimated): 875.00M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (IcedTea 3.8.0) (Alpine 8.171.11-r0)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode)
</pre>
<p>What am I doing wrong here?</p>
<p>On a local Docker install, I can see the JVM max heap set correctly: </p>
<pre>
$ docker run openjdk:8-jre-alpine java -Xms256M -Xmx512M -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -version
VM settings:
Min. Heap Size: 256.00M
Max. Heap Size: 512.00M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (IcedTea 3.8.0) (Alpine 8.171.11-r0)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode)
</pre>
| JJ_Mind | <p>When running <code>java -XshowSettings:vm -version</code> in the container, <code>JVM_OPTS</code> is not included in your command.</p>
<p>Try with this one</p>
<pre><code>kubectl exec test-service-deployment-79c9d4bd54-trxgj -c test-service \
-- sh -c 'java $JVM_OPTS -XshowSettings:vm -version'
</code></pre>
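<p>To double-check the resulting heap against the container limit, you could also print the final flag value inside the pod (a sketch reusing the pod name from the question):</p>
<pre><code>kubectl exec test-service-deployment-79c9d4bd54-trxgj -c test-service \
  -- sh -c 'java $JVM_OPTS -XX:+PrintFlagsFinal -version | grep MaxHeapSize'
</code></pre>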
| silverfox |
<p>I am trying Seldon Core example.</p>
<p>Here's SeldonExampleDeployment.yaml.</p>
<pre><code>apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: seldon-model
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier_rest:1.3
name: classifier
command:
- --kubelet-insecure-tls
- --insecure-skip-tls-verify
graph:
children: []
endpoint:
type: REST
name: classifier
type: MODEL
name: example
replicas: 1
</code></pre>
<pre><code>$ kubectl apply -n seldon -f SeldonExampleDeployment.yaml
Error from server (InternalError): error when creating "SeldonExampleDeployment.yaml":
Internal error occurred: failed calling webhook "v1.vseldondeployment.kb.io":
Post https://seldon-webhook-service.kubeflow.svc:443/validate-machinelearning-seldon-io-v1-seldondeployment?timeout=30s:
x509: certificate signed by unknown authority
</code></pre>
<ul>
<li>I use EKS</li>
<li>I just opened all traffic in VPC (both inbound and outbound)</li>
</ul>
<p>I don't know why this error happened.
Please help me...</p>
| Anderson | <p>Old case, but to help at least other Googlers...</p>
<p>To avoid that webhook to fail deployment,</p>
<ul>
<li>first create SeldonDeployment</li>
<li>then enable interferenceservice on namespace,</li>
<li>lastly add Gateway</li>
</ul>
<pre><code># Create namespace and add a mock classifier REST service
MY_NS=a-namespace
kubectl create namespace $MY_NS
cat <<EOF | kubectl create -n $MY_NS -f -
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: seldon-model
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier_rest:1.3
name: classifier
graph:
children: []
endpoint:
type: REST
name: classifier
type: MODEL
name: example
replicas: 1
EOF
# Enable interferenceservice namespace and add gateway
kubectl label namespace $MY_NS serving.kubeflow.org/inferenceservice=enabled
cat <<EOF | kubectl create -n $MY_NS -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: kubeflow-gateway
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
EOF
# Test REST service
curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8004/seldon/$MY_NS/seldon-model/api/v1.0/predictions -H "Content-Type: application/json"
</code></pre>
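<p>Note the <code>localhost:8004</code> in that curl assumes some local route to the Istio ingress gateway — e.g. a port-forward such as the following (the 8004:80 mapping is an assumption matching the URL above):</p>
<pre><code>kubectl port-forward -n istio-system svc/istio-ingressgateway 8004:80
</code></pre>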
| Jens X Augustsson |
<p>I have the following service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: downstream-service
spec:
type: ClusterIP
selector:
app: downstream
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>which I'd like to load balance based on app version which I've defined as follows in deployments:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: downstream-deployment-v1
labels:
app: downstream
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: downstream
version: v1
template:
metadata:
labels:
app: downstream
version: v1
spec:
containers:
- name: downstream
image: downstream:0.1
ports:
- containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: downstream-deployment-v2
labels:
app: downstream
version: v2
spec:
replicas: 1
selector:
matchLabels:
app: downstream
version: v2
template:
metadata:
labels:
app: downstream
version: v2
spec:
containers:
- name: downstream
image: downstream:0.2
ports:
- containerPort: 80
</code></pre>
<p>Now this routes traffic 50/50 as expected on both of those deployments but I'd like to tweak the weights as per <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRouteDestination" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRouteDestination</a> so I've defined <code>DestinationRule</code> and <code>VirtualService</code>:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: downstream-destination
spec:
host: downstream-service.svc.cluster.local
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: downstream-virtualservice
spec:
hosts:
- downstream-service.svc.cluster.local
http:
- name: "downstream-service-v1-routes"
route:
- destination:
host: downstream-service.svc.cluster.local
subset: v1
weight: 5
- name: "downstream-service-v2-routes"
route:
- destination:
host: downstream-service.svc.cluster.local
subset: v2
weight: 95
</code></pre>
<p>but with this I'm still getting 50/50 split.</p>
<p>I've tried replacing <code>downstream-service.svc.cluster.local</code> with just <code>downstream-service</code>. The result was that without <code>weight</code>s defined in the yaml and with <code>subset</code>s removed I'd get a 50/50 split, but when I added the subsets (without the weights) I'd get all the traffic on the <code>v1</code> instance.</p>
<p>What am I doing wrong here?</p>
<hr>
<p><strong>EDIT</strong></p>
<p>This might be the cause but I'm not sure what to make of it:</p>
<pre><code>$ istioctl x describe service downstream-service
Service: downstream-service
Port: 80/auto-detect targets pod port 80
DestinationRule: downstream-service for "downstream-service"
Matching subsets: v1,v2
No Traffic Policy
VirtualService: downstream-route
2 HTTP route(s)
$ istioctl x describe pod downstream-deployment-v2-69bdfc8fbf-bm22f
Pod: downstream-deployment-v2-69bdfc8fbf-bm22f
Pod Ports: 80 (downstream), 15090 (istio-proxy)
--------------------
Service: downstream-service
Port: 80/auto-detect targets pod port 80
DestinationRule: downstream-service for "downstream-service"
Matching subsets: v2
(Non-matching subsets v1)
No Traffic Policy
VirtualService: downstream-route
1 additional destination(s) that will not reach this pod
Route to non-matching subset v1 for (everything)
$ istioctl x describe pod downstream-deployment-v1-65bd866c47-66p9k
Pod: downstream-deployment-v1-65bd866c47-66p9k
Pod Ports: 80 (downstream), 15090 (istio-proxy)
--------------------
Service: downstream-service
Port: 80/auto-detect targets pod port 80
DestinationRule: downstream-service for "downstream-service"
Matching subsets: v1
(Non-matching subsets v2)
No Traffic Policy
VirtualService: downstream-route
1 additional destination(s) that will not reach this pod
Route to non-matching subset v2 for (everything)
</code></pre>
<p><strong>EDIT2</strong></p>
<p>So I've launched kiali just to see that:</p>
<p><a href="https://i.stack.imgur.com/UqCQE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UqCQE.png" alt="enter image description here"></a></p>
<blockquote>
<p>The weight is assumed to be 100 because there is only one route destination</p>
</blockquote>
<p><a href="https://kiali.io/documentation/v1.13/validations/#_the_weight_is_assumed_to_be_100_because_there_is_only_one_route_destination" rel="nofollow noreferrer">https://kiali.io/documentation/v1.13/validations/#_the_weight_is_assumed_to_be_100_because_there_is_only_one_route_destination</a></p>
<p>Not sure how to fix this though.</p>
| Patryk | <p>Ok, so it seems I missed one big "typo" on my part: it's a single <code>route</code> that has many weighted <code>destination</code>s, not the <code>http</code> section that has many weighted <code>route</code>s.</p>
<p>So the correct version of my <code>VirtualService</code> is as follows:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: downstream-service
spec:
hosts:
- downstream-service
http:
- name: "downstream-service-routes"
route:
- destination:
host: downstream-service
subset: v1
weight: 10
- destination:
host: downstream-service
subset: v2
weight: 90
</code></pre>
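<p>A quick way to eyeball the resulting split (run from any pod inside the mesh, and assuming each version's response identifies itself somehow, e.g. with a version string):</p>
<pre><code>for i in $(seq 1 100); do curl -s http://downstream-service/; done | sort | uniq -c
</code></pre>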
| Patryk |
<p>I am able to log in to the container running in a pod using <code>kubectl exec -t ${POD} /bin/bash --all-namespaces</code>
(POD is the text parameter value in my Jenkins job, in which the user would have entered the pod name before running the job). Now my question is: since I am able to log in to the container, how do I run my test.sh file from inside the container?
Flow: </p>
<p>Step 1: Run a Jenkins job which should log in to a docker container running inside the pods</p>
<p>Step 2: From the container, execute the test.sh script.</p>
<p>test.sh</p>
<p>echo "This is demo file"</p>
| Suresh Ravi | <p>There is no need to have two steps; one step is sufficient. I believe the below should get the job done (note that <code>kubectl exec</code> takes a specific namespace via <code>-n</code> rather than <code>--all-namespaces</code>, and the command to run goes after <code>--</code>):</p>
<p><code>kubectl exec -n ${NAMESPACE} ${POD} -- /path/to/script/test.sh</code></p>
<p>Below is the reference form Kubernetes <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>kubectl exec my-pod -- ls / # Run command in
existing pod (1 container case)</p>
<p>kubectl exec my-pod -c my-container -- ls / # Run command in
existing pod (multi-container case)</p>
</blockquote>
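<p>If <code>test.sh</code> is not already baked into the image, one option (a sketch; the namespace variable and paths are placeholders) is to copy it into the pod first with <code>kubectl cp</code> and then execute it:</p>
<pre><code># copy the script into the running container, then run it
kubectl cp test.sh ${NAMESPACE}/${POD}:/tmp/test.sh
kubectl exec -n ${NAMESPACE} ${POD} -- sh /tmp/test.sh
</code></pre>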
| asolanki |
<p>I have several Persistent Volume Claims in Google Kubernetes Engine that I am not sure if they are still used or not. How can I find out which pod they are attached to or is safe to delete them?</p>
<p>Google Kubernetes UI tells me they are bound but not to which container. Or maybe it means they are bound to a Volume Claim.</p>
<p>kubectl describe did not return the name of the pods either.</p>
<pre><code> kubectl describe pv xxxxxx-id
</code></pre>
<p><a href="https://i.stack.imgur.com/F2bhQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F2bhQ.png" alt="enter image description here" /></a></p>
| David Dehghan | <p>this gives you the PVC for each pod</p>
<pre><code>kubectl get pods --all-namespaces -o=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec | select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'
</code></pre>
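<p>To check whether one particular PVC is still referenced (here <code>my-claim</code> is a placeholder), you can pipe the same output through <code>grep</code>:</p>
<pre><code>kubectl get pods --all-namespaces -o=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec | select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }' | grep my-claim
</code></pre>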
| David Dehghan |
<p>I am tasked with migrating AWS lambda microservices to Kubernetes. For simplicity, there are two service endpoints: <code>/admin</code> and <code>/user</code> where you can GET or POST a request to get something done.</p>
<p>You have to be in <code>admin</code> group (defined in an external authZ provider) to hit <code>/admin</code> endpoint, or otherwise, you get 401. You have to be in <code>user</code> group to have access to <code>/user</code> endpoint.</p>
<p>I will have each endpoint exposed as a service running in a docker container. The question is - what is the correct way to add routing and path-based authorization in Kubernetes?</p>
<p>Like, if I go to <code>/admin</code> in the browser, I need Kubernetes to check if I am in <code>admin</code> group and then route me to <code>admin</code> service; otherwise, it should return 401.</p>
<p>I can write this router myself, but I want to check if there is a built-in solution in Kubernetes for that.</p>
| Andrey | <blockquote>
<p>check if there is a built-in solution in Kubernetes</p>
</blockquote>
<p>Nope, there's no built-in solution for L7 network policies. <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policies</a> in Kubernetes are at L4 level, so things like rate limiting, path based firewall rules etc are not possible. Although you could look into a Service Mesh, like <a href="https://linkerd.io/" rel="nofollow noreferrer">Linkerd</a>, <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> or even using a different CNI plugin based on <code>eBPF</code> such as <a href="https://cilium.io/blog/2018/09/19/kubernetes-network-policies/" rel="nofollow noreferrer">Cilium</a>.</p>
<p>Cilium has a CRD <code>CiliumNetworkPolicy</code> which would help you with your usecase. You can put any proxy like Nginx/Caddy/HAProxy in front of it or an API Gateway like Kong, if you want to offload the authentication/authorization process. You can apply the following network policy, which would restrict the <code>/admin</code> endpoint on any pod with label <code>app: customapp</code> and would only allow it from a pod with label <code>app: proxyapp</code>. </p>
<pre><code>apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-from-proxy"
specs:
- endpointSelector:
matchLabels:
app: customapp
ingress:
- fromEndpoints:
- matchLabels:
app: proxyapp
toPorts:
- ports:
- port: "8080"
protocol: "TCP"
rules:
http:
- method: "GET"
path: "/admin"
</code></pre>
| mr-karan |
<p>I have been attempting to get GRPC's load balancing working in my Java application deployed to a Kubernetes cluster but I have not been having too much success. There does not seem to be too much documentation around this, but from examples online I can see that I should now be able to use '.defaultLoadBalancingPolicy("round_robin")' when setting up the ManagedChannel (in later versions of GRPC Java lib).</p>
<p>To be more specific, I am using version 1.34.1 of the GRPC Java libraries. I have created two Spring Boot (v2.3.4) applications, one called grpc-sender and one called grpc-receiver.</p>
<p>grpc-sender acts as a GRPC client and defines a (Netty) ManagedChannel as:</p>
<pre><code>@Bean
public ManagedChannel greetingServiceManagedChannel() {
String host = "grpc-receiver";
int port = 6565;
return NettyChannelBuilder.forAddress(host, port)
.defaultLoadBalancingPolicy("round_robin")
.usePlaintext().build();
}
</code></pre>
<p>Then grpc-receiver acts as the GRPC server:</p>
<pre><code>Server server = ServerBuilder.forPort(6565)
.addService(new GreetingServiceImpl()).build();
</code></pre>
<p>I am deploying these applications to a Kubernetes cluster (running locally in minikube for the time being), and I have created a Service for the grpc-receiver application as a headless service, so that GRPC load balancing can be achieved.</p>
<p>To test failed requests, I do two things:</p>
<ul>
<li>kill one of the grpc-receiver pods during the execution of a test run - e.g. when I have requested grpc-sender to send, say, 5000 requests to grpc-receiver. Grpc-sender does detect that the pod has been killed and does refresh its list of receiver pods, and routes future requests to the new pods. As expected, some of the requests that were in flight during the kill of the pod fail with GRPC Status UNAVAILABLE.</li>
<li>have some simple logic in grpc-receiver that generates a random number and if that random number is below, say, 0.2, return Grpc Status INTERNAL rather than OK.</li>
</ul>
<p>With both the above, I can get a proportion of the requests during a test run to fail. Now I am trying to get gRPC's retry mechanism to work. From reading the sparse documentation, I am doing the following:</p>
<pre><code>return NettyChannelBuilder.forAddress(host, port)
.defaultLoadBalancingPolicy("round_robin")
.enableRetry()
.maxRetryAttempts(10)
.usePlaintext().build();
</code></pre>
<p>However this seems to have no effect and I cannot see that failed requests are retried at all.</p>
<p>I see that this is still marked as an @ExperimentalApi feature, so should it work as expected and has it been implemented?</p>
<p>If so, is there something obvious I am missing? Anything else I need to do to get retries working?</p>
<p>Any documentation that explains how to do this in more detail?</p>
<p>Thanks very much in advance...</p>
| Daniel Western | <p>ManagedChannelBuilder.enableRetry().maxRetryAttempts(10) is not sufficient to make retries happen. The retry needs a service config with a RetryPolicy defined. One way is to set a default service config with a RetryPolicy; please see the retry example in <a href="https://github.com/grpc/grpc-java/tree/v1.35.0/examples" rel="nofollow noreferrer">https://github.com/grpc/grpc-java/tree/v1.35.0/examples</a></p>
<p>There's been some confusion on the javadoc of maxRetryAttempts(), and it's being clarified in <a href="https://github.com/grpc/grpc-java/pull/7803" rel="nofollow noreferrer">https://github.com/grpc/grpc-java/pull/7803</a></p>
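<p>For completeness, here is a sketch of what that could look like in the channel setup from the question, modeled on the grpc-java "retrying" example (the fully-qualified service name is a placeholder — take it from your .proto):</p>
<pre><code>import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

@Bean
public ManagedChannel greetingServiceManagedChannel() {
    Map<String, Object> retryPolicy = new HashMap<>();
    retryPolicy.put("maxAttempts", 10D); // service-config numbers are doubles
    retryPolicy.put("initialBackoff", "0.5s");
    retryPolicy.put("maxBackoff", "30s");
    retryPolicy.put("backoffMultiplier", 2D);
    retryPolicy.put("retryableStatusCodes", Arrays.asList("UNAVAILABLE"));

    Map<String, Object> methodConfig = new HashMap<>();
    methodConfig.put("name", Collections.singletonList(
            Collections.singletonMap("service", "mypackage.GreetingService"))); // placeholder
    methodConfig.put("retryPolicy", retryPolicy);

    return NettyChannelBuilder.forAddress("grpc-receiver", 6565)
            .defaultLoadBalancingPolicy("round_robin")
            .defaultServiceConfig(Collections.singletonMap(
                    "methodConfig", Collections.singletonList(methodConfig)))
            .enableRetry()
            .usePlaintext()
            .build();
}
</code></pre>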
| user675693 |
<p>I just tested Rancher RKE, upgrading Kubernetes 13.xx to 14.xx. During the upgrade, an already running nginx Pod got restarted. Is this expected behavior? </p>
<p>Can we have Kubernetes cluster upgrades without user pods restarting? </p>
<p>Which tool supports uninterrupted upgrades?</p>
<p>What are the downtimes that we can never avoid? (apart from the control plane)</p>
| Ijaz Ahmad | <p>The default way Kubernetes upgrades is by doing a rolling upgrade of the nodes, one at a time.</p>
<p>This works by draining and cordoning (marking the node as unavailable for new deployments) each node that is being upgraded, so that there are no pods running on that node.</p>
<p>It does that by creating a new revision of the existing pods on another node (if one is available), and when the new pod starts running (and responds to the readiness/health probes), it stops and removes the old pod (sending <code>SIGTERM</code> to each pod container) on the node that was being upgraded.</p>
<p>The amount of time Kubernetes waits for a pod to shut down gracefully is controlled by the <code>terminationGracePeriodSeconds</code> on the pod spec; if the pod takes longer than that, it is killed with <code>SIGKILL</code>.</p>
<p>The point is, to have a graceful Kubernetes upgrade, you need to have enough nodes available, and your pods must have correct liveness and readiness probes (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a>).</p>
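<p>As a rough sketch (the image name and probe endpoint are placeholders), the relevant pieces of a pod spec look like this:</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 60   # time allowed for graceful shutdown before SIGKILL
  containers:
  - name: app
    image: my-app:latest              # placeholder
    readinessProbe:                   # gates traffic to the replacement pod
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 5
    livenessProbe:                    # restarts the container if it becomes unhealthy
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
</code></pre>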
<p>Some interesting material that is worth a read: </p>
<p><a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime</a> (specific to GKE but has some insights)<br>
<a href="https://blog.gruntwork.io/zero-downtime-server-updates-for-your-kubernetes-cluster-902009df5b33" rel="nofollow noreferrer">https://blog.gruntwork.io/zero-downtime-server-updates-for-your-kubernetes-cluster-902009df5b33</a></p>
| jonathancardoso |
<p>In official document, the default replicas of argo-server and workflow-controller is set to 1. Should it be set to 3 in the production environment for high availability?</p>
| zheng cy | <p><a href="https://github.com/argoproj/argo/blob/master/docs/scaling.md" rel="nofollow noreferrer">According to Argo's scaling documentation</a>, the Argo Workflows controller cannot be horizontally scaled. In other words, you should only have one replica.</p>
<p>You can have multiple Argo installations (called "instances" in their documentation) if you're okay splitting your work up that way. You can also vertically scale the single controller replica to better handle large workflows or high numbers of workflows.</p>
<p>UPDATE: the latest version of argo-workflows supports multiple replicas for the controller with leader election which improves availability (reference: <a href="https://blog.argoproj.io/argo-workflows-v3-0-4d0b69f15a6e" rel="nofollow noreferrer">https://blog.argoproj.io/argo-workflows-v3-0-4d0b69f15a6e</a>)</p>
| crenshaw-dev |
<p>I'm exploring an easy way to read K8S resources in an Argo workflow. The current documentation focuses mainly on create/patch with conditions (<a href="https://argoproj.github.io/argo/examples/#kubernetes-resources" rel="nofollow noreferrer">https://argoproj.github.io/argo/examples/#kubernetes-resources</a>), while I'm curious if it's possible to perform "action: get", extract part of the resource state (or the full resource) and pass it downstream as an artifact or result output. Any ideas?</p>
| Oleksandr | <p><strong>Update:</strong></p>
<p><code>action: get</code> is now supported: <a href="https://github.com/argoproj/argo-workflows/blob/246d4f44013b545e963106a9c43e9cee397c55f7/examples/k8s-wait-wf.yaml#L46" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/blob/246d4f44013b545e963106a9c43e9cee397c55f7/examples/k8s-wait-wf.yaml#L46</a></p>
<p><strong>Original answer:</strong></p>
<p><code>action: get</code> is not a feature available from Argo.</p>
<p>However, it's easy to use <code>kubectl</code> from within a Pod and then send the JSON output to an output parameter. This uses a BASH script to send the JSON to the <code>result</code> output parameter, but an explicit output parameter or an output artifact are also viable options.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: kubectl-bash-
spec:
entrypoint: kubectl-example
templates:
- name: kubectl-example
steps:
- - name: generate
template: get-workflows
- - name: print
template: print-message
arguments:
parameters:
- name: message
value: "{{steps.generate.outputs.result}}"
- name: get-workflows
script:
image: bitnami/kubectl:latest
command: [bash]
source: |
some_workflow=$(kubectl get workflows -n argo | sed -n 2p | awk '{print $1;}')
kubectl get workflow "$some_workflow" -n argo -ojson
- name: print-message
inputs:
parameters:
- name: message
container:
image: alpine:latest
command: [sh, -c]
args: ["echo result was: '{{inputs.parameters.message}}'"]
</code></pre>
<p>Keep in mind that <code>kubectl</code> will run with the permissions of the Workflow's ServiceAccount. Be sure to <a href="https://github.com/argoproj/argo/blob/master/docs/service-accounts.md" rel="nofollow noreferrer">submit the Workflow using a ServiceAccount</a> which has access to the resource you want to get.</p>
| crenshaw-dev |
<p>Is there a way to identify the url from where container executing in Kubernetes POD was pulled from ?</p>
<p>The Kubernetes Image <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">doc</a> indicate only image name is provided as part of Pod specification.</p>
<p>I would like to identify if image was pulled from Google Container Registry, Amazon ECR, IBM Cloud Container Registry etc.</p>
| alwaysAStudent | <p>You can use the image id to understand that. Something like</p>
<pre><code>kubectl get pod pod-name-123 -o json | jq '.status.containerStatuses[] | .imageID'
</code></pre>
<p>will return something like:</p>
<pre><code>"docker-pullable://redacted.dkr.ecr.eu-west-2.amazonaws.com/docker-image-name@sha256:redacted"
</code></pre>
<p>In my example it was pulled from AWS ECR.</p>
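<p>To list the image IDs for every container across all pods in one go, a <code>jsonpath</code> variant like this also works:</p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].imageID}{"\n"}{end}'
</code></pre>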
| Federkun |
<p>I want to restart deployment pod by patching ENV variable in deployment. Here is my code:</p>
<pre><code>String PATCH_STR = "[{\"op\":\"replace\",\"path\":\"/spec/template/spec/containers/0/env/8/UPDATEDON\",\"value\": \"%d\"}]";
final String patchStr = String.format(PATCH_STR, System.currentTimeMillis());
AppsV1Api api = new AppsV1Api(apiClient);
V1Deployment deploy = PatchUtils.patch(V1Deployment.class,
() -> api.patchNamespacedDeploymentCall(
deploymentName,
namespace,
new V1Patch(patchStr),
null,
null,
null, // field-manager is optional
null,
null),
V1Patch.PATCH_FORMAT_JSON_PATCH,
apiClient);
</code></pre>
<p>This code executes successfully but it does not start pod. Here is an equivalent kubectl command (it doesn't patch, so pod doesn't start):</p>
<pre><code>kubectl -n aaaac7bg7b6nsaaaaaaaaaaoyu patch deployment aaaaaaaaxldpcswy2bl3jee6umwck72onc55wimyvldrfc442rokz3cpll2q -p '{"spec":{"containers":[{"env":[{"name":"UPDATEDON","value":"1645099482000"}]}]}}'
</code></pre>
<p>If I execute following command, it restarts pod:</p>
<pre><code>kubectl -n aaaac7bg7b6nsaaaaaaaaaaoyu set env deployment/aaaaaaaaxldpcswy2bl3jee6umwck72onc55wimyvldrfc442rokz3cpll2q UPDATEDON=1645099482000
</code></pre>
<p>I thought of using <code>V1EnvVar/V1EnvVarBuilder</code> but I couldn't find equivalent java code.</p>
| Pushpendra | <p>There are a couple of issues with your example. In general, if you <em>successfully update</em> the environment variables in the pod template of your deployment, the Kubernetes operator will recognize the change and start a new pod to reflect the change.</p>
<p>When you perform the update with a JSON patch by specifying the operation (<code>replace</code>), the path, and the value, the path must directly match the property in the deployment manifest. In your case, since you want to change the <code>value</code> of the environment variable, this would be:</p>
<pre><code>/spec/template/spec/containers/0/env/8/value
</code></pre>
<p>There is no need to repeat the name of the environment variable. The index, here <code>8</code>, already signifies which variable you want to update, so there is no need to repeat <code>UPDATEDON</code>.</p>
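<p>In the Java client from the question, that means the patch string just needs its path corrected (index <code>8</code> still assumes <code>UPDATEDON</code> is the ninth env entry, as in the original code):</p>
<pre><code>String PATCH_STR = "[{\"op\":\"replace\",\"path\":\"/spec/template/spec/containers/0/env/8/value\",\"value\":\"%d\"}]";
</code></pre>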
<p>The equivalent command with <code>kubectl</code> would be</p>
<pre><code>kubectl -n aaaac7bg7b6nsaaaaaaaaaaoyu patch deployment aaaaaaaaxldpcswy2bl3jee6umwck72onc55wimyvldrfc442rokz3cpll2q \
--type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/env/0/value", "value": "1645099482000"}]'
</code></pre>
<p>Alternatively, instead of using a JSON patch, you can used the default patch type, like you did in your example. However, you forgot to add the outermost <code>spec/template</code> layers. Addionaly, you also need the given identify the container by specifying it's name. Here I've used <code>test</code> as the container's name.</p>
<pre><code>kubectl -n aaaac7bg7b6nsaaaaaaaaaaoyu patch deployment aaaaaaaaxldpcswy2bl3jee6umwck72onc55wimyvldrfc442rokz3cpll2q \
-p '{"spec": {"template": {"spec": {"containers": [{"name": "test", "env":[{"name":"UPDATEDON","value":"1645099482000"}]}]}}}}'
</code></pre>
<p>This way of updating has the advantage that you identify the container and the environment variable by their names, so you don't need to rely on the ordering as would be the case with the index-based JSON patch path.</p>
| sauerburger |
<p>I installed the default Helm chart of Argo Workflow, only configuring init.serviceAccount as argo-sa, which I have created (a ServiceAccount with sufficient authorization).
However, every Workflow runs as the default ServiceAccount, and I can't figure out where that setting is configured.
According to the README provided by the Argo Helm chart, specifying <code>init.serviceAccount</code> as the ServiceAccount which I have created should have solved the problem.
The workaround is to modify the default ServiceAccount, but that doesn't seem like a great solution.
Is there anything that I understood incorrectly? Thanks in advance.</p>
| Piljae Chae | <p>The Argo installation does not control which ServiceAccount Workflows use. According to the <a href="https://github.com/argoproj/argo/blob/3507c3e6e8e9c420a6028a43b930a3ef6b221705/docs/service-accounts.md" rel="noreferrer">Argo docs</a>,</p>
<blockquote>
<p>When no ServiceAccount is provided [when the Workflow is submitted], Argo will use the default
ServiceAccount from the namespace from which it is run, which will
almost always have insufficient privileges by default.</p>
</blockquote>
<p>If you are using the <a href="https://argoproj.github.io/argo/cli/argo_submit/" rel="noreferrer">Argo CLI to submit Workflows</a>, you can specify the ServiceAccount with <code>--serviceaccount</code>.</p>
<p>If you are using <code>kubectl apply</code> or some other tool to install Workflows, you can <a href="https://github.com/argoproj/argo/blob/7abead2a564876a695201cf7539bd20d1fc67888/docs/workflow-rbac.md" rel="noreferrer">set the ServiceAccount name in the yaml definition</a>. See an <a href="https://github.com/argoproj/argo/blob/da43086a19f88c0b7ac71fdb888f913fd619962b/examples/cron-backfill.yaml#L52" rel="noreferrer">example from the documentation</a>, or this abbreviated example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
spec:
serviceAccountName: some-serviceaccount
</code></pre>
<p>As a convenience, the Argo Helm chart <a href="https://github.com/argoproj/argo-helm/blob/master/charts/argo/values.yaml#L28-L33" rel="noreferrer">provides a way to create a ServiceAccount</a> with which to run your Workflows. But it does not actually <em>cause</em> your Workflows to use that ServiceAccount. You have to specify it when you submit the Workflow.</p>
<pre class="lang-yaml prettyprint-override"><code> serviceAccount:
create: false # Specifies whether a service account should be created
annotations: {}
name: "argo-workflow" # Service account which is used to run workflows
rbac:
create: false # adds Role and RoleBinding for the above specified service account to be able to run workflows
</code></pre>
| crenshaw-dev |
<p>I would like to deploy <a href="https://www.keycloak.org/" rel="nofollow noreferrer">Keycloak</a> on my K8S cluster. In addition, the prerequisite for using Keycloack is a database, so I am going to use postgresql. </p>
<p>Before deploying Keycloak, the database has to be up and running. For such as scenario, I think, I should use <a href="https://argoproj.github.io/docs/argo/readme.html" rel="nofollow noreferrer">Argo Workflow</a>. </p>
<p>My question is, how to trigger the ArgoCD, after the database is up and running through the Argo Workflow? Or how to combine Argo Workflow with ArgoCD? </p>
| softshipper | <p>This should be possible without using Argo Workflow to spin up a Postgres server. ArgoCD supports multiple ways to deploy a "package" of resources that includes both Keycloak and Postgres.</p>
<p>For example, you could use <a href="https://github.com/codecentric/helm-charts/tree/master/charts/keycloak#prerequisites-details" rel="nofollow noreferrer">codecentric's Keycloak Helm chart</a>, which optionally includes a PostgreSQL dependency. </p>
<p>Then you can follow <a href="https://argoproj.github.io/argo-cd/user-guide/helm/" rel="nofollow noreferrer">ArgoCD's documentation on deploying Helm charts</a>.</p>
<p>While Argo Workflows <em>does</em> help manage sequential tasks, those tasks are usually ephemeral - they execute once and disappear. For a more persistent Postgres server, you want to make it part of your deployment rather than a one-off task.</p>
| crenshaw-dev |
<p>In <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/</a>
In MetalLB mode, one node attracts all the traffic for the ingress-nginx controller.
With NodePort, we can also gather all the traffic and load-balance it across pods via a Service.</p>
<p>What is the difference between NodePort and MetalLB?</p>
| yasin lachini | <p>A Nodeport offers access to a service through a port on the node (hence node+port). A port is allocated that you can access the service through on any node in the cluster.</p>
<p>MetalLB is a load balancer for on-prem clusters. It allocates services with separate dedicated IP addresses allocated from a pool. So, if you want to access a service (an ingress controller or something else) on a dedicated IP then MetalLB allows you to do this.</p>
<p>MetalLB works in two ways, either BGP or Layer2 ARP. The latter is easier to set up if you're working on a "lab" environment. Basically the MetalLB responds to ARP requests sent by clients trying to connect to a service to which it's allocated an IP.</p>
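<p>For reference, a minimal Layer2 setup looks like this (a sketch assuming the legacy ConfigMap-based configuration of MetalLB; the address range is a placeholder for free IPs on your network):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
</code></pre>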
| starfry |
<p>I'm currently investigating using dynamically provisioned persistent disks in the GCE application: In my application I have 1-n pods, where each pod contains a single container that needs rw access to a persistent volume. The volume needs to be pre-populated with some data which is copied from a bucket.</p>
<p>What I'm confused about is: if the persistent disk is dynamically allocated, how do I ensure that data is copied onto it before it is mounted to my pod? The copying of the data is infrequent but regular; the only time I might need to do this out of sequence is if a pod falls over and I need a new persistent disk and pod to take its place.</p>
<p>How do I ensure the persistent disk is pre populated before it is mounted to my pod?</p>
<p>My current thought is to have the bucket mounted to the pod, and as part of the startup of the pod, copy from the bucket to the persistent disk. This creates another problem, in that the bucket cannot be write enabled and mounted to multiple pods.</p>
<p>Note: I'm using a seperate persistent disk as I need it to be an ssd for speed.</p>
| Andy | <p>Looks like the copy is a good candidate to be done as an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">"init container"</a>.</p>
<p>That way on every pod start, the "init container" would connect to the GCS bucket and check the status of the data, and if required, copy the data to the dynamically assigned PersistentDisk. </p>
<p>When completed, the main container of the pod starts, with data ready for it to use. By using an "init container" you are guaranteeing that: </p>
<ol>
<li><p>The copy is complete before your main pod container starts.</p></li>
<li><p>The main container does not need access to the GCS, just the dynamically created PV.</p></li>
<li><p>If the "init container" fails to complete successfully then your pod would fail to start and be in an error state.</p></li>
</ol>
<p>Used in conjunction with a <code>StatefulSet</code> of N pods this approach works well, in terms of being able to initialize a new replica with a new disk, and keep persistent data across main container image (code) updates.</p>
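<p>A minimal sketch of the pattern (the image, bucket, and claim names are placeholders, and the <code>gsutil</code>-based copy is just one way to pull from a GCS bucket):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-seeded-data
spec:
  initContainers:
  - name: seed-data
    image: google/cloud-sdk:slim                # provides gsutil
    command: ["sh", "-c", "gsutil -m rsync -r gs://my-bucket/data /data"]
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: my-app:latest                        # placeholder
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-ssd-pvc                     # dynamically provisioned SSD PVC
</code></pre>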
| Paul Annetts |
<p>I have Fluent Bit deployed as a sidecar. This Fluent Bit instance has an output of type Forward that is supposed to send the logs to a FluentD deployed as a DaemonSet.</p>
<p>The implementation works when using the PodIP of FluentD as host, but I get Connection refused when using the Service hostname from Kubernetes.</p>
<p>This is the error when using the Upstream approach:</p>
<pre><code>[error] [net] TCP connection failed: fluentd.logging.svc.cluster.local:24224 (Connection refused)
[error] [net] socket #33 could not connect to fluentd.logging.svc.cluster.local:24224
[debug] [upstream] connection #-1 failed to fluentd.logging.svc.cluster.local:24224
[error] [output:forward:forward.0] no upstream connections available
</code></pre>
<p>This is the error when using the regular Host approach:</p>
<pre><code>[error] [output:forward:forward.0] could not write forward header
</code></pre>
<p>I tried both using the Host parameter in Forward for Fluentbit, and also the Upstream functionality with the same outcome.</p>
<p>No network policies in place. This is the configuration with Upstream. With Host it will have Host and Port instead of Upstream in the OUTPUT section.</p>
<pre><code>[SERVICE]
Daemon Off
Flush 5
Log_Level debug
Parsers_File parsers.conf
Parsers_File custom_parsers.conf
HTTP_Server Off
[INPUT]
Name tail
Path /var/app-logs/*
Parser json
Tag app-logs.*
Mem_Buf_Limit 5MB
Skip_Long_Lines On
[OUTPUT]
Name forward
Match app-logs.*
Host fluentd.logging.svc.cluster.local
Port 24244
[PARSER]
Name json
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
</code></pre>
<p>The FluentD deployment has a Service with the 24244 TCP port connected with the container TCP port 24244, where FluentD is listening.</p>
<p>A simple "nc" test also shows that I'm able to connect with the PodIP, but not to the Service hostname.</p>
<p>There's also an additional port in my FluentD daemonset which is for Prometheus metrics, and I can "nc" to that one using the host name.</p>
<p>This is the FluentD service</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fluentd ClusterIP 10.102.255.48 <none> 24231/TCP,24244/TCP 4d6h
</code></pre>
<p>This is the FluentD deployment</p>
<pre><code>Containers:
fluentd:
Container ID: xxxx
Image: xxxx
Image ID: xxxx
Ports: 24231/TCP, 24244/TCP
Host Ports: 0/TCP, 0/TCP
</code></pre>
<p>This is the FluentD forward listener config</p>
<pre><code><source>
@type forward
port 24224
bind 0.0.0.0
@label @applogs
tag applogs.*
</source>
</code></pre>
<p>Am I missing something obvious here?</p>
| codiaf | <p>Ok, stupid, stupid mistake: there was a typo in the port number, so the port configured in Fluent Bit didn't match the one defined in the Kubernetes Service (and the port Fluentd was actually listening on, 24224) -.-</p>
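<p>For reference, the pieces just need to agree end to end: Fluentd listens on 24224, so the Service should expose and target 24224, and the Fluent Bit output should use the same port (a sketch of the corrected output section):</p>
<pre><code>[OUTPUT]
    Name          forward
    Match         app-logs.*
    Host          fluentd.logging.svc.cluster.local
    Port          24224
</code></pre>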
| codiaf |
<p>I have one workflow in which I'm using the <code>jsonpath</code> function on an output parameter to extract a specific value from a JSON string, but it is failing with the error <code>Error (exit code 255)</code></p>
<p>Here is my workflow</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: wf-dev-
spec:
entrypoint: main
templates:
- name: main
dag:
tasks:
- name: dev-create
templateRef:
name: dev-create-wft
template: main
arguments:
parameters:
- name: param1
value: "val1"
- name: dev-outputs
depends: dev-create.Succeeded
templateRef:
name: dev-outputs-wft
template: main
arguments:
parameters:
- name: devoutputs
value: "{{=jsonpath(tasks.dev-create.outputs.parameters.devoutputs, '$.alias.value')}}"
</code></pre>
<p>In the above workflow task <code>dev-create</code> invokes another workflowTemplate <code>dev-create-wft</code> which returns the output of another workflowTemplate</p>
<p>Here is my workflowTemplate</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: dev-create-wft
spec:
entrypoint: main
templates:
- name: main
outputs:
parameters:
- name: devoutputs
valueFrom:
expression: "tasks['dev1'].outputs.parameters.devoutputs"
inputs:
parameters:
- name: param1
dag:
tasks:
- name: dev1
templateRef:
name: fnl-dev
template: main
arguments:
parameters:
- name: param1
value: "{{inputs.parameters.param1}}"
</code></pre>
<p>The returned json output looks like this</p>
<pre><code>{
"alias": {
"value": "testing:dev1infra",
"type": "string",
"sensitive": false
},
"match": {
"value": "dev1infra-testing",
"type": "string",
"sensitive": false
}
}
</code></pre>
<p>Is the <code>jsonpath</code> function supported in workflows? The reason I'm asking is that it works when I use the same function in another workflowTemplate, <code>dev-outputs-wft</code>.</p>
<p>What could be the issue?</p>
| Biru | <p>When an expression fails to evaluate, Argo Workflows simply does not substitute the expression with its evaluated value. Argo Workflows passes the expression <em>as if it were the parameter</em>.</p>
<p><code>{{=}}</code> "expression tag templates" in Argo Workflows must be written according to the <a href="https://github.com/antonmedv/expr/blob/master/docs/Language-Definition.md" rel="noreferrer">expr language spec</a>.</p>
<p>In simple tag templates, Argo Workflows itself does the interpreting. So hyphens in parameter names are allowed. For example, <code>value: "{{inputs.parameters.what-it-is}}"</code> is evaluated by Argo Workflows to be <code>value: "over 9000!"</code>.</p>
<p>But in expression tag templates, expr interprets hyphens as minus operators. So <code>value: "{{=inputs.parameters.what-it-is}}"</code> looks like a really weird mathematical expression, fails, and isn't substituted. The workaround is to use <code>['what-it-is']</code> to access the appropriate map item.</p>
<p>My guess is that your expression is failing, Argo Workflows is passing the expression to <code>dev-outputs-wft</code> un-replaced, and whatever shell script is receiving that parameter is breaking.</p>
<p>If I'm right, the fix is easy:</p>
<pre><code> - name: dev-outputs
depends: dev-create.Succeeded
templateRef:
name: dev-outputs-wft
template: main
arguments:
parameters:
- name: devoutputs
- value: "{{=jsonpath(tasks.dev-create.outputs.parameters.devoutputs, '$.alias.value')}}"
+ value: "{{=jsonpath(tasks['dev-create'].outputs.parameters.devoutputs, '$.alias.value')}}"
</code></pre>
| crenshaw-dev |
<p>I've recently started working with Kubernetes clusters. The flow of network calls for a given Kubernetes service in our cluster is something like the following:</p>
<p>External Non-K8S Load Balancer -> Ingress Controller -> Ingress Resource -> Service -> Pod</p>
<p>For a given service, there are two replicas. By looking at the logs of the containers in the replicas, I can see that calls are being routed to different pods. As far as I can see, we haven't explicitly set up any load-balancing policies anywhere for our services in Kubernetes.</p>
<p>I've got a few questions:</p>
<p>1) Is there a default load-balancing policy for K8S? I've read about kube-proxy and random routing. It definitely doesn't appear to be round-robin.
2) Is there an obvious way to specify load balancing rules in the Ingress resources themselves? On a per-service basis?</p>
<p>Looking at one of our Ingress resources, I can see that the 'loadBalancer' property is empty:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/rewrite-target: /
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/rewrite-target":"/","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"example-service-ingress","namespace":"member"},"spec":{"rules":[{"host":"example-service.x.x.x.example.com","http":{"paths":[{"backend":{"serviceName":"example-service-service","servicePort":8080},"path":""}]}}]}}
nginx.ingress.kubernetes.io/rewrite-target: /
creationTimestamp: "2019-02-13T17:49:29Z"
generation: 1
name: example-service-ingress
namespace: x
resourceVersion: "59178"
selfLink: /apis/extensions/v1beta1/namespaces/x/ingresses/example-service-ingress
uid: b61decda-2fb7-11e9-935b-02e6ca1a54ae
spec:
rules:
- host: example-service.x.x.x.example.com
http:
paths:
- backend:
serviceName: example-service-service
servicePort: 8080
status:
loadBalancer:
ingress:
- {}
</code></pre>
<p>I should specify - we're using an on-prem Kubernetes cluster, rather than on the cloud.</p>
<p>Cheers!</p>
| Danny Noam 父 | <p>The "internal load balancing" between Pods of a Service has already been covered in <a href="https://stackoverflow.com/questions/48789227/does-clusterip-service-distributes-requests-between-replica-pods">this question from a few days ago</a>.</p>
<p>Ingress isn't really doing anything special (unless you've been hacking in the NGINX config it uses) - it will use the same Service rules as in the linked question.</p>
<p>If you want or need fine-grained control of how pods are routed to within a service, it is possible to extend Kubernetes' features - I recommend you look into the traffic management features of <a href="https://istio.io" rel="nofollow noreferrer">Istio</a>, as one of its features is to be able to dynamically control how much traffic different pods in a service receive.</p>
| Paul Annetts |
<p>I have two WorkflowTemplates, <code>generate-output</code> and <code>lib-read-outputs</code>, and one Workflow, <code>output-paramter</code>, as follows</p>
<ol>
<li><code>generate-output.yaml</code></li>
</ol>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: generate-output
spec:
entrypoint: main
templates:
- name: main
dag:
tasks:
# Generate Json for Outputs
- name: read-outputs
arguments:
parameters:
- name: outdata
value: |
{
"version": 4,
"terraform_version": "0.14.11",
"serial": 0,
"lineage": "732322df-5bd43-6e92-8f46-56c0dddwe83cb4",
"outputs": {
"key_alias_arn": {
"value": "arn:aws:kms:us-west-2:123456789:alias/tetsing-key",
"type": "string",
"sensitive": true
},
"key_arn": {
"value": "arn:aws:kms:us-west-2:123456789:alias/tetsing-key",
"type": "string",
"sensitive": true
}
}
}
template: retrieve-outputs
# Create Json
- name: retrieve-outputs
inputs:
parameters:
- name: outdata
script:
image: python
command: [python]
env:
- name: OUTDATA
value: "{{inputs.parameters.outdata}}"
source: |
import json
import os
OUTDATA = json.loads(os.environ["OUTDATA"])
with open('/tmp/templates_lst.json', 'w') as outfile:
outfile.write(str(json.dumps(OUTDATA['outputs'])))
volumeMounts:
- name: out
mountPath: /tmp
volumes:
- name: out
emptyDir: { }
outputs:
parameters:
- name: message
valueFrom:
path: /tmp/templates_lst.json
</code></pre>
<ol start="2">
<li><code>lib-read-outputs.yaml</code></li>
</ol>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: lib-read-outputs
spec:
entrypoint: main
templates:
- name: main
dag:
tasks:
# Read Outputs
- name: lib-wft
templateRef:
name: generate-output
template: main
</code></pre>
<ol start="3">
<li><code>output-paramter.yaml</code></li>
</ol>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: output-paramter-
spec:
entrypoint: main
templates:
- name: main
dag:
tasks:
# Json Output data task1
- name: wf
templateRef:
name: lib-read-outputs
template: main
- name: lib-wf2
dependencies: [wf]
arguments:
parameters:
- name: outputResult
value: "{{tasks.wf.outputs.parameters.message}}"
template: whalesay
- name: whalesay
inputs:
parameters:
- name: outputResult
container:
image: docker/whalesay:latest
command: [cowsay]
args: ["{{inputs.parameters.outputResult}}"]
</code></pre>
<p>I am trying to pass the output parameters generated in workflowTemplate <code>generate-output</code> to workflow <code>output-paramter</code> via <code>lib-read-outputs</code></p>
<p>When I execute them, it's giving the following error - <code>Failed: invalid spec: templates.main.tasks.lib-wf2 failed to resolve {{tasks.wf.outputs.parameters.message}}</code></p>
| Biru | <h1>DAG and steps templates don't produce outputs by default</h1>
<p>DAG and steps templates do not automatically produce their child templates' outputs, even if there is only one child template.</p>
<p>For example, the <code>no-parameters</code> template here does not produce an output, even though it invokes a template which <em>does</em> have an output.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
spec:
templates:
- name: no-parameters
dag:
tasks:
- name: get-a-parameter
template: get-a-parameter
</code></pre>
<p>This lack of outputs makes sense if you consider a DAG template with multiple tasks:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
spec:
templates:
- name: no-parameters
dag:
tasks:
- name: get-a-parameter
template: get-a-parameter
- name: get-another-parameter
depends: get-a-parameter
template: get-another-parameter
</code></pre>
<p>Which task's outputs should <code>no-parameters</code> produce? Since it's unclear, DAG and steps templates simply do not produce outputs by default.</p>
<p>You can think of templates as being like functions. You wouldn't expect a function to implicitly return the output of a function it calls.</p>
<pre class="lang-py prettyprint-override"><code>def get_a_string():
return "Hello, world!"
def call_get_a_string():
get_a_string()
print(call_get_a_string()) # This prints nothing.
</code></pre>
<h1>But a DAG or steps template can <em>forward</em> outputs</h1>
<p>You can make a DAG or a steps template <em>forward</em> an output by setting its <code>outputs</code> field.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: get-parameters-wftmpl
spec:
templates:
- name: get-parameters
dag:
tasks:
- name: get-a-parameter
template: get-a-parameter
- name: get-another-parameter
depends: get-a-parameter
template: get-another-parameter
# This is the critical part!
outputs:
parameters:
- name: parameter-1
valueFrom:
expression: "tasks['get-a-parameter'].outputs.parameters['parameter-name']"
- name: parameter-2
valueFrom:
expression: "tasks['get-another-parameter'].outputs.parameters['parameter-name']"
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
spec:
templates:
- name: print-parameter
dag:
tasks:
- name: get-parameters
templateRef:
name: get-parameters-wftmpl
template: get-parameters
- name: print-parameter
depends: get-parameters
template: print-parameter
arguments:
parameters:
- name: parameter
value: "{{tasks.get-parameters.outputs.parameters.parameter-1}}"
</code></pre>
<p>To continue the Python analogy:</p>
<pre class="lang-py prettyprint-override"><code>def get_a_string():
return "Hello, world!"
def call_get_a_string():
return get_a_string() # Add 'return'.
print(call_get_a_string()) # This prints "Hello, world!".
</code></pre>
<h1>So, in your specific case...</h1>
<ol>
<li><p>Add an <code>outputs</code> section to the <code>main</code> template in the <code>generate-output</code> WorkflowTemplate to forward the output parameter from the <code>retrieve-outputs</code> template.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: generate-output
spec:
  entrypoint: main
  templates:
  - name: main
    outputs:
      parameters:
      - name: message
        valueFrom:
          expression: "tasks['read-outputs'].outputs.parameters.message"
    dag:
      tasks:
      # ... the rest of the file ...
</code></pre>
</li>
<li><p>Add an <code>outputs</code> section to the <code>main</code> template in the <code>lib-read-outputs</code> WorkflowTemplate to forward <code>generate-output</code>'s parameter.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: lib-read-outputs
spec:
  entrypoint: main
  templates:
  - name: main
    outputs:
      parameters:
      - name: message
        valueFrom:
          expression: "tasks['lib-wft'].outputs.parameters.message"
    dag:
      tasks:
      # ... the rest of the file ...
</code></pre>
</li>
</ol>
| crenshaw-dev |
<p>I have deployed <strong>InfluxDB 2.0.0</strong> as a StatefulSet with EBS volume persistence. I've noticed that if, for some reason, the pod gets rescheduled to another node, or even if we scale the StatefulSet down to replicas = 0 and then back up, the effect on the persisted data is the same: it is lost.</p>
<p>Initially, in the case of a pod that gets rescheduled to another node, I would have thought the problem was with the EBS volume not being unmounted and then mounted to the other node where the pod replica is running, but that is NOT the case. The EBS volume is present and the same PV/PVC exists, but the data is lost.</p>
<p>To figure out what might be the problem, I've purposely done influxdb setup and added data and then did this:</p>
<pre><code>kubectl scale statefulsets influxdb --replicas=0
...
kubectl scale statefulsets influxdb --replicas=1
</code></pre>
<p>The effect was the same just like when influxdb pod got rescheduled. Data was lost.</p>
<p>Any specific reason why something like that would happen? </p>
<p>My environment:
I'm using EKS with Kubernetes <strong>1.15</strong> on both the control plane and workers.</p>
| Bakir Jusufbegovic | <p>Fortunately, the problem was due to the big changes that happened between InfluxDB 1.x and the 2.0.0 beta version in terms of where the actual data is persisted.</p>
<p>In 1.x version, data was persisted in:</p>
<pre><code>/var/lib/influxdb
</code></pre>
<p>while on the 2.x version, data is persisted, by default, on:</p>
<pre><code>/root/.influxdbv2
</code></pre>
<p>My EBS volume was mounted at the 1.x location, and with every restart of the pod (whether caused by scaling down or by scheduling to another node), the EBS volume was attached correctly but at the wrong location. That was the reason there was no data.</p>
<p>Also, one difference I see is that configuration params cannot be provided to the 2.x version via a configuration file (as was possible in 1.x, where I had the configuration file mounted into the container as a ConfigMap). We have to provide additional configuration params inline, e.g. as command-line flags or environment variables. This link explains how: <a href="https://v2.docs.influxdata.com/v2.0/reference/config-options/" rel="nofollow noreferrer">https://v2.docs.influxdata.com/v2.0/reference/config-options/</a></p>
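<p>For example (a sketch based on the linked config-options docs; <code>INFLUXD_BOLT_PATH</code> is one such option), settings can be passed as environment variables on the container:</p>
<pre><code>env:
- name: INFLUXD_BOLT_PATH      # maps to the bolt-path config option
  value: /root/.influxdbv2/influxd.bolt
</code></pre>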
<p>At the end this is the working version of Statefulset:</p>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: influxdb
name: influxdb
spec:
replicas: 1
selector:
matchLabels:
app: influxdb
serviceName: influxdb
template:
metadata:
labels:
app: influxdb
spec:
containers:
- image: quay.io/influxdb/influxdb:2.0.0-beta
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /ping
port: api
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: influxdb
ports:
- containerPort: 9999
name: api
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /ping
port: api
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: "800m"
memory: 1200Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- mountPath: /root/.influxdbv2
name: influxdb-data
volumeClaimTemplates:
- metadata:
name: influxdb-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
volumeMode: Filesystem
</code></pre>
| Bakir Jusufbegovic |
<p>I am using Rancher to manage Kubernetes which orchestrates my Docker containers.</p>
<p>Each of our microservices (running in a container) that requires persistence has a corresponding MySQL container. E.g. MyApp is running in a container called MyApp and persists to a MySQL container called MySQL-MyApp.</p>
<p>We have many of these. We don't want to define which nodes the MySQL containers run on, and therefore can't publish/expose the port on the host in case it clashes with any other ports on that host.</p>
<p>However, if something goes wrong with some data for one of our microservices, we need to be able to access the MySQL instance in the relevant container using MySQL Workbench to view/edit the data in the database from an external machine on our physical network.</p>
<p>Any ideas how we would go about doing this? Are we able to somehow temporarily expose/publish a port on the fly for a MySQL container that is running so that we can connect to it via MySQL Workbench, or are there other ways to get this done?</p>
| dleerob | <p>If the users have access to <code>kubectl</code> command-line for the cluster, they can set-up a temporary <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noreferrer">port-forward</a> between a local development machine and the pod that contains your MySQL container.</p>
<p>For example, where <code>mypod-765d459796-258hz</code> is a pod and you want to connect to port 3306 of that pod:</p>
<p><code>kubectl port-forward mypod-765d459796-258hz 12345:3306</code></p>
<p>Then you could connect MySQL Workbench to <code>localhost:12345</code> and it would forward to your MySQL container in Kubernetes.</p>
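<p>If you'd rather not look up pod names first, recent <code>kubectl</code> versions can also forward to a Service or Deployment directly (the names here are placeholders):</p>
<pre><code>kubectl port-forward -n mynamespace svc/mysql-myapp 12345:3306
</code></pre>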
| Paul Annetts |
<p>I have multi-environment k8s cluster (<strong>EKS</strong>) and I'm trying to setup accurate values for ResourceQuotas. </p>
<p>One interesting thing that I've noticed is that the specified CPU/memory requests/limits stay <strong>"occupied"</strong> in the k8s cluster after a job completes successfully, even though the pod has effectively released the CPU/memory resources it was using. </p>
<p>Since I expect that there would be a lot of jobs executed on the environment this caused a problem for me. Of course, I've added support for running cleanup cronjob for the successfully executed jobs but that is just one part of the solution.</p>
<p>I'm aware of the <strong>TTL feature on k8s</strong>: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#ttl-mechanism-for-finished-jobs</a> that is still in alpha state and as such not available on the EKS k8s cluster.</p>
<p>I would expect that both request/limits specified on that specific pod (container/s) are "released" also but when looking at the k8s metrics on Grafana, I see that that is not true.</p>
<p>This is an example (green line marks current resource usage, yellow marks resource request while blue marks resource limit):
<a href="https://i.stack.imgur.com/5SfMZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5SfMZ.png" alt="enter image description here"></a></p>
<p>My question is:</p>
<ul>
<li>Is this expected behaviour?</li>
<li>If yes, what are the technical reasons why request/limits are not released as well after job (pod) execution is completed?</li>
</ul>
| Bakir Jusufbegovic | <p>I've done a "load" test on my environment to check whether requests/limits that are left assigned on completed jobs (pods) do indeed count against the ResourceQuota that I've set. </p>
<p>This is how my ResourceQuota looks like:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
name: mem-cpu-quota
spec:
hard:
requests.cpu: "1"
requests.memory: 2Gi
limits.cpu: "2"
limits.memory: 3Gi
</code></pre>
<p>This is the CPU/memory request/limit that exists on each k8s job (to be precise, on the container running in the Pod which is spun up by the Job):</p>
<pre><code>resources:
limits:
cpu: 250m
memory: 250Mi
requests:
cpu: 100m
memory: 100Mi
</code></pre>
<p>Results of testing:</p>
<ul>
<li>Currently running number of jobs: <strong>66</strong></li>
<li>Expected sum of CPU requests (if the assumption from the question is correct): 66 × 100m <strong>~= 6.6 cores (6600m)</strong></li>
<li>Expected sum of memory requests (if the assumption from the question is correct): 66 × 100Mi <strong>~= 6600Mi (~6.4Gi)</strong></li>
<li>Expected sum of CPU limits (if the assumption from the question is correct): 66 × 250m <strong>~= 16.5 cores</strong></li>
<li>Expected sum of memory limits (if the assumption from the question is correct): 66 × 250Mi <strong>~= 16500Mi (~16.1Gi)</strong></li>
</ul>
<p>I've created Grafana graphs that show following:</p>
<p><strong>CPU usage/requests/limits for jobs in one namespace</strong></p>
<pre><code>sum(rate(container_cpu_usage_seconds_total{namespace="${namespace}", container="myjob"}[5m]))
sum(kube_pod_container_resource_requests_cpu_cores{namespace="${namespace}", container="myjob"})
sum(kube_pod_container_resource_limits_cpu_cores{namespace="${namespace}", container="myjob"})
</code></pre>
<p><strong>Memory usage/requests/limits for jobs in one namespace</strong></p>
<pre><code>sum(rate(container_memory_usage_bytes{namespace="${namespace}", container="myjob"}[5m]))
sum(kube_pod_container_resource_requests_memory_bytes{namespace="${namespace}", container="myjob"})
sum(kube_pod_container_resource_limits_memory_bytes{namespace="${namespace}", container="myjob"})
</code></pre>
<p>This is how graphs look like:
<a href="https://i.stack.imgur.com/MSa3M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MSa3M.png" alt="enter image description here"></a></p>
<p>According to this graph, requests/limits get accumulated and go well beyond the ResourceQuota thresholds. However, I'm still able to run new jobs without a problem.</p>
<p>At this point, I started doubting what the metrics were showing and opted to check another part of the metrics. Specifically, I used the following set of metrics:</p>
<p><strong>CPU:</strong></p>
<pre><code>sum (rate(container_cpu_usage_seconds_total{namespace="$namespace"}[1m]))
kube_resourcequota{namespace="$namespace", resource="limits.cpu", type="hard"}
kube_resourcequota{namespace="$namespace", resource="requests.cpu", type="hard"}
kube_resourcequota{namespace="$namespace", resource="limits.cpu", type="used"}
kube_resourcequota{namespace="$namespace", resource="requests.cpu", type="used"}
</code></pre>
<p><strong>Memory:</strong></p>
<pre><code>sum (container_memory_usage_bytes{image!="",name=~"^k8s_.*", namespace="$namespace"})
kube_resourcequota{namespace="$namespace", resource="limits.memory", type="hard"}
kube_resourcequota{namespace="$namespace", resource="requests.memory", type="hard"}
kube_resourcequota{namespace="$namespace", resource="limits.memory", type="used"}
kube_resourcequota{namespace="$namespace", resource="requests.memory", type="used"}
</code></pre>
<p>This is how graph looks like:
<a href="https://i.stack.imgur.com/jp7BQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jp7BQ.png" alt="enter image description here"></a></p>
<hr>
<p><strong>Conclusion:</strong></p>
<p>From this screenshot, it is clear that, once the load test completes and jobs go into the completed state, even though pods are still around <strong>(with READY: 0/1 and STATUS: Completed)</strong>, CPU/memory requests/limits are released and no longer represent a constraint that counts against the ResourceQuota threshold.
This can be seen by observing the following data on the graph:</p>
<pre><code>CPU allocated requests
CPU allocated limits
Memory allocated requests
Memory allocated limits
</code></pre>
<p>all of which increase at the point in time when load hits the system and new jobs are created, but go back to the previous state as soon as the jobs are completed (even though they are not deleted from the environment)</p>
<p><strong>In other words, resource usage/requests/limits for both CPU and memory are taken into account only while the job (and its corresponding pod) is in the RUNNING state.</strong></p>
| Bakir Jusufbegovic |
<p>I have the following Argo Workflow using a Secret from Kubernetes:</p>
<pre><code>args:
- |
export TEST_FILENAME="./test.txt"
echo "$TEST_DATA" > $TEST_FILENAME
chmod 400 $TEST_FILENAME
env:
- name: TEST_DATA
valueFrom:
secretKeyRef:
name: test_data
key: testing
</code></pre>
<p>I need to redirect <code>TEST_DATA</code> to a file when I run the Argo Workflow, but the data of <code>TEST_DATA</code> always shows in the argo-ui log. How can I redirect the data to the file without showing the data in the log?</p>
| ratzip | <p><code>echo</code> shouldn't be writing <code>$TEST_DATA</code> to logs the way your code is written. So I'm not sure what's going wrong.</p>
<p>However, I think there's an easier way to <a href="https://kubernetes.io/docs/concepts/configuration/secret/#projection-of-secret-keys-to-specific-paths" rel="nofollow noreferrer">write a secret to a file</a>. Add a volume to your Workflow spec, and a volume mount to the <code>container</code> section of the step spec.</p>
<pre class="lang-yaml prettyprint-override"><code> containers:
- name: some-pod
image: some-image
volumeMounts:
- name: test-mount
mountPath: "/some/path/"
readOnly: true
volumes:
- name: test-volume
secret:
secretName: test_data
items:
- key: testing
path: test.txt
</code></pre>
| crenshaw-dev |
<p>I have a .Net Core Console Application which I have containerized. The purpose of my application is to accept a file url and return the text. Below is my Dockerfile.</p>
<pre class="lang-sh prettyprint-override"><code>FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["CLI_ReadData/CLI_ReadData.csproj", "CLI_ReadData/"]
RUN dotnet restore "CLI_ReadData/CLI_ReadData.csproj"
COPY . .
WORKDIR "/src/CLI_ReadData"
RUN dotnet build "CLI_ReadData.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CLI_ReadData.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CLI_ReadData.dll"]
</code></pre>
<p>I now want to create an Argo Workflow for the same. Below is the corresponding .yaml file</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
name: read-data
namespace: argo
spec:
entrypoint: read-data
templates:
- name: read-data
dag:
tasks:
- name: read-all-data
template: read-all-data
arguments:
parameters:
- name: fileUrl
value: 'https://dpaste.com/24593EK38'
- name: read-all-data
inputs:
parameters:
- name: fileUrl
container:
image: 'manankapoor2705/cli_readdata:latest'
- app/bin/Debug/net5.0/CLI_ReadData.dll
args:
- '--fileUrl={{inputs.parameters.fileUrl}}'
ttlStrategy:
secondsAfterCompletion: 300
</code></pre>
<p>While creating the Argo Workflow I am getting the below error :</p>
<blockquote>
<p>task 'read-data.read-all-data' errored: container "main" in template
"read-all-data", does not have the command specified: when using the
emissary executor you must either explicitly specify the command, or
list the image's command in the index:
<a href="https://argoproj.github.io/argo-workflows/workflow-executors/#emissary-emissary" rel="nofollow noreferrer">https://argoproj.github.io/argo-workflows/workflow-executors/#emissary-emissary</a></p>
</blockquote>
<p>I am also attaching my Program.cs file for reference purposes</p>
<pre class="lang-cs prettyprint-override"><code>class Program
{
public class CommandLineOptions
{
[Option("fileUrl", Required = true, HelpText = "Please provide a url of the text file.")]
public string fileUrl { get; set; }
}
static void Main(string[] args)
{
try
{
var result = Parser.Default.ParseArguments<CommandLineOptions>(args)
.WithParsed<CommandLineOptions>(options =>
{
Console.WriteLine("Arguments received...Processing further !");
var text = readTextFromFile(options.fileUrl);
Console.WriteLine("Read names from textfile...");
var names = generateListOfNames(text);
});
if (result.Errors.Any())
{
throw new Exception($"Task Failed {String.Join('\n', result.Errors)}");
}
//exit successfully
Environment.Exit(0);
}
catch (Exception ex)
{
Console.WriteLine("Task failed!!");
Console.WriteLine(ex.ToString());
//failed exit
Environment.Exit(1);
}
Console.WriteLine("Hello World!");
}
public static string readTextFromFile(string path)
{
System.Net.WebRequest request = System.Net.WebRequest.Create(path);
System.Net.WebResponse response = request.GetResponse();
Stream dataStream = response.GetResponseStream();
var reader = new StreamReader(dataStream);
var text = reader.ReadToEnd();
reader.Close();
response.Close();
return text;
}
public static List<string> generateListOfNames(string text)
{
var names = text.Split(',').ToList<string>();
foreach (var name in names)
Console.WriteLine(name);
return names;
}
}
</code></pre>
<p>Can anyone please help me out ?</p>
| Manan Kapoor | <p>The <code>read-all-data</code> template looks to me like invalid YAML: the <code>command</code> field name is missing. Also, in the final image the publish output is copied to <code>/app</code> (not <code>/app/bin/Debug/net5.0</code>) and the entrypoint is <code>["dotnet", "CLI_ReadData.dll"]</code>, so the explicit command should invoke the <code>dotnet</code> runtime with the published DLL:</p>
<pre class="lang-yaml prettyprint-override"><code> - name: read-all-data
inputs:
parameters:
- name: fileUrl
container:
image: 'manankapoor2705/cli_readdata:latest'
command:
        - dotnet
        - /app/CLI_ReadData.dll
args:
- '--fileUrl={{inputs.parameters.fileUrl}}'
</code></pre>
| crenshaw-dev |
<p>I'm using standard procedure for enabling HTTPS termination for my application that is running on Kubernetes using:
- Ingress nginx
- AWS ELB classic
- Cert Manager for Let's encrypt</p>
<p>I've used procedure described here: <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p>
<p>I've been <strong>able to make Ingress work with HTTP</strong> before but I'm having problem with HTTPS where I'm getting following error when I try to cURL app URL:</p>
<pre><code>$ curl -iv https://<SERVER>
* Rebuilt URL to: <SERVER>
* Trying <IP_ADDR>...
* Connected to <SERVER> (<IP_ADDR>) port 443 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 592 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* gnutls_handshake() failed: An unexpected TLS packet was received.
* Closing connection 0
curl: (35) gnutls_handshake() failed: An unexpected TLS packet was received.
</code></pre>
<p>This is what I currently have:</p>
<p><strong>Cert manager running in kube-system namespace:</strong></p>
<pre><code>$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
cert-manager-5f8db6f6c4-c4t4k 1/1 Running 0 2d1h
cert-manager-webhook-85dd96d87-rxc7p 1/1 Running 0 2d1h
cert-manager-webhook-ca-sync-pgq6b 0/1 Completed 2 2d1h
</code></pre>
<p><strong>Ingress setup in ingress-nginx namespace:</strong></p>
<pre><code>$ kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/default-http-backend-587b7d64b5-ftws2 1/1 Running 0 2d1h
pod/nginx-ingress-controller-68bb4bfd98-zsz8d 1/1 Running 0 12h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/default-http-backend ClusterIP <IP_ADDR_1> <none> 80/TCP 2d1h
service/ingress-nginx NodePort <IP_ADDR_2> <none> 80:32327/TCP,443:30313/TCP 2d1h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/default-http-backend 1 1 1 1 2d1h
deployment.apps/nginx-ingress-controller 1 1 1 1 12h
</code></pre>
<p><strong>Application and ingress in app namespace:</strong></p>
<pre><code>$ kubectl get all -n app
NAME READY STATUS RESTARTS AGE
pod/appserver-0 1/1 Running 0 2d1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/appserver ClusterIP <IP_ADDR> <none> 22/TCP,80/TCP 2d1h
NAME DESIRED CURRENT AGE
statefulset.apps/appserver 1 1 2d1h
$ kubectl describe ingress -n app
Name: appserver
Namespace: app
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
letsencrypt-prod terminates <SERVER>
Rules:
Host Path Backends
---- ---- --------
<SERVER>
/ appserver:80 (<none>)
Annotations:
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
Events: <none>
</code></pre>
<p>This is how ingress resource looks like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
name: appserver
namespace: app
spec:
tls:
- hosts:
- <SERVER>
secretName: letsencrypt-prod
rules:
- host: <SERVER>
http:
paths:
- backend:
serviceName: appserver
servicePort: 80
path: /
</code></pre>
<p><strong>Additional checks that I've done:</strong></p>
<p>Checked that certificates have been generated correctly:</p>
<pre><code>$ kubectl describe cert -n app
Name: letsencrypt-prod
...
Status:
Conditions:
Last Transition Time: 2019-03-20T14:23:07Z
Message: Certificate is up to date and has not expired
Reason: Ready
Status: True
Type: Ready
</code></pre>
<p>Checked logs from cert-manager:</p>
<pre><code>$ kubectl logs -f cert-manager-5f8db6f6c4-c4t4k -n kube-system
...
I0320 14:23:08.368872 1 sync.go:177] Certificate "letsencrypt-prod" for ingress "appserver" already exists
I0320 14:23:08.368889 1 sync.go:180] Certificate "letsencrypt-prod" for ingress "appserver" is up to date
I0320 14:23:08.368894 1 controller.go:179] ingress-shim controller: Finished processing work item "app/appserver"
I0320 14:23:12.548963 1 controller.go:183] orders controller: syncing item 'app/letsencrypt-prod-1237734172'
I0320 14:23:12.549608 1 controller.go:189] orders controller: Finished processing work item "app/letsencrypt-prod-1237734172"
</code></pre>
<p>Not really sure at this point what else might be worth of checking? </p>
| Bakir Jusufbegovic | <p>In the end, it turned out the problem was related to how I had set up the listeners on the classic ELB in AWS.</p>
<p>I had done the following:</p>
<pre><code>HTTP 80 -> HTTP <INGRESS_SVC_NODE_PORT_1>
HTTP 443 -> HTTP <INGRESS_SVC_NODE_PORT_2>
</code></pre>
<p>The first mistake was using HTTP instead of HTTPS for port 443. When I tried to use HTTPS, I had to provide SSL certificates, which didn't make sense since I'm doing SSL termination at the Ingress level with Let's Encrypt.</p>
<p>Therefore, the following listener configuration (plain TCP passthrough) worked:</p>
<pre><code>TCP 80 -> TCP <INGRESS_SVC_NODE_PORT_1>
TCP 443 -> TCP <INGRESS_SVC_NODE_PORT_2>
</code></pre>
| Bakir Jusufbegovic |
<p>I have a Kubernetes pod which downloading several types of files (let’s say <code>X</code>, <code>Y</code> and <code>Z</code>), and I have some processing scripts (each one is in a docker image) which are interested in one or more files (let's say <code>processor_X_and_Y</code>, <code>processor_X_and_Z</code> and <code>processor_Z</code>). </p>
<p>The first pod is always running, and I need to create a processor pod after downloading a file according to the file type, for example if the downloader downloads a file of type <code>Z</code>, I need to create a new instance of <code>processor_X_and_Z</code> and a new instance of <code>processor_Z</code>. </p>
<p>My current idea is to use <a href="https://argoproj.github.io/" rel="nofollow noreferrer">Argo workflow</a> by creating a simple one-step workflow for each processor, then starting the suitable workflows by calling the <a href="https://argoproj.github.io/docs/argo/rest-api.html" rel="nofollow noreferrer">Argo REST API</a> from the downloader pod. This achieves my goal, including the auto-scaling of my system.</p>
<p>My question is: is there another, simpler engine or service in Kubernetes that I can use to create a new pod from another pod, without using this workflow engine?</p>
| Hussein Awala | <p>As mentioned in another answer, you can give your pod access to the Kubernetes API and then apply a Pod resource via kubectl.</p>
<p>If you want to start an Argo Workflow, you could use kubectl to apply a Workflow resource, or you could use the <a href="https://argoproj.github.io/docs/argo/examples/readme.html#argo-cli" rel="nofollow noreferrer">Argo CLI</a>.</p>
<p>But if you're using Argo anyway, you might find it easier to use <a href="https://argoproj.github.io/argo-events/" rel="nofollow noreferrer">Argo Events</a> to <a href="https://argoproj.github.io/argo-events/triggers/argo-workflow/" rel="nofollow noreferrer">kick off a Workflow</a>. You would have to choose an <a href="https://argoproj.github.io/argo-events/concepts/event_source/" rel="nofollow noreferrer">event source</a> based on how/from where you're downloading the source files. If, for example, the files are on S3, you could use the SNS event source.</p>
<p>If you just need to periodically check for new files, you could use a <a href="https://github.com/argoproj/argo/blob/master/docs/cron-workflows.md" rel="nofollow noreferrer">CronWorkflow</a> to perform the check and conditionally perform the rest of the workflow based on whether there's anything to download.</p>
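<p>For illustration, a minimal CronWorkflow sketch that runs such a check every 15 minutes (the image and script names are placeholders, not part of your setup):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: check-for-new-files
spec:
  schedule: "*/15 * * * *"
  workflowSpec:
    entrypoint: check
    templates:
    - name: check
      container:
        image: my-downloader:latest   # placeholder image
        command: [python, check_and_process.py]   # placeholder script
</code></pre>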
| crenshaw-dev |
<p>We have a Kubernetes cluster with 1 master and 3 nodes managed by kops that we use for our application deployment. We have minimal pod-to-pod connectivity but like the autoscaling features in Kubernetes. We've been using this for the past few months but recently have started having issue where our pods randomly cannot connect to our redis or database with an error like:</p>
<pre><code>Set state pending error: dial tcp: lookup redis.id.0001.use1.cache.amazonaws.com on 100.64.0.10:53: read udp 100.126.88.186:35730->100.64.0.10:53: i/o timeout
</code></pre>
<p>or</p>
<pre><code>OperationalError: (psycopg2.OperationalError) could not translate host name “postgres.id.us-east-1.rds.amazonaws.com” to address: Temporary failure in name resolution
</code></pre>
<p>What's stranger is this only occurs some of the time, then when a pod is recreated it will work again and this will trip it up shortly after.</p>
<p>We have tried following all of Kube's kube-dns debugging instructions to no avail, tried countless solutions like changing the ndots configuration and have even experimented moving to CoreDNS, but still have the exact same intermittent issues. We use Calico for networking but it's hard to say if it's occurring at the network level as we haven't seen issues with any other services.</p>
<p>Does anyone have any ideas of where else to look for what could be causing this behavior, or if you've experienced this behavior before yourself could you please share how you resolved it?</p>
<p>Thanks</p>
<p>The pods for CoreDNS look OK</p>
<pre><code>⇒ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
...
coredns-784bfc9fbd-xwq4x 1/1 Running 0 3h
coredns-784bfc9fbd-zpxhg 1/1 Running 0 3h
...
</code></pre>
<p>We have enabled logging on CoreDNS and seen requests actually coming through:</p>
<pre><code>⇒ kubectl logs coredns-784bfc9fbd-xwq4x --namespace=kube-system
.:53
2019-04-09T00:26:03.363Z [INFO] CoreDNS-1.2.6
2019-04-09T00:26:03.364Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
[INFO] plugin/reload: Running configuration MD5 = 7f2aea8cc82e8ebb0a62ee83a9771ab8
[INFO] Reloading
[INFO] plugin/reload: Running configuration MD5 = 73a93c15a3b7843ba101ff3f54ad8327
[INFO] Reloading complete
...
2019-04-09T02:41:08.412Z [INFO] 100.126.88.129:34958 - 18745 "AAAA IN sqs.us-east-1.amazonaws.com.cluster.local. udp 59 false 512" NXDOMAIN qr,aa,rd,ra 152 0.000182646s
2019-04-09T02:41:08.412Z [INFO] 100.126.88.129:51735 - 62992 "A IN sqs.us-east-1.amazonaws.com.cluster.local. udp 59 false 512" NXDOMAIN qr,aa,rd,ra 152 0.000203112s
2019-04-09T02:41:13.414Z [INFO] 100.126.88.129:33525 - 52399 "A IN sqs.us-east-1.amazonaws.com.ec2.internal. udp 58 false 512" NXDOMAIN qr,rd,ra 58 0.001017774s
2019-04-09T02:41:18.414Z [INFO] 100.126.88.129:44066 - 47308 "A IN sqs.us-east-1.amazonaws.com. udp 45 false 512" NOERROR qr,rd,ra 140 0.000983118s
...
</code></pre>
<p>Service and endpoints look OK</p>
<pre><code>⇒ kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 100.64.0.10 <none> 53/UDP,53/TCP 63d
...
⇒ kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 100.105.44.88:53,100.127.167.160:53,100.105.44.88:53 + 1 more... 63d
...
</code></pre>
| Ruby | <p>We also encountered this issue, but in our case it was caused by query timeouts.</p>
<p>After testing, the best approach was to run a DNS instance on every node and have all pods resolve against the DNS on their own node. This saves round trips to pods on other nodes: with only a few DNS replicas, the DNS service spreads queries across nodes, so pods end up generating extra cross-node network traffic. Not sure if this is possible on Amazon EKS.</p>
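<p>If you want to experiment with that idea, here is a rough sketch of pointing a single pod at a node-local resolver via <code>dnsConfig</code>. The 169.254.20.10 address is the link-local IP conventionally used by NodeLocal DNSCache; treat it and the search list as assumptions for your cluster:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 169.254.20.10   # node-local DNS cache address (assumption)
    searches:
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    options:
    - name: ndots
      value: "5"
</code></pre>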
| Akash Sharma |
<p>We are using argo cd and kubernetes.</p>
<p>And I want to use environmental variables in the yaml file.</p>
<p>For example,</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: guestbook-ui
annotations:
spec:
ports:
- port: $PORT
targetPort: $TARGET_PORT
selector:
app: guestbook-ui
</code></pre>
<p>I want to set the value of the environmental variable (PORT and TARGET_PORT) when deploying it to Argo CD.</p>
<p>What should I do?</p>
| Junseok Lee | <p>I'd recommend converting your raw YAML to a Helm chart and templating the relevant fields.</p>
<p>Argo CD has an <a href="https://github.com/argoproj/argocd-example-apps/tree/master/helm-guestbook" rel="nofollow noreferrer">example Helm app</a> with a service similar to yours.</p>
<p>You could define a service like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: guestbook-ui
annotations:
spec:
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.targetPort }}
selector:
app: guestbook-ui
</code></pre>
<p>And then define your port and targetPort parameters in Argo CD.</p>
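<p>The defaults would live in the chart's <code>values.yaml</code>; a minimal sketch (the numbers are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>service:
  port: 80
  targetPort: 8080
</code></pre>
<p>You can then override them per environment via Argo CD's Helm parameter overrides (for example, setting <code>service.port</code> in the Application spec or UI).</p>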
| crenshaw-dev |
<p>I am trying to create fargate profiles for EKS using terraform, the requirement is to create multiple fargate profiles bound to single namespace but different label.</p>
<p>I have defined the selector variable as below :</p>
<pre><code>variable "selectors" {
description = "description"
type = list(object({
namespace = string
labels = any
}))
default = []
}
</code></pre>
<p>and the fargate module block as below :</p>
<pre><code>resource "aws_eks_fargate_profile" "eks_fargate_profile" {
for_each = {for namespace in var.selectors: namespace.namespace => namespace}
cluster_name = var.cluster_name
fargate_profile_name = format("%s-%s","fargate",each.value.namespace)
pod_execution_role_arn = aws_iam_role.eks_fargate_role.arn
subnet_ids = var.vpc_subnets
selector {
namespace = each.value.namespace
labels = each.value.labels
}
</code></pre>
<p>and calling the module as below :</p>
<pre><code> selectors = [
{
namespace = "ns"
labels = {
Application = "fargate-1"
}
},
{
namespace = "ns"
labels = {
Application = "fargate-2"
}
}
]
</code></pre>
<p>When i try to run terraform plan, i am getting below error :</p>
<pre><code>Two different items produced the key "jenkinsbuild" in this 'for' expression. If duplicates are expected, use the ellipsis (...) after the value expression to enable grouping by key.
</code></pre>
<p>I tried giving (...) at the end of the for loop, this time i am getting another error as below :</p>
<pre><code>each.value is tuple with 1 element
│
│ This value does not have any attributes.
</code></pre>
<p>I also defined selectors variable type as <em><strong>any</strong></em>, as well tried type casting the output to string(namespace) and object(labels), but no luck.</p>
<p>So could you please help me in achieving the same, It seems i am close but i am missing something here.</p>
<p>Thanks and Regards,
Sandeep.</p>
| sandeepdosapati | <p>In Terraform, when using <code>for_each</code>, the keys must be unique. If you do not have unique keys, then use <code>count</code>:</p>
<pre><code>resource "aws_eks_fargate_profile" "eks_fargate_profile" {
count = length(var.selectors)
selector {
namespace = var.selectors[count.index].namespace
labels = var.selectors[count.index].labels
}
...
}
</code></pre>
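<p>If you would rather keep <code>for_each</code> (for example, so resource addresses stay stable when the list is reordered), another option is to build unique keys yourself. A sketch, assuming you are fine keying profiles by namespace plus list index:</p>
<pre><code>resource "aws_eks_fargate_profile" "eks_fargate_profile" {
  # namespace alone is not unique, so suffix the key with the list index
  for_each = { for idx, s in var.selectors : "${s.namespace}-${idx}" => s }

  selector {
    namespace = each.value.namespace
    labels    = each.value.labels
  }
  ...
}
</code></pre>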
| Old Pro |
<p>I'm trying to automate the process of simultaneously deploying an app onto multiple machines with kubernetes clusters. I'm new to kubernetes.</p>
<p>Which tool/technology should I use for this?</p>
| Queilyd | <p>In Kubernetes, you can control which nodes a workload is deployed on, and ensure that multiple pods of the same application are not placed on the same node. Use <code>nodeSelector</code>, node affinity, or pod anti-affinity. For details check <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/</a></p>
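<p>For example, a minimal <code>nodeSelector</code> sketch (the <code>disktype=ssd</code> label is an assumption; use whatever labels your nodes actually carry):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  nodeSelector:
    disktype: ssd   # assumes nodes labeled with disktype=ssd
  containers:
  - name: my-app
    image: my-image:latest
</code></pre>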
| Akash Sharma |
<p>I am trying to set up ArgoCD, and am unclear on some of its directions. I am a Kubernetes beginner and am experimenting to learn. I've set up my own Kubernetes master and two workers on VMs, and so far so good. (VMs and real k8s because I want to dig in...) Next I've installed ArgoCD and got it to run, according to <a href="https://argoproj.github.io/argo-cd/getting_started/" rel="nofollow noreferrer">https://argoproj.github.io/argo-cd/getting_started/</a>.</p>
<p>Following the instructions led me to run ArgoCD with Port Forwarding. This is a process running on a terminal on the kubernetes master. And it works for me, great.</p>
<p>I would expect people normally want ArgoCD to run without a foreground process, but the ArgoCD instructions and all the various instructables around left me hanging.</p>
<p>What's the next step to have ArgoCD run on its own?</p>
| jws | <p>I'm not sure that what you're actually seeing is ArgoCD running as a "foreground process." The API server is running in a pod. I think what you're seeing in the foreground is <code>kubectl</code> forwarding a port so you can access the ArgoCD API/UI.</p>
<p>In order to avoid running the <code>kubectl</code> port-forward (in the foreground or anywhere else), you need to set up a more "permanent/proper" way of accessing the API.</p>
<p>The <a href="https://argoproj.github.io/argo-cd/getting_started/#3-access-the-argo-cd-api-server" rel="nofollow noreferrer">ArgoCD instructions</a> are a bit brief about how to set up access. But you should try either the LoadBalancer or Ingress approach. It'll probably take a little external research about what those <em>are</em> in the Kubernetes world to understand which is best and how to use it.</p>
<p>For a private cluster, one option is to set up <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> on the cluster, in particular see <a href="https://metallb.universe.tf/installation/" rel="nofollow noreferrer">installation</a> then <a href="https://metallb.universe.tf/configuration/" rel="nofollow noreferrer">configuration</a>. Configure the LB with a Layer2 configuration using a private IP range. Then, update ArgoCD with the command provided in the <a href="https://argoproj.github.io/argo-cd/getting_started/#3-access-the-argo-cd-api-server" rel="nofollow noreferrer">ArgoCD instructions</a> for a load balancer. Once all set up, find the Load Balancer assigned IP with <code>kubectl get service -n argocd</code> and the external IP's port 80 should route to an ArgoCD pod IP port 8080.</p>
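<p>For older MetalLB releases that are configured via ConfigMap (newer releases use CRDs instead), a Layer2 sketch might look like this; the address range is an assumption, so pick an unused range on your network:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumption: unused range on your LAN
</code></pre>
<p>After that, switch the <code>argocd-server</code> service to type <code>LoadBalancer</code> as described in the ArgoCD instructions.</p>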
| crenshaw-dev |
<p>We have Prometheus running on k8s but it won't start anymore because RAM requirements are insufficient (and CPU close to the limit as well). Since this is all new to me I'm not sure about which approach to take. I tried deploying the container with a bit increased RAM limit (node has 16Gi, I increased from 145xxMi to 15Gi). The status is constantly pending.</p>
<pre><code> Normal NotTriggerScaleUp 81s (x16 over 5m2s) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 3 node(s) didn't match node selector, 2 Insufficient memory
Warning FailedScheduling 80s (x6 over 5m23s) default-scheduler 0/10 nodes are available: 10 Insufficient memory, 6 node(s) didn't match node selector, 9 Insufficient cpu.
Normal NotTriggerScaleUp 10s (x14 over 5m12s) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 Insufficient memory, 3 node(s) didn't match node selector
</code></pre>
<p>These are the logs from when prometheus crashed and didn't start anymore. describe pod also said memory usage was 99%:</p>
<pre><code>level=info ts=2020-10-09T09:39:34.745Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=53476 maxSegment=53650
level=info ts=2020-10-09T09:39:38.518Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=53477 maxSegment=53650
level=info ts=2020-10-09T09:39:41.244Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=53478 maxSegment=53650
</code></pre>
<p>What can I do to solve this issue? Note there is no autoscaling in place.</p>
<p>Do I scale up the EC2 worker nodes manually?
Do I do something else?</p>
| aardbol | <p>The message from cluster autoscaler reveals the problem:</p>
<p><code>cluster-autoscaler pod didn't trigger scale-up</code></p>
<p>Even if the cluster autoscaler would add a new node to the cluster, the Prometheus still would not fit to the node.</p>
<p>This is likely because the EKS nodes have part of their 16Gi <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="nofollow noreferrer">reserved for the system</a>. The allocatable capacity is seemingly less than 15Gi, since Prometheus no longer fits on the node after its memory request was increased.</p>
<p>To solve this, you could either decrease the memory request on the Prometheus pod, or add new larger nodes which have more memory available.</p>
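<p>For example, assuming the node's allocatable memory turns out to be around 14Gi, the Prometheus container's resources could be trimmed to fit under it; a sketch:</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
  requests:
    memory: 13Gi   # keep below the node's allocatable, not its total 16Gi
  limits:
    memory: 14Gi
</code></pre>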
| Lauri Koskela |
<p>I am trying to achieve 0 downtime during rolling update with EKS (AWS K8s service).</p>
<p>I have one WebSocket server and I want to ensure during the rolling update of this server, existing connections will be kept until the WebSockets are closed after the work is done.</p>
<p>I thought K8s rolling update feature would help me with this but it did not. I tried and it simply killed the pod while there were still connections to the WebSocket.</p>
<p>If I understand the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="noreferrer">document</a> correctly, then the pod termination goes like this:</p>
<ol>
<li>User signals pod deletion to K8s API</li>
<li>K8s stops routing new traffic to this pod and sends the SIGTERM signal</li>
<li>The application MUST handle this signal and start graceful termination of itself in a specified <code>grace-period</code> (default to 30s)</li>
<li>After that, K8s sends a SIGKILL signal to force terminate the pod.</li>
</ol>
<p>If my above understanding is correct, clearly there is no way to tell K8s to:</p>
<ol>
<li>Don't interrupt current connections</li>
<li>Let them run for as long as they need (they will eventually close but the period varies greatly)</li>
<li>Once all connections are closed, terminate the pod</li>
</ol>
<p><strong>Question</strong>: Is there any ways at all to make sure K8s:</p>
<ol>
<li>Doesn't interrupt WebSocket connection</li>
<li>Doesn't force the application to kill the connection in a specific <code>grace-period</code></li>
<li>Detects when all WebSocket connections are closed and kill the pod</li>
</ol>
<p>If anyone can assist me that would be greatly appreciated. </p>
| Tran Triet | <p>For mission-critical applications, go for a customised blue-green deployment.</p>
<p>First deploy the new version as a separate Deployment with its own selector labels, and when all of its pod replicas are up and ready to serve traffic, switch the Service selector to point to the new Deployment.</p>
<p>After this, send a kill switch to the old version, which gracefully disconnects all of its clients. All reconnections are then forwarded to the new version, which is already set up to serve traffic.</p>
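<p>For illustration, the cut-over can be as simple as flipping a version label in the Service selector once the new pods are Ready. A sketch, where the labels and ports are assumptions:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: websocket-server
spec:
  selector:
    app: websocket-server
    version: v2   # change from v1 to v2 to route new connections to the new deployment
  ports:
  - port: 80
    targetPort: 8080
</code></pre>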
| Akash Sharma |
<p>I am working in a setup where I have an Argo CD portal to view the Kubernetes deployments etc. But do not have access to a kubeconfig file (hence cannot use kubectl).</p>
<p>I can see the logs for the pods in the web UI, but is there a way to export the logs as a text file?</p>
| Prabal Rakshit | <p>ArgoCD's logging interface in >2.0 includes a Download button.</p>
<p>For earlier versions, open your browser's dev tools to the Network tab. Click the Logs tag in the ArgoCD interface. Find the network request to the <code>logs</code> endpoint and open the URL in a new tab. From there you can download the logs as a text file using your browser's "Save As" feature.</p>
| crenshaw-dev |
<h3>Summary:</h3>
<p>We have a golang application that submits Argo workflows to a kubernetes cluster upon requests. I'd like to pass a yaml file to one of the steps and I'm wondering what are the options for doing this.</p>
<h3>Environment:</h3>
<ul>
<li>Argo: v2.4.2</li>
<li>K8s: 1.13.12-gke.25</li>
</ul>
<h3>Additional details:</h3>
<p>Eventually, I would like to pass this file to the test step as shown in this example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: test-
spec:
entrypoint: test
templates:
- name: test
container:
image: gcr.io/testproj/test:latest
command: [bash]
source: |
python test.py --config_file_path=/path/to/config.yaml
</code></pre>
<p>The image used in this step would have a python script that receives the path to this file then accesses it.</p>
<p>To submit the Argo workflows with golang, we use the following dependencies:</p>
<ul>
<li><a href="https://github.com/argoproj/argo-workflows/tree/master/pkg/client" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/tree/master/pkg/client</a></li>
<li><a href="https://github.com/argoproj/argo-workflows/tree/master/pkg/apis" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/tree/master/pkg/apis</a></li>
</ul>
<p>Thank you.</p>
| Ash | <h1>Option 1: pass the file as a parameter</h1>
<p><a href="https://github.com/argoproj/argo-workflows/tree/master/examples#parameters" rel="nofollow noreferrer">Workflow parameters</a> are usually small bits of text or numbers. But if your yaml file is reasonably small, you could string-encode it and pass it as a parameter.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: test-
spec:
entrypoint: test
arguments:
parameters:
- name: yaml
value: "string-encoded yaml"
templates:
- name: test
    script:
      image: gcr.io/testproj/test:latest
      command: [bash]
      source: |
        # In this case, the string-encoding should be BASH-compatible.
        python test.py --config_file_as_string="{{workflow.parameters.yaml}}"
</code></pre>
<h1>Option 2: pass the file as an artifact</h1>
<p>Argo supports multiple types of <a href="https://github.com/argoproj/argo-workflows/tree/master/examples#artifacts" rel="nofollow noreferrer">artifacts</a>. Perhaps the simplest for your use case is the <a href="https://github.com/argoproj/argo-workflows/blob/master/examples/input-artifact-raw.yaml" rel="nofollow noreferrer">raw parameter</a> type.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: test-
spec:
entrypoint: test
templates:
- name: test
inputs:
artifacts:
- name: yaml
path: /path/to/config.yaml
raw:
data: |
this is
the raw file
contents
    script:
image: gcr.io/testproj/test:latest
command: [bash]
source: |
python test.py --config_file_path=/path/to/config.yaml
</code></pre>
<p>Besides <code>raw</code>, Argo supports "S3, Artifactory, HTTP, [and] Git" artifacts (among others, I think).</p>
<p>If, for example, you chose to use S3, you could upload the file from your golang app and then pass the S3 bucket and key as parameters.</p>
<h1>Golang client</h1>
<p>I'm not familiar with the golang client, but passing parameters is certainly supported, and I <em>think</em> passing in a raw artifact should be supported as well.</p>
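<p>For what it's worth, here is a rough sketch of setting the parameter with the golang types. This assumes the v2.x API, where <code>Parameter.Value</code> is a string pointer and the import path is <code>github.com/argoproj/argo/...</code>; both differ in newer versions, so check your vendored copy:</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
    wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" // path differs in v3.x
)

// setYamlParam sets the workflow-level "yaml" parameter before the
// Workflow is submitted via the client.
func setYamlParam(wf *wfv1.Workflow, contents string) {
    wf.Spec.Arguments.Parameters = []wfv1.Parameter{
        {Name: "yaml", Value: &contents},
    }
}
</code></pre>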
| crenshaw-dev |
<p>Here's a simplified version of a kubernetes job YAML config I use commonly:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: myjob
spec:
template:
spec:
containers:
- name: mycontainer
image: me/mycontainer:latest
command: ["bash", "-c"]
args:
- python -u myscript.py
--param1 abc
--param2 xyz
</code></pre>
<p>The above works great, and is easy to maintain and read. But now one of my parameters needs some minified YAML:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: myjob
spec:
template:
spec:
containers:
- name: mycontainer
image: me/mycontainer:latest
command: ["bash", "-c"]
args:
- python -u myscript.py
--param_minified_yaml "{key: value}"
</code></pre>
<p>This bit of embedded minified yaml is being parsed by <code>kubectl</code> and causing: <code>error: error parsing STDIN: error converting YAML to JSON: yaml: line 26: mapping values are not allowed in this context</code></p>
<p>How can the embedded yaml in <code>args:</code> be escaped such that it's passed as a pure text argument?</p>
| David Parks | <p>If the minified yaml (or the args string in general) does not include single quotes, you can wrap the whole command line in them:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
name: myjob
spec:
template:
spec:
containers:
- name: mycontainer
image: me/mycontainer:latest
command: ["bash", "-c"]
args:
- 'python -u myscript.py
--param_minified_yaml "{key: value}"'
</code></pre>
<p>If the arg string includes single quotes, the args string can be passed as a <a href="https://yaml-multiline.info/" rel="nofollow noreferrer">YAML multiline string</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
name: myjob
spec:
template:
spec:
containers:
- name: mycontainer
image: me/mycontainer:latest
command: ["bash", "-c"]
args:
- >-
python -u myscript.py
--param_minified_yaml "{key: 'value'}"
</code></pre>
| Lauri Koskela |
<p>I've stumbled upon a problem when using some more complex argo workflows with initialization and clean-up logic. We are running some initialization in one of the initial steps of the workflow (e.g. creation of some resources) and we'd like to perform a clean-up regardless of the status of the workflow. <code>onExit</code> template seems to be an ideal solution (I think that clean-up is even mentioned in argo documentation as predestined for tasks of the <code>onExit</code> template).</p>
<p>However, I haven't found a way yet to pass some values to it. For example - let's say that in the initialization phase we created some resource with id <code>some-random-unique-id</code> and we'd like to let the <code>onExit</code> container know what resources it needs to clean up.</p>
<p>We tried the <code>outputs</code> of some steps, but it seems that <code>steps</code> are unknown in the <code>onExit</code> template.</p>
<p>Is there a built-in argo mechanism to pass this kind of data? We'd like to avoid some external services (like key-value storage service that would hold the context).</p>
| Andrzej Igielski | <p>You can mark output parameters as global using the <code>globalName</code> field. A global output parameter, assuming it has been set, can be accessed from anywhere in the Workflow, including in an exit handler.</p>
<p>The example file for writing and consuming global output parameters should contain all the information you need to use global output parameters in an exit handler.</p>
<p><a href="https://github.com/argoproj/argo-workflows/blob/master/examples/global-outputs.yaml" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/blob/master/examples/global-outputs.yaml</a></p>
| crenshaw-dev |
<p>I want to execute a task in Argo workflow if a string starts with a particular substring.
For example, my string is <code>tests/dev-or.yaml</code> and I want to execute the task if my string starts with <code>tests/</code>.</p>
<p>Here is my workflow but the condition is not being validated properly</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: conditional-
spec:
entrypoint: conditional-example
arguments:
parameters:
- name: should-print
value: "tests/dev-or.yaml"
templates:
- name: conditional-example
inputs:
parameters:
- name: should-print
steps:
- - name: print-hello
template: whalesay
when: "{{inputs.parameters.should-print }} startsWith 'tests/'"
- name: whalesay
container:
image: docker/whalesay:latest
command: [sh, -c]
args: ["cowsay hello"]
</code></pre>
<p>Below is the error it is giving when I run the workflow</p>
<pre><code>WorkflowFailed 7s workflow-controller Invalid 'when' expression 'tests/dev-or.yaml startsWith 'tests/'': Unable to access unexported field 'yaml' in token 'or.yaml'
</code></pre>
<p>Seems it is not accepting <code>-</code>, <code>.yaml</code> and <code>/</code> while evaluating the when condition.</p>
<p>Any mistake am making in my workflow? What's the right way to use this condition?</p>
| Biru | <p>tl;dr - use this: <code>when: "'{{inputs.parameters.should-print}}' =~ '^tests/'"</code></p>
<p>Parameter substitution happens before the <code>when</code> expression is evaluated. So the when expression is actually <code>tests/dev-or.yaml startsWith 'tests/'</code>. As you can see, the first string needs quotation marks.</p>
<p>But even if you had <code>when: "'{{inputs.parameters.should-print}}' startsWith 'tests/'"</code> (single quotes added), the expression would fail with this error: <code>Cannot transition token types from STRING [tests/dev-or.yaml] to VARIABLE [startsWith]</code>.</p>
<p>Argo Workflows <a href="https://github.com/argoproj/argo-workflows/tree/master/examples#conditionals" rel="nofollow noreferrer">conditionals</a> are evaluated as <a href="https://github.com/Knetic/govaluate" rel="nofollow noreferrer">govaluate</a> expressions. govaluate <a href="https://github.com/Knetic/govaluate/blob/master/MANUAL.md#built-in-functions" rel="nofollow noreferrer">does not have any built-in functions</a>, and Argo Workflows does not augment it with any functions. So <code>startsWith</code> is not defined.</p>
<p>Instead, you should use govaluate's <a href="https://github.com/Knetic/govaluate/blob/master/MANUAL.md#regex-comparators--" rel="nofollow noreferrer">regex comparator</a>. The expression will look like this: <code>when: "'{{inputs.parameters.should-print}}' =~ '^tests/'"</code>.</p>
<p>This is the functional Workflow:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: conditional-
spec:
entrypoint: conditional-example
arguments:
parameters:
- name: should-print
value: "tests/dev-or.yaml"
templates:
- name: conditional-example
inputs:
parameters:
- name: should-print
steps:
- - name: print-hello
template: whalesay
when: "'{{inputs.parameters.should-print}}' =~ '^tests/'"
- name: whalesay
container:
image: docker/whalesay:latest
command: [sh, -c]
args: ["cowsay hello"]
</code></pre>
| crenshaw-dev |
<p><strong>UPDATED</strong><br />
I am trying to get resources via curl inside a pod deployed on K8s.<br />
While I am able to fetch the list of pods via a curl request, I can't do the same for ConfigMaps and Nodes.</p>
<p>Here are the Role and ClusterRole I am using (working for Pods):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: test-ro
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods", “configmaps”]
verbs: ["get","list"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: test-cro
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["nodes”]
verbs: ["get","list"]
</code></pre>
<p>and when I try to fetch the list of nodes:</p>
<pre><code> curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/nodes
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "nodes is forbidden: User \"system:serviceaccount:test:test\" cannot list resource \"nodes\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "nodes"
},
</code></pre>
<p>the same for configmaps:</p>
<pre><code>curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/configmaps
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "configmaps is forbidden: User \"system:serviceaccount:test:test\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"kind": "configmaps"
},
"code": 403
</code></pre>
<p>instead on pods it is working.<br />
What could be the issue? A Wrong configuration on RoleBinding?</p>
| user1971444 | <p>To give the <code>test-ro</code> Role access to list ConfigMaps, the resource name must be specified in its plural form using plain ASCII quotes; the Role in the question uses typographic quotes (<code>“configmaps”</code>), which is likely why listing Pods works but listing ConfigMaps does not. So the Role should be specified like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: test-ro
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods", "configmaps"]
verbs: ["get","list"]
</code></pre>
<p>Listing Nodes requires some different configuration due to Nodes being a cluster-level resource rather than a namespaced resource. <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">Due to this, the <code>nodes</code> permissions must be given in a <code>ClusterRole</code>.</a></p>
<p>Additionally, the API url to list nodes does not have the namespace. The correct url would be <code>https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/nodes</code>.</p>
<p>An example of a working <code>ClusterRole</code> could be this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: test-clusterrole
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["nodes"]
verbs: ["get","list"]
</code></pre>
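<p>Finally, the ClusterRole only takes effect once it is bound to the service account. Based on the account shown in the error message (<code>system:serviceaccount:test:test</code>), a binding sketch could be:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: test
  namespace: test
roleRef:
  kind: ClusterRole
  name: test-clusterrole
  apiGroup: rbac.authorization.k8s.io
</code></pre>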
| Lauri Koskela |