Question | Answer
---|---
<p>With Stackdriver's Kubernetes Engine integration, I can view real-time information on my pods and services, including how many are ready. I can't find any way to monitor this, however.</p>
<p>Is there a way to set up an alerting policy that triggers if no pods in a deployment or service are ready? I can set up a log-based metric, but this seems like a crude workaround for information that Stackdriver Logging already seems to have access to.</p>
|
<p>Based on the <a href="https://cloud.google.com/monitoring/api/metrics_kubernetes" rel="nofollow noreferrer">Kubernetes metrics</a> documentation, there doesn't seem to be such a metric in place.</p>
<p>It does however look like a potential <a href="https://cloud.google.com/support/docs/issue-trackers" rel="nofollow noreferrer">Feature Request</a>.</p>
|
<p>An MCVE is here: <a href="https://github.com/chrissound/k8s-metallb-nginx-ingress-minikube" rel="nofollow noreferrer">https://github.com/chrissound/k8s-metallb-nginx-ingress-minikube</a>
(just run <code>./init.sh</code> and <code>minikube addons enable ingress</code>).</p>
<p>The IP assigned to the Ingress keeps getting reset, and I don't know what is causing it. Do I need additional configuration, perhaps?</p>
<pre><code>kubectl get ingress --all-namespaces
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
chris-example app-ingress example.com 192.168.122.253 80, 443 61m
</code></pre>
<p>And a minute later:</p>
<pre><code>NAMESPACE NAME HOSTS ADDRESS PORTS AGE
chris-example app-ingress example.com 80, 443 60m
</code></pre>
<hr>
<p>In terms of configuration I've just applied:</p>
<pre><code># metallb
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
# nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
</code></pre>
<hr>
<p>Ingress controller logs:</p>
<pre><code>I0714 22:00:38.056148 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8681", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
I0714 22:01:19.153298 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8743", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
I0714 22:01:38.051694 7 status.go:296] updating Ingress chris-example/app-ingress status from [{192.168.122.253 }] to []
I0714 22:01:38.060044 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8773", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
</code></pre>
<p>And the metallb controller logs:</p>
<pre><code>{"caller":"main.go:72","event":"noChange","msg":"service converged, no change","service":"kube-system/kube-dns","ts":"2019-07-14T21:58:39.656725017Z"}
{"caller":"main.go:73","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2019-07-14T21:58:39.656741267Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.6567588Z"}
{"caller":"main.go:72","event":"noChange","msg":"service converged, no change","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.656842026Z"}
{"caller":"main.go:73","event":"endUpdate","msg":"end of service update","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.656873586Z"}
</code></pre>
<hr>
<p>As a test I deleted the deployment+daemonset relating to metallb:</p>
<pre><code>kubectl delete deployment -n metallb-system controller
kubectl delete daemonset -n metallb-system speaker
</code></pre>
<p>And even then, after the external IP is set, it once again gets reset...</p>
|
<p>I was curious and recreated your case. I was able to properly expose the service.</p>
<p>First of all: you don't need the minikube ingress addon when deploying your own NGINX. If you do, you have 2 ingress controllers in the cluster, which leads to confusion later. Run: <code>minikube addons disable ingress</code></p>
<p>Sidenote: you can see this confusion in the IP your Ingress got assigned: <code>192.168.122.253</code>, which is not in the CIDR range <code>192.168.39.160/28</code> that you defined in <code>configmap-metallb.yaml</code>.</p>
<hr />
<p>You need to change the service type of <code>ingress-nginx</code> to <code>LoadBalancer</code>. You can do this by running:</p>
<pre><code>kubectl edit -n ingress-nginx service ingress-nginx
</code></pre>
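<p>If you prefer a non-interactive change, the same edit can be applied with a patch (a sketch using the service and namespace names above):</p>
<pre><code># set the service type to LoadBalancer so MetalLB assigns it an external IP
kubectl patch service ingress-nginx -n ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'
</code></pre>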
<p>Additionally, you can change the <code>app-lb</code> service to <code>NodePort</code>, since it doesn't need to be exposed outside of the cluster - the ingress controller will take care of it.</p>
<hr />
<h3>Explanation</h3>
<p>It's easier to think of an <code>Ingress</code> object as a <code>ConfigMap</code> rather than a <code>Service</code>.</p>
<p>MetalLB takes the configuration you provided in the <code>ConfigMap</code> and waits for an IP request API call. When it gets one, it provides an IP from the CIDR range you specified.</p>
<p>In a similar way, the ingress controller (NGINX in your case) takes the configuration described in the <code>Ingress</code> object and uses it to route traffic to the desired place in the cluster.</p>
<p>The <code>ingress-nginx</code> service is then exposed outside of the cluster with the assigned IP.</p>
<p>Inbound traffic is directed by the Ingress controller (NGINX), based on the rules described in the <code>Ingress</code> object, to a service in front of your application.</p>
<h3>Diagram</h3>
<pre><code>Inbound
traffic
++ +---------+
|| |ConfigMap|
|| +--+------+
|| |
|| | CIDR range to provision
|| v
|| +--+----------+
|| |MetalLB | +-------+
|| |Load balancer| |Ingress|
|| +-+-----------+ +---+---+
|| | |
|| | External IP assigned |Rules described in spec
|| | to service |
|| v v
|| +--+--------------------+ +---+------------------+
|| | | | Ingress Controller |
|---->+ ingress-nginx service +----->+ (NGINX pod) |
+---->| +----->+ |
+-----------------------+ +----------------------+
||
VV
+-----------------+
| Backend service |
| (app-lb) |
| |
+-----------------+
||
VV
+--------------------+
| Backend pod |
| (httpbin) |
| |
+--------------------+
</code></pre>
|
<p>I have created an EKS cluster by following the AWS getting started guide, with k8s version 1.11. I have not changed any configs for kube-dns.
If I create a service, let's say <code>myservice</code>, I would like to access it from another EC2 instance which is not part of this EKS cluster but is in the same VPC.
Basically, I want this DNS to work as the DNS server for instances outside the cluster as well. How can I do that?</p>
<p>I have seen that the kube-dns service gets a cluster IP but doesn't get an external IP, is that necessary for me to be able to access it from outside the cluster?</p>
<p>This is the current response : </p>
<pre><code>[ec2-user@ip-10-0-0-149 ~]$ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 4d
</code></pre>
<p>My VPC subnet is 10.0.0.0/16</p>
<p>I am trying to reach this 172.20.0.10 IP from other instances in my VPC and I am not able to, which I think is expected because my VPC is not aware of any subnet range that contains 172.20.0.10. But then how do I make this DNS service accessible to all my instances in the VPC?</p>
|
<p>The problem you are facing is not really related to DNS. As you said, you cannot reach the ClusterIP from your other instances because it belongs to the internal cluster network and is unreachable from outside of Kubernetes.</p>
<p>Instead of going in that direction, I recommend you make use of <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx Ingress</a>, which allows you to create an Nginx deployment backed by an AWS Load Balancer and expose your services through it.</p>
<p>You can further integrate your Ingresses with the <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">External-DNS</a> addon, which will dynamically create DNS records in Route 53.</p>
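<p>As an illustration, an Ingress along these lines (the hostname and backend service name are hypothetical) would be picked up by both the NGINX ingress controller and External-DNS:</p>
<pre><code>apiVersion: extensions/v1beta1   # Ingress API group available on Kubernetes 1.11
kind: Ingress
metadata:
  name: myservice
  annotations:
    kubernetes.io/ingress.class: nginx
    # External-DNS creates/updates this record in Route 53
    external-dns.alpha.kubernetes.io/hostname: myservice.example.com
spec:
  rules:
  - host: myservice.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
</code></pre>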
<p>This will take some time to configure but this is the Kubernetes way.</p>
|
<p>By adding a static route table on every node with the proper rules, the container network also works fine. For example, given three nodes with three different docker bridge subnets:</p>
<pre><code>node-1(192.168.0.1):
10.0.1.1/24
node-2(192.168.0.2):
10.0.2.1/24
node-3(192.168.0.3):
10.0.3.1/24
</code></pre>
<p>On each node add the following routes:</p>
<pre><code>ip route add 10.0.1.0/24 via 192.168.0.1 dev eth0
ip route add 10.0.2.0/24 via 192.168.0.2 dev eth0
ip route add 10.0.3.0/24 via 192.168.0.3 dev eth0
</code></pre>
<p>With kube-proxy running in iptables mode, the cluster service IP is translated to a pod IP and finally routed to the right node by the route table.</p>
<p>So what's the benefit of using a CNI plugin over route tables? Is there a performance issue with the route table method?</p>
|
<p>By design, Kubernetes has a fluid structure. Pods, services and nodes can come and go depending on the needs, either through manual changes (rolling updates, new deployments) or through automatic scaling (HPA, node auto-scaling). Manually setting up a rigid network structure negates the benefits of this dynamic Kubernetes environment.</p>
<blockquote>
<p>Overlay networks are not required by default, however, they help in specific situations. Like when we don’t have enough IP space, or network can’t handle the extra routes. Or maybe when we want some extra management features the overlays provide. One commonly seen case is when there’s a limit of how many routes the cloud provider route tables can handle. For example, AWS route tables support up to 50 routes without impacting network performance. So if we have more than 50 Kubernetes nodes, AWS route table won’t be enough. In such cases, using an overlay network helps.</p>
<p>It is essentially encapsulating a packet-in-packet which traverses the native network across nodes. You may not want to use an overlay network since it may cause some latency and complexity overhead due to encapsulation-decapsulation of all the packets. It’s often not needed, so we should use it only when we know why we need it.</p>
</blockquote>
<p><a href="https://itnext.io/an-illustrated-guide-to-kubernetes-networking-part-2-13fdc6c4e24c" rel="nofollow noreferrer">https://itnext.io/an-illustrated-guide-to-kubernetes-networking-part-2-13fdc6c4e24c</a></p>
<p>If you are concerned about the latency and overhead caused by CNI plugins, here is a handy set of <a href="https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-updated-april-2019-4a9886efe9c4" rel="nofollow noreferrer">benchmark results of Kubernetes network plugins</a>.</p>
|
<p>I'm trying to provision ephemeral environments via automation, leveraging Kubernetes namespaces. My automation workers deployed in Kubernetes must be able to create Namespaces. So far my experimentation with this has led me nowhere. Which binding do I need to attach to the ServiceAccount to allow it to control Namespaces? Or is my approach wrong?</p>
<p>My code so far:</p>
<p><code>deployment.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: k8s-deployer
namespace: tooling
labels:
app: k8s-deployer
spec:
replicas: 1
selector:
matchLabels:
app: k8s-deployer
template:
metadata:
name: k8s-deployer
labels:
app: k8s-deployer
spec:
serviceAccountName: k8s-deployer
containers: ...
</code></pre>
<p><code>rbac.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-deployer
  namespace: tooling
---
# this lets me view namespaces, but not write
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: administer-cluster
subjects:
- kind: ServiceAccount
  name: k8s-deployer
  namespace: tooling
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
</code></pre>
|
<p>To give a pod control over something in Kubernetes you need at least four things:</p>
<ol>
<li>Create or select an existing <code>Role</code>/<code>ClusterRole</code> (you picked <code>administer-cluster</code>, whose rules are unknown to me).</li>
<li>Create or select an existing <code>ServiceAccount</code> (you created <code>k8s-deployer</code> in namespace <code>tooling</code>).</li>
<li>Put the two together with <code>RoleBinding</code>/<code>ClusterRoleBinding</code>.</li>
<li>Assign the <code>ServiceAccount</code> to a pod.</li>
</ol>
<p>Here's an example that can manage namespaces:</p>
<pre class="lang-yaml prettyprint-override"><code># Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8s-deployer
namespace: tooling
---
# Create a cluster role that allowed to perform
# ["get", "list", "create", "delete", "patch"] over ["namespaces"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k8s-deployer
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list", "create", "delete", "patch"]
---
# Associate the cluster role with the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: k8s-deployer
# make sure NOT to mention 'namespace' here or
# the permissions will only have effect in the
# given namespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: k8s-deployer
subjects:
- kind: ServiceAccount
name: k8s-deployer
namespace: tooling
</code></pre>
<p>After that you need to mention the service account name in pod <code>spec</code> as you already did. More info about RBAC in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">documentation</a>.</p>
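<p>You can double-check the binding before deploying anything by impersonating the service account (names as in the manifests above):</p>
<pre><code>kubectl auth can-i create namespaces \
  --as=system:serviceaccount:tooling:k8s-deployer
# expected output: yes
</code></pre>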
|
<p>We are trying to replicate our existing Traefik configuration with the Kong Ingress Controller on a Kubernetes cluster.</p>
<p>Currently, I'm trying to configure the subdomain handling in the Kong ingress controller and I'm not sure how exactly to proceed with this.</p>
<p>The code below is from the Traefik configuration. Could you please help me configure something similar in Kong?</p>
<pre class="lang-sh prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: application-xyz
namespace: default
spec:
tls: {}
entryPoints:
- web
- websecure
routes:
- match: "HostRegexp(`{sub:(www.)?}mycompany.com`) && PathPrefix(`/`)"
kind: Rule
priority: 1
services:
- name: application-xyz-service
port: 80
</code></pre>
<p>Thanks in Advance.</p>
|
<p>We explored a bit and found that there is no such feature available in Kong. Traefik is a router, so it makes sense for it to offer this kind of functionality. On Kong we ended up configuring a couple of explicit Ingress rules, one for the www host and one for the bare domain, to achieve the same result.</p>
<p>You can also make use of the Admin API available in Kong to achieve the same. The documentation on the official site is pretty straightforward.</p>
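<p>For illustration, the explicit rules boil down to an Ingress with both hosts routed to the same service (a sketch, not our exact config; names taken from your Traefik example):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: application-xyz
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - host: mycompany.com
    http:
      paths:
      - path: /
        backend:
          serviceName: application-xyz-service
          servicePort: 80
  - host: www.mycompany.com
    http:
      paths:
      - path: /
        backend:
          serviceName: application-xyz-service
          servicePort: 80
</code></pre>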
|
<p>I'm working on a project where I need to list AWS S3 buckets in Java Spring Boot. Initially, I used access and secret keys passed through application.properties; that works for me, and the code looked like this:</p>
<pre><code>@GetMapping("service2/getListOfS3Buckets")
public String getListOfS3Buckets() {
AmazonS3 s3Client = AmazonS3ClientBuilder
.standard()
.withRegion(awsRegion)
.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
.build();
List<Bucket> buckets = s3Client.listBuckets();
StringBuilder response = new StringBuilder("List of S3 Buckets:\n");
for (Bucket bucket : buckets) {
response.append(bucket.getName()).append("\n");
}
return response.toString();
}
</code></pre>
<p>However, I understand that this is not considered best practice, and I'd like to improve it by using an IAM role attached to a service account. I've attached the IAM role to the pod, but when I use the following code, I encounter an internal error:</p>
<ol>
<li><p>I created an IAM policy named "my-policy" with the necessary S3 permissions.</p>
</li>
<li><p>I attached this policy to an IAM role named "my-role" using the following AWS CLI commands:</p>
</li>
</ol>
<pre><code>cat >my-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:Describe*",
                "s3-object-lambda:Get*",
                "s3-object-lambda:List*"
            ],
            "Resource": "*"
        }
    ]
}
EOF
</code></pre>
<pre><code>
aws iam create-policy --policy-name my-policy --policy-document file://my-policy.json
eksctl create iamserviceaccount --name my-service-account --namespace kube-system --cluster my-cluster --role-name my-role \
--attach-policy-arn arn:aws:iam::111122223333:policy/my-policy --approve
kubectl describe serviceaccount my-service-account -n kube-system
Name: my-service-account
Namespace: kube-system
Labels: app.kubernetes.io/managed-by=eksctl
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::11111222222:role/my-role
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
</code></pre>
<p>And this is my code</p>
<pre><code>@GetMapping("service2/getListOfS3Buckets")
public String getListOfS3Buckets() {
AmazonS3 s3Client = AmazonS3ClientBuilder
.standard()
.withRegion(awsRegion) // Specify your desired AWS region here
.withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
.build();
List<Bucket> buckets = s3Client.listBuckets();
StringBuilder response = new StringBuilder("List of S3 Buckets:\n");
for (Bucket bucket : buckets) {
response.append(bucket.getName()).append("\n");
}
return response.toString();
}
</code></pre>
<p>I receive an internal error with the following message:</p>
<pre><code>Request processing failed: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), WebIdentityTokenCredentialsProvider: To use assume role profiles the aws-java-sdk-sts module must be on the class path., com.amazonaws.auth.profile.ProfileCredentialsProvider@59639381: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@39201cc7: Failed to connect to service endpoint:]
</code></pre>
<p>I need assistance in resolving this issue and successfully using the IAM role attached to the service account for AWS authentication. Any guidance or suggestions would be greatly appreciated.</p>
|
<p>The error says that you haven't annotated the service account with the proper IAM role.</p>
<p>Steps to resolve this issue:</p>
<ol>
<li>Create an IAM role and attach the S3 access policy.</li>
<li>Annotate the service account with the IAM role created above (an example command is shown at the end of this answer).</li>
</ol>
<p>Please follow steps 1, 2 and 3 in this document:
<a href="https://repost.aws/knowledge-center/eks-restrict-s3-bucket" rel="nofollow noreferrer">https://repost.aws/knowledge-center/eks-restrict-s3-bucket</a></p>
|
<p>Has anyone succeeded in getting kubectl connecting to the AKS public API endpoint for their AKS cluster, from behind a corporate proxy that does SSL inspection ?</p>
<p>When I try to do something like</p>
<p><code>kubectl get nodes</code></p>
<p>I get the following error: (edited) </p>
<p><code>Unable to connect to the server: x509: certificate signed by unknown authority</code></p>
<p>So it appears my corporate proxy does SSL inspection.</p>
<p>My question would be: Is it at all possible to access the AKS public API via HTTPS through an SSL-interfering proxy, either via another "helper proxy" or other method?</p>
|
<p>If your corporate proxy performs TLS re-encryption and injects its own certificate into the TLS connection, there are a couple of things you can do:</p>
<p>1) Extract your corporate TLS certificate and paste it into your ~/.kube/config. To get the corporate certificate, use for example this command:</p>
<p><code>openssl s_client -showcerts -connect KUBE_API:443</code> </p>
<p>2) Skip TLS certificate verification in ~/.kube/config</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    server: https://KUBE_API:8443
    insecure-skip-tls-verify: true
</code></pre>
|
<p>I am trying to deploy a sample Spring microservice using AWS EKS. After creating the cluster I deployed to it; the cluster has a single worker node created with t2.small. Describing the service shows that it is deployed and running. However, accessing the endpoint through the node's IP address and port (node-ip-address:31821/cicd/load) gives <code>This site can’t provide a secure connection</code> in the browser.</p>
<p><strong>I am using t2.small Amazon Linux 2 single node (Only 1 worker node) in node group.</strong></p>
<p>My Docker file looks like the following,</p>
<pre><code>FROM openjdk:8
ADD target/cicdpipeline-0.0.1-SNAPSHOT.war cicdpipeline.war
EXPOSE 8085
ENTRYPOINT ["java", "-jar", "cicdpipeline.war"]
</code></pre>
<p>I am using following command in my deploy stage of jenkinsfile to make kubernetes deployment,</p>
<pre><code>kubectl apply -f deployment/deployment.yaml
kubectl apply -f deployment/service.yaml
</code></pre>
<p><strong>My deployment.yaml file,</strong></p>
<p><a href="https://i.stack.imgur.com/289f0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/289f0.png" alt="enter image description here" /></a></p>
<p><strong>My Service.yaml file like the following,</strong></p>
<p><a href="https://i.stack.imgur.com/CVJiq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CVJiq.png" alt="enter image description here" /></a></p>
<p><code>kubectl describe service pipelineservice</code> command gives the following result,</p>
<p><a href="https://i.stack.imgur.com/LRJT4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LRJT4.png" alt="enter image description here" /></a></p>
<p><code>kubectl describe pod</code> command giving the following results,</p>
<p><a href="https://i.stack.imgur.com/FULsF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FULsF.png" alt="enter image description here" /></a></p>
<p>Accessing Output using Node Ip address , port and end URL like <code>node-ip-address:31821/cicd/load</code> typing in browser.</p>
<p>NB:</p>
<ol>
<li>Describing the pod shows it in the Running state.</li>
<li>The security group for the EC2 node has an inbound rule added to allow NodePorts.</li>
</ol>
<p><a href="https://i.stack.imgur.com/4IM23.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4IM23.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/RKuTv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RKuTv.png" alt="enter image description here" /></a></p>
<p><strong>Update</strong></p>
<p>Warning while applying <code>kubectl describe pod</code> command - <code>Warning FailedToRetrieveImagePullSecret Unable to retrieve some image pull secret (dhubauth) ; Attempting to pull the image may not suceed</code></p>
<p>Here I am just using a single worker node for learning purposes, with instance type t2.small. So is this problem because of my instance type, or because of an issue in my YAML files?</p>
<p>I just started exploring EKS for learning purposes, using the NodePort method without an ELB or Ingress. Can anyone suggest where I went wrong in the implementation, or whether I am checking the output in the wrong way? Please also suggest any documentation I can use to resolve this type of problem.</p>
|
<p>1. <code>This site can’t provide a secure connection</code>: this is a browser error - you are accessing the application over the <code>https</code> protocol, but it is exposed over <code>http</code>.</p>
<p>Since you are accessing the application via the node IP, which serves plain <code>http</code>, access it like below (make sure the URL is prefixed with <code>http</code>):</p>
<pre><code>http://node-ip-address:31821/cicd/load
</code></pre>
<ol start="2">
<li><code>FailedToRetrieveImagePullSecret Unable to retrieve some image pull secret (dhubauth) ; Attempting to pull the image may not suceed</code></li>
</ol>
<p>If you are pulling the image from a private repository, you have to configure an image pull secret. Please double-check that the <code>dhubauth</code> Kubernetes secret exists:</p>
<pre><code>kubectl get secret dhubauth
</code></pre>
<p>If the secret is not available, please follow this document to create it:
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
<ol start="3">
<li>You are trying to expose the application via NodePort. Please make sure your worker node (EC2 instance) is running in a public subnet.</li>
</ol>
|
<p>I have used some Bitnami charts in my Kubernetes app. In my pod, there is a file at the path /etc/settings/test.html that I want to override. From what I found while searching, I should mount my own file via a ConfigMap. But how can I use the created ConfigMap with the <strong>existing pod</strong>? Most of the examples create a new pod that uses the ConfigMap, but I don't want to create a new pod - I want to use the existing one.</p>
<p>Thanks</p>
|
<p>If not all, then almost all pod spec fields are immutable, meaning that you can't change them without destroying the old pod and creating a new one with the desired parameters. There is <em>no way</em> to edit a pod's volume list without recreating it.</p>
<p>The reason behind this is that pods aren't meant to be immortal. Pods are meant to be temporary units that can be spawned/destroyed according to the scheduler's needs. In general, you need a workload object that does pod management for you (a <code>Deployment</code>, <code>StatefulSet</code>, <code>Job</code>, or <code>DaemonSet</code>, depending on deployment strategy and application nature).</p>
<p>There are two ways to edit a file in an existing pod: either by using <code>kubectl exec</code> and console commands to edit the file in place, or <code>kubectl cp</code> to copy an already edited file into the pod. I advise you against <em>both of these</em>, because the change is not permanent. Better to back up the necessary data, switch the workload to a <code>Deployment</code> with one replica, and then mount a <code>configMap</code> as you read on the Internet.</p>
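<p>A minimal sketch of that approach, assuming you can edit the chart's Deployment (the ConfigMap and container names here are placeholders):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: settings-override
data:
  test.html: |
    <p>replacement content</p>
---
# relevant fragment of the Deployment's pod template
spec:
  containers:
  - name: app
    volumeMounts:
    - name: settings-override
      mountPath: /etc/settings/test.html
      subPath: test.html   # overrides only this file; the rest of /etc/settings stays intact
  volumes:
  - name: settings-override
    configMap:
      name: settings-override
</code></pre>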
|
<p>What I would like to do is run some backup scripts on each of my Kubernetes nodes periodically. I want it to run inside the Kubernetes cluster, in contrast to just adding the script to each node's crontab. This is because I will store the backup on a volume mounted to the node by Kubernetes. It differs depending on the configuration, but it could be a CIFS filesystem mounted by a Flex plugin or <code>awsElasticBlockStore</code>.</p>
<p>It would be perfect if <code>CronJob</code> were able to template a <code>DaemonSet</code> (instead of fixing it as <code>jobTemplate</code>) and there were a possibility to set the <code>DaemonSet</code> restart policy to <code>OnFailure</code>.</p>
<p>I would like to avoid defining <code>n</code> different <code>CronJobs</code>, one for each of <code>n</code> nodes, and then tying them together with <code>nodeSelectors</code>, since this will not be convenient to maintain in an environment where the node count changes dynamically.</p>
<p>From what I can see, the problem was discussed here without any clear conclusion: <a href="https://github.com/kubernetes/kubernetes/issues/36601" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/36601</a></p>
<p>Do you have any hacks or tricks to achieve this?</p>
|
<p>You can use a DaemonSet that runs the following bash script:</p>
<pre><code>while :; do
  currenttime=$(date +%H:%M)
  # run the backup only inside the 23:00-23:05 window; otherwise wait a minute and check again
  if [[ "$currenttime" > "23:00" ]] && [[ "$currenttime" < "23:05" ]]; then
    do_something
  else
    sleep 60
  fi
  # $? holds the exit code of do_something (or sleep); alert on failure
  test "$?" -gt 0 && notify_failed_job
done
</code></pre>
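<p>Wrapped into a DaemonSet, that could look roughly like this (the image, the ConfigMap holding the script, and the backup volume are placeholders you would adapt to your CIFS/EBS setup):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-backup
spec:
  selector:
    matchLabels:
      app: node-backup
  template:
    metadata:
      labels:
        app: node-backup
    spec:
      containers:
      - name: backup
        image: your-backup-image:latest        # must contain bash and your backup tooling
        command: ["/bin/bash", "/scripts/backup-loop.sh"]
        volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: backup-target
          mountPath: /backup                   # mount your CIFS/awsElasticBlockStore volume here
      volumes:
      - name: scripts
        configMap:
          name: backup-loop                    # ConfigMap containing the script above
          defaultMode: 0755
      - name: backup-target
        emptyDir: {}                           # placeholder for the real backup volume
</code></pre>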
|
<p>I'm currently working with Rook v1.2.2 to create a Ceph cluster on my Kubernetes cluster (v1.16.3), and I'm failing to add a rack level to my CRUSH map.</p>
<p>I want to go from :</p>
<pre><code>ID CLASS WEIGHT TYPE NAME
-1 0.02737 root default
-3 0.01369 host test-w1
0 hdd 0.01369 osd.0
-5 0.01369 host test-w2
1 hdd 0.01369 osd.1
</code></pre>
<p>to something like : </p>
<pre><code>ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.01358 root default
-5 0.01358 zone zone1
-4 0.01358 rack rack1
-3 0.01358 host mynode
0 hdd 0.00679 osd.0 up 1.00000 1.00000
1 hdd 0.00679 osd.1 up 1.00000 1.00000
</code></pre>
<p>Like explained in the official rook doc (<a href="https://rook.io/docs/rook/v1.2/ceph-cluster-crd.html#osd-topology" rel="nofollow noreferrer">https://rook.io/docs/rook/v1.2/ceph-cluster-crd.html#osd-topology</a>).</p>
<p>Steps I followed :</p>
<p>I have a v1.16.3 Kubernetes Cluster with 1 Master (test-m1) and two workers (test-w1 and test-w2).
I installed this cluster using the default configuration of Kubespray (<a href="https://kubespray.io/#/docs/getting-started" rel="nofollow noreferrer">https://kubespray.io/#/docs/getting-started</a>).</p>
<p>I labeled my node with :</p>
<pre><code>kubectl label node test-w1 topology.rook.io/rack=rack1
kubectl label node test-w2 topology.rook.io/rack=rack2
</code></pre>
<p>I added the label <code>role=storage-node</code> and the taint <code>storage-node=true:NoSchedule</code> to force Rook to run on specific storage nodes; here is the full example of labels and taints for one storage node:</p>
<pre><code>Name: test-w1
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=test-w1
kubernetes.io/os=linux
role=storage-node
topology.rook.io/rack=rack1
Annotations: csi.volume.kubernetes.io/nodeid: {"rook-ceph.cephfs.csi.ceph.com":"test-w1","rook-ceph.rbd.csi.ceph.com":"test-w1"}
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 29 Jan 2020 03:38:52 +0100
Taints: storage-node=true:NoSchedule
</code></pre>
<p>I started to deploy the common.yml of Rook : <a href="https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/common.yaml" rel="nofollow noreferrer">https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/common.yaml</a></p>
<p>Then I applied a custom operator.yml file to be able to run the operator, csi-plugin and agent on nodes labelled "role=storage-node":</p>
<pre><code>#################################################################################################################
# The deployment for the rook operator
# Contains the common settings for most Kubernetes deployments.
# For example, to create the rook-ceph cluster:
# kubectl create -f common.yaml
# kubectl create -f operator.yaml
# kubectl create -f cluster.yaml
#
# Also see other operator sample files for variations of operator.yaml:
# - operator-openshift.yaml: Common settings for running in OpenShift
#################################################################################################################
# OLM: BEGIN OPERATOR DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
name: rook-ceph-operator
namespace: rook-ceph
labels:
operator: rook
storage-backend: ceph
spec:
selector:
matchLabels:
app: rook-ceph-operator
replicas: 1
template:
metadata:
labels:
app: rook-ceph-operator
spec:
serviceAccountName: rook-ceph-system
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: role
operator: In
values:
- storage-node
tolerations:
- key: "storage-node"
operator: "Exists"
effect: "NoSchedule"
containers:
- name: rook-ceph-operator
image: rook/ceph:v1.2.2
args: ["ceph", "operator"]
volumeMounts:
- mountPath: /var/lib/rook
name: rook-config
- mountPath: /etc/ceph
name: default-config-dir
env:
# If the operator should only watch for cluster CRDs in the same namespace, set this to "true".
# If this is not set to true, the operator will watch for cluster CRDs in all namespaces.
- name: ROOK_CURRENT_NAMESPACE_ONLY
value: "false"
# To disable RBAC, uncomment the following:
# - name: RBAC_ENABLED
# value: "false"
# Rook Agent toleration. Will tolerate all taints with all keys.
# Choose between NoSchedule, PreferNoSchedule and NoExecute:
# - name: AGENT_TOLERATION
# value: "NoSchedule"
# (Optional) Rook Agent toleration key. Set this to the key of the taint you want to tolerate
# - name: AGENT_TOLERATION_KEY
# value: "storage-node"
# (Optional) Rook Agent tolerations list. Put here list of taints you want to tolerate in YAML format.
- name: AGENT_TOLERATIONS
value: |
- effect: NoSchedule
key: storage-class
operator: Exists
# (Optional) Rook Agent priority class name to set on the pod(s)
# - name: AGENT_PRIORITY_CLASS_NAME
# value: "<PriorityClassName>"
# (Optional) Rook Agent NodeAffinity.
- name: AGENT_NODE_AFFINITY
value: "role=storage-node"
# (Optional) Rook Agent mount security mode. Can by `Any` or `Restricted`.
# `Any` uses Ceph admin credentials by default/fallback.
# For using `Restricted` you must have a Ceph secret in each namespace storage should be consumed from and
# set `mountUser` to the Ceph user, `mountSecret` to the Kubernetes secret name.
# to the namespace in which the `mountSecret` Kubernetes secret namespace.
# - name: AGENT_MOUNT_SECURITY_MODE
# value: "Any"
# Set the path where the Rook agent can find the flex volumes
# - name: FLEXVOLUME_DIR_PATH
# value: "<PathToFlexVolumes>"
# Set the path where kernel modules can be found
# - name: LIB_MODULES_DIR_PATH
# value: "<PathToLibModules>"
# Mount any extra directories into the agent container
# - name: AGENT_MOUNTS
# value: "somemount=/host/path:/container/path,someothermount=/host/path2:/container/path2"
# Rook Discover toleration. Will tolerate all taints with all keys.
# Choose between NoSchedule, PreferNoSchedule and NoExecute:
# - name: DISCOVER_TOLERATION
# value: "NoSchedule"
# (Optional) Rook Discover toleration key. Set this to the key of the taint you want to tolerate
# - name: DISCOVER_TOLERATION_KEY
# value: "storage-node"
# (Optional) Rook Discover tolerations list. Put here list of taints you want to tolerate in YAML format.
- name: DISCOVER_TOLERATIONS
value: |
- effect: NoSchedule
key: storage-node
operator: Exists
# (Optional) Rook Discover priority class name to set on the pod(s)
# - name: DISCOVER_PRIORITY_CLASS_NAME
# value: "<PriorityClassName>"
# (Optional) Discover Agent NodeAffinity.
- name: DISCOVER_AGENT_NODE_AFFINITY
value: "role=storage-node"
# Allow rook to create multiple file systems. Note: This is considered
# an experimental feature in Ceph as described at
# http://docs.ceph.com/docs/master/cephfs/experimental-features/#multiple-filesystems-within-a-ceph-cluster
# which might cause mons to crash as seen in https://github.com/rook/rook/issues/1027
- name: ROOK_ALLOW_MULTIPLE_FILESYSTEMS
value: "false"
# The logging level for the operator: INFO | DEBUG
- name: ROOK_LOG_LEVEL
value: "INFO"
# The interval to check the health of the ceph cluster and update the status in the custom resource.
- name: ROOK_CEPH_STATUS_CHECK_INTERVAL
value: "60s"
# The interval to check if every mon is in the quorum.
- name: ROOK_MON_HEALTHCHECK_INTERVAL
value: "45s"
# The duration to wait before trying to failover or remove/replace the
# current mon with a new mon (useful for compensating flapping network).
- name: ROOK_MON_OUT_TIMEOUT
value: "600s"
# The duration between discovering devices in the rook-discover daemonset.
- name: ROOK_DISCOVER_DEVICES_INTERVAL
value: "60m"
# Whether to start pods as privileged that mount a host path, which includes the Ceph mon and osd pods.
# This is necessary to workaround the anyuid issues when running on OpenShift.
# For more details see https://github.com/rook/rook/issues/1314#issuecomment-355799641
- name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
value: "false"
# In some situations SELinux relabelling breaks (times out) on large filesystems, and doesn't work with cephfs ReadWriteMany volumes (last relabel wins).
# Disable it here if you have similar issues.
# For more details see https://github.com/rook/rook/issues/2417
- name: ROOK_ENABLE_SELINUX_RELABELING
value: "true"
# In large volumes it will take some time to chown all the files. Disable it here if you have performance issues.
# For more details see https://github.com/rook/rook/issues/2254
- name: ROOK_ENABLE_FSGROUP
value: "true"
# Disable automatic orchestration when new devices are discovered
- name: ROOK_DISABLE_DEVICE_HOTPLUG
value: "false"
# Provide customised regex as the values using comma. For eg. regex for rbd based volume, value will be like "(?i)rbd[0-9]+".
# In case of more than one regex, use comma to seperate between them.
# Default regex will be "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
# Add regex expression after putting a comma to blacklist a disk
# If value is empty, the default regex will be used.
- name: DISCOVER_DAEMON_UDEV_BLACKLIST
value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
# Whether to enable the flex driver. By default it is enabled and is fully supported, but will be deprecated in some future release
# in favor of the CSI driver.
- name: ROOK_ENABLE_FLEX_DRIVER
value: "false"
# Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
# This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs.
- name: ROOK_ENABLE_DISCOVERY_DAEMON
value: "true"
# Enable the default version of the CSI CephFS driver. To start another version of the CSI driver, see image properties below.
- name: ROOK_CSI_ENABLE_CEPHFS
value: "true"
# Enable the default version of the CSI RBD driver. To start another version of the CSI driver, see image properties below.
- name: ROOK_CSI_ENABLE_RBD
value: "true"
- name: ROOK_CSI_ENABLE_GRPC_METRICS
value: "true"
# Enable deployment of snapshotter container in ceph-csi provisioner.
- name: CSI_ENABLE_SNAPSHOTTER
value: "true"
# Enable Ceph Kernel clients on kernel < 4.17 which support quotas for Cephfs
# If you disable the kernel client, your application may be disrupted during upgrade.
# See the upgrade guide: https://rook.io/docs/rook/v1.2/ceph-upgrade.html
- name: CSI_FORCE_CEPHFS_KERNEL_CLIENT
value: "true"
# CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
# Default value is RollingUpdate.
#- name: CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY
# value: "OnDelete"
# CSI Rbd plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
# Default value is RollingUpdate.
#- name: CSI_RBD_PLUGIN_UPDATE_STRATEGY
# value: "OnDelete"
# The default version of CSI supported by Rook will be started. To change the version
# of the CSI driver to something other than what is officially supported, change
# these images to the desired release of the CSI driver.
#- name: ROOK_CSI_CEPH_IMAGE
# value: "quay.io/cephcsi/cephcsi:v1.2.2"
#- name: ROOK_CSI_REGISTRAR_IMAGE
# value: "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0"
#- name: ROOK_CSI_PROVISIONER_IMAGE
# value: "quay.io/k8scsi/csi-provisioner:v1.4.0"
#- name: ROOK_CSI_SNAPSHOTTER_IMAGE
# value: "quay.io/k8scsi/csi-snapshotter:v1.2.2"
#- name: ROOK_CSI_ATTACHER_IMAGE
# value: "quay.io/k8scsi/csi-attacher:v1.2.0"
# kubelet directory path, if kubelet configured to use other than /var/lib/kubelet path.
#- name: ROOK_CSI_KUBELET_DIR_PATH
# value: "/var/lib/kubelet"
# (Optional) Ceph Provisioner NodeAffinity.
- name: CSI_PROVISIONER_NODE_AFFINITY
value: "role=storage-node"
# (Optional) CEPH CSI provisioner tolerations list. Put here list of taints you want to tolerate in YAML format.
# CSI provisioner would be best to start on the same nodes as other ceph daemons.
- name: CSI_PROVISIONER_TOLERATIONS
value: |
- effect: NoSchedule
key: storage-node
operator: Exists
# (Optional) Ceph CSI plugin NodeAffinity.
- name: CSI_PLUGIN_NODE_AFFINITY
value: "role=storage-node"
# (Optional) CEPH CSI plugin tolerations list. Put here list of taints you want to tolerate in YAML format.
# CSI plugins need to be started on all the nodes where the clients need to mount the storage.
- name: CSI_PLUGIN_TOLERATIONS
value: |
- effect: NoSchedule
key: storage-node
operator: Exists
# Configure CSI cephfs grpc and liveness metrics port
#- name: CSI_CEPHFS_GRPC_METRICS_PORT
# value: "9091"
#- name: CSI_CEPHFS_LIVENESS_METRICS_PORT
# value: "9081"
# Configure CSI rbd grpc and liveness metrics port
#- name: CSI_RBD_GRPC_METRICS_PORT
# value: "9090"
#- name: CSI_RBD_LIVENESS_METRICS_PORT
# value: "9080"
# Time to wait until the node controller will move Rook pods to other
# nodes after detecting an unreachable node.
# Pods affected by this setting are:
# mgr, rbd, mds, rgw, nfs, PVC based mons and osds, and ceph toolbox
# The value used in this variable replaces the default value of 300 secs
# added automatically by k8s as Toleration for
# <node.kubernetes.io/unreachable>
# The total amount of time to reschedule Rook pods in healthy nodes
# before detecting a <not ready node> condition will be the sum of:
# --> node-monitor-grace-period: 40 seconds (k8s kube-controller-manager flag)
# --> ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS: 5 seconds
- name: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS
value: "5"
# The name of the node to pass with the downward API
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# The pod name to pass with the downward API
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# The pod namespace to pass with the downward API
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# Uncomment it to run rook operator on the host network
#hostNetwork: true
volumes:
- name: rook-config
emptyDir: {}
- name: default-config-dir
emptyDir: {}
# OLM: END OPERATOR DEPLOYMENT
</code></pre>
<p>Then I applied my own custom ceph-cluster.yml file to allow the pods to run on nodes labelled "role=storage-node"</p>
<pre><code>#################################################################################################################
# Define the settings for the rook-ceph cluster with settings that should only be used in a test environment.
# A single filestore OSD will be created in the dataDirHostPath.
# For example, to create the cluster:
#   kubectl create -f common.yaml
#   kubectl create -f operator.yaml
#   kubectl create -f ceph-cluster.yaml
#################################################################################################################
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.5
    allowUnsupported: true
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  mon:
    count: 1
    allowMultiplePerNode: true
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false # requires Prometheus to be pre-installed
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role
              operator: In
              values:
              - storage-node
      tolerations:
      - key: "storage-node"
        operator: "Exists"
        effect: "NoSchedule"
  mgr:
    modules:
    # the pg_autoscaler is only available on nautilus or newer. remove this if testing mimic.
    - name: pg_autoscaler
      enabled: true
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: "test-w1"
      directories:
      - path: /var/lib/rook
    - name: "test-w2"
      directories:
      - path: /var/lib/rook
</code></pre>
<p>With this configuration, Rook does not apply the labels to the CRUSH map.
If I install the toolbox.yml (<a href="https://rook.io/docs/rook/v1.2/ceph-toolbox.html" rel="nofollow noreferrer">https://rook.io/docs/rook/v1.2/ceph-toolbox.html</a>), go into it, and run</p>
<pre><code>ceph osd tree
ceph osd crush tree
</code></pre>
<p>I have the following output : </p>
<pre><code>ID CLASS WEIGHT TYPE NAME
-1 0.02737 root default
-3 0.01369 host test-w1
0 hdd 0.01369 osd.0
-5 0.01369 host test-w2
1 hdd 0.01369 osd.1
</code></pre>
<p>As you can see, no rack is defined, even though I labelled my nodes correctly.</p>
<p>What is surprising is that the osd-prepare pods do retrieve the information - see the first line of the following logs:</p>
<pre><code>$ kubectl logs rook-ceph-osd-prepare-test-w1-7cp4f -n rook-ceph
2020-01-29 09:59:07.272649 I | cephcmd: crush location of osd: root=default host=test-w1 rack=rack1
[couppayy@test-m1 test_local]$ cat preposd.txt
2020-01-29 09:59:07.155656 I | cephcmd: desired devices to configure osds: [{Name: OSDsPerDevice:1 MetadataDevice: DatabaseSizeMB:0 DeviceClass: IsFilter:false IsDevicePathFilter:false}]
2020-01-29 09:59:07.185024 I | rookcmd: starting Rook v1.2.2 with arguments '/rook/rook ceph osd provision'
2020-01-29 09:59:07.185069 I | rookcmd: flag values: --cluster-id=c9ee638a-1d02-4ad9-95c9-cb796f61623a, --data-device-filter=, --data-device-path-filter=, --data-devices=, --data-directories=/var/lib/rook, --encrypted-device=false, --force-format=false, --help=false, --location=, --log-flush-frequency=5s, --log-level=INFO, --metadata-device=, --node-name=test-w1, --operator-image=, --osd-database-size=0, --osd-journal-size=5120, --osd-store=, --osd-wal-size=576, --osds-per-device=1, --pvc-backed-osd=false, --service-account=
2020-01-29 09:59:07.185108 I | op-mon: parsing mon endpoints: a=10.233.35.212:6789
2020-01-29 09:59:07.272603 I | op-osd: CRUSH location=root=default host=test-w1 rack=rack1
2020-01-29 09:59:07.272649 I | cephcmd: crush location of osd: root=default host=test-w1 rack=rack1
2020-01-29 09:59:07.313099 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-01-29 09:59:07.313397 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2020-01-29 09:59:07.322175 I | cephosd: discovering hardware
2020-01-29 09:59:07.322228 I | exec: Running command: lsblk --all --noheadings --list --output KNAME
2020-01-29 09:59:07.365036 I | exec: Running command: lsblk /dev/sda --bytes --nodeps --pairs --output SIZE,ROTA,RO,TYPE,PKNAME
2020-01-29 09:59:07.416812 W | inventory: skipping device sda: Failed to complete 'lsblk /dev/sda': exit status 1. lsblk: /dev/sda: not a block device
2020-01-29 09:59:07.416873 I | exec: Running command: lsblk /dev/sda1 --bytes --nodeps --pairs --output SIZE,ROTA,RO,TYPE,PKNAME
2020-01-29 09:59:07.450851 W | inventory: skipping device sda1: Failed to complete 'lsblk /dev/sda1': exit status 1. lsblk: /dev/sda1: not a block device
2020-01-29 09:59:07.450892 I | exec: Running command: lsblk /dev/sda2 --bytes --nodeps --pairs --output SIZE,ROTA,RO,TYPE,PKNAME
2020-01-29 09:59:07.457890 W | inventory: skipping device sda2: Failed to complete 'lsblk /dev/sda2': exit status 1. lsblk: /dev/sda2: not a block device
2020-01-29 09:59:07.457934 I | exec: Running command: lsblk /dev/sr0 --bytes --nodeps --pairs --output SIZE,ROTA,RO,TYPE,PKNAME
2020-01-29 09:59:07.503758 W | inventory: skipping device sr0: Failed to complete 'lsblk /dev/sr0': exit status 1. lsblk: /dev/sr0: not a block device
2020-01-29 09:59:07.503793 I | cephosd: creating and starting the osds
2020-01-29 09:59:07.543504 I | cephosd: configuring osd devices: {"Entries":{}}
2020-01-29 09:59:07.543554 I | exec: Running command: ceph-volume lvm batch --prepare
2020-01-29 09:59:08.906271 I | cephosd: no more devices to configure
2020-01-29 09:59:08.906311 I | exec: Running command: ceph-volume lvm list --format json
2020-01-29 09:59:10.841568 I | cephosd: 0 ceph-volume osd devices configured on this node
2020-01-29 09:59:10.841595 I | cephosd: devices = []
2020-01-29 09:59:10.847396 I | cephosd: configuring osd dirs: map[/var/lib/rook:-1]
2020-01-29 09:59:10.848011 I | exec: Running command: ceph osd create 652071c9-2cdb-4df9-a20e-813738c4e3f6 --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/851021116
2020-01-29 09:59:14.213679 I | cephosd: successfully created OSD 652071c9-2cdb-4df9-a20e-813738c4e3f6 with ID 0
2020-01-29 09:59:14.213744 I | cephosd: osd.0 appears to be new, cleaning the root dir at /var/lib/rook/osd0
2020-01-29 09:59:14.214417 I | cephconfig: writing config file /var/lib/rook/osd0/rook-ceph.config
2020-01-29 09:59:14.214653 I | exec: Running command: ceph auth get-or-create osd.0 -o /var/lib/rook/osd0/keyring osd allow * mon allow profile osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format plain
2020-01-29 09:59:17.189996 I | cephosd: Initializing OSD 0 file system at /var/lib/rook/osd0...
2020-01-29 09:59:17.194681 I | exec: Running command: ceph mon getmap --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/298283883
2020-01-29 09:59:20.936868 I | exec: got monmap epoch 1
2020-01-29 09:59:20.937380 I | exec: Running command: ceph-osd --mkfs --id=0 --cluster=rook-ceph --conf=/var/lib/rook/osd0/rook-ceph.config --osd-data=/var/lib/rook/osd0 --osd-uuid=652071c9-2cdb-4df9-a20e-813738c4e3f6 --monmap=/var/lib/rook/osd0/tmp/activate.monmap --keyring=/var/lib/rook/osd0/keyring --osd-journal=/var/lib/rook/osd0/journal
2020-01-29 09:59:21.324912 I | mkfs-osd0: 2020-01-29 09:59:21.323 7fc7e2a8ea80 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to
force use of aio anyway
2020-01-29 09:59:21.386136 I | mkfs-osd0: 2020-01-29 09:59:21.384 7fc7e2a8ea80 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to
force use of aio anyway
2020-01-29 09:59:21.387553 I | mkfs-osd0: 2020-01-29 09:59:21.384 7fc7e2a8ea80 -1 journal do_read_entry(4096): bad header magic
2020-01-29 09:59:21.387585 I | mkfs-osd0: 2020-01-29 09:59:21.384 7fc7e2a8ea80 -1 journal do_read_entry(4096): bad header magic
2020-01-29 09:59:21.450639 I | cephosd: Config file /var/lib/rook/osd0/rook-ceph.config:
[global]
fsid = a19423a1-f135-446f-b4d9-f52da10a935f
mon initial members = a
mon host = v1:10.233.35.212:6789
public addr = 10.233.95.101
cluster addr = 10.233.95.101
mon keyvaluedb = rocksdb
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
debug default = 0
debug rados = 0
debug mon = 0
debug osd = 0
debug bluestore = 0
debug filestore = 0
debug journal = 0
debug leveldb = 0
filestore_omap_backend = rocksdb
osd pg bits = 11
osd pgp bits = 11
osd pool default size = 1
osd pool default pg num = 100
osd pool default pgp num = 100
osd max object name len = 256
osd max object namespace len = 64
osd objectstore = filestore
rbd_default_features = 3
fatal signal handlers = false
[osd.0]
keyring = /var/lib/rook/osd0/keyring
osd journal size = 5120
2020-01-29 09:59:21.450723 I | cephosd: completed preparing osd &{ID:0 DataPath:/var/lib/rook/osd0 Config:/var/lib/rook/osd0/rook-ceph.config Cluster:rook-ceph KeyringPath:/var/lib/rook/osd0/keyring UUID:652071c9-2cdb-4df9-a20e-813738c4e3f6 Journal:/var/lib/rook/osd0/journal IsFileStore:true IsDirectory:true DevicePartUUID: CephVolumeInitiated:false LVPath: SkipLVRelease:false Location: LVBackedPV:false}
2020-01-29 09:59:21.450743 I | cephosd: 1/1 osd dirs succeeded on this node
2020-01-29 09:59:21.450755 I | cephosd: saving osd dir map
2020-01-29 09:59:21.479301 I | cephosd: device osds:[]
dir osds: [{ID:0 DataPath:/var/lib/rook/osd0 Config:/var/lib/rook/osd0/rook-ceph.config Cluster:rook-ceph KeyringPath:/var/lib/rook/osd0/keyring UUID:652071c9-2cdb-4df9-a20e-813738c4e3f6 Journal:/var/lib/rook/osd0/journal IsFileStore:true IsDirectory:true DevicePartUUID: CephVolumeInitiated:false LVPath: SkipLVRelease:false Location: LVBackedPV:false}]
</code></pre>
<p>Do you have any idea where the issue is and how I can solve it?</p>
|
<p>I talked with a Rook Dev about this issue on this post : <a href="https://groups.google.com/forum/#!topic/rook-dev/NIO16OZFeGY" rel="nofollow noreferrer">https://groups.google.com/forum/#!topic/rook-dev/NIO16OZFeGY</a></p>
<p>He was able to reproduce the problem : </p>
<blockquote>
<p>Yohan, I’m also able to reproduce this problem of the labels not being picked up by the OSDs even though the labels are detected in the OSD prepare pod as you see. Could you open a GitHub issue for this? I’m investigating the fix.</p>
</blockquote>
<p>But it appears that the issue only concerns OSDs using directories; the problem does not exist when you use devices (like raw devices):</p>
<blockquote>
<p>Yohan, I found that this only affects OSDs created on directories. I would recommend you test creating the OSDs on raw devices to get the CRUSH map populated correctly. In the v1.3 release it is also important to note that support for directories on OSDs is being removed. It will be expected that OSDs will be created on raw devices or partitions after that release. See this issue for more details:
<a href="https://github.com/rook/rook/issues/4724" rel="nofollow noreferrer">https://github.com/rook/rook/issues/4724</a></p>
<p>Since the support for OSDs on directories is being removed in the next release I don’t anticipate fixing this issue.</p>
</blockquote>
<p>As you can see, the issue will not be fixed because support for directories will soon be deprecated.</p>
<p>I restarted my tests using raw devices instead of directories and it worked like a charm.</p>
<p><a href="https://i.stack.imgur.com/YVy8I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YVy8I.png" alt="Crush Map with Racks"></a></p>
<p>I want to thank Travis for the help he provided and his quick answers!</p>
|
<p>Currently, I have a number of Kubernetes manifest files which define <code>service</code>s or <code>deployment</code>s. When I do a <code>kubectl apply</code> I need to include -all- the files which have changes and need to be applied.</p>
<p>Is there a way to have a main manifest file which references all the other files, so when I do <code>kubectl apply</code> I just have to include the main manifest file and don't have to worry about manually adding each file that has changed, etc.?</p>
<p>Is this possible?</p>
<p>I did think of making an alias or batch/bash file that has the <code>apply</code> command and -all- the files listed... but I'm curious if there's a 'Kubernetes' way.</p>
|
<p>You may have a directory with manifests and do the following:</p>
<pre><code>kubectl apply -R -f manifests/
</code></pre>
<p>In this case kubectl will recursively traverse the directory and apply all manifests that it finds.</p>
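<p>For example, with a layout like this (file names are illustrative), a single command applies everything, including nested sub-directories:</p>
<pre><code>manifests/
├── app/
│   ├── deployment.yaml
│   └── service.yaml
└── ingress/
    └── ingress.yaml
</code></pre>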
|
<p>I have Prometheus set up via Helm from Terraform and it is configured to connect to my Kubernetes cluster. I open Prometheus but I am not sure which metric to choose from the list to be able to view the CPU/memory of running pods/jobs.
Here are all the pods running with the command (<strong>test1</strong> is the kube <strong>namespace</strong>):</p>
<pre><code>kubectl -n test1 get pods
</code></pre>
<p><a href="https://i.stack.imgur.com/BdNNe.png" rel="nofollow noreferrer">podsrunning</a></p>
<p>When, I am on Prometheus, I see many metrics related to CPU, but not sure which one to choose:</p>
<p><a href="https://i.stack.imgur.com/8wyQd.png" rel="nofollow noreferrer">prom1</a></p>
<p>I tried to choose one, but the namespace is <code>prometheus</code> and it comes from <code>prometheus-node-exporter</code>; I don't see my cluster or my namespace <code>test1</code> anywhere here.</p>
<p><a href="https://i.stack.imgur.com/l1Mz2.png" rel="nofollow noreferrer">prom2</a></p>
<p>Could you please help me? Thank you very much in advance.</p>
<p><strong>UPDATE SCREENSHOT</strong>
I need to concentrate on a specific namespace. Normally, with the command
<code>kubectl get pods --all-namespaces | grep hermatwin</code>
I see the first line with namespace = <code>jobs</code>, so I think that is the namespace.
<a href="https://i.stack.imgur.com/BCwUI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BCwUI.png" alt="promQL1" /></a></p>
<p>No result when setting the calendar to last Friday:
<a href="https://i.stack.imgur.com/io6vI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/io6vI.png" alt="promQL2" /></a></p>
<p><strong>UPDATE SCREENSHOT April 20</strong>
I tried to select 2 days with a starting date of last Saturday, 17 April, but I don't see any result:
<a href="https://i.stack.imgur.com/MG63W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MG63W.png" alt="noResult1" /></a></p>
<p>And if I remove the (namespace="jobs") condition, I don't see any result either:
<a href="https://i.stack.imgur.com/RMo5s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RMo5s.png" alt="noresult2" /></a></p>
<p>I tried to rerun the job (simulation jobs) just now and executed the Prometheus query while the job was still in running mode, but I don't get any result :-( Here you can see that my jobs were running.</p>
<p><a href="https://i.stack.imgur.com/Jza0o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jza0o.png" alt="jobsRunning" /></a></p>
<p>I don't get any result:
<a href="https://i.stack.imgur.com/j7tan.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j7tan.png" alt="noresult3" /></a></p>
<p>When using a simple filter, just <code>container_cpu_usage_seconds_total</code>, I can see namespace="jobs":
<a href="https://i.stack.imgur.com/W8AIZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W8AIZ.png" alt="resultnamespacejobs" /></a></p>
<p><a href="https://i.stack.imgur.com/YRCmf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YRCmf.png" alt="iRate1" /></a></p>
<p><a href="https://i.stack.imgur.com/U0whc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U0whc.png" alt="ResultJob" /></a></p>
|
<p><code>node_cpu_seconds_total</code> is a metric from <code>node-exporter</code>, the exporter that brings machine statistics and its metrics are prefixed with <code>node_</code>. You need metrics from <code>cAdvisor</code>, this one produces metrics related to containers and they are prefixed with <code>container_</code>:</p>
<pre><code>container_cpu_usage_seconds_total
container_cpu_load_average_10s
container_memory_usage_bytes
container_memory_rss
</code></pre>
<p>Here are some basic queries for you to get started. Be aware that they may require tweaking (you may have different label names):</p>
<h3>CPU Utilisation Per Pod</h3>
<pre><code>sum(irate(container_cpu_usage_seconds_total{container!="POD", container=~".+"}[2m])) by (pod)
</code></pre>
<h3>RAM Usage Per Pod</h3>
<pre><code>sum(container_memory_usage_bytes{container!="POD", container=~".+"}) by (pod)
</code></pre>
<h3>In/Out Traffic Rate Per Pod</h3>
<p>Beware that pods with <code>host</code> network mode (not isolated) show traffic rate for the whole node. <code>* 8</code> is to convert bytes to bits for convenience (MBit/s, GBit/s, etc).</p>
<pre><code># incoming
sum(irate(container_network_receive_bytes_total[2m])) by (pod) * 8
# outgoing
sum(irate(container_network_transmit_bytes_total[2m])) by (pod) * 8
</code></pre>
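<p>If you only care about a single namespace (for example the <code>jobs</code> namespace from the question), you can add a label matcher to the same queries. This is just a sketch and the label names may differ in your setup:</p>
<pre><code># CPU per pod, restricted to one namespace
sum(irate(container_cpu_usage_seconds_total{namespace="jobs", container!="POD", container=~".+"}[2m])) by (pod)

# RAM per pod, restricted to one namespace
sum(container_memory_usage_bytes{namespace="jobs", container!="POD", container=~".+"}) by (pod)
</code></pre>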
|
<p>I am currently trying to move my calico-based clusters to the new Dataplane V2, which is basically a managed Cilium offering.
For local testing, I am running k3d with open-source Cilium installed, and created a set of NetworkPolicies (k8s-native ones, not CiliumPolicies), which lock down the desired namespaces.</p>
<p>My current issue is that when porting the same policies to a GKE cluster (with Dataplane V2 enabled), those same policies don't work.</p>
<p>As an example, let's take a look at the connection between an app and a database:</p>
<pre class="lang-yaml prettyprint-override"><code>---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: db-server.db-client
namespace: BAR
spec:
podSelector:
matchLabels:
policy.ory.sh/db: server
policyTypes:
- Ingress
ingress:
- ports: []
from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: FOO
podSelector:
matchLabels:
policy.ory.sh/db: client
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: db-client.db-server
namespace: FOO
spec:
podSelector:
matchLabels:
policy.ory.sh/db: client
policyTypes:
- Egress
egress:
- ports:
- port: 26257
protocol: TCP
to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: BAR
podSelector:
matchLabels:
policy.ory.sh/db: server
</code></pre>
<p>Moreover, using GCP monitoring tools we can see the expected and actual effect the policies have on connectivity:</p>
<p><strong>Expected:</strong>
<a href="https://i.stack.imgur.com/AOOXu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AOOXu.png" alt="Expected" /></a></p>
<p><strong>Actual:</strong>
<a href="https://i.stack.imgur.com/bD0aS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bD0aS.png" alt="Actual" /></a></p>
<p>And logs from the application trying to connect to the DB, and getting denied:</p>
<pre class="lang-json prettyprint-override"><code>{
"insertId": "FOO",
"jsonPayload": {
"count": 3,
"connection": {
"dest_port": 26257,
"src_port": 44506,
"dest_ip": "172.19.0.19",
"src_ip": "172.19.1.85",
"protocol": "tcp",
"direction": "egress"
},
"disposition": "deny",
"node_name": "FOO",
"src": {
"pod_name": "backoffice-automigrate-hwmhv",
"workload_kind": "Job",
"pod_namespace": "FOO",
"namespace": "FOO",
"workload_name": "backoffice-automigrate"
},
"dest": {
"namespace": "FOO",
"pod_namespace": "FOO",
"pod_name": "cockroachdb-0"
}
},
"resource": {
"type": "k8s_node",
"labels": {
"project_id": "FOO",
"node_name": "FOO",
"location": "FOO",
"cluster_name": "FOO"
}
},
"timestamp": "FOO",
"logName": "projects/FOO/logs/policy-action",
"receiveTimestamp": "FOO"
}
</code></pre>
<p>EDIT:</p>
<p>My local env is a k3d cluster created via:</p>
<pre class="lang-sh prettyprint-override"><code>k3d cluster create --image ${K3SIMAGE} --registry-use k3d-localhost -p "9090:30080@server:0" \
-p "9091:30443@server:0" foobar \
--k3s-arg=--kube-apiserver-arg="enable-admission-plugins=PodSecurityPolicy,NodeRestriction,ServiceAccount@server:0" \
--k3s-arg="--disable=traefik@server:0" \
--k3s-arg="--disable-network-policy@server:0" \
--k3s-arg="--flannel-backend=none@server:0" \
--k3s-arg=feature-gates="NamespaceDefaultLabelName=true@server:0"
docker exec k3d-server-0 sh -c "mount bpffs /sys/fs/bpf -t bpf && mount --make-shared /sys/fs/bpf"
kubectl taint nodes k3d-ory-cloud-server-0 node.cilium.io/agent-not-ready=true:NoSchedule --overwrite=true
skaffold run --cache-artifacts=true -p cilium --skip-tests=true --status-check=false
docker exec k3d-server-0 sh -c "mount --make-shared /run/cilium/cgroupv2"
</code></pre>
<p>Where cilium itself is being installed by skaffold, via helm with the following parameters:</p>
<pre class="lang-yaml prettyprint-override"><code>name: cilium
remoteChart: cilium/cilium
namespace: kube-system
version: 1.11.0
upgradeOnChange: true
wait: false
setValues:
externalIPs.enabled: true
nodePort.enabled: true
hostPort.enabled: true
hubble.relay.enabled: true
hubble.ui.enabled: true
</code></pre>
<p>UPDATE:
I have set up a third environment: a GKE cluster using the old calico CNI (legacy dataplane) and installed Cilium manually as shown <a href="https://docs.cilium.io/en/v1.10/gettingstarted/k8s-install-helm/" rel="nofollow noreferrer">here</a>. Cilium is working fine, even Hubble is working out of the box (unlike with Dataplane V2...), and I found something interesting. The rules behave the same as with the GKE-managed Cilium, but with Hubble working I was able to see this:</p>
<p><a href="https://i.stack.imgur.com/SwhKv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SwhKv.png" alt="Hubble db connection" /></a></p>
<p>For some reason Cilium/Hubble cannot identify the db pod and decipher its labels. And since the labels don't work, the policies that rely on those labels also don't work.</p>
<p>Another proof of this would be the trace log from hubble:</p>
<p><a href="https://i.stack.imgur.com/pQu0m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pQu0m.png" alt="kratos -> db" /></a></p>
<p>Here the destination app is only identified via an IP, and not labels.</p>
<p>The question now is: why is this happening?</p>
<p>Any idea how to debug this problem? Where could the difference be coming from? Do the policies need some tuning for the managed Cilium, or is it a bug in GKE?
Any help/feedback/suggestion is appreciated!</p>
|
<p>Update: I was able to solve the mystery and it was ArgoCD all along. Cilium is creating an Endpoint and Identity for each object in the namespace, and Argo was deleting them after deploying the applications.</p>
<p>For anyone who stumbles on this, the solution is to add this exclusion to ArgoCD:</p>
<pre class="lang-yaml prettyprint-override"><code> resource.exclusions: |
- apiGroups:
- cilium.io
kinds:
- CiliumIdentity
- CiliumEndpoint
clusters:
- "*"
</code></pre>
|
<p>I'm trying to setup rate limiting option <code>limit_req</code> for specific path in Kubernetes ingress-nginx to prevent brute-forcing authentication.</p>
<p>I've defined <code>limit_req_zone</code> using ConfigMap:</p>
<pre><code>http-snippet: |
limit_req_zone $the_real_ip zone=authentication_ratelimit:10m rate=1r/s;
</code></pre>
<p>Next, I'm using annotation to add a custom location block:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
location ~* "^/authenticate$" {
limit_req zone=authentication_ratelimit nodelay;
more_set_headers "x-test: matched";
}
</code></pre>
<p>This produces nginx.conf:</p>
<pre><code>server {
# - - 8< - -
location / {
# - - 8< - -
location ~* "^/authenticate$" {
limit_req zone=authentication_ratelimit nodelay;
more_set_headers "x-test: matched";
}
proxy_pass http://upstream_balancer;
proxy_redirect off;
}
</code></pre>
<p>The result is that <code>/authenticate</code> always returns HTTP 503 (with x-test header). Message from ingress access logs:</p>
<pre><code><ip> - [<ip>] - - [04/Jan/2019:15:22:07 +0000] "POST /authenticate HTTP/2.0" 503 197 "-" "curl/7.54.0" 172 0.000 [-] - - - - 1a63c9825c9795be1378b2547e29992d
</code></pre>
<p>I suspect this might be because of a conflict between the nested location block and <code>proxy_pass</code> (but this is just a wild guess).</p>
<p>What other options have I tried?</p>
<ul>
<li>use <code>server-snippet</code> annotation instead of <code>configuration-snippet</code> - <code>/authenticate</code> returns 404 because <code>proxy_pass</code> is not configured</li>
<li>use <code>nginx.ingress.kubernetes.io/limit-rpm</code> annotation - forces ratelimit on whole application which is not what I want.</li>
</ul>
<p>The question is: why does the custom location block respond with 503? How can I debug this? Will increasing the nginx logging level give more details about the 503?
Or, a more general question: can I inject custom location blocks in ingress-nginx?</p>
|
<p>This can be done by using <code>map</code> and the fact that <a href="http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone" rel="noreferrer">requests with an empty key value are not accounted</a>.</p>
<pre><code>http-snippet: |
map $uri $with_limit_req {
default 0;
"~*^/authenticate$" 1;
}
map $with_limit_req $auth_limit_req_key {
default '';
'1' $binary_remote_addr; # the limit key
}
limit_req_zone $auth_limit_req_key zone=authentication_ratelimit:10m rate=1r/s;
</code></pre>
<p>And use an annotation to add the <code>limit_req</code> directive to the location:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
limit_req zone=authentication_ratelimit nodelay;
</code></pre>
<p>Or, if you use the Ingress controller from nginxinc:</p>
<pre><code>nginx.org/location-snippets:
limit_req zone=authentication_ratelimit nodelay;
</code></pre>
<p>In this case, the check of whether a request needs to be rate-limited is handled at the <code>map</code> level.</p>
<p>And in my opinion, it is better to limit requests at the application level: if you rate-limit at the ingress level, the effective limit depends on the number of ingress pods.</p>
|
<p>Is there a way to configure prometheus to ignore scraping metrics for all the resources belonging to a particular namespace? I am not able to figure it out by reading the documentation.</p>
|
<p>You can drop targets with <code>relabel_config</code> by using <code>drop</code> action. From the <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p><code>drop</code>: Drop targets for which <code>regex</code> matches the concatenated <code>source_labels</code>.</p>
</blockquote>
<p>Example:</p>
<pre class="lang-yaml prettyprint-override"><code> relabel_configs:
# This will ignore scraping targets from 'ignored_namespace_1',
# 'ignored_namespace_2', and 'ignored_namespace_N'.
- source_labels: [__meta_kubernetes_namespace]
action: drop
regex: ignored_namespace_1|ignored_namespace_2|ignored_namespace_N
</code></pre>
|
<p>I tested ingress in minikube successfully, with no issue at all.
Then I deployed my app onto Ubuntu; when I use a NodePort service, it also works very well. After that, I wanted to use Ingress as a load balancer to route traffic, so that the external URL no longer has the ugly long port.
But unfortunately, I did not succeed; it always failed.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dv
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /test
backend:
serviceName: ngsc
servicePort: 3000
kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
dv * 80 12s
root@kmaster:/home/ubuntu/datavisor# kubectl describe ing dv
Name: dv
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/ ngsc:3000 (192.168.1.14:3000,192.168.1.17:3000,192.168.1.18:3000)
Annotations:
ingress.kubernetes.io/rewrite-target: /
Events: <none>
</code></pre>
<p>Then when I tried to access it, I got the following error:</p>
<pre><code>curl http://cluster-ip
curl: (7) Failed to connect to <cluster-ip> port 80: Connection refused
</code></pre>
<p>What I really want is for the externally exposed URL to be <a href="http://ipaddress" rel="nofollow noreferrer">http://ipaddress</a> instead of <a href="http://ipaddress:30080" rel="nofollow noreferrer">http://ipaddress:30080</a></p>
<p>I know that I can easily use nginx outside of Kubernetes to meet this requirement, but that is not ideal; I want Kubernetes to handle it so that even if the service port changes, everything is still up.</p>
<p>Can you check the above output and tell me what the error is? I checked a lot of docs, and every place seemed to focus only on minikube, nothing related to a real cluster deployment. Do I need to install anything to make ingress work? When I use <code>kubectl get all --all-namespaces</code> I do not see an ingress controller installed at all. How can I install it if needed?</p>
<p>Thanks for your advice</p>
|
<p>Well, actually Kubernetes does not provide any Ingress controller out of the box. You have to install Nginx Ingress, Traefik Ingress, or something else. An ingress controller must run somewhere in your cluster; it's a must. The ingress controller is the actual proxy that proxies traffic to your applications.</p>
<p>And I think you should know that minikube under the hood also uses nginx-ingress-controller (see <a href="https://github.com/kubernetes/minikube/tree/master/deploy/addons/ingress" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/tree/master/deploy/addons/ingress</a>).</p>
<p>In cloud environments, ingress controllers run behind a cloud load balancer that performs load balancing between cluster nodes.</p>
<p>If you run an on-prem cluster, then usually your ingress controller runs as a NodePort service, and you may create a DNS record pointing to your node IP addresses. It is also possible to run the ingress controller on dedicated nodes and use <code>hostNetwork: true</code>; that allows using the standard 80/443 ports. So there are many options here.</p>
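<p>As a rough sketch only (the Helm repository URL and release name here are assumptions, so check the ingress-nginx documentation for the version you need), installing the community NGINX ingress controller with Helm looks roughly like this:</p>
<pre><code># install the controller into its own namespace
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# verify the controller pod and its service
kubectl get pods,svc -n ingress-nginx
</code></pre>
<p>Once the controller is running, your Ingress resource should get an address and start routing traffic.</p>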
|
<p>I'm starting out in Kubernetes and I'm having some issues reaching my <code>rabbitmq</code> service inside a <code>messaging</code> namespace from my <code>banking</code> service inside the <code>backend</code> namespace. I know it's supposed to be "easy" according to the documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
<p>However, I've spent more than a day trying to figure out why I'm not able to connect to the rabbitmq client host.</p>
<p>These are my yaml files:</p>
<p>banking-ip-service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: banking-ip-service
namespace: backend
spec:
type: NodePort
ports:
- port: 8000
targetPort: 8000
nodePort: 30080
protocol: TCP
name: "banking-api"
selector:
component: bank</code></pre>
<p>banking deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: banking
namespace: backend
spec:
replicas: 1
selector:
matchLabels:
component: bank
template:
metadata:
labels:
component: bank
spec:
containers:
- name: banking
image: user/myownimage
env:
- name: URL
# cluster ip
value: rabbit-ip-service.messaging.svc.cluster.local
- name: PORT
value: "5672"
- name: USER
value: "guest"
- name: PASSWORD
value: "guest"
ports:
- containerPort: 8000
resources:
requests:
memory: "64Mi"
cpu: "25m"
limits:
memory: "128Mi"
cpu: "50m"
restartPolicy: Always
imagePullSecrets:
- name: regcred</code></pre>
<p>rabbitmq-ip-service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: rabbit-ip-service
namespace: messaging
spec:
type: ClusterIP
ports:
- port: 5672
targetPort: 5672
name: "rabbit-api"
- port: 15672
targetPort: 15672
name: "rabbit-manager"
selector:
component: rabbitmq</code></pre>
<p>rabbitmq deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: rabbitmq
labels:
component: rabbitmq
namespace: messaging
spec:
containers:
- image: rabbitmq:3.5.4-management
name: rabbitmq
ports:
- containerPort: 5672
name: service
- containerPort: 15672
name: management
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "100m"
volumeMounts:
- name: config-volume
mountPath: /etc/rabbitmq
volumes:
- name: config-volume
configMap:
name: rabbitmq-config
items:
- key: rabbitmq.conf
path: rabbitmq.conf
- key: enabled_plugins
path: enabled_plugins</code></pre>
<p>This is the output when running <code>kubectl describe svc rabbit-ip-service -n messaging</code> :</p>
<pre><code>Name: rabbit-ip-service
Namespace: messaging
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"rabbit-ip-service","namespace":"messaging"},"spec":{"ports":[{"na...
Selector: component=rabbitmq
Type: ClusterIP
IP: 10.106.248.55
Port: rabbit-api 5672/TCP
TargetPort: 5672/TCP
Endpoints: 172.17.0.5:5672
Port: rabbit-manager 15672/TCP
TargetPort: 15672/TCP
Endpoints: 172.17.0.5:15672
Session Affinity: None
Events: <none></code></pre>
<p>If I hardcode the url value inside the <code>backend</code> deployment it doesn't work either:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: URL
# cluster ip
value: 10.106.248.55:5672</code></pre>
|
<p>I made it work by just temporarily removing the volumes that were using the ConfigMap:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: rabbitmq
labels:
component: rabbitmq
namespace: messaging
spec:
containers:
- image: rabbitmq:3.5.4-management
name: rabbitmq
ports:
- containerPort: 5672
name: service
- containerPort: 15672
name: management
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "100m"</code></pre>
|
<p>I'm looking to get the kubelet_* metrics from EKS/GKE servers.</p>
<p>Neither <code>metrics-server</code> nor <code>kube-state-metrics</code> seems to provide them to Prometheus. Information online suggests the metrics come from the kubelet itself, but I'm not precisely sure which piece of software is usually used to expose those metrics.</p>
<p>I seem to be able to do a --raw query on the node to get the information, but I'd <em>rather</em> not write my own exporter for that. :)</p>
|
<p>It's true that kubelet exposes <code>kubelet_*</code> metrics. By default they're available on port <code>10250</code>, path <code>/metrics</code>. Additionally, kubelet also has metrics in <code>/metrics/cadvisor</code>, <code>/metrics/resource</code> and <code>/metrics/probes</code> endpoints.</p>
<p>I'm running a self-managed Prometheus deployment and use this config to scrape kubelet metrics:</p>
<pre><code>- job_name: 'kubelet'
scheme: https
  # these are provided by the service account I created for Prometheus
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
# map node labels to prometheus labels
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
</code></pre>
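<p>For the <code>container_*</code> metrics coming from cAdvisor you can add a second job that only differs in the metrics path. This is a sketch that assumes Prometheus can reach the kubelet port 10250 directly; on some managed clusters you have to go through the API server proxy instead:</p>
<pre><code>- job_name: 'kubelet-cadvisor'
  scheme: https
  metrics_path: /metrics/cadvisor
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    # map node labels to prometheus labels
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
</code></pre>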
|
<p>I have the following Ingress section:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: tb-ingress
namespace: thingsboard
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
spec:
rules:
- http:
paths:
- path: /api/v1/.*
backend:
serviceName: tb-http-transport
servicePort: http
- path: /static/rulenode/.*
backend:
serviceName: tb-node
servicePort: http
- path: /static/.*
backend:
serviceName: tb-web-ui
servicePort: http
- path: /index.html.*
backend:
serviceName: tb-web-ui
servicePort: http
- path: /
backend:
serviceName: tb-web-ui
servicePort: http
</code></pre>
<p>However, this does not seem to be working. GKE gives me an </p>
<blockquote>
<p>Invalid path pattern, invalid</p>
</blockquote>
<p>error.</p>
|
<p>It seems to me you forgot to specify the <code>kubernetes.io/ingress.class: "nginx"</code> annotation. If you don't specify any <code>kubernetes.io/ingress.class</code>, GKE will use its own ingress controller, which does not support regexes.</p>
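<p>As a minimal sketch, assuming the NGINX ingress controller is deployed in your cluster, the annotation goes into the Ingress metadata next to the ones you already have:</p>
<pre><code>metadata:
  name: tb-ingress
  namespace: thingsboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
</code></pre>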
|
<p>I have a failed pod which is not properly created. I used these steps:</p>
<pre><code>kubernetes@kubernetes1:~$ cd /opt/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
persistentvolume/pv1 created
kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default private-repository-k8s-6ddbcd9c45-s6dfq 0/1 ContainerCreating 0 2d1h
kube-system calico-kube-controllers-58dbc876ff-dgs77 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-czmzc 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-q4lxz 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-k94z2 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-nt27m 1/1 Running 4 (125m ago) 2d13h
kube-system etcd-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-apiserver-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-controller-manager-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-97djs 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-d8bzs 1/1 Running 4 (125m ago) 2d13h
kube-system kube-scheduler-kubernetes1 1/1 Running 5 (125m ago) 2d13h
</code></pre>
<p>As you can see, the pod is stuck in the ContainerCreating status. I tried to delete it:</p>
<pre><code>kubernetes@kubernetes1:/opt/registry$ kubectl get deployments --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default private-repository-k8s 0/1 1 0 2d2h
kube-system calico-kube-controllers 1/1 1 1 2d14h
kube-system coredns 2/2 2 2 2d14h
</code></pre>
<p>Delete command:</p>
<pre><code>kubernetes@kubernetes1:/opt/registry$ kubectl delete -n default deployment private-repository-k8s
deployment.apps "private-repository-k8s" deleted
kubernetes@kubernetes1:/opt/registry$ kubectl get deployments --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system calico-kube-controllers 1/1 1 1 2d14h
kube-system coredns 2/2 2 2 2d14h
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
Error from server (AlreadyExists): error when creating "private-registry1.yaml": persistentvolumes "pv1" already exists
</code></pre>
<p><strong>private-registry1.yaml configuration:</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 256Mi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions: # specify the node label which maps to your control-plane node.
- key: kubernetes1
operator: In
values:
- controlplane-1
accessModes:
- ReadWriteOnce # only 1 node will read/write on the path.
# - ReadWriteMany # multiple nodes will read/write on the path
</code></pre>
<p>Do you know how I can delete <code>pv1</code>?</p>
|
<p>You can delete the PV using the following two commands:</p>
<p><code>kubectl delete pv <pv_name> --grace-period=0 --force</code></p>
<p>And then deleting the finalizer using:</p>
<p><code>kubectl patch pv <pv_name> -p '{"metadata": {"finalizers": null}}'</code></p>
<p>As you have created using a file, you can also use the following command to delete the pv:</p>
<p><code>kubectl delete -f private-registry1.yaml</code></p>
|
<p>I am trying to create a Kubernetes node in minikube with a Node.js Express server in it.</p>
<p>The basic application runs without any problem on bare metal (Linux Ubuntu, Windows, etc.), but in Kubernetes I have a lot of problems.
I have a lot of routes and the server deployment fails. If I reduce the number of routes by, let's say, 50%, the app runs fine. It doesn't make any difference which routes I comment out.</p>
<p>Service file (server-cluster-ip-service.yaml):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: server-cluster-ip-service
spec:
type: ClusterIP
selector:
component: server
ports:
- port: 8093
targetPort: 8093
</code></pre>
<p>Deployment file (server-deployment.yaml):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server-deployment
spec:
replicas: 1
selector:
matchLabels:
component: server
template:
metadata:
labels:
component: server
spec:
containers:
- name: server
image: jabro888/salesactas4002server:1.0.1
ports:
- containerPort: 8093
</code></pre>
<p>server.ts file:</p>
<pre><code>export const app: Application = express();
app.listen(8093, () => {
initApi(app).then(() => {
apiRoutes(app);
}).catch((error) => {
console.log(" what the f?ck is going wrong: " + error);
});
console.log('HTTP Server running at http://' + '192.168.99.100' + ': on port: ' + '8093');
});
</code></pre>
<p>api.ts file:</p>
<pre><code>const options:cors.CorsOptions = {
allowedHeaders : config.get('server.cors.allowedHeaders'),
credentials: config.get('server.cors.credentials'),
methods: config.get('server.cors.methods'),
origin: config.get('server.cors.origin'),
preflightContinue: config.get('server.cors.preflightContinue')
};
export async function initApi(app) {
console.log('voor initialiseren');
//await apiInitialiseer();
console.log('na initialiseren');
app.use(bodyParser.json());
app.use(cors(options));
app.use(cookieParser());
app.set('strict routing', true);
app.enable('strict routing');
console.log('stap1');
}
</code></pre>
<p>apiRoute.ts file:
(And when I remove or comment out the routes from step6 until step9, the application runs OK in Kubernetes minikube.)</p>
<pre><code>export function apiRoutes(app) {
//app.route('/api/test').get(apiGetRequestByMedewerkerAfterTime);
app.route('/api/salesactas400/cookie').get(apiGetAllCookies);
app.route('/api/salesactas400/aut/v').put(apiVerlengSession);
app.route('/api/salesactas400/aut/s').put(apiStopSession);
console.log('step2');
app.route('/api/salesactas400/medewerker/login-afdeling').get(apiGetMedewerkerAfdelingByLogin);
app.route('/api/salesactas400/medewerker/Login').get(apiGetMedewerkerByLogin);
app.route('/api/salesactas400/medewerker/login').put(apiGetMedewerkerVestigingByLoginLogin); //+gebruikt inloggen PUt vanwege de cookie
console.log('step3');
app.route('/api/salesactas400/medewerker').get(apiGetAllMedewerkersWithAfdelingLocatie);
app.route('/api/salesactas400/medewerker/:id').get(apiGetMedewerkerByID);
app.route('/api/salesactas400/medewerker/:id').put(apiUpdateMedewerkerByID);
app.route('/api/salesactas400/medewerker').post(apiAddMedewerker);
app.route('/api/salesactas400/medewerker/:id').delete(apiDeleteMedewerkerByID);
console.log('step4');
app.route('/api/salesactas400/locatie').get(apiGetAllLocaties);
app.route('/api/salesactas400/locatie/:id').get(apiGetLocatieByID);
app.route('/api/salesactas400/locatie/:id').put(apiUpdateLocatieByID);
app.route('/api/salesactas400/locatie').post(apiAddLocatie);
app.route('/api/salesactas400/locatie/:id').delete(apiDeleteLocatieByID);
console.log('step5');
app.route('/api/salesactas400/afdeling').get(apiGetAllAfdelings);
app.route('/api/salesactas400/afdeling/:id').get(apiGetAfdelingByID);
app.route('/api/salesactas400/afdeling/:id').put(apiUpdateAfdelingByID);
app.route('/api/salesactas400/afdeling').post(apiAddAfdeling);
app.route('/api/salesactas400/afdeling/:id').delete(apiDeleteAfdelingByID);
console.log('step6');
app.route('/api/salesactas400/activiteit').get(apiGetAllActiviteitenWithAfdeling);
app.route('/api/salesactas400/activiteit/afdeling/:afdelingId').get(apiGetActiviteitenByAfdelingId);
app.route('/api/salesactas400/activiteit/:id').get(apiGetActiviteitByID);
app.route('/api/salesactas400/activiteit/:id').put(apiUpdateActiviteitByID);
app.route('/api/salesactas400/activiteit').post(apiAddActiviteit);
app.route('/api/salesactas400/activiteit/:id').delete(apiDeleteActiviteitByID);
console.log('step13');
console.log('step7');
app.route('/api/salesactas400/registratiefilter').put(apiGetAllRegistratiesFiltered);
app.route('/api/salesactas400/registratie').get(apiGetAllRegistraties);
app.route('/api/salesactas400/registratie/:id').get(apiGetRegistratieByMedewerkerID);
app.route('/api/salesactas400/registratie/:id').put(apiUpdateRegistratieByID);
app.route('/api/salesactas400/registratie/:id').delete(apiDeleteRegistratieByID);
app.route('/api/salesactas400/registratie').post(apiAddRegistratie);
console.log('step8');
app.route('/api/salesactas400/export').post(apiAddExport);
console.log('step9');
}
</code></pre>
<p>After loading the files with
<code>kubectl apply -f</code>
and running
<code>kubectl logs server-deployment-8588f6cfdd-ftqvj</code>,
I get this in response:</p>
<pre><code>> [email protected] start /server
> ts-node ./server.ts
</code></pre>
<p>This is WRONG; it seems that the application crashes, because I don't see the console.log messages.</p>
<p>After kubectl get pods I get this:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-pf6hf 1/1 Running 0 101s
server-deployment-8588f6cfdd-ftqvj 0/1 Completed 2 67s
</code></pre>
<p>For some reason the container completed ???</p>
<p>When I remove the routes from step6 to step9, then I see this:</p>
<pre><code>> [email protected] start /server
> ts-node ./server.ts
voor initialiseren
na initialiseren
stap1
HTTP Server running at http://192.168.99.100: on port: 8093
stap2
stap3
stap4
stap5
</code></pre>
<p>So this is OK, but WHY can't I load all the routes? Is there any limitation in Kubernetes or in the Node.js Express server on the number of routes, or is something else in my code maybe wrong?</p>
<p>I run:
minikube version 1.6.2, docker version 19.03.5,
Node.js version 12.14 at this moment, from the node:alpine image.
I also tried Node.js versions 10.14 and 11.6.</p>
<p>The Dockerfile I used for creating the container jabro888/salesactas4002server:1.0.1:</p>
<pre><code>FROM node:12.14.0-alpine
WORKDIR "/server"
COPY ./package.json ./
RUN apk add --no-cache --virtual .gyp \
python \
make \
g++ \
unixodbc \
unixodbc-dev \
&& npm install \
&& apk del .gyp
COPY . .
#ENV NODE_ENV=production
CMD ["npm", "start"]
</code></pre>
<p>I hope somebody can help me; I have been struggling with this problem for 3 days already.</p>
<p>This also might be interesting, and I don't understand anything about it: after some time the pod restarts, and after some time it crashes.
And again, I tried the same app on a Linux machine and it runs without any problem.</p>
<pre><code>bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-mm8mm 1/1 Running 0 76s
server-deployment-8588f6cfdd-qd5n6 0/1 Completed 1 34s
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-mm8mm 1/1 Running 0 81s
server-deployment-8588f6cfdd-qd5n6 0/1 Completed 1 39s
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
Unable to connect to the server: net/http: TLS handshake timeout
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-mm8mm 1/1 Running 0 2m17s
server-deployment-8588f6cfdd-qd5n6 0/1 Completed 2 95s
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-mm8mm 1/1 Running 0 2m21s
server-deployment-8588f6cfdd-qd5n6 0/1 Completed 2 99s
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-mm8mm 1/1 Running 0 2m27s
server-deployment-8588f6cfdd-qd5n6 0/1 CrashLoopBackOff 2 105s
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$
</code></pre>
|
<p>OK, SOLVED: the problem was that minikube was not giving the pod enough resources. I had the same problem when I used AWS Beanstalk; there too the server suddenly stopped, but in the logs I could see why: it ran out of RAM.
So to solve this, minikube has to be started with an extra memory parameter like this:</p>
<blockquote>
<p>minikube start --memory=4096</p>
</blockquote>
|
<p>Prometheus supports multiple roles in its <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">Kubernetes SD config</a></p>
<p>I'm confused about whether I should use a <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">pod</a> role or an <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">endpoints</a> role for my Deployment + Service.</p>
<p>The service I am monitoring is a Deployment</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
spec:
replicas: ~10
strategy:
rollingUpdate:
maxSurge: 5
maxUnavailable: 0
type: RollingUpdate
template:
containers:
- name: web-app
ports:
- containerPort: 3182
name: http
- containerPort: 6060
name: metrics
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: web-app
spec:
ports:
- name: http
port: 3182
targetPort: http
selector:
app: web-app
type: ClusterIP
</code></pre>
<p>The number of pods can vary in the deployment. The deployment is continuously being updated with new images.</p>
<p>I can add annotations or labels as needed to either of these YAML files.</p>
<p>Is there a reason to prefer either a <strong>Pod</strong> role or an <strong>Endpoints</strong> role?</p>
|
<p>In short, there are two major differences:</p>
<ul>
<li>an <code>endpoints</code> role gives you more data in labels (to which service a pod belongs, for example);</li>
<li>a <code>pod</code> role targets <strong><em>any</em></strong> pod out there and not just those belonging to a service.</li>
</ul>
<p>What's best for you is for you to decide, but I suppose that an <code>endpoints</code> role would fit well for your production applications (these usually have a corresponding service), and a <code>pod</code> role for everything else. Or you may make do with just one <code>pod</code>-role job for everything and bring that extra information in with the <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> exporter.</p>
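<p>As an illustration only (label names and port names may need tweaking for your setup), an <code>endpoints</code>-role job that keeps just the targets behind the <code>web-app</code> service and scrapes the named <code>metrics</code> container port could look like this:</p>
<pre><code>- job_name: 'web-app'
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    # keep only endpoints that belong to the web-app service
    - source_labels: [__meta_kubernetes_service_name]
      action: keep
      regex: web-app
    # keep only the container port named "metrics" (6060), not the serving port
    - source_labels: [__meta_kubernetes_pod_container_port_name]
      action: keep
      regex: metrics
</code></pre>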
|
<p>I want to create a private Kubernetes registry from this tutorial: <a href="https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/" rel="nofollow noreferrer">https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/</a></p>
<p>I implemented this:</p>
<pre><code>Generate Self-Signed Certificate
cd /opt
sudo mkdir certs
cd certs
sudo touch registry.key
cd /opt
sudo openssl req -newkey rsa:4096 -nodes -sha256 -keyout \
./certs/registry.key -x509 -days 365 -out ./certs/registry.crt
ls -l certs/
Create registry folder
cd /opt
mkdir registry
</code></pre>
<p>Copy-paste <code>private-registry.yaml</code> into /opt/registry</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: certs-vol
hostPath:
path: /opt/certs
type: Directory
- name: registry-vol
hostPath:
path: /opt/registry
type: Directory
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: certs-vol
mountPath: /certs
- name: registry-vol
mountPath: /var/lib/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry.yaml
deployment.apps/private-repository-k8s created
kubernetes@kubernetes1:/opt/registry$ kubectl get deployments private-repository-k8s
NAME READY UP-TO-DATE AVAILABLE AGE
private-repository-k8s 0/1 1 0 12s
kubernetes@kubernetes1:/opt/registry$
</code></pre>
<p>I have the following questions:</p>
<ol>
<li><p>I have a control plane and 2 worker nodes. Is it possible to have a folder located only on the control plane under <code>/opt/registry</code> and deploy images on all worker nodes without using shared folders?</p>
</li>
<li><p>As an alternative, more resilient solution, I want to have a control plane and 2 worker nodes. Is it possible to have a folder located on all worker nodes and on the control plane under <code>/opt/registry</code> and deploy images on all worker nodes without using manually created shared folders? I want Kubernetes to manage repository replication on all nodes, i.e. the data in <code>/opt/registry</code> should be synchronized automatically by Kubernetes.</p>
</li>
<li><p>Do you know how I can debug this configuration? As you can see, the pod is not starting.</p>
</li>
</ol>
<p>EDIT: Log file:</p>
<pre><code>kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
kubernetes@kubernetes1:/opt/registry$
</code></pre>
<p><strong>Attempt 2:</strong></p>
<p>I tried this configuration deployed from control plane:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 256Mi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions: # specify the node label which maps to your control-plane node.
- key: kubernetes1
operator: In
values:
- controlplane-1
accessModes:
- ReadWriteOnce # only 1 node will read/write on the path.
# - ReadWriteMany # multiple nodes will read/write on the path
</code></pre>
<p>Note: the control-plane hostname is <code>kubernetes1</code>, so I changed the value in the above configuration accordingly. I get this:</p>
<pre><code>kubernetes@kubernetes1:~$ cd /opt/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
persistentvolume/pv1 created
kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default private-repository-k8s-6ddbcd9c45-s6dfq 0/1 ContainerCreating 0 2d1h
kube-system calico-kube-controllers-58dbc876ff-dgs77 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-czmzc 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-q4lxz 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-k94z2 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-nt27m 1/1 Running 4 (125m ago) 2d13h
kube-system etcd-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-apiserver-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-controller-manager-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-97djs 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-d8bzs 1/1 Running 4 (125m ago) 2d13h
kube-system kube-scheduler-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
</code></pre>
<p>Unfortunately, the container is again not created.</p>
|
<p>You can try with the following file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: task-pv-storage
mountPath: /opt/registry
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv1-claim
spec: # should match specs added in the PersistenVolume
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 256Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 256Mi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions: # specify the node label which maps to your control-plane node.
- key: kubernetes1
operator: In
values:
- controlplane-1
accessModes:
- ReadWriteMany
</code></pre>
|
<p>I use minikube to run a local Kubernetes cluster. I deploy Grafana using Helm from this repo: <a href="https://grafana.github.io/helm-charts" rel="nofollow noreferrer">https://grafana.github.io/helm-charts</a>. If I use port-forwarding it is perfectly accessible, so I tried to set up an ingress on chart-example.local/grafana. When I curl <code>chart-example.local/grafana</code> it works as well, but when I use <code>minikube tunnel</code> and <code>localhost/grafana</code> in my browser I get <code>404 Not Found nginx</code>.</p>
<p>I made the following changes to the helm values file:</p>
<p><code>custom-values.yml</code>:</p>
<pre><code>ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
path: /grafana/?(.*)
pathType: Prefix
hosts:
- chart-example.local
grafana.ini:
server:
domain: "{{ if (and .Values.ingress.enabled .Values.ingress.hosts) }}{{ .Values.ingress.hosts | first }}{{ else }}''{{ end }}"
root_url: http://localhost:3000/grafana
serve_from_sub_path: true
</code></pre>
<p>I also tried using root_url: <code>root_url: "%(protocol)s://%(domain)s/grafana"</code>.</p>
<p>I have a feeling this is caused by the hosts key in the values.yml file.
Or is the value I entered for root_url wrong?</p>
|
<p>I could fix the problem by setting</p>
<pre><code>ingress:
- hosts: ""
</code></pre>
<p>So the problem was the hosts tag, but I don't understand why it causes this problem. Does somebody know?</p>
|
<p>What is the Python Kubernetes client equivalent of</p>
<pre><code>kubectl get deploy -o yaml
</code></pre>
<p><a href="https://github.com/kubernetes-client/python/blob/master/examples/deployment_crud.py" rel="noreferrer">CRUD python Client example</a></p>
<p>I referred to this example for working with deployments from Python,
but there is no read-deployment option.</p>
|
<p><code>read_namespaced_deployment()</code> does the thing:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config()
api = client.AppsV1Api()
deployment = api.read_namespaced_deployment(name='foo', namespace='bar')
</code></pre>
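<p>To get something close to the <code>-o yaml</code> part of <code>kubectl get deploy -o yaml</code>, you can serialize the returned object yourself. This is a small sketch that assumes PyYAML is installed:</p>
<pre class="lang-py prettyprint-override"><code>import yaml

# convert the typed API object into plain dicts, then dump it as YAML
api_client = client.ApiClient()
print(yaml.safe_dump(api_client.sanitize_for_serialization(deployment)))
</code></pre>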
|
<p>I am trying to add Google Cloud Armor to my Terraform project that deploys an app using Kubernetes. I followed this example, but in my case I want to create these rules instead:
<a href="https://github.com/hashicorp/terraform-provider-google/blob/master/examples/cloud-armor/main.tf" rel="nofollow noreferrer">https://github.com/hashicorp/terraform-provider-google/blob/master/examples/cloud-armor/main.tf</a></p>
<p><strong>Close all traffic for all IPs on all ports, but open traffic for all IPs on ports 80 and 443</strong></p>
<ul>
<li>Then I added a file also called <code>web_application_firewall.tf</code> under the directory <code>terraform/kubernetes</code> with the following configuration:</li>
</ul>
<pre><code># Cloud Armor Security policies
resource "google_compute_security_policy" "web-app-firewall" {
name = "armor-security-policy"
description = "Web application security policy to close all traffics for all IPs on all ports but open traffic for all IPs on port 80 and 443"
# Reject all traffics for all IPs on all ports
rule {
description = "Default rule, higher priority overrides it"
action = "deny(403)"
priority = "2147483647"
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = ["*"]
}
}
}
# Open traffic for all IPs on port 80 and 443
#rule {
# description = "allow traffic for all IPs on port 80 and 443"
# action = "allow"
# priority = "1000"
# match {
# versioned_expr = "SRC_IPS_V1"
# config {
# src_ip_ranges = ["*"]
# }
# }
#}
}
resource "google_compute_firewall" "firewall-allow-ports" {
name = "firewall-allow-ports"
network = google_compute_network.default.name
allow {
protocol = "icmp"
}
allow {
protocol = "tcp"
ports = ["80"]
}
source_tags = ["web"]
}
resource "google_compute_network" "default" {
name = "test-network"
}
</code></pre>
<p>Here, I deactivate port 445, but after I redeployed, I still have access to the web app. Could you please let me know what I did wrong here? Thank you in advance.</p>
|
<p>First of all I would like to clarify a few things.</p>
<p><strong>Cloud Armor</strong></p>
<blockquote>
<p><a href="https://cloud.google.com/armor/docs/cloud-armor-overview" rel="nofollow noreferrer">Google Cloud Armor</a> provides protection only to applications running behind an external load balancer, and several features are only available for external HTTP(S) load balancer.</p>
</blockquote>
<p>In short, it can filter IP addresses but cannot block ports; that is a firewall's role.</p>
<p>In the question you have a <code>deny</code> rule for all IPs and an <code>allow</code> rule (which is commented out); however, both rules have <code>src_ip_ranges = ["*"]</code>, which applies to all IPs, which is a bit pointless.</p>
<p><strong>Terraform snippet</strong>.</p>
<p>I have tried to apply <a href="https://github.com/hashicorp/terraform-provider-google/blob/master/examples/cloud-armor/main.tf" rel="nofollow noreferrer">terraform-provider-google</a> with your changes; however, I am not sure if this is exactly what you have. If you could post your whole code, it would be easier to replicate the whole scenario exactly as you have it.</p>
<p>As I mentioned previously, to block ports you need to use a firewall rule. A firewall rule applies to a specific VPC network, not all of them. When I tried to replicate your issue, I found that you:</p>
<p><em><strong>Created a new VPC network</strong></em></p>
<pre><code>resource "google_compute_network" "default" {
name = "test-network"
}
</code></pre>
<p><em><strong>Created Firewall rule</strong></em></p>
<pre><code>resource "google_compute_firewall" "firewall-allow-ports" {
name = "firewall-allow-ports"
network = google_compute_network.default.name
allow {
protocol = "icmp"
}
allow {
protocol = "tcp"
ports = ["80"]
}
source_tags = ["web"]
}
</code></pre>
<p><strong>But where did you create the VMs? If you followed the GitHub code, your VM was created in the <code>default</code> VPC:</strong></p>
<pre><code> network_interface {
network = "default" ### this line
access_config {
# Ephemeral IP
}
</code></pre>
<p>In the <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance#network_interface" rel="nofollow noreferrer">Terraform docs</a> you can find the information that this value indicates which network the VM will be attached to.</p>
<blockquote>
<p><code>network_interface</code> - (Required) Networks to attach to the instance. This can be specified multiple times.</p>
</blockquote>
<p><strong>Issue Summary</strong></p>
<p>So in short, you created a new VPC (<code>test-network</code>) and a firewall rule (<code>"firewall-allow-ports"</code>) that allows only the ICMP protocol and the TCP protocol on port 80 with <code>source_tags = web</code> for the new <code>test-network</code> VPC, but your VM was created in the <code>default</code> VPC, which might have different firewall rules that allow all traffic, allow traffic on port 445, or many more variations.</p>
<p><a href="https://i.stack.imgur.com/zPA5k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zPA5k.png" alt="" /></a>
<a href="https://i.stack.imgur.com/mGY1W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mGY1W.png" alt="" /></a></p>
<p><strong>Possible solution</strong></p>
<p>Using <code>default</code> as the name of a resource in Terraform might be dangerous/tricky, as it can create resources in different places than you want. I have changed this code a bit to create a VPC network, <code>test-network</code>, and use it both for the firewall rules and in the <code>"google_compute_instance"</code> resource.</p>
<pre><code>resource "google_compute_network" "test-network" {
name = "test-network"
}
resource "google_compute_firewall" "firewall-allow-ports" {
name = "firewall-allow-ports"
network = google_compute_network.test-network.name
allow {
protocol = "icmp"
}
allow {
protocol = "tcp"
ports = ["80", "443"] ### before was only 80
}
source_tags = ["web"]
}
resource "google_compute_instance" "cluster1" {
name = "armor-gce-333" ### previous VM name was "armor-gce-222"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "test-network"
access_config {
...
</code></pre>
<p>As you can see on the screenshots below, it also created a firewall rule for port 443, and in the VPC <code>test-network</code> you can see the VM <code>"armor-gce-333"</code>.</p>
<p><strong>Summary</strong>
Your main issue was that you configured a new VPC with firewall rules, but your instance was probably created in another VPC network which allowed traffic on port 445.</p>
<p><a href="https://i.stack.imgur.com/Tfb6E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tfb6E.png" alt="" /></a>
<a href="https://i.stack.imgur.com/OYhHA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OYhHA.png" alt="" /></a></p>
|
<p>We are using AWS EKS. I deployed Prometheus using the command below:</p>
<pre><code>kubectl create namespace prometheus
helm install prometheus prometheus-community/prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2"
</code></pre>
<p>Once this is done, I get this message:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:</p>
<p>The services in my Prometheus deployment look like below:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus-alertmanager ClusterIP 10.22.210.131 <none> 80/TCP 20h
service/prometheus-kube-state-metrics ClusterIP 10.12.43.248 <none> 8080/TCP 20h
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 20h
service/prometheus-pushgateway ClusterIP 10.130.54.42 <none> 9091/TCP 20h
service/prometheus-server ClusterIP 10.90.94.70 <none> 80/TCP 20h
</code></pre>
<p>I am now using this URL in the datasource on Grafana as:</p>
<pre><code>datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: http://prometheus-alertmanager.prometheus.svc.cluster.local
access: proxy
isDefault: true
</code></pre>
<p>Grafana is also up, but the default datasource, which is Prometheus in this case, is unable to pull any data. When I check the data sources tab in Grafana and try to test the datasource, I get: Error reading Prometheus: client_error: client error: 404</p>
<p>Since both of these deployments are in the same cluster, ideally it should have been able to access this.
Any help here would be highly appreciated.</p>
|
<p>This is because you're targeting the wrong service. You're using the Alertmanager URL instead of the Prometheus server.<br />
The URL should be this one:</p>
<pre class="lang-yaml prettyprint-override"><code>url: http://prometheus-server.prometheus.svc.cluster.local
</code></pre>
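<p>If you want to double-check the URL before touching Grafana, one quick sketch is to run a throwaway curl pod inside the cluster and hit the Prometheus HTTP API directly; a JSON response from the endpoint below means the datasource URL is reachable:</p>
<pre><code>kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://prometheus-server.prometheus.svc.cluster.local/api/v1/status/buildinfo
</code></pre>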
|
<p>I am trying to install the following packages in the elasticsearch_exporter container.
I tried yum install and apt-get, and could not install using them.
Can someone tell me how to install packages in this container?</p>
<p><strong>Linux elasticsearch-exporter-6dbd9cf659-7km8x 5.2.9-1.el7.elrepo.x86_64 #1 SMP Fri Aug 16 08:17:55 EDT 2019 x86_64 GNU/Linux</strong></p>
<p>Error when running <code>apt-get install python3</code>:
<code>sh: apt-get: not found</code></p>
<p>python3
python3-pip
awscli</p>
|
<p>If you are referring to the <code>justwatch/elasticsearch_exporter</code> image, then it is based on the <code>busybox</code> image (<a href="https://github.com/prometheus-community/elasticsearch_exporter/blob/master/Dockerfile#L3" rel="nofollow noreferrer">source</a>). There is no package manager inside, and adding one would be rather difficult.</p>
<p>An easy way to solve this is to just copy the exporter into a regular Debian image, like this:</p>
<pre><code>FROM justwatch/elasticsearch_exporter:1.1.0 as source
FROM debian:buster-slim
COPY --from=source /bin/elasticsearch_exporter /bin/elasticsearch_exporter
EXPOSE 9114
ENTRYPOINT [ "/bin/elasticsearch_exporter" ]
</code></pre>
<p>And here is your exporter with <code>apt</code> inside.</p>
|
<h1>Background</h1>
<p>Consider a set of HTTP <code>GET</code> and <code>PUT</code> requests that I would like to issue to the K8S REST API. I know that the currently running pod (i.e. assume a single pod in the cluster for one-off testing/debugging/etc.) has the appropriate credentials (i.e. associated with the service account) to execute these calls successfully.</p>
<p>I would like to modify my requests so that they use a different service account to execute the request (i.e. modify the <code>user</code> field of the request). However, there's no guarantee the user is permitted to make <strong>all</strong> of these requests, and some could be destructive, so it's ideal that one of the two scenarios occur:</p>
<ul>
<li>None of the requests are executed.</li>
<li>100% of the requests are executed.</li>
</ul>
<p>By having just some of the requests succeed, it can put a system into an indeterminate state.</p>
<hr />
<h1>Question</h1>
<p>Is there an API/feature in K8S where I can pre-determine if a specific API request, on the behalf of a specific user/service-account, will be permitted to execute?</p>
|
<pre><code>$ kubectl -v 10 --as system:serviceaccount:default:jenkins auth can-i create pod
...
I0426 20:27:33.008777 4149 request.go:942] Request Body: {"kind":"SelfSubjectAccessReview","apiVersion":"authorization.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"resourceAttributes":{"namespace":"default","verb":"create","resource":"pods"}},"status":{"allowed":false}}
I0426 20:27:33.008875 4149 round_trippers.go:419] curl -k -v -XPOST -H "Impersonate-User: system:serviceaccount:default:jenkins" -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubectl/v1.14.0 (darwin/amd64) kubernetes/641856d" 'https://172.22.1.3/apis/authorization.k8s.io/v1/selfsubjectaccessreviews'
I0426 20:27:34.935506 4149 round_trippers.go:438] POST https://172.22.1.3/apis/authorization.k8s.io/v1/selfsubjectaccessreviews 201 Created in 1926 milliseconds
I0426 20:27:34.935550 4149 round_trippers.go:444] Response Headers:
I0426 20:27:34.935564 4149 round_trippers.go:447] Audit-Id: 631abed7-b27b-4eca-b267-4d7db0f1aa21
I0426 20:27:34.935576 4149 round_trippers.go:447] Content-Type: application/json
I0426 20:27:34.935588 4149 round_trippers.go:447] Date: Fri, 26 Apr 2019 14:57:34 GMT
I0426 20:27:34.935599 4149 round_trippers.go:447] Content-Length: 378
I0426 20:27:34.935724 4149 request.go:942] Response Body: {"kind":"SelfSubjectAccessReview","apiVersion":"authorization.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"resourceAttributes":{"namespace":"default","verb":"create","resource":"pods"}},"status":{"allowed":true,"reason":"RBAC: allowed by RoleBinding \"jenkins-ns-default/default\" of Role \"jenkins-ns-default\" to User \"system:serviceaccount:default:jenkins\""}}
yes
</code></pre>
<p>You can view a detailed description of the SubjectAccessReview API here: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#subjectaccessreview-v1-authorization" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#subjectaccessreview-v1-authorization</a></p>
<p>Read more here: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authorization/</a></p>
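<p>If you prefer to call the API directly rather than going through <code>kubectl auth can-i</code>, you can also post a <code>SubjectAccessReview</code> yourself. A minimal sketch (the user and resource attributes are just examples):</p>
<pre><code># sar.yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:serviceaccount:default:jenkins
  resourceAttributes:
    namespace: default
    verb: create
    resource: pods
</code></pre>
<p>Submitting it with <code>kubectl create -f sar.yaml -o yaml</code> returns the review with <code>status.allowed</code> filled in, which lets you check every request you intend to make before executing any of them.</p>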
|
<p>All I have is a GKE cluster with 3 node pools, and the machine size is <code>e2-standard-2</code>, but when I push my deployment into this cluster I get this error on the GKE dashboard:
<a href="https://i.stack.imgur.com/FDVOl.png" rel="nofollow noreferrer">error image</a></p>
<p>Although I have enabled node auto-provisioning, it is still showing this error.
Can you help me figure out how to fix this issue?</p>
|
<p>From what I understand, you are using <code>Standard GKE</code> not <code>Autopilot GKE</code>.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning" rel="nofollow noreferrer">Using node auto-provisioning</a> GKE documentation provides much information about this feature.</p>
<p>Regarding your issue, you mention that your cluster didn't have <code>node auto-provisioning</code> and you have enabled it.
In <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#enable" rel="nofollow noreferrer">Enabling node auto-provisioning</a> part you have note:</p>
<blockquote>
<p>If you disable then re-enable auto-provisioning on your cluster, <strong>existing node pools will not have auto-provisioning enabled. To re-enable auto-provisioning for these node pools, you need to</strong> <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#mark_node_auto-provisioned" rel="nofollow noreferrer">mark individual node pools</a> as auto-provisioned.</p>
</blockquote>
<p>In short, if you just enabled auto-provisioning on cluster using:</p>
<pre><code>gcloud container clusters update CLUSTER_NAME \
--enable-autoprovisioning \
... (other config)
</code></pre>
<p><strong>It will work for new node-pools.</strong></p>
<p>To enable <code>node auto-provisioning</code> for <strong>already existing</strong> <code>node-pools</code>, you need to specify it by name:</p>
<pre><code>gcloud container node-pools update NODE_POOL_NAME \
--enable-autoprovisioning
</code></pre>
<p>If you didn't change anything, the first <code>node-pool</code> in the new cluster by default is called <code>default-pool</code>. So for example you should use</p>
<pre><code>gcloud container node-pools update default-pool \
--enable-autoprovisioning
</code></pre>
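<p>If you are not sure what your node pools are called, you can list them first (the cluster name and location are placeholders):</p>
<pre><code>gcloud container node-pools list --cluster CLUSTER_NAME --region REGION
</code></pre>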
<p>If this doesn't solve your issue, please provide more details about your cluster and the commands you execute in order to replicate this behavior.</p>
|
<p>Can someone answer the following questions for me? It's all a little unclear to me.</p>
<ol>
<li><p>How do I find out which annotations exist at all, and where should I look? Is there a list or template somewhere?</p>
</li>
<li><p>The standard annotations work and Prometheus sees this service. Everything is OK with this:</p>
<pre><code>annotations:
  prometheus.io/scrape: 'true'
  prometheus.io/path: '/actuator/prometheus'
  prometheus.io/port: '8700'
</code></pre>
</li>
<li><p>But the service is displayed as an endpoint IP. Is there any option or setting so that the Prometheus targets page shows the service name and namespace instead of the endpoint IP?</p>
</li>
</ol>
<p>Thanks</p>
|
<p>I'll try to give you some answers based on what I understand from your question.</p>
<ol>
<li>About the annotations: to know which ones are used by the Helm chart, there is no magic here, you need to read the template files. That said, most of those annotations are listed in the <code>values.yaml</code> file under the <code>relabel_configs</code> section. You'll find blocks like this one</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code> relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
</code></pre>
<p>This means that every resource of this job that has the <code>prometheus.io/scrape</code> annotation with the value <code>true</code> will be scraped.</p>
<ol start="2">
<li><p>Yes, this is normal based on the content of the <code>values.yaml</code> file.</p>
</li>
<li><p>On the targets page, Prometheus will always print the IP address under the <em>Endpoint</em> column. However, you can play with the configuration in order to print the namespace and resource name under the <em>Labels</em> column if they are not already there (see the sketch after this list).</p>
</li>
</ol>
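<p>Regarding point 3, here is a sketch of the kind of relabeling that copies the namespace and service name into target labels (the label names are just examples; the stock chart already ships something very similar for its service-endpoints job):</p>
<pre class="lang-yaml prettyprint-override"><code>  relabel_configs:
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name
</code></pre>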
|
<p>I have been following this tutorial: <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">https://cert-manager.io/docs/</a>, and I have installed cert-manager and made sure the pods are running with <code>kubectl get pods --namespace cert-manager</code>:</p>
<pre><code>cert-manager-5597cff495-l5hjs 1/1 Running 0 91m
cert-manager-cainjector-bd5f9c764-xrb2t 1/1 Running 0 91m
cert-manager-webhook-5f57f59fbc-q5rqs 1/1 Running 0 91m
</code></pre>
<p>I then configured my cert-manager using ACME issuer by following this tutorial <a href="https://cert-manager.io/docs/configuration/acme/" rel="nofollow noreferrer">https://cert-manager.io/docs/configuration/acme/</a> .</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
# You must replace this email address with your own.
# Let's Encrypt will use this to contact you about expiring
# certificates, and issues related to your account.
email: [email protected]
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
# Secret resource that will be used to store the account's private key.
name: letsencrypt-staging
# Add a single challenge solver, HTTP01 using nginx
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p>Here is my full ingress config file:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: www.hyhaus.xyz
http:
paths:
- path: /api/?(.*)
backend:
serviceName: devback-srv
servicePort: 4000
- path: /?(.*)
backend:
serviceName: devfront-srv
servicePort: 3000
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
service.beta.kubernetes.io/do-loadbalancer-hostname: 'www.hyhaus.xyz'
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
# You must replace this email address with your own.
# Let's Encrypt will use this to contact you about expiring
# certificates, and issues related to your account.
email: [email protected]
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
# Secret resource that will be used to store the account's private key.
name: letsencrypt-staging
# Add a single challenge solver, HTTP01 using nginx
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p>However, when I browse to my site, the browser warns that the security certificate is not trusted by my computer's operating system. And when I take a look at my certificate, it shows as self-signed, which is not really what I want. <a href="https://i.stack.imgur.com/VqbYX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VqbYX.png" alt="certificate" /></a> Am I doing something wrong here?</p>
|
<p>This is a certificate placeholder provided by <code>nginx ingress controller</code>. When you see it, it means there is no other (dedicated) certificate for the endpoint.</p>
<p>The most likely reason this happened is that your <code>Ingress</code> doesn't have the necessary TLS data. Update it with this:</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
annotations:
# which issuer to use
cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
tls: # placing a host in TLS config indicates that a certificate should be created
- hosts:
- example.org
- www.example.org
- xyz.example.org
secretName: myingress-cert # cert-manager will store the created certificate in this secret
</code></pre>
<p>Documentation for ingress objects is <a href="https://cert-manager.io/docs/usage/ingress/" rel="nofollow noreferrer">here</a>.</p>
<p>If the above didn't help, try the troubleshooting steps offered by the <a href="https://cert-manager.io/docs/faq/troubleshooting/" rel="nofollow noreferrer">documentation</a>. In my experience checking <code>CertificateRequest</code> and <code>Certificate</code> resources was enough in most cases to determine the problem.</p>
<pre><code>$ kubectl get certificate
$ kubectl describe certificate <certificate-name>
$ kubectl get certificaterequest
$ kubectl describe certificaterequest <CertificateRequest name>
</code></pre>
<p>Remember that these objects are namespaced, meaning that they'll be in the same namespace as the <code>ingress</code> object.</p>
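<p>In addition to the resources above (and not part of the original checklist), for ACME HTTP01 issues it can also help to inspect the intermediate resources cert-manager creates while solving the challenge:</p>
<pre><code>$ kubectl get orders,challenges -n <namespace>
$ kubectl describe challenge <challenge-name> -n <namespace>
</code></pre>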
|
<p>This is my current setup:</p>
<pre><code>os1@os1:/usr/local/bin$ minikube update-check
CurrentVersion: v1.20.0
LatestVersion: v1.25.1
os1@os1:/usr/local/bin$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
</code></pre>
<p>What should be the steps to upgrade the minikube?</p>
|
<p><a href="https://minikube.sigs.k8s.io/docs/" rel="nofollow noreferrer">Minikube</a> is an executable, in this case you would need to <code>re-install</code> the <code>minikube</code> with the desired version. There is no command to upgrade the running <code>Minikube</code>.</p>
<p>You would need to:</p>
<pre><code>sudo minikube delete # remove your minikube cluster
sudo rm -rf ~/.minikube # remove minikube
</code></pre>
<p>and reinstall it using <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">Minikube documentation - Start</a>, depending on what your requirements are (the packages referenced in the docs should always be up to date and should cover your requirements regarding available drivers).</p>
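<p>For a Debian host like yours, the re-install boils down to something like this (a sketch based on the current upstream instructions; the package URL may change over time):</p>
<pre><code>minikube delete
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
minikube version
minikube start
</code></pre>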
|
<p>I've just installed a k8s cluster (k3d).</p>
<p>I'm just playing with it and I'm running into the first newbie issue: how to load locally created images.</p>
<p>I mean, I've just created a docker image tagged as <code>quarkus/feedly:v1</code>.</p>
<ol>
<li>How could I make it accessible for k8s cluster?</li>
<li>Which is the default k8s container runtime?</li>
<li>Does exist any interaction with my k8s cluster and my local docker? I mean, Have each k8s node installed a docker/rkt/containerd runtime?</li>
<li>Could I create a docker registry inside Kubernetes, as a manifest? How could I make Kubernetes access it? </li>
</ol>
<p>I've deployed my manifest and I'm getting these events:</p>
<blockquote>
<p>Failed to pull image "quarkus/feedly:0.0.1-SNAPSHOT": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/quarkus/feedly:0.0.1-SNAPSHOT": failed to resolve reference "docker.io/quarkus/feedly:0.0.1-SNAPSHOT": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed</p>
</blockquote>
<p>I know it's a normal error since quarkus registry doesn't exist.</p>
<p>Any helping code over there?</p>
|
<p>Here are some pointers :</p>
<ol>
<li>To make your image accessible to your k8s cluster, you need to use a registry that is reachable from your cluster nodes. So either create an account on Docker Hub and use that, or install a local image registry and use it.</li>
<li>Docker is the default container runtime used by the majority of k8s distributions. However, you can use any OCI-compatible runtime (containerd for example). rkt is no longer a living project, so I advise against using it.</li>
<li>Well, it depends on the k8s distribution you're using. In any case, each node in the cluster needs a container runtime installed on it. It is mandatory.</li>
<li>Deploying a Docker registry as a Kubernetes resource is probably not a good idea, as you'd need too much configuration to make it work. A simpler solution is to deploy a Docker registry on one of your nodes and then call it using the node IP (see the sketch after this list). You have a configuration example in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">official doc</a>. By the way, Docker provides the registry as a <a href="https://hub.docker.com/_/registry" rel="nofollow noreferrer">Docker image</a>, so the installation is pretty simple.</li>
</ol>
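<p>Regarding point 4, a minimal sketch of running the registry image directly with Docker on a node and pushing to it (the IP and tag are placeholders):</p>
<pre><code># on the chosen node
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# on your machine (you may need to add <node-ip>:5000 to Docker's insecure-registries list)
docker tag quarkus/feedly:v1 <node-ip>:5000/quarkus/feedly:v1
docker push <node-ip>:5000/quarkus/feedly:v1
</code></pre>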
<p>Hope this helps !</p>
|
<p>I'm trying to use <code>sonarsource/sonar-scanner-cli</code> as a kubernetes container, so I do this in a yaml:</p>
<pre><code>- name: "sonarqube-scan-{{ .Values.git.commitID }}"
image: "{{ .Values.harbor.host }}/{{ .Values.harbor.cache }}/sonarsource/sonar-scanner-cli"
env:
# Sonarqube's URL
- name: SONAR_HOST_URL
valueFrom:
secretKeyRef:
name: sonarqube
key: sonar-url
# Auth token of sonarqube's bot
- name: SONAR_LOGIN
valueFrom:
secretKeyRef:
name: sonar-bot
key: sonar-token
volumeMounts:
- mountPath: /usr/src
name: initrepo
</code></pre>
<p>Now I want to do some pre-setup before the regular <code>sonarsource/sonar-scanner-cli</code> run, and parse the container's output for some other work. If this were a shell script, I would do something like:</p>
<pre><code>$before.sh
$sonar-scanner-cl.sh | after.sh
</code></pre>
<p>I guess I can build a new docker image which is <code>FROM sonarsource/sonar-scanner-cli</code>, and run the processes in <code>before.sh</code> and <code>after.sh</code> around its run script, but I don't know how to call the original <code>sonarsource/sonar-scanner-cli</code> run commands. What are the actual commands?</p>
<p>Or, alternatively, does kubernetes have a way to do this?</p>
|
<p>Here's how one can modify container commands without building another image.</p>
<ol>
<li>Pull the image</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>docker pull sonarsource/sonar-scanner-cli
</code></pre>
<ol start="2">
<li>Inspect it</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>docker inspect sonarsource/sonar-scanner-cli
</code></pre>
<p>You should get something like this:</p>
<pre class="lang-json prettyprint-override"><code> "Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"sonar-scanner\"]" <- arguments
],
"WorkingDir": "/usr/src",
"Entrypoint": [ <- executable
"/usr/bin/entrypoint.sh"
],
</code></pre>
<p><code>Entrypoint</code> is <em>what</em> will be executed and <code>CMD [...]</code> (not <code>Cmd</code>) are the arguments for the executable. In a human-friendly format that equals to:</p>
<pre class="lang-sh prettyprint-override"><code># entrypoint args
/usr/bin/entrypoint.sh sonar-scanner
</code></pre>
<p>Now in this case we have a script that is being executed so there are two options.</p>
<h3>Option 1: Modify the entrypoint script and mount it at launch</h3>
<ol>
<li>Run this to save the script on your machine:</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>docker run --rm --entrypoint="" sonarsource/sonar-scanner-cli /bin/cat /usr/bin/entrypoint.sh > entrypoint.sh
</code></pre>
<ol start="2">
<li>Modify <code>entrypoint.sh</code> as you like, then put its contents into a configMap.</li>
<li>Mount the file from the configMap instead of /usr/bin/entrypoint.sh (don't forget to set the mode to <code>0755</code>; see the sketch after this list)</li>
</ol>
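<p>A sketch of what steps 2 and 3 could look like in the pod spec (the configMap name is a placeholder):</p>
<pre class="lang-yaml prettyprint-override"><code>    volumes:
      - name: entrypoint-override
        configMap:
          name: sonar-entrypoint     # placeholder: holds your modified entrypoint.sh
          defaultMode: 0755
    containers:
      - name: "sonarqube-scan-..."
        image: ".../sonarsource/sonar-scanner-cli"
        volumeMounts:
          - name: entrypoint-override
            mountPath: /usr/bin/entrypoint.sh
            subPath: entrypoint.sh   # overlay just this one file
</code></pre>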
<h3>Option 2: Change entrypoint and arguments in resource definition</h3>
<p>Note that this may not work with some images (ones with no shell inside).</p>
<pre class="lang-yaml prettyprint-override"><code>- name: "sonarqube-scan-{{ .Values.git.commitID }}"
image: "{{ .Values.harbor.host }}/{{ .Values.harbor.cache }}/sonarsource/sonar-scanner-cli"
command: # this is entrypoint in k8s API
- /bin/sh
- -c
args: # this goes instead of CMD
- "before.sh && /usr/bin/entrypoint.sh sonar-scanner && after.sh"
# | original command and args |
</code></pre>
|
<p>I have a k8s cluster running, with 2 slave nodes. It has been running a few apps without any issue for some time. Now I need to add an app which requires SCTP support, so I need to modify the cluster so that it supports SCTP. I do not want to delete the entire cluster and recreate it. From Google I understood that <code>--feature-gates=SCTPSupport=True</code> is required at the time of init.</p>
<p>Can someone tell me if there is a way to do it at runtime, or with minimal rework rather than cluster deletion/re-creation?</p>
<pre><code>ubuntu@kmaster:~$ helm install --debug ./myapp
[debug] Created tunnel using local port: '40409'
[debug] SERVER: "127.0.0.1:40409"
[debug] Original chart version: ""
[debug] CHART PATH: /home/ubuntu/myapp
Error: release myapp-sctp failed: Service "myapp-sctp" is invalid: spec.ports[0].protocol: Unsupported value: "SCTP": supported values: "TCP", "UDP"
ubuntu@kmaster:~$
</code></pre>
<p>Thanks.</p>
|
<p>Basically you must pass this flag to kube-apiserver. How you can do that depends on how you set up the cluster. If you used kubeadm or kubespray, then you should edit the file /etc/kubernetes/manifests/kube-apiserver.yaml and add this flag under the "command" field (alongside the other flags). After that the kube-apiserver pod should be restarted automatically. If not, you can kill it by hand.</p>
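<p>The relevant part of <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> would then look roughly like this (all other flags stay as they are):</p>
<pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=SCTPSupport=true
    # ...existing flags...
</code></pre>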
|
<p>How do I expose an ingress when running kubernetes with minikube in windows 10?</p>
<p>I have enabled the minikube ingress add on.</p>
<p>My ingress is running here...</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
helmtest-ingress nginx helmtest.info 192.168.49.2 80 37m
</code></pre>
<p>I have added my <code>hosts</code> entry...</p>
<pre><code>192.168.49.2 helmtest.info
</code></pre>
<p>I just get nothing when attempting to browse or ping either <code>192.168.49.2</code> or <code>helmtest.info</code></p>
<p>My ingress looks like the following</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: helmtest-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: helmtest.info
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: helmtest-service
port:
number: 80
</code></pre>
<p>My service looks like the following...</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: helmtest-service
labels:
app: helmtest-service
spec:
type: ClusterIP
selector:
app: helmtest
ports:
- port: 80
targetPort: 80
protocol: TCP
</code></pre>
<p>I can access my service successfully in the browser after running <code>minikube service helmtest-service --url</code></p>
<p>If I run <code>minikube tunnel</code> it just hangs here....</p>
<pre><code>minikube tunnel
❗ Access to ports below 1024 may fail on Windows with OpenSSH clients older than v8.1. For more information, see: https://minikube.sigs.k8s.io/docs/handbook/accessing/#access-to-ports-1024-on-windows-requires-root-permission
🏃 Starting tunnel for service helmtest-ingress.
</code></pre>
<p>Where am I going wrong here?</p>
|
<p>OP didn't provide further information, so I will provide an answer based on the current information.</p>
<p>You can run <code>Ingress</code> on <a href="https://minikube.sigs.k8s.io/docs/" rel="nofollow noreferrer">Minikube</a> using the <code>$ minikube addons enable ingress</code> command. There are also more ingress-related addons, like <a href="https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/" rel="nofollow noreferrer">Ingress DNS</a>, enabled with <code>minikube addons enable ingress-dns</code>. In the <code>Minikube</code> documentation you can find more details about this addon and when you should use it.</p>
<p>Minikube has a well-described section about <a href="https://minikube.sigs.k8s.io/docs/commands/tunnel/" rel="nofollow noreferrer">tunnel</a>. An important fact about the tunnel is that it must be run in a separate terminal window to keep the LoadBalancer running.</p>
<blockquote>
<p>Services of type <code>LoadBalancer</code> can be exposed via the <code>minikube tunnel</code> command. It must be run in a separate terminal window to keep the <code>LoadBalancer</code> running. <code>Ctrl-C</code> in the terminal can be used to terminate the process at which time the network routes will be cleaned up.</p>
</blockquote>
<p>This part is described in <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">Accessing apps</a> documentation.</p>
<p>As OP mention</p>
<blockquote>
<p>I can access my service successfully in the browser after running minikube service helmtest-service --url</p>
<p>If I run minikube tunnel it just hangs here....</p>
</blockquote>
<p><strong>Possible Solution</strong></p>
<ul>
<li>You might be using an old version of OpenSSH; update it.</li>
<li>You are using ports below 1024. This situation is described in <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#access-to-ports-1024-on-windows-requires-root-permission" rel="nofollow noreferrer">this known issue</a>. Try using a higher port such as 5000, as in <a href="https://stackoverflow.com/questions/59994578/how-do-i-expose-ingress-to-my-local-machine-minikube-on-windows">this example</a></li>
<li>It might look like <code>minikube tunnel</code> just hangs, but it needs its own terminal window. It may be working correctly; keep it running there and use another terminal for your other commands</li>
</ul>
<p>Useful links</p>
<ul>
<li><a href="https://stackoverflow.com/questions/59994578/how-do-i-expose-ingress-to-my-local-machine-minikube-on-windows">How do I expose ingress to my local machine? (minikube on windows)</a></li>
<li><a href="https://stackoverflow.com/questions/58790433/">Cannot export a IP in minikube and haproxy loadBalancer - using minikube tunnel</a></li>
</ul>
|
<p>I am unable to launch a <strong>custom task application</strong> stored in a <strong>private docker repo</strong>. All my docker images in Kubernetes are pulled from this private repo, so the <strong>imagePullSecrets</strong> works fine in general, but it seems it is not being used by Spring Cloud Data Flow when deploying the task to Kubernetes. If I inspect the pod, there is no imagePullSecret set.</p>
<p>The error I get is:</p>
<p>xxxxx- no basic auth credentials
<a href="https://i.stack.imgur.com/9kXGS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9kXGS.png" alt="enter image description here"></a></p>
<p>The server has been deployed with the ENV variable which the guide states will fix this</p>
<pre><code> - name: SPRING_CLOUD_DEPLOYER_KUBERNETES_IMAGE_PULL_SECRET
value: regcred
</code></pre>
<p>I have even tried to add custom properties on a per-application basis.</p>
<p>I have read through the guide <a href="https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/current-SNAPSHOT/reference/htmlsingle/#_private_docker_registry" rel="nofollow noreferrer">HERE</a></p>
<p>I am running the following versions:</p>
<p>Kubernetes 1.15 &</p>
<p><a href="https://i.stack.imgur.com/cEK6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cEK6P.png" alt="enter image description here"></a></p>
<p>I have been stuck on this issue for weeks and simply can't find a solution. I'm hoping somebody has seen this issue and managed to solve it before?</p>
<p>Is there something else I'm missing?</p>
|
<p>Using the environment variable <code>SPRING_CLOUD_DEPLOYER_KUBERNETES_IMAGE_PULL_SECRET</code> also didn't work for me.</p>
<p>An alternative that made it work in my case is adding the following to the <code>application.yaml</code> of the SCDF Server in Kubernetes:</p>
<p><code>application.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>spring:
cloud:
dataflow:
task:
platform:
kubernetes:
accounts:
default:
imagePullSecret: <your_secret>
</code></pre>
<p>or, when you are using a custom SCDF image like I do, you can of course specify it as an argument:</p>
<p><code>deployment.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>[...]
command: ["java", "-jar", "spring-cloud-dataflow-server.jar"]
args:
- --spring.cloud.dataflow.task.platform.kubernetes.accounts.default.imagePullSecret=<your_secret>
[...]
</code></pre>
<p>More details on <a href="https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/" rel="nofollow noreferrer">https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/</a></p>
|
<p>If Kubernetes Pods bound to PVCs go down/restart, they are guaranteed to come back up with the same cluster name in order for the PVC binding to be valid. But are they guaranteed to come back up with the same ClusterIP?</p>
|
<p>ClusterIP is not a property of a Pod; it's a property of a Service. Unless you delete and recreate the Service, its ClusterIP will stay the same no matter how many times the Pods behind it are restarted.</p>
<p>Regarding Pod IPs: they can definitely change on restart, even for Pods that are part of a StatefulSet. What a StatefulSet preserves is the Pod's identity, i.e. its name and its stable DNS hostname through the governing headless Service, not its IP, so rely on those stable DNS names rather than on Pod IPs.</p>
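<p>For reference, the stable per-pod DNS name a StatefulSet provides (through its headless Service) has the following form; the names here are placeholders:</p>
<pre><code><pod-name>.<headless-service-name>.<namespace>.svc.cluster.local
# e.g. web-0.nginx.default.svc.cluster.local
</code></pre>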
|
<p>Okay, the title is quite a mouthful, but it actually describes the situation.</p>
<p>I deployed a service on GKE in namespace argo-events. Something was wrong with it so I tore it down:</p>
<pre><code>kubectl delete namespace argo-events
</code></pre>
<p>Actually, that's already where the problems started (I suspect a connection to the problem described below) and I had to resort to a <a href="http://medium.com/@craignewtondev/how-to-fix-kubernetes-namespace-deleting-stuck-in-terminating-state-5ed75792647e" rel="nofollow noreferrer">hack</a> because argo-events got stuck in a Terminating state forever. But the result was as desired - namespace seemed to be gone together with all objects in it.</p>
<p>Because of problems with redeployment I inspected the GKE Object Browser (just looking around - cannot filter for argo-events namespace anymore as it is officially gone) where I stumbled upon two lingering objects in ns argo-events:</p>
<p><a href="https://i.stack.imgur.com/9wcNy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9wcNy.png" alt="enter image description here" /></a></p>
<p>argo-events is not listed by <code>kubectl get namespaces</code>. Just confirming that.</p>
<p>And I can find those two objects if I look them up specifically:</p>
<pre><code>$ kubectl get eventbus -n argo-events
NAME AGE
default 17h
$ kubectl get eventsource -n argo-events
NAME AGE
pubsub-event-source 14h
</code></pre>
<p>But - I cannot find anything by asking for all objects:</p>
<pre><code>$ kubectl get all -n argo-events
No resources found in argo-events namespace.
</code></pre>
<p>So my question is: how can I generically list all lingering objects in argo-events?</p>
<p>I'm asking because otherwise I'd have to inspect the entire Object Browser Tree to maybe find more objects (as I cannot select the namespace anymore).</p>
|
<p>By using the command <code>$ kubectl get all</code> you will only print a few resource types, such as:</p>
<ul>
<li>pod</li>
<li>service</li>
<li>daemonset</li>
<li>deployment</li>
<li>replicaset</li>
<li>statefulset</li>
<li>job</li>
<li>cronjobs</li>
</ul>
<p>It won't print all resources; the full list of resource types can be seen with <code>$ kubectl api-resources</code>.</p>
<p><strong>Example</strong></p>
<p>When you create a <code>PV</code> following the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer">PersistentVolume</a> documentation, it won't be listed in the <code>$ kubectl get all</code> output, but it will be listed if you specify this resource type.</p>
<pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/storage/pv-volume.yaml
persistentvolume/task-pv-volume created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Available manual 3m12s
$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.3.240.1 <none> 443/TCP 86m
$
</code></pre>
<p>If you would like to list all resources from a specific <code>namespace</code>, you should use the command below:</p>
<pre><code>kubectl -n argo-events api-resources --namespaced=true -o name | xargs --verbose -I {} kubectl -n argo-events get {} --show-kind --ignore-not-found
</code></pre>
<p>The above solution was presented in the GitHub thread <a href="https://github.com/kubernetes/kubectl/issues/151" rel="nofollow noreferrer">kubectl get all does not list all resources in a namespace</a>. In this thread you may find some additional variations of the above command.</p>
<p>In addition, you can also check the <a href="https://www.studytonight.com/post/how-to-list-all-resources-in-a-kubernetes-namespace" rel="nofollow noreferrer">How to List all Resources in a Kubernetes Namespace</a> article. There you can find a method to list resources using a shell <code>function</code>.</p>
|
<p>I'm currently looking at GKE and some of the tutorials on google cloud. I was following this one here <a href="https://cloud.google.com/solutions/integrating-microservices-with-pubsub#building_images_for_the_app" rel="nofollow noreferrer">https://cloud.google.com/solutions/integrating-microservices-with-pubsub#building_images_for_the_app</a> (source code <a href="https://github.com/GoogleCloudPlatform/gke-photoalbum-example" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/gke-photoalbum-example</a>)</p>
<p>This example has 3 deployments and one service. The example tutorial has you deploy everything via the command line which is fine and all works. I then started to look into how you could automate deployments via cloud build and discovered this:</p>
<p><a href="https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments" rel="nofollow noreferrer">https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments</a></p>
<p>These docs say you can create a build configuration for your a trigger (such as pushing to a particular repo) and it will trigger the build. The sample yaml they show for this is as follows:</p>
<pre><code># deploy container image to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
args:
- run
- --filename=kubernetes-resource-file
- --image=gcr.io/project-id/image:tag
- --location=${_CLOUDSDK_COMPUTE_ZONE}
- --cluster=${_CLOUDSDK_CONTAINER_CLUSTER}
</code></pre>
<p>I understand how the location and cluster parameters can be passed in and these docs also say the following about the resource file (filename parameter) and image parameter:</p>
<p><em>kubernetes-resource-file is the file path of your Kubernetes configuration file or the directory path containing your Kubernetes resource files.</em></p>
<p><em>image is the desired name of the container image, usually the application name.</em></p>
<p>Relating this back to the demo application repo where all the services are in one repo, I believe I could supply a folder path to the filename parameter such as the config folder from the repo <a href="https://github.com/GoogleCloudPlatform/gke-photoalbum-example/tree/master/config" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/gke-photoalbum-example/tree/master/config</a></p>
<p>But the trouble here is that those resource files themselves have an image property in them, so I don't know how this relates to the <em>image</em> property of the Cloud Build trigger yaml. I also don't know how you could then have multiple "image" properties in the trigger yaml where each deployment would have its own container image.</p>
<p>I'm new to GKE and Kubernetes in general, so I'm wondering if I'm misinterpreting what the <em>kubernetes-resource-file</em> should be in this instance.</p>
<p>But is it possible to automate deploying of multiple deployments/services in this fashion when they're all bundled into one repo? Or have Google just over simplified things for this tutorial - the reality being that most services would be in their own repo so as to be built/tested/deployed separately?</p>
<p>Either way, how would the <code>image</code> property relate to the fact that an <em>image</em> is already defined in the deployment yaml? e.g:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
name: photoalbum-app
name: photoalbum-app
spec:
replicas: 3
selector:
matchLabels:
name: photoalbum-app
template:
metadata:
labels:
name: photoalbum-app
spec:
containers:
- name: photoalbum-app
image: gcr.io/[PROJECT_ID]/photoalbum-app@[DIGEST]
tty: true
ports:
- containerPort: 8080
env:
- name: PROJECT_ID
value: "[PROJECT_ID]"
</code></pre>
|
<p>The command that you use is fine for testing the deployment of one image. But when you work with Kubernetes (K8s) and its managed version on GCP (GKE), you usually never do this.</p>
<p>You use YAML files to describe your deployments, services and all other K8s objects that you want. When you deploy, you can run something like this</p>
<pre><code>kubectl apply -f <file.yaml>
</code></pre>
<p>If you have several files, you can use a wildcard if you want</p>
<pre><code>kubectl apply -f config/*.yaml
</code></pre>
<p>If you prefer to use only one file, you can separate the objects with <code>---</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-nginx-svc
labels:
app: nginx
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
labels:
app: nginx
spec:...
...
</code></pre>
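<p>To tie this back to Cloud Build (not shown in the tutorial step above): instead of <code>gke-deploy</code> with a single <code>--image</code>, you can run a plain <code>kubectl apply</code> over the whole config directory using the <code>kubectl</code> cloud builder. A sketch, with the same substitutions as placeholders:</p>
<pre><code>steps:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'config/']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
</code></pre>
<p>With this approach the image names stay in the resource files themselves, which also answers the question about the duplicated <code>image</code> property: there simply isn't one at the trigger level.</p>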
|
<p>I can ping a Kubernetes pod IP from the host and it succeeds. How does this work? I have read a lot about pod-to-pod communication, but nothing about host-to-pod. To my knowledge, the Kubernetes cluster is separate from the host machine and communication goes through the ingress controller on edge nodes. I am confused about this.</p>
|
<p>In fact this is Docker's behaviour, not Kubernetes <em>(supposing you're using Docker as your container runtime)</em>.</p>
<p>By default, when you install Docker, there is a default network interface that is created on your host: <strong>docker0</strong> (you should be able to see it with <code>ifconfig</code>). This interface is an Ethernet bridge device.<br>
So as stated in the documentation:</p>
<blockquote>
<p>If you don’t specify a different network when starting a container, the container is connected to the bridge and all traffic coming from and going to the container flows over the bridge to the Docker daemon, which handles routing on behalf of the container.</p>
</blockquote>
<p>This is why you can ping your container from your host.</p>
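<p>To see this bridge on your host (a quick sketch; assumes the default Docker bridge setup):</p>
<pre><code>ip addr show docker0     # the bridge interface and its IP (172.17.0.1/16 by default)
ip route | grep docker0  # the route the host uses to reach container IPs
</code></pre>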
<p>If you want to customize default IP range for this interface, you can refer to the <a href="https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/" rel="nofollow noreferrer">official documentation</a></p>
|
<p><strong>Example:</strong></p>
<p><strong>Namespace: a</strong></p>
<pre><code> PVC: a-pvc
Pod: main-pod-to-work-with
</code></pre>
<p><strong>Mounts:</strong></p>
<pre><code>a-pvc; name: a-pvc-mount; path: /pvc/a-files
b-pvc; name: b-pvc-mount; path: /pvc/b-files
c-pvc; name: c-pvc-mount; path: /pvc/c-files
</code></pre>
<p><strong>Namespace: b</strong></p>
<pre><code> PVC: b-pvc
</code></pre>
<p><strong>Namespace: c</strong></p>
<pre><code> PVC: c-pvc
</code></pre>
|
<p>[<strong>TL;DR</strong>]
In the same <code>namespace</code>, one <code>PVC</code> can be re-used by two or more different <code>pod</code>s. However, it is impossible for a <code>pod</code> to mount volumes based on <code>PVCs</code> from two different <code>namespaces</code> (that would violate the main idea behind <code>namespace</code>s - isolating resources allocated to different users).</p>
<p><strong>More info</strong></p>
<p>As per <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#introduction" rel="nofollow noreferrer">Persistent Volume</a> docs:</p>
<blockquote>
<p>A <strong>PersistentVolume</strong> (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.</p>
<p>A <strong>PersistentVolumeClaim</strong> (PVC) is a request for storage by a user.</p>
</blockquote>
<p>A bit lower in the documentation you have the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding" rel="nofollow noreferrer">Binding</a> part, where you can find this information:</p>
<blockquote>
<p>Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping, using a ClaimRef which is a bi-directional binding between the PersistentVolume and the PersistentVolumeClaim.</p>
</blockquote>
<p><code>Pod</code> and <code>PVC</code> are <code>namespaced</code> resources. It means that if you have a <code>pod</code> in the <code>default namespace</code>, the <code>pvc</code> must also be in the same <code>namespace</code>.</p>
<pre><code>$ kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
...
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
...
</code></pre>
<p>If you create a <code>pod</code> in the <code>default</code> <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">namespace</a>, it won't be able to see resources in other <code>namespaces</code>. That's why you need to specify the <code>namespace</code> in some commands. If you don't specify it, your output will be from the <code>default</code> namespace.</p>
<pre><code>$ kubectl get po
No resources found in default namespace.
$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
event-exporter-gke-564fb97f9-wtx9w 2/2 Running 0 9m2s
fluentbit-gke-8tcm6 2/2 Running 0 8m48s
fluentbit-gke-cdm2w 2/2 Running 0 8m51s
</code></pre>
<h2>Test</h2>
<p>Based on the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure a Pod to Use a PersistentVolume for Storage</a> documentation, you have steps to create a <code>PV</code>, a <code>PVC</code> and <code>Pods</code>.</p>
<pre><code>$ kubectl get po
NAME READY STATUS RESTARTS AGE
task-pv-pod 1/1 Running 0 5m20s
task-pv-pod-2 1/1 Running 0 4m43s
task-pv-pod-3 1/1 Running 0 2m40s
</code></pre>
<p>Several pods can use the same <code>PVC</code>, but all of those <code>pods</code> and the <code>PVC</code> are in the same <code>namespace</code>. Since <code>pod</code> and <code>pvc</code> are namespaced resources, it's impossible for a <code>pod</code> in one namespace to use a <code>PVC</code> from another <code>namespace</code>.</p>
<h2>Conclusion</h2>
<p>Kubernetes <code>namespaced</code> resources (like <code>pod</code> or <code>pvc</code>) are visible only to resources in the same <code>namespace</code>. In commands you specify the namespace using <code>--namespace <namespace></code> or <code>-n <namespace></code>. There are also cluster-wide resources, like <code>PersistentVolume</code>, which don't require a namespace as they are visible in the whole cluster. To check which resource types are namespaced you can use the command <code>$ kubectl api-resources</code>.</p>
<p>If this didn't answer your question, please elaborate.</p>
|
<p>I executed the following command</p>
<pre><code>gcloud container clusters get-credentials my-noice-cluter --region=asia-south2
</code></pre>
<p>and that command runs successfully. I can see the relevant config with <code>kubectl config view</code></p>
<p>But when I try to use kubectl, I get a timeout:</p>
<pre><code>❯ kubectl get pods -A -o wide
Unable to connect to the server: dial tcp <some noice ip>:443: i/o timeout
</code></pre>
<p>If I create a VM in GCP and use kubectl there, or use GCP's Cloud Shell, it works; but it does not work on our local laptops and PCs.</p>
<p>Some network info about our cluster:-</p>
<pre><code>Private cluster Disabled
Network default
Subnet default
VPC-native traffic routing Enabled
Pod address range 10.122.128.0/17
Service address range 10.123.0.0/22
Intranode visibility Enabled
NodeLocal DNSCache Enabled
HTTP Load Balancing Enabled
Subsetting for L4 Internal Load Balancers Disabled
Control plane authorized networks
office (192.169.1.0/24)
Network policy Disabled
Dataplane V2 Disabled
</code></pre>
<p>I also have firewall rules to allow HTTP/S</p>
<pre><code>❯ gcloud compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
default-allow-http default INGRESS 1000 tcp:80 False
default-allow-https default INGRESS 1000 tcp:443 False
....
</code></pre>
|
<p>If it works from your VPC and not from outside, it's because access to the control plane is restricted (a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">private GKE cluster</a> and/or master authorized networks). The master is then only reachable through the private IP or from the authorized networks.</p>
<p>Speaking about the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks" rel="nofollow noreferrer">authorized networks</a>, you have a single entry: <code>office (192.169.1.0/24)</code>. Sadly, that is your office's internal IP range, not the public IP your machines use to reach the internet.</p>
<p>To solve that, go to a site that shows you your public IP. Then update the authorized networks for your cluster with that IP/32, and try again.</p>
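<p>A sketch of that update (keep the existing office range and add the public IP you just looked up):</p>
<pre><code>gcloud container clusters update my-noice-cluter \
    --region asia-south2 \
    --enable-master-authorized-networks \
    --master-authorized-networks 192.169.1.0/24,<your-public-ip>/32
</code></pre>
<p>Note that <code>--master-authorized-networks</code> replaces the whole list, so include every CIDR you still want to allow.</p>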
|
<p>I am trying to create a Kubernetes pod and mounting a volume from local hostpath. I am using Azure Kubernetes cluster. Following is my yaml for creating pod</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
volumeMounts:
- mountPath: /opt/myfolder
name: test-volume
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /Users/kkadam/minikube/myfolder
# this field is optional
</code></pre>
<p>I have a few files under <code>myfolder</code> which I want to use inside the container. The files are present in the local folder but not inside the container.</p>
<p>What could be the issue?</p>
|
<p>Judging by what you said in your comment and your config, especially the path <code>/Users/kkadam/minikube/myfolder</code> which is typically a Mac OS path, it seems that you're trying to mount your local volume (probably your mac) in a pod deployed on AKS.<br>
That's the problem.</p>
<p>In order to make it work, you need to put the files you're trying to mount on the node running your pod (which is in AKS).</p>
|
<p>I have a Golang REST service and an etcd DB in one pod (as two containers), deployed in a Kubernetes cluster using a Deployment. Whenever I restart the pod, the service loses connectivity to etcd. I have tried using StatefulSets instead of a Deployment but it still didn't help. My deployment looks something like below.</p>
<p>etcd fails to restart due to this issue: <a href="https://github.com/etcd-io/etcd/issues/10487" rel="nofollow noreferrer">https://github.com/etcd-io/etcd/issues/10487</a></p>
<p>PVC :</p>
<pre><code> apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: XXXX
namespace: XXXX
annotations:
volume.beta.kubernetes.io/storage-class: glusterfs-storage
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
</code></pre>
<p>Deployment:</p>
<pre><code> apiVersion: apps/v1
kind: Deployment
metadata:
name: XXX
namespace: XXX
spec:
replicas: X
XXXXXXX
template:
metadata:
labels:
app: rest-service
version: xx
spec:
hostAliases:
- ip: 127.0.0.1
hostnames:
- "etcd.xxxxx"
containers:
- name: rest-service
image: xxxx
imagePullPolicy: IfNotPresent
ports:
- containerPort: xxx
securityContext:
readOnlyRootFilesystem: false
capabilities:
add:
- IPC_LOCK
- name: etcd-db
image: quay.io/coreos/etcd:v3.3.11
imagePullPolicy: IfNotPresent
command:
- etcd
- --name=etcd-db
- --listen-client-urls=https://0.0.0.0:2379
- --advertise-client-urls=https://etcd.xxxx:2379
- --data-dir=/var/etcd/data
- --client-cert-auth
- --trusted-ca-file=xxx/ca.crt
- --cert-file=xxx/tls.crt
- --key-file=xxx/tls.key
volumeMounts:
- mountPath: /var/etcd/data
name: etcd-data
XXXX
ports:
- containerPort: 2379
volumes:
- name: etcd-data
persistentVolumeClaim:
claimName: XXXX
</code></pre>
<p>I would expect the service to still be able to reach the DB even when the pod restarts</p>
|
<p>Keeping the application and the database in one pod is one of the worst practices in Kubernetes. If you update the application code, you have to restart the pod to apply the changes, so you end up restarting the database as well for nothing.</p>
<p>The solution is simple: run the application in one Deployment and the database in another. That way you can update the application without restarting the database. You can also scale the app and the DB separately, for example add more replicas to the app while keeping the DB at 1 replica, or vice versa.</p>
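<p>A minimal sketch of that split (names, the image and the environment variable the app reads are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: etcd            # the REST service reaches etcd via this DNS name
spec:
  selector:
    app: etcd-db        # matches the labels of the etcd Deployment/StatefulSet
  ports:
  - port: 2379
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rest-service
  template:
    metadata:
      labels:
        app: rest-service
    spec:
      containers:
      - name: rest-service
        image: xxxx                   # your REST service image
        env:
        - name: ETCD_ENDPOINT         # hypothetical: however your app discovers etcd
          value: "https://etcd:2379"
</code></pre>
<p>etcd itself then goes into its own Deployment (or, better, a StatefulSet) labelled <code>app: etcd-db</code>, keeping the PVC exactly as before.</p>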
|
<p>In my 1-node AKS cluster, I deploy multiple Job resources (kind: Job) that terminate after the task is completed. I have enabled the Cluster Autoscaler to add a second node when too many jobs are consuming the first node's memory; however, it only scales out once a job/pod cannot be scheduled due to lack of memory.</p>
<p>In my Job YAML I have also defined the memory limit and request.</p>
<p>Is there a possibility to configure the Cluster Autoscaler to scale out proactively when it reaches a certain memory threshold (e.g., 70% of the node memory) not just when it cannot deploy a job/pod?</p>
|
<p>In Kubernetes you can find 3 autoscaling mechanisms: the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noreferrer">Horizontal Pod Autoscaler</a> and the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/verticalpodautoscaler" rel="noreferrer">Vertical Pod Autoscaler</a>, which are both driven by metrics usage, and the <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="noreferrer">Cluster Autoscaler</a>.</p>
<p>As per <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="noreferrer">Cluster Autoscaler Documentation</a>:</p>
<blockquote>
<p>Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:</p>
<ul>
<li>there are pods that failed to run in the cluster due to insufficient resources.</li>
<li>there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.</li>
</ul>
</blockquote>
<p>In the <a href="https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler" rel="noreferrer">AKS Cluster Autoscaler documentation</a> you can find a note that the <code>CA</code> is a Kubernetes component, not something AKS-specific:</p>
<blockquote>
<p>The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscale in the Azure portal or using the Azure CLI. Let the Kubernetes cluster autoscaler manage the required scale settings.</p>
</blockquote>
<p>In <a href="https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler#about-the-cluster-autoscaler" rel="noreferrer">Azure Documentation - About the cluster autoscaler</a> you have information that AKS clusters can scale in one of two ways:</p>
<blockquote>
<p>The <code>cluster autoscaler</code> watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes.</p>
<p>The <code>horizontal pod autoscaler</code> uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand.</p>
</blockquote>
<p>On <code>AKS</code> you can adjust your autoscaler profile a bit to change some default values. More details can be found in <a href="https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler#using-the-autoscaler-profile" rel="noreferrer">Using the autoscaler profile</a>.</p>
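<p>For example, tightening the scan interval looks like this (a sketch; the resource group and cluster name are placeholders):</p>
<pre><code>az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --cluster-autoscaler-profile scan-interval=30s
</code></pre>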
<p>I would suggest reading the <a href="https://medium.com/kubecost/understanding-kubernetes-cluster-autoscaling-675099a1db92" rel="noreferrer">Understanding Kubernetes Cluster Autoscaling</a> article, which explains how the <code>CA</code> works. Under the <code>Limitations</code> part you have this information:</p>
<blockquote>
<p>The cluster autoscaler <strong>doesn’t take into account actual CPU/GPU/Memory usage</strong>, just resource requests and limits. Most teams overprovision at the pod level, so in practice we see aggressive upscaling and conservative downscaling.</p>
</blockquote>
<h2>Conclusion</h2>
<p>The <code>Cluster Autoscaler</code> doesn't consider actual resource usage, only requests and limits, so there is no built-in setting to make it scale out at a utilization threshold such as 70% of node memory. Also, a <code>CA</code> downscale or upscale might take a few minutes depending on the cloud provider.</p>
|
<p>I am trying to deploy containers to local Kubernetes. For now I have installed the Docker daemon, Minikube and the Minikube dashboard, and these are all working fine. I have also set up a local container registry on port 5000 and pushed 2 images of my application. I can see them in the browser at <a href="http://localhost:5000/v2/_catalog" rel="nofollow noreferrer">http://localhost:5000/v2/_catalog</a></p>
<p>Now I am trying to bring up the pod using Minikube:</p>
<pre><code>kubectl apply -f ./docker-compose-k.yml --record
</code></pre>
<p>I am getting an error on the dashboard like this:</p>
<pre><code>Failed to pull image "localhost:5000/coremvc2": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
</code></pre>
<p>Here is my compose file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: core23
labels:
app: codemvc
spec:
replicas: 1
selector:
matchLabels:
app: coremvc
template:
metadata:
labels:
app: coremvc
spec:
containers:
- name: coremvc
image: localhost:5000/coremvc2
ports:
- containerPort: 80
imagePullPolicy: Always
</code></pre>
<p>i don't know why this images are not pulled as docker deamon and kubernetes both are on same machine. i have also try this with dockerhub image and it's working fine, but i want to do this using local images.
please give me hint or any guideline.</p>
<p>Thank you,</p>
|
<p>Based on the comment, you started minikube with <code>minikube start</code> (without specifying the driver).</p>
<p>That means that Minikube is running inside a <strong>VirtualBox VM</strong>. In order to make your use case work, you have two choices:</p>
<ol>
<li><strong>The hard way</strong>: set up the connection between your VM and your host and use your host IP</li>
<li><strong>The easy way</strong>: connect to your VM using <code>minikube ssh</code> and install your registry there. Then your deployment should work with your VM's IP (a sketch of the commands follows this list).</li>
</ol>
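<p>A sketch of the easy way (the image name comes from your manifest; you may also need to add the VM's IP with port 5000 to your host Docker daemon's insecure-registries list):</p>
<pre><code># inside the VM
minikube ssh
docker run -d -p 5000:5000 --restart=always --name registry registry:2
exit

# back on the host: retag and push towards the VM's registry
docker tag localhost:5000/coremvc2 $(minikube ip):5000/coremvc2
docker push $(minikube ip):5000/coremvc2
# then use image: <minikube-ip>:5000/coremvc2 in the Deployment
</code></pre>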
<p>If you don't want to use Virtual box, you should read the <a href="https://minikube.sigs.k8s.io/docs/reference/drivers/" rel="nofollow noreferrer">documentation</a> about other existing drivers and how to use them.</p>
<p>Hope this helps !</p>
|
<p>I have my assets on S3 and my service is deployed on Kubernetes. Is it possible to define a proxy_pass in nginx-ingress? My current nginx config proxies asset requests to S3, and I want to replicate that in Kubernetes.</p>
<pre><code>location /assets/ {
proxy_pass https://s3.ap-south-1.amazonaws.com;
}
</code></pre>
<p>I tried this but it's not working:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
location /assets/ {
proxy_pass https://s3.ap-south-1.amazonaws.com/assets/;
}
</code></pre>
|
<p>You can try to use a Service of type ExternalName here, like this:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: s3-ap-south
spec:
type: ExternalName
externalName: s3.ap-south-1.amazonaws.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: s3-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- host: YOUR_HOSTNAME
http:
paths:
- backend:
serviceName: s3-ap-south
servicePort: 443
</code></pre>
|
<p>I am new to Kubernetes; my question is related to Google Cloud Platform.</p>
<p>Given a scenario where we need to restart a Kubernetes cluster, and we have some services written in Spring Boot: Spring Boot services are individual JVMs and each runs as an independent process. Once Kubernetes is restarted, in order to restart the Spring Boot services, I need help understanding what type of script or mechanism to use to restart all the services in Kubernetes. Please let me know; thank you, and I appreciate all your inputs.</p>
|
<p>I am not sure if I fully understood your question, but I think the best approach for you would be to pack your <code>Spring Boot app</code> into a <a href="https://www.docker.com/resources/what-container" rel="nofollow noreferrer">Docker container</a> and then use it on <code>GKE</code>.</p>
<p>A good guide on packing your <code>Spring Boot</code> application into a <code>container</code> can be found in this <a href="https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes#4" rel="nofollow noreferrer">CodeLabs tutorial</a>.</p>
<p>Once you have your application in a <code>container</code>, you will be able to use it in a <code>Deployment</code> or <code>StatefulSet</code> configuration file and deploy it to your cluster.</p>
<p>As mentioned in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment Documentation</a>:</p>
<blockquote>
<p>A Deployment provides declarative updates for Pods and ReplicaSets.</p>
<p>You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.</p>
</blockquote>
<p>In short, <code>Deployment controller</code> ensure to keep your application in your desired state.</p>
<p>For example if you would like to restart your application you just could scale down <code>Deployment</code> to <code>0 replicas</code> and scale up to <code>5 replicas</code>.</p>
<p>Also, as <code>GKE</code> runs on <code>Google Compute Engine</code> VMs, you can scale the number of cluster nodes as well.</p>
<h2>Examples</h2>
<p><strong>Restarting Application</strong></p>
<p>For my test I've used an <code>Nginx container</code> in a <code>Deployment</code>, but it should work similarly with your <code>Spring Boot app container</code>.</p>
<p>Let's say you have a 2-node cluster with a 5-replica application.</p>
<pre><code>$ kubectl create deployment nginx --image=nginx --replicas=5
deployment.apps/nginx created
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-86c57db685-2x8tj 1/1 Running 0 2m45s 10.4.1.5 gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-86c57db685-6lpfg 1/1 Running 0 2m45s 10.4.1.6 gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-86c57db685-8lvqq 1/1 Running 0 2m45s 10.4.0.9 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-86c57db685-lq6l7 1/1 Running 0 2m45s 10.4.0.11 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-86c57db685-xn7fn 1/1 Running 0 2m45s 10.4.0.10 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
</code></pre>
<p>Now suppose you need to change some environment variables inside your application using a <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">ConfigMap</a>. To apply this change you can just use <code>rollout</code>. It will <code>restart</code> your application and pick up the additional data from the <code>ConfigMap</code>.</p>
<pre><code>$ kubectl rollout restart deployment nginx
deployment.apps/nginx restarted
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-6c98778485-2k98b 1/1 Running 0 6s 10.4.0.13 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-6c98778485-96qx7 1/1 Running 0 6s 10.4.1.7 gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-6c98778485-qb89l 1/1 Running 0 6s 10.4.0.12 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-6c98778485-qqs97 1/1 Running 0 4s 10.4.1.8 gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-6c98778485-skbwv 1/1 Running 0 4s 10.4.0.14 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-86c57db685-2x8tj 0/1 Terminating 0 4m38s 10.4.1.5 gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-86c57db685-6lpfg 0/1 Terminating 0 4m38s <none> gke-cluster-1-default-pool-faec7b51-6kc3 <none> <none>
nginx-86c57db685-8lvqq 0/1 Terminating 0 4m38s 10.4.0.9 gke-cluster-1-default-pool-faec7b51-x07n <none> <none>
nginx-86c57db685-xn7fn   0/1     Terminating   0          4m38s   10.4.0.10   gke-cluster-1-default-pool-faec7b51-x07n   <none>           <none>
</code></pre>
<p><strong>Draining node to perform node operations</strong></p>
<p>Another example is when you need to do something with your VMs. You can do it by <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="nofollow noreferrer">draining the node</a>.</p>
<blockquote>
<p>You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.). Safe evictions allow the pod's containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.</p>
</blockquote>
<p>So it will reschedule all pods from this node to other nodes.</p>
<p><strong>Restarting Cluster</strong></p>
<p>Keep in mind that <code>GKE</code> nodes are managed by Google and you cannot simply restart a single machine, as it belongs to a <a href="https://cloud.google.com/compute/docs/instance-groups" rel="nofollow noreferrer">Managed instance group</a>.
You can SSH to each node and change some settings, but when you scale the pool to 0 and back up, you will get new machines that match your requirements, with new <code>ExternalIP</code>s.</p>
<pre><code>$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-cluster-1-default-pool-faec7b51-6kc3 Ready <none> 3d1h v1.17.14-gke.1600 10.128.0.25 34.XX.176.56 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
gke-cluster-1-default-pool-faec7b51-x07n Ready <none> 3d1h v1.17.14-gke.1600 10.128.0.24 23.XXX.50.249 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
$ gcloud container clusters resize cluster-1 --node-pool default-pool \
> --num-nodes 0 \
> --zone us-central1-c
Pool [default-pool] for [cluster-1] will be resized to 0.
$ kubectl get nodes -o wide
No resources found
$ gcloud container clusters resize cluster-1 --node-pool default-pool --num-nodes 2 --zone us-central1-c
Pool [default-pool] for [cluster-1] will be resized to 2.
Do you want to continue (Y/n)? y
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-cluster-1-default-pool-faec7b51-n5hm Ready <none> 68s v1.17.14-gke.1600 10.128.0.26 23.XXX.50.249 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
gke-cluster-1-default-pool-faec7b51-xx01 Ready <none> 74s v1.17.14-gke.1600 10.128.0.27 35.XXX.135.41 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
</code></pre>
<h2>Conclusion</h2>
<p>When you are using <code>GKE</code> you are using pre-defined nodes, managed by Google, and those nodes are upgraded automatically (security patches, etc.). Because of that, changing node capacity is easy.</p>
<p>When you <code>pack</code> your application into a container and use it in a <code>Deployment</code>, your application will be handled by the <code>Deployment Controller</code>, which will keep the desired state at all times.</p>
<p>As mentioned in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service Documentation</a>:</p>
<blockquote>
<p>In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them</p>
</blockquote>
<p>The Service will still be visible in your cluster even if you scale the cluster to 0 nodes, as it is an abstraction. You don't have to restart it. However, if you change some static Service configuration (like the port), you will need to recreate the Service with the new configuration.</p>
<h2>Useful links</h2>
<ul>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool" rel="nofollow noreferrer">Migrating workloads to different machine types</a></li>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair" rel="nofollow noreferrer">Auto-repairing nodes</a></li>
</ul>
|
<p>I want to deploy the starter image on my kubernetes cluster. </p>
<p>I have three raspberries in a cluster with </p>
<pre><code>> kubectl version -o json ~
{
"clientVersion": {
"major": "1",
"minor": "18",
"gitVersion": "v1.18.2",
"gitCommit": "52c56ce7a8272c798dbc29846288d7cd9fbae032",
"gitTreeState": "clean",
"buildDate": "2020-04-16T11:56:40Z",
"goVersion": "go1.13.9",
"compiler": "gc",
"platform": "linux/arm"
},
"serverVersion": {
"major": "1",
"minor": "18",
"gitVersion": "v1.18.2",
"gitCommit": "52c56ce7a8272c798dbc29846288d7cd9fbae032",
"gitTreeState": "clean",
"buildDate": "2020-04-16T11:48:36Z",
"goVersion": "go1.13.9",
"compiler": "gc",
"platform": "linux/arm"
}
}
> docker -v ~
Docker version 19.03.8, build afacb8b
> kubectl get node ~
NAME STATUS ROLES AGE VERSION
master-pi4 Ready master 18h v1.18.2
node1-pi4 Ready <none> 17h v1.18.2
node2-pi3 Ready <none> 17h v1.18.2
</code></pre>
<p>To try it out I want to deploy the simple image but I get the error <code>CrashLoopBackOff</code></p>
<pre><code>> kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
> kubectl get pods ~
NAME READY STATUS RESTARTS AGE
hello-node-7bf657c596-wc5r4 0/1 CrashLoopBackOff 7 15m
</code></pre>
<p>The describe is also cryptic to me </p>
<pre><code>kubectl describe pod hello-node ~
Name: hello-node-7bf657c596-wc5r4
Namespace: default
Priority: 0
Node: node1-pi4/192.168.188.11
Start Time: Wed, 13 May 2020 15:02:10 +0200
Labels: app=hello-node
pod-template-hash=7bf657c596
Annotations: <none>
Status: Running
IP: 10.32.0.4
IPs:
IP: 10.32.0.4
Controlled By: ReplicaSet/hello-node-7bf657c596
Containers:
echoserver:
Container ID: docker://841beb3a675963ecb40569439e0575a29c5b9f48aaa967da8c011faeafd96acc
Image: k8s.gcr.io/echoserver:1.4
Image ID: docker-pullable://k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 13 May 2020 15:18:03 +0200
Finished: Wed, 13 May 2020 15:18:03 +0200
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wvbzk (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-wvbzk:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wvbzk
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/hello-node-7bf657c596-wc5r4 to node1-pi4
Normal Pulled 15m (x5 over 16m) kubelet, node1-pi4 Container image "k8s.gcr.io/echoserver:1.4" already present on machine
Normal Created 15m (x5 over 16m) kubelet, node1-pi4 Created container echoserver
Normal Started 15m (x5 over 16m) kubelet, node1-pi4 Started container echoserver
Warning BackOff 112s (x70 over 16m) kubelet, node1-pi4 Back-off restarting failed container
</code></pre>
<p>What I am I missing?</p>
|
<p>That's probably because this image is not compatible with ARM architecture.</p>
<p>You should instead use this image <code>k8s.gcr.io/echoserver-arm:1.8</code></p>
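<p>As a quick sketch, reusing the exact commands from your question and only swapping the image:</p>
<pre><code>$ kubectl delete deployment hello-node
$ kubectl create deployment hello-node --image=k8s.gcr.io/echoserver-arm:1.8
$ kubectl get pods
</code></pre>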
|
<p>I understand that Kubernetes CronJobs create pods that run on the schedule specified inside the CronJob. However, the retention policy seems arbitrary and I don't see a way where I can retain failed/successful pods for a certain period of time.</p>
|
<p>I am not sure what exactly you are asking here.</p>
<p>A CronJob does not create pods directly. It creates Jobs (which it also manages), and those Jobs create the pods.
As per the Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs Documentation</a>, if the Jobs are managed directly by a higher-level controller such as a CronJob, the Jobs can be cleaned up by the CronJob based on the specified capacity-based cleanup policy. In short, pods and Jobs will not be deleted until you remove the CronJob. You will be able to check logs from the Pods/Jobs/CronJob; just use <code>kubectl describe</code>.</p>
<p>By default, a CronJob keeps a history of 3 successful Jobs and only 1 failed Job. You can change these limits in the CronJob spec with the parameters:</p>
<pre><code>spec:
successfulJobsHistoryLimit: 10
failedJobsHistoryLimit: 0
</code></pre>
<p>0 means that the CronJob will not keep any history of failed Jobs <br/>
10 means that the CronJob will keep a history of 10 succeeded Jobs</p>
<p>You will not be able to retain a pod from a failed Job because, when a Job fails, it will be restarted until it succeeds or reaches the <code>backoffLimit</code> given in the spec.</p>
<p>Another option you have is to suspend the CronJob:</p>
<pre><code>kubctl patch cronjob <name_of_cronjob> -p '{"spec:"{"suspend":true}}'
</code></pre>
<p>If the value of <code>suspend</code> is true, the CronJob will not create any new Jobs or pods. You will still have access to the completed pods and Jobs.</p>
<p>If none of the above helped you, could you please give more information about what exactly you expect? <br/>
Your CronJob spec would be helpful.</p>
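<p>For reference, a minimal sketch of a CronJob that keeps 10 successful Jobs and no failed Jobs (the name, schedule and busybox command are just placeholders):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 10
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      backoffLimit: 2
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: job
            image: busybox
            command: ["sh", "-c", "echo hello from the cron job"]
</code></pre>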
|
<p>I have installed k3s on a cluster of raspberry pi's.</p>
<pre><code>pi@pikey:~ $ sudo kubectl get node
NAME STATUS ROLES AGE VERSION
pikey Ready master 4d23h v1.14.6-k3s.1
pinode-1 Ready worker 4d23h v1.14.6-k3s.1
pinode-2 Ready worker 4d23h v1.14.6-k3s.1
pinode-3 Ready worker 4d23h v1.14.6-k3s.1
</code></pre>
<p>I'm initially working on localhost (pikey) only.</p>
<p>I've got a docker image that I tagged and pushed into crictl</p>
<pre><code>pi@pikey:~ $ sudo crictl pull localhost:5000/pilab/node-intro-img
Image is up to date for sha256:7a2c45e77748e6b2282210d7d54b12f0cb25c4b224c222149d7a66653512f543
pi@pikey:~ $ sudo kubectl delete deployment node-intro
deployment.extensions "node-intro" deleted
pi@pikey:~ $ sudo crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/coredns/coredns 1.3.0 6d816a3a1d703 11.5MB
docker.io/library/traefik 1.7.12 a0fc65eddfcc8 19.1MB
docker.io/rancher/klipper-helm v0.1.5 d4a37f6d19104 25.4MB
docker.io/rancher/klipper-lb v0.1.1 36563d1beb5e2 2.58MB
k8s.gcr.io/pause 3.1 e11a8cbeda868 224kB
localhost:5000/pilab/node-intro-img latest 7a2c45e77748e 320MB
</code></pre>
<p>Now if I try and deploy</p>
<pre><code>sudo kubectl create deployment node-intro --image=localhost:5000/pilab/node-intro-img
</code></pre>
<p>I get </p>
<pre><code>pi@pikey:~ $ sudo kubectl get pods
NAME READY STATUS RESTARTS AGE
node-intro-567c59c8c7-9p5c5 0/1 ImagePullBackOff 0 101s
pi@pikey:~ $ sudo kubectl get pods
NAME READY STATUS RESTARTS AGE
node-intro-567c59c8c7-9p5c5 0/1 ErrImagePull 0 104s
</code></pre>
<p>If crictl can pull an image, why does k3s fail to deploy it?</p>
|
<p>You're using <code>localhost:5000/...</code> as the image location.</p>
<p>However, are you sure that the image is present on <strong>each node</strong>? Maybe the node the pod is being scheduled on simply does not host the image.<br />
Either use the IP address of the node hosting the registry, or make sure the image is present on each node.</p>
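<p>For example (the IP below is hypothetical; replace it with the address of the node that actually hosts your registry):</p>
<pre><code>$ sudo kubectl create deployment node-intro --image=192.168.1.50:5000/pilab/node-intro-img
</code></pre>
<p>Also note that every node's container runtime must be able to pull from that registry over the network; having the image cached on one node via <code>crictl</code> is not enough.</p>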
|
<p>When my Kubernetes pods get terminated (due to restarting or completely stopping), I would like to invoke some kind of a lifecycle hook that will notify me of the termination through email. Something like the following:</p>
<pre><code> onTermination:
args:
- '/bin/sh'
- '-c'
- |
<smtp login and send email script>
</code></pre>
<p>How can I get an email when my pod is restarted or stopped in Kubernetes?</p>
|
<p>The only way I know of to get an email when there is something wrong with a Cluster/Node/Pod is to use monitoring tools.</p>
<p>You can use paid software with a free trial like <a href="https://sysdig.com/" rel="nofollow noreferrer">sysdig</a> or <a href="https://www.datadoghq.com/" rel="nofollow noreferrer">datadog</a>.
If you want to work with raw Kubernetes metrics, you can use <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> with <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> (Alertmanager) and e.g. <a href="https://grafana.com/" rel="nofollow noreferrer">Grafana</a> for dashboards (a minimal alert-rule sketch follows after the steps below).</p>
<p>Here you have some steps which may be useful.</p>
<ol>
<li>Install kube-state-metrics.</li>
<li>Install Prometheus</li>
<li>Install Grafana</li>
<li>Connect to Prometheus (kubectl Port Forwarding or expose Prometheus as a Service)</li>
<li>Connect to Grafana (kubectl port forwarding)</li>
<li>In Grafana you have something like Alerting > Notification Channels. There you can define how notifications are sent (one of the options is email).</li>
<li>Create a dashboard with the desired metrics and add alerting to it.</li>
</ol>
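<p>As a minimal sketch of the Prometheus/Alertmanager side (the metric name is the standard one exposed by kube-state-metrics; the SMTP settings and addresses are placeholders you would replace):</p>
<pre><code># prometheus alert rule file: fire when any container restarts
groups:
- name: pod-alerts
  rules:
  - alert: PodRestarting
    expr: increase(kube_pod_container_status_restarts_total[5m]) > 0
    labels:
      severity: warning
    annotations:
      summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} restarted"
---
# alertmanager config: send every alert by email
route:
  receiver: email-me
receivers:
- name: email-me
  email_configs:
  - to: you@example.com
    from: alerts@example.com
    smarthost: smtp.example.com:587
    auth_username: alerts@example.com
    auth_password: change-me
</code></pre>
<p>The rule fires whenever any container restart counter increases, and the Alertmanager route sends every alert to the email receiver; in practice you would add grouping, repeat intervals and more specific routes.</p>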
<p>You can also check InfluxDB or Stackdriver as a data source.</p>
<p>Tutorials which may help you <br/>
<a href="https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/" rel="nofollow noreferrer">https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/</a> <br/>
<a href="https://itnext.io/kubernetes-monitoring-with-prometheus-in-15-minutes-8e54d1de2e13" rel="nofollow noreferrer">https://itnext.io/kubernetes-monitoring-with-prometheus-in-15-minutes-8e54d1de2e13</a></p>
|
<p>I have a requirement to add kubernetes Service with an ExternalName pointing to NLB(in a different AWS account).
I am using terraform to implement this.
I am not sure how to use NLB info external name section.
Can someone please help?</p>
<pre><code> resource "kubernetes_service" "test_svc" {
metadata {
name = "test"
namespace = var.environment
labels = {
app = "test"
}
}
spec {
type = "ExternalName"
**external_name =**
}
}
</code></pre>
|
<p>Usage of ExternalName is as follows:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: prod
spec:
type: ExternalName
externalName: my.database.example.com
</code></pre>
<p>Try to put the NLB's DNS name (the CNAME target) as the external name; in your Terraform resource that would be the value of the <code>external_name</code> argument.</p>
|
<p>Today, seemingly at random, minikube is taking very long to respond to commands via <code>kubectl</code>.</p>
<p>And occasionally even:</p>
<pre><code>kubectl get pods
Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<p>How can I diagnose this?</p>
<p>Some logs from <code>minikube logs</code>:</p>
<pre><code>==> kube-scheduler <==
I0527 14:16:55.809859 1 serving.go:319] Generated self-signed cert in-memory
W0527 14:16:56.256478 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0527 14:16:56.256856 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0527 14:16:56.257077 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0527 14:16:56.257189 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0527 14:16:56.257307 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0527 14:16:56.264875 1 server.go:142] Version: v1.14.1
I0527 14:16:56.265228 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0527 14:16:56.286959 1 authorization.go:47] Authorization is disabled
W0527 14:16:56.286982 1 authentication.go:55] Authentication is disabled
I0527 14:16:56.286995 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0527 14:16:56.287397 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
I0527 14:16:57.417028 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0527 14:16:57.524378 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0527 14:16:57.827438 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
E0527 14:17:10.865448 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0527 14:17:43.418910 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0527 14:18:01.447065 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
I0527 14:18:29.044544 1 leaderelection.go:263] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
E0527 14:18:38.999295 1 server.go:252] lost master
E0527 14:18:39.204637 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
lost lease
</code></pre>
<p><em>Update:
To work around this issue I just did a <code>minikube delete</code> and <code>minikube start</code>, and the performance issue was resolved.</em></p>
|
<p>As a solution has been found, I am posting this as a Community Wiki for future users.</p>
<p><strong>1)</strong> Debug issues with minikube by adding the <code>-v</code> flag and setting the verbosity level (0, 1, 2, 3, 7).</p>
<p>For example: <code>minikube start --v=1</code> sets the output to INFO level.<br/>
More detailed information <a href="https://github.com/kubernetes/minikube/blob/master/docs/debugging.md" rel="noreferrer">here</a></p>
<p><strong>2)</strong> Use logs command <code>minikube logs</code></p>
<p><strong>3)</strong> Because Minikube runs inside a Virtual Machine, it is sometimes better to delete minikube and start it again (that is what helped in this case).</p>
<pre><code>minikube delete
minikube start
</code></pre>
<p><strong>4)</strong> It might get slow due to lack of resources.</p>
<p>By default, Minikube uses 2048MB of memory and 2 CPUs. More details about this can be found <a href="https://github.com/kubernetes/minikube/blob/232080ae0cbcf9cb9a388eb76cc11cf6884e19c0/pkg/minikube/constants/constants.go#L97" rel="noreferrer">here</a>.
In addition, you can force Minikube to allocate more using the command:</p>
<pre><code>minikube start --cpus 4 --memory 8192
</code></pre>
|
<p>I have a K8s cluster that I would like to publish to the internet. What is the best practice to publish K8s to the internet?</p>
<p>K8s is installed on 3 VPS servers and it is in a private network. </p>
<p>I have the following idea: </p>
<p><a href="https://i.stack.imgur.com/dccMT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dccMT.png" alt="enter image description here"></a></p>
<p>With the proxy. Requests will pass through proxy and will be forwarded to K8s.
<a href="https://i.stack.imgur.com/MvVEa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MvVEa.png" alt="enter image description here"></a></p>
<p>Or directly to K8s.</p>
|
<p>One way is the "OpenShift router" style - you have one (or more) nodes with a public IP and you join them to the K8s cluster with some distinct label. Then you launch an ingress controller like Nginx on those nodes using a NodeSelector and hostNetwork=true. In this scheme the Nginx ingress will be reachable from the Internet and is able to publish your services.</p>
<p>Another way is to have the same kind of proxy node with a public IP (without joining it to K8s) and make it load-balance requests to all K8s nodes. You can then publish your services using NodePort and configure the proxy for each service. Or you can launch the Nginx ingress as a NodePort service in K8s and forward requests from the proxy node to the Nginx ingress NodePort. </p>
<p>I would prefer the first option because it is simpler to configure; a rough sketch of it follows below. </p>
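<p>A rough sketch of the first option (the label, namespace and image tag are illustrative; adjust them to the ingress controller version you actually deploy): label the public node and pin the controller to it with hostNetwork enabled.</p>
<pre><code>$ kubectl label node <public-node-name> role=edge
</code></pre>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      hostNetwork: true        # bind directly to the node's ports 80/443
      nodeSelector:
        role: edge             # run only on the node(s) with a public IP
      containers:
      - name: controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
        args:
        - /nginx-ingress-controller
</code></pre>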
|
<p>I have a GKE cluster and I would like to connect some, but not all (!), pods and services to a managed Postgresql Cloud DB running in the same VPC.</p>
<p>Of course, I could just go for it (<a href="https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine</a>), but I would like to make sure that only those pods and services can connect to the Postgresql DB, that should do so.</p>
<p>I thought of creating a separate node pool in my GKE cluster (<a href="https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools</a>), where only those pods and services run that should be able to connect to the PostgreSQL DB, and then allow only those pods and services to connect to the DB by telling the DB which IPs to accept. However, it seems that I cannot set dedicated IPs at the node pool level, only at the cluster level.</p>
<p>Do you have an idea how I can make such a restriction?</p>
|
<p>When you create your node pool, create it with a service account that does not have permission to access Cloud SQL instances.</p>
<p>Then, leverage <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">Workload Identity</a> to attach a specific service account to some of your pods, and grant that service account permission to access the Cloud SQL instance.</p>
<hr />
<p>You asked how to know the IPs so you can restrict access to Cloud SQL. That is the wrong (or a legacy) assumption. Google always says "Don't trust the network (and so, the IPs)". Basing your security on identity (the service account of the node pool, and of the pod through Workload Identity) is a far better option.</p>
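<p>A minimal sketch of the Workload Identity part (project, namespace and account names are placeholders): bind a Kubernetes service account to a Google service account that has the Cloud SQL Client role, and run only the DB-facing pods with it.</p>
<pre><code># allow the Kubernetes SA to impersonate the Google SA
gcloud iam service-accounts add-iam-policy-binding \
  sql-client@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[default/sql-access]"
</code></pre>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: sql-access
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: sql-client@my-project.iam.gserviceaccount.com
---
# in the pod spec of the deployments that may reach Cloud SQL:
#   serviceAccountName: sql-access
</code></pre>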
|
<p>I'm using Kubernetes that is bundled with Docker-for-Mac. I'm trying to configure an Ingress that routes http requests starting with /v1/ to my backend service and /ui/ requests to my Angular app.</p>
<p>My issue seems to be that the HTTP method of the requests is changed by the ingress (NGINX) from a POST to a GET.</p>
<p>I have tried various rewrite rules, but to no avail. I even switched from Docker-for-Mac to Minikube, but the result is the same. </p>
<p>If I use a simple ingress with no paths (just the default backend) then the service is getting the correct HTTP method.
The ingress below works: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
spec:
backend:
serviceName: backend
servicePort: 8080
</code></pre>
<p>But this ingress does not:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /v1
backend:
serviceName: backend
servicePort: 8080
- path: /ui
backend:
serviceName: webui
servicePort: 80
</code></pre>
<p>When I debug the "backend" service I see that the HTTP Request is a GET instead of a POST.</p>
<p>I read somewhere that NGINX rewrites issue a 308 (permanent) redirect and the HTTP method is changed from a POST to a GET, but if that is the case how can I configure my ingress to support different paths for different services that require POST calls?</p>
|
<p>I found the solution to my problem. When I add <code>host:</code> to the configuration then the http method is not changed. Here is my current ingress yaml (the rewrite and regex are used to omit sending the /v1 as part of the backend URL)</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: localhost
http:
paths:
- path: /v1(/|$)(.*)
backend:
serviceName: gateway
servicePort: 8080
</code></pre>
|
<p>I am using Kubernetes. I have an Ingress service which talks to my container service. We have exposed a web API which works fine, but we keep getting 502 Bad Gateway errors. I am new to Kubernetes and I have no clue how to go about debugging this issue. The server is a Node.js server connected to a database. Is there anything wrong with the configuration?</p>
<p>My Deployment file--</p>
<hr>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-pod
spec:
replicas: 1
template:
metadata:
labels:
app: my-pod
spec:
containers:
- name: my-pod
image: my-image
ports:
- name: "http"
containerPort: 8086
resources:
limits:
memory: 2048Mi
cpu: 1020m
---
apiVersion: v1
kind: Service
metadata:
name: my-pod-serv
spec:
ports:
- port: 80
targetPort: "http"
selector:
app: my-pod
</code></pre>
<hr>
<p>My Ingress Service:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: gateway
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: abc.test.com
http:
paths:
- path: /abc
backend:
serviceName: my-pod-serv
servicePort: 80
</code></pre>
|
<p><strong>In Your case:</strong></p>
<p>I think you get this 502 gateway error because you don't have the Ingress controller configured correctly.
Please try to do it with an installed Ingress controller as in the example below. It will do everything automatically.</p>
<p><strong>Nginx Ingress step by step:</strong></p>
<p><strong>1)</strong> <a href="https://helm.sh/docs/using_helm/#installing-helm" rel="nofollow noreferrer">Install helm</a></p>
<p><strong>2)</strong> Install nginx controller using helm</p>
<pre><code>$ helm install stable/nginx-ingress --name nginx-ingress
</code></pre>
<p>It will create 2 services. You can get their details via</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.39.240.1 <none> 443/TCP 29d
nginx-ingress-controller LoadBalancer 10.39.243.140 35.X.X.15 80:32324/TCP,443:31425/TCP 19m
nginx-ingress-default-backend ClusterIP 10.39.252.175 <none> 80/TCP 19m
</code></pre>
<p><a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx-ingress-controller</a> - in short, it handles the requests that arrive at the Ingress and directs them to the right service</p>
<p><a href="https://kubernetes.github.io/ingress-nginx/user-guide/default-backend/" rel="nofollow noreferrer">nginx-ingress-default-backend</a> - in short, default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand</p>
<p><strong>3)</strong> Create 2 deployments (or use yours)</p>
<pre><code>$ kubectl run my-pod --image=nginx
deployment.apps/my-pod created
$ kubectl run nginx1 --image=nginx
deployment.apps/nginx1 created
</code></pre>
<p><strong>4)</strong> Connect to one of the pods</p>
<pre><code>$ kubectl exec -ti my-pod-675799d7b-95gph bash
</code></pre>
<p>And add an additional line to the served page so we can see which pod we are connecting to later.</p>
<pre><code>$ echo "HELLO THIS IS INGRESS TEST" >> /usr/share/nginx/html/index.html
$ exit
</code></pre>
<p><strong>5)</strong> Expose deployments.</p>
<pre><code>$ kubectl expose deploy nginx1 --port 80
service/nginx1 exposed
$ kubectl expose deploy my-pod --port 80
service/my-pod exposed
</code></pre>
<p>This will automatically create service and will looks like</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: my-pod
name: my-pod
selfLink: /api/v1/namespaces/default/services/my-pod
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: my-pod
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p><strong>6)</strong> Now it's time to create Ingress.yaml and deploy it. Each rule in the Ingress needs to be specified. Here I have 2 services. Each service specification starts with <code>-host</code> under the <code>rules</code> parameter.</p>
<p><strong>Ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: two-svc-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: my.pod.svc
http:
paths:
- path: /pod
backend:
serviceName: my-pod
servicePort: 80
- host: nginx.test.svc
http:
paths:
- path: /abc
backend:
serviceName: nginx1
servicePort: 80
$ kubectl apply -f Ingress.yaml
ingress.extensions/two-svc-ingress created
</code></pre>
<p><strong>7)</strong> You can check Ingress and hosts</p>
<pre><code>$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
two-svc-ingress my.pod.svc,nginx.test.svc 35.228.230.6 80 57m
</code></pre>
<p><strong>8)</strong> Explanation of why I installed the Ingress controller.</p>
<p>Connect to the ingress controller pod</p>
<pre><code>$ kubectl exec -ti nginx-ingress-controller-76bf4c745c-prp8h bash
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ cat /etc/nginx/nginx.conf
</code></pre>
<p>Because I installed the nginx ingress earlier, after deploying Ingress.yaml the nginx-ingress-controller noticed the change and automatically added the necessary configuration.
In this file you should be able to find the whole configuration for both services. I will not copy the configuration here, only the headers.</p>
<blockquote>
<p>start server my.pod.svc</p>
<p>start server nginx.test.svc</p>
</blockquote>
<pre><code>www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ exit
</code></pre>
<p><strong>9)</strong> Test</p>
<pre><code>$ kubectl get svc    # to get your nginx-ingress-controller external IP
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/
default backend - 404
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/pod
<!DOCTYPE html>
...
</html>
HELLO THIS IS INGRESS TEST
</code></pre>
<p>Please keep in mind the Ingress needs to be in the same namespace as the services. If you have services in many namespaces, you need to create an Ingress for each namespace.</p>
|
<p>Our on-premise Kubernetes/Kubespray cluster has suddenly stopped routing traffic between the nginx-ingress and node port services. All external requests to the ingress endpoint return a "504 - gateway timeout" error.</p>
<p>How do I diagnose what has broken? </p>
<p>I've confirmed that the containers/pods are running, the node application has started and if I exec into the pod then I can run a local curl command and get a response from the app.</p>
<p>I've checked the logs on the ingress pods and traffic is arriving and nginx is trying to forward the traffic on to the service endpoint/node port but it is reporting an error.</p>
<p>I've also tried to curl directly to the node via the node port but I get no response.</p>
<p>I've looked at the ipvs configuration and the settings look valid (e.g. there are rules for the node to forward traffic on the node port the service endpoint address/port)</p>
|
<p>We couldn't resolve this issue and, in the end, the only workaround was to uninstall and reinstall the cluster. </p>
|
<p>I’m proposing a project to my school supervisor, which is to improve our current server to be more fault tolerant, easier to scale and able to handle high traffic.
I have a plan to build a distributed system, starting by deploying our server to different PCs and implementing caching, load balancing, etc.
But I want to know whether Kubernetes can already satisfy my objective. What are the tradeoffs between using Kubernetes and building my own distributed system to deploy applications?</p>
<p>Our applications are built with Django and are mostly used by students, for example a course planner or search/recommendation systems.</p>
|
<p>You didn't give any details of your app, so I'll provide some generic thoughts. In short, Kubernetes gives you scheduling, load balancing and (sort of) high availability for free. You still have to plan a proper application architecture, but Kubernetes gives you a good starting point where you can say "ok, I want this number of app containers to run on this number of nodes". It also gives you internal load balancing and DNS resolution.</p>
<p>Of course, the tradeoff is that you have to learn Kubernetes and Docker to a certain point. But I wouldn't say it's too hard for an enthusiast.</p>
|
<p>How to restrict termination of k8s clusters within a project, to certain users:</p>
<ol>
<li>dev team creates project-dev k8s cluster</li>
<li>qa team creates project-qa k8s cluster</li>
<li>prod team creates project-prod k8s cluster</li>
</ol>
<p>How can we prevent dev, qa, prod team members from deleting clusters which they didn't create.</p>
<p>How should we set up RBAC for a Google Cloud project?</p>
|
<p>You need to create a project for each team, and each team creates its cluster in its own project. If you don't grant a team any roles in the other projects, they won't be able to touch the clusters there.</p>
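<p>As a sketch of the IAM side (project and group names are placeholders), you grant each team Kubernetes Engine admin rights only in its own project, for example:</p>
<pre><code># dev team can manage clusters only inside the dev project
gcloud projects add-iam-policy-binding project-dev \
  --member=group:dev-team@example.com \
  --role=roles/container.admin
</code></pre>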
|
<p>I'm trying to deploy a simple API into an AKS and want to show it off to the internet using an ingress controller. I want to know if I'm using cloud do I need to install minikube?</p>
|
<p>Minikube is designed for local Kubernetes development and testing. It supports one node by default. So it is not related to your AKS setup, i.e. you don't need minikube for AKS.</p>
<p>To be able to demo your setup on the Internet, you can set up AKS, but be mindful of security and make sure that you are not exposing your entire cluster on the Internet.</p>
|
<p>I have an EKS cluster created with <code>eksctl</code>, with two unmanaged nodegroups. <code>ingress-nginx</code> and <code>cluster-autoscaler</code> are deployed and working. <code>ingress-nginx</code> controller has created a Classic LoadBalancer upon deployment.</p>
<p>When either NodeGroup scales up, its instances are added to the LB. (question: <em>what</em> takes this action? It's not the ASG itself, so I assume it's <code>ingress-nginx</code> doing this). Additionally, I see that all instances (from both ASGs) are responding as "healthy" to the TCP healthcheck on the LoadBalancer.</p>
<p>The problem: I need to only whitelist <strong>one</strong> of the node groups as being eligible for load balancing. The other ASG (and any future ASGs, by default), are batch workers which do not host any internet services, and no web service pods will be ever scheduled on them.</p>
<p>Is this possible to achieve in EKS, and how?</p>
|
<p>You are right, that's the default behavior of Kubernetes <code>LoadBalancer</code> services like the Nginx Ingress. <code>LoadBalancer</code> under the hood uses a <code>NodePort</code> service, which is exposed on ALL cluster nodes regardless of which EKS node group they belong to.</p>
<p>What you should do is add <code>externalTrafficPolicy: Local</code> to your Nginx Ingress <code>LoadBalancer</code>, like this (this is just an example):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: example-service
spec:
selector:
app: example
ports:
- port: 8765
targetPort: 9376
externalTrafficPolicy: Local
type: LoadBalancer
</code></pre>
<p>Doing that will cause the AWS Load Balancer to target only those nodes that actually run Nginx Ingress pods. After that you may want to use a Nginx Ingress <code>nodeSelector</code> so that it runs only on the desired EKS node group.</p>
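<p>A rough sketch of that last step (the label name and value are hypothetical; use whatever label your web node group's instances already carry, or add one yourself):</p>
<pre><code># label the nodes of the web-serving node group (or set this label via the ASG/eksctl)
kubectl label nodes <web-node-name> nodegroup=web

# then, in the Nginx Ingress controller Deployment/DaemonSet pod spec:
#   nodeSelector:
#     nodegroup: web
</code></pre>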
|
<p>I am a newbie at Kubernetes and Cloud Run deployment using YAML files, so pardon me if this question is very basic.</p>
<p><strong>Problem</strong>: I have files that are stored in cloud storage. I want to download these files in the local mount before the container spins up my docker entrypoint.</p>
<blockquote>
<p>It is my understanding that KNative does not support volumes or
persistentVolumeClaims.</p>
</blockquote>
<p>Please correct me if this understanding is wrong.</p>
<p>Let me explain it better using an image below,
<a href="https://i.stack.imgur.com/1KXgS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1KXgS.png" alt="enter image description here" /></a></p>
<p>Inside the Kubernetes pod, I have divided the container startup into 3 section.</p>
<ol>
<li>Prehook to download files from GCS (Google Cloud Storage) -> This will copy files from google storage to local mount. Possible using some kind of init container with cloudsdk image and then gsutils to copy the files down.</li>
<li>Local Mount Filesystem -> The prehook will write into this mount. The container having the "container Image" will also have access to this mount.</li>
<li>container Image -> This is my main container image running in the container.</li>
</ol>
<p>I am looking for a Knative Serving solution that will work on Cloud Run. How do I solve this?</p>
<p>Additionally, is it possible to have a YAML file without Knative Serving for creating a Cloud Run service?</p>
|
<p>The Knative contract, as you said, doesn't allow you to mount or claim a volume. So you can't achieve this (for now, on managed Cloud Run).</p>
<p>On the other hand, Pods allow this, but Knative uses a special kind of "Pod": no persistent volume, and you can't define a list of containers; it's a pod with only one container (+ the mesh sidecar (most of the time Istio) injected when you deploy).</p>
<p>For your additional question: Cloud Run implements the Knative API, and thus you need to provide a Knative Serving YAML file to configure your service.</p>
<hr />
<p>If you want to write files, you can do it in the <code>/tmp</code> in-memory partition. So, when your container starts, download the files and store them there. However, if you update the files and need to push the update, you have to push them manually to Cloud Storage.</p>
<p>In addition, the other running instances that have already downloaded the files and stored them in their <code>/tmp</code> directory won't see the file change in Cloud Storage; only new instances will.</p>
<p>UPDATE 1:</p>
<p>If you want to download the files "before" the container start, you have 2 solutions:</p>
<ol>
<li>"Before" is not possible, you can do this at startup:</li>
</ol>
<ul>
<li>The container start</li>
<li>Download the files, initiate what you need</li>
<li>Serve traffic with your webserver.</li>
</ul>
<p>The previous solution has 2 issues</p>
<ul>
<li>The service cold start is impacted by the download before serving</li>
<li>The max size of file is limited by the max memory size of the instance (<code>/tmp</code> directory is an in-memory file system. If you have a config of 2Gb, the size is max 2Gb minus the memory footprint of your app)</li>
</ul>
<ol start="2">
<li>The second solution is to build your container with the file already present in the container image.</li>
</ol>
<ul>
<li>No cold start impact</li>
<li>No memory limitation</li>
<li>It reduces your agility: you need to build and deploy a new revision for every file change.</li>
</ul>
<p>UPDATE 2:</p>
<p>For solution 1, it's not a Knative feature, it's in your code! I don't know your language and framework, but at startup you need to use the Google Cloud Storage client library to download, from your code, the files that you need.</p>
<p><em>Show me your server startup and I could try to provide you with an example!</em></p>
<p>For solution 2, the files aren't in your git repo but are still in your Cloud Storage. Your Dockerfile can look like this:</p>
<pre><code>FROM google/cloud-sdk:alpine as gcloud
WORKDIR /app
# IF you aren't building your image on Cloud Build, you need to be authenticated
#ARG KEY_FILE_CONTENT
#RUN echo $KEY_FILE_CONTENT | gcloud auth activate-service-account --key-file=-
# Get the file(s)
RUN gsutil cp gs://my-bucket/name.csv .
FROM golang:1.15-buster as builder
WORKDIR /app
COPY go.* ./
....
RUN go build -v -o server
FROM debian:buster-slim
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /app/server
COPY --from=gcloud /app/name.csv /app/name.csv
# Run the web service on container startup.
CMD ["/app/server"]
</code></pre>
<p>You can also imagine downloading the files before the Docker build command and simply performing a COPY in the Dockerfile. I don't know your container creation pipeline, but these are ideas that you can reuse!</p>
|
<p>I have declared the following Variable group in Azure DevOps.</p>
<pre><code>KUBERNETES_CLUSTER = myAksCluster
RESOURCE_GROUP = myResourceGroup
</code></pre>
<p><a href="https://i.stack.imgur.com/KHMWp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KHMWp.png" alt="enter image description here"></a></p>
<p>In the Release pipeline process, I want to create a Public Static IP address in order to assigned to some application in <code>myAksCluster</code>.</p>
<p>Via Azure cli commands I am creating the Static IP address of this way via <strong>az cli bash small script</strong>. We can assume here that I already have created the kubernetes cluster</p>
<pre><code>#!/usr/bin/env bash
KUBERNETES_CLUSTER=myAksCluster
RESOURCE_GROUP=myResourceGroup
# Getting the name of the node resource group. -o tsv is to get the value without " "
NODE_RESOURCE_GROUP=$(az aks show --resource-group $RESOURCE_GROUP --name $KUBERNETES_CLUSTER --query nodeResourceGroup -o tsv)
echo "Node Resource Group:" $NODE_RESOURCE_GROUP
# Creating Public Static IP
PUBLIC_IP_NAME=DevelopmentStaticIp
az network public-ip create --resource-group $NODE_RESOURCE_GROUP --name $PUBLIC_IP_NAME --allocation-method static
# Query the ip
PUBLIC_IP_ADDRESS=$(az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress --output tsv)
# Output
# I want to use the value of PUBLIC_IP_ADDRESS variable in Azure DevOps variable groups of the release pipeline
</code></pre>
<p>If I execute <code>az network public-ip list ...</code> command I get my public Ip address.</p>
<pre><code>⟩ az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress -o tsv
10.x.x.x
</code></pre>
<p>I want to use that <code>PUBLIC_IP_ADDRESS</code> value to assign it to a new Azure DevOps variable groups in my release, but doing all this process <strong>from a CLI task or Azure Cli task like part of the release pipeline process.</strong></p>
<p>The idea is when my previous <strong>az cli bash small script</strong> to be executed in the release pipeline, I have a new variable in my ReleaseDev azure variable groups of this way:</p>
<p><a href="https://i.stack.imgur.com/Dfiq0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dfiq0.png" alt="enter image description here"></a></p>
<p>And after that, I can use <code>PUBLIC_STATIC_IP_ADDRESS</code> which will be an azure devops variable like arguments to the application which will use that IP value inside my kubernetes cluster.</p>
<p>I have been checking some information and maybe I could create an Azure CLI task in my release pipeline to execute the <strong>az cli bash small script</strong> which is creating the public static ip address of this way:
<a href="https://i.stack.imgur.com/187Po.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/187Po.png" alt="enter image description here"></a></p>
<p>But is finally at the end when I get the public ip address value, that I don't know how to create from this Azure CLI task (my script) the variable <code>PUBLIC_STATIC_IP_ADDRESS</code> with its respective value that I got here.</p>
<p>Can I use an Azure CLI task from the release pipeline to get this small workflow? I have been checking somethings like <a href="https://github.com/Microsoft/azure-pipelines-tasks/issues/9416#issuecomment-470791784" rel="nofollow noreferrer">this recommendation</a>
but is not clear for me</p>
<p>How can I create or update an Azure DevOps variable group with a value passed from my release pipeline?
Is the Azure CLI release pipeline task the proper task to do that?</p>
<p><strong>UPDATE</strong></p>
<p>I am following the approach suugested by <strong>Lu Mike</strong>, so I have created a Powershell task and executing the following script in Inline type/mode:</p>
<pre><code># Connect-AzAccount
Install-Module -Name Az -AllowClobber -Force
@{KUBERNETES_CLUSTER = "$KUBERNETES_CLUSTER"}
@{RESOURCE_GROUP = "$RESOURCE_GROUP"}
@{NODE_RESOURCE_GROUP="$(az aks show --resource-group $RESOURCE_GROUP --name $KUBERNETES_CLUSTER --query nodeResourceGroup -o tsv)"}
# echo "Node Resource Group:" $NODE_RESOURCE_GROUP
@{PUBLIC_IP_NAME="Zcrm365DevelopmentStaticIpAddress"}
az network public-ip create --resource-group $NODE_RESOURCE_GROUP --name $PUBLIC_IP_NAME --allocation-method static
@{PUBLIC_IP_ADDRESS="$(az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress --output tsv)"}
echo "##vso[task.setvaraible variable=ipAddress;]%PUBLIC_IP_ADDRESS%"
$orgUrl="https://dev.azure.com/my-org/"
$projectName = "ZCRM365"
##########################################################################
$personalToken = $PAT # "<your PAT>"
# I am creating a varaible environment inside the power shell task and reference it here.
##########################################################################
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)"))
$header = @{authorization = "Basic $token"}
$projectsUrl = "$($orgUrl)/$($projectName)/_apis/distributedtask/variablegroups/1?api-version=5.0-preview.1"
$requestBody = @{
"variables" = @{
"PUBLIC_STATIC_IP_ADDRESS" = @{
"value" = "$PUBLIC_IP_ADDRESS"
}
}
"@type" = "Vsts"
"name" = "ReleaseVariablesDev"
"description" = "Updated variable group"
} | ConvertTo-Json
Invoke-RestMethod -Uri $projectsUrl -Method Put -ContentType "application/json" -Headers $header -Body $requestBody
Invoke-RestMethod -Uri $projectsUrl -Method Put -ContentType "application/json" -Headers $header -Body $requestBody
</code></pre>
<hr>
<p><strong>IMPORTANT</strong></p>
<p>As you can see, in this process I am mixing <code>az cli</code> commands and the PowerShell language in this task. I am not sure if that is good.
By the way, I am using a Linux agent.</p>
<p>I had to include the <code>-Force</code> flag in the <code>Install-Module -Name Az -AllowClobber -Force</code> command to install the Azure PowerShell module.</p>
<hr>
<p>My output is the following:</p>
<pre><code>2019-07-19T06:01:29.6372873Z Name Value
2019-07-19T06:01:29.6373433Z ---- -----
2019-07-19T06:01:29.6373706Z KUBERNETES_CLUSTER
2019-07-19T06:01:29.6373856Z RESOURCE_GROUP
2019-07-19T06:01:38.0177665Z ERROR: az aks show: error: argument --resource-group/-g: expected one argument
2019-07-19T06:01:38.0469751Z usage: az aks show [-h] [--verbose] [--debug]
2019-07-19T06:01:38.0470669Z [--output {json,jsonc,table,tsv,yaml,none}]
2019-07-19T06:01:38.0471442Z [--query JMESPATH] --resource-group RESOURCE_GROUP_NAME
2019-07-19T06:01:38.0472050Z --name NAME [--subscription _SUBSCRIPTION]
2019-07-19T06:01:38.1381959Z NODE_RESOURCE_GROUP
2019-07-19T06:01:38.1382691Z PUBLIC_IP_NAME Zcrm365DevelopmentStaticIpAddress
2019-07-19T06:01:39.5094672Z ERROR: az network public-ip create: error: argument --resource-group/-g: expected one argument
2019-07-19T06:01:39.5231190Z usage: az network public-ip create [-h] [--verbose] [--debug]
2019-07-19T06:01:39.5232152Z [--output {json,jsonc,table,tsv,yaml,none}]
2019-07-19T06:01:39.5232671Z [--query JMESPATH] --resource-group
2019-07-19T06:01:39.5233234Z RESOURCE_GROUP_NAME --name NAME
2019-07-19T06:01:39.5233957Z [--location LOCATION]
2019-07-19T06:01:39.5234866Z [--tags [TAGS [TAGS ...]]]
2019-07-19T06:01:39.5235731Z [--allocation-method {Static,Dynamic}]
2019-07-19T06:01:39.5236428Z [--dns-name DNS_NAME]
2019-07-19T06:01:39.5236795Z [--idle-timeout IDLE_TIMEOUT]
2019-07-19T06:01:39.5237070Z [--reverse-fqdn REVERSE_FQDN]
2019-07-19T06:01:39.5240483Z [--version {IPv4,IPv6}]
2019-07-19T06:01:39.5250084Z [--sku {Basic,Standard}] [--zone {1,2,3}]
2019-07-19T06:01:39.5250439Z [--ip-tags IP_TAGS [IP_TAGS ...]]
2019-07-19T06:01:39.5251048Z [--public-ip-prefix PUBLIC_IP_PREFIX]
2019-07-19T06:01:39.5251594Z [--subscription _SUBSCRIPTION]
2019-07-19T06:01:40.4262896Z ERROR: az network public-ip list: error: argument --resource-group/-g: expected one argument
2019-07-19T06:01:40.4381683Z usage: az network public-ip list [-h] [--verbose] [--debug]
2019-07-19T06:01:40.4382086Z [--output {json,jsonc,table,tsv,yaml,none}]
2019-07-19T06:01:40.4382346Z [--query JMESPATH]
2019-07-19T06:01:40.4382668Z [--resource-group RESOURCE_GROUP_NAME]
2019-07-19T06:01:40.4382931Z [--subscription _SUBSCRIPTION]
2019-07-19T06:01:40.5103276Z PUBLIC_IP_ADDRESS
2019-07-19T06:01:40.5133644Z ##[error]Unable to process command '##vso[task.setvaraible variable=ipAddress;]%PUBLIC_IP_ADDRESS%' successfully. Please reference documentation (http://go.microsoft.com/fwlink/?LinkId=817296)
2019-07-19T06:01:40.5147351Z ##[error]##vso[task.setvaraible] is not a recognized command for Task command extension. Please reference documentation (http://go.microsoft.com/fwlink/?LinkId=817296)
</code></pre>
<p>And maybe that is why I have some problem to execute <code>az</code> commands</p>
<p>Power shell and its respective azure task are new for me, and I am not sure about how can I get along of this process.</p>
|
<p>You can try to invoke the REST API (<a href="https://learn.microsoft.com/en-us/rest/api/azure/devops/distributedtask/variablegroups/update?view=azure-devops-rest-5.0" rel="nofollow noreferrer">Variablegroups - Update</a>) to add or update the variable group from the script. Please refer to the following script.</p>
<pre><code>$orgUrl = "https://dev.azure.com/<your organization >"
$projectName = "<your project>"
$personalToken = "<your PAT>"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)"))
$header = @{authorization = "Basic $token"}
$projectsUrl = "$($orgUrl)/$($projectName)/_apis/distributedtask/variablegroups/1?api-version=5.0-preview.1"
$requestBody = @{
"variables" = @{
"PUBLIC_STATIC_IP_ADDRESS" = @{
"value" = "<the static ip you got>"
}
}
"@type" = "Vsts"
"name" = "<your variable group name>"
"description" = "Updated variable group"
} | ConvertTo-Json
Invoke-RestMethod -Uri $projectsUrl -Method Put -ContentType "application/json" -Headers $header -Body $requestBody
</code></pre>
<p>Then, you will find the varaible in the group.</p>
<p><a href="https://i.stack.imgur.com/s5qz8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s5qz8.png" alt="enter image description here"></a></p>
<p><strong>UPDATE:</strong></p>
<p>You can use <code>az</code> commands directly in a PowerShell script; once you have installed the Az module for PowerShell, <code>az</code> commands work in both PowerShell and bash.
Please refer to the following script.</p>
<pre><code>$KUBERNETES_CLUSTER = "KUBERNETES_CLUSTER"
$RESOURCE_GROUP = "RESOURCE_GROUP"
$PUBLIC_IP_ADDRESS
$PUBLIC_IP_NAME="Zcrm365DevelopmentStaticIpAddress"
$NODE_RESOURCE_GROUP = az aks show --resource-group $RESOURCE_GROUP --name $KUBERNETES_CLUSTER --query nodeResourceGroup -o tsv
az network public-ip create --resource-group $NODE_RESOURCE_GROUP --name $PUBLIC_IP_NAME --allocation-method static
$PUBLIC_IP_ADDRESS = az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress --output tsv
echo "##vso[task.setvariable variable=ipAddress;]$PUBLIC_IP_ADDRESS"
$orgUrl="https://dev.azure.com/<org>/"
....
</code></pre>
<p>Please refer to following links to learn the usage of az commands in powershell.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/storage/common/storage-auth-aad-script" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/storage/common/storage-auth-aad-script</a></p>
<p><a href="https://stackoverflow.com/questions/45585000/azure-cli-vs-powershell">Azure CLI vs Powershell?</a></p>
|
<p>I am currently seeing a strange issue where I have a Pod that is constantly being <code>Evicted</code> by Kubernetes.</p>
<p><strong>My Cluster / App Information:</strong></p>
<ul>
<li>Node size: <code>7.5GB RAM</code> / <code>2vCPU</code></li>
<li>Application Language: <code>nodejs</code></li>
<li>Use Case: puppeteer website extraction (I have code that loads a website, then extracts an element and repeats this a couple of times per hour)</li>
<li>Running on <code>Azure Kubernetes Service (AKS)</code></li>
</ul>
<p><strong>What I tried:</strong></p>
<ul>
<li><p>Check if Puppeteer is closed correctly and that I am removing any chrome instances. After adding a force killer it seems to be doing this</p>
</li>
<li><p>Checked <code>kubectl get events</code> where it is showing the lines:</p>
</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>8m17s Normal NodeHasSufficientMemory node/node-1 Node node-1 status is now: NodeHasSufficientMemory
2m28s Warning EvictionThresholdMet node/node-1 Attempting to reclaim memory
71m Warning FailedScheduling pod/my-deployment 0/4 nodes are available: 1 node(s) had taint {node.kubernetes.io/memory-pressure: }, that the pod didn't tolerate, 3 node(s) didn't match node selector
</code></pre>
<ul>
<li>Checked <code>kubectl top pods</code> where it shows it was only utilizing ~30% of the node's memory</li>
<li>Added resource limits in my kubernetes <code>.yaml</code>:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-d
spec:
replicas: 1
template:
spec:
containers:
- name: main
image: my-image
imagePullPolicy: Always
resources:
limits:
memory: "2Gi"
</code></pre>
<p><strong>Current way of thinking:</strong></p>
<p>A node has X memory total available, however from X memory only Y is actually allocatable due to reserved space. However when running <code>os.totalmem()</code> in <code>node.js</code> I am still able to see that Node is allowed to allocate the X memory.</p>
<p>What I am thinking here is that Node.js is allocating up to X due to its Garbage Collecting which should actually kick in at Y instead of X. However with my limit I actually expected it to see the limit instead of the K8S Node memory limit.</p>
<p><strong>Question</strong></p>
<p>Are there any other things I should try to resolve this? Did anyone have this before?</p>
|
<p>Your NodeJS app is not aware that it runs in a container. It sees only the amount of memory that the Linux kernel reports (which is always the total node memory). You should make your app aware of the cgroup limits, see <a href="https://medium.com/the-node-js-collection/node-js-memory-management-in-container-environments-7eb8409a74e8" rel="nofollow noreferrer">https://medium.com/the-node-js-collection/node-js-memory-management-in-container-environments-7eb8409a74e8</a></p>
<p>With regard to Evictions: when you've set memory limits - did that solve your problems with evictions?</p>
<p>And don't trust <code>kubectl top pods</code> too much. It always shows data with some delay.</p>
|
<p>I have a website (front-end + backend) published on my own GKE cluster. The current configuration relies on one static IP allocated in GCP + one GCE Ingress instance to open the website to internet traffic. It works.</p>
<p>Is there a way to not use a static IP and rely on "ghs.googlehosted.com." instead? I don't mind not having a fixed static IP. I've tried to set up the DNS as GCP advises on <a href="https://console.cloud.google.com/appengine/settings/domains?project=YOURPROJECTID" rel="nofollow noreferrer">https://console.cloud.google.com/appengine/settings/domains?project=YOURPROJECTID</a> but it doesn't work. Looking at the service logs, my FE can communicate internally with my BE. It is just the Ingress + LB configuration that doesn't let the <code>googlehosted.com</code> infra know my website is waiting for traffic and that all requests should be sent there. Does such a configuration even exist?</p>
|
<p>If you take each piece of the architecture:</p>
<ul>
<li>You have your cluster with your services</li>
<li>You want to expose the services. You create a load balancer</li>
<li>The load balancer is created with an IP address</li>
</ul>
<p>So, in the end, you only have an IP; there is no other way to expose a service with GKE. You have to use a load balancer, and the load balancer exposes an IP.</p>
<p><em>Other cloud providers, like AWS, expose a subdomain and not an IP when you create a load balancer, and thus you aren't linked to an IP. That's not the case with GCP, at least for GKE.</em></p>
|
<p>I'm pretty new to K8s in general, and I'm a developer, not exactly a network guy, so I would like some ideas on how to reach my goal so I can research it a bit.</p>
<p>Let's say I have my app (hosted on k8s), let's say myapp.domain.com.
Let's imagine I have a new customer who wants their own URL... let's say backoffice.company.com. For this to work I would have to go into my k8s ingress and add "backoffice.company.com" to the hosts.
Now let's imagine I have... 30000 customers doing this........... okay, you get the point.</p>
<p>One idea that came to my mind was to use an external nginx for example which listens to all servernames and just proxies them to "myapp.domain.com". But this forces me to have an extra server just to serve as proxy :/</p>
<p>Is there a way, maybe via TXT records or something like that to make the ingress verify that the request is to 'myapp'?</p>
<p>Thanks</p>
|
<p>You may expose the service directly to clients using a Load Balancer and add a wildcard DNS CNAME record <code>*.company.com</code> pointing to the Load Balancer. In that case you don't need the Nginx Ingress, which reduces latency for your clients and removes one possible bottleneck.</p>
<p>If you still want Nginx Ingress then you may use <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#hostname-wildcards" rel="nofollow noreferrer">hostname wildcards</a> like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-wildcard-host
spec:
rules:
- host: "*.foo.com"
http:
paths:
- pathType: Prefix
path: "/foo"
backend:
service:
name: service2
port:
number: 80
</code></pre>
|
<p>I'm new to GKE and K8S, so please bear with me and my silliness. I currently have a GKE cluster that has two nodes in the default node pool, and the cluster is exposed via a LoadBalancer-type service.</p>
<p>These nodes are tasked with calling a Compute Engine instance via HTTP. I have a Firewall rule set in GCP to deny ingress traffic to the GCE instance except the one coming from the GKE cluster.</p>
<p>The issue is that the traffic isn't coming from the LoadBalancer's service IP but rather from the nodes themselves, so whitelisting the service's IP has no effect and I have to whitelist the IPs of the nodes instead of the cluster. This is not ideal, since each time a new node is created I have to change the firewall rule. I understand that once you have a service set up in the cluster, all traffic will be directed towards the IP of the service, so why is this happening? What am I doing wrong? Please let me know if you need more details, and thanks in advance.</p>
<p>YAML of the service:</p>
<p><a href="https://i.stack.imgur.com/XBZmE.png" rel="nofollow noreferrer">https://i.stack.imgur.com/XBZmE.png</a></p>
|
<p>When you create a service on GKE and you expose it to the internet, a load balancer is created. This load balancer manages only the ingress traffic (traffic from the internet to your GKE cluster).</p>
<p>When your pod initiates a communication, the traffic is not managed by the load balancer, but by the node that hosts the pod, if the node has a public IP. (Instead of denying the traffic to the GCE instance, simply remove the public IP; it's easier and safer!)</p>
<p>If you want to manage the IP for egress traffic originated by your pod, you have to <a href="https://cloud.google.com/nat/docs/gke-example#gcloud" rel="nofollow noreferrer">set up a Cloud NAT on your GKE cluster</a>.</p>
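<p>As a rough sketch (the router name, NAT name, network and region below are placeholders, not values from your project), Cloud NAT is attached to a Cloud Router like this:</p>
<pre><code># Create a Cloud Router in the cluster's network/region (names are placeholders)
gcloud compute routers create my-router \
    --network=my-network --region=us-central1

# Attach a Cloud NAT configuration so egress from the nodes gets a stable public IP
gcloud compute routers nats create my-nat \
    --router=my-router --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
</code></pre>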
|
<p>As I am new to Kubernetes and its DNS service, it would be great if someone could help clarify the question below. I understand from the Kubernetes documentation that kube-dns supports 'services' and 'pods' records and uses them to resolve domain names of services or pods.</p>
<p>In a pod, a few containers are running. I need those containers to resolve a few external domains using kube-dns. How can I use kube-dns to make the containers resolve such external domain names? Does Kubernetes provide a DNS server only for the resolution of domains within Kubernetes? Or is there a way to customize the Kubernetes-provided DNS to resolve external domains? If so, how do I customize it?</p>
<p>It would be really helpful if someone could unblock my queries. Thanks in advance!</p>
|
<p>You should first take a look at the pod's DNS policy. From the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">documentation</a>, shared below:</p>
<blockquote>
<p>DNS policies can be set on a per-pod basis. Currently Kubernetes supports the following pod-specific DNS policies. These policies are specified in the dnsPolicy field of a Pod Spec.</p>
<ul>
<li>"<strong>Default</strong>": The Pod inherits the name resolution configuration from the node that the pods run on. See related discussion for more details.</li>
<li>"<strong>ClusterFirst</strong>": Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured. See related discussion for details on how DNS queries are handled in those cases.</li>
<li>"<strong>ClusterFirstWithHostNet</strong>": For Pods running with hostNetwork, you should explicitly set its DNS policy "ClusterFirstWithHostNet".
"None": It allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS settings are supposed to be provided using the dnsConfig field in the Pod Spec. See Pod's DNS config subsection below.</li>
<li><strong>Note</strong>: "Default" is not the default DNS policy. If dnsPolicy is not explicitly specified, then "ClusterFirst" is used.</li>
</ul>
</blockquote>
<p>Then you can configure the CoreDNS ConfigMap's <em><strong>forward</strong></em> option according to your needs. This <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">documentation</a> will help you understand it.</p>
<blockquote>
<p><strong>forward</strong>: Any queries that are not within the cluster domain of Kubernetes will be forwarded to predefined resolvers (/etc/resolv.conf).</p>
</blockquote>
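<p>For example, a minimal sketch of an extra server block in the <code>coredns</code> ConfigMap (in <code>kube-system</code>) that forwards one external zone to a specific resolver; the domain and the resolver IP are placeholders:</p>
<pre><code>example.com:53 {
    errors
    cache 30
    forward . 10.150.0.1
}
</code></pre>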
|
<p>I am trying to run the Kubernetes sample controller example by following the link <a href="https://github.com/kubernetes/sample-controller" rel="nofollow noreferrer">https://github.com/kubernetes/sample-controller</a>. I have the repo set up on an Ubuntu 18.04 system and was able to build the sample-controller package.
However, when I try to run the go package, I am getting some errors and am unable to debug the issue. Can someone please help me with this?</p>
<p>Here are the steps that I followed : </p>
<pre><code>user@ubuntu-user:~$ go get k8s.io/sample-controller
user@ubuntu-user:~$ cd $GOPATH/src/k8s.io/sample-controller
</code></pre>
<p>Here's the error that I get on running the controller:</p>
<pre><code>user@ubuntu-user:~/go/src/k8s.io/sample-controller$ ./sample-controller -kubeconfig=$HOME/.kube/config
E0426 15:05:57.721696 31517 reflector.go:125] k8s.io/sample-controller/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1alpha1.Foo: the server could not find the requested resource (get foos.samplecontroller.k8s.io)
</code></pre>
<p>Kubectl Version : </p>
<pre><code>user@ubuntu-user:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"```
</code></pre>
|
<p>I have reproduced your issue. The order of commands in this tutorial is wrong.</p>
<p>In this case you received this error due to the missing resource (the samplecontroller CRD):</p>
<pre><code>$ ./sample-controller -kubeconfig=$HOME/.kube/config
E0430 12:55:05.089624 147744 reflector.go:125] k8s.io/sample-controller/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1alpha1.Foo: the server could not find the requested resource (get foos.samplecontroller.k8s.io)
^CF0430 12:55:05.643778 147744 main.go:74] Error running controller: failed to wait for caches to sync
goroutine 1 [running]:
k8s.io/klog.stacks(0xc0002feb00, 0xc000282200, 0x66, 0x1f5)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:840 +0xb1
k8s.io/klog.(*loggingT).output(0x2134040, 0xc000000003, 0xc0002e12d0, 0x20afafb, 0x7, 0x4a, 0x0)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:791 +0x303
k8s.io/klog.(*loggingT).printf(0x2134040, 0x3, 0x14720f2, 0x1c, 0xc0003c1f48, 0x1, 0x1)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:690 +0x14e
k8s.io/klog.Fatalf(...)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:1241
main.main()
/usr/local/google/home/user/go/src/k8s.io/sample-controller/main.go:74 +0x3f5
</code></pre>
<p>You can verify that this API was not created:</p>
<pre><code>$ kubectl api-versions | grep sample
$ <emptyResult>
</code></pre>
<p>In the tutorial there is a command to create the <strong>Custom Resource Definition</strong>:</p>
<pre><code>$ kubectl create -f artifacts/examples/crd.yaml
customresourcedefinition.apiextensions.k8s.io/foos.samplecontroller.k8s.io created
</code></pre>
<p>Now when you search for this CRD, it will be on the list.</p>
<pre><code>$ kubectl api-versions | grep sample
samplecontroller.k8s.io/v1alpha1
</code></pre>
<p>The next step is to create the Foo resource:</p>
<pre><code>$ kubectl create -f artifacts/examples/example-foo.yaml
foo.samplecontroller.k8s.io/example-foo created
</code></pre>
<p>Those commands will not create any objects yet. </p>
<pre><code>user@user:~/go/src/k8s.io/sample-controller$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP XX.XXX.XXX.XX <none> 443/TCP 14d
</code></pre>
<p>All resources will be created after you run <code>./sample-controller -kubeconfig=$HOME/.kube/config</code>:</p>
<pre><code>user@user:~/go/src/k8s.io/sample-controller$ ./sample-controller -kubeconfig=$HOME/.kube/config
user@user:~/go/src/k8s.io/sample-controller$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/example-foo-6cbc69bf5d-8k59h 1/1 Running 0 43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.39.240.1 <none> 443/TCP 14d
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/example-foo 1 1 1 1 43s
NAME DESIRED CURRENT READY AGE
replicaset.apps/example-foo-6cbc69bf5d 1 1 1 43s
</code></pre>
<p><strong>Correct order:</strong></p>
<pre><code>$ go get k8s.io/sample-controller
$ cd $GOPATH/src/k8s.io/sample-controller
$ go build -o sample-controller .
$ kubectl create -f artifacts/examples/crd.yaml
$ kubectl create -f artifacts/examples/example-foo.yaml
$ ./sample-controller -kubeconfig=$HOME/.kube/config
$ kubectl get deployments
</code></pre>
|
<p>How do I get the logging level of a pod, given the pod name and namespace name?</p>
<p>If it's not possible to get the logging level, please tell me why.</p>
|
<p>With the kubectl command you can do this:</p>
<pre><code>kubectl logs <pod name> --namespace <namespace> [-c <container name>]
</code></pre>
<p>The container name is required if you have several containers in your pod.</p>
<p>In the GUI of GCP, you can do a custom filter like this</p>
<pre><code>resource.type="k8s_pod"
resource.labels.location="us-central1-c"
resource.labels.cluster_name="cluster-2"
jsonPayload.involvedObject.namespace="namespace"
jsonPayload.involvedObject.name="pod name"
</code></pre>
<p><a href="https://i.stack.imgur.com/MDeUQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MDeUQ.png" alt="enter image description here"></a></p>
|
<p>I've looked at <a href="https://stackoverflow.com/questions/43152190/how-does-one-install-the-kube-dns-addon-for-minikube">How does one install the kube-dns addon for minikube?</a> but the issue is that in that question, the addon is installed. However when I write</p>
<p><code>minikube addons list</code></p>
<p>I get the following:</p>
<p><code>- addon-manager: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- gvisor: disabled
- heapster: disabled
- ingress: disabled
- logviewer: disabled
- metrics-server: disabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
- storage-provisioner-gluster: disabled
</code></p>
<p>none of which is kube-dns. Can't find instructions anywhere as it's supposed to be there by default, so what have I missed?</p>
<p><strong>EDIT</strong> This is minikube v1.0.1 running on Ubuntu 18.04.</p>
|
<p>The StackOverflow case you are referring to is from 2017, so it's a bit outdated.</p>
<p>According to the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction" rel="nofollow noreferrer">documentation</a>, CoreDNS is the recommended DNS server, which replaced kube-dns. There was a transitional period when both KubeDNS and CoreDNS were deployed in parallel; however, in the latest version only CoreDNS is deployed.</p>
<p>By default, <code>Minikube</code> creates 2 pods with CoreDNS. To verify, execute: </p>
<pre><code>$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-g4vs2 1/1 Running 1 20m
coredns-5c98db65d4-k4s7v 1/1 Running 1 20m
etcd-minikube 1/1 Running 0 19m
kube-addon-manager-minikube 1/1 Running 0 20m
kube-apiserver-minikube 1/1 Running 0 19m
kube-controller-manager-minikube 1/1 Running 0 19m
kube-proxy-thbv5 1/1 Running 0 20m
kube-scheduler-minikube 1/1 Running 0 19m
storage-provisioner 1/1 Running 0 20m
</code></pre>
<p>You can also see that there is a CoreDNS deployment:</p>
<pre><code>$ kubectl get deployments coredns -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 37m
</code></pre>
<p><a href="https://coredns.io/2018/11/27/cluster-dns-coredns-vs-kube-dns/" rel="nofollow noreferrer">Here</a> you can find comparison between both DNS. </p>
<p>So in short, you did not miss anything. CoreDNS is deployed as default during <code>minikube start</code>.</p>
|
<p>When I use the apt-get command inside a pod on Kubernetes (v1.15.2) to update packages, it fails:</p>
<pre><code>root@nginx-deployment-5754944d6c-7gbds:/# apt-get update
Err http://security.debian.org wheezy/updates Release.gpg
Temporary failure resolving 'security.debian.org'
Err http://http.debian.net wheezy Release.gpg
Temporary failure resolving 'http.debian.net'
Err http://http.debian.net wheezy-updates Release.gpg
Temporary failure resolving 'http.debian.net'
Err http://nginx.org wheezy Release.gpg
Temporary failure resolving 'nginx.org'
Reading package lists... Done
W: Failed to fetch http://http.debian.net/debian/dists/wheezy/Release.gpg Temporary failure resolving 'http.debian.net'
W: Failed to fetch http://http.debian.net/debian/dists/wheezy-updates/Release.gpg Temporary failure resolving 'http.debian.net'
W: Failed to fetch http://security.debian.org/dists/wheezy/updates/Release.gpg Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://nginx.org/packages/mainline/debian/dists/wheezy/Release.gpg Temporary failure resolving 'nginx.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
</code></pre>
<p>I can successfully ping my kube-dns (IP: 10.96.0.10, CoreDNS version 1.6.7):</p>
<pre><code>root@nginx-deployment-5754944d6c-7gbds:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
root@nginx-deployment-5754944d6c-7gbds:/# ping 10.96.21.92
PING 10.96.21.92 (10.96.21.92): 48 data bytes
^C--- 10.96.21.92 ping statistics ---
11 packets transmitted, 0 packets received, 100% packet loss
root@nginx-deployment-5754944d6c-7gbds:/# ping 10.96.0.10
PING 10.96.0.10 (10.96.0.10): 48 data bytes
56 bytes from 10.96.0.10: icmp_seq=0 ttl=64 time=0.103 ms
56 bytes from 10.96.0.10: icmp_seq=1 ttl=64 time=0.094 ms
56 bytes from 10.96.0.10: icmp_seq=2 ttl=64 time=0.068 ms
56 bytes from 10.96.0.10: icmp_seq=3 ttl=64 time=0.066 ms
56 bytes from 10.96.0.10: icmp_seq=4 ttl=64 time=0.060 ms
56 bytes from 10.96.0.10: icmp_seq=5 ttl=64 time=0.064 ms
</code></pre>
<p>Why can't the pod access the network? I cannot install any tool inside this pod to check its network problem. What should I do to find out what is going wrong?</p>
<p>I tried to create a busybox pod and test kube-dns like this:</p>
<pre><code>[miao@MeowK8SMaster1 ~]$ kubectl exec -it busybox -- nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes'
command terminated with exit code 1
</code></pre>
|
<p>I understand why you need to exec into the coredns pod.</p>
<p>It, however, only allows the coredns binary to be executed (not a shell). </p>
<p>E.g:</p>
<pre><code>k exec -it <<coredns podname>> -n kube-system -- ./coredns -version
</code></pre>
<p>This returns the version of the coredns binary that is running. </p>
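<p>If you need more than the version string, you can also read the CoreDNS logs from outside the pod (assuming the usual <code>k8s-app=kube-dns</code> label on the CoreDNS pods):</p>
<pre><code>kubectl logs -n kube-system -l k8s-app=kube-dns --tail=100
</code></pre>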
|
<p>I'm trying to figure out why a GKE "Workload" CPU usage is not equivalent to the sum of cpu usage of its pods.</p>
<p>Following image shows a Workload CPU usage.</p>
<p><a href="https://i.stack.imgur.com/b45rT.png" rel="nofollow noreferrer">Service Workload CPU Usage</a></p>
<p>Following images show pods CPU usage for the above Workload.</p>
<p><a href="https://i.stack.imgur.com/pPptu.png" rel="nofollow noreferrer">Pod #1 CPU Usage</a></p>
<p><a href="https://i.stack.imgur.com/UBbEG.png" rel="nofollow noreferrer">Pod #2 CPU Usage</a></p>
<p>For example, at 9:45, the Workload cpu usage was around 3.7 cores, but at the same time Pod#1 CPU usage was around 0.9 cores and Pod#2 CPU usage was around 0.9 cores too. It means, the service Workload CPU Usage should have been around 1.8 cores, but it wasn't.</p>
<p>Does anyone have an idea of this behavior?</p>
<p>Thanks.</p>
|
<p>On your VM (the node managed by Kubernetes) you have the pods you deployed, but also several services that run on it for supervision, management, log ingestion, and so on. A basic description is available <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/" rel="nofollow noreferrer">here</a>.</p>
<p>You can see all these basic services by running this command: <code>kubectl get all --namespace kube-system</code>.</p>
<p>If you have installed additional components, like Istio or Knative, you have additional services and namespaces. All of these consume a share of the node's resources.</p>
|
<p>I am new to Kubernetes and am trying to understand when to use the <code>kubectl autoscale</code> and <code>kubectl scale</code> commands.</p>
|
<p><strong>Scale</strong> in a deployment tells how many pods should always be running to ensure the application works properly. You have to specify it manually.
In YAML you define it in <code>spec.replicas</code>, like in the example below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
</code></pre>
<p>A second way to specify the scale (replicas) of a deployment is to use a command:</p>
<pre><code>$ kubectl run nginx --image=nginx --replicas=3
deployment.apps/nginx created
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 3 3 3 3 11s
</code></pre>
<p>This means the deployment will have 3 pods running and Kubernetes will always try to maintain this number of pods (if any of the pods crashes, K8s will recreate it). You can always change it in <code>spec.replicas</code> and use <code>kubectl apply -f <name-of-deployment></code>, or via command:</p>
<pre><code>$ kubectl scale deployment nginx --replicas=10
deployment.extensions/nginx scaled
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 10 10 10 10 4m48s
</code></pre>
<p>Please read the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">documentation</a> about scaling and ReplicaSets.</p>
<p><strong>Horizontal Pod Autoscaling</strong> (HPA) was invented to scale a deployment based on metrics produced by pods. For example, if your application receives about 300 HTTP requests per minute and each of your pods can handle 100 HTTP requests per minute, it will be ok. However, if you receive a huge number of HTTP requests, say ~1000, 3 pods will not be enough and 70% of the requests will fail. When you use <code>HPA</code>, the deployment will autoscale to run 10 pods to handle all requests. After some time, when the number of requests drops to 500/minute, it will scale down to 5 pods. Later, depending on the request count, it might go up or down according to your HPA configuration.</p>
<p>The easiest way to apply autoscaling is:</p>
<pre><code>$ kubectl autoscale deployment <your-deployment> --<metrics>=value --min=3 --max=10
</code></pre>
<p>It means that the autoscaler will automatically scale, based on the metrics, up to a maximum of 10 pods, and later downscale to a minimum of 3.
A very good example based on CPU usage is shown in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#run-expose-php-apache-server" rel="noreferrer">HPA documentation</a>.</p>
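<p>As a concrete sketch (the deployment name and thresholds are only examples), a CPU-based autoscaler and a status check could look like this:</p>
<pre><code># Scale the "nginx" deployment between 3 and 10 replicas, targeting 50% CPU usage
$ kubectl autoscale deployment nginx --cpu-percent=50 --min=3 --max=10
# Watch the current/target metrics and replica count
$ kubectl get hpa nginx
$ kubectl describe hpa nginx
</code></pre>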
<p>Please keep in mind that Kubernetes can use many types of metrics exposed via its APIs (HTTP requests, CPU/memory load, number of threads, etc.).</p>
<p>Hope this helps you understand the difference between scaling and autoscaling.</p>
|
<p>I am trying to set up a kubernetes cluster in AWS using kops. I configured 3 master nodes and 6 worker nodes, but after launching the cluster only two master nodes are up.</p>
<p>I am using <code>.k8s.local</code> DNS instead of Purchased DNS. Below is the script that I am using for creating the cluster.</p>
<pre><code>kops create cluster \
--cloud=aws \
--name=kops-cassandra-cluster-01.k8s.local \
--zones=ap-south-1a,ap-south-1b,ap-south-1c \
--master-size="t2.small" \
--master-count 3 \
--master-zones=ap-south-1a,ap-south-1b,ap-south-1c \
--node-size="t2.small" \
--ssh-public-key="kops-cassandra-cluster-01.pub" \
--state=s3://kops-cassandra-cluster-01 \
--node-count=6
</code></pre>
<p>After executing <code>kops update cluster --name=kops-cassandra-cluster-01.k8s.local --state=s3://kops-cassandra-cluster-01 --yes</code>
only two master nodes are available instead of 3. </p>
<p><code>kubectl get nodes</code> shows:</p>
<pre><code>NAME STATUS ROLES AGE VERSION
ip-172-20-44-37.ap-south-1.compute.internal Ready master 18m v1.12.8
ip-172-20-52-78.ap-south-1.compute.internal Ready node 18m v1.12.8
ip-172-20-60-234.ap-south-1.compute.internal Ready node 18m v1.12.8
ip-172-20-61-141.ap-south-1.compute.internal Ready node 18m v1.12.8
ip-172-20-66-215.ap-south-1.compute.internal Ready node 18m v1.12.8
ip-172-20-69-124.ap-south-1.compute.internal Ready master 18m v1.12.8
ip-172-20-85-58.ap-south-1.compute.internal Ready node 18m v1.12.8
ip-172-20-90-119.ap-south-1.compute.internal Ready node 18m v1.12.8
</code></pre>
<p>I am new to Kubernetes. Am I missing something?</p>
|
<p>After doing a lot of research I came to know that it is because the t2.small instance type is not available in ap-south-1c. After modifying the zones to eu-west-1a,eu-west-1b,eu-west-1c, I can see 3 master nodes and 6 worker nodes. Thanks @mchawre for your help.</p>
|
<p>For example, I have a pod running a server in it and I have a job in my cluster that is doing some yaml patching on the server deployment.</p>
<p>Is there a way we can set up some kind of trigger, or anything else, that will rerun the job whenever the respective deployment changes?</p>
|
<p>You can add your job spec into the deployment as an <code>initContainer</code>, like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
replicas: 1
selector:
matchLabels:
app: example
template:
metadata:
labels:
app: example
spec:
initContainers:
- name: init
image: centos:7
command:
- "bin/bash"
- "-c"
- "do something useful"
containers:
- name: nginx
image: nginx
</code></pre>
<p>In this case, every time you roll out the deployment, the job defined in <code>initContainers</code> will run.</p>
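<p>For example (assuming the Deployment name above and a reasonably recent kubectl), you can force a new rollout, and therefore a rerun of the init container, without editing the spec:</p>
<pre><code>kubectl rollout restart deployment example
kubectl rollout status deployment example
</code></pre>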
|
<p>I'm running a GKE cluster and there is a deployment that uses an image which I push to Container Registry on GCP. The issue is that even though I build the image and push it with the <code>latest</code> tag, the deployment keeps creating new pods with the old image cached. Is there a way to update it without re-deploying (i.e. without destroying it first)? </p>
<p>There is a known issue with Kubernetes that even if you change ConfigMaps the old config remains, and you can either redeploy or work around it with: </p>
<pre><code>kubectl patch deployment $deployment -n $ns -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
</code></pre>
<p>Is there something similar for cached images? </p>
|
<p>1) You should change your way of thinking. Destroying a pod is not bad; application downtime is what is bad. You should always plan your deployments in such a way that they can tolerate the death of one pod. Use multiple replicas for stateless apps and use clusters for stateful apps. Use Kubernetes rolling updates for any changes to your deployments. Rolling updates have many extremely important settings which directly influence the uptime of your apps. Read them carefully.</p>
<p>2) The reason why Kubernetes launches the old image is that by default it uses
<code>imagePullPolicy: IfNotPresent</code>. Use <code>imagePullPolicy: Always</code> and it will always try to pull the latest version on redeploy. </p>
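<p>A minimal sketch of what that looks like in the container spec (image and names are placeholders):</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-app:latest
        # Always pull, so a re-pushed "latest" tag is picked up on the next rollout
        imagePullPolicy: Always
</code></pre>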
|
<p>I'm trying to run a kubernetes command remotely using Python and SSH. The command doesn't work if it is run remotely, but works if it is run directly on the machine.</p>
<pre><code>"kubectl get po --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\t"}{.metadata.labels.k8s-app}{"\n"}{end}"
</code></pre>
<p>If it is run as-is, the received error is:</p>
<blockquote>
<p><code>"Pods [NotFound] .items[*]}"</code></p>
</blockquote>
<p>If I replace <code>'</code> with <code>"</code> and vice versa, the error is:</p>
<blockquote>
<p>Expecting 'EOF'</p>
</blockquote>
<p>Considering that the command runs fine directly on the machine, something is being interpreted when it is passed remotely to the shell. I tried different combinations, but none of them work.</p>
|
<p>Posting this as community wiki <br></p>
<p>As @Orion pointed out in the comments, it seems that your command is not complete. I've tested it on my GKE cluster.</p>
<pre><code>$ kubectl get po --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].
image}{"\t"}{.metadata.labels.k8s-app}{"\n"}{end}
>
</code></pre>
<p>When you used <code>jsonpath=</code> you started it with a <code>'</code> sign, however you did not close it. That's why Kubernetes is still expecting some values.
However, if you end the <code>jsonpath</code> with <code>'</code> you will receive output like:</p>
<pre><code>$ kubectl get po --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].
image}{"\t"}{.metadata.labels.k8s-app}{"\n"}{end}'
event-exporter-v0.2.4-5f88c66fb7-xwwh6 k8s.gcr.io/event-exporter:v0.2.4 event-exporter
fluentd-gcp-scaler-59b7b75cd7-pw9kf k8s.gcr.io/fluentd-gcp-scaler:0.5.2 fluentd-gcp-scaler
fluentd-gcp-v3.2.0-6lxd9 gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8 fluentd-gcp
fluentd-gcp-v3.2.0-zhtds gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8 fluentd-gcp
heapster-v1.6.1-9bbcd7f79-ld4z7 gcr.io/stackdriver-agents/heapster-amd64:v1.6.1 heapster
kube-dns-6987857fdb-l2dt8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 kube-dns
kube-dns-6987857fdb-r97b8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 kube-dns
kube-dns-autoscaler-bb58c6784-8vq5d k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.3.0 kube-dns-autoscaler
kube-proxy-gke-stc1-default-pool-094e5c74-6vk4 k8s.gcr.io/kube-proxy:v1.13.7-gke.8
kube-proxy-gke-stc1-default-pool-094e5c74-htr0 k8s.gcr.io/kube-proxy:v1.13.7-gke.8
l7-default-backend-fd59995cd-xz72d k8s.gcr.io/defaultbackend-amd64:1.5 glbc
metrics-server-v0.3.1-57c75779f-t2rfb k8s.gcr.io/metrics-server-amd64:v0.3.1 metrics-server
prometheus-to-sd-2jxr5 k8s.gcr.io/prometheus-to-sd:v0.5.2 prometheus-to-sd
prometheus-to-sd-xmfsl k8s.gcr.io/prometheus-to-sd:v0.5.2 prometheus-to-sd
</code></pre>
<p>As you want to run the exact command remotely, you have to use <code>"</code> at the beginning and the end, and the same applies to the <code>jsonpath</code> expression.</p>
<p>Based on the information you have provided, the solution to your problem should be the command below:</p>
<pre><code>"kubectl get po --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\t"}{.metadata.labels.k8s-app}{"\n"}{end}'"
</code></pre>
|
<p>I'm working on a single-node cluster which works fine with docker-compose but the reconfiguration of the same setup using Minikube Ingress Controller gives me a <code>Bad Request</code> response.</p>
<pre><code>Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
</code></pre>
<p>My Ingress looks like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /?(.*)
pathType: Prefix
backend:
service:
name: emr-cluster-ip-service
port:
number: 443
- path: /?(.*)
pathType: Prefix
backend:
service:
name: erp-cluster-ip-service
port:
number: 8069
</code></pre>
<p>How to fix this?</p>
|
<p>You are exposing an HTTPS service through a plain-HTTP ingress rule, which is not the right thing to do. You might want to do one of the following:</p>
<ol>
<li>Configure <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">TLS-enabled ingress</a>.</li>
<li>Configure <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="nofollow noreferrer">TLS passthough</a> on ingress object.</li>
</ol>
<p>In both cases you also need to set <code>nginx.ingress.kubernetes.io/ssl-redirect: "true"</code></p>
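<p>For option 2, a minimal sketch could look like the snippet below; the hostname is a placeholder, passthrough needs a host-based rule, and the ingress-nginx controller must be started with the <code>--enable-ssl-passthrough</code> flag for the annotation to take effect:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: emr-passthrough
  annotations:
    kubernetes.io/ingress.class: nginx
    # Hand the TLS connection straight to the backend service
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - host: emr.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: emr-cluster-ip-service
            port:
              number: 443
</code></pre>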
|
<p>I want to ping a node name from inside a pod, and have CoreDNS forward the query to another service in the current cluster.
When forwarding, a domain name should be appended to the source node name, like the <strong>searches</strong> entries in /etc/resolv.conf. How do I configure the YAML of the CoreDNS pod, or the Corefile of CoreDNS?</p>
<pre><code># in one pod
ping node1
# equivalent
ping node1.xxx.com
</code></pre>
|
<p>You have some misunderstanding of how DNS works. The DNS server is not responsible for your "search" domains. The DNS server has absolutely no idea about search domains, because this is a client-side setting.</p>
<p>Read more for a thorough explanation of pod DNS settings (a small example follows the links below):</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config" rel="nofollow noreferrer">Pod's DNS Config</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction" rel="nofollow noreferrer">Customizing DNS Service</a></p>
</li>
</ul>
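<p>As an illustration, a minimal sketch of such a client-side search list set per pod (reusing <code>xxx.com</code> from your example as a placeholder):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dns-search-example
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
  dnsConfig:
    searches:
    - xxx.com    # unqualified names like "node1" will also be tried as "node1.xxx.com"
</code></pre>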
|
<p>Trying to run a simple Spark application using Kubernetes master. But I don't get the intended output/processing, nor do I see any error messages. The final pod phase is 'Failed' and the error code is 101. The pod logs show the usual log4j warnings, but nothing else.</p>
<p>Running minikube v1.0.1 on windows (amd64) on my office laptop using hyperv. Have already increased the #cpus and memory on minikube VM to 3 and 4 GB as recommended.</p>
<p>Made sure that the applications run fine with Spark Standalone. The first application 'Hello' is supposed to print a 'Hello' message. The second application 'Calculate Monthly Revenue' is supposed to read data from Teradata over JDBC, aggregate it and write the result back to Teradata table over JDBC.</p>
<p>Also made sure that 'hello minikube' works fine.</p>
<p>In all the code snippets below, ... indicates portions omitted for brevity, >>> indicates command prompt.</p>
<pre><code>>>> spark-submit --master k8s://https://153.65.225.219:8443 --deploy-mode cluster --name Hello --class Hello --conf spark.executor.instances=1 --conf spark.kubernetes.container.image=rahulvkulkarni/default:spark-td-run --conf spark.kubernetes.container.image.pullSecrets=regcred local://hello_2.12-0.1.0-SNAPSHOT.jar
log4j:WARN No appenders could be found for logger (io.fabric8.kubernetes.client.Config).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/05/20 16:59:09 INFO LoggingPodStatusWatcherImpl: State changed, new state:
pod name: hello-1558351748442-driver
...
phase: Pending
status: []
...
19/05/20 16:59:13 INFO LoggingPodStatusWatcherImpl: State changed, new state:
pod name: hello-1558351748442-driver
...
phase: Failed
status: [ContainerStatus(containerID=docker://464c9c0e23d543f20954d373218c9cefefc31107711cbd2ada4d93bb31ce4d80, image=rahulvkulkarni/default:spark-td-run, imageID=docker-pullable://rahulvkulkarni/default@sha256:1de9951c4ac9f0b5f26efa3949e1effa779b0605066f2043738402ce20e8179b, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=ContainerStateTerminated(containerID=docker://464c9c0e23d543f20954d373218c9cefefc31107711cbd2ada4d93bb31ce4d80, exitCode=101, finishedAt=2019-05-17T18:26:41Z, message=null, reason=Error, signal=null, startedAt=2019-05-17T18:26:40Z, additionalProperties={}), waiting=null, additionalProperties={}), additionalProperties={})]
19/05/20 16:59:13 INFO LoggingPodStatusWatcherImpl: Container final statuses:
Container name: spark-kubernetes-driver
Container image: rahulvkulkarni/default:spark-td-run
Container state: Terminated
Exit code: 101
19/05/20 16:59:13 INFO Client: Application Hello finished.
...
>>> kubectl logs hello-1558351748442-driver
++ id -u
...
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /sbin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=172.17.0.5 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class Hello spark-internal
19/05/17 18:26:41 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.apache.spark.deploy.SparkSubmit$$anon$2).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
</code></pre>
<p>What does exit code 101 mean? How do I find the actual error?</p>
<p>Then I tried to configure log4j for detailed logging as described in <a href="https://stackoverflow.com/questions/27781187/how-to-stop-info-messages-displaying-on-spark-console">How to stop INFO messages displaying on spark console?</a>. Renamed and used the log4j.properties template provided in the conf directory. But spark-submit is not able to find the log4j.properties file that I have already included in the docker build.</p>
<pre><code>>>> spark-submit --master k8s://https://153.65.225.219:8443 --deploy-mode cluster --files /opt/spark/conf/log4j.properties --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/opt/spark/conf/log4j.properties" --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/opt/spark/conf/log4j.properties" --name "Calculate Monthly Revenue" --class mthRev --conf spark.executor.instances=1 --conf spark.kubernetes.container.image=rahulvkulkarni/default:spark-td-run --conf spark.kubernetes.container.image.pullSecrets=regcred local://mthrev_2.10-0.1-SNAPSHOT.jar <username> <password> <server name>
19/05/20 20:02:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/05/20 20:02:52 INFO LoggingPodStatusWatcherImpl: State changed, new state:
pod name: calculate-monthly-revenue-1558362771110-driver
...
Container name: spark-kubernetes-driver
Container image: rahulvkulkarni/default:spark-td-run
Container state: Terminated
Exit code: 1
>>> kubectl logs -c spark-kubernetes-driver calculate-monthly-revenue-1558362771110-driver
++ id -u
...
log4j:ERROR Could not read configuration file from URL [file:/opt/spark/conf/log4j.properties].
java.io.FileNotFoundException: /opt/spark/conf/log4j.properties (No such file or directory)
...
log4j:ERROR Ignoring configuration file [file:/opt/spark/conf/log4j.properties].
19/05/17 21:30:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected scheme-specific part at index 2: C:
at org.apache.hadoop.fs.Path.initialize(Path.java:205)
at org.apache.hadoop.fs.Path.<init>(Path.java:171)
at org.apache.hadoop.fs.Path.<init>(Path.java:93)
at org.apache.hadoop.fs.Globber.glob(Globber.java:211)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1657)
at org.apache.spark.deploy.DependencyUtils$.org$apache$spark$deploy$DependencyUtils$$resolveGlobPath(DependencyUtils.scala:192)
at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveGlobPaths$2.apply(DependencyUtils.scala:147)
at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveGlobPaths$2.apply(DependencyUtils.scala:145)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at org.apache.spark.deploy.DependencyUtils$.resolveGlobPaths(DependencyUtils.scala:145)
at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$4.apply(SparkSubmit.scala:355)
at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$4.apply(SparkSubmit.scala:355)
at scala.Option.map(Option.scala:146)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:355)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:143)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.URISyntaxException: Expected scheme-specific part at index 2: C:
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.failExpecting(URI.java:2854)
at java.net.URI$Parser.parse(URI.java:3057)
at java.net.URI.<init>(URI.java:746)
at org.apache.hadoop.fs.Path.initialize(Path.java:202)
... 23 more
[INFO tini (1)] Main child exited normally (with status '1')
</code></pre>
<p>I tried several variations of specifying the log4j.properties file: local file on my Windows laptop (file:///C$/Users//spark-2.4.3-bin-hadoop2.7/conf/log4j.properties and file:///C:/Users//spark-2.4.3-bin-hadoop2.7/conf/log4j.properties), local file in the Linux container (file:///opt/spark/conf/log4j.properties). But I keep getting the message:</p>
<pre><code>log4j:ERROR Could not read configuration file from URL [file:/C$/Users/<my-username>/spark-2.4.3-bin-hadoop2.7/conf/log4j.properties].
</code></pre>
<p>The IllegalArgumentException exception went away when I tried the path without the colon (C:), i.e. either the Linux path or the Windows path with C$.</p>
<p>But I still don't get the desired output of my program and don't know what the error is, if there even is one!</p>
|
<p>There was a typo in the spark-submit command in the specification of the application jar. I was using only two forward slashes: local://hello_2.12-0.1.0-SNAPSHOT.jar. Hence, Spark was not able to locate it and (I think) was ignoring it silently and then had no work to do. Hence, there was no message. I'd expect it to give a warning at least.</p>
<p>Changed it to three slashes and it moved ahead:
local:///hello_2.12-0.1.0-SNAPSHOT.jar</p>
<p>I now have another issue related to Kubernetes RBAC, which I will solve separately. The log4j issue still remains, but is not a concern for me now.</p>
|
<p>I am trying to deploy a PersistentVolume for 3 pods to work on, and I want to use the cluster's node storage, i.e. not external storage like an EBS volume spun up on demand.</p>
<p>To achieve the above I did the following experiments:</p>
<p>1) I applied only the PVC resource defined below:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: pv1
name: pv1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
status: {}
</code></pre>
<p>This spins up storage using the default storage class, which in my case was a DigitalOcean volume. So it created a 1Gi volume.</p>
<p>2) Created a PV resource and PVC resource like below -</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
</code></pre>
<hr>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: pv1
name: pv1
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
status: {}
</code></pre>
<p>After this I see my claim is bound.</p>
<pre><code> pavan@p1:~$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv1 Bound task-pv-volume 10Gi RWO manual 2m5s
pavan@p1:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 118m
pavan@p1:~$ kubectl describe pvc
Name: pv1
Namespace: default
StorageClass: manual
Status: Bound
Volume: task-pv-volume
Labels: io.kompose.service=pv1
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"io.kompose.service":"mo...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 28s (x8 over 2m2s) persistentvolume-controller storageclass.storage.k8s.io "manual" not found
</code></pre>
<p>Below are my questions that I am hoping to get answers/pointers to:</p>
<ol>
<li><p>Regarding the above warning that the storage class could not be found: do I need to
create one? If so, can you tell me why and how, or give me a pointer? (Somehow this link fails to state that: <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/</a>)</p></li>
<li><p>Notice the PV has a storage capacity of 10Gi and the PVC requests only 1Gi, yet the PVC was bound with the full 10Gi capacity. Can't I share the same PV capacity with other PVCs?</p></li>
</ol>
<p>For question 2): if I have to create different PVs for different PVCs with the required capacity, do I have to create a storage class as well? Or can I use the same storage class and use selectors to select the corresponding PV?</p>
|
<p>I was trying to reproduce all behavior to answer all your questions. However, I don't have access to DigitalOcean, so I tested it on GKE.</p>
<blockquote>
<p>The above warning, storage class could not be found, do i need to
create one?</p>
</blockquote>
<p>According to the documentation and best practices, it is highly recommended to create a <code>storageclass</code> and later create PV / PVC based on it. However, there is something called "manual provisioning", which is what you did in this case.</p>
<p>Manual provisioning is when you need to manually create a PV first, and then a PVC with matching <code>spec.storageClassName:</code> field. Examples:</p>
<ul>
<li>If you create a PVC without <code>default storageclass</code>, <code>PV</code> and <code>storageClassName</code> parameter (afaik <code>kubeadm</code> is not providing default <code>storageclass</code>) - PVC will be stuck on <code>Pending</code> with event: <code>no persistent volumes available for this claim and no storage class is set</code>.</li>
<li>If you create a PVC with <code>default storageclass</code> setup on cluster but without <code>storageClassName</code> parameter it will be created based on default <code>storageclass</code>.</li>
<li>If you create a PVC with <code>storageClassName</code> parameter (somewhere in the Cloud, Minikube, or Microk8s), PVC will also get stuck <code>Pending</code> with this warning: <code>storageclass.storage.k8s.io "manual" not found</code>.
However, if you create a PV with the same <code>storageClassName</code> parameter, it will be bound in a while.</li>
</ul>
<p>Example:</p>
<pre><code>$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Available manual 4s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Pending manual 4m12s
...
kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 9s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Bound task-pv-volume 10Gi RWO manual 4m17s
</code></pre>
<p>The disadvantage of <code>manual provisioning</code> is that you have to create PV for each PVC (only 1:1 pairings will work). If you use <code>storageclass</code>, you can just create <code>PVC</code>.</p>
<blockquote>
<p>If so, can you tell me why and how? or any pointer.</p>
</blockquote>
<p>You can use <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">documentation</a> examples or check <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">here</a>. As you are using a Cloud provider with default <code>storageclass</code> (or <code>sc</code> for short) set up for you, you can export it to a yaml file by: <br>
<code>$ kubectl get sc -o yaml >> storageclass.yaml</code>
(you will then need to clean it up, removing unique metadata, before you can reuse it).</p>
<p>Or, if you have more than one <code>sc</code>, you have to specify which one. Names of <code>storageclass</code> can be obtained by <br> <code>$ kubectl get sc</code>.
Later you can refer to <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#storageclass-v1-storage-k8s-io" rel="nofollow noreferrer">K8s API</a> to customize your <code>storageclass</code>.</p>
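<p>Since you said you want to use the cluster's node storage, here is a minimal sketch of a <code>storageclass</code> for statically provisioned local volumes (no dynamic provisioner), just as a reference:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # PVs must be created manually
volumeBindingMode: WaitForFirstConsumer     # bind only when a pod actually uses the claim
</code></pre>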
<blockquote>
<p>Notice the PV has storage capacity of 10Gi and PVC with request
capacity of 1Gi, but still PVC was bound with 10Gi capacity?</p>
</blockquote>
<p>You manually created a PV with 10Gi and the PVC requested 1Gi. As PVC and PV are bound 1:1 to each other, the PVC searched for a PV which meets all its conditions and bound to it. The PVC ("pv1") requested 1Gi and the PV ("task-pv-volume") met those requirements, so Kubernetes bound them. Unfortunately, much of the space was wasted in this case.</p>
<blockquote>
<p>Can't i share the same PV capacity with other PVCs</p>
</blockquote>
<p>Unfortunately, you cannot bind more than 1 PVC to the same PV as the relationship between PVC and PV is 1:1, but you can configure many pods or deployments to use the same PVC (within the same namespace).</p>
<p>I can advise you to look at <a href="https://stackoverflow.com/questions/57798267/kubernetes-persistent-volume-access-modes-readwriteonce-vs-readonlymany-vs-read">this SO case</a>, as it explains <code>AccessMode</code> specifics very well.</p>
<blockquote>
<p>If i have to create different PVs for different PVC with the required
capacity, do i have to create storageclass as-well? Or same storage
class and use selectors to select corresponding PV?</p>
</blockquote>
<p>As I mentioned before, if you manually create a PV with a specific size and the PVC bound to it requests less storage, the extra space will be wasted. So, you have to create the PV and PVC with the same resource request, or let the <code>storageclass</code> size the storage based on the PVC request.</p>
|
<p>I deployed my application on Kubernetes but have been getting this error:</p>
<pre><code>**MountVolume.SetUp failed for volume "airflow-volume" : mount failed: mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/4a3c3d0b-b7e8-49bc-8a78-5a8bdc932eca/volumes/kubernetes.io~glusterfs/airflow-volume --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.2.107:10.0.2.24,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/airflow-volume/worker-844c9db787-vprt8-glusterfs.log,log-level=ERROR 10.0.2.107:/airflow /var/lib/kubelet/pods/4a3c3d0b-b7e8-49bc-8a78-5a8bdc932eca/volumes/kubernetes.io~glusterfs/airflow-volume Output: Running scope as unit run-22059.scope. mount: /var/lib/kubelet/pods/4a3c3d0b-b7e8-49bc-8a78-5a8bdc932eca/volumes/kubernetes.io~glusterfs/airflow-volume: unknown filesystem type 'glusterfs'. , the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod worker-844c9db787-vprt8**
</code></pre>
<p>AND</p>
<pre><code>**Unable to attach or mount volumes: unmounted volumes=[airflow-volume], unattached volumes=[airflow-volume default-token-s6pvd]: timed out waiting for the condition**
</code></pre>
<p>Any suggestions?</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web
namespace: airflow
spec:
replicas: 1
selector:
matchLabels:
tier: web
template:
metadata:
labels:
app: airflow
tier: web
spec:
imagePullSecrets:
- name: peeriqregistrykey
restartPolicy: Always
containers:
# Airflow Webserver Container
- name: web
image: peeriq/data_availability_service:airflow-metadata-cutover
volumeMounts:
- mountPath: /usr/local/airflow
name: airflow-volume
envFrom:
- configMapRef:
name: airflow-config
env:
- name: VAULT_ADDR
valueFrom:
secretKeyRef:
name: vault-credentials
key: VAULT_ADDR
- name: VAULT_TOKEN
valueFrom:
secretKeyRef:
name: vault-credentials
key: VAULT_TOKEN
- name: DJANGO_AUTH_USER
valueFrom:
secretKeyRef:
name: django-auth
key: DJANGO_AUTH_USER
- name: DJANGO_AUTH_PASS
valueFrom:
secretKeyRef:
name: django-auth
key: DJANGO_AUTH_PASS
- name: FERNET_KEY
valueFrom:
secretKeyRef:
name: airflow-secrets
key: FERNET_KEY
- name: POSTGRES_SERVICE_HOST
valueFrom:
secretKeyRef:
name: rds-postgres
key: POSTGRES_SERVICE_HOST
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: rds-postgres
key: POSTGRES_PASSWORD
ports:
- name: web
containerPort: 8080
args: ["webserver"]
# Airflow Scheduler Container
- name: scheduler
image: peeriq/data_availability_service:airflow-metadata-cutover
volumeMounts:
- mountPath: /usr/local/airflow
name: airflow-volume
envFrom:
- configMapRef:
name: airflow-config
env:
- name: AWS_DEFAULT_REGION
value: us-east-1
- name: ETL_AWS_ACCOUNT_NUMBER
valueFrom:
secretKeyRef:
name: aws-creds
key: ETL_AWS_ACCOUNT_NUMBER
- name: VAULT_ADDR
valueFrom:
secretKeyRef:
name: vault-credentials
key: VAULT_ADDR
- name: VAULT_TOKEN
valueFrom:
secretKeyRef:
name: vault-credentials
key: VAULT_TOKEN
- name: DJANGO_AUTH_USER
valueFrom:
secretKeyRef:
name: django-auth
key: DJANGO_AUTH_USER
- name: DJANGO_AUTH_PASS
valueFrom:
secretKeyRef:
name: django-auth
key: DJANGO_AUTH_PASS
- name: FERNET_KEY
valueFrom:
secretKeyRef:
name: airflow-secrets
key: FERNET_KEY
- name: POSTGRES_SERVICE_HOST
valueFrom:
secretKeyRef:
name: rds-postgres
key: POSTGRES_SERVICE_HOST
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: rds-postgres
key: POSTGRES_PASSWORD
args: ["scheduler"]
volumes:
- name: airflow-volume
# This GlusterFS volume must already exist.
glusterfs:
endpoints: glusterfs-cluster
path: /airflow
readOnly: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: flower
namespace: airflow
spec:
replicas: 1
selector:
matchLabels:
tier: flower
template:
metadata:
labels:
app: airflow
tier: flower
spec:
imagePullSecrets:
- name: peeriqregistrykey
restartPolicy: Always
containers:
- name: flower
image: peeriq/data_availability_service:airflow-metadata-cutover
volumeMounts:
- mountPath: /usr/local/airflow
name: airflow-volume
envFrom:
- configMapRef:
name: airflow-config
env:
# To prevent the error: ValueError: invalid literal for int() with base 10: 'tcp://10.0.0.83:5555'
- name: FLOWER_PORT
value: "5555"
- name: DJANGO_AUTH_USER
valueFrom:
secretKeyRef:
name: django-auth
key: DJANGO_AUTH_USER
- name: DJANGO_AUTH_PASS
valueFrom:
secretKeyRef:
name: django-auth
key: DJANGO_AUTH_PASS
- name: POSTGRES_SERVICE_HOST
valueFrom:
secretKeyRef:
name: rds-postgres
key: POSTGRES_SERVICE_HOST
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: rds-postgres
key: POSTGRES_PASSWORD
ports:
- name: flower
containerPort: 5555
args: ["flower"]
volumes:
- name: airflow-volume
# This GlusterFS volume must already exist.
glusterfs:
endpoints: glusterfs-cluster
path: /airflow
readOnly: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: worker
namespace: airflow
spec:
replicas: 1
selector:
matchLabels:
tier: worker
template:
metadata:
labels:
app: airflow
tier: worker
spec:
imagePullSecrets:
- name: peeriqregistrykey
restartPolicy: Always
containers:
- name: worker
image: peeriq/data_availability_service:airflow-metadata-cutover
volumeMounts:
- mountPath: /usr/local/airflow
name: airflow-volume
envFrom:
- configMapRef:
name: airflow-config
env:
- name: AWS_DEFAULT_REGION
value: us-east-1
- name: ETL_AWS_ACCOUNT_NUMBER
valueFrom:
secretKeyRef:
name: aws-creds
key: ETL_AWS_ACCOUNT_NUMBER
- name: VAULT_ADDR
valueFrom:
secretKeyRef:
name: vault-credentials
key: VAULT_ADDR
- name: VAULT_TOKEN
valueFrom:
secretKeyRef:
name: vault-credentials
key: VAULT_TOKEN
- name: DJANGO_AUTH_USER
valueFrom:
secretKeyRef:
name: django-auth
key: DJANGO_AUTH_USER
- name: DJANGO_AUTH_PASS
valueFrom:
secretKeyRef:
name: django-auth
key: DJANGO_AUTH_PASS
- name: FERNET_KEY
valueFrom:
secretKeyRef:
name: airflow-secrets
key: FERNET_KEY
- name: POSTGRES_SERVICE_HOST
valueFrom:
secretKeyRef:
name: rds-postgres
key: POSTGRES_SERVICE_HOST
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: rds-postgres
key: POSTGRES_PASSWORD
args: ["worker"]
volumes:
- name: airflow-volume
# This GlusterFS volume must already exist.
glusterfs:
endpoints: glusterfs-cluster
path: /airflow
readOnly: false
</code></pre>
|
<p>You must install the <code>glusterfs-fuse</code> package on your Kubernetes nodes, otherwise the kubelet won't be able to mount GlusterFS volumes.</p>
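<p>For example (package names differ per distribution, so adjust to whatever your nodes run):</p>
<pre><code># Debian/Ubuntu nodes
apt-get install -y glusterfs-client
# RHEL/CentOS nodes
yum install -y glusterfs-fuse
</code></pre>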
|
<p>Kubernetes namespace is stuck in terminating status.</p>
<p>This usually happens due to finalizers that never get removed:</p>
<pre><code>$ kubectl get ns
NAME STATUS AGE
cert-manager Active 14d
custom-metrics Terminating 7d
default Active 222d
nfs-share Active 15d
ingress-nginx Active 103d
kube-public Active 222d
kube-system Active 222d
lb Terminating 4d
monitoring Terminating 6d
production Active 221d
</code></pre>
|
<h2 id="this-worked-for-me">This worked for me :</h2>
<p>kubectl get namespace linkerd -o json > linkerd.json</p>
<p>Where:/api/v1/namespaces/<your_namespace_here>/finalize</p>
<p>kubectl replace --raw "/api/v1/namespaces/linkerd/finalize" -f ./linkerd.json</p>
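<p>Before dropping the finalizer this way, it can be worth checking whether any resources are actually left in the stuck namespace (a quick check; replace <code>linkerd</code> with your namespace):</p>
<pre><code>kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n linkerd
</code></pre>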
|
<p>I'm trying to create a Persistent Volume on top of/based off of an existing Storage Class Name. Then I want to attach the PVC to it so that they are bound. Running the code below gives me the "sftp-pv-claim" I want, but it is not bound to my PV ("sftp-pv-storage"). Its status is "pending".</p>
<p>The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type". If anyone can point me in the right direction as to why I'm getting the error message, it'd be much appreciated.</p>
<p><strong>Specs:</strong></p>
<p>I'm creating the PV and PVC using a helm chart.</p>
<p>I'm using the Rancher UI to see if they are bound or not and if the PV is generated.</p>
<p>The storage I'm using is Ceph with Rook (to allow for dynamic provisioning of PVs).</p>
<p><strong>Error:</strong></p>
<p>The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type".</p>
<p><strong>Attempts:</strong></p>
<p>I've tried using claimRef and matchLabels to no avail.</p>
<p>I've added "volumetype: none" to my PV specs.</p>
<p>If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bonded to it. (Also, for deployment purposes I don't want to use hostPath.</p>
<pre><code>## Create Persistent Storage for SFTP
## Ref: https://www.cloudtechnologyexperts.com/kubernetes-persistent-volume-with-rook/
kind: PersistentVolume
apiVersion: v1
metadata:
name: sftp-pv-storage
labels:
type: local
name: sftp-pv-storage
spec:
storageClassName: rook-ceph-block
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
allowVolumeExpansion: true
volumetype: none
---
## Create Claim (links user to PV)
## ==> If pod is created, need to automatically create PVC for user (without their input)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: sftp-pv-claim
spec:
storageClassName: sftp-pv-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
</code></pre>
|
<p><strong>The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type.</strong></p>
<p>In the PV manifest you must provide the type of volume. The list of all supported types is described <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="nofollow noreferrer">here</a>.
As you are using <code>Ceph</code>, I assume you will use <code>CephFS</code>.</p>
<blockquote>
<p>A cephfs volume allows an existing CephFS volume to be mounted into
your Pod. Unlike emptyDir, which is erased when a Pod is removed, the
contents of a cephfs volume are preserved and the volume is merely
unmounted. This means that a CephFS volume can be pre-populated with
data, and that data can be “handed off” between Pods. CephFS can be
mounted by multiple writers simultaneously.</p>
</blockquote>
<p>Example of <code>CephFS</code> you can find in <a href="https://github.com/kubernetes/examples/tree/master/volumes/cephfs" rel="nofollow noreferrer">Github</a>.</p>
<p><strong>If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bonded to it.</strong></p>
<p>If you check the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1" rel="nofollow noreferrer">official Kubernetes docs</a> about <code>storageClassName</code>, you will find:</p>
<blockquote>
<p>A claim can request a particular class by specifying the name of a
StorageClass using the attribute storageClassName. Only PVs of the
requested class, ones with the same storageClassName as the PVC, can
be bound to the PVC.</p>
</blockquote>
<p>The <code>storageClassName</code> values of your <code>PV</code> and <code>PVC</code> are different.</p>
<p>PV:</p>
<pre><code>spec:
storageClassName: rook-ceph-block
</code></pre>
<p>PVC:</p>
<pre><code>spec:
storageClassName: sftp-pv-storage
</code></pre>
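<p>If the goal is simply to get a bound volume from Rook/Ceph, dynamic provisioning usually makes the hand-written PV unnecessary. A minimal sketch, assuming the <code>rook-ceph-block</code> StorageClass exists in your cluster (note that Ceph RBD block volumes support <code>ReadWriteOnce</code>, not <code>ReadWriteMany</code>):</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sftp-pv-claim
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
</code></pre>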
<p>Hope it will help.</p>
|
<p>Given the following values.yaml</p>
<pre><code>elements:
first:
enabled: true
url: first.url
second:
    enabled: false
url: second.url
third:
enabled: true
url: third.url
</code></pre>
<p>What would be a good way to obtain the following result:</p>
<pre><code>list_of_elements=first,third
</code></pre>
<p>Where the resulting list needs to contain only the elements that have been enabled. The list needs to be a single line of comma separated items.</p>
|
<p>A little bit lengthy but does its job:</p>
<pre><code>{{ $result := list }}
{{ range $k, $v := .Values.elements }}
{{ if eq (toString $v.enabled) "true" }}
{{ $result = append $result $k }}
{{ end }}
{{ end }}
list_of_elements: {{ join "," $result }}
</code></pre>
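<p>A variant of the same template with whitespace-trimming markers (<code>{{-</code> / <code>-}}</code>), so the control lines do not leave blank lines in the rendered output; against the values above it should render to <code>list_of_elements: first,third</code>:</p>
<pre><code>{{- $result := list }}
{{- range $k, $v := .Values.elements }}
{{- if eq (toString $v.enabled) "true" }}
{{- $result = append $result $k }}
{{- end }}
{{- end -}}
list_of_elements: {{ join "," $result }}
</code></pre>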
|
<p>In a Kubernetes cluster I am trying to build a Selenium hub and node. I am able to do it in distributed mode, but I am now trying to do it in hub and node mode.</p>
<h1>Hub-deployment.yaml</h1>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: selenium-hub
spec:
selector:
matchLabels:
app: selenium-hub
replicas: 1
template:
metadata:
labels:
app: selenium-hub
spec:
containers:
- name: selenium-hub
image: selenium/hub:4.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 4444
resources:
limits:
cpu: 1000m
memory: 1000Mi
requests:
cpu: 50m
memory: 100Mi
imagePullSecrets:
- name: regcred
</code></pre>
<h1>hub-service.yaml</h1>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: selenium-hub
spec:
ports:
- name: "selenium-hub"
port: 4444
targetPort: 4444
selector:
app: selenium-hub
type: ClusterIP
</code></pre>
<h1>node-chrome-deployment.yaml</h1>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: selenium-chrome-node
spec:
selector:
matchLabels:
app: selenium-chrome-node
replicas: 1
template:
metadata:
labels:
app: selenium-chrome-node
spec:
containers:
- name: selenium-chrome-node
image: selenium/node-chrome:4.0
env:
- name: SE_EVENT_BUS_HOST
value: "selenium-hub"
- name: SE_EVENT_BUS_PUBLISH_PORT
value: "4442"
- name: SE_EVENT_BUS_SUBSCRIBE_PORT
value: "4443"
- name: SE_NODE_MAX_CONCURRENT_SESSIONS
value: "8"
- name: SE_SESSION_REQUEST_TIMEOUT
value: "3600"
- name: SE_NODE_MAX_SESSIONS
value: "10"
- name: SE_NODE_OVERRIDE_MAX_SESSIONS
value: "true"
- name: SE_NODE_GRID_URL
value: "http://selenium-hub:4444"
- name: HUB_HOST
value: "selenium-hub"
- name: HUB_PORT
value: "4444"
- name: SE_NODE_HOST
value: "selenium-chrome-node"
- name: SE_NODE_PORT
value: "5555"
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 1000m
memory: 1000Mi
requests:
cpu: 50m
memory: 100Mi
ports:
- containerPort: 5555
volumeMounts:
- mountPath: /dev/shm
name: dshm
imagePullSecrets:
- name: regcred
volumes:
- name: dshm
emptyDir:
medium: Memory
</code></pre>
<h1>node-chrome-service.yaml</h1>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: selenium-chrome-node
spec:
ports:
- name: "selenium-chrome-node"
port: 5555
targetPort: 5555
selector:
app: selenium-chrome-node
type: ClusterIP
</code></pre>
<p>Logs:</p>
<pre><code>[events]
publish = "tcp://selenium-hub:4442"
subscribe = "tcp://selenium-hub:4443"
[server]
host = "selenium-chrome-node"
port = "5555"
[node]
grid-url = "http://selenium-hub:4444"
session-timeout = "300"
override-max-sessions = true
detect-drivers = false
max-sessions = 10
[[node.driver-configuration]]
display-name = "chrome"
stereotype = '{"browserName": "chrome", "browserVersion": "95.0", "platformName": "Linux"}'
max-sessions = 10
.
.
.
17:00:27.549 INFO [NodeServer$1.start] - Starting registration process for node id d6a68bf5-e5b4-483a-9e71-408b3c158c0b
17:00:27.609 INFO [NodeServer.execute] - Started Selenium node 4.0.0 (revision 3a21814679): http://selenium-chrome-node:5555
17:00:27.621 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
17:00:37.629 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
17:00:47.636 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
17:00:57.641 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
17:01:07.646 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
17:01:17.650 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
</code></pre>
<p>This registration never happens</p>
<p>Logs of hub pod:</p>
<pre><code>4443, advertising as tcp://100.106.0.5:4443]
17:00:25.242 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://100.106.0.5:4442 and tcp://100.106.0.5:4443
17:00:25.324 INFO [UnboundZmqEventBus.<init>] - Sockets created
17:00:26.333 INFO [UnboundZmqEventBus.<init>] - Event bus ready
17:00:27.621 INFO [Hub.execute] - Started Selenium Hub 4.0.0 (revision 3a21814679): http://100.106.0.5:4444
</code></pre>
<p>How do I get the Selenium hub 4.0 and the chrome-node 4.0 registered?</p>
|
<p>The hub also needs to expose the publisher and subscriber ports, so that it can be reached by your chrome node pod. Update to the following:</p>
<p><strong>hub-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: selenium-hub
spec:
ports:
- name: "selenium-hub"
port: 4444
targetPort: 4444
- name: "subscribe-events"
port: 4443
targetPort: 4443
- name: "publish-events"
port: 4442
targetPort: 4442
selector:
app: selenium-hub
type: ClusterIP
</code></pre>
<p>I also found that your node carries legacy env variables from grid v3 and the v4 alpha/beta releases. These need to be removed; see the amended config below:</p>
<p><strong>node-chrome-deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: selenium-chrome-node
spec:
selector:
matchLabels:
app: selenium-chrome-node
replicas: 1
template:
metadata:
labels:
app: selenium-chrome-node
spec:
containers:
- name: selenium-chrome-node
image: selenium/node-chrome:4.0
env:
- name: SE_EVENT_BUS_HOST
value: "selenium-hub"
- name: SE_EVENT_BUS_PUBLISH_PORT
value: "4442"
- name: SE_EVENT_BUS_SUBSCRIBE_PORT
value: "4443"
- name: SE_SESSION_REQUEST_TIMEOUT
value: "3600"
- name: SE_NODE_MAX_SESSIONS
value: "10"
- name: SE_NODE_OVERRIDE_MAX_SESSIONS
value: "true"
- name: SE_NODE_GRID_URL
value: "http://selenium-hub:4444"
- name: SE_NODE_HOST
value: "selenium-chrome-node"
- name: SE_NODE_PORT
value: "5555"
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 1000m
memory: 1000Mi
requests:
cpu: 50m
memory: 100Mi
ports:
- containerPort: 5555
volumeMounts:
- mountPath: /dev/shm
name: dshm
imagePullSecrets:
- name: regcred
volumes:
- name: dshm
emptyDir:
medium: Memory
</code></pre>
<p>Configuration that has been removed:</p>
<ul>
<li>SE_NODE_MAX_CONCURRENT_SESSIONS</li>
<li>HUB_HOST</li>
<li>HUB_PORT</li>
</ul>
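<p>Once both pods have been redeployed, you can check that the node has actually registered by querying the hub's status endpoint (a quick check using the service names from the manifests above):</p>
<pre><code>kubectl port-forward service/selenium-hub 4444:4444
# in another terminal; the "nodes" section should list the chrome node
curl -s http://localhost:4444/status
</code></pre>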
|
<p><strong>Minikube setup:</strong>
I am running <code>minikube</code> on my Windows 10 office laptop with <code>Hyper-v</code>. A virtual switch in external mode has been created, with <code>Allow management operating system to share this network adapter</code> checked.</p>
<p>The VM works perfectly and I can do deployments.</p>
<p><strong>Problem:</strong>
The host Windows 10 machine loses its internet connection when the virtual switch is in external mode, even though I have checked the option <code>Allow management operating system to share this network adapter</code>.</p>
<p><strong>Questions:</strong> </p>
<ol>
<li>How to make the host and VM share the same network? </li>
<li>Since this is my office laptop and I see <code>Internet Connection sharing has been disabled by network administrator</code>, I doubt this
could be an issue. Could that be the reason?</li>
</ol>
|
<p>In the future you should provide some more details about your environment in your <a href="https://stackoverflow.com/help/how-to-ask">question</a>.</p>
<p>As you did not provide that information, I tried to reproduce your scenario but couldn't reproduce your environment's behaviour; the issue is related to your Hyper-V configuration.
I will post some tips and common mistakes when using <code>Minikube</code> on Windows.</p>
<p>First of all, to run <code>Minikube</code> you need <a href="https://kubernetes.io/docs/reference/kubectl/kubectl/" rel="nofollow noreferrer">kubectl</a>, <a href="https://www.docker.com/" rel="nofollow noreferrer">docker</a> and <a href="https://minikube.sigs.k8s.io/docs/" rel="nofollow noreferrer">Minikube</a> itself. For that you can use <a href="https://docs.docker.com/docker-for-windows/" rel="nofollow noreferrer">Docker for Windows</a> or follow <a href="https://www.assistanz.com/installing-minikube-on-windows-10-using-hyper-v/" rel="nofollow noreferrer">this tutorial</a>. From the chapter <code>CREATING VIRTUAL SWITCH</code> onwards, that tutorial uses an <code>Internal switch</code>, for which the <code>Internet Connection Sharing</code> you mentioned is required.</p>
<p>As <code>Docker</code> was installed, you should already have 2 Virtual Switches: <code>Default Switch</code> (the Default Network switch automatically gives virtual machines access to the computer's network using NAT) and <code>DockerNAT</code>. The next steps require adding a <code>Virtual Switch</code> for Minikube as <code>External</code>, which is well explained in <a href="https://octopus.com/blog/minikube-on-windows" rel="nofollow noreferrer">this tutorial</a>.</p>
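<p>If you keep the External switch approach, the switch is then passed to Minikube at start time. A sketch (the switch name <code>MinikubeSwitch</code> is an assumption, use whatever name you gave yours; older Minikube releases use <code>--vm-driver</code> instead of <code>--driver</code>):</p>
<pre><code>minikube start --driver=hyperv --hyperv-virtual-switch="MinikubeSwitch"
</code></pre>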
|
<p>I have a single image that I'm trying to deploy to an AKS cluster. The image is stored in Azure container registry and I'm simply trying to apply the YAML file to get it loaded into AKS using the following command:</p>
<blockquote>
<p>kubectl apply -f myPath\myimage.yaml</p>
</blockquote>
<p>kubectl keeps complaining that I'm missing the required "selector" field and that the field "spec" is unknown. This seems like a basic image configuration so I don't know what else to try.</p>
<blockquote>
<p>kubectl : error: error validating "myimage.yaml": error validating
data: [ValidationError(Deployment.spec): unknown field "spec" in
io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec):
missing required field "selector" in
io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these
errors, turn validation off with --validate=false At line:1 char:1</p>
</blockquote>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myimage
spec:
replicas: 1
template:
metadata:
labels:
app: myimage
spec:
containers:
- name: myimage
image: mycontainers.azurecr.io/myimage:v1
ports:
- containerPort: 5000
</code></pre>
|
<p>You have incorrect indentation of the second <code>spec</code> field, and you also missed <code>selector</code> in the first <code>spec</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myimage
labels:
app: myimage
spec:
replicas: 1
selector:
matchLabels:
app: myimage
template:
metadata:
labels:
app: myimage
spec:
containers:
- name: myimage
image: mycontainers.azurecr.io/myimage:v1
ports:
- containerPort: 5000
</code></pre>
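<p>As a quick way to catch this kind of schema error before deploying, you can let the API server validate the manifest without creating anything (available in recent kubectl versions):</p>
<pre><code>kubectl apply --dry-run=server -f myPath\myimage.yaml
</code></pre>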
|
<p>For example, I want to create a PVC with RWX access mode. Can I know in advance whether the default storage class supports RWX?</p>
|
<p>Kubernetes does not expose this information. You have to manually check which access modes are supported by the provisioner behind your storage class.</p>
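<p>In practice you can look up which provisioner backs the default storage class and then check that provisioner's documentation (or the access modes table in the Kubernetes docs) for RWX support. For example (<code><class-name></code> is a placeholder):</p>
<pre><code># the default class is marked with "(default)"
kubectl get storageclass
# shows the provisioner behind it
kubectl describe storageclass <class-name>
</code></pre>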
|
<p>I'm trying to delete a service I wrote & deployed to Azure Kubernetes Service (along with required Dask components that accompany it), and when I run <code>kubectl delete -f my_manifest.yml</code>, my service gets stuck in the Terminating state. The console tells me that it was deleted, but the command hangs:</p>
<pre><code>> kubectl delete -f my-manifest.yaml
service "dask-scheduler" deleted
deployment.apps "dask-scheduler" deleted
deployment.apps "dask-worker" deleted
service "my-service" deleted
deployment.apps "my-deployment" deleted
</code></pre>
<p>I have to <kbd>Ctrl</kbd>+<kbd>C</kbd> this command. When I check my services, Dask has been successfully deleted, but my custom service hasn't. If I try to manually delete it, it similarly hangs/fails:</p>
<pre><code>> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP x.x.x.x <none> 443/TCP 18h
my-service LoadBalancer x.x.x.x x.x.x.x 80:30786/TCP,443:31934/TCP 18h
> kubectl delete service my-service
service "my-service" deleted
</code></pre>
<p><a href="https://stackoverflow.com/questions/62240272/deleting-namespace-was-stuck-at-terminating-state">This question</a> says to delete the pods first, but all my pods are deleted (<code>kubectl get pods</code> returns nothing). There's also <a href="https://github.com/kubernetes/kubernetes/issues/66110" rel="nofollow noreferrer">this closed K8s issue</a> that says <code>--wait=false</code> might fix foreground cascade deletion, but this doesn't work and doesn't seem to be the issue here anyway (as the pods themselves have already been deleted).</p>
<p>I assume that I can completely wipe out my AKS cluster and re-create, but that's an option of last resort here. I don't know whether it's relevant, but my service is using <a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb#create-an-internal-load-balancer" rel="nofollow noreferrer">the <code>azure-load-balancer-internal: "true"</code> annotation</a> for the service, and I have a webapp deployed to my VNet that uses this service.</p>
<p>Is there any other way to force shutdown this service?</p>
|
<p>I had a similar issue with a svc not connecting to the pod because the pod was already deleted:</p>
<pre><code>HTTPConnectionPool(host='scv-name-not-shown-because-prod.namespace-prod', port=7999): Max retries exceeded with url:
my-url-not-shown-because-prod (Caused by
NewConnectionError('<urllib3.connection.HTTPConnection object at
0x7faee4b112b0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
</code></pre>
<p>I was able to solve this with the patch command:</p>
<pre><code>kubectl patch service scv-name-not-shown-because-prod -n namespace-prod -p '{"metadata":{"finalizers":null}}'
</code></pre>
<p>I think the service went into some illegal state and was not able to recover.</p>
|
<p>After adding the kubernetes plugin on Jenkins, what kind of credential info do I need to put so that I can manage the kubernetes cluster. Also, where do I get the credential info on the master node?</p>
<p>Thanks!
Phil</p>
|
<p>You didn't specify in your question whether you are using an on-prem or a local cluster; however, on the internet you can find many good tutorials about the Jenkins Kubernetes plugin.</p>
<p>The best tutorial can be found in the <a href="https://plugins.jenkins.io/kubernetes" rel="nofollow noreferrer">jenkins plugin doc</a>. There you can find information on how to configure Minikube, GKE and AWS.</p>
<blockquote>
<p>In Jenkins settings click on add cloud, select Kubernetes and fill the
information, like Name, Kubernetes URL, Kubernetes server certificate
key, ...</p>
<p>If Kubernetes URL is not set, the connection options will be
autoconfigured from service account or kube config file.</p>
<p>When running the Jenkins master outside of Kubernetes you will need to
set the credential to secret text. The value of the credential will be
the token of the service account you created for Jenkins in the
cluster the agents will run on.</p>
</blockquote>
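<p>As a sketch of that last step (names and namespace are assumptions, and cluster-admin is broader than most setups need; on Kubernetes 1.24+ token Secrets are no longer auto-created, hence the explicit token request):</p>
<pre><code>kubectl create serviceaccount jenkins -n default
kubectl create clusterrolebinding jenkins-admin \
  --clusterrole=cluster-admin --serviceaccount=default:jenkins
# paste the printed token into a Jenkins "secret text" credential
kubectl create token jenkins -n default
</code></pre>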
<p>You should also read <a href="https://wiki.jenkins.io/display/JENKINS/Kubernetes+Continuous+Deploy+Plugin" rel="nofollow noreferrer">this</a> as well as this <a href="https://github.com/jenkinsci/kubernetes-cli-plugin" rel="nofollow noreferrer">GitHub tutorial</a>.</p>
<p>If you need more detailed help, please edit your question and provide more information.</p>
|
<p>We are deploying Java microservices to AWS 'ECR > EKS' using helm3 and a Jenkins CI/CD pipeline. However, what we see is that if we re-run the Jenkins job to re-install the deployment/pod, the pod is not re-installed when there are no code changes; the old running pod is kept as is. <strong>The use case considered here:</strong> <em>the AWS Secrets Manager configuration for the db secret pulled during deployment has changed, so the service needs to be redeployed by re-triggering the Jenkins job.</em></p>
<h2><strong>Approach 1</strong> : <a href="https://helm.sh/docs/helm/helm_upgrade/" rel="nofollow noreferrer">https://helm.sh/docs/helm/helm_upgrade/</a></h2>
<p>I tried using 'helm upgrade --install --force ....' as suggested in the helm3 upgrade documentation, but it fails with the error below in the Jenkins log:</p>
<blockquote>
<p>"Error: UPGRADE FAILED: failed to replace object: Service "dbservice" is invalid: spec.clusterIP: Invalid value: "": field is immutable"</p>
</blockquote>
<h2><strong>Approach 2</strong> : using --recreate-pods from earlier helm version</h2>
<p>With 'helm upgrade --install --recreate-pods ....', I am getting the warning below in the Jenkins log:</p>
<blockquote>
<p>"Flag --recreate-pods has been deprecated, functionality will no longer be updated. Consult the documentation for other methods to recreate pods"</p>
</blockquote>
<p>However, the pod does get recreated. But as we know, --recreate-pods is not a soft restart, so we would have downtime, which breaks the microservice principle.</p>
<h3>helm version used</h3>
<p><em>version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}</em></p>
<h3>question</h3>
<ul>
<li>How to use --force with helm 3 with helm upgrade for above error ?</li>
<li>How to achieve soft-restart with deprecated --recreate-pods ?</li>
</ul>
|
<p>This is nicely described in Helm documentation: <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments" rel="noreferrer">https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments</a></p>
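<p>The gist of that page is to add a checksum of the rendered configuration as a pod template annotation, so any config change produces a new pod template and triggers a normal rolling update (no downtime, unlike <code>--recreate-pods</code>). A sketch; the template path <code>configmap.yaml</code> is an assumption, point it at whichever template renders the values you want to roll on:</p>
<pre><code>kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
</code></pre>
<p>Since your secret comes from AWS Secrets Manager rather than a chart template, the same page also shows forcing a roll on every upgrade with a random annotation, e.g. <code>rollme: {{ randAlphaNum 5 | quote }}</code>.</p>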
|
<p>I am trying to set up Envoy for k8s, but the Envoy service does not start and I see the following error in the log:</p>
<pre><code>"The v2 xDS major version is deprecated and disabled by default.
Support for v2 will be removed from Envoy at the start of Q1 2021.
You may make use of v2 in Q4 2020 by following the advice in https://www.envoyproxy.io/docs/envoy/latest/faq/api/transition "
</code></pre>
<p>I understand that I need to rewrite the configuration in v3. I ask for help, as I am not very good at this. Here is my config.</p>
<pre><code>static_resources:
listeners:
- name: k8s-controllers-listener
address:
socket_address: { address: 0.0.0.0, port_value: 6443 }
filter_chains:
- filters:
- name: envoy.tcp_proxy
config:
stat_prefix: ingress_k8s_control
cluster: k8s-controllers
clusters:
- name: k8s-controllers
connect_timeout: 0.5s
type: STRICT_DNS
lb_policy: round_robin
http2_protocol_options: {}
load_assignment:
cluster_name: k8s-controllers
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address: { address: ${CONTROLLER0_IP}, port_value: 6443 }
- endpoint:
address:
socket_address: { address: ${CONTROLLER1_IP}, port_value: 6443 }
- endpoint:
address:
socket_address: { address: ${CONTROLLER2_IP}, port_value: 6443 }
</code></pre>
|
<p>This topic is covered by the <a href="https://www.envoyproxy.io/docs/envoy/latest/faq/api/envoy_v3#how-do-i-configure-envoy-to-use-the-v3-api" rel="nofollow noreferrer">official Envoy FAQ section</a>:</p>
<blockquote>
<p>All bootstrap files are expected to be v3.</p>
<p>For dynamic configuration, we have introduced two new fields to
<a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/config_source.proto#envoy-v3-api-msg-config-core-v3-configsource" rel="nofollow noreferrer">config sources</a>, transport API version and resource API version.
The distinction is as follows:</p>
<ul>
<li><p>The <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/config_source.proto#envoy-v3-api-field-config-core-v3-apiconfigsource-transport-api-version" rel="nofollow noreferrer">transport API</a> version indicates the API endpoint and version of <code>DiscoveryRequest</code>/<code>DiscoveryResponse</code> messages used.</p>
</li>
<li><p>The <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/config_source.proto#envoy-v3-api-field-config-core-v3-configsource-resource-api-version" rel="nofollow noreferrer">resource API</a> version indicates whether a v2 or v3 resource, e.g. v2 <code>RouteConfiguration</code> or v3 <code>RouteConfiguration</code>, is delivered.</p>
</li>
</ul>
<p>The API version must be set for both transport and resource API
versions.</p>
<p>If you see a warning or error with <code>V2 (and AUTO) xDS transport protocol versions are deprecated</code>, it is likely that you are missing
explicit V3 configuration of the transport API version.</p>
</blockquote>
<ul>
<li><p>Check out <a href="https://pjausovec.medium.com/the-v2-xds-major-version-is-deprecated-and-disabled-by-default-envoy-60672b1968cb" rel="nofollow noreferrer">this source</a> for an example.</p>
</li>
<li><p>There is <a href="https://www.getenvoy.io/" rel="nofollow noreferrer">this open source tool</a> that makes it easy to install and upgrade Envoy.</p>
</li>
</ul>
<hr />
<p>Also, you can still use v2 by:</p>
<ul>
<li><p>Setting <a href="https://www.envoyproxy.io/docs/envoy/v1.17.0/operations/cli#cmdoption-bootstrap-version" rel="nofollow noreferrer">–bootstrap-version</a> 2 on the CLI for a v2 bootstrap file.</p>
</li>
<li><p>Enabling the runtime <code>envoy.reloadable_features.enable_deprecated_v2_api</code> feature. This is implicitly enabled if a v2 <a href="https://www.envoyproxy.io/docs/envoy/v1.17.0/operations/cli#cmdoption-bootstrap-version" rel="nofollow noreferrer">–bootstrap-version</a> is set.</p>
</li>
</ul>
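<p>For the configuration in the question, the part that triggers the v2 deprecation is the <code>config:</code> block on the tcp_proxy filter. A v3 sketch of that listener filter (based on the v3 API reference; double check the type URL against the Envoy version you run):</p>
<pre><code>filter_chains:
- filters:
  - name: envoy.filters.network.tcp_proxy
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
      stat_prefix: ingress_k8s_control
      cluster: k8s-controllers
</code></pre>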
|