prompt (stringlengths 65–38.7k) | response (stringlengths 41–29.1k)
---|---|
<p>I want to create a Kubernetes ConfigMap with multi-line values, such as this kind of YAML:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nifi-bootstrap
data:
run.sh: |-
echo "Waiting to run nslookup..."
sleep 30
</code></pre>
<p>How should I write it in a part function in my prototype?</p>
<pre><code> parts:: {
bootstrap(p):: {
apiVersion: 'v1',
kind: 'ConfigMap',
metadata: {
name: p.name + '-bootstrap',
labels: {
app: p.app,
release: p.release,
},
},
data: {
'run.sh': "|-
line 1
line 2
line 3
"
</code></pre>
<p>But it generates YAML like this (ks show default):</p>
<pre><code>apiVersion: v1
data:
run.sh: "|-\nline 1 \nline 2\nline 3\n"
kind: ConfigMap
</code></pre>
<p>I want to mount this config map and run it as a script, but I doubt this output can work. Any idea on how to generate a multi-line value in ksonnet/jsonnet?</p>
| <p>The <em>jsonnet</em> "equivalent" of <em>yaml</em>'s <code>|</code> is the <code>|||</code> construct (see <a href="https://jsonnet.org/ref/spec.html" rel="noreferrer">https://jsonnet.org/ref/spec.html</a>), applied to your example:</p>
<pre><code>$ cat foo.jsonnet
{
parts:: {
bootstrap(p):: {
apiVersion: "v1",
kind: "ConfigMap",
metadata: {
name: p.name + "-bootstrap",
labels: {
app: p.app,
release: p.release,
},
},
data: {
"run.sh": |||
line 1
line 2
line 3
|||,
},
},
},
} {
foo: self.parts.bootstrap({name: "foo", app: "bar", release: "v1"}),
}
$ jsonnet foo.jsonnet
{
"foo": {
"apiVersion": "v1",
"data": {
"run.sh": "line 1\nline 2\nline 3\n"
},
"kind": "ConfigMap",
"metadata": {
"labels": {
"app": "bar",
"release": "v1"
},
"name": "foo-bootstrap"
}
}
}
$ jsonnet foo.jsonnet|jq -r '.foo.data["run.sh"]'
line 1
line 2
line 3
</code></pre>
|
<p><a href="http://grs-preprodkubemaster01:5601/kibana" rel="nofollow noreferrer">http://grs-preprodkubemaster01:5601/kibana</a></p>
<p>I have followed the docs and installed Kibana. When I used the service as type: LoadBalancer, the service wasn't
coming up, so I deleted the type: LoadBalancer and let it default to ClusterIP, and it came up fine. (Note I don't have AWS.)
But I am not sure how to access the UI; I tried this URL but it's not working.
<a href="http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana" rel="nofollow noreferrer">http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana</a>
Any ideas how to access the Kibana UI? I checked the service and deployment and everything is a green check.</p>
<p>Another thing I tried is this URL, which I got from the command kubectl cluster-info:
<a href="https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy" rel="nofollow noreferrer">https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy</a>
However, this is showing me this error:</p>
<pre><code>{
kind: "Status",
apiVersion: "v1",
metadata: { },
status: "Failure",
message: "services "kibana-logging" is forbidden: User "system:anonymous" cannot get services/proxy in the namespace "kube-system"",
reason: "Forbidden",
details: {
name: "kibana-logging",
kind: "services"
},
code: 403
}
</code></pre>
<p>So, as another try I used Kibana service as NodePort, but that didn't work either.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "Kibana"
spec:
selector:
k8s-app: kibana-logging
type: NodePort
ports:
- port: 5601
protocol: TCP
targetPort: ui
nodePort: 30887
$ kubectl -n kube-system get rc,svc,cm,po
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/elasticsearch-logging ClusterIP 10.98.10.182 <none> 9200/TCP 12m
svc/heapster ClusterIP 10.107.184.85 <none> 80/TCP 3d
svc/kibana-logging NodePort 10.102.254.129 <none> 5601:30887/TCP 12m
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 3d
svc/kubernetes-dashboard ClusterIP 10.105.30.246 <none> 80/TCP 3d
svc/monitoring-influxdb ClusterIP 10.109.144.39 <none> 8086/TCP 3d
</code></pre>
<p>I would like to know what URL I should be using to access the Kibana UI. Please note that I have not tried kubectl proxy and I would like to have it work without it.</p>
| <p>The most common way to expose internal server outside the cluster is an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>.</p>
<p>First, you need to have an Ingress controller running in your Kubernetes cluster.<br>
There are two types of maintained Ingress controllers: <a href="https://git.k8s.io/ingress-gce/README.md" rel="nofollow noreferrer">GCE</a> and <a href="https://git.k8s.io/ingress-nginx/README.md" rel="nofollow noreferrer">nginx</a>.</p>
<p>Then, you need to create a <a href="https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/services-networking/ingress.yaml" rel="nofollow noreferrer">yaml</a> file as shown below and change it according to your needs:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: testsvc
servicePort: 80
</code></pre>
<p>When you create it using <code>kubectl create -f</code>, you should see something like this:</p>
<pre><code>$ kubectl get ingress
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 1.2.3.4
</code></pre>
<p>In this example, <code>1.2.3.4</code> is the IP allocated by Ingress controller.</p>
<p>When you have all things in place, you'll be able to access your application (Kibana) by IP <code>1.2.3.4</code></p>
<p>Please find more examples and use cases in <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress documentation</a></p>
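<p>Building on the generic example above, a host-based rule pointing at the <code>kibana-logging</code> Service might look roughly like this. This is a hedged sketch, not taken from the question: the hostname is a placeholder, and the Ingress has to live in the same namespace as the Service (here <code>kube-system</code>):</p>
<pre><code># sketch only - hostname is a placeholder, adjust to your DNS
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: kube-system
spec:
  rules:
  - host: kibana.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana-logging
          servicePort: 5601
</code></pre>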
<p>You can also expose a Kubernetes service without using the Ingress resource:</p>
<ol>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer" rel="nofollow noreferrer">Service.Type=LoadBalancer</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">Service.Type=NodePort</a></li>
<li><a href="https://git.k8s.io/contrib/for-demos/proxy-to-service" rel="nofollow noreferrer">Port Proxy</a></li>
</ol>
|
<p>We are trying to deploy Cassandra within Kubernetes. Thinking about the storage and how to make it work at its fastest in each datacenter, without the expense of implementing network-attached storage at each datacenter, it would seem reasonable to make use of a Local Persistent Volume at each datacenter and leverage Cassandra to handle the cross-datacenter replication.
Am I thinking about this problem correctly? Is there a better way to implement Cassandra in each of our datacenters so our application runs at its fastest by connecting to a more local datacenter?</p>
| <p>@Simon Fontana Oscarsson is right.</p>
<p>I just want to add a bit more detail about that feature for people who find this question, because it is a common case.</p>
<ol>
<li>Local Persistent Volumes are available only from 1.7 in alpha stage and from 1.10 in beta.</li>
<li>It requires pre-configured local volumes (e.g. via LVM) on the nodes, and this should be done before you use it (see the sketch after this list).</li>
<li>You may find configuration examples <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume/examples" rel="nofollow noreferrer">here</a>.</li>
</ol>
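<p>As a rough illustration only (not taken from the linked examples), pre-provisioning a local volume boils down to a StorageClass plus a PersistentVolume pinned to a node; the path, capacity and node name below are placeholders:</p>
<pre><code># sketch with placeholder values - assumes Kubernetes 1.10+ (beta local volumes)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-local-pv
spec:
  capacity:
    storage: 100Gi                  # placeholder size
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1           # placeholder path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1                  # placeholder node name
</code></pre>
<p>A Cassandra StatefulSet would then request this storage class through its volumeClaimTemplates.</p>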
|
<p>When a client sends a request to the Kubernetes apiserver, authentication plugins attempt to <a href="https://kubernetes.io/docs/admin/authentication/#webhook-token-authentication#authentication-strategies" rel="nofollow noreferrer">associate a number of attributes to the request</a>. These attributes can be used by authorisation plugins to determine whether the client's request can proceed. </p>
<p>One such attribute is the UID of the client, however <a href="https://kubernetes.io/docs/admin/authorization#review-your-request-attributes" rel="nofollow noreferrer">Kubernetes does not review the UID attribute during authorisation</a>. If this is the case, how is the UID attribute used?</p>
| <p>The UID field is intentionally not used for authentication purposes, but it is there to allow logging for audit purposes.</p>
<p>For many organizations this might not be important, but for example Google allows employees to change their usernames (but of course not the numeric UID). Logging the UID would allow lookups of actions regardless of the current username.</p>
<p>(Now some might point out that changing the username will likely involve losing the current privileges; this is an accepted limitation/inconvenience.)</p>
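<p>For illustration only (an assumption about how you would surface this, not part of the original answer): with API server auditing enabled, even a minimal policy that logs request metadata will record the authenticated user's name and UID for each request:</p>
<pre><code># minimal audit policy sketch - Metadata level records user info (including UID)
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
</code></pre>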
|
<p>I'm using a slightly customized Terraform configuration to generate my Kubernetes cluster on AWS. The configuration includes an EFS instance attached to the cluster nodes and master. In order for Kubernetes to use this EFS instance for volumes, my Kubernetes YAML needs the id and endpoint/domain of the EFS instance generated by Terraform.</p>
<p>Currently, my Terraform outputs the EFS id and DNS name, and I need to manually edit my Kubernetes YAML with these values after <code>terraform apply</code> and before I <code>kubectl apply</code> the YAML.</p>
<p>How can I automate passing these Terraform output values to Kubernetes?</p>
| <p>I don't know what you mean by a YAML to set up a Kubernetes cluster in AWS. But then, I've always set up my AWS clusters using <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a>. Additionally, I don't understand why you would want to mount an EFS to the master and/or nodes instead of to the containers.</p>
<p>But in direct answer to your question: you could write a script to output your Terraform outputs to a <a href="https://github.com/kubernetes/helm" rel="nofollow noreferrer">Helm</a> values file and use that to generate the k8s config.</p>
<p>I stumbled upon this question when searching for a way to get TF outputs into envvars specified in Kubernetes, and I expect more people will. I also suspect that this was really your question as well, or at least that it can be a way to solve your problem. So:</p>
<p>You can use the <a href="https://www.terraform.io/docs/providers/kubernetes/" rel="nofollow noreferrer">Kubernetes Terraform provider</a> to connect to your cluster and then use the <a href="https://www.terraform.io/docs/providers/kubernetes/r/config_map.html" rel="nofollow noreferrer"><code>kubernetes_config_map</code></a> resources to create configmaps.</p>
<pre><code>provider "kubernetes" {}
resource "kubernetes_config_map" "efs_configmap" {
"metadata" {
name = "efs_config" // this will be the name of your configmap
}
data {
efs_id = "${aws_efs_mount_target.efs_mt.0.id}"
efs_dns = "${aws_efs_mount_target.efs_mt.0.dns_name}"
}
}
</code></pre>
<p>If you have secret parameters use the <a href="https://www.terraform.io/docs/providers/kubernetes/r/secret.html" rel="nofollow noreferrer"><code>kubernetes_secret</code></a> resource:</p>
<pre><code>resource "kubernetes_secret" "some_secrets" {
"metadata" {
name = "some_secrets"
}
data {
s3_iam_access_secret = "${aws_iam_access_key.someresourcename.secret}"
rds_password = "${aws_db_instance.someresourcename.password}"
}
}
</code></pre>
<p>You can then <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">consume these in your k8s</a> yaml when setting your environment:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: some-app-deployment
spec:
selector:
matchLabels:
app: some
template:
metadata:
labels:
app: some
spec:
containers:
- name: some-app-container
image: some-app-image
env:
- name: EFS_ID
valueFrom:
configMapKeyRef:
name: efs_config
key: efs_id
- name: RDS_PASSWORD
valueFrom:
secretKeyRef:
name: some_secrets
key: rds_password
</code></pre>
|
<p>I have a kubernetes yaml deployment file which accepts db username and password as arguments as shown below. </p>
<pre><code>args:
- "-db_host=postgres"
- "-db_port=5432"
- "-db_username=postgres"
- "-db_password=postgres"
</code></pre>
<p>To hide the values of db_username and db_password I thought of using the Kubernetes Secret kind. But to achieve that I have to make db_username and db_password environment variables, so that I can use them as shown below:</p>
<pre><code>args:
- "-db_host=postgres"
- "-db_port=5432"
env:
- name: db_username
valueFrom:
secretKeyRef:
name: db-secret
key: db-user
- name: db_password
valueFrom:
secretKeyRef:
name: db-secret
key: db-pass
</code></pre>
<p>Is there any way we can use the secret in args itself so that I don't have to use the 2nd approach?</p>
| <p>Once you have an environment variable you can embed its value into the arguments:</p>
<pre><code>env:
- name: MESSAGE
value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
</code></pre>
<p>Or in your case:</p>
<pre><code>args:
- "-db_host=postgres"
- "-db_port=5432"
- "-db_username=$(db_username)"
- "-db_password=$(db_password)"
env:
- name: db_username
valueFrom:
secretKeyRef:
name: db-secret
key: db-user
- name: db_password
valueFrom:
secretKeyRef:
name: db-secret
key: db-pass
</code></pre>
<p>The reference can be found <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="noreferrer">here</a></p>
|
<p>Newbie setup :</p>
<ol>
<li>Created First project in GCP</li>
<li>Created cluster with default, 3 nodes. Node version 1.7.6. cluster master version 1.7.6-gke.1.</li>
<li>Deployed an application in a pod, as per the example.</li>
<li>Able to access "hello world" and the hostname, using the external-ip and the port.</li>
<li>In the GCP / GKE webpage of my cloud console, clicked "discovery and loadbalancing", I was able to see the "kubernetes-dashboard" process with a green tick, but cannot access it through the IP listed. I tried 8001, 9090, /ui and nothing worked.</li>
<li>Not using any cloud shell or gcloud commands on my local laptop. Everything is done in the console.</li>
</ol>
<p>Questions : </p>
<ol>
<li>How can anyone access the kubernetes-dashboard of the cluster created in console? </li>
<li>Docs are unclear: are the dashboard components incorporated in the console itself? Are the docs out of sync with the GCP-GKE screens?</li>
<li>The tutorial says to run "kubectl proxy" and then to open<br>
"<a href="http://localhost:8001/ui" rel="nofollow noreferrer">http://localhost:8001/ui</a>", but it doesn't work. Why?</li>
</ol>
| <p>If you create a cluster with version 1.9.x or greater, then you can access it using tokens.</p>
<ol>
<li>Get the secret:</li>
</ol>
<p><code>kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'</code></p>
<ol start="2">
<li><p>Copy the secret.</p></li>
<li><p>Run <code>kubectl proxy</code>.</p></li>
<li><p>Open the UI using 127.0.0.1:8001/ui. This will redirect to the login page.
There will be two options to log in, kubeconfig and token.
Select token and paste the secret copied earlier.</p></li>
</ol>
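<p>If you would rather not reuse the <code>clusterrole-aggregation-controller</code> token, a dedicated admin service account can be created instead. A minimal sketch (names are placeholders, and binding to <code>cluster-admin</code> is deliberately broad - scope it down for anything beyond a test cluster):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin            # placeholder name
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
</code></pre>
<p>Its token can then be read from the secret Kubernetes creates for that service account and pasted into the same login screen.</p>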
<p>Hope this helps.</p>
|
<p>I have a deployment (A pods) with a Service and HorizontalPodAutoscaler attached. I want to be able to control the scale down process and do some cleanup before the pod shutdown. Problem is, the cleanup can take a lot of time and for it to complete some other service (B pods) should be able to access the pod trying to shut down.</p>
<p>To accomplish this I set the deployment A to have a long <code>spec.terminationGracePeriodSeconds</code> value. When A pod gets the SIGTERM it starts finishing up and closing the process when it's done.</p>
<p>From the point pod A gets the SIGTERM it no longer receives connections from pod B, because the service removes its IP from the endpoints - making it impossible for pod A to finish its cleanup.</p>
<p>Tried using ClusterIP and Headless services, both act the same.</p>
<p>How can I make the service continue sending traffic to pod A even after it got the SIGTERM? I don't mind requests from B pods getting errors when trying to get to A pods.</p>
| <p>There is no way to do that because of the termination process design.</p>
<p>Here is the extract from the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">documentation</a> of the termination process:</p>
<ol>
<li><p>User sends command to delete Pod, with default grace period (30s)</p></li>
<li><p>The Pod in the API server is updated with the time beyond which the Pod is considered “dead” along with the grace period.</p></li>
<li>Pod shows up as “Terminating” when listed in client commands</li>
<li><p>(simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process.</p>
<ol>
<li>If the pod has defined a preStop hook, it is invoked inside of the pod. If the preStop hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.</li>
<li>The processes in the Pod are sent the TERM signal.</li>
</ol></li>
<li><p>(simultaneous with 3) <strong>Pod is removed from endpoints list for service, and are no longer considered part of the set of running pods for replication controllers. Pods that shutdown slowly can continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.</strong></p></li>
<li>When the grace period expires, any processes still running in the Pod are killed with SIGKILL.</li>
<li>The Kubelet will finish deleting the Pod on the API server by setting grace period 0 (immediate deletion). The Pod disappears from the API and is no longer visible from the client.</li>
</ol>
<p>So, the Pod will be deregistered from the Service while it is handling the SIGTERM signal, and you have no option to avoid it.</p>
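<p>For reference, the grace period and the preStop hook mentioned in step 4 are declared on the pod template. A minimal sketch, with placeholder names, image and cleanup script (not the asker's actual manifest):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a                              # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      terminationGracePeriodSeconds: 600   # long cleanup window
      containers:
      - name: app-a
        image: example/app-a:latest        # placeholder image
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "/cleanup.sh"]   # hypothetical cleanup script
</code></pre>
<p>Even with these settings, the endpoint removal in step 5 still happens at the start of termination, which is exactly the limitation described above.</p>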
|
<p>I am working on an OpenAM deployment on Google Cloud Platform (GCP) and the OS is RHEL7.
I am facing an issue while running minikube start.</p>
<pre><code>[root@test ~]# minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
150.53 MB / 150.53 MB [============================================] 100.00% 0s
E0509 06:20:12.950109 16264 start.go:159] Error starting host: Error creating host: Error executing step: Running precreate checks.
: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory.
Retrying.
E0509 06:20:12.951500 16264 start.go:165] Error starting host: Error creating host: Error executing step: Running precreate checks.
: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory
</code></pre>
<p>I already installed VirtualBox on RHEL.
I want to know how to enable VT-X on GCP.</p>
<p>Thanks
Ashish</p>
| <p><a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">You can use</a> <code>--vm-driver=none</code> to run your minikube in cloud. This flag will run your minukube in Docker. You should have installed Docker first. </p>
<p>Also, you can create a custom image with VMX enabled. Just follow the <a href="https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances" rel="nofollow noreferrer">official documentation instructions</a>.
Example from the documentation on how to create a custom image with VMX enabled:</p>
<pre><code>gcloud compute images create nested-vm-image --source-disk disk1 --source-disk-zone us-central1-a --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
</code></pre>
<p>Then, just create a new VM with the custom image.</p>
<pre><code> gcloud compute instances create example-nested-vm --zone us-central1-b --image nested-vm-image
</code></pre>
<p>After that, you can install VirtualBox or KVM and start minikube.</p>
|
<p>How do I ssh to a node inside the cluster locally? I am using the Docker edge version which has Kubernetes built in. If I run
<code>kubectl ssh node</code>
I am getting</p>
<pre><code> Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
</code></pre>
| <p>There is no "ssh" command in <code>kubectl</code> yet, but there are plenty of options to access Kubernetes node shell.</p>
<p>In case you are using <strong>cloud provider</strong>, you are able to connect to nodes directly from instances management interface.</p>
<p>For example, in <strong>GCP</strong>: Select <code>Menu</code> -> <code>Compute Engine</code> -> <code>VM instances</code>, then press <code>SSH</code> button on the left side of the desired node instance.</p>
<p>In case of using a <strong>local VM</strong> (VMware, VirtualBox), you can configure <code>sshd</code> before rolling out the Kubernetes cluster, or use the VM console, which is available from the management GUI.</p>
<p>Vagrant provides its own command to access VMs - <code>vagrant ssh</code></p>
<p>In case of using <strong>minikube</strong>, there is <code>minikube ssh</code> command to connect to minikube VM. There are also other <a href="https://stackoverflow.com/questions/38870277/minikube-how-to-ssh-into-the-vm">options</a>.</p>
<p>I found no simple way to access the <code>docker-for-desktop</code> VM, but you can easily switch to minikube for experimenting with node settings.</p>
|
<p>I have created a Kubernetes cluster on AWS by following the instructions below. All my master and worker nodes are running Ubuntu.</p>
<p><a href="https://jee-appy.blogspot.in/2017/10/setup-kubernetes-cluster-kops-aws.html" rel="nofollow noreferrer">https://jee-appy.blogspot.in/2017/10/setup-kubernetes-cluster-kops-aws.html</a></p>
<p>I am aware of how to increase or decrease the number of nodes in my cluster using cluster updates, where Kubernetes spins up a new node for us.</p>
<p>However, I was wondering: is it possible to attach my own external AWS instance (for example, an instance with the same OS, like Ubuntu) to my existing kops cluster?</p>
| <p>Kops means <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">Kubernetes Operations</a>, and this is a command line tool made to maintain production-grade Kubernetes installations. Kops works best with <a href="https://aws.amazon.com/" rel="nofollow noreferrer">Amazon Web Services</a>. There have been attempts to fully support GCE and other cloud platforms, but this is still in the future.</p>
<p><a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Nodes</a> in <a href="https://kubernetes.io" rel="nofollow noreferrer">Kubernetes</a> mean physical or virtual machines where a cluster is running pods. The cluster consists of a number of nodes aimed to keep services working. The quantity of designated nodes is declared during the Kubernetes cluster creation by Kops utility. </p>
<p>There is a possibility to add (extend) nodes to the cluster to achieve better performance. When the process of provisioning new nodes is managed by internal cluster routines, this feature is called auto-scaling.</p>
<p>kops uses instance groups for auto-scaling. See your instance groups using</p>
<blockquote>
<p>kops get instancegroups</p>
</blockquote>
<p>Of course, you can attach your existing VM instance to a Kubernetes cluster (running on AWS or not), but you need to do the whole thing manually - there is no import facility in the kops utility. <strong>I don't recommend this.</strong></p>
<p>I found a description of the manual installation process of <a href="https://devops.profitbricks.com/tutorials/getting-started-with-a-multi-node-kubernetes-cluster-on-ubuntu/" rel="nofollow noreferrer">Kubernetes elements</a> for Ubuntu machines. It may help you a bit.</p>
<p>If you need to extend the number of running nodes for your Kubernetes configuration, please consider using the autoscaling feature.</p>
<p>In this case, use kops to edit cluster properties:</p>
<blockquote>
<p>kops edit ig nodes</p>
</blockquote>
<p>After the editor shows the configuration file, find the minSize parameter and change it to the desired new size of the cluster. Make sure maxSize is equal to or larger than minSize. The example below was taken from the internet.</p>
<pre><code>apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2017-07-01T12:06:22Z"
generation: 2
labels:
kops.k8s.io/cluster: URL_OF_CLUSTER
name: nodes
spec:
image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-
machineType: m3.large
maxSize: 7
minSize: 3
nodeLabels:
kops.k8s.io/instancegroup: nodes
role: Node
subnets:
- eu-west-1a
</code></pre>
<p>Then, apply the new configuration and let kops resize the cluster:</p>
<blockquote>
<p>kops update cluster --yes</p>
</blockquote>
<p>New VM instances will be ready after AWS creates them; next, kops applies the Kubernetes configuration and merges them with the cluster configuration.</p>
|
<p>I have a lot of standard runtime Docker images, like python3 with TensorFlow 1.7 installed, and I want to use these standard images to run some customers' code outside of them. The scenario seems quite similar to serverless. So what is the best way to put the code into the runtime Docker containers?</p>
<p>Right now I am trying to use a persistent volume to mount the code into the runtime. But it involves a lot of work. Is there an easier solution for this?</p>
<p><strong>UPDATE</strong></p>
<p>What is the workflow for Google Machine Learning Engine or FloydHub? I think what I want is similar. They have a command line tool to combine the local code with a standard env.</p>
| <p>Following cloud native practices, <a href="https://12factor.net/build-release-run" rel="nofollow noreferrer">code should be immutable</a>, and releases and their dependencies uniquely identifiable for repeatability, replicability, etc. - in short: you should really create images with your src code.</p>
<p>In your case, that would mean basing your Dockerfile on upstream python3 or TF images. There are a couple of projects that may help with the <em>workflow</em> for the above (code+build-release-run):</p>
<ul>
<li><a href="https://github.com/Azure/draft" rel="nofollow noreferrer">https://github.com/Azure/draft</a> -- looks like better suited for your case</li>
<li><a href="https://github.com/GoogleContainerTools/skaffold" rel="nofollow noreferrer">https://github.com/GoogleContainerTools/skaffold</a> -- more golang friendly afaics</li>
</ul>
<p>Hope it helps --jjo</p>
|
<p>I created a Kubernetes cluster using kubeadm and the private IP of the server so all the nodes could reach it within the cloud provider network. I am using 4 nodes in DigitalOcean.</p>
<pre><code>kubctl-s-2vcpu-4gb-nyc3-01-master:~# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://10.132.113.68:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
</code></pre>
<p>The command I used to initialize the cluster is:</p>
<pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.132.113.68 --kubernetes-version stable-1.8
</code></pre>
<p>I am trying to connect to this cluster using kubectl from my local computer. The admin.conf file has the private IP:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS********S0tLQo=
server: https://10.132.113.68:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
</code></pre>
<p>I have tried setting up the proxy in the master <code>kubectl proxy</code> and making an SSH tunnel to the server:</p>
<pre><code>ssh -L 8001:127.0.0.1:8001 -N -i test.pem [email protected]
</code></pre>
<p>I can log in to the Kubernetes Dashboard from my computer, but can't execute <code>kubectl</code> commands:</p>
<pre><code>$kubectl -s localhost:8001 get nodes
Unable to connect to the server: read tcp 127.0.0.1:62394->127.0.0.1:8001: read: connection reset by peer
</code></pre>
| <p>Where <code>ssh -L ... </code> ends, <code>sshuttle</code> starts :): it creates local tcp "catch-all" DNATing via the ssh dest node, ie will forward <strong>every</strong> tcp connection in the specified <em>CIDR</em>.</p>
<p>Try it out:</p>
<ul>
<li><p>In one terminal (to ease later ^C):</p>
<p><code>sshuttle -e 'ssh -vi test.pem' -r [email protected] 10.132.113.68/32</code></p>
</li>
<li><p>From other terminal, just do the <code>kubectl ...</code> as you would do if locally run from your initial <code>kubeadm</code> node.</p>
</li>
<li><p>Profit :)</p>
</li>
</ul>
<p>--jjo</p>
|
<p>I have been trying to set up Kubernetes for local development on my Windows 7 machine with the VirtualBox VM driver. Installing and running minikube fails each time with the below error:</p>
<pre><code>D:\minikube>minikube start --vm-driver=virtualbox
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
E0219 09:47:24.441727 4220 start.go:159] Error starting host: Error getting state for host: machine does not exist.
Retrying.
E0219 09:47:24.448727 4220 start.go:165] Error starting host: Error getting state for host: machine does not exist
E0219 09:47:54.448727 4220 util.go:151] Error uploading error message: :
Post https://clouderrorreporting.googleapis.com/v1beta1/projects/k8s-minikube/events:report?key=AIzaSyACUwzG0dEPcl-eOgpDKnyKoUFgHdfoFuA: dial tcp 172.217.25.138:443: i/o timeout
</code></pre>
<p>I suspected this may be happening due to the minikube cache downloading to a network drive folder (N:) because of enterprise configurations on my laptop; however, copying the <code>.minikube</code> folder from <code>N:\.minikube</code> to <code>C:\Users\abc123\.minikube</code> has not abated the problem.</p>
<p>Do let me know if someone has managed to solve it.</p>
| <p>I ran into the same error on osx after re-installing minikube. Simply deleting the minikube VM and restarting fixed everything:</p>
<pre><code>$ minikube delete
$ minikube start
</code></pre>
|
<p>I have a <code>spring boot</code> api running on <code>google cloud kubernetes cluster</code>, I wanna have a caching server to use for my api so I thought to use <code>memcache</code>.</p>
<p>I tried two ways of doing it:</p>
<ol>
<li>I downloaded <code>memcache</code> from the Google launcher, which basically deploys an instance of <code>memcache</code> on a VM. Then I assigned an external <code>IP</code> to my VM, whitelisted my IP to try it locally, and of course opened the port <code>11211</code> (the default one). For the client side I used <a href="https://github.com/sixhours-team/memcached-spring-boot" rel="nofollow noreferrer">this client</a> and specified the IP address, but I still get connection cancelled: <code>java.util.concurrent.CancellationException: Cancelled</code>, and the docs are bad so I couldn't find anything that helps.</li>
<li>I decided to try another way, which is following <a href="https://cloud.google.com/solutions/deploying-memcached-on-kubernetes-engine" rel="nofollow noreferrer">this tutorial</a>, and now I have the <code>memcached</code> cluster, but I don't know how to consume these pods from my other cluster - or should the pods be on the same cluster I have the API running on?</li>
</ol>
<p>I would appreciate any help, this is my first encounter with the global caching.</p>
| <p>So I figured it out based on Jonah Benton's advice.</p>
<p>It was actually pretty simple: I used <a href="https://github.com/GoogleCloudPlatform/memcached-docker/blob/master/1/README.md" rel="nofollow noreferrer">this tutorial</a> to create a new pod running <code>memcached</code> in my cluster, and then I used <a href="https://github.com/sixhours-team/memcached-spring-boot" rel="nofollow noreferrer">this</a> client to connect to it and it worked like a charm!</p>
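<p>For anyone landing here, the in-cluster part boils down to something like the following - a minimal sketch with placeholder names, not the exact manifests from the linked tutorial:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: memcached
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memcached
  template:
    metadata:
      labels:
        app: memcached
    spec:
      containers:
      - name: memcached
        image: memcached            # placeholder - pin a specific tag in practice
        ports:
        - containerPort: 11211
---
apiVersion: v1
kind: Service
metadata:
  name: memcached
spec:
  selector:
    app: memcached
  ports:
  - port: 11211
    targetPort: 11211
</code></pre>
<p>The Spring Boot client can then be pointed at <code>memcached:11211</code> (the Service DNS name) as long as it runs in the same cluster.</p>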
<p>Hope it helps someone.</p>
|
<p>I read in the Kubernetes docs somewhere that Kubernetes reads application logs from stdout and stderr in pods.
I created a new application and configured it to send logs to a remote Splunk HEC endpoint (using splunk-logback jars) and at the same time to the console.
So by default, the console logs in logback should go to System.out, which should then be visible using kubectl logs.
But it's not happening in my application.</p>
<p>my logback file:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<configuration>
<Appender name="SPLUNK" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
<url>${splunk_hec_url}</url>
<token>${splunk_hec_token}</token>
<index>${splunk_app_token}</index>
<disableCertificateValidation>true</disableCertificateValidation>
<batch_size_bytes>1000000</batch_size_bytes>
<batch_size_count>${batch_size_count}</batch_size_count>
<send_mode>sequential</send_mode>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>%msg</pattern>
</layout>
</Appender>
<Appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%msg</pattern>
</encoder>
</Appender>
<Appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
<appender-ref ref="STDOUT" />
</Appender>
<root level="INFO">
<appender-ref ref="SPLUNK"/>
<appender-ref ref="ASYNC"/>
</root>
</configuration>
</code></pre>
<p>I am able to see the logs in Splunk, and if I log in to the container from the backend and start my Java application, then I can also see the logs on the terminal at that time. But if I let the container start by default on its own, then the logs only go to Splunk and I can't view them using <code>kubectl logs <POD_NAME></code></p>
<p>The kubernetes yml file for my logger app:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: logging-pod
labels:
app: logging-pod
spec:
containers:
- name: logging-container
image: logger-splunk:latest
command: ["java", "-jar", "logger-splunk-1.0-SNAPSHOT.jar"]
resources:
requests:
cpu: 1
memory: 1Gi
limits:
cpu: 1
memory: 1Gi
</code></pre>
| <p>According to the Kubernetes <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">documentation</a>, all output (that a containerized application writes to <code>stdout</code> and <code>stderr</code>) is redirected to a JSON file by default. You can access it by using <code>kubectl logs</code>. </p>
<p>Let's test this feature by creating a simple pod that outputs numbers in stdout:</p>
<pre><code>kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/counter-pod.yaml
</code></pre>
<p><em>counter-pod.yaml:</em></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: counter
spec:
containers:
- name: count
image: busybox
args: [/bin/sh, -c,
'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
</code></pre>
<p>where:<br>
<code>counter</code> - name of the pod<br>
<code>count</code> - name of the container inside "counter" pod</p>
<p>You can access the content of that file by running: </p>
<pre><code>$ kubectl logs counter
</code></pre>
<p>You can access the log file of a previously crashed container in a pod with the following command (using the counter pod from the example): </p>
<pre><code>$ kubectl logs counter --previous
</code></pre>
<p>In case of multiple containers in the pod, you should add the name of the container as follows:</p>
<pre><code>$ kubectl logs counter -c count
</code></pre>
<p>When the pod is removed from the cluster, all its logs (current and previous) are also removed.</p>
<p>Ensure you configure stdout in the application correctly and that the output to stdout in your application is not silently skipped for any reason.</p>
|
<p>I have been trying to test minikube to create a demo application with three services. The idea is to have a web UI which communicates with the other services. Each service will be written in different languages: nodejs, python and go.</p>
<p>I created 3 docker images, one for each app and tested the code, basically they provided a very simple REST endpoints. After that, I deployed them using minikube. Below is my current deployment yaml file:</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: ngci
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web-gateway
namespace: ngci
spec:
replicas: 1
template:
metadata:
labels:
app: web-gateway
spec:
containers:
- env:
- name: VCSA_MANAGER
value: http://vcsa-manager-service:7070
name: web-gateway
image: silvam11/web-gateway
imagePullPolicy: Never
ports:
- containerPort: 8080
readinessProbe:
httpGet:
path: /status
port: 8080
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: web-gateway-service
namespace: ngci
spec:
selector:
app: web-gateway
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8080
# Port forward to inside the pod
#targetPort did not work with nodePort, why?
#targetPort: 9090
# Port accessible outside cluster
nodePort: 30001
#name: grpc
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: vcsa-manager
namespace: ngci
spec:
replicas: 1
template:
metadata:
labels:
app: vcsa-manager
spec:
containers:
- name: vcsa-manager
image: silvam11/vcsa-manager
imagePullPolicy: Never
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: vcsa-manager-service
namespace: ngci
spec:
selector:
app: vcsa-manager
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 7070
# Port forward to inside the pod
#targetPort did not work with nodePort, why?
targetPort: 9090
# Port accessible outside cluster
#nodePort: 30001
#name: grpc
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: repo-manager
namespace: ngci
spec:
replicas: 1
template:
metadata:
labels:
app: repo-manager
spec:
containers:
- name: repo-manager
image: silvam11/repo-manager
imagePullPolicy: Never
ports:
- containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
name: repo-manager-service
namespace: ngci
spec:
selector:
app: repo-manager
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 9090
# Port forward to inside the pod
#targetPort did not work with nodePort, why?
#targetPort: 9090
# Port accessible outside cluster
#nodePort: 30001
#name: grpc
</code></pre>
<p>As you can see, I created three services, but only the web-gateway is defined as LoadBalancer type. It provides two endpoints. One, named /status, allows me to test that the service is up and running and reachable.</p>
<p>The second endpoint, named /user, communicates with another k8s service. The code is very simple:</p>
<pre><code>app.post('/user', (req, res) => {
console.log("/user called.");
console.log("/user req.body : " + req.body);
if(!req || !req.body)
{
var errorMsg = "Invalid argument sent";
console.log(errorMsg);
return res.status(500).send(errorMsg);
}
**console.log("calling " + process.env.VCSA_MANAGER);
const options = {
url: process.env.VCSA_MANAGER,
method: 'GET',
headers: {
'Accept': 'application/json'
}
};
request(options, function(err, resDoc, body) {
console.log("callback : " + body);
if(err)
{
console.log("ERROR: " + err);
return res.send(err);
}
console.log("statusCode : " + resDoc.statusCode);
if(resDoc.statusCode != 200)
{
console.log("ERROR code: " + res.statusCode);
return res.status(500).send(resDoc.statusCode);
}
return res.send({"ok" : body});
});
});
</code></pre>
<p>The main idea of this snippet is to use the environment variable process.env.VCSA_MANAGER to send a request to the other service. This variable was defined in my k8s deployment yaml file as <em><a href="http://vcsa-manager-service:7070" rel="noreferrer">http://vcsa-manager-service:7070</a></em></p>
<p>The issue is that this request returns a connection error. Initially I thought it would be a DNS issue, but it seems that the web-gateway pod can resolve the name:</p>
<pre><code>kubectl exec -it web-gateway-7b4689bff9-rvbbn -n ngci -- ping vcsa-manager-service
PING vcsa-manager-service.ngci.svc.cluster.local (10.99.242.121): 56 data bytes
</code></pre>
<p>The ping command from the web-gateway pod resolved the DNS correctly. The IP is correct, as can be seen below:</p>
<pre><code>kubectl get svc -n ngci
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
repo-manager-service ClusterIP 10.102.194.179 <none> 9090/TCP 35m
vcsa-manager-service ClusterIP 10.99.242.121 <none> 7070/TCP 35m
web-gateway-service LoadBalancer 10.98.128.210 <pending> 8080:30001/TCP 35m
</code></pre>
<p>Also, as suggested, the describe output for them:</p>
<pre><code>kubectl describe pods -n ngci
Name: repo-manager-6cf98f5b54-pd2ht
Namespace: ngci
Node: minikube/10.0.2.15
Start Time: Wed, 09 May 2018 17:53:54 +0100
Labels: app=repo-manager
pod-template-hash=2795491610
Annotations: <none>
Status: Running
IP: 172.17.0.10
Controlled By: ReplicaSet/repo-manager-6cf98f5b54
Containers:
repo-manager:
Container ID: docker://d2d54e42604323c8a6552b3de6e173e5c71eeba80598bfc126fbc03cae93d261
Image: silvam11/repo-manager
Image ID: docker://sha256:dc6dcbb1562cdd5f434f86696ce09db46c7ff5907b991d23dae08b2d9ed53a8f
Port: 8000/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 10 May 2018 10:32:49 +0100
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Wed, 09 May 2018 17:53:56 +0100
Finished: Wed, 09 May 2018 18:31:24 +0100
Ready: True
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-tbkms:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tbkms
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16h default-scheduler Successfully assigned repo-manager-6cf98f5b54-pd2ht to minikube
Normal SuccessfulMountVolume 16h kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-tbkms"
Normal Pulled 16h kubelet, minikube Container image "silvam11/repo-manager" already present on machine
Normal Created 16h kubelet, minikube Created container
Normal Started 16h kubelet, minikube Started container
Normal SuccessfulMountVolume 3m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-tbkms"
Normal SandboxChanged 3m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulled 3m kubelet, minikube Container image "silvam11/repo-manager" already present on machine
Normal Created 3m kubelet, minikube Created container
Normal Started 3m kubelet, minikube Started container
Name: vcsa-manager-8696b44dff-mzq5q
Namespace: ngci
Node: minikube/10.0.2.15
Start Time: Wed, 09 May 2018 17:53:54 +0100
Labels: app=vcsa-manager
pod-template-hash=4252600899
Annotations: <none>
Status: Running
IP: 172.17.0.14
Controlled By: ReplicaSet/vcsa-manager-8696b44dff
Containers:
vcsa-manager:
Container ID: docker://3e19fd8ca21a678e18eda3cb246708d10e3f1929a31859f0bb347b3461761b53
Image: silvam11/vcsa-manager
Image ID: docker://sha256:1a9cd03166dafceaee22586385ecda1c6ad3ed095b498eeb96771500092b526e
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 10 May 2018 10:32:54 +0100
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Wed, 09 May 2018 17:53:56 +0100
Finished: Wed, 09 May 2018 18:31:15 +0100
Ready: True
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-tbkms:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tbkms
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16h default-scheduler Successfully assigned vcsa-manager-8696b44dff-mzq5q to minikube
Normal SuccessfulMountVolume 16h kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-tbkms"
Normal Pulled 16h kubelet, minikube Container image "silvam11/vcsa-manager" already present on machine
Normal Created 16h kubelet, minikube Created container
Normal Started 16h kubelet, minikube Started container
Normal SuccessfulMountVolume 3m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-tbkms"
Normal SandboxChanged 3m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulled 3m kubelet, minikube Container image "silvam11/vcsa-manager" already present on machine
Normal Created 3m kubelet, minikube Created container
Normal Started 3m kubelet, minikube Started container
Name: web-gateway-7b4689bff9-rvbbn
Namespace: ngci
Node: minikube/10.0.2.15
Start Time: Wed, 09 May 2018 17:53:55 +0100
Labels: app=web-gateway
pod-template-hash=3602456995
Annotations: <none>
Status: Running
IP: 172.17.0.12
Controlled By: ReplicaSet/web-gateway-7b4689bff9
Containers:
web-gateway:
Container ID: docker://677fbcbc053c57e4aa24c66d7f27d3e9910bc3dbb5fda4c1cdf5f99a67dfbcc3
Image: silvam11/web-gateway
Image ID: docker://sha256:b80fb05c087934447c93c958ccef5edb08b7c046fea81430819823cc382337dd
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 10 May 2018 10:32:54 +0100
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 09 May 2018 17:53:57 +0100
Finished: Wed, 09 May 2018 18:31:16 +0100
Ready: True
Restart Count: 1
Readiness: http-get http://:8080/status delay=0s timeout=1s period=5s #success=1 #failure=3
Environment:
VCSA_MANAGER: http://vcsa-manager-service:7070
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-tbkms:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tbkms
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16h default-scheduler Successfully assigned web-gateway-7b4689bff9-rvbbn to minikube
Normal SuccessfulMountVolume 16h kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-tbkms"
Normal Pulled 16h kubelet, minikube Container image "silvam11/web-gateway" already present on machine
Normal Created 16h kubelet, minikube Created container
Normal Started 16h kubelet, minikube Started container
Warning Unhealthy 16h kubelet, minikube Readiness probe failed: Get http://172.17.0.13:8080/status: dial tcp 172.17.0.13:8080: getsockopt: connection refused
Normal SuccessfulMountVolume 3m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-tbkms"
Normal SandboxChanged 3m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulled 3m kubelet, minikube Container image "silvam11/web-gateway" already present on machine
Normal Created 3m kubelet, minikube Created container
Normal Started 3m kubelet, minikube Started container
Warning Unhealthy 3m (x3 over 3m) kubelet, minikube Readiness probe failed: Get http://172.17.0.12:8080/status: dial tcp 172.17.0.12:8080: getsockopt: connection refused
</code></pre>
<p>Here are the pods in the ngci namespace:</p>
<pre><code>silvam11@ubuntu:~$ kubectl get pods -n ngci
NAME READY STATUS RESTARTS AGE
repo-manager-6cf98f5b54-pd2ht 1/1 Running 1 16h
vcsa-manager-8696b44dff-mzq5q 1/1 Running 1 16h
web-gateway-7b4689bff9-rvbbn 1/1 Running 1 16h
</code></pre>
<p>What am I missing here? Is it a firewall?</p>
<p>Mauro</p>
| <p>You misconfigured the port numbers.</p>
<p>First, <code>vcsa-manager</code> was exposed on port 8080; after that, you tried to map the service <code>vcsa-manager-service</code> to target port 9090.
Then, <code>repo-manager</code> was exposed on port 8000; you commented out <code>targetPort</code> and didn't map the service to a port.</p>
<p>You should map the services to the right ports.</p>
<p>Fixed config will look like:</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: ngci
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web-gateway
namespace: ngci
spec:
replicas: 1
template:
metadata:
labels:
app: web-gateway
spec:
containers:
- env:
- name: VCSA_MANAGER
value: http://vcsa-manager-service:7070
name: web-gateway
image: silvam11/web-gateway
imagePullPolicy: Never
ports:
- containerPort: 8080
readinessProbe:
httpGet:
path: /status
port: 8080
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: web-gateway-service
namespace: ngci
spec:
selector:
app: web-gateway
ports:
- protocol: "TCP"
port: 8080
targetPort: 8080
nodePort: 30001
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: vcsa-manager
namespace: ngci
spec:
replicas: 1
template:
metadata:
labels:
app: vcsa-manager
spec:
containers:
- name: vcsa-manager
image: silvam11/vcsa-manager
imagePullPolicy: Never
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: vcsa-manager-service
namespace: ngci
spec:
selector:
app: vcsa-manager
ports:
- protocol: "TCP"
port: 7070
targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: repo-manager
namespace: ngci
spec:
replicas: 1
template:
metadata:
labels:
app: repo-manager
spec:
containers:
- name: repo-manager
image: silvam11/repo-manager
imagePullPolicy: Never
ports:
- containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
name: repo-manager-service
namespace: ngci
spec:
selector:
app: repo-manager
ports:
- protocol: "TCP"
port: 9090
targetPort: 8000
</code></pre>
<p>I’ve just fixed all ports in your config.</p>
|
<p>We have a Kubernetes cluster running on-premise and a GCR private repository. How can we access that private repository from my on-premise Kubernetes cluster? As far as I know, we can do it using gcloud-sdk, but it won't be possible to install gcloud-sdk on every node of the Kubernetes cluster.</p>
| <p>We used to deploy pods on an Azure AKS cluster with images pulled from GCR.
These are the steps we follow:</p>
<ol>
<li>Create a service account in gcloud with permissions to gcr.</li>
<li>Create keys for the service account.</li>
<li>Add kubectl secret.</li>
<li>Use secret in yaml</li>
</ol>
<p>Create keys for the service account:</p>
<p><code>gcloud iam service-accounts keys create gcr-docker-cred.json --iam-account=service-account-name@project-id.iam.gserviceaccount.com</code></p>
<p>Add the kubectl secret:</p>
<p><code>kubectl create secret docker-registry gcriosecret --docker-server=https://gcr.io --docker-username=_json_key [email protected] --docker-password="$(cat gcr-docker-cred.json)"</code></p>
<p>Use the secret in the YAML:</p>
<pre><code>imagePullSecrets:
  - name: gcriosecret
</code></pre>
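<p>For context, a minimal (hypothetical) Pod spec showing where that fragment sits - the image and names are placeholders:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gcr-example                        # placeholder
spec:
  containers:
  - name: app
    image: gcr.io/project-id/app:latest    # placeholder GCR image
  imagePullSecrets:
  - name: gcriosecret
</code></pre>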
<p>This <a href="http://docs.heptio.com/content/private-registries/pr-gcr.html" rel="nofollow noreferrer">blog</a> might be a good help.</p>
|
<p>I am creating a CI/CD pipeline.</p>
<p>I run <code>helm install --wait --timeout 300 ...</code>. But that doesn't really wait; it just returns when the "release" status is <code>DEPLOYED</code>.</p>
<p>So then I see a few things in <code>kubectl get pods --namespace default -l 'release=${TAG}' -o yaml</code> that could be used:</p>
<pre><code>- kind: Pod
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2018-05-11T00:30:46Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2018-05-11T00:30:48Z
status: "True"
type: Ready
</code></pre>
<p>So I guess I will look at when <code>Ready</code> condition becomes "True".</p>
<ol>
<li><p>It feels like a bit of a wrong thing to do... Everyone solves this, so I assume there is some feature of <code>kubectl</code> for that - is there?</p></li>
<li><p>Is this the right thing to query? (See <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="noreferrer">Kubernetes JSONPath reference</a>)</p>
<p>kubectl get pods --namespace default -l 'release=sc8757070' -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'</p></li>
</ol>
| <p>Kubernetes already has something to wait on pods (and print a message every time something changes and print a summary at the end).</p>
<pre><code># kubectl rollout status RESOURCE_TYPE RESOURCE_NAME
kubectl rollout status statefulset app1-hello
# with timeout set to 300 seconds
kubectl rollout status statefulset app1-hello --timeout=300s
</code></pre>
|
<p>I'm setting up a k8s cluster on GKE. A wildcard DNS <code>*.server.com</code> will point to a Ingress controller. Internally to the cluster, there will be webserver pods, each exposing a unique service. The Ingress controller will use the server name to route to the various services. </p>
<p>Servers will be created and destroyed on a nearly daily basis. I'd like to know if there's a way to add and remove a named server from the ingress controller without editing the whole list of named servers. </p>
| <p>It appears like you're planning to host multiple domain names on a single Load Balancer (==single <code>Ingress</code> resource). If not, this answer doesn't apply.</p>
<p>You can do this by configuring <code>Ingress</code> with a long list of domain names like:</p>
<pre><code>spec:
rules:
- host: cats.server.com
http:
paths:
- path: /*
backend:
serviceName: cats
servicePort: 8080
- host: dogs.server.com
http:
paths:
- path: /*
backend:
serviceName: dogs
servicePort: 8080
- [...]
</code></pre>
<p>If that's your intention, <strong>there's no way of doing this without editing this whole list</strong> and applying it to the cluster every time.</p>
<p>You can build a tool to construct this manifest file, then apply the changes. The Ingress controller is smart enough that existing domains will not see a downtime if they're still on the list.</p>
<p>However the domains you removed from the list will also be removed from the URL Map of the load balancer and hence stop accepting the traffic.</p>
|
<p>Does Kubernetes or Helm support shutting down pods if they are idle for more than a given threshold time?</p>
<p>This would be very useful in the development environment, to free room for other processes to consume and to save cost.</p>
| <p>Kubernetes has the ability to autoscale your application in a cluster. In practice, it means that Kubernetes can start additional pods when the load is increasing and terminate excess pods when the load is decreasing.</p>
<p>It is possible to downscale the application to zero pods, but, in this case, you will have a delay serving the first request while the pod is starting.</p>
<p>This functionality relies on performance metrics provided by the <a href="https://github.com/kubernetes/heapster" rel="noreferrer">Heapster</a> application, which must be running in the cluster. From the practical side, it means that autoscaling doesn't happen instantly, because it takes some time for the performance metrics to reach the configured threshold.</p>
<p>The mentioned Kubernetes feature, called HPA (Horizontal Pod Autoscaler), is described in this <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noreferrer">document</a>.</p>
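<p>For reference, a minimal HorizontalPodAutoscaler manifest that scales a Deployment on CPU utilisation looks roughly like this (names and thresholds are placeholders):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # placeholder Deployment name
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
</code></pre>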
<p>In case you are running your cluster on GCP or GKE, you are able to go further and automatically start additional nodes for your cluster when you need more computing capacity and shut down nodes when they are not running application pods anymore.</p>
<p>More information about this functionality can be found following the <a href="https://kubernetes.io/blog/2016/07/autoscaling-in-kubernetes/" rel="noreferrer">link</a>.</p>
<p>If you decide to give it a try, you might find this information useful:</p>
<ul>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster#create_a_cluster_with_autoscaling" rel="noreferrer">Creating a Container cluster in GKE</a> </li>
<li><a href="https://carlosbecker.com/posts/k8s-sandbox-costs/" rel="noreferrer">70% cheaper Kubernetes cluster on AWS</a> </li>
<li><a href="https://sysdig.com/blog/kubernetes-scaler/" rel="noreferrer">How to build a Kubernetes Horizontal Pod Autoscaler using custom metrics</a></li>
</ul>
|
<p>As of Kubernetes 1.9, all flags other than --config are deprecated on kube-scheduler. So how do I create the config file?</p>
| <p>Building from tip and using <code>--write-config-to</code> yields a sample config file. Examining it, it's a yaml file using <a href="https://godoc.org/k8s.io/kubernetes/pkg/apis/componentconfig#KubeSchedulerConfiguration" rel="nofollow noreferrer">KubeSchedulerConfiguration</a> as the schema.</p>
<p>Sample config (no other flags specified):</p>
<pre><code>algorithmSource:
provider: DefaultProvider
apiVersion: componentconfig/v1alpha1
clientConnection:
acceptContentTypes: ""
burst: 100
contentType: application/vnd.kubernetes.protobuf
kubeconfig: ""
qps: 50
disablePreemption: false
enableContentionProfiling: false
enableProfiling: false
failureDomains: kubernetes.io/hostname,failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 0.0.0.0:10251
kind: KubeSchedulerConfiguration
leaderElection:
leaderElect: true
leaseDuration: 15s
lockObjectName: kube-scheduler
lockObjectNamespace: kube-system
renewDeadline: 10s
resourceLock: endpoints
retryPeriod: 2s
metricsBindAddress: 0.0.0.0:10251
schedulerName: default-scheduler
</code></pre>
|
<p>I am using docker containers and have docker-compose files for both local development and production environment. I want to try Google Cloud Platform for my new app and specifically Google Kubernetes Engine. My tools is Docker for Mac with Kubernetes on local machine.</p>
<p>It is super important for developers to be able to change code and to see changes live for local development.</p>
<p>Use cases:</p>
<ol>
<li><p>A backend developer makes changes to a basic Flask API (or whatever you use) and should see the changes in the reloaded app immediately.</p></li>
<li><p>A frontend developer makes changes to the HTML layout and should see the changes on the web page immediately.</p></li>
</ol>
<p>At the moment I am using docker-compose files to mount the source code into local containers. But Kubernetes does not support relative paths for mounting the source code.</p>
<p>Ideally I should be able to set the variable </p>
<blockquote>
<p>Deployment.spec.templates.spec.containers.volumes.hostPath</p>
</blockquote>
<p>as relative path to my repo. For example, in our team developers clone repo to this folders:</p>
<blockquote>
<p>/User/BACKEND_developer/code/project_repo</p>
<p>/User/FRONTEND_developer/code/project_repo</p>
</blockquote>
<p>Obviously you can't commit and build the image after every little change to the source code.</p>
<p>So what is the best practice for local development with Kubernetes? Do I need some additional tools to modify the .yaml files for every developer?</p>
| <p>@tgogos is right.
The best way to achieve your goal is to use <a href="https://cloudplatform.googleblog.com/2018/03/introducing-Skaffold-Easy-and-repeatable-Kubernetes-development.html" rel="nofollow noreferrer">Skaffold</a></p>
<p>It will rebuild the container whenever it sees changes in the source code.</p>
<p>Skaffold has a pluggable architecture that allows you to choose the tools in the developer workflow that work best for you:
<a href="https://i.stack.imgur.com/YgwIS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YgwIS.png" alt="Pluggability"></a></p>
|
<p><a href="http://grs-preprodkubemaster01:5601/kibana" rel="nofollow noreferrer">http://grs-preprodkubemaster01:5601/kibana</a></p>
<p>I have followed docs and installed Kibana, When I used the service as type: LoadBalancer, the service isn't
coming up, so I deleted the type: LoadBalancer and let it default to ClusterIP, it came up fine. (Note I don't have AWS)
But, I am not sure how to access the UI, I tried this URL but its not working.
<a href="http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana" rel="nofollow noreferrer">http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana</a>
any ideas how to access the Kibana UI. I checked service, deployment and everything is green check.</p>
<p>Another thing I tried is this URL with this URL which I got from the command kubectl cluster-info
<a href="https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy" rel="nofollow noreferrer">https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy</a>
However, this is showing me this error</p>
<pre><code>{
kind: "Status",
apiVersion: "v1",
metadata: { },
status: "Failure",
message: "services "kibana-logging" is forbidden: User "system:anonymous" cannot get services/proxy in the namespace "kube-system"",
reason: "Forbidden",
details: {
name: "kibana-logging",
kind: "services"
},
code: 403
}
</code></pre>
<p>So, as another try I used Kibana service as NodePort, but that didn't work either.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "Kibana"
spec:
selector:
k8s-app: kibana-logging
type: NodePort
ports:
- port: 5601
protocol: TCP
targetPort: ui
nodePort: 30887
$ kubectl -n kube-system get rc,svc,cm,po
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/elasticsearch-logging ClusterIP 10.98.10.182 <none> 9200/TCP 12m
svc/heapster ClusterIP 10.107.184.85 <none> 80/TCP 3d
svc/kibana-logging NodePort 10.102.254.129 <none> 5601:30887/TCP 12m
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 3d
svc/kubernetes-dashboard ClusterIP 10.105.30.246 <none> 80/TCP 3d
svc/monitoring-influxdb ClusterIP 10.109.144.39 <none> 8086/TCP 3d
</code></pre>
<p>I would like to know what URL I should be using to access the Kibana UI. Please note that I have not tried kubectl proxy and I would like to have it work without it.</p>
| <p>I got it to work with these changes in ingress config</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kube
namespace: kube-system
annotations:
kubernetes.io/ingress.class: nginx
nginx.org/rewrites: "serviceName=kubernetes-dashboard rewrite=/;serviceName=kibana-logging rewrite=/"
spec:
rules:
- host: HOSTNAME_OF_MASTER
http:
paths:
- path: /kube-ui/
backend:
serviceName: kubernetes-dashboard
servicePort: 80
- path: /kibana/
backend:
serviceName: kibana-logging
servicePort: 5601
</code></pre>
<p>and my Kibana service is set up as NodePort</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "Kibana"
spec:
type: NodePort
ports:
- port: 5601
protocol: TCP
targetPort: ui
selector:
k8s-app: kibana-logging
</code></pre>
<p>and the dashboard is also configured like this</p>
<pre><code># ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
type: NodePort
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard
</code></pre>
<p>Once you have the svc running you can access Kibana using the NodePort from any node. Example: <code>http://node01_ip:31325/app/kibana</code></p>
<pre><code>$ kubectl get svc -o wide -n=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
elasticsearch-logging ClusterIP 10.xx.120.130 <none> 9200/TCP 11h k8s-app=elasticsearch-logging
heapster ClusterIP 10.xx.232.165 <none> 80/TCP 11h k8s-app=heapster
kibana-logging NodePort 10.xx.39.255 <none> 5601:31325/TCP 11h k8s-app=kibana-logging
kube-dns ClusterIP 10.xx.0.xx <none> 53/UDP,53/TCP 12h k8s-app=kube-dns
kubernetes-dashboard NodePort 10.xx.xx.xx <none> 80:32086/TCP 11h k8s-app=kubernetes-dashboard
monitoring-influxdb ClusterIP 10.13.199.138 <none> 8086/TCP 11h k8s-app=influxdb
</code></pre>
|
<p>I'm just getting started with GCP and Kubernetes Engine. So far I managed to start a Kubernetes cluster, run my app in a pod and connect it to a Cloud SQL instance. I also added a load balancer so now my app has a static IP and I should be able to connect to it from the outside. </p>
<p>However, I just get a <code>DisallowedHost</code> error? Which IP should I allow? The IP of the pod that is completely random or the IP of the load balancer?</p>
| <p>Turns out, it's the IP of the load balancer. In the settings.py file I changed the allowed hosts to</p>
<pre><code>ALLOWED_HOSTS = [os.environ.get('LOAD_BALANCER_IP', '127.0.0.1')]
</code></pre>
<p>and in my deployment yaml I added the load balancer IP as an environment variable to my container:</p>
<pre><code>spec:
containers:
- env:
- name: LOAD_BALANCER_IP
value: xx.xx.xx.xx
</code></pre>
<p>This way I can have the app work automatically both on deploy to the kubernetes cluster and on localhost for development.</p>
|
<p>I am having difficulties sending requests to my spring boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (<a href="https://cloud.google.com/vision/docs/libraries#client-libraries-install-java" rel="nofollow noreferrer">https://cloud.google.com/vision/docs/libraries#client-libraries-install-java</a>) as explained here <a href="https://cloud.google.com/vision/docs/auth" rel="nofollow noreferrer">https://cloud.google.com/vision/docs/auth</a>:</p>
<blockquote>
<p>If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.</p>
</blockquote>
<p>On my local machine everything works fine; I have a docker container with an environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file. </p>
<p>I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:</p>
<pre><code>{
"timestamp": "2018-05-10T14:07:27.652+0000",
"status": 500,
"error": "Internal Server Error",
"message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
"path": "/image"
}
</code></pre>
<p>What am I doing wrong? Thanks in advance!</p>
| <p>I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable on my GKE setup, these are the steps I completed thanks to <a href="https://stackoverflow.com/questions/47021469/how-to-set-google-application-credentials-on-gke-running-through-kubernetes">How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes</a>:</p>
<p><strong>1. Create the secret (in my case in my deploy step on Gitlab):</strong></p>
<pre><code>kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
</code></pre>
<p><strong>2. Setup the volume:</strong></p>
<pre><code>...
volumes:
- name: google-application-credentials-volume
secret:
secretName: google-application-credentials
items:
- key: application-credentials.json # default name created by the create secret from-file command
path: application-credentials.json
</code></pre>
<p><strong>3. Setup the volume mount:</strong></p>
<pre><code>spec:
containers:
- name: my-service
volumeMounts:
- name: google-application-credentials-volume
mountPath: /etc/gcp
readOnly: true
</code></pre>
<p><strong>4. Setup the environment variable:</strong></p>
<pre><code>spec:
containers:
- name: my-service
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /etc/gcp/application-credentials.json
</code></pre>
|
<p>I am using multiple Ingress resources on my GKE cluster, say I have 2 Ingresses in different namespaces. I create the Ingress resource as shown in the yaml below. With the annotations used in the yaml below, I clearly state that I am using the GCE controller that comes with GKE (<a href="https://github.com/kubernetes/ingress-gce" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce</a>). But every time I create an Ingress I get a different IP. For instance, sometimes I get 133.133.133.<strong><em>133</em></strong> and other times I get 133.133.133.<strong><em>134</em></strong>, and it alternates between only these two IPs (probably because of quota limits). This is a problem since I just want to reserve one IP and load balance/terminate multiple apps on this IP only.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: gce
name: http-ingress
spec:
backend:
serviceName: http-svc
servicePort: 80
</code></pre>
| <p>In your Ingress resource you can specify you need the Load Balancer to use a specific IP address with the <code>kubernetes.io/ingress.global-static-ip-name</code> annotation like so:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.global-static-ip-name: static-ip-name
name: http-ingress
spec:
backend:
serviceName: http-svc
servicePort: 80
</code></pre>
<p>You will need to create a global static IP first using the gcloud tool. See step 2(b) here: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip</a>.</p>
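<p>For example (the address name below is just a placeholder and must match the annotation value):</p>
<pre><code>gcloud compute addresses create static-ip-name --global
</code></pre>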
|
<p>I just installed a new centos server with docker</p>
<pre><code>Client:
Version: 1.13.1
API version: 1.26
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Server: Version: 1.13.1 API version: 1.26 (minimum
> version 1.12) Package version: <unknown> Go version: go1.8.3
> Git commit: 774336d/1.13.1 Built: Wed Mar 7 17:06:16
> 2018 OS/Arch: linux/amd64 Experimental: false
</code></pre>
<p>And I can use the command oc cluster up to launch an OpenShift server</p>
<pre><code>oc cluster up --host-data-dir /data --public-hostname master.ouatrahim.com --routing-suffix master.ouatrahim.com
</code></pre>
<p>which gives the output</p>
<pre><code>Using nsenter mounter for OpenShift volumes
Using 127.0.0.1 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
The server is accessible via web console at:
https://master.ouatrahim.com:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
</code></pre>
<p>And oc version gives the output </p>
<pre><code>oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://127.0.0.1:8443
openshift v3.9.0+0e3d24c-14
kubernetes v1.9.1+a0ce1bc657
</code></pre>
<p>But when I try to access the web console via <a href="https://master.ouatrahim.com:8443/" rel="nofollow noreferrer">https://master.ouatrahim.com:8443/</a> I keep getting an HTTP redirect to 127.0.0.1</p>
<pre><code>https://127.0.0.1:8443/oauth/authorize?client_id=openshift-web-console&response_type=code&state=eyJ0aGVuIjoiLyIsIm5vbmNlIjoiMTUyNTk2NjcwODI1MS0xODg4MTcxMDEyMjU3OTQ1MjM0NjIwNzM5NTQ5ODE0ODk5OTYxMTIxMTI2NDI3ODg3Mjc5MjAwMTgwODI4NTg0MTkyODAxOTA2NTY5NjU2In0&redirect_uri=https%3A%2F%2F127.0.0.1%3A8443%2Fconsole%2Foauth
</code></pre>
<p>I hope someone can help me solve this</p>
| <p>You can bring up the cluster using your actual IP address, for example:
<code>oc cluster up --public-hostname=192.168.122.154</code></p>
<p>This way you should be able to access the console at <a href="https://master.ouatrahim.com:8443/" rel="nofollow noreferrer">https://master.ouatrahim.com:8443/</a></p>
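<p>Applied to the command from the question, a minimal sketch would be (the IP below is a placeholder for the address your master is actually reachable on):</p>
<pre><code>oc cluster up --host-data-dir /data --routing-suffix master.ouatrahim.com --public-hostname &lt;public-IP-of-the-server&gt;
</code></pre>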
|
<p>I read somewhere in the Kubernetes docs that Kubernetes reads application logs from stdout and stderr in pods.
I created a new application and configured it to send logs to a remote Splunk HEC endpoint (using the splunk-logback jars) and at the same time to the console.
So by default, the console logs in logback should go to System.out, which should then be visible using kubectl logs.
But that's not happening in my application.</p>
<p>my logback file:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<configuration>
<Appender name="SPLUNK" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
<url>${splunk_hec_url}</url>
<token>${splunk_hec_token}</token>
<index>${splunk_app_token}</index>
<disableCertificateValidation>true</disableCertificateValidation>
<batch_size_bytes>1000000</batch_size_bytes>
<batch_size_count>${batch_size_count}</batch_size_count>
<send_mode>sequential</send_mode>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>%msg</pattern>
</layout>
</Appender>
<Appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%msg</pattern>
</encoder>
</Appender>
<Appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
<appender-ref ref="STDOUT" />
</Appender>
<root level="INFO">
<appender-ref ref="SPLUNK"/>
<appender-ref ref="ASYNC"/>
</root>
</configuration>
</code></pre>
<p>I am able to see the logs in Splunk, and if I log in to the container and start my Java application manually, I can also see the logs on the terminal. But if I let the container start on its own, the logs only go to Splunk and I can't view them using <code>kubectl logs <POD_NAME></code></p>
<p>The kubernetes yml file for my logger app:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: logging-pod
labels:
app: logging-pod
spec:
containers:
- name: logging-container
image: logger-splunk:latest
command: ["java", "-jar", "logger-splunk-1.0-SNAPSHOT.jar"]
resources:
requests:
cpu: 1
memory: 1Gi
limits:
cpu: 1
memory: 1Gi
</code></pre>
| <p>OK, so this finally got resolved. The issue was that the logs were not being flushed.</p>
<p>In the PatternLayout the %n was missing, so everything was going into a buffer and not reaching the console.</p>
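<p>For illustration, a minimal sketch of the corrected console appender from the question (only the trailing <code>%n</code> is new):</p>
<pre><code>&lt;Appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"&gt;
    &lt;encoder&gt;
        &lt;pattern&gt;%msg%n&lt;/pattern&gt;
    &lt;/encoder&gt;
&lt;/Appender&gt;
</code></pre>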
|
<p>I have installed Traefik on Kubernetes and followed along with the official tutorial.
I have a cluster of 4 machines for Kubernetes.</p>
<p>When I run <code>kubectl --namespace=kube-system get pods</code> I see <code>traefik-ingress-controller-678226159-eqseo</code>, so all fine.</p>
<p>Then I executed:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
</code></pre>
<p>and then ran:</p>
<pre><code>echo "$(my master node ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
</code></pre>
<p>which resulted in:
<code>http://192.168.178.31 traefik-ui.minikube</code> in <code>/etc/hosts</code></p>
<p>I further edited <code>kubectl -n kube-system edit service traefik-web-ui</code> service and changed
the type to <code>NodePort</code>.</p>
<p>When I finally run <code>$ curl http://192.168.178.31:31107</code> I get:</p>
<pre><code>curl: (7) Failed to connect to 192.168.178.31 port 31107: Connection refused
</code></pre>
<p>Does anyone know why I am getting the Connection refused?</p>
<p><strong>EDIT 1:</strong></p>
<p>Log from <code>kubectl logs -f traefik-ingress-controller-68994b879-5z2xr -n kube-system</code>:</p>
<pre><code>time="2018-05-13T09:55:48Z" level=info msg="Traefik version v1.6.0 built on 2018-04-30_09:28:44PM"
time="2018-05-13T09:55:48Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://docs.traefik.io/basics/#collected-data\n"
time="2018-05-13T09:55:48Z" level=info msg="Preparing server http &{Address::80 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0x14ed5e50} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2018-05-13T09:55:48Z" level=info msg="Preparing server traefik &{Address::8080 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0x14ed5e60} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2018-05-13T09:55:48Z" level=info msg="Starting server on :80"
time="2018-05-13T09:55:48Z" level=info msg="Starting provider *kubernetes.Provider {\"Watch\":true,\"Filename\":\"\",\"Constraints\":[],\"Trace\":false,\"TemplateVersion\":0,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"\",\"Token\":\"\",\"CertAuthFilePath\":\"\",\"DisablePassHostHeaders\":false,\"EnablePassTLSCert\":false,\"Namespaces\":null,\"LabelSelector\":\"\",\"IngressClass\":\"\"}"
time="2018-05-13T09:55:48Z" level=info msg="Starting server on :8080"
time="2018-05-13T09:55:48Z" level=info msg="ingress label selector is: \"\""
time="2018-05-13T09:55:48Z" level=info msg="Creating in-cluster Provider client"
time="2018-05-13T09:55:48Z" level=info msg="Server configuration reloaded on :80"
time="2018-05-13T09:55:48Z" level=info msg="Server configuration reloaded on :8080"
time="2018-05-13T09:55:53Z" level=info msg="Server configuration reloaded on :80"
time="2018-05-13T09:55:53Z" level=info msg="Server configuration reloaded on :8080"
time="2018-05-13T09:55:55Z" level=info msg="Server configuration reloaded on :80"
time="2018-05-13T09:55:55Z" level=info msg="Server configuration reloaded on :8080"
</code></pre>
| <p>in <a href="https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml</a> there is following ingress definition:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-web-ui
namespace: kube-system
spec:
rules:
- host: traefik-ui.minikube
http:
paths:
- path: /
backend:
serviceName: traefik-web-ui
servicePort: web
</code></pre>
<p>This means you should access traefik-web-ui via the ingress.</p>
<p>If you deployed traefik as Deployment (<a href="https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml</a>), you should check the NodePort returned by <code>kubectl describe svc traefik-ingress-service -n kube-system</code> and use it as your url (<a href="http://traefik-ui.minikube:xxx" rel="nofollow noreferrer">http://traefik-ui.minikube:xxx</a>)</p>
<p>(you don't have to change traefik-web-ui to NodePort)</p>
<p>If you used DeamonSet (<a href="https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml</a>) just use <code>http://traefik-ui.minikube</code>.</p>
<p>If you would like to access <code>traefik-web-ui</code> directly the easiest way would be:
<code>minikube service traefik-web-ui --url</code></p>
|
<p>I have a Kubernetes 1.8.7 cluster deployed on AKS. I am trying to install the NGINX ingress controller using Helm (helm install stable/nginx-ingress --namespace kube-system). I initialized Helm using helm init --service-account default.
I am getting the following error </p>
<blockquote>
<p>Error: release my-release failed: clusterroles.rbac.authorization.k8s.io "my-release-nginx-ingress" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["patch"]} PolicyRule{Resources:["ingresses/status"], APIGroups:["extensions"], Verbs:["update"]}] user=&{system:serviceaccount:kube-system:default 0296ac27-555a-11e8-a9ed-cad48efa2d60 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]</p>
</blockquote>
<p>This command was working until yesterday and I have made no changes, but it stopped working this morning. AKS doesn't support RBAC yet, in my understanding, so creating a role and mapping it to cluster-admin was futile. Any suggestions? </p>
| <p>Add --version 0.18.1 to your <code>helm install</code> or <code>helm upgrade</code> command. Still getting to the bottom of this, but the culprit is <a href="https://github.com/kubernetes/charts/pull/5169" rel="nofollow noreferrer">https://github.com/kubernetes/charts/pull/5169</a></p>
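<p>For example, the command from the question with the version pinned (a sketch; the rbac flag is carried over from the question):</p>
<pre><code>helm install stable/nginx-ingress --namespace kube-system --set rbac.create=false --version 0.18.1
</code></pre>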
|
<p>I have a simple Spring Boot application which I need to deploy to different development and prod namespaces on a Kubernetes cluster using Helm.</p>
<p>I was thinking about keeping multiple application.properties files (application-dev.properties, application-prod.properties), one for each environment, and then creating ConfigMaps from them through values.yaml files, which will also be different for each environment and specified when I execute the Helm upgrade.</p>
<p>Now the question is how do I consume values from the ConfigMaps. As I understand it, I can either mount the properties file inside the container, for example at /deployment/application.properties,</p>
<p>or expose each property as an environment variable inside the container.</p>
<p>But how do I consume them from the Java application?</p>
<p>Also, at the moment, when I create the container image it has the current application.properties embedded under /resources/, and this is what the application uses by default, so I need to override this behaviour when the application is running inside a container, as opposed to when it is just built and run manually on a developer desktop.</p>
| <p>Spring Boot can automatically resolve properties from environment variables. In your application.properties or application.yaml, just use ${MY_ENVIRONMENT_VARIABLE:my-default-value}.<br>
Use Helm to populate your ConfigMap.
Then consume the ConfigMap as environment variables in your deployment manifest.</p>
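<p>For example, a minimal sketch of the container section of a deployment manifest (the ConfigMap name <code>my-app-config</code> and the key are placeholders that your Helm chart would render):</p>
<pre><code>spec:
  containers:
    - name: my-app
      image: my-app:latest
      env:
        - name: MY_ENVIRONMENT_VARIABLE
          valueFrom:
            configMapKeyRef:
              name: my-app-config
              key: my-environment-variable
</code></pre>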
<p>This way you do not need to have multiple application.properties files for dev, int and prod inside your image, keeping the image identical across deployments. </p>
<p>And then in your Helm chart, you can have multiple values files, for example values-dev.yaml or values-int.yaml. You can also dynamically set Helm values from the command line, overriding the yaml file.</p>
<p>I have a demo app in github <a href="https://github.com/balchua/demo" rel="noreferrer">https://github.com/balchua/demo</a>, which uses this pattern.</p>
|
<p>I need to add the argument <code>--authentication-token-webhook</code> to the Kubelet. I can change the file <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> on every node one by one by hand, but that is not much fun )). How can I change Kubelet arguments from a single point?</p>
| <p>You can either</p>
<ol>
<li>configure your Kubernetes workers via tools like <a href="https://puppet.com/" rel="nofollow noreferrer">Puppet</a> or <a href="https://www.ansible.com/" rel="nofollow noreferrer">Ansible</a>. Write your service drop-in once and deploy it via the tool to all nodes. Make sure you don't restart all kubelets at once (keyword serial for Ansible). Also, don't change <code>10-kubeadm.conf</code>, drop in another file like <code>20-kubeadm-extra-args.conf</code> and set the environment variable <code>KUBELET_EXTRA_ARGS</code> (see the sketch after this list).</li>
<li>or use a Kubernetes feature called <a href="https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/" rel="nofollow noreferrer">DynamicKubeletConfig</a>. Beware that this is an alpha feature (as of Kubernetes 1.10) and has to be enabled by hand. I wouldn't recommend this method (yet, as long as it's an alpha feature), but it might become a feasible option in the future.</li>
</ol>
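<p>For the drop-in approach from option 1, a minimal sketch of the file you would distribute to every node (the exact flag value is an assumption based on the question; adjust as needed):</p>
<pre><code># /etc/systemd/system/kubelet.service.d/20-kubeadm-extra-args.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--authentication-token-webhook=true"
</code></pre>
<p>After deploying it, reload systemd and restart the kubelet on each node (for example with a serial Ansible handler): <code>systemctl daemon-reload &amp;&amp; systemctl restart kubelet</code>.</p>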
|
<p>I am trying to get a value from a YAML file within a shell:</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: Pod
spec:
containers:
hostIP: 12.198.110.192
phase: Running
podIP: 10.244.1.9
</code></pre>
<p>With <code>kubectl get pods -l run=hello-kube -o yaml | grep podIP:</code> I get this output:</p>
<pre><code> podIP: 10.244.1.9
</code></pre>
<p>My goal is to save that value in an environment variable, but I only get the <code>key/value</code> pair: </p>
<pre><code>export PODIP=$(kubectl get pods -l run=hello-kube -o yaml | grep podIP)
</code></pre>
| <p>With awk:</p>
<pre><code>kubectl get pods -l run=hello-kube -o yaml | awk '/podIP:/ {print $2}'
</code></pre>
<p>Output:</p>
<pre>
10.244.1.9
</pre>
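<p>As an alternative sketch (assuming your kubectl supports JSONPath output), you can skip the text parsing entirely and export the value directly:</p>
<pre><code>export PODIP=$(kubectl get pods -l run=hello-kube -o jsonpath='{.items[0].status.podIP}')
echo "$PODIP"
</code></pre>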
|
<p>I fail to see why Kubernetes needs a pod selector in a Deployment that can only contain one pod template. Feel free to educate me on why the Kubernetes engineers introduced a selector statement inside the Deployment definition instead of automatically selecting the pods from the template.</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: grpc-service
spec:
type: LoadBalancer
ports:
- name: grpc
port: 8080
targetPort: 8080
protocol: TCP
selector:
app: grpc-test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: grpc-deployment
spec:
replicas: 1
revisionHistoryLimit: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
selector:
matchLabels:
app: grpc-test
template:
metadata:
labels:
app: grpc-test
spec:
containers:
...
</code></pre>
<p>Why not simply define something like this?</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: grpc-service
spec:
type: LoadBalancer
ports:
- name: grpc
port: 8080
targetPort: 8080
protocol: TCP
selector:
app: grpc-test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: grpc-deployment
spec:
replicas: 1
revisionHistoryLimit: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
app: grpc-test
spec:
containers:
...
</code></pre>
| <p>Ah! Funny enough, I have once tried wrapping my head around the concept of label selectors as well before. So, here it goes...</p>
<p>First of all, what the hell are these labels used for? Labels within Kubernetes are the core means of identifying objects. A controller controls pods based on their label instead of their name. In this particular case they are meant to identify the pods belonging to the deployment’s replica set.</p>
<p>You actually didn’t have to explicitly define <code>.spec.selector</code> when using the <code>v1beta1</code> extensions. It would in that case default from <code>.spec.template.labels</code>. However, if you don’t, you can run into problems with <code>kubectl apply</code> once one or more of the labels that are used for selecting change, because <code>kubectl apply</code> will look at <code>kubectl.kubernetes.io/last-applied-configuration</code> when comparing changes, and that annotation will only contain the user input at the time the resource was created and none of the defaulted fields. You’ll get an error because it cannot calculate the diff, like:</p>
<pre><code>spec.template.metadata.labels: Invalid value: {"app":"nginx"}: `selector` does not match template `labels`
</code></pre>
<p>As you can see, this is a pretty big shortcoming since it means you can not change any of the labels that are being used as a selector label or it would completely break your deployment flow. It was “fixed” in <code>apps/v1beta2</code> by requiring selectors to be explicitly defined, disallowing mutation on those fields.</p>
<p>So in your example, you actually don’t have to define them! The creation will work and will use your <code>.spec.template.labels</code> by default. But yeah, in the near future when you have to use <code>v1beta2</code>, the field will be mandatory. I hope this kind of answers your question and I didn’t make it any more confusing ;)</p>
|
<p>Is it possible to send requests to all pods behind a service/ingress controller based on the requests?</p>
<p>My requirement is to send requests to all the pods if the request is /send/all.</p>
| <p>It's not possible, because an ingress controller can't do this (nginx and GLBC based ingresses certainly can't, but due to the way HTTP works I assume this is the case for all ingress controllers).</p>
<p>Depending what your exact case is you have few options.</p>
<p>If your use case is just monitoring, and you can accept Kubernetes controlling which pods receive requests, you can simply set an HTTP <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">liveness probe</a> for your pods. Then you can be sure that if a pod doesn't return a correct response, k8s won't send traffic to it.</p>
<p>If you need to trigger some action on all pods you have few options:</p>
<p>Use messaging - for example, you can use the <a href="https://github.com/kubernetes/charts/tree/master/stable/rabbitmq" rel="nofollow noreferrer">rabbitmq chart</a> to deploy RabbitMQ and write an application that will handle your traffic.</p>
<p>Use a DB - create an app that sets a flag in the DB and add some logic to your app to monitor the flag, or create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">cron job</a> to monitor the flag and trigger the required actions on the pods (in this case you can <a href="https://stackoverflow.com/questions/50202961/kube-config-how-to-make-it-available-to-a-rest-service-deployed-in-kubernetes/50203659#50203659">use a service account</a> to give your cron job pod access to the k8s API to list pods).</p>
|
<p>We have a Cassandra Cluster with 3 pods, in Google Cloud Kubernetes.
Our Cassandra version is 3.9 and we are using the Google images.</p>
<p>I got a problem when I tried to create a Materialized View from a table.</p>
<p>The schema of the table is like:</p>
<pre><code>CREATE TABLE environmental_data (
block_id int,
timestamp timestamp,
device_id int,
sensor_id int,
.
.
.
PRIMARY KEY (block_id, timestamp, device_id, sensor_id)
</code></pre>
<p>I want to create a view with the device_id as cluster key, I tried to do this:</p>
<pre><code>CREATE MATERIALIZED VIEW environmental_data_by_device AS
SELECT block_id, timestamp, device_id, sensor_id,... FROM environmental_data
WHERE block_id is not null
and timestamp is not null
and device_id is not null
and sensor_id is not null
PRIMARY KEY ((device_id), timestamp, sensor_id, block_id)
WITH CLUSTERING ORDER BY (timestamp DESC);
</code></pre>
<p>Locally, with a very small amount of data, everything went well.
But in production, with 80 million rows, 2 pods crashed
and Cassandra looped on this error:</p>
<blockquote>
<p>Unknown exception caught while attempting to update MaterializedView! environmental_data</p>
<p>java.lang.IllegalArgumentException: Mutation of XXXX bytes is too large for the maximum size of XXXX</p>
</blockquote>
<p><strong>There were also many</strong> <code>java.lang.OutOfMemoryError: Java heap space</code></p>
<p>What can I do to be sure the next try will be successful?
Taking down the production Cassandra a second time is not really an option.</p>
<p>I already succeeded in creating a view based on a table, but it was not that big.</p>
| <p>According to the <a href="https://www.datastax.com/dev/blog/understanding-materialized-views" rel="nofollow noreferrer">docs</a>, Cassandra considers updates within the same partition as a single mutation. In your case, I suppose, this means that every new insert with the same device id may result in transferring all the data that was previously written into that partition.
To avoid this, you should consider splitting the data in the materialized view into smaller pieces using the partition key. For example, you may introduce minute- or hour-wide time buckets depending on the measurement frequency.</p>
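<p>A rough sketch of what that could look like (this is an assumption, not a tested schema: it adds a <code>timebucket</code> column that your application must populate on every write, e.g. the timestamp truncated to the hour, and uses it in the view's partition key):</p>
<pre><code>ALTER TABLE environmental_data ADD timebucket timestamp;

CREATE MATERIALIZED VIEW environmental_data_by_device AS
  SELECT * FROM environmental_data
  WHERE device_id IS NOT NULL
    AND timebucket IS NOT NULL
    AND timestamp IS NOT NULL
    AND sensor_id IS NOT NULL
    AND block_id IS NOT NULL
  PRIMARY KEY ((device_id, timebucket), timestamp, sensor_id, block_id)
  WITH CLUSTERING ORDER BY (timestamp DESC);
</code></pre>
<p>This keeps each view partition limited to one device and one time bucket, so no single mutation has to carry the whole history of a device.</p>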
|
<p>I'm using Kubernetes on Google Cloud Platform and have defined some clusters with deployment controllers there.</p>
<p>For each pod in the deployment there is a health check which checks my route (<a href="http://www.example.com" rel="nofollow noreferrer">www.example.com</a>) every 30 seconds, and the response must be 200, otherwise the health check fails.</p>
<p>So I decided to trace it and wrote die() in my index.php; then the health check shows a green status and it works correctly.
I am confused and really do not know what happened with this pod and why it does not work.</p>
<p>Could anybody help me find this issue in my pod and tell me what exactly this health check is doing in GCP?</p>
| <p>A health check is used to verify that a container in a pod is healthy and ready to serve traffic.</p>
<p>So you should check your http status code at first:</p>
<blockquote>
<p>curl -I www.example.com</p>
</blockquote>
<p>What is your http status code in the response?</p>
<p>When you write die() in your index.php, I am sure the HTTP status code is 200,</p>
<p>but when you remove it from your index.php, the HTTP code is something else!</p>
<p>You may be redirecting the user after visiting www.example.com; in that case the HTTP status code is not 200.</p>
<p>Make sure the HTTP status code of the health check URL is 200. If it is not, you could change the health check URL in GKE to something else whose HTTP status code is 200.</p>
|
<p>I'm new to Kubernetes and was trying to deploy an nginx-ingress in an AKS cluster with Helm.</p>
<p>After the cluster was created with the az CLI, I tried to deploy nginx-ingress with this command:</p>
<pre><code>helm install stable/nginx-ingress -n nginx-ingress --namespace kube-system --set rbac.create=false
</code></pre>
<p>But I had the following error in command line:</p>
<pre><code>Error: release nginx-ingress failed: clusterroles.rbac.authorization.k8s.io "nginx-ingress" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["patch"]} PolicyRule{Resources:["ingresses/status"], APIGroups:["extensions"], Verbs:["update"]}] user=&{system:serviceaccount:kube-system:default 5ddc4e18-5607-11e8-b434-0a58ac1f0fc5 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]
</code></pre>
<p>I'm using the latest az cli, Helm and Ubuntu 16.04 (Xenial Xerus).</p>
| <p>I just got over this. It looks to me like RBAC may be enabled now on AKS, but only partially? I just created my cluster on 5/11/18.</p>
<p>I took pieces from these two issues...<br>
<a href="https://github.com/kubernetes/helm/issues/3985" rel="noreferrer">https://github.com/kubernetes/helm/issues/3985</a><br>
<a href="https://github.com/jenkins-x/jx/issues/485" rel="noreferrer">https://github.com/jenkins-x/jx/issues/485</a></p>
<pre><code>kubectl create -f ClusterRoles.yaml # (this yaml came from herzogf in the jenkins issue)
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --upgrade --service-account tiller
helm repo update
helm install stable/nginx-ingress --namespace kube-system
</code></pre>
|
<p>I am creating the deployments/services using REST APIs. I send POST requests with bodies which contain the JSON objects that create the applications on OpenShift. After I call all the APIs, these objects get instantiated.</p>
<p>I have 2 deployments which are dependent on a mongodb deployment, but mongodb takes a little longer to start running, while the two deployments that depend on it start running earlier. This breaks the code inside the 2 deployments as the mongodb connection fails (since it is not up yet).</p>
<p>There are 2 possible ways I can fix this problem.</p>
<ol>
<li><p>I put a delay after I create the mongodb deployment and repeatedly call the API to check whether it is running or not.</p></li>
<li><p>Just like in docker-compose, with the depends_on key, which tells docker-compose that all the dependencies should be started first and then the dependent container.</p></li>
</ol>
<p>Is there any way this could be achieved in openshift?</p>
| <p>Instead of implementing complex logic for dependency handling, use the health checking mechanism of Kubernetes. If your application starts and doesn't see MongoDB, let it crash. Kubernetes will keep restarting it until MongoDB comes online and your application becomes healthy and starts serving as well. Kubernetes won't send traffic to instances that are not yet healthy.</p>
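<p>A minimal sketch of what that looks like in the pod spec (the <code>/health</code> path and port 8080 are placeholders for whatever your application actually exposes):</p>
<pre><code>spec:
  containers:
    - name: my-app
      image: my-app:latest
      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5
      livenessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 10
</code></pre>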
<p>Docs: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a></p>
|
<p>I have a REST microservice and would like to set up nginx as a reverse proxy for it. I am a little confused about which approach to follow:</p>
<ol>
<li>Run nginx in each pod where application code is running.</li>
<li>Run nginx in separate pods and redirect http requests to application code running in separate pods.</li>
</ol>
<p>Can someone explain which one is better? </p>
| <p>In my opinion, running nginx in a separate pod is the better option because that way you can scale the application up and down independently of the proxy. Usually, we run one pod with the proxy and a few with the API. </p>
|
<p>I've created a <a href="https://github.com/kubernetes-incubator/kubespray" rel="nofollow noreferrer">local Kubernetes cluster using ansible</a>.
Everything is running, but now I am trying to connect my <code>kubectl</code> to the cluster (in the VMs).</p>
<p>My cluster is running on <code>https://IP:6443</code></p>
<p>First I got:</p>
<pre><code>$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>So I tried <a href="http://blog.christianposta.com/kubernetes/logging-into-a-kubernetes-cluster-with-kubectl/" rel="nofollow noreferrer">this solution</a>:</p>
<pre><code>kubectl config set-credentials kubeuser/IP --username=kubeuser --password=kubepassword
kubectl config set-cluster IP --insecure-skip-tls-verify=true --server=https://IP:6443
kubectl config set-context default/IP/kubeuser --user=kubeuser/IP --namespace=default --cluster=IP
kubectl config use-context default/IP/kubeuser
</code></pre>
<p>and tried again:</p>
<pre><code>$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:anonymous" cannot list pods in the namespace "default"
</code></pre>
<p>I understand I need to create a clusterrolebinding but I still need an initial "admin-ownership" for that? What step am I missing or doing wrong so I can access my cluster with kubectl and get my pods?</p>
| <p>Kubectl is a command line tool for remote management of a Kubernetes cluster.</p>
<p>Kubectl uses a config file that you must have in order to connect to the cluster. It is possible that your config file is inconsistent due to a lot of major or minor changes. If further analysis of the issue does not show good results, try rm -f ~/.kube/config and start from scratch.</p>
<p>As I see it, you suspect that the problem is with the self-signed certificates. It may require updating the cluster root Certificate Authority (CA) on the clients, then refreshing the local list of valid certificates.</p>
<p>Go to your local CA directory, check whether the ca.crt file exists, then copy it to the clients.
On the clients, perform the following operations:</p>
<pre><code>sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
sudo update-ca-certificates
</code></pre>
<p>I agree with the suggestion provided by Matthew L. Daniel:</p>
<blockquote>
<p>cluster: tag accepts either the filename of the CA certificate or an
"inline" base64-ed version of the PEM you can see it with: </p>
</blockquote>
<pre><code>kubectl config set-cluster $foo --certificate-authority=... --embed-certs=true
</code></pre>
<blockquote>
<p>or you can cheat and use <code>--insecure-skip-tls-verify=true</code> to switch
off the CA verification for testing things</p>
</blockquote>
<p>If you do not have a certificate generated during the installation of Kubernetes, I suggest starting with the <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">Kubernetes TLS documentation</a>.
You may also have a look at Kubernetes cloud providers like GKE or AKS - they make it possible to create the Kubernetes config file remotely, with all certificates included, and then download it to a local ~/.kube directory.
Maybe this will solve some issues with the config if you adapt it to your local installation.
It works like a charm and saves a lot of time. If you’re still stuck, please post the output of <code>kubectl config view</code> here so we can help you with your trouble. (Please remove sensitive data from it before posting.)</p>
|
<p>I have a ReactJS application and I'm deploying it using Kubernetes.</p>
<p>I'm trying to wrap my head around how to inject environment variables into my <code>config.js</code> file from within the Kubernetes deployment file.</p>
<p>I currently have these:
<code>config.js</code> file:</p>
<pre><code>export const CLIENT_API_ENDPOINT = {
default:process.env.URL_TO_SERVICE,
};
</code></pre>
<p>and here's my Kubernetes deployment variables:</p>
<pre><code>"spec": {
"containers": [
{
"name": "container_name",
"image": "image_name",
"env": [
{
"name": "URL_TO_SERVICE",
"value": "https://www.myurl.com"
}
]
</code></pre>
<p>Kinda clueless of why I can't see the environment variable in my <code>config.js</code> file. Any help would be highly appreciated.</p>
<p>Here's my dockerfile:</p>
<pre><code># Dockerfile (tag: v3)
FROM node:9.3.0
RUN npm install webpack -g
WORKDIR /tmp
COPY package.json /tmp/
RUN npm config set registry http://registry.npmjs.org/ && npm install
WORKDIR /usr/src/app
COPY . /usr/src/app/
RUN cp -a /tmp/node_modules /usr/src/app/
#RUN webpack
ENV NODE_ENV=production
ENV PORT=4000
#CMD [ "/usr/local/bin/node", "./index.js" ]
ENTRYPOINT npm start
EXPOSE 4000
</code></pre>
| <p>The kubernetes environment variables are available in your container. So you would think the task here is a version of getting server side configuration variables shipped to your client side code.</p>
<p>But, If your react application is running in a container, you are most likely running your javascript build pipeline when you build the docker image. Something like this:</p>
<pre><code>RUN npm run build
# Run app using nodemon
CMD [ "npm", "start" ]
</code></pre>
<p>When docker is building your container, the environment variables injected by kubernetes aren't yet available. They won't exist until you run the built container on a cluster.</p>
<p>One solution, and this is maybe your shortest path, is to stop building your client side code in the dockerfile and combine the build and run steps in the npm start command. Something like this if you are using webpack:</p>
<pre><code>"start": "webpack -p --progress --config webpack.production.config.js && node index.js"
</code></pre>
<p>If you go this route, then you can use any of the well documented techniques for shipping server side environment variables to your client during the build step : <a href="https://stackoverflow.com/questions/30030031/passing-environment-dependent-variables-in-webpack">Passing environment-dependent variables in webpack</a>. There are similar techniques and tools for all other javascript build tools.</p>
<p>Two: If you are running node, you can continue building your client app in the container, but have the node app write a config.js to the file system on the startup of the node application.</p>
<p>You could do even more complicated things like exposing your config via an api (a variation on the second approach), but this seems like throwing good money after bad. </p>
<p>I wonder if there isn't an easier way. If you have a purely client side app, why not just deploy it as a static site to, say, an amazon or gcloud bucket, firebase, or netlify? This way you just run the build process and deploy to the correct environment. no container needed.</p>
|
<p>I need to pass dynamic env variable to <code>kubectl create</code>. Something like this </p>
<p><code>kubectl create -f app.yaml --Target=prod</code></p>
<p>Based on Target, the code deploys to different servers.</p>
| <p>If you want to avoid installing a 3rd party plugin, you can replace the text using sed "s/original/change/". It worked for me; I used this in a Jenkins shell step. </p>
<p><strong>cat app.yaml | sed "s/l3-apps/l2-apps/" | kubectl create -f -</strong></p>
|
<p>In minikube I can get a service's url via <code>minikube service kubedemo-service --url</code>. How do I get the URL for a <code>type: LoadBalancer</code> service in Docker for Mac or Docker for Windows in Kubernetes mode?</p>
<p><code>service.yml</code> is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kubedemo-service
spec:
type: LoadBalancer
selector:
app: kubedemo
ports:
- port: 80
targetPort: 80
</code></pre>
<p>When I switch to <code>type: NodePort</code> and run <code>kubectl describe svc/kubedemo-service</code> I see:</p>
<pre><code>...
Type: NodePort
LoadBalancer Ingress: localhost
...
NodePort: <unset> 31838/TCP
...
</code></pre>
<p>and I can browse to <code>http://localhost:31838/</code> to see the content. Switching to <code>type: LoadBalancer</code>, I see localhost ingress lines in <code>kubectl describe svc/kubedemo-service</code> but I get <code>ERR_CONNECTION_REFUSED</code> browsing to it.</p>
<p>(I'm familiar with <code>http://localhost:8080/api/v1/namespaces/kube-system/services/kubedemo-service/proxy/</code> though this changes the root directory of the site, breaking css and js references that assume a root directory. I'm also familiar with <code>kubectl port-forward pods/pod-name</code> though this only connects to pods until k8s 1.10.)</p>
<p>How do I browse to a <code>type: LoadBalancer</code> service in Docker for Win or Docker for Mac?</p>
| <p>LoadBalancer will work on Docker-for-Mac and Docker-for-Windows as long as you're running a recent build. Flip the type back to <code>LoadBalancer</code> and update. When you check the describe command output look for the <code>Port: <unset> 80/TCP</code> line. And try hitting <a href="http://localhost:80" rel="noreferrer">http://localhost:80</a>.</p>
|
<p>We have an issue where connecting to AWS RDS in an Istio Service Mesh results in <code>upstream connect error or disconnect/reset before header</code>.
Our egress rule is as below </p>
<pre><code> apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
namespace: <our-namespace>
name: rds-egress-rule-with
spec:
destination:
service: <RDS End point>
ports:
- port: 80
protocol: http
- port: 443
protocol: https
- port: 3306
protocol: https
</code></pre>
<p>The connection works fine to a standalone MySQL on EC2. The connection to AWS RDS works fine without Istio. The problem only occurs in the Istio Service Mesh. </p>
<p>We are using istio in Disabled Mutual TLS Configuration.</p>
| <p>The protocol in your <code>EgressRule</code> definition should be <code>tcp</code>. The <code>service</code> should contain the IP address or a range of IP addresses in CIDR notation.</p>
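<p>Applied to the rule from the question, a rough sketch could look like this (the CIDR below is only a placeholder for the subnet your RDS instance actually resolves to; verify it against the references linked below):</p>
<pre><code>apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  namespace: &lt;our-namespace&gt;
  name: rds-egress-rule
spec:
  destination:
    service: 172.31.0.0/16
  ports:
    - port: 3306
      protocol: tcp
</code></pre>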
<p>Alternatively, you can use the <code>--includeIPRanges</code> flag of <code>istioctl kube-inject</code>, to specify which IP ranges are handled by Istio. Istio will not interfere with the the not-included IP addresses and will just allow the traffic to pass thru.</p>
<p>References:</p>
<ol>
<li><a href="https://istio.io/latest/blog/2018/egress-tcp/" rel="nofollow noreferrer">https://istio.io/latest/blog/2018/egress-tcp/</a></li>
<li><a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services</a></li>
</ol>
|
<p>I have a non-dockerised application that needs to connect to a dockerised application running inside a Kubernetes pod.</p>
<p>Given that pods may die and come back with a different IP address, how can my application detect this? Is there any way to assign a hostname that redirects to whatever pods currently exist?</p>
| <p>You will have to use a Kubernetes service. A service gives you a way to talk to your pods with a static IP and DNS (if your client app is inside the cluster). </p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
<p>You can do it in several ways:</p>
<ol>
<li>Easiest: Use a Kubernetes service with type: NodePort. Then you can access the pod using <a href="http://[nodehost]:[nodeport]" rel="nofollow noreferrer">http://[nodehost]:[nodeport]</a> (see the sketch after this list)</li>
<li>Use kubernetes ingress. See this link for more details (<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a>)</li>
<li>If you are running in the cloud like aws, azure or gce, you can use kubernetes service type LoadBalancer.</li>
</ol>
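<p>A minimal sketch of option 1 (the name, labels and ports are placeholders; the selector must match the labels on your pods):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
</code></pre>
<p>Your non-dockerised application can then reach the pods at http://&lt;any-node-IP&gt;:30080, no matter which pod is currently backing the service.</p>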
|
<p>I'm able to bind a docker volume to a specific container in a swarm thanks to <a href="https://flocker.readthedocs.io/en/latest/" rel="nofollow noreferrer">flocker</a>, but now I would like to have multiple replicas of my server (to do load balancing), and so I'm looking for a way to bind the same data volume to multiple replicas of a docker service.
In the flocker documentation I have found that</p>
<blockquote>
<p>Can more than one container access the same volume? Flocker works by
creating a 1 to 1 relationship of a volume and a container. This means
you can have multiple volumes for one container, and those volumes
will always follow that container.</p>
<p>Flocker attaches volumes to the individual agent host (docker host)
and this can only be one host at a time because Flocker attaches
Block-based storage. Nodes on different hosts cannot access the same
volume, because it can only be attached to one node at a time.</p>
<p>If multiple containers on the same host want to use the same volume,
they can, but be careful because multiple containers accessing the
same storage volume can cause corruption.</p>
<p>Can I attach a single volume to multiple hosts? Not currently, support
from multi-attach backends like GCE in Read Only mode, or NFS-like
backends like storage, or distributed filesystems like GlusterFS would
need to be integrated. Flocker focuses mainly on block-storage uses
cases that attach a volume to a single node at a time.</p>
</blockquote>
<p>So I think it is not possible to do what I want with flocker.
I could use a different orchestrator (k8s) if that would help me, even though I have no experience with it.</p>
<p>I would rather not use NAS/NFS or any distributed filesystems.</p>
<p>Any suggestions?</p>
<p>Thanks in advance.</p>
| <p>In k8s, you can mount a volume into different Pods at the same time if the technology that backs the volume supports shared access.</p>
<p>As mentioned in <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Kubernetes Persistent Volumes</a>:</p>
<blockquote>
<p>Access Modes A PersistentVolume can be mounted on a host in any way
supported by the resource provider. As shown below, providers will
have different capabilities and each PV’s access modes are set to the
specific modes supported by that particular volume. For example, NFS
can support multiple read/write clients, but a specific NFS PV might
be exported on the server as read-only. Each PV gets its own set of
access modes describing that specific PV’s capabilities.</p>
<p>The access modes are:</p>
<ul>
<li>ReadWriteOnce – the volume can be mounted as read-write by a single node</li>
<li>ReadOnlyMany – the volume can be mounted read-only by many nodes</li>
<li>ReadWriteMany – the volume can be mounted as read-write by many nodes</li>
</ul>
</blockquote>
<p>Types of volumes that supports ReadOnlyMany mode:</p>
<ul>
<li>AzureFile</li>
<li>CephFS</li>
<li>FC</li>
<li>FlexVolume</li>
<li>GCEPersistentDisk</li>
<li>Glusterfs</li>
<li>iSCSI</li>
<li>Quobyte</li>
<li>NFS</li>
<li>RBD</li>
<li>ScaleIO</li>
</ul>
<p>Types of volumes that supports ReadWriteMany mode:</p>
<ul>
<li>AzureFile</li>
<li>CephFS</li>
<li>Glusterfs</li>
<li>Quobyte</li>
<li>RBD</li>
<li>PortworxVolume</li>
<li>VsphereVolume(works when pods are collocated)</li>
</ul>
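<p>As an illustration of the mechanics only (NFS is used here just because it is the simplest ReadWriteMany backend to sketch — any backend from the lists above works the same way; server address, path and sizes are placeholders):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10        # placeholder
    path: /exports/shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p>Every replica of your service can then reference <code>shared-pvc</code> in its pod template and all replicas will see the same data.</p>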
|
<p>I want to be able to perform something like <code>docker commit</code> on a container running inside a Kubernetes pod.</p>
<p>Volume snapshots do not meet my criteria because I want to snapshot the complete state of the container, not just the data mounted under some path.</p>
<p>I need this because I provide (trusted) users root access to containers and I want to be able to provide a way to roll the containers back without losing any packages they have installed (or any other changes they have made) before the last snapshot.</p>
<p>It appears <a href="https://github.com/kubernetes/kubernetes/issues/14561" rel="noreferrer">a related feature request</a> was made but nothing came of it.</p>
<p>I am using Google Kubernetes Engine but I am prepared to migrate to another provider that easily supports this.</p>
| <p>Run the bash shell inside a container named guest:</p>
<pre><code>[root@host ~]# docker run -i -t --name guest oraclelinux:6.6 /bin/bash
[root@guest ~]#
</code></pre>
<p>If you use a web proxy, edit the yum configuration on the guest as described in the <a href="http://docs.oracle.com/cd/E52668_01/E54669/html/ol7-proxy-config.html" rel="nofollow noreferrer">Oracle Linux Administrator's Guide for Release 7</a>.</p>
<p>Install the httpd package: </p>
<pre><code>[root@guest ~]# yum install httpd
</code></pre>
<p>If required, create the web content to be displayed under the <code>/var/www/html</code> directory hierarchy on the guest.</p>
<p>Exit the guest by using the docker stop command on the host:</p>
<pre><code>[root@host ~]# docker stop guest
guest
</code></pre>
<p>Create the image <code>mymod/httpd</code> with the tag <code>v1</code> using the ID of the container that you stopped:</p>
<pre><code>[root@host ~]# docker commit -m "ol6 + httpd" -a "A N Other" \
`docker ps -l -q` mymod/httpd:v1
8594abec905e6374db51bed1bfb208804cfb60d96b285efb897db581a01676e9
</code></pre>
<p>Use the <code>-m</code> and <code>-a</code> options to document the image and its author. The command returns the full version of the new image's ID.</p>
<p>If you use the docker images command, the new image now appears in the list: </p>
<pre><code> [root@host ~]# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mymod/httpd v1 8594abec905e 2 minutes ago 938.5 MB
oraclelinux 6 9ac13076d2b5 5 days ago 319.4 MB
oraclelinux 6.6 9ac13076d2b5 5 days ago 319.4 MB
oraclelinux latest 073ded22ac0f 5 days ago 265.2 MB
oraclelinux 7 073ded22ac0f 5 days ago 265.2 MB
oraclelinux 7.0 073ded22ac0f 5 days ago 265.2 MB
</code></pre>
<p>Remove the container named guest.</p>
<pre><code> # docker rm guest
guest
</code></pre>
<p>You can now use the new image to create a container that works as a web server, for example:</p>
<pre><code> # docker run -d --name newguest -p 8080:80 mymod/httpd:v1 /usr/sbin/httpd -D FOREGROUND
7afbbefec5191f632e149f85ae10ed0ba88f1c545daad18cb930e575ef6a3e63
</code></pre>
|
<p>I'm running a kubernetes cluster and one microservice is constantly crashing with exitCode 134. I already changed the resource memory limit to 6Gi</p>
<pre><code>resources: {
limits: {
memory: "6Gi"
}
}
</code></pre>
<p>but the pod never goes above 1.6/1.7Gi. </p>
<p>What may be missing?</p>
| <p>It's not about the Kubernetes memory limit. The default JavaScript heap limit is about 1.76 GB when running on Node.js (V8 engine), which matches the ~1.6/1.7 GiB you see before the crash; exit code 134 is 128 + 6 (SIGABRT), i.e. the process aborting on an out-of-memory heap.</p>
<p>The command line in the Deployment/Pod should be changed to something like <code>node --max-old-space-size=6144 index.js</code>.</p>
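<p>A minimal sketch of how that could look in the Deployment (image name and entry file are placeholders, not taken from your setup):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
      - name: app
        image: my-node-app:latest          # placeholder image
        command: ["node"]
        args: ["--max-old-space-size=6144", "index.js"]
        resources:
          limits:
            memory: "6Gi"
</code></pre>
<p>In practice it can make sense to keep <code>--max-old-space-size</code> a bit below the container memory limit so that non-heap memory doesn't push the container over the limit.</p>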
|
<p>I know that Docker and Kubernetes aren’t direct competitors. Docker is the container platform and containers are coordinated and scheduled by Kubernetes, which is a tool. </p>
<p>What does it really mean and how can I deploy my app on Docker for Azure ? </p>
| <h1>Short answer:</h1>
<ul>
<li><p>Docker (and containers in general) solve the problem of packaging an application and its dependencies. This makes it easy to ship and run everywhere.</p></li>
<li><p>Kubernetes is one layer of abstraction above containers. It is a distributed system that controls/manages containers.</p></li>
</ul>
<p>My advice: because the <a href="https://landscape.cncf.io/" rel="noreferrer">landscape</a> is huge... start learning and putting the pieces of the puzzle together by following a course. Below I have added some information from the:</p>
<ul>
<li><a href="https://www.edx.org/course/introduction-to-kubernetes" rel="noreferrer">Introduction to Kubernetes</a>, free online course from The Linux Foundation.</li>
</ul>
<h2>Why do we need Kubernetes (and other orchestrators) above containers?</h2>
<blockquote>
<p>In the quality assurance (QA) environments, we can get away with running containers on a single host to develop and test applications. However, <strong><em>when we go to production, we do not have the same liberty</em></strong>, as we need to ensure that our applications:</p>
<ul>
<li>Are fault-tolerant</li>
<li>Can scale, and do this on-demand</li>
<li>Use resources optimally</li>
<li>Can discover other applications automatically, and communicate with each other</li>
<li>Are accessible from the external world </li>
<li>Can update/rollback without any downtime.</li>
</ul>
<p><strong><em>Container orchestrators</em></strong> are the tools which group hosts together to form a cluster, and help us fulfill the requirements mentioned above.</p>
<hr>
<p>Nowadays, there are many container orchestrators available, such as:</p>
<ul>
<li><strong>Docker Swarm:</strong> Docker Swarm is a container orchestrator provided by Docker, Inc. It is part of Docker Engine.</li>
<li><strong>Kubernetes:</strong> Kubernetes was started by Google, but now, it is a part of the Cloud Native Computing Foundation project.</li>
<li><strong>Mesos Marathon:</strong> Marathon is one of the frameworks to run containers at scale on Apache Mesos.</li>
<li><strong>Amazon ECS:</strong> Amazon EC2 Container Service (ECS) is a hosted service provided by AWS to run Docker containers at scale on its infrastructure.</li>
<li><strong>Hashicorp Nomad:</strong> Nomad is the container orchestrator provided by HashiCorp.</li>
</ul>
</blockquote>
|
<p>I want to run consul in kubernetes but I am not allowed to run it as user root.</p>
<p>Therefore I added</p>
<pre><code>RUN addgroup consul root
</code></pre>
<p>to the Dockerfile (derived <code>FROM consul:1.0.3</code>)</p>
<p>and start the deployment in kubernetes with</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
xyz.service: consul-deployment
name: consul-deployment
spec:
template:
spec:
securityContext:
runAsUser: 100
</code></pre>
<p>Now I expect kubernetes to start consul with user <code>100</code> (who used to be the user <code>consul</code> when I started it locally in Docker and now should be member in the group <code>root</code>).</p>
<p>But now I get the following the error when the pod is started</p>
<pre><code>chown: /consul/data: Operation not permitted
</code></pre>
<p>The <code>chown</code> is executed in Consuls <a href="https://github.com/hashicorp/docker-consul/blob/389ad67978f3fb9c43ae270e31c2d7b121df46c0/0.X/docker-entrypoint.sh" rel="nofollow noreferrer"><code>docker-entrypoint.sh</code></a> and I guess it (still) fails because user <code>100</code> is not root.</p>
<p>Can anybody explain me how to start a container with a non root user when the container has an entrypoint script expecting to be executed as root?</p>
| <p>I ended up fixing Consul's <code>docker-entrypoint.sh</code> to check if the user is root before executing the chown command, by adding some <code>if [ "$(id -u)" = "0" ]</code> tests.</p>
<p><a href="https://github.com/DatzAtWork/docker-consul/tree/DatzAtWork-patch-non-root" rel="nofollow noreferrer">You can find the patch on GitHub.</a></p>
|
<p>I am currently trying to deploy a spark example jar on a Kubernetes cluster running on IBM Cloud.</p>
<p>If I try to follow these <a href="https://spark.apache.org/docs/latest/running-on-kubernetes" rel="noreferrer">instructions to deploy spark on a kubernetes cluster</a>, I am not able to launch Spark Pi, because I am always getting the error message:</p>
<blockquote>
<p>The system cannot find the file specified</p>
</blockquote>
<p>after entering the code</p>
<pre><code>bin/spark-submit \
--master k8s://<url of my kubernetes cluster> \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.container.image=<spark-image> \
local:///examples/jars/spark-examples_2.11-2.3.0.jar
</code></pre>
<p>I am in the right directory with the <code>spark-examples_2.11-2.3.0.jar</code> file in the <code>examples/jars</code> directory.</p>
| <p>Ensure <code>your.jar</code> file is present inside the container image.</p>
<p><a href="https://spark.apache.org/docs/latest/running-on-kubernetes#cluster-mode" rel="nofollow noreferrer">Instruction</a> tells that it should be there:</p>
<blockquote>
<p>Finally, notice that in the above example we specify a jar with a
specific URI with a scheme of local://. This URI is the location of
the example jar <strong>that is already in the Docker image</strong>.</p>
</blockquote>
<p>In other words, <code>local://</code> scheme is removed from <code>local:///examples/jars/spark-examples_2.11-2.3.0.jar</code> and the path <code>/examples/jars/spark-examples_2.11-2.3.0.jar</code> is expected to be available in a container image.</p>
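<p>One quick sanity check (the <code>/opt/spark</code> path is only what the stock Spark Dockerfile uses — verify against your own image):</p>
<pre><code># list the example jars baked into the image you pass to spark-submit
docker run --rm <spark-image> ls /opt/spark/examples/jars/
# then reference that exact in-image path, e.g.
#   local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
</code></pre>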
|
<p>Base question: When I try to use kube-apiserver on my master node, I get command not found error. How I can install/configure kube-apiserver? Any link to example will help.</p>
<pre><code>$ kube-apiserver --enable-admission-plugins DefaultStorageClass
-bash: kube-apiserver: command not found
</code></pre>
<p>Details: I am new to Kubernetes and Docker and was trying to create a StatefulSet with volumeClaimTemplates. My problem is that the automatic PVs are not created and I get this message in the PVC log: "persistentvolume-controller waiting for a volume to be created". I am not sure if I need to define a DefaultStorageClass, which is why I needed kube-apiserver to define it.</p>
<pre><code>Name: nfs
Namespace: default
StorageClass: example-nfs
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner=example.com/nfs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 3m (x2401 over 10h) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
</code></pre>
<p>Here is get pvc result:</p>
<pre><code>$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs Pending example-nfs 10h
</code></pre>
<p>And get storageclass:</p>
<pre><code>$ kubectl describe storageclass example-nfs
Name: example-nfs
IsDefaultClass: No
Annotations: <none>
Provisioner: example.com/nfs
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
</code></pre>
<p>How can I troubleshoot this issue (e.g. logs for why the storage was not created)?</p>
| <p>You are asking two different questions here, one about kube-apiserver configuration, one about troubleshooting your <code>StorageClass</code>.</p>
<p>Here's an answer for your first question:</p>
<p><code>kube-apiserver</code> is running as a Docker container on your master node. Therefore, the binary is <em>within</em> the container, not on your host system. It is started by the master's <code>kubelet</code> from a file located at <code>/etc/kubernetes/manifests</code>. <code>kubelet</code> is watching this directory and will start any Pod defined here as "static pods".</p>
<p>To configure <code>kube-apiserver</code> command line arguments you need to modify <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> on your master.</p>
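<p>For example, to see the static pod and edit its flags (paths and labels below are the kubeadm defaults — adjust if your setup differs):</p>
<pre><code># the apiserver shows up as a static pod in kube-system
kubectl -n kube-system get pods -l component=kube-apiserver
# inspect the current flags
sudo grep -A40 'command:' /etc/kubernetes/manifests/kube-apiserver.yaml
# edit the manifest; kubelet picks up the change and restarts the apiserver
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
</code></pre>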
|
<p>When I installed docker and ran <code>service docker start</code> I got this message:</p>
<pre><code>Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
</code></pre>
<p>When I ran <code>journalctl -xe</code> I got this message:</p>
<pre><code>8月 02 20:42:11 centos-master systemd[1]: Unit docker-storage-setup.service entered failed state.
8月 02 20:42:11 centos-master systemd[1]: docker-storage-setup.service failed.
8月 02 20:42:11 centos-master systemd[1]: Starting Docker Application Container Engine...
-- Subject: Unit docker.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has begun starting up.
8月 02 20:42:11 centos-master dockerd-current[32672]: time="2017-08-02T20:42:11.667102874+08:00" level=info msg="libcontainerd: new containerd process, pid: 32676"
8月 02 20:42:11 centos-master dockerd-current[32672]: time="2017-08-02T20:42:11.681065709+08:00" level=info msg="[graphdriver] using prior storage driver \"overlay\""
8月 02 20:42:12 centos-master dockerd-current[32672]: time="2017-08-02T20:42:12.045251482+08:00" level=fatal msg="Error starting daemon: SELinux is not supported with the overlay graph driver
8月 02 20:42:12 centos-master systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
8月 02 20:42:12 centos-master systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
</code></pre>
<p>thanks my friends</p>
| <p>Docker version 1.13.1</p>
<p>For CentOS Linux release 7.2.1511 (Core), ensure the following <code>DOCKER_STORAGE_OPTIONS</code> option is set to <code>devicemapper</code>.</p>
<pre><code>bash # vi /etc/sysconfig/docker-storage
...
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper "
...
bash # vi /etc/sysconfig/docker-storage-setup
...
STORAGE_DRIVER=devicemapper
...
</code></pre>
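<p>A rough sketch of applying the change afterwards (careful: clearing <code>/var/lib/docker</code> deletes locally stored images and containers):</p>
<pre><code>systemctl stop docker
rm -rf /var/lib/docker/*          # wipes local images/containers
docker-storage-setup              # if your distro ships this helper
systemctl start docker
docker info | grep 'Storage Driver'
</code></pre>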
|
<p>I followed this tutorial on Medium to deploy a Kubernetes cluster on AWS:
<a href="https://medium.com/containermind/how-to-create-a-kubernetes-cluster-on-aws-in-few-minutes-89dda10354f4" rel="noreferrer">https://medium.com/containermind/how-to-create-a-kubernetes-cluster-on-aws-in-few-minutes-89dda10354f4</a></p>
<p>However, when I launch the Kubernetes dashboard I see the following errors:</p>
<pre><code>configmaps is forbidden: User "kube" cannot list configmaps in the namespace "default"
persistentvolumeclaims is forbidden: User "kube" cannot list persistentvolumeclaims in the namespace "default"
secrets is forbidden: User "kube" cannot list secrets in the namespace "default"
services is forbidden: User "kube" cannot list services in the namespace "default"
ingresses.extensions is forbidden: User "kube" cannot list ingresses.extensions in the namespace "default"
</code></pre>
<p>Why is this happening?</p>
<p><a href="https://i.stack.imgur.com/egTzw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/egTzw.png" alt="enter image description here"></a></p>
| <p>The problem is that step 13 of the tutorial reads:</p>
<blockquote>
<p>kops get secrets kube --type secret -oplaintext</p>
</blockquote>
<p>It should instead be:</p>
<blockquote>
<p>kops get secrets admin -oplaintext</p>
</blockquote>
|
<p>I got the ingress nginx working in <code>gcloud</code>. However, when I see the log with the command <code>kubectl log</code> </p>
<pre><code>$ kubectl logs nginx-ingress-controller-59f55c679c-zcr24
myhost.com/clients"
10.28.0.1 - [10.28.0.1] - - [14/May/2018:09:00:59 +0000] "GET /api/users/2/10 HTTP/1.1" 304 0 "http://myhost.com/clients" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0" 796 0.083 [default-back-main-80] 10.28.0.93:1337 0 0.083 304
2018/05/14 09:01:00 [notice] 10982#10982: *6937489 "/api/(.*)" matches "/api/users/1/10", client: 10.28.0.1, server: myhost.com, request: "GET /api/users/1/10 HTTP/1.1", host: "myhost.com", referrer: "http://myhost.com/clients"
2018/05/14 09:01:00 [notice] 10982#10982: *6937489 rewritten data: "/users/1/10", args: "", client: 10.28.0.1, server: myhost.com, request: "GET /api/users/1/10 HTTP/1.1", host: "myhost.com", referrer: "http://myhost.com/clients"
10.28.0.1 - [10.28.0.1] - - [14/May/2018:09:01:00 +0000] "GET /api/users/1/10 HTTP/1.1" 304 0 "http://myhost.com/clients" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0" 796 0.014 [default-back-main-80] 10.28.0.93:1337 0 0.014 304
</code></pre>
<p>This is only one part of the logs, however, all the ips are coming from <code>10.28.0.1</code>. I would like to see the external ip, or my users' ips.</p>
<p>Furthermore, the root of the ip <code>10.28.*.*</code> seems a bit weird to me. If I display the ip of my containers, you can see that it seems different.</p>
<pre><code>$ kubectl get service | awk '{print $3}'
CLUSTER-IP
10.31.243.114
10.31.245.58
10.31.241.148
10.31.240.1
</code></pre>
<p>I understand that the request in my other containers comes from a private ip, however in the ingress container I should receive the external ips of my users. How can I see these ips?</p>
| <p>It looks like the <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">source IP</a> (in your case, the external IP) must be preserved while using the GKE service. Here, the default traffic policy (<code>Cluster</code>) is in effect, which means that traffic will go through SNAT and get transferred from node to node. During this, the ‘Source IP' (external IP) gets replaced with the node’s IP (a detailed explanation is provided in <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport" rel="nofollow noreferrer">this</a> help center article).</p>
<p>As per the <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport" rel="nofollow noreferrer">article</a>, setting ‘<strong><a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">service.spec.externalTrafficPolicy</a></strong>’ to the value ‘<strong>Local</strong>’ will bypass the SNAT, and you might be able to route traffic directly to the correct node/pod, thus preserving the source IP.</p>
<p>The command for setting this <strong>externalTrafficPolicy</strong> for a service might look like this:</p>
<pre><code>$ kubectl patch svc [service_name] -p '{"spec":{"externalTrafficPolicy":"Local"}}'
</code></pre>
<p>In your case, <code>service_name</code> is the name of the Service that sits in front of your ingress controller pods (note that <code>nginx-ingress-controller-59f55c679c-zcr24</code> is the Pod you pulled the logs from, so the patch has to go to the corresponding Service, not the Pod).</p>
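<p>To locate that Service and confirm the change took effect, something like this can help (names are placeholders):</p>
<pre><code>kubectl get svc --all-namespaces | grep ingress
kubectl get svc <ingress-service-name> -o jsonpath='{.spec.externalTrafficPolicy}'
</code></pre>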
|
<p>What is a simple explanation of resource allocation and definitions in kubernetes?
What does it mean to allocate "1000m" CPU units and 1024Mi of memory?</p>
| <p>(tried to write it in simpler language than the official <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noreferrer">docs</a>)</p>
<p><strong>CPU</strong> </p>
<p>In Kubernetes each CPU core is allocated in units of one "millicore", meaning one virtual core (on a virtual machine) can be divided into 1000 shares of 1 millicore. Allocating 1000 millicores will give a pod one full CPU. Giving more will require the code in the pod to be able to utilize more than one core.</p>
<p><strong>Memory</strong></p>
<p>Very simple. Each Megabyte you allocate is reserved for the pod.</p>
<p><strong>Requests</strong></p>
<p>Minimal resources that are guaranteed to be given to the pod. If there are not enough resources to start a pod on any node it will remain in "Pending" state.</p>
<p><strong>Limits</strong></p>
<p><strong>CPU Limit</strong> Will cause the pod to be throttled when hitting the limit.</p>
<p><strong>Memory Limit</strong> When a pod utilizes all of its memory and asks for more than the limit, <em>it is treated as a memory leak</em> and the pod will get killed and restarted.</p>
<p><strong>Target</strong> (defined in the Horizontal Pod Autoscaler)</p>
<p>Can be applied to CPU, Memory and other custom metrics (which are more complicated to define).</p>
<p>It might be a good idea to set resources for a pod in sizes of <strong>A</strong>, <strong>B</strong> and <strong>C</strong> where A < B < C, with requests = A, target = B and limits = C.
Just remember that a fully loaded node might prevent pods from reaching their "target" and therefore never scale up. </p>
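<p>Put together, the values from the question would be written in a pod/deployment spec roughly like this (container name and image are placeholders; only the <code>resources</code> block is the point here):</p>
<pre><code>containers:
- name: app
  image: my-app:latest        # placeholder
  resources:
    requests:
      cpu: "500m"             # half a core guaranteed (A)
      memory: "512Mi"
    limits:
      cpu: "1000m"            # throttled above one full core (C)
      memory: "1024Mi"        # killed/restarted if exceeded
</code></pre>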
|
<p>I have set up a custom kubernetes cluster on GCE using kubeadm. I am trying to use StatefulSets with persistent storage.</p>
<p>I have the following configuration:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gce-slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
zones: europe-west3-b
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: myname
labels:
app: myapp
spec:
serviceName: myservice
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: mycontainer
image: ubuntu:16.04
env:
volumeMounts:
- name: myapp-data
mountPath: /srv/data
imagePullSecrets:
- name: sitesearch-secret
volumeClaimTemplates:
- metadata:
name: myapp-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: gce-slow
resources:
requests:
storage: 1Gi
</code></pre>
<p>And I get the following error:</p>
<pre><code>Nopx@vm0:~$ kubectl describe pvc
Name: myapp-data-myname-0
Namespace: default
StorageClass: gce-slow
Status: Pending
Volume:
Labels: app=myapp
Annotations: volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 5s persistentvolume-controller Failed to provision volume
with StorageClass "gce-slow": Failed to get GCE GCECloudProvider with error <nil>
</code></pre>
<p>I am in the dark and do not know what is missing. It seems logical that it doesn't work, since the provisioner never authenticates to GCE. Any hints and pointers are very much appreciated.</p>
<p><strong>EDIT</strong></p>
<p>I Tried the solution <a href="https://stackoverflow.com/questions/37421540/container-vm-image-with-gpd-volumes-fails-with-failed-to-get-gce-cloud-provider">here</a>, by editing the config file in kubeadm with <code>kubeadm config upload from-file</code>, however the error persists. The kubadm config looks like this right now:</p>
<pre><code>api:
advertiseAddress: 10.156.0.2
bindPort: 6443
controlPlaneEndpoint: ""
auditPolicy:
logDir: /var/log/kubernetes/audit
logMaxAge: 2
path: ""
authorizationModes:
- Node
- RBAC
certificatesDir: /etc/kubernetes/pki
cloudProvider: gce
criSocket: /var/run/dockershim.sock
etcd:
caFile: ""
certFile: ""
dataDir: /var/lib/etcd
endpoints: null
image: ""
keyFile: ""
imageRepository: k8s.gcr.io
kubeProxy:
config:
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
qps: 5
clusterCIDR: 192.168.0.0/16
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
minSyncPeriod: 0s
scheduler: ""
syncPeriod: 30s
metricsBindAddress: 127.0.0.1:10249
mode: ""
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
kubeletConfiguration: {}
kubernetesVersion: v1.10.2
networking:
dnsDomain: cluster.local
podSubnet: 192.168.0.0/16
serviceSubnet: 10.96.0.0/12
nodeName: mynode
privilegedPods: false
token: ""
tokenGroups:
- system:bootstrappers:kubeadm:default-node-token
tokenTTL: 24h0m0s
tokenUsages:
- signing
- authentication
unifiedControlPlaneImage: ""
</code></pre>
<p><strong>Edit</strong></p>
<p>The issue was resolved in the comments thanks to Anton Kostenko. The last edit coupled with <code>kubeadm upgrade</code> solves the problem.</p>
| <p>The answer took me a while but here it is:</p>
<p>Using the GCECloudProvider in Kubernetes outside of the Google Kubernetes Engine has the following prerequisites (the last point is Kubeadm specific):</p>
<ol>
<li><p>The VM needs to be run with a service account that has the right to provision disks. Info on how to run a VM with a service account can be found <a href="https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances" rel="nofollow noreferrer">here</a></p></li>
<li><p>The Kubelet needs to run with the argument <code>--cloud-provider=gce</code>. For this, the <code>KUBELET_KUBECONFIG_ARGS</code> in <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> has to be edited (a sketch of the edited line is shown after this list). The Kubelet can then be restarted with <code>sudo systemctl restart kubelet</code>.</p></li>
<li><p>The Kubernetes cloud-config file needs to be configured. The file can be found at <code>/etc/kubernetes/cloud-config</code> and the following content is enough to get the cloud provider to work:</p>
<pre><code>[Global]
project-id = "<google-project-id>"
</code></pre></li>
<li><p>Kubeadm needs to have GCE configured as its cloud provider. The config posted in the question works fine for this. However, the <code>nodeName</code> has to be changed.</p></li>
</ol>
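<p>For point 2, a minimal sketch of the edited drop-in file (the existing flags shown here are the kubeadm 1.10 defaults — keep whatever your file already contains and only append the cloud-provider flag):</p>
<pre><code># /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cloud-provider=gce"
</code></pre>
<p>followed by <code>sudo systemctl daemon-reload && sudo systemctl restart kubelet</code> to apply it.</p>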
|
<p>I'm currently trying to wrap my head around learning Go, some details of the kubernetes API I haven't used before and the kubernetes api framework for Go at the same time, and would appreciate your help in understanding the grammar of that framework and why people use it anyways.</p>
<p>Honestly I'm not sure why to use a framework in the first place if it contains the same information as the REST endpoint. Wouldn't it make more sense to just call the API directly via a <code>http</code> library?</p>
<p>And here's one example (taken from <a href="https://github.com/coreos/etcd-operator/blob/master/cmd/operator/main.go#L171" rel="noreferrer">some real code</a>):</p>
<pre><code>pod, err := kubecli.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
</code></pre>
<p>What I feel bothersome is that I have to <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/" rel="noreferrer">look up everything in the API docs</a> and then I additionally need to figure out that <code>/v1/</code> translates to <code>CoreV1()</code>. And I'm not even sure where I could look that up. Also the whole block <code>metav1.GetOptions{}</code> seems completely unnecessary, or which part of a HTTP request is represented by it?</p>
<p>I hope I could make clear what the confusion is and hope for your help in clearing it up.</p>
<h2>Edit:</h2>
<p>Here's also an example, generated from the new operator-framework which sadly doesn't make it much better:</p>
<pre><code> return &v1.Pod{
TypeMeta: metav1.TypeMeta{
Kind: "Pod",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "busy-box",
Namespace: cr.Namespace,
OwnerReferences: []metav1.OwnerReference{
*metav1.NewControllerRef(cr, schema.GroupVersionKind{
Group: v1alpha1.SchemeGroupVersion.Group,
Version: v1alpha1.SchemeGroupVersion.Version,
Kind: "Memcached",
}),
},
Labels: labels,
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Name: "busybox",
Image: "busybox",
Command: []string{"sleep", "3600"},
},
},
},
}
</code></pre>
<p>The <a href="https://v1-9.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#pod-v1-core" rel="noreferrer">API docs</a> don't know anything about this <code>TypeMeta</code> object. And the second element is called <code>ObjectMeta:</code> instead of <code>metadata</code>. I mean, I'm not a magician. How should I know this.</p>
| <p>I'm a bit late, but here is my 2 cents.</p>
<h1>Why to use <code>client-go</code> instead of <code>http</code> library</h1>
<p>There are several pros with <code>client-go</code>.</p>
<ol>
<li><p>Kubernetes resources are defined as <strong>strongly-typed classes</strong>, which means less debugging of misspelled fields and easier refactoring.</p></li>
<li><p>When we manipulate some resources, it <strong>authenticates with the cluster automatically</strong> (<a href="https://github.com/kubernetes/client-go/tree/master/examples" rel="nofollow noreferrer" title="docs">doc</a>); all it needs is a valid config. We don't need to know exactly how the authentication is done.</p></li>
<li><p>It has multiple versions <strong>compatible</strong> with different Kubernetes versions. This makes it much easier to align our code with a specific kubernetes version, without knowing every detail of the API changes.</p></li>
</ol>
<h1>How do I know which class and method should be called</h1>
<p>In <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/" rel="nofollow noreferrer">API Reference</a>, each resource has the latest Group and Version tag.
For example, <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#pod-v1-core" rel="nofollow noreferrer">Pod</a> is group <code>core</code>, version <code>v1</code>, kind <code>Pod</code> in v1.10. </p>
<p><a href="https://godoc.org/k8s.io/client-go/kubernetes" rel="nofollow noreferrer">GoDoc</a> listed all properties and links to detail explanation for every class like <a href="https://godoc.org/k8s.io/api/core/v1#Pod" rel="nofollow noreferrer">Pod</a>.</p>
<p>So the pod list can be found by calling <a href="https://godoc.org/k8s.io/client-go/kubernetes#Clientset.CoreV1" rel="nofollow noreferrer"><code>CoreV1()</code></a>, then <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#PodsGetter" rel="nofollow noreferrer"><code>Pods(namespace string)</code></a>, then <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#PodInterface" rel="nofollow noreferrer"><code>List(opts meta_v1.ListOptions)</code></a>.</p>
|
<p>I am trying to develop an application with microservices and Spring Cloud. I am using Zuul as a proxy server, Eureka as a Service discovery and couple of other microservices like product service, order service etc.</p>
<p>For deployment purpose, I am trying to leverage features of Docker and Kubernetes. While learning kubernetes, I realized that kubernetes has "service" object which works as load balancer and can have static IP which does not change even if pods are rescheduled. In that case, can I remove Eureka server (Which works as discovery as well as client side load balancer) from spring cloud and utilize kubernetes "service" object for the same purpose?</p>
<p>If yes,
how will Zuul identify the service to which it needs to redirect the request? Is there any Kubernetes object which performs the function of the Zuul proxy server?</p>
<p>Also if I remove Eureka and Zuul, how can I make it work in local and non kubernetes env?</p>
| <p>Kubernetes is a container management tool whereas Spring Cloud is a collection of tools to build microservices architectures. There is overlap, such as service discovery, gateways and configuration services. </p>
<p>You can use Spring Cloud to build your microservices architecture by deploying the jars yourself. You don't need kubernetes as such for this.</p>
<p>Difference :
<a href="https://developers.redhat.com/blog/2016/12/09/spring-cloud-for-microservices-compared-to-kubernetes/" rel="nofollow noreferrer">https://developers.redhat.com/blog/2016/12/09/spring-cloud-for-microservices-compared-to-kubernetes/</a></p>
|
<p>I am currently running mysql, wordpress and my custom node.js + express application on kubernetes pods in the same cluster. Everything is working quite well but my problem is that all the data will be reset if I have to rerun the deployments, services and persistent volumes.</p>
<p>I have configured wordpress quite extensively and would like to save all the data and insert it again after redeploying everything. How is this possible to do or am I thinking something wrong? I am using the mysql:5.6 and wordpress:4.8-apache images.</p>
<p>I also want to transfer my configuration to my other team members so they don't have to configure wordpress again.</p>
<p>This is my mysql-deploy.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: hidden
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
</code></pre>
<p>This the wordpress-deploy.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: wordpress
labels:
app: wordpress
spec:
ports:
- port: 80
selector:
app: wordpress
tier: frontend
type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wp-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: frontend
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress:4.8-apache
name: wordpress
env:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
- name: WORDPRESS_DB_PASSWORD
value: hidden
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wp-pv-claim
</code></pre>
| <blockquote>
<p>How is this possible to do or am I thinking something wrong?</p>
</blockquote>
<p>It might be better to move configuration mindset from working directly on base container instances to configuring container images/manifests. You have several approaches there, just some pointers:</p>
<ul>
<li><p>Create your own Dockerfile based on the images you referenced and bundle the configuration files inside them. This is a viable approach if the configuration is more or less static and can be handled with env vars or infrequent image builds, but it requires docker registry handling to work with k8s. In this approach you would add all changed files to the docker build context and then <code>COPY</code> them to the appropriate places.</p></li>
<li><p>Create ConfigMaps and mount them on the container filesystem as config files where a change is required. This way you can still use the base images you reference directly, but changes are limited to kubernetes manifests instead of rebuilding docker images. The approach in this case would be to identify all changed files on the container, then create kubernetes ConfigMaps out of them and finally mount them appropriately. I don't know exactly which things you are changing, but here is an example of how you can place an nginx config in a ConfigMap: </p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: cm-nginx-example
data:
nginx.conf: |
server {
listen 80;
...
# actual config here
...
}
</code></pre>
<p>and then mount it in container in appropriate place like so:</p>
<pre><code>...
containers:
- name: nginx-example
image: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: /etc/nginx/conf.d
name: nginx-conf
volumes:
- name: nginx-conf
configMap:
name: cm-nginx-example
items:
- key: nginx.conf
path: nginx.conf
...
</code></pre></li>
<li><p>Mount persistent volumes (subpaths) on places where you need configs and keep configuration on persistent volumes.</p></li>
</ul>
<p>Personally, I'd probably opt for ConfigMaps since you can easily share/edit those with k8s deployments and configuration details are not lost as some mystical 'extensive work' but can be reviewed, tweaked and stored to some code versioning system for version tracking...</p>
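<p>As a side note, you don't have to write those ConfigMaps by hand — they can be generated from the files you already tuned (file names below are just examples):</p>
<pre><code># create a ConfigMap straight from an existing, already-configured file
kubectl create configmap wp-config --from-file=wp-config.php
# or only render the YAML so it can be committed to version control
kubectl create configmap wp-config --from-file=wp-config.php --dry-run -o yaml > wp-config-cm.yaml
</code></pre>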
|
<p>kops update cluster xyz.testexample.com</p>
<p>error doing DNS lookup for NS records for "xyz.testexample.com": lookup xyz.testexample.com on 192.168.0.1:53: no such host</p>
| <blockquote>
<pre><code>error doing DNS lookup for NS records for "xyz.testexample.com": lookup xyz.testexample.com on 192.168.0.1:53: no such host
</code></pre>
</blockquote>
<p>That error means Kops is unable to resolve the DNS name <code>xyz.testexample.com</code>.</p>
<p>DNS names are on Kops' critical path. Please check DNS resolution on your PC; managing the cluster will be impossible without correct resolution.</p>
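<p>A quick way to check the resolution from the machine you run kops on (replace the domain with yours):</p>
<pre><code># the NS records for the cluster subdomain must resolve
dig NS xyz.testexample.com +short
# or
nslookup -type=NS xyz.testexample.com
</code></pre>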
<p>As for creating a new cluster with private DNS (I see in the comments that you tried to do it) - check that <a href="https://github.com/kubernetes/kops/blob/master/docs/aws.md" rel="nofollow noreferrer">how-to</a>; you need the section "Using Public/Private DNS (Kops 1.5+)".</p>
|
<p>I have a new laptop and kubernetes cluster running on Google Cloud Platform. How can I access that cluster from local machine to execute kubectl commands, open dashboard etc?</p>
<p>That is not clearly stated in the documentation.</p>
| <p>From your local workstation, you need to have the <code>gcloud</code> tool installed and properly configured to connect to the correct GCE account. Then you can run:</p>
<pre><code> gcloud container clusters get-credentials [CLUSTER_NAME]
</code></pre>
<p>This will setup <code>kubectl</code> to connect to your kubernetes cluster.</p>
<p>Of course you'll need to install kubectl either using gcloud with:</p>
<pre><code>gcloud components install kubectl
</code></pre>
<p>Or using specific instructions for your operating system.</p>
<p>Please check the following link for more details: <a href="https://cloud.google.com/kubernetes-engine/docs/quickstart" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/quickstart</a></p>
<p>Once you have kubectl access you can deploy and access the kubernetes dashboard as described here: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/</a></p>
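<p>For the dashboard specifically, the usual pattern is to tunnel through the API server with <code>kubectl proxy</code> rather than exposing it publicly (the exact dashboard URL depends on the dashboard version you deploy — see the link above):</p>
<pre><code>kubectl proxy
# then open, depending on the dashboard version, something like:
#   http://localhost:8001/ui
#   or http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
</code></pre>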
|
<p>Following the Deployment example in the docs. I'm trying to deploy the example nginx. With the following config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
</code></pre>
<p>So far, the deployment always hangs. I tried to see if for any reason I needed a pod named nginx to be deployed already. That didn't solve the problem.</p>
<pre><code>$ sudo kubectl get deployments
NAME UPDATEDREPLICAS AGE
nginx-deployment 0/3 34m
$ sudo kubectl describe deployments
Name: nginx-deployment
Namespace: default
CreationTimestamp: Sat, 30 Jan 2016 06:03:47 +0000
Labels: app=nginx
Selector: app=nginx
Replicas: 0 updated / 3 total
StrategyType: RollingUpdate
RollingUpdateStrategy: 1 max unavailable, 1 max surge, 0 min ready seconds
OldReplicationControllers: nginx (2/2 replicas created)
NewReplicationController: <none>
No events.
</code></pre>
<p>When I check the events from kubernetes I see no events which belong to this deployment. Has anyone experienced this before?</p>
<p>The versions are as followed:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"}
</code></pre>
| <p>If the deployment is not creating any pods, you could have a look at the events; an error might be reported there, for example:</p>
<pre><code>kubectl get events --all-namespaces
NAMESPACE LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
default 8m 2d 415 wordpress Ingress Normal Service loadbalancer-controller no user specified default backend, using system default
kube-lego 2m 8h 49 kube-lego-7c66c7fddf ReplicaSet Warning FailedCreate replicaset-controller Error creating: pods "kube-lego-7c66c7fddf-" is forbidden: service account kube-lego/kube-lego2-kube-lego was not found, retry after the service account is created
</code></pre>
<p>Also have a look at <code>kubectl get rs --all-namespaces</code>.</p>
|
<p>I need to start kubernetes pods in a sequence like pod2 should start only when pod1 is up and running.</p>
<p>we can do this in <code>docker-compose.yml</code> using <code>depends_on</code></p>
| <p>No, there is no built-in dependency management equivalent to <code>depends_on</code> available. In general, we assume loosely coupled services and as a good practice there should be no hard dependency in terms of start-up order, but retries and timeouts should be used. If you have to hardcode dependencies, you can use <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init containers</a>. In your case, an init container in <code>pod2</code> could simply query in a while loop whether <code>pod1</code> (or better: the service in front of it) is ready. The main container in <code>pod2</code> is guaranteed only to be launched if and when the init container exits successfully. </p>
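<p>A minimal sketch of such an init container (the service name <code>pod1-service</code> and the readiness check are assumptions — adapt them to whatever "ready" means for your first pod):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  initContainers:
  - name: wait-for-pod1
    image: busybox
    command: ['sh', '-c', 'until nslookup pod1-service; do echo waiting for pod1-service; sleep 2; done']
  containers:
  - name: main
    image: my-app:latest   # placeholder
</code></pre>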
|
<p>I have a test executor <code>Pod</code> in K8s cluster created through helm, which asks for a dynamically created <code>PersistentVolume</code> where it stores the test results.</p>
<p>Now I would like to get the contents of this volume. It seems like quite a natural thing to do.
I would expect some <code>kubectl download pv <id></code>. But I can't google up anything.</p>
<p><strong>How can I get the contents of a <code>PersistentVolume</code>?</strong></p>
<p>I am in AWS EKS; so AWS API is also an option. Also I can access ECR so perhaps I could somehow store it as an image and download?</p>
<p>Or, in general, I am looking for a way to transfer a directory, can be even in an archive. But It should be <em>after</em> the container finished and doesn't run anymore.</p>
| <p>I can think about two options to fulfill your needs:</p>
<ol>
<li>Create a pod with the PV attached to it and use <code>kubectl cp</code> to copy the contents wherever you need. You could for example use a <code>PodSpec</code> similar to the following:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: dataaccess
spec:
containers:
- name: alpine
image: alpine:latest
command: ['sleep', 'infinity']
volumeMounts:
- name: mypvc
mountPath: /data
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: mypvc
</code></pre>
<p>Please note that <code>mypvc</code> should be the name of the <code>PersistentVolumeClaim</code> that is bound to the PV you want to copy data from.</p>
<p>Once the pod is running, you can run something like below to copy the data from any machine that has <code>kubectl</code> configured to connect to your cluster:</p>
<pre><code>kubectl cp dataaccess:/data data/
</code></pre>
<ol start="2">
<li>Mount the PV's EBS volume in an EC2 instance and copy the data from there. This case is less simple to explain in detail because it needs a little more context about what you're trying to achieve.</li>
</ol>
|
<p>I am learning Kubernetes right now. I want to enter a pod which is in a remote cluster, but I don't know its entrypoint. I can't find it using <strong>$kubectl describe pod podname</strong>.</p>
| <p>If you want to access the shell in a container (Pod), you can use the following command. </p>
<pre><code>kubectl exec POD -c CONTAINER -- COMMAND [args...]
</code></pre>
<p>For example, IF the Pod has bash shell, you can access it with the following command. </p>
<pre><code>kubectl exec -it shell-demo -- /bin/bash
</code></pre>
<p>You will be able to access the shell </p>
<blockquote>
<p>root@shell-demo:/# ls /</p>
</blockquote>
<p>Here is the reference <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">get-shell-running-container</a></p>
|
<p>I have a Kubernetes cluster with a master node and two other nodes:</p>
<pre><code>sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 4h v1.10.2
kubernetes-node1 Ready <none> 4h v1.10.2
kubernetes-node2 Ready <none> 34m v1.10.2
</code></pre>
<p>Each of them is running on a VirtualBox Ubuntu VM, accessible from the guest computer:</p>
<pre><code>kubernetes-master (192.168.56.3)
kubernetes-node1 (192.168.56.4)
kubernetes-node2 (192.168.56.6)
</code></pre>
<p>I deployed an nginx server with two replicas, having one pod per kubernetes-node-x:</p>
<pre><code>sudo kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-64ff85b579-5k5zh 1/1 Running 0 8s 192.168.129.71 kubernetes-node1
nginx-deployment-64ff85b579-b9zcz 1/1 Running 0 8s 192.168.22.66 kubernetes-node2
</code></pre>
<p>Next I expose a service for the nginx-deployment as a NodePort to access it from outside the cluster:</p>
<pre><code>sudo kubectl expose deployment/nginx-deployment --type=NodePort
sudo kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h
nginx-deployment NodePort 10.96.194.15 <none> 80:32446/TCP 2m
sudo kubectl describe service nginx-deployment
Name: nginx-deployment
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP: 10.96.194.15
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32446/TCP
Endpoints: 192.168.129.72:80,192.168.22.67:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>I can access each pod in a node directly using their node IP </p>
<pre><code>kubernetes-node1 http://192.168.56.4:32446/
kubernetes-node2 http://192.168.56.6:32446/
</code></pre>
<p>But, I thought that K8s provided some kind of external cluster ip that balanced the requests to the nodes from the outside. What is that IP??</p>
| <blockquote>
<p>But, I thought that K8s provided some kind of external cluster ip that balanced the requests to the nodes from the outside. What is that IP??</p>
</blockquote>
<ul>
<li><p>Cluster IP is internal to Cluster. Not exposed to outside, it is for intercommunication across the cluster.</p></li>
<li><p>Indeed, there is the LoadBalancer type of service that can do the trick you need, only it depends on cloud providers or minikube/docker edge to work properly.</p></li>
</ul>
<blockquote>
<p>I can access each pod in a node directly using their node IP </p>
</blockquote>
<ul>
<li>Actually you don't access them individually that way. NodePort does a bit different trick, since it is essentially loadbalancing requests from outside on ANY exposed node IP. In a nutshell, if you hit any of node's IPs with exposed NodePort, kube-proxy will make sure that required service gets it and then service is doing round-robin through active pods, so although you hit specific node IP, you don't necessarily get pod running on that specific node. More details on that you can find here: <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="noreferrer">https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0</a>, as author there said, not technically most accurate representation, but attempt to show on logical level what is happening with NodePort exposure:</li>
</ul>
<p><a href="https://i.stack.imgur.com/3aN0Am.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3aN0Am.png" alt="NodePort Illustration"></a></p>
<ul>
<li><p>As a sidenote, in order to do this on bare metal and do ssl or such, you need to provision ingress of your own. Say, place one nginx on specific node and then reference all appropriate services you want exposed (mind fqdn for service) as upstream(s) that can run on multiple nodes with as many nginx of their own as desired - you don't need to handle exact details of that since k8s runs the show. That way you have one node point (ingress nginx) with known IP address that is handling incoming traffic and redirecting it to services inside k8s that can run across any node(s). I suck with ascii art but will give it a try:</p>
<pre><code>(outside) -> ingress (nginx) +--> my-service FQDN (running accross nodes):
[node-0] | [node-1]: my-service-pod-01 with nginx-01
| [node 2]: my-service-pod-02 with nginx-02
| ...
+--> my-second-service FQDN
| [node-1]: my-second-service-pod with apache?
...
</code></pre>
<p>In above sketch you have nginx ingress on node-0 (known IP) that takes external traffic and then handles my-service (running on two pods on two nodes) and my-second-service (single pod) as upstreams. You only need to expose FQDN on services for this to work without worrying about details of IPs of specific nodes. More info you can find in documentation: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/</a></p>
<p>Also, way better than my ascii art is this representation from the same link as in the previous point that illustrates the idea behind ingress:
<a href="https://i.stack.imgur.com/Wh21i.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Wh21i.png" alt="Ingress Illustration"></a></p></li>
</ul>
<h3>Updated for comments</h3>
<blockquote>
<p>Why isn't the service load balancing the used pods from the service?</p>
</blockquote>
<ul>
<li>This can happen for several reasons. Depending on how your Liveness and Readiness Probes are configured, maybe the service still doesn't see the pod as out of service. Due to the async nature of a distributed system such as k8s we experience temporary loss of requests when pods get removed during, for example, rolling updates and similar. Secondly, depending on how your kube-proxy was configured, there are options to limit it. According to the official documentation (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport</a>) you can change kube-proxy behavior using <code>--nodeport-addresses</code>. It turns out that round-robin was the old kube-proxy behavior; apparently the new one is random. Finally, to exclude connection and session issues from the browser, did you try this from an anonymous session as well? Do you have DNS cached locally, maybe? </li>
</ul>
<blockquote>
<p>Something else, I killed the pod from node1, and when calling to node1 it didn't use the pod from node 2.</p>
</blockquote>
<ul>
<li>This is a bit strange. It might be related to the above mentioned probes, though. According to the official documentation this should not be the case. We had NodePort behaving in line with the official documentation mentioned above: <code>and each Node will proxy that port (the same port number on every Node) into your Service</code>. But if that is your case then probably LB or Ingress, maybe even ClusterIP with external address (see below) can do the trick for you.</li>
</ul>
<blockquote>
<p>if the service is internal (ClusterIP) ... does it load balance to any of the pods in the nodes</p>
</blockquote>
<ul>
<li><p>Most definitely yes. One more thing, you can use this behavior to also expose 'load balanced' behavior in 'standard' port range as opposed to 30k+ from NodePort. Here is excerpt of service manifest we use for ingress controller.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: ns-my-namespace
name: svc-nginx-ingress-example
labels:
name: nginx-ingress-example
role: frontend-example
application: nginx-example
spec:
selector:
name: nginx-ingress-example
role: frontend-example
application: nginx-example
ports:
- protocol: TCP
name: http-port
port: 80
targetPort: 80
- protocol: TCP
name: ssl-port
port: 443
targetPort: 443
externalIPs:
- 123.123.123.123
</code></pre>
<p>Note that in the above example the imaginary 123.123.123.123 exposed with <code>externalIPs</code> represents the IP address of one of our worker nodes. Pods running in the <code>svc-nginx-ingress-example</code> service don't need to be on this node at all, but they still get the traffic routed to them (and load balanced across the pods as well) when that IP is hit on the specified port. </p></li>
</ul>
|
<p>I want to set up a kubernetes cluster that supports <a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="nofollow noreferrer">RBAC</a> and have multiple admin users. If I create an user like <a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">this</a>, I end up creating ordinary user where I need to specify individual capabilities using the ROLES config file. Instead, how do I create multiple users having admin privileges?</p>
| <p>Here is an easier way to do it. <a href="https://kubernetes.io/blog/2017/10/using-rbac-generally-available-18/" rel="nofollow noreferrer">https://kubernetes.io/blog/2017/10/using-rbac-generally-available-18/</a></p>
<pre><code>kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin \
--user=user1 --user=user2 --group=group1
</code></pre>
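<p>You can then verify the binding took effect, for example:</p>
<pre><code># should print "yes" once the binding is in place
kubectl auth can-i '*' '*' --as=user1
</code></pre>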
|
<p>I'm trying to integrate Kubernetes cluster with Gitlab for using the Gitlab Review Apps feature.</p>
<ul>
<li>Kubernetes cluster is created via Rancher 1.6</li>
<li>Running the <code>kubectl get all</code> from the kubernetes shell gives</li>
</ul>
<blockquote>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/my-service LoadBalancer x.x.144.67 x.x.13.89 80:32701/TCP 30d
svc/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 30d
</code></pre>
</blockquote>
<ul>
<li>On the Gitlab <code>CI / CD</code> > <code>Kubernetes</code> page, we need to enter mainly 3 fields:
<ol>
<li>API URL</li>
<li>CA Certificate</li>
<li>Token</li>
</ol></li>
</ul>
<h2>API URL</h2>
<ul>
<li>If I'm not wrong, we can get the Kubernetes API URL from <code>Rancher Dashboard</code> > <code>Kubernetes</code> > <code>CLI</code> > <code>Generate Config</code> and copy the <code>server</code> url under <code>cluster</code></li>
</ul>
<blockquote>
<pre><code>apiVersion: v1
kind: Config
clusters:
- cluster:
api-version: v1
insecure-skip-tls-verify: true
server: "https://x.x.122.197:8080/r/projects/1a7/kubernetes:6443"
</code></pre>
</blockquote>
<h2>CA Certificate & Token?</h2>
<ul>
<li>Now, the question is, where to get the CA Certificate (pem format) and the Token?</li>
</ul>
<p>I tried all the <code>ca.crt</code> and <code>token</code> values from all the namespaces from the Kubernetes dashboard, but I'm getting this error on the Gitlab when trying to install <code>Helm Tiller</code> application:</p>
<blockquote>
<pre><code>Something went wrong while installing Helm Tiller
Can't start installation process
</code></pre>
</blockquote>
<p>Here is how my secrets page look like
<a href="https://i.stack.imgur.com/15Imj.png" rel="noreferrer"><img src="https://i.stack.imgur.com/15Imj.png" alt="enter image description here"></a></p>
| <p>I'm also struggling with kubernetes and GitLab. I've created a couple of single-node "clusters" for testing, one with <a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="noreferrer"><code>minikube</code></a> and another via <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer"><code>kubeadm</code></a>.</p>
<p>I answered this question on the <a href="https://forum.gitlab.com/t/kubernetes-integration-something-went-wrong-while-installing-helm-tiller/15627/3" rel="noreferrer">GitLab forum</a> but I'm posting my solution below:</p>
<h3>API URL</h3>
<p>According to the <a href="https://gitlab.com/help/user/project/clusters/index.md#adding-an-existing-kubernetes-cluster" rel="noreferrer">official documentation</a>, the API URL is just <code>https://hostname:port</code>, without a trailing slash.</p>
<h3>List secrets</h3>
<p>First, I listed the secrets as usual:</p>
<pre><code>$ kubectl get secrets
NAME TYPE DATA AGE
default-token-tpvsd kubernetes.io/service-account-token 3 2d
k8s-dashboard-sa-token-XXXXX kubernetes.io/service-account-token 3 1d
</code></pre>
<h3>Get the service token</h3>
<pre><code>$ kubectl -o json get secret k8s-dashboard-sa-token-XXXXX | jq -r '.data.token' | base64 -d
eyJhbGci ... sjcuNA8w
</code></pre>
<h3>Get the CA certificate</h3>
<p>Then I got the CA certificate directly from the JSON output via jq with a custom selector:</p>
<pre><code>$ kubectl -o json get secret k8s-dashboard-sa-token-XXXXX | jq -r '.data."ca.crt"' | base64 -d - | tee ca.crt
-----BEGIN CERTIFICATE-----
MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
... ... ... ... ... ...
FT55iMtPtFqAOnoYBCiLH6oT6Z1ACxduxPZA/EeQmTUoRJG8joczI0V1cnY=
-----END CERTIFICATE-----
</code></pre>
<h3>Verify the CA certificate</h3>
<p>With the CA certificate on hand you can <code>verify</code> as usual:</p>
<pre><code>$ openssl x509 -in ca.crt -noout -subject -issuer
subject= /CN=kubernetes
issuer= /CN=kubernetes
$ openssl s_client -showcerts -connect 192.168.100.20:6443 < /dev/null &> apiserver.crt
$ openssl verify -verbose -CAfile ca.crt apiserver.crt
apiserver.crt: OK
</code></pre>
|
<p>I installed a Kubernetes master successfully using kubeadm on a VM (VirtualBox). The problem is that if I stop the machine and restart it, the master node seems to be down:</p>
<pre><code>kubectl get nodes
The connection to the server 10.0.x.x:6443 was refused - did you specify the right host or port?
</code></pre>
<p>How can I make sure it will always be up after restarting the VM?</p>
<p><strong>UPDATE:</strong></p>
<p>After restarting VM this is what I have to do to make the master node start:</p>
<pre><code>sudo swapoff -a
sudo systemctl restart kubelet.service
</code></pre>
<p>Why? How can I fix it so that it starts without having to input that?</p>
| <blockquote>
<p>The problem is that if I stop the machine and restart it the master node seems to be down</p>
</blockquote>
<ul>
<li><p>Since the kubeadm installation worked properly before the restart, it seems an environment variable is missing after the restart. Try to run this before <code>kubectl get nodes</code>:</p>
<pre><code>export KUBECONFIG=/etc/kubernetes/admin.conf
</code></pre>
<p>If it works normally after that, then you need to make sure the <code>KUBECONFIG</code> environment variable is set again after a restart, e.g. by adding the export to <code>.bashrc</code> or similar...</p></li>
</ul>
<h3>Edited:</h3>
<blockquote>
<p>Why? How can I fix it so that it starts without having to input that?</p>
</blockquote>
<ul>
<li><p>Ah, the swap file is teasing you. By default kubelet will not start if swap is enabled. You have two options:</p>
<ul>
<li><strong>Remove swap</strong>: That's easy, just disable it as you already listed, but make it permanent by commenting out the swap line in the <code>/etc/fstab</code> file. Add <code>#</code> before the line creating the swap mount point, and the next time you restart swap won't be there.</li>
<li><p><strong>Allow kubelet to run with swap enabled</strong>: I know, not recommended by the documentation, but if you like to live dangerously, you can add/edit the following line in <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code>:</p>
<pre><code>Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
</code></pre>
<p>and after the next restart you will be able to run kubelet with swap enabled.</p></li>
</ul></li>
</ul>
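<p>A minimal sketch of the "remove swap" option (the <code>sed</code> pattern assumes space-separated columns in <code>/etc/fstab</code>, so double-check the file afterwards):</p>
<pre><code>sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
</code></pre>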
|
<p>So I have just started using Kubernetes API server and I tried this example :</p>
<pre><code>from kubernetes import client, config


def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    config.load_kube_config()

    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(watch=False)
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))


if __name__ == '__main__':
    main()
</code></pre>
<p>This worked, but it returned the pods that are on my local minikube. I want to get the pods that are on the Kubernetes server here:
<code>http://192.168.237.115:8080</code>
How do I do that?</p>
<p>When I do <code>kubectl config view</code> , I get this :</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/piyush/.minikube/ca.crt
server: https://192.168.99.100:8443
name: minikube
contexts:
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
client-certificate: /home/piyush/.minikube/apiserver.crt
client-key: /home/piyush/.minikube/apiserver.key
</code></pre>
<p>I know this is for the local cluster I set up. I want to know how to modify this to make api requests to kubernetes server on <code>http://192.168.237.115:8080</code> </p>
| <p>You can actually create a simple API wrapper. This way you can pass in different yaml configuration files, which I imagine may point at different hosts.</p>
<pre><code>import yaml
from kubernetes import client
from kubernetes.client import Configuration
from kubernetes.config import kube_config


class K8s(object):
    def __init__(self, configuration_yaml):
        self.configuration_yaml = configuration_yaml
        self._configuration_yaml = None

    @property
    def config(self):
        with open(self.configuration_yaml, 'r') as f:
            if self._configuration_yaml is None:
                self._configuration_yaml = yaml.load(f)
        return self._configuration_yaml

    @property
    def client(self):
        k8_loader = kube_config.KubeConfigLoader(self.config)
        call_config = type.__call__(Configuration)
        k8_loader.load_and_set(call_config)
        Configuration.set_default(call_config)
        return client.CoreV1Api()


# Instantiate your kubernetes class and pass in config
kube_one = K8s(configuration_yaml='~/.kube/config1')
kube_one.client.list_pod_for_all_namespaces(watch=False)

kube_two = K8s(configuration_yaml='~/.kube/config2')
kube_two.client.list_pod_for_all_namespaces(watch=False)
</code></pre>
<p>Also another neat reference in libcloud. <a href="https://github.com/apache/libcloud/blob/trunk/libcloud/container/drivers/kubernetes.py" rel="noreferrer">https://github.com/apache/libcloud/blob/trunk/libcloud/container/drivers/kubernetes.py</a>. </p>
<p>Good luck! Hope this helps! :) </p>
|
<p>I've created a cluster on Google Kubernetes Engine (previously Google Container Engine) and installed the Google Cloud SDK and the Kubernetes tools with it on my Windows machine.</p>
<p>It worked well for some time, and, out of nowhere, it stopped working. Every command I'm issuing with <code>kubectl</code> provokes the following:</p>
<pre><code>Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<p>I've searched Google, the Kubernetes Github Issues, Stack Overflow, Server Fault ... without success.</p>
<p>I've tried the following:</p>
<ul>
<li>Restart my computer</li>
<li>Change wifi connection</li>
<li>Check that I'm not somehow using a proxy</li>
<li>Delete and re-create my cluster</li>
<li>Uninstall the Google Cloud SDK (and kubectl) from my machine and re-install them</li>
<li>Delete my <code>.kube</code> folder (config and cache)</li>
<li>Check my <code>.kube/config</code></li>
<li>Change my cluster's version (tried 1.8.3-gke.0 and 1.7.8-gke.0)</li>
<li>Retry several hours later</li>
<li>Tried both on PowerShell and cmd.exe</li>
</ul>
<p>Note that the cluster seem to work perfectly, since I have my application running on it and can interact with it normally through the Google Cloud Shell.</p>
<p>Running:</p>
<pre><code>gcloud container clusters get-credentials cluster-2 --zone europe-west1-b --project ___
kubectl get pods
</code></pre>
<p>works on Google Cloud Shell and provokes the <code>TLS handshake timeout</code> on my machine.</p>
| <p>For others seeing this issue, there is another cause to consider.</p>
<p>After doing:</p>
<pre><code>gcloud config set project $PROJECT_NAME
gcloud config set container/cluster $CLUSTER_NAME
gcloud config set compute/zone europe-west2
gcloud beta container clusters get-credentials $CLUSTER_NAME --region europe-west2 --project $PROJECT_NAME
</code></pre>
<p>I was then seeing:</p>
<pre><code>kubectl cluster-info
Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<p>I tried everything suggested here and elsewhere. When the above worked without issue from my home desktop, I realised that the shared workspace wifi was interfering with TLS/VPN connections in order to control internet access!</p>
|
<p>I have written a deployment file as follows, which gives me the error <code>unknown field "platform"</code>. Any idea what to specify so that it deploys based on architecture?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
platform:
architecture: amd64
os: linux
- name: nginx
image: ppc64le/nginx:1.7.9
ports:
- containerPort: 80
platform:
architecture: ppc64le
os: linux
</code></pre>
| <p>You have to use nodeAffinity definitions on your deployment spec. Here's an example I use to pin tasks to amd64 or arm hosts:</p>
<pre><code> affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
</code></pre>
<p>You can use arbitrary keys and values. Here's the <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature" rel="nofollow noreferrer">documented example</a></p>
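<p>Applied to the manifest in the question, one possible sketch (not the only way) is to split the two images into two Deployments, each pinned to its architecture with node affinity. Here is the ppc64le half; the amd64 one is analogous with <code>values: [amd64]</code> and the stock <code>nginx:1.7.9</code> image:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-ppc64le
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      arch: ppc64le
  template:
    metadata:
      labels:
        app: nginx
        arch: ppc64le
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - ppc64le
      containers:
      - name: nginx
        image: ppc64le/nginx:1.7.9
        ports:
        - containerPort: 80
</code></pre>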
|
<p>I'm using kubernetes-client/java api and I want to programmatically get the pod status of all the pods in all the namespaces. My code is based on this <a href="https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/WatchExample.java" rel="noreferrer">example on Kubernetes java library</a>.</p>
<p>Here's a snippet of my code:</p>
<pre><code>Watch<V1Pod> watch = Watch.createWatch(
client,
api.listPodForAllNamespacesCall(
null, null, null, null, limit,
null, null, null, watchTrue,
null, null),
new TypeToken<Watch.Response<V1Pod>>() {}.getType());
for (Watch.Response<V1Pod> item : watch) {
V1PodStatus podStatus = item.object.getStatus();
String name = item.object.getMetadata().getName();
String status = podStatus.getPhase();
String kind = item.object.getKind();
String details = podStatus.toString();
System.out.printf("NAME: %s | KIND: %s | STATUS: %s | DETAILS: %n%s%n====================%n", name, kind, status, details);
}
</code></pre>
<p><strong>My question is this:</strong> Is <code>Watch</code> equivalent to an event handler? This code shows me a list of all the statuses of the pods, but will it automatically "push" more pod status events as they occur in realtime? Or is this only triggered once?</p>
| <p>Watch is designed to send continuous updates. If you run your program for a while and start/stop something, you will see new updates coming.</p>
<p>I'm not sure it's correct to call it an event handler though. It's a different pattern.</p>
|
<p>Inside the same openshift project, I have an application1 (APP1) in POD1 and another application (APP2) in POD2.<br>
Both of these applications offer REST API services. The two applications are accessible using two different OpenShift routes.</p>
<p>Accessing the two applications from outside the PaaS (OpenShift) works fine: doing a curl to APP1/APP2 using the OpenShift route from a client outside the PaaS works.</p>
<p>But when APP1 in POD1 targets the REST service offered by APP2 in POD2 using APP2's OpenShift route, it fails.<br>
Similarly, doing a curl from APP1's container (using oc rsh) to APP2 using the OpenShift route fails.</p>
<p>However, if APP1 is deployed outside this PaaS, the client call to APP2 using the OpenShift route works.<br>
Similarly, doing a curl to APP2 using the OpenShift route works fine from outside the PaaS.</p>
<p>Also, if I use the service's PaaS-internal IP/port (10.224.x.y), the issue goes away. But that implies using two different addresses depending on whether we target APP2 from inside or outside the PaaS.</p>
<p>So the question is: how do I make an OpenShift route visible from both inside and outside the PaaS?</p>
<p>Or is there an alternative way to share the same hostname/port to access APP2 from inside and outside the PaaS?</p>
| <p>You may want to have a look at:</p>
<p><a href="https://docs.openshift.com/container-platform/3.7/architecture/networking/networking.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.7/architecture/networking/networking.html</a></p>
<p>For internal communication it is sufficient to use the name of the service (in your case use app1-svc in app2 to communicate with app1).
You can find the name of your service when you visit the routes view - the column "Routes To" shows you the service name.</p>
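<p>So from inside APP1's container a call like the following should reach APP2 directly (the service name <code>app2-svc</code> and port 8080 are placeholders for whatever your service actually exposes):</p>
<pre><code>curl http://app2-svc:8080/your/rest/endpoint
</code></pre>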
<p>How services work and how they are configured is explained here in detail:</p>
<p><a href="https://docs.openshift.com/container-platform/3.7/architecture/core_concepts/pods_and_services.html#services" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.7/architecture/core_concepts/pods_and_services.html#services</a></p>
|
<p>I am trying to copy a jar file to a specific pod's container by executing the below command.</p>
<p><code>kubectl cp local_policy.jar podname:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/security</code>.</p>
<p>I am getting below error. </p>
<pre><code>rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"tar\\\": executable file not found in $PATH\"\n"
</code></pre>
<p>Please help.</p>
| <p>tar binary is necessary to run cp. It is in the help page of kubectl cp:</p>
<pre><code>kubectl cp --help
Copy files and directories to and from containers.
Examples:
# !!!Important Note!!!
# Requires that the 'tar' binary is present in your container
# image. If 'tar' is not present, 'kubectl cp' will fail.
</code></pre>
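<p>If you can't add <code>tar</code> to the image, one possible workaround (assuming the image at least contains a shell) is to stream the file through <code>kubectl exec</code> instead of <code>kubectl cp</code>:</p>
<pre><code>kubectl exec -i podname -- /bin/sh -c 'cat > /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/security/local_policy.jar' < local_policy.jar
</code></pre>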
|
<p>In Kubernetes documentation I haven't found any explanation of the pods line when describing resources:</p>
<p><a href="https://i.stack.imgur.com/tzBaB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tzBaB.png" alt="enter image description here"></a></p>
<p>What does "pods 110" mean? Why is it fixed to 110?</p>
<p>Thank you in advance.</p>
| <p>According to <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> documentation:</p>
<pre><code>--max-pods int32 Default: 110
Number of Pods that can run on this Kubelet.
</code></pre>
<p>So that means that 110 pods can be created on a node by default.</p>
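<p>If you need a different limit, it can be raised per node, either with the flag above or via the kubelet configuration file (a sketch; 250 is just an example value):</p>
<pre><code># kubelet flag
--max-pods=250

# or in the KubeletConfiguration file passed with --config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250
</code></pre>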
|
<p>Now that Heapster is nearing end of life and the incubating Metrics Server only provides the "core" resources of CPU and memory, is there a non-deprecated way of accessing other resource metrics (disk and network usage, for example) WITHOUT resorting to Prometheus?</p>
<p>Thank you.</p>
| <p><a href="https://prometheus.io/docs/introduction/overview/" rel="nofollow noreferrer">Prometheus</a> is becoming standard de-facto for monitoring Kubernetes cluster.</p>
<blockquote>
<p><strong>When does it fit?</strong><br />
Prometheus works well for recording any purely numeric time series. It fits both machine-centric monitoring as well
as monitoring of highly dynamic service-oriented architectures. In a
world of microservices, its support for multi-dimensional data
collection and querying is a particular strength.</p>
<p>Prometheus is designed for reliability, to be the system you go to
during an outage to allow you to quickly diagnose problems. Each
Prometheus server is standalone, not depending on network storage or
other remote services. You can rely on it when other parts of your
infrastructure are broken, and you do not need to setup extensive
infrastructure to use it.</p>
<p><strong>When does it not fit?</strong><br />
Prometheus values reliability. You can always view what statistics are available about your system, even
under failure conditions. If you need 100% accuracy, such as for
per-request billing, Prometheus is not a good choice as the collected
data will likely not be detailed and complete enough. In such a case
you would be best off using some other system to collect and analyze
the data for billing, and Prometheus for the rest of your monitoring.</p>
</blockquote>
|
<p>I have an Google Cloud Load Balancer-backed ingress in my Google Kubernetes Engine cluster. I have an autoscaler set up to scale the number of replicas of my deployment based on CPU usage. Let's say I have set the CPU threshold to 50%.</p>
<p>When there is a burst of requests, the CPU usage goes to 100%. The autoscaler takes a few minutes to realize the high load, create more pods, create new nodes if necessary, and pass health checks. During this scaling period, some or the majority of requests fail with the 502 error due to timeouts. I would rather return a 503 error code immediately if the server is under heavy load instead of returning a 502 error code after the 30 second timeout.</p>
<p>Is it possible to have the load balancer direct traffic to pods with the lowest CPU usage? Is it possible to return a 503 error code if none of the pods have a CPU usage below a certain threshold, say 80%?</p>
<p>What is standard practice for handling a large burst of traffic, and how should I go about resolving this issue in Kubernetes?</p>
| <p>First problem you are describing (serving 503) is called "load shedding". Normally it's a responsibility of the application to say: "oops, I'm overloaded, 503, slow down". If you move this responsibility to the client, then it might be too slow to react to provide you any reasonable protection - its data will always be behind. From the system reliability point of view, it's better to keep this logic in the server application.</p>
<p>The second problem is CPU-aware load balancing. One possible approach to this problem is called weighted round-robin - it's like regular round-robin, but preferring less loaded nodes. If you install <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/installing-istio" rel="nofollow noreferrer">istio</a> in Kubernetes, you can select from <a href="https://istio.io/docs/concepts/traffic-management/load-balancing.html" rel="nofollow noreferrer">a list of load balancing policies</a>. One of them is <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/load_balancing#weighted-least-request" rel="nofollow noreferrer">weighted least request</a> - it relies on the number of requests in flight, not directly on CPU, but if all your requests have about the same CPU cost, it might be a good proxy to CPU load.</p>
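<p>A hedged sketch of such an Istio policy (the host <code>my-backend</code> is a placeholder for your own service; <code>LEAST_CONN</code> maps to Envoy's least-request balancer):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-backend
spec:
  host: my-backend
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
</code></pre>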
|
<p>I want to mount a shell script template into a container.</p>
<p>I have the following <code>configmap.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "myservice-chart.fullname" . }}--scripts-configmap
labels:
app: {{ template "myservice-chart.name" . }}
chart: {{ template "myservice-chart.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
setup: |
{{ include "setup" . | indent 4 }}
</code></pre>
<p>And this is my <code>setup.tpl</code>:</p>
<pre><code>{{- define "setup" }}
#!/bin/bash
echo "Hello world!"
{{- end }}
</code></pre>
<p>When I do a Helm dry-run, Helm generates this (valid) YAML:</p>
<pre><code>...
apiVersion: v1
kind: ConfigMap
metadata:
name: listening-donkey-myservice-chart-quorum-scripts-configmap
labels:
app: myservice-chart
chart: myservice-chart-0.1.0
release: listening-donkey
heritage: Tiller
data:
setup: |
#!/bin/bash
echo "Hello world!"
...
</code></pre>
<p>When I run it without <code>--dry-run</code>, it generates this error:</p>
<blockquote>
<p>configmap.yaml: error converting YAML to JSON: yaml: line 13: did not
find expected key</p>
</blockquote>
| <p>According to <a href="https://docs.helm.sh/chart_template_guide/" rel="noreferrer">helm chart template guide</a>:</p>
<blockquote>
<p>The curly brace syntax of template declarations can be modified with
special characters to tell the template engine to chomp whitespace.
{{- (with the dash and space added) indicates that whitespace should
be chomped left, while -}} means whitespace to the right should be
consumed. Be careful! Newlines are whitespace!</p>
</blockquote>
<p>So, in order to prevent a useless empty line below <code>setup: |</code>, the <code>configmap.yaml</code> should be the following:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "myservice-chart.fullname" . }}--scripts-configmap
labels:
app: {{ template "myservice-chart.name" . }}
chart: {{ template "myservice-chart.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
setup: |
{{- include "setup" . | indent 4 }}
</code></pre>
|
<p>I am new to K8S. I have a yaml file which generates kubernetes secrets mounted on projected volumes. Upon execution, I found that the secret files (packaged with secrets) are showing "root" as file owner and group owner. I want to change the file owner and group owner to the same specific user (say 450). </p>
<p>I have tried using "chown" from init container (tried it but failed), but I got error saying "read-only file system" and could not modify file & group owner.
I do not want to use "fsGroup" under securitycontext. I observed that the "mode:" option under "items" behaves in unpredictable manner when fsGroup is used.</p>
<p>Is there any way to modify default file and group owner of the kubernetes secret files that are mounted via projected volumes ? </p>
<p>I am providing the sample code below. Suppose I want to change the file & group owner of "password" file (under 'mysecret2') in the below sample. how to achieve it?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: volume-test
spec:
containers:
- name: container-test
image: busybox
volumeMounts:
- name: all-in-one
mountPath: "/projected-volume"
readOnly: true
volumes:
- name: all-in-one
projected:
sources:
- secret:
name: mysecret
items:
- key: username
path: username
- secret:
name: mysecret2
items:
- key: password
path: password
mode: 511
</code></pre>
| <p>As far as I know, there's no way to change owner UID for secrets.</p>
<p>A workaround is to copy a secret to a normal file, then change its ownership and mode, like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: volume-test
spec:
containers:
- name: container-test
image: busybox
      command:
      - "/bin/bash"
      - "-exc"
      - |
        cp /etc/secrets-mount/*_pgpass /etc/secrets
        chown my-user /etc/*_pgpass
        chmod 600 /etc/*_pgpass
        exec su-exec my-user /entrypoint.sh
volumeMounts:
- name: secrets
mountPath: /etc/secrets-mount/
....
</code></pre>
|
<p>I'm using the simplest docker ever:</p>
<pre><code>FROM ubuntu
COPY script.sh /script.sh
CMD /script.sh
</code></pre>
<p>Where all the script does is:</p>
<pre><code>#!/bin/bash
function sigterm() {
echo "Got SIGTERM"
exit
}
trap sigterm SIGTERM
i=1
while true; do
echo "$(date +%H:%M:%S) | $((i++)) | $HOSTNAME"
sleep 1
done
</code></pre>
<p>I'm running this container in Minikube, <strong>but I can't get it to catch any SIGTERM from kubernetes.</strong> </p>
<p>I tried deleting the pod/deployment and scaling it up and down. In no case did it get a SIGTERM before being deleted. It respects the <code>terminationGracePeriodSeconds</code> but doesn't seem to run the <code>preStop</code> command or send the SIGTERM before killing the pod.</p>
<p>Is that due to using minikube? or am I doing something else wrong?</p>
<p>(the deployment is not part of a service, it's just a deployment)
(SSH into the pod and manually <code>kill</code>-ing it works as expected)</p>
| <p>Reading <a href="https://docs.docker.com/engine/reference/builder/#cmd" rel="noreferrer">the Dockerfile documentation on CMD</a></p>
<blockquote>
<p>The CMD instruction has three forms:</p>
<ul>
<li>CMD ["executable","param1","param2"] (exec form, this is the preferred form)</li>
<li>CMD ["param1","param2"] (as default parameters to ENTRYPOINT)</li>
<li>CMD command param1 param2 (shell form)</li>
</ul>
<p>If you use the shell form of the CMD, then the will execute in /bin/sh -c</p>
</blockquote>
<p>So you are using the shell form, which means your command is /bin/sh -c script.sh. Then, when kubernetes sends a SIGTERM to the container, it is not the script.sh process that receives the signal, but the /bin/sh process. That's why you don't see the "Got SIGTERM" message.</p>
<p>When creating a Dockerfile, make sure you use the exec form. Otherwise the application will be started as a subcommand of /bin/sh -c, which does not pass signals. The container’s PID1 will be the shell, your application will not receive any signals.</p>
<p>Try changing your Dockerfile to use the exec form</p>
<pre><code>FROM ubuntu
COPY script.sh /script.sh
CMD ["/script.sh"]
</code></pre>
|
<p>I have Consul running in my cluster and each node runs a consul-agent as a DaemonSet. I also have other DaemonSets that interact with Consul and therefore require a consul-agent to be running in order to communicate with the Consul servers.</p>
<p>My problem is, if my DaemonSet is started before the consul-agent, the application will error as it cannot connect to Consul and subsequently get restarted.</p>
<p>I also notice the same problem with other DaemonSets, e.g <a href="https://www.weave.works/oss/net/" rel="nofollow noreferrer">Weave</a>, as it requires kube-proxy and kube-dns. If Weave is started first, it will constantly restart until the kube services are ready.</p>
<p>I know I could add retry logic to my application, but I was wondering if it was possible to specify the order in which DaemonSets are scheduled?</p>
| <p>Kubernetes itself does not provide a way to specify dependencies between pods / deployments / services (e.g. "start pod A only if service B is available" or "start pod A after pod B").</p>
<p>The currect approach (based on what I found while researching this) seems to be retry logic or an init container. To quote the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#what-can-init-containers-be-used-for" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>They run to completion before any app Containers start, whereas app Containers run in parallel, so Init Containers provide an easy way to block or delay the startup of app Containers until some set of preconditions are met.</p>
</blockquote>
<p>This means you can either add retry logic to your application (which I would recommend as it might help you in different situations such as a short service outage) or you can use an init container that polls a health endpoint via the Kubernetes service name until it gets a satisfying response.</p>
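<p>A minimal sketch of that init container approach (the service name, port and path below are placeholders for whatever health endpoint your dependency, e.g. the Consul agent, exposes):</p>
<pre><code>spec:
  initContainers:
  - name: wait-for-consul
    image: busybox
    command: ['sh', '-c', 'until wget -qO- http://consul-agent:8500/v1/status/leader; do echo waiting; sleep 2; done']
  containers:
  - name: my-daemon
    image: my-daemon-image
</code></pre>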
|
<p>I'm looking for a way to increase the master node VM size on GKE.</p>
<p>On <a href="https://kubernetes.io/docs/admin/cluster-large/#size-of-master-and-master-components" rel="noreferrer">https://kubernetes.io/docs/admin/cluster-large/#size-of-master-and-master-components</a> it suggests that for a cluster of 11-100 nodes we should be using an n1-standard-4 VM for Kubernetes master. </p>
<p>However, since the cluster has started out smaller, and since grown to this size, does that mean that we're stuck with an underpowered master node? From the above link:</p>
<blockquote>
<p>Note that these master node sizes are currently only set at cluster startup time, and are not adjusted if you later scale your cluster up or down (e.g. manually removing or adding nodes, or using a cluster autoscaler)"</p>
</blockquote>
<p>So, is there any way to increase the size of the master?</p>
| <blockquote>
<p>The Kubernetes documentation that you pointed out is <strong>NOT</strong> correct and should be modified since the master actually scales. </p>
</blockquote>
<p>First of all notice that how and when Google Cloud takes care of resizing the master should not be a concern for users if the behaviour of the cluster is stable and performant. </p>
<p>It is a managed service and therefore some details are not public, for example how the master is resized and which algorithms are used are not shared.</p>
<p>Moreover, there is no information or disclaimer regarding the machine type of the master in the GKE autoscaler <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="noreferrer">official documentation</a>, and there would be if the master were not able to resize, since that would be a potentially disruptive action for cluster health.</p>
<h2>From the blog</h2>
<p>"Master VM is automatically scaled, upgraded, backed up and secured"</p>
<ul>
<li><a href="https://cloudplatform.googleblog.com/2017/11/Cutting-Cluster-Management-Fees-on-Google-Kubernetes-Engine.html" rel="noreferrer">https://cloudplatform.googleblog.com/2017/11/Cutting-Cluster-Management-Fees-on-Google-Kubernetes-Engine.html</a></li>
</ul>
<hr>
<p>However if you want you can test the behavior:</p>
<ul>
<li><p>Create a cluster having one node</p></li>
<li><p>Add 10 nodes</p></li>
<li><p>The master will not be reachable for a moment and a call to the API will result in an error</p>
<pre><code> $ gcloud container clusters get-credentials cluster-1 --zone us-central1-a --project **-**
Fetching cluster endpoint and auth data.
WARNING: cluster cluster-1 is not running. The kubernetes API may not be available.
</code></pre></li>
<li><p>Inspect the logs, you will notice that in the logs will be present an entry "master upgrade"</p></li>
</ul>
<hr>
<p>There is an <a href="https://issuetracker.google.com/79973484" rel="noreferrer">feature request</a> asking to Improve the Google cloud documentation, you can decide to star it in order to receive updates.</p>
<p>On the other hand to fix the Kubernetes documentation I opened a <a href="https://github.com/kubernetes/website/issues/8640" rel="noreferrer">public issue</a> on Github.</p>
|
<p>When I create kubernetes cluster with gcloud container clusters create command, a permission error occurs as follows:</p>
<pre>
$ gcloud container clusters create my-k8s
WARNING: Currently node auto repairs are disabled by default. In the future this will change and they will be enabled by default. Use `--[no-]enable-autorepair` flag to suppress this warning.
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Required "container.clusters.create" permission for "projects/test-project".
</pre>
<p>How I can solve this error ?</p>
<p>Thanks</p>
| <p>I was able to create the cluster with the following command, using the full project ID (including the numeric suffix):</p>
<pre><code>gcloud container clusters create my-k8s --project test-project-xxxxxx
</code></pre>
<p>Note: with just the short project name, without the number, the creation fails with the same error:</p>
<pre><code>gcloud container clusters create my-k8s --project test-project
</code></pre>
|
<p>On NixOS it is easy to set up Kubernetes with a single line of config:</p>
<pre><code>services.kubernetes.roles = ["master" "node"];
</code></pre>
<p>This installs both the master and node components on the local system and therefore creates a nice little working local kubernetes "cluster".</p>
<p>If I want to set up a "real" cluster I need to install it over multiple hosts, but I'm not sure about the intended way to connect them.</p>
<p>If I install only the master components on one host and only the node components on another node, how do I tell the node where to find its master?</p>
<p>There are quite a few <a href="https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/cluster/kubernetes/default.nix" rel="noreferrer">configuration options</a>, but I'm not sure how to use them correctly. Is anyone aware of some example setup? </p>
| <p>I'm currently working to automate Kubernetes deployment with NixOS / NixOps. It works quite well with multiple local VirtualBox nodes. Regarding AWS integration, I still have to fix a few things. Then I will try to integrate with other cloud providers.</p>
<p>You can have a look to this repository: <a href="https://github.com/thpham/magics/tree/master/k8s-cluster" rel="nofollow noreferrer">NixOps Kubernetes</a>. Do not hesitate to fork and help me improve it.</p>
|
<p>How can I use a normal context to configure the kubernetes client-go?</p>
<pre><code> package kube
import (
"fmt"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
)
// GetKubeClient creates a Kubernetes config and client for a given kubeconfig context.
func GetKubeClient(context string) (*rest.Config, kubernetes.Interface, error) {
config, err := configForContext(context)
if err != nil {
return nil, nil, err
}
client, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, nil, fmt.Errorf("could not get Kubernetes client: %s", err)
}
return config, client, nil
}
// configForContext creates a Kubernetes REST client configuration for a given kubeconfig context.
func configForContext(context string) (*rest.Config, error) {
config, err := getConfig(context).ClientConfig()
if err != nil {
return nil, fmt.Errorf("could not get Kubernetes config for context %q: %s", context, err)
}
return config, nil
}
// getConfig returns a Kubernetes client config for a given context.
func getConfig(context string) clientcmd.ClientConfig {
rules := clientcmd.NewDefaultClientConfigLoadingRules()
rules.DefaultClientConfig = &clientcmd.DefaultClientConfig
overrides := &clientcmd.ConfigOverrides{ClusterDefaults: clientcmd.ClusterDefaults}
if context != "" {
overrides.CurrentContext = context
}
return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
}
</code></pre>
<p>If I try this code (got it from helm), the api server is not correctly set and the client wants to connect to the default host <code>localhost:8080</code>.</p>
| <p>Found the problem. The implementation of <code>github.com/imdario/mergo</code> changed in a newer version and breaks the actual behavior of generating the client config. So just pin revision <code>6633656539c1639d9d78127b7d47c622b5d7b6dc</code>, like the official kubernetes client-go repository does.</p>
<p><a href="https://github.com/kubernetes/client-go/issues/415" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/issues/415</a></p>
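<p>If you manage dependencies with <code>dep</code>, a sketch of pinning that revision in <code>Gopkg.toml</code> would be:</p>
<pre><code>[[override]]
  name = "github.com/imdario/mergo"
  revision = "6633656539c1639d9d78127b7d47c622b5d7b6dc"
</code></pre>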
|
<p>I'm trying to figure out how to rename a field (or create a new field with the same value) with Fluentd.</p>
<p>Like:</p>
<pre><code>agent: Chrome ....
</code></pre>
<p>To:</p>
<pre><code>agent: Chrome
user-agent: Chrome
</code></pre>
<p>but for a specific type of logs, like <code>**nginx**</code>.</p>
<p>I'm trying to use <code>record_reformer</code> but it doesn't apply for a second filter:</p>
<pre><code><filter kubernetes.**.nginx-ingress-controller-**.log>
@type parser
format /^(?<host>[^ ]*) (?<domain>[^ ]*) \[(?<x_forwarded_for>[^\]]*)\] (?<server_port>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+[^\"])(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")? (?<request_length>[^ ]*) (?<request_time>[^ ]*) (?:\[(?<proxy_upstream_name>[^\]]*)\] )?(?<addr>[^ ]*) (?<response_length>[^ ]*) (?<response_time>[^ ]*) (?<status>[^ ]*)$/
time_format %d/%b/%Y:%H:%M:%S %z
key_name log
types server_port:integer,code:integer,size:integer,request_length:integer,request_time:float,upstream_response_length:integer,upstream_response_time:float,upstream_status:integer
reserve_data true
</filter>
<filter kubernetes.**>
@type kubernetes_metadata
</filter>
<filter kubernetes.**>
@type grep
<regexp>
key $.kubernetes.labels.fluentd
pattern true
</regexp>
</filter>
<filter kubernetes.**.deployment-name**>
@type record_transformer
<record>
level ${record["Level"]}
</record>
</filter>
<match kubernetes.**>
@type elasticsearch
include_tag_key true
host "#{ENV['OUTPUT_HOST']}"
port "#{ENV['OUTPUT_PORT']}"
scheme "#{ENV['OUTPUT_SCHEME']}"
reload_connections true
logstash_format true
</match>
<match kubernetes.**>
@type record_reformer
remove_keys log,kubernetes
tag mytag.generic
<record>
name ${record['kubernetes']['labels']['app']}
namespace ${record['kubernetes']['namespace_name']}
</record>
</match>
<match kubernetes.**api**>
@type record_reformer
remove_keys log,kubernetes
tag mytag.api
<record>
user_agent ${record['req']['headers']['user-agent']}
</record>
</match>
</code></pre>
<p>In such a case it doesn't send the logs for <code>mytag.generic</code>.</p>
| <p>I have a very similar use case, and like @embik said, using record_transformer seems like a better fit. In my case I'm trying to downcase a key in a JSON record fluentd is processing; here's the conf:</p>
<pre><code><filter kubernetes.**.deployment-name**>
@type record_transformer
<record>
level ${record["Level"]}
</record>
</filter>
</code></pre>
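<p>For the rename/copy case from the question, a sketch with the same plugin (keeping the original <code>agent</code> field and adding <code>user-agent</code> with the same value; adjust the tag pattern to match your nginx logs) could look like this:</p>
<pre><code><filter kubernetes.**.nginx-ingress-controller-**.log>
  @type record_transformer
  enable_ruby true
  <record>
    user-agent ${record["agent"]}
  </record>
</filter>
</code></pre>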
<p>Hope that helps.</p>
|
<p>These are my first steps in the Kubernetes world, so excuse me if my terms are not used correctly.
I am running a single-node Kubernetes setup without an external load balancer and I have deployed a pod with two containers: one MySQL database and one PowerDNS.
Powerdns should expose port 53 to the internet while mysql should expose its port only in the cluster.
Therefore I set the following:</p>
<p>"hostNetwork: true" for the pod<br>
"hostPort" for the powerdns container and not for mysql</p>
<p>Service for port 3306 with "type: ClusterIP"</p>
<p>Now everything is running. Powerdns can connect to the mysql and is exposed on port 53 in the internet.
But contrary to my assumption the mysql database is exposed to the internet too. </p>
<p>Could anyone give me a hint to what I am doing wrong?</p>
| <p>Using <code>hostNetwork: true</code> allows your whole pod (all containers in it) to bind ports to the host, which you already identified as problematic.</p>
<p>First of all, you should consider to move the mysql container out of your pod. Using multiple containers is supposed to group containers working as one unit (e.g. an application and a background process closely communicating with each other).</p>
<p>Think in services. Your service PowerDNS is a service user itself as it requires a database, something the application PowerDNS doesn't provide. You want another service for MySQL. Take a look at the documentation (<a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="noreferrer">one</a>, <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="noreferrer">two</a>) for StatefulSets as it uses MySQL as an example (running databases on Kubernetes is one of the more complex tasks).
Create a <code>ClusterIP</code> service for this. <code>ClusterIP</code> services are only available from within the cluster (your database is an internal service, so that's what you want).</p>
<p>This way, your PowerDNS pod will only feature one container that you can bind to your host network. <strong>But</strong> using <code>hostNetwork: true</code> is not a good in general. You won't be able to create multiple instances of your application (in case PowerDNS scales), it's fine for first steps though. A load balancer in front of your setup would be better though. You can use <code>NodePort</code> services to make your service available on a high-values port which your load balancer proxies connections to.</p>
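<p>A minimal sketch of such a cluster-internal MySQL service (assuming the MySQL pod is labelled <code>app: mysql</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ClusterIP
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
</code></pre>
<p>PowerDNS can then reach the database at <code>mysql:3306</code> from inside the cluster, while nothing is exposed to the internet.</p>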
|
<p>I've read documentation of kubernetes annotations. </p>
<p>But I couldn't find basic example about using this annotations. For Example;</p>
<p>I have a deployment yaml like below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
test_value: "test"
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.13
ports:
- containerPort: 80
</code></pre>
<p>How can I use this annotation named test_value and where.</p>
<p>Best Regards...</p>
| <p>Just like <code>Labels</code>, <code>Annotations</code> are key-value pairs which represent metadata that is attached to a Kubernetes object.
But contrary to <code>Labels</code>, which are internally utilized to find a collection of objects which satisfy specific conditions, the purpose of <code>Annotations</code> is simply to attach relevant metadata, which should not be used as a filter to identify those objects.</p>
<p>What if we wanted to record which person was responsible for generating a specific .yaml file?</p>
<p>We could attach such information to the Kubernetes object, so that when we need to know who created that object, we can simply run <code>kubectl describe ...</code></p>
<p>Another useful example could be to add an annotation to a <code>Deployment</code> before a rollout, explaining what modifications occurred in the new version of the Deployment object. That information can be retrieved later while checking the history of your deployment versions.</p>
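<p>For instance, a small sketch of that rollout use case (using the deployment from the question; <code>kubernetes.io/change-cause</code> is the annotation that <code>kubectl rollout history</code> shows in its CHANGE-CAUSE column):</p>
<pre><code>kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="bumped nginx to 1.13"
kubectl rollout history deployment nginx-deployment
</code></pre>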
<p>But as you have realized with the <code>Ingress</code> example, with <code>Annotations</code> we can also perform advanced configuration on such objects. This is not limited only to Ingress, and for instance you can also provide configuration for running Prometheus on a Kubernetes cluster. You can check the details <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/" rel="nofollow noreferrer">here</a>.</p>
|
<p>In Kubernetes, is there a particular way I can check whether a specific feature gate is enabled or disabled? Say I want to check if the MountPropagation feature is enabled in my cluster. How do I do that?</p>
| <p>Check the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">official documentation</a> for the feature's default value, and run <code>ps aux | grep apiserver | grep feature-gates</code> on the <code>master</code> node to check whether that feature has been explicitly turned on or off.</p>
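<p>On a kubeadm-provisioned master you can also inspect the static pod manifests directly, for example:</p>
<pre><code>grep feature-gates /etc/kubernetes/manifests/kube-apiserver.yaml
</code></pre>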
|
<p>While going through the helm documentation, i came across rollback feature.
Its a cool feature, but i have some doubts about the implementation of that feature.</p>
<p>How they have implemented it? If they might have used some datastore to preserve old release config, what datastore it is?</p>
<p>Is there any upper limit on consecutive rollbacks? If so, Upto how many rollbacks will it support? Can we change this limit?</p>
| <p>As the <a href="https://docs.helm.sh/helm/#helm-rollback" rel="nofollow noreferrer">documentation</a> says, it rolls back the entire release. Helm generally stores release metadata in its own configmaps. Every time you release changes, it appends a new revision to the existing data. Your changes can include a new deployment image, new configmaps, storage, etc. On rollback, everything goes back to the selected previous revision.</p>
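<p>You can inspect and roll back those stored revisions like this (the release name is a placeholder):</p>
<pre><code>helm history my-release
helm rollback my-release 2
</code></pre>
<p>Regarding the limit: by default Tiller keeps every revision, and as far as I know the history can be capped with <code>helm init --history-max N</code>, so the number of revisions available for rollback is bounded only by what is kept.</p>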
|
<p>I'm using a kubernetes ConfigMap that contains database configurations for an app and there is a secret that has the database password.
I need to use this secret in the ConfigMap, but when I add the environment variable in the ConfigMap and supply its value in the pod deployment from the secret, I'm not able to connect to MySQL with the password, because the ConfigMap keeps the literal string of the variable.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: config
data:
APP_CONFIG: |
port: 8080
databases:
default:
connector: mysql
host: "mysql"
port: "3306"
user: "root"
password: "$DB_PASSWORD"
</code></pre>
<p>and the deployment.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app
labels:
app: backend
spec:
replicas: 1
template:
metadata:
labels:
app: backend
spec:
containers:
- name: app
image: simple-app-image
ports:
- name: "8080"
containerPort: 8080
env:
- name: APP_CONFIG
valueFrom:
configMapKeyRef:
name: config
key: APP_CONFIG
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: "mysql-secret"
key: "mysql-root-password"
</code></pre>
<p>Note: the secret exist and I'm able to get "mysql-root-password" value and use to login to the database</p>
| <p>Kubernetes can't make that substitution for you, you should do it with shell in the entrypoint of the container.</p>
<p>This is a working example. I modify the default entrypoint to create a new variable with that substitution. After this command you should add the desired entrypoint.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app
labels:
app: backend
spec:
replicas: 1
template:
metadata:
labels:
app: backend
spec:
containers:
- name: app
image: simple-app-image
command:
- /bin/bash
- -c
args:
- "NEW_APP_CONFIG=$(echo $APP_CONFIG | envsubst) && echo $NEW_APP_CONFIG && <INSERT IMAGE ENTRYPOINT HERE>"
ports:
- name: "app"
containerPort: 8080
env:
- name: APP_CONFIG
valueFrom:
configMapKeyRef:
name: config
key: APP_CONFIG
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: "mysql-secret"
key: "mysql-root-password"
</code></pre>
|
<p>I want to create a CI/CD on Azure with Jenkins, Docker, Kubernetes, starting from simple.</p>
<p>I already went through the <strong>Voting-app</strong> tutorial <strong>Tutorial: Prepare application for Azure Container Service (AKS)</strong> <a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-app" rel="nofollow noreferrer">Azure tutorial</a> The tutorial covers the steps: </p>
<blockquote>
<p>1 - Prepare application for AKS 2 - Create container registry ACS 3
-Create Kubernetes cluster 4 - Run application</p>
</blockquote>
<p>The application is working.</p>
<p>The next step I want to do is to integrate Jenkins. I am following the tutorial <strong>Continuous deployment with Jenkins and Azure Container Service</strong>, but I couldn't follow it because it is too advanced for me to understand the commands in the files. For example, the way they deployed Jenkins using the file <a href="https://github.com/Azure-Samples/azure-voting-app-redis/blob/master/jenkins-tutorial/deploy-jenkins-vm.sh" rel="nofollow noreferrer">deploy-jenkins-vm.sh</a></p>
<p>Instead of that, I went to the Marketplace on Azure and created "Jenkins" and via the Azure UI, set up the configuration. Jenkins is now running on localhost:8080</p>
<p>From another video tutorial <a href="https://www.youtube.com/watch?v=UyYTrOCLuBs" rel="nofollow noreferrer">Hands-on Docker, Jenkins CI/CD Azure</a> I shared the cluster kubeconfig to my remote jenkins host:</p>
<blockquote>
<p>$ sudo scp ~/.kube/config
[email protected]:/var/lib/jenkins/config</p>
</blockquote>
<p>It worked. </p>
<p>Then, from the same video tutorial they run a bash file</p>
<pre><code>#!/bin/bash
# Jenkins Bootstrap for ACS Kubernetes
set -x #echo on
wget https://raw.githubusercontent.com/mekenthompson/kubectl/master/kubectl -O ~/kubectl
chmod +x kubectl
sudo cp kubectl /usr/local/bin/kubectl
sudo mkdir /home/tomcat/.kube
sudo cp config /home/tomcat/.kube/config
sudo chown -R tomcat:tomcat /home/tomcat/.kube
sudo usermod -aG docker tomcat
sudo /opt/bitnami/ctlscript.sh restart
set +x #echo off
</code></pre>
<p>From the repository <a href="https://raw.githubusercontent.com/mekenthompson/example-voting-app/master/jenkins/jenkins-bootstrap.sh" rel="nofollow noreferrer">here</a>. It didn´t work for me</p>
<pre><code>+ chmod +x kubectl
+ sudo cp kubectl /usr/local/bin/kubectl
+ sudo mkdir /home/tomcat/.kube
mkdir: cannot create directory '/home/tomcat/.kube': No such file or directory
+ sudo cp config /home/tomcat/.kube/config
cp: cannot stat 'config': No such file or directory
+ sudo chown -R tomcat:tomcat /home/tomcat/.kube
chown: invalid user: 'tomcat:tomcat'
+ sudo usermod -aG docker tomcat
usermod: group 'docker' does not exist
+ sudo /opt/bitnami/ctlscript.sh restart
sudo: /opt/bitnami/ctlscript.sh: command not found
+ set +x
</code></pre>
<p>But my question is, what is the next step after I shared de kubeconfig to my jenkins remote host. I am just copying and pasting, but not knowing , what is happening</p>
<p>Form the tutorials, they seem to me, too advanced and I don´t know what is going on, or what exactly should be done.</p>
<p>From the Azure <strong>Voting tutorial</strong> they have another file, to set up Jenkins <a href="https://github.com/Azure-Samples/azure-voting-app-redis/blob/master/jenkins-tutorial/config-jenkins.sh" rel="nofollow noreferrer">config-jenkins.sh</a>, but as well I don´t know what needs to be done</p>
<p>What exactly I should do, Do you know other resources or links to follow. </p>
<p>I have already jenkins, an Azure Container Registry, Azure Container Service, and the voting app. </p>
| <p>We're working on an open source project called <a href="https://jenkins-x.io/" rel="nofollow noreferrer">Jenkins X</a> which is a proposed sub project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins pipelines and GitOps for promotion across environments.</p>
<p>If you want to see how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests you might wanna check out <a href="https://jenkins-x.io/demos/devoxx-uk-2018/" rel="nofollow noreferrer">my recent talk on Jenkins X at DevOxx UK</a> where I do a live demo of this on GKE. Though Jenkins X works on <a href="https://jenkins-x.io/getting-started/" rel="nofollow noreferrer">AWS, AKS and GKE and other kubernetes clusters too</a>.</p>
|
<p>I want a job to trigger every 15 minutes but it is consistently triggering every 30 minutes.</p>
<p><strong>UPDATE:</strong></p>
<p>I've simplified the problem by just running:</p>
<pre><code>kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
</code></pre>
<p>As specified in the docs here: <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/</a></p>
<p>and yet the job still refuses to run on time.</p>
<pre><code>$ kubectl get cronjobs
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello */1 * * * * False 1 5m 30m
hello2 */1 * * * * False 1 5m 12m
</code></pre>
<p>It took 25 minutes for the command line created cronjob to run and 7 minutes for the cronjob created from yaml. They were both finally scheduled at the same time so it's almost like etcd finally woke up and did something?</p>
<p><strong>ORIGINAL ISSUE:</strong></p>
<p>When I drill into an active job I see <code>Status: Terminated: Completed</code> but
<code>Age: 25 minutes</code> or something greater than 15. </p>
<p>In the logs I see that the python script meant to run has completed it's final print statement. The script takes about ~2min to complete based on it's output file in s3. Then no new job is scheduled for 28 more minutes.</p>
<p>I have tried with different configurations:</p>
<p><code>Schedule: */15 * * * *</code> AND <code>Schedule: 0,15,30,45 * * * *</code></p>
<p>As well as</p>
<p><code>Concurrency Policy: Forbid</code> AND <code>Concurrency Policy: Replace</code></p>
<p>What else could be going wrong here?</p>
<p>Full config with identifying lines modified:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
labels:
type: f-c
name: f-c-p
namespace: extract
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
template:
metadata:
creationTimestamp: null
labels:
type: f-c
spec:
containers:
- args:
- /f_c.sh
image: identifier.amazonaws.com/extract_transform:latest
imagePullPolicy: Always
env:
- name: ENV
value: prod
- name: SLACK_TOKEN
valueFrom:
secretKeyRef:
key: slack_token
name: api-tokens
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: aws_access_key_id
name: api-tokens
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: aws_secret_access_key
name: api-tokens
- name: F_ACCESS_TOKEN
valueFrom:
secretKeyRef:
key: f_access_token
name: api-tokens
name: s-f-c
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
schedule: '*/15 * * * *'
successfulJobsHistoryLimit: 1
suspend: false
status: {}
</code></pre>
| <p>After running these jobs in a test cluster I discovered that external circumstances prevented them from running as intended.</p>
<p>On the original cluster there were ~20k scheduled jobs. The built-in scheduler for Kubernetes is not yet capable of handling this volume consistently.</p>
<p>The maximum number of jobs that can be reliably run (within a minute of the time intended) may depend on the size of your master nodes.</p>
|
<p>I am using kubernetes helm to deploy apps to my cluster. Everything works fine from my laptop when helm uses the cluster's kube-config file to deploy to the cluster.</p>
<p>I want to use helm from my CI/CD server (which is separate from my cluster) to automatically deploy apps to my cluster. I have created a k8s service account for my CI/CD server to use. But how do I create a kube-config file for the service account so that helm can use it to connect to my cluster from my CI/CD server??</p>
<p>Or is this not the right way to use Helm from a CI/CD server? </p>
| <p>why not just run your CI server inside your kubernetes cluster then you don't have to manage secrets for accessing the cluster? We do that on <a href="https://jenkins-x.io/" rel="nofollow noreferrer">Jenkins X</a> and it works great - we can run kubectl or helm inside pipelines just fine.</p>
|
<p>As part of a Jenkins pipeline to build and deploy an app to Google's Kubernetes service (GKE), I've created a script to carry out the following deployment to GKE:</p>
<ul>
<li>checkout code </li>
<li>setup authentication to gcloud and </li>
<li>create the deployment and service using kubectl:</li>
</ul>
<p>Detailed steps implemented by the script are as follows:</p>
<pre><code>a) Create the docker registry authentication file (.json)
b) login to the google docker registry using the authentication file
c) initialise a git repo in the current directory
d) add the remote origin in prep for code pull
e) pull the source code for the microservice container
f) Create a kubectl configurtion file and directory to authenticate to the kubernetes cluster in Gcloud
g) Create a keyfile for a Gcloud service account that needs to authenticate to the container service
h) Activate the service account
i) Get the credentials for the container cluster from Gcloud
j) Run kubectl apply to create the kubernetes services
</code></pre>
<p>Full, tested, script at: <a href="https://pastebin.com/sZPrQuzD" rel="nofollow noreferrer">https://pastebin.com/sZPrQuzD</a></p>
<p>If I put this sequence of steps in a scripts on an AWS EC2 instance and run it manually it works. However,the Jenkins build step fails at the the point kubectl is invoked to run the service, with the following error:</p>
<pre><code>gcloud container clusters get-credentials jenkins-cd --zone europe-west1-b --project noon-prod
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.
Build step 'Execute shell' marked build as failure
</code></pre>
<p>The full error dump from the Jenkins run is as follows:</p>
<p><a href="https://pastebin.com/pSWPQ5Ei" rel="nofollow noreferrer">https://pastebin.com/pSWPQ5Ei</a></p>
<p>My questions:</p>
<p>a) How to fix this? Surely it can't be that difficult to get authentication running from Jenkins?</p>
<p>b) Is this the correct way to authenticate to the gcloud container service from a Jenkins system which is not on Gcloud infrastructure at all?</p>
<p>Many thanks in advance for any help!
Traiano</p>
| <p>We're working on an open source project called <a href="https://jenkins-x.io/" rel="nofollow noreferrer">Jenkins X</a> which is a proposed sub project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.</p>
<p>We worked around some of the issues you've been having by running the Jenkins pipelines inside the kubernetes cluster; so there's no need to authenticate with GKE.</p>
<p>When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.</p>
<p>Here's <a href="https://jenkins-x.io/demos/devoxx-uk-2018/" rel="nofollow noreferrer">a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps</a> for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).</p>
|
<p>As explained at <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/</a>
label names and values have a limit of 63 characters. Does anyone know what drives this limit? I am not looking for the code condition but for the real reason behind choosing this value.</p>
| <p>As <a href="https://stackoverflow.com/users/8803619/suresh-vishnoi" title="1,427 reputation">Suresh Vishnoi</a> wrote, it is a global restriction based on <a href="https://www.rfc-editor.org/rfc/rfc1123" rel="noreferrer">RFC-1123</a>.</p>
<p><a href="https://stackoverflow.com/questions/32290167/what-is-the-maximum-length-of-a-dns-name#32294443">Here</a> is a simple explanation of this restriction.</p>
<p><a href="https://github.com/kubernetes/kubernetes/pull/1830" rel="noreferrer">There is</a> a commit in Kubernetes which provides the validation of labels' length.</p>
|