<p>Can I pass AWS creds to Kops CLI or programmatically? How to handle multiple AWS accounts?</p>
| <p>You can redefine the <code>KOPS_STATE_STORE</code> environment variable to point to different s3 buckets in different aws accounts. You'll also have to export <code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code> variables for the given s3 bucket's account authorization.</p>
<p>To make the switch easier on yourself, do the following. Create two export var files, let's call them <code>aws_cluster_acct_1.env</code> and <code>aws_cluster_acct_2.env</code>.</p>
<p>aws_cluster_acct_1.env:</p>
<pre><code>export AWS_ACCESS_KEY_ID=foo1
export AWS_SECRET_ACCESS_KEY=bar1
export KOPS_STATE_STORE=s3://aws-account-kops-bucket-1
</code></pre>
<p>aws_cluster_acct_2.env:</p>
<pre><code>export AWS_ACCESS_KEY_ID=foo2
export AWS_SECRET_ACCESS_KEY=bar2
export KOPS_STATE_STORE=s3://aws-account-kops-bucket-2
</code></pre>
<p>Now when you want to switch between KOPS clusters in different AWS accounts, you can run either <code>source aws_cluster_acct_1.env</code> or <code>source aws_cluster_acct_2.env</code> and that will load up the appropriate environment variables for you to start doing stuff with either AWS account's KOPS clusters.</p>
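<p>For example, switching to the second account and listing its clusters could look like this (just an illustrative sketch of the workflow described above):</p>
<pre><code>source aws_cluster_acct_2.env
kops get clusters
</code></pre>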
|
<p>I can specify a specific version of a chart by doing: <code>helm install --version <some_version> stable/<some_chart></code></p>
<p>But, how do I know which versions are available?</p>
| <h3>Short Answer</h3>
<p>You can list all available versions of a chart using the <code>search repo</code> functionality together with the <code>--versions</code> flag:</p>
<p><code>helm search repo <reponame>/<chartname> --versions</code></p>
<p>This requires that the repo was added previously and is up to date. If your repo was added some time ago, please make sure to keep the local cache updated using <code>helm repo update</code> to also see recently released versions.</p>
<p>The behaviour of managing charts in a repository changed slightly between Helm v2 and Helm v3. So please refer to the corresponding section for details.</p>
<h3>Helm v3</h3>
<p>Helm v3 moved to a more decentralized way of managing charts: instead of obtaining most charts from the preconfigured <code>stable</code> repository, you add the repositories you need up front yourself. Listing the versions of a certain chart can be accomplished by running <code>helm search repo</code> with the full path of the chart (repo name plus chart name) in combination with the <code>--versions</code> flag (or the shorthand <code>-l</code>) like so:</p>
<p><code>helm search repo <reponame>/<chartname> --versions</code></p>
<p>If you are interested in pre-release builds like <code>1.1.0-rc.1</code> or <code>3.0.0-alpha.2</code>, you have to add the <code>--devel</code> flag to also include those.</p>
<p><code>helm search repo <reponame>/<chartname> --versions --devel</code></p>
<p>You can limit the number of results by specifying a version constraint in SemVer notation with the <code>--version</code> flag in addition to <code>--versions</code>. This allows, for example, limiting the results to only v1 charts:</p>
<p><code>helm search repo <reponame>/<chartname> --versions --version ^v1.0</code></p>
<p>Depending on your shell, you may need to put the version string in single quotes (<code>'</code>) because of special characters like <code>^</code>.</p>
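<p>For example, the constraint above with the quoting applied:</p>
<pre><code>helm search repo jetstack/cert-manager --versions --version '^v1.0'
</code></pre>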
<p><strong>Example</strong></p>
<p>One concrete example using jetstack's charts for cert-manager:</p>
<pre><code>$ helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
</code></pre>
<p>Regular search for results that contain <code>jetstack</code></p>
<pre><code>$ helm search repo jetstack
NAME CHART VERSION APP VERSION DESCRIPTION
jetstack/cert-manager v1.0.4 v1.0.4 A Helm chart for cert-manager
jetstack/tor-proxy 0.1.1 A Helm chart for Kubernetes
</code></pre>
<p>Regular search for a specific chart</p>
<pre><code>$ helm search repo jetstack/cert-manager
NAME CHART VERSION APP VERSION DESCRIPTION
jetstack/cert-manager v1.0.4 v1.0.4 A Helm chart for cert-manager
</code></pre>
<p>Listing all the versions for one specific chart</p>
<pre><code>$ helm search repo jetstack/cert-manager --versions
NAME CHART VERSION APP VERSION DESCRIPTION
jetstack/cert-manager v1.0.4 v1.0.4 A Helm chart for cert-manager
jetstack/cert-manager v1.0.3 v1.0.3 A Helm chart for cert-manager
jetstack/cert-manager v1.0.2 v1.0.2 A Helm chart for cert-manager
jetstack/cert-manager v1.0.1 v1.0.1 A Helm chart for cert-manager
...
</code></pre>
<p>Listing unstable/pre-release builds will also include the alpha versions.</p>
<pre><code>$ helm search repo jetstack/cert-manager --versions --devel
NAME CHART VERSION APP VERSION DESCRIPTION
jetstack/cert-manager v1.1.0-alpha.1 v1.1.0-alpha.1 A Helm chart for cert-manager
jetstack/cert-manager v1.1.0-alpha.0 v1.1.0-alpha.0 A Helm chart for cert-manager
jetstack/cert-manager v1.0.4 v1.0.4 A Helm chart for cert-manager
jetstack/cert-manager v1.0.3 v1.0.3 A Helm chart for cert-manager
...
</code></pre>
<p>As listing the versions is integrated into the search, using <code>--versions</code> is not limited to a single chart. Specifying this flag will list all available versions for all charts that match the query string.</p>
<p>For additional information, please check the helm docs at <a href="https://helm.sh/docs/helm/helm_search_repo/" rel="noreferrer">https://helm.sh/docs/helm/helm_search_repo/</a></p>
<h3>Helm v2</h3>
<p>For Helm v2, many artifacts were accessible through the <code>stable</code> repo which came preconfigured with the Helm CLI. Listing all versions was done in a similar way but with a different command. To list the available versions of the chart with Helm v2 use the following command:</p>
<p><code>helm search -l stable/<some_chart></code></p>
<p>The <code>-l</code> or <code>--versions</code> flag is used to display all and not only the latest version per chart.</p>
<p>With Helm v2 you kept your local repositories updated using the <code>helm repo update</code> command.</p>
<p>Reference: <a href="https://v2.helm.sh/docs/helm/#helm-search" rel="noreferrer">https://v2.helm.sh/docs/helm/#helm-search</a></p>
|
<p>I see a lot of traction for spark on kubernetes. Is it better than running spark on Hadoop? Both approaches run in a distributed fashion. Can someone help me understand the difference/comparison between running spark on kubernetes vs the Hadoop ecosystem?</p>
<p>Thanks</p>
| <blockquote>
<p>Can someone help me understand the difference/comparision between running spark on kubernetes vs Hadoop ecosystem?</p>
</blockquote>
<p>Be forewarned this is a theoretical answer, because I don't run Spark anymore, and thus I haven't run Spark on kubernetes, but I have maintained both a Hadoop cluster and now a kubernetes cluster, and so I can speak to some of their differences.</p>
<p>Kubernetes is as much a battle hardened resource manager with api access to all its components as a reasonable person could wish for. It provides very painless declarative resource limitations (both cpu and ram, plus even syscall capacities), very, <em>very</em> painless log egress (both back to the user via <code>kubectl</code> and out of the cluster using multiple flavors of log management approaches), unprecedented level of metrics gathering and egress allowing one to keep an eye on the health of the cluster and the jobs therein, and the list goes on and on.</p>
<p>But perhaps the biggest reason one would choose to run Spark on kubernetes is the same reason one would choose to run kubernetes at all: shared resources rather than having to create new machines for different workloads (well, plus all of those benefits above). So if you have a Spark cluster, it is very, very likely it is going to burn $$$ while a job isn't actively running on it, versus kubernetes will cheerfully schedule other jobs onto those Nodes while they aren't running Spark jobs. Yes, I am aware that Mesos and Yarn are "generic" cluster resource managers, but it has not been my experience that they are as painless or ubiquitous as kubernetes.</p>
<p>I would welcome someone posting the counter narrative, or contributing more hands-on experience of Spark on kubernetes.</p>
|
<p>I have a docker image which has been deployed into kubernetes.</p>
<p>The docker file is as follows.</p>
<pre><code>FROM alpine/jdk1.8:latest
RUN mkdir -p /opt/test/app
COPY app.war /opt/test/app/app.war
CMD java -jar /opt/test/app/app.war
</code></pre>
<p>This application uses hibernate and gets the below error when trying to load a jar file for the connection.</p>
<pre><code>loggerName="org.hibernate.orm.url" threadName="main" txnId="" HHH10000002: File or directory named by URL [file:/opt/test/app/app.war!/WEB-INF/classes] could not be found. URL will be ignored
java.io.FileNotFoundException: /opt/test/app/app.war!/WEB-INF/classes (No such file or directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:225)
at java.util.zip.ZipFile.<init>(ZipFile.java:155)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:103)
at org.hibernate.boot.archive.internal.JarFileBasedArchiveDescriptor.resolveJarFileReference(JarFileBasedArchiveDescriptor.java:165)
at org.hibernate.boot.archive.internal.JarFileBasedArchiveDescriptor.visitArchive(JarFileBasedArchiveDescriptor.java:51)
at org.hibernate.boot.archive.scan.spi.AbstractScannerImpl.scan(AbstractScannerImpl.java:47)
at org.hibernate.boot.model.process.internal.ScanningCoordinator.coordinateScan(ScanningCoordinator.java:75)
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.prepare(MetadataBuildingProcess.java:98)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.<init>(EntityManagerFactoryBuilderImpl.java:199)
at org.hibernate.jpa.boot.spi.Bootstrap.getEntityManagerFactoryBuilder(Bootstrap.java:34)
at org.hibernate.jpa.HibernatePersistenceProvider.getEntityManagerFactoryBuilder(HibernatePersistenceProvider.java:165)
at org.hibernate.jpa.HibernatePersistenceProvider.getEntityManagerFactoryBuilderOrNull(HibernatePersistenceProvider.java:114)
at org.hibernate.jpa.HibernatePersistenceProvider.getEntityManagerFactoryBuilderOrNull(HibernatePersistenceProvider.java:71)
at org.hibernate.jpa.HibernatePersistenceProvider.createEntityManagerFactory(HibernatePersistenceProvider.java:52)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:55)
</code></pre>
<p>Is there any permission to be given in docker file or in kubernetes deployment yaml file?</p>
<p>Hibernate config,</p>
<pre><code>Map<String, String> conProps = new HashMap<String, String>();
conProps.put("javax.persistence.jdbc.url", "JDBC_URL");
conProps.put("javax.persistence.jdbc.password", "PASSWORD");
conProps.put("javax.persistence.jdbc.user", "USER");
conProps.put("oracle.net.ssl_cipher_suites","CIPHER");
conProps.put("javax.persistence.jdbc.driver", "oracle.jdbc.OracleDriver");
Persistence.createEntityManagerFactory("appjdbc", conProps);
</code></pre>
<p>I checked the hibernate-core.jar and the below code gets executed.
Not sure if it is supposed to return JarProtocolArchiveDescriptor, but it returns JarFileBasedArchiveDescriptor.</p>
<pre><code>public ArchiveDescriptor buildArchiveDescriptor(URL url, String entry) {
final String protocol = url.getProtocol();
if ( "jar".equals( protocol ) ) {
return new JarProtocolArchiveDescriptor( this, url, entry );
}
else if ( StringHelper.isEmpty( protocol )
|| "file".equals( protocol )
|| "vfszip".equals( protocol )
|| "vfsfile".equals( protocol ) ) {
final File file = new File( extractLocalFilePath( url ) );
if ( file.isDirectory() ) {
return new ExplodedArchiveDescriptor( this, url, entry );
}
else {
return new JarFileBasedArchiveDescriptor( this, url, entry );
}
}
else {
//let's assume the url can return the jar as a zip stream
return new JarInputStreamBasedArchiveDescriptor( this, url, entry );
}
}
</code></pre>
| <blockquote>
<p>loggerName="org.hibernate.orm.url" threadName="main" txnId="" HHH10000002: File or directory named by URL [file:/opt/test/app/app.war!/WEB-INF/classes] could not be found. URL will be ignored</p>
</blockquote>
<p>No, it's not a permission denied, it's using an incorrect URL scheme. <code>file://thing</code> is fine, but using the "bang" syntax requires prefixing the URL with <code>jar:</code>, like so:</p>
<pre><code>jar:file:///opt/test/app/app.war!/WEB-INF/classes
</code></pre>
<p>Without more context I can't say whether that's a hibernate bug or a your-configuration bug, but I can say with high confidence that the error message is exactly correct: there is no such directory as <code>app.war!</code></p>
|
<p>I have a testing Kubernetes cluster and I created an elasticsearch domain on AWS which includes Kibana for log management.</p>
<p>Endpoint: <a href="https://search-this-is-my-es-wuktx5la4txs7avvo6ypuuyri.ca-central-1.es.amazonaws.com" rel="nofollow noreferrer">https://search-this-is-my-es-wuktx5la4txs7avvo6ypuuyri.ca-central-1.es.amazonaws.com</a></p>
<p>As far as I googled, I have to send logs from fluentd. Then I tried to implement DaemonSet using this <a href="https://medium.com/@while1eq1/logging-to-aws-elasticsearch-service-from-kubernetes-855ad0959251" rel="nofollow noreferrer">article</a>. No luck.</p>
<p>Could you please point me to any good documentation?</p>
| <p>I had a similar problem. Below are the full details of how I got it working.</p>
<p>Setup:</p>
<ul>
<li>AWS ES instance accessible via a VPC.</li>
<li>Using <a href="https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch.yaml" rel="nofollow noreferrer">this yaml file</a> as a template.</li>
<li>k8s client version v1.9.2</li>
<li>k8s server version v1.8.7</li>
</ul>
<p><strong>Host problem:</strong></p>
<p>The main problem I had was with defining the environment variables correctly. For <code>FLUENT_ELASTICSEARCH_HOST</code>, I was including the <code>https://</code> prefix on the host URL. Once I removed that, my connection problems went away.</p>
<p><strong>Authentication:</strong></p>
<p>There's no username or password configured for AWS ES. Per <a href="https://github.com/fluent/fluentd-kubernetes-daemonset/issues/33" rel="nofollow noreferrer">this discussion</a>, I set the <code>FLUENT_ELASTICSEARCH_USER</code> and <code>FLUENT_ELASTICSEARCH_PASSWORD</code> values to null.</p>
<p><strong>Sample configuration:</strong></p>
<p>Here's the full set of environment variables in my daemonset yaml file:</p>
<pre><code>- name: FLUENT_ELASTICSEARCH_HOST
value: "vpc-MY-DOMAIN.REGION.es.amazonaws.com"
- name: FLUENT_ELASTICSEARCH_PORT
value: "443"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "https"
- name: FLUENT_ELASTICSEARCH_USER
value: null
- name: FLUENT_ELASTICSEARCH_PASSWORD
value: null
</code></pre>
<p><strong>Bonus: connecting to Kibana</strong></p>
<p>Instead of setting up AWS Cognito, I created an nginx pod in my kubernetes cluster that I use as a proxy to reach Kibana. I use the <code>kubectl port-forward</code> command to reach the nginx server from my local machine.</p>
<p>Here's my nginx.conf:</p>
<pre><code>server {
listen 80;
listen [::]:80;
server_name MY-DOMAIN;
location /_plugin/kibana {
proxy_pass https://vpc-MY-DOMAIN.REGION.es.amazonaws.com/_plugin/kibana;
}
location / {
proxy_pass https://vpc-MY-DOMAIN.REGION.es.amazonaws.com;
}
}
</code></pre>
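<p>For reference, one way to run that config in the cluster (a rough sketch; the <code>kibana-proxy</code> names and the <code>nginx:1.15</code> image tag are placeholders I picked) is to load it into a ConfigMap and mount it into a stock nginx pod:</p>
<pre><code># package the server block above as conf.d/default.conf
kubectl create configmap kibana-proxy-conf --from-file=default.conf=nginx.conf
</code></pre>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: kibana-proxy
spec:
  containers:
  - name: nginx
    image: nginx:1.15
    ports:
    - containerPort: 80
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/conf.d   # replaces the image's default conf.d
  volumes:
  - name: conf
    configMap:
      name: kibana-proxy-conf
</code></pre>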
<p>Once the nginx pod is deployed, I run this command:</p>
<pre><code>kubectl port-forward POD_NAME 8888:80
</code></pre>
<p>Kibana is now accessible at <a href="http://localhost:8888/_plugin/kibana" rel="nofollow noreferrer">http://localhost:8888/_plugin/kibana</a></p>
<p>I'm still having a timeout issue with the port-forward command and a problem with nginx caching the ES service IP (since that can change), but I'll update my response once I resolve those issues.</p>
|
<p>I am trying to add the nginx ingress controller to my kubernetes cluster. My current cluster has 3 nodes and they all have open firewall rules between them. (Note: This is a bare metal cluster for educational purposes) </p>
<p>I have created a deployment with the following config</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodejs-test
labels:
app: nodejs
spec:
replicas: 5
selector:
matchLabels:
app: nodejs
template:
metadata:
labels:
app: nodejs
spec:
containers:
- name: nodejs-container
image: gcr.io/infrastructure/test-nodejs-server
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: test-deployment-service
spec:
selector:
app: nodejs
ports:
- protocol: TCP
port: 80
targetPort: 3000
</code></pre>
<p>After running kubectl create -f file.yaml</p>
<p>I see all pods and services running. Within the cluster I can curl that service ip and get a response from the pods.</p>
<p>Now I go to create a ingress.</p>
<p>I first followed the install steps found at <a href="https://github.com/kubernetes/ingress-nginx/tree/master/deploy" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/tree/master/deploy</a></p>
<p>I can see my ingress controller running. </p>
<pre><code> NAMESPACE NAME READY STATUS RESTARTS AGE
default my-test-6445d9bb7b-blm7t 1/1 Running 0 3h
default nodejs-test-5657f7bb74-7hwqk 1/1 Running 0 23h
default nodejs-test-5657f7bb74-bh44x 1/1 Running 0 23h
default nodejs-test-5657f7bb74-dw4g4 1/1 Running 0 23h
default nodejs-test-5657f7bb74-k7bf4 1/1 Running 0 23h
default nodejs-test-5657f7bb74-xdthl 1/1 Running 0 23h
ingress-nginx default-http-backend-55c6c69b88-4d5bf 1/1 Running 0 26m
ingress-nginx nginx-ingress-controller-5c6698dfbf-gs9zd 1/1 Running 0 25m
kube-system etcd-yoda.me.com 1/1 Running 1 1d
kube-system kube-apiserver-yoda.me.com 1/1 Running 1 1d
kube-system kube-controller-manager-yoda.me.com 1/1 Running 1 1d
kube-system kube-dns-6f4fd4bdf-62bxg 3/3 Running 3 1d
kube-system kube-flannel-ds-jbmwg 1/1 Running 1 1d
kube-system kube-flannel-ds-r8nfb 1/1 Running 2 1d
kube-system kube-flannel-ds-rczbx 1/1 Running 0 1d
kube-system kube-proxy-brm2g 1/1 Running 0 1d
kube-system kube-proxy-qjgcv 1/1 Running 1 1d
kube-system kube-proxy-sz6rw 1/1 Running 1 1d
kube-system kube-scheduler-yoda.me.com 1/1 Running 1 1d
kube-system kubernetes-dashboard-5bd6f767c7-stscl 1/1 Running 1 23h
kube-system tiller-deploy-59d854595c-tngg6 1/1 Running 0 23h
</code></pre>
<p>I then create my ingress with the following file</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /whoop
backend:
serviceName: test-deployment-service
servicePort: 80
</code></pre>
<p>Then I run kubectl get services --all-namespaces to find my ingress controller port.</p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
default my-test ClusterIP 10.110.102.5 <none> 80/TCP 4h
default test-deployment-service ClusterIP 10.104.239.37 <none> 80/TCP 23h
ingress-nginx default-http-backend ClusterIP 10.105.218.87 <none> 80/TCP 29m
ingress-nginx ingress-nginx NodePort 10.110.39.104 <none> 80:30652/TCP,443:31624/TCP 27m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1d
kube-system kubernetes-dashboard ClusterIP 10.100.219.110 <none> 443/TCP 23h
kube-system tiller-deploy ClusterIP 10.101.179.202 <none> 44134/TCP 23h
</code></pre>
<p>Now I go to curl my server:</p>
<pre><code>curl -v myserver:30652/whoop/
</code></pre>
<p>The resulting response is: </p>
<pre><code>* Trying myip...
* TCP_NODELAY set
* Connected to myserver (myip) port 30652 (#0)
> GET /whoop/ HTTP/1.1
> Host: myserver:30652
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 308 Permanent Redirect
< Server: nginx/1.13.9
< Date: Fri, 02 Mar 2018 14:38:21 GMT
< Content-Type: text/html
< Content-Length: 187
< Connection: keep-alive
< Location: https://myserver:30652/whoop/
< Strict-Transport-Security: max-age=15724800; includeSubDomains;
<
<html>
<head><title>308 Permanent Redirect</title></head>
<body bgcolor="white">
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.13.9</center>
</body>
</html>
* Connection #0 to host myserver left intact
</code></pre>
<p>I would expect to be getting the response from my pods. However I always get the 308. </p>
<p>If I curl a random path</p>
<p>curl myserver:30652/random</p>
<p>The response I get is </p>
<pre><code>default backend - 404
</code></pre>
<p>How can I hook up my ingress and ingress controller for the routing to work correctly?</p>
| <p>I was having the same problem, but I did not want to add a <code>host</code> value.
To prevent the HTTP-to-HTTPS redirect by the default server that <a href="https://stackoverflow.com/a/49455779/4410404">David</a> mentioned, I instead added this annotation to the ingress, and it fixed it for me: <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code>.</p>
<p>My working ingress looks like this now:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: "internal-nginx"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: app1
servicePort: 80
- path: /app2/*
backend:
serviceName: app2
servicePort: 80
</code></pre>
|
<p>When creating an ingress resource in GCE using the Nginx ingress controller, the ingress resource is stuck on "Creating ingress". Any custom annotations appear to be lost, but I can access the URL defined by the ingress.</p>
<p>What could be causing this?</p>
<p><a href="https://i.stack.imgur.com/u3uLe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u3uLe.png" alt="enter image description here"></a></p>
| <p>This turned out to be because I was sending the annotation</p>
<pre><code>nginx.ingress.kubernetes.io/ssl-redirect: false
</code></pre>
<p>instead of </p>
<pre><code>nginx.ingress.kubernetes.io/ssl-redirect: "false"
</code></pre>
<p>According to <a href="https://github.com/kubernetes/ingress-nginx/issues/1990" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/1990</a>, the Nginx controller only accepts strings containing "true" or "false". By sending boolean values, GCE was hanging.</p>
<p>Interestingly there were no errors indicating a problem, and I could access the ingress URL, which made debugging the problem quite painful.</p>
|
<p>I'm attempting to get autoscaling set up using custom metrics.</p>
<p>Currently, I have:</p>
<ul>
<li><a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">Prometheus Operator</a> running in the <code>monitoring</code> namespace.</li>
<li><a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="nofollow noreferrer">K8S Prometheus Adapter</a> running in the <code>custom-metrics</code> namespace, exposed through a service named <code>api</code>.</li>
<li>A deployment running in the <code>my-namespace</code> namespace.</li>
</ul>
<p>I added a <code>HorizontalPodAutoscaler</code> in the <code>custom-metrics</code> namespace, targeting my deployment.</p>
<p>The YAML looks like:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: my-deployment-hpa
namespace: custom-metrics
spec:
maxReplicas: 10
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1beta2
kind: Deployment
name: my-deployment
metrics:
- type: Object
object:
metricName: queue_length
targetValue: 1000
target:
apiVersion: extensions/v1beta1
kind: Service
name: api
</code></pre>
<p>When I describe the <code>HorizontalPodAutoscaler</code> via <code>kubectl describe hpa my-deployment-hpa -n custom-metrics</code>, I see <code>AbleToScale: False</code> because <code>FailedGetScale the HPA controller was unable to get the target's current scale: deployments/scale.extensions "my-deployment" not found</code>.</p>
<p>Does Kubernetes expect that the custom metrics API, the scale target and the <code>HorizontalPodAutoscaler</code> all exist in the same namespace?</p>
| <p>Yes, the <code>Deployment</code> you want to scale must be in the same namespace as the <code>HorizontalPodAutoscaler</code> object.</p>
<p>Here is an extract from the Kubernetes <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer" title="Kubernetes Namespaces">documentation</a>:</p>
<blockquote>
<p>Namespaces provide a scope for names. Names of resources need to be
unique within a namespace, but not across namespaces.</p>
</blockquote>
<p>The <code>Namespace</code> value of the <code>HorizontalPodAutoscaler</code> object is the only way to tell the controller which namespace holds the deployment you want to scale.</p>
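<p>As a minimal sketch (reusing the names from the question and leaving the metrics section as it was), the fix is simply to create the HPA in the deployment's namespace:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment-hpa
  namespace: my-namespace      # same namespace as the Deployment
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1beta2
    kind: Deployment
    name: my-deployment
  # metrics: ... unchanged from the question
</code></pre>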
|
<p>We have a requirement to build custom docker images from base docker images with some additional packages/customization. These custom docker images then need to be deployed into kubernetes. We are exploring various tools to figure out how docker builds can be done in a kubernetes cluster (without direct access to the docker daemon). Open source tools like kaniko provide the capability to build docker images within a container (hence in a kubernetes cluster).</p>
<ol>
<li><p>Is it a good practice to build docker images in a kubernetes cluster where other containers will be run/executed? Are there any obvious challenges with kaniko?</p></li>
<li><p>Should separate dedicated VMs be created to manage the build process?</p></li>
</ol>
| <p><strong>1. Is it a good practice to build docker images in a kubernetes cluster where other containers will be run/executed?
Are there any obvious challenges with kaniko?</strong></p>
<p>Yes, it is possible to build images inside <a href="https://kubernetes.io" rel="nofollow noreferrer">Kubernetes</a> containers, but it could be a bit of a challenge.</p>
<p>Some users use it to build a workflow for <a href="https://kublr.com/blog/using-jenkins-and-kubernetes-for-continuous-integration-and-delivery/" rel="nofollow noreferrer">CI/CD with Jenkins</a>. In fact, it is better to use tools to simplify the process.</p>
<p>Kubernetes also documents its rules around preparing and using container images; they are described <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">here</a>.</p>
<p>Another way is to use <a href="https://kublr.com/blog/using-jenkins-and-kubernetes-for-continuous-integration-and-delivery/" rel="nofollow noreferrer">Kaniko</a>, this tool builds container images from a Dockerfile inside a container or Kubernetes cluster.</p>
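<p>To give a feel for what that looks like, here is a rough sketch of a Kaniko build running as a plain pod (the git context, destination image and <code>regcred</code> secret are placeholders; push credentials are provided as a docker-registry secret mounted at <code>/kaniko/.docker</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--dockerfile=Dockerfile"
    - "--context=git://github.com/my-org/my-app.git"        # placeholder repo
    - "--destination=registry.example.com/my-app:latest"    # placeholder registry/tag
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker
  volumes:
  - name: docker-config
    secret:
      secretName: regcred            # docker-registry secret with push credentials
      items:
      - key: .dockerconfigjson
        path: config.json
</code></pre>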
<p>I found this article interesting to read on this <a href="https://applatix.com/case-docker-docker-kubernetes-part-2/" rel="nofollow noreferrer">topic</a>.</p>
<p>On the other hand, there was a successful attempt to build images without the Docker daemon running. You may be interested in the <a href="https://bazel.build/" rel="nofollow noreferrer">Bazel</a> project and this <a href="https://medium.com/bitnami-perspectives/building-docker-images-without-docker-c619061b13a9" rel="nofollow noreferrer">story</a> on how to use it.</p>
<p><strong>2. Should separate dedicated VMs be created to manage the build process?</strong> </p>
<p>Regarding your second question: it is not necessary to set up a dedicated VM to run the Docker image build workflow.</p>
<p>Finally, it may be interesting to have a <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry" rel="nofollow noreferrer">private registry</a> in Kubernetes cluster and use it for building purposes.</p>
|
<p>I want to use the REST API to update the deployment,
and I tested it with Postman, but I always got a 415 back.</p>
<hr>
<p>the info is as follows:</p>
<p><strong>type:</strong>
PATCH</p>
<p><strong>url:</strong> <a href="https://k8sClusterUrl:6443/apis/extensions/v1beta1/namespaces/ns/deployments/peer0" rel="nofollow noreferrer">https://k8sClusterUrl:6443/apis/extensions/v1beta1/namespaces/ns/deployments/peer0</a></p>
<p><strong>header:</strong> </p>
<pre><code>Authorization: bearer token
Content-Type:application/json
</code></pre>
<p><strong>body:</strong></p>
<pre><code>{
"kind": "Deployment",
"spec":
{
"template":
{
"spec":
{
"containers":[
{
"$setElementOrder/volumeMounts":[{"mountPath":"/host/var/run/"},{"mountPath":"/mnt"}],
"name":"peer0",
"image":"hyperledger/fabric-peer:x86_64-1.1.0"}
]
}
}
}
}
</code></pre>
<p><strong>response:</strong></p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "the server responded with the status code 415 but did not return more information",
"details": {},
"code": 415
}
</code></pre>
<hr>
<p>I have multiple containers in this pod, and only want to apply the patch to the specific container <code>peer0</code>.<br>
Is anything different needed for the <code>$setElementOrder</code> field? </p>
| <p><code>415</code> is invalid media type.</p>
<p>In this case, you should be setting the media type to one of the patch types the API server accepts: <code>application/strategic-merge-patch+json</code> (which matches the <code>$setElementOrder</code> body you are sending), <code>application/merge-patch+json</code>, or <code>application/json-patch+json</code>; plain <code>application/json</code> is rejected for PATCH (you can see the accepted types in the documentation <a href="https://kubernetes.io/docs/reference/federation/extensions/v1beta1/operations/" rel="nofollow noreferrer">here</a>) </p>
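<p>As a quick sketch of a working request (host, token and patch body reused from the question as placeholders):</p>
<pre><code>curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  --data @patch.json \
  https://k8sClusterUrl:6443/apis/extensions/v1beta1/namespaces/ns/deployments/peer0
</code></pre>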
|
<p>Most of my images that I deploy into Kubernetes share a common base (<code>FROM</code>) image. So, I have multiple applications deployed into multiple pods. How does the Docker layer cache work across multiple pods, given that the <code>FROM</code> image is the same for all of them?</p>
| <p>See "<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#how-pods-manage-multiple-containers" rel="noreferrer">How Pods manage multiple Containers</a>"</p>
<blockquote>
<p>Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service.<br>
The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster.<br>
The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated.</p>
</blockquote>
<p>So, <strong>on each node of the cluster</strong>, your pods will share that node's container runtime (for instance docker, but others exist) and its local image cache.<br>
Which means the layers of your base image are pulled once per node and reused by the various containers running in pods on that node.<br>
Then each container will write to its own thin layer on top of those shared, read-only image layers, via a <a href="https://en.wikipedia.org/wiki/UnionFS" rel="noreferrer">UnionFS</a>-style storage driver (which can be one of a few different backends, such as overlay2, aufs or btrfs), using <a href="https://en.wikipedia.org/wiki/Copy-on-write" rel="noreferrer">copy-on-write</a>.</p>
|
<p>In Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">cronjobs</a>, It is stated in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations" rel="noreferrer">limitations section</a> that </p>
<blockquote>
<p>Jobs may fail to run if the CronJob controller is not running or broken for a span of time from before the start time of the CronJob to start time plus startingDeadlineSeconds, or if the span covers multiple start times and concurrencyPolicy does not allow concurrency.</p>
</blockquote>
<p>What I understand from this is that, If the <code>startingDeadlineSeconds</code> is set to <code>10</code> and the cronjob couldn't start for some reason at its scheduled time, then it can still be attempted to start again as long as those <code>10</code> seconds haven't passed, however, after the <code>10</code> seconds, it for sure won't be started, is this correct?</p>
<p>Also, If I have <code>concurrencyPolicy</code> set to <code>Forbid</code>, does K8s count it as a fail if a cronjob tries to be scheduled, when there is one already running?</p>
| <p>After investigating the code base of the <a href="https://github.com/kubernetes/kubernetes" rel="noreferrer">Kubernetes repo</a>, this is how the CronJob controller works:</p>
<ol>
<li><p>The CronJob controller <a href="https://github.com/kubernetes/kubernetes/blob/392bf0adef478175b9cf0226b02820eb1820f797/pkg/controller/cronjob/cronjob_controller.go#L95-L96" rel="noreferrer">checks the list of cronjobs every 10 seconds</a> in the given Kubernetes client.</p>
</li>
<li><p>For every CronJob, it checks how many schedules it missed in the duration from the <code>lastScheduleTime</code> till now. If there are more than <a href="https://github.com/kubernetes/kubernetes/blob/392bf0adef478175b9cf0226b02820eb1820f797/pkg/controller/cronjob/utils.go#L146" rel="noreferrer">100 missed schedules</a>, then it doesn't start the job and records the event:</p>
<p><code>"FailedNeedsStart", "Cannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew."</code></p>
</li>
</ol>
<p>It is <strong>important</strong> to note, that if the field <code>startingDeadlineSeconds</code> is set (not <code>nil</code>), it will count how many missed jobs occurred from the value of <code>startingDeadlineSeconds</code> till now. For example, if <code>startingDeadlineSeconds</code> = <code>200</code>, It will count how many missed jobs occurred in the last <code>200</code> seconds. The exact implementation of counting how many missed schedules can be found <a href="https://github.com/kubernetes/kubernetes/blob/392bf0adef478175b9cf0226b02820eb1820f797/pkg/controller/cronjob/utils.go#L96" rel="noreferrer">here</a>.</p>
<ol start="3">
<li><p>In case there are not more than 100 missed schedules from the previous step, the CronJob controller checks whether the time <code>now</code> is still before <code>scheduledTime + startingDeadlineSeconds</code>, i.e. whether it is not yet too late to start the job (past the deadline). If it is not too late, the CronJob controller goes ahead and starts the job. However, if it is already too late, it doesn't start the job and records the event:</p>
<p><code>"Missed starting window for {cronjob name}. Missed scheduled time to start a job {scheduledTime}"</code></p>
</li>
</ol>
<p>It is also <strong>important</strong> to note that if the field <code>startingDeadlineSeconds</code> is not set, then there is no deadline at all. This means the CronJob controller will attempt to start the job without checking how late it is.</p>
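<p>For reference, these are the two fields in question on a CronJob spec (a minimal sketch; the schedule and job details are placeholders):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/5 * * * *"
  startingDeadlineSeconds: 10   # how late a missed run may still be started
  concurrencyPolicy: Forbid     # skip a run if the previous one is still active
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: job
            image: busybox
            command: ["sh", "-c", "date; echo hello"]
</code></pre>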
<p>Therefore to answer the questions above:</p>
<p><strong>1. If the startingDeadlineSeconds is set to 10 and the cronjob couldn't start for some reason at its scheduled time, then it can still be attempted to start again as long as those 10 seconds haven't passed, however, after the 10 seconds, it for sure won't be started, is this correct?</strong></p>
<p>The CronJob controller will attempt to start the job, and it will be successfully scheduled if the 10 seconds after its scheduled time haven't passed yet. However, if the deadline has passed, it won't be started for this run, and it will be counted as a missed schedule in later executions.</p>
<p><strong>2. If I have concurrencyPolicy set to Forbid, does K8s count it as a fail if a cronjob tries to be scheduled, when there is one already running?</strong></p>
<p>Yes, it will be counted as a missed schedule. Since missed schedules are calculated as I stated above in point 2.</p>
|
<p>I have a node.js container running on kubernetes that handles websocket connections that will normally be kept open until the user quits. Now when I do a rolling update or when the deployment scales down, is there a way to stop the pod from being killed until the last user has disconnected? </p>
| <p>At the moment, Kubernetes does not support graceful connection closing. </p>
<p><code>terminationGracePeriodSeconds</code> only defines the timer before the Pod is forcibly terminated. It doesn't take care of the Pod's open connections. </p>
<p>The only way to deal with rolling updates is to adapt your application to be able to switch clients to other pods transparently. </p>
<p>You may also want to take a look at a <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">Disruption Budget</a>. It may help in some cases to get a more reliable setup.</p>
<blockquote>
<p>A PDB specifies the number of replicas that an application can
tolerate having, relative to how many it is intended to have. For
example, a Deployment which has a .spec.replicas: 5 is supposed to
have 5 pods at any given time. If its PDB allows for there to be 4 at
a time, then the Eviction API will allow voluntary disruption of one,
but not two pods, at a time.</p>
<p>PDBs cannot prevent <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions" rel="nofollow noreferrer">involuntary disruptions</a> from occurring, but
they do count against the budget.</p>
<p>Pods which are deleted or unavailable due to a rolling upgrade to an
application do count against the disruption budget, but <em>controllers</em>
(like <em>deployment</em> and <em>stateful-set</em>) are not limited by PDBs when doing
rolling upgrades – the handling of failures during application updates
is configured in the controller spec. (Learn about updating a
<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">deployment</a>.)</p>
<p>When a pod is evicted using the eviction API, it is gracefully
terminated (see <code>terminationGracePeriodSeconds</code> in <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#podspec-v1-core" rel="nofollow noreferrer">PodSpec</a>.)</p>
</blockquote>
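<p>A minimal PodDisruptionBudget sketch (the selector and the counts are placeholders to adapt to your deployment):</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: websocket-pdb
spec:
  minAvailable: 4              # or use maxUnavailable: 1
  selector:
    matchLabels:
      app: websocket-server    # must match your pods' labels
</code></pre>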
|
<p>According to this <a href="https://www.docker.com/kubernetes" rel="nofollow noreferrer">link</a> docker is supporting kubernetes on linux, mac and windows. I can't make out whether or not kubernetes is installed within docker like swarm is. So is it true that with a normal install of docker it will also install kubernetes?</p>
| <p>I would not say it'll be installed but you'll get the chance to do it if you want to.</p>
<p><strong>Here's the about page:</strong></p>
<blockquote>
<p>Version 18.05.0-ce-win67 (18263)
Channel: edge
0c7f1aa</p>
</blockquote>
<p><strong>If you navigate to the Settings page (Docker CE for windows in my case), and select to enable Kubernetes, this is what you get:</strong></p>
<p><a href="https://i.stack.imgur.com/wK7AM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wK7AM.png" alt="Kubernetes Opt-in"></a></p>
<p><strong>If you choose to install it...</strong></p>
<p><a href="https://i.stack.imgur.com/aTjAJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aTjAJ.png" alt="Kubernetes intallation"></a></p>
<p><strong>Wait a little...</strong></p>
<p><a href="https://i.stack.imgur.com/AtHfY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AtHfY.png" alt="Kubernetes installation completed"></a></p>
<p>After completion you can use your kubernetes - through regular <code>docker stack *</code> commands - to run a single-node cluster locally while developing. Which is really nice!</p>
<p>You can use the kubectl CLI directly also. It should be already in your PATH. The CLI is located at <code>C:\Program Files\Docker\Docker\resources\bin</code>.</p>
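<p>Once it's up, a quick sanity check from a terminal (assuming the bundled kubectl is on your PATH; depending on the version the context is named <code>docker-for-desktop</code> or <code>docker-desktop</code>):</p>
<pre><code>kubectl config use-context docker-for-desktop
kubectl get nodes
kubectl get pods --all-namespaces
</code></pre>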
<p>Hope this helps!</p>
|
<p>I have written Pre- and Post-upgrade hooks for my Helm chart, which will get invoked when I do a helm upgrade. My Pre-upgrade hook is supposed to write some information to a file in the shared persistent storage volume. Somehow, I don't see this file getting created, though I am able to see the hook getting invoked.</p>
<p>This is what my pre-upgrade hook looks like:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: "{{.Release.Name}}-preupgrade"
labels:
heritage: {{.Release.Service | quote }}
release: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-weight": "0"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: "{{.Release.Name}}"
labels:
heritage: {{.Release.Service | quote }}
release: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
spec:
restartPolicy: Never
containers:
- name: pre-upgrade-job
image: {{ .Values.registry }}/{{ .Values.imageRepo }}:{{ .Values.imageTag }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
volumeMounts:
- mountPath: {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}
name: shared-pvc
command: ['/bin/sh -c scripts/preUpgradeScript.sh {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}']
volumes:
- name: shared-pvc
persistentVolumeClaim:
claimName: {{ template "fullname" . }}-shared-pv-claim
</code></pre>
<p>My expectation is that the hook should be able to write information to the PVC volume which was already created prior to the upgrade. When I did a describe on the upgrade pods, I could see the following error:</p>
<pre><code>Error: failed to start container "pre-upgrade-job": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/bin/sh -c scripts/preUpgradeScript.sh /opt/flink/share/myfl-flink\": stat /bin/sh -c scripts/preUpgradeScript.sh /opt/flink/share/myfl-flink: no such file or directory"
</code></pre>
<p>Doesn't the hook first mount the volume before running the command? Also, I'm packaging the script with the docker image, so I believe it should be there.
I am unable to exec into the hook pod as it goes into the Failed state.
Can anyone help me with this?</p>
<p>[Update] I added a sleep command to enter the pod and check if the script is available and if the mount path exists. All looks fine. I don't understand why this error would come up.</p>
| <p>Looks like I needed to give the command differently:</p>
<pre><code>command: ["/bin/sh", "-c", "scripts/preUpgradeScript.sh","{{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}"]
</code></pre>
|
<p>Is there any way to convert configuration files like XML into a Kubernetes ConfigMap yaml file without using the kubectl command? Let's say I want to create XML files dynamically, which in turn get stored in a git repo as ConfigMap yaml, and some operator can monitor for yaml changes and deploy them to the cluster.</p>
| <blockquote>
<p>configuration files like xml to kubernetes configmap yaml file without using kubectl command</p>
</blockquote>
<p>Sure, because the only thing <code>kubectl</code> does with <code>yaml</code> is immediately convert it to <code>json</code> and then <code>POST</code> (or <code>PUT</code> or whatever) to the kubernetes api with a <code>content-type: application/json;charset=utf-8</code> header (you can watch that take place via <code>kubectl --v=100 create -f my-thing.yaml</code>)</p>
<p>So, the answer to your question is to use your favorite programming language that has libraries for json (or the <a href="https://github.com/stedolan/jq#readme" rel="nofollow noreferrer">positively amazing jq</a>), package the XML as necessary, the use something like <a href="https://github.com/box/kube-applier#readme" rel="nofollow noreferrer">kube-applier</a> to monitor and roll out the change:</p>
<pre class="lang-py prettyprint-override"><code># coding=utf-8
import json
import sys
result = {
"apiVersion": "v1",
"kind": "ConfigMap",
# etc etc
"data": [],
}
for fn in sys.argv[1:]:
with open(fn) as fh:
body = fh.read()
data.append({fn: body})
json.dump(result, sys.stdout) # or whatever
</code></pre>
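<p>Then run it with the XML files as arguments and commit the output (the script and file names are just an example):</p>
<pre><code>python xml_to_configmap.py settings.xml logging.xml > my-configmap.json
</code></pre>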
|
<p>I have deployed Kubernetes using the link <a href="https://kubernetes.io/docs/tasks/tools/install-kubeadm/" rel="nofollow noreferrer">Kubernetes official page</a></p>
<p>I see that Kubernetes is deployed because at the end I got this</p>
<pre><code>Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.16.32.101:6443 --token ma1d4q.qemewtyhkjhe1u9f --discovery-token-ca-cert-hash sha256:408b1fdf7a5ea5f282741db91ebc5aa2823802056ea9da843b8ff52b1daff240
</code></pre>
<p>When I do kubectl get pods it throws this error </p>
<pre><code># kubectl get pods
The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
</code></pre>
<p>When I look at the cluster-info it says the following </p>
<pre><code>kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6553
</code></pre>
<p>But when I look at the config it shows the following </p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1EWXlNREEyTURJd04xb1hEVEk0TURZeE56QTJNREl3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0ZXCkxWQkJoWmZCQms4bXJrV0w2MmFGd2U0cUYvZkRHekJidnE5TFpGN3M4UW1rdDJVUlo5YmtSdWxzSlBrUXV1U2IKKy93YWRoM054S0JTQUkrTDIrUXdyaDVLSy9lU0pvbjl5TXJlWnhmRFdPTno2Y3c4K2txdnh5akVsRUdvSEhPYQpjZHpuZnJHSXVZS3lwcm1GOEIybys0VW9ldytWVUsxRG5Ra3ZwSUZmZ1VjVWF4UjVMYTVzY2ZLNFpweTU2UE4wCjh1ZjdHSkhJZFhNdXlVZVpFT3Z3ay9uUTM3S1NlWHVhcUlsWlFqcHQvN0RrUmFZeGdTWlBqSHd5c0tQOHMzU20KZHJoeEtyS0RPYU1Wczd5a2xSYjhzQjZOWDB6UitrTzhRNGJOUytOYVBwbXFhb3hac1lGTmhCeGJUM3BvUXhkQwpldmQyTmVlYndSWGJPV3hSVzNjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDTFBlT0s5MUdsdFJJTjdmMWZsNmlINTg0c1UKUWhBdW1xRTJkUHFNT0ZWWkxjbDVJZlhIQ1dGeE13OEQwOG1NNTZGUTNEVDd4bi9lNk5aK1l2M1JrK2hBdmdaQgpaQk9XT3k4UFJDOVQ2S1NrYjdGTDRSWDBEamdSeE5WTFIvUHd1TUczK3V2ZFhjeXhTYVJJRUtrLzYxZjJsTGZmCjNhcTdrWkV3a05pWXMwOVh0YVZGQ21UaTd0M0xrc1NsbDZXM0NTdVlEYlRQSzJBWjUzUVhhMmxYUlZVZkhCMFEKMHVOQWE3UUtscE9GdTF2UDBDRU1GMzc4MklDa1kzMDBHZlFEWFhiODA5MXhmcytxUjFQbEhJSHZKOGRqV29jNApvdTJ1b2dHc2tGTDhGdXVFTTRYRjhSV0grZXpJRkRjV1JsZnJmVHErZ2s2aUs4dGpSTUNVc2lFNEI5QT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
**server: https://172.16.32.101:6443**
</code></pre>
<p>Even telnet shows that there is a process listening on 6443 but not on 6553.
How can I change the port, and how can I fix the issue?</p>
<p>Any help would be of great use </p>
<p>Thanks in advance.</p>
| <p>I figured out the issue: it was because of the firewall on the machine. I could join nodes to the cluster once I allowed traffic via port 6443. I didn't fix the issue with this post, but for beginners, use this <a href="https://www.google.com/url?q=http://www.bogotobogo.com/DevOps/DevOps-Kubernetes-III-Kubernetes-on-Linux-with-kubeadm.php&sa=D&source=hangouts&ust=1530265751951000&usg=AFQjCNHMxjNrFJ7CBGaranLBvKr84_qFrA" rel="nofollow noreferrer">K8's on AWS</a> guide for a better idea.</p>
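<p>For anyone hitting the same thing, opening the API server port looks roughly like this, depending on which firewall the node uses (a sketch, not the exact commands I ran):</p>
<pre><code># firewalld (CentOS/RHEL)
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --reload

# ufw (Ubuntu/Debian)
sudo ufw allow 6443/tcp
</code></pre>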
<p>Thanks for the help guys...!!!</p>
|
<p>I am trying to get Service <code>label selectors</code> through <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Client</a>. I am using <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#list_service_for_all_namespaces" rel="nofollow noreferrer">list_service_for_all_namespaces</a> method to retrieve the services, and filter it with <code>field_selector</code> parameter like:</p>
<pre><code>...
field_selector="spec.selector={u'app': 'redis'}
...
services = v1.list_service_for_all_namespaces(field_selector=field_selector, watch=False)
for service in services.items:
print(service)
...
</code></pre>
<p>I get this error:</p>
<pre><code>HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"\"spec.selector\" is not a known field selector: only \"metadata.name\", \"metadata.namespace\"","reason":"BadRequest","code":400}
</code></pre>
<p>So, it seems that only <code>name</code> and <code>namespace</code> are valid parameters, which is not documented:</p>
<blockquote>
<p>field_selector = 'field_selector_example' # str | A selector to restrict the list of returned objects by their fields. Defaults to everything. (optional)</p>
</blockquote>
<p>For now my workaround is to set the same <em>labels</em> on the service as its <em>label selectors</em>, and then retrieve it through the <code>label_selector</code> parameter, but I'd like to be able to get the service's <code>label selectors</code> directly.</p>
<p>The thing is that from the beginning I need to get the endpoints behind the service (the backend pods), but the API call is not even returning this information, so I thought I would get the selectors, match them against the labels on the pods, and there we go, but now I am realizing the selectors are not possible to get either. </p>
<p>This is too limiting. I am thinking maybe my approach is wrong. Does anyone know a way of getting the <code>label selectors</code> from a service?</p>
| <p>You should be able to get the selector from a service object, and then use that to find all the pods that match the selector.</p>
<p>For example (I am hoping I don't have typos, and my python is rusty):</p>
<pre><code>services = v1.list_service_for_all_namespaces(watch=False)
for svc in services.items:
if svc.spec.selector:
# convert the selector dictionary into a string selector
# for example: {"app":"redis"} => "app=redis"
selector = ''
for k,v in svc.spec.selector.items():
selector += k + '=' + v + ','
selector = selector[:-1]
# Get the pods that match the selector
pods = v1.list_pod_for_all_namespaces(label_selector=selector)
for pod in pods.items:
print(pod.metadata.name)
</code></pre>
|
<p>I am new to Kubernetes <code>NetworkPolicy</code> and the Network plugin <code>calico</code>.</p>
<p>I have successfully implemented <code>calico</code> in my Kubernetes cluster:</p>
<pre><code>[root@node1 ~]# kubectl get po --all-namespaces -o wide | grep calico
kube-system calico-kube-controllers-5d8b5bc986-sllmk 1/1 Running
kube-system calico-node-4wk8f 1/1 Running
kube-system calico-node-5kz99 1/1 Running
kube-system calico-node-bfk9w 1/1 Running
kube-system calico-node-f2tb2 1/1 Running
kube-system calico-node-hrcf4 1/1 Running
kube-system calico-node-wvh8d 1/1 Running
</code></pre>
<p>I have also configured relevant network policies and they work perfectly fine.</p>
<p>The only thing I am concerned about is logging. I am unable to find any logs that could tell me whether some request is being accepted or blocked.</p>
<p>I've tried checking the logs of the <code>calico-node-*</code> pods but they do not provide any relevant logs.</p>
<p>Are there any other logs that I can look at? </p>
| <p>You can inspect calico-node container logs on each node of your Kubernetes cluster under the path <code>/var/log/calico</code>; the location can be changed via the <code>--log-dir</code> parameter of the <code>calicoctl node run</code> command, as described in this <a href="https://docs.projectcalico.org/v1.5/usage/troubleshooting/logging" rel="nofollow noreferrer">link</a>. </p>
<p>However, if you want to observe logs along <a href="https://chrislovecnm.com/kubernetes/cni/choosing-a-cni-provider/" rel="nofollow noreferrer">CNI Network</a>, please visit this <a href="https://docs.projectcalico.org/v3.0/getting-started/kubernetes/troubleshooting" rel="nofollow noreferrer">page</a>. </p>
<p>I found it very helpful to have the Calico CNI plugin log its events through the <code>kubelet</code> and then collect them via <code>systemd</code>; you can also adjust the <code>log_level</code> parameter to control how much detail is logged.</p>
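<p>As a quick starting point (the pod name is taken from the listing above; adjust to your cluster, and add <code>-c calico-node</code> if the pod has more than one container):</p>
<pre><code># logs of the Calico agent pod running on a given node
kubectl logs -n kube-system calico-node-4wk8f

# per-component log files on the node itself (the default --log-dir)
ls /var/log/calico/
</code></pre>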
|
<p>I am using the <a href="https://github.com/fluent/fluentd-kubernetes-daemonset" rel="noreferrer">fluentd daemonset</a> to get kubernetes logs to Elasticsearch/Kibana, which is working fine. Now the problem is that there are 3-4 applications running in kubernetes which have different log patterns; these run in pods, and the pods write to stdout. How should I capture these logs from the different apps using fluentd and forward them to ES?</p>
<p>I want to tag the different application logs with the name of the application. If possible please provide a fluentd config block to achieve the same.</p>
<p>Any help would be appreciated.</p>
| <p>Sure! Check this ConfigMap, which I believe you have as well somewhere in your cluster:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: fluentd-es-config-debug
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: Reconcile
data:
system.conf: |-
<system>
root_dir /tmp/fluentd-buffers/
</system>
containers.input.conf: |-
# This configuration file for Fluentd / td-agent is used
# to watch changes to Docker log files. The kubelet creates symlinks that
# capture the pod name, namespace, container name & Docker container ID
# to the docker logs for pods in the /var/log/containers directory on the host.
# If running this fluentd configuration in a Docker container, the /var/log
# directory should be mounted in the container.
#
# These logs are then submitted to Elasticsearch which assumes the
# installation of the fluent-plugin-elasticsearch & the
# fluent-plugin-kubernetes_metadata_filter plugins.
# See https://github.com/uken/fluent-plugin-elasticsearch &
# https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
# more information about the plugins.
#
# Example
# =======
# A line in the Docker log file might look like this JSON:
#
# {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
# "stream":"stderr",
# "time":"2014-09-25T21:15:03.499185026Z"}
#
# The time_format specification below makes sure we properly
# parse the time format produced by Docker. This will be
# submitted to Elasticsearch and should appear like:
# $ curl 'http://elasticsearch-logging:9200/_search?pretty'
# ...
# {
# "_index" : "logstash-2014.09.25",
# "_type" : "fluentd",
# "_id" : "VBrbor2QTuGpsQyTCdfzqA",
# "_score" : 1.0,
# "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
# "stream":"stderr","tag":"docker.container.all",
# "@timestamp":"2014-09-25T22:45:50+00:00"}
# },
# ...
#
# The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
# record & add labels to the log record if properly configured. This enables users
# to filter & search logs on any metadata.
# For example a Docker container's logs might be in the directory:
#
# /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
#
# and in the file:
#
# 997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
#
# where 997599971ee6... is the Docker ID of the running container.
# The Kubernetes kubelet makes a symbolic link to this file on the host machine
# in the /var/log/containers directory which includes the pod name and the Kubernetes
# container name:
#
# synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
# ->
# /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
#
# The /var/log directory on the host is mapped to the /var/log directory in the container
# running this instance of Fluentd and we end up collecting the file:
#
# /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
#
# This results in the tag:
#
# var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
#
# The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
# which are added to the log message as a kubernetes field object & the Docker container ID
# is also added under the docker field object.
# The final tag is:
#
# kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
#
# And the final log record look like:
#
# {
# "log":"2014/09/25 21:15:03 Got request with path wombat\n",
# "stream":"stderr",
# "time":"2014-09-25T21:15:03.499185026Z",
# "kubernetes": {
# "namespace": "default",
# "pod_name": "synthetic-logger-0.25lps-pod",
# "container_name": "synth-lgr"
# },
# "docker": {
# "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
# }
# }
#
# This makes it easier for users to search for logs by pod name or by
# the name of the Kubernetes container regardless of how many times the
# Kubernetes pod has been restarted (resulting in several Docker container IDs).
# Json Log Example:
# {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
# CRI Log Example:
# 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
<source>
@id fluentd-containers.log
@type tail
path /var/log/containers/*.log
pos_file /var/log/es-containers.log.pos
time_format %Y-%m-%dT%H:%M:%S.%NZ
tag raw.kubernetes.*
format json
read_from_head true
</source>
# Detect exceptions in the log output and forward them as one log entry.
<match raw.kubernetes.**>
@id raw.kubernetes
@type detect_exceptions
remove_tag_prefix raw
message log
stream stream
multiline_flush_interval 5
max_bytes 500000
max_lines 1000
</match>
system.input.conf: |-
# Example:
# 2015-12-21 23:17:22,066 [salt.state ][INFO ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
<source>
@id minion
@type tail
format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
time_format %Y-%m-%d %H:%M:%S
path /var/log/salt/minion
pos_file /var/log/salt.pos
tag salt
</source>
# Example:
# Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
<source>
@id startupscript.log
@type tail
format syslog
path /var/log/startupscript.log
pos_file /var/log/es-startupscript.log.pos
tag startupscript
</source>
# Examples:
# time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
# time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
# TODO(random-liu): Remove this after cri container runtime rolls out.
<source>
@id docker.log
@type tail
      format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=(?<status_code>\d+))?/
path /var/log/docker.log
pos_file /var/log/es-docker.log.pos
tag docker
</source>
# Example:
# 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
<source>
@id etcd.log
@type tail
# Not parsing this, because it doesn't have anything particularly useful to
# parse out of it (like severities).
format none
path /var/log/etcd.log
pos_file /var/log/es-etcd.log.pos
tag etcd
</source>
# Multi-line parsing is required for all the kube logs because very large log
# statements, such as those that include entire object bodies, get split into
# multiple lines by glog.
# Example:
# I0204 07:32:30.020537 3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
<source>
@id kubelet.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kubelet.log
pos_file /var/log/es-kubelet.log.pos
tag kubelet
</source>
# Example:
# I1118 21:26:53.975789 6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
<source>
@id kube-proxy.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-proxy.log
pos_file /var/log/es-kube-proxy.log.pos
tag kube-proxy
</source>
# Example:
# I0204 07:00:19.604280 5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
<source>
@id kube-apiserver.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-apiserver.log
pos_file /var/log/es-kube-apiserver.log.pos
tag kube-apiserver
</source>
# Example:
# I0204 06:55:31.872680 5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kube-ui
<source>
@id kube-controller-manager.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-controller-manager.log
pos_file /var/log/es-kube-controller-manager.log.pos
tag kube-controller-manager
</source>
# Example:
# W0204 06:49:18.239674 7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
<source>
@id kube-scheduler.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-scheduler.log
pos_file /var/log/es-kube-scheduler.log.pos
tag kube-scheduler
</source>
# Example:
# I1104 10:36:20.242766 5 rescheduler.go:73] Running Rescheduler
<source>
@id rescheduler.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/rescheduler.log
pos_file /var/log/es-rescheduler.log.pos
tag rescheduler
</source>
# Example:
# I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
<source>
@id glbc.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/glbc.log
pos_file /var/log/es-glbc.log.pos
tag glbc
</source>
# Example:
# I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
<source>
@id cluster-autoscaler.log
@type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/cluster-autoscaler.log
pos_file /var/log/es-cluster-autoscaler.log.pos
tag cluster-autoscaler
</source>
# Logs from systemd-journal for interesting services.
# TODO(random-liu): Remove this after cri container runtime rolls out.
<source>
@id journald-docker
@type systemd
filters [{ "_SYSTEMD_UNIT": "docker.service" }]
<storage>
@type local
persistent true
</storage>
read_from_head true
tag docker
</source>
<source>
@id journald-container-runtime
@type systemd
filters [{ "_SYSTEMD_UNIT": "{{ container_runtime }}.service" }]
<storage>
@type local
persistent true
</storage>
read_from_head true
tag container-runtime
</source>
<source>
@id journald-kubelet
@type systemd
filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
<storage>
@type local
persistent true
</storage>
read_from_head true
tag kubelet
</source>
<source>
@id journald-node-problem-detector
@type systemd
filters [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
<storage>
@type local
persistent true
</storage>
read_from_head true
tag node-problem-detector
</source>
forward.input.conf: |-
# Takes the messages sent over TCP
<source>
@type forward
</source>
monitoring.conf: |-
# Prometheus Exporter Plugin
# input plugin that exports metrics
<source>
@type prometheus
</source>
<source>
@type monitor_agent
</source>
# input plugin that collects metrics from MonitorAgent
<source>
@type prometheus_monitor
<labels>
host ${hostname}
</labels>
</source>
# input plugin that collects metrics for output plugin
<source>
@type prometheus_output_monitor
<labels>
host ${hostname}
</labels>
</source>
# input plugin that collects metrics for in_tail plugin
<source>
@type prometheus_tail_monitor
<labels>
host ${hostname}
</labels>
</source>
output.conf: |-
# Enriches records with Kubernetes metadata
<filter kubernetes.**>
@type kubernetes_metadata
</filter>
<match **>
@id elasticsearch
@type elasticsearch
@log_level info
include_tag_key true
host elasticsearch-logging
port 9200
logstash_format true
<buffer>
@type file
path /var/log/fluentd-buffers/kubernetes.system.buffer
flush_mode interval
retry_type exponential_backoff
flush_thread_count 2
flush_interval 5s
retry_forever
retry_max_interval 30
chunk_limit_size 2M
queue_limit_length 8
overflow_action block
</buffer>
</match>
</code></pre>
<p>And read the comments; they might help you achieve what you want, assuming you have read through the fluentd documentation: <a href="https://docs.fluentd.org/v1.0/articles/quickstart" rel="nofollow noreferrer">https://docs.fluentd.org/v1.0/articles/quickstart</a>.</p>
<p>Also bear in mind that log parsing changes based on the scenario you need to apply it to, so the above is the best example I can come up with (referenced from: <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml</a>).</p>
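<p>As a quick usage sketch: save the ConfigMap to a file, apply it, and recreate the fluentd pods so they pick up the new configuration. The file name is just an example, and the <code>k8s-app=fluentd-es</code> label selector is an assumption taken from the upstream add-on, so adjust it to whatever labels your fluentd DaemonSet actually uses:</p>
<pre><code># apply the updated ConfigMap
kubectl apply -f fluentd-es-configmap.yaml
# recreate the fluentd pods so they reload the mounted configuration
kubectl -n kube-system delete pod -l k8s-app=fluentd-es
</code></pre>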
|
<p>So, I would like to have <code>nginx</code> resolve hostnames for backends at request time. I expect to get <code>HTTP 502 Bad Gateway</code> when back-end service is down and I expect service response, when it's up.</p>
<p>I use <code>nginx:1.15-alpine</code> image for <code>nginx</code> and here is what I have in it's config:</p>
<pre><code>server {
resolver kube-dns.kube-system.svc.cluster.local valid=5s;
server_name mysystem.com;
listen 80;
client_max_body_size 20M;
location = /nginx_status {
stub_status on;
access_log off;
}
# Services configuration
location ~ /my-service/ {
set $service_endpoint http://my-service.namespace:8080;
proxy_pass $service_endpoint$request_uri;
include includes/defaults-inc.conf;
include includes/proxy-inc.conf;
}
}
</code></pre>
<p>So, when I make the request to the nginx, I get 502 Bad Gateway response. Nginx's log say the name is not found:</p>
<pre><code>2018/06/28 19:49:18 [error] 7#7: *1 my-service.namespace could not be resolved (3: Host not found), client: 10.44.0.1, server: mysystem.com, request: "GET /my-service/version HTTP/1.1", host: "35.229.17.63:8080"
</code></pre>
<p>However, when I log into the container with shell (<code>kubectl exec ... -- sh</code>) and test the DNS resolution, it works perfectly.</p>
<pre><code> # nslookup my-service.namespace kube-dns.kube-system.svc.cluster.local
Server: 10.47.240.10
Address 1: 10.47.240.10 kube-dns.kube-system.svc.cluster.local
Name: my-service.namespace
Address 1: 10.44.0.75 mysystem-namespace-mysystem-namespace-my-service-0.my-service.namespace.svc.cluster.local
</code></pre>
<p>Moreover, I can <code>wget http://my-service.namespace:8080/</code> and get a response.</p>
<p>Why nginx cannot resolve the hostname? </p>
<p><strong>Update: How I managed to resolve it:</strong></p>
<p>In <code>nginx.conf</code> at the <code>server</code> level I have added a resolver setting:</p>
<pre><code>resolver kube-dns.kube-system.svc.cluster.local valid=10s;
</code></pre>
<p>Then I used a FQDN in <code>proxy_pass</code>: </p>
<pre><code>proxy_pass http://SERVICE-NAME.YOUR-NAMESPACE.svc.cluster.local:8080;
</code></pre>
| <p>It fails because you need to use the FQDN to resolve the name.</p>
<p>Using just the hostname will usually work because in Kubernetes the pod's resolv.conf is configured with search domains, so you don't usually need to provide a service's FQDN.</p>
<p>However, specifying the FQDN is necessary when you tell nginx to use a custom name server because it does not get the benefit of these domain search specs.</p>
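<p>You can see those search domains (which the system resolver uses, but nginx's own <code>resolver</code> does not) from inside the pod; the output below is only illustrative:</p>
<pre><code>$ kubectl exec <nginx-pod> -- cat /etc/resolv.conf
nameserver 10.47.240.10
search <your-namespace>.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>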
<p>In nginx.conf added at the server level:</p>
<pre><code>resolver kube-dns.kube-system.svc.cluster.local valid=10s;
</code></pre>
<p>Then used a FQDN in proxy_pass:</p>
<pre><code>proxy_pass http://SERVICE-NAME.YOUR-NAMESPACE.svc.cluster.local:8080;
</code></pre>
|
<p>I want to create a secret through the Kubernetes API.
Below is the script I am running, but I am getting an error parsing the payload.
Please help.</p>
<pre><code> curl -vk \
-X POST \
-d @- \
-H "Authorization: Bearer $(cat token)" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
https://ip:port/api/v1/namespaces/nginx-ingress/secrets <<'EOF'
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"namespace": "nginx-ingress",
},
"type": "Opaque"
"data": {
"username": "YWRtaW4=",
"password": "MWYyZDFlMmU2N2Rm"
}
EOF
</code></pre>
<p>Error:</p>
<blockquote>
<p>message": "the object provided is unrecognized (must be of type Secret): couldn't get version/kind; json parse error: invalid character '}' looking for beginning of object key string ({\"apiVersion\": \"v1\",\"kind\": \"S ...)",
"reason": "BadRequest",</p>
</blockquote>
| <p>I fixed my JSON structure: I added the closing curly brace that was missing at the end, a comma after the <code>type</code> key, and a <code>name</code> field in <code>metadata</code>.</p>
<pre class="lang-sh prettyprint-override"><code>curl -vk \
-X POST \
-d @- \
-H "Authorization: Bearer $(cat token)" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
https://192.168.2.100:6443/api/v1/namespaces/nginx-ingress/secrets <<'EOF'
{
"apiVersion":"v1",
"kind" :"Secret",
"metadata" :{"namespace" :"nginx-ingress","name":"mysecret1"},
"type": "Opaque",
"data": {"username": "YWRtaW4=","password": "MWYyZDFlMmU2N2Rm"}
}
</code></pre>
<p>It worked.</p>
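<p>To confirm the secret was created (the name <code>mysecret1</code> comes from the request above):</p>
<pre><code>kubectl get secret mysecret1 -n nginx-ingress -o yaml
</code></pre>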
|
<p>I am trying to get my hands dirty with Kubernetes. I am running the following command:</p>
<pre><code>kubectl get deployment
</code></pre>
<p>and I get the following headers in the output:</p>
<p><a href="https://i.stack.imgur.com/mX2TR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mX2TR.png" alt="kubectll get deployment"></a></p>
<p>I can't figure out the difference between the <code>current</code> and <code>available</code> columns in the above output.</p>
<p>I know that official documentation gives a small description each of these fields, but it doesn't answer my following questions:</p>
<ol>
<li>Is <code>Current</code> <= <code>Desired</code> true?</li>
<li>Is <code>Up-to-Date</code> <= <code>Current</code> true?</li>
<li>Is <code>Up-to-Date</code> > <code>Current</code> and <code>Up-to-Date</code> <= <code>Desired</code> true?</li>
<li>Is <code>Available</code> always <= <code>Current</code>, or can it be greater than <code>Current</code>?</li>
</ol>
<p>In short, what is the relation between all these fields?</p>
| <p>The Deployment object specifies the desired state of your Deployment, and the Deployment Controller drives the current state of the system towards the desired state.</p>
<p>The <code>Desired</code> field specifies the number of replicas you asked for, while the <code>Current</code> field specifies the number of replicas that are currently running in the system. The <code>Up-To-Date</code> field indicates the number of replicas that are up to date with the desired state. The <code>Available</code> field shows the number of replicas that are passing readiness probes (if defined).</p>
<ol>
<li><p>Is <code>Current</code> always <= <code>Desired</code>? No, current can be greater than desired during a deployment update.</p></li>
<li><p>Is <code>Up-to-date</code> always <= <code>Current</code>? I believe the answer here is yes.</p></li>
<li><p>Is <code>Up-to-date</code> > <code>Current</code>? No, up-to-date should be the same as current, or less than current during a deployment update.</p></li>
<li><p>Is <code>Available</code> always <= <code>Current</code>? Yes.</p></li>
</ol>
<p>I encourage you to go through a deployment update and scale out/in while using <code>watch</code> to monitor these fields as the controller converges current state to desired state.</p>
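<p>For example (deployment, container and image names below are placeholders), you can trigger a scale-out and a rolling update and watch the columns converge:</p>
<pre><code># scale out and watch DESIRED/CURRENT/UP-TO-DATE/AVAILABLE converge
kubectl scale deployment my-deployment --replicas=5
watch kubectl get deployment my-deployment

# trigger a rolling update; CURRENT can temporarily exceed DESIRED here
kubectl set image deployment/my-deployment my-container=my-image:v2
watch kubectl get deployment my-deployment
</code></pre>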
|
<p>so I need to connect to the python kubernetes client through a pod. I've been trying to use <code>config.load_incluster_config()</code>, basically following the example from <a href="https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py" rel="noreferrer">here</a>. However it's throwing these errors. </p>
<pre><code> File "/Users/myname/Library/Python/2.7/lib/python/site-packages/kubernetes/config/incluster_config.py", line 93, in load_incluster_config
cert_filename=SERVICE_CERT_FILENAME).load_and_set()
File "/Users/myname/Library/Python/2.7/lib/python/site- packages/kubernetes/config/incluster_config.py", line 45, in load_and_set
self._load_config()
File "/Users/myname/Library/Python/2.7/lib/python/site-packages/kubernetes/config/incluster_config.py", line 51, in _load_config
raise ConfigException("Service host/port is not set.")
</code></pre>
<p>I'm using Python 2.7 and Minikube
Any hints or suggestions would be really appreciated. thank you.</p>
| <blockquote>
<p>so I need to connect to that pod somehow through the python api</p>
</blockquote>
<p>I am pretty sure you misunderstood <a href="https://stackoverflow.com/a/51035715/225016">my answer</a>, and/or I misunderstood your question. One should only use <code>load_incluster_config</code> when ... in-cluster ... otherwise it will attempt to use <code>/var/run/secrets/kubernetes.io/etcetc</code> and not find them (above and beyond the missing env-var in the actual error you cited above). However, if you had guarded the <code>load_incluster_config()</code> with the <code>if os.getenv('KUBERNETES_SERVICE_HOST'):</code> as suggested, then it wouldn't run that code and this question here wouldn't be a problem.</p>
<p>If you have built a docker image, but <em>did not deploy it into kubernetes</em>, then that wasn't clear.</p>
<hr>
<p>If you just want to use the python api to <em>access</em> the cluster, but not from <em>within</em> the cluster, <code>config.load_kube_config()</code> is in fact the correct method call, but you will absolutely need to provide a working <code>kubeconfig</code>, whether at <code>/root/.kube/config</code> or at another place specified by the env-var <code>KUBECONFIG</code> (I mean, usually; I haven't specifically looked into the python library to see if that env-var is honored).</p>
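<p>A quick way to see which situation applies (the pod name is a placeholder): the in-cluster environment variables and service-account token only exist inside a pod, while outside the cluster the client needs a kubeconfig.</p>
<pre><code># inside a pod: the service env vars and token are injected automatically
kubectl exec my-pod -- env | grep KUBERNETES_SERVICE_HOST
kubectl exec my-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount

# outside the cluster: make sure a working kubeconfig exists, e.g.
ls ~/.kube/config
</code></pre>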
|
<p>I have service account name: <code>myservice</code> </p>
<pre><code>$ kubectl get serviceaccount
NAME SECRETS AGE
default 1 15d
myservice 1 15d
$ kubectl get serviceaccount myservice -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2018-06-13T12:41:18Z
name: myservice
namespace: default
...
</code></pre>
<p>I want to change the service's namespace <code>default</code> to <code>development</code>.<br>
I tried to edit it with:<br>
<code>kubectl edit serviceaccount myservice</code> </p>
<p>After saving it I received: </p>
<pre><code>A copy of your changes has been stored to "/tmp/kubectl-edit-gjae6.yaml"
error: the namespace from the provided object "development" does not match the namespace "default". You must pass '--namespace=development' to perform this operation.
</code></pre>
<p>So I tried like they wrote and it still didn't work: </p>
<pre><code>$ kubectl edit serviceaccount myservice --namespace=development
Error from server (NotFound): serviceaccounts "myservice" not found
</code></pre>
<p>The namespace <code>development</code> exists, and so does the service account <code>myservice</code>. </p>
| <p>It seems you should create a new <code>myservice</code> ServiceAccount in the <code>development</code> namespace instead of modifying the existing one in the <code>default</code> namespace: create <code>myservice</code> in <code>development</code>, then remove the one in <code>default</code>. The <code>NotFound</code> error occurs because no <code>myservice</code> ServiceAccount exists in the <code>development</code> namespace yet.</p>
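<p>A minimal sketch of that, using the names from the question:</p>
<pre><code>kubectl create serviceaccount myservice --namespace=development
kubectl delete serviceaccount myservice --namespace=default
kubectl get serviceaccount myservice --namespace=development -o yaml
</code></pre>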
|
<p>Is there a way to configure kube-proxy in GKE?</p>
<p>I can see the pods creating from the daemonset, but I cannot see the daemonset itself.</p>
<p>Thanks for your help.</p>
| <p>At the node level, the system is managed by Kubernetes K8s, whereas, at the master level, the system is <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview" rel="nofollow noreferrer">managed by GKE</a>. The <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture" rel="nofollow noreferrer">master operates</a> and runs the Kubernetes API server, core resources controllers and the scheduler. </p>
<p>Even though kube-proxy resides on the nodes, the cluster is responsible for it, and please bear in mind that in GKE the cluster master is inaccessible. </p>
<p>While it is possible to <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer">configure the proxy in Kubernetes k8s</a>, it is not possible to do so in GKE.</p>
<p>For the daemonset, please make sure that you are looking through all namespaces.</p>
<pre><code>$ kubectl get ds --all-namespaces
</code></pre>
|
<p>I have set up an AWS kops cluster for Kubernetes. I have multiple microservices, where each application needs to interact with the others.</p>
<p><strong>Scenario: My ta2carbon app tries to invoke a function in ta1carbon app through service(dns) name.</strong> </p>
<p><strong>Result: It fails with a timeout error because it tries to hit port 80 (while the configured port is 3000).</strong> </p>
<p>my nodejs app console log,
apiUrl: <a href="http://ta1carbon/api/app1/app1Func2" rel="nofollow noreferrer">http://ta1carbon/api/app1/app1Func2</a></p>
<pre><code>{ Error: connect ETIMEDOUT 100.66.7.165:80
at Object._errnoException (util.js:992:11)
at _exceptionWithHostPort (util.js:1014:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1186:14)
code: 'ETIMEDOUT',
errno: 'ETIMEDOUT',
syscall: 'connect',
address: '100.66.7.165',
port: 80 }
</code></pre>
<p>same error logs for curl, when i tried to curl my ta1carbon app inside ta2carbon pod.</p>
<pre><code>root@ta2carbon-5fdcfb97cc-8j4nl:/home/appHome# curl -i http://ta1carbon/api/app1/app1Func2
curl: (7) Failed to connect to ta1carbon port 80: Connection timed out
</code></pre>
<p>But the port defined in my service.yaml is 3000, not 80!
Below are the yaml configurations for the services of both microservices.</p>
<p>ta1carbon service yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ta1carbon
labels:
app: ta1carbon
spec:
ports:
- port: 3000
targetPort: 3000
type: ClusterIP
selector:
app: ta1carbon
</code></pre>
<p>ta2carbon service yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ta2carbon
labels:
app: ta2carbon
spec:
ports:
- port: 3001
targetPort: 3001
type: ClusterIP
selector:
app: ta2carbon
</code></pre>
<p>And below is the describe service details for both ta1carbon and ta2 carbon.</p>
<pre><code>kubectl describe service ta1carbon
Name: ta1carbon
Namespace: default
Labels: app=ta1carbon
Annotations: <none>
Selector: app=ta1carbon
Type: ClusterIP
IP: 100.66.7.165
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
Endpoints: 100.96.1.13:3000
Session Affinity: None
Events: <none>
kubectl describe service ta2carbon
Name: ta2carbon
Namespace: default
Labels: app=ta2carbon
Annotations: <none>
Selector: app=ta2carbon
Type: ClusterIP
IP: 100.67.129.126
Port: <unset> 3001/TCP
TargetPort: 3001/TCP
Endpoints: 100.96.1.12:3001
Session Affinity: None
Events: <none>
</code></pre>
<p>So, based on what I observe, for the URL <a href="http://ta1carbon/api/app1/app1Func2" rel="nofollow noreferrer">http://ta1carbon/api/app1/app1Func2</a>
the service DNS name <strong>ta1carbon</strong> is being resolved to <strong>100.67.24.69:80</strong>, resulting in a timeout.</p>
<p>However, if I curl <strong>100.67.24.69:3000</strong> from inside the ta2carbon pod, I get a <strong>success response</strong>.</p>
<p>Also, if I change my service yaml to <strong>- port: 80</strong> and deploy and test again, I get a <strong>success response</strong>.</p>
<p>I find this behaviour in Kubernetes quite weird, and I am not sure whether the mistake is mine or in the environment.</p>
<p>My query is -</p>
<p><strong>Why is it resolving the service ta1carbon to 100.67.24.69:80 and timing out, when the port should have been 3000?</strong></p>
<p>Any input on this will be much appreciated. Please let me know what is missing here.</p>
| <p>DNS resolves a domain name to an IP address, not an IP address + port. </p>
<p>There are two potential solutions:</p>
<ol>
<li><p>Modify your application source to issue API requests to <code>http://ta1carbon:3000</code></p></li>
<li><p>Set the <code>port</code> on your <code>ta1carbon</code> service to <code>80</code>.</p></li>
</ol>
<p>I recommend going with option 2. In this scenario, you are taking advantage of the power of Kubernetes services. Kubernetes will expose the service on port 80, but send requests to the pods backing the service on port 3000 (because of the <code>targetPort: 3000</code>).</p>
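<p>For example, option 2 for the <code>ta1carbon</code> service from the question would look like the sketch below; after applying it, <code>http://ta1carbon/api/app1/app1Func2</code> should work on the default HTTP port 80:</p>
<pre><code>cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ta1carbon
  labels:
    app: ta1carbon
spec:
  type: ClusterIP
  selector:
    app: ta1carbon
  ports:
  - port: 80          # port the service (and its DNS name) is reached on
    targetPort: 3000  # port the ta1carbon pods actually listen on
EOF

# verify the new port mapping
kubectl describe service ta1carbon
</code></pre>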
|
<p>I have been trying to run a tomcat container on port 5000 on a Kubernetes cluster. When I use <code>kubectl create -f tmocat_pod.yaml</code>, it creates the pod, but <code>docker ps</code> does not give any output. Why is that?</p>
<p>Ideally, when a pod is running, it means a container (the one defined in the yaml file) is running inside that pod.
Why is it that <code>docker ps</code> does not show any containers running?
I am following the below URLs:</p>
<ul>
<li><a href="http://containertutorials.com/get_started_kubernetes/k8s_example.html" rel="noreferrer">http://containertutorials.com/get_started_kubernetes/k8s_example.html</a></li>
<li><a href="https://blog.jetstack.io/blog/k8s-getting-started-part2/" rel="noreferrer">https://blog.jetstack.io/blog/k8s-getting-started-part2/</a></li>
</ul>
<p>How can I get it running and see tomcat in the browser on port 5000?</p>
| <p>The docker containers are running on the virtual machine (the cluster node), not on the host you run kubectl from. Since I only have minikube installed on my local machine, I confirmed that the following will show what you want:</p>
<pre><code>minikube ssh
...
docker ps
</code></pre>
<p>On a full Kubernetes cluster, just do the equivalent of <code>minikube ssh</code>: ssh into the node where the pod was scheduled and run <code>docker ps</code> there.</p>
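<p>A quick sketch of finding the right node first (pod name and node address are placeholders):</p>
<pre><code># find which node the pod was scheduled to
kubectl get pod tomcat-pod -o wide

# then ssh to that node and list its containers
ssh <node-address>
docker ps | grep tomcat
</code></pre>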
|
<p>I'm using minikube on Windows 10 and trying to start it using Hyper-V.
Because I'm behind a corporate proxy, I use CNTLM to avoid authentication issues, and I added http_proxy and https_proxy as docker env variables.</p>
<pre><code>minikube start --vm-driver hyperv --hyperv-virtual-switch "PrimaryVirtualSwitch" --docker-env http_proxy=http://10.55.35.143:3128 --docker-env https_proxy=http://10.55.35.143:3128 --memory 8192 --cpus 4
</code></pre>
<p>The output is the following:</p>
<blockquote>
<p>Starting local Kubernetes v1.10.0 cluster...<br/>
Starting VM...<br/>
Getting VM IP address...<br/>
Moving files into cluster...<br/>
Setting up certs...<br/>
Connecting to cluster...<br/>
Setting up kubeconfig...<br/>
Starting cluster components...<br/></p>
</blockquote>
<p>At the last step "Starting cluster components" the command stucks.</p>
<p>I tried to enable verbose output but it doesn't log anything after reaching the "Starting cluster components" step.</p>
<p>The command <code>minikube ssh</code> is working and I can ping <code>10.55.35.143:3128</code>.</p>
<p>Could anyone give me a hint what I'm doing wrong?</p>
<p><strong>Additional Information</strong>
Minikube version: v0.28.0</p>
| <p>Download <a href="http://cmder.net/" rel="nofollow noreferrer">cmder</a>, type <code>bash</code> to switch to a bash shell, export <code>http_proxy</code> and <code>https_proxy</code>, and run <code>minikube start</code>.
You can find more information about this topic at <a href="https://github.com/kubernetes/minikube/issues/2425" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/2425</a>.</p>
<p>You may also try to launch minikube with <code>--bootstrapper=localkube</code> option.</p>
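<p>A sketch of what that looks like from the bash shell, reusing the CNTLM address from the question. The <code>NO_PROXY</code> entry is the important assumption: it must include the minikube VM's IP (check it with <code>minikube ip</code>) so that kubectl and minikube do not try to reach the VM through the corporate proxy:</p>
<pre><code>export HTTP_PROXY=http://10.55.35.143:3128
export HTTPS_PROXY=http://10.55.35.143:3128
export NO_PROXY=localhost,127.0.0.1,<minikube-vm-ip>
minikube start --vm-driver hyperv --hyperv-virtual-switch "PrimaryVirtualSwitch" \
  --docker-env http_proxy=http://10.55.35.143:3128 \
  --docker-env https_proxy=http://10.55.35.143:3128 \
  --docker-env no_proxy=localhost,127.0.0.1
</code></pre>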
|
<p>I am using openshift with glusterfs as storage system. Dynamic provisioning works very well but always rounds the allocated capacity to the next GB value. E.g.: I request a volume of 400MB but a volume of 1GB is created.</p>
<p>Is this behavior configurable? I setup openshift via the advanced installation with openshift/ansible. </p>
| <p>It is how Kubernetes underneath works. Where you have static volumes defined, the allocation request is used to grab the best match available. So if there isn't one of the exact size, it will grab the next size up. It isn't able to split up a persistent volume and just give part of it to you. It also doesn't enforce any limit, so although you request 400MB, you will be able to use up to the 1GB the persistent volume provides.</p>
<p>If you are trying to be economical with storage space, provided storage is of type ReadWriteMany, you could use one persistent volume claim for multiple applications, by specifying that a sub path from the volume should be mounted in each case, into the respective containers. Just realise there is no quota to prevent one application from using all the storage from the persistent volume up, so be careful for example with sharing a persistent volume with a database and some other application which could run rampant and use all the space, as last thing you want to do is run out of space for the database.</p>
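<p>A minimal sketch of the sub-path idea (all names are placeholders, and it assumes an existing ReadWriteMany claim called <code>shared-pvc</code>; two containers in one pod are used here only for brevity, the same <code>volumeMounts</code> stanza works in separate deployments):</p>
<pre><code>cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  containers:
  - name: app-a
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data    # app-a only sees the app-a/ subdirectory of the volume
      subPath: app-a
  - name: app-b
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data    # app-b only sees the app-b/ subdirectory of the volume
      subPath: app-b
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-pvc
EOF
</code></pre>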
|
<p>When I follow <a href="http://kubernetes.io/docs/admin/multiple-schedulers/" rel="nofollow">these instructions</a> to create a custom scheduler, the pods assigned to <code>my-scheduler</code> (<code>pod annotation-second-scheduler</code> in the example) stay in Pending status and are never scheduled.</p>
<p>I think this is because the <code>kube-scheduler</code> cannot access the master from within the pod. I don't know how to get this working. How can the master be accessed from within a pod? I tried running <code>kubectl proxy -p 8001</code> in the pod, but this doesn't work.</p>
| <p>There are few issues with the instructions mentioned in <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/</a> for local clusters that I created using the instructions mentioned in <a href="https://blog.tekspace.io/setup-kubernetes-cluster-with-ubuntu-16-04/" rel="nofollow noreferrer">https://blog.tekspace.io/setup-kubernetes-cluster-with-ubuntu-16-04/</a></p>
<p>These errors were reported from custom scheduler container (kubect logs command):</p>
<pre><code>E0628 21:05:29.128618 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list persistentvolumeclaims at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.129945 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list services at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.132968 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.151367 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list persistentvolumes at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.152097 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list replicasets.extensions at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.153187 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list pods at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.153201 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list nodes at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.153300 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list replicationcontrollers at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.153338 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list poddisruptionbudgets.policy at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.153757 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list statefulsets.apps at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:30.147954 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:30.149547 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list persistentvolumeclaims at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:30.149562 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list services at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
</code></pre>
<p>The issue is in the my-scheduler.yaml file: in the <code>roleRef</code> section, change the name field from kube-scheduler to system:kube-scheduler. You can verify the existing cluster role names with this command before changing the yaml file: </p>
<pre><code> kubectl get clusterrole --all-namespaces | grep -i kube
</code></pre>
<p>It should list system:kube-scheduler instead of kube-scheduler only.</p>
<p>Then, it might print these errors in the custom scheduler container:</p>
<pre><code>E0628 21:22:39.937271 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope
E0628 21:22:40.940461 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope
E0628 21:22:41.943323 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope
E0628 21:22:42.946263 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope
</code></pre>
<p>In this case, please append these lines:</p>
<pre><code>- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- watch
- list
- get
</code></pre>
<p>to the rules section of the <code>system:kube-scheduler</code> cluster role, which you can edit with this command (it opens the resource in your editor):</p>
<pre><code>kubectl edit clusterrole system:kube-scheduler
</code></pre>
|
<p>I have a Kubernetes environment with a rabbitmq service that deploys 2 rabbitmq pods.</p>
<p>I need to install a plugin on rabbitmq (the Delayed Message Plugin), but I don't like the "manual" way, because if a pod is deleted I have to install the plugin again.</p>
<p>I want to know which is the recommended way of achieving this. </p>
<p>FYI: the manual way is to copy a file into the plugins folder, and then launch the following command: </p>
<pre><code>rabbitmq-plugins enable rabbitmq_delayed_message_exchange
</code></pre>
| <p>You should mount the configuration for RabbitMQ from a config map.</p>
<p>For example:</p>
<p>The ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: rabbitmq-config
namespace: rabbitmq
data:
enabled_plugins: |
[rabbitmq_management,rabbitmq_peer_discovery_k8s].
rabbitmq.conf: |
...
definitions.json: |
...
</code></pre>
<p>And then in your Deployment or StatefulSet:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: rabbitmq
namespace: rabbitmq
spec:
replicas: 3
...
template:
...
spec:
containers:
- image: rabbitmq:3.7.4-management-alpine
imagePullPolicy: IfNotPresent
name: rabbitmq
volumeMounts:
- name: config-volume
mountPath: /etc/rabbitmq
...
volumes:
- name: config-volume
configMap:
name: rabbitmq-config
items:
- key: rabbitmq.conf
path: rabbitmq.conf
- key: enabled_plugins
path: enabled_plugins
- key: definitions.json
path: definitions.json
...
</code></pre>
<p>There are several ways to install the plugin in the first place. One is to base off of the image you are currently using, add the plugin, and use the new image instead. Alternatively you could utilize <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="noreferrer">Kubernetes life cycle hooks</a> to download the file pre start. Here is an <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="noreferrer">example of postStart</a></p>
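<p>For the plugin binary itself, a sketch of the first option: build an image on top of the one used in the StatefulSet above. The plugin file name/version and the registry name are placeholders (download the <code>.ez</code> release that matches your RabbitMQ version), and the plugins directory can differ between image variants. Since the ConfigMap above mounts its own <code>enabled_plugins</code> file over <code>/etc/rabbitmq</code>, also add <code>rabbitmq_delayed_message_exchange</code> to that list so the plugin is enabled on every start.</p>
<pre><code># Dockerfile that ships the delayed-message plugin inside the image
cat > Dockerfile <<'EOF'
FROM rabbitmq:3.7.4-management-alpine
COPY rabbitmq_delayed_message_exchange-<version>.ez /opt/rabbitmq/plugins/
EOF

docker build -t <your-registry>/rabbitmq-delayed:3.7.4 .
docker push <your-registry>/rabbitmq-delayed:3.7.4
# then reference this image in the StatefulSet instead of rabbitmq:3.7.4-management-alpine
</code></pre>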
|
<p>I use <a href="https://gist.github.com/ruzickap/c072cdfe480ca52bd32b6c4fcf8397a2" rel="nofollow noreferrer">https://gist.github.com/ruzickap/c072cdfe480ca52bd32b6c4fcf8397a2</a>
to deploy Kubernetes.</p>
<p>Need:</p>
<pre><code># Create kubespray config file
cat > ~/.kubespray.yml << EOF
kubespray_git_repo: "https://github.com/kubespray/kubespray.git"
kubespray_path: "$PWD/kubespray"
loglevel: "info"
EOF
</code></pre>
<p>Can I use kubespray without <code>kubespray_git_repo</code>, <code>kubespray_path</code> ?</p>
| <blockquote>
<p>Can i use kubespray without kubespray_git_repo, kubespray_path ?</p>
</blockquote>
<p>Without question, yes; arguably it will <em>most often</em> be used without those things (I have actually never even heard of <code>kubespray prepare</code>):</p>
<ol>
<li><a href="https://github.com/kubernetes-incubator/kubespray/archive/v2.5.0.tar.gz" rel="nofollow noreferrer">download a release</a></li>
<li><code>pip install ansible</code> (if you don't already have ansible; you can also <code>brew install ansible</code> if you are on a Mac and/or have concerns about <code>pip</code> messing up your global pythonpath)</li>
<li><a href="https://github.com/kubernetes-incubator/kubespray/tree/v2.5.0#ansible" rel="nofollow noreferrer">use <code>ansible-playbook</code></a></li>
<li>declare victory</li>
</ol>
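<p>Putting those steps together, a minimal sketch (the inventory file path is a placeholder, and the inventory layout differs between kubespray releases, so follow the README of the release you download):</p>
<pre><code>curl -LO https://github.com/kubernetes-incubator/kubespray/archive/v2.5.0.tar.gz
tar xzf v2.5.0.tar.gz && cd kubespray-*
pip install -r requirements.txt   # installs ansible and its dependencies
# describe your nodes in an ansible inventory file, then run the main playbook:
ansible-playbook -i <your-inventory-file> cluster.yml -b -v
</code></pre>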
|
<p>I want to connect a docker container running locally to a service running on a Kubernetes cluster. To do so I have exposed a service through reserving some static IP addresses.</p>
<p>I have also saved those IP addresses in local DNS, in the <code>/etc/hosts</code> file:</p>
<pre><code>123.123.123.12 host1
456.456.456.45 host2
</code></pre>
<p>I want to link my container to that such that all the traffic is routed to those addresses so that it can be processed by the cluster. I am using the link feature in the docker container but it isn't working. </p>
<p>I want to connect directly using the IP. How should I do this?</p>
| <p>There's no difference doing this if the client is or isn't in Docker. However you have the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/" rel="nofollow noreferrer">service exposed from Kubernetes</a>, you'd make the same connection to it from a process running on an external host or from a process running in a Docker container on that host.</p>
<p>Say, as in the example in the Kubernetes documentation, you're running a NodePort service that's accessible on port 31496 on every node in the cluster, and you're trying to connect to it from outside the cluster. Maybe as in the question 123.123.123.12 is some node in the cluster. A typical setup would be to get the location of the service from an environment variable (JavaScript <code>process.env.THE_SERVICE_URL</code>; Ruby <code>ENV['THE_SERVICE_URL']</code>; Python <code>os.environ['THE_SERVICE_URL']</code>; ...).</p>
<p>When you're developing, you could set that variable in your local shell:</p>
<pre class="lang-sh prettyprint-override"><code>export THE_SERVICE_URL=http://123.123.123.12:31496
cd here && ./kubernetes_client_script.py
</code></pre>
<p>When you go to deploy your application, you can set the same environment variable:</p>
<pre class="lang-sh prettyprint-override"><code>docker run -e THE_SERVICE_URL=http://123.123.123.12:31496 me:k8s-client
</code></pre>
|
<p>I'm running Kubernetes with a Minikube node on my machine. The pods access each other by their .metadata.name, and I would like to have a custom domain for that name.</p>
<p>i.e. one pod accesses Elastic's machine by elasticsearch.blahblah.com</p>
<p>Thanks for any suggestions</p>
| <p>You should have DNS records for pods by default due to kube-DNS addon enabled by default in minikube.</p>
<p>To check kube-dns addon status use the below command:</p>
<p><code>kubectl get pod -n kube-system</code></p>
<p>Please find below how the cluster add-on DNS server works:</p>
<p>An optional (though strongly recommended) cluster add-on is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.</p>
<p>For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.</p>
<p>Kubernetes also supports DNS SRV (service) records for named ports. If the "my-service.my-ns" Service has a port named "http" with protocol TCP, you can do a DNS SRV query for "_http._tcp.my-service.my-ns" to discover the port number for "http".</p>
<p>The Kubernetes DNS server is the only way to access services of type ExternalName.</p>
<p>You can follow <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">Configure DNS Service</a> document for configuration instructions.</p>
<p>Also, you can check <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a> for additional information.</p>
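<p>To quickly check in-cluster name resolution, you can, for example, run a throwaway busybox pod. The service name and namespace below are placeholders; note that with the default cluster domain an Elasticsearch service in the default namespace resolves as <code>elasticsearch.default.svc.cluster.local</code> rather than a custom domain such as elasticsearch.blahblah.com:</p>
<pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup elasticsearch.default.svc.cluster.local
</code></pre>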
|
<p>I was successfully able to set up the Kubernetes master.
I created the Kubernetes slave node by installing Docker and kubelet (using kubeadm). After running the join command, the slave node joined the cluster, and I am able to verify that from the master node. But the pods that get deployed to the slave node are always stuck in ContainerCreating state.
<strong>Apart from docker and kubelet, is there anything else that needs to be installed on the slave node?</strong></p>
<p>Status of kubectl shows that remote_runtime.go: RunPodSandBox from runtime service failed : rpc error : code= DeadlineExceeded </p>
<p>Appreciate your help .</p>
| <p>In such cases I would usually start to troubleshoot cluster by checking the state of pods in kube-system namespace using the command:</p>
<pre><code>$ kubectl get pods --all-namespaces -o wide
</code></pre>
<p>There should be several pods related to networking, running on each node, e.g:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-node-2rpns 2/2 Running 0 2h 10.154.0.5 kube-node1
kube-system calico-node-cn6cl 2/2 Running 0 2h 10.154.0.6 kube-master
kube-system calico-node-fr7v5 2/2 Running 1 2h 10.154.0.7 kube-node2
</code></pre>
<p>The full set of networking containers depends on which Kubernetes network solution is used.</p>
<p>Next, I check if there are some pods in “Not Ready” state and check the errors in the description:</p>
<pre><code>$ kubectl describe pod not-ready-pod-name
</code></pre>
<p>In case there are errors related to image pulling or container creating I check the kubelet logs on the node for more details:</p>
<pre><code>$ journalctl -u kubelet
</code></pre>
<p>or try to pull image manually to ensure that image is available and can be pulled:</p>
<pre><code>$ docker pull <image>
</code></pre>
<p>In case pod has many restarts I go to check pod's container logs:</p>
<pre><code>$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
</code></pre>
<p>or logs of previous crashed container:</p>
<pre><code>$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
</code></pre>
<p>My next steps depend on the previous results. </p>
<p>If you add your results to the question, it would be possible to tell you more about the case.</p>
|
<p>Yesterday, <a href="https://github.com/naftulikay/katyperry" rel="noreferrer">I built a full-featured example</a> which uses Terraform to create a network and a GKE cluster in Google Cloud Platform. The whole thing runs in Vagrant on a CentOS 7 VM and installs both <code>gcloud</code>, <code>kubectl</code>, and <code>helm</code>. I also <a href="https://github.com/naftulikay/katyperry/blob/master/doc/SPINNAKER.md" rel="noreferrer">extended the example</a> to use Helm to install Spinnaker.</p>
<p>The GKE cluster is called <code>gke-test-1</code>. In my documentation I documented getting <code>kubectl</code> setup:</p>
<pre><code>gcloud container clusters get-credentials --region=us-west1 gke-test-1
</code></pre>
<p>After this, I was able to use various <code>kubectl</code> commands to <code>get nodes</code>, <code>get pods</code>, <code>get services</code>, and <code>get deployments</code>, as well as all other cluster management commands. I was able to also use Helm to install Tiller and ultimately deploy Spinnaker.</p>
<p>However, today, the same process doesn't work for me. I spun up the network, subnet, GKE cluster, and the node pool, and whenever I try to use commands to get various resoures, I get this response:</p>
<pre><code>[vagrant@katyperry vagrant]$ kubectl get nodes
No resources found.
Error from server (NotAcceptable): unknown (get nodes)
[vagrant@katyperry vagrant]$ kubectl get pods
No resources found.
Error from server (NotAcceptable): unknown (get pods)
[vagrant@katyperry vagrant]$ kubectl get services
No resources found.
Error from server (NotAcceptable): unknown (get services)
[vagrant@katyperry vagrant]$ kubectl get deployments
No resources found.
Error from server (NotAcceptable): unknown (get deployments.extensions)
</code></pre>
<p>Interestingly enough, it seems that some command do work:</p>
<pre><code>[vagrant@katyperry vagrant]$ kubectl describe nodes | head
Name: gke-gke-test-1-default-253fb645-scq8
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/fluentd-ds-ready=true
beta.kubernetes.io/instance-type=n1-standard-4
beta.kubernetes.io/os=linux
cloud.google.com/gke-nodepool=default
failure-domain.beta.kubernetes.io/region=us-west1
failure-domain.beta.kubernetes.io/zone=us-west1-b
kubernetes.io/hostname=gke-gke-test-1-default-253fb645-scq8
</code></pre>
<p>When I open a shell in Google Cloud console, after running the same login command, I'm able to use <code>kubectl</code> to do all of the above:</p>
<pre><code>naftuli_kay@naftuli-test:~$ gcloud beta container clusters get-credentials gke-test-1 --region us-west1 --project naftuli-test
Fetching cluster endpoint and auth data.
kubeconfig entry generated for gke-test-1.
naftuli_kay@naftuli-test:~$ kubectl get pods
No resources found.
naftuli_kay@naftuli-test:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-gke-test-1-default-253fb645-scq8 Ready <none> 40m v1.8.10-gke.0
gke-gke-test-1-default-253fb645-tfns Ready <none> 40m v1.8.10-gke.0
gke-gke-test-1-default-8bf306fc-n8jz Ready <none> 40m v1.8.10-gke.0
gke-gke-test-1-default-8bf306fc-r0sq Ready <none> 40m v1.8.10-gke.0
gke-gke-test-1-default-aecb57ba-85p4 Ready <none> 40m v1.8.10-gke.0
gke-gke-test-1-default-aecb57ba-n7n3 Ready <none> 40m v1.8.10-gke.0
naftuli_kay@naftuli-test:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.64.1 <none> 443/TCP 43m
naftuli_kay@naftuli-test:~$ kubectl get deployments
No resources found.
</code></pre>
<p>The only difference I can see is the difference between the <code>kubectl</code> version; Vagrant has the latest version, 1.11.0, and the Google Cloud console has 1.9.7.</p>
<p>I will attempt to downgrade.</p>
<p>Is this a known issue and what, if anything, can I do to work around it?</p>
<hr>
<p><strong>EDIT:</strong> This is reproducible and I can't find a way to prevent it from recurring. I tore down all of my infrastructure and then stood it up again. The Terraform is available <a href="https://github.com/naftulikay/katyperry" rel="noreferrer">here</a>. </p>
<p>After provisioning the resources, I waited until the cluster reported being healthy:</p>
<pre><code>[vagrant@katyperry vagrant]$ gcloud container clusters describe \
--region=us-west1 gke-test-1 | grep -oP '(?<=^status:\s).*'
RUNNING
</code></pre>
<p>I then setup my login credentials:</p>
<pre><code>[vagrant@katyperry vagrant]$ gcloud container clusters get-credentials \
--region=us-west1 gke-test-1
</code></pre>
<p>I again attempted to get nodes:</p>
<pre><code>[vagrant@katyperry vagrant]$ kubectl get nodes
No resources found.
Error from server (NotAcceptable): unknown (get nodes)
</code></pre>
<p>The cluster appears green in the Google Cloud dashboard:</p>
<p><a href="https://i.stack.imgur.com/AiDR7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AiDR7.png" alt="enter image description here"></a></p>
<p>Apparently, this is a reproducible problem, as I'm able to recreate it using the same Terraform and commands. </p>
| <p>After successfully reproducing the issue multiple times by destroying and recreating all the infrastructure, I found <a href="https://gitlab.com/charts/gitlab/merge_requests/227#note_68683418" rel="noreferrer">some arcane post on GitLab</a> that mentions <a href="https://github.com/kubernetes/kubernetes/issues/61943#issuecomment-377586904" rel="noreferrer">a Kubernetes GitHub issue</a> that seems to indicate:</p>
<blockquote>
<p>...in order to maintain compatibility with 1.8.x servers (which are within the supported version skew of +/- one version)</p>
</blockquote>
<p>Emphasis on the "+/- one version."</p>
<p><a href="https://github.com/naftulikay/katyperry/pull/1" rel="noreferrer">Upgrading the masters and the workers to Kubernetes 1.10</a> seems to entirely have addressed the issue, as I can now list nodes and pods with impunity:</p>
<pre><code>[vagrant@katyperry vagrant]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.4-gke.2", GitCommit:"eb2e43842aaa21d6f0bb65d6adf5a84bbdc62eaf", GitTreeState:"clean", BuildDate:"2018-06-15T21:48:39Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
[vagrant@katyperry vagrant]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-gke-test-1-default-5989a78d-dpk9 Ready <none> 42s v1.10.4-gke.2
gke-gke-test-1-default-5989a78d-kh9b Ready <none> 58s v1.10.4-gke.2
gke-gke-test-1-default-653ba633-091s Ready <none> 46s v1.10.4-gke.2
gke-gke-test-1-default-653ba633-4zqq Ready <none> 46s v1.10.4-gke.2
gke-gke-test-1-default-848661e8-cv53 Ready <none> 53s v1.10.4-gke.2
gke-gke-test-1-default-848661e8-vfr6 Ready <none> 52s v1.10.4-gke.2
</code></pre>
<p>It appears that Google Cloud Platform's cloud shell pins to <code>kubectl</code> 1.9, which is within the version gap supported by the ideas expressed above.</p>
<p>Thankfully, the Kubernetes RHEL repository has a bunch of versions to choose from so it's possible to pin:</p>
<pre><code>[vagrant@katyperry gke]$ yum --showduplicates list kubectl
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.usc.edu
* epel: sjc.edge.kernel.org
* extras: mirror.sjc02.svwh.net
* updates: mirror.linuxfix.com
Installed Packages
kubectl.x86_64 1.11.0-0 @kubernetes
Available Packages
kubectl.x86_64 1.5.4-0 kubernetes
kubectl.x86_64 1.6.0-0 kubernetes
kubectl.x86_64 1.6.1-0 kubernetes
kubectl.x86_64 1.6.2-0 kubernetes
kubectl.x86_64 1.6.3-0 kubernetes
kubectl.x86_64 1.6.4-0 kubernetes
kubectl.x86_64 1.6.5-0 kubernetes
kubectl.x86_64 1.6.6-0 kubernetes
kubectl.x86_64 1.6.7-0 kubernetes
kubectl.x86_64 1.6.8-0 kubernetes
kubectl.x86_64 1.6.9-0 kubernetes
kubectl.x86_64 1.6.10-0 kubernetes
kubectl.x86_64 1.6.11-0 kubernetes
kubectl.x86_64 1.6.12-0 kubernetes
kubectl.x86_64 1.6.13-0 kubernetes
kubectl.x86_64 1.7.0-0 kubernetes
kubectl.x86_64 1.7.1-0 kubernetes
kubectl.x86_64 1.7.2-0 kubernetes
kubectl.x86_64 1.7.3-1 kubernetes
kubectl.x86_64 1.7.4-0 kubernetes
kubectl.x86_64 1.7.5-0 kubernetes
kubectl.x86_64 1.7.6-1 kubernetes
kubectl.x86_64 1.7.7-1 kubernetes
kubectl.x86_64 1.7.8-1 kubernetes
kubectl.x86_64 1.7.9-0 kubernetes
kubectl.x86_64 1.7.10-0 kubernetes
kubectl.x86_64 1.7.11-0 kubernetes
kubectl.x86_64 1.7.14-0 kubernetes
kubectl.x86_64 1.7.15-0 kubernetes
kubectl.x86_64 1.7.16-0 kubernetes
kubectl.x86_64 1.8.0-0 kubernetes
kubectl.x86_64 1.8.1-0 kubernetes
kubectl.x86_64 1.8.2-0 kubernetes
kubectl.x86_64 1.8.3-0 kubernetes
kubectl.x86_64 1.8.4-0 kubernetes
kubectl.x86_64 1.8.5-0 kubernetes
kubectl.x86_64 1.8.6-0 kubernetes
kubectl.x86_64 1.8.7-0 kubernetes
kubectl.x86_64 1.8.8-0 kubernetes
kubectl.x86_64 1.8.9-0 kubernetes
kubectl.x86_64 1.8.10-0 kubernetes
kubectl.x86_64 1.8.11-0 kubernetes
kubectl.x86_64 1.8.12-0 kubernetes
kubectl.x86_64 1.8.13-0 kubernetes
kubectl.x86_64 1.8.14-0 kubernetes
kubectl.x86_64 1.9.0-0 kubernetes
kubectl.x86_64 1.9.1-0 kubernetes
kubectl.x86_64 1.9.2-0 kubernetes
kubectl.x86_64 1.9.3-0 kubernetes
kubectl.x86_64 1.9.4-0 kubernetes
kubectl.x86_64 1.9.5-0 kubernetes
kubectl.x86_64 1.9.6-0 kubernetes
kubectl.x86_64 1.9.7-0 kubernetes
kubectl.x86_64 1.9.8-0 kubernetes
kubectl.x86_64 1.10.0-0 kubernetes
kubectl.x86_64 1.10.1-0 kubernetes
kubectl.x86_64 1.10.2-0 kubernetes
kubectl.x86_64 1.10.3-0 kubernetes
kubectl.x86_64 1.10.4-0 kubernetes
kubectl.x86_64 1.10.5-0 google-cloud-sdk
kubectl.x86_64 1.10.5-0 kubernetes
kubectl.x86_64 1.11.0-0 kubernetes
</code></pre>
<hr>
<p><strong>EDIT:</strong> I have found the <a href="https://github.com/kubernetes/kubernetes/pull/61419" rel="noreferrer">actual pull request that mentions this incompatibility</a>. I have also found <a href="https://kubernetes.io/docs/imported/release/notes/" rel="noreferrer">buried in the release notes</a> the following information:</p>
<blockquote>
<p>kubectl: This client version requires the <strong><code>apps/v1</code></strong> API, so it will not work against a cluster version older than v1.9.0. Note that kubectl only guarantees compatibility with clusters that are +/- [one] minor version away. </p>
</blockquote>
<h2>TL;DR</h2>
<p>This entire problem was an incompatibility between <code>kubectl</code> 1.11 and Kubernetes 1.8.</p>
|
<p>I am new to Kubernetes <code>NetworkPolicy</code> and the Network plugin <code>calico</code>.</p>
<p>I have successfully implemented <code>calico</code> in my Kubernetes cluster:</p>
<pre><code>[root@node1 ~]# kubectl get po --all-namespaces -o wide | grep calico
kube-system calico-kube-controllers-5d8b5bc986-sllmk 1/1 Running
kube-system calico-node-4wk8f 1/1 Running
kube-system calico-node-5kz99 1/1 Running
kube-system calico-node-bfk9w 1/1 Running
kube-system calico-node-f2tb2 1/1 Running
kube-system calico-node-hrcf4 1/1 Running
kube-system calico-node-wvh8d 1/1 Running
</code></pre>
<p>I have also configured relevant network policies and they work perfectly fine.</p>
<p>The only thing I am concerned about is logging. I am unable to find any logs that could tell me whether a given request is being accepted or blocked.</p>
<p>I've tried checking the logs of the <code>calico-node-*</code> pods, but they do not provide any useful information.</p>
<p>Are there any other logs that I can look at? </p>
| <p>Kubernetes <code>NetworkPolicy</code> doesn't support logging, but Calico's native <code>NetworkPolicy</code> supports a "log" action that allows you to log packets to the system log.</p>
<p>Tigera's (<strong><em>disclaimer:</em></strong> I work for Tigera) commercial product, CNX, which is built on Calico offers additional auditing and compliance features so you might want to check that out.</p>
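<p>As a rough sketch of the native policy's "log" action mentioned above (the selector, namespace and port below are made up for illustration, and the resource is applied with <code>calicoctl</code> rather than <code>kubectl</code>), a <code>Log</code> rule placed before a <code>Deny</code> rule writes the matching packets to the node's system log before dropping them:</p>
<pre><code>apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: log-and-deny-web
  namespace: default
spec:
  selector: app == 'web'      # hypothetical pod label
  types:
  - Ingress
  ingress:
  - action: Log               # log the packet on the node
    protocol: TCP
    destination:
      ports: [80]
  - action: Deny              # then actually deny it
    protocol: TCP
    destination:
      ports: [80]
</code></pre>
<p>After <code>calicoctl apply -f</code> on a file like this, the logged entries show up in the node's kernel/system log (e.g. via <code>journalctl -k</code> or <code>/var/log/messages</code>, depending on the distribution).</p>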
|
<p>I have set up a Kubernetes cluster with three nodes. All nodes are Linux CentOS machines.</p>
<p>I need a persistent volume to store data and I am trying to achieve this.</p>
<p>I was following this tutorial, but it only covers a single-node cluster:
<a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/</a></p>
<p>Since my cluster consists of three nodes, I could not use a local path, so the tutorial above did not work for me. </p>
<p>I need a network path, and using NFS seems a reasonable solution to me. (If there is a good alternative, I would like to hear about it.)</p>
<p>Using an NFS network mount involves two steps.
First, export a shared directory on a network path (the NFS server side).
Second, define this network path as a persistent volume in Kubernetes and use it. </p>
<p>The second step is pretty straightforward. It is explained in the Kubernetes documentation and there is even a sample YAML.
Documentation: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes</a>
Example: <a href="https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pv.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pv.yaml</a></p>
<p>The first part also seems straightforward. It is explained in the following document:</p>
<p><a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-16-04#step-5-%E2%80%94-creating-the-mount-points-on-the-client" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-16-04#step-5-%E2%80%94-creating-the-mount-points-on-the-client</a></p>
<pre><code>/etc/exports
directory_to_share client(share_option1,...,share_optionN)
/etc/exports
/var/nfs/general 203.0.113.256(rw,sync,no_subtree_check)
/home 203.0.113.256(rw,sync,no_root_squash,no_subtree_check)
</code></pre>
<p>But when you export a path over NFS you have to do some configuration and grant clients some rights. Basically, you need the client IPs.</p>
<p>With Kubernetes we work with abstractions such as Pods, and we don't want to deal with real machines and their IP addresses. That is where the problem starts.</p>
<p>So, I don't want to hand the nodes' IPs to the NFS server (they might change in the first place). There should be a better solution so that all Pods (on any node) are able to connect to the NFS network path. </p>
<p>Even allowing all IPs without restriction, or allowing an IP range, might solve the issue; I would like to hear if there is such a solution. But I would also like to hear what the best practice is: how does everybody else use an NFS network path from Kubernetes?</p>
<p>I could not find any solution yet. If you have solved a similar problem, please let me know how you solved it. Any documentation on this issue would be good too. Thanks in advance!</p>
| <p>You asked for best practices and from what I've found I think that the best option would be white-listing the IP addresses. Since you do not want to do that, there are also some workarounds <a href="https://stackoverflow.com/questions/43295344/kubernetes-dynamic-persistent-volume-provisioning-using-nfs?rq=1">answers posted on SO</a> created by people who had similar issues with dynamic IP clients in NFS. You can find a link to deployment using <a href="https://blog.lwolf.org/post/how-i-deployed-glusterfs-cluster-to-kubernetes/" rel="nofollow noreferrer">glusterfs</a> in the posted answers. If you want NFS with dynamic IP (because it can change), you can use DNS names instead of IP. If you need dynamic provisioning, use glusterfs. </p>
<p>I will add some information about the volumes as you asked. Might give you some light on the topic and help with the issue.<br>
Since pods are ephemeral you need to move the volume outside the Pod - therefore making it independent from the Pods - so the volume would persist its state in case of Pod failure. Kubernetes supports several types of Volumes.<br>
You could use NFS (more on NFS in the previous link) - so after the Pod is removed the volume gets unmounted, but it still exists. This is also not desired in your situation as the user needs to know the file system type and other details about the volume it will want to connect to. When you go to the examples in the documentation about NFS yaml files, you will see that their kind is defined as a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">Persistent Volume Claim</a>.<br>
This method is based on creating a series of abstractions that will allow a Node to connect to the Persistent Volume, but the user will not need any backend details, in addition, your node can connect to many providers.</p>
|
<p>I created 3 VirtualBox VMs for testing Kubernetes. Each VM has 2 networks: NAT and internal.
Kubernetes was initialized with:</p>
<pre><code>kubeadm init --apiserver-advertise-address 192.168.1.1 --service-cidr 192.168.1.0/24 --pod-network-cidr 192.168.1.0/24
</code></pre>
<p>192.168.1.0/24 is the internal network.
Nodes were joined by</p>
<pre><code>kubeadm join 192.168.1.1:6443 --token some_token --discovery-token-ca-cert-hash hash
</code></pre>
<p>When I'm trying to deploy flannel I get errors like:</p>
<pre><code>I0629 09:25:06.640787 1 main.go:475] Determining IP address of default interface
I0629 09:25:06.645316 1 main.go:488] Using interface with name enp0s3 and address 10.0.2.15
I0629 09:25:06.645335 1 main.go:505] Defaulting external address to interface address (10.0.2.15)
I0629 09:25:06.656691 1 kube.go:131] Waiting 10m0s for node controller to sync
I0629 09:25:06.656794 1 kube.go:294] Starting kube subnet manager
I0629 09:25:07.657778 1 kube.go:138] Node controller sync successful
I0629 09:25:07.657795 1 main.go:235] Created subnet manager: Kubernetes Subnet Manager - kub2
I0629 09:25:07.657798 1 main.go:238] Installing signal handlers
I0629 09:25:07.657892 1 main.go:353] Found network config - Backend type: vxlan
I0629 09:25:07.657928 1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E0629 09:25:07.658039 1 main.go:280] Error registering network: failed to acquire lease: node "kub2" pod cidr not assigned
</code></pre>
<p>What am I doing wrong?</p>
| <p>As per error string in your logs:</p>
<blockquote>
<p>E0629 09:25:07.658039 1 main.go:280] Error registering network:
failed to acquire lease: node "kub2" pod cidr not assigned</p>
</blockquote>
<p>It seems you used the subnet <code>192.168.1.0/24</code> for everything. It was fully consumed when the master node was created, so no IP pools were left for the worker nodes, because by default <code>kube-controller-manager</code> hands out a /24 network to each node. As a solution, use wider (and separate) subnets, and make sure the ranges you choose do not overlap with each other or with your local network.</p>
<p>For example, if you have your local network <code>192.168.1.0/24</code>, then</p>
<ul>
<li><code>--service-cidr</code> should be like <code>192.168.2.0/24</code> and</li>
<li><code>--pod-network-cidr</code> like <code>172.17.0.0/16</code></li>
</ul>
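<p>Putting those values together, a corrected init command would look roughly like this (the addresses are only an example; adjust them to your environment and make sure the three ranges do not overlap). Note that if you deploy flannel, its own <code>net-conf.json</code> in <code>kube-flannel.yml</code> must match the pod CIDR you pass here:</p>
<pre><code>kubeadm init --apiserver-advertise-address 192.168.1.1 \
  --service-cidr 192.168.2.0/24 \
  --pod-network-cidr 172.17.0.0/16
</code></pre>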
|
<p>I have a repository with a Kubernetes deployment YAML. Pipelines run on each commit that builds and pushes an image into our repository, versioned with the commit (eg. <code>my_project:bm4a83</code>). Then I'm updating the deployment image</p>
<p><code>kubectl set image deployment/my_deployment my_deployment=my_project:bm4a83</code>.</p>
<p>This works, but I also want to keep the rest of the deployment YAML specification in version control.</p>
<p>I thought I could just keep it in the same repository, but that means changes that are purely infrastructure (e.g., changing <code>replicas</code>) trigger new builds without any code changes.</p>
<p>What felt like it made the most sense was keeping the deployment YAML in a totally separate repository. I figured I can manage all the values from there, independently from actual code changes. The only problem with that is the <code>image</code> key would need to be kept up to date. The only way around that, is working with some floating <code>:latest</code>-type version, but I don't really think that's ideal.</p>
<p>What's a sensible workflow for managing this? Am I missing something entirely?</p>
| <blockquote>
<p>What's a sensible workflow for managing this? Am I missing something entirely?</p>
</blockquote>
<p>Some of the answer depends on the kind of risk you're trying to drive down with any process you have in place. If it's "the cluster was wiped out by a hurricane and I need to recover my descriptors," then <a href="https://github.com/heptio/ark#readme" rel="nofollow noreferrer">Heptio Ark</a> is a good solution for that. If the risks are more "human-centric," then IMHO you will have to walk a very careful line between locking down all the things and crippling the very agile, empowering, tools that kubernetes provides to a team. A concrete example of that model running up against your model is: what happens when a developer edits a Deployment but does not (remember|know) to update the descriptor in the repo? So do you revoke the edit rights? Use some diff-esque logic to detect a changed in-cluster config?</p>
<p>To speak to something you said <em>specifically</em>: it is a highly suboptimal idea to commit a descriptor change just to resize a <code>(Deployment|ReplicationController|StatefulSet)</code>. Separately, a well-built CI pipeline would also understand if no <em>buildable</em> artifact changed and bail out (either early, or not even triggering a build, if the CI tool is that smart).</p>
<p>Finally, if you do want to carry on with the current situation, then the best practice I can think of is textual replacement right before applying a descriptor:</p>
<pre><code>$ grep "image: " the-deployment.yml
image: example.com/something:#CI_PIPELINE_IID#
$ sed -i'' -e "s/#CI_PIPELINE_IID#/${CI_PIPELINE_IID}/" the-deployment.yml
$ kubectl apply -f the-deployment.yml
</code></pre>
<p>so that the copy in the repo remains textually pristine, and also isn't inadvertently <em>actually applied</em> since it won't actually result in a runnable Deployment.</p>
|
<p>When setting up a new Kubernetes endpoint and clicking "Verify Connection", the error message
"The Kubconfig does not contain user field. Please check the kubeconfig. " is always displayed. </p>
<p>I have tried multiple ways of outputting the config file, to no avail. I've also copied and pasted many sample config files from the web, and all end up with the same issue. Has anyone been successful in creating a new endpoint? </p>
| <p>This is followed by <a href="https://github.com/TsuyoshiUshio/KubernetesTask/issues/35" rel="nofollow noreferrer">TsuyoshiUshio/KubernetesTask issue 35</a></p>
<blockquote>
<p>I try to reproduce, however, I can't do it.<br>
I'm not sure, however, I can guess it might the mismatch of the version of the cluster/kubectl which you download by the download <code>task/kubeconfig</code>.<br>
Workaround might be like this:</p>
<ul>
<li>kubectl version in your local machine and check the current server/client version</li>
<li>specify the same version as the server on the download task. (by default it is 1.5.2)</li>
<li>See the log of your release pipeline which is fail, you can see which <code>kubectl</code> command has been executed, do the same thing on your local machine with fitting your local pc's environment.</li>
</ul>
<p><strong>The point is, before go to the VSTS, download the kubectl by yourself</strong>.<br>
Then, put the kubeconfg on the default folder like <code>~/.kube/config</code> or set environment variables <code>KUBECONFIG</code> to the binary.<br>
Then execute kubectl get nodes and make sure if it works. </p>
<p>My kubeconfig is different format with yours. If you use AKS, <code>az aks install-cli</code> command and <code>az aks get-credentials</code> command.<br>
Please refer <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough</a> .</p>
<p><strong>If it works locally, the config file must work on the VSTS task environment</strong>. (or this task or VSTS has a bug)</p>
</blockquote>
|
<p>Are there any third party tools for Kubernetes UI? Such as to view pods on the main page or to edit the config of an object?</p>
| <p>We are using kubernetic (www.kubernetic.com) in order to see/scale/delete kubernetes resources. It still has some issues with displaying correct information but in general it's nice.</p>
|
<p>I have a Kubernetes cluster running in minikube, I want to filter out all Logstash pods via Kubernetes API. Kubernetes API documentation is a bit confusing, I did some research and found out that I can use something like this, but I have been unsuccessful so far:</p>
<pre><code>localhost:8000/api/v1/namespaces/default/pods?labelSelector=logstash
</code></pre>
<p>any ideas how to retrieve this? Any help would be really appreciated. </p>
| <blockquote>
<p>any ideas how to retrieve this?</p>
</blockquote>
<p>Since labels are defined in <code><name>=<value></code> pairs you need to supply both, as described in <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">the documentation</a> (see the <code>API</code> section)</p>
<p>As an example, supposing you have:</p>
<ul>
<li>namepace: default</li>
<li>labels on pods you want to select:
<ul>
<li>role=ops</li>
<li>application=logstash</li>
</ul></li>
<li>kubectl proxy runs on localhost:8000</li>
</ul>
<p>Then your api call would look like this:</p>
<pre><code>curl localhost:8000/api/v1/namespaces/default/pods?labelSelector=role%3Dops,application%3Dlogstash
</code></pre>
|
<p>Has anyone seen this error <code>Error: forwarding ports: error upgrading connection: Upgrade request required</code> in <strong>Kubernetes</strong>?</p>
| <p>I have seen this before and there were a couple of issues to address:</p>
<ol>
<li><p>The Nginx that was proxying requests did not contain the following
config (required for HTTP2 support):</p>
<pre><code>proxy_set_header Upgrade 'websocket';
proxy_set_header Connection 'Upgrade';
</code></pre></li>
<li><p>And we also had to switch our Classic Load Balancer on AWS (ELB)
with an Application Load Balancer (ALB).</p></li>
</ol>
<p>TBH, the question is a bit vague. Where are you seeing this exactly? It needs more context.</p>
|
<p>I am trying to deploy an app to a Kubernetes cluster and I want to store data in a <strong>Persistent Volume</strong>. However, I am very confused about two parameters in the setup. Can someone explain the difference between <strong>volumes.hostPath</strong> and <strong>volumeMounts.mountPath</strong>? I read some documentation online but it did not help me understand.</p>
<pre><code>volumeMounts:
- mountPath: /var/lib/mysql
volumes:
hostPath:
path: /k8s
</code></pre>
<p>If my setup is as above, is the volume going to be mounted at <code>/k8s/var/lib/mysql</code>?</p>
| <p>The mount path is always the destination inside the Pod a volume gets mounted to.</p>
<p>I think the documentation is pretty clear on what hostPath does:</p>
<blockquote>
<p>A hostPath volume mounts a file or directory from the host node’s
filesystem into your Pod. This is not something that most Pods will
need, but it offers a powerful escape hatch for some applications.</p>
<p>For example, some uses for a hostPath are:</p>
<pre><code>- running a Container that needs access to Docker internals; use a hostPath of /var/lib/docker
- running cAdvisor in a Container; use a hostPath of /sys
- allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
</code></pre>
</blockquote>
<p>So your example does not do what you think it does. It would mount the node's <code>/k8s</code> directory into the Pod at <code>/var/lib/mysql</code>.</p>
<p>This should be done only if you fully understand the implications!</p>
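<p>For reference, a syntactically complete version of that snippet needs a named volume which the <code>volumeMounts</code> entry refers to; a minimal sketch (container and image names are placeholders) looks like this. With this spec the node directory <code>/k8s</code> appears inside the container at <code>/var/lib/mysql</code>; the two paths are not concatenated:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mysql-hostpath-example
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    volumeMounts:
    - name: data              # must match the volume name below
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    hostPath:
      path: /k8s
</code></pre>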
|
<p>Our kubernetes cluster recently crashed because of etcd "database size exceeded".</p>
<p>We succeeded in putting everything back online with a "simple" defrag of the etcd cluster endpoints (<a href="https://coreos.com/etcd/docs/latest/op-guide/maintenance.html" rel="nofollow noreferrer">see here</a>).</p>
<p>Unfortunately everything is not perfect yet.
Especially the /var/lib/etcd/member/snap directory of an etcd endpoint:</p>
<pre><code>total 25G
-rw-r--r--. 1 etcd root 21K Jun 29 20:27 00000000000810cb-00000000066072bc.snap
-rw-r--r--. 1 etcd root 21K Jun 29 20:40 00000000000810cb-00000000066099cd.snap
-rw-r--r--. 1 etcd root 21K Jun 29 20:55 00000000000810df-000000000660c0de.snap
-rw-r--r--. 1 etcd root 21K Jun 29 21:19 000000000008113f-000000000660e7ef.snap
-rw-r--r--. 1 etcd root 21K Jun 29 21:37 0000000000081162-0000000006610f00.snap
-rw-------. 1 etcd root 916M Jun 29 15:40 000000000619e354.snap.db
-rw-------. 1 etcd root 916M Jun 29 15:41 00000000061b9704.snap.db
-rw-------. 1 etcd root 916M Jun 29 15:43 00000000061ca269.snap.db
-rw-------. 1 etcd root 916M Jun 29 15:44 00000000061dbb43.snap.db
-rw-------. 1 etcd root 916M Jun 29 15:47 00000000061e40df.snap.db
-rw-------. 1 etcd root 916M Jun 29 15:48 00000000061e8192.snap.db
-rw-------. 1 etcd root 916M Jun 29 15:49 00000000061f8799.snap.db
-rw-------. 1 etcd root 916M Jun 29 15:49 0000000006200018.snap.db
-rw-------. 1 etcd root 916M Jun 29 15:52 0000000006225cfd.snap.db
-rw-------. 1 etcd root 916M Jun 29 15:53 00000000062323d6.snap.db
-rw-------. 1 etcd root 916M Jun 29 15:53 00000000062396fa.snap.db
-rw-------. 1 etcd root 970M Jun 29 15:54 000000000624dfe7.snap.db
-rw-------. 1 etcd root 1003M Jun 29 15:54 0000000006259f3f.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 15:58 0000000006296ff0.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 15:59 000000000629b9bc.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 16:01 00000000062b02c0.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 16:02 00000000062bef04.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 16:05 00000000062db8e2.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 16:09 00000000062ef4c4.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 16:11 00000000062fce7a.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 16:12 00000000063063c1.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 16:12 000000000630b648.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 16:12 000000000630bdbb.snap.db
-rw-------. 1 etcd root 1.2G Jun 29 16:13 000000000630e3f1.snap.db
-rw-------. 1 etcd root 27M Jun 29 21:41 db
</code></pre>
<p>This is the only endpoint like this, and on this endpoint we are running low on disk.</p>
<p>No documentation exists about these files, and especially about how to reduce the overall disk footprint in our case (defrag / compaction / etc. have been tried).</p>
<p>What are these files?
How can we reduce the disk footprint of this endpoint (i.e. get rid of those huge snap.db files)?</p>
| <p>They seem to be <code>snapshots</code> of a given state of your etcd cluster over time.</p>
<p>Sounds like they can be rotated. At least according to this:</p>
<blockquote>
<p>etcdserver: purge old snap.db files #7967
<a href="https://github.com/coreos/etcd/pull/7967" rel="nofollow noreferrer">https://github.com/coreos/etcd/pull/7967</a></p>
</blockquote>
<p>Hope this answer is helpful somehow.</p>
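<p>If your etcd build predates that pull request, upgrading etcd is the cleanest fix, since older binaries never purge the <code>*.snap.db</code> files on their own. Independently of that, retention of the regular snapshot and WAL files can be tuned with flags on each etcd member; the values below are just the documented defaults, shown for illustration:</p>
<pre><code># flags on the etcd member (e.g. in its systemd unit or static pod manifest)
etcd --max-snapshots=5 --max-wals=5 ...
</code></pre>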
|
<p>I am using <code>kubeadm</code> to deploy a single-node Kubernetes 1.11 cluster.
I am unable to find <code>servicemonitor</code> resources.</p>
<pre><code>➜ kube-prometheus git:(master) ✗ kubectl get servicemonitor
error: the server doesn't have a resource type "servicemonitor"
➜ kube-prometheus git:(master) ✗ kubectl create -f manifests/prometheus-serviceMonitor.yaml
error: unable to recognize "manifests/prometheus-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
</code></pre>
<p>How do I enable <code>servicemonitor</code> with <code>kubeadm</code>?</p>
| <p>The <code>ServiceMonitor</code> resource is not part of Kubernetes itself. It is a custom resource which is part of the Prometheus operator, <a href="https://github.com/coreos/prometheus-operator#customresourcedefinitions" rel="noreferrer">as described here</a>.</p>
<p>Make sure that you have installed the Prometheus operator (including the custom resources) beforehand to enable creating a ServiceMonitor object.</p>
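<p>A quick way to check whether the operator's custom resource definitions are registered, and (assuming the kube-prometheus repository you appear to be working from) to install the stack, is sketched below; the exact manifest layout depends on the revision you checked out:</p>
<pre><code># are the Prometheus operator CRDs present?
kubectl get customresourcedefinitions | grep monitoring.coreos.com

# install the operator plus the rest of the manifests; re-run if the first
# pass fails because the CRDs were not registered yet
kubectl create -f manifests/
</code></pre>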
|
<p>CircleCI and many other CI tools show real-time logs while a job is running. It is like <code>tail -f</code> in Linux, but it also shows all the previous logs. </p>
<p>I am trying to sync specific logs from Kubernetes to S3 and also stream the updates to the browser. Is that possible?</p>
| <blockquote>
<p>I am trying to sync specific logs from Kubernetes to S3 and also stream the updates to the browser. Is that possible?</p>
</blockquote>
<p>IMHO you would want to have a "tee" mechanism to divert the log streams to <em>each</em> destination, since those two destinations have vastly different access (and retry!) mechanisms. AFAIK kubernetes allows unlimited(?) numbers of <code>kubectl logs -f</code> connections, so you would want one process that connects to the pod and relays those bytes out to the browser, and a separate process that connects and relays the bytes to S3. You <em>could</em> have one process that does both, but that runs the risk of a single bug wiping out <em>both</em> streams, making everyone unhappy.</p>
<p>I used <code>kubectl logs -f</code> as a "shortcut" for this answer, but I am super positive there's an API endpoint for doing that, I just don't remember what it is offhand -- <code>kubectl --v=100 logs -f $pod</code> will show it to you.</p>
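<p>For completeness, the endpoint in question is the pod's <code>log</code> subresource, which can be followed over plain HTTP through <code>kubectl proxy</code>; that is convenient for the relay processes described above (namespace and pod name below are placeholders):</p>
<pre><code>kubectl proxy --port=8001 &
curl "http://localhost:8001/api/v1/namespaces/default/pods/my-pod/log?follow=true"
</code></pre>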
|
<p>Here's the full error <code>Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.</code></p>
<p>Here's my Kubectl config view</p>
<pre><code>apiVersion: v1
clusters: []
contexts:
- context:
cluster: ""
user: ""
name: dev
current-context: dev
kind: Config
preferences: {}
users: []
</code></pre>
<p>I'm running <code>Minikube start</code>. It's stuck on <code>Starting VM...</code></p>
<p>In Hyper-V Manager, I have minikube VM running. </p>
| <p>Check out "<a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="nofollow noreferrer">Minikube on Windows 10 with Hyper-V</a>" by <a href="https://twitter.com/JockDaRock" rel="nofollow noreferrer">Jock Reed</a></p>
<p>The command to run, from a Windows CMD console as Administrator, is:</p>
<pre><code>minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
</code></pre>
<p>With "<code>Primary Virtual Switch</code>" being the name of the new "External" "Virtual network switch" you have created first.</p>
<p>Don't forget to turn off Dynamic Memory for the minikube VM (<a href="https://github.com/kubernetes/minikube/issues/2326" rel="nofollow noreferrer">minikube issue 2326</a>)</p>
<p>And possibly, <a href="https://medium.com/@JockDaRock/disabling-ipv6-on-network-adapter-windows-10-5fad010bca75" rel="nofollow noreferrer">disable IPv6 on Network Adapter Windows 10</a> (<a href="https://github.com/kubernetes/minikube/issues/754#issuecomment-340315883" rel="nofollow noreferrer">issue 754</a>).</p>
<p>Make sure to use the <a href="https://github.com/kubernetes/minikube/releases/download/v0.28.0/minikube-windows-amd64" rel="nofollow noreferrer"><code>v0.28.0/minikube-windows-amd64</code></a> executable, as mentioned in <a href="https://github.com/kubernetes/minikube/issues/1943#issuecomment-332083151" rel="nofollow noreferrer">issue 1943</a>.</p>
|
<p>The Dashboard documentation says that NodePort access is only recommended for development environments in a single-node setup, and to edit the kubernetes-dashboard service:</p>
<pre><code>$ kubectl -n kube-system edit service kubernetes-dashboard
</code></pre>
<p>You should then see the YAML representation of the service; change <code>type: ClusterIP</code> to <code>type: NodePort</code> and save the file.</p>
<p>Can I change ClusterIP to NodePort from the command line, without an editor?
Thanks!</p>
| <p>you can change it like this</p>
<pre><code>kubectl patch svc kubernetes-dashboard --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
</code></pre>
|
<p>I have an image pushed to Google Container Registry, named <code>gcr.io/$(PROJECT_ID)/img-name:46d49ab</code>.</p>
<p>In my replication controller I have:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: go-server-rc
spec:
replicas: 3
selector:
name: go-server
version: v8
template:
metadata:
labels:
name: go-server
version: v8
spec:
containers:
- name: go-server
image: gcr.io/$(PROJECT_ID)/img-name:46d49ab
ports:
- containerPort: 5000
</code></pre>
<p>This works, but it doesn't when I remove the commit hash tag <code>46d49ab</code>. I don't want to have to change the tag every single time I commit.</p>
<p>I have also set up a trigger on Google Container Builder to pull the master branch of my repository after every commit, and create an image <code>gcr.io/$(PROJECT_ID)/img-name:$(COMMIT_HASH)</code>.</p>
<p>How can I edit my replication controller file to just get the most recent? What workflows do people use?</p>
| <p><a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images" rel="nofollow noreferrer">It's possible</a> to use the <code>latest</code> tag to ensure Kubernetes pulls the image each time it runs. Every time you create a new image tag it with <code>latest</code> and push it to the container registry. However, I would not recommend this.</p>
<p>You won't know which pods are running which version of your code. I do exactly as you mention in your question. I find it's better to update your deployment object each time your image updates. This will ensure the deployment is in the state you expect and troubleshooting will be clearer when looking at images.</p>
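<p>If you switch the ReplicationController for a Deployment (which handles rolling updates for you), the CI step after the image push can stay a one-liner, roughly like the sketch below; the deployment and container names are placeholders and <code>$COMMIT_SHA</code> stands for whatever variable your build system exposes for the commit hash:</p>
<pre><code>kubectl set image deployment/go-server go-server=gcr.io/$PROJECT_ID/img-name:$COMMIT_SHA
</code></pre>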
|
<p>I'm trying to learn my way around Kubernetes with Google Cloud Platform. I have a small Vue-based application working locally with the following <code>Dockerfile</code> and <code>docker-compose.yml</code>. </p>
<p>Building and bringing up this project locally works great. However, when using <code>kompose up</code> to create a deployment/svc/etc. for this thing, the container fails to build properly. Ultimately it ends up in a crashing loop.</p>
<p>Inspecting the logs shows that the issue is that <code>npm</code> cannot find <code>/opt/eyeball/package.json</code> or <code>/opt/eyeball/package-lock.json</code>. I'm confused since this isn't an issue when I build and push the image that my cluster is ultimately pulling down - those files are right where you'd expect them to be based on my <code>Dockerfile</code>. Any idea why this might be happening?</p>
<p><em>Dockerfile</em></p>
<pre><code>FROM node:8-alpine
RUN apk --no-cache --update add gzip
RUN mkdir -p /opt/eyeball
ADD ./package.json /opt/eyeball
ADD ./package-lock.json /opt/eyeball
WORKDIR /opt/eyeball
RUN npm install
ADD . /opt/eyeball
</code></pre>
<p><em>docker-compose.yml</em></p>
<pre><code>version: '3'
networks:
default:
external:
name: overmind
services:
eyeball:
image: registry.gitlab.com/souldeux/eyeball:latest
environment:
- HOST=0.0.0.0
ports:
- "8080:8080"
volumes:
- ./:/opt/eyeball
entrypoint: "npm run dev"
</code></pre>
| <p>You need to delete the <code>volumes:</code> block in your <code>docker-compose.yml</code> file.</p>
<p>The <code>volumes:</code> block in your <code>docker-compose.yml</code> directs Docker to take the contents of your local directory and mount them into the container, which hides everything that you add in the <code>Dockerfile</code>. When you deploy this with Kompose, this gets translated to Kubernetes directives, but since the Kubernetes environment doesn't have your local development environment, this results in the deployed containers failing.</p>
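<p>In other words, the service stanza that Kompose converts should rely only on the baked image, roughly like this:</p>
<pre><code>services:
  eyeball:
    image: registry.gitlab.com/souldeux/eyeball:latest
    environment:
      - HOST=0.0.0.0
    ports:
      - "8080:8080"
    entrypoint: "npm run dev"
</code></pre>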
|
<p>I created a cluster with kubeadm (well it's just a single node / master for now) however I seem to be having issues with authenticating as the admin user.</p>
<pre><code>KUBECONFIG=/etc/kubernetes/kubelet.conf kubectl get sa
No resources found.
Error from server (Forbidden): serviceaccounts is forbidden: User "system:node:osboxes" cannot list serviceaccounts in the namespace "default": disabled by feature gate TokenRequest
</code></pre>
<p>I'm assuming that is the admin user? If not, where can I find it / authenticate with it?</p>
| <p>Found it: <code>export KUBECONFIG=/etc/kubernetes/admin.conf</code></p>
|
<p>I want to create a role for a service account, with a context.<br>
My goal is to be able to run <code>kubectl get pods</code> with the context of the service account. </p>
<p>To do it I need:</p>
<ol>
<li>Create service account</li>
<li>Create role</li>
<li>Create bind role</li>
<li>Create context</li>
</ol>
<p>I created a service account: </p>
<pre><code>kubectl create serviceaccount myservice
</code></pre>
<p><strong>Role.yaml:</strong> </p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: development
name: my-role
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["pods"]
verbs: ["get"]
</code></pre>
<p><strong>BindRole.yaml:</strong> </p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: my-role-binding
namespace: development
subjects:
- kind: ServiceAccount
name: myservice
namespace: development
apiGroup: ""
roleRef:
kind: Role
name: my-role
apiGroup: ""
</code></pre>
<p>I want to be able to run <code>kubectl get pods</code> in the context of the service account <code>myservice</code>. </p>
<p>To create context I need something like that: </p>
<pre><code>kubectl config set-context myservice-context --cluster=kubernetes --user=???
</code></pre>
<p>But I can't use <code>--user</code> for the service account.<br>
So how can I do it? </p>
<p>I thought of using <code>kubectl config set-credentials</code>, but it just creates a user, and I already have the service account. </p>
<p>EDIT:<br>
Here is my attempt to create a user with the token of the service account and then use it with <code>kubectl --context=myservice-context get pods</code>, but it failed:<br>
<a href="https://i.stack.imgur.com/DxARm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DxARm.png" alt="enter image description here"></a></p>
| <p>It appears the cluster may be missing from your <code>~/.kube/config</code> file. If it were a permissions issue, I would expect to see either <code>error: You must be logged in to the server (Unauthorized)</code> or <code>Error from server (Forbidden)</code>.</p>
<p>The error you are seeing <code>The connection to the server localhost:8080 was refused - did you specify the right host or port?</code> implies that there is no cluster with the name you specified in your kubeconfig.</p>
<p>I'd check that your kubeconfig includes the cluster name <code>kubernetes</code> with <code>certificate-authority-data</code> and respective <code>server</code>.</p>
<p>For example here is me attempting with non-existent service account first with an invalid cluster, then again with a cluster that does exist in my <code>kubeconfig</code>.</p>
<p>Bad cluster name:</p>
<pre><code>kubectl config set-context service-context \
--cluster=doesnotexist \
> --namespace=default \
> --user=service-account
Context "service-context" modified.
➜ ~ kubectl --context=service-context get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>Good cluster name:</p>
<pre><code>kubectl config set-context service-context \
--cluster=exists \
--namespace=default \
--user=service-account
Context "service-context" modified.
➜ ~ kubectl --context=service-context get pods
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>The later error would suggest there was something wrong with your user/permissions. The former would suggest the <code>cluster</code> does not exist in your <code>kubeconfig</code>.</p>
<p>EDIT:</p>
<p>Also remember that when you use <code>sudo</code>, it uses <code>/root/.kube/config</code>, which may not be what you want.</p>
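<p>For completeness, once the cluster entry exists in your kubeconfig, the service account's token can be wired in as a user along these lines (a sketch that assumes the token secret Kubernetes auto-creates for the service account in the <code>development</code> namespace):</p>
<pre><code>SECRET=$(kubectl -n development get sa myservice -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n development get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)

kubectl config set-credentials myservice --token="$TOKEN"
kubectl config set-context myservice-context \
  --cluster=kubernetes --namespace=development --user=myservice

kubectl --context=myservice-context get pods
</code></pre>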
|
<p>I am deploying a Spring Boot application to Kubernetes. My Dockerfile is as follows.</p>
<pre><code>FROM alpine-jdk1.8:latest
RUN mkdir -p /ext/app
COPY target/app-service.war /ext/app
ENV JAVA_OPTS="" \
APPLICATION_ARGS=""
CMD java ${JAVA_OPTS} -jar /ext/app/app-service.war ${APPLICATION_ARGS}
</code></pre>
<p>I have many config files under the conf directory, but some of them contain secrets.</p>
<p>So I moved a few of them to Secrets and a few to ConfigMaps in Kubernetes, and I created more than one ConfigMap and Secret to group related configs and secrets.</p>
<p>Since there are many ConfigMaps and Secrets, I had to create many volumes and volume mounts, and I used the Spring config location to add all these volumes to the classpath as comma-separated values.</p>
<pre><code>- name: APPLICATION_ARGS
value: --spring.config.location=file:/conf,.....
</code></pre>
<p>Is there any other better approach?</p>
| <p>That is a good approach for secrets, but less so <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">for configMaps</a>.</p>
<p>If your war application can rely on environment variables, a possible approach is to convert that configMap into an rc file (a file with properties) which can then be read once by the application at startup.</p>
<p>You can see an example of such an approach in "<a href="http://blog.knuthaugen.no/2016/06/kubernetes-wars-day-3/" rel="nofollow noreferrer"><strong>The Kubernetes Wars</strong></a>" from <a href="https://twitter.com/knuthaug" rel="nofollow noreferrer"><strong>knu:t hæugen</strong></a>:</p>
<blockquote>
<p>How to deal with configuration?<br>
Kubernetes likes app config in environment variables, not config files.<br>
This is easy in our node apps using <a href="https://github.com/mozilla/node-convict" rel="nofollow noreferrer">convict</a>, pretty easy in our ruby apps and ranging from relatively easy to bloody hard in our java apps. </p>
<p>But how to get config into the replication controllers? We opted for using configmaps (a kubernetes object) to store the config, reference the variables from the rc files and maintain it in git controlled files.<br>
So when we want to change to app config, update the config files and run a script which updates the configmap and reloads all the pods for the app</p>
</blockquote>
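<p>If the application can read plain environment variables, Kubernetes can also inject every key of several ConfigMaps and Secrets at once via <code>envFrom</code>, which avoids both the long <code>spring.config.location</code> list and per-key <code>valueFrom</code> entries. A minimal sketch (the ConfigMap and Secret names are made up):</p>
<pre><code>containers:
- name: app-service
  image: app-service:latest
  envFrom:
  - configMapRef:
      name: app-config-common   # hypothetical ConfigMap
  - configMapRef:
      name: app-config-db       # hypothetical ConfigMap
  - secretRef:
      name: app-secrets         # hypothetical Secret
</code></pre>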
|
<p>I am running an <code>Ingress</code> which is supposed to route traffic to the applications running inside my Pods. When I do <code>describe</code> it looks fine, e.g.</p>
<pre><code>$ kubectl describe svc solar-demo
Name: solar-demo
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"solar-demo","namespace":"default"},"spec":{"ports":[{"name":"city","port":3000...
Selector: app=solardemo
Type: ClusterIP
IP: 10.97.245.248
Port: city 3000/TCP
TargetPort: 3000/TCP
Endpoints: 172.17.0.4:3000,172.17.0.6:3000
Port: solar 3001/TCP
TargetPort: 3001/TCP
Endpoints: 172.17.0.4:3001,172.17.0.6:3001
Session Affinity: None
Events: <none>
</code></pre>
<p>It even correctly lists the IP addresses of my Pods. However, when I try to reach the services, I get a 404 when I just ask for the root, which is fine since I do not map <code>/</code> to anything, and a <code>503</code> error when I try to reach the routes <code>/solar</code> and/or <code>/city</code>.</p>
<p>When I check the logs, it returns:</p>
<pre><code>$ kubectl logs solar-demo-5845984b94-xp82l solar-svc
npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
npm info lifecycle [email protected]~prestart-solar-svc: [email protected]
npm info lifecycle [email protected]~start-solar-svc: [email protected]
> [email protected] start-solar-svc /opt/app-root/src
> node solar-svc.js
{"level":30,"time":1530271233676,"msg":"Server listening at http://0.0.0.0:3001","pid":26,"hostname":"solar-demo-5845984b94-xp82l","v":1}
server listening on 3001
</code></pre>
<p>and the same thing for the other service:</p>
<pre><code>$ kubectl logs solar-demo-5845984b94-xp82l api
npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
npm info lifecycle [email protected]~prestart-api: [email protected]
npm info lifecycle [email protected]~start-api: [email protected]
> [email protected] start-api /opt/app-root/src
> node api.js
{"level":30,"time":1530271244205,"msg":"Server listening at http://0.0.0.0:3000","pid":21,"hostname":"solar-demo-5845984b94-xp82l","v":1}
server listening on 3000
</code></pre>
<p>I get 503s and the Pods never receive any requests, as if the <code>Ingress</code> "thought" that every <code>Pod</code> was down or something. What could I check?</p>
<pre><code>$ curl -v http://shmukler.example.com/solar
* Trying 192.168.99.101...
* Connected to shmukler.example.com (192.168.99.101) port 80 (#0)
> GET /solar HTTP/1.1
> Host: shmukler.example.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 503 Service Temporarily Unavailable
< Server: nginx/1.13.7
< Date: Sun, 01 Jul 2018 13:49:38 GMT
< Content-Type: text/html
< Content-Length: 213
< Connection: keep-alive
<
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body bgcolor="white">
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx/1.13.7</center>
</body>
</html>
* Connection #0 to host shmukler.example.com left intact
</code></pre>
<p>Please advise.</p>
| <p>The following config was missing from the Ingress annotations:</p>
<pre><code>nginx.org/server-snippet: "proxy_ssl_verify off;"
</code></pre>
|
<p>I've lost the original 'kubeadm join' command when I previously ran <code>kubeadm init</code>. </p>
<p>How can I retrieve this value again?</p>
| <pre><code>kubeadm token create --print-join-command
</code></pre>
|
<p>Vert.x and Kubernetes go hand in hand. If I am using Istio as a service mesh, will Zipkin's request tracing be able to track communication done via the event bus?</p>
<p>client ->[rest/http1]-> service-A ->[eventbus]-> service-B</p>
<p>Will istio be able to trace requests done over the eventbus?</p>
<p><a href="https://istio.io/docs/tasks/telemetry/distributed-tracing/" rel="nofollow noreferrer">The tracing page</a> says that headers need to be propagated through in http or grpc - but the eventbus sends messages via tcp -- does that mean that istio will not be able to trace requests and show the visualisation tools [waterfall graph and visualisation of all the services]</p>
<p><a href="http://vertx.io" rel="nofollow noreferrer">Vertx main page</a></p>
<p><a href="https://vertx.io/docs/vertx-core/java/#event_bus" rel="nofollow noreferrer">Eventbus-Vertx page</a></p>
| <p>Tracing in your example would work only for requests made to service-A via the HTTP protocol. Istio at the moment traces only HTTP requests. </p>
<p>You can trace event bus messages manually by creating spans inside message producers and receivers.
IIRC vert.x event bus does not support headers per message:</p>
<p><a href="https://vertx.io/docs/apidocs/io/vertx/core/eventbus/MessageProducer.html" rel="nofollow noreferrer">https://vertx.io/docs/apidocs/io/vertx/core/eventbus/MessageProducer.html</a>
<a href="https://vertx.io/docs/apidocs/io/vertx/core/eventbus/DeliveryOptions.html" rel="nofollow noreferrer">https://vertx.io/docs/apidocs/io/vertx/core/eventbus/DeliveryOptions.html</a></p>
<p>so you would have to encode tracing IDs into the message body and deserialize it correctly on the receiver side.</p>
|
<p>We are running Kubernetes in AWS, and we kept some files on one of the Kubernetes nodes (I know it's a bad idea to keep data on the host machine instead of a separate volume). For some unknown reason, the Kubernetes node state became NotReady. When I used the <strong>describe node</strong> command, the Memory pressure and Disk pressure statuses were "<strong>NodeStatusUnknown</strong>". After some time, that node was terminated and a new node was created automatically. Since the old node was terminated, I couldn't get the data back from the terminated node's root volume. Now, how can I find the reason why it got terminated?</p>
| <p>A node goes NotReady when it fails the master's health checks for roughly three minutes. If you have already collected the journal logs from the node host, you can investigate why the kubelet stopped responding. I have experienced several NotReady issues and they have various causes, such as networking problems, lack of resources, or a hung kubelet service. First, you should ship the logs to remote, centralized storage so that they survive the node's termination and can be inspected afterwards.</p>
|
<p>We have a number of different REST-based services running in Azure within a Kubernetes (version 1.9.6) cluster. </p>
<p>Two of the services, let's say A and B needs to communicate with each other using REST-calls. Typically, something like the following:</p>
<pre><code>Client calls A (original request)
A calls B (request 1)
B calls A (request 2)
A responds to B (request 2)
B responds to A (request 1)
A responds to the original request
</code></pre>
<p>The above being a typical intertwined micro-services architecture. Manually running the docker instances works perfectly on our local test servers.</p>
<p>The moment we run this in Kubernetes on Azure we get intermittent timeouts (60+ seconds) on the micro-services calling each other through Kubernetes' networking services. After a timeout, repeating the request would then often give correct responses in a few micro-seconds.</p>
<p>I am stuck at this point as I have no idea what could be causing this. Could it be the dynamic routing? The virtualised network? Kubernetes configuration? </p>
<p>Any ideas?</p>
| <p>So I ran into this as well.</p>
<p>Basically there is some sort of network timeout that happens on AKS that cuts all connections out of a pod. As you mentioned this results in seemingly random errors that are difficult to trouble shoot since you only get to see them once (as hitting the same service again results in the expected correct behavior).</p>
<p>More details on my question here:
<a href="https://stackoverflow.com/questions/50706483/what-azure-kubernetes-aks-time-out-happens-to-disconnect-connections-in-out">What Azure Kubernetes (AKS) 'Time-out' happens to disconnect connections in/out of a Pod in my Cluster?</a></p>
<p>In my case AKS (or potentially Kubernetes) was disconnecting / severing my Ghost blog connection to my database after a time but not notifying the service which then resulted in strange errors related to the service not realizing that it was disconnected and not being able to continue to utilize the connection it expects to be available / open.</p>
<p>Thats not a solution just more background!</p>
<p>I am debating whether to open a ticket on Azure AKS GitHub (and with my support subscription) to request more information. If I hear back I will update this answer!</p>
|
<p>I've followed the instructions to create an EKS cluster in AWS using Terraform.</p>
<p><a href="https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html" rel="nofollow noreferrer">https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html</a></p>
<p>I've also copied the output for connecting to the cluster to ~/.kube/config-eks. I've verified this successfully works as I've been able to connect to the cluster and manually deploy containers. However, now i'm trying to use the Terraform Kubernetes provider to connect to the cluster but cannot seem to be able to configure the provider properly.</p>
<p>I've configured the provider to use my kubectl configuration, but when attempting to push a simple ConfigMap I get an error stating the following:</p>
<blockquote>
<p>configmaps is forbidden: User "system:anonymous" cannot create configmaps in the namespace "kube-system"</p>
</blockquote>
<p>I know that the provider is picking up part of the configuration but I cannot seem to get it to authenticate. I suspect this is because EKS uses heptio for authentication and i'm not sure if the K8s Go client used by Terraform can support heptio. However, given that Terraform released their AWS EKS support when EKS went GA, I'd doubt that they wouldn't also update their Terraform provider to work with it.</p>
<p>Is it possible to even do this now? Are there alternatives?</p>
| <p>Exec auth was added here: <a href="https://github.com/kubernetes/client-go/commit/19c591bac28a94ca793a2f18a0cf0f2e800fad04" rel="noreferrer">https://github.com/kubernetes/client-go/commit/19c591bac28a94ca793a2f18a0cf0f2e800fad04</a></p>
<p>This is what is utilized for custom authentication plugins and was published Feb 7th.</p>
<p>Right now, Terraform doesn't support the new exec-based authentication provider, but there is an issue open with a workaround: <a href="https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161" rel="noreferrer">https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161</a></p>
<p>That said, if I get some free time I will work on a PR.</p>
|
<p>I have been following this guide - <a href="https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine" rel="nofollow noreferrer">Connecting from Kubernetes Engine</a></p>
<p>And having an errors after all:</p>
<pre><code>$ kubectl describe pod | grep Warning
Warning FailedMount 4m (x15 over 19m) kubelet, gke-bar-dev-default-pool-a6045c50-dg5z MountVolume.SetUp failed for volume "cloudsql-instance-credentials" : secrets "cloudsql-instance-credentials" not found
Warning FailedMount 3m (x7 over 17m) kubelet, gke-bar-dev-default-pool-a6045c50-dg5z Unable to mount volumes for pod "bar-dev-556d7c4f6f-5c2nx_default(9a06b84a-7dcb-11e8-bca8-42010a8e0060)": timeout expired waiting for volumes to attach/mount for pod "default"/"bar-dev-556d7c4f6f-5c2nx". list of unattached/unmounted volumes=[cloudsql-instance-credentials]
Warning Failed 2m kubelet, gke-bar-dev-default-pool-a6045c50-dg5z Error: secrets "cloudsql-db-credentials" not found
</code></pre>
<p>My deployment.yaml file:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: bar-dev
labels:
app: bar-dev
spec:
replicas: 1
selector:
matchLabels:
app: bar-dev
template:
metadata:
labels:
app: bar-dev
spec:
containers:
- name: nginx
image: gcr.io/foo/bar-dev-nginx:v9
ports:
- containerPort: 80
readinessProbe:
httpGet:
path: /health-check
port: 80
initialDelaySeconds: 10
periodSeconds: 60
env:
- name: POSTGRES_DB_HOST
value: 127.0.0.1:5432
# [START cloudsql_secrets]
- name: POSTGRES_DB_USER
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
- name: POSTGRES_DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
# [END cloudsql_secrets]
- name: php-fpm
image: gcr.io/foo/bar-dev-php-fpm:v9
ports:
- containerPort: 9000
env:
- name: POSTGRES_DB_HOST
value: 127.0.0.1:5432
# [START cloudsql_secrets]
- name: POSTGRES_DB_USER
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
- name: POSTGRES_DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
# [END cloudsql_secrets]
# [START proxy_container]
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=foo:us-east1:bar-dev=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
# [END proxy_container]
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
</code></pre>
<p>What is causing this issue and how can I resolve it?</p>
| <pre><code>- name: POSTGRES_DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
</code></pre>
<p>Here, you set the environment variable from secrets. According to the error: </p>
<pre><code> Warning Failed 2m kubelet, gke-bar-dev-default-pool-a6045c50-dg5z Error: secrets "cloudsql-db-credentials" not found
</code></pre>
<p>So, you have to create a secret first.</p>
<p>For this you need to run:</p>
<pre><code>kubectl create secret generic cloudsql-db-credentials \
--from-literal=username=proxyuser --from-literal=password=[PASSWORD]
</code></pre>
|
<p>I am trying to run KeyCloak on Kubernetes using PostgreSQL as a database.</p>
<p>The files I am referring to are on the <a href="https://github.com/peterzandbergen/keycloak-kubernetes" rel="nofollow noreferrer">peterzandbergen/keycloak-kubernetes</a> project on GitHub.</p>
<p>I used <code>kompose</code> to generate the <code>yaml</code> files, as a starting point, using the files that <a href="https://github.com/jboss-dockerfiles/keycloak/tree/master/docker-compose-examples" rel="nofollow noreferrer">jboss</a> published.</p>
<p>PostgreSQL is started first using:</p>
<pre class="lang-sh prettyprint-override"><code>./start-postgres.sh
</code></pre>
<p>Then I try to start KeyCloak:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create -f keycloak-deployment.yaml
</code></pre>
<p>The KeyCloak pod stops because it cannot connect to the database with the error:</p>
<pre><code>10:00:40,652 SEVERE [org.postgresql.Driver] (ServerService Thread Pool -- 58) Error in url: jdbc:postgresql://172.17.0.4:tcp://10.101.187.192:5432/keycloak
</code></pre>
<p>The full log can be found on <a href="https://github.com/peterzandbergen/keycloak-kubernetes" rel="nofollow noreferrer">github</a>. This is also the place to look at the yaml files that I use to create the deployment and the services.</p>
| <p>After some experimenting I found out that using the name <strong>postgres</strong> in the keycloak-deployment.yaml file </p>
<pre><code>- env:
- name: DB_ADDR
value: postgres
</code></pre>
<p>messes things up and results in a strange expansion. After replacing this part of the yaml file with:</p>
<pre><code>- env:
- name: DB_ADDR
value: postgres-keycloak
</code></pre>
<p>makes it work fine. This also requires changing the postgres-service.yaml file. The new versions of the files are in <a href="https://github.com/peterzandbergen/keycloak-kubernetes" rel="nofollow noreferrer">github</a>.</p>
|
<blockquote>
<p>I am trying to use Cypress to test a React application inside a Kubernetes Pod.
I made two containers inside the Pod: one for the React application and one for the Cypress test tool.
The Docker specification of the application is:</p>
</blockquote>
<pre><code>FROM node:9-slim AS app
ENV PORT 3000
EXPOSE 3000
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN npm install react --save
RUN npm install ajv --save
CMD ["npm", "start"]
</code></pre>
<blockquote>
<p>and for Cypress</p>
</blockquote>
<pre><code>FROM cypress/base AS cypress
COPY . /test
WORKDIR /test
RUN npm i --save-dev cypress
RUN npm install ajv --save-dev
CMD $(npm bin)/cypress run --record --key 92213f49-6b80-47d5-bbef-2422f308ebf1 -s ./cypress/integration/SystemTest.js
</code></pre>
<blockquote>
<p>the Kubernetes Pod specification</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: productlib-pod
spec:
restartPolicy: Never
containers:
- name: app-container
image: steinko/app-productlib:1
- name: cypress-container
image: steinko/cypress-productlib:1
</code></pre>
<blockquote>
<p>I execute the commands in the build script</p>
</blockquote>
<pre><code>- docker login -u steinko -p DavidBowie1!
- docker build -t steinko/app-productlib:1 -f Dockerfile.app .
- docker push steinko/app-productlib:1
- docker build -t steinko/cypress-productlib:1 -f Dockerfile.cypress .
- docker push steinko/cypress-productlib:1
- kubectl apply -f pod.yaml
</code></pre>
<blockquote>
<p>I get an error that Cypress cannot find localhost:3000. What is
missing?</p>
</blockquote>
<pre><code>It looks like this is your first time using Cypress: 3.0.1
[13:08:43] Verifying Cypress can run /root/.cache/Cypress/3.0.1/Cypress [started] [13:08:46] Verified Cypress! /root/.cache/Cypress/3.0.1/Cypress [title changed] [13:08:46] Verified Cypress! /root/.cache/Cypress/3.0.1/Cypress [completed]
Opening Cypress...
================================================================================
(Run Starting)
┌────────────────────────────────────────────────────────────────────────────┐
│ Cypress: 3.0.1 │
│ Browser: Electron 59 (headless) │
│ Specs: 1 found (SystemTest.js) │
│ Searche… cypress/integration/SystemTest.js │
│ Run URL: https://dashboard.cypress.io/#/projects/rwxp3c/runs/609 │
└────────────────────────────────────────────────────────────────────────────┘
────────────────────────────────────────────────────────────────────────────────
Running: SystemTest.js... (1 of 1)
Spesifies client behaivior
1) should diapley hello world in div
✓ should be true (64ms)
1 passing (1m) 1 failing
1) Spesifies client behaivior should diapley hello world in div:
CypressError: Timed out after waiting '60000ms' for your remote page to load.
Your page did not fire its 'load' event within '60000ms'.
You can try increasing the 'pageLoadTimeout' value in 'cypress.json' to wait longer.
Browsers will not fire the 'load' event until all stylesheets and scripts are done downloading.
When this 'load' event occurs, Cypress will continue running commands.
at Object.cypressErr (http://localhost:42219/__cypress/runner/cypress_runner.js:67929:11)
at Object.throwErr (http://localhost:42219/__cypress/runner/cypress_runner.js:67894:18)
at Object.throwErrByPath (http://localhost:42219/__cypress/runner/cypress_runner.js:67921:17)
at timedOutWaitingForPageLoad (http://localhost:42219/__cypress/runner/cypress_runner.js:58198:17)
at http://localhost:42219/__cypress/runner/cypress_runner.js:58807:18
at tryCatcher (http://localhost:42219/__cypress/runner/cypress_runner.js:7091:23)
at http://localhost:42219/__cypress/runner/cypress_runner.js:2408:41
at tryCatcher (http://localhost:42219/__cypress/runner/cypress_runner.js:7091:23)
at Promise._settlePromiseFromHandler (http://localhost:42219/__cypress/runner/cypress_runner.js:5113:31)
at Promise._settlePromise (http://localhost:42219/__cypress/runner/cypress_runner.js:5170:18)
at Promise._settlePromise0 (http://localhost:42219/__cypress/runner/cypress_runner.js:5215:10)
at Promise._settlePromises (http://localhost:42219/__cypress/runner/cypress_runner.js:5290:18)
at Async._drainQueue (http://localhost:42219/__cypress/runner/cypress_runner.js:2023:16)
at Async._drainQueues (http://localhost:42219/__cypress/runner/cypress_runner.js:2033:10)
at Async.drainQueues (http://localhost:42219/__cypress/runner/cypress_runner.js:1907:14)
at <anonymous>
(Results)
┌───────────────────────────────────┐
│ Tests: 2 │
│ Passing: 1 │
│ Failing: 1 │
│ Pending: 0 │
│ Skipped: 0 │
│ Screenshots: 1 │
│ Video: true │
│ Duration: 1 minute, 0 seconds │
│ Spec Ran: SystemTest.js │
└───────────────────────────────────┘
(Screenshots)
- /test/cypress/screenshots/Spesifies client behaivior -- should diapley hello world in div.png (1280x720)
(Video)
- Started processing: Compressing to 32 CRF
- Compression progress: 45%
- Compression progress: 91%
- Finished processing: /test/cypress/videos/SystemTest.js.mp4 (22 seconds)
(Uploading Results)
- Done Uploading (1/2) /test/cypress/screenshots/Spesifies client behaivior -- should diapley hello world in div.png
- Done Uploading (2/2) /test/cypress/videos/SystemTest.js.mp4
================================================================================
(Run Finished)
Spec Tests Pass… Fail… Pend… Skip…
┌────────────────────────────────────────────────────────────────────────────┐
│ ✖ SystemTest.js 01:00 2 1 1 - - │
└────────────────────────────────────────────────────────────────────────────┘
1 of 1 failed (100%) 01:00 2 1 1 - -
───────────────────────────────────────────────────────────────────────────────────
Recorded Run: https://dashboard.cypress.io/#/projects/rwxp3c/runs/609
</code></pre>
| <p>I would recommend checking the application's health.</p>
<p>Add a third container with Ubuntu to your pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: productlib-pod
spec:
restartPolicy: Never
containers:
- name: app-container
image: steinko/app-productlib:1
- name: cypress-container
image: steinko/cypress-productlib:1
- name: ubuntu
image: ubuntu
args: [bash, -c, 'apt-get update && apt-get -y install curl less net-tools && while true ; do sleep 1000; done']
</code></pre>
<p>Wait until packages in Ubuntu are installed:</p>
<pre><code>$ kubectl logs productlib-pod ubuntu
<skipped>
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
</code></pre>
<p>Execute bash in ubuntu container:</p>
<pre><code>$ kubectl exec -ti productlib-pod -c ubuntu -- bash
</code></pre>
<p>Check if the port 3000 is in LISTEN state:</p>
<pre><code>root@productlib-pod# netstat -anl
</code></pre>
<p>If so, try to connect to it using curl:</p>
<pre><code>root@productlib-pod# curl http://localhost:3000
</code></pre>
<p>This way you can ensure the application is able to respond to requests on port 3000.</p>
|
<p>I have a running Jenkins pod and I am trying to execute the following command:</p>
<pre><code>sudo kubectl --kubeconfig /opt/jenkins_home/admin.conf apply -f /opt/jenkins_home/ab-kubernetes/ab-back.yml
</code></pre>
<p>It is giving the following error:</p>
<p><strong>Error from server (NotFound): the server could not find the requested resource</strong></p>
<p>What could go wrong here?</p>
<p><code>ab-back.yml</code> file</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: dg-back-svc
spec:
selector:
app: dg-core-backend-d
type: NodePort
ports:
- name: http
protocol: TCP
port: 8081
nodePort: 30003
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: dg-core-backend-d
spec:
replicas: 1
template:
metadata:
labels:
app: dg-core-backend-d
spec:
containers:
- name: dg-core-java
image: ab/dg-springboot-java:1.0
imagePullPolicy: IfNotPresent
command: ["sh"]
args: ["-c", "/root/post-deployment.sh"]
ports:
- containerPort: 8081
# livenessProbe:
# httpGet:
# path: /
# port: 8080
env:
- name: SPRING_PROFILES_ACTIVE
value: xxx
</code></pre>
<p><strong>UPDATE:</strong></p>
<p><code>kubectl version</code> is as follows :</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>On applying logs as <code>--v=4</code>,<code>kubectl apply</code> is working and giving logs as follows :</p>
<pre><code>I0702 11:40:17.721604 1601 merged_client_builder.go:159] Using in-cluster namespace
I0702 11:40:17.734648 1601 decoder.go:224] decoding stream as YAML
service/dg-back-svc created
deployment.extensions/dg-core-backend-d created
</code></pre>
<p>but <code>kubectl create</code> is giving error as :</p>
<pre><code>I0702 11:41:12.265490 1631 helpers.go:201] server response object: [{
"metadata": {},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {
"causes": [
{
"reason": "UnexpectedServerResponse",
"message": "unknown"
}
]
},
"code": 404
}]
</code></pre>
<p>Also on doing <code>kubectl get pods --v=10</code>,it is giving log as :</p>
<pre><code>Response Body: {
"metadata": {},
"status": "Failure",
"message": "only the following media types are accepted: application/json, application/yaml, application/vnd.kubernetes.protobuf",
"reason": "NotAcceptable",
"code": 406
}
I0702 12:34:27.542564 2514 request.go:1099] body was not decodable (unable to check for Status): Object 'Kind' is missing in '{
"metadata": {},
"status": "Failure",
"message": "only the following media types are accepted: application/json, application/yaml, application/vnd.kubernetes.protobuf",
"reason": "NotAcceptable",
"code": 406
}'
No resources found.
I0702 12:34:27.542813 2514 helpers.go:201] server response object: [{
"metadata": {},
"status": "Failure",
"message": "unknown (get pods)",
"reason": "NotAcceptable",
"details": {
"kind": "pods",
"causes": [
{
"reason": "UnexpectedServerResponse",
"message": "unknown"
}
]
},
"code": 406
}]
</code></pre>
| <p>The problem is the version mismatch between client and server: either use an older client that matches the server, or upgrade the server. kubectl only supports one minor version of skew forward or backward:</p>
<p>From <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>a client should be skewed no more than one minor version from the
master, but may lead the master by up to one minor version. For
example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes,
and should work with v1.2, v1.3, and v1.4 clients.</p>
</blockquote>
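<p>For example, with a v1.6 API server you could drop in a matching client like this (the URL below follows the standard Kubernetes release bucket pattern; pick a v1.6.x or v1.7.x version):</p>
<pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
kubectl version
</code></pre>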
|
<p>I ran the following code in the python shell but it is not showing up the docker/kubernetes pod logs:</p>
<pre><code>import logging
logging.basicConfig(level=logging.INFO)
logging.info("Test")
</code></pre>
<p>This is a simplified version of what I am trying to do which is essentially run one-off scripts/commands in the python shell and have logs to show up. How do I go about getting this to work? Is it because the shell I opened up is not attached to the process that creates the logs?</p>
<p>I'm currently using Docker and Kubernetes.</p>
<p>Thanks!</p>
| <p>As <a href="https://stackoverflow.com/users/225016/matthew-l-daniel">Matthew L Daniel</a> wrote, you can work around this with additional tooling that redirects your logs to stdout.</p>
<p>You can also import the <code>sys</code> module and stream logs to stdout with a <code>StreamHandler</code>. Here is an example:</p>
<pre><code>import logging
import sys
root = logging.getLogger()
root.setLevel(logging.DEBUG)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
root.addHandler(ch)
</code></pre>
<p>I found it on Stack, so I’ve attached <a href="https://stackoverflow.com/questions/14058453/making-python-loggers-output-all-messages-to-stdout-in-addition-to-log">link as a source.</a></p>
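<p>If you only need basic configuration, a shorter variant is to point <code>basicConfig</code> at stdout directly:</p>
<pre><code>import logging
import sys

# send all INFO-and-above records to stdout so they show up in `kubectl logs`
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.info("Test")
</code></pre>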
|
<p>I have a container running on Kubernetes where I would like to run a command from my local terminal. The working directory of the container based on its yaml file is at say <code>/opt/gopath/src/a/</code>. However, I would like to run the command at directory <code>/opt/gopath/src/b/</code>. Additionally, I also need to set certain temporary environment variables before running this command. So currently, this is what I am doing:</p>
<p><code>
kubectl exec $pod_name -- bash -c "cd ../b/; env ENV_VARIABLE_1=ENV_VALUE_2 && env ENV_VARIABLE_2=ENV_VALUE_2 && <cmd to run>".
</code></p>
<p>This seems to be working currently. However, I am not convinced that this is the best way to achieve this. Also, running this command displays all the environment variables in the container -- which I would like not to be printed if possible. Could anyone point me in the right direction?</p>
| <p>In case of static variables, I would suggest using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-pod-environment-variables-using-configmap-data" rel="noreferrer">Config maps</a>.</p>
<p>Since you only need temporary variables, there is no need for long and complicated commands: <code>exec</code> connects your terminal directly to the running container (pod). I reproduced your case by setting simple environment variables inline, using the syntax you provided:</p>
<pre><code>kubectl exec -it $pod_name -- sh -c 'key=123 key2=121; echo "$key $key2"'
</code></pre>
<p>To pass environment variables, you can simply set them inline like this, with a <code>';'</code> delimiter between the variable assignments and your command. Variables set this way stay local to that command, so the container's full environment is not printed.</p>
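<p>Applied to the directory and variable names from your question, a rough sketch (values are placeholders) would be:</p>
<pre><code>kubectl exec -it $pod_name -- sh -c 'cd /opt/gopath/src/b && ENV_VARIABLE_1=ENV_VALUE_1 ENV_VARIABLE_2=ENV_VALUE_2 <cmd to run>'
</code></pre>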
|
<p>When I run <code>nslookup</code> on my docker container, I get this error </p>
<pre><code> nslookup 10.0.96.xx
nslookup: can't resolve '(null)': Name does not resolve
Name: 10.0.96.xx
Address 1: 10.0.96.xx
</code></pre>
<p>I've read that the problem comes from docker which injects this on the <code>resolv.conf</code>:</p>
<pre><code>nameserver 10.109.0.xx
nameserver 10.96.0.xx
search devlan.xxx.corp poc.svc.cluster.local svc.cluster.local cluster.local
options ndots:2 edns0
</code></pre>
<p>With kubernetes, I tried to overwrite it with the following parameters on my <code>PodConfFile.yaml</code>.</p>
<pre><code>spec:
dnsPolicy: "None"
# dnsPolicy: ClusterFirst
dnsConfig:
nameservers:
- 10.109.0.xx
- 10.96.0.xx
searches:
- devlan.xxx.corp
- poc.svc.cluster.local
- svc.cluster.local
- cluster.local
options:
- name: ndots
value: "2"
- name: edns0
</code></pre>
<p>Do you have any advice to resolve this issue? </p>
<ul>
<li>Docker version: 18.03.1-ce</li>
<li>go1.9.3</li>
</ul>
| <p>I found a solution to this issue. In your Pod spec, remove these parameters:</p>
<blockquote>
<p>nameservers, searches, options</p>
</blockquote>
<p>and add this one instead:</p>
<pre><code>spec:
dnsPolicy: Default
</code></pre>
<p>The Pod will then inherit the DNS configuration directly from the node.</p>
<p>You will then be able to reach external services.</p>
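<p>As a quick sanity check (the pod name is a placeholder), you can confirm what the Pod actually received:</p>
<pre><code>kubectl exec -ti <pod-name> -- cat /etc/resolv.conf
kubectl exec -ti <pod-name> -- nslookup google.com
</code></pre>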
|
<p>I created a service account and I want to give it permissions to list pods: <code>kubectl get pods</code>. </p>
<p>I have these settings for service account, role and rolebinding:</p>
<p><strong>ServiceAccount.yaml</strong>: </p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: myservice3
</code></pre>
<p><strong>Role.yaml</strong>: </p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: default
name: my-role
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["pods"]
verbs: ["get"]
</code></pre>
<p><strong>RoleBinding.yaml</strong>: </p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: my-role-binding
namespace: default
subjects:
- kind: ServiceAccount
name: myservice3
namespace: default
apiGroup: ""
roleRef:
kind: Role
name: my-role
apiGroup: ""
</code></pre>
<p>I created a user with the token of the service account: </p>
<pre><code>SECRET_NAME=`kubectl get serviceaccounts myservice3 -o json | jq -r '.secrets[].name'`
TOKEN=`kubectl get secrets $SECRET_NAME -o json | jq -r '.data | .token' | base64 -d`
kubectl config set-credentials $USER_NAME --token=$TOKEN
</code></pre>
<p>Set a context to this user: </p>
<pre><code>kubectl config set-context my-context \
--cluster=kubernetes \
--namespace=default \
--user=$USER_NAME
</code></pre>
<p>When tried to use its permissions: </p>
<pre><code>$ kubectl get pods --context=my-context
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:myservice3" cannot list pods in the namespace "default"
</code></pre>
<p>I also tried with <code>curl</code>: </p>
<pre><code>$ curl -k -v -H "Authorization: Bearer $TOKEN" https://127.0.0.1:6443
* About to connect() to 127.0.0.1 port 6443 (#0)
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 6443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=kube-apiserver
* start date: Jul 02 09:36:21 2018 GMT
* expire date: Jul 02 09:36:21 2019 GMT
* common name: kube-apiserver
* issuer: CN=kubernetes
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:6443
> Accept: */*
> Authorization: Bearer <JWT_token>
>
< HTTP/1.1 403 Forbidden
< Content-Type: application/json
< X-Content-Type-Options: nosniff
< Date: Mon, 02 Jul 2018 12:17:10 GMT
< Content-Length: 257
<
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:serviceaccount:default:myservice3\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
* Connection #0 to host 127.0.0.1 left intact
</code></pre>
<p>Any idea what I am doing wrong? I gave the service account a <code>get pods</code> permissions and it is still being forbidden. </p>
| <p>The Role.yaml also needed to include <code>list</code>: </p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: default
name: my-role
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["pods"]
verbs: ["get", "list"]
</code></pre>
<p><code>get</code> is the permission to retrieve an individual item, while <code>list</code> is the permission to retrieve all items of a resource.<br>
Now it works when running: </p>
<pre><code>kubectl get pods --context=myservice3-context
</code></pre>
<p>OR </p>
<pre><code>curl -k -v -H "Authorization: Bearer <JWT_token> " https://127.0.0.1:6443/api/v1/namespaces/default/pods
</code></pre>
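<p>As a quick way to verify the permissions without switching contexts, you can also ask the API server directly while impersonating the service account:</p>
<pre><code>kubectl auth can-i get pods --as=system:serviceaccount:default:myservice3 -n default
kubectl auth can-i list pods --as=system:serviceaccount:default:myservice3 -n default
</code></pre>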
|
<p>I'm new to k8s, so some of my terminology might be off. But basically, I'm trying to deploy a simple web api: one load balancer in front of n pods (where right now, n=1). </p>
<p>However, when I try to visit the load balancer's IP address it doesn't show my web application. When I run kubectl get deployments, I get this:</p>
<pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
tl-api 1 1 1 0 4m
</code></pre>
<p>Here's my YAML file. Let me know if anything looks off--I'm very new to this!</p>
<pre><code>---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: tl-api
spec:
replicas: 1
template:
metadata:
labels:
app: tl-api
spec:
containers:
- name: tl-api
image: tlk8s.azurecr.io/devicecloudwebapi:v1
ports:
- containerPort: 80
imagePullSecrets:
- name: acr-auth
nodeSelector:
beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
name: tl-api
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: tl-api
</code></pre>
<p>Edit 2: When I try using ACS (which supports Windows), I get this:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned tl-api-3466491809-vd5kg to dc9ebacs9000
Normal SuccessfulMountVolume 11m kubelet, dc9ebacs9000 MountVolume.SetUp succeeded for volume "default-token-v3wz9"
Normal Pulling 4m (x6 over 10m) kubelet, dc9ebacs9000 pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
Warning FailedSync 1s (x50 over 10m) kubelet, dc9ebacs9000 Error syncing pod
Normal BackOff 1s (x44 over 10m) kubelet, dc9ebacs9000 Back-off pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
</code></pre>
<p>I then try examining the failed pod:</p>
<pre><code>PS C:\users\<me>\source\repos\DeviceCloud\DeviceCloud\1- Presentation\DeviceCloud.Web.API> kubectl logs tl-api-3466491809-vd5kg
Error from server (BadRequest): container "tl-api" in pod "tl-api-3466491809-vd5kg" is waiting to start: trying and failing to pull image
</code></pre>
<p>When I run <code>docker images</code> I see the following:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
devicecloudwebapi latest ee3d9c3e231d 24 hours ago 7.85GB
tlk8s.azurecr.io/devicecloudwebapi v1 ee3d9c3e231d 24 hours ago 7.85GB
devicecloudwebapi dev bb33ab221910 25 hours ago 7.76GB
</code></pre>
| <p>Your problem is that the container image <code>tlk8s.azurecr.io/devicecloudwebapi:v1</code> is in a private container registry. See the events at the bottom of the following command:</p>
<pre><code>$ kubectl describe po -l=app=tl-api
</code></pre>
<p>The official Kubernetes docs describe how to resolve this issue, see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="noreferrer">Pull an Image from a Private Registry</a>, essentially:</p>
<ul>
<li>Create a secret with <code>kubectl create secret docker-registry</code> (see the sketch below)</li>
<li>Use it in your deployment, under the <code>spec.imagePullSecrets</code> key</li>
</ul>
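<p>Since your deployment already references a secret named <code>acr-auth</code>, a sketch of creating it for your ACR registry (all credential values are placeholders) could look like:</p>
<pre><code>kubectl create secret docker-registry acr-auth \
  --docker-server=tlk8s.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password> \
  --docker-email=<any-valid-email>
</code></pre>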
|
<p>I am playing around with Spring Cloud Data Flow. I have successfully deployed SCDF on Kubernetes using the related <a href="https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.5.0.RELEASE/reference/htmlsingle/#_available_applications" rel="nofollow noreferrer">documentation</a>. When registering the <a href="http://repo.spring.io/libs-release-local/org/springframework/cloud/stream/app/spring-cloud-stream-app-descriptor/Celsius.SR3/spring-cloud-stream-app-descriptor-Celsius.SR3.stream-apps-kafka-10-docker" rel="nofollow noreferrer">1.5.x based starter apps</a>, everything is working as expected, no further configuration of the starter apps during the deployment of a stream definition is needed. </p>
<p>When using the <a href="http://repo.spring.io/libs-release/org/springframework/cloud/stream/app/spring-cloud-stream-app-descriptor/Darwin.RELEASE/spring-cloud-stream-app-descriptor-Darwin.RELEASE.stream-apps-kafka-10-docker" rel="nofollow noreferrer">2.x based starter apps</a>, there are some changes introduced by the switch to Spring Boot 2.0 that need to be accommodated for, e.g. the actuator endpoints changed. For reference, here are the properties that I provide during the deployment of the stream:</p>
<pre><code>app.*.management.endpoints.web.exposure.include=health,info,binders
deployer.*.cpu=2
deployer.*.memory=4096
deployer.http.count=2
deployer.*.kubernetes.livenessProbePath=/actuator/health
deployer.*.kubernetes.readinessProbePath=/actuator/info
</code></pre>
<p>However, the readiness probe fails since the <code>health</code> and the <code>info</code> endpoint now seem to be protected by default. Therefore, the pods end up in crashloops since from the Kubernetes perspective they get never ready.</p>
<p>I worked around the situation by following the guide on <a href="https://docs.spring.io/spring-cloud-stream-app-starters/docs/Darwin.RELEASE/reference/htmlsingle/#_patching_pre_built_applications" rel="nofollow noreferrer">patching the starter apps</a> that my stream definition relies on (e.g. <code>throughput</code> sink) like this:</p>
<pre><code>@SpringBootApplication
@Import({org.springframework.cloud.stream.app.throughput.sink.ThroughputSinkConfiguration.class})
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
@Configuration
protected static class ThroughputSinkSecurityConfiguration extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.authorizeRequests()
.requestMatchers(EndpointRequest.to("health", "info")).permitAll();
}
}
}
</code></pre>
<p>Is there a way to specify this kind of security configuration via flags or properties? Shouldn't such a <code>WebSecurityConfigurerAdapter</code> be there by default to make the <code>health</code> and <code>info</code> endpoints accessible for Kubernetes?</p>
| <p>Artem's response is very relevant. I wanted to also share a few other approaches specific to security and OOTB apps.</p>
<ol>
<li><p>In 1.6 SNAPSHOTs, we have recently added support via <a href="https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/issues/236" rel="nofollow noreferrer">spring-cloud/spring-cloud-deployer-kubernetes#236</a> to plug basic-auth realm to interact with secured actuator endpoints. They are applicable to both liveness and readiness probes. Here's the <a href="https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes/commit/110abeb97d1130af96b6c099860a85cd1995a7bf" rel="nofollow noreferrer">commit/docs</a> for your reference.</p>
</li>
<li><p>If you don't really want security at all, though not recommended, you can explicitly disable the Security configuration.</p>
</li>
</ol>
<blockquote>
<p>dataflow:>stream create foo -- definition "http | throughput"</p>
<p>dataflow:>stream deploy foo --properties app.*.spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration"</p>
</blockquote>
<p><em>(i.e., all the Apps in the <code>foo</code> stream definition will start with <code>SecurityAutoConfiguration</code> excluded)</em></p>
|
<p>I can't seem to get a node to join the cluster. </p>
<pre><code>[discovery] Trying to connect to API Server "10.0.2.15:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.2.15:6443"
I0702 11:09:08.268102 10342 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info'
I0702 11:09:08.268676 10342 round_trippers.go:405] GET https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info in 0 milliseconds
I0702 11:09:08.268873 10342 round_trippers.go:411] Response Headers:
[discovery] Failed to request cluster info, will try again: [Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.2.15:6443: connect: connection refused]
</code></pre>
<p>The port seems closed (from the node):</p>
<pre><code>telnet 10.0.2.15 6443
Trying 10.0.2.15...
telnet: Unable to connect to remote host: Connection refused
</code></pre>
<p>While on the master:</p>
<pre><code>telnet 10.0.2.15 6443
Trying 10.0.2.15...
Connected to 10.0.2.15.
Escape character is '^]'.
^CConnection closed by foreign host.
</code></pre>
<p>What may be the cause of this?</p>
| <p>Both machines are virtual machines and 10.0.2.15 is the NAT IP, which is the same for both machines (each VM has its own independent NAT network), so the worker was effectively trying to connect to itself instead of the master...</p>
<p>Sigh...</p>
<p>In the event it is helpful to someone else:</p>
<pre><code>iptables -t raw -A OUTPUT -p tcp --dport 6443 -j TRACE
iptables -t raw -A PREROUTING -p tcp --dport 6443 -j TRACE
tail -f /var/log/kern.log
</code></pre>
|
<p>I am trying to produce to a kafka broker which is running inside the container launched by kubernetes. I am playing with <code>KAFKA_ADVERTISED_LISTENERES</code> and <code>KAFKA_LISTERNERS</code>.</p>
<p>I tried setting these two env variables <code>KAFKA_ADVERTISED_LISTENERES</code> = <code>PLAINTEXT://<host-ip>:9092</code> and <code>KAFKA_LISTERNERS</code> = <code>PLAINTEXT://0.0.0.0:9092</code> and ran using docker-compose. And I was able to produce from an application out of the host machine.</p>
<p>But setting these two env-variables in Kubernetes.yml file, I get <code>No broker list available</code> exception.</p>
<p>What am I missing here?</p>
<p><strong>Update:</strong></p>
<p><strong>kafka-pod.yaml:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: casb-deployment
name: kafkaservice
spec:
replicas: 1
template:
metadata:
labels:
app: kafkaservice
spec:
hostname: kafkaservice
#hostNetwork: true # to access docker out side of host container
containers:
- name: kafkaservice
imagePullPolicy: IfNotPresent
image: wurstmeister/kafka:1.1.0
env: # for production
- name: KAFKA_ADVERTISED_LISTENERES
value: "PLAINTEXT://<host-ip>:9092"
- name: KAFKA_LISTERNERS
value: "PLAINTEXT://0.0.0.0:9092"
- name: KAFKA_CREATE_TOPICS
value: "Topic1:1:1,Topic2:1:1"
- name: KAFKA_MESSAGE_TIMESTAMP_TYPE
value: "LogAppendTime"
- name: KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE
value: "LogAppendTime"
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeper:2181"
ports:
- name: port9092
containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
namespace: casb-deployment
name: kafkaservice
labels:
app: kafkaservice
spec:
selector:
app: kafkaservice
ports:
- name: port9092
port: 9092
targetPort: 9092
protocol: TCP
</code></pre>
| <p>I'm assuming you have a Kubernetes Service whose selector routes the incoming traffic to your Kafka broker and that exposes a <code>nodePort</code> (as opposed to only a <code>clusterIP</code>).
<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
<p>So the kubernetes pod should be reachable through <code>localhost:<nodePort></code>.</p>
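<p>For reference, a minimal sketch of a NodePort variant of your service (the <code>nodePort</code> value here is arbitrary within the 30000-32767 range) might look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice
spec:
  type: NodePort
  selector:
    app: kafkaservice
  ports:
  - name: port9092
    port: 9092
    targetPort: 9092
    nodePort: 30092
</code></pre>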
<p>You can also set a Load Balancer in front of your Kubernetes cluster then you can just expose the k8s pods, i.e., allow external ingress.</p>
<p>Then the next step is to just leverage some DNS record so the outbound request produced by your docker-compose-based containers will go to DNS and then come back to your Kubernetes cluster through the load balancer.</p>
|
<p>I have an AWS <strong>LoadBalancer</strong> which was created using <code>Kube</code>, <code>Kops</code> and <code>AWS</code>.
The protocol type for the <strong>ELB</strong> is <strong>tcp</strong>. This works fine for <code>http</code> requests, meaning I can access my site with <strong><a href="http://testing.example.com" rel="nofollow noreferrer">http://testing.example.com</a></strong>. Now I tried to add <strong>SSL</strong> to this <strong>ELB</strong> using <strong>ACM</strong> <code>(Certificate Manager)</code>. I added my domain details <code>example.com</code> and <code>*.example.com</code> by requesting a <strong>public certificate</strong>. It was created successfully and domain validation also succeeded.</p>
<blockquote>
<p>Then I tried to add this ssl to my ELB like below.</p>
</blockquote>
<ul>
<li>went to my ELB and selected the ELB.</li>
<li>Then went to Listeners tab and Added SSL to it like below.</li>
</ul>
<p><a href="https://i.stack.imgur.com/Za3k7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Za3k7.png" alt="enter image description here"></a></p>
<p>and <strong>ELB</strong> description is like below.</p>
<p><a href="https://i.stack.imgur.com/XUzVT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XUzVT.png" alt="enter image description here"></a></p>
<p>I cannot access <a href="https://testing.example.com" rel="nofollow noreferrer">https://testing.example.com</a>; it hangs for a few minutes and nothing happens. What is going on here? I hope you can help with this.</p>
| <p>In the Listener configuration, you are forwarding the default HTTP port <code>80</code> to port <code>30987</code> on the back-end server. So this tells me that the back-end server is listening for HTTP requests on port <code>30987</code>.</p>
<p>You then added an SSL listener on the default port <code>443</code> but you are forwarding that to port <code>443</code> on the back-end server. Do you have something on your back-end listening on port <code>443</code> in addition to <code>30987</code>?</p>
<p>The most likely fix for this is to change the SSL listener on the load balancer to forward to port <code>30987</code> on the back-end by setting that as the "Instance Port" setting.</p>
|
<p>I am very new to OpenWhisk and have some difficulties with the setup.
The Nginx Pod is in a CrashLoopBackOff because of an error in the Pod.</p>
<pre><code>2018/07/02 16:14:27 [emerg] 1#1: host not found in resolver "kube-dns.kube-system" in /etc/nginx/nginx.conf:41
nginx: [emerg] host not found in resolver "kube-dns.kube-system" in /etc/nginx/nginx.conf:41
</code></pre>
<p>I cannot jump into the Pod itself but I ran a Docker Container with the same Image the Pod is using and looked inside the nginx.conf:</p>
<pre><code>user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
</code></pre>
<p>When I looked into the conf.d directory I found a single default.conf file in which the server_name was set to localhost:</p>
<pre><code>server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/log/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
</code></pre>
<p>I believe this is causing the issue and that the kube-dns service cannot resolve localhost. </p>
<p>However, I do not know how I can resolve this issue or at least work around it.
Maybe I can set a static hostname for the Pod in the Nginx Deployment and enter that hostname into the nginx config?</p>
<p>Could someone provide a workaround or even a fix?</p>
<p>Many thanks.</p>
| <p>Are you using the "OpenWhisk Deployment on Kubernetes (<a href="https://github.com/apache/incubator-openwhisk-deploy-kube" rel="nofollow noreferrer">https://github.com/apache/incubator-openwhisk-deploy-kube</a>) project? </p>
<p>I suspect you may be hitting the Kubernetes bug described in the <a href="https://github.com/apache/incubator-openwhisk-deploy-kube/blob/master/README.md" rel="nofollow noreferrer">README.md</a>: </p>
<blockquote>
<p>However, multiple minor releases of Kubernetes, including 1.8.9 and 1.9.4 will not work for OpenWhisk due to bugs with volume mount subpaths (see[<a href="https://github.com/kubernetes/kubernetes/issues/61076" rel="nofollow noreferrer">1</a>]). This bug will surface as a failure when deploying the nginx container.</p>
</blockquote>
<p>The fix for this is using a version of Kubernetes that does not have the volume mount subpath bug.</p>
|
<p><strong>Scenario:</strong></p>
<p>I need to build a web-app, from which I can run/sop/delete/etc. containers in a cluster. So I installed <code>Kubernetes</code> and tested the API from the console. Everything seems working and looks fine. </p>
<p>Following the Docs, they write about Docker, but do I need it necessarily? </p>
<p>I mean I had to disable <code>Hyper-V</code> to make <code>Minikube</code> work, and after a reboot, Docker (which usually starts at startup) says that "something went wrong.. bla bla" .. but I can create deployments and <code>proxys</code> on <code>Minikube</code>.<br>
This is somehow confusing. </p>
<p>Can someone explain this please for dummies?</p>
| <p>Technically, you need a <em>container runtime</em> which respects <a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/" rel="noreferrer">CRI (Container Runtime Interface)</a>.</p>
<p><a href="https://i.stack.imgur.com/Z7aPG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Z7aPG.png" alt="https://d3vv6lp55qjaqc.cloudfront.net/items/0I3X2U0S0W3r1D1z2O0Q/Image%202016-12-19%20at%2017.13.16.png" /></a></p>
<p>That is why you have <a href="https://github.com/kubernetes-incubator/cri-o" rel="noreferrer">CRI-O</a>, which provides an integration path between OCI conformant runtimes and the kubelet.<br />
See "<a href="https://thenewstack.io/cri-o-project-run-containers-without-docker-reaches-1-0/" rel="noreferrer">CRI-O, the Project to Run Containers without Docker, Reaches 1.0</a>" by <strong><a href="https://twitter.com/HallSd" rel="noreferrer">Susan Hall</a></strong>.</p>
<blockquote>
<p>The project “opens the door for plugging <a href="http://programmableinfrastructure.com/components/container-runtime/" rel="noreferrer">alternative container runtimes</a> in the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">kubelet</a> more easily, instead of relying on the default docker runtime.</p>
<p>Those new runtimes may include virtual machines-based ones, such as <strong>runv</strong> and <strong>Clear Containers</strong>, or standard Linux containers runtimes like <code>rkt</code>,” <a href="http://red.ht/2uJGuQo" rel="noreferrer">Red Hat</a> senior engineer <a href="https://twitter.com/runc0m" rel="noreferrer">Antonio Murdaca</a> wrote on the <a href="https://www.projectatomic.io/blog/2017/02/crio-runtimes/" rel="noreferrer">Project Atomic blog</a>.</p>
</blockquote>
<hr />
<p>But in your case, your issue is to make Minikube work with HyperV: see "<a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="noreferrer">Minikube on Windows 10 with Hyper-V</a>" from <strong><a href="https://twitter.com/JockDaRock" rel="noreferrer">Jock Reed</a></strong>.<br />
The trick is to create a new (External) Virtual network switch, named "Primary Virtual Switch", and to start Minikube with:</p>
<pre><code>minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
</code></pre>
|
<p>I currently have multiple AWS accounts each with it's own Kubernetes cluster. Unfortunately, when the clusters were initially deployed using kops VPCs were created with overlapping CIDR blocks. This normally wouldn't be a problem as each cluster essentially existed in it's own universe.</p>
<p>Things have changed a bit and now we want to implement cross account VPC peering. The idea is users connect over the VPN have have access to all resources through said peering. My understanding is the CIDR block overlap is going to be a major problem when peering is implemented.</p>
<p>It doesn't seem one can just change the CIDR block of the existing cluster. Is my only option to backup and restore the cluster in a new VPC with something like ark? Has anyone gone through a full cluster migration? I'd be curious if there is a better answer.</p>
| <p>Your understanding is correct: with kops, you can't change CIDR blocks of an existing cluster; it's stuck in the VPC in which it was created, and <a href="https://aws.amazon.com/premiumsupport/knowledge-center/vpc-ip-address-range/" rel="nofollow noreferrer">you can't change the CIDR block of a VPC</a>:</p>
<blockquote>
<p>The IP address range of a VPC is made up of the CIDR blocks associated
with it. You select one CIDR block when you create the VPC, and you
can add or remove secondary CIDR blocks later. <strong>The CIDR block that you
add when you create the VPC cannot be changed</strong>, but you can add and
remove secondary CIDR blocks to change the IP address range of the
VPC. (emphasis mine)</p>
</blockquote>
<p>That leads us to the second point: migrating your cluster. This can be broken down into two phases: </p>
<ol>
<li>Migrating the infrastructure managed by <code>kops</code></li>
<li>Migrating the workloads on the cluster </li>
</ol>
<p><strong>1. Migrating the infrastructure managed by <code>kops</code></strong></p>
<p>You will need to migrate (i.e. recreate) the kops cluster itself: the ec2 instances, kops <code>InstanceGroups</code> and <code>Cluster</code> objects, various AWS infrastructure, etc. For that, you can use the <code>kops toolbox template</code> command:</p>
<pre><code>kops toolbox template --values /path/to/values.yaml --template /path/to/cluster/template.yaml > /path/to/output/cluster.yaml
kops create -f /path/to/output/cluster.yaml
</code></pre>
<p>This is a Helm-like tool that allows you to templatize your kops cluster configuration and pass in different <code>values.yaml</code> files. You might want to include this command within a small shell script wrapper or a Makefile to create 1-click cluster deployments to get your k8s cluster infrastructure set up easily and repeatably.</p>
<p>A sample cluster template.yaml file and values.yaml file might look like the following, which includes the specs for the <code>Cluster</code>, and master, worker, and autoscale <code>InstanceGroup</code>s.</p>
<pre><code># template.yaml
{{ $clusterSubdomain := (env "CLUSTER_SUBDOMAIN") }}
{{ $subnetCidr := (env "SUBNET_CIDR") }}
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
name: {{ $clusterSubdomain }}.k8s.example.io
spec:
hooks:
- manifest: |
[Unit]
Description=Create example user
ConditionPathExists=!/home/example/.ssh/authorized_keys
[Service]
Type=oneshot
ExecStart=/bin/sh -c 'useradd example && echo "{{ .examplePublicKey }}" > /home/example/.ssh/authorized_keys'
name: useradd-example.service
roles:
- Node
- Master
- manifest: |
Type=oneshot
ExecStart=/usr/bin/coreos-cloudinit --from-file=/home/core/cloud-config.yaml
name: reboot-window.service
roles:
- Node
- Master
kubeAPIServer:
authorizationRbacSuperUser: admin
featureGates:
TaintBasedEvictions: "true"
kubeControllerManager:
featureGates:
TaintBasedEvictions: "true"
horizontalPodAutoscalerUseRestClients: false
kubeScheduler:
featureGates:
TaintBasedEvictions: "true"
kubelet:
featureGates:
TaintBasedEvictions: "true"
fileAssets:
- content: |
yes
name: docker-1.12
path: /etc/coreos/docker-1.12
roles:
- Node
- Master
- content: |
#cloud-config
coreos:
update:
reboot-strategy: "etcd-lock"
locksmith:
window-start: {{ .locksmith.windowStart }}
window-length: {{ .locksmith.windowLength }}
name: cloud-config.yaml
path: /home/core/cloud-config.yaml
roles:
- Node
- Master
api:
dns: {}
authorization:
rbac: {}
channel: stable
cloudProvider: aws
configBase: s3://my-bucket.example.io/{{ $clusterSubdomain }}.k8s.example.io
etcdClusters:
- etcdMembers:
- instanceGroup: master-{{ .zone }}
name: a
name: main
- etcdMembers:
- instanceGroup: master-{{ .zone }}
name: a
name: events
iam:
allowContainerRegistry: true
legacy: false
kubernetesApiAccess:
- {{ .apiAccessCidr }}
kubernetesVersion: {{ .k8sVersion }}
masterPublicName: api.{{ $clusterSubdomain }}.k8s.example.io
networkCIDR: {{ .vpcCidr }}
networkID: {{ .vpcId }}
networking:
canal: {}
nonMasqueradeCIDR: 100.64.0.0/10
sshAccess:
- {{ .sshAccessCidr }}
subnets:
- cidr: {{ $subnetCidr }}
name: {{ .zone }}
type: Public
zone: {{ .zone }}
topology:
dns:
type: Public
masters: public
nodes: public
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
labels:
kops.k8s.io/cluster: {{ $clusterSubdomain }}.k8s.example.io
name: master-{{ .zone }}
spec:
{{- if .additionalSecurityGroups }}
additionalSecurityGroups:
{{- range .additionalSecurityGroups }}
- {{ . }}
{{- end }}
{{- end }}
image: {{ .image }}
machineType: {{ .awsMachineTypeMaster }}
maxSize: 1
minSize: 1
nodeLabels:
kops.k8s.io/instancegroup: master-{{ .zone }}
role: Master
subnets:
- {{ .zone }}
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
labels:
kops.k8s.io/cluster: {{ $clusterSubdomain }}.k8s.example.io
name: nodes
spec:
{{- if .additionalSecurityGroups }}
additionalSecurityGroups:
{{- range .additionalSecurityGroups }}
- {{ . }}
{{- end }}
{{- end }}
image: {{ .image }}
machineType: {{ .awsMachineTypeNode }}
maxSize: {{ .nodeCount }}
minSize: {{ .nodeCount }}
nodeLabels:
kops.k8s.io/instancegroup: nodes
role: Node
subnets:
- {{ .zone }}
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
name: ag.{{ $clusterSubdomain }}.k8s.example.io
labels:
kops.k8s.io/cluster: {{ $clusterSubdomain }}.k8s.example.io
spec:
{{- if .additionalSecurityGroups }}
additionalSecurityGroups:
{{- range .additionalSecurityGroups }}
- {{ . }}
{{- end }}
{{- end }}
image: {{ .image }}
machineType: {{ .awsMachineTypeAg }}
maxSize: 10
minSize: 1
nodeLabels:
kops.k8s.io/instancegroup: ag.{{ $clusterSubdomain }}.k8s.example.io
role: Node
subnets:
- {{ .zone }}
</code></pre>
<p>And the values.yaml file:</p>
<pre><code># values.yaml:
region: us-west-2
zone: us-west-2a
environment: staging
image: ami-abc123
awsMachineTypeNode: c5.large
awsMachineTypeMaster: m5.xlarge
awsMachineTypeAg: c5.large
nodeCount: "2"
k8sVersion: "1.9.3"
vpcId: vpc-abc123
vpcCidr: 172.23.0.0/16
apiAccessCidr: <e.g. office ip>
sshAccessCidr: <e.g. office ip>
additionalSecurityGroups:
- sg-def234 # kubernetes-standard
- sg-abc123 # example scan engine targets
examplePublicKey: "ssh-rsa ..."
locksmith:
windowStart: Mon 16:00 # 8am Monday PST
windowLength: 4h
</code></pre>
<p><strong>2. Migrating the workloads on the cluster</strong></p>
<p>Not having any hands-on experience with Ark, <a href="https://heptio.github.io/ark/v0.8.1/use-cases.html#cluster-migration" rel="nofollow noreferrer">it does seem to fit your use case well</a>:</p>
<blockquote>
<p>Cluster migration</p>
<p>Using Backups and Restores</p>
<p>Heptio Ark can help you port your resources from one cluster to
another, as long as you point each Ark Config to the same cloud object
storage. In this scenario, we are also assuming that your clusters are
hosted by the same cloud provider. Note that Heptio Ark does not
support the migration of persistent volumes across cloud providers.</p>
<pre><code>(Cluster 1) Assuming you haven’t already been checkpointing your data with the Ark schedule operation, you need to first back up your
</code></pre>
<p>entire cluster (replacing as desired):</p>
<pre><code>ark backup create <BACKUP-NAME>
The default TTL is 30 days (720 hours); you can use the --ttl flag to change this as necessary.
(Cluster 2) Make sure that the persistentVolumeProvider and backupStorageProvider fields in the Ark Config match the ones from
</code></pre>
<p>Cluster 1, so that your new Ark server instance is pointing to the
same bucket.</p>
<pre><code>(Cluster 2) Make sure that the Ark Backup object has been created. Ark resources are synced with the backup files available in cloud
</code></pre>
<p>storage.</p>
<pre><code>(Cluster 2) Once you have confirmed that the right Backup (<BACKUP-NAME>) is now present, you can restore everything with:
ark restore create --from-backup <BACKUP-NAME>
</code></pre>
</blockquote>
<p>Configuring Ark on AWS clusters seems straight-forward enough: <a href="https://github.com/heptio/ark/blob/master/docs/aws-config.md" rel="nofollow noreferrer">https://github.com/heptio/ark/blob/master/docs/aws-config.md</a>. </p>
<p>With some initial setup with the kops toolbox script and Ark configuration, you should have a clean, repeatable way to migrate your cluster and turn your pets into cattle, as the meme goes.</p>
|
<p>I'm initiating a kubernetes cluster with:</p>
<pre><code>kubeadm init --pod-network-cidr=192.168.1.0/16 --apiserver-advertise-address=192.168.0.33
</code></pre>
<p>I'm not too familiar with networking concepts or CIDR, how do I determine an appropriate value for the pod CIDR?</p>
<p>I previously used <code>10.244.0.0/16</code> but that resulted in:</p>
<pre><code>Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "78cf556b2e87e15cc5ec8827ca3a397c16dcfb90f045e225a17028f072db6e5f" network for pod "coredns-78fcdf6894-c7kq2": NetworkPlugin cni failed to set up pod "coredns-78fcdf6894-c7kq2_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.1.1/24
</code></pre>
| <p>Regarding CIDR (<code>Classless Inter-Domain Routing</code>): That is just a notation to define the subnet / range of IP addresses that you want to use in your network.</p>
<p>If you want to use <code>/16</code>, you must be planning the creation of a fairly big Kubernetes cluster since that will give you <code>>65k</code> virtual IP addresses (Note that we are talking about the SDN / Virtual network, i.e., the IPs you will set for your kubernetes PODs, not your actual nodes / hosts).
Here is a CIDR table you can use as reference to decide a more suitable range:
<a href="https://kb.wisc.edu/ns/page.php?id=3493" rel="noreferrer">https://kb.wisc.edu/ns/page.php?id=3493</a></p>
<p>Now, in terms of your failure: that depends on the CNI (<code>Container Networking Interface</code>) plugin you are using in your Kubernetes cluster. If you are using the default one, that would be <code>Kubenet</code>, which has limitations.
More information here:
<a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="noreferrer">https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/</a>
and here:
<a href="https://chrislovecnm.com/kubernetes/cni/choosing-a-cni-provider/" rel="noreferrer">https://chrislovecnm.com/kubernetes/cni/choosing-a-cni-provider/</a></p>
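<p>As a side note on the specific error you pasted ("<code>cni0</code> already has an IP address different from 10.244.1.1/24"): this usually means stale bridge state was left behind by a previous initialization with a different pod CIDR. A commonly used cleanup, assuming a bridge/flannel-style CNI and run on the affected node after <code>kubeadm reset</code>, is:</p>
<pre><code># remove the stale bridge and overlay interfaces plus cached CNI allocations
ip link set cni0 down && ip link delete cni0
ip link set flannel.1 down && ip link delete flannel.1   # if present
rm -rf /var/lib/cni/
systemctl restart kubelet
</code></pre>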
<p>I hope that helps.</p>
|
<p>I'm trying to deploy a simple python app to Google Container Engine:</p>
<p>I have created a cluster then run <code>kubectl create -f deployment.yaml</code>
It has been created a deployment pod on my cluster. After that i have created a service as: <code>kubectl create -f deployment.yaml</code></p>
<blockquote>
<p>Here's my Yaml configurations:</p>
<p><strong>pod.yaml</strong>:</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-app
spec:
containers:
- name: test-ctr
image: arycloud/flask-svc
ports:
- containerPort: 5000
</code></pre>
<blockquote>
<p>Here's my Dockerfile:</p>
</blockquote>
<pre><code>FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./app.py
</code></pre>
<blockquote>
<p><strong>deployment.yaml:</strong></p>
</blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: test-app
name: test-app
spec:
replicas: 1
template:
metadata:
labels:
app: test-app
name: test-app
spec:
containers:
- name: test-app
image: arycloud/flask-svc
resources:
requests:
cpu: "100m"
imagePullPolicy: Always
ports:
- containerPort: 8080
</code></pre>
<blockquote>
<p><strong>service.yaml:</strong></p>
</blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
type: LoadBalancer
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
nodePort: 32000
selector:
app: test-app
</code></pre>
<blockquote>
<p><strong>Ingress</strong></p>
</blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
</code></pre>
<p>It creates a LoadBalancer and provides an external IP, when I open the IP it returns <code>Connection Refused error</code></p>
<p>What's going wrong?</p>
<p>Help me, please!</p>
<p>Thank You,
Abdul</p>
| <p>You can first check whether the pod itself is working with <code>curl podip:port</code>; in your scenario that would be <code>curl podip:8080</code>. If that does not work, check whether the process in the image you are using is actually bound to port 8080.</p>
<p>If the pod responds, then try the service with <code>curl svcip:svcport</code>; in your scenario that would be <code>curl svcip:80</code>. If that does not work, it is a Kubernetes networking configuration issue.</p>
<p>If the service also responds, then the issue is happening at the ingress layer.</p>
<p>In theory, it should work if everything matches the k8s rules.</p>
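<p>Concretely, those checks could look like this (the IPs are placeholders taken from the first two commands):</p>
<pre><code>kubectl get pods -o wide          # note the pod IP
kubectl get svc test-app          # note the cluster IP / external IP
# from a node or another pod inside the cluster:
curl http://<pod-ip>:8080
curl http://<service-cluster-ip>:80
</code></pre>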
|
<p>Done according to <a href="https://kubernetes.io/docs/tasks/administer-cluster/setup-ha-etcd-with-kubeadm/" rel="nofollow noreferrer">this article</a>.</p>
<p>I installed Kubernetes. Then installed etcd cluster that works via HTTPS and listens to the localhost interface only (reachable from inside any Docker container).
Now I need persistent volume to install DB cluster. Chose Portworx. It generated daemonset YAML-config. Here is the description of installed daemonset:</p>
<pre><code># kubectl describe daemonset portworx --namespace=kube-system
Name: portworx
Selector: name=portworx
Node-Selector: <none>
Labels: name=portworx
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"extensions/v1beta1","kind":"DaemonSet","metadata":{"annotations":{"portworx.com/install-source":"http://install.portworx.com/?c=bp_clust...
portworx.com/install-source=http://install.portworx.com/?c=bp_cluster&k=etcd:https://127.0.0.1:2379&kbver=1.11.0&s=/dev/xvda1&d=ens3&m=ens3&stork=false&ca=/etc/kubernetes/pki/etcd/ca.crt%%20&cert=/etc...
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
Number of Nodes Scheduled with Up-to-date Pods: 2
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: name=portworx
Service Account: px-account
Containers:
portworx:
Image: portworx/oci-monitor:1.3.4
Port: <none>
Host Port: <none>
Args:
-k
etcd:https://127.0.0.1:2379
-c
bp_cluster
-d
ens3
-m
ens3
-s
/dev/xvda1
-ca
/etc/kubernetes/pki/etcd/ca.crt
-cert
/etc/kubernetes/pki/etcd/server.crt
-key
/etc/kubernetes/pki/etcd/server.key
-x
kubernetes
Liveness: http-get http://127.0.0.1:9001/status delay=840s timeout=1s period=30s #success=1 #failure=3
Readiness: http-get http://127.0.0.1:9015/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
PX_TEMPLATE_VERSION: v3
Mounts:
/etc/pwx from etcpwx (rw)
/etc/systemd/system from sysdmount (rw)
/host_proc/1/ns from proc1nsmount (rw)
/opt/pwx from optpwx (rw)
/var/run/dbus from dbusmount (rw)
/var/run/docker.sock from dockersock (rw)
Volumes:
dockersock:
Type: HostPath (bare host directory volume)
Path: /var/run/docker.sock
HostPathType:
etcpwx:
Type: HostPath (bare host directory volume)
Path: /etc/pwx
HostPathType:
optpwx:
Type: HostPath (bare host directory volume)
Path: /opt/pwx
HostPathType:
proc1nsmount:
Type: HostPath (bare host directory volume)
Path: /proc/1/ns
HostPathType:
sysdmount:
Type: HostPath (bare host directory volume)
Path: /etc/systemd/system
HostPathType:
dbusmount:
Type: HostPath (bare host directory volume)
Path: /var/run/dbus
HostPathType:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 22m daemonset-controller Created pod: portworx-67w7m
Normal SuccessfulCreate 22m daemonset-controller Created pod: portworx-mxtr8
</code></pre>
<p>But in the log of portworx I see that it is trying to connect to etcd via plain HTTP and obviously get error because cannot interpreter the response wrapped to SSL:</p>
<pre><code># kubectl logs -f pod/portworx-67w7m --namespace=kube-system
<some logs are erased du to lack of relevance>
Jul 02 13:19:25 ip-172-31-18-91 px-runc[25417]: time="2018-07-02T13:19:25Z" level=error msg="Could not load config file /etc/pwx/config.json due to: Error in obtaining etcd version: Get http://127.0.0.1:2379/version: net/http: HTTP/1.x transport connection broken: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\". Please visit http://docs.portworx.com for more information."
Jul 02 13:19:25 ip-172-31-18-91 px-runc[25417]: PXPROCS: px daemon exited with code: 1
Jul 02 13:19:25 ip-172-31-18-91 px-runc[25417]: 2107
Jul 02 13:19:25 ip-172-31-18-91 px-runc[25417]: 2018-07-02 13:19:25,474 INFO exited: pxdaemon (exit status 1; not expected)
</code></pre>
<p>What I am doing wrong?</p>
| <p>I have no idea why they didn't surface a "cannot read the <code>-cert</code> file" error, but the problem is that you specified <code>/etc/kubernetes/pki/etcd/server.crt</code> in the options without volume mounting <code>/etc/kubernetes/pki</code> into the container. For obvious reasons, kubernetes will not <em>automatically</em> volume mount its pki directory, so you must specify it.</p>
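<p>A rough sketch of the extra hostPath volume and mount you would add to the DaemonSet pod spec, so that the paths passed via <code>-ca</code>/<code>-cert</code>/<code>-key</code> actually exist inside the container:</p>
<pre><code>spec:
  volumes:
  - name: etcdcerts
    hostPath:
      path: /etc/kubernetes/pki/etcd
  containers:
  - name: portworx
    volumeMounts:
    - name: etcdcerts
      mountPath: /etc/kubernetes/pki/etcd
      readOnly: true
</code></pre>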
<p>If that <code>DaemonSet</code> was generated for you (as it appears based on the annotation), then what happened is that <em>they</em> are <a href="https://docs.portworx.com/scheduler/kubernetes/etcd-certs-using-secrets.html#edit-portworx-spec" rel="nofollow noreferrer">expecting the certs to live in <code>/etc/pwx/etcdcerts</code></a> (it's in their <a href="https://docs.portworx.com/scheduler/kubernetes/install.html#secure-etcd-and-certificates" rel="nofollow noreferrer">manual provisioning docs</a> also), so when you provided a non-<code>/etc</code> path, the two worlds collided.</p>
|
<p>Having my cluster up and running on AWS EKS, I'm finding trouble running <code>helm init</code> with the following error:</p>
<pre><code>$ helm init --service-account tiller --upgrade
Error: error installing: deployments.extensions is forbidden: User "system:anonymous" cannot create deployments.extensions in the namespace "kube-system"
</code></pre>
<p><code>kubectl</code> works properly (object retrieval, creation and cluster administration), authenticating and authorizing correctly by running <code>heptio-authenticator-aws</code> at connection time ( with an <code>exec</code> section in the kubectl config).</p>
<p>In order to prepare the cluster for helm, I created the service account and role binding as specified in the helm <a href="https://github.com/kubernetes/helm/blob/master/docs/rbac.md" rel="nofollow noreferrer">docs</a>.</p>
<p>I've heard of people having helm running on EKS, and I'm guessing they're skipping the <code>exec</code> section of the kubectl config by hardcoding the token... I'd like to avoid that!</p>
<p>Any ideas on how to fix this? My guess is that it is related to helm not being able to execute <code>heptio-authenticator-aws</code> properly</p>
| <p>I was running <code>helm</code> version 2.8.2 when I got this error; upgrading to v2.9.1 fixed it!</p>
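<p>If it helps, one way to swap in the newer v2 client (the release URL below was the standard Helm download location at the time; adjust if it has moved) and refresh Tiller:</p>
<pre><code>curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
tar -zxvf helm-v2.9.1-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm init --service-account tiller --upgrade
helm version
</code></pre>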
|
<p>We are now on our journey to break our monolith (on-prem pkg (rpm/ova)) into services (dockers).</p>
<p>In the process we are evaluating envoy/istio as our communication and security layer; it looks great when running as a sidecar in k8s, or with each service on a separate machine.</p>
<p>As we are going to deliver several services within one machine, and can't deliver them within k8s, I'm not sure if we can use envoy. I didn't find any reference on using envoy in other ways; are there additional deployment methods I can use?</p>
| <p>You can <a href="https://istio.io/latest/docs/examples/virtual-machines/" rel="nofollow noreferrer">run part of your services on Kubernetes and part on VMs</a>.</p>
|
<p>I have been trying to follow the getting started guide to EKS.
When I tried to call kubectl get service I got the message: error: You must be logged in to the server (Unauthorized)
Here is what I did:<br>
1. Created the EKS cluster.<br>
2. Created the config file as follows:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
server: https://*********.yl4.us-west-2.eks.amazonaws.com
certificate-authority-data: *********
name: *********
contexts:
- context:
cluster: *********
user: aws
name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
command: heptio-authenticator-aws
args:
- "token"
- "-i"
- "*********"
- "-r"
- "arn:aws:iam::*****:role/******"
</code></pre>
<ol start="3">
<li>Downloaded and installed latest aws cli</li>
<li>Ran aws configure and set the credentials for my IAM user and the region as us-west-2</li>
<li>Added a policy to the IAM user for sts:AssumeRole for the EKS role and set it up as a trusted relationship</li>
<li>Setup kubectl to use the config file</li>
</ol>
<p>I can get a token when I run heptio-authenticator-aws token -r arn:aws:iam::**********:role/********* -i my-cluster-ame
However when I try to access the cluster I keep receiving error: You must be logged in to the server (Unauthorized)</p>
<p>Any idea how to fix this issue?</p>
| <blockquote>
<p>When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator. Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.</p>
</blockquote>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-create-cluster" rel="noreferrer">eks-docs</a></p>
<p>So to add access to other <em>aws</em> users, first
you must edit ConfigMap to add an IAM user or role to an Amazon EKS cluster.</p>
<p>You can edit the ConfigMap by executing:
<code>kubectl edit -n kube-system configmap/aws-auth</code>, after which you will be presented with an editor in which you can map new users.</p>
<pre><code>apiVersion: v1
data:
mapRoles: |
- rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
mapUsers: |
- userarn: arn:aws:iam::111122223333:user/ops-user
username: ops-user
groups:
- system:masters
mapAccounts: |
- "111122223333"
</code></pre>
<p>Pay close attention to the <code>mapUsers</code> section, where you add <em>ops-user</em>, and to the <code>mapAccounts</code> field, which maps the <em>AWS</em> account to usernames on the Kubernetes cluster.</p>
<blockquote>
<p>However, no permissions are provided in RBAC by this action alone; you must still create role bindings in your cluster to provide these entities permissions.</p>
</blockquote>
<p>As the amazon documentation(<a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="noreferrer">iam-docs</a>) states you need to create a role binding on the kubernetes cluster for the user specified in the ConfigMap. You can do that by executing following command (<a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-clusterrolebinding" rel="noreferrer">kub-docs</a>):</p>
<p><code>kubectl create clusterrolebinding ops-user-cluster-admin-binding --clusterrole=cluster-admin --user=ops-user</code></p>
<p>which grants the cluster-admin <code>ClusterRole</code> to a user named <em>ops-user</em> across the entire cluster.</p>
|
<p>I'd like to know if I could set up that a pod restarts when a secret is changed?</p>
<p>I'm using openshift 3.7.</p>
| <p>There is currently no mechanism for this. You could mount the secret as a volume and watch the file for changes and kill the process when that happens, requiring you to bake in specific logic to your application. This is probably not the best idea.</p>
<p>Alternatively you could write a controller as a sidecar and watch for changes on the resource or take a look at <a href="https://github.com/mfojtik/k8s-trigger-controller" rel="nofollow noreferrer">https://github.com/mfojtik/k8s-trigger-controller</a></p>
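<p>A lighter-weight workaround, if you can trigger it from whatever process updates the secret, is to patch a checksum of the secret into the pod template; any change to that annotation causes a rolling restart. This is only a sketch with placeholder names (<code>my-secret</code>, <code>my-app</code>, a plain Deployment); on OpenShift 3.x the same idea works with <code>oc patch</code> against a DeploymentConfig:</p>
<pre><code># recompute the hash after updating the secret, then patch it into the deployment
HASH=$(kubectl get secret my-secret -o yaml | sha256sum | cut -d' ' -f1)
kubectl patch deployment my-app -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secret-checksum\":\"$HASH\"}}}}}"
</code></pre>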
|
<p>I am provisioning a Kubernetes cluster from scratch (reasons). Its a local setup inside VM, and everything is fine, except the <code>master</code> node is created as schedulable.</p>
<p>I have tried assigning the <code>master</code> label and appropriate taint to node by passing required arguments to kubelet binary (doesn't solve the problem):</p>
<pre><code>--register-with-taints=node-role.kubernetes.io/master=:NoSchedule
--node-labels=master,node-role.kubernetes.io/master=""
</code></pre>
<p>Here is the output from <code>kubectl describe <node></code>:</p>
<pre><code>Name: myNodeName
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=myHostName
master=
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Tue, 03 Jul 2018 05:56:53 +0000
Taints: node-role.kubernetes.io/master=true:NoSchedule
Unschedulable: false
</code></pre>
<p>How do I set this node as Unschedulable? The documentation doesn't really specify this (feel free to direct to appropriate part of the documentation, in case I missed something).</p>
<p>PS: The labels/taints mentioned above were present before the node was created/registered.</p>
| <p>Taints let you mark a node so that the scheduler will not use it for Pods, via the <code>NoSchedule</code> effect; they carry special values that the Kubernetes Scheduler evaluates during the planning step. By default, Pods cannot be spawned on tainted nodes until you add <code>tolerations</code> matching those taints, which allow the scheduler to place pods there. Therefore, as per your node description, you have successfully registered this node with <code>NoSchedule</code>, which means Pods will not be scheduled on it.</p>
<blockquote>
<p>Taints: node-role.kubernetes.io/master=true:NoSchedule</p>
</blockquote>
<p>Alternatively, you can use <code>kubectl cordon NODE</code> command to mark the node as <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="noreferrer">unschedulable</a>; therefore it will disable scheduling for the node at all, so Kubernetes Scheduler will just ignore it in the process of planning workload, and it reflects in your node configuration like:</p>
<blockquote>
<p>Unschedulable: true</p>
</blockquote>
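<p>For completeness, a short verification sketch using the node name from your description (adjust it to your own node):</p>
<pre><code>kubectl cordon myNodeName                                         # marks the node unschedulable
kubectl describe node myNodeName | grep -E 'Taints|Unschedulable' # should show the taint and Unschedulable: true
</code></pre>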
|
<p>I'm using
<a href="https://github.com/Yolean/kubernetes-kafka" rel="noreferrer">https://github.com/Yolean/kubernetes-kafka</a>
to run Kafka on Kubernetes (minikube).
I exposed an outside port and successfully used producers and consumers from outside the cluster.</p>
<pre><code>➜ ~ kubectl get svc --namespace kafka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
AGE
bootstrap ClusterIP 10.108.21.84 <none> 9092/TCP
1h
broker ClusterIP None <none> 9092/TCP
1h
outside-0 NodePort 10.99.182.13 <none> 32400:32400/TCP
1h
outside-1 NodePort 10.108.10.223 <none> 32401:32401/TCP
1h
outside-2 NodePort 10.101.155.122 <none> 32402:32402/TCP
1h
pzoo ClusterIP None <none> 2888/TCP,3888/TCP
1h
zoo ClusterIP None <none> 2888/TCP,3888/TCP
1h
zookeeper ClusterIP 10.97.17.36 <none> 2181/TCP
1h
</code></pre>
<p>python producer</p>
<pre><code>from kafka import KafkaConsumer, KafkaProducer
KAFKA_TOPIC = 'demo'
KAFKA_BROKERS = '192.168.99.100:32400' # see step 1
producer = KafkaProducer(bootstrap_servers=KAFKA_BROKERS)
messages = [b'hello kafka', b'Falanga', b'3 test messages']
for m in messages:
print(f"sending: {m}")
producer.send(KAFKA_TOPIC, m)
producer.flush()
</code></pre>
<p>Simple consumer</p>
<pre><code>#!/usr/bin/env python
from kafka import KafkaConsumer
KAFKA_TOPIC = 'demo'
KAFKA_BROKERS = '192.168.99.100:32400' # see step 1
consumer = KafkaConsumer(KAFKA_TOPIC, bootstrap_servers=KAFKA_BROKERS)
for message in consumer:
print(f"message is of type: {type(message)}")
print(message)
print('yo')
consumer.subscribe([KAFKA_TOPIC])
</code></pre>
<p>How do I access the service from pods in the 'default' namespace by name, without using the outward-facing URL? </p>
| <p>You can access the Service from a pod in the following ways. </p>
<ul>
<li><p>Using the env variables, </p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services</a></p></li>
<li><p>Using the internal service DNS name,
i.e. <code>YOURSERVICENAME.NAMESPACE.svc.cluster.local</code> (see the sketch after this list)</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services</a></p></li>
</ul>
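<p>Applied to the services from your <code>kubectl get svc --namespace kafka</code> output, a pod in the <code>default</code> namespace can point its client at the internal DNS name instead of the NodePort URL. A hedged sketch of your producer, assuming the in-cluster listener is the plain 9092 port on the <code>bootstrap</code> service:</p>
<pre><code>from kafka import KafkaProducer

KAFKA_TOPIC = 'demo'
# service "bootstrap" in namespace "kafka", resolved by cluster DNS
KAFKA_BROKERS = 'bootstrap.kafka.svc.cluster.local:9092'

producer = KafkaProducer(bootstrap_servers=KAFKA_BROKERS)
producer.send(KAFKA_TOPIC, b'hello from inside the cluster')
producer.flush()
</code></pre>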
|
<p>I am trying to remain on the free tier of Google Cloud Platform, which only permits 3 nodes and 30 GB of storage; when the cluster is created, each node gets its own 10 GB disk. </p>
<p>And when I tried to mount a PersistentVolume and Claim onto the existing disks, the error shows:
<code>
Attach failed for volume "myapp-pv" : googleapi: Error 400: The disk resource 'projects/myapp-dev/zones/us-central1-a/disks/gke-myapp-dev-clus-default-pool-64e30c4b-dvkc' is already being used by 'projects/myapp-dev/zones/us-central1-a/instances/gke-myapp-dev-clus-default-pool-64e30c4b-dvkc
</code>
The working solution for me is to create another disk, but the problem is that it takes me out of the free tier. How can we stay in the free tier without creating another persistentDisk in GCP?</p>
| <blockquote>
<p>And when I tried to mount a PersistentVolume and Claim onto the existing disks, the error shows </p>
</blockquote>
<p>This error is happening because of <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">this constraint</a> of PV on GCE:</p>
<pre><code>Important! A volume can only be mounted using one access mode at a time,
even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce
by a single node or ReadOnlyMany by many nodes, but not at the same time.
</code></pre>
<p>The table in the link above shows that a GCEPersistentDisk can't be mounted as <code>ReadWriteMany</code>, so if you need to connect it that way you have to use some other volume plugin.</p>
<blockquote>
<p>How can we stay in the free tier without creating another persistentDisk in GCP?</p>
</blockquote>
<p>Just some thoughts... With the free tier you are limited in the number of nodes and the disk space available:</p>
<ul>
<li>You can always 'simulate' <code>ReadWriteMany</code> with the NFS volume plugin, for example (installing your own NFS provisioner), provided your use case does not exclude NFS. The downside is that you need to install the NFS provisioner (squeeze it into your capacity) and it is not really well suited for fast I/O (databases and the like).</li>
<li>You can use <code>hostPath</code> on each of the nodes and manually juggle pods around, but that is prone to data loss and not really a proper Kubernetes approach to PV handling (see the sketch after this list). It is something to consider if you need fast I/O (you are testing with databases), and a proper backup should be in place to avoid data loss if a node dies.</li>
</ul>
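<p>A minimal sketch of the <code>hostPath</code> option, with placeholder name, size and path; it ties the data to a single node and is only suitable for testing:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-hostpath-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/myapp   # directory on the node that backs the volume
</code></pre>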
|
<p>Previously I had one single VM (CentOS 7.4, hostname kube-2.novalocal, IP <code>172.50.10.10</code>). I installed both the master and the kubelet on it and I could access my ingress at <code>172.50.10.10/uaa/login</code>. Inside the cluster I use ClusterIP, and I deployed ingress-nginx as a NodePort service. Since the ingress does redirects/rewrites, I set the nodePort to 80 so that the port can be omitted from the URL. The service URL is <code>http://172.50.10.10/uaa/login</code>, and it works fine.</p>
<p>Now I am adding two nodes (kube-1.novalocal/172.50.10.1 and kube-3.novalocal/172.50.10.4). I can see that Kubernetes deployed the ingress on kube-3.novalocal, and it restarts frequently, almost every minute. I also do not know the ingress service URL. Is it <code>http://kube-2.novalocal/uaa/login</code> or <code>http://kube-3.novalocal/uaa/login</code>? And why does it restart so frequently?</p>
<p>I put all related yaml files, log file, console commands output and dashboard information here.</p>
<pre><code>[centos@kube-2 ingress]$ sudo kubectl get po
NAME READY STATUS RESTARTS AGE
gearbox-rack-api-gateway 1/1 Running 0 15h
gearbox-rack-config-server 1/1 Running 0 15h
gearbox-rack-eureka-server 1/1 Running 0 15h
gearbox-rack-rabbitmq 1/1 Running 0 15h
gearbox-rack-redis 1/1 Running 0 15h
gearbox-rack-uaa-service 1/1 Running 0 15h
gearbox-rack-zipkin-server 1/1 Running 0 15h
ingress-nginx-5c6d78668c-brlsv 1/1 Running 279 15h
nginx-default-backend-6647766887-nbwhl 1/1 Running 0 15h
</code></pre>
<p>Access ingress url in kube-3.novalocal(172.50.10.4):</p>
<pre><code>[centos@kube-2 ingress]$ curl http://172.50.10.4/uaa/login
curl: (7) Failed connect to 172.50.10.4:80; Connection refused
</code></pre>
<p>ingress-nginx logs:</p>
<pre><code>[centos@kube-2 ingress]$ sudo kubectl logs ingress-nginx-5c6d78668c-frb2r
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.15.0
Build: git-df61bd7
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
W0703 02:16:35.966965 7 client_config.go:533] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0703 02:16:35.967483 7 main.go:158] Creating API client for https://10.96.0.1:443
</code></pre>
<p>Dashborad images is as follows:</p>
<p><a href="https://i.stack.imgur.com/vqif2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vqif2.jpg" alt="enter image description here"></a></p>
<p>ingress-nginx-res.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host:
http:
paths:
- path: /
backend:
serviceName: gearbox-rack-api-gateway
servicePort: 5555
</code></pre>
<p>ingress-nginx-ctl.yaml</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
spec:
type: NodePort
selector:
app: ingress-nginx
ports:
- name: http
port: 80
nodePort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: ingress-nginx
spec:
replicas: 1
template:
metadata:
labels:
app: ingress-nginx
spec:
terminationGracePeriodSeconds: 60
serviceAccount: lb
containers:
- image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
name: ingress-nginx
imagePullPolicy: Always
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
</code></pre>
<p>kubeadm.yaml</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
apiServerExtraArgs:
service-node-port-range: 80-32767
networking:
podSubnet: 192.168.0.0/16
kubernetesVersion: v1.10.3
featureGates:
CoreDNS: true
</code></pre>
<p>=================================================</p>
<h1>edition two</h1>
<p>The ingress-nginx controller has been updated to 0.16.2 with the same deployment as before, and it continues to restart almost every two minutes.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
ingress-nginx-59b74f9684-lgm2k 0/1 CrashLoopBackOff 9 20m 192.168.179.5 kube-3.novalocal
</code></pre>
<p><a href="https://i.stack.imgur.com/xkzV3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xkzV3.jpg" alt="enter image description here"></a></p>
| <p>A NodePort service is exposed on every node in the cluster, so you should be able to use both the <a href="http://kube-2.novalocal/uaa/login" rel="nofollow noreferrer">http://kube-2.novalocal/uaa/login</a> and the <a href="http://kube-3.novalocal/uaa/login" rel="nofollow noreferrer">http://kube-3.novalocal/uaa/login</a>.</p>
<p>You can find more information about NodePort here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</a>
"NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service from outside the cluster by requesting <code><NodeIP>:<NodePort></code>."</p>
<p>Regarding your ingress-nginx frequent restarts: Try to upgrade your nginx controller to the latest version and come back with the results. You can find it here: <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p>
<p>Also, take a look at this article with the similar issue: <a href="https://github.com/kubernetes/ingress-nginx/issues/2450" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/2450</a></p>
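<p>In the meantime, a few hedged debugging steps (pod name taken from your second edit; adjust as needed) to see whether the restarts come from a failing liveness probe or from the controller crashing on its own:</p>
<pre><code>kubectl describe pod ingress-nginx-59b74f9684-lgm2k       # check the Events section for probe failures
kubectl logs ingress-nginx-59b74f9684-lgm2k --previous    # logs of the last crashed attempt
kubectl get events --sort-by=.metadata.creationTimestamp  # cluster-wide view of recent events
</code></pre>
<p>If the liveness probe on port 10254 is what kills the container, that often points to the controller not being able to reach the API server from that node rather than to the ingress configuration itself.</p>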
|
<pre><code>root@master2:/home/osboxes# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default hello-kubernetes-55857678b4-4xbgd 1/1 Running 1 18h 10.244.1.138 node
default hello-kubernetes-55857678b4-gtvn4 1/1 Running 1 18h 10.244.1.139 node
default hello-kubernetes-55857678b4-wttht 1/1 Running 1 18h 10.244.1.140 node
kube-system coredns-78fcdf6894-s4l8n 1/1 Running 1 18h 10.244.0.14 master2
kube-system coredns-78fcdf6894-tfjps 1/1 Running 1 18h 10.244.0.15 master2
kube-system etcd-master2 1/1 Running 1 18h 10.0.2.15 master2
kube-system kube-apiserver-master2 1/1 Running 1 18h 10.0.2.15 master2
kube-system kube-controller-manager-master2 1/1 Running 1 18h 10.0.2.15 master2
kube-system kube-flannel-ds-4br99 1/1 Running 1 18h 10.0.2.15 node
kube-system kube-flannel-ds-6c2x9 1/1 Running 1 18h 10.0.2.15 master2
kube-system kube-proxy-mf9fg 1/1 Running 1 18h 10.0.2.15 node
kube-system kube-proxy-xldph 1/1 Running 1 18h 10.0.2.15 master2
kube-system kube-scheduler-master2 1/1 Running 1 18h 10.0.2.15 master2
root@master2:/home/osboxes# kubectl exec -it hello-kubernetes-55857678b4-4xbgd sh
error: unable to upgrade connection: pod does not exist
</code></pre>
<p>What does this error indicate? I am able to <code>docker exec ...</code> into the container from the node.</p>
<p>I have set this cluster up myself using kubeadm.</p>
<p>Verbose:</p>
<pre><code>kubectl -v=10 exec -it hello-kubernetes-55857678b4-4xbgd sh
I0703 08:44:01.250752 10307 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0703 08:44:01.252809 10307 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0703 08:44:01.254167 10307 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0703 08:44:01.255808 10307 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd'
I0703 08:44:01.272882 10307 round_trippers.go:405] GET https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd 200 OK in 16 milliseconds
I0703 08:44:01.273262 10307 round_trippers.go:411] Response Headers:
I0703 08:44:01.273485 10307 round_trippers.go:414] Date: Tue, 03 Jul 2018 12:44:01 GMT
I0703 08:44:01.273692 10307 round_trippers.go:414] Content-Type: application/json
I0703 08:44:01.273967 10307 round_trippers.go:414] Content-Length: 2725
I0703 08:44:01.275168 10307 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"hello-kubernetes-55857678b4-4xbgd","generateName":"hello-kubernetes-55857678b4-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd","uid":"002c6e23-7e23-11e8-b38f-0800273a59cb","resourceVersion":"5725","creationTimestamp":"2018-07-02T18:09:02Z","labels":{"app":"hello-kubernetes","pod-template-hash":"1141323460"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"hello-kubernetes-55857678b4","uid":"001893c6-7e23-11e8-b38f-0800273a59cb","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-n9t9d","secret":{"secretName":"default-token-n9t9d","defaultMode":420}}],"containers":[{"name":"hello-kubernetes","image":"paulbouwer/hello-kubernetes:1.4","ports":[{"containerPort":8080,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-n9t9d","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-02T18:09:02Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-03T12:32:26Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":null},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-02T18:09:02Z"}],"hostIP":"10.0.2.15","podIP":"10.244.1.138","startTime":"2018-07-02T18:09:02Z","containerStatuses":[{"name":"hello-kubernetes","state":{"running":{"startedAt":"2018-07-03T12:32:26Z"}},"lastState":{"terminated":{"exitCode":255,"reason":"Error","startedAt":"2018-07-02T18:09:21Z","finishedAt":"2018-07-03T10:10:58Z","containerID":"docker://e82d0338a51aef35869b755b8020704367859855f043d80897e48f4e9c7da869"}},"ready":true,"restartCount":1,"image":"paulbouwer/hello-kubernetes:1.4","imageID":"docker-pullable://paulbouwer/hello-kubernetes@sha256:a9fc93acfbc734827a72107bf7f759745a66ea61758863c094c36e5f4f4b810b","containerID":"docker://4a7e472b35b776700e61605826655950501d114ce182dc178d79d0f50775281d"}],"qosClass":"BestEffort"}}
I0703 08:44:01.290627 10307 round_trippers.go:386] curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd/exec?command=sh&container=hello-kubernetes&container=hello-kubernetes&stdin=true&stdout=true&tty=true'
I0703 08:44:01.317914 10307 round_trippers.go:405] POST https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd/exec?command=sh&container=hello-kubernetes&container=hello-kubernetes&stdin=true&stdout=true&tty=true 404 Not Found in 26 milliseconds
I0703 08:44:01.317938 10307 round_trippers.go:411] Response Headers:
I0703 08:44:01.317944 10307 round_trippers.go:414] Date: Tue, 03 Jul 2018 12:44:01 GMT
I0703 08:44:01.317948 10307 round_trippers.go:414] Content-Length: 18
I0703 08:44:01.317951 10307 round_trippers.go:414] Content-Type: text/plain; charset=utf-8
F0703 08:44:01.318118 10307 helpers.go:119] error: unable to upgrade connection: pod does not exist
</code></pre>
| <p>It seems to have used the wrong network interface.</p>
<p>I had to manually set <code>KUBELET_EXTRA_ARGS=--node-ip=ABCXYZ</code> in <code>/etc/default/kubernetes</code> on both the node and master (replace <code>abcxyz</code> with the appropriate IP address).</p>
<p>You can check they have the correct ip address with:</p>
<pre><code>kubectl get nodes -o wide
</code></pre>
<p>Which outputs:</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master2 Ready master 19h v1.11.0 192.168.0.33 <none> Ubuntu 17.10 4.13.0-46-generic docker://1.13.1
node Ready <none> 18h v1.11.0 192.168.0.34 <none> Ubuntu 17.10 4.13.0-16-generic docker://1.13.1
</code></pre>
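<p>One practical note (assuming kubelet runs under systemd, as it does on a default kubeadm install): the new <code>--node-ip</code> value only takes effect after the kubelet is restarted on each machine:</p>
<pre><code>sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>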
|
<p>I log in successfully, but cannot pull:</p>
<pre><code>PS C:\Users\Me> docker login tlk8s.azurecr.io
Username (myUsername): Password:
Login Succeeded
PS C:\Users\Me> docker pull tlk8s.azurecr.io/devicecloudwebapi:v1
Error response from daemon: pull access denied for tlk8s.azurecr.io/devicecloudwebapi, repository does not exist or may require 'docker login'
</code></pre>
<p>But it seems like I'm naming it correctly (see screenshot):</p>
<p><a href="https://i.stack.imgur.com/aDUFj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aDUFj.png" alt="enter image description here"></a></p>
<p>What could I be doing wrong? How do I check to see if my service principal has the correct permissions? </p>
<p>Here's the result of calling Get-AzureRmRoleAssignment:</p>
<p><a href="https://i.stack.imgur.com/QGqYH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QGqYH.png" alt="enter image description here"></a></p>
| <p>Per documentation below, the Service Principal account must have at least <strong>Reader</strong> role to be able to pull images from the Azure Container Registry.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication</a></p>
<p>You can run the following PS script to <strong>list the existing role assignments</strong> for your Service Principal account.</p>
<p><a href="https://github.com/evandropaula/Azure/blob/master/ServicePrincipal/PS/List-ServicePrincipalRoleAssignments.ps1" rel="nofollow noreferrer">https://github.com/evandropaula/Azure/blob/master/ServicePrincipal/PS/List-ServicePrincipalRoleAssignments.ps1</a></p>
<p>You can run the following PS script to <strong>assign a role</strong> (e.g. Reader) to your Service Principal account.</p>
<p><a href="https://github.com/evandropaula/Azure/blob/master/ServicePrincipal/PS/Assign-ServicePrincipalRole.ps1" rel="nofollow noreferrer">https://github.com/evandropaula/Azure/blob/master/ServicePrincipal/PS/Assign-ServicePrincipalRole.ps1</a></p>
|
<p>I'm trying to get my dokerized Rails app to run on Kubernetes hosted at GCP.</p>
<pre><code>kubectl apply -f k8s/webshop.yml
kubectl get pods
NAME READY STATUS RESTARTS AGE
shop-lcvxc 2/3 CrashLoopBackOff 23 1h
</code></pre>
<p>The pod is created, but the app will not start. Investigating further reveals that it has tried to start Rails (Puma) twice.
Any idea why this happens?</p>
<h1>Logs</h1>
<p><strong>$ kubectl logs shop-lcvxc -c webshop</strong></p>
<pre><code>=> Booting Puma
=> Rails 5.2.0 application starting in development
=> Run `rails server -h` for more startup options
initialize PushNotifications
Puma starting in single mode...
* Version 3.11.4 (ruby 2.4.4-p296), codename: Love Song
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3001
Exiting
/usr/local/bundle/gems/puma-3.11.4/lib/puma/binder.rb:270:in `initialize':
Address already in use - bind(2) for "0.0.0.0" port 3001 (Errno::EADDRINUSE)
</code></pre>
<p><strong>$ kubectl logs shop-lcvxc -c app</strong></p>
<pre><code>=> Booting Puma
=> Rails 5.2.0 application starting in development
=> Run `rails server -h` for more startup options
initialize PushNotifications
Puma starting in single mode...
* Version 3.11.4 (ruby 2.4.4-p296), codename: Love Song
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3001
Use Ctrl-C to stop
</code></pre>
<p><strong>k8s/webshop.yml</strong></p>
<pre><code> 1 # from https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine
2 apiVersion: extensions/v1beta1
3 kind: Deployment
4 metadata:
5 name: webshop
6 labels:
7 app: webshop
8
9 spec:
10 template:
11 metadata:
12 labels:
13 app: webshop
14 spec:
15 containers:
16 - name: app
17 image: eu.gcr.io/company/webshop:latest
18 ports:
19 - containerPort: 3000
20 # The following environment variables will contain the database host,
21 # user and password to connect to the PostgreSQL instance.
22 env:
23 - name: POSTGRES_HOST
24 value: 127.0.0.1:5432
25 - name: POSTGRES_DB
26 value: webshop-staging
27 # [START cloudsql_secrets]
28 - name: POSTGRES_USER
29 valueFrom:
30 secretKeyRef:
31 name: cloudsql-db-credentials
32 key: username
33 - name: POSTGRES_PASSWORD
34 valueFrom:
35 secretKeyRef:
36 name: cloudsql-db-credentials
37 key: password
38 # [END cloudsql_secrets]
39
40 # [START proxy_container]
41 - name: cloudsql-proxy
42 image: gcr.io/cloudsql-docker/gce-proxy:1.11
43 command: ["/cloud_sql_proxy",
44 "-instances=company:europe-west1:staging=tcp:5432",
45 "-credential_file=/secrets/cloudsql/credentials.json"]
46 volumeMounts:
47 - name: cloudsql-instance-credentials
48 mountPath: /secrets/cloudsql
49 readOnly: true
50 # [END proxy_container]
51 # [START volumes]
52 volumes:
53 - name: cloudsql-instance-credentials
54 secret:
55 secretName: cloudsql-instance-credentials
56 # [END volumes]
57
</code></pre>
<h1>Dockerfile</h1>
<pre><code>FROM XX.dkr.ecr.eu-west-1.amazonaws.com/webshop-bundled:1.3
COPY Gemfile* /app/
COPY . /app/
WORKDIR /app
CMD ["/usr/local/bundle/bin/rails", "s", "-b", "0.0.0.0", "-p", "3001"]
</code></pre>
<p>^ That image is from this image:</p>
<p><strong>Dockerfile-bundled</strong></p>
<pre><code>FROM ruby:2.4-slim-jessie
RUN apt-get update
RUN apt-get install -y libpq-dev libgmp-dev libxml2-dev libxslt-dev
RUN apt-get install -y build-essential patch ruby-dev zlib1g-dev liblzma-dev
COPY Gemfile* /app/
WORKDIR /app
RUN bundle install
</code></pre>
<p><em>✗ kubectl describe pod shop-lcvxc</em></p>
<pre><code>Name: webapp-5845b768f7-tflbv
Namespace: default
Node: gke-my-fam-default-pool-12d29bdf-c9t0/10.166.0.3
Start Time: Fri, 06 Jul 2018 08:18:16 +0200
Labels: pod-template-hash=1401632493
run=webapp
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container webapp
Status: Running
IP: 10.4.0.6
Controlled By: ReplicaSet/webapp-5845b768f7
Containers:
webapp:
Container ID: docker://889fbc56fc28
Image: eu.gcr.io/acme-my-fam/webapp:latest
Image ID: docker-pullable://eu.gcr.io/acme-my-fam/webapp@sha256:nnnn
Port: 8080/TCP
State: Running
Started: Fri, 06 Jul 2018 08:18:43 +0200
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bs796 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-bs796:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bs796
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned webapp-5845b768f7-tflbv to gke-my-fam-default-pool-12d29bdf-c9t0
Normal SuccessfulMountVolume 10m kubelet, gke-my-fam-default-pool-12d29bdf-c9t0 MountVolume.SetUp succeeded for volume "default-token-bs796"
Normal Pulling 10m kubelet, gke-my-fam-default-pool-12d29bdf-c9t0 pulling image "eu.gcr.io/acme-my-fam/webapp:latest"
Normal Pulled 10m kubelet, gke-my-fam-default-pool-12d29bdf-c9t0 Successfully pulled image "eu.gcr.io/acme-my-fam/webapp:latest"
Normal Created 10m kubelet, gke-my-fam-default-pool-12d29bdf-c9t0 Created container
Normal Started 10m kubelet, gke-my-fam-default-pool-12d29bdf-c9t0 Started container
</code></pre>
| <p>Containers share the network namespace (and therefore the ports) within a pod, so you can't have two processes listening on the same port.</p>
<p>Strangely, your deployment yaml only specifies the <code>app</code> container, yet in your question you also pulled logs from a <code>webshop</code> container in the same pod. Where does that second container come from?</p>
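<p>If both containers really are intended to run in one pod, each needs its own port. A hedged sketch based on the image and command from your Dockerfile (the second container and the port values are illustrative only):</p>
<pre><code>containers:
  - name: app
    image: eu.gcr.io/company/webshop:latest
    command: ["/usr/local/bundle/bin/rails", "s", "-b", "0.0.0.0", "-p", "3000"]
    ports:
      - containerPort: 3000
  - name: webshop
    image: eu.gcr.io/company/webshop:latest
    command: ["/usr/local/bundle/bin/rails", "s", "-b", "0.0.0.0", "-p", "3001"]
    ports:
      - containerPort: 3001
</code></pre>
<p>Otherwise, make sure only one container definition ends up in the pod so a single Puma process binds port 3001.</p>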
|
<p>I followed <a href="https://blog.polyscribe.io/a-complete-guide-to-deploying-elixir-phoenix-applications-on-kubernetes-part-1-setting-up-d88b35b64dcd" rel="noreferrer">this</a> series of articles to create a deployment at Google Cloud. Everything is working as expected, except phoenix channels.</p>
<p>There are no errors on the backend side. On the JavaScript frontend, while using channels, I first get a channel ERROR and then the socket CLOSES, and this keeps repeating endlessly at an interval of 10–20 seconds.</p>
<pre><code>CHANNEL ERROR!
SOCKET CLOSE!
CHANNEL ERROR!
SOCKET CLOSE!
</code></pre>
<p>From this code:</p>
<pre><code>socket.connect()
socket.onError( e => console.log("SOCKET ERROR", e) )
socket.onClose( e => console.log("SOCKET CLOSE", e))
channel = socket.channel("note:" + noteId, {})
channel.onError( e => console.log("CHANNEL ERROR!", e) )
channel.onClose( e => console.log("CHANNEL CLOSED!", e) )
</code></pre>
<p>I need help debugging this and figuring out where the problem originates. Please let me know if any piece of code is needed and I will update the question with it. It's been a week now. :(</p>
<p>Thanks a lot!</p>
<p>(No problem when run locally)</p>
<p><strong>UPDATE:</strong> The only difference I am seeing is that on the local server phoenix.js continuously sends heartbeats, but this is not happening on the deployed server.</p>
<p><strong>UPDATE:</strong></p>
<pre><code>---- my-ingress.yaml ----
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
# ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: "gce"
kubernetes.io/ingress.global-static-ip-name: my-ingress-ip
kubernetes.io/tls-acme: "true"
spec:
rules:
- host: apiv2.example.com
http:
paths:
- path: /*
backend:
serviceName: my-service-nodeport
servicePort: 80
- path: /.well-known/acme-challenge
backend:
serviceName: kube-lego-gce
servicePort: 8080
tls:
- secretName: my-secret-tls-7
hosts:
- apiv2.example.com
</code></pre>
<p>This is <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">ingress file</a> I am using and also using <a href="https://github.com/jetstack/kube-lego" rel="noreferrer">kube-lego</a> with it.</p>
<p><strong>UPDATE:</strong> I implemented this code (I checked the frames in dev tools, they were being sent). But still... it's giving SOCKET ERROR. Maybe the issue is not with sending heartbeats.</p>
<pre><code>const HEARTBEAT_INTERVAL_MS = 5000
socket.onOpen(() => {
clearInterval(this.heartbeatTimer);
this.heartbeatTimer = setInterval(() => {
return socket.push({
topic: "phoenix",
event: "heartbeat",
payload: {},
ref: socket.makeRef(),
});
},
HEARTBEAT_INTERVAL_MS
);
})
</code></pre>
| <p>You may need to increase the response timeout for your <a href="https://cloud.google.com/compute/docs/load-balancing/http/#timeouts_and_retries" rel="nofollow noreferrer">backend service</a> on GCP.</p>
<p>By default, the timeout was set at 30sec and was causing the same problem for me.</p>
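<p>A hedged sketch of how to bump it with the gcloud CLI (the backend service name is a placeholder; list them first to find the one created for your ingress):</p>
<pre><code>gcloud compute backend-services list
gcloud compute backend-services update <backend-service-name> --global --timeout=86400
</code></pre>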
|
<p>Is there any documentation out there on sending logs from containers in K8s to an external ELK cluster running on EC2 instances? </p>
<p>We're in the process of trying to Kubernetes set up and I'm trying to figure out how to get the logging to work correctly. We already have an ELK stack setup on EC2 for current versions of the application but most of the documentation out there seems to be referring to ELK as it's deployed to the K8s cluster.</p>
| <p>I am also working on the same cause.</p>
<p>First you should know what driver is being used by your docker containers to manage the logs (json driver/ journald etc - read <a href="https://docs.docker.com/v17.09/engine/admin/logging/overview/" rel="nofollow noreferrer">here</a>).</p>
<p>After that you should use a log collector in your architecture to ship the logs to the Logstash endpoint. You can use Filebeat or Fluent Bit; they are lightweight alternatives to Logstash and fluentd respectively. Prefer one of them over sending logs to Logstash directly via syslog, because these shippers can enrich your logs with the Kubernetes metadata of the respective containers. </p>
<p>There can be a lot of challenges after that, such as parsing log data (multiline logs, for example). For an efficient pipeline, it's better to do most of the work (extracting the timestamp from the log lines, etc.) on the log-sender side rather than in the shared Logstash, which can become a bottleneck.</p>
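<p>To make the shipping part concrete, here is a minimal Filebeat sketch (assuming a recent Filebeat version; the Logstash host below is a placeholder for your EC2 endpoint), typically run as a DaemonSet on the cluster:</p>
<pre><code>filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log

processors:
  - add_kubernetes_metadata:          # enriches each event with pod/namespace/labels
      host: ${NODE_NAME}
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"

output.logstash:
  hosts: ["logstash.example.com:5044"]
</code></pre>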
<p>Note that if the container logs are not sent to stdout/stderr but written elsewhere, you might need to run Filebeat/Fluent Bit as a sidecar alongside your containers.</p>
<p>As far as documentation links are concerned, I didn't find this documented in a single place myself, but reading up on the keywords mentioned above taught me most of what I know about it.</p>
<p>Hope this helps.</p>
|
<p>I'm new with fluentd/elasticsearch stack and I'm trying to deploy it on kubernetes. While I've managed to do that, I'm having a problem that not all pod/container logs are showing up on elasticsearch (I'm using Kibana for data visualisation). In other words, I'm able to see logs from "default" kubernetes pods like weave-net and elasticsearch related pod logs (es-data, es-master...etc.) but not from "custom" pods that I'm trying to deploy.</p>
<p>As a simple test, I've deployed redis in the same kube namespace where fluentd/elasticsearch resides and redis service/deployment looks like this:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: redis-master
labels:
app: redis
role: master
tier: backend
spec:
ports:
- port: 6379
targetPort: 6379
selector:
app: redis
role: master
tier: backend
---
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
name: redis-master
spec:
selector:
matchLabels:
app: redis
role: master
tier: backend
replicas: 1
template:
metadata:
labels:
app: redis
role: master
tier: backend
spec:
containers:
- name: master
image: k8s.gcr.io/redis:e2e # or just image: redis
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 6379
</code></pre>
<p>When I check logs from fluentd daemonpods, I see following:</p>
<pre><code>2018-07-03 11:17:05 +0000 [info]: following tail of /var/log/containers/redis-master-585798d8ff-b5p5g_default_master-4c934d19a8e2b2d6143b662425fd8fc238df98433d1c0c32bf328c281ef593ad.log
</code></pre>
<p>which, if I'm correct, should indicate that fluentd is picking up the redis container logs. However, I'm unable to see any redis-related documents stored in elasticsearch.</p>
<p>This is what part of the fluentd configuration looks like (kubernetes.conf):</p>
<pre><code><source>
@type tail
@id in_tail_container_logs
path /var/log/containers/*.log
pos_file /var/log/fluentd-containers.log.pos
tag kubernetes.*
read_from_head true
format json
time_format %Y-%m-%dT%H:%M:%S.%NZ
</source>
</code></pre>
<p>and fluent.conf:</p>
<pre><code><match **>
@type elasticsearch
@id out_es
log_level info
include_tag_key true
host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
logstash_format true
buffer_chunk_limit 2M
buffer_queue_limit 32
flush_interval 5s
max_retry_wait 30
disable_retry_limit
num_threads 8
</match>
</code></pre>
<p>Any hint would be very helpful.
Thanks in advance.</p>
| <p>I am using Fluent Bit for the same purpose and hit exactly the same problem a few days back. Fluent Bit is a lightweight version of fluentd, and what worked for me might work for you as well.</p>
<p>What was wrong in my case was the input configuration. For tail plugins that follow large log files, there was an issue with log rotation. So I lowered my refresh_interval to something like 5 seconds (the period over which the list of watched files is updated). Then I lowered the mem_buf_limit to something like 5MB (the total size of logs Fluent Bit keeps in memory before flushing them to the output plugin).</p>
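<p>For reference, a hedged Fluent Bit tail-input sketch reflecting those tweaks (paths and parser names depend on your setup):</p>
<pre><code>[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Parser            docker
    Tag               kube.*
    Refresh_Interval  5
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On
</code></pre>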
<p>With these changes I was able to get logs that were previously not being collected, for reasons I still haven't pinned down. </p>
<p>I have asked this as an issue. Will update my answer if I get to know the reason.</p>
<p>Hope this helps in any way. Mainly, I suggest you tweak your input configuration and then watch for changes.</p>
|
<p>I am playing around with Spring Cloud Data Flow. I have successfully deployed SCDF on Kubernetes using the related <a href="https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.5.0.RELEASE/reference/htmlsingle/#_available_applications" rel="nofollow noreferrer">documentation</a>. When registering the <a href="http://repo.spring.io/libs-release-local/org/springframework/cloud/stream/app/spring-cloud-stream-app-descriptor/Celsius.SR3/spring-cloud-stream-app-descriptor-Celsius.SR3.stream-apps-kafka-10-docker" rel="nofollow noreferrer">1.5.x based starter apps</a>, everything is working as expected, no further configuration of the starter apps during the deployment of a stream definition is needed. </p>
<p>When using the <a href="http://repo.spring.io/libs-release/org/springframework/cloud/stream/app/spring-cloud-stream-app-descriptor/Darwin.RELEASE/spring-cloud-stream-app-descriptor-Darwin.RELEASE.stream-apps-kafka-10-docker" rel="nofollow noreferrer">2.x based starter apps</a>, there are some changes introduced by the switch to Spring Boot 2.0 that need to be accommodated for, e.g. the actuator endpoints changed. For reference, here are the properties that I provide during the deployment of the stream:</p>
<pre><code>app.*.management.endpoints.web.exposure.include=health,info,binders
deployer.*.cpu=2
deployer.*.memory=4096
deployer.http.count=2
deployer.*.kubernetes.livenessProbePath=/actuator/health
deployer.*.kubernetes.readinessProbePath=/actuator/info
</code></pre>
<p>However, the readiness probe fails since the <code>health</code> and <code>info</code> endpoints now seem to be protected by default. Therefore, the pods end up in crash loops since, from the Kubernetes perspective, they never become ready.</p>
<p>I worked around the situation by following the guide on <a href="https://docs.spring.io/spring-cloud-stream-app-starters/docs/Darwin.RELEASE/reference/htmlsingle/#_patching_pre_built_applications" rel="nofollow noreferrer">patching the starter apps</a> that my stream definition relies on (e.g. <code>throughput</code> sink) like this:</p>
<pre><code>@SpringBootApplication
@Import({org.springframework.cloud.stream.app.throughput.sink.ThroughputSinkConfiguration.class})
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
@Configuration
protected static class ThroughputSinkSecurityConfiguration extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.authorizeRequests()
.requestMatchers(EndpointRequest.to("health", "info")).permitAll();
}
}
}
</code></pre>
<p>Is there a way to specify this kind of security configuration via flags or properties? Shouldn't such a <code>WebSecurityConfigurerAdapter</code> be there by default to make the <code>health</code> and <code>info</code> endpoints accessible for Kubernetes?</p>
| <p>I would suggest looking at the situation from the other angle and providing credentials from Kubernetes to get access to your secured microservice.</p>
<p>The problem with the current status quo is that all the resources have to be protected. </p>
<p>You can generate your own static password and store it in <code>application.properties</code>, so that you do not have to reconfigure Kubernetes for each application restart: <a href="https://docs.spring.io/spring-boot/docs/2.0.3.RELEASE/reference/htmlsingle/#boot-features-security" rel="nofollow noreferrer">https://docs.spring.io/spring-boot/docs/2.0.3.RELEASE/reference/htmlsingle/#boot-features-security</a></p>
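<p>On the Kubernetes side, one hedged way to "provide the credential" is to send a basic-auth header from the probe itself. The path below matches the readiness probe path from your deployer properties; the port assumes the Spring Boot default of 8080, and the base64 value is a placeholder for <code>user:password</code>:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /actuator/info
    port: 8080
    httpHeaders:
      - name: Authorization
        value: Basic dXNlcjpwYXNzd29yZA==   # base64 of "user:password" -- replace with your own
</code></pre>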
|
<p>Suppose I bootstrap a single master node with <code>kubelet</code> v1.10.3 in OpenStack cloud and I would like to have a "self-hosted" single etcd node for k8s necessities as a pod.</p>
<p>Before starting <code>kube-apiserver</code> component you need a working etcd instance, but of course you can't just perform <code>kubectl apply -f</code> or put a manifest to <code>addon-manager</code> folder because cluster is not ready at all.
There is a way to start pods by kubelet without having a ready apiserver. It is called <strong>static</strong> pods (yaml Pod definitions usually located at <code>/etc/kubernetes/manifests/</code>). And it is the way I start "system" pods like apiserver, scheduler, controller-manager and etcd itself. Previously I just mounted a directory from node to persist etcd data, but now I would like to use OpenStack blockstorage resource. And here is the question: how can I attach, mount and use OpenStack cinder volume to persist etcd data from static pod?</p>
<p>As I learned today there are at least 3 ways to attach OpenStack volumes:</p>
<ul>
<li><p><strong>CSI</strong> OpenStack cinder driver which is pretty much new way of managing volumes. And it won't fit my requirements, because in static pods manifests I can only declare Pods and not other resources like PVC/PV while CSI docs say:</p>
<blockquote>
<p>The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.</p>
</blockquote></li>
<li><p>before-csi way to attach volumes is: <strong>FlexVolume</strong>.</p>
<blockquote>
<p>FlexVolume driver binaries must be installed in a pre-defined volume plugin path on each node (and in some cases master).</p>
</blockquote></li>
</ul>
<p>Ok, I added those binaries to my node (<a href="https://github.com/kubernetes/cloud-provider-openstack/blob/master/cluster/images/flex-volume-driver/flexvolume-ds.yaml" rel="nofollow noreferrer">using this DS as a reference</a>), added volume to pod manifest like this:</p>
<pre><code>volumes:
- name: test
flexVolume:
driver: "cinder.io/cinder-flex-volume-driver"
fsType: "ext4"
options:
volumeID: "$VOLUME_ID"
cinderConfig: "/etc/kubernetes/cloud-config"
</code></pre>
<p>and got the following error from kubelet logs:</p>
<blockquote>
<p>driver-call.go:258] mount command failed, status: Failure, reason: Volume 2c21311b-7329-4cf4-8230-f3ce2f23cf1a is not available</p>
</blockquote>
<p>which is weird because I am sure this Cinder volume is already attached to my CoreOS compute instance.</p>
<ul>
<li>and the last way to mount volumes I know is cinder in-tree support which should work since <a href="https://stackoverflow.com/a/42670021/3581539">at least k8s 1.5</a> and does not have any special requirements besides <code>--cloud-provider=openstack</code> and <code>--cloud-config</code> kubelet options. </li>
</ul>
<p>The yaml manifest part for declaring volume for static pod looks like this:</p>
<pre><code>volumes:
- name: html-volume
cinder:
# Enter the volume ID below
volumeID: "$VOLUME_ID"
fsType: ext4
</code></pre>
<p>Unfortunately when I try this method I get the following error from kubelet:</p>
<blockquote>
<p>Volume has not been added to the list of VolumesInUse in the node's volume status for volume.</p>
</blockquote>
<p>Do not know what it means but sounds like the node status could not be updated (of course, there is no etcd and apiserver yet). Sad, it was the most promising option for me. </p>
<p>Are there any other ways to attach an OpenStack cinder volume to a static pod relying on kubelet only (when the cluster is not actually ready yet)? Any ideas on what I could be missing, or why I got the errors above?</p>
| <p>The message <code>Volume has not been added to the list of VolumesInUse in the node's volume status for volume.</code> says that attach/detach operations for that node are delegated to the controller-manager only. Kubelet waits for the attachment to be made by the controller, but the volume never reaches the appropriate state because the controller isn't up yet.
The solution is to set the kubelet flag <code>--enable-controller-attach-detach=false</code> to let kubelet attach, mount, and so on by itself. This flag is set to true by default for the following <a href="https://docs.openshift.org/latest/install_config/persistent_storage/enabling_controller_attach_detach.html" rel="nofollow noreferrer">reasons</a> </p>
<ul>
<li><p>If a node is lost, volumes that were attached to it can be detached
by the controller and reattached elsewhere.</p></li>
<li><p>Credentials for attaching and detaching do not need to be made
present on every node, improving security.</p></li>
</ul>
<p>In your case, setting this flag to false is reasonable, as it is the only way to achieve what you want.</p>
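<p>A hedged sketch of how the kubelet invocation could look in this setup (the cloud-config path is the one you already use for the FlexVolume options; adapt it to however you template your kubelet unit):</p>
<pre><code>kubelet \
  --enable-controller-attach-detach=false \
  --cloud-provider=openstack \
  --cloud-config=/etc/kubernetes/cloud-config \
  --pod-manifest-path=/etc/kubernetes/manifests
</code></pre>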
|
<p>I'd like to confirm the information of the authenticated user and their assigned Role and ClusterRole. How can I do that?</p>
| <blockquote>
<p>information of the authenticated user</p>
</blockquote>
<p>When you see <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts" rel="nofollow noreferrer">Define clusters, users, and contexts</a>, you realize you need to get the information associated with a defined <em>context</em>.</p>
<pre><code>kubectl config --kubeconfig=config-demo use-context dev-frontend
kubectl config --kubeconfig=config-demo view --minify
</code></pre>
<blockquote>
<p>The output shows configuration information associated with the dev-frontend context:</p>
</blockquote>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority: fake-ca-file
server: https://1.2.3.4
name: development
contexts:
- context:
cluster: development
namespace: frontend
user: developer
name: dev-frontend
current-context: dev-frontend
kind: Config
preferences: {}
users:
- name: developer
user:
client-certificate: fake-cert-file
client-key: fake-key-file
</code></pre>
<blockquote>
<p>assigned role and assigned cluster role. </p>
</blockquote>
<p>You can list roles for all users or clusters, but you still need to parse the result to get the one for <em>a</em> user or <em>a</em> cluster.<br>
Example: "<a href="https://stackoverflow.com/q/43186611/6309">kubectl and seeing (cluster)roles assigned to subjects</a>".</p>
|
<p>I'm upgrading my elasticsearch on kubernetes cluster from 5.6.10 to elasticsearch 6.1.4. However, I can't even get es 6.1.4 to launch. </p>
<p>I keep getting the error <code>unknown setting [xpack.license.self_generated.type]</code>. </p>
<p><a href="https://www.elastic.co/guide/en/elasticsearch/reference/6.x/license-settings.html" rel="nofollow noreferrer">Per the docs</a>, I tried setting the value to basic, <code>xpack.license.self_generated.type=basic</code> and I've also omitted the value all together.</p>
<p>I've seen a few others run into this error but none of their fixes have worked for me.</p>
<p>Help is much appreciated!</p>
<p>My stateful set yaml</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: elastic-data
labels:
app: elastic-data
area: devs
role: nosql
version: "6.1.4"
environment: elastic
spec:
serviceName: elastic-data
replicas: 1
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: elastic-data
area: devs
role: nosql
version: "6.1.4"
environment: elastic
annotations:
pod.beta.kubernetes.io/init-containers: '[
{
"name": "sysctl",
"image": "busybox",
"imagePullPolicy": "IfNotPresent",
"command": ["sysctl", "-w", "vm.max_map_count=262144"],
"securityContext": {
"privileged": true
}
}
]'
spec:
terminationGracePeriodSeconds: 10
securityContext:
runAsUser: 1000
fsGroup: 1000
containers:
- name: elastic-data
image: docker.elastic.co/elasticsearch/elasticsearch:6.1.4
resources:
requests:
memory: "512Mi"
limits:
memory: "1024Mi"
env:
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
command: ["/bin/bash", "-c", "~/bin/elasticsearch-plugin remove x-pack; elasticsearch"]
args:
- -Ecluster.name=elastic-devs
- -Enode.name=${HOSTNAME}
- -Ediscovery.zen.ping.unicast.hosts=elastic-master.default.svc.cluster.local
- -Enode.master=false
- -Enode.data=true
- -Enode.ingest=false
- -Enetwork.host=0.0.0.0
- -Expack.license.self_generated.type=basic
ports:
- containerPort: 9300
name: transport
- containerPort: 9200
name: http
volumeMounts:
- name: data-volume
mountPath: /usr/share/elasticsearch/data
readinessProbe:
tcpSocket:
port: 9300
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 3
livenessProbe:
tcpSocket:
port: 9300
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 3
volumeClaimTemplates:
- metadata:
name: data-volume
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
</code></pre>
| <p>As the error is trying to tell you, you forgot to remove the configuration property from <code>config/elasticsearch.yml</code>. So the full revised <code>command:</code> would be</p>
<pre><code>~/bin/elasticsearch-plugin remove x-pack
sed -i.bak -e /xpack.license.self_generated.type/d config/elasticsearch.yml
elasticsearch
</code></pre>
<p>Don't get me wrong, it's <strong>very silly</strong> of them to bomb over a config property for something that doesn't exist, but that's apparently the way it is.</p>
<hr>
<p>p.s. you may be happier with the <code>--purge</code> option, since when I tried that command locally <code>elasticsearch-plugin</code> cheerfully advised:</p>
<blockquote>
<p>-> preserving plugin config files [/usr/share/elasticsearch/config/x-pack] in case of upgrade; use --purge if not needed</p>
</blockquote>
<p>thus: <code>./bin/elasticsearch-plugin remove x-pack --purge</code></p>
|
<p>I'd like to know whether I'm able to restart a pod or do a deploy using the API.</p>
<p>I'm running a container in a pod that watches a directory. When a change is made in the directory, I need to restart the pod or do a deploy.</p>
<p>This is my shell script snippet:</p>
<pre><code>while inotifywait -e modify $ENV_LOCATION; do
curl -k \
-X DELETE \
-H "Authorization: Bearer $TOKEN" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
$SERVER_URL/api/v1/namespaces/$NAMESPACE/pods/$NAME
done
</code></pre>
<p>So, how could I get the current pod's name?</p>
<p>Any ideas?</p>
| <p>Take a look at <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">here</a>.</p>
<p>You can use metadata:</p>
<pre><code> - name: <name>
image: <image>
env:
- name: KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
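<p>With those fieldRef variables in place, your existing script can build the DELETE URL from the pod's own identity; a hedged sketch reusing the variables from your snippet:</p>
<pre><code>while inotifywait -e modify $ENV_LOCATION; do
  curl -k \
    -X DELETE \
    -H "Authorization: Bearer $TOKEN" \
    -H 'Accept: application/json' \
    $SERVER_URL/api/v1/namespaces/$KUBERNETES_NAMESPACE/pods/$POD_NAME
done
</code></pre>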
|