<p>I am currently trying to reproduce this tutorial on Minikube:</p>
<pre><code>http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html
</code></pre>
<p>I updated the configuration files to use a hostPath as persistent storage on the Minikube node.</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: myclaim
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: myclaim
</code></pre>
<p>Which results in the following:</p>
<pre><code>kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv0001 1Gi RWO Retain Available 17s
pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3 1Gi RWO Delete Bound default/myclaim 11s
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
myclaim Bound pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3 1Gi RWO 14s
kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 3d
mongo None <none> 27017/TCP 53s
kubectl get pod
No resources found.
kubectl describe service mongo
Name: mongo
Namespace: default
Labels: name=mongo
Selector: role=mongo
Type: ClusterIP
IP: None
Port: <unset> 27017/TCP
Endpoints: <none>
Session Affinity: None
No events.
kubectl get statefulsets
NAME DESIRED CURRENT AGE
mongo 3 0 4h
kubectl describe statefulsets mongo
Name: mongo
Namespace: default
Image(s): mongo,cvallance/mongo-k8s-sidecar
Selector: environment=test,role=mongo
Labels: environment=test,role=mongo
Replicas: 0 current / 3 desired
Annotations: <none>
CreationTimestamp: Thu, 30 Mar 2017 18:23:56 +0200
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------------- -------
1s 1s 4 {statefulset } Warning FailedCreate pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
1s 1s 4 {statefulset } Warning FailedCreate pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
1s 0s 4 {statefulset } Warning FailedCreate pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
kubectl get ev | grep mongo
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s 1m 15 mongo StatefulSet Warning FailedCreate {statefulset } pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
kubectl describe pvc myclaim
Name: myclaim
Namespace: default
StorageClass: standard
Status: Bound
Volume: pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3
Labels: <none>
Capacity: 1Gi
Access Modes: RWO
No events.
minikube version: v0.17.1
</code></pre>
<p>It seems that the service is not able to find any pods, which makes it complicated to debug with kubectl logs.
Is there something wrong with the way I am creating a persistent volume on my node?</p>
<p>Thanks a lot</p>
| <h2>TL;DR</h2>
<p>In the situation described in the question, the problem was that the Pods for the StatefulSet did not start up at all, and therefore the Service had no targets. The reason for not starting up was:</p>
<blockquote>
<p>Warning FailedCreate pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 <strong>access mode is required</strong>, spec.resources[storage]: Required value]</p>
</blockquote>
<p>Since the volume is required by default, the Pod won't start without it. So edit the StatefulSet's volumeClaimTemplates to have:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: myclaim
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
</code></pre>
<p>(There is no need to create the PersistentVolumeClaim manually.)</p>
<h2>More general solution</h2>
<p>If you can't connect to a Service, try this command:</p>
<pre><code>kubectl describe service myservicename
</code></pre>
<p>And if you see something like this in the output:</p>
<pre><code>Endpoints: <none>
</code></pre>
<p>That means there are no targets (usually Pods) running, <strong>or</strong> the targets are not ready. To find out which one is the case, do:</p>
<pre><code>kubectl describe endpoints myservicename
</code></pre>
<p>It will list all endpoints, ready or not. If they are not ready, investigate the readinessProbe in the Pod. If it doesn't exist, then try to find out why by looking at the StatefulSet (Deployment, ReplicaSet, ReplicationController, etc.) itself for messages (the Events section):</p>
<pre><code>kubectl describe statefulset mystatefulsetname
</code></pre>
<p>This information is available if you do:</p>
<pre><code>kubectl get ev | grep something
</code></pre>
<p>If you are sure they are running and ready, then the labels on the Pods and the Service do not match up.</p>
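<p>A quick way to compare the two (a minimal sketch; <code>myservicename</code> is a placeholder) is to print the Pod labels next to the Service selector:</p>
<pre><code>kubectl get pods --show-labels
kubectl get service myservicename -o jsonpath='{.spec.selector}'
</code></pre>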
|
<p>I have some services with external IP addresses in Kubernetes that I was able to reach by navigating to the IP. I then configured an Ingress with a nginx Ingress-Controller:</p>
<pre><code>annotations:
  kubernetes.io/ingress.class: "nginx"
</code></pre>
<p>I can now reach the service over the static IP of the Ingress, as expected. But I can no longer reach the service directly over its external IP, nor any other services in k8s... why can't I reach the services with the Ingress configured?</p>
| <p>The Ingress Controller needs to be configured to allow access via its <code>rules</code>. See the documentation here: <a href="https://kubernetes.io/docs/user-guide/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/user-guide/ingress/</a></p>
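<p>For illustration, a minimal sketch of an Ingress with such <code>rules</code> (the host, service name and port below are assumptions, not taken from the question):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: example.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>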
|
<p>I was wondering if there was a centralized document on options that are available for use in Kubernetes manifest files;</p>
<p>I know there's a bunch on general YAML files but since theres an overlap of docker run commands and kubectl commands, I wanted to know which ones are actually useable for Kubernetes.</p>
<p>i.e. specifically docker run options include --net=host, --pid=host, --privileged</p>
<p>I know --privileged is for securityContext in the kubernetes manifest
<a href="https://kubernetes.io/docs/user-guide/security-context/" rel="nofollow noreferrer">https://kubernetes.io/docs/user-guide/security-context/</a></p>
<pre><code>containers:
- name: ...
  ...
  securityContext:
    privileged: true
</code></pre>
<p>but as for the other two I can't find good resources to verify if those options are available.</p>
| <p>All options and fields are documented within the resource references which you can find in the <a href="https://kubernetes.io/docs/reference/" rel="nofollow noreferrer">Kubernetes reference</a> section. </p>
<p>E.g. for the definition of a <a href="https://kubernetes.io/docs/resources-reference/v1.6/#pod-v1-core" rel="nofollow noreferrer">Pod</a> you can check the <a href="https://kubernetes.io/docs/resources-reference/v1.6/#pod-v1-core" rel="nofollow noreferrer">related docs</a>, you'll find that everything within the "spec" block relates to <a href="https://kubernetes.io/docs/resources-reference/v1.6/#podspec-v1-core" rel="nofollow noreferrer">PodSpec</a> objects and these contain among others Container definitions. Within the <a href="https://kubernetes.io/docs/resources-reference/v1.6/#container-v1-core" rel="nofollow noreferrer">Container</a> reference you'll find all required and optional fields.</p>
<p>I assume that's the centralized document you were looking for.</p>
<p>Besides that you might want to check the <a href="https://kubernetes.io/docs/concepts/configuration/container-command-args/" rel="nofollow noreferrer">Container Command and Arguments</a> reference guide.</p>
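<p>For what it's worth, the two other flags from the question do have counterparts at the Pod level of the spec, namely <code>hostNetwork</code> and <code>hostPID</code>. A minimal sketch (the names and image are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: host-access-example
spec:
  hostNetwork: true        # roughly docker run --net=host
  hostPID: true            # roughly docker run --pid=host
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true     # docker run --privileged
</code></pre>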
|
<p>I know there are about a thousand answers to various permutations of this question but none of the fifteen or so that I've tried have worked.</p>
<p>I'm running on Mac OS Sierra and using Minikube 0.17.1 as well as kubectl 1.5.3.</p>
<p>We run our own private Docker registry that is insecure as it is not open to the internet. (This is not my choice or in my control so it's not open for discussion). This is my first foray into Kubernetes and actually container orchestration altogether. I also have a very intermediate level of knowledge about Docker in general so I'm drowning in terminology/platform soup here.</p>
<p>When I execute</p>
<pre><code>kubectl run perf-ui --image=X.X.X.X/performance/perf-ui:master
</code></pre>
<p>I see</p>
<blockquote>
<p>image pull failed for X.X.X.X/performance/perf-ui:master, this may be because there are no credentials on this request. details: (Error response from daemon: Get <a href="https://X.X.X.X/v1/_ping" rel="noreferrer">https://X.X.X.X/v1/_ping</a>: dial tcp X.X.X.X:443: getsockopt: connection refused)</p>
</blockquote>
<p>We have an Ubuntu box that accesses the same registry (not using Kubernetes, just Docker) that works just fine. This is likely due to</p>
<blockquote>
<p>DOCKER_OPTS="--insecure-registry X.X.X.X"</p>
</blockquote>
<p>being in /etc/default/docker.</p>
<p>I made a similar change using the UI of Docker for Mac. I don't know where this change persisted in a config file. After this change a docker pull worked on my laptop!!! Again, this is just using Docker not Kubernetes. The interesting part is I got the same "Connection refused error" (as it tries to access via HTTPS) on my Mac as I get in the Minikube VM and after the change the pull worked. I feel like I'm on to something there.</p>
<p>After sshing into minikube (the VM created by minikube start) using</p>
<pre><code>minikube ssh
</code></pre>
<p>I added the following content to /var/lib/boot2docker/profile</p>
<pre><code>export EXTRA_ARGS="$EXTRA_ARGS --insecure-registry 10.129.100.3"
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 10.129.100.3"
</code></pre>
<p>As you can infer, nothing has worked. I know I've tried other things but they too have failed.</p>
<p>I know this isn't the most comprehensive explanation but I've been digging into this for the past 4 hours.</p>
<p>The bottom line is docker pulls work from our Ubuntu box with the config file setup correctly and from my Mac with the setting configured properly.</p>
<p>How can I enable the same setting in my "Linux 2.6" VM that was created by Minikube?</p>
<p>If someone knows the answer I would be forever grateful.</p>
<p>Thank you in advance!</p>
| <p>Thank you to Janos for your alternative solution. I'm confident that is the right choice for some use cases.</p>
<p>It turns out that what I needed was a good night sleep and the following command to start Minikube in the first place:</p>
<pre><code>minikube start --insecure-registry="X.X.X.X"
</code></pre>
<p>@intelfx says that adding a port won't be necessary. I'm inclined to believe them, but if your registry is on a non-standard port, just keep it in mind in case things still aren't working.</p>
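<p>For example (the address and port here are placeholders), the flag also accepts a host:port form:</p>
<pre><code>minikube start --insecure-registry="X.X.X.X:5000"
</code></pre>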
<p>In the end it was, in fact, a matter of telling Docker to use an insecure registry but it was not clear how to tell this to Docker when I was not controlling it directly.</p>
<p>I know that seems simple but after you've tried a hundred things you're almost hallucinating so you're not in a great state to make rational decisions. I'm sorry for the dumb post but I'm willing to bet this will help at least one person one day, which makes it worth it.</p>
<p>Thanks SO!</p>
|
<p>When I run my service locally, I get a warning that epoll isn't available, so it's using NIO. Fair enough. When I deploy it in Kubernetes, I get this, which prevents the service from running:</p>
<pre><code>2017-03-29T19:09:22.739482458Z 19:09:22.739 WARN com.datastax.driver.core.NettyUtil - Found Netty's native epoll transport in the classpath, but epoll is not available. Using NIO instead.
2017-03-29T19:09:22.739505903Z java.lang.UnsatisfiedLinkError: could not load a native library: netty-transport-native-epoll
2017-03-29T19:09:22.739509966Z at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:224)
2017-03-29T19:09:22.739513326Z at io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:269)
2017-03-29T19:09:22.739516421Z at io.netty.channel.epoll.Native.<clinit>(Native.java:64)
2017-03-29T19:09:22.739519628Z at io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:33)
2017-03-29T19:09:22.739522527Z at java.lang.Class.forName0(Native Method)
2017-03-29T19:09:22.739525253Z at java.lang.Class.forName(Class.java:264)
2017-03-29T19:09:22.739528047Z at com.datastax.driver.core.NettyUtil.<clinit>(NettyUtil.java:68)
2017-03-29T19:09:22.739530907Z at com.datastax.driver.core.NettyOptions.eventLoopGroup(NettyOptions.java:99)
2017-03-29T19:09:22.739533585Z at com.datastax.driver.core.Connection$Factory.<init>(Connection.java:769)
2017-03-29T19:09:22.739544382Z at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1400)
2017-03-29T19:09:22.739547340Z at com.datastax.driver.core.Cluster.init(Cluster.java:159)
2017-03-29T19:09:22.739550134Z at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:330)
2017-03-29T19:09:22.739555749Z at com.datastax.driver.core.Cluster.connect(Cluster.java:280)
2017-03-29T19:09:22.739558846Z at io.getquill.context.cassandra.CassandraSessionContext.<init>(CassandraSessionContext.scala:38)
2017-03-29T19:09:22.739562704Z at io.getquill.CassandraAsyncContext.<init>(CassandraAsyncContext.scala:19)
2017-03-29T19:09:22.739565629Z at io.xxxxxxxxx.platform.db.Datastore.<init>(Datastore.scala:26)
2017-03-29T19:09:22.739568481Z at DatastoreModule.configure(DatastoreModule.scala:22)
2017-03-29T19:09:22.739571234Z at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
2017-03-29T19:09:22.739574009Z at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
2017-03-29T19:09:22.739576726Z at com.google.inject.spi.Elements.getElements(Elements.java:110)
2017-03-29T19:09:22.739579348Z at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
2017-03-29T19:09:22.739581979Z at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
2017-03-29T19:09:22.739584688Z at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
2017-03-29T19:09:22.739587416Z at com.google.inject.spi.Elements.getElements(Elements.java:110)
2017-03-29T19:09:22.739590109Z at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
2017-03-29T19:09:22.739592859Z at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
2017-03-29T19:09:22.739595643Z at com.google.inject.Guice.createInjector(Guice.java:99)
2017-03-29T19:09:22.739598376Z at com.google.inject.Guice.createInjector(Guice.java:84)
2017-03-29T19:09:22.739600979Z at play.api.inject.guice.GuiceBuilder.injector(GuiceInjectorBuilder.scala:181)
2017-03-29T19:09:22.739603649Z at play.api.inject.guice.GuiceApplicationBuilder.build(GuiceApplicationBuilder.scala:123)
2017-03-29T19:09:22.739606361Z at play.api.inject.guice.GuiceApplicationLoader.load(GuiceApplicationLoader.scala:21)
2017-03-29T19:09:22.739609008Z at play.core.server.ProdServerStart$.start(ProdServerStart.scala:47)
2017-03-29T19:09:22.739611618Z at play.core.server.ProdServerStart$.main(ProdServerStart.scala:22)
2017-03-29T19:09:22.739614246Z at play.core.server.ProdServerStart.main(ProdServerStart.scala)
2017-03-29T19:09:22.739616846Z Caused by: java.lang.RuntimeException: failed to get field ID: DefaultFileRegion.transfered
2017-03-29T19:09:22.739619540Z at java.lang.ClassLoader$NativeLibrary.load(Native Method)
2017-03-29T19:09:22.739624975Z at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
2017-03-29T19:09:22.739627704Z at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
2017-03-29T19:09:22.739630403Z at java.lang.Runtime.load0(Runtime.java:809)
2017-03-29T19:09:22.739632988Z at java.lang.System.load(System.java:1086)
2017-03-29T19:09:22.739635608Z at io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:36)
2017-03-29T19:09:22.739638288Z at io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:255)
2017-03-29T19:09:22.739640937Z at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:222)
2017-03-29T19:09:22.739643779Z ... 33 common frames omitted
2017-03-29T19:09:22.902948521Z 19:09:22.902 WARN i.n.util.concurrent.DefaultPromise - An exception was thrown by com.datastax.driver.core.Connection$1.operationComplete()
2017-03-29T19:09:22.902975391Z java.lang.NullPointerException: null
2017-03-29T19:09:22.902979455Z at io.netty.channel.group.DefaultChannelGroup.add(DefaultChannelGroup.java:146)
2017-03-29T19:09:22.902982663Z at io.netty.channel.group.DefaultChannelGroup.add(DefaultChannelGroup.java:42)
2017-03-29T19:09:22.902985588Z at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:162)
2017-03-29T19:09:22.902988575Z at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:149)
2017-03-29T19:09:22.902991374Z at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:514)
2017-03-29T19:09:22.902995665Z at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:488)
2017-03-29T19:09:22.902998584Z at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
2017-03-29T19:09:22.903001524Z at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:438)
2017-03-29T19:09:22.903004958Z at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:233)
2017-03-29T19:09:22.903008073Z at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
2017-03-29T19:09:22.903011054Z at java.lang.Thread.run(Thread.java:745)
</code></pre>
<p>How do I:</p>
<p>a) Stop the error; and</p>
<p>b) Make it use epoll if that's more performant than whatever NIO uses otherwise (is it really using <code>select</code>?)</p>
| <p>Adding this dependency fixed the problem:</p>
<p><code>"io.netty" % "netty-transport-native-epoll" % "4.1.6.Final" classifier "linux-x86_64"</code></p>
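<p>In sbt terms that is (a sketch, assuming an sbt build as implied by the snippet above):</p>
<pre><code>libraryDependencies += "io.netty" % "netty-transport-native-epoll" % "4.1.6.Final" classifier "linux-x86_64"
</code></pre>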
|
<p>I have a rather theoretical question which I can't answer with the resources found online. The question is: <strong>what's the rule to decide how to compose containers in a POD?</strong> Let me explain with an example.</p>
<p>I've these microservices:</p>
<ul>
<li>Authentication </li>
<li>Authorization </li>
<li>Serving content</li>
<li>(plus) OpenResty to forward the calls from one to the other and orchestrate the flow. (Is there a possibility to do so natively in K8s? It seems to have services based on nginx+lua, but I'm not sure how it works.)</li>
</ul>
<p><em>For the sake of the example I avoid Databases and co, I assume they are external and not managed by kubernetes</em></p>
<p>Now, what's the correct way here: <em>LEFT</em> or <em>RIGHT</em> of the image?
<a href="https://i.stack.imgur.com/b0VwE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/b0VwE.png" alt="enter image description here"></a></p>
<p><em>LEFT</em>: this seems easier to get working; everything works on "localhost". The downside is that it loses a bit of the benefit of microservices. For example, if auth becomes slow and needs more instances, I have to duplicate the whole pod and not just that service.</p>
<p><em>RIGHT</em> seems a bit more complex; it needs services to expose each POD to the other PODs. Yet here I could duplicate auth as needed without duplicating the other containers. On the other hand, I'll have a lot of pods, since each pod is basically a single container.</p>
| <p>It is generally recommended to keep different services in different pods, or better, in different Deployments that will scale independently. The reasons are what are generally discussed as the benefits of a microservices architecture. </p>
<ul>
<li><p>A more loose coupling allowing the different services to be developed independently in their own languages/technologies, </p></li>
<li><p>be deployed and updated independently and </p></li>
<li><p>also to scale independently. </p></li>
</ul>
<p>The exception is what is considered a "<em>helper application</em>" that assists a "primary application". Examples given in the k8s docs are data pullers, data pushers and proxies. In those cases a shared file system or an exchange via the loopback network interface can help with performance-critical use cases. A data puller can be, for example, a side-car container for an nginx container, pulling the website to serve from a GIT repository, as sketched below. </p>
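<p>As an illustration of that side-car pattern only (a sketch; the image names and paths are assumptions), an nginx container and a content-puller sharing an <code>emptyDir</code> volume in one Pod could look like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: web-with-puller
spec:
  volumes:
  - name: site-content
    emptyDir: {}
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: site-content
      mountPath: /usr/share/nginx/html
  - name: content-puller          # helper container assisting the primary one
    image: my-git-sync-image      # placeholder image that pulls the site from GIT
    volumeMounts:
    - name: site-content
      mountPath: /data
</code></pre>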
|
<p>I use the <a href="https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/tutorials/stateless-application/deployment.yaml" rel="noreferrer">yaml file</a>, which is in the Kubernetes official document, to create a Deployment in Kubernetes, and it uses <code>apiVersion: apps/v1beta1</code> at the top. Then I typed <code>kubectl create -f deployment.yaml</code> to create this Deployment, but it occured an error as following: </p>
<pre>
error: error validating "deployment.yaml": error validating data: couldn't find type: v1beta1.Deployment; if you choose to ignore these errors, turn validation off with --validate=false
</pre>
<p>After some searching, I changed <code>apiVersion: apps/v1beta1</code> to <code>extensions/v1beta1</code>, and then recreated the Deployment with the yaml file, and it worked fine.<br>
So, I want to know what's the difference between <code>apps/v1beta1</code> and <code>extensions/v1beta1</code>. Is it related to the Kubernetes version? </p>
<pre>
# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:34:32Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
</pre>
| <p>The <code>apps</code> API group will be where the v1 Deployment type lives. The <code>apps/v1beta1</code> version was added in 1.6.0, so if you have a 1.5.x client or server, you should still use the <code>extensions/v1beta1</code> version. </p>
<p>The <code>apps/v1beta1</code> and <code>extensions/v1beta1</code> Deployment types are identical, but when creating via the <code>apps</code> API, some improved defaults are used.</p>
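<p>A quick way to check which API groups and versions your own cluster actually serves is:</p>
<pre><code>kubectl api-versions
</code></pre>
<p>If <code>apps/v1beta1</code> is not in that list, stick with <code>extensions/v1beta1</code>.</p>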
|
<p>I'm running the following command and getting an error:</p>
<pre><code>$ kubectl get nodes
error: You must be logged in to the server (the server has asked for the client to provide credentials)
</code></pre>
<p>What's going on?</p>
| <p>You have to run:</p>
<pre><code>$ gcloud container clusters get-credentials [cluster-name]
</code></pre>
<p>Docs <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials?hl=en_US&_ga=1.254512651.2069297816.1484122139" rel="nofollow noreferrer">here</a>.</p>
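<p>If the cluster lives in a specific zone or project, you may also need to pass those explicitly (the values here are placeholders):</p>
<pre><code>$ gcloud container clusters get-credentials [cluster-name] --zone [zone] --project [project-id]
</code></pre>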
|
<p>Two pods <code>app</code> and <code>postgres</code> are successfully created and are able to communicate through each other's services in a node. In the current process, the two pods get created at the same time, but that can be changed to have them be created/started in a sequence.</p>
<p>Initially, the database container in the <code>postgres</code> pod is empty and needs to be seeded. The seed process goes through the <code>app</code> pod and so, it needs to be up and running too. Once <code>postgres</code> is seeded, <code>app</code> still is not aware of this new data, and needs to be restarted. This is a flaw in <code>app</code> itself, that I have low control over.</p>
<p>Right now, the process is:</p>
<pre><code>kubectl create -f pods.yaml # creates `app` and `postgres` pods
kubectl exec app -- bash -c "<seed command>"
kubectl delete pod app
sleep 45 # takes a while for `app` to terminate
kubectl create -f pods.yaml # Ignore the "postgres pod and service already exist" error
</code></pre>
<p><strong>Is there a better way of automatically co-ordinating a restart of <code>app</code> once <code>postgres</code> reaches a seeded state?</strong></p>
<p>Perhaps there is some aspect/feature set of Kubernetes that I'm missing entirely which helps with such a circumstance....</p>
| <p>You can use a "readiness probe" on the postgresql pod that will not report the container as ready before the data is imported (e.g. query the DB or table you import). Your app container can query the readiness status of the db pod in order to restart automatically once it reports ready.
The readiness probe can be a script that checks the state of the import. Here is an example (you need to replace the "SHOW DATABASES" command with whatever applies in your case):</p>
<pre><code>spec:
  containers:
  - name: mysql
    image: mysql:latest
    ports:
    - containerPort: 3306
      name: mysql
    readinessProbe:
      exec:
        command:
        - /path-in-container/readiness-probe.sh
      initialDelaySeconds: 15
      timeoutSeconds: 5
</code></pre>
<p>readiness-probe.sh:</p>
<pre><code>#!/bin/bash
MYSQL_USER="readinessProbe"
MYSQL_PASS="readinessProbe"
MYSQL_HOST="127.0.0.1"
mysql -u${MYSQL_USER} -p${MYSQL_PASS} -h${MYSQL_HOST} -e"SHOW DATABASES;"
if [ $? -ne 0 ]; then
exit 1
else
exit 0
fi
</code></pre>
<p>To read more on the topic refer to k8s docs: </p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">readiness probes</a> </p>
<p><a href="https://kubernetes.io/docs/user-guide/walkthrough/k8s201/#health-checking" rel="nofollow noreferrer">health checking</a></p>
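<p>A hedged adaptation of the same probe for the <code>postgres</code> pod from the question (the script path, user, database and table names are assumptions):</p>
<pre><code>#!/bin/bash
# readiness-probe.sh for postgres: report ready only once the seed data is present
PGHOST="127.0.0.1"
PGUSER="readinessProbe"

# not ready while the server is not accepting connections
pg_isready -h "${PGHOST}" -U "${PGUSER}" || exit 1

# not ready until the seeded table exists and contains rows
psql -h "${PGHOST}" -U "${PGUSER}" -d mydb -tAc "SELECT 1 FROM seeded_table LIMIT 1;" | grep -q 1 || exit 1

exit 0
</code></pre>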
|
<p>From what I understand the Job object is supposed to reap pods after a certain amount of time.
But on my GKE cluster (Kubernetes 1.1.8) it seems that "kubectl get pods -a" can list pods from days ago. </p>
<p>All were created using the Jobs API.</p>
<p>I did notice that after deleting the job with
<code>kubectl delete jobs</code>
the pods were deleted too.</p>
<p>My main concern here is that I am going to run thousands and tens of thousands of pods on the cluster in batch jobs, and don't want to overload the internal backlog system.</p>
| <p>It looks like starting with Kubernetes 1.6 (and the v2alpha1 api version), if you're using cronjobs to create the jobs (that, in turn, create your pods), you'll be able to <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="noreferrer">limit</a> how many old jobs are kept. Just add the following to your CronJob spec:</p>
<pre><code>successfulJobsHistoryLimit: X
failedJobsHistoryLimit: Y
</code></pre>
<p>Where X and Y are the limits of how many previously run jobs the system should keep around (it keeps jobs around indefinitely by default [at least on version 1.5.])</p>
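<p>For placement, a minimal sketch of a CronJob using these fields (the API version is the v2alpha1 one mentioned above; the name, schedule and image are placeholders):</p>
<pre><code>apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: job
            image: busybox
            command: ["echo", "hello"]
</code></pre>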
<p>Edit <strong>2018-09-29</strong>:</p>
<p>For newer K8S versions, updated links with documentation for this are here:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits" rel="noreferrer">CronJob - Job History Limits</a></p></li>
<li><p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#cronjob-v1beta1-batch" rel="noreferrer">CronJob API Spec</a></p></li>
</ul>
|
<p>I have a private Docker image registry running on a Linux VM (10.78.0.228:5000) and a Kubernetes master running on a different VM running Centos Linux 7.</p>
<p>I used the below command to create a POD:<br>
<code>kubectl create --insecure-skip-tls-verify -f monitorms-rc.yml</code></p>
<p>I get this:</p>
<blockquote>
<p>sample monitorms-mmqhm 0/1 ImagePullBackOff 0 8m</p>
</blockquote>
<p>and upon running:
<code>kubectl describe pod monitorms-mmqhm --namespace=sample</code></p>
<blockquote>
<p>Warning Failed Failed to pull image "10.78.0.228:5000/monitorms":
Error response from daemon: {"message":"Get
<a href="https://10.78.0.228:5000/v1/_ping" rel="noreferrer">https://10.78.0.228:5000/v1/_ping</a>: x509: certificate signed by unknown
authority"}</p>
</blockquote>
<p>Isn't Kubernetes supposed to ignore the server certificate for all operations during POD creation when the <code>--insecure-skip-tls-verify</code> is passed?</p>
<p>If not, how do I make it ignore the tls verification while pulling the docker image?</p>
<p><strong>PS:</strong></p>
<p><strong>Kubernetes version :</strong></p>
<p>Client Version: <code>v1.5.2</code>
Server Version: <code>v1.5.2</code></p>
<p>I have raised this issue here: <a href="https://github.com/kubernetes/kubernetes/issues/43924" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/43924</a></p>
| <p>The issue you're seeing is actually a docker issue. Using <code>--insecure-skip-tls-verify</code> is a valid arg to <code>kubectl</code>, but it only deals with the connection between <code>kubectl</code> and the kubernetes API server. The error you're seeing is actually because the docker daemon cannot log in to the private registry, because the cert it's using is unsigned.</p>
<p>Have a look at the <a href="https://docs.docker.com/registry/insecure/" rel="noreferrer">Docker insecure registry docs</a> and this should solve your problem.</p>
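<p>Concretely (a sketch, assuming the nodes read <code>/etc/docker/daemon.json</code>), you would mark the registry as insecure in the Docker daemon config on every node that pulls from it, then restart Docker:</p>
<pre><code>{
  "insecure-registries": ["10.78.0.228:5000"]
}
</code></pre>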
|
<p>If I have a GitLab project, which contains several sub-folders: </p>
<ul>
<li>two with java code (from java:alpine, to compile with maven and build containers)</li>
<li>one with nginx config (from openresty:alpine, to build a web server container)</li>
</ul>
<p>Each of the sub-projects has a Dockerfile, deployment.yml, and gitlab-ci.yml.</p>
<p>deployment.yml is similar for each of the sub-folders in the project, as all the sub-projects end up in a single multi-container kubernetes pod.</p>
<p>How can I set up this project to build and deploy to kubernetes only the container, which I edited by the last commit?</p>
| <p>I used a dirty hack: setting several build blocks in .gitlab-ci.yml, one for every sub-project, and setting for every one of them the <code>only</code> parameter:</p>
<pre><code>maven-build-akka:
  image: maven:3-jdk-8
  stage: build
  only:
    - /^akka-.*$/
  script:
    - cd akka
    - mvn package -B --settings ../settings.xml
  artifacts:
    paths:
      - akka/target/*.jar
</code></pre>
<p>And the same goes for the <code>docker build</code> stage, as sketched below. </p>
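<p>A hedged sketch of such a docker build job (the stage name, registry address and CI variable names are assumptions, not from the original project):</p>
<pre><code>docker-build-akka:
  image: docker:latest
  stage: package
  services:
    - docker:dind
  only:
    - /^akka-.*$/
  script:
    - cd akka
    - docker build -t "my-registry.example.com/myproject/akka:${CI_COMMIT_TAG}" .
    - docker push "my-registry.example.com/myproject/akka:${CI_COMMIT_TAG}"
</code></pre>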
<p>After that, if I push a tag like <code>akka-1.0.3</code>, appropriate pipeline jobs would be launched.</p>
<p>But, there's an issue with only one name for images in GitLab CI registry for a single project, so you should push images somewhere else (GCR, etc.)</p>
|
<p>I'm running kubernetes v1.5 (api reference <a href="https://kubernetes.io/docs/api-reference/v1.5/#service-v1" rel="nofollow noreferrer">here</a>). The field <code>service.spec.loadBalancerIp</code> should exist but I keep getting the following error when I attempt to set it. </p>
<p><code>error: error validating ".../service.yaml": error validating data: found invalid field loadBalancerIp for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false</code></p>
<p>service.yaml:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: some-service
spec:
type: LoadBalancer
loadBalancerIp: xx.xx.xx.xx
selector:
deployment: some-deployment
ports:
- protocol: TCP
port: 80
</code></pre>
<p>kubectl version output: </p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:34:32Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I'm running my cluster on gke. </p>
<p>Any thoughts?</p>
| <p>You have a typo in your spec. It should be <code>loadBalancerIP</code>, not <code>loadBalancerIp</code>. Note the uppercase <code>P</code></p>
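<p>In other words, only that one key in the spec from the question changes:</p>
<pre><code>spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx
</code></pre>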
|
<p>I'm deploying microservices to Kubernetes with Spring Boot, Fabric8's Spring-Cloug-Kubernetes and I now want to have a Hystrix Dashboard, provided by <a href="https://github.com/fabric8io/kubeflix" rel="nofollow noreferrer">Fabric8 Kubeflix</a>.</p>
<p>I have set labels on my deployments :</p>
<pre><code>metadata:
  labels:
    hystrix.cluster: default
    hystrix.enabled: true
spec:
  template:
    metadata:
      labels:
        hystrix.cluster: default
        hystrix.enabled: true
</code></pre>
<p>And it is on my pods too :</p>
<pre><code>metadata:
  labels:
    hystrix.cluster: default
    hystrix.enabled: true
</code></pre>
<p>In the turbine-server pod logs I have :</p>
<pre><code>2017-03-31T15:23:10.514696068Z 2017-03-31 15:23:10.514 INFO [turbine-server,,,] 1 --- [ Timer-0] c.n.t.discovery.InstanceObservable : Found hosts that have been previously terminated: 0
2017-03-31T15:23:10.514700568Z 2017-03-31 15:23:10.514 INFO [turbine-server,,,] 1 --- [ Timer-0] c.n.t.discovery.InstanceObservable : Hosts up:0, hosts down: 0
</code></pre>
<p>And its /discovery endpoint displays :</p>
<pre><code>Hystrix Endpoints:
</code></pre>
<p>turbine-server application.yml :</p>
<pre><code>spring:
  application:
    name: turbine-server
turbine:
  instanceUrlSuffix: :80/hystrix.stream
  aggregator:
    clusterConfig: default
  InstanceDiscovery:
    impl: io.fabric8.kubeflix.turbine.TurbineDiscovery
</code></pre>
<p>On my microservices I just have a </p>
<pre><code>@EnableCircuitBreaker
@EnableHystrix
</code></pre>
<p>in their main Application classes.</p>
<p>I set port 80 as turbine suffix because I have Kubernetes services exposing port 80 of pods :</p>
<pre><code>spec:
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 32193
  clusterIP: 10.0.72.62
  type: NodePort
</code></pre>
<p>When I do a /health on my service :</p>
<pre><code>"status": "UP",
"hystrix": {
"status": "UP"
},
</code></pre>
<p>And the /hystrix.stream :</p>
<pre><code>data: {"type":"HystrixCommand","name":"getLabel","group":"LabelController","currentTime":1491222462325,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":0,"rollingCountBadRequests":0,"rollingCountCollapsedRequests":0,"rollingCountEmit":0,"rollingCountExceptionsThrown":0,"rollingCountFailure":0,"rollingCountFallbackEmit":0,"rollingCountFallbackFailure":0,"rollingCountFallbackMissing":0,"rollingCountFallbackRejection":0,"rollingCountFallbackSuccess":0,"rollingCountResponsesFromCache":0,"rollingCountSemaphoreRejected":0,"rollingCountShortCircuited":0,"rollingCountSuccess":0,"rollingCountThreadPoolRejected":0,"rollingCountTimeout":0,"currentConcurrentExecutionCount":0,"rollingMaxConcurrentExecutionCount":0,"latencyExecute_mean":0,"latencyExecute":{"0":0,"25":0,"50":0,"75":0,"90":0,"95":0,"99":0,"99.5":0,"100":0},"latencyTotal_mean":0,"latencyTotal":{"0":0,"25":0,"50":0,"75":0,"90":0,"95":0,"99":0,"99.5":0,"100":0},"propertyValue_circuitBreakerRequestVolumeThreshold":20,"propertyValue_circuitBreakerSleepWindowInMilliseconds":5000,"propertyValue_circuitBreakerErrorThresholdPercentage":50,"propertyValue_circuitBreakerForceOpen":false,"propertyValue_circuitBreakerForceClosed":false,"propertyValue_circuitBreakerEnabled":true,"propertyValue_executionIsolationStrategy":"THREAD","propertyValue_executionIsolationThreadTimeoutInMilliseconds":1000,"propertyValue_executionTimeoutInMilliseconds":1000,"propertyValue_executionIsolationThreadInterruptOnTimeout":true,"propertyValue_executionIsolationThreadPoolKeyOverride":null,"propertyValue_executionIsolationSemaphoreMaxConcurrentRequests":10,"propertyValue_fallbackIsolationSemaphoreMaxConcurrentRequests":10,"propertyValue_metricsRollingStatisticalWindowInMilliseconds":10000,"propertyValue_requestCacheEnabled":true,"propertyValue_requestLogEnabled":true,"reportingHosts":1,"threadPool":"LabelController"}
data: {"type":"HystrixThreadPool","name":"LabelController","currentTime":1491222462325,"currentActiveCount":0,"currentCompletedTaskCount":2,"currentCorePoolSize":10,"currentLargestPoolSize":2,"currentMaximumPoolSize":10,"currentPoolSize":2,"currentQueueSize":0,"currentTaskCount":2,"rollingCountThreadsExecuted":0,"rollingMaxActiveThreads":0,"rollingCountCommandRejections":0,"propertyValue_queueSizeRejectionThreshold":5,"propertyValue_metricsRollingStatisticalWindowInMilliseconds":10000,"reportingHosts":1}
</code></pre>
<p>The hystrix-dashboard receive the following turbine.stream :</p>
<pre><code>{"reportingHostsLast10Seconds":0,"name":"meta","type":"meta","timestamp":1491222578286}
</code></pre>
<p>Versions :</p>
<pre><code>Spring Boot : 1.4.1.RELEASE
io.fabric8.kubernetes-client : 1.4.14
io.fabric8.kuberflix.turbine-discovery : 1.0.28
Spring Cloud Netflix : 1.2.4.RELEASE
Netflix hystrix : 1.5.6
</code></pre>
<p>What is wrong with my turbine-server ? Why is it not detecting hystrix.stream from my microservices ?</p>
| <p>OK, it works. I had to disable the default Eureka discovery:</p>
<pre><code>ribbon.eureka.enabled: false
eureka.client.enabled: false
</code></pre>
<p>so that the Kubernetes one is used.</p>
<p>And the instanceUrlSuffix still needs to be :8080/hystrix.stream, as it uses the internal k8s IP of the pods.</p>
|
<p>Jenkins is failing to build due to "Free Swap Space" being 0. I don't know how to fix that. The build process keeps hanging. Here part of the output:</p>
<p><a href="https://i.stack.imgur.com/2Ceeo.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2Ceeo.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/crw7E.png" rel="noreferrer"><img src="https://i.stack.imgur.com/crw7E.png" alt="enter image description here"></a></p>
<p>When I ssh into the instance and run <code>docker info</code>, I get a <code>WARNING: No swap limit support</code>. </p>
| <p>Your screenshot shows "waiting for next available executor" so try increasing the number of executors in jenkins ("Manage Jenkins" -> "Configure System" -> "# of executors").</p>
<p>Also, here is the info on how to create a swapfile (to avoid running out of memory when building large docker containers, etc.). The example creates a 4G swapfile at location /myswap:</p>
<pre><code>sudo dd if=/dev/zero of=/myswap count=4096 bs=1MiB
sudo chmod 600 /myswap
sudo mkswap /myswap
sudo swapon /myswap
</code></pre>
<p>To check that swap is working:</p>
<pre><code>swapon -s
</code></pre>
<p>To enable swap at boot, edit the fstab file:</p>
<pre><code>sudo nano /etc/fstab
</code></pre>
<p>Add this line:</p>
<pre><code>/myswap swap swap sw 0 0
</code></pre>
|
<p>I have two (independent) Kubernetes clusters, one setup as a GKE/GCE cluster and the other setup in an AWS environment created using the <code>kube-up.sh</code> script. Both clusters are working properly and I can start / stop pods, services, and everything else between.</p>
<p>I want pods located in these clusters to communicate with each other, but without exposing them as services. In order to achieve this, I have setup a VPN connection between the two clusters, and also a couple of routing / firewall rules to make sure the VMs / pods can see each other. </p>
<p>I can confirm that the following scenarios are working properly:</p>
<p>VM in GCE -> VM in AWS (OK)</p>
<p>Pod in GCE -> VM in AWS (OK)</p>
<p>VM in AWS -> VM in GCE (OK)</p>
<p>VM in AWS -> Pod in GCE (OK)</p>
<p>Pod in AWS -> Pod in GCE (OK)</p>
<p>However, I can't make a VM or Pod in GCE communicate with a Pod in AWS.</p>
<p>I was wondering if <em>there is</em> any way of making this work with current AWS VPC capabilities. It seems that when the AWS end of the VPN tunnel receives packets addressed to a pod, it doesn't really know what to do with them. On the other hand, GCE networking is automatically configured with routes that associate pod IPs to a GKE cluster. In this case, when a packet addressed to a pod reaches the GCE end of the VPN tunnel, it is correctly forwarded to its destination.</p>
<p>That's my configuration:</p>
<p><strong>GKE/GCE in us-east1</strong></p>
<p><em>Network</em>: <code>10.142.0.0/20</code></p>
<p><em>VM1 IP</em>: <code>10.142.0.2</code></p>
<p><em>Pod range (for VM1)</em> : <code>10.52.4.0/24</code></p>
<p><em>Pod1 IP</em>: <code>10.52.4.4</code> (running busybox)</p>
<p><em>Firewall rule</em>: Allows any traffic from <code>172.16.0.0/12</code></p>
<p><em>Route</em>: Sends everything with destination <code>172.16.0.0/12</code> to the VPN tunnel (automatically added when the VPN is created)</p>
<p><strong>AWS in ap-northeast-1</strong></p>
<p><em>VPC</em>: <code>172.24.0.0/16</code></p>
<p><em>Subnet1</em>: <code>172.24.1.0/24</code></p>
<p><em>VM3 IP (in Subnet1)</em>: <code>172.24.1.5</code></p>
<p><em>Kubernetes cluster network</em> (<code>NON_MASQUERADE_CIDR</code>): <code>172.16.0.0/16</code></p>
<p><em>Pod range</em> (<code>CLUSTER_IP_RANGE</code>): <code>172.16.128.0/17</code></p>
<p><em>Pod range (for VM3)</em> : <code>172.16.129.0/24</code></p>
<p><em>Pod3 IP</em>: <code>172.16.129.5</code></p>
<p><em>Security Group</em>: Allows any traffic from 10.0.0.0/8</p>
<p><em>Routes</em>: </p>
<ol>
<li>Destination <code>10.0.0.0/8</code> to the VPN tunnel</li>
<li>Destination <code>172.16.129.0/24</code> to VM3</li>
</ol>
<p>Has anyone tried to do something similar? Is there any way to configure AWS VPC VPN Gateway to ensure the packets destined to Pods are correctly sent to the VMs that host them? Any suggestions?</p>
| <p>What you are asking about is Kubernetes <a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/" rel="nofollow noreferrer">federation</a>. </p>
<blockquote>
<p>Federation makes it easy to manage multiple clusters. It does so by
providing 2 major building blocks:</p>
<ul>
<li><p>Sync resources across clusters: Federation provides the ability to keep resources in multiple clusters in sync. This can be used, for
example, to ensure that the same deployment exists in multiple
clusters.</p></li>
<li><p>Cross cluster discovery: It provides the ability to auto-configure DNS servers and load balancers with backends from all clusters. This
can be used, for example, to ensure that a global VIP or DNS record
can be used to access backends from multiple clusters.</p></li>
</ul>
</blockquote>
<p>Also, this one might help you: <a href="https://kubernetes.io/docs/admin/multiple-zones/" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/multiple-zones/</a></p>
|
<p>I can use kubectl to see to which subjects a cluster role is applied, eg:</p>
<pre><code>kubectl get clusterrolebindings system:node --all-namespaces -o json
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRoleBinding",
....
....
"subjects": [
{
"apiGroup": "rbac.authorization.k8s.io",
"kind": "Group",
"name": "system:nodes"
}
]
}
</code></pre>
<p>I would like to get this info the other way around, eg: I want to list all policies applied to the "system:nodes" subject.</p>
<p>How can I do that?</p>
| <p>There is no API for the reverse index. You can look up bindings and filter on ones containing the expected subject. For example, using bash, jq, and kubectl:</p>
<pre><code># $1 is kind (User, Group, ServiceAccount)
# $2 is name ("system:nodes", etc)
# $3 is namespace (optional, only applies to kind=ServiceAccount)
function getRoles() {
local kind="${1}"
local name="${2}"
local namespace="${3:-}"
kubectl get clusterrolebinding -o json | jq -r "
.items[]
|
select(
.subjects[]?
|
select(
.kind == \"${kind}\"
and
.name == \"${name}\"
and
(if .namespace then .namespace else \"\" end) == \"${namespace}\"
)
)
|
(.roleRef.kind + \"/\" + .roleRef.name)
"
}
$ getRoles Group system:authenticated
ClusterRole/system:basic-user
ClusterRole/system:discovery
$ getRoles ServiceAccount attachdetach-controller kube-system
ClusterRole/system:controller:attachdetach-controller
</code></pre>
|
<p>I am trying to deploy a simple node.js application with the new Kubernetes support in Bluemix. When I run the container I made, I get a <code>ImagePullBackOff</code> error, which means it can't pull down the image. </p>
<pre><code>NAME                          READY     STATUS             RESTARTS   AGE
hello-node-2399519400-6m8dz   0/1       ImagePullBackOff   0          13m
</code></pre>
<p>My Docker image uses the node.js base image.</p>
<pre><code>FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js
</code></pre>
<p>I deployed using:</p>
<pre><code>docker build -t hello-node:v1 .
kubectl run hello-node --image=hello-node:v1 --port=8080
</code></pre>
<p>I am thinking that Bluemix can't pull down the node.js image, but I am not certain. </p>
| <p>I see the docker build of the image, and I'm presuming that you're using kubectl with the exported cluster config (bx cs cluster-config ...), so that it's targeting your cluster.</p>
<p>Did you tag and push that image from your local docker into the bluemix registry, or to another remote registry that would be accessible from the container service? (My apologies if this is obvious - just didn't see the step there to tag and push it to a registry that would be available).</p>
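<p>For reference, a hedged sketch of that tag-and-push step (the registry hostname and namespace are assumptions; adjust them to your region and registry namespace):</p>
<pre><code>docker tag hello-node:v1 registry.ng.bluemix.net/<my_namespace>/hello-node:v1
docker push registry.ng.bluemix.net/<my_namespace>/hello-node:v1
kubectl run hello-node --image=registry.ng.bluemix.net/<my_namespace>/hello-node:v1 --port=8080
</code></pre>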
|
<p>In order to ease development in Docker, the code is attached to the containers through volumes. That way, there is no need to rebuild the images each time the code is changed.</p>
<p>So, is it correct to think to use the same idea in Kubernetes?</p>
<p><strong>PS:</strong> I know that the concepts <code>PersistentVolume</code> and <code>PersistentVolumeClaim</code> allow to attach volume, but they are intended for data.</p>
<hr>
<h1>Update</h1>
<p>To ease development, I do need to use the volume for both code and data. This saves me from rebuilding the images at each change of code. </p>
<p>Below this is what I am trying to do in minikube:</p>
<p><strong>the deployment</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
      - name: php-hostpath
        image: php:7.0-apache
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol-php-hostpath
          mountPath: /var/www/html
      volumes:
      - name: vol-php-hostpath
        hostPath:
          path: '/home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube/src/'
</code></pre>
<p><strong>the service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: php-hostpath
  namespace: default
  labels:
    app: php-hostpath
spec:
  selector:
    app: php-hostpath
  ports:
  - port: 80
    targetPort: 80
  type: "LoadBalancer"
</code></pre>
<p>The service and the deployment are created correctly in minikube:</p>
<pre><code>$ kubectl get pods -l app=php-hostpath
NAME READY STATUS RESTARTS AGE
php-hostpath-3796606162-bt94w 1/1 Running 0 19m
$ kubectl get service -l app=php-hostpath
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
php-hostpath 10.0.0.110 <pending> 80:30135/TCP 27m
</code></pre>
<p>The folder <code>src</code> and the file <code>src/index.php</code> are also created correctly.</p>
<pre><code><?php
echo "This is my first docker project";
</code></pre>
<p>Now I want to check that everything is running:</p>
<pre><code>$ kubectl exec -ti php-hostpath-3796606162-bt94w bash
root@php-hostpath-3796606162-bt94w:/var/www/html# ls
root@php-hostpath-3796606162-bt94w:/var/www/html# exit
exit
</code></pre>
<p>The folder <code>src</code> and the file <code>index.php</code> are not in <code>/var/www/html</code>!</p>
<p>Have I missed something?</p>
<p>PS: if I were in a production env, I would not put my code in a volume.
Thanks,</p>
| <p>Based on this <a href="https://github.com/kubernetes/minikube/blob/master/README.md" rel="nofollow noreferrer">doc</a>, host folder sharing is not implemented in the KVM driver yet, which is the driver I am actually using.</p>
<p>To overcome this, there are 2 solutions:</p>
<ul>
<li><p>Use the <strong>virtualbox</strong> driver so that you can mount your hostPath volume by changing the path on your localhost from <code>/home/THE_USR/...</code> to <code>/hosthome/THE_USR/...</code> </p></li>
<li><p>Mount your volume into the minikube VM with the command <code>$ minikube mount /home/THE_USR/...</code>. The command will return the path of your mounted volume on the minikube VM. An example is given below.</p></li>
</ul>
<h1>Example</h1>
<p><strong>(a) mounting a volume on the minikube VM</strong></p>
<p>The <code>minikube mount</code> command returned the path <code>/mount-9p</code>:</p>
<pre><code>$ minikube mount -v 3 /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube
Mounting /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube into /mount-9p on the minikubeVM
This daemon process needs to stay alive for the mount to still be accessible...
2017/03/31 06:42:27 connected
2017/03/31 06:42:27 >>> 192.168.42.241:34012 Tversion tag 65535 msize 8192 version '9P2000.L'
2017/03/31 06:42:27 <<< 192.168.42.241:34012 Rversion tag 65535 msize 8192 version '9P2000'
</code></pre>
<p><strong>(b) Specification of the path on the deployment</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
      - name: php-hostpath
        image: php:7.0-apache
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol-php-hostpath
          mountPath: /var/www/html
      volumes:
      - name: vol-php-hostpath
        hostPath:
          path: /mount-9p
</code></pre>
<p><strong>(c) Checking if mounting the volume worked well</strong></p>
<pre><code>amine@amine-Inspiron-N5110:~/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube$ kubectl exec -ti php-hostpath-3498998593-6mxsn bash
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo "This is my first docker project";
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html#
</code></pre>
<p><strong>PS:</strong> this kind of volume mounting is only for the development environment. If I were in a production environment, the code would not be mounted: it would be in the image.</p>
<p><strong>PS:</strong> I recommend the virtualbox driver instead of KVM.</p>
<p>Hope it helps others.</p>
|
<p>The <a href="https://github.com/kubernetes/minikube/blob/master/README.md" rel="nofollow noreferrer">kubernetes docs</a> provides for each OS and each driver the name VM when mounting a volume of type <strong>hostPath</strong>.</p>
<p>Nevertheless, it is missing this case:</p>
<ul>
<li>OS: linux</li>
<li>Driver: kvm</li>
<li>Host folder: /home</li>
<li>VM folder: ???</li>
</ul>
<p>This is the targeted deployment I would like to use in order to avoid recreating the image after each change of the code.</p>
<p>This is only for the development env. In the production env, the code will be directly in the image.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
      - name: php-hostpath
        image: php:7.0-apache
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol-php-hostpath
          mountPath: /var/www/html
      volumes:
      - name: vol-php-hostpath
        hostPath:
          path: /hosthome/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube/src/
</code></pre>
<p>Thanks...</p>
| <p>Based on this <a href="https://github.com/kubernetes/minikube/blob/master/README.md" rel="nofollow noreferrer">doc</a>, host folder sharing is not implemented in the KVM driver yet, which is the driver I am actually using.</p>
<p>To overcome this, there are 2 solutions:</p>
<ul>
<li><p>Use the <strong>virtualbox</strong> driver so that you can mount your hostPath volume by changing the path on your localhost from <code>/home/THE_USR/...</code> to <code>/hosthome/THE_USR/...</code> </p></li>
<li><p>Mount your volume into the minikube VM with the command <code>$ minikube mount /home/THE_USR/...</code>. The command will return the path of your mounted volume on the minikube VM. An example is given below.</p></li>
</ul>
<h1>Example</h1>
<p><strong>(a) mounting a volume on the minikube VM</strong></p>
<p>The <code>minikube mount</code> command returned the path <code>/mount-9p</code>:</p>
<pre><code>$ minikube mount -v 3 /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube
Mounting /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube into /mount-9p on the minikubeVM
This daemon process needs to stay alive for the mount to still be accessible...
2017/03/31 06:42:27 connected
2017/03/31 06:42:27 >>> 192.168.42.241:34012 Tversion tag 65535 msize 8192 version '9P2000.L'
2017/03/31 06:42:27 <<< 192.168.42.241:34012 Rversion tag 65535 msize 8192 version '9P2000'
</code></pre>
<p><strong>(b) Specification of the path on the deployment</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
      - name: php-hostpath
        image: php:7.0-apache
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol-php-hostpath
          mountPath: /var/www/html
      volumes:
      - name: vol-php-hostpath
        hostPath:
          path: /mount-9p
</code></pre>
<p><strong>(c) Checking if mounting the volume worked well</strong></p>
<pre><code>amine@amine-Inspiron-N5110:~/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube$ kubectl exec -ti php-hostpath-3498998593-6mxsn bash
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo "This is my first docker project";
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html#
</code></pre>
<p><strong>NB:</strong> this kind of volume mounting is only for the development environment. If I were in a production environment, the code would not be mounted: it would be in the image.</p>
<p>Hope it helps others.</p>
|
<p>I deployed a single node kubernetes cluster with "kubeadm". This deployed Kubernetes 1.6. According to the instructions (<a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/kubeadm/</a>) I need to install a network layer for pod networking.</p>
<p>I decided to give "weave" a try, as this was easy to install according to the documentation (<a href="https://www.weave.works/weave-net-kubernetes-integration/" rel="nofollow noreferrer">https://www.weave.works/weave-net-kubernetes-integration/</a>) using a simple one-liner: </p>
<pre><code>kubectl apply -f https://git.io/weave-kube
</code></pre>
<p>When I check the machine, I see there is a weave adapter now:</p>
<pre><code>weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet 10.32.0.1 netmask 255.240.0.0 broadcast 0.0.0.0
inet6 fe80::bca7:f5ff:fefb:c7a2 prefixlen 64 scopeid 0x20<link>
ether be:a7:f5:fb:c7:a2 txqueuelen 1000 (Ethernet)
RX packets 12 bytes 780 (780.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9 bytes 690 (690.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
</code></pre>
<p>I then deployed kube-dashboard with the provided yaml file:</p>
<pre><code>kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
</code></pre>
<p>This went well, but the pod got an IP assigned from the 172.17.0.0 range. This is the range defined in the Docker configuration file, not the one used by "weave".</p>
<p>This doesn't seem right to me. Shouldn't it get an IP in the weave range?</p>
<p>I've been researching the whole cni stuff, but the more I read the more I get confused on how all the different components (docker, weave, kubernetes, cni) are supposed to work together.</p>
| <p>What should happen is that Kubernetes should be installed with a flag to kubelet <code>--network-plugin=cni</code>, and then kubelet will look for a CNI config file in <code>/etc/cni/net.d</code>, and use the network config in that file to look for a CNI plugin (an executable) to call.</p>
<p>Installing Weave Net via <code>kubectl apply -f https://git.io/weave-kube</code> should create this config file (<code>/etc/cni/net.d/10-weave.conf</code>), and then after that pods should get an address in the Weave IP allocation range (by default 10.32.0.0/12)</p>
<p>Since you are seeing a pod with a Docker address, is it possible the <code>--network-plugin=cni</code> flag has gone missing?</p>
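<p>A quick way to check both of those on the node (the flag and the config path are the ones described above; output will vary by setup):</p>
<pre><code># is kubelet running with the CNI network plugin?
ps aux | grep kubelet | grep -- --network-plugin

# did the Weave Net install drop its CNI config?
cat /etc/cni/net.d/10-weave.conf
</code></pre>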
|
<p>Can someone please detail the steps necessary to install the <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns" rel="noreferrer">kube-dns</a> addon? I've downloaded the nearly 400MB git repo in the previous link and run <code>make</code> as instructed but get <code>Nothing to be done for 'all'.</code></p>
<p>The docs aren't clear what form add-ons exist in, and how to install them. The "Administrators guide" link there takes me to <a href="https://kubernetes.io/docs/admin/dns/" rel="noreferrer">this</a> unhelpful page.</p>
<p>I've tried <a href="https://stackoverflow.com/a/42315074/4978821">https://stackoverflow.com/a/42315074/4978821</a>, but got an <code>error validating data</code> message. Even if this worked, it seems like it'd be an unofficial and awkward solution.</p>
<p>Answers like this are also too vague: <a href="https://stackoverflow.com/a/36105547/4978821">https://stackoverflow.com/a/36105547/4978821</a>.</p>
<p>I'd be happy to create a pull request to improve the documentation, once I have a solution.</p>
<p><strong>Updated to clarify my issue:</strong></p>
<p>As mentioned by Aaron, the dns addon is enabled in minikube by default. Running <code>minikube addons list</code> shows that it is enabled. However, if I get into a bash shell for a running pod, like so: <code>kubectl exec -it node-controller-poqsl bash</code>, and try to reach my mongo service using ping, for example, it resolves to a public URL rather than the kubernetes service IP.</p>
| <p>The kube-dns addon should be enabled by default in minikube. You can run <code>kubectl get po -n kube-system</code> to check if the pod the addon-manager launches is there. If you don't see the pod listed, make sure that the addon is enabled in minikube by running <code>minikube addons list</code> and verifying that <code>kube-dns</code> is <code>enabled</code></p>
<p>Edit:
For me <code>kubectl get po -n kube-system</code> is a valid command, here is the output:</p>
<pre><code>$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
kube-addon-manager-minikube 1/1 Running 2 5d
kube-dns-v20-7ddvt 3/3 Running 6 5d
kubernetes-dashboard-rn54g 1/1 Running 2 5d
</code></pre>
<p>You can see from this that the kube-dns pods are running correctly. Can you verify that your kube-dns pods are in the <code>Running</code> state?</p>
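<p>If they are running but service names still resolve to public addresses, a quick in-cluster check (assuming a service named <code>mongo</code> in the <code>default</code> namespace, as in the update above) is:</p>
<pre><code>kubectl run -it dns-test --image=busybox --restart=Never -- nslookup mongo.default.svc.cluster.local
# a healthy kube-dns answers with the service's cluster IP here
</code></pre>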
|
<p>Running <code>kubectl exec -it <PODNAME> -- /bin/bash</code> prints a lot of log noise into the shell:</p>
<pre><code>) Data frame handling
I0331 17:46:15.486652 3807 logs.go:41] (0xc4201158c0) Data frame received for 5
I0331 17:46:15.486671 3807 logs.go:41] (0xc42094a000) (5) Data frame handling
I0331 17:46:15.486682 3807 logs.go:41] (0xc42094a000) (5) Data frame sent
root@hello-node-2399519400-6q6s3:/# I0331 17:46:16.667823 3807 logs.go:41] (0xc420687680) (3) Writing data frame
I0331 17:46:16.669223 3807 logs.go:41] (0xc4201158c0) Data frame received for 5
I0331 17:46:16.669244 3807 logs.go:41] (0xc42094a000) (5) Data frame handling
I0331 17:46:16.669254 3807 logs.go:41] (0xc42094a000) (5) Data frame sent
root@hello-node-2399519400-6q6s3:/# I0331 17:46:17.331753 3807 logs.go:41] (0xc420687680) (3) Writing data frame
I0331 17:46:17.333338 3807 logs.go:41] (0xc4201158c0) Data frame received for 5
I0331 17:46:17.333358 3807 logs.go:41] (0xc42094a000) (5) Data frame handling
I0331 17:46:17.333369 3807 logs.go:41] (0xc42094a000) (5) Data frame sent
I0331 17:46:17.333922 3807 logs.go:41] (0xc4201158c0) Data frame received for 5
I0331 17:46:17.333943 3807 logs.go:41] (0xc42094a000) (5) Data frame handling
I0331 17:46:17.333956 3807 logs.go:41] (0xc42094a000) (5) Data frame sent
root@hello-node-2399519400-6q6s3:/# I0331 17:46:17.738444 3807 logs.go:41] (0xc420687680) (3) Writing data frame
I0331 17:46:17.740563 3807 logs.go:41] (0xc4201158c0) Data frame received for 5
I0331 17:46:17.740591 3807 logs.go:41] (0xc42094a000) (5) Data frame handling
I0331 17:46:17.740606 3807 logs.go:41] (0xc42094a000) (5) Data frame sent
</code></pre>
<p>It is a little bit better without the 't' option:</p>
<pre><code>kubectl exec -i hello-4103519535-hcdm6 -- /bin/bash
I0331 18:29:06.918584 4992 logs.go:41] (0xc4200878c0) (0xc4204c5900) Create stream
I0331 18:29:06.918714 4992 logs.go:41] (0xc4200878c0) (0xc4204c5900) Stream added, broadcasting: 1
I0331 18:29:06.928571 4992 logs.go:41] (0xc4200878c0) Reply frame received for 1
I0331 18:29:06.928605 4992 logs.go:41] (0xc4200878c0) (0xc4203ffc20) Create stream
I0331 18:29:06.928614 4992 logs.go:41] (0xc4200878c0) (0xc4203ffc20) Stream added, broadcasting: 3
I0331 18:29:06.930565 4992 logs.go:41] (0xc4200878c0) Reply frame received for 3
I0331 18:29:06.930603 4992 logs.go:41] (0xc4200878c0) (0xc4204c59a0) Create stream
I0331 18:29:06.930615 4992 logs.go:41] (0xc4200878c0) (0xc4204c59a0) Stream added, broadcasting: 5
I0331 18:29:06.932455 4992 logs.go:41] (0xc4200878c0) Reply frame received for 5
I0331 18:29:06.932499 4992 logs.go:41] (0xc4200878c0) (0xc420646000) Create stream
I0331 18:29:06.932511 4992 logs.go:41] (0xc4200878c0) (0xc420646000) Stream added, broadcasting: 7
I0331 18:29:06.935363 4992 logs.go:41] (0xc4200878c0) Reply frame received for 7
echo toto
I0331 18:29:08.943066 4992 logs.go:41] (0xc4203ffc20) (3) Writing data frame
I0331 18:29:08.947811 4992 logs.go:41] (0xc4200878c0) Data frame received for 5
I0331 18:29:08.947837 4992 logs.go:41] (0xc4204c59a0) (5) Data frame handling
I0331 18:29:08.947851 4992 logs.go:41] (0xc4204c59a0) (5) Data frame sent
toto
</code></pre>
<p>Is there a way to disable that?
Could it come from my environment?
I am still not sure whether it comes from Kubernetes or from my environment.</p>
| <p>Just found it:
<code>unset DEBUG</code> fixed it!</p>
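<p>In other words, the noise comes from the shell environment rather than from Kubernetes itself; the fix above suggests <code>kubectl</code> emits this frame-level logging whenever a <code>DEBUG</code> variable is set. A quick check before exec'ing:</p>
<pre><code>echo $DEBUG                              # if this prints anything, expect the extra log lines
unset DEBUG
kubectl exec -it <PODNAME> -- /bin/bash  # should now give a clean shell
</code></pre>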
|
<p>I would like to deploy an application cluster by managing my deployment via k8s Deployment object. The documentation has me extremely confused. My basic layout has the following components that scale independently:</p>
<ol>
<li>API server</li>
<li>UI server</li>
<li>Redis cache</li>
<li>Timer/Scheduled task server</li>
</ol>
<p>Technically, all 4 above belong in separate pods that are scaled independently. </p>
<p>My questions are:</p>
<ol>
<li>Do I need to create <code>pod.yml</code> files and then somehow reference them in <code>deployment.yml</code> file or can a deployment file also embed pod definitions?</li>
<li>K8s documentation seems to imply that the <code>spec</code> portion of <code>Deployment</code> is equivalent to defining <em>one</em> pod. Is that correct? What if I want to declaratively describe multi-pod deployments? Do I do need multiple deployment.yml files?</li>
</ol>
| <p>Pagid's answer has most of the basics. You should create 4 <code>Deployments</code> for your scenario. Each deployment will create a <code>ReplicaSet</code> that schedules and supervises the collection of <code>PODs</code> for the <code>Deployment</code>.</p>
<p>Each <code>Deployment</code> will most likely also require a <code>Service</code> in front of it for access. I usually create a single yaml file that has a <code>Deployment</code> and the corresponding <code>Service</code> in it. Here is an example for an <code>nginx.yaml</code> that I use:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
name: nginx
labels:
app: nginx
spec:
type: NodePort
ports:
- port: 80
name: nginx
targetPort: 80
nodePort: 32756
selector:
app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginxdeployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginxcontainer
image: nginx:latest
imagePullPolicy: Always
ports:
- containerPort: 80
</code></pre>
<p>Here some additional information for clarification:</p>
<ul>
<li>A <code>POD</code> is not a scalable unit. A <code>Deployment</code> that schedules PODs is.</li>
<li>A <code>Deployment</code> is meant to represent a single group of PODs fulfilling a single purpose together.</li>
<li>You can have many <code>Deployments</code> work together in the virtual network of the cluster.</li>
<li>For accessing a <code>Deployment</code> that may consist of many <code>PODs</code> running on different nodes you have to create a Service.</li>
<li>Deployments are meant to contain stateless services. If you need to store a state you need to create <code>StatefulSet</code> instead (e.g. for a database service).</li>
</ul>
|
<p>Just ran the following command to create a new ConfigMap from a file:<br>
<code>kubectl create configmap foo --from-file=foo</code></p>
<p>It gets created successfully.</p>
<p>Now when I run,<br>
<code>kubectl get configmaps foo -o yaml</code></p>
<p>I see the following lines in the resulting output: </p>
<pre><code>apiVersion: v1
data:
foo: |+
VAR1=value1
VAR2=value2
</code></pre>
<p><strong>What does the plus sign in this line mean?</strong><br>
<code>foo: |+</code></p>
<p>It doesn't appear in any of my other ConfigMaps.</p>
| <p><code>|+</code> is not kubernetes specific, it's part of yaml.</p>
<p>From <a href="http://lzone.de/cheat-sheet/YAML" rel="noreferrer">http://lzone.de/cheat-sheet/YAML</a>:</p>
<pre><code># + indicator (keep extra newlines after block)
content: |+
Arbitrary free text with two newlines after
foo: ...
</code></pre>
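<p>For comparison, a minimal sketch of the three block-chomping variants (plain YAML, not Kubernetes-specific):</p>
<pre><code>clipped: |     # default: keep exactly one trailing newline
  some text
stripped: |-   # strip all trailing newlines
  some text
kept: |+       # keep every trailing newline that followed the block in the source
  some text

</code></pre>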
|
<p>So I'm using Kubernetes for a side project and it's great. It's cheaper to run for a small project like the one I'm on (a small cluster of 3-5 instances gives me basically everything I need for ~$30/month on GCP). </p>
<p>The only area where I'm struggling is in trying to use the kubernetes Ingress resource to map into cluster and fan out to my microservices (they're small Go or Node backends). I have the configuration setup for the ingress to map to different services and there's no problem there. </p>
<p>I understand that you can really easily have GCP spin up a LoadBalancer when you create an ingress resource. This is fine, but it also represents another $20-ish/month that adds to the cost of the project. Once/if this thing gets some traction, that could be ignored, but for now, and also for the sake of understanding Kubernetes better, I want to do the following:</p>
<ul>
<li>get a static IP from GCP, </li>
<li>use it w/ an ingress resource</li>
<li>host the load-balancer in the same cluster (using the nginx load balancer)</li>
<li>avoid paying for the external load balancer</li>
</ul>
<p>Is there any way this can even be done using Kubernetes and ingress resources?</p>
<p>Thanks!</p>
| <p>Yes this is possible. Deploy your ingress controller, and deploy it with a NodePort service. Example:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: nginx-ingress-controller
namespace: kube-system
labels:
k8s-app: nginx-ingress-controller
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 32080
protocol: TCP
name: http
- port: 443
targetPort: 443
nodePort: 32443
protocol: TCP
name: https
selector:
k8s-app: nginx-ingress-controller
</code></pre>
<p>Now, create an ingress with a DNS entry:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: myapp.example.com
http:
paths:
- path: /
backend:
serviceName: my-app-service #obviously point this to a valid service + port
servicePort: 80
</code></pre>
<p>Now, assuming your static IP is attached to any kubernetes node running kube-proxy, have DNS updated to point to the static IP, and you should be able to visit <code>myapp.example.com:32080</code> and the ingress will map you back to your app.</p>
<p>A few additional things:</p>
<p>If you want to use a lower port than 32080, then bear in mind that if you're using CNI networking, <a href="https://github.com/kubernetes/kubernetes/issues/23920" rel="noreferrer">you'll have trouble with hostport</a>. It's recommended to have a load balancer listening on port 80; I guess you could just have nginx set up to do a proxy pass, but it becomes difficult. This is why a load balancer with your cloud provider is recommended :)</p>
|
<p>Creating a new ConfigMap from a file:<br>
<code>kubectl create configmap foo --from-file=foo</code></p>
<p>This is how the ConfigMap looks internally:
<code>kubectl get configmaps foo -o yaml</code></p>
<pre><code>apiVersion: v1
data:
foo: |
VAR1=value1
VAR2=value2
</code></pre>
<p>Then, I use this ConfigMap to create a set of environment variables in the container:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
name: app
name: app
spec:
containers:
- name: app-server
image: app:latest
ports:
- containerPort: 3000
envFrom:
- configMapRef:
name: foo
command: ["/bin/bash", "-c", "printenv"]
</code></pre>
<p>When the container command runs, I see the following output for <code>printenv</code>:</p>
<pre><code>foo=VAR1=value1
VAR2=value2
</code></pre>
<p>So, an <code>echo $foo</code> command in the pod returns:<br>
<code>VAR1=value1 VAR2=value2</code></p>
<p>According to the documentation for ConfigMap with <code>--from-file</code>, this is expected behaviour.</p>
<p>What would be a creative way (and the proper place) to somehow get the values of this file available to the pod as individual env variables <code>VAR1</code>, <code>VAR2</code>, <code>VAR3</code>, etc. ?</p>
| <p>This is not possible with the current version (1.6.x) of Kubernetes. As written in the <a href="https://kubernetes.io/docs/user-guide/kubectl/v1.6/#-em-configmap-em-" rel="nofollow noreferrer">offical documentation</a> for <code>kubectl create configmap</code>:</p>
<blockquote>
<p><strong>--from-file:</strong> Key file can be specified using its file path, in which case file basename will be used as configmap key, or optionally with a key and file path, in which case the given key will be used. Specifying a directory will iterate each named file in the directory whose basename is a valid configmap key.</p>
</blockquote>
<p>When you want to create a configmap which is used like this, as input for the <code>envFrom</code> container configuration you could create it with the <code>--from-literal</code> option like this:</p>
<pre><code>kubectl create configmap foo --from-literal=var1=value1 --from-literal=var2=value2
</code></pre>
<p>To still keep the file, you could transform your file into somethings which then runs this command like this:</p>
<pre><code>eval "kubectl create configmap foo $(cat foo.txt | sed -e 's/^/--from-literal=/' | tr "\n" ' ')"
</code></pre>
<p>Along with that maybe checking the outstanding proposals like the <a href="https://github.com/kubernetes/kubernetes/issues/37011" rel="nofollow noreferrer"><code>--flatten</code> flag proposal on Github</a> are worth your time.</p>
<p>Also keep an eye on the variable naming. IIRC, <code>VAR1</code> and <code>VAR2</code> are not valid property names - they have to be lowercase, which might cause some issues when passing them on.</p>
|
<p>I am trying to understand how development is done efficiently with dockerized applications, specifically on Kubernetes. I am struggling to find the proper way to go about live editing containers. </p>
<p>In the past with Docker, I would just use something like Nodemon to watch for files to change locally, and then when I had the app finished I would dockerize it and deploy. With Kubernetes (minikube) my first impression is that I'm supposed to rebuild the container with each edit. Surely this cannot be the way people develop here - what am I missing? Am I supposed to edit locally and then dockerize->k8s deploy? That doesn't seem right. </p>
<p>I am looking for a way to sync all my local changes to a docker container which then reboots the kubernetes pod with the new changes, so I can read from the logs during development. If this is odd, please recommend me a superior way.</p>
<p>Thanks</p>
| <p>Kubernetes is a container orchestration tool. It's not a development platform. It's designed to ease the deployment of hundreds of containers, and deal with lifecycle/networking/storage issues. </p>
<p>If you're developing your application, you really don't need kubernetes/minikube at the moment. My suggested workflow would be:</p>
<ul>
<li>Develop your application in a local docker container. Iterate as you go until you're happy.</li>
<li>Create a snapshot release, tag the docker image and push it to a repo</li>
<li>Then deploy it.</li>
<li>When you need to update, apply a new tag.</li>
</ul>
<p>The benefit of using Docker is that it will deploy exactly the same on your local laptop, as well as to a production k8s cluster, so once you've arrived at the stage where you're ready to tag/push images, you can be assured the deployment process will be exactly the same.</p>
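<p>A minimal sketch of that loop (registry, image and deployment names are placeholders):</p>
<pre><code># iterate locally until you're happy
docker build -t myrepo/myapp:0.1.0 .
docker run --rm -p 3000:3000 myrepo/myapp:0.1.0

# snapshot release: push the tag and roll it out to the cluster
docker push myrepo/myapp:0.1.0
kubectl set image deployment/myapp myapp=myrepo/myapp:0.1.0
</code></pre>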
<p>Minikube isn't for local development, it's for people to test out kubernetes locally, and possibly develop kubernetes itself, it's not designed as a vagrant alternative.</p>
|
<p>I want to delete a pod permanently so I can create the deployment.yaml and service.yaml again from scratch, so I tried:</p>
<pre><code> kubectl delete pod <pod>
</code></pre>
<p>and the pod is still there, also tried:</p>
<pre><code>kubectl delete pods <pod> --grace-period=0
</code></pre>
<p>and that didn't work either.</p>
<p>The only thing that worked was setting the deployment to <code>replicas: 0</code> and then applying it, but when I try to create a new deployment I get:</p>
<blockquote>
<p>Error from server: error when creating "myService/deployment.yaml":
deployments.extensions "myService" already exists</p>
</blockquote>
| <p>Pods are created by the Deployment, so when you delete a Pod the Deployment automatically recreates it based on the <code>replicas</code> value. You have to delete the Deployment and then create it again.</p>
<p>You can use:</p>
<pre><code>kubectl delete -f deployment.yml
kubectl create -f deployment.yml
</code></pre>
|
<p>I'm a bit confused by this, because it was working for days without issue. </p>
<p>I used to be able to join nodes to my cluster without issue. I would run the below on the master node:</p>
<pre><code>kubeadm init .....
</code></pre>
<p>After that, it would generate a join command and token to issue to the other nodes I want to join. Something like this:</p>
<pre><code>kubeadm join --token 99385f.7b6e7e515416a041 192.168.122.100
</code></pre>
<p>I would run this on the nodes, and they would join without issue. The next morning, all of a sudden this stopped working. This is what I see when I run the command now:</p>
<pre><code>[kubeadm] WARNING: kubeadm is in alpha, please do not use it for
production clusters.
[preflight] Running pre-flight checks
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.122.100:9898/cluster-info/v1/?token-id=99385f"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.122.100:6443]
[bootstrap] Trying to connect to endpoint https://192.168.122.100:6443
[bootstrap] Detected server version: v1.6.0-rc.1
[bootstrap] Successfully established connection with endpoint "https://192.168.122.100:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
failed to request signed certificate from the API server [cannot create certificate signing request: the server could not find the requested resource]
</code></pre>
<p>It seems like the node I'm trying to join does successfully connect to the API server on the master node, but for some reason, it now fails to request a certificate.</p>
<p>Any thoughts?</p>
| <p>For me,</p>
<pre><code>sudo service kubelet restart
</code></pre>
<p>didn't work.
What I did was the following:</p>
<ul>
<li>Copied the contents of /etc/kubernetes/* from the master node to the slave nodes, into the same location /etc/kubernetes</li>
</ul>
<p>I then ran the "kubeadm join ..." command again. This time the nodes joined the cluster without any complaint.</p>
<p>I think this is a temporary hack, but it worked!</p>
|
<p>I am still kind of getting my feet under me with kubernetes. We have a spring-boot based app with ~17 microservices running on Kubernetes 1.4.2 on AWS. When I run this app on an AWS cluster of 4 m3.medium workers, my containers are all in the 200-300MB range of memory usage at rest (with a couple exceptions). For production I installed the same set of services on 4 m4.large workers and instantly my memory moved up to 700-1000MB of memory on the same containers with virtually identical specs. I am trying to figure out who is the offending party here - Springboot, Docker or Kubernetes.</p>
<p>Has anyone seen behavior like this before? </p>
<p>I know I can cap the resources using Kubernetes limits, but I really don't want to do that given that I know the application can run just fine on smaller machines and have a smaller footprint. Just looking for some advice on where the problem might be.</p>
<p>EDIT: One more piece or pertinent information. I am using CoreOS stable 1298.6.0 as the host OS image.</p>
| <p>In my opinion, the problem is that your processes inside the container see the total host RAM available as the RAM available for them. </p>
<p>If you use a bigger instance, the JVM will try to use even more RAM. You should try to limit your Java virtual machine heap with -Xmx300m (adjust this value to what your app needs).
I recommend reading <a href="https://dzone.com/articles/java-inside-docker-what-you-must-know-to-not-fail" rel="nofollow noreferrer">this</a> article, where it is explained in an easy and clear way.</p>
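<p>As a rough sketch, the heap cap can be passed alongside the Kubernetes memory limit in the pod spec; note that a <code>JAVA_OPTS</code>-style variable only works if the image's entrypoint actually passes it to the JVM (names and values here are illustrative):</p>
<pre><code>containers:
- name: app-server
  image: myapp:latest
  env:
  - name: JAVA_OPTS              # assumption: the entrypoint appends this to the java command line
    value: "-Xms256m -Xmx300m"
  resources:
    limits:
      memory: "512Mi"            # leave headroom above the heap for metaspace, threads, buffers
</code></pre>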
|
<p>Let's say I have a simple deployment kubernetes config:</p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: backend
spec:
replicas: 1
template:
metadata:
name: backend
spec:
containers:
- name: backend
image: backend:1.0.0
resources:
limits:
memory: "500Mi"
cpu: "100m"
imagePullPolicy: Always
</code></pre>
<p>And I want to have continuous deployment. </p>
<ul>
<li><p>One way to do it would be to simply use this file for creation, and for updates run something like <code>kubectl set image deployment/backend backend=backend:1.1.0</code>. But what if at some point later I wanted to change e.g. the limits? I would have to find out the current version running, replace that in the yaml file and apply it. That seems cumbersome.</p></li>
<li><p>Google seems to use <code>sed</code> for replacing <a href="https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/blob/master/sample-app/Jenkinsfile" rel="nofollow noreferrer">image version in their example</a>, and this kind of works fine if your kubernetes config files are stored in the same repo as your code. And what if they are stored elsewhere, or your build takes quite some time? So just to change a single property you would have to run through the whole build process, which might be too long</p></li>
<li><p>Have a script that autogenerates yaml file after each deployment based on version input, and always edit the template only (but then again, where do you store the last deployed version?)</p></li>
<li><p>Some third party tool?</p></li>
</ul>
| <p>Helm charts have become an increasingly popular way of managing kubernetes deployments. In short, they let you generate deployments by defining only the variables you need in a simple configuration, and they come with a built-in upgrade mechanism as well. In other words, they provide meta-deployments.</p>
<p>Refer to the following <a href="https://github.com/kubernetes/helm/blob/master/docs/using_helm.md#helm-upgrade-and-helm-rollback-upgrading-a-release-and-recovering-on-failure" rel="nofollow noreferrer">docs.</a></p>
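<p>A minimal sketch of what that looks like in practice (chart path, release name and value keys are hypothetical and depend on how the chart is written):</p>
<pre><code># bump only the image tag, leaving the rest of the rendered manifests untouched
helm upgrade backend ./backend-chart --set image.tag=1.1.0

# later, change a resource limit independently of the image version
helm upgrade backend ./backend-chart --set resources.limits.memory=750Mi

# roll back to a previous revision if the release misbehaves
helm rollback backend 1
</code></pre>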
|
<p>My website becomes inaccessible from the browser for a few minutes after it has been idle for around 30 minutes or more. I have to reload the page several times before it displays, and I am not sure which component to debug. </p>
<p>The stack I am running is a Golang app behind nginx, which runs on a kubernetes ingress. Here is part of my nginx.conf:</p>
<pre><code> daemon off;
worker_processes 2;
pid /run/nginx.pid;
worker_rlimit_nofile 523264;
events {
multi_accept on;
worker_connections 16384;
use epoll;
}
http {
real_ip_header X-Forwarded-For;
set_real_ip_from 0.0.0.0/0;
real_ip_recursive on;
geoip_country /etc/nginx/GeoIP.dat;
geoip_city /etc/nginx/GeoLiteCity.dat;
geoip_proxy_recursive on;
# lua section to return proper error codes when custom pages are used
lua_package_path '.?.lua;./etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;';
init_by_lua_block {
require("error_page")
}
sendfile on;
aio threads;
tcp_nopush on;
tcp_nodelay on;
log_subrequest on;
reset_timedout_connection on;
keepalive_timeout 75s;
client_header_buffer_size 1k;
large_client_header_buffers 4 8k;
types_hash_max_size 2048;
server_names_hash_max_size 512;
server_names_hash_bucket_size 64;
map_hash_bucket_size 64;
include /etc/nginx/mime.types;
default_type text/html;
gzip on;
gzip_comp_level 5;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
gzip_proxied any;
server_tokens on;
log_format upstreaminfo '$remote_addr - '
'[$proxy_add_x_forwarded_for] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" '
'$request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';
map $request_uri $loggable {
default 1;
}
access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
error_log /var/log/nginx/error.log notice;
resolver 10.131.240.10 valid=30s;
# Retain the default nginx handling of requests without a "Connection" header
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# trust http_x_forwarded_proto headers correctly indicate ssl offloading
map $http_x_forwarded_proto $pass_access_scheme {
default $http_x_forwarded_proto;
'' $scheme;
}
map $http_x_forwarded_port $pass_server_port {
default $http_x_forwarded_port;
'' $server_port;
}
# map port 442 to 443 for header X-Forwarded-Port
map $pass_server_port $pass_port {
442 443;
default $pass_server_port;
}
# Map a response error watching the header Content-Type
map $http_accept $httpAccept {
default html;
application/json json;
application/xml xml;
text/plain text;
}
map $httpAccept $httpReturnType {
default text/html;
json application/json;
xml application/xml;
text text/plain;
}
server_name_in_redirect off;
port_in_redirect off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# turn on session caching to drastically improve performance
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_session_timeout 10m;
# allow configuring ssl session tickets
ssl_session_tickets on;
# slightly reduce the time-to-first-byte
ssl_buffer_size 4k;
# allow configuring custom ssl ciphers
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_prefer_server_ciphers on;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
upstream default-ui-80 {
sticky hash=sha1 name=route httponly;
server 10.128.2.104:4000 max_fails=0 fail_timeout=0;
server 10.128.4.37:4000 max_fails=0 fail_timeout=0;
}
server {
server_name app.com;
listen [::]:80;
listen 442 ssl http2;
# PEM sha: a51bd3f56b3ec447945f1f92f0ad140bb8134d11
ssl_certificate /ingress-controller/ssl/default-linker-secret.pem;
ssl_certificate_key /ingress-controller/ssl/default-linker-secret.pem;
more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains; preload";
location / {
set $proxy_upstream_name "default-ui-80";
port_in_redirect off;
# enforce ssl on server side
if ($scheme = http) {
return 301 https://$host$request_uri;
}
client_max_body_size "1024m";
proxy_set_header Host $host;
# Pass Real IP
proxy_set_header X-Real-IP $remote_addr;
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers
proxy_connect_timeout 5s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_http_version 1.1;
proxy_pass http://default-ui-80;
}
}
}
</code></pre>
<p>ingress controller</p>
<hr>
<pre><code> apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-lb
spec:
replicas: 1
selector:
k8s-app: nginx-ingress-lb
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
name: nginx-ingress-lb
spec:
terminationGracePeriodSeconds: 60
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2
name: nginx-ingress-lb
imagePullPolicy: Always
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
# use downward API
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-ingress-sticky-session
- --configmap=$(POD_NAMESPACE)/nginx-settings-configmap
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-configmaps
- --v=2
</code></pre>
<p>ingress</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: foo-prod
annotations:
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/affinity: "cookie"
ingress.kubernetes.io/session-cookie-name: "route"
ingress.kubernetes.io/session-cookie-hash: "sha1"
nginx.org/client-max-body-size: "1024m"
spec:
tls:
- hosts:
- foo.io
secretName: foo-secret
rules:
- host: foo.io
http:
paths:
- backend:
serviceName: foo.io
servicePort: 80
</code></pre>
<p>service</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: foo-prod-nginx
spec:
type: LoadBalancer
ports:
- port: 80
name: http
- port: 443
name: https
selector:
app: nginx-ingress-controller
</code></pre>
| <p>The service.type=LoadBalancer allocates a public IP per k8s service, which is not how ingress works. You should expose your service as a NodePort, and let the ingress route traffic to it. <a href="https://github.com/kubernetes/ingress/blob/master/controllers/gce/ingress-app.yaml" rel="nofollow noreferrer">example here</a></p>
<p>Also, if you are going to use nginx as the ingress controller, you should use endpoints instead of services. <a href="https://github.com/kubernetes/ingress/tree/master/controllers/nginx#why-endpoints-and-not-services" rel="nofollow noreferrer">Here is why</a> </p>
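<p>A minimal sketch of the NodePort variant of the service from the question (same name and selector, only the type and node ports change; the nodePort values are arbitrary):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: foo-prod-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    name: http
    nodePort: 30080
  - port: 443
    name: https
    nodePort: 30443
  selector:
    app: nginx-ingress-controller
</code></pre>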
|
<p>I have an ACS Kubernetes cluster that was created with an agent count of 1. I went to the portal to increase the agent count to 2 and received a generic error saying the provisioning of resource(s) for container service failed.</p>
<p><a href="https://i.stack.imgur.com/8DfHy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8DfHy.jpg" alt="error!"></a></p>
<p>Looking at the activity logs, there is a bit more information.</p>
<blockquote>
<p>Write ContainerServices - PreconditionFailed - Provisioning of resource(s) for container service 'xxxxxxx' in
resource group 'xxxxxxxx' failed.</p>
</blockquote>
<hr>
<blockquote>
<p>Validate - InvalidTemplate - Deployment template validation failed: 'The resource 'Microsoft.Network/networkSecurityGroups/k8s-master-3E4D5818-nsg' is not defined in the template. Please see <a href="https://aka.ms/arm-template" rel="nofollow noreferrer">https://aka.ms/arm-template</a> for usage details.'.</p>
</blockquote>
<p>Trying to change it via the Azure CLI 2.0 also returns the same error.</p>
<hr>
<p>Update: The cluster was stood up using an ARM template with a single container service resource based on the sample in the <a href="https://github.com/Azure/azure-quickstart-templates/tree/master/101-acs-kubernetes" rel="nofollow noreferrer">quickstart templates repo</a>.</p>
<pre><code>{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"dnsNamePrefix": {
"type": "string",
"metadata": {
"description": "Sets the Domain name prefix for the cluster. The concatenation of the domain name and the regionalized DNS zone make up the fully qualified domain name associated with the public IP address."
}
},
"agentCount": {
"type": "int",
"defaultValue": 1,
"metadata": {
"description": "The number of agents for the cluster. This value can be from 1 to 100 (note, for Kubernetes clusters you will also get 1 or 2 public agents in addition to these seleted masters)"
},
"minValue":1,
"maxValue":100
},
"agentVMSize": {
"type": "string",
"defaultValue": "Standard_D2_v2",
"allowedValues": [
"Standard_A0", "Standard_A1", "Standard_A2", "Standard_A3", "Standard_A4", "Standard_A5",
"Standard_A6", "Standard_A7", "Standard_A8", "Standard_A9", "Standard_A10", "Standard_A11",
"Standard_D1", "Standard_D2", "Standard_D3", "Standard_D4",
"Standard_D11", "Standard_D12", "Standard_D13", "Standard_D14",
"Standard_D1_v2", "Standard_D2_v2", "Standard_D3_v2", "Standard_D4_v2", "Standard_D5_v2",
"Standard_D11_v2", "Standard_D12_v2", "Standard_D13_v2", "Standard_D14_v2",
"Standard_G1", "Standard_G2", "Standard_G3", "Standard_G4", "Standard_G5",
"Standard_DS1", "Standard_DS2", "Standard_DS3", "Standard_DS4",
"Standard_DS11", "Standard_DS12", "Standard_DS13", "Standard_DS14",
"Standard_GS1", "Standard_GS2", "Standard_GS3", "Standard_GS4", "Standard_GS5"
],
"metadata": {
"description": "The size of the Virtual Machine."
}
},
"linuxAdminUsername": {
"type": "string",
"defaultValue": "azureuser",
"metadata": {
"description": "User name for the Linux Virtual Machines."
}
},
"orchestratorType": {
"type": "string",
"defaultValue": "Kubernetes",
"allowedValues": [
"Kubernetes",
"DCOS",
"Swarm"
],
"metadata": {
"description": "The type of orchestrator used to manage the applications on the cluster."
}
},
"masterCount": {
"type": "int",
"defaultValue": 1,
"allowedValues": [
1
],
"metadata": {
"description": "The number of Kubernetes masters for the cluster."
}
},
"sshRSAPublicKey": {
"type": "string",
"metadata": {
"description": "Configure all linux machines with the SSH RSA public key string. Your key should include three parts, for example 'ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm'"
}
},
"servicePrincipalClientId": {
"metadata": {
"description": "Client ID (used by cloudprovider)"
},
"type": "securestring",
"defaultValue": "n/a"
},
"servicePrincipalClientSecret": {
"metadata": {
"description": "The Service Principal Client Secret."
},
"type": "securestring",
"defaultValue": "n/a"
}
},
"variables": {
"adminUsername":"[parameters('linuxAdminUsername')]",
"agentCount":"[parameters('agentCount')]",
"agentsEndpointDNSNamePrefix":"[concat(parameters('dnsNamePrefix'),'agents')]",
"agentVMSize":"[parameters('agentVMSize')]",
"masterCount":"[parameters('masterCount')]",
"mastersEndpointDNSNamePrefix":"[concat(parameters('dnsNamePrefix'),'mgmt')]",
"orchestratorType":"[parameters('orchestratorType')]",
"sshRSAPublicKey":"[parameters('sshRSAPublicKey')]",
"servicePrincipalClientId": "[parameters('servicePrincipalClientId')]",
"servicePrincipalClientSecret": "[parameters('servicePrincipalClientSecret')]",
"useServicePrincipalDictionary": {
"DCOS": 0,
"Swarm": 0,
"Kubernetes": 1
},
"useServicePrincipal": "[variables('useServicePrincipalDictionary')[variables('orchestratorType')]]",
"servicePrincipalFields": [
null,
{
"ClientId": "[parameters('servicePrincipalClientId')]",
"Secret": "[parameters('servicePrincipalClientSecret')]"
}
]
},
"resources": [
{
"apiVersion": "2016-09-30",
"type": "Microsoft.ContainerService/containerServices",
"location": "[resourceGroup().location]",
"name":"[resourceGroup().name]",
"properties": {
"orchestratorProfile": {
"orchestratorType": "[variables('orchestratorType')]"
},
"masterProfile": {
"count": "[variables('masterCount')]",
"dnsPrefix": "[variables('mastersEndpointDNSNamePrefix')]"
},
"agentPoolProfiles": [
{
"name": "agentpools",
"count": "[variables('agentCount')]",
"vmSize": "[variables('agentVMSize')]",
"dnsPrefix": "[variables('agentsEndpointDNSNamePrefix')]"
}
],
"linuxProfile": {
"adminUsername": "[variables('adminUsername')]",
"ssh": {
"publicKeys": [
{
"keyData": "[variables('sshRSAPublicKey')]"
}
]
}
},
"servicePrincipalProfile": "[variables('servicePrincipalFields')[variables('useServicePrincipal')]]"
}
}
],
"outputs": {
"masterFQDN": {
"type": "string",
"value": "[reference(concat('Microsoft.ContainerService/containerServices/', resourceGroup().name)).masterProfile.fqdn]"
},
"sshMaster0": {
"type": "string",
"value": "[concat('ssh ', variables('adminUsername'), '@', reference(concat('Microsoft.ContainerService/containerServices/', resourceGroup().name)).masterProfile.fqdn, ' -A -p 22')]"
},
"agentFQDN": {
"type": "string",
"value": "[reference(concat('Microsoft.ContainerService/containerServices/', resourceGroup().name)).agentPoolProfiles[0].fqdn]"
}
}
}
</code></pre>
| <p>This is a known service issue for old clusters. A fix is currently rolling out and is being tracked in this github issue, <a href="https://github.com/Azure/ACS/issues/16" rel="nofollow noreferrer">https://github.com/Azure/ACS/issues/16</a> </p>
<p>Jack (a dev on the ACS team)</p>
|
<p>This morning I learned about the (unfortunate) default in kubernetes of all previously run cronjobs' jobs instances being retained in the cluster. Mea culpa for not reading that detail in the documentation. I also notice that deleting jobs (<code>kubectl delete job [<foo> or --all]</code>) takes quite a long time. Further, I noticed that even a reasonably provisioned kubernetes cluster with three large nodes appears to fail (get timeouts of all sorts when trying to use kubectl) when there are just ~750 such old jobs in the system (plus some other active containers that otherwise had not entailed heavy load) [Correction: there were also ~7k pods associated with those old jobs that were also retained :-o]. (I did learn about <a href="https://stackoverflow.com/a/43115763/379037">the configuration settings</a> to limit/avoid storing old jobs from cronjobs, so this won't be a problem [for me] in the future.)</p>
<p>So, since I couldn't find documentation for kubernetes about this, my (related) questions are: </p>
<ol>
<li>what exactly is stored when kubernetes retains old jobs? (Presumably it's the associated pod's logs and some metadata, but this doesn't explain why they seemed to place such a load on the cluster.)</li>
<li>is there a way to see the resources (disk only, I assume, but maybe
there is some other resource) that individual or collective old jobs
are using?</li>
<li>why does deleting a kubernetes job take on the order of a minute?</li>
</ol>
| <p>I don't know if k8s provides that kind of detail about how much disk space a particular job is consuming, but here is something you can try.</p>
<p>Try to find the pods associated with the job:</p>
<pre><code>kubectl get pods --selector=job-name=<job name> --output=jsonpath={.items..metadata.name}
</code></pre>
<p>Once you know the pod, find the Docker container associated with it:</p>
<pre><code>kubectl describe pod <pod name>
</code></pre>
<p>In the above output, look for <code>Node</code> and <code>Container ID</code>. Then go to that node and look under the path <code>/var/lib/docker/containers/<container id found above></code>, where you can investigate what is taking up the space.</p>
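<p>For reference, the history limits mentioned in the question look roughly like this in a CronJob spec (a sketch; the API group/version and field availability depend on your cluster release):</p>
<pre><code>apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 3   # keep only the last 3 successful Jobs (and their Pods)
  failedJobsHistoryLimit: 1       # and only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-cronjob
            image: busybox
            args: ["/bin/sh", "-c", "date"]
          restartPolicy: OnFailure
</code></pre>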
|
<p>Does Mesos or Kubernetes provide the tooling to dynamically allocate/de-allocate virtual machines?</p>
<p>I need to build a workflow engine that has a highly variable load and a mixture of short and long running tasks. For example, at 9am I might need to run 10,000 jobs with 500 taking 4 hours to run and the rest taking 5 minutes. And then at 10 am I need to only run 600 short jobs.</p>
<p>If I'm running on Azure (my preferred cloud environment), can Mesos or Kubernetes dynamically scale up or scale down the available VMs in a cluster to match demand? And can it do so intelligently, so that long-running jobs won't be interrupted? </p>
| <p>I don't know of an auto scaler for DCOS/MESOS in azure. Here is a github repository for making an auto scaler inside of an ACS kubernetes cluster though.
<a href="https://github.com/wbuchwalter/Kubernetes-acs-autoscaler" rel="nofollow noreferrer">https://github.com/wbuchwalter/Kubernetes-acs-autoscaler</a></p>
<p>As for how the code works: it is a tool built around the az CLI that tells the ACS service to scale nodes up and down using the <code>az acs scale</code> command. Looking at the code, it aspires to drain all connections and then the pods from the node using kubectl, but it doesn't appear to do so currently.</p>
|
<p>Is it possible to do a reverse DNS lookup from one pod to another in the same namespace on Kubernetes?
Setup: Kubernetes 1.5, kube-dns 1.9</p>
<p>When I exec a pod with nslookup I don't get a hostname but only a nslookup timeout like:</p>
<pre><code>$ time kubectl exec mypod -- nslookup 172\.18\.14\.13
nslookup: can't resolve '(null)': Name does not resolve Name:
172.18.14.13 Address 1: 172.18.14.13
real	0m5.592s
</code></pre>
<p>mypod2 does have the internal IP 172.18.14.13. Both mypod and mypod2 have been deployed to the same namespace (default).</p>
<p>A nslookup from mypod to mypod works:</p>
<pre><code>$ time kubectl exec mypod -- nslookup 172\.18\.58\.13
nslookup: can't resolve '(null)': Name does not resolve
Name: 172.18.58.13
Address 1: 172.18.58.13 mypod
real 0m0.403s
</code></pre>
<p>Do I have to configured something special or isn't it possible by design?</p>
| <p>The conclusion is that it's impossible by design.</p>
<p>General pod IP reverse lookup would require a lot of work in kube-dns and Kubernetes, and it does not have many use cases. Reverse lookup for headless services, however, is supported from kube-dns 1.11 onwards.</p>
<p>You can get more discussion about this in <a href="https://github.com/kubernetes/kubernetes/issues/33470" rel="nofollow noreferrer">issue#33470</a> and <a href="https://github.com/kubernetes/dns/pull/25" rel="nofollow noreferrer">PR#25</a></p>
|
<p>I am running NGINX and Tomcat in Docker containers (the container OS is Red Hat Linux) deployed through Kubernetes pods. The host OS is also Red Hat Linux.</p>
<p>My question is which OS parameters will be effective - those of the host OS or of the container OS? During performance tuning, do I need to tune both, or are only the host OS parameters effective?
Examples of the parameters I am referring to are ulimit -n (open files), net.ipv4.tcp.*, fs.file-max, etc.</p>
| <p>As Crazykev already mentioned, you can set ulimits using the respective <code>docker run</code> flags.</p>
<p>Parameters like <code>net.ipv4.tcp.*</code> are <strong>kernel parameters</strong>. Docker containers are run in the same Linux kernel as the host system; for this reason, parameters set on the host will also be effective in the container.</p>
<p>Usually, you will not be able to set these parameters from inside a container. You can (not saying you should) start a container with the <code>--privileged</code> flag, which might (untested) give you access to setting kernel parameters from within the container. The Kubernetes docs also describe <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers" rel="nofollow noreferrer">how to start privileged containers</a>.</p>
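<p>A small sketch of both levels (values are purely illustrative):</p>
<pre><code># kernel parameters are tuned on the host and are visible to every container on it
sysctl -w net.ipv4.tcp_fin_timeout=30
sysctl -w fs.file-max=2097152

# per-process limits such as open files can be set per container at run time
docker run --ulimit nofile=65535:65535 nginx
</code></pre>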
|
<p>I started a test if my kubernetes installation is running well, with:</p>
<pre><code>$ kubectl run -it hello --image=busybox -- sh
</code></pre>
<p>However, it didn't work. Now I want to clean everything up before I figure out how to continue. But if I <code>delete</code> the pod, it gets restarted, I suppose by a corresponding replicaset. And the replicaset gets recreated as well if I delete it. What can I do now to clean up whatever <code>kubectl run</code> did?</p>
| <p>The run command creates a deployment, which controls the replicaset. Please run the following commands to verify and resolve this:</p>
<pre><code>kubectl get deployments
kubectl delete deployment hello
</code></pre>
|
<p>When running a Kubernetes cluster on Google Cloud Platform is it possible to somehow have the IP address from service endpoints automatically assigned to a Google CloudDNS record? If so can this be done declaratively within the service YAML definition?</p>
<p>Simply put, I don't trust that the IP address of my <code>type: LoadBalancer</code> service will stay the same.</p>
| <p>One option is to front your services with an ingress resource (load balancer) and attach it to a static IP that you have previously reserved.</p>
<p>I was unable to find this documented in either the Kubernetes or GKE documentation, but I did find it here:</p>
<p><a href="https://github.com/kelseyhightower/ingress-with-static-ip" rel="nofollow noreferrer">https://github.com/kelseyhightower/ingress-with-static-ip</a></p>
<p>Keep in mind that the value you set for the <code>kubernetes.io/ingress.global-static-ip-name</code> annotation is the name of the reserved IP resource, and not the IP itself.</p>
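<p>A minimal sketch of wiring that up (the reserved address name and the backend service are placeholders):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # name of the global static IP reserved in GCP, not the IP itself
    kubernetes.io/ingress.global-static-ip-name: my-reserved-ip
spec:
  backend:
    serviceName: my-service
    servicePort: 80
</code></pre>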
<p>Before that was available, you needed to create a global IP yourself and attach it to a GCE load balancer that had a global forwarding rule targeting the nodes of your cluster.</p>
<p>I do not believe there is a way to make this work automatically, today, if you do not wish to front your services with a k8s Ingress or GCP load balancer. That said, the Ingress is pretty straightforward, so I would recommend you go that route, if you can.</p>
<p>There is also a Kubernetes Incubator project called "external-dns" that looks to be an add-on that supports this more generally, and entirely from within the cluster itself:</p>
<p><a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns</a></p>
<p>I have not yet tried that approach, but mention it here as something you may want to follow.</p>
|
<p>I have a Kubernetes up and running on AWS working correctly.</p>
<p>I'm trying to deploy this <a href="https://github.com/mikechernev/dockerised-php" rel="nofollow noreferrer">sample application</a>. I can run the project locally with <code>docker-compose up</code> without any issue</p>
<p>I used <a href="https://github.com/kubernetes-incubator/kompose" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/kompose</a> to deploy it on Kubernetes. It worked, but when I checked the pods I got the following error:</p>
<blockquote>
<p>[SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "web-claim0", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "web-claim0", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "web-claim0", which is unexpected.]</p>
</blockquote>
<p>I would like to share a volume between the 2 containers without setting up <code>aws-ebs</code>. Is it possible? I'd like to start with the simplest volume share possible.</p>
<p>here is the volume configuration</p>
<hr>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: web-claim1
  namespace: default
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/web-claim1
  uid: a94c38da-de18-11e6-84b6-027fd28089d4
  resourceVersion: "296178"
  creationTimestamp: 2017-01-19T07:26:58Z
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status:
  phase: Pending
</code></pre>
| <p>I think I was quite confused about how it works, so it was difficult for anybody to answer. I'm just going to give a couple of hints.</p>
<ol>
<li>Don't learn Kompose and just learn Kubernetes</li>
<li>The root cause of the problem was that I didn't create the volumes on EBS first</li>
</ol>
|
<p>How do I create rewrite rules with the nginx ingress controller, in the same way that NGINX is able to do redirects?</p>
<p>like : (exemple)
server_name www.foo.com;
rewrite ^/search$ <a href="https://www.foo.com/all/search" rel="nofollow noreferrer">https://www.foo.com/all/search</a> permanent;</p>
| <p>This is supported in the latest (beta.3) version of the ingress controller.</p>
<p>You set it using annotations. Have a look at the example <a href="https://github.com/kubernetes/ingress/tree/master/examples/rewrite/nginx#rewrite-target" rel="nofollow noreferrer">here</a></p>
<p>Essentially, when you set up your ingress, you'll need to specify it like so:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/rewrite-target: /search
name: rewrite
namespace: default
spec:
rules:
- host: foo.com
http:
paths:
- backend:
serviceName: my-service
servicePort: 80
path: /all/search
</code></pre>
|
<p>I upgraded my GKE API server to 1.6, and am in the process of upgrading the nodes to 1.6, but ran into a snag...</p>
<p>I've got a prometheus server (version 1.5.2) running in a pod managed by a Kubernetes deployment with a couple of nodes running version 1.5.4 Kubelet, with a single new node running 1.6.</p>
<p>Prometheus can't connect to the new node - its metrics endpoint is returning 401 Unauthorized.</p>
<p>This seems to be a RBAC issue, but I'm not sure how to proceed. I can't find docs on what roles the Prometheus server needs, or even how to grant them to the server.</p>
<p>From the coreos/prometheus-operator repo I was able to piece together a configuration that I might expect to work:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: default
secrets:
- name: prometheus-token-xxxxx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: prometheus-prometheus
component: server
release: prometheus
name: prometheus-server
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: prometheus-prometheus
component: server
release: prometheus
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: prometheus-prometheus
component: server
release: prometheus
spec:
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
serviceAccount: prometheus
serviceAccountName: prometheus
...
</code></pre>
<p>But Prometheus is still getting 401s.</p>
<p>UPDATE: seems like a kubernetes authentication issue as Jordan said. See new, more focused question here; <a href="https://serverfault.com/questions/843751/kubernetes-node-metrics-endpoint-returns-401">https://serverfault.com/questions/843751/kubernetes-node-metrics-endpoint-returns-401</a></p>
| <p>401 means unauthenticated, which means it is not an RBAC issue. I believe GKE no longer allows anonymous access to the kubelet in 1.6. What credentials are you using to authenticate to the kubelet?</p>
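<p>One way to narrow this down is to hit the kubelet directly with the service-account token Prometheus uses (10250 is the kubelet's authenticated HTTPS port; the node IP and the secret name from the question are placeholders):</p>
<pre><code>NODE_IP=10.128.0.3        # substitute one of your node IPs
TOKEN=$(kubectl get secret prometheus-token-xxxxx -o jsonpath='{.data.token}' | base64 -d)
curl -sk -H "Authorization: Bearer $TOKEN" -o /dev/null -w '%{http_code}\n' https://$NODE_IP:10250/metrics
</code></pre>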
|
<p>The <a href="https://kubernetes.io/docs/admin/kube-proxy/" rel="nofollow noreferrer">kube-proxy admin page</a> says:</p>
<pre><code>--masquerade-all If using the pure iptables proxy, SNAT everything
</code></pre>
<p>But it does not explain this in detail.</p>
<ul>
<li>When should I set <code>--masquerade-all</code> to true? </li>
<li>And what problem it solves? </li>
<li>What could happen if it set <code>--masquerade-all=false</code>? What is the difference compared to <code>--masqurade-all=true</code>?</li>
</ul>
| <p>If you enable this and route the service IP range to your nodes then it will be possible to reach the service IPs from outside of the cluster.</p>
<p>The discussion is in <a href="https://github.com/kubernetes/kubernetes/issues/24224" rel="noreferrer">Issue #24224</a> and it's implemented in <a href="https://github.com/kubernetes/kubernetes/pull/24429" rel="noreferrer">PR 24429</a>.</p>
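<p>As a sketch, the flag is simply passed to kube-proxy (flag names as in the docs quoted above):</p>
<pre><code>kube-proxy --proxy-mode=iptables --masquerade-all=true
# with this set, traffic hitting a service chain is SNATed to the node's own address,
# so replies return through the same node even when the client sits outside the cluster
</code></pre>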
|
<p>I'm following the <a href="https://tensorflow.github.io/serving/serving_inception.html" rel="nofollow noreferrer">Serving Inception Model with TensorFlow Serving and Kubernetes</a> workflow and everything work well up to the point of the final serving of the inception model via k8s when I am trying to do inference from a local host.</p>
<p>I'm getting the pods running and the output of <code>$kubectl describe service</code> inception-service is consistent with what is suggested by the workflow in the <a href="https://tensorflow.github.io/serving/serving_inception.html" rel="nofollow noreferrer">Serving Inception Model with TensorFlow Serving and Kubernetes</a>.</p>
<p>However, when running inference things don't work. Here is the trace:</p>
<pre><code>$bazel-bin/tensorflow_serving/example/inception_client --server=104.155.175.138:9000 --image=cat.jpg
Traceback (most recent call last):
  File "/home/dimlyus/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/tf_serving/tensorflow_serving/example/inception_client.py", line 56, in
    tf.app.run()
  File "/home/dimlyus/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/home/dimlyus/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/tf_serving/tensorflow_serving/example/inception_client.py", line 51, in main
    result = stub.Predict(request, 60.0)  # 10 secs timeout
  File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 324, in call
    self._request_serializer, self._response_deserializer)
  File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 210, in _blocking_unary_unary
    raise _abortion_error(rpc_error_call)
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.UNAVAILABLE, details="Connect Failed")
</code></pre>
<p>I am running everything on Google Cloud. The setup is done from a GCE instance and the k8s is run inside of Google Container Engine. The setup of the k8s follows the instructions from the workflow linked above and uses the <a href="https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example/inception_k8s.yaml" rel="nofollow noreferrer">inception_k8s.yaml</a> file.</p>
<p>The service is set as follows: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
run: inception-service
name: inception-service
spec:
ports:
- port: 9000
targetPort: 9000
selector:
run: inception-service
type: LoadBalancer
</code></pre>
<p>Any advice on how to troubleshoot this would be greatly appreciated!</p>
| <p>The error message seems to indicate that your client cannot connect to the server. Without some additional information it is hard to troubleshoot. If you post your deployment and service configuration, as well as some information about the environment (is it running on a cloud? which one? what are your security rules? load balancers?), we may be able to help better.</p>
<p>But here some things that you can check right away:</p>
<ol>
<li><p>If you are running in some kind of cloud environment (Amazon, Google, Azure, etc.), they all have security rules where you need to explicitly open the ports on the nodes running your kubernetes cluster. So every port that your Tensorflow deployment/service is using should be opened on the Controller and Worker nodes. </p></li>
<li><p>Did you deploy only a <code>Deployment</code> for the app or also a <code>Service</code>? If you run a <code>Service</code> how does it expose? Did you forget to enable a <code>NodePort</code>?</p></li>
</ol>
<p><strong>Update</strong>: Your service type is LoadBalancer, so a separate load balancer should be created in GCE. You need to get the IP of the load balancer and access the service through that IP. Please see the section 'Finding Your IP' in this link: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/</a></p>
|
<p>Has anyone managed to run a H2O Cluster in Kubernetes?</p>
<p>I tried two options, both using a flatfile: 1) using a StatefulSet, but since the IP generated for a pod can change, the cluster is unreliable; 2) using a bunch of service/deployment pairs and specifying the DNS name of each service in the flatfile, but the cluster doesn't start up correctly.</p>
<p>Neither of these works. Is there any way to make it work?</p>
| <p>If multicast packets can be transmitted between the pods, then you could rely on that for the cluster formation. Just specify a unique -name for all the nodes to share. This is easy if it works, with no code changes.</p>
<p>UPDATE (2018/04/21) -- one of my colleagues says:</p>
<p>I used Weave as the network layer. What that does is provide a connection between all the containers for that Kubernetes pod group, so you don't need to use the flatfile in H2O: H2O will multicast on startup, and Weave will take the multicast and send it to all instances of the pod.</p>
<p>in K8s run this: kubectl apply --filename <a href="https://git.io/weave-kube-1.6" rel="nofollow noreferrer">https://git.io/weave-kube-1.6</a></p>
<hr>
<p>If multicast is not an option, there isn't an out-of-the-box solution today for Kubernetes that I'm aware of.</p>
<p>You will need an orchestrator to distribute the flatfile information.</p>
<p>There are at least three examples of code to do this for other environments in the H2O github repos.</p>
<ol>
<li>ec2 scripts</li>
</ol>
<p><a href="https://github.com/h2oai/h2o-3/tree/master/ec2" rel="nofollow noreferrer">https://github.com/h2oai/h2o-3/tree/master/ec2</a></p>
<ol start="2">
<li>The hadoop driver</li>
</ol>
<p><a href="https://github.com/h2oai/h2o-3/blob/master/h2o-hadoop/h2o-mapreduce-generic/src/main/java/water/hadoop/h2omapper.java" rel="nofollow noreferrer">https://github.com/h2oai/h2o-3/blob/master/h2o-hadoop/h2o-mapreduce-generic/src/main/java/water/hadoop/h2omapper.java</a></p>
<p>In particular, look at how this class gets overridden:</p>
<p><a href="https://github.com/h2oai/h2o-3/blob/master/h2o-core/src/main/java/water/init/AbstractEmbeddedH2OConfig.java" rel="nofollow noreferrer">https://github.com/h2oai/h2o-3/blob/master/h2o-core/src/main/java/water/init/AbstractEmbeddedH2OConfig.java</a></p>
<ol start="3">
<li>The sparkling water driver in the sparkling water repo.</li>
</ol>
|
<p>My question is about PersistentVolumeClaim. I have a one-node cluster set up on AWS EC2, and I am trying to create a storage class using <strong>kubernetes.io/host-path</strong> as the <strong>Provisioner</strong>.</p>
<p>The YAML file content for the storage class is as follows:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
namespace: kube-system
name: my-storage
annotations:
storageclass.beta.kubernetes.io/is-default-class: "false"
labels:
kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/host-path
</code></pre>
<p>The YAML file content for the PersistentVolumeClaim is as follows:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: task-pv-claim
annotations:
volume.beta.kubernetes.io/storage-class: my-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<p>When I try to create the storage class and PVC on <strong>minikube</strong>, it works: the volume is created on minikube in /tmp/hostpath_volume/.
But when I try the same thing on the one-node cluster on AWS EC2, I get the following error:</p>
<p>Failed to create provisioner: Provisioning in volume plugin "kubernetes.io/host-path" is disabled</p>
<p>I can see this error when I do <strong>kubectl describe pvc task-pv-claim</strong>. Also, since the PV is not created, the claim stays in a pending state.</p>
<p>I found that kube-controller-manager has the --enable-dynamic-provisioning and --enable-hostpath-provisioner options, but I don't know how to use them.</p>
| <p>It seems you might not be running the provisioner itself, so there's nothing to actually do the work of creating the hostpath directory.</p>
<p>Take a look <a href="https://github.com/kubernetes-incubator/external-storage/blob/master/docs/demo/hostpath-provisioner/pod.yaml" rel="noreferrer">here</a></p>
<p>The way this works is that the hostpath provisioner reads from the kubernetes API, and watches for you to create a storage class (which you've done) and a persistentvolumeclaim (also done).</p>
<p>When those exist, the provisioner (which is running as a pod) will go and execute a <code>mkdir</code> to create the hostpath.</p>
<p>Run the following:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/docs/demo/hostpath-provisioner/pod.yaml
</code></pre>
<p>And then recreate your storageclass and pvc</p>
|
<p>I'm currently using Kubernetes on GKE to serve the various parts of my product on different subdomains with the Ingress resource. For example: <code>api.mydomain.com</code>, <code>console.mydomain.com</code>, etc.</p>
<p><strong>ingress.yml (current)</strong>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
spec:
rules:
- host: api.mydomain.com
http:
paths:
- backend:
serviceName: api-service
servicePort: 80
- host: console.mydomain.com
http:
paths:
- backend:
serviceName: console-service
servicePort: 80
</code></pre>
<p>That works wonderfully, with the L7 GCE load balancer routing to the appropriate places. What I would like to do, however, is deploy many feature-branch deployments as subdomains to test and demonstrate new features before pushing to production. These could be something like <code>new-stylesheet.console.mydomain.com</code> or <code>upgraded-algorithm.api.mydomain.com</code>, inspired by GitLab CI's <a href="https://docs.gitlab.com/ce/ci/environments.html" rel="noreferrer">environments</a>.</p>
<p>Here's a potential workflow for each deployment:</p>
<ol>
<li>Create feature-api-deployment.yml</li>
<li>Create feature-api-service.yml</li>
<li>Update ingress.yml with new subdomain rule: <code>feature.api.mydomain.com</code> specifying <code>serviceName: feature-api-service</code></li>
</ol>
<p>But enumerating and maintaining all subdomain->service mappings will get messy with tearing down deployments, and create a ton of GCE backends (default quota is 5...) so it's not ideal.</p>
<p>Is there anything built in to Kubernetes that I'm overlooking to handle this? Something like this would be ideal to pick a target service based on a matched subdomain:</p>
<p><strong>ingress.yml (wanted)</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
spec:
rules:
- host: *.api.mydomain.com
http:
paths:
- backend:
serviceName: {value of *}-api-service
servicePort: 80
</code></pre>
| <p>There certainly isn't anything like wildcard domains available in Kubernetes, but you might be able to get what you want using <a href="http://helm.sh/" rel="noreferrer">Helm</a></p>
<p>With helm, you can template variables inside your manifests, so you can have the name of your branch be in the helm <a href="https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/values_files.md" rel="noreferrer">values file</a></p>
<p>From there, you can have gitlab-ci do the helm installation in a build pipeline and if you configure your chart correctly, you can specify a helm argument of the pipeline name.</p>
<p>There's a great blog post about this kind of workflow <a href="http://blog.lwolf.org/post/how-to-create-ci-cd-pipeline-with-autodeploy-k8s-gitlab-helm/" rel="noreferrer">here</a> - hopefully this'll get you where you need to go.</p>
|
<p>Say I have 1000 identical pods I'd like to be run eventually, but node resources only allow for 10 pods to be run in parallel.</p>
<p>Each pod eventually removes their RC if they exit cleanly, so given enough time, all pods should be run.</p>
<p>If I schedule all 1000 pods at the same time, though, 990 of them will be pending initially. <a href="https://github.com/kubernetes/kubernetes/blob/b983cc76e15eaa983f90b451829292fdf78ce1b3/plugin/pkg/scheduler/scheduler.go#L155" rel="nofollow noreferrer">The scheduler</a> will keep all 990 pods on a busy loop trying to be scheduled, and the operation will only succeed (for a certain pod) after one of the 10 running pods is removed. </p>
<p>This busy loop is far from ideal in my situation, as it'll likely take all of the scheduler's available resources. Is there an alternative solution to this provided natively by kubernetes? It seems clear that this particular behaviour of scheduling way more pods than you're able to deal with isn't something that kubernetes optimises for.</p>
| <p>This type of workload is better suited for the <a href="https://kubernetes.io/docs/concepts/jobs/run-to-completion-finite-workloads/" rel="nofollow noreferrer">Job</a> resource.</p>
<p>Since you have a fixed number of pods to run, the easiest way to do this would be to create a Job with <code>.spec.completions</code> set to 1000.</p>
<p>You could then control the number of pods running concurrently through <code>.spec.parallelism</code>. By default this is set to 1, which means only 1 pod will run at a time, but you can set it to a higher value to have the Job finish faster (e.g. 10, since that is the limit that your nodes can handle).</p>
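<p>A minimal sketch of such a Job (the name and image below are placeholders, not something from your setup) could look like this:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: my-batch-job        # placeholder name
spec:
  completions: 1000         # total number of pods that must finish successfully
  parallelism: 10           # at most 10 pods run at the same time
  template:
    spec:
      containers:
      - name: worker
        image: my-worker-image   # placeholder image
      restartPolicy: Never
</code></pre>
<p>The Job controller only creates new pods as running ones finish, so the scheduler never has to churn through hundreds of pending pods.</p>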
|
<p>I'm tring to install minikube in Ubuntu vm (in virtual box). I have enabled VT-X/AMD-v for the vm. But i'm getting following error.</p>
<pre><code># minikube start
Starting local Kubernetes cluster...
E0217 15:00:35.395801 3869 start.go:107] Error starting host: Error creating host: Error with pre-create check: "This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory".
Retrying.
E0217 15:00:35.396019 3869 start.go:113] Error starting host: Error creating host: Error with pre-create check: "This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory"
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
</code></pre>
<p>I found a <a href="https://github.com/docker/machine/issues/2256" rel="noreferrer">reference</a> according to which we cannot have virtualization inside virtualization. Is that true? How can I fix this?</p>
| <p><strong>Virtual Box does not support VT-X/AMD-v in nested virtualisation</strong>. See this open <a href="https://www.virtualbox.org/ticket/4032" rel="noreferrer">ticket/feature request</a> on virtualbox.org. </p>
<p>There are also some more questions and answers here on SO discussing <a href="https://stackoverflow.com/questions/21861794/enable-kvm-on-ubuntu-running-on-virtualbox-on-windows/21897478#21897478">this</a> <a href="https://stackoverflow.com/questions/24620599/error-vt-x-not-available-for-vagrant-machine-inside-virtualbox/26475597#26475597">topic</a>. </p>
<p>Possible solutions:</p>
<ol>
<li>As already mentioned: <strong>Use a different hypervisor</strong> that does support VT-X/AMD-v in nested virtualisation (like Xen, KVM or VMware). </li>
<li><strong>Install Minikube on the host OS</strong> and not in a VM.</li>
<li><strong>Run Minikube directly using Docker and the "none" driver option</strong> as described in <a href="https://stackoverflow.com/a/52635546/5730444">Tad Guskis answer</a> (see the sketch after this list). </li>
</ol>
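<p>For option 3, a minimal sketch (assuming Docker is already installed inside the Ubuntu VM) would be:</p>
<pre><code># run the cluster components directly on the VM instead of inside a nested VM
sudo minikube start --vm-driver=none
</code></pre>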
|
<p>Consider the following nginx config file:</p>
<pre><code>server {
listen 443;
ssl on;
ssl_certificate /etc/tls/cert.pem;
ssl_certificate_key /etc/tls/key.pem;
location / {
proxy_pass http://api.default.svc.cluster.local;
}
}
</code></pre>
<p>All incoming TCP requests on 443 should redirect to my server running on <code>api.default.svc.cluster.local:80</code> (which is a node REST-Api btw). This works fine, I can <code>curl https://<nginx-IP>/</code> nginx and get a correct response, as expected.</p>
<p>Now, I'd like to change the location from <code>/</code> to <code>/api</code>, so I can fire a <code>curl https://<nginx-IP>/api</code> in order to get the same response as before.</p>
<p><strong>1. Attempt</strong></p>
<p>So I change the location line in the config to:</p>
<p><code>location /api {</code></p>
<p>Unfortunately this doesn't work; instead I get an error <code>Cannot GET /api</code>, which is a Node error, so obviously the request gets routed to the API but something's still smelly.</p>
<p><strong>2. Attempt</strong></p>
<p>It seems the trailing slash in a URI is required, so I added it to the location:</p>
<p><code>location /api/ {</code></p>
<p>Now something changed. I no longer get the same error as before; instead I get a "301 Moved Permanently". How can I fix my nginx config file?</p>
<p><strong>Additional information regarding the environment</strong></p>
<p>I'm using a kubernetes deployment that deploys the nginx reverse proxy incl. the config introduced. I then expose nginx using a kubernetes service. Also, I tried using kubernetes ingress to deal with this situation, using the same routes, however, the ingress service would respond with a <code>default backend - 404</code> message.</p>
| <p>As mentioned in the question, trailing slashes in URIs are important. I fixed this in the location, however, I didn't add it to the URI I pass using <code>proxy_pass</code>.</p>
<p>As for the nginx proxy I got it to work using the following config:</p>
<pre><code>server {
listen 443;
ssl on;
ssl_certificate /etc/tls/cert.pem;
ssl_certificate_key /etc/tls/key.pem;
location /api/ {
proxy_pass http://api.default.svc.cluster.local/;
}
}
</code></pre>
<p>Concerning the ingress solution, I was not able to get it to work by adding the missing trailing slash to the path. The service is specified by its name and therefore no trailing slash can be added (i.e. it would result in an error). </p>
|
<p>Is there any way to get hold of the log file of the pod in Kubernetes cluster?</p>
<p>I know I can fetch logs using the "kubectl logs -f $POD_NAME" command, but I want to access the log file directly.</p>
| <p>It depends on the logging driver you're using. I'm assuming you're using the default json-file logging driver here. You can see the node the pod is scheduled on by using <code>kubectl get po -o wide</code></p>
<p>Then, log on to that node and you'll see the docker logs of the container under <code>/var/lib/docker/containers/<long_container_id>/<long_container_id>-json.log</code></p>
<p>You will need to use <code>docker ps</code> and <code>docker inspect</code> to determine the long container id.</p>
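<p>As a rough sequence (the pod name below is a placeholder):</p>
<pre><code># find the node the pod is scheduled on
kubectl get po my-pod -o wide

# then, on that node:
docker ps | grep my-pod                                       # note the container id
docker inspect --format '{{.LogPath}}' <long_container_id>    # prints the json log file path
</code></pre>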
|
<p>While running Minikube, I want to connect to a server that has the annoying habit of announcing itself to a service registry with its internal IP address from inside its pod. </p>
<p>However for legacy reasons I have to connect to this registry first and retrieve that server's ip address from it. The only way to access this server from my dev machine, it seems to me, is bridging to the internal network, so I can access the networking of the Minikube. Is there an easy way to do this?</p>
| <p>You can add a route to the k8 internal network from localhost</p>
<p>Add a route to the internal network using the minikube ip address</p>
<pre><code>$ sudo ip route add 172.17.0.0/16 via $(minikube ip) # linux
$ sudo route -n add 172.17.0.0/16 $(minikube ip) # OSX
</code></pre>
<p>Your subnet can be found using the <code>kubectl get service</code> command.</p>
<p>Test the route by deploying a test container and connect to it from localhost</p>
<pre><code>$ kubectl run monolith --image=kelseyhightower/monolith:1.0.0 --port=80
$ IP=$(kubectl get pod -l run=monolith -o jsonpath='{.items[0].status.podIP }')
$ curl http://$IP
{"message":"Hello"}
</code></pre>
<p>You can also add a route to K8 master</p>
<pre><code>sudo route -n add 10.0.0.0/24 $(minikube ip)
</code></pre>
<p>This is only useful for local development, you should use <code>NodePort</code> or <code>LoadBalancer</code> for exposing pods in production.</p>
|
<p>I have a <code>PersistentVolumeClaim</code> that looks like the following:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitlab-config-storage
namespace: gitlab
annotations:
volume.beta.kubernetes.io/storage-class: fast
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
</code></pre>
<p>This created a disk in Google Compute Engine. I then deleted the claim and reapplied it, but this created a new disk. I would like to attach the original disk to my claim, as it already contains data I've created. Is there a way to force GKE to use a specific disk?</p>
| <p>By using a persistent volume claim, you are asking GKE to use a persistent disk, and then always use the same volume.</p>
<p>However, by deleting the claim, you've essentially destroyed it.</p>
<p>Don't delete the claim, ever, if you want to continue using it.</p>
<p>You can attach a claim to multiple pods over its lifetime, and the disk will remain the same. As soon as you delete the claim, it will disappear.</p>
<p>Take a look <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#lifecycle-of-a-volume-and-claim" rel="nofollow noreferrer">here</a> for more information</p>
|
<p>Is it worth using SSD as boot disk? I'm not planning to access local disks <a href="https://cloud.google.com/container-engine/docs/local-ssd" rel="nofollow noreferrer">within pods</a>.</p>
<p>Also, GCP by default creates 100GB disk. If I use 20GB disk, will it cripple the cluster or it's OK to use smaller sized disks?</p>
| <p>I would always recommend SSD considering the small difference in price and large difference in performance. Even if it just speeds up the deployment/upgrade of containers. </p>
<p>Reducing the disk size to what is required for running your PODs should save you more. I cannot give a general recommendation for disk size since it depends on the OS you are using and how many PODs you will end up on each node as well as how big each POD is going to be. To give an example: When I run coreOS based images with staging deployments for nginx, php and some application servers I can reduce the disk size to 10gb with ample free room (both for master and worker nodes). On the extreme side - If I run self-contained golang application containers without storage need, each POD will only require a few MB space.</p>
|
<p>I have deployed an app using Kubernetes to a Google Cloud Container Engine Cluster.</p>
<p>I got into autoscaling, and I found the following options:</p>
<p><strong>Kubernetes Horizontal Pod Autoscaling (HPA)</strong></p>
<p>As <a href="https://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/walkthrough/" rel="nofollow noreferrer">explained here</a>, Kubernetes offers the HPA on deployments. As per the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Horizontal Pod Autoscaling automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization</p>
</blockquote>
<p><strong>Google Cloud Container Cluster</strong></p>
<p>Now I have a Google Cloud Container Cluster using 3 instances, with autoscaling enabled. As per the <a href="https://cloud.google.com/container-engine/docs/cluster-autoscaler" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Cluster Autoscaler enables users to automatically resize clusters so that all scheduled pods have a place to run.</p>
</blockquote>
<p>This means I have two places to define my autoscaling. Hence my questions: </p>
<ul>
<li>Is a Pod the same as VM instance inside my cluster, or can multiple Pod's run inside a single VM instance?</li>
<li>Are these two parameters doing the same (aka creating/removing VM instances inside my cluster). If not, what is their behaviour compared to one another?</li>
<li>What happens if e.g. I have a number of pods between <code>3</code> and <code>10</code> and a cluster with number of instances between <code>1</code> and <code>3</code> and autoscaling kicks in. When and how would both scale?</li>
</ul>
<p>Many thanks!</p>
| <blockquote>
<p>Is a Pod the same as VM instance inside my cluster, or can multiple
Pod's run inside a single VM instance?</p>
</blockquote>
<p>Multiple Pods can run on the same instance (called a node in Kubernetes). You can define the maximum resources a Pod may consume in the deployment YAML. See the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/limit-range/" rel="nofollow noreferrer">docs</a>. This is an important prerequisite for autoscaling.</p>
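<p>For example, a container spec fragment with such limits (the values here are only illustrative) looks like this:</p>
<pre><code>    containers:
    - name: my-app
      image: my-app:latest
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi
</code></pre>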
<blockquote>
<p>Are these two parameters doing the same (aka creating/removing VM
instances inside my cluster). If not, what is their behaviour compared
to one another?</p>
</blockquote>
<p>The Kubernetes autoscaler will schedule additional Pods on your existing nodes. The Google autoscaler will add worker nodes (new instances) to your cluster. The Google autoscaler looks at queued-up Pods that cannot be scheduled because there is no space in your cluster, and when it finds some it will add nodes.</p>
<blockquote>
<p>What happens if e.g. I have a number of pods between 3 and 10 and a
cluster with number of instances between 1 and 3 and autoscaling kicks
in. When and how would both scale?</p>
</blockquote>
<p>Based on the maximum resource usage you define for your pods, the Google autoscaler will estimate how many new nodes are required to run all queued-up pods.</p>
<p>Also read this <a href="https://community.sdl.com/solutions/content-management/tridion/tridion-developer/b/feed/posts/autoscaling-your-kubernetes-cluster-on-aws" rel="nofollow noreferrer">article</a>.</p>
|
<p>I am using <strong>container Probes</strong> to check the health of the application running inside the container within kubernetes pod. For now my example pod config looks like,</p>
<pre><code>"spec":{
"containers":[
{
"image":"tomcat",
"name":"tomcat",
"livenessProbe":{
"httpGet":{
"port": 80
},
"initialDelaySeconds": 15,
"periodSeconds": 10
}
}
]
}
</code></pre>
<p>In my case, I need to monitor two ports for the same container. <strong>80</strong> and <strong>443</strong>. But I am unable to find a method to provide both the ports for same container in the config file. Is there an alternate way of doing this?</p>
| <p>It's not possible; try to encapsulate the health check inside your application.</p>
<p>Ex:
<a href="http://localhost:80/health_check?full" rel="nofollow noreferrer">http://localhost:80/health_check?full</a> => (proxy to) => <a href="http://localhost:443/health_check?full" rel="nofollow noreferrer">http://localhost:443/health_check?full</a></p>
<p>This issue may help you: <a href="https://github.com/kubernetes/kubernetes/issues/37218" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/37218</a></p>
|
<p>I'm running Nginx on Kubernetes.</p>
<p>When I use the following proxy_pass directive it works as expected:</p>
<pre><code>proxy_pass "http://service-1.default";
</code></pre>
<p>However the following does not work:</p>
<pre><code>set $service "service-1";
proxy_pass "http://$service.default";
</code></pre>
<p>I get an error saying <code>no resolver defined to resolve service-1.default</code></p>
<p>As far as I can tell <code>proxy_pass</code> is receiving the exact same string so why is it behaving differently?</p>
<p>I need to use a variable because I'm dynamically getting the service name from the URL using a regex.</p>
| <p>I've found the reason and a solution.</p>
<p>Nginx detects whether a variable is being used in <code>proxy_pass</code> (I don't know exactly how it does that). If there is no variable, it resolves the hostname at startup and caches the IP address. If there is a variable, it uses a resolver (DNS server) to look up the IP at runtime.</p>
<p>So the solution is to specify the Kube DNS server like this:</p>
<pre><code>resolver kube-dns.kube-system.svc.cluster.local valid=5s;
set $service "service-1";
proxy_pass "http://$service.default.svc.cluster.local";
</code></pre>
<p>Note that the full local DNS name of the service must be used which you can get by running <code>nslookup service-1</code>.</p>
|
<p>I’m getting <code>Failed to pull image</code> because the image pull is timing out, I know the image is there I just think my private registry is slow, is there a way to set a timeout limit on image pulls?</p>
| <p>The timeout limit can be controlled with the <code>--runtime-request-timeout</code> option for the <code>kubelet</code> service.</p>
<p><a href="https://kubernetes.io/docs/admin/kubelet/" rel="noreferrer">Official documentation</a> for this:</p>
<blockquote>
<p>Timeout of all runtime requests except long running request - pull, logs, exec and attach. When timeout exceeded, kubelet will cancel the request, throw out an error and retry later. Default: 2m0s (default 2m0s)</p>
</blockquote>
<p>Even though this is not really visible from the description, this value is still passed down into the <a href="https://github.com/kubernetes/kubernetes/blob/5e442a3f61e1e1eb67323183cfba6540c02a4a54/pkg/kubelet/kubelet.go#L276" rel="noreferrer">RemoteImageService (see source code an Github)</a> which is used to pull the images.</p>
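<p>How you pass the flag depends on how your kubelet is installed. For a systemd-managed kubelet (e.g. a kubeadm setup), a drop-in along these lines is one possibility; the file path and variable name are assumptions about your install, not something mandated by Kubernetes:</p>
<pre><code># /etc/systemd/system/kubelet.service.d/90-timeout.conf (hypothetical path)
[Service]
Environment="KUBELET_EXTRA_ARGS=--runtime-request-timeout=15m"
</code></pre>
<p>Then reload systemd and restart the kubelet on each node for the new timeout to take effect.</p>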
<p>Hope this helps.</p>
|
<p>I'm trying to deploy my cluster using <a href="https://github.com/Azure/acs-engine" rel="nofollow noreferrer" title="acs-engine">acs-engine</a>
I followed the steps everything went good. But then when i go <code>kubectl get pods --all-namespaces</code> I find some of them in pending state.
On <code>kubectl describe nodes</code> I see this </p>
<p><code>NetworkUnavailable True Tue, 11 Apr 2017 10:31:20 +0000 Tue, 11 Apr 2017 10:31:20 +0000 NoRouteCreated RouteController failed to create a route</code></p>
<p>I have not used any network plugin and am using Azure's default one.
I have no idea what is wrong. </p>
<p>No of nodes : 1
No of master : 1</p>
<p>Please let me know what could be wrong. </p>
| <p>This will happen when the service principal does not have Contributor access, scoped to the resource group of the route table. Once you recreate your service principal with contributor access, this will work. More information here: <a href="https://github.com/Azure/acs-engine/blob/master/docs/serviceprincipal.md" rel="nofollow noreferrer">https://github.com/Azure/acs-engine/blob/master/docs/serviceprincipal.md</a>.</p>
|
<p>Could you please assist me with the following issue.
I use Apiman version 1.2.1</p>
<pre><code>FROM jboss/wildfly:9.0.2.Final
ENV APIMAN_VERSION 1.2.1.Final
</code></pre>
<p>I expose this version via Kubernetes; as a persistent volume I use Postgres in the same container. After creating it for the first time, in Apiman I added the Organization/API/.... and all the other necessary stuff.</p>
<p>I press the button to publish the API and can check that it works perfectly: I use <code>kubectl port-forward pod-name 8080:8080</code> and can reach my gateway via the browser at <code>http://localhost:8080/apiman-gateway/ORgId/bla/bla/bla/bla?givemedescriptionbyid=1</code>.</p>
<p>After this I go to the console and kill the Apiman pod. Once the pod restarts, I repeat the same <code>kubectl port-forward new-pod-name 8080:8080</code> operation and can see that all the Apiman data (organization, APIs and all the other stuff) is still there.</p>
<p>But there is one big problem: if you try to call the gateway again, it tells you that:</p>
<blockquote>
<p>{"responseCode":500,"message":"API not
found.","trace":"io.apiman.gateway.engine.beans.exceptions.InvalidApiException:
API not found.\n\tat
io.apiman.gateway.engine.impl.ApiRequestExecutorImpl$3.handle(ApiRequestExecutorImpl.java:278)\n\tat
io.apiman.gateway.engine.impl.ApiRequestExecutorImpl$3.handle(ApiRequestExecutorImpl.java:271)\n\tat
io.apiman.gateway.engine.impl.SecureRegistryWrapper$1.handle(SecureRegistryWrapper.java:122)\n\tat
io.apiman.gateway.engine.impl.SecureRegistryWrapper$1.handle(SecureRegistryWrapper.java:111)\n\tat
io.apiman.gateway.engine.es.CachingESRegistry.getApi(CachingESRegistry.java:116)\n\tat
io.apiman.gateway.engine.impl.SecureRegistryWrapper.getApi(SecureRegistryWrapper.java:111)\n\tat
io.apiman.gateway.engine.impl.ApiRequestExecutorImpl.execute(ApiRequestExecutorImpl.java:270)\n\tat
io.apiman.gateway.platforms.servlet.GatewayServlet.doAction(GatewayServlet.java:232)\n\tat
io.apiman.gateway.platforms.servlet.GatewayServlet.doGet(GatewayServlet.java:77)\n\tat
javax.servlet.http.HttpServlet.service(HttpServlet.java:687)\n\tat
javax.servlet.http.HttpServlet.service(HttpServlet.java:790)\n\tat
io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)\n\tat
io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)\n\tat
io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)\n\tat
org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)\n\tat
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)\n\tat
io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)\n\tat
io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)\n\tat
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)\n\tat
io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)\n\tat
io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)\n\tat
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)\n\tat
io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)\n\tat
io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)\n\tat
io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)\n\tat
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)\n\tat
org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)\n\tat
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)\n\tat
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)\n\tat
io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:282)\n\tat
io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:261)\n\tat
io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)\n\tat
io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)\n\tat
io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)\n\tat
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)\n\tat
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat
java.lang.Thread.run(Thread.java:745)\n"}</p>
</blockquote>
<p>I can see from the call metrics in the API manager that all my calls reach the apiman-gateway, but I get a 500 response code.</p>
| <p>In case anybody else meets this error, I solved it as follows: first of all, use only a newer version of Apiman, since a list of fixes has already been implemented there. Also, the apiman-gateway cannot restore itself without Elasticsearch, so an Elasticsearch configuration has to be provided.</p>
|
<p>I was trying to deploy my local docker image on Kubernetes, but it doesn't work for me.
I loaded the image into docker and tagged it as <strong>app:v1</strong>, then I ran it with kubectl this way: <code>kubectl run app --image=app:v1 --port=8080</code>.</p>
<p>When I look up my pods I see the error <code>"Failed to pull image "app:v1": rpc error: code = 2 desc = Error: image library/app not found"</code>.</p>
<p>What am I doing wrong?</p>
| <p>In the normal case your Kubernetes cluster runs on a different machine than the one your <code>docker build</code> was run on, hence it has no access to your local image (unless you are using minikube and you eval minikube's environment to run your docker commands against the docker daemon powering the minikube install).</p>
<p>To get it working you need to push the image to a registry available to the Kubernetes cluster.</p>
<p>By running your command as it is, you actually tell Kubernetes to pull <code>app:v1</code> from the official Docker Hub hosted images.</p>
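<p>If you are on minikube, a rough sketch of the local-image workflow (reusing your <code>app:v1</code> tag) is:</p>
<pre><code># point the docker CLI at minikube's docker daemon and rebuild the image there
eval $(minikube docker-env)
docker build -t app:v1 .
</code></pre>
<p>Then set <code>imagePullPolicy: IfNotPresent</code> (or <code>Never</code>) on the container so Kubernetes uses the locally built image instead of trying to pull it. Otherwise, tag and push the image to a registry your cluster can reach and reference it by that registry path.</p>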
|
<p>I'm using <a href="https://github.com/mpdavis/python-jose" rel="nofollow noreferrer">python-jose</a>'s JWT implementation to generate JWT tokens for authentication purposes.</p>
<p>We're running our backend in a Docker container on Kubernetes and <strong>sometimes, when we have multiple pods, we get different tokens <em>for the same claims, secret and algorithm</em></strong>. I've also had this happen on a single container on my development environment when <code>touch</code>ing my <code>index.wsgi</code> script.</p>
<p>Pod 1:</p>
<pre><code>>>> jwt.encode({'key': 'value'}, 'secret', algorithm='HS256')
'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJ2YWx1ZSJ9.FG-8UppwHaFp1LgRYQQeS6EDQF7_6-bMFegNucHjmWg'
</code></pre>
<p>Pod 2:</p>
<pre><code>>>> jwt.encode({'key': 'value'}, 'secret', algorithm='HS256')
'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJrZXkiOiJ2YWx1ZSJ9.JPIDicqvQ6GAh14yE2yZ3wnZQ0LiLNTTRDtJgLZcn98'
</code></pre>
<p>I took a deep dive into the code to see what could be causing this and didn't find anything incriminating. In a nutshell, here's what the code does:</p>
<ol>
<li>Do a <code>json.dumps</code> of the algorithm header (<code>{'typ': 'JWT', 'alg': 'HS256'}</code>) and encode it as Base64, removing any <code>=</code>'s</li>
<li>Do a <code>json.dumps</code> of the payload (<code>{'key': 'value'}</code>) and encode it as Base64, removing any <code>=</code>'s</li>
<li>Sign <code>encoded_header.encoded_payload</code> using HMAC256 with the <code>secret</code> key and encode it as Base64, again removing any <code>=</code>'s</li>
<li>Concatenate the signature to the previous string, resulting in <code>encoded_header.encoded_payload.encoded_signature</code></li>
</ol>
<p>At this point, I have no idea what is causing this. I'm suspecting a bug of some sort in the HMAC or SHA256 implementation of Python, but that seems rather unlikely... Any clues?</p>
<p>Note: we've successfully reproduced the bug with <code>pyjwt</code>, which was the base for <code>python-jose</code>.</p>
| <p>This occurs because Python dictionaries are unordered. If you decode the two JWTs, you will see that the header portion is ordered differently for each token.</p>
<pre><code>{
"typ": "JWT",
"alg": "HS256"
}
</code></pre>
<p>and </p>
<pre><code>{
"alg": "HS256",
"typ": "JWT"
}
</code></pre>
<p>This causes the base64 encoded header to be different, which will in turn cause the signature to be different.</p>
<p>That said, both of those are valid tokens for the exact same claim set, and both should verify successfully. There is nothing in the JWT spec that dictates that an equivalent claim set should result in equivalent JWT output.</p>
<p>Note: I am the author of the <a href="https://github.com/mpdavis/python-jose" rel="noreferrer">python-jose</a> library.</p>
|
<p>I have k8s v1.5 installad, I tried to following <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a> to implement HPA with custom metric.</p>
<p>In the page above, it said "<code>--horizontal-pod-autoscaler-use-rest-clients</code> flag on the controller manager set to true. " but while I set it, the controller manager cannot be started because this flag is not support.</p>
<p>So how can I find any guide for k8s v1.5?</p>
<p>Here is my k8s version information:</p>
<pre><code>[bow@devvm13 ~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.0", GitCommit:"58b7c16a52c03e4a849874602be42ee71afdcab1", GitTreeState:"clean", BuildDate:"2016-12-12T23:31:15Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p><code>--horizontal-pod-autoscaler-use-rest-clients</code> is only supported from 1.6 onwards. <br/> You can refer to <a href="https://medium.com/@marko.luksa/kubernetes-autoscaling-based-on-custom-metrics-without-using-a-host-port-b783ed6241ac" rel="nofollow noreferrer">https://medium.com/@marko.luksa/kubernetes-autoscaling-based-on-custom-metrics-without-using-a-host-port-b783ed6241ac</a> as an example.</p>
|
<p>My company has been running Kuberenetes for over a year now and GitLab for about 6 months. We recently upgrade to GitLab 9.x and are having trouble trying to figure out what's up with the decision around the CI + app configuration with Kube. This feature is awesome and would love to get it working in our environment. </p>
<p>It seems as though GitLab expects you to have only one cluster set up, with all of your environments inside that one cluster broken up by namespace, where the namespace would equal your service/application and the app would equal your environment. This is what it looks like GitLab wants my Kubernetes environment to look like, a single cluster with your service broken up into namespaces:</p>
<pre><code>namespace = hello-world
app = development
app = qa
app = production
</code></pre>
<p>where in a real world example we would prefer to have the opposite which would work well with a single cluster as well</p>
<pre><code>DEVELOPMENT CLUSTER
namespace = development
app = hello-world
QA CLUSTER
namespace = qa
app = hello-world
PRODUCTION CLUSTER
namespace = production
app = hello-world
</code></pre>
<p>Having the namespace be the application and the apps be the environment, we wouldn't have the ability to upgrade to the latest version of kube without upgrading all. Maybe I'm missing something but based on what I'm reading and after testing this out, it looks as though this was the way it was designed.</p>
<p>For reference this is what my CI looks like right now to make the deploy board + terminal happy</p>
<pre><code>development:
<<: *deploy_definition
stage: development
environment: hello-world
script:
deploy.sh -a "hello-world"
</code></pre>
<p>but it should look like this</p>
<pre><code>development:
<<: *deploy_definition
stage: development
environment: development
script:
deploy.sh -a "hello-world"
</code></pre>
<p>To add to this confusion, they give you only one Kubernetes master to connect to in the integrations tab.</p>
<p><strong>Is this correct, or am I missing something?</strong></p>
| <p>You're correct. I found it frustrating too. </p>
<p>But you can use environments even without their kubernetes integration </p>
<pre><code>development:
<<: *deploy_definition
stage: development
environment:
name: development
url: https://development.yourdomain.com
script:
deploy.sh -a "hello-world"
</code></pre>
<p>Check out the post I wrote recently about the configuration of auto deploy to kubernetes from gitlab.</p>
<p><a href="http://blog.lwolf.org/post/how-to-create-ci-cd-pipeline-with-autodeploy-k8s-gitlab-helm/" rel="nofollow noreferrer">http://blog.lwolf.org/post/how-to-create-ci-cd-pipeline-with-autodeploy-k8s-gitlab-helm/</a></p>
|
<p>Is it worth using SSD as boot disk? I'm not planning to access local disks <a href="https://cloud.google.com/container-engine/docs/local-ssd" rel="nofollow noreferrer">within pods</a>.</p>
<p>Also, GCP by default creates 100GB disk. If I use 20GB disk, will it cripple the cluster or it's OK to use smaller sized disks?</p>
| <p>Why one or the other? Kubernetes (Google Container Engine) is mainly memory and CPU intensive, unless your applications need huge throughput on the hard drives. If you want to save money you can label the nodes that have HDDs and use node affinity to tweak which pods go where, so you can have a few nodes with SSDs and target them with the affinity labels.</p>
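<p>For example (the label key and node name below are only illustrative):</p>
<pre><code># mark the nodes that have SSD boot disks
kubectl label nodes my-ssd-node disktype=ssd
</code></pre>
<p>and then target them from the pod spec:</p>
<pre><code>spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: my-app
    image: my-app:latest
</code></pre>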
|
<p>In the Kubernetes/Docker ecosystem there is a convention of using <code>/healthz</code> as a health-check endpoint for applications.</p>
<p>Where does the name 'healthz' come from, and are there any particular semantics associated with that name?</p>
| <p>It historically comes from Google’s internal practices. They're called "z-pages".</p>
<p>The reason it ends with <code>z</code> is to reduce collisions with actual application endpoints with the same name (like <code>/status</code>). See this talk for more: <a href="https://vimeo.com/173610242" rel="noreferrer">https://vimeo.com/173610242</a></p>
<p>Similar endpoints (at least inside Google) are <code>/varz</code>, <code>/statusz</code>, <code>/rpcz</code>. Services developed at Google automatically get these endpoints to export their health and metrics and there are tools that collect the exposed metrics/statuses from all the deployed services.</p>
<p>Open source tools like Prometheus implement this pattern (since original authors of Prometheus are also ex-Googlers) by coming to a well-known endpoint to collect metrics from your application. Similarly <a href="http://opencensus.io" rel="noreferrer">OpenCensus</a> allows you to expose z-pages from your app (ideally on a different port) to diagnose problems.</p>
|
<p>I am working on setting up an environment for deploying microservices.</p>
<p>I have gotten as far as building my code and deploying it to a registry, but I am having problems running it in Azure Container Services.</p>
<p>I am following this guide to connect to ACS: <a href="https://learn.microsoft.com/en-us/azure/container-service/container-service-connect" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-service/container-service-connect</a> </p>
<p>But I fail on the step "Download Cluster Credentials", using the given command:</p>
<pre><code>az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
</code></pre>
<p>Of course I change the resource group and cluster name to the correct names from my portal. I get the error:</p>
<pre><code>[WinError 10049] The requested address is not valid in its context
</code></pre>
<p>(If I change the resource group or cluster name to something else I get other errors, so it seems it can find those at least.)</p>
<p>When I search for the error it seems to be some IP address problem, but I can't figure out what to do. I tried running the same command from another network (from home) to make sure the work firewall is not blocking something, but I get the same error.</p>
<p>Any help appreciated!</p>
| <p>This command copies the cluster credentials to your machine. In the background it uses SSH to connect to your cluster's master VM and copies the credentials.</p>
<p>So, <strong>you should ensure you can SSH to the master VM manually</strong>. If you cannot SSH to the master VM manually, the <code>az</code> command cannot do it either. You can find your <code>master-dns-name</code> on the Azure Portal.</p>
<pre><code>ssh -i id_rsa <user>@<master-dns-name>
</code></pre>
<p>Note: if the <code>az</code> command does not work but you can SSH to the master VM, you can download the credentials to your machine yourself; they are the same. You can check your <a href="https://learn.microsoft.com/en-us/azure/container-service/container-service-connect#connect-to-a-kubernetes-cluster" rel="nofollow noreferrer">link</a> about this.</p>
<p>You also need to check your Azure CLI version. You can use the following command:</p>
<pre><code>az --version
</code></pre>
<p>My version is 2.02. It works for me.</p>
|
<p>We have an app that runs on GKE Kubernetes and which expects an auth url (to which user will be redirected via his browser) to be passed as environment variable.</p>
<p>We are using different namespaces per environment </p>
<p>So our current pod config looks something like this:</p>
<pre><code> env:
- name: ENV
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: AUTH_URL
value: https://auth.$(ENV).example.org
</code></pre>
<p>And all works amazingly, we can have as many dynamic environments as we want, we just do apply -f config.yaml and it works flawlessly without changing a single config file and without any third party scripts.</p>
<p>Now for production we kind of want to use different domain, so the general pattern <code>https://auth.$(ENV).example.org</code> does not work anymore.</p>
<p>What options do we have?</p>
<ol>
<li>Since configs are in git repo, create a separate branch for <code>prod</code> environment</li>
<li>Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists <code>prod-config.yaml</code> then use that, else use <code>config.yaml</code>) - but with this approach we cannot use kubectl directly anymore</li>
<li>Move this config to application level, and have separate config file for <code>prod</code> env - but this kind of goes against 12factor app?</li>
<li>Other...?</li>
</ol>
| <p>This seems like an ideal opportunity to use <a href="https://helm.sh/" rel="noreferrer">helm</a>!</p>
<p>It's really easy to get started, simply install tiller into your cluster.</p>
<p>Helm gives you the ability to create "charts" (which are like packages) which can be installed into your cluster. You can template these really easily. As an example, you might have you config.yaml look like this:</p>
<pre><code>env:
- name: AUTH_URL
value: {{ .Values.auth.url }}
</code></pre>
<p>Then, within the helm chart you have a <code>values.yaml</code> which contains defaults for the url, for example:</p>
<pre><code>auth:
url: https://auth.namespace.example.org
</code></pre>
<p>You can use the <code>--values</code> option with helm to specify per environment <code>values.yaml</code> files, or even use the <code>--set</code> flag on helm to override them when using <code>helm install</code>.</p>
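<p>For example (the chart path and values below are placeholders):</p>
<pre><code># per-environment values file
helm install ./my-chart --values values-production.yaml

# or override a single value on the command line
helm install ./my-chart --set auth.url=https://auth.example.org
</code></pre>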
<p>Take a look at the documentation <a href="https://github.com/kubernetes/helm/blob/master/docs/charts.md#values-files" rel="noreferrer">here</a> for information about how values and templating works in helm. It seems perfect for your use case</p>
|
<p>I am running Cassandra on Kubernetes (3 instances) and want to expose it to the outside, my application is not yet in Kubernetes. So i crated a load balanced service like so:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: getquanty
labels:
app: cassandra
name: cassandra
annotations:
kubernetes.io/tls-acme: "true"
spec:
clusterIP:
ports:
- port: 9042
name: cql
nodePort: 30001
- port: 7000
name: intra-node
nodePort: 30002
- port: 7001
name: tls-intra-node
nodePort: 30003
- port: 7199
name: jmx
nodePort: 30004
selector:
app: cassandra
type: LoadBalancer
</code></pre>
<p>This is the result is:</p>
<pre><code>NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra 10.55.249.88 GIVEN_IP_GCE_LB 9042:30001/TCP,7000:30002/TCP,7001:30003/TCP,7199:30004/TCP 26m
</code></pre>
<p>I am able to connect using the shell (cqlsh GIVEN_IP_GCE_LB), but when I try to add data to Cassandra using the DataStax driver for Node, I get this:</p>
<pre><code>message: 'Cannot achieve consistency level SERIAL',
info: 'Represents an error message from the server',
code: 4096,
consistencies: 8,
required: 1,
alive: 0,
coordinator: '35.187.166.68:9042' },
'10.52.4.32:9042': 'Host considered as DOWN',
'10.52.2.15:9042': 'Host considered as DOWN' },
info: 'Represents an error when a query cannot be performed because no host is available or could be reached by the driver.',
message: 'All host(s) tried for query failed. First host tried, 35.187.166.68:9042: ResponseError: Cannot achieve consistency level SERIAL. See innerErrors.' }
</code></pre>
<p>My first thought was that I needed to expose the other ports too, so I did (intra-node, tls-intra-node, jmx), but I got the same error.</p>
<p>Kubernetes gives you access to a proxy; I tried to proxy from my machine using the constructed URL for the pod to test whether I have access, but I cannot connect using cqlsh:</p>
<pre><code>http://127.0.0.1:8001/api/v1/namespaces/qq/pods/cassandra-0:cql/proxy
</code></pre>
<p>I am out of ideas. The one thing left to try is to expose every instance (make a service for every instance), which is very ugly, but it would let me connect to the nodes from the outside until I migrate the application to Kubernetes.</p>
<p>Does anyone have ideas on how to expose Cassandra nodes to the internet and make the DataStax driver aware of all the nodes? Thank you for your time.</p>
| <p>After more reading I found out that the replication strategy was causing the problem. NetworkTopologyStrategy is meant for multi-datacenter clusters; I only have one, so I changed the replication to SimpleStrategy with the number of nodes I had, and now everything works as expected.</p>
<p>EDIT 1:
Putting databases on Kube was not a good solution for me. I ended up making a standalone cluster, added it to the same network as Kube, and was able to access it from Kube pods.</p>
<p>Kube is made to manage applications and make them 'elastic'. I don't think people really need to scale databases as quickly as applications; furthermore, scaling a database is not the same operation as scaling a stateless application.</p>
|
<p>I need to update my database schema before running our app. For that based on <a href="https://github.com/kubernetes/kubernetes/issues/3312#issuecomment-245668148" rel="nofollow noreferrer">this thread</a> and on <a href="https://stackoverflow.com/a/37264976/911849">this answer</a> I've decided to use init container to do the job.</p>
<p>Since my SQL instance is a hosted Google Cloud SQL instance, I need <code>gce-proxy</code> to be able to connect to the database. My initContainers looks like this:</p>
<pre><code> initContainers:
- name: cloudsql-proxy-init
image: gcr.io/cloudsql-docker/gce-proxy:1.09
command: ["/cloud_sql_proxy"]
args:
- --dir=/cloudsql
- -instances=xxxx:europe-west1:yyyyy=tcp:5432
- -credential_file=/secrets/cloudsql/credentials.json
volumeMounts:
- name: dev-db-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
- name: liquibase
image: eu.gcr.io/xxxxx/liquibase:v1
imagePullPolicy: Always
command: ["./liquibase.sh"]
env:
- name: DB_TYPE
value: postgresql
- name: DB_URL
value: jdbc:postgresql://localhost/test
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: password
- name: DB_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: username
</code></pre>
<p>But my pod is stuck:</p>
<p><code>containers with incomplete status: [cloudsql-proxy-init liquibase]</code></p>
<p>If I look at pod describe:</p>
<pre><code>Init Containers:
cloudsql-proxy-init:
Container ID: docker://0373fa6528ec3768d46a1c59ca45f12d9fc46d1f0d199b7eb3772545701e1b1d
Image: gcr.io/cloudsql-docker/gce-proxy:1.09
Image ID: docker://sha256:66c58ef63dbfe239ff95416d62635559498ebb395abb8a4b1edee78e48e05fe4
Port:
Command:
/cloud_sql_proxy
Args:
--dir=/cloudsql
-instances=xxxxx:europe-west1:yyyyyy=tcp:5432
-credential_file=/secrets/cloudsql/credentials.json
State: Running
Started: Thu, 13 Apr 2017 17:40:02 +0300
Ready: False
Restart Count: 0
Mounts:
/cloudsql from cloudsql (rw)
/etc/ssl/certs from ssl-certs (rw)
/secrets/cloudsql from dev-db-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-th58c (ro)
liquibase:
Container ID:
Image: eu.gcr.io/xxxxxx/liquibase:v1
Image ID:
Port:
Command:
./liquibase.sh
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
DB_TYPE: postgresql
DB_URL: jdbc:postgresql://localhost/test
DB_PASSWORD: <set to the key 'password' in secret 'db-credentials'> Optional: false
DB_USER: <set to the key 'username' in secret 'db-credentials'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-th58c (ro)
</code></pre>
<p>And it seems that cloud-sql-proxy-init is running:</p>
<pre><code>2017/04/13 14:40:02 using credential file for authentication; [email protected]
2017/04/13 14:40:02 Listening on 127.0.0.1:5432 for xxxxx:europe-west1:yyyyy
2017/04/13 14:40:02 Ready for new connections
</code></pre>
<p>Which is probably the problem, because an init container should exit so that initialization can continue. So how can I connect from Liquibase to the Google Cloud SQL instance?</p>
| <p>You are expecting that the init containers are all running next to each other like the normal containers in a pod.</p>
<p>But unfortunately for you, the init containers are started one after the other, each only after the previous one has finished. See
<a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#understanding-init-containers" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#understanding-init-containers</a></p>
<blockquote>
<p>Init Containers are exactly like regular Containers, except:</p>
<ul>
<li>They always run to completion.</li>
<li>Each one must complete successfully before the next one is started.</li>
</ul>
</blockquote>
<p>So you won't be able to run the proxy container alongside your app container.</p>
<p>A solution would be to build a container that has both binaries in it and then use a shell script to background the proxy and run your application to completion.</p>
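<p>A minimal sketch of such an entrypoint script, reusing the arguments from your manifest (treat it as an illustration rather than a tested setup):</p>
<pre><code>#!/bin/sh
# start the proxy in the background and give it a moment to open the socket
/cloud_sql_proxy --dir=/cloudsql \
  -instances=xxxx:europe-west1:yyyyy=tcp:5432 \
  -credential_file=/secrets/cloudsql/credentials.json &
sleep 5

# run the schema migration; the init container exits with liquibase's exit code
./liquibase.sh
</code></pre>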
|
<p>I have allocated a static IP on my GCP account. Then, I updated my application's service definition to use this in a load balancer, like so:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
# Unique key of the Service instance
name: my-app-service
spec:
ports:
# Accept traffic sent to port 80
- name: http
port: 80
targetPort: 5000
selector:
# Loadbalance traffic across Pods matching
# this label selector
app: web
# Create an HA proxy in the cloud provider
# with an External IP address - *Only supported
# by some cloud providers*
type: LoadBalancer
# Use the static IP allocated
loadBalancerIP: 35.186.xxx.xxx
</code></pre>
<p>If I comment out the last line and let GKE allocate an ephemeral public IP, the service comes up just fine. Any ideas what I am doing wrong?</p>
<p>Based on advise in an answer, I created an <code>Ingress</code> as follows:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myapp
annotations:
kubernetes.io/ingress.global-static-ip-name: "myapp"
spec:
backend:
serviceName: my-app-service
servicePort: 80
</code></pre>
<p>Now, I see the <code>Ingress</code> getting assigned the right static IP. However, my <code>Service</code> also gets assigned a (different) public IP. The <code>Ingress</code> and <code>Service</code> are not connected. If I comment out the <code>type: LoadBalancer</code> line, the <code>Service</code> is not assigned a public IP, but <code>Ingress</code> still does not connect. I get a <code>default backend - 404</code> response when hitting the static IP. I have tried creating the service and ingress in different orders and that has not helped either.</p>
<p>If I leave this up long enough, the static IP routes traffic to my service, but the service itself stays stuck in external IP assignment:</p>
<pre><code>$ kubectl get service my-app-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-app-service 10.x.x.x <pending> 80:31432/TCP 12m
</code></pre>
| <p>You need to create an <code>Ingress</code> and use the <code>kubernetes.io/ingress.global-static-ip-name: "name-of-your-ip"</code> annotation on the Ingress for Kubernetes to be able to find it. </p>
<p>You can find a tutorial here: <a href="https://github.com/kelseyhightower/ingress-with-static-ip#tutorial" rel="nofollow noreferrer">https://github.com/kelseyhightower/ingress-with-static-ip#tutorial</a></p>
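<p>For reference, the global static IP that the annotation refers to can be reserved and inspected roughly like this (the name <code>myapp</code> matches the annotation in your Ingress and is otherwise arbitrary):</p>
<pre><code>gcloud compute addresses create myapp --global
gcloud compute addresses describe myapp --global
</code></pre>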
|
<p>Trying to understand why I'm seeing this output from my kube-proxy logs</p>
<pre><code>W0328 08:00:53.755379 1 server.go:468] Failed to retrieve node info: nodes "ip-172-31-55-175" not found
W0328 08:00:53.755505 1 proxier.go:249] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
</code></pre>
<p>The cluster is working just fine, is that indicating an issue with the cluster configuration?</p>
| <p>Can you please show the output of the command <code>kubectl get node</code>?</p>
<p>Probably the registered name used when kubelet starts is different from the name that kube-proxy is using.</p>
<p>You can force the usage of this very same name by starting kube-proxy with the <code>--hostname-override</code> flag, pointing it to the same name kubelet registered itself with.</p>
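<p>A sketch of what that could look like on the kube-proxy command line (the flag values are placeholders; use whatever name <code>kubectl get node</code> reports):</p>
<pre><code>kube-proxy \
  --master=127.0.0.1:8080 \
  --proxy-mode=iptables \
  --hostname-override=<name shown by kubectl get node> \
  --logtostderr=true
</code></pre>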
|
<p>I was considering using secrets to mount a single file, but it seems that you can only mount a directory, which overwrites all the other content. How can I share a single config file without mounting a directory?</p>
| <p>For example, you have a ConfigMap which contains 2 config files:</p>
<pre><code>kubectl create configmap config --from-file <file1> --from-file <file2>
</code></pre>
<p>You could use subPath like this to mount a single file into an existing directory:</p>
<pre><code>---
volumeMounts:
- name: "config"
mountPath: "/<existing folder>/<file1>"
subPath: "<file1>"
- name: "config"
mountPath: "/<existing folder>/<file2>"
subPath: "<file2>"
restartPolicy: Always
volumes:
- name: "config"
configMap:
name: "config"
---
</code></pre>
<p>Full example <a href="https://gist.github.com/tuannvm/0fc6e94a3759c91b1abe71c149152f77" rel="noreferrer">here</a></p>
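<p>For reference, a more self-contained sketch of the same idea, assuming the ConfigMap <code>config</code> was created from files named <code>file1.conf</code> and <code>file2.conf</code> (all names, paths and the image are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /etc/myapp/file1.conf
      subPath: file1.conf
    - name: config
      mountPath: /etc/myapp/file2.conf
      subPath: file2.conf
  volumes:
  - name: config
    configMap:
      name: config
</code></pre>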
|
<p>I was using the following code to randomly kill processes inside the pods</p>
<pre><code> : ${DELAY:=30}
if [ "$#" -ne 1 ]; then
echo "One argument expected specifying the time in seconds to run"
else
end=$((SECONDS+$1))
while [ $SECONDS -lt $end ]; do
kubectl \
-o 'jsonpath={.items[*].metadata.name}' \
get pods | \
tr " " "\n" | \
shuf | \
head -n 1 |
#xargs -t --no-run-if-empty \
kubectl exec $(head -n 1) -- kill -9 9
#if [ $((SECONDS+${DELAY})) -lt $end ];then #break loop if (seconds completed + delay) is greater than time specified in argument
# sleep "${DELAY}" #sleep so that pod gets completely deleted and a terminating pod not selecte
#else
# break
#fi
done
fi
</code></pre>
<p>The script runs but no output is shown, as if it goes into some infinite loop. Can someone help me see where I am going wrong?</p>
| <p>Two things:</p>
<p><code>-o 'jsonpath={.items[*].metadata.name}'</code> should go after <code>kubectl get pods</code>; otherwise kubectl will return the help message.</p>
<p>Also, add <code>-t</code> to <code>kubectl exec</code> in order to see output from the container in your terminal.</p>
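<p>A rough sketch with those two changes applied (capturing the pod name in a variable first is my own addition, not part of the original script):</p>
<pre><code>pod=$(kubectl get pods -o 'jsonpath={.items[*].metadata.name}' | tr " " "\n" | shuf | head -n 1)
kubectl exec -t "$pod" -- kill -9 9
</code></pre>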
|
<p>I set up Kubernetes on a Ubuntu 16.04 vServer following this tutorial <a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/kubeadm/</a></p>
<p>On this node I want to make <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">Kubernetes Dashboard</a> available but after starting the service via <code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml</code> I have no clue how to proceed.
The UI is not accessible via <code>https://{master-ip}/ui</code>.</p>
<p><strong>How can I make the UI publicly accessible?</strong></p>
| <p>The easiest approach is to run <code>kubectl proxy</code> on the client machine where you want to use the dashboard and then access the dashboard at <code>http://127.0.0.1:8001</code> with a browser on that same machine.
If you want to connect via the master node IP as described in your answer, you need to set up authentication first. See <a href="https://kubernetes.io/docs/tasks/web-ui-dashboard/#accessing-the-dashboard-ui" rel="nofollow noreferrer">this</a> and <a href="https://kubernetes.io/docs/admin/authentication/" rel="nofollow noreferrer">this</a>.</p>
|
<p>We have deployed services as Kubernetes Deployments with multiple replicas. When a server crashes, Kubernetes migrates its containers to another available server, which takes about 3–5 minutes.</p>
<p>While migrating, the client can still access the Deployment service because we have other running replicas. But sometimes the requests fail because the load balancer redirects to the dead or migrating containers.</p>
<p>It would be great if Kubernetes could kick the dead replicas out automatically and add them back once they are running on other servers. Otherwise, we need to set up an LB like HAProxy to do the same job with multiple Deployment instances.</p>
| <p>You need to configure health checking to have properly working load balancing for a Service. Please have a read of:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a></p>
<blockquote>
<p>The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.</p>
</blockquote>
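<p>A minimal sketch of what a readiness probe could look like on your Deployment's container (the port, path and timings are placeholders for whatever health endpoint your app exposes):</p>
<pre><code>containers:
- name: my-app
  image: my-app:latest
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
</code></pre>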
|
<p>I have got the following service for Kubernetes dashboard</p>
<pre><code>Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Annotations: kubectl.kubernetes.io/last-applied-configuration={"kind":"Service","apiVersion":"v1","metadata":{"name":"kubernetes-dashboard","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"k...
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP: 10.0.106.144
Port: <unset> 80/TCP
NodePort: <unset> 30177/TCP
Endpoints: 10.244.0.11:9090
Session Affinity: None
Events: <none>
</code></pre>
<p>According to the <a href="https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-ui" rel="nofollow noreferrer">documentation</a>, I ran</p>
<pre><code>az acs kubernetes browse
</code></pre>
<p>and it works on <a href="http://localhost:8001/ui" rel="nofollow noreferrer">http://localhost:8001/ui</a></p>
<p>But I want to access it outside the cluster too. The describe output says that it is exposed using NodePort on port 30177. </p>
<p>But I'm not able to access it on <code>http://<any node IP>:30177</code></p>
| <p>As we know, to expose a service to the internet we can use <code>nodeport</code> or <code>LoadBalancer</code>.</p>
<p>As far as I know, Azure does <strong>not</strong> support nodeport type now.</p>
<blockquote>
<p>But I want to access it outside the cluster too.</p>
</blockquote>
<p>We can use <code>LoadBalancer</code> to re-create the Kubernetes dashboard service; here are my steps:</p>
<ol>
<li><p><strong>Delete</strong> kubernetes-dashboard via kubernetes UI: select <strong>Namespace</strong> to <strong>kube-system</strong>, then select <strong>services</strong>, then delete it:
<a href="https://i.stack.imgur.com/DlO9j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DlO9j.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/wjJLI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wjJLI.png" alt="enter image description here" /></a></p>
</li>
<li><p>Modify Kubernetes-dashboard-service.yaml: SSH master VM, then <strong>change</strong> type from nodeport to <strong>LoadBalancer</strong>:</p>
<p>root@k8s-master-47CAB7F6-0:/etc/kubernetes/addons# vi kubernetes-dashboard-service.yaml</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
labels:
kubernetes.io/cluster-service: "true"
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard
type: LoadBalancer
</code></pre>
</li>
<li><p><strong>start</strong> kubernetes browse from CLI 2.0:</p>
<p>C:\Users><code>az acs kubernetes browse -g k8s -n containerservice-k8s</code></p>
</li>
</ol>
<p>Then SSH to master VM to <strong>check the status</strong>:
<a href="https://i.stack.imgur.com/UUcQE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UUcQE.png" alt="enter image description here" /></a></p>
<p>Now we can browse the UI via the public IP address:</p>
<p><a href="https://i.stack.imgur.com/OD3eB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OD3eB.png" alt="enter image description here" /></a>
<strong>Update</strong>:<br />
The following image shows the architecture of an Azure Container Service (Kubernetes) cluster; we should use a load balancer to expose the service to the internet.</p>
<p><a href="https://i.stack.imgur.com/pgz6R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pgz6R.png" alt="enter image description here" /></a></p>
|
<p>I am running a Kubernetes cluster with quite a few users, it has happened several times that the NodePorts from different users conflict. Is it possible in Kubernetes to set different NodePort ranges for different users? Just like the --service-node-port-range with kube-apiserver, but this is for all the users not for specific user.</p>
| <p>Your question is similar to <a href="https://stackoverflow.com/questions/43324199/dynamic-kubernetes-port-range">this existing question</a> and the answer is the same. It is not possible. You will have to communicate port ranges to your users and then check their deployments to enforce the ranges before actually deploying.</p>
|
<p>When I create a Pod that cannot be scheduled because there are no nodes with sufficient CPU to meet the Pod's CPU request, the events output from <code>kubectl describe pod/...</code> contain a message like <code>No nodes are available that match all of the following predicates:: Insufficient cpu (3)</code>.</p>
<p>What does the <code>(3)</code> in <code>Insufficient cpu (3)</code> mean?</p>
<p>For example, if I try to create a pod that requests 24 CPU when all of my nodes only have 4 CPUs:</p>
<pre><code>$ kubectl describe pod/large-cpu-request
Name: large-cpu-request
Namespace: default
Node: /
Labels: <none>
Annotations: <none>
Status: Pending
IP:
Controllers: <none>
Containers:
cpuhog:
...
Requests:
cpu: 24
...
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
23m 30s 84 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3).
</code></pre>
<p>At other times I have seen event messages like <code>No nodes are available that match all of the following predicates:: Insufficient cpu (2), PodToleratesNodeTaints (1)</code> when a pod's resource requests were too high, so the 3 does not seem like a constant number - nor does it seem related to my 24 CPU request either.</p>
| <p>It means that your Pod doesn't fit on 3 nodes because of Insufficient CPU and 1 node because of <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/taint-toleration-dedicated.md" rel="noreferrer">taints</a> (likely the master).</p>
|
<p>The following is the file used to create the Deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kloud-php7
namespace: kloud-hosting
spec:
replicas: 1
template:
metadata:
labels:
app: kloud-php7
spec:
containers:
- name: kloud-php7
image: 192.168.1.1:5000/kloud-php7
- name: kloud-nginx
image: 192.168.1.1:5000/kloud-nginx
ports:
- containerPort: 80
</code></pre>
<p>The Deployment and the Pod worked fine, but after deleting the Deployment and a generated ReplicaSet, I cannot delete the spawned Pods permanently. New Pods are created as soon as old ones are deleted.</p>
<p>The kubernetes cluster is created with kargo, containing 4 nodes running CentOS 7.3, kubernetes version 1.5.6</p>
<p>Any idea how to solve this problem ?</p>
| <p>This is working as intended. The Deployment creates (and recreates) a ReplicaSet and the ReplicaSet creates (and recreates!) Pods. You need to delete the Deployment, not the Pods or the ReplicaSet:</p>
<pre><code>kubectl delete deploy -n kloud-hosting kloud-php7
</code></pre>
|
<p>I have <a href="/questions/tagged/node.js" class="post-tag" title="show questions tagged 'node.js'" rel="tag">node.js</a> application built using <code>Dockerfile</code> that defines:</p>
<pre><code>CMD node dist/bin/index.js
</code></pre>
<p>I'd like to "append" a parameter to the command as it is defined in Dockerfile, i.e. I want to execute the program as <code>node dist/bin/index.js foo</code>.</p>
<p>In <a href="/questions/tagged/docker" class="post-tag" title="show questions tagged 'docker'" rel="tag">docker</a> land I am able to achieve this via:</p>
<pre><code>docker build --tag test .
docker run test foo
</code></pre>
<p>In <a href="/questions/tagged/kubernetes" class="post-tag" title="show questions tagged 'kubernetes'" rel="tag"><img src="https://i.stack.imgur.com/8UH0j.png" height="16" width="18" alt="" class="sponsor-tag-img">kubernetes</a> I cannot use <code>command</code> because that will override the <code>ENTRYPOINT</code>. I cannot use <code>args</code> because that will override the <code>cmd</code> defined in the <code>Dockerfile</code>. It appears that my only option is:</p>
<pre><code>cmd: ["node", "dist/bin/index.js", "foo"]
</code></pre>
<p>Is there a way to append an argument to a container command without redefining the entire Docker <code>CMD</code> definition?</p>
| <p>No way to append. You can either set the <code>command:</code> or <code>args:</code> on container.spec. You can learn more about how to override CMD/ENTRYPOINT here: <a href="https://kubernetes.io/docs/concepts/configuration/container-command-args/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/container-command-args/</a></p>
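<p>A sketch of spelling the whole invocation out on the container spec (container name and image are placeholders):</p>
<pre><code>containers:
- name: app
  image: test
  command: ["node"]
  args: ["dist/bin/index.js", "foo"]
</code></pre>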
|
<p>I want to use Minikube for local development. It needs to access my companies internal docker registry which is signed w/ a 3rd party certificate. </p>
<p>Locally, I would copy the cert and run <code>update-ca-trust extract</code> or <code>update-ca-certificates</code> depending on the OS. </p>
<p>For the Minikube vm, how do I get the cert installed, registered, and the docker daemon restarted so that <code>docker pull</code> will trust the server?</p>
| <p>I had to do something similar recently. You should be able to just hop onto the machine with <code>minikube ssh</code> and then follow the directions here:</p>
<p><a href="https://docs.docker.com/engine/security/certificates/#understanding-the-configuration" rel="noreferrer">https://docs.docker.com/engine/security/certificates/#understanding-the-configuration</a></p>
<p>to place the CA in the appropriate directory (/etc/docker/certs.d/[registry hostname]/). You shouldn't need to restart the daemon for it to work.</p>
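<p>A rough sequence, assuming the registry is reachable at <code>registry.example.com</code> and the CA file has already been copied into the VM at <code>/tmp/my-registry-ca.crt</code> (both are placeholders):</p>
<pre><code>minikube ssh
sudo mkdir -p /etc/docker/certs.d/registry.example.com
sudo cp /tmp/my-registry-ca.crt /etc/docker/certs.d/registry.example.com/ca.crt
</code></pre>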
|
<p>I am trying to call <code>kubectl create -n $test_namespace -f <absolute-path-to-tea-rc-file>.yaml</code> from a shell script.</p>
<pre><code>#!/bin/bash
current_dir=$(pwd)
echo "current_dir is $current_dir"
parentdir="$(dirname "$current_dir")"
echo $parentdir
kubectl create -n $test_namespace -f $parentdir/deployment/common/tea-rc.yaml
</code></pre>
<p>It gives error as below:</p>
<pre><code>Bhagyashrees-MacBook-Pro:execution_controller bhagyashree$ sh test.sh
current_dir is /Users/bhagyashree/Documents/FDrepo/armada-ingress-ctl-test/execution_controller
/Users/bhagyashree/Documents/FDrepo/armada-ingress-ctl-test
error: unknown command "/Users/bhagyashree/Documents/FDrepo/armada-ingress-ctl-test/deployment/common/tea-rc.yaml"
See 'kubectl create -h' for help and examples.
</code></pre>
<p>the same command works when it is executed from a terminal.
<code>kubectl create -n testnamespace -f /Users/bhagyashree/Documents/FDrepo/armada-ingress-ctl-test/deployment/common/tea-rc.yaml</code></p>
<p>What am I missing here?</p>
| <p>I think it's because the variable $test_namespace has not been set.</p>
<p>If you change the last line to <code>echo "kubectl create -n $test_namespace -f $parentdir/deployment/common/tea-rc.yaml"</code> you'll see what it's trying to run; it will look like <code>kubectl create -n -f /path/to/dir/deployment/common/tea-rc.yaml</code>. The namespace cannot be left blank.</p>
<p>You could also try adding a line like <code>echo $test_namespace</code> to check.</p>
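<p>A sketch of the script with the namespace handled explicitly (the default value <code>testnamespace</code> is taken from the terminal command that works for you):</p>
<pre><code>#!/bin/bash
test_namespace="${test_namespace:-testnamespace}"
current_dir=$(pwd)
echo "current_dir is $current_dir"
parentdir="$(dirname "$current_dir")"
echo "$parentdir"
kubectl create -n "$test_namespace" -f "$parentdir/deployment/common/tea-rc.yaml"
</code></pre>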
|
<p>I'm attempting to deploy a Docker container to a minikube instance running locally, and getting this error when it attempts to pull(?) the image. The image exists in a self-hosted Docker registry. The image I'm testing with is built with the following Dockerfile:</p>
<pre><code>FROM alpine:latest
ENTRYPOINT ["echo"]
</code></pre>
<p>I'm using the fabric8io <code>kubernetes-client</code> library to create a deployment like so:</p>
<pre><code>// 'kube' is an instance of io.fabric8.kubernetes.client.KubernetesClient
final Deployment deployment = kube.extensions().deployments()
.createOrReplaceWithNew()
.withNewMetadata()
.withName(name)
.withNamespace("staging")
.endMetadata()
.withNewSpec()
.withReplicas(1)
.withNewTemplate()
.withNewMetadata()
.addToLabels("app", name)
.endMetadata()
.withNewSpec()
.addNewImagePullSecret()
// "regsecret" is the kubectl-created docker secret
.withName("regsecret")
.endImagePullSecret()
.addNewContainer()
.withName(name)
.withImage(imageName + ":latest")
.endContainer()
.endSpec()
.endTemplate()
.endSpec()
.done();
</code></pre>
<p>This is all running on Arch Linux, kernel <code>Linux 4.10.9-1-ARCH x86_64 GNU/Linux</code>. Using <code>minikube 0.18.0-1</code> and <code>kubectl-bin 1.6.1-1</code> from the AUR, <code>docker 1:17.04.0-1</code> from the community repositories, and the docker <code>registry</code> container at <code>latest</code> (<code>2.6.1</code> as of writing this). fabric8io <code>kubernetes-client</code> is at version <code>2.2.13</code>. </p>
<p>I have checked:</p>
<ul>
<li>that the self-hosted registry is running over HTTPS correctly</li>
<li>that the image can even be pulled. <code>docker pull</code> and <code>docker run</code> on both the host and inside the minikube VM work exactly as expected</li>
<li>that the image runs. See above</li>
<li>that there aren't any name conflicts / etc. in minikube. I delete the deployments, replica sets, and pods between attempts, and I recreate the namespace, just to be safe. However, I've found that it doesn't make a difference which I do, as my code cleans up existing pods/replica sets/deployments as needed</li>
<li>that DNS is not an issue, as far as I can tell</li>
</ul>
<p>I have not:</p>
<ul>
<li>run kubernetes locally (as opposed to minikube), as the AUR package for kubernetes takes an unbelievably long time to build on my machine</li>
<li>read through the kubernetes source code, as I don't know golang</li>
</ul>
<p>When checking <code>minikube dashboard</code>, the sections for Deployments, Replica Sets, and Pods all have the same error:</p>
<pre><code>Failed to inspect image "registry_domain/XXX/YYY:latest": Id or size of image "registry_domain/XXX/YYY:latest" is not set
Error syncing pod, skipping: failed to "StartContainer" for "YYY" with ImageInspectError: "Failed to inspect image \"registry_domain/XXX/YYY:latest\": Id or size of image \"registry_domain/XXX/YYY:latest\" is not set"
</code></pre>
<p>and the pod logs are permanently stuck at</p>
<pre><code>container "YYY" in pod "YYY" is waiting to start: ImageInspectError
</code></pre>
<p>Looking up the error message provided leads me to <a href="https://github.com/kubernetes/minikube/issues/947" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/947</a>, but this is not the same issue, as <code>kube-dns</code> is working as expected. This is the only relevant search result, as the other results that come up are </p>
<ul>
<li>Slack chatroom archives that don't even contain the relevant error message</li>
<li>The kubernetes source, which isn't helpful to me</li>
<li>kubernetes/minikube #947, as above</li>
</ul>
<p>I'm honestly not sure where to go from here. Any advice would be appreciated. </p>
| <p>Kubernetes 1.6 may not be compatible with the latest Docker version (17.xx.xx); could you downgrade your Docker version and retry?</p>
<p>The recommended Docker version for Kubernetes v1.6 is 1.12. If you are not sure how to get that version, use this on Ubuntu or Debian:</p>
<pre><code>apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
</code></pre>
<p>For centos, <a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/#instructions" rel="nofollow noreferrer">ref</a></p>
<p><strong>remove your current Docker install before trying this</strong></p>
|
<p>I built the Docker image on my local machine and am trying to run it using kubectl, but it does not start the Docker container.</p>
<p>The image starts fine with the docker command:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
tomcat9 latest 8b228ac6f19f About an hour ago 111 MB
</code></pre>
<p>It stats with ImagePullBackOff massage.</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
tomcat9-511437148-sgkwp 0/1 ImagePullBackOff 0 9m
</code></pre>
<p>How can I make kubectl run this Docker image?</p>
<p>Thanks</p>
| <p>I see from your comments below that you are using Minikube locally.</p>
<p>Minikube runs inside a virtual machine, so the Docker daemon inside Minikube and the Docker daemon on your machine are two separate instances and do not have access to the same images. You can, however, link your local Docker client to the Docker daemon inside Minikube using <code>eval $(minikube docker-env)</code>; you can read more <a href="https://github.com/kubernetes/minikube#reusing-the-docker-daemon" rel="noreferrer">here</a>.</p>
<p>Also, you can get more information about why the <code>ImagePullBackOff</code> has happened by running <code>kubectl describe pods tomcat9-511437148-sgkwp</code>.</p>
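<p>A sketch of that flow (the image tag matches your <code>docker images</code> output; the pull-policy note is an assumption about your pod spec):</p>
<pre><code># point the local docker client at Minikube's Docker daemon
eval $(minikube docker-env)

# rebuild the image so it exists inside Minikube
docker build -t tomcat9:latest .

# in the pod/deployment spec, avoid forcing a registry pull for a local-only
# image, e.g. set imagePullPolicy: IfNotPresent
</code></pre>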
|
<p>I have minikube version v0.17.1 running on my machine. I want to simulate the environment I will have in AWS, where my MySQL instance will be outside of my Kubernetes cluster. </p>
<p>Basically, how can I expose my local MySQL instance running on my machine to the Kubernetes cluster running via minikube?</p>
| <p>Kubernetes allows you to create a service without a selector; the cluster will not create a related Endpoints object for such a service automatically. This feature is usually used to proxy a legacy component or a component outside the cluster.</p>
<ol>
<li><p>Create a service without selector</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- protocol: TCP
port: 1443
targetPort: <YOUR_MYSQL_PORT>
</code></pre></li>
<li><p>Create a relative Endpoint object</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
name: my-service
subsets:
- addresses:
- ip: <YOUR_MYSQL_ADDR>
ports:
- port: <YOUR_MYSQL_PORT>
</code></pre></li>
<li><p>Get service IP</p>
<pre><code>$ kubectl get svc my-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service <SERVICE_IP> <none> 1443/TCP 18m
</code></pre></li>
<li><p>Access your MYSQL from service <code><SERVICE_IP>:1443</code> or <code>my-service:1443</code></p></li>
</ol>
|
<p>My model as follows:</p>
<ul>
<li>a NFS server pod (Pod A) with an empty directory volume already mounted. This volume will be served as the NFS.</li>
<li>a pod (Pod B) with a container that will access a NFS volume "/resource" when a function is run on the container.</li>
</ul>
<p>I want Pod B to be able to mount the NFS volume from Pod A, but the problem is that Pod B is created and running before Pod A is created. </p>
<p>Is it possible for either:</p>
<ol>
<li>Pod B to be created & run with no volumes and then mount Pod A once Pod A is created & running</li>
<li>Pod B to be created & run with a "dummy" NFS volume that can be reconfigured to point to Pod A once Pod A is created & running</li>
</ol>
<p>I'm able to change my model if need be, so any help would be greatly appreciated.</p>
| <p>That's not possible with Kubernetes' built-in concepts.</p>
<p>If you really, really want to, you could run your pod in privileged mode and issue the mount commands from your app. I do not recommend this.</p>
|
<p>I'm relatively new to golang and need some help pointing me in the right direction.</p>
<p>I'm trying to declare a new <code>Deployment</code> type.</p>
<p>My imports look like:</p>
<pre><code>import (
"encoding/json"
"fmt"
yaml "gopkg.in/yaml.v2"
"io/ioutil"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/kubernetes/pkg/api/v1"
"k8s.io/kubernetes/pkg/apis/extensions/v1beta1"
)
</code></pre>
<p>When I try to create a Deployment Object like:</p>
<pre><code> test := v1beta1.Deployment{
Spec: v1beta1.DeploymentSpec{
Template: v1.PodTemplateSpec{
Spec: v1.PodSpec{
Containers: []v1.Container{{
Name: "test",
Image: "image_url",
},
},
},
},
},
}
</code></pre>
<p>It works, but the Deployment object that is returned doesn't have a <code>Kind</code>, which is necessary to identify the object.</p>
<p>According to <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/extensions/types.go#L162" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/extensions/types.go#L162</a></p>
<p>There's an embedded <code>metav1.TypeMeta</code> which has the <code>Kind</code> object that I need. (For reference: <a href="https://github.com/kubernetes/apimachinery/blob/master/pkg/apis/meta/v1/types.go#L38" rel="nofollow noreferrer">https://github.com/kubernetes/apimachinery/blob/master/pkg/apis/meta/v1/types.go#L38</a>)</p>
<p>I tried declaring <code>metav1.TypeMeta</code> in the struct literal like:</p>
<pre><code> test := v1beta1.Deployment{
metav1.TypeMeta: metav1.TypeMeta{Kind: "Deployment"}
Spec: v1beta1.DeploymentSpec{
Template: v1.PodTemplateSpec{
Spec: v1.PodSpec{
Containers: []v1.Container{{
Name: "test",
Image: "image_url",
},
},
},
},
},
}
</code></pre>
<p>But I get a </p>
<p><code>unknown field '"k8s.io/apimachinery/pkg/apis/meta/v1".TypeMeta' in struct literal of type v1beta1.Deployment</code></p>
<p>I suspect it is because the <code>metav1.TypeMeta</code> declaration in the <code>Deployment</code> struct is an unexported field.</p>
<p>How should I declare Kind?</p>
| <p>When using an embedded struct, the key is usually the type name without the package. You can declare your <code>TypeMeta</code> like this:</p>
<pre><code>test := v1beta1.Deployment{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps/v1beta1",
Kind: "Deployment",
},
}
</code></pre>
<p>However, manually setting the <code>TypeMeta</code> on any Kubernetes API object is usually only necessary if you plan to persist these objects yourself (for example, to generate YAML files).</p>
<p>When using the Kubernetes client API (for example, using the k8s.io/client-go package) to talk to an API server, you will not need the <code>TypeMeta</code> property, since all API operations are strongly typed anyway and all metadata can safely be inferred. After all, the API version and kind of a <code>v1beta1.Deployment</code> struct should be (and are, to the client library) obvious.</p>
|
<h1>Overview</h1>
<p>Pod fails to access its own service (timeout) in a single-node cluster. </p>
<ul>
<li>OS is Debian 8</li>
<li>Cloud is DigitalOcean or AWS (reproduced on both)</li>
<li>Kubernetes version is 1.5.4</li>
<li>Kube proxy uses iptables</li>
<li>Kubernetes installed manually</li>
<li>I do not use overlay network like weave or flannel</li>
</ul>
<p>I've changed the service to headless as a workaround but I want to find the real reason behind it.</p>
<p><strong>Works OK on GCP compute engine node (!?)</strong>. Probably would work fine with --proxy-mode=userspace as suggested <a href="https://stackoverflow.com/questions/34732597/kubernetes-pod-cant-connect-through-service-to-self-only-to-other-pod-contai">here</a>. </p>
<h1>More details</h1>
<h3>The service</h3>
<pre><code>{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"creationTimestamp": "2017-04-13T05:29:18Z",
"labels": {
"name": "anon-svc"
},
"name": "anon-svc",
"namespace": "anon",
"resourceVersion": "280",
"selfLink": "/api/v1/namespaces/anon/services/anon-svc",
"uid": "23d178dd-200a-11e7-ba08-42010a8e000a"
},
"spec": {
"clusterIP": "172.23.6.158",
"ports": [
{
"name": "agent",
"port": 8125,
"protocol": "TCP",
"targetPort": "agent"
}
],
"selector": {
"name": "anon-svc"
},
"sessionAffinity": "None",
"type": "ClusterIP"
},
"status": {
"loadBalancer": {}
}
}
</code></pre>
<h3>Kube-proxy service (systemd)</h3>
<pre><code>[Unit]
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
ExecStart=/opt/kubernetes/bin/hyperkube proxy \
--master=127.0.0.1:8080 \
--proxy-mode=iptables \
--logtostderr=true
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
</code></pre>
<p>Output from nodes (GCP is where it works), DO (DigitalOcean is where it doesn't work).</p>
<h3><code>$ iptables-save</code></h3>
<p>GCP:</p>
<pre><code># Generated by iptables-save v1.4.21 on Thu Apr 13 05:30:33 2017
*nat
:PREROUTING ACCEPT [4:364]
:INPUT ACCEPT [1:60]
:OUTPUT ACCEPT [7:420]
:POSTROUTING ACCEPT [19:1460]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-2UBKOACGE36HHR6Q - [0:0]
:KUBE-SEP-5LOF5ZUWMDRFZ2LI - [0:0]
:KUBE-SEP-5T3UFOYBS7JA45MK - [0:0]
:KUBE-SEP-YBFG2OLQ4DHWIGIM - [0:0]
:KUBE-SEP-ZSS7W6PQOP26CZ6F - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-R6UZIZCIT2GFGDFT - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-TF3HNH35HFDYKE6V - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.3:443
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.3:80
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-2UBKOACGE36HHR6Q -s 10.142.0.10/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-2UBKOACGE36HHR6Q -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-2UBKOACGE36HHR6Q --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.142.0.10:6443
-A KUBE-SEP-5LOF5ZUWMDRFZ2LI -s 172.17.0.4/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-5LOF5ZUWMDRFZ2LI -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.4:53
-A KUBE-SEP-5T3UFOYBS7JA45MK -s 172.17.0.4/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-5T3UFOYBS7JA45MK -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.4:53
-A KUBE-SEP-YBFG2OLQ4DHWIGIM -s 172.17.0.3/32 -m comment --comment "anon/anon-svc:agent" -j KUBE-MARK-MASQ
-A KUBE-SEP-YBFG2OLQ4DHWIGIM -p tcp -m comment --comment "anon/anon-svc:agent" -m tcp -j DNAT --to-destination 172.17.0.3:8125
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -s 172.17.0.1/32 -m comment --comment "anon/etcd:etcd" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -p tcp -m comment --comment "anon/etcd:etcd" -m tcp -j DNAT --to-destination 172.17.0.1:4001
-A KUBE-SERVICES -d 172.20.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 172.23.6.157/32 -p tcp -m comment --comment "anon/etcd:etcd cluster IP" -m tcp --dport 4001 -j KUBE-SVC-R6UZIZCIT2GFGDFT
-A KUBE-SERVICES -d 172.23.6.158/32 -p tcp -m comment --comment "anon/anon-svc:agent cluster IP" -m tcp --dport 8125 -j KUBE-SVC-TF3HNH35HFDYKE6V
-A KUBE-SERVICES -d 172.20.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 172.20.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-5T3UFOYBS7JA45MK
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-2UBKOACGE36HHR6Q --mask 255.255.255.255 --rsource -j KUBE-SEP-2UBKOACGE36HHR6Q
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-2UBKOACGE36HHR6Q
-A KUBE-SVC-R6UZIZCIT2GFGDFT -m comment --comment "anon/etcd:etcd" -j KUBE-SEP-ZSS7W6PQOP26CZ6F
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-5LOF5ZUWMDRFZ2LI
-A KUBE-SVC-TF3HNH35HFDYKE6V -m comment --comment "anon/anon-svc:agent" -j KUBE-SEP-YBFG2OLQ4DHWIGIM
COMMIT
# Completed on Thu Apr 13 05:30:33 2017
# Generated by iptables-save v1.4.21 on Thu Apr 13 05:30:33 2017
*filter
:INPUT ACCEPT [1250:625646]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1325:478496]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Thu Apr 13 05:30:33 2017
</code></pre>
<p>DO:</p>
<pre><code># Generated by iptables-save v1.4.21 on Thu Apr 13 05:38:05 2017
*nat
:PREROUTING ACCEPT [1:52]
:INPUT ACCEPT [1:52]
:OUTPUT ACCEPT [13:798]
:POSTROUTING ACCEPT [13:798]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3VWUJCZC3MSW5W32 - [0:0]
:KUBE-SEP-CPJSBS35VMSBOKH6 - [0:0]
:KUBE-SEP-K7JQ5XSWBQ7MTKDL - [0:0]
:KUBE-SEP-WOG5WH7F5TFFOT4E - [0:0]
:KUBE-SEP-ZSS7W6PQOP26CZ6F - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-R6UZIZCIT2GFGDFT - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-TF3HNH35HFDYKE6V - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.4:443
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.4:80
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3VWUJCZC3MSW5W32 -s 67.205.156.80/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-3VWUJCZC3MSW5W32 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-3VWUJCZC3MSW5W32 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 67.205.156.80:6443
-A KUBE-SEP-CPJSBS35VMSBOKH6 -s 172.17.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-CPJSBS35VMSBOKH6 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.3:53
-A KUBE-SEP-K7JQ5XSWBQ7MTKDL -s 172.17.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-K7JQ5XSWBQ7MTKDL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.3:53
-A KUBE-SEP-WOG5WH7F5TFFOT4E -s 172.17.0.4/32 -m comment --comment "anon/anon-svc:agent" -j KUBE-MARK-MASQ
-A KUBE-SEP-WOG5WH7F5TFFOT4E -p tcp -m comment --comment "anon/anon-svc:agent" -m tcp -j DNAT --to-destination 172.17.0.4:8125
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -s 172.17.0.1/32 -m comment --comment "anon/etcd:etcd" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -p tcp -m comment --comment "anon/etcd:etcd" -m tcp -j DNAT --to-destination 172.17.0.1:4001
-A KUBE-SERVICES -d 172.20.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 172.20.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 172.23.6.158/32 -p tcp -m comment --comment "anon/anon-svc:agent cluster IP" -m tcp --dport 8125 -j KUBE-SVC-TF3HNH35HFDYKE6V
-A KUBE-SERVICES -d 172.23.6.157/32 -p tcp -m comment --comment "anon/etcd:etcd cluster IP" -m tcp --dport 4001 -j KUBE-SVC-R6UZIZCIT2GFGDFT
-A KUBE-SERVICES -d 172.20.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-CPJSBS35VMSBOKH6
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-3VWUJCZC3MSW5W32 --mask 255.255.255.255 --rsource -j KUBE-SEP-3VWUJCZC3MSW5W32
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-3VWUJCZC3MSW5W32
-A KUBE-SVC-R6UZIZCIT2GFGDFT -m comment --comment "anon/etcd:etcd" -j KUBE-SEP-ZSS7W6PQOP26CZ6F
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-K7JQ5XSWBQ7MTKDL
-A KUBE-SVC-TF3HNH35HFDYKE6V -m comment --comment "anon/anon-svc:agent" -j KUBE-SEP-WOG5WH7F5TFFOT4E
COMMIT
# Completed on Thu Apr 13 05:38:05 2017
# Generated by iptables-save v1.4.21 on Thu Apr 13 05:38:05 2017
*filter
:INPUT ACCEPT [1127:469861]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1181:392136]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Thu Apr 13 05:38:05 2017
</code></pre>
<h3><code>$ ip route show table local</code></h3>
<p>GCP:</p>
<pre><code>local 10.142.0.10 dev eth0 proto kernel scope host src 10.142.0.10
broadcast 10.142.0.10 dev eth0 proto kernel scope link src 10.142.0.10
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 172.17.0.0 dev docker0 proto kernel scope link src 172.17.0.1
local 172.17.0.1 dev docker0 proto kernel scope host src 172.17.0.1
broadcast 172.17.255.255 dev docker0 proto kernel scope link src 172.17.0.1
</code></pre>
<p>DO:</p>
<pre><code>broadcast 10.10.0.0 dev eth0 proto kernel scope link src 10.10.0.5
local 10.10.0.5 dev eth0 proto kernel scope host src 10.10.0.5
broadcast 10.10.255.255 dev eth0 proto kernel scope link src 10.10.0.5
broadcast 67.205.144.0 dev eth0 proto kernel scope link src 67.205.156.80
local 67.205.156.80 dev eth0 proto kernel scope host src 67.205.156.80
broadcast 67.205.159.255 dev eth0 proto kernel scope link src 67.205.156.80
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 172.17.0.0 dev docker0 proto kernel scope link src 172.17.0.1
local 172.17.0.1 dev docker0 proto kernel scope host src 172.17.0.1
broadcast 172.17.255.255 dev docker0 proto kernel scope link src 172.17.0.1
</code></pre>
<h3><code>$ ip addr show</code></h3>
<p>GCP: </p>
<pre><code>1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
link/ether 42:01:0a:8e:00:0a brd ff:ff:ff:ff:ff:ff
inet 10.142.0.10/32 brd 10.142.0.10 scope global eth0
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:d0:6d:28:52 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
5: veth1219894: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether a6:4e:d4:48:4c:ff brd ff:ff:ff:ff:ff:ff
7: vetha516dc6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ce:f2:e7:5d:34:d2 brd ff:ff:ff:ff:ff:ff
9: veth4a6b171: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ee:42:d4:d8:ca:d4 brd ff:ff:ff:ff:ff:ff
</code></pre>
<p>DO:</p>
<pre><code>1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether da:74:7c:ad:9d:4d brd ff:ff:ff:ff:ff:ff
inet 67.205.156.80/20 brd 67.205.159.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.10.0.5/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::d874:7cff:fead:9d4d/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 76:66:0a:15:cb:a6 brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:85:21:28:00 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:85ff:fe21:2800/64 scope link
valid_lft forever preferred_lft forever
6: veth95a5fdf: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 12:2c:b9:80:6c:60 brd ff:ff:ff:ff:ff:ff
inet6 fe80::102c:b9ff:fe80:6c60/64 scope link
valid_lft forever preferred_lft forever
8: veth3fd8422: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 56:98:c1:96:0c:83 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5498:c1ff:fe96:c83/64 scope link
valid_lft forever preferred_lft forever
10: veth3984136: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ae:35:39:1c:bd:c1 brd ff:ff:ff:ff:ff:ff
inet6 fe80::ac35:39ff:fe1c:bdc1/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
<p>Please let me know if you need more info.</p>
| <blockquote>
<p>"targetPort": "agent"</p>
</blockquote>
<p>I don't think this is valid style in a normal service YAML; could you change it to a numeric port like 8080 and try again?</p>
|
<p>Currently, fluentd is used to collect logs produced by Kubernetes pods, which are located under <code>/var/log/containers/</code>. The problem is that different kinds of pods may have different log formats, and I want to classify those log files so that they can be processed differently.</p>
<p>Can I add labels to Kubernetes pods, such as <code>log4j</code> or <code>python_log</code>, and have them detected by fluentd?</p>
| <p>You can use the <a href="https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter" rel="nofollow noreferrer">fluent plugin kubernetes metadata filter</a> to attach each pod's metadata to its log records and then classify them based on that.</p>
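<p>A minimal sketch of wiring that filter in, assuming the <code>fluent-plugin-kubernetes_metadata_filter</code> gem is installed and container logs are tagged <code>kubernetes.*</code> as in the common fluentd DaemonSet configs:</p>
<pre><code><filter kubernetes.**>
  @type kubernetes_metadata
</filter>
</code></pre>
<p>Each record then carries the pod's metadata, which later <code>filter</code>/<code>match</code> blocks can use to route differently formatted logs to different parsers.</p>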
|
<p>I have a sandbox Kubernetes cluster, in which I shut down all pods at night, so it can scale down with the <code>cluster-autoscaler</code> add-on.</p>
<p>The problem is, it almost always keeps the master plus 2 nodes running.</p>
<p>Looking into <code>cluster-autoscaler</code> logs, I see the problem seems to be this:</p>
<pre><code>Fast evaluation: node ip-172-16-38-51.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: dns-controller-3586597043-531v5
Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: heapster-564189836-3h2ts
Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: kube-dns-1321724180-c0rjr
Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: kube-dns-autoscaler-265231812-dv17j
Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: kubernetes-dashboard-2396447444-6bwtq
Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: monitoring-influxdb-grafana-v4-50v9d
Fast evaluation: node ip-172-16-51-146.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: cluster-autoscaler-776613730-kqgk2
</code></pre>
<p>and because those pods are spread, cluster-autoscaler ends up keeping 2 or more nodes up even when there is nothing running in the default namespace...</p>
<p>Is there a way of forcing or inducing Kubernetes to schedule all those pods together?</p>
<p>The idea is to make the cluster run at night with the master plus one node only. If there isn't a way, I was thinking of adding a <code>Scheduled Action</code> to the <code>AutoScale Group</code>, so it would be forced to run everything on the same node.</p>
| <p>The solution could be to label a node, and then use a nodeSelector in the deployments to indicate that they have to run on that node. This is certainly not an advised solution, since it will break once you scale too much.</p>
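<p>A sketch of that approach (the label key/value are made up; the node name is taken from your logs):</p>
<pre><code>kubectl label node ip-172-16-38-51.ec2.internal night-node=true
</code></pre>
<p>and then in each deployment's pod template:</p>
<pre><code>spec:
  template:
    spec:
      nodeSelector:
        night-node: "true"
</code></pre>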
|
<p>I run an application which needs a list of all client IPs for synchronization. I run this application in Kubernetes and make requests against <code>https://kubernetes/api/v1/pods</code> from these client pods. So I am searching for a reliable way to identify all pods that are created by the same deployment (replica set) this way.</p>
<p>Thereby it is important, that the pods are also identified correctly during a rolling-upgrade.</p>
<p>I do have a couple of ideas I want to share, even though none of them really convinces me:</p>
<p><strong>1) Using labels (<code>?labelSelector=label=value</code>):</strong></p>
<p>1.a) Define a <strong>custom label</strong> only these pods have: A valid solution, but since the approach should be really general, I don't want to define a special label the pods need to have. Also, this way I cannot be sure that no other pods carrying the same label get included.</p>
<p>1.b) Use the auto-generated <strong>pod-template-hash</strong> label: Sadly this one gets changed during a rolling update, so while the update takes place, the selector does not work correctly.</p>
<p><strong>2) Using custom fields (<code>?fieldSelector</code>)</strong></p>
<p>I thought of using either a subpart of the <code>name</code> or <code>ownerReference.name</code> attributes. Sadly the fieldSelector is <a href="https://github.com/kubernetes/kubernetes/issues/15128__" rel="nofollow noreferrer">badly documented</a> or does not work at all. I also thought of parsing the full list of pods via <code>jq</code>, but don't really like the dependency.</p>
<p><strong>3) Identify all pods by referencing a common service</strong></p>
<p>I see no way to recognize which pod belongs to which service (besides using labels again).</p>
<hr>
<p>Are there any other ideas to identify sibling pods? I am kind of surprised that I did not find a similar question.</p>
| <p>That is what the <code>app</code> selector is typically used for.</p>
<p>You can use kubectl to list pods for a deployment. </p>
<p>The following is from the <a href="https://kubernetes.io/docs/tutorials/stateless-application/run-stateless-application-deployment/" rel="nofollow noreferrer">official docs</a>:</p>
<blockquote>
<p>List the pods created by the deployment: </p>
</blockquote>
<pre><code>kubectl get pods -l app=nginx
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-1771418926-7o5ns   1/1       Running   0          16h
nginx-deployment-1771418926-r18az   1/1       Running   0
</code></pre>
<p>You can also list Pods by yaml file:</p>
<p><code>$ kubectl get pod -f ./pod.yaml</code></p>
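<p>Since you are querying <code>https://kubernetes/api/v1/pods</code> directly, the same selector can be passed as a query parameter; a sketch using the in-cluster service-account token (the label value is illustrative):</p>
<pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes/api/v1/pods?labelSelector=app%3Dnginx"
</code></pre>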
|
<p>I use Ansible to send jobs / configurations to my k8s cluster via the kubectl command on my local machine. I have my inventory file set up so that each cluster is its own group and each cluster is basically a connection to localhost.</p>
<pre><code># Inventory File
#
[east.k.example.com]
localhost ansible_connection=local
[east2.k.example.com]
localhost ansible_connection=local
</code></pre>
<p>Then in my group_vars directory I have a different file with the name of my group from my inventory file that holds all the different variables for each cluster.</p>
<p>I limit my runs to target only one cluster with the limit option:
<code>ansible-playbook -vv create.yaml -l east2.k.example.com --tags ingress-generate-only</code></p>
<p>The problem is that when I attempt to use variables in my templates I get variables from the other groups, presumably because each group includes localhost.</p>
<p>Is there a better way to solve this issue? Can I set a flag so that each group only includes the variables from its own group_vars file?</p>
<p>thanks,</p>
| <p>Refactor your inventory to use distinct names:</p>
<pre><code># Inventory File
#
[east.k.example.com]
east ansible_connection=local
[east2.k.example.com]
east2 ansible_connection=local
</code></pre>
<p>This way Ansible will treat them as different hosts, thus not merging variables from different groups.</p>
|
<p>I am new to Kubernetes (aren't we all?).</p>
<p>Can a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secret</a>, consisting of course of keys and values, be represented as a <em>single</em> file when used by a container?</p>
<p>I understand that normally a Secret, when <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="nofollow noreferrer">mounted by a container</a>, is essentially a directory. I was wondering if there was some syntactic sugar or other construct that I'm missing that could represent it as, say, a Java <code>Properties</code> file, whose keys would be the secret's keys, and whose values would be the secret's values.</p>
| <p>This is not possible currently. The usual way to get around this is to use a single key whose value is the base64-encoded content of the entire file (a JSON object in the example below); when the Secret is mounted, that key shows up as one file.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
conf.json: eyJrZXkiOiAidmFsdWUiLCAia2V5MiI6ICJ2YWx1ZTIifQ==
</code></pre>
<p>The <code>conf.json</code> value is just:</p>
<pre><code>echo -n '{"key": "value", "key2": "value2"}' | base64
</code></pre>
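<p>When a volume referencing this Secret is mounted, the <code>conf.json</code> key appears as a single file. A minimal sketch of such a mount (the pod name, image, and mount path are made up for the example; the <code>items</code> list is optional and only there to project the one key):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: secret-file-demo          # hypothetical pod name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/app         # conf.json ends up at /etc/app/conf.json
      readOnly: true
  volumes:
  - name: config
    secret:
      secretName: mysecret
      items:                      # project only the conf.json key
      - key: conf.json
        path: conf.json
</code></pre>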
|
<p>I am a bit new to (g)rpc, and I do not really understand the concept. We have a set of NodeJS servers in a Kubernetes cluster communicating with each other through grpc. The rpc interfaces towards each server are set up when the client starts up.</p>
<p>We have recently discovered that upon restarting a server, its clients lose the connection to that server. That is, rpc calls to that server which previously worked no longer work after the server restarts, and things only start functioning again once we restart the servers in the right order.</p>
<p>What I thought was that through an address (host + port) you tell the client 'here is a procedure you can call.' Upon calling the procedure, the address is contacted, the request is processed on the server, and the result is returned. If it worked like this, the client would not care whether the server has restarted 0 or 100 times between rpc calls.</p>
<p>But given the behaviour described above, where the clients' rpc calls fail or time out, it seems like there is a socket-like connection that is established and maintained while both parties are running.</p>
<p>How does it work, and do I need to implement health checks to my rpc server on my clients to re-establish the interfaces upon server restart?</p>
<p>Thanks for your time.</p>
| <p><a href="https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md" rel="nofollow noreferrer">https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md</a> suggests that the Channel <em>will</em> go from "transient_failure" to "connecting" (and back to "ready") <em>eventually</em>, but because of exponential backoff, this could take a long time.</p>
<p><a href="https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md" rel="nofollow noreferrer">https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md</a> describes something called <code>MAX_BACKOFF</code>. In <a href="https://github.com/grpc/grpc-node/blob/master/packages/grpc-js/src/channel.ts" rel="nofollow noreferrer">https://github.com/grpc/grpc-node/blob/master/packages/grpc-js/src/channel.ts</a> that appears to be hardcoded to two minutes.</p>
|
<p>I'm writing a shell script which needs to login into the pod and execute a series of commands in a kubernetes pod.</p>
<p>Below is my sample_script.sh:</p>
<pre><code>kubectl exec octavia-api-worker-pod-test -c octavia-api bash
unset http_proxy https_proxy
mv /usr/local/etc/octavia/octavia.conf /usr/local/etc/octavia/octavia.conf-orig
/usr/local/bin/octavia-db-manage --config-file /usr/local/etc/octavia/octavia.conf upgrade head
</code></pre>
<p>After running this script, I'm not getting any output.
Any help will be greatly appreciated.</p>
| <p>Are you running all these commands as a single command? First of all, there's no <code>;</code> or <code>&&</code> between those commands, so if you paste this as a multi-line script into your terminal, only the first line goes to <code>kubectl exec</code> and the rest will get executed locally, not inside the pod.</p>
<p>Second, to tell bash to execute something, you need: <code>bash -c "command"</code>.</p>
<p>Try running this:</p>
<pre><code>$ kubectl exec POD_NAME -- bash -c "date && echo 1"
Wed Apr 19 19:29:25 UTC 2017
1
</code></pre>
<p>You can make it multiline like this:</p>
<pre><code>$ kubectl exec POD_NAME -- bash -c "date && \
echo 1 && \
echo 2"
</code></pre>
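<p>Applied to the script from the question, it could look roughly like this (pod and container names taken from the question):</p>
<pre><code>kubectl exec octavia-api-worker-pod-test -c octavia-api -- bash -c "\
  unset http_proxy https_proxy && \
  mv /usr/local/etc/octavia/octavia.conf /usr/local/etc/octavia/octavia.conf-orig && \
  /usr/local/bin/octavia-db-manage --config-file /usr/local/etc/octavia/octavia.conf upgrade head"
</code></pre>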
|
<p>I am trying to create a kubernetes NFS volume on Google Container Engine (GKE) and get it used by a deployment.</p>
<p>I did this in several steps, as shown in this GitHub repository <a href="https://github.com/mappedinn/kubernetes-nfs-volume-on-gke" rel="nofollow noreferrer">kubernetes-nfs-volume-on-gke</a>:</p>
<ol>
<li>Create a GKE cluster and GCE persistent disk</li>
<li>Configure the kubectl context to work with the GKE cluster</li>
<li>Creation of the PersistentVolume (PV) and the PersistentVolumeClaim (PVC)</li>
<li>Creation of an NFS server</li>
<li>Create a service for the NFS server to expose it (the IP address of that service is used for the creation of the NFS PV and NFS PVC)</li>
<li>Creation of NFS volume</li>
<li>Create a Deployment of a busybox for checking the NFS volume is accessible.</li>
</ol>
<p>After following these steps, this is the error I get:</p>
<pre><code>$ kubectl describe pods nfs-busybox-2762569073-lhb5p
Name: nfs-busybox-2762569073-lhb5p
Namespace: default
Node: gke-mappedinn-cluster-default-pool-f94cb0d4-fmfb/10.240.0.3
Start Time: Wed, 12 Apr 2017 04:12:20 +0400
Labels: name=nfs-busybox
pod-template-hash=2762569073
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nfs-busybox-2762569073","uid":"b1e523ae-1f14-11e7-a084-42010a8e0...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container busybox
Status: Pending
IP:
Controllers: ReplicaSet/nfs-busybox-2762569073
Containers:
busybox:
Container ID:
Image: busybox
Image ID:
Port:
Command:
sh
-c
while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/mnt from my-pvc-nfs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-20n4b (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
my-pvc-nfs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs
ReadOnly: false
default-token-20n4b:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-20n4b
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 5m 1 default-scheduler Normal Scheduled Successfully assigned nfs-busybox-2762569073-lhb5p to gke-mappedinn-cluster-default-pool-f94cb0d4-fmfb
3m 48s 2 kubelet, gke-mappedinn-cluster-default-pool-f94cb0d4-fmfb Warning FailedMount Unable to mount volumes for pod "nfs-busybox-2762569073-lhb5p_default(b1e7c901-1f14-11e7-a084-42010a8e0116)": timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-2762569073-lhb5p". list of unattached/unmounted volumes=[my-pvc-nfs]
3m 48s 2 kubelet, gke-mappedinn-cluster-default-pool-f94cb0d4-fmfb Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-2762569073-lhb5p". list of unattached/unmounted volumes=[my-pvc-nfs]
37s 37s 1 kubelet, gke-mappedinn-cluster-default-pool-f94cb0d4-fmfb Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/nfs/b1e7c901-1f14-11e7-a084-42010a8e0116-nfs" (spec.Name: "nfs") pod "b1e7c901-1f14-11e7-a084-42010a8e0116" (UID: "b1e7c901-1f14-11e7-a084-42010a8e0116") with: mount failed: exit status 32
Mounting command: /home/kubernetes/bin/mounter
Mounting arguments: 10.247.250.208:/exports /var/lib/kubelet/pods/b1e7c901-1f14-11e7-a084-42010a8e0116/volumes/kubernetes.io~nfs/nfs nfs []
Output: Running mount using a rkt fly container
run: group "rkt" not found, will use default gid when rendering images
</code></pre>
<p>In the kubernetes dashboard, the error is as follows:</p>
<p><strong>Unable to mount volumes for pod "nfs-busybox-2762569073-lhb5p_default(b1e7c901-1f14-11e7-a084-42010a8e0116)": timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-2762569073-lhb5p". list of unattached/unmounted volumes=[my-pvc-nfs]</strong></p>
<p><strong>Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-2762569073-lhb5p". list of unattached/unmounted volumes=[my-pvc-nfs]</strong></p>
<p>Have I missed something?</p>
<p>Thanks,</p>
| <p>This <a href="https://github.com/kubernetes/kubernetes/issues/33447#issuecomment-275198328" rel="nofollow noreferrer">comment</a> in the <a href="https://github.com/kubernetes/kubernetes/issues/33447" rel="nofollow noreferrer">issue</a> on kubernetes seems to solve this NFS issue on GKE.</p>
<p>Quoting that comment:</p>
<blockquote>
<p>Edit <code>examples/volumes/nfs/nfs-pv.yaml</code> change the last line to path: <code>"/"</code>.</p>
<p>Edit <code>examples/volumes/nfs/nfs-server-rc.yaml</code> change the image to the one that enabled NFSv4 image: <code>gcr.io/google_containers/volume-nfs:0.8</code></p>
</blockquote>
<p>Also there are other issues where this is tracked <a href="https://github.com/kubernetes/kubernetes/issues/24687" rel="nofollow noreferrer">here</a> and <a href="https://github.com/kubernetes/kubernetes/issues/44377" rel="nofollow noreferrer">here</a>.</p>
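<p>Roughly, the two edits from the quoted comment look like this; the server IP below is the NFS service cluster IP from this particular setup, and the field and container names follow the upstream example files, so double-check them against your copies:</p>
<pre><code># examples/volumes/nfs/nfs-pv.yaml (relevant part only)
  nfs:
    server: 10.247.250.208   # cluster IP of the NFS service in this setup
    path: "/"                # changed from "/exports"

# examples/volumes/nfs/nfs-server-rc.yaml (relevant part only)
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8   # NFSv4-enabled image
</code></pre>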
|
<p>I have</p>
<ul>
<li>kubernetes v1.6.0 setup by kubeadm v1.6.1</li>
<li>calico setup by offical <a href="http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml" rel="nofollow noreferrer">yaml</a></li>
<li>iptables v1.6.0</li>
<li>nodes are provided by AliCloud</li>
</ul>
<p>Problem:</p>
<p>The CNI network is not working. Any deployment can only be reached from the node where it is running. I suspect it is related to a conflicting or missing route-table entry, because I have another cluster on Vultr Cloud that works fine with the same setup steps.</p>
<p>Cluster Info:</p>
<pre><code>root@iZ2ze8ctk2q17u029a8wcoZ:~# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-etcd-66gf4 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system calico-node-4wxsb 2/2 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system calico-node-6n1g1 2/2 Running 0 16h 10.30.248.80 iz2zegw6nmd5t5qxy35lh0z
kube-system calico-policy-controller-2561685917-7bdd4 1/1 Running 0 16h 10.30.248.80 iz2zegw6nmd5t5qxy35lh0z
kube-system etcd-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system heapster-bx03l 1/1 Running 0 16h 192.168.31.150 iz2zegw6nmd5t5qxy35lh0z
kube-system kube-apiserver-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system kube-controller-manager-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system kube-dns-3913472980-kgzln 3/3 Running 0 16h 192.168.31.149 iz2zegw6nmd5t5qxy35lh0z
kube-system kube-proxy-ck83t 1/1 Running 0 16h 10.30.248.80 iz2zegw6nmd5t5qxy35lh0z
kube-system kube-proxy-lssdn 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
kube-system kube-scheduler-iz2ze8ctk2q17u029a8wcoz 1/1 Running 0 16h 10.27.219.50 iz2ze8ctk2q17u029a8wcoz
</code></pre>
<p>I checked each pod's logs and cannot find anything wrong.</p>
<p>Master Info:
internal ip: 10.27.219.50</p>
<pre><code>root@iZ2ze8ctk2q17u029a8wcoZ:~# ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:56:84:35:19
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 00:16:3e:30:51:ae
inet addr:10.27.219.50 Bcast:10.27.219.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4400927 errors:0 dropped:0 overruns:0 frame:0
TX packets:3906530 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:564808928 (564.8 MB) TX bytes:792611382 (792.6 MB)
eth1 Link encap:Ethernet HWaddr 00:16:3e:32:07:f8
inet addr:59.110.32.199 Bcast:59.110.35.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1148756 errors:0 dropped:0 overruns:0 frame:0
TX packets:688177 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1570341044 (1.5 GB) TX bytes:58104611 (58.1 MB)
tunl0 Link encap:IPIP Tunnel HWaddr
inet addr:192.168.201.0 Mask:255.255.255.255
UP RUNNING NOARP MTU:1440 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@iZ2ze8ctk2q17u029a8wcoZ:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 59.110.35.247 0.0.0.0 UG 0 0 0 eth1
10.27.216.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0
10.30.0.0 10.27.219.247 255.255.0.0 UG 0 0 0 eth0
10.32.0.0 0.0.0.0 255.240.0.0 U 0 0 0 weave
59.110.32.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
100.64.0.0 10.27.219.247 255.192.0.0 UG 0 0 0 eth0
172.16.0.0 10.27.219.247 255.240.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.201.0 0.0.0.0 255.255.255.192 U 0 0 0 *
root@iZ2ze8ctk2q17u029a8wcoZ:~# ip route list
default via 59.110.35.247 dev eth1
10.27.216.0/22 dev eth0 proto kernel scope link src 10.27.219.50
10.30.0.0/16 via 10.27.219.247 dev eth0
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
59.110.32.0/22 dev eth1 proto kernel scope link src 59.110.32.199
100.64.0.0/10 via 10.27.219.247 dev eth0
172.16.0.0/12 via 10.27.219.247 dev eth0
172.17.0.0/24 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
blackhole 192.168.201.0/26 proto bird
// NOTE: 10.30.0.0/16 via 10.27.219.247 dev eth0
// this rule is important, the worker node's ip is 10.30.xx.xx. If I delete this rule, I cannot ping worker node.
// this rule is 10.0.0.0/8 via 10.27.219.247 dev eth0 by default, I changed it to the above.
root@iZ2ze8ctk2q17u029a8wcoZ:~# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 3 packets, 180 bytes)
pkts bytes target prot opt in out source destination
20976 1250K cali-PREROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:6gwbT8clXdHdC1b1 */
21016 1252K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
20034 1193K DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 3 packets, 180 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 4 packets, 240 bytes)
pkts bytes target prot opt in out source destination
109K 6580K cali-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:tVnHkvAo15HuiPy0 */
111K 6738K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
1263 75780 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 4 packets, 240 bytes)
pkts bytes target prot opt in out source destination
86584 5235K cali-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:O3lYWMrLQYEMJtB5 */
0 0 MASQUERADE all -- * !docker0 172.17.0.0/24 0.0.0.0/0
3982K 239M KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
28130 1704K WEAVE all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-MARK-DROP (0 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-MARK-MASQ (5 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-2VS52M6CEWASZVOP (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.31.149:53
Chain KUBE-SEP-3XQHSFTDAPNNNDX3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.150 0.0.0.0/0 /* kube-system/heapster: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */ tcp to:192.168.31.150:8082
Chain KUBE-SEP-CH7KJM5XKO5WGA6D (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* default/kubernetes:https */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: SET name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255 tcp to:10.27.219.50:6443
Chain KUBE-SEP-X3WTOMIYJNS7APAN (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns */
0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:192.168.31.149:53
Chain KUBE-SEP-YDCHDMTZNPMRRKCX (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* kube-system/calico-etcd: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */ tcp to:10.27.219.50:6666
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-NTYB37XIWATNM25Y tcp -- * * 0.0.0.0/0 10.96.232.136 /* kube-system/calico-etcd: cluster IP */ tcp dpt:6666
0 0 KUBE-SVC-BJM46V3U5RZHCFRZ tcp -- * * 0.0.0.0/0 10.96.181.180 /* kube-system/heapster: cluster IP */ tcp dpt:80
7 420 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-BJM46V3U5RZHCFRZ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-3XQHSFTDAPNNNDX3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-2VS52M6CEWASZVOP all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255
0 0 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */
Chain KUBE-SVC-NTYB37XIWATNM25Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-YDCHDMTZNPMRRKCX all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-X3WTOMIYJNS7APAN all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain WEAVE (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- * * 10.32.0.0/12 224.0.0.0/4
1 93 MASQUERADE all -- * * !10.32.0.0/12 10.32.0.0/12
0 0 MASQUERADE all -- * * 10.32.0.0/12 !10.32.0.0/12
Chain cali-OUTPUT (1 references)
pkts bytes target prot opt in out source destination
109K 6580K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:GBTAv2p5CwevEyJm */
Chain cali-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
109K 6571K cali-fip-snat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Z-c7XtVd2Bq7s_hA */
109K 6571K cali-nat-outgoing all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:nYKhEzDlr11Jccal */
0 0 MASQUERADE all -- * tunl0 0.0.0.0/0 0.0.0.0/0 /* cali:JHlpT-eSqR1TvyYm */ ADDRTYPE match src-type !LOCAL limit-out ADDRTYPE match src-type LOCAL
Chain cali-PREROUTING (1 references)
pkts bytes target prot opt in out source destination
20976 1250K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:r6XmIziWUJsdOK6Z */
Chain cali-fip-dnat (2 references)
pkts bytes target prot opt in out source destination
Chain cali-fip-snat (1 references)
pkts bytes target prot opt in out source destination
Chain cali-nat-outgoing (1 references)
pkts bytes target prot opt in out source destination
4 376 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Wd76s91357Uv7N3v */ match-set cali4-masq-ipam-pools src ! match-set cali4-all-ipam-pools dst
</code></pre>
<p>Worker Node Info:
internal ip: 10.30.248.80</p>
<pre><code>ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:58:2b:b5:39
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 00:16:3e:2e:3d:fd
inet addr:10.30.248.80 Bcast:10.30.251.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3856596 errors:0 dropped:0 overruns:0 frame:0
TX packets:4253613 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:827402268 (827.4 MB) TX bytes:510838231 (510.8 MB)
eth1 Link encap:Ethernet HWaddr 00:16:3e:2c:db:d1
inet addr:47.93.161.177 Bcast:47.93.163.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:890451 errors:0 dropped:0 overruns:0 frame:0
TX packets:825607 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1695352720 (1.6 GB) TX bytes:62341312 (62.3 MB)
tunl0 Link encap:IPIP Tunnel HWaddr
inet addr:192.168.31.128 Mask:255.255.255.255
UP RUNNING NOARP MTU:1440 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@iZ2zegw6nmd5t5qxy35lh0Z:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 47.93.163.247 0.0.0.0 UG 0 0 0 eth1
10.0.0.0 10.30.251.247 255.0.0.0 UG 0 0 0 eth0
10.30.248.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0
47.93.160.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
100.64.0.0 10.30.251.247 255.192.0.0 UG 0 0 0 eth0
172.16.0.0 10.30.251.247 255.240.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.31.128 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.31.149 0.0.0.0 255.255.255.255 UH 0 0 0 cali3567b3362cc
192.168.31.150 0.0.0.0 255.255.255.255 UH 0 0 0 cali9d04015b0e7
root@iZ2zegw6nmd5t5qxy35lh0Z:~# ip route list
default via 47.93.163.247 dev eth1
10.0.0.0/8 via 10.30.251.247 dev eth0
10.30.248.0/22 dev eth0 proto kernel scope link src 10.30.248.80
47.93.160.0/22 dev eth1 proto kernel scope link src 47.93.161.177
100.64.0.0/10 via 10.30.251.247 dev eth0
172.16.0.0/12 via 10.30.251.247 dev eth0
172.17.0.0/24 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
blackhole 192.168.31.128/26 proto bird
192.168.31.149 dev cali3567b3362cc scope link
192.168.31.150 dev cali9d04015b0e7 scope link
// NOTE: 10.0.0.0/8 via 10.30.251.247 dev eth0
// I didn't change this one. So it is default now.
root@iZ2zegw6nmd5t5qxy35lh0Z:~# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
3524 263K cali-PREROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:6gwbT8clXdHdC1b1 */
3527 263K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
1031 53882 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 4 packets, 240 bytes)
pkts bytes target prot opt in out source destination
84174 5099K cali-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:tVnHkvAo15HuiPy0 */
85201 5163K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 7 packets, 420 bytes)
pkts bytes target prot opt in out source destination
76279 4644K cali-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:O3lYWMrLQYEMJtB5 */
0 0 MASQUERADE all -- * !docker0 172.17.0.0/24 0.0.0.0/0
87179 5342K KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
43815 2646K WEAVE all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-MARK-DROP (0 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-MARK-MASQ (5 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-2VS52M6CEWASZVOP (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.31.149:53
Chain KUBE-SEP-3XQHSFTDAPNNNDX3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.150 0.0.0.0/0 /* kube-system/heapster: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */ tcp to:192.168.31.150:8082
Chain KUBE-SEP-CH7KJM5XKO5WGA6D (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* default/kubernetes:https */
3 180 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: SET name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255 tcp to:10.27.219.50:6443
Chain KUBE-SEP-X3WTOMIYJNS7APAN (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.31.149 0.0.0.0/0 /* kube-system/kube-dns:dns */
0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:192.168.31.149:53
Chain KUBE-SEP-YDCHDMTZNPMRRKCX (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 10.27.219.50 0.0.0.0/0 /* kube-system/calico-etcd: */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */ tcp to:10.27.219.50:6666
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
3 180 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-NTYB37XIWATNM25Y tcp -- * * 0.0.0.0/0 10.96.232.136 /* kube-system/calico-etcd: cluster IP */ tcp dpt:6666
0 0 KUBE-SVC-BJM46V3U5RZHCFRZ tcp -- * * 0.0.0.0/0 10.96.181.180 /* kube-system/heapster: cluster IP */ tcp dpt:80
0 0 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-BJM46V3U5RZHCFRZ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-3XQHSFTDAPNNNDX3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/heapster: */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-2VS52M6CEWASZVOP all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
3 180 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-CH7KJM5XKO5WGA6D side: source mask: 255.255.255.255
0 0 KUBE-SEP-CH7KJM5XKO5WGA6D all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */
Chain KUBE-SVC-NTYB37XIWATNM25Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-YDCHDMTZNPMRRKCX all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/calico-etcd: */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-X3WTOMIYJNS7APAN all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain WEAVE (1 references)
pkts bytes target prot opt in out source destination
Chain cali-OUTPUT (1 references)
pkts bytes target prot opt in out source destination
84174 5099K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:GBTAv2p5CwevEyJm */
Chain cali-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
86501 5298K cali-fip-snat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Z-c7XtVd2Bq7s_hA */
86501 5298K cali-nat-outgoing all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:nYKhEzDlr11Jccal */
0 0 MASQUERADE all -- * tunl0 0.0.0.0/0 0.0.0.0/0 /* cali:JHlpT-eSqR1TvyYm */ ADDRTYPE match src-type !LOCAL limit-out ADDRTYPE match src-type LOCAL
Chain cali-PREROUTING (1 references)
pkts bytes target prot opt in out source destination
3524 263K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:r6XmIziWUJsdOK6Z */
Chain cali-fip-dnat (2 references)
pkts bytes target prot opt in out source destination
Chain cali-fip-snat (1 references)
pkts bytes target prot opt in out source destination
Chain cali-nat-outgoing (1 references)
pkts bytes target prot opt in out source destination
29 1726 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Wd76s91357Uv7N3v */ match-set cali4-masq-ipam-pools src ! match-set cali4-all-ipam-pools dst
</code></pre>
| <p>The problem was found with <code>calicoctl node status</code>: the calico/node instances were using their public IPs to communicate with each other, but the nodes in AliCloud are behind a firewall, so they cannot reach each other via their public IP addresses.</p>
<p>As gunjan5 suggested, I used the <code>IP_AUTODETECTION_METHOD</code> environment variable to make Calico pick the internal interface. Problem solved.</p>
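<p>For reference, a minimal sketch of the relevant part of the calico-node DaemonSet container spec, assuming the internal NIC is <code>eth0</code> as in the <code>ifconfig</code> output above (adjust the interface name to your nodes):</p>
<pre><code>      containers:
      - name: calico-node
        env:
        - name: IP_AUTODETECTION_METHOD
          value: "interface=eth0"   # pick the address on the internal interface
</code></pre>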
|
<p>I'm trying set up custom metrics with a <code>HorizontalPodAutoscaler</code> on a 1.6.1 alpha GKE cluster.</p>
<p>According to <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#prerequisites" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#prerequisites</a> I need to set <code>--horizontal-pod-autoscaler-use-rest-clients</code> on <code>kube-controller-manager</code> to enable metrics collection. From GKE, it's not clear whether it's possible to set flags on <code>kube-controller-manager</code>. Any ideas?</p>
<p>Has anyone gotten custom metrics working with HPA on GKE?</p>
| <p>You can't manipulate any of the Kubernetes control-plane components directly in GKE (Google Container Engine); Google manages them for you. If you need to set flags such as <code>--horizontal-pod-autoscaler-use-rest-clients</code> on <code>kube-controller-manager</code>, you may need to deploy your own Kubernetes cluster.</p>
|