Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>First of all, the request goes to a proxy service that I've implemented; the service forwards requests to pods randomly without using sessionAffinity. I want to send requests to the same pod based on a custom value that I've set in the request parameters using the POST method. I've used sessionAffinity in my service yml.</p>
<p>Here's service yml with sessionAffinity:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: abcd-service
  namespace: ab-services
spec:
  ports:
  - name: http
    protocol: TCP
    port: ****
    targetPort: ****
    nodePort: *****
  selector:
    app: abcd-pod
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 600
  type: NodePort
</code></pre>
<p>Now the problem is that when requests are sent by multiple clients from the same IP address, all requests are directed to a single pod and not to the other replicas, causing uneven load balancing. But I don't want requests to be forwarded randomly either. I want all requests, whether from the same client or different clients, to be forwarded based on the custom value that I set in the POST request and not by ClientIP, considering that ClientIP resolves to the source IP of each request. </p>
| Akshay Lakhe | <p>As you can read <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/" rel="nofollow noreferrer">here</a>, it currently supports only <code>ClientIP</code> and <code>None</code> values.</p>
<blockquote>
<p><strong>sessionAffinity</strong> <em>string</em> Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity.
Must be ClientIP or None. Defaults to None. More info:
<a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies</a></p>
</blockquote>
<p>Unfortunately, there are no other values allowed.</p>
| mario |
<p>I have installed Harbor as follows: </p>
<pre><code>helm install hub harbor/harbor \
--version 1.3.2 \
--namespace tool \
--set expose.ingress.hosts.core=hub.service.example.io \
--set expose.ingress.annotations.'kubernetes\.io/ingress\.class'=istio \
--set expose.ingress.annotations.'cert-manager\.io/cluster-issuer'=letsencrypt-prod \
--set externalURL=https://hub.service.example.io \
--set notary.enabled=false \
--set secretkey=secret \
--set harborAdminPassword=pw
</code></pre>
<p>Everything is up and running but the page is not reachable via <code>https://hub.service.example.io</code>. The same problem occurs here <a href="https://stackoverflow.com/questions/61590027/why-css-and-png-are-not-accessible/61592425#61592425">Why css and png are not accessible?</a> but how to set wildcard <code>*</code> in Helm? </p>
<p><strong>Update</strong></p>
<p>Istio supports ingress gateway. This for example works without Gateway and VirtualService definition: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-kubernetes-first
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
selector:
app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-kubernetes-first
spec:
replicas: 3
selector:
matchLabels:
app: hello-kubernetes-first
template:
metadata:
labels:
app: hello-kubernetes-first
spec:
containers:
- name: hello-kubernetes
image: paulbouwer/hello-kubernetes:1.8
ports:
- containerPort: 8080
env:
- name: MESSAGE
value: Hello from the first deployment!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: istio
name: helloworld-ingress
spec:
rules:
- host: "hw.service.example.io"
http:
paths:
- path: "/*"
backend:
serviceName: hello-kubernetes-first
servicePort: 80
---
</code></pre>
| softshipper | <p>I would say it won't work with that Ingress resource and Istio.</p>
<p>As mentioned <a href="https://istio.io/faq/traffic-management/" rel="nofollow noreferrer">here</a> </p>
<blockquote>
<p>Simple ingress specifications, with host, TLS, and exact path based matches will work out of the box without the need for route rules. However, note that the path used in the ingress resource should not have any . characters.</p>
<p>For example, the following ingress resource matches requests for the example.com host, with /helloworld as the URL.</p>
</blockquote>
<pre><code>$ kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: simple-ingress
annotations:
kubernetes.io/ingress.class: istio
spec:
rules:
- host: example.com
http:
paths:
- path: /helloworld
backend:
serviceName: myservice
servicePort: grpc
EOF
</code></pre>
<blockquote>
<p>However, the following rules will not work because they use regular expressions in the path and ingress.kubernetes.io annotations:</p>
</blockquote>
<pre><code>$ kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: this-will-not-work
annotations:
kubernetes.io/ingress.class: istio
# Ingress annotations other than ingress class will not be honored
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: example.com
http:
paths:
- path: /hello(.*?)world/
backend:
serviceName: myservice
servicePort: grpc
EOF
</code></pre>
<hr>
<p>I assume your hello-world example works because it uses just one annotation, the ingress class. </p>
<p>If you take a look at the annotations of the Harbor chart <a href="https://github.com/goharbor/harbor-helm/blob/master/values.yaml#L36-L40" rel="nofollow noreferrer">here</a>, they might be the problem when you want to use Ingress with Istio.</p>
<hr>
<blockquote>
<p>but how to set wildcard * in Helm?</p>
</blockquote>
<p>The wildcard has nothing to do with it here. As I mentioned in this <a href="https://stackoverflow.com/questions/61590027/why-css-and-png-are-not-accessible/61592425#61592425">answer</a>, you can use either a wildcard or additional paths, and the additional paths are already defined properly. Take a look at the ingress paths <a href="https://github.com/goharbor/harbor-helm/blob/master/templates/ingress/ingress.yaml#L67-L104" rel="nofollow noreferrer">here</a>.</p>
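<p>From memory, the paths defined in that template look roughly like the list below (a paraphrase for illustration, not a verbatim copy of the chart - check the linked file for the exact definition):</p>
<pre><code>- /api/
- /service/
- /v2/
- /chartrepo/
- /c/
- /
</code></pre>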
| Jakub |
<p>I have deployed <strong>istio</strong> in my <strong>eks</strong> cluster with the <strong>demo</strong> profile. The <strong>demo</strong> profile ships a <strong>kiali</strong> deployment with it. The default access secret for the <strong>kiali dashboard</strong> is (<strong>username: admin</strong>, <strong>password: admin</strong>). I was able to access my dashboard with these credentials. Then I created my own secrets. </p>
<pre><code>$ echo shajaltest | base64
$ c2hhamFsdGVzdAo=
</code></pre>
<p>Deleted the secrets for kiali.</p>
<pre><code>$ kubectl delete secrets kiali -n istio-system
</code></pre>
<p>Deployed the secrets again with this <strong>yaml</strong></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: kiali
namespace: istio-system
labels:
app: kiali
type: Opaque
data:
username: c2hhamFsdGVzdAo=
passphrase: c2hhamFsdGVzdAo=
</code></pre>
<p>After all of that I deleted the kiali pod.
After that I cannot access my dashboard with this username and password. <strong>What should I do?</strong> </p>
<p>I also checked the kiali secret. It has been updated with the recent secret value. </p>
<p>Here is the log of <strong>kiali pod</strong>.</p>
<pre><code>I0408 18:30:30.194890 1 kiali.go:66] Kiali: Version: v1.15.1, Commit:
3263b7692bcc06ad40292bedea5a9213e04aa9db
I0408 18:30:30.195179 1 kiali.go:205] Using authentication strategy [login]
I0408 18:30:30.195205 1 kiali.go:87] Kiali: Console version: 1.15.0
I0408 18:30:30.195212 1 kiali.go:286] Updating base URL in index.html with [/kiali]
I0408 18:30:30.195376 1 kiali.go:267] Generating env.js from config
I0408 18:30:30.197274 1 server.go:57] Server endpoint will start at [:20001/kiali]
I0408 18:30:30.197285 1 server.go:58] Server endpoint will serve static content from [/opt/kiali/console]
I0408 18:30:30.197297 1 metrics_server.go:18] Starting Metrics Server on [:9090]
I0408 18:30:30.197367 1 kiali.go:137] Secret is now available.
</code></pre>
| Shajal Ahamed | <p>Have you tried to follow the <a href="https://istio.io/docs/tasks/observability/kiali/#create-a-secret" rel="nofollow noreferrer">istio documentation</a> about changing the credentials in kiali?</p>
<hr>
<p>I made a reproduction of your issue with below steps and everything worked just fine.</p>
<blockquote>
<p>Enter a <strong>Kiali username</strong> when prompted:</p>
</blockquote>
<pre><code>KIALI_USERNAME=$(read -p 'Kiali Username: ' uval && echo -n $uval | base64)
</code></pre>
<blockquote>
<p>Enter a <strong>Kiali passphrase</strong> when prompted:</p>
</blockquote>
<pre><code>KIALI_PASSPHRASE=$(read -sp 'Kiali Passphrase: ' pval && echo -n $pval | base64)
</code></pre>
<blockquote>
<p><strong>To create a secret</strong>, run the following commands:</p>
</blockquote>
<pre><code>NAMESPACE=istio-system
</code></pre>
<hr>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
name: kiali
namespace: $NAMESPACE
labels:
app: kiali
type: Opaque
data:
username: $KIALI_USERNAME
passphrase: $KIALI_PASSPHRASE
EOF
</code></pre>
<p>And simply recreate the kiali pod with</p>
<pre><code>kubectl delete pod <name_of_the_kiali_pod> -n istio-system
</code></pre>
<hr>
<p><strong>EDIT</strong></p>
<p>As @Shajal Ahamed mentioned in the comments, the problem was the absence of <code>-n</code>. If you want to use plain <code>echo</code>, then use:</p>
<pre><code>echo -n username | base64
echo -n passphrase | base64
</code></pre>
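<p>To double check that the stored values decode without a trailing newline, something like this should work (the namespace and secret name are the ones used in the steps above):</p>
<pre><code>kubectl get secret kiali -n istio-system -o jsonpath='{.data.username}' | base64 --decode
kubectl get secret kiali -n istio-system -o jsonpath='{.data.passphrase}' | base64 --decode
</code></pre>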
| Jakub |
<p>I have many domains pointing to the Ingress Controller IP. I want to block /particular-path for all the domains/sites. Is there a way to do this?
I can use <code>nginx.ingress.kubernetes.io/configuration-snippet: | </code> for each site, but I'm looking for a way to do it for all sites/domains/Ingress resources at once.</p>
<p>Controller used: <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/</a></p>
| Tek Nath Acharya | <p>There are two ways to achieve this:</p>
<p><strong>1. First one is with using <code>server-snippet</code> annotation:</strong></p>
<blockquote>
<p>Using the annotation <code>nginx.ingress.kubernetes.io/server-snippet</code> it
is possible to add custom configuration in the server configuration
block.</p>
</blockquote>
<p>Here is my manifest for the ingress object:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: minimal-ingress
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
location ~* /admin-access {
deny all;
return 403;
}
spec:
rules:
- host: domain.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 80
</code></pre>
<p>Please note that using <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-snippet" rel="nofollow noreferrer">this</a> approach :</p>
<blockquote>
<p>This annotation can be used only once per host.</p>
</blockquote>
<hr />
<p><strong>2. Second one is with usage of <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#configmaps" rel="nofollow noreferrer">ConfigMaps</a> and <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#server-snippet" rel="nofollow noreferrer">Server-snippet</a>:</strong></p>
<p>What you have to do is to locate your <code>configMap</code>:</p>
<pre><code> kubectl get pod <nginx-ingress-controller> -o yaml
</code></pre>
<p>The ConfigMap name is located in the container <code>args</code>:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
containers:
- args:
- /nginx-ingress-controller
- configmap=$(POD_NAMESPACE)/nginx-loadbalancer-conf
</code></pre>
<p>Then just edit it and add the <code>server-snippet</code> part:</p>
<pre class="lang-yaml prettyprint-override"><code> apiVersion: v1
data:
server-snippet: |
location /admin-access {
deny all;
}
</code></pre>
<p>This approach allows you to define a restricted location globally for all hosts defined in the Ingress resource.</p>
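<p>The ConfigMap itself can be edited in place; the name below is the one taken from the controller arguments above, and the namespace placeholder stands for whatever namespace your controller runs in:</p>
<pre><code>kubectl edit configmap nginx-loadbalancer-conf -n &lt;controller-namespace&gt;
</code></pre>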
<hr />
<p>Please note that with the <code>server-snippet</code> approach the path that you are blocking cannot be defined in the ingress resource object. There is, however, another way with <code>location-snippet</code> via <code>ConfigMap</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>location ~* "^/web/admin {
deny all;
}
</code></pre>
<p>With this, for every existing path in the ingress object there will be an ingress rule, but it will be blocked for a specific URI (in the example above it will be blocked when <code>admin</code> appears after <code>web</code>). All of the other URIs will be passed through.</p>
<hr />
<p><strong>3. Here`s a test:</strong></p>
<pre class="lang-sh prettyprint-override"><code>➜ curl -H "Host: domain.com" 172.17.0.4/test
...
"path": "/test",
"headers": {
...
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "domain.com",
"ip": "172.17.0.1",
"ips": [
"172.17.0.1"
],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "web-6b686fdc7d-4pxt9"
...
</code></pre>
<p>And here is a test with a path that has been denied:</p>
<pre class="lang-html prettyprint-override"><code>➜ curl -H "Host: domain.com" 172.17.0.4/admin-access
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.19.0</center>
</body>
</html>
➜ curl -H "Host: domain.com" 172.17.0.4/admin-access/test
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.19.0</center>
</body>
</html>
</code></pre>
<hr />
<p>Additional information: <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="nofollow noreferrer">Deprecated APIs Removed In 1.16</a>. Here’s What You Need To Know:</p>
<blockquote>
<p>The v1.22 release will stop serving the following deprecated API
versions in favor of newer and more stable API versions:</p>
<p>Ingress in the extensions/v1beta1 API version will no longer be
served</p>
</blockquote>
| acid_fuji |
<p>I'm trying to use pyspark interpreter on a zeppelin notebook deployed using Kubernetes. I have configured spark to use spark executors as well (5 cores, 1G storage). However, when I try to run pandas/seaborn and manipulate pandas dataframe, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-6458200865742049511.py", line 367, in <module>
raise Exception(traceback.format_exc())
Exception: Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-6458200865742049511.py", line 355, in <module>
exec(code, _zcUserQueryNameSpace)
File "<stdin>", line 2, in <module>
File "/opt/spark/python/pyspark/sql/dataframe.py", line 1703, in toPandas
return pd.DataFrame.from_records(self.collect(), columns=self.columns)
File "/opt/spark/python/pyspark/sql/dataframe.py", line 438, in collect
port = self._jdf.collectToPython()
File "/opt/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/opt/spark/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/opt/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o11395.collectToPython.:
org.apache.spark.SparkException: Job aborted due to stage failure: ResultStage 1395 (toPandas at <stdin>:2) has failed the maximum allowable number of times: 4.
Most recent failure reason: org.apache.spark.shuffle.FetchFailedException: Failure while fetching StreamChunkId{streamId=1165701532984, chunkIndex=0}:
java.lang.RuntimeException:
Failed to open file: /tmp/spark-local/blockmgr-aa951820-47d3-404f-a97e-12d25f460aec/13/shuffle_311_0_0.index
at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.getSortBasedShuffleBlockData(ExternalShuffleBlockResolver.java:249) at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.getBlockData(ExternalShuffleBlockResolver.java:174) at org.apache.spark.network.shuffle.ExternalShuffleBlockHandler$1.next(ExternalShuffleBlockHandler.java:105) at org.apache.spark.network.shuffle.ExternalShuffleBlockHandler$1.next(ExternalShuffleBlockHandler.java:95) at org.apache.spark.network.server.OneForOneStreamManager.getChunk(OneForOneStreamManager.java:89) at org.apache.spark.network.server.TransportRequestHandler.processFetchRequest(TransportRequestHandler.java:125) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:103) at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) at java.lang.Thread.run(Thread.java:748) Caused by: 
java.util.concurrent.ExecutionException: java.io.FileNotFoundException: /tmp/spark-local/blockmgr-aa951820-47d3-404f-a97e-12d25f460aec/13/shuffle_311_0_0.index (No such file or directory) at org.spark_project.guava.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:306) at org.spark_project.guava.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:293) at org.spark_project.guava.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) at org.spark_project.guava.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135) at org.spark_project.guava.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2410) at org.spark_project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2380) at org.spark_project.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342) at org.spark_project.guava.cache.LocalCache$Segment.get(LocalCache.java:2257) at org.spark_project.guava.cache.LocalCache.get(LocalCache.java:4000) at org.spark_project.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004) at org.spark_project.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874) at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.getSortBasedShuffleBlockData(ExternalShuffleBlockResolver.java:240) ... 34 more Caused by: java.io.FileNotFoundException: /tmp/spark-local/blockmgr-aa951820-47d3-404f-a97e-12d25f460aec/13/shuffle_311_0_0.index (No such file or directory) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.<init>(FileInputStream.java:138) at org.apache.spark.network.shuffle.ShuffleIndexInformation.<init>(ShuffleIndexInformation.java:41) at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver$1.load(ExternalShuffleBlockResolver.java:111) at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver$1.load(ExternalShuffleBlockResolver.java:109) at org.spark_project.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599) at org.spark_project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379) ... 
40 more at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:442) at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:418) at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:59) at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at org.apache.spark.util.random.SamplingUtils$.reservoirSampleAndCount(SamplingUtils.scala:41) at org.apache.spark.RangePartitioner$$anonfun$9.apply(Partitioner.scala:263) at org.apache.spark.RangePartitioner$$anonfun$9.apply(Partitioner.scala:261) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:844) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:844) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:108) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.spark.network.client.ChunkFetchFailureException: Failure while fetching StreamChunkId{streamId=1165701532984, chunkIndex=0}: java.lang.RuntimeException: Failed to open file: /tmp/spark-local/blockmgr-aa951820-47d3-404f-a97e-12d25f460aec/13/shuffle_311_0_0.index at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.getSortBasedShuffleBlockData(ExternalShuffleBlockResolver.java:249) at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.getBlockData(ExternalShuffleBlockResolver.java:174) at org.apache.spark.network.shuffle.ExternalShuffleBlockHandler$1.next(ExternalShuffleBlockHandler.java:105) at org.apache.spark.network.shuffle.ExternalShuffleBlockHandler$1.next(ExternalShuffleBlockHandler.java:95) at org.apache.spark.network.server.OneForOneStreamManager.getChunk(OneForOneStreamManager.java:89) at org.apache.spark.network.server.TransportRequestHandler.processFetchRequest(TransportRequestHandler.java:125) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:103) at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118) at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) at java.lang.Thread.run(Thread.java:748) Caused by: java.util.concurrent.ExecutionException: java.io.FileNotFoundException: /tmp/spark-local/blockmgr-aa951820-47d3-404f-a97e-12d25f460aec/13/shuffle_311_0_0.index (No such file or directory) at org.spark_project.guava.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:306) at org.spark_project.guava.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:293) at org.spark_project.guava.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) at org.spark_project.guava.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135) at org.spark_project.guava.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2410) at org.spark_project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2380) at org.spark_project.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342) at org.spark_project.guava.cache.LocalCache$Segment.get(LocalCache.java:2257) 
at org.spark_project.guava.cache.LocalCache.get(LocalCache.java:4000) at org.spark_project.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004) at org.spark_project.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874) at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.getSortBasedShuffleBlockData(ExternalShuffleBlockResolver.java:240) ... 34 more Caused by: java.io.FileNotFoundException: /tmp/spark-local/blockmgr-aa951820-47d3-404f-a97e-12d25f460aec/13/shuffle_311_0_0.index (No such file or directory) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.<init>(FileInputStream.java:138) at org.apache.spark.network.shuffle.ShuffleIndexInformation.<init>(ShuffleIndexInformation.java:41) at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver$1.load(ExternalShuffleBlockResolver.java:111) at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver$1.load(ExternalShuffleBlockResolver.java:109) at org.spark_project.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599) at org.spark_project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379) ... 40 more at org.apache.spark.network.client.TransportResponseHandler.handle(TransportResponseHandler.java:182) at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:120) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) ... 1 more
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1499)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1487)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1486)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1486)
at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1310)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1711)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
at org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:266)
at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:128)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$.prepareShuffleDependency(ShuffleExchange.scala:221)
at org.apache.spark.sql.execution.exchange.ShuffleExchange.prepareShuffleDependency(ShuffleExchange.scala:87)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:124)
at org.apache.spark.sql.execution.exchange.ShuffleExchange$$anonfun$doExecute$1.apply(ShuffleExchange.scala:115)
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
at org.apache.spark.sql.execution.exchange.ShuffleExchange.doExecute(ShuffleExchange.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:252)
at org.apache.spark.sql.execution.SortExec.inputRDDs(SortExec.scala:121)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:386)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:228)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply$mcI$sp(Dataset.scala:2803)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:2800)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:2800)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2823)
at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:2800)
at sun.reflect.GeneratedMethodAccessor344.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
</code></pre>
<p>I have checked <code>/tmp/spark-local/</code> for each spark executor and discovered that <code>blockmgr-aa951820-47d3-404f-a97e-12d25f460aec</code> (as shown in the logs) didn't exist on 2 out of 3 executor pods. I have checked the zeppelin server pod as well and it didn't have the aforementioned directory which is expected. </p>
<p>Here's one paragraph I'm trying to run</p>
<pre><code>%pyspark
import seaborn as sns
#Plot xxxxxxx
x = df_age.groupby("z").agg({"y":"mean"}).sort("z").toPandas()
sns.barplot("z","avg(y)", data = x, color = "cadetblue")
</code></pre>
<p>It works/runs sometimes but I want it to work flawlessly. Thanks!</p>
<p>[EDIT]
I had progress with the following observations:</p>
<ol>
<li><p>All jobs run without errors when there is only one spark executor pod. </p></li>
<li><p>In relation to (1) I'm suspecting that this has something to do with spark shuffling. I'm trying to understand how this works but here's the best lead I've got.
<a href="https://medium.com/@foundev/you-won-t-believe-how-spark-shuffling-will-probably-bite-you-also-windowing-e39d07bf754e" rel="nofollow noreferrer">https://medium.com/@foundev/you-won-t-believe-how-spark-shuffling-will-probably-bite-you-also-windowing-e39d07bf754e</a></p></li>
</ol>
| Joshua Villanueva | <p>For everyone concerned, we were able to verify that this is an external shuffle service issue. Please check this thread:</p>
<p><a href="https://stackoverflow.com/questions/59730550/how-to-fix-error-opening-block-streamchunkid-on-external-spark-shuffle-service">How to fix "Error opening block StreamChunkId" on external spark shuffle service</a></p>
| Joshua Villanueva |
<p>I want to reach Prometheus Operator over Internet via HTTPS.</p>
<p>I deployed Prometheus via HELM and provided the following custom-vault.yml</p>
<p><strong>Chart Version:</strong></p>
<pre><code> kube-prometheus-0.0.105
nginx-ingress-0.31.0
</code></pre>
<p><strong>Deployment:</strong></p>
<pre><code>helm install coreos/kube-prometheus --name kube-prometheus --set global.rbacEnable=false \
--namespace monitoring -f custom-vault.yml
</code></pre>
<p><strong>What I expect</strong>:
I want to browse Prometheus via the URL
<a href="https://example.com/prometheus" rel="nofollow noreferrer">https://example.com/prometheus</a></p>
<p><strong>my custom-vault.yml</strong> </p>
<pre><code>prometheus:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
tls:
- secretName: tls-secret
hosts:
- example.com
hosts:
- example.com
paths:
- /
#prometheusSpec:
#externalUrl: https://example.com/prometheus
</code></pre>
<p><strong>What happens?</strong></p>
<p>I can reach <a href="https://example.com/graph" rel="nofollow noreferrer">https://example.com/graph</a> but the CSS files don't get loaded due to a path error.</p>
<p>When I try to configure <a href="https://example.com/prometheus/graph" rel="nofollow noreferrer">https://example.com/prometheus/graph</a>, the CSS also doesn't work, and when I click on Alerts in the frontend I get redirected to <a href="https://example.com/alerts" rel="nofollow noreferrer">https://example.com/alerts</a> and get a "default backend 404" error.</p>
<p>Other Ingress routes for several Services / Pods are working.
Prometheus itself is also working - when I expose the port to localhost, Prometheus is displayed correctly.</p>
| Stefan Schulze | <p>Thanks for your input. You helped me solve the problem - not in a direct way, but it gave me a new point of view.</p>
<p><strong>How I solved the issue:</strong>
I changed the deployment from coreos/kube-prometheus to stable/prometheus-operator.</p>
<p>The current version is 6.11.
I was not able to install 6.11 directly - I needed to install 6.0.0 and upgrade to 6.11.</p>
<p>I also provided a new custom-value.yaml.</p>
<p>With this setting it works perfectly!</p>
<p>custom-value.yaml</p>
<pre><code>prometheus:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/whitelist-source-range: "{my_whitelisted_public_ip}"
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
tls:
- secretName: tls-secret
hosts:
- example.com
hosts:
- example.com
paths:
- /
prometheusSpec:
externalUrl: http://example.com/prometheus
routePrefix : prometheus/
</code></pre>
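<p>For completeness, here is a sketch of the install/upgrade commands used with this values file; the release name, namespace, and exact chart versions are assumptions based on the description above:</p>
<pre><code>helm install stable/prometheus-operator --name kube-prometheus \
  --version 6.0.0 --namespace monitoring -f custom-value.yaml
helm upgrade kube-prometheus stable/prometheus-operator \
  --version 6.11.0 --namespace monitoring -f custom-value.yaml
</code></pre>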
<p>Thank you.</p>
<p>BR</p>
| Stefan Schulze |
<p>I am trying to create a helm chart for my service with the following structure:</p>
<pre><code>.
├── my-app
│ ├── Chart.yaml
│ ├── templates
│ │ ├── deployment.yaml
│ │ ├── istio-virtualservice.yaml
│ │ └── service.yaml
│ └── values.yaml
</code></pre>
<p>After installing the helm chart the deployment and service are being created successfully but the virtualservice is not being created. </p>
<pre><code>$ helm install -name my-app ./my-app -n my-namespace
$ kubectl get pods -n my-namespace
NAME READY STATUS RESTARTS AGE
my-app-5578cbb95-xzqzk 2/2 Running 0 5m
$ kubectl get vs
NAME GATEWAYS HOSTS AGE
<Empty>
</code></pre>
<p>My istio virtual service yaml files looks like:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-virtual-service
spec:
hosts:
- {{$.Values.props.host | quote}}
gateways:
- my-api-gateway
http:
- match:
- uri:
prefix: /app/
rewrite:
uri: "/"
route:
- destination:
port:
number: 8083
host: my-service.my-namespace.svc.cluster.local
</code></pre>
<p>Surprisingly, if i apply the above yaml after helm install is done deploying the app then the virtualservice gets created. </p>
<pre><code>$ kubectl apply -f istio-vs.yaml
$ kubectl get vs
NAME GATEWAYS HOSTS AGE
my-virtual-service [my-api-gateway] [my-host.com] 60s
</code></pre>
<p>Please help me debug the issue and let me know if more debug information is needed.</p>
<pre><code>$ helm version
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}
</code></pre>
<pre><code>$ istioctl version
client version: 1.4.1
control plane version: 1.4.1
data plane version: 1.4.1 (2 proxies)
</code></pre>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:18:29Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| Kapil Ratnani | <p><strong>Use</strong> </p>
<pre><code>kubectl get vs -n my-namespace
</code></pre>
<p><strong>instead of</strong> </p>
<pre><code>kubectl get vs
</code></pre>
<hr>
<p>That's because you have deployed everything in the <strong>my-namespace</strong> namespace.</p>
<blockquote>
<p>helm install -name my-app ./my-app -n <strong>my-namespace</strong></p>
</blockquote>
<p>And you're searching for the <a href="https://istio.io/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">virtual service</a> in the <strong>default</strong> namespace.</p>
<hr>
<p>It's working when you apply it by yourself, because there is no namespace in the virtual service yaml and it's deployed in the default one. </p>
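<p>If you want the chart to pin the namespace explicitly, so the object always follows the release, a small tweak to the template metadata does it, using Helm's built-in <code>.Release.Namespace</code>:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtual-service
  namespace: {{ .Release.Namespace }}
</code></pre>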
<hr>
<p>Additional info: I see you have a <a href="https://istio.io/docs/reference/config/networking/gateway/" rel="nofollow noreferrer">gateway</a> which is already deployed. If it's not in the same namespace as the virtual service, you should reference it like in the example below.</p>
<p>Check the <code>spec.gateways</code> section</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo-Mongo
namespace: bookinfo-namespace
spec:
gateways:
  - some-config-namespace/my-gateway # can omit the namespace if the gateway is in the same namespace as the virtual service
</code></pre>
<hr>
<p>I hope this answers your question. Let me know if you have any more questions.</p>
| Jakub |
<p>Let's say I have a cronjob created at my cluster. It runs once a day.</p>
<p>My question is: on the second, third, etc. run, will it use a cached copy of the pulled image (does Kubernetes have something like a "pulled image local cache"?), or will my Kubernetes cluster pull the image from my private repository for each and every cronjob run?</p>
| jliendo | <p>That depends on how your image is tagged and on additional configuration. If it's an image with a specific version tag, it will be pulled only once until you specify another. If the image is tagged with the <code>:latest</code> tag or <code>imagePullPolicy: Always</code> is specified for that pod, then Kubernetes will try to keep the image for that particular pod as up to date as possible (it might pull the image across restarts). A more common problem is how to force Kubernetes to pull a new version of the image each time the pod is run.
Please read here:</p>
<p><a href="https://stackoverflow.com/a/35941366/12053054">https://stackoverflow.com/a/35941366/12053054</a></p>
| Norbert Dopjera |
<p>I deployed the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> helm chart. While this chart offers a really nice starting point it has lots of default dashboards which I do not want to use. In the <em>values.yaml</em> of the chart, there is an option <em>defaultDashboardsEnabled: true</em>, which seems to be what I am looking for but if I set it to false using the code below in my values file, which I mount into the helm chart, the dashboards are still there. Does anyone know why this does not work?</p>
<p>A possibility which I thought of is that the chart has both a subchart called <em>grafana</em> and an option <em>grafana</em>, but I do not know how I could fix it or test if this is the issue.</p>
<pre><code>grafana:
defaultDashboardsEnabled: false
</code></pre>
| Manuel | <p>I'm placing this answer for better visibility, as the community might be interested in other solutions.</p>
<ul>
<li>The first way would be <a href="https://github.com/prometheus-community/helm-charts/blob/f551c35d120cc74b774a46b8160904af7bc57135/charts/kube-prometheus-stack/values.yaml#L576" rel="nofollow noreferrer">setting</a> <code>grafana.enabled:</code> to false in <code>values.yaml</code>.</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>## Using default values from https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
grafana:
enabled: true
</code></pre>
<p>With this your chart will not install <code>grafana</code>.</p>
<ul>
<li>Another way would be to <code>helm pull</code> the chart to your local directory and then just delete the <code>templates/grafana</code> directory before installing from the local copy with <code>helm install <name> ./prometheus-stack</code> - see the sketch after this list.</li>
</ul>
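<p>A sketch of that second approach; the chart reference comes from the question, while the exact directory layout inside the chart is an assumption, so check it after untarring:</p>
<pre class="lang-sh prettyprint-override"><code>helm pull prometheus-community/kube-prometheus-stack --untar
# drop the grafana dashboard templates before installing from the local copy
rm -r kube-prometheus-stack/templates/grafana
helm install my-release ./kube-prometheus-stack
</code></pre>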
| acid_fuji |
<p>I have loaded Grafana as an add-on from the Istio docs, and I put it behind a sub domain of the main site.</p>
<p>But I need to build a custom load balancer for it, so that the sub domain can point to that.</p>
<p>This is what i have:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: grafana-ingressgateway
namespace: istio-system
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
name: http2
- port: 443
name: https
selector:
app.kubernetes.io/name: grafana-lb
app.kubernetes.io/instance: grafana-lb
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: ingress-grafana-gateway-configuration
namespace: istio-system
spec:
selector:
istio: grafana-ingressgateway
servers:
- port:
number: 80
name: grafana-http-ui
protocol: HTTP
hosts:
- "grafana.xxxx.com"
tls:
httpsRedirect: false
- port:
number: 443
name: grafana-https-ui
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: xxxx-cert
hosts:
- "grafana.xxxx.com"
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
name: grafana-virtual-service
namespace: istio-system
spec:
hosts:
- "grafana.xxxx.com"
gateways:
- ingress-grafana-gateway-configuration
http:
- match:
- uri:
prefix: /
route:
- destination:
host: grafana.istio-system.svc.cluster.local
</code></pre>
<p>But it's not loading. I have already updated the 'grafana' sub domain to point to the new load balancer. The cert is a wildcard Let's Encrypt certificate that is in the <code>istio-system</code> namespace.</p>
<p>Is this because I added it to the same namespace as the default load balancer? I have not seen anything that says you can't run more than one LB in one namespace.</p>
<p>Thanks,</p>
| C0ol_Cod3r | <p>From what I see it's not working because creation of a service is not enough to create a custom load balancer in Istio.</p>
<hr />
<p>If you want to create a custom gateway, then please refer to the following <a href="https://stackoverflow.com/questions/51835752/how-to-create-custom-istio-ingress-gateway-controller/51840872#51840872">answer</a>. You need to create it with either <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a> or <a href="https://istio.io/latest/docs/setup/install/operator/" rel="nofollow noreferrer">Istio Operator</a>.</p>
<p>Then you can use the selector to instruct your gateway to use the new custom ingress gateway, instead of the default one, which selector is <code>istio: ingressgateway</code>.</p>
<hr />
<p>As for your gateway configuration, if you want to use the following selector</p>
<pre><code>spec:
selector:
istio: grafana-ingressgateway
</code></pre>
<p>Then you should create this label on your custom grafana ingress gateway.</p>
<pre><code>gateways:
enabled: true
custom-grafana-ingressgateway:
namespace: default
enabled: true
labels:
istio: grafana-ingressgateway
</code></pre>
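<p>The same idea expressed with the Istio Operator would look roughly like this; the gateway name is an assumption:</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: grafana-ingressgateway
      enabled: true
      label:
        istio: grafana-ingressgateway   # matched by your Gateway's selector
</code></pre>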
| Jakub |
<p>I'm encountering the following error message when I'm trying to deploy to an EKS cluster, even though I've already added the CodeBuild IAM role to <code>aws-auth.yaml</code> like this:</p>
<pre><code>- rolearn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/codebuild-eks
username: codebuild-eks
groups:
- system:masters
</code></pre>
<blockquote>
<p>error: unable to recognize "deployment.yml": Get
http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080:
connect: connection refused</p>
</blockquote>
<p>And here is my codebuild command:</p>
<pre><code>version: 0.2
phases:
install:
commands:
- curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.10/2019-06-21/bin/linux/amd64/kubectl
- chmod +x ./kubectl
- mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
- kubectl version --short --client
- curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/linux/amd64/aws-iam-authenticator
- chmod +x ./aws-iam-authenticator
- mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH
- aws-iam-authenticator help
pre_build:
commands:
- echo Entered the pre_build phase...
- echo Logging in to Amazon EKS...
- mkdir -p ~/.kube
- aws s3 cp s3://ppshein-eks/config ~/.kube/config
- export KUBECONFIG=$KUBECONFIG:~/.kube/config
- aws eks --region $AWS_DEFAULT_REGION update-kubeconfig --name $AWS_CLUSTER_NAME
build:
commands:
- echo Entered the build phase...
- echo Change directory to secondary source
- cd $CODEBUILD_SRC_DIR
- echo List directory
- ls -la
- kubectl get pods --kubeconfig ~/.kube/config
- kubectl apply -f deployment.yml
</code></pre>
<p>Problem is when CodeBuild run this <code>kubectl apply -f deployment.yml</code> statement, I've got error message but above one <code>kubectl get pods --kubeconfig ~/.kube/config</code> is working fine.</p>
<p>Please let me know which area I've missed to add or configure. Thanks.</p>
| PPShein | <p>This error indicates that kubectl was unable to reach the Kubernetes API server and fell back to the default endpoint at 127.0.0.1:8080, i.e. localhost. Since you have configured kubeconfig with the <code>update-kubeconfig</code> command, it seems multiple configs are being merged <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#merging-kubeconfig-files" rel="nofollow noreferrer">1</a> due to this command:</p>
<pre><code>- export KUBECONFIG=$KUBECONFIG:~/.kube/config
</code></pre>
<p>To see the resultant config that kubectl sees, run this command before the failing command:</p>
<pre><code>- kubectl config view # Add this
- kubectl apply -f deployment.yml
</code></pre>
<p>To fix, I recommend to change as follows in pre_build phase:</p>
<pre><code>- export KUBECONFIG=~/.kube/config
</code></pre>
<p>Or, use '--context' flag with kubectl to select the correct context.</p>
<pre><code>- export KUBECONFIG=file1:file2
- kubectl get pods --context=cluster-1
- kubectl get pods --context=cluster-2
</code></pre>
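<p>To see which context names are available in the merged config (so you know what to pass to <code>--context</code>), use:</p>
<pre><code>kubectl config get-contexts
kubectl config current-context
</code></pre>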
| shariqmaws |
<p>Setup on machine:</p>
<ul>
<li>Ubuntu 20.04</li>
<li>Kubernetes cluster started with kubeadm and flannel network plugin</li>
</ul>
<p>On my working machine I installed Jenkins on the cluster and want to configure the network to be able to access Jenkins on port 8081. By default it's only reachable via the forwarded NodePort (30667 in my case). Is that possible on Ubuntu?</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/jenkins-5b6cb84957-n497l 1/1 Running 4 93m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins LoadBalancer 10.96.81.85 <pending> 8081:30667/TCP 93m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 94m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/jenkins 1/1 1 1 93m
NAME DESIRED CURRENT READY AGE
replicaset.apps/jenkins-5b6cb84957 1 1 1 93m
NAME COMPLETIONS DURATION AGE
job.batch/pv-recycler-generator 1/1 5s 42s
</code></pre>
<p>Tried also with calico network plugin - same result</p>
<p>But before, I worked with Docker Desktop on Mac and Windows where it was possible out of the box.</p>
| OLA | <p>A Service of type <code>LoadBalancer</code> works best when you run it in the cloud, because cloud provisioners automate the process of creating the external load balancer for you and configuring it to work with Kubernetes. When working on-prem this whole process has to be done by you: either do it yourself or use 3rd-party tools, e.g. <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">Metallb</a>.</p>
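<p>As an aside, if you go the MetalLB route, a minimal layer2 configuration sketch looks roughly like this; the address pool is an assumption and must be taken from your own network:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumption: a free range in your LAN
</code></pre>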
<p>Notice the pending field in the <code>External-ip</code> column. The easiest way would be to set it manually to the IP address of your node.</p>
<p>First you have to get your node IP:</p>
<pre class="lang-sh prettyprint-override"><code>➜ ~ k get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
instance-1 Ready master 221d v1.17.0 👉 10.128.15.230 <none> Ubuntu 16.04.6 LTS 4.15.0-1090-gcp docker://18.6.1
</code></pre>
<p>Then add this IP address in the <code>externalIPs</code> field in the service spec:</p>
<pre class="lang-yaml prettyprint-override"><code>➜ ~ cat loadbalancer.yml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
externalIPs:
- 10.128.15.230 👈
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
</code></pre>
<p>Once you do that, you will be able to see that the external IP is no longer pending:</p>
<pre class="lang-sh prettyprint-override"><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service LoadBalancer 10.101.150.192 👉 10.128.15.230 80:11438/TCP 5m1s
</code></pre>
<p>Then you just have to use that external IP with the <code>nodePort</code> that <code>kube-proxy</code> opened on that node:</p>
<pre class="lang-sh prettyprint-override"><code>➜ ~ curl 10.128.15.230:11438
{
"path": "/",
"headers": {
"host": "10.128.15.230:11438",
"user-agent": "curl/7.47.0",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "10.128.15.230",
"ip": "::ffff:10.128.15.230",
"ips": [],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "label-demo"
},
"connection": {}
</code></pre>
| acid_fuji |
<p>I would like to automate my Kubernetes configuration, like Ansible for VMs. </p>
<p>A sample scenario: </p>
<ol>
<li>Create namespace harbor </li>
<li>Deploy Harbor via Helm <a href="https://github.com/goharbor/harbor-helm" rel="nofollow noreferrer">https://github.com/goharbor/harbor-helm</a></li>
</ol>
<p>What kind of tool is suitable for such a scenario? Can I also use Terraform?</p>
| softshipper | <p>I totally agree with <em>@P Ekambaram</em> that <strong>kubernetes automation</strong> can be successfully done with <strong>ansible</strong>, but at the same time I totally disagree with him when it comes to the way it should be done. Such <em>"declarative"</em> code isn't actually declarative at all; it turns into a mere wrapper around a set of imperative commands. Apart from that, such a playbook isn't <a href="https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html" rel="nofollow noreferrer">idempotent</a> (recall that <em>an operation is idempotent if the result of performing it once is exactly the same as the result of performing it repeatedly without any intervening actions</em>). This approach runs contrary to one of the key concepts present in <strong>ansible</strong>.</p>
<p>The <code>shell</code> or <code>command</code> modules should be used in <strong>ansible</strong> only as a last resort, when there is no other way of performing the required task, e.g. when a dedicated module doesn't exist or lacks some important functionality.</p>
<p>When it comes to <strong>automating kubernetes with ansible</strong>, such dedicated modules already exist.</p>
<p>Take a quick look at <a href="https://docs.ansible.com/ansible/latest/modules/k8s_module.html#k8s-manage-kubernetes-k8s-objects" rel="nofollow noreferrer">k8s - ansible module for managing kubernetes objects</a> and you'll see that the desired results can be achieved in much more elegant and clearer way:</p>
<p><strong>namespace creation:</strong></p>
<pre><code>- name: Create a k8s namespace
k8s:
name: testing
api_version: v1
kind: Namespace
state: present
</code></pre>
<p><strong>creating service based on yaml definition file</strong></p>
<pre><code>- name: Create a Service object by reading the definition from a file
k8s:
state: present
src: /testing/service.yml
</code></pre>
<p>You also mentioned you're using <strong>helm</strong> for managing your <strong>kubernetes</strong> applications. Take a look at the <a href="https://docs.ansible.com/ansible/latest/modules/helm_module.html" rel="nofollow noreferrer">helm module documentation</a>. Any examples? Here you are:</p>
<pre><code>- name: Install helm chart from a git repo
helm:
host: localhost
chart:
source:
type: git
location: https://github.com/user/helm-chart.git
state: present
name: my-example
namespace: default
</code></pre>
<h3>update:</h3>
<p>To be able to run most <strong>Ansible</strong> modules on a certain host, you need to have two things installed on it:</p>
<ul>
<li>python 2.7</li>
<li>ssh server</li>
</ul>
<p>While this is true for most modules, some of them have additional requirements that must be met before the module can be run on such a host. </p>
<p>When it comes to <code>k8s</code> module, as we can read in the its <a href="https://docs.ansible.com/ansible/latest/modules/k8s_module.html#requirements" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>The below requirements are needed on the host that executes this
module.</p>
<pre><code>python >= 2.7
openshift >= 0.6
PyYAML >= 3.11
</code></pre>
</blockquote>
<p>It isn't true that it requires <strong>openshift</strong> (a different kubernetes distribution). It wouldn't make much sense to install <strong>openshift</strong> just to manage the workload of our <strong>kubernetes cluster</strong>. So let's be precise to avoid spreading any potentially misleading information: it requires the <a href="https://github.com/openshift/openshift-restclient-python" rel="nofollow noreferrer"><strong>OpenShift Python client library</strong></a>, and it requires it to be installed on the host on which the module is run (i.e. the host from which we normally run our <code>kubectl</code> commands).</p>
<blockquote>
<p>The OpenShift Python client wraps the K8s Python client, providing
full access to all of the APIS and models available on both platforms.
For API version details and additional information visit
<a href="https://github.com/openshift/openshift-restclient-python" rel="nofollow noreferrer">https://github.com/openshift/openshift-restclient-python</a></p>
</blockquote>
<p><strong>Important!:</strong> you don't have to install any additional modules on your kubernetes nodes. You need them only on the machine from which you manage your <strong>kubernetes workload</strong>.</p>
<p>If we have our <strong>Ansible</strong> installed on the same host on which we have configured our <code>kubectl</code> tool and we want to run our playbooks on it directly without a need of using <strong>ssh</strong> or configuring <strong>ansible inventory</strong>, we simply need to refer to it in our <strong>ansible playbook</strong> as:</p>
<pre><code>hosts: localhost
connection: local
</code></pre>
<p>The whole playbook for creating new k8s namespace may look like this:</p>
<pre><code>---
- hosts: localhost
connection: local
tasks:
- name: Create a k8s namespace
k8s:
name: testing
api_version: v1
kind: Namespace
state: present
</code></pre>
<p>And we can run it by simply executing:</p>
<pre><code>ansible-playbook playbook.yaml
</code></pre>
<p>Before we can do that, however, we need to make sure all required dependencies are installed, which in fact aren't many: just <strong>python 2.7</strong> and the <strong>two libraries mentioned above</strong> (in the required versions).</p>
<p>Once you have <strong>Python</strong> installed on your host, the easiest way to install the rest of the required dependencies is by running:</p>
<pre><code>pip install openshift
</code></pre>
<p>and</p>
<pre><code>pip install PyYAML
</code></pre>
<p>You may encounter the following error:</p>
<pre><code>"msg": "Failed to import the required Python library (openshift) on ***** Python /usr/bin/python. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"
</code></pre>
<p>If so, running:</p>
<pre><code>pip install --upgrade requests
</code></pre>
<p>should fix it (at least it fixed it in my case) and after that our simple <strong>ansible playbook</strong> which creates new <code>namespace</code> in our <strong>kubernetes cluster</strong> should run smoothly. :)</p>
| mario |
<p>I am reading the doc of <a href="https://istio.io/latest/blog/2019/data-plane-setup/" rel="nofollow noreferrer">istio</a>. It says:</p>
<blockquote>
<p>istio-init This init container is used to setup the iptables rules so that inbound/outbound traffic will go through the sidecar proxy.</p>
</blockquote>
<p>In my understanding, the initContainer and the application container are separate except that they share the same network namespace. So why do the iptables rules set up in the initContainer still persist in the application container?</p>
| Nick Allen | <p>As I mentioned in the comments, Iptables rules are briefly described <a href="https://github.com/istio/istio/wiki/Understanding-IPTables-snapshot" rel="nofollow noreferrer">here</a>.</p>
<p>There is also a Iptables Schematic:</p>
<p><a href="https://i.stack.imgur.com/bvnmk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bvnmk.png" alt="enter image description here" /></a></p>
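<p>The rules persist because iptables state belongs to the pod's shared network namespace (kernel state used by all containers in the pod), not to the init container itself. A hedged way to verify this from a node running the Docker runtime (the container ID and sudo access are assumptions):</p>
<pre class="lang-sh prettyprint-override"><code># find the PID of any container in the pod (e.g. the istio-proxy sidecar)
PID=$(docker inspect --format '{{ .State.Pid }}' &lt;istio-proxy-container-id&gt;)
# enter that container's network namespace and list the NAT rules istio-init installed
sudo nsenter -t "$PID" -n iptables -t nat -S
</code></pre>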
| Jakub |
<p>I have enabled the VPA on cluster as read only mode and tried to collect the VPA recommendation data. But I could not find a good documentation or any API details specific to the Vertical Pod Autoscaling. I have found it for the Horizontal Pod Autoscaler but not for the VPA.</p>
| y2501 | <p>As I mentioned in comments</p>
<p>From what I see in the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">python client library</a>, there is neither documentation nor an API for the Vertical Pod Autoscaler at this moment.</p>
<p>There is only documentation and an API for the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a>, which may be found <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/AutoscalingV1Api.md" rel="nofollow noreferrer">here</a>.</p>
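<p>Since VPA objects are CustomResources, one possible workaround (assuming the VPA CRDs are installed under the <code>autoscaling.k8s.io</code> group and your object is named <code>my-vpa</code>, which is just an example name) is to read the recommendations through the generic API instead of a dedicated client:</p>
<pre><code># read the recommendation section of a single VPA object
kubectl get vpa my-vpa -n default -o jsonpath='{.status.recommendation.containerRecommendations}'

# or list all VPA objects through the raw API path
kubectl get --raw /apis/autoscaling.k8s.io/v1/namespaces/default/verticalpodautoscalers
</code></pre>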
| Jakub |
<p>I am learning how to use an ingress to expose my application on Google Kubernetes Engine. I followed several tutorials and had a rough setup of what is needed. However, I have no clue why are my service is marked as unhealthy despite them being accessible from the NodePort service I defined directly. </p>
<p>Here is my deployment file: (I removed some data but the most of it remains the same)</p>
<pre><code>--
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "deployment-1"
namespace: "default"
spec:
containers:
- name: myContainer
image: "myImage/"
readinessProbe:
httpGet:
path: /app1
port: 8080
initialDelaySeconds: 70
livenessProbe:
httpGet:
path: /app1
port: 8080
initialDelaySeconds: 70
ports:
- containerPort: 8080
protocol: TCP
volumeMounts:
- mountPath: opt/folder/libs/jdbc/
name: lib
volumes:
- name: lib
persistentVolumeClaim:
claimName: lib
---
</code></pre>
<p>From what I read, I need a readinessProbe and a livenessProbe in order for GKE to run a health check on a path I defined, and by using my own path /app1 here (which returns a 200 OK), the generated health check should pass. I set an initial delay of 70s as buffer time for the Tomcat server running in the image to start up.</p>
<p>Next I created a NodePort service as the backend for the Ingress.
I tested it by connecting to the node's public IP and the NodePort of this service, and it works successfully. </p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- name: my-port
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: deployment-1
type: NodePort
</code></pre>
<p>and then the Ingress manifest file:</p>
<p>Here I have reserved a static IP address with the name "gke-my-static-ip" as well as created a managedCertificate "gke-my-certificate" with a domain name "mydomain.web.com". This has also been configured on the DNS records to point it to that reserved static IP. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: gke-my-ingress-1
annotations:
kubernetes.io/ingress.global-static-ip-name: gke-my-static-ip
networking.gke.io/managed-certificates: gke-my-certificate
spec:
rules:
- host: mydomain.web.com
http:
paths:
- backend:
serviceName: my-service
servicePort: my-port
</code></pre>
<p>The ingress creates 2 backends by default, one on the /healthz path and one with my custom defined path /app1. The healthz path returns 200 OK, but my custom defined path is failing to connect. I have checked on the firewall rules and have allowed ports tcp30000-32767. </p>
<p>Checking on stackdriver, the health check tries to access my LoadBalancer's IP with the /app1 path but it seems to always return a 502 error. </p>
<p>Am I missing any steps in my setup?</p>
<p>Attached ingress,endpoints:</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/gke-my-ingress-1 mydomain.web.com <IP_ADDRESS> 80 3d15h
NAME ENDPOINTS AGE
endpoints/kubernetes <IP_ADDRESS>443 9d
endpoints/presales-service <IP_ADDRESS>:8080 4d16h
</code></pre>
<p>kubectl get ingress:</p>
<pre><code>Name: gke-my-ingress-1
Namespace: default
Address: <IP_ADDRESS>
Default backend: default-http-backend:80 (<IP_ADDRESS>)
Rules:
Host Path Backends
---- ---- --------
mydomain.web.com
/ my-service:my-port (<IP_ADDRESS>:8080)
Annotations:
ingress.kubernetes.io/target-proxy: k8s-tp-default-gke-my-ingress-1--d8d0fcf4484c1dfd
ingress.kubernetes.io/url-map: k8s-um-default-gke-my-ingress-1--d8d0fcf4484c1dfd
kubernetes.io/ingress.global-static-ip-name: gke-my-static-ip
networking.gke.io/managed-certificates: gke-my-certificate
ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-e7dd5612-e6b4-42ca-91c9-7d9a86abcfb2
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-gke-my-ingress-1--d8d0fcf4484c1dfd
ingress.kubernetes.io/ssl-cert: mcrt-e7dd5612-e6b4-42ca-91c9-7d9a86abcfb2
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.global-static-ip-name":"gke-my-static-ip","networking.gke.io/managed-certificates":"gke-my-certificate"},"name":"gke-my-ingress-1","namespace":"default"},"spec":{"rules":[{"host":"mydomain.web.com","http":{"paths":[{"backend":{"serviceName":"my-service","servicePort":"my-port"},"path":"/"}]}}]}}
ingress.kubernetes.io/backends: {"k8s-be-30242--d8d0fcf4484c1dfd":"HEALTHY","k8s-be-30310--d8d0fcf4484c1dfd":"UNHEALTHY"}
ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-gke-my-ingress-1--d8d0fcf4484c1dfd
ingress.kubernetes.io/https-target-proxy: k8s-tps-default-gke-my-ingress-1--d8d0fcf4484c1dfd
</code></pre>
| user10518 | <p>By tinkering with the readiness and liveness probes, adding a successThreshold and failureThreshold, I managed to get my ingress working. It might be because my application needs a little more buffer time to start. </p>
<pre><code> readinessProbe:
httpGet:
path: /app1/
port: 8080
periodSeconds: 5
timeoutSeconds: 60
successThreshold: 1
failureThreshold: 3
initialDelaySeconds: 70
livenessProbe:
httpGet:
path: /app1/
port: 8080
periodSeconds: 5
timeoutSeconds: 60
successThreshold: 1
failureThreshold: 3
initialDelaySeconds: 70
</code></pre>
| user10518 |
<p>I'm new to Kubernetes. I'm trying to learn it using minikube and I'm facing a problem with accessing apps from outside the cluster. I created a deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 8080
</code></pre>
<p>To access it I need to expose it declaratively or imperatively. The imperative way works:</p>
<pre><code>kubectl expose deployment nginx-deployment --port 80 --type NodePort
</code></pre>
<p>When I create a service declaratively I always end up with a <strong>connection refused error</strong>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
type : NodePort
ports:
- port : 8080
nodePort : 30018
protocol : TCP
selector:
app: nginx
</code></pre>
<p>curl -k <a href="http://NodeIP:NodePort" rel="nofollow noreferrer">http://NodeIP:NodePort</a> returns :</p>
<blockquote>
<p>curl: (7) Failed to connect to Node IP port NodePORT: Connection
refused</p>
</blockquote>
| Elias | <p>As @Ansil suggested, your <strong>nginx</strong> should be configured to listen on port <code>8080</code> if you want to refer to this port in your <code>Service</code> definition. By default it listens on port <code>80</code>. </p>
<p>You cannot make it listen on different port like <code>8080</code> simply by specifying different <code>containerPort</code> in your <code>Deployment</code> definition as in your example:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 8080
</code></pre>
<p>You can easily verify it on your own by attaching to such <code>Pod</code>:</p>
<pre><code>kubectl exec -ti <nginx-pod-name> -- /bin/bash
</code></pre>
<p>Once you're there, run:</p>
<pre><code>ss -ntlp
</code></pre>
<p>And you should see on which port your <strong>nginx</strong> actually listens on.</p>
<p>Additionally you may:</p>
<pre><code>cat /etc/nginx/conf.d/default.conf
</code></pre>
<p>It will also tell you on which port your <strong>nginx</strong> is configured to listen. That's all. It's really simple. You changed <code>containerPort</code> to <code>8080</code>, but inside your container nothing actually listens on that port.</p>
<p>You can still expose it as a <code>Service</code> (no matter declaratively or imperatively) but it won't change anything as eventually it points to the wrong port on your container, on which nothing listens and you'll see message similar to this one:</p>
<pre><code>curl: (7) Failed to connect to 10.1.2.3 port 30080: Connection refused
</code></pre>
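<p>If you really want the container to serve on <code>8080</code>, one possible approach (a sketch with assumed names, not the only way) is to override the default server block with a <code>ConfigMap</code> and mount it over <code>/etc/nginx/conf.d/default.conf</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
      listen 8080;
      location / {
        root  /usr/share/nginx/html;
        index index.html;
      }
    }
---
# relevant parts of the Deployment's pod template
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d/default.conf
      subPath: default.conf
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-conf
</code></pre>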
| mario |
<p><strong>Question</strong></p>
<p>I wanted to inquire about the lifecycle of an object of type Endpoints and would like to know whether it is normal that the Endpoints object disappears automatically after scaling the Pod down to 0 instances.</p>
<p><strong>Structure of the scenario</strong></p>
<p>Kubernetes cluster v1.19 [1 master + 3 worker nodes]</p>
<p>Glusterfs endpoint (bound to the namespace) [includes the configuration IP addresses of the glusterfs devices]</p>
<p>Service [normal service for the pod and storage]</p>
<p>Deployment [includes relevant deployment informations e.g. env variables]</p>
<p><strong>Structure of the interconnection pipeline</strong></p>
<p>Endpoint -> Service -> Deployment</p>
<p><strong>Endpoints yaml</strong></p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
name: gluster-test
namespace: "test"
subsets:
- addresses:
- ip: "ip 1"
ports:
- port: 1
protocol: TCP
- addresses:
- ip: "ip 2"
ports:
- port: 1
protocol: TCP
- addresses:
- ip: "ip 3"
ports:
- port: 1
protocol: TCP
- addresses:
- ip: "ip 4"
ports:
- port: 1
protocol: TCP
</code></pre>
<p><strong>Glusterfs service yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: "gluster-test-sv"
namespace: "test"
spec:
ports:
- port: 1
</code></pre>
<p><strong>Persistence volume yaml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: "gluster-test2-pv"
namespace: test
spec:
capacity:
storage: "5Gi"
accessModes:
- ReadWriteMany
glusterfs:
endpoints: gluster-test
path: "/test2"
readOnly: false
persistentVolumeReclaimPolicy: Retain
</code></pre>
<p><strong>Persistence volume claim</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "gluster-test2-claim"
namespace: test
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: "5Gi"
</code></pre>
<p><strong>Deployment yaml</strong></p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: "test-de"
labels:
app.kubernetes.io/name: "test"
namespace: kubernetes-dashboard
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: "test"
template:
metadata:
labels:
app.kubernetes.io/name: "test"
spec:
containers:
- name: "test"
image: "test:latest"
ports:
- name: http
containerPort: XXXX
protocol: TCP
volumeMounts:
- mountPath: /XXX
name: storage
readOnly: false
imagePullSecrets:
- name: "test"
volumes:
- name: storage
persistentVolumeClaim:
claimName: "gluster-test-claim"
securityContext:
fsGroup: XXX
</code></pre>
| ZPascal | <p>An <code>Endpoints</code> object is a representation of a REST API endpoint that is populated on the K8s API server. In the K8s world it is the list of the addresses (IP and port) of the backends that implement a Service.</p>
<p>They are automatically created when a Service is created and are populated with the pods matching the Service's selector. It is of course possible to create a Service without a selector; in this case you would have to create the Endpoints manually:</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
name: glusterfs-cluster <--remember to match this with service name
subsets:
- addresses:
- ip: 10.10.10.10
ports:
- port:
- addresses:
- ip: 12.12.12.12
ports:
- port:
</code></pre>
<p><strong>Note: In your example the Endpoints name does not match the Service name.</strong></p>
<p>What exactly happens behind the scenes is that kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoints objects. Then for each Service it installs iptables rules which capture traffic to the Service's <code>clusterIP</code> and <code>port</code> and redirect that traffic to one of the Service's backend sets. For each Endpoints object, it installs iptables rules which select a backend Pod.</p>
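<p>A quick way to confirm that a Service is actually backed by your manually created Endpoints (object names taken from the manifests above) is to compare both objects; <code>describe</code> will show <code>Endpoints: &lt;none&gt;</code> if the names don't line up:</p>
<pre><code>kubectl -n test get endpoints gluster-test -o yaml
kubectl -n test describe service gluster-test-sv
</code></pre>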
| acid_fuji |
<p>I am running several Java applications with the Docker image <a href="https://hub.docker.com/r/jboss/wildfly" rel="nofollow noreferrer">jboss/wildfly:20.0.1.Final</a> on Kubernetes 1.19.3. The Wildfly server runs on OpenJDK 11, so the JVM supports container memory limits (cgroups).</p>
<p>If I set a memory limit this limit is totally ignored by the container when running in Kubernetes. But it is respected on the same machine when I run it in plain Docker:</p>
<p><strong>1. Run Wildfly in Docker with a memory limit of 300M:</strong></p>
<pre><code>$ docker run -it --rm --name java-wildfly-test -p 8080:8080 -e JAVA_OPTS='-XX:MaxRAMPercentage=75.0' -m=300M jboss/wildfly:20.0.1.Final
</code></pre>
<p>verify Memory usage:</p>
<pre><code>$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
515e549bc01f java-wildfly-test 0.14% 219MiB / 300MiB 73.00% 906B / 0B 0B / 0B 43
</code></pre>
<p>As expected the container will NOT exceed the memory limit of 300M.</p>
<p><strong>2. Run Wildfly in Kubernetes with a memory limit of 300M:</strong></p>
<p>Now I start the same container within kubernetes.</p>
<pre><code>$ kubectl run java-wildfly-test --image=jboss/wildfly:20.0.1.Final --limits='memory=300M' --env="JAVA_OPTS='-XX:MaxRAMPercentage=75.0'"
</code></pre>
<p>verify memory usage:</p>
<pre><code>$ kubectl top pod java-wildfly-test
NAME CPU(cores) MEMORY(bytes)
java-wildfly-test 1089m 441Mi
</code></pre>
<p>The memory limit of 300M is totally ignored and exceeded immediately.</p>
<p>Why does this happen? Both tests can be performed on the same machine.</p>
<p><strong>Answer</strong></p>
<p>The reason for the high values was incorrect metric data received from the <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">kube-prometheus</a> project. After uninstalling kube-prometheus and installing the <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">metrics-server</a> instead, all data was displayed correctly using kubectl top. It now shows the same values as docker stats. I do not know why kube-prometheus computed wrong data; in fact it was reporting double values for all memory data.</p>
| Ralph | <p>I'm placing this answer as community wiki since it might be helpful for the community. <code>kubectl top</code> was displaying incorrect data. The OP solved the problem by uninstalling the kube-prometheus stack and installing the metrics-server.</p>
<blockquote>
<p>The reason for the high values was incorrect metric data received from the
kube-prometheus project. After uninstalling kube-prometheus and installing the
metrics-server instead, all data was displayed correctly using kubectl top. It
now shows the same values as docker stats. I do not know why kube-prometheus
computed wrong data; in fact it was reporting double values for all memory data.</p>
</blockquote>
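<p>For reference, a rough sketch of the swap described above; the metrics-server manifest URL is the upstream release, while the helm release name and namespace are assumptions that depend on how kube-prometheus was installed:</p>
<pre><code># remove the kube-prometheus stack (only if it was installed via helm under this name)
helm uninstall kube-prometheus -n monitoring

# install metrics-server so that `kubectl top` has a metrics source again
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl top pod java-wildfly-test
</code></pre>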
| acid_fuji |
<p>Quick question: how can I debug ingress and Nginx to find out where exactly the HTTP->HTTPS redirection happens?</p>
<p>More details:</p>
<p><strong>What we have</strong>: a WAR file + Tomcat, built with Docker and run on Kubernetes in AWS.</p>
<p><strong>What we need</strong>: the application should be accessible with HTTP and with HTTPS. HTTP <em><strong>should not</strong></em> redirect to HTTPS.</p>
<p><strong>Problem</strong>: HTTP always redirects to HTTPS.</p>
<p>What we try: we have Ingress</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ${some name
namespace: ${some namespace}
labels:
app: ${some app}
env: ${some env}
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false" #we added this for turn off https redirection
nginx.ingress.kubernetes.io/force-ssl-redirect: "false" #we added this for turn off https redirection
nginx.ingress.kubernetes.io/affinity: "cookie" # We use it for sticky sessions
nginx.ingress.kubernetes.io/affinity-mode: "persistent"
nginx.ingress.kubernetes.io/session-cookie-name: "some cookie name"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/whitelist-source-range: ${whitelist of ip adresses}
spec:
tls:
- hosts:
- ${some host}
- ${some another host}
secretName: my-ingress-ssl
rules:
- host: ${some host}
http:
paths:
- path: /
backend:
serviceName: ${some another service name}
servicePort: 8080
- host: ${some another host}
http:
paths:
- path: /
backend:
serviceName: ${some another service name}
servicePort: 8080
</code></pre>
<p>And configmap</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
labels:
app: ${some app}
env: ${some env}
namespace: ${some namespace}
name: nginx-config
data:
hsts: "false" #we added this for turn off https redirection
hsts-max-age: "0" #we added this for turn off https redirection
ssl-redirect: "false" #we added this for turn off https redirection
hsts-include-subdomains: "false" #we added this for turn off https redirection
</code></pre>
<p>In Tomcat server.xml we have:</p>
<pre><code><Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
</code></pre>
<p>...</p>
<pre><code><!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
</code></pre>
<p>...</p>
<p>and this connector we commented (it shouldn't work now):</p>
<pre><code><!--
<Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
maxThreads="150" SSLEnabled="true" >
<UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
<SSLHostConfig>
<Certificate certificateKeyFile="conf/key.pem"
certificateFile="conf/cert.pem"
certificateChainFile="conf/chain.pem"
type="RSA" />
</SSLHostConfig>
</Connector>
-->
</code></pre>
<p>I tried all possible variants of the ingress annotations, but without success.</p>
<p>What I want to know: how can I <em><strong>debug</strong></em> ingress with Nginx to know where exactly HTTP->HTTPS redirection happens?</p>
<p><strong>UPD</strong></p>
<p>As it turned out, it was not the Nginx controller that was installed on the server, but Traefik. Due to security restrictions, I can't see the pod with the controller. So no Nginx annotations worked.
Nevertheless, the answers below will help people with a similar problem.</p>
| Optio | <p>In the setup you describe there are likely to be at least three processes involved in handling incoming HTTP requests:</p>
<p>internet -> AWS ELB -> Nginx Ingress Controller -> Tomcat</p>
<p>Any of these processes may issue a 301 redirect to instruct HTTP traffic to retry with HTTPS. A redirect in the Tomcat process could be specified either by the Tomcat configuration or by the app that Tomcat hosts.</p>
<p>I would attempt to isolate the process that is performing the redirect by experimenting to bypass the higher processes:</p>
<ol>
<li>Bypass the load balancer. SSH onto one of the K8s nodes and curl the ingress service's NodePort in the URL (check the Internal Endpoints on the nginx-controller service description for the right port). If you get a redirect, you know the problem is not the load balancer.</li>
<li>Bypass the ingress controller by running 'docker exec' into one of the app containers, then curl localhost:8080. If you get a redirect, you know the problem is not the ingress controller, and must be in the Tomcat config or the application. A sketch of both checks follows this list.</li>
</ol>
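<p>A hedged sketch of both checks (the namespace, host header, pod names and IPs are placeholders you need to adapt):</p>
<pre><code># 1. bypass the ELB: from a K8s node, hit the ingress controller's NodePort directly
kubectl -n ingress-nginx get svc                 # note the NodePort mapped to port 80
curl -v -H "Host: myapp.example.com" http://&lt;node-private-ip&gt;:&lt;http-nodeport&gt;/

# 2. bypass the ingress controller: curl Tomcat from inside an app container
kubectl exec -it &lt;tomcat-pod-name&gt; -- curl -v http://localhost:8080/
</code></pre>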
<p>If none of this is possible due to SSH restrictions, curl availability, etc., a different approach would be to turn on request logging for each of the processes involved, though this is not appropriate for a production workload as it can lead to confidential information being written to inappropriate contexts, such as a web-tier log file or CloudWatch logs.</p>
| Richard Woods |
<h2><strong>Edited with new data after complete Kubernetes wipe-out.</strong></h2>
<p>Lately I am trying to do a test deployment of a Blazor Server app on a locally hosted Kubernetes instance running on Docker Desktop.</p>
<p>I managed to correctly spin up the app in a container; migrations were applied, and the logs tell me that the app is running and waiting.</p>
<p><strong>Steps taken after resetting Kubernetes using <code>Reset Kubernetes Kluster</code> in Docker Desktop:</strong></p>
<ul>
<li><p>Modified <code>hosts</code> file to include <code>127.0.0.1 scp.com</code></p>
</li>
<li><p>Added secret containing key to mssql</p>
</li>
<li><p>Installed Ngnix controller using <code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml</code></p>
</li>
<li><p>Applied local volume claim - <code>local-pvc.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mssql-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 250Mi
</code></pre>
</li>
<li><p>Applied mssql instance and cluster ip - <code>mssql-scanapp-depl.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mssql-depl
spec:
replicas: 1
selector:
matchLabels:
app: mssql
template:
metadata:
labels:
app: mssql
spec:
containers:
- name: mssql
image: mcr.microsoft.com/mssql/server:2019-latest
ports:
- containerPort: 1433
env:
- name: MSSQL_PID
value: "Express"
- name: ACCEPT_EULA
value: "Y"
- name: SA_PASSWORD
valueFrom:
secretKeyRef:
name: mssql
key: SA_PASSWORD
volumeMounts:
- mountPath: /var/opt/mssql/data
name: mssqldb
volumes:
- name: mssqldb
persistentVolumeClaim:
claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
name: mssql-clusterip-srv
spec:
type: ClusterIP
selector:
app: mssql
ports:
- name: mssql
protocol: TCP
port: 1433
targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
name: mssql-loadbalancer
spec:
type: LoadBalancer
selector:
app: mssql
ports:
- protocol: TCP
port: 1433
targetPort: 1433
</code></pre>
</li>
<li><p>Applied Blazor application and cluster ip - <code>scanapp-depl.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: scanapp-depl
spec:
replicas: 1
selector:
matchLabels:
app: scanapp
template:
metadata:
labels:
app: scanapp
spec:
containers:
- name: scanapp
image: scanapp:1.0
---
apiVersion: v1
kind: Service
metadata:
name: scanapp-clusterip-srv
spec:
type: ClusterIP
selector:
app: scanapp
ports:
- name: ui
protocol: TCP
port: 8080
targetPort: 80
- name: ui2
protocol: TCP
port: 8081
targetPort: 443
- name: scanapp0
protocol: TCP
port: 5000
targetPort: 5000
- name: scanapp1
protocol: TCP
port: 5001
targetPort: 5001
- name: scanapp5
protocol: TCP
port: 5005
targetPort: 5005
</code></pre>
</li>
<li><p>Applied Ingress - <code>ingress-srv.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "affinity"
nginx.ingress.kubernetes.io/session-cookie-expires: "14400"
nginx.ingress.kubernetes.io/session-cookie-max-age: "14400"
spec:
ingressClassName: nginx
rules:
- host: scp.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: scanapp-clusterip-srv
port:
number: 8080
</code></pre>
</li>
</ul>
<p>After all of this, the Blazor app starts fine, connects to the mssql instance, seeds the database and awaits clients. Logs are as follows:</p>
<blockquote>
<p>[15:18:53 INF] Starting up...<br />
[15:18:53 WRN] Storing keys in a directory '/root/.aspnet
/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.<br />
[15:18:55 INF] AuthorizationPolicy Configuration started ...<br />
[15:18:55 INF] Policy 'LocationMustBeSady' was configured successfully.
[15:18:55 INF] AuthorizationPolicy Configuration completed.
[15:18:55 INF] Now listening on: http://[::]:80
[15:18:55 INF] Application started. Press Ctrl+C to shut down.
[15:18:55 INF] Hosting environment: docker
[15:18:55 INF] Content root path: /app</p>
</blockquote>
<h1><strong>As stated in the beginning - I cannot, for the love of all, get into my blazor app from browser - I tried:</strong></h1>
<ul>
<li>scp.com</li>
<li>scp.com:8080</li>
<li>scp.com:5000</li>
<li>scp.com:5001</li>
<li>scp.com:5005</li>
</ul>
<p><strong>Also, <code>kubectl get ingress</code> now does not display ADDRESS value like before and <code>kubectl get services</code> now says <em>pending</em> for <code>mssql-loadbalancer</code> and <code>ingress-nginx-controller</code> EXTERNAL-IP - detailed logs at the end of this post</strong></p>
<p>Nothing seems to work, so there must be something wrong with my config files and I have no idea what it could be.
Also, note that there is no <code>NodePort</code> configured this time.</p>
<p>In addition, Dockerfile for Blazor app:</p>
<pre class="lang-yaml prettyprint-override"><code> # https://hub.docker.com/_/microsoft-dotnet
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /source
EXPOSE 5000
EXPOSE 5001
EXPOSE 5005
EXPOSE 80
EXPOSE 443
LABEL name="ScanApp"
# copy csproj and restore as distinct layers
COPY ScanApp/*.csproj ScanApp/
COPY ScanApp.Application/*.csproj ScanApp.Application/
COPY ScanApp.Common/*.csproj ScanApp.Common/
COPY ScanApp.Domain/*.csproj ScanApp.Domain/
COPY ScanApp.Infrastructure/*.csproj ScanApp.Infrastructure/
COPY ScanApp.Tests/*.csproj ScanApp.Tests/
Run ln -sf /usr/share/zoneinfo/posix/Europe/Warsaw /etc/localtime
RUN dotnet restore ScanApp/ScanApp.csproj
# copy and build app and libraries
COPY ScanApp/ ScanApp/
COPY ScanApp.Application/ ScanApp.Application/
COPY ScanApp.Common/ ScanApp.Common/
COPY ScanApp.Domain/ ScanApp.Domain/
COPY ScanApp.Infrastructure/ ScanApp.Infrastructure/
COPY ScanApp.Tests/ ScanApp.Tests/
WORKDIR /source/ScanApp
RUN dotnet build -c release --no-restore
# test stage -- exposes optional entrypoint
# target entrypoint with: docker build --target test
FROM build AS test
WORKDIR /source/ScanApp.Tests
COPY tests/ .
ENTRYPOINT ["dotnet", "test", "--logger:trx"]
FROM build AS publish
RUN dotnet publish -c release --no-build -o /app
# final stage/image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=publish /app .
ENV ASPNETCORE_ENVIRONMENT="docker"
ENTRYPOINT ["dotnet", "ScanApp.dll"]
</code></pre>
<h2><strong>kubectl outputs</strong></h2>
<p><code>kubectl get ingress</code> output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>NAME</th>
<th>CLASS</th>
<th>HOSTS</th>
<th>ADDRESS</th>
<th>PORTS</th>
<th>AGE</th>
</tr>
</thead>
<tbody>
<tr>
<td>ingress-srv</td>
<td>nginx</td>
<td>scp.com</td>
<td></td>
<td>80</td>
<td>35m</td>
</tr>
</tbody>
</table>
</div>
<p><br />
<br />
<code>kubectl get pods --all-namespaces</code> output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;"><strong>NAME</strong></th>
<th style="text-align: center;"><strong>READY</strong></th>
<th style="text-align: center;"><strong>STATUS</strong></th>
<th style="text-align: center;"><strong>RESTARTS</strong></th>
<th style="text-align: center;"><strong>AGE</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">default</td>
<td style="text-align: center;">mssql-depl-7f46b5c696-7hhbr</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">default</td>
<td style="text-align: center;">scanapp-depl-76f56bc6df-4jcq4</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">ingress-nginx</td>
<td style="text-align: center;">ingress-nginx-admission-create-qdnck</td>
<td style="text-align: center;">0/1</td>
<td style="text-align: center;">Completed</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">ingress-nginx</td>
<td style="text-align: center;">ingress-nginx-admission-patch-chxqn</td>
<td style="text-align: center;">0/1</td>
<td style="text-align: center;">Completed</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">ingress-nginx</td>
<td style="text-align: center;">ingress-nginx-controller-54bfb9bb-f6gsf</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">coredns-558bd4d5db-mr8p7</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">coredns-558bd4d5db-rdw2d</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">etcd-docker-desktop</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">kube-apiserver-docker-desktop</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">kube-controller-manager-docker-desktop</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">kube-proxy-pws8f</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">kube-scheduler-docker-desktop</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">storage-provisioner</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">vpnkit-controller</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">Running</td>
<td style="text-align: center;">6</td>
</tr>
</tbody>
</table>
</div>
<p><br />
<br />
<code>kubectl get deployments --all-namespaces</code> output</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;"><strong>NAME</strong></th>
<th style="text-align: center;"><strong>READY</strong></th>
<th style="text-align: center;"><strong>UP-TO-DATE</strong></th>
<th style="text-align: center;"><strong>AVAILABLE</strong></th>
<th style="text-align: center;"><strong>AGE</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">default</td>
<td style="text-align: center;">mssql-depl</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">default</td>
<td style="text-align: center;">scanapp-depl</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">ingress-nginx</td>
<td style="text-align: center;">ingress-nginx-controller</td>
<td style="text-align: center;">1/1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">coredns</td>
<td style="text-align: center;">2/2</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">2</td>
</tr>
</tbody>
</table>
</div>
<p><br />
<br />
<code>kubectl get services --all-namespaces</code> output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;"><strong>NAME</strong></th>
<th style="text-align: center;"><strong>TYPE</strong></th>
<th style="text-align: center;"><strong>CLUSTER-IP</strong></th>
<th style="text-align: center;"><strong>EXTERNAL-IP</strong></th>
<th style="text-align: center;"><strong>PORT(S)</strong></th>
<th style="text-align: center;"><strong>AGE</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">default</td>
<td style="text-align: center;">kubernetes</td>
<td style="text-align: center;">ClusterIP</td>
<td style="text-align: center;">10.96.0.1</td>
<td style="text-align: center;">none</td>
<td style="text-align: center;">443/TCP</td>
</tr>
<tr>
<td style="text-align: center;">default</td>
<td style="text-align: center;">mssql-clusterip-srv</td>
<td style="text-align: center;">ClusterIP</td>
<td style="text-align: center;">10.97.96.94</td>
<td style="text-align: center;">none</td>
<td style="text-align: center;">1433/TCP</td>
</tr>
<tr>
<td style="text-align: center;">default</td>
<td style="text-align: center;">mssql-loadbalancer</td>
<td style="text-align: center;">LoadBalancer</td>
<td style="text-align: center;">10.107.235.49</td>
<td style="text-align: center;">pending</td>
<td style="text-align: center;">1433:30149/TCP</td>
</tr>
<tr>
<td style="text-align: center;">default</td>
<td style="text-align: center;">scanapp-clusterip-srv</td>
<td style="text-align: center;">ClusterIP</td>
<td style="text-align: center;">10.109.116.183</td>
<td style="text-align: center;">none</td>
<td style="text-align: center;">8080/TCP,8081/TCP,5000/TCP,5001/TCP,5005/TCP</td>
</tr>
<tr>
<td style="text-align: center;">ingress-nginx</td>
<td style="text-align: center;">ingress-nginx-controller</td>
<td style="text-align: center;">LoadBalancer</td>
<td style="text-align: center;">10.103.89.226</td>
<td style="text-align: center;">pending</td>
<td style="text-align: center;">80:30562/TCP,443:31733/TCP</td>
</tr>
<tr>
<td style="text-align: center;">ingress-nginx</td>
<td style="text-align: center;">ingress-nginx-controller-admission</td>
<td style="text-align: center;">ClusterIP</td>
<td style="text-align: center;">10.111.235.243</td>
<td style="text-align: center;">none</td>
<td style="text-align: center;">443/TCP</td>
</tr>
<tr>
<td style="text-align: center;">kube-system</td>
<td style="text-align: center;">kube-dns</td>
<td style="text-align: center;">ClusterIP</td>
<td style="text-align: center;">10.96.0.10</td>
<td style="text-align: center;">none</td>
<td style="text-align: center;">53/UDP,53/TCP,9153/TCP</td>
</tr>
</tbody>
</table>
</div><h2>Ingress logs:</h2>
<blockquote>
<hr />
<p>NGINX Ingress controller</p>
<p>Release: v1.1.0</p>
<p>Build: cacbee86b6ccc45bde8ffc184521bed3022e7dee</p>
<p>Repository: <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p>
<p>nginx version: nginx/1.19.9</p>
<hr />
<p>W1129 15:00:51.705331 8 client_config.go:615] Neither
--kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.</p>
<p>I1129 15:00:51.705452 8 main.go:223] "Creating API client"
host="https://10.96.0.1:443"</p>
<p>I1129 15:00:51.721575 8 main.go:267] "Running in Kubernetes
cluster" major="1" minor="21" git="v1.21.5" state="clean"
commit="aea7bbadd2fc0cd689de94a54e5b7b758869d691"
platform="linux/amd64"</p>
<p>I1129 15:00:51.872964 8 main.go:104] "SSL fake certificate
created"
file="/etc/ingress-controller/ssl/default-fake-certificate.pem"</p>
<p>I1129 15:00:51.890273 8 ssl.go:531] "loading tls certificate"
path="/usr/local/certificates/cert" key="/usr/local/certificates/key"</p>
<p>I1129 15:00:51.910104 8 nginx.go:255] "Starting NGINX Ingress
controller"</p>
<p>I1129 15:00:51.920821 8 event.go:282]
Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx",
Name:"ingress-nginx-controller",
UID:"51060a85-d3a0-40de-b549-cf59e8fa7b08", APIVersion:"v1",
ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'CREATE'
ConfigMap ingress-nginx/ingress-nginx-controller</p>
<p>I1129 15:00:53.112043 8 nginx.go:297] "Starting NGINX process"</p>
<p>I1129 15:00:53.112213 8 leaderelection.go:248] attempting to
acquire leader lease ingress-nginx/ingress-controller-leader...</p>
<p>I1129 15:00:53.112275 8 nginx.go:317] "Starting validation
webhook" address=":8443" certPath="/usr/local/certificates/cert"
keyPath="/usr/local/certificates/key"</p>
<p>I1129 15:00:53.112468 8 controller.go:155] "Configuration
changes detected, backend reload required"</p>
<p>I1129 15:00:53.118295 8 leaderelection.go:258] successfully
acquired lease ingress-nginx/ingress-controller-leader</p>
<p>I1129 15:00:53.119467 8 status.go:84] "New leader elected"
identity="ingress-nginx-controller-54bfb9bb-f6gsf"</p>
<p>I1129 15:00:53.141609 8 controller.go:172] "Backend successfully
reloaded"</p>
<p>I1129 15:00:53.141804 8 controller.go:183] "Initial sync,
sleeping for 1 second"</p>
<p>I1129 15:00:53.141908 8 event.go:282]
Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx",
Name:"ingress-nginx-controller-54bfb9bb-f6gsf",
UID:"54e0c0c6-40ea-439e-b1a2-7787f1b37e7a", APIVersion:"v1",
ResourceVersion:"766", FieldPath:""}): type: 'Normal' reason: 'RELOAD'
NGINX reload triggered due to a change in configuration</p>
<p>I1129 15:04:25.107359 8 admission.go:149] processed ingress via
admission controller {testedIngressLength:1 testedIngressTime:0.022s
renderingIngressLength:1 renderingIngressTime:0s admissionTime:17.9kBs
testedConfigurationSize:0.022}</p>
<p>I1129 15:04:25.107395 8 main.go:101] "successfully validated
configuration, accepting" ingress="ingress-srv/default"</p>
<p>I1129 15:04:25.110109 8 store.go:424] "Found valid IngressClass"
ingress="default/ingress-srv" ingressclass="nginx"</p>
<p>I1129 15:04:25.110698 8 controller.go:155] "Configuration
changes detected, backend reload required"</p>
<p>I1129 15:04:25.111057 8 event.go:282]
Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default",
Name:"ingress-srv", UID:"6c15d014-ac14-404e-8b5e-d8526736c52a",
APIVersion:"networking.k8s.io/v1", ResourceVersion:"1198",
FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync</p>
<p>I1129 15:04:25.143417 8 controller.go:172] "Backend successfully
reloaded"</p>
<p>I1129 15:04:25.143767 8 event.go:282]
Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx",
Name:"ingress-nginx-controller-54bfb9bb-f6gsf",
UID:"54e0c0c6-40ea-439e-b1a2-7787f1b37e7a", APIVersion:"v1",
ResourceVersion:"766", FieldPath:""}): type: 'Normal' reason: 'RELOAD'
NGINX reload triggered due to a change in configuration</p>
<p>I1129 15:06:11.447313 8 admission.go:149] processed ingress via
admission controller {testedIngressLength:1 testedIngressTime:0.02s
renderingIngressLength:1 renderingIngressTime:0s admissionTime:17.9kBs
testedConfigurationSize:0.02}</p>
<p>I1129 15:06:11.447349 8 main.go:101] "successfully validated
configuration, accepting" ingress="ingress-srv/default"</p>
<p>I1129 15:06:11.449266 8 event.go:282]
Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default",
Name:"ingress-srv", UID:"6c15d014-ac14-404e-8b5e-d8526736c52a",
APIVersion:"networking.k8s.io/v1", ResourceVersion:"1347",
FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync</p>
<p>I1129 15:06:11.449669 8 controller.go:155] "Configuration
changes detected, backend reload required"</p>
<p>I1129 15:06:11.499772 8 controller.go:172] "Backend successfully
reloaded"</p>
<p>I1129 15:06:11.500210 8 event.go:282]
Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx",
Name:"ingress-nginx-controller-54bfb9bb-f6gsf",
UID:"54e0c0c6-40ea-439e-b1a2-7787f1b37e7a", APIVersion:"v1",
ResourceVersion:"766", FieldPath:""}): type: 'Normal' reason: 'RELOAD'
NGINX reload triggered due to a change in configuration</p>
</blockquote>
| quain | <h2><strong>AFTER COMPLETE RESET OF KUBERNETES THIS SOLUTION DOES NOT WORK!</strong></h2>
<p><strong>Will re-edit main question<br />
Leaving post for future use</strong></p>
<p>I solved the problem, or at least I think so.<br />
In addition to <a href="https://stackoverflow.com/questions/70115934/problem-with-deploying-blazor-server-app-to-kubernetes?noredirect=1#comment123954982_70115934">@moonkotte</a>'s suggestion to add <code>ingressClassName: nginx</code> to <code>ingress-srv.yaml</code>, I also changed the ingress port configuration so that it now points to port <code>80</code>.</p>
<p>Thanks to those changes, opening <code>scp.com</code> now correctly shows my app.
Also, using NodePort access I can visit my app at <code>localhost:30080</code>, where port 30080 was assigned automatically (I removed the <code>nodePort</code> configuration line from <code>scanapp-np-srv.yaml</code>).</p>
<p>Why the port in <code>ingress-srv.yaml</code> has to be set to 80 when the <code>ClusterIP</code> configuration maps port <code>8080</code> to target port <code>80</code>, I don't know; I do not fully understand the inner workings of the Kubernetes configuration. <strong>All explanations are more than welcome.</strong></p>
<p>Current state of main configuration files:</p>
<p><code>ingress-srv.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "affinity"
nginx.ingress.kubernetes.io/session-cookie-expires: "14400"
nginx.ingress.kubernetes.io/session-cookie-max-age: "14400"
spec:
ingressClassName: nginx
rules:
- host: scp.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: scanapp-clusterip-srv
port:
number: 80
</code></pre>
<p><br />
<br />
<code>scanapp-np-srv.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: scanappnpservice-srv
spec:
type: NodePort
selector:
app: scanapp
ports:
- name: ui
port: 8080
targetPort: 80
- name: scanapp0
protocol: TCP
port: 5000
targetPort: 5000
- name: scanapp1
protocol: TCP
port: 5001
targetPort: 5001
- name: scanapp5
protocol: TCP
port: 5005
targetPort: 5005
</code></pre>
<p><br />
<br />
<code>scanapp-depl.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: scanapp-depl
spec:
replicas: 1
selector:
matchLabels:
app: scanapp
template:
metadata:
labels:
app: scanapp
spec:
containers:
- name: scanapp
image: scanapp:1.0
---
apiVersion: v1
kind: Service
metadata:
name: scanapp-clusterip-srv
spec:
type: ClusterIP
selector:
app: scanapp
ports:
- name: ui
protocol: TCP
port: 8080
targetPort: 80
- name: ui2
protocol: TCP
port: 8081
targetPort: 443
- name: scanapp0
protocol: TCP
port: 5000
targetPort: 5000
- name: scanapp1
protocol: TCP
port: 5001
targetPort: 5001
- name: scanapp5
protocol: TCP
port: 5005
targetPort: 5005
</code></pre>
<p>The rest of the files remained untouched.</p>
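<p>A quick, hedged way to check which backend the Ingress actually resolves to and which ports the Service really exposes (object names as defined above) is:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl describe ingress ingress-srv          # shows the backend service:port and resolved endpoints
kubectl get svc scanapp-clusterip-srv -o wide # shows the port -> targetPort mapping
kubectl get endpoints scanapp-clusterip-srv   # shows the pod IP:port pairs traffic is sent to
</code></pre>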
| quain |
<p>For instance, I have a bare-metal cluster with 3 nodes, each with some instance exposing port 105. In order to expose it on an external IP address I can define a service of type NodePort with "externalIPs" and it seems to work well. The documentation says to use a load balancer, but I didn't quite get why I have to use it and I'm worried about making a mistake.</p>
| dred dred | <blockquote>
<p>Can somebody explain whay I have to use external (MetalLB, HAProxy etc) Load Balancer with Bare-metal kubernetes cluster?</p>
</blockquote>
<p>You don't have to use it, it's up to you to choose if you would like to use NodePort or LoadBalancer.</p>
<hr />
<p>Let's start with the difference between NodePort and LoadBalancer.</p>
<p><strong>NodePort</strong> is the most primitive way to get external traffic directly to your service. As the name implies, it opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.</p>
<p><strong>LoadBalancer</strong> service is the standard way to expose a service to the internet. It gives you a single IP address that will forward all traffic to your service.</p>
<p>You can find more about that in kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">documentation</a>.</p>
<hr />
<p>As for the question you've asked in the comment, <code>But NodePort with "externalIPs" option is doing exactly the same. I see only one tiny difference is that the IP should be owned by one of the cluster machin. So where is the profit of using a loadBalancer?</code> let me answer that more precisely.</p>
<p>There are the <a href="https://medium.com/avmconsulting-blog/external-ip-service-type-for-kubernetes-ec2073ef5442" rel="nofollow noreferrer">advantages & disadvantages</a> of ExternalIP:</p>
<blockquote>
<p>The <strong>advantages</strong> of using <strong>ExternalIP</strong> is:</p>
<p>You have full control of the IP that you use. You can use IP that belongs to your ASN >instead of a cloud provider’s ASN.</p>
<p>The <strong>disadvantages</strong> of using <strong>ExternalIP</strong> is:</p>
<p>The simple setup that we will go thru right now is NOT highly available. That means if the node dies, the service is no longer reachable and you’ll need to manually remediate the issue.</p>
<p>There is some manual work that needs to be done to manage the IPs. The IPs are not dynamically provisioned for you thus it requires manual human intervention</p>
</blockquote>
<hr />
<p>Summarizing the pros and cons of both, we can conclude that ExternalIP is not made for a production environment, it's not highly available, if node dies the service will be no longer reachable and you will have to manually fix that.</p>
<p>With a LoadBalancer if node dies the service will be recreated automatically on another node. So it's dynamically provisioned and there is no need to configure it manually like with the ExternalIP.</p>
| Jakub |
<p>Is there a way to silence warning messages from kubectl, such as what is shown below for deprecation notices?</p>
<pre><code>Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
Warning: admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
</code></pre>
<p>It appears these warnings are occurring with Kubernetes 1.19.</p>
| dhelfand | <p>To add on top of the previous answer, you may also want to redirect the stderr output to the <code>null device</code>. It's not ideal though, since it will discard all of stderr, not only the warnings.</p>
<pre><code>kubectl get pod 2> /dev/null
</code></pre>
<p>The <code>null device</code> is a device file that discards all data written to it. It is typically used for disposing of unwanted output streams of a process, or as a convenient empty file for input streams.</p>
<p>The best option here is to redirect <code>stderr</code> into <code>stdout</code> and then filter it with grep.</p>
<pre><code>kubectl get pod 2>&1 | grep -i -v "Warn" | grep -i -v "Deprecat"
</code></pre>
| acid_fuji |
<p>I deployed a ray cluster on Kubernetes using kuberay and I want to monitor the cluster using prometheus metrics. After reading ray document, I know that there is service discovery file is generated on the head node <code>/tmp/ray/prom_metrics_service_discovery.json</code>. Using the below Prometheus config, Prometheus will automatically update the addresses that it scrapes based on the contents of Ray’s service discovery file.</p>
<pre class="lang-yaml prettyprint-override"><code># Prometheus config file
# my global config
global:
scrape_interval: 2s
evaluation_interval: 2s
# Scrape from Ray.
scrape_configs:
- job_name: 'ray'
file_sd_configs:
- files:
- '/tmp/ray/prom_metrics_service_discovery.json'
</code></pre>
<p>But since I am using Kubernetes, based on humble experience, I think the most convenient way to configure Prometheus to scape the ray metrics should be exposing metrics configuration on service annotations like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: xxx
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "8081"
</code></pre>
<p>Is there any way to achieve this?</p>
<hr />
<p>Updated at 2022-11-10
I manually add an Kubernetes service to solve this problem.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
# labels:
# app.kubernetes.io/name: kuberay-metrics
# ray.io/cluster: {{ include "ray-cluster.fullname" . }}
name: {{ include "ray-cluster.fullname" . }}-metrics-svc
annotations:
{{- if .Values.prometheus.enable }}
prometheus.io/scrape: "{{.Values.prometheus.enable }}"
prometheus.io/path: /metrics
prometheus.io/port: "8080"
{{- end }}
spec:
ports:
- name: metrics
port: 8080
protocol: TCP
targetPort: 8080
selector:
app.kubernetes.io/name: kuberay
ray.io/cluster: {{ include "ray-cluster.fullname" . }}
type: ClusterIP
</code></pre>
| kk17 | <p>Re: Is there any way to achieve this?</p>
<p>-- I don't think so, not yet!
I think this would be great to discuss in a <a href="https://github.com/ray-project/kuberay/issues/new?assignees=&labels=enhancement&template=feature-request.yml&title=%5BFeature%5D+" rel="nofollow noreferrer">KubeRay feature request GitHub issue</a>. Please feel free to post there!</p>
| Dmitri Gekhtman |
<p>I'm trying to use CRC for testing Openshift 4 on my laptop (Ubuntu 20). CRC version 1.17 doesn't support Virtualbox virtualizazion so following the setup instructions</p>
<p><a href="https://access.redhat.com/documentation/en-us/red_hat_codeready_containers/1.17/html/getting_started_guide/installation_gsg" rel="nofollow noreferrer">https://access.redhat.com/documentation/en-us/red_hat_codeready_containers/1.17/html/getting_started_guide/installation_gsg</a></p>
<p>i'm using libvirt, but when i start the cluster with <code>crc start</code> it launch following error</p>
<pre><code>INFO Checking if oc binary is cached
INFO Checking if podman remote binary is cached
INFO Checking if goodhosts binary is cached
INFO Checking minimum RAM requirements
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
INFO Starting CodeReady Containers VM for OpenShift 4.5.14...
ERRO Error starting stopped VM: virError(Code=55, Domain=18, Message='Requested operation is not valid: format of backing image '/home/claudiomerli/.crc/cache/crc_libvirt_4.5.14/crc.qcow2' of image '/home/claudiomerli/.crc/machines/crc/crc.qcow2' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)')
Error starting stopped VM: virError(Code=55, Domain=18, Message='Requested operation is not valid: format of backing image '/home/claudiomerli/.crc/cache/crc_libvirt_4.5.14/crc.qcow2' of image '/home/claudiomerli/.crc/machines/crc/crc.qcow2' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)')
</code></pre>
<p>I have not experiences with libvirt so i'm stuck on that and online i'm not finding anything...</p>
| Claudio Merli | <p>There is an issue with the crc_libvirt_4.5.14 image. The easiest way to fix it is to do a</p>
<pre><code>qemu-img rebase -f qcow2 -F qcow2 -b /home/${USER}/.crc/cache/crc_libvirt_4.5.14/crc.qcow2 /home/${USER}/.crc/machines/crc/crc.qcow2
</code></pre>
<p>Now, if you try to do a <code>crc start</code>, you are going to face a "Permission denied" error related to AppArmor, unless you have whitelisted your home directory. If you don't want to hack around with AppArmor settings, the <code>/var/lib/libvirt/images</code> directory is supposed to be whitelisted already. Move the image there:</p>
<pre><code>sudo mv /home/${USER}/.crc/machines/crc/crc.qcow2 /var/lib/libvirt/images
</code></pre>
<p>Then edit the virtual machine settings to point to the new image location with <code>virsh edit crc</code>, replacing <code><source file='/home/yourusername/.crc/machines/crc/crc.qcow2'/></code> with <code><source file='/var/lib/libvirt/images/crc.qcow2'/></code>.</p>
<p>Then do the <code>crc start</code> and... that's it.</p>
<p>The relevant Github issues to follow:</p>
<ul>
<li><a href="https://github.com/code-ready/crc/issues/1596" rel="nofollow noreferrer">https://github.com/code-ready/crc/issues/1596</a></li>
<li><a href="https://github.com/code-ready/crc/issues/1578" rel="nofollow noreferrer">https://github.com/code-ready/crc/issues/1578</a></li>
</ul>
| aries1980 |
<p>I need to clone my deployment including the volume. I couldn't find any resources on this question except about cloning volume using CSI driver as explained in this <a href="https://kubernetes-csi.github.io/docs/volume-cloning.html#overview" rel="nofollow noreferrer">https://kubernetes-csi.github.io/docs/volume-cloning.html#overview</a></p>
<p>Is there a way to clone a deployment other than renaming the deployment and deploying it again through the kubectl?</p>
| Dinuka Salwathura | <blockquote>
<p>Is there a way to clone a deployment other than renaming the
deployment and deploying it again through the kubectl?</p>
</blockquote>
<p>Unfortunately no. At the time of this writing there is no tool that will allow you to clone your deployment. So you will have to use the option already mentioned in the question, or deploy it using a Helm chart.</p>
<p>With Helm you can pass custom parameters such as <code>--generate-name</code>, or edit the <code>values.yaml</code> content, to quickly deploy a similar deployment.</p>
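<p>As a quick sketch (assuming your application is packaged as a local chart in <code>./my-chart</code>, which is just a placeholder), a second copy could be deployed under an auto-generated release name like this:</p>
<pre><code># deploy another instance of the same chart with a generated release name
helm install ./my-chart --generate-name -f values.yaml
</code></pre>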
<p>Also worth mentioning here is <a href="https://velero.io/" rel="nofollow noreferrer">Velero</a>, a tool for backing up Kubernetes resources and persistent volumes.</p>
| acid_fuji |
<p>I have couple of YAML files for mongo-express/mongodb as below:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mongodb-secret
type: Opaque
data:
mongo-root-username: dXNlcm5hbWU=
mongo-root-password: cGFzc3dvcmQ=
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
labels:
app: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
labels:
app: mongodb
spec:
ports:
- port: 27017
targetPort: 27017
protocol: TCP
selector:
app: mongodb
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-express
labels:
app: mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-express
template:
metadata:
labels:
app: mongo-express
spec:
containers:
- name: mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
- name: ME_CONFIG_MONGODB_SERVER
value: mongodb-service
---
apiVersion: v1
kind: Service
metadata:
name: mongo-express-service
labels:
app: mongo-express
spec:
ports:
- port: 8081
targetPort: 8081
nodePort: 30000
protocol: TCP
type: LoadBalancer
selector:
app: mongo-express
</code></pre>
<p>I can apply above YAML file on local <strong>minikube</strong> cluster when I execute minikube service mongo-express-service.
I can also apply it to my 3 nodes <strong>kubernetes</strong> cluster, but the mongo-express <strong>Pod</strong> seems not available to connect to mongodb-service.</p>
<p>This is my initial troubleshoot.</p>
<pre><code>$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-20 Ready master 35d v1.19.2 192.168.0.20 <none> Ubuntu 16.04.7 LTS 4.4.0-186-generic docker://19.3.13
node1 Ready <none> 35d v1.19.2 192.168.0.21 <none> Ubuntu 16.04.7 LTS 4.4.0-186-generic docker://19.3.13
node2 Ready <none> 35d v1.19.2 192.168.0.22 <none> Ubuntu 16.04.7 LTS 4.4.0-186-generic docker://19.3.13
$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongo-express-749445c6c9-wlnx8 1/1 Running 0 18s 10.244.2.23 node2 <none> <none>
pod/mongodb-deployment-8f6675bc5-w9wks 1/1 Running 0 22s 10.244.1.20 node1 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35d <none>
service/mongo-express-service LoadBalancer 10.108.20.77 <pending> 8081:30000/TCP 18s app=mongo-express
service/mongodb-service ClusterIP 10.98.48.206 <none> 27017/TCP 22s app=mongodb
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/mongo-express 1/1 1 1 18s mongo-express mongo-express app=mongo-express
deployment.apps/mongodb-deployment 1/1 1 1 22s mongodb mongo app=mongodb
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/mongo-express-749445c6c9 1 1 1 18s mongo-express mongo-express app=mongo-express,pod-template-hash=749445c6c9
replicaset.apps/mongodb-deployment-8f6675bc5 1 1 1 22s mongodb mongo app=mongodb,pod-template-hash=8f6675bc5
$ kubectl logs mongo-express-749445c6c9-wlnx8
Waiting for mongodb-service:27017...
/docker-entrypoint.sh: line 14: mongodb-service: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongodb-service/27017: Invalid argument
Sun Nov 8 05:29:40 UTC 2020 retrying to connect to mongodb-service:27017 (2/5)
$ kubectl logs mongodb-deployment-8f6675bc5-w9wks
about to fork child process, waiting until server is ready for connections.
forked process: 28
...
MongoDB init process complete; ready for start up.
{"t":{"$date":"2020-11-08T05:28:54.631+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2020-11-08T05:28:54.634+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2020-11-08T05:28:54.634+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2020-11-08T05:28:54.636+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"mongodb-deployment-8f6675bc5-w9wks"}}
{"t":{"$date":"2020-11-08T05:28:54.636+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.1","gitVersion":"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1","openSSLVersion":"OpenSSL 1.1.1 11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2020-11-08T05:28:54.636+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}}
{"t":{"$date":"2020-11-08T05:28:54.636+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"security":{"authorization":"enabled"}}}}
{"t":{"$date":"2020-11-08T05:28:54.638+00:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
{"t":{"$date":"2020-11-08T05:28:54.639+00:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
{"t":{"$date":"2020-11-08T05:28:54.639+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=479M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2020-11-08T05:28:56.498+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813336:498796][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
{"t":{"$date":"2020-11-08T05:28:56.889+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813336:889036][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
{"t":{"$date":"2020-11-08T05:28:57.525+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813337:525554][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 1/25728 to 2/256"}}
{"t":{"$date":"2020-11-08T05:28:57.682+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813337:682506][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
{"t":{"$date":"2020-11-08T05:28:57.791+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813337:791351][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
{"t":{"$date":"2020-11-08T05:28:57.880+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813337:880334][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}}
{"t":{"$date":"2020-11-08T05:28:57.880+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813337:880542][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}}
{"t":{"$date":"2020-11-08T05:28:57.892+00:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":3253}}
{"t":{"$date":"2020-11-08T05:28:57.893+00:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2020-11-08T05:28:57.913+00:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"}
{"t":{"$date":"2020-11-08T05:28:57.940+00:00"},"s":"I", "c":"STORAGE", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
{"t":{"$date":"2020-11-08T05:28:57.950+00:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
{"t":{"$date":"2020-11-08T05:28:57.958+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2020-11-08T05:28:57.958+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2020-11-08T05:28:57.958+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
</code></pre>
 | Steven Weng | <p>As suggested in the comments, those kinds of problems usually indicate issues with CoreDNS and DNS resolution. It is worth mentioning that the Kubernetes <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">documentation</a> walks through a couple of good DNS troubleshooting steps:</p>
<ul>
<li><p>Check the local DNS configuration first</p>
<pre><code> kubectl exec -ti dnsutils -- cat /etc/resolv.conf
</code></pre>
</li>
<li><p>Check if the DNS pod is running</p>
<pre><code> kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
</code></pre>
</li>
<li><p>Check for errors in the DNS pod</p>
<pre><code> kubectl logs --namespace=kube-system -l k8s-app=kube-dns
</code></pre>
</li>
<li><p>Are DNS endpoints exposed?</p>
<pre><code> kubectl get endpoints kube-dns --namespace=kube-system
</code></pre>
</li>
</ul>
<hr />
<p>To summarize, the OP confirmed that the issue was related to CoreDNS, and changing the nameserver in /etc/resolv.conf solved the issue.</p>
| acid_fuji |
<h2>Kafka brokers down with fully written storages</h2>
<p>I have tried produce as much messages as broker could handle. With fully written storages (8GB) brokers all stopped and they can't up again with this error</p>
<h3>logs of brokers trying to restart</h3>
<pre><code>[2020-04-28 04:34:05,774] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-28 04:34:05,774] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-28 04:34:05,842] INFO [KafkaServer id=1] shut down completed (kafka.server.KafkaServer)
[2020-04-28 04:34:05,844] INFO Shutting down SupportedServerStartable (io.confluent.support.metrics.SupportedServerStartable)
[2020-04-28 04:33:58,847] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-28 04:34:05,844] INFO Closing BaseMetricsReporter (io.confluent.support.metrics.BaseMetricsReporter)
[2020-04-28 04:34:05,844] INFO Waiting for metrics thread to exit (io.confluent.support.metrics.SupportedServerStartable)
[2020-04-28 04:34:05,844] INFO Shutting down KafkaServer (io.confluent.support.metrics.SupportedServerStartable)
[2020-04-28 04:34:05,845] INFO [KafkaServer id=1] shutting down (kafka.server.KafkaServer)
[2020-04-28 04:33:58,847] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-28 04:33:59,847] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-28 04:33:59,847] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-28 04:33:59,854] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2020-04-28 04:33:59,942] INFO Shutting down SupportedServerStartable (io.confluent.support.metrics.SupportedServerStartable)
[2020-04-28 04:33:59,942] INFO Closing BaseMetricsReporter (io.confluent.support.metrics.BaseMetricsReporter)
[2020-04-28 04:33:59,942] INFO Waiting for metrics thread to exit (io.confluent.support.metrics.SupportedServerStartable)
[2020-04-28 04:33:59,942] INFO Shutting down KafkaServer (io.confluent.support.metrics.SupportedServerStartable)
[2020-04-28 04:33:59,942] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2020-04-28 04:37:18,938] INFO [ReplicaAlterLogDirsManager on broker 2] Removed fetcher for partitions Set(__consumer_offsets-22, kafka-connect-offset-15, kafka-connect-offset-16, kafka-connect-offset-2, __consumer_offsets-30, kafka-connect-offset-7, kafka-connect-offset-13, __consumer_offsets-8, __consumer_offsets-21, kafka-connect-offset-8, kafka-connect-status-2, kafka-connect-offset-5, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, kafka-connect-offset-19, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, kafka-connect-offset-20, kafka-connect-offset-3, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, kafka-connect-config-0, kafka-connect-offset-9, kafka-connect-offset-17, __consumer_offsets-31, __consumer_offsets-36, kafka-connect-status-1, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, __consumer_offsets-15, __consumer_offsets-24, kafka-connect-offset-10, kafka-connect-offset-24, kafka-connect-status-4, __consumer_offsets-38, __consumer_offsets-17, __consumer_offsets-48, kafka-connect-offset-23, kafka-connect-offset-21, kafka-connect-offset-0, __consumer_offsets-19, __consumer_offsets-11, kafka-connect-status-0, __consumer_offsets-13, kafka-connect-offset-18, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, kafka-connect-offset-14, kafka-connect-offset-22, kafka-connect-offset-6, perf-test4-0, kafka-connect-status-3, kafka-connect-offset-11, kafka-connect-offset-12, __consumer_offsets-20, __consumer_offsets-0, kafka-connect-offset-4, __consumer_offsets-44, __consumer_offsets-39, kafka-connect-offset-1, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaAlterLogDirsManager)
[2020-04-28 04:37:18,990] INFO [ReplicaManager broker=2] Broker 2 stopped fetcher for partitions __consumer_offsets-22,kafka-connect-offset-15,kafka-connect-offset-16,kafka-connect-offset-2,__consumer_offsets-30,kafka-connect-offset-7,kafka-connect-offset-13,__consumer_offsets-8,__consumer_offsets-21,kafka-connect-offset-8,kafka-connect-status-2,kafka-connect-offset-5,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,kafka-connect-offset-19,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,kafka-connect-offset-20,kafka-connect-offset-3,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,kafka-connect-config-0,kafka-connect-offset-9,kafka-connect-offset-17,__consumer_offsets-31,__consumer_offsets-36,kafka-connect-status-1,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,__consumer_offsets-15,__consumer_offsets-24,kafka-connect-offset-10,kafka-connect-offset-24,kafka-connect-status-4,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,kafka-connect-offset-23,kafka-connect-offset-21,kafka-connect-offset-0,__consumer_offsets-19,__consumer_offsets-11,kafka-connect-status-0,__consumer_offsets-13,kafka-connect-offset-18,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,kafka-connect-offset-14,kafka-connect-offset-22,kafka-connect-offset-6,perf-test4-0,kafka-connect-status-3,kafka-connect-offset-11,kafka-connect-offset-12,__consumer_offsets-20,__consumer_offsets-0,kafka-connect-offset-4,__consumer_offsets-44,__consumer_offsets-39,kafka-connect-offset-1,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 and stopped moving logs for partitions because they are in the failed log directory /opt/kafka/data/logs. (kafka.server.ReplicaManager)
[2020-04-28 04:37:18,990] INFO Stopping serving logs in dir /opt/kafka/data/logs (kafka.log.LogManager)
[2020-04-28 04:37:18,992] WARN [Producer clientId=producer-1] 1 partitions have leader brokers without a matching listener, including [__confluent.support.metrics-0] (org.apache.kafka.clients.NetworkClient)
[2020-04-28 04:37:18,996] ERROR Shutdown broker because all log dirs in /opt/kafka/data/logs have failed (kafka.log.LogManager)
</code></pre>
<h3>Prometheus monitoring snapshot</h3>
<p><a href="https://i.stack.imgur.com/LOdYO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LOdYO.png" alt="enter image description here"></a></p>
<p><strong>I hope to prevent this situation before broker down with solutions like remove previous messages for enough space or something. is there any best practice recommended for this ?</strong></p>
 | Jaeho Lee | <p>You need to configure two parameters on your Kafka brokers: <code>log.retention.hours</code> and <code>log.retention.bytes</code>. The two parameters answer "how much data do I store before rotating, and for how long do I store the data before rotating?". Adjust those values if you want to store more data or reduce your retention time.</p>
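<p>For illustration, the settings could look roughly like this in the broker's <code>server.properties</code> (the numbers below are only example values, not recommendations for your workload):</p>
<pre><code># keep at most 7 days of data per topic partition
log.retention.hours=168
# and at most ~1 GiB per partition, whichever limit is hit first
log.retention.bytes=1073741824
# how often the broker checks whether log segments can be deleted
log.retention.check.interval.ms=300000
</code></pre>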
<p>Another thing to take a look at, if the problem is storage, is your logs. The generated logs can use a lot of space. At this point you need to put some log rotation in place. Take a look at the size of your <code>logs</code> folder.</p>
| William Prigol Lopes |
<p>I'm trying to copy the final WordPress files from the official WordPress Docker image, from an initContainer into an emptyDir volume. I expect the emptyDir volume to contain the entire WordPress installation so I can share this between containers. Instead I only get the wp-contents folder with no content. No other files or folders are copied over.</p>
<p>After reviewing the <a href="https://github.com/docker-library/wordpress/tree/b1127748deb2db34e9b1306489e24eb49720454f/php7.4/fpm-alpine" rel="nofollow noreferrer">WordPress Dockerfile and entrypoint</a>, I suspect the entrypoint is not getting executed because the Dockerfile has instructions to create the wp-content folder in /var/www/html, but it seems like nothing within the entrypoint is getting executed.</p>
<p>I have tried changing my <code>args:</code> to <code>args: ["sh", "-c", "sleep 60 && cp -r /var/www/html/. /shared-wordpress"]</code> in hopes that the container just needed some time to finish running through the entrypoint before I run the <code>cp</code> command, but that has no effect. If you inspect the <code>/shared-wordpress</code> mount in the running wordpress container, it only shows the wp-content folder.</p>
<p>Copying the WordPress files from <code>/usr/src/wordpress</code> works great with the exception that it does NOT include the generated wp-config.php file. I need all the WordPress files along with the generated wp-config.php file.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
containers:
- name: wordpress
image: wordpress:5.5.1-php7.4-fpm-alpine
volumeMounts:
- name: shared-wordpress
mountPath: /shared-wordpress
initContainers:
- name: init-wordpress
image: wordpress:5.5.1-php7.4-fpm-alpine
env:
- name: WORDPRESS_DB_HOST
value: test_host
- name: WORDPRESS_DB_USER
value: root
- name: WORDPRESS_DB_PASSWORD
value: hw8djdl21ds
- name: WORDPRESS_DB_NAME
value: local_wordpress
- name: WORDPRESS_DEBUG
value: "1"
- name: WORDPRESS_CONFIG_EXTRA
value: "define( 'FS_METHOD', 'direct' );"
args: ["/bin/sh", "-c", "cp -r /var/www/html/. /shared-wordpress"]
volumeMounts:
- name: shared-wordpress
mountPath: /shared-wordpress
volumes:
- name: shared-wordpress
emptyDir: {}
</code></pre>
 | Aaron | <p>Your data is not being copied because your Docker CMD is overwritten by the <code>args</code> YAML field.</p>
<p>The Docker entrypoint requires <code>php-fpm</code> to be passed as a further argument in order to trigger the code that generates the WordPress files.</p>
<p>After some testing I found a way to address your question.
It is more of a workaround than a solution, but it solves your problem.</p>
<p>So to trigger the code generation you have to set the first parameter to <code>php-fpm</code>. In the example below I'm using the <code>-v</code> parameter so the program exits immediately. If you don't pass any further arguments, it would start php-fpm and keep the init container from exiting.</p>
<pre><code>initContainers:
- args:
- php-fpm
- -v
</code></pre>
<p>Now the last part is to address the issue with copying the files. I have removed the <code>cp</code> args and mounted the <code>emptyDir</code> into <code>/var/www/html</code>. Now the files are generated directly into your desired volume, so there is no need to copy them afterwards.</p>
<pre><code>  initContainers:
  - name: init-wordpress
    volumeMounts:
    - name: shared-wordpress
      mountPath: /var/www/html
</code></pre>
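<p>Putting both parts together, a rough sketch of the whole init container (based on the image and volume names from your question; your <code>env</code> section stays as it is) could look like this:</p>
<pre><code>initContainers:
- name: init-wordpress
  image: wordpress:5.5.1-php7.4-fpm-alpine
  # "php-fpm" makes the entrypoint generate the WordPress files, "-v" makes it exit right away
  args: ["php-fpm", "-v"]
  volumeMounts:
  - name: shared-wordpress
    mountPath: /var/www/html   # files are generated straight into the shared volume
</code></pre>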
| acid_fuji |
<p>I have installed Postgresql and microk8s on Ubuntu 18.<br>
One of my microservice which is inside microk8s single node cluster needs to access postgresql installed on same VM.<br>
Some articles suggesting that I should create service.yml and endpoint.yml like this.</p>
<pre><code>apiVersion: v1
metadata:
name: postgresql
spec:
type: ClusterIP
ports:
- port: 5432
targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
name: postgresql
subsets:
- addresses:
- ip: ?????
ports:
- port: 5432
</code></pre>
<p>Now, I am not getting what should I put in <strong>subsets.addresses.ip</strong> field ?</p>
| Bhushan | <p>First you need to configure your <strong>Postgresql</strong> to listen not only on your vm's <code>localhost</code>. Let's assume you have a network interface with IP address <code>10.1.2.3</code>, which is configured on your node, on which <strong>Postgresql</strong> instance is installed.</p>
<p>Add the following entry in your <code>/etc/postgresql/10/main/postgresql.conf</code>:</p>
<pre><code>listen_addresses = 'localhost,10.1.2.3'
</code></pre>
<p>and restart your postgres service:</p>
<pre><code>sudo systemctl restart postgresql
</code></pre>
<p>You can check if it listens on the desired address by running:</p>
<pre><code>sudo ss -ntlp | grep postgres
</code></pre>
<p>From your <code>Pods</code> deployed within your <strong>Microk8s cluster</strong> you should be able to reach IP addresses of your node e.g. you should be able to ping mentioned <code>10.1.2.3</code> from your <code>Pods</code>.</p>
<p>As it doesn't require any loadbalancing you can reach to your <strong>Postgresql</strong> directly from your <code>Pods</code> without a need of configuring additional <code>Service</code>, that exposes it to your cluster.</p>
<p>If you don't want to refer to your <strong>Postgresql</strong> instance in your application using it's IP address, you can edit your <code>Deployment</code> (which manages the set of <code>Pods</code> that connect to your postgres db) to modify the default content of <code>/etc/hosts</code> file used by your <code>Pods</code>.</p>
<p>Edit your app Deployment by running:</p>
<pre><code>microk8s.kubectl edit deployment your-app
</code></pre>
<p>and add the following section under <code>Pod</code> template <code>spec</code>:</p>
<pre><code> hostAliases: # it should be on the same indentation level as "containers:"
- hostnames:
- postgres
- postgresql
ip: 10.1.2.3
</code></pre>
<p>After saving it, all your <code>Pods</code> managed by this <code>Deployment</code> will be recreated according to the new specification. When you exec into your <code>Pod</code> by running:</p>
<pre><code>microk8s.kubectl exec -ti pod-name -- /bin/bash
</code></pre>
<p>you should see additional section in your /etc/hosts file:</p>
<pre><code># Entries added by HostAliases.
10.1.2.3 postgres postgresql
</code></pre>
<p>Since now you can refer to your <strong>Postgres</strong> instance in your app by names <code>postgres:5432</code> or <code>postgresql:5432</code> and it will be resolved to your VM's IP address.</p>
<p>I hope it helps.</p>
<h1>UPDATE:</h1>
<p>I almost forgot that some time ago I've posted an answer on a very similar topic. You can find it <a href="https://stackoverflow.com/a/60096013/11714114">here</a>. It describes the usage of a <strong>Service without selector</strong>, which is basically what you mentioned in your question. And yes, it also can be used for configuring access to your <strong>Postgresql</strong> instance running on the same host. As this kind of <code>Service</code> doesn't have selectors by its definition, <strong>no endpoint</strong> is automatically created by <strong>kubernetes</strong> and you need to create one by yourself. Once you have the IP address of your <strong>Postgres</strong> instance (in our example it is <code>10.1.2.3</code>) you can use it in your <strong>endpoint</strong> definition.</p>
<p>Once you configure everything on the side of <strong>kubernetes</strong> you may still encounter an issue with <strong>Postgres</strong>. In your <code>Pod</code> that is trying to connect to the <strong>Postgres</strong> instance you may see the following error message:</p>
<pre><code>org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host 10.1.7.151
</code></pre>
<p>It basically means that your <a href="https://www.postgresql.org/docs/9.1/auth-pg-hba-conf.html" rel="nofollow noreferrer">pg_hba.conf</a> file lacks the required entry that would allow your <code>Pod</code> to access your <strong>Postgresql</strong> database. Authentication is host-based, so in other words only hosts with certain IPs, or with IPs within a certain IP range, are allowed to authenticate.</p>
<blockquote>
<p>Client authentication is controlled by a configuration file, which
traditionally is named pg_hba.conf and is stored in the database
cluster's data directory. (HBA stands for host-based authentication.)</p>
</blockquote>
<p>So now you probably wonder which network you should allow in your <code>pg_hba.conf</code>. To handle cluster networking <strong>Microk8s</strong> uses <a href="https://github.com/coreos/flannel#flannel" rel="nofollow noreferrer">flannel</a>. Take a look at the content of your <code>/var/snap/microk8s/common/run/flannel/subnet.env</code> file. Mine looks as follows:</p>
<pre><code>FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.53.1/24
FLANNEL_MTU=1410
FLANNEL_IPMASQ=false
</code></pre>
<p>Adding only the <strong>flannel subnet</strong> to your <code>pg_hba.conf</code> should be enough to ensure that all your <code>Pods</code> can connect to <strong>Postgresql</strong>.</p>
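<p>For example, assuming the flannel network <code>10.1.0.0/16</code> shown above, the entry could look like this:</p>
<pre><code># TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   10.1.0.0/16    md5
</code></pre>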
| mario |
<p>I am quite new to <code>Kubernetes</code> and I have been struggling to migrate my current <em>docker-compose</em> environment to Kubernetes...</p>
<p>I converted my <code>docker-compose.yml</code> to Kubernetes manifests using <strong>kompose</strong>.</p>
<p>So far, I can access each pod individually but it seems like I have some issues to get those pods to communicate each other.. My Nginx pod can not access my app pod</p>
<p>My <code>docker-compose.yml</code> is something like below</p>
<pre><code>version: '3.3'
services:
myapp:
image: my-app
build: ./docker/
restart: unless-stopped
container_name: app
stdin_open: true
volumes:
- mnt:/mnt
env_file:
- .env
mynginx:
image: nginx:latest
build: ./docker/
container_name: nginx
ports:
- 80:80
stdin_open: true
restart: unless-stopped
user: root
</code></pre>
<p>My <code>Nginx.conf</code> is something like below</p>
<pre><code>server{
listen 80;
index index.html index.htm;
root /mnt/volumes/statics/;
location /myapp {
proxy_pass http://myapp/index;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
</code></pre>
<p>I understand that <code>docker-compose</code> enables containers to communicate each other through service names (<strong>myapp</strong> and <strong>mynginx</strong> in this case). Could somebody tell me what I need to do to achieve the same thing in Kubernetes?</p>
| rks | <p>Kompose did create services for me. It turned out that what I missed was docker-compose.overwrite file (apparently kompose just ignores overwrite.yml).</p>
 | rks | <p>Kompose did create services for me. It turned out that what I missed was the docker-compose.override.yml file (apparently kompose just ignores override.yml).</p>
<p>We have components which use the Go library to write status to prometheus,
we are able to see the data in Prometheus UI,
we have components <strong>outside the K8S cluster</strong> which need to <em>pull the data</em> from
Prometheus , how can I expose this metrics? is there any components which I should use ?</p>
| NSS | <p>You may want to check the <a href="https://prometheus.io/docs/prometheus/latest/federation/#federation" rel="nofollow noreferrer">Federation</a> section of the Prometheus documents.</p>
<blockquote>
<p>Federation allows a Prometheus server to scrape selected time series
from another Prometheus server. Commonly, it is used to either achieve scalable Prometheus monitoring setups or to pull related metrics from one service's Prometheus into another.</p>
</blockquote>
<p>It requires exposing the Prometheus service outside of the cluster with an Ingress or NodePort and configuring the central Prometheus to scrape metrics from the exposed service endpoint. You will also have to set up some proper authentication. Here's an <a href="https://mattermost.com/blog/monitoring-a-multi-cluster-environment-using-prometheus-federation-and-grafana/" rel="nofollow noreferrer">example</a> of it.</p>
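<p>As an illustrative sketch, the central Prometheus could then scrape the <code>/federate</code> endpoint like this (the target address is a placeholder for however you expose the in-cluster Prometheus):</p>
<pre><code>scrape_configs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'                    # pull all jobs; narrow this selector in practice
    static_configs:
      - targets:
        - 'prometheus.example.com:30080'   # placeholder: Ingress/NodePort address
</code></pre>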
<p>Second way that comes to my mind is to use <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">Kube-state-metrics</a></p>
<blockquote>
<p>kube-state-metrics is a simple service that listens to the Kubernetes
API server and generates metrics about the state of the objects.</p>
</blockquote>
<p>Metrics are exported on an HTTP endpoint and designed to be consumed either by Prometheus itself or by a scraper that is compatible with Prometheus client endpoints. However, this differs from the Metrics Server and generates metrics about the state of Kubernetes objects: node status, node capacity, number of desired replicas, pod status, etc.</p>
| acid_fuji |
<p>We have our application build on kubernetes. We are have many multi-containers pods.
We are facing challenges when as our many containers depends on each other to run application.
We first required database container to come up and then application container to run.
Is there is any equivalent solution to resolve this dependency where our database container will be up first then our application container??</p>
| sayali_bhavsar | <p>There's no feature like that in Kubernetes because each application should be responsible for (re)connecting to their dependencies.</p>
<p>However, you can do a similar thing by using an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainer</a>, which prevents the other containers in the same Pod from starting until the initContainer exits with 0.</p>
<p>As <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use" rel="nofollow noreferrer">the example</a> shows, if you run a simple shell script on a busybox that waits until it can connect to your application's dependencies, your applications will start after their dependencies can be connected.</p>
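<p>For illustration, a minimal sketch of that pattern could look like the snippet below; the Service name <code>my-database</code> and port 5432 are just placeholders for your actual database service:</p>
<pre><code>spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.34
    # block until the database Service accepts TCP connections
    command: ['sh', '-c', 'until nc -z my-database 5432; do echo waiting for db; sleep 2; done']
  containers:
  - name: application
    image: my-app:latest      # placeholder image
</code></pre>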
| Daigo |
<p>For the life of Bryan, how do I do this?</p>
<p><code>Terraform</code> is used to create an SQL Server instance in GCP.
Root password and user passwords are randomly generated, then put into the Google Secret Manager.
The DB's IP is exposed via private DNS zone.</p>
<p>How can I now get the username and password to access the DB into my K8s cluster? Running a Spring Boot app here.</p>
<p>This was one option I thought of:</p>
<p>In my deployment I add an <code>initContainer</code>:</p>
<pre><code>- name: secrets
image: gcr.io/google.com/cloudsdktool/cloud-sdk
args:
- echo "DB_PASSWORD=$(gcloud secrets versions access latest --secret=\"$NAME_OF_SECRET\")" >> super_secret.env
</code></pre>
<p>Okay, what now? How do I get it into my application container from here?</p>
<p>There are also options like <code>bitnami/sealed-secrets</code>, which I don't like since the setup is using <code>Terraform</code> already and saving the secrets in GCP. When using <code>sealed-secrets</code> I could skip using the secrets manager. Same with <code>Vault</code> IMO.</p>
| Moritz Schmitz v. Hülst | <p>On top of the other answers and suggestion in the comments I would like to suggest two tools that you might find interesting.</p>
<p>First one is <a href="https://github.com/doitintl/secrets-init#secrets-init" rel="noreferrer">secret-init</a>:</p>
<blockquote>
<p><code>secrets-init</code> is a minimalistic init system designed to run as PID 1
inside container environments, and it's integrated with
multiple secrets manager services, e.g. <a href="https://cloud.google.com/secret-manager/docs/" rel="noreferrer">Google Secret Manager</a></p>
</blockquote>
<p>Second one is <a href="https://github.com/doitintl/kube-secrets-init#kube-secrets-init" rel="noreferrer">kube-secrets-init</a>:</p>
<blockquote>
<p>The <code>kube-secrets-init</code> is a Kubernetes mutating admission webhook,
that mutates any K8s Pod that is using specially prefixed environment
variables, directly or from Kubernetes as Secret or ConfigMap.</p>
</blockquote>
<p>It also supports integration with Google Secret Manager:</p>
<p>A user can put a Google secret name (prefixed with <code>gcp:secretmanager:</code>) as an environment variable value. The <code>secrets-init</code> will resolve any environment value that uses the specified name to the referenced secret value.</p>
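<p>For example, an environment variable in a Pod spec could then reference a secret roughly like this (the project and secret names are placeholders; please check the kube-secrets-init README for the exact reference format):</p>
<pre><code>env:
- name: DB_PASSWORD
  # placeholder project and secret name
  value: gcp:secretmanager:projects/my-project/secrets/db-password
</code></pre>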
<p>Here's a good <a href="https://blog.doit-intl.com/kubernetes-and-secrets-management-in-cloud-part-2-6c37c1238a87" rel="noreferrer">article</a> about how it works.</p>
| acid_fuji |
<p>I am writing a command line tool in Go which will perform an action based on the existence of a particular pod on a <code>k8s</code> cluster, in a specific namespace.</p>
<p>I could do via command line (shell) invocations within my <code>go</code> program something like </p>
<pre><code>kubectl get pods -n mynapespace l app=myapp
</code></pre>
<p>or in case I am not certain about the labels, something even less elegant as:</p>
<pre><code>kubectl get pods -n mynapespace | grep -i somepatternIamcertainabout
</code></pre>
<p>However, given that I am using the k8s native language (Go) I was wondering whether there might be a more Go native/specific way of making such an inquiry to the k8s api server, without resorting to shell invocations from within my cli tool.</p>
| pkaramol | <blockquote>
<p>However, given that I am using the k8s native language (Go) I was
wondering whether there might be a more Go native/specific way of
making such an inquiry to the k8s api server, without resorting to
shell invocations from within my cli tool.</p>
</blockquote>
<p>If you want to talk to the <strong>k8s cluster</strong> in your programs written in <strong>go</strong> without resorting to shell invocations, the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> library is the way to go. It contains everything you need to query your k8s api server in your go programs (see the short sketch after the list below).</p>
<blockquote>
<h3>What's included</h3>
<ul>
<li>The <code>kubernetes</code> package contains the clientset to
access Kubernetes API. </li>
<li>The <code>discovery</code> package is used to discover APIs
supported by a Kubernetes API server.</li>
<li>The <code>dynamic</code> package contains a
dynamic client that can perform generic operations on arbitrary
Kubernetes API objects.</li>
<li>The <code>plugin/pkg/client/auth</code> packages contain
optional authentication plugins for obtaining credentials from
external sources. </li>
<li>The <code>transport</code> package is used to set up auth and
start a connection. </li>
<li>The <code>tools/cache</code> package is useful for writing
controllers.</li>
</ul>
</blockquote>
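<p>For illustration, here is a rough, minimal sketch of how such a check could look with <code>client-go</code> (it assumes the default <code>~/.kube/config</code> and reuses the namespace and label selector from your <code>kubectl</code> example; error handling is kept to a bare minimum):</p>
<pre><code>package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the same way kubectl does (default: ~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Equivalent of: kubectl get pods -n mynamespace -l app=myapp
	pods, err := clientset.CoreV1().Pods("mynamespace").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "app=myapp",
	})
	if err != nil {
		panic(err)
	}

	if len(pods.Items) > 0 {
		fmt.Println("pod exists, performing the action...")
	} else {
		fmt.Println("no matching pod found")
	}
}
</code></pre>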
| mario |
<p>I have a requirement where I would like to mount an EFS that has been created in AWS to be attached directly to a POD in an EKS cluster without mounting it on the actual EKS node. </p>
<p>My understanding was that if the EFS can be treated as an NFS server, then a PV/PVC can be created out of this and then directly mounted onto an EKS Pod.</p>
<p>I have done the above using EBS but with a normal vanilla Kubernetes and not EKS, I would like to know how to go about it for EFS and EKS. Is it even possible? Most of the documentations that I have read say that the mount path is mounted on the node and then to the k8s pods. But I would like to bypass the mounting on the node and directly mount it to the EKS k8s pods. </p>
<p>Are there any documentations that I can refer? </p>
| user1452759 | <p>That is not possible, because pods exist on nodes, therefore it has to be mounted on the nodes that host the pods.</p>
<p>Even when you did it with EBS, under the bonnet it was still attached to the node first.</p>
<p>However, you can restrict access to AWS resources with IAM using <a href="https://github.com/jtblin/kube2iam" rel="nofollow noreferrer">kube2iam</a>, or you can use the EKS-native solution to assign <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="nofollow noreferrer">IAM roles to Kubernetes Service Accounts</a>. The benefit of using <code>kube2iam</code> is that it will keep working with Kops, should you migrate to it from EKS.</p>
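<p>With the EKS-native approach, for example, the Pod's ServiceAccount is annotated with the IAM role it should assume; a rough sketch (the role ARN and names are only placeholders) looks like this:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  annotations:
    # placeholder account ID and role name
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-efs-role
</code></pre>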
| aries1980 |
<p>I'm following a tutorial <a href="https://docs.openfaas.com/tutorials/first-python-function/" rel="nofollow noreferrer">https://docs.openfaas.com/tutorials/first-python-function/</a>,</p>
<p>currently, I have the right image</p>
<pre class="lang-sh prettyprint-override"><code>$ docker images | grep hello-openfaas
wm/hello-openfaas latest bd08d01ce09b 34 minutes ago 65.2MB
$ faas-cli deploy -f ./hello-openfaas.yml
Deploying: hello-openfaas.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.
Deployed. 202 Accepted.
URL: http://IP:8099/function/hello-openfaas
</code></pre>
<p>there is a step that forewarns me to do some setup(My case is I'm using <code>Kubernetes</code> and <code>minikube</code> and don't want to push to a remote container registry, I should enable the use of images from the local library on Kubernetes.), I see the hints</p>
<pre><code>see the helm chart for how to set the ImagePullPolicy
</code></pre>
<p>I'm not sure how to configure it correctly. the final result indicates I failed.</p>
<p>Unsurprisingly, I couldn't access the function service, I find some clues in <a href="https://docs.openfaas.com/deployment/troubleshooting/#openfaas-didnt-start" rel="nofollow noreferrer">https://docs.openfaas.com/deployment/troubleshooting/#openfaas-didnt-start</a> which might help to diagnose the problem.</p>
<pre><code>$ kubectl logs -n openfaas-fn deploy/hello-openfaas
Error from server (BadRequest): container "hello-openfaas" in pod "hello-openfaas-558f99477f-wd697" is waiting to start: trying and failing to pull image
$ kubectl describe -n openfaas-fn deploy/hello-openfaas
Name: hello-openfaas
Namespace: openfaas-fn
CreationTimestamp: Wed, 16 Mar 2022 14:59:49 +0800
Labels: faas_function=hello-openfaas
Annotations: deployment.kubernetes.io/revision: 1
prometheus.io.scrape: false
Selector: faas_function=hello-openfaas
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 1 max surge
Pod Template:
Labels: faas_function=hello-openfaas
Annotations: prometheus.io.scrape: false
Containers:
hello-openfaas:
Image: wm/hello-openfaas:latest
Port: 8080/TCP
Host Port: 0/TCP
Liveness: http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
Readiness: http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
Environment:
fprocess: python3 index.py
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing False ProgressDeadlineExceeded
OldReplicaSets: <none>
NewReplicaSet: hello-openfaas-558f99477f (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 29m deployment-controller Scaled up replica set hello-openfaas-558f99477f to 1
</code></pre>
<p><code>hello-openfaas.yml</code></p>
<pre class="lang-yaml prettyprint-override"><code>version: 1.0
provider:
name: openfaas
gateway: http://IP:8099
functions:
hello-openfaas:
lang: python3
handler: ./hello-openfaas
image: wm/hello-openfaas:latest
imagePullPolicy: Never
</code></pre>
<hr />
<p>I create a new project <code>hello-openfaas2</code> to reproduce this error</p>
<pre class="lang-sh prettyprint-override"><code>$ faas-cli new --lang python3 hello-openfaas2 --prefix="wm"
Folder: hello-openfaas2 created.
# I add `imagePullPolicy: Never` to `hello-openfaas2.yml`
$ faas-cli build -f ./hello-openfaas2.yml
$ faas-cli deploy -f ./hello-openfaas2.yml
Deploying: hello-openfaas2.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.
Deployed. 202 Accepted.
URL: http://192.168.1.3:8099/function/hello-openfaas2
$ kubectl logs -n openfaas-fn deploy/hello-openfaas2
Error from server (BadRequest): container "hello-openfaas2" in pod "hello-openfaas2-7c67488865-7d7vm" is waiting to start: image can't be pulled
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-kp7vf 1/1 Running 0 47h
...
openfaas-fn env-6c79f7b946-bzbtm 1/1 Running 0 4h28m
openfaas-fn figlet-54db496f88-957xl 1/1 Running 0 18h
openfaas-fn hello-openfaas-547857b9d6-z277c 0/1 ImagePullBackOff 0 127m
openfaas-fn hello-openfaas-7b6946b4f9-hcvq4 0/1 ImagePullBackOff 0 165m
openfaas-fn hello-openfaas2-7c67488865-qmrkl 0/1 ImagePullBackOff 0 13m
openfaas-fn hello-openfaas3-65847b8b67-b94kd 0/1 ImagePullBackOff 0 97m
openfaas-fn hello-python-554b464498-zxcdv 0/1 ErrImagePull 0 3h23m
openfaas-fn hello-python-8698bc68bd-62gh9 0/1 ImagePullBackOff 0 3h25m
</code></pre>
<hr />
<p>from <a href="https://docs.openfaas.com/reference/yaml/" rel="nofollow noreferrer">https://docs.openfaas.com/reference/yaml/</a>, I know I put the <code>imagePullPolicy</code> in the wrong place, there is no such keyword in its schema.</p>
<p>I also tried <code>eval $(minikube docker-env</code> and still get the same error.</p>
<hr />
<p>I've a feeling that <code>faas-cli deploy</code> can be replace by <code>helm</code>, they all mean to run the image(whether from remote or local) in Kubernetes cluster, then I can use <code>helm chart</code> to setup the <code>pullPolicy</code> there. Even though the detail is not still clear to me, This discovery inspires me.</p>
<hr />
<p>So far, after <code>eval $(minikube docker-env)</code></p>
<pre class="lang-sh prettyprint-override"><code>$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
wm/hello-openfaas2 0.1 03c21bd96d5e About an hour ago 65.2MB
python 3-alpine 69fba17b9bae 12 days ago 48.6MB
ghcr.io/openfaas/figlet latest ca5eef0de441 2 weeks ago 14.8MB
ghcr.io/openfaas/alpine latest 35f3d4be6bb8 2 weeks ago 14.2MB
ghcr.io/openfaas/faas-netes 0.14.2 524b510505ec 3 weeks ago 77.3MB
k8s.gcr.io/kube-apiserver v1.23.3 f40be0088a83 7 weeks ago 135MB
k8s.gcr.io/kube-controller-manager v1.23.3 b07520cd7ab7 7 weeks ago 125MB
k8s.gcr.io/kube-scheduler v1.23.3 99a3486be4f2 7 weeks ago 53.5MB
k8s.gcr.io/kube-proxy v1.23.3 9b7cc9982109 7 weeks ago 112MB
ghcr.io/openfaas/gateway 0.21.3 ab4851262cd1 7 weeks ago 30.6MB
ghcr.io/openfaas/basic-auth 0.21.3 16e7168a17a3 7 weeks ago 14.3MB
k8s.gcr.io/etcd 3.5.1-0 25f8c7f3da61 4 months ago 293MB
ghcr.io/openfaas/classic-watchdog 0.2.0 6f97aa96da81 4 months ago 8.18MB
k8s.gcr.io/coredns/coredns v1.8.6 a4ca41631cc7 5 months ago 46.8MB
k8s.gcr.io/pause 3.6 6270bb605e12 6 months ago 683kB
ghcr.io/openfaas/queue-worker 0.12.2 56e7216201bc 7 months ago 7.97MB
kubernetesui/dashboard v2.3.1 e1482a24335a 9 months ago 220MB
kubernetesui/metrics-scraper v1.0.7 7801cfc6d5c0 9 months ago 34.4MB
nats-streaming 0.22.0 12f2d32e0c9a 9 months ago 19.8MB
gcr.io/k8s-minikube/storage-provisioner v5 6e38f40d628d 11 months ago 31.5MB
functions/markdown-render latest 93b5da182216 2 years ago 24.6MB
functions/hubstats latest 01affa91e9e4 2 years ago 29.3MB
functions/nodeinfo latest 2fe8a87bf79c 2 years ago 71.4MB
functions/alpine latest 46c6f6d74471 2 years ago 21.5MB
prom/prometheus v2.11.0 b97ed892eb23 2 years ago 126MB
prom/alertmanager v0.18.0 ce3c87f17369 2 years ago 51.9MB
alexellis2/openfaas-colorization 0.4.1 d36b67b1b5c1 2 years ago 1.84GB
rorpage/text-to-speech latest 5dc20810eb54 2 years ago 86.9MB
stefanprodan/faas-grafana 4.6.3 2a4bd9caea50 4 years ago 284MB
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-kp7vf 1/1 Running 0 6d
kube-system etcd-minikube 1/1 Running 0 6d
kube-system kube-apiserver-minikube 1/1 Running 0 6d
kube-system kube-controller-manager-minikube 1/1 Running 0 6d
kube-system kube-proxy-5m8lr 1/1 Running 0 6d
kube-system kube-scheduler-minikube 1/1 Running 0 6d
kube-system storage-provisioner 1/1 Running 1 (6d ago) 6d
kubernetes-dashboard dashboard-metrics-scraper-58549894f-97tsv 1/1 Running 0 5d7h
kubernetes-dashboard kubernetes-dashboard-ccd587f44-lkwcx 1/1 Running 0 5d7h
openfaas-fn base64-6bdbcdb64c-djz8f 1/1 Running 0 5d1h
openfaas-fn colorise-85c74c686b-2fz66 1/1 Running 0 4d5h
openfaas-fn echoit-5d7df6684c-k6ljn 1/1 Running 0 5d1h
openfaas-fn env-6c79f7b946-bzbtm 1/1 Running 0 4d5h
openfaas-fn figlet-54db496f88-957xl 1/1 Running 0 4d19h
openfaas-fn hello-openfaas-547857b9d6-z277c 0/1 ImagePullBackOff 0 4d3h
openfaas-fn hello-openfaas-7b6946b4f9-hcvq4 0/1 ImagePullBackOff 0 4d3h
openfaas-fn hello-openfaas2-5c6f6cb5d9-24hkz 0/1 ImagePullBackOff 0 9m22s
openfaas-fn hello-openfaas2-8957bb47b-7cgjg 0/1 ImagePullBackOff 0 2d22h
openfaas-fn hello-openfaas3-65847b8b67-b94kd 0/1 ImagePullBackOff 0 4d2h
openfaas-fn hello-python-6d6976845f-cwsln 0/1 ImagePullBackOff 0 3d19h
openfaas-fn hello-python-b577cb8dc-64wf5 0/1 ImagePullBackOff 0 3d9h
openfaas-fn hubstats-b6cd4dccc-z8tvl 1/1 Running 0 5d1h
openfaas-fn markdown-68f69f47c8-w5m47 1/1 Running 0 5d1h
openfaas-fn nodeinfo-d48cbbfcc-hfj79 1/1 Running 0 5d1h
openfaas-fn openfaas2-fun 1/1 Running 0 15s
openfaas-fn text-to-speech-74ffcdfd7-997t4 0/1 CrashLoopBackOff 2235 (3s ago) 4d5h
openfaas-fn wordcount-6489865566-cvfzr 1/1 Running 0 5d1h
openfaas alertmanager-88449c789-fq2rg 1/1 Running 0 3d1h
openfaas basic-auth-plugin-75fd7d69c5-zw4jh 1/1 Running 0 3d2h
openfaas gateway-5c4bb7c5d7-n8h27 2/2 Running 0 3d2h
openfaas grafana 1/1 Running 0 4d8h
openfaas nats-647b476664-hkr7p 1/1 Running 0 3d2h
openfaas prometheus-687648749f-tl8jp 1/1 Running 0 3d1h
openfaas queue-worker-7777ffd7f6-htx6t 1/1 Running 0 3d2h
$ kubectl get -o yaml -n openfaas-fn deploy/hello-openfaas2
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "6"
prometheus.io.scrape: "false"
creationTimestamp: "2022-03-17T12:47:35Z"
generation: 6
labels:
faas_function: hello-openfaas2
name: hello-openfaas2
namespace: openfaas-fn
resourceVersion: "400833"
uid: 9c4e9d26-23af-4f93-8538-4e2d96f0d7e0
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
faas_function: hello-openfaas2
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
annotations:
prometheus.io.scrape: "false"
creationTimestamp: null
labels:
faas_function: hello-openfaas2
uid: "969512830"
name: hello-openfaas2
spec:
containers:
- env:
- name: fprocess
value: python3 index.py
image: wm/hello-openfaas2:0.1
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /_/health
port: 8080
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
name: hello-openfaas2
ports:
- containerPort: 8080
name: http
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /_/health
port: 8080
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
enableServiceLinks: false
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
conditions:
- lastTransitionTime: "2022-03-17T12:47:35Z"
lastUpdateTime: "2022-03-17T12:47:35Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2022-03-20T12:16:56Z"
lastUpdateTime: "2022-03-20T12:16:56Z"
message: ReplicaSet "hello-openfaas2-5d6c7c7fb4" has timed out progressing.
reason: ProgressDeadlineExceeded
status: "False"
type: Progressing
observedGeneration: 6
replicas: 2
unavailableReplicas: 2
updatedReplicas: 1
</code></pre>
<hr />
<p>In one shell,</p>
<pre class="lang-sh prettyprint-override"><code>docker@minikube:~$ docker run --name wm -ti wm/hello-openfaas2:0.1
2022/03/20 13:04:52 Version: 0.2.0 SHA: 56bf6aac54deb3863a690f5fc03a2a38e7d9e6ef
2022/03/20 13:04:52 Timeouts: read: 5s write: 5s hard: 0s health: 5s.
2022/03/20 13:04:52 Listening on port: 8080
...
</code></pre>
<p>and another shell</p>
<pre><code>docker@minikube:~$ docker ps | grep wm
d7796286641c wm/hello-openfaas2:0.1 "fwatchdog" 3 minutes ago Up 3 minutes (healthy) 8080/tcp wm
</code></pre>
| ComplicatedPhenomenon | <p>If your image has a <code>latest</code> tag, the Pod's <code>ImagePullPolicy</code> will be automatically set to <code>Always</code>. Each time the pod is created, Kubernetes tries to pull the newest image.</p>
<p>Try not tagging the image as <code>latest</code>, or manually set the container's <code>imagePullPolicy</code> to <code>Never</code>.
If you're using a static manifest to create a Pod, the setting will look like the following:</p>
<pre><code>containers:
- name: test-container
image: testImage:latest
imagePullPolicy: Never
</code></pre>
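<p>Also note that with <code>imagePullPolicy: Never</code> the image <code>wm/hello-openfaas2:0.1</code> must actually exist inside minikube's Docker daemon (which seems to be the case here, since you could <code>docker run</code> it on the node). If the image only exists on the host, a possible way to get it into the cluster is the following sketch (assuming a reasonably recent minikube):</p>
<pre><code># build directly against minikube's Docker daemon instead of the host's
eval $(minikube docker-env)
docker build -t wm/hello-openfaas2:0.1 .

# or load an image that was already built on the host into the minikube node
minikube image load wm/hello-openfaas2:0.1
</code></pre>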
| Daigo |
<p>I have an EKS cluster with a minimum of 3 and a maximum of 6 nodes, and I created an Auto Scaling group for this setup. How can I auto scale the nodes when memory usage spikes up/down, given that the Auto Scaling group has no built-in memory metric like it has for CPU?
Can somebody please suggest clear steps? I am new to this setup.</p>
| Lakshmi Reddy | <p>Out of the box ASG does not support scaling based on the memory utilization.
You`ll have to use custom metric to do that.</p>
<p><a href="https://medium.com/@lvthillo/aws-auto-scaling-based-on-memory-utilization-in-cloudformation-159676b6f4d6" rel="nofollow noreferrer">Here</a> is way how to do that.</p>
<p>Have you considered using CloudWatch alarms to monitor your nodes?
The scripts can collect memory parameters that can be used later.</p>
<p>See <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html" rel="nofollow noreferrer">here</a> how to set it up.</p>
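<p>For illustration, here is a minimal sketch of publishing such a custom memory metric from a node with the AWS CLI (the namespace, metric name and ASG name are assumptions, and the instance profile needs <code>cloudwatch:PutMetricData</code> permissions):</p>
<pre><code>#!/bin/bash
# derive memory utilization (in percent) from /proc/meminfo
MEM_TOTAL=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
MEM_AVAIL=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
MEM_USED_PCT=$(( (MEM_TOTAL - MEM_AVAIL) * 100 / MEM_TOTAL ))

# publish the value so a CloudWatch alarm / ASG scaling policy can act on it
aws cloudwatch put-metric-data \
  --namespace "Custom/EKS" \
  --metric-name MemoryUtilization \
  --dimensions AutoScalingGroupName=my-eks-node-asg \
  --unit Percent \
  --value "$MEM_USED_PCT"
</code></pre>
<p>Run it periodically on each node (cron or a DaemonSet) and attach a step-scaling or target-tracking policy to the resulting metric.</p>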
| acid_fuji |
<p>Attempting to create a Kubernetes cluster using MiniKube based on these instructions: <a href="https://minikube.sigs.k8s.io/docs/start/macos/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/start/macos/</a> which appear relatively straightfwoard. Installing Minikube appears to go smoothly until I attempt to start it with command:</p>
<blockquote>
<p>minikube start</p>
</blockquote>
<p>Running that command results to these errors:</p>
<pre><code>🔄 Retriable failure: create: Error creating machine: Error in driver during machine creation: hyperkit crashed! command line:
hyperkit loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
🔥 Deleting "minikube" in hyperkit ...
🔥 Creating hyperkit VM (CPUs=2, Memory=2000MB, Disk=20000MB) ..
</code></pre>
<p>System details are as follows:</p>
<ul>
<li>OS: mac mojave 10.14.2 </li>
<li>Docker: version: 18.03.1-ce, build 9ee9f40</li>
<li>VirtualBox: version 6.0.14,133895</li>
</ul>
<p>I have also enabled VirtualBox kernel properties in System Preferences.</p>
<p>Any ideas?</p>
| Klaus Nji | <p>When you start your <strong>Minikube</strong> on <strong>MacOS</strong> without any flag:</p>
<pre><code>minikube start
</code></pre>
<p>it assumes you want to use the <strong>default hypervisor</strong> which in this case happens to be <a href="https://github.com/moby/hyperkit" rel="nofollow noreferrer">hyperkit</a> and the above command is equivalent to:</p>
<pre><code>minikube start --vm-driver=hyperkit
</code></pre>
<p>It looks like <a href="https://github.com/moby/hyperkit" rel="nofollow noreferrer">hyperkit</a> is not properly configured on your system and that's why you get the error message.</p>
<p>In order to tell your <strong>Minikube</strong> to use <strong>VirtualBox</strong> you need to specify it in its <code>start command</code>:</p>
<pre><code>minikube start --vm-driver=virtualbox
</code></pre>
<p>If you don't want to provide this flag each time you start your <strong>Minikube</strong> you can set <a href="https://www.virtualbox.org/wiki/Downloads" rel="nofollow noreferrer">VirtualBox</a> as the <strong>default driver</strong> by issuing the following command:</p>
<pre><code>minikube config set vm-driver virtualbox
</code></pre>
<p>After that each time you run:</p>
<pre><code>minikube start
</code></pre>
<p>it will use <strong>VirtualBox</strong> as the virtualization technology to run your <strong>Minikube</strong> instance.</p>
| mario |
<p>I'm new to K8s, I'll try minikube with 2 container running in a pod with this command:</p>
<pre><code>kubectl apply -f deployment.yaml
</code></pre>
<p>and this deployment.yml:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: site-home
spec:
restartPolicy: Never
volumes:
- name: v-site-home
emptyDir: {}
containers:
- name: site-web
image: site-home:1.0.0
ports:
- containerPort: 80
volumeMounts:
- name: v-site-home
mountPath: /usr/share/nginx/html/assets/quotaLago
- name: site-cron
image: site-home-cron:1.0.0
volumeMounts:
- name: v-site-home
mountPath: /app/quotaLago</code></pre>
</div>
</div>
</p>
<p>I've a shared volume so if I understand I cannot use deployment but only pods (maybe stateful set?)</p>
<p>In any case I want to expose the port 80 from the container site-web in the pod site-home.
In the official docs I see this for deployments:</p>
<pre><code>kubectl expose deployment hello-node --type=LoadBalancer --port=8080
</code></pre>
<p>but I cannot use for example:</p>
<pre><code>kubectl expose pod site-web --type=LoadBalancer --port=8080
</code></pre>
<p>any idea?</p>
| Davide | <blockquote>
<p>but I cannot use for example:</p>
<pre><code>kubectl expose pod site-web --type=LoadBalancer --port=8080
</code></pre>
</blockquote>
<p>Of course you can, however exposing a single <code>Pod</code> via <code>LoadBalancer</code> <code>Service</code> doesn't make much sense. If you have a <code>Deployment</code> which typically manages a set of <code>Pods</code> between which the real load can be balanced, <code>LoadBalancer</code> does its job. However you can still use it just for exposing a single <code>Pod</code>.</p>
<p>Note that your container exposes port <code>80</code>, not <code>8080</code> (<code>containerPort: 80</code> in your <code>container</code> specification) so you need to specify it as <code>target-port</code> in your <code>Service</code>. Your <code>kubectl expose</code> command may look like this:</p>
<pre><code>kubectl expose pod site-web --type=LoadBalancer --port=8080 --target-port=80
</code></pre>
<p>If you provide only <code>--port=8080</code> flag to your <code>kubectl expose</code> command it assumes that the <code>target-port</code>'s value is the same as value of <code>--port</code>. You can easily check it by yourself looking at the service you've just created:</p>
<pre><code>kubectl get svc site-web -o yaml
</code></pre>
<p>and you'll see something like this in <code>spec.ports</code> section:</p>
<pre><code>- nodePort: 32576
port: 8080
protocol: TCP
targetPort: 8080
</code></pre>
<p>After exposing your <code>Pod</code> (or <code>Deployment</code>) properly i.e. using:</p>
<pre><code>kubectl expose pod site-web --type=LoadBalancer --port=8080 --target-port=80
</code></pre>
<p>you'll see something similar:</p>
<pre><code> - nodePort: 31181
port: 8080
protocol: TCP
targetPort: 80
</code></pre>
<p>After issuing <code>kubectl get services</code> you should see similar output:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
site-web ClusterIP <Cluster IP> <External IP> 8080:31188/TCP 4m42s
</code></pre>
<p>If then you go to <code>http://<External IP>:8080</code> in your browser or run <code>curl http://<External IP>:8080</code> you should see your website's frontend.</p>
<p>Keep in mind that this solution makes sense and will be fully functional in a cloud environment which is able to provide you with a real load balancer. Note that if you declare such a <code>Service</code> type in Minikube, it in fact creates a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> service, as it is unable to provide you with a real load balancer. So your application will be available on your Node's (your Minikube VM's) IP address on a randomly selected port in the range <code>30000-32767</code> (in my example it's port <code>31181</code>).</p>
<p>As to your question about the volume:</p>
<blockquote>
<p>I've a shared volume so if I understand I cannot use deployment but
only pods (maybe stateful set?)</p>
</blockquote>
<p>yes, if you want to use specifically <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">EmptyDir</a> volume, it cannot be shared between different <code>Pods</code> (even if they were scheduled on the same node), it is shared only between containers within the same <code>Pod</code>. If you want to use <code>Deployment</code> you'll need to think about another storage solution such as <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolume</a>.</p>
<hr />
<h2>Edit</h2>
<p>In the first moment I didn't notice the error in your command:</p>
<pre><code>kubectl expose pod site-web --type=LoadBalancer --port=8080
</code></pre>
<p>You're trying to expose non-existing <code>Pod</code> as your <code>Pod</code>'s name is <code>site-home</code>, not <code>site-web</code>. <code>site-web</code> is a name of one of your containers (within your <code>site-home</code> <code>Pod</code>). Remember: we're exposing <code>Pod</code>, not <code>containers</code> via <code>Service</code>.</p>
<blockquote>
<p>I change 80->8080 but I always come to <code>error:kubectl expose pod site-home --type=LoadBalancer --port=8080 return:error: couldn't retrieve selectors via --selector flag or introspection: the pod has no labels and cannot be exposed</code> See <code>'kubectl expose -h'</code> for help
and examples.</p>
</blockquote>
<p>The key point here is: <code>the pod has no labels and cannot be exposed</code></p>
<p>It looks like your <code>Pod</code> doesn't have any <code>labels</code> defined which are required so that the <code>Service</code> can select this particular <code>Pod</code> (or set of similar <code>Pods</code> which have the same label) from among other <code>Pods</code> in your cluster. You need at least one label in your <code>Pod</code> definition. Adding simple label <code>name: site-web</code> under <code>Pod</code>'s <code>metadata</code> section should help. It may look like this in your <code>Pod</code> definition:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: site-home
labels:
name: site-web
spec:
...
</code></pre>
<p>Now you may even provide this label as selector in your service however it should be handled automatically if you omit <code>--selector</code> flag:</p>
<pre><code>kubectl expose pod site-home --type=LoadBalancer --port=8080 --target-port=80 --selector=name=site-web
</code></pre>
<p><strong>Remember:</strong> in Minikube a real load balancer cannot be created, so instead of <code>LoadBalancer</code> a <code>NodePort</code>-type Service will be created. The command <code>kubectl get svc</code> will tell you on which port (in the range <code>30000-32767</code>) your application will be available.</p>
<blockquote>
<p>and `kubectl expose pod site-web --type=LoadBalancer
--port=8080 return: Error from server (NotFound): pods "site-web" not found. Site-home is the pod, site-web is the container with the port
exposed, what's the issue?</p>
</blockquote>
<p>If you don't have a <code>Pod</code> with name <code>"site-web"</code> you can expect such message. Here you are simply trying to expose non-existing <code>Pod</code>.</p>
<blockquote>
<p>If I exposed a port from a container the port is automatically exposed also for the pod ?</p>
</blockquote>
<p>Yes, you have the port defined in Container definition. Your <code>Pods</code> automatically expose all ports that are exposed by the <code>Containers</code> within them.</p>
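<p>For reference, a declarative equivalent of the <code>kubectl expose</code> command used above could look like this (just a sketch based on the label and ports from this answer; in Minikube the <code>LoadBalancer</code> will again fall back to <code>NodePort</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: site-web
spec:
  type: LoadBalancer
  selector:
    name: site-web     # must match the label added to the Pod
  ports:
  - port: 8080         # port exposed by the Service
    targetPort: 80     # containerPort of the site-web container
</code></pre>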
| mario |
<p>according to the doc <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#emptydir</a> the volumes will be stored according to node's setting. Currently I am seeing the memory usage is pretty high in my case. I suspect that the default setting for the node is using "Memory". Is there a way to specify the volume to be mounted on other storage like disk?</p>
| shakalaka | <p>EmptyDir won't use memory unless you specifically set the <code>emptyDir.medium</code> field to <code>Memory</code> in each Pod's manifest.
Try executing <code>free -m</code> to confirm your current system doesn't consume shared memory too much:</p>
<pre><code># free -m
total used free shared buff/cache available
Mem: 31958 4680 21795 1079 5482 25781
</code></pre>
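<p>For comparison, here is a minimal sketch of both variants (volume names are placeholders): omitting <code>medium</code> keeps the volume on the node's disk, while <code>medium: Memory</code> backs it with tmpfs (RAM):</p>
<pre><code>volumes:
- name: disk-backed        # default: stored on the node's filesystem
  emptyDir: {}
- name: ram-backed         # tmpfs; usage counts against the Pod's memory
  emptyDir:
    medium: Memory
</code></pre>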
| Daigo |
<p>I am running a <strong>3 Node Kubernetes cluster with Flannel as CNI</strong>. I used kubeadm to setup the cluster and the version is 1.23.</p>
<p>My pods need to talk to external hosts using DNS addresses but there is no DNS server for those hosts. For that, I have added their entries in /etc/hosts on each node in cluster. The nodes can resolve the host from DNS but Pods are not able to resolve them.</p>
<p>I tried to search this problem over internet and there are suggestions to use HostAlias or update /etc/hosts file inside container. My problem is that the list of hosts is large and it's not feasible to maintain the list in the yaml file.</p>
<p>I also looked if Kubernetes has some inbuilt flag to make Pod look for entries in Node's /etc/hosts but couldn't find it.</p>
<p>So My question is -</p>
<ol>
<li>Why the pods running on the node cannot resolve hosts present in /etc/hosts file.</li>
<li>Is there a way to setup a local DNS server and asks all the Pods to query this DNS server for specific hosts resolutions?</li>
</ol>
<p>Any other suggestions or workarounds are also welcomed.</p>
| Krishnom | <p>Environments in the container should be separated from other containers and machines (including its host machine), and the same goes for /etc/hosts.</p>
<p>If you are using coreDNS (the default internal DNS), you can easily add extra hosts information by modifying its configMap.</p>
<p>Open the configMap <code>kubectl edit configmap coredns -n kube-system</code> and edit it so that it includes <code>hosts</code> section:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
Corefile: |
.:53 {
...
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
### Add the following section ###
hosts {
{ip1} {hostname1}
{ip2} {hostname2}
...
fallthrough
}
prometheus :9153
...
}
</code></pre>
<p>The setting will be reloaded within a few minutes, and then all the pods can resolve the hosts described in the ConfigMap.</p>
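<p>To verify, you could resolve one of the added hostnames from a throwaway pod, for example (the hostname is a placeholder for whatever you put in the <code>hosts</code> block):</p>
<pre><code>kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup my-external-host
</code></pre>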
| Daigo |
<p>I am trying to work out what the most suitable way of identifying if a request came from a specific internal service, or from the outside world (or indeed a different service).</p>
<p>I can write application level code to manage this of course, but was wondering what the simplest solution using Istio would be. The goal is to avoid writing extra layers of code if they're not necessary.</p>
<p>I have JWT on the perimeter for most endpoints, but there are some open (eg. auth).</p>
<p>Thanks!</p>
| JuanJSebGarcia | <p>For this specific scenario, I assumed that you are using http, so you can use Envoy and two http headers to determine the traffic source from internal or <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#http-header-manipulation" rel="nofollow noreferrer">external</a>.</p>
<p>Option 1: With <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-envoy-internal" rel="nofollow noreferrer">x-envoy-internal</a> you will be able to determine whether a request is internal origin or not.</p>
<p>Option 2: You can also check <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#config-http-conn-man-headers-x-forwarded-for" rel="nofollow noreferrer">x-forwarded-for</a>, which lists the IP addresses a request has passed through on its way to your service.</p>
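<p>As a rough sketch, such a header could even be used for routing in a VirtualService (the host and subset names below are assumptions and would need a matching DestinationRule):</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-api
spec:
  hosts:
  - my-api
  http:
  - match:
    - headers:
        x-envoy-internal:
          exact: "true"       # request entered through an internal listener
    route:
    - destination:
        host: my-api
        subset: internal
  - route:                    # everything else is treated as external
    - destination:
        host: my-api
        subset: external
</code></pre>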
<p>I hope this helps.</p>
| Raynel A.S |
<p>I am building an app (API) that will be running on the Google Kubernetes engine. I will add a NGINX Proxy (Cloud Endpoints) with SSL such that all external API requests will go through the Cloud Endpoints instance.</p>
<p>Question is, since I already have SSL enabled on the external open interface, do i need to add SSL cvertificates to the Kubernetes engine as well?</p>
| ssm | <p>In Google Kubernetes Engine, you can use Ingresses to create HTTPS load balancers with automatically configured SSL certificates. <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">Google-managed SSL certificates are provisioned, renewed, and managed for your domain names.</a> Read more about Google-managed SSL certificates <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates#managed-certs" rel="nofollow noreferrer">here</a>.</p>
<p>No, “You have the option to use <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates#managed-certs" rel="nofollow noreferrer">Google-managed SSL certificates (Beta)</a> or <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#setting_up_https_tls_between_client_and_load_balancer" rel="nofollow noreferrer">to use certificates that you manage yourself</a>.”</p>
<p>You added a NGINX Proxy (Cloud Endpoints) with SSL, then you do not need to add SSL certificates to the Kubernetes engine as well.</p>
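<p>As a rough illustration of that option, a Google-managed certificate setup could look like the following sketch (the domain, resource names and apiVersions are assumptions and may differ per GKE version):</p>
<pre><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: api-cert
spec:
  domains:
  - api.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    networking.gke.io/managed-certificates: api-cert
spec:
  defaultBackend:
    service:
      name: api-service
      port:
        number: 80
</code></pre>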
| Ahmad P |
<p>I have established kubernetes cluster on one of the private hosting machine with public IP. I installed a few applications there and NGINX as the ingress controller. I would like to make a reachable my services outside of the cluster and be accessible with the specific domain. I installed cert-manager via helm and requested certificate via letsencrypt-prod, (validated domain via http-01 resolver) everything looks perfect (clusterissuer. certificate, certificaterequest, challenge) but from some reason, my TLS secret after BASE64 decoding - contains three certificates in the following order:</p>
<pre><code>-----BEGIN CERTIFICATE-----
XXXX
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
XXXX
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
XXXX
-----END CERTIFICATE-----
</code></pre>
<p>which is incorrect, as far as I know - it should be only two certificates (instead of 3), any ideas what can be wrong with that?</p>
| corey | <p>There's nothing wrong with it. You are seeing the root certificate, intermediate certificate, and your subscriber certificate.</p>
<p>It's actually normal for certificate authorities to use intermediate certificates.</p>
<p>You can read about it here: <a href="https://letsencrypt.org/certificates/" rel="nofollow noreferrer">https://letsencrypt.org/certificates/</a></p>
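<p>If you want to see for yourself what is in the bundle, a quick sketch (the secret name is a placeholder):</p>
<pre><code>kubectl get secret <tls-secret-name> -o jsonpath='{.data.tls\.crt}' | base64 -d > chain.pem
openssl crl2pkcs7 -nocrl -certfile chain.pem | openssl pkcs7 -print_certs -noout
</code></pre>
<p>This prints the subject and issuer of every certificate in the chain, so you can see which one is the leaf, the intermediate and the root.</p>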
| ericfossas |
<p>In the case that an app and database are in different containers/pods (each a separate deployment yaml) how do you get them to communicate?</p>
<p>For example, a Django app requires the host name of the database in it's config/environment variables (along with the database name and a few other things) in order to connect to the database.</p>
<p>You should be able to specify the service as follows (assuming the database has a service called db-service in the default namespace):</p>
<p><strong>Inside Django app deployment.yaml file:</strong></p>
<pre><code> env:
- name: SQL_HOST
value: "db-service.default"
</code></pre>
<p>or you could do some hackery with the HOSTNAME for the database container (if it's similar to the app name) for example:</p>
<p><strong>Inside Django app deployment.yaml file:</strong></p>
<pre><code>env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: SQL_HOST
value: $(POD_NAME)-db
</code></pre>
<p><strong>Inside Postgres deployment.yaml file:</strong></p>
<pre><code>spec:
replicas: 1
selector:
matchLabels:
app: sitename-db-container
template:
metadata:
labels:
app: sitename-db-container
spec:
hostname: sitename-db
</code></pre>
<p>But what happens when you have multiple deployments inside a service for the same app (each having their app - database container pair)? How will the service know which app pod will communicate with what database pod? Does there now need to be a separate service for every app and database deployment?</p>
| Mick | <blockquote>
<p>But what happens when you have multiple deployments inside a service
for the same app (each having their app - database container pair)?
How will the service know which app pod will communicate with what
database pod? Does there now need to be a separate service for every
app and database deployment?</p>
</blockquote>
<p>What do you mean by <em>"multiple deployments inside a service"</em>? In a <code>Service</code> definition you are supposed to select only one set of <code>Pods</code>, let's say managed by one specific <code>Deployment</code>. As @Matt suggested, you should always <em>create a service with a unique name for each pod/db you want to access</em>. If you have <code>Pods</code> dedicated to specific tasks you deploy them separately (as separate <code>Deployments</code>). They can even consist of just one <code>Pod</code> if you don't need any redundancy. And basically you will always create a separate <code>Service</code> (obviously with a unique name, as you cannot create more <code>Services</code> using the same name) for exposing each different microservice (represented by a unique <code>Deployment</code>). Note that if you don't create a <code>Deployment</code> but a simple <code>Pod</code>, it won't be managed by any controller, so if it crashes nothing will take care of recreating it. So you should definitely always use a <code>Deployment</code>, even to run a single <code>Pod</code> representing your microservice.</p>
<p>Have you read <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service" rel="nofollow noreferrer">this</a> topic in the official kubernetes documentation? If you don't specify a <code>Service</code> type, by default it creates a so-called <code>ClusterIP</code> Service, which is basically what you need to expose your <code>app</code> components internally (make them available for other app components in your cluster).</p>
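<p>For example, a minimal <code>ClusterIP</code> Service for the Postgres <code>Deployment</code> from your question might look like this (a sketch; 5432 is assumed as the standard Postgres port):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: db-service
  namespace: default
spec:
  selector:
    app: sitename-db-container   # matches the labels of the Postgres Pod template
  ports:
  - port: 5432
    targetPort: 5432
</code></pre>
<p>With such a Service in place, setting <code>SQL_HOST</code> to <code>db-service.default</code>, exactly as in your first snippet, is all the app needs to reach the database.</p>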
| mario |
<p>Based on the instructions found here (<a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/</a>) I am trying to create an nginx deployment and configure it using a config-map. I can successfully access nginx using curl (yea!) but the configmap does not appear to be "sticking." The only thing it is supposed to do right now is forward the traffic along. I have seen the thread here (<a href="https://stackoverflow.com/questions/52773494/how-do-i-load-a-configmap-in-to-an-environment-variable">How do I load a configMap in to an environment variable?</a>). although I am using the same format, their answer was not relevant.</p>
<p>Can anyone tell me how to properly configure the configmaps? the yaml is</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: sandbox
spec:
selector:
matchLabels:
run: nginx
app: dsp
tier: frontend
replicas: 2
template:
metadata:
labels:
run: nginx
app: dsp
tier: frontend
spec:
containers:
- name: nginx
image: nginx
env:
# Define the environment variable
- name: nginx-conf
valueFrom:
configMapKeyRef:
# The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
name: nginx-conf
# Specify the key associated with the value
key: nginx.conf
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
containerPort: 80
</code></pre>
<p>the nginx-conf is</p>
<pre><code> # The identifier Backend is internal to nginx, and used to name this specific upstream
upstream Backend {
# hello is the internal DNS name used by the backend Service inside Kubernetes
server dsp;
}
server {
listen 80;
location / {
# The following statement will proxy traffic to the upstream named Backend
proxy_pass http://Backend;
}
}
</code></pre>
<p>I turn it into a configmap using the following line</p>
<pre><code>kubectl create configmap -n sandbox nginx-conf --from-file=apps/nginx.conf
</code></pre>
| user3877654 | <p>You need to mount the configMap rather than use it as an environment variable, as the setting is not a key-value format.</p>
<p>Your Deployment yaml should be like below:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: nginx
image: nginx
volumeMounts:
- mountPath: /etc/nginx
name: nginx-conf
volumes:
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: nginx.conf
path: nginx.conf
</code></pre>
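<p>Note that mounting at <code>/etc/nginx</code> replaces the whole directory contents (including files such as <code>mime.types</code> shipped in the image). If you only want to swap the single file, a <code>subPath</code> mount is a possible alternative (a sketch; keep in mind that <code>subPath</code> mounts do not pick up later ConfigMap updates):</p>
<pre><code> volumeMounts:
 - mountPath: /etc/nginx/nginx.conf
   subPath: nginx.conf
   name: nginx-conf
</code></pre>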
<p>You need to create (apply) the configMap beforehand. You can create it from file:</p>
<pre><code>kubectl create configmap nginx-conf --from-file=nginx.conf
</code></pre>
<p>or you can directly describe configMap manifest:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-conf
data:
nginx.conf: |
# The identifier Backend is internal to nginx, and used to name this specific upstream
upstream Backend {
# hello is the internal DNS name used by the backend Service inside Kubernetes
server dsp;
}
...
}
</code></pre>
| Daigo |
<p>I am trying to achieve the exactly-once delivery on Kafka using Spring-Kafka on Kubernetes.
As far as I understood, the transactional-ID must be set on the producer and it should be the same across restarts, as stated here <a href="https://stackoverflow.com/a/52304789/3467733">https://stackoverflow.com/a/52304789/3467733</a>.</p>
<p>The problem arises using this semantic on Kubernetes. <strong>How can you get a consistent ID?</strong></p>
<p>To solve this problem I implementend a Spring boot application, let's call it "Replicas counter" that checks, through the Kubernetes API, how many pods there are with the same name as the caller, so I have a counter for every pod replica.</p>
<p>For example, suppose I want to deploy a Pod, let's call it <code>APP-1</code>.</p>
<p>This app does the following:</p>
<ol>
<li>It perfoms a <code>GET</code> to the Replicas-Counter passing the pod-name as parameter.</li>
<li>The replicas-counter calls the Kubernetes API in order to check how many pods there are with that pod name. So it does a a +1 and returns it to the caller. I also need to count not-ready pods (think about a first deploy, they couldn't get the ID if I wasn't checking for not-ready pods).</li>
<li>The APP-1 gets the id and will use it as the transactional-id</li>
</ol>
<p>But, as you can see a problem could arise when performing rolling updates, for example:</p>
<p>Suppose we have 3 pods:</p>
<p>At the beginning we have:</p>
<ul>
<li>app-1: transactional-id-1</li>
<li>app-2: transactional-id-2</li>
<li>app-3: transactional-id-3</li>
</ul>
<p>So, during a rolling update we would have:</p>
<ul>
<li><p>old-app-1: transactional-id-1</p>
</li>
<li><p>old-app-2: transactional-id-2</p>
</li>
<li><p>old-app-3: transactional-id-3</p>
</li>
<li><p>new-app-3: transactional-id-4 (Not ready, waiting to be ready)</p>
<p>New-app-3 goes ready, so Kubernetes brings down the Old-app-3. So time to continue the rolling update.</p>
</li>
<li><p>old-app-1: transactional-id-1</p>
</li>
<li><p>old-app-2: transactional-id-2</p>
</li>
<li><p>new-app-3: transactional-id-4</p>
</li>
<li><p>new-app-2: transactional-id-4 (Not ready, waiting to be ready)</p>
</li>
</ul>
<p>As you can see now I have 2 pods with the same transactional-id.</p>
<p>As far as I understood, these IDs have to be the same across restarts and unique.</p>
<p>How can I implement something that gives me consistent IDs? Is there someone that have dealt with this problem?</p>
<p>The problem with these IDs are only for the Kubernetes Deployments, not for the Stateful-Sets, as they have a stable identifier as name. I don't want to convert all deployment to stateful sets to solve this problem as I think it is not the correct way to handle this scenario.</p>
| Justin | <p>The only way to guarantee the uniqueness of <code>Pods</code> is to use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>.
<code>StatefulSets</code> will allow you to keep the number of replicas alive but everytime pod dies it will be replaced with the same host and configuration. That will prevent data loss that is required. </p>
<p>Service in <code>Statefulset</code> must be <code>headless</code> because since each pod is going to be unique, so you are going to need certain traffic to reach certain pods. </p>
<p>Every <code>pod</code> require a <code>PVC</code> (in order to store data and recreate whenever pod is deleted from that data).</p>
<p><a href="https://blog.yugabyte.com/best-practices-for-deploying-confluent-kafka-spring-boot-distributed-sql-based-streaming-apps-on-kubernetes/" rel="nofollow noreferrer">Here</a> is a great article describing why <code>StatefulSet</code> should be used in similar case.</p>
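<p>As an illustration, here is a rough sketch of a headless Service plus StatefulSet where each replica can derive a stable transactional-id from its own pod name (all names and the image are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: app-1
spec:
  clusterIP: None            # headless Service required by the StatefulSet
  selector:
    app: app-1
  ports:
  - port: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-1
spec:
  serviceName: app-1
  replicas: 3
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: app
        image: registry.example.com/app-1:1.0.0
        env:
        - name: POD_NAME            # app-1-0, app-1-1, ... stable across restarts
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
</code></pre>
<p>The application can then build its transactional-id from <code>POD_NAME</code> (e.g. <code>txn-${POD_NAME}</code>), which stays the same across restarts and rolling updates.</p>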
| acid_fuji |
<p>Is there a way to tie a skaffold profile to a namespace? I'd like to make sure that dev, staging and prod deployments always go to the right namespace. I know that I can add a namespace to <code>skaffold run</code> like <code>skaffold run -p dev -n dev</code> but that's a little error prone. I'd like to make my builds even safer by tying profiles to namespaces. </p>
<p>I've tried adding the following to my <code>skaffold.yaml</code> based on the fact that there's a path in <code>skaffold.yaml</code> which is <a href="https://skaffold.dev/docs/references/yaml/" rel="nofollow noreferrer"><code>build/cluster/namespace</code></a> but I suspect I'm misunderstanding the purpose of the cluster spec.</p>
<pre><code>profiles:
- name: local
patches:
- op: replace
path: /build/artifacts/0/cluster/namespace
value: testing
</code></pre>
<p>but I get the error </p>
<pre><code> ❮❮❮ skaffold render -p local
FATA[0000] creating runner: applying profiles: applying profile local: invalid path: /build/artifacts/0/cluster/namespace
</code></pre>
<p>I've tried other variants of changing the cluster namespace but all of them fail.</p>
| Paymahn Moghadasian | <h3>if TL/DR: please go directly to "solution" (the last section)</h3>
<blockquote>
<p>Is there a way to tie a skaffold profile to a namespace? I'd like to
make sure that dev, staging and prod deployments always go to the
right namespace. I know that I can add a namespace to <code>skaffold run</code>
like <code>skaffold run -p dev -n dev</code> but that's a little error prone. I'd
like to make my builds even safer by tying profiles to namespaces.</p>
</blockquote>
<p>At the beginning we need to clarify one thing, namely whether we are talking about <code>namespaces</code> in the <code>build</code> or the <code>deploy</code> stage of the pipeline. On one hand you write that you want <em>to make sure that dev, staging and prod <strong>deployments</strong> always go to the right namespace</em>, so I'm assuming you're rather interested in setting the appropriate <code>namespace</code> on your <strong>kubernetes cluster</strong>, in which the built images will eventually be deployed. However, later you also mentioned <em>making <strong>builds</strong> even safer by tying profiles to namespaces</em>. Please correct me if I'm wrong, but my guess is that you really mean <code>namespaces</code> at the <code>deploy</code> stage.</p>
<p>So answering your question: <strong>yes, it is possible to tie a skaffold profile to a specific namespace</strong>.</p>
<blockquote>
<p>I've tried adding the following to my <code>skaffold.yaml</code> based on the
fact that there's a path in <code>skaffold.yaml</code> which is
<a href="https://skaffold.dev/docs/references/yaml/" rel="nofollow noreferrer">build/cluster/namespace</a> but I suspect I'm misunderstanding the
purpose of the cluster spec.</p>
</blockquote>
<p>You're right, there is such path in <code>skaffold.yaml</code> but then your example should look as follows:</p>
<pre><code>profiles:
- name: local
patches:
- op: replace
path: /build/cluster/namespace
value: testing
</code></pre>
<p>Note that <code>cluster</code> element is on the same indentation level as <code>artifacts</code>. As you can read in the <a href="https://skaffold.dev/docs/references/yaml/" rel="nofollow noreferrer">reference</a>:</p>
<pre><code>cluster: # beta describes how to do an on-cluster build.
</code></pre>
<p>and as you can see, most of its options are related with <code>kaniko</code>. It can be also <code>patched</code> in the same way as other <code>skaffold.yaml</code> elements in specific <code>profiles</code> but anyway I don't think this is the element you're really concerned about so let's leave it for now.</p>
<p>Btw. you can easily validate your <code>skaffold.yaml</code> syntax by running:</p>
<pre><code>skaffold fix
</code></pre>
<p>If every element is properly used, all the indentation levels are correct etc. it will print:</p>
<pre><code>config is already latest version
</code></pre>
<p>otherwise something like the error below:</p>
<pre><code>FATA[0000] creating runner: applying profiles: applying profile prod: invalid path: /build/cluster/namespace
</code></pre>
<hr>
<h1>solution</h1>
<p>You can make sure your deployments go to the right namespace by setting <code>kubectl flags</code>. This assumes you're using <code>docker</code> as the <code>builder</code> and <code>kubectl</code> as the deployer. As there are plenty of different <code>builders</code> and <code>deployers</code> supported by <code>skaffold</code>, if you e.g. deploy with <code>helm</code> the detailed solution may look quite different.</p>
<p><strong>One very important caveat:</strong> the path must be already present in your general config part, otherwise you won't be able to patch it in <code>profiles</code> section e.g.:</p>
<p>if you have in your profiles section following <code>patch</code>:</p>
<pre><code>profiles:
- name: prod
patches:
- op: replace
path: /build/artifacts/0/docker/dockerfile
value: DifferentNameForDockerfile
</code></pre>
<p>following section must be already present in your <code>skaffold.yaml</code>:</p>
<pre><code>build:
artifacts:
- image: skaffold-example
docker:
dockerfile: Dockerfile # the pipeline will fail at build stage
</code></pre>
<p>Going back to our <code>namaspaces</code>, first we need to set default values in <code>deploy</code> section:</p>
<pre><code>deploy:
kubectl:
manifests:
- k8s-pod.yaml
flags:
global: # additional flags passed on every command.
- --namespace=default
# apply: # additional flags passed on creations (kubectl apply).
# - --namespace=default
# delete: # additional flags passed on deletions (kubectl delete).
# - --namespace=default
</code></pre>
<p>I set only <code>global</code> flags, but it is also possible to set them separately for the <code>apply</code> and <code>delete</code> commands.</p>
<p>In the next step we need to override our default values (they must already be present, so that we can override them) in our <code>profiles</code>:</p>
<pre><code>profiles:
- name: dev
patches:
- op: replace
path: /deploy/kubectl/flags/global/0
value: --namespace=dev
- name: staging
patches:
- op: replace
path: /deploy/kubectl/flags/global/0
value: --namespace=staging
- name: prod
patches:
- op: replace
path: /deploy/kubectl/flags/global/0
value: --namespace=prod
</code></pre>
<p>Then we can run:</p>
<pre><code>skaffold run --render-only --profile=prod
</code></pre>
<p>As we can see, our <code>Pod</code> is going to be deployed in the <code>prod</code> <code>namespace</code> of our <strong>kubernetes cluster</strong>:</p>
<pre><code>Generating tags...
- skaffold-example -> skaffold-example:v1.3.1-15-g11d005d-dirty
Checking cache...
- skaffold-example: Found Locally
apiVersion: v1
kind: Pod
metadata:
labels:
app.kubernetes.io/managed-by: skaffold-v1.3.1
skaffold.dev/builder: local
skaffold.dev/cleanup: "true"
skaffold.dev/deployer: kubectl
skaffold.dev/docker-api-version: "1.39"
skaffold.dev/profile.0: prod
skaffold.dev/run-id: b83d48db-aec8-4570-8cb8-dbf9a7795c00
skaffold.dev/tag-policy: git-commit
skaffold.dev/tail: "true"
name: getting-started
namespace: prod
spec:
containers:
- image: skaffold-example:3e4840dfd2ad13c4d32785d73641dab66be7a89b43355eb815b85bc09f45c8b2
name: getting-started
</code></pre>
| mario |
<p>I have many helm repositories in my kubernetes,
And I installed a lot of charts,
So How do I know which repository the installed Chart belongs to?
For example:</p>
<pre><code>$> helm repo list
NAME URL
lucy-dev https://harbor.mydomain.net/chartrepo/lucy-dev
lucy-prod https://harbor.mydomain.net/chartrepo/lucy-prod
$> helm ls -n msgbox-lucy -o yaml
- app_version: "1.0"
chart: msgbox-lucy-8.27.3
name: msgbox-lucy
namespace: msgbox-lucy
revision: "48"
status: deployed
updated: 2022-04-19 08:11:16.761059067 +0000 UTC
</code></pre>
<p>I can't use <code>helm show</code> because:</p>
<pre><code>$> helm show all msgbox-lucy -n msgbox-lucy --debug
show.go:195: [debug] Original chart version: ""
Error: non-absolute URLs should be in form of repo_name/path_to_chart, got: msgbox-lucy
...
</code></pre>
| Bryan Chen | <p>I don't believe you're guaranteed to get the info you're looking for, however we can try.</p>
<p>Find the latest Helm secret for your Helm release.</p>
<pre><code>kubectl get secret -n msgbox-lucy
</code></pre>
<p>Yours might look something like this:</p>
<pre><code>sh.helm.release.v1.msgbox-lucy.v5
</code></pre>
<p>and run this command to view the chart's metadata:</p>
<pre><code>SECRET_NAME="sh.helm.release.v1.msgbox-lucy.v5"
kubectl get secret $SECRET_NAME -o json | jq .data.release \
| tr -d '"' | base64 -d | base64 -d | gzip -d \
| jq '.chart.metadata'
</code></pre>
<p>The metadata should hopefully show you 2 things you're looking for. The chart name will be under the <code>name</code> field. The chart repository URL might be under <code>sources</code>.</p>
<p>I say "might" because the chart developer should have added it there, but they might not have.</p>
<p>Then you can match the URL to your repo alias.</p>
<hr />
<p>If it's not included in the metadata, you're probably out of luck for now.</p>
<p>There is an open Github issue about exactly this feature you're wanting:</p>
<p><a href="https://github.com/helm/helm/issues/4256" rel="nofollow noreferrer">https://github.com/helm/helm/issues/4256</a></p>
<p>And an open PR that adds that feature:</p>
<p><a href="https://github.com/helm/helm/pull/10369" rel="nofollow noreferrer">https://github.com/helm/helm/pull/10369</a></p>
<p>And an open PR to add a HIP (Helm Improvement Proposal) for adding that feature:</p>
<p><a href="https://github.com/helm/community/pull/224" rel="nofollow noreferrer">https://github.com/helm/community/pull/224</a></p>
| ericfossas |
<p>Today I have rebooted one of my k8s worker-nodes. Now can't get metrics of any pod started on this node. <code>kubectl top nodes</code> works ok.</p>
<pre><code>$ kubectl top pods
W0413 03:16:04.917900 596110 top_pod.go:266] Metrics not available for pod default/cluster-registry-84f8b6b45c-xmzr4, age: 1h32m29.917882167s
error: Metrics not available for pod default/cluster-registry-84f8b6b45c-xmzr4, age: 1h32m29.917882167s
</code></pre>
<pre><code>$ kubectl logs -f metrics-server-596fcd4bcd-fgk86 -n kube-system
E0412 20:16:07.413028 1 reststorage.go:160] unable to fetch pod metrics for pod default/runner-registry-74bdcf4f9b-8kkzn: no metrics known for pod
E0412 20:17:07.413399 1 reststorage.go:160] unable to fetch pod metrics for pod default/runner-registry-74bdcf4f9b-8kkzn: no metrics known for pod
</code></pre>
<p>I have tried to start metrics-server with the <code>--v=4</code> arg, but found nothing interesting.
Metrics of pods on other nodes are OK.</p>
<p>k8s - <code>v1.17.4</code></p>
<p>metrics-server-amd64:v0.3.6 started with</p>
<pre><code>--kubelet-insecure-tls
--kubelet-preferred-address-types=InternalIP
</code></pre>
<p><strong>UPDATE:</strong>
Node name is <code>sms-crm-stg-2</code>. Output of <code>kubectl get --raw /api/v1/nodes/sms-crm-stg-2/proxy/stats/summary</code> below:</p>
<pre><code>$ kubectl get --raw /api/v1/nodes/sms-crm-stg-2/proxy/stats/summary
{
"node": {
"nodeName": "sms-crm-stg-2",
"systemContainers": [
{
"name": "pods",
"startTime": "2020-04-12T17:50:25Z",
"cpu": {
"time": "2020-04-14T10:53:20Z",
"usageNanoCores": 12877941,
"usageCoreNanoSeconds": 4387476849484
},
"memory": {
"time": "2020-04-14T10:53:20Z",
"availableBytes": 16520691712,
"usageBytes": 154824704,
"workingSetBytes": 136818688,
"rssBytes": 68583424,
"pageFaults": 0,
"majorPageFaults": 0
}
},
{
"name": "kubelet",
"startTime": "2020-04-12T17:49:18Z",
"cpu": {
"time": "2020-04-14T10:53:05Z",
"usageNanoCores": 18983004,
"usageCoreNanoSeconds": 2979656573959
},
"memory": {
"time": "2020-04-14T10:53:05Z",
"usageBytes": 374534144,
"workingSetBytes": 353353728,
"rssBytes": 325005312,
"pageFaults": 133278612,
"majorPageFaults": 536505
}
},
{
"name": "runtime",
"startTime": "2020-04-12T17:48:35Z",
"cpu": {
"time": "2020-04-14T10:53:03Z",
"usageNanoCores": 15982086,
"usageCoreNanoSeconds": 1522750008369
},
"memory": {
"time": "2020-04-14T10:53:03Z",
"usageBytes": 306790400,
"workingSetBytes": 297889792,
"rssBytes": 280047616,
"pageFaults": 53437788,
"majorPageFaults": 255703
}
}
],
"startTime": "2020-04-12T17:48:19Z",
"cpu": {
"time": "2020-04-14T10:53:20Z",
"usageNanoCores": 110654764,
"usageCoreNanoSeconds": 29602969518334
},
"memory": {
"time": "2020-04-14T10:53:20Z",
"availableBytes": 1377738752,
"usageBytes": 15835013120,
"workingSetBytes": 15279771648,
"rssBytes": 14585233408,
"pageFaults": 3309653,
"majorPageFaults": 16969
},
"network": {
"time": "2020-04-14T10:53:20Z",
"name": "",
"interfaces": [
{
"name": "br-6edcec7930f0",
"rxBytes": 0,
"rxErrors": 0,
"txBytes": 0,
"txErrors": 0
},
{
"name": "cali63387897a01",
"rxBytes": 131540393,
"rxErrors": 0,
"txBytes": 71581241,
"txErrors": 0
},
{
"name": "cali75b3a97cfc0",
"rxBytes": 194967,
"rxErrors": 0,
"txBytes": 54249,
"txErrors": 0
},
{
"name": "cali382d1538876",
"rxBytes": 666667,
"rxErrors": 0,
"txBytes": 780072,
"txErrors": 0
},
{
"name": "br-0b3d0a271eb2",
"rxBytes": 0,
"rxErrors": 0,
"txBytes": 0,
"txErrors": 0
},
{
"name": "cali7c48479e916",
"rxBytes": 139682733,
"rxErrors": 0,
"txBytes": 205172367,
"txErrors": 0
},
{
"name": "cali346a5d86923",
"rxBytes": 112517660,
"rxErrors": 0,
"txBytes": 232383,
"txErrors": 0
},
{
"name": "br-5d30bcdbc231",
"rxBytes": 0,
"rxErrors": 0,
"txBytes": 0,
"txErrors": 0
},
{
"name": "tunl0",
"rxBytes": 195091257,
"rxErrors": 0,
"txBytes": 215334849,
"txErrors": 0
},
{
"name": "ens160",
"rxBytes": 3241985272,
"rxErrors": 0,
"txBytes": 3548616264,
"txErrors": 0
}
]
},
"fs": {
"time": "2020-04-14T10:53:20Z",
"availableBytes": 9231872000,
"capacityBytes": 24109666304,
"usedBytes": 14877794304,
"inodesFree": 23363080,
"inodes": 23556096,
"inodesUsed": 193016
},
"runtime": {
"imageFs": {
"time": "2020-04-14T10:53:20Z",
"availableBytes": 9231872000,
"capacityBytes": 24109666304,
"usedBytes": 6145920764,
"inodesFree": 23363080,
"inodes": 23556096,
"inodesUsed": 193016
}
},
"rlimit": {
"time": "2020-04-14T10:53:22Z",
"maxpid": 32768,
"curproc": 1608
}
},
"pods": []
}
</code></pre>
<p><code>"pods": []</code> is empty, so looks like it is node problem, not metrics-server.</p>
| Kirill Bugaev | <p>OP confirmed the metrics server issue was caused by malfunctioned node. Adding a new one solved resolved the issues.</p>
| acid_fuji |
<p>Building off another one of my questions about <a href="https://stackoverflow.com/questions/60061593/tie-skaffold-profile-to-namespace">tying profiles to namespaces</a>, is there a way to tie profiles to clusters?</p>
<p>I've found a couple times now that I accidentally run commands like <code>skaffold run -p local -n skeleton</code> when my current kubernetes context is pointing to <code>docker-desktop</code>. I'd like to prevent myself and other people on my team from committing the same mistake.</p>
<p>I found that there's a way of <a href="https://skaffold.dev/docs/environment/kube-context/" rel="nofollow noreferrer">specifying contexts</a> but that doesn't play nicely if developers use custom contexts like <code>kubeclt set-context custom --user=custom --cluster=custom</code>. I've also found a <a href="https://skaffold.dev/docs/references/yaml/" rel="nofollow noreferrer">cluster field</a> in the <code>skaffold.yaml</code> reference but it seems that doesn't satisfy my need because it doesn't let me specify a cluster name.</p>
| Paymahn Moghadasian | <p>After digging through the <a href="https://skaffold.dev/docs/" rel="nofollow noreferrer">skaffold documentation</a> and performing several tests I finally managed to find at least partial solution of your problem, maybe not the most elegant one, but still functional. If I find a better way I will edit my answer.</p>
<p>Let's start from the beginning:</p>
<p>As we can read <a href="https://skaffold.dev/docs/environment/kube-context/" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>When interacting with a Kubernetes cluster, just like any other
Kubernetes-native tool, Skaffold requires a valid Kubernetes context
to be configured. The selected kube-context determines the Kubernetes
cluster, the Kubernetes user, and the default namespace. By default,
Skaffold uses the current kube-context from your kube-config file.</p>
</blockquote>
<p>This is quite an important point, as we actually start from the <code>kube-context</code> and, based on it, are able to trigger a specific profile, never the opposite.</p>
<p><strong>important to remember</strong>: <code>kube-context</code> is not activated based on the <code>profile</code> but the opposite is true: the specific <code>profile</code> is triggered based on the current context (selected by <code>kubectl config use-context</code>).</p>
<p>Although we can overwrite default settings from our <code>skaffold.yaml</code> config file by patching (<a href="https://stackoverflow.com/questions/60061593/tie-skaffold-profile-to-namespace/60107468#60107468">compare related answer</a>), it's not possible to overwrite the <code>current-context</code> based on a selected <code>profile</code>, e.g. one selected manually as in your command:</p>
<pre><code>skaffold -p prod
</code></pre>
<p>Here you are manually selecting specific <code>profile</code>. This way you bypass automatic <a href="https://skaffold.dev/docs/environment/profiles/" rel="nofollow noreferrer">profile triggering</a>. As the documentation says:</p>
<blockquote>
<p><strong>Activations in skaffold.yaml</strong>: You can auto-activate a profile based on</p>
<ul>
<li>kubecontext (could be either a string or a regexp: prefixing with <code>!</code> will negate the match)</li>
<li>environment variable value</li>
<li>skaffold command (dev/run/build/deploy)</li>
</ul>
</blockquote>
<p>To keep it simple, let's say we want to activate our profile based on the current <code>kube-context</code> only; however, we can join different conditions together with AND and OR like in the example <a href="https://skaffold.dev/docs/environment/profiles/" rel="nofollow noreferrer">here</a>.</p>
<h3>solution</h3>
<blockquote>
<p>I want to make sure that if I run skaffold -p prod skaffold will fail
if my kubecontext points to a cluster other than my production
cluster.</p>
</blockquote>
<p>I'm afraid it cannot be done this way. If you've already manually selected the <code>prod profile</code> with <code>-p prod</code>, you're bypassing the selection of the profile based on the current context; you have therefore already chosen <strong>what can be done</strong>, no matter how <strong>where it can be done</strong> is set (the currently selected <code>kube-context</code>). <strong>In this situation <code>skaffold</code> doesn't have any mechanism that would prevent you from running something on the wrong cluster</strong>. In other words, this way you're forcing a certain behaviour of your pipeline. You already agreed to it by selecting the profile. If you give up using the <code>-p</code> or <code>--profile</code> flags, certain profiles will never be triggered unless the currently selected <code>kube-context</code> does it automatically. <code>skaffold</code> just won't let that happen.</p>
<p>Let's look at the following example showing how to make it work:</p>
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
metadata:
name: getting-started
build:
artifacts:
- image: skaffold-example
docker:
dockerfile: NonExistingDockerfile # the pipeline will fail at build stage
cluster:
deploy:
kubectl:
manifests:
- k8s-pod.yaml
flags:
global: # additional flags passed on every command.
- --namespace=default
kubeContext: minikube
profiles:
- name: prod
patches:
- op: replace
path: /build/artifacts/0/docker/dockerfile
value: Dockerfile
- op: replace
path: /deploy/kubectl/flags/global/0
value: --namespace=prod
activation:
- kubeContext: minikube
command: run
- kubeContext: minikube
command: dev
</code></pre>
<p>In general part of our <code>skaffold.yaml</code> config we configured:</p>
<pre><code>dockerfile: NonExistingDockerfile # the pipeline will fail at build stage
</code></pre>
<p>As long as our <code>Dockerfile</code> is referenced as <code>"NonExistingDockerfile"</code>, every pipeline will fail at its <code>build</code> stage. So by default all builds, no matter which <code>kube-context</code> is selected, are destined to fail. However, we can override this default behaviour by <code>patching</code> a specific fragment of the <code>skaffold.yaml</code> in our <code>profile</code> section and setting the <code>Dockerfile</code> back to its standard name. This way every:</p>
<pre><code>skaffold run
</code></pre>
<p>or</p>
<pre><code>skaffold dev
</code></pre>
<p>command will succeed only if the current <code>kube-context</code> is set to <code>minikube</code>. Otherwise it will fail.</p>
<p>We can check it with:</p>
<pre><code>skaffold run --render-only
</code></pre>
<p>previously setting our current <code>kube-context</code> to the one that matches what is present in the <code>activation</code> section of our <code>profile</code> definition.</p>
<blockquote>
<p>I've found a couple times now that I accidentally run commands like
<code>skaffold run -p local -n skeleton</code> when my current kubernetes context
is pointing to <code>docker-desktop</code>. I'd like to prevent myself and other
people on my team from committing the same mistake.</p>
</blockquote>
<p>I understand your point that it would be nice to have some built-in mechanism that prevents command-line options from overriding the automatic profile activation configured in <code>skaffold.yaml</code>, but it looks like it currently isn't possible. If you don't specify <code>-p local</code>, <code>skaffold</code> will always choose the correct profile based on the current <code>context</code>. Well, it looks like good material for a <strong>feature request</strong>.</p>
| mario |
<p>I am using the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">official kubernetes dashboard</a> in Version <code>kubernetesui/dashboard:v2.4.0</code> to manage my cluster and I've noticed that, when I select a pod and look into the logs, the length of the displayed logs is quite short. It's like 50 lines or something?</p>
<p>If an exception occurs, the logs are pretty much useless because the original cause is hidden by lots of other lines. I would have to download the logs or shell to the kubernetes server and use <code>kubectl logs</code> in order to see whats going on.</p>
<p>Is there any way to configure the dashboard in a way so that more lines of logs get displayed?</p>
| Manu | <p>AFAIK, it is not possible with <code>kubernetesui/dashboard:v2.4.0</code>. On the <a href="https://github.com/kubernetes/dashboard/blob/master/docs/common/dashboard-arguments.md" rel="nofollow noreferrer">list of dashboard arguments</a> that allow for customization, there is no option to change the amount of logs displayed.</p>
<p>As a workaround you can use a <a href="https://prometheus.io/docs/introduction/overview/" rel="nofollow noreferrer">Prometheus + Grafana</a> combination or <a href="https://www.elastic.co/kibana/" rel="nofollow noreferrer">ELK's Kibana</a> as separate dashboards with logs/metrics; however, depending on the size and scope of your k8s cluster, it might be overkill. There are also alternative open-source k8s dashboards such as <a href="https://github.com/skooner-k8s/skooner" rel="nofollow noreferrer">skooner</a> (formerly known as k8dash), but I am not sure whether it offers more workload log visibility.</p>
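<p>In the meantime, pulling a longer log window straight from the CLI is also an option (pod and namespace names are placeholders):</p>
<pre><code>kubectl logs <pod-name> -n <namespace> --tail=1000
# logs of the previous (crashed) container instance
kubectl logs <pod-name> -n <namespace> --previous --tail=1000
</code></pre>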
| Piotr Malec |
<p>I have deployment yaml file where I am setting up one ARG.</p>
<pre><code>args['--country=usa']
</code></pre>
<p>It runs perfectly and the pod comes up with this argument. But for a different country name, I need to change this yaml and then run the</p>
<p><code>kubectl create -f deploy.yml</code> command again and again.</p>
<p>Is there any way to pass this arg through the create command?</p>
<p>I tried kubectl create -f deploy.yml '--country=usa' but it seems like this is not the correct way to pass it</p>
| iRunner | <p>Unfortunately, there is no simple way to do that.
Here are some related questions.</p>
<p>Using env variable and configmap:
<a href="https://stackoverflow.com/questions/60449786/how-to-pass-command-line-argument-to-kubectl-create-command">How to pass command line argument to kubectl create command</a></p>
<p>Using Helm:
<a href="https://stackoverflow.com/questions/61946064/kubernetes-deployment-pass-arguments">Kubernetes Deployment - Pass arguments</a></p>
| Daigo |
<p><strong>Problem:</strong></p>
<p>I want to read a json file into a configmap so it looks like:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: json-test
data:
test.json: |-
{
"key": "val"
}
</code></pre>
<p>Instead I get</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: json-test
data:
test.json: |-
"{\r\n \"key\": \"val\"\r\n}"
</code></pre>
<p><strong>What I've done:</strong></p>
<p>I have the following helm chart:</p>
<pre><code>Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2020-02-06 10:51 AM static
d----- 2020-02-06 10:55 AM templates
-a---- 2020-02-06 10:51 AM 88 Chart.yaml
</code></pre>
<p>static/ contains a single file: <code>test.json</code>:</p>
<pre><code>{
"key": "val"
}
</code></pre>
<p>templates/ contains a single configmap that reads test.json: <code>test.yml</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: json-test
data:
test.json: |-
{{ toJson ( .Files.Get "static/test.json" ) | indent 4}}
</code></pre>
<p>When I run <code>helm install test . --dry-run --debug</code> I get the following output</p>
<pre><code>NAME: test
LAST DEPLOYED: Thu Feb 6 10:58:18 2020
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
{}
HOOKS:
MANIFEST:
---
# Source: sandbox/templates/test.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: json-test
data:
test.json: |-
"{\r\n \"key\": \"val\"\r\n}"
</code></pre>
<p>The problem here is my json is wrapped in double quotes. My process that wants to read the json is expecting actual json, not a string.</p>
| Connor Graham | <p>I see that this is not specific behavior only for <strong>helm 3</strong>. It generally works in <strong>kubernetes</strong> this way.</p>
<p>I've just tested it on <strong>kubernetes v1.13</strong>.</p>
<p>First I created a <code>ConfigMap</code> based on this file:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: json-test
data:
test.json: |-
{
"key": "val"
}
</code></pre>
<p>When I run:</p>
<pre><code>$ kubectl get configmaps json-test -o yaml
</code></pre>
<p>I get the expected output:</p>
<pre><code>apiVersion: v1
data:
test.json: |-
{
"key": "val"
}
kind: ConfigMap
metadata:
...
</code></pre>
<p>but when I created my <code>ConfigMap</code> based on json file with the following content:</p>
<pre><code>{
"key": "val"
}
</code></pre>
<p>by running:</p>
<pre><code>$ kubectl create configmap json-configmap --from-file=test-json.json
</code></pre>
<p>Then when I run:</p>
<pre><code>kubectl get cm json-configmap --output yaml
</code></pre>
<p>I get:</p>
<pre><code>apiVersion: v1
data:
test-json.json: " { \n \"key\": \"val\"\n } \n"
kind: ConfigMap
metadata:
...
</code></pre>
<p>So it looks like it's pretty normal for kubernetes to transform the original <strong>json format</strong> into <strong>string</strong> when a <code>ConfigMap</code> is created from file.</p>
<p>It doesn't seem to be a bug as kubectl doesn't have any problems with extracting properly formatted json format from such <code>ConfigMap</code>:</p>
<pre><code>kubectl get cm json-configmap -o jsonpath='{.data.test-json\.json}'
</code></pre>
<p>gives the correct output:</p>
<pre><code>{
"key": "val"
}
</code></pre>
<p>I would say that it is application responsibility to be able to extract <strong>json</strong> from such <strong>string</strong> and it can be done probably in many different ways e.g. making direct call to <code>kube-api</code> or using <code>serviceaccount</code> configured to use <code>kubectl</code> in <code>Pod</code>.</p>
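<p>As a side note on the Helm template from the question (not a change to how kubernetes stores the data): <code>toJson</code> re-encodes the already-JSON file content as a JSON string, which is where the surrounding quotes and escaped characters come from. A sketch of the same template without that re-encoding, assuming the file already contains valid JSON:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: json-test
data:
  test.json: |-
{{ .Files.Get "static/test.json" | indent 4 }}
</code></pre>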
| mario |
<p>I have to share local <code>.ssh</code> directory content to pod. I search for hat and got answer from one of the post to share start as <a href="https://stackoverflow.com/a/48535001/243031"><code>--mount-string</code></a>.</p>
<pre><code>$ minikube start --mount-string="$HOME/.ssh/:/ssh-directory" --mount
😄 minikube v1.9.2 on Darwin 10.14.6
✨ Using the docker driver based on existing profile
👍 Starting control plane node m01 in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
E0426 23:44:18.447396 80170 kubeadm.go:331] Overriding stale ClientConfig host https://127.0.0.1:32810 with https://127.0.0.1:32813
📁 Creating mount /Users/myhome/.ssh/:/ssh-directory ...
🌟 Enabling addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
❗ /usr/local/bin/kubectl is v1.15.5, which may be incompatible with Kubernetes v1.18.0.
💡 You can also use 'minikube kubectl -- get pods' to invoke a matching version
</code></pre>
<p>When I check the docker for the given Minikube, it return</p>
<pre><code>$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad64f642b63 gcr.io/k8s-minikube/kicbase:v0.0.8 "/usr/local/bin/entr…" 3 weeks ago Up 45 seconds 127.0.0.1:32815->22/tcp, 127.0.0.1:32814->2376/tcp, 127.0.0.1:32813->8443/tcp minikube
</code></pre>
<p>And check the <code>.ssh</code> directory content are there or not.</p>
<pre><code>$ docker exec -it 5ad64f642b63 ls /ssh-directory
id_rsa id_rsa.pub known_hosts
</code></pre>
<p>I have deployment yml as</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-deployment
labels:
stack: api
app: api-web
spec:
replicas: 1
selector:
matchLabels:
app: api-web
template:
metadata:
labels:
app: api-web
spec:
containers:
- name: api-web-pod
image: tiangolo/uwsgi-nginx-flask
ports:
- name: api-web-port
containerPort: 80
envFrom:
- secretRef:
name: api-secrets
volumeMounts:
- name: ssh-directory
mountPath: /app/.ssh
volumes:
- name: ssh-directory
hostPath:
path: /ssh-directory/
type: Directory
</code></pre>
<p>When it ran, it gives error for <code>/ssh-directory</code>.</p>
<pre><code>$ kubectl describe pod/api-deployment-f65db9c6c-cwtvt
Name: api-deployment-f65db9c6c-cwtvt
Namespace: default
Priority: 0
Node: minikube/172.17.0.2
Start Time: Sat, 02 May 2020 23:07:51 -0500
Labels: app=api-web
pod-template-hash=f65db9c6c
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/api-deployment-f65db9c6c
Containers:
api-web-pod:
Container ID:
Image: tiangolo/uwsgi-nginx-flask
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
api-secrets Secret Optional: false
Environment: <none>
Mounts:
/app/.ssh from ssh-directory (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9shz5 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ssh-directory:
Type: HostPath (bare host directory volume)
Path: /ssh-directory/
HostPathType: Directory
default-token-9shz5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9shz5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/api-deployment-f65db9c6c-cwtvt to minikube
Warning FailedMount 11m kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[ssh-directory], unattached volumes=[default-token-9shz5 ssh-directory]: timed out waiting for the condition
Warning FailedMount 2m13s (x4 over 9m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[ssh-directory], unattached volumes=[ssh-directory default-token-9shz5]: timed out waiting for the condition
Warning FailedMount 62s (x14 over 13m) kubelet, minikube MountVolume.SetUp failed for volume "ssh-directory" : hostPath type check failed: /ssh-directory/ is not a directory
</code></pre>
<p>When I check the content of <code>/ssh-directory</code> in docker.</p>
<p>It gives IO error.</p>
<pre><code>$ docker exec -it 5ad64f642b63 ls /ssh-directory
ls: cannot access '/ssh-directory': Input/output error
</code></pre>
<p>I know there are default mount points for Minikube. As mentioned in <a href="https://minikube.sigs.k8s.io/docs/handbook/mount/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/mount/</a>, </p>
<pre><code>+------------+----------+---------------+----------------+
| Driver | OS | HostFolder | VM |
+------------+----------+---------------+----------------+
| VirtualBox | Linux | /home |/hosthome |
+------------+----------+---------------+----------------+
| VirtualBox | macOS | /Users |/Users |
+------------+----------+---------------+----------------+
| VirtualBox | Windows |C://Users | /c/Users |
+------------+----------+---------------+----------------+
|VMware Fusio| macOS |/Users |/Users |
+------------+----------+---------------+----------------+
| KVM | Linux | Unsupported. | |
+------------+----------+---------------+----------------+
| HyperKit | Linux | Unsupported |(see NFS mounts)|
+------------+----------+---------------+----------------+
</code></pre>
<p>But I installed minikube as <code>brew install minikube</code> and its set <code>driver</code> as <code>docker</code>.</p>
<pre><code>$ cat ~/.minikube/config/config.json
{
"driver": "docker"
}
</code></pre>
<p>There is no mapping for <code>docker</code> driver in mount point.</p>
<p>Initially this directory has the files, but somehow when I try to create the pod they get deleted or something goes wrong.</p>
| Nilesh | <p>While reproducing this on ubuntu I encountered the exact issue. </p>
<p>The directory indeed looked like it was mounted but the files were missing, which led me to think that this is a general issue with mounting directories with the docker driver. </p>
<p>There is an open issue on github about the same problem ( <a href="https://github.com/kubernetes/minikube/issues/2481" rel="nofollow noreferrer">mount directory empty</a> ) and an open feature request <a href="https://github.com/kubernetes/minikube/issues/7604" rel="nofollow noreferrer">to mount host volumes into docker driver</a>. </p>
<p>Inspecting minikube container shows no record of that mounted volume and confirms information mentioned in the github request that the only volume shared with host as of now is the one that mounts by default (that is <code>/var/lib/docker/volumes/minikube/_data</code> mounted into minikube's <code>/var</code> directory).</p>
<pre><code>$ docker inspect minikube
"Mounts": [
{
"Type": "volume",
"Name": "minikube",
"Source": "/var/lib/docker/volumes/minikube/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
</code></pre>
<p>As the workaround you could copy your <code>.ssh</code> directory into the running minikube docker container with following command: </p>
<pre><code>docker cp $HOME/.ssh minikube:<DESIRED_DIRECTORY>
</code></pre>
<p>and then mount this desired directory into the pod.</p>
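<p>You can verify the copy the same way you checked the mount before (assuming you chose <code>/ssh-directory</code> as the destination), e.g.:</p>
<pre><code>docker exec -it minikube ls /ssh-directory
</code></pre>
<p>Since with the docker driver the <code>minikube</code> container acts as the node, the existing <code>hostPath</code> volume in your deployment should then be able to see those files.</p>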
| acid_fuji |
<p>I am trying to deploy nats in k8s cluster. I need to override default server config.
Tried creating a configmap with --from-file and attached it to deployment, but it gives me the following error</p>
<pre><code>nats-server: read /etc/nats-server-conf/server.conf: is a directory
</code></pre>
<p>ConfigMap</p>
<pre><code>k describe configmaps nats-server-conf
</code></pre>
<pre><code>Name: nats-server-conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
server.conf:
----
accounts: {
\$SYS: {
users: [{user: sys, password: pass}]
}
}
BinaryData
====
Events: <none>
</code></pre>
<p>Following is my deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nats-depl
spec:
replicas: 1
selector:
matchLabels:
app: nats
template:
metadata:
labels:
app: nats
spec:
containers:
- name: nats
image: nats
volumeMounts:
- mountPath: /etc/nats-server-conf/server.conf
name: nats-server-conf
args:
[
'-p',
'4222',
'-m',
'8222',
'-js',
'-c',
'/etc/nats-server-conf/server.conf'
]
volumes:
- configMap:
name: nats-server-conf
name: nats-server-conf
</code></pre>
<p>Thank you.</p>
| Shreyas Chorge | <pre><code>- mountPath: /etc/nats-server-conf/server.conf
</code></pre>
<p>The above setting will make a Pod mount <code>server.conf</code> as a directory, so try the below instead:</p>
<pre><code>- mountPath: /etc/nats-server-conf
</code></pre>
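<p>For reference, a sketch of how the relevant part of the deployment from the question could look with the directory-level mount (the <code>-c</code> argument keeps pointing at the file, which appears inside the mounted directory under the ConfigMap key name <code>server.conf</code>):</p>
<pre><code>      containers:
      - name: nats
        image: nats
        volumeMounts:
        - mountPath: /etc/nats-server-conf
          name: nats-server-conf
        args: ['-p', '4222', '-m', '8222', '-js', '-c', '/etc/nats-server-conf/server.conf']
      volumes:
      - configMap:
          name: nats-server-conf
        name: nats-server-conf
</code></pre>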
| Daigo |
<p>Trying to deploy postgres in kubernetes (<a href="https://github.com/paunin/PostDock/tree/master/k8s/example2-single-statefulset" rel="nofollow noreferrer">https://github.com/paunin/PostDock/tree/master/k8s/example2-single-statefulset</a>), </p>
<ol>
<li>Used pgpool-II(2 replicas) along with 1 postgres-master pod & 2 slave pods. </li>
</ol>
<pre><code>kubectl get pods -n postgres
NAME READY STATUS RESTARTS AGE
psql-db-pgpool-8****c-7**k 1/1 Running 0 35d
psql-db-pgpool-8****c-m**5 1/1 Running 0 35d
psql-db-node-0 1/1 Running 0 35d
psql-db-node-1 1/1 Running 0 35d
psql-db-node-2 1/1 Running 0 20h
</code></pre>
<ol start="2">
<li>Created a user "test" from the master postgres db with postgres user. </li>
<li>When trying to connect from within the pods(node-0), the authentication is successful, with user "test".</li>
</ol>
<pre><code>root@postgres-db-node-0:/# psql -h localhost postgres -U test
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.
postgres=> \l
</code></pre>
<ol>
<li>When trying to connect with NodePort IP & NodePort of the kubernetes cluster, the new user "test" fails authentication with <strong>pool_passwd file does not contain an entry for "test"</strong></li>
</ol>
<pre><code>psql -h NODE_IP -U test -d postgres --port NODE_PORT
psql: FATAL: md5 authentication failed
DETAIL: pool_passwd file does not contain an entry for "test"
</code></pre>
<ol start="5">
<li>Logged in to the pgpool-II pod to find out </li>
</ol>
<pre><code>root@psql-db-pgpool-8****c-7**k:/# cat /usr/local/etc/pool_passwd
user1:md5****422f
replica_user:md5****3
</code></pre>
<p>The new user "test" created at the database is not reflected at the pgpool. Does it work this way, to create & update pgpool everytime a new user is created? Or am I missing something for this user update.</p>
| Sandy | <p>The postgres example You deployed uses secret object to store user and password credentials. And this is the recommended way of managing sensitive data in
kubernetes deployments.</p>
<p>There are following <a href="https://github.com/paunin/PostDock/blob/master/k8s/README.md#example-for-k8s" rel="nofollow noreferrer">instructions</a> in this example:</p>
<ul>
<li>Create namespace by <code>kubectl create -f ./namespace/</code></li>
<li>Create configs: <code>kubectl create -f ./configs/</code></li>
<li>Create volumes <code>kubectl create -f ./volumes/</code></li>
<li>Create services <code>kubectl create -f ./services/</code></li>
<li>Create nodes <code>kubectl create -f ./nodes/</code></li>
<li>Create pgpool <code>kubectl create -f ./pgpool/</code></li>
</ul>
<p>If You followed them in correct order, the <code>mysystem-secret</code> secret object is created when <code>kubectl create -f ./configs/</code> is called from <a href="https://github.com/paunin/PostDock/blob/master/k8s/example2-single-statefulset/configs/secret.yml" rel="nofollow noreferrer"><code>configs/secret.yml</code></a>.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
namespace: mysystem
name: mysystem-secret
type: Opaque
data:
app.db.user: d2lkZQ== #wide
app.db.password: cGFzcw== #pass
app.db.cluster.replication.user: cmVwbGljYV91c2Vy #replica_user
app.db.cluster.replication.password: cmVwbGljYV9wYXNz #replica_pass
app.db.pool.users: d2lkZTpwYXNz #wide:pass
app.db.pool.pcp.user: cGNwX3VzZXI= #pcp_user
app.db.pool.pcp.password: cGNwX3Bhc3M= #pcp_pass
</code></pre>
<p><em>Note that the comment next to each encoded password is the decoded password, so in a production setting this should be avoided.</em></p>
<p>Then the user and password credentials from <code>mysystem-secret</code> are used in <code>kubectl create -f ./nodes/</code> and <code>kubectl create -f ./pgpool/</code> as environment variables that are present in all replicas and can be used to connect to the database.</p>
<pre><code>...
- name: "POSTGRES_USER"
valueFrom:
secretKeyRef:
name: mysystem-secret
key: app.db.user
- name: "POSTGRES_PASSWORD"
valueFrom:
secretKeyRef:
name: mysystem-secret
key: app.db.password
...
</code></pre>
<p>If You want to use Your own user and password You need to modify the <code>configs/secret.yml</code> file and replace passwords you wish to modify with base64 encoded passwords.</p>
<p>You can easily encode any password to base64 with following command:</p>
<pre><code>echo -n 'admin' | base64
YWRtaW4=
echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
</code></pre>
<hr>
<p>Update:</p>
<p>To add additional users that work with pgpool after cluster deployment you can use the <a href="https://github.com/CrunchyData/postgres-operator" rel="nofollow noreferrer">postgres-operator</a> tool. Users added manually by exec'ing into a pod and creating them locally will not be propagated to the other nodes.</p>
<p>Follow <a href="https://access.crunchydata.com/documentation/postgres-operator/latest/installation/install-pgo-client/" rel="nofollow noreferrer">these</a> instructions to install Postgres Operator (pgo client) and configure it to work with kubernetes.</p>
| Piotr Malec |
<p>I am attempting to monitor the performance of my pods within <code>MiniShift</code> and tried to implement the Kubernetes Dashboard (<a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard</a>) following all instructions.</p>
<p>It creates the Kubernetes-Dashboard project (separate from the <code>NodeJs</code> project I am attempting to monitor) and when I run <code>kubectl</code> proxy and access the URL (<a href="http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</a>) it gives the following error.</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "services \"kubernetes-dashboard\" not found",
"reason": "NotFound",
"details": {
"name": "kubernetes-dashboard",
"kind": "services"
},
"code": 404
}
</code></pre>
| SherazS | <p>If you attempt to use the dashboard in <code>minikube</code>, the situation is similar to minishift: you don't deploy the dashboard yourself since <code>minikube</code> has integrated support for it. </p>
<p>To access the dashboard you use this command: </p>
<pre class="lang-sh prettyprint-override"><code>minikube dashboard
</code></pre>
<p>This will enable the dashboard add-on and open the proxy in the default web browser. If you just want the plain URL, the dashboard command can also simply emit one:</p>
<pre class="lang-sh prettyprint-override"><code>minikube dashboard --url
</code></pre>
<p>Coming back to minishift, you might want to check out the <a href="https://github.com/minishift/minishift-addons" rel="nofollow noreferrer">minishift add-ons</a> and its <a href="https://github.com/minishift/minishift-addons/tree/master/add-ons/kube-dashboard" rel="nofollow noreferrer">kubernetes dashboard add-on</a>.</p>
| acid_fuji |
<p>Micrometer exposing actuator metrics to set requests/limits for pods in K8s vs metrics-server vs kube-state-metrics -> K8s Mixin from the kube-prometheus-stack Grafana dashboard.
It's really blurry and frustrating to me to understand why there is such a big difference between the values from the three in the title, how one should utilize the K8s Mixin to set proper requests/limits, and whether that is expected at all.
I was hoping I could just see the same data that I see when I type kubectl top podname --containers as what I see when I open the K8s -> ComputeResources -> Pods dashboard in Grafana. But not only do the values differ by more than double, the values reported from the actuator differ from both as well.
When exposing Spring data with Micrometer, the sum of jvm_memory_used_bytes corresponds more to what I get from metrics-server (0.37.0) than to what I see on Grafana from the mixin dashboards, but it is still far off.
I am using K8s 1.14.3 on Ubuntu 18.04 LTS managed by kubespray.
kube-prometheus-stack 9.4.4 installed with helm 2.14.3.
Spring Boot 2.0 with Micrometer. I saw the explanation on the metrics-server git that this is the value that the kubelet uses for OOMKill, but again this is not helpful at all: what should I do with the dashboard? What is the way to handle this?</p>
<p><a href="https://i.stack.imgur.com/aVzeS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aVzeS.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/elHG4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/elHG4.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/m1DZm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m1DZm.png" alt="enter image description here" /></a></p>
| anVzdGFub3RoZXJodW1hbg | <p>Based on what I see so far, I have found the root cause: I renamed the kubelet service from the old chart to the new one so that it can get targeted by the ServiceMonitors. So for me the best solution would be the Grafana kube-state-metrics dashboards plus comparing what I see in the JVM dashboard.</p>
| anVzdGFub3RoZXJodW1hbg |
<p>First off, I'm completely new with Kubernetes so I may have missed something completely obvious but the documentation is exactly helping, so I'm turning to you guys for help.</p>
<p>I'm trying to figure out just how many types of "deployment files" there are for Kubernetes. I call them "deployment files" because I really don't know what else to call them and they're usually associated with a deployment.</p>
<p>So far, every yml/yaml file I've seen start like this:</p>
<pre><code>apiVersion:
kind: << this is what I'm asking about >>
metadata:
</code></pre>
<p>And so far I have seen this many "kind"(s)</p>
<pre><code>ClusterConfig
ClusterRole
ClusterRoleBinding
CronJob
Deployment
Job
PersistentVolumeClaim
Pod
ReplicationController
Role
RoleBinding
Secret
Service
ServiceAccount
</code></pre>
<p>I'm sure there are many more. But I can't seem to find a location where they are listed and the contexts broken down.</p>
<p>So what I want to know is this, </p>
<ol>
<li>Where can I find an explanation for these yaml files?</li>
<li>Where can I learn about the different kinds?</li>
<li>Where can I get a broken down explanation of the minimum required fields/values are for any of these?</li>
<li>Are there templates for these files?</li>
</ol>
<p>Thanks</p>
| luis.madrigal | <p>When you are talking about a specific yaml file containing the definition of a specific kubernetes object, you can call them <code>yaml manifests</code> or simply <code>yaml definition files</code>. Using the word <code>Deployment</code> for all of them isn't a good idea as there is already a specific resource type defined and called by this name in <strong>kubernetes</strong>. So for consistency it's better not to call them all <code>deployments</code>.</p>
<blockquote>
<p>I'm sure there are many more. But I can't seem to find a location
where they are listed and the contexts broken down.</p>
</blockquote>
<p>Yes, there are a lot more of them and you can list those which are available by running:</p>
<pre><code>kubectl api-resources
</code></pre>
<p>These different objects are actually called <code>api-resources</code>. As you can see they are listed in five columns: NAME, SHORTNAMES, APIGROUP, NAMESPACED and KIND.</p>
<pre><code>NAME SHORTNAMES APIGROUP NAMESPACED KIND
bindings true Binding
componentstatuses cs false ComponentStatus
configmaps cm true ConfigMap
endpoints ep true Endpoints
events ev true Event
limitranges limits true LimitRange
namespaces ns false Namespace
nodes no false Node
</code></pre>
<p>Note that the name of a resource corresponds to its <code>KIND</code> but it is slightly different. <code>NAME</code> simply describes resource types as we refer to them e.g. when using the <code>kubectl</code> command line utility. Just to give one example, when you want to list the pods available in your cluster you simply type <code>kubectl get pods</code>. You don't have to use the resource kind i.e. <code>Pod</code> in this context. You can but you don't have to. So <code>kubectl get Pod</code> or <code>kubectl get ConfigMap</code> will also return the desired result. You can also refer to them by their shortnames, so <code>kubectl get daemonsets</code> and <code>kubectl get ds</code> are equivalent.</p>
<p>It's totally different when it comes to a specific resource/object definition. In the context of a <code>yaml definition</code> file we must use the proper <code>KIND</code> of the resource. They mostly start with a capital letter and are written in so-called <code>CamelCase</code>, but there are exceptions to this rule.</p>
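<p>To make that concrete, here is a minimal example manifest (the name and image are arbitrary) where <code>kind</code> has to match the resource KIND exactly:</p>
<pre><code>apiVersion: v1
kind: Pod        # must match the KIND column, not the lowercase resource name
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: nginx
</code></pre>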
<p>I really recommend that you familiarize yourself with the kubernetes documentation. It is very user-friendly and nicely explains both the key kubernetes concepts as well as the very tiny details.</p>
<p><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#resource-types" rel="nofollow noreferrer">Here</a> you have even more useful commands for exploring API resources:</p>
<blockquote>
<pre><code>kubectl api-resources --namespaced=true # All namespaced resources
kubectl api-resources --namespaced=false # All non-namespaced resources
kubectl api-resources -o name # All resources with simple output (just the resource name)
kubectl api-resources -o wide # All resources with expanded (aka "wide") output
kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs
kubectl api-resources --api-group=extensions # All resources in the "extensions" API group
</code></pre>
</blockquote>
<p>As @wargre already suggested in his comment, the official kubernetes documentation is definitely the best place to start, as you will find there a very detailed description of every resource.</p>
| mario |
<p>We have a kubernetes cluster with istio proxy running.
At first I created a cronjob which reads from database and updates a value if found. It worked fine.</p>
<p>Then it turns out we already had a service that does the database update so I changed the database code into a service call.</p>
<pre><code>conn := dial.Service("service:3550", grpc.WithInsecure())
client := protobuf.NewServiceClient(conn)
client.Update(ctx)
</code></pre>
<p>But istio rejects the calls with an RBAC error. It just rejects and doesnt say why.</p>
<p>Is it possible to add a role to a cronjob? How can we do that?</p>
<p>The mTLS meshpolicy is PERMISSIVE.</p>
<p>Kubernetes version is 1.17 and istio version is 1.3</p>
<pre><code>API Version: authentication.istio.io/v1alpha1
Kind: MeshPolicy
Metadata:
Creation Timestamp: 2019-12-05T16:06:08Z
Generation: 1
Resource Version: 6578
Self Link: /apis/authentication.istio.io/v1alpha1/meshpolicies/default
UID: 25f36b0f-1779-11ea-be8c-42010a84006d
Spec:
Peers:
Mtls:
Mode: PERMISSIVE
</code></pre>
<p>The cronjob yaml</p>
<pre><code>Name: cronjob
Namespace: serve
Labels: <none>
Annotations: <none>
Schedule: */30 * * * *
Concurrency Policy: Allow
Suspend: False
Successful Job History Limit: 1
Failed Job History Limit: 3
Pod Template:
Labels: <none>
Containers:
service:
Image: service:latest
Port: <none>
Host Port: <none>
Environment:
JOB_NAME: (v1:metadata.name)
Mounts: <none>
Volumes: <none>
Last Schedule Time: Tue, 17 Dec 2019 09:00:00 +0100
Active Jobs: <none>
Events:
</code></pre>
<p><em>edit</em>
I have turned off RBAC for my namespace in ClusterRBACConfig and now it works. So my conclusion is that cronjobs are affected by roles, and it should be possible to add a role and call other services.</p>
| Serve Laurijssen | <p>The <code>cronjob</code> needs proper permissions in order to run if RBAC is enabled.</p>
<p>One of the solutions in this case would be to add a <code>ServiceAccount</code> to the <code>cronjob</code> configuration file that has enough privileges to execute what it needs to.</p>
<p>Since You already have existing services in the namespace You can check if You have existing <code>ServiceAccount</code> for specific <code>NameSpace</code> by using:</p>
<pre><code>$ kubectl get serviceaccounts -n serve
</code></pre>
<p>If there is existing <code>ServiceAccount</code> You can add it into Your cronjob manifest yaml file.</p>
<p>Like in this example:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: adwords-api-scale-up-cron-job
spec:
schedule: "*/2 * * * *"
jobTemplate:
spec:
activeDeadlineSeconds: 100
template:
spec:
serviceAccountName: scheduled-autoscaler-service-account
containers:
- name: adwords-api-scale-up-container
image: bitnami/kubectl:1.15-debian-9
command:
- bash
args:
- "-xc"
- |
kubectl scale --replicas=2 --v=7 deployment/adwords-api-deployment
volumeMounts:
- name: kubectl-config
mountPath: /.kube/
readOnly: true
volumes:
- name: kubectl-config
hostPath:
path: $HOME/.kube # Replace $HOME with an evident path location
restartPolicy: OnFailure
</code></pre>
<p>Then under Pod Template the Service Account should be visible:</p>
<pre><code>$ kubectl describe cronjob adwords-api-scale-up-cron-job
Name: adwords-api-scale-up-cron-job
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"batch/v1beta1","kind":"CronJob","metadata":{"annotations":{},"name":"adwords-api-scale-up-cron-job","namespace":"default"},...
Schedule: */2 * * * *
Concurrency Policy: Allow
Suspend: False
Successful Job History Limit: 3
Failed Job History Limit: 1
Starting Deadline Seconds: <unset>
Selector: <unset>
Parallelism: <unset>
Completions: <unset>
Active Deadline Seconds: 100s
Pod Template:
Labels: <none>
Service Account: scheduled-autoscaler-service-account
Containers:
adwords-api-scale-up-container:
Image: bitnami/kubectl:1.15-debian-9
Port: <none>
Host Port: <none>
Command:
bash
Args:
-xc
kubectl scale --replicas=2 --v=7 deployment/adwords-api-deployment
Environment: <none>
Mounts:
/.kube/ from kubectl-config (ro)
Volumes:
kubectl-config:
Type: HostPath (bare host directory volume)
Path: $HOME/.kube
HostPathType:
Last Schedule Time: <unset>
Active Jobs: <none>
Events: <none>
</code></pre>
<p>In case of a custom RBAC configuration I suggest referring to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">kubernetes</a> documentation.</p>
<p>Hope this helps.</p>
| Piotr Malec |
<p>I'm trying to create a RabbitMQ instance using RabbitMQ cluster Kubernetes operator, but there is an issue with PersistentVolumeClaims. I'm running Kubernetes 1.18.8 using Docker Desktop for Windows.</p>
<p>I have installed the operator like this:</p>
<pre><code>kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
</code></pre>
<p>I have created this very simple configuration for the instance according to the documentation:</p>
<pre><code>apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: nccrabbitmqcluster
</code></pre>
<p>It seems to create all of the objects it is supposed to create, but the pod gets stuck on pending state:</p>
<pre><code>$ kubectl get all | grep rabbit
pod/nccrabbitmqcluster-server-0 0/1 Pending 0 14m
service/nccrabbitmqcluster ClusterIP 10.100.186.115 <none> 5672/TCP,15672/TCP 14m
service/nccrabbitmqcluster-nodes ClusterIP None <none> 4369/TCP,25672/TCP 14m
statefulset.apps/nccrabbitmqcluster-server 0/1 14m
</code></pre>
<p>There seems to be an unbound PVC according to the pod's events:</p>
<pre><code>$ kubectl describe pod/nccrabbitmqcluster-server-0 | tail -n 5
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler running "VolumeBinding" filter plugin for pod "nccrabbitmqcluster-server-0": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler running "VolumeBinding" filter plugin for pod "nccrabbitmqcluster-server-0": pod has unbound immediate PersistentVolumeClaims
</code></pre>
<p>According to the events of the PVC, it is waiting for a volume to be created:</p>
<pre><code>$ kubectl describe pvc persistence-nccrabbitmqcluster-server-0
Name: persistence-nccrabbitmqcluster-server-0
Namespace: default
StorageClass: hostpath
Status: Pending
Volume:
Labels: app.kubernetes.io/component=rabbitmq
app.kubernetes.io/name=nccrabbitmqcluster
app.kubernetes.io/part-of=rabbitmq
Annotations: volume.beta.kubernetes.io/storage-provisioner: docker.io/hostpath
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: nccrabbitmqcluster-server-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 27s (x23 over 19m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator
</code></pre>
<p>My understanding is that docker.io/hostpath is the correct provisioner:</p>
<pre><code>$ kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
hostpath (default) docker.io/hostpath Delete Immediate false 20d
</code></pre>
<p>I can't see any PVs related to PCS:</p>
<pre><code>$ kubectl get pv | grep rabbit
</code></pre>
<p>Why isn't the volume created automatically and what should I do?</p>
| user1937459 | <p>Yes, your local hostpath StorageClass can not work as a dynamic volume provisioner. This operator needs a storageClassName which can dynamically create PVs.
In your case, your operator keeps waiting for a PV to get created. Instead, you can manually create a PV and PVC if you are running on a local machine.
Check this example - <a href="https://github.com/rabbitmq/cluster-operator/blob/main/docs/examples/multiple-disks/rabbitmq.yaml" rel="nofollow noreferrer">https://github.com/rabbitmq/cluster-operator/blob/main/docs/examples/multiple-disks/rabbitmq.yaml</a></p>
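<p>If you want to stay on the local cluster, here is a rough sketch of a manually created PersistentVolume that the pending claim could bind to (the name and host path are just placeholders; the capacity must cover what the claim requests and <code>storageClassName</code> must match the claim's class, which is <code>hostpath</code> in your output):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-data-pv        # placeholder name
spec:
  capacity:
    storage: 10Gi               # must be >= what the PVC asks for
  accessModes:
    - ReadWriteOnce
  storageClassName: hostpath    # matches the claim's StorageClass
  hostPath:
    path: /data/rabbitmq        # placeholder path on the node
</code></pre>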
<p>If you are going to try any cloud provider like AWS then it's pretty easy. Deploy the EBS CSI driver in your cluster, which will create a storage class for you, and that storage class will provision dynamic volumes.</p>
| Rajendra Gosavi |
<p>I am trying to follow the installation of helm chart for django-defectDojo on my CentOS machine given here <a href="https://github.com/DefectDojo/django-DefectDojo/blob/master/KUBERNETES.md" rel="nofollow noreferrer">https://github.com/DefectDojo/django-DefectDojo/blob/master/KUBERNETES.md</a></p>
<p>But on running the helm install command I am running into this issue -</p>
<blockquote>
<p>Error: validation failed: [unable to recognize "": no matches for kind
"Deployment" in version "extensions/v1beta1", unable to recognize "":
no matches for kind "StatefulSet" in version "apps/v1beta2"]</p>
</blockquote>
<p>On further inspection, I believe this has to do with the postgresql chart but I am unable to resolve the issue. </p>
<p>My kubectl version is</p>
<blockquote>
<p>kubectl version </p>
<p>Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1",
GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6",
GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z",
GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1",
GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6",
GitTreeState:"clean", BuildDate:"2020-01-14T20:56:50Z",
GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}</p>
</blockquote>
<p>Any help on this will be appreciated. </p>
| Pranav Bhatia | <p>Apparently there is a problem with this chart and it won't work with newer versions of <strong>Kubernetes</strong> (1.16 and higher) without additional modification. I found <a href="https://github.com/DefectDojo/django-DefectDojo/issues/1631" rel="nofollow noreferrer">this</a> issue on <strong>django-DefectDojo</strong> github page. <a href="https://github.com/DefectDojo/django-DefectDojo/issues/1631#issuecomment-549029627" rel="nofollow noreferrer">Here</a> same problem as yours is reported.</p>
<p>The problem is related with some major changes in <strong>Kubernetes APIs</strong> in <code>version 1.16</code>.</p>
<p>In <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#changelog-since-v1150" rel="nofollow noreferrer">Changelog since v1.15.0</a> you can read the following:</p>
<blockquote>
<p>The following APIs are no longer served by default: (#70672, @liggitt)
* <strong>All resources under <code>apps/v1beta1</code> and <code>apps/v1beta2</code> - use <code>apps/v1</code> instead</strong> * <code>daemonsets</code>, <code>deployments</code>, <code>replicasets</code> resources under
<code>extensions/v1beta1</code> - use <code>apps/v1</code> instead * <code>networkpolicies</code> resources
under <code>extensions/v1beta1</code> - use <code>networking.k8s.io/v1</code> instead *
<code>podsecuritypolicies</code> resources under <code>extensions/v1beta1</code> - use
<code>policy/v1beta1</code> instead</p>
</blockquote>
<p>And further there is even temporary solution provided:</p>
<blockquote>
<ul>
<li>Serving these resources can be temporarily re-enabled using the
<code>--runtime-config</code> apiserver flag. </li>
<li><code>apps/v1beta1=true</code></li>
<li><code>apps/v1beta2=true</code> </li>
<li><p><code>extensions/v1beta1/daemonsets=true,extensions/v1beta1/deployments=true,extensions/v1beta1/replicasets=true,extensions/v1beta1/networkpolicies=true,extensions/v1beta1/podsecuritypolicies=true</code></p></li>
<li><p><strong>The ability to serve these resources will be completely removed in v1.18.</strong></p></li>
</ul>
</blockquote>
<p>As your Kubernetes version is <strong>1.17</strong>, you can still use this workaround.</p>
<p>Alternatively you can use older <strong>Kubernetes</strong> version as suggested <a href="https://github.com/DefectDojo/django-DefectDojo/issues/1631#issuecomment-549966194" rel="nofollow noreferrer">here</a> or modify appropriate <code>yaml</code> manifests from <a href="https://github.com/DefectDojo/django-DefectDojo" rel="nofollow noreferrer">django-DefectDojo</a> project manually by yourself so they match current <strong>Kubernetes</strong> <strong>APIs</strong> structure.</p>
| mario |
<p>I would like to monitor the IO which my pod is doing. Using commands like 'kubectl top pods/nodes', i can monitor CPU & Memory. But I am not sure how to monitor IO which my pod is doing, especially disk IO.</p>
<p>Any suggestions ?</p>
| SunilS | <p>Since you already used the <code>kubectl top</code> command I assume you have the metrics server installed. In order to have a more advanced monitoring solution I would suggest using <code>cAdvisor</code>, <code>Prometheus</code> or <code>Elasticsearch</code>. </p>
<p>For getting started with Prometheus you can check <a href="https://www.replex.io/blog/kubernetes-in-production-the-ultimate-guide-to-monitoring-resource-metrics" rel="nofollow noreferrer">this article</a>. </p>
<p>Elasticsearch has <a href="https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-metricset-system-diskio.html" rel="nofollow noreferrer">System diskio</a> and <a href="https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-metricset-docker-diskio.html" rel="nofollow noreferrer">Docker diskio</a> metric sets. You can easily deploy it using a <a href="https://github.com/elastic/helm-charts/tree/master/elasticsearch" rel="nofollow noreferrer">helm chart</a>. </p>
<p><a href="https://blog.freshtracks.io/a-deep-dive-into-kubernetes-metrics-part-3-container-resource-metrics-361c5ee46e66" rel="nofollow noreferrer">Part 3</a> of the series about kubernetes monitoring is especially focused on monitoring container metrics using cAdvisor, although it is worth checking the whole series. </p>
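<p>As an illustration, once Prometheus scrapes the kubelet/cAdvisor endpoint you can query per-container disk IO rates with the cAdvisor counters; a rough sketch (the namespace and pod label values are placeholders, and the label names can differ slightly between versions, e.g. <code>pod</code> vs <code>pod_name</code>):</p>
<pre><code>sum(rate(container_fs_reads_bytes_total{namespace="default", pod="my-pod"}[5m]))
sum(rate(container_fs_writes_bytes_total{namespace="default", pod="my-pod"}[5m]))
</code></pre>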
<p>Let me know if this helps. </p>
| acid_fuji |
<p>I have application with a DockerFile. This DockerFile needs to run a shell script that have curl commands and kubectl commands.</p>
<p>I have designed the dockerFile as</p>
<pre><code>FROM ubuntu:16.04
WORKDIR /build
RUN apt-get update && apt-get install -y curl && apt-get install -y jq
COPY ./ ./
RUN chmod +x script.sh
ENTRYPOINT ["./script.sh"]
</code></pre>
<p>The <code>script.sh</code> file is what contains curl commands and kubectl command.</p>
<p>If you see I have installed curl command inside the docker container using command <code>RUN apt-get update && apt-get install -y curl </code></p>
<p>What do I need to do in order to run kubectl commands? Because when I build and then run the above image, it throws an error saying <code>kubectl: command not found</code>.</p>
<p>Can anyone help me with this ?</p>
| Arjun Karnwal | <p>Instead of installing it using apt-get, you can download the binary, place it wherever you want and use it.</p>
<p>This will give you more control over it and fewer chances of having problems in the future.</p>
<p>Steps on how to download it from the official repository can be found in the <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux" rel="nofollow noreferrer">documentation</a>.</p>
<h3>Install kubectl binary with curl on Linux<a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-with-curl-on-linux" rel="nofollow noreferrer"></a></h3>
<ol>
<li><p>Download the latest release with the command:</p>
<pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
</code></pre>
<p>To download a specific version, replace the <code>$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)</code> portion of the command with the specific version.</p>
<p>For example, to download version v1.18.0 on Linux, type:</p>
<pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubectl
</code></pre>
</li>
<li><p>Make the kubectl binary executable.</p>
<pre><code>chmod +x ./kubectl
</code></pre>
</li>
<li><p>Move the binary in to your PATH.</p>
<pre><code>sudo mv ./kubectl /usr/local/bin/kubectl
</code></pre>
</li>
<li><p>Test to ensure the version you installed is up-to-date:</p>
<pre><code>kubectl version --client
</code></pre>
</li>
</ol>
<p>Considering this, you can have a Dockerfile similar to this:</p>
<pre><code>FROM debian:buster
RUN apt update && \
apt install -y curl && \
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl && \
chmod +x ./kubectl && \
mv ./kubectl /usr/local/bin/kubectl
CMD kubectl get po
</code></pre>
<p>After this we can create a pod using the following manifest:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: internal-kubectl
spec:
containers:
- name: internal-kubectl
image: myrep/internal-kubectl:latest
command: ['sh', '-c', "kubectl get pod; sleep 36000"]
</code></pre>
<p>Running this pod is going to give you an error and this will happen because you don't have the necessary <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> rules created.</p>
<p>The way to tell Kubernetes that we want this pod to have an identity that can list the pods is through the combination of a few different resources…</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: internal-kubectl
</code></pre>
<p>The identity object that we want to assign to our pod will be a service account. But by itself it has no permissions. That’s where roles come in.</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: modify-pods
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
- delete
</code></pre>
<p>The role above specifies that we want to be able to get, list, and delete pods. But we need a way to correlate our new service account with our new role. Role bindings are the bridges for that…</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: modify-pods-to-sa
subjects:
- kind: ServiceAccount
name: internal-kubectl
roleRef:
kind: Role
name: modify-pods
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>This role binding connects our service account to the role that has the permissions we need. Now we just have to modify our pod config to include the service account…</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: internal-kubectl
spec:
serviceAccountName: internal-kubectl
containers:
- name: internal-kubectl
image: myrep/internal-kubectl:latest
command: ['sh', '-c', "kubectl get pod; sleep 36000"]
</code></pre>
<p>By specifying spec.serviceAccountName this changes us from using the default service account to our new one that has the correct permissions. Running our new pod we should see the correct output…</p>
<pre><code>$ kubectl logs internal-kubectl
NAME READY STATUS RESTARTS AGE
internal-kubectl 1/1 Running 1 5s
</code></pre>
| Mark Watney |
<p>Hey I'm installing fresh minikube and try to init helm on it no in 3.x.x but 2.13.0 version.</p>
<pre><code>$ minikube start
😄 minikube v1.6.2 on Darwin 10.14.6
✨ Automatically selected the 'hyperkit' driver (alternates: [virtualbox])
🔥 Creating hyperkit VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
🚜 Pulling images ...
🚀 Launching Kubernetes ...
⌛ Waiting for cluster to come online ...
🏄 Done! kubectl is now configured to use "minikube"
$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
$ helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
deployment.apps/tiller-deploy created
service/tiller-deploy created
$ helm init --service-account tiller
59 ### ALIASES
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find tiller
</code></pre>
<hr>
<p>I try to do same on some random other ns, and with no result:</p>
<pre><code>$ kubectl create ns deployment-stuff
namespace/deployment-stuff created
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin \
--user=$(gcloud config get-value account)
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
$ kubectl create serviceaccount tiller --namespace deployment-stuff
kubectl create clusterrolebinding tiller-admin-binding --clusterrole=cluster-admin \
--serviceaccount=deployment-stuff:tiller
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller-admin-binding created
$ helm init --service-account=tiller --tiller-namespace=deployment-stuff
Creating /Users/<user>/.helm
Creating /Users/<user>/.helm/repository
Creating /Users/<user>/.helm/repository/cache
Creating /Users/<user>/.helm/repository/local
Creating /Users/<user>/.helm/plugins
Creating /Users/<user>/.helm/starters
Creating /Users/<user>/.helm/cache/archive
Creating /Users/<user>/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm list
Error: could not find tiller
$ helm list --tiller-namespace=kube-system
Error: could not find tiller
$ helm list --tiller-namespace=deployment-stuff
Error: could not find tiller
</code></pre>
<p>Same error everywhere <strong>Error: error installing: the server could not find the requested resource</strong> any ideas how to approach it ? </p>
<p>I installed helm with those commands and works fine with my gcp clusters, helm list returns full list of helms.</p>
<pre><code>wget -c https://get.helm.sh/helm-v2.13.0-darwin-amd64.tar.gz
tar -zxvf helm-v2.13.0-darwin-amd64.tar.gz
mv darwin-amd64/helm /usr/local/bin/helm
</code></pre>
<p>tbh I have no idea what's going on, sometimes it works fine on minikube sometimes I get these errors.</p>
| CptDolphin | <p>This can be fixed by deleting the tiller <code>deployment</code> and <code>service</code> and rerunning the <code>helm init --override</code> command after first <code>helm init</code>.</p>
<p>So after running commands You listed:</p>
<pre><code>$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ helm init --service-account tiller
</code></pre>
<p>And then finding out that tiller could not be found.</p>
<pre><code>$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find tiller
</code></pre>
<p>Run the following commands:</p>
<p>1.</p>
<pre><code>$ kubectl delete service tiller-deploy -n kube-system
</code></pre>
<p>2.</p>
<pre><code>$ kubectl delete deployment tiller-deploy -n kube-system
</code></pre>
<p>3.</p>
<pre><code>helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
</code></pre>
<hr>
<p>After that You can verify if it worked with:</p>
<pre><code>$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find a ready tiller pod
</code></pre>
<p>This one needs a little more time, so give it a few seconds.</p>
<pre><code>$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
</code></pre>
<p>Tell me if it worked.</p>
| Piotr Malec |
<p>I've set up a <strong>minikube</strong> cluster on an Azure's Ubuntu instance that exposes an external service on the IP: <code>192.168.49.2:30000</code></p>
<p><code>curl 192.168.49.2:30000</code> returns the desired output.
How can I access this service using a browser on my local machine using the azure host public IP address ?</p>
| user80 | <h3>Explanation:</h3>
<hr />
<h3>EDIT:</h3>
<p>OK, I see that <strong>docker</strong> driver by default uses <code>192.168.49.2</code> IP address:</p>
<pre><code>azureuser@minikube:~$ minikube service web --url
http://192.168.49.2:31527
</code></pre>
<p>so most probably you've used this one. Although the further explanation is based on <strong>virtualbox</strong> example it can be fully applied also in this case, with the only difference that <strong>minikube</strong> runs as an isolated docker container and not as a VM. But if you run <code>docker ps</code> on your <strong>Azure VM instance</strong> you'll see only one container, named <code>minikube</code>:</p>
<pre><code>azureuser@minikube:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
af68ab219c6c gcr.io/k8s-minikube/kicbase:v0.0.14 "/usr/local/bin/entr…" 53 minutes ago Up 53 minutes 127.0.0.1:32771->22/tcp, 127.0.0.1:32770->2376/tcp, 127.0.0.1:32769->5000/tcp, 127.0.0.1:32768->8443/tcp minikube
</code></pre>
<p>Only after running:</p>
<pre><code>azureuser@minikube:~$ minikube ssh
</code></pre>
<p>and then:</p>
<pre><code>docker@minikube:~$ docker ps
</code></pre>
<p>you'll see both the container running by your <strong>app <code>Pod</code></strong> as well as <strong>kubernetes control plane components</strong>. The list is quite long so I won't put it here.</p>
<p>But it shows that your <strong>kubernetes</strong> cluster is in practice isolated in a very similar way from your <strong>Azure VM</strong> host as in case when it runs in a <strong>nested VM</strong>. The key point here is that minikube cluster's network is isolated from host VM network and <code>NodePort</code> Service is accessible only on the external IP of a VM or <code>minikube</code> docker container, not exposing any port on any of host VM network interfaces.</p>
<hr />
<p>I've reproduced your case, assuming that you started your minikube with <a href="http://%20https://minikube.sigs.k8s.io/docs/drivers/virtualbox/" rel="noreferrer">virtualbox driver</a>, so please correct me if my assumption was wrong. You could've started it this way:</p>
<pre><code>minikube start --driver=virtualbox
</code></pre>
<p>or simply:</p>
<pre><code>minikube start
</code></pre>
<p>If you've installed <strong>VirtualBox</strong> on your <strong>minikube</strong> vm, it will be also automatically detected and <code>virtualbox</code> driver will be used.</p>
<p>OK, so let's pause for a moment at this point and take a closer look what such approach implies.</p>
<p>Your <strong>minikube</strong> runs inside your <strong>Azure VM</strong> but it doesn't run directly on it. Nested virtualization is used and your minikube kubernetes cluster actually runs on a vm inside another vm.</p>
<p>So if you've exposed your <code>Deployment</code> using <code>NodePort</code> service, it makes it available on the external IP address of <strong>minikube vm</strong> (which is nested vm). But this IP is external only from the perspective of this vm.</p>
<p>Let's ssh into our <strong>Azure VM</strong> and run:</p>
<pre><code>azureuser@minikube:~$ minikube service web --url
http://192.168.99.100:31955
</code></pre>
<p>Your service name and address will be different but I'm showing it just to illustrate the idea.</p>
<p>Then run:</p>
<pre><code>azureuser@minikube:~$ ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
inet 10.0.0.4/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
3: vboxnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 192.168.99.1/24 brd 192.168.99.255 scope global vboxnet0
valid_lft forever preferred_lft forever
</code></pre>
<p>As you can see <code>192.168.99.100</code> is reachable via <code>vboxnet0</code> network interface as it's part of <code>192.168.99.0/24</code> network, used by virtualbox vm.</p>
<p>Therefore there are no means for it to be reachable without using additional port forwarding, not only on the external IP of the <strong>Azure VM</strong> but on any of its internal network interfaces. <code>NodePort</code> service exposes the <code>Deployment</code> only on the external IP of the nested <strong>VirtualBox VM</strong> (or other hypervisor), which is reachable from <strong>Azure VM</strong> host but it's not exposed on any of it's network interfaces so it cannot be reached from outside either.</p>
<h3>Solution:</h3>
<p>Did I just say port forwarding ? Fortunately <code>kubectl</code> makes it fairly easy task. If you haven't installed it so far on your <strong>Azure VM</strong>, please do it as <code>minikube kubectl</code> may not work. You can use for it the following command:</p>
<pre><code>kubectl port-forward --address 0.0.0.0 service/<service-name> 8080:8080
</code></pre>
<p>which will forward the traffic destined to any of <strong>Azure VM</strong> addresses (also the public one) to your <code>Service</code>, which in this case doesn't even have to be of <code>NodePort</code> type, <code>ClusterIP</code> will also work.</p>
<p>Caveat: To be able to bind to a well known port such as <code>80</code> you need to run this command as <code>root</code>. If you use registered ports, starting from <code>1024</code> such as <code>8080</code> you don't need root access. So after running:</p>
<pre><code>root@minikube:~# kubectl port-forward --address 0.0.0.0 service/web 80:8080
</code></pre>
<p>You should be able to access your app without any problems by using the external IP address of your <strong>Azure VM</strong>.</p>
| mario |
<p>The question is for pods <strong>DNS resolution</strong> in kubernetes. A statement from official doc here (choose v1.18 from top right dropdown list):
<a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods</a></p>
<blockquote>
<p>Pods. </p>
<p>A/AAAA records </p>
<p>Any pods created by a Deployment or DaemonSet have the following DNS resolution available: </p>
<p><code>pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example.</code></p>
</blockquote>
<p>Here is my kubernetes environments:</p>
<pre><code>master $ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>After I create a simple deployment using <code>kubectl create deploy nginx --image=nginx</code>, then I create a busybox pod in <code>test</code> namespace to do nslookup like this:</p>
<pre><code>kubectl create ns test
cat <<EOF | kubectl apply -n test -f -
apiVersion: v1
kind: Pod
metadata:
name: busybox1
labels:
name: busybox
spec:
containers:
- image: busybox:1.28
command:
- sleep
- "3600"
name: busybox
EOF
</code></pre>
<p>Then I do <code>nslookup</code> like this, according to the official doc format <code>pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example</code>:</p>
<pre><code>master $ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-f89759699-h8cj9 1/1 Running 0 12m 10.244.1.4 node01 <none> <none>
master $ kubectl get deploy -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx 1/1 1 1 17m nginx nginx app=nginx
master $ kubectl exec -it busybox1 -n test -- nslookup 10.244.1.4.nginx.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve '10.244.1.4.nginx.default.svc.cluster.local'
command terminated with exit code 1
master $ kubectl exec -it busybox1 -n test -- nslookup 10-244-1-4.nginx.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve '10-244-1-4.nginx.default.svc.cluster.local'
command terminated with exit code 1
</code></pre>
<p><strong>Question 1:</strong><br>
Why did nslookup for this name fail? Did I do something wrong?</p>
<hr>
<p>When I continue to explore the dns name for pods, I did this:</p>
<pre><code>master $ kubectl exec -it busybox1 -n test -- nslookup 10-244-1-4.default.pod.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: 10-244-1-4.default.pod.cluster.local
Address 1: 10.244.1.4
master $ kubectl exec -it busybox1 -n test -- nslookup 10-244-1-4.test.pod.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: 10-244-1-4.test.pod.cluster.local
Address 1: 10.244.1.4
</code></pre>
<p><strong>Question 2:</strong><br>
Why did <code>nslookup 10-244-1-4.test.pod.cluster.local</code> succeed even though the pod with IP 10.244.1.4 is in the default namespace?</p>
| Andrew Feng | <p>Regarding your first question, as far as I could check your assumptions are right and it seems the documentation isn't accurate. The A/AAAA reference for pods is something new in the documentation (1.18). I highly encourage you to open an issue <a href="https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Fdocumentation" rel="nofollow noreferrer">here</a> so the developers can take a closer look into it. </p>
<p>I recommend referring to the <a href="https://v1-17.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods" rel="nofollow noreferrer">1.17 documentation</a> in that regard, as it reflects the actual behavior.</p>
<p>In 1.17 we can see this note:</p>
<blockquote>
<p><strong>Note:</strong> Because A or AAAA records are not created for Pod names, <code>hostname</code> is required for the Pod’s A or AAAA record to be created. A Pod with no <code>hostname</code> but with <code>subdomain</code> will only create the A or AAAA record for the headless service (<code>default-subdomain.my-namespace.svc.cluster-domain.example</code>), pointing to the Pod’s IP address. Also, Pod needs to become ready in order to have a record unless <code>publishNotReadyAddresses=True</code> is set on the Service.</p>
</blockquote>
<p>As far as I could check, this is still true on 1.18 despite what the documentation says. </p>
<p>Question two goes in the same direction and you can also open an issue about it, but I personally don't see any practical reason for using IP-based DNS names. These names are there for Kubernetes' internal use and relying on them doesn't give you any advantage. </p>
<p>The best scenario is to use <a href="https://v1-17.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="nofollow noreferrer">service based dns names</a> on Kubernetes, which have proven to be very reliable. </p>
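<p>For example, with the <code>nginx</code> Deployment from your question exposed via a regular <code>Service</code> (a sketch, assuming a Service named <code>nginx</code> exists in the <code>default</code> namespace, e.g. created with <code>kubectl expose deploy nginx --port=80</code>), the lookup from the busybox pod would simply be:</p>
<pre><code>kubectl exec -it busybox1 -n test -- nslookup nginx.default.svc.cluster.local
</code></pre>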
| Mark Watney |
<p>I have seen example for EnvoyFilter in ISTIO where <code>grpc_service</code> is supported as filterconfig for external service call out. </p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: ext-authz
namespace: istio-system
spec:
filters:
- insertPosition:
index: FIRST
listenerMatch:
listenerType: SIDECAR_INBOUND
listenerProtocol: HTTP
filterType: HTTP
filterName: "envoy.ext_authz"
filterConfig:
grpc_service:
google_grpc:
target_uri: 127.0.0.1:9191
stat_prefix: "ext_authz"
</code></pre>
<p>But I want to use my external service as the filterconfig using <strong>http_service</strong> instead of <strong>grpc_service</strong>, but every time I get a <code>404 not found</code> error.</p>
<p>Is <code>http_service</code> supported as <code>filterConfig</code> in Istio's <code>envoyFilter</code>?</p>
<p>Version info: GKE is 1.14 and Istio is 1.1.17</p>
| esha ingle | <p><strong>Update: modified entire answer</strong>.</p>
<p>After further verification it appears that although Istio had the <code>http_service</code> authorization service in the past, it was not fully functional.</p>
<p>There were attempts to implement external HTTP service authorization for older versions of Istio, however it did not work, and the only workarounds were to use the http lua filter or the Nginx Ingress Controller as an Ingress Gateway that delegates the authentication part.</p>
<p>All of above cases can be found in <a href="https://github.com/istio/istio/issues/13819" rel="nofollow noreferrer">this</a> github issue. The HTTP call was successful but the headers were not being passed.</p>
<p><a href="https://github.com/istio/istio/issues/9097" rel="nofollow noreferrer">Here</a> is another attempt in running <code>http_service</code> as authorization service.</p>
<hr>
<p>As You noticed, the <a href="https://www.envoyproxy.io/docs/envoy/v1.11.0/configuration/http_filters/ext_authz_filter.html?highlight=http_service#external-authorization" rel="nofollow noreferrer">Envoy</a> documentation for Envoy <code>1.11.0</code> <code>http_service</code> <code>filterConfig</code> has a different syntax. Therefore I suggest trying the filter configuration from the <a href="https://github.com/istio/istio/issues/13819" rel="nofollow noreferrer">github</a> issue. If it doesn't work, try the http lua filter as a workaround.</p>
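<p>For reference, a rough sketch of what the <code>http_service</code> block looks like in the Envoy v2 <code>ext_authz</code> API (this only illustrates the syntax and is not a verified working Istio config; the <code>uri</code> and <code>cluster</code> values are placeholders, and as the issues above show, headers may still not be forwarded correctly on these versions):</p>
<pre><code>filterConfig:
  http_service:
    server_uri:
      uri: http://127.0.0.1:9191
      cluster: ext_authz
      timeout: 0.5s
</code></pre>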
<p>The HTTP service as an external authorization service is not mentioned in the Istio documentation, so I think it's safe to say it's not fully supported.</p>
| Piotr Malec |
<p>I am wondering why, when I restart my Ubuntu machine on which I have set up a Kubernetes master with flannel, the master goes down. Before the reboot it's working fine, but after the reboot the master node is not in the Ready state.</p>
<p>I tried to get the node details using describe.</p>
<pre><code>KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
</code></pre>
<p>This error is printed in the logs. I searched for it and found some solutions, like reinitializing flannel.yml, but they didn't work.</p>
<p><a href="https://github.com/kubernetes/kubeadm/issues/1031" rel="nofollow noreferrer">https://github.com/kubernetes/kubeadm/issues/1031</a> As per the solution provided here, I reinstalled Docker on the machine and that works:
everything works fine after reinstalling Docker. Can anyone explain why this happened? Do I really need to reinstall Docker every time I restart the machine, or is there some other setting or configuration that I'm missing?
Thank you</p>
| Vinit Patel | <p>Based on the provided information there are a couple of steps and points to be
taken into consideration when you encounter this kind of issue:</p>
<p>First check is to verify if file <code>10-flannel.conflist</code> is not missing from <code>/etc/cni/net.d/</code>. You should have a file with this kind of information there:</p>
<pre><code>{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
</code></pre>
<p>If your file is there, please check specifically whether it has the <code>cniVersion</code> field.
Some flannel deployments were missing the <code>cniVersion</code> field.</p>
<p>The second troubleshooting step is to check the <code>kubelet</code> logs. Kubelet could report problems with not finding the cni config.
You may find the logs at: <code>/var/log/kubelet.log</code></p>
<p>It is also very useful to check the output of <code>journalctl -fu kubelet</code> and see if anything wrong is happening there.
In some cases restarting <code>kubelet</code> might be helpful; you can do that using <code>systemctl restart kubelet</code> </p>
<p>If you suspect that Docker is causing the problem, you can check the Docker logs in a similar way to how you checked the kubelet logs,
using <code>journalctl -u docker</code>. If Docker is causing some issues, try to restart the Docker service before reinstalling it,
using <code>sudo systemctl restart docker.service</code> </p>
<p>Finally, it is really worth following the <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">official documentation</a> exactly when creating kubeadm clusters, especially the pod <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="nofollow noreferrer">network section</a>.
Please note that it is important to hold all the binaries to prevent unwanted updates.
Kubernetes also has a very good <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/" rel="nofollow noreferrer">troubleshooting document</a> regarding <code>kubeadm</code>. </p>
<p>Hope this helps. </p>
| acid_fuji |
<p>We are running <strong>6</strong> nodes in a K8s cluster. Out of the 6, 2 of them are running RabbitMQ, Redis & Prometheus; we have used a node selector & cordoned those nodes so no other pods are scheduled on them.</p>
<p>On the remaining 4 nodes the application pods run; we have around 18-19 microservices.
For GKE there is one open issue in the K8s official repo regarding automatic scale down: <a href="https://github.com/kubernetes/kubernetes/issues/69696#issuecomment-651741837" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/69696#issuecomment-651741837</a>, however people are suggesting the approach of setting a <strong>PDB</strong> and we tested that on Dev/Stag.</p>
<p>What we are looking for now is to pin pods to a particular <strong>node pool</strong> which does not scale, as we are running single replicas of some services.</p>
<p>As of now, we have thought of applying affinity to those services which are running with <strong>single replicas</strong> and have no requirement for <strong>scaling</strong>.</p>
<p>For scalable services we won't specify any rule, so by default the K8s scheduler will schedule pods across different nodes; this way, if any node scales down, we don't face any downtime for a <strong>single</strong> running replica service.</p>
<p>Affinity example :</p>
<pre><code>affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: do-not-scale
          operator: In
          values:
          - 'true'
</code></pre>
<p>We are planning to use affinity type <code>preferredDuringSchedulingIgnoredDuringExecution</code> instead of <code>requiredDuringSchedulingIgnoredDuringExecution</code>.</p>
<p>Note: Here K8s is not creating a new replica on another node first during a node drain (scale down of any node), as we are running single replicas with a rolling update & minAvailable: 25% strategy.</p>
<p>Why: If PodDisruptionBudget is not specified and we have a deployment with one replica, the pod will be terminated and then a new pod will be scheduled on a new node.</p>
<p>To make sure the application will be available during the node draining process we have to specify a PodDisruptionBudget and create more replicas. If we have 1 pod with minAvailable: 30%, it will refuse to drain the node (scale down).</p>
<p>Please point out any mistake if you see something wrong & suggest a better option.</p>
| chagan | <p>First of all, defining a <code>PodDisruptionBudget</code> doesn't make much sense when having only one replica. <code>minAvailable</code> expressed as a percentage is rounded up to an integer, as it represents the minimum number of <code>Pods</code> which need to be available all the time.</p>
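<p>For reference, this is roughly the shape of such an object (a minimal sketch using the <code>policy/v1beta1</code> API of that Kubernetes generation; names are placeholders):</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example
</code></pre>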
<p>Keep in mind that you have no guarantee for any High Availability when launching only one-replica <code>Deployments</code>.</p>
<blockquote>
<p>Why: If PodDisruptionBudget is not specified and we have a deployment
with one replica, the pod will be terminated and then a new pod will
be scheduled on a new node.</p>
</blockquote>
<p>If you didn't explicitly define the value of <code>maxUnavailable</code> in your <code>Deployment</code>'s <code>spec</code>, by default it is set to 25%, which rounded up to an integer (representing a number of <code>Pods</code>/<code>replicas</code>) equals <code>1</code>. It means that 1 out of 1 replicas is allowed to be unavailable.</p>
<blockquote>
<p>If we have 1 pod with minAvailable: 30% it will refuse to drain node
(scaledown).</p>
</blockquote>
<p>Single replica with <code>minAvailable: 30%</code> is rounded up to <code>1</code> anyway. <code>1/1</code> should be still up and running so <code>Pod</code> cannot be evicted and node cannot be drained in this case.</p>
<p>You can try the following solution however I'm not 100% sure if it will work when your <code>Pod</code> is re-scheduled to another node due to it's eviction from the one it is currently running on.</p>
<p>But if you re-create your <code>Pod</code> e.g. because you update it's image to a new version, you can guarantee that at least one replica will be still up and running (old <code>Pod</code> won't be deleted unless the new one enters <code>Ready</code> state) by setting <code>maxUnavailable: 0</code>. As per the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment" rel="nofollow noreferrer">docs</a>, by default it is set to <code>25%</code> which is rounded up to <code>1</code>. So by default you allow that one of your replicas (which in your case happens to be <code>1/1</code>) becomes unavailable during the rolling update. If you set it to zero, it won't allow the old <code>Pod</code> to be deleted unless the new one becomes <code>Ready</code>. At the same time <code>maxSurge: 2</code> allows that 2 replicas temporarily exist at the same time during the update.</p>
<p>Your <code>Deployment</code> definition may begin as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
      maxUnavailable: 0 # 👈
maxSurge: 2
selector:
...
</code></pre>
<p>Compare it with <a href="https://stackoverflow.com/a/46377159/11714114">this answer</a>, provided by <a href="https://stackoverflow.com/users/225016/mdaniel">mdaniel</a>, where I originally found it.</p>
| mario |
<p>We are facing an issue while trying to set <code>activeDeadlineSeconds</code> on a Deployment. When we look at <code>kubectl explain</code>, it is a valid parameter on a deployment. Please refer to this image:</p>
<p><a href="https://i.stack.imgur.com/0rY3r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0rY3r.png" alt="enter image description here"></a></p>
<p>Now, when we try to set the same parameter on the deployment, it says this is invalid. Please refer to the image below:</p>
<p><a href="https://i.stack.imgur.com/lgBGE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lgBGE.png" alt="enter image description here"></a></p>
<p>Please let us know if we are doing something wrong here. You can use the following yaml to do quick experiments:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: test
name: test
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: test
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: test
spec:
containers:
- image: nginx
name: nginx
resources: {}
status: {}
</code></pre>
<p>We are trying this because in our case there is an init container which sometimes hangs; <code>activeDeadlineSeconds</code> includes init containers too, while <code>progressDeadlineSeconds</code> doesn't include init containers.</p>
<p>Is there an alternative to this?</p>
| Prateek Jain | <p>The idea of using <code>activeDeadlineSeconds</code> on a Deployment means that you want to run something that will not last for too long. This goes against the purpose of having a Deployment. A Deployment is meant for things you want to last. </p>
<p>As <a href="https://stackoverflow.com/users/1061413/amit-kumar-gupta" title="11,858 reputation">Amit Kumar Gupta</a> explained: </p>
<blockquote>
<p>Syntactically, <code>deployment.spec.template.spec</code> is the same as
<code>pod.spec</code> (which is why you see <code>activeDeadlineSeconds</code> in <code>kubectl
explain</code> output), but semantically not all the fields in a pod spec
are meaningful/allowed/supported in the context of a Deployment (which
is why you’re seeing the forbidden error message — under the hood,
creating a Deployment results in creating ReplicaSets).</p>
</blockquote>
<p>If we check the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/" rel="nofollow noreferrer">documentation</a> we can see that <code>activeDeadlineSeconds</code> is available for Jobs and Pods only. </p>
<p>Jobs and Pods are meant to run a task and die after it finishes. </p>
<blockquote>
<p>activeDeadlineSeconds </p>
<p>Specifies the duration in seconds relative to
the startTime that the job may be active before the system tries to
terminate it; value must be positive integer</p>
</blockquote>
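<p>As an illustration, here is a minimal <code>Job</code> sketch (names and timings are just placeholders) where the whole Job is terminated if it stays active longer than the deadline:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  activeDeadlineSeconds: 120  # terminate the Job if it is active for more than 2 minutes
  template:
    spec:
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "sleep 300"]
      restartPolicy: Never
</code></pre>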
<p>If you are considering setting <code>activeDeadlineSeconds</code>, it means that you don't want your Deployment Pods to last, and this is the wrong approach.</p>
<blockquote>
<p>Is there an alternative to this?</p>
</blockquote>
<p>It really depends on why your application needs this approach. If you really think this approach makes sense, you can open an <a href="https://github.com/kubernetes/kubernetes/issues/new/choose" rel="nofollow noreferrer">issue</a> and make a feature request for this. </p>
| Mark Watney |
<p>I've installed Kubernetes on ubuntu 18.04 using <a href="https://phoenixnap.com/kb/install-kubernetes-on-ubuntu" rel="nofollow noreferrer">this article</a>. Everything is working fine and then I tried to install Kubernetes dashboard with <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">these instructions</a>. </p>
<p>Now when I try to run <code>kubectl proxy</code>, the dashboard is not coming up and it gives the following error message in the browser when trying to access it using the default kubernetes-dashboard URL.</p>
<p><a href="http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</a></p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
"reason": "ServiceUnavailable",
"code": 503
}
</code></pre>
<p>The following commands give this output, where kubernetes-dashboard shows its status as CrashLoopBackOff:</p>
<p>$> kubectl get pods --all-namespaces</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
default amazing-app-rs-59jt9 1/1 Running 5 23d
default amazing-app-rs-k6fg5 1/1 Running 5 23d
default amazing-app-rs-qd767 1/1 Running 5 23d
default amazingapp-one-deployment-57dddd6fb7-xdxlp 1/1 Running 5 23d
default nginx-86c57db685-vwfzf 1/1 Running 4 22d
kube-system coredns-6955765f44-nqphx 0/1 Running 14 25d
kube-system coredns-6955765f44-psdv4 0/1 Running 14 25d
kube-system etcd-master-node 1/1 Running 8 25d
kube-system kube-apiserver-master-node 1/1 Running 42 25d
kube-system kube-controller-manager-master-node 1/1 Running 11 25d
kube-system kube-flannel-ds-amd64-95lvl 1/1 Running 8 25d
kube-system kube-proxy-qcpqm 1/1 Running 8 25d
kube-system kube-scheduler-master-node 1/1 Running 11 25d
kubernetes-dashboard dashboard-metrics-scraper-7b64584c5c-kvz5d 1/1 Running 0 41m
kubernetes-dashboard kubernetes-dashboard-566f567dc7-w2sbk 0/1 CrashLoopBackOff 12 41m
</code></pre>
<p>$> kubectl get services --all-namespaces</p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP ---------- <none> 443/TCP 25d
default nginx NodePort ---------- <none> 80:32188/TCP 22d
kube-system kube-dns ClusterIP ---------- <none> 53/UDP,53/TCP,9153/TCP 25d
kubernetes-dashboard dashboard-metrics-scraper ClusterIP ---------- <none> 8000/TCP 24d
kubernetes-dashboard kubernetes-dashboard ClusterIP ---------- <none> 443/TCP 24d
</code></pre>
<p>$ kubectl get events -n kubernetes-dashboard</p>
<pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal Pulling pod/kubernetes-dashboard-566f567dc7-w2sbk Pulling image "kubernetesui/dashboard:v2.0.0-rc2"
4m46s Warning BackOff pod/kubernetes-dashboard-566f567dc7-w2sbk Back-off restarting failed container
</code></pre>
<p>$ kubectl describe services kubernetes-dashboard -n kubernetes-dashboard </p>
<pre><code>Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard"...
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.96.241.62
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints:
Session Affinity: None
Events: <none>
</code></pre>
<p>$ kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard</p>
<pre><code>2020/01/29 16:00:34 Starting overwatch
2020/01/29 16:00:34 Using namespace: kubernetes-dashboard
2020/01/29 16:00:34 Using in-cluster config to connect to apiserver
2020/01/29 16:00:34 Using secret token for csrf signing
2020/01/29 16:00:34 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003dac80)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
        /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
</code></pre>
<p>Any suggestions to fix this? Thanks in advance.</p>
| Kundan | <p>I noticed that the guide You used to install kubernetes cluster is missing one important part.</p>
<p>According to kubernetes documentation:</p>
<blockquote>
<p>For <code>flannel</code> to work correctly, you must pass <code>--pod-network-cidr=10.244.0.0/16</code> to <code>kubeadm init</code>.</p>
<p>Set <code>/proc/sys/net/bridge/bridge-nf-call-iptables</code> to <code>1</code> by running <code>sysctl net.bridge.bridge-nf-call-iptables=1</code> to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work, for more information please see <a href="https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements" rel="nofollow noreferrer">here</a>.</p>
<p>Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. see <a href="https://coreos.com/flannel/docs/latest/troubleshooting.html#firewalls" rel="nofollow noreferrer">here</a> .</p>
<p>Note that <code>flannel</code> works on <code>amd64</code>, <code>arm</code>, <code>arm64</code>, <code>ppc64le</code> and <code>s390x</code> under Linux. Windows (<code>amd64</code>) is claimed as supported in v0.11.0 but the usage is undocumented.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
</code></pre>
<p>For more information about <code>flannel</code>, see <a href="https://github.com/coreos/flannel" rel="nofollow noreferrer">the CoreOS flannel repository on GitHub</a> .</p>
</blockquote>
<p>To fix this:</p>
<p>I suggest using the command:</p>
<pre><code>sysctl net.bridge.bridge-nf-call-iptables=1
</code></pre>
<p>And then reinstall flannel:</p>
<pre><code>kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
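<p>If you also want that sysctl setting to survive reboots, one common way (a sketch, assuming a systemd-based distribution such as Ubuntu 18.04) is to persist it in a sysctl drop-in file:</p>
<pre><code>echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system
</code></pre>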
<hr>
<p>Update: After verifying, the <code>/proc/sys/net/bridge/bridge-nf-call-iptables</code> value is <code>1</code> by default on <code>ubuntu-18-04-lts</code>. So the issue here is that You need to access the dashboard locally.</p>
<p>If You are connected to Your master node via ssh, it could be possible to use the <code>-X</code> flag with ssh in order to launch a web browser via <code>ForwardX11</code>. Fortunately <code>ubuntu-18-04-lts</code> has it turned on by default.</p>
<pre><code>ssh -X server
</code></pre>
<p>Then install local web browser like chromium.</p>
<pre><code>sudo apt-get install chromium-browser
</code></pre>
<pre><code>chromium-browser
</code></pre>
<p>And finally access the dashboard locally from node. </p>
<pre><code>http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
</code></pre>
<p>Hope it helps.</p>
| Piotr Malec |
<p>I'm trying to write an IF statement in my configmap but I cannot find any sites that combine an IF statement with OR. For example:</p>
<pre><code> <% if @project_name == 'site-a' || 'site-b' %>
security:
default-groups:
- reader # Read only for everybody
<% end %>
</code></pre>
<p>Would this be accurate? Ideally it should match if the variable is called site-a or site-b. I can probably do an else block but it's not necessary.</p>
<p>Thanks. </p>
| b0uncyfr0 | <p>The initial code was not using the comparison correctly.
In the first line you only evaluated the project name in the first part and assumed that the computer would know that the same operation is applied to both values. </p>
<pre><code><% if @project_name == 'site-a' || 'site-b' %>
</code></pre>
<p>The operator doesn't check if a value is a member of a set, which means you should check <code>@project_name</code> explicitly against each value so that both values are compared:</p>
<pre><code><% if @project_name == 'site-a' || @project_name == 'site-b' %>
</code></pre>
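<p>Alternatively, a slightly more compact Ruby idiom for the same check (just a suggestion, the explicit comparison above works fine too):</p>
<pre><code><% if ['site-a', 'site-b'].include?(@project_name) %>
</code></pre>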
| acid_fuji |
<p>I'm a beginner to K8s so please bear with me.</p>
<p>I've rolled out a WordPress with a MySQL using Kubernetes. The rollout has completed and it is running on my machine using <code>minikube</code>.</p>
<p>However, the thing is WordPress is not showing up in my browser.</p>
<p>These are my pods</p>
<blockquote>
<p>mysql291020-68d989895b-vxbwg 1/1 Running 0 18h</p>
</blockquote>
<blockquote>
<p>wp291020-7dccd94bd5-dfqqn 1/1 Running 0 19h</p>
</blockquote>
<p>These are my services</p>
<blockquote>
<p>mysql291020-68d989895b-vxbwg 1/1 Running 0 18h</p>
</blockquote>
<blockquote>
<p>wp291020-7dccd94bd5-dfqqn 1/1 Running 0 19h</p>
</blockquote>
<p>After some thought, I think it may be related to how I set up my service for WordPress (see code below).</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: wp291020
labels:
app: wp291020
spec:
ports:
- port: 80
selector:
app: wp291020
tier: frontend
type: LoadBalancer
</code></pre>
<p>Not sure if it is the right place to look. I'm adding below my deployment for WordPress, and also the service and deployment for MySQL, in case they are needed.</p>
<blockquote>
<p>deployment for wordpress</p>
</blockquote>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: wp291020
spec:
selector:
matchLabels:
app: wp291020
replicas: 1
template:
metadata:
labels:
app: wp291020
spec:
containers:
- name: wp-deployment
image: andykwo/test123:wp_291020
ports:
- containerPort: 80
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
volumes:
- name: wordpress-persistent-storage
emptyDir: {}
</code></pre>
<blockquote>
<p>service for mysql</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql291020
labels:
app: mysql291020
spec:
ports:
- port: 3306
selector:
app: mysql291020
tier: mysql
clusterIP: None
</code></pre>
<blockquote>
<p>deployment for mysql</p>
</blockquote>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql291020
spec:
selector:
matchLabels:
app: mysql291020
replicas: 1
template:
metadata:
labels:
app: mysql291020
spec:
containers:
- env:
- name: MYSQL_DATABASE
value: wordpress
- name: MYSQL_PASSWORD
value: my_wordpress_db_password
- name: MYSQL_ROOT_PASSWORD
value: my_wordpress_db_password
- name: MYSQL_USER
value: wordpress
name: db
image: andykwo/test123:wp_291020
ports:
- containerPort: 3306
volumeMounts:
- name: db-data
mountPath: /var/lib/mysql
volumes:
- name: db-data
emptyDir: {}
</code></pre>
<p>Just to mention that the Docker containers are also functioning correctly when running as plain containers (via Docker Compose), and there I do have access to WordPress through my browser.</p>
<p>I can provide my docker compose yaml if asked.</p>
<p>Thank you.</p>
<p>PS: I'm adding my docker compose file, in case</p>
<pre><code>version: '3.3'
services:
wordpress:
depends_on:
- db
image: wordpress:latest
volumes:
- wordpress_files:/var/www/html
ports:
- "80:80"
restart: always
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: my_wordpress_db_password
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: my_db_root_password
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: my_wordpress_db_password
volumes:
wordpress_files:
db_data:
</code></pre>
| Andy K | <p>If you're using <strong>Minikube</strong> you can easily expose your frontend <code>Deployment</code> via a <code>Service</code> of type <code>NodePort</code> or <code>LoadBalancer</code>. As <strong>Minikube</strong> is unable to create a real load balancer, it uses <code>NodePort</code> under the hood for it anyway.</p>
<p>So if you exposed your <code>wp291020</code> <code>Deployment</code> via <code>wp291020</code> <code>Service</code> you can get its URL by typing:</p>
<pre><code>minikube service wp291020 --url
</code></pre>
<p>and it will show you something like in the example below:</p>
<pre><code>http://172.17.0.15:31637
</code></pre>
<p>I'm wondering if you have any good reason for <code>clusterIP: None</code> in your <code>Service</code> definition. The <code>mysql291020</code> <code>Deployment</code> is exposed within your kubernetes cluster to your frontend <code>Pods</code> via a <code>Service</code> whose type is <code>ClusterIP</code> (if you don't specify the <code>type</code> explicitly, by default <code>ClusterIP</code> is created), so it should have its Cluster IP to be accessible by the frontend <code>Pods</code>. I think you can simply get rid of the <code>clusterIP: None</code> line in your <code>mysql291020</code> <code>Service</code> definition. What you have in your example is called a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless service with selector</a> but I guess there is no real need for it in your case.</p>
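<p>In other words, the backend <code>Service</code> could simply look like this (the same as yours, just without the <code>clusterIP: None</code> line; note that the selector still has to match the labels actually set on your MySQL pods):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql291020
  labels:
    app: mysql291020
spec:
  ports:
  - port: 3306
  selector:
    app: mysql291020
    tier: mysql
</code></pre>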
| mario |
<p>I am trying to find the kube-proxy logs on minikube, but I can't seem to locate them. </p>
<p>sudo cat: /var/log/kubeproxy.log: No such file or directory</p>
| Nag | <p>A more generic way (besides what <a href="https://stackoverflow.com/users/6027873/hoque">hoque</a> described) that you can use on any kubernetes cluster is to check the logs using kubectl. </p>
<pre><code>kubectl logs kube-proxy-s8lcb -n kube-system
</code></pre>
<p>Using this solution allows you to check logs for any K8s cluster even if you don't have access to your nodes. </p>
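<p>Since the exact pod name differs from cluster to cluster, a quick way to find it first (a sketch assuming the standard <code>k8s-app=kube-proxy</code> label used by kubeadm and minikube):</p>
<pre><code>kubectl get pods -n kube-system -l k8s-app=kube-proxy
kubectl logs -n kube-system -l k8s-app=kube-proxy
</code></pre>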
| Mark Watney |
<p>My configuration:
GKE cluster v. 1.15.7-gke.23
istio: 1.4.3</p>
<p>Istio created the istio-ingressgateway service as a LoadBalancer with a default firewall rule:</p>
<ul>
<li>Type: Ingress </li>
<li>Targets: VMs on the GKE cluster </li>
<li>Filters: 0.0.0.0/0</li>
<li>Protocols/ports: tcp:15020,tcp:80, tcp:443,
tcp:15029,tcp:15030,tcp:15031,tcp:15032,tcp:15443</li>
</ul>
<p>My goal is to update the <strong>Filters</strong> on the rule to allow access to the endpoint only from allow-listed IP addresses. </p>
<p>Can this be achieved through Istio?</p>
| Victor Vedmich | <p>AFAIK it is not possible to affect the istio-ingressgateway Loadbalancer default rules on GCP firewall from istio configuration alone.</p>
<hr>
<p>However,</p>
<p>This kind of filtering can be achieved with use of istio policies. So that the requests will reach the <code>istio-ingressgateway</code> but then will be denied by policies if IP address was not whitelisted.</p>
<p>According to <a href="https://istio.io/docs/tasks/policy-enforcement/denial-and-list/#ip-based-whitelists-or-blacklists" rel="nofollow noreferrer">istio</a> documentation:</p>
<blockquote>
<p>Istio supports <em>whitelists</em> and <em>blacklists</em> based on IP address. You can configure Istio to accept or reject requests from a specific IP address or a subnet.</p>
<ol>
<li><p>Verify you can access the Bookinfo <code>productpage</code> found at <code>http://$GATEWAY_URL/productpage</code>. You won’t be able to access it once you apply the rules below.</p></li>
<li><p>Apply configuration for the <a href="https://istio.io/docs/reference/config/policy-and-telemetry/adapters/list/" rel="nofollow noreferrer">list</a> adapter that white-lists subnet <code>"10.57.0.0\16"</code> at the ingress gateway:</p></li>
</ol>
<pre><code>$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.4/samples/bookinfo/policy/mixer-rule-deny-ip.yaml)
</code></pre>
</blockquote>
<p>Content of <code>mixer-rule-deny-ip.yaml</code>: </p>
<pre><code>apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
name: whitelistip
spec:
compiledAdapter: listchecker
params:
# providerUrl: ordinarily black and white lists are maintained
# externally and fetched asynchronously using the providerUrl.
overrides: ["10.57.0.0/16"] # overrides provide a static list
blacklist: false
entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
name: sourceip
spec:
compiledTemplate: listentry
params:
value: source.ip | ip("0.0.0.0")
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
name: checkip
spec:
match: source.labels["istio"] == "ingressgateway"
actions:
- handler: whitelistip
instances: [ sourceip ]
---
</code></pre>
<blockquote>
<ol start="3">
<li>Try to access the Bookinfo <code>productpage</code> at <code>http://$GATEWAY_URL/productpage</code> and verify that you get an error similar to: <code>PERMISSION_DENIED:staticversion.istio-system:<your mesh source ip> is not whitelisted</code></li>
</ol>
</blockquote>
<p>The example in documentation has <a href="https://istio.io/docs/tasks/policy-enforcement/denial-and-list/#before-you-begin" rel="nofollow noreferrer">Before you begin</a> part so make sure to meet requirements for <a href="https://istio.io/docs/tasks/policy-enforcement/enabling-policy/" rel="nofollow noreferrer">Enabling Policy Enforcement</a>.</p>
<p><strong>Edit:</strong></p>
<p>To clarify,</p>
<p>Istio and the GCP firewall rules are working at different levels. Istio is only enabled within its mesh, that is, wherever you have the sidecars injected. </p>
<p>In order to make the <code>istio-ingressgateway</code> work, GCE provides a Network Load Balancer that has some preconfigured rules, completely independent from the Istio mesh.</p>
<p>So basically: The GCE firewall rules will only affect the Network Load Balancer attached to the cluster in order to allow traffic into the Istio mesh and the filtering rules in Istio will only work in all the pods/services/endpoints that are within the mesh.</p>
| Piotr Malec |
<p>I want to launch a container with a non-root user, but I cannot modify the original Dockerfile. I know I could do something like <code>RUN useradd xx</code> then <code>USER xx</code> in the Dockerfile to achieve that.
What I am doing now is modifying the yaml file like the following:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-pod
image: xxx
command:
- /bin/sh
- -c
- |
useradd xx -s /bin/sh;
su -l xx; // this line is not working
sleep 1000000;
</code></pre>
<p>When I exec into the pod, the default is still the root user. Can anyone help with that? Thanks in advance!</p>
| Rosmee | <p>+1 to <a href="https://stackoverflow.com/a/64708860/7108457">dahiya_boy's</a> answer however I'd like to add also my 3 cents to what was already said.</p>
<p>I've reproduced your case using popular <code>nginx</code> image. I also modified a bit commands from your example so that home directory for the user <code>xxx</code> is created as well as some other commands for debugging purpose.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-pod
image: nginx
command:
- /bin/sh
- -c
- |
useradd xxx -ms /bin/bash;
su xxx && echo $?;
whoami;
sleep 1000000;
</code></pre>
<p>After successfully applying the above <code>yaml</code> we can run:</p>
<pre><code>$ kubectl logs my-pod
0
root
</code></pre>
<p>As you can see the exit status of the <code>echo $?</code> command is <code>0</code> which means that previous command in fact ran successfully. Even more: the construction with <code>&&</code> implies that second command is run if and only if the first command completed successfully (with exit status equal to <code>0</code>). If <code>su xxx</code> didn't work, <code>echo $?</code> would never run.</p>
<p>Nontheless, the very next command, which happens to be <code>whoami</code>, prints the actual user that is meant to run all commands in the container and which was defined in the original image. So no matter how many times you run <code>su xxx</code>, all subsequent commands will be run as user <code>root</code> (or another, which was defined in the <code>Dockerfile</code> of the image). So basically the only way to override it on kubernetes level is using already mentioned <code>securityContext</code>:</p>
<blockquote>
<p>You need to use security context as like below</p>
<pre><code> securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
</code></pre>
</blockquote>
<p>However I understand that you cannot use this method if you have not previously defined your user in a custom image. This can be done if a user with such <code>uid</code> already exists.</p>
<p>So to the best of my knowledge, it's impossible to do this the way you presented in your question and it's not an issue or a bug. It simply works this way.</p>
<p>If you <code>kubectl exec</code> to your newly created <code>Pod</code> in the interactive mode, you can see that everything works perfectly, user was successfully added and you can switch to this user without any problem:</p>
<pre><code>$ kubectl exec -ti my-pod -- /bin/sh
# tail -2 /etc/passwd
nginx:x:101:101:nginx user,,,:/nonexistent:/bin/false
xxx:x:1000:1000::/home/xxx:/bin/bash
# su xxx
xxx@my-pod:/$ pwd
/
xxx@my-pod:/$ cd
xxx@my-pod:~$ pwd
/home/xxx
xxx@my-pod:~$ whoami
xxx
xxx@my-pod:~$
</code></pre>
<p>But it doesn't mean that by running <code>su xxx</code> as one of the commands, provided in a <code>Pod</code> <code>yaml</code> definition, you will permanently change the default user.</p>
<p>I'd like to emphasize it again. In your example <code>su -l xxx</code> runs successfully. It's not true that it doesn't work. In other words: 1. container is started as user <code>root</code> 2. user <code>root</code> runs <code>su -l xxx</code> and once completed successfully, exits 3. user <code>root</code> runs <code>whoami</code>.</p>
<p>So the only reasonable solution is, as already mentioned by @dahiya_boy, adding an extra layer and creating a custom image.</p>
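<p>A minimal sketch of such an extra layer (the base image, user name and uid are placeholders, and it assumes a Debian-based image where <code>useradd</code> is available):</p>
<pre><code>FROM your-original-image:tag
# add a non-root user in a new layer; the original image below stays untouched
RUN useradd -u 1000 -ms /bin/bash xxx
USER xxx
</code></pre>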
<p>As to:</p>
<blockquote>
<p>@Rosmee YOu can add new docker image layer. and use that image in your
kubernetes. – dahiya_boy 18 hours ago</p>
<p><strong>yes l know that, but as i said above, l cannot modify the original
image</strong>, l need to switch user dynamically – Rosmee 17 hours ago</p>
</blockquote>
<p>You say <em>"I cannot modify the original image"</em> and this is exactly what custom image is about. No one is talking here about modifying the original image. It remains untouched. By writing your own <code>Dockerfile</code> and e.g. by adding in it an extra user and setting it as a default one, you don't modify the original image at all, but build a new custom image on top of it. That's how it works and that's the way it is meant to be used.</p>
| mario |
<p>In kubernetes, I always see the service's definition like this:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: random-exporter
labels:
app: random-exporter
spec:
type: ClusterIP
selector:
app: random-exporter
ports:
- port: 9800
targetPort: http
name: random-port
</code></pre>
<p>whose targetPort is <code>http</code>; that's human friendly! </p>
<p>What I'm interested in is: are there more named ports such as <code>http</code> in Kubernetes? Maybe <code>https</code>?</p>
| Liqang Liu | <p>Usually you refer to a target port by its number,
but you can give a specific name to each pod's port
and refer to this name in your service specification. </p>
<p>This will make your service clearer.
Here is an example where you name your ports in a pod. </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: test
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
</code></pre>
<p>And here you refer to those ports by name in the service yaml. </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-svc
spec:
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
</code></pre>
<p>Also from the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#serviceport-v1-core" rel="nofollow noreferrer">kubernetes documention</a> you may find this information: </p>
<p><code>targetPort</code> - Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
If this is a string, it will be looked up as a named port in the target Pod's container ports. </p>
| acid_fuji |
<p>I'm new to kubernetes and I'm trying to understand how labels work on a node. We have eks server version 1.14 running in our organization. I'm trying to change built-in deprecated labels.</p>
<p>In the aws-node daemonset, I want to replace beta.kubernetes.io/os with kubernetes.io/os and beta.kubernetes.io/arch with kubernetes.io/arch.</p>
<p>I see both beta.kubernetes.io/arch and kubernetes.io/arch labels when I describe a node. </p>
<ul>
<li>Is it safe to go ahead remove the beta.kubernetes.io/arch and
beta.kubernetes.io/os labels? </li>
<li>I want to understand if I change the label, what are its effects?</li>
<li>Are Pods running on that node affected? </li>
<li>Can apiVersion: apps/v1 change built-in labels?</li>
<li>Can I just run <code>kubectl label node "node-name" beta.kubernetes.io/arch=amd64 -</code> to remove the labels?</li>
<li><p>Is there a need to apply the daemonset ?</p>
<pre><code>kind: DaemonSet
apiVersion: apps/v1
metadata:
name: aws-node
namespace: kube-system
labels:
k8s-app: aws-node
spec:
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
k8s-app: aws-node
template:
metadata:
labels:
k8s-app: aws-node
spec:
priorityClassName: system-node-critical
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: "beta.kubernetes.io/os"
operator: In
values:
- linux
- key: "beta.kubernetes.io/arch"
operator: In
values:
- amd64
</code></pre>
<p>kubectl describe node/ip-10-xx-xx-xx.ec2.internal -n kube-system</p></li>
</ul>
<pre><code> Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=c4.xlarge
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1a
group=nodes
kubernetes.io/arch=amd64
kubernetes.io/hostname=ip-10-182-32-156.ec2.internal
kubernetes.io/os=linux
</code></pre>
| user6826691 | <p>From the <a href="https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetesioarch" rel="nofollow noreferrer">documentation</a> we can read that <code>beta.kubernetes.io/arch</code> and <code>beta.kubernetes.io/os</code> are deprecated since version 1.14 (<a href="https://v1-16.docs.kubernetes.io/docs/setup/release/notes/#deprecations-and-removals" rel="nofollow noreferrer">removed on version 1.18</a>) and <code>kubernetes.io</code> should be used instead. </p>
<p>You are using version 1.14 and there is no reason for you to change/remove these labels. Changing them would add one more layer of complication to your cluster when you want to add a node for example (you have to always keep in mind that you have non-stock labels in your nodes).</p>
<blockquote>
<ul>
<li>Is it safe to go ahead remove the beta.kubernetes.io/arch and beta.kubernetes.io/os labels?</li>
</ul>
</blockquote>
<p>It's safe but unnecessary unless you have applications running on mixed clusters and you are using these labels.</p>
<blockquote>
<ul>
<li>I want to understand if I change the label, what are its effects?</li>
</ul>
</blockquote>
<p>From the <a href="https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#betakubernetesioarch-deprecated" rel="nofollow noreferrer">documentation</a> we can read: </p>
<p>kubernetes.io/arch: This can be handy if you are mixing arm and x86 nodes.</p>
<p>kubernetes.io/os: This can be handy if you are mixing operating systems in your cluster (for example: mixing Linux and Windows nodes).</p>
<p>So these labels are there for your convenience, you can use them to keep track of things. </p>
<blockquote>
<ul>
<li>Does Pods running on that node are affected?</li>
</ul>
</blockquote>
<p>No, pods are still going to be scheduled normally.</p>
<blockquote>
<ul>
<li>Can I just run <code>kubectl label node "node-name" beta.kubernetes.io/arch=amd64 -</code> to remove the labels?</li>
</ul>
</blockquote>
<p>To remove the label you can run: </p>
<pre><code>kubectl label node "node-name" beta.kubernetes.io/arch-
</code></pre>
<p>To remove from all nodes:</p>
<pre><code>kubectl label nodes --all beta.kubernetes.io/arch-
</code></pre>
<blockquote>
<ul>
<li>Is there a need to apply the daemonset ?</li>
</ul>
</blockquote>
<p>Personally, I don't see a need for that. </p>
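<p>If at some point you do decide to reference only the non-deprecated labels, the affinity section would simply use the <code>kubernetes.io</code> keys instead (a sketch, not the stock aws-node manifest):</p>
<pre><code>affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "kubernetes.io/os"
          operator: In
          values:
          - linux
        - key: "kubernetes.io/arch"
          operator: In
          values:
          - amd64
</code></pre>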
| Mark Watney |
<p>We are trying to migrate our microservices architecture to K8s and Istio. We will have two different K8s clusters: one for frontend applications and another for backend apps. Our initial idea is to configure each cluster as a separate Istio mesh. </p>
<p>My doubt is; </p>
<p>Can we keep locality-aware routing between clusters when a frontend app makes a request to a backend app? </p>
<p>I have read it is possible when you have one mesh distributed among K8s clusters but I'm not sure if this feature keeps working when a mesh federation architecture is implemented.</p>
<p>Thanks!</p>
| Mariano Mirabelli | <p>There is functionality for something like that in the Istio multicluster configuration.</p>
<p>Depending on Your requirements there can be different types of multicluster models.</p>
<p>According to <a href="https://istio.io/docs/ops/deployment/deployment-models/" rel="nofollow noreferrer">istio</a> documentation: </p>
<blockquote>
<p>When configuring a production deployment of Istio, you need to answer a number of questions. Will the mesh be confined to a single cluster or distributed across multiple clusters? Will all the services be located in a single fully connected network, or will gateways be required to connect services across multiple networks? Is there a single control plane, potentially shared across clusters, or are there multiple control planes deployed to ensure high availability (HA)? If there is more than one cluster being deployed, and more specifically in isolated networks, are they going to be connected into a single multicluster service mesh or will they be federated into a multi-mesh deployment?</p>
<p>All of these questions, among others, represent independent dimensions of configuration for an Istio deployment.</p>
<ol>
<li>single or multiple cluster</li>
<li>single or multiple network</li>
<li>single or multiple control plane</li>
<li>single or multiple mesh</li>
</ol>
<p>All combinations are possible, although some are more common than others and some are clearly not very interesting (for example, multiple mesh in a single cluster).</p>
</blockquote>
<hr>
<p>As for <a href="https://istio.io/docs/ops/deployment/deployment-models/#mesh-models" rel="nofollow noreferrer">mesh</a> functionality:</p>
<blockquote>
<h3>Single mesh<a href="https://istio.io/docs/ops/deployment/deployment-models/#single-mesh" rel="nofollow noreferrer"></a></h3>
<p>The simplest Istio deployment is a single mesh. Within a mesh, service names are unique. For example, only one service can have the name <code>mysvc</code> in the <code>foo</code> namespace. Additionally, workload instances share a common identity since service account names are unique within a namespace, just like service names.</p>
<p>A single mesh can span <a href="https://istio.io/docs/ops/deployment/deployment-models/#cluster-models" rel="nofollow noreferrer">one or more clusters</a> and <a href="https://istio.io/docs/ops/deployment/deployment-models/#network-models" rel="nofollow noreferrer">one or more networks</a>. Within a mesh, <a href="https://istio.io/docs/ops/deployment/deployment-models/#namespace-tenancy" rel="nofollow noreferrer">namespaces</a> are used for <a href="https://istio.io/docs/ops/deployment/deployment-models/#tenancy-models" rel="nofollow noreferrer">tenancy</a>.</p>
<h3>Multiple meshes<a href="https://istio.io/docs/ops/deployment/deployment-models/#multiple-meshes" rel="nofollow noreferrer"></a></h3>
<p>Multiple mesh deployments result from mesh federation.</p>
<p>Multiple meshes afford the following capabilities beyond that of a single mesh:</p>
<ul>
<li>Organizational boundaries: lines of business</li>
<li>Service name or namespace reuse: multiple distinct uses of the <code>default</code> namespace</li>
<li>Stronger isolation: isolating test workloads from production workloads</li>
</ul>
<p>You can enable inter-mesh communication with mesh federation. When federating, each mesh can expose a set of services and identities, which all participating meshes can recognize.</p>
<p>To avoid service naming collisions, you can give each mesh a globally unique <strong>mesh ID</strong>, to ensure that the fully qualified domain name (FQDN) for each service is distinct.</p>
<p>When federating two meshes that do not share the same trust domain, you must federate identity and <strong>trust bundles</strong> between them. See the section on <a href="https://istio.io/docs/ops/deployment/deployment-models/#trust-between-meshes" rel="nofollow noreferrer">Multiple Trust Domains</a> for an overview.</p>
</blockquote>
<hr>
<p>So I suggest applying the multicluster model that fits Your needs. The simplest solution is usually the best. A single-mesh multicluster does allow for naming locality in Your multicluster environment.</p>
<hr>
<p>There is also an <a href="https://istio.io/blog/2020/multi-cluster-mesh-automation/" rel="nofollow noreferrer">advanced</a> example of multicluster Istio with the use of <a href="https://github.com/istio-ecosystem/admiral" rel="nofollow noreferrer">Admiral</a>, which allows for custom naming possibilities.</p>
<p>Hope it helps.</p>
| Piotr Malec |
<p>In my Kubernetes cluster I have 1 master and 3 nodes.</p>
<p>I will have a deployment/daemonset running on each node and it will need storage.</p>
<p>I thought about local storage due to performance issues with Gluster, since there will be a lot of files (small/big).</p>
<p>Is there a way of assigning automatically the pvc which has the pv in the node where the pod has been allocated?</p>
<p>Thanks</p>
| Bruno | <blockquote>
<p>Is there a way of assigning automatically the pvc which has the pv in
the node where the pod has been allocated?</p>
</blockquote>
<p>It depends what exactly you mean by <em>"assigning automatically the pvc"</em> but I believe that a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local volume</a> should meet your requirements, whether you use a single <code>PVC</code> or many different <code>PVCs</code>.</p>
<p>Its biggest advantage over <code>hostPath</code> is that there is no need to schedule Pods manually to specific nodes, as node affinity is defined on the <code>PersistentVolume</code>:</p>
<blockquote>
<p>Compared to <code>hostPath</code> volumes, local volumes can be used in a durable
and portable manner without manually scheduling Pods to nodes, as the
system is aware of the volume's node constraints by looking at the
node affinity on the PersistentVolume.</p>
</blockquote>
<p>You just need to define a local storage class, local <code>PV</code> and <code>PVC</code>, which then you can use in <code>volumeClaimTemplates</code> like in the following fragment of a <code>Statefulset</code> specification:</p>
<pre><code>...
volumeClaimTemplates:
- metadata:
name: local-vol
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 368Gi
</code></pre>
<p>All your <code>Pods</code> will be automatically scheduled to appropriate nodes, as the node affinity is already defined in the <code>PersistentVolume</code> and a specific <code>PersistentVolumeClaim</code> is bound to such a <code>PV</code>. It's important to remember that <code>PV:PVC</code> is always a <code>1:1</code> binding, while one <code>PVC</code> can be used by many different <code>Pods</code>. When it comes to local volumes, if <code>Pods</code> use a single common <code>PVC</code>, they will all be scheduled on the same node on which the claimed <code>PV</code> is located.</p>
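<p>For completeness, a minimal sketch of the local storage class and one matching <code>PV</code> referenced in the fragment above (the node name, path and capacity are placeholders you need to adjust per node):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node1
spec:
  capacity:
    storage: 368Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
</code></pre>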
| mario |
<p>I'm experimenting with SMTP (mailoney) and SSH honeypots in a Kubernetes cluster, to be exposed to the big bad WWW. I can't seem to figure out how to get it working, since I've only recently begun to understand Kubernetes. </p>
<p>I've got some config now, for example my mailoney.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mailoney
spec:
selector:
matchLabels:
app: mailoney
template:
metadata:
labels:
app: mailoney
spec:
containers:
- name: mailoney
image: dtagdevsec/mailoney:2006
ports:
- containerPort: 25
</code></pre>
<p>and the service config:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-mailoney
labels:
name: mailoney
spec:
type: LoadBalancer
ports:
- name: smtp
port: 25
targetPort: 25
protocol: TCP
selector:
name: mailoney
</code></pre>
<p>But when the loadbalancer is configured, it exposes the services on ports >30000, which I know is the default NodePort behaviour for Kubernetes. How exactly do I configure the loadbalancer to allow connections on ports 25 and 22 respectively and actually let connections through to the honeypots?</p>
<p>am I overlooking something really obvious?</p>
<p>Any help is appreciated. </p>
| chr0nk | <p>As @coderanger mentioned, your cloud provider will take care of everything and make the original port available.
Reading your service manifest I could notice that your selector is wrong, it should point to <code>app: mailoney</code> instead of <code>name:</code>. I tested it and it's working with the correct selector.</p>
<p>Here is how your manifest should look like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-mailoney
labels:
name: mailoney
spec:
type: LoadBalancer
ports:
- name: smtp
port: 25
targetPort: 25
protocol: TCP
selector:
app: mailoney
</code></pre>
<p>After changing it to <code>app: mailoney</code> I have the following results:</p>
<pre><code>$ kubectl get service ingress-mailoney -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-mailoney LoadBalancer 10.31.250.51 104.197.119.16 25:30601/TCP 44m app=mailoney
</code></pre>
<pre><code>$ telnet 104.197.119.16 25
Trying 104.197.119.16...
Connected to 104.197.119.16.
Escape character is '^]'.
220 mailrelay.local ESMTP Exim 4.81 #1 Thu, 29 Jul 2010 05:13:48 -0700
</code></pre>
<p>As you can see, it's working as designed.</p>
| Mark Watney |
<p>I have an Istio 1.4.6 VirtualService with a match and a url rewrite defined as follows:</p>
<pre><code> match:
- authority:
prefix: example.com
uri:
prefix: /foo/bar
rewrite:
uri: /
route:
- destination:
host: some-service
port:
number: 80
</code></pre>
<p>I would like a rewrite like follows:</p>
<p>Traffic directed to <code>/foo/bar</code> or any subpath of that should be rewritten to <code>/</code> plus any subpath on <code>some-service</code>.</p>
<pre><code>i.e.
example.com/foo/bar -> some-service
example.com/foo/bar/subpath -> some-service/subpath
example.com/foo/bar/subpath/anothersubpath -> some-service/subpath/anothersubpath
</code></pre>
<p>However, when I sysdig the traffic coming into <code>some-service</code>, I see that Istio has rewritten the path to:</p>
<pre><code>GET //subpath/anothersubpath HTTP/1.1
</code></pre>
<p>Notice the two slashes in the GET request. In the VirtualService spec <code>rewrite.uri</code> field, I can't seem to leave that field empty or add an empty string there. Doing so causes the resource not to validate.</p>
<p>I.e. I can NOT do this:</p>
<pre><code> rewrite:
uri: ""
</code></pre>
<p>And can NOT do this</p>
<pre><code> rewrite:
uri:
</code></pre>
<p>How can I define a VirtualService rewrite to send traffic to the root of the destination service? Any help is much appreciated. </p>
| Joe J | <p>There is a github <a href="https://github.com/istio/istio/issues/8076" rel="noreferrer">issue</a> about this.</p>
<p>The simplest fix is to add whitespace in the <code>uri</code> as long as You are not running <code>.net</code> core application.</p>
<pre><code> rewrite:
uri: " "
</code></pre>
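<p>Applied to the spec from the question, the whitespace workaround would sit like this (a sketch only, mirroring the asker's match and destination):</p>
<pre><code> match:
 - authority:
     prefix: example.com
   uri:
     prefix: /foo/bar
 rewrite:
   uri: " "
 route:
 - destination:
     host: some-service
     port:
       number: 80
</code></pre>
<p>See the linked issue for the behaviour details and caveats of this trick.</p>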
<p>Another workaround can be found <a href="https://github.com/istio/istio/issues/8076#issuecomment-448738814" rel="noreferrer">here</a>.</p>
<p>Hope it helps.</p>
| Piotr Malec |
<p>I know that DaemonSets ensure exactly one pod runs on every node.
I know that ReplicaSets ensure the defined number of pod replicas are running, but they don't ensure every node gets a pod.</p>
<p>My requirement is every Node should be occupied by one POD and also I should be able to increase pod replica count! Is there any way we can achieve this?</p>
<p>We could deploy the application 2 times with different names, i.e. once with a DaemonSet and once with a ReplicaSet. But is there any better approach, so that the deployment can have a single manifest file with a single name?</p>
<p>FYI, this is what I am trying to achieve in Google Cloud - GKE.</p>
| sudhir tataraju | <blockquote>
<p>My requirement is every Node should be occupied by one POD and also I should be able to increase pod replica count! Is there any way we can achieve this?</p>
</blockquote>
<p><strong>No, at the moment kubernetes doesn't provide a mechanism which would enable you to achive exactly what you want.</strong></p>
<p>Having read carefully your question I would summarize your key requirements as follows:</p>
<ol>
<li><code>Pods</code> should be scheduled <strong>on every node</strong> (as <code>Daemonset</code> does).</li>
<li>At the same time you need to be able to <strong>schedule desired number of <code>Pods</code> of a certain kind on all nodes</strong>. And of course the <strong>number of such <code>Pods</code> will be much bigger than the number of nodes</strong>. So you need to be able to schedule more than one <code>Pod</code> of a certain type on each node.</li>
<li>When one of the nodes becomes temporarily unavailable, missing <code>Pods</code> should be scheduled to the remaing nodes to be able to handle same workload.</li>
<li>When node becomes available again, <code>Pods</code> that were moved before to other nodes should be rescheduled on the newly recovered node.</li>
</ol>
<p>If you need to have more than just a one <code>Pod</code> on every node <code>Daemonset</code> definitely is not a solution you look for as it ensures that <strong>exactly one copy of a <code>Pod</code> of a certain kind is running on every node</strong>. A few different <code>Daemonsets</code> doesn't seem a good solutions either as <code>Pods</code> would be managed separately in such scenario.</p>
<p>I would also like to refer to @redzack's answer. Taking into consideration all the above requirements, <code>podAntiAffinity</code> doesn't solve this problem at all. Let's suppose you have only those 3 nodes. If you increase your <code>replicas</code> number e.g. to 6 you'll see something like below:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE
web-server-1287567482-5d4dz 1/1 Running 0 7m 10.192.2.3 kube-node-1
web-server-1287567482-6f7v5 1/1 Running 0 7m 10.192.4.3 kube-node-3
web-server-1287567482-s330j 1/1 Running 0 7m 10.192.3.2 kube-node-2
web-server-1287567482-5ahfa 1/1 Pending 0 7m <none> <none>
web-server-1287567482-ah47s 1/1 Pending 0 7m <none> <none>
web-server-1287567482-ajgh7 1/1 Pending 0 7m <none> <none>
</code></pre>
<p>Due to <code>podAntiAffinity</code> new <code>Pods</code> won't be eligible to be scheduled on those <code>nodes</code>, on which one <code>Pod</code> of this kind is already running. Even if you change the type of <code>podAntiAffinity</code> from <code>requiredDuringSchedulingIgnoredDuringExecution</code> to <code>preferredDuringSchedulingIgnoredDuringExecution</code> it won't meet your requirement as you may end up with any scenario like: 3 pods on node1, 2 pods on node2 and 1 pod on node3 or even only 2 nodes may be used. So in such case it won't work any better than a normal deployment without any affinity/anti-affinity rules.</p>
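<p>For reference, this is roughly what the <em>preferred</em> variant looks like in a Deployment's Pod template (a sketch only, and as explained above it still doesn't satisfy requirements 2-4):</p>
<pre><code>affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: web-server
        topologyKey: kubernetes.io/hostname
</code></pre>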
<p>Furthermore it won't cover point <em>4</em> from the above requirements list at all. Once missing node is recovered, nothing will re-schedule to it those <code>Pods</code> that are already running on different nodes. The only solution that can guarantee that when new node appeares/re-appeares, <code>Pod</code> of a certain kind is scheduled on such node, is <code>Daemonset</code>. But it won't cover point <em>2</em> and <em>3</em>. So there is no ideal solution for your use case.</p>
<p>If someone has a better idea of how it can be achieved, feel free to join this thread and post your own answer, but in my opinion such a solution is simply unavailable at the moment, at least not with the standard kube-scheduler.</p>
<p>If a single copy of a <code>Pod</code> running on each node is not enough to handle your workload, I would say: simply use a standard <code>Deployment</code> with the desired number of replicas and rely on <strong>kube-scheduler</strong> to decide on which node each one will be scheduled; you can be pretty sure that in most cases it will do it properly and distribute your workload evenly. It won't re-distribute already running <code>Pods</code> onto a new/recovered node, so it's not perfect, but I would say for most scenarios it should work very well.</p>
| mario |
<p>I want to make a role that allows any operation on "Roles" and "RoleBindings" (but not ClusterRoles or ClusterRoleBindings) at the namespace level.</p>
<p>This is the role YAML I put together, but when binding it to a service account it is not applied. What did I do wrong?</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: role-binder
namespace: foo-namespace
rules:
- apiGroups:
- rbac.authorization.k8s.io
resources:
- Role
- RoleBinding
verbs:
- '*'
</code></pre>
| Hedge | <p>You can achieve that with the following rules: </p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: role-grantor
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["rolebindings"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: role-grantor-binding
namespace: office
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: role-grantor
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: employee
</code></pre>
<p>I tested it on my lab and it works as you desire: </p>
<pre><code>$ kubectl --context=employee-context get role
NAME AGE
deployment-manager 15m
role-binder 12m
$ kubectl --context=employee-context get rolebindings
NAME AGE
deployment-manager-binding 15m
role-grantor-binding 3m37s
$ kubectl --context=employee-context get clusterrolebindings
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "employee" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
</code></pre>
<p>You can read more about this specifically in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#restrictions-on-role-binding-creation-or-update" rel="nofollow noreferrer">documentation</a>.</p>
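<p>To double-check the permissions from the admin side (assuming the same <code>employee</code> user and <code>office</code> namespace as above), <code>kubectl auth can-i</code> is handy:</p>
<pre><code>$ kubectl auth can-i create rolebindings --namespace office --as employee
yes
$ kubectl auth can-i create clusterrolebindings --as employee
no
</code></pre>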
| Mark Watney |
<p>I want to run docker containers using real-time scheduler. Is it possible to pass parameters in pod/deployment file to Kubernetes to run my containers as follows?</p>
<pre><code>docker run -it --cpu-rt-runtime=950000 \
--ulimit rtprio=99 \
--cap-add=sys_nice \
debian:jessie
</code></pre>
| Václav Struhár | <p>Unfortunately not all Docker command line features has relevant options in Kubernetes YAML.</p>
<p>While <code>sys_time</code> capability can be set using <code>securiyContext</code> in yaml, the <code>--cpu-rt-runtime=950000</code> cannot.</p>
<p>In the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#podsecuritycontext-v1-core" rel="nofollow noreferrer">K8s API Pod</a> documentation you can find all the configuration that can be pass into container
under <code>PodSecurityContext v1 core</code>.</p>
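<p>For the capability part, a minimal sketch (covering only <code>--cap-add=sys_nice</code>; the real-time runtime and ulimit flags have no direct Pod-spec equivalent):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: rt-test        # hypothetical name
spec:
  containers:
  - name: app
    image: debian:jessie
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        add: ["SYS_NICE"]
</code></pre>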
<p>Another thing is that I've tried to run a container itself with the specs that you provided, but I ran into an error:</p>
<pre><code>docker: Error response from daemon: Your kernel does not support
cgroup cpu real-time runtime. See 'docker run --help'
</code></pre>
<p>This is related directly to the kernel configuration option <code>CONFIG_RT_GROUP_SCHED</code>, which is missing from your kernel image. Without it, <code>cpu-rt-runtime</code> cannot be set for the container.</p>
| acid_fuji |
<p>This is my yaml for the adminer:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: adminer-1598029219
labels:
app.kubernetes.io/name: adminer
helm.sh/chart: adminer-0.1.5
app.kubernetes.io/instance: adminer-1598029219
app.kubernetes.io/managed-by: Helm
spec:
type: NodePort
ports:
- port: 8000
targetPort: http
protocol: TCP
name: http
selector:
app.kubernetes.io/name: adminer
app.kubernetes.io/instance: adminer-1598029219
---
# Source: adminer/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: adminer-1598029219
labels:
app.kubernetes.io/name: adminer
helm.sh/chart: adminer-0.1.5
app.kubernetes.io/instance: adminer-1598029219
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: adminer
app.kubernetes.io/instance: adminer-1598029219
template:
metadata:
labels:
app.kubernetes.io/name: adminer
app.kubernetes.io/instance: adminer-1598029219
spec:
containers:
- name: adminer
image: "dockette/adminer:full"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
env:
- name: ADMINER_PLUGINS
value:
- name: ADMINER_DESIGN
value: pepa-linha
- name: ADMINER_DEFAULT_SERVER
value:
resources:
{}
livenessProbe:
null
readinessProbe:
null
</code></pre>
<p>And this my yaml for th mongoDB</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
name: mongo
app: mongo
spec:
ports:
- port: 27017
targetPort: 27017
name: web
clusterIP: None
selector:
role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: mongo # has to match .spec.template.metadata.labels
serviceName: "mongo"
replicas: 3
template:
metadata:
labels:
app: mongo
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
ports:
- containerPort: 27017
name: web
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "fast"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
</code></pre>
<p>So my problem is that I can't log in to mongod because I get this from Adminer:
<strong>Adminer does not support accessing a database without a password</strong>. Is there any simple solution to this problem so that I can log into my mongod?
P.S. I run Kubernetes.</p>
| Шурбески Христијан | <p>The answer to your question can be found in <a href="https://www.adminer.org/en/password/" rel="nofollow noreferrer">Adminer's documentation</a>:</p>
<blockquote>
<h2>Accessing a database without a password</h2>
<p>Adminer 4.6.3 and newer does not support accessing a database without
a password. The reason is that a forgotten Adminer uploaded on a place
accessible by an attacker could have been used to access a database.
There are these options:</p>
<ol>
<li>Set up the database server to require a password. This is possible with all databases except SQLite and SimpleDB.</li>
<li>Use the <a href="https://raw.github.com/vrana/adminer/master/plugins/login-password-less.php" rel="nofollow noreferrer"><strong>login-password-less</strong></a>
<a href="https://www.adminer.org/en/plugins/" rel="nofollow noreferrer">plugin</a> to set up a password
required by Adminer but not passed to the database
(<a href="https://github.com/vrana/adminer/blob/master/adminer/sqlite.php" rel="nofollow noreferrer">example</a>).</li>
<li>Use the <a href="https://raw.github.com/vrana/adminer/master/plugins/login-ip.php" rel="nofollow noreferrer"><strong>login-ip</strong></a>
<a href="https://www.adminer.org/en/plugins/" rel="nofollow noreferrer">plugin</a> to check the IP address
and allow an empty password.</li>
<li>Implement the <code>login</code> <a href="https://www.adminer.org/en/extension/" rel="nofollow noreferrer">method</a> to add your custom
authentication.</li>
</ol>
</blockquote>
<p>You may find <a href="https://stackoverflow.com/questions/58009920/how-to-enter-adminer-without-password">this answer</a> helpful as well. It describes the whole process in a great detail.</p>
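<p>If you go with option 1 and the official <code>mongo</code> image, one possible sketch (not part of the quoted docs) is to enable authentication via the image's init environment variables. Note that these are honored by the image's own entrypoint, so they may not take effect if you override <code>command</code> the way your StatefulSet currently does:</p>
<pre><code>        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: admin                     # example value
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-credentials      # hypothetical Secret
              key: password
</code></pre>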
| mario |
<p>I have an isomorphic JavaScript app that uses Vue's SSR plugin running on K8s. This app can either be rendered server-side by my Express server with Node, or it can be served straight to the client as with Nginx and rendered in the browser. It works pretty flawlessly either way.</p>
<p>Running it in Express with SSR is a much higher resource use however, and Express is more complicated and prone to fail if I misconfigure something. Serving it with Nginx to be rendered client side on the other hand is dead simple, and barely uses any resources in my cluster.</p>
<p>What I want to do is have a few replicas of a pod running my Express server that's performing the SSR, but if for some reason these pods go down, I want a fallback service on the ingress that will serve from a backup pod with just Nginx serving the client-renderable code.</p>
<p>Setting up the pods is easy enough, but how can I tell an ingress to serve from a different service then normal if the normal service is unreachable and/or responding too slowly to requests?</p>
| sevensidedmarble | <p>The easiest way to setup NGINX Ingress to meet your needs is by using the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#default-backend" rel="nofollow noreferrer">default-backend</a> annotation. </p>
<blockquote>
<p>This annotation is of the form
<code>nginx.ingress.kubernetes.io/default-backend: <svc name></code> to specify
a custom default backend. This <code><svc name></code> is a reference to a
service inside of the same namespace in which you are applying this
annotation. This annotation overrides the global default backend.</p>
<p>This service will be handle the response when the service in the
Ingress rule does not have active endpoints. It will also handle the
error responses if both this annotation and the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-http-errors" rel="nofollow noreferrer">custom-http-errors
annotation</a> is set.</p>
</blockquote>
<p>Example: </p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-app-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/custom-http-errors: '404'
nginx.ingress.kubernetes.io/default-backend: default-http-backend
spec:
rules:
- host: myapp.mydomain.com
http:
paths:
- path: "/"
backend:
serviceName: custom-http-backend
servicePort: 80
</code></pre>
<p>In this example NGINX is serving custom-http-backend as primary resource and if this service fails, it will redirect the end-user to default-http-backend. </p>
<p>You can find more details on this example <a href="https://koopakiller.com/kubernetes/nginx-ingress/ingress/2020/02/13/nginx-ingress-default-backend.html" rel="nofollow noreferrer">here</a>.</p>
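<p>Adapted to your scenario, a hedged sketch could look like the following, where <code>express-ssr</code> (the SSR pods) and <code>nginx-client-render</code> (the Nginx fallback) are hypothetical service names you would replace with your own:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/custom-http-errors: '502,503,504'
    nginx.ingress.kubernetes.io/default-backend: nginx-client-render
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: express-ssr
          servicePort: 80
</code></pre>
<p>This way requests normally hit the Express SSR service, and if it has no healthy endpoints (or returns one of the listed error codes), NGINX serves the client-rendered fallback instead.</p>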
| Mark Watney |
<h3>I do not want filebeat to report any metrics to elasticsearch.</h3>
<p>Once I start the deamon set I can see the following message:</p>
<pre><code>2020-03-17T09:14:59.524Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
</code></pre>
<p><strong>How can you disable that?</strong><br />
Basically what I think I need is <code>logging.metrics.enabled: false</code> or is it <code>monitoring.enabled: false </code>?
I just cannot make it work. I'm not sure where to put it. The <a href="https://www.elastic.co/guide/en/beats/filebeat/current/configuration-logging.html" rel="nofollow noreferrer">documentation</a> just says to put it into the logging section of my filebeat.yaml. So I added it at the same indentation level as "filebeat.inputs", to no success... Where do I need to put it? Or is it the completely wrong configuration setting I am looking at?</p>
<blockquote>
<p><a href="https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/filebeat-kubernetes.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/filebeat-kubernetes.yaml</a></p>
</blockquote>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: kube-system
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
#filebeat.autodiscover:
# providers:
# - type: kubernetes
# node: ${NODE_NAME}
# hints.enabled: true
# hints.default_config:
# type: container
# paths:
# - /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- add_cloud_metadata:
- add_host_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
logging.metrics.enabled: false
---
</code></pre>
| lapsus | <p>The <code>filebeat.yml</code> is configuration file that mounted at <code>/etc/filebeat.yml</code> in the <code>filebeat</code> <code>DaemonSet</code>.</p>
<p>There are <a href="https://www.elastic.co/guide/en/beats/filebeat/current/directory-layout.html" rel="nofollow noreferrer">directory layout</a> and <a href="https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-reference-yml.html#filebeat-reference-yml" rel="nofollow noreferrer">configuration reference</a> pages for <code>FileBeat</code> in elastic.co documentation.</p>
<p><strong>Update:</strong></p>
<p>The <code>logging.metrics.enabled: false</code> will only disable internal metrics.
Take a look at this <a href="https://discuss.elastic.co/t/non-zero-metrics-in-the-last-30s-meaning/173970" rel="nofollow noreferrer">post</a>.</p>
<p>Note the difference between this <code>INFO</code> log for the internal metrics:</p>
<pre><code>2019-03-26T16:16:02.557Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s
</code></pre>
<p>And the one in Your case:</p>
<pre><code>2020-03-17T09:14:59.524Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
</code></pre>
<p>Unfortunately this configuration will not stop <code>FileBeat</code> from reporting metrics to <code>ElasticSearch</code></p>
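<p>If Your goal is only to silence that periodic internal-metrics log line, note that the <code>logging.*</code> keys are top-level settings in <code>filebeat.yml</code>, so in the ConfigMap from the question they belong at the same indentation level as <code>output.elasticsearch</code>, for example:</p>
<pre><code>    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
    logging.metrics.enabled: false
</code></pre>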
<p>Hope it helps.</p>
| Piotr Malec |
<p>I have 2 pods running from one deployment on a Kubernetes GKE cluster. I have scaled this stateless deployment to 2 replicas.</p>
<p>Both replicas started at almost the same time and both keep restarting with error code 137. To change the restart timing I deleted one pod manually so that the RS (ReplicaSet) creates a new one.</p>
<p>Now both pods are again restarting at the same time. Is there any connection between them? Both have to work independently.</p>
<p>I have not set a resource limit. There is up to 3 GB of free space in the cluster and the deployment is not taking much memory, yet I'm still getting 137 errors and pod restarts.</p>
<p>Why are both pods restarting at the same time? That's the issue. All of the other 15 microservices are running perfectly.</p>
| Harsh Manvar | <p>This is a common mistake when pods are defined. If you do not set a CPU and memory limit, there is no upper bound and the pod might take all resources, crash and restart. Those are discussed here [2][3]. You will also see that user “ciokan” [1] fixed his issue by setting the limit.</p>
<p>[1]<a href="https://github.com/kubernetes/kubernetes/issues/19825" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/19825</a>
[2]memory:<a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/</a>
[3]CPU:<a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/</a></p>
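<p>For illustration, a minimal sketch of setting requests and limits on a container (the values are placeholders to tune for your workload):</p>
<pre><code>spec:
  containers:
  - name: app
    image: your-image:tag
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "500m"
</code></pre>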
| Ali Reza Izadi |
<p>I am running several LXC containers on Oracle Linux (host) and running EBS application inside those containers.<br />
Now I have more than 10 containers on each of my 5 hosts.<br />
I am looking for a solution for managing these containers. My research told me that Kubernetes is a solution for managing containers and performing several other activities with them, like autoscaling, etc.<br />
But Kubernetes works with LXD for LXC containers.<br />
Is there any way I can manage LXC containers directly with Kubernetes?<br />
Any help will be really appreciated !!</p>
<p>Thanks !!</p>
| krathi | <blockquote>
<p>Do you use LXD or classic LXC ?</p>
<p>I am using classic LXC package which is coming oracle Linux Base repo.</p>
</blockquote>
<p>Well, official <a href="https://github.com/automaticserver/lxe#lxe" rel="nofollow noreferrer">LXE project description</a> doesn't leave any doubts about it. It clearly states that <a href="https://github.com/automaticserver/lxe#requirements" rel="nofollow noreferrer">LXD must be installed</a>:</p>
<blockquote>
<h2>Requirements</h2>
<p>You need to have LXD >= 3.3 installed, which packages are officially
only available <a href="https://linuxcontainers.org/lxd/getting-started-cli/#snap-package-archlinux-debian-fedora-opensuse-and-ubuntu" rel="nofollow noreferrer">via
snap</a>.
A LXD built by source is also supported.</p>
</blockquote>
<p>So it means you cannot use classic LXC.</p>
<p>As you may know <a href="https://linuxcontainers.org/lxc/introduction/" rel="nofollow noreferrer">LXC</a> and <a href="https://linuxcontainers.org/lxd/introduction/" rel="nofollow noreferrer">LXD</a> are two different products although the second one is built on top of the first one as you can read <a href="https://linuxcontainers.org/lxd/introduction/#relationship-with-lxc" rel="nofollow noreferrer">here</a> but the most important difference is that <strong>LXD</strong> exposes a <strong>REST API</strong>:</p>
<blockquote>
<p>The core of LXD is a privileged daemon which exposes a REST API over a
local unix socket as well as over the network (if enabled).</p>
<p>Clients, such as the command line tool provided with LXD itself then
do everything through that REST API. It means that whether you're
talking to your local host or a remote server, everything works the
same way.</p>
</blockquote>
<p>This is actually its key feature which makes possible it's management with additional tools like <strong>LXE</strong>.</p>
<p>So again: <strong>The answer to your question is: No, you can't use classic LXC. It must be LXD.</strong> And as far as I know there is no other way available to manage <strong>LXC</strong> containers directly with <strong>kubernetes</strong>.</p>
| mario |
<p>Nearly 3 years ago, Kubernetes would <strong>not</strong> carry out a rolling deployment if you had a single replica (<a href="https://stackoverflow.com/questions/45638480/kubernetes-deployment-does-not-perform-a-rolling-update-when-using-a-single-repl">Kubernetes deployment does not perform a rolling update when using a single replica</a>).</p>
<p>Is this still the case? Is there any additional configuration required for this to work?</p>
| Chris Stryczynski | <p>You are not required to have a minimum number of replicas to rollout an update using Kubernetes Rolling Update anymore. </p>
<p>I tested it on my lab (v1.17.4) and it worked like a charm having only one replica. </p>
<p>You can test it yourself using this Katakoda Lab: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-interactive/" rel="nofollow noreferrer">Interactive Tutorial - Updating Your App</a></p>
<p>This lab is setup to create a deployment with 3 replicas. Before starting the lab, edit the deployment and change the number of replicas to one and follow the lab steps. </p>
<p>I created a lab using different example similar to your previous scenario. Here is my deployment: </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx-container
image: nginx:1.16.1
ports:
- containerPort: 80
</code></pre>
<p>Deployment is running with one replica only: </p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-6c4699c59c-w8clt 1/1 Running 0 5s
</code></pre>
<p>Here I edited my nginx-deployment.yaml and changed the version of nginx to <code>nginx:latest</code> and rolled out my deployment running replace: </p>
<pre><code>$ kubectl replace -f nginx-deployment.yaml
deployment.apps/nginx-deployment replaced
</code></pre>
<p>Another option is to change the nginx version using the <code>kubectl set image</code> <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">command</a>: </p>
<pre><code>kubectl set image deployment/nginx-deployment nginx-container=nginx:latest --record
</code></pre>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-566d9f6dfc-hmlf2 0/1 ContainerCreating 0 3s
nginx-deployment-6c4699c59c-w8clt 1/1 Running 0 48s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-566d9f6dfc-hmlf2 1/1 Running 0 6s
nginx-deployment-6c4699c59c-w8clt 0/1 Terminating 0 51s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-566d9f6dfc-hmlf2 1/1 Running 0 13s
</code></pre>
<p>As you can see, everything worked normally with only one replica. </p>
<p>In the latest version of the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">documentation</a> we can read: </p>
<blockquote>
<p>Deployment ensures that only a certain number of Pods are down while
they are being updated. By default, it ensures that at least 75% of
the desired number of Pods are up (25% max unavailable).</p>
<p>Deployment also ensures that only a certain number of Pods are created
above the desired number of Pods. By default, it ensures that at most
125% of the desired number of Pods are up (25% max surge).</p>
</blockquote>
| Mark Watney |
<p>I am using the click-to-deploy repository for Wordpress installation.</p>
<p>the is a commend in the instruction <a href="https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/wordpress/README.md#update-tls-certificate-for-wordpress" rel="nofollow noreferrer">Update TLS certificate for WordPress</a></p>
<blockquote>
<p>If you want to update the certificate that the application uses, copy
the new certificate and key pair in to the /tmp/tls.crt, and
/tmp/tls.key files, and execute the following command:</p>
</blockquote>
<pre><code>kubectl --namespace $NAMESPACE create secret tls $APP_INSTANCE_NAME-tls \
--cert=/tmp/tls.crt --key=/tmp/tls.key --dry-run -o yaml | kubectl apply -f -
</code></pre>
<p>I saw so many video references and articles. They use one VM, and for that they can start their shell very easily. </p>
<p>I am using Kubernetes; it has three VMs, and if I run this command in the wrong place it will destroy the container infrastructure.</p>
<p><strong>In which shell do I run this command so I can implement my SSL?</strong></p>
<p>I tried it on the cluster and this is the output:<a href="https://i.stack.imgur.com/7WXlr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7WXlr.png" alt="Output of this command in a cluster"></a></p>
| Sushen Biswas | <p>From the output I can tell that Your environment variable <code>$NAMESPACE</code> is empty.</p>
<p>So the command:</p>
<pre><code>kubectl --namespace $NAMESPACE create secret tls ...
</code></pre>
<p>is the same as </p>
<pre><code>kubectl --namespace create secret tls ...
</code></pre>
<p>This is why Your output said <code>unknown command "secret" for "kubectl"</code>: the <code>--namespace</code> flag consumed the word <code>create</code> as its value because <code>$NAMESPACE</code> was empty.</p>
<hr>
<p>To fix this, make sure the environment variables are set up correctly.</p>
<p>You can check their values by using:</p>
<pre><code>echo $APP_INSTANCE_NAME
echo $NAMESPACE
</code></pre>
<p>If they are indeed empty or different than expected use like mentioned in the <a href="https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/wordpress/README.md#prepare-the-environment-1" rel="nofollow noreferrer">guide</a>:</p>
<pre><code>export APP_INSTANCE_NAME=wordpress-1
export NAMESPACE=default
</code></pre>
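<p>After re-exporting the variables, re-run the original <code>kubectl create secret tls ...</code> command from the question. You can then verify that the secret exists with:</p>
<pre><code>kubectl --namespace "$NAMESPACE" get secret "$APP_INSTANCE_NAME-tls"
</code></pre>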
<p>Hope it helps.</p>
| Piotr Malec |
<p>I have 3 GKE clusters sitting in 3 different regions on Google Cloud Platform.
I would like to create a Kafka cluster which has one Zookeeper and one Kafka node (broker) in every region (each GKE cluster).</p>
<p>This set-up is intended to survive regional failure (I know a whole GCP region going down is rare and highly unlikely).</p>
<p>I am trying this set-up using this <a href="https://github.com/helm/charts/tree/master/incubator/kafka" rel="nofollow noreferrer">Helm Chart</a> provided by Incubator.</p>
<p>I tried this setup manually on 3 GCP VMs following <a href="https://codeforgeek.com/how-to-setup-zookeeper-cluster-for-kafka/" rel="nofollow noreferrer">this guide</a> and I was able to do it without any issues.</p>
<p>However, setting up a Kafka cluster on Kubernetes seems complicated.</p>
<p>As we know we have to provide the IPs of all the zookeeper server in each zookeeper configuration file like below:</p>
<pre><code>...
# list of servers
server.1=0.0.0.0:2888:3888
server.2=<Ip of second server>:2888:3888
server.3=<ip of third server>:2888:3888
...
</code></pre>
<p>As I can see in the Helm chart <a href="https://github.com/helm/charts/blob/ca895761948d577df1cb37243b6afaf7b077bac3/incubator/zookeeper/templates/config-script.yaml#L82" rel="nofollow noreferrer">config-script.yaml</a> file has a script which creates the Zookeeper configuration file for every deployment.</p>
<p>The part of the script which <em>echos</em> the zookeeper servers looks something like below:</p>
<pre><code>...
for (( i=1; i<=$ZK_REPLICAS; i++ ))
do
echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> $ZK_CONFIG_FILE
done
...
</code></pre>
<p>As of now the configuration that this Helm chart creates has the below Zookeeper server in the configuration with one replica (<em>replica</em> here means <code>Kubernetes Pods replicas</code>).</p>
<pre><code>...
# "release-name" is the name of the Helm release
server.1=release-name-zookeeper-0.release-name-zookeeper-headless.default.svc.cluster.local:2888:3888
...
</code></pre>
<p>At this point, I am clueless and do not know what to do, so that all the Zookeeper servers get included in the configuration file?</p>
<p>How shall I modify the script?</p>
| Amit Yadav | <p>I see you are trying to create 3 node <em>zookeeper</em> cluster on top of 3 different GKE clusters.</p>
<p>This is not an easy task and I am sure there are multiple ways to achieve it
but I will show you one way in which it can be done and I believe it should solve your problem.</p>
<p>The first thing you need to do is create a LoadBalancer service for every zookeeper instance.
After LoadBalancers are created, note down the ip addresses that got assigned
(remember that by default these ip addresses are ephemeral so you might want to change them later to static).</p>
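<p>For illustration, a sketch of one such per-instance LoadBalancer Service; it selects a single StatefulSet pod via the <code>statefulset.kubernetes.io/pod-name</code> label (names and ports follow the chart's defaults and may need adjusting):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: release-name-zookeeper-0-lb
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: release-name-zookeeper-0
  ports:
  - name: client
    port: 2181
  - name: server
    port: 2888
  - name: leader-election
    port: 3888
</code></pre>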
<p>Next thing to do is to create an <a href="https://cloud.google.com/compute/docs/internal-dns" rel="nofollow noreferrer">private DNS zone</a>
on GCP and create A records for every zookeeper LoadBalancer endpoint e.g.:</p>
<pre><code>release-name-zookeeper-1.zookeeper.internal.
release-name-zookeeper-2.zookeeper.internal.
release-name-zookeeper-3.zookeeper.internal.
</code></pre>
<p>and in GCP it would look like this:<br><br>
<a href="https://i.stack.imgur.com/7xjRJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7xjRJ.png" alt="dns"></a></p>
<p>After it's done, just modify <a href="https://github.com/helm/charts/blob/ca895761948d577df1cb37243b6afaf7b077bac3/incubator/zookeeper/templates/config-script.yaml#L48" rel="nofollow noreferrer">this line</a>:</p>
<pre><code>...
DOMAIN=`hostname -d`
...
</code></pre>
<p>to something like this:</p>
<pre><code>...
DOMAIN={{ .Values.domain }}
...
</code></pre>
<p>and remember to set <code>domain</code> variable in <code>Values</code> file to <code>zookeeper.internal</code></p>
<p>so in the end it should look like this:</p>
<pre><code>DOMAIN=zookeeper.internal
</code></pre>
<p>and it should generate the folowing config:</p>
<pre><code>...
server.1=release-name-zookeeper-1.zookeeper.internal:2888:3888
server.2=release-name-zookeeper-2.zookeeper.internal:2888:3888
server.3=release-name-zookeeper-3.zookeeper.internal:2888:3888
...
</code></pre>
<p>Let me know if it is helpful</p>
| Matt |
<p>I need to apply an oc patch to the following deploy, changing the value of "image". But I can't do that because of an error:</p>
<p>DEPLOY YML:</p>
<pre><code>root@oc-jump-pod:/# oc get deploy deploy-test -o json
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "3",
"meta.helm.sh/release-name": "teste",
"meta.helm.sh/release-namespace": "sda-test"
},
"creationTimestamp": "2020-05-25T07:01:14Z",
"generation": 23,
"labels": {
"app.kubernetes.io/instance": "test",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/name": "test",
"app.kubernetes.io/version": "latest",
"helm.sh/chart": "test-1.0.0"
},
"name": "test",
"namespace": "test-1",
"resourceVersion": "19316664",
"selfLink": "/apis/extensions/v1beta1/namespaces/test/deployments/test",
"uid": "863d7397"
},
"spec": {
"progressDeadlineSeconds": 600,
"replicas": 1,
"revisionHistoryLimit": 10,
"selector": {
"matchLabels": {
"app.kubernetes.io/instance": "test",
"app.kubernetes.io/name": "test"
}
},
"strategy": {
"rollingUpdate": {
"maxSurge": "25%",
"maxUnavailable": "25%"
},
"type": "RollingUpdate"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app.kubernetes.io/instance": "test",
"app.kubernetes.io/name": "test"
}
},
"spec": {
"containers": [
{
"env": [
{
"name": "APP_NAME",
"value": "test"
},
{
"name": "JAVA_OPTS_EXT",
"value": "-Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=false -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts -Djavax.net.ssl.trustStorePassword=changeit"
},
{
"name": "SPRING_CLOUD_CONFIG_PROFILE",
"value": "pre"
},
{
"name": "TZ",
"value": "Europe/Madrid"
},
{
"name": "WILY_MOM_PORT",
"value": "5001"
},
{
"name": "spring_application_name",
"value": "test"
},
{
"name": "spring_cloud_config_uri",
"value": "https://config.test.svc.cluster.local"
}
],
"image": "registry.sdi.dev.weu.azure.paas.cloudcenter.corp/test-dev/test-java:0.0.2",
"imagePullPolicy": "Always",
"name": "test",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"resources": {
...
</code></pre>
<p>I'm trying the following:</p>
<pre><code>root@oc-jump-pod:/# oc patch deploy push-engine --type='json' -p='{"spec":{"template":{"metadata":{"spec":{"containers":{"image":"registry.sdi.
dev.weu.azure.paas.cloudcenter.corp/test-dev/test:0.0.1"}}}}}}'
Error from server (BadRequest): json: cannot unmarshal object into Go value of type jsonpatch.Patch
</code></pre>
<p>and this to get the value</p>
<pre><code>root@oc-jump-pod:/# oc get deploy push-engine -o=jsonpath='{..image}'
registry.sdi.dev.weu.azure.paas.cloudcenter.corp/test-dev/test-java:0.0.2
</code></pre>
<p>I need to do that to change the tag of the image from 0.0.2 to 0.0.1 (or other tags). I probably don't understand oc patch yet; currently I make the change manually in the OC console. But that method is crude and does not fit CI/CD.</p>
| RCat | <p>The correct JSON Patch document for your <code>Deployment</code> may look like this:</p>
<pre><code>[
{
"op": "replace",
"path": "/spec/template/spec/containers/0/image",
"value": "registry.sdi.dev.weu.azure.paas.cloudcenter.corp/test-dev/test:0.0.1"
}
]
</code></pre>
<p>Your example is not going to work as it doesn't reflect the structure of your original <code>yaml</code> file. Note that it contains <strong>arrays</strong> <code>[...]</code> and you treated it as if it contained only <strong>maps</strong> <code>{...}</code>.</p>
<p>Your final <code>oc patch</code> command may look as follows:</p>
<pre><code>oc patch deploy push-engine --type='json' -p '[{ "op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "registry.sdi.dev.weu.azure.paas.cloudcenter.corp/test-dev/test:0.0.1" }]'
</code></pre>
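<p>As a side note, if JSON Patch feels heavy for a simple image bump, <code>oc set image</code> can do the same thing (assuming the deployment is called <code>push-engine</code> and its container is named <code>test</code>, as in the JSON above):</p>
<pre><code>oc set image deploy/push-engine test=registry.sdi.dev.weu.azure.paas.cloudcenter.corp/test-dev/test:0.0.1
</code></pre>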
| mario |
<p>My CI tool uses lifecycles, so if the Dev deployment works, it goes on to QA.</p>
<p>I have an end-to-end test container that I want to run in Kubernetes, but how do I get the exit code from the container?</p>
<p>Can I somehow run the container and get back the exit code in one command?</p>
<p><code>kubectl run -it</code>
doesn't seem to get the exit code and has some extra things to say after the container is done.</p>
| tekno45 | <p>To get the exit code from a Pod (container) you can get the pod details with the command:</p>
<pre><code>kubectl get pod termination-demo --output=yaml
</code></pre>
<p>Output:</p>
<pre><code>apiVersion: v1
kind: Pod
...
lastState:
terminated:
containerID: ...
exitCode: 0
finishedAt: ...
message: |
Sleep expired
...
</code></pre>
<p>To know more, you can check the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/#writing-and-reading-a-termination-message" rel="noreferrer">documentation.</a> </p>
<p>To make it easier as you wish you can run: </p>
<pre><code>kubectl get pod busybox-term -ojson | jq .status.containerStatuses[].lastState.terminated.exitCode
</code></pre>
<p>Or if you don't want to install <code>jq</code>, you can run: </p>
<pre><code>kubectl get pod busybox-term --output="jsonpath={.status.containerStatuses[].lastState.terminated.exitCode}"
</code></pre>
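<p>As for running the test and getting the exit code back "in one command": there is no single built-in flag for that as far as I know, but a small wrapper script along these lines should work (the pod name and image are placeholders):</p>
<pre><code>kubectl run e2e-tests --image=registry.example.com/e2e:latest --restart=Never
# wait until the pod has finished (Succeeded or Failed)
while true; do
  phase=$(kubectl get pod e2e-tests -o jsonpath='{.status.phase}')
  [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ] || { sleep 5; continue; }
  break
done
# for a run-once pod the exit code lives under state, not lastState
exit_code=$(kubectl get pod e2e-tests -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}')
kubectl delete pod e2e-tests
exit "$exit_code"
</code></pre>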
| Mark Watney |
<p>I am using kubectl to connect to a remote Kubernetes cluster (v1.15.2). I copied the config from the remote server to my local macOS:</p>
<pre><code>scp -r root@ip:~/.kube/config ~/.kube
</code></pre>
<p>and changed the url to <code>https://kube-ctl.example.com</code>; I exposed the API server to the internet:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURvakNDQW9xZ0F3SUJBZ0lVU3FpUlZSU3FEOG1PemRCT1MyRzlJdGE0R2Nrd0RRWUpLb1pJaHZjTkFRRUwKQlFB92FERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVTTUJBR0ExVUVDeE1KTkZCaGNtRmthV2R0TVJNd0VRWURWUVFECkV3cHJkV0psY201bGRHVnpNQ0FYR3RFNU1Ea3hNekUxTkRRd01Gb1lEekl4TVRrd09ESXdNVFUwTkRBd1dqQm8KTVFzd0NRWURWUVFHRXdKRFRqRVFNQTRHQTFVRUNCTUhRbVZwU21sdVp6RVFNQTRHQTFVRUJ4TUhRbVZwU21sdQpaekVNTUFvR0ExVUVDaE1EYXpoek1SSXdFQVlEVlFRTEV3azBVR0Z5WVdScFoyMHhFekFSQmdOVkJBTVRDbXQxClltVnlibVYwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNzOGFFR2R2TUgKb0E1eTduTjVydnAvQkEyTVM1RG1TNWwzQ0p4S3VMOGJ1bkF3alF1c0lTalUxVWlqeVdGOW03VzA3elZJaVJpRwpiYzZGT3JkSEJ2QXgzazBpT2pPRlduTHp1UjdaSFhqQ3lTbDJRam9YN3gzL0l1MERQelNHTXJLSzZETGpTRTk4CkdadEpjUi9OSmpiVFFJc3FXbWFEdUIyc3dmcEc1ZmlZU1A1KzVpcld1TG1pYjVnWnJYeUJJNlJ0dVV4K1NvdW0KN3RDKzJaVG5QdFF0QnFUZHprT3p3THhwZ0Zhd1kvSU1mbHBsaUlMTElOamcwRktxM21NOFpUa0xvNXcvekVmUApHT25GNkNFWlR6bkdrTWc2aUVGenNBcDU5N2lMUDBNTkR4YUNjcTRhdTlMdnMzYkdOZmpqdDd3WkxIVklLa0lGCm44Mk92cExGaElq2kFnTUJBQUdqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01CMEdBMVVkRGdRV0JCUm0yWHpJSHNmVzFjMEFGZU9SLy9Qakc4dWdzREFOQmdrcWhraUc5dzBCQVFzRgpBQU9DQVFFQW1mOUozN3RYTys1dWRmT2RLejdmNFdMZyswbVJUeTBRSEVIblk5VUNLQi9vN2hLUVJHRXI3VjNMCktUeGloVUhvbHY1QzVUdG8zbUZJY2FWZjlvZlp0VVpvcnpxSUFwNE9Od1JpSnQ1Yk94K1d6SW5qN2JHWkhnZjkKSk8rUmNxQnQrUWsrejhTMmJKRG04WFdvMW5WdjJRNU1pUndPdnRIbnRxd3MvTlJ2bHBGV25ISHBEVExjOU9kVwpoMllzWVpEMmV4d0FRVDkxSlExVjRCdklrZGFPeW9USHZ6U2oybThSTzh6b3JBd09kS1NTdG9TZXdpOEhMeGI2ClhmaTRFbjR4TEE3a3pmSHFvcDZiSFF1L3hCa0JzYi9hd29kdDJKc2FnOWFZekxEako3S1RNYlQyRW52MlllWnIKSUhBcjEyTGVCRGRHZVd1eldpZDlNWlZJbXJvVnNRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://k8s-ctl.example.com
name: kubernetes
contexts:
- context:
cluster: kubernetes
namespace: kube-system
user: admin
name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
user:
</code></pre>
<p>when I get cluster pod info in my local Mac:</p>
<pre><code>kubectl get pods --all-namespaces
</code></pre>
<p>give this error:</p>
<pre><code>Unable to connect to the server: x509: certificate signed by unknown authority
</code></pre>
<p>when I access <code>https://k8s-ctl.example.com</code> in google chrome,the result is:</p>
<pre><code>{
kind: "Status",
apiVersion: "v1",
metadata: { },
status: "Failure",
message: "Unauthorized",
reason: "Unauthorized",
code: 401
}
</code></pre>
<p>What should I do to access the remote k8s cluster successfully using the kubectl client?</p>
<p>One way I have tried is using this <code>.kube/config</code> generated by command, but I get the same result:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority: ssl/ca.pem
server: https://k8s-ctl.example.com
name: default
contexts:
- context:
cluster: default
user: admin
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate: ssl/admin.pem
client-key: ssl/admin-key.pem
</code></pre>
| Dolphin | <p>I've reproduced your problem and as you created your cluster following <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">kubernetes-the-hard-way</a>, you need to follow these steps to be able to access your cluster from a different console. </p>
<p>First you have to copy the following certificates, created while you were bootstrapping your cluster, to the <code>~/.kube/</code> directory on your local machine: </p>
<pre><code>ca.pem
admin.pem
admin-key.pem
</code></pre>
<p>After copying these files to your local machine, execute the following commands: </p>
<pre><code>kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=~/.kube/ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
</code></pre>
<pre><code>kubectl config set-credentials admin \
--client-certificate=~/.kube/admin.pem \
--client-key=~/.kube/admin-key.pem
</code></pre>
<pre><code>kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
</code></pre>
<pre><code>kubectl config use-context kubernetes-the-hard-way
</code></pre>
<p>Notice that you have to replace the <code>${KUBERNETES_PUBLIC_ADDRESS}</code> variable with the remote address to your cluster. </p>
| Mark Watney |
<p>I am totally new to Istio, and the pitch looks very exciting. However, I can't make it work, which probably means I don't use it properly.
My goal is to implement session affinity between 2 services, which is why I originally ended up using Istio. However, I'm doing a very basic test and it does not seem to work:
I have a Kubernetes demo app which has a front service, a stateful service, and a stateless service. From a browser, I access the front-service, which dispatches the request either to the stateful or the stateless service, using K8s service names as URLs: <a href="http://api-stateful" rel="nofollow noreferrer">http://api-stateful</a> or <a href="http://api-stateless" rel="nofollow noreferrer">http://api-stateless</a>.</p>
<p>I want to declare a virtual service to intercept requests sent from the front-service to the stateful-service. I don't declare it as a gateway, as I understood a gateway sits at the external border of the K8s cluster.
I use Docker on Windows with Istio 1.6.</p>
<p>I copy my yaml file below. The basic test I want to do: reroute traffic for api-stateful to api-stateless, to validate that the VirtualService is taken into account. And it does not work. Do you see what is wrong? Is it a wrong usage of VirtualService? My Kiali console does not detect any problem in the setup.</p>
<pre><code>####################################################################
######################### STATEFUL BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
name: stateful-deployment
spec:
replicas: 3
selector:
matchLabels:
app: stateful-backend
tier: backend
template:
metadata:
labels:
app: stateful-backend
tier: backend
spec:
containers:
- name: pocbackend
image: pocbackend:2.0
ports:
- name: http
containerPort: 3000
---
# Service for Stateful containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
name: api-stateful
spec:
selector:
app: stateful-backend
tier: backend
ports:
- protocol: TCP
port: 3002
targetPort: http
---
#####################################################################
######################### STATELESS BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
name: stateless-backend
spec:
replicas: 3
selector:
matchLabels:
app: stateless-backend
tier: backend
template:
metadata:
labels:
app: stateless-backend
tier: backend
spec:
containers:
- name: pocbackend
image: pocbackend:2.0
ports:
- name: http
containerPort: 3000
---
# Service for Stateless containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
name: api-stateless
spec:
selector:
app: stateless-backend
tier: backend
ports:
- protocol: TCP
port: 3001
targetPort: http
---
#############################################################
######################### FRONT END #########################
# deployment of the container pocfrontend listening to port 3500
apiVersion: apps/v1
kind: Deployment
metadata:
name: front-deployment
spec:
replicas: 2
selector:
matchLabels:
app: frontend
tier: frontend
template:
metadata:
labels:
app: frontend
tier: frontend
spec:
containers:
- name: pocfrontend
image: pocfrontend:2.0
ports:
- name: http
containerPort: 3500
---
# Service exposing frontend on node port 85
apiVersion: v1
kind: Service
metadata:
name: frontend-service
spec:
type: NodePort
selector:
app: frontend
tier: frontend
ports:
- protocol: TCP
port: 3500
targetPort: http
nodePort: 30000
---
##############################################################
############ ISTIO PROXY FOR API-STATEFUL SERVIC E############
##############################################################
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: api-stateful-proxy
spec:
hosts:
- api-stateful
http:
- route:
- destination:
host: api-stateless
</code></pre>
| Joel | <p>As mentioned in comments this can be fixed with <code>DestinationRule</code> with sticky session configuration.</p>
<p>Example of that can be found in istio <a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<h2>LoadBalancerSettings<a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings" rel="nofollow noreferrer"></a></h2>
<p>Load balancing policies to apply for a specific destination. See Envoy’s load balancing <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/load_balancing" rel="nofollow noreferrer">documentation</a> for more details.</p>
<p>For example, the following rule uses a round robin load balancing policy for all traffic going to the ratings service.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: bookinfo-ratings
spec:
host: ratings.prod.svc.cluster.local
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
</code></pre>
<p>The following example sets up sticky sessions for the ratings service hashing-based load balancer for the same ratings service using the the User cookie as the hash key.</p>
<pre class="lang-yaml prettyprint-override"><code> apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: bookinfo-ratings
spec:
host: ratings.prod.svc.cluster.local
trafficPolicy:
loadBalancer:
consistentHash:
httpCookie:
name: user
ttl: 0s
</code></pre>
</blockquote>
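<p>Adapted to Your services, a minimal sketch (assuming session affinity is needed towards <code>api-stateful</code> and that cookie-based hashing is acceptable) could look like this:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api-stateful-sticky
spec:
  host: api-stateful
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: user
          ttl: 0s
</code></pre>
<p>The cookie name is just an example; any cookie Your front-service consistently sends can serve as the hash key.</p>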
| Piotr Malec |