prompt | response
---|---
<p>I set up a <strong>self-hosted</strong> registry on my machine to store docker image files, in order to test it thoroughly using <strong>minikube</strong> (a lightweight Kubernetes implementation for local development).</p>
<p>I'm able to successfully push and pull repositories from the local registry using the <strong>docker push</strong> and <strong>docker pull</strong> commands, but while trying to run a pod locally I face the issue below:</p>
<p><strong>Error</strong> </p>
<pre><code>Failed to pull image "localhost:5000/dev/customer:v1": rpc error: code = Unknown desc
= Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
</code></pre>
<p>Here's the list of events I noticed while inspecting the pod. </p>
<p><strong>Pod Events</strong></p>
<pre><code>Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned custappdeployment-6c8ddcc5d8-2zdfn to minikube
Normal SuccessfulMountVolume 16m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-s5nlw"
Normal BackOff 16m (x2 over 16m) kubelet, minikube Back-off pulling image "localhost:5000/dev/customer:v1"
Warning Failed 16m (x2 over 16m) kubelet, minikube Error: ImagePullBackOff
Normal Pulling 15m (x3 over 16m) kubelet, minikube pulling image "localhost:5000/dev/customer:v1"
Warning Failed 15m (x3 over 16m) kubelet, minikube Failed to pull image "localhost:5000/dev/customer:v1": rpc error: code = Unknown desc
= Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
Warning Failed 15m (x3 over 16m) kubelet, minikube Error: ErrImagePull
</code></pre>
<p>Please see the docker pull command output below.</p>
<pre><code>PS C:\Sunny\Projects\NodeApps\Nodejs-Apps\Customer> docker pull localhost:5000/dev/customer:v1
v1: Pulling from dev/customer
Digest: sha256:edf0b716728b1cc00f7d8eed76fb3bdceadf1a60a46b9e4f80d983a64389e95c
Status: Image is up to date for localhost:5000/dev/customer:v1
</code></pre>
| <p>Well, I wouldn't expect <strong>your</strong> <code>localhost</code> to be the same as <code>localhost</code> from <code>minikube</code>'s point-of-view. Actually, I wouldn't expect your localhost to be the same localhost from <em>anything</em> in the kubernetes world's point-of-view.</p>
<p>So, some practical things you'll want to check:</p>
<ul>
<li><p>is port 5000 accessible from not-your-machine (meaning could the minikube virtual machine plausibly pull from port 5000 on your machine)</p>
<p>this question likely has some intersection with the point right below, because your registry may very well be listening on one of the internal adapters, but that's not what your machine knows itself as, or the opposite</p></li>
<li><p>can minikube resolve the hostname that your machine presents itself as (because I actually don't <em>think</em> you can use an IP address in a docker image reference); obviously if that assumption isn't true, then no need to worry about this part</p></li>
<li><p>and be sure that docker either treats your registry as insecure (not verifying its CA), or that you have already loaded the registry's cert onto minikube and bounced docker</p></li>
</ul>
<p>You can always cheat, since minikube is one virtual machine (and not a whole fleet of Nodes): <code>docker save $the_image_ref | minikube ssh docker load</code> will side-step the pull entirely (unless you have <code>imagePullPolicy: Always</code>, but that's easily fixed).</p>
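<p>For the record, that <code>imagePullPolicy</code> tweak is just a field on the container spec; a minimal sketch of where it sits, reusing the image reference from the question (the container name is illustrative):</p>
<pre><code>spec:
  containers:
  - name: customer
    image: localhost:5000/dev/customer:v1
    # use the image already loaded on the node instead of forcing a registry pull
    imagePullPolicy: IfNotPresent
</code></pre>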
|
<p>I used helm stable charts to install MongoDB in my AWS Kubernetes cluster. When I run helm install mongodb for the first time, there are no issues: all pods run and I am able to access the db too.</p>
<p>However, when I run helm install mongodb a second time with a new release name, the pod logs show that mongodb is running successfully, yet the status shows otherwise:</p>
<pre><code>request-form-mongo-mongodb-7f8478854-t2g8z 1/1 Running 0 3m
scheduled-task-mongo-mongodb-8689677f67-tzhr9 0/1 CrashLoopBackOff 4 2m
</code></pre>
<p>When I checked the <code>describe pod</code> output for the failing pod,</p>
<p>everything seems fine, but the last two lines show this warning:</p>
<pre><code> Normal Created 7m (x4 over 8m) kubelet, ip-172-20-38-19.us-west-2.compute.internal Created container
Normal Started 7m (x4 over 8m) kubelet, ip-172-20-38-19.us-west-2.compute.internal Started container
Warning FailedSync 7m (x6 over 8m) kubelet, ip-172-20-38-19.us-west-2.compute.internal Error syncing pod
Warning BackOff 2m (x26 over 8m) kubelet, ip-172-20-38-19.us-west-2.compute.internal Back-off restarting failed container
</code></pre>
<p>What could be the problem, and how to resolve this?</p>
| <p>Yes, we can deploy multiple instances of MongoDB on the same cluster using a Helm package.</p>
<p>The above issue was due to not allocating enough resources for my PV (persistent volume). The issue got resolved when I dedicated a minimum of 1Gi of storage to my PV and created the corresponding PVC. </p>
<p>Once enough resources were allocated, I installed MongoDB using Helm successfully.</p>
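<p>For illustration, a minimal sketch of the kind of PV/PVC pair described above (the names and the EBS volume ID are placeholders; adjust them to whatever your chart's values expect):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    # placeholder EBS volume ID
    volumeID: vol-0123456789abcdef0
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>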
|
<p>We have helm charts to deploy our application. We use a <code>configuration.json</code> file for application properties and load them into a ConfigMap. But users typically use their own configuration file. </p>
<p>The default configuration.json file is packaged inside the helm charts under the data directory. This file is read as </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
{{ (.Files.Glob .Values.appConfigFile).AsConfig | indent 4}}
</code></pre>
<p>And in values</p>
<pre><code>appConfigFile: data/configuration.json
</code></pre>
<p>If users install our charts directly from the repository, how can this configuration file be overridden? Doing <code>--set appConfigFile=/path/to/custom.json</code> doesn't populate the config map.</p>
<p>If the charts are untarred to a directory, users can add the custom configuration file into the charts directory, and passing the configuration file using <code>--set appConfigFile=customData/custom.json</code> works. </p>
<p>Can file overrides be achieved when charts are deployed from repository directly?</p>
| <p>Adding the custom configuration to a values file and executing <code>helm install</code> with the <code>-f</code> flag is a solution.</p>
<p>customValues.yaml</p>
<pre><code>overrideConfig: true
customConfig:{
//Add your custom json here as variable value
}
</code></pre>
<p>Config map yaml </p>
<pre><code>#If custom values file passed then overrideConfig variable will be set.
#So load configmap from customConfig variable
{{ if .Values.overrideConfig}}
app-config.json : |-
{{ toJson .Values.customConfig }}
{{ else }}
# Else load from the default configuration available in charts.
{{ (.Files.Glob .Values.appConfigFile).AsConfig | indent 4 }}
{{ end }}
</code></pre>
<p>If custom configuration is needed </p>
<pre><code>helm install -f customValues.yaml repo/chartName
</code></pre>
<p>Not sure if this is the perfect solution, but ended up taking this route.</p>
|
<p>It is a recommended best practice to not run dockerized Node.JS applications as PID 1 (see <a href="https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md#handling-kernel-signals" rel="noreferrer">https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md#handling-kernel-signals</a>) in order to correctly capture signals.</p>
<p>The <code>docker run</code> command provides the <code>--init</code> flag to wrap the application entry point with a small init system that forwards signals correctly.</p>
<p>Is there a built-in equivalent of the <code>--init</code> flag in Kubernetes?</p>
<p>I've explored the Pod and Container object specifications for Kubernetes 1.10 but have not seen anything related to specifying how the image gets started.</p>
<p>An alternative would be to explicitly include and use <a href="https://github.com/krallin/tini" rel="noreferrer">Tini</a> in every container, but I would really like some way that does it transparently the way the <code>--init</code> flag behaves.</p>
<p>Are there other alternatives?</p>
| <p>If you enable process (PID) namespace sharing for your pods, the init process (<code>pause</code>) will come from Kubernetes. If you have a separate process namespace for your containers, they need to include <code>tini</code> or another init process themselves.</p>
<p>According to <a href="https://www.ianlewis.org/en/almighty-pause-container" rel="noreferrer">https://www.ianlewis.org/en/almighty-pause-container</a>, Kubernetes 1.7 had a shared process namespace by default and a kubelet flag to disable it, 1.8 had it off by default and a kubelet flag to enable it. Kubernetes 1.11 has an alpha feature to enable a shared process namespace:
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/</a></p>
|
<p>I have created a workload on Rancher. This workload is created from an image which is hosted on a gitlab-ci project registry.</p>
<p>I want to force rancher to download a new version of this image and upgrade workload.</p>
<p>I want to do this from a .gitlab-ci.yml script. How to do this with Rancher <strong>version 2</strong>? With Rancher 1.6 I used this script:</p>
<pre><code>deploy:
stage: deploy
image: cdrx/rancher-gitlab-deploy
script:
- upgrade --stack mystack --service myservice --no-start-before-stopping
</code></pre>
| <p>In rancher 2, much of the management of workloads is delegated to Kubernetes via its api or CLI (kubectl).</p>
<p>You could patch the deployment to specify a new image/version, but if you are using a tag like <code>:latest</code> which moves, you will need to force Kubernetes to redeploy the pods by changing something about the deployment spec. </p>
<p>One common way to do this is to change/add an environment variable, which forces a redeploy. </p>
<p>In Gitlab, set two variables in your gitlab project or group to pass authentication information into the build.</p>
<p>The <code>kubectl patch</code> will update or add an environment variable called <code>FORCE_RESTART_AT</code> on your deployment's container that will force a redeploy each time it is set because Gitlab's pipeline ID changes. </p>
<p>You will need to specify the namespace, the name of your deployment, the name of the container and the image. If the image tag is changing, there is no need to supply the environment variable. If you are using <code>:latest</code>, be sure that your container's <code>imagePullPolicy: Always</code> is set, which is the default if Kubernetes detects an image using <code>:latest</code>.</p>
<p>The image <code>diemscott/rancher-cli-k8s</code> is a simple image derived from <code>rancher/cli</code> that also includes <code>kubectl</code>.</p>
<pre><code>RANCHER_SERVER_URL=https://rancher.example.com
RANCHER_API_TOKEN="token-sd5kk:d27nrsstx6z5blxgkmspqv94tzkptnrpj7rkcrt7vtxt28tvw4djxp"
deploy:
stage: deploy
image: diemscott/rancher-cli-k8s:v2.0.2
script:
- rancher login "$RANCHER_SERVER_URL" -t "$RANCHER_API_TOKEN"
- rancher kubectl --namespace=default patch deployment nginx --type=strategic -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image": "nginx","env":[{"name":"FORCE_RESTART_AT","value":"$CI_PIPELINE_ID"}]}]}}}}'
</code></pre>
|
<ol>
<li>I've created the persistent volume (EBS 10G) and the corresponding persistent volume claim first. But when I try to deploy the postgresql pods as below (yaml file):</li>
</ol>
<p><a href="https://i.stack.imgur.com/1bM88.png" rel="noreferrer">test-postgresql.yaml</a></p>
<p>Receive the errors from pod:</p>
<p><strong>initdb: directory "/var/lib/postgresql/data" exists but is not empty
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.</strong></p>
<p>Why can't the pod use this path? I've tried the same tests on minikube and didn't run into any problem. </p>
<ol start="2">
<li>I tried changing the volume mount directory path to "/var/lib/test/data", and the pods could run. I created a new table and some data in it, and then killed this pod. Kubernetes created a new pod, but the new one didn't preserve the previous data and table. </li>
</ol>
<p>So what's the way to correctly mount a postgresql volume using Aws EBS in Kubernetes, so that the recreated pods can reuse the initial database stored in EBS?</p>
| <blockquote>
<p>So what's the way to correctly mount a postgresql volume using Aws EBS</p>
</blockquote>
<p>You are on the right path...</p>
<p>The error you get is because you want to use the root folder of the mounted volume <code>/</code> as the postgresql data dir, and postgresql complains that it is not best practice to do so since it is not empty and already contains some data inside (namely the <code>lost+found</code> directory).</p>
<p>It is far better to locate the data dir in a separate, empty subfolder (<code>/postgres</code> for example) and give postgresql a clean slate when creating its file structure. You didn't get the same thing on minikube since you most probably mounted a host folder that didn't have anything inside (was empty) and so didn't trigger such a complaint.</p>
<p>To do so, you would need an initially empty <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">subPath</a> of your volume (an empty <code>/postgres</code> subfolder on your PV for example) mounted to the appropriate mount point (<code>/var/lib/postgresql/data</code>) in your pod. Note that you can give the subPath and the mount point's end folder the same name; they are different here just as an example, where the <code>test-db-volume/postgres</code> folder would be mounted in the pod to the <code>/var/lib/postgresql/data</code> folder:</p>
<pre><code>...
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: test-db-volume
subPath: postgres
...
</code></pre>
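<p>Alternatively, the official <code>postgres</code> image lets you point the <code>PGDATA</code> environment variable at a subdirectory of the mount, which avoids the <code>lost+found</code> complaint without <code>subPath</code>; a minimal sketch, assuming the stock image:</p>
<pre><code>...
    containers:
    - name: postgres
      image: postgres:10
      env:
      # the data dir lives one level below the mount point
      - name: PGDATA
        value: /var/lib/postgresql/data/pgdata
      volumeMounts:
      - mountPath: /var/lib/postgresql/data
        name: test-db-volume
...
</code></pre>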
|
<p>I am very new to OpenWhisk and have some difficulties in the setup.
The Nginx Pod is running into a CrashLoopBackOff because of an error in the Pod.</p>
<pre><code>2018/07/02 16:14:27 [emerg] 1#1: host not found in resolver "kube-dns.kube-system" in /etc/nginx/nginx.conf:41
nginx: [emerg] host not found in resolver "kube-dns.kube-system" in /etc/nginx/nginx.conf:41
</code></pre>
<p>I cannot jump into the Pod itself but I ran a Docker Container with the same Image the Pod is using and looked inside the nginx.conf:</p>
<pre><code>user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
</code></pre>
<p>When I looked into the conf.d directory I found a single default.conf file in which the server_name was set to localhost:</p>
<pre><code>server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/log/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
</code></pre>
<p>I believe this is causing the issue and that the kube-dns service cannot resolve localhost. </p>
<p>However I do not know how I can resolve this issue or at least work around it.
Maybe I can set a static hostname for the Pod in the Nginx Deployment and enter that hostname into the nginx config?</p>
<p>Could someone provide a workaround or even a fix?</p>
<p>Many thanks.</p>
| <p>kubeadm gets and checks environment from your currently running host OS session.</p>
<p>You can check if proxy has been set by executing the below command:</p>
<pre><code>env | grep _proxy
</code></pre>
<p>In environments where a proxy server is configured to access the internet services, such as the Docker Hub or the Oracle Container Registry, you may need to perform several configuration steps to get Kubernetes to install and to run correctly.</p>
<ul>
<li><p>Ensure that the Docker engine startup configuration on each node in the cluster is configured to use the proxy server. For instance, create a systemd service drop-in file at /etc/systemd/system/docker.service.d/http-proxy.conf with the following contents:</p>
<p><code>[Service]</code></p>
<p><code>Environment="HTTP_PROXY=http://proxy.example.com:80/"</code>
<code>Environment="HTTPS_PROXY=https://proxy.example.com:443/"</code></p>
<p>Replace <a href="http://proxy.example.com:80/" rel="nofollow noreferrer">http://proxy.example.com:80/</a> with the URL for your HTTP proxy service. If you have an HTTPS proxy and you have specified this as well, replace <a href="https://proxy.example.com:443/" rel="nofollow noreferrer">https://proxy.example.com:443/</a> with the URL and port for this service. If you have made a change to your Docker systemd service configuration, run the following commands:</p>
<p><code>systemctl daemon-reload; systemctl restart docker</code></p></li>
<li><p>You may need to set the http_proxy or https_proxy environment variables to be able to run other commands on any of the nodes in your cluster. For example:</p>
<p><code>export http_proxy="http://proxy.example.com:80/"</code></p>
<p><code>export https_proxy="https://proxy.example.com:443/"</code></p></li>
<li><p>Disable the proxy configuration for the localhost and any node IPs in the cluster:</p>
<p><code>export no_proxy="127.0.0.1, 192.0.2.10, 192.0.2.11, 192.0.2.12"</code></p></li>
</ul>
<p>These steps should be sufficient to enable the deployment to function normally. Use of a transparent proxy that does not require configuration on the host and which ignores internal network requests, can reduce the complexity of the configuration and may help to avoid unexpected behavior.</p>
|
<p>I have a multilayer application consisting of many containers, and at the top level I have a REST API. I managed to run this application and I can access the REST API using the public IP.
I am using the Google Cloud Kubernetes Engine.
I would like to make many replicas of this multilayer application, where each one should have a public IP so as to be able to communicate.
Is it possible to have many public IPs (not a load balancer), each one pointing to a replica?</p>
| <p>You should use statefulsets. Then create as many load balancer services as you have replicas. Use the custom labels that are added to the statefulset pods by default to select one service per pod (see <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-identity" rel="nofollow noreferrer">here</a>). This will serve your purpose. </p>
<p>Also you can use <code>external-dns.alpha.kubernetes.io/hostname</code> in the <code>annotations</code> for your services to provide your own public dns for each of the pods.</p>
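<p>For illustration, a minimal sketch of one such per-pod Service, assuming a StatefulSet named <code>myapp</code> (the StatefulSet controller labels its first pod <code>statefulset.kubernetes.io/pod-name: myapp-0</code>, which is the default label referred to above; the hostname and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp-0-public
  annotations:
    # optional: per-pod DNS name via external-dns, as mentioned above
    external-dns.alpha.kubernetes.io/hostname: myapp-0.example.com
spec:
  type: LoadBalancer
  selector:
    # label added automatically by the StatefulSet controller
    statefulset.kubernetes.io/pod-name: myapp-0
  ports:
  - port: 80
    targetPort: 8080
</code></pre>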
|
<p>I'm running into what looks like a configuration issue! I am using traefik as the ingress controller within kubernetes, and I have an ingress with some URL rules to route frontends to various backends. Let's say I have something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
annotations:
kubernetes.io/ingress.class: traefik
traefik.frontend.rule.type: ReplacePathRegex
spec:
rules:
- host: foo.io
http:
paths:
- path: /api/authservice/(.*) /$1
backend:
serviceName: auth
servicePort: 8901
- path: /api/svcXXX/v1/files/cover/(.*) /v1/files/cover/$1
backend:
serviceName: files
servicePort: 8183
- path: /api/svcXXX/v1/files/image/(.*) /v1/files/image/$1
backend:
serviceName: files
servicePort: 8183
</code></pre>
<p>Using Postman (or any other client), if I POST a request on <code>http://foo.io/api/authservice/auth/oauth/token</code>, while looking in the access logs, it seems that it is routed to <code>http://foo.io/api/svcXXX/v1/files/image/(.*) /v1/files/image/$1</code>. I'm seeing this in the access logs:</p>
<p><code>[03/Jul/2018:12:57:17 +0000] "POST /api/authservice/auth/oauth/token HTTP/1.1" 401 102 "-" "PostmanRuntime/7.1.5" 15 "foo.io/api/svcXXX/v1/files/image/(.*) /v1/files/image/$1" 37ms</code></p>
<p>Am I doing something wrong ?</p>
| <p><strong>Note:</strong> since the documentation is changed, I've updated the links, but content on the documentation pages would be different.</p>
<p><code>ReplacePathRegex</code> is a <strong>modifier</strong> rule. According to <a href="https://doc.traefik.io/traefik/v1.7/basics/#modifiers" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Modifier rules only modify the request. They do not have any impact on routing decisions being made.</p>
<p>Following is the list of existing modifier rules:</p>
<ul>
<li><code>AddPrefix</code>: <code>/products</code>: Add path prefix to the existing request path prior to forwarding the request to the backend.</li>
<li><code>ReplacePath</code>: <code>/serverless-path</code>: Replaces the path and adds the old path to the X-Replaced-Path header. Useful for mapping to AWS Lambda or Google Cloud Functions.</li>
<li><code>ReplacePathRegex</code>: <code>^/api/v2/(.*) /api/$1</code>: Replaces the path with a regular expression and adds the old path to the X-Replaced-Path header. Separate the regular expression and the replacement by a space.</li>
</ul>
</blockquote>
<p>To route requests, you should use <a href="https://doc.traefik.io/traefik/v1.7/basics/#matchers" rel="nofollow noreferrer"><strong>matchers</strong></a>:</p>
<blockquote>
<p>Matcher rules determine if a particular request should be forwarded to a backend.</p>
<p>Separate multiple rule values by , (comma) in order to enable ANY semantics (i.e., forward a request if any rule matches). Does not work for Headers and HeadersRegexp.</p>
<p>Separate multiple rule values by ; (semicolon) in order to enable ALL semantics (i.e., forward a request if all rules match).</p>
<p><strong>Path Matcher Usage Guidelines</strong></p>
<p>This section explains when to use the various path matchers.</p>
<p>Use <code>Path</code> if your backend listens on the exact path only. For instance,
<code>Path: /products</code> would match <code>/products</code> but not <code>/products/shoes</code>.</p>
<p>Use a <code>*Prefix*</code> matcher if your backend listens on a particular base
path but also serves requests on sub-paths. For instance, <code>PathPrefix: /products</code> would match <code>/products</code> but also <code>/products/shoes</code> and
<code>/products/shirts</code>. Since the path is forwarded as-is, your backend is
expected to listen on /products.</p>
<p>Use a <code>*Strip</code> matcher if your backend listens on the root path (/) but
should be routable on a specific prefix. For instance,
<code>PathPrefixStrip: /products</code> would match <code>/products</code> but also
<code>/products/shoes</code> and <code>/products/shirts</code>. Since the path is stripped prior
to forwarding, your backend is expected to listen on <code>/</code>. If your
backend is serving assets (e.g., images or Javascript files), chances
are it must return properly constructed relative URLs. Continuing on
the example, the backend should return <code>/products/shoes/image.png</code> (and
not <code>/images.png</code> which Traefik would likely not be able to associate
with the same backend). The <code>X-Forwarded-Prefix</code> header (available since
Traefik 1.3) can be queried to build such URLs dynamically.</p>
<p>Instead of distinguishing your backends by path only, you can add a
Host matcher to the mix. That way, namespacing of your backends
happens on the basis of hosts in addition to paths.</p>
</blockquote>
<p>Full list of matchers and their descriptions can be found <a href="https://doc.traefik.io/traefik/middlewares/overview/" rel="nofollow noreferrer">here</a></p>
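<p>For the Ingress in the question, a <code>*Strip</code> matcher would give roughly the intended behaviour (strip <code>/api/authservice</code> or <code>/api/svcXXX</code> and forward the rest). A minimal sketch, assuming the backends listen on the stripped paths; the regex replacement suffixes are simply dropped:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: traefik
    # strip the matched prefix before forwarding to the backend
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - host: foo.io
    http:
      paths:
      - path: /api/authservice
        backend:
          serviceName: auth
          servicePort: 8901
      - path: /api/svcXXX
        backend:
          serviceName: files
          servicePort: 8183
</code></pre>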
|
<p>I am trying to create multiple roles in Kubernetes in a single YAML file.<br>
I am not sure if it is possible. </p>
<p>This is what I tried: </p>
<pre><code>cat > ~/tmp/Role.yaml <<EOF
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: default
name: testRole1
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["pods"]
verbs: ["get", "list"]
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: default
name: testRole2
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["pods"]
verbs: ["get"]
EOF
kubectl create -f ~/tmp/Role.yaml
</code></pre>
<p>It creates only the latter - <code>testRole2</code>. </p>
<p>Is it possible to do it in one YAML file or I need to create YAML file for each Role ? </p>
| <p>I just received an answer from @liggitt. </p>
<p>You need to add a document separator between them (<code>---</code>). </p>
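<p>For example, the file from the question with the separator added:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: testRole1
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["pods"]
  verbs: ["get", "list"]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: testRole2
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["pods"]
  verbs: ["get"]
</code></pre>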
|
<p>How can I create a Pod using REST API ? </p>
<p>I checked the Kubernetes API documentation:<br>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#-strong-write-operations-strong--54" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#-strong-write-operations-strong--54</a> </p>
<p>They are writing that need to use POST request:<br>
<code>POST /api/v1/namespaces/{namespace}/pods</code> </p>
<p>I have this YAML of simple nginx pod: </p>
<pre><code>cat > nginx-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: nginx1
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
EOF
</code></pre>
| <p>You need to translate the YAML file to a JSON file: </p>
<pre><code>cat > nginx-pod.json <<EOF
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "nginx1"
},
"spec": {
"containers": [
{
"name": "nginx",
"image": "nginx:1.7.9",
"ports": [
{
"containerPort": 80
}
]
}
]
}
}
EOF
</code></pre>
<p>Use the curl command like this: </p>
<pre><code>curl -k -v -X POST -H "Authorization: Bearer <JWT_TOKEN>" -H "Content-Type: application/json" https://127.0.0.1:6443/api/v1/namespaces/default/pods [email protected]
</code></pre>
<p>Of course, the token you are using should have permissions to create pod. </p>
<p>If someone has a way to do it without converting to JSON, please share. </p>
|
<p>I am trying to install kubernetes using kubespray.</p>
<p><a href="https://github.com/patsevanton/create-kubernetes-kubespray-gitlab-ci" rel="nofollow noreferrer">github repo</a></p>
<p>I change the ClusterIP to NodePort on the command line, without an editor:</p>
<pre><code>kubectl -n kube-system get service kubernetes-dashboard -o yaml > kube-dash-svc.yaml
sed 's/ClusterIP/NodePort/' kube-dash-svc.yaml > new-kube-dash-svc.yaml
kubectl delete svc kubernetes-dashboard --namespace kube-system
kubectl create -f new-kube-dash-svc.yaml
</code></pre>
<p>After installing and changing ClusterIP to NodePort, I try to install kubernetes again and get this error:</p>
<p>err='The Service \"kubernetes-dashboard\" is invalid: \n* metadata.resourceVersion: Invalid value: \"\": must be specified for an update\n* spec.clusterIP: Invalid value: \"\": field is immutable\n* spec.ports[0].nodePort: Forbidden: may not be used when <code>type</code> is 'ClusterIP'\n'"}</p>
<p>What is the correct way to change ClusterIP to NodePort with kubespray?</p>
<p>Thanks</p>
| <p>A ClusterIP service can't be changed to NodePort simply by replacing ClusterIP with NodePort; since ClusterIP is the default, there may be no 'ClusterIP' in the definitions at all.</p>
<p>You should delete everything involving 'clusterIP/ClusterIP', then insert 'type: NodePort' in the proper place, something like:</p>
<pre><code>sed '/clusterIP/d;/ClusterIP/d;/^[ ]*ports:/i \ type: NodePort'
</code></pre>
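<p>After that edit, the relevant part of the Service should look roughly like this (a sketch; the selector and ports are placeholders for whatever the exported dashboard Service already contains):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  selector:
    # keep the existing selector from the exported yaml
    k8s-app: kubernetes-dashboard
  ports:
  # keep the existing ports from the exported yaml
  - port: 443
    targetPort: 8443
</code></pre>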
|
<p>EKS documentation says </p>
<blockquote>
<p>"When you create an Amazon EKS cluster, the IAM entity (user or role) is automatically granted system:master permissions in the cluster's RBAC configuration". </p>
</blockquote>
<p>But after the EKS cluster creation, if you check the aws-auth config map, it does NOT have the ARN mapping to <code>system:masters</code> group. But I am able to access the cluster via kubectl. So if the aws-auth (heptio config map) DOES NOT have my ARN (I was the one who created the EKS cluster) mapped to <code>system:masters</code> group, how does the heptio aws authenticator authenticate me?</p>
| <p>I got to know the answer. Basically, on the heptio server side component, the static mapping for system:masters is done under /etc/kubernetes/aws-iam-authenticator/ (<a href="https://github.com/kubernetes-sigs/aws-iam-authenticator#3-configure-your-api-server-to-talk-to-the-server" rel="noreferrer">https://github.com/kubernetes-sigs/aws-iam-authenticator#3-configure-your-api-server-to-talk-to-the-server</a>), which is mounted into the heptio authenticator pod. Since you do not have access to this in EKS, you can't see it. However, if you do invoke /authenticate yourself with the pre-signed request, you should get the TokenReviewStatus response from the heptio authenticator showing the mapping of the ARN (of whoever created the cluster) to the system:masters group!</p>
|
<p>I have a working Kubernetes cluster that I want to monitor with Grafana.</p>
<p>I have been trying out many dashboards from <a href="https://grafana.com/dashboards" rel="nofollow noreferrer">https://grafana.com/dashboards</a> but they all seem to have some problems: it looks like there's a mismatch between the Prometheus metric names and what the dashboard expects.</p>
<p>Eg if I look at this recently released, quite popular dashboard: <a href="https://grafana.com/dashboards/5309/revisions" rel="nofollow noreferrer">https://grafana.com/dashboards/5309/revisions</a></p>
<p>I end up with many "holes" when running it: </p>
<p><a href="https://i.stack.imgur.com/TJ2ls.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TJ2ls.png" alt="grafana dashboard with missing values"></a></p>
<p>Looking into the panel configuration, I see that the issues come from small key changes, eg <code>node_memory_Buffers</code> instead of <code>node_memory_Buffers_bytes</code>.</p>
<p>Similarly the dashboard expects <code>node_disk_bytes_written</code> when Prometheus provides <code>node_disk_written_bytes_total</code>.</p>
<p>I have tried out a <em>lot</em> of Kubernetes-specific dashboards and I have the same problem with almost all of them.</p>
<p>Am I doing something wrong?</p>
| <p>The Prometheus node exporter changed a lot of the metric names in the 0.16.0 version to conform to new naming conventions.</p>
<p>From <a href="https://github.com/prometheus/node_exporter/releases/tag/v0.16.0" rel="nofollow noreferrer">https://github.com/prometheus/node_exporter/releases/tag/v0.16.0</a>:</p>
<blockquote>
<p><strong>Breaking changes</strong></p>
<p>This release contains major breaking changes to metric names. Many
metrics have new names, labels, and label values in order to conform
to current naming conventions.</p>
<ul>
<li>Linux node_cpu metrics now break out <code>guest</code> values into separate
metrics. </li>
<li>Many counter metrics have been renamed to include <code>_total</code>. </li>
<li>Many metrics have been renamed/modified to include
base units, for example <code>node_cpu</code> is now <code>node_cpu_seconds_total</code>.</li>
</ul>
</blockquote>
<p>See also the <a href="https://github.com/prometheus/node_exporter/blob/v0.16.0/docs/V0_16_UPGRADE_GUIDE.md" rel="nofollow noreferrer">upgrade guide</a>. One of its suggestion is to use <a href="https://github.com/prometheus/node_exporter/blob/v0.16.0/docs/example-16-compatibility-rules.yml" rel="nofollow noreferrer">compatibility rules</a> that will create duplicate metrics with the old names.</p>
<p>Otherwise use version 0.15.x until the dashboards are updated, or fix them!</p>
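<p>If you go the compatibility-rules route, each rule simply re-records the old metric name from the new one; a minimal sketch for one of the metrics mentioned above, assuming Prometheus 2.x YAML rule files:</p>
<pre><code>groups:
- name: node-exporter-compat
  rules:
  # recreate the pre-0.16.0 name that the dashboard queries
  - record: node_memory_Buffers
    expr: node_memory_Buffers_bytes
</code></pre>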
|
<p>I am testing Rancher 2 as a Kubernetes interface. Rancher 2 is launched with docker-compose, using image rancher/rancher:latest.</p>
<p>Everything is OK for clusters, nodes, pods. Then I try to secure some load balancers with certificates. To do so, I install cert-manager from the catalog/helm.</p>
<p><a href="https://i.stack.imgur.com/PoG4k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PoG4k.png" alt="enter image description here"></a></p>
<p>I have tried to follow this video tutorial (<a href="https://www.youtube.com/watch?v=xc8Jg9ItDVk" rel="nofollow noreferrer">https://www.youtube.com/watch?v=xc8Jg9ItDVk</a>) which explains how to create an issuer and a certificate, and how to link it to a load balancer.</p>
<p>I create a file for the issuer :</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: letsencrypt-private-key
http01: {}
</code></pre>
<p>It's time to create the issuer.</p>
<pre><code>sudo docker-compose exec rancher bash
</code></pre>
<p>I am connected to the Rancher container. <code>kubectl</code> and <code>helm</code> are installed.</p>
<p>I try to create the issuer :</p>
<pre><code>kubectl create -f etc/cert-manager/cluster-issuer.yaml
error: unable to recognize "etc/cert-manager/cluster-issuer.yaml": no matches for certmanager.k8s.io/, Kind=ClusterIssuer
</code></pre>
<p>Additional informations :</p>
<p>When I do <code>helm list</code>: </p>
<pre><code>Error: could not find a ready tiller pod
</code></pre>
<p>I get the pods to find tiller :</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
tiller-deploy-6ffc49c5df-zbjg8 0/1 Pending 0 39m
</code></pre>
<p>I describe this pod :</p>
<pre><code>kubectl describe pod tiller-deploy-6ffc49c5df-zbjg8
Name: tiller-deploy-6ffc49c5df-zbjg8
Namespace: default
Node: <none>
Labels: app=helm
name=tiller
pod-template-hash=2997057189
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"tiller-deploy-6ffc49c5df","uid":"46f74523-7f8f-11e8-9d04-0242ac1...
Status: Pending
IP:
Created By: ReplicaSet/tiller-deploy-6ffc49c5df
Controlled By: ReplicaSet/tiller-deploy-6ffc49c5df
Containers:
tiller:
Image: gcr.io/kubernetes-helm/tiller:v2.8.0-rancher3
Ports: 44134/TCP, 44135/TCP
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: default
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from tiller-token-hbfgz (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
tiller-token-hbfgz:
Type: Secret (a volume populated by a Secret)
SecretName: tiller-token-hbfgz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m (x125 over 39m) default-scheduler no nodes available to schedule pods
</code></pre>
<p>This problem is a bit specific : rancher/kubernetes/docker-compose ... If anyone has some ideas, you're welcome ;)</p>
<p>Thanks in advance !</p>
| <p>I just found some information to unblock the situation.</p>
<p>Thanks to <a href="https://www.idealcoders.com/posts/rancher/2018/06/rancher-2-x-and-lets-encrypt-with-cert-manager-and-nginx-ingress/" rel="nofollow noreferrer">https://www.idealcoders.com/posts/rancher/2018/06/rancher-2-x-and-lets-encrypt-with-cert-manager-and-nginx-ingress/</a></p>
<p>The first step is to load the cluster's configuration. I was working on the default cluster. So, </p>
<ol>
<li>I execute bash into the docker container, </li>
<li>I load the config file <code>/root/.kube/config</code> </li>
<li>Update the configuration</li>
<li>Go on ... The issuer is correctly created.</li>
</ol>
<p>If it can help someone ;)</p>
|
<p>I have 6 Google nodes with a single core each, and the kube-system pods take too much CPU.</p>
<pre><code> default scylla-2 200m (21%) 500m (53%) 1Gi (38%) 1Gi (38%)
kube-system fluentd-gcp-v2.0.9-p9pvs 100m (10%) 0 (0%) 200Mi (7%) 300Mi (11%)
kube-system heapster-v1.4.3-dcd99c9f8-n6wb2 138m (14%) 138m (14%) 301856Ki (11%) 301856Ki (11%)
kube-system kube-dns-778977457c-gctgs 260m (27%) 0 (0%) 110Mi (4%) 170Mi (6%)
kube-system kube-dns-autoscaler-7db47cb9b7-l9jhv 20m (2%) 0 (0%) 10Mi (0%) 0 (0%)
kube-system kube-proxy-gke-scylla-default-pool-f500679a-7dhh 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system kubernetes-dashboard-6bb875b5bc-n4xsm 100m (10%) 100m (10%) 100Mi (3%) 300Mi (11%)
kube-system l7-default-backend-6497bcdb4d-cncr4 10m (1%) 10m (1%) 20Mi (0%) 20Mi (0%)
kube-system tiller-deploy-dccdb6fd9-7hd2s 0 (0%) 0 (0%) 0 (0%) 0 (0%)
</code></pre>
<p>Is there an easy way to lower the CPU request/limit for all kube-system pods by a factor of 10?</p>
<p>I understand memory is needed to function properly, but the CPU could be lowered without any major issue in a dev environment. What happens if DNS works 10 times slower? 27% of a node for a single system DNS pod is too much.</p>
| <p>As per <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">the documentation</a>, to specify a CPU request for a Container, include the <strong>resources:requests</strong> field in the Container’s resource manifest. To specify a CPU limit, include <strong>resources:limits</strong>; see the example below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: cpu-demo
namespace: cpu-example
spec:
containers:
- name: cpu-demo-ctr
image: vish/stress
resources:
limits:
cpu: "1"
requests:
cpu: "0.5"
args:
- -cpus
- "2"
</code></pre>
<p>One CPU, in GCP Kubernetes, is equivalent to 1 GCP core.
The CPU limit for a Pod is the sum of the CPU limits for all the Containers in the Pod.</p>
<p>Pod scheduling is based on requests. A Pod is scheduled to run on a Node only if the Node has enough CPU resources available to satisfy the Pod’s CPU request.</p>
|
<p>I need to point Ingress to images so that my Pods get the URL, in full. I have the below config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: solar-demo
annotations:
nginx.org/server-snippet: "proxy_ssl_verify off;"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: shmukler.example.com
http:
paths:
- path: /city/*
backend:
serviceName: solar-demo
servicePort: 3000
- path: /solar/*
backend:
serviceName: solar-demo
servicePort: 3001
</code></pre>
<p>If I keep the line: <code>nginx.ingress.kubernetes.io/rewrite-target: /</code>, my services inside the Pods get the rewritten paths, so <code>/city/dublin</code> becomes <code>/dublin</code>.</p>
<p>If I comment out the line <code>nginx.ingress.kubernetes.io/rewrite-target: /</code>, I just get <code>503</code> errors on the client side, and nothing in the logs. With rewrite, my services give me <code>404</code> because there is no route <code>/dublin</code>.</p>
<p>What am I doing wrong? How could I just pass the path on and have the Pods respond?</p>
<pre><code>$ kubectl describe svc solar-demo
Name: solar-demo
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration=
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"solar-demo","namespace":"default"},"spec":{"ports":[{"name":"city","port":3000...
Selector: app=testapp
Type: ClusterIP
IP: 10.107.221.76
Port: city 3000/TCP
TargetPort: 3000/TCP
Endpoints: 172.17.0.3:3000,172.17.0.8:3000
Port: solar 3001/TCP
TargetPort: 3001/TCP
Endpoints: 172.17.0.3:3001,172.17.0.8:3001
Session Affinity: None
Events: <none>
</code></pre>
<p>Suggestions?</p>
| <p>Here should be a working config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: solar-demo
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: shmukler.example.com
http:
paths:
- path: /city
backend:
serviceName: solar-demo
servicePort: 3000
- path: /solar
backend:
serviceName: solar-demo
servicePort: 3001
</code></pre>
<p>What changed:</p>
<ul>
<li>Removed <code>*</code> from paths</li>
<li>Specified ingress in the annotation</li>
<li>Removed re-write annotation</li>
</ul>
<p>The <code>path</code> on an ingress (when using an <code>nginx</code> ingress) is like specifying the nginx <code>location</code> block. <code>nginx</code> does not use the <code>*</code> character in location blocks.</p>
|
<p>I need advice on managing deployments in K8S. I need to have blue/green deployment, using gitops, and this leaves me basically two options:</p>
<h2>1. Use single namespace.</h2>
<p>This will necessitate using helm to manage removing resources and whatnot, and by proxy managing blue/green through helm, and this in turn will necessitate creating <strong>duplicate deployment templates</strong> (for green and blue).</p>
<p>Pros: managed by helm, will delete removed resources; seems to be general practice.</p>
<p>Cons: managed by helm, can mess up something, <a href="https://github.com/kubernetes/helm/issues/3275" rel="nofollow noreferrer">especially in multiple failed deployments</a>; can create a snowflake namespace if someone quickly fixes/adds some resource and does not commit it to the repo;</p>
<h2>2. Using one namespace per deployment</h2>
<p>Just deploy each revision to its own namespace like <em>web-front-2142</em>, check, promote to ingress, then delete all other <em>web-front-[\d]</em> namespaces. I can still use the helm template engine, but without tiller. No need to rely on tiller managing resources - the namespace will be deleted after the production namespace promotion.</p>
<p>I will need to create a separate namespace for the ingress, because it is a singular resource, but this will be a really simple namespace, something like <em>web-front-ingress</em>.</p>
<p>Pros: no snowflakes, every deployment has been created fully from repo; if it works - it works; no dependency on previous deployments in any way, if previous deployment was totally foobar-ed, it doesn't matter.</p>
<p>Cons: separate namespace for singular resources like ingress; seems to be not how k8s was designed and can lead to unforeseen consequences; all deployments tools including spinnaker revolves around single namespace deployment.</p>
<hr>
<p>Need some advice and best practices! :)</p>
| <p>The official <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">documentation</a> mentions the following:</p>
<blockquote>
<p>Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide.</p>
<p>It is not necessary to use multiple namespaces just to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.</p>
</blockquote>
<p>The <a href="https://kubernetes.io/blog/2016/08/kubernetes-namespaces-use-cases-insights/" rel="nofollow noreferrer">Kubernetes Namespaces: use cases and insights"</a> article tells us more about best approaches using namespaces.
It is not recommended to use different namespaces for versioning software:</p>
<blockquote>
<p>An easy to grasp anti-pattern for Kubernetes namespaces is versioning. You should not use Namespaces as a way to disambiguate versions of your Kubernetes resources. Support for versioning is present in the containers and container registries as well as in Kubernetes Deployment resource. Multiple versions should coexist by utilizing the Kubernetes container model which also provides for auto migration between versions with deployments. Furthermore versions scope namespaces will cause massive proliferation of namespaces within a cluster making it hard to manage.</p>
</blockquote>
<p>Other resources (e.g. <a href="https://cloudplatform.googleblog.com/2018/04/Kubernetes-best-practices-Organizing-with-Namespaces.html" rel="nofollow noreferrer">GCPB</a>) describe namespaces usage mostly for separating Kubernetes objects for various stages, teams, projects, clients.<br />
So you can assume that using separate namespaces for blue-green deployments or canary deployment is not a very common approach.</p>
|
<p>I am moving kubernetes from Google GKE to our own in-house rack. What persistent storage should I use?</p>
<p>Kubernetes Local Persistent Volumes only became beta on the 13th of April 2018:
<a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/</a></p>
<p>I have seen that there are many options:
<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes</a></p>
<p>I am not sure what I should choose. Will something work out of the box with the GKE deployment files?</p>
| <p>In Deployment files from GKE, you need to change <code>spec.volumes</code> settings according to your Persistent Volume setups.</p>
<p>I recommend you to choose one of the following options:</p>
<ol>
<li><p>The easiest way is to use <code>HostPath</code>. It mounts a file or a directory from the host Node’s file system into your Pod. Note that in this case, data on one Node is not reachable from another Node without additional configurations. Example of usage in Kubernetes:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
</code></pre></li>
<li><p>You can use <code>NFS</code>. You need to configure NFS server and after that you can use its volumes in Deployments via Persistent Volume Claims. Example of usage in Kubernetes:</p>
<pre><code>apiVersion: v1
kind: Deployment
metadata:
name: nfs-busybox
spec:
replicas: 2
selector:
name: nfs-busybox
template:
metadata:
labels:
name: nfs-busybox
spec:
containers:
- image: busybox
command:
- sh
- -c
- 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
imagePullPolicy: IfNotPresent
name: busybox
volumeMounts:
# name must match the volume name below
- name: nfs
mountPath: "/mnt"
volumes:
- name: nfs
persistentVolumeClaim:
claimName: nfs
</code></pre>
<p>You can look through the <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="nofollow noreferrer">link</a> for more information about using NFS.</p></li>
<li><p>You can use <code>GlusterFS</code>. You need to configure your own GlusterFS installation, and after that you can use its volumes in Deployments. Example of usage in Kubernetes:</p>
<pre><code>apiVersion: v1
kind: Deployment
metadata:
name: nfs-busybox
spec:
replicas: 2
selector:
name: nfs-busybox
template:
metadata:
labels:
name: nfs-busybox
spec:
containers:
- image: busybox
command:
- sh
- -c
- 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
imagePullPolicy: IfNotPresent
name: busybox
volumeMounts:
# name must match the volume name below
          - name: glusterfsvol
mountPath: "/mnt"
volumes:
- name: glusterfsvol
glusterfs:
endpoints: glusterfs-cluster
path: kube_vol
readOnly: true
</code></pre>
<p>You can look through the <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/glusterfs" rel="nofollow noreferrer">link</a> for more information about using GlusterFS.</p></li>
</ol>
<p>You can find more information about Persistent Volumes and Persistent Volume Claims <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have tried to run this YAML file, but I am getting the issue below.
1 master and 2 nodes have been configured, I ran <code>kubectl get nodes</code>, and the output looks fine.</p>
<pre><code>kubectl apply -f https://k8s.io/examples/application/deployment.yaml
</code></pre>
<p>Then I downloaded the file locally and tried again, but got the same error. </p>
<p><code>Error</code> from server (Forbidden) :</p>
<blockquote>
<p>deployments.extensions "nginx-deployment" is
forbidden: User "system:node:master" cannot get deployments.extensions in the
namespace "default"
root@master:~#</p>
</blockquote>
<p>Please assist me to resolve the issue</p>
| <p>This is an RBAC restriction, about which you can read in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes - Using RBAC Authorization</a> docs.</p>
<p>You want to create your own <code>ServiceAccount</code>, then a <code>Role</code>, and then bind them together using a <code>RoleBinding</code>.</p>
<blockquote>
<p>ServiceAccount example</p>
</blockquote>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: some-name
namespace: my-name
</code></pre>
<blockquote>
<p>Role example</p>
</blockquote>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: some-name
namespace: my-name
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get","list","patch","update"]
</code></pre>
<blockquote>
<p>RoleBinding example</p>
</blockquote>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: some-name
namespace: my-name
subjects:
- kind: ServiceAccount
name: some-name
namespace: my-name
roleRef:
kind: Role
name: some-name
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>There are examples online which you can find.</p>
|
<p>I have a helm chart that requires <code>stable/redis</code> as a child chart. The parent chart needs to expose the redis service as an environment variable.</p>
<p>The redis chart includes a template called <code>redis.fullname</code>. How can I refer to this in my parent chart? I.e. I want something like this in my parent deployment but it doesn't work:</p>
<pre><code>kind: Deployment
spec:
template:
containers:
env:
- name: REDIS_CLUSTER_SERVICE_HOST
value: {{ template "redis.fullname" . }}
</code></pre>
| <p>You can use <code>'{{ .Release.Name }}-redis'</code> in your parent chart. I had the same requirement. This is my example in case you want to take a look: <a href="https://github.com/kubernetes/charts/tree/master/incubator/distribution" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/incubator/distribution</a></p>
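<p>Applied to the snippet from the question, that would look something like this (a sketch, assuming the redis subchart's fullname follows the usual <code>&lt;release&gt;-redis</code> convention; the container name is illustrative):</p>
<pre><code>kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: REDIS_CLUSTER_SERVICE_HOST
          value: '{{ .Release.Name }}-redis'
</code></pre>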
|
<p>I am using a kubernetes 1.10 cluster and I want to schedule a cron job which will use a bash script to loop forever and send a GET request to an http endpoint every two seconds.</p>
<p>Here is my job yaml:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: notifsender-sms-cron
namespace: staging
spec:
template:
spec:
containers:
- name: notifsender-sms-cron
image: alpine:latest
command: ["/bin/sh"]
args:
- -c
- >
apk update && apk add --no-cache curl bash && bash -c
" echo \"Running cron loop\";
while true;
do
exit_status=$(curl -v -o /dev/null -w '%{http_code}' http://bbc.com);
if [ $exit_status -ne 200 ]; then
exit 1;
fi
sleep 2;
done
"
restartPolicy: OnFailure
backoffLimit: 4
</code></pre>
<p>The problem is that the output is unexpected: the curl request is made only once, even before the loop, and then the program loops without any curl request being made:</p>
<pre><code>...
&gt; Accept: */*
&gt;
&lt; HTTP/1.1 301 Moved Permanently
&lt; Server: Varnish
&lt; Retry-After: 0
&lt; Content-Length: 0
&lt; Accept-Ranges: bytes
&lt; Date: Thu, 05 Jul 2018 17:53:32 GMT
&lt; Via: 1.1 varnish
&lt; Connection: close
&lt; X-Served-By: cache-fra19122-FRA
&lt; X-Cache: MISS
&lt; X-Cache-Hits: 0
&lt; X-Timer: S1530813212.281242,VS0,VE0
&lt; Location: http://www.bbc.com/
&lt; cache-control: public, max-age=3600
&lt;
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* Closing connection 0
Running cron loop
bash: line 4: [: -ne: unary operator expected
bash: line 4: [: -ne: unary operator expected
bash: line 4: [: -ne: unary operator expected
bash: line 4: [: -ne: unary operator expected
bash: line 4: [: -ne: unary operator expected
bash: line 4: [: -ne: unary operator expected
bash: line 4: [: -ne: unary operator expected
bash: line 4: [: -ne: unary operator expected
bash: line 4: [: -ne: unary operator expected
bash: line 4: [: -ne: unary operator expected
bash: line 4: [: -ne: unary operator expected
</code></pre>
<p>I think I have a mistake in the script, but with my current knowledge of bash scripting I can't fix this problem. Can anyone point out what's wrong?</p>
<p>Thank you</p>
| <p>Minor modification:</p>
<p>The dollar sign <strong>$</strong> should be escaped -> <strong>\$</strong>.</p>
<p>Use "<strong>\$exit_status</strong>" in place of <code>$exit_status</code>. </p>
<pre><code>bash -c " echo \"Running cron loop\";
while true;
do
exit_status=\$(curl -v -o /dev/null -w '%{http_code}' http://bbc.com);
if [ \$exit_status -ne 200 ]; then
exit 1;
fi
sleep 2;
done
"
</code></pre>
|
<p>I need to set a custom error in the traefik ingress on kubernetes, so that when there is no endpoint or when the status is "404" or "[500-600]", it redirects to another error service or another custom error message. I used the annotation as it is in the documentation, in the ingress file like this (note: this is the helm template output of passing the annotation as yaml in the values.yaml file):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend
namespace: "default"
annotations:
external-dns.alpha.kubernetes.io/target: "domain.com"
kubernetes.io/ingress.class: "traefik"
traefik.ingress.kubernetes.io/error-pages: "map[/:map[backend:hello-world status:[502 503]]]"
spec:
rules:
- host: frontend.domain.com
http:
paths:
- backend:
serviceName: frontend
servicePort: 3000
path: /
</code></pre>
| <p>The answer by ldez is correct, but there are a few caveats:</p>
<ul>
<li>First off, these annotations only work for traefik >= 1.6.x (earlier versions may support error pages, but not for the kubernetes backend)</li>
<li>Second, the traefik backend <strong>must</strong> be configured through kubernetes. You cannot create a backend in a config file and use it with kubernetes, at least not in traefik 1.6.x </li>
</ul>
<p>Here's how the complete thing looks like. <code>foo</code> is just a name, as explained in the other answer, and can be anything:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend
namespace: "default"
annotations:
external-dns.alpha.kubernetes.io/target: "domain.com"
kubernetes.io/ingress.class: "traefik"
traefik.ingress.kubernetes.io/error-pages: |-
foo:
status:
- "404"
- "500"
# See below on where "error-pages" comes from
backend: error-pages
query: "/{{status}}.html"
spec:
rules:
# This creates an ingress on an non-existing host name,
# which binds to a service. As part of this a traefik
# backend "error-pages" will be created, which is the one
# we use above
- host: error-pages
http:
paths:
- backend:
serviceName: error-pages-service
servicePort: https
- host: frontend.domain.com
http:
# The configuration for your "real" Ingress goes here
# This is the service to back the ingress defined above
# Note that you can use anything for this, including an internal app
# Also: If you use https, the cert on the other side has to be valid
---
kind: Service
apiVersion: v1
metadata:
name: error-pages-service
namespace: default
spec:
ports:
- name: https
port: 443
type: ExternalName
externalName: my-awesome-errors.mydomain.test
</code></pre>
<p>If you use this configuration, and your app sends a 404, then <code>https://my-awesome-errors.mydomain.test/404.html</code> would be shown as the error page.</p>
|
<p>I created a user teamcity to be able to use the <strong>kube-apiserver</strong>. </p>
<blockquote>
<p>kubectl create serviceaccount teamcity</p>
</blockquote>
<p>With the command below I get the secret's name: </p>
<blockquote>
<p>kubectl get serviceaccounts teamcity -o yaml</p>
</blockquote>
<p>To find the token that was generated by the last command I use: </p>
<blockquote>
<p>kubectl get secret teamcity-token-lmr6z -o yaml</p>
</blockquote>
<p>When I try to connect with curl I get an error, and I don't understand where my mistake is :(</p>
<pre><code> curl -v -Sskk -H "Authorization: bearer ZXlKaGJH......wWHNIVzZ3" https://10.109.0.88:6443/api/v1/namespaces
HTTP/1.1 401 Unauthorized
Content-Type: application/json
Date: Thu, 05 Jul 2018 13:14:00 GMT
Content-Length: 165
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
* Connection #0 to host 10.109.0.88 left intact
</code></pre>
<p>I found a small description in the kubernetes docs about why I get this error (section: Anonymous requests): <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a></p>
<p>But I still don't understand where my mistake is, because with kubectl it works:</p>
<pre><code>kubectl --token=ZXlKaGJHY2lPaUpTVXpJ........swWHNIVzZ3 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kubernetes NodePort 192.2.0.159 <none> 80:17502/TCP 13d
hello-kubernetes-olivier NodePort 192.2.0.235 <none> 80:17296/TCP 13d
kubernetes ClusterIP 192.2.0.1 <none> 443/TCP 14d
</code></pre>
| <p>It might be a typo in the "bearer" part of your header; as I remember, it should be "Bearer".</p>
<p>A sample command is as follows, see <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api" rel="nofollow noreferrer">Kubernetes - Accessing Clusters</a>:</p>
<pre><code>$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "10.0.1.149:443"
}
]
}
</code></pre>
|
<p>is it possible to deploy an ingress controller (nginx) without a public ip address? </p>
<p>Thanks!</p>
| <blockquote>
<p>is it possible to deploy an ingress controller (nginx) without a public ip address?</p>
</blockquote>
<p>Without question, yes, if the Ingress controller's <code>Service</code> is of <code>type: NodePort</code> then the Ingress controller's private IP address is <strong>every</strong> <code>Node</code>'s IP address, on the port(s) pointing to <code>:80</code> and <code>:443</code> of the <code>Service</code>. Secretly, that's exactly what is happening anyway with <code>type: LoadBalancer</code>, just with the extra sugar coating of the cloud provider mapping between the load balancer's IP address and the binding to the <code>Node</code>'s ports.</p>
<p>So, to close that loop: if you wished to have a 100% internal Ingress controller, then use a <code>hostNetwork: true</code> and bind the Ingress controller's <code>ports:</code> to be the <strong>host</strong>'s port 80 and 443; then, make a DNS (A record|CNAME record) for each virtual-host that resolve to the address of every <code>Node</code> in the cluster, and poof: 100% non-Internet-facing Ingress controller.</p>
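<p>To make that concrete, here is a minimal sketch of the relevant parts of an Ingress controller manifest; the names, namespace and image tag are hypothetical and only illustrate the <code>hostNetwork: true</code> idea described above (the controller's usual flags are omitted):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller   # hypothetical name
  namespace: ingress-nginx         # hypothetical namespace
spec:
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true            # the controller binds directly to each Node's network
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1  # hypothetical tag
        ports:
        - containerPort: 80        # with hostNetwork these are literally host ports 80/443
        - containerPort: 443
</code></pre>
<p>Point your internal DNS records at the Node IPs and no public address is involved anywhere.</p>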
|
<p>I am trying to mount a PV into a pod with the following:</p>
<pre><code> {
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "pv",
"labels": {
"type": "ssd1-zone1"
}
},
"spec": {
"capacity": {
"storage": "150Gi"
},
"hostPath": {
"path": "/mnt/data"
},
"accessModes": [
"ReadWriteOnce"
],
"persistentVolumeReclaimPolicy": "Retain",
"storageClassName": "zone1"
}
}
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "pvc",
"namespace": "clever"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "150Gi"
}
},
"volumeName": "pv",
"storageClassName": "zone1"
}
}
kind: Pod
apiVersion: v1
metadata:
name: task-pv-pod
spec:
volumes:
- name: pv
persistentVolumeClaim:
claimName: pvc
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: pv
</code></pre>
<p>The pod is created properly and uses the PVC claim without problem. When I ssh into the pod to see the mount, however, the size is 50GB, which is the size of the attached storage and not the volume I specified.</p>
<pre><code>root@task-pv-pod:/# df -aTh | grep "/html"
/dev/vda1 xfs 50G 13G 38G 26% /usr/share/nginx/html
</code></pre>
<p>The PVC appears to be correct too:</p>
<pre><code>root@5139993be066:/# kubectl describe pvc pvc
Name: pvc
Namespace: default
StorageClass: zone1
Status: Bound
Volume: pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"pvc","namespace":"default"},"spec":{"accessModes":["ReadWriteO...
pv.kubernetes.io/bind-completed=yes
Finalizers: []
Capacity: 150Gi
Access Modes: RWO
Events: <none>
</code></pre>
<p>I have deleted and recreated the volume and the claim many times and tried to use different images for my pod. Nothing is working. </p>
| <p>It looks like your /mnt/data is on the root partition, hence it provides the same free space as any other folder in the rootfs.</p>
<p>The thing about requested and defined capacities for PV/PVC is that these are only values used for matching, or as hints for a dynamic provisioner. In the case of a hostPath and a manually created PV you can define 300TB and it will bind, even if the real folder behind the hostPath only has 5G, because the real size of the device is not verified (which is reasonable, since you are expected to trust the data provided in the PV).</p>
<p>So, as I said, check whether your /mnt/data is not just part of the rootfs. If you still have the problem, provide the output of the <code>mount</code> command on the node where the pod is running.</p>
|
<p>is there a way to scale dynamically the memory size of Pod based on size of data job (my use case)? </p>
<p>Currently we have Job and Pods that are defined with memory amounts, but we wouldn't know how big the data will be for a given time-slice (sometimes 1000 rows, sometimes 100,000 rows).<br>
So it will break if the data is bigger than the memory we have allocated beforehand. </p>
<p>I have thought of using slices by data volume, i.e. cutting every 10,000 rows, since we would know the memory requirement for processing a fixed number of rows. But we are trying to aggregate by time, hence the need for a time-slice. </p>
<p>Or any other solutions, like Spark on kubernetes?</p>
<p>Another way of looking at it:<br>
How can we do an implementation of Cloud Dataflow in Kubernetes on AWS</p>
| <p>It's a best practice to always define <code>resources</code> in your container definition, in particular: </p>

<ul>
<li><code>limits</code>: the upper level of CPU and memory</li>
<li><code>requests</code>: the minimal level of CPU and memory</li>
</ul>
<p>This allows the scheduler to take a better decision and it eases the assignment of <strong>Quality of Service (QoS)</strong> for each pod (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/</a>) which falls into three possible classes: </p>
<ul>
<li><strong>Guaranteed</strong> (highest priority): when <strong><em>requests = limits</em></strong></li>
<li><strong>Burstable</strong>: when <strong><em>requests < limits</em></strong></li>
<li><strong>BestEffort</strong> (lowest priority): when <strong>requests and limits are not set</strong>.</li>
</ul>
<p>The QoS enables a criterion for killing pods when the system is overcommitted.</p>
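<p>As a minimal sketch (the container name, image and amounts are hypothetical), setting both values in a pod template looks like this:</p>
<pre><code>containers:
- name: worker                      # hypothetical container name
  image: my-registry/worker:1.0     # hypothetical image
  resources:
    requests:
      cpu: 250m                     # what the scheduler reserves for the container
      memory: 512Mi
    limits:
      cpu: 500m                     # hard ceiling; exceeding the memory limit gets the container killed
      memory: 1Gi
</code></pre>
<p>Since requests < limits here, this container would be classified as Burstable.</p>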
|
<p>Can we use the nfs volume plugin to maintain High Availability and Disaster Recovery across the kubernetes cluster?</p>
<p>I am running a pod with MongoDB and getting the error:</p>
<blockquote>
<p>chown: changing ownership of '/data/db': Operation not permitted .</p>
</blockquote>
<p>Could anybody please suggest how to resolve the error? (or)</p>
<p>Is any alternative volume plugin advisable to achieve HA-DR in a kubernetes cluster?</p>
| <blockquote>
<p>chown: changing ownership of '/data/db': Operation not permitted .</p>
</blockquote>
<p>You'll want to either launch the mongo container as <code>root</code>, so that you <em>can</em> <code>chown</code> the directory, or if the image prohibits it (as some images already have a <code>USER mongo</code> clause that prohibits the container from escalating privileges back up to <code>root</code>), then one of two things: supersede the user with a <code>securityContext</code> stanza in <code>containers:</code> or use an <code>initContainer:</code> to preemptively change the target folder to be the mongo UID:</p>
<p>Approach #1:</p>
<pre><code>containers:
- name: mongo
image: mongo:something
securityContext:
runAsUser: 0
</code></pre>
<p><em>(which may require altering your cluster's config to permit such a thing to appear in a <code>PodSpec</code>)</em></p>
<p>Approach #2 (which is the one I use with Elasticsearch images):</p>
<pre><code>initContainers:
- name: chmod-er
image: busybox:latest
command:
- /bin/chown
- -R
- "1000" # or whatever the mongo UID is, use string "1000" not 1000 due to yaml
- /data/db
volumeMounts:
- name: mongo-data # or whatever
mountPath: /data/db
containers:
- name: mongo # then run your container as before
</code></pre>
|
<p>Trying to add autoscaling to my deployment, but getting <code>ScalingActive False</code>. Most answers are about DNS, Heapster or Limits; I've checked all of them but still can't find a solution.</p>
<pre><code>kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
fetch Deployment/fetch <unknown>/50% 1 4 1 13m
kubectl cluster-info
Kubernetes master is running at --
addon-http-application-routing-default-http-backend is running at --
addon-http-application-routing-nginx-ingress is running at --
Heapster is running at --
KubeDNS is running at --
kubernetes-dashboard is running at --
</code></pre>
<p>kubectl describe hpa:</p>
<p><a href="https://i.stack.imgur.com/8XpMN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8XpMN.png" alt="kubectl describe hpa"></a></p>
<p>yaml:</p>
<p><a href="https://i.stack.imgur.com/4PS2U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4PS2U.png" alt="yaml"></a></p>
<p>PS. I tried to deploy the example which Azure provides and I get the same result, so the yaml settings aren't the problem. </p>
<p>kubectl describe pod:</p>
<p><a href="https://i.stack.imgur.com/N6Kt2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N6Kt2.png" alt="undefined"></a></p>
<p>kubectl top pod fetch-54f697989d-wczvn --namespace=default:</p>
<p><a href="https://i.stack.imgur.com/S2mfy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S2mfy.png" alt="enter image description here"></a></p>
<p>autoscaling by memory yaml:</p>
<p><a href="https://i.stack.imgur.com/5SxBO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5SxBO.png" alt="enter image description here"></a></p>
<p>description:</p>
<p><a href="https://i.stack.imgur.com/XLrnW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XLrnW.png" alt="enter image description here"></a></p>
<p><code>kubectl get hpa</code> gives the same result, unknown/60%</p>
| <p>I've experienced similar issues; my solution was to set the <code>resources.requests.cpu</code> section in the deployment config, so the HPA can calculate the current percentage based on the requested resource values. Your event log messages also suggest the resource requests are not set, although your deployment yaml looks fine to me (see the sketch at the end of this answer).</p>
<p>Let's double-check with the following steps.</p>
<p>First, verify the resource usage with the following command:</p>
<pre><code># kubectl top pod <your pod name> --namespace=<your pod running namespace>
</code></pre>
<p>You would also need to check the pod's requested cpu resources using the command below, in order to make sure they match the config in your deployment yaml:</p>
<pre><code># kubectl describe pod <your pod name>
...
Requests:
cpu: 250m
...
</code></pre>
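<p>For reference, here is a minimal sketch of the container section of a deployment with CPU requests set (the name, image and amount are hypothetical); without <code>requests.cpu</code> the HPA has nothing to compute the percentage against, which is exactly the <code>unknown</code> target shown by <code>kubectl get hpa</code>:</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: fetch                          # hypothetical container name
        image: my-registry/fetch:latest      # hypothetical image
        resources:
          requests:
            cpu: 250m                        # the HPA percentage is computed against this value
</code></pre>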
<p>I hope this helps you resolve your issue. ;)</p>
|
<p>It might be a question based on curiosity which couldn't find help on google.</p>
<p>Consider this part of the yaml for a headless service:</p>
<pre><code>ports:
- port: abcd --> this line
</code></pre>
<p>My doubt is when the cluster-ip for a headless service is already none (as it is a set of pods that it points to), what is the use of having the port for a service? The dns record from the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="noreferrer">documentation</a> for services states that:</p>
<blockquote>
<p>“Headless” (without a cluster IP) Services are also assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. Unlike normal Services, this resolves to the set of IPs of the pods selected by the Service. Clients are expected to consume the set or else use standard round-robin selection from the set.</p>
</blockquote>
<p>Hence, if the dns that is allocated to the headless services is solely used to have endpoints into the pods, is there any use-case of having the port functionality in a headless service?</p>
<p>I have seen issues that people have faced while excluding the port value from the definition of headless service (<a href="https://github.com/kubernetes/kubernetes/issues/55158" rel="noreferrer">here</a>). This seems to have been fixed. But then, do we really have a use-case for the port functionality of a headless service?</p>
| <blockquote>
<p>But then, do we really have a use-case for the port functionality of a headless service?</p>
</blockquote>
<p>IMHO, yes: because the very idea of a <code>Service</code> is not "a random IP address" -- otherwise it would be called <code>DHCPIPAddress</code>. The idea of a <code>Service</code> in kubernetes is that you can consume some network functionality using one or more tuples of <code>(address, protocol, port)</code> just like in the non-kubernetes world.</p>
<p>So it can be fine if you don't care about the port of a headless <code>Service</code>, in which case toss in <code>ports:\n- port: 80\n</code> and call it a draw, but the <strong>benefit</strong> of a headless <code>Service</code> is to expose an extra-cluster network resource in a manner that kubernetes itself cannot manage. I used that very trick to help us transition from one cluster to another by creating a headless <code>Service</code>, whose name was what the previous <code>Deployment</code> expected, with the named <code>ports:</code> that the previous <code>Deployment</code> expected, but pointing to an IP that I controlled, not within the SDN.</p>
<p>Doing that, all the traditional kubernetes <code>kube-dns</code> and <code>$(SERVICE_THING_HOST)</code> and <code>$(SERVICE_THING_PORT)</code> injection worked as expected, but abstracted away the fact that said <code>_HOST</code> temporarily lived outside the cluster.</p>
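<p>As a rough sketch of that trick (the name, port and IP are hypothetical), a selector-less headless <code>Service</code> paired with a manually managed <code>Endpoints</code> object looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: thing              # the name the old Deployment expects
spec:
  clusterIP: None          # headless
  ports:
  - name: http             # the named port the old Deployment expects
    port: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: thing              # must match the Service name
subsets:
- addresses:
  - ip: 10.1.2.3           # the externally managed IP outside the SDN
  ports:
  - name: http
    port: 8080
</code></pre>
<p>DNS lookups for that Service name then resolve straight to the external IP.</p>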
|
<p>I have a web application consisting of a few services - web, DB and a job queue/worker. I host everything on a single Google VM and my deployment process is very simple and naive:</p>
<ul>
<li>I manually install all services like the database on the VM </li>
<li>a bash script scheduled by crontab polls a remote git repository for changes every N minutes</li>
<li>if there were changes, it would simply restart all services using supervisord (job queue, web, etc)</li>
</ul>
<p>Now, I am starting a new web project where I enjoy using docker-compose for local development. However, I seem to be stuck in analysis paralysis deciding between the available options for production deployment - I looked at Kubernetes, Swarm, docker-compose, container registries, etc.</p>
<p>I am looking for a recipe that will keep me productive with a single machine deployment. Ideally, I should be able to scale it to multiple machines when the time comes, but simplicity and staying frugal (one machine) is more important for now. I want to consider 2 options - when the VM already exists and when a new bare VM can be allocated specifically for this application.</p>
<p>I wonder if docker-compose is a reasonable choice for a simple web application. Do people use it in production and if so, how does the entire process look like from bare VM to rolling out an updated application? Do people use Kubernetes or Swarm for a simple single-machine deployment or is it an overkill?</p>
| <blockquote>
<p>I wonder if docker-compose is a reasonable choice for a simple web application.</p>
</blockquote>
<p>It can be, sure, if the development time is best spent focused on the web application and <em>less</em> on the non-web stuff such as the job queue and database. The other asterisk is whether the development environment works ok with hot-reloads or port-forwarding and that kind of jazz. I say it's a reasonable choice because 99% of the <strong>work</strong> of creating an application suitable for use in a clustered environment is the work of containerizing the application. So if the app already works under <code>docker-compose</code>, then it is with high likelihood that you can take the docker image that is constructed on behalf of <code>docker-compose</code> and roll it out to the cluster.</p>
<blockquote>
<p>Do people use it in production</p>
</blockquote>
<p>I hope not; I am sure there are people who use <code>docker-compose</code> to run in production, just like there are people that use Windows batch files to deploy, but don't be that person.</p>
<blockquote>
<p>Do people use Kubernetes or Swarm for a simple single-machine deployment or is it an overkill?</p>
</blockquote>
<p>Similarly, don't be a person that deploys the entire application on a single virtual machine or be mentally prepared for one failure to wipe out everything that you value. That's part of what clustering technologies are designed to protect against: one mistake taking down the entirety of the application, web, queuing, and persistence all in one fell swoop.</p>
<p>Now whether deploying kubernetes for your situation is "overkill" or not depends on whether you get benefit from the <em>other</em> things that kubernetes brings aside from mere scaling. We get benefit from developer empowerment, log aggregation, CPU and resource limits, the ability to take down one Node without introducing any drama, secrets management, configuration management, using a small number of Nodes for a large number of hosted applications (unlike creating a single virtual machine per deployed application because the deployments have no discipline over the placement of config file or ports or whatever). I can keep going, because kubernetes is truly magical; but, as many people will point out, it is not zero human cost to successfully run a cluster.</p>
|
<p>Most tutorials I've seen for developing with Kubernetes locally use Minikube. In the latest Edge release of Docker for Windows, you can also enable Kubernetes. I'm trying to understand the differences between the two and which I should use.</p>
<ol>
<li>Minikube lets you choose the version of Kubernetes you want; can Docker for Windows do that? I don't see a way to configure it.</li>
<li>Minikube has CLI commands to enable the dashboard, heapster, ingress and other addons. I'm not sure why, because my understanding is that these are simply executing <code>kubectl apply -f http://...</code>. </li>
<li>With Minikube I can do a <code>minikube ip</code> to get the cluster IP address for ingress; how can I do this with Docker for Windows?</li>
<li>Is there anything else different that I should care about.</li>
</ol>
| <p>I feel like you largely understand the space, and mostly have answers to your questions already. You might find <a href="https://docs.docker.com/docker-for-mac/docker-toolbox/" rel="noreferrer">Docker for Mac vs. Docker Toolbox</a> an informative read, even if it's about the Mac equivalent rather than Windows and about Docker packaged as a VM rather than Kubernetes specifically.</p>
<ol>
<li><p>In fact you are stuck with the specific version of Kubernetes the Docker Edge desktop distribution publishes.</p></li>
<li><p>is answered in the question.</p></li>
<li><p>I believe NodePort-type Services are published on your host's IP address; there isn't an intermediate VM address like there is with Docker Toolbox.</p></li>
<li><p>Docker Toolbox and minikube always use a full-blown virtual machine with an off-the-shelf hypervisor. The Docker desktop application might use a lighter-weight virtualization engine if one is available.</p></li>
<li><p>Kubernetes can involve some significant background work. If you're using Kubernetes-in-Docker it's hard to "turn off" Kubernetes and still have Docker available; but if you have a separate minikube VM you can just stop it.</p></li>
</ol>
|
<p>I wrote a python script to dump my database, gzip it and move it to a cloud storage.</p>
<p>Locally everything works smoothly and consumes basically no memory (max 20MB RAM, CPU is quite high, I/O is at max).</p>
<p><strong>When I run it as a job in my kubernetes cluster, the memory usage piles up to about 1.6 GB.</strong></p>
<p>That is more or less the size of my gzipped dumpfile.</p>
<p>Here's my dumping logic:</p>
<pre><code>mysqldump_command = ['mysqldump', f'--host={host}', f'--port={port}', f'--user={username}',
f'--password={password}', '--databases', '--compact',
'--routines', db_name, f'--log-error={self.errorfile}']
print(f'## Creating mysql dump')
with open(self.filename_gzip, 'wb', 0) as f:
p1 = subprocess.Popen(mysqldump_command, stdout=subprocess.PIPE)
p2 = subprocess.Popen('gzip', stdin=p1.stdout, stdout=f)
p1.stdout.close() # force write error (/SIGPIPE) if p2 dies
p2.wait()
p1.wait()
</code></pre>
<p>I tried:</p>
<ul>
<li>setting <code>PYTHONUNBUFFERED=1</code>, no effect.</li>
<li>I tried <a href="https://stackoverflow.com/a/24103662/1331671">this logic</a> but it was even worse</li>
<li>I tried creating the dump as a file first and gzipping it afterwards, which was the worst of all my experiments</li>
</ul>
<p>Any further ideas?</p>
| <p>I reproduced your setup and the dump still works with more restrictive limits.</p>
<p>I just set the resource caps in my-job.yml and it does not consume much memory any more:</p>
<pre><code>resources:
limits:
memory: "100Mi"
requests:
memory: "50Mi"
</code></pre>
|
<p>I have a very strange effect with a pod on Kubernetes: It tries to mount a volume of type <code>emptyDir</code>, but fails to do so. The events list of the pod brings up the following entries:</p>
<pre><code>LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
2m 10h 281 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-xbg84.153e978b9b811f46 Pod Warning FailedMount kubelet, ip-172-20-73-118.eu-central-1.compute.internal Unable to mount volumes for pod "953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-xbg84_example(6cfbf40a-809d-11e8-bb05-0227730cc812)": timeout expired waiting for volumes to attach/mount for pod "example"/"953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-xbg84". list of unattached/unmounted volumes=[workspace]
</code></pre>
<p>What's strange is that this works most of the times, but now this has happened. What could be the reason for this? And how to figure out in more detail what went wrong?</p>
<p>Update: As requested in a comment, I have added the pod spec here:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-xbg84
namespace: example
spec:
containers:
- args:
- --context=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e
- --dockerfile=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e/Dockerfile-broker
- --destination=registry.example.com:443/example/953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-broker:1530827425856
image: gcr.io/kaniko-project/executor:732a2864f4c9f55fba71e4afd98f4fdd575479e6
imagePullPolicy: IfNotPresent
name: 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-broker
volumeMounts:
- mountPath: /kaniko/.docker/config.json
name: config-json
subPath: config.json
- mountPath: /workspace
name: workspace
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-mk89h
readOnly: true
- args:
- --context=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e
- --dockerfile=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e/Dockerfile-core
- --destination=registry.example.com:443/example/953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-core:1530827425856
image: gcr.io/kaniko-project/executor:732a2864f4c9f55fba71e4afd98f4fdd575479e6
imagePullPolicy: IfNotPresent
name: 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-core
volumeMounts:
- mountPath: /kaniko/.docker/config.json
name: config-json
subPath: config.json
- mountPath: /workspace
name: workspace
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-mk89h
readOnly: true
- args:
- --context=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e
- --dockerfile=/workspace/1b5c4fd2-bb39-4096-b055-52dc99d8da0e/Dockerfile-flows
- --destination=registry.example.com:443/example/953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-flows:1530827425856
image: gcr.io/kaniko-project/executor:732a2864f4c9f55fba71e4afd98f4fdd575479e6
imagePullPolicy: IfNotPresent
name: 953fb7fe6825ce398f6243fbe2b2df9400d8cbe0-1530827425856-flows
volumeMounts:
- mountPath: /kaniko/.docker/config.json
name: config-json
subPath: config.json
- mountPath: /workspace
name: workspace
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-mk89h
readOnly: true
initContainers:
- command:
- sh
- -c
- echo ${CONFIG_JSON} | base64 -d > /config-json/config.json
env:
- name: CONFIG_JSON
value: […]
image: alpine:3.7
imagePullPolicy: IfNotPresent
name: store-config-json
volumeMounts:
- mountPath: /config-json
name: config-json
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-mk89h
readOnly: true
restartPolicy: Never
volumes:
- emptyDir: {}
name: config-json
- name: workspace
persistentVolumeClaim:
claimName: example
- name: default-token-mk89h
secret:
defaultMode: 420
secretName: default-token-mk89h
</code></pre>
| <p>You don't actually use an emptyDir in that podSpec; the <code>workspace</code> volume is backed by a PVC. I would suggest replacing the PVC with just <code>emptyDir: {}</code> and checking if that solves things for you, as sketched below.</p>
<p>If you still want to chase the PVC and PV approach, provide their manifests and describe outputs. It's possible that, e.g., you have a PVC bound to a PV with a directory on a different host than the one the pod started on.</p>
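<p>A minimal sketch of that change, keeping the rest of the pod spec as-is:</p>
<pre><code>volumes:
- emptyDir: {}
  name: config-json
- emptyDir: {}             # was: persistentVolumeClaim with claimName: example
  name: workspace
- name: default-token-mk89h
  secret:
    defaultMode: 420
    secretName: default-token-mk89h
</code></pre>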
|
<p>I am trying to make a <code>cron job</code> in Openshift but I got this error:</p>
<pre><code>error: unable to parse "openshift/cronJob-template.yml", not a valid Template but *batch.CronJob
</code></pre>
<p>I ran this command:</p>
<pre><code>oc process -f openshift/cronJob-template.yml -p namespace=test-project | oc apply -f -
</code></pre>
<p>My <code>yml</code> looks like:</p>
<pre><code>apiVersion: batch/v2alpha1
kind: CronJob
metadata:
name: demo
spec:
schedule: "*/30 2 * * *"
jobTemplate:
spec:
template:
metadata:
labels:
parent: "demo"
spec:
containers:
- name: demo
image: demo
command: ["java", "-jar", "test.jar"]
restartPolicy: OnFailure
</code></pre>
<p>I did have my schedule looking like the following but I am not sure if it is correct in a <code>yml</code> file <code>schedule: "*/30 2 * * * /tmp/logs"</code></p>
<p>This is my first time creating a cron job via <code>yml</code>and on Openshift so I apologise. </p>
| <p>The yaml itself looks ok, but it is a plain <code>CronJob</code> object rather than an OpenShift <code>Template</code>, which is why <code>oc process</code> rejects it. Try running <code>oc create -f openshift/cronJob-template.yml -n test-project</code> instead.</p>
|
<p>I have an up & running kubernetes cluster which can potentially produce a lot of logs.
Kubernetes is running on top of docker, so I think I need to <a href="https://docs.docker.com/config/containers/logging/json-file/" rel="noreferrer">configure dockerd to roll over logs</a>.</p>
<p>I found some settings for logging driver for dockerd:</p>
<pre><code>{
"log-driver": "json-file",
"log-opts": {
"max-size": "10k",
"max-file": "2"
}
}
</code></pre>
<p>In case of docker service restart the changes are successfully applied.</p>
<pre><code>du /var/lib/docker/containers/container_hash/container_hash* -h
</code></pre>
<p>shows log files splitted on chunks by appropriate size.</p>
<p>But I don't want to restart the daemon, so tried to reload configuration with:</p>
<pre><code>sudo kill -SIGHUP $(pidof dockerd)
</code></pre>
<p>In syslog I found: </p>
<pre><code>Mar 12 15:38:16 kso-gl dockerd[5331]: time="2018-03-12T15:38:16.446894155+02:00" level=info msg="Got signal to reload configuration, reloading fr
om: /etc/docker/daemon.json"
</code></pre>
<p>So, I assume the configuration was reloaded. Unfortunately it had no effect, even for new containers.
It looks like the subsystem related to logging drivers ignores the configuration reload.</p>
| <p>Sadly it appears the configuration reload SIGHUP feature does not support all configuration options.</p>
<p>The docs at <a href="https://docs.docker.com/v17.09/engine/reference/commandline/dockerd/#miscellaneous-options" rel="noreferrer">https://docs.docker.com/v17.09/engine/reference/commandline/dockerd/#miscellaneous-options</a> (See the "CONFIGURATION RELOAD BEHAVIOR" section) tell us the only supported configuration options which can be reloaded in this way are:</p>
<pre><code>The list of currently supported options that can be reconfigured is this:
debug: it changes the daemon to debug mode when set to true.
cluster-store: it reloads the discovery store with the new address.
cluster-store-opts: it uses the new options to reload the discovery store.
cluster-advertise: it modifies the address advertised after reloading.
labels: it replaces the daemon labels with a new set of labels.
live-restore: Enables keeping containers alive during daemon downtime.
max-concurrent-downloads: it updates the max concurrent downloads for each pull.
max-concurrent-uploads: it updates the max concurrent uploads for each push.
default-runtime: it updates the runtime to be used if not is specified at container creation. It defaults to “default” which is the runtime shipped with the official docker packages.
runtimes: it updates the list of available OCI runtimes that can be used to run containers
authorization-plugin: specifies the authorization plugins to use.
allow-nondistributable-artifacts: Replaces the set of registries to which the daemon will push nondistributable artifacts with a new set of registries.
insecure-registries: it replaces the daemon insecure registries with a new set of insecure registries. If some existing insecure registries in daemon’s configuration are not in newly reloaded insecure resgitries, these existing ones will be removed from daemon’s config.
registry-mirrors: it replaces the daemon registry mirrors with a new set of registry mirrors. If some existing registry mirrors in daemon’s configuration are not in newly reloaded registry mirrors, these existing ones will be removed from daemon’s config.
</code></pre>
<p>Which, as you can see, does not include the <code>log-driver</code> or <code>log-opts</code> configuration parameters.</p>
<p>I'm not aware of a way to reload the logging configuration without a restart, at this time.</p>
|
<p>In the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md#qos-classes" rel="nofollow noreferrer">kubernetes documentation here</a> the conditions for a pod that is classified as <code>Burstable</code> in regards to resource QOS is defined as </p>
<blockquote>
<p>If requests and optionally limits are set (not equal to 0) for one or
more resources across one or more containers, and they are not equal,
then the pod is classified as Burstable. When limits are not
specified, they default to the node capacity.</p>
</blockquote>
<p>so basically stated differently:</p>
<ol>
<li><code>requests</code> set for <strong>one or more</strong> resources (cpu/memory) across <strong>one or more</strong> containers in the pod.</li>
<li><code>limits</code> are optional: if set, they should <strong>not be equal</strong> to the <code>requests</code> of the same
resource.</li>
</ol>
<p>But then later on the documentation gives the following as an example of <code>Burstable</code> pod:</p>
<pre><code>containers:
name: foo
resources:
limits:
cpu: 10m
memory: 1Gi
requests:
cpu: 10m
memory: 1Gi
name: bar
</code></pre>
<p><em>Note:</em> Container <code>bar</code> has no resources specified.</p>
<p>This example fulfils condition 1. However, it doesn't satisfy condition 2, since the limits and requests are set for one container but they are <strong>equal</strong>.</p>
<p>So why is this pod classified as a <code>Burstable</code> pod?</p>
<p>K8s documentation containing QOS explanation and examples: <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md#qos-classes" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md#qos-classes</a></p>
| <p>The evaluation of the <strong>Quality of Service (QoS)</strong> is done by the scheduler on the <strong>whole</strong> pod, i.e. container by container, then taking the lowest evaluation.</p>
<p>Take a look at this example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: class
spec:
containers:
- name: container1
image: busybox
command: ["sh"]
args: ["-c","sleep 3600"]
resources:
requests:
memory: 100Mi
cpu: 200m
limits:
memory: 100Mi
cpu: 200m
- name: container2
image: busybox
command: ["sh"]
args: ["-c","sleep 3600"]
resources:
requests:
memory: 100Mi
cpu: 200m
</code></pre>
<p><code>container1</code> has Guaranteed QoS, because it has both requests and limits defined, and they are equal.</p>
<p><code>container2</code> has Burstable QoS, because it has no limits defined, only requests.</p>
<p>The <code>class</code> pod is evaluated based on both containers, taking the lowest evaluation:</p>
<pre><code>min(Guaranteed, Burstable) = Burstable
</code></pre>
<p>Reference: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/</a></p>
|
<p>I'm using python kubernetes 3.0.0 library and kubernetes 1.6.6 on AWS.</p>
<p>I have pods that can disappear quickly. Sometimes when I try to exec to them I get ApiException <code>Handshake status 500</code> error status.</p>
<p>This is happening with <code>in cluster configuration</code> as well as <code>kube config</code>.</p>
<p>When pod/container doesn't exist I get <code>404</code> error which is reasonable but <code>500</code> is <code>Internal Server Error</code>. I don't get any <code>500</code> errors in <code>kube-apiserver.log</code> where I do find <code>404</code> ones.</p>
<p>What does it mean, and can someone point me in the right direction?</p>
| <p>I know that this question is a little old, but I thought I would share what I found when trying to use python/kubernetes attach/exec for several debugging cases (since this isn't documented anywhere I can find).</p>
<p>As far as I can tell, it's all about making the keyword arguments match the actual container configuration as opposed to what you want the container to do.</p>
<p>When creating pods using <code>kubectl run</code>, if you don't use <code>-i --tty</code> flags (indicating interactive/TTY allocation), and then attempt to set either the <code>tty</code> or <code>stdin</code> flags to <code>True</code> in your function, then you'll get a mysterious 500 error with no other debug info. If you need to use <code>stdin</code> and <code>tty</code> and you are using a configuration file (as opposed to run), then make sure you <a href="https://stackoverflow.com/questions/37559704/kubectl-yaml-config-file-equivalent-of-kubectl-run-i-tty">set the <code>stdin</code> and <code>tty</code> flags to <code>true</code> in <code>spec.containers</code></a>.</p>
<p>While running <code>resp.readline_stdout()</code>, if you get an <code>OverflowError: timestamp too large to convert to C _PyTime_t</code>, set the keyword argument <code>timeout=<any integer></code>. The timeout variable defaults to None, which is an invalid value in that function.</p>
<p>If you run the attach/exec command and get an APIException and a status code of 0, the error <code>Reason: hostname 'X.X.X.X' doesn't match either of...</code>, note that there appears to be an incompatibility with Python 2. Works in Python 3. Should be patched eventually.</p>
<p>I can confirm 404 code is thrown via an ApiException when the pod doesn't exist.</p>
<p>If you are getting a mysterious error saying <code>upgrade request required</code>, note that you need to use the <code>kubernetes.stream.stream</code> function to wrap the call to attach/exec. You can see this <a href="https://github.com/kubernetes-client/python/issues/409" rel="noreferrer">issue on GitHub</a> and <a href="https://github.com/kubernetes-client/python/blob/master/examples/exec.py" rel="noreferrer">this example code</a> to help you get past that part.</p>
<p>Here's my example: <code>resp = kubernetes.stream.stream(k8s.connect_get_namespaced_pod_attach, name='alpine-python-2', namespace="default", stderr=True, stdin=True, stdout=True, tty=True, _preload_content=False)</code></p>
<p>Note that the <code>_preload_content=False</code> is essential in the <code>attach</code> command or else the call will block indefinitely.</p>
<p>I know that was probably more information than you wanted, but hopefully at least some of it will help you.</p>
|
<p>I want to setup a PVC on AWS, where I need <code>ReadWriteMany</code> as access mode. Unfortunately, EBS only supports <code>ReadWriteOnce</code>.</p>
<p>How could I solve this?</p>
<ul>
<li>I have seen that there is a beta provider for AWS EFS which supports <code>ReadWriteMany</code>, but as said, this is still beta, and its installation looks somewhat flaky.</li>
<li>I could use node affinity to force all pods that rely on the EBS volume to a single node, and stay with <code>ReadWriteOnce</code>, but this limits scalability.</li>
</ul>
<p>Are there any other ways of how to solve this? Basically, what I need is a way to store data in a persistent way to share it across pods that are independent of each other.</p>
| <h3>Using EFS without automatic provisioning</h3>
<p>The EFS provisioner may be beta, but EFS itself is not. Since EFS volumes can be mounted via NFS, you can simply create a <code>PersistentVolume</code> with an NFS volume source manually -- assuming that automatic provisioning is not a hard requirement on your side:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: my-efs-volume
spec:
capacity:
storage: 100Gi # Doesn't really matter, as EFS does not enforce it anyway
volumeMode: Filesystem
accessModes:
- ReadWriteMany
mountOptions:
- hard
- nfsvers=4.1
- rsize=1048576
- wsize=1048576
- timeo=600
- retrans=2
nfs:
path: /
server: fs-XXXXXXXX.efs.eu-central-1.amazonaws.com
</code></pre>
<p>You can then claim this volume using a <code>PersistentVolumeClaim</code> and use it in a Pod (or multiple Pods) as usual.</p>
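<p>A matching claim could look roughly like this (the names are hypothetical); the empty <code>storageClassName</code> and explicit <code>volumeName</code> pin the claim to the manually created PV instead of triggering dynamic provisioning:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs-claim            # hypothetical name
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""          # match PVs without a storage class
  volumeName: my-efs-volume     # bind explicitly to the PV defined above
  resources:
    requests:
      storage: 100Gi            # must fit within the PV's declared capacity
</code></pre>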
<h3>Alternative solutions</h3>
<p>If automatic provisioning is a hard requirement for you, there are alternative solutions you might look at: There are several distributed filesystems that you can roll out on your cluster that offer <code>ReadWriteMany</code> storage on top of Kubernetes and/or AWS. For example, you might take a look at <a href="https://rook.io/" rel="noreferrer">Rook</a> (which is basically a Kubernetes operator for Ceph). It's also officially still in a pre-release phase, but I've already worked with it a bit and it runs reasonably well.
There's also the <a href="https://github.com/gluster/gluster-kubernetes" rel="noreferrer">GlusterFS operator</a>, which already seems to have a few stable releases.</p>
|
<p>I'm trying to configure the kubernetes provider in Terraform, however I've been unable to do it so far. EKS uses heptio authenticator, so I don't have certificate paths I can provide to the Kubernetes provider. What is the right way to accomplish this?</p>
<p>I already tried something like this:</p>
<pre><code>provider "kubernetes" {
config_context_auth_info = "context1"
config_context_cluster = "kubernetes"
}
</code></pre>
<p>Getting as a result:</p>
<pre><code>Error: Error applying plan:
1 error(s) occurred:
* kubernetes_namespace.example: 1 error(s) occurred:
* kubernetes_namespace.example: Post http://localhost/api/v1/namespaces: dial tcp [::1]:80: getsockopt: connection refused
</code></pre>
<p>I have a ~/.kube/config in place, what could I be missing?</p>
| <p>Such behaviour could be caused by known core bug: <a href="https://github.com/hashicorp/terraform/issues/12393" rel="nofollow noreferrer">core: No interpolation for cross-provider dependencies #12393</a>. </p>
<p>There is an issue on the Terraform Github that describes a similar case with the same error - <a href="https://github.com/hashicorp/terraform/issues/12869#issuecomment-288339585" rel="nofollow noreferrer">#12869</a>.<br>
It is about issues related to GKE, but I guess it could affect EKS also.</p>
<p>Here is a <a href="https://gist.github.com/radeksimko/1a2cc98c5536bd4aa92e960ed7a47cf0" rel="nofollow noreferrer">link</a> to gist with an example of using kubernetes_provider.<br>
It also related to GKE, but I believe with slight changes it could be applied to EKS.</p>
<p>Consider also checking another good <a href="https://stackoverflow.com/a/50104891/9521610">answer</a> on StackOverflow related to your question.<br>
In brief, the solution is to create the Kubernetes cluster in a first stage and then create the Kubernetes objects in a second stage.</p>
|
<p>We can use below command to install azure cli on Linux:</p>
<p>curl -L <a href="https://aka.ms/InstallAzureCli" rel="nofollow noreferrer">https://aka.ms/InstallAzureCli</a> | bash</p>
<p>But what if we want to install a specific version of the azure cli, let's say version 2.0.23, as 2.0.24 has some issues?</p>
<p>Please Help!</p>
| <p>There is another way: you can install and lock the version of the CLI via apt. Say you want to install 2.0.38 on Ubuntu 16.04:</p>

<pre><code>sudo apt-get install -y azure-cli=2.0.38-1~xenial
echo azure-cli hold | sudo dpkg --set-selections
sudo apt-get upgrade -y
</code></pre>
|
<p>I have been learning Kubernetes authentication and authorization using RBAC.
A question keeps puzzling me: how exactly do the users in a kubeconfig file (e.g. /home/.kube/config) differ from the users of the basic-auth-file in the kube-apiserver startup command?</p>
<p>I have gone through the official documents, and there seems to be no relation between them. Please kindly help me figure it out. Thank you!</p>
| <p>A kubeconfig file contains three types of stanzas: clusters, users, and contexts.</p>
<ol>
<li><p>A cluster stanza describes how kubectl should reach a particular cluster. It has the URL and optionally a CA bundle to use to verify a secure connection.</p></li>
<li><p>A user stanza describes credentials kubectl should send. A user stanza in a kubeconfig file can reference a x509 client certificate, or a bearer token, or a basic auth username/password.</p></li>
<li><p>A context stanza pairs a cluster and a user stanza and gives it a name (e.g. "the 'development' context uses the 'development' cluster definition and the 'testuser' user credentials")</p></li>
</ol>
<p>The "current-context" attribute in a kubeconfig file indicates what context should be used by default when invoking kubectl.</p>
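<p>A minimal kubeconfig sketch illustrating the three stanza types (all names, addresses and credentials are hypothetical):</p>
<pre><code>apiVersion: v1
kind: Config
clusters:
- name: development
  cluster:
    server: https://dev.example.com:6443
    certificate-authority: /path/to/ca.crt    # optional CA bundle for the connection
users:
- name: testuser
  user:
    username: alice            # basic auth credentials; could instead be a token or client certificate
    password: s3cret
contexts:
- name: development
  context:
    cluster: development
    user: testuser
current-context: development   # what kubectl uses by default
</code></pre>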
<blockquote>
<p>How exactly the users in Kubeconfig file(eg /home/.kube/config) differ from the users of basic-auth-file in kube-apiserver startup command? </p>
</blockquote>
<p>Only the credentials in user definitions in a kubeconfig are sent to the server. The name has no meaning apart from the reference from the context stanza.</p>
<p>User definitions in a kubeconfig file can contain many types of credentials, not just basic auth credentials.</p>
|
<p>I have executed the samples from the book "Kubernetes Up and Running" where a pod with a work queue is run, then a k8s job is created with 5 pods to consume all the work on the queue. I have reproduced the yaml api objects below.</p>
<p>My expectation is that once a k8s job completes, its pods would be deleted, but <code>kubectl get pods -o wide</code> shows the pods are still around even though it reports 0/1 containers ready, and they still seem to have IP addresses assigned; see the output below.</p>
<ul>
<li>When will completed job pods be removed from the output of <code>kubectl get pods</code> why is that not right after all the containers in the pod finish?</li>
<li>Are the pods consuming any resources when they complete like an IP address or is the info being printed out historical? </li>
</ul>
<p>Output from kubectl after all the pods have consumed all the messages.</p>
<pre><code>kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
consumers-bws9f 0/1 Completed 0 6m 10.32.0.35 gke-cluster1-default-pool-3796b2ee-rtcr
consumers-d25cs 0/1 Completed 0 6m 10.32.0.33 gke-cluster1-default-pool-3796b2ee-rtcr
consumers-jcwr8 0/1 Completed 0 6m 10.32.2.26 gke-cluster1-default-pool-3796b2ee-tpml
consumers-l9rkf 0/1 Completed 0 6m 10.32.0.34 gke-cluster1-default-pool-3796b2ee-rtcr
consumers-mbd5c 0/1 Completed 0 6m 10.32.2.27 gke-cluster1-default-pool-3796b2ee-tpml
queue-wlf8v 1/1 Running 0 22m 10.32.0.32 gke-cluster1-default-pool-3796b2ee-rtcr
</code></pre>
<p>The follow three k8s api calls were executed these are cut and pasted from the book samples. </p>
<p>Run a pod with a work queue </p>
<pre><code>apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
labels:
app: work-queue
component: queue
chapter: jobs
name: queue
spec:
replicas: 1
template:
metadata:
labels:
app: work-queue
component: queue
chapter: jobs
spec:
containers:
- name: queue
image: "gcr.io/kuar-demo/kuard-amd64:1"
imagePullPolicy: Always
</code></pre>
<p>Expose the pod as a service so that the worker pods can get to it. </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: work-queue
component: queue
chapter: jobs
name: queue
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: work-queue
component: queue
</code></pre>
<p>Post 100 items to the queue then run a job with 5 pods executing in parallel until the queue is empty. </p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
labels:
app: message-queue
component: consumer
chapter: jobs
name: consumers
spec:
parallelism: 5
template:
metadata:
labels:
app: message-queue
component: consumer
chapter: jobs
spec:
containers:
- name: worker
image: "gcr.io/kuar-demo/kuard-amd64:1"
imagePullPolicy: Always
args:
- "--keygen-enable"
- "--keygen-exit-on-complete"
- "--keygen-memq-server=http://queue:8080/memq/server"
- "--keygen-memq-queue=keygen"
restartPolicy: OnFailure
</code></pre>
| <p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">docs</a> say it pretty well:</p>
<blockquote>
<p>When a Job completes, no more Pods are created, but the Pods are not
deleted either. Keeping them around allows you to still view the logs
of completed pods to check for errors, warnings, or other diagnostic
output. The job object also remains after it is completed so that you
can view its status. It is up to the user to delete old jobs after
noting their status. Delete the job with kubectl (e.g. kubectl delete
jobs/pi or kubectl delete -f ./job.yaml). When you delete the job
using kubectl, all the pods it created are deleted too.</p>
</blockquote>
|
<p>I'm trying to add helm to a micro-services project that is very similar to the eShopOnContainers example project. </p>
<p><a href="https://github.com/dotnet-architecture/eShopOnContainers" rel="nofollow noreferrer">https://github.com/dotnet-architecture/eShopOnContainers</a></p>
<p>My question is: should the chart folders and configuration be added at the solution level, per project, or a combination of both?</p>
| <p>My practical experience has been that, in a microservices architecture implemented using Helm, each individual service should have its own Helm chart.</p>
<p>The real problem is that Helm doesn't deal well with recursive dependencies. A typical architectural statement is that each service has its own independent storage and no service "borrows" another's storage. If charts A and B both say in their <code>requirements.yaml</code> that they need some database as a dependency, and your "wrapper" chart W depends on A and B, then Helm's resolver will instantiate a <em>single</em> database dependency chart and use it for both components.</p>
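<p>For illustration, this is roughly what such a shared dependency looks like in a Helm v2 <code>requirements.yaml</code> (chart name, version and repository URL are hypothetical); if two service charts both declare it and a wrapper chart pulls in both, the resolver only materializes one copy:</p>
<pre><code># requirements.yaml of service chart "A" (chart "B" would declare the same)
dependencies:
- name: postgresql                       # hypothetical shared database chart
  version: "~1.0.0"
  repository: https://charts.example.com
</code></pre>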
<p>This style is also somewhat easier to deploy. If you take the Docker image tag as a parameter to the chart, then you can deploy each chart/service completely independently of everything else. With one "wrapper" chart you need to be continuously redeploying that chart and coordinating changes to its specific values. (The reverse of that is that it's a little trickier to know what specific versions of the entire system are deployed all in one place.)</p>
|
<p>I am running a MongoDB pod in a kubernetes cluster using an nfs volume. I am writing pod.yml like this:</p>
<p><a href="https://i.stack.imgur.com/qPtlS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qPtlS.png" alt="enter image description here"></a></p>
<p>but I am getting the below error:<a href="https://i.stack.imgur.com/2iAVA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2iAVA.png" alt="enter image description here"></a></p>
<p>Could anybody suggest how to resolve the above issue?</p>
| <blockquote>
<p>I am running a MongoDB pod in a kubernetes cluster using an nfs volume. I am writing pod.yml like this</p>
</blockquote>
<p>Putting <code>mongo</code> in the <code>command:</code> is erroneous. The <strong>daemon</strong> is <code>mongod</code> and the image <a href="https://github.com/docker-library/mongo/blob/27496625/4.0/Dockerfile#L91" rel="nofollow noreferrer">would have started it automatically</a> had you not provided any <code>command:</code> at all.</p>
<p>Further, all <code>command:</code> lines are <code>exec</code>-ed, so putting <code>bash -c</code> in front of a command is just moving that command out of pid 1 and unnecessarily adding memory consumption to your container.</p>
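<p>In other words, a container stanza along these lines should be enough (the image tag and volume name are hypothetical); the image's own entrypoint starts <code>mongod</code>:</p>
<pre><code>containers:
- name: mongodb
  image: mongo:4.0            # hypothetical tag
  ports:
  - containerPort: 27017
  volumeMounts:
  - name: mongo-data          # hypothetical NFS-backed volume defined under volumes:
    mountPath: /data/db
  # note: no command: at all
</code></pre>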
|
<p>I created a gossip cluster in AWS using <strong>kops</strong>, which means my cluster name ends with <strong>k8s.local</strong> (clusters.test.k8s.local to be exact).
All was working fine until I tried to create a deployment where the pod name needs to have a domain at the end (api-manager.iot.test.co.nz).</p>
<p>I know that it is not allowed to create pods whose names don't meet the requirements of this regex:</p>
<pre><code>'[a-z]([-a-z0-9]*[a-z0-9])?'
</code></pre>
<p>Is there a way I can do that?</p>
<p>I tried adding hostname under template->spec but it has the same restrictions (the regex).</p>
<p>This is my deployment YAML file:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
name: api-manager
spec:
replicas: 1
template:
metadata:
labels:
app: api-manager
spec:
volumes:
- name: api-manager-efs
persistentVolumeClaim:
claimName: pvc-apim
containers:
- image: api-manager:2.1.0
name: api-manager.iot.test.co.nz
ports:
- name: porta
containerPort: 9763
- name: portb
containerPort: 9443
env:
- name: SLEEP
value: "30"
volumeMounts:
- name: api-manager-efs
mountPath: /home/wso2carbon/wso2am-2.1.0/repository
</code></pre>
| <p>After a lot of struggling,
this is my solution:</p>
<p><a href="https://kubernetes.io/blog/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/blog/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes/</a></p>
<p>1.) Create a <strong>dnsmasq</strong> deployment with your domain configuration inside. You will have to attach a cluster IP, which must be in the service IP range of your k8s cluster.</p>
<p>These are the yaml files I created for that:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: dnsmasq
labels:
app: dnsmasq
data:
dnsmasq.conf: |+
user=root
#dnsmasq config, for a complete example, see:
# http://oss.segetech.com/intra/srv/dnsmasq.conf
#log all dns queries
log-queries
#dont use hosts nameservers
no-resolv
#use google as default nameservers
server=8.8.4.4
server=8.8.8.8
#serve all .company queries using a specific nameserver
server=/company/10.0.0.1
#explicitly define host-ip mappings
address=/api-manager.iot.test.vector.co.nz/100.64.53.55
---
apiVersion: v1
kind: Service
metadata:
labels:
app: dnsmasq
name: dnsstub
spec:
type: "{{.Values.Service.serviceType}}"
clusterIP: 100.68.140.187
ports:
- port: {{ .Values.Service.serviceports.port }}
protocol: UDP
selector:
app: dnsmasq
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: dnsmasq
spec:
replicas: {{ .Values.Deployment.replicaCount }}
template:
metadata:
labels:
app: dnsmasq
spec:
containers:
- name: dnsmasq
image: dnsmasq:1.0.2
ports:
- containerPort: {{ .Values.Deployment.ports.containerport }}
protocol: UDP
volumeMounts:
- name: etc
mountPath: /etc/dnsmasq.conf
subPath: dnsmasq.conf
imagePullSecrets:
- name: mprestg-credentials
volumes:
- name: etc
configMap:
name: dnsmasq
dnsPolicy: Default
</code></pre>
<p>2.) Create a kube-dns config-map with stubDomain:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
data:
stubDomains: |
{"iot.test.vector.co.nz": ["100.68.140.187"]}
</code></pre>
<p>3.) Add the static IP that we defined in our dns configuration to our service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api-manager
labels:
app: api-manager
tier: apim
spec:
ports:
- port: 9763
name: porta
targetPort: 9763
selector:
app: api-manager
type: LoadBalancer
clusterIP: 100.64.53.55
</code></pre>
|
<p>I'm trying to locate a performance issue that might be network related. Therefore I want to inspect all packets entering and leaving a pod.</p>
<p>I'm on kubernetes 1.8.12 on GKE. I ssh into the host and I see the bridge called <code>cbr0</code> that sees all the traffic. I also see a ton of interfaces named like <code>vethdeadbeef@if3</code>. I assume those are virtual interfaces that are created per container. Where do I look to find out which interface belongs to which container, so I can get a list of all the interfaces of a pod?</p>
| <p>If you have <code>cat</code> available in the container, you can compare the interface index of the containers <code>eth0</code> with those of the <code>veth*</code> devices on your host. For example:</p>
<pre><code># grep ^ /sys/class/net/vet*/ifindex | grep ":$(docker exec aea243a766c1 cat /sys/class/net/eth0/iflink)"
/sys/class/net/veth1d431c85/ifindex:92
</code></pre>
<p><code>veth1d431c85</code> is what your are looking for.</p>
|
<p>It looks like the issue is caused by CNI (calico), but I'm not sure what the fix is in ICP (see the journalctl -u kubelet logs below).</p>
<p><strong>ICP Installer Log:</strong></p>
<pre><code>FAILED! => {"attempts": 100, "changed": true, "cmd": "kubectl -n kube-system get daemonset kube-dns -o=custom-columns=A:.status.numberAvailable,B:.status.desiredNumberScheduled --no-headers=true | tr -s \" \" | awk '$1 == $2 {print \"READY\"}'", "delta": "0:00:00.403489", "end": "2018-07-08 09:04:21.922839", "rc": 0, "start": "2018-07-08 09:04:21.519350", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
</code></pre>
<p><strong>journalctl -u kubelet</strong>:</p>
<pre><code>Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.548157 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.549872 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.555379 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.738411 2763 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s-master-10.50.50.201.153f85e7528e5906", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"k8s-master-10.50.50.201", UID:"b0ed63e50c3259666286e5a788d12b81", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{scheduler}"}, Reason:"Started", Message:"Started container", Source:v1.EventSource{Component:"kubelet", Host:"10.50.50.201"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbec8c296b58a5506, ext:106413065445, loc:(*time.Location)(0xb58e300)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbec8c296b58a5506, ext:106413065445, loc:(*time.Location)(0xb58e300)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "kubelet" cannot create events in the namespace "kube-system"' (will not retry!)
</code></pre>
<hr>
<pre><code>Jul 08 22:40:43 dev-master hyperkube[2763]: E0708 22:40:43.938806 2763 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 08 22:40:44 dev-master hyperkube[2763]: E0708 22:40:44.556337 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 08 22:40:44 dev-master hyperkube[2763]: E0708 22:40:44.557513 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 08 22:40:44 dev-master hyperkube[2763]: E0708 22:40:44.561007 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 08 22:40:45 dev-master hyperkube[2763]: E0708 22:40:45.557584 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 08 22:40:45 dev-master hyperkube[2763]: E0708 22:40:45.558375 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 08 22:40:45 dev-master hyperkube[2763]: E0708 22:40:45.561807 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 08 22:40:46 dev-master hyperkube[2763]: I0708 22:40:46.393905 2763 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jul 08 22:40:46 dev-master hyperkube[2763]: I0708 22:40:46.396261 2763 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jul 08 22:40:46 dev-master hyperkube[2763]: E0708 22:40:46.397540 2763 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: nodes is forbidden: User "kubelet" cannot create nodes at the cluster scope
</code></pre>
<hr>
<pre><code>Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.161949 9689 cni.go:259] Error adding network: no configured Calico pools
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.161980 9689 cni.go:227] Error while adding to cni network: no configured Calico pools
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468392 9689 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-splct_kube-system" network: no configured Calico
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468455 9689 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-splct_kube-system(113e64b2-82e6-11e8-83bb-0242a9e42805)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468479 9689 kuberuntime_manager.go:646] createPodSandbox for pod "kube-dns-splct_kube-system(113e64b2-82e6-11e8-83bb-0242a9e42805)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468556 9689 pod_workers.go:186] Error syncing pod 113e64b2-82e6-11e8-83bb-0242a9e42805 ("kube-dns-splct_kube-system(113e64b2-82e6-11e8-83bb-0242a9e42805)"), skipping: failed to "CreatePodSandbox" for "kube-d
Jul 08 19:43:48 dev-master hyperkube[9689]: I0708 19:43:48.938222 9689 kuberuntime_manager.go:513] Container {Name:calico-node Image:ibmcom/calico-node:v3.0.4 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ETCD_ENDPOINTS Value: ValueFrom:&EnvVarSource
Jul 08 19:43:48 dev-master hyperkube[9689]: e:FELIX_HEALTHENABLED Value:true ValueFrom:nil} {Name:IP_AUTODETECTION_METHOD Value:can-reach=10.50.50.201 ValueFrom:nil}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:lib-modules ReadOnly:true MountPath:/lib/m
Jul 08 19:43:48 dev-master hyperkube[9689]: I0708 19:43:48.938449 9689 kuberuntime_manager.go:757] checking backoff for container "calico-node" in pod "calico-node-wpln7_kube-system(10107b3e-82e6-11e8-83bb-0242a9e42805)"
Jul 08 19:43:48 dev-master hyperkube[9689]: I0708 19:43:48.938699 9689 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=calico-node pod=calico-node-wpln7_kube-system(10107b3e-82e6-11e8-83bb-0242a9e42805)
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.938735 9689 pod_workers.go:186] Error syncing pod 10107b3e-82e6-11e8-83bb-0242a9e42805 ("calico-node-wpln7_kube-system(10107b3e-82e6-11e8-83bb-0242a9e42805)"), skipping: failed to "StartContainer" for "calic
lines 4918-4962/4962 (END)
</code></pre>
<p><strong>docker ps (master node)</strong>: the container k8s_POD_kube-dns-splct_kube-system-* is repeatedly crashing.</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ed24d636fdd1 ibmcom/pause:3.0 "/pause" 1 second ago Up Less than a second k8s_POD_kube-dns-splct_kube-system_113e64b2-82e6-11e8-83bb-0242a9e42805_121
49b648837900 ibmcom/calico-cni "/install-cni.sh" 5 minutes ago Up 5 minutes k8s_install-cni_calico-node-wpln7_kube-system_10107b3e-82e6-11e8-83bb-0242a9e42805_0
933ff30177de ibmcom/calico-kube-controllers "/usr/bin/kube-contr…" 5 minutes ago Up 5 minutes k8s_calico-kube-controllers_calico-kube-controllers-759f7fc556-mm5tg_kube-system_1010712e-82e6-11e8-83bb-0242a9e42805_0
12e9262299af ibmcom/pause:3.0 "/pause" 6 minutes ago Up 5 minutes k8s_POD_calico-kube-controllers-759f7fc556-mm5tg_kube-system_1010712e-82e6-11e8-83bb-0242a9e42805_0
8dcb2b2b3eb5 ibmcom/pause:3.0 "/pause" 6 minutes ago Up 5 minutes k8s_POD_calico-node-wpln7_kube-system_10107b3e-82e6-11e8-83bb-0242a9e42805_0
9486ff78df31 ibmcom/tiller "/tiller" 6 minutes ago Up 6 minutes k8s_tiller_tiller-deploy-c59888d97-7nwph_kube-system_016019ab-82e6-11e8-83bb-0242a9e42805_0
e5588f68af1b ibmcom/pause:3.0 "/pause" 6 minutes ago Up 6 minutes k8s_POD_tiller-deploy-c59888d97-7nwph_kube-system_016019ab-82e6-11e8-83bb-0242a9e42805_0
e80460d857ff ibmcom/icp-image-manager "/icp-image-manager …" 10 minutes ago Up 10 minutes k8s_image-manager_image-manager-0_kube-system_7b7554ce-82e5-11e8-83bb-0242a9e42805_0
e207175f19b7 ibmcom/registry "/entrypoint.sh /etc…" 10 minutes ago Up 10 minutes k8s_icp-registry_image-manager-0_kube-system_7b7554ce-82e5-11e8-83bb-0242a9e42805_0
477faf0668f3 ibmcom/pause:3.0 "/pause" 10 minutes ago Up 10 minutes k8s_POD_image-manager-0_kube-system_7b7554ce-82e5-11e8-83bb-0242a9e42805_0
8996bb8c37b7 d4b6454d4873 "/hyperkube schedule…" 10 minutes ago Up 10 minutes k8s_scheduler_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
835ee941432c d4b6454d4873 "/hyperkube apiserve…" 10 minutes ago Up 10 minutes k8s_apiserver_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
de409ff63cb2 d4b6454d4873 "/hyperkube controll…" 10 minutes ago Up 10 minutes k8s_controller-manager_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
716032a308ea ibmcom/pause:3.0 "/pause" 10 minutes ago Up 10 minutes k8s_POD_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
bd9e64e3d6a2 d4b6454d4873 "/hyperkube proxy --…" 12 minutes ago Up 12 minutes k8s_proxy_k8s-proxy-10.50.50.201_kube-system_3e068267cfe8f990cd2c9a4635be044d_0
bab3c9ef7e40 ibmcom/pause:3.0 "/pause" 12 minutes ago Up 12 minutes k8s_POD_k8s-proxy-10.50.50.201_kube-system_3e068267cfe8f990cd2c9a4635be044d_0
</code></pre>
<p><strong>Kubectl (master node)</strong>: I believe Kubernetes should have been initialized and running by this time, but it seems it is not. </p>
<pre><code>kubectl get pods -s 127.0.0.1:8888 --all-namespaces
The connection to the server 127.0.0.1:8888 was refused - did you specify the right host or port?
</code></pre>
<p>Following are the options I tried:</p>
<ol>
<li>Created the cluster with IP_IP both enabled and disabled. As all nodes are on the same subnet, the IP_IP setting should not have an impact. </li>
<li>Ran etcd both on a separate node and as part of the master node. </li>
<li><p>ifconfig tunl0 returns the following (i.e. without an IP assignment) in all of the above scenarios:</p>
<p><strong>tunl0</strong> Link encap:IPIP Tunnel HWaddr</p>
<pre><code> NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
</code></pre></li>
<li><p>'calicoctl get profile' returns empty and so does 'calicoctl get nodes', which I believe is because Calico is not configured yet.</p></li>
</ol>
<p>Other checks, thoughts and options?</p>
<p><strong>Calico Kube Controller Logs</strong> (repeated):</p>
<pre><code>2018-07-09 05:46:08.440 [WARNING][1] cache.go 278: Value for key has changed, queueing update to reprogram key="kns.default" type=v3.Profile
2018-07-09 05:46:08.440 [WARNING][1] cache.go 278: Value for key has changed, queueing update to reprogram key="kns.kube-public" type=v3.Profile
2018-07-09 05:46:08.440 [WARNING][1] cache.go 278: Value for key has changed, queueing update to reprogram key="kns.kube-system" type=v3.Profile
2018-07-09 05:46:08.440 [INFO][1] namespace_controller.go 223: Create/Update Profile in Calico datastore key="kns.default"
2018-07-09 05:46:08.441 [INFO][1] namespace_controller.go 246: Update Profile in Calico datastore with resource version key="kns.default"
2018-07-09 05:46:08.442 [INFO][1] namespace_controller.go 252: Successfully updated profile key="kns.default"
2018-07-09 05:46:08.442 [INFO][1] namespace_controller.go 223: Create/Update Profile in Calico datastore key="kns.kube-public"
2018-07-09 05:46:08.446 [INFO][1] namespace_controller.go 246: Update Profile in Calico datastore with resource version key="kns.kube-public"
2018-07-09 05:46:08.447 [INFO][1] namespace_controller.go 252: Successfully updated profile key="kns.kube-public"
2018-07-09 05:46:08.447 [INFO][1] namespace_controller.go 223: Create/Update Profile in Calico datastore key="kns.kube-system"
2018-07-09 05:46:08.465 [INFO][1] namespace_controller.go 246: Update Profile in Calico datastore with resource version key="kns.kube-system"
2018-07-09 05:46:08.476 [INFO][1] namespace_controller.go 252: Successfully updated profile key="kns.kube-system"
</code></pre>
| <p>Firstly, as of ICP 2.1.0.3 the insecure port 8888 for the Kubernetes apiserver is disabled, so you cannot use this insecure port to talk to Kubernetes.</p>
<p>For this issue, could you provide the following information and outputs?</p>
<ul>
<li>The network configuration in your environment:
<code>ifconfig -a</code></li>
<li>The route table:
<code>route</code></li>
<li>The contents of your /etc/hosts file:
<code>cat /etc/hosts</code></li>
<li>The ICP installation configuration files:
<code>config.yaml</code> and <code>hosts</code></li>
</ul>
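<p>To make collecting those easier, here is a minimal sketch of the commands to run on the master node (the location of the installer directory holding <code>config.yaml</code> and <code>hosts</code> is environment-specific, so adjust the last two steps to wherever you ran the installer from):</p>
<pre><code># run on the master node and attach the outputs
ifconfig -a
route
cat /etc/hosts
# from the ICP installer directory (path depends on your setup):
cat config.yaml
cat hosts
</code></pre>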
|
<p>I have an existing system that uses a relational DBMS. I am unable to use a NoSQL database for various internal reasons.</p>
<p>The system is to get some microservices that will be deployed using Kubernetes and Docker with the intention to do rolling upgrades to reduce downtime. The back end data layer will use the existing relational DBMS. The micro services will follow good practice and "own" their data store on the DBMS. The one big issue with this seems to be how to deal with managing the structure of the database across this. I have done my research:</p>
<ul>
<li><a href="https://blog.philipphauer.de/databases-challenge-continuous-delivery/" rel="nofollow noreferrer">https://blog.philipphauer.de/databases-challenge-continuous-delivery/</a></li>
<li><a href="http://www.grahambrooks.com/continuous%20delivery/continuous%20deployment/zero%20down-time/2013/08/29/zero-down-time-relational-databases.html" rel="nofollow noreferrer">http://www.grahambrooks.com/continuous%20delivery/continuous%20deployment/zero%20down-time/2013/08/29/zero-down-time-relational-databases.html</a></li>
<li><a href="http://blog.dixo.net/2015/02/blue-turquoise-green-deployment/" rel="nofollow noreferrer">http://blog.dixo.net/2015/02/blue-turquoise-green-deployment/</a></li>
<li><a href="https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database" rel="nofollow noreferrer">https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database</a></li>
<li><a href="https://www.rainforestqa.com/blog/2014-06-27-zero-downtime-database-migrations/" rel="nofollow noreferrer">https://www.rainforestqa.com/blog/2014-06-27-zero-downtime-database-migrations/</a></li>
</ul>
<p>All of the discussions seem to stop around the point of adding/removing columns and data migration. There is no discussion of how to manage stored procedures, views, triggers etc.</p>
<p>The application is written in .NET Full and .NET Core with Entity Framework as the ORM.</p>
<p>Has anyone got any insights on how to do continuous delivery using a relational DBMS where it is a full production system? Is it back to the drawing board here? Or is using a relational DBMS simply "too hard" for rolling updates?</p>
<p>PS. Even though this is a continuous delivery problem, I have also tagged it with Kubernetes and Docker as that will be the underlying tech in use for the orchestration/container side of things.</p>
| <p>I work in an environment that achieves continuous delivery. We use MySQL.</p>
<p>We apply schema changes with minimal interruption by using <a href="https://www.percona.com/doc/percona-toolkit/LATEST/pt-online-schema-change.html" rel="nofollow noreferrer">pt-online-schema-change</a>. One could also use <a href="https://github.com/github/gh-ost" rel="nofollow noreferrer">gh-ost</a>.</p>
<p>Adding a column can be done at any time if the application code can work with the extra column in place. For example, it's a good rule to avoid implicit columns like <code>SELECT *</code> or <code>INSERT</code> with no columns-list clause. Dropping a column can be done after the app code no longer references that column. Renaming a column is trickier to do without coordinating an app release, and in this case you may have to do two schema changes, one to add the new column and a later one to drop the old column after the app is known not to reference the old column.</p>
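<p>As a concrete illustration of the add-then-drop workflow, here is a hedged sketch of what the <code>pt-online-schema-change</code> invocations might look like (the database, table and column names are made up; always run with <code>--dry-run</code> first in your own environment):</p>
<pre><code># add the new column while the table stays online
pt-online-schema-change --alter "ADD COLUMN phone_v2 VARCHAR(20) NULL" \
  D=mydb,t=customers --execute

# later, once no application release references the old column any more
pt-online-schema-change --alter "DROP COLUMN phone" \
  D=mydb,t=customers --execute
</code></pre>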
<p>We do upgrades and maintenance on database servers by using redundancy. Every database master has a replica, and the two instances are configured in master-master (circular) replication. So one is active and the other is passive. Applications are allowed to connect only to the active instance. The passive instance can be restarted, upgraded, etc. </p>
<p>We can switch the active instance in under 1 second by changing an internal CNAME, and updating the <code>read_only</code> option in each MySQL instance.</p>
<p>Database connections are terminated during this switch. Apps are required to detect a dropped connection and reconnect to the CNAME. This way the app is always connected to the active MySQL instance, freeing the passive instance for maintenance.</p>
<p>MySQL replication is asynchronous, so an instance can be brought down and back up, and it can resume replicating changes and generally catches up quickly. As long as its master keeps the binary logs needed. If the replica is down for longer than the binary log expiration, then it loses its place and must be reinitialized from a backup of the active instance.</p>
<hr>
<p>Re comments:</p>
<blockquote>
<p>how is the data access code versioned? ie v1 of app talking to v2 of DB? </p>
</blockquote>
<p>That's up to each app developer team. I believe most are doing continual releases, not versions.</p>
<blockquote>
<p>How are SP's, UDF's, Triggers etc dealt with?</p>
</blockquote>
<p>No app is using any of those.</p>
<p>Stored routines in MySQL are really more of a liability than a feature. No support for packages or libraries of routines, no compiler, no debugger, bad scalability, and the SP language is unfamiliar and poorly documented. I don't recommend using stored routines in MySQL, even though it's common in Oracle/Microsoft database development practices.</p>
<p>Triggers are not allowed in our environment, because pt-online-schema-change needs to create its own triggers.</p>
<p><a href="https://dev.mysql.com/doc/refman/8.0/en/adding-udf.html" rel="nofollow noreferrer">MySQL UDFs</a> are compiled C/C++ code that has to be installed on the database server as a shared library. I have never heard of any company who used UDFs in production with MySQL. There is too high a risk that a bug in your C code could crash the whole MySQL server process. In our environment, app developers are not allowed access to the database servers for SOX compliance reasons, so they wouldn't be able to install UDFs anyway. </p>
|
<p>I have an existing system that uses a relational DBMS. I am unable to use a NoSQL database for various internal reasons.</p>
<p>The system is to get some microservices that will be deployed using Kubernetes and Docker with the intention to do rolling upgrades to reduce downtime. The back end data layer will use the existing relational DBMS. The micro services will follow good practice and "own" their data store on the DBMS. The one big issue with this seems to be how to deal with managing the structure of the database across this. I have done my research:</p>
<ul>
<li><a href="https://blog.philipphauer.de/databases-challenge-continuous-delivery/" rel="nofollow noreferrer">https://blog.philipphauer.de/databases-challenge-continuous-delivery/</a></li>
<li><a href="http://www.grahambrooks.com/continuous%20delivery/continuous%20deployment/zero%20down-time/2013/08/29/zero-down-time-relational-databases.html" rel="nofollow noreferrer">http://www.grahambrooks.com/continuous%20delivery/continuous%20deployment/zero%20down-time/2013/08/29/zero-down-time-relational-databases.html</a></li>
<li><a href="http://blog.dixo.net/2015/02/blue-turquoise-green-deployment/" rel="nofollow noreferrer">http://blog.dixo.net/2015/02/blue-turquoise-green-deployment/</a></li>
<li><a href="https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database" rel="nofollow noreferrer">https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database</a></li>
<li><a href="https://www.rainforestqa.com/blog/2014-06-27-zero-downtime-database-migrations/" rel="nofollow noreferrer">https://www.rainforestqa.com/blog/2014-06-27-zero-downtime-database-migrations/</a></li>
</ul>
<p>All of the discussions seem to stop around the point of adding/removing columns and data migration. There is no discussion of how to manage stored procedures, views, triggers etc.</p>
<p>The application is written in .NET Full and .NET Core with Entity Framework as the ORM.</p>
<p>Has anyone got any insights on how to do continuous delivery using a relational DBMS where it is a full production system? Is it back to the drawing board here? Or is using a relational DBMS simply "too hard" for rolling updates?</p>
<p>PS. Even though this is a continuous delivery problem, I have also tagged it with Kubernetes and Docker as that will be the underlying tech in use for the orchestration/container side of things.</p>
| <p>All of the following under the assumption that I understand correctly what you mean by "rolling updates" and what its consequences are.</p>
<p>It has very little (as in: nothing at all) to do with "relational DBMS". Flat files holding XML will make you face the exact same problem. Your "rolling update" will inevitably cause (hopefully brief) periods of time during which your server-side components (e.g. the db) must interact with "version 0" as well as with "version -1" of (the client-side components of) your system.</p>
<p>Here "compatibility theory" (*) steps in. A "working system" is a system in which the set of offered services is a superset (perhaps a proper superset) of the set of required services. So backward compatibility is guaranteed if "services offered" is never ever reduced <em>and</em> "services required" is never extended. However, the latter is typically what always happens when the current "version 0" is moved to "-1" and a new "current version 0" is added to the mix. So the conclusion is that "rolling updates" are theoretically doable as long as the "services" offered on server side are only ever extended, and always in such a way as to be, and always remain, a superset of the services required on (any version currently in use on) the client side.</p>
<p>"Services" here is to be interpreted as something very abstract. It might refer to a guarantee to the effect that, say, if column X in this row of this table has value Y then I <em>will</em> find another row in that other table using a key computed such-and-so, and that other row might be guaranteed to have column values satisfying this-or-that condition.</p>
<p>If that "guarantee" is <em>introduced</em> as an <em>expectation</em> (i.e. requirement) on (new version of) client side, you must do something on server side to comply. If that "guarantee" is <em>currently offered</em> but a <em>contradicting guarantee</em> is introduced as an expectation on (new version of) client side, then your rolling update scenario has by definition become inachievable.</p>
<p>(*) <a href="http://davidbau.com/archives/2003/12/01/theory_of_compatibility_part_1.html" rel="nofollow noreferrer">http://davidbau.com/archives/2003/12/01/theory_of_compatibility_part_1.html</a></p>
<p>There are also parts 2 and 3.</p>
|
<p>I am <strong>completely new</strong> to Kubernetes, so go easy on me.</p>
<p>I am running <code>kubectl proxy</code> but am only seeing the JSON output. Based on <a href="https://github.com/kubernetes/dashboard/issues/2011" rel="nofollow noreferrer">this discussion</a> I attempted to set the memory limits by running:</p>
<pre><code>kubectl edit deployment kubernetes-dashboard --namespace kube-system
</code></pre>
<p>I then changed the container memory limit:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
...
spec:
...
template:
metadata:
...
spec:
containers:
- image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1
imagePullPolicy: IfNotPresent
livenessProbe:
...
name: kubernetes-dashboard
ports:
- containerPort: 9090
protocol: TCP
resources:
limits:
memory: 1Gi
</code></pre>
<p>I still only get the JSON served when I save that and visit <a href="http://127.0.0.1:8001/ui" rel="nofollow noreferrer">http://127.0.0.1:8001/ui</a></p>
<p>Running <code>kubectl logs --namespace kube-system kubernetes-dashboard-665756d87d-jssd8</code> I see the following:</p>
<pre><code>Starting overwatch
Using in-cluster config to connect to apiserver
Using service account token for csrf signing
No request provided. Skipping authorization
Successful initial request to the apiserver, version: v1.10.0
Generating JWE encryption key
New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
Initializing JWE encryption key from synchronized object
Creating in-cluster Heapster client
Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Serving insecurely on HTTP port: 9090
</code></pre>
<p>I read through a bunch of links from a <a href="https://www.google.com/search?q=Metric%20client%20health%20check%20failed%3A%20the%20server%20could%20not%20find%20the%20requested%20resource%20(get%20services%20heapster).%20Retrying%20in%2030%20seconds.&oq=Metric%20client%20health%20check%20failed%3A%20the%20server%20could%20not%20find%20the%20requested%20resource%20(get%20services%20heapster).%20Retrying%20in%2030%20seconds.&aqs=chrome..69i57.214j0j7&sourceid=chrome&ie=UTF-8" rel="nofollow noreferrer">Google search</a> on the error but nothing really worked. </p>
<p>Key components are:</p>
<ul>
<li><strong>Local</strong>: Ubuntu 18.04 LTS</li>
<li><strong>minikube</strong>: v0.28.0</li>
<li><strong>Kubernetes Dashboard</strong>: 1.8.3</li>
</ul>
<p>Installed via:</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<p>Halp!</p>
| <p>Have you considered using the minikube dashboard? You can reach it by:</p>
<pre><code>minikube dashboard
</code></pre>
<p>Also, you get JSON at <a href="http://127.0.0.1:8001/ui" rel="nofollow noreferrer">http://127.0.0.1:8001/ui</a> because that endpoint is deprecated, so you have to use the full proxy URL, as stated on the dashboard GitHub <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">page</a>.</p>
<p>If you still want to use this 'external' dashboard (for future projects that are not minikube-related, or for some other reason I don't know about), you can reach it by running:</p>
<pre><code>kubectl proxy
</code></pre>
<p>and then:
<a href="http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/</a></p>
<ul>
<li>note that the documentation uses <code>https</code>, which is not correct in this case (it might be a documentation error, or it might be clarified in the <a href="https://github.com/kubernetes/dashboard/wiki/Accessing-dashboard" rel="nofollow noreferrer">documentation</a>, which I suggest you read if you need further information on the web UI).</li>
</ul>
<p>Hope this helps.</p>
|
<p>I'm building a container cluster using CoreOS and Kubernetes on DigitalOcean, and I've seen that in order to expose a Pod to the world you have to create a Service with Type: LoadBalancer. I think this is the optimal solution, so that you don't need to add an external load balancer outside Kubernetes such as nginx or HAProxy. I was wondering if it is possible to create this using DO's Floating IP.</p>
| <p>Things have changed: DigitalOcean created its own cloud provider implementation, as answered <a href="https://github.com/kubernetes/kubernetes/issues/34783#issuecomment-357974666" rel="nofollow noreferrer">here</a>, and they maintain a Kubernetes "<a href="https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/" rel="nofollow noreferrer">Cloud Controller Manager</a>" implementation:</p>
<blockquote>
<p><a href="https://github.com/digitalocean/digitalocean-cloud-controller-manager#kubernetes-cloud-controller-manager-for-digitalocean" rel="nofollow noreferrer">Kubernetes Cloud Controller Manager for DigitalOcean</a></p>
<p>Currently digitalocean-cloud-controller-manager implements:</p>
<ul>
<li><p>nodecontroller - updates nodes with cloud provider specific labels and
addresses, also deletes kubernetes nodes when deleted on the cloud
provider. </p></li>
<li><p><strong>servicecontroller - responsible for creating LoadBalancers
when a service of Type: LoadBalancer is created in Kubernete</strong>s.</p></li>
</ul>
</blockquote>
<p>To try it out clone the project on your master node.</p>
<p>Next get the token key from <a href="https://cloud.digitalocean.com/settings/api/tokens" rel="nofollow noreferrer">https://cloud.digitalocean.com/settings/api/tokens</a> and run:</p>
<pre><code>export DIGITALOCEAN_ACCESS_TOKEN=abc123abc123abc123
scripts/generate-secret.sh
kubectl apply -f do-cloud-controller-manager/releases/v0.1.6.yml
</code></pre>
<p>There are more examples <a href="https://github.com/digitalocean/digitalocean-cloud-controller-manager/tree/master/docs/controllers/services/examples" rel="nofollow noreferrer">here</a>.</p>
<p>What will happen once you do the above? When you create a Service of type LoadBalancer, DO's cloud controller manager will create a load balancer for it (one that has a failover mechanism out of the box; more on that in <a href="https://www.digitalocean.com/docs/networking/load-balancers/" rel="nofollow noreferrer">the load balancer's documentation</a>).</p>
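<p>For reference, a minimal sketch of such a Service (the name, selector and ports are placeholders; adjust them to your own workload):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
</code></pre>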
<p>Things will change again soon as DigitalOcean is jumping on the Kubernetes bandwagon; check <a href="https://blog.digitalocean.com/introducing-digitalocean-kubernetes/" rel="nofollow noreferrer">here</a> and you will have the choice to let them manage your Kubernetes cluster instead of worrying about a lot of the infrastructure yourself (this is my understanding of the service; let's see how it works when it becomes available...)</p>
|
<p>Sorry if this sounds like I'm lazy, but I've searched around and around and couldn't find it!</p>
<p>I'm looking for a reference that explains each of the fields that may exist in an OpenShift / Kubernetes template, e.g. what possible values there are.</p>
| <p>The templates you get in OpenShift are OpenShift specific and not part of Kubernetes. If you mean the purpose of each of the possible fields you can specify for a parameter, you can run <code>oc explain template</code>. For example:</p>
<pre><code>$ oc explain template.parameters
RESOURCE: parameters <[]Object>
DESCRIPTION:
parameters is an optional array of Parameters used during the Template to
Config transformation.
Parameter defines a name/value variable that is to be processed during the
Template to Config transformation.
FIELDS:
description <string>
Description of a parameter. Optional.
displayName <string>
Optional: The name that will show in UI instead of parameter 'Name'
from <string>
From is an input value for the generator. Optional.
generate <string>
generate specifies the generator to be used to generate random string from
an input value specified by From field. The result string is stored into
Value field. If empty, no generator is being used, leaving the result Value
untouched. Optional. The only supported generator is "expression", which
accepts a "from" value in the form of a simple regular expression
containing the range expression "[a-zA-Z0-9]", and the length expression
"a{length}". Examples: from | value -----------------------------
"test[0-9]{1}x" | "test7x" "[0-1]{8}" | "01001100" "0x[A-F0-9]{4}" |
"0xB3AF" "[a-zA-Z0-9]{8}" | "hW4yQU5i"
name <string> -required-
Name must be set and it can be referenced in Template Items using
${PARAMETER_NAME}. Required.
required <boolean>
Optional: Indicates the parameter must have a value. Defaults to false.
value <string>
Value holds the Parameter data. If specified, the generator will be
ignored. The value replaces all occurrences of the Parameter ${Name}
expression during the Template to Config transformation. Optional.
</code></pre>
<p>You can find more information in:</p>
<ul>
<li><a href="https://docs.openshift.org/latest/dev_guide/templates.html" rel="nofollow noreferrer">https://docs.openshift.org/latest/dev_guide/templates.html</a></li>
</ul>
<p>If that isn't what you mean, you will need to be more specific. If you are talking about fields on any resource object (templates are a specific type of resource object in OpenShift), you can use <code>oc explain</code> on any of them: pass the name of the resource type as an argument, and then a dotted path as you traverse into fields. If using plain Kubernetes, you can use <code>kubectl explain</code>.</p>
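<p>For example (the exact field paths below are just illustrations; any dotted path into a resource works the same way):</p>
<pre><code>$ oc explain template.objects
$ oc explain template.metadata.labels
$ kubectl explain pod.spec.containers.resources
</code></pre>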
|
<p>My question is simple.</p>
<p>How do I execute a bash command in the pod? I want to do everything with one bash command.</p>
<pre><code>[root@master ~]# kubectl exec -it --namespace="tools" mongo-pod --bash -c "mongo"
Error: unknown flag: --bash
</code></pre>
<p>So, the command is simply ignored.</p>
<pre><code>[root@master ~]# kubectl exec -it --namespace="tools" mongo-pod bash -c "mongo"
root@mongo-deployment-78c87cb84-jkgxx:/#
</code></pre>
<p>Or so.</p>
<pre><code>[root@master ~]# kubectl exec -it --namespace="tools" mongo-pod bash mongo
Defaulting container name to mongo.
Use 'kubectl describe pod/mongo-deployment-78c87cb84-jkgxx -n tools' to see all of the containers in this pod.
/usr/bin/mongo: /usr/bin/mongo: cannot execute binary file
command terminated with exit code 126
</code></pre>
<p>If it's just bash, it certainly works. But I want to jump into the mongo shell immediately.</p>
<p>I found a solution, but it does not work. Can you tell me if this is possible now?
<a href="https://stackoverflow.com/questions/43499313/executing-multiple-commands-or-from-a-shell-script-in-a-kubernetes-pod">Executing multiple commands( or from a shell script) in a kubernetes pod</a></p>
<p>Thanks.</p>
| <p>The double dash symbol "--" is used to separate the command you want to run inside the container from the kubectl arguments.
So the correct way is:</p>
<pre><code>kubectl exec -it --namespace=tools mongo-pod -- bash -c "mongo"
</code></pre>
<p>You forgot a space between "--" and "bash".</p>
<p>To execute multiple commands you may want:</p>
<ul>
<li><p>to create a script and mount it as a volume in your pod and execute it</p>
</li>
<li><p>to launch a side container with the script and run it</p>
</li>
</ul>
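<p>For simple cases you can also just chain the commands inside a single shell invocation. A small sketch, reusing the pod and namespace from the question (the chained commands themselves are only placeholders):</p>
<pre><code>kubectl exec -it --namespace=tools mongo-pod -- bash -c "cd /data && mongo --eval 'db.stats()'"
</code></pre>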
|
<p>I have created the single master kubernetes <code>v1.9.0</code> cluster using kubeadm command in bare metal server. Now I want to add two more master and make it multi master. </p>
<p>Is it possible to convert to a multi-master configuration? Is there a document available for this type of conversion? </p>
<p>I have found this link for <code>Kops</code> not sure same steps will work for other environment also.</p>
<p><a href="https://github.com/kubernetes/kops/blob/master/docs/single-to-multi-master.md" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/single-to-multi-master.md</a></p>
<p>Thanks
SR</p>
| <p>Yes, it's possible, but you may need to break your master setup temporarily.
You'll need to follow the instructions <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="noreferrer">here</a></p>
<p>In a nutshell:</p>
<p>Create a kubeadm config file. In that kubeadm config file you'll need to include the SAN for the loadbalancer you'll use. Example:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
api:
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
local:
extraArgs:
listen-client-urls: "https://127.0.0.1:2379,https://CP0_IP:2379"
advertise-client-urls: "https://CP0_IP:2379"
listen-peer-urls: "https://CP0_IP:2380"
initial-advertise-peer-urls: "https://CP0_IP:2380"
initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380"
serverCertSANs:
- CP0_HOSTNAME
- CP0_IP
peerCertSANs:
- CP0_HOSTNAME
- CP0_IP
networking:
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
podSubnet: "192.168.0.0/16"
</code></pre>
<p>Copy the certificates created to your new nodes. All the certs under <code>/etc/kubernetes/pki/</code> should be copied</p>
<p>Copy the <code>admin.conf</code> from <code>/etc/kubernetes/admin.conf</code> to the new nodes</p>
<p>Example:</p>
<pre><code>USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
scp /etc/kubernetes/admin.conf "${USER}"@$host:
done
</code></pre>
<p>Create your second kubeadm config file for the second node:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
api:
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
local:
extraArgs:
listen-client-urls: "https://127.0.0.1:2379,https://CP1_IP:2379"
advertise-client-urls: "https://CP1_IP:2379"
listen-peer-urls: "https://CP1_IP:2380"
initial-advertise-peer-urls: "https://CP1_IP:2380"
initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380,CP1_HOSTNAME=https://CP1_IP:2380"
initial-cluster-state: existing
serverCertSANs:
- CP1_HOSTNAME
- CP1_IP
peerCertSANs:
- CP1_HOSTNAME
- CP1_IP
networking:
# This CIDR is a calico default. Substitute or remove for your CNI provider.
podSubnet: "192.168.0.0/16"
</code></pre>
<p>Replace the following variables with the correct addresses for this node:</p>
<ul>
<li>LOAD_BALANCER_DNS</li>
<li>LOAD_BALANCER_PORT</li>
<li>CP0_HOSTNAME</li>
<li>CP0_IP</li>
<li>CP1_HOSTNAME</li>
<li>CP1_IP</li>
</ul>
<p>Move the copied certs to the correct location</p>
<pre><code>USER=ubuntu # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
</code></pre>
<p>Now you can start adding the new master using <code>kubeadm</code>:</p>
<pre><code> kubeadm alpha phase certs all --config kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml
systemctl start kubelet
</code></pre>
<p>Join the node to the etcd cluster:</p>
<pre><code> CP0_IP=10.0.0.7
CP0_HOSTNAME=cp0
CP1_IP=10.0.0.8
CP1_HOSTNAME=cp1
KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
kubeadm alpha phase etcd local --config kubeadm-config.yaml
</code></pre>
<p>and then finally, add the controlplane:</p>
<pre><code> kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
kubeadm alpha phase controlplane all --config kubeadm-config.yaml
kubeadm alpha phase mark-master --config kubeadm-config.yaml
</code></pre>
<p>Repeat these steps for the third master, and you should be good.</p>
|
<p>When creating the ingress, no address is generated, and when viewed from the GKE dashboard it is always in the <code>Creating ingress</code> status.
Describing the ingress does not show any events and I cannot see any clues on the GKE dashboard.</p>
<p>Has anyone has a similar issue or any suggestions on how to debug?</p>
<p>My deployment.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: mobile-gateway-ingress
spec:
backend:
serviceName: mobile-gateway-service
servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
name: mobile-gateway-service
spec:
ports:
- protocol: TCP
port: 80
targetPort: 8080
selector:
app: mobile-gateway
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mobile-gateway-deployment
labels:
app: mobile-gateway
spec:
selector:
matchLabels:
app: mobile-gateway
replicas: 2
template:
metadata:
labels:
app: mobile-gateway
spec:
containers:
- name: mobile-gateway
image: eu.gcr.io/my-project/mobile-gateway:latest
ports:
- containerPort: 8080
</code></pre>
<p>Describing ingress shows no events:</p>
<pre><code>mobile-gateway ➤ kubectl describe ingress mobile-gateway-ingress git:master*
Name: mobile-gateway-ingress
Namespace: default
Address:
Default backend: mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Rules:
Host Path Backends
---- ---- --------
* * mobile-gateway-service:80 (10.4.1.3:8080,10.4.2.3:8080)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"mobile-gateway-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"mobile-gateway-service","servicePort":80}}}
Events: <none>
hello ➤
</code></pre>
<p>With a simple LoadBalancer service, an IP address is given. The problem is only with the ingress resource.</p>
| <p>The problem in this case was that I did not include the addon <code>HttpLoadBalancing</code> when creating the cluster!
My fault, but it would have been nice to have an event on the ingress resource informing me of this mistake.</p>
<p>Strangely, when I created a new cluster to follow the tutorial cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer using the default addons (including <code>HttpLoadBalancing</code>), I observed the same issue. Maybe I didn't wait long enough? Anyway, it is working now that I have included the addon.</p>
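<p>For anyone hitting the same thing: the addon can be checked and enabled on an existing cluster without recreating it. A hedged sketch (cluster name and zone are placeholders):</p>
<pre><code># check whether the addon is enabled
gcloud container clusters describe my-cluster --zone europe-west1-b \
    --format='value(addonsConfig.httpLoadBalancing)'

# enable it in place
gcloud container clusters update my-cluster --zone europe-west1-b \
    --update-addons=HttpLoadBalancing=ENABLED
</code></pre>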
|
<p>I have a quick question regarding Kubernetes YAML files:</p>
<p>Can I reference a node label value in a PVC, using a node label as a variable?</p>
<p>Something like :</p>
<pre><code> volumes:
- name: data-storage
persistentVolumeClaim:
claimName: main-dev-pvc-${node.failure-domain.beta.kubernetes.io/zone}
</code></pre>
<p>Is it possible?</p>
| <p>Is the idea to force the use of a PV created in a specific zone? A <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">StorageClass</a> can help with that.</p>
<p>Custom StorageClass:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ssd-usc1a
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
zones: us-central1-a
</code></pre>
<p>PVC that refers to the custom storage class:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: main-dev-pvc-ssd-usc1a
namespace: dev-project
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: ssd-usc1a
</code></pre>
<p>Assuming the same namespace as for the PVC:</p>
<pre><code> volumes:
- name: data-storage
persistentVolumeClaim:
claimName: main-dev-pvc-ssd-usc1a
</code></pre>
<p>Kubernetes will try to schedule the pod on a node in the same zone where the PV is (us-central1-a in the example above). Quote from the docs: "...The scheduler (via the VolumeZonePredicate predicate) will also ensure that pods that claim a given volume are only placed into the same zone as that volume, as volumes cannot be attached across zones". Therefore, the scheduling will fail if there isn't such a suitable node.</p>
|
<p>Is there any resource out there that gives an overview of all the possible status conditions a kubernetes job can have?</p>
<p>I'm wondering because, when I run a job, I would like to check whether it is already running and, if so, exit the new job.</p>
<p>I came across <code>until kubectl get jobs myjob -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}' | grep True ; do sleep 1 ; done</code> quite a few times, but I want to know if it is running, not if it is already complete. I would prefer not to wait (ha) for the kubectl 1.11 wait functionality.</p>
| <p>The <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#jobcondition-v1-batch" rel="noreferrer">kubernetes API docs for JobCondition</a> imply that the only <code>type</code> values are “Complete” and “Failed”, and that they may have a <code>”True”</code> or <code>”False”</code> <code>status</code>.</p>
<p>In addition to the job status conditions array, you may also find it informative to look at the job status <code>active</code> count, and the <code>startTime</code> and <code>completionTime</code> if you’re just interested in whether it’s finished.</p>
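<p>If you just want to know whether the job is currently running, a hedged sketch along the same lines as the jsonpath one-liner in the question (reusing the <code>myjob</code> name from there) is to look at <code>status.active</code>, which is set while the job still has running pods:</p>
<pre><code># prints the number of active pods, or nothing if none are running
kubectl get job myjob -o jsonpath='{.status.active}'

# example guard: skip launching a new job while the previous one is active
if [ -n "$(kubectl get job myjob -o jsonpath='{.status.active}')" ]; then
  echo "myjob is still running, exiting"
  exit 0
fi
</code></pre>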
|
<p>Can I start minikube on a remote dedicated server to develop and test Kubernetes with my team?
For now we didn't want to build a real 3-node cluster to save money, and I can only find information about running minikube for local development.</p>
| <p>Minikube is only designed for local development. You need to deploy a Kubernetes cluster using something like <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/" rel="nofollow noreferrer">kubeadm</a>.</p>
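<p>A minimal sketch of what a small kubeadm-based cluster on your dedicated server(s) could look like (the pod CIDR, IP, token and hash below are placeholders; a pod network add-on such as Calico or Flannel still has to be installed afterwards):</p>
<pre><code># on the server that will act as the master
kubeadm init --pod-network-cidr=10.244.0.0/16

# on each additional server (kubeadm init prints the real join command)
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
</code></pre>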
|
<p>I have a Dockerfile that runs fine with CentOS as the base image and systemd enabled, as suggested in the official CentOS Docker Hub image documentation - <a href="https://hub.docker.com/_/centos/" rel="nofollow noreferrer">https://hub.docker.com/_/centos/</a>.</p>
<p>I have to start my container using the following command - </p>
<pre><code>docker run -d -p 8080:8080 -e "container=docker" --privileged=true -d --security-opt seccomp:unconfined --cap-add=SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro myapplicationImage bash -c "/usr/sbin/init"
</code></pre>
<p>Up to here everything works like a charm: I can run my image and everything works fine. I'm trying to deploy my image to Azure Container Service, so I created a YAML file that uses this Docker image and creates a cluster. </p>
<p><strong>My Yaml file looks like this.</strong></p>
<pre><code>apiVersion: apps/v2beta1
kind: Deployment
metadata:
name: myapp-test
spec:
replicas: 1
template:
metadata:
labels:
app: myapp-test
spec:
containers:
- name: myapp-test
image: myappregistry.azurecr.io/myapp-test:1.0
ports:
- containerPort: 8080
args: ["--allow-privileged=true","bash"]
securityContext:
capabilities:
add: ["SYS_ADMIN"]
privileged: true
command: [ "-c","/usr/sbin/init" ]
imagePullSecrets:
- name: myapp-secret-test
---
apiVersion: v1
kind: Service
metadata:
name: myapp-test
spec:
type: LoadBalancer
ports:
- port: 8080
selector:
app: myapp-test
</code></pre>
<p>This doesn't spin up my image. The above is a Kubernetes cluster YAML file.
I've also tried Docker Compose.</p>
<pre><code>version: '3'
services:
myapp-test:
build: ./myapp-folder
environment:
- container=docker
volumes:
- ./sys/fs/cgroup:/sys/fs/cgroup:ro
ports:
- "8082:8080"
privileged: true
cap_add:
- SYS_ADMIN
security_opt:
- seccomp:unconfined
command: "/usr/sbin/init"
</code></pre>
<p>Both of these configurations fail to create working containers. I'm using the same configuration as in the docker run command above, converted into YAML. The docker run command works and I can access my application, but the YAML files fail. Am I missing anything?</p>
<p><strong>here is my kubernetes error:</strong></p>
<pre><code>➜ $ kubectl get po --watch
NAME READY STATUS RESTARTS AGE
myapp-test-6957c57f6c-zmbt6 0/1 RunContainerError 4 9m
myapp-test-6957c57f6c-zmbt6 0/1 CrashLoopBackOff 4 9m
➜ $ kubectl get svc --watch
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp-test LoadBalancer 10.0.X.XXX 120.XXX.XX.XXX 8080:30150/TCP 12m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 43m
</code></pre>
<p><strong>In case of Docker Compose:</strong></p>
<p>The container starts fine, but the service inside my application fails to start. I cannot reach localhost:8080, but the container keeps running.</p>
<p>I'm wondering if it has something to do with my systemd-enabled container when running it under Compose or in the cluster?</p>
<p>Thanks!</p>
| <p>According to <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod" rel="nofollow noreferrer">the fine manual</a>, if you provide <code>command:</code> it supersedes <code>ENTRYPOINT</code>, and <code>args:</code> supersedes <code>CMD</code>, meaning your final "command" that image runs is:</p>
<pre><code>-c /usr/sbin/init --allow-privileged=true bash
</code></pre>
<p>which looks very suspicious with the leading <code>-c</code>, especially since your <code>docker-compose.yml</code> only contains <code>/usr/sbin/init</code>.</p>
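<p>A hedged sketch of how the container spec could be corrected so that it mirrors the working <code>docker run ... bash -c "/usr/sbin/init"</code> invocation (only the relevant fields are shown; everything else in the deployment stays as in the question):</p>
<pre><code>containers:
- name: myapp-test
  image: myappregistry.azurecr.io/myapp-test:1.0
  # command overrides ENTRYPOINT; no args needed, so the stray
  # "--allow-privileged=true bash" arguments are dropped entirely
  command: ["bash", "-c", "/usr/sbin/init"]
  ports:
  - containerPort: 8080
  securityContext:
    privileged: true
    capabilities:
      add: ["SYS_ADMIN"]
</code></pre>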
|
<p>I have created a Persistent Disk Claim for my Postgres database. </p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: pgdata
name: pgdata
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
</code></pre>
<p>Once I deleted the cluster, the volume got deleted too. What's the best practice here? Should I save the data in some other persistent disk like GCEPersistentDisk? I don't want to lose the volume even if the cluster is deleted, and what are the best strategies for deployment?</p>
| <p>In the PersistentVolume manifest you can set the reclaim policy (Retain or Delete; there was a third option, Recycle, which is now deprecated):</p>
<pre><code>persistentVolumeReclaimPolicy: Retain
</code></pre>
<p>Reference: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes</a></p>
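<p>For a volume that has already been dynamically provisioned (as with the default GKE storage class), a hedged alternative is to patch the reclaim policy of the existing PV so the underlying disk survives the claim or cluster going away; the PV name below is a placeholder you would look up first:</p>
<pre><code># find the PV bound to the pgdata claim
kubectl get pv

# switch its reclaim policy from Delete to Retain
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
</code></pre>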
|
<p>I am trying to get Ansible AWX installed on my Kubernetes cluster, but the RabbitMQ container is throwing a "Failed to get nodes from k8s" error.</p>
<p><strong>Below are the versions of the platforms I am using</strong> </p>
<pre><code>[node1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5",
GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean",
BuildDate:"2018-03-19T15:50:45Z", GoVersion:"go1.9.3", Compiler:"gc",
Platform:"linux/amd64"}
</code></pre>
<p>Kubernetes is deployed via the <a href="https://github.com/kubernetes-incubator/kubespray" rel="nofollow noreferrer">kubespray</a> playbook v2.5.0 and all the services and pods are up and running. (CoreDNS, Weave, IPtables) </p>
<p>I am deploying <a href="https://github.com/ansible/awx" rel="nofollow noreferrer">AWX</a> via the 1.0.6 release using the 1.0.6 images for awx_web and awx_task.</p>
<p>I am using an external PostgreSQL database at v10.4 and have verified the tables are being created by awx in the db.</p>
<p><strong>Troubleshooting steps I have tried.</strong></p>
<ul>
<li>I tried to deploy AWX 1.0.5 with the etcd pod to the same cluster and it has worked as expected </li>
<li>I have deployed a standalone <a href="https://github.com/rabbitmq/rabbitmq-peer-discovery-k8s/tree/master/examples/k8s_statefulsets" rel="nofollow noreferrer">RabbitMQ cluster</a> in the same k8s cluster, trying to mimic the AWX rabbit deployment as closely as possible, and it works with the rabbit_peer_discovery_k8s backend.</li>
<li>I have tried tweaking some of the rabbitmq.conf settings for AWX 1.0.6 with no luck; it just keeps throwing the same error.</li>
<li>I have verified the /etc/resolv.conf file has the kubernetes.default.svc.cluster.local entry</li>
</ul>
<p><strong>Cluster Info</strong></p>
<pre><code>[node1 ~]# kubectl get all -n awx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/awx 1 1 1 0 38m
NAME DESIRED CURRENT READY AGE
rs/awx-654f7fc84c 1 1 0 38m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/awx 1 1 1 0 38m
NAME DESIRED CURRENT READY AGE
rs/awx-654f7fc84c 1 1 0 38m
NAME READY STATUS RESTARTS AGE
po/awx-654f7fc84c-9ppqb 3/4 CrashLoopBackOff 11 38m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/awx-rmq-mgmt ClusterIP 10.233.10.146 <none> 15672/TCP 1d
svc/awx-web-svc NodePort 10.233.3.75 <none> 80:31700/TCP 1d
svc/rabbitmq NodePort 10.233.37.33 <none> 15672:30434/TCP,5672:31962/TCP 1d
</code></pre>
<p>AWX RabbitMQ error log</p>
<pre><code>[node1 ~]# kubectl logs -n awx awx-654f7fc84c-9ppqb awx-rabbit
2018-07-09 14:47:37.464 [info] <0.33.0> Application lager started on node '[email protected]'
2018-07-09 14:47:37.767 [info] <0.33.0> Application os_mon started on node '[email protected]'
2018-07-09 14:47:37.767 [info] <0.33.0> Application crypto started on node '[email protected]'
2018-07-09 14:47:37.768 [info] <0.33.0> Application cowlib started on node '[email protected]'
2018-07-09 14:47:37.768 [info] <0.33.0> Application xmerl started on node '[email protected]'
2018-07-09 14:47:37.851 [info] <0.33.0> Application mnesia started on node '[email protected]'
2018-07-09 14:47:37.851 [info] <0.33.0> Application recon started on node '[email protected]'
2018-07-09 14:47:37.852 [info] <0.33.0> Application jsx started on node '[email protected]'
2018-07-09 14:47:37.852 [info] <0.33.0> Application asn1 started on node '[email protected]'
2018-07-09 14:47:37.852 [info] <0.33.0> Application public_key started on node '[email protected]'
2018-07-09 14:47:37.897 [info] <0.33.0> Application ssl started on node '[email protected]'
2018-07-09 14:47:37.901 [info] <0.33.0> Application ranch started on node '[email protected]'
2018-07-09 14:47:37.901 [info] <0.33.0> Application ranch_proxy_protocol started on node '[email protected]'
2018-07-09 14:47:37.901 [info] <0.33.0> Application rabbit_common started on node '[email protected]'
2018-07-09 14:47:37.907 [info] <0.33.0> Application amqp_client started on node '[email protected]'
2018-07-09 14:47:37.909 [info] <0.33.0> Application cowboy started on node '[email protected]'
2018-07-09 14:47:37.957 [info] <0.33.0> Application inets started on node '[email protected]'
2018-07-09 14:47:37.964 [info] <0.193.0>
Starting RabbitMQ 3.7.4 on Erlang 20.1.7
Copyright (C) 2007-2018 Pivotal Software, Inc.
Licensed under the MPL. See http://www.rabbitmq.com/
## ##
## ## RabbitMQ 3.7.4. Copyright (C) 2007-2018 Pivotal Software, Inc.
########## Licensed under the MPL. See http://www.rabbitmq.com/
###### ##
########## Logs: <stdout>
Starting broker...
2018-07-09 14:47:37.982 [info] <0.193.0>
node : [email protected]
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.conf
cookie hash : at619UOZzsenF44tSK3ulA==
log(s) : <stdout>
database dir : /var/lib/rabbitmq/mnesia/[email protected]
2018-07-09 14:47:39.649 [info] <0.201.0> Memory high watermark set to 11998 MiB (12581714329 bytes) of 29997 MiB (31454285824 bytes) total
2018-07-09 14:47:39.652 [info] <0.203.0> Enabling free disk space monitoring
2018-07-09 14:47:39.653 [info] <0.203.0> Disk free limit set to 50MB
2018-07-09 14:47:39.658 [info] <0.205.0> Limiting to approx 1048476 file handles (943626 sockets)
2018-07-09 14:47:39.658 [info] <0.206.0> FHC read buffering: OFF
2018-07-09 14:47:39.658 [info] <0.206.0> FHC write buffering: ON
2018-07-09 14:47:39.660 [info] <0.193.0> Node database directory at /var/lib/rabbitmq/mnesia/[email protected] is empty. Assuming we need to join an existing cluster or initialise from scratch...
2018-07-09 14:47:39.660 [info] <0.193.0> Configured peer discovery backend: rabbit_peer_discovery_k8s
2018-07-09 14:47:39.660 [info] <0.193.0> Will try to lock with peer discovery backend rabbit_peer_discovery_k8s
2018-07-09 14:47:39.660 [info] <0.193.0> Peer discovery backend does not support locking, falling back to randomized delay
2018-07-09 14:47:39.660 [info] <0.193.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping randomized startup delay.
2018-07-09 14:47:39.665 [info] <0.193.0> Failed to get nodes from k8s - {failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},
{inet,[inet],nxdomain}]}
2018-07-09 14:47:39.665 [error] <0.192.0> CRASH REPORT Process <0.192.0> with 0 neighbours exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n {inet,[inet],nxdomain}]}"} in rabbit_mnesia:init_from_config/0 line 164 in application_master:init/4 line 134
2018-07-09 14:47:39.666 [info] <0.33.0> Application rabbit exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n {inet,[inet],nxdomain}]}"} in rabbit_mnesia:init_from_config/0 line 164
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{{case_clause,{error,\"{failed_connect,[{to_address,{\\"kubernetes.default.svc.cluster.local\\",443}},\n {inet,[inet],nxdomain}]}\"}},[{rabbit_mnesia,init_from_config,0,[{file,\"src/rabbit_mnesia.erl\"},{line,164}]},{rabbit_mnesia,init_with_lock,3,[{file,\"src/rabbit_mnesia.erl\"},{line,144}]},{rabbit_mnesia,init,0,[{file,\"src/rabbit_mnesia.erl\"},{line,111}]},{rabbit_boot_steps,'-run_step/2-lc$^1/1-1-',1,[{file,\"src/rabbit_boot_steps.erl\"},{line,49}]},{rabbit_boot_steps,run_step,2,[{file,\"src/rabbit_boot_steps.erl\"},{line,49}]},{rabbit_boot_steps,'-run_boot_steps/1-lc$^0/1-0-',1,[{file,\"src/rabbit_boot_steps.erl\"},{line,26}]},{rabbit_boot_steps,run_boot_steps,1,[{file,\"src/rabbit_boot_steps.erl\"},{line,26}]},{rabbit,start,2,[{file,\"src/rabbit.erl\"},{line,793}]}]}}}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{{case_clause,{error,"{failed_connect,[{to_address,{\"kubernetes.defau
Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
</code></pre>
<p>Kubernetes API service </p>
<pre><code>[node1 ~]# kubectl describe service kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.233.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 10.237.34.19:6443,10.237.34.21:6443
Session Affinity: ClientIP
Events: <none>
</code></pre>
<p>nslookup from a busybox in the same kubernetes cluster</p>
<pre><code>[node2 ~]# kubectl exec -it busybox -- sh
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.233.0.3
Address 1: 10.233.0.3 coredns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local
Address 1: 10.233.0.1 kubernetes.default.svc.cluster.local
</code></pre>
<p>Please let me know if I am missing anything that could help troubleshooting.</p>
| <p>I <em>believe</em> the solution is to omit <a href="https://github.com/ansible/awx/blob/1.0.6/installer/roles/kubernetes/templates/deployment.yml.j2#L40" rel="nofollow noreferrer">the explicit kubernetes host</a>. I can't think of any good reason one would need to <em>specify</em> the kubernetes api host from inside the cluster.</p>
<p>If for some terrible reason the RMQ plugin requires it, then try swapping in the <code>Service</code> IP (assuming your SSL cert for the master has its <code>Service</code> IP in the SANs list).</p>
<hr>
<p>As for <em>why</em> it is doing such a silly thing, the only good reason I can think of is that the RMQ <code>PodSpec</code> has somehow gotten a <code>dnsPolicy</code> of something other than <code>ClusterFirst</code>. If you truly wish to troubleshoot the RMQ Pod, then you can provide an explicit <code>command:</code> to run some debugging bash commands first, in order to interrogate the state of the container at launch, and then <code>exec /launch.sh</code> to resume booting up RMQ (<a href="https://github.com/ansible/awx-rabbitmq/blob/7f20bd7dc057beb4070d5a6032c3c70ac0b7f724/Dockerfile#L14" rel="nofollow noreferrer">as they do</a>)</p>
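<p>For illustration, such a temporary override could look roughly like this (the container name is a placeholder and <code>/launch.sh</code> is the entrypoint path from the linked Dockerfile; this is a sketch, not the actual chart):</p>
<pre><code>spec:
  template:
    spec:
      dnsPolicy: ClusterFirst        # the default; make sure nothing overrides it
      containers:
      - name: rabbitmq               # container name is a placeholder
        # dump the container's DNS state, then hand control back to the normal entrypoint
        command: ["/bin/sh", "-c", "cat /etc/resolv.conf; nslookup kubernetes.default.svc.cluster.local || true; exec /launch.sh"]
</code></pre>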
|
<p>I'm learning Kubernetes. I have a 3 node cluster, to which I have deployed nginx with 3 replicas. Therefore I have 3 replicated nginx pods.</p>
<p>I have provisioned a Load Balancer server to access these pods and exposed Port 80 - all this works fine.</p>
<p>That is, until I try to update the website. I think maybe I'm confused about how the Pods sync. </p>
<p>For example, on one of the pods I have changed the default index.html page. I did this by connecting to the shell on one of the pods and physically changing this file. Perhaps foolishly, I expected this change to be replicated to the other pods and to be visible when accessing the service via the load balancer. However, it is not. I occasionally see the changed page, but I guess that's only because the load balancer is doing some sort of round robin across the replicas.</p>
<p>So my questions as a newcomer to Kubernetes are, how do the replicas sync? Can you force the others to sync with another replica? Should it be automatic?</p>
<p>I should add that I have looked through the Kubernetes documentation and haven't really found the answer I'm looking for, so any help would be appreciated.</p>
<p>Also, I am running Kubernetes on GCP.</p>
| <p>Pods are considered disposable groupings of containers. You should not modify them directly; instead, use a <strong>Deployment</strong>.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Kubernetes deployments</a></p>
<p>There is a very nice interactive <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-interactive/" rel="nofollow noreferrer">tutorial</a> regarding updating.</p>
<p>The gist is.</p>
<ol>
<li>Create a new image and push this image.</li>
<li>Set the image of your deployment to this new image (see the sketch below).</li>
</ol>
<p>This will initiate a rolling update. </p>
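<p>A minimal sketch of that flow (image and deployment names here are only examples):</p>
<pre><code># bake the changed index.html into a new image and push it
docker build -t gcr.io/my-project/my-nginx:v2 .
docker push gcr.io/my-project/my-nginx:v2

# point the Deployment at the new tag; Kubernetes rolls the pods for you
kubectl set image deployment/my-nginx nginx=gcr.io/my-project/my-nginx:v2
kubectl rollout status deployment/my-nginx
</code></pre>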
|
<p>I have configured a working EFK stack (Elasticsearch, Fluentd, Kibana) in one of my Kubernetes clusters built in GCP. I have two more clusters and have installed the same EFK stack in those as well. Right now, if I want to monitor the logs of each cluster environment, I need to check all three Kibana consoles. Please let me know whether it is possible to centralize the EFK stacks built in the three clusters, so that I can see the pod logs from all my clusters in a single Kibana console. Any help or suggestion will be appreciated.</p>
| <p>In fact, <code>Kibana</code> only renders and lets you sort/manage data that exists in <code>Elasticsearch</code>. Let's say you have 3 k8s clusters; consequently, you have 3 <code>Fluentd</code> <code>DaemonSet</code>s. All you need to do is configure all the <code>Fluentd</code> deployments to send data to one and the same <code>Elasticsearch</code> endpoint, to which <code>Kibana</code> is connected.</p>
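<p>As a rough illustration (the host name is only a placeholder), the Elasticsearch output section of each cluster's Fluentd configuration would point at the same shared endpoint:</p>
<pre><code><match **>
  @type elasticsearch
  host elasticsearch.logging.my-central-domain.com   # the one shared Elasticsearch endpoint
  port 9200
  logstash_format true
</match>
</code></pre>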
|
<p>I have installed the Edge version of Docker for Windows 18.05.0-ce (Windows 10 Hyper-V) and enabled Kubernetes afterwards.<br>
On my other machine a kubectl context was created automatically, but on this new machine it was not.</p>
<pre><code>> kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
> kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
</code></pre>
<p>Can I somehow make Docker for Windows create the context?<br>
Or can I set it up manually?<br>
I am a little unsure how to get the information needed for the <code>kubectl config set-context</code> command.</p>
<p>I can run docker containers outside of Kubernetes.<br>
I see the Kubernetes containers running inside Docker.</p>
<pre><code>> docker ps
CONTAINER ID IMAGE COMMAND
8285ca0dd57a 353b8f1d102e "kube-scheduler --ad…"
3b25fdb0b7a6 40c8d10b2d11 "kube-controller-man…"
e81db90fa68e e03746fe22c3 "kube-apiserver --ad…"
2f19e723e0eb 80cc5ea4b547 "/kube-dns --domain=…"
etc...
</code></pre>
| <p>There is an issue with docker for windows when the <code>HOMEDRIVE</code> is set by a corporate policy.</p>
<p>If you set the <code>KUBECONFIG</code> environment variable to <code>C:\Users\my_username\.kube\config</code> (make sure the <code>$HOME</code> part is expanded to the real path; don't use <code>$HOME</code> itself), it should work.</p>
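<p>For example (assuming the default config location), in cmd or PowerShell respectively:</p>
<pre><code>:: cmd - persist the variable for new shells
setx KUBECONFIG "C:\Users\my_username\.kube\config"

# PowerShell - set it for the current session
$env:KUBECONFIG = "C:\Users\my_username\.kube\config"
</code></pre>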
<p>Further info: <a href="https://github.com/docker/for-win/issues/1651" rel="nofollow noreferrer">https://github.com/docker/for-win/issues/1651</a></p>
|
<p>I am trying to start minikube behind a corporate proxy on Windows machine. I am using the following start command</p>
<pre><code>minikube start --alsologtostderr --vm-driver="hyperv" --docker-env http_proxy=http://proxyabc.uk.sample.com:3128 --docker-env https_proxy=http://proxyabc.uk.sample.com:3128 --docker-env "NO_PROXY=localhost,127.0.0.1,192.168.211.157:8443"
</code></pre>
<p><strong>minikube version = 0.28.0</strong></p>
<p><strong>kubectl version = 1.9.2</strong></p>
<p>I've also tried setting the no proxy variable before the command</p>
<p>set NO_PROXY="$NO_PROXY,192.168.211.158/8443"</p>
<p>But every time I run the "minikube start" command I end up with the following message:</p>
<p><strong><em>Error starting cluster: timed out waiting to unmark master: getting node minikube: Get <a href="https://192.168.211.155:8443/api/v1/nodes/minikube" rel="nofollow noreferrer">https://192.168.211.155:8443/api/v1/nodes/minikube</a>: Forbidden</em></strong></p>
<p>I have already tried solutions at </p>
<p><a href="https://github.com/kubernetes/minikube/issues/2706" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/2706</a>
<a href="https://github.com/kubernetes/minikube/issues/2363" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/2363</a></p>
| <blockquote>
<p><code>set NO_PROXY="$NO_PROXY,192.168.211.158/8443"</code></p>
</blockquote>
<p>That slash is not the port, it's the <a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing" rel="nofollow noreferrer">CIDR</a> which defines how many IPs should be excluded from the proxy. Separately, it appears you somehow included the colon in the one provided to <code>--docker-env</code>, which I think is also wrong.</p>
<p>And, the <code>$NO_PROXY,</code> syntax in your <code>set</code> command is also incorrect, since that's the unix-y way of referencing environment variables -- you would want <code>set NO_PROXY="%NO_PROXY%,...</code> just be careful since unless you <em>already have</em> a variable named <code>NO_PROXY</code>, that <code>set</code> will expand to read <code>set NO_PROXY=",192.168.etcetc"</code> which I'm not sure is legal syntax for that variable.</p>
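<p>Putting that together, something along these lines should be closer (IPs are taken from your own output; the CIDR range is only an example):</p>
<pre><code>rem exclude the minikube VM from the proxy - no port, optionally a CIDR range
set NO_PROXY=localhost,127.0.0.1,192.168.211.0/24

minikube start --vm-driver="hyperv" ^
  --docker-env http_proxy=http://proxyabc.uk.sample.com:3128 ^
  --docker-env https_proxy=http://proxyabc.uk.sample.com:3128 ^
  --docker-env no_proxy=localhost,127.0.0.1,192.168.211.0/24
</code></pre>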
|
<p>Guys,
I want to watch all Kubernetes events, and I found the source code here: <a href="https://github.com/kubernetes/client-go/blob/master/informers/events/v1beta1/event.go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/blob/master/informers/events/v1beta1/event.go</a></p>
<p>However, I cannot find any examples of how to use these functions.
Can anyone help me? Thanks a lot!</p>
| <p>I'd suggest collecting the event logs as JSON with <code>kubectl</code> or the REST API [2]; you
can then send the logs to <code>fluentd</code> for centralized monitoring, such as Elasticsearch.</p>
<p>Here is a good sample [0]. It is written for OpenShift, but if you replace the <code>oc</code> commands with <code>kubectl</code> the same approach applies to Kubernetes (OpenShift is Enterprise Kubernetes).</p>
<p>[1] shows how to implement the <code>fluentd</code> - <code>Elasticsearch</code> stack.</p>
<p>I hope this helps you.</p>
<ul>
<li><p>[0] <a href="https://docs.openshift.com/container-platform/3.9/security/monitoring.html#security-monitoring-events" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.9/security/monitoring.html#security-monitoring-events</a></p></li>
<li><p>[1] <a href="https://docs.fluentd.org/v0.12/articles/recipe-json-to-elasticsearch" rel="nofollow noreferrer">https://docs.fluentd.org/v0.12/articles/recipe-json-to-elasticsearch</a></p></li>
<li><p>[2] <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#list-all-namespaces-292" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#list-all-namespaces-292</a></p></li>
</ul>
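<p>For illustration, the kubectl/REST route from [2] can be as simple as this (standard endpoints, nothing custom):</p>
<pre><code># stream every event in every namespace as JSON
kubectl get events --all-namespaces --watch -o json

# or go through the API directly
kubectl proxy &
curl -s 'http://127.0.0.1:8001/api/v1/events?watch=true'
</code></pre>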
|
<p>I have a job definition based on example from kubernetes website.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi-with-timeout-6
spec:
activeDeadlineSeconds: 30
completions: 1
parallelism: 1
template:
metadata:
name: pi
spec:
containers:
- name: pi
image: perl
command: ["exit", "1"]
restartPolicy: Never
</code></pre>
<p>I would like run this job once and not restart if fails. With comand exit 1 kubernetes trying to run new pod to get exit 0 code until reach activeDeadlineSeconds timeout. How can avoid that? I would like run build commands in kubernetes to check compilation and if compilation fails I'll get exit code different than 0. I don't want run compilation again.</p>
<p>Is it possible? How?</p>
| <p>By now this is possible by setting <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy" rel="noreferrer"><code>backoffLimit: 0</code></a>, which tells the controller to do 0 retries (the default is 6).</p>
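<p>For example, applied to the Job from the question (a sketch only; the <code>command</code> is adjusted so the container can actually exit non-zero):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout-6
spec:
  backoffLimit: 0          # no retries after the first failure
  activeDeadlineSeconds: 30
  completions: 1
  parallelism: 1
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-e", "exit 1"]
      restartPolicy: Never
</code></pre>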
|
<p>I've built the following script:</p>
<pre><code>import boto
import sys
import gcs_oauth2_boto_plugin
def check_size_lzo(ds):
    # URI scheme for Cloud Storage.
    CLIENT_ID = 'myclientid'
    CLIENT_SECRET = 'mysecret'
    GOOGLE_STORAGE = 'gs'
    dir_file = 'date_id={ds}/apollo_export_{ds}.lzo'.format(ds=ds)
    gcs_oauth2_boto_plugin.SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET)
    uri = boto.storage_uri('my_bucket/data/apollo/prod/' + dir_file, GOOGLE_STORAGE)
    key = uri.get_key()
    if key.size < 45379959:
        raise ValueError('umg lzo file is too small, investigate')
    else:
        print('umg lzo file is %sMB' % round((key.size/1e6), 2))


if __name__ == "__main__":
    check_size_lzo(sys.argv[1])
</code></pre>
<p>It works fine locally, but when I try to run it on the Kubernetes cluster I get the following error:</p>
<pre><code>boto.exception.GSResponseError: GSResponseError: 403 Access denied to 'gs://my_bucket/data/apollo/prod/date_id=20180628/apollo_export_20180628.lzo'
</code></pre>
<p>I have updated the .boto file on my cluster and added my OAuth client ID and secret, but I'm still having the same issue. </p>
<p>Would really appreciate help resolving this issue.</p>
<p>Many thanks!</p>
| <p>If it works in one environment and fails in another, I assume that you're getting your auth from a .boto file (or possibly from the OAUTH2_CLIENT_ID environment variable), but your kubernetes instance is lacking such a file. That you got a 403 instead of a 401 says that your remote server is correctly authenticating as somebody, but that somebody is not authorized to access the object, so presumably you're making the call as a different user.</p>
<p>Unless you've changed something, I'm guessing that you're getting <a href="https://cloud.google.com/docs/authentication/production#obtaining_credentials_on_compute_engine_kubernetes_engine_app_engine_flexible_environment_and_cloud_functions" rel="nofollow noreferrer">the default Kubernetes Engine auth</a>, with means <a href="https://cloud.google.com/compute/docs/access/service-accounts#compute_engine_default_service_account" rel="nofollow noreferrer">a service account associated with your project</a>. That service account probably hasn't been granted read permission for your object, which is why you're getting a 403. Grant it read/write permission for your GCS resources, and that should solve the problem.</p>
<p>Also note that by default the default credentials aren't scoped to include GCS, so <a href="https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes" rel="nofollow noreferrer">you'll need to add that as well</a> and then restart the instance.</p>
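<p>As a hedged example, granting the bucket permission to the Compute Engine default service account would look something like this (the account address below is the usual default and may differ in your project):</p>
<pre><code># give the node's service account read/write on the bucket's objects
gsutil iam ch \
  serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com:objectAdmin \
  gs://my_bucket
</code></pre>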
|
<p>I set up a K8s (1.11) cluster using the kubeadm tool. It has 1 master and one worker node. </p>
<ol>
<li><p>I applied dashboard UI there.
kubectl create -f <a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml" rel="noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml</a></p></li>
<li><p>Created service account (followed this link: <a href="https://github.com/kubernetes/dashboard/wiki/Creating-sample-user" rel="noreferrer">https://github.com/kubernetes/dashboard/wiki/Creating-sample-user</a>)</p></li>
</ol>
<blockquote>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
</code></pre>
</blockquote>
<p>and </p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
</code></pre>
<p>Start kube proxy: <code>kubectl proxy --address 0.0.0.0 --accept-hosts '.*'</code> </p>
<p>And access dashboard from remote host using this URL: <code>http://<k8s master node IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login</code></p>
<p>It's asking for a token to log in; I got the token using this command: <code>kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')</code> </p>
<p>After copying the token and pasting it into the browser, it's not logging in. It's not showing an authentication error either… Not sure what is wrong here. Is my token wrong, or my kube proxy command? </p>
| <p>I recreated all the steps in accordance with what you've posted.</p>
<p>Turns out the issue is in the <code><k8s master node IP></code>, you should use localhost in this case. So to access the proper dashboard, you have to use:</p>
<p><a href="http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login" rel="noreferrer">http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login</a></p>
<p>When you start kubectl proxy - you create a tunnel to your apiserver on the master node. By default, Dashboard is starting with ServiceType: ClusterIP. The Port on the master node in this mode is not open, and that is the reason you can't reach it on the 'master node IP'. If you would like to use master node IP, you have to change the ServiceType to NodePort.</p>
<p>You have to delete the old service and update the config by changing service type to NodePort as in the example below (note that ClusterIP is not there because it is assumed by default).</p>
<p>Create a new yaml file name newservice.yaml</p>
<pre><code>---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
</code></pre>
<p>Delete the old service</p>
<pre><code> kubectl delete service kubernetes-dashboard -n kube-system
</code></pre>
<p>Apply the new service</p>
<pre><code>kubectl apply -f newservice.yaml
</code></pre>
<p>Run describe service</p>
<pre><code>kubectl describe svc kubernetes-dashboard -n kube-system | grep "NodePort"
</code></pre>
<p>and you can use that port with the IP address of the master node</p>
<pre><code>Type: NodePort
NodePort: <unset> 30518/TCP
http://<k8s master node IP>:30518/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
</code></pre>
<p>Note that the port number is generated randomly and yours will be probably different. </p>
|
<p>I have a .NET Core console application running in a docker container that I am deploying through Kubernetes.
When I update the deployment image, I would like to keep the existing pod around for a while, without accepting new connections, but to keep existing connections alive for a period to allow existing users to finish. </p>
<p>Looking at the Kubernetes docs, I thought that termination grace period seconds was the property to add, but it doesn't seem to be working. As soon as I change the image listed in the deployment then the existing pod is dropped - the grace period is not applied. </p>
<p>Does anyone have any ideas as to what I'm doing wrong in this instance? I can't see anything in the docs. </p>
<p>Bit from my .yml file below:</p>
<pre><code>spec:
terminationGracePeriodSeconds: 60
containers:
- name: myApplication
</code></pre>
| <p>The way the grace period works is that the main Docker process is immediately sent a SIGTERM signal, and then it is allowed a certain amount of time to exit on its own before it is more forcefully shut down. If your app is quitting right away, it is because it exits as soon as it receives this signal.</p>
<p>Your app could catch the SIGTERM signal, and then quit on its own after all the open operations complete. Or it could catch the SIGTERM signal and just do nothing and wait for it to be forced down a different way.</p>
|
<p>Guys,
I want to watch all Kubernetes events, and I found the source code here: <a href="https://github.com/kubernetes/client-go/blob/master/informers/events/v1beta1/event.go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/blob/master/informers/events/v1beta1/event.go</a></p>
<p>However, I cannot find any examples of how to use these functions.
Can anyone help me? Thanks a lot!</p>
| <p>One or several of these could help:</p>
<ul>
<li><p>"watches" for (quote) "...efficient change notifications on resources" - see <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/" rel="nofollow noreferrer">Kubernetes API Concepts</a> as well as the <a href="https://kubernetes.io/docs/reference/#api-reference" rel="nofollow noreferrer">API Reference</a> for a particular version. Example: <code>GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245</code></p></li>
<li><p>Event <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#-strong-read-operations-strong--291" rel="nofollow noreferrer">Read Operations</a>.</p></li>
<li><p><code>kubectl get</code> allows you to specify the -w or --watch flag to start watching updates to a particular object.</p></li>
</ul>
<p>I believe the events are for a particular resource or collection of resources, not for all resources.</p>
|
<p>I have a cluster running on GCP that currently consists entirely of preemtible nodes. We're experiencing issues where kube-dns becomes unavailable (presumably because a node has been preempted). We'd like to improve the resilience of DNS by moving <code>kube-dns</code> pods to more stable nodes.</p>
<p>Is it possible to schedule system cluster critical pods like <code>kube-dns</code> (or all pods in the <code>kube-system</code> namespace) on a node pool of only non-preemptible nodes? I'm wary of using affinity or anti-affinity or taints, since these pods are auto-created at cluster bootstrapping and any changes made could be clobbered by a Kubernetes version upgrade. Is there a way do do this that will persist across upgrades?</p>
| <p>The solution was to use taints and tolerations in conjunction with node affinity. We created a second node pool, and added a taint to the preemptible pool.</p>
<p>Terraform config:</p>
<pre><code>resource "google_container_node_pool" "preemptible_worker_pool" {
node_config {
...
preemptible = true
labels {
preemptible = "true"
dedicated = "preemptible-worker-pool"
}
taint {
key = "dedicated"
value = "preemptible-worker-pool"
effect = "NO_SCHEDULE"
}
}
}
</code></pre>
<p>We then used a <code>toleration</code> and <code>nodeAffinity</code> to allow our existing workloads to run on the tainted node pool, effectively forcing the cluster-critical pods to run on the untainted (non-preemtible) node pool.</p>
<p>Kubernetes config:</p>
<pre><code>spec:
template:
spec:
# The affinity + tolerations sections together allow and enforce that the workers are
# run on dedicated nodes tainted with "dedicated=preemptible-worker-pool:NoSchedule".
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: dedicated
operator: In
values:
- preemptible-worker-pool
tolerations:
- key: dedicated
operator: "Equal"
value: preemptible-worker-pool
effect: "NoSchedule"
</code></pre>
|
<p>I am reading a blog about integrating EFK (a logging system) into k8s on CentOS 7.4. It gives the following instructions:</p>
<pre><code># mkdir ~/es_data
# chmod g+rwx es_data
# chgrp 1000 es_data
# chown 1000 -R es_data
# ls -l /root/es_data/
total 8
drwxrwxr-x 2 1000 1000 4096 Jun 8 09:50 ./
drwx------ 8 root root 4096 Jun 8 09:50 ../
</code></pre>
<p>I log in as root. The instructions say that if I do chgrp 1000 es_data and chown 1000 -R es_data, the directory's owner and group would be 1000. But when I follow the instructions:</p>
<p>I see following:</p>
<pre><code> drwxr-xr-x. 2 master16g master16g 6 Jul 11 15:27 es_data
</code></pre>
<p>The owner and group appear as <code>master16g</code>, which is also the machine hostname. </p>
<p>Could someone give me a hint about what happens here with <code>chgrp 1000 and chown 1000</code>?</p>
| <p><code>chown</code> changes the owner and <code>chgrp</code> changes the group. Because you have a user and a group both named <code>master16g</code>, with <code>1000</code> as their UID and GID respectively, you see those names in the listing rather than the raw numeric IDs. <code>chown</code> accepts a UID as a parameter as well as a username (this is well documented in the manual), and <code>chgrp</code> likewise accepts a GID or a group name. You can also change both with one command: <code>chown 1000:1000 es_data -R</code> or <code>chown master16g:master16g es_data -R</code>.</p>
<p>First Linux user has usually UID/GID 1000.</p>
<p>For instance, if you <code>chown 0:1000 file</code> you will see <code>root:master16g</code> as the file owner.</p>
<p>You can get the details of the <code>elasticsearch</code> user by running <code>id elasticsearch</code>.</p>
|
<p>For every command with <code>kubectl</code> I need to use <code>sudo kubectl</code>. </p>
<p>I understand the security perspective but I am working on a test environment and I want to be able use it without <code>sudo</code>. </p>
<p>I tried to run <code>sudo -i</code> and use the root account to run<code>kubectl get pods</code> but I received: </p>
<pre><code>The connection to the server localhost:8080 was refused - did you
specify the right host or port?
</code></pre>
<p>I noticed that when I was playing with <a href="https://labs.play-with-k8s.com" rel="noreferrer">https://labs.play-with-k8s.com</a>, the user is root and I can run <code>kubectl</code> freely. </p>
<p>I wanted to have the same thing on my Ubuntu machine with my Minikube. </p>
<p>When I run<code>kubectl get pods</code> with my regular account I received the error: </p>
<pre><code>error: unable to read client-key /home/myuser/.minikube/client.key for minikube due to open /home/myuser/.minikube/client.key: permission denied
</code></pre>
<p>I suppose there are two ways:<br>
1. Give everyone access to <code>/home/myuser/.minikube/</code><br>
2. Give my account permissions to run <code>kubectl</code> without <code>sudo</code> </p>
<p>EDIT:<br>
Following @Konstantin Vustin request, here are the requested information: </p>
<pre><code>myuser@ubuntu:/usr/local/bin$ ls -l $(which kubectl)
-rwxrwxr-x 1 myuser myuser 54308597 Jun 13 05:21 /usr/local/bin/kubectl
myuser@ubuntu:/usr/local/bin$ ls -la ~ | grep kube
drwxr-xr-x 5 myuser myuser 4096 Jun 17 02:25 .kube
drwxrwxr-x 10 myuser myuser 4096 Jun 13 05:18 .minikube
myuser@ubuntu:/usr/local/bin$ ls -l ~/.kube
total 24
drwxr-xr-x 3 root root 4096 Jun 13 05:26 cache
-rw------- 1 myuser myuser 911 Jun 13 05:27 config
drwxrwxr-x 3 myuser myuser 4096 Jul 11 01:37 http-cache
</code></pre>
| <p><strong>Fix file permissions</strong></p>
<p>Most likely your kubectl files are not owned by your user.</p>
<p>You can set these permissions using below command.</p>
<pre><code>sudo chown -R $USER $HOME/.kube
</code></pre>
<p><strong>Run kubectl with sudo</strong></p>
<p>Alternatively you can run kubectl as sudo user using a <a href="https://www.sudo.ws/man/1.8.23/sudo.man.html#s" rel="noreferrer">persistent sudo shell</a>.</p>
<pre><code>sudo -s
</code></pre>
<p>then run your kubectl commands</p>
<pre><code>kubectl get pods
kubectl describe <resource_type> <resource_name>
</code></pre>
<p>finally exit the sudo shell</p>
<pre><code>exit
</code></pre>
|
<p>I have configured my ingress to support SSL:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "service"
annotations:
nginx.ingress.kubernetes.io/whitelist-source-range: "x.x.x.x/xx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
tls:
- hosts:
- "example.com"
secretName: example.name
rules:
- host: "example.com"
http:
paths:
- path: /
backend:
serviceName: service
servicePort: 80
</code></pre>
<p>With my config above, only IPs in the whitelist can access the domain over both HTTP and HTTPS. But I would like to configure it so that all IP addresses can access <code>https://example.com</code> (HTTPS), while only the IP addresses in the whitelist can access it without SSL - <code>http://example.com</code>.</p>
| <p>I resolved my issue by adding more configuration to the nginx location (which listens for both HTTP and HTTPS) using the <code>nginx.ingress.kubernetes.io/configuration-snippet</code> annotation.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "service"
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
# The configs to allow all IPs access via https and allow some IPs in
# security whitelist access via http
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($https) {
set $allow_ip true;
}
      if ($remote_addr ~ (x.x.x.x|y.y.y.y)) {
set $allow_ip true;
}
if ($allow_ip != true) {
return 403;
}
spec:
tls:
- hosts:
- "example.com"
secretName: example.name
rules:
- host: "example.com"
http:
paths:
- path: /
backend:
serviceName: service
servicePort: 80
</code></pre>
|
<p>I have Kubernetes running on 4 CentOS 7 boxes, master and minions. I also have flannel and SkyDNS installed. The flannel overlay IP range is 172.17.0.0/16 and my service cluster IP range is 10.254.0.0/16. I'm running Spinnaker pods on the k8s cluster. What I see is that the Spinnaker services are unable to find each other. Each pod gets an IP from the 172.17 range and I can ping the pods from any of the nodes using that IP. However, the services themselves use the cluster IP and are unable to talk to each other. Since kube-proxy is the one that should be forwarding this traffic, I looked at the iptables rules and I see this:</p>
<pre><code>[root@MultiNode4 ~$]iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere 10.254.206.105 /* spinnaker/spkr-clouddriver: has no endpoints */ tcp dpt:afs3-prserver reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.254.162.75 /* spinnaker/spkr-orca: has no endpoints */ tcp dpt:us-srv reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.254.62.109 /* spinnaker/spkr-rush: has no endpoints */ tcp dpt:8085 reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.254.68.125 /* spinnaker/spkr-echo: has no endpoints */ tcp dpt:8089 reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.254.123.127 /* spinnaker/spkr-front50: has no endpoints */ tcp dpt:webcache reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.254.36.197 /* spinnaker/spkr-gate: has no endpoints */ tcp dpt:8084 reject-with icmp-port-unreachable
</code></pre>
<p>Seems like kube-proxy is unable to forward. I have no errors on kube-proxy startup:</p>
<pre><code>[root@MultiNode4 ~$]systemctl status kube-proxy -l
kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2016-07-07 02:54:54 EDT; 1h 10min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 7866 (kube-proxy)
Memory: 3.6M
CGroup: /system.slice/kube-proxy.service
└─7866 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://centos-master:8080
Jul 07 02:54:54 clm-aus-015349.bmc.com systemd[1]: Started Kubernetes Kube-Proxy Server.
Jul 07 02:54:54 clm-aus-015349.bmc.com systemd[1]: Starting Kubernetes Kube-Proxy Server...
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: E0707 02:54:54.754845 7866 server.go:340] Can't get Node "multiNode4", assuming iptables proxy: nodes "MultiNode4" not found
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.756460 7866 server.go:200] Using iptables Proxier.
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.756527 7866 proxier.go:208] missing br-netfilter module or unset br-nf-call-iptables; proxy may not work as intended
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.756551 7866 server.go:213] Tearing down userspace rules.
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.770100 7866 conntrack.go:36] Setting nf_conntrack_max to 262144
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.770145 7866 conntrack.go:41] Setting conntrack hashsize to 65536
Jul 07 02:54:54 clm-aus-015349.bmc.com kube-proxy[7866]: I0707 02:54:54.771445 7866 conntrack.go:46] Setting nf_conntrack_tcp_timeout_established to 86400
</code></pre>
<p>What am I missing?</p>
| <p>I had the same problem, and it turned out that the Service selector was wrong - it didn't match the labels on the pods, which is why the Services had no endpoints (exactly what the "has no endpoints" REJECT rules in your iptables output indicate).
After fixing the selector, everything works just fine.</p>
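<p>A quick way to confirm this kind of mismatch (service and namespace names below are taken from your iptables output) is to compare the Service selector with the labels the pods actually carry:</p>
<pre><code># what the Service is selecting on
kubectl get svc spkr-gate -n spinnaker -o jsonpath='{.spec.selector}'

# what the pods are actually labelled with
kubectl get pods -n spinnaker --show-labels

# if they don't match, the endpoints list stays empty - exactly the
# "has no endpoints" REJECT rules kube-proxy programmed above
kubectl get endpoints -n spinnaker
</code></pre>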
|
<p>How can I describe this command in YAML format?</p>
<pre><code>kubectl create configmap somename --from-file=./conf/nginx.conf
</code></pre>
<p>I'd expect to do something like the following yaml, but it doesn't work </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: somename
namespace: default
fromfile: ./conf/nginx.conf
</code></pre>
<p>any idea?</p>
| <p>That won't work, because kubernetes isn't aware of the local file's path. You can simulate it by doing something like this:</p>
<pre><code>kubectl create configmap --dry-run=client somename --from-file=./conf/nginx.conf --output yaml
</code></pre>
<p>The <code>--dry-run</code> flag will simply show your changes on stdout, and not make the changes on the server. This will output a valid configmap, so if you pipe it to a file, you can use that:</p>
<pre><code>kubectl create configmap --dry-run=client somename --from-file=./conf/nginx.conf --output yaml | tee somename.yaml
</code></pre>
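<p>The generated manifest ends up with the file contents inlined under <code>data</code>, keyed by the file name - roughly like this (contents abbreviated):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: somename
data:
  nginx.conf: |
    # ...the contents of ./conf/nginx.conf end up here...
</code></pre>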
|
<p>We can use the below command to install the Azure CLI on Linux:</p>
<p>curl -L <a href="https://aka.ms/InstallAzureCli" rel="nofollow noreferrer">https://aka.ms/InstallAzureCli</a> | bash</p>
<p>But what if we want to install a specific version of the Azure CLI, let's say version 2.0.23, because 2.0.24 has some issue?</p>
<p>Please Help!</p>
| <p>If you want to get the complete list of versions for different distros: <a href="https://packages.microsoft.com/repos/azure-cli/dists/" rel="nofollow noreferrer">https://packages.microsoft.com/repos/azure-cli/dists/</a></p>
<p>E.g Azure CLI versions for Ubuntu
<a href="https://packages.microsoft.com/repos/azure-cli/dists/xenial/main/binary-amd64/Packages" rel="nofollow noreferrer">https://packages.microsoft.com/repos/azure-cli/dists/xenial/main/binary-amd64/Packages</a></p>
<pre><code>Package: azure-cli
Priority: extra
Section: python
Installed-Size: 317827
Maintainer: Azure Python CLI Team <[email protected]>
Architecture: all
Version: 2.0.41-1~xenial
Depends: libc6 (>= 2.17), libssl1.0.0 (>= 1.0.2~beta3)
Filename: pool/main/a/azure-cli/azure-cli_2.0.41-1~xenial_all.deb
Size: 40889424
MD5sum: cc807f4010b6af9507d74ef5404ee09c
SHA1: f593bffa3731a3a670b927723ca97071b0166471
SHA256: 5cfa52a187ac7d028c663f16c4bfea7ac7e49e5917cf90fe3a0299eaa496a194
SHA512: 384bf978379616522405d9e3e25645243c8c8529cd675c7bd036b8817f9220c2cdf3c1a6c3660f23f5775eeffab740608d678c5508b6aeca0359fc4370f84c74
Description: Azure CLI 2.0
A great cloud needs great tools; we're excited to introduce Azure CLI 2.0,
our next generation multi-platform command line experience for Azure.
Homepage: https://github.com/azure/azure-cli
Package: azure-cli
Priority: extra
Section: python
Installed-Size: 314381
Maintainer: Azure Python CLI Team <[email protected]>
Architecture: all
**Version: 2.0.38-1~xenial**
Depends: libc6 (>= 2.17), libssl1.0.0 (>= 1.0.2~beta3)
Filename: pool/main/a/azure-cli/azure-cli_2.0.38-1~xenial_all.deb
Size: 40504494
MD5sum: 7f076d5a7cbc38ab02913c73b2f2c372
SHA1: 10e1460570c72a1a28908c0ed73520e5ccd43a99
SHA256: c127e448b21c746c7bb67e8ab5074802458aaeb84516f746c25463fa20f45b1b
SHA512: 0969e2260ee48401df3796c26b682a8fb39b1a5d7bf5f7c9613a102dfcf3052d362cf93664317f8f4e035efa673c77316009d15d76ef8b378b829cec43f1836f
Description: Azure CLI 2.0
A great cloud needs great tools; we're excited to introduce Azure CLI 2.0,
our next generation multi-platform command line experience for Azure.
Homepage: https://github.com/azure/azure-cli
</code></pre>
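<p>With one of those version strings in hand, apt can pin it directly (the version string below follows the pattern shown above and is only an example):</p>
<pre><code># see which versions your configured repo offers
apt-cache policy azure-cli

# install a specific one
sudo apt-get install azure-cli=2.0.23-1~xenial
</code></pre>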
|
<p>I want to filter the output of a <code>kubectl</code> command running in <code>--watch</code> mode to keep an eye on changes to a certain k8s annotation. I'm using the following command:</p>
<p><code>kubectl get pod my-pod --watch -o json | jq '.metadata.annotations["my-annotation"]'</code></p>
<p>Problem is - nothing is printed until I stop the command. I'd like the <code>jq</code> to process the input and print it as the changes to <code>my-pod</code> happen.</p>
<p>I tried using the <code>--unbuffered</code> flag but it doesn't fix it.</p>
| <p>So I've tested your command and it works perfectly. You are, however, missing a quote at the end of your command.</p>
<pre><code>kubectl get pod nginx-5dfd5597bb-tp8h7 --watch -o json | jq '.metadata.name'
</code></pre>
<p>gives me </p>
<pre><code>"nginx-5dfd5597bb-tp8h7"
</code></pre>
<p>If you can get this command to work but not your own, it is probably related to the brackets and quotes or missing object keys.</p>
|
<p>I recently started looking into istio and got confused by the destination rule configuration.</p>
<p>Say I have a service A that have 10 pods running behind it. I pushed a destination rule that has two subsets with label version=v1 and version=v2.</p>
<p>So I'm wondering what will happen to the 10 pods under the hood? Will they be divided into two subsets automatically or just remain unlabeled? Or the subsets will only be valid when the pods themselves are labeled with version=v1 and version=v2?</p>
<p>Thanks a lot!</p>
| <p>The general purpose of a <code>DestinationRule</code> resource is to specify how network traffic will reach the underlying Pods in your Kubernetes cluster.
The <code>subsets</code> parameter in Istio defines labels that identify version-specific instances. </p>
<p>The below example of Istio <code>DestinationRule</code> configuration demonstrates how it works and can potentially reproduce your case:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
</code></pre>
<p>Actually, label <code>version: v1</code> indicates that only Pods in Kubernetes marked with the same label will receive network traffic; therefore the same approach will work for label <code>version: v2</code>.</p>
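<p>For instance, a Deployment whose Pods should land in the <code>v1</code> subset would carry that label in its Pod template (a sketch; names mirror the example above and the image is a placeholder):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1      # this is what the DestinationRule subset matches on
    spec:
      containers:
      - name: reviews
        image: example/reviews:v1
</code></pre>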
<p>There are several resources available in Istio that can expand functionality for network management purposes as described in <a href="https://istio.io/docs/concepts/traffic-management/rules-configuration/" rel="nofollow noreferrer">Official documentation</a>.</p>
|
<p>I'm looking for a way to deploy a pod on kubernetes to run for a few hours each day. Essentially I want it to run every morning at 8AM and continue running until about 5:30 PM. </p>
<p>I've been researching a lot and haven't found a way to deploy the pod with a specific timeframe in mind. I've found cron jobs, but those seem to be for pods that terminate themselves, whereas mine should be running constantly.</p>
<p>Is there any way to deploy my pod on kubernetes this way? Or should I just set up the pod itself to run its intended application based on its internal clock?</p>
| <p>According to the Kubernetes architecture, a Job creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the job tracks the successful completions. When a specified number of successful completions is reached, the job itself is complete. </p>
<p>In simple words, Jobs run until completion or failure. That's why there is no option to schedule a Cron Job termination in Kubernetes.</p>
<p>In your case, you can start a Cron Job regularly and terminate it using one of the following options:</p>
<ol>
<li><p>A better way is to terminate a container by itself, so you can add such functionality to your application or use Cron. More information about how to add Cron to the Docker container, you can find <a href="https://stackoverflow.com/questions/37458287/how-to-run-a-cron-job-inside-a-docker-container">here</a>.</p></li>
<li><p>You can use another Cron Job to terminate your Cron Job. You need to run a command inside a Pod to find and delete a Pod related to your Job. For more information, you can look through this <a href="https://stackoverflow.com/questions/42642170/kubernetes-how-to-run-kubectl-commands-inside-a-container">link</a>. But it is not a good way, because your Cron Job will always have <code>failed</code> status.</p></li>
</ol>
<p>In both cases, you need to check with what status your Cron Job was finished and use the correct <code>RestartPolicy</code> accordingly.</p>
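<p>A rough sketch of option 2 - a CronJob that stops the workload at 17:30 (the image, label and service account here are placeholders; the account needs RBAC permission to delete pods):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: stop-my-worker
spec:
  schedule: "30 17 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner
          containers:
          - name: kubectl
            image: lachlanevenson/k8s-kubectl     # any image that ships kubectl
            command: ["kubectl", "delete", "pod", "-l", "app=my-worker"]
          restartPolicy: Never
</code></pre>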
|
<p>How can I set the timeout for the kubectl exec command ?</p>
<p>The below command does not work</p>
<pre><code>kubectl exec -it pod_name bash --requrest-timeout=0 -n test
</code></pre>
| <p>You have a typo, try:</p>
<pre><code>kubectl exec -it pod_name bash --request-timeout=0 -n test
</code></pre>
<p>See <a href="https://kubernetes.io/docs/reference/kubectl/kubectl/" rel="nofollow noreferrer">kubectl official documentation</a> about request-timeout</p>
<pre><code>--request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
</code></pre>
<p>Note that "0" is already the default.</p>
|
<p>I am trying to install a kops cluster on AWS, and as a prerequisite I installed kubectl as per the instructions provided here:</p>
<p><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl" rel="noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl</a></p>
<p>But when I try to verify the installation, I get the below error.</p>
<pre><code>ubuntu@ip-172-31-30-74:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>I am not sure why, because I had set up a cluster the same way earlier and everything worked fine.
Now I wanted to set up a new cluster, but I am kind of stuck on this.</p>
<p>Any help appreciated.</p>
| <p>Two things :</p>
<ol>
<li>If every instruction was followed properly and you are still facing the same issue, @VAS's answer might help.</li>
<li>However, in my case I was trying to verify with kubectl as soon as I deployed the cluster. Note that, depending on the size of the master and worker nodes, the cluster takes some time to come up. </li>
</ol>
<p>Once the cluster was up, kubectl was able to communicate successfully. As silly as it sounds, I waited 15 minutes or so until my master was fully running. Then everything worked fine.</p>
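<p>For a kops cluster you don't have to guess; you can check readiness explicitly (plain kops/kubectl commands, nothing project-specific):</p>
<pre><code>kops validate cluster      # reports whether the masters and nodes are healthy yet
kubectl get nodes          # all nodes should eventually show STATUS Ready
</code></pre>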
|
<p>I've got a problem with our Kubernetes not issuing certificates for kubelet.</p>
<p>The kubelet is submitting a CSR, and this seems to get approved; at this point the certificate should be issued, but this step never seems to take place.</p>
<p>I searched far and wide but nothing...</p>
<pre><code>$ kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-52xv4 11m kubelet Approved
csr-97rrv 43m kubelet Approved
csr-9p8gz 28m kubelet Approved
csr-n578g 53m kubelet Approved
csr-s76sv 44m kubelet Approved
csr-z2xhg 45m kubelet Approved
</code></pre>
<p>As a result, none of the new nodes/kubelets get a certificate issued at all.</p>
<p>This seems to have started with no particular change to the environment, and I can't find a single log entry that would indicate a problem with issuing certs.</p>
<p>Has anyone ever seen this?</p>
<p>Kind regards</p>
| <p>Just to let you know the original cause of the problem: somehow, and we still don't know how, the file holding the CA certificate on one of the master nodes (/srv/kubernetes/ca.crt) had its content duplicated. Strange... During troubleshooting we tested that certificate using openssl, which read it without any issues, and that misled us into assuming it had to be OK...</p>
<p>Fixing the file's content by hand immediately resolved the problem.</p>
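<p>For anyone hitting something similar, a quick sanity check (plain openssl/grep, nothing cluster-specific) would have caught it, since <code>openssl x509</code> only reads the first certificate block in a file:</p>
<pre><code># a plain CA file should contain exactly one certificate block
grep -c 'BEGIN CERTIFICATE' /srv/kubernetes/ca.crt

# openssl happily parses just the first block, which is why it looked fine
openssl x509 -in /srv/kubernetes/ca.crt -noout -subject -dates
</code></pre>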
|
<p>I have an angular app I'm trying to deploy to Kubernetes in Google Cloud. My problem is, that whenever I try to navigate to a path within the app that's not the root path, for example <a href="http://[ip]/demo" rel="noreferrer">http://[ip]/demo</a>, the request isn't routed in the application but in the container: If I have a directory called 'demo' in the working directory, it'll try to return its contents, so mostly it returns 404.</p>
<p>I have successfully served the application from a virtual machine. My networking configurations are as follows:</p>
<p><strong>service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2018-07-10T12:00:52Z
labels:
app: app
name: app
namespace: default
resourceVersion: "5759129"
selfLink: /api/v1/namespaces/default/services/app-service
uid: [redacted]
spec:
clusterIP: [redacted]
externalTrafficPolicy: Cluster
ports:
- nodePort: 30909
port: 80
protocol: TCP
targetPort: 80
selector:
app: app
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/backends: '{"k8s-be-30909--102c03247288a32a":"HEALTHY"}'
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-app-ingress--102c03df7288a32a
ingress.kubernetes.io/target-proxy: k8s-tp-default-app-ingress--102c09bf7288a32a
ingress.kubernetes.io/url-map: k8s-um-default-app-ingress--102c9df4a288a32a
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"app-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"app-service","servicePort":80}}}
creationTimestamp: 2018-07-10T12:10:08Z
generation: 1
name: app-ingress
namespace: default
resourceVersion: "5762158"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/app-ingress
uid: [redacted]
spec:
backend:
serviceName: app-service
servicePort: 80
status:
loadBalancer:
ingress:
- ip: [redacted]
</code></pre>
<p>Does anyone know how to fix this?</p>
| <p>You run into this issue, because k8s is trying to call the path /demo on your ingress, and there is no backend handling this path. This happens because of the angular history routing. </p>
<p>It works in dev because the dev HTTP server is aware of that, thanks to this:</p>
<pre><code>rewrites": [ {
"source": "**",
"destination": "/index.html"
} ]
</code></pre>
<p>(check out <a href="https://stackoverflow.com/a/34418131/5613927">this answer</a>)</p>
<p>So, what you must achieve, is that all your URLs to your ingress are rewritten to /index.html. A rewrite target from nginx ingress is <strong>not</strong> the same, as it would translate /path/foo/bar to /foo/bar if your rewrite target is '/'</p>
<p>So, what are the options? You could dig into the ingress configuration. It looks like you are using the GKE ingress, you could use a different one, but a closer look didn't show up any specific configuration.</p>
<p>However, I would take out the crowbar for this problem and hack it off, by using a nginx container with all my custom rewrite rules and reverse proxy it to your angular app. In other words, I'm using nginx anyway if I deploy a prod angular application in a container. Just check out <a href="https://www.digitalocean.com/community/questions/nginx-404-error-with-existing-urls-angular-2-one-page-application-with-routing" rel="noreferrer">this guide</a> to setup nginx for angular properly. </p>
<p>The result would be a docker image, containing a custom nginx with the angular and custom rules inside.</p>
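<p>The relevant part of such an nginx config is essentially one directive (paths are assumptions based on a typical Angular build copied into the default nginx docroot):</p>
<pre><code>server {
    listen 80;
    root   /usr/share/nginx/html;
    index  index.html;

    location / {
        # any path that isn't a real file falls back to the Angular app,
        # so /demo is handled by the client-side router instead of returning 404
        try_files $uri $uri/ /index.html;
    }
}
</code></pre>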
|
<p>I have installed a 3-server Kubernetes setup by following <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a> </p>
<p>I created the Calico network service on the master node. My question: should I create the Calico service on the worker nodes also? </p>
<p>I am getting the below error on the worker node when I create a pod: </p>
<pre><code>ngwhq_kube-system(e17770a3-8507-11e8-962c-0ac29e406ef0)"
Jul 11 13:25:05 ip-172-31-20-212 kubelet: I0711 13:25:05.144142 23325 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=calico-node pod=calico-node-ngwhq_kube-system(e17770a3-8507-11e8-962c-0ac29e406ef0)
Jul 11 13:25:05 ip-172-31-20-212 kubelet: E0711 13:25:05.144169 23325 pod_workers.go:186] Error syncing pod e17770a3-8507-11e8-962c-0ac29e406ef0 ("calico-node-ngwhq_kube-system(e17770a3-8507-11e8-962c-0ac29e406ef0)"), skipping: failed to "StartContainer" for "calico-node" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=calico-node pod=calico-node-ngwhq_kube-system(e17770a3-8507-11e8-962c-0ac29e406ef0)"
Jul 11 13:25:07 ip-172-31-20-212 kubelet: E0711 13:25:07.221953 23325 cni.go:280] Error deleting network: context deadline exceeded
Jul 11 13:25:07 ip-172-31-20-212 kubelet: E0711 13:25:07.222595 23325 remote_runtime.go:115] StopPodSandbox "22fe8b5db360011aa79afadfe91a46bfef0322092478d378ef657d3babfc1326" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "test2-597bdc85dc-k2xsm_default" network: context deadline exceeded
Jul 11 13:25:07 ip-172-31-20-212 kubelet: E0711 13:25:07.222630 23325 kuberuntime_manager.go:799] Failed to stop sandbox {"docker" "22fe8b5db360011aa79afadfe91a46bfef0322092478d378ef657d3babfc1326"}
Jul 11 13:25:07 ip-172-31-20-212 kubelet: E0711 13:25:07.222664 23325 kuberuntime_manager.go:594] killPodWithSyncResult failed: failed to "KillPodSandbox" for "67e18616-850d-11e8-962c-0ac29e406ef0" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"test2-597bdc85dc-k2xsm_default\" network: context deadline exceeded"
Jul 11 13:25:07 ip-172-31-20-212 kubelet: E0711 13:25:07.222685 23325 pod_workers.go:186] Error syncing pod 67e18616-850d-11e8-962c-0ac29e406ef0 ("test2-597bdc85dc-k2xsm_default(67e18616-850d-11e8-962c-0ac29e406ef0)"), skipping: failed to "KillPodSandbox" for "67e18616-850d-11e8-962c-0ac29e406ef0" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"test2-597bdc85dc-k2xsm_default\" network: context deadline exceeded"
Jul 11 13:25:12 ip-172-31-20-212 kubelet: E0711 13:25:12.007944 23325 cni.go:280] Error deleting network: context deadline exceeded
Jul 11 13:25:12 ip-172-31-20-212 kubelet: E0711 13:25:12.008783 23325 remote_runtime.go:115] StopPodSandbox "4b14d68c7bc892594dedd1f62d92414574a3fb00873a805b62707c7a63bfdfe7" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "test2-597bdc85dc-qmc85_default" network: context deadline exceeded
Jul 11 13:25:12 ip-172-31-20-212 kubelet: E0711 13:25:12.008819 23325 kuberuntime_gc.go:153] Failed to stop sandbox "4b14d68c7bc892594dedd1f62d92414574a3fb00873a805b62707c7a63bfdfe7" before removing: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "test2-597bdc85dc-qmc85_default" network: context deadline exceeded
Jul 11 13:25:19 ip-172-31-20-212 kubelet: W0711 13:25:19.145386 23325 cni.go:243] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "22fe8b5db360011aa79afadfe91a46bfef0322092478d378ef657d3babfc1326"
</code></pre>
<p>I tried to install the Calico network on the worker nodes as well with the below command, but no luck - I get the following error:</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
</code></pre>
| <p>Every node in the cluster needs the Calico agent (the <code>calico-node</code> pod) running, not just the master - that is how the pod network gets wired up on each host.</p>
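<p>The calico.yaml you applied on the master creates (among other things) a DaemonSet, so Kubernetes itself schedules a <code>calico-node</code> pod onto every node - you don't run <code>kubectl apply</code> on the workers. The <code>localhost:8080 ... connection refused</code> error there just means kubectl on the worker has no kubeconfig. To check that the agent really is running everywhere:</p>
<pre><code>kubectl get daemonset calico-node -n kube-system
kubectl get pods -n kube-system -o wide | grep calico-node   # one Running pod per node
</code></pre>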
|
<p>I have a Kubernetes deployment that has 3 replicas. It starts 3 pods which are distributed across a given cluster. I would like to know how to reliably get one pod to contact another pod within the same <code>ReplicaSet</code>.</p>
<p>The deployment above is already wrapped up in a Kubernetes <code>Service</code>. But <code>Services</code> do not cover my use case. I need each instance of my container (each <code>Pod</code>) to start up a local in-memory cache and have these caches communicate/sync with other cache instances running on other <code>Pods</code>. This is how I see a simple distributed cache working for my service. Pod-to-pod communication within the same cluster is allowed as per the <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Kubernetes Network Model</a>, but I cannot see a reliable way to address one pod from another pod.</p>
<p>I believe I can use a <code>StatefulSet</code>, however, I don't want to lose the <code>ClusterIP</code> assigned to the service which is required by <code>Ingress</code> for load balancing.</p>
| <p>Of course you can use a StatefulSet, and the Ingress doesn't need the ClusterIP assigned to the Service, since it routes to the endpoints - so a headless Service is fine.</p>
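<p>A minimal sketch of the headless Service side (names and port are placeholders; the StatefulSet's <code>serviceName</code> must reference it):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cache-peers
spec:
  clusterIP: None          # headless: no virtual IP, just DNS records per pod
  selector:
    app: my-cache
  ports:
  - port: 6379
# pods of a StatefulSet using serviceName: cache-peers become addressable as
#   my-cache-0.cache-peers.<namespace>.svc.cluster.local
#   my-cache-1.cache-peers.<namespace>.svc.cluster.local, and so on
</code></pre>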
|
<p>I'm a novice in using Kubernetes, Docker and GCP; sorry if the question is stupid and/or obvious.</p>
<p>I am trying to create a simple gRPC server with HTTP(S) mapping, using the Google samples as an example. The issue is that my container can be started from Google Cloud Shell with no complaints, but it fails on Kubernetes Engine after deployment.</p>
<p>In Google Cloud Console:</p>
<pre><code>git clone https://gitlab.com/myrepos/grpc.git
cd grpc
docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1 .
# Run the container "locally"
docker run --rm -p 8000:8000 gcr.io/mproject-id/python-grpc-diagnostic-server:v1
Server is started
^CServer is stopped
# Pushing the image to Container Registry
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1
# Deployment
kubectl create -f grpc-diagnostic.yaml
</code></pre>
<p>In Deployment details 'diagnostic' container has "CrashLoopBackOff" status and in Logs the next error appears:</p>
<pre><code>File "/diagnostic/diagnostic_pb2.py", line 17, in <module>
from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2
ModuleNotFoundError: No module named 'google.api'
</code></pre>
<p>Could you please give any idea why the same container starts in shell and fails on Kubernetes Engine?
Thanks.</p>
<p>requirements.txt</p>
<pre><code>grpcio
grpcio-tools
pytz
google-auth
googleapis-common-protos
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM gcr.io/google_appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
RUN virtualenv -p python3.6 /env
# Setting these environment variables are the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV -p python3.6 /env
ENV PATH /env/bin:$PATH
ADD . /diagnostic/
WORKDIR /diagnostic
RUN pip install -r requirements.txt
EXPOSE 8000
ENTRYPOINT ["python", "/diagnostic/diagnostic_server.py"]
</code></pre>
<p>grpc-diagnostic.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: esp-grpc-diagnostic
spec:
ports:
# Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
- port: 80
targetPort: 9000 # or 8000?
protocol: TCP
name: http2
selector:
app: esp-grpc-diagnostic
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: esp-grpc-diagnostic
spec:
replicas: 1
template:
metadata:
labels:
app: esp-grpc-diagnostic
spec:
containers:
- name: esp
image: gcr.io/endpoints-release/endpoints-runtime:1
args: [
"--http2_port=9000",
"--service=diagnostic.endpoints.project-id.cloud.goog",
"--rollout_strategy=managed",
"--backend=grpc://127.0.0.1:8000"
]
ports:
- containerPort: 9000
- name: diagnostic
image: gcr.io/project-id/python-grpc-diagnostic-server:v1
ports:
- containerPort: 8000
</code></pre>
| <p>That was my own mistake. I changed the image, but the image name and tag stayed the same, so the cluster continued using the old, wrong image, thinking nothing had changed.
The right way to redeploy code is to build the image with a new tag, for instance v1.01, and set the new image on the existing deployment, as described in the <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">documentation</a>. I had deleted the service and the deployment and recreated them, but I didn't delete the cluster, thinking I had started from scratch.</p>
<p>Right way:</p>
<pre><code>docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1.01 .
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1.01
kubectl set image deployment/esp-grpc-diagnostic diagnostic=gcr.io/project-id/python-grpc-diagnostic-server:v1.01
</code></pre>
<p>Another way to pull updated images without changing the tag is to change <code>imagePullPolicy</code>, which is set to <code>IfNotPresent</code> by default, to <code>Always</code>, as sketched below. <a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images" rel="nofollow noreferrer">more info</a></p>
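<p>A minimal sketch of that second approach, as a fragment of the container spec from <code>grpc-diagnostic.yaml</code> above:</p>
<pre><code>      containers:
      - name: diagnostic
        image: gcr.io/project-id/python-grpc-diagnostic-server:v1
        imagePullPolicy: Always   # re-pull on every pod start, even if the tag is unchanged
        ports:
        - containerPort: 8000
</code></pre>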
|
<p>I am trying to deploy a configmap onto a cluster </p>
<pre><code>- name: Make/Update all configmaps on the cluster
kubernetes:
api_endpoint: blah
url_username: blah
url_password: blah
inline_data:
apiVersion: v1
kind: ConfigMap
metadata:
name: blah
namespace: blah
data: my-data.txt: "{{ data }}"
state: present
data: |
some = foo
foo = some
(using spinnaker to attach it to pods)
</code></pre>
<p>When I go into the pod and open my-data.txt it displays:</p>
<pre><code>some = foo\n foo = some\n
</code></pre>
<p>I want it to look exactly like the text and print newline rather than \n</p>
<p>Strangely, if I put single quotes ' ' somewhere in the text, it prints the text as is, but with the single quotes, so:</p>
<pre><code>data: |
some = foo
foo = some
' '
</code></pre>
<p>prints exactly as written (including the quotes).</p>
<p>I have tried researching but couldn't find anything, and I have been stuck on this for a while now.</p>
| <p>This seems to be similar to <a href="https://github.com/kubernetes/kubernetes/issues/36222" rel="noreferrer">kubernetes/kubernetes issue 36222</a> when creating configMap from files.</p>
<p>In your case, it happens when the ConfigMap is created from a <code>data</code> block.</p>
<p>The recent <a href="https://github.com/kubernetes/kubernetes/issues/63503" rel="noreferrer">kubernetes/kubernetes issue 63503</a> references all printed issues.</p>
<p>A <a href="https://github.com/kubernetes/kubernetes/issues/36222#issuecomment-497132788" rel="noreferrer">comment mentions</a>:</p>
<blockquote>
<p>I added a new line in a configMap using Tab for identation. After changing to Spaces instead of Tab, I was able to see the configmap as expected...</p>
</blockquote>
<p>August 2020: The <a href="https://github.com/kubernetes/kubernetes/issues/36222#issuecomment-666168348" rel="noreferrer">issue 36222</a> now includes:</p>
<blockquote>
<p>If you just want the raw output as it was read in when created <code>--from-file</code>, you can use <code>jq</code> to get the raw string (without escaped newlines etc)</p>
<p>If you created a configmap from a file like this:</p>
<pre><code>kubectl create configmap myconfigmap --from-file mydata.txt
</code></pre>
<p>Get the data:</p>
<pre><code>kubectl get cm myconfigmap -o json | jq '.data."mydata.txt""' -r
</code></pre>
</blockquote>
<p>Also:</p>
<blockquote>
<p>If the formatting of the cm goes weird, a simple hack to get it back to normal is:</p>
<p><code>kubectl get cm configmap_name -o yaml > cm.yaml</code></p>
<p>Now copy the contents of the <code>cm.yaml</code> file and paste it on <a href="http://www.yamllint.com/" rel="noreferrer"><code>yamllint.com</code></a>. Yamllint.com is a powerful tool to check the linting of YAML files.<br />
This will give you the configmap as expected, with correct formatting.</p>
<p>Paste the output in another yaml file (e.g. cm_ready.yaml)</p>
<pre><code> kubectl apply -f cm_ready.yaml
</code></pre>
</blockquote>
<hr />
<p>Update Nov. 2020, the <a href="https://github.com/kubernetes/kubernetes/issues/36222#issuecomment-729237587" rel="noreferrer">same issue</a> includes:</p>
<blockquote>
<p>I was able to fix this behavior by:</p>
<ul>
<li><p>Don't use tabs, convert to spaces</p>
</li>
<li><p>To remove spaces before a newline character, use this:</p>
<pre><code> sed -i -E 's/[[:space:]]+$//g' File.ext
</code></pre>
</li>
</ul>
<p>It also seems it will convert CRLF to LF.</p>
</blockquote>
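<p>For reference, a minimal sketch of what the rendered manifest should end up looking like, with the file content as a literal block scalar so the newlines survive (names taken from the question):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: blah
  namespace: blah
data:
  my-data.txt: |
    some = foo
    foo = some
</code></pre>
<p>Comparing this against the <code>kubectl get cm ... -o yaml</code> output makes it easy to spot where literal newlines were turned into <code>\n</code> escapes.</p>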
|
<p>We have a Spring Boot service running in Kubernetes.<br>
This service has an endpoint:<br>
- GET /healthz</p>
<p>We have a liveness probe that uses this endpoint, and the probe runs successfully.<br>
This means the endpoint is reachable from the service pod (localhost). </p>
<p>When I run <code>wget https://localhost:8080/healthz</code> in the service pod, I get an answer (OK).</p>
<p>When I try to call this endpoint from outside the pod with <code>wget https://myhost:8080/healthz</code>, I get a <strong>response 400 without a body</strong>.<br>
I don't see any Spring logs; it seems the request does not reach Spring.<br>
When I added the flag <code>-Djavax.net.debug=all</code>, I see in the log that the TLS handshake finished and then: </p>
<pre><code>GET /healthz HTTP/1.1
host: myhost:8080
accept: application/json
Connection: close
</code></pre>
<p>and immediately</p>
<pre><code>HTTP/1.1 400
Transfer-Encoding: chunked
Date: Mon, 25 Jun 2018 08:43:43 GMT
Connection: close
</code></pre>
<p>When I try <code>wget https://myhost:8080/blahblah</code> (a non-existent endpoint),
I still get 400, not 404!</p>
<p>When I try <code>wget https://myWronghost:8080/healthz</code> (wrong host), I get an error <code>Bad address</code>. It means that host 'myhost' is correct (otherwise I would get this error). </p>
<p>Docker file: </p>
<blockquote>
<p>FROM openjdk:8-jdk-alpine<br>
VOLUME /tmp<br>
ARG JAR_FILE<br>
COPY ${JAR_FILE} app.jar<br>
ENV JAVA_TOOL_OPTIONS -Dfile.encoding=UTF8<br>
ENTRYPOINT ["java","-Djavax.net.debug=all", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]<br>
EXPOSE 8080</p>
</blockquote>
<p>Summing up:<br>
The service endpoints are accessible from within the service pod, but not from outside the pod.<br>
Any idea why?</p>
<p>Update:<br>
The problem was solved by calling the service with the <strong>fully qualified domain name</strong>: <em>serviceName.namespaceName.svc.cluster.local</em>.<br>
Tomcat didn't accept calls with the short domain <em>serviceName.namespaceName</em>; it responded with 400.</p>
| <p>Your issue may be caused by <a href="https://github.com/spring-projects/spring-boot/issues/13205" rel="nofollow noreferrer">https://github.com/spring-projects/spring-boot/issues/13205</a>.
All you have to do is upgrade the Tomcat version to 8.5.32. You can do that by setting the version in your pom.xml file.</p>
<pre><code> <properties>
<!-- your properties -->
<tomcat.version>8.5.32</tomcat.version>
</properties>
</code></pre>
|
<p>Due to company rules, our VMs cannot access the internet (and cannot use an HTTP proxy either). I installed a kubernetes cluster by downloading rpm packages and docker images as below:</p>
<pre><code> k8s.gcr.io/kube-apiserver-amd64:v1.11.0
k8s.gcr.io/kube-controller-manager-amd64:v1.11.0
k8s.gcr.io/kube-scheduler-amd64:v1.11.0
k8s.gcr.io/kube-proxy-amd64:v1.11.0 k8s.gcr.io/pause-amd64:3.1
k8s.gcr.io/etcd-amd64:3.2.18 k8s.gcr.io/coredns:1.1.3
quay.io/coreos/flannel:v0.10.0-amd64
</code></pre>
<p>Then I install the rpm packages and load these docker images onto all VMs. This does successfully install Kubernetes, although it's hard work.</p>
<p>My question is: can I use a private docker registry to store these k8s.gcr.io, quay.io and other public registries' images, so that each VM's docker.service can pull them just like my private images?</p>
| <p>There are several solutions:</p>
<ol>
<li>Since your machines don't have an internet connection and you want to use the same image names, you need to provide internet access to them. This can be done with any proxy server, such as <code>squid</code>. In this case, you'll need to reconfigure <code>docker</code> to make it work behind the proxy</li>
<li>Deploy any local registry solution (e.g. <code>Artifactory</code>) and then use it as a mirror for <code>docker</code> (see the sketch below)</li>
</ol>
<p>P.S.: I am not insisting on using <code>Artifactory</code>, but it can be very convenient. <code>Artifactory</code> provides the ability to create a virtual registry: you can aggregate other registries (k8s.gcr.io, quay.io, whatever) "under" this virtual one and then use it as a <code>docker</code> mirror.</p>
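<p>If you prefer a plain self-hosted registry over <code>Artifactory</code>, a minimal sketch of mirroring the images by re-tagging and pushing them (the registry address <code>my-registry.local:5000</code> is a placeholder):</p>
<pre><code># On a machine that already has the images loaded:
docker tag k8s.gcr.io/kube-apiserver-amd64:v1.11.0 my-registry.local:5000/kube-apiserver-amd64:v1.11.0
docker push my-registry.local:5000/kube-apiserver-amd64:v1.11.0

docker tag quay.io/coreos/flannel:v0.10.0-amd64 my-registry.local:5000/coreos/flannel:v0.10.0-amd64
docker push my-registry.local:5000/coreos/flannel:v0.10.0-amd64

# On each VM, if the registry is served over plain HTTP, add it to
# /etc/docker/daemon.json and restart docker:
#   { "insecure-registries": ["my-registry.local:5000"] }
</code></pre>
<p>Your manifests (or kubeadm's <code>imageRepository</code> configuration option, if your version supports it) then need to reference the private registry names; alternatively, re-tag the images back to their original names after pulling them on each VM.</p>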
|
<p>Is there a way to dynamically scale the memory size of a Pod based on the size of a data job (my use case)? </p>
<p>Currently we have Job and Pods that are defined with memory amounts, but we wouldn't know how big the data will be for a given time-slice (sometimes 1000 rows, sometimes 100,000 rows).<br>
So it will break if the data is bigger than the memory we have allocated beforehand. </p>
<p>I have thought of slicing by data volume, i.e. cutting every 10,000 rows, since we would know the memory requirement of processing a fixed number of rows. But we are trying to aggregate by time, hence the need for a time-slice. </p>
<p>Or any other solutions, like Spark on kubernetes?</p>
<p>Another way of looking at it:<br>
how could we implement something like Cloud Dataflow on Kubernetes on AWS?</p>
| <p>If you don’t know the memory requirement for your pod a priori for a given time-slice, then it is difficult for the Kubernetes Cluster Autoscaler to automatically scale the node pool for you, as per this documentation [1]. Therefore both of your suggestions, running either Cloud Dataflow or Spark on Kubernetes with the Kubernetes Cluster Autoscaler, may not work for your case.</p>
<p>However, you can use custom scaling as a workaround. For example, you can export memory-related metrics of the pod to Stackdriver, then deploy a HorizontalPodAutoscaler (HPA) resource to scale your application as in [2].</p>
<p>[1] <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#how_cluster_autoscaler_works" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#how_cluster_autoscaler_works</a></p>
<p>[2] <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/custom-metrics-autoscaling" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/custom-metrics-autoscaling</a></p>
|
<p>I've just noticed from a very nasty GCP bill that CockroachDB has logged 1.5 TB of errors to Stackdriver, costing me several hundred dollars in just a few days. Sadly I had left it on 100% logging. The errors look like this and are piling up multiple times per second.</p>
<pre><code>E I180712 11:18:41.963205 106 server/status/runtime.go:223 [n2]
runtime stats: 1.5 GiB RSS, 283 goroutines, 254 MiB/54 MiB/441 MiB GO alloc/idle/total,
918 MiB/1.1 GiB CGO alloc/total,
2175.51cgo/sec,
0.16/0.02 %(u/s)time, 0.00 %gc (1x)
</code></pre>
<p>Does anybody know what they mean, and how to stop them? </p>
| <p>These are all CockroachDB logs, not just errors. This is indicated by the <code>I</code> prefix (meaning <code>Info</code>) in the CockroachDB log line. The listed log lines show basic memory information for the <code>cockroach</code> process. This is logged every 10 seconds.</p>
<p>If you wish to persist logs I would recommend filtering by severity.</p>
<p>This can be done by CockroachDB itself when redirecting logs to stderr by using: <code>--logtostderr=Level</code> where <code>Level</code> is one of <code>Info</code>, <code>Warning</code>, <code>Error</code>, or <code>Fatal</code>.</p>
<p>If you are saving the raw logs, you could do a quick pass to discard anything not starting with a desired prefix. This will however not be as accurate as the <code>--logtostderr</code> method as you would need to handle multi-line log entries.</p>
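<p>As a sketch, the flag can be appended to the <code>cockroach start</code> arguments in the pod spec (the image tag and the other flags here are illustrative, not taken from your deployment):</p>
<pre><code>containers:
- name: cockroachdb
  image: cockroachdb/cockroach:v2.0.4   # assumed image/version
  command:
    - /cockroach/cockroach
    - start
    - --insecure                        # assumes a non-TLS test cluster
    - --logtostderr=WARNING             # only Warning and above reach stderr/Stackdriver
</code></pre>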
|
<p>I have set up a <code>kubernetes</code> cluster which is working fine. I created a <code>deployment</code> of type <code>daemonset</code>, which created a few pods and worked fine. I made a few changes, so I deleted the <code>daemonset</code> using:</p>
<pre><code>kubectl delete daemonset <name> --namespace=clustech
</code></pre>
<p>While the pods were being deleted, a few of the machines went offline, and the pods running on them went into <code>unknown</code> status. I need to remove those pods, or completely remove that previous <code>daemonset</code>, but I am unable to do so. I am using the command below to delete the pods:</p>
<pre><code>kubectl delete pod PODNAME --grace-period=0 --force
</code></pre>
<p>But this shows below error:</p>
<pre><code>warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "deployment-6sj54" not found
</code></pre>
<p>How can I delete the pods?</p>
| <p>Re-running the second command with the <code>namespace</code> parameter will do the job.</p>
<p><code>--grace-period=0 --force</code> are the right flags to delete such pods.</p>
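<p>For example, using the namespace from the question (the pod name is a placeholder):</p>
<pre><code>kubectl delete pod PODNAME --namespace=clustech --grace-period=0 --force
</code></pre>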
|
<p>I have three kubernetes cluster environments set up in GCP.
I have installed Fluentd as a daemonset in all these environments to collect the logs from all the pods.
I have also installed elasticsearch and kibana on a separate server outside the cluster.
I need to feed the logs from fluentd to the elasticsearch on the remote server and thereby run a centralised logging platform. </p>
<p>How can I send the data from fluentd to the elasticsearch in remote server?</p>
<p>The error received is:</p>
<blockquote>
<p>error_class=Fluent::Plugin::ElasticsearchOutput::ConnectionFailure error="Can not reach Elasticsearch cluster</p>
</blockquote>
| <p>There are two common ways mentioned in <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">documentation</a> to access external resources from inside the Pod:</p>
<ol>
<li><p>Create a Service and Endpoint objects. Set external IP address in Endpoint's specification:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ext-elastic
namespace: default
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
name: ext-elastic
namespace: default
subsets:
- addresses:
- ip: 1.2.3.4
ports:
- port: 9200
</code></pre></li>
</ol>
<blockquote>
<p><strong>NOTE:</strong> The endpoint IPs may not be loopback (<code>127.0.0.0/8</code>), link-local
(<code>169.254.0.0/16</code>), or link-local multicast (<code>224.0.0.0/24</code>). They cannot
be the cluster IPs of other Kubernetes services either because the
kube-proxy component doesn’t support virtual IPs as destination yet.</p>
</blockquote>
<p>You can access this service by using <code>http://ext-elastic</code> inside the same namespace or by using <code>http://ext-elastic.default.svc.cluster.local</code> from a different namespace.</p>
<ol start="2">
<li>Create the ExternalName Service and specify a name of the external resource in the specification:</li>
</ol>
<blockquote>
<p>An ExternalName service is a special case of service that does not
have selectors. It does not define any ports or Endpoints. Rather, it
serves as a way to return an alias to an external service residing
outside the cluster.</p>
</blockquote>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ext-elastic
namespace: default
spec:
type: ExternalName
externalName: my.external.elasticsearch.com
ports:
- port: 80
</code></pre>
<blockquote>
<p>When looking up the host my-service.prod.svc.CLUSTER, the cluster DNS
service will return a CNAME record with the value
my.database.example.com. Accessing such a service works in the same
way as others, with the only difference that the redirection happens
at the DNS level and no proxying or forwarding occurs. Should you
later decide to move your database into your cluster, you can start
its pods, add appropriate selectors or endpoints and change the
service type.</p>
</blockquote>
<p>Check out another <a href="https://akomljen.com/kubernetes-tips-part-1/" rel="nofollow noreferrer">article</a> to see some more examples.</p>
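<p>To verify connectivity from inside the cluster before wiring it into Fluentd, a quick check (service name as in the examples above) could be:</p>
<pre><code>kubectl run -it --rm es-test --image=busybox --restart=Never \
  -- wget -qO- http://ext-elastic.default.svc.cluster.local
</code></pre>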
|
<p>I am trying to deploy <strong>Elasticsearch in Kubernetes</strong> with a local drive volume, but I get the following error; can you please correct me?
<strong>Using Ubuntu 16.04,
Kubernetes v1.11.0,
Docker version 17.03.2-ce.</strong>
I am getting the error 'unknown field hostPath' when using Elasticsearch on Kubernetes with a local volume.</p>
<pre><code>error: error validating "es-d.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[1]): unknown field "hostPath" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>This is the yaml file of the statefulSet:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
serviceName: elasticsearch-data
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: alpine:3.6
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: es-data
image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: myesdb
- name: NODE_MASTER
value: "false"
- name: NODE_INGEST
value: "false"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: -Xms256m -Xmx256m
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
resources:
requests:
cpu: 0.25
limits:
cpu: 1
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
livenessProbe:
tcpSocket:
port: transport
initialDelaySeconds: 20
periodSeconds: 10
readinessProbe:
httpGet:
path: /_cluster/health
port: http
initialDelaySeconds: 20
timeoutSeconds: 5
volumeMounts:
- name: storage
mountPath: /es
volumes:
- name: storage
</code></pre>
| <p>You have the wrong structure: <code>volumes</code> must be at the same level as <code>containers</code> and <code>initContainers</code>, directly under the pod <code>spec</code>, and each volume needs a volume source such as <code>hostPath</code>.</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
serviceName: elasticsearch-data
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: alpine:3.6
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: es-data
image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: myesdb
- name: NODE_MASTER
value: "false"
- name: NODE_INGEST
value: "false"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: -Xms256m -Xmx256m
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
resources:
requests:
cpu: 0.25
limits:
cpu: 1
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
livenessProbe:
tcpSocket:
port: transport
initialDelaySeconds: 20
periodSeconds: 10
readinessProbe:
httpGet:
path: /_cluster/health
port: http
initialDelaySeconds: 20
timeoutSeconds: 5
volumeMounts:
- name: storage
mountPath: /es
volumes:
      - name: storage
        hostPath:
          path: /data/elasticsearch   # hypothetical host path; adjust for your nodes
</code></pre>
<p>You can find an <a href="https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519" rel="nofollow noreferrer">example here</a>.</p>
|
<p>We have a <code>.net core</code> app being deployed to a Kubernetes cluster which accesses an AWS RDS MS SQL database.</p>
<p>In this environment we'd like to use EF Code First to handle our model (maybe with migrations later, but initially dropping and creating is fine).</p>
<p>How in this environment do we create a SQL user with appropriate permissions on the RDS instance so that the web application can login as this user and create the code first model?</p>
<p>Our initial approach involved creating a user as part of a <code>.sh</code> script, creating a db and assigning permissions. This fell down because when the C# code ran <code>Database.EnsureExists()</code>, it saw there was already a database and didn't build the model.</p>
<p>I thought perhaps not creating the db and assigning higher permissions to the user might work, but this feels like a bad approach, unless we run some kind of post-deploy step to remove the dbo-esque permissions afterwards.</p>
<p>What is the recommended approach for a ephemeral deployment where we intend to drop/create/seed on each run? </p>
| <p>I've been tussling with this question as well. We're doing .NET Core EF code first on Kubernetes with a Microsoft SQL database.</p>
<p>I've been messing around with context.Database.Migrate(). This will create the DB then create the tables and do the migrations (case 1), or if the DB already exists, it will just create the tables and do the migrations (case 2).</p>
<p>For case 1, the account needs to have the dbcreator server role. Once it creates the DB, it will assign itself the dbo database role.</p>
<p>For case 2, you could potentially just give db_ddladmin, db_datareader, and db_datawriter. I've tested this and it seems to work fine, but I'm unsure of the side effects of not having dbo access. Julie?</p>
|
<p>I'm running more than one replica of a pod with a Kubernetes <code>deployment</code>,
and I'd like to update the replicas to use an updated <code>configmap</code> in a rolling way, just like rolling-update works.</p>
<p>That is, Kubernetes should terminate a pod and start sending traffic to the newly updated pods one at a time, until all pods are updated.</p>
<p>Can I use rolling-update with deployment?</p>
| <p>Applying a change to the Deployment object will trigger a rolling-update. From the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="noreferrer">docs</a>:</p>
<blockquote>
<p>A Deployment’s rollout is triggered if and only if the Deployment’s pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout. </p>
</blockquote>
<p>So if you want to trigger a rolling update in order to pick up your configmap change, I would suggest updating a pod-template metadata label, perhaps a <code>CONFIG_VER</code> key, as sketched below.</p>
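<p>A minimal sketch of that idea (the deployment name and annotation key are placeholders); bumping a pod-template annotation works the same way as a label and avoids touching the Deployment's selector:</p>
<pre><code>kubectl patch deployment my-deployment \
  -p '{"spec":{"template":{"metadata":{"annotations":{"config-version":"v2"}}}}}'
</code></pre>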
|
<p>I have deployed kubernetes v1.8 at my workplace. I created roles for admin and view access to namespaces 3 months ago. In the initial phase RBAC was working as per the access given to the users. Now RBAC is not being enforced: everyone who has access to the cluster has cluster-admin access. </p>
<p>Can you suggest what errors/changes should be looked at?</p>
| <p>Ensure the RBAC authorization mode is still being used (<code>--authorization-mode=…,RBAC</code> is part of the apiserver arguments)</p>
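<p>One quick way to check this, assuming a kubeadm-style setup where the apiserver runs as a static pod in <code>kube-system</code>:</p>
<pre><code>kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep authorization-mode
</code></pre>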
<p>If it is, then check for a clusterrolebinding that is granting the cluster-admin role to all authenticated users:</p>
<p><code>kubectl get clusterrolebindings -o yaml | grep -C 20 system:authenticated</code></p>
|