<p>I'm new to Kubernetes, and I'm wondering about the best way to inject values into a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a>.</p>
<p>For now, I have defined a Deployment object which takes the relevant values from a ConfigMap. I wish to use the same <code>.yml</code> file for my production and staging environments, so only the values in the ConfigMap will change while the file itself stays the same.</p>
<p>Is there a built-in way to do this in Kubernetes, without using configuration management tools (like Ansible, Puppet, etc.)?</p>
| <p>You can find the links to the quoted text at the end of the answer.</p>
<blockquote>
<p>A good practice when writing applications is to separate application code from configuration. We want to enable application authors to easily employ this pattern within Kubernetes. While the Secrets API allows separating information like credentials and keys from an application, no object existed in the past for ordinary, non-secret configuration. In Kubernetes 1.2, we’ve added a new API resource called ConfigMap to handle this type of configuration data.</p>
<p>Besides, Secrets data will be stored in a base64 encoded form, which is also suitable for binary data such as keys, whereas ConfigMaps data will be stored in plain text format, which is fine for text files.</p>
</blockquote>
<p>The ConfigMap API is simple conceptually. From a data perspective, the ConfigMap type is just a set of key-value pairs.</p>
<p>There are several ways you can create config maps:</p>
<ul>
<li><p>Using list of values in the command line</p>
<pre><code>$ kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
</code></pre></li>
<li><p>Using a file on the disk as a source of data</p>
<pre><code>$ kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties --from-file=docs/user-guide/configmap/kubectl/ui.properties
$ kubectl create configmap game-config-3 --from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties
</code></pre></li>
<li><p>Using directory with files as a source of data</p>
<pre><code>$ kubectl create configmap game-config --from-file=configure-pod-container/configmap/kubectl/
</code></pre></li>
<li><p>Combining all three previously mentioned methods</p></li>
</ul>
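<p>For the use case in the question (one Deployment manifest reused across environments), a minimal sketch is to keep one ConfigMap per environment with the same name and keys but different values; the names and values below are placeholders, not taken from the quoted documentation:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # referenced by the shared Deployment manifest
  namespace: staging        # the production namespace/cluster gets its own copy
data:
  special.how: very
  special.type: charm
</code></pre>
<p>The Deployment then only ever refers to <code>app-config</code>, and applying the environment-specific ConfigMap is the only step that differs between staging and production.</p>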
<p>There are several ways to consume ConfigMap data in Pods:</p>
<ul>
<li><p>Use values in ConfigMap as environment variables</p>
<pre><code>spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY)" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: SPECIAL_LEVEL
</code></pre></li>
<li><p>Use data in ConfigMap as files on the volume</p>
<pre><code>spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "ls /etc/config/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
# ConfigMap containing the files
name: special-config
</code></pre></li>
</ul>
<blockquote>
<p>Only changes in ConfigMaps that are consumed in a volume will be visible inside the running pod. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of ConfigMaps cache in kubelet.</p>
</blockquote>
<p>A Pod that references a non-existent ConfigMap or Secret in its specification won't start.</p>
<p>Consider reading the official documentation and other good articles for even more details:</p>
<ul>
<li><a href="https://kubernetes.io/blog/2016/04/configuration-management-with-containers/" rel="nofollow noreferrer">Configuration management with Containers</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Configure a Pod to Use a ConfigMap</a></li>
<li><a href="https://kubernetes-v1-4.github.io/docs/user-guide/configmap/" rel="nofollow noreferrer">Using ConfigMap</a></li>
<li><a href="https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b" rel="nofollow noreferrer">Kubernetes ConfigMaps and Secrets</a></li>
<li><a href="https://medium.com/@xcoulon/managing-pod-configuration-using-configmaps-and-secrets-in-kubernetes-93a2de9449be" rel="nofollow noreferrer">Managing Pod configuration using ConfigMaps and Secrets in Kubernetes</a></li>
</ul>
|
<p>I have a single-node Kubernetes cluster running. Everything is working fine, but when I run "kubectl get cs" (kubectl get componentstatus) it shows two instances of etcd. I am running a single etcd instance.</p>
<p><code>[root@master01 vagrant]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}</code></p>
<p><code>[root@master01 vagrant]# etcdctl member list
19ef3eced66f4ae3: name=master01 peerURLs=http://10.0.0.10:2380 clientURLs=http://0.0.0.0:2379 isLeader=true</code></p>
<p><code>[root@master01 vagrant]# etcdctl cluster-health
member 19ef3eced66f4ae3 is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy</code></p>
<p>Etcd is running as a docker container. In the /etc/systemd/system/etcd.service file single etcd cluster is mentioned.(<a href="http://10.0.0.10:2380" rel="nofollow noreferrer">http://10.0.0.10:2380</a>)</p>
<p><code>/usr/local/bin/etcd \
--name master01 \
--data-dir /etcd-data \
--listen-client-urls http://0.0.0.0:2379 \
--advertise-client-urls http://0.0.0.0:2379 \
--listen-peer-urls http://0.0.0.0:2380 \
--initial-advertise-peer-urls http://10.0.0.10:2380 \
--initial-cluster master01=http://10.0.0.10:2380 \
--initial-cluster-token my-token \
--initial-cluster-state new \</code></p>
<p>Also, in the API server config file /etc/kubernetes/manifests/api-srv.yaml the --etcd-servers flag is used.</p>
<p><code>- --etcd-servers=http://10.0.0.10:2379,</code></p>
<p><code>[root@master01 manifests]# netstat -ntulp |grep etcd
tcp6 0 0 :::2379 :::* LISTEN 31109/etcd
tcp6 0 0 :::2380 :::* LISTEN 31109/etcd</code></p>
<p>Does anyone know why it shows etcd-0 and etcd-1 in "kubectl get cs"? Any help is appreciated.</p>
<p>Although @Jyothish Kumar S found the root cause on his own and fixed the issue, it's good practice to have an answer available for those who will face the same problem in the future.</p>
<p>The issue came from a misconfiguration in the API server config file <code>/etc/kubernetes/manifests/api-srv.yaml</code>, where <code>--etcd-servers</code> was set in an inappropriate way.
All flags for <code>kube-apiserver</code>, along with their descriptions, may be found <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">here</a>.
So, the issue was the trailing comma in the <code>--etcd-servers=http://10.0.0.10:2379,</code> line. This comma was interpreted as a new etcd server record, <code>http://:::2379</code>, and that’s why the <code>"kubectl get cs"</code> output showed two etcd records instead of one.
Pay attention to this aspect while configuring etcd.</p>
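<p>For reference, the corrected flag is simply the same line without the trailing comma (one entry per real etcd member):</p>
<pre><code>- --etcd-servers=http://10.0.0.10:2379
</code></pre>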
|
<p>I have installed Apache Superset from its Helm Chart in a Google Cloud Kubernetes cluster. I need to <code>pip install</code> a package that is not installed when installing the Helm Chart. If I connect to the Kubernetes bash shell like this:</p>
<p><code>kubectl exec -it superset-4934njn23-nsnjd /bin/bash</code></p>
<p>Inside there's no python available, no pip and apt-get doesn't find most of the packages.</p>
<p>I understand that the installed packages are listed in the Dockerfile when the container image is built. I suppose that I need to fork the Docker image, modify the Dockerfile, push the image to a container registry, and make a new Helm chart that runs this image.</p>
<p>But all this seems too complicated for a simple <code>pip install</code>, is there a simpler way to do this?</p>
<p>Links:</p>
<p>Docker- <a href="https://hub.docker.com/r/amancevice/superset/" rel="nofollow noreferrer">https://hub.docker.com/r/amancevice/superset/</a></p>
<p>Helm Chart - <a href="https://github.com/helm/charts/tree/master/stable/superset" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/superset</a></p>
<p>As @Murli mentioned, you should use <code>pip3</code>. However, one thing to remember is that <code>helm</code> is for managing k8s, i.e. what goes into the cluster should be traceable. So I recommend the following:</p>
<pre><code>$ helm inspect values stable/superset > values.yaml
</code></pre>
<p>Modify the <code>values.yaml</code>. In my case, I added jenkins-job-builder via pip3:</p>
<pre><code>initFile: |-
pip3 install jenkins-job-builder
/usr/local/bin/superset-init --username admin --firstname admin --lastname user --email [email protected] --password admin
superset runserver
</code></pre>
<p>and just pass the <code>values.yaml</code> to <code>helm install</code>.</p>
<pre><code>$ helm install --values=values.yaml stable/superset
</code></pre>
<p>That's it.</p>
<pre><code> $ kubectl exec -it doltish-gopher-superset-696448b777-8b9c6 which jenkins-jobs
/usr/local/bin/jenkins-jobs
$
</code></pre>
|
<p>When I execute the following commands (taken from the official installation guide for Kubernetes), the output is unexpected (shown below).
Command (on CentOS 7):</p>
<pre><code>cat < /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
</code></pre>
<p>Output:</p>
<pre><code>Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
base: centos.sonn.com
extras: mirror.sesp.northwestern.edu
updates: mirrors.cat.pdx.edu
kubernetes/signature | 454 B 00:00:00
kubernetes/signature | 1.4 kB 00:00:00 !!!
kubernetes/primary | 33 kB 00:00:00
kubernetes 237/237
No package kubelet available.
No package kubeadm available.
No package kubectl available.
Error: Nothing to do
</code></pre>
<p>What you expected to happen:</p>
<p>kubeadm, kubectl and kubelet get installed and are enabled</p>
<p>How to reproduce it:</p>
<p>Run the above mentioned commands on centos 7 (by following the guide at <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/</a>)</p>
<pre><code>Docker version: Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
</code></pre>
<p>Server:</p>
<pre><code>Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
Experimental: false
</code></pre>
<p>Environment:</p>
<p>Kubernetes version (use kubectl version): Unable to install the latest version following the official guide.
Hardware configuration: Virtual machine as per the guidelines in the official guide (2 GB RAM and 2 CPUs).
OS:</p>
<pre><code>NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
</code></pre>
<p>Kernel:</p>
<pre><code>Linux k1 3.10.0-862.9.1.el7.x86_64 #1 SMP Mon Jul 16 16:29:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
| <p>You seem to be missing <code><<EOF</code> at the end of the first line.</p>
<p>Also, I can see there is a <strong>mistake</strong> in the <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="noreferrer">docs</a>.</p>
<p>Line containing <code>exclude=kube*</code> should be removed.</p>
<p>It should be as follows:</p>
<pre><code>cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
</code></pre>
|
<p>The failing code runs inside a Docker container based on <code>python:3.6-stretch</code> debian.
It happens while Django moves a file from one Docker volume to another.</p>
<p>When I test on MacOS 10, it works without error. Here, the Docker containers are started with docker-compose and use regular Docker volumes on the local machine.</p>
<p>Deployed into Azure (AKS - Kubernetes on Azure), moving the file succeeds but copying the stats fails with the following error:</p>
<pre><code> File "/usr/local/lib/python3.6/site-packages/django/core/files/move.py", line 70, in file_move_safe
copystat(old_file_name, new_file_name)
File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: '/some/path/file.pdf'
</code></pre>
<p>The volumes on Azure are persistent volume claims with <code>ReadWriteMany</code> access mode.</p>
<p>Now, <code>copystat</code> is documented as:</p>
<blockquote>
<p>copystat() never returns failure.</p>
</blockquote>
<p><a href="https://docs.python.org/3/library/shutil.html" rel="nofollow noreferrer">https://docs.python.org/3/library/shutil.html</a></p>
<p>My questions are:</p>
<ul>
<li>Is this a "bug" because the documentation says that it should "never return failure"?</li>
<li>Can I safely try/except this error because the file in question is moved (it only fails later on, while trying to copy the stats)?</li>
<li>Can I change something about the Azure settings that fix this? (probably not)</li>
</ul>
<p>Here is a small test on the machine in Azure itself:</p>
<pre><code>root:/media/documents# ls -al
insgesamt 267
drwxrwxrwx 2 1000 1000 0 Jul 31 15:29 .
drwxrwxrwx 2 1000 1000 0 Jul 31 15:29 ..
-rwxrwxrwx 1 1000 1000 136479 Jul 31 16:48 orig.pdf
-rwxrwxrwx 1 1000 1000 136479 Jul 31 15:29 testfile
root:/media/documents# lsattr
--S-----c-jI------- ./orig.pdf
--S-----c-jI------- ./testfile
root:/media/documents# python
Python 3.6.6 (default, Jul 17 2018, 11:12:33)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shutil
>>> shutil.copystat('orig.pdf', 'testfile')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: 'orig.pdf'
>>> shutil.copystat('orig.pdf', 'testfile', follow_symlinks=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: 'orig.pdf'
>>>
</code></pre>
| <p>The following solution is a hotfix. It would have to be applied to <em>any</em> method that calls <code>copystat</code> directly or indirectly (or any shutil method that produces an ignorable <code>errno.ENOSYS</code>).</p>
<pre><code>if hasattr(os, 'listxattr'):
LOGGER.warning('patching listxattr to avoid ERROR 38 (errno.ENOSYS)')
# avoid "ERROR 38 function not implemented on Azure"
with mock.patch('os.listxattr', return_value=[]):
file_field.save(name=name, content=GeneratedFile(fresh, content_type=content_type), save=True)
else:
file_field.save(name=name, content=GeneratedFile(fresh, content_type=content_type), save=True)
</code></pre>
<p><code>file_field.save</code> is the Django method that calls the <code>shutil</code> code in question. It's the last location in my code before the error.</p>
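<p>Where you call <code>shutil</code> yourself (rather than through Django), a smaller sketch is to tolerate only this specific errno; <code>safe_copystat</code> is a hypothetical helper name, not part of any library:</p>
<pre><code>import errno
import shutil

def safe_copystat(src, dst):
    """Copy mode/timestamps, ignoring filesystems whose xattr syscalls
    are not implemented (OSError 38 / errno.ENOSYS, as seen on this volume)."""
    try:
        shutil.copystat(src, dst)
    except OSError as exc:
        if exc.errno != errno.ENOSYS:
            raise  # anything else is still a real error
</code></pre>
<p>This does not help for Django's internal <code>file_move_safe</code> call, which is why the <code>mock.patch</code> approach above is used there.</p>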
|
<p>We are starting with Kubernetes and wondering how other projects manage Kubernetes secrets:</p>
<ul>
<li>Since Kubernetes secrets values are just base64 encoded, it's not recommended to commit the secrets into source control</li>
<li>If not committed to source control, they should be kept in some central place somewhere else, otherwise there's no single source of truth. If they are stored somewhere else (e.g. HashiCorp Vault), how does the integration with CI work? Does the CI get values from Vault and create the Secret resources on demand in Kubernetes?</li>
<li>Another approach is probably to have a dedicated team handle infrastructure, so only that team knows and manages secrets. But this team can potentially become a bottleneck if the number of projects is large.</li>
</ul>
| <blockquote>
<p>how other projects manage Kubernetes secrets</p>
</blockquote>
<p>Since they are not (at least not yet) proper secrets (only base64 encoded), we keep them in a separate, restricted-access git repository.</p>
<p>Most of our projects have code repository (with non-secret related manifests such as deployments and services as part of CI/CD pipeline) and separate manifest repository (holding namespaces, shared database inits, secrets and more or less anything that is either one-time init separate from CI/CD, requires additional permission to implement or that should be restricted in any other way such as secrets).</p>
<p>With that being said, although a regular developer doesn't have access to the restricted repository, special care must be given to CI/CD pipelines: even if you secure secrets, they are known (and can be displayed/misused) during the CI/CD stage, so that can be a weak security spot. We mitigate that by having one of our DevOps engineers supervise and approve (via protected branches) any change to the CI/CD pipeline, in much the same manner that a senior lead supervises code changes to be deployed to the production environment.</p>
<p>Note that this is highly dependent on project volume and staffing, as well as actual project needs in term of security/development pressure/infrastructure integration.</p>
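<p>As a concrete sketch of keeping the real values out of the application repository, the CI job (or an operator with access to the restricted repo) can create or refresh the Secret imperatively; the names, keys and the <code>$DB_PASSWORD</code> variable below are made up for illustration:</p>
<pre><code># values come from the restricted repo / CI credential store, never from the app repo
kubectl create secret generic app-db-credentials \
  --from-literal=username=appuser \
  --from-literal=password="$DB_PASSWORD" \
  --dry-run -o yaml | kubectl apply -f -
</code></pre>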
|
<p><code>kubectl describe nodes</code> gives information on the requests and limits for resources such as CPU and memory. However, the api endpoint <code>api/v1/nodes</code> doesn't provide this information.</p>
<p>Alternatively, I could also hit the <code>api/v1/pods</code> endpoint to get this information per pod which I can accumulate across nodes. But is there already a kubernetes API endpoint which provides the information pertaining to cpu/memory requests and limits per node?</p>
| <p>From what I've found in documentation, the endpoint responsible for that is the Kubernetes API Server. </p>
<blockquote>
<p><em>CPU</em> and <em>memory</em> are each a <em>resource type</em>. A resource type has a base unit. CPU is specified in units of cores, and memory is
specified in units of bytes.</p>
<p>CPU and memory are collectively referred to as <em>compute resources</em>,
or just <em>resources</em>. Compute resources are measurable quantities that
can be requested, allocated, and consumed. They are distinct from
<a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" rel="nofollow noreferrer">API resources</a>.
API resources, such as Pods and <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services</a>
are objects that can be read and modified through the Kubernetes API
server.</p>
</blockquote>
<p>Going further to what is a node:</p>
<blockquote>
<p>Unlike <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">pods</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a>, a node is not inherently created by Kubernetes: it is created externally by cloud providers like Google Compute Engine, or exists in your pool of physical or virtual machines. What this means is that when Kubernetes creates a node, it is really just creating an object that represents the node. After creation, Kubernetes will check whether the node is valid or not.
[...]
Currently, there are three components that interact with the Kubernetes node interface: node controller, kubelet, and kubectl.
[...]
The capacity of the node (number of cpus and amount of memory) is part of the node object. Normally, nodes register themselves and report their capacity when creating the node object. If you are doing <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration" rel="nofollow noreferrer">manual node administration</a>, then you need to set node capacity when adding a node.</p>
<p>The Kubernetes scheduler ensures that there are enough resources for
all the pods on a node. It checks that the sum of the requests of
containers on the node is no greater than the node capacity. It
includes all containers started by the kubelet, but not containers
started directly by Docker nor processes not in containers.</p>
</blockquote>
<p>Edit:</p>
<blockquote>
<p>Alternatively, I could also hit the api/v1/pods endpoint to get this
information per pod which I can accumulate across nodes.</p>
</blockquote>
<p>That is in fact how it works: kubectl itself accumulates this information from the pods endpoint.</p>
<blockquote>
<p>But is there already a kubernetes API endpoint which provides the
information pertaining to cpu/memory requests and limits per node?</p>
</blockquote>
<p>The answer to this question is no, there is not. Unfortunately there is no endpoint to get that information directly. <code>kubectl</code> makes several requests to produce the describe output for nodes. When you run <code>kubectl -v=8 describe nodes</code> you can see the GET calls in this order:</p>
<pre><code>/api/v1/nodes?includeUninitialized=true
/api/v1/nodes/minikube
/api/v1/pods
/api/v1/events?fieldSelector
</code></pre>
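<p>To accumulate the per-pod data yourself (as suggested in the question), one possible sketch with <code>jq</code> is to list every container's requests next to the node it was scheduled on, and sum them up from there:</p>
<pre><code>kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
           | .spec.nodeName as $node
           | .spec.containers[]
           | [$node, .resources.requests.cpu // "-", .resources.requests.memory // "-"]
           | @tsv'
</code></pre>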
|
<p>I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment. Here I have some sample microservices. While exploring Kubernetes, I came across pods, services, replica sets/controllers, StatefulSets, etc., and I understood those Kubernetes terminologies properly. I am planning to use Docker Hub as my image registry.</p>
<p><strong>My Requirement</strong></p>
<p>When a commit is made to my SVN code repository, Jenkins needs to pull the code from the Subversion repository, build the project, create a Docker image and push it to Docker Hub, as mentioned earlier. After that, Jenkins needs to deploy it into my test environment by pulling the image from Docker Hub.</p>
<p><strong>My Confusion</strong></p>
<ol>
<li>When I am creating services and pods, how can I define the Docker image path within the pod/service/StatefulSet, since it is pulled from Docker Hub for deployment?</li>
<li>Can I directly add kubectl commands within a scheduled Jenkins pipeline job? How can I use kubectl for the Kubernetes deployment?</li>
</ol>
<p>Jenkins can do anything you can do, given that the tools are installed and accessible. So an easy solution is to install docker and kubectl on Jenkins and provide it with the correct kubeconfig so it can access the cluster. If your host can already use kubectl, you can have a look at the <code>$HOME/.kube/config</code> file.</p>
<p>So in your job you can just use kubectl like you do from your host.</p>
<p>Regarding the images from Docker Hub:</p>
<p>Docker Hub is the default Docker registry for Docker anyway, so normally there is no need to change anything in your cluster unless you want to use your own privately hosted registry. If you are running your cluster at a cloud provider, I would use their Docker registries because they are better integrated.</p>
<p>So this part of a deployment will pull nginx from Docker Hub no need to specify anything special for it:</p>
<pre><code>spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
</code></pre>
<p>So ensure Jenkins can do the following things from command line:</p>
<ol>
<li>build Docker images</li>
<li>Push Docker Images (make sure you called docker login on Jenkins)</li>
<li>Access your cluster via <code>kubectl get pods</code> </li>
</ol>
<p>So an easy pipeline needs to simply do this steps:</p>
<ol>
<li>trigger on SVN change</li>
<li>checkout code</li>
<li>create a unique version (which could be the build number, SVN revision, or date)</li>
<li>Build / Test</li>
<li>Build Docker Image</li>
<li>tag Docker Image with unique version</li>
<li>push Docker Image </li>
<li>change the image line in the Kubernetes deployment.yaml to the newly built version (if you are using Jenkins Pipeline you can use readYaml and writeYaml to achieve this)</li>
<li>call <code>kubectl apply -f deployment.yaml</code></li>
</ol>
<p>Depending on your build system and the languages used, there are some useful tools which can help with building and pushing the Docker image and ensuring a unique tag. For example, for Java and Maven you can use <a href="https://maven.apache.org/maven-ci-friendly.html" rel="nofollow noreferrer">Maven CI Friendly Versions</a> with any Maven Docker plugin, or <a href="https://github.com/GoogleContainerTools/jib" rel="nofollow noreferrer">jib</a>.</p>
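<p>As plain shell, the core of such a pipeline could look roughly like the sketch below; the image name, manifest file and versioning scheme are placeholders to adapt:</p>
<pre><code>VERSION="${BUILD_NUMBER:-dev}"                      # unique version, e.g. the Jenkins build number
mvn -B clean verify                                 # build / test
docker build -t myrepo/myapp:"$VERSION" .           # build the image
docker push myrepo/myapp:"$VERSION"                 # push it (docker login done beforehand)
sed -i "s|image: myrepo/myapp:.*|image: myrepo/myapp:$VERSION|" deployment.yaml
kubectl apply -f deployment.yaml                    # roll out the new version
</code></pre>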
|
<p>Question regarding AKS: each time I release via CD, Kubernetes gives a random IP address to my services. <br/>
I would like to know how to bind a domain to that IP.</p>
<p>Can someone give me some link or article to read?</p>
| <p>You have two options.</p>
<p>You can either deploy a Service with <code>type=LoadBalancer</code> which will provision a cloud load balancer. You can then point your DNS entry to that provisioned LoadBalancer with (for example) a CNAME.</p>
<p>More information on this can be found <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">here</a></p>
<p>Your second option is to use an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers" rel="nofollow noreferrer">Ingress Controller</a> with an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress Resource</a>. This offers much finer-grained routing via hostnames and URL paths. You'll probably need to deploy your ingress controller pod/service with a service <code>Type=LoadBalancer</code> though, to make it externally accessible.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress" rel="nofollow noreferrer">Here's</a> an article which explains how to do ingress on Azure with the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx-ingress-controller</a></p>
|
<p>How can we get the real resource usage (not resource requests) of each pod on Kubernetes by command line?
Heapster is deprecated.
Meanwhile, Metrics-server still does not support <code>kubectl top pod</code>.</p>
<ol>
<li><p>Heapster - </p>
<p>I deployed Heapster using the following command </p>
<pre><code>$ heapster/deploy/kube.sh start
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-hlcbl 2/2 Running 0 39m
kube-system calico-node-m8jl2 2/2 Running 0 35m
kube-system coredns-78fcdf6894-bl94w 1/1 Running 0 39m
kube-system coredns-78fcdf6894-fwx95 1/1 Running 0 39m
kube-system etcd-ctl.kube.yarnrm-pg0.utah.cloudlab.us 1/1 Running 0 39m
kube-system heapster-84c9bc48c4-qzt8x 1/1 Running 0 15s
kube-system kube-apiserver-ctl.kube.yarnrm-pg0.utah.cloudlab.us 1/1 Running 0 39m
kube-system kube-controller-manager-ctl.kube.yarnrm-pg0.utah.cloudlab.us 1/1 Running 0 38m
kube-system kube-proxy-nj9f8 1/1 Running 0 35m
kube-system kube-proxy-zvr2b 1/1 Running 0 39m
kube-system kube-scheduler-ctl.kube.yarnrm-pg0.utah.cloudlab.us 1/1 Running 0 39m
kube-system monitoring-grafana-555545f477-jldmz 1/1 Running 0 15s
kube-system monitoring-influxdb-848b9b66f6-k2k4f 1/1 Running 0 15s
</code></pre>
<p>When I used <code>kubectl top</code>, I encountered the following errors.</p>
<pre><code>$ kubectl top pods
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:)
$ kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:)
</code></pre></li>
<li><p>metrics-server:</p>
<p>metrics-server does not yet support <code>kubectl top</code> (<a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md" rel="noreferrer">Resource Metrics API</a>)</p></li>
</ol>
<p>If anyone has already solved the same problem, please help me.
Thanks.</p>
| <blockquote>
<p>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:)</p>
</blockquote>
<p>It sounds like the heapster deployment just forgot to install the <code>Service</code> for <code>heapster</code>; I would expect this would get you past <em>that</em> error, but unknown whether it would actually cause <code>kubectl top pods</code> to start to work:</p>
<pre><code>kubectl create -f /dev/stdin <<SVC
apiVersion: v1
kind: Service
metadata:
name: heapster
namespace: kube-system
spec:
selector:
whatever-label: is-on-heapster-pods
ports:
- name: http
port: 80
targetPort: whatever-is-heapster-is-listening-on
SVC
</code></pre>
|
<p>What is the difference between a <code>Role</code> and a <code>ClusterRole</code>?</p>
<p>When should I create one or the other one?</p>
<p>I can't quite figure out the difference between them.</p>
| <p>From the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>A Role can only be used to grant access to resources within a single namespace.</p>
</blockquote>
<p>Example: List all pods in a namespace</p>
<blockquote>
<p>A ClusterRole can be used to grant the same permissions as a Role, but
because they are cluster-scoped, they can also be used to grant access
to:</p>
<pre><code>cluster-scoped resources (like nodes)
non-resource endpoints (like “/healthz”)
namespaced resources (like pods) across all namespaces (needed to run kubectl get pods --all-namespaces, for example)
</code></pre>
</blockquote>
<p>Examples: list all pods in all namespaces; get a list of all nodes and their public IPs.</p>
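<p>A short sketch contrasting the two scopes (the names are arbitrary, not from the documentation):</p>
<pre><code># namespaced: grants access to pods only inside "default"
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
# cluster-scoped: can cover all namespaces and cluster-level resources such as nodes
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-and-node-reader
rules:
- apiGroups: [""]
  resources: ["pods", "nodes"]
  verbs: ["get", "list"]
</code></pre>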
|
<p>I am keeping all my code in an SVN repository on my on-premise server, and I am trying to implement a CI/CD pipeline for deploying my application using Kubernetes and Jenkins. When exploring implementation examples of CI/CD pipelines with Jenkins and Kubernetes, I only see examples with a Git repository and code commits managed via webhooks.</p>
<p>My confusion is that I am using an SVN code repository. How can I use my SVN code repository with a Jenkins pipeline job? Do I need to install any additional plugin for SVN? My requirement is that, when I commit to my SVN code repository, Jenkins needs to pull the code from the repo, build the project, and deploy it to the test environment.</p>
| <p>Hooks to trigger Jenkins from SVN are also possible. Or you can poll the repository for changes - the Jenkins SVN plugin supports both methods (<a href="https://wiki.jenkins.io/display/JENKINS/Subversion+Plugin" rel="nofollow noreferrer">https://wiki.jenkins.io/display/JENKINS/Subversion+Plugin</a>). The examples you are looking at will have a step that does a build from the source code of a particular repo. You should be fine to swap git for SVN and still follow the examples as where and how the source is hosted is not normally related to how to use Jenkins to build and deploy it.</p>
|
<p>I want to persist a data file via a PVC with GlusterFS in Kubernetes. Mounting the directory works, but when I try to mount the file it fails, because the file gets mounted as the directory type. How can I mount the data file in k8s?</p>
<p>image info:<a href="https://i.stack.imgur.com/86jeC.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/86jeC.jpg" alt="error log"></a></p>
<p><a href="https://i.stack.imgur.com/GVLMK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GVLMK.png" alt="pod yaml file"></a></p>
| <blockquote>
<p>how can I mount the data file in k8s ?</p>
</blockquote>
<p>This is often application specific and there are several ways to do so, but mainly you want to read about <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">subPath</a>.</p>
<p>Generally, you can chose to:</p>
<ul>
<li>use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">subPath</a> to separate config files.</li>
<li>Mount the volume/path as a directory at some other location and then link the file to a specific place within the pod (for rare cases where mixing with other config files or directory permissions in the same dir presents an issue, or where the application's boot/start policy prevents files from being mounted at pod start but requires them to be present after some initialization is performed - really edge cases).</li>
<li>Use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="noreferrer">ConfigMaps</a> (or even <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">Secrets</a>) to hold configuration files. Note that when using subPath with a ConfigMap or Secret the pod won't get updates to it automatically, but this is the more common way of handling configuration files, and your <code>conf/interpreter.json</code> looks like a fine example...</li>
</ul>
<p>Notes to keep in mind:</p>
<ul>
<li>Mounting is "overlaping" underlying path, so you have to mount file up to the point of file in order to share its folder with other files. Sharing up to a folder would get you folder with single file in it which is usually not what is required.</li>
<li><p>If you use ConfigMaps then you have to reference individual file with subPath in order to mount it, even if you have a single file in ConfigMap. Something like this:</p>
<pre><code>containers:
- volumeMounts:
- name: my-config
mountPath: /my-app/my-config.json
subPath: config.json
volumes:
- name: my-config
configMap:
name: cm-my-config-map-example
</code></pre></li>
</ul>
<h2>Edit:</h2>
<h3>Full example of mounting a single <code>example.sh</code> script file to <code>/bin</code> directory of a container using <code>ConfigMap</code>.</h3>
<p>You can adjust this example to place any file, with any permissions, in any desired folder. Replace <code>my-namespace</code> with any desired namespace (or remove it completely for the <code>default</code> one).</p>
<p>Config map:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
namespace: my-namespace
name: cm-example-script
data:
example-script.sh: |
#!/bin/bash
echo "Yaaaay! It's an example!"
</code></pre>
<p>Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
namespace: my-namespace
name: example-deployment
labels:
app: example-app
spec:
selector:
matchLabels:
app: example-app
strategy:
type: Recreate
template:
metadata:
labels:
app: example-app
spec:
containers:
- image: ubuntu:16.04
name: example-app-container
stdin: true
tty: true
volumeMounts:
- mountPath: /bin/example-script.sh
subPath: example-script.sh
name: example-script
volumes:
- name: example-script
configMap:
name: cm-example-script
defaultMode: 0744
</code></pre>
<h3>Full example of mounting a single <code>test.txt</code> file to <code>/bin</code> directory of a container using persistent volume (file already exists in root of volume).</h3>
<p>However, if you wish to mount with a persistent volume instead of a configMap, here is another example of mounting in much the same way (test.txt is mounted at /bin/test.txt). Note two things: <code>test.txt</code> must already exist on the PV, and I'm using a statefulset just to get an automatically provisioned PVC - you can adjust accordingly.</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: my-namespace
name: ss-example-file-mount
spec:
serviceName: svc-example-file-mount
replicas: 1
selector:
matchLabels:
app: example-app
template:
metadata:
labels:
app: example-app
spec:
containers:
- image: ubuntu:16.04
name: example-app-container
stdin: true
tty: true
volumeMounts:
- name: persistent-storage-example
mountPath: /bin/test.txt
subPath: test.txt
volumeClaimTemplates:
- metadata:
name: persistent-storage-example
spec:
storageClassName: sc-my-storage-class-for-provisioning-pv
accessModes: [ ReadWriteOnce ]
resources:
requests:
storage: 2Gi
</code></pre>
|
<p>I've been following <a href="https://www.linkedin.com/pulse/adding-users-quick-start-kubernetes-aws-jakub-scholz/" rel="nofollow noreferrer">this post</a> to create user access to my kubernetes cluster (running on Amazon EKS). I did create key, csr, approved the request and downloaded the certificate for the user. Then I did create a kubeconfig file with the key and crt. When I run kubectl with this kubeconfig, I'm recognized as <code>system:anonymous</code>.</p>
<pre><code>$ kubectl --kubeconfig test-user-2.kube.yaml get pods
Error from server (Forbidden): pods is forbidden: User "system:anonymous" cannot list pods in the namespace "default"
</code></pre>
<p>I expected the user to be recognized but get denied access.</p>
<pre><code>$ kubectl --kubeconfig test-user-2.kube.yaml version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-18T11:37:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl --kubeconfig test-user-2.kube.yaml config view
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: REDACTED
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: test-user-2
name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: test-user-2
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
# running with my other account (which uses heptio-authenticator-aws)
$ kubectl describe certificatesigningrequest.certificates.k8s.io/user-request-test-user-2
Name: user-request-test-user-2
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 01 Aug 2018 15:20:15 +0200
Requesting User:
Status: Approved,Issued
Subject:
Common Name: test-user-2
Serial Number:
Events: <none>
</code></pre>
<p>I did create a ClusterRoleBinding with <code>admin</code> (also tried <code>cluster-admin</code>) roles for this user but that should not matter for this step. I'm not sure how I can further debug 1) if the user is created or not or 2) if I missed some configuration.</p>
<p>Any help is appreciated!</p>
| <p>As mentioned in this <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">article</a>:</p>
<blockquote>
<p>When you create an Amazon EKS cluster, the IAM entity user or role (for example, for federated users) that creates the cluster is automatically granted system:master permissions in the cluster's RBAC configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes.</p>
</blockquote>
<ol>
<li><p>Check if you have aws-auth ConfigMap applied to your cluster:</p>
<pre><code>kubectl describe configmap -n kube-system aws-auth
</code></pre></li>
<li><p>If ConfigMap is present, skip this step and proceed to step 3.
If ConfigMap is not applied yet, you should do the following:</p></li>
</ol>
<p>Download the stock ConfigMap: </p>
<pre><code>curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/aws-auth-cm.yaml
</code></pre>
<p>Adjust it by putting your NodeInstanceRole ARN in the <code>rolearn:</code> field. To get the NodeInstanceRole value, check out <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html" rel="nofollow noreferrer">this manual</a>; you will find it at steps 3.8 - 3.10.</p>
<pre><code>data:
mapRoles: |
- rolearn: <ARN of instance role (not instance profile)>
</code></pre>
<p>Apply this config map to the cluster:</p>
<pre><code>kubectl apply -f aws-auth-cm.yaml
</code></pre>
<p>Wait for cluster nodes becoming Ready:</p>
<pre><code>kubectl get nodes --watch
</code></pre>
<ol start="3">
<li><p>Edit <code>aws-auth</code> ConfigMap and add users to it according to the example below:</p>
<pre><code>kubectl edit -n kube-system configmap/aws-auth
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
mapRoles: |
- rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
mapUsers: |
- userarn: arn:aws:iam::555555555555:user/admin
username: admin
groups:
- system:masters
- userarn: arn:aws:iam::111122223333:user/ops-user
username: ops-user
groups:
- system:masters
</code></pre></li>
</ol>
<p>Save and exit the editor.</p>
<ol start="4">
<li>Create kubeconfig for your IAM user following <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html" rel="nofollow noreferrer">this manual</a>.</li>
</ol>
|
<p>The Kubernetes top (kubectl top) command shows different memory usage than the Linux top command run inside the pod.</p>
<p>I’ve created k8s deployment where YAML contains these memory limits:</p>
<pre><code>resources:
limits:
cpu: "1"
memory: 2500Mi
requests:
cpu: 200m
memory: 2Gi
</code></pre>
<p>The following commands have output as shown:</p>
<pre><code>bash4.4$ kubectl top pod PODNAME
NAME CPU(cores) MEMORY(bytes)
openam-d975d46ff-rnp6h 2m 1205Mi
</code></pre>
<p>Run linux top command:</p>
<pre><code>kubectl exec -it PODNAME top
Mem: 12507456K used, 4377612K free, 157524K shrd,
187812K buff, 3487744K cached
</code></pre>
<p>Note that ‘free -g’ also shows 11 GB used.</p>
<p>The issue is that this contradicts "kubectl top", which shows only 1205 MiB used.</p>
| <p>Command <code>kubectl top</code> shows metrics for a given pod. That information is based on reports from <a href="https://github.com/google/cadvisor" rel="noreferrer">cAdvisor</a>, which collects real pods resource usage.</p>
<p>If you run <code>top</code> inside the pod, it will be as if you ran it on the host system, because the pod uses the kernel of the host system.
Unix <code>top</code> uses the <code>proc</code> virtual filesystem and reads the <code>/proc/meminfo</code> file to get actual information about the current memory status. Containers inside pods partially share <code>/proc</code> with the host system, including paths with memory and CPU information.</p>
<p>More information you can find in these documents: <a href="https://www.mankier.com/1/kubectl-top-pod" rel="noreferrer">kubectl-top-pod man page</a>, <a href="https://fabiokung.com/2014/03/13/memory-inside-linux-containers/" rel="noreferrer">Memory inside Linux containers</a></p>
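<p>If you want to see the number that cAdvisor (and therefore <code>kubectl top</code>) is actually working from, you can read the container's own cgroup accounting from inside the pod; the path below assumes cgroup v1, the default on kernels of that era:</p>
<pre><code>kubectl exec -it PODNAME -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes
</code></pre>
<p>That value is scoped to the container, unlike <code>/proc/meminfo</code>, which describes the whole node. Note that <code>kubectl top</code> reports the working set, so it can still be somewhat lower than this raw cgroup figure, which includes reclaimable page cache.</p>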
|
<p>I have a dockerfile that looks like this at the moment:</p>
<pre><code>FROM golang:1.8-alpine
COPY ./ /src
ENV GOOGLE_CLOUD_PROJECT = "snappy-premise-118915"
RUN apk add --no-cache git && \
apk --no-cache --update add ca-certificates && \
cd /src && \
go get -t -v cloud.google.com/go/pubsub && \
CGO_ENABLED=0 GOOS=linux go build main.go
# final stage
FROM alpine
ENV LATITUDE "-121.464"
ENV LONGITUDE "36.9397"
ENV SENSORID "sensor1234"
ENV ZIPCODE "95023"
ENV INTERVAL "3"
ENV GOOGLE_CLOUD_PROJECT "snappy-premise-118915"
ENV GOOGLE_APPLICATION_CREDENTIALS "/app/key.json"
ENV GRPC_GO_LOG_SEVERITY_LEVEL "INFO"
RUN apk --no-cache --update add ca-certificates
WORKDIR /app
COPY --from=0 /src/main /app/
COPY --from=0 /src/key.json /app/
ENTRYPOINT /app/main
</code></pre>
<p>and the pod config looks like this:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: sensorpub
spec:
template:
metadata:
labels:
app: sensorpub
spec:
volumes:
- name: google-cloud-key
secret:
secretName: pubsub-key
containers:
- name: sensorgen
image: gcr.io/snappy-premise-118915/sensorgen:v1
volumeMounts:
- name: google-cloud-key
mountPath: /var/secrets/google
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /var/secrets/google/key.json
</code></pre>
<p>I want to be able to pass in these environment vars:</p>
<pre><code>ENV LATITUDE "-121.464"
ENV LONGITUDE "36.9397"
ENV SENSORID "sensor1234"
ENV ZIPCODE "95023"
ENV INTERVAL "3"
ENV GOOGLE_CLOUD_PROJECT "snappy-premise-118915"
ENV GOOGLE_APPLICATION_CREDENTIALS "/app/key.json"
ENV GRPC_GO_LOG_SEVERITY_LEVEL "INFO"
</code></pre>
<p>I want to be able to set the environment variables in the pod config so that the docker file can use those...how do I do that instead of just coding them into the docker image directly?</p>
| <blockquote>
<p>I want to be able to set the environment variables in the pod config so that the docker file can use those...how do I do that instead of just coding them into the docker image directly?</p>
</blockquote>
<p>There is no need to specify <strong>any</strong> <code>ENV</code> directive in a Dockerfile; those directives only provide defaults in the case where (as in your example <code>PodSpec</code>) they are not provided at runtime.</p>
<p>The "how" is to do exactly what you have done in your example <code>PodSpec</code>: populate the <code>env:</code> array with the environment variables you wish to appear in the Pod</p>
|
<p>With the Kubernetes orchestrator <a href="https://blog.docker.com/2018/07/kubernetes-is-now-available-in-docker-desktop-stable-channel/" rel="nofollow noreferrer">now available</a> in the stable version of Docker Desktop for Win/Mac, I've been playing around with running an existing compose stack on Kubernetes locally.</p>
<p>This works fine, e.g., <code>docker stack deploy -c .\docker-compose.yml myapp.</code></p>
<p>Now I want to go to the next step of running this same application in a production environment using the likes of Amazon EKS or Azure AKS. These services expect proper Kubernetes YAML files. </p>
<p>My question(s) is what's the best way to get these files, or more specifically:</p>
<ol>
<li>Presumably, docker stack is performing some conversion from Compose YAML to Kubernetes YAML 'under the hood'. Are there documentation/source code links explaining what is going on here, and can the converted YAML be exported?</li>
<li>Or should I just be using Kompose?</li>
<li>It seems that running the above <code>docker stack deploy</code> command against a remote context (e.g., AKS/EKS) is not possible and that one must do a <code>kubectl deploy</code>. Can anyone confirm?</li>
</ol>
| <p><code>docker stack deploy</code> with a Compose file to Kube only works on Docker's Kubernetes distributions - Docker Desktop and Docker Enterprise. </p>
<p>With the recent federation announcement you'll be able to manage AKS and EKS with Docker Enterprise, but using them directly means you'll have to use Kubernetes manifest files and <code>kubectl</code>.</p>
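<p>For point 2, Kompose is indeed the usual route when targeting AKS/EKS directly: it converts the Compose file into plain Kubernetes manifests that you then manage with <code>kubectl</code>. A minimal sketch:</p>
<pre><code>kompose convert -f docker-compose.yml   # writes a *-deployment.yaml / *-service.yaml per Compose service
kubectl apply -f .
</code></pre>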
|
<p>I am getting the below error while trying to apply a patch:</p>
<pre><code>core@dgoutam22-1-coreos-5760 ~ $ kubectl apply -f ads-central-configuration.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"data":{"default":"{\"dedicated_redis_cluster\": {\"nodes\": [{\"host\": \"192.168.1.94\", \"port\": 6379}]}}"},"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"data\":{\"default\":\"{\\\"dedicated_redis_cluster\\\": {\\\"nodes\\\": [{\\\"host\\\": \\\"192.168.1.94\\\", \\\"port\\\": 6379}]}}\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"creationTimestamp\":\"2018-06-27T07:19:13Z\",\"labels\":{\"acp-app\":\"acp-discovery-service\",\"version\":\"1\"},\"name\":\"ads-central-configuration\",\"namespace\":\"acp-system\",\"resourceVersion\":\"1109832\",\"selfLink\":\"/api/v1/namespaces/acp-system/configmaps/ads-central-configuration\",\"uid\":\"64901676-79da-11e8-bd65-fa163eaa7a28\"}}\n"},"creationTimestamp":"2018-06-27T07:19:13Z","resourceVersion":"1109832","uid":"64901676-79da-11e8-bd65-fa163eaa7a28"}}
to:
&{0xc4200bb380 0xc420356230 acp-system ads-central-configuration ads-central-configuration.yaml 0xc42000c970 4434 false}
**for: "ads-central-configuration.yaml": Operation cannot be fulfilled on configmaps "ads-central-configuration": the object has been modified; please apply your changes to the latest version and try again**
core@dgoutam22-1-coreos-5760 ~ $
</code></pre>
<p>It seems likely that your yaml configurations were copy-pasted from what was generated, and thus contain fields such as <code>creationTimestamp</code> (and <code>resourceVersion</code>, <code>selfLink</code>, and <code>uid</code>), which don't belong in a declarative configuration file.</p>
<p>Go through your yaml and clean it up. Remove things that are instance specific. Your final yaml should be simple enough that you can easily understand it.</p>
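<p>One practical way to get a clean starting point is to export the live object and strip the instance-specific fields before editing; the name and namespace below are the ones from the error message:</p>
<pre><code>kubectl get configmap ads-central-configuration -n acp-system -o yaml > ads-central-configuration.yaml
# then delete metadata.creationTimestamp, metadata.resourceVersion,
# metadata.selfLink, metadata.uid and the last-applied-configuration
# annotation before changing "data" and re-applying the file
</code></pre>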
|
<p>There is a default <code>ClusterRoleBinding</code> named <code>cluster-admin</code>.<br>
When I run <code>kubectl get clusterrolebindings cluster-admin -o yaml</code> I get: </p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: 2018-06-13T12:19:26Z
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: cluster-admin
resourceVersion: "98"
selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
uid: 0361e9f2-6f04-11e8-b5dd-000c2904e34b
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:masters
</code></pre>
<p>In the <code>subjects</code> field I have: </p>
<pre><code>- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:masters
</code></pre>
<p>How can I see the members of the group <code>system:masters</code>?<br>
I read <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects" rel="noreferrer">here</a> about groups, but I don't understand how I can see who is inside a group such as <code>system:masters</code> in the example above.</p>
<p>I noticed that when I decoded <code>/etc/kubernetes/pki/apiserver-kubelet-client.crt</code> using the command <code>
openssl x509 -in apiserver-kubelet-client.crt -text -noout</code> it contained the subject <code>system:masters</code>, but I still don't understand who the users in this group are:</p>
<pre><code>Issuer: CN=kubernetes
Validity
Not Before: Jul 31 19:08:36 2018 GMT
Not After : Jul 31 19:08:37 2019 GMT
Subject: O=system:masters, CN=kube-apiserver-kubelet-client
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
</code></pre>
| <p><strong>Answer updated</strong>:</p>
<p>It seems that there is no way to do it using <code>kubectl</code>. There is no object like Group that you can "get" inside the Kubernetes configuration. </p>
<p>Group information in Kubernetes is currently provided by the Authenticator modules, and usually it's just a string in the user property.</p>
<p>Perhaps you can get the list of groups from the subject of the user certificate, or, if you use GKE, EKS or AKS, the group attribute is stored in a cloud user management system.</p>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a>
<a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a></p>
<p>Information about ClusterRole membership in system groups can be requested from ClusterRoleBinding objects (for example, for "system:masters" it shows only the cluster-admin ClusterRole):</p>
<p>Using jq:</p>
<pre><code>kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group") | select(.subjects[0].name=="system:masters")'
</code></pre>
<p>If you want to list the names only:</p>
<pre><code>kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group") | select(.subjects[0].name=="system:masters") | .metadata.name'
</code></pre>
<p>Using go-templates:</p>
<pre><code>kubectl get clusterrolebindings -o go-template='{{range .items}}{{range .subjects}}{{.kind}}-{{.name}} {{end}} {{" - "}} {{.metadata.name}} {{"\n"}}{{end}}' | grep "^Group-system:masters"
</code></pre>
<p>Some additional information about system groups can be found in <a href="https://github.com/kubernetes/kubernetes/issues/44418" rel="noreferrer">GitHub issue #44418</a> or in <a href="https://github.com/oracle/kubernetes-website/blob/master/docs/admin/authorization/rbac.md#referring-to-subjects" rel="noreferrer">RBAC document</a>:</p>
|
<p>I would like to use the etcdctl functionality of the etcd pod in my Kubernetes cluster, but I can't. I have installed my Kubernetes with kubeadm and I have 1 master and 1 node.</p>
<p>When I try etcdctl commands from inside my etcd pod, I get these errors:</p>
<blockquote>
<p>client: etcd cluster is unavailable or misconfigured;<br>
error #0:malformed HTTP response "\x15\x03\x01\x00\x02\x02" ;<br>
error #1: dial tcp 127.0.0.1:4001: getsockopt: connection refused</p>
</blockquote>
<p>But when I check my etcd pod log, it seems healthy:</p>
<p><a href="https://i.stack.imgur.com/UQl0D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UQl0D.png" alt="enter image description here"></a></p>
| <p>It looks like you have <code>etcd</code> listening for client connections on <code>https://127.0.0.1:2379</code> and you have client certificate authentication enabled as far as i can see. You should then be able to run the following command to connect to <code>etcd</code>:</p>
<pre><code>ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
</code></pre>
<p>Basically <code>etcd</code> expects you to authenticate using a client certificate.</p>
<p>Also, since you're running version 3.x of <code>etcd</code> you have to tell <code>etcdctl</code> that by providing the <code>ETCDCTL_API</code> environment variable.</p>
<p><strong>EDIT:</strong></p>
<p>For reference: </p>
<p>I was using <code>kubeadm</code> version 1.11.1 which installed version 3.2.18 of <code>etcd</code> on a server running Ubuntu 18.04 when testing the command above.</p>
|
<p>In my gcloud console it shows the following error for my defined ingresses: </p>
<blockquote>
<p>Error during sync: error while evaluating the ingress spec: service
"monitoring/kube-prometheus" is type "ClusterIP", expected "NodePort"
or "LoadBalancer"</p>
</blockquote>
<p>I am using traefik as reverse proxy (instead of nginx) and therefore I define an ingress using a ClusterIP. As far as I understand the process all traffic is proxied through the traefik service (which has a Loadbalancer ingress defined) and therefore all my other ingresses SHOULD actually have a ClusterIP instead of NodePort or Loadbalancer?</p>
<p><strong>Question:</strong></p>
<p>So why does Google Cloud warn me that it expected a NodePort or LoadBalancer?</p>
<p><a href="https://i.stack.imgur.com/WzOKi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WzOKi.png" alt="enter image description here"></a></p>
| <p>I don't know why that error happens, because it seems (to me) to be a valid configuration. But to clear the error, you can switch your service to a named NodePort. Then switch your ingress to use the port name instead of the number. For example:</p>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: testapp
spec:
ports:
- name: testapp-http # ADD THIS
port: 80
protocol: TCP
targetPort: 80
selector:
app: testapp
type: NodePort
</code></pre>
<p>Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: testapp
spec:
rules:
- host: hostname.goes.here
http:
paths:
- backend:
serviceName: testapp
# USE THE PORT NAME FROM THE SERVICE INSTEAD OF THE PORT NUMBER
servicePort: testapp-http
path: /
</code></pre>
<p>Update:</p>
<p>This is the explanation I received from Google.</p>
<p>Since services by default are ClusterIP [1] and this type of service is meant to be accessible from inside the cluster. It can be accessed from outside when kube-proxy is used, not meant to be directly accessed with an ingress.</p>
<p>As a suggestion, I personally find this article [2] good for understanding the difference between these types of services.</p>
<p>[1] <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</a></p>
<p>[2] <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="noreferrer">https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0</a></p>
|
<p>I need to give an external support team access to a set of pods within a namespace.
I've been reading about the RBAC API, [Cluster]Roles and [Cluster]RoleBindings, but I could not find anything about how to apply a role to a group of pods (based on annotations or labels).
Does anyone know if it is possible to do that?</p>
<p>This is the Role that I use now, and need limit it to a specific pods set:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: <ClientX>-PodMonitor
namespace: <namespace>
rules:
- apiGroups: [""]
verbs: ["get", "list"]
resources: ["pods", "pods/log"]
</code></pre>
<p>If you guys need more details, please let me know.</p>
<p>Thanks.</p>
| <p>Try defining the Role with <code>resourceNames</code>, as in this example from the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">docs</a>:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: configmap-updater
rules:
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["my-configmap"]
verbs: ["update", "get"]
</code></pre>
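<p>Applied to your case, a sketch of the Role from the question restricted to specific pods could look like the following (the pod names are placeholders). Note that <code>resourceNames</code> only restricts verbs that address a single object, such as <code>get</code>; a <code>list</code> of all pods in the namespace cannot be limited this way:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <ClientX>-PodMonitor
  namespace: <namespace>
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  resourceNames: ["pod-a", "pod-b"]
  verbs: ["get"]
</code></pre>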
|
<p>I was testing Skaffold and it is a great tool for microservices development.
But I could not find any tutorial on how to use it with Java. Is there any support for Maven builds?</p>
| <p>There is a discussion going on about adding <a href="https://github.com/GoogleContainerTools/skaffold/issues/526" rel="nofollow noreferrer">support for Java apps here</a>; in the meantime you can very much use a <a href="https://docs.docker.com/develop/develop-images/multistage-build/" rel="nofollow noreferrer">Docker multi-stage build</a> with Skaffold. A probably working example is <a href="https://github.com/GoogleContainerTools/skaffold/pull/527/files" rel="nofollow noreferrer">available here</a>.</p>
<p>Your build portion of Skaffold file will look something like:</p>
<pre><code>apiVersion: skaffold/v1alpha2
kind: Config
build:
tagPolicy:
dateTime:
format: 2006-01-02_15-04-05.999_MST
timezone: Local
artifacts:
- imageName: <repo>/<image>
workspace: ./appdir
</code></pre>
<p>In <code>appdir</code> you can use a multi-stage Dockerfile and integrate it with the rest of the workflow. A multi-stage Dockerfile builds the artifact in one stage and creates the final container image from that artifact in a later stage.</p>
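<p>A minimal sketch of such a multi-stage Dockerfile for a Maven project (the image tags and paths are assumptions, adjust them to your build):</p>
<pre><code># build stage: produce the jar with Maven
FROM maven:3.5-jdk-8 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests

# runtime stage: copy only the artifact from the build stage
FROM openjdk:8-jre-alpine
COPY --from=build /app/target/*.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
</code></pre>
<p>Skaffold then builds this Dockerfile from <code>./appdir</code> like any other image, so the Maven build runs inside the Docker build and no local JDK or Maven is needed.</p>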
|
<p>I have already deployed pods using a DaemonSet with a nodeSelector. Now I need to use kind Deployment, but I want to retain the DaemonSet-like behaviour: the nodeSelector is defined so that the same pod is scheduled on every labelled node. How can I achieve this?
In other words, using a ReplicationController (or Deployment), when I schedule 2 (two) replicas of a pod I expect 1 (one) replica on each node (VM). Instead I find both replicas created on the same node, which makes that node a single point of failure, and I need to avoid that.</p>
<p>I have labelled both nodes properly, yet both pods are spawned on a single node. How can I make sure the two pods are always scheduled on both nodes?</p>
| <p>Look into <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">affinity and anti-affinity</a>, specifically, inter-pod affinity and anti-affinity.</p>
<p>From official documentation: </p>
<blockquote>
<p>Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled based on labels on pods that are already running on the node rather than based on labels on nodes. The rules are of the form “this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y”.</p>
</blockquote>
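<p>As a sketch (the name, labels and image are placeholders; keep your existing nodeSelector alongside it), a Deployment that refuses to co-locate its own replicas on one node could look like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["my-app"]
            topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: my-app:latest
</code></pre>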
|
<p>I have a batch job that I want to run on a Kubernetes cluster on Google Cloud. That job has to be run periodically, say once a week and takes a day to complete. From the doc:</p>
<blockquote>
<p>Beginning with Kubernetes version 1.7, you can specify a minimum size of zero for your node pool. This allows your node pool to scale down completely if the instances within aren't required to run your workloads. However, while a node pool can scale to a zero size, the overall cluster size does not scale down to zero nodes (as at least one node is always required to run system Pods).</p>
</blockquote>
<p>My question is, will it still generate costs to have this cluster if it is scaled down to 0 nodes? From what I understand, the cluster size won't be null hence costs would still be generated.</p>
<p>If that is the case, what would be the correct approach to minimize my costs? Should I periodically create/delete the cluster before/after running the job?</p>
| <p>If you provision your Kubernetes cluster dynamically, i.e. you can rebuild the worker environment from scratch without any dependencies on specific worker Nodes, then <code>Autoscaling</code> down to zero Nodes is a good solution. The Kubernetes master Nodes (system Pods) are not charged in <code>GKE</code>, according to the <a href="https://cloud.google.com/kubernetes-engine/pricing" rel="noreferrer">Price page</a>.</p>
<p>You can create <code>node-pools</code>:</p>
<pre><code>gcloud container node-pools create ${CLUSTER_NAME}-pool \
--cluster ${CLUSTER_NAME} \
--enable-autoscaling --min-nodes 0 --max-nodes 10 \
--zone ${INSTANCE_ZONE}
</code></pre>
<p>and then force scaling down on demand:</p>
<pre><code>gcloud container clusters resize ${CLUSTER_NAME} --size=0 [--node-pool=${CLUSTER_NAME}-pool]
</code></pre>
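<p>and scale it back up before the weekly job runs (the size here is just an example):</p>
<pre><code>gcloud container clusters resize ${CLUSTER_NAME} --size=3 --node-pool=${CLUSTER_NAME}-pool --zone ${INSTANCE_ZONE}
</code></pre>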
<p>Also get yourself familiar with this <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node" rel="noreferrer">Document</a>, it describes the types of Pods which can prevent <code>Cluster Autoscaler</code> from removing Node. </p>
|
<p>Admins-MacBook-Pro:~ Harshin$ kubectl cluster-info
Kubernetes master is running at <a href="http://localhost:8080" rel="noreferrer">http://localhost:8080</a></p>
<p>To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"</p>
<p>i am following this document </p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?refid=gs_card" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?refid=gs_card</a></p>
<p>while i am trying to test my configuration in step 11 of configure kubectl for amazon eks </p>
<pre><code>apiVersion: v1
clusters:
- cluster:
server: ...
certificate-authority-data: ....
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: aws
name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
command: heptio-authenticator-aws
args:
- "token"
- "-i"
- "kunjeti"
# - "-r"
# - "<role-arn>"
# env:
# - name: AWS_PROFILE
# value: "<aws-profile>"
</code></pre>
| <p>Change <code>name: kubernetes</code> to the actual name of your cluster.</p>
<p>Here is what I did to work through it:</p>
<p>1. Enable verbose output to make sure the config files are read properly:</p>
<blockquote>
<p>kubectl get svc --v=10</p>
</blockquote>
<p>2. Modify the file as below:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
server: XXXXX
certificate-authority-data: XXXXX
name: abc-eks
contexts:
- context:
cluster: abc-eks
user: aws
name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
command: aws-iam-authenticator
args:
- "token"
- "-i"
- "abc-eks"
# - "-r"
# - "<role-arn>"
env:
- name: AWS_PROFILE
value: "aws"
</code></pre>
|
<p>I have three nodes in my google container cluster.</p>
<p>Every time I perform a Kubernetes update through the web UI on the cluster in Google Container Engine:</p>
<p><a href="https://i.stack.imgur.com/aHY5O.png" rel="noreferrer"><img src="https://i.stack.imgur.com/aHY5O.png" alt="Google container cluster update"></a></p>
<p>my external IPs change, and I have to manually assign the previous IP on all three instances in the Google Cloud Console.</p>
<p>These are reserved static external IP set up using the following guide.</p>
<p><a href="https://cloud.google.com/compute/docs/configure-ip-addresses#reserve_new_static" rel="noreferrer">Reserving a static external IP</a></p>
<p>Has anyone run into the same problem? Starting to think this is a bug.</p>
<p>Perhaps you can set up the same static outbound external IP for all the instances to use, but I cannot find any information on how to do so. That would be a solution as long as it persists through updates; otherwise we've got the same issue.</p>
<p>It's only updates that cause this, not restarts.</p>
| <p>I was having the same problem as you. We found some solutions.</p>
<ul>
<li><a href="https://github.com/doitintl/kubeIP" rel="noreferrer">KubeIP</a> - But this needed a cluster 1.10 or higher. Ours is 1.8</li>
<li>NAT - The GCP documentation describes this method. It was too complex for me.</li>
</ul>
<p><strong>Our Solution</strong>
We followed the <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#IP_assign" rel="noreferrer">documentation for assigning IP addresses on GCE</a>, using the command line.
With this method we haven't had any problems so far. I don't know the risks of it yet; if anyone has an idea, it would be good to hear.</p>
<p>We basically just ran:</p>
<pre><code>gcloud compute instances delete-access-config [INSTANCE_NAME] --access-config-name [CONFIG_NAME]
gcloud compute instances add-access-config [INSTANCE_NAME] --access-config-name "external-nat-static" --address [IP_ADDRESS]
</code></pre>
<p>If anyone has any feedback on this solution, please share it with us.
@Ahmet Alp Balkan - Google</p>
|
<p>I'm attempting to grant a Kubernetes ServiceAccount the cluster-admin role using a ClusterRoleBinding:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins
namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: jenkins
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: jenkins
namespace: jenkins
</code></pre>
<p>...and I'm getting this error:</p>
<pre><code>The ClusterRoleBinding "jenkins" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"cluster-admin"}:
cannot change roleRef
</code></pre>
<p>I've verified that the ClusterRole does exist:</p>
<pre><code>kubectl get clusterrole
NAME AGE
admin 1d
alb-ingress-controller 1d
aws-node 1d
cluster-admin 1d
</code></pre>
<p>I've also attempted to attach other cluster roles to my service account and have been unable to do so.</p>
<p>I assume that this means you cannot attach cluster roles to service accounts and if that is the case then how do you grant cluster level permissions to a service account?</p>
| <p>The error "cannot change roleRef" was referring to the fact that the ClusterRoleBinding I was trying to create already existed.</p>
<p>By running <code>kubectl get clusterrolebinding</code> I was able to see that the ClusterRoleBinding already existed.</p>
<p>After running <code>kubectl delete clusterrolebinding/jenkins</code> I was able to execute the YAML above successfully.</p>
|
<p>I am running geth full node <a href="https://github.com/ethereum/go-ethereum/wiki/geth" rel="nofollow noreferrer">https://github.com/ethereum/go-ethereum/wiki/geth</a> on Google Cloud platform on a VM instance. Currently, I have mounted a SSD and write the chain data to it.</p>
<p>I want to now run it on multiple VM instances and use a load balancer for serving the requests made by Dapp. I can do this using a normal load balancer and create VMs and autoscale. However, I have the following questions:</p>
<ol>
<li>SSD seems to be a very important part of blockchain syncing speed. If I simply create VM images and add it for autoscaling, it won't help much because the blockchain will take time to sync.</li>
<li>If I want to run these nodes on kubernetes cluster, what's the best way to use the disk?</li>
</ol>
| <p>Take a look at this Kubernetes Engine tutorial which shows you how to run StatefulSets with automatic persistent volume provisioning: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/stateful-apps" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/stateful-apps</a></p>
<p>Take a look at this Kubernetes Engine tutorial which shows you how to provision SSD disks <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#ssd_persistent_disks" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#ssd_persistent_disks</a> </p>
<p>With these + HorizontalPodAutoscaler, you should be able to create a StatefulSet with auto-scaling and each pod will get its own SSD disk.</p>
|
<p>I am working on the evolution of a SIEM/SOC, and my current issue is getting my logs from my Azure storage account into an Elasticsearch data node deployed in a pod on Kubernetes. I would like to know the right approach for that. </p>
<p>With Filebeat there is no <code>input</code> for Azure Blob storage, only an <code>output</code>.</p>
<p>Logstash doesn't seem to work without an agent. </p>
<p>Thanks! </p>
| <p>There is an approach you could consider for accomplishing your task. Kubernetes in Azure allows you to deploy Azure File Shares with your containers. If you move your logs to a file share, you should be able to accomplish your project. </p>
<p>I'd recommend checking Diego's post <strong><a href="https://medium.com/@diegomrtnzg/monitoring-your-log-files-with-kubernetes-in-azure-b2a92e674947" rel="nofollow noreferrer">here</a></strong>, it shows how to access logs from a storage account, specifically FileShare. </p>
<p>here's a blurb from the tutorial:</p>
<p>1- Create an Azure Storage account with your own parameters (deployment model: resource manager; type: general purpose). You will need the Azure Storage account name in the next step.</p>
<p>2- Modify the storageAccount parameter in this .yaml file with your Azure Storage account name and deploy it to your Kubernetes cluster: kubectl apply -f sidecar-storageclass.yaml. It will create a Kubernetes volume using your Azure File Storage account.</p>
<p>3- Deploy this .yaml file to your Kubernetes cluster: kubectl apply -f sidecar-pvc.yaml. It will create a volume claim for your volume in order to use it in your pod.</p>
<p>4- Modify your application deployment .yaml file by adding (modify the logFileDirectory parameter) this content and deploy it to your Kubernetes cluster. It will add the volume to your pod and store on it the logFilesDirectory.</p>
<p>5- Modify the logReaderName (you will filter the logs using this parameter), logFileDirectory (x2) and the logFileName with your data in this .yaml file and deploy it to your Kubernetes cluster: kubectl apply -f sidecar-logreaderpod.yaml. It will create the Log Reader pod and write the logFile content to the STDOUT.</p>
<p>The Log Reader pod uses tail command to write in the STDOUT. You can modify the tail command, for example, to write different files (extension .log) in the same STDOUT: tail -n+1 -f //*.log
Once you deploy the Log Reader, you can start to check the logs filtered by the pod name (you selected it when you deployed the last .yaml file):</p>
<pre><code>kubectl get pods
kubectl logs <podname>
</code></pre>
|
<p>I'm using a dockerized microservice architecture running on Kubernetes with Nginx, and am encountering an issue with hostnames. How do you correctly add the hostname to Kubernetes (or perhaps Nginx too)?</p>
<p>The problem: When microservice A called <code>admin</code> tries to talk to microservice B called <code>session</code>, <code>admin</code> logs the following error and <code>session</code> is not reached: </p>
<pre><code>{ Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's
altnames: Host: session. is not in the cert's altnames: DNS:*.example.com, example.com
at Object.checkServerIdentity (tls.js:225:17)
at TLSSocket.onConnectSecure (_tls_wrap.js:1051:27)
at TLSSocket.emit (events.js:160:13)
at TLSSocket._finishInit (_tls_wrap.js:638:8)
reason: 'Host: session. is not in the cert\'s altnames:
DNS:*.example.com, example.com',
host: 'session',
cert:
{ subject: { OU: 'Domain Control Validated', CN:
'*.example.com' },
issuer: ...
</code></pre>
<p>In response to this error, I tried to update the hostname in the kubernetes config yaml file unsuccessfully (based on <a href="https://stackoverflow.com/questions/34609572/is-it-possible-to-set-a-hostname-in-a-kubernetes-replication-controller">this</a>). See the added <code>hostname</code> below.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: session
namespace: demo
spec:
replicas: 1
selector:
matchLabels:
app: session
component: demo
template:
metadata:
labels:
app: session
component: demo
spec:
hostname: session.example.com . ----> added host name here
imagePullSecrets:
- name: docker-secret
containers:
- name: session
...
</code></pre>
<p>However, when I try to apply this updated config file in Kubernetes, an error emerges that I cannot use a period. If I cannot use a period, and the hostname is <code>*.example.com</code> (i.e. <code>session.example.com</code>), where/how should the hostname be updated.</p>
<pre><code>The Deployment "session" is invalid: spec.template.spec.hostname:
Invalid value: "session.example.com": a DNS-1123 label must
consist of lower case alphanumeric characters or '-', and must start and
end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex
used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')
</code></pre>
<p>Meanwhile, the server name in the nginx config file is indeed updated with <code>session.example.com</code>.</p>
<pre><code>upstream session {
server 127.0.0.1:3000;
keepalive 32;
}
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
server_name "session.example.com"; ---> updated for hostname
ssl_certificate /etc/ssl/nginx/certificate.pem;
ssl_certificate_key /etc/ssl/nginx/key.pem;
location / {
proxy_pass http://session/;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name "session.example.com"; ---> updated for hostname
return 301 https://$host$request_uri;
}
</code></pre>
<p>How do you suggest fixing this? My goal is for <code>admin</code> to successfully communicate with <code>session</code>.</p>
| <p>You can use kubernetes own dns.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
<p>So you can access your pod using pod dns;</p>
<blockquote>
<p>When enabled, pods are assigned a DNS A record in the form of</p>
<p><strong>“pod-ip-address.my-namespace.pod.cluster.local”</strong></p>
</blockquote>
<p>With service you can use</p>
<blockquote>
<p><strong>my-svc.my-namespace.svc.cluster.local</strong></p>
</blockquote>
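<p>As a sketch for the setup in the question, assuming <code>session</code> is exposed by a Service of the same name in the <code>demo</code> namespace (both port numbers are assumptions; point <code>targetPort</code> at whatever port the session container actually serves):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: session
  namespace: demo
spec:
  selector:
    app: session
    component: demo
  ports:
  - port: 80
    targetPort: 3000
</code></pre>
<p><code>admin</code> could then reach it at <code>http://session.demo.svc.cluster.local/</code> (or simply <code>http://session/</code> from within the same namespace) instead of a hostname that has to match the <code>*.example.com</code> certificate.</p>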
|
<p>I have a cronjob that sends out emails to customers. It occasionally fails for various reasons. I <em>do not want</em> it to restart, but it still does.</p>
<p>I am running Kubernetes on GKE. To get it to stop, I have to delete the CronJob and then kill all the pods it creates manually. </p>
<p>This is bad, for obvious reasons. </p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
creationTimestamp: 2018-06-21T14:48:46Z
name: dailytasks
namespace: default
resourceVersion: "20390223"
selfLink: [redacted]
uid: [redacted]
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
template:
metadata:
creationTimestamp: null
spec:
containers:
- command:
- kubernetes/daily_tasks.sh
env:
- name: DB_HOST
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
envFrom:
- secretRef:
name: my-secrets
image: [redacted]
imagePullPolicy: IfNotPresent
name: dailytasks
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
schedule: 0 14 * * *
successfulJobsHistoryLimit: 3
suspend: true
status:
active:
- apiVersion: batch
kind: Job
name: dailytasks-1533218400
namespace: default
resourceVersion: "20383182"
uid: [redacted]
lastScheduleTime: 2018-08-02T14:00:00Z
</code></pre>
| <p>It turns out that you have to set a <code>backoffLimit: 0</code> in combination with <code>restartPolicy: Never</code> in combination with <code>concurrencyPolicy: Forbid</code>. </p>
<p><strong>backoffLimit</strong> means the number of times it will retry before it is considered failed. The default is 6.</p>
<p><strong>concurrencyPolicy</strong> set to <code>Forbid</code> means it will run 0 or 1 times, but not more.</p>
<p><strong>restartPolicy</strong> set to <code>Never</code> means it won't restart on failure.</p>
<p>You need to do all 3 of these things, or your cronjob may run more than once.</p>
<pre><code>spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
[ADD THIS -->]backoffLimit: 0
template:
... MORE STUFF ...
</code></pre>
|
<p>I am to patch an image onto init containers</p>
<p>my image is stored in a variable called $IMAGE_NAME </p>
<p>When I run </p>
<blockquote>
<p>kubectl patch deployment production-art-backend -p {"spec":{"template":{"spec":{"initContainers":[{"name":"run-migrations","image":"$IMAGE_NAME"}]}}}}</p>
</blockquote>
<p>it patches the image as 'IMAGE_NAME' and not the value inside the variable IMAGE_NAME, how can I patch the image dynamically?</p>
| <p>This is a bash issue, not necessarily a <code>kubectl</code> issue!</p>
<p>The JSON payload has to keep its double quotes, so wrap the whole payload in single quotes and briefly close and reopen them around the variable so the shell still expands it. Try this:</p>
<pre><code>kubectl patch deployment production-art-backend -p '{"spec":{"template":{"spec":{"initContainers":[{"name":"run-migrations","image":"'"${IMAGE_NAME}"'"}]}}}}'
</code></pre>
|
<p>I am trying to understand k8s and helm. </p>
<p>When I create a helm chart, there are 2 files: <code>service.yaml</code> and <code>deployment.yaml</code>. Both of them have a <strong>name</strong> field. </p>
<p>If I understand correctly, the deployment will be responsible for managing the pods, replicasets, etc and thus the service. </p>
<p>Basically, why am I allowed use a separate name for the service and for the deployment? Under what scenario would we want these 2 names to differ? Can a deployment have more than 1 service?</p>
| <p>The "service" creates a persistent IP address in your cluster, which is how everything else connects to it. The Deployment creates a ReplicaSet, which creates Pods, and these Pods are the backend for that service. There can be more than 1 pod, in which case the service load balances between them; the pods can change over time and change IPs, but your service remains constant.</p>
<p>Think of the service as a load balancer which points to your pods. It's analogous to interfaces and implementations. The service is like an interface, which is backed by the pods, the implementations.</p>
<p>The mapping is m:n. You can have multiple services backed by a single pod, or multiple pods backing a single service.</p>
|
<p>I have some questions about the golang API for kubernetes.</p>
<ol>
<li><p>which one should I use? k8s.io/client-go or k8s.io/kubernetes/pkg/client? What's the difference?</p></li>
<li><p>I want to get list of all pods and then listen to add/update/delete events, what's the difference between using the api.Pods("").Watch method and using an informer?</p></li>
<li><p>I'm using the API from inside the cluster, how can I fetch the name of the node I'm currently in? is it just the hostname of the machine?</p></li>
</ol>
| <blockquote>
<p>which one should I use? k8s.io/client-go or k8s.io/kubernetes/pkg/client?</p>
</blockquote>
<p>Use <code>k8s.io/client-go</code>.</p>
<blockquote>
<p>what's the difference between using the api.Pods("").Watch method and using an informer?</p>
</blockquote>
<p>The informer is essentially a shared cache, reducing the load on the API server. Unless you're doing something trivial, this is the preferred way.</p>
<blockquote>
<p>how can I fetch the name of the node I'm currently in? </p>
</blockquote>
<p>Use <a href="https://godoc.org/k8s.io/api/core/v1#Node" rel="noreferrer">k8s.io/api/core/v1.Node</a>, see for example <a href="https://github.com/openshift-talks/k8s-go/blob/master/client-go-basic/main.go" rel="noreferrer">this code</a>.</p>
<p>BTW, a colleague of mine and myself gave a workshop on this topic (using the Kube API with Go) last week at GopherCon UK—maybe the <a href="https://301.sh/2018-gopherconuk-slides" rel="noreferrer">slide deck</a> and the <a href="https://github.com/openshift-talks/k8s-go" rel="noreferrer">repo</a> are useful for you; also, there is an accompanying online <a href="https://www.katacoda.com/mhausenblas/scenarios/k8s-go" rel="noreferrer">Katacoda scenario</a> you can use to play around.</p>
|
<p>I am using Kubespray with Kubernetes 1.9</p>
<p>What I'm seeing is the following when I try to interact with pods on my new nodes in anyway through kubectl. Important to note that the nodes are considered to be healthy and are having pods scheduled on them appropriately. The pods are totally functional.</p>
<pre><code> ➜ Scripts k logs -f -n prometheus prometheus-prometheus-node-exporter-gckzj
Error from server: Get https://kubeworker-rwva1-prod-14:10250/containerLogs/prometheus/prometheus-prometheus-node-exporter-gckzj/prometheus-node-exporter?follow=true: dial tcp: lookup kubeworker-rwva1-prod-14 on 10.0.0.3:53: no such host
</code></pre>
<p>I am able to ping to the kubeworker nodes both locally where I am running kubectl and from all masters by both IP and DNS.</p>
<pre><code>➜ Scripts ping kubeworker-rwva1-prod-14
PING kubeworker-rwva1-prod-14 (10.0.0.111): 56 data bytes
64 bytes from 10.0.0.111: icmp_seq=0 ttl=63 time=88.972 ms
^C
pubuntu@kubemaster-rwva1-prod-1:~$ ping kubeworker-rwva1-prod-14
PING kubeworker-rwva1-prod-14 (10.0.0.111) 56(84) bytes of data.
64 bytes from kubeworker-rwva1-prod-14 (10.0.0.111): icmp_seq=1 ttl=64 time=0.259 ms
64 bytes from kubeworker-rwva1-prod-14 (10.0.0.111): icmp_seq=2 ttl=64 time=0.213 ms
➜ Scripts k get nodes
NAME STATUS ROLES AGE VERSION
kubemaster-rwva1-prod-1 Ready master 174d v1.9.2+coreos.0
kubemaster-rwva1-prod-2 Ready master 174d v1.9.2+coreos.0
kubemaster-rwva1-prod-3 Ready master 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-1 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-10 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-11 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-12 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-14 Ready node 16d v1.9.2+coreos.0
kubeworker-rwva1-prod-15 Ready node 14d v1.9.2+coreos.0
kubeworker-rwva1-prod-16 Ready node 6d v1.9.2+coreos.0
kubeworker-rwva1-prod-17 Ready node 4d v1.9.2+coreos.0
kubeworker-rwva1-prod-18 Ready node 4d v1.9.2+coreos.0
kubeworker-rwva1-prod-19 Ready node 6d v1.9.2+coreos.0
kubeworker-rwva1-prod-2 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-20 Ready node 6d v1.9.2+coreos.0
kubeworker-rwva1-prod-21 Ready node 6d v1.9.2+coreos.0
kubeworker-rwva1-prod-3 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-4 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-5 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-6 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-7 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-8 Ready node 174d v1.9.2+coreos.0
kubeworker-rwva1-prod-9 Ready node 174d v1.9.2+coreos.0
</code></pre>
<p>When I describe a broken node, it looks identical to one of my functioning ones.</p>
<pre><code>➜ Scripts k describe node kubeworker-rwva1-prod-14
Name: kubeworker-rwva1-prod-14
Roles: node
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=kubeworker-rwva1-prod-14
node-role.kubernetes.io/node=true
role=app-tier
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Tue, 17 Jul 2018 19:35:08 -0700
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 03 Aug 2018 18:44:59 -0700 Tue, 17 Jul 2018 19:35:08 -0700 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 03 Aug 2018 18:44:59 -0700 Tue, 17 Jul 2018 19:35:08 -0700 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 03 Aug 2018 18:44:59 -0700 Tue, 17 Jul 2018 19:35:08 -0700 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Fri, 03 Aug 2018 18:44:59 -0700 Tue, 17 Jul 2018 19:35:18 -0700 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.0.0.111
Hostname: kubeworker-rwva1-prod-14
Capacity:
cpu: 32
memory: 147701524Ki
pods: 110
Allocatable:
cpu: 31900m
memory: 147349124Ki
pods: 110
System Info:
Machine ID: da30025a3f8fd6c3f4de778c5b4cf558
System UUID: 5ACCBB64-2533-E611-97F0-0894EF1D343B
Boot ID: 6b42ba3e-36c4-4520-97e6-e7c6fed195e2
Kernel Version: 4.4.0-130-generic
OS Image: Ubuntu 16.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.1
Kubelet Version: v1.9.2+coreos.0
Kube-Proxy Version: v1.9.2+coreos.0
ExternalID: kubeworker-rwva1-prod-14
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system calico-node-cd7qg 150m (0%) 300m (0%) 64M (0%) 500M (0%)
kube-system kube-proxy-kubeworker-rwva1-prod-14 150m (0%) 500m (1%) 64M (0%) 2G (1%)
kube-system nginx-proxy-kubeworker-rwva1-prod-14 25m (0%) 300m (0%) 32M (0%) 512M (0%)
prometheus prometheus-prometheus-node-exporter-gckzj 0 (0%) 0 (0%) 0 (0%) 0 (0%)
rabbit-relay rabbit-relay-844d6865c7-q6fr2 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
325m (1%) 1100m (3%) 160M (0%) 3012M (1%)
Events: <none>
➜ Scripts k describe node kubeworker-rwva1-prod-11
Name: kubeworker-rwva1-prod-11
Roles: node
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=kubeworker-rwva1-prod-11
node-role.kubernetes.io/node=true
role=test
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Fri, 09 Feb 2018 21:03:46 -0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 03 Aug 2018 18:46:31 -0700 Fri, 09 Feb 2018 21:03:38 -0800 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 03 Aug 2018 18:46:31 -0700 Mon, 16 Jul 2018 13:24:58 -0700 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 03 Aug 2018 18:46:31 -0700 Mon, 16 Jul 2018 13:24:58 -0700 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Fri, 03 Aug 2018 18:46:31 -0700 Mon, 16 Jul 2018 13:24:58 -0700 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.0.0.218
Hostname: kubeworker-rwva1-prod-11
Capacity:
cpu: 32
memory: 131985484Ki
pods: 110
Allocatable:
cpu: 31900m
memory: 131633084Ki
pods: 110
System Info:
Machine ID: 0ff6f3b9214b38ad07c063d45a6a5175
System UUID: 4C4C4544-0044-5710-8037-B3C04F525631
Boot ID: 4d7ec0fc-428f-4b4c-aaae-8e70f374fbb1
Kernel Version: 4.4.0-87-generic
OS Image: Ubuntu 16.04.3 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.1
Kubelet Version: v1.9.2+coreos.0
Kube-Proxy Version: v1.9.2+coreos.0
ExternalID: kubeworker-rwva1-prod-11
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
ingress-nginx-internal default-http-backend-internal-7c8ff87c86-955np 10m (0%) 10m (0%) 20Mi (0%) 20Mi (0%)
kube-system calico-node-8fzk6 150m (0%) 300m (0%) 64M (0%) 500M (0%)
kube-system kube-proxy-kubeworker-rwva1-prod-11 150m (0%) 500m (1%) 64M (0%) 2G (1%)
kube-system nginx-proxy-kubeworker-rwva1-prod-11 25m (0%) 300m (0%) 32M (0%) 512M (0%)
prometheus prometheus-prometheus-kube-state-metrics-7c5cbb6f55-jq97n 0 (0%) 0 (0%) 0 (0%) 0 (0%)
prometheus prometheus-prometheus-node-exporter-7gn2x 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
335m (1%) 1110m (3%) 176730Ki (0%) 3032971520 (2%)
Events: <none>
</code></pre>
<p>What's going on?</p>
<pre><code>➜ k logs -f -n prometheus prometheus-prometheus-node-exporter-gckzj
Error from server: Get https://kubeworker-rwva1-prod-14:10250/containerLogs/prometheus/prometheus-prometheus-node-exporter-gckzj/prometheus-node-exporter?follow=true: dial tcp: lookup kubeworker-rwva1-prod-14 on 10.0.0.3:53: no such host
➜ cat /etc/hosts | head -n1
10.0.0.111 kubeworker-rwva1-prod-14
ubuntu@kubemaster-rwva1-prod-1:~$ ping kubeworker-rwva1-prod-14
PING kubeworker-rwva1-prod-14 (10.0.0.111) 56(84) bytes of data.
64 bytes from kubeworker-rwva1-prod-14 (10.0.0.111): icmp_seq=1 ttl=64 time=0.275 ms
^C
--- kubeworker-rwva1-prod-14 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
ubuntu@kubemaster-rwva1-prod-1:~$ kubectl logs -f -n prometheus prometheus-prometheus-node-exporter-gckzj
Error from server: Get https://kubeworker-rwva1-prod-14:10250/containerLogs/prometheus/prometheus-prometheus-node-exporter-gckzj/prometheus-node-exporter?follow=true: dial tcp: lookup kubeworker-rwva1-prod-14 on 10.0.0.3:53: no such host
</code></pre>
| <blockquote>
<p>What's going on?</p>
</blockquote>
<p>That name needs to be resolvable from your workstation, because for <code>kubectl logs</code> and <code>kubectl exec</code>, the API sends down the URL for the client to interact <em>directly</em> with the <code>kubelet</code> on the target Node (to ensure that all traffic in the world doesn't travel through the API server).</p>
<p>Thankfully, kubespray has a knob through which you can tell kubernetes to prefer the Node's <code>ExternalIP</code> (or, of course, <code>InternalIP</code> if you prefer): <a href="https://github.com/kubernetes-incubator/kubespray/blob/v2.5.0/roles/kubernetes/master/defaults/main.yml#L82" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/kubespray/blob/v2.5.0/roles/kubernetes/master/defaults/main.yml#L82</a></p>
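<p>As a sketch, in your kubespray <code>group_vars</code> that could look something like this (the variable name is my reading of the defaults file linked above and maps to the API server's <code>--kubelet-preferred-address-types</code> flag, so double-check it against your kubespray version):</p>
<pre><code>kubelet_preferred_address_types: 'InternalIP,Hostname,ExternalDNS,ExternalIP'
</code></pre>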
|
<p>I have minikube installation on my machine for some testing that requires Basic Auth,</p>
<p>How can I enable it for all services I'll create in future?</p>
| <p>If you mean for the Kubernetes API services then that can be enabled in minikube with a CSV file - <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#static-password-file" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/#static-password-file</a> </p>
<p>(though some seem to have found this tricky on Windows <a href="https://github.com/kubernetes/minikube/issues/1898" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/1898</a> ). </p>
<p>If you mean for services you are writing and deploying to Kubernetes then I think that's not something that the Kubernetes platform would currently provide for you out of the box. </p>
<p>It would be open to you to use an authentication feature in an ingress controller (the nginx one for example) or to use Istio or, depending on your use-case, you might well choose to do it in your code. </p>
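<p>For example, with the nginx ingress controller a sketch of per-service basic auth could look like this (the names and host are placeholders):</p>
<pre><code># htpasswd -c auth myuser
# kubectl create secret generic basic-auth --from-file=auth
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: my-service.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>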
|
<p>I want to run a sails.js application on Google Kubernetes Engine. Running the application docker container locally works perfectly.
The deployment to GKE is done via GitLab Auto DevOps pipelines. Deploying the application to GKE works so far and I can access pages of the sails.js application using the domain generated by GitLab CI - e.g. <code>xyz-review-autodevops-123.my.host.com</code></p>
<p>But when accessing pages, only the requested page itself can be loaded; assets like images, JavaScript and CSS files are not loaded and return a 404.</p>
<p>When looking at the nginx-ingress-controller logs in GKE, I see that these url's are requested but result in a 404:</p>
<pre><code>[05/Aug/2018:12:27:25 +0000] "GET /js/cloud.setup.js HTTP/1.1" 404 9 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.84 Safari/537.36" 573 0.004 [my-app-review-autodevops-1232u0-my-app-443] xx.xx.xx.xx:80 9 0.004 404
</code></pre>
<p>There are also errors regularly logged like the following. I am not sure whether this is related to the issue, but I still want to mention it in case it is:</p>
<pre><code>error obtaining PEM from secret my-app-1239989/review-autodevops-1232u0-my-app-tls: error retrieving secret my-app-1239989/review-autodevops-1232u0-my-app-tls: secret my-app-1239989/review-autodevops-1232u0-my-app-tls was not found"
</code></pre>
<p>My guess is that ingress does not know the requested host for the assets because they are requested as absolute paths (<code>/js/cloud.setup.js</code>) instead of relative paths (<code>js/cloud.setup.js</code>), and therefore it does not know where to route these requests, since the domain information is lost.</p>
<p>But I do not know how to fix this 404 issue. Would changing all paths to relative paths in sails.js fix it? I don't even know whether that can easily be done, since quite a few of them are generated by Grunt tasks in sails.js and are produced in the absolute form.</p>
| <p>Ok, I found the error myself.
It was a silly mistake: sails was somehow configured to include <code>grunt</code> and <code>sails-hook-grunt</code> only as devDependencies in <code>package.json</code>.</p>
<p>Since on the production container, <code>NODE_ENV</code> is set as environment variable, <code>npm install</code> only installs the production dependencies.</p>
<p>And since grunt is used for manipulating and providing the assets in sails.js, those have just not been generated.</p>
<p>The solution was to move <code>grunt</code> and <code>sails-hook-grunt</code> to the <code>dependencies</code> in <code>package.json</code>.</p>
|
<p>I've got kubernetes running via docker (Running linux containers on a windows host).
I created a deployment (1 pod with 1 container, simple hello world node app) scaled to 3 replicas.</p>
<p>Scaling to 3 worked just fine, so let's scale to 20: nice, still fine.
So I decided to take it to the extreme and see what happens with 200 replicas (now I know).</p>
<p>CPU is now at 80%, the dashboard won't run, and I can't even issue a PowerShell command to scale the deployment back down.</p>
<p>I've tried restarting docker and seeing if I can sneak in a powershell command as soon as docker and kubernetes are available, and it doesn't seem to be taking.</p>
<p>Are the kubernetes deployment configurations on disk somewhere so I can modify them when kubernetes is down so it definitely picks up the new settings?</p>
<p>If not, is there any other way I can scale down the deployment?</p>
<p>Thanks</p>
| <p><a href="https://github.com/docker/for-mac/issues/2536" rel="nofollow noreferrer">https://github.com/docker/for-mac/issues/2536</a> is a useful thread on this as it gives tips for getting logs, increasing resources or if necessary doing a factory reset (as discussed in the comments)</p>
|
<p>I have a docker container that is running fine when I run it using docker run. I am trying to put that container inside a pod but I am facing issues. The first run of the pod shows status as "Completed". And then the pod keeps restarting with CrashLoopBackoff status. The exit code however is 0. </p>
<p>Here is the result of kubectl describe pod :</p>
<pre><code>Name: messagingclientuiui-6bf95598db-5znfh
Namespace: mgmt
Node: db1mgr0deploy01/172.16.32.68
Start Time: Fri, 03 Aug 2018 09:46:20 -0400
Labels: app=messagingclientuiui
pod-template-hash=2695115486
Annotations: <none>
Status: Running
IP: 10.244.0.7
Controlled By: ReplicaSet/messagingclientuiui-6bf95598db
Containers:
messagingclientuiui:
Container ID: docker://a41db3bcb584582e9eacf26b02c7ef26f57c2d43b813f44e4fd1ba63347d3fc3
Image: 172.32.1.4/messagingclientuiui:667-I20180802-0202
Image ID: docker-pullable://172.32.1.4/messagingclientuiui@sha256:89a002448660e25492bed1956cfb8fff447569e80ac8b7f7e0fa4d44e8abee82
Port: 9087/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 03 Aug 2018 09:50:06 -0400
Finished: Fri, 03 Aug 2018 09:50:16 -0400
Ready: False
Restart Count: 5
Environment Variables from:
mesg-config ConfigMap Optional: false
Environment: <none>
Mounts:
/docker-mount from messuimount (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2pthw (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
messuimount:
Type: HostPath (bare host directory volume)
Path: /mon/monitoring-messui/docker-mount
HostPathType:
default-token-2pthw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-2pthw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m default-scheduler Successfully assigned messagingclientuiui-6bf95598db-5znfh to db1mgr0deploy01
Normal SuccessfulMountVolume 4m kubelet, db1mgr0deploy01 MountVolume.SetUp succeeded for volume "messuimount"
Normal SuccessfulMountVolume 4m kubelet, db1mgr0deploy01 MountVolume.SetUp succeeded for volume "default-token-2pthw"
Normal Pulled 2m (x5 over 4m) kubelet, db1mgr0deploy01 Container image "172.32.1.4/messagingclientuiui:667-I20180802-0202" already present on machine
Normal Created 2m (x5 over 4m) kubelet, db1mgr0deploy01 Created container
Normal Started 2m (x5 over 4m) kubelet, db1mgr0deploy01 Started container
Warning BackOff 1m (x8 over 4m) kubelet, db1mgr0deploy01 Back-off restarting failed container
</code></pre>
<p>kubectl get pods</p>
<pre><code> NAME READY STATUS RESTARTS AGE
messagingclientuiui-6bf95598db-5znfh 0/1 CrashLoopBackOff 9 23m
</code></pre>
<p>I am assuming we need a loop to keep the container running in this case. But I don't understand why it works when run with docker and not when it is inside a pod. Shouldn't it behave the same?</p>
<p>How do we generally debug CrashLoopBackOff status apart from running kubectl describe pod and kubectl logs?</p>
| <p>The container would terminate with exit code 0 if there isn't at least one process running in the background. To keep the container running, add these to the deployment configuration:</p>
<pre><code> command: ["sh"]
stdin: true
</code></pre>
<p>Replace <code>sh</code> with <code>bash</code> on any other shell that the image may have.</p>
<p>Then you can drop inside the container with <code>exec</code>:</p>
<pre><code> kubectl exec -it <pod-name> sh
</code></pre>
<p>Add <code>-c <container-name></code> argument if the pod has more than one container.</p>
|
<p>I followed "<a href="https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html" rel="nofollow noreferrer">https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html</a>" to create an EKS cluster using terraform. </p>
<p>I was able to create a config map successfully but i am unable to get the node details - </p>
<pre><code>$ ./kubectl_1.10.3_darwin get nodes
No resources found.
</code></pre>
<p>Service details - </p>
<pre><code>$ ./kubectl_1.10.3_darwin get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 2h
</code></pre>
<p>Kubectl logs on nodes - </p>
<pre><code>Aug 5 09:14:32 ip-172-31-18-205 kubelet: I0805 09:14:32.617738 25463 aws.go:1026] Building AWS cloudprovider
Aug 5 09:14:32 ip-172-31-18-205 kubelet: I0805 09:14:32.618168 25463 aws.go:988] Zone not specified in configuration file; querying AWS metadata service
Aug 5 09:14:32 ip-172-31-18-205 kubelet: E0805 09:14:32.794914 25463 tags.go:94] Tag "KubernetesCluster" nor "kubernetes.io/cluster/..." not found; Kubernetes may behave unexpectedly.
Aug 5 09:14:32 ip-172-31-18-205 kubelet: F0805 09:14:32.795622 25463 server.go:233] failed to run Kubelet: could not init cloud provider "aws": AWS cloud failed to find ClusterID
Aug 5 09:14:32 ip-172-31-18-205 systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Aug 5 09:14:32 ip-172-31-18-205 systemd: Unit kubelet.service entered failed state.
Aug 5 09:14:32 ip-172-31-18-205 systemd: kubelet.service failed.
</code></pre>
<p>AWS getting started documentation doesn't mention any tags related information "<a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html</a>".</p>
<p>After a while I found out that I had missed adding resource tags like "kubernetes.io/cluster/*" to my networking resources. </p>
<p>My networking resources are pre-created; I use remote states to fetch the required details. I believe that I can either add tags to them OR create a new VPC environment.</p>
<p>Is there any alternate way to solve this without adding tags or provisioning new resources?</p>
| <p>Make sure you add a similar tag as below to your VPCs, Subnets & ASGs - </p>
<blockquote>
<p>"kubernetes.io/cluster/${CLUSTER_NAME}" = "shared"</p>
</blockquote>
<p><strong>NOTE: The usage of the specific kubernetes.io/cluster/* resource tags below are required for EKS and Kubernetes to discover and manage networking resources.</strong><br>
<strong>NOTE: The usage of the specific kubernetes.io/cluster/* resource tag below is required for EKS and Kubernetes to discover and manage compute resources.</strong> - Terraform docs</p>
<p>I had missed propagating tags using auto-scaling groups on worker nodes. I added below code to ASG terraform module & it started working, at least the nodes were able to connect to the master cluster. You also need to add the tag to VPC & Subnets for EKS and Kubernetes to discover and manage networking resources. </p>
<p>For VPC - </p>
<pre><code>locals {
cluster_tags = {
"kubernetes.io/cluster/${var.project}-${var.env}-cluster" = "shared"
}
}
resource "aws_vpc" "myvpc" {
cidr_block = "${var.vpc_cidr}"
enable_dns_hostnames = true
tags = "${merge(map("Name", format("%s-%s-vpcs", var.project, var.env)), var.default_tags, var.cluster_tags)}"
}
resource "aws_subnet" "private_subnet" {
count = "${length(var.private_subnets)}"
vpc_id = "${aws_vpc.myvpc.id}"
cidr_block = "${var.private_subnets[count.index]}"
availability_zone = "${element(var.azs, count.index)}"
tags = "${merge(map("Name", format("%s-%s-pvt-%s", var.project, var.env, element(var.azs, count.index))), var.default_tags, var.cluster_tags)}"
}
resource "aws_subnet" "public_subnet" {
count = "${length(var.public_subnets)}"
vpc_id = "${aws_vpc.myvpc.id}"
cidr_block = "${var.public_subnets[count.index]}"
availability_zone = "${element(var.azs, count.index)}"
map_public_ip_on_launch = "true"
tags = "${merge(map("Name", format("%s-%s-pub-%s", var.project, var.env, element(var.azs, count.index))), var.default_tags, var.cluster_tags)}"
}
</code></pre>
<p>For ASGs - </p>
<pre><code>resource "aws_autoscaling_group" "asg-node" {
name = "${var.project}-${var.env}-asg-${aws_launch_configuration.lc-node.name}"
vpc_zone_identifier = ["${var.vpc_zone_identifier}"]
min_size = 1
desired_capacity = 1
max_size = 1
target_group_arns = ["${var.target_group_arns}"]
default_cooldown= 100
health_check_grace_period = 100
termination_policies = ["ClosestToNextInstanceHour", "NewestInstance"]
health_check_type="EC2"
depends_on = ["aws_launch_configuration.lc-node"]
launch_configuration = "${aws_launch_configuration.lc-node.name}"
lifecycle {
create_before_destroy = true
}
tags = ["${data.null_data_source.tags.*.outputs}"]
tags = [
{
key = "Name"
value = "${var.project}-${var.env}-asg-eks"
propagate_at_launch = true
},
{
key = "role"
value = "eks-worker"
propagate_at_launch = true
},
{
key = "kubernetes.io/cluster/${var.project}-${var.env}-cluster"
value = "owned"
propagate_at_launch = true
}
]
}
</code></pre>
<p>I was able to deploy a sample application post above changes. </p>
<p>PS - Answering this since the AWS EKS getting started documentation doesn't make these instructions very clear, and people trying to create ASGs manually may run into this issue. This might help others save time. </p>
|
<p>I follow <a href="https://medium.freecodecamp.org/learn-kubernetes-in-under-3-hours-a-detailed-guide-to-orchestrating-containers-114ff420e882" rel="nofollow noreferrer">this</a> tutorial about Kubernetes.</p>
<p>I got to the part which guides me to run:</p>
<pre><code>minikube service sa-frontend-lb
</code></pre>
<p>(I used sudo to run it, because if I don't use sudo it asks me to use sudo).</p>
<p>I get those following errors:</p>
<pre><code>Opening kubernetes service default/sa-frontend-lb in default browser...
No protocol specified
No protocol specified
(firefox:4538): Gtk-WARNING **: 22:07:38.395: cannot open display: :0
/usr/bin/xdg-open: line 881: x-www-browser: command not found
No protocol specified
No protocol specified
(firefox:4633): Gtk-WARNING **: 22:07:39.112: cannot open display: :0
/usr/bin/xdg-open: line 881: iceweasel: command not found
/usr/bin/xdg-open: line 881: seamonkey: command not found
/usr/bin/xdg-open: line 881: mozilla: command not found
No protocol specified
Unable to init server: Could not connect: Connection refused
Failed to parse arguments: Cannot open display:
/usr/bin/xdg-open: line 881: konqueror: command not found
/usr/bin/xdg-open: line 881: chromium: command not found
[4749:4749:0805/220740.485576:ERROR:zygote_host_impl_linux.cc(88)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.
[4757:4757:0805/220740.725100:ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.
/usr/bin/xdg-open: line 881: www-browser: command not found
/usr/bin/xdg-open: line 881: links2: command not found
/usr/bin/xdg-open: line 881: elinks: command not found
/usr/bin/xdg-open: line 881: links: command not found
</code></pre>
<p>I installed chromium and xdg-utils, but neither works.</p>
<p>How can I reach the service to see that it works?</p>
| <p>As one can see, it <em>is</em> attempting to launch a browser, but there are none installed that it recognizes, except for what I would <em>guess</em> is Chrome <em>(since one can see that "chromium" did not work out)</em>, and as the message indicates, it doesn't tolerate running as <code>root</code>.</p>
<p>In that case, what you want is actually:</p>
<pre><code>minikube service --url sa-frontend-lb
</code></pre>
<p>which causes <code>minikube</code> to <em>print</em> the URL rather than attempting to use <a href="https://github.com/pkg/browser/blob/master/browser_linux.go#L4" rel="noreferrer">xdg-open</a> to launch a browser.</p>
|
<p>I came to a very specific case by using Laravel framework as a part of a kubernetes cluster. These are the facts, which have to be known:</p>
<ul>
<li>I've created a Docker container for caching called <code>redis</code></li>
<li>I've created a Docker container for application called <code>application</code></li>
<li>These two work together in a Kubernetes cluster</li>
</ul>
<p>Kubernetes is setting ENV variables in each Docker container. Commonly, one is called <code>{container-name}_PORT</code>. Therefore, Kubernetes has created the ENV variable <code>REDIS_PORT</code> in my <code>application</code> container, which is set to something like that: <code>tcp://{redis-container-ip}:{redis-container-port}</code>.</p>
<p>Laravel sets this ENV variable too, but use it as a standalone port variable like <code>6379</code>. However, in this specific case, Redis does not work in Laravel, because of overwritten <code>REDIS_PORT</code> variable. The framework try to fetch redis on this example host string inside Kubernetes: <code>tcp://redis:tcp://10.7.240.204:6379</code>. Laravel logic behind: <code>{scheme}://{REDIS_HOST}:{REDIS_PORT}</code>. You can see, <code>REDIS_PORT</code> is filled with <code>tcp://10.7.240.204:6379</code>.</p>
<p><strong>What is preferable to solve the issue?</strong></p>
<p>In my opinion, Kubernetes uses the ENV variable for <code>{container-name}_PORT</code> in a wrong way, but I do understand the internal logic behind Kubernetes ENV variables.</p>
<p>At the moment, I have changed my <code>config/database.php</code> configuration in Laravel, but this causes a review of changelogs on every update.</p>
<p><em>Some of other details can be read here: <a href="https://github.com/laravel/framework/issues/24999" rel="nofollow noreferrer">https://github.com/laravel/framework/issues/24999</a></em></p>
| <p>@Florian's <a href="https://github.com/laravel/framework/issues/24999" rel="nofollow noreferrer">reply</a> to himself on github:</p>
<p>My solution was to change the config in <code>config/database.php</code> like that:</p>
<pre><code>'redis' => [
'client' => 'predis',
'default' => [
'scheme' => 'tcp',
'host' => env('REDIS_SERVICE_HOST', env('REDIS_HOST','127.0.0.1')),
'port' => env('REDIS_SERVICE_PORT', env('REDIS_PORT',6379)),
'password' => env('REDIS_PASSWORD', null),
'database' => 0,
],
],
</code></pre>
<p>Now, the config checks first, if the REDIS_SERVICE_HOST and REDIS_SERVICE_PORT are present as ENV variable. This is the case, if you have a container in a docker/kubernetes cluster which is called REDIS.</p>
<p>Advantage of this solution is, that REDIS_SERVICE_HOST returns the IP address of the container, not a hostname. Therefore, there is no dns resolution anymore for this internal connections.</p>
|
<p>I want to pass some values from Kubernetes yaml file to the containers. These values will be read in my Java app using <code>System.getenv("x_slave_host")</code>.
I have this dockerfile: </p>
<pre><code>FROM jetty:9.4
...
ARG slave_host
ENV x_slave_host $slave_host
...
$JETTY_HOME/start.jar -Djetty.port=9090
</code></pre>
<p>The kubernetes yaml file contains this part where I added <code>env</code> section:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: master
spec:
template:
metadata:
labels:
app: master
spec:
volumes:
- name: shared-data
emptyDir: {}
containers:
- name: master
image: xregistry.azurecr.io/Y:latest
ports:
- containerPort: 9090
volumeMounts:
- name: shared-data
mountPath: ~/.X/experiment
- env:
- name: slave_host
value: slavevalue
- name: jupyter
image: xregistry.azurecr.io/X:latest
ports:
- containerPort: 8000
- containerPort: 8888
volumeMounts:
- name: shared-data
mountPath: /var/folder/experiment
imagePullSecrets:
- name: acr-auth
</code></pre>
<p>Locally when I did the same thing using docker compose, it worked using args. This is a snippet:</p>
<pre><code> master:
image: master
build:
context: ./master
args:
- slave_host=slavevalue
ports:
- "9090:9090"
</code></pre>
<p>So now I am trying to do the same thing but in Kubernetes. However, I am getting the following error (deploying it on Azure):</p>
<pre><code> error: error validating "D:\\a\\r1\\a\\_X\\deployment\\kub-deploy.yaml": error validating data: field spec.template.spec.containers[1].name for v1.Container is required; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>In other words, how to rewrite my docker compose file to kubernetes and passing this argument.</p>
<p>Thanks!</p>
| <p>The <code>env</code> section should be added under the container entry itself, not as a separate list item. The stray <code>-</code> before <code>env:</code> in your YAML starts a new, nameless container, which is why validation complains that <code>containers[1].name</code> is required. It should look like this:</p>
<pre><code> containers:
- name: master
env:
- name: slave_host
value: slavevalue
</code></pre>
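<p>Applied to the full manifest from the question, the containers section would then look like this (images and values kept exactly as in the question):</p>
<pre><code>      containers:
      - name: master
        image: xregistry.azurecr.io/Y:latest
        env:
        - name: slave_host
          value: slavevalue
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: shared-data
          mountPath: ~/.X/experiment
      - name: jupyter
        image: xregistry.azurecr.io/X:latest
        ports:
        - containerPort: 8000
        - containerPort: 8888
        volumeMounts:
        - name: shared-data
          mountPath: /var/folder/experiment
</code></pre>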
|
<p>I've an applicaion running inside pod which is created by deployment specification in k8s. As I've NFS clients in all nodes, I can mount volume from host to pod. Here is the mount-vol portion of yml file</p>
<pre><code>spec:
containers:
volumeMounts:
- mountPath: /app/config
name: config-volume
- mountPath: /var/log
name: logs
volumes:
- name: config-volume
hostPath:
path: /nfs/config
- name: logs
hostPath:
path: /nfs/logs/containerlogs
</code></pre>
<p><strong>config-volume</strong>: it mounts the config files from host to pod container.</p>
<p><strong>logs</strong>: it is <strong>not working</strong>. I want to mount the log folder from pod container to host folder. </p>
| <p>You don't mount from the container to the host; a <code>hostPath</code> volume is always mounted from the host into the pod. Anything the container writes under the <code>mountPath</code> ends up in the host directory. Make sure that the path <code>/nfs/logs/containerlogs</code> exists on the host so that your pod can write to it.</p>
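<p>As a quick check (paths taken from the question; the pod name is a placeholder), you can create the directory on the node and verify that writes from inside the container show up there:</p>
<pre><code># on the node
sudo mkdir -p /nfs/logs/containerlogs

# from the cluster: write a file into the mounted path inside the container
kubectl exec <pod-name> -- sh -c 'echo test > /var/log/test.txt'

# back on the node: the file should appear in the hostPath directory
ls /nfs/logs/containerlogs/test.txt
</code></pre>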
|
<p>I'm trying to set up a local cluster using a VM and minikube. From what I've read, it's only meant for local purposes, but I'd like to join a secondary machine, and I'm searching for a way to create the join command and hash.</p>
| <p>You can do this easily if your minikube machine is using VirtualBox. </p>
<ol>
<li><p>Start the minikube:</p>
<pre><code>$ minikube start --vm-driver="virtualbox"
</code></pre></li>
<li><p>Check the versions of kubeadm, kubelet and kubectl in minikube and print join command:</p>
<pre><code>$ kubectl version
$ minikube ssh
$ kubelet --version
$ kubeadm token create --print-join-command
</code></pre></li>
<li><p>Create a new VM in VirtualBox. I've used Vagrant to create Ubuntu 16lts VM for this test. Check that the minikube and the new VM are in the same host-only VM network.
You can use anything that suits you best, but the packages installation procedure would be different for different Linux distributions.</p></li>
<li><p>(On the new VM.) Add repository with Kubernetes:</p>
<pre><code>$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ apt-get update
</code></pre></li>
<li><p>(On the new VM.) Install the same version of kubelet, kubeadm and other tools on the new VM (1.10.0 in my case):</p>
<pre><code>$ apt-get -y install ebtables ethtool docker.io apt-transport-https kubelet=1.10.0-00 kubeadm=1.10.0-00
</code></pre></li>
<li><p>(On the new VM.) Use your join command from step 2. The IP address should be from the VM host-only network; having only NAT networks didn't work well in my case. </p>
<pre><code>$ kubeadm join 192.168.xx.yy:8443 --token asdfasf.laskjflakflsfla --discovery-token-ca-cert-hash sha256:shfkjshkfjhskjfskjdfhksfh...shdfk
</code></pre></li>
<li><p>(On the main host) Add network solution to the cluster:</p>
<pre><code>$ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
</code></pre></li>
<li><p>(On the main host) Check your nodes and pods using kubectl:</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 1h v1.10.0
ubuntu-xenial Ready <none> 36m v1.10.0
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-etcd-982l8 1/1 Running 0 10m 10.0.2.15 minikube
kube-system calico-kube-controllers-79dccdc4cc-66zxm 1/1 Running 0 10m 10.0.2.15 minikube
kube-system calico-node-9sgt5 1/2 Running 13 10m 10.0.2.15 ubuntu-xenial
kube-system calico-node-qtpg2 2/2 Running 0 10m 10.0.2.15 minikube
kube-system etcd-minikube 1/1 Running 0 1h 10.0.2.15 minikube
kube-system heapster-6hmhs 1/1 Running 0 1h 172.17.0.4 minikube
kube-system influxdb-grafana-69s5s 2/2 Running 0 1h 172.17.0.5 minikube
kube-system kube-addon-manager-minikube 1/1 Running 0 1h 10.0.2.15 minikube
kube-system kube-apiserver-minikube 1/1 Running 0 1h 10.0.2.15 minikube
kube-system kube-controller-manager-minikube 1/1 Running 0 1h 10.0.2.15 minikube
kube-system kube-dns-86f4d74b45-tzc4r 3/3 Running 0 1h 172.17.0.2 minikube
kube-system kube-proxy-vl5mq 1/1 Running 0 1h 10.0.2.15 minikube
kube-system kube-proxy-xhv8s 1/1 Running 2 35m 10.0.2.15 ubuntu-xenial
kube-system kube-scheduler-minikube 1/1 Running 0 1h 10.0.2.15 minikube
kube-system kubernetes-dashboard-5498ccf677-7gf4j 1/1 Running 0 1h 172.17.0.3 minikube
kube-system storage-provisioner 1/1 Running 0 1h 10.0.2.15 minikube
</code></pre></li>
</ol>
|
<p>I am aware that the general wisdom when using docker is to move data out of the container and use volumes instead.</p>
<p>But I am trying to create a system that provides a one-click, completely containerized installation method, and saying "you should install mongodb first on your computer" is not an option.</p>
<p>So the current solution is to keep the data itself in the container (MongoDB for example), but I am now looking for ways to back up a snapshot of the container itself every now and then, so that I can instantly spawn up an identical container from scratch when one goes down.</p>
<p>How can I achieve this?</p>
| <p>If you do not want to use volumes, you can save your container at a specific time by using <code>docker commit --change "added some changes to my container" <container_id> repo/image:tag</code></p>
<p>The new image will contain the whole filesystem of your current container.</p>
<p>That being said, not using volumes has several disadvantages:</p>
<p>By default all files created inside a container are stored on a writable container layer. This means that:</p>
<blockquote>
<p>The data <strong>doesn’t persist when that container is no longer running</strong>, and it can be <strong>difficult to get the data out of the container if another process needs it</strong>.
A container’s writable layer is <strong>tightly coupled to the host machine</strong> where the container is running. You can’t easily move the data somewhere else.
Writing into a container’s writable layer requires a <strong>storage driver to manage the filesystem</strong>. The storage driver provides a union filesystem, using the Linux kernel. This <strong>extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem</strong>.</p>
</blockquote>
<p>So your best choice is to use volumes!
Now you can choose between different kinds:</p>
<ul>
<li>bind mounts</li>
<li>named volumes</li>
</ul>
<blockquote>
<p>While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker. Volumes have several advantages over bind mounts:</p>
<p>Volumes are easier to back up or migrate than bind mounts.
You can manage volumes using Docker CLI commands or the Docker API.
Volumes work on both Linux and Windows containers.
Volumes can be more safely shared among multiple containers.
Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
New volumes can have their content pre-populated by a container.</p>
</blockquote>
<p>Read the documentation to understand volume's features better : <a href="https://docs.docker.com/storage/volumes/" rel="nofollow noreferrer">https://docs.docker.com/storage/volumes/</a></p>
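<p>For the MongoDB example from the question, a minimal sketch using a named volume could look like this (container and volume names are just examples):</p>
<pre><code># create a named volume and mount it at MongoDB's default data path
docker volume create mongo-data
docker run -d --name mongodb -v mongo-data:/data/db mongo

# back up the volume by archiving it through a throwaway container
docker run --rm -v mongo-data:/data -v "$(pwd)":/backup alpine \
    tar czf /backup/mongo-data.tar.gz -C /data .
</code></pre>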
|
<p>I am aware that the general wisdom when using docker is to move data out of the container and use volumes instead.</p>
<p>But I am trying to create a system that provides a one-click, completely containerized installation method, and saying "you should install mongodb first on your computer" is not an option.</p>
<p>So the current solution is to keep the data itself in the container (MongoDB for example), but I am now looking for ways to back up a snapshot of the container itself every now and then, so that I can instantly spawn up an identical container from scratch when one goes down.</p>
<p>How can I achieve this?</p>
| <p>You could export and import the docker container easily.</p>
<p>An export whole container in the tar file</p>
<pre><code>docker export docker_container_name > latest.tar
</code></pre>
<p>Import Docker container and run it</p>
<pre><code>cat exampleimage.tar | docker import - exampleimagelocal:new
</code></pre>
<p>Helpful link - <a href="https://docs.docker.com/engine/reference/commandline/export/#usage" rel="nofollow noreferrer">Docker export</a> <a href="https://docs.docker.com/engine/reference/commandline/import/#examples" rel="nofollow noreferrer">Docker Import</a></p>
<p>If you want to export Docker image, then</p>
<pre><code>docker save -o image.tar Docker_Image_Name
</code></pre>
<p>Import Docker Image</p>
<pre><code>docker load -i image.tar
</code></pre>
<p>and then run <code>docker images</code> to verify the image is there.</p>
<p>I think you can achieve what you want with the commands above; mostly you will need the first approach: export the Docker container and import it whenever you need it, and it will contain your old data.</p>
|
<p>I have this <code>NodePort</code> service that exposes two ports:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
type: NodePort
selector:
app: my-service
ports:
- name: nginx-public
port: 443
targetPort: nginx-public
- name: metrics
port: 9200
targetPort: metrics
</code></pre>
<p>The <code>nginx-public</code> port has to be exposed on the node because I'm using it with aws-alb-ingress-controller.</p>
<p>However, the other port for <code>metrics</code> is only used internally (from within the cluster) - it's picked up by prometheus-operator. I need to have it documented in the service spec, but I'd rather not reserve another port on the Node at the same time.</p>
<p>Is there a way to tell Kubernetes to only reserve one node port for this service?</p>
<p>I tried specifying <code>nodePort: null</code> in the port spec like this:</p>
<pre><code> ports:
- name: nginx-public
port: 443
targetPort: nginx-public
- name: metrics
port: 9200
targetPort: metrics
nodePort: null
</code></pre>
<p>but according to <code>describe service</code>, seems like this has the same effect as omitting <code>nodePort</code> altogether, and a random port is still being reserved for <code>metrics</code>.</p>
| <p>You can define two Services for the same pods: one that exposes only the <code>nginx-public</code> port (type <code>NodePort</code>, for the ingress controller) and another that exposes only the <code>metrics</code> port (a plain <code>ClusterIP</code> Service, for Prometheus). Use the labels on the pods and the Services' selectors accordingly.</p>
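<p>A minimal sketch of the two Services (assuming the pods carry the label <code>app: my-service</code>, as in the question's selector):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service-public
spec:
  type: NodePort
  selector:
    app: my-service
  ports:
  - name: nginx-public
    port: 443
    targetPort: nginx-public
---
kind: Service
apiVersion: v1
metadata:
  name: my-service-metrics
spec:
  selector:
    app: my-service
  ports:
  - name: metrics
    port: 9200
    targetPort: metrics
</code></pre>
<p>The second Service defaults to type <code>ClusterIP</code>, so no node port is allocated for the metrics port.</p>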
|
<p>I have multiple instances of Mongo db deployed inside my kubernetes cluster through helm packages.
They are running as a service, in NodePort.
How do I connect to those Mongo db instances through UI tools like MongoDB Compass and RoboMongo from outside the cluster?
Any help is appreciated.</p>
| <p>You can use <code>kubectl port-forward</code> to connect to MongoDB from outside the cluster.</p>
<p>Run <code>kubectl port-forward << name of a mongodb pod >> --namespace << mongodb namespace >> 27018:27018</code>.<br>
Now point your UI tool to <code>localhost:27018</code> and <code>kubectl</code> will forward all connections to the pod inside the cluster.</p>
<p>Starting with Kubernetes 1.10+ you can also use this syntax to connect to a service (you don't have to find a pod name first):<br>
<code>kubectl port-forward svc/<< mongodb service name >> 27018:27018 --namespace << mongodb namespace>></code> </p>
|
<p>I've noticed clock drift on some cluster nodes. Do I need to change the timesyncd service or anything else to avoid synchronization issues?</p>
| <p>Time synchronization is very important: if one of the nodes falls behind, it may lead to a "split brain" situation. </p>
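<p>As a rough sketch (assuming the nodes run systemd-timesyncd), you can check and enable NTP synchronization like this:</p>
<pre><code>timedatectl status              # check whether the clock is reported as synchronized
sudo timedatectl set-ntp true   # enable NTP synchronization via systemd-timesyncd
</code></pre>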
|
<p>Recently I have been researching about microservices and kubernetes. All the tutorial and article I read online talks about general staff. I have several specific questions about building a microservices app on kubernetes.</p>
<ol>
<li><strong>API gateway:</strong> Is API gateway a microservice I built for my app that can automatically scale? Or is it already a built-in function of kubernetes? The reason I ask is because a lot of the articles are saying that load-balancing is part of the API gateway which confuse me since in kubernetes, load-balancing is handled by <code>service</code>. Also, is this the same as the API gateway on AWS, why don't people use the AWS API gateway instead?</li>
<li><strong>Communication within services:</strong> from what I read only, there are <em>Rest/RPC</em> way and <em>Message queue</em> way. But why do people say that the <em>Rest</em> way is for sync operation? Can we build the services and have them communicate with rest api with <code>Nodejs async/await</code> functions? </li>
<li><strong>Service Discovery:</strong> Is this a problem with kubernetes at all? Does kubernetes automatically figure out this for you?</li>
<li><strong>Databases:</strong> What is the best practice to deploy a database? Deploy as a microservice on one of the node? Also, some articles say that each service should talk to a different db. So just separate the tables of one db to several dbs?</li>
</ol>
| <blockquote>
<p>Is API gateway a microservice I built for my app that can
automatically scale? Or is it already a built-in function of
kubernetes?</p>
</blockquote>
<p>Kubernetes does not have its own API-gateway service. It has an Ingress controller, which operates as a reverse proxy and exposes Kubernetes resources to the outside world, and Services, which load-balance traffic between the Pods linked to them.</p>
<p>Also, Kubernetes provides an auto-scaling according to the resources consumed by Pods, memory usage or CPU utilization and some custom metrics. It is called Horizontal Pod Autoscaler, and you can read more about it <a href="https://medium.com/google-cloud/kubernetes-horizontal-pod-scaling-190e95c258f5" rel="nofollow noreferrer">here</a> or in the <a href="https://kubernetes.io/v1.1/docs/user-guide/horizontal-pod-autoscaler.html" rel="nofollow noreferrer">official documentation</a>. </p>
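<p>As an illustration only (names are placeholders), a minimal Horizontal Pod Autoscaler manifest looks roughly like this:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
</code></pre>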
<blockquote>
<p>Service Discovery: Is this a problem with kubernetes at all? Does kubernetes automatically figure out this for you?</p>
</blockquote>
<p>Service Discovery is not a problem in Kubernetes, it has an entity called Services responsible for this. For more information, you can look through the <a href="https://kubernetes.io/docs/tutorials/services/" rel="nofollow noreferrer">link</a>.</p>
<p>Your other questions refer more to the architecture of your application.</p>
|
<p>What I found out so far:</p>
<ul>
<li>A "docker stop" sends a SIGTERM to process ID 1 in the container.</li>
<li>The process ID 1 in the container is the java process running tomcat.*)</li>
<li>Yes, tomcat itself shuts down gracefully, but not do so the servlets.</li>
<li>Servlets get killed after 2 seconds, even if they are in the middle of processing a reguest(!!)</li>
</ul>
<p>*) Side note:
Although our container entrypoint is [ "/opt/tomcat/bin/catalina.sh", "run" ], but
in catalina.sh the java process is started via the bash buildin "exec" command,
and therefore the java process <em>replaces</em> the shell process and hereby becomes the new process id 1.
(I can verify this by exec into the running container and do a "ps aux" in there.)
Btw, I am using tomcat 7.0.88.</p>
<p>I found statements about tomcat doing gracefull shutdown by default (<a href="http://tomcat.10.x6.nabble.com/Graceful-Shutdown-td5020523.html" rel="nofollow noreferrer">http://tomcat.10.x6.nabble.com/Graceful-Shutdown-td5020523.html</a> - "any in-progress connections will complete"), but all I can see is that the SIGTERM which is sent from docker to the java process results in hardly stopping the ongoing execution of a request.</p>
<p>I wrote a little rest servlet to test this behaviour:</p>
<pre><code>import javax.ws.rs.*;
import javax.ws.rs.core.*;
import javax.ws.rs.core.Response.Status;
@Path("/")
public class SlowServerRes
{
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("test1")
public Response test1(@QueryParam("sleep") final int sleepDurationSec)
{
long received = System.currentTimeMillis();
System.out.println("+++++++++++++++++++++ received request at " + received);
for (int i=1; i <= sleepDurationSec; i++) {
System.out.println(" ++++ Sleeping for 1 sec ("+i+")");
try { Thread.sleep(1000); }
catch (InterruptedException e) {
System.out.println(" Sleep was interrupted at second " + i + " ... ignoring/continue sleeping.");
}
}
long finished = System.currentTimeMillis();
String result = "received: " + received + " finished: " + finished;
System.out.println("+++++++++++++++++++++ " + result);
Response response = Response.status(Status.OK).entity(result).build();
return response;
}
}
</code></pre>
<p>After intensive googling I finally came across this posting:
<a href="http://grokbase.com/t/tomcat/users/113nayv5kx/tomcat-6-graceful-shutdown" rel="nofollow noreferrer">http://grokbase.com/t/tomcat/users/113nayv5kx/tomcat-6-graceful-shutdown</a></p>
<p>So the grace period that is given to tomcat is NOT propagated as a grace period for the servlets.
I wonder whether this makes much sense, but it looks like this is the case.
So the only way to give servlets the possibility to properly end their ongoing requests is to change
the "unloadDelay" (<a href="https://tomcat.apache.org/tomcat-7.0-doc/config/context.html" rel="nofollow noreferrer">https://tomcat.apache.org/tomcat-7.0-doc/config/context.html</a>).</p>
<p>However, I did not find the right place in the tomcat config files to define a non-default unloadDelay. In case this matters, my main concern is about jersey servlets (org.glassfish.jersey.servlet.ServletContainer).</p>
<p>Or maybe there are other possibilities, which I don't see by now?</p>
<p>(I added kubernetes to the tags list, because this may be of major concern especially for Kubernetes, as it relocates (docker stop->SIGTERM) containers quite often, just to keep the load balanced.)</p>
| <p>Now I found the answer here: <a href="https://stackoverflow.com/a/11154770/2081279">https://stackoverflow.com/a/11154770/2081279</a></p>
<p>It worked for me under linux with </p>
<pre><code><Context path="/myapp" unloadDelay="10000"/>
</code></pre>
<p>but only with an uppercase "C" in <code>Context</code>.</p>
|
<p>I'm running the leader-elector (v0.5) as a sidecar on three pods, on three different nodes. </p>
<p>Arguments: --election=XXX --http=0.0.0.0:4040</p>
<p>All works well until I kill the leader pod. </p>
<p>Now I get one of the pods into a state where the logs say it switched to the new leader:</p>
<pre><code>kubectl logs -c elector upper-0
I0803 21:08:38.645849 7 leaderelection.go:296] lock is held by upper-1 and has not yet expired
</code></pre>
<p>So that indicates that <strong>upper-1</strong> is now the leader.</p>
<p>But if I do query the HTTP server of upper-0, it returns the old leader:</p>
<pre><code>kubectl exec -it upper-0 bash
root@nso-lsa-upper-0:/# curl http://localhost:4040
{"name":"upper-2"}
</code></pre>
<p>Do I need to do something for the leader-electors HTTP service to update?</p>
| <p>It looks like a bug.<br>
There is an open issue on GitHub: <a href="https://github.com/kubernetes/contrib/issues/2930" rel="nofollow noreferrer">How is this possible? #2930</a></p>
|
<p>I tried running the official metricbeat docker image as described here (<a href="https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-kubernetes.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-kubernetes.html</a>) on a GCP kubernetes cluster as a deamonset and changed the settings so it should route traffic to the existing elastic search pod, but I keep getting the error:</p>
<pre><code>2018-02-22T14:04:54.515Z WARN transport/tcp.go:36 DNS lookup failure "elasticsearch-logging.kube-system.svc.cluster.local": lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:04:55.516Z ERROR pipeline/output.go:74 Failed to connect: Get http://elasticsearch-logging.kube-system.svc.cluster.local:9200: lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:04:55.517Z WARN transport/tcp.go:36 DNS lookup failure "elasticsearch-logging.kube-system.svc.cluster.local": lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:04:57.517Z ERROR pipeline/output.go:74 Failed to connect: Get http://elasticsearch-logging.kube-system.svc.cluster.local:9200: lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:04:57.519Z WARN transport/tcp.go:36 DNS lookup failure "elasticsearch-logging.kube-system.svc.cluster.local": lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:05:01.519Z ERROR pipeline/output.go:74 Failed to connect: Get http://elasticsearch-logging.kube-system.svc.cluster.local:9200: lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:05:01.532Z WARN transport/tcp.go:36 DNS lookup failure "elasticsearch-logging.kube-system.svc.cluster.local": lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
</code></pre>
<p>The hostname is fine, because other pods are successfully pushing data to elastic. Now, after some research this turns out to be an issue of the Golang DNS resolver (not metricbeat itself). <strong>Anyone else running into this issue? Anyone a solution?</strong> </p>
| <p>We had the same problem and what fixed it was adding this </p>
<pre><code>hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
</code></pre>
<p>in the DaemonSet YAML, at the same level as the <code>containers</code> key (i.e. under <code>spec.template.spec</code>).</p>
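<p>To make the placement explicit, here is a trimmed-down sketch of where those two fields sit (image and other container fields elided):</p>
<pre><code>spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        # ... image, args, volumeMounts as in the official manifest
</code></pre>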
|
<p>I'd like to know what are your best practices (or just your practices) regarding the management of your helm charts versions.</p>
<p>I wonder what is the best way to deal with application versioning, continuous integration/delivery and chart packaging.</p>
<p>Today I have many microservices that live their life. Each one has its own lifecycle and it own versioning in its own git repository.</p>
<p>Beside, we choosed to have one git repository for all our charts.</p>
<p>Now, we have too choices :</p>
<ul>
<li>Each time a microservice changes, a new docker image is built and a new version of the chart is created too (with just the tag(s) of the docker image(s) that change in the value.yaml file)</li>
<li>Or, even if a microservice changes, we don't create a new version of the chart. The default value of the docker tag in the chart is set to "default" and when we want to upgrade the chart we have to use <code>--set image.tag=vx.x.x</code> flag.</li>
</ul>
<p>The benefit of the first approach for the "ops" point of view, is that at any time we know what version of each chart (and each docker image) are running on the cluster. The drawback is that at a certain time we will have many many versions of each charts with just a docker tag version that changed.</p>
<p>On the other side, the benefit of the second approach is that the only thing that makes the chart version to change is a modification of the chart code itself, not an application change. It reduces drastically the "uselessly" high version numbers of each chart. The drawback is that we have to override the docker tags at the installation/upgrade time and we lost the observability of what versions are running on the cluster (usefull in case of Disaster Recovery Plan).</p>
<p>So, what are your practices? Perhaps an hybrid approach?</p>
<p>Thank you for your help</p>
| <p>I think this is a choice that comes down to the needs of your project. An interesting comparison is the current versioning strategy of the public charts in the Kubernetes charts repo and the current default versioning strategy of Jenkins-X. </p>
<p>The public charts only get bumped when a change is made to the chart. This could be to bump the version of the default image tag that it points to but each time it is an explicit action requiring a pr and review and a decision on whether it is a major, minor or fix version bump. </p>
<p>In a Jenkins-X cluster the default behaviour is that when you make a change to the code of one of your microservices then its chart version is automatically bumped whether or not the chart itself changes. The chart in the source repo refers to a snapshot but it is auto deployed under an explicit version and that version gets referenced in the environments it is deployed to via a pipeline. The chart refers to a draft/dev tag of the image in the source and that's also automatically replaced with an explicit version during the flow. </p>
<p>The key difference I think is that Jenkins-X is driven towards a highly automated CI/CD flow with particular environments in the flows. Its approach makes sense for handling frequent deployment of changes. The public charts are aimed at reusability and giving a stable experience across a hugely wide range of environments and situations through public contributions. So the strategy there is more aimed at visibility and ease of understanding for changes that you'd expect to be less frequent by comparison.</p>
|
<p>I'm running the leader-elector (v0.5) as a sidecar on three pods, on three different nodes. </p>
<p>Arguments: --election=XXX --http=0.0.0.0:4040</p>
<p>All works well until I kill the leader pod. </p>
<p>Now I get one of the pods into a state where the logs say it switched to the new leader:</p>
<pre><code>kubectl logs -c elector upper-0
I0803 21:08:38.645849 7 leaderelection.go:296] lock is held by upper-1 and has not yet expired
</code></pre>
<p>So that indicates that <strong>upper-1</strong> is now the leader.</p>
<p>But if I do query the HTTP server of upper-0, it returns the old leader:</p>
<pre><code>kubectl exec -it upper-0 bash
root@nso-lsa-upper-0:/# curl http://localhost:4040
{"name":"upper-2"}
</code></pre>
<p>Do I need to do something for the leader-electors HTTP service to update?</p>
| <p>Yes, bug. I've uploaded a fixed container here: <a href="https://hub.docker.com/r/fredrikjanssonse/leader-elector/tags/" rel="nofollow noreferrer">https://hub.docker.com/r/fredrikjanssonse/leader-elector/tags/</a></p>
|
<p>I have created basic helm template using <code>helm create</code> command. While checking the template for Ingress its adding the string RELEASE-NAME and appname like this <code>RELEASE-NAME-microapp</code></p>
<p>How can I change <code>.Release.Name</code> value?</p>
<pre><code>helm template --kube-version 1.11.1 microapp/
# Source: microapp/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: RELEASE-NAME-microapp
labels:
app: microapp
chart: microapp-0.1.0
release: RELEASE-NAME
heritage: Tiller
annotations:
kubernetes.io/ingress.class: nginx
</code></pre>
| <p>This depends on what version of Helm you have; <code>helm version</code> can tell you this.</p>
<p>In Helm version 2, it's the value of the <code>helm install --name</code> parameter, or absent this, a name Helm chooses itself. If you're checking what might be generated via <code>helm template</code> that also takes a <code>--name</code> parameter.</p>
<p>In Helm version 3, it's the first parameter to the <code>helm install</code> command. Helm won't generate a name automatically unless you explicitly ask it to <code>helm install --generate-name</code>. <code>helm template</code> also takes the same options.</p>
<p>Also, in helm 3, if you want to specify a name explicitly, you should use the <code>--name-template</code> flag. e.g. <code>helm template --name-template=dummy</code> in order to use the name <code>dummy</code> instead of <code>RELEASE-NAME</code></p>
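<p>For example, using the chart directory from the question:</p>
<pre><code># Helm 2
helm template --name dummy microapp/

# Helm 3
helm template dummy microapp/
# or
helm template --name-template=dummy microapp/
</code></pre>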
|
<p>I just read the documentation of istio 1.0.0, especially its concept. There is one thing that I am trying to understand, especially the existence of <a href="https://istio.io/docs/concepts/traffic-management/#rule-configuration" rel="noreferrer"><code>DestinationRule</code></a>. So, before using Istio, the only way to expose pods is through Kubernetes's <code>Service</code> object. Now, using Istio, there are <code>DestinationRule</code> and <code>VirtualService</code>. </p>
<p>I understand that in Kubernetes's service, we can define what pod's label should the <code>service</code> routes the traffic. In istio, we also capable of do that by using <code>DestionationRule</code>'s <code>spec.subsets.label</code> field. What happen if we have <code>Service</code> and <code>DestinationRule</code> object in the same namespace? Does it conflicted each other?</p>
| <p>They complement each other. You still have to define a Kubernetes service, but the Istio <code>DestinationRules</code> will allow you to refine "subsets" in that service, via labels, and then route traffic intelligently between subsets used in a <code>VirtualService</code> object. You can still see the Kubernetes Service as the global entry point, but Istio will take the routing a step further by letting you declaring "versions" that may point to different deployments.</p>
<p>See in the istio docs ( <a href="https://istio.io/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">https://istio.io/docs/reference/config/networking/virtual-service/</a> ) how the <code>VirtualService</code> definition relates to the subsets declared in <code>DestinationRules</code>.</p>
<p>The labels that you can see in subsets have to match labels set on your deployments/pods.</p>
|
| <p>The issue is that I would like to persist a single status file (status generated by the service), not the whole directory, for one of my services, so that the status is not lost when the service restarts. How can I solve this?</p>
| <p>If it's just a status file, you should be able to write it into a config map. See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="nofollow noreferrer">Add ConfigMap data to a Volume</a>. If in volumes you have</p>
<pre><code>volumes:
- name: status
configMap:
name: status
defaultMode: 420
optional: true
</code></pre>
<p>and in volumeMounts</p>
<pre><code>volumeMounts:
- name: status
mountPath: /var/service/status
</code></pre>
<p>then you should be able to write in it. See also how kube-dns does it with the <code>kube-dns-config</code> mount from <code>kube-dns</code> config-map.</p>
|
<p>I need that tomcat is registered with my Node IP and a port.</p>
<p>My question is:</p>
<p>At the moment that i run the command:</p>
<pre><code>kubectl run tomcat-pod --image=tomcat --port=80 --labels="name=tomcat-pod"
</code></pre>
<p>In this moment the tomcat is running.</p>
<p>Then I believe that exposing like a service my tomcat with NodePort type, It will change my IP registration, because i have understanded that my server is registered with the command run?</p>
<p>Or what is the correct way to register my app with the Node machine using the tomcat in the container?</p>
<p>Thanks.</p>
<p>Regards.</p>
| <p>To achieve your goal and make <code>Tomcat</code> deployment available on the Node machine, consider using <code>Service</code> type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> to expose Tomcat application server on the <code>Node IP</code> address.</p>
<p>Create the manifest file for the <code>Tomcat</code> application server, first removing the previous Tomcat deployment:</p>
<pre><code>kubectl delete deployment tomcat-pod
</code></pre>
<p>Manifest file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcat-pod
spec:
selector:
matchLabels:
run: tomcat-pod
replicas: 1
template:
metadata:
labels:
run: tomcat-pod
spec:
containers:
- name: tomcat
image: tomcat:latest
ports:
- containerPort: 8080
</code></pre>
<p>Create deployment for <code>Tomcat</code> in your K8s cluster:</p>
<pre><code>kubectl apply -f manifest_file.yaml
</code></pre>
<p>Compose service exposing your <code>Tomcat</code> container port (by default 8080):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: tomcat-pod
labels:
run: tomcat-pod
spec:
type: NodePort
ports:
- port: 8080
targetPort: 8080
selector:
run: tomcat-pod
</code></pre>
<p>Create service:</p>
<pre><code>kubectl apply -f manifest_file.yaml
</code></pre>
<p>Check your created service properties: <code>kubectl describe service tomcat-pod</code></p>
<pre><code>Name: tomcat-pod
Namespace: default
Labels: run=tomcat-pod
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"run":"tomcat-pod"},"name":"tomcat-pod","namespace":"default"},"spec":{"port...
Selector: run=tomcat-pod
Type: NodePort
IP: XXX.XX.XX.XX
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30218/TCP
Endpoints: XXX.XX.XX.XX:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>Now you can reach your Tomcat application server via Node IP address.</p>
<p>Be aware that the <code>NodePort</code> is randomly selected from the default range 30000-32767; the chosen port is unique per Service and is opened on every Node in the cluster.</p>
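<p>Using the NodePort from the <code>describe</code> output above, you can then verify the setup with, for example:</p>
<pre><code>curl http://<node-ip>:30218
</code></pre>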
|
<p>I am trying to enable the rate-limit for my istio enabled service. But it doesn't work. How do I debug if my configuration is correct?</p>
<pre><code>apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
name: handler
namespace: istio-system
spec:
quotas:
- name: requestcount.quota.istio-system
maxAmount: 5
validDuration: 1s
overrides:
- dimensions:
engine: myEngineValue
maxAmount: 5
validDuration: 1s
---
apiVersion: config.istio.io/v1alpha2
kind: quota
metadata:
name: requestcount
namespace: istio-system
spec:
dimensions:
source: request.headers["x-forwarded-for"] | "unknown"
destination: destination.labels["app"] | destination.service | "unknown"
destinationVersion: destination.labels["version"] | "unknown"
engine: destination.labels["engine"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
name: request-count
namespace: istio-system
spec:
rules:
- quotas:
- charge: 1
quota: requestcount
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
name: request-count
namespace: istio-system
spec:
quotaSpecs:
- name: request-count
namespace: istio-system
services:
# - service: '*' ; I tried with this as well
- name: my-service
namespace: default
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
name: quota
namespace: istio-system
spec:
actions:
- handler: handler.memquota
instances:
- requestcount.quota
</code></pre>
<p>I tried with <code>- service: '*'</code> as well in the <code>QuotaSpecBinding</code>; but no luck.</p>
<p>How, do I confirm if my configuration was correct? the <code>my-service</code> is the kubernetes service for my deployment. (Does this have to be a VirtualService of istio for rate limits to work? <strong>Edit: Yes, it has to!</strong>)</p>
<p>I followed <a href="https://istio.io/docs/tasks/policy-enforcement/rate-limiting/" rel="nofollow noreferrer">this doc</a> except the VirtualService part.</p>
<p>I have a feeling somewhere in the namespaces I am doing a mistake.</p>
| <p>You have to define the virtual service for the service <code>my-service</code>:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
</code></pre>
<p>This way, Istio knows which service/host you are referring to.</p>
<p>In terms of debugging, I know that there is a project named <a href="http://kiali.io" rel="nofollow noreferrer">Kiali</a> that aims to leverage observability in Istio environments. I know that they have validations for some Istio and Kubernetes objects: <a href="https://www.kiali.io/features/istio-configuration/" rel="nofollow noreferrer">Istio configuration browse</a>. </p>
|
<p>We have a Cassandra cluster running in my Kubernetes cluster, and we are planning to migrate it from GCP to AWS. How can I restore my Cassandra keyspaces and snapshots? From the following link, I got an idea of how to take a backup:
<a href="https://8kmiles.com/blog/cassandra-backup-and-restore-methods/" rel="nofollow noreferrer">https://8kmiles.com/blog/cassandra-backup-and-restore-methods/</a></p>
<p>But the docs only cover the snapshot backup case and say nothing about keyspaces. How can I back up my keyspaces and snapshots, and how can I restore them? Our Cassandra runs on Kubernetes in both GCP and AWS. </p>
| <p>I think you have two options here:</p>
<ol>
<li>a backup/restore approach as you stated. Be aware that <code>nodetool snapshot</code> copies your data and exports the schema of each table in a <code>schema.cql</code> file. You will need to run a <code>describe keyspace</code> command on the existing cluster and replay its output on the new cluster in order to create the keyspace, since nodetool does not do this (see the sketch after this list). This <a href="https://stackoverflow.com/questions/51650770/nodetool-snapshot-takes-schema-snapshot-backup-too">answer</a> has some details regarding this matter.</li>
<li>Add a <a href="https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html" rel="nofollow noreferrer">new DC</a> and <a href="https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsDecomissionDC.html" rel="nofollow noreferrer">decommission</a> the old one, after the data is migrated. Here you must have the same cassandra version.</li>
</ol>
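<p>As a rough sketch of the backup/restore commands for option 1 (the keyspace name is just an example):</p>
<pre><code># on the existing cluster: snapshot the data and export the keyspace schema
nodetool snapshot -t pre-migration my_keyspace
cqlsh -e "DESCRIBE KEYSPACE my_keyspace" > my_keyspace.cql

# on the new cluster: recreate the keyspace, then restore the snapshot files
# (e.g. with sstableloader or by copying them into the table directories)
cqlsh -f my_keyspace.cql
</code></pre>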
|
<p>I'm running into DNS issues on a GKE 1.10 kubernetes cluster. Occasionally pods start without any network connectivity. Restarting the pod tends to fix the issue.</p>
<p>Here's the result of the same few commands inside a container without network, and one with.</p>
<h2>BROKEN:</h2>
<pre><code>kc exec -it -n iotest app1-b67598997-p9lqk -c userapp sh
/app $ nslookup www.google.com
nslookup: can't resolve '(null)': Name does not resolve
/app $ cat /etc/resolv.conf
nameserver 10.63.240.10
search iotest.svc.cluster.local svc.cluster.local cluster.local c.myproj.internal google.internal
options ndots:5
/app $ curl -I 10.63.240.10
curl: (7) Failed to connect to 10.63.240.10 port 80: Connection refused
/app $ netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 1/python
tcp 0 0 ::1:50051 :::* LISTEN 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 :::* LISTEN 1/python
</code></pre>
<h2>WORKING:</h2>
<pre><code>kc exec -it -n iotest app1-7d985bfd7b-h5dbr -c userapp sh
/app $ nslookup www.google.com
nslookup: can't resolve '(null)': Name does not resolve
Name: www.google.com
Address 1: 74.125.206.147 wk-in-f147.1e100.net
Address 2: 74.125.206.105 wk-in-f105.1e100.net
Address 3: 74.125.206.99 wk-in-f99.1e100.net
Address 4: 74.125.206.104 wk-in-f104.1e100.net
Address 5: 74.125.206.106 wk-in-f106.1e100.net
Address 6: 74.125.206.103 wk-in-f103.1e100.net
Address 7: 2a00:1450:400c:c04::68 wk-in-x68.1e100.net
/app $ cat /etc/resolv.conf
nameserver 10.63.240.10
search iotest.svc.cluster.local svc.cluster.local cluster.local c.myproj.internal google.internal
options ndots:5
/app $ curl -I 10.63.240.10
HTTP/1.1 404 Not Found
date: Sun, 29 Jul 2018 15:13:47 GMT
server: envoy
content-length: 0
/app $ netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:15000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:15001 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 1/python
tcp 0 0 10.60.2.6:56508 10.60.48.22:9091 ESTABLISHED -
tcp 0 0 127.0.0.1:57768 127.0.0.1:50051 ESTABLISHED -
tcp 0 0 10.60.2.6:43334 10.63.255.44:15011 ESTABLISHED -
tcp 0 0 10.60.2.6:15001 10.60.45.26:57160 ESTABLISHED -
tcp 0 0 10.60.2.6:48946 10.60.45.28:9091 ESTABLISHED -
tcp 0 0 127.0.0.1:49804 127.0.0.1:50051 ESTABLISHED -
tcp 0 0 ::1:50051 :::* LISTEN 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 :::* LISTEN 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 ::ffff:127.0.0.1:49804 ESTABLISHED 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 ::ffff:127.0.0.1:57768 ESTABLISHED 1/python
</code></pre>
<p>These pods are identical, just one was restarted. </p>
<p>Does anyone have advice about how to analyse and fix this issue?</p>
| <p>I also faced the same problem, and I simply worked around it for now by switching to the 1.9.x GKE version (after spending many hours trying to debug why my app wasn't working).</p>
<p>Hope this helps!</p>
|
<p>I'm using the below manifest to deploy postgresql on kubernetes within NFS persistent volume:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs2
spec:
capacity:
storage: 6Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
server: 10.139.82.123
path: /nfsfileshare/postgres
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nfs2
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 6Gi
---
apiVersion: v1
kind: Service
metadata:
name: db
labels:
app: aiflow-db
spec:
selector:
app: aiflow-db
clusterIP: None
ports:
- port: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: db
namespace: test-aiflow
labels:
app: aiflow-db
spec:
selector:
matchLabels:
app: aiflow-db
template:
metadata:
labels:
app: aiflow-db
spec:
containers:
- name: db
image: postgresql:10
ports:
- containerPort: 5432
env:
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
volumeMounts:
- mountPath: /var/lib/postgresql/data/pgdata
name: nfs2
volumes:
- name: nfs2
persistentVolumeClaim:
claimName: nfs2
restartPolicy: Always
</code></pre>
<p>The pg data can be mounted to nfs server (<code>/nfsfileshare/postgres *(rw,async,no_subtree_check,no_root_squash)</code>):</p>
<pre><code>total 124
drwx------ 19 999 root 4096 Aug 7 11:10 ./
drwxrwxrwx 5 root root 4096 Aug 7 10:28 ../
drwx------ 3 999 docker 4096 Aug 7 11:02 base/
drwx------ 2 999 docker 4096 Aug 7 11:10 global/
drwx------ 2 999 docker 4096 Aug 7 11:01 pg_commit_ts/
drwx------ 2 999 docker 4096 Aug 7 11:01 pg_dynshmem/
-rw------- 1 999 docker 4513 Aug 7 11:02 pg_hba.conf
-rw------- 1 999 docker 1636 Aug 7 11:02 pg_ident.conf
drwx------ 4 999 docker 4096 Aug 7 11:09 pg_logical/
drwx------ 4 999 docker 4096 Aug 7 11:01 pg_multixact/
drwx------ 2 999 docker 4096 Aug 7 11:10 pg_notify/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_replslot/
drwx------ 2 999 docker 4096 Aug 7 11:01 pg_serial/
drwx------ 2 999 docker 4096 Aug 7 11:01 pg_snapshots/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_stat/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_stat_tmp/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_subtrans/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_tblspc/
drwx------ 2 999 docker 4096 Aug 7 11:01 pg_twophase/
-rw------- 1 999 docker 3 Aug 7 11:02 PG_VERSION
drwx------ 3 999 docker 4096 Aug 7 11:02 pg_wal/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_xact/
-rw------- 1 999 docker 88 Aug 7 11:02 postgresql.auto.conf
-rw------- 1 999 docker 22729 Aug 7 11:02 postgresql.conf
-rw------- 1 999 docker 74 Aug 7 11:10 postmaster.pid
</code></pre>
<p>However the container is stuck with below log:</p>
<blockquote>
<p>The files belonging to this database system will be owned by user
"postgres". This user must also own the server process.</p>
<p>The database cluster will be initialized with locale "en_US.utf8". The
default database encoding has accordingly been set to "UTF8". The
default text search configuration will be set to "english".</p>
<p>Data page checksums are disabled.</p>
<p>fixing permissions on existing directory
/var/lib/postgresql/data/pgdata ... ok creating subdirectories ... ok
selecting default max_connections ... 100 selecting default
shared_buffers ... 128MB selecting dynamic shared memory
implementation ... posix creating configuration files ... ok running
bootstrap script ... ok</p>
</blockquote>
<p>Seems it stuck on post-bootstrap initialization.</p>
<p>It works only if I do not use nfs volume (works by using hostPath volume), why is that?</p>
| <p>NFS does not reliably support the fsync kernel VFS call, which the transaction log needs in order to ensure the redo logs are written out to disk. So you should use block storage when you run an RDBMS such as PostgreSQL or MySQL. You might lose data consistency, although you can technically run one on NFS. </p>
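<p>A minimal sketch of a dynamically provisioned block-storage claim you could use instead of the NFS PV (the <code>storageClassName</code> depends on your cluster; <code>standard</code> is only an example):</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 6Gi
</code></pre>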
|
<p>I am stuggling with a simple one replica deployment of the <a href="https://hub.docker.com/r/eventstore/eventstore/" rel="nofollow noreferrer">official event store image</a> on a Kubernetes cluster. I am using a persistent volume for the data storage. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-eventstore
spec:
strategy:
type: Recreate
replicas: 1
template:
metadata:
labels:
app: my-eventstore
spec:
imagePullSecrets:
- name: runner-gitlab-account
containers:
- name: eventstore
image: eventstore/eventstore
env:
- name: EVENTSTORE_DB
value: "/usr/data/eventstore/data"
- name: EVENTSTORE_LOG
value: "/usr/data/eventstore/log"
ports:
- containerPort: 2113
- containerPort: 2114
- containerPort: 1111
- containerPort: 1112
volumeMounts:
- name: eventstore-storage
mountPath: /usr/data/eventstore
volumes:
- name: eventstore-storage
persistentVolumeClaim:
claimName: eventstore-pv-claim
</code></pre>
<p>And this is the yaml for my persistent volume claim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: eventstore-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>The deployments work fine. It's when I tested for durability that I started to encounter a problem. I delete a pod to force actual state from desired state and see how Kubernetes reacts.</p>
<p>It immediately launched a new pod to replace the deleted one. And the admin UI was still showing the same data. But after deleting a pod for the second time, the new pod did not come up. I got an error message that said "record too large" that indicated corrupted data according to this discussion. <a href="https://groups.google.com/forum/#!topic/event-store/gUKLaxZj4gw" rel="nofollow noreferrer">https://groups.google.com/forum/#!topic/event-store/gUKLaxZj4gw</a></p>
<p>I tried again for a couple of times. Same result every time. After deleting the pod for the second time the data is corrupted. This has me worried that an actual failure will cause similar result.</p>
<p>However, when deploying new versions of the image or scaling the pods in the deployment to zero and back to one no data corruption occurs. After several tries everything is fine. Which is odd since that also completely replaces pods (I checked the pod id's and they changed).</p>
<p>This has me wondering if deleting a pod using kubectl delete is somehow more forcefull in the way that a pod is terminated. Do any of you have similar experience? Of insights on if/how delete is different? Thanks in advance for your input.</p>
<p>Regards,</p>
<p>Oskar</p>
| <p>I was referred to this pull request on GitHub, which stated that the process was not killed properly: <a href="https://github.com/EventStore/eventstore-docker/pull/52" rel="nofollow noreferrer">https://github.com/EventStore/eventstore-docker/pull/52</a></p>
<p>After building a new image with the Dockerfile from the pull request and putting this image in the deployment, I am killing pods left and right with no data corruption issues anymore.</p>
<p>Hope this helps someone facing the same issue.</p>
|
<p><a href="https://i.stack.imgur.com/U450c.png" rel="nofollow noreferrer">Image for reference</a></p>
<p>I have 2 questions:</p>
<ol>
<li><p>I have a node on kubernetes cluster, I wanted to know the difference between CPU request, limit on this image. <strong>I know the difference between limit,request on deployment file</strong> but this seems something different</p></li>
<li><p>In image there's pod allocation capacity ( So far I know this is the limit of running pods at a specific moment ). I wanted to know if pending pods are also included in this capacity or not?</p></li>
</ol>
| <p>The values are the total of the values for the 4 pods on the node, shown as a fraction of the total resource available on the node. So e.g. you could have a maximum of 110 pods on the node and you currently have 4. You can use this to get a sense of when your node is nearing being 'full' (i.e. if one of those metrics is almost full then it's likely that kubernetes <a href="http://www.noqcks.io/notes/2018/02/03/understanding-kubernetes-resources/" rel="nofollow noreferrer">won't be able to schedule</a> any more pods to that node). It's the <a href="https://stackoverflow.com/questions/50912066/kubernetes-node-memory-limits">same metrics you get from <code>kubectl describe node <node_name></code></a></p>
|
<p>During the installation of my Kubernetes cluster, I ran into the problem of naming pods. <br>
How can I add the name of the node on which a container was deployed to the container's name?<br></p>
<p>Exemple of my Daemon Set yaml:</p>
<pre><code>....
spec:
template:
spec:
name: CONTAINER-NAME-HOST-NAME
....
</code></pre>
| <p>Naming convention falls under the <a href="https://www.rfc-editor.org/rfc/rfc1123#section-2" rel="nofollow noreferrer">RFC 1123 definition</a> of a DNS label.</p>
<p><code>Starting from v1.4 names must not be longer than 253 characters.</code></p>
<p><code>Name should be expressed by the regular expression: [a-z0-9]([-a-z0-9]*[a-z0-9])?</code></p>
<p><code>Underscore "_" is not allowed.</code></p>
<p>You can see applicable code in GitHub for <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/53a9d106c4aabcd550cc32ae4e8004f32fb0ae7b/pkg/api/validation/validation.go#L280" rel="nofollow noreferrer">checking container names</a>, <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/7f2d0c0f710617ef1f5eec4745b23e0d3f360037/pkg/util/validation.go#L26" rel="nofollow noreferrer">defining acceptable names</a>.</p>
|
<p>I'm new to K8s. While configuring OpenStack Cinder as a K8s StorageClass, I have to add some flags to my kube-controller-manager, and that turned out to be my big problem.
I'm using K8s 1.11 in VMs, and my K8s cluster has a kube-controller-manager pod, but I don't know how to add these flags to it.
After hours of searching, I found that many tasks require adding flags to kube-controller-manager, but no document explains exactly how to do that. Please share the way to get past this. </p>
<p>Thank you.</p>
| <p>You can check <code>/etc/kubernetes/manifests</code> dir on your master nodes.
This dir would contain yaml files for master components. </p>
<p>These are also known as static pods. </p>
<p>More Info : <a href="https://kubernetes.io/docs/tasks/administer-cluster/static-pod/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/static-pod/</a></p>
<p>Update these files and you would be able to see your changes as kubelet should restart the pod on file change.</p>
<p>As a more long term solution, you will need to incorporate the flags to the tooling that you use to generate your k8s cluster.</p>
|
<p>I am able to scrape Prometheus metrics from a Kubernetes service using this Prometheus job configuration:</p>
<pre><code>- job_name: 'prometheus-potapi'
static_configs:
- targets: ['potapi-service.potapi:1234']
</code></pre>
<p>It uses Kubernetes DNS and it gives me the metrics from any of my three pods I use for my service.</p>
<p>I would like to see the result for each pod. </p>
<p>I am able to see the data I want using this configuration:</p>
<pre><code>- job_name: 'prometheus-potapi-pod'
static_configs:
- targets: ['10.1.0.126:1234']
</code></pre>
<p>I have searched and experimented using the service discovery mechanism available in Prometheus. Unfortunately, I don't understand how it should be setup. The <a href="https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml" rel="nofollow noreferrer">service discovery reference</a> isn't really helpful if you don't know how it works.</p>
<p>I am looking for an example where the job using the IP number is replaced with some service discovery mechanism. Specifying the IP was enough for me to see that the data I'm looking for is exposed.</p>
<p>The pods I want to scrape metrics from all live in the same namespace, <code>potapi</code>. </p>
<p>The metrics are always exposed through the same port, <code>1234</code>.</p>
<p>Finally, the are all named like this:</p>
<pre><code>potapi-deployment-754d96f855-lkh4x
potapi-deployment-754d96f855-pslgg
potapi-deployment-754d96f855-z2zj2
</code></pre>
<p>When I do </p>
<pre><code>kubectl describe pod potapi-deployment-754d96f855-pslgg -n potapi
</code></pre>
<p>I get this description:</p>
<pre><code>Name: potapi-deployment-754d96f855-pslgg
Namespace: potapi
Node: docker-for-desktop/192.168.65.3
Start Time: Tue, 07 Aug 2018 14:18:55 +0200
Labels: app=potapi
pod-template-hash=3108529411
Annotations: <none>
Status: Running
IP: 10.1.0.127
Controlled By: ReplicaSet/potapi-deployment-754d96f855
Containers:
potapi:
Container ID: docker://72a0bafbda9b82ddfc580d79488a8e3c480d76a6d17c43d7f7d7ab18458c56ee
Image: potapi-service
Image ID: docker://sha256:d64e94c2dda43c40f641008c122e6664845d73cab109768efa0c3619cb0836bb
Ports: 4567/TCP, 4568/TCP, 1234/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Running
Started: Tue, 07 Aug 2018 14:18:57 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4fttn (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-4fttn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4fttn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>How would you rewrite the job definition given these prerequisites?</p>
| <p>Here they use <a href="https://github.com/prometheus/prometheus/blob/v2.3.2/documentation/examples/prometheus-kubernetes.yml#L253-L256" rel="nofollow noreferrer"><code>example.io/scrape=true</code></a> (and similar annotations for specifying the <a href="https://github.com/prometheus/prometheus/blob/v2.3.2/documentation/examples/prometheus-kubernetes.yml#L266-L269" rel="nofollow noreferrer">scrape port</a> and the <a href="https://github.com/prometheus/prometheus/blob/v2.3.2/documentation/examples/prometheus-kubernetes.yml#L258-L260" rel="nofollow noreferrer">scrape path</a> if it's not <code>/metrics</code>), which is how one achieves the "autodiscovery" part.</p>
<p>If you apply that annotation -- and the relevant config snippets in the Prom config -- to a <code>Service</code>, then Prom will scrape the port and path on the <code>Service</code>, meaning you will have stats for the <code>Service</code> itself, and not the individual Endpoints behind it. Similarly, if you label the <code>Pod</code>s, you will gather metrics for the <code>Pod</code>s but they would need to be rolled up to have a cross-<code>Pod</code> view of the state of affairs. There are multiple different resource types that can be autodiscovered, including <a href="https://prometheus.io/docs/prometheus/2.3/configuration/configuration/#node" rel="nofollow noreferrer">node</a> and <a href="https://prometheus.io/docs/prometheus/2.3/configuration/configuration/#ingress" rel="nofollow noreferrer">ingress</a>, also. They all behave similarly.</p>
<p>Unless you have grave CPU or storage concerns for your Prom instance, I absolutely wouldn't enumerate the scrape targets in the config like that: I would use the scrape annotations, meaning you can change who is scraped, what port, etc. without having to reconfigure Prom each time.</p>
<p>Be aware that if you want to use their example as-is, and you want to apply those annotations from within the kubernetes resource YAML, ensure that you quote the <code>: 'true'</code> value, otherwise YAML will promote that to be a boolean literal, and kubernetes annotations can only be string values.</p>
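<p>For example, a minimal pod template snippet with the quoted annotation might look like this (the <code>example.io/scrape</code> name is taken from their example config, and the label matches your deployment):</p>

<pre><code>template:
  metadata:
    labels:
      app: potapi
    annotations:
      example.io/scrape: "true"   # quoted so YAML keeps it a string
</code></pre>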
<p>Applying the annotations from the command line will work just fine:</p>
<pre><code>kubectl annotate pod -l app=potapi example.io/scrape=true
</code></pre>
<p>(BTW, they use <code>example.io/</code> in their example, but there is nothing special about that string except it namespaces the <code>scrape</code> part to keep it from colliding with something else named <code>scrape</code>. So feel free to use your organization's namespace if you wish to avoid having something weird named <code>example.io/</code> in your cluster)</p>
|
<p>An environment is configured that uses Kubernetes pods to instantiate the master node and slave nodes. Everything works fine except that the workspace is not available in the Jenkins UI when the slave is taken down by Kubernetes (slave offline).
I have tried several solutions (using an NFS) .... but it doesn't work ... Jenkins doesn't want to display the workspace (even if the workspace is at the right NFS location, and on the master). Any pointers on this are really appreciated.</p>
| <p>Jenkins' kubernetes plugin adds and removes slaves on demand, but as far as core Jenkins is concerned, they're permanent slaves which have their own workspaces. Jenkins doesn't know that their workspaces are on NFS, only that the slave is gone and the workspace is inaccessible. That's one of the limitations of ephemeral slaves.</p>
<p>If you want to persist something from the workspace in this environment, the best way would be to create a build artifact so that Jenkins archives it.</p>
<p>You could also avoid using the kubernetes plugin, and spin up several persistent slaves in Kubernetes which are seldom torn down, or give them persistent agent IDs.</p>
|
<p>I have a helm repository set up for my CI/CD pipeline, but the one thing I am having trouble with is helm's versioning system which is focused on a semantic versioning system as in <code>x.x.x</code>. </p>
<p>I want to be able to specify tags like "staging", "latest", and "production", and although I am able to successfully upload charts with string versions</p>
<p><code>NAME CHART VERSION APP VERSION
chartmuseum/myrchart latest 1.0
</code></p>
<p>Any attempt to actually access the chart fails, such as</p>
<p><code>helm inspect chartmuseum/mychart --version=latest</code></p>
<p>Generates the error:</p>
<p><code>Error: failed to download "chartmuseum/mychart" (hint: running 'helm repo update' may help)</code></p>
<p>I don't really want to get into controlled semantic versioning at this point in development, or the mess that is appending hashes to a version. Is there any way to get helm to pull non-semantically tagged chart versions?</p>
| <p>My approach to this, when I do not want to version my chart (and subcharts) semantically either, is not to use a Helm repository at all and just pull the whole chart from git in CI/CD instead. If you are publishing charts to a wider audience this may not suit you, but for our own CI/CD, which is authorized to access our repositories anyway, it works like a charm.</p>
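<p>A minimal sketch of that flow in a CI/CD job, assuming the chart lives under <code>charts/mychart</code> in the repository and the release/values names are placeholders:</p>

<pre><code>git clone https://github.com/your-org/your-repo.git
cd your-repo
helm upgrade --install mychart-staging ./charts/mychart -f values/staging.yaml
</code></pre>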
|
<p>I have a <strong>Deployment</strong> that I'm writing in a <strong>helm</strong> template and am getting an error when including part of a <strong>command</strong> line:</p>
<pre><code>command: ["/cloud_sql_proxy",
printf "-instances=%s=tcp:0.0.0.0:3306" .Values.sqlproxy.instanceName,
"-credential_file=/secrets/cloudsql/credentials.json"]
</code></pre>
<p>on linting the deployment (<code>helm lint .</code>) I get the following error:</p>
<blockquote>
<p>error converting YAML to JSON: yaml: line 25: found unexpected ':'</p>
</blockquote>
<p>If I remove the part of the <strong>command:</strong> <code>=tcp:0.0.0.0:3306</code> the deployment lints fine but I need it :)</p>
<p>Is there a way of escaping the colons?</p>
<p>Or should I rewrite the <strong>command</strong> array?</p>
<p>I've uploaded the complete yaml to a gist: <a href="https://gist.github.com/theGC/f04ba400d2d622a746470c79cee611c4" rel="nofollow noreferrer">sqlproxy-deployment.yaml</a></p>
| <p>You need to wrap template directives in <code>{{ }}</code></p>
<pre><code>"{{ printf "-instances=%s=tcp:0.0.0.0:3306" .Values.sqlproxy.instanceName }}"
</code></pre>
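<p>Applied to the command array from the question, the whole block would then look something like this:</p>

<pre><code>command: ["/cloud_sql_proxy",
          "{{ printf "-instances=%s=tcp:0.0.0.0:3306" .Values.sqlproxy.instanceName }}",
          "-credential_file=/secrets/cloudsql/credentials.json"]
</code></pre>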
|
<p>Hello I tried looking at the auth options in the annotations for kubernetes traefik ingress. I couldn't find anything where I could configure Forward Authentication as documented here: <a href="https://docs.traefik.io/configuration/entrypoints/#forward-authentication" rel="noreferrer">https://docs.traefik.io/configuration/entrypoints/#forward-authentication</a></p>
<p>I would like to be able to configure forward authentication per ingress resource. This is possible in the nginx ingress controller.</p>
<p>Is that supported currently?</p>
| <p>According to the Traefik documentation that feature will be available in version 1.7 of Traefik (currently a release candidate).</p>
<p><a href="https://docs.traefik.io/v1.7/configuration/backends/kubernetes/#authentication" rel="noreferrer">Here is a link to the authentication documentation</a></p>
<p>My guess is that you will need to add the following 2 annotations:</p>
<pre><code>ingress.kubernetes.io/auth-type: forward
ingress.kubernetes.io/auth-url: https://example.com
</code></pre>
<p>and probably also the following annotation with the corresponding header fields your auth service returns as value:</p>
<pre><code>ingress.kubernetes.io/auth-response-headers: X-Auth-User, X-Secret
</code></pre>
|
<p>How can I substring a string like /com/app1/main to get the value of main in a HELM template ? </p>
| <p>You can use the <a href="http://masterminds.github.io/sprig/strings.html" rel="noreferrer">regexFind</a> function in helm</p>
<p><code>regexFind "[^/]+$" "/com/app1/main"</code></p>
<p>Will get the text following the last <code>/</code></p>
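<p>For example, in a template (the <code>appName</code> key is just an illustration):</p>

<pre><code>appName: {{ regexFind "[^/]+$" "/com/app1/main" }}   # renders: appName: main
</code></pre>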
|
<p>When using a PVC to persist the container data, it seems the PVC always overrides the container's directory and the original data in that directory is no longer available. What is the reason?</p>
| <p>This is by design. Kubelet is responsible for preparing the mounts for your container, and they can come from a plethora of different storage backends. At the time of mounting they are empty, and kubelet has no reason to put any content in them.</p>
<p>That said, there are ways to achieve what you seem to expect by using an init container. In your pod you define an init container using your Docker image, mount your volume in it at some path (i.e. <em>/target</em>), but instead of running the regular entrypoint of your container, run something like</p>
<pre><code>cp -r /my/dir/* /target/
</code></pre>
<p>which will initialize your directory with the expected content and exit, allowing further startup of the pod.</p>
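<p>A rough sketch of what that could look like in a pod spec (the image name, paths and PVC name are placeholders):</p>

<pre><code>spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc
  initContainers:
    - name: seed-data
      image: my-app-image            # an image that already contains /my/dir
      command: ["sh", "-c", "cp -r /my/dir/* /target/"]
      volumeMounts:
        - name: data
          mountPath: /target
  containers:
    - name: app
      image: my-app-image
      volumeMounts:
        - name: data
          mountPath: /my/dir         # now starts with the seeded content
</code></pre>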
|
<p><strong><em>Summary:</em></strong>
Jenkins in a K8s minikube works fine and scales well with the default jnlp agent, but gets stuck with "Waiting for agent to connect" in case of a custom jnlp image. </p>
<p><strong><em>Detailed description:</em></strong></p>
<p>I'm running the local minikube with Jenkins setup. </p>
<p><strong>Jenkins master dockerfile:</strong></p>
<pre><code>from jenkins/jenkins:alpine
# Distributed Builds plugins
RUN /usr/local/bin/install-plugins.sh ssh-slaves
# install Notifications and Publishing plugins
RUN /usr/local/bin/install-plugins.sh email-ext
RUN /usr/local/bin/install-plugins.sh mailer
RUN /usr/local/bin/install-plugins.sh slack
# Artifacts
RUN /usr/local/bin/install-plugins.sh htmlpublisher
# UI
RUN /usr/local/bin/install-plugins.sh greenballs
RUN /usr/local/bin/install-plugins.sh simple-theme-plugin
# Scaling
RUN /usr/local/bin/install-plugins.sh kubernetes
# install Maven
USER root
RUN apk update && \
apk upgrade && \
apk add maven
USER jenkins
</code></pre>
<p><strong>Deployment:</strong></p>
<pre><code> apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins
spec:
replicas: 1
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: ybushnev/my-jenkins-image:1.3
env:
- name: JAVA_OPTS
value: -Djenkins.install.runSetupWizard=false
ports:
- name: http-port
containerPort: 8080
- name: jnlp-port
containerPort: 50000
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
emptyDir: {}
</code></pre>
<p><strong>Service:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: jenkins
spec:
type: NodePort
ports:
- port: 8080
name: "http"
targetPort: 8080
- port: 50000
name: "slave"
targetPort: 50000
selector:
app: jenkins
</code></pre>
<p><strong>After deployment I have such services:</strong></p>
<pre><code>Yuris-MBP-2% kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins NodePort 10.108.30.10 <none> 8080:30267/TCP,50000:31588/TCP 1h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1h
</code></pre>
<p><strong>Kubernetes master running on:</strong></p>
<pre><code>Yuris-MBP-2% kubectl cluster-info | grep master
Kubernetes master is running at https://192.168.99.100:8443
</code></pre>
<p><strong>Based on configuration above I specify the cloud config in Jenkins:</strong></p>
<p><a href="https://i.stack.imgur.com/PUxHF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PUxHF.png" alt="enter image description here"></a></p>
<p><strong>And finally I put such configuration for slave pod template:</strong>
<a href="https://i.stack.imgur.com/zloN0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zloN0.png" alt="enter image description here"></a></p>
<p>As a result, via k8s logs I see such logs on the master:</p>
<pre><code>Waiting for agent to connect (41/100): kubernetes-agent-tgskx
Waiting for agent to connect (42/100): kubernetes-agent-tgskx
Waiting for agent to connect (43/100): kubernetes-agent-tgskx
Waiting for agent to connect (44/100): kubernetes-agent-tgskx
Waiting for agent to connect (45/100): kubernetes-agent-tgskx
</code></pre>
<p>Jenkins container seems to be green. No logs in K8s but there are such events happened:</p>
<pre><code>Successfully assigned kubernetes-agent-517tl to minikube
MountVolume.SetUp succeeded for volume "workspace-volume"
MountVolume.SetUp succeeded for volume "default-token-8sgh6"
</code></pre>
<p><strong>IMPORTANT</strong>
If I do not put 'jnlp' as the container name (I guess this is important, because otherwise it takes some default jnlp agent image), the slave spins up and connects to the master just fine. But even when I have a custom docker image in the 'Docker image' field, it is not actually used; I can see that the Jenkins slave doesn't have the tools/files it is supposed to have based on the provided image.
Last time I tried to use this image: "gcr.io/cloud-solutions-images/jenkins-k8s-slave", but for me it fails for any image as soon as I put 'jnlp' as the container template name.
I tried to play with many images with no luck... Will be very glad for any hint! </p>
| <p>Under the "container template", You need to change the name "jnlp" to something else.</p>
<p>Kubernetes plugin will run a sidecar container with the name jnlp for connecting to the master server. If you use the name jnlp for the main container, it will conflict.</p>
|
<p>I am trying to create a user with limited namespace access. I created a namespace named test and also created Group: programmers and User: frontend. I generated credentials for user frontend with the help of the following: <a href="http://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">http://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/</a></p>
<p>I created a role. Here is my role.yml </p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: test
name: frontend-developer
rules:
- apiGroups: ["","extensions","apps"]
resources: ["deployments","replicasets","pods"]
verbs: ["get","list","watch","create","patch"]`
</code></pre>
<p>I created rolebinding. Here is role-binding.yml</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: frontend-deploy
namespace: test
subjects:
- kind: User
name: frontend
namespace: test
roleRef:
kind: Role
name: frontend-developer
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>I am talking my deployment file as</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nodefrontend
namespace: test
spec:
replicas: 3
template:
metadata:
labels:
app: bookstore
spec:
containers:
- name: nodeweb
image: balipalligayathri/devops-develop
ports:
- name: http
containerPort: 3000
protocol: TCP
</code></pre>
<p>I am using following commands while creating the role and rolebinding</p>
<pre><code>$ kubectl create -f role.yml
$ kubectl create -f role-binding.yml
</code></pre>
<p>frontend developer Role and frontend-deploy Rolebindings were created.</p>
<p>Again, I am using the command <code>kubectl create -f node-deployment.yml</code> to create the deployment. The deployment was created and deleted successfully. Here I didn't mention any user while creating the deployment, so I am trying to create the deployment as a user by using the below command.</p>
<pre><code>kubectl create -f node-deployment.yml --as=frontend --context=frontend-context
</code></pre>
<p>I am facing the error like this</p>
<pre><code>Error from server (Forbidden):
<html><head><meta http-equiv='refresh' content='1;url=/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s'/><script>window.location.replace('/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
which is implied by: hudson.security.Permission.GenericRead
which is implied by: hudson.model.Hudson.Administer </body></html>
</code></pre>
<p>My doubt is: is there any necessity to mention the user in <code>deployment.yml</code> file? </p>
| <p>You need to create a <code>ServiceAccount</code> (note that Kubernetes object names must be lowercase); take a look at this snippet:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-account
</code></pre>
<p>bind it to your role:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: my-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: frontend-developer
subjects:
- kind: ServiceAccount
  name: my-account
</code></pre>
<p>and use it in your Deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nodefrontend
namespace: test
spec:
template:
metadata:
labels:
...
spec:
      serviceAccountName: my-account
</code></pre>
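<p>Once the role binding is in place, you can verify what the service account is allowed to do, for example:</p>

<pre><code>kubectl auth can-i create deployments --namespace test --as system:serviceaccount:test:my-account
</code></pre>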
<p>Ref: </p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a></li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a></li>
</ul>
|
<p>I've read this <a href="https://istio.io/blog/2018/egress-https/#tls-origination-by-istio" rel="nofollow noreferrer">article</a> about TLS Origination problem in istio. Let me quote it here:</p>
<blockquote>
<p>There is a caveat to this story. In HTTPS, all the HTTP details (hostname, path, headers etc.) are encrypted, so Istio cannot know the destination domain of the encrypted requests. Well, Istio could know the destination domain by the SNI (Server Name Indication) field. This feature, however, is not yet implemented in Istio. <strong>Therefore, currently Istio cannot perform filtering of HTTPS requests based on the destination domains.</strong></p>
</blockquote>
<p>I want to understand, what does the bold statement really mean? Because, I've tried this:</p>
<ul>
<li><p>Downloaded the
<a href="https://github.com/istio/istio/releases/" rel="nofollow noreferrer">istio-1.0.0</a> here to get
the <code>samples</code> yaml code.</p></li>
<li><p>kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)</p></li>
</ul>
<hr>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: sleep
labels:
app: sleep
spec:
ports:
- port: 80
name: http
selector:
app: sleep
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: sleep
spec:
replicas: 1
template:
metadata:
labels:
app: sleep
spec:
containers:
- name: sleep
image: tutum/curl
command: ["/bin/sleep","infinity"]
imagePullPolicy: IfNotPresent
</code></pre>
<ul>
<li>And apply this <code>ServiceEntry</code></li>
</ul>
<hr>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: cnn
spec:
hosts:
- "*.cnn.com"
ports:
- number: 80
name: http-port
protocol: HTTP
- number: 443
name: https-port
protocol: HTTPS
resolution: NONE
</code></pre>
<ul>
<li>And exec this curl command inside the pod</li>
</ul>
<hr>
<pre><code>export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SOURCE_POD -c sleep -- curl -s -o /dev/null -D - https://edition.cnn.com/politics
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
x-servedByHost: ::ffff:172.17.128.31
access-control-allow-origin: *
cache-control: max-age=60
content-security-policy: default-src 'self' blob: https://*.cnn.com:* http://*.cnn.com:* *.cnn.io:* *.cnn.net:* *.turner.com:* *.turner.io:* *.ugdturner.com:* courageousstudio.com *.vgtf.net:*; script-src 'unsafe-eval' 'unsafe-inline' 'self' *; style-src 'unsafe-inline' 'self' blob: *; child-src 'self' blob: *; frame-src 'self' *; object-src 'self' *; img-src 'self' data: blob: *; media-src 'self' data: blob: *; font-src 'self' data: *; connect-src 'self' *; frame-ancestors 'self' https://*.cnn.com:* http://*.cnn.com https://*.cnn.io:* http://*.cnn.io:* *.turner.com:* courageousstudio.com;
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
Via: 1.1 varnish
Content-Length: 1554561
Accept-Ranges: bytes
Date: Wed, 08 Aug 2018 04:59:07 GMT
Via: 1.1 varnish
Age: 105
Connection: keep-alive
Set-Cookie: countryCode=US; Domain=.cnn.com; Path=/
Set-Cookie: geoData=mountain view|CA|94043|US|NA; Domain=.cnn.com; Path=/
Set-Cookie: tryThing00=3860; Domain=.cnn.com; Path=/; Expires=Mon Jul 01 2019 00:00:00 GMT
Set-Cookie: tryThing01=4349; Domain=.cnn.com; Path=/; Expires=Fri Mar 01 2019 00:00:00 GMT
Set-Cookie: tryThing02=4896; Domain=.cnn.com; Path=/; Expires=Wed Jan 01 2020 00:00:00 GMT
X-Served-By: cache-iad2150-IAD, cache-sin18022-SIN
X-Cache: HIT, MISS
X-Cache-Hits: 1, 0
X-Timer: S1533704347.303019,VS0,VE299
Vary: Accept-Encoding
</code></pre>
<p>As you can see, I can access the edition.cnn.com with HTTPS (ssl) protocol. Am I misunderstand the bold statement meaning?</p>
| <p>The cited blog post is from January 31, 2018, and the statement was correct then. Now (1.0) Istio supports traffic routing by SNI, see <a href="https://istio.io/docs/tasks/traffic-management/egress/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/egress/</a>.</p>
<p>This reminds me to update that blog post, will do it by the end of this week. Sorry for the confusion, thank you for pointing to the issue.</p>
|
<p>I need to allow requests from multiple origins: <code>http://localhost:4200</code>, <code>http://localhost:4242</code>, etc., on nginx-ingress version 1.7.1. But I'm not able to do that for multiple origins, because <code>nginx.ingress.kubernetes.io/cors-allow-credentials: true</code> will not work with <code>nginx.ingress.kubernetes.io/cors-allow-origin: "*"</code>. It causes the browser to generate CORS error. Maybe someone has a solution for avoiding this error?</p>
<p>this is my config </p>
<pre><code> annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS, DELETE"
nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,X-LANG,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,X-Api-Key,X-Device-Id,Access-Control-Allow-Origin"
</code></pre>
<blockquote>
<p>Access to XMLHttpRequest at '<a href="https://stage.site.com/api/session" rel="noreferrer">https://stage.site.com/api/session</a>' from
origin '<a href="http://localhost:4200" rel="noreferrer">http://localhost:4200</a>' has been blocked by CORS policy: The
value of the 'Access-Control-Allow-Origin' header in the response must
not be the wildcard '*' when the request's credentials mode is
'include'. The credentials mode of requests initiated by the
XMLHttpRequest is controlled by the withCredentials attribute.</p>
</blockquote>
| <p>Add the annotation to enable <code>CORS</code>:</p>
<pre><code>nginx.ingress.kubernetes.io/enable-cors: "true"
</code></pre>
<p>Be aware that <strong>the string "*" cannot be used for a resource that supports credentials</strong> (<a href="https://www.w3.org/TR/cors/#resource-requests" rel="noreferrer">https://www.w3.org/TR/cors/#resource-requests</a>). Try an explicit origin (or, if your controller version supports it, a comma-separated list of origins) instead of <code>*</code>.</p>
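<p>A minimal sketch, using one of the origins from your setup (adjust the origin to whatever your clients actually use):</p>

<pre><code>annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/enable-cors: "true"
  nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
  nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200"
</code></pre>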
|
<p>I want to work with Airflow on Kubernetes (GKE).
What is the best way and production-grade to deploy it? (Kubernetes executor or Celery) We also want to work with the Kubernetes operators and run the tasks as pods.</p>
| <p>If you want to use Google Kubernetes Engine (GKE), use <a href="https://cloud.google.com/composer/" rel="nofollow noreferrer">Google Cloud Composer</a> which is a managed version of Apache Airflow on Google that is run on GKE.</p>
|
<p>Is it possible to add some metrics to a Java system that will return the version number as a string for the application that is monitored?</p>
<p>I am aiming for a dashboard where each pod, running a Java application inside a Docker container, in a Kubernetes cluster is monitored and the current version of each Java application is viewed.</p>
<p>If it isn't possible, do you have an idea on how to get that information from the Java application and make it available in a Grafana dashboard?</p>
| <p>In your application, you can make available a Gauge metric that uses labels to export e.g. a version number or commit/build hash and then set the value of the gauge to <code>1</code>.</p>
<p>For example, this is how the <code>redis_exporter</code> exports information about a redis instance:</p>
<pre><code># HELP redis_instance_info Information about the Redis instance
# TYPE redis_instance_info gauge
redis_instance_info{addr="redis://localhost:6379",os="Linux 4.4.0-62-generic x86_64",redis_build_id="687a2a319020fa42",redis_mode="standalone",redis_version="3.0.6",role="master"} 1
</code></pre>
<p>You can see the version and a couple of other attributes exported as labels of the metric <code>redis_instance_info</code>.</p>
|
<p>I already have some services in my k8s cluster and want to mantain them separately. Examples:</p>
<ul>
<li>grafana with custom dashboards and custom dockerfile</li>
<li>prometheus-operator instead of basic prometheus</li>
<li>jaeger pointing to elasticsearch as internal storage</li>
<li>certmanager in my own namespace (also I use it for nginx-ingress legacy routing)</li>
</ul>
<p>Is it possible to use existing instances instead of creating istio-specific ones? Can istio communicate with them or it's hardcoded?</p>
| <p>Kubernetes provides quite a big variety of networking and load-balancing features out of the box. However, the idea of simplifying and extending that functionality with <a href="https://istio.io/docs/setup/kubernetes/sidecar-injection/#automatic-sidecar-injection" rel="nofollow noreferrer">Istio sidecars</a> is a good choice, as they are injected into the Pods in order to proxy the traffic between internal Kubernetes services.</p>
<p>You can implement <code>sidecars</code> manually or automatically. If you choose the manual way, make sure to add the appropriate parameter under Pod's annotation field:</p>
<pre><code>annotations:
sidecar.istio.io/inject: "true"
</code></pre>
<p>Automatic <code>sidecar</code> injection requires the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">Mutating Webhook admission controller</a>, available since Kubernetes 1.9, so <code>sidecars</code> can also be injected automatically during Pod creation.</p>
<p>Get yourself familiar with this <a href="https://medium.com/@timfpark/more-batteries-included-microservices-made-easier-with-istio-on-kubernetes-87c8b76ac2ef" rel="nofollow noreferrer">Article</a> to shed light on using different monitoring and traffic management tools in Istio.</p>
|
<p>I deployed an image to Kubernetes, but it never becomes ready, even after hours. </p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-b8dd974db-9jbsl 0/1 ImagePullBackOff 0 21m
</code></pre>
<p>All this happens with the Quickstart <a href="https://cloud.google.com/kubernetes-engine/docs/quickstart" rel="nofollow noreferrer">Hello app</a>, as well as my own Docker image.</p>
<p>Attempts to attach fail.</p>
<pre><code>$ kubectl attach -it myapp-b8dd974db-9jbs
Unable to use a TTY - container myapp did not allocate one
If you don't see a command prompt, try pressing enter.
error: unable to upgrade connection: container
myapp not found in pod myapp-b8dd974db-9jbsl_default
</code></pre>
<p>Attempts to access it over HTTP fail.</p>
<p>In Stackdriver Logging I see messages like </p>
<pre><code>skipping: failed to "StartContainer" for "myapp"
with ImagePullBackOff: "Back-off pulling image
\"gcr.io/myproject/myapp-image:1.0\""
</code></pre>
<p>and <code>No such image</code></p>
<p>Yet I did deploy these images and the Cloud Console shows that the pods are "green."</p>
<p>And <code>kubectl</code> seems to tell me that the cluster is OK.</p>
<pre><code>$ kubectl get service myapp
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp LoadBalancer 10.43.248.78 35.193.107.141 8222:31840/TCP 29m
</code></pre>
<p>How can I diagnose this?</p>
| <p>You can use <code>kubectl describe pod myapp-b8dd974db-9jbsl</code> to get more information on your pod.</p>
<p>But from the status message 'ImagePullBackOff' it is probably trying to download the docker image and failing.</p>
<p>This might be due to several reasons; you will get more information from <code>kubectl describe</code>, but most likely you either don't have permission to pull from that Docker repository or the image/tag does not exist.</p>
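<p>For example, something along these lines can help confirm whether the image and tag actually exist and are visible to the cluster (the image name is taken from your logs):</p>

<pre><code>kubectl describe pod myapp-b8dd974db-9jbsl
# list the tags that really exist for that image in your registry
gcloud container images list-tags gcr.io/myproject/myapp-image
</code></pre>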
|
<p>I'm running a Kubernetes cluster (GCP) with 10 deployments. Each deployment is configured to auto scale on stress. </p>
<p>From my website statistics, I found that Monday is the day with the most load. I want to define Kubernetes deployment to have more <code>min-replicas</code> on this day.</p>
<p>Is this possible? </p>
<p>I read somewhere that I can run a cronjob script before and after this day and change the minimum number of machines. Is that the current way to do it? Is this safe? what if the cronjob wasn't fired? If this is the current way, please link me for some instruction how to do it. </p>
<p>Thanks!</p>
| <p>You seem to be talking of two things here. </p>
<ol>
<li><p>Pod autoscaling (add more pods when the load on existing pods increases): HPA will help with this. If your workloads show a spike in CPU or memory and can handle horizontal scaling, then HPA would work fine (a minimal HPA manifest is sketched right after this list).
Example: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a></p></li>
<li><p>Now HPA can increase pods only if the cluster has enough nodes to schedule them.
If it is desired to have more nodes with more traffic and reduce them when traffic is low, a cluster autoscaler could be a good option.
<a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler</a></p></li>
</ol>
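<p>A minimal HPA sketch (the names and numbers are placeholders); <code>minReplicas</code> is the field a scheduled job could patch to a higher value on Mondays:</p>

<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 2        # raise this (e.g. via a scheduled kubectl patch) for high-load days
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
</code></pre>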
<p>Of course, the scaling of nodes is not instantaneous, as the autoscaler watches for pods that are in the Pending state due to resource constraints. Only after that does it request additional nodes from the cloud provider, and once these nodes join the cluster, the workloads get scheduled. </p>
|
<p>I have an use case where I need to collect the <strong>downtime of each deployment</strong> (if all the replicas(pods) are down at the same point of time).</p>
<p><strong>My goal is to maintain the total down time for each deployment since it was created.</strong></p>
<p>I tried getting it from <strong>deployment status</strong>, but the problem is that I need to make frequent calls to get the deployment and check for any down time.</p>
<p>Also the <strong>deployment status</strong> stores only the latest change. So, I will end up missing out the changes that occurred in between each call if there is more than one change(i.e., <strong>down time</strong>). Also I will end up making multiple calls for multiple deployments frequently which will consume more compute resource.</p>
<p>Is there any reliable method to collect the <strong>down time data of an deployment</strong>?</p>
<p>Thanks in advance.</p>
| <p>A monitoring tool like Prometheus would be a better solution to handle this.
As an example, below is a graph from one of our deployments for the last 2 days. </p>
<p><a href="https://i.stack.imgur.com/99gPA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/99gPA.png" alt="Deployment Availablity"></a></p>
<p>If you look at the blue line for unavailable replicas, we had one replica unavailable from about 17:00 to 10:30 (ideally unavailable count should be zero)</p>
<p>This seems pretty close to what you are looking for. </p>
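<p>If your cluster exports state via kube-state-metrics, a hedged sketch of the kind of queries behind such a panel (the deployment name is a placeholder):</p>

<pre><code># unavailable replicas for one deployment
kube_deployment_status_replicas_unavailable{deployment="my-deployment"}

# expression that is true while *no* replica is available (i.e. full downtime)
kube_deployment_status_replicas_available{deployment="my-deployment"} == 0
</code></pre>

<p>An alert or recording rule on the second expression lets Prometheus keep the downtime history for you instead of polling the Deployment status yourself.</p>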
|
<p>I have deployed Jenkins on Kubernetes and am trying to configure the nginx ingress for it. </p>
<p>Assume I want it to be available at <a href="https://myip/jenkins" rel="noreferrer">https://myip/jenkins</a></p>
<p>This is my initial ingress configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: jenkins-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
rules:
- http:
paths:
- path: /jenkins
backend:
serviceName: jenkins
servicePort: 8080
</code></pre>
<p>With this when I access <code>https://myip/jenkins</code> I am redirected to <code>http://myip/login?from=%2F</code>.</p>
<p>When accessing <code>https://myip/jenkins/login?from=%2F</code> it stays on that page but none of the static resources are found since they are looked for at <a href="https://myip/static" rel="noreferrer">https://myip/static</a>...</p>
| <p>This is how I solved it configuring the Jenkins image context path without the need to use the ingress rewrite annotations:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: jenkins
name: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: jenkins
spec:
securityContext:
fsGroup: 2000
runAsUser: 1000
runAsNonRoot: true
volumes:
- name: jenkins-storage
persistentVolumeClaim:
claimName: jenkins
containers:
- image: jenkins/jenkins:lts
name: jenkins
ports:
- containerPort: 8080
name: "http-server"
- containerPort: 50000
name: "jnlp"
resources: {}
env:
- name: JENKINS_OPTS
value: --prefix=/jenkins
volumeMounts:
- mountPath: "/var/jenkins_home"
name: jenkins-storage
status: {}
</code></pre>
<p>Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: prfl-apps-devops-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
rules:
- http:
paths:
- path: /jenkins
backend:
serviceName: jenkins
servicePort: 8080
</code></pre>
|
<p>I have a deployment and a service in GKE. I exposed the deployment as a Load Balancer but I cannot access it through the service (curl or browser). I get an:</p>
<pre><code>curl: (7) Failed to connect to <my-Ip-Address> port 443: Connection refused
</code></pre>
<p>I can port forward directly to the pod and it works fine:</p>
<pre><code>kubectl --namespace=redfalcon port-forward web-service-rf-76967f9c68-2zbhm 9999:443 >> /dev/null
curl -k -v --request POST --url https://localhost:9999/auth/login/ --header 'content-type: application/json' --header 'x-profile-key: ' --data '{"email":"<testusername>","password":"<testpassword>"}'
</code></pre>
<p>I have most likely misconfigured my service but cannot see how. Any help on what I did would be very much appreciated.</p>
<p>Service Yaml:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: red-falcon-lb
namespace: redfalcon
spec:
type: LoadBalancer
ports:
- name: https
port: 443
protocol: TCP
selector:
app: web-service-rf
</code></pre>
<p>Deployment YAML</p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: web-service-rf
spec:
selector:
matchLabels:
app: web-service-rf
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: web-service-rf
spec:
initContainers:
- name: certificate-init-container
image: proofpoint/certificate-init-container:0.2.0
imagePullPolicy: Always
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
args:
- "-namespace=$(NAMESPACE)"
- "-pod-name=$(POD_NAME)"
- "-query-k8s"
volumeMounts:
- name: tls
mountPath: /etc/tls
containers:
- name: web-service-rf
image: gcr.io/redfalcon-186521/redfalcon-webserver-minimal:latest
# image: gcr.io/redfalcon-186521/redfalcon-webserver-full:latest
command:
- "./server"
- "--port=443"
imagePullPolicy: Always
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /var/secrets/google/key.json
ports:
- containerPort: 443
resources:
limits:
memory: "500Mi"
cpu: "100m"
volumeMounts:
- mountPath: /etc/tls
name: tls
- mountPath: /var/secrets/google
name: google-cloud-key
volumes:
- name: tls
emptyDir: {}
- name: google-cloud-key
secret:
secretName: pubsub-key
</code></pre>
<p>output: kubectl describe svc red-falcon-lb</p>
<pre><code>Name: red-falcon-lb
Namespace: redfalcon
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"red-falcon-lb","namespace":"redfalcon"},"spec":{"ports":[{"name":"https","port...
Selector: app=web-service-rf
Type: LoadBalancer
IP: 10.43.245.9
LoadBalancer Ingress: <EXTERNAL IP REDACTED>
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31524/TCP
Endpoints: 10.40.0.201:443,10.40.0.202:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 39m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 38m service-controller Ensured load balancer
</code></pre>
| <p>I figured out what it was... </p>
<p>My golang app was listening on localhost instead of 0.0.0.0. This meant that port forwarding on kubectl worked but any service exposure didn't work.</p>
<p>I had to add "--host 0.0.0.0" to my k8s command and it then listened to requests from outside localhost.</p>
<p>My command ended up being...</p>
<p>"./server --port 8080 --host 0.0.0.0"</p>
|
<p>we have a kubernetes cluster running on Centos 7. However all logging is going to /var/log/messages which is making centos system logs hard to read. Is there a way I can tell kubeadm/kubernetes to log to /var/log/kubernetes rather?</p>
<p>We are already sending our application (pod) logs to a mountpoint. We need to move the stderr logs of kubernetes.</p>
| <blockquote>
<p>However all logging is going to /var/log/messages which is making centos system logs hard to read. Is there a way I can tell kubeadm/kubernetes to log to /var/log/kubernetes rather?</p>
</blockquote>
<p>No, not exactly, but you can reconfigure Docker to log in a different way.</p>
<p>This might depend on the Docker version you're running, but in my CentOS 7 VM (a couple of weeks old) I'm running Docker version <code>1.13.1</code>, installed via <code>yum</code>. </p>
<p>When looking through the <a href="https://docs.docker.com/v1.13/engine/admin/logging/overview/" rel="nofollow noreferrer">docs</a> for version <code>1.13</code> and the latest <a href="https://docs.docker.com/config/containers/logging/configure/" rel="nofollow noreferrer">stable</a> version of Docker they say more or less the same thing:</p>
<blockquote>
<p>If you do not specify a logging driver, the default is json-file. </p>
</blockquote>
<p>The version of Docker i installed via <code>yum</code> had the following line in an environment file (<code>/etc/sysconfig/docker</code>) that is loaded when starting Docker:</p>
<pre><code>OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
</code></pre>
<p>As you can see, the logging driver is configured as <code>journald</code>; that should be the reason you're seeing logs from your containers in <code>/var/log/messages</code>. You can check which logging driver is configured with:</p>
<pre><code>docker info | grep 'Logging Driver'
</code></pre>
<p>The logging driver decides where all of the logs (in Docker terms, <code>stderr</code> and <code>stdout</code> from the containers) are sent. Docker supports a couple of different <a href="https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers" rel="nofollow noreferrer">logging drivers</a>; <code>json-file</code> might be the best choice if you want to relocate the logging from an OS perspective ("changing" the log path). With it, every Docker container will have its own log written to <code>/var/log/pods/<ID>/<NAME>/<LOGFILE></code>; actually the log files are symlinks back to <code>/var/lib/docker/containers/<ID>/<ID>-json.log</code>.</p>
<p>If you do configure <code>json-file</code> then remove the <code>--log-driver=journald</code> flag and instead configure this in the <code>/etc/docker/daemon.json</code> file, mentioned in the docs. With <code>json-file</code> you can configure things like log rotation and log file sizes; please consult the <a href="https://docs.docker.com/config/containers/logging/json-file/#options" rel="nofollow noreferrer">docs</a> for more options.</p>
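<p>A minimal <code>daemon.json</code> along those lines might look like this (the rotation sizes are just an example):</p>

<pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
</code></pre>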
<p>When configuring via the <code>daemon.json</code> file this becomes a global setting, you can always override the logging driver used for a specific container with <code>docker run ... --log-driver</code>.</p>
<p>These logging changes apply to everything running within Docker. To move the logging of e.g. the <code>kubelet</code>, which runs alongside Docker on your host, you can look at its configurable <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options" rel="nofollow noreferrer">options</a>. By default the <code>kubelet</code> <code>stderr</code> logs go via <code>journald</code> and end up in <code>/var/log/messages</code>; to change this behavior you can add the <code>--log-dir</code> option and point it to another location.</p>
<p>At the end of the day I think it's good to give log shipping a thought and investigate whether the other logging drivers might fit into your environment.</p>
|
<p>I am working on Azure Kubernetes service. I am creating AKS through portal successfully. But, I need to do it through the ARM Templates.</p>
<p>How to create AKS with help of ARM Templates? </p>
<p>For this, I followed <a href="https://github.com/neumanndaniel/armtemplates/blob/master/container/aks.json" rel="nofollow noreferrer">link</a></p>
<p>But, here am receiving an issue like:</p>
<blockquote>
<p>Code : InvalidTemplate</p>
<p>Message : Deployment template validation failed: 'The template
resource
'AKSsubnet/Microsoft.Authorization/36985XXX-XXXX-XXXX-XXXX-5fb6b7ebXXXX'
for type
'Microsoft.Network/virtualNetworks/subnets/providers/roleAssignments'
at line '53' and column '9' has incorrect segment lengths. A
nested resource type must have identical number of segments as its resource name. A root resource type must have segment length
one greater than its resource name. Please see <a href="https://aka.ms/arm-template/#resources" rel="nofollow noreferrer">https://aka.ms/arm-template/#resources</a> for usage details.'.</p>
</blockquote>
| <blockquote>
<p>How to create AKS with help of ARM Templates?</p>
</blockquote>
<p>One of the most direct methods: navigate to the AKS creation page in the portal, fill in the attributes, and click <code>Download a template for automation</code>; then you will get the template.</p>
<p><a href="https://i.stack.imgur.com/UUpTb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UUpTb.png" alt="enter image description here"></a></p>
<p>You can test the template via a <strong>Custom deployment</strong>; it will work fine.</p>
<p><a href="https://i.stack.imgur.com/2ktg3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2ktg3.png" alt="enter image description here"></a></p>
|
<p>Is it atypical for multi-master K8s cluster deployments to use unique certs per service, per controller node? Most guides I've seen generate unique certs per service (API, Controller, Scheduler) and then use those certs for the eponymous service on each Controller node. </p>
<p>Does Kubernetes disallow or discourage unique certs per service, per node? With DNS/IP SANs it should be possible to still have each service respond to a singular cluster address, so I'm curious if this decision is one for the sake of simpler instructions, or if it's actually some requirement I'm missing.</p>
<p>Thank you.</p>
| <blockquote>
<p>Does Kubernetes disallow or discourage unique certs per service, per
node? With DNS/IP SANs it should be possible to still have each
service respond to a singular cluster address, so I'm curious if this
decision is one for the sake of simpler instructions, or if it's
actually some requirement I'm missing</p>
</blockquote>
<p>When we have a running Kubernetes cluster (or several), we can have thousands of private and public keys, and the different components usually cannot tell on their own whether they are valid. That is where the Certificate Authority comes in: a trusted third party which tells the interested parties "this certificate is trusted". </p>
<p><a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">documentation</a>: </p>
<blockquote>
<p>Every Kubernetes cluster has a cluster root Certificate Authority
(CA). The CA is generally used by cluster components to validate the
API server’s certificate, by the API server to validate kubelet client
certificates, etc.</p>
</blockquote>
<p>This actually shows that you can have different certificates in each cluster, but it is not a requirement; you can imagine many different combinations of CAs. You can have one global CA that is responsible for signing all the keys, or one CA for each cluster, one for internal communication and one for external, etc. </p>
<p>Any request that presents a client certificate signed by the cluster CA will be considered authenticated. In that authentication process, it should be possible to obtain a username from the Common Name field (CN) and a group from the Organization field of that certificate. So the answer would be yes, you can use different certs per service, node or any other component in the cluster, as long as they are signed by the cluster's Certificate Authority. </p>
<p>When creating certificates for the master in a multi-master (HA) cluster, you have to make sure that the <strong>load balancer's IP and DNS name are part of that certificate</strong>. Otherwise, whenever a client tries to talk to the API server through the LB, the client will complain, since the common name on the certificate will be different from the one it wants to communicate with. </p>
<p>Going further, each of the core cluster components has its own client certificate in addition to the main certificate, because each of them has a different access level to the cluster and a different common name.
It is noteworthy that the <strong>kubelet</strong> has a slightly different certificate name, as each kubelet has its own identity (the hostname where the kubelet is running is part of the certificate); this is related to other features like the Node Authorizer and the NodeRestriction admission plugin. These features are important from the perspective of least privilege - they limit the otherwise unrestricted access and interaction of the kubelet with the apiserver.
Using them, you can limit a kubelet to only being able to modify its own Node resource instead of the whole cluster, to only being able to read the secrets of its own node instead of all secrets in the cluster, etc. </p>
<p><strong>EDIT - following comment discussion:</strong></p>
<p>Assuming you are asking for an opinion on why more people do not use multiple certificates, I think it is because it does not really raise security in a significant way. The certs are not as important as the CA - which is the trusted guarantor that the entities can talk to each other securely. You can create multiple CAs - but the reason for that would be more of an HA approach than security. Of course, if you have a trusted CA, you don't need more kinds of certificates, as you do not actually gain anything by increasing their number. </p>
|
<p>I have a problem: I need to collect metric data from the read-only port 10255, but unfortunately, using netstat, I found that no such port exists at all. Can somebody advise how I could enable this port on the kubelet, or how I can avoid using this port for data collection?</p>
| <p>The <em>kubelet</em> requires a parameter to be set: <strong>--read-only-port=10255</strong> (<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">read more about kubelet</a>) </p>
<p>If you are using <em>kubeadm</em> to bootstrap the cluster, you can use a config file to pass parameters to the kubelet (look for how to <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/" rel="nofollow noreferrer">"Set Kubelet parameters via a config file"</a>).</p>
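<p>If you go the config-file route, the relevant field in a <code>KubeletConfiguration</code> looks roughly like this:</p>

<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
readOnlyPort: 10255
</code></pre>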
<p>If, for example, you are using <a href="https://github.com/kubernetes-incubator/kubespray" rel="nofollow noreferrer">kubespray</a>, there's the <strong>kube_read_only_port</strong> variable (commented out by default).</p>
<blockquote>
<p>Warning! This is not a good practice and the <a href="https://github.com/kubernetes/kubeadm/issues/732" rel="nofollow noreferrer">read-only-port is deprecated</a>. There are ways to read from the secure port but this is another story.</p>
</blockquote>
|
<p>I am trying to run a docker container registry in Minikube for testing a CSI driver that I am writing. </p>
<p>I am running minikube on mac and am trying to use the following minikube start command: <code>minikube start --vm-driver=hyperkit --disk-size=40g</code>. I have tried with both kubeadm and localkube bootstrappers and with the virtualbox vm-driver.</p>
<p>This is the resource definition I am using for the registry pod deployment. </p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: registry
labels:
app: registry
namespace: docker-registry
spec:
containers:
- name: registry
image: registry:2
imagePullPolicy: Always
ports:
- containerPort: 5000
volumeMounts:
- mountPath: /var/lib/registry
name: registry-data
volumes:
- hostPath:
path: /var/lib/kubelet/plugins/csi-registry
type: DirectoryOrCreate
name: registry-data
</code></pre>
<p>I attempt to create it using <code>kubectl apply -f registry-setup.yaml</code>. Before running this my minikube cluster reports itself as ready and with all the normal minikube containers running.</p>
<p>However, this fails to run and upon running <code>kubectl describe pod</code>, I see the following message:</p>
<pre><code> Name: registry
Namespace: docker-registry
Node: minikube/192.168.64.43
Start Time: Wed, 08 Aug 2018 12:24:27 -0700
Labels: app=registry
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"registry"},"name":"registry","namespace":"docker-registry"},"spec":{"cont...
Status: Running
IP: 172.17.0.2
Containers:
registry:
Container ID: docker://42e5193ac563c2b2e2a2b381c91350d30f7e7c5009a30a5977d33b403a374e7f
Image: registry:2
...
TRUNCATED FOR SPACE
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned registry to minikube
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "registry-data"
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-kq5mq"
Normal Pulling 1m kubelet, minikube pulling image "registry:2"
Normal Pulled 1m kubelet, minikube Successfully pulled image "registry:2"
Normal Created 1m kubelet, minikube Created container
Normal Started 1m kubelet, minikube Started container
...
TRUNCATED
...
Name: storage-provisioner
Namespace: kube-system
Node: minikube/192.168.64.43
Start Time: Wed, 08 Aug 2018 12:24:38 -0700
Labels: addonmanager.kubernetes.io/mode=Reconcile
integration-test=storage-provisioner
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provis...
Status: Pending
IP: 192.168.64.43
Containers:
storage-provisioner:
Container ID:
Image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
Image ID:
Port: <none>
Host Port: <none>
Command:
/storage-provisioner
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp from tmp (rw)
/var/run/secrets/kubernetes.io/serviceaccount from storage-provisioner-token-sb5hz (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
tmp:
Type: HostPath (bare host directory volume)
Path: /tmp
HostPathType: Directory
storage-provisioner-token-sb5hz:
Type: Secret (a volume populated by a Secret)
SecretName: storage-provisioner-token-sb5hz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned storage-provisioner to minikube
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "tmp"
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "storage-provisioner-token-sb5hz"
Normal Pulling 23s (x3 over 1m) kubelet, minikube pulling image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1"
Warning Failed 21s (x3 over 1m) kubelet, minikube Failed to pull image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1": rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exit status 1): write /storage-provisioner: no space left on device
Warning Failed 21s (x3 over 1m) kubelet, minikube Error: ErrImagePull
Normal BackOff 7s (x3 over 1m) kubelet, minikube Back-off pulling image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1"
Warning Failed 7s (x3 over 1m) kubelet, minikube Error: ImagePullBackOff
------------------------------------------------------------
...
</code></pre>
<p>So while the registry container starts up correctly, a few of the other minikube services (including dns, http ingress service, etc) begin to fail with reasons such as the following: <code>write /storage-provisioner: no space left on device</code>. Despite allocating a 40GB disk-size to minikube, it seems as though minikube is trying to write to <code>rootfs</code> or <code>devtempfs</code> (depending on the vm-driver) which has only 1GB of space.</p>
<pre><code>$ df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 919M 713M 206M 78% /
devtmpfs 919M 0 919M 0% /dev
tmpfs 996M 0 996M 0% /dev/shm
tmpfs 996M 8.9M 987M 1% /run
tmpfs 996M 0 996M 0% /sys/fs/cgroup
tmpfs 996M 8.0K 996M 1% /tmp
/dev/sda1 34G 1.3G 30G 4% /mnt/sda1
</code></pre>
<p>Is there a way to make minikube actually use the 34GB of space that was allocated to /mnt/sda1 instead of rootfs when pulling images and creating containers?</p>
<p>Thanks in advance for any help!</p>
| <p>You need to configure your Minikube virtual machine to use <code>/dev/sda1</code> instead of <code>/</code> for Docker. To log in to it, use the <code>minikube ssh</code> command.</p>
<p>Then you have two options:</p>
<ol>
<li><p>Mount <code>/dev/sda1</code> at <code>/var/lib/docker</code>, but don't forget to copy the content from the original <code>/var/lib/docker</code> to <code>/mnt/sda1</code> before that.</p></li>
<li><p>Reconfigure Docker to use <code>/mnt/sda1</code> instead of <code>/var/lib/docker</code> for storing images (a rough sketch follows this list). Look through this <a href="https://stackoverflow.com/questions/24309526/how-to-change-the-docker-image-installation-directory">link</a> for more information about it.</p></li>
</ol>
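<p>A rough sketch of the second option from inside the VM, assuming a Docker version that understands the <code>data-root</code> key (older daemons use <code>graph</code> instead):</p>

<pre><code>minikube ssh
sudo mkdir -p /mnt/sda1/docker
# point the daemon's storage at the big partition
echo '{ "data-root": "/mnt/sda1/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
</code></pre>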
|
<p>I'm trying to host an application using Google Kubernetes Engine. My docker image works when run locally, but when I put it on Google Cloud and set it up using a kubernetes cluster it fails in a very strange way. </p>
<p>I'm able to connect to the application and it works until I trigger a call of <code>google.cloud.storage.Client()</code>. Then it attempts to read the file I've provided through the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable and something goes wrong. I get this (truncated and redacted) traceback:</p>
<pre><code>self.gcs_client = storage.Client()
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/client.py", line 71, in __init__
_http=_http)
File "/usr/local/lib/python3.6/site-packages/google/cloud/client.py", line 215, in __init__
_ClientProjectMixin.__init__(self, project=project)
File "/usr/local/lib/python3.6/site-packages/google/cloud/client.py", line 169, in __init__
project = self._determine_default(project)
File "/usr/local/lib/python3.6/site-packages/google/cloud/client.py", line 182, in _determine_default
return _determine_default_project(project)
File "/usr/local/lib/python3.6/site-packages/google/cloud/_helpers.py", line 179, in _determine_default_project
_, project = google.auth.default()
File "/usr/local/lib/python3.6/site-packages/google/auth/_default.py", line 294, in default
credentials, project_id = checker()
File "/usr/local/lib/python3.6/site-packages/google/auth/_default.py", line 165, in _get_explicit_environ_credentials
os.environ[environment_vars.CREDENTIALS])
File "/usr/local/lib/python3.6/site-packages/google/auth/_default.py", line 89, in _load_credentials_from_file
'File {} was not found.'.format(filename))
google.auth.exceptions.DefaultCredentialsError: File {--redacted--} was not found.
</code></pre>
<p>What I've redacted is JSON containing my service account's private key. I've checked the docker container while it's running on the cloud and <code>GOOGLE_APPLICATION_CREDENTIALS</code> is set to a file name as expected. Somehow when my docker container is run on the cloud, instead of using the environment variable as the file name - it uses the contents of the referenced file as the file name. This error also gets reported in the browser console, so anyone that navigates to the app can get my service account credentials. </p>
<p>Does anyone have any guesses about what's going wrong here? </p>
<p><strong>UPDATE:</strong>
Looks like I've got it working now. My guess is the error occurred because I set <code>GOOGLE_APPLICATION_CREDENTIALS</code> using <code>valueFrom</code> as in the <code>bokeh.yaml</code> file. This seems to have been setting <code>GOOGLE_APPLICATION_CREDENTIALS</code> equal to the contents of <code>bokeh.yaml</code>. This format seems to work for <code>pandas.io.gbq.read_gbq</code> used in the tutorial code but not for instantiating <code>google.cloud.storage.Client()</code> like I was trying to do. Pointing <code>GOOGLE_APPLICATION_CREDENTIALS</code> to a volume mounted file like DazWilkin suggested and Eric Guan showed worked. </p>
| <p>I noticed you didn't mention using Kubernetes Secrets. Here's how I did it.</p>
<ol>
<li>Create a Secret with <code>kubectl</code> on your local machine.</li>
</ol>
<pre><code>$ kubectl create secret generic gac-keys --from-file=<PATH_TO_SERVICE_ACCOUNT_FILE>
</code></pre>
<p>This creates a Secret called <code>gac-keys</code>. It contains your JSON file found at <code><PATH_TO_SERVICE_ACCOUNT_FILE></code>. If you want to rename the JSON file as it appears inside the Secret, you can do </p>
<p><code>--from-file=new-file-name.json=<PATH_TO_SERVICE_ACCOUNT_FILE></code></p>
<ol start="2">
<li>Configure your Deployment to use the Secret.</li>
</ol>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: my-app
spec:
template:
spec:
volumes:
- name: google-cloud-keys
secret:
secretName: gac-keys
containers:
- name: my-app
image: us.gcr.io/my-app
volumeMounts:
- name: google-cloud-keys
mountPath: /var/secrets/google
readOnly: true
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /var/secrets/google/new-file-name.json
</code></pre>
<p>You specify a volume with an arbitrary name <code>google-cloud-keys</code> to be referenced later. The volume is linked to the secret. Inside the container spec, you mount the <code>google-cloud-keys</code> volume at path <code>/var/secrets/google</code>. This places your file at the path, so <code>/var/secrets/google/new-file-name.json</code> should exist in the container at runtime. Then you specify an env variable named <code>GOOGLE_APPLICATION_CREDENTIALS</code> which points to the path. Now your client library can authenticate with google.</p>
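<p>To sanity-check the wiring once the pod is running, something like the following should show the mounted key file and the environment variable (a quick optional check; <code><POD_NAME></code> is a placeholder, and it assumes the image ships basic coreutils):</p>
<pre><code># List the mounted secret files.
kubectl exec <POD_NAME> -- ls /var/secrets/google

# Confirm the env variable points at the mounted file, not at the file contents.
kubectl exec <POD_NAME> -- printenv GOOGLE_APPLICATION_CREDENTIALS
</code></pre>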
<p>Docs: <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a></p>
|
<p>I'm getting the errors below when trying to run spark-submit on a Kubernetes (k8s) cluster.</p>
<p><strong>Error 1</strong>: This looks like a warning; it doesn't interrupt the app running inside the executor pod, but the warning keeps appearing.</p>
<pre><code>2018-03-09 11:15:21 WARN WatchConnectionManager:192 - Exec Failure
java.io.EOFException
at okio.RealBufferedSource.require(RealBufferedSource.java:60)
at okio.RealBufferedSource.readByte(RealBufferedSource.java:73)
at okhttp3.internal.ws.WebSocketReader.readHeader(WebSocketReader.java:113)
at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:97)
at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.java:262)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:201)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
</code></pre>
<p><strong>Error 2</strong>: This is an intermittent error that causes the executor pod to fail. </p>
<pre><code>org.apache.spark.SparkException: External scheduler cannot be instantiated
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2747)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:492)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2486)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at com.capitalone.quantum.spark.core.QuantumSession$.initialize(QuantumSession.scala:62)
at com.capitalone.quantum.spark.core.QuantumSession$.getSparkSession(QuantumSession.scala:80)
at com.capitalone.quantum.workflow.WorkflowApp$.getSession(WorkflowApp.scala:116)
at com.capitalone.quantum.workflow.WorkflowApp$.main(WorkflowApp.scala:90)
at com.capitalone.quantum.workflow.WorkflowApp.main(WorkflowApp.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [myapp-ef79db3d9f4831bf85bda14145fdf113-driver-driver] in namespace: [default] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.<init>(KubernetesClusterSchedulerBackend.scala:70)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:120)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2741)
... 11 more
Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at okhttp3.Dns$1.lookup(Dns.java:39)
at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:171)
at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.java:137)
at okhttp3.internal.connection.RouteSelector.next(RouteSelector.java:82)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:171)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:121)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:100)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
at okhttp3.RealCall.execute(RealCall.java:69)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:377)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:783)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217)
... 15 more
2018-03-09 15:00:39 INFO AbstractConnector:318 - Stopped Spark@5f59185e{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-09 15:00:39 INFO SparkUI:54 - Stopped Spark web UI at http://myapp-ef79db3d9f4831bf85bda14145fdf113-driver-svc.default.svc:4040
2018-03-09 15:00:39 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2018-03-09 15:00:39 INFO MemoryStore:54 - MemoryStore cleared
2018-03-09 15:00:39 INFO BlockManager:54 - BlockManager stopped
2018-03-09 15:00:39 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2018-03-09 15:00:39 WARN MetricsSystem:66 - Stopping a MetricsSystem that is not running
2018-03-09 15:00:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2018-03-09 15:00:39 INFO SparkContext:54 - Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: External scheduler cannot be instantiated
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2747)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:492)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2486)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at com.capitalone.quantum.spark.core.QuantumSession$.initialize(QuantumSession.scala:62)
at com.capitalone.quantum.spark.core.QuantumSession$.getSparkSession(QuantumSession.scala:80)
at com.capitalone.quantum.workflow.WorkflowApp$.getSession(WorkflowApp.scala:116)
at com.capitalone.quantum.workflow.WorkflowApp$.main(WorkflowApp.scala:90)
at com.capitalone.quantum.workflow.WorkflowApp.main(WorkflowApp.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [myapp-ef79db3d9f4831bf85bda14145fdf113-driver] in namespace: [default] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.<init>(KubernetesClusterSchedulerBackend.scala:70)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:120)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2741)
... 11 more
Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at okhttp3.Dns$1.lookup(Dns.java:39)
at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:171)
at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.java:137)
at okhttp3.internal.connection.RouteSelector.next(RouteSelector.java:82)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:171)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:121)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:100)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
at okhttp3.RealCall.execute(RealCall.java:69)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:377)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:783)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217)
... 15 more
2018-03-09 15:00:39 INFO ShutdownHookManager:54 - Shutdown hook called
2018-03-09 15:00:39 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-5bd85c96-d689-4c53-a0b3-1eadd32357cb
</code></pre>
<p>Note: The application is able to run successfully, but the spark-submit run fails with Error 2 above very frequently.</p>
| <p>You need to set the <code>SPARK_LOCAL_IP</code> environment variable to the pod's IP address and pass it to spark-submit using <code>--conf spark.driver.host=${SPARK_LOCAL_IP}</code>.</p>
<p>See the Spark documentation for more information about these variables.</p>
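<p>A minimal sketch of what this describes, assuming spark-submit runs in a pod where the pod IP is injected as <code>POD_IP</code> via the downward API; the main class and jar path are placeholders for your own application, and the exact flags depend on your setup:</p>
<pre><code>export SPARK_LOCAL_IP="${POD_IP}"

# The class name and jar path below are placeholders.
spark-submit \
  --master k8s://https://kubernetes.default.svc \
  --deploy-mode cluster \
  --conf spark.driver.host="${SPARK_LOCAL_IP}" \
  --class com.example.MyApp \
  local:///path/to/your-app.jar
</code></pre>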
|
<p>I have created a Docker registry as a pod with a service, and login, push, and pull are working. But when I try to create a pod that uses an image from this registry, the kubelet can't pull the image from the registry.</p>
<p>My pod registry:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: registry-docker
labels:
registry: docker
spec:
containers:
- name: registry-docker
image: registry:2
volumeMounts:
- mountPath: /opt/registry/data
name: data
- mountPath: /opt/registry/auth
name: auth
ports:
- containerPort: 5000
env:
- name: REGISTRY_AUTH
value: htpasswd
- name: REGISTRY_AUTH_HTPASSWD_PATH
value: /opt/registry/auth/htpasswd
- name: REGISTRY_AUTH_HTPASSWD_REALM
value: Registry Realm
volumes:
- name: data
hostPath:
path: /opt/registry/data
- name: auth
hostPath:
path: /opt/registry/auth
</code></pre>
<p>pod I would like to create from registry:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: 10.96.81.252:5000/nginx:latest
imagePullSecrets:
- name: registrypullsecret
</code></pre>
<p>Error I get from my registry logs:</p>
<blockquote>
<p>time="2018-08-09T07:17:21Z" level=warning msg="error authorizing
context: basic authentication challenge for realm \"Registry Realm\":
invalid authorization credential" go.version=go1.7.6
http.request.host="10.96.81.252:5000"
http.request.id=655f76a6-ef05-4cdc-a677-d10f70ed557e
http.request.method=GET http.request.remoteaddr="10.40.0.0:59088"
http.request.uri="/v2/" http.request.useragent="docker/18.06.0-ce
go/go1.10.3 git-commit/0ffa825 kernel/4.4.0-130-generic os/linux
arch/amd64 UpstreamClient(Go-http-client/1.1)"
instance.id=ec01566d-5397-4c90-aaac-f56d857d9ae4 version=v2.6.2
10.40.0.0 - - [09/Aug/2018:07:17:21 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.06.0-ce go/go1.10.3 git-commit/0ffa825
kernel/4.4.0-130-generic os/linux arch/amd64
UpstreamClient(Go-http-client/1.1)"</p>
</blockquote>
<p>The secret I use created from <code>cat ~/.docker/config.json | base64</code>:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: registrypullsecret
data:
.dockerconfigjson: ewoJImF1dGhzIjogewoJCSJsb2NhbGhvc3Q6NTAwMCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZaRzlqYTJWeU1USXoiCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzE4LjA2$
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>The modification I have made to my default serviceaccount:</p>
<pre><code>cat ./sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2018-08-03T09:49:47Z
name: default
namespace: default
# resourceVersion: "51625"
selfLink: /api/v1/namespaces/default/serviceaccounts/default
uid: 8eecb592-9702-11e8-af15-02f6928eb0b4
secrets:
- name: default-token-rfqfp
imagePullSecrets:
- name: registrypullsecret
</code></pre>
<p>file ~/.docker/config.json:</p>
<pre><code>{
"auths": {
"localhost:5000": {
"auth": "YWRtaW46ZG9ja2VyMTIz"
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/18.06.0-ce (linux)"
  }
}
</code></pre>
| <p>The <code>auths</code> data has login credentials for "localhost:5000", but your image is at "10.96.81.252:5000/nginx:latest".</p>
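<p>Kubernetes matches an entry in <code>imagePullSecrets</code> against the registry host in the image reference, so the secret needs credentials keyed by <code>10.96.81.252:5000</code>. One way to recreate it directly with <code>kubectl</code> (a sketch; use the username and password from your htpasswd setup; the email can be any value):</p>
<pre><code>kubectl create secret docker-registry registrypullsecret \
  --docker-server=10.96.81.252:5000 \
  --docker-username=<USERNAME> \
  --docker-password=<PASSWORD> \
  --docker-email=<ANY_EMAIL>
</code></pre>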
|
<p>I have followed the documentation for spinning up an EKS cluster that says to make a service role with certain policies.</p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf</a></p>
<pre><code>To create your Amazon EKS service role
1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. Choose Roles, then Create role.
3. Choose EKS from the list of services, then Allows Amazon EKS to manage your clusters on your behalf for your use case, then Next: Permissions.
4. Choose Next: Review.
5. For Role name, enter a unique name for your role, such as eksServiceRole, then choose Create role.
</code></pre>
<p>When I create a basic hello world app, it throws an AccessDenied error.</p>
<pre><code>Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx:
AccessDenied: User: arn:aws:sts::*************:assumed-role/eks-service-role/************* is not authorized to perform: iam:CreateServiceLinkedRole on resource: arn:aws:iam::*************:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing
</code></pre>
<p>The two Policies that were added (AmazonEKSClusterPolicy, AmazonEKSServicePolicy) do not have the iam:CreateServiceLinkedRole action allowed. Are we supposed to add this outside of the policies defined in the guide? Or is this something that should be included in the EKS policies?</p>
| <p>It seems that the EKS userguide assumes you have created load balancers in your AWS account prior to creating the EKS cluster, and thus have an existing <strong>AWSServiceRoleForElasticLoadBalancing</strong> service role in AWS IAM. </p>
<p>As described in <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/elb-service-linked-roles.html#create-service-linked-role" rel="noreferrer">https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/elb-service-linked-roles.html#create-service-linked-role</a></p>
<pre><code>You don't need to manually create the AWSServiceRoleForElasticLoadBalancing role. Elastic Load Balancing creates this role for you when you create a load balancer.
</code></pre>
<p>EKS is attempting to do this for you, resulting in the access denied exception using the default policies.</p>
<p>Other options to explicitly create the service-linked role prior to EKS cluster creation include:</p>
<p>AWS CLI</p>
<pre><code>aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
</code></pre>
<p>Terraform</p>
<pre><code>resource "aws_iam_service_linked_role" "elasticloadbalancing" {
aws_service_name = "elasticloadbalancing.amazonaws.com"
}
</code></pre>
<p>Or, manually create a load balancer from the UI Console.</p>
<p>Regardless of provisioning options, you should know things will work when you see the following role in AWS IAM</p>
<pre><code>arn:aws:iam::<ACCOUNT_ID>:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing
</code></pre>
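<p>A quick optional check from the CLI to confirm the service-linked role exists:</p>
<pre><code>aws iam get-role --role-name AWSServiceRoleForElasticLoadBalancing
</code></pre>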
|
<p>I am trying to route outbound traffic from an application in my GKE cluster through a static IP, as the destination server requires whitelisting the IP for access. I have been able to do this using a Terraform-provisioned NAT gateway, but that impacts all traffic from the cluster.</p>
<p>Following the Istio guide on the site, I've been able to route traffic through an egressgateway pod (I can see it in the gateway logs), but I need the gateway to have a static IP, and there is no override in the Helm values for an egressgateway static IP.</p>
<p>How can I assign a static ip to the egressgateway without having to patch anything or hack it after installing istio?</p>
| <p>I think of your problem as having three steps. First, fix the outgoing traffic to a particular pod; the Istio egress gateway does this for you. Second and third, fix that pod to a particular node and that node to a particular IP address.</p>
<p>If you use GCP's version of floating IP addresses, then you can assign a known IP to one of the hosts in your cluster. Then, use node affinity on the egress-gateway to schedule it to the particular host, <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/</a></p>
<p>I've edited the egress gateway deployment in one of my test clusters to include the following node affinity:</p>
<pre><code>requiredDuringSchedulingIgnoredDuringExecution:
  nodeSelectorTerms:
  - matchExpressions:
    - key: beta.kubernetes.io/arch
      operator: In
      values:
      - amd64
      - ppc64le
      - s390x
    - key: kubernetes.io/hostname
      operator: In
      values:
      - worker-2720002
</code></pre>
<p>This pins it by the hostname label, but you'll probably want to choose and apply a new label to the node when you assign it the floating IP. In my test, the pod is moved to the specified node, and the outgoing egress traffic moves with it.</p>
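<p>If you go the label route instead of pinning by hostname, a small sketch (the label key and value here are made up):</p>
<pre><code># Label the node that has the reserved IP attached.
kubectl label node worker-2720002 egress=static-ip

# Then match that label in the egress gateway's node affinity instead of
# kubernetes.io/hostname:
#   - key: egress
#     operator: In
#     values:
#     - static-ip
</code></pre>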
|
<p>I'm trying to save the contents of a configmap to a file on my local hard drive. Kubectl supports selecting with JSONPath but I can't find the expression I need to select just the file contents.</p>
<p>The configmap was created using the command</p>
<pre><code>kubectl create configmap my-configmap --from-file=my.configmap.json=my.file.json
</code></pre>
<p>When I run </p>
<pre><code>kubectl describe configmap my-configmap
</code></pre>
<p>I see the following output:</p>
<pre><code>Name: my-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
my.file.json:
----
{
"key": "value"
}
Events: <none>
</code></pre>
<p>The furthest I've gotten toward selecting only the file contents is this:</p>
<pre><code> kubectl get configmap my-configmap -o jsonpath="{.data}"
</code></pre>
<p>Which outputs</p>
<pre><code>map[my.file.json:{
"key": "value"
}]
</code></pre>
<p>The output that I want is </p>
<pre><code>{
"key": "value"
}
</code></pre>
<p>What is the last piece of the JSONPath puzzle?</p>
| <p>There’s an open <a href="https://github.com/kubernetes/kubernetes/issues/63503" rel="noreferrer">issue</a> at the Kubernetes GitHub repo with a list of things that need to be fixed in regards to kubectl (and JSONPath); one of them is issue <a href="https://github.com/kubernetes/kubernetes/issues/16707" rel="noreferrer">16707</a>, <em>jsonpath template output should be json</em>. </p>
<p><strong>Edit:</strong></p>
<p>How about this:</p>
<pre><code>kubectl get cm my-configmap -o jsonpath='{.data.my\.file\.json}'
</code></pre>
<p>I just realized I had <a href="https://stackoverflow.com/a/51052288/1547081">answered</a> another question related (kind of) to this one. The above command should output what you had in mind!</p>
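<p>Since the goal was to save the contents to a file on the local disk, redirecting the output of that command should do it:</p>
<pre><code>kubectl get configmap my-configmap -o jsonpath='{.data.my\.file\.json}' > my.file.json
</code></pre>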
|
<p>I want to undeploy a pod or redeploy the previous image, i.e. revert the code. This should happen from the Kubernetes dashboard. Just as there is an option to delete a deployment in the cluster dashboard, is there similarly a way to undeploy, or to redeploy the previous image tag?</p>
| <p>You can't currently roll back from the dashboard, no. Actually there is an open feature request asking for it - <a href="https://github.com/kubernetes/dashboard/issues/2889" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard/issues/2889</a></p>
<p>Rolling back a deployment from the command-line is possible (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment</a>) or you could switch the image referenced by your deployment. But rollback is not currently a feature of the dashboard.</p>
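<p>For reference, the command-line rollback looks like this (the deployment name is a placeholder):</p>
<pre><code># Roll back to the previous revision.
kubectl rollout undo deployment/<DEPLOYMENT_NAME>

# Or inspect the history and roll back to a specific revision.
kubectl rollout history deployment/<DEPLOYMENT_NAME>
kubectl rollout undo deployment/<DEPLOYMENT_NAME> --to-revision=2
</code></pre>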
|