prompt | response
---|---|
<p>I would like to get the container status and restarts of a pod, using <a href="https://github.com/kubernetes-client/python" rel="noreferrer">python kubernetes</a>. I am currently using <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#list_namespaced_pod" rel="noreferrer">list_namespaced_pod</a> to get the pods and the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1ContainerStatus.md" rel="noreferrer">container status</a>, which I interpret as the status of the pod. This works, but is far from an ideal solution. I am looking for something that gives me similar information to <code>kubectl -n $NAMESPACE get pods</code>.</p>
| <p>You can do the following for example, to get all the pods and their status:</p>
<pre><code>api_response = api_instance.list_namespaced_pod(namespace, pretty=pretty, timeout_seconds=timeout_seconds, watch=watch)
for i in api_response.items:
    print(i.metadata.name + " " + i.status.phase)
</code></pre>
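<p>If you also want per-container state and restart counts (closer to what <code>kubectl get pods</code> shows), here is a minimal, self-contained sketch using the same client library; it assumes a working local kubeconfig, and the namespace is a placeholder you would adjust:</p>
<pre><code>from kubernetes import client, config

config.load_kube_config()          # assumes ~/.kube/config is set up
v1 = client.CoreV1Api()
namespace = "default"              # assumption: change to your namespace

for pod in v1.list_namespaced_pod(namespace).items:
    # container_statuses can be None while a pod is still being scheduled
    for cs in pod.status.container_statuses or []:
        print(f"{pod.metadata.name} {pod.status.phase} "
              f"ready={cs.ready} restarts={cs.restart_count}")
</code></pre>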
|
<p>I installed the AWS CLI around 6 months back and upgraded it using the following command today.</p>
<p><code>pip install --upgrade awscli</code></p>
<p>The current version on my Ubuntu 18.04 machine is:</p>
<pre><code>aws-cli/1.14.44 Python/3.6.5 Linux/4.15.0-23-generic botocore/1.8.48
</code></pre>
<p>When I try aws eks command, I am getting the following error:</p>
<pre><code>Invalid choice: 'eks', maybe you meant:
* es
</code></pre>
<p>Is my AWS CLI upgraded properly, or should I install something else for the EKS service?</p>
| <p>I had the same issue on El Capitan 10.11.6 and the only solution that worked was this:</p>
<p><code>sudo pip install awscli --ignore-installed six</code></p>
<p>The <code>six</code> package comes pre-installed via distutils on El Capitan, which is why it has to be ignored.</p>
<p>Reference: <a href="https://github.com/aws/aws-cli#installation" rel="noreferrer">https://github.com/aws/aws-cli#installation</a></p>
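<p>For the Ubuntu case in the question, the <code>eks</code> subcommand only exists in newer awscli releases than 1.14.44, so upgrading the CLI usually resolves the error. A sketch of the commands (the region is a placeholder and version output will differ on your machine):</p>
<pre><code># Upgrade awscli for the current user and verify the new version
pip install --upgrade --user awscli
aws --version
# If eks support is present, this no longer prints "Invalid choice"
aws eks list-clusters --region us-east-1
</code></pre>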
|
<p>How can I force a <code>--set</code> option to be specified on <code>helm install|upgrade</code>?</p>
<p>In my case, it's for some required values, e.g. "database.password".</p>
<p><strong>Files</strong></p>
<pre><code>.
|-- Chart.yaml
|-- templates
| |-- NOTES.txt
| |-- _helpers.tpl
| |-- deployment.yaml
| |-- ingress.yaml
| |-- secret.yaml
| `-- service.yaml
`-- values.yaml
</code></pre>
<p><strong>values.yaml (snip)</strong></p>
<pre><code>#...
database:
  useExternal: no
  host: "pgsql"
  port: "5432"
  name: "myapp"
  userName: "myapp_user"
  # The password shouldn't be written here.
  # I want to inject this value into the secret.
  password: ""
#...
</code></pre>
<p><strong>templates/secrets.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
type: Opaque
data:
  app-database-password: {{required .Values.database.password | b64enc | quote }}
</code></pre>
<p><strong>templates/deployment.yaml (snip)</strong></p>
<pre><code>#...
env:
  - name: APP_DATABASE_HOST
    value: {{ .Values.database.host | quote }}
  - name: APP_DATABASE_PORT
    value: {{ .Values.database.port | quote }}
  - name: APP_DATABASE_NAME
    value: {{ .Values.database.name | quote }}
  - name: APP_DATABASE_USERNAME
    value: {{ .Values.database.username | quote }}
  - name: APP_DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: myapp-secrets
        key: app-database-password
#...
</code></pre>
<p><strong>command</strong></p>
<pre><code># Retrieve from GCP KMS(prod) or define directly(dev)
DATABASE_PASSWORD=$( ... )
# Deploy.
helm upgrade --install \
-f ./values.yaml \
--set database.password=$DATABASE_PASSWORD \
myapp-dev ./ --dry-run --debug
</code></pre>
<p>It fails with an error.</p>
<pre><code>Error: render error in "myapp/templates/secret.yaml": template: myapp/templates/secret.yaml:7:28: executing "myapp/templates/secret.yaml" at <required>: wrong number of args for required: want 2 got 1
</code></pre>
<p>It seems the <code>required</code> function is evaluated statically when the template is parsed.</p>
<p>I need matters below:</p>
<ul>
<li><code>database.password</code> is switchable per environment, such as "prod" or "stage".</li>
<li><code>database.password</code> should be stored in a Secret.</li>
<li>I want to set the actual <code>database.password</code> value from environment variables at command execution time.</li>
</ul>
<p>Any ideas?</p>
| <p>The Helm-specific <code>required</code> macro <a href="https://docs.helm.sh/developing_charts/#know-your-template-functions" rel="nofollow noreferrer">takes two parameters</a>: the error message if the value isn't present, and the value that you're checking for. This syntax also lets it be used in pipeline form. In your example, the secret value could be</p>
<pre><code>app-database-password: {{.Values.database.password | required "database password is required" | b64enc | quote }}
</code></pre>
|
<p>I have many pods in a Kubernetes system with randomly generated names like wordpress.xxx.xx.</p>
<p>Here is the list of <a href="https://i.stack.imgur.com/k7Jxw.png" rel="nofollow noreferrer">pods</a>.</p>
<p>I want to use one <code>kubectl cp</code> command in order to copy files to all pods from one deployment.</p>
<p>In my case I don't want to use volumes, because mounting a volume into a path hides the existing content of that folder.</p>
<p>How to do that, please?</p>
<p>Thank you for your answer.</p>
| <p>The <code>kubectl cp</code> command copies files to a single container in a pod. To copy files to all matching pods easily, the shell script below can be used.</p>
<pre><code>for pod in `kubectl get pods -o=name | grep wordpress | sed "s/^.\{4\}//"`; do echo "copying to $pod"; kubectl cp file.txt $pod:/; done
</code></pre>
<p>or</p>
<pre><code>for pod in `kubectl get pods -o=name | grep wordpress | sed "s/^.\{4\}//"`
do
echo "copying file to $pod"
kubectl cp file.txt $pod:/
done
</code></pre>
<p>Both scripts are the same, written as a single line and as multiple lines.</p>
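<p>As a variation, the pod names can also be extracted with <code>-o jsonpath</code> instead of <code>grep</code>/<code>sed</code>; a sketch, assuming the pods carry an <code>app=wordpress</code> label (adjust the label selector and file path to your deployment):</p>
<pre><code>for pod in $(kubectl get pods -l app=wordpress -o jsonpath='{.items[*].metadata.name}'); do
  echo "copying file to $pod"
  kubectl cp file.txt "$pod":/
done
</code></pre>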
|
<p>I'm running a k8s cluster - 1.9.4-gke.1 - on Google Kubernetes Engine (GKE).</p>
<p>I need to set sysctl <code>net.core.somaxconn</code> to a higher value inside some containers.</p>
<p>I've found this official k8s page: <a href="https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster" rel="nofollow noreferrer">Using Sysctls in a Kubernetes Cluster</a> - that seemed to solve my problem. The solution was to make an annotation on my pod spec like the following:</p>
<pre><code>annotations:
  security.alpha.kubernetes.io/sysctls: net.core.somaxconn=1024
</code></pre>
<p>But when I tried to create my pod:</p>
<pre><code>Status: Failed
Reason: SysctlForbidden
Message: Pod forbidden sysctl: "net.core.somaxconn" not whitelisted
</code></pre>
<p>So I've tried to create a PodSecurityPolicy like the following:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: sites-psp
  annotations:
    security.alpha.kubernetes.io/sysctls: 'net.core.somaxconn'
spec:
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
</code></pre>
<p>... but it didn't work either.</p>
<p>I've also found that I can use a <code>kubelet</code> argument on every node to whitelist the specific <code>sysctl</code>: <code>--experimental-allowed-unsafe-sysctls=net.core.somaxconn</code></p>
<p>I've added this argument to the KUBELET_TEST_ARGS setting on my GCE machine and restarted it. From what I can see from the output of <code>ps</code> command, it seems that the option was successfully added to the <code>kubelet</code> process on the startup:</p>
<pre><code>/home/kubernetes/bin/kubelet --v=2 --kube-reserved=cpu=60m,memory=960Mi --experimental-allowed-unsafe-sysctls=net.core.somaxconn --allow-privileged=true --cgroup-root=/ --cloud-provider=gce --cluster-dns=10.51.240.10 --cluster-domain=cluster.local --pod-manifest-path=/etc/kubernetes/manifests --experimental-mounter-path=/home/kubernetes/containerized_mounter/mounter --experimental-check-node-capabilities-before-mount=true --cert-dir=/var/lib/kubelet/pki/ --enable-debugging-handlers=true --bootstrap-kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --anonymous-auth=false --authorization-mode=Webhook --client-ca-file=/etc/srv/kubernetes/pki/ca-certificates.crt --cni-bin-dir=/home/kubernetes/bin --network-plugin=kubenet --volume-plugin-dir=/home/kubernetes/flexvolume --node-labels=beta.kubernetes.io/fluentd-ds-ready=true,cloud.google.com/gke-nodepool=temp-pool --eviction-hard=memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5% --feature-gates=ExperimentalCriticalPodAnnotation=true
</code></pre>
<p>The problem is that I keep receiving a message telling me that my pod cannot be started because sysctl <code>net.core.somaxconn</code> is not whitelisted.</p>
<p>Is there some limitation on GKE so that I cannot whitelist a sysctl? Am I doing something wrong?</p>
| <p>Until sysctl support becomes better integrated, you can put this in your pod spec:</p>
<pre><code>spec:
  initContainers:
  - name: sysctl-buddy
    image: busybox:1.29
    securityContext:
      privileged: true
    command: ["/bin/sh"]
    args:
    - -c
    - sysctl -w net.core.somaxconn=4096 vm.overcommit_memory=1
    resources:
      requests:
        cpu: 1m
        memory: 1Mi
</code></pre>
|
<p>Hello, I'm trying to have a Pod with 2 containers: one a C++ app, one a MySQL database. I used to have MySQL deployed in its own service, but I got latency issues, so I want to try a multi-container pod.</p>
<p>But I've been struggling to connect my app to MySQL through localhost. It says:</p>
<blockquote>
<p>Can't connect to local MySQL server through socket
'/var/run/mysqld/mysqld.sock'</p>
</blockquote>
<p>Here is my kubernetes.yaml. Please I need help :(</p>
<pre><code># Database setup
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storage-camera
  labels:
    group: camera
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: camera-pv
  labels:
    group: camera
spec:
  storageClassName: db-camera
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: storage-camera
---
# Service setup
apiVersion: v1
kind: Service
metadata:
  name: camera-service
  labels:
    group: camera
spec:
  ports:
  - port: 50052
    targetPort: 50052
  selector:
    group: camera
    tier: service
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: camera-service
  labels:
    group: camera
    tier: service
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 60
  template:
    metadata:
      labels:
        group: camera
        tier: service
    spec:
      containers:
      - image: asia.gcr.io/test/db-camera:latest
        name: db-camera
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: camera-persistent-storage
          mountPath: /var/lib/mysql
      - name: camera-service
        image: asia.gcr.io/test/camera-service:latest
        env:
        - name: DB_HOST
          value: "localhost"
        - name: DB_PORT
          value: "3306"
        - name: DB_NAME
          value: "camera"
        - name: DB_ROOT_PASS
          value: "password"
        ports:
        - name: http-cam
          containerPort: 50052
      volumes:
      - name: camera-persistent-storage
        persistentVolumeClaim:
          claimName: camera-pv
      restartPolicy: Always
</code></pre>
| <p>Your MySQL client is configured to use a socket and not talk over the network stack, cf. the <a href="https://dev.mysql.com/doc/refman/5.5/en/connecting.html" rel="nofollow noreferrer">MySQL documentation</a>:</p>
<blockquote>
<p>On Unix, MySQL programs treat the host name localhost specially, in a
way that is likely different from what you expect compared to other
network-based programs. For connections to localhost, MySQL programs
attempt to connect to the local server by using a Unix socket file.
This occurs even if a --port or -P option is given to specify a port
number. To ensure that the client makes a TCP/IP connection to the
local server, use --host or -h to specify a host name value of
127.0.0.1, or the IP address or name of the local server. You can also specify the connection protocol explicitly, even for localhost, by
using the --protocol=TCP option.</p>
</blockquote>
<p>If you still want <code>camera-service</code> to talk over the file system socket, you need to mount the file system for <code>camera-service</code> as well. Currently you only mount it for <code>db-camera</code>.</p>
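<p>If switching to TCP is acceptable, a minimal sketch of the change would be to point the app at 127.0.0.1 instead of localhost in the deployment's env (assuming the C++ app honours <code>DB_HOST</code>):</p>
<pre><code>env:
- name: DB_HOST
  value: "127.0.0.1"   # forces a TCP connection instead of the Unix socket
- name: DB_PORT
  value: "3306"
</code></pre>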
|
<p>I'm using Fluentd with Elasticsearch for logs from Kubernetes but I noticed that some JSON logs cannot be correctly indexed because JSON is stored as string.</p>
<p>Logs from kubectl logs look like:</p>
<pre><code>{"timestamp":"2016-11-03T15:48:12.007Z","level":"INFO","thread":"cromwell-system-akka.actor.default-dispatcher-4","logger":"akka.event.slf4j.Slf4jLogger","message":"Slf4jLogger started","context":"default"}
</code></pre>
<p>But the logs saved in files under /var/log/containers/... have escaped quotes, which makes them strings instead of JSON and spoils the indexing:</p>
<pre><code>{"log":"{\"timestamp\":\"2016-11-03T15:45:07.976Z\",\"level\":\"INFO\",\"thread\":\"cromwell-system-akka.actor.default-dispatcher-4\",\"logger\":\"akka.event.slf4j.Slf4jLogger\",\"message\":\"Slf4jLogger started\",\"context\":\"default\"}\n","stream":"stdout","time":"2016-11-03T15:45:07.995443479Z"}
</code></pre>
<p>I'm trying to get logs looking like:</p>
<pre><code>{
  "log": {
    "timestamp": "2016-11-03T15:45:07.976Z",
    "level": "INFO",
    "thread": "cromwell-system-akka.actor.default-dispatcher-4",
    "logger": "akka.event.slf4j.Slf4jLogger",
    "message": "Slf4jLogger started",
    "context": "default"
  },
  "stream": "stdout",
  "time": "2016-11-03T15:45:07.995443479Z"
}
</code></pre>
<p>Can you suggest me how to do it?</p>
| <p>I ran into the same issue, however I'm using <code>fluent-bit</code>, the "C" version of <code>fluentd</code> (Ruby). Since this is an older issue, I'm answering for the benefit of others who find this.</p>
<p>In <code>fluent-bit</code> v0.13, they addressed this issue. You can now specify the parser to use through annotations. The parser can be configured to decode the log as json.</p>
<ul>
<li><a href="https://github.com/fluent/fluent-bit/issues/615" rel="nofollow noreferrer">fluent-bit issue detailing problem</a></li>
<li><a href="https://www.linux.com/blog/event/kubecon/2018/4/fluent-bit-flexible-logging-kubernetes" rel="nofollow noreferrer">blog post about annotations for specifying the parser</a></li>
<li><a href="https://fluentbit.io/documentation/0.14/parser/json.html" rel="nofollow noreferrer">json parser documentation</a> - The docker container's logs come out as json. However, your logs are <em>also</em> json. So an <a href="https://fluentbit.io/documentation/0.14/parser/decoder.html" rel="nofollow noreferrer">additional decoder</a> is needed.</li>
</ul>
<p>The final parser with decoder looks like this:</p>
<pre><code>[PARSER]
    Name          embedded-json
    Format        json
    Time_Key      time
    Time_Format   %Y-%m-%dT%H:%M:%S.%L
    Time_Keep     On
    # Command         | Decoder | Field | Optional Action
    # ================|=========|=======|=================
    Decode_Field_As     escaped   log     do_next
    Decode_Field_As     json      log
</code></pre>
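<p>To attach that parser to a workload, fluent-bit reads a pod annotation; a sketch, assuming the parser above is named <code>embedded-json</code> and annotation-based parser selection (<code>K8S-Logging.Parser On</code>) is enabled in the fluent-bit kubernetes filter (the pod name and image are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cromwell
  annotations:
    # Tell fluent-bit to parse this pod's log lines with the embedded-json parser
    fluentbit.io/parser: embedded-json
spec:
  containers:
  - name: cromwell
    image: my-cromwell-image:latest   # hypothetical image
</code></pre>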
|
<p>I have a requirement to pass cluster, namespace and pod name to AppDynamics agent from my container deployed in Kubernetes cluster.</p>
<p>I tried something as below, but that does not work.</p>
<pre><code>containers:
- env:
  - name: JAVA_OPTS
    value: -Dappdynamics.agent.nodeName=$HOST-$spec.nodeName-spec.PodName
</code></pre>
<p>and </p>
<pre><code>- name: appdynamics.agent.nodeName
  value= $HOST-$spec.nodeName-spec.PodName
</code></pre>
<p>Could anyone please help me with how to collect these details and pass them to AppD?
Thanks in advance.</p>
| <p>You can get <code>POD_NAME</code> and <code>POD_NAMESPACE</code> passing them as environment variables via <code>fieldRef</code>. </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-env
spec:
  containers:
  - name: test-container
    image: my-test-image:latest
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_POD_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
    - name: REFERENCE_EXAMPLE
      value: "/$(MY_NODE_NAME)/$(MY_POD_NAMESPACE)/$(MY_POD_NAME)/data.log"
  restartPolicy: Never
</code></pre>
<p><strong>EDIT</strong>: <em>Added example env <code>REFERENCE_EXAMPLE</code> to show how to reference variables. Thanks to <a href="https://stackoverflow.com/questions/49582349/kubernetes-how-to-refer-to-one-environment-variable-from-another">this</a> answer for pointing out the <code>$()</code> interpolation.</em></p>
<p>You can reference <code>metadata.name, metadata.namespace, metadata.labels, metadata.annotations, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP</code>, as mentioned in the documentation <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#envvarsource-v1-core" rel="noreferrer">here</a>.</p>
<p>However, <code>CLUSTERNAME</code> is not a standard property available. According to this <a href="https://github.com/kubernetes/kubernetes/pull/22043" rel="noreferrer">PR #22043</a>, the <code>CLUSTERNAME</code> should be injected to the <code>.metadata</code> field if using GCE.</p>
<p>Otherwise, you'll have to specific the <code>CLUSTERNAME</code> manually in the <code>.metadata</code> field and then use <code>fieldRef</code> to inject it as an environment variable.</p>
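<p>Putting this together for the AppDynamics case in the question, a sketch of the env section could compose the node name from the downward-API variables (the agent property name is taken from the question and not verified here; the variable names are placeholders):</p>
<pre><code>env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: JAVA_OPTS
  # $(VAR) interpolation only works for variables defined earlier in this list
  value: "-Dappdynamics.agent.nodeName=$(MY_NODE_NAME)-$(MY_POD_NAME)"
</code></pre>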
|
<p>I am serving Jupyter notebooks through a Kubernetes cluster. And I've set <code>resources.limits</code> to prevent someone from draining all of the host server's memory. </p>
<p>One problem is that after the Jupyter notebook kernels crash and automatically restart, they do not throw any OOM errors when the container exceeds its memory limit, which makes the user very confused. </p>
<p>So how can I make the jupyter notebook raise the OOM error when running with Kubernetes? </p>
| <p>If you have only one specific pod, you can monitor the events/logs, <a href="https://stackoverflow.com/a/38270491/6309">as in here</a>:</p>
<pre><code>kubectl get events --watch
kubectl logs -f podname
</code></pre>
<p>That being said, not <em>all</em> events in a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">pod lifecycle</a> are properly reported, as shown in <a href="https://github.com/kubernetes/kubernetes/issues/38532#issuecomment-282154320" rel="nofollow noreferrer"><code>kubernetes/kubernetes</code> issue 38532</a> and the (abandoned) <a href="https://github.com/kubernetes/kubernetes/pull/45682" rel="nofollow noreferrer">PR 45682</a>.<br>
But you should still see <code>OOMKilled:true</code> when <code>docker inspect</code>'ing the pod.</p>
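<p>Without shelling into the node for <code>docker inspect</code>, the same information is usually visible from the pod's status; a sketch, with <code>&lt;podname&gt;</code> standing in for your notebook pod:</p>
<pre><code># Reason for the last container termination; prints "OOMKilled" after an OOM kill
kubectl get pod &lt;podname&gt; -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# Or look for "OOMKilled" and the restart count in the full description
kubectl describe pod &lt;podname&gt;
</code></pre>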
|
<p>How do you allow all outbound traffic with Istio?</p>
<p>I tried the following:</p>
<ul>
<li>Disable the egress gateway and
bypass the sidecar with --set global.proxy.includeIPRanges="0.0.0.0/0\,0.0.0.0/0"</li>
</ul>
<p>None of the options work; it's worth mentioning that I'm using Kubernetes 1.11.2.</p>
| <p>I assume you followed <a href="https://istio.io/docs/tasks/traffic-management/egress/#calling-external-services-directly" rel="nofollow noreferrer">this</a>, that's pretty much the way to do it. </p>
<p>Make sure that you update the <code>istio-sidecar-injector</code> <code>ConfigMap</code> on your namespace and restart your pods.</p>
|
<p>I have RabbitMQ in my project, and I want the queues on one pod to also be on the other, and the information on one pod to be shared with the other pod. Is there a way to share the same volume so that both can read and write? I use GCloud.</p>
| <p>GCEPersistentDisk supports only the ReadWriteOnce and ReadOnlyMany access modes, not ReadWriteMany. So it's not possible to share a volume across two pods in read-write mode. <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Here</a> is the documentation on the same.</p>
|
<p>I have scheduled the K8s cron to run every 30 mins.</p>
<p>If the current job is still running when the next cron schedule is reached, it shouldn't create a new job but rather wait for the next schedule.</p>
<p>And repeat the same process if the previous job is still in Running state.</p>
| <p>Set the following property to <code>Forbid</code> in the CronJob YAML: </p>
<pre><code>.spec.concurrencyPolicy
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy</a></p>
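<p>A minimal sketch of where the field sits in the manifest (names, image, and schedule are placeholders; the apiVersion may differ on newer clusters):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/30 * * * *"
  # Skip a scheduled run if the previous job is still running
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-job
            image: my-job-image:latest   # hypothetical image
          restartPolicy: OnFailure
</code></pre>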
|
<p>I am currently using Kubernetes Python SDK to fetch relevant information from my k8s cluster. I am running this from outside the cluster.</p>
<p>I have a requirement to fetch the images of all the pods running within a namespace. I did look at the Docker Python SDK, but that requires me to run the script on the cluster itself, which I want to avoid.</p>
<p>Is there a way to get this done ?</p>
<p>TIA</p>
| <blockquote>
<p>that requires me to be running the script on the cluster itself</p>
</blockquote>
<p>No, it should not: the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">kubernetes-client python</a> performs operations similar to <strong><a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer"><code>kubectl</code></a></strong> calls (as <a href="https://github.com/kubernetes-client/python/blob/e057f273069de445a2d5a250ac5fe37d79671f3b/examples/notebooks/intro_notebook.ipynb" rel="nofollow noreferrer">detailed here</a>).<br>
And <code>kubectl</code> calls can be done from any client with a properly set <code>.kube/config</code> file.</p>
<p>Once you get the image name from a <code>kubectl describe po/mypod</code>, you might need to docker pull that image locally if you want more (like a docker history).</p>
<p>The <a href="https://stackoverflow.com/users/3219658/raks">OP Raks</a> adds <a href="https://stackoverflow.com/questions/52685119/fetching-docker-image-information-using-python-sdk/52686099?noredirect=1#comment92326426_52686099">in the comments</a>:</p>
<blockquote>
<p>I wanted to know if there is a python client API that actually gives me an option to do docker pull/save/load of an image as such</p>
</blockquote>
<p>The <a href="https://docker-py.readthedocs.io/en/stable/images.html" rel="nofollow noreferrer"><strong>docker-py</strong></a> library can pull/load/save images.</p>
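<p>Since the question is specifically about listing pod images from outside the cluster, here is a minimal sketch with the kubernetes-client library (assumes a working local kubeconfig; the namespace is a placeholder):</p>
<pre><code>from kubernetes import client, config

config.load_kube_config()          # uses your local ~/.kube/config
v1 = client.CoreV1Api()
namespace = "default"              # assumption: adjust to your namespace

for pod in v1.list_namespaced_pod(namespace).items:
    # spec.containers holds the declared images; container_statuses also carries image IDs
    images = [c.image for c in pod.spec.containers]
    print(pod.metadata.name, images)
</code></pre>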
|
<p>I was following this URL: <a href="https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-with-minikube">How to use local docker images with Minikube?</a>
I couldn't add a comment there, so I thought of putting my question here:</p>
<p>On my laptop, I have Linux Mint OS. Details as below:</p>
<pre><code>Mint version 19,
Code name : Tara,
PackageBase : Ubuntu Bionic
Cinnamon (64-bit)
</code></pre>
<p>As per one of the answers on the above-referenced link:</p>
<ol>
<li>I started minikube and checked pods and deployments</li>
</ol>
<blockquote>
<pre><code>xxxxxxxxx:~$ pwd
/home/sj
xxxxxxxxxx:~$ minikube start
xxxxxxxxxx:~$ kubectl get pods
xxxxxxxxxx:~$ kubectl get deployments
</code></pre>
</blockquote>
<p>I ran command docker images</p>
<pre><code>xxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<username>/spring-docker-01 latest e10f88e1308d 6 days ago 640MB
openjdk 8 81f83aac57d6 4 weeks ago 624MB
mysql 5.7 563a026a1511 4 weeks ago 372MB
</code></pre>
<ol start="2">
<li>I ran below command: </li>
</ol>
<blockquote>
<p>eval $(minikube docker-env)</p>
</blockquote>
<ol start="3">
<li><p>Now when I check docker images, looks like as the <a href="https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon" rel="nofollow noreferrer">README</a> describes, it reuses the Docker daemon from Minikube with eval $(minikube docker-env).</p>
<p>xxxxxxxxxxxxx:~$ docker images</p></li>
</ol>
<blockquote>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
nginx alpine 33c5c6e11024 9 days ago 17.7MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 5 weeks ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 5 weeks ago 122MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 6 months ago 97MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 6 months ago 148MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 6 months ago 225MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 6 months ago 50.4MB
k8s.gcr.io/etcd-amd64 3.1.12 52920ad46f5b 6 months ago 193MB
k8s.gcr.io/kube-addon-manager v8.6 9c16409588eb 7 months ago 78.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.8 c2ce1ffb51ed 9 months ago 41MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.8 6f7f2dc7fab5 9 months ago 42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.8 80cc5ea4b547 9 months ago 50.5MB
k8s.gcr.io/pause-amd64 3.1 da86e6ba6ca1 9 months ago 742kB
gcr.io/k8s-minikube/storage-provisioner v1.8.1 4689081edb10 11 months ago 80.8MB
k8s.gcr.io/echoserver 1.4 a90209bb39e3 2 years ago 140MB
</code></pre>
</blockquote>
<p><em>Note: if you noticed, the docker images command lists different images before and after step 2.</em></p>
<ol start="4">
<li>As I didn't see the image that I wanted to put on minikube, I pulled it from my docker hub.</li>
</ol>
<blockquote>
<pre><code>xxxxxxxxxxxxx:~$ docker pull <username>/spring-docker-01
Using default tag: latest
latest: Pulling from <username>/spring-docker-01
05d1a5232b46: Pull complete
5cee356eda6b: Pull complete
89d3385f0fd3: Pull complete
80ae6b477848: Pull complete
40624ba8b77e: Pull complete
8081dc39373d: Pull complete
8a4b3841871b: Pull complete
b919b8fd1620: Pull complete
2760538fe600: Pull complete
48e4bd518143: Pull complete
Digest: sha256:277e8f7cfffdfe782df86eb0cd0663823efc3f17bb5d4c164a149e6a59865e11
Status: Downloaded newer image for <username>/spring-docker-01:latest
</code></pre>
</blockquote>
<ol start="5">
<li>Verified if I can see that image using "docker images" command.</li>
</ol>
<blockquote>
<pre><code>xxxxxxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<username>/spring-docker-01 latest e10f88e1308d 6 days ago 640MB
nginx alpine 33c5c6e11024 10 days ago 17.7MB
</code></pre>
</blockquote>
<ol start="6">
<li>Then I tried to build the image as stated in the referenced link's steps.</li>
</ol>
<blockquote>
<pre><code>xxxxxxxxxx:~$ docker build -t <username>/spring-docker-01 .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/sj/Dockerfile: no such file or directory
</code></pre>
</blockquote>
<p><strong>As the error states that the Dockerfile doesn't exist at that location, I am not sure where exactly I can see the Dockerfile for the image I pulled from Docker Hub.</strong></p>
<p>Looks like I have to go to the location where the image has been pulled, and from that location I need to run the above-mentioned command. Please correct me if I'm wrong.</p>
<p>Below are the steps, I will be doing after I fix the above-mentioned issue.</p>
<pre><code># Run in minikube
kubectl run hello-foo --image=myImage --image-pull-policy=Never
# Check that it's running
kubectl get pods
</code></pre>
<hr>
<p>UPDATE-1</p>
<p>There is a mistake in the above steps.
Step 6 is not needed. The image has already been pulled from Docker Hub, so there is no need for the <code>docker build</code> command.</p>
<p>With that, I went ahead and followed instructions as mentioned by @aurelius in response.</p>
<pre><code>xxxxxxxxx:~$ kubectl run sdk-02 --image=<username>/spring-docker-01:latest --image-pull-policy=Never
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/sdk-02 created
</code></pre>
<p>Checked pods and deployments</p>
<pre><code>xxxxxxxxx:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sdk-02-b6db97984-2znlt 1/1 Running 0 27s
xxxxxxxxx:~$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sdk-02 1 1 1 1 35s
</code></pre>
<p>Then I exposed the deployment on port 8084, as I was already using other ports like 8080 through 8083.</p>
<pre><code>xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8084
service/sdk-02 exposed
</code></pre>
<p>Then I verified that the service had been started, checked that there was no issue on the Kubernetes dashboard, and then checked the URL:</p>
<pre><code>xxxxxxxxx:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h
sdk-02 NodePort 10.100.125.120 <none> 8084:30362/TCP 13s
xxxxxxxxx:~$ minikube service sdk-02 --url
http://192.168.99.101:30362
</code></pre>
<p>When I tried to open URL: <a href="http://192.168.99.101:30362" rel="nofollow noreferrer">http://192.168.99.101:30362</a> in browser I got message:</p>
<pre><code>This site can’t be reached
192.168.99.101 refused to connect.
Search Google for 192 168 101 30362
ERR_CONNECTION_REFUSED
</code></pre>
<p><strong>So the question : Is there any issue with steps performed?</strong></p>
<hr>
<p>UPDATE-2</p>
<p>The issue was with below step:</p>
<pre><code>xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8084
service/sdk-02 exposed
</code></pre>
<p>Upon checking the Dockerfile of my image <code><username>/spring-docker-01:latest</code>, I saw I was exposing it on 8083, something like <code>EXPOSE 8083</code>.
Maybe that was causing the issue.
So I went ahead and changed the expose command:</p>
<pre><code>xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8083
service/sdk-02 exposed
</code></pre>
<p>And then it started working.</p>
<p><em>If anyone has something to add to this, please feel free.</em></p>
<p><strong>However, I am still not sure where exactly I can see the Dockerfile for the image I pulled from Docker Hub.</strong></p>
| <p>For your UPDATE-2 question, this should also help you understand the port exposed in the Dockerfile versus in the <code>kubectl expose</code> command.</p>
<p><strong>Dockerfile:</strong></p>
<blockquote>
<p>The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published.</p>
</blockquote>
<p>For more details, see <a href="https://docs.docker.com/engine/reference/builder/#expose" rel="nofollow noreferrer">EXPOSE</a>.</p>
<p><strong>Kubectl expose:</strong></p>
<blockquote>
<p><strong>--port</strong>: The port that the service should serve on. Copied from the resource being exposed, if unspecified</p>
<p><strong>--target-port</strong>: Name or number for the port on the container that the service should direct traffic to. Optional.</p>
</blockquote>
<p>For more details, see <a href="https://www.mankier.com/1/kubectl-expose" rel="nofollow noreferrer">kubectl expose</a>.</p>
<p>So I think you should add the <code>--target-port</code> parameter with the port that you exposed in the Dockerfile; then the port mapping will be correct.</p>
|
<p>I did a customized installation of Kubernetes the hard way and installed it successfully on a 2-node cluster in Hyper-V (1 master, 1 worker). Everything works, but there's one thing that makes me scratch my head, and I was wondering if anyone could give me insight into why it's happening.</p>
<pre><code>etcdctl --help
</code></pre>
<p>Gives me the etcd v2 version of the help, and it also gives me the following Warning.</p>
<blockquote>
<p>WARNING: Environment variable ETCDCTL_API is not set; defaults to
etcdctl v2. Set environment variable ETCDCTL_API=3 to use v3 API or
ETCDCTL_API=2 to use v2 API.</p>
</blockquote>
<p>If I set the environment variable and run the command again, it's ignored:</p>
<pre><code>ETCDCTL_API=3
etcdctl --help
</code></pre>
<p>But if I do</p>
<pre><code>ETCDCTL_API=3 etcdctl --help
</code></pre>
<p>Then it gives me the etcd v3 version of the help. I also need to put ETCDCTL_API=3 in front of etcdctl snapshot save for it to be recognized as a valid command.</p>
<pre><code>ETCDCTL_API=3 etcdctl snapshot save ~/etcd.backup \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
</code></pre>
<p>Why is this?</p>
| <p>Well, it turned out to be a lack of Linux knowledge.
When I was following a few Kubernetes how-to guides I used Bash variables all the time; what I didn't realize is that there's a difference between normal variables and environment variables.</p>
<p>I was doing: </p>
<pre><code>ETCDCTL_API=3
echo $ETCDCTL_API
3
</code></pre>
<p>And it looked right, but then I remembered the printenv command and didn't see what I expected. That's when I figured out there was a difference between environment variables and regular variables.</p>
<p>I had to add export in front of the variable statement, to make it an environment variable.</p>
<p>What's really cool is that after reading this <br>
<a href="https://github.com/etcd-io/etcd/blob/master/etcdctl/README.md" rel="noreferrer">https://github.com/etcd-io/etcd/blob/master/etcdctl/README.md</a></p>
<p>I was able to make the above, become nice short commands like this: </p>
<pre><code>export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/etcd/ca.pem
export ETCDCTL_CERT=/etc/etcd/kubernetes.pem
export ETCDCTL_KEY=/etc/etcd/kubernetes-key.pem
etcdctl member list --endpoints=https://127.0.0.1:2379
etcdctl snapshot save ~/etcd.backup
</code></pre>
|
<p>I deployed a squid proxy in each namespace because I want to access the services from outside via the squid proxy. Thus I need to add the line below to squid.conf so that I can access services using just their service names:</p>
<pre><code>append_domain .${namespace}.svc.cluster.local
</code></pre>
<p>Here is my problem:<br>
I can get <code>${namespace}</code> via <code>metadata.namespace</code> inside a pod, but how can I get the cluster domain? Is it possible?<br>
I've tried this, but it returned an error when creating the pod:</p>
<pre><code>- name: POD_CLUSERDOMAIN
  valueFrom:
    fieldRef:
      fieldPath: metadata.clusterName
</code></pre>
<p>Thanks for your help.</p>
| <p>Alright, I failed to get the cluster domain directly inside a pod, but I found another way to reach the goal -- retrieve the whole host domain from the <code>search</code> domain in <code>resolv.conf</code>.</p>
<p>Here's the detail:</p>
<ul>
<li>keep the Dockerfile unmodified</li>
<li><p>add a <code>command</code> item to deployment.yaml</p>
<pre><code>image: squid:3.5.20
command: ["/bin/sh","-c"]
args: [ "echo append_domain .$(awk -v s=search '{if($1 == s)print $2}' /etc/resolv.conf) >> /etc/squid/squid.conf; /usr/sbin/squid -N" ]
</code></pre></li>
</ul>
<p>This will add a line like <code>append_domain .default.svc.cluster.local</code> to the end of the file <code>/etc/squid/squid.conf</code>; then we can access the services from outside via the squid proxy using just the <code>service</code> name.</p>
|
<p>Looking into Kubernetes documentation:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="noreferrer">Pod Security Policy</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer">Pod Security Context</a></li>
</ul>
<p>Mmmm... aren't eventually they doing the same? What is the difference?</p>
| <p>I have no idea why folks are down-voting this question, it's spot on and actually we've got our docs to blame and not the OP. OK, here goes:</p>
<p>The pod security context (which is preceded by and largely based on OpenShift <a href="https://docs.openshift.com/container-platform/3.9/admin_guide/manage_scc.html" rel="noreferrer">Security Context Constraints</a>) allows you (as a developer?) to define runtime restrictions and/or settings on a per-pod basis.</p>
<p>But how do you enforce this? How do you make sure that folks are actually defining the constraints? That's where pod security policies (PSP) come into play: as a cluster or namespace admin you can define and enforce those security context-related policies using PSPs. See also the <a href="https://kubernetes-security.info/" rel="noreferrer">Kubernetes Security</a> book for more details. </p>
|
<p>I have a Cassandra cluster running in Kubernetes on AWS (using the incubator Helm chart). When I expose the nodes with a load balancer, the DataStax driver is unable to directly connect to the nodes in the way it wants, because it tries to use the IPs internal to the Kubernetes network. But it can still successfully read and write through the load balancer. <a href="https://docs.datastax.com/en/dse-planning/doc/planning/planningAntiPatterns.html#planningAntiPatterns__AntiPatLoadBal" rel="nofollow noreferrer">This is not recommended however</a>, so I am looking for a proper solution.</p>
<p>Is there a way to have datastax connect properly with this setup? Would it require changing the <code>cassandra.yaml</code> inside the container (yuck)?</p>
| <p>First of all, you can't use a LoadBalancer for this purpose.</p>
<p>Set up the service like so:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  type: NodePort
  ports:
  - port: 9042
    nodePort: 30042
  selector:
    app: cassandra
</code></pre>
<p>The nodePort will be available outside the Kubernetes cluster. The nodePort has to be between 30000-32767.</p>
<p>From the DataStax driver, connect to K8S_NODE_IP:NODEPORT (not the pod IP).</p>
|
<p>I deployed Kubernetes on a bare metal dedicated server using <code>conjure-up kubernetes</code> on Ubuntu 18.04 LTS. This also means the nodes are LXD containers.</p>
<p>I need persistent volumes for Elasticsearch and MongoDB, and after some research I decided that the simplest way of getting that to work in my deployment was an NFS share.
I created an NFS share in the host OS, with the following configuration:</p>
<blockquote>
<p>/srv/volumes 127.0.0.1(rw) 10.78.69.*(rw,no_root_squash)</p>
</blockquote>
<p><code>10.78.69.*</code> appears to be the bridge network used by Kubernetes, at least looking at ifconfig there's nothing else.</p>
<p>Then I proceeded to create two folders, /srv/volumes/1 and /srv/volumes/2.
I created two PVs from these folders, with this configuration for the first (the second is similar):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-pv1
spec:
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /srv/volumes/1
    server: 10.78.69.1
</code></pre>
<p>Then I deploy the Elasticsearch helm chart (<a href="https://github.com/helm/charts/tree/master/incubator/elasticsearch" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/incubator/elasticsearch</a>) and it creates two claims which successfully bind to my PVs.</p>
<p>The issue is that afterwards the containers seem to encounter errors:</p>
<blockquote>
<p>Error: failed to start container "sysctl": Error response from daemon: linux runtime spec devices: lstat /dev/.lxc/proc/17848/fdinfo/24: no such file or directory
Back-off restarting failed container</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/NckA5.png" rel="nofollow noreferrer">Pods view</a></p>
<p><a href="https://i.stack.imgur.com/Yqogu.png" rel="nofollow noreferrer">Persistent Volume Claims view</a></p>
<p>I'm kinda stuck here. I've tried searching for the error but I haven't been able to find a solution to this issue. </p>
<p>Previously before I set the allowed IP in <code>/etc/exports</code> to <code>10.78.69.*</code> Kubernetes would tell me it got "permission denied" from the NFS server while trying to mount, so I assume that now mounting succeeded, since that error disappeared.</p>
<p><strong>EDIT:</strong> </p>
<p>I decided to purge the helm deployment and try again, this time with a different storage type, local-storage volumes. I created them following the guide from Canonical, and I know they work because I set up one for MongoDB this way and it works perfectly.</p>
<p>The configuration for the elasticsearch helm deployment changed since now I have to set affinity for the nodes on which the persistent volumes were created:</p>
<p><code>values.yaml</code>:</p>
<pre><code>data:
  replicas: 1,
  nodeSelector:
    elasticsearch: data
master:
  replicas: 1,
  nodeSelector:
    elasticsearch: master
client:
  replicas: 1,
cluster:
  env: {MINIMUM_MASTER_NODES: "1"}
</code></pre>
<p>I deployed using </p>
<blockquote>
<p>helm install --name site-search -f values.yaml incubator/elasticsearch</p>
</blockquote>
<p>These are the only changes, however elasticsearch still presents the same issues.</p>
<p>Additional information:</p>
<p><code>kubectl version</code>:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>The elasticsearch image is the default one in the helm chart:</p>
<blockquote>
<p>docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.1</p>
</blockquote>
<p>The various pods' (master, client, data) logs are empty.
The error is the same.</p>
| <p>I was able to solve the issue by running <code>sysctl -w vm.max_map_count=262144</code> myself on the host machine, and removing the "sysctl" init container which was trying to do this unsuccessfully.</p>
|
<p>I'm developing an application that consists of two containers. I want to deploy them in Kubernetes into one Pod, since I want the two services to be behind one IP address. However, I'm having a hard time trying to connect the Kubernetes Services with the containers.</p>
<p>How could I write a <code>deployment.yml</code> file, so that when the user calls a <code>x.x.x.x:port1</code>, the request is forwarded to the first container, and when the <code>x.x.x.x:port2</code> is called, the request is forwarded to the second container. How could I specify the Services?</p>
<p>Here's what I have until now:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my_app
spec:
  containers:
  - name: first_container
    image: first_image
  - name: second_container
    image: second_image
---
apiVersion: v1
kind: Service
...
</code></pre>
| <p>In your containers section you need to define a containerPort for each:</p>
<pre><code>containers:
- name: first_container
  image: first_image
  ports:
  - containerPort: port1
- name: second_container
  image: second_image
  ports:
  - containerPort: port2
</code></pre>
<p>And then in the ports section of the service definition you need to point the targetPorts of the service at those ports like in <a href="https://stackoverflow.com/a/42272547/9705485">https://stackoverflow.com/a/42272547/9705485</a></p>
|
<p>I'm looking for an option to list all pod names.</p>
<p>How can I do it without awk (or cut)? Right now I'm using this command:</p>
<pre><code>kubectl get --no-headers=true pods -o name | awk -F "/" '{print $2}'
</code></pre>
| <p>Personally I prefer this method because it relies only on kubectl, is not very verbose and we don't get the pod/ prefix in the output:</p>
<pre><code>kubectl get pods --no-headers -o custom-columns=":metadata.name"
</code></pre>
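<p>Another option, in case jsonpath output fits your script better (prints the names space-separated on one line; pipe through <code>tr ' ' '\n'</code> if you want one per line):</p>
<pre><code>kubectl get pods -o jsonpath='{.items[*].metadata.name}'
</code></pre>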
|
<p>When I run my Pod I get: the Pod (cas-de) is invalid, spec: forbidden: pod updates may not change fields other than <code>spec.containers[*].image</code>, <code>spec.initContainers[*].image</code>, <code>spec.activeDeadlineSeconds</code> or <code>spec.tolerations</code> (only additions to existing tolerations).</p>
<p>However, I searched on the Kubernetes website and I didn't find anything wrong
(I really don't understand where my mistake is).</p>
<p>Is it better to set <code>volumeMounts</code> in a Pod or in a Deployment? </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  containers:
  - name: ds-mg-cas
    image: "docker-all.xxx.net/library/ds-mg-cas:latest"
    imagePullPolicy: Always
    ports:
    - containerPort: 8443
    - containerPort: 6402
    env:
    - name: JAVA_APP_CONFIGS
      value: "/apps/ds-cas/configs"
    - name: JAVA_EXTRA_PARAMS
      value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
    volumeMounts:
    - name: ds-cas-config
      mountPath: "/apps/ds-cas/context"
  volumes:
  - name: ds-cas-config
    hostPath:
      path: "/apps/ds-cas/context"
</code></pre>
| <p>The YAML template itself is valid. Some fields that are forbidden to change were probably modified, and then <code>kubectl apply ...</code> was executed.</p>
<p>This looks more like a development workflow issue. The solution is to delete the existing pod with <code>kubectl delete pod cas-de</code> and then execute <code>kubectl apply -f file.yaml</code> or <code>kubectl create -f file.yaml</code>.</p>
|
<p>I've configured a kubernetes cluster with metrics-server (as an aggregated apiserver) replacing heapster. kubectl top works fine, as do the raw endpoints in the metrics.k8s.io/v1beta1 api group. HPA, however, does not. controller-manager logs show the following errors (and no others):</p>
<pre><code>E1008 10:45:18.462447 1 horizontal.go:188] failed to compute desired number of replicas based on listed metrics for Deployment/kube-system/nginx: failed to get cpu utilization: missing request for cpu on container nginx in pod kube-system/nginx-64f497f8fd-7kr96
I1008 10:45:18.462511 1 event.go:221] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"kube-system", Name:"nginx", UID:"387f256e-cade-11e8-9cfa-525400c042d5", APIVersion:"autoscaling/v2beta1", ResourceVersion:"3367", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' missing request for cpu on container nginx in pod kube-system/nginx-64f497f8fd-7kr96
I1008 10:45:18.462529 1 event.go:221] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"kube-system", Name:"nginx", UID:"387f256e-cade-11e8-9cfa-525400c042d5", APIVersion:"autoscaling/v2beta1", ResourceVersion:"3367", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: missing request for cpu on container nginx in pod kube-system/nginx-64f497f8fd-7kr96
</code></pre>
<p>metrics-server spec:</p>
<pre><code>spec:
  containers:
  - args:
    - --kubelet-preferred-address-types=InternalIP
    image: k8s.gcr.io/metrics-server-amd64:v0.3.1
    imagePullPolicy: Always
    name: metrics-server
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tmp
      name: tmp-dir
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: metrics-server
  serviceAccountName: metrics-server
  terminationGracePeriodSeconds: 30
  volumes:
  - emptyDir: {}
    name: tmp-dir
</code></pre>
<p>controller-manager is running with</p>
<pre><code>--horizontal-pod-autoscaler-use-rest-clients="true"
</code></pre>
<p>k8s version 1.11.3</p>
<p>Any ideas?</p>
| <p>Turns out this was me being stupid (and nothing to do with metrics-server).</p>
<p>I was testing on a deployment where the pod containers did not have any setting for CPU request.</p>
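<p>For reference, CPU-based HPA needs a CPU request on every container of the target pods; a sketch of the fix for a deployment like the nginx one in the logs (the image and request value are placeholders):</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        resources:
          requests:
            cpu: 100m   # HPA computes utilization relative to this request
</code></pre>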
|
<p>Apologies if this is a really simple question - I am following the hello-minikube tutorial at the Kubernetes link below (running on Mac OS).</p>
<p><a href="https://kubernetes.io/docs/tutorials/hello-minikube/#create-your-node-js-application" rel="nofollow noreferrer">Minikube tutorial</a></p>
<p>I created a deployment on port 8380 as 8080 is in use,</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.100.248.81 <none> 8380/TCP 11s
</code></pre>
<p>I also exposed the deployment, but when I try to curl or open the app URL I get connection refused.</p>
<blockquote>
<p>Failed to connect to localhost port 8380: Connection refused</p>
</blockquote>
<p>Also if I specify <code>--type=LoadBalancer</code> during the expose step - that also fails to connect.</p>
<p>Any help would be much appreciated. </p>
| <p>I've recreated all the steps from the tutorial you have mentioned.
Your error only occurs when you do not change the port from 8080 to 8380, in one of the steps provided in the documentation. After you change it in all 3 places, it works perfectly fine.
What I suggest is checking whether you changed the port in the <strong>server.js</strong> file - as it is used by the Dockerfile in the build phase:</p>
<pre><code>var www = http.createServer(handleRequest);
www.listen(8080); #->8380
</code></pre>
<p>Then in the Dockerfile in <code>EXPOSE 8080</code> <strong># -> 8380</strong>.
And the last place is while running the deployment:</p>
<pre><code>kubectl run hello-node --image=hello-node:v1 --port=8380 --image-pull-policy=Never
</code></pre>
<p>I've tested this with <code>--type=LoadBalancer</code>.</p>
|
<p><strong>tl;dr</strong> How do you reference an image in a Kubernetes <code>Pod</code> when the image is from a private docker registry hosted on the same k8s cluster without a separate DNS entry for the registry?</p>
<p>In an on-premise Kubernetes deployment, I have setup a private Docker registry using the <a href="https://github.com/helm/charts/tree/master/stable/docker-registry" rel="noreferrer">stable/docker-registry</a> helm chart using a self-signed certificate. This is on-premise and I can't setup a DNS record to give the registry it's own URL. I wish to use these manifests as templates, so I don't want to hardcode any environment specific config.</p>
<p>The docker registry service is of type <code>ClusterIP</code> and looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
  - port: 443
    protocol: TCP
    name: registry
    targetPort: 5000
  selector:
    app: docker-registry
</code></pre>
<p>If I've pushed an image to this registry manually (or in the future via a Jenkins build pipeline), how would I reference that image in a <code>Pod</code> spec?</p>
<p>I have tried:</p>
<pre><code>containers:
- name: my-image
  image: docker-registry.devops.svc.cluster.local/my-image:latest
  imagePullPolicy: IfNotPresent
</code></pre>
<p>But I received an error about the node host not being able to resolve <code>docker-registry.devops.svc.cluster.local</code>. I think the Docker daemon on the k8s node can't resolve that URL because it is an internal k8s DNS record.</p>
<pre><code>Warning Failed 20s (x2 over 34s) kubelet, ciabdev01-node3
Failed to pull image "docker-registry.devops.svc.cluster.local/hadoop-datanode:2.7.3":
rpc error: code = Unknown desc = Error response from daemon: Get https://docker-registry.devops.svc.cluster.local/v2/: dial tcp: lookup docker-registry.devops.svc.cluster.local: no such host
Warning Failed 20s (x2 over 34s) kubelet, node3 Error: ErrImagePull
</code></pre>
<p>So, how would I reference an image on an internally hosted docker registry in this on-premise scenario?</p>
<p>Is my only option to use a service of type <code>NodePort</code>, reference one of the node's hostname in the <code>Pod</code> spec, and then configure each node's docker daemon to ignore the self signed certificate?</p>
| <p>Docker uses DNS settings configured on the Node, and, by default, it does not see DNS names declared in the Kubernetes cluster.</p>
<p>You can try to use one of the following solutions:</p>
<ol>
<li><p>Use the IP address from <code>ClusterIP</code> field in "docker-registry" Service description as docker registry name. This address is static until you recreate the service. Also, you can add this IP address to <code>/etc/hosts</code> on each node.</p>
<p>For example, you can add a <code>10.11.12.13 my-docker-registry</code> line to the <code>/etc/hosts</code> file. Then you can use <code>10.11.12.13:5000</code> or <code>my-docker-registry:5000</code> as the docker registry name for the <code>image</code> field in the Pod description.</p></li>
<li><p>Expose "docker-registry" Service outside the cluster using <code>type: NodePort</code>. Than use <code>localhost:<exposed_port></code> or <code><one_of_nodes_name>:<exposed_port></code> as docker registry name for <code>image</code> field in Pods description.</p></li>
</ol>
|
<p>I have two applications - <strong>app1</strong> and <strong>app2</strong>, where <strong>app1</strong> is a <code>config server</code> that holds configs for <strong>app2</strong>. I have defined <code>/readiness</code> endpoint in <strong>app1</strong> and need to wait till it returns <code>OK</code> status to start up pods of <strong>app2</strong>. </p>
<p>It's crucial that the deployment of <strong>app2</strong> waits until <code>kubernetes</code> receives an <code>Http Status OK</code> from the <strong>/readiness</strong> endpoint in <strong>app1</strong>, as it's a configuration server and holds crucial configs for app2.</p>
<p>Is it possible to do this kind of deployment dependency?</p>
| <p>You can use <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use" rel="noreferrer"><code>initContainers</code></a>. The following is an example of how you can do it in your <code>YAML</code> file:</p>
<pre><code>initContainers:
- name: wait-for-other-pod
  image: docker.some.image
  args:
  - /bin/sh
  - -c
  - >
    set -x;
    while [ $(curl -sw '%{http_code}' "http://www.<your_pod_health_check_end_point>.com" -o /dev/null) -ne 200 ]; do
      sleep 15;
    done
</code></pre>
<p>I have used <code>curl</code> to hit the health check endpoint, you can use any other UNIX command to check if the other pod is ready.</p>
<p>If you have a dependency on k8s resources, you can make use of <a href="https://github.com/stackanetes/kubernetes-entrypoint" rel="noreferrer">stackanetes/kubernetes-entrypoint</a> example:</p>
<pre><code>initContainers:
- command:
  - kubernetes-entrypoint
  name: init-dependency-check
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: DEPENDENCY_SERVICE
  - name: DEPENDENCY_DAEMONSET
  - name: DEPENDENCY_CONTAINER
  - name: DEPENDENCY_POD_JSON
    value: '[{"labels":{"app.kubernetes.io/name":"postgres"}}]'
  - name: COMMAND
    value: echo done
  image: projects.registry.vmware.com/tcx/snapshot/stackanetes/kubernetes-entrypoint:latest
  securityContext:
    privileged: true
    runAsUser: 0
</code></pre>
<p>In the above example, the pod with initContainer <code>init-dependency-check</code> will wait until pod with label <code>"app.kubernetes.io/name":"postgres"</code> is in the Running state. Likewise you can make use of <code>DEPENDENCY_SERVICE</code>, <code>DEPENDENCY_DAEMONSET</code>, <code>DEPENDENCY_CONTAINER</code></p>
|
<p>I'm running into issues trying to deploy a stateful MongoDB replica set with the sidecar from cvallance while running Istio 0.8. If I leave Istio out of the mix everything works, but when Istio is enabled the mongo-sidecars can't find each other and the replica set is not configured. Below are my mongo deployment and service.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    service: mongo-test
    environment: test
  name: mongo-test
  namespace: test
spec:
  ports:
  - name: mongo
    port: 27017
  clusterIP: None
  selector:
    service: mongo-test
    role: mongo-test
    environment: test
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-test
  namespace: test
spec:
  serviceName: "mongo-test"
  replicas: 3
  selector:
    matchLabels:
      service: mongo-test
  template:
    metadata:
      labels:
        role: mongo-test
        environment: test
        service: mongo-test
    spec:
      serviceAccountName: mongo-test-serviceaccount
      terminationGracePeriodSeconds: 60
      containers:
      - name: mongo
        image: mongo:3.6.5
        resources:
          requests:
            cpu: "10m"
        command:
        - mongod
        - "--bind_ip_all"
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        resources:
          requests:
            cpu: "10m"
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo-test,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volumes.beta.kubernetes.io/storage-class: "mongo-ssd"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
</code></pre>
| <p>Istio does not support mutual TLS for StatefulSets, at least up to v1.0.2.</p>
|
<p>I've installed Kubernetes with kops on AWS, and basically every function is fine so far, except for the Dashboard.</p>
<p>I've installed it by following this URL, and received no error.
<a href="https://github.com/kubernetes/kops/blob/master/docs/addons.md#installing-kubernetes-addons" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/addons.md#installing-kubernetes-addons</a></p>
<p>However, the browser ( chrome, firefox, safari ) just shows me json text like below. The URL is like '<a href="https://api.clustername.xxxx.com/ui" rel="nofollow noreferrer">https://api.clustername.xxxx.com/ui</a>'</p>
<blockquote>
<p>"paths": [
"/apis",
"/apis/",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/healthz",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/version"
]</p>
</blockquote>
<p>I would like to see the real dashboard...
What should I do?</p>
| <p>By default the dashboard is served at this path:</p>
<pre><code>/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/
</code></pre>
<p>Check: <code>https://api.clustername.xxxx.com/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/</code></p>
|
<p>I have a Spark Cluster with one Master and 4 Workers running in a 3 Node Kubernetes Cluster.
The Spark UI and Master are exposed through Ingress/NodePort and is hence accessible from outside the Kubernetes cluster.</p>
<p>But, the worker ports are not exposed.
<strong>Due to this, the Worker logs (stdout and stderr) are not accessible through the UI.</strong>
The logs URL is getting redirected to <code><Worker1_Pod_IP:8080></code> , <code><Worker2_Pod_IP:8080></code> and so on.</p>
<p><strong>My setup is such that there are two worker pods running on the same machine. So even if I NodePort expose the workers, there will be a conflict in ports as the same port will be assigned for the two workers on the machine.</strong>
Spark History server only provides Event Logs and not Worker logs.</p>
<p>How can this be solved? Is there a way the NodePort value can be dynamically assigned for the workers</p>
| <p>I believe you are talking about the <code>SPARK_WORKER_WEBUI_PORT</code> and not the <code>SPARK_WORKER_PORT</code> as described below since that its assigned a random port.</p>
<p><a href="https://i.stack.imgur.com/W9nHa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W9nHa.png" alt="Spark"></a></p>
<p>This is a bit tricky because two workers on the same node cannot both expose the same web UI port. If you have two Spark workers per node, you could create two deployments: one that exposes the <code>SPARK_WORKER_WEBUI_PORT</code> on <code>8081</code> and another on <code>8082</code>, and also make sure that only one pod of each deployment is scheduled per node.</p>
<p>You can pass these values in the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">container environment variables</a>.</p>
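<p>As a minimal, hedged sketch (the image name and labels are placeholders for your existing worker deployment), one of the two worker deployments could pin its UI port like this:</p>

<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-worker-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spark-worker-a
  template:
    metadata:
      labels:
        app: spark-worker-a
    spec:
      containers:
      - name: spark-worker
        image: your-spark-worker-image   # assumption: your existing worker image
        env:
        - name: SPARK_WORKER_WEBUI_PORT  # pin the worker web UI to a fixed port
          value: "8081"
        ports:
        - containerPort: 8081
</code></pre>

<p>The second deployment would be identical except for <code>SPARK_WORKER_WEBUI_PORT=8082</code> and the matching <code>containerPort</code>, so each worker's UI can be exposed without a port clash.</p>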
|
<p>Does anyone know the status of Kubernetes federation v2? I mean which of the goals specified in the "Cluster Federation v2 API Structure and Workflow
" document are already addressed and which are not? In particular, I would like to know if policy-based resource placement is already supported? </p>
<p>Thanks</p>
| <p>It's still pretty much a WIP. I believe you can find most of the information in their <a href="https://github.com/kubernetes/community/tree/master/sig-multicluster" rel="nofollow noreferrer">community page</a></p>
<p>I would strongly suggest going through their <a href="https://docs.google.com/document/d/1v-Kb1pUs3ww_x0MiKtgcyTXCAuZlbVlz4_A9wS3_HXY/edit" rel="nofollow noreferrer">meeting notes</a> and their <a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP3iKP5EzMbtNT2zOZv6RCrX&disable_polymer=true" rel="nofollow noreferrer">recordings</a>. Also, if you have any specific questions feel free to join the meetings.</p>
<p>Update: There are newer projects addressing the same problem. For example:</p>
<ul>
<li><a href="https://karmada.io/" rel="nofollow noreferrer">Karmada</a></li>
</ul>
|
<p><strong>Issue:</strong></p>
<ul>
<li><code>x-forwarded-for</code> http header just shows 127.0.0.1 instead of the original ip</li>
</ul>
<p><strong>Setup</strong></p>
<ul>
<li>GKE</li>
<li>gitlab ingress controller</li>
</ul>
<p><strong>Details</strong></p>
<p>I tried to adapt the ingress rule with <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#enable-cors" rel="nofollow noreferrer">nginx CORS enablement</a> but no success.</p>
<p>Here my ingress Annotation for the service:</p>
<pre><code>nginx.ingress.kubernetes.io/cors-allow-headers: X-Forwarded-For
nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS
</code></pre>
<p>And here the output via echoheaders app:</p>
<pre><code>Hostname: backend-78dd9d4ffd-cwkvv
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=10.60.8.16
method=GET
real path=/
query=
request_version=1.1
request_scheme=http
request_uri=[REDACTED]
Request Headers:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
accept-encoding=gzip, deflate, br
accept-language=en-GB,en-US;q=0.9,en;q=0.8
cache-control=max-age=0
connection=close
cookie=_ga=[REDACTED]
host=[REDACTED]
upgrade-insecure-requests=1
user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36
x-forwarded-for=127.0.0.1 <--- why doesn't it show the source IP?
x-forwarded-host=[REDACTED]
x-forwarded-port=443
x-forwarded-proto=https
x-original-uri=/
x-real-ip=127.0.0.1 <--- why doesn't it show the source IP?
x-scheme=https
Request Body:
-no body in request-
</code></pre>
| <p><code>X-forwarded-for</code> should work out of the box with the nginx ingress controller. This works for me:</p>
<pre><code>$ curl -H 'Host: foo.bar' aws-load-balancer.us-west-2.elb.amazonaws.com/first
Hostname: http-svc-xxxxxxxxxx-xxxxx
Pod Information:
node name: ip-172-x-x-x.us-west-2.compute.internal
pod name: http-svc-xxxxxxxxx-xxxxx
pod namespace: default
pod IP: 192.168.x.x
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=192.168.x.x
method=GET
real path=/first
query=
request_version=1.1
request_uri=http://foo.bar:8080/first
Request Headers:
accept=*/*
connection=close
host=foo.bar
user-agent=curl/7.58.0
x-forwarded-for=x.x.x.x <- public IP address
x-forwarded-host=foo.bar
x-forwarded-port=80
x-forwarded-proto=http
x-original-uri=/first
x-real-ip=x.x.x.x < - same IP as x-forwarded-for
x-request-id=xxxxxxxxxxxxxxxxxx
x-scheme=http
Request Body:
-no body in request-
</code></pre>
<p>There are a few things you can try:</p>
<ul>
<li><p>If you are enabling CORS you also need the enable annotation:</p>
<pre><code>nginx.ingress.kubernetes.io/enable-cors: "true"
</code></pre></li>
<li><p>There's a <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers" rel="nofollow noreferrer">use-forwarded-headers</a> config option for nginx on your ingress controller that might need to be enabled. This would get enabled on the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="nofollow noreferrer"><code>ConfigMap</code></a> used by your nginx ingress controller (see the sketch just after this list).</p></li>
</ul>
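<p>A minimal sketch of that <code>ConfigMap</code> (the name and namespace depend on how your controller was installed; the ones below are the common defaults for the nginx ingress controller):</p>

<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumption: default controller ConfigMap name
  namespace: ingress-nginx    # assumption: default controller namespace
data:
  use-forwarded-headers: "true"
</code></pre>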
|
<p>I have an ingress-nginx controller handling traffic to my Kubernetes cluster hosted on GKE. I set it up using helm installation instructions from docs:</p>
<p><a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">Docs here</a></p>
<p>For the most part everything is working, but if I try to set cache related parameters via a <code>server-snippet</code> annotation, all of the served content that should get the cache-control headers comes back as a <code>404</code>.</p>
<p>Here's my <code>ingress-service.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-read-timeout: "4000"
nginx.ingress.kubernetes.io/proxy-send-timeout: "4000"
nginx.ingress.kubernetes.io/server-snippet: |
location ~* \.(js|css|gif|jpe?g|png)$ {
expires 1M;
add_header Cache-Control "public";
}
spec:
tls:
- hosts:
- example.com
secretName: example-com
rules:
- host: example.com
http:
paths:
- path: /
backend:
serviceName: client-cluster-ip-service
servicePort: 5000
- path: /api/
backend:
serviceName: server-cluster-ip-service
servicePort: 4000
</code></pre>
<p>Again, it's only the resources that are matched by the regex that come back as <code>404</code> (all <code>.js</code> files, <code>.css</code> files, etc.).</p>
<p>Any thoughts on why this would be happening?</p>
<p>Any help is appreciated!</p>
| <p>Those <code>location</code> blocks are <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#location" rel="noreferrer">last and/or longest match wins</a>, and since the ingress <strong>itself</strong> is not serving any such content, the nginx relies on a <a href="https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass" rel="noreferrer"><code>proxy_pass</code> directive</a> pointing at the upstream server. Thus, if you are getting 404s, it's very likely because <em>your</em> <code>location</code> is matching, thus interfering with the <code>proxy_pass</code> one. There's a pretty good chance you'd actually want <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="noreferrer"><code>configuration-snippet:</code></a> instead, likely in combination with <a href="https://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if" rel="noreferrer"><code>if ($request_uri ~* ...) {</code></a> to add the header.</p>
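<p>A hedged sketch of what that annotation could look like, simply reusing the regex and header values from your original <code>server-snippet</code> (adjust as needed):</p>

<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($request_uri ~* \.(js|css|gif|jpe?g|png)$) {
    expires 1M;
    add_header Cache-Control "public";
  }
</code></pre>

<p>Because <code>configuration-snippet</code> is injected inside the existing <code>location</code> blocks, the <code>proxy_pass</code> to your upstream service stays intact and the static assets are still served by the backend, just with the extra cache headers.</p>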
<p>One can try this locally with a trivial nginx.conf pointing at <code>python3 -m http.server 9090</code> or whatever fake upstream target.</p>
<p>Separately, for debugging nginx ingress problems, it is often invaluable to consult its actual <code>nginx.conf</code>, which one can grab from any one of the ingress Pods, and/or consulting the logs of the ingress Pods where nginx will emit helpful debugging text.</p>
|
<p>I have a Kubernetes CronJob that performs some backup jobs, and the backup files need to be uploaded to a bucket. The pod has the service account credentials mounted inside the pod at /var/run/secrets/kubernetes.io/serviceaccount, <strong>but how can I instruct gsutil to use the credentials in /var/run/secrets/kubernetes.io/serviceaccount?</strong></p>
<pre><code>lrwxrwxrwx 1 root root 12 Oct 8 20:56 token -> ..data/token
lrwxrwxrwx 1 root root 16 Oct 8 20:56 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 13 Oct 8 20:56 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 31 Oct 8 20:56 ..data -> ..2018_10_08_20_56_04.686748281
drwxr-xr-x 2 root root 100 Oct 8 20:56 ..2018_10_08_20_56_04.686748281
drwxrwxrwt 3 root root 140 Oct 8 20:56 .
drwxr-xr-x 3 root root 4096 Oct 8 20:57 ..
</code></pre>
| <p>The short answer is that the token there is not in a format that gsutil knows how to use, so you can't use it. You'll need a JSON keyfile, as mentioned in the tutorial here (except that you won't be able to use the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable):</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform</a></p>
<p>Rather than reading from the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable, Gsutil uses Boto configuration files to load credentials. The common places that it knows to look for these Boto config files are <code>/etc/boto.cfg</code> and <code>$HOME/.boto</code>. Note that the latter value changes depending on the user running the command (<code>$HOME</code> expands to different values for different users); since cron jobs usually run as a different user than the one who set up the config file, I wouldn't recommend relying on this path.</p>
<p>So, on your pod, you'll need to first create a Boto config file that references the keyfile:</p>
<pre><code># This option is only necessary if you're running an installation of
# gsutil that came bundled with gcloud. It tells gcloud that you'll be
# managing credentials manually via your own Boto config files.
$ gcloud config set pass_credentials_to_gsutil False
# Set up your boto file at /path/to/my/boto.cfg - the setup will prompt
# you to supply the /path/to/your/keyfile.json. Alternatively, to avoid
# interactive setup prompts, you could set up this config file beforehand
# and copy it to the pod.
$ gsutil config -e -o '/path/to/my/boto.cfg'
</code></pre>
<p>And finally, whenever you run gsutil, you need to tell it where to find that Boto config file which references your JSON keyfile (and also make sure that the user running the command has permission to read both the Boto config file and the JSON keyfile). If you wrote your Boto config file to one of the well-known paths I mentioned above, gsutil will attempt to find it automatically; if not, you can tell gsutil where to find the Boto config file by exporting the <code>BOTO_CONFIG</code> environment variable in the commands you supply for your cron job:</p>
<pre><code>export BOTO_CONFIG=/path/to/my/boto.cfg; /path/to/gsutil cp <src> <dst>
</code></pre>
<p><strong>Edit</strong>:</p>
<p>Note that GCE VM images come with a pre-populated file at /etc/boto.cfg. This config file tells gsutil to load a plugin that allows gsutil to contact the GCE metadata server and fetch auth tokens (corresponding to the <code>default</code> robot service account for that VM) that way. If your pod is able to read the host VM's /etc/boto.cfg file, you're able to contact the GCE metadata server, and you're fine with operations being performed by the VM's <code>default</code> service account, this solution should work out-of-the-box.</p>
|
<p>I have one Kubernetes cluster which has Metabase running. I am using the Metabase official helm configuration.</p>
<p>But when I connect to the SQL proxy from Kubernetes it always says <strong>Connections could not be acquired from the underlying database!</strong></p>
<p>I have added the Kubernetes pod IP and Nginx IP in cloud SQL proxy.</p>
| <p>Looks like you have to modify the <a href="https://github.com/helm/charts/blob/master/stable/metabase/templates/deployment.yaml" rel="nofollow noreferrer">deployment</a> variables to point to your Google Cloud SQL database.</p>
<p>This may vary depending on whether you are using PostgreSQL or MySQL for your Google Cloud SQL database.</p>
<p>With PostgreSQL you can specify a connectionURI like this:</p>
<pre><code>postgres://user:password@host:port/database?ssl=true&sslmode=require&sslfactory=org.postgresql.ssl.NonValidatingFactory"
</code></pre>
<p>With PostgreSQL and MySQL you can specify user/password/host/port.</p>
<p>To change these you can edit the deployment:</p>
<pre><code>kubectl -n <your-namespace> edit deployment metabase
</code></pre>
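<p>For illustration, the container env in that deployment would end up with entries along these lines; this is only a sketch, the exact variable names depend on the chart/image version and the values are placeholders:</p>

<pre><code>env:
- name: MB_DB_TYPE
  value: postgres
- name: MB_DB_CONNECTION_URI   # placeholder host/credentials, pointing at the Cloud SQL proxy
  value: "postgres://user:password@<cloud-sql-proxy-host>:5432/metabase?ssl=true&sslmode=require&sslfactory=org.postgresql.ssl.NonValidatingFactory"
</code></pre>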
|
<p>Is there a table that will tell me which set of API versions I should be using, given a k8s cluster version? Kubernetes docs always assume I always have a nice, up-to-date cluster (1.12 at time of writing) but platform providers don't always live on this bleeding edge so it can get frustrating quite quickly.</p>
<p>Better yet, is there a <code>kubectl</code> command I can run that will let my cluster tell me each resource type and its latest supported API version?</p>
| <p>For getting a list of all the resource types and their latest supported version, run the following:</p>
<pre><code>for kind in `kubectl api-resources | tail +2 | awk '{ print $1 }'`; do kubectl explain $kind; done | grep -e "KIND:" -e "VERSION:"
</code></pre>
<p>It should produce output like</p>
<pre><code>KIND: Binding
VERSION: v1
KIND: ComponentStatus
VERSION: v1
KIND: ConfigMap
VERSION: v1
KIND: Endpoints
VERSION: v1
KIND: Event
VERSION: v1
...
</code></pre>
<p>As @Rico mentioned, they key is in the <code>kubectl explain</code> command. This may be a little fragile since it depends on the format of the printed output, but it works for kubernetes 1.9.6</p>
<p>Also, the information can be gathered in a less efficient way from the kubernetes API docs (with links for each version) found here - <a href="https://kubernetes.io/docs/reference/#api-reference" rel="noreferrer">https://kubernetes.io/docs/reference/#api-reference</a> </p>
|
<p>My goal is to write a simple init pod to echo something into a file using a redirect (<code>></code>) for testing purposes but instead, printing the redirect and file name. Here's the relevant part of my yaml:</p>
<pre><code> initContainers:
- name: cat-to-file
image: alpine
args: [ "echo", "Hello, World", ">", "test"]
workingDir: /project
volumeMounts:
- name: project-files
mountPath: /project
</code></pre>
<p>But, the file doesn't get created and when I view the container logs via:</p>
<pre><code>kubectl logs <pod id> cat-to-file
</code></pre>
<p>It shows me:</p>
<pre><code>Hello, World, > test
</code></pre>
<p>Which makes me think it's echoing the <code>> test</code> to stdout rather than to a file named <code>test</code>.</p>
<p>What am I doing wrong here?</p>
| <p>The redirect (<code>></code>) is shell syntax; when it is passed straight through <code>args</code> there is no shell to interpret it, so <code>echo</code> just prints it literally. Wrap the command in a shell instead:</p>
<pre><code>...
args: [ "/bin/sh", "-c", "echo Hello World > test"]
...
</code></pre>
<p>This approach worked for me <a href="https://github.com/radanalyticsio/spark-operator/blob/9b6a08811738e0b10b6c2dd95578fd9a33cbecf0/src/main/java/io/radanalytics/operator/cluster/KubernetesSparkClusterDeployer.java#L230" rel="noreferrer">here</a>.</p>
|
<p>I have a Kubernetes v1.10.2 cluster and a cronjob on it.
The job config is set to:</p>
<pre><code> failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
</code></pre>
<p>But it has created more than ten jobs, which are all successful and not removed automatically.
Now I am trying to delete them manually with <code>kubectl delete job XXX</code>, but the command times out with:</p>
<pre><code>$ kubectl delete job XXX
error: timed out waiting for "XXX" to be synced
</code></pre>
<p>I want to know how I can investigate such a situation. Is there a log file for the command execution?</p>
<p>I only know the <code>kubectl logs</code> command, but it is not for such a situation.</p>
<p>"kubectl get" shows the job has already finished:</p>
<pre><code>status:
active: 1
completionTime: 2018-08-27T21:20:21Z
conditions:
- lastProbeTime: 2018-08-27T21:20:21Z
lastTransitionTime: 2018-08-27T21:20:21Z
status: "True"
type: Complete
failed: 3
startTime: 2018-08-27T01:00:00Z
succeeded: 1
</code></pre>
<p>and "kubectl describe" output as:</p>
<pre><code>$ kubectl describe job test-elk-xxx-1535331600 -ntest
Name: test-elk-xxx-1535331600
Namespace: test
Selector: controller-uid=863a14e3-a994-11e8-8bd7-fa163e23632f
Labels: controller-uid=863a14e3-a994-11e8-8bd7-fa163e23632f
job-name=test-elk-xxx-1535331600
Annotations: <none>
Controlled By: CronJob/test-elk-xxx
Parallelism: 0
Completions: 1
Start Time: Mon, 27 Aug 2018 01:00:00 +0000
Pods Statuses: 1 Running / 1 Succeeded / 3 Failed
Pod Template:
Labels: controller-uid=863a14e3-a994-11e8-8bd7-fa163e23632f
job-name=test-elk-xxx-1535331600
Containers:
xxx:
Image: test-elk-xxx:18.03-3
Port: <none>
Host Port: <none>
Args:
--config
/etc/elasticsearch-xxx/xxx.yml
/etc/elasticsearch-xxx/actions.yml
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Environment: <none>
Mounts:
/etc/elasticsearch-xxx from xxx-configs (ro)
Volumes:
xxx-configs:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: test-elk-xxx
Optional: false
Events: <none>
</code></pre>
<p>It indicates still one pod running, but I don't know how to figure out the pod name.</p>
| <p>Check if <code>kubectl describe pod <pod name></code> (associated pod of the job) still returns something, which would:</p>
<ul>
<li>mean the node is still there</li>
<li>include the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions" rel="nofollow noreferrer">pod condition</a></li>
</ul>
<p>In that state, you can then consider <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#delete-pods" rel="nofollow noreferrer">a force deletion</a>.</p>
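<p>For reference, a force deletion is usually issued like this (the pod name below is a placeholder; the namespace matches the one from the question):</p>

<pre><code>kubectl delete pod <pod-name> -n test --grace-period=0 --force
</code></pre>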
|
<p>I followed the instructions in this post:
<a href="https://stackoverflow.com/questions/47378592/how-to-bound-a-persistent-volume-claim-with-a-gcepersistentdisk">how to bound a Persistent volume claim with a gcePersistentDisk?</a></p>
<p>And when I applied that, my PVC did not bind to the PV, instead I got this error in the event list:</p>
<pre><code>14s 17s 2 test-pvc.155b8df6bac15b5b PersistentVolumeClaim Warning ProvisioningFailed persistentvolume-controller Failed to provision volume with StorageClass "standard": claim.Spec.Selector is not supported for dynamic provisioning on GCE
</code></pre>
<p>I found a github posting that suggested something that would fix this:</p>
<p><a href="https://github.com/coreos/prometheus-operator/issues/323#issuecomment-299016953" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator/issues/323#issuecomment-299016953</a></p>
<p>But unfortunately that made no difference.</p>
<p>Is there a soup-to-nuts doc somewhere telling us exactly how to use PV and PVC to create truly persistent volumes? Specifically where you can shut down the pv and pvc and restore them later, and get all your content back? Because as it seems right now, if you lose your PVC for whatever reason, you lose connection to your volume and there is no way to get it back again.</p>
| <p>The default <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer"><code>StorageClass</code></a> is not compatible with a <code>gcePesistentDisk</code>. Something like this would work:</p>
<pre><code>$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
replication-type: none
EOF
</code></pre>
<p>then on your PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pvc
labels:
app: test
spec:
accessModes:
- ReadWriteOnce
storageClassName: "slow" <== specify the storageClass
resources:
requests:
storage: 2Gi
selector:
matchLabels:
app: test
</code></pre>
<p>You can also set <em>"slow"</em> as the <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/" rel="nofollow noreferrer">default</a> <code>storageClass</code> in which case you wouldn't have to specify it on your PVC:</p>
<pre><code>$ kubectl patch storageclass slow -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</code></pre>
|
<p>I've read a couple of passages from some books written on Kubernetes as well as the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="noreferrer">page on headless services in the docs</a>. But I'm still unsure what it really actually does and why someone would use it. Does anyone have a good understanding of it, what it accomplishes, and why someone would use it?</p>
| <p>Well, I think you need some theory. There are many explanations (including the official docs) across the whole internet, but I think Marco Luksa did it the best:</p>
<blockquote>
<p>Each connection to the service is forwarded to one randomly selected
backing pod. But what if the client needs to connect to all of those
pods? What if the backing pods themselves need to each connect to all
the other backing pods. Connecting through the service clearly isn’t
the way to do this. What is? </p>
<p>For a client to connect to all pods, it needs to figure out the IP
of each individual pod. One option is to have the client call the
Kubernetes API server and get the list of pods and their IP addresses
through an API call, but because you should always strive to keep your
apps Kubernetes-agnostic, using the API server isn’t ideal</p>
<p>Luckily, Kubernetes allows clients to discover pod IPs through DNS
lookups. Usually, when you perform a DNS lookup for a service, the DNS
server returns a single IP — the service’s cluster IP. But if you tell
Kubernetes you don’t need a cluster IP for your service (you do this
by setting the clusterIP field to None in the service specification ),
the DNS server will return the pod IPs instead of the single service
IP. Instead of returning a single DNS A record, the DNS server will
return multiple A records for the service, each pointing to the IP of
an individual pod backing the service at that moment. Clients can
therefore do a simple DNS A record lookup and get the IPs of all the
pods that are part of the service. The client can then use that
information to connect to one, many, or all of them.</p>
<p>Setting the clusterIP field in a service spec to None makes the
service headless, as Kubernetes won’t assign it a cluster IP through
which clients could connect to the pods backing it.</p>
</blockquote>
<p>"Kubernetes in Action" by Marco Luksa</p>
|
<p>I have a livelinessProbe configured for my pod which does a http-get on path on the same pod and a particular port. It works perfectly. But, if I use the same settings and configure a readinessProbe it fails with the below error.</p>
<blockquote>
<p>Readiness probe failed: wsarecv: read tcp :50578->:80: An existing connection was forcibly closed by the remote host</p>
</blockquote>
<p>Actually, after a certain point I even see the liveness probes failing; I'm not sure why. The liveness probe succeeding should indicate that kube-dns is working fine and that we're able to reach the pod from the node. Here's the readinessProbe from my pod's spec:</p>
<pre class="lang-yaml prettyprint-override"><code>readinessProbe:
httpGet:
path: /<path> # -> this works for livelinessProbe
port: 80
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 10
</code></pre>
<p>Does anyone have an idea what might be going on here.</p>
| <p>I don't think it has anything to do with <code>kube-dns</code> or <code>coredns</code>. The most likely cause here is that your pod/container/application is crashing or has stopped serving requests. </p>
<p>Seems like this timeline:</p>
<ul>
<li>Pod/container comes up.</li>
<li>Liveness probe passes OK.</li>
<li>Some time passes.</li>
<li>The app probably crashes or errors out.</li>
<li>Readiness probe fails.</li>
<li>Liveness probe fails too.</li>
</ul>
<p>More information about what that error means here:
<a href="https://stackoverflow.com/questions/2582036/an-existing-connection-was-forcibly-closed-by-the-remote-host">An existing connection was forcibly closed by the remote host</a></p>
|
<p>I really don't understand this issue. In my <code>pod.yaml</code> I set the <code>persistentVolumeClaim</code>; I copied it from my last application's declaration with its PVC & PV.
I've checked that the files are in the right place!
In my Deployment file I've just set the port and the spec for the containers. </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: ds-mg-cas-pod
namespace: ds-svc
spec:
containers:
- name: karaf
image: docker-all.xxxx.net/library/ds-mg-cas:latest
env:
- name: JAVA_APP_CONFIGS
value: "/apps/ds-cas-webapp/context"
- name: JAVA_EXTRA_PARAMS
value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
volumeMounts:
- name: ds-cas-config
mountPath: "/apps/ds-cas-webapp/context"
volumes:
- name: ds-cas-config
persistentVolumeClaim:
claimName: ds-cas-pvc
</code></pre>
<p>the <code>PersistentVolume</code> & <code>PersistenteVolumeClaim</code></p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: ds-cas-pv
namespace: ds-svc
labels:
type: local
spec:
storageClassName: generic
capacity:
storage: 5Mi
accessModes:
- ReadWriteOnce
hostPath:
path: "/apps/ds-cas-webapp/context"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ds-cas-pvc
namespace: ds-svc
spec:
storageClassName: generic
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Mi
</code></pre>
<p>The error I get when I run the pod:</p>
<pre><code> java.io.FileNotFoundException: ./config/truststore.jks (No such file or directory)
</code></pre>
<p>I ran the same image manually with Docker and didn't get an error. My question is where I could have made a mistake, because I really don't see it :(
I set everything: </p>
<ul>
<li>the mountpoints</li>
<li>the ports</li>
<li>the variable</li>
</ul>
<p><em>The docker command that I used to run the container</em>:</p>
<pre><code>docker run --name ds-mg-cas-manually
-e JAVA_APP=/apps/ds-cas-webapp/cas.war
-e JAVA_APP_CONFIGS=/apps/ds-cas-webapp/context
-e JAVA_EXTRA_PARAMS="-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
-p 8443:8443
-p 6402:640
-d
-v /apps/ds-cas-webapp/context:/apps/ds-cas-webapp/context
docker-all.attanea.net/library/ds-mg-cas
/bin/sh -c
</code></pre>
| <p>Your PersistentVolumeClaim is probably bound to the wrong PersistentVolume.</p>
<p>PersistentVolumes exist cluster-wide, only PersistentVolumeClaims are attached to a namespace:</p>
<pre><code>$ kubectl api-resources
NAME SHORTNAMES APIGROUP NAMESPACED KIND
persistentvolumeclaims pvc true PersistentVolumeClaim
persistentvolumes pv false PersistentVolume
</code></pre>
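<p>To see which PersistentVolume your claim actually bound to, a quick check (using the names and namespace from your manifests) could be:</p>

<pre><code># PVs are cluster-scoped, PVCs live in a namespace
kubectl get pv
kubectl get pvc -n ds-svc
kubectl describe pv ds-cas-pv
kubectl describe pvc ds-cas-pvc -n ds-svc
</code></pre>

<p>The <code>CLAIM</code> column of <code>kubectl get pv</code> tells you which claim (if any) each volume is bound to.</p>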
|
<p>I am creating a two node Kubernetes cluster (1 master and 2 slave nodes) which will host Netflix eureka. Microservices would be created for applications which would register themselves on the Eureka server and would find other microservices to communicate from the service registry of Eureka.I want a scenario such that if any node fails, then how can we achieve high availability in this ? Also , there should be load balancing so that requests get simultaneously directed to other nodes in the cluster.</p>
<p>Can anybody let me know a solution for this ?</p>
| <blockquote>
<p>I want a scenario such that if any node fails, then how can we achieve high availability in this </p>
</blockquote>
<p>Creating a Pod directly is not a recommended approach. Let's say the node on which the Pod is running crashes; then the Pod is not rescheduled and the service is not accessible.</p>
<p>For HA (High Availability), higher level abstractions like <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a> should be used. A Deployment will create a ReplicaSet which will have multiple Pods associated with it. So, if a node on which the Pod is running crashes then the ReplicaSet will automatically reschedule the Pod on a healthy node and you will get HA.</p>
<blockquote>
<p>Also , there should be load balancing so that requests get simultaneously directed to other nodes in the cluster.</p>
</blockquote>
<p>Create a Service of <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">type LoadBalancer</a> for the Deployment and the incoming requests will be automatically redirected to the Pods on the different nodes. In this case a Load Balancer will be automatically created. And there is charge associated with the Load Balancer.</p>
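<p>As a rough, hedged sketch (the image name, labels and port are assumptions), a Deployment with multiple replicas plus a LoadBalancer Service would look something like this:</p>

<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: eureka
spec:
  replicas: 2                 # multiple replicas for HA
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: your-registry/eureka-server:latest   # assumption: your Eureka image
        ports:
        - containerPort: 8761
---
apiVersion: v1
kind: Service
metadata:
  name: eureka
spec:
  type: LoadBalancer          # spreads incoming requests across the replicas
  selector:
    app: eureka
  ports:
  - port: 8761
    targetPort: 8761
</code></pre>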
<p>If you don't want to use a Load Balancer then another approach though which is a bit more complicated and powerful is to use <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>. This will also load balance the requests across multiple nodes.</p>
<p><a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">Here</a> is a nice article explaining the difference between a Load Balancer and Ingress.</p>
<p>All the above queries are addressed directly or indirectly in the K8S documentation <a href="https://kubernetes.io/docs/home/" rel="nofollow noreferrer">here</a>.</p>
|
<p>I want to integrate a minio object storage into my minikube cluster. </p>
<p>I use the Dockerfile from the minio <a href="https://github.com/minio/minio/blob/master/Dockerfile" rel="nofollow noreferrer">git repo</a>.</p>
<p>I also added the persistent volume with the claim </p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: minio-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/mnt/data/minio"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: minio-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<p>for the minio deployment I have </p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: minio
spec:
selector:
matchLabels:
app: minio
role: master
tier: backend
replicas: 1
template:
metadata:
labels:
app: minio
role: master
tier: backend
spec:
imagePullSecrets:
- name: regcred
containers:
- name: minio
image: <secret Registry >
env:
- name: MINIO_ACCESS_KEY
value: akey
- name: MINIO_SECRET_KEY
value: skey
ports:
- containerPort: 9000
volumeMounts:
- name: data
mountPath: /data/ob
volumes:
- name: data
persistentVolumeClaim:
claimName: minio-pv-claim
</code></pre>
<p>For the service I opened up the external IP just for debugging </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: minio
labels:
app: minio
role: master
tier: backend
spec:
ports:
- port: 9000
targetPort: 9000
externalIPs:
- 192.168.99.101
selector:
app: minio
role: master
tier: backend
</code></pre>
<p>But when I start the deployment I get the error message <code>ERROR Unable to initialize backend: The disk size is less than the minimum threshold.</code></p>
<p>I assumed that 3GB should be enough. How can I solve this issue? Moreover, now that I try to delete my persistent volume, it stays in the Terminating status. </p>
<p>How can I run minio in a minikube cluster? </p>
| <p>I don't think there is enough storage in <code>/mnt/data</code> inside minikube. Try <code>/mnt/sda1</code> or <code>/data</code>. Better yet, go inside minikube and check the storage available. To get into minikube you can do <code>minikube ssh</code>.</p>
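<p>For example, to check the available space from inside the minikube VM:</p>

<pre><code>minikube ssh
df -h
</code></pre>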
|
<p>First of all, I'd like to say that I'm new to Kubernetes, so forgive me if what I want to do is a bad idea :)</p>
<p>Here's my context :
I've got a very large application which is composed of lots of micro-services according to their domains :</p>
<pre><code>Domain 1
domain1-microservice1-app expose port 8080
domain1-microservice2-app expose port 8081
domain1-microservice3-app expose port 8082
Domain 2
domain2-microservice1-app expose port 9080
domain2-microservice2-app expose port 9081
domain2-microservice3-app expose port 9082
Domain 3
domain3-microservice1-app expose port 9180
domain3-microservice2-app expose port 9181
domain3-microservice3-app expose port 9182
</code></pre>
<p>... and so on.</p>
<p>So, in my example, I have 9 applications. Each application is registered in Kubernetes using <strong>kind:Deployment</strong></p>
<h2>Now I want to choose the best way to manage services.</h2>
<h3>First solution</h3>
<p>Each deployment has its own service</p>
<p>=> It works, it seems to be the classical way of doing things in Kubernetes. But in reality, I have far more than 9 applications, so I have lots of services</p>
<h3>Second solution</h3>
<p>Create a service by domain. Each service contains all its related apps</p>
<p>=> I've tried it and it seems to work (as far as I could test in my local dev environment)</p>
<p>I'd like to know what do you think of my second solution, and what may be its caveat?</p>
<p>I also take all your advice on what could be the best Kubernetes structure.</p>
<p>Thanks a lot,</p>
<p>Julien</p>
<h1>Edit: add sample configuration files for approach 2</h1>
<p>Deployment file for microservice1 of domain1 :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: domain1-app1
labels:
domain: domain1
spec:
selector:
matchLabels:
app: domain1-app1
replicas: 3
template:
metadata:
labels:
app: domain1-app1
domain: domain1
spec:
containers:
- name: XXX
image: YYY
ports:
- containerPort: 8080
</code></pre>
<p>Service file for service related to domain 1 :</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: domain1-service
spec:
type: LoadBalancer
selector:
domain: domain1
ports:
- name: port8080
protocol: TCP
port: 8080
targetPort: 8080
- name: port8081
protocol: TCP
port: 8081
targetPort: 8081
- name: port8082
protocol: TCP
port: 8082
targetPort: 8082
</code></pre>
| <p>It's subjective.</p>
<p>I would go with Approach 1 to keep the Service specifications simple. It also makes it possible to have different Pods for different Services. With Approach 2, the same set of Pods (based on the selector) has to provide all the Services for a particular domain, so it would not be possible to scale the Pods per Service.</p>
<p>The domain is more like metadata and not really part of the Service's functionality. So I would remove the domain from the Service name and use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">labels</a> for it instead; that still lets you apply selectors on those labels.</p>
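<p>A small sketch of what one of the Approach 1 Services could then look like, keeping the domain only as a label (names and port taken from the example in the question):</p>

<pre><code>kind: Service
apiVersion: v1
metadata:
  name: microservice1
  labels:
    domain: domain1          # domain kept as metadata only
spec:
  type: LoadBalancer
  selector:
    app: domain1-app1        # selects just this microservice's pods
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
</code></pre>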
|
<p>I'm trying to setup a Kubernetes cluster, but I cannot get CoreDNS running. I've ran the following to start the cluster:</p>
<pre><code>sudo swapoff -a
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s- version=$(kubectl version | base64 | tr -d '\n')"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<p>To check the PODs with <code>kubectl get pods --all-namespaces</code>, I get</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-68fb79bcf6-6s5bp 0/1 CrashLoopBackOff 6 10m
kube-system coredns-68fb79bcf6-hckxq 0/1 CrashLoopBackOff 6 10m
kube-system etcd-myserver 1/1 Running 0 79m
kube-system kube-apiserver-myserver 1/1 Running 0 79m
kube-system kube-controller-manager-myserver 1/1 Running 0 79m
kube-system kube-proxy-9ls64 1/1 Running 0 80m
kube-system kube-scheduler-myserver 1/1 Running 0 79m
kube-system kubernetes-dashboard-77fd78f978-tqt8m 1/1 Running 0 80m
kube-system weave-net-zmhwg 2/2 Running 0 80m
</code></pre>
<p>So CoreDNS keeps crashing. The only error messages I could find were from
<code>/var/log/syslog</code>:</p>
<pre><code>Oct 4 18:06:44 myserver kubelet[16397]: E1004 18:06:44.961409 16397 pod_workers.go:186] Error syncing pod c456a48b-c7c3-11e8-bf23-02426706c77f ("coredns-68fb79bcf6-6s5bp_kube-system(c456a48b-c7c3-11e8-bf23-02426706c77f)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-68fb79bcf6-6s5bp_kube-system(c456a48b-c7c3-11e8-bf23-02426706c77f)"
</code></pre>
<p>and from <code>kubectl logs coredns-68fb79bcf6-6s5bp -n kube-system</code>:</p>
<pre><code>.:53
2018/10/04 11:04:55 [INFO] CoreDNS-1.2.2
2018/10/04 11:04:55 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2018/10/04 11:04:55 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
2018/10/04 11:04:55 [FATAL] plugin/loop: Seen "HINFO IN 3256902131464476443.1309143030470211725." more than twice, loop detected
</code></pre>
<p>Some solutions I found are to issue</p>
<pre><code>kubectl -n kube-system get deployment coredns -o yaml | \
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
kubectl apply -f -
</code></pre>
<p>and to modify <code>/etc/resolv.conf</code> to point to an actual DNS, not to localhost, which I tried as well.</p>
<p>The issue is described in <a href="https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#pods-in-runcontainererror-crashloopbackoff-or-error-state" rel="noreferrer">https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#pods-in-runcontainererror-crashloopbackoff-or-error-state</a> and I tried many different Pod Networks but no help.</p>
<p>I've run <code>sudo kubeadm reset && rm -rf ~/.kube/ && sudo kubeadm init</code> several times.</p>
<p>I'm running Ubuntu 16.04, Kubernetes 1.12 and Docker 17.03. Any ideas?</p>
| <p>I also have the same issue.</p>
<p>I've solved the problem by deleting the plugins 'loop' within the cm of coredns.
but i don't know if this cloud case other porblems.</p>
<p>1、kubectl edit cm coredns -n kube-system</p>
<p>2、<a href="https://i.stack.imgur.com/NsYL1.png" rel="noreferrer">delete ‘loop’ ,save and exit</a></p>
<p>3、restart coredns pods by:<code>kubectl delete pod coredns.... -n kube-system</code></p>
|
<p><strong>Brief of the problem</strong>:</p>
<ul>
<li>If I try to attach multiple TLS gateways (using the same certificate)
to one ingressgateway, only one TLS will work. (The last applied)</li>
<li>Attaching multiple non-TLS gateways to the same ingressgateway works ok.</li>
</ul>
<p><strong>Error messages</strong>:</p>
<p>Domain 1 (ok):</p>
<pre><code>✗ curl -I https://integration.domain.com
HTTP/2 200
server: envoy
[...]
</code></pre>
<p>Domain 2 (bad):</p>
<pre><code>✗ curl -vI https://staging.domain.com
* Rebuilt URL to: https://staging.domain.com/
* Trying 35.205.120.133...
* TCP_NODELAY set
* Connected to staging.domain.com (35.x.x.x) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to staging.domain.com:443
* Curl_http_done: called premature == 1
* stopped the pause stream!
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to staging.domain.com:443
</code></pre>
<p><strong>Facts</strong>:</p>
<p>I have a wildcard TLS cert (lets say '*.domain.com') I've put in a secret with:</p>
<pre><code>kubectl create -n istio-system secret tls istio-ingressgateway-certs --key tls.key --cert tls.crt
</code></pre>
<p>I have the default istio-ingressgateway attached to a static IP: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: istio-ingressgateway
namespace: istio-system
annotations:
labels:
chart: gateways-1.0.0
release: istio
heritage: Tiller
app: istio-ingressgateway
istio: ingressgateway
spec:
loadBalancerIP: "35.x.x.x"
type: LoadBalancer
selector:
app: istio-ingressgateway
istio: ingressgateway
[...]
</code></pre>
<p>Then I have two gateways in different namespaces, for two domains included on the TLS wildcard (staging.domain.com, integration.domain.com):</p>
<p>staging:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: domain-web-gateway
namespace: staging
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
privateKey: /etc/istio/ingressgateway-certs/tls.key
hosts:
- "staging.domain.com"
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "staging.domain.com"
</code></pre>
<p>integration:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: domain-web-gateway
namespace: integration
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
privateKey: /etc/istio/ingressgateway-certs/tls.key
hosts:
- "integration.domain.com"
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "integration.domain.com"
</code></pre>
| <p>The problem is that you are using the same name (https) for port 443 in two Gateways managed by the same workload (selector). They need to have unique names. This restriction is documented <a href="https://preliminary.istio.io/help/ops/traffic-management/deploy-guidelines/#configuring-multiple-tls-hosts-in-a-gateway" rel="nofollow noreferrer">here</a>.</p>
<p>You can fix it by just changing the name of your second Gateway, for example:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: domain-web-gateway
namespace: integration
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 443
name: https-integration
protocol: HTTPS
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
privateKey: /etc/istio/ingressgateway-certs/tls.key
hosts:
- "integration.domain.com"
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "integration.domain.com"
</code></pre>
|
<p>I'm trying to set up a bare-metal k8s cluster.</p>
<p>When creating the cluster, using flannel plugin (<strong>sudo kubeadm init --pod-network-cidr=10.244.0.0/16</strong>) - it seems that the API server doesn't even run:</p>
<pre><code>root@kubernetes-master:/# kubectl cluster-info
Kubernetes master is running at https://192.168.10.164:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 192.168.10.164:6443 was refused - did you specify the right host or port?
</code></pre>
<p>I've disabled swap, and this is what I have in the logs:</p>
<pre><code>Oct 09 11:45:50 kubernetes-master kubelet[12442]: E1009 11:45:50.975944 12442 kubelet_node_status.go:391] Error updating node status, will retry: error getting node "kubernetes-master": Get https://192.168.10.164:6443/api/v1/nodes/kubernetes-master?resourceVersion=0&timeout=10s: dial tcp 192.168.10.164:6443: connect: connection refused
Oct 09 11:45:50 kubernetes-master kubelet[12442]: E1009 11:45:50.976715 12442 kubelet_node_status.go:391] Error updating node status, will retry: error getting node "kubernetes-master": Get https://192.168.10.164:6443/api/v1/nodes/kubernetes-master?timeout=10s: dial tcp 192.168.10.164:6443: connect: connection refused
Oct 09 11:45:50 kubernetes-master kubelet[12442]: E1009 11:45:50.977162 12442 kubelet_node_status.go:391] Error updating node status, will retry: error getting node "kubernetes-master": Get https://192.168.10.164:6443/api/v1/nodes/kubernetes-master?timeout=10s: dial tcp 192.168.10.164:6443: connect: connection refused
Oct 09 11:45:50 kubernetes-master kubelet[12442]: E1009 11:45:50.977741 12442 kubelet_node_status.go:391] Error updating node status, will retry: error getting node "kubernetes-master": Get https://192.168.10.164:6443/api/v1/nodes/kubernetes-master?timeout=10s: dial tcp 192.168.10.164:6443: connect: connection refused
Oct 09 11:45:50 kubernetes-master kubelet[12442]: E1009 11:45:50.978199 12442 kubelet_node_status.go:391] Error updating node status, will retry: error getting node "kubernetes-master": Get https://192.168.10.164:6443/api/v1/nodes/kubernetes-master?timeout=10s: dial tcp 192.168.10.164:6443: connect: connection refused
</code></pre>
<p>When I do <code>docker ps</code>, I see that the api-server did not even start:</p>
<pre><code>root@kubernetes-master:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7904888d512d ca1f38854f74 "kube-scheduler --ad…" 15 minutes ago Up 15 minutes k8s_kube-scheduler_kube-scheduler-kubernetes-master_kube-system_009228e74aef4d7babd7968782118d5e_1
ad5f25be44a3 ca1f38854f74 "kube-scheduler --ad…" 16 minutes ago Exited (1) 16 minutes ago k8s_kube-scheduler_kube-scheduler-kubernetes-master_kube-system_009228e74aef4d7babd7968782118d5e_0
1948a59f8ec9 b8df3b177be2 "etcd --advertise-cl…" 16 minutes ago Up 16 minutes k8s_etcd_etcd-kubernetes-master_kube-system_2c12104e97be3063569dbbc535d06f35_0
a43f9cb2a143 k8s.gcr.io/pause:3.1 "/pause" 16 minutes ago Up 16 minutes k8s_POD_kube-scheduler-kubernetes-master_kube-system_009228e74aef4d7babd7968782118d5e_0
c0125fd3aa06 k8s.gcr.io/pause:3.1 "/pause" 16 minutes ago Up 16 minutes k8s_POD_etcd-kubernetes-master_kube-system_2c12104e97be3063569dbbc535d06f35_0
</code></pre>
<p>I'm also not able of course to configure the network plugin because the API server is down:</p>
<pre><code>root@kubernetes-master:/# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get https://192.168.10.164:6443/api?timeout=32s: dial tcp 192.168.10.164:6443: connect: connection refused
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get https://192.168.10.164:6443/api?timeout=32s: dial tcp 192.168.10.164:6443: connect: connection refused
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get https://192.168.10.164:6443/api?timeout=32s: dial tcp 192.168.10.164:6443: connect: connection refused
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get https://192.168.10.164:6443/api?timeout=32s: dial tcp 192.168.10.164:6443: connect: connection refused
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get https://192.168.10.164:6443/api?timeout=32s: dial tcp 192.168.10.164:6443: connect: connection refused
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get https://192.168.10.164:6443/api?timeout=32s: dial tcp 192.168.10.164:6443: connect: connection refused
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get https://192.168.10.164:6443/api?timeout=32s: dial tcp 192.168.10.164:6443: connect: connection refused
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get https://192.168.10.164:6443/api?timeout=32s: dial tcp 192.168.10.164:6443: connect: connection refused
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get https://192.168.10.164:6443/api?timeout=32s: dial tcp 192.168.10.164:6443: connect: connection refused
</code></pre>
<p>I'm not sure how to continue debugging this; assistance would be helpful.</p>
| <p>Yes, you definitely have problems with the API server.
My advice is to wipe everything, update <code>docker.io</code>, <code>kubelet</code>, <code>kubeadm</code>, and <code>kubectl</code> to the latest versions, and start from scratch.</p>
<p>Let me help you step-by-step:</p>
<p>Wipe you current cluster, update packages under the root :</p>
<pre><code>#kubeadm reset -f && rm -rf /etc/kubernetes/
#apt-get update && apt-get install -y mc ebtables ethtool docker.io apt-transport-https curl
#curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
#cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
#apt-get update && apt-get install -y kubelet kubeadm kubectl
</code></pre>
<p>Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config:</p>
<pre><code>#docker info | grep -i cgroup
#cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
</code></pre>
<p>Check the versions:</p>
<pre><code>root@kube-master-1:~# docker -v
Docker version 17.03.2-ce, build f5ec1e2
root@kube-master-1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kube-master-1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
root@kube-master-1:~# kubelet --version
Kubernetes v1.12.1
</code></pre>
<p>Start cluster:</p>
<pre><code>#kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre>
<p>Login and run the following as a regular user:</p>
<pre><code> mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
source <(kubectl completion bash) # setup autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
</code></pre>
<p>Check cluster:</p>
<pre><code>$ kubectl cluster-info
Kubernetes master is running at https://10.132.0.2:6443
KubeDNS is running at https://10.132.0.2:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get no -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-master-1 NotReady master 4m26s v1.12.1 10.132.0.2 <none> Ubuntu 16.04.5 LTS 4.15.0-1021-gcp docker://17.3.2
$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-576cbf47c7-lw7jv 0/1 Pending 0 4m55s
kube-system pod/coredns-576cbf47c7-ncx8w 0/1 Pending 0 4m55s
kube-system pod/etcd-kube-master-1 1/1 Running 0 4m23s
kube-system pod/kube-apiserver-kube-master-1 1/1 Running 0 3m59s
kube-system pod/kube-controller-manager-kube-master-1 1/1 Running 0 4m17s
kube-system pod/kube-proxy-bwrwh 1/1 Running 0 4m55s
kube-system pod/kube-scheduler-kube-master-1 1/1 Running 0 4m10s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5m15s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5m9s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 <none> 5m8s
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 2 2 2 0 5m9s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-576cbf47c7 2 2 0 4m56s
</code></pre>
<p>Install CNI (I prefer <a href="https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/calico#installing-with-the-kubernetes-api-datastore50-nodes-or-less" rel="noreferrer">Calico</a>):</p>
<pre><code>$ kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
$ kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
</code></pre>
<p>Check result:</p>
<pre><code>$ kubectl get no -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-master-1 Ready master 9m15s v1.12.1 10.132.0.2 <none> Ubuntu 16.04.5 LTS 4.15.0-1021-gcp docker://17.3.2
$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-node-tsstf 2/2 Running 0 2m3s
kube-system pod/coredns-576cbf47c7-lw7jv 1/1 Running 0 9m20s
kube-system pod/coredns-576cbf47c7-ncx8w 1/1 Running 0 9m20s
kube-system pod/etcd-kube-master-1 1/1 Running 0 8m48s
kube-system pod/kube-apiserver-kube-master-1 1/1 Running 0 8m24s
kube-system pod/kube-controller-manager-kube-master-1 1/1 Running 0 8m42s
kube-system pod/kube-proxy-bwrwh 1/1 Running 0 9m20s
kube-system pod/kube-scheduler-kube-master-1 1/1 Running 0 8m35s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9m40s
kube-system service/calico-typha ClusterIP 10.105.62.183 <none> 5473/TCP 2m4s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 9m34s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 beta.kubernetes.io/os=linux 2m4s
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 <none> 9m33s
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-typha 0 0 0 0 2m4s
kube-system deployment.apps/coredns 2 2 2 2 9m34s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-typha-5f646c475c 0 0 0 2m4s
kube-system replicaset.apps/coredns-576cbf47c7 2 2 2 9m21s
$ sudo docker ps -a | grep api
996cf65268fe dcb029b5e3ad "kube-apiserver --..." 10 minutes ago Up 10 minutes k8s_kube-apiserver_kube-apiserver-kube-master-1_kube-system_371bd9e2260dc98257ab7a6961e293b0_0
ab9f0949b295 k8s.gcr.io/pause:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_kube-apiserver-kube-master-1_kube-system_371bd9e2260dc98257ab7a6961e293b0_0
</code></pre>
<p>Hope this will help you.</p>
|
<p>Is there a way to create a tree of inheritance for Kubernetes deployments? I have a number of deployments which are similar but not identical. They share many ENV vars but not all. They all use the same image. </p>
<p>For example, I have a dev deployment which is configured almost identical to a production deployment but has env vars pointing to a different database backend. I have a celery deployment which is configured the same as the production deployment, however, it has a different run command.</p>
| <p><a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a> is what many people are using for this. It lets you create templates for Kubernetes descriptors and pass parameters in to generate descriptors from the templates.</p>
<p>There are other tools out there which can be used to generate variations on kubernetes deployment descriptors by injecting parameters into templates. Ansible is also popular. But Helm is closely connected with the <a href="https://www.cncf.io/blog/2018/06/01/cncf-to-host-helm/" rel="nofollow noreferrer">Kubernetes CNCF</a> and community and there's a good selection of <a href="https://hub.kubeapps.com/" rel="nofollow noreferrer">official charts</a> available.</p>
<p>EDIT: If the aim is to enable different deployments (e.g. for dev and prod) using a single docker image then that's probably best handled with a single chart. You can create different values files for each deployment and supply the chosen values file to helm install with the <code>--values</code> parameter. If there are parts of the chart that are only sometimes applicable then they can be wrapped in <code>if</code> conditions to turn them on/off.</p>
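<p>As a rough illustration of that approach (the file names and keys here are hypothetical, not taken from your setup), the per-environment values files only differ in the parameters that change, and the optional celery Deployment is wrapped in a condition:</p>
<pre><code># values-dev.yaml
databaseUrl: "postgres://dev-db:5432/app"
celery:
  enabled: false

# values-prod.yaml
databaseUrl: "postgres://prod-db:5432/app"
celery:
  enabled: true

# templates/celery-deployment.yaml
{{- if .Values.celery.enabled }}
# ... the celery Deployment manifest, rendered only when enabled ...
{{- end }}
</code></pre>
<p>You would then deploy each environment with <code>helm install mychart -f values-dev.yaml</code> or <code>-f values-prod.yaml</code>.</p>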
<p>On the subject of inheritance specifically, there's an <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md#overriding-values-from-a-parent-chart" rel="nofollow noreferrer">example in the helm documention of how to take another chart as a parent/dependency and override its values</a> and I created <a href="https://github.com/Activiti/ttc-acceptance-tests/tree/cd6c7956de21d7f29d336dc464e5e7e22d0dd7b6/ttc-example" rel="nofollow noreferrer">a chart earlier that you can see in github</a> that <a href="https://github.com/Activiti/ttc-acceptance-tests/blob/cd6c7956de21d7f29d336dc464e5e7e22d0dd7b6/ttc-example/requirements.yaml" rel="nofollow noreferrer">includes several other charts</a> and <a href="https://github.com/Activiti/ttc-acceptance-tests/blob/cd6c7956de21d7f29d336dc464e5e7e22d0dd7b6/ttc-example/values.yaml#L41" rel="nofollow noreferrer">overrides parts of all of them via the values.yml</a>. It also shares some config between the included charts <a href="https://github.com/Activiti/ttc-acceptance-tests/blob/cd6c7956de21d7f29d336dc464e5e7e22d0dd7b6/ttc-example/values.yaml#L47" rel="nofollow noreferrer">with globals</a>. If you're looking to use a parent to reduce duplication rather than join multiple apps then it is possible to create a <a href="https://medium.com/devopslinks/dry-helm-charts-for-micro-services-db3a1d6ecb80" rel="nofollow noreferrer">base/wrapper chart</a> but <a href="https://medium.com/@ryandawsonuk/tips-for-getting-started-with-helm-99b01efdab90" rel="nofollow noreferrer">it may turn out to be better to just duplicate config</a>.</p>
<p>EDIT (180119): The alternative of <a href="https://github.com/kubernetes-sigs/kustomize" rel="nofollow noreferrer">Kustomize</a> may soon <a href="https://github.com/kubernetes/enhancements/issues/633" rel="nofollow noreferrer">become available in kubectl</a></p>
|
<p>How to set the reserved resources without ssh the cluster (only with kubectl commands) in azure (acs-engine) ?</p>
| <p>Using <a href="https://kubernetes.io/docs/reference/kubectl/kubectl/" rel="nofollow noreferrer">kubectl</a> command-line tool to achieve your goal is not feasible, as it only sends commands to api-server and retrieves results without any possibility to change global cluster configuration. The above parameters belong to <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> configuration. <code>Kubelet</code> is the node agent in a Kubernetes cluster and is responsible for the Pod lifecycle management on each local Node. Representing itself as a service, <code>kubelet</code> interacts with the <code>etcd</code> store in order to read configuration details or write new values.</p>
<p>You can apply specific flags when you set up the <code>kubelet</code> service on your node, or consider changing the configuration on a <a href="https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/" rel="nofollow noreferrer">live cluster</a> by changing the parameters accordingly on the target Node.</p>
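<p>For reference, this is roughly how resource reservations look in a <code>KubeletConfiguration</code> file applied on the node (the amounts below are placeholders, not recommendations); the same settings can also be passed as <code>--kube-reserved</code>/<code>--system-reserved</code> kubelet flags. Either way, it is configured on the node itself, not through <code>kubectl</code>:</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:
  cpu: "250m"
  memory: "512Mi"
systemReserved:
  cpu: "250m"
  memory: "512Mi"
evictionHard:
  memory.available: "200Mi"
</code></pre>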
<p>There is a separate discussion about customization of <code>kubelet</code> parameters on <code>AKS</code> described <a href="https://github.com/Azure/AKS/issues/323" rel="nofollow noreferrer">here</a>.</p>
|
<p>running v1.10 and i notice that <code>kube-controller-manager</code>s memory usage spikes and the OOMs all the time. it wouldn't be so bad if the system didn't fall to a crawl before this happens tho.</p>
<p>i tried modifying <code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code> to have a <code>resource.limits.memory=1Gi</code> but the kube-controller-manager pod never seems to want to come back up.</p>
<p>any other options?</p>
| <p>There is a bug in kube-controller-manager, and it's fixed in <a href="https://github.com/kubernetes/kubernetes/pull/65339" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/65339</a></p>
|
<p>I have the below annotation for my secrets</p>
<pre><code>annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "1"
"helm.sh/hook-delete-policy": "before-hook-creation"
</code></pre>
<p>it's giving me the following error when I try to upgrade.</p>
<blockquote>
<p>Error: UPGRADE FAILED: secrets "my-secret" already exists </p>
</blockquote>
<p>My expectation is it should delete previous release hook and create a new one</p>
<p>Helm Version: </p>
<ul>
<li>Client: 2.7.2 </li>
<li>Server: 2.7.2</li>
</ul>
| <p>After digging through the docs, I came to know that <code>"helm.sh/hook-delete-policy": "before-hook-creation"</code> is not available in Helm version <code>2.7.2</code>; it is available in <code>2.9.0</code>.</p>
|
<p>I have a Kubernetes cluster with a backend service and a security service.
The ingress is defined as follows:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: solidary-life
annotations:
kubernetes.io/ingress.global-static-ip-name: sl-ip
certmanager.k8s.io/acme-http01-edit-in-place: "true"
ingress.kubernetes.io/force-ssl-redirect: "true"
ingress.kubernetes.io/ssl-redirect: "true"
labels:
app: sl
spec:
rules:
- host: app-solidair-vlaanderen.com
http:
paths:
- path: /v0.0.1/*
backend:
serviceName: backend-backend
servicePort: 8080
- path: /auth/*
backend:
serviceName: security-backend
servicePort: 8080
tls:
- secretName: solidary-life-tls
hosts:
- app-solidair-vlaanderen.com
</code></pre>
<p>The backend service is configured like:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
labels:
app: sl
spec:
template:
metadata:
labels:
app: sl
tier: web
spec:
containers:
- name: backend-app
image: gcr.io/solidary-life-218713/sv-backend:0.0.6
ports:
- name: http
containerPort: 8080
readinessProbe:
httpGet:
path: /v0.0.1/api/online
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: backend-backend
labels:
app: sl
spec:
type: NodePort
selector:
app: sl
tier: web
ports:
- port: 8080
targetPort: 8080
</code></pre>
<p>and the auth server service:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: security
labels:
app: sl-security
spec:
template:
metadata:
labels:
app: sl
tier: web
spec:
containers:
- name: security-app
image: gcr.io/solidary-life-218713/sv-security:0.0.1
ports:
- name: http
containerPort: 8080
- name: management
containerPort: 9090
- name: jgroups-tcp
containerPort: 7600
- name: jgroups-tcp-fd
containerPort: 57600
- name: jgroups-udp
containerPort: 55200
protocol: UDP
- name: jgroups-udp-mc
containerPort: 45688
protocol: UDP
- name: jgroups-udp-fd
containerPort: 54200
protocol: UDP
- name: modcluster
containerPort: 23364
- name: modcluster-udp
containerPort: 23365
protocol: UDP
- name: txn-recovery-ev
containerPort: 4712
- name: txn-status-mgr
containerPort: 4713
readinessProbe:
httpGet:
path: /auth/
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: security-backend
labels:
app: sl
spec:
type: NodePort
selector:
app: sl
tier: web
ports:
- port: 8080
targetPort: 8080
</code></pre>
<p>Now I can go to the url's:</p>
<ul>
<li><a href="https://app-solidair-vlaanderen.com/v0.0.1/api/online" rel="nofollow noreferrer">https://app-solidair-vlaanderen.com/v0.0.1/api/online</a></li>
<li><a href="https://app-solidair-vlaanderen.com/auth/" rel="nofollow noreferrer">https://app-solidair-vlaanderen.com/auth/</a></li>
</ul>
<p>Sometimes this works, sometimes I get 404's. This is quite annoying and I am quite new to Kubernetes. I don't find the error.</p>
<p>Can it have something to do with the "sl" label that's on both the backend and security service definition?</p>
| <p>Yes. At least that must be the start of the issue, assuming all your services are in the same Kubernetes namespace. Can you use a different label for each? </p>
<p>So, in essence, you have 2 services that are randomly selecting pods belonging to the security Deployment and the backend deployment. One way to determine where your service is really sending requests is by looking at its endpoints and running:</p>
<pre><code>kubectl -n <your-namespace> <get or describe> ep
</code></pre>
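<p>For example (a sketch only, using a hypothetical extra <code>component</code> label), you could disambiguate the two Deployments and their Services like this:</p>
<pre><code># backend Deployment pod template labels
labels:
  app: sl
  tier: web
  component: backend      # hypothetical extra label
---
# backend-backend Service selector
selector:
  app: sl
  tier: web
  component: backend
# the security Deployment/Service would use component: security instead
</code></pre>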
|
<p>I've developed a containerized Flask application and I want to deploy it with Kubernetes. However, I can't connect the ports of the Container with the Service correctly.</p>
<p>Here is my Deployment file:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: <my-app-name>
spec:
replicas: 1
template:
metadata:
labels:
app: flaskapp
spec:
containers:
- name: <container-name>
image: <container-image>
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5000
name: http-port
---
apiVersion: v1
kind: Service
metadata:
name: <service-name>
spec:
selector:
app: flaskapp
ports:
- name: http
protocol: TCP
targetPort: 5000
port: 5000
nodePort: 30013
type: NodePort
</code></pre>
<p>When I run <code>kubectl get pods</code>, everything seems to work fine:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
<pod-id> 1/1 Running 0 7m
</code></pre>
<p>When I run <code>kubectl get services</code>, I get the following:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
<service-name> NodePort 10.105.247.63 <none> 5000:30013/TCP
...
</code></pre>
<p>However, when I give the following URL to the browser: 10.105.247.63:30013, the browser keeps loading but never returns the data from the application.</p>
<p>Does anyone know where the problem could be? It seems that the service is not connected to the container's port.</p>
| <p><code>30013</code> is the port on the Node, not on the cluster IP. To get a reply you would have to connect to <code><IP-address-of-the-node>:30013</code>. To get the list of nodes you can run:</p>
<pre><code>kubectl get nodes -o=wide
</code></pre>
<p>You can also go through the <code>CLUSTER-IP</code> but you'll have to use the exposed port <code>5000</code>: <code>10.105.247.63:5000</code></p>
|
<p>First off a disclaimer: I have only been using Azure's Kubernetes framework for a short while so my apologies for asking what might be an easy problem.</p>
<p>I have two Kubernetes services running in AKS. I want these services to be able to discover each other by service name. The pods associated with these services are each given an IP from the subnet I've assigned to my cluster:</p>
<pre><code>$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP ...
tom 1/1 Running 0 69m 10.0.2.10 ...
jerry 1/1 Running 5 67m 10.0.2.21 ...
</code></pre>
<p>If I make REST calls between these services using their pod IPs directly, the calls work as expected. I don't want to of course use hard coded IPs. In reading up on kube dns, my understanding is that entries for registered services are created in the dns. The tests I've done confirms this, but the IP addresses assigned to the dns entries are not the IP addresses of the pods. For example:</p>
<pre><code>$ kubectl exec jerry -- ping -c 1 tom.default
PING tom.default (10.1.0.246): 56 data bytes
</code></pre>
<p>The IP address that is associated with the service tom is the so-called "cluster ip":</p>
<pre><code>$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tom ClusterIP 10.1.0.246 <none> 6010/TCP 21m
jerry ClusterIP 10.1.0.247 <none> 6040/TCP 20m
</code></pre>
<p>The same is true with the service jerry. The problem with these IP addresses is that REST calls using these addresses do not work. Even a simple ping times out. So my question is how can I associate the kube-dns entry that's created for a service with the pod IP instead of the cluster IP?</p>
<p>Based on the posted answer, I updated my yml file for "tom" as follows:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: tom
spec:
template:
metadata:
labels:
app: tom
spec:
containers:
- name: tom
image: myregistry.azurecr.io/tom:latest
imagePullPolicy: Always
ports:
- containerPort: 6010
---
apiVersion: v1
kind: Service
metadata:
name: tom
spec:
ports:
- port: 6010
name: "6010"
selector:
app: tom
</code></pre>
<p>and then re-applied the update. I still get the cluster IP though when I try to resolve tom.default, not the pod IP. I'm still missing part of the puzzle.</p>
<p>Update: As requested, here's the describe output for tom:</p>
<pre><code>$ kubectl describe service tom
Name: tom
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"tom","namespace":"default"},"spec":{"ports":[{"name":"6010","po...
Selector: app=tom
Type: ClusterIP
IP: 10.1.0.139
Port: 6010 6010/TCP
TargetPort: 6010/TCP
Endpoints: 10.0.2.10:6010
</code></pre>
<p>The output is similar for the service jerry. As you can see, the endpoint is what I'd expect--10.0.2.10 is the IP assigned to the pod associated with the service tom. Kube DNS though resolves the name "tom" as the cluster IP, not the pod IP:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE IP ...
tom-b4ccbfb97-wfmjp 1/1 Running 0 15h 10.0.2.10
jerry-dd8fbf98f-8jgw7 1/1 Running 0 14h 10.0.2.20
$ kubectl exec jerry-dd8fbf98f-8jgw7 nslookup tom
Name: tom
Address 1: 10.1.0.139 tom.default.svc.cluster.local
</code></pre>
<p>This doesn't really matter of course as long as REST calls are routed to the expected pod IP. I've had some success with this today:</p>
<pre><code>$ kubectl exec jerry-5554b956b-9kpj7 -- wget -O - http://tom:6010/actuator/health
{"status":"UP"}
</code></pre>
<p>This shows that even though the name "tom" resolves to the cluster IP there is routing in place that makes sure the call gets to the pod. I've tried the same call from service tom to service jerry and that also works. Curiously, a loopback, from tom to tom, times out:</p>
<pre><code>$ kubectl exec tom-5c68d66cf9-dxlmf -- wget -O - http://tom:6010/actuator/health
Connecting to tom:6010 (10.1.0.139:6010)
wget: can't connect to remote host (10.1.0.139): Operation timed out
command terminated with exit code 1
</code></pre>
<p>If I use the pod IP explicitly, the call works:</p>
<pre><code>$ kubectl exec tom-5c68d66cf9-dxlmf -- wget -O - http://10.0.2.10:6010/actuator/health
{"status":"UP"}
</code></pre>
<p>So for some reason the routing doesn't work in the loopback case. I can probably get by with that since I don't think we'll need to make calls back to the same service. It is puzzling though.</p>
<p>Peter</p>
| <p>This means you didn't publish ports through your service (or used the wrong labels). What you are trying to achieve is exactly what services are for; what you need to do is fix your service definition so that it works properly.</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: xxx-name
spec:
template:
metadata:
labels:
app: xxx-label
spec:
containers:
- name: xxx-container
image: kmrcr.azurecr.io/image:0.7
imagePullPolicy: Always
ports:
- containerPort: 7003
- containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
name: xxx-service
spec:
ports:
- port: 7003
name: "7003"
- port: 443
name: "443"
selector:
    app: xxx-label    # must match your pod label
type: LoadBalancer
</code></pre>
<p>notice how this exposes same ports container is listening on and uses the same label as selector to determine to which pods the traffic must go</p>
|
<p>Is it possible to have a centralized storage/volume that can be shared between two pods/instances of an application that exist in different worker nodes in Kubernetes?</p>
<p>So to explain my case:</p>
<ul>
<li><p>I have a Kubernetes cluster with 2 worker nodes. In each one of these I have 1 instance of app X running. This means I have 2 instances of app X running totally at the same time.</p></li>
<li><p>Both instances subscribe on the topic topicX, that has 2 partitions, and are part of a consumer group in Apache Kafka called groupX.</p></li>
</ul>
<p>As I understand it the message load will be split among the partitions, but also among the consumers in the consumer group. So far so good, right? </p>
<p>So to my problem:</p>
<ul>
<li><p>In my whole solution I have a hierarchy division with the unique constraint by country and ID. Each combination of country and ID has a pickle model (python Machine Learning Model), which is stored in a directory accessed by the application. For each combination of a country and ID I receive one message per minute.</p></li>
<li><p>At the moment I have 2 countries, so to be able to scale properly I wanted to split the load between two instances of app X, each one handling its own country. </p></li>
<li><p>The problem is that with Kafka the messages can be balanced between the different instances, and to access the pickle-files in each instance without know what country the message belongs to, I have to store the pickle-files in both instances.</p></li>
</ul>
<p>Is there a way to solve this? I would rather keep the setup as simple as possible so it is easy to scale and add a third, fourth and fifth country later.</p>
<p>Keep in mind that this is an overly simplified way of explaining the problem. The number of instances is much higher in reality etc.</p>
| <p>Yes, it's possible. If you look at <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="noreferrer">this table</a>, any <code>PV</code> (PersistentVolume) that supports <code>ReadWriteMany</code> will help you accomplish having the same data store for your Kafka workers. So in summary these:</p>
<ul>
<li>AzureFile</li>
<li>CephFS</li>
<li>Glusterfs</li>
<li>Quobyte</li>
<li>NFS</li>
<li>VsphereVolume - (works when pods are collocated)</li>
<li>PortworxVolume</li>
</ul>
<p>In my opinion, NFS is the easiest to implement. Note that Azurefile, Quobyte, and Portworx are paid solutions.</p>
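<p>As a minimal sketch of the NFS option (the server address, export path and sizes below are placeholders), a <code>ReadWriteMany</code> PersistentVolume and matching claim could look like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-models
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10        # placeholder NFS server address
    path: /exports/models
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-models-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
</code></pre>
<p>Every instance of app X could then mount the same claim and read the same pickle files regardless of which node it runs on.</p>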
|
<p>I try to run grafana and nginx as reverse proxy in a kubernetes cluster and I already found this <a href="https://stackoverflow.com/questions/50410503/grafana-behind-nginx-reverse-proxy-returns-alert-title">answer</a> but this seems not to work for me. At least I get the same {{alert.title}}-Message as Oles. That's why I would like ask again and maybe someone can give me a hint what I am doing wrong?</p>
<p>The configuration for the grafana deployment contains the following part:</p>
<pre><code>env:
- name: GF_SERVER_DOMAIN
value: "k8s-4"
- name: GF_SERVER_ROOT_URL
value: "http://k8s-4/grafana"
</code></pre>
<p>and I don't modify the grafana.ini inside the container/pod.</p>
<p>Further I configure the nginx in the default.conf as following: </p>
<pre><code>server {
listen 80;
server_name localhost k8s-4;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location /grafana/ {
proxy_pass http://k8s-4:30080/grafana;
proxy_set_header X-Forwarded-Host k8s-4;
proxy_set_header X-Forwarded-Server k8s-4;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
</code></pre>
<p>But as I say above this leads to the alert.title Error. But if I set the context to the root Element and configured the tools as follows:</p>
<p><strong>deployment:</strong></p>
<pre><code>env:
- name: GF_SERVER_DOMAIN
value: "k8s-4"
- name: GF_SERVER_ROOT_URL
value: "http://k8s-4"
</code></pre>
<p><strong>nginx - default.conf</strong></p>
<pre><code>server {
listen 80;
server_name localhost k8s-4;
location / {
proxy_pass http://k8s-4:30080/grafana;
proxy_set_header X-Forwarded-Host k8s-4;
proxy_set_header X-Forwarded-Server k8s-4;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /grafana/ {
}
}
</code></pre>
<p>it works...
That's why I am quite sure this is a problem with the deployment of grafana but I don't see the error. Maybe someone here can give me a little hint? </p>
| <p>Your first <code>server</code> setup is almost correct; you need to change the <code>proxy_pass</code> line to:</p>
<p><code>proxy_pass http://k8s-4:30080/;</code></p>
|
<p>I am trying to figure out the ultimate best practices for using Java in Docker containers deployed using Kubernetes <strong>on local environments or while developing code</strong>. In the ideal state, Java developers should be able to move as fast as python/javascript developers, but I am having a hard time matching the speed (or even coming close).</p>
<p>At the moment, I have a working, manually deployed k8's cluster. My Java Spring project is built by maven after a build command is run manually (<code>mvn clean install</code>), then I run a script to make an image, after that I run a script to run minkube (if its not already running) and finally I have to <code>apply</code> a deployments manifest file (which launches the containers into the pods).</p>
<p>What I am missing:</p>
<ol>
<li>All of this is done manually (there is clear room to automate the process of building an image after code is built and to have k8s update using the new image).</li>
<li>Builds are manually specified (python relaunches on code save. No hot reloading to my knowledge in the java world).</li>
<li>I have yet to see an integration between a local development environment and a cloud hosted k8's cluster. Ideally, a dev would test locally until they are ready to deploy to the cloud. When they are ready, it would be awesome to click a button and have a cluster read from a remote registry that could pick up the docker image changes and reload.</li>
</ol>
<p>Sadly, <a href="https://github.com/GoogleContainerTools/skaffold" rel="noreferrer">Skaffold</a>, the tool that I would be excited to use does not work natively with Java.
Is there another tool that Java Devs are using to make their local deployments super fast and competitive with the DUCK languages (py, js)?</p>
| <p>You can build a docker image directly from maven with <a href="https://github.com/spotify/docker-maven-plugin" rel="nofollow noreferrer">docker-maven-plugin</a>. Add to your <code>pom.xml</code>: </p>
<pre><code><build>
<plugins>
...
<plugin>
<groupId>com.spotify</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>VERSION GOES HERE</version>
<configuration>
<imageName>example</imageName>
<dockerDirectory>docker</dockerDirectory>
<resources>
<resource>
<targetPath>/</targetPath>
<directory>${project.build.directory}</directory>
<include>${project.build.finalName}.jar</include>
</resource>
</resources>
</configuration>
</plugin>
...
</plugins>
</build>
</code></pre>
<p>I don't know your use case precisely, but deploying a k8s cluster on your dev machine may be overkill. You can test your docker images with <a href="https://docs.docker.com/compose/" rel="nofollow noreferrer">Docker Compose</a>, for example with a file like the sketch below.</p>
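<p>A minimal <code>docker-compose.yml</code> for that (assuming the <code>example</code> image name from the plugin configuration above and an app listening on port 8080) might be:</p>
<pre><code>version: "3"
services:
  app:
    # the image built by the docker-maven-plugin above
    image: example:latest
    ports:
      - "8080:8080"
</code></pre>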
|
<p>I have recently migrated my application from a single server w/ docker into Google Kubernetes Engine for the reasons of scaling. I am new to the kubernetes platform, and I may not yet fully understand the concepts of it but I do get the basics. </p>
<p>I have successfully migrated my application on a <code>cluster size of 3 each with 1vCPU and 3.75 GB RAM</code></p>
<p>Now I came across on what is the best configuration for the php-fpm processes running in a kubernetes cluster. I have read a few articles on how to setup the php-fpm processes such as </p>
<p><a href="https://serversforhackers.com/c/php-fpm-process-management" rel="noreferrer">https://serversforhackers.com/c/php-fpm-process-management</a></p>
<p><a href="https://www.kinamo.be/en/support/faq/determining-the-correct-number-of-child-processes-for-php-fpm-on-nginx" rel="noreferrer">https://www.kinamo.be/en/support/faq/determining-the-correct-number-of-child-processes-for-php-fpm-on-nginx</a></p>
<p>On my cluster I have an Elasticsearch, Redis, Frontend and a REST Api and my understanding about kubernetes, each has their own pods running on my cluster, I tried to access the pod for the REST Api and see 1 vCPU and 3.75 GB RAM which is what I set on my cluster specs. And the RAM has only 1.75GB left, so I think there are other services or pods using the memory.</p>
<p>So now I wanted to increase the size of the following based on the articles I shared above.</p>
<pre><code>pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 4
pm.max_spare_servers = 8
</code></pre>
<p>But my problem is since the pod is on a worker, if I change the configuration base on the available memory left (base on the articles I shared above on Calculating pm.max_children) I might end up a pod consuming all memory space left, and will not be able to allocate for the other services. Is my problem makes sense? or is there an idea I am missing? </p>
<p>Base on the article since my worker has 3.75 GB RAM and and other services is already consuming 1.5GB ram so my best aim is at 1 GB RAM.</p>
<p>pm.max_children brings us to 1024 Mb / 60 Mb = 17 max_children</p>
<pre><code>pm.max_children = 17
pm.start_servers = 8
pm.min_spare_servers = 7
pm.max_spare_servers = 10
pm.max_requests = 500
</code></pre>
<p>Which leads me to the question How to compute for the php-fpm child process on a Kubernetes Cluster when there are other services or pods shares the same resources. </p>
<p>Thank you for reading until the end, and thanks in advance for your inputs.</p>
| <p>GKE comes with multiple system pods (such as kube-dns and fluentd). Some of these pods do not scale up much, which means that if you add additional nodes, the new nodes will have more available resources. </p>
<p>The nodes are also running an OS so some of the memory is being assigned to that.
You can also view the resources available per node by using <code>kubectl describe no | grep Allocatable -A 5</code></p>
<p>This will show you the amount of resources left after the node's consumption.
Using <code>kubectl describe no | grep Allocated -A 5</code> you can view the amount of memory and CPU that is already requested by current pods.</p>
<p>All this being said, you should choose the number of child processes based on your needs. Once you know the amount of memory the pod will need, set <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container" rel="nofollow noreferrer">resource requests and limits</a> in your pod config so that the Kubernetes scheduler can put the php-fpm on a node with sufficient resources.</p>
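<p>As a sketch (the numbers are only an example based on the roughly 1 GB you calculated; adjust them to your measurements), the php-fpm container in the Deployment could declare:</p>
<pre><code>resources:
  requests:
    memory: "1Gi"
    cpu: "250m"
  limits:
    memory: "1Gi"
    cpu: "500m"
</code></pre>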
<p>Kubernetes' strength is that you tell it what you want and it will try to make that happen. Instead of worrying too much about how much you can fit, choose an amount for your pod based on your expected/required performance and tell Kubernetes that's how much memory you need. This way, you can also <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">increase the number of pods using HPA</a> instead of managing and scaling up the number of child processes.</p>
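<p>A minimal HorizontalPodAutoscaler for that approach could look like this (the Deployment name and thresholds are placeholders):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-fpm-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-fpm
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
</code></pre>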
|
<p>So the idea is Kubernetes dashboard accesses Kubernetes API to give us beautiful visualizations of different 'kinds' running in the Kubernetes cluster and the method by which we access the Kubernetes dashboard is by the proxy mechanism of the Kubernetes API which can then be exposed to a public host for public access.</p>
<p>My question would be is there any possibility that we can access Kubernetes API proxy mechanism for some other service inside a Kubernetes cluster via that publically exposed address of Kubernetes Dashboard?</p>
| <p>Sure you can. So after you set up your proxy with <code>kubectl proxy</code>, you can access the services with this format:</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/<service-name>:<port-name>/proxy/
</code></pre>
<p>For example for <code>http-svc</code> and port name <code>http</code>:</p>
<pre><code>http://localhost:8001/api/v1/namespaces/default/services/http-svc:http/proxy/
</code></pre>
<p>Note: it's not necessarily for public access, but rather a proxy for you to connect from your public machine (say your laptop) to a private Kubernetes cluster.</p>
|
<p>I am trying to configure the HTTP liveness probe as follows:</p>
<pre><code>livenessProbe:
httpGet:
path: /rest/sends/get?source=mwTESt2VP3Q9M99GNWYvvaLQ1owrGTTjTb #sends API to test address
port: 4000
httpHeaders:
- name: Authorization
value: Basic cnBjOnUzSGRlM0xvaWI1SGpEcTFTZGVoQktpU1NBbHE=
initialDelaySeconds: 60 #wait this period after staring fist time
periodSeconds: 30 # polling interval
timeoutSeconds: 30 # wish to receive response within this time period
</code></pre>
<p>Here, the URL path contains query parameters along with an authentication header (base64 encoding of username:password)</p>
<p>However, I get the following error:</p>
<pre><code> ERROR in app: Exception on /rest/sends/get [GET] (http 500)
</code></pre>
<p>I checked that this indeed works with status code 200 after logging into the pod</p>
<pre><code>curl http://username:password@localhost:4000/rest/sends/get?source=mwTESt2VP3Q9M99GNWYvvaLQ1owrGTTjTb
</code></pre>
<p>This question is probably similar to this one <a href="https://stackoverflow.com/questions/38532286/kubernetes-liveness-probes-with-query-string-parameters?rq=1">Kubernetes liveness probes with query string parameters</a></p>
<p>But, according to it, this should have already been fixed. I am using Kubernetes on <code>Google cloud version: 1.10.7-gke.2</code> on both master and other nodes.</p>
<p>Am I missing something?</p>
<h1>EDIT</h1>
<p>In the server access log, I get the following error</p>
<pre><code>10.0.2.1 - - [10/Oct/2018 03:50:45] "GET /rest/sends/get?source=mwTESt2VP3Q9M99GNWYvvaLQ1owrGTTjTb HTTP/1.1" 500 -
Exception on /rest/sends/get [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.5/dist-packages/flask/_compat.py", line 33, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.5/dist-packages/flask_httpauth.py", line 88, in decorated
return f(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/counterparty_lib-9.55.4-py3.5.egg/counterpartylib/lib/api.py", line 813, in handle_root
response = handle_rest(rest_path, flask.request)
File "/usr/local/lib/python3.5/dist-packages/counterparty_lib-9.55.4-py3.5.egg/counterpartylib/lib/api.py", line 935, in handle_rest
file_format = flask_request.headers['Accept']
File "/usr/local/lib/python3.5/dist-packages/werkzeug/datastructures.py", line 1354, in __getitem__
return _unicodify_header_value(self.environ['HTTP_' + key])
KeyError: 'HTTP_ACCEPT'
</code></pre>
<p>The server is actually a counterparty-server <a href="https://github.com/CounterpartyXCP/counterparty-lib" rel="nofollow noreferrer">https://github.com/CounterpartyXCP/counterparty-lib</a></p>
<p>I am not really sure what the problem is.</p>
| <p>I added this header to the request</p>
<pre><code>httpHeaders:
- name: Authorization
value: Basic cnBjOnUzSGRlM0xvaWI1SGpEcTFTZGVoQktpU1NBbHE=
- name: Accept
value: application/json
</code></pre>
<p>And now it's working alright.</p>
|
<p>Couldn't be found at <a href="https://github.com/helm/helm/blob/master/docs/rbac.md" rel="nofollow noreferrer">https://github.com/helm/helm/blob/master/docs/rbac.md</a> , Is Tiller able to install chart on the other multiple namespaces? </p>
<p>I hope there is one Tiller on kube-system namespace and there are also multiple namespaces: namespaceA, namespaceB, namespaceC, ... .</p>
<p>Finally I hope I can deploy nginx to multiple namespaces like: </p>
<pre><code>helm init --service-account tiller --tiller-namespace kube-system
helm install nginx --tiller-namespace kube-system --namespace namespaceA
helm install nginx --tiller-namespace kube-system --namespace namespaceB
</code></pre>
<p>I'd like to know if it is possible and how can Service Accounts, Roles and Role Bindings be set. </p>
<p>Thanks. </p>
| <p>It can be done with ClusterRoles instead of Roles; this way you can grant permissions in all namespaces. The ClusterRole, ClusterRoleBinding and ServiceAccount code would be:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: tiller-manager
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
resources: ["*"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: tiller-binding
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
roleRef:
kind: ClusterRole
name: tiller-manager
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
</code></pre>
<p>If you only want to grant permissions to few namespaces, you should create a rolebinding in each namespace like this:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: tiller-binding
namespace: namespaceA
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
roleRef:
kind: ClusterRole
name: tiller-manager
apiGroup: rbac.authorization.k8s.io
</code></pre>
|
<p>I have a Kubernetes v1.10.2 cluster and a cronjob on it.
The job config is set to:</p>
<pre><code> failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
</code></pre>
<p>But it has created more than ten jobs, which are all successful and not removed automatically.
Now I am trying to delete them manually, with <code>kubectl delete job XXX</code>, but the command timeout as:</p>
<pre><code>$ kubectl delete job XXX
error: timed out waiting for "XXX" to be synced
</code></pre>
<p>I want to know how can I check in such a situation. Is there a log file for the command execution?</p>
<p>I only know the <code>kubectl logs</code> command, but it is not for such a situation.</p>
<p>"kubectl get" shows the job has already finished:</p>
<pre><code>status:
active: 1
completionTime: 2018-08-27T21:20:21Z
conditions:
- lastProbeTime: 2018-08-27T21:20:21Z
lastTransitionTime: 2018-08-27T21:20:21Z
status: "True"
type: Complete
failed: 3
startTime: 2018-08-27T01:00:00Z
succeeded: 1
</code></pre>
<p>and "kubectl describe" output as:</p>
<pre><code>$ kubectl describe job test-elk-xxx-1535331600 -ntest
Name: test-elk-xxx-1535331600
Namespace: test
Selector: controller-uid=863a14e3-a994-11e8-8bd7-fa163e23632f
Labels: controller-uid=863a14e3-a994-11e8-8bd7-fa163e23632f
job-name=test-elk-xxx-1535331600
Annotations: <none>
Controlled By: CronJob/test-elk-xxx
Parallelism: 0
Completions: 1
Start Time: Mon, 27 Aug 2018 01:00:00 +0000
Pods Statuses: 1 Running / 1 Succeeded / 3 Failed
Pod Template:
Labels: controller-uid=863a14e3-a994-11e8-8bd7-fa163e23632f
job-name=test-elk-xxx-1535331600
Containers:
xxx:
Image: test-elk-xxx:18.03-3
Port: <none>
Host Port: <none>
Args:
--config
/etc/elasticsearch-xxx/xxx.yml
/etc/elasticsearch-xxx/actions.yml
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Environment: <none>
Mounts:
/etc/elasticsearch-xxx from xxx-configs (ro)
Volumes:
xxx-configs:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: test-elk-xxx
Optional: false
Events: <none>
</code></pre>
<p>It indicates still one pod running, but I don't know how to figure out the pod name.</p>
| <p>I think this is the same as the problem reported in github:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/43168" rel="nofollow noreferrer">Cannot delete jobs when their associated pods are gone</a></p>
<p>This is reported by several people, and it is still not fixed.</p>
<p>You can also use the "-v=X" (e.g. -v=8) option with the kubectl command; it will give more detailed debug info.</p>
|
<p>I'm new to Helm and I haven't quite fully grasped the concepts yet. What I'm currently trying to do is to create a custom chart that basically just sets specific values for another chart that's available in the default stable repository. Pretty much what I want to do is have this:</p>
<pre><code>helm install \
-f my-custom-values.yaml \
stable/target-chart \
--name=my-release
</code></pre>
<p>changed into</p>
<pre><code>helm install my-apps/my-release
</code></pre>
<p>With <code>my-release</code> using the same values in <code>my-custom-values.yaml</code>. It's essentially bundling the pre-existent chart into a new one with my custom values.</p>
<p>Is there a way to do this? I think I might be able to clone the source chart, but I don't feel like that's a practical thing to do.</p>
| <p>What is the issue with the first variation? If you have a custom values.yaml that you can pass to helm why do you need to remove it from the command line?</p>
<p>But if you are ready to play around a bit... :)</p>
<p>One way of doing this would be to create your own chart that is mainly empty but contains a requirements.yaml referring to <code>stable/target-chart</code>.</p>
<p>requirements.yaml (just beside Chart.yaml)</p>
<pre><code>dependencies:
- name: stable/target-chart
version: 1.0.0.0.0.0
alias: somealiasforvaluesyaml
</code></pre>
<p>In your values.yaml you then overwrite the values of that sub-chart:</p>
<pre><code>somealiasforvaluesyaml:
keyfromthattargetchart: newvalue
subkeyfromthattargetchart:
enabled: true
setting: "value"
</code></pre>
<p>The alias you give in the requirements.yaml is the section in your values.yaml from your chart.</p>
<p>Before installing you need to tell helm to update these requirements:</p>
<pre><code>helm repo update
helm dependency update
</code></pre>
<p>and then just <code>helm install</code> this (virtual?) chart. This chart does not contain any resources, so it would not be called a package in Linux package managers - but those also use transitional packages, or packages that are just a collection of others (like build-essential).</p>
<p>Considering you already have a values.yaml to overwrite the ones in the target-chart, this is all a bit much. Since the custom values.yaml you pass to install with <code>-f</code> just needs to contain the customization, as it will amend the values.yaml from the target-chart, your first command in the question looks like the correct way to go.</p>
|
<p>I am new to helm charts and I am trying to pass some environment variables to schema-registry </p>
<p>Values.yaml </p>
<pre><code>replicaCount: 1
image:
repository: confluentinc/cp-schema-registry
tag: 5.0.0
pullPolicy: IfNotPresent
env:
- name: "SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS"
value: "PLAINTEXT://xx.xxx.xx.x:9092,PLAINTEXT://xx.xxx.xx.x:9092,PLAINTEXT://xx.xxx.xx.x:9092"
- name: "SCHEMA_REGISTRY_LISTENERS"
value: "http://0.0.0.0:8083"
</code></pre>
<p>But these environment variables are not passed to the pod. </p>
<p>I tried passing as part of install command, but it failed because I cannot pass multiple values, Can anyone please let me know how you have passed your multiple environment variables</p>
<pre><code>ubuntu@ip-10-xx-x-xx:~/helm-test$ helm install helm-test-0.1.0.tgz --set SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://xx.xxx.xx.xx:9092,PLAINTEXT://xx.xxx.xx.xx:9092,PLAINTEXT://xx.xxx.xx.xx:9092,SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8083
Error: failed parsing --set data: key "97:9092" has no value (cannot end with ,)
</code></pre>
<hr>
<p>After trying to pass the environment values both inside the values.yaml file and also as install command</p>
<pre><code>replicaCount: 1
image:
repository: confluentinc/cp-schema-registry
tag: 5.0.0
pullPolicy: IfNotPresent
env:
- name:
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "PLAINTEXT://10.xxx.x.xx:9092,PLAINTEXT://10.xxx.x.xx:9092,PLAINTEXT://10.xxx.x.xx.xxx:9092"
SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8083"
helm install helm-test-0.1.0.tgz --set env.name.SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="PLAINTEXT://10.xx.x.xx:9092\,PLAINTEXT://10.xx.x.xx:9092\,PLAINTEXT://10.xx.x.xx:9092", --set env.nameSCHEMA_REGISTRY_LISTENERS="http://0.0.0.0:8083"
</code></pre>
<p>I escaped the commas since it was throwing an error
Error: failed parsing --set data: key "xxx:9092" has no value (cannot end with ,)</p>
<p>I see that my environment values does not show when i try to describe a deployment.</p>
<pre><code>kubectl describe deployment/crusty-aardwolf-helm-test
Name: crusty-aardwolf-helm-test
Namespace: default
CreationTimestamp: Wed, 10 Oct 2018 14:23:37 +0000
Labels: app.kubernetes.io/instance=crusty-aardwolf
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=helm-test
helm.sh/chart=helm-test-0.1.0
Annotations: deployment.kubernetes.io/revision=1
Selector: app.kubernetes.io/instance=crusty-aardwolf,app.kubernetes.io/name=helm-test
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app.kubernetes.io/instance=crusty-aardwolf
app.kubernetes.io/name=helm-test
Containers:
helm-test:
Image: confluentinc/cp-schema-registry:5.0.0
Port: 80/TCP
Host Port: 0/TCP
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
</code></pre>
<p>Why are my environment values not passed to my container? Can someone please point me in right direction.</p>
| <p>The <a href="https://docs.helm.sh/developing_charts/#values-files" rel="noreferrer"><code>values.yaml</code></a> is more for actual values. You can use <a href="https://golang.org/pkg/text/template/" rel="noreferrer">go template</a> substitutions if you'd like to but it's less common. (These substitutions get used later in a template)</p>
<p>When you specify <code>--set</code>, for example <code>helm install --set foo=bar</code>, the value of <code>foo</code> in the <code>values.yaml</code> file will be overridden by <code>bar</code>. What you may really want is something like this:</p>
<pre><code>...
env:
name:
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "PLAINTEXT://xx.xxx.xx.x:9092,PLAINTEXT://xx.xxx.xx.x:9092,PLAINTEXT://xx.xxx.xx.x:9092"
SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8083"
</code></pre>
<p>and then on the <code>helm install</code> command line:</p>
<pre><code>helm install helm-test-0.1.0.tgz --set env.name.SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="PLAINTEXT://xx.xxx.xx.xx:9092\,PLAINTEXT://xx.xxx.xx.xx:9092\,PLAINTEXT://xx.xxx.xx.xx:9092" --set env.name.SCHEMA_REGISTRY_LISTENERS="http://0.0.0.0:8083"
</code></pre>
<p>More information on how to set the values <a href="https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of-set" rel="noreferrer">here</a>.</p>
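<p>Also note that the chart's deployment template has to actually render these values into the container spec, otherwise they will never show up in <code>kubectl describe</code>. A sketch (assuming the <code>env.name</code> map structure above) could be:</p>
<pre><code># templates/deployment.yaml (excerpt)
        env:
{{- range $key, $value := .Values.env.name }}
          - name: {{ $key }}
            value: {{ $value | quote }}
{{- end }}
</code></pre>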
|
<p>I want to use Artifactory as private Helm repository and deploy my app using helm chart to Kubernetes everytime a new version of helm chart gets uploaded to Artifactory.</p>
| <p>Yes, you can do it. You can find more information about it <a href="https://www.jfrog.com/confluence/display/RTF/Helm+Chart+Repositories" rel="nofollow noreferrer">here</a></p>
<p>The Helm charts repo is a <a href="https://docs.helm.sh/developing_charts/#the-chart-repository-structure" rel="nofollow noreferrer">web server that serves files</a>; it could also be something like an Amazon S3 or Google Cloud Storage bucket to which you add an <code>index.yaml</code> file and then all your charts. So you could also integrate that with Spinnaker.</p>
|
<p>i have a lab environment with a bind server. The server manages the domain "lab.local" DNS Dynamic Update are configured. The lab client (windows and linux) are using the DNS server.</p>
<p>Now i would like do use a kubernetes cluster in our lab. Can i use the bind server with the zone "lab.local" with kubernetes? </p>
<p>For example: i would like to create a nginx pod and access it from my client over nginx.lab.local. I have looked at <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns</a> but i didn't find any Information how to use it with bind.</p>
| <p>Once the nginx Pod has been created, it will have an internal IP by default, which is not addressable from your lab network (only other pods can access it).</p>
<p>To access it from the lab network, expose it as a Service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a>; it will then be reachable on every node's IP at the allocated port. Then add an entry in the bind server pointing at that address so everyone can access it using the URL, for example as sketched below.</p>
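<p>A sketch of such a Service (names, labels and the node port are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # reachable as <any-node-ip>:30080
</code></pre>
<p>In the <code>lab.local</code> zone you would then point <code>nginx.lab.local</code> at one (or more) of the node IPs, and clients would use <code>http://nginx.lab.local:30080</code>.</p>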
<p>There are also other, better ways of exposing a Service, such as using a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">Load Balancer</a> or an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>. For those who are new or getting started with K8S, exposing the Pod using NodePort is the easiest way to see some quick results.</p>
|
<p>We are running Kubernetes with the following configuration:
On-premise Kubernetes 1.11.3, cri-o 1.11.6 and CentOS7 with UEK-4.14.35</p>
<p>I can't make <em>crictl stats</em> to return pods information, it returns empty list only. Has anyone run into the same problem?</p>
<p>Another issue we have, is that when I query the kubelet for stats/summary it returns an empty pods list.</p>
<p>I think that these two issues are related, although I am not sure which one of them is the problem.</p>
| <p>I would recommend checking the <code>kubelet</code> service to verify its health status and to debug any suspicious events within the cluster. I assume the <a href="https://github.com/kubernetes-sigs/cri-o" rel="nofollow noreferrer">CRI-O</a> runtime engine relies on the <code>kubelet</code> as the main provider of Pod information, because of the kubelet's role in managing the Pod lifecycle.</p>
<pre><code>systemctl status kubelet -l
journalctl -u kubelet
</code></pre>
<p>In case you found some errors or dubious events, share it in a comment below this answer.</p>
<p>However, you can use <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">metrics-server</a>, which will collect Pod metrics in the cluster and enable <code>kube-apiserver</code> flags for <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/" rel="nofollow noreferrer">Aggregation Layer</a>. Here is a good <a href="https://docs.bitnami.com/kubernetes/how-to/configure-autoscaling-custom-metrics/" rel="nofollow noreferrer">article</a> about Horizontal Pod Autoscaling and monitoring resources via <a href="https://github.com/prometheus/prometheus" rel="nofollow noreferrer">Prometheus</a>.</p>
|
<p>I am trying to start a local Kubernetes cluster using <code>minikube start</code> and getting the following error.</p>
<pre><code>Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0912 17:39:12.486830 17689 start.go:305] Error restarting
cluster: restarting kube-proxy: waiting for kube-proxy to be
up for configmap update: timed out waiting for the condition
</code></pre>
<p>Any idea how to ensure it starts? I am using VirtualBox and <a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="noreferrer">this</a> tutorial. I tried the tip given <a href="https://stackoverflow.com/questions/49899216/restarting-kube-proxy-wating-for-conditions">here</a> but without luck.</p>
<p>Also any specific reason why it takes so long to even reach to this stage? Or is it usually this slow?</p>
| <p>The following points should be kept in mind when minikube fails to start with such errors.</p>
<p>1) Close <strong>VirtualBox</strong> if it is open. <br>
2) Delete the previously generated <strong>temp</strong> files.<br>
3) If you are behind a proxy, <strong>set the proxy</strong>.<br>
4) Then do the following:</p>
<pre><code> $ minikube stop
$ minikube delete
$ minikube start
</code></pre>
|
<pre><code> |--> service1:8081 --> pod1-a, pod1-b, pod1-c
UI -> load balancer -> ingress (mydomain.com)
|--> service2:8082 --> pod2-a, pod2-b, pod2-c
</code></pre>
<p>So from <code>service1</code>, I could call <code>service2</code> directly with <code>http://service2:8082</code>, but since this is not being done through the UI -> load balancer, how does this get load balanced? Should I not call <code>service2</code> directly, and call it through <code>mydomain.com/service2</code> instead so it would have to go through the flow?</p>
| <p>If you are by-passing the ingress, meaning directly calling <code>http://service2:8082</code> from <code>service1</code>, Kubernetes would pick a random backend based on the default <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer"><code>Proxy-mode: iptables</code></a>.</p>
<p>You can tweak this a bit more if you are using something like <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">ipvs</a>. For example, set up round robin.</p>
<p>You can also use the old <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace" rel="nofollow noreferrer"><code>Proxy-mode: userspace</code></a> if you'd like round robin only.</p>
<p>All these options are configurable using the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer"><code>-proxy-mode ProxyMode</code></a> flag on the kube-proxy.</p>
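<p>For illustration, a kube-proxy configuration enabling IPVS with a round-robin scheduler could look roughly like this (a sketch; it assumes the IPVS kernel modules are available on the nodes):</p>
<pre><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round robin
</code></pre>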
|
<pre><code> |--> service1:8081 --> pod1-a, pod1-b, pod1-c
UI -> load balancer -> ingress (mydomain.com)
|--> service2:8082 --> pod2-a, pod2-b, pod2-c
</code></pre>
<p>So from <code>service1</code>, I could call <code>service2</code> directly with <code>http://service2:8082</code>, but since this is not being done through the UI -> load balancer, how does this get load balanced? Should I not call <code>service2</code> directly, and call it through <code>mydomain.com/service2</code> instead so it would have to go through the flow?</p>
| <p>Invoking a service from another service will hit the iptables rules on the node, which pick a service endpoint to route traffic to. This will be faster.</p>
<p>If you call it through mydomain.com/service2, then the flow passes through additional L7 ingress and will be comparatively slow.</p>
|
<p>I enabled modsecurity: "true" and enable-owasp-modsecurity-crs: "true" via the configmap of the nginx ingresss controller according to <a href="https://kubernetes.github.io/ingress-nginx/user-guide/third-party-addons/modsecurity/" rel="nofollow noreferrer">this link</a> . In the annotation of the ingress I set SecRuleEngine On.
When I use <a href="https://github.com/sullo/nikto" rel="nofollow noreferrer">nikto</a> to do some scans and try to trigger the owasp rules I only see 400 responses in the ingress logging. I would expect 403 responses. Anyone any idea on what I am doing wrong or what to check?</p>
| <p>Followed the instructions on:
<a href="https://karlstoney.com/2018/02/23/nginx-ingress-modsecurity-and-secchatops/" rel="nofollow noreferrer">https://karlstoney.com/2018/02/23/nginx-ingress-modsecurity-and-secchatops/</a></p>
<p>The only thing I had to change was "SecAuditLog /var/log/modsec/audit.log"; I changed it to "SecAuditLog /var/log/modsec_audit.log".</p>
|
<p>I'm currently using a podtemplate (See below) inside my <code>Jenkinsfile</code> to provision a docker container which mounts to the docker socket to provision containers within the pipeline.</p>
<p>As the cloud-hosted kubernetes I use is going from dockerd to containerd as container runtime, I want to ask if there is somebody who is using containerd with jenkins kubernetes plugin (especially podtemplates).</p>
<pre><code>podTemplate(label: 'mypod', cloud: cloud, serviceAccount: serviceAccount, kubenamespace: kubenamespace, envVars: [
envVar(key: 'NAMESPACE', value: kubenamespace),
envVar(key: 'REGNAMESPACE', value: regnamespace),
envVar(key: 'APPNAME', value: appname),
envVar(key: 'REGISTRY', value: registry)
],
volumes: [
hostPathVolume(hostPath: '/etc/docker/certs.d', mountPath: '/etc/docker/certs.d'),
hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
],
containers: [
containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:v2.9.1', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'docker' , image: 'docker:17.06.1-ce', ttyEnabled: true, command: 'cat')]) {
</code></pre>
| <p>The folder structure is slightly different between the Docker engine and containerd. Specifically, the containerd runtime uses the following paths:</p>
<ul>
<li>logs: <code>/var/log/pods/</code></li>
<li>socket: <code>unix:///var/run/containerd/containerd.sock</code></li>
</ul>
<p>This link has more details.
<a href="https://github.com/containerd/containerd/blob/master/docs/ops.md" rel="nofollow noreferrer">https://github.com/containerd/containerd/blob/master/docs/ops.md</a></p>
|
<p>I have a Kubernetes cluster with two services deployed: SvcA and SvcB - both in the service mesh. </p>
<p>SvcA is backed by a single Pod, SvcA_P1. The application in SvcA_P1 exposes a PreStop HTTP hook. When performing a "kubectl drain" command on the node where SvcA_P1 resides, the Pod transitions into the "terminating" state and remains in that state until the application has completed its work (the rest request returns and Kubernetes removes the pod). The work for SvcA_P1 includes completing ongoing in-dialog (belonging to established sessions) HTTP requests/responses. It can stay in the "terminating" state for hours before completing.</p>
<p>When the Pod enters the "terminating" phase, Istio sidecar appears to remove the SvcA_P1 from the pool. Requests sent to SvcA_P1 from e.g., SvcB_P1 are rejected with a "no healthy upstream".</p>
<p>Is there a way to configure Istio/Envoy to:</p>
<ol>
<li>Continue to send traffic/sessions with affinity to SvcA_P1 while in "terminating" state?</li>
<li>Reject traffic without session affinity to SvcA_P1 (no JSESSIONID, cookies, or special HTTP headers)?</li>
</ol>
<p>I have played around with the DestinationRule(s), modifying <code>trafficPolicy.loadBalancer.consistentHash.[httpHeaderName|httpCookie]</code> with no luck. Once the Envoy removes the upstream server, the new destination is re-hashed using the reduced set of servers.</p>
<p>Thanks,</p>
<p>Thor</p>
| <p>According to Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">documentation</a>, when pod must be deleted three things happen simultaneously:</p>
<blockquote>
<ul>
<li>Pod shows up as “Terminating” when listed in client commands</li>
<li>When the Kubelet sees that a Pod has been marked as terminating because the "dead" timer for the Pod has been set in the API server,
it begins the pod shutdown process.
<ul>
<li>If the pod has defined a preStop hook, it is invoked inside of the pod. If the preStop hook is still running after the grace period
expires, step 2 is then invoked with a small (2 second) extended grace
period.</li>
</ul></li>
<li>Pod is removed from endpoints list for service, and are no longer considered part of the set of running pods for replication
controllers. Pods that shutdown slowly <strong>cannot</strong> continue to serve
traffic as <em>load balancers (like the service proxy) remove them from
their rotations</em>.</li>
</ul>
</blockquote>
<p>Since Istio works as a mesh network below/behind Kubernetes Services, and the Services no longer consider a Pod in Terminating state as a destination for traffic, tweaking Istio policies doesn't help much.</p>
|
<p>I'm attempting to connect to a CloudSQL instance via a cloudsql-proxy container on my Kubernetes deployment. I have the cloudsql credentials mounted and the value of <code>GOOGLE_APPLICATION_CREDENTIALS</code> set. </p>
<p>However, I'm still receiving the following error in my logs:</p>
<pre><code>2018/10/08 20:07:28 Failed to connect to database: Post https://www.googleapis.com/sql/v1beta4/projects/[projectID]/instances/[appName]/createEphemeral?alt=json&prettyPrint=false: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: x509: certificate signed by unknown authority
</code></pre>
<p>My connection string looks like this: </p>
<pre><code>[dbUser]:[dbPassword]@cloudsql([instanceName])/[dbName]]?charset=utf8&parseTime=True&loc=Local
</code></pre>
<p>And the proxy dialer is shadow-imported as:</p>
<pre><code>_ github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/mysql
</code></pre>
<p>Anyone have an idea what might be missing?</p>
<p><strong>EDIT:</strong></p>
<p>Deployment Spec looks something like this (JSON formatted):</p>
<pre><code>{
"replicas": 1,
"selector": {
...
},
"template": {
...
"spec": {
"containers": [
{
"image": "[app-docker-imager]",
"name": "...",
"env": [
...
{
"name": "MYSQL_PASSWORD",
...
},
{
"name": "MYSQL_USER",
...
},
{
"name": "GOOGLE_APPLICATION_CREDENTIALS",
"value": "..."
}
],
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"mountPath": "/secrets/cloudsql",
"name": "[secrets-mount-name]",
"readOnly": true
}
]
},
{
"command": [
"/cloud_sql_proxy",
"-instances=...",
"-credential_file=..."
],
"image": "gcr.io/cloudsql-docker/gce-proxy:1.11",
"name": "...",
"ports": [
{
"containerPort": 3306,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"mountPath": "/secrets/cloudsql",
"name": "[secrets-mount-name]",
"readOnly": true
}
]
}
],
"volumes": [
{
"name": "[secrets-mount-name]",
"secret": {
"defaultMode": 420,
"secretName": "[secrets-mount-name]"
}
}
]
}
}
}
</code></pre>
| <p>The error message indicates that your client is not able to trust the certificate of <a href="https://www.googleapis.com" rel="noreferrer">https://www.googleapis.com</a>. There are two possible causes for this:</p>
<ol>
<li><p>Your client does not know what root certificates to trust. The official <a href="https://cloud.google.com/sql/docs/mysql/connect-docker" rel="noreferrer">cloudsql-proxy docker image</a> includes root certificates, so if you are using that image, this is not your problem. If you are not using that image, you should, or at least install CA certificates in your image (see the Dockerfile sketch after this list).</p></li>
<li><p>Your outbound traffic is being intercepted by a proxy server that is using a different, untrusted, certificate. This might be malicious (in which case you need to investigate who is intercepting your traffic). More benignly, you might be in a organization using an outbound proxy to inspect traffic according to policy. If this is the case, you should build a new docker image that includes the CA certificate used by your organization's outbound proxy.</p></li>
</ol>
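<p>A minimal Dockerfile sketch for the first case on a Debian/Ubuntu base (on Alpine it would be <code>apk add --no-cache ca-certificates</code>); the commented lines show how you might additionally trust an organisation proxy CA for the second case — the certificate file name and binary name are placeholders:</p>
<pre><code>FROM debian:stretch-slim
# install the public root CA bundle so TLS to googleapis.com can be verified
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/*
# for an intercepting corporate proxy, also add its CA:
# COPY corp-proxy-ca.crt /usr/local/share/ca-certificates/
# RUN update-ca-certificates
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
</code></pre>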
|
<p>For some strange and unknown reason, when I use a ConfigMap with key value pairs that will be set as environment variables in the pods (using <code>envFrom</code>), my pods fail to start. </p>
<p>Here is the ConfigMap portion of my YAML:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: all-config
data:
# DB configuration
dbServer: "host.docker.internal"
dbPort: "3306"
# problematic config
validationQuery: 'Select 1'
</code></pre>
<p>If I comment out the <code>validationQuery</code> key/value pair, the pod starts. If I leave it in, it fails. If I remove the space, it runs! Very strange behavior as it boils down to a whitespace.</p>
<p>Any ideas on why this fails and how users have been getting around this? Can someone try to reproduce?</p>
| <p>I honestly believe that it's something with your application not liking environment variables with spaces. I tried this myself and I can see the environment variable with the space nice and dandy when I shell into the pod/container.</p>
<p>PodSpec:</p>
<pre><code>...
spec:
containers:
- command:
- /bin/sleep
- infinity
env:
- name: WHATEVER
valueFrom:
configMapKeyRef:
key: myenv
name: j
...
</code></pre>
<hr>
<pre><code>$ kubectl get cm j -o=yaml
apiVersion: v1
data:
myenv: Select 1
kind: ConfigMap
metadata:
creationTimestamp: 2018-10-10T20:44:02Z
name: j
namespace: default
resourceVersion: "11111111"
selfLink: /api/v1/namespaces/default/configmaps/j
uid: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaa
</code></pre>
<hr>
<pre><code>root@mypod-xxxxxxxxxx-xxxxx:/# echo $WHATEVER
Select 1
root@mypod-xxxxxxxxxx-xxxxx:/#
</code></pre>
|
<p>I am trying to run a container on my Kubernetes cluster (<code>1.9.7-gke.6</code>), using a private registry (Artifactory). </p>
<pre><code>Failed to pull image "myrepo.myartifactory.mycompany.com/org/image:latest": rpc error: code = Unknown desc = Error: Status 400 trying to pull repository org/image: "{
\"errors\" :[ {
\"status\" : 400,
\"message\" : \"Unsupported docker v1 repository request for 'myrepo'\"\n } ]
}"
</code></pre>
<p>I assume this means that the <code>docker</code> client tries to perform a <code>v1</code> registry request, which seems to be not supported by our Artifactory installation.</p>
<p>I checked the docker version of my cluster nodes:</p>
<pre><code>$ kubectl describe nodes | grep docker
Container Runtime Version: docker://17.3.2
Container Runtime Version: docker://17.3.2
Container Runtime Version: docker://17.3.2
</code></pre>
<p>I found the Docker flag <code>--disable-legacy-registry=true</code> but I am not sure how to best configure my GKE cluster this way.</p>
| <p>The actual issue was that our credentials of the registry changed. Updating the pull credentials on our cluster fixed the issue.</p>
<p>I assume that the issue can occur under certain circumstances where the registry API returns an error such as an authentication or authorization error. If that is the case, the docker client tries to downgrade to an older API version - which is not available on Artifactory.</p>
<p>This would cause Artifactory to return the mentioned <code>Unsupported docker v1 repository request for 'myrepo'</code> error, which unfortunately masks the actual error.</p>
|
<p>I have a Service Account which I'd like to grant permissions to read/write/update/delete Secrets within a specific namespace. I'm not clear about how exactly Service Accounts, Roles, Bindings, etc. work together to grant the right permissions.</p>
<p>What <code>kubectl</code> invocations or YAML do I need to do to grant these permissions to the service account?</p>
<p>Here's the YAML for the Service Account I have so far:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2018-10-09T17:45:20Z
name: testaccount
namespace: test
resourceVersion: "369702913"
selfLink: /api/v1/namespaces/test/serviceaccounts/testaccount
uid: f742ed5c-c1b3-11e8-8a69-0ade4132ab56
secrets:
- name: testaccount-token-brjxq
</code></pre>
| <p>You need to create Role and Role binding.</p>
<p>Create a role:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: test
name: role-test-account
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<p>Create a role binding:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: role-test-account-binding
namespace: test
subjects:
- kind: ServiceAccount
  name: testaccount
namespace: test
roleRef:
kind: Role
name: role-test-account
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>You can read more about <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">using RBAC Authorization</a></p>
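<p>To sanity-check the result you can impersonate the service account (using the <code>testaccount</code> name from the question):</p>
<pre><code>$ kubectl auth can-i create secrets --namespace test \
    --as system:serviceaccount:test:testaccount
yes
$ kubectl auth can-i create pods --namespace test \
    --as system:serviceaccount:test:testaccount
no
</code></pre>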
|
<p>The goal is to enable Kubernetes api server to connect to resources on the internet when it is on a private network on which internet resources can only be accessed through a proxy.</p>
<p>Background: </p>
<p>A kubernetes cluster is spun up using kubespray containing two apiserver instances that run on two VMs and are controlled via a manifest file. The Azure AD is being used as the identity provider for authentication. In order for this to work the API server needs to initialize its OIDC component by connecting to Microsoft and downloading some keys that are used to verify tokens issued by Azure AD. </p>
<p>Since the Kubernetes cluster is on a private network and needs to go through a proxy before reaching the internet, one approach was to set https_proxy and no_proxy in the kubernetes API server container environment by adding this to the manifest file. The problem with this approach is that when using Istio to manage access to APIs, no_proxy needs to be updated whenever a new service is added to the cluster. One solution could have been to add a suffix to every service name and set *.suffix in no proxy. However, it appears that using wildcards in the no_proxy configuration is not supported.</p>
<p>Is there any alternate way for the Kubernetes API server to reach Microsoft without interfering with other functionality?</p>
<p>Please let me know if any additional information or clarifications are needed.</p>
| <p>I'm not sure how you would have Istio manage the egress traffic for your Kubernetes masters where your kube-apiservers run, so I wouldn't recommend it. As far as I understand, Istio is generally used to manage (ingress/egress/lb/metrics/etc) actual workloads in your cluster and these workloads generally run on your nodes, not masters. I mean the kube-apiserver actually manages the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">CRDs</a> that Istio uses.</p>
<p>Most people use Docker on their masters; you can use the <a href="https://docs.docker.com/network/proxy/#configure-the-docker-client" rel="nofollow noreferrer">proxy</a> environment variables for your containers as you mentioned.</p>
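<p>If you do go the environment-variable route, a hedged sketch of the relevant fragment of the kube-apiserver static-pod manifest (the proxy address and <code>NO_PROXY</code> entries are placeholders; suffix and CIDR handling in <code>NO_PROXY</code> depends on the Go HTTP proxy support built into that Kubernetes version):</p>
<pre><code>spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.11.3
    env:
    - name: HTTPS_PROXY
      value: "http://proxy.corp.example:3128"
    - name: NO_PROXY
      value: "127.0.0.1,localhost,.svc,.cluster.local,10.233.0.0/16"
</code></pre>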
|
<p>Like some article that I previously read. It said that in new Kubernetes version, already include Spark capabilities. But with some different ways such as using KubernetesPodOperator instead of using BashOperator / PythonOperator to do SparkSubmit.</p>
<p>Is the best practice for combining Airflow + Kubernetes to remove Spark and use the KubernetesPodOperator to execute the tasks?</p>
<p>Would that give better performance, since Kubernetes has autoscaling that Spark doesn't have?</p>
<p>Need someone expert in Kubernetes to help me explain this. I’m still newbie with this Kubernetes, Spark, and Airflow things. :slight_smile:</p>
<p>Thank You.</p>
| <blockquote>
<p>in new Kubernetes version, already include Spark capabilities</p>
</blockquote>
<p>I think you got that backwards. New versions of Spark can run tasks in a Kubernetes cluster.</p>
<blockquote>
<p>using KubernetesPodOperator instead of using BashOperator / PythonOperator to do SparkSubmit</p>
</blockquote>
<p>Using Kubernetes would allow you to run containers with whatever isolated dependencies you wanted. </p>
<p>Meaning </p>
<ol>
<li>With BashOperator, you must distribute the files to some shared filesystem or to all the nodes that ran the Airflow tasks. For example, <code>spark-submit</code> must be available on all Airflow nodes. </li>
<li>Similarly with Python, you ship out some zip or egg files that include your pip/conda dependency environment</li>
</ol>
<blockquote>
<p>remove Spark and using KubernetesPodOperator to execute the task</p>
</blockquote>
<p>There are still good reasons to run Spark with Airflow, but instead you would package a Spark driver container that executes <code>spark-submit</code> against the Kubernetes cluster. This way, you only need <code>docker</code> installed, not Spark (and all its dependencies).</p>
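<p>As a hedged illustration of that pattern (native Kubernetes support requires Spark 2.3+; the class, image and API server address below are placeholders):</p>
<pre><code>spark-submit \
  --master k8s://https://my-apiserver.example.com:6443 \
  --deploy-mode cluster \
  --name my-spark-job \
  --class org.example.MyJob \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.container.image=registry.example.com/spark:2.4.0 \
  local:///opt/spark/jars/my-job.jar
</code></pre>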
<blockquote>
<p>Kubernetes have AutoScaling that Spark doesn’t have</p>
</blockquote>
<p>Spark does have <a href="https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation" rel="nofollow noreferrer">Dynamic Resource Allocation...</a></p>
|
<p>If one can push and consume from one topic in a kafka cluster of 3 brokers running in kubernetes with <code>auto_scaling disabled;</code> can that be used to draw a conclusion that we can successfully push and consume from all the other topics in a cluster?</p>
<pre><code>Total brokers in the cluster: 3
Min-InSyncReplicas: 2
No. of partitions for each topic: 25
</code></pre>
| <p>It depends.
If you can produce and consume from a particular topic this is a good indication that everything is working in terms of 'communication' (successful consumption from <code>topic_a</code> does not mean that consumption from <code>topic_b</code> will also be OK). However, 'healthy state' is a quite subjective term. Kafka exposes several metrics via <a href="https://docs.confluent.io/current/kafka/monitoring.html" rel="nofollow noreferrer">JMX</a> for Brokers, Consumers and Producers. Depending on your monitoring requirement you can choose the appropriate metric and decide/judge whether everything is OK (according to your requirements). </p>
<p>Note: If you are using Enterprise Confluent Platform these metrics can also be observed through <a href="https://docs.confluent.io/current/control-center/docs/index.html#control-center" rel="nofollow noreferrer">Confluent Control Center</a>. </p>
|
<p>I have currently a cluster HA (with three multiple masters, one for every AZ) deployed on AWS through kops. Kops deploys a K8S cluster with a pod for etcd-events and a pod for etcd-server on every master node. Every one of this pods uses a mounted volume.</p>
<p>All works well, for example when a master dies, the autoscaling group creates another master node in the same AZ, that recovers its volume and joins itself to the cluster. The problem that I have is respect to a disaster, a failure of an AZ.</p>
<p>What happens if an AZ should have problems? I periodically take volume EBS snapshots, but if I create a new volume from a snapshot (with the right tags to be discovered and attached to the new instance) the new instance mounts the new volumes, but after that, it isn't able to join with the old cluster. My plan was to create a lambda function that was triggered by a CloudWatch event that creates a new master instance in one of the two safe AZ with the volume mounted from a snapshot of the old EBS volume. But this plan has errors because it seems that I am ignoring something about Raft, Etcd, and their behavior. (I say that because I have errors from the other master nodes, and the new node isn't able to join itself to the cluster).</p>
<p>Suggestions?</p>
<p>How do you recover theoretically the situation of a single AZ disaster and the situation when all the master died? I have the EBS snapshots. Is it sufficient to use them?</p>
| <p>I'm not sure how exactly you are restoring the failed node but technically the first thing that you want to recover is your etcd node because that's where all the Kubernetes state is stored. </p>
<p>Since your cluster is up and running you don't need to restore from scratch, you just need to remove the old node and add the new node to etcd. You can find out more on how to do it <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#replacing-a-failed-etcd-member" rel="nofollow noreferrer">here</a>. You don't really need to restore any old volume to this node since it will sync up with the other existing nodes.</p>
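<p>The member replacement itself is roughly the following (a sketch using etcdctl v2 syntax, which the etcd versions deployed by kops at the time speak; the member name and peer URL are placeholders):</p>
<pre><code># on a healthy master
etcdctl member list                     # note the ID of the dead member
etcdctl member remove <member-id>
etcdctl member add etcd-c http://<new-master-ip>:2380
</code></pre>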
<p>Then after this, you can start other services as <code>kube-apiserver</code>, <code>kube-controller-manager</code>, etc.</p>
<p>Having said that, if you keep the same IP address and the exact same physical configs you should be able to recover without removing the etcd node and adding a new one.</p>
|
<p>I'm writing some scripts that check the system to make sure of some cluster characteristics. Things running on private IP address spaces, etc. These checks are just a manual step when setting up a cluster, and used just for sanity checking.</p>
<p>They'll be run on each node, but I'd like a set of them to run when on the master node. Is there a bash, curl, kubectl, or another command that has information indicating the current node is a master node?</p>
| <p>The master(s) usually has the 'master' role associated with it. For example:</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready master 78d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
</code></pre>
<p>It also has a label <code>node-role.kubernetes.io/master</code> associated with it. For example:</p>
<pre><code>$ kubectl get node ip-x-x-x-x.us-west-2.compute.internal -o=yaml
apiVersion: v1
kind: Node
metadata:
annotations:
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: "0"
projectcalico.org/IPv4Address: x.x.x.x/20
volumes.kubernetes.io/controller-managed-attach-detach: "true"
creationTimestamp: 2018-07-23T21:10:22Z
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/instance-type: t3.medium
beta.kubernetes.io/os: linux
failure-domain.beta.kubernetes.io/region: us-west-2
failure-domain.beta.kubernetes.io/zone: us-west-2c
kubernetes.io/hostname: ip-x-x-x-x.us-west-2.compute.internal
node-role.kubernetes.io/master: ""
</code></pre>
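<p>So a sanity-check script running on a node could simply test for that label, for example (assuming <code>kubectl</code> is configured on the node and the Kubernetes node name matches the local hostname, which may not hold everywhere — on AWS it is usually the internal DNS name):</p>
<pre><code>#!/bin/bash
NODE_NAME=$(hostname -f)
# lists only nodes carrying the master role label
if kubectl get nodes -l node-role.kubernetes.io/master -o name | grep -q "node/${NODE_NAME}"; then
  echo "running on a master node"
else
  echo "running on a worker node"
fi
</code></pre>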
|
<p>I am using Docker for Mac with Kubernetes support and I'm struggling to create a Kubernetes Deployment that references a locally built image.</p>
<p>Output of <code>docker images</code>: </p>
<pre><code>REPOSITORY TAG IMAGE
test latest 2c3bdb36a5ed
</code></pre>
<p>My deployment.yaml :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-deployment
spec:
selector:
matchLabels:
app: helloworld
replicas: 1
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: aaa
image: test:latest
ports:
- containerPort: 8080
</code></pre>
<p>When i run <code>kubectl apply -f deplyment.yaml</code> pods are created but:</p>
<pre><code>helloworld-deployment-764b8b85d8-2c4kl 0/1 ImagePullBackOff 0
helloworld-deployment-764b8b85d8-rzq7l 0/1 ImagePullBackOff 0
</code></pre>
<p><code>kubectl describe</code> of one of these pods gives: </p>
<pre><code> Normal Scheduled 20s default-scheduler Successfully assigned helloworld-deployment-79f66d97c6-7tj2x to docker-for-desktop
Normal SuccessfulMountVolume 19s kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-72f44"
Normal BackOff 16s kubelet, docker-for-desktop Back-off pulling image "test:latest"
Warning Failed 16s kubelet, docker-for-desktop Error: ImagePullBackOff
Normal Pulling 4s (x2 over 19s) kubelet, docker-for-desktop pulling image "test:latest"
Warning Failed 2s (x2 over 17s) kubelet, docker-for-desktop Failed to pull image "test:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for test, repository does not exist or may require 'docker login'
Warning Failed 2s (x2 over 17s) kubelet, docker-for-desktop Error: ErrImagePull
</code></pre>
<p>What is interesting is that if i try to run some image hosted on dockerhub then everything is fine,
I also tried to use <a href="https://github.com/GoogleContainerTools/skaffold" rel="noreferrer">skaffold</a> and it also works like a charm...</p>
<p>I see some similar issues regarding minikube where the solution is to use the minikube docker daemon to build images so that they can be referenced from the Kubernetes cluster. </p>
<p>I would like to avoid setting up local repo, so how can I make it work with Docker's Kubernetes ? </p>
| <p>I was able to run a local image by setting the <code>imagePullPolicy</code> to <code>Never</code>.</p>
<p>For example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: local-image-test
spec:
containers:
- name: local-image-container
image: local-image:latest
imagePullPolicy: Never
</code></pre>
<p>(Credit to <a href="https://github.com/kubernetes/kubernetes/issues/1293#issuecomment-357326426" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/1293#issuecomment-357326426</a> for this solution)</p>
|
<p>We have recently moved to micro services based architecture for our enterprise application. we are using Kubernetes cluster to host all our micro services.</p>
<p>Currently we didn't configure ELK for manage our logs, just storing application logs into azure blob storage.</p>
<p>We are facing an issue when multiple pod instances are running for one service, since all instances write to the same log file. Due to this, instances are getting stuck and we are seeing memory leak issues.</p>
<p>I have configured mount path in docker container , and my logback property , has below entry to write the logs.</p>
<p><code><property name="DEV_HOME" value="/mnt/azure/<service-name>/logs" /></code></p>
<p>Is there a way to get the pod instance name in log configuration , so that i can add one more level down, to have separate logs for different instances.</p>
<p>Or is there better way to handle this scenario.</p>
<p><code><property name="DEV_HOME" value="/mnt/azure/<service-name>/<instances>/logs" /></code></p>
| <p>It should be possible to set the Pod information (including the name) as environment variables as mentioned <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">here</a>. In the application read the environment variable and log appropriately.</p>
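<p>A hedged sketch of the wiring: expose the pod name to the container via the Downward API (the variable name <code>POD_NAME</code> is an arbitrary choice),</p>
<pre><code>env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
</code></pre>
<p>and then reference it in the logback property, e.g. <code><property name="DEV_HOME" value="/mnt/azure/<service-name>/${POD_NAME}/logs" /></code>, since logback resolves <code>${...}</code> from OS environment variables as well.</p>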
|
<p>I am new to Kubernetes and looking for a better understanding of the difference between Kube-DNS and CoreDNS.</p>
<p>As I understand it the recommendation is to use the newer CoreDNS rather than the older Kube-DNS. </p>
<p>I have setup a small cluster using <code>kubeadm</code> and now I am a little confused about the difference between CoreDNS and Kube-DNS.</p>
<p>Using <code>kubectl get pods --all-namespaces</code> I can see that I have two CoreDNS pods running.</p>
<p>However using <code>kubectl get svc --all-namespaces</code> I also see that I have a service named <code>kube-dns</code> running in the <code>kube-system</code> namespace. When I inspect that with <code>kubectl describe svc/kube-dns -n kube-system</code> I can see that the <code>kube-dns</code> service links to coredns.</p>
<p>I am now wondering if I am actually running both kube-dns and coredns. Or else, why is that service called <code>kube-dns</code> and not <code>core-dns</code>?</p>
| <p>I have K8S 1.12. Do a describe of the dns pod.</p>
<blockquote>
<p>kubectl describe pod coredns-576cbf47c7-hhjrs --namespace=kube-system | grep -i "image:"</p>
<p>Image: k8s.gcr.io/coredns:1.2.2</p>
</blockquote>
<p>Looks like coredns is running. According to the documentation CoreDNS is default from K8S 1.11. For previous installations it's kube-dns.</p>
<p>The image is what's important; the rest is metadata (names, labels, etc.).</p>
<p>According to the K8S blog <a href="https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/" rel="noreferrer">here</a>.</p>
<blockquote>
<p>In Kubernetes 1.11, CoreDNS has reached General Availability (GA) for DNS-based service discovery, as an alternative to the kube-dns addon. This means that CoreDNS will be offered as an option in upcoming versions of the various installation tools. In fact, the kubeadm team chose to make it the default option starting with Kubernetes 1.11.</p>
</blockquote>
<p>Also, see this link for <a href="https://kubernetes.io/docs/tasks/administer-cluster/coredns/#installing-kube-dns-instead-of-coredns-with-kubeadm" rel="noreferrer">more</a> info.</p>
|
<p>I wanted to know is it possible to have a job in Kubernetes that will run every hour, and will delete certain pods. I need this as a temporary
stop gap to fix an issue.</p>
| <p>Use a CronJob (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">1</a>, <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#cronjob-v1beta1-batch" rel="nofollow noreferrer">2</a>) to run the Job every hour.</p>
<p>K8S API can be accessed from Pod (<a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">3</a>) with proper permissions. When a Pod is created a <code>default ServiceAccount</code> is assigned to it (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="nofollow noreferrer">4</a>) by default. The <code>default ServiceAccount</code> has no RoleBinding and hence the <code>default ServiceAccount</code> and also the Pod has no permissions to invoke the API.</p>
<p>If a role (with permissions) is created and mapped to the <code>default ServiceAccount</code>, then all the Pods by default will get those permissions. So, it's better to create a new ServiceAccount instead of modifying the <code>default ServiceAccount</code>.</p>
<p>So, here are steps for RBAC (<a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">5</a>)</p>
<ul>
<li>Create a ServiceAccount</li>
<li>Create a Role with proper permissions (deleting pods)</li>
<li>Map the ServiceAccount with the Role using RoleBinding</li>
<li>Use the above ServiceAccount in the Pod definition</li>
<li>Create a pod/container with the code/commands to delete the pods</li>
</ul>
<p>I know it's a bit confusing, but that's the way K8S works.</p>
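<p>Putting those steps together, a minimal sketch could look like the following (namespace, schedule, label selector and image are placeholders — any image that contains <code>kubectl</code> will do):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-cleaner
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-cleaner
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-cleaner
  namespace: default
subjects:
- kind: ServiceAccount
  name: pod-cleaner
  namespace: default
roleRef:
  kind: Role
  name: pod-cleaner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pod-cleaner
  namespace: default
spec:
  schedule: "0 * * * *"              # every hour
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner
          restartPolicy: Never
          containers:
          - name: cleaner
            image: lachlanevenson/k8s-kubectl:latest   # any image with kubectl
            command: ["kubectl", "delete", "pod", "-l", "app=my-broken-app"]
</code></pre>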
|
<p>I'm using Docker for Windows with Kubernetes. If I'm disconnected from the internet and restart my computer or restart Kubernetes, then it gets stuck in a perpetual <code>kubernetes is starting...</code> mode. I can run <code>kubectl proxy</code> but anything else fails.</p>
<p>eg <code>kubectl get pod</code> gives me <code>Unable to connect to the server: EOF</code></p>
<p><strong>Edit: Solution</strong></p>
<ul>
<li>Uncheck the <code>Automatically check for updates</code> box in the Kubernetes General Settings fixed it for me.</li>
<li>(optional) Change your deployments to use <code>imagePullPolicy: IfNotPresent</code>. I did this for my <code>kubernetes-dashboard</code> deployment.</li>
</ul>
<p><a href="https://i.stack.imgur.com/Z7xLf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z7xLf.png" alt="enter image description here"></a></p>
<p>Oddly, the kubernetes status would still get stuck in <code>Kubernetes is starting...</code> even though I was able to interact with it via <code>kubectl</code></p>
<p>Following @aurleius's answer, I tried patching my <code>compose</code> and <code>compose-api</code> deployments but those settings were lost whenever I reset via the docker right click menu. I wrote a powershell script to patch the deployment so I'm putting it here just in case.</p>
<pre><code># Patch compose
kubectl patch deployment compose -n docker -p "{ \`"spec\`": { \`"template\`": { \`"spec\`": { \`"containers\`": [{ \`"name\`": \`"compose\`", \`"imagePullPolicy\`": \`"IfNotPresent\`" }] } } } }"
# Patch compose-api
kubectl patch deployment compose-api -n docker -p "{ \`"spec\`": { \`"template\`": { \`"spec\`": { \`"containers\`": [{ \`"name\`": \`"compose\`", \`"imagePullPolicy\`": \`"IfNotPresent\`" }] } } } }"
</code></pre>
| <p>I have tested your scenario on Mac and Windows, and the short answer is that by default you need an Internet connection to run a Kubernetes cluster correctly.</p>
<p>The reason for that is specified in the <a href="https://docs.docker.com/docker-for-windows/#network" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>An Internet connection is required. Images required to run the
Kubernetes server are downloaded and instantiated as containers, and
the Program Files\Docker\Docker\Resources\bin\kubectl.exe` command is installed.</p>
</blockquote>
<p>What the documentation does not make explicit is that the images used to run Kubernetes on Docker may check for updates and pull new images for the Docker-managed pods every time they start.</p>
<p>On Windows, when you turn off the internet, close Docker and then run it again, you can see that:</p>
<pre><code>PS C:\Users\Administrator> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
docker compose-7447646cf5-hzdbl 0/1 CrashLoopBackOff 0 21m
docker compose-api-6fbc44c575-b5b47 0/1 CrashLoopBackOff 1 21m
kube-system etcd-docker-for-desktop 1/1 Running 1 20m
kube-system kube-apiserver-docker-for-desktop 1/1 Running 1 20m
kube-system kube-controller-manager-docker-for-desktop 1/1 Running 1 20m
kube-system kube-dns-86f4d74b45-chzdc 3/3 Running 3 21m
kube-system kube-proxy-xsksv 1/1 Running 1 21m
kube-system kube-scheduler-docker-for-desktop 1/1 Running 1 20m
PS C:\Users\Administrator> kubectl get pods -n kube-system
Unable to connect to the server: EOF
</code></pre>
<p>Machines go to <code>CrashLoopBackOff</code> or <code>ImagePullBackOff</code>, so the Kubernetes cluster is not running because it can't download new images according to its pull policies.
I have found how to prevent this error:</p>
<pre><code>PS C:\Users\Administrator> kubectl get deployments --all-namespaces
NAMESPACE     NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
docker        compose       1         1         1            1           33m
docker        compose-api   1         1         1            1           33m
kube-system   kube-dns      1         1         1            1           33m
</code></pre>
<p>You can see the deployments; now we can change the <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">Image Pull Policy</a> to IfNotPresent. Note that we have to do this for both deployments in the <code>docker</code> namespace.
Edit it:
<code>PS C:\Users\Administrator> kubectl edit deployment compose -n docker</code></p>
<pre><code>spec:
  containers:
  - args:
    - --kubeconfig
    - ""
    - --reconciliation-interval
    - 30s
    image: docker/kube-compose-controller:v0.3.9
    imagePullPolicy: Always
</code></pre>
<p>and change <code>imagePullPolicy</code> to:</p>
<pre><code>    imagePullPolicy: IfNotPresent
</code></pre>
<p><strong>Update:</strong>
From what I've seen there are several scenarios. Interestingly, the update checkbox had no impact on these events:</p>
<ol>
<li>editing the deployments and an offline restart (laptop restart) does not overwrite the imagePullPolicy</li>
<li>editing the deployments and an online laptop restart does not overwrite the imagePullPolicy</li>
<li>if by restart you mean the Cluster restart option in the Docker menu, then yes, it overwrites all the deployment files.</li>
</ol>
<p>I looked for those yaml files but they are nowhere to be found in the Windows file system. I am also not sure whether editing them would work, since it would change their checksum and Docker might not accept it. Another possibility is that it is simply not possible, since Docker on Windows states:</p>
<blockquote>
<p>Images required to run the Kubernetes server are downloaded and
instantiated as containers</p>
</blockquote>
|
<p>I understand that helm consists of a client-side component (the <code>helm</code> CLI) and a cluster-side component (tiller). The docs say that tiller is responsible for building and managing releases. But why does this need to be done from the cluster? Why can't helm build and manage releases from the client, and then simply push resources to kubernetes?</p>
| <p>Tiller can also be run on the client side as mentioned in the Helm documentation <a href="https://docs.helm.sh/using_helm/#running-tiller-locally" rel="nofollow noreferrer">here</a>. The documentation refers to it as <code>Running Tiller Locally</code>.</p>
<p>But, as mentioned in the same documentation, it's mainly intended for development. I have been thinking about it and I'm not exactly sure why it is only recommended for development and not for production.</p>
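<p>For reference, the general shape of running Tiller locally with Helm v2 is roughly the following (a sketch based on the linked docs; flags can differ between Helm releases):</p>
<pre><code># install only the client-side pieces, don't deploy Tiller into the cluster
helm init --client-only

# run the tiller binary locally; it listens on :44134 by default
tiller &

# point the helm client at the local tiller instead of the in-cluster one
export HELM_HOST=localhost:44134
helm version
</code></pre>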
|
<p>I understand that helm consists of a client-side component (the <code>helm</code> CLI) and a cluster-side component (tiller). The docs say that tiller is responsible for building and managing releases. But why does this need to be done from the cluster? Why can't helm build and manage releases from the client, and then simply push resources to kubernetes?</p>
| <p>There were a lot of limitations with running client-side only, as mentioned in this thread: <a href="https://github.com/helm/helm/issues/2722" rel="nofollow noreferrer">https://github.com/helm/helm/issues/2722</a>.
But Helm v3 will be a complete rewrite with no server-side component.</p>
|
<p>I have created AWS EKS cluster since I have created using my AWS userID has been added to <code>system:masters</code> group. But when checked ConfigMap <code>aws-auth</code> I don't see my user ID. <strong>Why ?</strong></p>
<p>I had to give access to another user, so I have to assign appropriate AWS policies to the IAM user, then I edited the ConfigMap <code>aws-auth</code> with the following mapping </p>
<pre><code>mapUsers:
----
- userarn: arn:aws:iam::573504862059:user/abc-user
username: abc-user
groups:
- system:masters
</code></pre>
<p>So far I have understood when a user is part of <code>system:masters</code> group, this user has admin privileges on the cluster.</p>
<p>How can I <strong>add a new user who will have restricted privileges</strong> to a specific namespace? Do I have to do the same thing what I have done for the above user? If so then what group I should add the new user to?</p>
| <p>I would familiarize with <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes RBAC concepts</a></p>
<p>So you can create a <code>Role</code> since these are limited to a specific namespace.</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: my-namespace
name: full-namespace
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
</code></pre>
<p>Then create a <code>RoleBinding</code>:</p>
<pre><code>$ kubectl create rolebinding my-namespace-binding --role=full-namespace --group=namespacegroup --namespace=my-namespace
</code></pre>
<p>Or <code>kubectl create -f</code> this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: my-namespace-binding
namespace: mynamespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: full-namespace
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: namespacegroup
</code></pre>
<p>Then on your ConfigMap:</p>
<pre><code>mapUsers:
----
- userarn: arn:aws:iam::573504862059:user/abc-user
username: abc-user
groups:
- namespacegroup
</code></pre>
|
<p>I wanted to know is it possible to have a job in Kubernetes that will run every hour, and will delete certain pods. I need this as a temporary
stop gap to fix an issue.</p>
| <p>Yes, it's possible.</p>
<p>I think the easiest way is just to call the Kubernetes API directly from a Job. Assuming RBAC is configured, something like this:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup
spec:
  backoffLimit: 4
  template:
    spec:
      serviceAccountName: service-account-that-has-access-to-api
      containers:
      - name: cleanup
        image: image-that-has-curl
        command:
        - /bin/sh
        - -c
        - >
          curl -ik -X DELETE
          -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
          https://kubernetes.default.svc.cluster.local/api/v1/namespaces/{namespace}/pods/{name}
      restartPolicy: Never
</code></pre>
<p>You can also run a kubectl proxy sidecar to connect to the cluster using <code>localhost</code>. More information <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">here</a></p>
<p>Or even running plain <code>kubectl</code> in a pod is also an option: <a href="https://stackoverflow.com/questions/42642170/kubernetes-how-to-run-kubectl-commands-inside-a-container">Kubernetes - How to run kubectl commands inside a container?</a></p>
|
<p>I am trying to follow <a href="https://burhan.io/flask-application-monitoring-with-prometheus/" rel="nofollow noreferrer">https://burhan.io/flask-application-monitoring-with-prometheus/</a> and make my pods discovered by Prometheus but I am not having any luck. Could someone see what I am doing wrong or debug it?</p>
<p>First to make sure my app is configured right...I configured it directly and saw the metrics in Prometheus.</p>
<pre><code>- job_name: 'myapp'
scheme: http
static_configs:
- targets: ['172.17.0.7:9090']
</code></pre>
<p>Next, I tried to do the discovery. This is how the deployment looks</p>
<pre><code>kind: Deployment
metadata:
name: myapp
labels:
app: myapp
spec:
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myapp:latest
ports:
- containerPort: 9090
...
</code></pre>
<p>and this is the prometheus config</p>
<pre><code> - job_name: 'kubernetes-pods'
scheme: http
metrics_path: /metrics
kubernetes_sd_configs:
- role: node
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_app]
regex: myapp
action: keep
</code></pre>
<p>but I don't see any metrics in Prometheus or any mention of <code>myapp</code> in Prometheus debug log. What am I missing?</p>
| <p>I see that you didn't define <code>- api_server: 'https://kubernetes'</code>. Make sure you define api-server in <code>kubernetes_sd_config</code>. Prometheus auto discovers services via api-server.</p>
<ul>
<li>Please refer to my <a href="https://stackoverflow.com/questions/47682158/prometheus-auto-discovery-k8s">previous question</a></li>
<li>A sample configuration is in my repo <a href="https://github.com/veerendra2/prometheus-k8s-monitoring/blob/master/prometheus-configmap.yml" rel="nofollow noreferrer">here</a></li>
<li>See the Prometheus <code>kubernetes_sd_config</code> docs for the full list of options</li>
</ul>
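<p>For pod discovery specifically, a hedged sketch of the scrape config — note that the config in the question uses <code>role: node</code>, which discovers cluster nodes rather than pods; in-cluster authentication via the mounted service account token is assumed here:</p>
<pre><code>- job_name: 'kubernetes-pods'
  metrics_path: /metrics
  kubernetes_sd_configs:
  - role: pod
    api_server: 'https://kubernetes.default.svc'
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    regex: myapp
    action: keep
</code></pre>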
|
<p>I have deployed ocelot and consul on the kubernetes cluster. Ocelot acts as the api gateway which will distribute request to internal services. And consul is in charge of service discovery and health check. (BTW, I deploy the consul on the kubernetes cluster following the consul's <a href="https://www.consul.io/docs/platform/k8s/run.html" rel="nofollow noreferrer">official document</a>).</p>
<p>And my service (i.e. an asp.net core webapi) is also deployed to the kubernetes cluster with 3 replicas. I didn't create a kubernetes service object as those pods will only be consumed by the ocelot gateway, which is in the same cluster.</p>
<p>The architecture is something like below:</p>
<pre><code> ocelot
|
consul
/\
webapi1 webapi2 ...
(pod) (pod) ...
</code></pre>
<p>Also, IMO, consul can de-register a pod (webapi) when the pod is dead, so I don't see any need to create a kubernetes service object.</p>
<p>Now My question: <strong>is it right to register each pod(webapi) to the consul when the pod startup? Or should I create a kubernete service object in front of those pods (webapi) and register the service object to the consul?</strong> </p>
| <ul>
<li><strong><code>Headless Service</code> is the answer</strong></li>
</ul>
<p>Kubernetes environment is more dynamic in nature.</p>
<blockquote>
<p>deregister a service when the pod is dead</p>
</blockquote>
<p>Yes</p>
<blockquote>
<p>Kubernetes Pods are mortal. They are born and when they die, they are
not resurrected. While each Pod gets its own IP address, even those IP
addresses cannot be relied upon to be stable over time. A Kubernetes
Service is an abstraction which defines a logical set of Pods and
provides stable ip</p>
</blockquote>
<p>That's why it is recommended to use a <code>headless service</code>, which basically fits this situation, as mentioned in the first line of the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Sometimes you don’t need or want load-balancing and a single service
IP. In this case, you can create “headless” services by specifying
"None" for the cluster IP (.spec.clusterIP)</p>
</blockquote>
<p>A headless service doesn't get a <code>ClusterIP</code>. If you do an <code>nslookup</code> on the headless service, it will resolve to the IPs of all pods that are behind it. K8s takes care of adding/removing pod IPs under the headless service as pods come and go; see the docs linked above for more details. And I believe you can register/provide this headless service name in Consul.</p>
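<p>A minimal sketch of such a headless service for the webapi pods could be (the label is a placeholder matching whatever your pods carry):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: webapi
spec:
  clusterIP: None        # this is what makes the service "headless"
  selector:
    app: webapi
  ports:
  - port: 80
    targetPort: 8080
</code></pre>
<p>An <code>nslookup webapi</code> from inside the cluster then returns one A record per ready pod, and that single DNS name is what you would hand to Consul/Ocelot instead of registering every pod IP yourself.</p>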
<ul>
<li>Please refer to this detailed blog post <a href="https://akomljen.com/stateful-applications-on-kubernetes/" rel="nofollow noreferrer">here</a></li>
</ul>
<p><strong>UPDATE1:</strong></p>
<p>Please refer this <a href="https://www.youtube.com/watch?v=gsf__yuWCF8" rel="nofollow noreferrer">Youtube video</a>. May give you some idea.(Even I have to watch it..!!)</p>
|
<p>I've created K8S cluster in AWS and generated certificates per each component and they can connect eachother. But while I'm trying to get logs or installing an application via Helm, i'M getting below error : </p>
<pre><code>$ helm install ./.helm
Error: forwarding ports: error upgrading connection: error dialing backend: x509: certificate is valid for bla-bla-bla.eu-central-1.elb.amazonaws.com, worker-node, .*.compute.internal, *.*.compute.internal, *.ec2.internal, bla-bla-bla.eu-central-1.elb.amazonaws.com, not ip-172-20-74-98.eu-central-1.compute.internal`
</code></pre>
<p>and my certificate is :</p>
<pre><code>X509v3 Subject Alternative Name:
DNS:bla-bla-bla.eu-central-1.elb.amazonaws.com, DNS:worker-node, DNS:.*.compute.internal, DNS:*.*.compute.internal, DNS:*.ec2.internal, DNS:bla-bla-bla.eu-central-1.elb.amazonaws.com, IP Address:172.20.32.10, IP Address:172.20.64.10, IP Address:172.20.96.10`
</code></pre>
<p>Thanks for your help
best,</p>
| <p>Wildcard certificates can only be used for a single segment of DNS names. You will need a certificate valid for <code>ip-172-20-74-98.eu-central-1.compute.internal</code>
or <code>*.eu-central-1.compute.internal</code></p>
|
<p>I have a Kubernetes cluster in which I've created a deployment to run a pod. Unfortunately after running it the pod does not want to self-terminate, instead it enters a continuous state of restart/CrashLoopBackOff cycle. </p>
<p>The command (on the entry point) runs correctly when first deployed, and I want it to run only one time.</p>
<p>I am programatically deploying the docker image with the entrypoint configured, using the Python K8s API. Here is my deployment YAML:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kio
namespace: kmlflow
labels:
app: kio
name: kio
spec:
replicas: 1
selector:
matchLabels:
app: kio
name: kio
template:
metadata:
labels:
app: kio
name: kio
spec:
containers:
- name: kio-ingester
image: MY_IMAGE
command: ["/opt/bin/kio"]
args: ["some", "args"]
imagePullPolicy: Always
restart: Never
backofflimit: 0
</code></pre>
<p>Thanks for any help</p>
<p>Output from kubectl pod is:</p>
<pre><code>Name: ingest-160-779874b676-8pgv5
Namespace: kmlflow
Priority: 0
PriorityClassName: <none>
Node: 02-w540-02.glebe.kinetica.com/172.30.255.205
Start Time: Thu, 11 Oct 2018 13:31:20 -0400
Labels: app=kio
name=kio
pod-template-hash=3354306232
Annotations: <none>
Status: Running
IP: 10.244.0.228
Controlled By: ReplicaSet/ingest-160-779874b676
Containers:
kio-ingester:
Container ID: docker://b67a682d04e69c2dc5c1be7e02bf2e4cf7a12a7557dfbe642dfb531ca4b03f07
Image: kinetica/kinetica-intel
Image ID: docker-pullable://docker.io/kinetica/kinetica-intel@sha256:eefbb6595eb71822300ef97d5cbcdac7ec58f2041f8190d3a2ba9cffd6a0d87c
Port: <none>
Host Port: <none>
Command:
/opt/gpudb/bin/kio
Args:
--source
kinetica://172.30.50.161:9191::dataset_iris
--destination
kinetica://172.30.50.161:9191::iris5000
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 11 Oct 2018 13:33:27 -0400
Finished: Thu, 11 Oct 2018 13:33:32 -0400
Ready: False
Restart Count: 4
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-69wkn (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-69wkn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-69wkn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m39s default-scheduler Successfully assigned kmlflow/ingest-160-779874b676-8pgv5 to 02-w540-02.glebe.kinetica.com
Normal Created 89s (x4 over 2m28s) kubelet, 02-w540-02.glebe.kinetica.com Created container
Normal Started 89s (x4 over 2m28s) kubelet, 02-w540-02.glebe.kinetica.com Started container
Warning BackOff 44s (x7 over 2m15s) kubelet, 02-w540-02.glebe.kinetica.com Back-off restarting failed container
Normal Pulling 33s (x5 over 2m28s) kubelet, 02-w540-02.glebe.kinetica.com pulling image "kinetica/kinetica-intel"
Normal Pulled 33s (x5 over 2m28s) kubelet, 02-w540-02.glebe.kinetica.com Successfully pulled image "kinetica/kinetica-intel"
</code></pre>
<p>There is no output from <code>Kubectl logs <crashing-pod></code> because a successful run of the command KIO with the injected parameters does not print anything to standard output.</p>
| <p>If you'd like to run your task one time and finish after a successful completion you should consider using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Kubernetes Jobs</a> or <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJobs</a></p>
<p>Something like this:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
  labels:
    app: kio
    name: kio
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: kio
        name: kio
    spec:
      containers:
      - name: kio-ingester
        image: MY_IMAGE
        command: ["/opt/bin/kio"]
        args: ["some", "args"]
        imagePullPolicy: Always
      restartPolicy: Never
</code></pre>
<p>To delete the jobs automatically, if you have Kubernetes 1.12 or later you can use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically" rel="nofollow noreferrer"><code>ttlSecondsAfterFinished</code></a>. Unfortunately, if you are using Kubernetes 1.11 or earlier you will have to delete them manually, or you can set up a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> to do it.</p>
|
<p>I have a rabbitMQ in my project, and I want the queues on one pod to be on the other and the information on one pod to be shared with the other pod. Is there a way for you to share the same volume and both can read and write? I use GCloud.</p>
| <p>Actually, it is possible to share a GCEPersistentDisk volume between more than one pod. The <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">document</a> says it doesn't support ReadWriteMany mode, however it does support ReadWriteOnce mode. That means if a pod mounts the volume with ReadWriteOnce mode, no other pod can mount it until that pod unmounts the volume. So, this is like a locking mechanism in writing mode. On the other hand, multiple Pods can mount the volume in ReadOnlyMany.</p>
<p>You can review this <a href="https://stackoverflow.com/questions/31693529/how-to-share-storage-between-kubernetes-pods">Stack link</a> and also <a href="https://stackoverflow.com/questions/37649541/kubernetes-persistent-volume-accessmode">this one</a> for more information.</p>
|
<p>I made an image using a Dockerfile and created a pod from that image.
But an error occurred: when I get a shell into the pod and run 'systemctl', I get a D-Bus error. How can I solve this problem?</p>
<pre><code>Failed to get D-Bus connection: Operation not permitted
</code></pre>
| <p>You basically can't use <code>systemctl</code> in Docker, since containers don't run the <code>systemd</code> daemon. This is tricky to run in Docker itself, and the couple of workarounds that could make it go are difficult-to-impossible to express in Kubernetes.</p>
<p>You should come up with some other way to run your process. Usually a container will only run a single service, and so instead of starting it via a systemd unit file you can just run the service directly; your Dockerfile might end with something like</p>
<pre><code>CMD ["/usr/sbin/myserviced", "--foreground"]
</code></pre>
|
<p>Cassandra cluster on aws is failing to start.
The error is as follows.</p>
<blockquote>
<p>INFO [main] 2018-10-11 08:11:42,794 DatabaseDescriptor.java:729 -
Back-pressure is disabled with strategy
org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9,
factor=5, flow=FAST}.</p>
<p>WARN [main] 2018-10-11 08:11:42,848 SimpleSeedProvider.java:60 - Seed
provider couldn't lookup host
cassandra-0.cassandra.default.svc.cluster.local Exception
(org.apache.cassandra.exceptions.ConfigurationException) encountered
during startup: The seed provider lists no seeds. The seed provider
lists no seeds. ERROR [main] 2018-10-11 08:11:42,851
CassandraDaemon.java:708 - Exception encountered during startup: The
seed provider lists no seeds.</p>
</blockquote>
<p>Here are my details of it.</p>
<pre><code>$kubectl get pods [13:48]
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 19h
cassandra-1 0/1 CrashLoopBackOff 231 19h
$kubectl get services [13:49]
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra NodePort 100.69.201.208 <none> 9042:30000/TCP 1d
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 15d
$kubectl get pvc [13:50]
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cassandra-data-cassandra-0 Pending fast 15d
cassandra-storage-cassandra-0 Bound pvc-f3ff4203-c0a4-11e8-84a8-02c7556b5a4a 320Gi RWO gp2 15d
cassandra-storage-cassandra-1 Bound pvc-1bc3f896-c0a5-11e8-84a8-02c7556b5a4a 320Gi RWO gp2 15d
$kubectl get namespaces [13:53]
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
</code></pre>
<p>Even the working pod is not loading storage.</p>
<p>It was working fine till I tried to change <code>MAX_HEAP_SIZE</code> from 1024M to 2048M.</p>
<p>After that, even though I deleted all the old pods and services and created them fresh, it's still not working.</p>
| <p>You are using the NodePort type. This will not make the service a headless service which is why the IP-address of the pod doesn't get resolved.</p>
<p>What you need to do is create a separate headless service. You also need to create your own Docker image and run a script in your entrypoint that will fetch all the IPs for the service domain name.</p>
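<p>A hedged sketch of such a headless service: for the seed name <code>cassandra-0.cassandra.default.svc.cluster.local</code> to resolve, the governing service must be named <code>cassandra</code>, be headless, and be referenced by the StatefulSet's <code>serviceName</code>, so the existing NodePort service would need a different name (for example <code>cassandra-ext</code>) for external CQL access:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  clusterIP: None          # headless: DNS resolves directly to the pod IPs
  selector:
    app: cassandra
  ports:
  - port: 7000
    name: intra-node
  - port: 9042
    name: cql
</code></pre>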
<p>You can look at the following project as an example: <a href="https://github.com/vyshane/cassandra-kubernetes/" rel="nofollow noreferrer">https://github.com/vyshane/cassandra-kubernetes/</a></p>
|
<p>There's a kubernetes cluster that was created by kops
and when running <code>kops validate</code> this is the output:</p>
<pre><code> INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east-1a Master m4.xlarge 1 1 us-east-1a
nodes Node c4.2xlarge 1 75 us-east-1a
NODE STATUS
NAME ROLE READY
ip-172-20-59-93.ec2.internal master False
VALIDATION ERRORS
KIND NAME MESSAGE
Machine i-0a44bbdd18c86e846 machine "i-0a44bbdd18c86e846" has not yet joined cluster
Machine i-0d3302056f3dfeef0 machine "i-0d3302056f3dfeef0" has not yet joined cluster
Machine i-0d6199876b91962f4 machine "i-0d6199876b91962f4" has not yet joined cluster
Node ip-172-20-59-93.ec2.internal master "ip-172-20-59-93.ec2.internal" is not ready
Validation Failed
</code></pre>
<p>How I can recover this cluster? The s3 files for this cluster are available.</p>
<p>The etcd volumes show status as "in-use"</p>
<p><a href="https://i.stack.imgur.com/s26kT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s26kT.png" alt="screenshot"></a></p>
| <p>kops stores the state of your cluster in S3.</p>
<ol>
<li>Find the bucket where the state is stored</li>
<li>Set <code>export KOPS_STATE_STORE=s3://your-k8s-state-store</code></li>
<li>Run <code>kops update cluster</code></li>
<li>If that fails. </li>
<li>Terminate all your instances related to the cluster</li>
<li>Run <code>kops create cluster</code></li>
</ol>
<p>Note that the internal state of your cluster is not in S3 but in etcd. The answer here has more details about this topic and also how to backup/restore etcd: <a href="https://stackoverflow.com/questions/51408546/how-to-restore-kubernetes-cluster-using-kops">How to restore kubernetes cluster using kops?</a></p>
|
<p>I keep reading documentation that gives parameters for kube-proxy, but does not explain how where these parameters are supposed to be used. I create my cluster using <code>az aks create</code> with the azure-cli program, then I get credentials and use kubectl. So far everything I've done has involved yaml for services and deployments and such, but I can't figure out where all this kube-proxy stuff fits into all of this.</p>
<p>I've googled for days. I've opened question issues on github with AKS. I've asked on the kubernetes slack channel, but nobody has responded.</p>
| <p>The <code>kube-proxy</code> on all your Kubernetes nodes runs as a Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noreferrer"><code>DaemonSet</code></a> and its configuration is stored on a Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="noreferrer"><code>ConfigMap</code></a>. To make any changes or add/remove options you will have to edit the kube-proxy <code>DaemonSet</code> or <code>ConfigMap</code> on the <code>kube-system</code> namespace.</p>
<pre><code>$ kubectl -n kube-system edit daemonset kube-proxy
</code></pre>
<p>or </p>
<pre><code>$ kubectl -n kube-system edit configmap kube-proxy
</code></pre>
<p>For a reference on the <code>kube-proxy</code> command line options you can refer to <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="noreferrer">here</a>.</p>
|