prompt | response
---|---
<p>I have a Node app that's composed of several independent modules, talking to each other over AMQP. This app is started by an index.js file that instantiates every other index.js in the different folders of my project, which in turn instantiate the actual modules.</p>
<p>I've been reading about microservices and I'd like to convert my application to Docker containers. I've found examples of how to convert a simple Node app to Docker, but I want to separate my app so that every module is in an independent Node container (this is because each module is independent and doesn't depend on other modules to work. They receive work from the message queue, and put results on the message queue).</p>
<p>What I can't find is information on how I should organize and deploy my code. Should I have a different Node project (with a separate package.json) for each of my modules? Or should I have a single Node project for all my modules and deploy each of them individually?</p>
<p>After I have my project organized, is there a script tool that will generate (build* and deploy) each of my modules to its own container? All examples I've found so far are "hello world" level samples that just pack one app.</p>
<p>During development, will I have to deploy new containers for each change I test?</p>
<p>*: build because I use ES6 and I have to use Babel.</p>
| <blockquote>
<p>Should I have a separate package.json for each Dockerfile?
From what I've read, I understand the Dockerfile will actually run NPM
and download the actual packages. If I have a single package file I
would have a lot of bloat in each of my images, with unused modules.</p>
</blockquote>
<p>Sure, you will have some bloat; that's the beauty of microservices, and <a href="http://www.markhneedham.com/blog/2012/11/28/micro-services-the-curse-of-code-duplication/" rel="nofollow noreferrer">that's the curse of microservices</a>.</p>
<blockquote>
<p>Should I have a separate package.json?</p>
</blockquote>
<p>Okay, let me try to address this question specifically. Let's say your moduleA uses lodash.x.x.x and you would like to use lodash.x+1.x.x in moduleB, and you know lodash.x+1.x.x is not backward compatible with lodash.x.x.x. If you are now forced to make moduleA's code compatible with lodash.x+1.x.x, we call such applications monoliths, not microservices. To answer your question: yes, you probably need a separate package.json per module, unless you can have a parent package.json with common dependencies and submodules each with their own package.json (I'm not from node-land, so I'm not sure whether package.json has such capabilities). </p>
<blockquote>
<p>What I can't find is information on how I should organize and deploy
my code. Should I have a different Node project (with a separate
package.json) for each of my modules? Or should I have a single Node
project for all my modules and deploy each of them individually?</p>
</blockquote>
<p>I have seen both patterns (separate projects as well as submodules in the same project). My personal opinion (for what it's worth) is to have separate projects and separate code bases, because your "modules" already talk over a language-agnostic protocol (AMQP). Tomorrow you might want to use Go or Kotlin for "ModuleB" (microserviceB, depending on how you look at it :)), who knows.</p>
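<p>To make that concrete, here is a minimal sketch of a per-module Dockerfile (the module layout, base image tag and the Babel "build" npm script are assumptions, not taken from the question); each module keeps its own package.json next to its Dockerfile, so each image only installs what that module actually needs:</p>
<pre><code># modules/moduleA/Dockerfile (hypothetical layout)
FROM node:8-alpine
WORKDIR /usr/src/app

# install only this module's dependencies
COPY package.json ./
RUN npm install

# copy the sources and transpile ES6 with Babel (assumes a "build" npm script exists)
COPY . .
RUN npm run build

CMD ["node", "dist/index.js"]
</code></pre>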
<blockquote>
<p>After I have my project organized, is there a script tool that will
generate (build* and deploy) each of my modules to its own container?</p>
</blockquote>
<p>One thing to clarify here: development and deployment of one microservice should not force you to change/deploy other microservices as long as the contract between them is kept intact; otherwise it is a distributed monolith (whoa... did I just invent a new term? fancy!!). When there is a contract change you should bump the version and have both versions running for a while until all the concerned parties (your other microservices) get a chance to upgrade.</p>
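<p>As for a build/deploy script: one common option (shown here only as a hedged sketch; the service names, paths and RabbitMQ image are assumptions) is a docker-compose file that points each service at its own module directory, so <code>docker-compose build</code> produces one image per module and <code>docker-compose up</code> runs them all during development:</p>
<pre><code># docker-compose.yml (sketch)
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3
  module-a:
    build: ./modules/moduleA   # uses modules/moduleA/Dockerfile
    depends_on:
      - rabbitmq
  module-b:
    build: ./modules/moduleB
    depends_on:
      - rabbitmq
</code></pre>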
|
<blockquote>
<p>Per Taking Solr To Production
(<a href="https://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html" rel="nofollow noreferrer">https://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html</a>),
"Running Solr as root is not recommended for security reasons, and the
<strong>control script start command will refuse to do so.</strong></p>
</blockquote>
<p>Provisioning of the persistent volume succeeds. However, when we claim and mount that volume into the folder structure for our Pod, the mounted folder is only writable as root. Therefore, the SolrCloud microservices cannot store their configuration files, core/collection data, or backups on the persistent volume. </p>
<p><strong>How should we go about addressing this permissions issue in Kubernetes, since Solr enforces the inability to use root via the Solr command / start script?</strong></p>
<p><a href="https://i.stack.imgur.com/xwrPE.png" rel="nofollow noreferrer">Here is also an excerpt from the running pod after mounting, showing the permissions problem (root ownership for data folder):</a></p>
<p>Here is also information about the Kubernetes server version:</p>
<pre><code>C:\Users\xxxx>kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.8+coreos.0", GitCommit:"fc34f797fe56c4ab78bdacc29f89a33ad8662f8c", GitTreeState:"clean", BuildDate:"2017-08-05T00:01:34Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Please see below yaml, docker file and start script.</p>
<p>yaml file:</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
  labels:
    app: "solrclouddemo1"
    version: "1.0.0"
data:
  config-env: dev
  zookeeper-hosts: xxxx.com:2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
  labels:
    app: "solrclouddemo1"
    version: "1.0.0"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "solrclouddemo1"
      version: "1.0.0"
  template:
    metadata:
      labels:
        app: "solrclouddemo1"
        version: "1.0.0"
        build: "252"
        developer: "XXX"
      annotations:
        prometheus.io/scrape.ne: 'true'
        prometheus.io/port: '8000'
    spec:
      serviceAccount: "default"
      containers:
      - env:
        - name: ENV
          valueFrom:
            configMapKeyRef:
              key: config-env
              name: "solrclouddemo1"
        - name: ZK_HOST
          valueFrom:
            configMapKeyRef:
              key: zookeeper-hosts
              name: "solrclouddemo1"
        - name: java_runtime_arguments
          value: ""
        image: "xxx.com:5100/com.xxx.cppseed/solrclouddemo1:1.0.0"
        imagePullPolicy: Always
        name: "solrclouddemo1"
        ports:
        - name: http
          containerPort: 8983
          protocol: TCP
        resources:
          requests:
            memory: "600Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
  labels:
    app: "solrclouddemo1"
    version: "1.0.0"
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8983
  selector:
    app: "solrclouddemo1"
    version: "1.0.0"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
spec:
  selector:
    matchLabels:
      app: "solrclouddemo1"
      version: "1.0.0"
  minAvailable: 1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
spec:
  selector:
    matchLabels:
      app: "solrclouddemo1"
  serviceName: "solrclouddemo1"
  replicas: 1
  template:
    metadata:
      labels:
        app: "solrclouddemo1"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - "solrclouddemo1"
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: "solrclouddemo1"
        command:
        - "/bin/bash"
        - "-c"
        - "/opt/docker-solr/scripts/startService.sh"
        imagePullPolicy: Always
        image: "xxx.com:5100/com.xxx.cppseed/solrclouddemo1:1.0.0"
        resources:
          requests:
            memory: "600Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
        ports:
        - containerPort: 8983
          name: http
        volumeMounts:
        - name: datadir
          mountPath: /opt/solr/server/data
      securityContext:
        runAsUser: 8983
        fsGroup: 8983
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      selector:
        matchLabels:
          app: cppseed-solr
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM xxx.com:5100/com.xxx.public/solr:7.0.0
LABEL maintainer="xxx.com"
ENV SOLR_USER="solr" \
SOLR_GROUP="solr"
# AAF Authentication
ADD aaf/config/ /opt/solr/server/etc/
ADD aaf/etc/ /opt/solr/server/etc/
ADD aaf/jars/ /opt/solr/server/lib/
ADD aaf/security/ /opt/solr/
# Entrypoint
ADD docker/startService.sh /opt/docker-solr/scripts/
# Monitoring
VOLUME /etc
#ADD monitoring/monitoring.jar /monitoring.jar
ADD /etc/ /etc/
# Permissions
USER root
RUN apt-get install sudo -y && \
chown -R $SOLR_USER:$SOLR_GROUP /opt/solr && \
chown -R $SOLR_USER:$SOLR_GROUP /opt/docker-solr/scripts/ && \
chmod 777 /opt/docker-solr/scripts/startService.sh
# && \ chmod 777 /monitoring.jar
WORKDIR /opt/solr
ENTRYPOINT ["startService.sh"]
</code></pre>
<p>startService.sh</p>
<pre><code>#!/bin/bash
#
# docker-entrypoint for docker-solr
# Fail immediately if anything has a non-zero result status
set -e
# Optionally echo commands before running them for debugging.
if [[ "$VERBOSE" = "yes" ]]; then
set -x
fi
# execute command passed in as arguments.
# The Dockerfile has specified the PATH to include
# /opt/solr/bin (for Solr) and /opt/docker-solr/scripts (for our scripts
# like solr-foreground, solr-create, solr-precreate, solr-demo).
# Note: if you specify "solr", you'll typically want to add -f to run it in
# the foreground.
echo "Invoking solr-foreground"
# Allow the clients to pass in java_runtime_arguments to tune the solr runtime when invoking the pipeline
if [[ -z "${java_runtime_arguments}" ]]; then
echo "No java_runtime_arguments received, so using default values"
exec solr-foreground -c -noprompt $@
else
echo "Received custom java_runtime_arguments. User will be responsible for prefixing all values passed with -a to allow SolrCloud to accept them. User is also responsible for establishing the -a -javaagent:/monitoring.jar=8000-/etc/config/prometheus_jmx_config.yaml-/etc/config/prometheus_application_config.yaml-/metrics which is used for Prometheus monitoring"
exec solr-foreground -c -noprompt $java_runtime_arguments $@
fi
</code></pre>
| <p>Workaround: Use initContainers </p>
<pre><code># Before the Pod's main containers start, this initContainer changes the ownership of the mounted volume
initContainers:
- name: volume-mount-hack
  image: busybox
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 250m
      memory: 600Mi
  command:
  - /bin/sh
  - -c
  # note: if the busybox image has no 'solr' user, the numeric UID/GID (e.g. 8983:8983) may be needed instead
  - "chown -R solr:solr /opt/solr/server/data"
  volumeMounts:
  - name: datadir
    mountPath: /opt/solr/server/data
</code></pre>
<p>Make sure to use the same volumeMount details in the container spec, along with runAsUser:</p>
<pre><code>containers:
- name: "${APP_NAME}"
  imagePullPolicy: Always
  image: "${IMAGE_NAME}"
  env:
  - name: ENV
    valueFrom:
      configMapKeyRef:
        key: config-env
        name: "${APP_NAME}"
  - name: ZK_HOST
    valueFrom:
      configMapKeyRef:
        key: zookeeper-hosts
        name: "${APP_NAME}"
  - name: ZK_CLIENT_TIMEOUT
    value: "30000"
  - name: java_runtime_arguments
    value: "${JAVA_RUNTIME_ARGUMENTS}"
  command:
  - "/bin/bash"
  - "-c"
  - "/opt/docker-solr/scripts/startService.sh"
  resources:
    requests:
      memory: "600Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"
  ports:
  - containerPort: 8983
    name: http
  volumeMounts:
  - name: datadir
    mountPath: /opt/solr/server/data
securityContext:
  runAsUser: 8983
</code></pre>
|
<p>I have set up a multi-container pod consisting of multiple interrelated microservices. With docker-compose, if I wanted to access another container in the compose file I would just use the name of the service.</p>
<p>I am trying to do the same thing with Kube, without having to create a pod per microservice.</p>
<p>I tried the name of the container, and suffixing it with <code>.local</code>; neither worked, and I got an UnknownHostException.</p>
<p>My preference is also to have all the microservices running on port 80, but in case that does not work within a single pod, I also tried having each microservice run on its own port and using <code>localhost</code>, but that didn't work either; it simply said connection refused (as opposed to Unknown Host).</p>
| <blockquote>
<p>The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication</a></p>
|
<p>I am new to Kubernetes and just wanted to know if my question is valid.</p>
<p>My question is whether a single POD can host 2 or more services.</p>
<p>And if it can host multiple services, how does it differentiate the traffic between the services?</p>
<p>Does it do a PORT mapping?</p>
<p>Please let me know.</p>
| <p>You can add multiple containers to the same pod, but that's only recommended if the services are tightly coupled, like if they need to communicate. For example, if you have a web server and a sql database, you would likely want them in the same pod.</p>
<p>If the services are distinct, you would likely want to put them in different pods, but deploy them to the same cluster of nodes. Then, you can have a LoadBalancer service on the cluster that can route different ports or paths to the right pod. In this way, the services can be scaled and managed separately (and without worrying about port conflicts), but they still draw from the same pool of resources</p>
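<p>As a rough sketch of that second approach (names and ports are placeholders), each service gets its own Service object, so there are no port conflicts between them even if both expose port 80:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-one
spec:
  type: LoadBalancer
  selector:
    app: service-one       # matches the pods of the first deployment
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-two
spec:
  type: LoadBalancer
  selector:
    app: service-two       # matches the pods of the second deployment
  ports:
  - port: 80
    targetPort: 9090
</code></pre>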
|
<p>I want to know: when the master nodes want to connect to the etcd cluster, which etcd node will be selected? Does the master node always connect to the same etcd node until it becomes unavailable? Does each node in the master cluster connect to the same node in the etcd cluster?</p>
| <p>The scheduler and controller-manager talk to the API server present on the same node. In a HA setup you'll have only one of them running at a time (based on a lease) and whoever is the current active will be talking to the local API server. If for some reason it fails to connect to the local API server, it doesn't renew the lease and another leader will be elected.</p>
<p>As described only one API server will be the leader at any given moment so that's the only place that needs to worry about reaching the etcd cluster. As for the etcd cluster itself, when you configure the kubernetes API server you pass it the <code>etcd-servers</code> flag, which is a list of etcd nodes like: </p>
<pre><code>--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379
</code></pre>
<p>This is then passed to the Go <a href="https://github.com/coreos/etcd/tree/master/client" rel="nofollow noreferrer">etcd/client library</a> which, looking at its README, states:</p>
<blockquote>
<p>etcd/client does round-robin rotation on other available endpoints if the preferred endpoint isn't functioning properly. For example, if the member that etcd/client connects to is hard killed, etcd/client will fail on the first attempt with the killed member, and succeed on the second attempt with another member. If it fails to talk to all available endpoints, it will return all errors happened.</p>
</blockquote>
<p>Which means that it'll try each of the available nodes until it succeeds connecting to one.</p>
|
<p>I have to deploy a Kubernetes cluster, and I currently use <code>bash</code> to set up the security keys, the environment properties, create pods, etc. with <code>kubectl</code>. But I am wondering if <code>bash</code> is a good choice to use when the deployment steps get big, e.g. ~50 deployments and services.</p>
<p>Is there a better choice than <code>bash</code> for deploying systems under Kubernetes? Are there any good examples of automated deployment under Kubernetes?</p>
| <p>In order to install/manage resources in kubernetes the advised solution is to use <a href="https://github.com/kubernetes/helm" rel="nofollow noreferrer">Helm Package Manager</a>. </p>
<blockquote>
<p>Helm is a tool for managing Kubernetes charts. Charts are packages of
pre-configured Kubernetes resources.</p>
</blockquote>
<p>So most of your configuration would belong in application and tool charts. </p>
<p>Then you can use a third-party tool (Ansible, Puppet, Chef) to help with the automation.
In general I prefer to use <a href="https://www.ansible.com" rel="nofollow noreferrer">Ansible</a>, as it relies mainly on ssh commands and is easier to manage compared to Chef and Puppet, whose scripts get really messy really quickly.
I prefer composition over orchestration.</p>
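<p>For illustration, the basic Helm workflow looks roughly like the sketch below (chart and release names are placeholders; this also assumes Helm's server-side component, Tiller, is already installed in the cluster, which was required at the time):</p>
<pre><code># scaffold a chart, then install/upgrade it as a named release
helm create my-app
helm install --name my-app-dev ./my-app --namespace dev
helm upgrade my-app-dev ./my-app --set image.tag=1.2.3
</code></pre>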
|
<p>I have a Kubernetes pod based on a jenkins/slave container, into which I mount the docker socket and the docker binary, with the necessary kernel modules, in privileged mode. Inside that pod I build a Docker image, from which I run a docker container. Inside that container I have no Internet connection at all, because the pod container uses the flannel network (198.x.x.x) while that container uses the bridged docker network (172.x.x.x), which is not available inside the pod container. How can I make the Internet available inside the second container that is created inside the Kubernetes pod container? Using the Docker API in a Jenkins pipeline is not a solution for me, since it limits the output of error logs and I cannot commit changes made in the second container, because that container is removed immediately after the build.</p>
| <p>Not sure if this article will help you with this issue: <a href="https://radu-matei.com/blog/kubernetes-jenkins-azure/#configuring-jenkins-to-dinamically-spawn-agents-docker-containers-for-builds" rel="nofollow noreferrer">JENKINS DECLARATIVE PIPELINES WITH KUBERNETES</a>. It shows a full stack for setting up Jenkins in Kubernetes and also covers the idea of Docker in Docker.</p>
<p>For the discussion, let's refer to the pod container as <code>container1</code> and the container started inside the pod as <code>container2</code>.</p>
<p>I think <code>container1</code> and <code>container2</code> are located on the same host and share the same Docker engine, so the flannel network and the docker network should be set up together.</p>
<p>In my view, the network flow for <code>container2</code> should be <code>container2</code> -> <code>docker0</code> -> <code>host</code>, and should not go through <code>container1</code>.</p>
<p>Just let me know if this sounds reasonable, or we can discuss it together; I think this question is very interesting.</p>
|
<p>Are there any tools/ways to get CPU, MEM, NET metrics of <code>PODs</code>? Other than the below links, are there any other tools available?</p>
<ul>
<li><a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer"><code>kube-state-metrics</code></a> - Able to deploy but no useful POD metrics. You can see the POD metrics <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/Documentation/pod-metrics.md" rel="nofollow noreferrer">here</a>.</li>
<li><a href="https://itnext.io/kubernetes-monitoring-with-prometheus-in-15-minutes-8e54d1de2e13" rel="nofollow noreferrer">kubernetes-monitoring-with-prometheus-in-15-minutes</a>- Installed kube-prometheus with "helm" tool, No POD metrics. Metrics List <a href="https://pastebin.com/1YfNrrmm" rel="nofollow noreferrer">here</a></li>
<li><a href="https://github.com/camilb/prometheus-kubernetes" rel="nofollow noreferrer"><code>prometheus-kubernetes</code></a> - But it strucks at registering custom service forever. check <a href="https://stackoverflow.com/questions/46930397/struck-at-custom-resource-registration-k8s">here</a></li>
<li><a href="https://coreos.com/blog/monitoring-kubernetes-with-prometheus.html" rel="nofollow noreferrer">Monitoring K8s with Prometheus </a> - In blog they mentioned <code>container_cpu metrics</code>, but I dont see any metrics like that</li>
</ul>
<p><strong>UPDATE1</strong></p>
<p>Tried to launch a POD with the <code>yaml file</code> as mentioned in the <a href="https://www.weave.works/blog/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/" rel="nofollow noreferrer">blog</a>. Installed <code>golang</code> with <code>GOPATH</code> & <code>GOROOT</code>.</p>
<pre><code>ubuntu@ip-172-:~$ kubectl create -f prometheus.yaml
panic: interface conversion: interface {} is []interface {}, not map[string]interface {}
goroutine 1 [running]:
k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation.getObjectKind(0x14dcb20, 0xc420c56480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xffffffffffffff01, 0xc420f6bca0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation/validation.go:111 +0x539
k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation.(*SchemaValidation).ValidateBytes(0xc4207b01d0, 0xc420b3ca80, 0x16c, 0x180, 0xc420b51628, 0x4ed384)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation/validation.go:49 +0x8f
k8s.io/kubernetes/pkg/kubectl/validation.ConjunctiveSchema.ValidateBytes(0xc42073cba0, 0x2, 0x2, 0xc420b3ca80, 0x16c, 0x180, 0x4ed029, 0xc420b3ca80)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/validation/schema.go:130 +0x9a
k8s.io/kubernetes/pkg/kubectl/validation.(*ConjunctiveSchema).ValidateBytes(0xc42073cbc0, 0xc420b3ca80, 0x16c, 0x180, 0xc420b51700, 0x443693)
<autogenerated>:3 +0x7d
k8s.io/kubernetes/pkg/kubectl/resource.ValidateSchema(0xc420b3ca80, 0x16c, 0x180, 0x2183f80, 0xc42073cbc0, 0x20, 0xc420b51700)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:222 +0x68
k8s.io/kubernetes/pkg/kubectl/resource.(*StreamVisitor).Visit(0xc420c2eb00, 0xc420c3d440, 0x218a000, 0xc420c3d4a0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:543 +0x269
k8s.io/kubernetes/pkg/kubectl/resource.(*FileVisitor).Visit(0xc420c3d2c0, 0xc420c3d440, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:502 +0x181
k8s.io/kubernetes/pkg/kubectl/resource.EagerVisitorList.Visit(0xc420f6bc30, 0x1, 0x1, 0xc420903c50, 0x1, 0xc420903c50)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:211 +0x100
k8s.io/kubernetes/pkg/kubectl/resource.(*EagerVisitorList).Visit(0xc420c3d360, 0xc420903c50, 0x7ff854222000, 0x0)
<autogenerated>:115 +0x69
k8s.io/kubernetes/pkg/kubectl/resource.FlattenListVisitor.Visit(0x2183d00, 0xc420c3d360, 0xc420c2eac0, 0xc420c2eb40, 0xc420c3d401, 0xc420c2eb40)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:417 +0xa3
k8s.io/kubernetes/pkg/kubectl/resource.(*FlattenListVisitor).Visit(0xc420c3d380, 0xc420c2eb40, 0x18, 0x18)
<autogenerated>:130 +0x69
k8s.io/kubernetes/pkg/kubectl/resource.DecoratedVisitor.Visit(0x2183d80, 0xc420c3d380, 0xc420c3d3c0, 0x3, 0x4, 0xc420c3d400, 0xc420386901, 0xc420c3d400)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:325 +0xd8
k8s.io/kubernetes/pkg/kubectl/resource.(*DecoratedVisitor).Visit(0xc420903c20, 0xc420c3d400, 0x151b920, 0xc420f6bc60)
<autogenerated>:153 +0x73
k8s.io/kubernetes/pkg/kubectl/resource.ContinueOnErrorVisitor.Visit(0x2183c80, 0xc420903c20, 0xc420c370e0, 0x7ff854222000, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:352 +0xf1
k8s.io/kubernetes/pkg/kubectl/resource.(*ContinueOnErrorVisitor).Visit(0xc420f6bc50, 0xc420c370e0, 0x40f3f8, 0x60)
<autogenerated>:144 +0x60
k8s.io/kubernetes/pkg/kubectl/resource.(*Result).Visit(0xc4202c23f0, 0xc420c370e0, 0x6, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/result.go:95 +0x62
k8s.io/kubernetes/pkg/kubectl/cmd.RunCreate(0x21acd60, 0xc420320e40, 0xc42029d440, 0x2182e40, 0xc42000c018, 0x2182e40, 0xc42000c020, 0xc420173000, 0x176f608, 0x4)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/create.go:187 +0x4a8
k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdCreate.func1(0xc42029d440, 0xc4202aa580, 0x0, 0x2)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/create.go:73 +0x17f
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc42029d440, 0xc4202aa080, 0x2, 0x2, 0xc42029d440, 0xc4202aa080)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x22b
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420235b00, 0x8000102, 0x0, 0xffffffffffffffff)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x339
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420235b00, 0xc420320e40, 0x2182e00)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5
main.main()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22
</code></pre>
<p><code>prometheus.yaml</code></p>
<pre><code># This scrape config scrapes kubelets
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
  - role: node
  # couldn't get prometheus to validate the kubelet cert for scraping, so don't bother for now
  tls_config:
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - target_label: __scheme__
    replacement: https
  - source_labels: [__meta_kubernetes_node_label_kubernetes_io_hostname]
    target_label: instance
</code></pre>
| <p>I just want to give a more complete answer to support @brian-brazil's answer. <code>cAdvisor</code> supports <code>Prometheus</code>.</p>
<ol>
<li><p>Just launch the <code>cAdvisor</code> container as described in the <a href="https://github.com/google/cadvisor#quick-start-running-cadvisor-in-a-docker-container" rel="nofollow noreferrer">Readme</a> (see the sketch after this list).</p></li>
<li><p>Then <code>curl</code> <a href="http://localhost:8080/metrics" rel="nofollow noreferrer">http://localhost:8080/metrics</a> to check the metrics. You can configure this URL in the <code>Prometheus</code> server to pull from.</p></li>
</ol>
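<p>For reference, the quick start from the cAdvisor Readme boils down to something like the sketch below (volume paths can vary by host OS, so treat this as an approximation rather than the exact documented command):</p>
<pre><code>docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest

# then verify the container metrics Prometheus will scrape
curl -s http://localhost:8080/metrics | grep container_cpu | head
</code></pre>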
|
<p>With the help of Kubernetes I am running daily jobs on GKE. On a daily basis, based on a cron schedule configured in Kubernetes, a new container spins up and tries to insert some data into BigQuery.</p>
<p>The setup we have is 2 different projects in GCP: in one project we maintain the data in BigQuery, and in the other project we have all of GKE running. So when GKE has to interact with a resource in the other project, my guess is that I have to set an environment variable named GOOGLE_APPLICATION_CREDENTIALS which points to a service account json file, but since Kubernetes spins up a new container every day, I am not sure how and where I should set this variable.</p>
<p>Thanks in Advance!</p>
<h1>NOTE: this file is parsed as a golang template by the drone-gke plugin.</h1>
<pre><code>---
apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: "bas64JsonServiceAccount"
---
apiVersion: v1
kind: Pod
metadata:
  name: adtech-ads-apidata-el-adunit-pod
spec:
  containers:
  - name: adtech-ads-apidata-el-adunit-container
    volumeMounts:
    - name: service-account-credentials-volume
      mountPath: "/etc/gcp"
      readOnly: true
  volumes:
  - name: service-account-credentials-volume
    secret:
      secretName: my-data-service-account-credentials
      items:
      - key: sa_json
        path: sa_credentials.json
</code></pre>
<hr>
<h1>This is our cron jobs for loading the AdUnit Data</h1>
<pre><code>apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: adtech-ads-apidata-el-adunit
spec:
  schedule: "*/5 * * * *"
  suspend: false
  concurrencyPolicy: Replace
  successfulJobsHistoryLimit: 10
  failedJobsHistoryLimit: 10
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: adtech-ads-apidata-el-adunit-container
            image: {{.image}}
            args:
            - -cp
            - opt/nyt/DFPDataIngestion-1.0-jar-with-dependencies.jar
            - com.nyt.cron.AdUnitJob
            env:
            - name: ENV_APP_NAME
              value: "{{.env_app_name}}"
            - name: ENV_APP_CONTEXT_NAME
              value: "{{.env_app_context_name}}"
            - name: ENV_GOOGLE_PROJECTID
              value: "{{.env_google_projectId}}"
            - name: ENV_GOOGLE_DATASETID
              value: "{{.env_google_datasetId}}"
            - name: ENV_REPORTING_DATASETID
              value: "{{.env_reporting_datasetId}}"
            - name: ENV_ADBRIDGE_DATASETID
              value: "{{.env_adbridge_datasetId}}"
            - name: ENV_SALESFORCE_DATASETID
              value: "{{.env_salesforce_datasetId}}"
            - name: ENV_CLOUD_PLATFORM_URL
              value: "{{.env_cloud_platform_url}}"
            - name: ENV_SMTP_HOST
              value: "{{.env_smtp_host}}"
            - name: ENV_TO_EMAIL
              value: "{{.env_to_email}}"
            - name: ENV_FROM_EMAIL
              value: "{{.env_from_email}}"
            - name: ENV_AWS_USERNAME
              value: "{{.env_aws_username}}"
            - name: ENV_CLIENT_ID
              value: "{{.env_client_id}}"
            - name: ENV_REFRESH_TOKEN
              value: "{{.env_refresh_token}}"
            - name: ENV_NETWORK_CODE
              value: "{{.env_network_code}}"
            - name: ENV_APPLICATION_NAME
              value: "{{.env_application_name}}"
            - name: ENV_SALESFORCE_USERNAME
              value: "{{.env_salesforce_username}}"
            - name: ENV_SALESFORCE_URL
              value: "{{.env_salesforce_url}}"
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: "/etc/gcp/sa_credentials.json"
            - name: ENV_CLOUD_SQL_URL
              valueFrom:
                secretKeyRef:
                  name: secrets
                  key: cloud_sql_url
            - name: ENV_AWS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: secrets
                  key: aws_password
            - name: ENV_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: secrets
                  key: dfp_client_secret
            - name: ENV_SALESFORCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: secrets
                  key: salesforce_password
          restartPolicy: OnFailure
</code></pre>
<hr>
| <p>So, if your GKE project is project <code>my-gke</code>, and the project containing the services/things your GKE containers need access to is project <code>my-data</code>, one approach is to:</p>
<ul>
<li>Create a service account in the <code>my-data</code> project. Give it whatever GCP roles/permissions are needed (ex. <code>roles/bigquery.dataViewer</code> if you have some BigQuery tables that your <code>my-gke</code> GKE containers need to read).
<ul>
<li>Create a service account key for that service account. When you do this in the console following <a href="https://cloud.google.com/iam/docs/creating-managing-service-account-keys" rel="noreferrer">https://cloud.google.com/iam/docs/creating-managing-service-account-keys</a>, you should automatically download a <code>.json</code> file containing the SA credentials.</li>
</ul></li>
<li><p>Create a Kubernetes secret resource for those service account credentials. It might look something like this:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-data-service-account-credentials
type: Opaque
data:
  sa_json: <contents of running 'base64 the-downloaded-SA-credentials.json'>
</code></pre></li>
<li><p>Mount the credentials in the container that needs access:</p>
<pre><code>[...]
spec:
  containers:
  - name: my-container
    volumeMounts:
    - name: service-account-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
[...]
  volumes:
  - name: service-account-credentials-volume
    secret:
      secretName: my-data-service-account-credentials
      items:
      - key: sa_json
        path: sa_credentials.json
</code></pre></li>
<li><p>Set the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable in the container to point to the path of the mounted credentials:</p>
<pre><code>[...]
spec:
  containers:
  - name: my-container
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/sa_credentials.json
</code></pre></li>
</ul>
<p>With that, any official GCP client (ex. the GCP Python client, GCP Java client, gcloud CLI, etc.) should respect the <code>GOOGLE_APPLICATION_CREDENTIALS</code> env var and, when making API requests, automatically use the credentials of the <code>my-data</code> service account that you created and mounted the credentials <code>.json</code> file for.</p>
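<p>As a small side note, instead of base64-encoding the key file by hand for the Secret manifest, the same Secret can usually be created straight from the downloaded file (names here simply mirror the example above):</p>
<pre><code>kubectl create secret generic my-data-service-account-credentials \
  --from-file=sa_json=the-downloaded-SA-credentials.json
</code></pre>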
|
<p>Since Kubernetes release 1.6 until RBAC's GA in 1.8 there has been a lot of background noise about how RBAC is now the default in Kubernetes and is a complete rehaul of the permissions framework. As an OpenShift Origin user/administrator, I haven't been following the Kubernetes side of the discussion, reasoning that I'd wait for a proper OpenShift Origin release including those changes and then look up the release notes and/or migration documentation.</p>
<p>However, now that I've skimmed through a couple of articles about RBAC, I have found the descriptions of RBAC extremely similar to OpenShift's permissions framework (being in use since Origin's inception), to the point of being minorly different in syntax only.</p>
<p>So is my impression correct that Kubernetes' RBAC is just an upstream port or direct code donation from OpenShift Origin, with only minor changes?</p>
| <p>Yes, it was an upstream port of OpenShift's access control, with minor differences. Over the 3.6 and 3.7 releases, OpenShift is migrating to back its authorization objects with Kubernetes objects:</p>
<ul>
<li>In OpenShift 3.6, OpenShift authorization objects migrate to corresponding Kubernetes RBAC objects, overwriting any existing Kubernetes RBAC objects</li>
<li>In OpenShift 3.7, the Kubernetes RBAC objects back both the OpenShift and Kubernetes APIs, and either can be used interchangeably</li>
</ul>
|
<p>I am pretty new to Kubernetes and I have successfully set up a cluster on Google Container Engine.
In my cluster I have a backend API developed with Dropwizard, a front end developed with Node.js, and a MySQL database.
All have been deployed to the cluster and are working. However, my challenge is this: after setting up an external IP for my node containers and backend, I can access them remotely, but I can't access my backend from my front end using the service name, e.g. my backend is called backendapi within the cluster. I can't do <a href="http://backendapi:8080" rel="noreferrer">http://backendapi:8080</a> to call my REST services when deployed to the cluster.
The catch for me is that when I deploy to the cluster I don't want my front end to hit my back end via the external IP; I want them to connect within the cluster without going via the external IP address. When I connect to a pod and ping backendapi it returns a result, but when I deploy my front end and use the label name it doesn't work. What could I be doing wrong?</p>
| <p>As long as <code>kube-dns</code> is running (which I believe is "always unless you disable it"), all Service objects have an <em>in cluster</em> DNS name of <code>service_name +"."+ service_namespace + ".svc.cluster.local"</code> so all other things would address your <code>backendapi</code> in the <code>default</code> namespace as (to use your port numbered example) <code>http://backendapi.default.svc.cluster.local:8080</code>. That fact is the very reason Kubernetes forces all identifiers to be a "dns compatible" name (no underscores or other goofy characters).</p>
<p>Even if you are <em>not</em> running kube-dns, all Service names and ports are also injected into the environment of Pods just like docker would do, so the environment variables <code>${BACKENDAPI_SERVICE_HOST}:${BACKENDAPI_SERVICE_PORT}</code> would contain the Service's in-cluster IP (even though the env-var is named "host") and the "default" Service port (8080 in your example) if there is only one.</p>
<p>Whether you choose to use the DNS name or the environment-variable-ip is a matter of whether you like having the "readable" names for things in log output or error messages, versus whether you prefer to skip the DNS lookup and use the Service IP address for speed but less legibility. They <em>behave</em> the same.</p>
<p>The whole story lives <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">in the services-networking concept documentation</a></p>
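<p>For example (the <code>/health</code> path is just a placeholder), from any pod in the cluster the backend should answer on its in-cluster DNS name:</p>
<pre><code>curl http://backendapi.default.svc.cluster.local:8080/health
</code></pre>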
|
<p>How long is a version of Kubernetes (e.g., 1.7.x) supported? How many versions of Kubernetes back are supported for security and bug fixes? Is this documented somewhere?</p>
<p>Note, I've tried to locate this information in the <a href="https://github.com/kubernetes/community" rel="nofollow noreferrer">community repo</a>, on slack, on k8s.io, and using search engines. Have not been able to locate anything in writing. In conversations I've been given multiple answers that are not the same.</p>
| <p>Matt,</p>
<p>At Kubernetes community, we define 3 types of Kubernetes releases:</p>
<ul>
<li>Major (x.0.0)</li>
<li>Minor (x.x.0)</li>
<li>Patch (x.x.x)</li>
</ul>
<p>At a single point of time, we develop the new "Major"/"Minor" version of Kubernetes (today - Kubernetes 1.9), and we support three existing releases as the "Patch" releases (today - 1.6.x, 1.7.x and 1.8.x).</p>
<p>When Kubernetes 1.9 will be released, and Kubernetes 1.10 development cycle will start, 1.9 will receive a status of "Patch" release (together with v1.8 and v1.7), and 1.6 will be marked as EOL.</p>
<p>The basic information about Kubernetes versioning is defined at the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md" rel="nofollow noreferrer">Versioning design doc</a>.</p>
<p>The project-wide release process is managed by Kubernetes <a href="https://github.com/kubernetes/sig-release" rel="nofollow noreferrer">SIG-Release</a>, while the releases themselves are managed by the release teams (elected on a per-release basis). Release teams, together with the release timeline of a specific release, are defined in the <a href="https://github.com/kubernetes/features/" rel="nofollow noreferrer">kubernetes/features</a> repo, for example, for <a href="https://github.com/kubernetes/features/tree/master/release-1.9" rel="nofollow noreferrer">Kubernetes 1.9</a>.</p>
|
<p>I'm wondering how I can install a package inside the minikube VM. I need some tools.
I have tried the /bin/toolbox container, but it does not have an internet connection.</p>
<pre><code>[root@docker-fedora-24 ~]# dnf update --verbose
cachedir: /var/cache/dnf
DNF version: 1.1.9
Cannot download 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&arch=x86_64': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&arch=x86_64 [Could not resolve host: mirrors.fedoraproject.org].
Error: Failed to synchronize cache for repo 'updates'
</code></pre>
<p>I have tried the same toolbox script on my computer and it works properly.</p>
<p>What configuration parameters am I missing in minikube or systemd-nspawn?
Or how can I build a customized minikube VM?</p>
<p>Thanks a lot</p>
| <p>Minikube is not meant to be tweaked. The advised method is to prepare a <a href="https://github.com/kubernetes/helm" rel="nofollow noreferrer">helm</a> chart for your application. As part of the helm chart you can add whatever tool you need in your Dockerfile... including make... Then you can install or upgrade your package in Kubernetes/minikube using helm.</p>
|
<p>I'm trying to create YAML for a deployment with Kubernetes. I am using the same script for different environments, which are separated by namespace. Now, I need to access the namespace name within the deployment YAML, such as </p>
<pre><code>"name":"$(namespace)"
</code></pre>
<p>in the yaml file. Is it possible to do so?</p>
| <p><em>edit</em> sorry, I may have misunderstood your question: if you want access to the current namespace in which the Pod is running, you can inject that into its environment via an <a href="https://kubernetes.io/docs/api-reference/v1.8/#envvarsource-v1-core" rel="nofollow noreferrer"><code>env: valueFrom:</code></a> construct, described in greater detail <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">here</a>:</p>
<pre><code>env:
- name: MY_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
</code></pre>
<hr>
<p>Omit the <code>namespace:</code> from the yaml and provide it to <code>kubectl</code> as <code>kubectl --namespace=foo1 create -f my-thing.yaml</code> (assuming, of course, you're using <code>kubectl</code>; the idea behind the answer is the same, just the mechanics will change if using a different method)</p>
<p>You can also specify the default namespace in <code>~/.kube/config</code> in the context, and address it that way: <code>kubectl --context=server-foo1</code> which allows associating different credentials with the different namespaces, too. They all boil down to the same effect in the end, it's just a matter of which is the most convenient for your case.</p>
<p>The most extreme(?) form is that you can also have multiple configs and switch between them via <code>env KUBECONFIG=$TMPDIR/foo1.yaml kubectl create -f my-thing.yaml</code></p>
|
<p>We run a Kubernetes cluster hosting a database, various microservices and an <code>nginx</code> reverse proxy, all in containers. We have a Google <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer" rel="nofollow noreferrer">load balancer</a> and a forwarding rule that forwards to the reverse proxy, and from there requests are proxied to the appropriate microservice.</p>
<p>This works well, however the reverse proxy is never shown the IP address of clients connecting to it. (Despite <a href="https://cloud.google.com/compute/docs/load-balancing/http/" rel="nofollow noreferrer">this documentation</a> indicating that it is possible to obtain this information via HTTP headers, we've had no luck and only ever see IPs on our GCP <code>default</code> network.)</p>
<p>Following a suggestion in another SO question - whose link I've temporarily mislaid - I want to deploy <code>nginx</code> on a VM instance instead, where it <em>does</em> have access to a connecting client's IP, and then forward requests from that instance into the cluster.</p>
<p>My question then is this: Each microservice listens on a TCP port and has a k8s <code>Service</code> configured. How can I refer to these k8s <code>Service</code>s from within my <code>nginx</code> VM? Can I do it via DNS or via ingress controllers?</p>
<p>Alternatively if you <em>can</em> in fact determine external IP addresses behind a Google Load balancer I'd much rather do that. I remember reading a very long k8s GitHub issue about it showing that that was some way off yet.</p>
| <p>What you are looking for is called the <code>proxy protocol</code>.</p>
<p><a href="https://www.nginx.com/resources/admin-guide/proxy-protocol/" rel="nofollow noreferrer">https://www.nginx.com/resources/admin-guide/proxy-protocol/</a></p>
<p>Note that both the Google load balancer and your nginx must be configured to use the proxy protocol at the same time.
If one of them uses the proxy protocol and the other does not, nothing will work.</p>
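<p>On the nginx side, the configuration looks roughly like the sketch below (the upstream name and the trusted address range are placeholders; check your load balancer's documentation for the correct range):</p>
<pre><code>server {
    listen 80 proxy_protocol;           # accept the PROXY protocol header from the load balancer
    set_real_ip_from 130.211.0.0/22;    # placeholder: the load balancer's address range
    real_ip_header proxy_protocol;      # take the client IP from the PROXY protocol header

    location / {
        proxy_pass http://my_backend;   # placeholder upstream
        proxy_set_header X-Real-IP $proxy_protocol_addr;
    }
}
</code></pre>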
|
<p>I have a doubt regarding how to structure my dockerized stack, simplified in two containers to get help here:</p>
<ul>
<li>static: NGINX serving static resources (JS/HTML).</li>
<li>rest: express.js backend for the REST Api.</li>
</ul>
<p>Without Kubernetes, just docker-compose on a node, <em>rest</em> is simply listening on a different port and, from Javascript, the requests go to <em>same_host:rest_port</em>, no problem here.</p>
<p>With Kubernetes, I understand that I need to use the service name from Kubernetes, something like "rest" (to make transparent the service itself), but that name would only be visible from the docker container serving the static resources.</p>
<p>My question: do I need to forward traffic from NGINX to the REST Api? Does Kubernetes expose a public service name usable from Javascript, for example?</p>
<p>Thank you.</p>
| <blockquote>
<p>With Kubernetes, I understand that I need to use the service name from
Kubernetes, something like "rest" (to make transparent the service
itself), but that name would only be visible from the docker container
serving the static resources.</p>
</blockquote>
<p>Your understanding is correct. <a href="https://stackoverflow.com/a/47029021/6785908">As long as you have a kube-dns add-on running in your cluster</a>, your service name as Domain name is resolvable <strong>with in the same kubernetes cluster and namespace</strong>. In other words, as you said, "rest" will work only with in the kubernetes cluster.</p>
<blockquote>
<p>My question: do I need to forward traffic from NGINX to the REST Api?
Does Kubernetes expose a public service name usable from Javascript,
for example?</p>
</blockquote>
<p>This is one way to achieve this. </p>
<p><strong>Advantage</strong> of this approach is, you will avoid all the Same Origin Policy/CORS headaches, your microservice (express) authentication details will be abstracted out from user's browser. (This is not necessarily an advantage).</p>
<p><strong>Disadvantage</strong> of this approach is, your backend microservice (express) will have a tight coupling with front end (or vice-versa depending on how you look at it), This will make the scaling of backend <strong>dependent</strong> on front end. Your Backend is not exposed. So, if you have another consumer (let's just say an android app) it will not be able to access your service.</p>
<h1>Another Solution</h1>
<p>Create an ingress (and use an ingress controller in your cluster) and expose your microservice (Express). </p>
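<p>A minimal sketch of such an ingress (the host, path and port are placeholders, and it assumes an ingress controller such as nginx-ingress is already running in the cluster):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rest-ingress
spec:
  rules:
  - host: api.example.com          # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: rest        # the express.js Service
          servicePort: 8080        # placeholder port
</code></pre>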
|
<p>I'm trying to connect to my gke cluster using <code>kubernetes-incubator/client-python</code> library. I'm running just the basic query: </p>
<pre><code>from kubernetes import client, config
# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
<p>And I'm getting an error: </p>
<pre><code>--------------------------------------------------------------------------
RefreshError Traceback (most recent call last)
<ipython-input-1-40695f414daf> in <module>()
2
3 # Configs can be set in Configuration class directly or using helper utility
----> 4 config.load_kube_config()
5
6 v1 = client.CoreV1Api()
/usr/local/lib/python2.7/distpackages/kubernetes/config/kube_config.pyc in
load_kube_config(config_file, context, client_configuration,
persist_config)
359 config_file, active_context=context,
360 client_configuration=client_configuration,
--> 361 config_persister=config_persister).load_and_set()
362
363
/usr/local/lib/python2.7/dist-packages/kubernetes/config/kube_config.pyc in load_and_set(self)
251
252 def load_and_set(self):
--> 253 self._load_authentication()
254 self._load_cluster_info()
255 self._set_config()
/usr/local/lib/python2.7/dist-packages/kubernetes/config/kube_config.pyc in
_load_authentication(self)
174 if not self._user:
175 return
/usr/local/lib/python2.7/dist-packages/kubernetes/config/kube_config.pyc in _load_gcp_token(self)
194 _is_expired(provider['config']['expiry']))):
195 # token is not available or expired, refresh it
--> 196 self._refresh_gcp_token()
197
198 self.token = "Bearer %s" % provider['config']['access-token']
/usr/local/lib/python2.7/dist-packages/kubernetes/config/kube_config.pyc in _refresh_gcp_token(self)
203 self._user['auth-provider'].value['config'] = {}
204 provider = self._user['auth-provider']['config']
--> 205 credentials = self._get_google_credentials()
206 provider.value['access-token'] = credentials.token
207 provider.value['expiry'] = format_rfc3339(credentials.expiry)
/usr/local/lib/python2.7/dist-packages/kubernetes/config/kube_config.pyc in _refresh_credentials()
133 credentials, project_id = google.auth.default()
134 request = google.auth.transport.requests.Request()
--> 135 credentials.refresh(request)
136 return credentials
137
/usr/local/lib/python2.7/dist-packages/google/oauth2/service_account.pyc in refresh(self, request)
320 assertion = self._make_authorization_grant_assertion()
321 access_token, expiry, _ = _client.jwt_grant(
--> 322 request, self._token_uri, assertion)
323 self.token = access_token
324 self.expiry = expiry
/usr/local/lib/python2.7/dist-packages/google/oauth2/_client.pyc in jwt_grant(request, token_uri, assertion)
141 }
142
--> 143 response_data = _token_endpoint_request(request, token_uri, body)
144
145 try:
/usr/local/lib/python2.7/dist-packages/google/oauth2/_client.pyc in _token_endpoint_request(request, token_uri, body)
107
108 if response.status != http_client.OK:
--> 109 _handle_error_response(response_body)
110
111 response_data = json.loads(response_body)
/usr/local/lib/python2.7/dist-packages/google/oauth2/_client.pyc in _handle_error_response(response_body)
57
58 raise exceptions.RefreshError(
---> 59 error_details, response_body)
60
61
RefreshError: ('invalid_scope: Empty or missing scope not allowed.', u'{\n "error" : "invalid_scope",\n "error_description" : "Empty or missing scope not allowed."\n}')
</code></pre>
<p>I thought there was an issue with my kube.config file. So I removed it and created the cluster again in order to recreate a new kube.config file. The issue remained. Can you help me on this please? </p>
| <p>This is an issue with your Google Cloud Platform credentials. They're not being found, so you're not able to interact with the service. <a href="https://developers.google.com/identity/protocols/application-default-credentials" rel="nofollow noreferrer">Here are some instructions</a> on how to set those up. Either point the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable to your credentials file or authenticate through the SDK.</p>
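<p>A short sketch of both options (the key file path is a placeholder):</p>
<pre><code># Option 1: point the client library at a service account key file
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json

# Option 2: create Application Default Credentials with the gcloud SDK
gcloud auth application-default login
</code></pre>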
|
<p>I'm using minikube and a local docker registry.
I created and pushed local images to the docker registry:</p>
<pre><code>$ curl http://localhost:5000/v2/_catalog
{"repositories":["app1"]}
</code></pre>
<p>Kubernetes config file:</p>
<pre><code>...
spec:
replicas: 3
template:
metadata:
labels:
app: app1
tier: backend
spec:
containers:
- image: localhost:5000/app1
name: app1
...
</code></pre>
<p>But from kubernetes dashboard, the <code>Pods</code> area got error:</p>
<pre><code>Failed to pull image "localhost:5000/app1": rpc error: code = 2 desc = Error while pulling image: Get http://localhost:5000/v1/repositories/app1/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused
Error syncing pod
</code></pre>
<p>So how do I use local docker images from the registry correctly?</p>
| <p>This is a solution:</p>
<blockquote>
<p><a href="https://blog.hasura.io/sharing-a-local-registry-for-minikube-37c7240d0615" rel="nofollow noreferrer">https://blog.hasura.io/sharing-a-local-registry-for-minikube-37c7240d0615</a></p>
</blockquote>
<p>I followed it and did:</p>
<pre><code>kubectl port-forward --namespace kube-system \
$(kubectl get po -n kube-system | grep kube-registry-v0 | \
awk '{print $1;}') 5000:5000
</code></pre>
<p>Then it stays pending at:</p>
<pre><code>Forwarding from 127.0.0.1:5000 -> 5000
</code></pre>
<p>I don't know the reason.</p>
|
<p>I'm running a Kubernetes cluster of 3 nodes with GKE. I ask Kubernetes for 3 replicas of <code>backend</code> pods. The 3 pods are not well spread among the nodes to provide a high-availability service; they are usually all on 2 nodes.
I would like Kubernetes to spread the pods as much as possible so that there is a pod on each node, but not fail the deployment/scale-up if there are more <code>backend</code> pods than nodes.</p>
<p>Is it possible to do that with <code>preferredDuringSchedulingIgnoredDuringExecution</code>?</p>
| <p>Try setting up an preferred antiAffinity rule like so:</p>
<pre><code>affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: "app"
            operator: In
            values:
            - "my_app_name"
        topologyKey: "kubernetes.io/hostname"
</code></pre>
<p>This will try to schedule pods onto nodes which do not already have a pod of the same label running on them. After that it's a free for all (so it won't evenly spread them after making sure at least 1 is running on each node). This means that after scaling up you might end up with a node with 5 pods, and other nodes with 1 pod each.</p>
|
<p>I'm trying to run Kubernetes on Windows using Minikube and Hyper-V. I've managed to succesfully run Minikube using <code>minikube start --vm-driver=hyperv --hyperv-virtual-switch=KuberNAT</code> and checking <code>minikube status</code> gives me</p>
<pre><code>PS > minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.1.74
</code></pre>
<p>but now I'm trying to run an image in Kubernetes using kubectl. I've managed to get my PowerShell window to point towards the Kubernetes VM with <code>minikube docker-env | Invoke-Expression</code> (PowerShell only; I haven't been able to do something similar in Command Prompt), and I have to run this command in every PowerShell window I want to use to push an image to the Kubernetes images.</p>
<p>The issue I'm having is that I can't run a container. I can "deploy" an image with <code>kubectl run cloudconfig --image=cloudconfig</code>, but the created pod gives me this error:</p>
<blockquote>
<pre><code>Failed to pull image "cloudconfig": rpc error: code = Unknown desc =
Error response from daemon: repository cloudconfig not found: does not
exist or no pull access
</code></pre>
</blockquote>
<p>If I run <code>docker image ls</code> I get</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
cloudconfig latest 9199d500e746 2 minutes ago 105MB
openjdk 8-jre-alpine 5699ac7295f9 6 days ago 81.4MB
gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.5 fed89e8b4248 5 weeks ago 41.8MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.5 512cd7425a73 5 weeks ago 49.4MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.5 459944ce8cc4 5 weeks ago 41.4MB
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.7.0 284ec2f8ed6c 5 weeks ago 128MB
gcr.io/google-containers/kube-addon-manager v6.4-beta.2 0a951668696f 4 months ago 79.2MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 18 months ago 747kB
</code></pre>
<p>and <code>docker container ls</code> gives me</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d79bab2a212 gcr.io/google_containers/pause-amd64:3.0 "/pause" 41 seconds ago Up 40 seconds k8s_POD_cloudconfig-88c867589-qpqph_default_ac2dd8bb-bee1-11e7-8e51-00155d00ba16_0
e6723a726c26 gcr.io/google_containers/k8s-dns-sidecar-amd64 "/sidecar --v=2 --..." 43 minutes ago Up 43 minutes k8s_sidecar_kube-dns-6fc954457d-gvmz6_kube-system_f5de0b0e-be4b-11e7-8699-00155d00ba16_1
36980feeff76 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 "/dnsmasq-nanny -v..." 43 minutes ago Up 43 minutes k8s_dnsmasq_kube-dns-6fc954457d-gvmz6_kube-system_f5de0b0e-be4b-11e7-8699-00155d00ba16_1
d3d61e3861f0 gcr.io/google_containers/k8s-dns-kube-dns-amd64 "/kube-dns --domai..." 43 minutes ago Up 43 minutes k8s_kubedns_kube-dns-6fc954457d-gvmz6_kube-system_f5de0b0e-be4b-11e7-8699-00155d00ba16_1
45208e98bd7f gcr.io/google_containers/kubernetes-dashboard-amd64 "/dashboard --inse..." 43 minutes ago Up 43 minutes k8s_kubernetes-dashboard_kubernetes-dashboard-8hw2w_kube-system_f5bb8241-be4b-11e7-8699-00155d00ba16_1
1fbb3688711d gcr.io/google_containers/pause-amd64:3.0 "/pause" 43 minutes ago Up 43 minutes k8s_POD_kube-dns-6fc954457d-gvmz6_kube-system_f5de0b0e-be4b-11e7-8699-00155d00ba16_1
c71ab868584c gcr.io/google-containers/kube-addon-manager "/opt/kube-addons.sh" 43 minutes ago Up 43 minutes k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_9831e93c3188555873fdb49f43198eef_1
1df643d195d2 gcr.io/google_containers/pause-amd64:3.0 "/pause" 43 minutes ago Up 43 minutes k8s_POD_kubernetes-dashboard-8hw2w_kube-system_f5bb8241-be4b-11e7-8699-00155d00ba16_1
28f48e09fd46 gcr.io/google_containers/pause-amd64:3.0 "/pause" 43 minutes ago Up 43 minutes k8s_POD_kube-addon-manager-minikube_kube-system_9831e93c3188555873fdb49f43198eef_1
</code></pre>
<p>So I'm assuming all the other things are working correctly and kubernetes just somehow doesn't know where to find my image. Does anyone know how I can get it working?</p>
| <p>I've found the solution, it's suggested by a user in <a href="https://github.com/kubernetes/kubernetes/issues/24903#issuecomment-274319080" rel="nofollow noreferrer">this</a> github thread</p>
<blockquote>
<p>If anyone else ends up on this thread, the solution that worked for me
was updating the image pull policy, you can find info on this here.
From the docs: By default, the kubelet will try to pull each image
from the specified registry. You need to update this so it can look
locally.</p>
<p>If you're running from the CLI, add --image-pull-policy=IfNotPresent
to your kubectl run, i.e.</p>
<p>kubectl run some-node-proj --image=my-awesome-local-image:v1
--image-pull-policy=IfNotPresent</p>
</blockquote>
<p>Adding <code>--image-pull-policy=IfNotPresent</code> allowed me to run the containers without any problem.</p>
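<p>If you deploy from a manifest instead of <code>kubectl run</code>, the same setting lives in the pod spec. Below is a minimal sketch (the image name <code>cloudconfig</code> is taken from the question; the rest of the names are illustrative) that makes the kubelet use the image already built inside the minikube Docker daemon:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: cloudconfig
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cloudconfig
    spec:
      containers:
      - name: cloudconfig
        image: cloudconfig:latest       # built via the minikube docker-env daemon
        imagePullPolicy: IfNotPresent   # or Never, to forbid registry pulls entirely
</code></pre>
<p>With <code>Never</code>, the pod fails fast if the image is not present locally instead of attempting a registry pull.</p>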
|
<p>I created a new cluster as per the <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="noreferrer">Azure guide</a> and the cluster was created without issue, but when I run <code>kubectl get nodes</code> to list the nodes I only get this response: <code>Unable to connect to the server: net/http: TLS handshake timeout</code>.</p>
<p>I tried once in the Cloud Shell and once on my machine using the latest version of the Azure CLI (2.0.20).</p>
<p>I saw that there was a similar earlier issue regarding <a href="https://github.com/Azure/ACS/issues/4" rel="noreferrer">Service Principal credentials</a>, which I updated but that didn't seem to solve my issue either.</p>
<p>Any guidance would be greatly appreciated.</p>
| <p>Piling on: we are adding capacity as fast as possible for the preview. </p>
|
<p>Trying to make a bare metal k8s cluster to provide some services and need to be able to provide them on tcp port 80 and udp port 69 (accessible from outside the k8s cluster.) I've set the cluster up using kubeadm and it's running ubuntu 16.04. How do I access the services externally? I've been trying to use load-balancers and ingress but am having no luck since I'm not using an external load balancer (Local rather than AWS etc.)</p>
<p>An example of what I'm trying to do can be found <a href="https://www.safaribooksonline.com/library/view/getting-started-with/9781787283367/eb506133-083b-4708-9f92-1f56ddbfff56.xhtml" rel="nofollow noreferrer">here</a> but it's using GCE.</p>
<p>Thanks</p>
| <h1>Service with NodePort</h1>
<p>Create a service with type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer"><code>NodePort</code></a>; the Service can then listen on a TCP/UDP port in the range 30000-32767 on every node. By default, you cannot simply choose to expose a Service on port 80 on your nodes.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: MyApp
  ports:
  - name: tcp-port              # port names are required when a Service exposes more than one port
    protocol: TCP
    port: {SERVICE_PORT}
    targetPort: {POD_PORT}
    nodePort: 31000
  - name: udp-port
    protocol: UDP
    port: {SERVICE_PORT}
    targetPort: {POD_PORT}
    nodePort: 32000
type: NodePort
</code></pre>
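<p>Once the Service exists, the application should be reachable from outside the cluster on any node's IP at the chosen node ports, for example (the address and ports below are just the placeholders used above):</p>
<pre><code>curl http://{NODE_IP}:31000/        # TCP service via nodePort 31000
nc -u {NODE_IP} 32000               # UDP service via nodePort 32000
</code></pre>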
<p>The container image <a href="https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service" rel="nofollow noreferrer"><code>gcr.io/google_containers/proxy-to-service:v2</code></a> is a very small container that will do port-forwarding for you. You can use it to forward a pod port or a host port to a service. Pods can choose any port or host port, and are not limited in the same way Services are.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dns-proxy
spec:
containers:
- name: proxy-udp
image: gcr.io/google_containers/proxy-to-service:v2
args: [ "udp", "53", "kube-dns.default", "1" ]
ports:
- name: udp
protocol: UDP
containerPort: 53
hostPort: 53
- name: proxy-tcp
image: gcr.io/google_containers/proxy-to-service:v2
args: [ "tcp", "53", "kube-dns.default" ]
ports:
- name: tcp
protocol: TCP
containerPort: 53
hostPort: 53
</code></pre>
<h1>Ingress</h1>
<p>If there are multiple services sharing the same TCP port with different hosts/paths, deploy the <a href="https://github.com/kubernetes/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress Controller</a>, which listens on HTTP 80 and HTTPS 443.</p>
<p>Create an ingress, forward the traffic to specified services.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /testpath
backend:
serviceName: test
servicePort: 80
</code></pre>
|
<p>In my company we have few public websites and many internal webapps. Currently they are are running in different AWS security groups.</p>
<p>Is it possible to run both kind of services on the same OpenShift cluster and make sure internal services are not accessible from the Internet?</p>
<p>Thanks!</p>
| <p>The traditional(?) way this is solved is through Internet-facing ELB/ALBs pointed at the NodePorts on the cluster. I personally haven't tried a <code>Service</code> of <code>kind: LoadBalancer</code> since 1.2, so I can't speak to its current functionality, but I do know kubernetes has a <strong>lot</strong> of users on AWS, so it's plausible it works fine by now.</p>
<p>You can also run your own Ingress Controller, several of which have support for ip white/black listing, authentication, SSL/TLS, all the fancy toys, if you'd prefer not to deal with the ELB headache.</p>
<p>If you're not already considering it, <a href="https://docs.projectcalico.org/v2.6/reference/architecture/" rel="nofollow noreferrer">Calico SDN</a> has support for in-cluster networking policies, so you could also apply an extra level of locked-down-ness to ensure no Internet app breaks out of its allowed network path; thus, security-groups moving down into the cluster.</p>
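<p>As a rough illustration of that last point, here is a minimal sketch of a Kubernetes <code>NetworkPolicy</code> (the namespace name is made up, and it only takes effect if your SDN, such as Calico, actually enforces policies) that admits traffic solely from pods in the same namespace, so an internal-only app never receives connections proxied in from the Internet:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-only
  namespace: internal-apps        # hypothetical namespace for the internal webapps
spec:
  podSelector: {}                 # applies to every pod in the namespace
  ingress:
  - from:
    - podSelector: {}             # only allow traffic from pods in this same namespace
</code></pre>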
|
<p>I'm new to kubernetes and I'm setting it up on azure. </p>
<p>I've created this script based on what I managed to find in the Azure documentation:</p>
<pre><code>#!/bin/bash
cd "$( dirname "$0" )"
source vars.sh
echo "Create group $KUBE_GROUP"
az group create \
--verbose \
--name $KUBE_GROUP \
--location $LOCATION
echo "Create acs $KUBE_NAME"
az acs create \
--verbose \
--name $KUBE_NAME \
--resource-group $KUBE_GROUP \
--orchestrator-type Kubernetes \
--dns-prefix $KUBE_NAME \
--generate-ssh-key \
--agent-count 3 > creategroup.log 2>&1
echo "Get credentials for $KUBE_GROUP"
az acs kubernetes get-credentials \
--verbose \
--resource-group $KUBE_GROUP \
--name $KUBE_NAME > getcredentials.log 2>&1
echo "Create registry $REGISTRY_NAME"
az acr create \
--name $REGISTRY_NAME \
--resource-group $KUBE_GROUP \
--location $LOCATION \
--admin-enabled true \
--sku Basic > create_registry.log
echo "Setup kubernetes environment"
ssh-keygen -f "${HOME}/.ssh/known_hosts" -R "${KUBE_NAME}mgmt.westeurope.cloudapp.azure.com"
scp azureuser@${KUBE_NAME}mgmt.westeurope.cloudapp.azure.com:.kube/config $HOME/.kube/config
kubectl config current-context
#echo "Create a single nginx instance in kubernetes"
#kubectl run namenginx1 --image=nginx
REGISTRY_PASSWORD=$( az acr credential show \
--name $REGISTRY_NAME \
--resource-group $KUBE_GROUP \
| jq '.passwords[0] .value' \
| sed 's/"//g' )
echo "Create secret docker-registry"
kubectl create secret docker-registry kuberegistry \
--docker-server $REGISTRY_URL \
--docker-username $REGISTRY_NAME \
--docker-password $REGISTRY_PASSWORD \
--docker-email $REGISTRY_EMAIL > kuberegistry.log 2>&1
nohup kubectl proxy &
firefox 'http://localhost:8001/ui/'
</code></pre>
<p>Now, I'd expect to see the kubernetes dashboard where I can do a lot of things. But it seems this is not the case. If I open the FF console, I see tons of 404s and the page remains blank.</p>
<p>Does anybody see the reason why it's not working? Or give me any suggestions on how to solve the issue?</p>
<p>Thanks</p>
<hr>
<p>I've tried to execute <code>az acs kubernetes browse -g $KUBE_GROUP -n $KUBE_NAME</code> but I'm getting this error:</p>
<pre><code>'proxycommand'
Traceback (most recent call last):
File "/usr/local/Cellar/azure-cli/2.0.19/libexec/lib/python3.6/site-packages/azure/cli/main.py", line 36, in main
cmd_result = APPLICATION.execute(args)
File "/usr/local/Cellar/azure-cli/2.0.19/libexec/lib/python3.6/site-packages/azure/cli/core/application.py", line 212, in execute
result = expanded_arg.func(params)
File "/usr/local/Cellar/azure-cli/2.0.19/libexec/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 377, in __call__
return self.handler(*args, **kwargs)
File "/usr/local/Cellar/azure-cli/2.0.19/libexec/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 620, in _execute_command
reraise(*sys.exc_info())
File "/usr/local/Cellar/azure-cli/2.0.19/libexec/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/usr/local/Cellar/azure-cli/2.0.19/libexec/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 602, in _execute_command
result = op(client, **kwargs) if client else op(**kwargs)
File "/usr/local/Cellar/azure-cli/2.0.19/libexec/lib/python3.6/site-packages/azure/cli/command_modules/acs/custom.py", line 154, in k8s_browse
_k8s_browse_internal(name, acs_info, disable_browser, ssh_key_file)
File "/usr/local/Cellar/azure-cli/2.0.19/libexec/lib/python3.6/site-packages/azure/cli/command_modules/acs/custom.py", line 164, in _k8s_browse_internal
_k8s_get_credentials_internal(name, acs_info, browse_path, ssh_key_file)
File "/usr/local/Cellar/azure-cli/2.0.19/libexec/lib/python3.6/site-packages/azure/cli/command_modules/acs/custom.py", line 835, in _k8s_get_credentials_internal
'.kube/config', path_candidate, key_filename=ssh_key_file)
File "/usr/local/Cellar/azure-cli/2.0.19/libexec/lib/python3.6/site-packages/azure/cli/command_modules/acs/acs_client.py", line 65, in secure_copy
proxy = paramiko.ProxyCommand(host_config['proxycommand'])
KeyError: 'proxycommand'
</code></pre>
<p>NOTE: I've made sure that the <code>kubectl proxy</code> was not running and that the port was not 'busy'.</p>
| <p>I tested your script in my lab; there is a mistake in it.</p>
<p><code>{KUBE_NAME}mgmt.westeurope.cloudapp.azure.com</code> it should be <code>{KUBE_NAME}.westeurope.cloudapp.azure.com</code>. No need mgmt.</p>
<p>If you want to connect to the web UI, you need to run:</p>
<pre><code>az acs kubernetes get-credentials --resource-group=$KUBE_GROUP --name=$KUBE_NAME
nohup az acs kubernetes browse -g $KUBE_GROUP -n $KUBE_NAME &
</code></pre>
<p>More information about this please refer to this <a href="https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-ui" rel="nofollow noreferrer">link</a>.</p>
<p>Update:</p>
<p>You need to check the <code>.kube/config</code> file; if possible, recreate it.</p>
<p>The issue is caused by the browser cache. Clearing the cache in the browser will solve it.</p>
<hr>
<p>External Update:</p>
<p>Besides the cache, when the command opens the page in your browser, it goes directly to <code>http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy</code>. There's a missing <code>/</code> at the end. It should be <code>http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/</code></p>
|
<p>I am brand new to suiteCRM and I am trying to deploy into Minikube. I am using the helm charts in the K8s repo:</p>
<p><a href="https://github.com/kubernetes/charts/tree/master/stable/suitecrm" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/suitecrm</a></p>
<p>I am using the command: </p>
<pre><code>helm install --name red-falcon-crm -f values.yaml stable/suitecrm
</code></pre>
<p>I modified the values.yaml to have some custom values (e.g. email, username, password). The install is not successful even though I don't get very usable errors. I do get errors about not having a resolvable host but I was hoping to proxy. </p>
<pre><code>craig@craigs-laptop:~/redfalcon/gitlab/platform-setup/modules/suitecrm$ helm install --name red-falcon-crm -f values.yaml stable/suitecrm
NAME: red-falcon-crm
LAST DEPLOYED: Tue Oct 31 19:13:03 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
red-falcon-crm-mariadb-7fb6774f5c-b5w7t 0/1 ContainerCreating 0 0s
==> v1/Secret
NAME TYPE DATA AGE
red-falcon-crm-mariadb Opaque 2 1s
red-falcon-crm-suitecrm Opaque 2 1s
==> v1/ConfigMap
NAME DATA AGE
red-falcon-crm-mariadb 1 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
red-falcon-crm-mariadb Bound pvc-cf72d52d-bea1-11e7-b8a4-080027c951c6 8Gi RWO standard 1s
red-falcon-crm-suitecrm-apache Pending standard 1s
red-falcon-crm-suitecrm-suitecrm Pending standard 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
red-falcon-crm-mariadb ClusterIP 10.0.0.104 <none> 3306/TCP 1s
red-falcon-crm-suitecrm LoadBalancer 10.0.0.89 <pending> 80:32750/TCP,443:31973/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
red-falcon-crm-mariadb 1 1 1 0 1s
NOTES:
###############################################################################
### ERROR: You did not provide an external host in your 'helm install' call ###
###############################################################################
This deployment will be incomplete until you configure SuiteCRM with a resolvable
host. To configure SuiteCRM with the URL of your service:
1. Get the SuiteCRM URL by running:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace default -w red-falcon-crm-suitecrm'
export APP_HOST=$(kubectl get svc --namespace default red-falcon-crm-suitecrm --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
export APP_PASSWORD=$(kubectl get secret --namespace default red-falcon-crm-suitecrm -o jsonpath="{.data.suitecrm-password}" | base64 --decode)
export APP_DATABASE_PASSWORD=$(kubectl get secret --namespace default red-falcon-crm-mariadb -o jsonpath="{.data.mariadb-root-password}" | base64 --decode)
2. Complete your SuiteCRM deployment by running:
helm upgrade red-falcon-crm \
--set suitecrmHost=$APP_HOST,suitecrmPassword=$APP_PASSWORD,mariadb.mariadbRootPassword=$APP_DATABASE_PASSWORD stable/suitecrm
craig@craigs-laptop:~/redfalcon/gitlab/platform-setup/modules/suitecrm$ kubectl get svc --namespace default -w red-falcon-crm-suitecrm
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
red-falcon-crm-suitecrm LoadBalancer 10.0.0.89 <pending> 80:32750/TCP,443:31973/TCP 3m
^Ccraig@craigs-laptop:~/redfalcon/gitlab/platform-setup/modules/suitecrm$ minikube service red-falcon-crm-suitecrm
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
</code></pre>
<p>My Values.yaml:</p>
<pre><code>## Bitnami SuiteCRM image version
## ref: https://hub.docker.com/r/bitnami/suitecrm/tags/
##
image: bitnami/suitecrm:7.9.7-r0
## Specify a imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
imagePullPolicy: IfNotPresent
## SuiteCRM host to create application URLs
## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration
##
# suitecrmHost:
## loadBalancerIP for the SuiteCRM Service (optional, cloud specific)
## ref: http://kubernetes.io/docs/user-guide/services/#type-loadbalancer
##
# suitecrmLoadBalancerIP:
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration
##
suitecrmUsername: craig
## Application password
## Defaults to a random 10-character alphanumeric string if not set
## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration
##
suitecrmPassword: <hadmypasswordhere>
## Admin email
## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration
##
suitecrmEmail: <hadmyemail>@gmail.com
## Lastname
## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration
##
suitecrmLastName: <hadmylastname>
## SMTP mail delivery configuration
## ref: https://github.com/bitnami/bitnami-docker-suitecrm/#smtp-configuration
##
# suitecrmSmtpHost:
# suitecrmSmtpPort:
# suitecrmSmtpUser:
# suitecrmSmtpPassword:
# suitecrmSmtpProtocol:
##
## MariaDB chart configuration
##
mariadb:
## MariaDB admin password
## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#setting-the-root-password-on-first-run
##
mariadbRootPassword: <hadMyPasswordHere>
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
enabled: true
## mariadb data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessMode: ReadWriteOnce
size: 8Gi
## Kubernetes configuration
## For minikube, set this to NodePort, elsewhere use LoadBalancer
##
serviceType: LoadBalancer
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
enabled: true
apache:
## apache data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessMode: ReadWriteOnce
size: 1Gi
suitecrm:
## suitecrm data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessMode: ReadWriteOnce
size: 8Gi
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# requests:
# memory: 512Mi
# cpu: 300m
</code></pre>
<p>I did look at the logs of the running service (through minikube dashboard). Really not much use there but it seems to hang on preparing CRM Environment. It never gets beyond that...</p>
<pre><code> nami INFO Initializing apache
apache INFO ==> Patching httpoxy...
nami INFO apache successfully initialized
nami INFO Initializing php
nami INFO php successfully initialized
nami INFO Initializing suitecrm
suitecr INFO Preparing webserver environment...
suitecr INFO Preparing PHP environment...
suitecr INFO Preparing suitecrm environment...
</code></pre>
<p>The service endpoint is never ready and never appears to complete deployment. Any help would be appreciated.</p>
| <p>I never got the Helm chart to work, but I did find a workaround: installing via plain kubectl worked fine. I used the method provided here:</p>
<p><a href="https://github.com/bitnami/bitnami-docker-suitecrm" rel="nofollow noreferrer">https://github.com/bitnami/bitnami-docker-suitecrm</a></p>
|
<p>A solution to retrieve all containers running in a pod is to run <code>kubectl get pods POD_NAME_HERE -o jsonpath={.spec.containers[*].name}</code>, however this command line does not provide the init containers.</p>
<p>Is there a way to cleanly retrieve all containers running in a pod, including init containers?</p>
<p>[edit] as svenwltr noted, on Kubernete 1.6.0 or higher, it is possible to retrieve the init container with <code>kubectl get pods POD_NAME_HERE -o jsonpath={.spec.initContainers[*].name}</code> and all containers can be retrieved with <code>kubectl get pod POD_NAME_HERE -o jsonpath="{.spec['containers','initContainers'][*].name}"</code>. However, this is not a valid workaround for lower versions of Kubernetes where <code>.spec.initContainers</code> isn't implemented yet.</p>
| <p>The init containers are stored in <code>spec.initContainers</code>:</p>
<pre><code>kubectl get pods POD_NAME_HERE -o jsonpath={.spec.initContainers[*].name}
</code></pre>
<p>You can display both with a bit of <a href="https://kubernetes.io/docs/user-guide/jsonpath/" rel="noreferrer">JSONPath magic</a>:</p>
<pre><code>kubectl get pod POD_NAME_HERE -o jsonpath="{.spec['containers','initContainers'][*].name}"
</code></pre>
<hr>
<p>Before Kubernetes 1.6 the init containers were stored in <code>.metadata.annotations."pod.beta.kubernetes.io/init-containers"</code>. So it should be possible to get them via:</p>
<pre><code>kubectl get pods POD_NAME_HERE -o jsonpath='{metadata.annotations."pod.beta.kubernetes.io/init-containers".[*].name}'
</code></pre>
<p>Unfortunately I cannot test this, because I don't have a cluster with this version. Also joining containers and init containers into a single command looks a bit harder this way.</p>
|
<p>I want to know if it possible to view the content of a certificate that is currently loaded by NGINX.</p>
<p>The use case is in Kubernetes where a secret was changed with a new .key and .crt and I don't know if the pods have undergone a restart since the secret was changed.</p>
<p>I know its possible to view the currently installed secret on the Kubernetes cluster with </p>
<pre><code>kubectl get secret <secret> -o yaml
</code></pre>
<p>I confirmed the config file loaded by nginx by checking the process</p>
<p><code>ps aux</code></p>
<p>I also know that I can view the contents of the <code>nginx.conf</code> and see that it refers to the mount point where the certificate is located</p>
<pre><code> ssl_certificate /etc/vol/sslcert.crt;
ssl_certificate_key /etc/vol/sslcert.key;
</code></pre>
<p>But since the name of the .crt and .key never changed in the secret (only the values), I can't tell from that alone which certificate is currently loaded</p>
<p>Is there any way to confirm the actual content that is being held in memory?</p>
| <p>My advice would be to connect to the server and inspect the certificate it actually returns. This answer uses openssl and should work for your case: <a href="https://serverfault.com/a/661982/380575">https://serverfault.com/a/661982/380575</a></p>
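<p>As a concrete sketch of that approach (the host name and port are placeholders for however the NGINX pod or its Service is reachable, and the secret's data key is assumed to match the mounted file name <code>sslcert.crt</code>): the first command prints the subject, validity dates and fingerprint of the certificate NGINX is actually serving, and the second prints the same fields for the certificate stored in the secret, so you can compare them.</p>
<pre><code>echo | openssl s_client -connect my-nginx.example.com:443 -servername my-nginx.example.com 2>/dev/null \
  | openssl x509 -noout -subject -dates -fingerprint

kubectl get secret <secret> -o jsonpath='{.data.sslcert\.crt}' | base64 --decode \
  | openssl x509 -noout -subject -dates -fingerprint
</code></pre>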
|
<p><br>
I'm new to Kubernetes and I need to know how I can install and use Kubernetes with multiple nodes on Ubuntu 14.04,<br>
because everything I've found is about Ubuntu 16.04.<br>
There is a huge difference between the two versions where Kubernetes is concerned.<br>
I need to have a master and two slaves (using the same OS).<br>
Is it recommended to work with Kubernetes on Ubuntu 14.04?
Thank you for your help.</p>
| <p>Kubernetes makes use of systemd, which is not available in Ubuntu 14.04. While it is possible to install in Ubuntu 14.04, you would have to do some magic in order to make it work. You can find more info here: <a href="https://stackoverflow.com/questions/44302071/how-to-install-latest-kubernetes-in-ubuntu-14">How to install latest Kubernetes in Ubuntu 14</a> (Thanks to Janos Lenart who shared in the comments)</p>
|
<p>I'm unable to find good information describing these errors:</p>
<pre><code>[sarah@localhost helm] helm install statefulset --name statefulset --debug
[debug] Created tunnel using local port: '33172'
[debug] SERVER: "localhost:33172"
[debug] Original chart version: ""
[debug] CHART PATH: /home/helm/statefulset/
Error: error validating "": error validating data: [field spec.template for v1beta1.StatefulSetSpec is required, field spec.serviceName for v1beta1.StatefulSetSpec is required, found invalid field containers for v1beta1.StatefulSetSpec]
</code></pre>
<p>I'm still new to Helm; I've built two working charts that were similar to this template and didn't have these errors, even though the code isn't much different. I'm thinking there might be some kind of formatting error that I'm not noticing. Either that, or it's due to the different type (the others were Pods, this is StatefulSet). </p>
<p>The YAML file it's referencing is here:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
#serviceAccount: "{{.Values.PrimaryName}}-sa"
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
</code></pre>
<p>Would someone be able to a) point me in the right direction to find out how to implement the spec.template and spec.serviceName required fields, b) understand why the field 'containers' is invalid, and/or c) give mention of any tool that can help debug Helm charts? I've attempted 'helm lint' and the '--debug' flag but 'helm lint' shows no errors, and the flag output is shown with the errors above.</p>
<p>Is it possible the errors are coming from a different file, also? </p>
| <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> objects have a different structure than Pods do. You need to modify your yaml file a little:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
selector:
matchLabels:
app: "" # has to match .spec.template.metadata.labels
serviceName: "" # put your serviceName here
replicas: 1 # by default is 1
template:
metadata:
labels:
app: "" # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
          value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
</code></pre>
|
<p>I am looking for a kubernetes equivalent of <code>docker -v</code> for mounting the volumes in gcloud.</p>
<p>I am trying to run my container using google-container-engine which uses kubectl to manage clusters. In the kubectl run command I could not find any provision for mounting the volumes.</p>
<pre><code> kubectl run foo --image=gcr.io/project_id/myimage --port 8080
</code></pre>
<p>I checked out their <a href="https://kubernetes.io/docs/user-guide/docker-cli-to-kubectl/#docker-run" rel="noreferrer">official docs</a> but could not find any clue whatsoever.</p>
| <p>At the moment, it's not possible to mount a persistent volume into a container using the imperative commands or generators (<code>run</code>, <code>expose</code>). Therefore, you have to use the declarative way to get it done.</p>
<p>Kubernetes provides two abstractions for storage in a cluster: the persistent volume claim (PVC) and the persistent volume (PV). Moreover, you can use a storage class to provision persistent volumes (PVs) dynamically.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">persistent-volumes</a>.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">storage-classes</a> </p>
<p>When you write a deployment manifest, you reference the PVC in a volume entry (and mount it in the container), and you write a separate PVC manifest to claim a PV.</p>
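<p>A minimal sketch of that pattern (the names, storage size and paths are made up for illustration; on Container Engine the default storage class dynamically provisions the PV that backs the claim):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-data                        # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: foo
        image: gcr.io/project_id/myimage   # the image from the question
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: data                        # rough equivalent of docker -v's container path
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: foo-data
</code></pre>
<p>Apply it with <code>kubectl apply -f manifest.yaml</code> instead of <code>kubectl run</code>.</p>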
|
<p>Clients are connecting to an API gateway server through a websocket connection. This server just orchestrates a swarm of cloud functions that handle all of the data requesting and transforming. The server is stateful - it holds essential session data, which defines, for example, which cloud functions a given user is allowed to request.</p>
<p>This server doesn't use socket to broadcast data, so socket connections are not interacting between each other, and will not be doing this. So, all it needs to handle is single-client-to-server communication.</p>
<p>What will happen if I create a bunch of replicas and put a load balancer in front of all of them (like regular horizontal scaling)? If a user gets connected to a certain server instance, will their connection stick there, or will the load balancer switch it between instances?</p>
| <p>There is a parameter available for the load balancer that allows you to do what you are looking for: <a href="https://cloud.google.com/compute/docs/load-balancing/http/backend-service#backend_service_components" rel="nofollow noreferrer">Session affinity</a>.</p>
<p>"Session affinity if set attempts to send all network request from the same client to the same virtual machine instance."</p>
<p>Even though it seems to belong to the load balancer itself, you actually set it while creating the <a href="https://cloud.google.com/compute/docs/load-balancing/network/target-pools" rel="nofollow noreferrer">target pools</a> and/or <a href="https://cloud.google.com/compute/docs/load-balancing/http/backend-service#backend_service_components" rel="nofollow noreferrer">backends</a>. You should check whether this solution can be applied to your particular configuration.</p>
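<p>As a rough command-line sketch of that (the pool name and region are placeholders, and the exact flags should be verified against the current gcloud documentation), client-IP affinity on a network load balancer's target pool would be set roughly like this:</p>
<pre><code>gcloud compute target-pools create ws-gateway-pool \
    --region us-central1 \
    --session-affinity CLIENT_IP
</code></pre>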
|
<p>I am just beginning my journey to docker and am trying to put together the image of how to deploy docker to production instances in the cloud. We use ansible for all of our applications deployments currently. However, when I look for information on deploying docker containers I see either references to swarm and kubernetes or what appears to be manual deploys via ssh. </p>
<p>So, my noob question is, how do you deploy docker to your hosts in the cloud? Is there any reason to use ansible? (They have a plugin.) It seems like maybe swarm or kubernetes may just take care of that for us.</p>
| <p>I worked on a container-based platform 4 years ago and at the time, we used Ansible to deploy containers as orchestration frameworks were non-existent.</p>
<p>Nowadays, I would recommend using an orchestration framework (Kubernetes, Docker Swarm, Openshift, etc.) which provides features such as ingress load balancing, lifecycle management, persistent volume management etc.</p>
<p>There are many ways to run containers in public clouds, some implementing orchestration frameworks like Kubernetes, some implementing their own frameworks: Azure ACS/AKS, AWS ECS, Google Container Engine, etc. Docker Machine can provision Docker swarms in public clouds as well. What's interesting with those is that the underlying VMs are either hidden or managed for you.</p>
<p>There are many, many ways to do it, but just using Ansible will be limited to spinning up containers on existing VMs that you'll have to provision and manage.</p>
|
| <p>I have a question about Kubernetes DNS lookup: what if I use "dns" instead of "env" in my services deployment?</p>
<p>Can a microservice that uses other microservices in the cluster get the DNS names of all the microservices?</p>
<p>I found this piece of code: if I use <code>env</code>, I get the host info from environment variables. But if I'm using <code>dns</code>, what format do the names take and how do I get them? Is there a DNS object I can query on the client side?</p>
<pre><code>if (isset($_GET['cmd']) === true) {
$host = 'redis-master';
if (getenv('GET_HOSTS_FROM') == 'env') {
$host = getenv('REDIS_MASTER_SERVICE_HOST');
}
</code></pre>
<p>Ref: <a href="https://github.com/GoogleCloudPlatform/container-engine-samples/blob/master/guestbook/php-redis/guestbook.php" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/container-engine-samples/blob/master/guestbook/php-redis/guestbook.php</a></p>
<p>If someone has examples (preferably nodejs), I can dig into that. </p>
| <p>First off, this is <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#supported-dns-schema" rel="noreferrer">documented thoroughly</a>. In any case, if you want to query DNS to find out where things are running, you can do so if you know the service name by pointing at:</p>
<pre><code>my-svc.my-namespace.svc.cluster.local
</code></pre>
<p>Additionally, if you also want to abstract port numbers and are OK with knowing a port name you can query SRV records and get both port numbers as well as CNAME:</p>
<pre><code>_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local
</code></pre>
<p>For your specific example this would be something like (assuming default namespace):</p>
<pre><code>_redis-client._tcp.redis-service.default.svc.cluster.local
</code></pre>
<p>Querying SRV records is more reliable than depending on environment variables because if during the lifetime of the pod, an external service changes location, environment variables can't be re-injected, but re-querying DNS records will yield updated results.</p>
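<p>A quick way to sanity-check these names from inside the cluster is a throwaway pod, sketched below; the service names are the ones used in this answer, and the SRV lookup assumes you have some dnsutils-style image with <code>dig</code> available, since busybox's <code>nslookup</code> only handles the plain A-record case:</p>
<pre><code>kubectl run -it --rm dnstest --image=busybox --restart=Never -- \
    nslookup my-svc.my-namespace.svc.cluster.local

kubectl run -it --rm dnstest --image=tutum/dnsutils --restart=Never -- \
    dig SRV _redis-client._tcp.redis-service.default.svc.cluster.local
</code></pre>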
|
<p>I have a couple of tiny containers with a very small memory footprint and little traffic. I think it's overkill and too expensive to have a separate pod for each of them.</p>
<p>I currently deploy containers by simply pushing Docker images to the <em>OpenShift Online</em> Container Registry. OpenShift rebuilds and deploys the application as soon as a new image arrives. It works fine, but I just can't find a way to make OpenShift accept multiple images/containers for the same application/pod.</p>
<p>Does anyone know how to run multiple containers in one application/pod?</p>
| <p>I don't know what kind of disadvantages you have in mind when creating multiple pods. The overhead of a Pod vs a Container is negligible.</p>
<p>But putting multiple applications into a single pod clearly has disadvantages:</p>
<ul>
<li>if you want to restart a single container, you need to restart all of them</li>
<li>you cannot scale the containers separately, so you could not have a different count of services (for HA or load distribution)</li>
<li>you have to identify the services by port, since the service discovery works per Pod
<ul>
<li>ie. having multiple HTTP services, you could map them all to port 80 and use <code>http://fooservice</code> and <code>http://barservice</code> instead of <code>http://uberpod:8001</code> and <code>http://uberpod:8002</code></li>
</ul></li>
</ul>
<p>Again, there is almost no overhead of having multiple Pods.</p>
<p>I have no idea how the Kubernetes integration in OpenShift works, but with plain Kubernetes YAML files you could just add another container to the container list:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: foo
image: busybox
command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
- name: bar
image: mycontainer:latest
</code></pre>
|
<p>I keep seeing this error:</p>
<pre><code>Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
12s 2s 12 {statefulset } Warning FailedCreate create Pod pgset-0 in StatefulSet pgset failed error: pods "pgset-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{26}: 26 is not an allowed group]
</code></pre>
<p>I've created a ServiceAccount named "pgset-sa", and granted it the cluster-admin role. I've been researching other ways to get this to work (including editing scc restricted), but keep getting the error from fsGroup stating it's not an allowed group. What am I missing? </p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.ContainerName}}"
labels:
name: "{{.Values.ReplicaName}}"
app: "{{.Values.ContainerName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
selector:
matchLabels:
app: "{{.Values.ContainerName}}"
serviceName: "{{.Values.ContainerName}}"
replicas: 2
template:
metadata:
labels:
app: "{{.Values.ContainerName}}"
spec:
serviceAccount: "{{.Values.ContainerServiceAccount}}"
securityContext:
fsGroup: 26
terminationGracePeriodSeconds: 10
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_HOST
value: "{{.Values.PrimaryName}}"
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
</code></pre>
| <p>Take a look at this document titled: <a href="https://docs.openshift.org/latest/admin_guide/manage_scc.html#how-do-i" rel="nofollow noreferrer">Managing Security Context Constraints</a>.</p>
<p>The service account associated with the StatefulSet must be granted a security context constraint sufficient to allow the pod to run (one that, in this case, either allows fsGroup 26 specifically or allows any fsGroup).</p>
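<p>For reference, the grant itself is typically done with <code>oc adm policy</code>; a hedged sketch is below. The choice of <code>anyuid</code> is an assumption (its fsGroup strategy is RunAsAny on a default install, but verify with <code>oc describe scc</code>, or create a dedicated SCC instead of widening a built-in one):</p>
<pre><code>oc describe scc restricted                              # see what the current SCC allows
oc adm policy add-scc-to-user anyuid -z pgset-sa        # grant a permissive SCC to the service account
</code></pre>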
|
<p>I have a brand new Kubernetes v1.8 cluster with two nodes (RBAC enabled). Jenkins is deployed as a StatefulSet and recommended ServiceAccount/Role and RoleBindings were created as well (<a href="https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/kubernetes" rel="nofollow noreferrer">from here</a>). Cluster info:</p>
<pre><code>$ kubectl cluster-info
Kubernetes master is running at https://10.182.255.35:6443
</code></pre>
<p>When I'm trying to set up Kubernetes cloud in Jenkins settings I'm getting an error 403 (Forbidden). I followed pugin guide and created 'Kubernetes Service Account' credentials in Jenkins and trying to configure new cloud. <a href="https://i.stack.imgur.com/VCsQi.png" rel="nofollow noreferrer">Jenkins configuration screenshot</a>. Here is the debug log from plugin:</p>
<pre><code>Nov 02, 2017 7:40:57 PM FINE org.csanchez.jenkins.plugins.kubernetes.KubernetesFactoryAdapter
Creating Kubernetes client: KubernetesFactoryAdapter [serviceAddress=https://10.182.255.35:6443, namespace=default, caCertData=null, credentials=org.csanchez.jenkins.plugins.kubernetes.ServiceAccountCredential@99ee54b6, skipTlsVerify=true, connectTimeout=0, readTimeout=0]
Nov 02, 2017 7:40:57 PM FINE org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud
Error connecting to https://10.182.255.35:6443
java.io.IOException: Unexpected response code for CONNECT: 403
at okhttp3.internal.connection.RealConnection.createTunnel(RealConnection.java:371)
...(skipped)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:605)
Caused: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [list] for kind: [Pod] with name: [null] in namespace: [default] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
...(skipped)
</code></pre>
<p>At the same time if I try to make an API call using this serviceAccount from the pod, it's working:</p>
<pre><code>$ kubectl exec -ti jenkins-0 bash (ssh into the pod)
bash-4.3$ KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)
bash-4.3$ curl -sSk -H "Authorization: Bearer $KUBE_TOKEN"
https://10.182.255.35:6443/api/v1/namespaces/default/pods
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/default/pods",
"resourceVersion": "90645"
},
"items": [
{
...(skipped)
</code></pre>
| <p>Answering my own question: the problem was with my proxy settings. You need to add the instance IP to the <code>no_proxy</code> environment variable during cluster setup.</p>
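<p>For illustration, using the API server address from the logs above (where exactly the variable has to be set depends on how your proxy is configured, e.g. shell profile or the Docker/kubelet service environment):</p>
<pre><code>export no_proxy=$no_proxy,10.182.255.35
export NO_PROXY=$NO_PROXY,10.182.255.35
</code></pre>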
|
<p>We are experimenting with Kubernetes. Already have developed bunch of Spring boot Microservices which are ready to be integrated with kubernetes. </p>
<p>We would like to keep dev environment (local) simple and not complicate developers with running local kubernetes clusters/ building images etc. </p>
<ol>
<li>Is there any solution to access Kubernetes config-map/secrets outside of its cluster?</li>
<li>Is there any way to discover services running in a Kubernetes cluster in a standalone spring boot Microservice app??</li>
</ol>
<p>Thanks in advance!
A</p>
| <p>Kubernetes relies on its API servers for all operations. You can use <a href="https://kubernetes.io/docs/api-reference/v1.8/#-strong-api-overview-strong-" rel="nofollow noreferrer">this API</a> to query/do anything within Kubernetes.</p>
<ol>
<li><p>Get a configmap with the API: <code>GET /api/v1/namespaces/{namespace}/configmaps/{name}</code></p></li>
<li><p>List all the services: <code>GET /api/v1/namespaces/{namespace}/services/</code></p></li>
</ol>
<p>How you access the API will depend on your particular setup, but you can test it quickly by running <code>kubectl proxy</code> and just using <code>curl</code> against localhost.</p>
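<p>For example (the config map name <code>config</code> is borrowed from the manifest in the question, and the namespace is assumed to be <code>default</code>):</p>
<pre><code>kubectl proxy --port=8001 &

curl http://localhost:8001/api/v1/namespaces/default/configmaps/config
curl http://localhost:8001/api/v1/namespaces/default/services/
</code></pre>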
|
<p>In Kubernetes object metadata, there are <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#metadata" rel="noreferrer">the concepts of <code>resourceVersion</code> and <code>generation</code></a>. I understand the notion of <code>resourceVersion</code>: it is an optimistic concurrency control mechanism—it will change with every update. What, then, is <code>generation</code> for?</p>
| <p><code>resourceVersion</code> changes on every write, and is used for optimistic concurrency control.</p>
<p>In some objects, <code>generation</code> is incremented by the server as part of persisting writes affecting the <code>spec</code> of an object.</p>
<p>Some objects' <code>status</code> fields have an <code>observedGeneration</code> subfield for controllers to persist the generation that was last acted on.</p>
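<p>A quick way to see all three side by side (the deployment name is a placeholder): <code>generation</code> bumps when you change the spec, and the controller's <code>observedGeneration</code> catches up once it has acted on that change.</p>
<pre><code>kubectl get deployment my-deploy \
    -o jsonpath='{.metadata.resourceVersion} {.metadata.generation} {.status.observedGeneration}{"\n"}'
</code></pre>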
|
<p>I have a setup where docker ONLY works as root (I know, my fault). I'm trying to follow the GCR quickstart: [1]. I can't find anything on the troubleshooting page [2] either.</p>
<p>Can you help me (and I can then file a document fix)?</p>
<p>[1] <a href="https://cloud.google.com/container-registry/docs/quickstart" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/quickstart</a>
[2] <a href="https://cloud.google.com/container-registry/docs/support/troubleshooting" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/support/troubleshooting</a></p>
<h1>Repro</h1>
<p>Reproduction steps (also in b/68796816):</p>
<pre><code>$ docker -v
Docker version 1.6.2, build 7c8fca2
ricc@rubino:~/git/gce-recipes/gke/quickstart-image$ sudo docker run busybox date
Thu Nov 2 12:29:35 UTC 2017
$ sudo docker tag quickstart-image gcr.io/ric-cccwiki/quickstart-image
# All good so far ...
</code></pre>
<p>Option 1 (no sudo):</p>
<pre><code># no sudo: docker doesn't work
$ gcloud docker -- push gcr.io/ric-cccwiki/quickstart-image
FATA[0000] Post http:///var/run/docker.sock/v1.18/images/gcr.io/ric-cccwiki/quickstart-image/push?tag=: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?
</code></pre>
<p>Option 2: with sudo:</p>
<pre><code># docker works but gcloud is not found
$ sudo gcloud docker -- push gcr.io/ric-cccwiki/quickstart-image
sudo: gcloud: command not found
</code></pre>
<p>Neither way works</p>
| <p>There are a few options, the best being to just make docker usable without root: <a href="https://docs.docker.com/engine/installation/linux/linux-postinstall/" rel="nofollow noreferrer">https://docs.docker.com/engine/installation/linux/linux-postinstall/</a></p>
<p><strong>Option 1: Specify your full path to gcloud when using it.</strong></p>
<pre><code>sudo $(which gcloud)
</code></pre>
<p><strong>Option 2: Install glcoud as root</strong></p>
<pre><code>sudo su
#install gcloud
gcloud version
</code></pre>
<p>However, the best thing to do is to just use docker as non-root :)</p>
|
<p>Running a GKE cluster with 1.8.1 - when I look at <code>/logs/kube-apiserver-audit.log</code>, it's completely empty. I've taken actions like creating deployments and deleting pods that have been visible in audit logs for clusters I've provisioned outside of GKE.</p>
<p>Is there a better way to view or access these kinds of events with GKE?</p>
| <p>That would be because Container Engine 1.8 release does not enable the audit logging feature yet. From <a href="https://cloud.google.com/container-engine/release-notes" rel="nofollow noreferrer">Release Notes</a>:</p>
<blockquote>
<p>KNOWN ISSUE: Audit Logging, a beta feature in Kubernetes 1.8, is currently not enabled on Container Engine.</p>
</blockquote>
<p>It will probably be enabled at some point in the future, I’d keep an eye on the <a href="https://cloud.google.com/container-engine/release-notes" rel="nofollow noreferrer">Release Notes</a>.</p>
|
<p>I initialized the master node and added 2 worker nodes, but only the master and one of the worker nodes show up when I run the following command:</p>
<pre><code>kubectl get nodes
</code></pre>
<p>Also, both of these nodes are in the 'Not Ready' state.
What steps should I take to understand what the problem could be?</p>
<ul>
<li>I can ping all the nodes from each of the other nodes. </li>
<li>The version of Kubernetes is 1.8.</li>
<li>OS is Cent OS 7</li>
<li><p>I used the following repo to install Kubernetes:</p>
<pre><code>cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
yum install kubelet kubeadm kubectl kubernetes-cni
</code></pre></li>
</ul>
| <p>First, describe nodes and see if it reports anything:</p>
<p><code>$ kubectl describe nodes</code></p>
<p>Look for conditions, capacity and allocatable:</p>
<pre><code>Conditions:
Type Status
---- ------
OutOfDisk False
MemoryPressure False
DiskPressure False
Ready True
Capacity:
cpu: 2
memory: 2052588Ki
pods: 110
Allocatable:
cpu: 2
memory: 1950188Ki
pods: 110
</code></pre>
<p>If everything is alright here, SSH into the node and observe <code>kubelet</code> logs to see if it reports anything. Like certificate erros, authentication errors etc.</p>
<p>If <code>kubelet</code> is running as a systemd service, you can use</p>
<p><code>$ journalctl -u kubelet</code></p>
|
<p>I have a cluster on Google Cloud Container Engine with 6 <code>n1-standard-1</code> machines.</p>
<p>I deployed several services and pods on this cluster and sometimes they fail with the only reason <code>FailedSync</code> and no further explanation; I have no idea why they fail. The virtual machines are not overloaded: only 6% of the CPU is used and less than 1Gi of memory.</p>
<p>Here are some events from the describe command:</p>
<p><a href="https://i.stack.imgur.com/FLFnz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FLFnz.png" alt="events 1"></a>
<a href="https://i.stack.imgur.com/Khsh6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Khsh6.png" alt="enter image description here"></a></p>
<p>Pods filtered by <code>is system object: true</code> have the same problem; some of them have more than 900 restarts in 4 days...</p>
<p><a href="https://i.stack.imgur.com/Mkt6R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mkt6R.png" alt="enter image description here"></a></p>
<p>Maybe I'm missing something in my kubernetes configuration, but I have no idea what...</p>
<p>Thanks for your help</p>
| <p>I finally found the reason for the node failures. I was using a glusterfs volume with the https://eventstore.org/ database and I think the latency made it fail; I saw lots of slow queries in the eventstore logs. I don't know exactly what happened, but since switching to a persistent SSD disk in the same region as my cluster I have had no issues: zero restarts for several days and the nodes work like a charm.</p>
<p>I also isolated this database on a single node.</p>
|
<p>I have <code>container_fs_usage_bytes</code> in Prometheus to monitor the container root filesystem, but it seems that there are no metrics for other volumes in cAdvisor.</p>
| <p>I confirmed that Kubernetes 1.8 exposes the following volume metrics for Prometheus.</p>
<ul>
<li><code>kubelet_volume_stats_available_bytes</code></li>
<li><code>kubelet_volume_stats_capacity_bytes</code></li>
<li><code>kubelet_volume_stats_inodes</code></li>
<li><code>kubelet_volume_stats_inodes_free</code></li>
<li><code>kubelet_volume_stats_inodes_used</code></li>
<li><code>kubelet_volume_stats_used_bytes</code></li>
</ul>
|
<p>I have a setup where docker ONLY works as root (I know, my fault). I'm trying to follow the GCR quickstart: [1]. I can't find anything on the troubleshooting page [2] either.</p>
<p>Can you help me (and I can then file a document fix)?</p>
<p>[1] <a href="https://cloud.google.com/container-registry/docs/quickstart" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/quickstart</a>
[2] <a href="https://cloud.google.com/container-registry/docs/support/troubleshooting" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/support/troubleshooting</a></p>
<h1>Repro</h1>
<p>Reproduction steps (also in b/68796816):</p>
<pre><code>$ docker -v
Docker version 1.6.2, build 7c8fca2
ricc@rubino:~/git/gce-recipes/gke/quickstart-image$ sudo docker run busybox date
Thu Nov 2 12:29:35 UTC 2017
$ sudo docker tag quickstart-image gcr.io/ric-cccwiki/quickstart-image
# All good so far ...
</code></pre>
<p>Option 1 (no sudo):</p>
<pre><code># no sudo: docker doesn't work
$ gcloud docker -- push gcr.io/ric-cccwiki/quickstart-image
FATA[0000] Post http:///var/run/docker.sock/v1.18/images/gcr.io/ric-cccwiki/quickstart-image/push?tag=: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?
</code></pre>
<p>Option 2: with sudo:</p>
<pre><code># docker works but gcloud is not found
$ sudo gcloud docker -- push gcr.io/ric-cccwiki/quickstart-image
sudo: gcloud: command not found
</code></pre>
<p>Neither way works</p>
| <p>Jake suggests: <code>gcloud docker</code> is on the deprecation path, for this reason among others. </p>
<p>I'd recommend doing <code>gcloud components install docker-credential-gcr</code>
followed by <code>which docker-credential-gcr</code>, <code>cp</code>ing the binary to a location
on root's PATH. </p>
<p><code>sudo docker-credential-gcr configure-docker</code> followed by
<code>sudo docker-credential-gcr gcr-login</code> should then allow you to use <code>sudo
docker</code> without issue.</p>
<p>See credentials helper's docs:
<a href="https://github.com/GoogleCloudPlatform/docker-credential-gcr" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/docker-credential-gcr</a></p>
|
<p>I've created a Kubernetes cluster using CoreOS on AWS and I'm having trouble communicating with nodes from the master.</p>
<p>For example, operations like <code>kubectl exec</code> or <code>kubectl logs</code> fail with an error similar to the following:</p>
<pre><code>Error from server: dial tcp: lookup ip-XXX-X-XXX-XXX.eu-west-1.compute.internal: no such host
</code></pre>
<p>I've found some issues on Github that describe the problem so I know the team knows about this bug, but I would like to ask here if its possible to use some workaround until it gets addressed somehow.</p>
<p>One workaround mentioned was to use the <code>--hostname-override</code> flag but as I'm on AWS, this flag is ignored (see <a href="https://github.com/kubernetes/kubernetes/issues/22984" rel="noreferrer">#22984</a>)</p>
<p>Related issues on GitHub: <a href="https://github.com/kubernetes/kubernetes/issues/22770" rel="noreferrer">#22770</a> <a href="https://github.com/kubernetes/kubernetes/issues/22063" rel="noreferrer">#22063</a>.</p>
| <p>Have you made sure you're using the right context?</p>
<pre><code>kubectl config use-context my-cluster-name
</code></pre>
|
<p>I'm experiencing that the Kubernetes API server fails to start during cluster bootstrapping with the following error log, apparently due to being unable to initialize its "client CA configmap": </p>
<pre><code>E1029 14:35:56.211083 5 client_ca_hook.go:78] Timeout: request did not complete within allowed duration
F1029 14:35:56.211121 5 hooks.go:126] PostStartHook “ca-registration” failed: unable to initialize client CA configmap: timed out waiting for the condition
</code></pre>
<p>It seems to happen <a href="https://github.com/kubernetes/kubernetes/blob/17baaacb29247dbb6b04471f4bee831b0d617dc8/pkg/master/client_ca_hook.go#L67" rel="nofollow noreferrer">here</a> in the Kubernetes source code. What might cause this error?</p>
<p>See the full log <a href="https://slack-files.com/T09NY5SBT-F7SNQ2370-1671ad4b77" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Update:</strong> It seems that my etcd cluster isn't accessible from master nodes, even though the same command works from etcd member machines:</p>
<pre><code>$ sudo ETCDCTL_API=3 etcdctl --cacert=/opt/tectonic/tls/etcd-client-ca.crt \
--cert=/opt/tectonic/tls/etcd-client.crt --key=/opt/tectonic/tls/etcd-client.key \
--endpoints=https://coreos-testing-etcd-0.socialfoodie.club:2379 \
endpoint health
https://coreos-testing-etcd-0.socialfoodie.club:2379 is unhealthy: failed to connect: grpc: timed out when dialing
Error: unhealthy cluster
</code></pre>
| <p>I found out that despite the cryptic error message in the API server, the cause is that it can't write to the etcd cluster. The reason was that the API server was configured with a different client certificate authority than what the etcd cluster was using, due to a timing issue wrt. copying certificates in my Terraform cluster setup. I figured out that the CA was the problem by using curl to contact the etcd cluster instead of etcdctl, as it gave a clear error message.</p>
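<p>A curl invocation along those lines (reusing the certificate paths from the update above) looks roughly like this; with a mismatched CA it fails with an explicit TLS verification error instead of the opaque gRPC timeout:</p>
<pre><code>curl --cacert /opt/tectonic/tls/etcd-client-ca.crt \
     --cert /opt/tectonic/tls/etcd-client.crt \
     --key /opt/tectonic/tls/etcd-client.key \
     https://coreos-testing-etcd-0.socialfoodie.club:2379/health
</code></pre>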
<p>Thanks to @johnharris85 for suggesting etcd connectivity being an issue!</p>
|
<p>I'm using a CI to update my kubernetes cluster whenever there's an update to an image. Whenever the image is pushed and has the latest tag it kubectl apply's the existing deployment but nothing gets updated.</p>
<p>this is what runs
<code>$ kubectl apply --record --filename /tmp/deployment.yaml</code></p>
<p>My goal is that when the apply is run, a rolling deployment gets executed.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: api
spec:
replicas: 1
template:
metadata:
labels:
app: api
spec:
containers:
- name: api
image: us.gcr.io/joule-eed41/api:latest
imagePullPolicy: Always
ports:
- containerPort: 1337
args:
- /bin/sh
- -c
- echo running api;npm start
env:
- name: NAMESPACE
valueFrom:
configMapKeyRef:
name: config
key: NAMESPACE
</code></pre>
| <p>As others suggested, use a specific image tag rather than <code>latest</code>.
Then set the new image using the following command:</p>
<p><code>kubectl set image deployment/deployment_name container_name=image_name:image_tag</code></p>
<p>In your case it would be:</p>
<p><code>kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:0.1</code></p>
|
<p>Is there a Nginx Ingress Controller Docker image available for ARM32, which will run on a Raspberry PI 3?</p>
<p>I have tried this:</p>
<pre><code>$ docker run gcr.io/google_containers/nginx-ingress-controller:0.8.3
standard_init_linux.go:195: exec user process caused "exec format error"
</code></pre>
<p>Is there some documentation / list of images with more details on which tags are available for the images on gcr.io/google_containers?</p>
| <p>Support for arm64 was introduced with <a href="https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.9.0-beta.12" rel="nofollow noreferrer">0.9.0-beta.12</a>. I don't believe there's an arm32 compatible release.</p>
<p>As for how to know which images or tags are available on gcr you can either run these:</p>
<pre><code>gcloud container images list-tags [HOSTNAME]/[PROJECT-ID]/[IMAGE]
gcloud container images list --repository=[HOSTNAME]/[PROJECT-ID]
</code></pre>
<p>Or just hit <code>http://[HOSTNAME]/[PROJECT-ID]/[IMAGE]</code> with your browser, where for the nginx-controller it would be <a href="http://gcr.io/google_containers/nginx-ingress-controller" rel="nofollow noreferrer">http://gcr.io/google_containers/nginx-ingress-controller</a></p>
|
<pre><code>[root@kube-master pods]# kubectl create -f web.yml
[root@kube-master pods]# kubectl get pods -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
httpd     0/1       ContainerCreating   0          1h                  kube-node2
</code></pre>
<pre><code>[root@kube-master pods]# kubectl describe pods httpd
Name:           httpd
Namespace:      default
Node:           kube-node2/10.10.0.102
Start Time:     Mon, 30 Oct 2017 17:47:38 +0600
Labels:         app=webserver
Status:         Pending
IP:
Controllers:
Containers:
  httpd:
    Container ID:
    Image:          webserver
    Image ID:
    Port:           80/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Volume Mounts:
    Environment Variables:
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
No volumes.
QoS Class:      BestEffort
Tolerations:
Events:
  FirstSeen  LastSeen  Count  From                  SubObjectPath  Type     Reason      Message
  ---------  --------  -----  ----                  -------------  -------  ------      -------
  1h         5m        16     {kubelet kube-node2}                 Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
  1h         8s        271    {kubelet kube-node2}                 Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
</code></pre>
<p>The image should be pulled from Docker Hub, but instead the error says:</p>
<blockquote>
<p>Error syncing pod, skipping: failed to "StartContainer" for "POD" with
ErrImagePull: "image pull failed for
registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may
be because there are no credentials on this request. details: (open
/etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such
file or directory)"</p>
</blockquote>
<p>Why does this happen, and what is the solution?</p>
| <p>I encountered the same problem, and I found that the rhsm-related software was not installed on the machine. You can run <code>yum install *rhsm*</code> to solve this problem.</p>
|
<p>Kubernetes Cluster Autoscaler versions are <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#which-version-on-cluster-autoscaler-should-i-use-in-my-cluster" rel="nofollow noreferrer">tightly coupled</a> to Kubernetes versions. How can I check what version of Cluster Autoscaler is deployed currently in my Kubernetes cluster?</p>
<p>Running <code>gcloud container clusters describe my-kube-cluster</code> does not return the Cluster Autoscaler version:</p>
<pre><code>nodePools:
- autoscaling:
enabled: true
maxNodeCount: 12
minNodeCount: 3
</code></pre>
| <p>There's no endpoint in the cluster autoscaler that prints its version, including <code>/health-check</code> and <code>/metrics</code>. The only place I could find that referenced a version number was <a href="https://github.com/kubernetes/autoscaler/blob/da224d4db9fc6269914e0cd1d662b923b2dc7789/cluster-autoscaler/main.go#L286" rel="nofollow noreferrer">this line in the initialisation code</a>, which you might find in the cluster autoscaler logs. Other than that I guess you could use the kubernetes API to query the cluster autoscaler Deployment resource image tag:</p>
<pre><code>kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l app=cluster-autoscaler
</code></pre>
|
<p>I have deleted my elasticsearch cluster, but now after I've deployed a new cluster I need to access the old data that was stored on 3 Persistent Volumes PV described bellow: </p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
storage-es-data-0 Bound pvc-19429b0b-ba42-11e7-979d-42010a840ff7 12Gi RWO standard 10d
storage-es-data-1 Bound pvc-36505962-ba42-11e7-979d-42010a840ff7 12Gi RWO standard 10d
storage-es-data-2 Bound pvc-422da328-ba42-11e7-979d-42010a840ff7 12Gi RWO standard 10d
</code></pre>
<p>This is the description of the old PV claims:</p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-19429b0b-ba42-11e7-979d-42010a840ff7 12Gi RWO Delete Bound default/storage-es-data-0 standard 10d
pvc-36505962-ba42-11e7-979d-42010a840ff7 12Gi RWO Delete Bound default/storage-es-data-1 standard 10d
pvc-422da328-ba42-11e7-979d-42010a840ff7 12Gi RWO Delete Bound default/storage-es-data-2 standard 10d
</code></pre>
<p>My new deployment is described as follow:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: es-data
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
imagePullPolicy: Always
ports:
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage
mountPath: /data
volumes:
- name: storage
persistentVolumeClaim:
claimName: storage-es-data-0
</code></pre>
<p>After connecting my pod with a LoadBalancer service, I didn't find any documents. Am I missing something? And how can I use the three PVs in the same POD?</p>
| <p>Your <code>deployment</code> yaml file is correct. You should be able to find files from <code>pvc-19429b0b-ba42-11e7-979d-42010a840ff7</code> volume inside <code>/data</code> folder in your pod.</p>
<p>In order to use three PV in the same POD just add them to your <code>deployment</code> yaml:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: es-data
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
imagePullPolicy: Always
ports:
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage-0
mountPath: /data0
- name: storage-1
mountPath: /data1
- name: storage-2
mountPath: /data2
volumes:
- name: storage-0
persistentVolumeClaim:
claimName: storage-es-data-0
- name: storage-1
persistentVolumeClaim:
claimName: storage-es-data-1
- name: storage-2
persistentVolumeClaim:
claimName: storage-es-data-2
</code></pre>
|
<p>I have a n00b question about setting up gitlab on a kubernetes cluster. I'm running a basic <a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">minikube</a> cluster on my home machine and I want to install gitlab onto it. The setup instructions <a href="https://docs.gitlab.com/ee/install/kubernetes/gitlab_chart.html#prerequisites" rel="nofollow noreferrer">note the following prerequisite</a>:</p>
<blockquote>
<p>The ability to point a DNS entry or URL at your GitLab install</p>
</blockquote>
<p>How do I do this? <strong>What is the basic mechanism for setting up a "DNS entry" on a home machine running minikube?</strong> There example shows:</p>
<blockquote>
<p>externalUrl: '<a href="http://gitlab.example.com" rel="nofollow noreferrer">http://gitlab.example.com</a>'</p>
</blockquote>
<p><strong>If I own a domain how would I set this up? Why does the setup need an external URL?</strong></p>
<p>Thank you in advance for you consideration and response. </p>
| <p>You'll be setting it up with a nodeport and can access it normally through that. If you really want a DNS entry you can just add one in your hosts file at <code>/etc/hosts</code>:</p>
<pre><code>[minikube ip] myfakelocalgitlabendpoint.com
</code></pre>
<p>You can get minikube's IP by running <code>minikube ip</code>.</p>
|
<p>Not sure how to name it, but having a rails app, and using sidekiq; I'd like to be able to view both sidekiq and rails logs when running <code>kubectl logs {podname}</code></p>
<p>Is that even possible? Is there another possibility (like extra commands to log specific sources)? Just wondering where to start investigating.</p>
| <p>You should run your rails app in one container and sidekiq in another container. Both of them can run in the same pod. Assuming they are indeed in the same pod but different containers, you can get each of their logs with the appropriate:</p>
<pre><code>kubectl logs my-pod -c my-container
</code></pre>
<p>If otherwise you run them each in their own pods, getting each of their logs should be the default setup of just calling <code>kubectl logs my-pod</code> for each of them.</p>
<p>If you want to aggregate logs from multiple sources you should use something like <a href="https://github.com/johanhaleby/kubetail" rel="nofollow noreferrer">kubetail</a>.</p>
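<p>Assuming kubetail is installed, the basic usage is to pass it a pod name (or a common name prefix), something like:</p>
<pre><code>kubetail my-pod
</code></pre>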
|
<p>I would like to parse Kubernetes manifest files (json/yaml) and be able to convert them to k8s structures (to manipulate them later on).</p>
<p>I know there is the NewYAMLOrJSONDecoder().Decode() function (<a href="https://github.com/kubernetes/apimachinery/blob/master/pkg/util/yaml/decoder.go" rel="noreferrer">https://github.com/kubernetes/apimachinery/blob/master/pkg/util/yaml/decoder.go</a>) to read a json/yaml file, but the next step is: how to convert them to k8s structure/type?</p>
<p>i.e. if I read a yaml file with a Namespace object, how to convert it to a core/v1/namespace interface for example</p>
<p>Regards,</p>
| <p>Thanks svenwltr, I was not aware we could do it like that.</p>
<p>At the same time, I managed to find not a better approach, but a different one:</p>
<pre><code>package main

import (
    "flag"
    "fmt"
    "os"
    "io"
    "path/filepath"
    "log"
    "encoding/json"
    //"time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/discovery"
    "k8s.io/client-go/dynamic"
    "k8s.io/apimachinery/pkg/util/yaml"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/api/meta"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
    var kubeconfig *string
    if home := homeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()

    // use the current context in kubeconfig
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        panic(err.Error())
    }

    // create the clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    f, err := os.Open("namespace.yaml")
    if err != nil {
        log.Fatal(err)
    }

    d := yaml.NewYAMLOrJSONDecoder(f, 4096)
    dd := clientset.Discovery()
    apigroups, err := discovery.GetAPIGroupResources(dd)
    if err != nil {
        log.Fatal(err)
    }
    restmapper := discovery.NewRESTMapper(apigroups, meta.InterfacesForUnstructured)

    for {
        // https://github.com/kubernetes/apimachinery/blob/master/pkg/runtime/types.go
        ext := runtime.RawExtension{}
        if err := d.Decode(&ext); err != nil {
            if err == io.EOF {
                break
            }
            log.Fatal(err)
        }
        fmt.Println("raw: ", string(ext.Raw))

        versions := &runtime.VersionedObjects{}
        //_, gvk, err := objectdecoder.Decode(ext.Raw,nil,versions)
        obj, gvk, err := unstructured.UnstructuredJSONScheme.Decode(ext.Raw, nil, versions)
        fmt.Println("obj: ", obj)

        // https://github.com/kubernetes/apimachinery/blob/master/pkg/api/meta/interfaces.go
        mapping, err := restmapper.RESTMapping(gvk.GroupKind(), gvk.Version)
        if err != nil {
            log.Fatal(err)
        }

        restconfig := config
        restconfig.GroupVersion = &schema.GroupVersion{
            Group:   mapping.GroupVersionKind.Group,
            Version: mapping.GroupVersionKind.Version,
        }
        dclient, err := dynamic.NewClient(restconfig)
        if err != nil {
            log.Fatal(err)
        }

        // https://github.com/kubernetes/client-go/blob/master/discovery/discovery_client.go
        apiresourcelist, err := dd.ServerResources()
        if err != nil {
            log.Fatal(err)
        }
        var myapiresource metav1.APIResource
        for _, apiresourcegroup := range apiresourcelist {
            if apiresourcegroup.GroupVersion == mapping.GroupVersionKind.Version {
                for _, apiresource := range apiresourcegroup.APIResources {
                    //fmt.Println(apiresource)
                    if apiresource.Name == mapping.Resource && apiresource.Kind == mapping.GroupVersionKind.Kind {
                        myapiresource = apiresource
                    }
                }
            }
        }
        fmt.Println(myapiresource)

        // https://github.com/kubernetes/client-go/blob/master/dynamic/client.go
        var unstruct unstructured.Unstructured
        unstruct.Object = make(map[string]interface{})
        var blob interface{}
        if err := json.Unmarshal(ext.Raw, &blob); err != nil {
            log.Fatal(err)
        }
        unstruct.Object = blob.(map[string]interface{})
        fmt.Println("unstruct:", unstruct)

        ns := "default"
        if md, ok := unstruct.Object["metadata"]; ok {
            metadata := md.(map[string]interface{})
            if internalns, ok := metadata["namespace"]; ok {
                ns = internalns.(string)
            }
        }

        res := dclient.Resource(&myapiresource, ns)
        fmt.Println(res)
        us, err := res.Create(&unstruct)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("unstruct response:", us)
    }
}

func homeDir() string {
    if h := os.Getenv("HOME"); h != "" {
        return h
    }
    return os.Getenv("USERPROFILE") // windows
}
</code></pre>
|
<p>I discovered that, until a few months ago, <a href="https://github.com/kubernetes/kubernetes/issues/23920" rel="nofollow noreferrer">the "hostPort" configuration for Pods was not going to work with CNI based integrations</a>. This meant that, for any Kubernetes cluster using Calico, it was not possible to directly expose a Pod's port directly on a certain Node's port, without using a Service or flagging <code>hostNetwork=true</code> (which is a little bit extreme).</p>
<p>Starting from Kubernetes 1.7.0 it is possible but it is necessary to change Calico configuration in order to let <a href="https://github.com/containernetworking/plugins/pull/1" rel="nofollow noreferrer">the new "portmap" CNI plugin</a> in, which is what I'm trying to do, without success. I am starting from a new IBM Bluemix Container Service cluster.</p>
<p>My calico-node DaemonSet has the following CNI_NETWORK_CONFIG environmental variable:</p>
<pre><code>{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"type": "calico",
"etcd_endpoints": "__ETCD_ENDPOINTS__",
"etcd_key_file": "__ETCD_KEY_FILE__",
"etcd_cert_file": "__ETCD_CERT_FILE__",
"etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
"log_level": "info",
"mtu": 1480,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s",
"k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
"k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
}
</code></pre>
<p>What I did here was just trying to replace it with the following configuration:</p>
<pre><code>{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [{
"type": "calico",
"etcd_endpoints": "__ETCD_ENDPOINTS__",
"etcd_key_file": "__ETCD_KEY_FILE__",
"etcd_cert_file": "__ETCD_CERT_FILE__",
"etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
"log_level": "info",
"mtu": 1480,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s",
"k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
"k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {
"portMappings": true
}
}
]
}
</code></pre>
<p><code>calico-node</code> pods were running successfully after a forced reboot, but my own Pods keep getting stuck in "Pending" status during initialization, with the event "Error syncing pod" from "kubelet NODE_IP".</p>
<p>I'd appreciate some help on this issue. Thanks in advance.</p>
| <p>What you have looks reasonable as far as the contents, I think the problem may be that you need to change the name of the config file from ending in <code>.conf</code> to <code>.conflist</code>. There is a PR up with some WIP changes <a href="https://github.com/projectcalico/calico/pull/903" rel="nofollow noreferrer">https://github.com/projectcalico/calico/pull/903</a> for enabling hostport in the calico manifests, you can compare it with what you have done.</p>
<p>If you set the filename through the daemonset you should remove the previous config file on the hosts because the released install-cni container does not clean up the previous config and I am not sure which config file the kubelet would use.</p>
|
<p>I'm currently using Solr Cloud 6.1, the following behavior can also be observed until 7.0.</p>
<p>I'm trying to create a Solr collection with 5 shards and a replication factor of 2. I have 5 physical servers. Normally, this would distribute all 10 replicas evenly among the available servers. </p>
<p>But, when starting Solr Cloud with a <code>-h</code> (hostname) param to give every Solr instance an individual, but constant hostname, this doesn't work any more. The distribution then looks like this:</p>
<pre><code>solr-0:
wikipedia_shard1_replica1 wikipedia_shard2_replica1 wikipedia_shard3_replica2 wikipedia_shard4_replica1 wikipedia_shard4_replica2
solr-1:
solr-2:
wikipedia_shard3_replica1 wikipedia_shard5_replica1 wikipedia_shard5_replica2
solr-3:
wikipedia_shard1_replica2
solr-4:
wikipedia_shard2_replica2
</code></pre>
<p>I tried using <a href="https://lucene.apache.org/solr/guide/7_0/rule-based-replica-placement.html" rel="nofollow noreferrer">Rule-based Replica Placement</a>, but the rules seem to be ignored.</p>
<p>I need to use hostnames, because Solr runs in a Kubernetes cluster, where IP addresses change frequently and Solr won't find its cores after a container restart. I first suspected a newer Solr version to be the cause of this, but I narrowed it down to the hostname problem.</p>
<p>Is there any solution for this?</p>
| <p>The solution was actually quite simple (but not really documented):</p>
<p>When creating a <code>Service</code> in OpenShift/Kubernetes, all matching Pods get backed by a load balancer. When each Solr instance is assigned a unique hostname, these hostnames all resolve to one single IP address (that of the load balancer).</p>
<p>Solr somehow can't deal with that and fails to distribute its shards evenly.</p>
<p>The solution is to use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless services</a> from Kubernetes. Headless services aren't backed by a load balancer, and therefore every hostname resolves to a unique IP address.</p>
|
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: node-timer-analytics-parser-cronjob
spec:
schedule: "0 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: node-timer-analytics-parser-cronjob
image: round0.azurecr.io/node-timer-analytics-parser:latest
args:
- /bin/sh
- -c
- date; npm start
restartPolicy: OnFailure
imagePullSecrets:
- name: regsecret
</code></pre>
<p>I have the YAML file above which creates a CronJob, but the <strong>$ kubectl get cronjob</strong> command gives this:</p>
<pre><code>NAME KIND
node-timer-analytics-parser-controller CronJob.v1beta1.batch
</code></pre>
<p>also <strong>$ kubectl get job</strong> says <strong><em>NO resources found.</em></strong></p>
<p>I also tried this <strong>$ kubectl describe cronjob node-timer-analytics-parser-controller</strong> which gives </p>
<pre><code>Name: node-timer-analytics-parser-controller
Namespace: default
Labels: <none>
Events: <none>
</code></pre>
<p>This Docker image/container runs some extensive SQL queries (40-50 queries in parallel) against a SQL database.<br>
I tested the Docker container locally and it runs without any issues. One complete execution takes around 1-5 minutes. I don't understand why it is not working in <strong>Kubernetes</strong>.</p>
<p>I even tried the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#creating-a-cron-job" rel="nofollow noreferrer">example from the k8s documentation</a>, which seems to work, so I don't know what the issue is with my CronJob.</p>
| <p>Check whether you are actually running Kubernetes 1.8; according to the <a href="https://cloud.google.com/container-engine/release-notes" rel="nofollow noreferrer">release notes</a>, 1.7.8 is the default version run in GKE. You can check which version you are running by using <code>kubectl version</code>.</p>
<p>If you are running < 1.8 then you will need to enable cron jobs as per <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#prerequisites" rel="nofollow noreferrer">the CronJob documentation</a>. I don't use GKE, but looks like you do this at cluster create time:</p>
<pre><code>gcloud alpha container clusters create my-cluster --enable-kubernetes-alpha
</code></pre>
<p>For more info, take a look at <a href="https://cloud.google.com/container-engine/docs/alpha-clusters#about_feature_stages" rel="nofollow noreferrer">About Alpha Features</a></p>
<p>Even with 1.8 it seems to be listed on the <a href="https://cloud.google.com/container-engine/release-notes" rel="nofollow noreferrer">release notes</a> as a new feature:</p>
<blockquote>
<p>You can now run CronJobs on your Container Engine cluster. CronJob is a Beta feature in Kubernetes version 1.8.</p>
</blockquote>
<p>So you might need to run an upgrade.</p>
<p>I also notice that you are using an azure container. Might be worth starting with the example <code>CronJob</code> to see if you can get that to work first.</p>
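<p>For reference, this is roughly the minimal test CronJob from the documentation (busybox, runs every minute) that you can use to verify CronJobs work at all on your cluster before debugging your own:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
</code></pre>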
|
<p>After a few days of testing Azure aks, I find myself in a situation where existing aks instances don't clean up when I delete the parent resource group (or with az aks delete) and I am also unable to create new aks instances. Has anyone encountered the same issue?</p>
<p>Curent state:</p>
<pre><code>rbigeard@ROMAINWORK199A:~|⇒ az aks list -o table
Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
------------- ---------- --------------- ------------------- ------------------- ----------------------------------------------------------
K8Cluster westus2 K8 1.8.1 Failed
K8Cluster2 westus2 K8 1.8.1 Failed
K8test westus2 K8 1.7.7 Failed
K8TestCluster westus2 K8Test 1.7.7 Failed
myK8Cluster westus2 myK8Group 1.7.7 Failed myk8cluste-myk8group-5ec36a-b448f367.hcp.westus2.azmk8s.io
myK8s westus2 myK8Group 1.8.1 Failed
</code></pre>
<p>Creation error in a brand new empty resource group in westus2:</p>
<pre><code>az aks create --name K8TestCluster --resource-group K8Test --agent-count 1 --generate-ssh-keys
Deployment failed. Correlation ID: 27476ee2-fea2-406a-83bd-89de89d7aec1. getAndWaitForManagedClusterProvisioningState error: <nil>
</code></pre>
<p>The version of the cli is (I run it in WSL):</p>
<pre><code>az --version
azure-cli (2.0.20)
acr (2.0.14)
acs (2.0.18)
appservice (0.1.19)
backup (1.0.2)
batch (3.1.6)
batchai (0.1.2)
billing (0.1.6)
cdn (0.0.10)
cloud (2.0.9)
cognitiveservices (0.1.9)
command-modules-nspkg (2.0.1)
component (2.0.8)
configure (2.0.12)
consumption (0.1.6)
container (0.1.12)
core (2.0.20)
cosmosdb (0.1.14)
dla (0.0.13)
dls (0.0.16)
eventgrid (0.1.5)
extension (0.0.5)
feedback (2.0.6)
find (0.2.7)
interactive (0.3.11)
iot (0.1.13)
keyvault (2.0.13)
lab (0.0.12)
monitor (0.0.11)
network (2.0.17)
nspkg (3.0.1)
profile (2.0.15)
rdbms (0.0.8)
redis (0.2.10)
resource (2.0.17)
role (2.0.14)
servicefabric (0.0.5)
sql (2.0.14)
storage (2.0.18)
vm (2.0.17)
Python location '/opt/az/bin/python3'
Extensions directory '/home/rbigeard/.azure/cliextensions'
Python (Linux) 3.6.1 (default, Oct 18 2017, 20:41:18)
[GCC 4.8.4]
Legal docs and information: aka.ms/AzureCliLegal
</code></pre>
| <p>Apologies for the service disruption. There was a provisioning/capacity issue that impacted the regional Kubernetes service which was resolved today. You can view the resolution updates @ <a href="https://github.com/Azure/AKS/issues/2" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/2</a> </p>
<p>Active status on additional known Kubernetes issues are being tracked @ <a href="https://github.com/Azure/AKS/issues" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues</a>.</p>
|
<p>I'm debugging log output from kubectl that states:</p>
<pre><code>Error from server (BadRequest): a container name must be specified for pod postgres-operator-49202276-bjtf4, choose one of: [apiserver postgres-operator]
</code></pre>
<p>OK, so that's an explanatory error message, but looking at my JSON template it ought to just create both containers specified, correct? What am I missing? (please forgive my ignorance.)</p>
<p>I'm using just a standard kubectl create -f command to create the JSON file within a shell script. The JSON deployment file is as follows:</p>
<pre><code>{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "postgres-operator"
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"name": "postgres-operator"
}
},
"spec": {
"containers": [{
"name": "apiserver",
"image": "$CCP_IMAGE_PREFIX/apiserver:$CO_IMAGE_TAG",
"imagePullPolicy": "IfNotPresent",
"env": [{
"name": "DEBUG",
"value": "true"
}],
"volumeMounts": [{
"mountPath": "/config",
"name": "apiserver-conf",
"readOnly": true
}, {
"mountPath": "/operator-conf",
"name": "operator-conf",
"readOnly": true
}]
}, {
"name": "postgres-operator",
"image": "$CCP_IMAGE_PREFIX/postgres-operator:$CO_IMAGE_TAG",
"imagePullPolicy": "IfNotPresent",
"env": [{
"name": "DEBUG",
"value": "true"
}, {
"name": "NAMESPACE",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.namespace"
}
}
}, {
"name": "MY_POD_NAME",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.name"
}
}
}],
"volumeMounts": [{
"mountPath": "/operator-conf",
"name": "operator-conf",
"readOnly": true
}]
}],
"volumes": [{
"name": "operator-conf",
"configMap": {
"name": "operator-conf"
}
}, {
"name": "apiserver-conf",
"configMap": {
"name": "apiserver-conf"
}
}]
}
}
}
}
</code></pre>
| <p>If a pod has more than one container, then you need to provide the name of the specific container.</p>
<p>In your case, there is a pod (postgres-operator-49202276-bjtf4) which has 2 containers (apiserver and postgres-operator).
The following commands will provide logs for the specific containers:</p>
<pre><code>kubectl logs deployment/postgres-operator -c apiserver
kubectl logs deployment/postgres-operator -c postgres-operator
</code></pre>
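<p>If you are not sure which containers a pod contains, you can list them, for example:</p>
<pre><code>kubectl get pod postgres-operator-49202276-bjtf4 -o jsonpath='{.spec.containers[*].name}'
</code></pre>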
|
<p>I'm extremely new to Kubernetes (besides, it's not my field), but I'm required to be able to carry out this exercise.</p>
<p>Question is that I need a Handbrake Converter in a containerized pod with a Persistent Volume mounted on a GKE cluster:</p>
<ul>
<li>3 nodes.</li>
<li>node version 1.8.1-gke.1</li>
<li>node image Ubuntu</li>
</ul>
<p>Everything is fine until this point but now I'm not able to upload a folder to that PV from my local machine.</p>
<p>What I have tried is a ssh connection to the node and then a <code>sudo docker exec -ti containerId bash</code> but I just got <code>rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"bash\\\": executable file not found in $PATH\"\n"</code>.</p>
<p>Thanks in advance.</p>
| <p>To transfer local files to a kubernetes pod, use <code>kubectl cp</code>:</p>
<pre><code>kubectl cp /root/my-local-file my-pod:/root/remote-filename
</code></pre>
<p>or</p>
<pre><code>kubectl cp /root/my-local-file my-namepace/my-pod:/root/remote-filename -c my-container
</code></pre>
<p>The namespace can be omitted (and you'll get the default), and the container can be omitted (you'll get the first in the pod).</p>
<p>For SSH'ing you need to go through kubectl as well:</p>
<pre><code>kubectl exec -it <podname> -- /bin/sh
</code></pre>
|
<p>I have a nodeJS API which uses a MongoDB. I deploy the application in a Kubernetes cluster.
Here you can find the Kubernetes yml files: <a href="https://github.com/daumann/chronas-api/tree/azure/kuberneties" rel="nofollow noreferrer">https://github.com/daumann/chronas-api/tree/azure/kuberneties</a></p>
<p>Now I want to use the Azure Cosmos DB for MongoDB instead of a container:
<a href="https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction</a></p>
<p>Can someone help me with how I can do that?
It would be great to do this using only the yml files from the kuberneties folder.</p>
<p>Cheers</p>
| <p>Assuming you already launched your cosmos-db through Azure, you will need to use the generated connection string that you can pass to your application as a secret (since it contains a password). The connection string is of the format:</p>
<pre><code>mongodb://username:password@host:port/[database]?ssl=true
</code></pre>
<p>To create a secret (assuming you paste your connection string into the <code>connstring.txt</code> file:</p>
<pre><code>kubectl create secret generic cosmos-db-secret --from-file=./connstring.txt
</code></pre>
<p>Then in your application's deployment definition add the following (note that <code>--from-file</code> creates a key named after the file, so here the key is <code>connstring.txt</code>):</p>
<pre><code>env:
  - name: MONGO_HOST
    valueFrom:
      secretKeyRef:
        name: cosmos-db-secret
        key: connstring.txt
</code></pre>
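<p>To double-check the secret and its key names you can inspect it:</p>
<pre><code>kubectl describe secret cosmos-db-secret
kubectl get secret cosmos-db-secret -o yaml
</code></pre>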
|
<p>After a few days of testing Azure aks, I find myself in a situation where existing aks instances don't clean up when I delete the parent resource group (or with az aks delete) and I am also unable to create new aks instances. Has anyone encountered the same issue?</p>
<p>Curent state:</p>
<pre><code>rbigeard@ROMAINWORK199A:~|⇒ az aks list -o table
Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
------------- ---------- --------------- ------------------- ------------------- ----------------------------------------------------------
K8Cluster westus2 K8 1.8.1 Failed
K8Cluster2 westus2 K8 1.8.1 Failed
K8test westus2 K8 1.7.7 Failed
K8TestCluster westus2 K8Test 1.7.7 Failed
myK8Cluster westus2 myK8Group 1.7.7 Failed myk8cluste-myk8group-5ec36a-b448f367.hcp.westus2.azmk8s.io
myK8s westus2 myK8Group 1.8.1 Failed
</code></pre>
<p>Creation error in a brand new empty resource group in westus2:</p>
<pre><code>az aks create --name K8TestCluster --resource-group K8Test --agent-count 1 --generate-ssh-keys
Deployment failed. Correlation ID: 27476ee2-fea2-406a-83bd-89de89d7aec1. getAndWaitForManagedClusterProvisioningState error: <nil>
</code></pre>
<p>The version of the cli is (I run it in WSL):</p>
<pre><code>az --version
azure-cli (2.0.20)
acr (2.0.14)
acs (2.0.18)
appservice (0.1.19)
backup (1.0.2)
batch (3.1.6)
batchai (0.1.2)
billing (0.1.6)
cdn (0.0.10)
cloud (2.0.9)
cognitiveservices (0.1.9)
command-modules-nspkg (2.0.1)
component (2.0.8)
configure (2.0.12)
consumption (0.1.6)
container (0.1.12)
core (2.0.20)
cosmosdb (0.1.14)
dla (0.0.13)
dls (0.0.16)
eventgrid (0.1.5)
extension (0.0.5)
feedback (2.0.6)
find (0.2.7)
interactive (0.3.11)
iot (0.1.13)
keyvault (2.0.13)
lab (0.0.12)
monitor (0.0.11)
network (2.0.17)
nspkg (3.0.1)
profile (2.0.15)
rdbms (0.0.8)
redis (0.2.10)
resource (2.0.17)
role (2.0.14)
servicefabric (0.0.5)
sql (2.0.14)
storage (2.0.18)
vm (2.0.17)
Python location '/opt/az/bin/python3'
Extensions directory '/home/rbigeard/.azure/cliextensions'
Python (Linux) 3.6.1 (default, Oct 18 2017, 20:41:18)
[GCC 4.8.4]
Legal docs and information: aka.ms/AzureCliLegal
</code></pre>
| <p>AKS cluster creation in West US 2 is working again now.</p>
<p><a href="https://i.stack.imgur.com/VsA1z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VsA1z.png" alt="enter image description here"></a></p>
|
<p>I am interested in the behaviour of multi-master Kubernetes in the event of different types of failure, particularly if the masters are on different racks.</p>
<ul>
<li><p>Scenario:</p>
<ul>
<li><p>2 racks, R1, R2.</p></li>
<li><p>API Masters:</p>
<ul>
<li>M1 on R1, M2 on R2.</li>
</ul></li>
<li><p>Worker nodes:</p>
<ul>
<li>W1 on R1, W2 on R2.</li>
</ul></li>
<li><p>Etcd:</p>
<ul>
<li>A completely separate HA Etcd cluster comprising 3 nodes (i.e. it's not running on the API Master nodes).</li>
</ul></li>
</ul></li>
</ul>
<p>My failure questions are basically around split brain scenarios:</p>
<p>What happens if M1 is the active master and R1 loses connection with Etcd and R2, but R2/M2 has connectivity to Etcd? i.e. what specifically causes a leadership election?</p>
<p>If there is a Pod P1 on R1/W1, M1 is the active master and R1 becomes disconnected from R2 and Etcd, what happens? Does P1 keep going, or is it killed? Does M2 start a separate instance of P (P2) on R2? If so, can P1 & P2 both be running at the same time?</p>
<p>If there is a Pod P2 on R2/W2 and M1 is the active master (i.e. pod is on separate rack to the master) and R1 loses connection to R2 and Etcd, what happens to P2? Does it keep going and M2 takes over?</p>
| <p>The master holds a lease in etcd. If the lease expires, the active master exits its process (expecting to be restarted). The other master would observe the lease expiring and attempt to acquire it in etcd. As long as M2 can reach etcd and etcd has a quorum, the second master would then take over.</p>
<p>As far as competing Masters go, in general Kubernetes still uses etcd to perform consistent updates - ie even two Masters active at the same time are still contending to do the same thing to etcd, which has strong consistency, and so the usual outcome is just failed updates. One example where that is not the case is daemonsets and ReplicaSets - two active Masters may create multiples of pods, and then scale them down when they realize there are too many per node or compared to the desired scale. But since neither daemonsets nor ReplicaSets guarantee that behavior anyway (ReplicaSets can have > scale pods running at any time, daemonsets can have two pods per node briefly) it's not broken per se.</p>
<p>If you need at-most-X pods behavior, only StatefulSets provide that guarantee today.</p>
|
<p>Currently, I am migrating one of our microservices from a K8S Deployment to a StatefulSet.
While updating the Kubernetes deployment config I noticed that StatefulSets don't support <code>revisionHistoryLimit</code> and <code>minReadySeconds</code>:</p>
<ol>
<li><code>revisionHistoryLimit</code> is used to keep the previous N ReplicaSets for rollback.</li>
<li><code>minReadySeconds</code> is the number of seconds a pod should be ready without any of its containers crashing.</li>
</ol>
<p>I couldn't find any compatible settings for <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer" title="StatefulSets doc">StatefulSets</a>.</p>
<p>So my questions are:</p>
<ol>
<li>How long will the master wait to consider a stateful Pod ready?</li>
<li>How do I handle rollback of a stateful application?</li>
</ol>
| <ol>
<li>You should define a readiness probe, and the master will wait for it to report the pod as Ready (a minimal sketch follows below the list).</li>
<li>StatefulSets currently do not support rollbacks.</li>
</ol>
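<p>For point 1, a rough readiness probe sketch for the StatefulSet's pod template; the image, path, port and timings are placeholders for your own service's health endpoint:</p>
<pre><code>containers:
- name: my-service
  image: my-registry/my-service:1.0.0
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5
</code></pre>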
|
<p>I’m having really hard time trying to construct workflow with k8s that would include:</p>
<ul>
<li>Having <strong>monorepo</strong> for multiple microservices</li>
<li>Having <strong>single command to start</strong> all of them and being able to start local development</li>
<li>Having <code>docker-like</code> experience of installing entire infrastructure on another machine that has no k8s installed on it (for local development) 1. <code>git pull</code> 2. <code>k8s start</code>, 3. wait, 4. <code>ping localhost:3000</code> would be goal here.</li>
<li>Being able to have <strong>changes in my local files instantly applied</strong> to services without rebuilding images etc (something similar to docker volumes I guess)</li>
<li>Having <strong>modular config</strong> files where there is one root config file for infrastructure that is referencing to services smaller configs</li>
</ul>
<p>I was looking hard for some example or guide about constructing such system without luck.</p>
<p>Am I missing something important about k8s design that makes me look for something not really possible with k8s?</p>
<p><strong>Why I think such question should not be closed</strong></p>
<ul>
<li><p>There are many developers without dev-ops experience trying their best with microservices and I've found lack of some solid guide about such (and very common) use case</p>
</li>
<li><p>There is no clear guide about smooth local development experience with rapid feedback loop when it comes to k8s.</p>
</li>
<li><p>While it's opinion based, I find this question being more focused on general directions that would lead to such developer experience, rather than exact steps.</p>
<p>I'm not even sure (and I was trying to find out) if it's considered good practice for professional dev-ops. I have no idea how big infrastructures (tens or hundreds of microservices) are managed. Is it possible to run them all on single machine? Is it desired?</p>
</li>
</ul>
| <p>I built something similar to what you're asking before. I ran <code>hyperkube</code> manually, which is hardly recommended but did the trick for local development. In my case this was all running in Vagrant for team uniformity.</p>
<pre><code>docker run -d --name=kubelet \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:slave \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged \
--restart=always \
gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override=127.0.0.1 \
--api-servers=http://localhost:8080 \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged --v=2 \
--image-gc-high-threshold=50 \
--image-gc-low-threshold=40 \
--kube-api-qps 1000 --kube-api-burst=2000 \
--pod-manifest-path=/etc/kubernetes/manifests
</code></pre>
<p>On top of this, I had build scripts that would use YAML <a href="https://mustache.github.io/" rel="nofollow noreferrer">mustache</a> templates that were aware where this was being deployed. When this was being deployed locally, every pod had the source code mounted as a volume so I could auto-reload it.</p>
<p>The same scripts were able to deploy to production thanks to it all being based on mustache templates. I even had multiple configuration files that would apply different template values for different environments.</p>
<p>The build script would prepare the YAML templates, build whatever images it needs to build, apply to Kubernetes and from there it would just auto-reload. It was a semi-nice user experience. My main issue was sluggishness when it came to file updating because it was running inside Docker inside Vagrant. There was no file sharing type that would provide good performance for both client and server <strong>and</strong> allow for file watching (<code>inotify</code> didn't work with most file share types, and NFS/SMB was slow for IDEs).</p>
<p>It was my first Kubernetes experience, so I doubt it's the "recommended way", but it worked. There was a lot of scripting involved so there are probably better ways to do this today.</p>
|
<p>I was trying out <code>Kubernetes</code> using <code>minikube</code> on a <code>VM</code> host, using <code>KVM</code> as the <code>VM driver</code>. But it looks like I cannot use KVM on a VM host, as nested virtualization is not enabled there.</p>
<pre><code> $ egrep -c '(vmx|svm)' /proc/cpuinfo
0
</code></pre>
<p><code>minikube start</code> is failing with the below error </p>
<pre><code>$ minikube start --vm-driver=kvm
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Downloading Minikube ISO
140.01 MB / 140.01 MB [============================================]
100.00% 0s
E1108 02:38:25.792900 17062 start.go:150] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: virError(Code=8, Domain=44, Message='invalid argument: could not find capabilities for domaintype=kvm ').
</code></pre>
<p>Any suggestions how to proceed on a <code>VM host</code></p>
| <p>My advice would be to use <code>--vm-driver=none</code>. This will install all minikube binaries on the host machine. While not all features are available, it should be enough for doing some testing.</p>
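<p>A minimal sketch (the <code>none</code> driver has to run as root, and Docker must already be installed on the host):</p>
<pre><code>sudo minikube start --vm-driver=none
</code></pre>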
|
<p>I want to set up my web page to make HTTP2 requests to a Jetty API server. I read that browsers will only use the "h2" protocol, that is, HTTP2 with TLS. However, my setup has a kubernetes ingress performing SSL termination, and proxying a cleartext request back to the Jetty server. The dilemma is that I don't think I want to negotiate an "h2" connection using Jetty, because that would require an SSL context on that server.</p>
<p>My question is, will this setup allow a browser to perform HTTP2 requests? If so, what do I need to enable on the Jetty server in order to properly serve HTTP2 requests?</p>
| <p>You can configure Jetty to serve clear-text HTTP/2 (also known as <code>h2c</code>), so that your setup will be:</p>
<p><code>browser -- h2 --> kubernetes tls termination -- h2c --> Jetty</code></p>
<p>In order to setup Jetty with clear-text HTTP/2, you just need to enable the <code>http2c</code> module if you are using Jetty as a standalone server, see <a href="http://www.eclipse.org/jetty/documentation/current/http2-enabling.html" rel="nofollow noreferrer">http://www.eclipse.org/jetty/documentation/current/http2-enabling.html</a>.</p>
<p>Alternatively, if you're using Jetty embedded you can look at <a href="https://github.com/eclipse/jetty.project/blob/jetty-9.4.7.v20170914/jetty-http2/http2-server/src/test/java/org/eclipse/jetty/http2/server/HTTP2CServer.java" rel="nofollow noreferrer">this example</a>.</p>
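<p>For the standalone case, enabling the module is roughly the following (Jetty 9.x <code>start.jar</code> module syntax, run from your <code>$JETTY_BASE</code>):</p>
<pre><code>cd $JETTY_BASE
java -jar $JETTY_HOME/start.jar --add-to-start=http2c
</code></pre>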
|
<p>I have a GKE service with a load balancer, but I only want to use it internally from my other services, i.e. <strong>I want no public IP to be assigned to it</strong>.</p>
<p>Is it possible without a private VPN and juggling firewall settings?</p>
<p>All other load-balancing features (like <code>kube-dns</code>) work great, and the services within my Container Engine cluster do not need a public IP.</p>
<p>All nodes live in same region and zone so I do not need and do not care about multi-regional features</p>
| <p>GKE supports <a href="https://cloud.google.com/container-engine/docs/internal-load-balancing" rel="noreferrer">Internal Load Balancing</a> now:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: [SERVICE-NAME]
annotations:
cloud.google.com/load-balancer-type: "Internal"
labels:
app: echo
spec:
type: LoadBalancer
loadBalancerIP: [IP-ADDRESS]
ports:
- port: 9000
protocol: TCP
selector:
[KEY]: [VALUE]
</code></pre>
<p>Notice the annotation: <code>cloud.google.com/load-balancer-type: "Internal"</code>.
This will create a LoadBalancer Service with a private IP-Address routable from within your VPC.</p>
<p>Bitnami also has a great blog post on this:
<a href="https://engineering.bitnami.com/articles/creating-private-kubernetes-clusters-on-gke.html" rel="noreferrer">creating-private-kubernetes-clusters-on-gke</a>.</p>
|
<p>In several places on the Kubernetes documentation site they recommend that you store your configuration YAML files inside source control for easy version-tracking, rollback, and deployment. </p>
<p>My colleagues and I are currently in the process of trying to decide on the structure of our git repository. </p>
<ul>
<li>We have decided that since configuration can change without any changes to the app code, that we would like to store configurations in a separate, shared repository.</li>
<li>We may need multiple versions of some components running side-by-side within a given environment (cluster). These versions may have different configurations.</li>
</ul>
<p>There seem to be a lot of potential variations, and all of them have shortcomings. What is the accepted way to structure such a repository?</p>
| <p>There is no established standard yet, I believe. I find helm's charts too complicated to start with, especially having another unmanaged component running on the k8s cluster. This is a workflow that we follow that works quite well for a setup of 15ish microservices, and 5 different environments (devx2, staging, qa, prod).</p>
<p>The 2 key ideas:</p>
<ol>
<li>Store kubernetes configurations in the same source repo that has the
other build tooling. Eg: alongside the microservice source code which has the tooling for building/releasing that particular microservice. </li>
<li>Template the kubernetes configuration with something like jinja and render the templates according to the environment you're targeting (a rough sketch follows below the list).</li>
</ol>
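<p>Purely as an illustration (all names made up), a templated deployment might look like this, with the placeholders filled in per environment by whatever renders the templates:</p>
<pre><code># deployment.yaml.j2 - rendered once per target environment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: {{ replicas }}
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: my-registry/my-service:{{ image_tag }}
        env:
        - name: ENVIRONMENT
          value: "{{ environment }}"
</code></pre>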
<p>The tooling is reasonably straightforward to figure out by putting together a few bash scripts or integrating with a Makefile etc.</p>
<p>EDIT: to answer some of the questions in the comment</p>
<p>The application source code repository is used as the single source of truth. So that means that if everything works as it should, changes should never be moved from the kubernetes cluster to the repository.</p>
<p>Changes directly on the server are prohibited in our workflow. If it ever does happen, we have to manually make sure they enter the application repository again.</p>
<p>Again, just want to note that the configurations stored in the source code are actually templates and use <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer"><code>secretKeyRef</code></a> quite liberally. This means that some configurations are coming in from the CI tooling as they are rendered and some are coming in from secrets that live only on the cluster (like database passwords, API tokens etc.).</p>
|
<p>Kubernetes kubelets can be run with a specific set of options (<a href="https://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/kubelet/</a>). Is there a way to see, through kubectl or similar way, the options that kubelet was run with? </p>
<p>I basically want to know if <code>--allow-privileged</code> was passed in, but see no way of checking that.</p>
| <p>Use <code>ps x | grep kubelet</code> or <code>cat /proc/$(pidof kubelet)/cmdline</code> to get commandline.</p>
<p>If kubelet was installed by <code>apt</code> or <code>yum</code>, it usually runs as a systemd service.</p>
<p>Take a look at the files in the <code>/etc/systemd/system/kubelet.service.d/</code> folder, which contain the arguments kubelet runs with.</p>
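<p>To check specifically for <code>--allow-privileged</code>, you can split the null-separated cmdline into one argument per line and grep for the flag:</p>
<pre><code>cat /proc/$(pidof kubelet)/cmdline | tr '\0' '\n' | grep allow-privileged
</code></pre>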
|
<p>Follow this guide to install Kubernetes:</p>
<p><a href="https://www.linuxtechi.com/install-kubernetes-1-7-centos7-rhel7/" rel="nofollow noreferrer">https://www.linuxtechi.com/install-kubernetes-1-7-centos7-rhel7/</a></p>
<p>When I got to the <code>kubeadm init</code> step, I got this error:</p>
<pre><code>$ kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by that:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- There is no internet connection; so the kubelet can't pull the following control plane images:
- gcr.io/google_containers/kube-apiserver-amd64:v1.8.3
- gcr.io/google_containers/kube-controller-manager-amd64:v1.8.3
- gcr.io/google_containers/kube-scheduler-amd64:v1.8.3
You can troubleshoot this for example with the following commands if you're on a systemd-powered system:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
</code></pre>
<p>When checking <code>systemctl status kubelet</code>:</p>
<pre><code>● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Fri 2017-11-10 05:34:12 UTC; 6s ago
Docs: http://kubernetes.io/docs/
Process: 29927 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 29927 (code=exited, status=1/FAILURE)
Nov 10 05:34:12 master systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Nov 10 05:34:12 master systemd[1]: Unit kubelet.service entered failed state.
Nov 10 05:34:12 master systemd[1]: kubelet.service failed.
</code></pre>
<p>When checking <code>journalctl -xeu kubelet</code>:</p>
<pre><code>Nov 10 05:35:15 master systemd[1]: kubelet.service holdoff time over, scheduling restart.
Nov 10 05:35:15 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Nov 10 05:35:15 master systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.364837 30174 feature_gate.go:156] feature gates: map[]
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.364917 30174 controller.go:114] kubelet config controller: starting controller
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.364921 30174 controller.go:118] kubelet config controller: validating combination of defaults and flags
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.375149 30174 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.375226 30174 client.go:95] Start docker client with request timeout=2m0s
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.377200 30174 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.382890 30174 feature_gate.go:156] feature gates: map[]
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.383011 30174 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.408678 30174 certificate_manager.go:361] Requesting new certificate.
Nov 10 05:35:15 master kubelet[30174]: E1110 05:35:15.409287 30174 certificate_manager.go:284] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.0.2.15:6443/apis/certificates.k8s.io/v1beta1/certifica
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.411480 30174 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.425796 30174 manager.go:157] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.426006 30174 manager.go:166] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: no such file or directory
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.440364 30174 fs.go:139] Filesystem UUIDs: map[4537d533-47ff-463c-bffc-7ce294d9c93a:/dev/dm-1 598bbfb9-027e-4f52-a5b3-c4d3d1fbc2b8:/dev/dm-0 8ffa0ee9-e1a8-4c03-acce-b65b342c6935:/dev/sda2]
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.440395 30174 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:17 fsType:tmpfs blockSize:0} /dev/mapper/VolGroup00-LogVol00:{mountpoint:/var/lib/docker/overlay major:253 minor:0 fsType:xf
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.441589 30174 manager.go:216] Machine: {NumCores:1 CpuFrequency:3100000 MemoryCapacity:1040621568 HugePages:[{PageSize:2048 NumPages:0}] MachineID:a0b78b0170c248288e172d5196d59063 SystemUUID:A0B78B01-70C2-4828-8E17-2D
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.446544 30174 manager.go:222] Version: {KernelVersion:3.10.0-693.5.2.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:17.09.0-ce DockerAPIVersion:1.32 CadvisorVersion: CadvisorRevision:}
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.447201 30174 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.451260 30174 container_manager_linux.go:252] container manager verified user specified cgroup-root exists: /
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.451293 30174 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.451403 30174 container_manager_linux.go:288] Creating device plugin handler: false
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.451616 30174 kubelet.go:273] Adding manifest file: /etc/kubernetes/manifests
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.451710 30174 kubelet.go:283] Watching apiserver
Nov 10 05:35:15 master kubelet[30174]: E1110 05:35:15.480061 30174 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.0.2.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&resourceVersion=0: dial tcp 10.0.2.15
Nov 10 05:35:15 master kubelet[30174]: E1110 05:35:15.500829 30174 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.0.2.15:6443/api/v1/services?resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection r
Nov 10 05:35:15 master kubelet[30174]: E1110 05:35:15.500917 30174 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.0.2.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&resourceVersion=0: dial tcp 10.
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.541334 30174 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.541369 30174 kubelet.go:517] Hairpin mode set to "hairpin-veth"
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.541616 30174 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.548689 30174 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.553143 30174 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.553164 30174 docker_service.go:207] Docker cri networking managed by cni
Nov 10 05:35:15 master kubelet[30174]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
Nov 10 05:35:15 master systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Nov 10 05:35:15 master systemd[1]: Unit kubelet.service entered failed state.
Nov 10 05:35:15 master systemd[1]: kubelet.service failed.
</code></pre>
| <p>The key point in the logs is <code>misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"</code>.</p>
<blockquote>
<p>Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. </p>
<p>To ensure compatibility you can either update Docker, or ensure the <code>--cgroup-driver</code> kubelet flag is set to the same value as Docker (e.g. cgroupfs)</p>
<p>-- <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-docker" rel="nofollow noreferrer">Installing kubeadm</a></p>
</blockquote>
<h1>Either update Docker to use <code>systemd</code></h1>
<pre><code>cat << EOF > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
</code></pre>
<p>And restart the Docker service.</p>
<h1>Or update kubelet to use <code>cgroupfs</code></h1>
<pre><code>sed -i -E 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
</code></pre>
<p>Then reload the systemd units and restart the kubelet: <code>systemctl daemon-reload && systemctl restart kubelet.service</code>.</p>
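<p>A quick sanity check before picking one of the two fixes above is to confirm which driver each side is currently using (a sketch; paths may differ per distro):</p>
<pre><code># what Docker uses
docker info 2>/dev/null | grep -i "cgroup driver"
# what the kubelet is started with
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
</code></pre>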
|
<p>I'm new to java and k8, and I have some doubts about how to handle application configurations for my java apps. I've got one spring boot app and the other three use wildfly.</p>
<p>So, they all have hardcoded application configurations, and when starting them they just use something like:</p>
<pre><code>java -Dswarm.project.stage=development -jar foobar/target/foobar-swarm.jar
</code></pre>
<p>except for the spring boot which has an <strong>application.properties</strong> file that consists of application configuration data.</p>
<p>So basically the three java apps have baked in two files (which I know is a no-no):</p>
<pre><code> - project-stages.yml
- standalone.xml
</code></pre>
<p>And when the developer wants to deploy to production he uses:</p>
<pre><code> java -Dswarm.project.stage=production -jar foobar/target/foobar-swarm.jar
</code></pre>
<p>And, now we come to kubernetes which has three ways of dealing with application configuration data:</p>
<pre><code>1.) Env variables
2.) Config maps
3.) Secrets
</code></pre>
<p>I was thinking of using <strong>configmaps</strong> instead of env variables because they have more <a href="https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b" rel="nofollow noreferrer">benefits</a>.</p>
<p>So, the developer gave me the possibility of overwriting those hardcoded variables with an external file : <strong>Dsystem.properties.file=/var/foobar/environment.properties</strong></p>
<p>But I'm still overwriting an hardcoded files with an external file, and I'm not happy with that solution!</p>
<p>So, I'm basically looking for advice: can those hardcoded files be supplied externally and populated with configmaps in k8s, and what would be the best practice for handling config files in the world of k8s?</p>
<p>Tnx,
Tom</p>
| <p>There are several questions in the post, but I can address only the one related to spring-boot. </p>
<p>The simplest and most convenient way of specifying configuration for a Spring Boot app is via its built-in profile feature. As you already mentioned, you have <code>application.properties</code>. You can create similar files for your use cases: <code>application-production.properties</code>, <code>application-staging.properties</code>, <code>application-k8s.properties</code>, etc.
Kubernetes deployment doesn't change this in any way.
You can control which configuration to pick by setting the <code>SPRING_PROFILES_ACTIVE</code> env variable from Kubernetes.</p>
<p>You might have something like this:</p>
<pre><code>docker run -e SPRING_PROFILES_ACTIVE=k8s -d -p 0.0.0.0:8080:8080 \
--name=yourapp your_image_name bash -c "java -jar yourapp.jar"
</code></pre>
<p>It will pick configuration from <code>application-k8s.properties</code>.</p>
<p>Configuration files support environment variables as well.
You can have placeholders like <code>${YOUR_DB}</code> in your properties files and Spring will automatically pick up the environment variable named <code>YOUR_DB</code>. This is convenient, for example, when your app pod must have its own db pod.</p>
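<p>In a Kubernetes Deployment the same idea applies; a minimal sketch (names and values are placeholders) that selects the profile and feeds such a placeholder through the container's <code>env</code> section:</p>
<pre><code>containers:
- name: yourapp
  image: your_image_name
  env:
  - name: SPRING_PROFILES_ACTIVE
    value: "k8s"
  - name: YOUR_DB
    value: "jdbc:mysql://mysql:3306/appdb"
</code></pre>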
|
<p>I am trying to use the kubectl run command to pull an image from a private registry and run a command from it. But I don't see an option to specify an image pull secret. It looks like it is not possible to pass an image pull secret as part of the run command. </p>
<p>Is there any alternate option to pull a container and run a command using kubectl? The command output should be seen on the console. Also once the command finishes the pod should die.</p>
| <p>You can use the overrides if you specify it right, it's an array in the end, that took me a bit to figure out, the below works on Kubernetes of at least 1.6:</p>
<p><code>--overrides='{ "spec": { "template": { "spec": { "imagePullSecrets": [{"name": "your-registry-secret"}] } } } }'</code></p>
<p>for example</p>
<pre><code>kubectl run -i -t hello-world --rm --generator=run-pod/v1 \
--image=eu.gcr.io/your-registry/hello-world \
--image-pull-policy="IfNotPresent" \
--overrides='{ "spec": { "template": { "spec": { "imagePullSecrets": [{"name": "your-registry-secret"}] } } } }'
</code></pre>
|
<p>I've created an AKS cluster in the UK region in Azure.</p>
<p>Currently, I can no longer access my AKS cluster. Connecting to the public IPs fails; all connections time out.</p>
<p>Furthermore, I can't run the <code>kubectl</code> command either:</p>
<p><code>fcarlier@ubuntu:~$ kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout
</code></p>
<p>Is there a known issue with AKS in that region or is it something on my side?</p>
| <blockquote>
<p>Is there a known issue with AKS in that region or is it something on
my side?</p>
</blockquote>
<p>Sorry you are having a bad experience.<br>
For now, Azure AKS is still in preview; please try to <strong>recreate</strong> the cluster, ukwest works fine now.</p>
<p>Here is a similar <a href="https://github.com/Azure/AKS/issues/14" rel="nofollow noreferrer">case</a> to yours, please refer to it.</p>
|
<p>I am trying to create a Deployment from <code>client-go</code>, but it is not created and the following error is thrown:</p>
<pre><code>the server could not find the requested resource
</code></pre>
<p><em>My <code>client-go</code> version:</em> 4.0.0</p>
<p><em>My Kubernetes version is:</em></p>
<p><strong>Client Version:</strong></p>
<pre><code>version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>Server Version:</strong> </p>
<pre><code>version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:34:56Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>My sample code is </p>
<pre><code>package main
import (
"fmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
intstr "k8s.io/apimachinery/pkg/util/intstr"
kube "k8s.io/client-go/kubernetes"
v1 "k8s.io/client-go/pkg/api/v1"
appsv1beta1 "k8s.io/client-go/pkg/apis/apps/v1beta1"
rest "k8s.io/client-go/rest"
"net"
)
var (
KubeMasterIP = "x.x.x.x"
Port = "xxxx"
UserName = "xxxxx"
Password = "xxxxxxxxxx"
TLSValue = true
Protocol = "https"
NameSpaceName = "test-namespace"
)
func main() {
fmt.Println("***************************")
buildAndDeployApp()
fmt.Println("***************************")
}
func buildAndDeployApp() {
tlsClientConfig := rest.TLSClientConfig{}
tlsClientConfig.Insecure = TLSValue
fmt.Println("HostPath: ", net.JoinHostPort(KubeMasterIP, Port))
restConfig := &rest.Config{
Host: net.JoinHostPort(KubeMasterIP, Port),
Username: UserName,
Password: Password,
TLSClientConfig: tlsClientConfig,
}
cSet, err := kube.NewForConfig(restConfig)
if err != nil {
fmt.Println("Error in Kube clientSet : ", err.Error())
}
deploy, err := cSet.AppsV1beta1().Deployments(NameSpaceName).Create(BuildDeployment())
fmt.Println("Deploy Output: ", deploy)
fmt.Println("Error: ", err)
}
func int32Ptr(i int32) *int32 { return &i }
func BuildDeployment() *appsv1beta1.Deployment {
return &appsv1beta1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "test-deploy",
},
Spec: appsv1beta1.DeploymentSpec{
Replicas: int32Ptr(1),
Strategy: appsv1beta1.DeploymentStrategy{
Type: "RollingUpdate",
RollingUpdate: &appsv1beta1.RollingUpdateDeployment{
MaxSurge: &intstr.IntOrString{
IntVal: 1,
},
MaxUnavailable: &intstr.IntOrString{
IntVal: 1,
},
},
},
MinReadySeconds: int32(5),
RevisionHistoryLimit: int32Ptr(5),
Template: v1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"app": "demo",
},
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Name: "web",
Image: "nginx:1.13",
Ports: []v1.ContainerPort{
{
Name: "http",
Protocol: v1.ProtocolTCP,
ContainerPort: 80,
},
},
ImagePullPolicy: "Always",
},
},
RestartPolicy: "Always",
},
},
},
}
}
</code></pre>
| <p>apps/v1beta1 will not work against a 1.5 server; it's supported from 1.6 onwards. I used extensions/v1beta1 to access deployments in 1.5.</p>
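<p>A rough sketch of what the switch might look like (assuming the client-go 4.x package layout; reuse the rest of the code from the question):</p>
<pre><code>import extv1beta1 "k8s.io/client-go/pkg/apis/extensions/v1beta1"

// Build the same deployment using the extensions/v1beta1 types.
func BuildExtDeployment() *extv1beta1.Deployment {
    return &extv1beta1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-deploy"},
        Spec: extv1beta1.DeploymentSpec{
            Replicas: int32Ptr(1),
            Template: v1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "demo"}},
                Spec: v1.PodSpec{
                    Containers: []v1.Container{{Name: "web", Image: "nginx:1.13"}},
                },
            },
        },
    }
}

// ...and create it with the matching client:
deploy, err := cSet.ExtensionsV1beta1().Deployments(NameSpaceName).Create(BuildExtDeployment())
</code></pre>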
|
<p>I'm trying to figure out what the import/export best practices are for Keycloak (version 3.3.0.CR1) on K8S. Here is the keycloak official <a href="http://www.keycloak.org/docs/2.0/server_admin_guide/topics/export-import.html" rel="nofollow noreferrer">import/export</a> explanation, with their example of exporting to a single json file. Going to the /keycloak/bin folder and then running this:</p>
<pre><code>./standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=keycloak-export.json
</code></pre>
<p>I logged in to the pod, and I get errors after running this command:</p>
<pre><code>12:23:32,045 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("core-service" => "management"),
("management-interface" => "http-interface")
]) - failure description: {
"WFLYCTL0080: Failed services" => {"org.wildfly.management.http.extensible" => "java.net.BindException: Address already in use /127.0.0.1:9990"},
"WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
"Services that were unable to start:" => ["org.wildfly.management.http.extensible.shutdown"],
"Services that may be the cause:" => ["jboss.remoting.remotingConnectorInfoService.http-remoting-connector"]
}
}
</code></pre>
<p>As I see it, the Keycloak server runs on the same port where I ran the backup script. Here is the helm/keycloak values.yml:</p>
<pre><code>Service:
Name: keycloak
Port: 8080
Type: ClusterIP
Deployment:
Image: jboss/keycloak
ImageTag: 2.5.1.Final
ImagePullPolicy: IfNotPresent
ContainerPort: 8080
KeycloakUser: Admin
KeycloakPassword: Admin
</code></pre>
<p>So, should the server be stopped before running these scripts? I can't stop the keycloak process inside the pod, because the ingress will close the pod and a new one will be created.
Any suggestions for another way to export/import (backup/restore) the data? Or am I missing something?</p>
<p>P.S.
I even tried the UI import/export. Export works fine, and I see all the data. But import only worked halfway: it brought over all the "Clients", but not my "Realm" and "User Federation".</p>
| <p>Basically, you just have to start the exporting Keycloak instance on ports that are different from your main instance. I used something like this just now:</p>
<p><code>bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=keycloak-export.json -Djboss.http.port=8888 -Djboss.https.port=9999 -Djboss.management.http.port=7777</code></p>
<p>The important part is all the ports. If you get more error messages, you might need to add more properties (<code>grep port standalone/configuration/standalone.xml</code> is your friend for finding out property names), but in the end, all error messages stop and you see this message instead:</p>
<p><code>
09:15:26,550 INFO [org.keycloak.exportimport.singlefile.SingleFileExportProvider] (ServerService Thread Pool -- 52) Exporting model into file /opt/jboss/keycloak/keycloak-export.json
[...]
09:15:29,565 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: Keycloak 3.2.0.Final (WildFly Core 2.0.10.Final) started in 12156ms - Started 444 of 818 services (558 services are lazy, passive or on-demand)
</code></p>
<p>Now you can stop the server with <kbd>Ctrl</kbd>-<kbd>C</kbd>, exit the container and copy the export file away with <code>kubectl cp</code>.</p>
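<p>For example (namespace and pod name are placeholders):</p>
<pre><code>kubectl cp my-namespace/keycloak-0:/opt/jboss/keycloak/keycloak-export.json ./keycloak-export.json
</code></pre>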
|
<p>I am developing an <strong>ETL</strong> batch application using spring batch. My ETL process takes data from one pagination based REST API and loads it to the Google Big-query. I would like to deploy this batch application in kubernetes cluster and want to exploit pod scalability feature. I understand spring batch supports both horizontal and vertical scaling. I have few questions:-</p>
<p>1) How to deploy this ETL app on kubernetes so that it creates pod on demand using remote chunking / remote partitioning?</p>
<p>2) I am assuming there would be main master pod and different slave pods provisioned based on load. Is it correct?</p>
<p>3) There is also a kubernetes batch API available. Should I use the kubernetes batch API or the Spring Cloud feature? Which option is the better one?</p>
| <p>I have used Spring Boot with Spring Batch and Spring Cloud Task to do something similar to what you want to do. Maybe it will help you.</p>
<p>The way it works is like this: I have a manager app that deploys pods on Kubernetes with my master application. The master application does some work and then starts the remote partitioning deploying several other pods with "workers".</p>
<p>Trying to answer your questions:</p>
<p>1) You can create a docker image of an application that has a Spring Batch job. Let's call it Master application.
The application that will deploy the master application could use a TaskLauncher or an AppDeployer from <a href="https://github.com/spring-cloud/spring-cloud-deployer-kubernetes" rel="nofollow noreferrer">spring cloud deployer kubernetes</a>.</p>
<p>2) Correct. In this case you could use remote partitioning. Each partition would be another docker image with a Job. This would be your worker.
An example of remote partitioning can be found <a href="https://github.com/spring-cloud/spring-cloud-task/tree/master/spring-cloud-task-samples/partitioned-batch-job" rel="nofollow noreferrer">here</a>.</p>
<p>3) In my case I used Spring Batch and managed to do everything I needed. The only problem I have now is with upscaling and downscaling my cluster. Since my workers are not stateful, I'm experiencing some problems when instances are removed from the cluster. If you don't need to upscale or downscale your cluster, you are good to go.</p>
|
<p>In one of my HTTP(S) LoadBalancer, I wish to change my backend configuration to increase the timeout from 30s to 60s (We have a few 502's that do not have any logs server-side, I wish to check if it comes from the LB)</p>
<p>But, as I validate the change, I got an error saying </p>
<blockquote>
<p>Invalid value for field 'namedPorts[0].port': '0'. Must be greater
than or equal to 1</p>
</blockquote>
<p>even if i didn't change the namedPort.</p>
<p><a href="https://stackoverflow.com/q/45030092/6916552">This</a> issue seems to be the same, but the only solution is a workaround that does not work in my case : </p>
<p>Thanks for your help,</p>
| <p>I faced the same issue and @tmirks 's fix didn't work for me.</p>
<p>After experimenting with GCE for a while, I realised that the issue is with the service.</p>
<p>By default all services are <code>type: ClusterIP</code> unless you specified otherwise.</p>
<p>Long story short, if your service isn't exposed as <code>type: NodePort</code> then the GCE load balancer won't route the traffic to it.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/26508#issuecomment-222376962" rel="noreferrer">From the official Kubernetes project</a>:</p>
<blockquote>
<p>nodeport is a requirement of the GCE Ingress controller (and cloud controllers in general). "On-prem" controllers like the nginx ingress controllers work with clusterip:</p>
</blockquote>
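<p>For reference, a minimal sketch of a Service the GCE ingress controller can route to (name, selector and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  type: NodePort          # required by the GCE ingress controller
  selector:
    app: my-backend
  ports:
  - port: 80
    targetPort: 8080
</code></pre>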
|
<p>I am new to azure. I created an azure acs with kubernetes by using the following config (part of the whole file). </p>
<pre><code> - apiVersion: v1
kind: Service
metadata:
name: my-web
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
selector:
app: my-web
</code></pre>
<p>The service can be visited, but only via the IP address. There is no secondary dns (like: xxx.azurewebsite.com) generated. Currently, I use an A record pointing to the IP address. It works, but I am afraid the IP address will change and I will have to manually update the DNS A record. Just asking if there is a way to generate some stable dns for acs services?</p>
| <blockquote>
<p>Just asking if there is a way to generate some stable dns for acs
services?</p>
</blockquote>
<p>We can change the public IP address to static via the Azure portal; this way, restarting the service will not change the IP.</p>
<p>But in Azure, if we <strong>delete</strong> the k8s service, the public IP address will be reclaimed by the Azure platform and we will lose it. For now, Azure does <strong>not</strong> support keeping the public IP address for a k8s service.</p>
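<p>If you prefer the CLI over the portal, the same change can be made with the Azure CLI (resource group and IP name are placeholders):</p>
<pre><code>az network public-ip update --resource-group my-resource-group --name my-k8s-public-ip --allocation-method Static
</code></pre>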
|
<p>In a Kubernetes cluster I installed Prometheus using:</p>
<pre><code>helm install stable/prometheus
</code></pre>
<p>It succeeds:</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
winsome-otter-prometheus-alertmanager-3488774855-mk4ph 2/2 Running 0 5m
winsome-otter-prometheus-kube-state-metrics-2907311046-ggnwx 1/1 Running 0 5m
winsome-otter-prometheus-node-exporter-dp9b3 1/1 Running 0 5m
winsome-otter-prometheus-pushgateway-3103937292-fvw8m 1/1 Running 0 5m
winsome-otter-prometheus-server-2211167584-hjlp6 2/2 Running 0 5m
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4d
winsome-otter-prometheus-alertmanager ClusterIP 10.0.0.215 <none> 80/TCP 8m
winsome-otter-prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 8m
winsome-otter-prometheus-node-exporter ClusterIP None <none> 9100/TCP 8m
winsome-otter-prometheus-pushgateway ClusterIP 10.0.0.168 <none> 9091/TCP 8m
winsome-otter-prometheus-server ClusterIP 10.0.0.62 <none> 80/TCP 8m
</code></pre>
<p>How can I access it from a browser? Which port should I use? How can I find out?</p>
| <p>You need to forward port 9090 from your localhost to prometheus pod first:</p>
<pre><code>export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
</code></pre>
<p>Now you can access Prometheus via browser on <a href="http://localhost:9090" rel="nofollow noreferrer">http://localhost:9090</a></p>
<p>You can do the same for <code>alertmanager</code> as well:</p>
<pre><code>export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9093
</code></pre>
<p>Now Alertmanager is available via browser on <a href="http://localhost:9093" rel="nofollow noreferrer">http://localhost:9093</a></p>
|
<p>Could somebody please help me create a yaml config file for Kubernetes for a situation like this: one pod with 3 containers (for example), and these containers have to be deployed on 3 nodes of a cluster (Google GCE).</p>
<pre><code>|P| |Cont1| ----> |Node1|
|O| ---> |Cont2| ----> |Node2| <----> GCE cluster
|D| |Cont3| ----> |Node3|
</code></pre>
<p>Thanks</p>
| <p>From <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/" rel="nofollow noreferrer">Kubernetes Concepts</a>,</p>
<blockquote>
<p>Pods in a Kubernetes cluster can be used in two main ways: <strong>Pods that
run a single container</strong>. The “one-container-per-Pod” model is the most
common Kubernetes use case; in this case, you can think of a Pod as a
wrapper around a single container, and Kubernetes manages the Pods
rather than the containers directly. <strong>Pods that run multiple containers
that need to work together</strong>. A Pod might encapsulate an application
composed of multiple co-located containers that are tightly coupled
and need to share resources. These co-located containers might form a
single cohesive unit of service–one container serving files from a
shared volume to the public, while a separate “sidecar” container
refreshes or updates those files. The Pod wraps these containers and
storage resources together as a single manageable entity.</p>
</blockquote>
<p>In short, most likely you should place each container in its own Pod to truly benefit from the microservices architecture vs the monolithic architecture commonly deployed in VMs. However, there are some cases where you want to consider co-locating containers. Namely, as described in this <a href="http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html" rel="nofollow noreferrer">article (Patterns for Composite Containers)</a>, some of the composite container applications are:</p>
<ul>
<li>Sidecar containers</li>
</ul>
<blockquote>
<p>extend and enhance the "main" container</p>
</blockquote>
<ul>
<li>Ambassador containers</li>
</ul>
<blockquote>
<p>proxy a local connection to the world</p>
</blockquote>
<ul>
<li>Adapter containers</li>
</ul>
<blockquote>
<p>standardize and normalize output</p>
</blockquote>
<p>Once you define and run the Deployments, the Scheduler will be responsible for selecting the most suitable placement for your Pods, unless you <a href="https://stackoverflow.com/a/42004597/1657309">manually assign Nodes</a> by defining labels in the Deployment's YAML (not recommended unless you know what you're doing).</p>
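<p>For completeness, a minimal sketch of that manual pinning (label key/value are arbitrary placeholders); again, it is usually better to leave placement to the Scheduler:</p>
<pre><code># label a node first: kubectl label nodes my-node-1 disktype=ssd
# then, in the pod template spec of the Deployment:
spec:
  nodeSelector:
    disktype: ssd
</code></pre>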
|
<p>I am new to kubernetes. here I have some confusions about the CA certificates used in a kubernetes cluster. As far as I know there are several CA certificates in kubernetes, but still not clear what each functionality of them. Here is my understanding of them, but still not sure of them. </p>
<ol>
<li><p>Root CA also know as serving CA, </p>
<ul>
<li>it signs the apiserver certwhich are configured in the apiserver with --tls-cert-file and --tls-private-key-file. </li>
<li>this CA certificate is configured in kube-controller-manager with --root-ca-file </li>
</ul></li>
<li><p>Client CA </p>
<ul>
<li>this CA certificate can be a intermediate CA certificate signed by ROOT CA certificate. </li>
<li>which is used to sign the individual components in cluster, help to identify their identities when RBAC and NODE authorization are enabled. for example, sign the kube-controller-manager, kube-scheduler, kube-proxy, kubelet.</li>
<li>can be configured in apiserver with --client-ca-file</li>
</ul></li>
<li>requestheader client ca
<ul>
<li>this CA certificate can also be a intermediate CA certificate signed by ROOT CA certificate. </li>
<li>Still not understand what this CA is used for ? what scenario of the CA file, just found that if metric server is deployed, it will ask for requestheader related certificates and keys</li>
<li>for --proxy-client-cert-file and --proxy-client-key-file in kube-apiserver, what the value of these parameter? can kubelet certificates and keys be used? </li>
</ul></li>
</ol>
<p>Besides these CA certificates, I also have some other certificate-related confusions. </p>
<ol>
<li>--service-account-key-file parameter in kube-apiserver and kube-controller-manager, which file can be configured? is the apiserver key file or ROOT CA key file ?</li>
<li>what can be set to<br>
--cluster-signing-cert-file and --cluster-signing-key-file in kube-controller-manager? can we use client ca and client ca key file? </li>
</ol>
| <p>A good summary is at <a href="https://github.com/kubernetes/kubernetes/issues/54665#issuecomment-340960398" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/54665#issuecomment-340960398</a></p>
<p>The request header CA is used to verify a client cert presented by an authenticating proxy along with user info set in request headers. The proxy authenticates the user, proxies the request, and sets the username/groups in headers. The API server verifies the client cert before trusting user info in any headers. </p>
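<p>As a rough illustration, the related kube-apiserver flags typically look like the following (the paths follow the kubeadm layout and are only an example; the proxy client cert/key is generally a dedicated pair signed by the request-header CA rather than a reused kubelet certificate):</p>
<pre><code>--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-allowed-names=front-proxy-client
--requestheader-username-headers=X-Remote-User
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
</code></pre>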
|
<p>My deployment JSON file has the following entry for my image, where 49 is the build number. I'm confused about how to use VSTS or kubectl to replace that number with each incremental build. VSTS provides command line options to the kubectl command; is it possible to specify something like <code>kubectl -f file.json -imagetag $TAG</code> or something?</p>
<pre><code>"containers": [
{
"name": "jietest",
"image": "myreg.azurecr.io/jietest:49",
"resources": {},
</code></pre>
| <p>You can use <a href="https://kubernetes.io/docs/user-guide/kubectl/v1.8/#-em-image-em-" rel="nofollow noreferrer"><code>kubectl set image</code></a> to update the container image of a resource.</p>
<p>For example: <code>kubectl set image -f file.json jietest=myreg.azurecr.io/jietest:$TAG</code>.</p>
|
<p>My current version of istio is 0.2.12.
I have a deployment that was deployed with istio kube-inject and tries to connect to a service/deployment inside the kubernetes cluster that does not use Istio. How is it possible to allow access from the Istio-injected deployment to the non-Istio deployment?
In this case the Istio-injected deployment is a Spring Boot application and the other is an ephemeral MySQL server.
Any ideas? </p>
| <p>You should be able to access all the kubernetes services (Istio-injected and the regular Kubernetes ones) from Istio-injected pods.</p>
|
<p>I set up a 3-node K8S cluster locally with virtualbox. When I try out the ingress, it doesn't set up the IP address:</p>
<pre><code>2017-11-11 17:00:49.015691 I | proto: duplicate proto type registered:
google.protobuf.Any
2017-11-11 17:00:49.016061 I | proto: duplicate proto type registered: google.protobuf.Duration
2017-11-11 17:00:49.016112 I | proto: duplicate proto type registered: google.protobuf.Timestamp
NAME HOSTS ADDRESS PORTS AGE
whale-ingress a.whale.hey,b.whale.hey 80 9m
</code></pre>
<p>Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: whale-ingress
spec:
rules:
- host: a.whale.hey
http:
paths:
- path: /
backend:
serviceName: whale-svc-a
servicePort: 80
- host: b.whale.hey
http:
paths:
- path: /
backend:
serviceName: whale-svc-b
servicePort: 80
</code></pre>
<p>Did I set something wrong?</p>
| <p>Are you running an Ingress controller? A minimal Kubernetes cluster does not have an Ingress controller by default. If not, try deploying this controller: <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p>
|
<p>We have a number of micro services in our ecosystem and two of them deal with user data:</p>
<ul>
<li><p>user service -> </p>
<ul>
<li>POST /users</li>
<li>GET /users/[[:alnum:]]+</li>
</ul></li>
<li><p>documents service -> </p>
<ul>
<li>POST /users/[[:alnum:]]+/documents</li>
<li>GET /users/[[:alnum:]]+/documents/[[:alnum:]]+</li>
</ul></li>
</ul>
<p>Therefore I wanted to define this in an Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{.Release.Name}}
annotations:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/tls-acme: "true"
ingress.kubernetes.io/enable-cors: "false"
ingress.kubernetes.io/ssl-redirect: "true"
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "fullname" . }}"
heritage: "{{ .Release.Service }}"
release: "{{ .Release.Name }}"
tier: frontend
spec:
tls:
- secretName: {{ template "tls.fullname" . }}
hosts:
- "staging.ourhost.com"
rules:
- host: "staging.ourhost.com"
http:
paths:
- path: "/users/[[:alnum:]]+"
backend:
serviceName: {{ .Values.api.services.user_service | quote }}
servicePort: 8080
- path: "/users/[[:alnum:]]+/documents"
backend:
serviceName: {{ .Values.api.services.document_service | quote }}
servicePort: 8888
</code></pre>
<p>This results in the following being set in ngix.conf:</p>
<pre><code>location /users/[[:alnum:]]+ {
...
}
location /users/[[:alnum:]]+/documents{
...
}
</code></pre>
<p>i.e. the regex is not resolved.</p>
<p>I tried splitting it out into individual ingresses and using the annotation</p>
<pre><code>ingress.kubernetes.io/rewrite-target: "/users/$1/documents"
</code></pre>
<p>but this resulted in the following in nginx.conf</p>
<pre><code>location ~* ^/users/[[:alnum:]]+/documents/(?<baseuri>.*) {
set $proxy_upstream_name "default-dev-document_service-8888";
...
rewrite /users/[[:alnum:]]+/documents/(.*) /users/$1/documents/$1 break;
proxy_pass http://default-dev-document_service-8888;
}
</code></pre>
<p>Question (finally):</p>
<ul>
<li>Why is $1 always appended? this seems strange to me</li>
<li>Does someone know another way to make paths work with regex? </li>
</ul>
<p>Any advice would be greatly apreciated</p>
| <p>After lots of searching, it seems that what I was trying to do is not (yet) possible in ingress. I opened a <a href="https://github.com/kubernetes/ingress-nginx/pull/1415" rel="nofollow noreferrer">PR</a> with them and I am waiting to see if it will be merged. </p>
<p>I have not yet publicly deployed a version with this change available, but the branch can be build using the dev docs from k8s</p>
|
<p>I am using the <a href="https://www.terraform.io/docs/providers/kubernetes/index.html" rel="nofollow noreferrer">Kubernetes Provider</a> to describe services/pods in Terraform.</p>
<p>It can get confusing using the Hashicorp Configuration Language to define <a href="https://www.terraform.io/docs/providers/kubernetes/r/pod.html" rel="nofollow noreferrer">kubernetes_pod</a> or <a href="https://www.terraform.io/docs/providers/kubernetes/r/service.html" rel="nofollow noreferrer">kubernetes_service</a> resources because the Kubernetes documentation describes everything in YAML which it means you need to translate it into HCL.</p>
<p>Is it possible to define pods as YAML and use them with <code>kubernetes_pod</code> and <code>kubernetes_service</code> resources as templates?</p>
| <p>While Terraform normally uses HCL, this is a superset of JSON (much like YAML itself) so <a href="https://www.terraform.io/docs/configuration/syntax.html#json-syntax" rel="nofollow noreferrer">can also read JSON</a>.</p>
<p>One possible option would be to take the YAML examples you already have and convert them into JSON and then use Terraform on those.</p>
<p>Unfortunately, that's unlikely to work because keywords are likely to be different for how Terraform is expecting things so you'd need to write something to do some basic translation of the input YAML to a Terraform resource JSON. At this point, it'd probably be worth just adding HCL output to the conversion so your outputted Terraform config is more readable if you ever intend to keep the Terraform config around instead of just one shot converting and applying the config.</p>
<p>The benefit of doing things this way would be that you have a reusable Kubernetes config that could be ran using <code>kubectl</code> or other tools but gives you the power of Terraform's lifecycle management, being able to plan changes and integration with non Kubernetes parts of your infrastructure (such as setting up instances to run the Kubernetes cluster on).</p>
<p>I've not used it much but I believe <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">Kops</a> will allow you to keep pod/service config in typical Kubernetes YAML files but can then use Terraform to manage the configuration and even allows you to output the Terraform configuration so you can run it outside of Kops itself.</p>
|
<p>I created a new cluster as per the <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="noreferrer">Azure guide</a> and created the cluster without issue but when I enter the <code>kubectl get nodes</code> to list the nodes I only get this response <code>Unable to connect to the server: net/http: TLS handshake timeout</code>.</p>
<p>I tried once in the Cloud Shell and once on my machine using the latest version of the Azure CLI (2.0.20).</p>
<p>I saw that there was a similar earlier issue regarding <a href="https://github.com/Azure/ACS/issues/4" rel="noreferrer">Service Principal credentials</a>, which I updated but that didn't seem to solve my issue either.</p>
<p>Any guidance would be greatly appreciated.</p>
| <p>For now, Azure AKS is still in preview. We have a general service outage in West US 2 that we are investigating.</p>
<p>While we investigate, cluster creation in West US 2 will <strong>not</strong> be possible and existing clusters might not work. </p>
<p>We will <strong>update</strong> <a href="https://github.com/Azure/AKS/blob/master/annoucements/service_outage_2017-11-09.md" rel="nofollow noreferrer">this thread</a> when we fix the issue.</p>
<p>We apologize for the inconvenience. </p>
|
<p>I have a project in west-1 on Openshift v3. In it I have an app that worked fine and one that stopped working following a GitHub committing something very downstream within the code. The issue is with making a pod: </p>
<pre><code>No nodes are available that match all of the following predicates::
Insufficient cpu (173), MatchNodeSelector (5).
</code></pre>
<p>I stopped it and then scaled down the other pod to see if it was some weird setting that the new build might have picked up, and I get the same. Now both are broken.<br>
Is it that Red Hat Openshift has run out of CPUs or have I got some weird default setting that has decided to request 173 CPUs and 5 Nodes? </p>
| <p>The message doesn't mean that the pod is requesting 173 CPUs.</p>
<p>The error message means that scheduling failed: on 173 nodes there isn't enough CPU available to fit the pod's requests (see the pod definition), and on 5 nodes it can't be scheduled because those nodes don't have the particular set of labels the pod specifies in <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="noreferrer"><code>spec.nodeSelector</code></a> of the pod definition. (There might be an intersection between the two groups of nodes.)</p>
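<p>For reference, the CPU request the scheduler is trying to fit comes from the pod/deployment definition, e.g. (values are purely illustrative):</p>
<pre><code>resources:
  requests:
    cpu: 200m
    memory: 256Mi
</code></pre>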
|
<p>I have a webapp running in a Docker-container in a Kubernetes cluster. The app has an endpoint I want to be called periodically. The app runs at multiple nodes/pods, and it is important that only one node performs the task initiated by the endpoint. I have looked at Kubernetes Cron Jobs, but have not found any documentation on calling endpoints from a Kubernetes Cron Job. Does anybody have any proposal for a solution of this problem? How do you handle scheduling in a cluster where it is crucial that only one node performs the task?</p>
| <p><code>CronJob</code>s are a good choice. Here's a quick layout that runs 3 nginx pods accepting all traffic. Every minute, a <code>Job</code> curls 1 of the 3 pods (always the same pod).</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: main
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: singleton
labels:
app: nginx
special: singleton
spec:
replicas: 1
selector:
matchLabels:
app: nginx
special: singleton
template:
metadata:
labels:
app: nginx
special: singleton
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
name: allpods
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
---
kind: Service
apiVersion: v1
metadata:
name: singleton
spec:
selector:
special: singleton
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: callout
spec:
schedule: "*/1 * * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- name: callout
image: buildpack-deps:curl
args:
- /bin/sh
- -ec
- curl http://singleton
restartPolicy: Never
</code></pre>
|
<p>I'm trying to scale out my deployments using openshift rest api, but I'm encountering the error "invalid character 's' looking for beginning of value".
I can successfully get the deployment config details but it's the patch request which is troubling me.
From the documentation I have tried the 3 Content-Type values below, but nothing works: </p>
<ul>
<li>application/json-patch+json</li>
<li>application/merge-patch+json</li>
<li>application/strategic-merge-patch+json</li>
</ul>
<p>Here's my code:</p>
<pre><code>data = {'spec':{'replicas':2}}
headers = {"Authorization": token, "Content-Type": "application/json-patch+json"}
def updateReplicas():
url = root + "namespaces" + namespace + "deploymentconfigs" + dc + "scale"
resp = requests.patch(url, headers=headers, data=data, verify=False)
print(resp.content)
</code></pre>
<p>Thank you.</p>
| <p>Ok, I found out the issue. Silly thing first: the data should be a JSON string (so the inner quotes need to be double quotes), e.g. data = '{"spec":{"replicas":2}}'. </p>
<p>Then, we need a few more pieces of info in our data, which finally looks like: </p>
<p>data = '{"kind":"Scale","apiVersion":"extensions/v1beta1","metadata":{"name":"deployment_name","namespace":"namespace_name"},"spec":{"replicas":1}}'</p>
<p>Thank you for your time.</p>
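<p>Putting it together, a minimal sketch of the call (the URL, token and names are placeholders as in the question; the Content-Type is assumed to be a merge-style patch, since the body is an object rather than a list of JSON-patch operations):</p>
<pre><code>import json
import requests

scale = {
    "kind": "Scale",
    "apiVersion": "extensions/v1beta1",
    "metadata": {"name": "deployment_name", "namespace": "namespace_name"},
    "spec": {"replicas": 1},
}
headers = {"Authorization": token, "Content-Type": "application/merge-patch+json"}
resp = requests.patch(url, headers=headers, data=json.dumps(scale), verify=False)
print(resp.status_code, resp.content)
</code></pre>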
|
<p>I have installed Kubernetes Cluster (minikube) on my Windows 10 machine and seems to be running (ie: I can browse the minikube dashboard, etc).</p>
<p><a href="https://i.stack.imgur.com/uX3AZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uX3AZ.png" alt="kubernetes cluster status"></a></p>
<p>I also have a Windows Image (has an Asp.Net Web API .Net framework 4.6 Application in it) on Azure Container Registry that I would like to pull and deploy to my local Kubernetes Cluster.</p>
<p>I have built the following yaml file to create the Kubernetes deployment:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hubapi
spec:
containers:
- name: hubapi
image: lgmimages.azurecr.io/hubapi/hubapi
imagePullSecrets:
- name: azurepasswordsecret
</code></pre>
<p>When I run this command:</p>
<pre><code>kubectl create -f hubapi.yaml
</code></pre>
<p>and I see:</p>
<pre><code>pod "hubapi" created
</code></pre>
<p>Then when I go to the dashboard or get Pod Description I see the following error:</p>
<pre><code>kubelet, minikube Failed to pull image "lgmimages.azurecr.io/hubapi/hubapi": rpc error: code = Unknown desc = image operating system "windows" cannot be used on this platform
</code></pre>
<p>I was wondering what I am missing here, and whether what I am trying to do is even possible?</p>
<p><strong>Note:</strong> It works when I use this command and pull nginx image from dockerhub:</p>
<pre><code>kubectl run kubernetes-nginx --image=nginx:latest --port=80
</code></pre>
<p>Then I expose this service and I can browse the nginx web page on my local Cluster.</p>
| <blockquote>
<p>rpc error: code = Unknown desc = image operating system "windows"
cannot be used on this platform</p>
</blockquote>
<p>Actually, we installed Kubernetes on Windows 10 via <strong>Bash on Ubuntu</strong>; in this setup the Bash on Ubuntu environment works as the master and is based on Linux, so we can't run a Windows docker image on it.</p>
<p>As we know, the Kubernetes master should be Linux, and you have no other nodes, so we can't run a <code>windows</code> docker image on it.</p>
<p>For testing, you can use <code>Azure container service</code> and deploy <code>kubernetes</code> with Windows nodes; that way, we can run Windows docker images on the k8s Windows nodes. </p>
<p>Hope this helps:)</p>
|
<p>I have a two-cluster multi-region HA enabled in production working in MS Azure.</p>
<p>I was asked to reuse the same cluster to manage several new projects using Microservices.</p>
<p>What is the best practice here? Should I create a cluster per app? Is it better to isolate every project in different clusters and cloud account subscriptions?</p>
<p>Looking forward to your opinions.</p>
<p>Thanks.</p>
| <p>I would suggest you slice and dice your cluster by using <strong>Namespaces</strong>. You can easily create a namespace with the following command.</p>
<pre><code>kubectl create namespace my-project
</code></pre>
<p>Now you can feed all your manifest files (deployments, services, secrets, PersistentVolumeClaims) to the API server in that my-project namespace. For instance,</p>
<pre><code>kubectl create -f my-deployment.yaml --namespace my-project
</code></pre>
<p>Do not forget to use the <strong>namespace flag</strong>, otherwise these manifests would be applied to the default namespace.</p>
<p>If you want to delete your project, you just need to delete the namespace. It will delete all of the resources related to that project.</p>
<pre><code>kubectl delete namespace my-project
</code></pre>
<p>Furthermore, you can set a quota on each namespace to limit its resource utilization.</p>
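<p>For instance, a quota object might look like this (a sketch; the limits are arbitrary):</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-project-quota
  namespace: my-project
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
</code></pre>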
<p>You can dig further into <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">Namespaces</a>.</p>
<p><strong>Edited</strong></p>
<p>Namespaces are virtual clusters within a physical cluster.</p>
|
<p>I would like to change the VM size of agents. I can't seem to do that from the CLI or by adjusting the Container Service. How can I change the size of the agents?</p>
| <blockquote>
<p>How can I change the size of the agents?</p>
</blockquote>
<p>We can change the k8s agent via the Azure portal; the agent in Azure is a VM, so we should resize the VM:</p>
<p><a href="https://i.stack.imgur.com/g8G39.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g8G39.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/dwd5Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dwd5Q.png" alt="enter image description here"></a></p>
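<p>Alternatively, with the Azure CLI (resource group, VM name and size are placeholders):</p>
<pre><code>az vm resize --resource-group my-acs-resource-group --name my-agent-vm --size Standard_DS3_v2
</code></pre>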
|
<p>Using Kubernetes on Azure Container Service (not the new AKS though).</p>
<p>I'm deploying a front-end app like this:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend-deployment
spec:
selector:
matchLabels:
app: frontend
replicas: 2
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: etc/etc
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
selector:
app: frontend
</code></pre>
<p>I can see that it's started correctly from the logs.</p>
<p>From <code>kubectl get services</code> I can see that it has been assigned an External IP. But when I try to access that via HTTP it just hangs.</p>
<p>I also can see in the Azure Portal that the Azure Load Balancer was created and is pointing to the correct external IP and backend pool.</p>
<p>Can anyone tell me if I somehow messed up the port assignments in the pod definition?</p>
<p>--</p>
<p>Update: Somehow it started working on it's own (or seemed like). But when I tried to re-create it as a Service instead of Deployment it stopped working</p>
<p>Here's my Service:</p>
<p>This is my config:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
name: meteor
spec:
externalTrafficPolicy: Cluster
ports:
- port: 80
protocol: TCP
targetPort: http-server
selector:
app: frontend
sessionAffinity: ClientIP
type: LoadBalancer
</code></pre>
<p>It creates the external IP for the load balancer, and I can see that it is properly matching the pods. but I get a timeout when I try to connect to the external IP. Meanwhile the load balancer that was created as part of the deployment continues to work just fine.</p>
| <blockquote>
<p>do you know how to change the agent VM size in an existing ACS
deployment?</p>
</blockquote>
<p>We can change the k8s agent via the Azure portal; the agent in Azure is a VM, so we should <strong>resize</strong> the VM:</p>
<p><a href="https://i.stack.imgur.com/ixx5q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ixx5q.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/5wIKP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5wIKP.png" alt="enter image description here"></a></p>
<p>Hope this helps.</p>
|
<p>In the context of the OpenShift 3.7 <a href="https://docs.openshift.com/container-platform/3.6/architecture/service_catalog/index.html" rel="nofollow noreferrer">Service Catalog</a>, what would it look like if I wanted to create Service Instances outside of the OpenShift Console UI?</p>
<p>I.e. from a terminal on my local machine, if I wanted to create an instance of a Service Class available in the Service Catalog (e.g. run and an <a href="https://github.com/ansibleplaybookbundle/ansible-playbook-bundle" rel="nofollow noreferrer">APB</a> available via the <a href="https://github.com/openshift/ansible-service-broker" rel="nofollow noreferrer">Ansible Service Broker</a>) what would the REST resource call look like (assuming there is a REST API available)?</p>
<p>An example using <code>curl</code> would be appreciated.</p>
<p>The assumption is that there is a REST API available for Service Catalog related resources. If this is not true, what is another integration pattern that would satisfy the requirement above? Does the <code>oc</code> CLI tool support Service Catalog commands?</p>
| <p>The best examples I can point you to are the ones from <a href="https://github.com/kubernetes-incubator/service-catalog/blob/master/docs/walkthrough.md" rel="nofollow noreferrer">the incubator repo walkthrough</a>. You'll probably be most interested in the <a href="https://github.com/kubernetes-incubator/service-catalog/blob/master/contrib/examples/walkthrough/ups-broker.yaml" rel="nofollow noreferrer">example ServiceInstance</a> and example <a href="https://github.com/kubernetes-incubator/service-catalog/blob/master/contrib/examples/walkthrough/ups-instance.yaml" rel="nofollow noreferrer">ServiceBinding</a>.</p>
<p>The service-catalog has a normal Kubernetes API server and the resources work like the other resources you're familiar with do. For example, you could make a new instance of a service in the openshift console and poke around in the API server using:</p>
<pre><code>$ kubectl get serviceinstances
</code></pre>
<p>Note, <code>oc get serviceinstances</code> is another valid form of command to use.</p>
<p>The CLI experience isn't fantastic for <code>kubectl get</code> for these resources yet, but <code>describe</code> works very well. In a future release of service-catalog the CLI experience for <code>kubectl get</code> will be improved.</p>
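<p>To the original <code>curl</code> question: the service-catalog resources live under their own API group, so a raw request looks roughly like the sketch below (host, token and project are placeholders, and the group version may differ, e.g. v1beta1 vs v1alpha1, depending on the catalog release):</p>
<pre><code>curl -k -H "Authorization: Bearer $TOKEN" \
  https://openshift-master.example.com:8443/apis/servicecatalog.k8s.io/v1beta1/namespaces/my-project/serviceinstances
</code></pre>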
|
<p>I deployed a 3-node k8s cluster on my local virtualbox, but I cannot find the external IP of the nodes:</p>
<pre><code>kubectl get nodes -o yaml | grep IP
2017-11-11 22:36:06.346458 I | proto: duplicate proto type registered: google.protobuf.Any
2017-11-11 22:36:06.346701 I | proto: duplicate proto type registered: google.protobuf.Duration
2017-11-11 22:36:06.346743 I | proto: duplicate proto type registered: google.protobuf.Timestamp
type: InternalIP
type: InternalIP
type: InternalIP
</code></pre>
<p>From reading online, it seems I cannot get an external IP unless I deploy the cluster with a public cloud provider... Is there a workaround for that?</p>
<p>I would like to try services and ingress locally without going through the cloud provider setup.</p>
| <p>I don't think the IP is exposed in the yaml file. You could do:</p>
<pre><code>kubectl get nodes -o wide
</code></pre>
|
<p>I had installed <em>minikube</em> a few months ago and wanted to upgrade as newer versions are available.</p>
<p>I am unable to find out how to upgrade <em>minikube</em>. I see a feature request for an upgrade command here - <a href="https://github.com/kubernetes/minikube/issues/1171" rel="noreferrer">https://github.com/kubernetes/minikube/issues/1171</a></p>
<p>I tried to then uninstall <em>minikube</em> and hit another brickwall again. I don't see a command to uninstall <em>minikube</em>. The information that came closest to this was not very helpful - <a href="https://github.com/kubernetes/minikube/issues/1043" rel="noreferrer">https://github.com/kubernetes/minikube/issues/1043</a></p>
<p>I guess we need ways to upgrade these (at least once every 6 months or so).</p>
| <p>Before reinstalling minikube (OS X), check the following:</p>
<ul>
<li><p>Make sure that you have <code>brew</code> updated: </p>
<pre><code>brew update
</code></pre></li>
<li><p>Make sure that you already have <code>cask</code> installed:</p>
<pre><code>brew cask install minikube --verbose
</code></pre></li>
</ul>
<p>Finally, execute the following command in the same directory you've installed minikube previously (usually <code>/usr/local/bin/</code>):</p>
<pre><code>brew cask reinstall minikube
</code></pre>
<p>If you see an output similar to this: </p>
<p><code>Error: It seems there is already a Binary at '/usr/local/bin/minikube'; not linking.</code></p>
<ul>
<li><p>Remove the existing binary:</p>
<pre><code>rm /usr/local/bin/minikube
</code></pre></li>
</ul>
<p>Now, you should be able to reinstall (upgrade) minikube. :)</p>
|
<p>I can't delete this Stateful Set in Kubernetes, even with <code>--cascade=false</code> so it doesn't delete the Pods managed by it.</p>
<pre><code>kubectl get statefulsets
NAME DESIRED CURRENT AGE
assets-elasticsearch-data 0 1 31m
</code></pre>
<p>Then:</p>
<pre><code>kubectl delete statefulsets assets-elasticsearch-data
^C
</code></pre>
<p>... hangs for minutes until I give up, then:
</p>
<pre><code>kubectl delete statefulsets assets-elasticsearch-data --cascade=false
statefulset "assets-elasticsearch-data" deleted
kubectl get statefulsets
NAME DESIRED CURRENT AGE
assets-elasticsearch-data 0 1 32m
</code></pre>
<p>I'm using Google's GKE.</p>
| <p>Had a similar issue with k8s 1.8. Tried many times and it timed out.
Eventually I tried, </p>
<pre><code>kubectl delete statefulsets mariadb -n openstack --force
</code></pre>
<p><em>error: timed out waiting for "mariadb" to be synced</em></p>
<p>This appears to work :</p>
<pre><code>kubectl delete statefulsets mariadb -n openstack --force --grace-period=0 --cascade=false
</code></pre>
<p><em>warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.</em></p>
<p><em>statefulset "mariadb" deleted</em></p>
|
<p>Using Kubernetes on Azure Container Service (not the new AKS though).</p>
<p>I'm deploying a front-end up like this:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: frontend-deployment
spec:
selector:
matchLabels:
app: frontend
replicas: 2
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: etc/etc
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
selector:
app: frontend
</code></pre>
<p>I can see that it's started correctly from the logs.</p>
<p>From <code>kubectl get services</code> I can see that it has been assigned an External IP. But when I try to access that via HTTP it just hangs.</p>
<p>I also can see in the Azure Portal that the Azure Load Balancer was created and is pointing to the correct external IP and backend pool.</p>
<p>Can anyone tell me if I somehow messed up the port assignments in the pod definition?</p>
<p>--</p>
<p>Update: Somehow it started working on it's own (or seemed like). But when I tried to re-create it as a Service instead of Deployment it stopped working</p>
<p>Here's my Service:</p>
<p>This is my config:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
name: meteor
spec:
externalTrafficPolicy: Cluster
ports:
- port: 80
protocol: TCP
targetPort: http-server
selector:
app: frontend
sessionAffinity: ClientIP
type: LoadBalancer
</code></pre>
<p>It creates the external IP for the load balancer, and I can see that it is properly matching the pods. but I get a timeout when I try to connect to the external IP. Meanwhile the load balancer that was created as part of the deployment continues to work just fine.</p>
| <p>It looks like the problem was a mis-specification of the targetPort. Adjusting it to the correct value and replacing the Service definition solved the problem.</p>
|