prompt | response
---|---|
<p>I'm using an Ingress to expose my services from outside the Kubernetes cluster, so I don't need Kubernetes to provision a loadbalancer. Therefore, I created a ClusterIP service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: myapp
  type: ClusterIP
</code></pre>
<p>This <em>works</em> - I have a separate Ingress and Deployment set up and I can access the app just fine.</p>
<p>However, Kubernetes insists on trying to create a loadbalancer anyway. Since it doesn't have permission in my AWS account to do so, every service I create logs errors like this:</p>
<pre><code>FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
22s 22s 1 service-controller Warning CreatingLoadBalancerFailed Error creating load balancer (will retry): Error getting LB for service default/myapp: AccessDenied: User: -redacted- is not authorized to perform: elasticloadbalancing:DescribeLoadBalancers with an explicit deny
status code: 403, request id: -redacted-
</code></pre>
<p>I assume that it's trying to DescribeLoadBalancers because it intends to create one. The docs claim that loadbalancers should only be created when <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types" rel="nofollow noreferrer">you specify service type "LoadBalancer"</a>. How can I stop Kubernetes from trying anyway?</p>
| <p>It is not trying to create one. You are receiving the error message from <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/service/service_controller.go#L287" rel="nofollow noreferrer">service_controller.go:287</a>:</p>
<pre><code>func (s *ServiceController) createLoadBalancerIfNeeded(key string, service *v1.Service) (error, bool) {
    [...]
    if !wantsLoadBalancer(service) {
        _, exists, err := s.balancer.GetLoadBalancer(s.clusterName, service)
        if err != nil {
            return fmt.Errorf("Error getting LB for service %s: %v", key, err), retryable
        }
    [...]
</code></pre>
<p>In turn, <code>GetLoadBalancer</code> calls the cloud provider to describe the load balancer (i.e. check whether one already exists). I suggest you authorize the AWS account Kubernetes runs under for this particular action.</p>
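<p>For illustration, a minimal IAM policy statement granting just the <code>DescribeLoadBalancers</code> action mentioned in the error could look like the following sketch (attach it to whichever principal your controller manager uses; the <code>Sid</code> is arbitrary):</p>
<pre><code>{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDescribeLoadBalancers",
      "Effect": "Allow",
      "Action": "elasticloadbalancing:DescribeLoadBalancers",
      "Resource": "*"
    }
  ]
}
</code></pre>
<p>Note that an explicit deny elsewhere in your account (as the error message suggests) will still override this allow.</p>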
|
<p>I'm running into an issue where I seem to max out my Container Engine (Kubernetes Engine) cluster at 9 nodes (or 9 vCPUs). For context, I'm trying to run around 50 or so web scrapers as Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">Cron Jobs</a>. Here is the <code>gcloud</code> command I am using to create the cluster in the <code>us-east1-c</code> zone:</p>
<pre><code>gcloud beta container clusters create my-example-cluster \
--cluster-version=1.8.1-gke.1 \
--machine-type=f1-micro \
--enable-autoscaling \
--min-nodes=1 \
--max-nodes=60
</code></pre>
<p>When I checked out <a href="https://console.cloud.google.com/iam-admin/quotas" rel="noreferrer">IAM & Admin > Quotas</a>, it looked like I was possibly maxing out on in-use IP addresses (max=8).</p>
<p>After launching all the scrapers, I end up with around 35 pods that are unschedulable (seemingly because I can't exceed 9 nodes?).</p>
<p>Has anyone run into a similar situation and come up with a solution? Is there a way, perhaps via Kubernetes config, to not count against IP address usage since these nodes do not need to be accessible externally (if that's even the issue)? Also open to other setups if anyone has recommendations.</p>
<p>Thanks for all the help!</p>
<p>P.S. I do have billing enabled</p>
| <p>I received some help on this issue from the <a href="https://gcp-slack.appspot.com/" rel="noreferrer">Google Cloud Platform Community Slack</a>. It turns out I was hitting the <em>In-use IP addresses</em> quota; however, I didn't realize you could request increased quotas.</p>
<p>I was able to increase quotas by:</p>
<ol>
<li>Navigating to <strong>Menu > IAM & admin > Quotas</strong> in the Google Cloud Platform console for my project</li>
<li>Selecting/checking the services in the region I was using (e.g., <em>In-use IP addresses</em> in <code>us-west1</code>)</li>
<li>Clicking the <strong>EDIT QUOTAS</strong> button above the table of services</li>
<li>Entering/verifying contact information</li>
<li>Complying with GCP support when they reach out to you</li>
</ol>
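<p>If you want to confirm from the command line which quota you are hitting, the regional quotas (including <em>In-use IP addresses</em>) can be listed with a command like this (substitute your own region):</p>
<pre><code>gcloud compute regions describe us-east1
</code></pre>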
<p>So it was basically an oversight on my end, but maybe it will help somebody else who didn't notice or wasn't aware of that option.</p>
|
<p>Apologies for not keeping this short, as any such attempt would make me miss out on some important details of my problem.</p>
<p>I have a legacy Java application which works in an active/standby mode in a clustered environment to expose certain RESTful web services via a predefined port. </p>
<p>If there are two nodes in my app cluster, at any point in time only one would be in Active mode and the other in Passive mode, and requests are always served by the node with the app running in Active mode. 'Active' and 'Passive' are just roles; the app as such runs on both nodes. The Active and Passive instances communicate with each other through this same predetermined port. </p>
<p>Suppose I have a two-node cluster with one instance of my application running on each node; one instance would initially be active and the other passive. If the active node goes down for some reason, the app instance on the other node identifies this using a heartbeat mechanism, takes over control, and becomes the new active. When the old active comes back up, it detects that the other instance has taken over the Active role, so it goes into Passive mode. </p>
<p>The application manages to provide RESTful webservices on the same endpoint IP irrespective of which node is running the app in 'Active' mode by using a cluster IP, which piggy-backs on the active instance, so the cluster IP switches over to whichever node is running the app in Active mode.</p>
<p>I am trying to containerize this app and run it in a Kubernetes cluster for scale and ease of deployment. I am able to containerize it and deploy it as a pod in a Kubernetes cluster. </p>
<p>In order to bring in the Active/Passive roles here, I am running two instances of this pod, each pinned to a separate K8S node using node affinity (each node is labeled as either active or passive, and the pod definitions pin to these labels), and clustering them using my app's clustering mechanism so that only one is active and the other is passive. </p>
<p>I am exposing the REST service externally using K8S Service semantics by making use of the NodePort, and exposing the REST WebService via a NodePort on the master node. </p>
<p>Here's my yaml file content:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp-service
spec:
  type: NodePort
  ports:
  - port: 8443
    nodePort: 30403
  selector:
    app: myapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: active
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodetype
                operator: In
                values:
                - active
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: active-pv-claim
      containers:
      - name: active
        image: myapp:latest
        imagePullPolicy: Never
        securityContext:
          privileged: true
        ports:
        - containerPort: 8443
        volumeMounts:
        - mountPath: "/myapptmp"
          name: task-pv-storage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: passive
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodetype
                operator: In
                values:
                - passive
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: active-pv-claim
      containers:
      - name: passive
        image: myapp:latest
        imagePullPolicy: Never
        securityContext:
          privileged: true
        ports:
        - containerPort: 8443
        volumeMounts:
        - mountPath: "/myapptmp"
          name: task-pv-storage
</code></pre>
<p>Everything seems to work fine, except that since both pods expose the web service via the same port, the K8S Service routes incoming requests to one of these pods in a random fashion. Since my REST endpoints only work on the Active node, requests through the K8S Service succeed only when they happen to be routed to the pod with the app in the Active role. If at any point the K8S Service routes an incoming request to the pod with the app in the Passive role, the service is inaccessible/not served.</p>
<p>How do I make this work in such a way that the K8S Service always routes requests to the pod with the app in the Active role? Is this something doable in Kubernetes, or am I aiming for too much? </p>
<p>Thank you for your time!</p>
| <p>You can use a readiness probe in conjunction with an election container. The election sidecar will always elect one master from the election pool, and if you make sure that only that pod is marked as ready, only that pod will receive traffic.</p>
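<p>As a minimal sketch of the readiness-probe part (assuming your app exposes an endpoint such as <code>/status/active</code> that returns 200 only on the instance currently holding the Active role — the path, port, and scheme here are placeholders), each container would get something like:</p>
<pre><code>containers:
- name: myapp
  image: myapp:latest
  readinessProbe:
    httpGet:
      path: /status/active
      port: 8443
      scheme: HTTPS
    periodSeconds: 5
    failureThreshold: 2
</code></pre>
<p>With this in place, the Service's endpoints only include the pod whose probe succeeds, so traffic always goes to the Active instance; on failover, the old Active pod fails the probe and is removed from the endpoints.</p>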
|
<p>Usually we expose our Kubernetes services through a managed K8s/GCP Ingress with auto-assigned NodePorts, but for some <a href="https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress" rel="nofollow noreferrer">use</a> <a href="https://kubernetes.io/docs/tasks/administer-federation/ingress/" rel="nofollow noreferrer">cases</a> we need to specify a static NodePort ourselves.</p>
<p>The <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">documentation</a> says that we need to make sure to avoid port collisions:</p>
<blockquote>
<p>you need to take care about possible port collisions yourself</p>
</blockquote>
<p><strong>Q: How should how we choose the correct NodePort?</strong></p>
<p>Should we / do we have to allocate our static NodePort from the <em>flag-configured range (default: 30000-32767)</em>?</p>
<p>Or rather not from this range to avoid collisions with these auto-assigned ports?</p>
| <p>It is more about not assigning the same port manually to multiple services than anything else. If you define a NodePort manually, it will not be handed out again to a service that gets a dynamically allocated port, so yes, you should use a port from this range.</p>
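<p>For illustration, a Service with a manually chosen NodePort from the default range could look like this sketch (names and port values are just examples):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-static-nodeport-service
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
</code></pre>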
|
<p>I'm trying to configure Spinnaker to deploy applications in a Kubernetes environment. </p>
<p>I followed this <a href="http://www.spinnaker.io/docs/kubernetes-source-to-prod" rel="nofollow">documentation</a>;
at <a href="http://www.spinnaker.io/docs/kubernetes-source-to-prod#section-3-create-a-demo-server-group" rel="nofollow">step 3</a> the containers are not showing up as shown in the <a href="https://files.readme.io/JRMxxbaSQ1mmD5VH8EtD_firstSG1.png" rel="nofollow">screenshot</a>. Then I moved to the next <a href="http://www.spinnaker.io/docs/kubernetes-source-to-prod#section-4-git-to-_dev_-pipeline" rel="nofollow">step</a> (pipeline creation); when I select <code>type: Docker</code> in the <code>Automated Trigger</code>, again the <code>Repo name</code> is not showing up, as shown in this <a href="https://files.readme.io/ZV0WoYPyTQSwLJ1CvysC_dockertrigger.png" rel="nofollow">screenshot</a>. </p>
<p><strong>So I suspect there is a problem between Spinnaker and the Docker Hub repo (authentication/misconfiguration?).</strong></p>
<p>I have copied the Kubernetes authentication config file to <code>~/.kube/config</code>, and I think there is no problem between Spinnaker and Kubernetes. When I create a <code>Load Balancer</code> in Spinnaker I can see the <code>Kube Services</code> being created (test-dev & test-prod):</p>
<pre><code>root@veeru:~# kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 192.168.3.1 <none> 443/TCP 91d
test-dev 192.168.3.113 <none> 80/TCP 6h
test-prod 192.168.3.185 80/TCP 6h
</code></pre>
<p>My <code>spinnaker-local.yml</code></p>
<pre><code><Content removed for brevity>
kubernetes:
  # For more information on configuring Kubernetes clusters (kubernetes), see
  # http://www.spinnaker.io/v1.0/docs/target-deployment-setup#section-kubernetes-cluster-setup
  # NOTE: enabling kubernetes also requires enabling dockerRegistry.
  enabled: true
  primaryCredentials:
    # These credentials use authentication information at ~/.kube/config
    # by default.
    name: veerendrav2
    namespace: default
    dockerRegistryAccount: veerendrav2
dockerRegistry:
  # If you want to deploy containers to a container management solution,
  # you must specifiy where these container images exist first.
  # NOTE: Enabling dockerRegistry is independent of other providers.
  # However, for convienience, we tie docker and kubernetes together
  # since kubernetes (and only kubernetes) depends on this docker provider
  # configuration.
  enabled: true
  primaryCredentials:
    name: veerendrav2
    address: https://hub.docker.com
    repository: veerendrav2/spin-kub-demo
<Content removed for brevity>
</code></pre>
<p>My <code>/opt/spinnaker/config/clouddriver-local.yml</code></p>
<pre><code>dockerRegistry:
  enabled: true
  accounts:
  - name: veerendrav2
    address: https://hub.docker.com/ # Point to registry of choice
    username: veerendrav2
    password: password
    repositories:
    - veerendrav2/spin-kub-demo
</code></pre>
<p>My Sample application <a href="https://github.com/veerendra2/spin-kub-demo" rel="nofollow">github repo</a> and <a href="https://hub.docker.com/r/veerendrav2/spin-kub-demo/" rel="nofollow">docker hub repo</a></p>
<p>Thanks</p>
| <p>The recommended way to configure the docker registry address, or any other configs, is using <a href="https://www.spinnaker.io/setup/install/halyard/" rel="nofollow noreferrer">Halyard</a>. Modifying the config files directly could result in them getting overwritten. </p>
<p>You can add an account, or edit an existing one this way.</p>
<pre><code># Add a docker registry account
hal config provider docker-registry account add <ACCOUNT_NAME> --address https://index.docker.io
# Edit the account (i.e. add a repo)
hal config provider docker-registry account edit <ACCOUNT_NAME> --add-repository <ACCOUNT_NAME>/<REPO>
# Deploy the changes
hal deploy apply
</code></pre>
|
<p>I have an RBAC-enabled Kubernetes cluster created using
kops version <strong>1.8.0-beta.1</strong>. I am trying to run an nginx pod which should attach a pre-created EBS volume and then start, but I am getting a "not authorized" error even though I am an <strong>admin</strong> user. Any help would be highly appreciated. </p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-09T07:27:47Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
namespace:default
</code></pre>
<p>cat test-ebs.yml </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    awsElasticBlockStore:
      volumeID: <vol-IDhere>
      fsType: ext4
</code></pre>
<p>I am getting the below error: </p>
<pre><code>Warning FailedMount 8m attachdetach AttachVolume.Attach failed for volume "test-volume" : Error attaching EBS volume "<vol-ID>" to instance "<i-instanceID>": "UnauthorizedOperation: You are not authorized to perform this operation
</code></pre>
| <p>In kops 1.8.0-beta.1, the master node requires you to tag the AWS volume with:</p>
<p><code>KubernetesCluster: <clustername-here></code></p>
<p>If you have created the k8s cluster using kops like so:</p>
<p><code>kops create cluster --name=k8s.yourdomain.com [other-args-here]</code></p>
<p>your tag on the EBS volume needs to be </p>
<p><code>KubernetesCluster: k8s.yourdomain.com</code> </p>
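<p>If you need to add that tag from the command line, something along these lines should work (the volume ID is a placeholder):</p>
<pre><code>aws ec2 create-tags --resources vol-xxxxxxxx \
    --tags Key=KubernetesCluster,Value=k8s.yourdomain.com
</code></pre>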
<p>And the master policy would contain a block like this:
</p>
<pre><code>{
  "Sid": "kopsK8sEC2MasterPermsTaggedResources",
  "Effect": "Allow",
  "Action": [
    "ec2:AttachVolume",
    "ec2:AuthorizeSecurityGroupIngress",
    "ec2:DeleteRoute",
    "ec2:DeleteSecurityGroup",
    "ec2:DeleteVolume",
    "ec2:DetachVolume",
    "ec2:RevokeSecurityGroupIngress"
  ],
  "Resource": [
    "*"
  ],
  "Condition": {
    "StringEquals": {
      "ec2:ResourceTag/KubernetesCluster": "k8s.yourdomain.com"
    }
  }
}
</code></pre>
<p>The condition indicates that the master policy only has the privilege to attach volumes that carry the right tag.</p>
|
<p>Trying to run a local registry. I have the following configuration:</p>
<p>Deployment:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
    role: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:latest
        ports:
        - containerPort: 5000
        volumeMounts:
        - mountPath: '/registry'
          name: registry-volume
      volumes:
      - name: registry-volume
        hostPath:
          path: '/data'
          type: Directory
</code></pre>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    role: registry
  type: NodePort
  ports:
  - name: registry
    nodePort: 31001
    port: 5000
    protocol: TCP
</code></pre>
<p>It all works well when I create deployment/service. <code>kubectl</code> shows status as <code>Running</code> for both service and deployment:</p>
<pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/registry 1 1 1 1 30m
NAME DESIRED CURRENT READY AGE
rs/registry-6549cbc974 1 1 1 30m
NAME READY STATUS RESTARTS AGE
po/registry-6549cbc974-mmqpj 1/1 Running 0 30m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 37m
svc/registry NodePort 10.0.0.6 <none> 5000:31001/TCP 7m
</code></pre>
<p>However, when I try to get the external IP for the service using <code>minikube service registry --url</code>, it times out/fails: <code>Waiting, endpoint for service is not ready yet...</code>.</p>
<p>When I delete the service (keeping deployment intact), and manually expose the deployment using <code>kubectl expose deployment registry --type=NodePort</code>, I am able to get it working.</p>
<p>Minikube log can be found <a href="https://pastebin.com/KHEPGwaZ" rel="nofollow noreferrer">here</a>.</p>
| <p>You need to specify the correct <code>spec.selector</code> in the <code>registry</code> service manifest:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    app: registry
  type: NodePort
  ports:
  - name: registry
    nodePort: 31001
    port: 5000
    protocol: TCP
</code></pre>
<p>Now <code>registry</code> service correctly points to the <code>registry</code> pod:</p>
<pre><code>$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 14m
registry 172.17.0.4:5000 4s
</code></pre>
<p>And you can get external url as well:</p>
<pre><code>$ minikube service registry --url
http://192.168.99.106:31001
</code></pre>
|
<p>I am brand new to SuiteCRM and I am trying to deploy it into Minikube. I am using the helm charts in the K8s repo:</p>
<p><a href="https://github.com/kubernetes/charts/tree/master/stable/suitecrm" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/suitecrm</a></p>
<p>I am using the command: </p>
<pre><code>helm install --name red-falcon-crm -f values.yaml stable/suitecrm
</code></pre>
<p>I modified the values.yaml to have some custom values (e.g. email, username, password). The install is not successful, though I don't get very usable errors. I do get errors about not having a resolvable host, but I was hoping to proxy. </p>
<pre><code>craig@craigs-laptop:~/redfalcon/gitlab/platform-setup/modules/suitecrm$ helm install --name red-falcon-crm -f values.yaml stable/suitecrm
NAME: red-falcon-crm
LAST DEPLOYED: Tue Oct 31 19:13:03 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
red-falcon-crm-mariadb-7fb6774f5c-b5w7t 0/1 ContainerCreating 0 0s
==> v1/Secret
NAME TYPE DATA AGE
red-falcon-crm-mariadb Opaque 2 1s
red-falcon-crm-suitecrm Opaque 2 1s
==> v1/ConfigMap
NAME DATA AGE
red-falcon-crm-mariadb 1 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
red-falcon-crm-mariadb Bound pvc-cf72d52d-bea1-11e7-b8a4-080027c951c6 8Gi RWO standard 1s
red-falcon-crm-suitecrm-apache Pending standard 1s
red-falcon-crm-suitecrm-suitecrm Pending standard 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
red-falcon-crm-mariadb ClusterIP 10.0.0.104 <none> 3306/TCP 1s
red-falcon-crm-suitecrm LoadBalancer 10.0.0.89 <pending> 80:32750/TCP,443:31973/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
red-falcon-crm-mariadb 1 1 1 0 1s
NOTES:
###############################################################################
### ERROR: You did not provide an external host in your 'helm install' call ###
###############################################################################
This deployment will be incomplete until you configure SuiteCRM with a resolvable
host. To configure SuiteCRM with the URL of your service:
1. Get the SuiteCRM URL by running:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace default -w red-falcon-crm-suitecrm'
export APP_HOST=$(kubectl get svc --namespace default red-falcon-crm-suitecrm --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
export APP_PASSWORD=$(kubectl get secret --namespace default red-falcon-crm-suitecrm -o jsonpath="{.data.suitecrm-password}" | base64 --decode)
export APP_DATABASE_PASSWORD=$(kubectl get secret --namespace default red-falcon-crm-mariadb -o jsonpath="{.data.mariadb-root-password}" | base64 --decode)
2. Complete your SuiteCRM deployment by running:
helm upgrade red-falcon-crm \
--set suitecrmHost=$APP_HOST,suitecrmPassword=$APP_PASSWORD,mariadb.mariadbRootPassword=$APP_DATABASE_PASSWORD stable/suitecrm
craig@craigs-laptop:~/redfalcon/gitlab/platform-setup/modules/suitecrm$ kubectl get svc --namespace default -w red-falcon-crm-suitecrm
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
red-falcon-crm-suitecrm LoadBalancer 10.0.0.89 <pending> 80:32750/TCP,443:31973/TCP 3m
^Ccraig@craigs-laptop:~/redfalcon/gitlab/platform-setup/modules/suitecrm$ minikube service red-falcon-crm-suitecrm
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
</code></pre>
<p>My Values.yaml:</p>
<pre><code>## Bitnami SuiteCRM image version
## ref: https://hub.docker.com/r/bitnami/suitecrm/tags/
##
image: bitnami/suitecrm:7.9.7-r0
## Specify a imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
imagePullPolicy: IfNotPresent
## SuiteCRM host to create application URLs
## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration
##
# suitecrmHost:
## loadBalancerIP for the SuiteCRM Service (optional, cloud specific)
## ref: http://kubernetes.io/docs/user-guide/services/#type-loadbalancer
##
# suitecrmLoadBalancerIP:
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration
##
suitecrmUsername: craig
## Application password
## Defaults to a random 10-character alphanumeric string if not set
## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration
##
suitecrmPassword: <hadmypasswordhere>
## Admin email
## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration
##
suitecrmEmail: <hadmyemail>@gmail.com
## Lastname
## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration
##
suitecrmLastName: <hadmylastname>
## SMTP mail delivery configuration
## ref: https://github.com/bitnami/bitnami-docker-suitecrm/#smtp-configuration
##
# suitecrmSmtpHost:
# suitecrmSmtpPort:
# suitecrmSmtpUser:
# suitecrmSmtpPassword:
# suitecrmSmtpProtocol:
##
## MariaDB chart configuration
##
mariadb:
  ## MariaDB admin password
  ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#setting-the-root-password-on-first-run
  ##
  mariadbRootPassword: <hadMyPasswordHere>
  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    enabled: true
    ## mariadb data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ## set, choosing the default provisioner. (gp2 on AWS, standard on
    ## GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    accessMode: ReadWriteOnce
    size: 8Gi
## Kubernetes configuration
## For minikube, set this to NodePort, elsewhere use LoadBalancer
##
serviceType: LoadBalancer
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true
  apache:
    ## apache data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ## set, choosing the default provisioner. (gp2 on AWS, standard on
    ## GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    accessMode: ReadWriteOnce
    size: 1Gi
  suitecrm:
    ## suitecrm data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ## set, choosing the default provisioner. (gp2 on AWS, standard on
    ## GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    accessMode: ReadWriteOnce
    size: 8Gi
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
#  requests:
#    memory: 512Mi
#    cpu: 300m
</code></pre>
<p>I did look at the logs of the running service (through the minikube dashboard). There is not much of use there, but it seems to hang on preparing the SuiteCRM environment and never gets beyond that...</p>
<pre><code> nami INFO Initializing apache
apache INFO ==> Patching httpoxy...
nami INFO apache successfully initialized
nami INFO Initializing php
nami INFO php successfully initialized
nami INFO Initializing suitecrm
suitecr INFO Preparing webserver environment...
suitecr INFO Preparing PHP environment...
suitecr INFO Preparing suitecrm environment...
</code></pre>
<p>The service endpoint is never ready and never appears to complete deployment. Any help would be appreciated.</p>
| <p>I think the issue has to do with the SuiteCRM host not being set correctly. The chart is configured to use a LoadBalancer, but since you are using Minikube you need to set the APP_HOST variable manually, with a different command than the one displayed by helm. This is what I did to make it work:</p>
<pre><code>helm install --name red-falcon-crm -f values.yaml stable/suitecrm
export APP_HOST=$(minikube ip):$(kubectl get svc red-falcon-crm-suitecrm --template "{{ (index .spec.ports 0).nodePort }}")
helm upgrade red-falcon-crm --set suitecrmHost=$APP_HOST -f values.yaml stable/suitecrm
</code></pre>
|
<p>Can I use the same storage class for different services - e.g. I am using persistent volumes for Redis, RabbitMQ and Elasticsearch on Kubernetes. </p>
<p>Should I create only one storage class if they all require the same provisioner (GCEPersistentDisk), or what is the best practice for creating and managing storage classes? </p>
<p>Can I change the storage class of a claim at a later time ?</p>
<p>Thanks,
Rajesh</p>
| <p>Let's look at the definition of <code>StorageClass</code> from the documentation:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
</code></pre>
<p>So at a basic level, it defines who is responsible for provisioning the storage (the <code>provisioner</code>), what parameters need to be passed to it, and things such as the reclaim policy.</p>
<p>The number of storage classes you need therefore depends on the characteristics of the underlying storage. As a hypothetical example, if you had HDD and SSD as two types of storage, you would create two corresponding storage classes. </p>
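<p>Since you mentioned the GCE persistent disk provisioner, a sketch of such a pair of classes might look like this (the class names are arbitrary; <code>pd-standard</code> and <code>pd-ssd</code> are the GCE disk types):</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
</code></pre>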
<p>To answer specific questions:</p>
<blockquote>
<p>Can I use the same storage class for different services - e.g. I am
using persistent volumes for Redis, RabbitMQ , Elastic Search on
Kubernetes.</p>
</blockquote>
<p>Yes</p>
<blockquote>
<p>Can I change the storage class of a claim at a later time ?</p>
</blockquote>
<p>In practice, No</p>
|
<p>See the connected question - <a href="https://stackoverflow.com/q/46968582/2790937">Kubernetes pod exec API exception: Response must not include 'Sec-WebSocket-Protocol' header if not present in request</a>.</p>
<p>I have been able to successfully make a WebSocket connection using Pod <code>exec</code> API. But I am using <code>kubectl proxy</code> on localhost to handle the authorization on behalf of the terminal client.</p>
<p>The next step is to be able to authorize the request directly to the Kubernetes API server, so that there's no need to route the traffic via <code>kubectl proxy</code>. <a href="https://github.com/kubernetes-incubator/client-python/issues/58" rel="nofollow noreferrer">Here's a discussion</a> in the Python community where they have been able to send an Authorization token to the api-server, but I haven't had any success with this in nodejs. I must admit that I am not familiar enough with Python to fully understand the discussion.</p>
<p>Can someone from the kubernetes team point me in the right direction?</p>
<p>Thanks</p>
| <p>For future wanderers....</p>
<p>Although the <code>exec</code> API supports the <code>Authorization</code> header, the browser WebSocket API doesn't support it yet. So the solution for us was to reverse-proxy it from our server APIs. </p>
<p>It went like this...</p>
<p>client browser -wss-> GKE LB (SSL Termination) -ws-> site API (nodejs) -WSS & Authorization-> kube api-server exec API</p>
<p><strong>So to answer my own question</strong>, per my tests, GKE Kubernetes supports Authorization only in headers, so you need to reverse proxy if you want to connect to it via a browser. Per <a href="https://github.com/kubernetes-ui/container-terminal/" rel="nofollow noreferrer">this code</a>, some Kubernetes setups allow tokens in the query string, but I didn't have any success with GKE. If you are using a different cluster host, YMMV. I welcome comments from the Kubernetes team on my observations.</p>
<p>If you came here only for an authorization issue, you may stop reading further.</p>
<p>There are still more challenges to overcome though, and there's good news and bad news... the good news first:</p>
<p>GKE Loadbalancer automatically handles SSL termination <a href="https://cloud.google.com/compute/docs/load-balancing/http/#websocket_proxy_support" rel="nofollow noreferrer">even for WebSockets</a>, so you can proxy to either WS or WSS without any issues.</p>
<p>And then the bad news:</p>
<p>The GKE load balancer force-terminates ALL connections within 30 seconds, even if they are in use! There are workarounds, but they either <a href="https://github.com/kubernetes/kubernetes/issues/32273#issuecomment-297190222" rel="nofollow noreferrer">don't stay put</a>, require you to <a href="https://github.com/kubernetes/kubernetes/issues/24745#issuecomment-241410916" rel="nofollow noreferrer">deploy your own controller</a>, or require you to use <a href="https://github.com/kubernetes/ingress-nginx/issues/243" rel="nofollow noreferrer">Ingress</a>. What this means for a terminal session is that Chrome will close the client with a 1006 code, even if a command is running at that time.</p>
<p>For some WS scenarios, it may be acceptable to simply reconnect on a 1006 close, but for a terminal session, this is a deal-breaker as you cannot reconnect to the previous terminal instance and must begin with a new one.</p>
<p>For now we have resorted to increasing the timeout of the GKE Loadbalancer. But eventually we are planning to deploy our own Loadbalancer which can handle this better. Ingress has <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#before-you-begin" rel="nofollow noreferrer">some issues</a> which we don't want to live with at the moment.</p>
|
<p>We are building a gRPC service in Python which has a bidirectional streaming endpoint and also a unary endpoint.</p>
<p>We want the stream to live forever, so we have no timeouts and the streams are working as expected. We are using Kubernetes and Docker for deployment.</p>
<p>But we are facing issues with scaling the service: how do we scale an infinite-streaming gRPC server? We can't scale based on the number of requests, because only one request is made and the data is sent as frames. </p>
<p>How can we scale this service? Right now the worker thread pool has a maximum of 100 threads.</p>
<p>One quick solution is to give the worker pool a higher maximum thread count and scale based on CPU load and memory usage.</p>
<p>Is there a better way to do it?</p>
| <p>Right now we don't have a good answer: the thread-per-RPC assumption was baked into gRPC Python fairly early and deeply, well before we were aware of "just keep an open connection in case either side has anything to say" long-lived RPCs being a use case.</p>
<p>We're working on better solutions but they'll likely be a while in coming.</p>
<p>Increasing the number of worker threads definitely sounds like the right answer for the time being. I'd be very curious to hear how it works out since your threads will be mostly idle most of the time (right?).</p>
<p>An option to <em>maybe</em> try that <em>might</em> work out well would be to design an object that implements the interface of <code>futures.ThreadPoolExecutor</code> but that actually does some sophisticated internal multiplexing to service a great many more RPCs. It's <a href="https://github.com/grpc/grpc/issues/7632" rel="nofollow noreferrer">an idea that I've had on my mind for a while</a> but haven't gotten around to testing out myself.</p>
|
<p>Running a baremetal master. Trying to set it up. Looked at the other answers - the hostname is in place - everything. And still...</p>
<pre><code>[root@kube-future kubernetes]# kubectl get
The connection to the server localhost:8080 was refused - did you
specify the right host or port?
</code></pre>
<p>UPDATE</p>
<pre><code>Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 5854/kube-scheduler
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 5763/etcd
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 5814/kube-controlle
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 5763/etcd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1419/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1508/master
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 5531/kubelet
tcp6 0 0 :::10250 :::* LISTEN 5531/kubelet
tcp6 0 0 :::6443 :::* LISTEN 5812/kube-apiserver
tcp6 0 0 :::10255 :::* LISTEN 5531/kubelet
tcp6 0 0 :::22 :::* LISTEN 1419/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1508/master
</code></pre>
<p>this is the result of env update 2:</p>
<pre><code>LC_PAPER=cs_CZ.UTF-8
XDG_SESSION_ID=80
LC_ADDRESS=cs_CZ.UTF-8
HOSTNAME=kube-future
LC_MONETARY=cs_CZ.UTF-8
TERM=xterm
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=10.194.4.254 44190 22
LC_NUMERIC=cs_CZ.UTF-8
SSH_TTY=/dev/pts/0
USER=root LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
LC_TELEPHONE=cs_CZ.UTF-8
MAIL=/var/spool/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
LC_IDENTIFICATION=cs_CZ.UTF-8
PWD=/root
LANG=en_US.UTF-8
LC_MEASUREMENT=cs_CZ.UTF-8
HISTCONTROL=ignoredups
SHLVL=1
HOME=/root
no_proxy=$no_proxy,10.194.110.4
LOGNAME=root
SSH_CONNECTION=10.194.4.254 44190 10.194.110.4 22
LESSOPEN=||/usr/bin/lesspipe.sh %s
XDG_RUNTIME_DIR=/run/user/0
LC_NAME=en_US.UTF-8
_=/usr/bin/env
</code></pre>
<p><code>kubectl config view</code> --> <a href="https://pastebin.com/sgbnyp8F" rel="nofollow noreferrer">https://pastebin.com/sgbnyp8F</a></p>
| <p>I had the same issue. </p>
<p>Running the following commands, as suggested <a href="https://groups.google.com/forum/#!topic/kubernetes-users/2EJ-i7vLVNw" rel="nofollow noreferrer">here</a>, solved my problem. </p>
<pre><code>sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
</code></pre>
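<p>If you want this to survive new shell sessions, you can also append the export to your shell profile (assuming bash):</p>
<pre><code>echo 'export KUBECONFIG=$HOME/admin.conf' >> ~/.bashrc
</code></pre>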
|
<p>I am running a NodeJS image in my Kubernetes pod, exposing a specific port (9080), and running Traefik as a sidecar container acting as a reverse proxy. How do I specify the Traefik route from the Deployment template?</p>
<p><strong>Deployment</strong></p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: "nodeJS-image"
        name: web
        ports:
        - containerPort: 9080
          name: http-server
      - image: "traefik-image"
        name: traefik-proxy
        ports:
        - containerPort: 80
          name: traefik-proxy
        - containerPort: 8080
          name: traefik-ui
        args:
        - --web
        - --kubernetes
</code></pre>
| <p>If I understand correctly, you want to forward requests hitting the Traefik container to the Node.js application living in the same pod. Given that the application is configured statically from Traefik's perspective, you can simply mount a proper file provider configuration into the Traefik pod (presumably via a ConfigMap) pointing at the side car container.</p>
<p>The simplest way to achieve this (<a href="https://docs.traefik.io/configuration/backends/file/" rel="nofollow noreferrer">as documented</a>) is to append the following file provider configuration directly at the bottom of Traefik's TOML configuration file:</p>
<pre><code>[file]
[backends.backend.servers.server]
url = "http://127.0.0.1:9080"
[frontends.frontend]
backend = "backend"
[frontends.frontend.routes.route]
host = "machine-echo.example.com"
</code></pre>
<p>If you mount the TOML configuration file into the Traefik pod under a path other than the default one (<code>/etc/traefik.toml</code>), you will also need to pass the <code>--configFile</code> option in the manifest referencing the correct location of the file.</p>
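<p>As a rough sketch of that ConfigMap approach (the object name and mount path here are just examples), the configuration file could be provided and wired up like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-config
data:
  traefik.toml: |
    [file]
    [backends.backend.servers.server]
    url = "http://127.0.0.1:9080"
    [frontends.frontend]
    backend = "backend"
    [frontends.frontend.routes.route]
    host = "machine-echo.example.com"
</code></pre>
<p>and in the Deployment's pod spec:</p>
<pre><code>      volumes:
      - name: traefik-config
        configMap:
          name: traefik-config
      containers:
      - name: traefik-proxy
        image: "traefik-image"
        args:
        - --configFile=/config/traefik.toml
        - --web
        - --kubernetes
        volumeMounts:
        - name: traefik-config
          mountPath: /config
</code></pre>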
<p>After that, any request hitting the Traefik container on port 80 with a host header of <code>machine-echo.example.com</code> should get forwarded to the Node.js side car container on port 9080.</p>
|
<p>We have a cluster with a master node (foo-1), and two worker nodes (foo-2 and foo-3). We have a pod that was running on foo-3 (as decided by Kubernetes). We purposely shut down foo-3 as an experiment.</p>
<p>My expectation was that Kubernetes would "see" the shutdown, and automatically restart the pod in foo-2. But, it didn't seem to happen. In fact, it seemed to think that the pod was still running on foo-3.</p>
<p>After five minutes of waiting, Kubernetes finally recognized that the cluster node had disappeared, and responded gracefully by restarting the pod on foo-2. Five minutes is too long for us, as this is not a replicated application. How can we make that timeout drastically shorter (like, 10 seconds)? And actually, if the host has a graceful shutdown (like for patching), the effect should be immediate.</p>
| <p>There is a <code>--pod-eviction-timeout</code> parameter in <a href="https://kubernetes.io/docs/admin/kube-controller-manager/" rel="noreferrer">kube-controller-manager</a> which is 5m by default:</p>
<pre><code> --pod-eviction-timeout duration The grace period for deleting pods on failed nodes. (default 5m0s)
</code></pre>
<p>You need to modify it if you want to speed up an eviction process.</p>
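<p>How you pass the flag depends on how your control plane is deployed; for example, if kube-controller-manager runs as a static pod (as in a kubeadm-style setup — this is a hypothetical fragment, adjust it to your own manifest), the flag would go on its command line:</p>
<pre><code>spec:
  containers:
  - command:
    - kube-controller-manager
    - --pod-eviction-timeout=30s
    # ...existing flags...
</code></pre>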
<p>But if you want to minimize your pod's downtime when a node goes down, you need to modify the following parameters as well:</p>
<pre><code>kubelet: node-status-update-frequency=4s (default 10s)
kube-controller-manager: node-monitor-period=2s (default 5s)
kube-controller-manager: node-monitor-grace-period=16s (default 40s)
kube-controller-manager: pod-eviction-timeout=30s (default 5m)
</code></pre>
<p>And, of course, you can always run your deployments with 2 replicas, so the service stays up even if one node goes down. </p>
|
<p>I am following the instructions as given <a href="https://kubernetes.io/docs/getting-started-guides/gce/" rel="nofollow noreferrer">here</a>.</p>
<p>I used the following command to get a running cluster, in gcloud console I typed: <code>curl -sS https://get.k8s.io | bash</code> as described in the link, after that, I ran the command <code>kubectl cluster-info</code> from that I got:</p>
<pre><code>kubernetes-dashboard is running at https://35.188.109.36/api/v1/proxy/namespaces/kube-
system/services/kubernetes-dashboard
</code></pre>
<p>but when I go to that URL from Firefox, the message I get is: </p>
<pre><code>User "system:anonymous" cannot proxy services in the namespace
"kube-system".: "No policy matched."
</code></pre>
<p>Expected behavior: Should ask for an admin name and password to connect to the dashboard.</p>
| <p>Is there a reason why you did not use GKE (Google Kubernetes Engine) which provides the dashboard add-on installed out of the box?</p>
<p>In your case, simply:</p>
<ul>
<li>the kubernetes-dashboard addon might not be installed (but your output says it is running, so this is probably not the problem)</li>
<li>network configuration that makes <code>kubectl proxy</code> work might not be there</li>
<li>the <code>curl .. | sh</code> script you used probably did not configure the authentication properly.</li>
</ul>
<p>I recommend using GKE as this works out of the box. You can find documentation here: <a href="https://cloud.google.com/kubernetes-engine/docs/oss-ui" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/oss-ui</a></p>
<hr>
<p>If you still want to use GCE, I recommend running <code>kubectl proxy</code> on your workstation (not your kubernetes nodes) and visiting <code>http://127.0.0.1:8001/ui</code> on your browser to see if it works.</p>
<p>If you get an error about not having enough permissions, you might be using a Kubernetes version new enough that enforces RBAC policies on pods like dashboard which access the API. You can grant those permissions by running:</p>
<pre><code>kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
</code></pre>
<hr>
<p>I also recommend trying out GKE UI in Google Cloud Console: <a href="https://console.cloud.google.com/kubernetes" rel="nofollow noreferrer">https://console.cloud.google.com/kubernetes</a></p>
|
<p>I am setting up a Kubernetes lab using one node only and learning to set up Kubernetes NFS.
I am following the Kubernetes NFS example step by step from the following link:
<a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="nofollow noreferrer">https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs</a></p>
<p>Trying the first section, NFS server part, executed 3 commands:</p>
<pre><code>$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml
</code></pre>
<p>I experience a problem where I see the following event:</p>
<pre><code>PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
</code></pre>
<p>Research done:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/43120" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/43120</a></p>
<p><a href="https://github.com/kubernetes/examples/pull/30" rel="nofollow noreferrer">https://github.com/kubernetes/examples/pull/30</a></p>
<p>None of the links above helped me resolve the issue I experience.
I have made sure it is using image 0.8.</p>
<pre><code>Image: gcr.io/google_containers/volume-nfs:0.8
</code></pre>
<p>Does anyone know what this message means?
Clues and guidance on how to troubleshoot this issue are very much appreciated.
Thank you.</p>
<pre><code>$ docker version
Client:
Version: 17.09.0-ce
API version: 1.32
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:41:23 2017
OS/Arch: linux/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:42:49 2017
OS/Arch: linux/amd64
Experimental: false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
lab-kube-06 Ready master 2m v1.8.3
$ kubectl describe nodes lab-kube-06
Name: lab-kube-06
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=lab-kube-06
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Thu, 16 Nov 2017 16:51:28 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.0.0.6
Hostname: lab-kube-06
Capacity:
cpu: 2
memory: 8159076Ki
pods: 110
Allocatable:
cpu: 2
memory: 8056676Ki
pods: 110
System Info:
Machine ID: e198b57826ab4704a6526baea5fa1d06
System UUID: 05EF54CC-E8C8-874B-A708-BBC7BC140FF2
Boot ID: 3d64ad16-5603-42e9-bd34-84f6069ded5f
Kernel Version: 3.10.0-693.el7.x86_64
OS Image: Red Hat Enterprise Linux Server 7.4 (Maipo)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://Unknown
Kubelet Version: v1.8.3
Kube-Proxy Version: v1.8.3
ExternalID: lab-kube-06
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-lab-kube-06 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-lab-kube-06 250m (12%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-lab-kube-06 200m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-dns-545bc4bfd4-gmdvn 260m (13%) 0 (0%) 110Mi (1%) 170Mi (2%)
kube-system kube-proxy-68w8k 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-lab-kube-06 100m (5%) 0 (0%) 0 (0%) 0 (0%)
kube-system weave-net-7zlbg 20m (1%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
830m (41%) 0 (0%) 110Mi (1%) 170Mi (2%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 39m kubelet, lab-kube-06 Starting kubelet.
Normal NodeAllocatableEnforced 39m kubelet, lab-kube-06 Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 39m (x8 over 39m) kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 39m (x8 over 39m) kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x7 over 39m) kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasNoDiskPressure
Normal Starting 38m kube-proxy, lab-kube-06 Starting kube-proxy.
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pv-provisioning-demo Pending 14s
$ kubectl get events
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
18m 18m 1 lab-kube-06.14f79f093119829a Node Normal Starting kubelet, lab-kube-06 Starting kubelet.
18m 18m 8 lab-kube-06.14f79f0931d0eb6e Node Normal NodeHasSufficientDisk kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientDisk
18m 18m 8 lab-kube-06.14f79f0931d1253e Node Normal NodeHasSufficientMemory kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientMemory
18m 18m 7 lab-kube-06.14f79f0931d131be Node Normal NodeHasNoDiskPressure kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasNoDiskPressure
18m 18m 1 lab-kube-06.14f79f0932f3f1b0 Node Normal NodeAllocatableEnforced kubelet, lab-kube-06 Updated Node Allocatable limit across pods
18m 18m 1 lab-kube-06.14f79f122a32282d Node Normal RegisteredNode controllermanager Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
17m 17m 1 lab-kube-06.14f79f1cdfc4c3b1 Node Normal Starting kube-proxy, lab-kube-06 Starting kube-proxy.
17m 17m 1 lab-kube-06.14f79f1d94ef1c17 Node Normal RegisteredNode controllermanager Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
14m 14m 1 lab-kube-06.14f79f4b91cf73b3 Node Normal RegisteredNode controllermanager Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
58s 11m 42 nfs-pv-provisioning-demo.14f79f766cf887f2 PersistentVolumeClaim Normal FailedBinding persistentvolume-controller no persistent volumes available for this claim and no storage class is set
14s 4m 20 nfs-server-kq44h.14f79fd21b9db5f9 Pod Warning FailedScheduling default-scheduler PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
4m 4m 1 nfs-server.14f79fd21b946027 ReplicationController Normal SuccessfulCreate replication-controller Created pod: nfs-server-kq44h
2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-kq44h 0/1 Pending 0 16s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-kq44h 0/1 Pending 0 26s
$ kubectl get rc
NAME DESIRED CURRENT READY AGE
nfs-server 1 1 0 40s
$ kubectl describe pods nfs-server-kq44h
Name: nfs-server-kq44h
Namespace: default
Node: <none>
Labels: role=nfs-server
Annotations: kubernetes.io/created-
by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-server","uid":"5653eb53-caf0-11e7-ac02-000d3a04eb...
Status: Pending
IP:
Created By: ReplicationController/nfs-server
Controlled By: ReplicationController/nfs-server
Containers:
nfs-server:
Image: gcr.io/google_containers/volume-nfs:0.8
Ports: 2049/TCP, 20048/TCP, 111/TCP
Environment: <none>
Mounts:
/exports from mypvc (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-plgv5 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
mypvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-pv-provisioning-demo
ReadOnly: false
default-token-plgv5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-plgv5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 39s (x22 over 5m) default-scheduler PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
</code></pre>
| <p>Each <em>Persistent Volume Claim</em> (PVC) needs a <em>Persistent Volume</em> (PV) that it can bind to. In your example, you have only created a PVC, but not the volume itself.</p>
<p>A PV can either be created manually, or automatically by using a <em>Volume class</em> with a provisioner. Have a look at <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#provisioning" rel="nofollow noreferrer">the docs of <em>static and dynamic provisioning</em></a> for more information):</p>
<blockquote>
<p>There are two ways PVs may be provisioned: statically or dynamically.</p>
<h3>Static</h3>
<p>A cluster administrator creates a number of PVs. They carry the details of the real storage which is available for use by cluster users. [...]</p>
<h3>Dynamic</h3>
<p>When none of the static PVs the administrator created matches a userβs <code>PersistentVolumeClaim</code>, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on <code>StorageClasses</code>: the PVC must request a class and the administrator must have created and configured that class in order for dynamic provisioning to occur.</p>
</blockquote>
<p>In your example, you are creating a storage class provisioner (defined in <code>examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml</code>) that seems to be tailored for use within the Google cloud, and which will probably not be able to actually create PVs in your lab setup.</p>
<p>You can create a persistent volume manually on your own. After creating the PV, the PVC should automatically bind itself to the volume and your pods should start. Below is an example for a persistent volume that uses the node's local file system as a volume (which is probably OK for a one-node test setup):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-volume
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /path/on/host
</code></pre>
<p>For a production setup, you'll probably want to choose a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">different volume type</a> at <code>hostPath</code>, although the volume types available to you will greatly differ depending on the environment that you're in (cloud or self-hosted/bare-metal).</p>
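<p>After creating the PV (for example with <code>kubectl create -f pv.yaml</code>, where <code>pv.yaml</code> is whatever file you saved the manifest to), you can check that the claim binds to it:</p>
<pre><code>kubectl get pv,pvc
</code></pre>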
|
<p>I have a Kubernetes pod that is already running on a Minikube (v0.23.0) node. </p>
<p>Some context: it's the coredns pod that is created from enabling the plugin. I'm working through this blog post, in an attempt to set up custom DNS entries for my Kubernetes cluster: <a href="https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/" rel="nofollow noreferrer">https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/</a></p>
<p>I'm able to replace the config map without a problem; I altered the template spec from the blog post to suit my needs and ran: </p>
<pre><code>kubectl create -f configmap.yml -o yaml --dry-run | kubectl replace -f -
</code></pre>
<p>That seemed to work; I inspected the configmap using kubectl and everything looked fine.</p>
<p>However I'm not sure how to update the volumes. I tried putting something like this in a file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
k8s-app: coredns
namespace: kube-system
spec:
containers:
- name: coredns
volumes:
- configMap:
items:
- key: Corefile
path: Corefile
- key: cluster.db
path: cluster.db
name: coredns
name: config-volume
</code></pre>
<p>Then I tried applying it with:</p>
<pre><code>kubectl create -f k8s_config/coredns/volumes.yml -o yaml --dry-run | kubectl apply -f -
</code></pre>
<p>But that gave this error message:</p>
<pre><code>error: error when retrieving current configuration of:
&{0xc4218be840 0xc420213500 kube-system STDIN 0xc421a9e5e0 0xc421a9e5e0 false}
from server for: "STDIN": resource name may not be empty
</code></pre>
<p>I also tried dumping the contents of the coredns pod into a file, altering the volume section accordingly, then replacing the pod, but got this message:</p>
<pre><code>The Pod "coredns-6b4fd7784-xhb5s" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
</code></pre>
<p>Short of deleting the pod and bringing it back up, how should I go about updating the volumes in a Pod? Is it even possible? </p>
<p>I suppose another overarching question would be: what's the correct way to update/configure the <code>coredns</code> pod/plugin? I can't seem to find a guide anywhere. In the blog post it says the following: </p>
<blockquote>
<p>To create the new zone, we need to modify the <code>coredns.yaml</code> we have
been using to create an additional file in the pod.</p>
</blockquote>
<p>However this must have come from when coredns was manually set up in Kubernetes, and now that it "just works" as a plugin I'm not sure how to go about configuring it. </p>
| <p>Looking at the Pod name <code>coredns-6b4fd7784-xhb5s</code>, it looks like your CoreDNS Pod is actually being managed by a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment controller</a>.</p>
<p>Do not modify Pods managed by a deployment controller directly. To edit Pods managed by a deployment, edit the deployment object itself (<code>kubectl -n kube-system edit deployment coredns</code>) and modify the <code>spec.template</code> property. After editing the deployment, the controller will delete the old pod and create a new one with the updated specification.</p>
<p>If you simply updated a ConfigMap object and want the CoreDNS Pod to receive the new contents, it's also sufficient to simply delete the old Pod; the deployment controller will create a new one with the most recent version of the ConfigMap.</p>
<p>If you're worried about a potential service interruption from deleting your CoreDNS Pod, consider increasing the deployment's <code>spec.replicas</code> value to something higher than 1.</p>
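<p>For example, a minimal sketch (the label selector comes from the <code>k8s-app: coredns</code> label in your Pod spec, and the deployment name from the edit command above):</p>
<pre><code># force the deployment controller to recreate the pod so it mounts the updated ConfigMap
kubectl -n kube-system delete pod -l k8s-app=coredns

# optionally run more than one replica to avoid a DNS gap while pods are recreated
kubectl -n kube-system scale deployment coredns --replicas=2
</code></pre>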
|
<p>I created a headless service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp-service-headless
spec:
ports:
- port: 80
selector:
app: myapp
clusterIP: None
</code></pre>
<p>From Kubernetes dashboard I can see its <code>Internal endpoints</code>:</p>
<pre><code>myapp-service-headless:80 TCP
myapp-service-headless:0 TCP
</code></pre>
<p>In this application, I also set internal endpoint to:</p>
<pre><code>http://myapp-service-headless
</code></pre>
<p>But from outside, how can I access its IP to connect API?</p>
<p>For example, my Kubernetes' IP is <code>192.168.99.100</code>, then connect to <code>192.168.99.100</code> is okay?</p>
<h1>Now the service status from Kubernetes dashboard</h1>
<h2>Services</h2>
<p><a href="https://i.stack.imgur.com/E6BeS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E6BeS.png" alt="enter image description here" /></a></p>
<h2>Service Details</h2>
<p><a href="https://i.stack.imgur.com/6WlBi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6WlBi.png" alt="enter image description here" /></a></p>
| <p>There are two options to expose the service outside the cluster. One is to use an ingress controller to route traffic to the server. </p>
<p>The simpler method is to change your service type to NodePort; then you should be able to reach the server using the node IP and the service's node port number.</p>
<p>Here is more info:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport</a></p>
|
<p>I'm quite new to the Spinnaker and have to ask for some help I guess. Does anyone knows why it could be that I can't create any Application and just keep seeing this screen.
My installation is through Halyard 1.5.0 and Ubuntu 14.04.
We don't use any cloud provider but I did configure Docker and Kubernetes part</p>
<p>And here is the error I see in the /var/log/spinnaker/echo/echo.log:</p>
<pre><code>2017-11-16 13:52:29.901 INFO 13877 --- [ofit-/pipelines] c.n.s.echo.services.Front50Service : java.net.SocketTimeoutException: timeout
at okio.Okio$3.newTimeoutException(Okio.java:207)
at okio.AsyncTimeout.exit(AsyncTimeout.java:261)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:215)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:306)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:300)
at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:196)
at com.squareup.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:186)
at com.squareup.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:127)
at com.squareup.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:739)
at com.squareup.okhttp.internal.http.HttpEngine.access$200(HttpEngine.java:87)
at com.squareup.okhttp.internal.http.HttpEngine$NetworkInterceptorChain.proceed(HttpEngine.java:724)
at com.squareup.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:578)
at com.squareup.okhttp.Call.getResponse(Call.java:287)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at retrofit.client.OkClient.execute(OkClient.java:53)
at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:326)
at retrofit.RestAdapter$RestHandler.access$100(RestAdapter.java:220)
at retrofit.RestAdapter$RestHandler$1.invoke(RestAdapter.java:265)
at retrofit.RxSupport$2.run(RxSupport.java:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at retrofit.Platform$Base$2$1.run(Platform.java:94)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.read(SocketInputStream.java:204)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at okio.Okio$2.read(Okio.java:139)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:211)
... 24 more
2017-11-16 13:52:29.901 INFO 13877 --- [ofit-/pipelines] c.n.s.echo.services.Front50Service : ---- END ERROR
</code></pre>
<p><a href="https://i.stack.imgur.com/CDhC1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CDhC1.png" alt="enter image description here"></a></p>
| <p>I suspect you may be using <code>redis</code> as the persistent storage type (I ran into the same issue). </p>
<p>If this is the case, persistent storage using redis doesn't seem to be working properly out-of-the-box, and it is not supported. I would try using an S3 target, if available.</p>
<p><a href="https://www.spinnaker.io/setup/storage/redis/" rel="nofollow noreferrer">More info here on support for redis</a></p>
<p>To configure S3 using Halyard, use the following commands: </p>
<pre><code>echo <SECRET_ACCESS_KEY> | hal config storage s3 edit --access-key-id <ACCESS_KEY_ID> --endpoint <S3_ENDPOINT> --bucket <BUCKET_NAME> --root-folder spinnaker --secret-access-key
hal config storage edit --type s3
hal deploy apply
</code></pre>
|
<p>How do you <strong>hard</strong> force a pod to terminate? </p>
<p>We have tried every documented form of kubectl delete pod and they remain. kubectl reports that they've been deleted, but kubectl get pods tells a different story. All pods that they could've been using have been deleted as well as any pods that could be using them. </p>
<p>Is there any form of kubectl SERIOUSLY_DELETE this pod?</p>
<p>I've tried: <code>kubectl delete pods --all --grace-period=0 --force -n monitoring</code></p>
<p>With no favorable result. I've also tried to delete them individually.</p>
<pre><code>NAME                        READY     STATUS        RESTARTS   AGE
es-master-962148878-7vxpp   1/1       Terminating   1          12d
es-master-962148878-h1r4k   1/1       Terminating   1          12d
es-master-962148878-rkg9g   1/1       Terminating   1          12d
</code></pre>
| <p>Taken from <code>kubectl delete --help</code>:</p>
<pre><code>kubectl delete pod foo --grace-period=0 --force
</code></pre>
<p>Note that if your pods are controlled via e.g. a deployment, then a new one will be recreated every time you delete one. So do make sure that's not the symptom you're observing!</p>
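<p>A quick sketch of how to check for (and remove) an owning controller — the kind and name below are assumptions based on the pod names in your question:</p>
<pre><code># see what controllers exist in the namespace
kubectl -n monitoring get deployments,replicasets,statefulsets

# if es-master turns out to be a Deployment, delete that instead of the pods
kubectl -n monitoring delete deployment es-master
</code></pre>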
|
<p>I want to implement graceful shutdown in a Kubernetes Pod. I know I need to listen for SIGTERM, which indicates the start of the shutdown procedure. But what exactly do I do when I receive it?</p>
<p>At the very least I must wait for all running request to finish before exiting. But can the pod still receive new requests after receiving the SIGTERM? (It's exposed using a service.) I can't find any explicit documentation on this.</p>
<p><a href="http://kubernetes.io/docs/user-guide/pods/#termination-of-pods" rel="noreferrer">The docs</a> state:</p>
<blockquote>
<p>Pod is removed from endpoints list for service, and are no longer considered part of the set of running pods for replication controllers. Pods that shutdown slowly can continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.</p>
</blockquote>
<p>So that seems to imply that new requests can still come in. So how long should I continue to expect new requests before graceful termination? Do I simply ignore the SIGTERM, continue to serve requests as usual and wait for the eventual SIGKILL?</p>
<p>I suppose ensuring future readiness checks fail and then waiting longer than the period with which they occur before terminating might work?</p>
<p>I'm on Kubernetes 1.2.5, if that makes any difference, and am talking about rolling updates in particular, but also scaling replication controllers down generally.</p>
| <p>I recently faced a similar problem. I used a simple preStop hook, which introduces some delay (sleep) between the start of termination and the SIGTERM reaching the underlying process:</p>
<pre><code>lifecycle:
preStop:
exec:
command:
- "sleep"
- "60"
</code></pre>
<p>This delay helps:</p>
<ol>
<li><p>The load balancer gets time to remove (sync out) the pod being terminated</p></li>
<li><p>The terminating pod gets a chance to complete requests it received before termination</p></li>
<li><p>The pod can fulfill requests that arrive between the start of termination and the load balancer update (sync)</p></li>
</ol>
<p>The preStop hook can be made more intelligent for pods with unpredictable serving times</p>
|
<p>Failed to create clusterroles. <> already assigned as the roles of "container engine admin" & "container engine cluster admin"</p>
<pre><code>Error from server (Forbidden): error when creating "prometheus-
operator/prometheus-operator-cluster-role.yaml":
clusterroles.rbac.authorization.k8s.io "prometheus-operator"
is forbidden: attempt to grant extra privileges: [{[create]
[extensions] [thirdpartyresources] [] []} {[*]
[monitoring.coreos.com] [alertmanagers] [] []} {[*]
[monitoring.coreos.com] [prometheuses] [] []} {[*]
[monitoring.coreos.com] [servicemonitors] [] []} {[*]
[apps] [statefulsets] [] []} {[*] [] [configmaps] [] []}
{[*] [] [secrets] [] []} {[list] [] [pods] [] []} {[delete]
[] [pods] [] []} {[get] [] [services] [] []} {[create]
[] [services] [] []} {[update] [] [services] [] []} {[get]
[] [endpoints] [] []} {[create] [] [endpoints] [] []}
{[update] [] [endpoints] [] []} {[list] [] [nodes]
[] []} {[watch] [] [nodes] [] []}]
user=&{<<my_account>>@gmail.com
[system:authenticated] map[]} ownerrules=[{[create]
[authorization.k8s.io] [selfsubjectaccessreviews]
[] []} {[get] [] [] [] [/api /api/* /apis /apis/*
/healthz /swaggerapi /swaggerapi/* /version]}]
ruleResolutionErrors=[]
</code></pre>
| <p>I've got the same problem on Google Kubernetes Engine.</p>
<p>According to the answer of <em>enj</em> and the comment of <em>ccyang2005</em>, please find the following snippet, which solved my problem :)</p>
<h1>Step 1 : Get your identity</h1>
<pre><code>gcloud info | grep Account
</code></pre>
<p>This will output something like <code>Account: [[email protected]]</code></p>
<h1>Step 2 : grant cluster-admin to your current identity</h1>
<pre><code>kubectl create clusterrolebinding myname-cluster-admin-binding \
--clusterrole=cluster-admin \
[email protected]
</code></pre>
<p>This will output something like <code>Clusterrolebinding "myname-cluster-admin-binding" created</code></p>
<hr>
<p>After that, you'll be able to create ClusterRoles</p>
|
<p>For improved performance and availability we'd like to distribute certain services from out stack across different Kubernetes clusters in different parts of the world (GCP regions).</p>
<p>The majority of our stack will continue to run in one cluster / region but some user facing services will be deployed all over the world.</p>
<p>Some of these services need to access other services in our main cluster.</p>
<p><strong>Q: How can we reliably access services in a different Kubernetes cluster?</strong></p>
<p>Using internal load balancers seems to be out of the question as those are <a href="https://cloud.google.com/kubernetes-engine/docs/internal-load-balancing#restrictions_for_internal_load_balancers" rel="nofollow noreferrer">per region</a> only.</p>
<p>We'd like to keep the communication between our services inside the private GCP network and avoid going over the public internet. So an public ingress also wouldn't work.</p>
| <p>VPC networks are <a href="https://cloud.google.com/vpc/docs/vpc" rel="nofollow noreferrer">global</a> resources, not restricted by regional boundaries, and so with the correct <a href="https://cloud.google.com/vpc/docs/using-firewalls" rel="nofollow noreferrer">firewall rules</a> set up, you should be able to access any internal resource from any other resource "right out of the box", assuming they are in the same VPC network and same project.</p>
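<p>As a hypothetical example (the network name and source range are placeholders for your own setup):</p>
<pre><code># allow internal traffic between resources in the same VPC, across regions
gcloud compute firewall-rules create allow-internal-cross-region \
    --network my-vpc \
    --allow tcp,udp,icmp \
    --source-ranges 10.0.0.0/8
</code></pre>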
|
<p>After months of using Helm (version 2.6.2) to deploy services in kubernetes, we've started experiencing random errors while performing the <code>--upgrade</code> command.</p>
<p>Most of the time the upgrade times out, in other situations instead it looks like a network issue occurs, with errors like <code>getsockopt: connection refused</code> or <code>TLS handshake timeout</code>.</p>
<p>Sometimes we've also seen <code>the server cannot complete the requested operation at this time, try again later (get configmaps)</code>.</p>
<p>We're using Helm to deploy several versions a day of our services to our CI environment, and the instability of the deployment process that started to creep in is affecting our productivity.</p>
<p>Any idea what I should look for to restore the <code>--upgrade</code> command to a reliable state?</p>
| <p>Upgrading to Helm 2.7.0 and using <code>--history-max</code> solved the issue for me, so the problem must have been related to the fact that old config maps were not cleared up by tiller, and over time they piled up until tiller started to struggle to make sense of them. </p>
<p>More information about it <a href="https://blog.shazam.com/rewriting-history-with-helm-b66e72958008" rel="nofollow noreferrer">here</a>. </p>
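<p>Two things that may help if you hit the same symptom (both are sketches — check <code>helm init --help</code> for the exact flag on your Helm version, and the label assumes Tiller's default ConfigMap storage backend):</p>
<pre><code># see how many release ConfigMaps Tiller has accumulated
kubectl -n kube-system get configmaps -l OWNER=TILLER

# cap the revision history Tiller keeps per release
helm init --upgrade --history-max 200
</code></pre>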
|
<p>I am trying to follow this tutorial on configuring nginx-ingress-controller for a Kubernetes cluster I deployed to AWS using kops.</p>
<p><a href="https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/" rel="nofollow noreferrer">https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/</a></p>
<p>When I run <strong>kubectl create -f ./nginx-ingress-controller.yml</strong>, the pods are created but error out. From what I can tell, the problem lies with the following portion of nginx-ingress-controller.yml:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>volumes:
- name: tls-dhparam-vol
secret:
secretName: tls-dhparam
- name: nginx-template-volume
configMap:
name: nginx-template
items:
- key: nginx.tmpl
path: nginx.tmpl</code></pre>
</div>
</div>
</p>
<p>Error shown on the pods:</p>
<p>MountVolume.SetUp failed for volume "nginx-template-volume" : configmaps "nginx-template" not found</p>
<p>This makes sense, because the tutorial does not have the reader create this configmap before creating the controller. I know that I need to create the configmap using:</p>
<p><strong>kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl</strong></p>
<p>I've done this using <strong>nginx.tmpl</strong> files found from sources <a href="https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/template/nginx.tmpl" rel="nofollow noreferrer">like this</a>, but they don't seem to work (always fail with invalid NGINX template errors). Log example:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>I1117 16:29:49.344882 1 main.go:94] Using build: https://github.com/bprashanth/contrib.git - git-92b2bac
I1117 16:29:49.402732 1 main.go:123] Validated default/default-http-backend as the default backend
I1117 16:29:49.402901 1 main.go:80] mkdir /etc/nginx-ssl: file exists already exists
I1117 16:29:49.402951 1 ssl.go:127] using file '/etc/nginx-ssl/dhparam/dhparam.pem' for parameter ssl_dhparam
F1117 16:29:49.403962 1 main.go:71] invalid NGINX template: template: nginx.tmpl:1: function "where" not defined</code></pre>
</div>
</div>
</p>
<p>The image version used is quite old, but I've tried newer versions with no luck.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code> containers:
- name: nginx-ingress-controller
image: gcr.io/google_containers/nginx-ingress-controller:0.8.3</code></pre>
</div>
</div>
</p>
<p><a href="https://github.com/kubernetes/contrib/issues/2061" rel="nofollow noreferrer">This thread</a> is similar to my issue, but I don't quite understand the proposed solution. Where would I use docker cp to extract a usable template from? Seems like the templates I'm using use a language/syntax incompatible with Docker...?</p>
| <p>To copy the nginx template file from the ingress controller pod to your local machine, you can first grab the name of the pod with <code>kubectl get pods</code> and then run <code>kubectl exec [POD_NAME] -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl</code> (no TTY is needed here, and allocating one can mangle line endings in the redirected file).</p>
<p>This will leave you with the <code>nginx.tmpl</code> file you can then edit and push back up as a configmap. I would recommend though keeping custom changes to the template to a minimum as it can make it hard for you to update the controller in the future.</p>
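<p>After editing, a sketch of pushing the updated template back up (using the ConfigMap name from your question):</p>
<pre><code>kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl -o yaml --dry-run | kubectl replace -f -
</code></pre>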
<p>Hope this helps!</p>
|
<p>I'm quite new to the Spinnaker and have to ask for some help I guess. Does anyone knows why it could be that I can't create any Application and just keep seeing this screen.
My installation is through Halyard 1.5.0 and Ubuntu 14.04.
We don't use any cloud provider but I did configure Docker and Kubernetes part</p>
<p>And here is the error I see in the /var/log/spinnaker/echo/echo.log:</p>
<pre><code>2017-11-16 13:52:29.901 INFO 13877 --- [ofit-/pipelines] c.n.s.echo.services.Front50Service : java.net.SocketTimeoutException: timeout
at okio.Okio$3.newTimeoutException(Okio.java:207)
at okio.AsyncTimeout.exit(AsyncTimeout.java:261)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:215)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:306)
at okio.RealBufferedSource.indexOf(RealBufferedSource.java:300)
at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:196)
at com.squareup.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:186)
at com.squareup.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:127)
at com.squareup.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:739)
at com.squareup.okhttp.internal.http.HttpEngine.access$200(HttpEngine.java:87)
at com.squareup.okhttp.internal.http.HttpEngine$NetworkInterceptorChain.proceed(HttpEngine.java:724)
at com.squareup.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:578)
at com.squareup.okhttp.Call.getResponse(Call.java:287)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at retrofit.client.OkClient.execute(OkClient.java:53)
at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:326)
at retrofit.RestAdapter$RestHandler.access$100(RestAdapter.java:220)
at retrofit.RestAdapter$RestHandler$1.invoke(RestAdapter.java:265)
at retrofit.RxSupport$2.run(RxSupport.java:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at retrofit.Platform$Base$2$1.run(Platform.java:94)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.read(SocketInputStream.java:204)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at okio.Okio$2.read(Okio.java:139)
at okio.AsyncTimeout$2.read(AsyncTimeout.java:211)
... 24 more
2017-11-16 13:52:29.901 INFO 13877 --- [ofit-/pipelines] c.n.s.echo.services.Front50Service : ---- END ERROR
</code></pre>
<p><a href="https://i.stack.imgur.com/CDhC1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CDhC1.png" alt="enter image description here"></a></p>
| <p>@grizzthedj</p>
<p>thanks again for the recommendations. It doesn't seem, however, to have solved the issue. I wonder if it has something to do with my Docker Registry or Kubernetes.
Here is what I have in my .hal/config:</p>
<pre><code>dockerRegistry:
enabled: true
accounts:
- name: <hidden-name>
requiredGroupMembership: []
address: https://docker-registry.<hidden-name>.net/
cacheIntervalSeconds: 30
repositories:
- hellopod
- demoapp
primaryAccount: <hidden-name>
kubernetes:
enabled: true
accounts:
- name: <username>
requiredGroupMembership: []
dockerRegistries:
- accountName: <hidden-name>
namespaces: []
context: sre-os1-dev
namespaces:
- spinnaker
omitNamespaces: []
kubeconfigFile: /home/<username>/.kube/config
</code></pre>
|
<p>I am trying to configure gitlab ci to deploy app to google compute engine. I have succesfully pushed image to gitlab repository but after applying kubernetes deployment config i see following error in kubectl describe pods:</p>
<pre><code>Failed to pull image "registry.gitlab.com/proj/subproj/api:v1": rpc error: code = 2
desc = Error response from daemon: {"message":"Get https://registry.gitlab.com/v2/proj/subproj/api/manifests/v1: unauthorized: HTTP Basic: Access denied"}
</code></pre>
<p>Here is my deployment gitlab-ci job:</p>
<pre><code>docker:
stage: docker_images
image: docker:latest
services:
- docker:dind
script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
- docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
- docker push registry.gitlab.com/proj/subproj/api:v1
only:
- master
dependencies:
- build_java
k8s-deploy:
image: google/cloud-sdk
stage: deploy
script:
- echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
- gcloud auth activate-service-account --key-file key.json
- gcloud config set compute/zone us-central1-c
- gcloud config set project proj
- gcloud config set container/use_client_certificate True
- gcloud container clusters get-credentials proj-cluster
- kubectl delete secret registry.gitlab.com --ignore-not-found
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com/v1/ --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" [email protected]
- kubectl apply -f cloud-kubernetes.yml
</code></pre>
<p>and here is cloud-kubernetes.yml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
name: proj
labels:
app: proj
spec:
type: LoadBalancer
ports:
- port: 8082
name: proj
targetPort: 8082
nodePort: 32756
selector:
app: proj
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: projdeployment
spec:
replicas: 1
template:
metadata:
labels:
app: proj
spec:
containers:
- name: projcontainer
image: registry.gitlab.com/proj/subproj/api:v1
imagePullPolicy: Always
env:
- name: SPRING_PROFILES_ACTIVE
value: "cloud"
ports:
- containerPort: 8082
imagePullSecrets:
- name: registry.gitlab.com
</code></pre>
<p>I have followed <a href="https://about.gitlab.com/2016/12/14/continuous-delivery-of-a-spring-boot-application-with-gitlab-ci-and-kubernetes/" rel="noreferrer">this article</a></p>
| <p>There is a workaround: the image can be pushed to Google Container Registry (GCR) and then pulled from GCR without needing an image pull secret. We can push the image to GCR without the gcloud CLI by using a <a href="https://cloud.google.com/container-registry/docs/advanced-authentication" rel="nofollow noreferrer">JSON key file</a>. So <code>.gitlab-ci.yml</code> could look like:</p>
<pre><code>docker:
stage: docker_images
image: docker:latest
services:
- docker:dind
script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
- docker build -t registry.gitlab.com/proj/subproj/api:v1 -f Dockerfile .
- docker push registry.gitlab.com/proj/subproj/api:v1
- docker tag registry.gitlab.com/proj/subproj/api:v1 gcr.io/proj/api:v1
- docker login -u _json_key -p "$GOOGLE_KEY" https://gcr.io
- docker push gcr.io/proj/api:v1
only:
- master
dependencies:
- build_java
k8s-deploy:
image: google/cloud-sdk
stage: deploy
script:
- echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
- gcloud auth activate-service-account --key-file key.json
- gcloud config set compute/zone us-central1-c
- gcloud config set project proj
- gcloud config set container/use_client_certificate True
- gcloud container clusters get-credentials proj-cluster
- kubectl apply -f cloud-kubernetes.yml
</code></pre>
<p>And the image in <code>cloud-kubernetes.yml</code> should be:</p>
<p>gcr.io/proj/api:v1</p>
|
<p>Created a headless service:</p>
<p>myapp-service-headless.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp-service-headless
spec:
ports:
- port: 8000
selector:
app: myapp
clusterIP: None
</code></pre>
<p>After creating it in the Kubernetes cluster, check its service status:</p>
<pre><code>$ kubectl create -f myapp-service-headless.yaml
$ kubectl describe service myapp-service-headless
Name: myapp-service-headless
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myapp
Type: ClusterIP
IP: None
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 172.17.0.11:8000,172.17.0.9:8000
Session Affinity: None
Events: <none>
</code></pre>
<p>Try to connect <code>172.17.0.11:8000</code> or <code>172.17.0.9:8000</code>, pending and no result.</p>
<p>Here using <code>kube-dns</code>: <code>myapp-service-headless.default.svc.cluster.local</code> in the application. Now it's in the container of pods.</p>
<p>So how can other applications connect to these applications via API? Which IP can be used?</p>
| <p>Did you expose your container port inside your Endpoint (Pod)? </p>
<p>From what I know, a headless service can be used to generate entries in kube-dns based on how you configure the Service, as discussed <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records-and-hostname-based-on-pods-hostname-and-subdomain-fields" rel="nofollow noreferrer">in this doc</a>.</p>
<p>However, you can create another Service with <code>type: NodePort</code> that selects your backend Pods, and expose them on a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types" rel="nofollow noreferrer">NodePort/LB</a>.</p>
|
<p>I would like to bind a PersistentVolumeClaim to a gcePersistentDisk PersistentVolume. Below are the steps I took:</p>
<h1>1. Creation of the gcePersistentDisk:</h1>
<p><code>gcloud compute disks create --size=2GB --zone=us-east1-b gce-nfs-disk</code></p>
<h1>2. Definition of the PersistentVolume and the PersistentVolumeClaim</h1>
<pre><code># pv-pvc.yml
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: gce-nfs-disk
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pvc
labels:
app: test
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
</code></pre>
<p>After running <code>kubectl apply -f pv-pvc.yml</code>, the <code>nfs-pvc</code> is not bound with <code>nfs-pv</code>. In fact, below is the list of the PersistentVolume and PersistentVolumeClaim I have:</p>
<pre><code>$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv 2Gi RWO Retain Available 30s
pvc-16e4cdf2-cd3d-11e7-83ae-42010a8e0243 2Gi RWO Delete Bound default/nfs-pvc standard 26s
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc Bound pvc-16e4cdf2-cd3d-11e7-83ae-42010a8e0243 2Gi RWO standard 59s
</code></pre>
<p>The obtained PersistentVolume is a volume on the disk of the node I created on Google Container Engine.
So, have I missed something?</p>
<p>PS: the version of kubernetes</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.8-gke.0", GitCommit:"a7061d4b09b53ab4099e3b5ca3e80fb172e1b018", GitTreeState:"clean", BuildDate:"2017-10-10T18:48:45Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>I found the solution.</p>
<p>Below the new definitions of the PV and PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
labels:
    app: test # the label has been added to make sure the binding works as expected
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: gce-nfs-disk
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pvc
labels:
app: test
spec:
accessModes:
- ReadWriteOnce
storageClassName: "" # the storageClassName has to be specified
resources:
requests:
storage: 2Gi
selector:
matchLabels:
app: test
</code></pre>
<p>After these modifications, the binding worked:</p>
<pre><code>$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc Bound nfs-pv 2Gi RWO 8s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv 2Gi RWO Retain Bound default/nfs-pvc 22m
</code></pre>
<p>I hope it will help.</p>
|
<p>What I understood by the documentation is that:</p>
<ul>
<li>
<pre class="lang-sh prettyprint-override"><code>kubectl create
</code></pre>
Creates a new k8s resource in the cluster</li>
<li>
<pre class="lang-sh prettyprint-override"><code>kubectl replace
</code></pre>
Updates a resource in the live cluster</li>
<li>
<pre class="lang-sh prettyprint-override"><code>kubectl apply
</code></pre>
If I want to do create + replace (<em><a href="https://kubernetes.io/docs/user-guide/kubectl-overview/" rel="noreferrer">Reference</a></em>)</li>
</ul>
<p><strong>My questions are</strong></p>
<ol>
<li>Why are there three operations for doing the same task in a cluster?</li>
<li>What are the use cases for these operations?</li>
<li>How do they differ from each other under the hood?</li>
</ol>
| <p>Those are two different approaches:</p>
<h3>Imperative Management</h3>
<p><code>kubectl create</code> is what we call <a href="https://kubernetes.io/docs/tutorials/object-management-kubectl/imperative-object-management-configuration/" rel="noreferrer">Imperative Management</a>. With this approach you tell the Kubernetes API what you want to create, replace or delete, not how you want your K8s cluster world to look.</p>
<h3>Declarative Management</h3>
<p><code>kubectl apply</code> is part of the <a href="https://kubernetes.io/docs/tutorials/object-management-kubectl/declarative-object-management-configuration/" rel="noreferrer">Declarative Management</a> approach, where changes that you may have applied to a live object (i.e. through <code>scale</code>) are "<strong>maintained</strong>" even if you <code>apply</code> other changes to the object.</p>
<blockquote>
<p>You can read more about imperative and declarative management in the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/object-management/" rel="noreferrer">Kubernetes Object Management</a> documentation.</p>
</blockquote>
<h3>In layman's terms: they do different things. If the resource exists, <code>kubectl create</code> will error out and <code>kubectl apply</code> will not error out.</h3>
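<p>A quick sketch of that difference (<code>deploy.yaml</code> is a placeholder manifest file):</p>
<pre><code>kubectl create -f deploy.yaml    # fails with "AlreadyExists" if the object is already there
kubectl apply -f deploy.yaml     # creates it the first time, patches it on later runs
kubectl replace -f deploy.yaml   # fails unless the object already exists; swaps in the full new spec
</code></pre>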
|
<p>I have an asp.net core 2.0 application whose docker image runs fine locally, but when that same image is deployed to an AKS cluster, the pods have a status of <em>CrashLoopBackOff</em> and the pod log shows:</p>
<blockquote>
<p>Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
<a href="http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409" rel="nofollow noreferrer">http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409</a>.</p>
</blockquote>
<p>And since you can't ssh to AKS clusters, it's pretty difficult to figure this out?</p>
<p><strong>Dockerfile</strong>:</p>
<pre><code>FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY . .
EXPOSE 80
ENTRYPOINT ["dotnet", "myapi.dll"]
</code></pre>
| <p>Turned out that our build system wasn't putting the app code into the container as we thought. Since the container wasn't runnable, I didn't know how to inspect its contents until I found this command which is a lifesaver for these kinds of situations:</p>
<p><code>docker run --rm -it --entrypoint=/bin/bash [image_id]</code></p>
<p>... at which point you can freely inspect/verify the contents of the container.</p>
|
<p>I'm pulling my hair out for a week but I am close to giving up. Please share your wisdom. </p>
<p>This is my Docker file: </p>
<pre><code>FROM node
RUN apt-get update
RUN mkdir -p /var/www/stationconnect
RUN mkdir -p /var/log/node
WORKDIR /var/www/stationconnect
COPY stationconnect /var/www/stationconnect
RUN chown node:node /var/log/node
COPY ./stationconnect_fromstage/api/config /var/www/stationconnect/api/config
COPY ./etc/stationconnect /etc/stationconnect
WORKDIR /var/www/stationconnect/api
RUN cd /var/www/stationconnect/api
RUN npm install
RUN apt-get install -y vim nano
RUN npm install supervisor forever -g
EXPOSE 8888
USER node
WORKDIR /var/www/stationconnect/api
CMD ["bash"]
</code></pre>
<p>It works fine in docker alone running e.g.</p>
<pre><code>docker run -it 6bcee4528c7c
</code></pre>
<p>Any advice?</p>
| <p>When you create a container, you should have a foreground process to keep the container alive.</p>
<p>What I've done is add a shell script line
<code>while true; do sleep 1000; done</code> at the end of my <strong>docker-entrypoint.sh</strong>, and refer to it in <code>ENTRYPOINT ["/docker-entrypoint.sh"]</code></p>
<p>Take a look at <a href="https://stackoverflow.com/questions/25775266/how-to-keep-docker-container-running-after-starting-services">this issue</a> to find out more.</p>
<p>There's an <a href="https://nodejs.org/en/docs/guides/nodejs-docker-webapp/" rel="nofollow noreferrer">example</a> of how to make a Node.js Dockerfile; be sure to check it out.</p>
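<p>For this particular Dockerfile, running the app itself as the foreground process instead of <code>bash</code> would also keep the container alive — a sketch, assuming the API's entry point is <code>server.js</code>:</p>
<pre><code># run the API as the container's main process so the container lives as long as the app does
CMD ["node", "server.js"]
</code></pre>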
|
<p>I'm trying to follow <a href="https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/azure.md" rel="nofollow noreferrer">this guide</a> to setting up a K8s cluster with external-dns' Azure DNS provider.
The guide states that:</p>
<blockquote>
<p>When your Kubernetes cluster is created by ACS, a file named <code>/etc/kubernetes/azure.json</code> is created to store the Azure credentials for API access. Kubernetes uses this file for the Azure cloud provider.</p>
</blockquote>
<p>When I create a cluster using aks (e.g. <code>az aks create --resource-group myResourceGroup --name myK8sCluster --node-count 1 --generate-ssh-keys</code>) this file doesn't exist.</p>
<p>Where do the API credentials get stored when using AKS?</p>
<p>Essentially I'm trying to work out where to point this command:</p>
<p><code>kubectl create secret generic azure-config-file --from-
file=/etc/kubernetes/azure.json</code></p>
| <p>From what I can see when using AKS the <code>/etc/kubernetes/azure.json</code> doesn't get created. As an alternative I followed the instructions for use with non Azure hosted sites and created a service principal (<a href="https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/azure.md#optional-create-service-principal" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/azure.md#optional-create-service-principal</a>)</p>
<p>Creating the service principal produces some json that contains most of the detail. This can be used to manually create the azure.json file and the secret can be created from it.</p>
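<p>A sketch of that path (the service principal name is a placeholder):</p>
<pre><code># returns appId, password and tenant values to put into a hand-written azure.json
az ad sp create-for-rbac -n external-dns-sp

# then create the secret the tutorial expects, from the file you wrote
kubectl create secret generic azure-config-file --from-file=azure.json
</code></pre>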
|
<p>The last (3rd) container is continuously being deleted and recreated by Kubernetes. It goes from Running to Terminating state. The Kubernetes UI shows the status as: 'Terminated: ExitCode:${state.terminated.exitCode}' </p>
<p>My deployment YAML:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: openapi
spec:
scaleTargetRef:
kind: Deployment
name: openapi
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 75
---
kind: Service
apiVersion: v1
metadata:
name: openapi
spec:
selector:
app: openapi
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: 8443
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: openapi
spec:
template:
metadata:
labels:
app: openapi
spec:
containers:
- name: openapi
image: us.gcr.io/PROJECT_ID/openapi:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
</code></pre>
<p>Portion of Output of <code>kubectl get events -n namespace</code>: </p>
<pre><code>Pod Normal Created kubelet Created container
Pod Normal Started kubelet Started container
Pod Normal Killing kubelet Killing container with id docker://openapi:Need to kill Pod
ReplicaSet Normal SuccessfulCreate replicaset-controller (combined from similar events): Created pod: openapi-7db5f8d479-p7mcl
ReplicaSet Normal SuccessfulDelete replicaset-controller (combined from similar events): Deleted pod: openapi-7db5f8d479-pgmxf
HorizontalPodAutoscaler Normal SuccessfulRescale horizontal-pod-autoscaler New size: 2; reason: Current number of replicas above Spec.MaxReplicas
HorizontalPodAutoscaler Normal SuccessfulRescale horizontal-pod-autoscaler New size: 3; reason: Current number of replicas below Spec.MinReplicas
Deployment Normal ScalingReplicaSet deployment-controller Scaled up replica set openapi-7db5f8d479 to 3
Deployment Normal ScalingReplicaSet deployment-controller Scaled down replica set openapi-7db5f8d479 to 2
</code></pre>
<p><code>kubectl describe pod -n default openapi-7db5f8d479-2d2nm</code> for a pod that spawned and was killed:</p>
<p>A different pod with a different unique id spawns each time after a pod gets killed by Kubernetes.</p>
<pre><code>Name: openapi-7db5f8d479-2d2nm
Namespace: default
Node: gke-testproject-default-pool-28ce3836-t4hp/10.150.0.2
Start Time: Thu, 23 Nov 2017 11:50:17 +0000
Labels: app=openapi
pod-template-hash=3861948035
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"openapi-7db5f8d479","uid":"b7b3e48f-ceb2-11e7-afe7-42010a960003"...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container openapi
Status: Terminating (expires Thu, 23 Nov 2017 11:51:04 +0000)
Termination Grace Period: 30s
IP:
Created By: ReplicaSet/openapi-7db5f8d479
Controlled By: ReplicaSet/openapi-7db5f8d479
Containers:
openapi:
Container ID: docker://93d2f1372a7ad004aaeb34b0bc9ee375b6ed48609f505b52495067dd0dcbb233
Image: us.gcr.io/testproject-175705/openapi:latest
Image ID: docker-pullable://us.gcr.io/testproject-175705/openapi@sha256:54b833548cbed32db36ba4808b33c87c15c4ecde673839c3922577f30b
Port: 8080/TCP
State: Terminated
Reason: Error
Exit Code: 143
Started: Thu, 23 Nov 2017 11:50:18 +0000
Finished: Thu, 23 Nov 2017 11:50:35 +0000
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-61k6c (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-61k6c:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-61k6c
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned openapi-7db5f8d479-2d2nm to gke-testproject-default-pool-28ce3836-t4hp
Normal SuccessfulMountVolume 21s kubelet, gke-testproject-default-pool-28ce3836-t4hp MountVolume.SetUp succeeded for volume "default-token-61k6c"
Normal Pulling 21s kubelet, gke-testproject-default-pool-28ce3836-t4hp pulling image "us.gcr.io/testproject-175705/openapi:latest"
Normal Pulled 20s kubelet, gke-testproject-default-pool-28ce3836-t4hp Successfully pulled image "us.gcr.io/testproject-175705/openapi:latest"
Normal Created 20s kubelet, gke-testproject-default-pool-28ce3836-t4hp Created container
Normal Started 20s kubelet, gke-testproject-default-pool-28ce3836-t4hp Started container
Normal Killing 3s kubelet, gke-testproject-default-pool-28ce3836-t4hp Killing container with id docker://openapi:Need to kill Pod
</code></pre>
| <p>Check the pod events and description using the commands below:
<code>kubectl get events -w -n namespace</code>
and
<code>kubectl describe pod -n namespace pod_name</code></p>
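<p>Since the container exits with code 143 (SIGTERM), it is also worth checking what the container itself printed before it was killed — a sketch using the pod name from your describe output (substitute whichever openapi pod is current):</p>
<pre><code>kubectl logs -n default openapi-7db5f8d479-2d2nm --previous
</code></pre>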
|
<p>Requests that our app needs to serve exhibit great variance (unknown in advance) of processing latencies (from few seconds to hours). </p>
<p>We'd like to use kubernetes autoscaling capabilities but it is not clear how to deal with random pod termination policy during downscaling (as it comes at odds with our desire to not terminate long running requests being processed).</p>
<p>Wondering if anybody else has seen similar situations? what solutions did you come up with?</p>
| <p>One of the things you can do is build termination handling into your app and set a rather long termination grace period. You can find a nice explanation of this topic in <a href="https://pracucci.com/graceful-shutdown-of-kubernetes-pods.html" rel="nofollow noreferrer">https://pracucci.com/graceful-shutdown-of-kubernetes-pods.html</a></p>
<p>This does not completely prevent long-running requests from being killed. To be honest, nothing will. Yet it does significantly limit the impact of events like scaling on this type of workload.</p>
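<p>A rough sketch of what that looks like in a Pod spec (the number and names are placeholders, not a recommendation):</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 3600   # give in-flight requests up to an hour to finish
  containers:
  - name: worker
    image: example/worker:latest        # placeholder image
</code></pre>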
|
<p>I have a setup a config map called kube-dns in the kube-system NS, with some custom upstream DNS entries (3 x of them), but my containers/pods are not inheriting these custom DNS entries in their resolv.conf files when I schedule them through a simple deployment.</p>
<p>My logs in the kube-dns pods don't seem to be pointing at the name of the config map (it looks like an empty string). Could this be the problem?</p>
<p>After adding the custom config map, I did delete the kube-dns pods, and allowed the existing kube-dns deployment to re-create the pods (there are 2 x sets of kube-dns pods that were terminated, and re-created).</p>
<p>I used this guide to set up my config map (I a blog post around the feature that was introduced with 1.6):</p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configmap-options" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configmap-options</a></p>
<p><a href="http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html" rel="nofollow noreferrer">http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html</a></p>
<p>Here is my config map:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
data:
stubDomains: |
{"myinternaldomainhere.net": ["10.254.131.155"]}
upstreamNameservers: |
["10.254.131.155", "8.8.8.8", "8.8.4.4"]
</code></pre>
<p>Is there somewhere else that I need to specify that the deployment of kube-dns references the name of the config map? In the pod logs, I can see the flag for the config map names seems to be an empty string.</p>
<p>Logs for the new kubedns pods that I looked at after deleting the old pods say (notice line 5 is where I see the empty string reference):</p>
<pre><code>I1110 16:35:35.685518 1 dns.go:48] version: 1.14.4-2-g5584e04
I1110 16:35:35.686074 1 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I1110 16:35:35.686136 1 server.go:113] FLAG: --alsologtostderr="false"
I1110 16:35:35.686148 1 server.go:113] FLAG: --config-dir="/kube-dns-config"
I1110 16:35:35.686152 1 server.go:113] FLAG: --config-map=""
I1110 16:35:35.686155 1 server.go:113] FLAG: --config-map-namespace="kube-system"
I1110 16:35:35.686158 1 server.go:113] FLAG: --config-period="10s"
I1110 16:35:35.686161 1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I1110 16:35:35.686164 1 server.go:113] FLAG: --dns-port="10053"
I1110 16:35:35.686192 1 server.go:113] FLAG: --domain="cluster.local."
I1110 16:35:35.686196 1 server.go:113] FLAG: --federations=""
I1110 16:35:35.686200 1 server.go:113] FLAG: --healthz-port="8081"
I1110 16:35:35.686202 1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I1110 16:35:35.686205 1 server.go:113] FLAG: --kube-master-url=""
I1110 16:35:35.686208 1 server.go:113] FLAG: --kubecfg-file=""
I1110 16:35:35.686211 1 server.go:113] FLAG: --log-backtrace-at=":0"
I1110 16:35:35.686236 1 server.go:113] FLAG: --log-dir=""
I1110 16:35:35.686250 1 server.go:113] FLAG: --log-flush-frequency="5s"
I1110 16:35:35.686257 1 server.go:113] FLAG: --logtostderr="true"
I1110 16:35:35.686260 1 server.go:113] FLAG: --nameservers=""
I1110 16:35:35.686262 1 server.go:113] FLAG: --stderrthreshold="2"
I1110 16:35:35.686275 1 server.go:113] FLAG: --v="2"
I1110 16:35:35.686281 1 server.go:113] FLAG: --version="false"
I1110 16:35:35.686286 1 server.go:113] FLAG: --vmodule=""
I1110 16:35:35.686459 1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I1110 16:35:35.686713 1 server.go:198] Skydns metrics enabled (/metrics:10055)
I1110 16:35:35.686724 1 dns.go:147] Starting endpointsController
I1110 16:35:35.686728 1 dns.go:150] Starting serviceController
I1110 16:35:35.686924 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I1110 16:35:35.686937 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I1110 16:35:36.187196 1 dns.go:171] Initialized services and endpoints from apiserver
I1110 16:35:36.187215 1 server.go:129] Setting up Healthz Handler (/readiness)
I1110 16:35:36.187221 1 server.go:134] Setting up cache handler (/cache)
I1110 16:35:36.187226 1 server.go:120] Status HTTP port 8081
</code></pre>
<p>The pod I am testing this against is setup to use ClusterFirst DNS type which is what the docs say I need for this to work.</p>
<p>Any advice or pointers as to where to look for the issue would be most welcome.</p>
| <p>I also read those blogs/tutorials and they seem to fail to mention that you need to run <code>kubectl apply -f your-dns-configmap.yaml</code> on your config map.</p>
<p>I'd run the following: <code>kubectl get cm --namespace="kube-system"</code> to see if your config map is actually there or not. It should turn up with the name kube-dns.</p>
<p>I'm running into another issue and your post inspired me to check the kube-dns logs, at the top of my logs it says: <code>1 server.go:66] Using configuration read from ConfigMap: kube-system:kube-dns</code> though your logs are saying <code>Using configuration read from directory: /kube-dns-config</code>.</p>
<p>So it appears your kube-dns is definitely not finding the config map and is defaulting to some config in the kube-dns containers filesystem. I'd definitely try run that apply and get cm command and then try again to see what happens.</p>
|
<p>I've been trying to conceptualise a website that allows users to play short audio samples (about 10 in total on the entire site). </p>
<p>I've become quite interested in containers but don't have a great deal of experience in using them (hence the reason i'm considering this idea- to learn more).</p>
<p>I want to containerize/dockerize the website and was just wondering what the normal practice would be in terms of where the audio is stored.
The containers would be running on a virtual machine in the cloud. Would the audio files generally be within the container (i.e. built into the image and present within the containers filesystem when the containers is run) or stored on the VM/node the container runs on and then referenced by the container? What would be the most common practice?</p>
| <p>That really depends on what you see fit for your use case. If you have a small number of rather small samples that do not change over time (without an image rebuild), it makes perfect sense to bake them into the image. On the other side, if you want to e.g. manage uploads via your website, or even just upload/manage the files outside of your container and have these changes persisted over container restarts, then you need to mount external storage as a volume and keep the files there.</p>
|
<p>I just deployed a managed Kubernetes cluster with Azure Container Service. My deployment includes a single agent machine over the managed cluster and an Azure disk attached to it for persistent storage.</p>
<p>The problem I am facing is that I don't know how to ssh this agent server. I read that you should be able to ssh the master node and connect to the agent from there but as I am using a managed Kubernetes master I can't find the way of doing this.</p>
<p>Any idea? Thank you in advance.</p>
| <blockquote>
<p>The problem I am facing is that I don't know how to ssh this agent
server.</p>
</blockquote>
<p>Do you mean you created an AKS cluster and can't find the master VM?</p>
<p>If I understand it correctly, that is by-design behavior: AKS does <strong>not</strong> provide direct access (such as SSH) to the cluster.</p>
<p>If you want to SSH to the agent node, as a workaround, we can create <strong>a public IP address</strong> and <strong>associate</strong> this public IP address with the agent's NIC; then we can SSH to this agent.</p>
<p>Here are my steps:</p>
<p>1.<strong>Create</strong> Public IP address via Azure portal:</p>
<p><a href="https://i.stack.imgur.com/PU5kb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PU5kb.png" alt="enter image description here"></a></p>
<p>2.<strong>Associate</strong> the public IP address to the agent VM's NIC:</p>
<p><a href="https://i.stack.imgur.com/NFym7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NFym7.png" alt="enter image description here"></a></p>
<p>3.<strong>SSH</strong> to this VM with this public IP address:</p>
<p><a href="https://i.stack.imgur.com/lDLa6.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lDLa6.png" alt="enter image description here"></a></p>
<p>Note:</p>
<p>By default, we can find ssh key when we try to create AKS, like this:</p>
<p><a href="https://i.stack.imgur.com/BOiz0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BOiz0.png" alt="enter image description here"></a></p>
|
<p>I was just wondering, why it is useful to run etcd cluster <a href="https://github.com/kubernetes/charts/tree/master/incubator/etcd" rel="nofollow noreferrer">inside Kubernetes</a>, when Kubernetes itself <a href="https://kubernetes.io/docs/concepts/overview/components/#etcd" rel="nofollow noreferrer">depends on etcd</a>. </p>
<p>It just does not make sense to me, as if I have HA Kube, I am also forced to have HA etcd outside. Hence to reason to install it again inside... </p>
| <p>I have an external etcd cluster that manages my k8s HA cluster, and I'm not letting any developer apps near it. I would be too concerned about something going wrong and breaking the k8s cluster. It is also a fixed size of 3, which works well for the cluster size and its requirements. If the developers need a key/value store for their db and want etcd, this would be a great way to make one in the cluster for the applications. With it being a StatefulSet, it's scalable. </p>
|
<p>I'm writing a custom controller for Kubernetes.
I'm creating a shared informer:</p>
<pre><code>cache.NewSharedIndexInformer(
&cache.ListWatch{
ListFunc: func(options meta_v1.ListOptions) (k8sruntime.Object, error) {
return client.CoreV1().ConfigMaps(nameSpace).List(options)
},
WatchFunc: func(options meta_v1.ListOptions) (watch.Interface, error) {
return client.CoreV1().ConfigMaps(nameSpace).Watch(options)
},
},
&api_v1.ConfigMap{},
0, //Skip resyncr
cache.Indexers{},
)
</code></pre>
<p>I have the option to add a filtering function to the callback functions to further decrease the number of objects I'm working with.
Something like this:</p>
<pre><code>options.FieldSelector := fields.OneTermEqualSelector("metadata.name", nodeName).String()
</code></pre>
<p>I would like to filter out objects by regular expression, or by some label at least. Unfortunately the documentation is not helping; I could not find anything except the tests for the code itself.
How do I apply a regular expression to the filtering mechanism?
Where do I get some examples on this issue?</p>
| <p>It's not possible to filter objects by regular expression.
It is possible to filter objects by label.</p>
<p>This is the code that will filter by label:</p>
<pre><code>labelSelector := labels.Set(map[string]string{"mylabel": "ourdaomain1"}).AsSelector()
informer := cache.NewSharedIndexInformer(
&cache.ListWatch{
ListFunc: func(options meta_v1.ListOptions) (k8sruntime.Object, error) {
options.LabelSelector = labelSelector.String()
return client.CoreV1().ConfigMaps(nameSpace).List(options)
},
WatchFunc: func(options meta_v1.ListOptions) (watch.Interface, error) {
options.LabelSelector = labelSelector.String()
return client.CoreV1().ConfigMaps(nameSpace).Watch(options)
},
},
&api_v1.ConfigMap{},
0, //Skip resyncr
cache.Indexers{},
)
</code></pre>
<p>Another thing that is important to remember is how you add new objects to k8s.
I was doing something like </p>
<pre><code>kubectl --namespace==ourdomain1 create configmap config4 -f ./config1.yaml
</code></pre>
<p>This is not good. It overwrites all the fields in the config map and puts the whole file content into the data of the new object.
The proper way is </p>
<pre><code>kubectl create -f ./config1.yaml
</code></pre>
|
<p>When I create a deployment and a service in a Kubernetes Engine in GCP I get connection refused for no apparent reason.</p>
<p>The service creates a Load Balancer in GCP and all corresponding firewall rules are in place (allows traffic to port 80 from <code>0.0.0.0/0</code>). The underlying service is running fine, when I <code>kubectl exec</code> into the pod and <code>curl localhost:8000/</code> I get the correct response.</p>
<p>This deployment setting used to work just fine for other images, but yesterday and today I keep getting</p>
<pre><code>curl: (7) Failed to connect to 35.x.x.x port 80: Connection refused
</code></pre>
<p>What could be the issue? I tried deleting and recreating the service multiple times, with no luck.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-app
spec:
selector:
app: app
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 8000
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: my-app
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- name: my-app
image: gcr.io/myproject/my-app:0.0.1
imagePullPolicy: Always
ports:
- containerPort: 8000
</code></pre>
| <p>This turned out to be a dumb mistake on my part. The gunicorn server was using a bind to <code>127.0.0.1</code> instead of <code>0.0.0.0</code>, so it wasn't accessible from outside of the pod, but worked when I <code>exec</code>-ed into the pod.</p>
<p>The fix in my case was changing the entrypoint of the Dockerfile to</p>
<pre><code>CMD [ "gunicorn", "server:app", "-b", "0.0.0.0:8000", "-w", "3" ]
</code></pre>
<p>rebuilding the image and updating the deployment.</p>
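<p>For anyone hitting the same thing, the rebuild/redeploy step looks roughly like this; the tag <code>0.0.2</code> is just an example, and the deployment/container names come from the manifest in the question:</p>

<pre><code># rebuild and push the fixed image
docker build -t gcr.io/myproject/my-app:0.0.2 .
gcloud docker -- push gcr.io/myproject/my-app:0.0.2   # or plain "docker push" if docker is already authenticated for gcr.io

# point the existing deployment at the new tag and watch the rollout
kubectl set image deployment/my-app my-app=gcr.io/myproject/my-app:0.0.2
kubectl rollout status deployment/my-app
</code></pre>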
|
<p>It seems there are two ways to logically group pods, which are:</p>
<ul>
<li>using labels to identify organizational units in a loosely coupled fashion</li>
<li>creating namespaces</li>
</ul>
<p>What are exactly the differences and use cases?</p>
| <p>Let's begin with a cluster consisting of a number of nodes (virtual machines or physical machines). Now we will divide our cluster in a logical way. </p>
<p>A <strong>Namespace</strong> is a virtual or logical cluster. It helps us organise projects or environments (development, testing, staging, and production environments).
By using <strong>Namespaces</strong>, you can limit the resource quota per namespace. You want to be sure that the production environment has more than enough resources, so it won't starve to death.</p>
<p><strong>Labels</strong> are a powerful concept in Kubernetes. A label is a key-value pair which is assigned to Kubernetes resources such as Pods, ReplicaSets, Nodes, etc. It is used to organise pods. For instance, a ReplicaSet or Service can select the pods in a k8s cluster by using labels and perform an operation on them, such as increasing the number of pods. </p>
<p>I have attached links for further reading: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">Labels and Selectors</a> and <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="noreferrer">Namespaces</a>. </p>
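<p>To make the difference concrete, here is a small illustrative kubectl session (the namespace and label names are made up):</p>

<pre><code># namespaces partition the cluster into logical environments
kubectl create namespace staging
kubectl create namespace production

# labels tag resources inside (or across) those namespaces...
kubectl run myapp --image=nginx --namespace=staging --labels="app=myapp,tier=frontend"

# ...so other objects and queries can select them
kubectl get pods --namespace=staging -l app=myapp,tier=frontend
</code></pre>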
|
<p>Action:</p>
<p>Tried updating the kubernetes-dashboard in k8s hosted in azure acs with image <code>gcrio.azureedge.net/google_containers/kubernetes-dashboard-amd64</code> from version <code>v1.6.3</code> to <code>v1.7.1</code> (latest).</p>
<p>Problem:</p>
<p>The image version, when edited either with kubectl or UI, is not getting reflected/updated.</p>
<p>Question: </p>
<p>Is there any way to update the image version ?</p>
| <p>It will be getting updated, but then reverted by the k8s addon manager. If you ssh into a master, the templates for the addon services live in <code>/etc/kubernetes/addons</code>.</p>
<p>To upgrade the image you can edit <code>/etc/kubernetes/addons/kubernetes-dashboard-deployment.yaml</code> and change <code>image</code> inside the <em>Deployment</em> spec.</p>
<p>Your change should be picked up in a few seconds. </p>
|
<p>I have an azure container service (aks) cluster. It is migrated to version 1.8.1. I am trying to deploy postgres database and use <code>AzureFileVolume</code> to persist postgres data on.</p>
<p>By default, if I deploy the postgres database without mounting a volume, everything is working as expected, i.e. the pod is created and the database is initialized.</p>
<p>When I try to mount a volume using the yaml below, I get <strong>initdb: could not access directory "/var/lib/postgresql/data": Permission denied</strong>.</p>
<p>I tried various hacks as suggested in this long <a href="https://github.com/kubernetes/kubernetes/issues/2630" rel="nofollow noreferrer">github thread</a>, like: setting security context for the pod or running chown commands in <em>initContainers</em>. The result was the same - permission denied.</p>
<p>Any ideas would be appreciated.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: myapp
component: test-db
name: test-db
spec:
ports:
- port: 5432
selector:
app: myapp
component: test-db
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: test-db
spec:
template:
metadata:
labels:
app: myapp
component: test-db
spec:
securityContext:
fsGroup: 999
runAsUser: 999
containers:
- name: test-db
image: postgres:latest
securityContext:
allowPrivilegeEscalation: false
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: myappdb
- name: POSTGRES_USER
value: myappdbuser
- name: POSTGRES_PASSWORD
value: qwerty1234
volumeMounts:
- name: azure
mountPath: /var/lib/postgresql/data
volumes:
- name: azure
azureFile:
secretName: azure-secret
shareName: acishare
readOnly: false
</code></pre>
| <p>We came across the same problems and figured out the following solution:</p>
<p>Instead of using an <em>AzureFileVolume</em>, we used an <strong>AzureDisk</strong>. So what we needed in Kubernetes is the following...</p>
<p><strong>Storage Class</strong></p>
<p><a href="https://i.stack.imgur.com/uvMBA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uvMBA.jpg" alt="enter image description here"></a></p>
<p>With your Azure account name</p>
<p><strong>Persistent Volume Claim</strong></p>
<p><a href="https://i.stack.imgur.com/zKMLl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zKMLl.jpg" alt="Persistent Volume Claim"></a></p>
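<p>In case the screenshots above are hard to read, the two objects look roughly like the following; the class name and disk SKU are assumptions, and the PVC name matches the claim used in the deployment snippet below:</p>

<pre><code># StorageClass backed by Azure managed disks (SKU and kind are assumptions)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk-standard
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
# PVC referenced by the deployment as "pvc-postgresdb"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgresdb
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: azure-disk-standard
  resources:
    requests:
      storage: 30Gi
</code></pre>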
<p><strong>PostgreSQL Deployment</strong>
Include PVC in the Kubernetes Deployment</p>
<pre><code>- name: postgres-db
persistentVolumeClaim:
claimName: pvc-postgresdb
</code></pre>
<p>Additionally, we need to point the PGDATA variable to a subdirectory of the mounted directory, because Azure creates some issues with the AzureDisk type in the base directory.</p>
<pre><code>#... env definitions ...
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
volumeMounts:
- mountPath: /var/lib/postgresql/data/
name: postgres-db
</code></pre>
|
<p>I am running kubernetes(v1.7) and flannel(v0.9.0) which was installed using kubeadm.</p>
<p>I want to know that-</p>
<ol>
<li>How does a node get a subnet?</li>
<li>Where are all allocated subnets stored and how I can see them?</li>
<li>How does flannel interact with kubernetes?</li>
</ol>
<p>Thanks,</p>
| <ol>
<li>Flannel provides the pod IP addresses. The network range is defined in the <code>subnet.env</code> file:</li>
</ol>
<blockquote>
<pre><code># cat /var/run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
</code></pre>
</blockquote>
<ol start="2">
<li><p>Allocated subnets are stored in the etcd database on the master node. You can query the API server to view them (see the kubectl example at the end of this answer).</p></li>
<li><p>Flannel is a virtual network that provides subnets for pods. When Kubernetes starts a pod, it gets an IP address from flannel and assigns it to the pod.</p></li>
</ol>
<p>You can look at the network configuration like this from the etcd database:</p>
<pre><code> export ETCDCTL_API=3; etcdctl get "/registry/configmaps/kube-system/kubeadm-config" --prefix=true
</code></pre>
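<p>Alternatively, you can ask the API server directly for the subnet (podCIDR) that was allocated to each node, without touching etcd, for example:</p>

<pre><code>kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR
</code></pre>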
|
<ol>
<li>Is this even possible with helm? </li>
<li>What options for security are possible (BasicAuth? Client Certs? OAuth2?)</li>
<li>Is there any example / documentation about this (both server and client side)?</li>
</ol>
| <p><strong>Helm Private Repository using private GitHub repositories</strong>.<br/></p>
<p><strong>Steps:</strong></p>
<ol>
<li>Create a private GitHub repository named <code>private-helm-registry</code> or any.</li>
<li>Place your Raw Helm Charts.</li>
<li><p>Create a <code>charts</code> folder and place <code>index.yaml</code> and packaged charts in <code>.tgz</code> format inside this folder.</p></li>
<li><p>Create a GitHub <a href="https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/" rel="noreferrer">Personal Access Token</a> with read-only access.</p></li>
<li><p>Add your Repository to helm using the following command:</p>
<pre><code>$ helm repo add helm-registry 'https://<git_hub_personal_access_token>@raw.githubusercontent.com/myGitHubAccountName/private-helm-registry/master/charts/'
"helm-registry" has been added to your repositories
</code></pre>
<p><strong>Note:</strong> </p>
<pre><code>1. Enclose the Url with single quotes ' '.
2. The trailing / is mandatory.
</code></pre></li>
<li><p>To add development or other branches as helm repository, use branch name:</p>
<pre><code>$ helm repo add helm-registry-dev 'https://<git_hub_personal_access_token>@raw.githubusercontent.com/myGitHubAccountName/private-helm-registry/<branch>/charts/'
"helm-registry-dev" has been added to your repositories
</code></pre></li>
</ol>
<p>Explore more at: <a href="https://medium.com/@kavehmz/using-a-private-github-repo-as-helm-chart-repo-https-access-95629b2af27c" rel="noreferrer">Using a private github repo as helm chart repo</a>.</p>
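<p>After adding the repository, using it looks roughly like this (the chart name is a placeholder; these are Helm 2-era commands, and newer Helm versions rename some of them, e.g. <code>helm search repo</code> and <code>helm pull</code>):</p>

<pre><code>$ helm repo update
$ helm search helm-registry/                # list charts in the private repo
$ helm fetch helm-registry/my-chart         # download the packaged chart
$ helm install helm-registry/my-chart       # or install it directly
</code></pre>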
|
<p>I'm using the hosted version of gitlab and gitlab-ci and following the kubernetes integration setup <a href="https://docs.gitlab.com/ce/user/project/integrations/kubernetes.html" rel="noreferrer">https://docs.gitlab.com/ce/user/project/integrations/kubernetes.html</a>.</p>
<p>I'm struggling to find what to enter as the Kubernetes API URL for my gcloud hosted kubernetes instance. I assume I'm missing something obvious :-(</p>
| <pre><code>kubectl cluster-info
</code></pre>
<p>or</p>
<pre><code>cat ~/.kube/config | grep server
</code></pre>
|
<p>I'm trying to access the Kubernetes API in order to discover pods from within a deployed container. Although I'll do this programatically, right now, I'm just using cURL to check for issues.</p>
<p>I run this from a pod terminal:</p>
<pre><code>curl -vvv -H "Authorization: Bearer $(</var/run/secrets/kubernetes.io/serviceaccount/token)" "https://kubernetes.default/api/v1/namespaces/$(</var/run/secrets/kubernetes.io/serviceaccount/namespace)/endpoints" --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
</code></pre>
<p>And I get a 403 result:</p>
<pre><code>* About to connect() to kubernetes.default port 443 (#0)
* Trying 172.30.0.1...
* Connected to kubernetes.default (172.30.0.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
CApath: none
* NSS: client certificate not found (nickname not specified)
* SSL connection using TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
* Server certificate:
* subject: CN=10.0.75.2
* start date: Nov 23 16:55:27 2017 GMT
* expire date: Nov 23 16:55:28 2019 GMT
* common name: 10.0.75.2
* issuer: CN=openshift-signer@1511456125
> GET /api/v1/namespaces/myproject/endpoints HTTP/1.1 s/$(</var/run/secrets/kubernetes.io/serv
> User-Agent: curl/7.29.0
> Host: kubernetes.default
> Accept: */*> Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJteXByb2plY3QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi00cXZidCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMjg3NzAzYjEtZDA4OC0xMWU3LTkzZjQtNmEyNGZhYWZjYzQxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om15cHJvamVjdDpkZWZhdWx0In0.yl2HUhmxjrb4UqkAioq1TixWl_YqUPoxSvQPPSgl9Hzr97Hjm7icdL_mdptwEnOSErfzqSUBiMKJcIRdIa3Z7mfkgEk-f2H-M7TUU8GpXmD2Zex6Bcn_dq-Hsoed6W2PYpeFDoy98p5rSNTUL5MPMATOodeAulB0NG_zF01-8qTbLO_I6FRa3BCVXVMaZWBoZgwZ1acQbd4fJqDRsYmQMSi5P8a3nYgjBdifkQeTTb3S8Kmnszct41LoUlh9Xv29YVEyr1uQc5DSLAgQKj_NdSxkVq-MJP8z1PWV3OmHULNChocXr7RGKaNwlVpwpgNqsDAOqIyE1ozxlntIrotLBw
>
< HTTP/1.1 403 Forbidden
< Cache-Control: no-store
< Content-Type: application/json
< Date: Thu, 23 Nov 2017 22:18:01 GMT
< Content-Length: 282
<
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "User \"system:serviceaccount:myproject:default\" cannot list endpoints in project \"myproject\"",
"reason": "Forbidden",
"details": {
"kind": "endpoints"
},
"code": 403
}
* Connection #0 to host kubernetes.default left intact
</code></pre>
<p>I've tried to access a number of resources, like, endpoints, pods, etc. I've also omitted the namespace (as to access the whole cluster resources) to no avail. </p>
<p>I'm currently using OpenShift Origin, clean (just ran <code>oc cluster up</code> and deployed a test image to access the terminal in the web console).</p>
| <p>It looks like you're on a fully RBAC-enabled cluster, and your default service account <code>system:serviceaccount:myproject:default</code>, as expected, is unauthorised. You should create and use a dedicated service account for this pod and explicitly grant it access to what it needs to read (see the sketch below).</p>
<p><a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/authorization/rbac/</a></p>
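<p>A minimal sketch of what that could look like for the endpoints listing above; the names are made up, so adjust the namespace and rules to your needs:</p>

<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: endpoint-reader
  namespace: myproject
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: endpoint-reader
  namespace: myproject
rules:
- apiGroups: [""]
  resources: ["endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: endpoint-reader
  namespace: myproject
subjects:
- kind: ServiceAccount
  name: endpoint-reader
  namespace: myproject
roleRef:
  kind: Role
  name: endpoint-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>

<p>Then set <code>serviceAccountName: endpoint-reader</code> in the pod spec, so the mounted token belongs to that account instead of <code>default</code>.</p>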
|
<p>I have an image to which I need to add a dependency. Therefore I tried to change the image while it is running in a container and create a new image.
I have followed <a href="https://www.techrepublic.com/article/how-to-commit-changes-to-a-docker-image/" rel="nofollow noreferrer">this article</a> with the following commands:</p>
<pre><code>kubectl run my-app --image=gcr.io/my-project-id/my-app-image:v1 --port 8080
kubectl get pods
kubectl exec -it my-app-container-id -- /bin/bash
</code></pre>
<p>Then, in the shell of the container, I installed the dependency using "pip install NAME_OF_Dependncy".
Then I exited from the shell of the container and, as explained in the article, I should commit the change using this command:</p>
<pre><code>sudo docker commit CONTAINER_ID nginx-template
</code></pre>
<p>But I cannot find the corresponding command for Google Kubernetes Engine with <strong>kubectl</strong>.</p>
<p>How should I do the commit in Google Container Engine?</p>
| <p>As of K8s version 1.8, there is no way to hot-fix changes directly into images, for example by committing a new image from a running container. If you still change or add something by using <code>exec</code>, it will only last as long as the container is running. It's <strong>not best practice</strong> in the K8s ecosystem. </p>
<p><strong>The recommended</strong> way is to use a <strong>Dockerfile</strong> and customise the image according to your needs and requirements. After that, you can push that image to a registry (public/private) and deploy it with a K8s manifest file.</p>
<p><strong>Solution to your issue</strong> </p>
<ul>
<li>Create a Dockerfile for your image (see the sketch below). </li>
<li>Build the image using the Dockerfile. </li>
<li>Push the image to the registry.</li>
<li>Write the deployment manifest file as well as the service manifest file.</li>
<li>Apply the manifest files to the k8s cluster.</li>
</ul>
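<p>A rough sketch of those steps, reusing the image from the question; the added pip package and the new tag are just examples:</p>

<pre><code># Dockerfile - extend the existing image with the extra dependency
FROM gcr.io/my-project-id/my-app-image:v1
RUN pip install SOME_DEPENDENCY
</code></pre>

<pre><code># build, push and roll out the new tag
docker build -t gcr.io/my-project-id/my-app-image:v2 .
gcloud docker -- push gcr.io/my-project-id/my-app-image:v2
kubectl set image deployment/my-app my-app=gcr.io/my-project-id/my-app-image:v2
</code></pre>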
<p>Now, if you want to change or modify something, you just need to change/modify the Dockerfile and follow the remaining steps. </p>
<p>As you know, containers are short-lived creatures which do not persist changed behaviour (modified configuration, changed file system). Therefore, it's better to introduce new behaviour or modifications in the Dockerfile.</p>
<p><strong>Kubernetes Mantra</strong><br>
Kubernetes is a Cloud Native product, which means it does not matter whether you are using Google Cloud, AWS or Azure. It needs to have consistent behaviour on each cloud provider. </p>
|
<p>Can someone explain to me how CSRF works in the cluster setup?</p>
<p>I have a kubernetes cluster hosting a django website, and I'm having some occasional issues with 403 errors. I have multiple instances of the site load balanced in kubernetes.</p>
<p>How does CSRF work when a POST is sent from 1 instance and handled by another?</p>
<p>Does CSRF site work if the docker images are updated during the time the form is being filled out?</p>
<p>Thanks!</p>
| <blockquote>
<p>Can someone example to me how CSRF works in the cluster setup?</p>
</blockquote>
<p>Exactly the same way it usually ought not to (CSRF is Cross Site Request Forgery, i.e. the attack). To protect against it, you hand out secret tokens to your clients which they must include with subsequent requests. Your backend must validate that the tokens are valid, applicable and were, in fact, issued by a trusted source. There's a few ways to do that bit:</p>
<ul>
<li>You can use MACs for that (in which case you have something pretty close to JSON WebTokens). </li>
<li>You can save your tokens to some trusted store and query that store on subsequent requests.</li>
</ul>
<p>That is pretty much <em>all</em> there is to it.</p>
<p>Since your CSRF protection emerges from the combination of choices you made above, how to make it work in a distributed setup also depends on the specific implementation of the CSRF protection scheme.</p>
<p>Going by the Django docs, the default way to do it uses a 'secret' which is reset every time a user logs in. That means if hitting a different server for two subsequent requests triggers a new log in, all old CSRF tokens are effectively invalidated. So based on that: </p>
<ul>
<li>You need to adapt your Django project to make sure different instances can resume working with the same session, and a re-login is not triggered </li>
<li>All your Django instances need to be able to access the same per log-in secret, so that any one of them can validate a CSRF token issued by any other.</li>
</ul>
|
<p>For development purposes I am trying to use Minikube. I want to test how my application will catch the event of a service being exposed and an External-IP being assigned.
When I exposed a service in the Google Container Engine quick-start tutorial, I could see the External-IP assignment event with:
<code>kubectl get services --watch</code></p>
<p>I want to achieve the same with Minikube (if possible).</p>
<p>Here is how I try to set things up locally on my OSX development machine:</p>
<pre><code>minikube start --vm-driver=xhyve
minikube addons enable ingress
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment echoserver --type="LoadBalancer"
kubectl get services --watch
</code></pre>
<p>I see the following output:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echoserver LoadBalancer 10.0.0.138 <pending> 8080:31384/TCP 11s
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4m
</code></pre>
<p>The EXTERNAL-IP field never gets updated and stays in the <strong>pending</strong> state. Is it possible to achieve external IP assignment with Minikube?</p>
| <p>On GKE or AWS installs, the external IP comes from the cloud provider integration, which reports back to the kube API the address that the created LB was assigned.</p>
<p>To have the same on minikube you'd have to run some kind of LB controller, e.g. an <a href="https://github.com/kubernetes/contrib/tree/master/service-loadbalancer" rel="nofollow noreferrer">haproxy</a> one, but honestly, for minikube it makes little sense, as you have a single IP that you know in advance via <code>minikube ip</code>, so you can use NodePort with that knowledge. An LB solution would require setting up some IP range that can be mapped to particular NodePorts, as this is effectively what an LB does - take traffic from extIP:extPort and proxy it to minikubeIP:NodePort.</p>
<p>Unless your use case prevents you from it, you should consider Ingress as the way of ingesting traffic to your minikube.</p>
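<p>Concretely, with the NodePort that your <code>LoadBalancer</code> service already got (31384 in your output), something like this should work:</p>

<pre><code># print the node IP and reach the service on its NodePort
minikube ip
curl http://$(minikube ip):31384/

# or let minikube build the URL for you
minikube service echoserver --url
</code></pre>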
|
<p>I have a kubernetes cluster with running pods. In order to monitor and troubleshoot the infrastructure, I want to implement a centralized logging solution so all incoming and outgoing HTTP requests will be logged within one place.</p>
<p>For the incoming requests this is not a problem at all, I can use nginx log from ingress controller and present it.</p>
<p>I also understand that I can log outgoing requests inside the application I run in pod, but the problem is that applications from outside developers are also used and it may not contain logging implementation.</p>
<p>As for the outgoing requests, there is no solution provided by default, if I understand it right. I have explored k8s logging and k8s audit, but they do not provide such a feature.</p>
<p>Probably I need some network sniffer, but that is quite a low-level solution for such a problem as far as I can see. So, the question is: is there any out-of-the-box implementation for such a demand?</p>
<p>Thanks!</p>
| <p>Take a look at a service mesh solution like <a href="https://istio.io" rel="nofollow noreferrer">Istio</a> or <a href="https://linkerd.io" rel="nofollow noreferrer">Linkerd</a>, as well as tracing solutions like <a href="https://uber.github.io/jaeger/" rel="nofollow noreferrer">Jaeger</a> or <a href="https://zipkin.io" rel="nofollow noreferrer">Zipkin</a>. With these you can achieve full observability of how information flows in, out of, and through your kube cluster.</p>
|
<p>I would like to know which steps I have to follow in order to send the logs created in my custom apache container (deployed in a pod with Kubernetes) to the Stackdriver collector.</p>
<p>I have noticed that if I create a pod with a standard apache (or nginx) container, access.log and error.log are sent automatically to Stackdriver.</p>
<p>In fact, I'm able to see the logs both on the Kubernetes dashboard and in Google Cloud Dashboard ---> Logging ---> Logs.
However, I don't see anything related to my custom apache...</p>
<p>Any suggestions?</p>
| <p>After some research I resolved the problem of forwarding logs from my custom apache container.</p>
<p>I don't know why the "standard redirection" (using /dev/stdout or /proc/self/fd/1) is not working; anyway, the solution that I followed is called "sidecar container with the logging agent".</p>
<p>1) Create a ConfigMap where you'll set the fluentd configuration:</p>
<pre><code>apiVersion: v1
data:
fluentd.conf: |
<source>
type tail
format none
path /var/log/access.log
pos_file /var/log/access.log.pos
tag count.format1
</source>
<source>
type tail
format none
path /var/log/error.log
pos_file /var/log/error.log.pos
tag count.format2
</source>
<match **>
type google_cloud
</match>
kind: ConfigMap
metadata:
name: my-fluentd-config
</code></pre>
<p>2) Create a pod with 2 containers: the custom apache + a log agent. Both containers will mount a log folder. Only the log agent will mount the fluentd config:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-sidecar
labels:
app: my-sidecar
spec:
volumes:
- name: varlog
emptyDir: {}
- name: config-volume
configMap:
name: my-fluentd-config
containers:
- name: my-apache
image: <your_custom_image_repository>
ports:
- containerPort: 80
name: http
protocol: TCP
volumeMounts:
- name: varlog
mountPath: /var/log
- name: log-agent
image: gcr.io/google_containers/fluentd-gcp:1.30
env:
- name: FLUENTD_ARGS
value: -c /etc/fluentd-config/fluentd.conf
volumeMounts:
- name: varlog
mountPath: /var/log
- name: config-volume
mountPath: /etc/fluentd-config
</code></pre>
<p>3) Enter the my-apache container with: </p>
<pre><code>kubectl exec -it my-sidecar --container my-apache -- /bin/bash
</code></pre>
<p>and check/change that httpd.conf is using the following files:</p>
<pre><code>ErrorLog /var/log/error.log
CustomLog /var/log/access.log common
</code></pre>
<p>(if you change something remember to restart apache..)</p>
<p>4) Now in Google Cloud Console -> Logging you'll be able to see the apache access/error logs in Stackdriver with a filter like:</p>
<pre><code>resource.type="container"
labels."compute.googleapis.com/resource_name"="my-sidecar"
</code></pre>
|
<p>Using AWS EC2 to install Rancher cluster. Then setup Kubernetes cluster from Rancher server.</p>
<p>Regarding auto scaling, there are several ways to do it:</p>
<h2>Use Rancher cattle webhook service</h2>
<blockquote>
<p><a href="https://rancher.com/docs/rancher/v1.6/en/cattle/webhook-service/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v1.6/en/cattle/webhook-service/</a></p>
</blockquote>
<p>This way uses the monitoring tool Prometheus to monitor CPU usage, and then adds or deletes nodes based on alerting.</p>
<h2>Use terraform to generate rancher-master-ha, rancher-nodes, networking, database dynamically</h2>
<blockquote>
<p><a href="http://rancher.com/aws-rancher-building-resilient-stack/" rel="nofollow noreferrer">http://rancher.com/aws-rancher-building-resilient-stack/</a></p>
</blockquote>
<p>This works well for the Rancher cluster.</p>
<h2>Horizontal Pod Autoscaling Walkthrough</h2>
<blockquote>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a></p>
</blockquote>
<p>It's the official horizontal pod autoscaling way.</p>
<h2>Kubernetes Autoscaler</h2>
<blockquote>
<p><a href="https://github.com/kubernetes/autoscaler" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler</a></p>
</blockquote>
<p>It's also an official auto scaling approach for Kubernetes clusters.</p>
<h2>Use AWS Auto Scaling</h2>
<blockquote>
<p><a href="https://aws.amazon.com/autoscaling/" rel="nofollow noreferrer">https://aws.amazon.com/autoscaling/</a></p>
</blockquote>
<p>Regarding this approach, how do I connect it to the Rancher cluster and the Kubernetes cluster running on EC2?</p>
<hr>
<p>There are many ways to do auto scaling, but which is the best way? And, very importantly, how do I use AWS Auto Scaling with this architecture?</p>
| <p>Since you deployed Kubernetes with Rancher, you should use Rancher webhooks for this operation.
Use Prometheus/Grafana to set up a webhook that fires when CPU utilization exceeds some threshold.</p>
|
<p>I checked the Kubernetes docs and found that the pods/exec resource has no verbs listed,
and I do not know how to control access to it alone. I create a pod, and someone else needs to access it with 'exec' but must not be able to create anything in my cluster.</p>
<p>How to implement this?</p>
| <p>Since pods/exec is a subresource of pods, if you want to exec into a pod, you first need to get the pod, so here is my role definition:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods", "pods/log"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
</code></pre>
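<p>To actually grant this to a user, bind the role to them - a minimal sketch (the user name is a placeholder):</p>

<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: User
  name: jane          # placeholder: the user who may exec into pods
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>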
|
<p>We have a full cluster running in production and suddenly it stopped working with the following error:</p>
<blockquote>
<pre><code>The Deployment "authapi" is invalid: metadata.finalizers[0]: Invalid value: "foregroundDeletion": name is neither a standard finalizer name nor is it fully qualified
</code></pre>
</blockquote>
<p>My current cluster version is:</p>
<blockquote>
<p>Client Version: version.Info{Major:"1", Minor:"7",
GitVersion:"v1.7.3",
GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1",
GitTreeState:"clean", BuildDate:"2017-08-03T07:00:21Z",
GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server
Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6",
GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2",
GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z",
GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}</p>
</blockquote>
<p>On the other hand, we cannot deploy either. The following message appear when kubectl is trying to deploy:</p>
<blockquote>
<pre><code>W1127 15:28:32.999978   42625 factory_object_mapping.go:423] Failed to download OpenAPI (the server could not find the requested resource), falling back to swagger
The Deployment "authapi" is invalid: metadata.finalizers[0]: Invalid value: "foregroundDeletion": name is neither a standard finalizer name nor is it fully qualified
/home/builduser/myagent/_work/_temp/kubectlTask/1511796511792/kubectl failed with return code: 1
</code></pre>
</blockquote>
<p>YAML definition is shown below:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: authapi
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: authapi
spec:
containers:
- name: authapi
image: edgecontainerregistry.azurecr.io/portal.authapi:latest
imagePullPolicy: Always
resources:
requests:
cpu: 100m
ports:
- containerPort: 5006
env:
- name: ASPNETCORE_ENVIRONMENT
valueFrom:
configMapKeyRef:
name: aspnetcore-config
key: aspnetcore.env
imagePullSecrets:
- name: edgesecret
---
kind: Service
apiVersion: v1
metadata:
name: authapi
spec:
ports:
- protocol: TCP
port: 5006
targetPort: 5006
selector:
app: authapi
type: ClusterIP
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: authapi
spec:
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: authapi
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 50
</code></pre>
<p>Any help on this?</p>
| <p>This is a bug, fixed in 1.6.7+</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.6.md/#v167" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.6.md/#v167</a></p>
<blockquote>
<p>Fix Invalid value: "foregroundDeletion" error when attempting to delete a resource. (#46500, @tnozicka)</p>
</blockquote>
|
<p>My current Kafka deployment file with 3 Kafka brokers looks like this:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: kafka
spec:
selector:
matchLabels:
app: kafka
serviceName: kafka-headless
replicas: 3
updateStrategy:
type: RollingUpdate
podManagementPolicy: Parallel
template:
metadata:
labels:
app: kafka
spec:
containers:
- name: kafka-instance
image: wurstmeister/kafka
ports:
- containerPort: 9092
env:
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181,\
zookeeper-1.zookeeper-headless.default.svc.cluster.local:2181,\
zookeeper-2.zookeeper-headless.default.svc.cluster.local:2181"
- name: BROKER_ID_COMMAND
value: "hostname | awk -F '-' '{print $2}'"
- name: KAFKA_CREATE_TOPICS
value: hello:2:1
volumeMounts:
- name: data
mountPath: /var/lib/kafka/data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 50Gi
</code></pre>
<p>This creates 3 Kafka brokers as a Stateful Set and connects to the Zookeeper cluster using the Kubedns service with FQDN (Fully Qualified Domain Names) such as: </p>
<pre><code>zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181
</code></pre>
<p>Broker IDs are generated based on the pod name:</p>
<pre><code>- name: BROKER_ID_COMMAND
value: "hostname | awk -F '-' '{print $2}'"
</code></pre>
<p>Result:</p>
<pre><code>kafka-0 = 0
kafka-1 = 1
kafka-2 = 2
</code></pre>
<p>However, In order to use the Kubedns names for the Kafka brokers:</p>
<pre><code>kafka-0.kafka-headless.default.svc.cluster.local:9092
kafka-1.kafka-headless.default.svc.cluster.local:9092
kafka-2.kafka-headless.default.svc.cluster.local:9092
</code></pre>
<p>I need to be able to set the <code>KAFKA_ADVERTISED_HOST_NAME</code> variable to the above FQDN values based on the name of the pod.</p>
<p>Currently I have the variable set to the name of the pod:</p>
<pre><code>- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
<p>Result:</p>
<pre><code>KAFKA_ADVERTISED_HOST_NAME=kafka-0
KAFKA_ADVERTISED_HOST_NAME=kafka-1
KAFKA_ADVERTISED_HOST_NAME=kafka-2
</code></pre>
<p>But somehow I would need to append the rest of the DNS name.</p>
<p>Is there a way I could set the DNS value directly? </p>
<p>Something like that:</p>
<pre><code>- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: kubedns.name
</code></pre>
| <p>I managed to solve the problem with a command field inside the pod definition:</p>
<pre><code> command:
- sh
- -c
- "export KAFKA_ADVERTISED_HOST_NAME=$(hostname).kafka-headless.default.svc.cluster.local &&
start-kafka.sh"
</code></pre>
<p>This runs a shell command which exports the advertised hostname environment variable based on the <code>hostname</code> value.</p>
|
<p>I'm having a problem going through the steps in the <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="nofollow noreferrer">Quickstart for Azure Kubernetes cluster for Linux</a>. </p>
<p>The following command creates a resource group successfully:
$> az group create --name myResourceGroup --location eastus</p>
<p>However, I get an error when trying to create the Kubernetes cluster:
$> az aks create --resource-group myResourceGroup --name myK8sCluster --node-count 1 --generate-ssh-keys</p>
<p>The error returned is:</p>
<p>"Operation failed with status: 'Bad Request'. Details: Service principal clientID: b986e403-1baa-4e97-8fea-e0a411516c61 not found in Active Directory tenant fee04516-9fb0-4e3e-a906-0b8d8bb493d6, Please see <a href="https://aka.ms/acs-sp-help" rel="nofollow noreferrer">https://aka.ms/acs-sp-help</a> for more details".</p>
<p>Any thoughts on what the problem is?</p>
<p>Thanks,
Cameron.</p>
| <p>Yes - you cannot create a Kubernetes cluster in Azure without an app registration in AD. For that you need to create a role, service principal, and application in the AD tenant, and it should be in the same region. Follow these two links to create the service principal either from the CLI or the portal:
<a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal</a>
<a href="https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-service-principal" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-service-principal</a></p>
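<p>A rough CLI sketch of that flow (the service principal name is a placeholder; see the links above for the authoritative steps):</p>

<pre><code># create a service principal (note the appId and password it prints)
az ad sp create-for-rbac --name myK8sClusterSP

# pass it explicitly to "az aks create" instead of relying on the automatically created one
az aks create --resource-group myResourceGroup --name myK8sCluster --node-count 1 \
  --service-principal &lt;appId&gt; --client-secret &lt;password&gt; --generate-ssh-keys
</code></pre>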
|
<p>When you run: <code>kubectl get svc -n default</code>, you will have a kubernetes service with Type as ClusterIP already there.</p>
<p>What is the purpose of this service? Any references appreciated.</p>
<p>I'm running in Minikube</p>
<pre><code>xyz:Kubernetes _$ kubectl describe svc/kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.0.0.1
Port: https 443/TCP
TargetPort: 8443/TCP
Endpoints: 10.0.2.15:8443
Session Affinity: ClientIP
Events: <none>
xyz:Kubernetes _$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
</code></pre>
| <p>AFAIK the kubernetes service in the default namespace is a service which forwards requests to the Kubernetes master ( Typically kubernetes API server).</p>
<p>So all the requests to the kubernetes.default service from the cluster will be routed to the configured Endpoint IP. In this scenario its the kubernetes master IP</p>
<p>For example </p>
<p>Let's check out the output of <code>kubectl describe svc kubernetes</code> and look at the Endpoints IP.</p>
<p><a href="https://i.stack.imgur.com/JSFks.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/JSFks.jpg" alt="enter image description here"></a> </p>
<p>Now lets check our cluster info</p>
<p><code>kubectl cluster-info</code></p>
<p><a href="https://i.stack.imgur.com/RhCrx.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/RhCrx.jpg" alt="enter image description here"></a></p>
<p>Please note that the kubernetes master is running at the same IP as the Endpoints IP of kubernetes.default service.</p>
<p>Hope it helps.</p>
|
<p>I know that I can <a href="https://kubernetes.io/docs/tasks/administer-cluster/memory-default-namespace/#create-a-limitrange-and-a-pod" rel="nofollow noreferrer">create a LimitRange</a> in a namespace. Then all pods created in <em>that</em> namespace will have Resource (CPU/Memory) Limits/Requests set.</p>
<p>So my initial question of how to enforce Resource Limits/Requests on all resources in a kubernetes cluster seems to be equivalent to: How do I enforce a LimitRange exists in every namespace?</p>
| <p>It is currently not possible to enforce <code>LimitRange</code> and <code>ResourceQuota</code> cluster-wide, that is, providing defaults for every namespace. I'm aware of at least <a href="https://github.com/kubernetes/kubernetes/issues/17097" rel="nofollow noreferrer">one discussion</a> to potentially change this.</p>
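<p>Until then, a common workaround is to apply the same <code>LimitRange</code> manifest to every namespace yourself, for example from a small script or CI job. A minimal sketch, assuming the manifest lives in <code>limit-range.yaml</code>:</p>

<pre><code># apply one LimitRange manifest to every existing namespace
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl apply -n "$ns" -f limit-range.yaml
done
</code></pre>

<p>Newly created namespaces would still need the same treatment, so the script has to be re-run (or hooked into whatever creates namespaces).</p>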
|
<p>I created an EBS volume with 30 GiB size. Made two manifest files:</p>
<ul>
<li>pv-ebs.yml</li>
<li>pvc-ebs.yml</li>
</ul>
<p>In pv-ebs.yml:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: ebs
spec:
capacity:
storage: 30Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
awsElasticBlockStore:
fsType: ext4
# The EBS volume ID
volumeID: vol-111222333aaabbbccc
</code></pre>
<p>in pvc-ebs.yml</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: prometheus-prometheus-alertmanager
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
selector:
matchLabels:
release: "stable"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: prometheus-prometheus-server
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
selector:
matchLabels:
release: "stable"
</code></pre>
<p>I used <code>helm</code> to install it: <code>helm install --name prometheus stable/prometheus</code>.</p>
<p>But on the k8s dashboard, got message:</p>
<pre><code>prometheus-prometheus-alertmanager-3740839786-np7kb
No nodes are available that match all of the following predicates:: NoVolumeZoneConflict (2).
prometheus-prometheus-server-3176041168-m3w2g
PersistentVolumeClaim is not bound: "prometheus-prometheus-server" (repeated 2 times)
</code></pre>
<p>Is there anything wrong with my method?</p>
<h2>Pods</h2>
<p><a href="https://i.stack.imgur.com/W5upH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W5upH.png" alt="enter image description here"></a></p>
<h2>Persistent Volumes</h2>
<p><a href="https://i.stack.imgur.com/aDyti.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aDyti.png" alt="enter image description here"></a></p>
| <p>When an EBS volume is created, it is provisioned in a particular AZ and it cannot be mounted cross-zone. If you do not have nodes available in the same zone for scheduling the pod, it will not start.</p>
<p>Another thing is that with a properly configured kube cluster, you should not need to create a PV on your own at all; just create a PVC and let <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#dynamic" rel="nofollow noreferrer">dynamic provisioning</a> do its thing (see the sketch below).</p>
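<p>A minimal sketch of what that looks like on AWS; the class name and volume type are assumptions:</p>

<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-prometheus-server
spec:
  # note: no label selector here - a selector prevents binding to dynamically provisioned volumes
  accessModes:
  - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 30Gi
</code></pre>

<p>With this in place, the EBS volume is created on demand and the scheduler keeps the pod in the volume's zone, instead of you pre-creating the volume by hand.</p>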
|
<p>In this stackoverflow question: <a href="https://stackoverflow.com/questions/38261107/kubernetes-deployment-how-to-change-container-envi">kubernetes Deployment. how to change container environment variables for rolling updates?</a></p>
<p>The asker mentions that he edited the deployment to change the version to v2. What's the workflow for automated deployments of a new version, assuming the container v2 already exists? How do you then deploy it without manually editing the deployment config or checking in a new version of the yaml? </p>
<p>If you change the underlying container (like v1 -> another version also named v1) will Kubernetes deploy the new or the old?</p>
| <p>Assuming <code>v1</code> is already running and you try to deploy <code>v1</code> again with the same environment variable values etc., then k8s will not see any difference between your current and updated deployment resource.
Without diff, the k8s scheduler assumes that the desired state is already reached and won't schedule any new pods, even when <code>imagePullPolicy: Always</code> is set. The reason is that <code>imagePullPolicy</code> only has an effect on newly created pods. So if a new pod is being scheduled, then k8s will always pull the image again. Still, without any diff in your deployment, no new pod will be scheduled in the first place ..</p>
<p>For my deployments I always set a dummy environment variable, like a deploy timestamp <code>DEPLOY_TS</code>, e.g.:</p>
<pre><code> containers:
- name: my-app
image: my-app:{{ .Values.app.version }} ## value dynamically set by my deployment pipeline
env:
- name: DEPLOY_TS
value: "{{ .Values.deploy_ts }}" ## value dynamically set by my deployment pipeline
</code></pre>
<p>The value of <code>DEPLOY_TS</code> is always set to the current timestamp - so it is always a different value. That way k8s will see a diff on every deploy and schedule a new pod - even if the same version is being re-deployed. </p>
<p>(I am currently running k8s 1.7)</p>
|
<p>While diving into Docker, Google Cloud and Kubernetes, and without clearly understanding all three of them yet, it seems to me these products are overlapping, yet they're not compatible.</p>
<p>For example, a <code>docker-compose.yml</code> file needs to be re-written so an app can be deployed to Kubernetes.</p>
<p>Could someone provide a high-level, rough description of where Docker, Docker Compose, Docker Cloud, and Kubernetes overlap and where one is dependent on the other?</p>
| <p><strong><a href="https://www.docker.com/resources/what-container" rel="noreferrer">Containers</a></strong>:</p>
<ul>
<li>Containers are at the core of the other technologies listed here</li>
</ul>
<p><strong><a href="https://www.docker.com/" rel="noreferrer">Docker</a></strong>:</p>
<ul>
<li>Docker is a popular implementation of the technology that allows applications to be bundled into a container.</li>
<li><code>docker</code> is a command-line tool to manage images, containers, volumes, and networks</li>
</ul>
<p><strong><a href="https://docs.docker.com/compose/" rel="noreferrer">Docker Compose</a></strong></p>
<ul>
<li>Docker Compose is the declarative version of the docker cli</li>
<li>It can start one or more containers</li>
<li>It can create one or more networks and attach containers to them</li>
<li>It can create one or more volumes and configure containers to mount them</li>
<li>All of this is for use on a <em><strong>single</strong></em> host</li>
</ul>
<p><strong><a href="https://docs.docker.com/engine/swarm/" rel="noreferrer">Docker Classic Swarm</a></strong></p>
<ul>
<li>Docker swarm has been abandoned by Docker Inc. and is not being actively maintained or supported.</li>
<li>Docker Swarm is for running and connecting containers on <strong>multiple</strong> hosts.</li>
<li>Docker Swarm is a container cluster management and orchestration tool.</li>
<li>It manages containers running on multiple hosts and does things like scaling, starting a new container when one crashes, networking containers ...</li>
<li>The Docker Swarm file named stack file is very similar to a Docker Compose file</li>
<li>The only comparison between Kubernetes and Compose is at the most trivial and unimportant level: they both run containers, but this says nothing to help one understand what the two tools are and where they are useful. They are both useful for different things</li>
</ul>
<p><strong><a href="https://kubernetes.io/" rel="noreferrer">Kubernetes</a></strong></p>
<ul>
<li>Kubernetes (K8S) is a <strong>distributed container orchestration tool</strong> initially created by Google</li>
<li>It was open-sourced in 2014 and handed over to the Cloud Native Computing Foundation (CNCF) the following year</li>
<li>The CNCF is an industry body with <a href="https://landscape.cncf.io/members" rel="noreferrer">hundreds of members drawn from the majority</a> of large cloud, software and hardware companies</li>
<li>At the time of writing (late 2021) there are nearly a <a href="https://landscape.cncf.io/" rel="noreferrer">thousand K8S related projects</a> split into around twenty classes with a total of over $21 billion dollars in funding</li>
<li>Kubernetes (2021) is the most popular distributed system orchestrator in the world with <a href="https://www.redhat.com/en/resources/kubernetes-adoption-security-market-trends-2021-overview" rel="noreferrer">88% adoption</a></li>
<li>Because of its near ubiquity, K8S has become the most popular contemporary platform for innovative system development in 2021</li>
<li>Kubernetes is a competitor (more or less) to Docker swarm but does more stuff than docker swarm i.e a popular choice.</li>
</ul>
|
<p>In this stackoverflow question: <a href="https://stackoverflow.com/questions/38261107/kubernetes-deployment-how-to-change-container-envi">kubernetes Deployment. how to change container environment variables for rolling updates?</a></p>
<p>The asker mentions that he edited the deployment to change the version to v2. What's the workflow for automated deployments of a new version, assuming the container v2 already exists? How do you then deploy it without manually editing the deployment config or checking in a new version of the yaml? </p>
<p>If you change the underlying container (like v1 -> another version also named v1) will Kubernetes deploy the new or the old?</p>
| <p>If you don't want to:</p>
<ul>
<li>Checking in the new YAML version</li>
<li>Manually updating the config</li>
</ul>
<p>You can update the deployment either through:</p>
<ul>
<li>A REST call to the deployment in question, patching/putting your new image as a resource modification, i.e. <code>PUT /apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name}</code> with the updated deployment (using image v2) as the body</li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">Set the image</a>: <code>kubectl set image deployment/<DEPLOYMENT_NAME> <CONTAINER_NAME>=<IMAGE_NAME>:v2</code></li>
</ul>
|
<p>I have a Kubernetes deployment containing a very simple Spring Boot web application. I am experiencing random timeouts trying to connect to this application externally.</p>
<p>Some requests return instantly whereas others hang for minutes.</p>
<p>I am unable to see any issues in the logs.</p>
<p>When connecting to the pod directly, I am able to <code>curl</code> the application and get a response immediately so it feels more like a networking issue.</p>
<p>I also have other applications with the identical configuration running in the same cluster which are experiencing no problems.</p>
<p>I am still quite new to Kubernetes so my question would be:</p>
<p>Where and how should I go about diagnosing network issues?</p>
<p>Can provide more information if it helps.</p>
| <p>As you have narrowed the issue down to networking, the components of the cluster such as <strong>kubelet and kube-proxy</strong> are presumably healthy. </p>
<p>You can check their status by using the systemctl utility. For example: </p>
<pre><code>systemctl status kubelet
systemctl status kube-proxy
</code></pre>
<p>You can get more detail by using the journalctl utility. For example: </p>
<pre><code>journalctl -xeu kubelet
journalctl -f -u docker
</code></pre>
<p>Now, if you want to know the <strong>fate</strong> of the packets, you need to use the iptables utility. It is what decides the forwarding, routing, and verdict of the packets (incoming or outgoing). </p>
<p>My plan of action is: <strong>do not</strong> make any assumptions. I use the following utilities to clear up any doubts.</p>
<ul>
<li><p>Kubectl </p>
<p>Kubectl describe pod/svc podName/svcName</p></li>
<li><p>systemctl </p></li>
<li>journalctl</li>
<li>etcdctl </li>
<li>curl</li>
<li>iptables</li>
</ul>
<p>If I still could not solve the issue it means I have made an assumption.</p>
<p>Please let me know about any other tools - I would love to add them to my utility set.</p>
|
<p>I'm having a problem going through the steps in the <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="nofollow noreferrer">Quickstart for Azure Kubernetes cluster for Linux</a>. </p>
<p>The following command creates a resource group successfully:
$> az group create --name myResourceGroup --location eastus</p>
<p>However, I get an error when trying to create the Kubernetes cluster:
$> az aks create --resource-group myResourceGroup --name myK8sCluster --node-count 1 --generate-ssh-keys</p>
<p>The error returned is:</p>
<p>"Operation failed with status: 'Bad Request'. Details: Service principal clientID: b986e403-1baa-4e97-8fea-e0a411516c61 not found in Active Directory tenant fee04516-9fb0-4e3e-a906-0b8d8bb493d6, Please see <a href="https://aka.ms/acs-sp-help" rel="nofollow noreferrer">https://aka.ms/acs-sp-help</a> for more details".</p>
<p>Any thoughts on what the problem is?</p>
<p>Thanks,
Cameron.</p>
| <p>Thanks for the feedback. I managed to resolve the issue by deleting my .azure folder and retrying.</p>
|
<p>I'm looking to create a statefulset using affinity. I have added a label to my 3 nodes. Two of them have area=area1 and one node has area=area2. I'm looking to run my statefulset pods only on the nodes with area=area1. It's not working. I'm getting an error from the scheduler that no nodes were matched. I'm running Kubernetes v1.7.4</p>
<p>Yaml:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: stateful-bcs
spec:
serviceName: mybcs
replicas: 2
template:
metadata:
labels:
app: simplecount
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: area
operator: In
values:
- area1
containers:
- name: test1
image: XXXX.azurecr.io/simple
env:
- name: SIMPLE_SERVICE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
command:
- ./simplecount
- "$(SIMPLE_SERVICE_NAME)"
imagePullSecrets:
- name: XXXXXXX
restartPolicy: Always
</code></pre>
| <p>You should be using <code>nodeAffinity</code> not <code>podAffinity</code>, <code>podAffinity</code> is "based on labels on pods that are already running on the node rather than based on labels on nodes" (<a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/</a>).</p>
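<p>Applied to the labels in the question, the block under the pod template's spec would look roughly like this (same key, operator and values, just under <code>nodeAffinity</code>):</p>

<pre><code>spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: area
            operator: In
            values:
            - area1
</code></pre>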
|
<p>I have horizontal pod autoscaler running on GKE. It works but its slow to respond to increases in workload, so I would like to try modifying a couple of the kube-controller-manager parameters, specifically:</p>
<p>--horizontal-pod-autoscaler-upscale-delay
--horizontal-pod-autoscaler-downscale-delay</p>
<p>But the only CLI available seems to be kubectl. I have scoured docs and the client API but can find no way to do it, in fact this link suggested it may not be possible on GKE:</p>
<p><a href="https://stackoverflow.com/questions/46317275/change-the-horizontal-pod-autoscaler-sync-period-with-gke">change the horizontal-pod-autoscaler-sync-period with gke</a></p>
<p>Does anyone know if there is a way to do it?</p>
<p>Thank you!</p>
| <p>These flags must be configured in the kube-controller-manager and since you cannot pass flags to the GKE cluster it is impossible to modify the autoscaler settings on GKE.</p>
|
<p>Can embedded tomcat or any such embedded server be used for microservices in production environment? How, embedded server is different wrt the normal standalone full fledged server (performance . reliability wise)? Is the embedded server light weight compared to standalone? What are the features that embeded servers do not have compared to their standalone ones? Can we change the default security settings, tls ciphers etc advanced things in embedded tomcat?</p>
| <p>Well, it's a matter of choice. I've seen some services which use embedded Tomcat in production and some services which rely on a standalone Tomcat container that is directly provided by the platform. Some platforms might not have a Java runtime; in such scenarios you're bound to use embedded containers such as Tomcat or Jetty.
The key difference is that standalone containers can host multiple Java deployments, whereas for single deployments you can stick to the embedded ones. Reliability and performance won't be a huge concern, although I believe that standalone containers are designed to be more scalable. From my personal experience, embedded deployments are easier to manage, since we can custom-configure the Tomcat settings specific to that deployment (which might be the answer to your last question).</p>
|
<p>I'm trying to write Helm Charts for our product. The images are stored on GCR private repo. The charts for all components are ready, but I'm trying to write a YAML file in a chart for imagePullSecrets. I've read chart tips from <a href="https://github.com/kubernetes/helm/blob/master/docs/charts_tips_and_tricks.md#creating-image-pull-secrets" rel="nofollow noreferrer">here</a>,</p>
<p>I also know how to create imagePullSecret with:</p>
<pre><code>kubectl create secret docker-registry mydockercfg \
--docker-server "https://eu.gcr.io" \
--docker-username _json_key \
--docker-email [email protected] \
--docker-password=$(cat your_service_account.json)
</code></pre>
<p>But I don't know how to put the content of "your_service_account.json" into the password field of that chart's values.yaml. Ideally, I would be able to point at a different "your_service_account.json" file to update the password in values.yaml.</p>
<p>Currently, My implementation is as follows:</p>
<pre><code>$ cat values.yaml
secretName: gcr-json-key-test
imageCredentials:
registry: us.gcr.io/xxxxx
username: _json_key
password:
</code></pre>
<p>Contents of secrets.yaml:</p>
<pre><code>$ cat templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.secretName }}
labels:
app: {{ template "fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
type: kubernetes.io/dockercfg
data:
.dockerconfigjson: {{ template "imagePullSecret" . }}
</code></pre>
<p>Contents of _helpers.tpl:</p>
<pre><code>$ cat templates/_helpers.tpl
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited
to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "imagePullSecret" }}
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" .Values.imageCredentials.registry (printf "%s:%s" .Values.imageCredentials.username .Values.imageCredentials.password | b64enc) | b64enc }}
{{- end }}
</code></pre>
<p>And then using</p>
<pre><code>$ helm install ./secrets --set imageCredentials.password "$(cat ./my_service_account.json)"
</code></pre>
<p>Will result an error:</p>
<blockquote>
<p>Error: This command needs 1 argument: chart name</p>
</blockquote>
<p>How can I solve this problem?</p>
| <p>It can be created and deployed using the following steps:</p>
<p><strong>Steps:</strong></p>
<ol>
<li><p>Create base64 encoded string using your docker_username and docker_password</p>
<pre><code>$ echo -n "docker_username:docker_password" | base64
ZG9rY2VyX3VzZXI6ZG9ja2VyX3Bhc3N3b3Jk
</code></pre></li>
<li><p>Place the encoded string obtained in the <strong>Step 1</strong> as value for <strong>auth</strong> key in the following Json and fill the required details. </p>
<pre><code>{
"https://eu.gcr.io":
{
"username":"docker_user",
"password":"docker_password",
"email":"[email protected]",
"auth":"ZG9rY2VyX3VzZXI6ZG9ja2VyX3Bhc3N3b3Jk",
}
}
</code></pre></li>
<li><p>Reduce this <strong>json</strong> into a <strong>string</strong> enclosed by single quote:</p>
<pre><code>'{"https://eu.gcr.io":{"username":"docker_user","password":"docker_password","email":"[email protected]","auth":"ZG9rY2VyX3VzZXI6ZG9ja2VyX3Bhc3N3b3Jk"}}'
</code></pre></li>
<li><p>Create base64 encoded string for the above Json string as follows:</p>
<pre><code>$ echo -n '{"https://eu.gcr.io":{"username":"docker_user","password":"docker_password","email":"[email protected]","auth":"ZG9rY2VyX3VzZXI6ZG9ja2VyX3Bhc3N3b3Jk"}}' | base64
eyJodHRwczovL2V1Lmdjci5pbyI6eyJ1c2VybmFtZSI6ImRva2Nlcl91c2VyIiwicGFzc3dvcmQiOiJkb2NrZXJfcGFzc3dvcmQiLCJlbWFpbCI6ImRvY2tlckBnYW1pbC5jb20iLCJhdXRoIjoiWkc5clkyVnlYM1Z6WlhJNlpHOWphMlZ5WDNCaGMzTjNiM0prIn19
</code></pre></li>
<li><p>Create secret.yml in the following format:</p>
<pre><code>$ cat templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.secretName }}
labels:
app: {{ template "fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
type: kubernetes.io/dockercfg
data:
.dockercfg: {{ .Values.dockercfg }}
</code></pre></li>
<li><p>Place the encoded string obtained in <strong>Step 4</strong> in the values.yaml:</p>
<pre><code>$ cat values.yaml
secretName: gcr-json-key-test
dockercfg:
</code></pre></li>
<li><p>Install the chart using the following command:</p>
<pre><code>$ helm install ./secrets -n release_name --set dockercfg="eyJodHRwczovL2V1Lmdjci5pbyI6eyJ1c2VybmFtZSI6ImRva2Nlcl91c2VyIiwicGFzc3dvcmQiOiJkb2NrZXJfcGFzc3dvcmQiLCJlbWFpbCI6ImRvY2tlckBnYW1pbC5jb20iLCJhdXRoIjoiWkc5clkyVnlYM1Z6WlhJNlpHOWphMlZ5WDNCaGMzTjNiM0prIn19" --debug
</code></pre>
<p>or store it in a file ( .dockercfg ) and use the following command</p>
<pre><code>$ helm install ./secrets -n release_name --set dockercfg="$(cat ./.dockercfg)"
</code></pre></li>
</ol>
<p>Hope this will be useful...!! :)</p>
|
<p>Now we're using <code>Kubernetes</code> to implement the PaaS service and users can ssh into the containers. Because container runs inside the network of Kubernetes, users can access the services like <code>kube-apiserver</code>.</p>
<p>We want to restrict the outbound of the user's pods. It seems that <code>Kubernetes Network Policy</code> is only for inbound traffic now.</p>
<p>Is that possible to do that? Should we setup the rules of <code>iptables</code> in compute nodes?</p>
| <p>Outbound traffic is now supported by Network Policies since v1.8, you should check again and see if your use case is fully supported.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-egress-traffic" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-egress-traffic</a></p>
|
<p>Kops created a cluster as well as terraform scripts.</p>
<p>How to restart kubelet and provide new flags for garbage collection(manual approach) and how to apply them to terraform?</p>
<p>Problem occurs when I manually kill the kubelet process on master and it keeps getting restarted immediately. </p>
| <p>You could change the /etc/sysconfig/kubelet file.
In the config, append / change the flags you need. Then restart the kubelet service using
<code>$ systemctl restart kubelet</code></p>
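<p>As a sketch of what that could look like for the garbage collection flags (the exact file location and variable name depend on how kops provisioned your nodes, so treat this as an assumption to verify on your hosts):</p>
<pre><code># /etc/sysconfig/kubelet (variable name may be DAEMON_ARGS or KUBELET_EXTRA_ARGS depending on the image)
DAEMON_ARGS="<existing flags> \
  --image-gc-high-threshold=80 \
  --image-gc-low-threshold=60 \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers=100"
</code></pre>
<p>Note that changes made this way can be lost on the next kops rolling update; to make them permanent you would also set them in the cluster spec (<code>kops edit cluster</code>, under the kubelet section).</p>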
|
<p>In a kubernetes I have a container X. I'd like to run a cronjob, that executes a command within a container X.</p>
<p>I have the following idea in mind:</p>
<ol>
<li>run a cronjob using busybox container,</li>
<li><p>from within busybox container execute a script that will:</p>
<ol>
<li>determine the pod name where the execution must happen,</li>
<li><p>run curl, <em>something like</em>:</p>
<pre><code> curl https://35.187.120.184/api/v1/namespaces/my_namespace/pods/my_pod_name/exec?command=my_sh_command&container=my_container_name&stdin=true&stdout=true&tty=true
</code></pre>
<p>Dynamic values here are:</p>
<ul>
<li>my_namespace</li>
<li>my_pod_name</li>
<li>my_sh_command</li>
<li>my_container_name</li>
</ul></li>
</ol></li>
</ol>
<p>I know I should probably use a token to access kubernetes API, but I can't figure out how to obtain / use that token in curl.</p>
<p>Is it possible to do that using <code>curl</code> and kubernetes API? If yes, how?</p>
<hr>
<p>There's a reason I want to do this weird thing. I have a compiled app, that recognizes CLI commands. I want to run a particular CLI command, without the hassle of exposing the CLI via a web-server route, that'd be subsequently accessed via k8s service.</p>
| <p>Your general approach (calling the <code>{pod}/exec</code> endpoint on the Kubernetes API from within your container) seems totally valid to me. In general, I see two aspects to consider, here: 1) how to use <code>curl</code> to communicate with the Kubernetes API, and 2) how to authenticate against the API from within a Pod.</p>
<h3>Using <code>curl</code> for Kubernetes API access</h3>
<p>In general, the Kubernetes API is easily callable via <code>curl</code>. However, the <code>exec</code> endpoint is an exception to that rule, since the API server upgrades connections on that endpoint to an SPDY connection, which is not supported by <code>curl</code> (there even is a <a href="https://github.com/kubernetes/kubernetes/issues/24668" rel="nofollow noreferrer">discussion in the Kubernetes issue tracker</a> on that topic). For that reason, I'd recommend using kubectl (as already suggested by @sfgroups) or using one of the client SDKs (like for example, the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">Go</a> or <a href="https://github.com/kubernetes-incubator/client-python" rel="nofollow noreferrer">Python</a> ones).</p>
<h3>Authenticating <code>exec</code> calls from within a Pod</h3>
<p>If you are using kubectl from within a Pod (or any of the client SDKs, or even curl, for that matter), you will need to authenticate against the API server. For this, your Pod needs to be associated with a <em>Service Account</em>, and that Service Account needs to be authorized to call the <code>/pods/{pod}/exec</code> endpoint. How this works, is largely dependent on your cluster configuration:</p>
<ol>
<li><p>In many cluster configurations, your Pod may have already been associated with a Service Account (with sufficient authorization) by default. In the Pod, you will find the credentials in the <code>/var/run/secrets/kubernetes.io/serviceaccount</code> directory. Both <code>kubectl</code> and the common client SDKs will find this directory automatically, allowing them to "just work" without any additional configuration. Using curl, you will need to extract the authentication token from the <code>token</code> file within that directory and use it in an <code>Authorization: Bearer <token></code> header.</p>
<p>Your Pod may not have received a Service Account token, if it was created with the <code>automountServiceAccountToken: false</code> property.</p></li>
<li><p>If your cluster is configured <a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="nofollow noreferrer">to use RBAC</a>, your Pod may still be associated with a Service Account, but that account may not be authorized to execute commands in other Pods. To grant access, you can create your own Role that grants the required rights, and then create a Service Account and a RoleBinding:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: pod-exec
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["pods/exec]
verbs: ["create"]
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: cron
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cron
subjects:
- kind: ServiceAccount
name: cron
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-exec
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Then, use the <code>serviceAccountName: cron</code> attribute in your <a href="https://kubernetes.io/docs/api-reference/v1.8/#podspec-v1-core" rel="nofollow noreferrer">PodSpec</a> to associate your cron runner pod with the newly created Service Account (see the sketch after this list).</p></li>
</ol>
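<p>Putting it together, a sketch of the cron runner itself could look like this (the image name is a placeholder for any image that ships <code>kubectl</code>, and the pod/command names are the ones from your question):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cli-command-cron
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cron
          restartPolicy: OnFailure
          containers:
          - name: runner
            image: my-registry/kubectl:latest   # placeholder: any image containing kubectl
            command:
            - /bin/sh
            - -c
            - kubectl exec my_pod_name -c my_container_name -- my_sh_command
</code></pre>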
|
<p>I want to expose 2 ports on one service:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: etools
spec:
replicas: 1
template:
metadata:
labels:
app: etools
spec:
containers:
- name: etools
image: eregistry.azurecr.io/etools:latest
ports:
- containerPort: 8080
- containerPort: 3100
---
apiVersion: v1
kind: Service
metadata:
name: etools
spec:
ports:
- port: 8080
selector:
app: etools
ports:
- port: 3100
selector:
app: etools
</code></pre>
<p>How can I achieve it?</p>
| <p>Your <code>Service</code> is very close, but in the <a href="https://kubernetes.io/docs/api-reference/v1.8/#serviceport-v1-core" rel="nofollow noreferrer">ports:</a> array, the ports need to be named if there is more than one (they ideally would always have names), and the <code>selector:</code> is just once per <code>Service</code>, not per-port:</p>
<pre><code>spec:
selector:
app: etools
ports:
- name: web
port: 8080
targetPort: 8080
- name: other-port-something
port: 3100
targetPort: 3100
</code></pre>
<p>Be aware that while you will often see <code>port:</code> and <code>targetPort:</code> equal to the same number, they don't have to be. So your container could listen on 8080, because docker image says it will, but your <code>Service</code> could expose that to other members of your cluster as <code>port: 80</code> to be closer to what one would expect.</p>
<p>It's also possible to name the ports in your <code>PodSpec</code> with natural language names, and then point the <code>Service</code> at <em>that</em> value:</p>
<pre><code>ports:
- name: http
port: 80
targetPort: http-in-my-pod
</code></pre>
<p>which I recommend because it decouples your <code>Service</code> from having to change just because the <code>containerPort</code> changed in your <code>PodSpec</code>, but at your discretion.</p>
<p>I'm a little surprised that <code>kubectl</code> didn't offer helpful feedback when you provided it that malformed yaml, but either way, I believe the snippet above is correct. As the docs specify, the names must be both unique within the Service and "DNS-compatible", so no underscores, spaces, or other special characters.</p>
|
<p>I have been using kops to build the kubernetes cluster, which is a really easy-to-use tool; however, I am unable to find a way to change the admin password that is auto-generated while the cluster is being created.</p>
| <blockquote>
<p>As it is currently not possible to modify or delete + create secrets of type "Secret" with the CLI you have to modify them directly in the kops s3 bucket.</p>
<p>They are stored /clustername/secrets/ and contain the secret as a base64 encoded string. To change the secret base64 encode it with:</p>
<p><code>echo -n 'MY_SECRET' | base64</code></p>
<p>and replace it in the "Data" field of the file. Verifiy your change with get secrets and perform a rolling update of the cluster</p>
</blockquote>
<p>Seen in <strong>Managing secrets</strong> in Kops's documentation: <a href="https://github.com/kubernetes/kops/blob/master/docs/secrets.md#workaround-for-changing-secrets-with-type-secret" rel="noreferrer">Workaround for changing secrets with type "Secret"</a></p>
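<p>As a rough sketch of that workaround (the bucket, cluster name and secret file name below are placeholders for your own setup):</p>
<pre><code># generate the new base64 encoded value
echo -n 'MY_SECRET' | base64

# pull the secret file from the kops state store, edit the "Data" field, push it back
aws s3 cp s3://<state-store-bucket>/<cluster-name>/secrets/<secret-name> ./secret
# ... replace the base64 string in the "Data" field of ./secret ...
aws s3 cp ./secret s3://<state-store-bucket>/<cluster-name>/secrets/<secret-name>

# verify and roll the cluster so the change is picked up
kops get secrets
kops rolling-update cluster <cluster-name> --yes
</code></pre>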
|
<p>I have a helm chart template, and I would like to use the result of <code>whoami</code> as a template variable. How do I do this?</p>
<p>So if my values.yaml file has: </p>
<pre><code>env:
uniqueId: {{ whoami? }}
</code></pre>
<p>how might I do this?</p>
<p>note: I am on os x, so whoami I believe assumes a linux environment, however, in the spirit of this being deployment agnostic I presume there is a non-unix way of doing this.</p>
| <p>The Helm Chart's "values.yaml" file is typically for default values. Anything that you'd like to override should be done at time of install/upgrade of the chart.</p>
<p>The Helm docs show a lot of different ways in which values can be used: <a href="https://github.com/kubernetes/helm/blob/master/docs/charts.md" rel="nofollow noreferrer">https://github.com/kubernetes/helm/blob/master/docs/charts.md</a></p>
<p>In this case, one option is to set the value on the command line:</p>
<pre><code>helm install --set env.whoami=$(id -un) ./your-chart.tgz
</code></pre>
<p>You could then have a value.yaml file like:</p>
<pre><code>env:
whoami: "default"
</code></pre>
<p>Finally, you can use it in a template like:</p>
<pre><code> containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Chart.Version }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: WHOAMI
value: {{ .Values.env.whoami }}
</code></pre>
<p>Obviously your template will vary, the above is just a snippet.</p>
|
<p>Just curious, with mesos I'm used to being able to do <code>systemctl stop mesos-master</code> and <code>systemctl start mesos-master</code> (if I need to bounce it for some reason). With k8s, there are multiple components to 'stop' in the control plane, such as apiserver, controller-manager, etc.</p>
<p>When creating a cluster with kubeadm, it runs the control plane as pods (no replica set, or anything like that, perhaps because I only have a single master at the moment).</p>
<p>What's the best way to stop the things in the control plane and then start them again, without tearing down the cluster?</p>
| <p>The Kubernetes control plane pods are often deployed as <a href="https://kubernetes.io/docs/tasks/administer-cluster/static-pod/" rel="noreferrer">Static Pods</a>. These are not managed by any kind of Deployment controller, but are defined in static (hence the name) configuration files that are placed in a configuration directory (like for example <code>/etc/kubelet.d/</code> or <code>/etc/kubernetes/manifests</code>, depending on how your cluster is set up). These definition files are picked up by the Kubelet running on the Kubernetes master node that creates the respective pods.</p>
<p>According to the documentation, you can stop/delete static pods simply by removing the respective configuration files, and start/create them again by creating new files:</p>
<blockquote>
<p>Running kubelet periodically scans the configured directory (<code>/etc/kubelet.d</code> in our example) for changes and adds/removes pods as files appear/disappear in this directory.</p>
<pre><code>[joe@my-node1 ~] $ mv /etc/kubelet.d/static-web.yaml /tmp
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
// no nginx container is running
[joe@my-node1 ~] $ mv /tmp/static-web.yaml /etc/kubelet.d/
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED ...
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
</code></pre>
</blockquote>
<p>To temporarily disable/enable these pods, simply move the definition files to a safe location and back again:</p>
<pre><code>$ mv /etc/kubelet.d/*.yaml /tmp # Disable static pods
$ mv /tmp/*.yaml /etc/kubelet.d # Re-enable static pods
</code></pre>
|
<p>I'm just getting started with kubernetes on Windows 10.
I downloaded the bits from <a href="https://github.com/kubernetes/minikube/releases" rel="noreferrer">here</a>.</p>
<p>While attempting to start minikube from powershell:</p>
<pre><code>PS C:\WINDOWS\system32> minikube start --vm-driver=hyperv
</code></pre>
<p>I'm encountering the error:</p>
<pre><code>Starting local Kubernetes v1.8.0 cluster...
Starting VM...
E1202 06:53:29.869106 2368 start.go:150] Error starting host: Error starting stopped host: exit status 1.
</code></pre>
<p>While the documentation does not mention any prerequisites to run minikube, is there any setting on Windows 10 that needs to change to make it run?</p>
| <p>While I do not completely understand what happened, I chanced upon <a href="https://github.com/kubernetes/minikube/issues/1400" rel="noreferrer">this article</a>. </p>
<hr>
<p>I got minikube running using the following steps: </p>
<pre><code> PS C:\WINDOWS\system32> minikube delete
PS C:\WINDOWS\system32> kubectl config use-context minikube
PS C:\WINDOWS\system32> minikube start --vm-driver=hyperv
</code></pre>
|
<p>Tiller is not working properly in my kubernetes cluster. I want to delete everything Tiller. Tiller (2.5.1) has 1 Deployment, 1 ReplicaSet and 1 Pod.</p>
<p><strong>I tried: kubectl delete deployment tiller-deploy -n kube-system</strong></p>
<ul>
<li>results in "deployment "tiller-deploy" deleted"</li>
<li>however, tiller-deploy is immediately recreated</li>
<li>kubectl get deployments -n kube-system shows tiller-deploy running again</li>
</ul>
<p><strong>I also tried: kubectl delete rs tiller-deploy-393110584 -n kube-system</strong></p>
<ul>
<li>results in "replicaset "tiller-deploy-2745651589" deleted"</li>
<li>however, tiller-deploy-2745651589 is immediately recreated</li>
<li>kubectl get rs -n kube-system shows tiller-deploy-2745651589 running again</li>
</ul>
<p><strong>What is the correct way to permanently delete Tiller?</strong></p>
| <p>To uninstall tiller from a kubernetes cluster:</p>
<pre><code>helm reset
</code></pre>
<p>To delete failed tiller from a kubernetes cluster:</p>
<pre><code>helm reset --force
</code></pre>
|
<p>We are running Prometheus on our cloud under Kubernetes, and are able to get stats back for the memory, CPU usage etc of the nodes. Now we want to be able to scrape our own custom time series from pods running on the cloud. As I understand it, this requires a job with a kubernetes_sd_config of role 'pod' (and a relabel that will select only pods with a given name). So far so good. However, from the docs I read:</p>
<blockquote>
<p>The pod role discovers all pods and exposes their containers as
targets. For each declared port of a container, a single target is
generated. If a container has no specified ports, a port-free target
per container is created for manually adding a port via relabeling.</p>
</blockquote>
<p>So each pod must declare a port (in its k8s yaml, presumably?) otherwise it comes up port-free. In which case how would I ensure that each pod has a unique port number? (and if I didn't, it would presumably fail if two pods were started on the same node)</p>
| <blockquote>
<p>and a relabel that will select only pods with a given name</p>
</blockquote>
<p>I suppose that's possible, but in our config, we just omit the scrape annotation (discussed below) and Prom doesn't contact those Pods; we haven't touched the Prom config itself in months, but change the Pod annotations more frequently.</p>
<blockquote>
<p>So each pod must declare a port</p>
</blockquote>
<p>In <a href="https://github.com/prometheus/prometheus/blob/release-1.8/documentation/examples/prometheus-kubernetes.yml#L254" rel="nofollow noreferrer">their example config</a> one can see where the annotations from the Pod are extracted and used to point the Prom scraper at the port and path for that specific Pod (including, of course, whether to scrape it at all).</p>
<pre><code>metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /custom-prom-metrics
    prometheus.io/port: "9145"
</code></pre>
<p>That snippet could go into your ReplicationController, Deployment, ReplicaSet, Pod (if you deploy "bare" Pods), or attached to a Pod post-facto using the Kubernetes API. To the best of my knowledge, one can apply and remove those annotations at will, and Prom will pick it up in the next scrape.</p>
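<p>For example, annotating an already-running Pod is as simple as (the pod name is a placeholder):</p>
<pre><code>kubectl annotate pod my-app-pod prometheus.io/scrape=true prometheus.io/port=9145 --overwrite
</code></pre>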
<blockquote>
<p>how would I ensure that each pod has a unique port number? (and if I didn't, it would presumably fail if two pods were started on the same node)</p>
</blockquote>
<p>Assuming you have a reasonable Kubernetes install, with a software defined network, then every Pod has its own IP address, and thus its own port namespace. If you wanted to standardize on <em>every</em> Pod using 9145, that would be perfectly fine.</p>
|
<p>I'm trying to build a docker image from my <code>go</code> app. The github repo is private. I'm a bit new to go (and docker), but I thought that the <code>/vendor</code> file managed through <code>glide</code> should circumvent the problem? Also why does docker have to get everything from github? Everything is available locally. </p>
<p>command:</p>
<pre><code>docker build -t testimage .
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM golang:1.8-onbuild
</code></pre>
<p>It fails at the step</p>
<pre><code>exec go get -v -d
.......
fatal: could not read Username for 'https://github.com': No such device or address
</code></pre>
<p>How can I get around this? Thx</p>
| <p>Only use <code>golang:1.8-onbuild</code> if you need to compile your go code image INSIDE a container. That's typically useful for CI builds. Otherwise avoid since it's a massive image.</p>
<p>A much much faster solution is to build your Go application locally (dev env for instance) and copy the final Go application to a very lightweight container.</p>
<p>I'll give you our standard process.</p>
<ol>
<li><p>If your local machine is a Mac or Windows, you need to <code>cross-compile</code> your Go code for linux using: <code>GOOS=linux GOARCH=amd64 go build -o myapp_linux-amd64</code>. The <code>linux-amd64</code> is just a convention to remind yourself that the file is compiled for linux, not mac or windows.</p></li>
<li><p>We also deploy our Go apps to the very lightweight Alpine linux container. Alpine is now the standard Docker base image for applications. It's very small and secure but it has one major quirk; it uses musl instead of the more common glibc as the underlying OS/IO library, so we need a few more compilation flags: <code>-a -ldflags '-w -extldflags "-static"'</code></p></li>
<li><p>As an extra, we also remove the developer's own path in the filename listed in a stacktrace using: <code>-gcflags=-trimpath=$(pwd) -asmflags=-trimpath=$(pwd)</code></p></li>
</ol>
<p>The resulting compile command that we use is:</p>
<pre><code>CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GOROOT_FINAL=$(pwd) go build -a -ldflags '-w -extldflags "-static"' -gcflags=-trimpath=$(pwd) -asmflags=-trimpath=$(pwd) -o myapp_linux-amd64
</code></pre>
<p>You can now build your app locally on your dev env and create the image using the following Dockerfile:</p>
<pre><code>FROM alpine:3.6
COPY ./myapp_linux-amd64 /usr/local/bin/myapp
ENTRYPOINT []
CMD /usr/local/bin/myapp
</code></pre>
<p>build it using:</p>
<p><code>docker build -t myimagename:tag .</code></p>
|
<p>I am trying to install Kubernetes.
I ran into a problem when checking a container running on Kubernetes.
I set the service type to NodePort, but I could not access it from any node other than the one the container is running on.
I want to make it accessible from other machines, so please tell me what is wrong with my setup.
I tried externalIPs and LoadBalancer, but neither worked.</p>
<p>Enviroment</p>
<ul>
<li>OS:Ubuntu 16.04 LTS</li>
<li>Kubernetes:1.8</li>
<li>Docker:17.09.0-ce</li>
<li>etcd:3.2.8</li>
<li>flannel:0.9.0</li>
</ul>
<p>Network</p>
<ul>
<li>Physical:10.1.1.0/24</li>
<li>flannel:172.16.0.0/16</li>
<li>docker:192.168.0.0/16</li>
</ul>
<p>Machines</p>
<ul>
<li>Master Node(2nodes):10.1.1.24,10.1.1.25</li>
<li>Worker Node(2nodes):10.1.1.26,10.1.1.27</li>
</ul>
<p>kubectl describe svc nginx-cluster</p>
<pre><code>Name: nginx-cluster
Namespace: default
Labels: app=nginx-demo
Annotations: <none>
Selector: app=nginx-demo
Type: ClusterIP
IP: 172.16.236.159
Port: <unset> 8090/TCP
TargetPort: 80/TCP
Endpoints: 192.168.24.2:80
Session Affinity: None
Events: <none>
</code></pre>
<p>kubectl describe svc nginx-service</p>
<pre><code>Name: nginx-service
Namespace: default
Labels: app=nginx-demo
Annotations: <none>
Selector: app=nginx-demo
Type: NodePort
IP: 172.16.199.69
Port: <unset> 8090/TCP
TargetPort: 80/TCP
NodePort: <unset> 31659/TCP
Endpoints: 192.168.24.2:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>running container worker node(10.1.1.27)</p>
<p>curl 10.1.1.27:31659</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>worker node(10.1.1.26)</p>
<p>curl 10.1.1.27:31659</p>
<pre><code>curl: (7) Failed to connect to 10.1.1.27 port 31659:Connection timed out.
</code></pre>
<p>other machine(10.1.1.XX)</p>
<p>curl 10.1.1.27:31659</p>
<pre><code>curl: (7) Failed to connect to 10.1.1.27 port 31659:Connection timed out.
</code></pre>
<p>kubectl get pods -o wide</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE
echoserver-848b75d85-9fx7r 1/1 Running 3 6d 192.168.70.2 k8swrksv01
nginx-demo-85cc49574c-wv2b9 1/1 Running 3 6d 192.168.2.2 k8swrksv02
</code></pre>
<p>kubectl get svc -o wide</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
clusterip ClusterIP 172.16.39.77 <none> 80/TCP 6d run=echoserver
kubernetes ClusterIP 172.16.0.1 <none> 443/TCP 10d <none>
nginx-cluster ClusterIP 172.16.236.159 <none> 8090/TCP 6d app=nginx-demo
nginx-service NodePort 172.16.199.69 <none> 8090:31659/TCP 6d app=nginx-demo
nodeport NodePort 172.16.38.40 <none> 80:31317/TCP 6d run=echoserver
</code></pre>
<p>netstat -ntlp</p>
<pre><code>tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1963/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 2202/kube-proxy
tcp 0 0 127.0.0.1:4243 0.0.0.0:* LISTEN 1758/dockerd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 996/sshd
tcp6 0 0 :::4194 :::* LISTEN 1963/kubelet
tcp6 0 0 :::10250 :::* LISTEN 1963/kubelet
tcp6 0 0 :::31659 :::* LISTEN 2202/kube-proxy
tcp6 0 0 :::10255 :::* LISTEN 1963/kubelet
tcp6 0 0 :::10256 :::* LISTEN 2202/kube-proxy
tcp6 0 0 :::31317 :::* LISTEN 2202/kube-proxy
tcp6 0 0 :::22 :::* LISTEN 996/sshd
</code></pre>
<p>iptables-save</p>
<pre><code>*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-AZ4EGFEAU4RTSLJO - [0:0]
:KUBE-SEP-C7HQKKO26GIFOZZM - [0:0]
:KUBE-SEP-EWKNS2YCPXGJCXDC - [0:0]
:KUBE-SEP-LQVPUPFGW6BWATIP - [0:0]
:KUBE-SEP-OMMOFZ27GPKZ4OPA - [0:0]
:KUBE-SEP-UD3HOGDD5NDLNY74 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-CQNAS6RSUGJF2C2D - [0:0]
:KUBE-SVC-GKN7Y2BSGW4NJTYL - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-XP7QDA4CRQ2QA33W - [0:0]
:KUBE-SVC-Z5P6OMNAEVLAQUTS - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 192.168.2.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 192.168.0.0/16 -d 192.168.0.0/16 -j RETURN
-A POSTROUTING -s 192.168.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 192.168.0.0/16 -d 192.168.2.0/24 -j RETURN
-A POSTROUTING ! -s 192.168.0.0/16 -d 192.168.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx-service:" -m tcp --dport 31659 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx-service:" -m tcp --dport 31659 -j KUBE-SVC-GKN7Y2BSGW4NJTYL
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nodeport:" -m tcp --dport 31317 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nodeport:" -m tcp --dport 31317 -j KUBE-SVC-XP7QDA4CRQ2QA33W
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-AZ4EGFEAU4RTSLJO -s 192.168.70.2/32 -m comment --comment "default/clusterip:" -j KUBE-MARK-MASQ
-A KUBE-SEP-AZ4EGFEAU4RTSLJO -p tcp -m comment --comment "default/clusterip:" -m tcp -j DNAT --to-destination 192.168.70.2:8080
-A KUBE-SEP-C7HQKKO26GIFOZZM -s 192.168.70.2/32 -m comment --comment "default/nodeport:" -j KUBE-MARK-MASQ
-A KUBE-SEP-C7HQKKO26GIFOZZM -p tcp -m comment --comment "default/nodeport:" -m tcp -j DNAT --to-destination 192.168.70.2:8080
-A KUBE-SEP-EWKNS2YCPXGJCXDC -s 10.1.1.25/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-EWKNS2YCPXGJCXDC -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-EWKNS2YCPXGJCXDC --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.1.25:6443
-A KUBE-SEP-LQVPUPFGW6BWATIP -s 192.168.2.2/32 -m comment --comment "default/nginx-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-LQVPUPFGW6BWATIP -p tcp -m comment --comment "default/nginx-service:" -m tcp -j DNAT --to-destination 192.168.2.2:80
-A KUBE-SEP-OMMOFZ27GPKZ4OPA -s 10.1.1.24/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-OMMOFZ27GPKZ4OPA -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-OMMOFZ27GPKZ4OPA --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.1.24:6443
-A KUBE-SEP-UD3HOGDD5NDLNY74 -s 192.168.2.2/32 -m comment --comment "default/nginx-cluster:" -j KUBE-MARK-MASQ
-A KUBE-SEP-UD3HOGDD5NDLNY74 -p tcp -m comment --comment "default/nginx-cluster:" -m tcp -j DNAT --to-destination 192.168.2.2:80
-A KUBE-SERVICES -d 172.16.236.159/32 -p tcp -m comment --comment "default/nginx-cluster: cluster IP" -m tcp --dport 8090 -j KUBE-SVC-Z5P6OMNAEVLAQUTS
-A KUBE-SERVICES -d 172.16.199.69/32 -p tcp -m comment --comment "default/nginx-service: cluster IP" -m tcp --dport 8090 -j KUBE-SVC-GKN7Y2BSGW4NJTYL
-A KUBE-SERVICES -d 172.16.38.40/32 -p tcp -m comment --comment "default/nodeport: cluster IP" -m tcp --dport 80 -j KUBE-SVC-XP7QDA4CRQ2QA33W
-A KUBE-SERVICES -d 172.16.39.77/32 -p tcp -m comment --comment "default/clusterip: cluster IP" -m tcp --dport 80 -j KUBE-SVC-CQNAS6RSUGJF2C2D
-A KUBE-SERVICES -d 172.16.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-CQNAS6RSUGJF2C2D -m comment --comment "default/clusterip:" -j KUBE-SEP-AZ4EGFEAU4RTSLJO
-A KUBE-SVC-GKN7Y2BSGW4NJTYL -m comment --comment "default/nginx-service:" -j KUBE-SEP-LQVPUPFGW6BWATIP
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-OMMOFZ27GPKZ4OPA --mask 255.255.255.255 --rsource -j KUBE-SEP-OMMOFZ27GPKZ4OPA
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-EWKNS2YCPXGJCXDC --mask 255.255.255.255 --rsource -j KUBE-SEP-EWKNS2YCPXGJCXDC
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OMMOFZ27GPKZ4OPA
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-EWKNS2YCPXGJCXDC
-A KUBE-SVC-XP7QDA4CRQ2QA33W -m comment --comment "default/nodeport:" -j KUBE-SEP-C7HQKKO26GIFOZZM
-A KUBE-SVC-Z5P6OMNAEVLAQUTS -m comment --comment "default/nginx-cluster:" -j KUBE-SEP-UD3HOGDD5NDLNY74
COMMIT
*filter
:INPUT ACCEPT [40:14606]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [42:6275]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
</code></pre>
| <p>It's caused by default DROP on FORWARD chain (which in turn is caused by docker).</p>
<p>You can see it working again if you add <code>iptables -A FORWARD -j ACCEPT</code> rule to your node.</p>
<p>The k8s issue is here: <a href="https://github.com/kubernetes/kubernetes/issues/39823" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/39823</a> but the actual fix is here <a href="https://github.com/kubernetes/kubernetes/pull/52569" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/52569</a> (expected to be in 1.9).</p>
|
<p>I have a simple application that Gets and Puts information from a Datastore.</p>
<p>It works everywhere, but when I run it from inside the Kubernetes Engine cluster, I get this output:</p>
<pre><code>Error from Get()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
Error from Put()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
</code></pre>
<p>I'm using the <code>cloud.google.com/go/datastore</code> package and the Go language.</p>
<p>I don't know why I'm getting this error since the application works everywhere else just fine.</p>
<p><strong>Update:</strong></p>
<p>Looking for an answer I found this comment on Google Groups:</p>
<blockquote>
<p>In order to use Cloud Datastore from GCE, the instance needs to be
configured with a couple of extra scopes. These can't be added to
existing GCE instances, but you can create a new one with the
following Cloud SDK command:</p>
<p>gcloud compute instances create hello-datastore --project
--zone --scopes datastore userinfo-email</p>
</blockquote>
<p>Would that mean I can't use Datastore from GKE by default?</p>
<p><strong>Update 2:</strong></p>
<p>I can see that when creating my cluster I didn't enable any permissions (which are disabled for most services by default). I suppose that's what's causing the issue:</p>
<p><img src="https://i.stack.imgur.com/gTx4i.png" width="400"/></p>
<p>Strangely, I can use CloudSQL just fine even though it's disabled (using the <code>cloudsql_proxy</code> container).</p>
| <p>So what I learnt in the process of debugging this issue was that:</p>
<ul>
<li><p>During the creation of a Kubernetes Cluster you can specify permissions for the GCE nodes that will be created.</p></li>
<li><p>If you for example enable Datastore access on the cluster nodes during creation, you will be able to access Datastore directly from the Pods without having to set up anything else.</p></li>
<li><p>If your cluster node permissions are disabled for most things (default settings) like mine were, you will need to create an appropriate Service Account for each application that wants to use a GCP resource like Datastore.</p></li>
<li><p>Another alternative is to create a new node pool with the <code>gcloud</code> command, set the desired permission scopes and then migrate all deployments to the new node pool (rather tedious).</p></li>
</ul>
<p>So at the end of the day I fixed the issue by creating a Service Account for my application, downloading the JSON authentication key, creating a Kubernetes secret which contains that key, and in the case of Datastore, I set the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable to the path of the mounted secret JSON key. </p>
<p>This way when my application starts, it checks if the <code>GOOGLE_APPLICATION_CREDENTIALS</code> variable is present, and authenticates Datastore API access based on the JSON key that the variable points to.</p>
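<p>For completeness, the key and the secret themselves can be created roughly like this (the service account name and file names are just examples):</p>
<pre><code>gcloud iam service-accounts keys create key.json \
  --iam-account=foo@my-project.iam.gserviceaccount.com

kubectl create secret generic foo-service-account \
  --from-file=credentials.json=key.json
</code></pre>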
<p>Deployment YAML snippet:</p>
<pre><code> ...
containers:
- image: foo
name: foo
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /auth/credentials.json
volumeMounts:
- name: foo-service-account
mountPath: "/auth"
readOnly: true
volumes:
- name: foo-service-account
secret:
secretName: foo-service-account
</code></pre>
|
<p>I am using </p>
<p><code>kubectl scale --replicas=0 -f deployment.yaml</code> </p>
<p>to stop all my running pods. Please let me know if there are better ways to bring down all running pods to Zero keeping configuration, deployments etc.. intact, so that I can scale up later as required.</p>
| <p>You are doing the correct action; <em>traditionally</em> the <code>scale</code> verb is applied just to the resource name, as in <code>kubectl scale deploy my-awesome-deployment --replicas=0</code>, which removes the need to always point at the <em>specific file</em> that describes that deployment, but there's nothing wrong (that I know of) with using the file if that is more convenient for you.</p>
|
<p>I have an <code>express</code> web server with static files. Let's call this my <code>express-deployment</code>. </p>
<p>I'd like to use my <code>ingress-nginx</code> to serve static files from my <code>express-deployment</code> without ever actually hitting my express server. </p>
<p>In <code>nginx</code> this is done with the <code>location</code> directive where you point to files locally hosted. While I see an option for <code>locations-snippet</code> in the <code>ingress-nginx</code> <code>configMap</code>, I'm not entirely sure how I would have this point to files in another container. </p>
<p>Is this possible with <code>ingress-nginx</code>? If so how would I go about it? Alternatively, is this something that requires an nginx container to be hosted along side my express server? (Seems odd that I would need 2 nginx for that) </p>
| <p>In theory, you could have a PV in RWX mode mounted to both express and ingress and provide custom config to the nginx-ingress pods, but that should be avoided. The Ingress Controller has one responsibility - implement the Ingress rules defined in your cluster. To serve static content you should have a pod that does that, which indeed means running a second nginx in your stack. The thing is that you should treat your ingress controller as part of the infrastructure providing generic cluster functionality, while serving static files from some place (or container, if they are versioned/built as docker images) is de facto part of your application.</p>
|
<h1>Question</h1>
<p>What is the purpose of the K8S_HOST_URL configuration parameter in EFK? In EFK, K8S_HOST_URL exists as an environment variable and it looks like it is used by fluentd to communicate with the Kubernetes API server, as specified in the <a href="https://github.com/openshift/origin-aggregated-logging/blob/877d84296ce113fbafca6177612741054ed5a584/fluentd/configs.d/openshift/filter-k8s-meta.conf" rel="nofollow noreferrer">filter-k8s-meta.conf</a> of the fluentd configuration.</p>
<p>I looked for documentation but could not find it in the OpenShift <a href="https://docs.openshift.com/container-platform/3.4/install_config/aggregate_logging.html" rel="nofollow noreferrer">Aggregating Container Logs</a> documentation. I searched Google but could not find a definite answer either.</p>
<p>Please suggest documentation which explains this in detail. </p>
| <p>First "<code>K8S_HOST_URL</code>" does not show up in <a href="https://github.com/elastic/elasticsearch/search?utf8=%E2%9C%93&q=K8S_HOST_URL&type=" rel="nofollow noreferrer"><code>elastic/elasticsearch</code></a>, <a href="https://github.com/fluent/fluentd/search?utf8=%E2%9C%93&q=K8S_HOST_URL&type=" rel="nofollow noreferrer"><code>fluent/fluentd</code></a> or <a href="https://github.com/elastic/kibana/search?utf8=%E2%9C%93&q=K8S_HOST_URL&type=" rel="nofollow noreferrer"><code>elastic/kibana</code></a> (the 3 projects from "EFK")</p>
<p>It does show up only in <a href="https://github.com/openshift/origin-aggregated-logging/search?utf8=%E2%9C%93&q=K8S_HOST_URL&type=" rel="nofollow noreferrer"><code>openshift/origin-aggregated-logging</code></a>, which is the repo with the image definitions for the components of the logging stack as well as tools for building and deploying them.</p>
<p>So this seems to be how Openshift deploys its stack (through Kubernetes), which it will monitor (see "<a href="https://github.com/openshift/origin-aggregated-logging/blob/14c08207d03e60c84786d325a677380c56453469/docs/proposals/kube_events_design_doc.md" rel="nofollow noreferrer">Collect and store Kubernetes events</a>")</p>
<p>See also "<a href="https://github.com/openshift/origin-aggregated-logging/blob/a7b161c011f3485edda6978d36a0d703628bd024/docs/mux-logging-service.md" rel="nofollow noreferrer">mux-logging service</a>" (still from <a href="https://github.com/openshift/origin-aggregated-logging" rel="nofollow noreferrer"><code>openshift/origin-aggregated-logging</code></a>):</p>
<blockquote>
<p>Fluentd gets Kubernetes metadata to enrich each container log message with the namespace UUID, the pod UUID, and pod labels and annotations. </p>
</blockquote>
|
<p>I have several apps deployed into a kubernetes cluster. I have the recipes etc. as yaml files and I deploy and apply everything with <code>kubectl</code>.</p>
<p>I decided to migrate to helm because of its features and to avoid replicating code between dev and prod (I could use variables, etc.).</p>
<p>The problem is: my services are AWS ELBs, and, as far as I found out, to migrate something already deployed with kubectl to helm, I need to delete it and install it again with helm - which will destroy and re-create my ELBs, so I'll need to change DNS records, etc. On top of all that, I'll have downtime - which is not really desired.</p>
<p>I thought about renaming the old-things and create the new things with the right name, keeping a common label to use in the service - so I could create the new one, change the DNS records, and then delete the old stuff - but apparently renames are not allowed by kubernetes.</p>
<p>Another strategy would be to launch a new cluster - which I don't really want to do.</p>
<p>Are there any other alternatives?</p>
| <p>A properly designed chart should namespace deployed resources by <code>.Release.Name</code>. That way you can deploy chart side by side with existing software you have. It's likely that you will be able create large part of your stack in parallel and then update your off-chart services to point to on-chart pods, that way you can have both on-chart and off-chart services defined and working with two distinct ELBs and if you want to get rid of off-charts, just edit DNS and let it propagate, then, after a day, week or month you can scrap it and be left with chart only.</p>
|
<p>Does anyone know how to find the name of a kubernetes cluster in the Azure portal? I did not create the cluster, but I'm trying to connect to it and I can't find what to put in the --name flag.</p>
| <p>Well, the name of the kubernetes cluster is the name of the resource you see in the portal. As simple as that.</p>
<p>Just find the cluster in question and look at what it is called.</p>
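<p>If you prefer the CLI over the portal, something like this should also list it (assuming it is an AKS-managed cluster):</p>
<pre><code>az aks list --output table
az aks get-credentials --resource-group <resource-group> --name <cluster-name>
</code></pre>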
|
<p>I am new to helm charts, so please correct me if my understanding is wrong. I have a service which I am trying to deploy using helm charts. I want to change the config map name and the key values it reads depending on the deployment environment. Hence I want to add conditional logic in values.yaml.</p>
<p>Can someone point me to some document/link which explains how to add conditional logic in values.yaml? </p>
| <p>A chart's <code>values.yaml</code> is primarily used to set default values, regardless of the environment. It exists to fill chart templates with values. It is not designed to be a template itself, so there is no logic you can apply inside a <code>values.yaml</code> file.</p>
<p>Each environment should have its own <code>values.yaml</code> file. You could store those inside the chart itself, like:</p>
<pre><code>.
├── Chart.yaml
├── README
├── templates
│   ├── config.yaml
│   ├── deployment.app.yaml
│   └── service.app.yaml
├── values.prod.yaml
├── values.test.yaml
└── values.yaml
</code></pre>
<p>Now, when you deploy a chart, you can use the environment specific <code>values.<env>.yaml</code> to override the default values. For your test environment this may look like this:</p>
<pre><code>helm upgrade --install my-chart path/to/my/chart --values path/to/my/chart/values.test.yaml
</code></pre>
<p>Of course you could store the <code>values.<env>.yaml</code> files also outside of your chart directory. You just need to find a way to make them available at chart upgrade/install time to override the chart templates default <code>values.yaml</code>.</p>
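<p>For the config map name from your question, a sketch could be a default in values.yaml that each environment file overrides (the names here are made up):</p>
<pre><code># values.yaml
configMapName: my-config

# values.prod.yaml
configMapName: my-config-prod
</code></pre>
<p>Your templates then simply reference <code>{{ .Values.configMapName }}</code> and never need any conditional logic.</p>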
|
<p>I have a Kubernetes cluster running on Google Kubernetes Engine (GKE) with network policy support enabled.
I created an nginx deployment and load balancer for it:</p>
<pre><code>kubectl run nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
</code></pre>
<p>Then I created this network policy to make sure other pods in the cluster won't be able to connect to it anymore:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: access-nginx
spec:
podSelector:
matchLabels:
run: nginx
ingress:
- from:
- namespaceSelector:
matchLabels:
name: kube-system
ports:
- protocol: TCP
port: 80
</code></pre>
<p>Now other pods in my cluster can't reach it (as intended):</p>
<pre><code>kubectl run busybox --rm -ti --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.63.254.50:80)
wget: download timed out
</code></pre>
<p>However, it surprised me that using my external browser I also can't connect anymore to it through the load balancer:</p>
<pre><code>open http://$(kubectl get svc nginx --output=jsonpath={.status.loadBalancer.ingress[0].ip})
</code></pre>
<p>If I delete the policy it starts to work again.</p>
<p>So, my question is: how do I block other pods from reaching nginx, but keep access through the load balancer open?</p>
| <p>I talked about this in my <a href="https://github.com/ahmetb/kubernetes-networkpolicy-tutorial/blob/a18f9e6e/08-allow-external-traffic.md" rel="nofollow noreferrer">Network Policy recipes repository</a>.</p>
<p>"<em>Allowing EXTERNAL load balancers while DENYING local traffic</em>" is not a use case that makes sense, therefore it's not possible to using network policy.</p>
<p>For <code>Service</code> type=LoadBalancer and <code>Ingress</code> resources to work, you must allow ALL traffic to the pods selected by these resources.</p>
<p>If you REALLY want to, you can use the <code>from.ipBlock.cidr</code> and <code>from.ipBlock.except</code> fields to allow traffic from <code>0.0.0.0/0</code> (all IPv4) and then exclude <code>10.0.0.0/8</code> (or whatever private IP range GKE uses).</p>
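<p>A sketch of that last approach (the excluded range is an assumption; use whatever internal CIDRs apply to your cluster):</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx-external-only
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
    ports:
    - protocol: TCP
      port: 80
</code></pre>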
|
<p>I get a weird new line when my chart renders a template included from another file using <code>{{ include }}</code>. For instance, my manifest looks like this </p>
<pre><code> containers:
- name: {{ .Release.Name }}
image: {{ .Values.global.image}}:{{ .Values.global.imageTag }}
imagePullPolicy: {{ .Values.global.pullPolicy }}
ports:
- containerPort: {{ .Values.gloabl.containerPort }}
{{ include "common_deployment" . }}
</code></pre>
<p>and my <code>common_deployment</code> is defined as </p>
<pre><code>{{- define "common_deployment" }}
envFrom:
- secretRef:
name: {{ .Release.Name }}-secret
{{- end -}}
</code></pre>
<p>when I look at my manifest after doing a dry run on Helm, my template looks something like this </p>
<pre><code> containers:
- name: test
image: myrepo/myimage:latest
imagePullPolicy: Always
ports:
- containerPort: 4444
envFrom:
- secretRef:
name: test-secret
</code></pre>
<p>Notice how there is a new line between the <code>ports</code> and the <code>envFrom</code>. I'm wondering if this will affect how my pods turn out, because there are issues with volumes being mounted and I want to make sure that this templating problem is the culprit before going down another rabbit hole.</p>
| <p>You can use a hyphen to suppress the newline on template commands. You're already using it for <code>define</code> and <code>end</code>.</p>
<p>Similarly, you should use <code>{{- include ... -}}</code>.</p>
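<p>Applied to the manifest from your question, that would look something like this (the left-hand hyphen is what removes the blank line you are seeing):</p>
<pre><code>    ports:
    - containerPort: {{ .Values.global.containerPort }}
{{- include "common_deployment" . -}}
</code></pre>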
|
<p>I would like to know if there is a way to force Kubernetes, during a deploy, to use every node in the cluster.
The question comes from some attempts I have made, where I noticed a situation like this:</p>
<ul>
<li><p>a cluster of 3 nodes</p></li>
<li><p>I update a deployment with a command like: <code>kubectl set image deployment/deployment_name my_repo:v2.1.2</code></p></li>
<li><p>Kubernetes updates the cluster</p></li>
</ul>
<p>At the end I execute <code>kubectl get pod</code> and I notice that 2 pods have been deployed on the same node.
So after the update, the cluster has this configuration:</p>
<ul>
<li>one node with 2 pods</li>
<li>one node with 1 pod</li>
<li>one node without any pod (totally without any workload)</li>
</ul>
| <p>The scheduler will try to figure out the most reasonable way of scheduling at a given point in time, which can change later on and result in situations like the one you described. Two simple ways to manage this in one way or another are:</p>
<ul>
<li>use DaemonSet instead of Deployment : will make sure you have one and only one pod per node (matching nodeSelector / tolerations etc.)</li>
<li><p>use PodAntiAffinity : you can make sure that two pods of the same deployment in the same version are never deployed on the same node. This is what I personally prefer for many apps (unless I want more than one to be scheduled per node). Note that you will be in a bit of trouble if you decide to scale your deployment to more replicas than you have nodes.
Example of the versioned PodAntiAffinity I use:</p>
<pre><code>metadata:
labels:
app: {{ template "fullname" . }}
version: {{ .Values.image.tag }}
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values: ["{{ template "fullname" . }}"]
- key: version
operator: In
values: ["{{ .Values.image.tag }}"]
topologyKey: kubernetes.io/hostname
</code></pre></li>
<li><p>consider fiddling with <a href="https://github.com/kubernetes-incubator/descheduler" rel="nofollow noreferrer">Descheduler</a>, which is like an evil twin of the Kube Scheduler component and will delete pods so that they get rescheduled differently</p></li>
</ul>
|
<p>Problem: the code we're running on Kubernetes Pods has a very high variance across its runtime; specifically, it has occasional CPU & Memory <em>spikes</em> when certain conditions are triggered. These triggers involve user queries with hard realtime requirements (the system has to respond within <5 seconds). </p>
<p>Under conditions where the node serving the spiking pod doesn't have enough CPU/RAM, Kubernetes responds to these excessive requests by killing the pod altogether, which results in <em>no output</em> at all.</p>
<p>In what way can we ensure that these spikes are taken into account when pods are allocated, and more critically, that no pod shutdown happens for these reasons?</p>
<p>Thanks!</p>
| <p>High availability of pods with load can be achieved in two ways:</p>
<blockquote>
<p><strong>Configuring More CPU/Memory</strong></p>
</blockquote>
<p>As the application requires more CPU/memory during peak times, configure the POD in such a way that the allocated resources can absorb the extra load. Configure the POD something like this:</p>
<pre><code>resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
</code></pre>
<p>You can increase the limits based on the usage. But this way of doing can cause two issues</p>
<p>1) <strong>Underutilized resources</strong></p>
<p>As the resources are allocated in large number, these may go wasted unless there is a spike in the traffic.</p>
<p>2) <strong>Deployment failure</strong></p>
<p>POD deployment may fail because of not having enough resources in the kubernetes node to cater the request.</p>
<p>For more info : <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/</a></p>
<p><strong>> Autoscaling</strong></p>
<p>Ideal way of doing it is to autoscale the POD based on the traffic.</p>
<pre><code>kubectl autoscale deployment <DEPLOY-APP-NAME> --cpu-percent=50 --min=1 --max=10
</code></pre>
<p>Configure the cpu-percent based on your requirement; otherwise 80% is used by default. Min and max are the minimum and maximum number of PODS, which can be configured accordingly. </p>
<p>So each time the PODs hit 50% CPU a new pod will be launched, and this continues until a maximum of 10 PODS is reached; the same applies in reverse when the load drops.</p>
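<p>The same autoscaler can also be declared as a manifest instead of the imperative command, roughly like this (names are placeholders):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
</code></pre>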
<p>For more info: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a></p>
|
<p>I have seen many questions in this regard, but somehow I still can't successfully connect to the MongoDB slave.</p>
<p>I am running my cluster on Kubernetes using <a href="https://github.com/cvallance/mongo-k8s-sidecar" rel="nofollow noreferrer">mongo-k8s-sidecar</a>. Connecting my application directly to the master works fine; however, whenever I try to connect to the slave, I can't seem to read. Here's the error code:</p>
<p><code>MongoError: not master and slaveOk=false</code></p>
<p>I am using Node.js for my application, and this is how I am connecting:</p>
<pre><code>var mongodb = require('mongodb').Db;
var Server = require('mongodb').Server
var db = new mongodb('dbname', new Server("localhost",27017,{slaveOk: true}));
db.open(function(err, conn){
if (err){
callback(err);
}else {
client=conn;
client.createIndex("tablename", {field:1}
, {background:true}, function(err, i) {
logger.info(err);
});
//The rest of the code is trimmed
</code></pre>
<p>What am I missing here?</p>
| <p>You can't run <code>createIndex()</code> on a Secondary member, even if you're using slaveOk/ReadPreference.SECONDARY.</p>
<p>If you test the code without the <code>createIndex()</code> call, you'll probably see that it's working as you expect.</p>
<p>See <a href="https://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/" rel="nofollow noreferrer">Build Indexes on Replica Sets</a> for more details. Basically, if you wanted to build an index on a secondary you need to stop the mongod instance, start it as a standalone (without the replSet option), then build the index.</p>
<p>Or you can only create the index on the Primary, and the index creation will replicate to the secondaries.</p>
|
<p>The issue: I have a Prometheus instance outside of the Kubernetes cluster. So, I want to export metrics from the remote cluster.</p>
<p>I took the config sample from <a href="https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml" rel="noreferrer">Prometheus Github repo</a> and modified this a little bit. So, here is my jobs config.</p>
<pre><code> - job_name: 'kubernetes-apiservers'
scheme: http
kubernetes_sd_configs:
- role: endpoints
api_server: http://cluster-manager.dev.example.net:8080
bearer_token_file: /opt/prometheus/prometheus/kube_tokens/dev
tls_config:
insecure_skip_verify: true
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: default;kubernetes;http
- job_name: 'kubernetes-nodes'
scheme: http
kubernetes_sd_configs:
- role: node
api_server: http://cluster-manager.dev.example.net:8080
bearer_token_file: /opt/prometheus/prometheus/kube_tokens/dev
tls_config:
insecure_skip_verify: true
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- job_name: 'kubernetes-service-endpoints'
scheme: http
kubernetes_sd_configs:
- role: endpoints
api_server: http://cluster-manager.dev.example.net:8080
bearer_token_file: /opt/prometheus/prometheus/kube_tokens/dev
tls_config:
insecure_skip_verify: true
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (http?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: (.+)(?::\d+);(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-services'
scheme: http
metrics_path: /probe
params:
module: [http_2xx]
kubernetes_sd_configs:
- role: service
api_server: http://cluster-manager.dev.example.net:8080
bearer_token_file: /opt/prometheus/prometheus/kube_tokens/dev
tls_config:
insecure_skip_verify: true
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__]
target_label: __param_target
- target_label: __address__
replacement: blackbox
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_service_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
scheme: http
kubernetes_sd_configs:
- role: pod
api_server: http://cluster-manager.dev.example.net:8080
bearer_token_file: /opt/prometheus/prometheus/kube_tokens/dev
tls_config:
insecure_skip_verify: true
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: (.+):(?:\d+);(\d+)
replacement: ${1}:${2}
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
</code></pre>
<p>I don't use a TLS connection to the API, so I want to disable it.</p>
<p>When I curl the <code>/metrics</code> URL from the Prometheus host, it prints them.</p>
<p>Finally, I connected to the cluster, but the jobs are not up and therefore Prometheus doesn't expose the relabeled metrics.</p>
<p>Here is what I see in the console.</p>
<p><a href="https://i.stack.imgur.com/r52Ju.png" rel="noreferrer"><img src="https://i.stack.imgur.com/r52Ju.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/eYbPZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/eYbPZ.png" alt="enter image description here"></a></p>
<p>Targets state:</p>
<p><a href="https://i.stack.imgur.com/69eqG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/69eqG.png" alt="enter image description here"></a></p>
<p>I also checked the Prometheus debug log. It looks like the system gets all the necessary information and the requests complete successfully.</p>
<pre><code>time="2017-01-25T06:58:04Z" level=debug msg="pod update" kubernetes_sd=pod source="pod.go:66" tg="&config.TargetGroup{Targets:[]model.LabelSet{model.LabelSet{\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\", \"__address__\":\"10.32.0.2:10053\", \"__meta_kubernetes_pod_container_name\":\"kube-dns\", \"__meta_kubernetes_pod_container_port_number\":\"10053\", \"__meta_kubernetes_pod_container_port_name\":\"dns-local\"}, model.LabelSet{\"__address__\":\"10.32.0.2:10053\", \"__meta_kubernetes_pod_container_name\":\"kube-dns\", \"__meta_kubernetes_pod_container_port_number\":\"10053\", \"__meta_kubernetes_pod_container_port_name\":\"dns-tcp-local\", \"__meta_kubernetes_pod_container_port_protocol\":\"TCP\"}, model.LabelSet{\"__meta_kubernetes_pod_container_name\":\"kube-dns\", \"__meta_kubernetes_pod_container_port_number\":\"10055\", \"__meta_kubernetes_pod_container_port_name\":\"metrics\", \"__meta_kubernetes_pod_container_port_protocol\":\"TCP\", \"__address__\":\"10.32.0.2:10055\"}, model.LabelSet{\"__address__\":\"10.32.0.2:53\", \"__meta_kubernetes_pod_container_name\":\"dnsmasq\", \"__meta_kubernetes_pod_container_port_number\":\"53\", \"__meta_kubernetes_pod_container_port_name\":\"dns\", \"__meta_kubernetes_pod_container_port_protocol\":\"UDP\"}, model.LabelSet{\"__address__\":\"10.32.0.2:53\", \"__meta_kubernetes_pod_container_name\":\"dnsmasq\", \"__meta_kubernetes_pod_container_port_number\":\"53\", \"__meta_kubernetes_pod_container_port_name\":\"dns-tcp\", \"__meta_kubernetes_pod_container_port_protocol\":\"TCP\"}, model.LabelSet{\"__meta_kubernetes_pod_container_port_number\":\"10054\", \"__meta_kubernetes_pod_container_port_name\":\"metrics\", \"__meta_kubernetes_pod_container_port_protocol\":\"TCP\", \"__address__\":\"10.32.0.2:10054\", \"__meta_kubernetes_pod_container_name\":\"dnsmasq-metrics\"}, model.LabelSet{\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\", \"__address__\":\"10.32.0.2:8080\", \"__meta_kubernetes_pod_container_name\":\"healthz\", \"__meta_kubernetes_pod_container_port_number\":\"8080\", \"__meta_kubernetes_pod_container_port_name\":\"\"}}, Labels:model.LabelSet{\"__meta_kubernetes_pod_ready\":\"true\", \"__meta_kubernetes_pod_annotation_kubernetes_io_created_by\":\"{\\\"kind\\\":\\\"SerializedReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"ReplicaSet\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"name\\\":\\\"kube-dns-2924299975\\\",\\\"uid\\\":\\\"fa808d95-d7d9-11e6-9ac9-02dfdae1a1e9\\\",\\\"apiVersion\\\":\\\"extensions\\\",\\\"resourceVersion\\\":\\\"89\\\"}}\\n\", \"__meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_affinity\":\"{\\\"nodeAffinity\\\":{\\\"requiredDuringSchedulingIgnoredDuringExecution\\\":{\\\"nodeSelectorTerms\\\":[{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"beta.kubernetes.io/arch\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"amd64\\\"]}]}]}}}\", \"__meta_kubernetes_pod_name\":\"kube-dns-2924299975-dksg5\", \"__meta_kubernetes_pod_ip\":\"10.32.0.2\", \"__meta_kubernetes_pod_label_k8s_app\":\"kube-dns\", \"__meta_kubernetes_pod_label_pod_template_hash\":\"2924299975\", \"__meta_kubernetes_pod_label_tier\":\"node\", \"__meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_tolerations\":\"[{\\\"key\\\":\\\"dedicated\\\",\\\"value\\\":\\\"master\\\",\\\"effect\\\":\\\"NoSchedule\\\"}]\", \"__meta_kubernetes_namespace\":\"kube-system\", \"__meta_kubernetes_pod_node_name\":\"cluster-manager.dev.example.net\", 
\"__meta_kubernetes_pod_label_component\":\"kube-dns\", \"__meta_kubernetes_pod_label_kubernetes_io_cluster_service\":\"true\", \"__meta_kubernetes_pod_host_ip\":\"54.194.166.39\", \"__meta_kubernetes_pod_label_name\":\"kube-dns\"}, Source:\"pod/kube-system/kube-dns-2924299975-dksg5\"}"
time="2017-01-25T06:58:04Z" level=debug msg="pod update" kubernetes_sd=pod source="pod.go:66" tg="&config.TargetGroup{Targets:[]model.LabelSet{model.LabelSet{\"__address__\":\"10.43.0.0\", \"__meta_kubernetes_pod_container_name\":\"bot\"}}, Labels:model.LabelSet{\"__meta_kubernetes_pod_host_ip\":\"172.17.101.25\", \"__meta_kubernetes_pod_label_app\":\"bot\", \"__meta_kubernetes_namespace\":\"default\", \"__meta_kubernetes_pod_name\":\"bot-272181271-pnzsz\", \"__meta_kubernetes_pod_ip\":\"10.43.0.0\", \"__meta_kubernetes_pod_node_name\":\"ip-172-17-101-25\", \"__meta_kubernetes_pod_annotation_kubernetes_io_created_by\":\"{\\\"kind\\\":\\\"SerializedReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"ReplicaSet\\\",\\\"namespace\\\":\\\"default\\\",\\\"name\\\":\\\"bot-272181271\\\",\\\"uid\\\":\\\"c297b3c2-e15d-11e6-a28a-02dfdae1a1e9\\\",\\\"apiVersion\\\":\\\"extensions\\\",\\\"resourceVersion\\\":\\\"1465127\\\"}}\\n\", \"__meta_kubernetes_pod_ready\":\"true\", \"__meta_kubernetes_pod_label_pod_template_hash\":\"272181271\", \"__meta_kubernetes_pod_label_version\":\"v0.1\"}, Source:\"pod/default/bot-272181271-pnzsz\"}"
</code></pre>
<p>Prometheus fetches the updates but doesn't turn them into targets and metrics. I've racked my brain trying to figure out why it behaves this way, so please help if you can spot where the mistake might be.</p>
| <p>If you want to monitor a Kubernetes cluster from an external Prometheus server, I would suggest setting up a <a href="https://prometheus.io/docs/prometheus/latest/federation/#use-cases" rel="noreferrer">Prometheus federation</a> topology:</p>
<ul>
<li>Inside the K8s cluster, install node-exporter pods and a Prometheus instance with short-term storage.</li>
<li>Expose the Prometheus service out of the K8s cluster, either via an ingress controller (LB) or a NodePort. You can protect this endpoint with HTTPS + basic authentication.</li>
<li>Configure the central Prometheus to scrape metrics from the above endpoint with proper authentication and labels (see the sketch after this list).</li>
</ul>
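<p>As a rough sketch of that last step (the job name, target address, credentials, and match selector are placeholders to adapt to your setup), the federation scrape job on the central Prometheus could look like this:</p>
<pre><code>- job_name: 'k8s-dev-federation'
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
      - '{job=~".+"}'   # pull everything the in-cluster Prometheus scraped
  scheme: https
  basic_auth:
    username: prometheus
    password: changeme
  static_configs:
    - targets:
      - 'prometheus.dev.example.net:443'
</code></pre>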
<p>This is a scalable solution. You can monitor as many K8s clusters as you want until you reach the capacity of the central Prometheus; then you can add another central Prometheus instance to monitor the others.</p>
|
<p>I have a subchart in the <code>charts/</code> directory. I would like to disable it for some deployments.</p>
<p>Is it possible somehow? Currently, the only way I see is to add a condition to every template, like below:</p>
<p>deployment.yaml</p>
<pre><code>{{- if .Values.isDev }}
deployment code
{{- end }}
</code></pre>
<p>service.yaml</p>
<pre><code>{{- if .Values.isDev }}
service code
{{- end }}
</code></pre>
| <p>As a general rule of thumb I always have</p>
<pre><code>{{- if .Values.enabled }}
...
{{- end }}
</code></pre>
<p>in every file in every subchart. Depending on the situation, the default value will be either <code>true</code> for regular components and <code>false</code> for dev-related ones, or simply <code>false</code> for everything if I want to enable components in a completely selective manner. A typical values file for this approach looks like this:</p>
<pre><code>api:
enabled: true
database:
host: mysql-dev
mysql:
enabled: false
mysql-dev:
enabled: true
</code></pre>
|
<p>Kubernetes uses Docker, and <a href="https://kubernetes.io/docs/getting-started-guides/scratch/#docker" rel="noreferrer">kubelet dictates</a> the compatible Docker versions for any given cluster.</p>
<p>My question is, given a Kubernetes cluster that is already configured and running, how would I find out what version of Docker is running in the cluster if I don't have direct access to the nodes?</p>
| <p>You can find the container runtime and its version using the following:</p>
<pre><code>kubectl get node <node> -o jsonpath="{.status.nodeInfo.containerRuntimeVersion}"
</code></pre>
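<p>If you want to list it for every node at once, a variant along these lines should also work:</p>
<pre><code>kubectl get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion
</code></pre>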
|
<p>I was looking for ways to get details on disk utilization (mainly writes and deletes) at a per-pod level. Googling turned up advice such as cAdvisor/Heapster etc., but none of it talks about disk usage profiling from a pod perspective.</p>
<p>Any help on this is greatly appreciated.</p>
<p>TIA!</p>
| <p>Assuming the pods are running a Linux variant, you can do:</p>
<pre><code>kubectl exec -it <pod> cat /proc/1/io
</code></pre>
<p>This returns info on the main process's I/O counters, as defined <a href="https://stackoverflow.com/questions/3633286/understanding-the-counters-in-proc-pid-io">here</a>.</p>
<p>You could then write a script that runs the above command (or uses the Kubernetes API) for each pod of interest, as sketched below.</p>
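<p>A minimal sketch of such a script (assuming bash, and that PID 1 is the process you care about in each pod; adjust the namespace as needed):</p>
<pre><code>#!/bin/bash
# Print the I/O counters of PID 1 for every pod in the given namespace (default: "default")
NAMESPACE="${1:-default}"

for pod in $(kubectl get pods -n "$NAMESPACE" -o jsonpath='{.items[*].metadata.name}'); do
  echo "=== $pod ==="
  kubectl exec -n "$NAMESPACE" "$pod" -- cat /proc/1/io
done
</code></pre>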
|
<p>I am trying to consume a Python REST API from a container running inside Kubernetes. I am able to consume the service inside the pod:</p>
<pre><code>*curl http://localhost:5002/analyst_rating -v
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 5002 (#0)
> GET /analyst_rating HTTP/1.1
> Host: localhost:5002
> User-Agent: curl/7.47.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: application/json
< Content-Length: 37
< Server: Werkzeug/0.12.2 Python/2.7.12
< Date: Tue, 05 Dec 2017 17:02:00 GMT
<
{
"Analyst Rating": "Hello World"
* Closing connection 0*
</code></pre>
<p>When I am running the curl command from outside the cluster, I am getting the below error:</p>
<pre><code> curl -I http://184.173.44.62:30484/analyst_rating -v
* Trying 184.173.44.62...
* TCP_NODELAY set
* Connected to 184.173.44.62 (184.173.44.62) port 30484 (#0)
> HEAD /analyst_rating HTTP/1.1
> Host: 184.173.44.62:30484
> User-Agent: curl/7.54.0
> Accept: */*
>
* Closing connection 0
curl: (8) Weird server reply
</code></pre>
<p>My cluster IP is 184.173.44.62 and my service NodePort is 30484. I am able to make connections but do not receive any response. Also, I have checked inside the pod: I am not getting any GET request.</p>
<p>Also, following is my deployment info:</p>
<pre><code>kubectl describe deployment
Name: sunlife-analystrating-deployment
Namespace: default
CreationTimestamp: Tue, 05 Dec 2017 10:53:53 -0500
Labels: app=sunlife-analystrating-deployment
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sunlife-analystrating-deployment
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=sunlife-analystrating-deployment
Containers:
sunlife-analystrating-deployment:
Image: registry.ng.bluemix.net/dockerservice/tensorflowrunningimage:02
Port: 5002/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: sunlife-analystrating-deployment-3230069030 (1/1 replicas created)
Events: <none>
</code></pre>
<p>and Following is my Service info:</p>
<pre><code>kubectl describe service
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 172.21.0.1
Port: https 443/TCP
TargetPort: 32444/TCP
Endpoints: 184.173.44.62:32444
Session Affinity: ClientIP
Events: <none>
Name: sunlife-analystrating-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=sunlife-analystrating-deployment
Type: NodePort
IP: 172.21.210.178
Port: <unset> 5002/TCP
TargetPort: 5002/TCP
NodePort: <unset> 30484/TCP
Endpoints: 172.30.111.147:5002
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>Following is the code snippet that I have used to expose the REST endpoint inside the container:</p>
<pre><code>-------
def get(self):
print("Hello World")
response="Hello World"
result_dict = { 'Analyst Rating': str(response) }
return jsonify(result_dict)
-------
if __name__ == '__main__':
app.run(port='5002')
-------
</code></pre>
| <p>You don't see any GET request inside the pod because the <code>-I</code> flag makes curl send a <code>HEAD</code> request instead of a <code>GET</code> (your verbose output shows <code>HEAD /analyst_rating HTTP/1.1</code>), and your Flask resource only implements <code>get()</code>.</p>
<p>So, why do you use the -I parameter? Drop it and try to execute:</p>
<pre><code>curl http://184.173.44.62:30484/analyst_rating -v
</code></pre>
<p>If this doesn't work, you'll have to provide more details about your k8s service specification</p>
|
<p>I'm using the Search Guard plugin to secure an Elasticsearch cluster composed of multiple nodes. Here is my Dockerfile:</p>
<pre><code>#!/bin/sh
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.3
USER root
# Install search guard
RUN bin/elasticsearch-plugin install --batch com.floragunn:search-guard-5:5.6.3-16 \
&& chmod +x \
plugins/search-guard-5/tools/hash.sh \
plugins/search-guard-5/tools/sgadmin.sh \
bin/init_sg.sh \
&& chown -R elasticsearch:elasticsearch /usr/share/elasticsearch
USER elasticsearch
</code></pre>
<p>To initialize Search Guard (create internal users and assign roles), I need to run the script <code>init_sg.sh</code> after the container starts. Here is the problem: unless Elasticsearch is running, the script will not initialize the security index.</p>
<p>The script's content is:</p>
<pre><code>sleep 10
plugins/search-guard-5/tools/sgadmin.sh -cd config/ -ts config/truststore.jks -ks config/kirk-keystore.jks -nhnv -icl
</code></pre>
<p>For now, I just run the script manually after the container starts, but since I'm running it on Kubernetes, pods may get killed or fail and be recreated automatically for some reason. In that case, the plugin has to be initialized automatically after the container starts!</p>
<p>So how can I accomplish this? Any help or hint would be really appreciated.</p>
| <p>The image itself has an entrypoint, <code>ENTRYPOINT ["/run/entrypoint.sh"]</code>, specified in the Dockerfile. You can replace it with your own script: for example, create a new script, mount it, call <code>/run/entrypoint.sh</code> first, and then wait for Elasticsearch to start before running your <code>init_sg.sh</code>.</p>
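<p>A rough sketch of such a wrapper script (the readiness check on port 9200 and the script paths are assumptions to adapt to your image):</p>
<pre><code>#!/bin/bash
# Start Elasticsearch via the original entrypoint in the background
/run/entrypoint.sh "$@" &

# Wait until Elasticsearch answers on its HTTP port before initializing Search Guard
until curl -s -o /dev/null http://localhost:9200; do
  echo "Waiting for Elasticsearch to start..."
  sleep 5
done

# Create the Search Guard index (internal users, roles, ...)
/usr/share/elasticsearch/bin/init_sg.sh

# Keep the container attached to the Elasticsearch process
wait
</code></pre>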
|