| question | answer | tag | question_id | score |
|---|---|---|---|---|
I am deploying an outward-facing service that is exposed behind a NodePort and then an Istio ingress. The deployment uses manual sidecar injection. Once the deployment, NodePort and ingress are running, I can make a request to the Istio ingress.
For some unknown reason, the request does not route through to my deployment and instead displays the text "no healthy upstream". Why is this, and what is causing it?
I can see in the HTTP response that the status code is 503 (Service Unavailable) and the server is "envoy". The deployment is functioning, as I can map a port-forward to it and everything works as expected.
| Just in case you get curious, like me... even though in my scenario the cause of the error was clear...
Error cause: I had two versions of the same service (v1 and v2), and an Istio VirtualService configured with a traffic route destination using weights: 95% goes to v1 and 5% goes to v2. As I didn't have v1 deployed (yet), the error "503 - no healthy upstream" of course showed up for 95% of the requests.
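For reference, here is a minimal sketch of the kind of weighted VirtualService described above; it reuses the host and subset names from the investigation below, but the exact manifest is illustrative:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: teachstore-course
  namespace: development
spec:
  hosts:
  - teachstore-course.development.svc.cluster.local
  http:
  - route:
    - destination:
        host: teachstore-course.development.svc.cluster.local
        subset: v1      # not deployed yet, hence "no healthy upstream"
      weight: 95
    - destination:
        host: teachstore-course.development.svc.cluster.local
        subset: v2
      weight: 5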
OK, even though I knew the problem and how to fix it (just deploy v1), I was still wondering: how can I get more information about this error? How could I analyze it more deeply to find out what was happening?
Here is a way of investigating it using Istio's configuration command-line utility, istioctl:
# 1) Check the proxies status -->
$ istioctl proxy-status
# Result -->
NAME CDS LDS EDS RDS PILOT VERSION
...
teachstore-course-v1-74f965bd84-8lmnf.development SYNCED SYNCED SYNCED SYNCED istiod-86798869b8-bqw7c 1.5.0
...
...
# 2a) Get the outbound cluster names from the JSON result using the proxy (the service with the problem) -->
$ istioctl proxy-config cluster teachstore-course-v1-74f965bd84-8lmnf.development --fqdn teachstore-student.development.svc.cluster.local -o json
# 2b) If you have jq installed locally (extracts only what we need) -->
$ istioctl proxy-config cluster teachstore-course-v1-74f965bd84-8lmnf.development --fqdn teachstore-course.development.svc.cluster.local -o json | jq -r .[].name
# Result -->
outbound|80||teachstore-course.development.svc.cluster.local
inbound|80|9180-tcp|teachstore-course.development.svc.cluster.local
outbound|80|v1|teachstore-course.development.svc.cluster.local
outbound|80|v2|teachstore-course.development.svc.cluster.local
# 3) Check the endpoints of "outbound|80|v2|teachstore-course..." using v1 proxy -->
$ istioctl proxy-config endpoints teachstore-course-v1-74f965bd84-8lmnf.development --cluster "outbound|80|v2|teachstore-course.development.svc.cluster.local"
# Result (v2, the 5% traffic route, is OK; there are healthy targets) -->
ENDPOINT STATUS OUTLIER CHECK CLUSTER
172.17.0.28:9180 HEALTHY OK outbound|80|v2|teachstore-course.development.svc.cluster.local
172.17.0.29:9180 HEALTHY OK outbound|80|v2|teachstore-course.development.svc.cluster.local
# 4) However, for the v1 version "outbound|80|v1|teachstore-course..." -->
$ istioctl proxy-config endpoints teachstore-course-v1-74f965bd84-8lmnf.development --cluster "outbound|80|v1|teachstore-course.development.svc.cluster.local"
ENDPOINT STATUS OUTLIER CHECK CLUSTER
# Nothing! Empty, no Pods; that explains the "no healthy upstream" 95% of the time.
| Istio | 47,664,397 | 27 |
I have 4 microservices running on my laptop listening at various ports. Can I use Istio to create a service mesh on my laptop so the services can communicate with each other through Istio? All the links on google about Istio include kubernetes but I want to run Istio without Kubernetes. Thanks for reading.
| In practice, not really as of this writing, since pretty much all the Istio runbooks and guides are available for Kubernetes.
In theory, yes. Istio components are designed to be 'platform independent'. Quote from the docs:
While Istio is platform independent, using it with Kubernetes (or infrastructure) network policies, the benefits are even greater, including the ability to secure pod-to-pod or service-to-service communication at the network and application layers.
But unless you know the details of each of the components (Envoy, Mixer, Pilot, Citadel, and Galley) really well and are willing to spend a lot of time, it is not practically feasible to get it running outside of Kubernetes.
If you want something less tied to Kubernetes, you can take a look at Consul; although it doesn't have all the functionality Istio has, it overlaps with some of its features.
| Istio | 53,014,727 | 23 |
I'm currently looking through an Istio and Kubernetes talk that mentions the management of services along with the use of sidecars. I'm not sure what a sidecar is.
| I think of them as helper containers. A pod can have one or more containers. A container should do only one thing, like run a web server or a load balancer. So if you need some extra work to be done inside the pod, like GitHub sync or data processing, you create an additional container, AKA a sidecar.
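As an illustration only, here is a minimal sketch of a pod with a main web server container plus a sidecar that refreshes shared content; the repo-sync image name is a placeholder, not a real project:
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web                  # main container: serves the content
    image: nginx
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
  - name: repo-sync            # sidecar: keeps the content up to date (placeholder image)
    image: example/repo-sync
    volumeMounts:
    - name: content
      mountPath: /data
  volumes:
  - name: content
    emptyDir: {}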
| Istio | 45,847,796 | 23 |
How to disable Istio sidecar injection for the Kubernetes Job?
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: pod-restart
spec:
concurrencyPolicy: Forbid
schedule: '0 8 * * *'
jobTemplate:
metadata:
annotations:
sidecar.istio.io/inject: "false"
spec:
backoffLimit: 2
activeDeadlineSeconds: 600
template:
spec:
serviceAccountName: pod-restart
restartPolicy: Never
containers:
- name: kubectl
image: bitnami/kubectl
command: ['kubectl', 'rollout', 'restart', 'deployment/myapp']
Sidecar still gets injected.
| The annotation is in the wrong place. You have to put it on the pod template.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
spec:
jobTemplate:
spec:
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
Here is a working CronJob example with Istio injection disabled.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo "Hello, World!"
restartPolicy: OnFailure
There is also a related GitHub issue about that.
| Istio | 65,807,748 | 19 |
I have read the docs, but seem unable to understand the differences between Mixer and Pilot. Is there any overlap? I would like to draw a definite boundary between them to understand their responsibilities and their communication with the Envoy proxies in the mesh. Please add examples of different use cases if possible.
| The Istio Service Mesh provides the following functionalities:
1. Routing. For example, 90% of the traffic goes to version 1 of a microservice and the remaining 10% goes to version 2. Or some specific requests go to version 1 and all the others to version 2, according to some condition. And also: a) rewrite b) redirect
2. Support for microservices development, deployment and testing: a) timeouts b) retries c) circuit breakers d) load balancing e) fault injection for testing
3. Reporting: Logging, Distributed Tracing, Telemetry
4. Policy enforcement
5. Secure communication between microservices and strong identity.
Pilot is responsible for items 1 and 2. Mixer is responsible for items 3 and 4. Citadel (previously CA, previously Auth) is responsible for item 5.
Envoy, the sidecar proxy, gets its routing and configuration tables from Pilot to implement items 1 and 2. Envoy reports to Mixer about each request to implement item 3. Envoy asks Mixer to allow or forbid requests to implement item 4. Envoy gets certificates from Citadel to implement item 5.
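To make item 2 more concrete, here is a hedged sketch (the "reviews" service name is hypothetical, field names from the v1alpha3 API) of a VirtualService combining a timeout, retries and fault injection, all of which Pilot pushes to the Envoy sidecars:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - fault:                # item 2e: inject a 5s delay into 10% of requests for testing
      delay:
        percent: 10
        fixedDelay: 5s
    route:
    - destination:
        host: reviews
    timeout: 10s          # item 2a: per-request timeout
    retries:              # item 2b: retries
      attempts: 3
      perTryTimeout: 2s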
| Istio | 48,639,660 | 17 |
The circuit breaker doesn't trip on httpConsecutiveErrors: 1 (for a 500 response). All requests pass through and return a 500 instead.
The circuit breaker should trip and return a 503 (Service Unavailable) instead.
Follow the steps in the Circuit breaker setup.
Once httpbin is up, you can simulate a 500 with it.
Request :
kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 1 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/status/500
Running this will simulate 20 requests returning 500.
But if you have applied the circuit breaker, it should allow just one request to return a 500; the remaining requests should be tripped and a 503 should be returned. This doesn't happen.
The issue was raised on GitHub: Github issue
| Yes, currently the circuit breaker does not work in the case of HTTP 500; so far it only works with HTTP 502/503/504. Work has started on bringing HTTP 500 into the scope of the circuit breaker. You can check this GitHub issue for more detail.
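For reference, a minimal sketch of the kind of DestinationRule the circuit-breaking task uses (field names from the Istio 1.0-era API, treat the exact values as illustrative), with the caveat above that application-level HTTP 500s are not counted as consecutive errors:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1        # the setting referred to as httpConsecutiveErrors in Envoy
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100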
| Istio | 50,622,870 | 14 |
Here is something I noticed in my kubectl get events output
Warning FailedToUpdateEndpoint Endpoints Failed to update endpoint mynamespace/myservice: Operation cannot be fulfilled on endpoints "myservice": the object has been modified; please apply your changes to the latest version and try again
I am aware of this discussion, but I do not think it is applicable, given I am not explicitly creating an Endpoint resource via yaml.
I am noticing some minor service unavailability during image updates so I am trying to check if this is related.
Using GKE with version v1.12.7-gke.25 on both masters and nodes, on top of istio.
| It's common behaviour of k8s to let the k8s clients (Controllers) know to try again.
Kubernetes leverages the concept of resource versions to achieve optimistic concurrency. concurrency-control-and-consistency
It's populated by the system.
To enable clients to build a model of the current state of a cluster, all Kubernetes object resource types are required to support consistent lists and an incremental change notification feed called a watch. Every Kubernetes object has a resourceVersion field representing the version of that resource as stored in the underlying database. When retrieving a collection of resources (either namespace or cluster scoped), the response from the server will contain a resourceVersion value that can be used to initiate a watch against the server. The server will return all changes (creates, deletes, and updates) that occur after the supplied resourceVersion. This allows a client to fetch the current state and then watch for changes without missing any updates. If the client watch is disconnected they can restart a new watch from the last returned resourceVersion, or perform a new collection request and begin again efficient-detection-of-changes
| Istio | 57,957,516 | 13 |
I have a mutual TLS enabled Istio mesh. My setup is as follows
1. A service running inside a pod (service container + Envoy)
2. An Envoy gateway that sits in front of the above service, with an Istio Gateway and VirtualService attached to it. It routes the /info/ route to the above service.
3. Another Istio Gateway configured for ingress using the default Istio ingress pod. This also has a Gateway + VirtualService combination. The VirtualService directs the /info/ path to the service described in 2.
I'm attempting to access the service from the ingress gateway using a curl command such as:
$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v
But I'm getting a 503 Service Unavailable error as below:
$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.105.138.94...
* Connected to istio-ingressgateway.istio-system (10.105.138.94) port 80 (#0)
> GET /info/ HTTP/1.1
> Host: istio-ingressgateway.istio-system
> User-Agent: curl/7.47.0
> Accept: */*
> Authorization: Bearer ...
>
< HTTP/1.1 503 Service Unavailable
< content-length: 57
< content-type: text/plain
< date: Sat, 12 Jan 2019 13:30:13 GMT
< server: envoy
<
* Connection #0 to host istio-ingressgateway.istio-system left intact
I checked the logs of istio-ingressgateway pod and the following line was logged there
[2019-01-13T05:40:16.517Z] "GET /info/ HTTP/1.1" 503 UH 0 19 6 - "10.244.0.5" "curl/7.47.0" "da02fdce-8bb5-90fe-b422-5c74fe28759b" "istio-ingressgateway.istio-system" "-"
If I log into the Istio ingress pod and send the request with curl, I get a successful 200 OK.
# curl hr--gateway-service.default/info/ -H "Authorization: Bearer $token" -v
Also, I managed to get a successful response for the same curl command when the mesh was created in mTLS disabled mode. There are no conflicts shown in mTLS setup.
Here are the config details for my service mesh in case you need additional info.
Pods
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hr--gateway-deployment-688986c87c-z9nkh 1/1 Running 0 37m
default hr--hr-deployment-596946948d-c89bn 2/2 Running 0 37m
default hr--sts-deployment-694d7cff97-gjwdk 1/1 Running 0 37m
ingress-nginx default-http-backend-6586bc58b6-8qss6 1/1 Running 0 42m
ingress-nginx nginx-ingress-controller-6bd7c597cb-t4rwq 1/1 Running 0 42m
istio-system grafana-85dbf49c94-lfpbr 1/1 Running 0 42m
istio-system istio-citadel-545f49c58b-dq5lq 1/1 Running 0 42m
istio-system istio-cleanup-secrets-bh5ws 0/1 Completed 0 42m
istio-system istio-egressgateway-7d59954f4-qcnxm 1/1 Running 0 42m
istio-system istio-galley-5b6449c48f-72vkb 1/1 Running 0 42m
istio-system istio-grafana-post-install-lwmsf 0/1 Completed 0 42m
istio-system istio-ingressgateway-8455c8c6f7-5khtk 1/1 Running 0 42m
istio-system istio-pilot-58ff4d6647-bct4b 2/2 Running 0 42m
istio-system istio-policy-59685fd869-h7v94 2/2 Running 0 42m
istio-system istio-security-post-install-cqj6k 0/1 Completed 0 42m
istio-system istio-sidecar-injector-75b9866679-qg88s 1/1 Running 0 42m
istio-system istio-statsd-prom-bridge-549d687fd9-bspj2 1/1 Running 0 42m
istio-system istio-telemetry-6ccf9ddb96-hxnwv 2/2 Running 0 42m
istio-system istio-tracing-7596597bd7-m5pk8 1/1 Running 0 42m
istio-system prometheus-6ffc56584f-4cm5v 1/1 Running 0 42m
istio-system servicegraph-5d64b457b4-jttl9 1/1 Running 0 42m
kube-system coredns-78fcdf6894-rxw57 1/1 Running 0 50m
kube-system coredns-78fcdf6894-s4bg2 1/1 Running 0 50m
kube-system etcd-ubuntu 1/1 Running 0 49m
kube-system kube-apiserver-ubuntu 1/1 Running 0 49m
kube-system kube-controller-manager-ubuntu 1/1 Running 0 49m
kube-system kube-flannel-ds-9nvf9 1/1 Running 0 49m
kube-system kube-proxy-r868m 1/1 Running 0 50m
kube-system kube-scheduler-ubuntu 1/1 Running 0 49m
Services
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default hr--gateway-service ClusterIP 10.100.238.144 <none> 80/TCP,443/TCP 39m
default hr--hr-service ClusterIP 10.96.193.43 <none> 80/TCP 39m
default hr--sts-service ClusterIP 10.99.54.137 <none> 8080/TCP,8081/TCP,8090/TCP 39m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 52m
ingress-nginx default-http-backend ClusterIP 10.109.166.229 <none> 80/TCP 44m
ingress-nginx ingress-nginx NodePort 10.108.9.180 192.168.60.3 80:31001/TCP,443:32315/TCP 44m
istio-system grafana ClusterIP 10.102.141.231 <none> 3000/TCP 44m
istio-system istio-citadel ClusterIP 10.101.128.187 <none> 8060/TCP,9093/TCP 44m
istio-system istio-egressgateway ClusterIP 10.102.157.204 <none> 80/TCP,443/TCP 44m
istio-system istio-galley ClusterIP 10.96.31.251 <none> 443/TCP,9093/TCP 44m
istio-system istio-ingressgateway LoadBalancer 10.105.138.94 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31219/TCP,8060:31482/TCP,853:30034/TCP,15030:31544/TCP,15031:32652/TCP 44m
istio-system istio-pilot ClusterIP 10.100.170.73 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 44m
istio-system istio-policy ClusterIP 10.104.77.184 <none> 9091/TCP,15004/TCP,9093/TCP 44m
istio-system istio-sidecar-injector ClusterIP 10.100.180.152 <none> 443/TCP 44m
istio-system istio-statsd-prom-bridge ClusterIP 10.107.39.50 <none> 9102/TCP,9125/UDP 44m
istio-system istio-telemetry ClusterIP 10.110.55.232 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 44m
istio-system jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 44m
istio-system jaeger-collector ClusterIP 10.102.43.21 <none> 14267/TCP,14268/TCP 44m
istio-system jaeger-query ClusterIP 10.104.182.189 <none> 16686/TCP 44m
istio-system prometheus ClusterIP 10.100.0.70 <none> 9090/TCP 44m
istio-system servicegraph ClusterIP 10.97.65.37 <none> 8088/TCP 44m
istio-system tracing ClusterIP 10.109.87.118 <none> 80/TCP 44m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 52m
Gateway and virtual service described in point 2
$ kubectl describe gateways.networking.istio.io hr--gateway
Name: hr--gateway
Namespace: default
API Version: networking.istio.io/v1alpha3
Kind: Gateway
Metadata:
...
Spec:
Selector:
App: hr--gateway
Servers:
Hosts:
*
Port:
Name: http2
Number: 80
Protocol: HTTP2
Hosts:
*
Port:
Name: https
Number: 443
Protocol: HTTPS
Tls:
Mode: PASSTHROUGH
$ kubectl describe virtualservices.networking.istio.io hr--gateway
Name: hr--gateway
Namespace: default
Labels: app=hr--gateway
Annotations: <none>
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
...
Spec:
Gateways:
hr--gateway
Hosts:
*
Http:
Match:
Uri:
Prefix: /info/
Rewrite:
Uri: /
Route:
Destination:
Host: hr--hr-service
Gateway and virtual service described in point 3
$ kubectl describe gateways.networking.istio.io ingress-gateway
Name: ingress-gateway
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"ingress-gateway","namespace":"default"},"spec":{"sel...
API Version: networking.istio.io/v1alpha3
Kind: Gateway
Metadata:
...
Spec:
Selector:
Istio: ingressgateway
Servers:
Hosts:
*
Port:
Name: http2
Number: 80
Protocol: HTTP2
$ kubectl describe virtualservices.networking.istio.io hr--gateway-ingress-vs
Name: hr--gateway-ingress-vs
Namespace: default
Labels: app=hr--gateway
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
Spec:
Gateways:
ingress-gateway
Hosts:
*
Http:
Match:
Uri:
Prefix: /info/
Route:
Destination:
Host: hr--gateway-service
Events: <none>
| The problem is probably as follows: istio-ingressgateway initiates mTLS to hr--gateway-service on port 80, but hr--gateway-service expects plain HTTP connections.
There are multiple solutions:
Option 1: Define a DestinationRule to instruct clients to disable mTLS on calls to hr--gateway-service:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: hr--gateway-service-disable-mtls
spec:
host: hr--gateway-service.default.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
Option 2: Instruct hr--gateway-service to accept mTLS connections. For that, configure the server TLS options on port 80 to be MUTUAL and to use Istio certificates and the private key. Specify serverCertificate, caCertificates and privateKey to be /etc/certs/cert-chain.pem, /etc/certs/root-cert.pem and /etc/certs/key.pem, respectively.
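A sketch of what option 2 could look like on the existing hr--gateway; the cert paths are the ones named above, and this is an outline rather than a verified manifest:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hr--gateway
spec:
  selector:
    app: hr--gateway
  servers:
  - port:
      number: 80
      name: http2
      protocol: HTTP2
    tls:
      mode: MUTUAL                                  # accept mTLS from the ingress gateway
      serverCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem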
| Istio | 54,160,215 | 13 |
I'm relatively new to Istio and have a question. Say that I want to rewrite a URI based on a path, but use part of the original URI in the rewrite; is that something I could do with a regex? I'm imagining something like this:
http:
- match:
- uri:
regex: ^/(.*\s*)?(canary)(.*)?$
rewrite:
prefix: "/$1"
Where $1 would be a matching group on the uri regex. Is something like that possible?
| Only those rules which contain StringMatch-type values can work with a regex, for example HTTPMatchRequest.
Unfortunately NOT HTTPRewrite, which takes only plain strings as values.
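To illustrate what is supported: the regex can stay in the match block, but the rewrite has to be a plain string (the destination host here is hypothetical):
http:
- match:
  - uri:
      regex: ^/(.*\s*)?(canary)(.*)?$   # regex is fine here (StringMatch)
  rewrite:
    uri: "/canary"                      # plain string only; no $1 capture groups
  route:
  - destination:
      host: my-service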
| Istio | 56,476,847 | 11 |
We've used Zuul as API gateway for a while in Microservices scene, recently we decided to move to Kubernetes and choose a more cloud native way.
After some investigation and going through the Istio docs, we have some questions about API gateway selection in Kubernetes:
Which aspects should be considered when choosing an API gateway in Kubernetes?
Do we still need Zuul if we use Istio?
| I assume that Zuul offers a lot of features as an edge service for traffic management, routing and security functions. It declares the API Gateway as the main point of access to microservices for external clients, as per the API Gateway pattern in microservice architecture design. However, Zuul needs to somehow discover the underlying microservices, and for Kubernetes you might need to adapt the Kubernetes Discovery Client, which defines the rules for how the API Gateway detects routes and transmits network traffic to the nested services.
By design, Istio represents a service mesh architecture and is a Kubernetes-oriented solution with smooth integration. The main concept here is using the Envoy proxy by injecting sidecars into Kubernetes Pods, with no need to change or rewrite existing deployments or use any other method for service discovery. The Zuul API Gateway can be fully replaced by the Istio Gateway resource as the edge load balancer for ingress or egress HTTP(S)/TCP connections. Istio contains a set of traffic management features which can be included in the general configuration.
You might be interested in other fundamental Istio facilities like:
Authorization model;
Authentication policies;
Istio telemetry and Mixer policies.
| Istio | 55,555,334 | 11 |
I am trying to set up an ALB load balancer instead of the default ELB load balancer in Kubernetes on AWS. The load balancer has to be connected to the Istio ingressgateway. I looked for solutions and only found one.
But the Istio version mentioned there is v1 and there have been many changes in Istio since. I tried to change the service type to NodePort in the chart (according to the blog) but the service still comes up as a LoadBalancer.
Can someone outline the steps to configure an ALB for the Istio ingressgateway?
Thanks for reading
|
Step 1: Change the istio-ingressgateway service type to NodePort
Step 2: Install the ALB ingress controller
Step 3: Write an ingress.yaml for the istio-ingressgateway as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: istio-system
name: ingress
labels:
app: ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/subnets: <subnet1>,<subnet2>
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: istio-ingressgateway
servicePort: 80
The alb.ingress.kubernetes.io/subnets annotation can be avoided if you have labeled the VPC subnets with:
kubernetes.io/cluster/: owned
kubernetes.io/role/internal-elb: 1 (for internal ELBs)
kubernetes.io/role/elb: 1 (for external ELBs)
Otherwise you can provide two subnet values in the above yaml, with each subnet in a different availability zone.
It worked in Istio 1.6
| Istio | 62,407,364 | 10 |
I am trying to build Istio (1.6.0+) using Jenkins and get an error:
docker: Error response from daemon: invalid mount config for type "bind":
bind mount source path does not exist: /home/jenkins/.docker
the slave contains .docker directory:
13:34:42 + ls -a /home/jenkins
13:34:42 .
13:34:42 ..
13:34:42 agent
13:34:42 .bash_logout
13:34:42 .bash_profile
13:34:42 .bashrc
13:34:42 .cache
13:34:42 .docker
13:34:42 .gitconfig
13:34:42 .jenkins
13:34:42 .m2
13:34:42 .npmrc
13:34:42 .oracle_jre_usage
13:34:42 postgresql-9.4.1212.jar
13:34:42 .ssh
13:34:42 workspace
Parts of the Istio script:
export CONDITIONAL_HOST_MOUNTS=${CONDITIONAL_HOST_MOUNTS:-}
if [[ -d "${HOME}/.docker" ]]; then
CONDITIONAL_HOST_MOUNTS+="--mount type=bind,source=${HOME}/.docker,destination=/config/.docker,readonly "
fi
"${CONTAINER_CLI}" run --rm \
-u "${UID}:${DOCKER_GID}" \
--sig-proxy=true \
${DOCKER_SOCKET_MOUNT:--v /var/run/docker.sock:/var/run/docker.sock} \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
$CONTAINER_OPTIONS \
--env-file <(env | grep -v ${ENV_BLOCKLIST}) \
-e IN_BUILD_CONTAINER=1 \
-e TZ="${TIMEZONE:-$TZ}" \
--mount "type=bind,source=${PWD},destination=/work" \
--mount "type=volume,source=go,destination=/go" \
--mount "type=volume,source=gocache,destination=/gocache" \
${CONDITIONAL_HOST_MOUNTS} \
-w /work "${IMG}" "$@"
A comment asked: have you tried to use -v instead of --mount, do you have any error then?
Update: I changed --mount to -v and the error disappeared:
-v ${HOME}/.docker:/config/.docker
| As I mentioned in the comments, the workaround here could be to use
-v
instead of
--mount
Differences between -v and --mount behavior
Because the -v and --volume flags have been a part of Docker for a long time, their behavior cannot be changed. This means that there is one behavior that is different between -v and --mount.
If you use -v or --volume to bind-mount a file or directory that does not yet exist on the Docker host, -v creates the endpoint for you. It is always created as a directory.
If you use --mount to bind-mount a file or directory that does not yet exist on the Docker host, Docker does not automatically create it for you, but generates an error.
If you use docker swarm then it's well documented here
If you bind mount a host path into your service’s containers, the path must exist on every swarm node. The Docker swarm mode scheduler can schedule containers on any machine that meets resource availability requirements and satisfies all constraints and placement preferences you specify.
It's worth checking this GitHub issue comment.
| Istio | 61,316,142 | 10 |
Our GKE cluster is shared by multiple teams in the company. Each team can have a different public domain (and hence wants a different CA cert setup and a different ingress gateway controller). How can I do that in Istio? All the tutorial/introduction articles on Istio's website use a shared ingress gateway. See the example shared ingress gateway that comes installed with istio-1.0.0:
https://istio.io/docs/tasks/traffic-management/secure-ingress/
spec:
selector:
istio: ingressgateway # use istio default ingress gateway
| Okay, I found the answer after looking at the code of the Istio installation via Helm. Basically, Istio has an official way (not really documented in their readme.md file) to add additional gateways (ingress and egress). I know that because I found this yaml file in their GitHub repo and read the comments (also looking at the gateway chart template code for the spec and its logic).
So, I solved this by, for example, defining this values-custom-gateway.yaml file:
# Gateways Configuration
# By default (if enabled) a pair of Ingress and Egress Gateways will be created for the mesh.
# You can add more gateways in addition to the defaults but make sure those are uniquely named
# and that NodePorts are not conflicting.
# Disable specifc gateway by setting the `enabled` to false.
#
gateways:
enabled: true
agung-ingressgateway:
namespace: agung-ns
enabled: true
labels:
app: agung-istio-ingressgateway
istio: agung-ingressgateway
replicaCount: 1
autoscaleMin: 1
autoscaleMax: 2
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
#requests:
# cpu: 1800m
# memory: 256Mi
loadBalancerIP: ""
serviceAnnotations: {}
type: LoadBalancer #change to NodePort, ClusterIP or LoadBalancer if need be
ports:
## You can add custom gateway ports
- port: 80
targetPort: 80
name: http2
# nodePort: 31380
- port: 443
name: https
# nodePort: 31390
- port: 31400
name: tcp
secretVolumes:
- name: ingressgateway-certs
secretName: istio-ingressgateway-certs
mountPath: /etc/istio/ingressgateway-certs
- name: ingressgateway-ca-certs
secretName: istio-ingressgateway-ca-certs
mountPath: /etc/istio/ingressgateway-ca-certs
If you take a look at the yaml file above, I specified a namespace other than istio-system. This gives us a way to customize the TLS and CA cert used by our custom gateway.
Also, agung-ingressgateway, the key that holds the custom gateway controller spec, is used as the gateway controller's name.
Then I just install Istio via helm upgrade --install so that Helm can intelligently upgrade Istio with the additional gateway.
helm upgrade my-istio-release-name <istio-chart-folder> --install
Once it upgrades successfully, I can specify a custom selector in my Gateway:
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: agung-gateway
namespace: agung-ns
spec:
selector:
app: agung-istio-ingressgateway # use custom gateway
# istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
privateKey: /etc/istio/ingressgateway-certs/tls.key
hosts:
- "*"
| Istio | 51,835,752 | 10 |
Why do k8s secrets need to be base64-encoded when configmaps do not?
When creating a configmap you simply do something like this:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
SOME_KEY: a string value
But when you want to create a secret you have to run
echo -n "some secret string" | base64 and then put the result of that in a file looking something like this:
apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
SOME_KEY: c29tZSBzZWNyZXQgc3RyaW5n
I really wonder why there is this difference. Are Kubernetes secrets simply base64-encoded strings? I would expect secrets to be stored encrypted in Kubernetes.
| Secrets can contain binary data (the type is map[string][]byte), and byte arrays are base64-encoded in JSON serialization.
ConfigMaps only contain string data (the type is map[string]string), so the JSON serialization just outputs the string.
In 1.10, ConfigMaps have a new binaryData field that allows storing binary data, which is base64-encoded, just like secrets. https://github.com/kubernetes/kubernetes/pull/57938
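A minimal sketch of what that looks like (the key names and payload are illustrative; the binaryData value is just base64-encoded bytes):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  SOME_KEY: a string value          # plain string, stored as-is
binaryData:
  blob.bin: c29tZSBiaW5hcnkgZGF0YQ==   # base64-encoded binary payload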
| Kubernetes | 49,046,439 | 41 |
I have a kustomize base that I'd like to re-use without editing it. Unfortunately, it creates a namespace I don't want to create. I'd like to simply remove that resource from consideration when compiling the manifests and add a resource for mine since I can't patch a namespace to change the name.
Can this be done? How?
| You can omit the specific resource by using a delete directive of Strategic Merge Patch like this.
Folder structure
$ tree .
.
├── base
│ ├── kustomization.yaml
│ └── namespace.yaml
└── overlays
├── dev
│ └── kustomization.yaml
└── prod
├── delete-ns-b.yaml
└── kustomization.yaml
File content
$ cat base/kustomization.yaml
resources:
- namespace.yaml
$ cat base/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: ns-a
---
apiVersion: v1
kind: Namespace
metadata:
name: ns-b
$ cat overlays/dev/kustomization.yaml
bases:
- ../../base
$ cat overlays/prod/delete-ns-b.yaml
$patch: delete
apiVersion: v1
kind: Namespace
metadata:
name: ns-b
$ cat overlays/prod/kustomization.yaml
bases:
- ../../base
patches:
- path: delete-ns-b.yaml
Behavior of kustomize
$ kustomize build overlays/dev
apiVersion: v1
kind: Namespace
metadata:
name: ns-a
---
apiVersion: v1
kind: Namespace
metadata:
name: ns-b
$ kustomize build overlays/prod
apiVersion: v1
kind: Namespace
metadata:
name: ns-a
In this case, we have two namespaces in the base folder. In dev, kustomize produces 2 namespaces because there is no patch. But in prod, kustomize produces only one namespace because the delete patch removes namespace ns-b.
| Kubernetes | 64,899,665 | 40 |
On AWS EKS
I'm adding deployment with 17 replicas (requesting and limiting 64Mi memory) to a small cluster with 2 nodes type t3.small.
Counting with kube-system pods, total running pods per node is 11 and 1 is left pending, i.e.:
Node #1:
aws-node-1
coredns-5-1as3
coredns-5-2das
kube-proxy-1
+7 app pod replicas
Node #2:
aws-node-1
kube-proxy-1
+9 app pod replicas
I understand that t3.small is a very small instance. I'm only trying to understand what is limiting me here. Memory request is not it, I'm way below the available resources.
I found that there is IP addresses limit per node depending on instance type.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html?shortFooter=true#AvailableIpPerENI .
I didn't find any other documentation saying explicitly that this is limiting pod creation, but I'm assuming it does.
Based on the table, t3.small can have 12 IPv4 addresses. If this is the case and this is the limiting factor, since I have 11 pods, where did the one missing IPv4 address go?
| The real maximum number of pods per EKS instance is actually listed in this document.
For t3.small instances, it is 11 pods per instance. That is, you can have a maximum of 22 pods in your cluster. 6 of these pods are system pods, so there remains a maximum of 16 workload pods.
You're trying to run 17 workload pods, so that's one too many. I guess 16 of these pods have been scheduled and 1 is left pending.
The formula for defining the maximum number of pods per instance is as follows:
N * (M-1) + 2
Where:
N is the number of Elastic Network Interfaces (ENI) of the instance type
M is the number of IP addresses of a single ENI
So, for t3.small, this calculation is 3 * (4-1) + 2 = 11.
Values for N and M for each instance type in this document.
| Kubernetes | 57,970,896 | 40 |
I am using minikube to test out the deployment and was going through this link.
My manifest file for the deployment is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: webapp
spec:
replicas: 1
template:
metadata:
labels:
app: webapp
spec:
containers:
- name: webapp
imagePullPolicy: Never # <-- here we go!
image: sams
ports:
- containerPort: 80
After this, when I tried to execute the commands below, I got the following output:
user@usesr:~/Downloads$ kubectl create -f mydeployment.yaml --validate=false
deployment "webapp" created
user@user:~/Downloads$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
---- -------- ------- ---------- --------- ----
webapp 1 1 1 0 9s
user@user:~/Downloads$ kubectl get pods
NAME READY STATUS RESTARTS AGE
---- -------- ------- ---------- --------- ----
webapp-5bf5bd94d-2xgs8 0/1 ErrImageNeverPull 0 21s
I tried to pull the image from Docker Hub as well, by removing the line imagePullPolicy: Never from the deployment.yml, but I get the same error.
Can anyone help me identify what's going wrong here?
Updated the question as per the comment
kubectl describe pod $POD_NAME
Name: webapp-5bf5bd94d-2xgs8
Namespace: default
Node: minikube/10.0.2.15
Start Time: Fri, 31 May 2019 14:25:41 +0530
Labels: app=webapp
pod-template-hash=5bf5bd94d
Annotations: <none>
Status: Pending
IP: 172.17.0.4
Controlled By: ReplicaSet/webapp-5bf5bd94d
Containers:
webapp:
Container ID:
Image: sams
Image ID:
Port: 80/TCP
State: Waiting
Reason: ErrImageNeverPull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wf82w (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-wf82w:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wf82w
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned default/webapp-5bf5bd94d-2xgs8 to minikube
Warning ErrImageNeverPull 8m (x50 over 18m) kubelet, minikube Container image "sams" is not present with pull policy of Never
Warning Failed 3m (x73 over 18m) kubelet, minikube Error: ErrImageNeverPull
docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE
---------- --- -------- ------- ----
<none> <none> 723ce2b3d962 3 hours ago 1.91GB
bean_ben501/sams latest c7c4a04713f4 4 hours ago 278MB
sams latest c7c4a04713f4 4 hours ago 278MB
sams v1 c7c4a04713f4 4 hours ago 278MB
<none> <none> b222da630bc3 4 hours ago 1.91GB
mcr.microsoft.com/dotnet/core/sdk 2.2-stretch e4747ec2aaff 9 days ago 1.74GB
mcr.microsoft.com/dotnet/core/aspnet 2.2-stretch-slim f6d51449c477 9 days ago 260MB
|
When using a single VM for Kubernetes, it’s useful to reuse Minikube’s built-in Docker daemon. Reusing the built-in daemon means you don’t have to build a Docker registry on your host machine and push the image into it. Instead, you can build inside the same Docker daemon as Minikube, which speeds up local experiments.
The following command does the magic
eval $(minikube docker-env)
Then you have to rebuild your image again.
For imagePullPolicy: Never, the images need to be present on the minikube node.
This answer provides details:
local images in minikube docker environment
| Kubernetes | 56,392,041 | 40 |
Is there a table that will tell me which set of API versions I should be using, given a k8s cluster version? The Kubernetes docs always assume I have a nice, up-to-date cluster (1.12 at time of writing) but platform providers don't always live on this bleeding edge, so it can get frustrating quite quickly.
Better yet, is there a kubectl command I can run that will let me cluster tell me each resource type and its latest supported API version?
| For getting a list of all the resource types and their latest supported version, run the following:
for kind in `kubectl api-resources | tail +2 | awk '{ print $1 }'`; do kubectl explain $kind; done | grep -e "KIND:" -e "VERSION:"
It should produce output like
KIND: Binding
VERSION: v1
KIND: ComponentStatus
VERSION: v1
KIND: ConfigMap
VERSION: v1
KIND: Endpoints
VERSION: v1
KIND: Event
VERSION: v1
...
As @Rico mentioned, the key is in the kubectl explain command. This may be a little fragile since it depends on the format of the printed output, but it works for kubernetes 1.9.6
Also, the information can be gathered in a less efficient way from the kubernetes API docs (with links for each version) found here - https://kubernetes.io/docs/reference/#api-reference
| Kubernetes | 52,711,326 | 40 |
I have created a cluster using the google cloud platform (container engine) and deployed a pod using the following YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: deployment-name
spec:
replicas: 1
template:
metadata:
name: pod-name
labels:
app: app-label
spec:
containers:
- name: container-name
image: gcr.io/project-id/image-name
resources:
requests:
cpu: 1
ports:
- name: port80
containerPort: 80
- name: port443
containerPort: 443
- name: port6001
containerPort: 6001
Then I want to create a service that enables the pod to listen on all these ports. I know that the following YAML file works to create a service that listens on one port:
apiVersion: v1
kind: Service
metadata:
name: service-name
spec:
ports:
- port: 80
targetPort: 80
selector:
app: app-label
type: LoadBalancer
However when I want the pod to listen on multiple ports like this, it doesn't work:
apiVersion: v1
kind: Service
metadata:
name: service-name
spec:
ports:
- port: 80
targetPort: 80
- port: 443
targetPort: 443
- port: 6001
targetPort: 6001
selector:
app: app-label
type: LoadBalancer
How can I make my pod listen to multiple ports?
| You have two options:
You could have multiple services, one for each port. As you pointed out, each service will end up with a different IP address
You could have a single service with multiple ports. In this particular case, you must give all ports a name.
In your case, the service becomes:
apiVersion: v1
kind: Service
metadata:
name: service-name
spec:
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 443
- name: something
port: 6001
targetPort: 6001
selector:
app: app-label
type: LoadBalancer
This is necessary so that endpoints can be disambiguated.
| Kubernetes | 45,147,664 | 40 |
The new ability to specify a HEALTHCHECK in a Dockerfile seems redundant with the Kubernetes probe directives. Any advice on what to use when?
| If you use Kubernetes, I'd suggest using only the Kubernetes liveness/readiness checks, because the Docker healthcheck has not been integrated into Kubernetes as of now (release 1.12). This means that Kubernetes does not expose the check status in its API server, and the internal system components cannot consume this information. Also, Kubernetes distinguishes liveness from readiness checks, so that other components can react differently (e.g., restarting the container vs. removing the pod from the list of endpoints for a service), which the Docker HEALTHCHECK currently does not provide.
Update: Since Kubernetes 1.8, the Docker HEALTHCHECK has been disabled explicitly in Kubernetes.
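For reference, a minimal sketch of the two probe types on a container; the image, paths and port are hypothetical:
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: app
    image: example/app
    ports:
    - containerPort: 8080
    livenessProbe:          # failing this restarts the container
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:         # failing this removes the pod from service endpoints
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5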
| Kubernetes | 41,475,088 | 40 |
We're using Kubernetes 1.1.3 with its default fluentd-elasticsearch logging.
We also use LivenessProbes on our containers to make sure they operate as expected.
Our problem is that the lines we send to STDOUT from the LivenessProbe do not appear to reach Elasticsearch.
Is there a way to make fluentd ship LivenessProbe output like it does for the regular containers in a pod?
| The output from the probe is swallowed by the Kubelet component on the node, which is responsible for running the probes (source code, if you're interested). If a probe fails, its output will be recorded as an event associated with the pod, which should be accessible through the API.
The output of successful probes isn't recorded anywhere unless your Kubelet has a log level of at least --v=4, in which case it'll be in the Kubelet's logs.
Feel free to file a feature request in a Github issue if you have ideas of what you'd like to be done with the output :)
| Kubernetes | 34,455,040 | 40 |
I am trying to install gke-gcloud-auth-plugin on a Mac M1 with zsh, following the gcloud docs.
The installation ran without issue, and when I try to re-run gcloud components install gke-gcloud-auth-plugin I get the "All components are up to date." message.
However, gke-gcloud-auth-plugin --version returns zsh: command not found: gke-gcloud-auth-plugin. kubectl, installed the same way, works properly.
I tried to install kubectl using brew, with no more success.
| I had the same error and here's how I fixed it.
brew info google-cloud-sdk
which produces:
To add gcloud components to your PATH, add this to your profile:
for bash users
source "$(brew --prefix)/share/google-cloud-sdk/path.bash.inc"
for zsh users
source "$(brew --prefix)/share/google-cloud-sdk/path.zsh.inc"
source "$(brew --prefix)/share/google-cloud-sdk/completion.zsh.inc"
for fish users
source "$(brew --prefix)/share/google-cloud-sdk/path.fish.inc"
Grab the code for your terminal and then run it (e.g., for zsh)
source "$(brew --prefix)/share/google-cloud-sdk/path.zsh.inc"
Add the above line to your .zshrc profile to make sure that it is loaded every time you open a new terminal.
I had originally installed gcloud sdk with homebrew brew install google-cloud-sdk. At that time, I read the Caveats, which tell you how to add gcloud components to your PATH.
I installed both kubectl and gke-gcloud-auth-plugin, and neither of them could be found from the command line. I got the same error as the OP "command not found"
| Kubernetes | 74,233,349 | 39 |
I need to copy dump data from a pod to my local machine. Below are the commands I am trying, but I am getting error: unexpected EOF
kubectl cp device-database-79fc964c8-q7ncc:tmp /Users/raja
error: unexpected EOF
or
kubectl cp device-database-79fc964c8-q7ncc:tmp/plsql_data/prod.dump /Users/raja/prod.dump
error: unexpected EOF
kubectl version
kubectl version --client
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"darwin/amd64"}
Can anyone help me fix this issue?
Thanks
| For newer versions of kubectl, adding a retries=-1 flag may resolve the issue:
kubectl cp --retries=-1 pod-www-xxx-yyy-zzz:/path/to/remote/dir ~/local/dir
| Kubernetes | 67,624,630 | 39 |
I'm using a MySQL Kubernetes StatefulSet. I mapped PVs to a host directory (CentOS 8 VM) but am getting "pod has unbound immediate PersistentVolumeClaims".
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql-container
spec:
serviceName: mysql
replicas: 1
selector:
matchLabels:
app: mysql-container
template:
metadata:
labels:
app: mysql-container
spec:
containers:
- name: mysql-container
image: mysql:dev
imagePullPolicy: "IfNotPresent"
envFrom:
- secretRef:
name: prod-secrets
ports:
- containerPort: 3306
# container (pod) path
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pvc
volumeClaimTemplates:
- metadata:
name: data
spec:
storageClassName: localstorage
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 3Gi
selector:
matchLabels:
type: local
The storage class is default and there are no events in the PV.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: localstorage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: True
kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-01
labels:
type: local
spec:
storageClassName: localstorage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/mysql01"
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-02
labels:
type: local
spec:
storageClassName: localstorage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/mysql02"
The storage class is the default one:
get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
localstorage (default) kubernetes.io/no-provisioner Delete Immediate true 35m
PVC also shows no events:
Name: data-mysql-0
Namespace: default
StorageClass: localstorage
Status: Pending
Volume: mysql-storage
Labels: app=mysql
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 0
Access Modes:
VolumeMode: Filesystem
Mounted By: mysql-0
Events: <none>
Name: mysql-01
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"mysql-01"},"spec":{"accessMode...
Finalizers: [kubernetes.io/pv-protection]
StorageClass: localstorage
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/mysql01
HostPathType:
Events: <none>
Name: mysql-02
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"mysql-02"},"spec":{"accessMode...
Finalizers: [kubernetes.io/pv-protection]
StorageClass: localstorage
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/mysql02
HostPathType:
Events: <none>
Pod is in pending state:
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  27s (x2 over 27s)  default-scheduler  error while running "VolumeBinding" filter plugin for pod "mysql-0": pod has unbound immediate PersistentVolumeClaims
Can someone point out what else should be done here? Thanks.
| PersistentVolumeClaims will remain unbound indefinitely if a matching PersistentVolume does not exist. The PersistentVolume is matched on accessModes and capacity. In this case, the capacity of the PV is 10Gi whereas the PVC requests 3Gi.
The capacity in the PV needs to be the same as in the claim, i.e. 3Gi, to fix the unbound immediate PersistentVolumeClaims issue.
| Kubernetes | 60,774,220 | 39 |
I've created a Hyper-V machine and tried to deploy Sawtooth on Minikube using the Sawtooth YAML file:
https://sawtooth.hyperledger.org/docs/core/nightly/master/app_developers_guide/sawtooth-kubernetes-default.yaml
I changed the apiVersion, i.e. apiVersion: extensions/v1beta1 to apiVersion: apps/v1, and I launched Minikube on Kubernetes v1.17.0 using this command:
minikube start --kubernetes-version v1.17.0
After that I can't deploy the server. Command is
kubectl apply -f sawtooth-kubernetes-default.yaml --validate=false
It shows an error with "sawtooth-0" is invalid.
---
apiVersion: v1
kind: List
items:
- apiVersion: apps/v1
kind: Deployment
metadata:
name: sawtooth-0
spec:
replicas: 1
selector:
matchLabels:
name: sawtooth-0
template:
metadata:
labels:
name: sawtooth-0
spec:
containers:
- name: sawtooth-devmode-engine
image: hyperledger/sawtooth-devmode-engine-rust:chime
command:
- bash
args:
- -c
- "devmode-engine-rust -C tcp://$HOSTNAME:5050"
- name: sawtooth-settings-tp
image: hyperledger/sawtooth-settings-tp:chime
command:
- bash
args:
- -c
- "settings-tp -vv -C tcp://$HOSTNAME:4004"
- name: sawtooth-intkey-tp-python
image: hyperledger/sawtooth-intkey-tp-python:chime
command:
- bash
args:
- -c
- "intkey-tp-python -vv -C tcp://$HOSTNAME:4004"
- name: sawtooth-xo-tp-python
image: hyperledger/sawtooth-xo-tp-python:chime
command:
- bash
args:
- -c
- "xo-tp-python -vv -C tcp://$HOSTNAME:4004"
- name: sawtooth-validator
image: hyperledger/sawtooth-validator:chime
ports:
- name: tp
containerPort: 4004
- name: consensus
containerPort: 5050
- name: validators
containerPort: 8800
command:
- bash
args:
- -c
- "sawadm keygen \
&& sawtooth keygen my_key \
&& sawset genesis -k /root/.sawtooth/keys/my_key.priv \
&& sawset proposal create \
-k /root/.sawtooth/keys/my_key.priv \
sawtooth.consensus.algorithm.name=Devmode \
sawtooth.consensus.algorithm.version=0.1 \
-o config.batch \
&& sawadm genesis config-genesis.batch config.batch \
&& sawtooth-validator -vv \
--endpoint tcp://$SAWTOOTH_0_SERVICE_HOST:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--bind network:tcp://eth0:8800"
- name: sawtooth-rest-api
image: hyperledger/sawtooth-rest-api:chime
ports:
- name: api
containerPort: 8008
command:
- bash
args:
- -c
- "sawtooth-rest-api -C tcp://$HOSTNAME:4004"
- name: sawtooth-shell
image: hyperledger/sawtooth-shell:chime
command:
- bash
args:
- -c
- "sawtooth keygen && tail -f /dev/null"
- apiVersion: apps/v1
kind: Service
metadata:
name: sawtooth-0
spec:
type: ClusterIP
selector:
name: sawtooth-0
ports:
- name: "4004"
protocol: TCP
port: 4004
targetPort: 4004
- name: "5050"
protocol: TCP
port: 5050
targetPort: 5050
- name: "8008"
protocol: TCP
port: 8008
targetPort: 8008
- name: "8800"
protocol: TCP
port: 8800
targetPort: 8800
| You need to fix your deployment yaml file. As you can see from your error message, the Deployment.spec.selector field can't be empty.
Update the yaml (i.e. add spec.selector) as shown in below:
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: sawtooth-0
template:
metadata:
labels:
app.kubernetes.io/name: sawtooth-0
Why selector field is important?
The selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app.kubernetes.io/name: sawtooth-0). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.
Reference
Update:
The apiVersion for k8s service is v1:
- apiVersion: v1 # Update here
kind: Service
  metadata:
    name: sawtooth-0
    labels:
      app.kubernetes.io/name: sawtooth-0
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: sawtooth-0
... ... ...
| Kubernetes | 59,480,373 | 39 |
I have the following configuration for my pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: my-app
labels:
app: my-app
spec:
serviceName: my-app
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
restartPolicy: Never
containers:
- name: my-app
image: myregistry:443/mydomain/my-app
imagePullPolicy: Always
And it deploys fine without the restartPolicy. However, I do not want the process to be run again once finished, hence I added the 'restartPolicy: Never'. Unfortunately I get the following error when I attempt to deploy:
Error from server (Invalid): error when creating "stack.yaml": StatefulSet.apps "my-app" is invalid: spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: "Always"
What am I missing?
Thanks
| Please see https://github.com/kubernetes/kubernetes/issues/24725
It appears that only "Always" is supported.
| Kubernetes | 55,169,075 | 39 |
I am using Prometheus to monitor my Kubernetes cluster. I have set up Prometheus in a separate namespace. I have multiple namespaces and multiple pods running. Each pod container exposes custom metrics at this endpoint: :80/data/metrics. I am getting the pods' CPU and memory metrics etc., but how do I configure Prometheus to pull data from :80/data/metrics in each available pod? I have used this tutorial to set up Prometheus: Link
| You have to add these three annotations to your pods, as shown in the example below:
prometheus.io/scrape: 'true'
prometheus.io/path: '/data/metrics'
prometheus.io/port: '80'
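For example, on a Deployment the annotations go into the pod template metadata (the workload name and image are hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: 'true'        # opt this pod into scraping
        prometheus.io/path: '/data/metrics' # metrics path on the container
        prometheus.io/port: '80'            # metrics port on the container
    spec:
      containers:
      - name: my-app
        image: example/my-app
        ports:
        - containerPort: 80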
How does it work?
Look at the kubernetes-pods job in the config-map.yaml you are using to configure Prometheus:
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
Check these three relabel configurations:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
Here, __metrics_path__, the port, and whether to scrape metrics from this pod are read from the pod annotations.
For more details on how to configure Prometheus, see here.
| Kubernetes | 53,365,191 | 39 |
DNS resolution looks fine, but I cannot ping my service. What could be the reason?
From another pod in the cluster:
$ ping backend
PING backend.default.svc.cluster.local (10.233.14.157) 56(84) bytes of data.
^C
--- backend.default.svc.cluster.local ping statistics ---
36 packets transmitted, 0 received, 100% packet loss, time 35816ms
EDIT:
The service definition:
apiVersion: v1
kind: Service
metadata:
labels:
app: backend
name: backend
spec:
ports:
- name: api
protocol: TCP
port: 10000
selector:
app: backend
The deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 1
selector:
matchLabels:
run: backend
replicas: 1
template:
metadata:
labels:
run: backend
spec:
containers:
- name: backend
image: nha/backend:latest
imagePullPolicy: Always
ports:
- name: api
containerPort: 10000
I can curl my service from the same container:
kubectl exec -it backend-7f67c8cbd8-mf894 -- /bin/bash
root@backend-7f67c8cbd8-mf894:/# curl localhost:10000/my-endpoint
{"ok": "true"}
It looks like the endpoint on port 10000 does not get exposed though:
kubectl get ep
NAME ENDPOINTS AGE
backend <none> 2h
| Ping doesn't work with a service's cluster IP like 10.233.14.157, as it is a virtual IP. You should be able to ping a specific pod, but not a service.
| Kubernetes | 50,852,542 | 39 |
Base question: When I try to use kube-apiserver on my master node, I get a "command not found" error. How can I install/configure kube-apiserver? Any link to an example will help.
$ kube-apiserver --enable-admission-plugins DefaultStorageClass
-bash: kube-apiserver: command not found
Details: I am new to Kubernetes and Docker and was trying to create a StatefulSet with volumeClaimTemplates. My problem is that the automatic PVs are not created and I get this message in the PVC log: "persistentvolume-controller waiting for a volume to be created". I am not sure if I need to define a DefaultStorageClass, which is why I needed kube-apiserver to define it.
Name: nfs
Namespace: default
StorageClass: example-nfs
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner=example.com/nfs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 3m (x2401 over 10h) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
Here is get pvc result:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs Pending example-nfs 10h
And get storageclass:
$ kubectl describe storageclass example-nfs
Name: example-nfs
IsDefaultClass: No
Annotations: <none>
Provisioner: example.com/nfs
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
How can I troubleshoot this issue (e.g. logs for why the storage was not created)?
| You are asking two different questions here, one about kube-apiserver configuration, one about troubleshooting your StorageClass.
Here's an answer for your first question:
kube-apiserver is running as a Docker container on your master node. Therefore, the binary is within the container, not on your host system. It is started by the master's kubelet from a file located at /etc/kubernetes/manifests. kubelet is watching this directory and will start any Pod defined here as "static pods".
To configure kube-apiserver command line arguments you need to modify /etc/kubernetes/manifests/kube-apiserver.yaml on your master.
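As a rough sketch (flags and paths vary between clusters), enabling an extra admission plugin means editing the command list in that static pod manifest; the kubelet restarts the apiserver automatically when the file changes:
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt only)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,DefaultStorageClass   # extend this flag
    # ... keep all the other existing flags unchanged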
| Kubernetes | 50,352,621 | 39 |
Kubernetes PodSecurityPolicy is set to runAsNonRoot, and pods are not getting started after that. I am getting the error Error: container has runAsNonRoot and image has non-numeric user (appuser), cannot verify user is non-root
We are creating the user (appuser) with uid 999 and group (appgroup) with gid 999 in the docker container, and we are starting the container with that user.
But the pod creation is throwing an error.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 53s default-scheduler Successfully assigned app-578576fdc6-nfvcz to appmagent01
Normal SuccessfulMountVolume 52s kubelet, appagent01 MountVolume.SetUp succeeded for volume "default-token-ksn46"
Warning DNSConfigForming 11s (x6 over 52s) kubelet, appagent01 Search Line limits were exceeded, some search paths have been omitted, the applied search line is: app.svc.cluster.local svc.cluster.local cluster.local
Normal Pulling 11s (x5 over 51s) kubelet, appagent01 pulling image "app.dockerrepo.internal.com:5000/app:9f51e3e7ab91bb835d3b85f40cc8e6f31cdc2982"
Normal Pulled 11s (x5 over 51s) kubelet, appagent01 Successfully pulled image "app.dockerrepo.internal.com:5000/app:9f51e3e7ab91bb835d3b85f40cc8e6f31cdc2982"
Warning Failed 11s (x5 over 51s) kubelet, appagent01 Error: container has runAsNonRoot and image has non-numeric user (appuser), cannot verify user is non-root
| Here is the implementation of the verification:
case uid == nil && len(username) > 0:
return fmt.Errorf("container has runAsNonRoot and image has non-numeric user (%s), cannot verify user is non-root", username)
And here is the validation call with the comment:
// Verify RunAsNonRoot. Non-root verification only supports numeric user.
if err := verifyRunAsNonRoot(pod, container, uid, username); err != nil {
return nil, cleanupAction, err
}
As you can see, the only reason for that message in your case is uid == nil. Based on the comment in the source code, we need to set a numeric user value.
So, for the user with UID 999 you can do it in your pod definition like this:
securityContext:
runAsUser: 999
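For context, a fuller (hypothetical) pod spec showing where that block lives could look like this; the image is just a placeholder based on the question:
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  securityContext:        # pod-level, applies to all containers
    runAsUser: 999
    runAsGroup: 999
    runAsNonRoot: true
  containers:
  - name: app
    image: app.dockerrepo.internal.com:5000/app:latest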
| Kubernetes | 49,720,308 | 39 |
Deploying my service to production:
envsubst < ./kubernetes/pre-production/aks.yaml | kubectl apply -f -
I'm getting the following error:
The Deployment "moverick-mule-pre" is invalid:
spec.template.metadata.labels: Invalid value:
map[string]string{"commit":"750a26deebc3582bec4bfbb2426b3f22ee042eaa",
"app":"moverick-mule-pre"}: selector does not match template
labels
My yaml file is:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: moverick-mule-pre
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: moverick-mule-pre
commit: $CI_COMMIT_SHA
spec:
containers:
- name: moverick-mule-pre
image: $REGISTRY_SERVER_PRE/$CI_PROJECT_NAME:$CI_COMMIT_REF_NAME
imagePullPolicy: Always
ports:
- containerPort: 80
envFrom:
- secretRef:
name: moverick-pre
livenessProbe:
httpGet:
path: /console
port: 80
initialDelaySeconds: 5
periodSeconds: 5
volumeMounts:
- name: logs
mountPath: /opt/mule/logs/
- name: asc
mountPath: /opt/mule/asc/
imagePullSecrets:
- name: registry-pre
volumes:
- name: logs
azureFile:
secretName: azure-files-pre
shareName: logs-pre
readOnly: false
- name: asc
azureFile:
secretName: azure-asc-pre
shareName: asc-pre
readOnly: false
---
apiVersion: v1
kind: Service
metadata:
name: moverick-mule-pre
spec:
ports:
- port: 80
selector:
app: moverick-mule-pre
| You need to add a selector to the spec of the Deployment.
Also, this selector should match the labels in the PodTemplate.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: moverick-mule-pre
spec:
replicas: 2
selector:
matchLabels:
app: moverick-mule-pre
commit: $CI_COMMIT_SHA
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: moverick-mule-pre
commit: $CI_COMMIT_SHA
Otherwise, you will get an error like the one below
The Deployment "moverick-mule-pre" is invalid:
spec.selector: Required value
spec.template.metadata.labels: Invalid value: map[string]string{...} selector does not match template labels
| Kubernetes | 48,582,951 | 39 |
Helm _helpers.tpl?
Helm allows for the use of Go templating in resource files for Kubernetes.
A file named _helpers.tpl is usually used to define Go template helpers with this syntax:
{{- define "yourFnName" -}}
{{- printf "%s-%s" .Values.name .Values.version | trunc 63 -}}
{{- end -}}
Which you can then use in your *.yaml resource files like so:
{{ template "yourFnName" . }}
The Question
How can I use the helpers I define, in other helper definitions?
For example, what if I have a helper for the application name, and want to use that in the definition for a helper which determines the ingress host name?
I have tried calling helpers in other definitions a couple different ways. Given this basic helper function:
{{- define "host" -}}
{{- printf "%.example.com" <Somehow get result of "name" helper here> -}}
{{- end -}}
I have tried the following:
{{- printf "%.example.com" {{ template "name" . }} -}}
{{- printf "%.example.com" {{- template "name" . -}} -}}
{{- printf "%.example.com" ( template "name" . ) -}}
{{- printf "%.example.com" template "name" . -}}
# Separator
{{- $name := {{ template "environment" . }} -}}
{{- printf "%.example.com" $name -}}
# Separator
{{- $name := template "environment" . -}}
{{- printf "%.example.com" $name -}}
# Separator
{{- $name := environment -}}
{{- printf "%.example.com" $name -}}
Is it possible to even do this? If so, how?
| You can use the (include ...) syntax. Example of including a previously defined template foo:
{{- define "bar" -}}
{{- printf "%s-%s" (include "foo" .) .Release.Namespace | trunc 63 | trimSuffix "-" -}}
{{- end -}}
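Applied to the host helper from the question, that might look like this (a sketch assuming a name helper is already defined):
{{- define "host" -}}
{{- printf "%s.example.com" (include "name" .) -}}
{{- end -}}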
| Kubernetes | 46,719,082 | 39 |
I used the yaml file from the official Kubernetes documentation to create a Deployment, and it uses apiVersion: apps/v1beta1 at the top. Then I typed kubectl create -f deployment.yaml to create this Deployment, but it produced an error as follows:
error: error validating "deployment.yaml": error validating data: couldn't find type: v1beta1.Deployment; if you choose to ignore these errors, turn validation off with --validate=false`
After some searching, I changed apiVersion: apps/v1beta1 to extensions/v1beta1, then recreated the Deployment with the yaml file, and it worked fine.
So, I want to know what the differences are between apps/v1beta1 and extensions/v1beta1. Is it related to the Kubernetes version?
# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:34:32Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
| The apps API group will be where the v1 Deployment type lives. The apps/v1beta1 version was added in 1.6.0, so if you have a 1.5.x client or server, you should still use the extensions/v1beta1 version.
The apps/v1beta1 and extensions/v1beta1 Deployment types are identical, but when creating via the apps API, some improved defaults are used
| Kubernetes | 43,163,625 | 39 |
I deployed my angular application(Static webpage) on kubernetes and tried launching it from Google Chrome. I see app is loading, however there is nothing displayed on the browser. Upon checking on browser console , I could see this error
"Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of "text/html". Strict MIME type checking is enforced for module scripts per HTML spec."
for (main.js,poylfill.js,runtime.js) files . I research few forums and one possible rootcause could because of type attribute in <script> tag should be type=text/javascript instead of type=module in my index.html file that is produced under dist folder after executing ng build. I don't how to make that change as to these tags as generated during the build process, and my ng-build command is taken care by a docker command.
URL i'm trying to access will be something like : "http://xxxx:portnum/issuertcoetools
note: The host xxxx:portnum will be used by many other apps as well.
Are there any work-arounds or solutions to this issue?
index.html - produced after running ng-build in local, (which is the same i see in kubernetes POD too)
<!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<title>Data Generator</title>
<base href="/">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" type="image/x-icon" href="favicon.ico">
<link rel="preconnect" href="https://fonts.gstatic.com">
<style type="text/css">@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fCRc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fABc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fCBc4AMP6lbBP.woff2) format('woff2');unicode-range:U+1F00-1FFF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fBxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0370-03FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fCxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fChc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fBBc4AMP6lQ.woff2) format('woff2');unicode-range:U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu72xKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu5mxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu7mxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+1F00-1FFF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu4WxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0370-03FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu7WxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu7GxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, 
U+A720-A7FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu4mxKKTU1Kg.woff2) format('woff2');unicode-range:U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fCRc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fABc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fCBc4AMP6lbBP.woff2) format('woff2');unicode-range:U+1F00-1FFF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fBxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0370-03FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fCxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fChc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fBBc4AMP6lQ.woff2) format('woff2');unicode-range:U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;}</style>
<style type="text/css">@font-face{font-family:'Material Icons';font-style:normal;font-weight:400;src:url(https://fonts.gstatic.com/s/materialicons/v128/flUhRq6tzZclQEJ-Vdg-IuiaDsNcIhQ8tQ.woff2) format('woff2');}.material-icons{font-family:'Material Icons';font-weight:normal;font-style:normal;font-size:24px;line-height:1;letter-spacing:normal;text-transform:none;display:inline-block;white-space:nowrap;word-wrap:normal;direction:ltr;-webkit-font-feature-settings:'liga';-webkit-font-smoothing:antialiased;}</style>
<style>.mat-typography{font:400 14px/20px Roboto,Helvetica Neue,sans-serif;letter-spacing:normal}html,body{height:100%}body{margin:0;font-family:Roboto,Helvetica Neue,sans-serif}</style><link rel="stylesheet" href="styles.9c0548760c8b22be.css" media="print" onload="this.media='all'"><noscript><link rel="stylesheet" href="styles.9c0548760c8b22be.css"></noscript></head>
<body class="mat-typography">
<app-root></app-root>
<script src="runtime.397c3874548e84cd.js" type="module">
</script><script src="polyfills.2145acc81d0726ab.js" type="module">
</script><script src="main.a655dca28148b7e2.js" type="module"></script>
</body></html>
nginx.conf file
worker_processes 4;
events { worker_connections 1024; }
http {
server {
listen 8080;
include /etc/nginx/mime.types;
location /issuertcoetools {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
}
}
| This error usually happens because your app was deployed into a subfolder, while Angular fetches your resources directly from the base URL. Your HTML is found when you go to domain.com/mysubfolder/index.html, but Angular fetches the scripts from domain.com/ instead of domain.com/mysubfolder/; I'm pretty sure this is the cause of your issue. You can resolve it by building your app with:
ng build --prod --base-href mysubfolder
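For the path in the question, that would presumably be something like (note the leading and trailing slashes):
ng build --prod --base-href /issuertcoetools/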
| Kubernetes | 72,070,748 | 38 |
Is it possible to know the progress of file transfer with kubectl cp for Google Cloud?
| No, this doesn't appear to be possible.
kubectl cp appears to be implemented by doing the equivalent of
kubectl exec podname -c containername \
tar cf - /whatever/path \
| tar xf -
This means two things:
tar(1) doesn't print any useful progress information. (You could in principle add a v flag to print out each file name as it goes by to stderr, but that won't tell you how many files in total there are or how large they are.) So kubectl cp as implemented doesn't have any way to get this out.
There's not a richer native Kubernetes API to copy files.
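As a rough workaround (not kubectl cp itself), you can run the same tar pipe by hand and feed it through a local progress meter such as pv, assuming pv is installed on your machine:
# shows bytes transferred (not a percentage, since the total size isn't known up front)
kubectl exec podname -c containername -- tar cf - /whatever/path | pv | tar xf -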
If moving files in and out of containers is a key use case for you, it will probably be easier to build, test, and run by adding a simple HTTP service. You can then rely on things like the HTTP Content-Length: header for progress metering.
| Kubernetes | 54,293,353 | 38 |
This is a total noob Kubernetes question. I have searched for this, but can't seem to find the exact answer. But that may just come down to not having a total understanding of Kubernetes. I have some pods deployed across three nodes, and my questions are simple.
How do I check the total disk space on a node?
How do I see how much of that space each pod is taking up?
| For calculating total disk space you can use
kubectl describe nodes
From there you can grep for ephemeral-storage, which is the virtual disk size. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers.
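A quick sketch of that (the node name is a placeholder):
# total/allocatable ephemeral storage reported by the kubelet
kubectl describe node <node-name> | grep -i ephemeral-storage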
If you are using Prometheus you can calculate with this formula
sum(node_filesystem_size_bytes)
| Kubernetes | 54,149,618 | 38 |
As I understand it, an Istio VirtualService is a kind of abstraction which tries to add an interface to the actual implementation, like the Service in Kubernetes or something similar in Consul.
When using Kubernetes as the underlying platform for Istio, is there any difference between an Istio VirtualService and a Kubernetes Service, or are they the same?
| Kubernetes service
A Kubernetes service manages a pod's networking. It specifies whether your pods are exposed internally (ClusterIP), externally (NodePort or LoadBalancer), or as a CNAME of another DNS entry (externalName).
As an example, this foo-service will expose the pods with label app: foo. Any requests sent to the node on port 30007 will be forwarded to the pod on port 80.
apiVersion: v1
kind: Service
metadata:
name: foo-service
spec:
type: NodePort
selector:
app: foo
ports:
- port: 80
targetPort: 80
nodePort: 30007
Istio virtualservice
An Istio virtualservice is one level higher than a Kubernetes service. It can be used to apply traffic routing, fault injection, retries and many other configurations to services.
As an example, this foo-retry-virtualservice will retry failed requests to foo 3 times, with a 2s timeout for each attempt.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: foo-retry-virtualservice
spec:
hosts:
- foo
http:
- route:
- destination:
host: foo
retries:
attempts: 3
perTryTimeout: 2s
Another example: this foo-delay-virtualservice will apply a 5s delay to 0.1% of requests to foo.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: foo-delay-virtualservice
spec:
hosts:
- foo
http:
- fault:
delay:
percentage:
value: 0.1
fixedDelay: 5s
route:
- destination:
host: foo
Ref
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
https://istio.io/latest/docs/reference/config/networking/virtual-service/
https://istio.io/latest/docs/concepts/traffic-management/#virtual-services
| Kubernetes | 53,743,219 | 38 |
I work on a multi-tenant node app. I know that to create a new namespace in Kubernetes it is possible to run a kubectl command as follows:
kubectl create namespace <namespace name>
How can I create a new namespace from the node microservices when a new customer signs up for a new account?
Is there some kubectl API to make a request from an external app?
Is it necessary for the user to log out from the app, and to destroy the pods created in Kubernetes?
| It could be as simple as calling from a shell in your app:
kubectl create namespace <your-namespace-name>
Essentially, kubectl talks to the kube-apiserver.
You can also directly call the kube-apiserver. This is an example to list the pods:
$ curl -k -H 'Authorization: Bearer <token>' \
https://$KUBERNETES_SERVICE_HOST:6443/api/<api-version>/namespaces/default/pods
More specifically to create a namespace:
$ curl -k -X POST -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <token>' \
https://$KUBERNETES_SERVICE_HOST:6443/api/v1/namespaces/ -d '
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"name": "mynewnamespace"
}
}'
In case you are wondering about the <token>, it's a Kubernetes Secret typically belonging to a ServiceAccount and bound to a ClusterRole that allows you to create namespaces.
You can create a Service Account like this:
$ kubectl create serviceaccount namespace-creator
Then you'll see the token like this (a token is automatically generated):
$ kubectl describe sa namespace-creator
Name: namespace-creator
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: namespace-creator-token-xxxxx
Tokens: namespace-creator-token-xxxxx
Events: <none>
Then you would get the secret:
$ kubectl describe secret namespace-creator-token-xxxxx
Name: namespace-creator-token-xxxx
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: namespace-creator
kubernetes.io/service-account.uid: <redacted>
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 7 bytes
token: <REDACTED> <== This is the token you need for Authorization: Bearer
Your ClusterRole should look something like this:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: namespace-creator
rules:
- apiGroups: ["*"]
resources: ["namespaces"]
verbs: ["create"]
Then you would bind it like this:
$ kubectl create clusterrolebinding namespace-creator-binding --clusterrole=namespace-creator --serviceaccount=default:namespace-creator
When it comes to writing code you can use any HTTP client library in any language to call the same endpoints.
There are also libraries like the client-go library that takes care of the plumbing of connecting to a kube-apiserver.
| Kubernetes | 52,901,435 | 38 |
Kubernetes is not able to find the metrics-server API. I am using Kubernetes with Docker on Mac. I was trying to do HPA from the following example. However, when I execute the command kubectl get hpa, my target was still unknown. Then I tried kubectl describe hpa, which gave me an error like the one below:
Name: php-apache
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Sun, 07 Oct 2018 12:36:31 -0700
Reference: Deployment/php-apache
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 5%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedComputeMetricsReplicas 1h (x34 over 5h) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Warning FailedGetResourceMetric 1m (x42 over 5h) horizontal-pod-autoscaler unable to get metrics for resource cpu: no metrics returned from resource metrics API
I am using metrics-server as suggested in Kubernetes documentation. I also tried doing same just using Minikube. But that also didn't work.
Running kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes outputs :
{
"kind": "NodeMetricsList",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
},
"items": []
}
| Use official metrics server - https://github.com/kubernetes-sigs/metrics-server
If you use one master node, run this command to create the metrics-server:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
If you have HA (High availability) cluster, use this:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml
Then use can use kubectl top nodes or kubectl top pods -A and get something like:
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
| Kubernetes | 52,694,238 | 38 |
How do I run a docker image that I built locally on Google Container Engine?
| You can push your image to Google Container Registry and reference it from your pod manifest.
Detailed instructions
Assuming you have DOCKER_HOST properly set up, a GKE cluster running a recent version of Kubernetes, and the Google Cloud SDK installed.
Set up your gcloud configuration and fetch cluster credentials
gcloud components update kubectl
gcloud config set project <your-project>
gcloud config set compute/zone <your-cluster-zone>
gcloud config set container/cluster <your-cluster-name>
gcloud container clusters get-credentials <your-cluster-name>
Tag your image
docker tag <your-image> gcr.io/<your-project>/<your-image>
Push your image
gcloud docker push gcr.io/<your-project>/<your-image>
Create a pod manifest for your container: my-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: <container-name>
    image: gcr.io/<your-project>/<your-image>
    ...
Schedule this pod
kubectl create -f my-pod.yaml
Repeat from step (4) for each pod you want to run. You can have multiple definitions in a single file using a line with --- as delimiter.
| Kubernetes | 26,788,485 | 38 |
I'm currently facing an issue with my Kubernetes clusters and need some assistance. Everything was running smoothly until today. However, after performing an update on my Ubuntu system, I'm unable to establish a connection from my working environment to the kubernetes clusters.
When executing the command kubectl get pods, I'm encountering the following error message:
E0805 09:59:45.750534 234576 memcache.go:265] couldn’t get current server API group list: Get "http://localhost:3334/api?timeout=32s": EOF
Here are the details of my cluster setup: Kubernetes 1.27, bare-metal, Host System is Ubuntu 20.04
I would greatly appreciate any guidance or insights on resolving this issue.
| Try
kubectl get nodes -v=10
and look for the errors.
| Kubernetes | 76,841,889 | 37 |
I have microk8s v1.22.2 running on Ubuntu 20.04.3 LTS.
Output from /etc/hosts:
127.0.0.1 localhost
127.0.1.1 main
Excerpt from microk8s status:
addons:
enabled:
dashboard # The Kubernetes dashboard
ha-cluster # Configure high availability on the current node
ingress # Ingress controller for external access
metrics-server # K8s Metrics Server for API access to service metrics
I checked for the running dashboard (kubectl get all --all-namespaces):
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-node-2jltr 1/1 Running 0 23m
kube-system pod/calico-kube-controllers-f744bf684-d77hv 1/1 Running 0 23m
kube-system pod/metrics-server-85df567dd8-jd6gj 1/1 Running 0 22m
kube-system pod/kubernetes-dashboard-59699458b-pb5jb 1/1 Running 0 21m
kube-system pod/dashboard-metrics-scraper-58d4977855-94nsp 1/1 Running 0 21m
ingress pod/nginx-ingress-microk8s-controller-qf5pm 1/1 Running 0 21m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 23m
kube-system service/metrics-server ClusterIP 10.152.183.81 <none> 443/TCP 22m
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.103 <none> 443/TCP 22m
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.197 <none> 8000/TCP 22m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 23m
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 22m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 23m
kube-system deployment.apps/metrics-server 1/1 1 1 22m
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 22m
kube-system deployment.apps/dashboard-metrics-scraper 1/1 1 1 22m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-69d7f794d9 0 0 0 23m
kube-system replicaset.apps/calico-kube-controllers-f744bf684 1 1 1 23m
kube-system replicaset.apps/metrics-server-85df567dd8 1 1 1 22m
kube-system replicaset.apps/kubernetes-dashboard-59699458b 1 1 1 21m
kube-system replicaset.apps/dashboard-metrics-scraper-58d4977855 1 1 1 21m
I want to expose the microk8s dashboard within my local network to access it through http://main/dashboard/
To do so, I did the following nano ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: public
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
name: dashboard
namespace: kube-system
spec:
rules:
- host: main
http:
paths:
- backend:
serviceName: kubernetes-dashboard
servicePort: 443
path: /
Enabling the ingress-config through kubectl apply -f ingress.yaml gave the following error:
error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"
Help would be much appreciated, thanks!
Update:
@harsh-manvar pointed out a mismatch in the config version. I have rewritten ingress.yaml to a very stripped down version:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard
namespace: kube-system
spec:
rules:
- http:
paths:
- path: /dashboard
pathType: Prefix
backend:
service:
name: kubernetes-dashboard
port:
number: 443
Applying this works. Also, the ingress rule gets created.
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
kube-system dashboard public * 127.0.0.1 80 11m
However, when I access the dashboard through http://<ip-of-kubernetes-master>/dashboard, I get a 400 error.
Log from the ingress controller:
192.168.0.123 - - [10/Oct/2021:21:38:47 +0000] "GET /dashboard HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" 466 0.002 [kube-system-kubernetes-dashboard-443] [] 10.1.76.3:8443 48 0.000 400 ca0946230759edfbaaf9d94f3d5c959a
Does the dashboard also need to be exposed using the microk8s proxy? I thought the ingress controller would take care of this, or did I misunderstand this?
| To fix the error error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1", you need to set apiVersion to networking.k8s.io/v1. From the Kubernetes v1.16 article about deprecated APIs:
NetworkPolicy in the extensions/v1beta1 API version is no longer served
- Migrate to use the networking.k8s.io/v1 API version, available since v1.8. Existing persisted data can be retrieved/updated via the new version.
Now moving to the second issue. You need to add a few annotations and make a few changes in your Ingress definition to expose the dashboard properly on the microk8s cluster:
add nginx.ingress.kubernetes.io/rewrite-target: /$2 annotation
add nginx.ingress.kubernetes.io/configuration-snippet: | rewrite ^(/dashboard)$ $1/ redirect; annotation
change path: /dashboard to path: /dashboard(/|$)(.*)
We need them to properly forward the request to the backend pods - good explanation in this article:
Note: The "nginx.ingress.kubernetes.io/rewrite-target" annotation rewrites the URL before forwarding the request to the backend pods. In /dashboard(/|$)(.*) for path, (.*) stores the dynamic URL that's generated while accessing the Kubernetes Dashboard. The "nginx.ingress.kubernetes.io/rewrite-target" annotation replaces the captured data in the URL before forwarding the request to the kubernetes-dashboard service. The "nginx.ingress.kubernetes.io/configuration-snippet" annotation rewrites the URL to add a trailing slash ("/") only if ALB-URL/dashboard is accessed.
We also need another two changes:
add the nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" annotation to tell NGINX Ingress to communicate with the Dashboard service using HTTPS
add kubernetes.io/ingress.class: public annotation to use NGINX Ingress created by microk8s ingress plugin
After implementing everything above, the final YAML file looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/configuration-snippet: |
rewrite ^(/dashboard)$ $1/ redirect;
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
kubernetes.io/ingress.class: public
name: dashboard
namespace: kube-system
spec:
rules:
- http:
paths:
- path: /dashboard(/|$)(.*)
pathType: Prefix
backend:
service:
name: kubernetes-dashboard
port:
number: 443
It should work fine. No need to run microk8s proxy command.
| Kubernetes | 69,517,855 | 37 |
I have Kubernetes pods running, as shown by the command "kubectl get all -A":
The same pods are shown by the command "kubectl get pod -A":
I want to enter/log in to any of these pods (all are in the Running state). How can I do that? Please let me know the command.
| In addition to Jonas' answer above:
If you have more than one namespace, you need to specify the namespace your pod is running in, i.e. kubectl exec -n <namespace> <pod-name> -it -- /bin/sh
After successfully accessing your pod, you can go ahead and navigate through your container.
| Kubernetes | 67,550,782 | 37 |
I found specifying like kubectl --context dev --namespace default {other commands} before kubectl client in many examples. Can I get a clear difference between them in a k8's environment?
| The kubernetes concept (and term) context only applies in the kubernetes client-side, i.e. the place where you run the kubectl command, e.g. your command prompt.
The kubernetes server-side doesn't recognise this term 'context'.
As an example, in the command prompt, i.e. as the client:
when calling the kubectl get pods -n dev, you're
retrieving the list of the pods located under the namespace 'dev'.
when calling the kubectl get deployments -n dev, you're
retrieving the list of the deployments located under the namespace 'dev'.
If you know that you're targeting basically only the 'dev' namespace at the moment, then instead of adding "-n dev" all the time in each of your kubectl commands, you can just do the following (see the command sketch after this list):
Create a context named 'context-dev'.
Specify the namespace='dev' for this context.
Set the current-context='context-dev'.
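As a sketch, those steps map onto kubectl config commands (the cluster and user names are placeholders taken from your existing kubeconfig):
# create (or update) a context that pins the namespace to 'dev'
kubectl config set-context context-dev --cluster=my-cluster --user=my-user --namespace=dev
# make it the current context
kubectl config use-context context-dev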
This way, your commands above will be simplified to the following:
kubectl get pods
kubectl get deployments
You can set different contexts, such as 'context-dev', 'context-staging', etc., whereby each of them targets a different namespace. BTW it's not obligatory to prefix the name with 'context'. You can also just name them 'dev', 'staging', etc.
Just as an analogy where a group of people are talking about films. So somewhere within the conversation the word 'Rocky' was used. Since they're talking about films, it's clear and there's no ambiguity that 'Rocky' here refers to the boxing film 'Rocky' and not about the "bumpy, stony" terrains. It's redundant and unnecessary to mention 'the movie Rocky' each time. Just one word, 'Rocky', is enough. The context is obviously about film.
The same thing with Kubernetes and with the example above. If the context is already set to a certain cluster and namespace, it's redundant and unnecessary to set and / or mention these parameters in each of your commands.
My explanation here is just revolving around namespace, but this is just an example. Other than specifying the namespace, within the context you will actually also specify which cluster you're targeting and the user info used to access the cluster. You can have a look inside the ~/.kube/config file to see what information other than the namespace is associated to each context.
In the sample command in your question above, both the namespace and the context are specified. In this case, kubectl will use whatever configuration values set within the 'dev' context, but the namespace value specified within this context (if it exists) will be ignored as it will be overriden by the value explicitly set in the command, i.e. 'default' for your sample command above.
Meanwhile, the namespace concept is used in both sides: server and client sides. It's a logical grouping of Kubernetes objects. Just like how we group files inside different folders in Operating Systems.
| Kubernetes | 61,171,487 | 37 |
I would like to know what is/are differences between an api gateway and Ingress controller. People tend to use these terms interchangeably due to similar functionality they offer. When I say, 'Ingress controller'; don't confuse it with Ingress objects provided by kubernetes. Also, it would be nice if you can explain the scenario where one will be more useful than other.
Is api gateway a generic term used for traffic routers in cloud-native world and 'Ingress controller' is implementation of api-gateway in kubernetes world?
| An Ingress controller allows a single IP and port to reach all services running in k8s through ingress rules. The ingress controller's service is typically of type LoadBalancer so it is accessible from the public internet.
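As a minimal sketch of such an ingress rule (the host and service names are made up):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-frontend   # hypothetical backend service
            port:
              number: 80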
An API gateway is used for application routing, rate limiting, security, request and response handling and other application-related tasks. Say you have a microservice-based application in which a request needs information to be collected from multiple microservices. You need a way to distribute the user requests to different services, gather the responses from all the microservices, and prepare the final response to be sent to the user. An API Gateway is the one which does this kind of work for you.
| Kubernetes | 59,071,842 | 37 |
How can I list the kubernetes services in k9s?
By default only the pods and deployments are shown.
It's possible as shown here and I'm using the current k9s version 0.7.11
| It's documented here:
Key Bindings
K9s uses aliases to navigate most K8s resources.
:alias View a Kubernetes resource aliases
So you have to type:
:svc
EDIT: Hotkeys
You can also configure custom hotkeys.
Here's my config file ~/.k9s/hotkey.yml
or ~/Library/Application Support/k9s/hotkey.yml
hotKey:
f1:
shortCut: F1
description: View pods
command: pods
f2:
shortCut: F2
description: View deployments
command: dp
f3:
shortCut: F3
description: View statefulsets
command: sts
f4:
shortCut: F4
description: View services
command: service
f5:
shortCut: F5
description: View ingresses
command: ingress
| Kubernetes | 56,823,971 | 37 |
What is the equivalent command for minikube delete in docker-for-desktop on OSX
As I understand, minikube creates a VM to host its kubernetes cluster but I do not understand how docker-for-desktop is managing this on OSX.
| Tear down Kubernetes in Docker for OS X is quite an easy task.
Go to Preferences, open Reset tab, and click Reset Kubernetes cluster.
All object that have been created with Kubectl before that will be deleted.
You can also reset docker VM image (Reset disk image) and all settings (Reset to factory defaults) or even uninstall Docker.
| Kubernetes | 52,876,194 | 37 |
I am a beginner to Kubernetes and starting off with this tutorial. I installed VM and expected to be able to start a cluster by using the command:
minikube start
But I get the error:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E0911 13:34:45.394430 41676 start.go:174] Error starting host: Error
creating host: Error executing step: Creating VM.
: Error setting up host only network on machine start: The host-only
adapter we just created is not visible. This is a well known
VirtualBox bug. You might want to uninstall it and reinstall at least
version 5.0.12 that is is supposed to fix this issue.
It says that it is a well known bug in Virtualbox but I installed its latest version. Any ideas?
| Figured out the issue. VirtualBox was not installed correctly as Mac had blocked it. It wasn't obvious at first.
Restarting won't work if VirtualBox isn't installed correctly.
System Preferences -> Security & Privacy -> Allow -> Then allow the software corporation (in this case Oracle)
Restart
Now it worked as expected.
| Kubernetes | 52,277,019 | 37 |
I try to pull image from an ACR using a secret and I can't do it.
I created resources using azure cli commands:
az login
az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService
az group create --name aksGroup --location westeurope
az aks create --resource-group aksGroup --name aksCluster --node-count 1 --generate-ssh-keys -k 1.9.2
az aks get-credentials --resource-group aksGroup --name aksCluster
az acr create --resource-group aksGroup --name aksClusterRegistry --sku Basic --admin-enabled true
After that I logged in and pushed image successfully to created ACR from local machine.
docker login aksclusterregistry.azurecr.io
docker tag jetty aksclusterregistry.azurecr.io/jetty
docker push aksclusterregistry.azurecr.io/jetty
The next step was creating a secret:
kubectl create secret docker-registry secret --docker-server=aksclusterregistry.azurecr.io --docker-username=aksClusterRegistry --docker-password=<Password from tab ACR/Access Keys> [email protected]
And eventually I tried to create pod with image from the ACR:
#pod.yml
apiVersion: v1
kind: Pod
metadata:
name: jetty
spec:
containers:
- name: jetty
image: aksclusterregistry.azurecr.io/jetty
imagePullSecrets:
- name: secret
kubectl create -f pod.yml
In result I have a pod with status ImagePullBackOff:
>kubectl get pods
NAME READY STATUS RESTARTS AGE
jetty 0/1 ImagePullBackOff 0 1m
> kubectl describe pod jetty
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned jetty to aks-nodepool1-62963605-0
Normal SuccessfulMountVolume 2m kubelet, aks-nodepool1-62963605-0 MountVolume.SetUp succeeded for volume "default-token-w8png"
Normal Pulling 2m (x2 over 2m) kubelet, aks-nodepool1-62963605-0 pulling image "aksclusterregistry.azurecr.io/jetty"
Warning Failed 2m (x2 over 2m) kubelet, aks-nodepool1-62963605-0 Failed to pull image "aksclusterregistry.azurecr.io/jetty": rpc error: code = Unknown desc = Error response from daemon: Get https://aksclusterregistry.azurecr.io/v2/jetty/manifests/latest: unauthorized: authentication required
Warning Failed 2m (x2 over 2m) kubelet, aks-nodepool1-62963605-0 Error: ErrImagePull
Normal BackOff 2m (x5 over 2m) kubelet, aks-nodepool1-62963605-0 Back-off pulling image "aksclusterregistry.azurecr.io/jetty"
Normal SandboxChanged 2m (x7 over 2m) kubelet, aks-nodepool1-62963605-0 Pod sandbox changed, it will be killed and re-created.
Warning Failed 2m (x6 over 2m) kubelet, aks-nodepool1-62963605-0 Error: ImagePullBackOff
What's wrong? Why does approach with secret not work?
Please don't advice me approach with service principal, because I would like to understand why this aproach doesn't work. I think it must be working.
| The "old" way with AKS was to create a secret as you mentioned. That is no longer recommended.
The "new" way is to attach the container registry. This article explains the "new" way to attach ACR, and also provides a link to the old way to clear up confusion. When you create your cluster, attach with:
az aks create -n myAKSCluster -g myResourceGroup --attach-acr $MYACR
Or if you've already created your cluster, update it with:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr $MYACR
Notes:
$MYACR is just the name of your registry without the .azurecr.io. Ex: MYACR=foobar not MYACR=foobar.azurecr.io.
After you attach your ACR, it will take a few minutes for the ImagePullBackOff to transition to Running.
| Kubernetes | 49,512,099 | 37 |
I'm attempting to create a Kubernetes CronJob to run an application every minute.
A prerequisite is that I need to get my application code onto the container that runs within the CronJob. I figure that the best way to do so is to use a persistent volume, a pvclaim, and then defining the volume and mounting it to the container. I've done this successfully with containers running within a Pod, but it appears to be impossible within a CronJob? Here's my attempted configuration:
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
name: update_db
spec:
volumes:
- name: application-code
persistentVolumeClaim:
claimName: application-code-pv-claim
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: update-fingerprints
image: python:3.6.2-slim
command: ["/bin/bash"]
args: ["-c", "python /client/test.py"]
restartPolicy: OnFailure
The corresponding error:
error: error validating "cron-applications.yaml": error validating
data: found invalid field volumes for v2alpha1.CronJobSpec; if you
choose to ignore these errors, turn validation off with
--validate=false
I can't find any resources that show that this is possible. So, if not possible, how does one solve the problem of getting application code into a running CronJob?
| A CronJob uses a PodTemplate, like everything else based on Pods, and can therefore use Volumes. You placed your Volume specification directly in the CronJobSpec instead of the PodSpec; use it like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: update-db
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: update-fingerprints
image: python:3.6.2-slim
command: ["/bin/bash"]
args: ["-c", "python /client/test.py"]
volumeMounts:
- name: application-code
mountPath: /where/ever
restartPolicy: OnFailure
volumes:
- name: application-code
persistentVolumeClaim:
claimName: application-code-pv-claim
| Kubernetes | 46,578,331 | 37 |
I want to debug the pod in a simple way, therefore I want to start the pod without deployment.
But it will automatically create a deployment
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
So I have to create the nginx.yaml file
---
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
And create the pod like below, then it creates pod only
kubectl create -f nginx.yaml
pod "nginx" created
How can I specify in the command line the kind:Pod to avoid deployment?
// I run under minikube 0.20.0 and kubernetes 1.7.0 under Windows 7
| kubectl run nginx --image=nginx --port=80 --restart=Never
--restart=Always: The restart policy for this Pod. Legal values [Always, OnFailure, Never]. If set to Always
a deployment is created, if set to OnFailure a job is created, if set to Never, a regular pod is created. For the latter two --replicas must be 1. Default Always [...]
see official document https://kubernetes.io/docs/reference/kubectl/generated/kubectl_run/
| Kubernetes | 45,279,572 | 37 |
How do you get logs from kube-system pods? Running kubectl logs pod_name_of_system_pod does not work:
λ kubectl logs kube-dns-1301475494-91vzs
Error from server (NotFound): pods "kube-dns-1301475494-91vzs" not found
Here is the output from get pods:
λ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default alternating-platypus-rabbitmq-3309937619-ddl6b 1/1 Running 1 1d
kube-system kube-addon-manager-minikube 1/1 Running 1 1d
kube-system kube-dns-1301475494-91vzs 3/3 Running 3 1d
kube-system kubernetes-dashboard-rvm78 1/1 Running 1 1d
kube-system tiller-deploy-3703072393-x7xgb 1/1 Running 1 1d
| Use the namespace parameter with kubectl: kubectl --namespace kube-system logs kubernetes-dashboard-rvm78
| Kubernetes | 45,185,015 | 37 |
Specific to Docker based deployment, what are the differences between those two? Since Google App Engine Flexible now also supports Dockerfile based deployment and it is also fully-managed service, seems it's more preferred option rather than configuring Kubernetes deployment on Container Engine, isn't it?
What are the use cases where it's more preferred to use Google Container Engine over App Engine Flexible?
| They are different things. App Engine Flexible is focused on application development - i.e. you have an application and you want it to be deployed and managed by Google. On the other hand, Kubernetes is more about having your own infrastructure. Obviously, you can also deploy applications in Kubernetes but, as it's your "own" infrastructure, you are the one to directly manage how both the infrastructure and the application will behave (create services, create scalability policies, RBAC, security policies...).
In this sense, Kubernetes is more flexible in what you can achieve. However, as a developer, you may not be interested in the infrastructure at all, only that your application works and scales. For this kind of profile, App Engine Flexible is more suitable.
If, on the other hand, you want to manage a complete container infrastructure (more of an SRE profile), then Kubernetes is for you.
| Kubernetes | 44,899,828 | 37 |
Is there way to specify a custom NodePort port in a kubernetes service YAML definition?
I need to be able to define the port explicitly in my configuration file.
| You can set the type NodePort in your Service definition. Note that there is a Node Port Range configured for your API server with the option --service-node-port-range (by default 30000-32767). You can also specify a port in that range specifically by setting the nodePort attribute under the Port object, or the system will choose a port in that range for you.
So a Service example with specified NodePort would look like this:
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
name: nginx
spec:
type: NodePort
ports:
- port: 80
nodePort: 30080
name: http
- port: 443
nodePort: 30443
name: https
selector:
name: nginx
For more information on NodePort, see this doc. For configuring API Server Node Port range please see this.
| Kubernetes | 43,935,502 | 37 |
I am finding docker swarm, kubernetes quite similar and then there is docker which is a company and the above two are docker clustering tool. So what exactly all these tools are and differences between them?
| There are lots of articles out there which will explain the differences. In a nutshell:
Both are trying to solve the same problem - container orchestration over a large number of hosts. Essentially these problems can be broken down like so:
Scheduling containers across multiple hosts (taking into account resource utilization etc)
Grouping containers into logical units
Scaling of containers
Load balancing/access to these containers once they're deployed
Attaching storage to the containers, whether it be shared or not
Communication/networking between the containers/grouped containers
Service discovery of the containers (ie where is X service)
Both Kubernetes & Docker Swarm can solve these problems for you, but they have different naming conventions and ideas on how to solve them. The differences are relatively conceptual. There are articles that break this down quite well:
https://platform9.com/blog/compare-kubernetes-vs-docker-swarm/
https://torusware.com/blog/2016/09/hands-on-orchestration-docker-1-12-swarm-vs-kubernetes/
http://containerjournal.com/2016/04/07/kubernetes-vs-swarm-container-orchestrator-best/
Essentially, they are similar products in the same space. There are a few caveats to bear in mind though:
Kubernetes is developed with a container agnostic mindset (at present it supports Docker, rkt and has some support for hyper whereas docker swarm is docker only)
Kubernetes is "cloud native" in that it can run equally well across Azure, Google Container Engine and AWS - I am not currently aware of this being a feature of Docker Swarm, although you could configure it yourself
Kubernetes is an entirely open source product. If you require commercial support, you need to go to a third party to get it. Docker provides enterprise support for Swarm
If you are familiar with the docker-compose workflow, Docker Swarm makes use of this so it may be familiar to you and easier to get started. Kubernetes requires learning pod/service/deployment definitions which while pure yaml, are an adjustment.
| Kubernetes | 40,844,726 | 37 |
I will use a very specific way to explain my problem, but I think it is better to be specific than to explain in an abstract way...
Say, there is a MongoDB replica set outside of a Kubernetes cluster but in a network. The ip addresses of all members of the replica set were resolved by /etc/hosts in app servers and db servers.
In an experiment/transition phase, I need to access those mongo db servers from kubernetes pods.
However, kubernetes doesn't seem to allow adding custom entries to /etc/hosts in pods/containers.
The MongoDB replica sets are already working with large data set, creating a new replica set in the cluster is not an option.
Because I use GKE, changing any resources in the kube-dns namespace should be avoided, I suppose. Configuring or replacing kube-dns to suit my needs is the last thing to try.
Is there a way to resolve ip address of custom hostnames in a Kubernetes cluster?
It is just an idea, but if kube2sky could read some entries of a configmap and use them as dns records, it could be great.
e.g. repl1.mongo.local: 192.168.10.100.
EDIT: I referenced this question from https://github.com/kubernetes/kubernetes/issues/12337
| There are 2 possible solutions for this problem now:
Pod-wise (Adding the changes to every pod needed to resolve these domains)
cluster-wise (Adding the changes to a central place which all pods have access to, Which is in our case is the DNS)
Let's begin with the pod-wise solution:
As of Kubernetes 1.7, it's now possible to add entries to a Pod's /etc/hosts directly using .spec.hostAliases
For example: to resolve foo.local, bar.local to 127.0.0.1 and foo.remote,
bar.remote to 10.1.2.3, you can configure HostAliases for a Pod under
.spec.hostAliases:
apiVersion: v1
kind: Pod
metadata:
name: hostaliases-pod
spec:
restartPolicy: Never
hostAliases:
- ip: "127.0.0.1"
hostnames:
- "foo.local"
- "bar.local"
- ip: "10.1.2.3"
hostnames:
- "foo.remote"
- "bar.remote"
containers:
- name: cat-hosts
image: busybox
command:
- cat
args:
- "/etc/hosts"
The Cluster-wise solution:
As of Kubernetes v1.12, CoreDNS is the recommended DNS Server, replacing kube-dns. If your cluster originally used kube-dns, you may still have kube-dns deployed rather than CoreDNS. I'm going to assume that you're using CoreDNS as your K8S DNS.
In CoreDNS it's possible to Add an arbitrary entries inside the cluster domain and that way all pods will resolve this entries directly from the DNS without the need to change each and every /etc/hosts file in every pod.
First:
Let's change the coredns ConfigMap and add the required changes:
kubectl edit cm coredns -n kube-system
apiVersion: v1
kind: ConfigMap
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
hosts /etc/coredns/customdomains.db example.org {
fallthrough
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . "/etc/resolv.conf"
cache 30
loop
reload
loadbalance
}
customdomains.db: |
10.10.1.1 mongo-en-1.example.org
10.10.1.2 mongo-en-2.example.org
10.10.1.3 mongo-en-3.example.org
10.10.1.4 mongo-en-4.example.org
Basically we added two things:
The hosts plugin before the kubernetes plugin, using the fallthrough option of the hosts plugin to satisfy our case.
To shed some more light on the fallthrough option: any given backend is usually the final word for its zone - it either returns a result, or it returns NXDOMAIN for the query. However, occasionally this is not the desired behavior, so some of the plugins support a fallthrough option.
When fallthrough is enabled, instead of returning NXDOMAIN when a record is not found, the plugin will pass the request down the chain. A backend further down the chain then has the opportunity to handle the request, and that backend in our case is kubernetes.
We added a new file to the ConfigMap (customdomains.db) and added our custom domains (mongo-en-*.example.org) in there.
The last thing is to remember to add the customdomains.db file to the config-volume in the CoreDNS pod template:
kubectl edit -n kube-system deployment coredns
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
- key: customdomains.db
path: customdomains.db
and finally to make kubernetes reload CoreDNS (each pod running):
$ kubectl rollout restart -n kube-system deployment/coredns
| Kubernetes | 37,166,822 | 37 |
I currently have an Ingress configured on GKE (k8s 1.2) to forward requests towards my application's pods. I have a request which can take a long time (30 seconds) and then time out in my application (504). I observe that in this case the response I receive is not my own 504 but a 502 from what looks like the Google load balancer, after 60 seconds.
I have played around with different status codes and durations; exactly after 30 seconds I start seeing this weird behaviour regardless of the status code emitted.
Does anybody have a clue how I can fix this? Is there a way to reconfigure this behaviour?
| Beginning with 1.11.3-gke.18, it is possible to configure timeout settings in kubernetes directly.
First add a backendConfig:
# For GKE < 1.16.8-gke.3 use "cloud.google.com/v1beta1" below
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: my-bsc-backendconfig
spec:
timeoutSec: 40
Then add an annotation in Service to use this backendConfig:
apiVersion: v1
kind: Service
metadata:
name: my-bsc-service
labels:
purpose: bsc-config-demo
annotations:
# For GKE < 1.16.8-gke.3 use "beta.cloud.google.com/backend-config" below
cloud.google.com/backend-config: '{"default": "my-bsc-backendconfig"}'
spec:
type: NodePort
selector:
purpose: bsc-config-demo
ports:
- port: 80
protocol: TCP
targetPort: 8080
And voila, your ingress load balancer now has a timeout of 40 seconds instead of the default 30 seconds.
See https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service#creating_a_backendconfig
| Kubernetes | 36,200,528 | 37 |
To exec into a container in a pod, I use the following two commands (note the template flag in the first command trims the output to print just the name of the pods):
$ kubectl get pods --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'
app-api-6421cdf4fd-x9tbk
app-worker-432f86f54-fknxw
app-frontend-87dd65d49c-6b4mn
app-rabbit-413632c874-s2ptw
$ kubectl exec -it app-api-6421cdf4fd-x9tbk -- bash
It would be nice to exec into the container without having to discover the random guid at the end of the pod name every single time. How can I do this?
| You can exec into a pod using the deployment.
You can use the following command:
kubectl exec -it deploy/<deployment-name> -- bash
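For example, with the deployments from the question (the deployment name app-api is taken from there; add -n <namespace> if it isn't the current one):
kubectl exec -it deploy/app-api -- bash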
| Kubernetes | 63,626,842 | 36 |
I'm running kubernetes in azure.
I want to delete a specific deployment, with AZ AKS or kubectl.
The only info I've found is how to delete pods, but this is not what I'm looking for, since pods will regenerate once deleted.
I know I can just go to the UI and delete the deployment, but I want to do it with az aks or kubectl.
I've run
kubectl get all -A
Then I copy the name of the deployment that I want to delete and run:
kubectl delete deployment zr-binanceloggercandlestick-btcusdt-2hour
kubectl delete deployment deployment.apps/zr-binanceloggercandlestick-btcusdt-12hour
but with no success; I get these errors:
Error from server (NotFound): deployments.extensions "zr-binanceloggercandlestick-btcusdt-2hour" not found
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource/<resource_name>' instead of 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource resource/<resource_name>'
| Find out all deployments across all namespaces
kubectl get deploy -A
Then delete a deployment with deploymentname from namespace. deploymentname can be found from above command.
kubectl delete deploy deploymentname -n namespacename
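For example, if the first command shows the deployment from the question in a namespace called trading (a placeholder — use whatever namespace is actually listed), the delete would be:
kubectl delete deploy zr-binanceloggercandlestick-btcusdt-2hour -n trading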
Docs on how to configure kubectl to connect to AKS.
| Kubernetes | 61,058,684 | 36 |
I am using a Windows 10 Pro machine.
When I run netstat, it is showing kubernetes:port as foreign address in active connections.
What does this mean? I have checked and there is no kubernetes cluster running on my machine.
How do I close these connections?
Minikube status:
$ minikube status
host:
kubelet:
apiserver:
kubectl:
| That happens because of the way netstat renders output. It has nothing to do with actual Kubernetes.
I have Docker Desktop for Windows and it adds this to the hosts file:
# Added by Docker Desktop
192.168.43.196 host.docker.internal
192.168.43.196 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
There is a record which maps 127.0.0.1 to kubernetes.docker.internal. When netstat renders its output, it resolves the foreign address: it looks at the hosts file and sees this record. It says kubernetes, and that is what you see in the console. You can try to change it to
127.0.0.1 tomato.docker.internal
With this, netstat will print:
Proto Local Address Foreign Address State
TCP 127.0.0.1:6940 tomato:6941 ESTABLISHED
TCP 127.0.0.1:6941 tomato:6940 ESTABLISHED
TCP 127.0.0.1:8080 tomato:40347 ESTABLISHED
TCP 127.0.0.1:8080 tomato:40348 ESTABLISHED
TCP 127.0.0.1:8080 tomato:40349 ESTABLISHED
So what actually happens is there are connections from localhost to localhost (netstat -b will show apps that create them). Nothing to do with Kubernetes.
| Kubernetes | 59,918,618 | 36 |
By default, according to the k8s documentation, Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster-domain.example.
Is there a command to retrieve the full name of a service?
| You can do a DNS query from any pod and you would get the FQDN.
# nslookup api-server
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: api-server.default.svc.cluster.local
Address: 10.104.225.18
root@api-server-6ff8c8b9c-6pgkb:/#
cluster-domain.example is just an example in the documentation. cluster.local is the default cluster domain assigned. So the FQDN of any service by default would be <service-name>.<namespace>.svc.cluster.local.
You don't need to use the FQDN to access services - for services in the same namespace, just the service name is enough. For services in other namespaces, <service-name>.<namespace> is enough, as Kubernetes automatically sets up the DNS search domains.
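For example, from a pod in a different namespace, either of these (assuming the api-server service above lives in the default namespace) should resolve to the same address:
curl http://api-server.default
curl http://api-server.default.svc.cluster.local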
| Kubernetes | 59,559,438 | 36 |
How do I determine which apiGroup any given resource belongs in?
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: default
name: thing
rules:
- apiGroups: ["<wtf goes here>"]
resources: ["deployments"]
verbs: ["get", "list"]
resourceNames: []
| To get API resources - supported by your Kubernetes cluster:
kubectl api-resources -o wide
example:
NAME SHORTNAMES APIGROUP NAMESPACED KIND VERBS
deployments deploy apps true Deployment [create delete deletecollection get list patch update watch]
deployments deploy extensions true Deployment [create delete deletecollection get list patch update watch]
To get API versions - supported by your Kubernetes cluster:
kubectl api-versions
You can verify, for example, the deployment resource:
kubectl explain deploy
KIND: Deployment
VERSION: extensions/v1beta1
DESCRIPTION:
DEPRECATED - This group version of Deployment is deprecated by
apps/v1beta2/Deployment.
Furthermore you can investigate with api-version:
kubectl explain deploy --api-version apps/v1
In short, you can specify your apiGroups like:
apiGroups: ["extensions", "apps"]
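Plugged back into the Role from the question, a minimal sketch would be (on current clusters deployments live in the apps group; "extensions" is only needed for older API versions, and the non-deprecated rbac.authorization.k8s.io/v1 is used here):
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: thing
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments"]
  verbs: ["get", "list"]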
You can also configure those settings for your cluster (for example, to test that it will work with the next 1.16 release) by passing options into --runtime-config on the kube-apiserver.
Additional resources:
API Resources
Kubernetes Deprecation Policy
For notable feature updates in a specific release, follow notes like:
Continued deprecation of extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs; these extensions will be retired in 1.16!
| Kubernetes | 57,821,065 | 36 |
I want to get the value of a specific field of a secret in a shell script.
From the kubectl get secret documentation, it seems the standard way to get a secret returns the whole thing, in a specified format, with the values base64 encoded.
So, to get the bar field of the foo secret, output as an unencoded string, I'm doing this:
kubectl get secret foo -o json | jq -r ".data.bar" | base64 --decode
That is
get the whole foo secret as JSON
pipe to jq to read the bar field from the JSON
decode the value using base64
Is there a way to do this only using kubectl?
Or an elegant way in POSIX-compliant shell that doesn't rely on any dependencies like jq?
| Try this
kubectl get secret foo --template={{.data.bar}} | base64 --decode
No need for jq.
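If you prefer jsonpath over Go templates, this works the same way, still without external tools beyond base64:
kubectl get secret foo -o jsonpath='{.data.bar}' | base64 --decode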
| Kubernetes | 57,379,081 | 36 |
I set up a new single-node k8s cluster, and the node is tainted. But the PersistentVolume cannot be created successfully when I am trying to create a simple PostgreSQL.
There is some detailed information below.
The StorageClass is copied from the official page: https://kubernetes.io/docs/concepts/storage/storage-classes/#local
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The StatefulSet is:
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: postgres
spec:
serviceName: postgres
replicas: 1
...
volumeClaimTemplates:
- metadata:
name: postgres-data
spec:
storageClassName: local-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
About the running StorageClass:
$ kubectl describe storageclasses.storage.k8s.io
Name: local-storage
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
About the running PersistentVolumeClaim:
$ kubectl describe pvc
Name: postgres-data-postgres-0
Namespace: default
StorageClass: local-storage
Status: Pending
Volume:
Labels: app=postgres
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer <invalid> (x2 over <invalid>) persistentvolume-controller waiting for first consumer to be created before binding
K8s versions:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
| The app is waiting for the Pod, while the Pod is waiting for a PersistentVolume via a PersistentVolumeClaim.
However, the PersistentVolume has to be prepared by the user before use.
My previous YAMLs lacked a PersistentVolume like this:
kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-data
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 1Gi
local:
path: /data/postgres
persistentVolumeReclaimPolicy: Retain
accessModes:
- ReadWriteOnce
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: app
operator: In
values:
- postgres
The local path /data/postgres should be prepared before use.
Kubernetes will not create it automatically.
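A minimal preparation step on the node(s) matched by the nodeAffinity above would be something like this (a sketch — adjust ownership and permissions to whatever your PostgreSQL image expects):
sudo mkdir -p /data/postgres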
| Kubernetes | 55,044,486 | 36 |
I've found documentation about how to configure the NGINX ingress controller using a ConfigMap: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
Unfortunately I have no idea, and couldn't find it anywhere, how to load that ConfigMap from my Ingress controller.
My ingress controller:
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
My config map:
kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-configmap
data:
proxy-read-timeout: "86400s"
client-max-body-size: "2g"
use-http2: "false"
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
tls:
- hosts:
- my.endpoint.net
secretName: ingress-tls
rules:
- host: my.endpoint.net
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 443
- path: /api
backend:
serviceName: api
servicePort: 443
How do I make my Ingress load the configuration from the ConfigMap?
| I've managed to display what YAML gets executed by Helm by using the --dry-run --debug options at the end of the helm install command. Then I noticed that the controller is executed with: --configmap={namespace-where-the-nginx-ingress-is-deployed}/{name-of-the-helm-chart}-nginx-ingress-controller.
In order to load your ConfigMap you need to override it with your own (check out the namespace).
kind: ConfigMap
apiVersion: v1
metadata:
name: {name-of-the-helm-chart}-nginx-ingress-controller
namespace: {namespace-where-the-nginx-ingress-is-deployed}
data:
proxy-read-timeout: "86400"
proxy-body-size: "2g"
use-http2: "false"
The list of config properties can be found here.
| Kubernetes | 54,884,735 | 36 |
I understand that {{.Release.namespace}} should render the namespace into which the application is being installed by Helm. In that case, the helm template command will render it as an empty string (since it doesn't yet know the release namespace).
However, what surprises me is that the helm upgrade --install command (I haven't tried other commands such as helm install) also renders it empty in some cases.
Here is the example of my helm chart template:
apiVersion: v1
kind: Service
metadata:
name: {{.Values.app.name}}-{{.Values.app.track}}-internal
namespace: {{.Release.namespace}}
annotations:
testAnnotate: "{{.Release.namespace}}"
spec:
ports:
- protocol: TCP
port: 80
targetPort: 8080
selector:
app: {{.Values.app.name}}
environment: {{.Values.app.env}}
track: {{.Values.app.track}}
type: ClusterIP
After invoking helm upgrade --install on that chart template (and installing it successfully), I then try to see the output of my resource:
> kubectl get -o yaml svc java-maven-app-stable-internal -n data-devops
apiVersion: v1
kind: Service
metadata:
annotations:
testAnnotate: ""
creationTimestamp: 2018-08-09T06:56:41Z
name: java-maven-app-stable-internal
namespace: data-devops
resourceVersion: "62906341"
selfLink: /api/v1/namespaces/data-devops/services/java-maven-app-stable-internal
uid: 5e888e6a-9ba1-11e8-912b-42010a9400fa
spec:
clusterIP: 10.32.76.208
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: java-maven-app
environment: stg
track: stable
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
As you can see, I put {{.Release.namespace}} in 2 places:
in the metadata.namespace field
in the metadata.annotations.testAnnotate field.
But it only renders the correct namespace in the metadata.namespace field. Any idea why?
| The generated value .Release.Namespace is case-sensitive. The letter N in "namespace" should be capitalized.
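With that fix, the relevant lines of the Service template from the question become:
  namespace: {{.Release.Namespace}}
  annotations:
    testAnnotate: "{{.Release.Namespace}}"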
| Kubernetes | 51,760,772 | 36 |
In Kubernetes object metadata, there are the concepts of resourceVersion and generation. I understand the notion of resourceVersion: it is an optimistic concurrency control mechanism—it will change with every update. What, then, is generation for?
| resourceVersion changes on every write, and is used for optimistic concurrency control
in some objects, generation is incremented by the server as part of persisting writes affecting the spec of an object.
some objects' status fields have an observedGeneration subfield for controllers to persist the generation that was last acted on.
| Kubernetes | 47,100,389 | 36 |
I am failing to pull from my private Docker Hub repository into my local Kubernetes setup running on Vagrant:
Container "hellonode" in pod "hellonode-n1hox" is waiting to start: image can't be
pulled
Failed to pull image "username/hellonode": Error: image username/hellonode:latest not found
I have set up Kubernetes locally via Vagrant as described here and created a secret named "dockerhub" with kubectl create secret docker-registry dockerhub --docker-server=https://registry.hub.docker.com/ --docker-username=username --docker-password=... --docker-email=... which I supplied as the image pull secret.
I am running Kubernetes 1.2.0.
| To pull a private DockerHub hosted image from a Kubernetes YAML:
Run these commands:
DOCKER_REGISTRY_SERVER=docker.io
DOCKER_USER=Type your dockerhub username, same as when you `docker login`
DOCKER_EMAIL=Type your dockerhub email, same as when you `docker login`
DOCKER_PASSWORD=Type your dockerhub pw, same as when you `docker login`
kubectl create secret docker-registry myregistrykey \
--docker-server=$DOCKER_REGISTRY_SERVER \
--docker-username=$DOCKER_USER \
--docker-password=$DOCKER_PASSWORD \
--docker-email=$DOCKER_EMAIL
If your username on DockerHub is DOCKER_USER, and your private repo is called PRIVATE_REPO_NAME, and the image you want to pull is tagged as latest, create this example.yaml file:
apiVersion: v1
kind: Pod
metadata:
name: whatever
spec:
containers:
- name: whatever
image: DOCKER_USER/PRIVATE_REPO_NAME:latest
imagePullPolicy: Always
command: [ "echo", "SUCCESS" ]
imagePullSecrets:
- name: myregistrykey
Then run:
kubectl create -f example.yaml
| Kubernetes | 36,232,906 | 36 |
We use Kubernetes Jobs for a lot of batch computing here and I'd like to instrument each Job with a monitoring sidecar to update a centralized tracking system with the progress of a job.
The only problem is, I can't figure out what the semantics are (or are supposed to be) of multiple containers in a job.
I gave it a shot anyways (with an alpine sidecar that printed "hello" every 1 sec) and after my main task completed, the Jobs are considered Successful and the kubectl get pods in Kubernetes 1.2.0 shows:
NAME READY STATUS RESTARTS AGE
job-69541b2b2c0189ba82529830fe6064bd-ddt2b 1/2 Completed 0 4m
job-c53e78aee371403fe5d479ef69485a3d-4qtli 1/2 Completed 0 4m
job-df9a48b2fc89c75d50b298a43ca2c8d3-9r0te 1/2 Completed 0 4m
job-e98fb7df5e78fc3ccd5add85f8825471-eghtw 1/2 Completed 0 4m
And if I describe one of those pods
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 24 Mar 2016 11:59:19 -0700
Finished: Thu, 24 Mar 2016 11:59:21 -0700
Then GETing the yaml of the job shows information per container:
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2016-03-24T18:59:29Z
message: 'containers with unready status: [pod-template]'
reason: ContainersNotReady
status: "False"
type: Ready
containerStatuses:
- containerID: docker://333709ca66462b0e41f42f297fa36261aa81fc099741e425b7192fa7ef733937
image: luigi-reduce:0.2
imageID: docker://sha256:5a5e15390ef8e89a450dac7f85a9821fb86a33b1b7daeab9f116be252424db70
lastState: {}
name: pod-template
ready: false
restartCount: 0
state:
terminated:
containerID: docker://333709ca66462b0e41f42f297fa36261aa81fc099741e425b7192fa7ef733937
exitCode: 0
finishedAt: 2016-03-24T18:59:30Z
reason: Completed
startedAt: 2016-03-24T18:59:29Z
- containerID: docker://3d2b51436e435e0b887af92c420d175fafbeb8441753e378eb77d009a38b7e1e
image: alpine
imageID: docker://sha256:70c557e50ed630deed07cbb0dc4d28aa0f2a485cf7af124cc48f06bce83f784b
lastState: {}
name: sidecar
ready: true
restartCount: 0
state:
running:
startedAt: 2016-03-24T18:59:31Z
hostIP: 10.2.113.74
phase: Running
So it looks like my sidecar would need to watch the main process (how?) and exit gracefully once it detects it is alone in the pod? If this is correct, then are there best practices/patterns for this (should the sidecar exit with the return code of the main container? but how does it get that?)?
** Update **
After further experimentation, I've also discovered the following:
If there are two containers in a pod, then it is not considered successful until all containers in the pod return with exit code 0.
Additionally, if restartPolicy: OnFailure is set on the pod spec, then any container in the pod that terminates with non-zero exit code will be restarted in the same pod (this could be useful for a monitoring sidecar to count the number of retries and delete the job after a certain number (to workaround no max-retries currently available in Kubernetes jobs)).
| You can use the downward API to figure out your own pod name from within the sidecar, and then retrieve your own pod from the API server to look up the exit status of the main container. Let me know how this goes.
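A minimal sketch of the downward-API part — exposing the pod's own name to the sidecar container as an environment variable (the name POD_NAME is just a convention):
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
The sidecar can then ask the API server for its own pod (e.g. kubectl get pod $POD_NAME -o json, or a client library) and watch the containerStatuses entry of the main container to decide when to exit, ideally propagating that container's exit code.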
| Kubernetes | 36,208,211 | 36 |
Is there any way to configure nodeSelector at the namespace level?
I want to run a workload only on certain nodes for this namespace.
| To achieve this you can use PodNodeSelector admission controller.
First, you need to enable it in your kubernetes-apiserver:
Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
find --enable-admission-plugins=
add PodNodeSelector parameter
Now, you can specify scheduler.alpha.kubernetes.io/node-selector option in annotations for your namespace, example:
apiVersion: v1
kind: Namespace
metadata:
name: your-namespace
annotations:
scheduler.alpha.kubernetes.io/node-selector: env=test
spec: {}
status: {}
After these steps, all the pods created in this namespace will have this section automatically added:
nodeSelector:
  env: test
More information about the PodNodeSelector you can find in the official Kubernetes documentation:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podnodeselector
kubeadm users
If you deployed your cluster using kubeadm and if you want to make this configuration persistent, you have to update your kubeadm config file:
kubectl edit cm -n kube-system kubeadm-config
specify extraArgs with custom values under apiServer section:
apiServer:
extraArgs:
enable-admission-plugins: NodeRestriction,PodNodeSelector
then update your kube-apiserver static manifest on all control-plane nodes:
# Kubernetes 1.22 and forward:
kubectl get configmap -n kube-system kubeadm-config -o=jsonpath="{.data}" > kubeadm-config.yaml
# Before Kubernetes 1.22:
# "kubeadmin config view" was deprecated in 1.19 and removed in 1.22
# Reference: https://github.com/kubernetes/kubeadm/issues/2203
kubeadm config view > kubeadm-config.yaml
# Update the manifest with the file generated by any of the above lines
kubeadm init phase control-plane apiserver --config kubeadm-config.yaml
kubespray users
You can just use kube_apiserver_enable_admission_plugins variable for your api-server configuration variables:
kube_apiserver_enable_admission_plugins:
- PodNodeSelector
| Kubernetes | 52,487,333 | 35 |
I can run this command to create a docker registry secret for a kubernetes cluster:
kubectl create secret docker-registry regsecret \
--docker-server=docker.example.com \
--docker-username=kube \
--docker-password=PW_STRING \
[email protected] \
--namespace mynamespace
I would like to create the same secret from a YAML file. Does anyone know how this can be set in a YAML file?
I need this as a YAML file so that it can be used as a Helm template, which allows for a Helm install command such as this (simplified) one:
helm install ... --set docker.user=peter,docker.pw=foobar,docker.email=...
| You can write that yaml by yourself, but it will be faster to create it in 2 steps using kubectl:
Generate a 'yaml' file. You can use the same command but in dry-run mode and output mode yaml.
Here is an example of a command that will save a secret into a 'docker-secret.yaml' file for kubectl version < 1.18 (check the version by kubectl version --short|grep Client):
kubectl create secret docker-registry --dry-run=true $secret_name \
--docker-server=<DOCKER_REGISTRY_SERVER> \
--docker-username=<DOCKER_USER> \
--docker-password=<DOCKER_PASSWORD> \
--docker-email=<DOCKER_EMAIL> -o yaml > docker-secret.yaml
For kubectl version >= 1.18:
kubectl create secret docker-registry --dry-run=client $secret_name \
--docker-server=<DOCKER_REGISTRY_SERVER> \
--docker-username=<DOCKER_USER> \
--docker-password=<DOCKER_PASSWORD> \
--docker-email=<DOCKER_EMAIL> -o yaml > docker-secret.yaml
You can apply the file like any other Kubernetes 'yaml':
kubectl apply -f docker-secret.yaml
UPD, as a question has been updated.
If you are using Helm, here is an official documentation about how to create an ImagePullSecret.
From a doc:
First, assume that the credentials are defined in the values.yaml file like so:
imageCredentials:
registry: quay.io
username: someone
password: sillyness
We then define our helper template as follows:
{{- define "imagePullSecret" }}
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" .Values.imageCredentials.registry (printf "%s:%s" .Values.imageCredentials.username .Values.imageCredentials.password | b64enc) | b64enc }}
{{- end }}
Finally, we use the helper template in a larger template to create the Secret manifest:
apiVersion: v1
kind: Secret
metadata:
name: myregistrykey
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{ template "imagePullSecret" . }}
| Kubernetes | 49,629,241 | 35 |
Where are the "types" of secrets that you can create in Kubernetes documented?
Looking at different samples I have found "generic" and "docker-registry", but I have not been able to find a pointer to documentation where the different types of secrets are documented.
I always end up in the k8s docs:
https://kubernetes.io/docs/concepts/configuration/secret/
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
Thank you.
| Here is a list of 'types' from the source code:
SecretTypeOpaque SecretType = "Opaque"
[...]
SecretTypeServiceAccountToken SecretType = "kubernetes.io/service-account-token"
[...]
SecretTypeDockercfg SecretType = "kubernetes.io/dockercfg"
[...]
SecretTypeDockerConfigJson SecretType = "kubernetes.io/dockerconfigjson"
[...]
SecretTypeBasicAuth SecretType = "kubernetes.io/basic-auth"
[...]
SecretTypeSSHAuth SecretType = "kubernetes.io/ssh-auth"
[...]
SecretTypeTLS SecretType = "kubernetes.io/tls"
[...]
SecretTypeBootstrapToken SecretType = "bootstrap.kubernetes.io/token"
| Kubernetes | 49,614,439 | 35 |
I am new to all things Kubernetes so still have much to learn.
Have created a two node Kubernetes cluster and both nodes (master and worker) are ready to do work which is good:
[monkey@k8s-dp1 nginx-test]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-dp1 Ready master 2h v1.9.1
k8s-dp2 Ready <none> 2h v1.9.1
Also, all Kubernetes Pods look okay:
[monkey@k8s-dp1 nginx-test]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-dp1 1/1 Running 0 2h
kube-system kube-apiserver-k8s-dp1 1/1 Running 0 2h
kube-system kube-controller-manager-k8s-dp1 1/1 Running 0 2h
kube-system kube-dns-86cc76f8d-9jh2w 3/3 Running 0 2h
kube-system kube-proxy-65mtx 1/1 Running 1 2h
kube-system kube-proxy-wkkdm 1/1 Running 0 2h
kube-system kube-scheduler-k8s-dp1 1/1 Running 0 2h
kube-system weave-net-6sbbn 2/2 Running 0 2h
kube-system weave-net-hdv9b 2/2 Running 3 2h
However, if I try to create a new deployment in the cluster, the deployment gets created but its pod fails to go into the appropriate RUNNING state. e.g.
[monkey@k8s-dp1 nginx-test]# kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml
deployment "nginx-deployment" created
[monkey@k8s-dp1 nginx-test]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-deployment-569477d6d8-f42pz 0/1 ContainerCreating 0 5s
default nginx-deployment-569477d6d8-spjqk 0/1 ContainerCreating 0 5s
kube-system etcd-k8s-dp1 1/1 Running 0 3h
kube-system kube-apiserver-k8s-dp1 1/1 Running 0 3h
kube-system kube-controller-manager-k8s-dp1 1/1 Running 0 3h
kube-system kube-dns-86cc76f8d-9jh2w 3/3 Running 0 3h
kube-system kube-proxy-65mtx 1/1 Running 1 2h
kube-system kube-proxy-wkkdm 1/1 Running 0 3h
kube-system kube-scheduler-k8s-dp1 1/1 Running 0 3h
kube-system weave-net-6sbbn 2/2 Running 0 2h
kube-system weave-net-hdv9b 2/2 Running 3 2h
I am not sure how to figure out what the problem is but if I for example do a kubectl get ev, I can see the following suspect event:
<invalid> <invalid> 1 nginx-deployment-569477d6d8-f42pz.15087c66386edf5d Pod
Warning FailedCreatePodSandBox kubelet, k8s-dp2 Failed create pod sandbox.
But I don't know where to go from here. I can also see that the nginx docker image itself never appears in docker images.
How do I find out more about the problem? Am I missing something fundamental in the kubernetes setup?
--- NEW INFO ---
For background info in case it helps...
Kubernetes nodes are running on CentOS 7 VMs hosted on Windows 10 hyper-v.
--- NEW INFO ---
Running kubectl describe pods shows the following Warning:
Warning NetworkNotReady 1m kubelet, k8s-dp2 network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
--- NEW INFO ---
I switched off the Hyper-V VMs running Kubernetes for the night after my day-job hours were over, and on my return to the office this morning I powered up the Kubernetes VMs once again to carry on. For about 15 minutes, the command:
kubectl get pods --all-namespaces was still showing ContainerCreating for those nginx pods, the same as yesterday, but right now the command shows all pods as Running, including the nginx pods... i.e. the problem solved itself after a full reboot of both master and worker node VMs.
I now did another full reboot again and all pods are showing as Running which is good.
| Use kubectl describe pod <name> to see more info
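For example, with one of the pending pods from the question — the Events section at the bottom of the output usually explains why it is stuck in ContainerCreating:
kubectl describe pod nginx-deployment-569477d6d8-f42pz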
| Kubernetes | 48,190,928 | 35 |
Using Helm templates, I'm trying to generate a list of server names based on a number in values.yaml. The dot for this template is set to the number (its a float64).
{{- define "zkservers" -}}
{{- $zkservers := list -}}
{{- range int . | until -}}
{{- $zkservers := print "zk-" . ".zookeeper" | append $zkservers -}}
{{- end -}}
{{- join "," $zkservers -}}
{{- end -}}
For an input of, say, 3 I'm expecting this to produce:
zk-0.zookeeper,zk-1.zookeeper,zk-2.zookeeper
It produces nothing.
I understand that the line within the range block is a no-op since the variable $zkservers is a new variable each time the loop iterates. It is not the same variable as the $zkservers in the outer scope.
I hope the intention of what I want to do is clear. I am at a loss as to how to do it.
Anyone know how to do this with Helm templates?
| For those from 2020+ it now can be achieved as simple as that:
{{ join "," .Values.some.array }}
It produces the correct output:
value: "test1,test2,test3"
At least it works with (more or less) recent versions of helm-cli:
Client: &version.Version{SemVer:"v2.16.2", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
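If, as in the original question, the zk-N.zookeeper entries have to be generated from a count rather than taken from an existing array, a minimal sketch — assuming a Helm version whose templates support variable reassignment with = (roughly Helm 2.12+ / Helm 3) — could be:
{{- define "zkservers" -}}
{{- $zkservers := list -}}
{{- range until (int .) -}}
{{- $zkservers = append $zkservers (printf "zk-%d.zookeeper" .) -}}
{{- end -}}
{{- join "," $zkservers -}}
{{- end -}}
Calling {{ include "zkservers" 3 }} should then render zk-0.zookeeper,zk-1.zookeeper,zk-2.zookeeper.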
| Kubernetes | 47,668,793 | 35 |
I started minikube with k8s version 1.5.2 and I would like to downgrade my kubectl so that it is also 1.5.2. Currently when I run kubectl version I get:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T19:32:12Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7", Compiler:"gc", Platform:"linux/amd64"}
I would like to use kubectl to fetch PetSets but in later versions this was updated to StatefulSets so I cannot use the commands with my current kubectl version
kubectl get petsets
the server doesn't have a resource type "petsets"
Thanks!
| You can just download the previous version binary and replace the one you have now.
Linux:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
macOS:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Windows:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/windows/amd64/kubectl.exe
And add it to PATH.
If not follow instructions for other Operating Systems here: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl
| Kubernetes | 46,610,180 | 35 |
Traefik is a reverse HTTP proxy with several supported backends, Kubernetes included. How does Istio compare?
| It's something of an apples-to-oranges comparison.
Edge proxies like Traefik or Nginx are best compared to Envoy - the proxy that Istio leverages. An Envoy proxy is installed automatically by Istio adjacent to every pod.
Istio provides several higher level capabilities beyond Envoy, including routing, ACLing and service discovery and access policy across a set of services. In effect, it stitches a set of Envoy enabled services together. This design pattern is often called a service mesh.
Istio is also currently limited to Kubernetes deployments in a single cluster, though work is in place to remove these restrictions in time.
| Kubernetes | 44,212,356 | 35 |
I am getting a couple of errors with Helm that I can not find explanations for elsewhere. The two errors are below.
Error: no available release name found
Error: the server does not allow access to the requested resource (get configmaps)
Further details of the two errors are in the code block further below.
I have installed a Kubernetes cluster on Ubuntu 16.04. I have a Master (K8SMST01) and two nodes (K8SN01 & K8SN02).
This was created using kubeadm using Weave network for 1.6+.
Everything seems to run perfectly well as far as Deployments, Services, Pods, etc... DNS seems to work fine, meaning pods can access services using the DNS name (myservicename.default).
Using "helm create" and "helm search" work, but interacting with the tiller deployment do not seem to work. Tiller is installed and running according to the Helm install documentation.
root@K8SMST01:/home/blah/charts# helm version
Client: &version.Version{SemVer:"v2.3.0",
GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
root@K8SMST01:/home/blah/charts# helm install ./mychart
Error: no available release name found
root@K8SMST01:/home/blah/charts# helm ls
Error: the server does not allow access to the requested resource (get configmaps)
Here are the running pods:
root@K8SMST01:/home/blah/charts# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-k8smst01 1/1 Running 4 1d 10.139.75.19 k8smst01
kube-apiserver-k8smst01 1/1 Running 3 19h 10.139.75.19 k8smst01
kube-controller-manager-k8smst01 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-dns-3913472980-dm661 3/3 Running 6 1d 10.32.0.2 k8smst01
kube-proxy-56nzd 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-proxy-7hflb 1/1 Running 1 1d 10.139.75.20 k8sn01
kube-proxy-nbc4c 1/1 Running 1 1d 10.139.75.21 k8sn02
kube-scheduler-k8smst01 1/1 Running 3 1d 10.139.75.19 k8smst01
tiller-deploy-1172528075-x3d82 1/1 Running 0 22m 10.44.0.3 k8sn01
weave-net-45335 2/2 Running 2 1d 10.139.75.21 k8sn02
weave-net-7j45p 2/2 Running 2 1d 10.139.75.20 k8sn01
weave-net-h279l 2/2 Running 5 1d 10.139.75.19 k8smst01
| The solution given by kujenga from the GitHub issue worked without any other modifications:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
| Kubernetes | 43,499,971 | 35 |
Currently, I'm using Docker Desktop with WSL2 integration. I found that Docker Desktop had automatically created a cluster for me. This means I don't have to install and use Minikube or Kind to create a cluster.
The problem is: how can I enable an Ingress controller if I use the "built-in" cluster from Docker Desktop?
I tried to create an Ingress to check whether this works or not, and as I guessed, it didn't work.
The YAML file I created is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp
spec:
minReadySeconds: 30
selector:
matchLabels:
app: webapp
replicas: 1
template:
metadata:
labels:
app: webapp
spec:
containers:
- name: webapp
image: nodejs-helloworld:v1
---
apiVersion: v1
kind: Service
metadata:
name: webapp-service
spec:
selector:
app: webapp
ports:
- name: http
port: 3000
nodePort: 30090 # only for NotPort > 30,000
type: NodePort #ClusterIP inside cluster
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webapp-ingress
spec:
defaultBackend:
service:
name: webapp-service
port:
number: 3000
rules:
- host: ingress.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: webapp-service
port:
number: 3000
I tried to access ingress.local/ but it was not successful. (I added ingress.local pointing to 127.0.0.1 in the hosts file, and the web app worked fine at kubernetes.docker.internal:30090.)
Could you please help me find the root cause?
Thank you.
| Finally I found the way to fix it. I had to deploy the NGINX ingress controller with this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
(This follows the instructions at https://kubernetes.github.io/ingress-nginx/deploy/#docker-for-mac. It works just fine for Docker for Windows.)
Now I can access http://ingress.local successfully.
| Kubernetes | 65,193,758 | 34 |
I'm trying to use NodePort with kind but somehow it doesn't want to work.
I've successfully deployed the following cluster:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 80
hostPort: 30000
listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
protocol: tcp # Optional, defaults to tcp
- role: worker
and then a very simple deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hostname-deployment
labels:
app: hostname
spec:
replicas: 2
selector:
matchLabels:
app: hostname
template:
metadata:
labels:
app: hostname
spec:
containers:
- name: hostname
image: hostname:0.1
ports:
- containerPort: 80
and a service:
apiVersion: v1
kind: Service
metadata:
name: hostname-service
spec:
type: NodePort
selector:
app: hostname
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30000
and I can connect to the service via e.g.
(in one terminal)
k port-forward service/hostname-service 8080:80
Forwarding from 127.0.0.1:8080 -> 80
(another one)
curl localhost:8080
hostname: hostname-deployment-75c9fd6584-ddc59 at Wed, 17 Jun 2020 15:38:33 UTC
But I cannot connect to the service via the exposed NodePort
curl -v localhost:30000
* Rebuilt URL to: localhost:30000/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 30000 (#0)
> GET / HTTP/1.1
> Host: localhost:30000
> User-Agent: curl/7.58.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* stopped the pause stream!
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
kubectl get all output:
NAME READY STATUS RESTARTS AGE
pod/hostname-deployment-75c9fd6584-ddc59 1/1 Running 0 34m
pod/hostname-deployment-75c9fd6584-tg8db 1/1 Running 0 34m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hostname-service NodePort 10.107.104.231 <none> 80:30000/TCP 34m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hostname-deployment 2/2 2 2 34m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hostname-deployment-75c9fd6584 2 2 2 34m
| The kind cluster configuration needs to look like the one below:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30000
hostPort: 30000
listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
protocol: tcp # Optional, defaults to tcp
- role: worker
This file is then passed to your creation command as kind create cluster --config=config.yaml (according to docs).
| Kubernetes | 62,432,961 | 34 |
I am running a jenkins pipeline with the following command:
kubectl exec -it kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s@1585031458
which is running fine on the terminal of the machine the pipeline is running on, but on the actual pipeline I get the following error: "Unable to use a TTY - input is not a terminal or the right kind of file"
Any tips on how to go about resolving this?
| For windows git bash:
alias kubectl='winpty kubectl'
$ kubectl exec -it <container>
Or just use winpty before the desired command.
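In a CI job such as the Jenkins pipeline from the question there is no TTY at all, so another common fix is to drop kubectl's -t flag (keeping -i only if stdin is needed):
kubectl exec -i kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s@1585031458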
| Kubernetes | 60,826,194 | 34 |
I need some advice on an issue I am facing with k8s 1.14 and running GitLab pipelines on it. Many jobs are throwing up exit code 137 errors, and I found that it means that the container is being terminated abruptly.
Cluster information:
Kubernetes version: 1.14
Cloud being used: AWS EKS
Node: C5.4xLarge
After digging in, I found the below logs:
**kubelet: I0114 03:37:08.639450** 4721 image_gc_manager.go:300] [imageGCManager]: Disk usage on image filesystem is at 95% which is over the high threshold (85%). Trying to free 3022784921 bytes down to the low threshold (80%).
**kubelet: E0114 03:37:08.653132** 4721 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to garbage collect required amount of images. Wanted to free 3022784921 bytes, but freed 0 bytes
**kubelet: W0114 03:37:23.240990** 4721 eviction_manager.go:397] eviction manager: timed out waiting for pods runner-u4zrz1by-project-12123209-concurrent-4zz892_gitlab-managed-apps(d9331870-367e-11ea-b638-0673fa95f662) to be cleaned up
**kubelet: W0114 00:15:51.106881** 4781 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
**kubelet: I0114 00:15:51.106907** 4781 container_gc.go:85] attempting to delete unused containers
**kubelet: I0114 00:15:51.116286** 4781 image_gc_manager.go:317] attempting to delete unused images
**kubelet: I0114 00:15:51.130499** 4781 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
**kubelet: I0114 00:15:51.130648** 4781 eviction_manager.go:362] eviction manager: pods ranked for eviction:
1. runner-u4zrz1by-project-10310692-concurrent-1mqrmt_gitlab-managed-apps(d16238f0-3661-11ea-b638-0673fa95f662)
2. runner-u4zrz1by-project-10310692-concurrent-0hnnlm_gitlab-managed-apps(d1017c51-3661-11ea-b638-0673fa95f662)
3. runner-u4zrz1by-project-13074486-concurrent-0dlcxb_gitlab-managed-apps(63d78af9-3662-11ea-b638-0673fa95f662)
4. prometheus-deployment-66885d86f-6j9vt_prometheus(da2788bb-3651-11ea-b638-0673fa95f662)
5. nginx-ingress-controller-7dcc95dfbf-ld67q_ingress-nginx(6bf8d8e0-35ca-11ea-b638-0673fa95f662)
And then the pods get terminated resulting in the exit code 137s.
Can anyone help me understand the reason and a possible solution to overcome this?
Thank you :)
| Exit Code 137 does not necessarily mean OOMKilled. It indicates failure because the container received SIGKILL (some interrupt or the ‘oom-killer’ [OUT-OF-MEMORY]).
If the pod got OOMKilled, you will see a line like the one below when you describe the pod:
State: Terminated
Reason: OOMKilled
Edit on 2/2/2022
I see that you added **kubelet: I0114 03:37:08.639450** 4721 image_gc_manager.go:300] [imageGCManager]: Disk usage on image filesystem is at 95% which is over the high threshold (85%). Trying to free 3022784921 bytes down to the low threshold (80%). and must evict pod(s) to reclaim ephemeral-storage from the log. It usually happens when application pods are writing something to disk like log files. Admins can configure when (at what disk usage %) to do eviction.
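As a sketch of where that knob lives — these thresholds are part of the kubelet configuration (KubeletConfiguration); the values below are purely illustrative, not recommendations:
evictionHard:
  nodefs.available: "10%"
  imagefs.available: "15%"
On the workload side, setting ephemeral-storage requests/limits on the runner pods also makes eviction behaviour more predictable.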
| Kubernetes | 59,729,917 | 34 |
I am using k8s with version 1.11 and CephFS as storage.
I am trying to mount the directory created on the CephFS in the pod. To achieve the same I have written the following volume and volume mount config
in the deployment configuration
Volume
{
"name": "cephfs-0",
"cephfs": {
"monitors": [
"10.0.1.165:6789",
"10.0.1.103:6789",
"10.0.1.222:6789"
],
"user": "cfs",
"secretRef": {
"name": "ceph-secret"
},
"readOnly": false,
"path": "/cfs/data/conf"
}
}
volumeMounts
{
"mountPath": "/opt/myapplication/conf",
"name": "cephfs-0",
"readOnly": false
}
The mount is working properly. I can see the Ceph directory, i.e. /cfs/data/conf, getting mounted on /opt/myapplication/conf, but the following is my issue.
I have configuration files already present as part of the Docker image at the location /opt/myapplication/conf. When the deployment mounts the Ceph volume, all the files at the location /opt/myapplication/conf disappear. I know it's the behavior of the mount operation, but is there any way I can persist the already existing files in the container onto the volume which I am mounting, so that another pod which mounts the same volume can access the configuration files? i.e. the files which are already there inside the pod at the location /opt/myapplication/conf should be accessible on the CephFS at location /cfs/data/conf.
Is it possible?
I went through the docker document and it mentions that
Populate a volume using a container
If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents are copied into the volume. The container then mounts and uses the volume, and other containers which use the volume also have access to the pre-populated content.
This matches with my requirement but how to achieve it with k8s volumes?
| Unfortunately Kubernetes' volume system differs from Docker's, so this is not possible directly.
However, in case of a single file foo.conf you can use:
a mountPath ending in this file name and
a subPath containing this file name, like this:
volumeMounts:
- name: cephfs-0
mountPath: /opt/myapplication/conf/foo.conf
subPath: foo.conf
Repeat that for each file. But if you have a lot of them, or if their names can vary, then you have to handle this at runtime or use templating tools. Usually that means mounting it somewhere else and setting up symlinks before your main process starts.
| Kubernetes | 58,128,591 | 34 |
Please explain the difference between ResourceQuota vs LimitRange objects in Kubernetes...?
| LimitRange and ResourceQuota are objects used to control resource usage by a Kubernetes cluster administrator.
ResourceQuota is for limiting the total resource consumption of a namespace, for example:
apiVersion: v1
kind: ResourceQuota
metadata:
name: object-counts
spec:
hard:
configmaps: "10"
persistentvolumeclaims: "4"
replicationcontrollers: "20"
secrets: "10"
services: "10"
LimitRangeis for managing constraints at a pod and container level within the project.
apiVersion: "v1"
kind: "LimitRange"
metadata:
name: "resource-limits"
spec:
limits:
-
type: "Pod"
max:
cpu: "2"
memory: "1Gi"
min:
cpu: "200m"
memory: "6Mi"
-
type: "Container"
max:
cpu: "2"
memory: "1Gi"
min:
cpu: "100m"
memory: "4Mi"
default:
cpu: "300m"
memory: "200Mi"
defaultRequest:
cpu: "200m"
memory: "100Mi"
maxLimitRequestRatio:
cpu: "10"
An individual Pod or Container that requests resources outside of these LimitRange constraints will be rejected, whereas a ResourceQuota only applies to all of the namespace/project's objects in aggregate.
| Kubernetes | 54,929,714 | 34 |
I would like to do two things with MicroK8s:
Route the host machine (Ubuntu 18.04) ports 80/443 to Microk8s
Use something like the simple ingress defined in the kubernetes.io docs
My end goal is to create a single node Kubernetes cluster that sits on the Ubuntu host, then using ingress to route different domains to their respective pods inside the service.
I've been attempting to do this with Microk8s for the past couple of days but can't wrap my head around it.
The best I've gotten so far is using MetalLB to create a load balancer. But this required me to use a free IP address available on my local network rather than the host machines IP address.
I've also enabled the default-http-backend and attempted to export and edit these config files with no success.
As an example this will work on Minikube once the ingress add on is enabled, This example shows the base Nginx server image at port 80 on the cluster IP:
# ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
# - host: nginx.ioo
- http:
paths:
- path: /
backend:
serviceName: nginx-cluster-ip-service
servicePort: 80
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
component: nginx
template:
metadata:
labels:
component: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
# nginx-cluster-ip-service
apiVersion: v1
kind: Service
metadata:
name: nginx-cluster-ip-service
spec:
type: ClusterIP
selector:
component: nginx
ports:
- port: 80
targetPort: 80
| TLDR
Update the annotation to be kubernetes.io/ingress.class: public
Why
For MicroK8s v1.21, running
microk8s enable ingress
Will create a DaemonSet called nginx-ingress-microk8s-controller in the ingress namespace.
If you inspect that, there is a flag to set the ingress class:
- args:
... omitted ...
- --ingress-class=public
... omitted ...
Therefore in order to work with most examples online, you need to either
Remove the --ingress-class=public argument so it defaults to nginx
Update annotations like kubernetes.io/ingress.class: nginx to be kubernetes.io/ingress.class: public
| Kubernetes | 54,506,269 | 34 |
I'm looking for a kubectl command to list / delete all completed jobs
I've tried:
kubectl get job --field-selector status.succeeded=1
But I get:
field selector "status.succeeded=1": field label "status.succeeded" not supported for batchv1.Job
What are the possible fields for --field-selector when getting jobs?
Is there a better way to do this?
| You are almost there; you can do the following to delete completed jobs:
kubectl delete jobs --all-namespaces --field-selector status.successful=1
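To review what would be removed before deleting, the same field selector works with get:
kubectl get jobs --all-namespaces --field-selector status.successful=1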
| Kubernetes | 53,539,576 | 34 |
I have kubeadm and Kubernetes v1.12 without AWS or Google Cloud.
I want to know whether the installed Kubernetes cluster already has an ingress controller and, if it has two, which one is the default.
Thanks :)
| You can check for pods implementing ingress controllers (actually with ingress in the name) with:
kubectl get pods --all-namespaces | grep ingress
And services exposing them with:
kubectl get service --all-namespaces | grep ingress
As @Prafull Ladha says, you won't have an ingress controller by default. The documentation states that in "environments other than GCE/Google Kubernetes Engine, you need to deploy a controller as a pod".
| Kubernetes | 53,299,238 | 34 |
I have recently installed minikube and VirtualBox on a new Mac using homebrew. I am following instructions from the official minikube tutorial.
This is how I am starting the cluster -
minikube start --vm-driver=hyperkit
On running kubectl cluster-info I get this
Kubernetes master is running at https://192.168.99.100:8443
CoreDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Then I set the context of minikube
kubectl config use-context minikube
But when I run minikube dashboard it takes a lot of time to get any output and ultimately I get this output -
http://127.0.0.1:50769/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ is not responding properly: Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
(the same "Temporary Error: unexpected response code: 503" line repeats dozens of times)
More-info -
OS: macOS Mojave (10.14)
kubectl command was installed using gcloud sdk.
Update
Output of kubectl cluster-info dump
Unable to connect to the server: net/http: TLS handshake timeout
Output of kubectl get pods and kubectl get pods --all-namespaces both
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
| stop the minikube:
minikube stop
clean up the current minikube config and data (which is not working or has gone bad):
rm -rf ~/.minikube
Start minikube again: ( a fresh instance )
minikube start
| Kubernetes | 52,916,548 | 34 |
This is the helm and tiller version:
> helm version --tiller-namespace data-devops
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
The previous helm installation failed:
helm ls --tiller-namespace data-devops
NAME REVISION UPDATED STATUS CHART NAMESPACE
java-maven-app 1 Thu Aug 9 13:51:44 2018 FAILED java-maven-app-1.0.0 data-devops
When I tried to install it again using this command, it failed:
helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install \
--namespace data-devops \
--values helm-chart/values/stg-stable.yaml
Error: UPGRADE FAILED: "java-maven-app" has no deployed releases
Is the helm upgrade --install command going to fail if the previous installation failed? I am expecting it to force the install. Any idea?
| This is (or has been) a Helm issue for a while. It only affects the situation where the first install of a chart fails; up to Helm 2.7 this required a manual delete of the failed release before correcting the issue and installing again. However, there is now a --force flag available to address this case - https://github.com/helm/helm/issues/4004
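For reference, a hedged sketch of both recovery paths, using the release, chart, and namespace names from the question (adjust to your setup):
# Helm 2.x: remove the FAILED release record, then install again
helm delete --purge java-maven-app --tiller-namespace data-devops
# Or, on Helm versions where upgrade supports it, force the upgrade over the failed release
helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install --force \
    --namespace data-devops \
    --values helm-chart/values/stg-stable.yaml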
| Kubernetes | 51,760,640 | 34 |
In my gcloud console it shows the following error for my defined ingresses:
Error during sync: error while evaluating the ingress spec: service
"monitoring/kube-prometheus" is type "ClusterIP", expected "NodePort"
or "LoadBalancer"
I am using traefik as a reverse proxy (instead of nginx) and therefore I define an ingress using a ClusterIP. As far as I understand the process, all traffic is proxied through the traefik service (which has a LoadBalancer ingress defined), and therefore all my other ingresses SHOULD actually have a ClusterIP instead of NodePort or LoadBalancer?
Question:
So why does Google Cloud warn me that it expected a NodePort or LoadBalancer?
| I don't know why that error happens, because it seems (to me) to be a valid configuration. But to clear the error, you can switch your service to a named NodePort. Then switch your ingress to use the port name instead of the number. For example:
Service:
apiVersion: v1
kind: Service
metadata:
name: testapp
spec:
ports:
- name: testapp-http # ADD THIS
port: 80
protocol: TCP
targetPort: 80
selector:
app: testapp
type: NodePort
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: testapp
spec:
rules:
- host: hostname.goes.here
http:
paths:
- backend:
serviceName: testapp
# USE THE PORT NAME FROM THE SERVICE INSTEAD OF THE PORT NUMBER
servicePort: testapp-http
path: /
Update:
This is the explanation I received from Google.
Since services by default are ClusterIP [1] and this type of service is meant to be accessible from inside the cluster. It can be accessed from outside when kube-proxy is used, not meant to be directly accessed with an ingress.
As a suggestion, I personally find this article [2] good for understanding the difference between these types of services.
[1] https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
[2] https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
| Kubernetes | 51,572,249 | 34 |
Issue: Redis POD creation on a k8s (v1.10) cluster is stuck at "ContainerCreating"
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned redis to k8snode02
Normal SuccessfulMountVolume 30m kubelet, k8snode02 MountVolume.SetUp succeeded for volume "default-token-f8tcg"
Warning FailedCreatePodSandBox 5m (x1202 over 30m) kubelet, k8snode02 Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "redis_default" network: failed to find plugin "loopback" in path [/opt/loopback/bin /opt/cni/bin]
Normal SandboxChanged 47s (x1459 over 30m) kubelet, k8snode02 Pod sandbox changed, it will be killed and re-created.
| Ensure that /etc/cni/net.d and its /opt/cni/bin friend both exist and are correctly populated with the CNI configuration files and binaries on all Nodes. For flannel specifically, one might make use of the flannel cni repo
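A minimal check-and-repair sketch; the plugin version and download URL below are assumptions, so pick the containernetworking/plugins release that matches your cluster:
# On each node, confirm the CNI config and binaries are present
ls /etc/cni/net.d
ls /opt/cni/bin        # should contain "loopback", "bridge", etc.
# If the standard plugins are missing, one way to install them:
curl -L https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz | sudo tar -xz -C /opt/cni/bin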
| Kubernetes | 51,169,728 | 34 |
How to delete all the contents from a kubernetes node? Contents include deployments, replica sets etc. I tried to delete deployments separately, but kubernetes recreates all the pods again. Is there any way to delete all the replica sets present on a node?
| If you are testing things, the easiest way would be
kubectl delete deployment --all
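If you also want to clear other workload kinds (the question mentions replica sets), a hedged variant is to delete several kinds at once; run it per namespace and adjust the list to your needs:
kubectl delete deployments,replicasets,statefulsets,daemonsets,jobs --all -n <namespace>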
Although if you are using minikube, the easiest would probably be to delete the machine and start again with a fresh node:
minikube delete
minikube start
If we are talking about a production cluster, Kubernetes has a built-in feature to drain a node of the cluster, removing all the objects from that node safely.
You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node. Safe evictions allow the pod’s containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.
Note: By default kubectl drain will ignore certain system pods on the node that cannot be killed; see the kubectl drain documentation for more details.
When kubectl drain returns successfully, that indicates that all of the pods (except the ones excluded as described in the previous paragraph) have been safely evicted (respecting the desired graceful termination period, and without violating any application-level disruption SLOs). It is then safe to bring down the node by powering down its physical machine or, if running on a cloud platform, deleting its virtual machine.
First, identify the name of the node you wish to drain. You can list all of the nodes in your cluster with
kubectl get nodes
Next, tell Kubernetes to drain the node:
kubectl drain <node name>
Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node). drain waits for graceful termination. You should not operate on the machine until the command completes.
If you leave the node in the cluster during the maintenance operation, you need to run
kubectl uncordon <node name>
afterwards to tell Kubernetes that it can resume scheduling new pods onto the node.
Please, note that if there are any pods that are not managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force, as mentioned in the docs.
kubectl drain <node name> --force
| Kubernetes | 48,077,931 | 34 |
I've found jsonpath examples for testing multiple values but not extracting multiple values.
I want to get image and name from kubectl get pods.
this gets me name
kubectl get pods -o=jsonpath='{.items[*].spec.containers[*].name}' | xargs -n 1
this gets me image
kubectl get pods -o=jsonpath='{.items[*].spec.containers[*].image}' | xargs -n 1
but
kubectl get pods -o=jsonpath='{.items[*].spec.containers[*].[name,image}' | xargs -n 2
complains invalid array index image - is there a syntax for getting a list of node-adjacent values?
| Use the command below to get the name and image:
kubectl get pods -Ao jsonpath='{range .items[*]}{@.metadata.name}{" "}{@.spec.containers[*].image}{"\n"}{end}'
It will give output like below:
name image
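A hedged alternative (not part of the original answer) is custom-columns, which prints several adjacent fields without needing the jsonpath range syntax:
kubectl get pods -o custom-columns='NAME:.metadata.name,IMAGE:.spec.containers[*].image'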
| Kubernetes | 46,229,072 | 34 |
Just curious about the intent for this default namespace.
| That namespace exists in clusters created with kubeadm for now. It contains a single ConfigMap object, cluster-info, that aids discovery and security bootstrap (basically, contains the CA for the cluster and such). This object is readable without authentication.
If you are curious:
$ kubectl get configmap -n kube-public cluster-info -o yaml
There are more details in this blog post and the design document:
NEW: kube-public namespace
[...] To create a config map that everyone can see, we introduce a new kube-public namespace. This namespace, by convention, is readable by all users (including those not authenticated). [...]
In the initial implementation the kube-public namespace (and the cluster-info config map) will be created by kubeadm. That means that these won't exist for clusters that aren't bootstrapped with kubeadm. [...]
| Kubernetes | 45,929,548 | 34 |
I am trying to set up an Ingress in GCE Kubernetes, but when I visit the IP address and path combination defined in the Ingress, I keep getting the following 502 error:
Here is what I get when I run: kubectl describe ing --namespace dpl-staging
Name: dpl-identity
Namespace: dpl-staging
Address: 35.186.221.153
Default backend: default-http-backend:80 (10.0.8.5:8080)
TLS:
dpl-identity terminates
Rules:
Host Path Backends
---- ---- --------
*
/api/identity/* dpl-identity:4000 (<none>)
Annotations:
https-forwarding-rule: k8s-fws-dpl-staging-dpl-identity--5fc40252fadea594
https-target-proxy: k8s-tps-dpl-staging-dpl-identity--5fc40252fadea594
url-map: k8s-um-dpl-staging-dpl-identity--5fc40252fadea594
backends: {"k8s-be-31962--5fc40252fadea594":"HEALTHY","k8s-be-32396--5fc40252fadea594":"UNHEALTHY"}
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
15m 15m 1 {loadbalancer-controller } Normal ADD dpl-staging/dpl-identity
15m 15m 1 {loadbalancer-controller } Normal CREATE ip: 35.186.221.153
15m 6m 4 {loadbalancer-controller } Normal Service no user specified default backend, using system default
I think the problem is dpl-identity:4000 (<none>). Shouldn't I see the IP address of the dpl-identity service instead of <none>?
Here is my service description: kubectl describe svc --namespace dpl-staging
Name: dpl-identity
Namespace: dpl-staging
Labels: app=dpl-identity
Selector: app=dpl-identity
Type: NodePort
IP: 10.3.254.194
Port: http 4000/TCP
NodePort: http 32396/TCP
Endpoints: 10.0.2.29:8000,10.0.2.30:8000
Session Affinity: None
No events.
Also, here is the result of executing: kubectl describe ep -n dpl-staging dpl-identity
Name: dpl-identity
Namespace: dpl-staging
Labels: app=dpl-identity
Subsets:
Addresses: 10.0.2.29,10.0.2.30
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
http 8000 TCP
No events.
Here is my deployment.yaml:
apiVersion: v1
kind: Secret
metadata:
namespace: dpl-staging
name: dpl-identity
type: Opaque
data:
tls.key: <base64 key>
tls.crt: <base64 crt>
---
apiVersion: v1
kind: Service
metadata:
namespace: dpl-staging
name: dpl-identity
labels:
app: dpl-identity
spec:
type: NodePort
ports:
- port: 4000
targetPort: 8000
protocol: TCP
name: http
selector:
app: dpl-identity
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: dpl-staging
name: dpl-identity
labels:
app: dpl-identity
annotations:
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- secretName: dpl-identity
rules:
- http:
paths:
- path: /api/identity/*
backend:
serviceName: dpl-identity
servicePort: 4000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: dpl-staging
name: dpl-identity
labels:
app: dpl-identity
spec:
replicas: 2
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: dpl-identity
spec:
containers:
- image: gcr.io/munpat-container-engine/dpl/identity:0.4.9
name: dpl-identity
ports:
- containerPort: 8000
name: http
volumeMounts:
- name: dpl-identity
mountPath: /data
volumes:
- name: dpl-identity
secret:
secretName: dpl-identity
| Your backend k8s-be-32396--5fc40252fadea594 is showing as "UNHEALTHY".
Ingress will not forward traffic if the backend is UNHEALTHY; this will result in the 502 error you are seeing.
It is being marked as UNHEALTHY because it is not passing its health check. Check the health check settings for k8s-be-32396--5fc40252fadea594 to see if they are appropriate for your pod; it may be polling a URI or port that is not returning a 200 response. You can find these settings under Compute Engine > Health Checks.
If they are correct, then there are many steps between your browser and the container that could be passing traffic incorrectly. You could try kubectl exec -it PODID -- bash (or ash if you are using Alpine) and then curl localhost to see if the container is responding as expected. If it is, and the health checks are also configured correctly, this narrows the issue down to your service; you could then try changing the service from a NodePort type to a LoadBalancer and see if hitting the service IP directly from your browser works.
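If the root cause turns out to be the health check, one common fix is to give the pod a readinessProbe, since the GCE ingress controller typically derives its health check from it. A hedged sketch follows; the /api/identity/health path is an assumption, so point it at whatever URL your app answers with a 200:
containers:
- image: gcr.io/munpat-container-engine/dpl/identity:0.4.9
  name: dpl-identity
  ports:
  - containerPort: 8000
    name: http
  readinessProbe:
    httpGet:
      path: /api/identity/health   # assumption: use a path that returns 200
      port: 8000
    initialDelaySeconds: 5
    periodSeconds: 10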
| Kubernetes | 42,967,763 | 34 |
is there a way to tell kubectl that my pods should only deployed on a certain instance pool?
For example:
nodeSelector:
pool: poolname
Assumed i created already my pool with something like:
gcloud container node-pools create poolname --cluster=cluster-1 --num-nodes=10 --machine-type=n1-highmem-32
| Ok, I found a solution:
gcloud creates a label for the pool name. In my manifest I just dropped that under the node selector. Very easy.
Here is my manifest.yaml; I deploy ipyparallel with Kubernetes:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ipengine
spec:
replicas: 1
template:
metadata:
labels:
app: ipengine
spec:
containers:
- name: ipengine
image: <imageaddr.>
args:
- ipengine
- --ipython-dir=/tmp/config/
- --location=ipcontroller.default.svc.cluster.local
- --log-level=0
resources:
requests:
cpu: 1
#memory: 3Gi
nodeSelector:
#<labelname>:value
cloud.google.com/gke-nodepool: pool-highcpu32
| Kubernetes | 40,154,907 | 34 |
If I do kubectl get deployments, I get:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
analytics-rethinkdb 1 1 1 1 18h
frontend 1 1 1 1 6h
queue 1 1 1 1 6h
Is it possible to rename the deployment to rethinkdb? I have tried googling kubectl edit analytics-rethinkdb and changing the name in the yaml, but that results in an error:
$ kubectl edit deployments/analytics-rethinkdb
error: metadata.name should not be changed
I realize I can just kubectl delete deployments/analytics-rethinkdb and then do kubectl run analytics --image=rethinkdb --command -- rethinkdb etc etc but I feel like it should be possible to simply rename it, no?
| Object names are immutable in Kubernetes. If you want to change a name, you can export/edit/recreate with a different name
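A hedged sketch of that export/edit/recreate flow, using the names from the question:
kubectl get deployment analytics-rethinkdb -o yaml > rethinkdb.yaml
# edit rethinkdb.yaml: change metadata.name to "rethinkdb" and strip server-set fields
# (status, metadata.uid, metadata.resourceVersion, metadata.creationTimestamp, ...)
kubectl create -f rethinkdb.yaml
kubectl delete deployment analytics-rethinkdb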
| Kubernetes | 39,428,409 | 34 |
How can I tell whether or not I am running inside a kubernetes cluster? With docker I can check if /.dockerinit exist. Is there an equivalent?
| You can check for KUBERNETES_SERVICE_HOST environment variable.
This variable is always exported in an environment where the container is executed.
Refer to https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables
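For example, a minimal shell check inside the container; the service account token path is the usual default mount and is only an extra, optional signal:
if [ -n "$KUBERNETES_SERVICE_HOST" ] || [ -f /var/run/secrets/kubernetes.io/serviceaccount/token ]; then
  echo "running inside a Kubernetes cluster"
else
  echo "not running inside Kubernetes"
fi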
| Kubernetes | 36,639,062 | 34 |
my web application is running as a Kubernetes pod behind an nginx reverse proxy for SSL. Both the proxy and my application use Kubernetes services for load balancing (as described here).
The problem is that all of my HTTP request logs only show the internal cluster IP addresses instead of the addresses of the actual HTTP clients. Is there a way to make Kubernetes services pass this information to my app servers?
| As of 1.5, if you are running in GCE (by extension GKE) or AWS, you simply need to add an annotation to your Service to make HTTP source preservation work.
...
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/external-traffic: OnlyLocal
...
It basically exposes the service directly via nodeports instead of providing a proxy--by exposing a health probe on each node, the load balancer can determine which nodes to route traffic to.
In 1.7, this config has become GA, so you can set "externalTrafficPolicy": "Local" on your Service spec.
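A hedged sketch of the 1.7+ form (the service name, selector, and ports are placeholders):
kind: Service
apiVersion: v1
metadata:
  name: nginx-proxy
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: nginx-proxy
  ports:
  - port: 443
    targetPort: 443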
Click here to learn more
| Kubernetes | 32,112,922 | 34 |
I am getting the below error when installing the latest stable Rancher Desktop in my Virtual Machine.
Could someone please help?
Error:
Error: wsl.exe exited with code 4294967295
Command:
wsl --distribution rancher-desktop --exec mkdir -p /mnt/wsl/rancher-desktop/run/data
Logs:
2022-02-02T09:58:39.490Z: Running command wsl --distribution rancher-desktop --exec wslpath -a -u C:\Users\VIVEK~1.NUN\AppData\Local\Temp\rd-distro-gGd3SG\distro.tar...
2022-02-02T09:58:40.641Z: Running command wsl --distribution rancher-desktop --exec tar -cf /mnt/c/Users/VIVEK~1.NUN/AppData/Local/Temp/rd-distro-gGd3SG/distro.tar -C / /bin/busybox /bin/mount /bin/sh /lib /etc/wsl.conf /etc/passwd /etc/rancher /var/lib...
2022-02-02T09:58:42.628Z: Running command wsl --import rancher-desktop-data C:\Users\Vivek.Nuna\AppData\Local\rancher-desktop\distro-data C:\Users\VIVEK~1.NUN\AppData\Local\Temp\rd-distro-gGd3SG\distro.tar --version 2...
2022-02-02T09:58:44.025Z: Running command wsl --distribution rancher-desktop-data --exec /bin/busybox [ ! -d /etc/rancher ]...
2022-02-02T09:58:44.025Z: Running command wsl --distribution rancher-desktop-data --exec /bin/busybox [ ! -d /var/lib ]...
2022-02-02T10:03:54.533Z: Running command wsl --terminate rancher-desktop...
2022-02-02T10:03:54.534Z: Running command wsl --terminate rancher-desktop-data...
2022-02-02T10:03:54.971Z: Running command wsl --distribution rancher-desktop --exec mkdir -p /mnt/wsl/rancher-desktop/run/data...
2022-02-02T10:04:03.418Z: WSL: executing: mkdir -p /mnt/wsl/rancher-desktop/run/data: Error: wsl.exe exited with code 4294967295
| I met the same issue in Windows 10.
The solution below helped me:
1. Quit Rancher Desktop
2. Run below command in Windows command line:
wsl --update
3. After the update completes, open Rancher Desktop again.
Rancher Desktop works well now.
After completing the Rancher Desktop installation, you can use the docker and kubectl commands from the Windows command line successfully.
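To verify afterwards, an optional sanity check (wsl --status requires a reasonably recent WSL build):
wsl --status
docker version
kubectl get nodes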
References:
Error: wsl.exe exited with code 4294967295 #1328 - github
| Kubernetes | 70,953,842 | 33 |
The command helm init does not work any longer as of version 3.
Running helm --help lists all available commands, amongst which init is no longer present.
Why is that?
| According to the official documentation, the helm init command has been removed without replacement:
The helm init command has been removed. It performed two primary functions. First, it installed Tiller. This is no longer needed. Second, it setup directories and repositories where Helm configuration lived. This is now automated. If the directory is not present it will be created.
There has been another notable change, which might trouble you next:
The stable repository is no longer added by default. This repository will be deprecated during the life of Helm v3 and we are now moving to a distributed model of repositories that can be searched by the Helm Hub.
However, according to the official quickstart guide, this can be done manually if desired:
$ helm repo add stable https://charts.helm.sh/stable
$ helm repo update
⎈ Happy Helming!⎈
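In other words, with Helm 3 you can go straight to installing a chart; there is no init or Tiller step. A hedged sketch with placeholder names:
$ helm install my-release stable/<chart-name>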
| Kubernetes | 59,261,340 | 33 |