The endpoint check is quite similar:
...
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Custom-Health-Header
      value: container-is-ok
  initialDelaySeconds: 10
  periodSeconds: 5
...

Kubernetes sends the declared custom headers with the GET request to the declared path and port, and the check succeeds if the HTTP response status code is in the 2xx or 3xx range. You may also use a pure TCP check, as shown here:
...
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
...

In this case, the check succeeds if Kubernetes is able to open a TCP socket to the container on the declared port.
Similarly, the readiness of containers, once they have started, is monitored with a readiness probe. The readiness probe is defined in the same way as the liveness probe; the only difference is that livenessProbe is replaced with readinessProbe.
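For instance, a readiness probe for a hypothetical container that exposes a /ready endpoint on port 8080 (the path and port are just placeholders) could be declared as follows:
...
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
...
Until this check succeeds, the Pod is not added to the endpoints of the Services that select it, so it receives no traffic.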
The following subsection explains how to autoscale Deployments.

Autoscaling
Instead of manually modifying the number of replicas in a Deployment to adapt it to a decrease or increase in load, we can let Kubernetes decide the number of replicas needed to keep the consumption of a declared resource close to a target value. Thus, for instance, if we declare a target of 10% CPU consumption, then when the average CPU consumption of the replicas exceeds this threshold, a new replica is created, and when it falls below the threshold, a replica is destroyed. The typical resource used to monitor replicas is CPU consumption, but memory consumption can be used as well.
In actual high-traffic production systems, autoscaling is a must, because it is the only way to adapt the system quickly to changes in load.
Autoscaling is achieved by defining a HorizontalPodAutoscaler object. Here is an example of a HorizontalPodAutoscaler definition:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment-name
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 25

spec -> scaleTargetRef -> name specifies the name of the Deployment to autoscale, while averageUtilization specifies the target percentage usage (25% in our case) of the chosen resource (cpu in our case).
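If you prefer not to write a .yaml file, an equivalent autoscaler for the same hypothetical Deployment can also be created imperatively with kubectl:
kubectl autoscale deployment my-deployment-name --cpu-percent=25 --min=1 --max=10
You can then inspect its status with kubectl get hpa.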
The following subsection gives a short introduction to the Helm package manager and Helm charts and explains how to install Helm charts on a Kubernetes cluster. An example of how to install an Ingress Controller is given as well.
Helm – installing an Ingress Controller
Helm charts are a way to organize the installation of complex Kubernetes applications that contain several .yaml files. A Helm chart is a set of .yaml files organized into folders and subfolders. Here is a typical folder structure of a Helm chart taken from the official documentation:
Figure 20.9: Folder structure of a Helm chart
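As a reference, a chart scaffolded with the helm create command has a layout along the following lines (the exact files vary from chart to chart):
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
    deployment.yaml
    service.yaml
    _helpers.tpl
    NOTES.txt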
The .yaml files specific to the application are placed in the top templates directory, while the charts directory may contain other Helm charts used as helper libraries. The top-level Chart.yaml file contains general information on the package (name and description), together with both the application version and the Helm chart version. The following is a typical example:
apiVersion: v2
name: myhelmdemo
description: My Helm chart
type: application
version: 1.3.0
appVersion: 1.2.0

Here, type can be either application or library. Only application charts can be deployed, while library charts are utilities for developing other charts. library charts are placed in the charts folder of other Helm charts.
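For example, instead of copying a library chart into the charts folder by hand, a chart can declare it as a dependency in its Chart.yaml file; the chart name, version, and repository URL below are just placeholders:
dependencies:
  - name: my-library-chart
    version: 1.0.0
    repository: https://example.com/charts
Running helm dependency update then downloads the declared charts into the charts folder.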
In order to configure each specific application installation, Helm chart .yaml files contain variables that are specified when Helm charts are installed. Moreover, Helm charts also provide a simple templating language that allows some declarations to be included only if some conditions depending on the input variables are satisfied. The top-level values.yaml file declares default values for the input variables, meaning that the developer needs to specify just the few variables for which they require values different from the defaults. We will not describe the Helm chart template language because it would be too extensive, but you can find it in the official Helm documentation referred to in the Further reading section.
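As a minimal sketch of how this works (the variable and file names below are made up), a values.yaml file might declare defaults such as the following:
replicaCount: 2
service:
  enabled: true
  port: 80
A template in the templates folder could then reference these variables, emitting the Service only when it is enabled:
{{- if .Values.service.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-service
spec:
  selector:
    app: {{ .Release.Name }}
  ports:
  - port: {{ .Values.service.port }}
{{- end }}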
Helm charts are usually organized in public or private repositories in a way that is similar to Docker images. There is a Helm client that you can use to download packages from a remote repository and to install charts in Kubernetes clusters. The Helm client is immediately available in Azure Cloud Shell, so you can start using Helm for your Azure Kubernetes cluster without needing to install it.
A remote repository must be added before using its packages, as shown in the following example:
helm repo add <my-repo-local-name> https://charts.helm.sh/stable

The preceding command makes the packages of a remote repository available and gives a local name to that remote repository. After that, any package from the remote repository can be installed with a command such as the following:
helm install <instance name> <my-repo-local-name>/<package name> -n <namespace>

Here, <namespace> is the namespace in which to install the application. As usual, if it’s not provided, the default namespace is assumed. <instance name> is the name that you give to the installed application. You need this name to get information about the installed application with the following command:
helm status <instance name>

You can also get information about all the applications installed with Helm with the following command:
helm ls

The application name is also needed to delete the application from the cluster by means of the following command:
helm delete <instance name>

When we install an application, we may also provide a .yaml file with all the variable values we want to override. We can also specify a specific version of the Helm chart; otherwise, the most recent version is used. Here is an example with both the version and values overridden:
20:782
helm install <instance name><my-repo-local-name>/<package name> -f values.yaml --version <version>
20:783
20:784
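The file passed with the -f option is just a .yaml file containing the values to override; which variables are available depends on the chart, so the ones shown here are purely hypothetical:
replicaCount: 3
image:
  tag: "1.2.1"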
Finally, value overrides can also be provided in-line with the --set option, as shown here:
... --set <variable1>=<value1>,<variable2>=<value2> ...

We can also upgrade an existing installation with the upgrade command, as shown here:
20:788
helm upgrade <instance name><my-repo-local-name>/<package name>...
20:789
20:790
The upgrade command may be used to specify new value overrides with the -f option or with the --set option, and a new version with --version.
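For instance, assuming an already installed release and a hypothetical chart variable named replicaCount, an upgrade that changes that value and pins the chart version might look like this:
helm upgrade <instance name> <my-repo-local-name>/<package name> --set replicaCount=3 --version <version>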
Let's use Helm to provide an Ingress for the Guestbook demo application. More specifically, we will use Helm to install an Ingress Controller based on Nginx. The detailed procedure is as follows:
1. Add the remote repository:

   helm repo add gcharts https://charts.helm.sh/stable

2. Install the Ingress Controller:

   helm install ingress gcharts/nginx-ingress