    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: docker.io/redis:6.0.5
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
The file is composed of two object definitions separated by a line containing just ---, the object definition separator used in .yaml files. It is common to group related objects, such as a Deployment with its associated Service, in the same file, separated by the --- separator, in order to increase code readability.
The first object is a Deployment with a single replica, and the second object is a ClusterIP Service that exposes the Deployment on port 6379 at the internal redis-leader.default.svc.cluster.local network address. The Deployment's Pod template defines three labels, app, role, and tier, whose values are used in the selector definition of the Service to connect the Service with the unique Pod defined in the Deployment.
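The top of the redis-master.yaml file falls outside the excerpt above. A minimal sketch of what it typically looks like, consistent with the single replica and the three labels just described (the object name is an assumption; the exact code is in the repository's file):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master   # assumed name; check the repository file
spec:
  replicas: 1          # the single replica mentioned in the text
  selector:
    matchLabels:       # must match the Pod template labels
      app: redis
      role: master
      tier: backend
  template:
    # ...the template metadata/spec shown above...
```

The selector's matchLabels must match the labels in the Pod template, or the Deployment will be rejected by the API server.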
Let’s upload the redis-master.yaml file to Azure Cloud Shell, and then deploy it in the cluster with the following command:
kubectl create -f redis-master.yaml
Once the operation is complete, you can inspect the contents of the cluster with kubectl get all.
The slave storage is defined in the redis-slave.yaml file and is created in the same way; the only differences are that this time we have two replicas and a different Docker image. The full code is in the GitHub repository associated with this book.
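Since the full redis-slave.yaml is only in the repository, here is a minimal sketch based on the description above (two replicas, a different image); the object name and image are assumptions taken from the classic Kubernetes guestbook sample, so check them against the repository file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave                  # assumed name
spec:
  replicas: 2                        # two replicas, as stated above
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v3   # assumed image
        ports:
        - containerPort: 6379
```

The file presumably also contains a companion ClusterIP Service for the slave replicas, analogous to redis-leader above.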
Let’s upload this file as well and deploy it with the following command:
kubectl create -f redis-slave.yaml
The code for the UI tier is contained in the frontend.yaml file. Here, the Deployment has three replicas, and the Service is of a different type. Let's upload and deploy this file with the following command:
kubectl create -f frontend.yaml
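Before looking at the Service, it helps to see the shape of the frontend Deployment; this is a hedged sketch with the three replicas mentioned above (the container name and image are assumptions based on the standard guestbook sample; the real code is in the repository):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                       # three replicas, as stated above
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis             # assumed container name
        image: gcr.io/google-samples/gb-frontend:v4   # assumed image
        ports:
        - containerPort: 80
```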
It is worthwhile analyzing the Service code in the frontend.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Again, the full code is in the GitHub repository associated with the book.
This Service is of the LoadBalancer type. Since this Pod is the application's interface with the world outside of the Kubernetes cluster, its Service must have a stable public IP and must be load balanced; LoadBalancer is the only Service type that satisfies both requirements. (See the Services section of this chapter for more information.)
If you are using Azure Kubernetes Service or any other cloud-managed Kubernetes service, use the following command to get the public IP address assigned to the Service, and hence to the application:
kubectl get service
The preceding command displays information on all the installed Services. You should find the public IP in the EXTERNAL-IP column of the list. If you see a <pending> value there, repeat the command until the public IP address is assigned to the load balancer.
If no IP is assigned after a few minutes, verify whether there is an error or warning in any of the Service descriptions. If there isn't, check whether all Deployments are actually running using the following command:
kubectl get deployments
If, instead, you are on minikube, LoadBalancer services can be accessed by issuing this command:
minikube service <service name>
Thus, in our case:
minikube service frontend
The command should automatically open the browser.
Once you get the IP address, navigate with the browser to this address. The application’s home page should now appear!
If the page doesn’t appear, verify whether any service has an error by issuing the following command:
kubectl get service
If no errors appear there, also verify that all Deployments are in the running state with the following:
kubectl get deployments
If you find problems, look for errors in the .yaml files, correct them, and then update the objects defined in each file with:

kubectl apply -f <file name>
Once you have finished experimenting with the application, make sure to remove it from the cluster, so as to avoid wasting your free Azure credit (public IP addresses cost money), with the following commands:
kubectl delete deployment frontend redis-master redis-slave
kubectl delete service frontend redis-leader redis-follower
In the next section, we will analyze other important Kubernetes features.
Advanced Kubernetes concepts
In this section, we will discuss other important Kubernetes features, including how to assign permanent storage to StatefulSets; how to store secrets such as passwords, connection strings, or certificates; how a container can inform Kubernetes about its health state; and how to handle complex Kubernetes packages with Helm. All of these subjects are organized into dedicated subsections. We will start with the problem of permanent storage.
Requiring permanent storage
Since Pods are moved between nodes, they can’t store data on the disk storage offered by the current node where they are running, or they would lose that storage as soon as they are moved to a different node. This leaves us with two options:
Using external databases: With the help of external databases, ReplicaSets can also store information. However, if we need better performance in terms of write/update operations, we should use distributed sharded databases based on non-SQL engines, such as Cosmos DB or MongoDB (see Chapter 12, Choosing Your Data Storage in the Cloud). In this case, in order to take maximum advantage of table sharding, we need StatefulSets, where each Pod instance takes care of a different table shard.