These kinds of structures will be introduced in the Which tools are needed to manage microservices? section of this chapter, and discussed in detail in Chapter 20, Kubernetes.
Moreover, fine-grained scaling of distributed microservices that communicate through asynchronous communication requires each microservice to be resilient. In fact, communication that’s directed to a specific microservice instance may fail due to a hardware fault or for the simple reason that the target instance was killed or moved to another node during a load-balancing operation.
Temporary failures can be overcome with exponential retries. This is where we retry the same operation after each failure with a delay that increases exponentially until a maximum number of attempts is reached. For instance, first, we would retry after 10 milliseconds, and if this retry operation results in a failure, a new attempt is made after 20 milliseconds, then after 40 milliseconds, and so on.
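The schedule described above can be sketched as follows (a minimal, illustrative Python sketch; nothing here is .NET-specific, and the practical .NET implementation is described later in this chapter):

```python
import time

def retry_with_backoff(operation, max_attempts=4, base_delay_ms=10):
    """Retry operation with exponentially growing delays:
    10 ms, 20 ms, 40 ms, ... until max_attempts is reached."""
    delay_ms = base_delay_ms
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise                  # give up and surface the failure
            time.sleep(delay_ms / 1000)
            delay_ms *= 2              # 10 -> 20 -> 40 ms, and so on
```

Here `ConnectionError` stands in for whatever transient communication failure the microservice may experience.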
On the other hand, long-term failures often cause an explosion of retry operations that may saturate all system resources in a way that is similar to a denial-of-service attack. Therefore, exponential retries are usually combined with a circuit breaker strategy: after a given number of failures, a long-term failure is assumed, and access to the resource is prevented for a given time by returning an immediate failure without attempting the communication operation.
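The circuit breaker strategy can be sketched in the same spirit (an illustrative Python sketch; the failure threshold and reset time are hypothetical parameters):

```python
import time

class CircuitBreaker:
    """Open the circuit after max_failures consecutive failures.
    While open, calls fail immediately without attempting the
    communication operation; after reset_after seconds, attempts
    are allowed through again."""

    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None       # reset time elapsed: try again
            self.failures = 0
        try:
            result = operation()
        except ConnectionError:
            self.failures += 1
            if self.failures >= self.max_failures:
                # assume a long-term failure and stop attempting calls
                self.opened_at = time.monotonic()
            raise
        self.failures = 0               # a success closes the window
        return result
```

In practice, the breaker would wrap the same operations that the retry policy wraps, so that retries stop hammering a resource that is known to be down.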
It is also fundamental that the congestion of some subsystems, due to either failure or a request peak, does not propagate to other system parts in order to prevent overall system congestion. Bulkhead isolation prevents congestion propagation in the following ways:
Only a maximum number of similar simultaneous outbound requests are allowed; let’s say, 10. This is similar to putting an upper bound on thread creation.
Requests exceeding the previous bound are queued.
If the maximum queue length is reached, any further requests result in exceptions being thrown to abort them.
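The three rules above can be sketched as follows (an illustrative Python sketch; `max_parallel` and `max_queue` are the hypothetical bounds just mentioned):

```python
import threading

class Bulkhead:
    """Bulkhead isolation: at most max_parallel requests run at once,
    at most max_queue requests wait, and any further request is aborted."""

    def __init__(self, max_parallel=10, max_queue=100):
        self.slots = threading.Semaphore(max_parallel)
        self.lock = threading.Lock()
        self.waiting = 0
        self.max_queue = max_queue

    def execute(self, operation):
        if self.slots.acquire(blocking=False):   # a slot is free: run at once
            try:
                return operation()
            finally:
                self.slots.release()
        with self.lock:
            if self.waiting >= self.max_queue:   # queue limit reached: abort
                raise RuntimeError("bulkhead full: request rejected")
            self.waiting += 1                    # otherwise, queue the request
        self.slots.acquire()                     # wait until a slot frees up
        with self.lock:
            self.waiting -= 1
        try:
            return operation()
        finally:
            self.slots.release()
```

The point of the bound is that a congested downstream service can exhaust, at most, `max_parallel + max_queue` resources of its callers, instead of propagating congestion without limit.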
The practical .NET implementation of exponential retries, circuit breakers, and bulkhead isolation is described in the Resilient task execution subsection of this chapter.
Retry policies may cause the same message to be received and processed several times, because the sender received no confirmation that the message arrived, or simply because the operation timed out even though the receiver actually got the message. The only possible solution to this problem is designing all messages so that they’re idempotent – that is, designing messages in such a way that processing the same message several times has the same effect as processing it once.
Updating a database table field to a given value, for instance, is an idempotent operation, since repeating it once or twice has exactly the same effect. However, incrementing a decimal field is not an idempotent operation. Microservice designers should make an effort to design the overall application with as many idempotent messages as possible.
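A tiny sketch makes the difference concrete (illustrative Python; the `record` dictionary stands in for a database row):

```python
record = {"price": 100}

def set_price(r, value):        # idempotent: sets the field to a value
    r["price"] = value

def increment_price(r, delta):  # NOT idempotent: increments the field
    r["price"] += delta

set_price(record, 120)
set_price(record, 120)          # processing the duplicate changes nothing
assert record["price"] == 120

increment_price(record, 10)
increment_price(record, 10)     # the duplicate is applied a second time
assert record["price"] == 140
```

This is why messages should carry the target value ("set the price to 120") rather than a delta ("add 10 to the price") whenever possible.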
An idempotent message is also a message that doesn’t cause malfunctions if processed twice. For instance, a message that modifies the price of a travel offer is idempotent, because processing it a second time just sets the price to the same value again. However, a message whose purpose is to add a new travel booking is not idempotent, since processing it twice adds two travel bookings instead of one.
The remaining non-idempotent messages must be transformed into idempotent ones in the following way, or with other similar techniques:
Attach both a timestamp and an identifier that uniquely identifies each message.
Store all the messages that have been received in a dictionary indexed by the unique identifier attached to each message, as described in the previous point.
Reject messages that are too old, based on the attached timestamp.
When a message that may be a duplicate is received, verify whether it’s contained in the dictionary. If it is, then it has already been processed, so reject it.
Since old messages are rejected anyway, they can be periodically removed from the dictionary to prevent it from growing indefinitely.
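Putting the steps above together, a duplicate-detecting receiver can be sketched as follows (an illustrative Python sketch; `max_age` is the hypothetical threshold beyond which a message counts as old):

```python
import time

class IdempotentReceiver:
    """Recognize duplicates by remembering recent message identifiers.
    max_age is the threshold beyond which a message counts as old."""

    def __init__(self, max_age=3600.0):
        self.max_age = max_age
        self.seen = {}                       # unique id -> arrival time

    def receive(self, msg_id, timestamp, process):
        now = time.time()
        # purge entries older than max_age so the dictionary can't grow forever
        self.seen = {i: t for i, t in self.seen.items()
                     if now - t <= self.max_age}
        if now - timestamp > self.max_age:
            return "rejected: too old"       # old messages are rejected anyway
        if msg_id in self.seen:
            return "rejected: duplicate"     # already processed: reject it
        self.seen[msg_id] = now
        process()
        return "processed"
```

The purge is safe precisely because any resent copy of a purged message would be rejected as too old rather than processed again.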
In Chapter 14, Implementing Microservices with .NET, we will use this technique in practice and discuss communication and coordination problems in more detail.
It is worth pointing out that some message brokers, such as Azure Service Bus, offer facilities for implementing the technique described previously. However, the receiver must always be able to recognize duplicate messages since, due to time-outs in the reception of acknowledgments, messages might be resent. Azure Service Bus is discussed in the .NET communication facilities subsection.
In the next subsection, we will talk about microservice containerization based on Docker.
Containers and Docker
We’ve already discussed the advantages of having microservices that don’t depend on the environment where they run; microservices can be moved from busy nodes to idle nodes without constraints, thus achieving a better load balance and, consequently, better usage of the available hardware.
However, if we need to mix legacy software with newer modules, or want to mix several development stacks in order to use the best stack for each module implementation, and so on, we are faced with the problem that the various microservices have different hardware/software prerequisites. In these cases, the independence of each microservice from the hosting environment can be restored by deploying each microservice with all its dependencies on a private virtual machine.
However, starting a virtual machine with its private copy of the operating system takes a lot of time, and microservices must be started and stopped quickly to reduce load-balancing and fault recovery costs. In fact, new microservices may be started either to replace faulty ones or because they were moved from one hardware node to another to perform load-balancing. Moreover, adding a whole copy of the operating system to each microservice instance would be an excessive overhead.
Luckily, microservices can rely on a lighter form of technology: containers. Containers provide a lightweight, efficient form of virtualization. Unlike traditional virtual machines that virtualize an entire machine, including the operating system, containers virtualize at the OS filesystem level, sitting on top of the host OS kernel. They use the operating system of the hosting machine (kernel, DLLs, and drivers) and use the OS’s native features to isolate processes and resources, creating an isolated environment for the images they run.
As a consequence, containers are tied to a specific OS, but they don’t suffer the overhead of copying and starting a whole OS in each container instance.
On each host machine, containers are handled by a runtime that takes care of creating them from images and creating an isolated environment for each of them. The most popular container image format is Docker, which is a de facto standard for container images.
Images contain files needed to create each container and specify which container resources, such as communication ports, to expose outside of the container. However, they need not explicitly contain all involved files since they can be layered. This way, each image is built by adding new files and configuration information on top of another existing image that is referenced from inside the newly defined image.
For instance, if you want to deploy a .NET application as a Docker image, it is enough to just add your software and files to your Docker image and then reference an already existing .NET Docker image.
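For example, a minimal Dockerfile along these lines just layers the application’s files on top of an existing .NET image (the image tag, file names, and port below are illustrative):

```dockerfile
# Reference an already existing .NET image: its layers are reused as-is
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
# Add just your own software and files as a new layer on top of it
COPY ./publish .
# Declare which container resource (here, a communication port) to expose
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyService.dll"]
```

Only the `COPY` layer is specific to your application; the base image layers are shared with every other image that references them.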
To allow for easy image referencing, images are grouped into registries that may be either public or private. They are similar to NuGet or npm registries. Docker offers a public registry (https://hub.docker.com/_/registry) where you can find most of the public images you may need to reference in your own images. However, each company can define private registries. For instance, Microsoft offers Azure Container Registry, where you can define your private container registry service: https://azure.microsoft.com/en-us/services/container-registry/. There, you can also find most of the .NET-related images you might need to reference in your code.
Before instantiating each container, the Docker runtime must resolve all the recursive references. This cumbersome job is not repeated each time a new container is created, since the Docker runtime keeps a cache where it stores the fully assembled images corresponding to each input image it has already processed.
Since each application is usually composed of several modules to be run in different containers, a tool called Docker Compose allows you to define .yml files, known as composition files, that specify the following information:
Which images to deploy.
How the internal resources that are exposed by each image must be mapped to the physical resources of the host machine. For instance, how communication ports that are exposed by Docker images must be mapped to the ports of the physical machine.
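A minimal composition file along these lines might look as follows (service names, image names, and ports are illustrative):

```yaml
services:
  frontend:
    image: mycompany/frontend:1.0   # which image to deploy
    ports:
      - "80:8080"                   # host port 80 -> container port 8080
  worker:
    image: mycompany/worker:1.0     # no ports: reachable only internally
```

Here, only the `frontend` service maps a port to the physical machine; the `worker` service communicates with it over the internal network that Docker Compose creates.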
We will analyze Docker images and .yml files in the How does .NET deal with microservices? section of this chapter.
The Docker runtime handles images and containers on a single machine, but usually, containerized microservices are deployed and load-balanced on clusters that are composed of several machines. Clusters are handled by pieces of software called orchestrators. Orchestrators will be introduced in the Which tools are needed to manage microservices? section of this chapter, and described in detail in Chapter 20, Kubernetes.
Now that we understand what microservices are, what problems they can solve, and their basic design principles, we are ready to analyze when and how to use them in our system architecture. The next section analyzes when we should use them.
When do microservices help?
The answer to this question requires us to understand the roles microservices play in modern software architectures. We will look at this in the following two subsections:
Layered architectures and microservices
When is it worth considering microservice architectures?
Let’s start with a detailed look at layered architectures and microservices.
Layered architectures and microservices
As discussed in Chapter 7, Understanding the Different Domains in Software Solutions, enterprise systems are usually organized in logical independent layers. The outermost layer is the one that interacts with the user and is called the presentation layer (in the onion architecture, the outermost layer also contains drivers and test suites), while the last layer (the innermost layer in the onion architecture) takes care of application permanent data handling and is called the data layer (the domain layer in the onion architecture). Requests originate in the presentation layer and pass through all the layers until they reach the data layer (and then come back, traversing all the layers in reverse until they reach the outermost layer again).
In the case of classical layered architecture (the onion architecture is quite different, as discussed in Chapter 7, Understanding the Different Domains in Software Solutions), each layer takes data from the previous layer, processes it, and passes it to the next layer. Then, it receives the results from its next layer and sends them back to its previous layer. Also, thrown exceptions can’t jump layers – each layer must take care of intercepting all the exceptions and either solve them somehow or transform them into other exceptions that are expressed in the language of its previous layer. The layered architecture ensures the complete independence of the functionalities of each layer from the functionalities of all the other layers.
For instance, we can change the Object-Relational Mapping (ORM) software that interfaces the database without affecting all the layers that are above the data layer (ORM software is discussed in Chapter 13, Interacting with Data in C# – Entity Framework Core). In the same way, we can completely change the user interface (that is, the presentation layer) without affecting the remainder of the system.
Moreover, each layer implements a different kind of system specification. The data layer takes care of what the system must remember, the presentation layer takes care of the system-user interaction protocol, and all the layers that are in the middle implement the domain rules, which specify how data must be processed (for instance, how an employee paycheck must be computed). Typically, the data and presentation layers are separated by just one domain rule layer, called the business or application layer.
Figure 11.2: Layers of classic architectures
Each layer speaks a different language: the data layer speaks the language of relations among entities, the business layer speaks the language of domain experts, and the presentation layer speaks the language of users. So, when data and exceptions pass from one layer to another, they must be translated into the language of the destination layer.
That being said, how do microservices fit into a layered architecture? Are they adequate for the functionalities of all the layers or just some layers? Can a single microservice span several layers?
The last question is the easiest to answer: yes! In fact, we’ve already stated that microservices should store the data they need within their logical boundaries. Therefore, there are microservices that span the business and data layers, for sure.
However, since we said that each logical microservice can be implemented with several physical microservices for pure load-balancing reasons, one microservice might take care of encapsulating data used by another microservice that might remain confined in the data layer.
Moreover, we also said that while each microservice must have its own exclusive storage, it can also use external storage engines. This is shown in the diagram below:
Figure 11.3: External or internal storage
It is worth pointing out that the storage engine itself can be implemented as a set of physical microservices that are associated with no logical microservice since they may be considered part of the infrastructure.
This is the case, for instance, for storage engines based on the distributed Redis in-memory cache, where we use microservice facilities offered by the infrastructure to implement scalable one-master/many-read-only replicas, or sophisticated many-master/many-read-only replicas, distributed in-memory storage. Redis and Redis Cloud services are described in the Redis section of Chapter 12, Choosing Your Data Storage in the Cloud, while many-master/many-read-only replicas architectures are described in Chapter 20, Kubernetes. The diagram below shows how microservice-based many-master/many-read-only replicas storage engines work.
Figure 11.4: Many-master/many-read-only replicas storage engine
Each master has its associated read-only replicas. Storage updates can be passed just to masters that replicate their data to all their associated read-only replicas.
Each master takes care of a portion of the storage space, for instance, all products whose name starts with “A,” and so on. In this way, the load is balanced between all masters.
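As a sketch, routing updates by key range might look like this (illustrative Python; real storage engines usually shard by hashing the key rather than by its first letter):

```python
def choose_master(product_name, masters):
    """Each master owns a portion of the storage space; here, a range
    of initial letters ("A" -> master 0, "B" -> master 1, and so on)."""
    index = (ord(product_name[0].upper()) - ord("A")) % len(masters)
    return masters[index]

masters = ["master-0", "master-1", "master-2"]
# Updates go only to the owning master, which then replicates them
# to its associated read-only replicas.
```

Since every key belongs to exactly one master, no coordination between masters is needed for updates, while reads can be spread over the replicas.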
Thus, we may have business layer microservices, data layer microservices, and microservices that span both layers. So, what about the presentation layer?
The presentation layer
This layer can also fit into a microservice architecture if it is implemented on the server side – that is, if the whole graphical interface that interacts with the user is built on the server side rather than on the user’s client machine (mobile device, desktop, etc.).
When there are microservices that interact directly with the user, we speak of server-side implementation of the presentation layer since the HTML and/or all elements of the user interface are created by the frontend, which sends the response to the user.
These kinds of microservices are called frontend microservices, while microservices that do back-office work without interacting with the user are called worker microservices. The diagram below summarizes the frontend/worker organization.
Figure 11.5: Frontend and worker microservices
When, instead, the HTML and/or all elements of the user interface are generated on the user machine, we speak of client-side implementation of the presentation layer. The so-called single-page applications and mobile applications run the presentation layer on the client machine and interact with the application through communication interfaces exposed by dedicated microservices. These dedicated microservices are completely analogous to the frontend microservices depicted in Figure 11.5 and are called API gateways, to underline their role of exposing a public API for connecting client devices with the whole microservices infrastructure. Also, API gateways interact with worker microservices in a way that is completely analogous to frontend microservices.
Single-page applications and mobile/desktop client applications are discussed in Chapter 19, Client Frameworks: Blazor.
In a microservice architecture, when the presentation layer is a website, it can be implemented with a set of several microservices. However, if it requires heavy web servers and/or heavy frameworks, containerizing them may not be convenient. This decision must also consider the loss of performance that happens when containerizing the web server and the possible need for hardware firewalls between the web server and the remainder of the system.
ASP.NET Core is a lightweight framework that runs on the Kestrel web server, so it can be containerized efficiently and used as is in worker microservices. The usage of ASP.NET Core in the implementation of worker microservices is described in great detail in Chapter 14, Implementing Microservices with .NET.
Instead, frontend and/or high-traffic websites have more compelling security and load-balancing requirements that can be satisfied only by fully featured web servers. Accordingly, architectures based on microservices usually offer specialized components that take care of interfacing with the outside world. For instance, in Chapter 20, Kubernetes, we will see that in microservices-dedicated infrastructures like Kubernetes clusters, this role is played by so-called ingresses. These are fully featured web servers interfaced with the microservices infrastructure; thanks to this integration, the whole web server traffic is automatically routed to the interested microservices. More details on this will be given in Chapter 20, Kubernetes. The diagram below shows the role of ingresses.
Figure 11.6: Ingresses based on load-balanced web servers
Monolithic websites can be easily broken into load-balanced smaller subsites without microservice-specific technologies, but a microservice architecture can bring all the advantages of microservices into the construction of a single HTML page. More specifically, different microservices may take care of different areas of each HTML page. Microservices that cooperate in the construction of the HTML of application pages, and, in general, in the construction of any kind of user interface, are named micro-frontends.
When the HTML is created on the server side, the various micro-frontends create HTML chunks that are combined either on the server side or directly in the browser.
When, instead, the HTML is created directly on the client, each micro-frontend provides a different chunk of code to the client. These code chunks are run on the client machine, and each of them takes care of different pages/page areas. We will speak more of this kind of micro-frontend in Chapter 18, Implementing Frontend Microservices with ASP.NET Core.
Now that we’ve clarified which parts of a system can benefit from the adoption of microservices, we are ready to state the rules for deciding when and how to adopt them.
When is it worth considering microservice architectures?
Microservices can improve the implementation of both the business and data layers, but their adoption has some costs:
Allocating instances to nodes and scaling them has a cost in terms of cloud fees or internal infrastructures and licenses.
Splitting a unique process into smaller communicating processes increases communication costs and hardware needs, especially if the microservices are containerized.
Designing and testing software for a microservice architecture requires more time and increases engineering costs. In particular, making microservices resilient and ensuring that they adequately handle all possible failures, as well as verifying these features with integration tests, can increase the development time by more than one order of magnitude (that is, about 10 times).
So, when are microservices worth the cost of using them? Are there functionalities that must be implemented as microservices?
A rough answer to the second question is yes, when the application is big enough in terms of traffic and/or software complexity. In fact, as an application grows in complexity and its traffic increases, the costs of scaling it as a single unit, and of coordinating an ever-larger development team on it, grow as well; these costs would soon exceed the cost of microservice adoption, which in turn enables finer-grained scaling optimizations and a better organization of the development team.
Thus, if fine-grained scaling makes sense for our application, and if we can estimate the savings that fine-grained scaling and development give us, we can easily compute an overall application throughput limit that makes the adoption of microservices convenient.