Microservice costs can also be justified by an increase in the market value of our products/services. Since the microservice architecture allows us to implement each microservice with a technology that has been optimized for its use, the quality that’s added to our software may justify all or part of the microservice costs.
However, scaling and technology optimizations are not the only parameters to consider. Sometimes, we are forced to adopt a microservice architecture without being able to perform a detailed cost analysis.
If the size of the team that takes care of the CI/CD of the overall system grows too much, the organization and coordination of this big team cause difficulties and inefficiencies. In this type of situation, it is desirable to move to an architecture that breaks the whole CI/CD cycle into independent parts that can be taken care of by smaller teams.
Moreover, since these development costs are only justified by a high volume of requests, we probably have high traffic being processed by independent modules that have been developed by different teams. Therefore, scaling optimizations and the need to reduce interaction between development teams make the adoption of a microservice architecture very convenient.
From this, we may conclude that if the system and the development team grow too much, it is necessary to split the development team into smaller teams, each working on an efficient bounded context subsystem. It is very likely that, in a similar situation, a microservice architecture is the only possible option.
Another situation that forces the adoption of a microservice architecture is the integration of newer subparts with legacy subsystems based on different technologies: containerized microservices are the only efficient way to make the legacy system and the new subparts interact while the legacy subparts are gradually replaced with newer ones. Similarly, if our team is composed of developers with experience in different development stacks, an architecture based on containerized microservices may become a must.
In the next section, we will analyze the building blocks and tools that are available for the implementation of .NET-based microservices.
How does .NET deal with microservices?
The new .NET, which evolved from .NET Core, was conceived as a multi-platform framework light and fast enough to implement efficient microservices. In particular, ASP.NET Core is the ideal tool for implementing text-based REST and binary gRPC APIs to communicate with a microservice, since it can run efficiently with lightweight web servers such as Kestrel and is itself light and modular.
The whole .NET stack evolved with microservices as a strategic deployment platform in mind and has facilities and packages for building efficient and light HTTP and gRPC communication to ensure service resiliency and to handle long-running tasks. The following subsections describe some of the different tools or solutions that we can use to implement a .NET-based microservice architecture.
.NET communication facilities
Microservices need two kinds of communication channels.
The first communication channel receives external requests, either directly or through an API gateway. HTTP is the usual protocol for external communication due to the available web service standards and tools. .NET's main HTTP/gRPC communication facility is ASP.NET Core, since it is a lightweight HTTP/gRPC framework, which makes it ideal for implementing web APIs in small microservices. We will describe ASP.NET REST API apps in detail in Chapter 15, Applying Service-Oriented Architectures with .NET, and gRPC services in Chapter 14, Implementing Microservices with .NET. .NET also offers an efficient and modular HTTP client solution that is able to pool and reuse heavy connection objects; the HttpClient class will be described in more detail in Chapter 15.
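As a rough illustration of how this pooled client solution is typically consumed, the following minimal sketch (not taken from the book) registers a named HttpClient through IHttpClientFactory in an ASP.NET Core minimal API; the "catalog" client name, the base address, and the /products endpoint are hypothetical:

using System;
using System.Net.Http;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Named client: the factory pools and reuses the underlying connection handlers.
builder.Services.AddHttpClient("catalog", client =>
{
    client.BaseAddress = new Uri("https://catalog-service.example.com/");
    client.Timeout = TimeSpan.FromSeconds(5);
});

var app = builder.Build();

// Each call gets a lightweight HttpClient wrapper over a pooled handler.
app.MapGet("/products", async (IHttpClientFactory factory) =>
{
    var client = factory.CreateClient("catalog");
    return await client.GetStringAsync("api/products");
});

app.Run();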
The second channel is a different type of communication channel that is used to push updates to other microservices. In fact, we have already mentioned that communication between microservices cannot be triggered by an ongoing request, since a complex tree of blocking calls to other microservices would increase request latency to an unacceptable level. As a consequence, updates must not be requested immediately before they're used; they should be pushed whenever state changes take place.

Ideally, this kind of communication should be asynchronous to achieve acceptable performance. In fact, synchronous calls would block the sender while it waits for the result, thus increasing the idle time of each microservice. However, synchronous communication that just puts the request in a processing queue and then returns confirmation of successful delivery instead of the final result is acceptable if communication is fast enough (low communication latency and high bandwidth).

A publisher/subscriber communication would be preferable since, in this case, the sender and receiver don't need to know each other, thus increasing the microservices' independence. In fact, all the receivers that are interested in a certain type of communication merely need to register to receive a specific event, while senders just need to publish those events. All the wiring is performed by a service that takes care of queuing events and dispatching them to all the subscribers. The publisher/subscriber pattern was described in Chapter 6, Design Patterns and .NET 8 Implementation, along with other useful patterns.
While .NET doesn't directly offer tools for asynchronous communication or client/server tools that implement publisher/subscriber communication, Azure offers such a service with Azure Service Bus (https://docs.microsoft.com/en-us/azure/service-bus-messaging/). Azure Service Bus handles both queued asynchronous communication, through Azure Service Bus queues, and publisher/subscriber communication, through Azure Service Bus topics.
Once you've configured Azure Service Bus on the Azure portal, you can connect to it in order to send and receive messages/events through a client contained in the Microsoft.Azure.ServiceBus NuGet package.
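As a minimal sketch of the sending side (an assumption, not the book's example), the following code uses the QueueClient class from that package; the connection string placeholder and the purchase-events queue name are hypothetical:

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class PurchaseEventSender
{
    // Hypothetical values: take the real ones from your Azure Service Bus namespace.
    private const string ConnectionString = "<your-service-bus-connection-string>";
    private const string QueueName = "purchase-events";

    public static async Task SendAsync(string payload)
    {
        var queueClient = new QueueClient(ConnectionString, QueueName);
        // Messages carry a binary body, so the payload is encoded explicitly.
        var message = new Message(Encoding.UTF8.GetBytes(payload));
        await queueClient.SendAsync(message);
        await queueClient.CloseAsync();
    }
}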
Azure Service Bus has two types of communication: queue-based and topic-based. In queue-based communication, each message that’s placed in the queue by a sender is removed from the queue by the first receiver that pulls it from the queue. Topic-based communication, on the other hand, is an implementation of the publisher/subscriber pattern. Each topic has several subscriptions, and a different copy of each message sent to a topic can be pulled from each topic subscription.
The design flow is as follows:
1. Define an Azure Service Bus private namespace.
2. Get the root connection strings that were created by the Azure portal and/or define new connection strings with fewer privileges.
3. Define queues and/or topics where the senders will send their messages in binary format.
4. For each topic, define names for all the required subscriptions.
5. In the case of queue-based communication, the sender sends messages to a queue, and the receivers pull messages from the same queue. Each message is delivered to one receiver. That is, once a receiver gains access to the queue, it reads and removes one or more messages.
6. In the case of topic-based communication, each sender sends messages to a topic while each receiver pulls messages from its private subscription associated with that topic.
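To make the topic-based case more concrete, here is a hedged sketch of a receiver built with the SubscriptionClient class of the same Microsoft.Azure.ServiceBus package; the topic name, the invoicing subscription, and the connection string placeholder are hypothetical:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class PurchaseEventReceiver
{
    // Hypothetical values: they must match what was defined in the Azure portal.
    private const string ConnectionString = "<your-service-bus-connection-string>";
    private const string TopicName = "purchase-events";
    private const string SubscriptionName = "invoicing";

    public static void Start()
    {
        var subscriptionClient = new SubscriptionClient(
            ConnectionString, TopicName, SubscriptionName);

        var options = new MessageHandlerOptions(args =>
        {
            Console.WriteLine(args.Exception);
            return Task.CompletedTask;
        })
        {
            MaxConcurrentCalls = 1,
            AutoComplete = false
        };

        subscriptionClient.RegisterMessageHandler(async (message, token) =>
        {
            Console.WriteLine(Encoding.UTF8.GetString(message.Body));
            // Confirm processing so the message is removed from this subscription only.
            await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
        }, options);
    }
}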
There are also other commercial and free open-source alternatives to Azure Service Bus, such as NServiceBus (https://particular.net/nservicebus), MassTransit (https://masstransit-project.com/), and Brighter (https://www.goparamore.io/). They enhance existing brokers (like Azure Service Bus itself) with higher-level functionalities.
There is also a completely independent option that can be used on on-premises platforms: RabbitMQ. It is free and open source and can be installed locally, on a virtual machine, or in a Docker container. You can then connect to it through the client contained in the RabbitMQ.Client NuGet package.
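As a rough sketch of how a sender might look with the RabbitMQ.Client package (version 6.x API; the host name, queue name, and payload are assumptions, not values from the book):

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Declaring the queue is idempotent: it is created only if it doesn't exist yet.
channel.QueueDeclare(queue: "purchase-events",
                     durable: true,
                     exclusive: false,
                     autoDelete: false,
                     arguments: null);

var body = Encoding.UTF8.GetBytes("{\"orderId\": 42}");
channel.BasicPublish(exchange: "",
                     routingKey: "purchase-events",
                     basicProperties: null,
                     body: body);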
The functionalities of RabbitMQ are similar to the ones offered by Azure Service Bus, but you have to take care of more implementation details, such as serialization, reliable messaging, and error handling, while Azure Service Bus takes care of all the low-level operations and offers you a simpler interface. However, there are clients that build a more powerful abstraction on top of RabbitMQ, such as EasyNetQ. The publisher/subscriber-based communication pattern used by both Azure Service Bus and RabbitMQ was described in Chapter 6, Design Patterns and .NET 8 Implementation. RabbitMQ will be described in more detail in Chapter 14, Implementing Microservices with .NET.
Resilient task execution
Resilient communication and, in general, resilient task execution can be implemented easily with the help of a .NET library called Polly, whose project is a member of the .NET Foundation. Polly is available through the Polly NuGet package.
In Polly, you define policies and then execute tasks in the context of those policies, as follows:
var myPolicy = Policy
    .Handle<HttpRequestException>()
    .Or<OperationCanceledException>()
    .RetryAsync(3);
...
...
await myPolicy.ExecuteAsync(() =>
{
    //your code here
});
The first part of each policy specifies the exceptions that must be handled. Then, you specify what to do when one of those exceptions is captured. In the preceding code, the ExecuteAsync call is retried up to three times if a failure is reported by either an HttpRequestException or an OperationCanceledException.
The following is the implementation of an exponential retry policy:
var retryPolicy = Policy
    ...
    //Exceptions to handle here
    .WaitAndRetryAsync(6,
        retryAttempt => TimeSpan.FromSeconds(Math.Pow(2,
            retryAttempt)));
The first argument of WaitAndRetryAsync specifies that a maximum of six retries is performed in the event of failure. The lambda function passed as the second argument specifies how much time to wait before the next attempt. In this specific example, the wait time grows exponentially with the number of attempts as a power of 2 (2 seconds for the first retry, 4 seconds for the second retry, and so on).
The following is a simple circuit breaker policy:
var breakerPolicy = Policy
    .Handle<SomeExceptionType>()
    .CircuitBreakerAsync(6, TimeSpan.FromMinutes(1));
After six consecutive failures, the task can't be executed for one minute; during that time, any attempt to execute it fails immediately with an exception.
The following is the implementation of the Bulkhead Isolation policy (see the Microservices design principles section for more information):
Policy
    .BulkheadAsync(10, 15)
A maximum of 10 parallel executions is allowed through the ExecuteAsync method. Further tasks are inserted in an execution queue, which has a limit of 15 tasks. If the queue limit is exceeded, an exception is thrown.
For the Bulkhead Isolation policy to work properly and, in general, for every strategy to work properly, task executions must be triggered through the same policy instance; otherwise, Polly is unable to count how many executions of a specific task are active.
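One simple way to honor this constraint, sketched here as an assumption rather than the book's code, is to keep the policy in a single shared field that every call site executes through:

using Polly;

public static class SharedPolicies
{
    // A single bulkhead instance shared by all callers, so Polly can count
    // how many executions are currently active.
    public static readonly IAsyncPolicy Bulkhead = Policy.BulkheadAsync(10, 15);
}

// Every caller goes through the same instance:
// await SharedPolicies.Bulkhead.ExecuteAsync(() => DoWorkAsync());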
Policies can be combined with the Wrap method (WrapAsync for asynchronous policies such as the preceding ones):
var combinedPolicy = Policy
    .WrapAsync(retryPolicy, breakerPolicy);
Polly offers several more options, such as generic methods for tasks that return a specific type, timeout policies, task result caching, the ability to define custom policies, and so on. It is also possible to configure Polly as part of an HttpClient definition in the dependency injection section of any ASP.NET Core or .NET application, which makes it straightforward to define resilient clients.
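A hedged sketch of this wiring, assuming the Microsoft.Extensions.Http.Polly package and a hypothetical "orders" client registered on a WebApplicationBuilder named builder, might look as follows:

using System;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;

// Every request sent through the "orders" client is retried up to three times
// with exponential backoff on transient HTTP failures.
builder.Services.AddHttpClient("orders")
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError() // 5xx, 408, and HttpRequestException
        .WaitAndRetryAsync(3,
            attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))));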
Polly’s official documentation can be found in its GitHub repository here: https://github.com/App-vNext/Polly.
The practical usage of Polly is explained in the A worker microservice with ASP.NET Core section of Chapter 21, Case Study.
The resilience and robustness provided by tools like Polly are crucial components of microservice architecture, particularly when managing complex tasks and processes.
This brings us to another fundamental aspect of microservices: the implementation of generic hosts.
Using generic hosts
Each microservice may need to run several independent threads, with each performing a different operation on requests received. Such threads need several resources, such as database connections, communication channels, specialized modules that perform complex operations, and so on. Moreover, all processing threads must be adequately initialized when the microservice is started and gracefully stopped when the microservice is stopped as a consequence of either load-balancing or errors.
All of these needs led the .NET team to conceive and implement hosted services and hosts. A host creates an adequate environment for running several tasks, known as hosted services, and provides them with resources, common settings, and a graceful start/stop.
The concept of a web host was initially conceived to implement the ASP.NET Core web framework, but, with effect from .NET Core 2.1, the host concept was extended to all .NET applications.
At the time of writing this book, a Host is automatically created for you in any ASP.NET Core, Blazor, and Worker Service project. The simplest way to test .NET Host features is to select a Service -> Worker Service project.
Figure 11.7: Creating a Worker Service project in Visual Studio
All features related to the concept of a Host are contained in the Microsoft.Extensions.Hosting NuGet package.
Program.cs contains some skeleton code for configuring the host with a fluent interface, starting with the CreateDefaultBuilder method of the Host class. The final step of this configuration is calling the Build method, which assembles the actual host with all the configuration information we provided:
...
var host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        //some configuration
        ...
    })
    .Build();
...
Host configuration includes defining the common resources, defining the default folder for files, loading the configuration parameters from several sources (JSON files, environment variables, and any arguments that are passed to the application), and declaring all the hosted services.
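As an illustration of the last point, the following is a minimal hosted-service sketch (an assumption, not the book's example); the CleanupService name and the five-minute interval are hypothetical, and the service would be declared inside the ConfigureServices callback shown above:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class CleanupService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // stoppingToken is signaled when the host shuts down, ending this method.
        while (!stoppingToken.IsCancellationRequested)
        {
            //do the periodic work here
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}

// Inside ConfigureServices:
// services.AddHostedService<CleanupService>();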
It is worth pointing out that ASP.NET Core and Blazor projects use methods that perform pre-configuration of the Host, including several of the tasks listed previously.
Then, the host is started, which causes all the hosted services to be started:
await host.RunAsync();
The program remains blocked on the preceding instruction until the host is shut down. The host is automatically shut down when the operating system kills the process. However, the host can also be shut down manually either by one of the hosted services or externally by calling await host.StopAsync(timeout). Here, timeout is a time span defining the maximum time to wait for the hosted services to stop gracefully. After this time, all the hosted services are aborted if they haven’t been terminated. We will explain how a hosted service can shut down the host later on in this subsection.
When host.RunAsync is launched from within another thread instead of from Program.cs, the fact that the launching thread is being shut down can be signaled with a cancellationToken passed to RunAsync:
await host.RunAsync(cancellationToken);
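A possible wiring for this scenario, sketched here as an assumption rather than the book's code, launches the host in a background task and signals its shutdown through a CancellationTokenSource:

using var cancellationSource = new CancellationTokenSource();
var hostTask = Task.Run(() => host.RunAsync(cancellationSource.Token));

//...other work performed by the launching thread...

// Request shutdown and wait for the hosted services to stop gracefully.
cancellationSource.Cancel();
await hostTask;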