Baseline performance scales linearly at 3 IOPS per GiB of volume size\. AWS designs `gp2` volumes to deliver their provisioned performance 99% of the time\. A `gp2` volume can range in size from 1 GiB to 16 TiB\.
The performance of `gp2` volumes is tied to volume size, which determines the baseline performance level of the volume and how quickly it accumulates I/O credits; larger volumes have higher baseline performance levels and accumulate I/O credits faster\. I/O credits represent the available bandwidth that your `gp2` volume can use to burst large amounts of I/O when more than the baseline performance is needed\. The more credits your volume has for I/O, the more time it can burst beyond its baseline performance level and the better it performs when more performance is needed\. The following diagram shows the burst\-bucket behavior for `gp2`\.
![\[gp2 burst bucket\]](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/images/gp2-burst-bucket.png)
Each volume receives an initial I/O credit balance of 5\.4 million I/O credits, which is enough to sustain the maximum burst performance of 3,000 IOPS for 30 minutes\. This initial credit balance is designed to provide a fast initial boot cycle for boot volumes and to provide a good bootstrapping experience for other applications\. Volumes earn I/O credits at the baseline performance rate of 3 IOPS per GiB of volume size\. For example, a 100 GiB `gp2` volume has a baseline performance of 300 IOPS\.
![\[Comparing baseline performance and burst IOPS\]](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/images/gp2_iops_1.png)
When your volume requires more than the baseline performance I/O level, it draws on I/O credits in the credit balance to burst to the required performance level, up to a maximum of 3,000 IOPS\. When your volume uses fewer I/O credits than it earns in a second, unused I/O credits are added to the I/O credit balance\. The maximum I/O credit balance for a volume is equal to the initial credit balance \(5\.4 million I/O credits\)\.
When the baseline performance of a volume is higher than the maximum burst performance, I/O credits are never spent\. If the volume is attached to an instance built on the [Nitro System](instance-types.md#ec2-nitro-instances), the burst balance is not reported\. For other instances, the reported burst balance is 100%\.
The burst duration of a volume is dependent on the size of the volume, the burst IOPS required, and the credit balance when the burst begins\. This is shown in the following equation:
```
(Credit balance)
Burst duration = ------------------------------------
(Burst IOPS) - 3(Volume size in GiB)
```
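For example, a 100 GiB volume that starts with a full credit balance and drives a sustained 3,000 IOPS can burst for 2,000 seconds, which matches the 100 GiB row of the table that follows:
```
                   5,400,000 I/O credits
Burst duration = --------------------------- = 2,000 seconds
                 3,000 IOPS - 3(100 GiB)
```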
The following table lists several volume sizes and the associated baseline performance of the volume \(which is also the rate at which it accumulates I/O credits\), the burst duration at the 3,000 IOPS maximum \(when starting with a full credit balance\), and the time in seconds that the volume would take to refill an empty credit balance\.
| Volume size \(GiB\) | Baseline performance \(IOPS\) | Burst duration when driving sustained 3,000 IOPS \(seconds\) | Seconds to fill empty credit balance when driving no I/O |
| --- | --- | --- | --- |
| 1 | 100 | 1,802 | 54,000 |
| 100 | 300 | 2,000 | 18,000 |
| 250 | 750 | 2,400 | 7,200 |
| 334 \(Min\. size for max throughput\) | 1,002 | 2,703 | 5,389 |
| 500 | 1,500 | 3,600 | 3,600 |
| 750 | 2,250 | 7,200 | 2,400 |
| 1,000 | 3,000 | N/A\* | N/A\* |
| 5,334 \(Min\. size for max IOPS\) | 16,000 | N/A\* | N/A\* |
| 16,384 \(16 TiB, max volume size\) | 16,000 | N/A\* | N/A\* |
\* The baseline performance of the volume exceeds the maximum burst performance\.
**What happens if I empty my I/O credit balance?**
If your `gp2` volume uses all of its I/O credit balance, the maximum IOPS performance of the volume remains at the baseline IOPS performance level \(the rate at which your volume earns credits\) and the volume's maximum throughput is reduced to the baseline IOPS multiplied by the maximum I/O size\. Throughput can never exceed 250 MiB/s\. When I/O demand drops below the baseline level and unused credits are added to the I/O credit balance, the maximum IOPS performance of the volume again exceeds the baseline\. For example, a 100 GiB `gp2` volume with an empty credit balance has a baseline performance of 300 IOPS and a throughput limit of 75 MiB/s \(300 I/O operations per second \* 256 KiB per I/O operation = 75 MiB/s\)\. The larger a volume is, the greater the baseline performance is and the faster it replenishes the credit balance\. For more information about how IOPS are measured, see [I/O characteristics and monitoring](ebs-io-characteristics.md)\.
If you notice that your volume performance is frequently limited to the baseline level \(due to an empty I/O credit balance\), you should consider using a larger `gp2` volume \(with a higher baseline performance level\) or switching to an `io1` or `io2` volume for workloads that require sustained IOPS performance greater than 16,000 IOPS\.
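If you go that route, one option is the Elastic Volumes feature, which lets you grow a volume or change its type in place\. The commands below are a sketch only; the volume ID, target size, and IOPS values are placeholders, not recommendations from this guide:
```
# Raise the gp2 baseline to 3,000 IOPS by growing the volume to 1,000 GiB
aws ec2 modify-volume --volume-id vol-1234567890abcdef0 --size 1000

# Or convert the volume to io1 and provision IOPS directly
# (io1 requires at least 320 GiB for 16,000 IOPS at the 50:1 ratio)
aws ec2 modify-volume --volume-id vol-1234567890abcdef0 --volume-type io1 --iops 16000
```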
For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see [Monitoring the burst bucket balance for `gp2`, `st1`, and `sc1` volumes](#monitoring_burstbucket)\.
Throughput for a `gp2` volume can be calculated using the following formula, up to the throughput limit of 250 MiB/s:
```
Throughput in MiB/s = ((Volume size in GiB) × (IOPS per GiB) × (I/O size in KiB))
```
Assuming V = volume size, I = I/O size, R = I/O rate, and T = throughput, this can be simplified to:
```
T = VIR
```
The smallest volume size that achieves the maximum throughput is given by:
```
T
V = -----
I R
250 MiB/s
= ---------------------
(256 KiB)(3 IOPS/GiB)
[(250)(2^20)(Bytes)]/s
= ------------------------------------------
(256)(2^10)(Bytes)([3 IOP/s]/[(2^30)(Bytes)])
(250)(2^20)(2^30)(Bytes)
= ------------------------
(256)(2^10)(3)
= 357,913,941,333 Bytes
= 333⅓ GiB (334 GiB in practice because volumes are provisioned in whole gibibytes)
```
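If you want to provision a `gp2` volume at this minimum size for maximum throughput, a call such as the following would do it \(the Availability Zone is a placeholder; choose one in your Region\):
```
aws ec2 create-volume --volume-type gp2 --size 334 --availability-zone us-east-1a
```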
Provisioned IOPS SSD \(`io1` and `io2`\) volumes are designed to meet the needs of I/O\-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency\. Unlike `gp2`, which uses a bucket and credit model to calculate performance, `io1` and `io2` volumes allow you to specify a consistent IOPS rate when you create volumes, and Amazon EBS delivers the provisioned performance 99\.9 percent of the time\.
`io1` volumes are designed to provide 99\.8 to 99\.9 percent volume durability with an annual failure rate \(AFR\) no higher than 0\.2 percent, which translates to a maximum of two volume failures per 1,000 running volumes over a one\-year period\. `io2` volumes are designed to provide 99\.999 percent volume durability with an AFR no higher than 0\.001 percent, which translates to a single volume failure per 100,000 running volumes over a one\-year period\.
`io1` and `io2` volumes can range in size from 4 GiB to 16 TiB\. You can provision from 100 IOPS up to 64,000 IOPS per volume on [Instances built on the Nitro System](instance-types.md#ec2-nitro-instances) and up to 32,000 on other instances\. The maximum ratio of provisioned IOPS to requested volume size \(in GiB\) is 50:1 for `io1` volumes, and 500:1 for `io2` volumes\. For example, a 100 GiB `io1` volume can be provisioned with up to 5,000 IOPS, while a 100 GiB `io2` volume can be provisioned with up to 50,000 IOPS\. On a supported instance type, the following volume sizes allow provisioning up to the 64,000 IOPS maximum \(a sample `create-volume` call follows the list\):
+ `io1` volume 1,280 GiB in size or greater \(50 × 1,280 GiB = 64,000 IOPS\)
+ `io2` volume 128 GiB in size or greater \(500 × 128 GiB = 64,000 IOPS\)
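As a sketch of how these ratios translate into a provisioning call \(the Availability Zone is a placeholder\), the following creates a 100 GiB `io2` volume at its 500:1 maximum of 50,000 IOPS:
```
aws ec2 create-volume --volume-type io2 --size 100 --iops 50000 --availability-zone us-east-1a
```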
`io1` and `io2` volumes provisioned with up to 32,000 IOPS support a maximum I/O size of 256 KiB and yield as much as 500 MiB/s of throughput\. With the I/O size at the maximum, peak throughput is reached at 2,000 IOPS\. A volume provisioned with more than 32,000 IOPS \(up to the cap of 64,000 IOPS\) supports a maximum I/O size of 16 KiB and yields as much as 1,000 MiB/s of throughput\. The following graph illustrates these performance characteristics:
![\[Throughput limits for io1 volumes\]](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/images/io1_throughput.png)
Your per\-I/O latency experience depends on the provisioned IOPS and on your workload profile\. For the best I/O latency experience, ensure that you provision IOPS to meet the I/O profile of your workload\.
**Note**
Some AWS accounts created before 2012 might have access to Availability Zones in us\-west\-1 or ap\-northeast\-1 that do not support Provisioned IOPS SSD \(`io1`\) volumes\. If you are unable to create an `io1` volume \(or launch an instance with an `io1` volume in its block device mapping\) in one of these Regions, try a different Availability Zone in the Region\. You can verify that an Availability Zone supports `io1` volumes by creating a 4 GiB `io1` volume in that zone\.
Throughput Optimized HDD \(`st1`\) volumes provide low\-cost magnetic storage that defines performance in terms of throughput rather than IOPS\. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing\. Bootable `st1` volumes are not supported\.
Throughput Optimized HDD \(`st1`\) volumes, though similar to Cold HDD \(`sc1`\) volumes, are designed to support *frequently* accessed data\.
This volume type is optimized for workloads involving large, sequential I/O, and we recommend that customers with workloads performing small, random I/O use `gp2`\. For more information, see [**Inefficiency of small read/writes on HDD**](#inefficiency)\.
Like `gp2`, `st1` uses a burst\-bucket model for performance\. Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits\. Volume size also determines the burst throughput of your volume, which is the rate at which you can spend credits when they are available\. Larger volumes have higher baseline and burst throughput\. The more credits your volume has, the longer it can drive I/O at the burst level\.
The following diagram shows the burst\-bucket behavior for `st1`\.
![\[st1 burst bucket\]](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/images/st1-burst-bucket.png)
Subject to throughput and throughput\-credit caps, the available throughput of an `st1` volume is expressed by the following formula:
```
(Volume size) x (Credit accumulation rate per TiB) = Throughput
```
For a 1\-TiB `st1` volume, burst throughput is limited to 250 MiB/s, the bucket fills with credits at 40 MiB/s, and it can hold up to 1 TiB\-worth of credits\.
Larger volumes scale these limits linearly, with throughput capped at a maximum of 500 MiB/s\. After the bucket is depleted, throughput is limited to the baseline rate of 40 MiB/s per TiB\.
On volume sizes ranging from 0\.5 to 16 TiB, baseline throughput varies from 20 MiB/s to a cap of 500 MiB/s, which is reached at 12\.5 TiB as follows:
```
40 MiB/s
12.5 TiB x ---------- = 500 MiB/s
1 TiB
```
Burst throughput varies from 125 MiB/s to a cap of 500 MiB/s, which is reached at 2 TiB as follows:
```
250 MiB/s
2 TiB x ---------- = 500 MiB/s
1 TiB
```
The following table states the full range of base and burst throughput values for `st1`:
| Volume size \(TiB\) | ST1 base throughput \(MiB/s\) | ST1 burst throughput \(MiB/s\) |
| --- | --- | --- |
| 0\.5 | 20 | 125 |
| 1 | 40 | 250 |
| 2 | 80 | 500 |
| 3 | 120 | 500 |
| 4 | 160 | 500 |
| 5 | 200 | 500 |
| 6 | 240 | 500 |
| 7 | 280 | 500 |
| 8 | 320 | 500 |
| 9 | 360 | 500 |
| 10 | 400 | 500 |
| 11 | 440 | 500 |
| 12 | 480 | 500 |
| 12\.5 | 500 | 500 |
| 13 | 500 | 500 |
| 14 | 500 | 500 |
| 15 | 500 | 500 |
| 16 | 500 | 500 |
The following diagram plots the table values:
![\[Comparing st1 base and burst throughput\]](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/images/st1_base_v_burst.png)
**Note**
When you create a snapshot of a Throughput Optimized HDD \(`st1`\) volume, performance may drop as far as the volume's baseline value while the snapshot is in progress\.
For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see [Monitoring the burst bucket balance for `gp2`, `st1`, and `sc1` volumes](#monitoring_burstbucket)\.
Cold HDD \(`sc1`\) volumes provide low\-cost magnetic storage that defines performance in terms of throughput rather than IOPS\. With a lower throughput limit than `st1`, `sc1` is a good fit for large, sequential cold\-data workloads\. If you require infrequent access to your data and are looking to save costs, `sc1` provides inexpensive block storage\. Bootable `sc1` volumes are not supported\.
Cold HDD \(`sc1`\) volumes, though similar to Throughput Optimized HDD \(`st1`\) volumes, are designed to support *infrequently* accessed data\.
**Note**
This volume type is optimized for workloads involving large, sequential I/O, and we recommend that customers with workloads performing small, random I/O use `gp2`\. For more information, see [**Inefficiency of small read/writes on HDD**](#inefficiency)\.
Like `gp2`, `sc1` uses a burst\-bucket model for performance\. Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits\. Volume size also determines the burst throughput of your volume, which is the rate at which you can spend credits when they are available\. Larger volumes have higher baseline and burst throughput\. The more credits your volume has, the longer it can drive I/O at the burst level\.
![\[sc1 burst bucket\]](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/images/sc1-burst-bucket.png)
Subject to throughput and throughput\-credit caps, the available throughput of an `sc1` volume is expressed by the following formula:
```
(Volume size) x (Credit accumulation rate per TiB) = Throughput
```
For a 1\-TiB `sc1` volume, burst throughput is limited to 80 MiB/s, the bucket fills with credits at 12 MiB/s, and it can hold up to 1 TiB\-worth of credits\.
Larger volumes scale these limits linearly, with throughput capped at a maximum of 250 MiB/s\. After the bucket is depleted, throughput is limited to the baseline rate of 12 MiB/s per TiB\.
On volume sizes ranging from 0\.5 to 16 TiB, baseline throughput varies from 6 MiB/s to a maximum of 192 MiB/s, which is reached at 16 TiB as follows:
```
12 MiB/s
16 TiB x ---------- = 192 MiB/s
1 TiB
```
Burst throughput varies from 40 MiB/s to a cap of 250 MiB/s, which is reached at 3\.125 TiB as follows:
```
80 MiB/s
3.125 TiB x ----------- = 250 MiB/s
1 TiB
```
The following table states the full range of base and burst throughput values for `sc1`:
| Volume Size \(TiB\) | SC1 Base Throughput \(MiB/s\) | SC1 Burst Throughput \(MiB/s\) |
| --- | --- | --- |
| 0\.5 | 6 | 40 |
| 1 | 12 | 80 |
| 2 | 24 | 160 |
| 3 | 36 | 240 |
| 3\.125 | 37\.5 | 250 |
| 4 | 48 | 250 |
| 5 | 60 | 250 |
| 6 | 72 | 250 |
| 7 | 84 | 250 |
| 8 | 96 | 250 |
| 9 | 108 | 250 |
| 10 | 120 | 250 |
| 11 | 132 | 250 |
| 12 | 144 | 250 |
| 13 | 156 | 250 |
| 14 | 168 | 250 |
| 15 | 180 | 250 |
| 16 | 192 | 250 |
The following diagram plots the table values:
![\[Comparing sc1 base and burst throughput\]](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/images/sc1_base_v_burst.png)
**Note**
When you create a snapshot of a Cold HDD \(`sc1`\) volume, performance may drop as far as the volume's baseline value while the snapshot is in progress\.
For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see [Monitoring the burst bucket balance for `gp2`, `st1`, and `sc1` volumes](#monitoring_burstbucket)\.
Magnetic volumes are backed by magnetic drives and are suited for workloads where data is accessed infrequently, and scenarios where low\-cost storage for small volume sizes is important\. These volumes deliver approximately 100 IOPS on average, with burst capability of up to hundreds of IOPS, and they can range in size from 1 GiB to 1 TiB\.
**Note**
Magnetic is a previous generation volume type\. For new applications, we recommend using one of the newer volume types\. For more information, see [Previous Generation Volumes](http://aws.amazon.com/ebs/previous-generation/)\.
For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see [Monitoring the burst bucket balance for `gp2`, `st1`, and `sc1` volumes](#monitoring_burstbucket)\.
For optimal throughput results using HDD volumes, plan your workloads with the following considerations in mind\.
The `st1` and `sc1` bucket sizes vary according to volume size, and a full bucket contains enough tokens for a full volume scan\. However, larger `st1` and `sc1` volumes take longer for the volume scan to complete due to per\-instance and per\-volume throughput limits\. Volumes attached to smaller instances are limited to the per\-instance throughput rather than the `st1` or `sc1` throughput limits\.
Both `st1` and `sc1` are designed for performance consistency of 90% of burst throughput 99% of the time\. Non\-compliant periods are approximately uniformly distributed, targeting 99% of expected total throughput each hour\.
The following table shows ideal scan times for volumes of various size, assuming full buckets and sufficient instance throughput\.
In general, scan times are expressed by this formula:
```
Volume size
------------- = Scan time
Throughput
```
For example, taking the performance consistency guarantees and other optimizations into account, an `st1` customer with a 5\-TiB volume can expect to complete a full volume scan in 2\.91 to 3\.27 hours\.
```
   5 TiB             5 TiB
----------- = ------------------- = 10,486 s = 2.91 hours (optimal)
 500 MiB/s     0.00047684 TiB/s

 2.91 hours
-------------- = 3.27 hours (minimum expected)
(0.90)(0.99)  <-- From expected performance of 90% of burst 99% of the time
```
Similarly, an `sc1` customer with a 5\-TiB volume can expect to complete a full volume scan in 5\.83 to 6\.54 hours\.
```
5 TiB
------------------- = 20,972 s = 5.83 hours (optimal)
0.000238418 TiB/s
5.83 hours
-------------- = 6.54 hours (minimum expected)
(0.90)(0.99)
```
| Volume size \(TiB\) | ST1 scan time with burst \(hours\)\* | SC1 scan time with burst \(hours\)\* |
| --- | --- | --- |
| 1 | 1\.17 | 3\.64 |
| 2 | 1\.17 | 3\.64 |
| 3 | 1\.75 | 3\.64 |
| 4 | 2\.33 | 4\.66 |
| 5 | 2\.91 | 5\.83 |
| 6 | 3\.50 | 6\.99 |
| 7 | 4\.08 | 8\.16 |
| 8 | 4\.66 | 9\.32 |
| 9 | 5\.24 | 10\.49 |
| 10 | 5\.83 | 11\.65 |
| 11 | 6\.41 | 12\.82 |
| 12 | 6\.99 | 13\.98 |
| 13 | 7\.57 | 15\.15 |
| 14 | 8\.16 | 16\.31 |
| 15 | 8\.74 | 17\.48 |
| 16 | 9\.32 | 18\.64 |
\* These scan times assume an average queue depth \(rounded to the nearest whole number\) of four or more when performing 1 MiB of sequential I/O\.
Therefore, if you have a throughput\-oriented workload that needs to complete scans quickly \(up to 500 MiB/s\), or requires several full volume scans a day, use `st1`\. If you are optimizing for cost, your data is relatively infrequently accessed, and you don’t need more than 250 MiB/s of scanning performance, then use `sc1`\.
The performance model for `st1` and `sc1` volumes is optimized for sequential I/Os, favoring high\-throughput workloads, offering acceptable performance on workloads with mixed IOPS and throughput, and discouraging workloads with small, random I/O\.
For example, an I/O request of 1 MiB or less counts as a 1 MiB I/O credit\. However, if the I/Os are sequential, they are merged into 1 MiB I/O blocks and count only as a 1 MiB I/O credit\.
Throughput for `st1` and `sc1` volumes is always determined by the smaller of the following:
+ Throughput limits of the volume
+ Throughput limits of the instance
As for all Amazon EBS volumes, we recommend that you select an appropriate EBS\-optimized EC2 instance in order to avoid network bottlenecks\. For more information, see [Amazon EBS–optimized instances](ebs-optimized.md)\.
You can monitor the burst\-bucket level for `gp2`, `st1`, and `sc1` volumes using the EBS `BurstBalance` metric available in Amazon CloudWatch\. This metric shows the percentage of I/O credits \(for `gp2`\) or throughput credits \(for `st1` and `sc1`\) remaining in the burst bucket\. For more information about the `BurstBalance` metric and other metrics related to I/O, see [I/O characteristics and monitoring](ebs-io-characteristics.md)\. CloudWatch also allows you to set an alarm that notifies you when the `BurstBalance` value falls to a certain level\. For more information, see [Creating Amazon CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html)\.
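As a sketch of such an alarm \(the volume ID, SNS topic ARN, and 20 percent threshold are placeholder values\), the following notifies you when a volume's `BurstBalance` falls to 20 percent or lower:
```
aws cloudwatch put-metric-alarm \
    --alarm-name gp2-burst-balance-low \
    --namespace AWS/EBS --metric-name BurstBalance \
    --dimensions Name=VolumeId,Value=vol-1234567890abcdef0 \
    --statistic Average --period 300 --evaluation-periods 1 \
    --threshold 20 --comparison-operator LessThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-sns-topic
```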
With CloudWatch metrics, you can efficiently monitor your Capacity Reservations and identify unused capacity by setting CloudWatch alarms to notify you when usage thresholds are met\. This can help you maintain a constant Capacity Reservation volume and achieve a higher level of utilization\.
On\-Demand Capacity Reservations send metric data to CloudWatch every five minutes\. Metrics are not supported for Capacity Reservations that are active for less than five minutes\.
For more information about viewing metrics in the CloudWatch console, see [Using Amazon CloudWatch Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html)\. For more information about creating alarms, see [Creating Amazon CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html)\.
**Topics**
+ [Capacity Reservation usage metrics](#capacity-reservation-usage-metrics)
+ [Capacity Reservation metric dimensions](#capacity-reservation-dimensions)
+ [Viewing CloudWatch metrics for Capacity Reservations](#viewing-capacity-reservation-metrics)
The `AWS/EC2CapacityReservations` namespace includes the following usage metrics you can use to monitor and maintain on\-demand capacity within thresholds you specify for your reservation\.
| Metric | Description |
| --- | --- |
| UsedInstanceCount | The number of instances that are currently in use\. Unit: Count |
| AvailableInstanceCount | The number of instances that are available\. Unit: Count |
| TotalInstanceCount | The total number of instances you have reserved\. Unit: Count |
| InstanceUtilization | The percentage of reserved capacity instances that are currently in use\. Unit: Percent |
You can use the following dimensions to refine the metrics listed in the previous table\.
| Dimension | Description |
| --- | --- |
| CapacityReservationId | This globally unique dimension filters the data you request for the identified capacity reservation only\. |
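For example, a call like the following retrieves the average `InstanceUtilization` for a single reservation over a 24\-hour window \(the reservation ID and time range are placeholders\):
```
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2CapacityReservations \
    --metric-name InstanceUtilization \
    --dimensions Name=CapacityReservationId,Value=cr-1234567890abcdef0 \
    --statistics Average --period 300 \
    --start-time 2020-06-01T00:00:00 --end-time 2020-06-02T00:00:00
```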
Metrics are grouped first by the service namespace, and then by the supported dimensions\. You can use the following procedures to view the metrics for your Capacity Reservations\.
**To view Capacity Reservation metrics using the CloudWatch console**
1. Open the CloudWatch console at [https://console\.aws\.amazon\.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/)\.
1. If necessary, change the Region\. From the navigation bar, select the Region where your Capacity Reservation resides\. For more information, see [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html)\.
1. In the navigation pane, choose **Metrics**\.
1. For **All metrics**, choose **EC2 Capacity Reservations**\.
1. Choose the metric dimension **By Capacity Reservation**\. Metrics will be grouped by `CapacityReservationId`\.
1. To sort the metrics, use the column heading\. To graph a metric, select the check box next to the metric\.
**To view Capacity Reservation metrics \(AWS CLI\)**
Use the following [list\-metrics](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/list-metrics.html) command:
```
aws cloudwatch list-metrics --namespace "AWS/EC2CapacityReservations"
```
The following examples show launch configurations that you can use with the [request\-spot\-instances](https://docs.aws.amazon.com/cli/latest/reference/ec2/request-spot-instances.html) command to create a Spot Instance request\. For more information, see [Creating a Spot Instance request](spot-requests.md#using-spot-instances-request)\.
1. [Launch Spot Instances](#spot-launch-specification1)
1. [Launch Spot Instances in the specified Availability Zone](#spot-launch-specification2)
1. [Launch Spot Instances in the specified subnet](#spot-launch-specification3)
1. [Launch a Dedicated Spot Instance](#spot-launch-specification4)
The following example does not include an Availability Zone or subnet\. Amazon EC2 selects an Availability Zone for you\. Amazon EC2 launches the instances in the default subnet of the selected Availability Zone\.
```
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroupIds": [ "sg-1a2b3c4d" ],
"InstanceType": "m3.medium",
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
```
The following example includes an Availability Zone\. Amazon EC2 launches the instances in the default subnet of the specified Availability Zone\.
```
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroupIds": [ "sg-1a2b3c4d" ],
"InstanceType": "m3.medium",
"Placement": {
"AvailabilityZone": "us-west-2a"
},
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
```
The following example includes a subnet\. Amazon EC2 launches the instances in the specified subnet\. If the VPC is a nondefault VPC, the instance does not receive a public IPv4 address by default\.
```
{
"ImageId": "ami-1a2b3c4d",
"SecurityGroupIds": [ "sg-1a2b3c4d" ],
"InstanceType": "m3.medium",
"SubnetId": "subnet-1a2b3c4d",
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
```
To assign a public IPv4 address to an instance in a nondefault VPC, specify the `AssociatePublicIpAddress` field as shown in the following example\. When you specify a network interface, you must include the subnet ID and security group ID using the network interface, rather than using the `SubnetId` and `SecurityGroupIds` fields shown in example 3\.
```
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"InstanceType": "m3.medium",
"NetworkInterfaces": [
{
"DeviceIndex": 0,
"SubnetId": "subnet-1a2b3c4d",
"Groups": [ "sg-1a2b3c4d" ],
"AssociatePublicIpAddress": true
}
],
"IamInstanceProfile": {
"Arn": "arn:aws:iam::123456789012:instance-profile/my-iam-role"
}
}
```
The following example requests a Spot Instance with a tenancy of `dedicated`\. A Dedicated Spot Instance must be launched in a VPC\.
```
{
"ImageId": "ami-1a2b3c4d",
"KeyName": "my-key-pair",
"SecurityGroupIds": [ "sg-1a2b3c4d" ],
"InstanceType": "c3.8xlarge",
"SubnetId": "subnet-1a2b3c4d",
"Placement": {
"Tenancy": "dedicated"
}
}
```
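To submit any of these examples as an actual request, save the launch specification to a file and pass it to the command\. The file name and instance count below are placeholders:
```
aws ec2 request-spot-instances --instance-count 1 --type "one-time" --launch-specification file://specification.json
```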
You can get a list of some types of resource using the Amazon EC2 console\. You can get a list of each type of resource using its corresponding command or API action\. If you have many resources, you can filter the results to include only the resources that match certain criteria\.
**Topics**
+ [Advanced search](#advancedsearch)
+ [Listing resources using the console](#listing-resources)
+ [Filtering resources using the console](#filtering-resources)
+ [Listing and filtering using the CLI and API](#Filtering_Resources_CLI)
Advanced search allows you to search using a combination of filters to achieve precise results\. You can filter by keywords, user\-defined tag keys, and predefined resource attributes\.
The specific search types available are:
+ **Search by keyword**
To search by keyword, type or paste what you’re looking for in the search box, and then choose Enter\. For example, to search for a specific instance, you can type the instance ID\.
+ **Search by fields**
You can also search by fields, tags, and attributes associated with a resource\. For example, to find all instances in the stopped state:
1. In the search box, start typing **Instance State**\. As you type, you'll see a list of suggested fields\.
1. Select **Instance State** from the list\.
1. Select **Stopped** from the list of suggested values\.
1. To further refine your list, select the search box for more search options\.
+ **Advanced search**
You can create advanced queries by adding multiple filters\. For example, you can search by tags and see instances for the Flying Mountain project running in the Production stack, and then search by attributes to see all t2\.micro instances, or all instances in us\-west\-2a, or both\.
+ **Inverse search**
You can search for resources that do not match a specified value\. For example, to list all instances that are not terminated, search by the **Instance State** field, and prefix the Terminated value with an exclamation mark \(\!\)\.
+ **Partial search**
When searching by field, you can also enter a partial string to find all resources that contain the string in that field\. For example, search by **Instance Type**, and then type **t2** to find all t2\.micro, t2\.small or t2\.medium instances\.
+ **Regular expression**
Regular expressions are useful when you need to match the values in a field with a specific pattern\. For example, search by the Name tag, and then type **^s\.\*** to see all instances with a Name tag that starts with an 's'\. Regular expression search is not case\-sensitive\.
After you have the precise results of your search, you can bookmark the URL for easy reference\. In situations where you have thousands of instances, filters and bookmarks can save you a great deal of time; you don’t have to run searches repeatedly\.
**Combining search filters**
In general, multiple filters with the same key field \(for example, tag:Name, search, Instance State\) are automatically joined with OR\. This is intentional, as the vast majority of filters would not be logical if they were joined with AND\. For example, you would get zero results for a search on Instance State=running AND Instance State=stopped\. In many cases, you can refine the results by using complementary search terms on different key fields, where the AND rule is automatically applied instead\. If you search for **tag:Name** = All values and **Instance State** = running, you get search results that contain both of those criteria\. To fine\-tune your results, simply remove one filter in the string until the results fit your requirements\.
You can view the most common Amazon EC2 resource types using the console\. To view additional resources, use the command line interface or the API actions\.
**To list EC2 resources using the console**
1. Open the Amazon EC2 console at [https://console\.aws\.amazon\.com/ec2/](https://console.aws.amazon.com/ec2/)\.
1. In the navigation pane, choose the option that corresponds to the resource, such as **AMIs** or **Instances**\.
![\[Amazon EC2 console navigation pane\]](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/images/EC2_navigation.png)
1. The page displays all the available resources\.
You can perform filtering and sorting of the most common resource types using the Amazon EC2 console\. For example, you can use the search bar on the instances page to sort instances by tags, attributes, or keywords\.
You can also use the search field on each page to find resources with specific attributes or values\. You can use regular expressions to search on partial or multiple strings\. For example, to find all instances that are using the MySG security group, enter `MySG` in the search field\. The results will include any values that contain `MySG` as a part of the string, such as `MySG2` and `MySG3`\. To limit your results to MySG only, enter `\bMySG\b` in the search field\. To list all the instances whose type is either `m1.small` or `m1.large`, enter `m1.small|m1.large` in the search field\.
**To list volumes in the `us-east-1b` Availability Zone with a status of `available`**
1. In the navigation pane, choose **Volumes**\.
1. Click on the search box, select **Attachment Status** from the menu, and then select **Detached**\. \(A detached volume is available to be attached to an instance in the same Availability Zone\.\)
1. Click on the search box again, select **State**, and then select **Available**\.
1. Click on the search box again, select **Availability Zone**, and then select `us-east-1b`\.
1. Any volumes that meet these criteria are displayed\.
**To list public 64\-bit Linux AMIs backed by Amazon EBS**
1. In the navigation pane, choose **AMIs**\.
1. In the **Filter** pane, select **Public images**, **EBS images**, and then your Linux distribution from the **Filter** lists\.
1. Type `x86_64` in the search field\.
1. Any AMIs that meet these criteria are displayed\.
Each resource type has a corresponding CLI command and API action that you use to list resources of that type\. The resulting lists of resources can be long, so it can be faster and more useful to filter the results to include only the resources that match specific criteria\.
**Filtering considerations**
+ You can specify multiple filters and multiple filter values in a single request\.
+ You can use wildcards with the filter values\. An asterisk \(\*\) matches zero or more characters, and a question mark \(?\) matches zero or one character\.
+ Filter values are case sensitive\.
+ Your search can include the literal values of the wildcard characters; you just need to escape them with a backslash before the character\. For example, a value of `\*amazon\?\\` searches for the literal string `*amazon?\`\.
**Supported filters**
To see the supported filters for each Amazon EC2 resource, see the following documentation:
+ AWS CLI: The `describe` commands in the [AWS CLI Command Reference\-Amazon EC2](https://docs.aws.amazon.com/cli/latest/reference/ec2/index.html)\.
+ Tools for Windows PowerShell: The `Get` commands in the [AWS Tools for PowerShell Cmdlet Reference\-Amazon EC2](https://docs.aws.amazon.com/powershell/latest/reference/items/EC2_cmdlets.html)\.
+ Query API: The `Describe` API actions in the [Amazon EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/)\.
**Example: Specify a single filter**
You can list your Amazon EC2 instances using [describe\-instances](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html)\. Without filters, the response contains information for all your resources\. You can use the following command to include only the running instances in your output\.
```
aws ec2 describe-instances --filters Name=instance-state-name,Values=running
```
To list only the instance IDs for your running instances, add the `--query` parameter as follows\.
```
aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[*].InstanceId" --output text
```
The following is example output:
```
i-0ef1f57f78d4775a4
i-0626d4edd54f1286d
i-04a636d18e83cfacb
```
**Example: Specify multiple filters or filter values**
If you specify multiple filters, a resource must match all of the filters to be included in the results\. If you specify multiple values for a single filter, a resource matches if it matches any one of those values\.
You can use the following command to list all instances whose type is either `m5.large` or `m5d.large`\.
```
aws ec2 describe-instances --filters Name=instance-type,Values=m5.large,m5d.large
```
You can use the following command to list all stopped instances whose type is `t2.micro`\.
```
aws ec2 describe-instances --filters Name=instance-state-name,Values=stopped Name=instance-type,Values=t2.micro
```
**Example: Use wildcards in a filter value**
If you specify database as the filter value for the `description` filter when describing EBS snapshots using [describe\-snapshots](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-snapshots.html), the command returns only the snapshots whose description is "database"\.
```
aws ec2 describe-snapshots --filters Name=description,Values=database
```
The \* wildcard matches zero or more characters\. If you specify \*database\* as the filter value, the command returns only snapshots whose description includes the word database\.
```
aws ec2 describe-snapshots --filters Name=description,Values=*database*
```
The ? wildcard matches zero or one character\. If you specify database? as the filter value, the command returns only snapshots whose description is "database" or "database" followed by one character\.
```
aws ec2 describe-snapshots --filters Name=description,Values=database?
```
If you specify `database????`, the command returns only snapshots whose description is "database" followed by up to four characters\. It excludes descriptions with "database" followed by five or more characters\.
```
aws ec2 describe-snapshots --filters Name=description,Values=database????
```
**Example: Filter based on date**
With the AWS CLI, you can use JMESPath to filter results using expressions\. For example, the following [describe\-snapshots](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-snapshots.html) command displays the IDs of all snapshots created by your AWS account \(represented by *123456789012*\) before the specified date \(represented by *2020\-03\-31*\)\. If you do not specify the owner, the results include all public snapshots\.
```
aws ec2 describe-snapshots --filters Name=owner-id,Values=123456789012 --query "Snapshots[?(StartTime<=`2020-03-31`)].[SnapshotId]" --output text
```
The following command displays the IDs of all snapshots created in the specified date range\.
```
aws ec2 describe-snapshots --filters Name=owner-id,Values=123456789012 --query "Snapshots[?(StartTime>=`2019-01-01`) && (StartTime<=`2019-12-31`)].[SnapshotId]" --output text
```
**Filter based on tags**
For examples of how to filter a list of resources according to their tags, see [Working with tags using the command line](Using_Tags.md#Using_Tags_CLI)\.
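As a quick illustration \(the tag key and value are hypothetical\), a tag filter can also be combined with the state filter shown earlier:
```
aws ec2 describe-instances --filters "Name=tag:Stack,Values=production" "Name=instance-state-name,Values=running"
```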
An Elastic Fabric Adapter \(EFA\) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing \(HPC\) and machine learning applications\. EFA enables you to achieve the application performance of an on\-premises HPC cluster, with the scalability, flexibility, and elasticity provided by the AWS Cloud\.
EFA provides lower and more consistent latency and higher throughput than the TCP transport traditionally used in cloud\-based HPC systems\. It enhances the performance of inter\-instance communication that is critical for scaling HPC and machine learning applications\. It is optimized to work on the existing AWS network infrastructure and it can scale depending on application requirements\.
EFA integrates with Libfabric 1\.10 and it supports Open MPI 4\.0\.3 and Intel MPI 2019 Update 7 for HPC applications, and Nvidia Collective Communications Library \(NCCL\) for machine learning applications\.
**Note**
The OS\-bypass capabilities of EFAs are not supported on Windows instances\. If you attach an EFA to a Windows instance, the instance functions as an Elastic Network Adapter, without the added EFA capabilities\.
**Topics**
+ [EFA basics](#efa-basics)
+ [Supported interfaces and libraries](#efa-mpi)
+ [Supported instance types](#efa-instance-types)
+ [Supported AMIs](#efa-amis)
+ [EFA limitations](#efa-limits)
+ [Getting started with EFA and MPI](efa-start.md)
+ [Getting started with EFA and NCCL](efa-start-nccl.md)
+ [Working with EFA](efa-working-with.md)
+ [Monitoring an EFA](efa-working-monitor.md)
+ [Verifying the EFA installer using a checksum](efa-verify.md)
An EFA is an Elastic Network Adapter \(ENA\) with added capabilities\. It provides all of the functionality of an ENA, with an additional OS\-bypass functionality\. OS\-bypass is an access model that allows HPC and machine learning applications to communicate directly with the network interface hardware to provide low\-latency, reliable transport functionality\.
![\[Contrasting a traditional HPC software stack with one that uses an EFA.\]](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/images/efa_stack.png)
Traditionally, HPC applications use the Message Passing Interface \(MPI\) to interface with the system’s network transport\. In the AWS Cloud, this has meant that applications interface with MPI, which then uses the operating system's TCP/IP stack and the ENA device driver to enable network communication between instances\.
With an EFA, HPC applications use MPI or NCCL to interface with the *Libfabric* API\. The Libfabric API bypasses the operating system kernel and communicates directly with the EFA device to put packets on the network\. This reduces overhead and enables the HPC application to run more efficiently\.
**Note**
Libfabric is a core component of the OpenFabrics Interfaces \(OFI\) framework, which defines and exports the user\-space API of OFI\. For more information, see the [Libfabric OpenFabrics](https://ofiwg.github.io/libfabric/) website\.
Elastic Network Adapters \(ENAs\) provide traditional IP networking features that are required to support VPC networking\. EFAs provide all of the same traditional IP networking features as ENAs, and they also support OS\-bypass capabilities\. OS\-bypass enables HPC and machine learning applications to bypass the operating system kernel and to communicate directly with the EFA device\.
EFA supports the following interfaces and libraries:
+ Open MPI 4\.0\.3
+ Intel MPI 2019 Update 7
+ NVIDIA Collective Communications Library \(NCCL\) 2\.4\.2 and later
The following instance types support EFAs: `c5n.18xlarge`, `c5n.metal`, `g4dn.metal`, `i3en.24xlarge`, `i3en.metal`, `inf1.24xlarge`, `m5dn.24xlarge`, `m5n.24xlarge`, `p3dn.24xlarge`, `r5dn.24xlarge`, and `r5n.24xlarge`\.
The available instance types vary by Region\. To see the available instance types that support EFA in a Region, use the following command and, for `--region`, specify a Region\.
```
aws --output table --region us-east-1 ec2 describe-instance-types --query InstanceTypes[*].[InstanceType,NetworkInfo.EfaSupported] | grep True
```
The following is example output\.
```
| i3en.24xlarge | True |
| g4dn.metal | True |
| c5n.metal | True |
| r5n.24xlarge | True |
| c5n.18xlarge | True |
| inf1.24xlarge | True |
| i3en.metal | True |
| p3dn.24xlarge | True |
| r5dn.24xlarge | True |
| m5n.24xlarge | True |
| m5dn.24xlarge | True |
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-ec2-user-guide/doc_source/efa.md |
The following AMIs support EFAs: Amazon Linux, Amazon Linux 2, RHEL 7\.6, RHEL 7\.7, RHEL 7\.8, CentOS 7, Ubuntu 16\.04, and Ubuntu 18\.04\.
EFA has the following limitations:
+ You can attach only one EFA per instance\.
+ EFA OS\-bypass traffic is limited to a single subnet\. In other words, EFA traffic cannot be sent from one subnet to another\. Normal IP traffic from the EFA can be sent from one subnet to another\.
+ EFA OS\-bypass traffic is not routable\. Normal IP traffic from the EFA remains routable\.
+ The EFA must be a member of a security group that allows all inbound and outbound traffic to and from the security group itself\.
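One possible way to satisfy that last requirement from the AWS CLI is sketched below; the VPC ID and security group ID are placeholders, and you can equally create the rules in the console:
```
aws ec2 create-security-group --group-name efa-enabled-sg \
    --description "Security group for EFA traffic" --vpc-id vpc-1a2b3c4d

# Allow all inbound and outbound traffic to and from the group itself
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=-1,UserIdGroupPairs=[{GroupId=sg-0123456789abcdef0}]'
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=-1,UserIdGroupPairs=[{GroupId=sg-0123456789abcdef0}]'
```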
You can aggregate statistics for the EC2 instances in an Auto Scaling group\. Note that Amazon CloudWatch cannot aggregate data across regions\. Metrics are completely separate between regions\.
This example shows you how to retrieve the total bytes written to disk for one Auto Scaling group\. The total is computed for one\-minute periods for a 24\-hour interval across all EC2 instances in the specified Auto Scaling group\.
**To display DiskWriteBytes for the instances in an Auto Scaling group \(console\)**
1. Open the CloudWatch console at [https://console\.aws\.amazon\.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/)\.
1. In the navigation pane, choose **Metrics**\.
1. Choose the **EC2** namespace and then choose **By Auto Scaling Group**\.
1. Choose the row for the **DiskWriteBytes** metric and the specific Auto Scaling group, which displays a graph for the metric for the instances in the Auto Scaling group\. To name the graph, choose the pencil icon\. To change the time range, select one of the predefined values or choose **custom**\.
1. To change the statistic or the period for the metric, choose the **Graphed metrics** tab\. Choose the column heading or an individual value, and then choose a different value\.
**To display DiskWriteBytes for the instances in an Auto Scaling group \(AWS CLI\)**
Use the [get\-metric\-statistics](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-statistics.html) command as follows\.
```
aws cloudwatch get-metric-statistics --namespace AWS/EC2 --metric-name DiskWriteBytes --period 360 \
--statistics "Sum" "SampleCount" --dimensions Name=AutoScalingGroupName,Value=my-asg --start-time 2016-10-16T23:18:00 --end-time 2016-10-18T23:18:00
```
The following is example output:
```
{
"Datapoints": [
{
"SampleCount": 18.0,
"Timestamp": "2016-10-19T21:36:00Z",
"Sum": 0.0,
"Unit": "Bytes"
},
{
"SampleCount": 5.0,
"Timestamp": "2016-10-19T21:42:00Z",
"Sum": 0.0,
"Unit": "Bytes"
}
],
"Label": "DiskWriteBytes"
}
```
You can select an AMI to use based on the following characteristics:
+ Region \(see [Regions and Zones](using-regions-availability-zones.md)\)
+ Operating system
+ Architecture \(32\-bit or 64\-bit\)
+ [Launch permissions](#launch-permissions)
+ [Storage for the root device](#storage-for-the-root-device)
The owner of an AMI determines its availability by specifying launch permissions\. Launch permissions fall into the following categories\.
| Launch permission | Description |
| --- | --- |
| public | The owner grants launch permissions to all AWS accounts\. |
| explicit | The owner grants launch permissions to specific AWS accounts\. |
| implicit | The owner has implicit launch permissions for an AMI\. |
Amazon and the Amazon EC2 community provide a large selection of public AMIs\. For more information, see [Shared AMIs](sharing-amis.md)\. Developers can charge for their AMIs\. For more information, see [Paid AMIs](paid-amis.md)\.
All AMIs are categorized as either *backed by Amazon EBS* or *backed by instance store*\. The former means that the root device for an instance launched from the AMI is an Amazon EBS volume created from an Amazon EBS snapshot\. The latter means that the root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3\. For more information, see [Amazon EC2 root device volume](RootDeviceStorage.md)\.
The following table summarizes the important differences when using the two types of AMIs\.
| Characteristic | Amazon EBS\-backed AMI | Amazon instance store\-backed AMI |
| --- | --- | --- |
| Boot time for an instance | Usually less than 1 minute | Usually less than 5 minutes |
| Size limit for a root device | 16 TiB | 10 GiB |
| Root device volume | Amazon EBS volume | Instance store volume |
| Data persistence | By default, the root volume is deleted when the instance terminates\.\* Data on any other Amazon EBS volumes persists after instance termination by default\. | Data on any instance store volumes persists only during the life of the instance\. |
| Modifications | The instance type, kernel, RAM disk, and user data can be changed while the instance is stopped\. | Instance attributes are fixed for the life of an instance\. |
| Charges | You're charged for instance usage, Amazon EBS volume usage, and storing your AMI as an Amazon EBS snapshot\. | You're charged for instance usage and storing your AMI in Amazon S3\. |
| AMI creation/bundling | Uses a single command/call | Requires installation and use of AMI tools |
| Stopped state | Can be in a stopped state\. Even when the instance is stopped and not running, the root volume is persisted in Amazon EBS | Cannot be in stopped state; instances are running or terminated |
\* By default, Amazon EBS\-backed instance root volumes have the `DeleteOnTermination` flag set to `true`\. For information about how to change this flag so that the volume persists after termination, see [Changing the root volume to persist](RootDeviceStorage.md#Using_RootDeviceStorage)\.
**To determine the root device type of an AMI using the console**
1. Open the Amazon EC2 console\.
1. In the navigation pane, click **AMIs**, and select the AMI\.
1. Check the value of **Root Device Type** in the **Details** tab as follows:
+ If the value is `ebs`, this is an Amazon EBS\-backed AMI\.
+ If the value is `instance store`, this is an instance store\-backed AMI\.
**To determine the root device type of an AMI using the command line**
You can use one of the following commands\. For more information about these command line interfaces, see [Accessing Amazon EC2](concepts.md#access-ec2)\.
+ [describe\-images](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-images.html) \(AWS CLI\)
+ [Get\-EC2Image](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-EC2Image.html) \(AWS Tools for Windows PowerShell\)
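For example, the following describe\-images command returns the root device type of a single AMI\. The AMI ID shown is a placeholder; substitute the ID of the AMI that you want to check\. The command returns `ebs` for an Amazon EBS\-backed AMI and `instance-store` for an instance store\-backed AMI\.
```
aws ec2 describe-images --image-ids ami-0abcdef1234567890 --query "Images[*].RootDeviceType" --output text
```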
You can stop an Amazon EBS\-backed instance, but not an Amazon EC2 instance store\-backed instance\. Stopping causes the instance to stop running \(its status goes from `running` to `stopping` to `stopped`\)\. A stopped instance persists in Amazon EBS, which allows it to be restarted\. Stopping is different from terminating; you can't restart a terminated instance\. Because Amazon EC2 instance store\-backed instances can't be stopped, they're either running or terminated\. For more information about what happens and what you can do while an instance is stopped, see [Stop and start your instance](Stop_Start.md)\.
Instances that use an instance store volume for the root device automatically have instance store available \(the root volume contains the root partition and you can store additional data\)\. You can add persistent storage to your instance by attaching one or more Amazon EBS volumes\. Any data on an instance store volume is deleted when the instance fails or terminates\. For more information, see [Instance store lifetime](InstanceStorage.md#instance-store-lifetime)\.
Instances that use Amazon EBS for the root device automatically have an Amazon EBS volume attached\. The volume appears in your list of volumes like any other\. With most instance types, Amazon EBS\-backed instances don't have instance store volumes by default\. You can add instance store volumes or additional Amazon EBS volumes using a block device mapping\. For more information, see [Block device mapping](block-device-mapping-concepts.md)\.
Instances launched from an Amazon EBS\-backed AMI launch faster than instances launched from an instance store\-backed AMI\. When you launch an instance from an instance store\-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available\. With an Amazon EBS\-backed AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available\. However, the performance of an instance that uses an Amazon EBS volume for its root device is slower for a short time while the remaining parts are retrieved from the snapshot and loaded into the volume\. When you stop and restart the instance, it launches quickly, because the state is stored in an Amazon EBS volume\.
To create Linux AMIs backed by instance store, you must create an AMI from your instance on the instance itself using the Amazon EC2 AMI tools\.
AMI creation is much easier for AMIs backed by Amazon EBS\. The `CreateImage` API action creates your Amazon EBS\-backed AMI and registers it\. There's also a button in the AWS Management Console that lets you create an AMI from a running instance\. For more information, see [Creating an Amazon EBS\-backed Linux AMI](creating-an-ami-ebs.md)\.
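As a minimal sketch, the following create\-image command creates an Amazon EBS\-backed AMI from an existing instance and registers it\. The instance ID, name, and description are placeholders\. By default, Amazon EC2 attempts to shut down and reboot the instance before creating the image; you can add the `--no-reboot` option to skip that step\.
```
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "my-server-ami" --description "AMI created from my server"
```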
With AMIs backed by instance store, you're charged for instance usage and storing your AMI in Amazon S3\. With AMIs backed by Amazon EBS, you're charged for instance usage, Amazon EBS volume storage and usage, and storing your AMI as an Amazon EBS snapshot\.
With Amazon EC2 instance store\-backed AMIs, each time you customize an AMI and create a new one, all of the parts are stored in Amazon S3 for each AMI\. So, the storage footprint for each customized AMI is the full size of the AMI\. For Amazon EBS\-backed AMIs, each time you customize an AMI and create a new one, only the changes are stored\. So the storage footprint for subsequent AMIs you customize after the first is much smaller, resulting in lower AMI storage charges\.
When an Amazon EBS\-backed instance is stopped, you're not charged for instance usage; however, you're still charged for volume storage\. As soon as you start your instance, we charge a minimum of one minute for usage\. After one minute, we charge only for the seconds used\. For example, if you run an instance for 20 seconds and then stop it, we charge for a full one minute\. If you run an instance for 3 minutes and 40 seconds, we charge for exactly 3 minutes and 40 seconds of usage\. We charge you for each second, with a one\-minute minimum, that you keep the instance running, even if the instance remains idle and you don't connect to it\.
Some resource\-creating Amazon EC2 API actions enable you to specify tags when you create the resource\. For more information, see [Tagging your resources](Using_Tags.md#tag-resources)\.
To enable users to tag resources on creation, they must have permissions to use the action that creates the resource, such as `ec2:RunInstances` or `ec2:CreateVolume`\. If tags are specified in the resource\-creating action, Amazon performs additional authorization on the `ec2:CreateTags` action to verify if users have permissions to create tags\. Therefore, users must also have explicit permissions to use the `ec2:CreateTags` action\.
In the IAM policy definition for the `ec2:CreateTags` action, use the `Condition` element with the `ec2:CreateAction` condition key to give tagging permissions to the action that creates the resource\.
The following example demonstrates a policy that allows users to launch instances and apply any tags to instances and volumes during launch\. Users are not permitted to tag any existing resources \(they cannot call the `ec2:CreateTags` action directly\)\.
```
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags"
            ],
            "Resource": "arn:aws:ec2:region:account:*/*",
            "Condition": {
                "StringEquals": {
                    "ec2:CreateAction": "RunInstances"
                }
            }
        }
    ]
}
```
Similarly, the following policy allows users to create volumes and apply any tags to the volumes during volume creation\. Users are not permitted to tag any existing resources \(they cannot call the `ec2:CreateTags` action directly\)\.
```
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateVolume"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags"
            ],
            "Resource": "arn:aws:ec2:region:account:*/*",
            "Condition": {
                "StringEquals": {
                    "ec2:CreateAction": "CreateVolume"
                }
            }
        }
    ]
}
```
The `ec2:CreateTags` action is only evaluated if tags are applied during the resource\-creating action\. Therefore, a user that has permissions to create a resource \(assuming there are no tagging conditions\) does not require permissions to use the `ec2:CreateTags` action if no tags are specified in the request\. However, if the user attempts to create a resource with tags, the request fails if the user does not have permissions to use the `ec2:CreateTags` action\.
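For example, under a policy like the preceding one, a user could launch an instance and tag both the instance and its volumes in a single request similar to the following\. The AMI ID, instance type, and tag values are placeholders\. Because the request specifies tags, Amazon EC2 also evaluates the user's `ec2:CreateTags` permissions\.
```
aws ec2 run-instances --image-id ami-0abcdef1234567890 --count 1 --instance-type t3.micro \
--tag-specifications 'ResourceType=instance,Tags=[{Key=cost-center,Value=cc123}]' \
'ResourceType=volume,Tags=[{Key=cost-center,Value=cc123}]'
```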
The `ec2:CreateTags` action is also evaluated if tags are provided in a launch template\. For an example policy, see [Tags in a launch template](ExamplePolicies_EC2.md#iam-example-tags-launch-template)\.
You can use additional conditions in the `Condition` element of your IAM policies to control the tag keys and values that can be applied to resources\.
The following condition keys can be used with the examples in the preceding section:
+ `aws:RequestTag`: To indicate that a particular tag key or tag key and value must be present in a request\. Other tags can also be specified in the request\.
+ Use with the `StringEquals` condition operator to enforce a specific tag key and value combination, for example, to enforce the tag `cost-center`=`cc123`:
```
"StringEquals": { "aws:RequestTag/cost-center": "cc123" }
```
+ Use with the `StringLike` condition operator to enforce a specific tag key in the request; for example, to enforce the tag key `purpose`:
```
"StringLike": { "aws:RequestTag/purpose": "*" }
```
+ `aws:TagKeys`: To enforce the tag keys that are used in the request\.
+ Use with the `ForAllValues` modifier to enforce specific tag keys if they are provided in the request \(if tags are specified in the request, only specific tag keys are allowed; no other tags are allowed\)\. For example, the tag keys `environment` or `cost-center` are allowed:
```
"ForAllValues:StringEquals": { "aws:TagKeys": ["environment","cost-center"] }
```
+ Use with the `ForAnyValue` modifier to enforce the presence of at least one of the specified tag keys in the request\. For example, at least one of the tag keys `environment` or `webserver` must be present in the request:
```
"ForAnyValue:StringEquals": { "aws:TagKeys": ["environment","webserver"] }
```
These condition keys can be applied to resource\-creating actions that support tagging, as well as the `ec2:CreateTags` and `ec2:DeleteTags` actions\. To learn whether an Amazon EC2 API action supports tagging, see [Actions, Resources, and Condition Keys for Amazon EC2](https://docs.aws.amazon.com/IAM/latest/UserGuide/list_amazonec2.html) in the *IAM User Guide*\.
To force users to specify tags when they create a resource, you must use the `aws:RequestTag` condition key or the `aws:TagKeys` condition key with the `ForAnyValue` modifier on the resource\-creating action\. The `ec2:CreateTags` action is not evaluated if a user does not specify tags for the resource\-creating action\.
For conditions, the condition key is not case\-sensitive and the condition value is case\-sensitive\. Therefore, to enforce the case\-sensitivity of a tag key, use the `aws:TagKeys` condition key, where the tag key is specified as a value in the condition\.
For example IAM policies, see [Example policies for working with the AWS CLI or an AWS SDK](ExamplePolicies_EC2.md)\. For more information about multi\-value conditions, see [Creating a Condition That Tests Multiple Key Values](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_multi-value-conditions.html) in the *IAM User Guide*\.
To achieve the maximum network performance on instances with enhanced networking, you may need to modify the default operating system configuration\. We recommend the following configuration changes for applications that require high network performance\.
In addition to these operating system optimizations, you should also consider the maximum transmission unit \(MTU\) of your network traffic, and adjust according to your workload and network architecture\. For more information, see [Network maximum transmission unit \(MTU\) for your EC2 instance](network_mtu.md)\.
AWS regularly measures average round\-trip latencies of 50us between instances launched in a cluster placement group, and tail latencies of 200us at the 99\.9th percentile\. If your applications require consistently low latencies, we recommend using the latest version of the ENA drivers on fixed performance Nitro\-based instances\.
These procedures were written for Amazon Linux 2 and Amazon Linux AMI\. However, they may also work for other Linux distributions with kernel version 3\.9 or newer\. For more information, see your system\-specific documentation\.
**To optimize your Amazon Linux instance for enhanced networking**
1. Check the clock source for your instance:
```
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
```
1. If the clock source is `xen`, complete the following substeps\. Otherwise, skip to [Step 3](#post-xen-step)\.
1. Edit the GRUB configuration and add `xen_nopvspin=1` and `clocksource=tsc` to the kernel boot options\.
+ For Amazon Linux 2, edit the `/etc/default/grub` file and add these options to the `GRUB_CMDLINE_LINUX_DEFAULT` line, as shown below:
```
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 xen_nopvspin=1 clocksource=tsc"
GRUB_TIMEOUT=0
```
+ For Amazon Linux AMI, edit the `/boot/grub/grub.conf` file and add these options to the `kernel` line, as shown below:
```
kernel /boot/vmlinuz-4.14.62-65.117.amzn1.x86_64 root=LABEL=/ console=tty1 console=ttyS0 selinux=0 nvme_core.io_timeout=4294967295 xen_nopvspin=1 clocksource=tsc
```
1. \(Amazon Linux 2 only\) Rebuild your GRUB configuration file to pick up these changes:
```
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```
1. <a name="post-xen-step"></a>If your instance type is listed as supported on [Processor state control for your EC2 instance](processor_state_control.md), prevent the system from using deeper C\-states to ensure low\-latency system performance\. For more information, see [High performance and low latency by limiting deeper C\-states](processor_state_control.md#c-states)\.
1. Edit the GRUB configuration and add `intel_idle.max_cstate=1` to the kernel boot options\.
+ For Amazon Linux 2, edit the `/etc/default/grub` file and add this option to the `GRUB_CMDLINE_LINUX_DEFAULT` line, as shown below:
```
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 xen_nopvspin=1 clocksource=tsc intel_idle.max_cstate=1"
GRUB_TIMEOUT=0
```
+ For Amazon Linux AMI, edit the `/boot/grub/grub.conf` file and add this option to the `kernel` line, as shown below:
```
kernel /boot/vmlinuz-4.14.62-65.117.amzn1.x86_64 root=LABEL=/ console=tty1 console=ttyS0 selinux=0 nvme_core.io_timeout=4294967295 xen_nopvspin=1 clocksource=tsc intel_idle.max_cstate=1
```
1. \(Amazon Linux 2 only\) Rebuild your GRUB configuration file to pick up these changes:
```
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```
1. Ensure that your reserved kernel memory is sufficient to sustain a high rate of packet buffer allocations \(the default value may be too small\)\.
1. Open \(as `root` or with sudo\) the `/etc/sysctl.conf` file with the editor of your choice\.
1. Add the `vm.min_free_kbytes` line to the file with the reserved kernel memory value \(in kilobytes\) for your instance type\. As a rule of thumb, set this value to between 1 and 3 percent of available system memory, and adjust it up or down to meet the needs of your application\.
```
vm.min_free_kbytes = 1048576
```
1. Apply this configuration with the following command:
```
sudo sysctl -p
```
1. Verify that the setting was applied with the following command:
```
sudo sysctl -a 2>&1 | grep min_free_kbytes
```
1. Reboot your instance to load the new configuration:
```
sudo reboot
```
1. \(Optional\) Manually distribute packet receive interrupts so that they are associated with different CPUs that all belong to the same NUMA node\. Use this carefully, however, because the irqbalance service is disabled globally\.
**Note**
The configuration change in this step does not survive a reboot\.
1. Create a file called `smp_affinity.sh` and paste the following code block into it:
```
#!/bin/sh
service irqbalance stop
affinity_values=(00000001 00000002 00000004 00000008 00000010 00000020 00000040 00000080)
irqs=($(grep eth /proc/interrupts|awk '{print $1}'|cut -d : -f 1))
irqLen=${#irqs[@]}
for (( i=0; i<${irqLen}; i++ ));
do
echo $(printf "0000,00000000,00000000,00000000,${affinity_values[$i]}") > /proc/irq/${irqs[$i]}/smp_affinity;
echo "IRQ ${irqs[$i]} =" $(cat /proc/irq/${irqs[$i]}/smp_affinity);
done
```
1. Run the script with the following command:
```
sudo bash ./smp_affinity.sh
```
1. \(Optional\) If the vCPUs that handle receive IRQs are overloaded, or if your application network processing is demanding on CPU, you can offload part of the network processing to other cores with receive packet steering \(RPS\)\. Ensure that cores used for RPS belong to the same NUMA node to avoid inter\-NUMA node locks\. For example, to use cores 8\-15 for packet processing, use the following command\.
**Note**
The configuration change in this step does not survive a reboot\.
```
for i in `seq 0 7`; do echo $(printf "0000,00000000,00000000,00000000,0000ff00") | sudo tee /sys/class/net/eth0/queues/rx-$i/rps_cpus; done
```
1. \(Optional\) If possible, keep all processing on the same NUMA node\.
1. Install numactl:
```
sudo yum install -y numactl
```
1. When you run your network processing program, bind it to a single NUMA node\. For example, the following command binds the shell script, `run.sh`, to NUMA node 0:
```
numactl --cpunodebind=0 --membind=0 run.sh
```
1. If you have hyperthreading enabled, you can configure your application to only use a single hardware thread per CPU core\.
+ You can view which CPU cores map to a NUMA node with the lscpu command:
```
lscpu | grep NUMA
```
Output:
```
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
```
+ You can view which hardware threads belong to a physical CPU with the following command:
```
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
```
Output:
```
0,32
```
In this example, threads 0 and 32 map to CPU 0\.
+ To avoid running on threads 32\-47 \(which are actually hardware threads of the same CPUs as 0\-15\), use the following command:
```
numactl --physcpubind=+0-15 --membind=0 ./run.sh
```
1. Use multiple elastic network interfaces for different classes of traffic\. For example, if you are running a web server that uses a backend database, use one elastic network interface for the web server front end, and another for the database connection\.
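As a brief sketch, the following commands create a second network interface for database traffic and attach it to an instance at device index 1\. The subnet, security group, and instance IDs are placeholders\.
```
ENI_ID=$(aws ec2 create-network-interface --subnet-id subnet-0123456789abcdef0 \
  --description "database traffic" --groups sg-0123456789abcdef0 \
  --query 'NetworkInterface.NetworkInterfaceId' --output text)
aws ec2 attach-network-interface --network-interface-id $ENI_ID \
  --instance-id i-0123456789abcdef0 --device-index 1
```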
You can create, use, and manage an EFA much like any other elastic network interface in Amazon EC2\. However, unlike elastic network interfaces, EFAs cannot be attached to or detached from an instance in a running state\.
To use an EFA, you must do the following:
+ Use one of the following supported instance types: `c5n.18xlarge`, `c5n.metal`, `g4dn.metal`, `i3en.24xlarge`, `i3en.metal`, `inf1.24xlarge`, `m5dn.24xlarge`, `m5n.24xlarge`, `p3dn.24xlarge`, `r5dn.24xlarge`, and `r5n.24xlarge`\.
+ Use one of the following supported AMIs: Amazon Linux, Amazon Linux 2, RHEL 7\.6, RHEL 7\.7, RHEL 7\.8, CentOS 7, Ubuntu 16\.04, and Ubuntu 18\.04\.
+ Install the EFA software components\. For more information, see [Step 3: Install the EFA software](efa-start.md#efa-start-enable) and [Step 5: \(Optional\) Install Intel MPI](efa-start.md#efa-start-impi)\.
+ Use a security group that allows all inbound and outbound traffic to and from the security group itself\. For more information, see [Step 1: Prepare an EFA\-enabled security group](efa-start.md#efa-start-security)\.
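To check which instance types in the current Region support EFA, you can run a query similar to the following\.
```
aws ec2 describe-instance-types --filters Name=network-info.efa-supported,Values=true \
--query 'InstanceTypes[*].InstanceType' --output text
```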
**Topics**
+ [EFA requirements](#efa-reqs)
+ [Creating an EFA](#efa-create)
+ [Attaching an EFA to a stopped instance](#efa-attach)
+ [Attaching an EFA when launching an instance](#efa-launch)
+ [Adding an EFA to a launch template](#efa-launch-template)
+ [Assigning an IP address to an EFA](#efa-ip-assign)
+ [Unassigning an IP address from an EFA](#efa-ip-unassign)
+ [Changing the security group](#efa-security)
+ [Detaching an EFA](#efa-detach)
+ [Viewing EFAs](#efa-view)
+ [Deleting an EFA](#efa-delete)
You can create an EFA in a subnet in a VPC\. You can't move the EFA to another subnet after it's created, and you can only attach it to stopped instances in the same Availability Zone\.
**To create a new EFA using the console**
1. Open the Amazon EC2 console at [https://console\.aws\.amazon\.com/ec2/](https://console.aws.amazon.com/ec2/)\.
1. In the navigation pane, choose **Network Interfaces**\.
1. Choose **Create Network Interface**\.
1. For **Description**, enter a descriptive name for the EFA\.
1. For **Subnet**, select the subnet in which to create the EFA\.
1. For **Private IP**, enter the primary private IPv4 address\. If you don't specify an IPv4 address, we select an available private IPv4 address from the selected subnet\.
1. \(IPv6 only\) If you selected a subnet that has an associated IPv6 CIDR block, you can optionally specify an IPv6 address in the **IPv6 IP** field\.
1. For **Security groups**, select one or more security groups\.
1. For **EFA**, choose **Enabled**\.
1. Choose **Yes, Create**\.
**To create a new EFA using the AWS CLI**
Use the [create\-network\-interface](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-network-interface.html) command and for `interface-type`, specify `efa`, as shown in the following example\.
```
aws ec2 create-network-interface --subnet-id subnet-01234567890 --description example_efa --interface-type efa
```
You can attach an EFA to any supported instance that is in the `stopped` state\. You cannot attach an EFA to an instance that is in the `running` state\. For more information about the supported instance types, see [Supported instance types](efa.md#efa-instance-types)\.
You attach an EFA to an instance in the same way that you attach an elastic network interface to an instance\. For more information, see [Attaching a network interface to a stopped or running instance](using-eni.md#attach_eni_running_stopped)\.
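For example, the following command attaches an EFA to a stopped instance at device index 1\. The network interface and instance IDs are placeholders\.
```
aws ec2 attach-network-interface --network-interface-id eni-0123456789abcdef0 \
--instance-id i-0123456789abcdef0 --device-index 1
```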
**To attach an existing EFA when launching an instance \(AWS CLI\)**
Use the [run\-instances](https://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html) command and for **NetworkInterfaceId**, specify the ID of the EFA, as shown in the following example\.
```
aws ec2 run-instances --image-id ami_id --count 1 --instance-type c5n.18xlarge --key-name my_key_pair --network-interfaces DeviceIndex=0,NetworkInterfaceId=efa_id,Groups=sg_id,SubnetId=subnet_id
```
**To attach a new EFA when launching an instance \(AWS CLI\)**
Use the [run\-instances](https://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html) command and for **InterfaceType**, specify `efa`, as shown in the following example\.
```
aws ec2 run-instances --image-id ami_id --count 1 --instance-type c5n.18xlarge --key-name my_key_pair --network-interfaces DeviceIndex=0,InterfaceType=efa,Groups=sg_id,SubnetId=subnet_id
```
You can create a launch template that contains the configuration information needed to launch EFA\-enabled instances\. To create an EFA\-enabled launch template, create a new launch template and specify a supported instance type, your EFA\-enabled AMI, and an EFA\-enabled security group\. For more information, see [Getting started with EFA and MPI](efa-start.md)\.
You can leverage launch templates to launch EFA\-enabled instances with other AWS services, such as AWS Batch\.
For more information about creating launch templates, see [Creating a launch template](ec2-launch-templates.md#create-launch-template)\.
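As a minimal sketch, the following command creates a launch template that requests an EFA on device index 0\. The template name, AMI ID, security group ID, and subnet ID are placeholders; use one of the supported instance types and your own EFA\-enabled AMI and security group\.
```
aws ec2 create-launch-template --launch-template-name efa-template \
--launch-template-data '{"ImageId":"ami-0abcdef1234567890","InstanceType":"c5n.18xlarge","NetworkInterfaces":[{"DeviceIndex":0,"InterfaceType":"efa","Groups":["sg-0123456789abcdef0"],"SubnetId":"subnet-0123456789abcdef0"}]}'
```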
If you have an Elastic IP \(IPv4\) address, you can associate it with an EFA\. If your EFA is provisioned in a subnet that has an associated IPv6 CIDR block, you can assign one or more IPv6 addresses to the EFA\.
You assign an Elastic IP \(IPv4\) and IPv6 address to an EFA in the same way that you assign an IP address to an elastic network interface\. For more information, see:
+ [Associating an Elastic IP address \(IPv4\)](using-eni.md#associate_eip)
+ [Assigning an IPv6 address](using-eni.md#eni-assign-ipv6)
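For example, the following commands associate an Elastic IP address with an EFA and assign one IPv6 address from the subnet's IPv6 CIDR block\. The allocation ID and network interface ID are placeholders\.
```
aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 \
--network-interface-id eni-0123456789abcdef0
aws ec2 assign-ipv6-addresses --network-interface-id eni-0123456789abcdef0 --ipv6-address-count 1
```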
You unassign an Elastic IP \(IPv4\) and IPv6 address from an EFA in the same way that you unassign an IP address from an elastic network interface\. For more information, see:
+ [Disassociating an Elastic IP address \(IPv4\)](using-eni.md#disassociate_eip)
+ [Unassigning an IPv6 address](using-eni.md#eni-unassign-ipv6)
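For example, the following commands disassociate an Elastic IP address from an EFA and unassign an IPv6 address\. The association ID, network interface ID, and IPv6 address are placeholders\.
```
aws ec2 disassociate-address --association-id eipassoc-0123456789abcdef0
aws ec2 unassign-ipv6-addresses --network-interface-id eni-0123456789abcdef0 --ipv6-addresses 2001:db8:1234:1a00::123
```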
You can change the security group that is associated with an EFA\. To enable OS\-bypass functionality, the EFA must be a member of a security group that allows all inbound and outbound traffic to and from the security group itself\.
You change the security group that is associated with an EFA in the same way that you change the security group that is associated with an elastic network interface\. For more information, see [Changing the security group](using-eni.md#eni_security_group)\.
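For example, the following command replaces the security groups that are associated with an EFA\. The network interface and security group IDs are placeholders; the security group that you specify must allow all inbound and outbound traffic to and from itself\.
```
aws ec2 modify-network-interface-attribute --network-interface-id eni-0123456789abcdef0 \
--groups sg-0123456789abcdef0
```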