The instance types listed in this topic have been retired on the Alibaba Cloud China website (www.aliyun.com). However, the sn2, sn1, n1, n2, and e3 instance families are still available for purchase on the Alibaba Cloud International website (www.alibabacloud.com).
scchfc6, compute-optimized SCC instance family with high clock speeds
scchfg6, general-purpose SCC instance family with high clock speeds
scchfr6, memory-optimized SCC instance family with high clock speeds
ebmgn6ia, GPU-accelerated compute-optimized ECS Bare Metal Instance family
c4, ce4, and cm4, compute-optimized instance families with high clock speeds
ebmhfg5, ECS Bare Metal Instance family with high clock speeds
sccgn6, GPU-accelerated compute-optimized SCC instance family
sccgn6e, GPU-accelerated compute-optimized SCC instance family
sccgn6ne, GPU-accelerated compute-optimized SCC instance family
Instance type changes
If you are using a retired instance type, we recommend changing it to an available instance type. For more information about the supported changes between instance types, see Limitations and pre-checks for changing instance types.
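As a minimal sketch of such a change, the following example calls the ModifyInstanceSpec operation through the Alibaba Cloud Python SDK core package (aliyun-python-sdk-core). The credentials, region, instance ID, and target instance type are placeholders; a pay-as-you-go instance generally must be in the Stopped state before its type can be changed, and subscription instances use the ModifyPrepayInstanceSpec operation instead.

```python
# Minimal sketch: change a retired instance type to an available one by calling
# ModifyInstanceSpec (pay-as-you-go instances). All keys, IDs, and the target
# instance type below are placeholders.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest(
    domain="ecs.aliyuncs.com",
    version="2014-05-26",
    action_name="ModifyInstanceSpec",  # use ModifyPrepayInstanceSpec for subscription instances
)
request.add_query_param("InstanceId", "i-bp1example")    # instance that uses a retired type
request.add_query_param("InstanceType", "ecs.g6.large")  # target (available) instance type

print(client.do_action_with_exception(request))
```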
scchfc6, compute-optimized SCC instance family with high clock speeds
Instance family overview: Provides all features of an ECS Bare Metal Instance. For more information, see ECS Bare Metal Instance specifications.
Scenarios: Large-scale machine learning training, high-performance scientific computing and simulation, data analytics, batch computing, and video encoding.
Compute:
Offers a CPU-to-memory ratio of 1:2.4.
Processor: 3.1 GHz Intel® Xeon® Platinum 8269 (Cascade Lake) with an all-core turbo frequency of 3.5 GHz.
Storage:
I/O optimized instance.
Supported disk types include enterprise SSDs, ESSD AutoPL disks, standard SSDs, and ultra disks. For more information, see Elastic Block Storage Overview.
Network:
These instances support IPv4 and IPv6. For information about IPv6 communication, see IPv6 communication.
Supports both RoCE v2 networks and VPCs. RoCE networks are dedicated to RDMA communication.
The following table describes the instance types of the scchfc6 family.
Instance type | vCPU | Physical cores | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | ENIs |
ecs.scchfc6.20xlarge | 80 | 40 | 192.0 | 30 | 6,000,000 | 50 | 32 |
ecs.scchfc6.20xlarge provides 80 logical processors on 40 physical cores.
scchfg6, general-purpose SCC instance family with high clock speeds
Instance family overview: Provides all features of an ECS Bare Metal Instance. For more information, see ECS Bare Metal Instance specifications.
Scenarios: Large-scale machine learning training, high-performance scientific computing and simulation, data analytics, batch computing, and video encoding.
Compute:
Offers a CPU-to-memory ratio of 1:4.8.
Processor: 3.1 GHz Intel® Xeon® Platinum 8269 (Cascade Lake) with an all-core turbo frequency of 3.5 GHz.
Storage:
I/O optimized instance.
Supported disk types include enterprise SSDs, ESSD AutoPL disks, standard SSDs, and ultra disks. For more information, see Elastic Block Storage Overview.
Network:
These instances support IPv4 and IPv6. For information about IPv6 communication, see IPv6 communication.
Supports both RoCE v2 networks and VPCs. RoCE networks are dedicated to RDMA communication.
The following table describes the instance types of the scchfg6 family.
Instance type | vCPU | Physical cores | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | ENIs |
ecs.scchfg6.20xlarge | 80 | 40 | 384.0 | 30 | 6,000,000 | 50 | 32 |
ecs.scchfg6.20xlarge provides 80 logical processors on 40 physical cores.
scchfr6, memory-optimized SCC instance family with high clock speeds
Instance family overview: Provides all features of an ECS Bare Metal Instance. For more information, see ECS Bare Metal Instance specifications.
Scenarios: Large-scale machine learning training, high-performance scientific computing and simulation, data analytics, batch computing, and video encoding.
Compute:
Offers a CPU-to-memory ratio of 1:9.6.
Processor: 3.1 GHz Intel® Xeon® Platinum 8269 (Cascade Lake) with an all-core turbo frequency of 3.5 GHz.
Storage:
I/O optimized instance.
Supported disk types include enterprise SSDs, ESSD AutoPL disks, standard SSDs, and ultra disks. For more information, see Elastic Block Storage Overview.
Network:
These instances support IPv4 and IPv6. For information about IPv6 communication, see IPv6 communication.
Supports both RoCE v2 networks and VPCs. RoCE networks are dedicated to RDMA communication.
The following table describes the instance types of the scchfr6 family.
Instance type | vCPU | Physical cores | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | ENIs |
ecs.scchfr6.20xlarge | 80 | 40 | 768.0 | 30 | 6,000,000 | 50 | 32 |
ecs.scchfr6.20xlarge provides 80 logical processors on 40 physical cores.
ebmgn6ia, GPU-accelerated compute-optimized ECS Bare Metal Instance family
Family Introduction:
This instance family uses the third-generation SHENLONG architecture and fast path acceleration on chips to provide predictable and consistent ultra-high computing, storage, and network performance.
This instance family uses NVIDIA T4 GPUs to offer GPU acceleration capabilities for graphics and AI applications. It uses container technology to start more than 60 virtual Android terminals and provide hardware-accelerated video transcoding for each terminal display.
Scenarios:
Remote application services based on Android, such as online standby cloud services, cloud mobile games, cloud phones, and Android service crawlers.
Compute:
Offers a CPU-to-memory ratio of approximately 1:3.
Processor: 2.8 GHz Ampere® Altra® processor with a turbo frequency of 3.0 GHz. The native ARM computing platform provides efficient performance and excellent application compatibility for Android servers.
Storage:
These are I/O optimized instances.
Supported disk types: elastic ephemeral disks, ESSDs, ESSD AutoPL disks, and Regional ESSDs. For more information, see Block Storage Overview.
Network:
These instances support IPv4 and IPv6. For information about IPv6 communication, see IPv6 communication.
The following table describes the instance types of the ebmgn6ia family.
Instance type | vCPU | Memory (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | Multi-queue | ENIs | Private IPv4 addresses per ENI | IPv6 addresses per ENI |
ecs.ebmgn6ia.20xlarge | 80 | 256 | NVIDIA T4 × 2 | 16 GB × 2 | 32 | 24,000,000 | 32 | 15 | 10 | 1 |
Ampere® Altra® processors have specific requirements for operating system kernels. Instances of this instance type can use Alibaba Cloud Linux 3 images or CentOS 8.4 or later images. We recommend using Alibaba Cloud Linux 3 images on the instances. If you want to use a different operating system distribution, patch the kernel of an instance that runs that operating system, create a custom image from the instance, and then use the custom image to create instances of this instance type. For information about kernel patches, visit Ampere Altra (TM) Linux Kernel Porting Guide.
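As a hedged illustration of the custom image workflow described in the preceding note, the following sketch calls the CreateImage operation through the Alibaba Cloud Python SDK core package (aliyun-python-sdk-core) to create a custom image from an instance whose kernel has already been patched. The credentials, region, instance ID, and image name are placeholders.

```python
# Minimal sketch: create a custom image from an instance whose kernel has been
# patched for Ampere Altra processors, so the image can later be used to create
# ebmgn6ia instances. All keys, IDs, and names below are placeholders.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest(
    domain="ecs.aliyuncs.com",
    version="2014-05-26",
    action_name="CreateImage",
)
request.add_query_param("RegionId", "cn-hangzhou")
request.add_query_param("InstanceId", "i-bp1patched")         # instance running the patched kernel
request.add_query_param("ImageName", "altra-patched-kernel")  # placeholder image name

print(client.do_action_with_exception(request))
```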
g5se, storage-enhanced instance family
Features of g5se:
g5se instances can be created only on Dedicated Hosts.
Note: For information about other instance types that can be created on Dedicated Hosts, see Specifications.
When you attach an enterprise SSD, a single instance can achieve up to 1 million IOPS for random read and write operations and up to 32 Gbit/s for sequential read and write operations.
Compute:
Offers a CPU-to-memory ratio of 1:4.
Processor: 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake), which provides stable computing performance.
Storage:
I/O optimized instance.
Supports ESSDs, standard SSDs, and ultra disks.
The storage I/O performance of an instance is proportional to its specifications. A higher specification provides better storage I/O performance.
Note: For information about the storage I/O performance of the next-generation, enterprise-level instance families, see Storage I/O performance.
Network:
Supports IPv6.
Scenarios:
I/O-intensive scenarios, such as medium-to-large OLTP core databases.
Medium-to-large NoSQL databases.
Search and real-time log analytics.
Large enterprise-level commercial software, such as SAP.
The following table describes the instance types of the g5se family.
Instance type | vCPU | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | Multi-queue | ENIs | Private IPv4 addresses per ENI | Disk baseline IOPS | Disk baseline bandwidth (Gbit/s) |
ecs.g5se.large | 2 | 8.0 | 1.0 | 300,000 | 2 | 2 | 6 | 30,000 | 1.5 |
ecs.g5se.xlarge | 4 | 16.0 | 1.5 | 500,000 | 2 | 3 | 6 | 60,000 | 2 |
ecs.g5se.2xlarge | 8 | 32.0 | 2.0 | 800,000 | 2 | 4 | 8 | 85,000 | 3 |
ecs.g5se.4xlarge | 16 | 64.0 | 4.0 | 1,000,000 | 4 | 8 | 10 | 150,000 | 5 |
ecs.g5se.8xlarge | 32 | 128.0 | 7.0 | 2,000,000 | 8 | 8 | 10 | 300,000 | 10 |
ecs.g5se.16xlarge | 64 | 256.0 | 14.0 | 3,000,000 | 16 | 7 | 10 | 750,000 | 25 |
ecs.g5se.18xlarge | 70 | 336.0 | 16.0 | 4,000,000 | 16 | 15 | 10 | 1,000,000 | 32 |
sn2, general-purpose instance family
Features of sn2:
Offers a CPU-to-memory ratio of 1:4.
Processor: 2.5 GHz Intel Xeon E5-2682 v4 (Broadwell), E5-2680 v3 (Haswell), Platinum 8163 (Skylake), or 8269CY (Cascade Lake), which provides stable computing performance.
Note: Instances of this family may be deployed on different server platforms. If your business requires that all instances be deployed on the same server platform, we recommend that you use g6, g6e, or g7 instances.
The network performance of an instance is proportional to its specifications. A higher specification provides better network performance.
Scenarios:
Enterprise applications of various types and sizes.
Small and medium-sized database systems, caches, and search clusters.
Data analytics and computing.
The following table describes the instance types of the sn2 family.
Instance type | vCPU | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | Multi-queue | ENIs |
ecs.sn2.medium | 2 | 8.0 | 0.5 | 100,000 | 1 | 2 |
ecs.sn2.large | 4 | 16.0 | 0.8 | 200,000 | 1 | 3 |
ecs.sn2.xlarge | 8 | 32.0 | 1.5 | 400,000 | 1 | 4 |
ecs.sn2.3xlarge | 16 | 64.0 | 3.0 | 500,000 | 2 | 8 |
ecs.sn2.7xlarge | 32 | 128.0 | 6.0 | 800,000 | 3 | 8 |
ecs.sn2.13xlarge | 56 | 224.0 | 10.0 | 1,200,000 | 4 | 8 |
sn1, compute-optimized instance family
Features of sn1:
Offers a CPU-to-memory ratio of 1:2.
Processor: 2.5 GHz Intel Xeon E5-2682 v4 (Broadwell), E5-2680 v3 (Haswell), Platinum 8163 (Skylake), or 8269CY (Cascade Lake), which provides stable computing performance.
Note: Instances of this family may be deployed on different server platforms. If your business requires that all instances be deployed on the same server platform, we recommend that you use c6, c6e, or c7 instances.
The network performance of an instance is proportional to its specifications. A higher specification provides better network performance.
Scenarios:
Web frontend servers.
Frontend servers for massively multiplayer online (MMO) games.
Data analytics, batch computing, and video encoding.
High-performance scientific and engineering applications.
The following table describes the instance types of the sn1 family.
Instance type | vCPU | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | Multi-queue | ENIs |
ecs.sn1.medium | 2 | 4.0 | 0.5 | 100,000 | 1 | 2 |
ecs.sn1.large | 4 | 8.0 | 0.8 | 200,000 | 1 | 3 |
ecs.sn1.xlarge | 8 | 16.0 | 1.5 | 400,000 | 1 | 4 |
ecs.sn1.3xlarge | 16 | 32.0 | 3.0 | 500,000 | 2 | 8 |
ecs.sn1.7xlarge | 32 | 64.0 | 6.0 | 800,000 | 3 | 8 |
c4, ce4, and cm4, compute-optimized instance families with high clock speeds
Features of c4, ce4, and cm4:
Processor: 3.2 GHz Intel Xeon E5-2667 v4 (Broadwell).
They provide stable computing performance.
They are I/O optimized instances.
They support only standard SSDs and ultra disks.
The network performance of an instance is proportional to its specifications. A higher specification provides better network performance.
Scenarios:
High-performance web frontend servers.
High-performance scientific and engineering applications.
MMO games and video encoding.
The following table describes the instance types of the c4 family.
Instance type | vCPU | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | Multi-queue | ENIs |
ecs.c4.xlarge | 4 | 8.0 | 1.5 | 200,000 | 1 | 3 |
ecs.c4.2xlarge | 8 | 16.0 | 3.0 | 400,000 | 1 | 4 |
ecs.c4.3xlarge | 12 | 24.0 | 4.5 | 600,000 | 2 | 6 |
ecs.c4.4xlarge | 16 | 32.0 | 6.0 | 800,000 | 2 | 8 |
The following table lists the specifications and metrics for ce4.
Instance type | vCPU | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | Multi-queue | ENIs |
ecs.ce4.xlarge | 4 | 32.0 | 1.5 | 200,000 | 1 | 3 |
ecs.ce4.2xlarge | 8 | 64.0 | 3.0 | 400,000 | 1 | 3 |
The following table lists the specifications and metrics for cm4.
Instance type | vCPU | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | Multi-queue | ENIs |
ecs.cm4.xlarge | 4 | 16.0 | 1.5 | 200,000 | 1 | 3 |
ecs.cm4.2xlarge | 8 | 32.0 | 3.0 | 400,000 | 1 | 4 |
ecs.cm4.3xlarge | 12 | 48.0 | 4.5 | 600,000 | 2 | 6 |
ecs.cm4.4xlarge | 16 | 64.0 | 6.0 | 800,000 | 2 | 8 |
ecs.cm4.6xlarge | 24 | 96.0 | 10.0 | 1,200,000 | 4 | 8 |
vgn6i, vGPU-accelerated instance family
Features of vgn6i:
Compute:
Uses NVIDIA T4 GPU accelerators.
Instances contain vGPUs that are virtualized from sliced GPUs.
Each vGPU provides 1/4 or 1/2 of the computing power of an NVIDIA Tesla T4 GPU.
Each vGPU provides 4 GB or 8 GB of GPU memory.
Offers a CPU-to-memory ratio of approximately 1:5.
Processor: 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake).
Storage:
I/O optimized instance.
It supports only standard SSDs and ultra disks.
Network:
Supports IPv6.
The network performance of an instance is proportional to its specifications. A higher specification provides better network performance.
Scenarios:
Real-time rendering for cloud gaming.
Real-time rendering for AR and VR.
AI (deep learning and machine learning) inference, suitable for elastic deployment of Internet services that include AI inference applications.
Deep learning training and practice environments.
Deep learning model experiment environments.
The following table describes the instance types of the vgn6i family.
Instance type | vCPU | Memory (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues (primary NIC/secondary NIC) | ENIs | Private IPv4 addresses per ENI |
ecs.vgn6i-m4.xlarge | 4 | 23 | NVIDIA T4 × 1/4 | 16 GB × 1/4 | 2 | 500,000 | 4/2 | 3 | 10 |
ecs.vgn6i-m8.2xlarge | 10 | 46 | NVIDIA T4 × 1/2 | 16 GB × 1/2 | 4 | 800,000 | 8/2 | 4 | 10 |
vgn5i, vGPU-accelerated instance family
Features of vgn5i:
Compute:
Uses NVIDIA P4 GPU accelerators.
Instances contain vGPUs that are virtualized from sliced GPUs.
Each vGPU provides 1/8, 1/4, 1/2, or all of the computing power of an NVIDIA Tesla P4 GPU.
Each vGPU provides 1 GB, 2 GB, 4 GB, or 8 GB of GPU memory.
Offers a CPU-to-memory ratio of 1:3.
Processor: 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell).
Storage:
I/O optimized instance.
It supports only standard SSDs and ultra disks.
Network:
Supports IPv6.
The network performance of an instance is proportional to its specifications. A higher specification provides better network performance.
Scenarios:
Real-time rendering for cloud gaming.
Real-time rendering for AR and VR.
AI (deep learning and machine learning) inference, suitable for elastic deployment of Internet services that include AI inference applications.
Deep learning training and practice environments.
Deep learning model experiment environments.
The following table describes the instance types of the vgn5i family.
Instance type | vCPU | Memory (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | Multi-queue | ENIs | Private IPv4 addresses per ENI |
ecs.vgn5i-m1.large | 2 | 6 | NVIDIA P4 × 1/8 | 8 GB × 1/8 | 1 | 300,000 | 2 | 2 | 6 |
ecs.vgn5i-m2.xlarge | 4 | 12 | NVIDIA P4 × 1/4 | 8 GB × 1/4 | 2 | 500,000 | 2 | 3 | 10 |
ecs.vgn5i-m4.2xlarge | 8 | 24 | NVIDIA P4 × 1/2 | 8 GB × 1/2 | 3 | 800,000 | 2 | 4 | 10 |
ecs.vgn5i-m8.4xlarge | 16 | 48 | NVIDIA P4 × 1 | 8 GB × 1 | 5 | 1,000,000 | 4 | 5 | 20 |
The GPU column in the preceding table indicates the GPU model and GPU slicing information for each instance type. Each GPU can be sliced into multiple partitions, and each partition can be allocated to an instance as a vGPU. Example:
In NVIDIA P4 × 1/8, NVIDIA P4 indicates the GPU model, and 1/8 indicates that each GPU is divided into eight slices and each instance uses one slice.
gn4, GPU-accelerated compute-optimized instance family
Features of gn4:
It uses NVIDIA M40 GPU cards.
Compute:
Offers multiple CPU-to-memory ratios.
Processor: 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell).
Storage:
I/O optimized instance.
It supports only standard SSDs and ultra disks.
Network:
The network performance of an instance is proportional to its specifications. A higher specification provides better network performance.
Scenarios:
Deep learning.
Scientific computing, such as computational fluid dynamics, computational finance, genomics research, and environmental analysis.
High-performance computing, rendering, multimedia encoding and decoding, and other server-side GPU computing workloads.
The following table describes the instance types of the gn4 family.
Instance type | vCPU | Memory (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | Multi-queue | ENIs | Private IPv4 addresses per ENI |
ecs.gn4-c4g1.xlarge | 4 | 30.0 | NVIDIA M40 × 1 | 12 GB × 1 | 3.0 | 300,000 | 1 | 3 | 10 |
ecs.gn4-c8g1.2xlarge | 8 | 30.0 | NVIDIA M40 × 1 | 12 GB × 1 | 3.0 | 400,000 | 1 | 4 | 10 |
ecs.gn4.8xlarge | 32 | 48.0 | NVIDIA M40 × 1 | 12 GB × 1 | 6.0 | 800,000 | 3 | 8 | 20 |
ecs.gn4-c4g1.2xlarge | 8 | 60.0 | NVIDIA M40 × 2 | 12 GB × 2 | 5.0 | 500,000 | 1 | 4 | 10 |
ecs.gn4-c8g1.4xlarge | 16 | 60.0 | NVIDIA M40 × 2 | 12 GB × 2 | 5.0 | 500,000 | 1 | 8 | 20 |
ecs.gn4.14xlarge | 56 | 96.0 | NVIDIA M40 × 2 | 12 GB × 2 | 10.0 | 1,200,000 | 4 | 8 | 20 |
ga1, GPU-accelerated compute-optimized instance family
Features of ga1:
It uses AMD S7150 GPU cards.
It is equipped with high-performance NVMe SSD local disks.
Compute:
Offers a CPU-to-memory ratio of 1:2.5.
Processor: 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell).
Storage:
I/O optimized instance.
It supports only standard SSDs and ultra disks.
Network:
The network performance of an instance is proportional to its specifications. A higher specification provides better network performance.
Scenarios:
Rendering, multimedia encoding and decoding.
Machine learning, high-performance computing, and high-performance databases.
Other server-side workloads that require powerful parallel floating-point computing capabilities.
The following table describes the instance types of the ga1 family.
Instance type | vCPU | Memory (GiB) | Local storage (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | Multi-queue | ENIs | Private IPv4 addresses per ENI |
ecs.ga1.xlarge | 4 | 10.0 | 1 × 87 | AMD S7150 × 1/4 | 8 GB × 1/4 | 1.0 | 200,000 | 1 | 3 | 10 |
ecs.ga1.2xlarge | 8 | 20.0 | 1 × 175 | AMD S7150 × 1/2 | 8 GB × 1/2 | 1.5 | 300,000 | 1 | 4 | 10 |
ecs.ga1.4xlarge | 16 | 40.0 | 1 × 350 | AMD S7150 × 1 | 8 GB × 1 | 3.0 | 500,000 | 2 | 8 | 20 |
ecs.ga1.8xlarge | 32 | 80.0 | 1 × 700 | AMD S7150 × 2 | 8 GB × 2 | 6.0 | 800,000 | 3 | 8 | 20 |
ecs.ga1.14xlarge | 56 | 160.0 | 1 × 1,400 | AMD S7150 × 4 | 8 GB × 4 | 10.0 | 1,200,000 | 4 | 8 | 20 |
ebmc4, compute-optimized ECS Bare Metal Instance family
Features of ebmc4:
It provides dedicated hardware resources and physical isolation.
Compute
Offers a CPU-to-memory ratio of 1:2.
Processor: 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell) with a turbo frequency of 3.0 GHz.
Storage
All instances are I/O optimized.
It supports only standard SSDs and ultra disks.
Network
Supports only VPCs.
High network performance with a packet forwarding rate of 4 million PPS.
Scenarios
Workloads that require direct access to physical resources or require a license to be bound to the hardware.
Compatibility with third-party hypervisors to meet hybrid cloud and multicloud deployment requirements.
Containers, such as Docker, Clear Containers, and Pouch.
Heavy-duty database applications for medium-to-large enterprises.
Video encoding.
The following table describes the instance types of the ebmc4 family.
Instance type | vCPU | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | ENIs | Private IPv4 addresses per ENI |
ecs.ebmc4.8xlarge | 32 | 64 | 10 | 4,000,000 | 12 | 10 |
ebmhfg5, ECS Bare Metal Instance family with high clock speeds
Features of ebmhfg5:
It provides dedicated hardware resources and physical isolation.
It supports Intel® SGX encrypted computing.
Downtime migration is not supported for this instance family.
To enable failover, you can call the ModifyInstanceMaintenanceAttributes operation and set the ActionOnMaintenance parameter to AutoRedeploy, as shown in the sketch after the specifications table below.
It offers a CPU-to-memory ratio of 1:4.
Processor: 3.7 GHz Intel® Xeon® E3-1240 v6 (Skylake) with a turbo frequency of 4.1 GHz.
All instances are I/O optimized.
It supports only standard SSDs and ultra disks.
It supports only VPCs.
High network performance with a packet forwarding rate of 2 million PPS.
Scenarios:
Workloads that require direct access to physical resources or require a license to be bound to the hardware.
High-performance applications for gaming and finance.
High-performance web servers.
Enterprise applications, such as high-performance databases.
The following table describes the instance types of the ebmhfg5 family.
Instance type | vCPU | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | ENIs | Private IPv4 addresses per ENI |
ecs.ebmhfg5.2xlarge | 8 | 32 | 6 | 2,000,000 | 6 | 8 |
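The following minimal sketch shows how the failover behavior mentioned in the ebmhfg5 feature list could be configured by calling ModifyInstanceMaintenanceAttributes through the Alibaba Cloud Python SDK core package (aliyun-python-sdk-core). The credentials, region, and instance ID are placeholders.

```python
# Minimal sketch: set the maintenance action of an instance to AutoRedeploy so that
# the instance fails over to a healthy host during maintenance events. All keys and
# IDs below are placeholders.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest(
    domain="ecs.aliyuncs.com",
    version="2014-05-26",
    action_name="ModifyInstanceMaintenanceAttributes",
)
request.add_query_param("RegionId", "cn-hangzhou")
request.add_query_param("InstanceId.1", "i-bp1example")         # ebmhfg5 instance (placeholder ID)
request.add_query_param("ActionOnMaintenance", "AutoRedeploy")  # fail over instead of stopping

print(client.do_action_with_exception(request))
```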
sccgn6, GPU-accelerated compute-optimized SCC instance family
Features of sccgn6:
This instance family supports all features of an ECS Bare Metal Instance. For more information, see ECS Bare Metal Instance families.
Compute:
GPU accelerator: V100 (SXM2-based).
It features an innovative Volta architecture.
It has 16 GB of HBM2 GPU memory.
It has 5,120 CUDA Cores.
It has 640 Tensor Cores.
It has 900 GB/s of GPU memory bandwidth.
Each GPU supports six NVLink links. NVLink is a bidirectional link. The bandwidth of each unidirectional link is 25 GB/s, for a total bandwidth of 6 × 25 × 2 = 300 GB/s.
It offers a CPU-to-memory ratio of 1:4.
Processor: 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake), which provides stable computing performance.
Storage:
I/O optimized instance.
It supports only ESSDs, ESSD AutoPL disks, standard SSDs, and ultra disks.
It supports high-performance Cloud Parallel File System (CPFS).
Network:
It supports IPv6.
It supports VPCs.
It supports RoCE v2 networks for low-latency RDMA communication.
Scenarios:
Ultra-large-scale machine learning cluster training.
Large-scale high-performance scientific computing and simulation.
Large-scale data analytics, batch computing, and video encoding.
The following table describes the instance types of the sccgn6 family.
Instance type | vCPU | Memory (GiB) | GPU | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | Multi-queue | ENIs | Private IPv4 addresses per ENI |
ecs.sccgn6.24xlarge | 96 | 384.0 | NVIDIA V100 × 8 | 30 | 4,500,000 | 50 | 8 | 32 | 10 |
sccgn6e, GPU-accelerated compute-optimized SCC instance family
Features of sccgn6e:
This instance family provides all features of an ECS Bare Metal Instance. For more information, see ECS Bare Metal Instance families.
Compute:
GPU accelerator: V100 (SXM2-based).
It features an innovative Volta architecture.
It has 32 GB of HBM2 GPU memory.
It has 5,120 CUDA Cores.
It has 640 Tensor Cores.
It has 900 GB/s of GPU memory bandwidth.
Each GPU supports six NVLink links. NVLink is a bidirectional link. The bandwidth of each unidirectional link is 25 GB/s, for a total bandwidth of 6 × 25 × 2 = 300 GB/s.
It offers a CPU-to-memory ratio of 1:8.
Processor: 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake), which provides stable computing performance.
Storage:
I/O optimized instance.
It supports only ESSDs, ESSD AutoPL disks, standard SSDs, and ultra disks.
It supports high-performance CPFS.
Network:
It supports IPv6.
It supports VPCs.
It supports RoCE v2 networks for low-latency RDMA communication.
Scenarios:
Ultra-large-scale machine learning cluster training.
Large-scale high-performance scientific computing and simulation.
Large-scale data analytics, batch computing, and video encoding.
The following table describes the instance types of the sccgn6e family.
Instance type | vCPU | Memory (GiB) | GPU | GPU memory (GB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | Multi-queue | ENIs | Private IPv4 addresses per ENI |
ecs.sccgn6e.24xlarge | 96 | 768.0 | NVIDIA V100 × 8 | 32 GB × 8 | 32 | 4,800,000 | 50 | 8 | 32 | 10 |
sccg5, general-purpose SCC instance family
Features of sccg5:
Provides all features of an ECS Bare Metal Instance. For more information, see ECS Bare Metal Instance families.
Compute:
It offers a CPU-to-memory ratio of 1:4.
Processor: 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake), which provides stable computing performance.
Storage:
All instances are I/O optimized.
It supports only standard SSDs and ultra disks.
Network:
It supports both RoCE v2 networks and VPCs. RoCE networks are dedicated to RDMA communication.
Scenarios:
Large-scale machine learning training.
Large-scale high-performance scientific computing and simulation.
Large-scale data analytics, batch computing, and video encoding.
The following table describes the instance types of the sccg5 family.
Instance type | vCPU | Physical cores | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | Multi-queue | ENIs | Private IPv4 addresses per ENI |
ecs.sccg5.24xlarge | 96 | 48 | 384.0 | 10 | 4,500,000 | 50 | 8 | 32 | 10 |
scch5, SCC instance family with high clock speeds
Instance family overview: Provides all features of an ECS Bare Metal Instance. For more information, see ECS Bare Metal Instance specifications.
Scenarios: Large-scale machine learning training, high-performance scientific computing and simulation, data analytics, batch computing, and video encoding.
Compute:
It offers a CPU-to-memory ratio of 1:3.
Processor: 3.1 GHz Intel® Xeon® Gold 6149 (Skylake).
Storage:
It is an I/O optimized instance.
Supported disk types: standard SSDs and ultra disks.
Network:
It supports only IPv4.
It supports both RoCE v2 networks and VPCs. RoCE networks are dedicated to RDMA communication.
The following table describes the instance types of the scch5 family.
Instance type | vCPU | Physical cores | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | ENIs |
ecs.scch5.16xlarge | 64 | 32 | 192.0 | 10 | 4,500,000 | 50 | 32 |
ecs.scch5.16xlarge provides 64 logical processors on 32 physical cores.
sccgn6ne, GPU-accelerated compute-optimized SCC instance family
Features of sccgn6ne:
It provides all the features of an ECS Bare Metal Instance.
Compute:
GPU accelerator: V100 (SXM2-based).
It features an innovative Volta architecture.
It has 32 GB of HBM2 GPU memory.
It has 5,120 CUDA Cores.
It has 640 Tensor Cores.
It has 900 GB/s of GPU memory bandwidth.
Each GPU supports six NVLink links. NVLink is a bidirectional link. The bandwidth of each unidirectional link is 25 GB/s, for a total bandwidth of 6 × 25 × 2 = 300 GB/s.
It offers a CPU-to-memory ratio of 1:4.
Processor: 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake), which provides stable computing performance.
Storage:
It is an I/O optimized instance.
It supports ESSDs, standard SSDs, and ultra disks.
It supports high-performance CPFS.
Network:
It supports IPv6.
It supports VPCs.
It supports RoCE v2 networks for low-latency RDMA communication.
Scenarios:
Ultra-large-scale machine learning cluster training.
Large-scale high-performance scientific computing and simulation.
Large-scale data analytics, batch computing, and video encoding.
The following table describes the instance types of the sccgn6ne family.
Instance type | vCPU | Memory (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | Multi-queue | ENIs | Private IPv4 addresses per ENI |
ecs.sccgn6ne.24xlarge | 96 | 768.0 | NVIDIA V100 × 8 | 32 GB × 8 | 32.0 | 4,800,000 | 100 | 16 | 8 | 20 |
d1, big data instance family
Introduction: Instances are equipped with high-capacity, high-throughput SATA HDD local disks and provide a maximum inter-instance network bandwidth of 17 Gbit/s.
Scenarios:
Hadoop MapReduce, HDFS, Hive, and HBase.
Spark in-memory computing and MLlib.
Scenarios in which customers in industries such as the Internet and finance need to compute, store, and analyze big data.
Elasticsearch and logs.
Compute:
It offers a CPU-to-memory ratio of 1:4, designed for big data scenarios.
Processor: 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell) or Intel® Xeon® Platinum 8163 (Skylake), which provides stable computing performance.
Storage:
It is an I/O optimized instance.
Supported disk types: standard SSDs and ultra disks.
Network:
It supports only IPv4.
The network performance of an instance corresponds to its instance type. A larger instance type provides higher network performance.
The following table describes the instance types of the d1 family.
Instance type | vCPU | Memory (GiB) | Local storage | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) |
ecs.d1.2xlarge | 8 | 32.0 | 4 × 5,905 GB (4 × 5,500 GiB) | 3.0 | 300,000
ecs.d1.3xlarge | 12 | 48.0 | 6 × 5,905 GB (6 × 5,500 GiB) | 4.0 | 400,000
ecs.d1.4xlarge | 16 | 64.0 | 8 × 5,905 GB (8 × 5,500 GiB) | 6.0 | 600,000
ecs.d1.6xlarge | 24 | 96.0 | 12 × 5,905 GB (12 × 5,500 GiB) | 8.0 | 800,000
ecs.d1-c8d3.8xlarge | 32 | 128.0 | 12 × 5,905 GB (12 × 5,500 GiB) | 10.0 | 1,000,000
ecs.d1.8xlarge | 32 | 128.0 | 16 × 5,905 GB (16 × 5,500 GiB) | 10.0 | 1,000,000
ecs.d1-c14d3.14xlarge | 56 | 160.0 | 12 × 5,905 GB (12 × 5,500 GiB) | 17.0 | 1,800,000
ecs.d1.14xlarge | 56 | 224.0 | 28 × 5,905 GB (28 × 5,500 GiB) | 17.0 | 1,800,000
i1, instance family with local SSDs
Introduction: Instances are equipped with high-performance NVMe SSD local disks that deliver high IOPS, large throughput, and low access latency.
Scenarios:
OLTP scenarios and high-performance relational databases.
NoSQL databases, such as Cassandra and MongoDB.
Search scenarios, such as Elasticsearch.
Compute:
It offers a CPU-to-memory ratio of 1:4, designed for scenarios such as high-performance databases.
Processor: 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell) or Intel® Xeon® Platinum 8163 (Skylake), which provides stable computing performance.
Storage:
It is an I/O optimized instance.
Supported disk types: standard SSDs and ultra disks.
Network:
It supports only IPv4.
The network performance of an instance corresponds to its instance type. A larger instance type provides higher network performance.
The following table describes the instance types of the i1 family.
Instance type | vCPU | Memory (GiB) | Local storage | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) |
ecs.i1.xlarge | 4 | 16 | 2 × 111 GB (2 × 104 GiB) | 0.8 | 200,000 |
ecs.i1.2xlarge | 8 | 32 | 2 × 223 GB (2 × 208 GiB) | 1.5 | 400,000 |
ecs.i1.3xlarge | 12 | 48 | 2 × 335 GB (2 × 312 GiB) | 2 | 400,000 |
ecs.i1.4xlarge | 16 | 64 | 2 × 446 GB (2 × 416 GiB) | 3 | 500,000 |
ecs.i1-c5d1.4xlarge | 16 | 64 | 2 × 1,563 GB (2 × 1,456 GiB) | 3 | 400,000
ecs.i1.6xlarge | 24 | 96 | 2 × 670 GB (2 × 624 GiB) | 4.5 | 600,000 |
ecs.i1.8xlarge | 32 | 128 | 2 × 893 GB (2 × 832 GiB) | 6 | 800,000 |
ecs.i1-c10d1.8xlarge | 32 | 128 | 2 × 1,563 GB (2 × 1,456 GiB) | 6 | 800,000
ecs.i1.14xlarge | 56 | 224 | 2 × 1,563 GB (2 × 1,456 GiB) | 10 | 1,200,000
n1, n2, and e3, shared-resource instances
Features of n1, n2, and e3:
Processor: 2.5 GHz Intel Xeon E5-2680 v3 (Haswell), Platinum 8163 (Skylake), or 8269CY (Cascade Lake).
All instances are I/O optimized.
They support standard SSDs and ultra disks.
The network performance of an instance is proportional to its specifications. A higher specification provides better network performance.
Instance family | Features | CPU-to-memory ratio
n1 | Shared compute-optimized instance family | 1:2
n2 | Shared general-purpose instance family | 1:4
e3 | Shared memory-optimized instance family | 1:8
The following table describes the instance types of the n1 family.
Instance type | vCPU | Memory (GiB) | ENIs |
ecs.n1.tiny | 1 | 1.0 | 1 |
ecs.n1.small | 1 | 2.0 | 1 |
ecs.n1.medium | 2 | 4.0 | 1 |
ecs.n1.large | 4 | 8.0 | 2 |
ecs.n1.xlarge | 8 | 16.0 | 2 |
ecs.n1.3xlarge | 16 | 32.0 | 2 |
ecs.n1.7xlarge | 32 | 64.0 | 2 |
The following table describes the instance types of the n2 family.
Instance type | vCPU | Memory (GiB) | ENIs |
ecs.n2.small | 1 | 4.0 | 1 |
ecs.n2.medium | 2 | 8.0 | 1 |
ecs.n2.large | 4 | 16.0 | 2 |
ecs.n2.xlarge | 8 | 32.0 | 2 |
ecs.n2.3xlarge | 16 | 64.0 | 2 |
ecs.n2.7xlarge | 32 | 128.0 | 2 |
The following table describes the instance types of the e3 family.
Instance type | vCPU | Memory (GiB) | ENIs |
ecs.e3.small | 1 | 8.0 | 1 |
ecs.e3.medium | 2 | 16.0 | 1 |
ecs.e3.large | 4 | 32.0 | 2 |
ecs.e3.xlarge | 8 | 64.0 | 2 |
ecs.e3.3xlarge | 16 | 128.0 | 2 |
Series I instance families
Series I instance families include t1, s1, s2, s3, m1, m2, c1, and c2. These are legacy shared-resource instance families whose instance types are grouped by the number of vCPUs (such as 1, 2, 4, 8, or 16) rather than by instance family.
Features of Series I instance families:
Processor: Intel Xeon E5-2420 with a clock speed of at least 1.9 GHz.
They use DDR3 memory.
I/O optimized and non-I/O optimized options are available.
I/O optimized instance types support standard SSDs and ultra disks. The following table describes the instance types.
Category | Instance type | vCPU | Memory (GiB)
Standard | ecs.s2.large | 2 | 4
Standard | ecs.s2.xlarge | 2 | 8
Standard | ecs.s2.2xlarge | 2 | 16
Standard | ecs.s3.medium | 4 | 4
Standard | ecs.s3.large | 4 | 8
High Memory | ecs.m1.medium | 4 | 16
High Memory | ecs.m2.medium | 4 | 32
High Memory | ecs.m1.xlarge | 8 | 32
High CPU | ecs.c1.small | 8 | 8
High CPU | ecs.c1.large | 8 | 16
High CPU | ecs.c2.medium | 16 | 16
High CPU | ecs.c2.large | 16 | 32
High CPU | ecs.c2.xlarge | 16 | 64
Non-I/O optimized instance types support only basic disks. The following table describes the instance types.
Category | Instance type | vCPU | Memory (GiB)
Tiny | ecs.t1.small | 1 | 1
Standard | ecs.s1.small | 1 | 2
Standard | ecs.s1.medium | 1 | 4
Standard | ecs.s1.large | 1 | 8
Standard | ecs.s2.small | 2 | 2
Standard | ecs.s2.large | 2 | 4
Standard | ecs.s2.xlarge | 2 | 8
Standard | ecs.s2.2xlarge | 2 | 16
Standard | ecs.s3.medium | 4 | 4
Standard | ecs.s3.large | 4 | 8
High Memory | ecs.m1.medium | 4 | 16
High Memory | ecs.m2.medium | 4 | 32
High Memory | ecs.m1.xlarge | 8 | 32
High CPU | ecs.c1.small | 8 | 8
High CPU | ecs.c1.large | 8 | 16
High CPU | ecs.c2.medium | 16 | 16
High CPU | ecs.c2.large | 16 | 32
High CPU | ecs.c2.xlarge | 16 | 64