This topic describes all retired instance types on the China site (aliyun.com). However, the sn1, sn2, n1, n2, and e3 instance types are still available for purchase on the International site (alibabacloud.com).
c4, ce4, and cm4, compute-optimized instance families with high clock speeds
ebmhfg5, ECS Bare Metal Instance family with high clock speeds
sccgn6, GPU-accelerated compute-optimized SCC instance family
sccgn6e, GPU-accelerated compute-optimized SCC instance family
sccgn6ne, GPU-accelerated compute-optimized SCC instance family
Instance type changes
If you are using a retired instance type, we recommend that you change the instance type to another instance type that is available for purchase. For information about the supported changes between instance types, see Instance families that support instance type changes.
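For example, a stopped pay-as-you-go instance can be changed to an available instance type by calling the ModifyInstanceSpec operation (subscription instances use ModifyPrepayInstanceSpec instead). The following Python sketch uses the generic CommonRequest style of the Alibaba Cloud Python SDK; the instance ID and the target instance type are placeholders, and you should first confirm the supported target types, for example by calling DescribeResourcesModification.

```python
# Minimal sketch: change a retired instance type to an available one.
# Assumes a stopped pay-as-you-go instance; the instance ID and target
# instance type below are placeholders.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest()
request.set_domain("ecs.aliyuncs.com")
request.set_version("2014-05-26")
request.set_action_name("ModifyInstanceSpec")
request.add_query_param("InstanceId", "i-bp1example")
request.add_query_param("InstanceType", "ecs.g6.large")  # verified target type

print(client.do_action_with_exception(request))
```

After the change takes effect, start the instance again and verify that your workloads run as expected on the new instance type.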
g5se, storage-enhanced instance family
Features:
g5se instances can be created only on dedicated hosts. For an example of how to specify a dedicated host when you create an instance, see the sketch after the instance type table in this section.
Note: For information about instances of other instance types that can be created on dedicated hosts, see Dedicated host types.
A single g5se instance to which enhanced SSDs (ESSDs) are attached can deliver up to 1,000,000 random IOPS and up to 32 Gbit/s of sequential read/write throughput.
Compute:
Offers a CPU-to-memory ratio of 1:4.
Uses 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake) processors for consistent computing performance.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports ESSDs, standard SSDs, and ultra disks.
Provides high storage I/O performance based on large computing capacity.
Note: For information about the storage I/O performance of the next-generation, enterprise-level instance families, see Storage I/O performance.
Network:
Supports IPv6.
Supported scenarios:
I/O-intensive scenarios such as large and medium-sized online transaction processing (OLTP) core databases
Large and medium-sized NoSQL databases
Search and real-time log analytics
Traditional large enterprise-level commercial software such as SAP
Instance types
Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | Elastic network interfaces (ENIs) | Private IPv4 addresses per ENI | Disk baseline IOPS | Disk baseline bandwidth (Gbit/s) |
ecs.g5se.large | 2 | 8.0 | 1.0 | 300,000 | 2 | 2 | 6 | 30,000 | 1.5 |
ecs.g5se.xlarge | 4 | 16.0 | 1.5 | 500,000 | 2 | 3 | 6 | 60,000 | 2 |
ecs.g5se.2xlarge | 8 | 32.0 | 2.0 | 800,000 | 2 | 4 | 8 | 85,000 | 3 |
ecs.g5se.4xlarge | 16 | 64.0 | 4.0 | 1,000,000 | 4 | 8 | 10 | 150,000 | 5 |
ecs.g5se.8xlarge | 32 | 128.0 | 7.0 | 2,000,000 | 8 | 8 | 10 | 300,000 | 10 |
ecs.g5se.16xlarge | 64 | 256.0 | 14.0 | 3,000,000 | 16 | 7 | 10 | 750,000 | 25 |
ecs.g5se.18xlarge | 70 | 336.0 | 16.0 | 4,000,000 | 16 | 15 | 10 | 1,000,000 | 32 |
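As noted in the features above, g5se instances can be created only on dedicated hosts. The following Python sketch illustrates this by passing the DedicatedHostId parameter to the RunInstances operation (CommonRequest style); all IDs are placeholders, and a real call also needs image, network, and billing parameters that match your account.

```python
# Minimal sketch: create a g5se instance on an existing dedicated host.
# All IDs below are placeholders.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest()
request.set_domain("ecs.aliyuncs.com")
request.set_version("2014-05-26")
request.set_action_name("RunInstances")
request.add_query_param("RegionId", "cn-hangzhou")
request.add_query_param("InstanceType", "ecs.g5se.xlarge")
request.add_query_param("DedicatedHostId", "dh-bp1example")  # target dedicated host
request.add_query_param("ImageId", "<image-id>")
request.add_query_param("SecurityGroupId", "<security-group-id>")
request.add_query_param("VSwitchId", "<vswitch-id>")

print(client.do_action_with_exception(request))
```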
sn2, general-purpose instance family
Features:
Offers a CPU-to-memory ratio of 1:4.
Uses 2.5 GHz Intel Xeon E5-2682 v4 (Broadwell), E5-2680 v3 (Haswell), Platinum 8163 (Skylake), or 8269CY (Cascade Lake) processors for consistent computing performance.
Note: Instances of this instance family may be deployed on different server platforms. If your business requires all instances to be deployed on the same server platform, we recommend that you use the g6, g6e, or g7 instance family.
Provides high network performance based on large computing capacity.
Supported scenarios:
Enterprise-level applications of various types and sizes
Small and medium-sized database systems, caches, and search clusters
Data analytics and computing
Instance types
Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs |
ecs.sn2.medium | 2 | 8.0 | 0.5 | 100,000 | 1 | 2 |
ecs.sn2.large | 4 | 16.0 | 0.8 | 200,000 | 1 | 3 |
ecs.sn2.xlarge | 8 | 32.0 | 1.5 | 400,000 | 1 | 4 |
ecs.sn2.3xlarge | 16 | 64.0 | 3.0 | 500,000 | 2 | 8 |
ecs.sn2.7xlarge | 32 | 128.0 | 6.0 | 800,000 | 3 | 8 |
ecs.sn2.13xlarge | 56 | 224.0 | 10.0 | 1,200,000 | 4 | 8 |
sn1, compute-optimized instance family
Features:
Offers a CPU-to-memory ratio of 1:2.
Uses 2.5 GHz Intel Xeon E5-2682 v4 (Broadwell), E5-2680 v3 (Haswell), Platinum 8163 (Skylake), or 8269CY (Cascade Lake) processors for consistent computing performance.
Note: Instances of this instance family may be deployed on different server platforms. If your business requires all instances to be deployed on the same server platform, we recommend that you use the c6, c6e, or c7 instance family.
Provides high network performance based on large computing capacity.
Supported scenarios:
Web frontend servers
Frontend servers of massively multiplayer online (MMO) games
Data analytics, batch processing, and video encoding
High-performance scientific and engineering applications
Instance types
Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs |
ecs.sn1.medium | 2 | 4.0 | 0.5 | 100,000 | 1 | 2 |
ecs.sn1.large | 4 | 8.0 | 0.8 | 200,000 | 1 | 3 |
ecs.sn1.xlarge | 8 | 16.0 | 1.5 | 400,000 | 1 | 4 |
ecs.sn1.3xlarge | 16 | 32.0 | 3.0 | 500,000 | 2 | 8 |
ecs.sn1.7xlarge | 32 | 64.0 | 6.0 | 800,000 | 3 | 8 |
c4, ce4, and cm4, compute-optimized instance families with high clock speeds
Features:
Use 3.2 GHz Intel Xeon E5-2667 v4 (Broadwell) processors.
Offer consistent computing performance.
Are instance families in which all instances are I/O optimized.
Support standard SSDs and ultra disks.
Provide high network performance based on large computing capacity.
Supported scenarios:
High-performance web frontend servers
High-performance scientific and engineering applications
MMO games and video encoding
c4 instance types
Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs |
ecs.c4.xlarge | 4 | 8.0 | 1.5 | 200,000 | 1 | 3 |
ecs.c4.2xlarge | 8 | 16.0 | 3.0 | 400,000 | 1 | 4 |
ecs.c4.3xlarge | 12 | 24.0 | 4.5 | 600,000 | 2 | 6 |
ecs.c4.4xlarge | 16 | 32.0 | 6.0 | 800,000 | 2 | 8 |
ce4 instance types
Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs |
ecs.ce4.xlarge | 4 | 32.0 | 1.5 | 200,000 | 1 | 3 |
ecs.ce4.2xlarge | 8 | 64.0 | 3.0 | 400,000 | 1 | 3 |
cm4 instance types
Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs |
ecs.cm4.xlarge | 4 | 16.0 | 1.5 | 200,000 | 1 | 3 |
ecs.cm4.2xlarge | 8 | 32.0 | 3.0 | 400,000 | 1 | 4 |
ecs.cm4.3xlarge | 12 | 48.0 | 4.5 | 600,000 | 2 | 6 |
ecs.cm4.4xlarge | 16 | 64.0 | 6.0 | 800,000 | 2 | 8 |
ecs.cm4.6xlarge | 24 | 96.0 | 10.0 | 1,200,000 | 4 | 8 |
f3, FPGA-accelerated compute-optimized instance family
Features:
This instance family uses Xilinx 16 nm Virtex UltraScale+ VU9P FPGAs.
Compute:
Offers a CPU-to-memory ratio of 1:4.
Uses 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake) processors.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports standard SSDs and ultra disks.
Network:
Provides high network performance based on large computing capacity.
Supported scenarios:
Deep learning inference
Genomics research
Database acceleration
Image transcoding such as conversion of JPEG images to WebP images
Real-time video processing such as H.265 video compression
Instance types
Instance type | vCPUs | Memory (GiB) | FPGA | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs | Private IPv4 addresses per ENI |
ecs.f3-c4f1.xlarge | 4 | 16.0 | 1 * Xilinx VU9P | 1.5 | 30 | 2 | 3 | 10 |
ecs.f3-c8f1.2xlarge | 8 | 32.0 | 1 * Xilinx VU9P | 2.5 | 50 | 4 | 4 | 10 |
ecs.f3-c16f1.4xlarge | 16 | 64.0 | 1 * Xilinx VU9P | 5.0 | 100 | 4 | 8 | 20 |
ecs.f3-c16f1.8xlarge | 32 | 128.0 | 2 * Xilinx VU9P | 10.0 | 200 | 8 | 8 | 20 |
ecs.f3-c16f1.16xlarge | 64 | 256.0 | 4 * Xilinx VU9P | 20.0 | 250 | 16 | 8 | 20 |
vgn6i, vGPU-accelerated instance family
Features:
Compute:
Uses NVIDIA T4 GPUs.
Uses vGPUs.
Supports the 1/4 and 1/2 compute capacity of NVIDIA Tesla T4 GPUs.
Supports 4 GB and 8 GB of GPU memory.
Offers a CPU-to-memory ratio of 1:5.
Uses 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake) processors.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports standard SSDs and ultra disks.
Network:
Supports IPv6.
Provides high network performance based on large computing capacity.
Supported scenarios:
Real-time rendering for cloud gaming
Real-time rendering for Augmented Reality (AR) and Virtual Reality (VR) applications
AI (deep learning and machine learning) inference for elastic Internet service deployment
Educational environments for deep learning
Modeling and experimentation environments for deep learning
Instance types
Instance type | vCPUs | Memory (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues (primary NIC/secondary NIC) | ENIs | Private IPv4 addresses per ENI |
ecs.vgn6i-m4.xlarge | 4 | 23 | NVIDIA T4 * 1/4 | 16 GB * 1/4 | 2 | 500,000 | 4/2 | 3 | 10 |
ecs.vgn6i-m8.2xlarge | 10 | 46 | NVIDIA T4 * 1/2 | 16 GB * 1/2 | 4 | 800,000 | 8/2 | 4 | 10 |
vgn5i, vGPU-accelerated instance family
Features:
Compute:
Uses NVIDIA P4 GPUs.
Uses vGPUs.
Supports the 1/8, 1/4, 1/2, and 1/1 compute capacity of NVIDIA Tesla P4 GPUs.
Supports 1 GB, 2 GB, 4 GB, and 8 GB of GPU memory.
Offers a CPU-to-memory ratio of 1:3.
Uses 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell) processors.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports standard SSDs and ultra disks.
Network:
Supports IPv6.
Provides high network performance based on large computing capacity.
Supported scenarios:
Real-time rendering for cloud gaming
Real-time rendering for AR and VR applications
AI (deep learning and machine learning) inference for elastic Internet service deployment
Educational environments for deep learning
Modeling and experimentation environments for deep learning
Instance types
Instance type | vCPUs | Memory (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs | Private IPv4 addresses per ENI |
ecs.vgn5i-m1.large | 2 | 6 | NVIDIA P4 * 1/8 | 8 GB * 1/8 | 1 | 300,000 | 2 | 2 | 6 |
ecs.vgn5i-m2.xlarge | 4 | 12 | NVIDIA P4 * 1/4 | 8 GB * 1/4 | 2 | 500,000 | 2 | 3 | 10 |
ecs.vgn5i-m4.2xlarge | 8 | 24 | NVIDIA P4 * 1/2 | 8 GB * 1/2 | 3 | 800,000 | 2 | 4 | 10 |
ecs.vgn5i-m8.4xlarge | 16 | 48 | NVIDIA P4 * 1 | 8 GB * 1 | 5 | 1,000,000 | 4 | 5 | 20 |
The GPU column in the preceding table indicates the GPU model and the GPU slicing information of each instance type. Each GPU can be sliced into multiple GPU partitions, and each partition can be allocated as a vGPU to an instance. For example, in NVIDIA P4 * 1/8, NVIDIA P4 is the GPU model, and 1/8 indicates that the GPU is sliced into eight partitions, each of which can be allocated as a vGPU to an instance.
gn4, GPU-accelerated compute-optimized instance family
Features:
This instance family uses NVIDIA M40 GPUs.
Compute:
Offers multiple CPU-to-memory ratios.
Uses 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell) processors.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports standard SSDs and ultra disks.
Network:
Provides high network performance based on large computing capacity.
Supported scenarios:
Deep learning
Scientific computing applications, such as computational fluid dynamics, computational finance, genomics, and environmental analysis
Server-side GPU compute workloads such as high-performance computing, rendering, and multimedia encoding and decoding
Instance types
Instance type | vCPUs | Memory (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs | Private IPv4 addresses per ENI |
ecs.gn4-c4g1.xlarge | 4 | 30.0 | NVIDIA M40 * 1 | 12 GB * 1 | 3.0 | 300,000 | 1 | 3 | 10 |
ecs.gn4-c8g1.2xlarge | 8 | 30.0 | NVIDIA M40 * 1 | 12 GB * 1 | 3.0 | 400,000 | 1 | 4 | 10 |
ecs.gn4.8xlarge | 32 | 48.0 | NVIDIA M40 * 1 | 12 GB * 1 | 6.0 | 800,000 | 3 | 8 | 20 |
ecs.gn4-c4g1.2xlarge | 8 | 60.0 | NVIDIA M40 * 2 | 12 GB * 2 | 5.0 | 500,000 | 1 | 4 | 10 |
ecs.gn4-c8g1.4xlarge | 16 | 60.0 | NVIDIA M40 * 2 | 12 GB * 2 | 5.0 | 500,000 | 1 | 8 | 20 |
ecs.gn4.14xlarge | 56 | 96.0 | NVIDIA M40 * 2 | 12 GB * 2 | 10.0 | 1,200,000 | 4 | 8 | 20 |
ga1, GPU-accelerated compute-optimized instance family
Features:
This instance family uses AMD S7150 GPUs.
This instance family is configured with high-performance local Non-Volatile Memory Express (NVMe) SSDs.
Compute:
Offers a CPU-to-memory ratio of 1:2.5.
Uses 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell) processors.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports standard SSDs and ultra disks.
Network:
Provides high network performance based on large computing capacity.
Supported scenarios:
Rendering and multimedia encoding and decoding
Machine learning, high-performance computing, and high-performance databases
Server-side workloads that require powerful parallel floating-point computing capacity
Instance types
Instance type | vCPUs | Memory (GiB) | Local storage (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | NIC queues | ENIs | Private IPv4 addresses per ENI |
ecs.ga1.xlarge | 4 | 10.0 | 1 * 87 | AMD S7150 * 1/4 | 8 GB * 1/4 | 1.0 | 200,000 | 1 | 3 | 10 |
ecs.ga1.2xlarge | 8 | 20.0 | 1 * 175 | AMD S7150 * 1/2 | 8 GB * 1/2 | 1.5 | 300,000 | 1 | 4 | 10 |
ecs.ga1.4xlarge | 16 | 40.0 | 1 * 350 | AMD S7150 * 1 | 8 GB * 1 | 3.0 | 500,000 | 2 | 8 | 20 |
ecs.ga1.8xlarge | 32 | 80.0 | 1 * 700 | AMD S7150 * 2 | 8 GB * 2 | 6.0 | 800,000 | 3 | 8 | 20 |
ecs.ga1.14xlarge | 56 | 160.0 | 1 * 1,400 | AMD S7150 * 4 | 8 GB * 4 | 10.0 | 1,200,000 | 4 | 8 | 20 |
ebmc4, compute-optimized ECS Bare Metal Instance family
Features:
This instance family provides dedicated hardware resources and physical isolation.
Compute:
Offers a CPU-to-memory ratio of 1:2.
Uses 2.5 GHz Intel® Xeon® E5-2682 v4 (Broadwell) processors that deliver a turbo frequency of 3.0 GHz.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports standard SSDs and ultra disks.
Network:
Supports only virtual private clouds (VPCs).
Provides high network performance with a packet forwarding rate of 4,000,000 pps.
Supported scenarios:
Workloads that require direct access to physical resources or that require a license to be bound to the hardware
Scenarios that require compatibility with third-party hypervisors to implement hybrid-cloud and multi-cloud deployments
Containers such as Docker, Clear Containers, and Pouch
Enterprise-level applications such as large and medium-sized databases
Video encoding
Instance types
Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | ENIs | Private IPv4 addresses per ENI |
ecs.ebmc4.8xlarge | 32 | 64 | 10 | 4,000,000 | 12 | 10 |
ebmhfg5, ECS Bare Metal Instance family with high clock speeds
Features:
Provides dedicated hardware resources and physical isolation.
Supports encrypted computing based on Intel® Software Guard Extensions (SGX).
Has failover disabled by default. To enable failover, call the ModifyInstanceMaintenanceAttributes operation and set ActionOnMaintenance to AutoRedeploy, as shown in the sketch after the instance type table in this section.
Offers a CPU-to-memory ratio of 1:4.
Uses 3.7 GHz Intel® Xeon® E3-1240v6 (Skylake) processors that deliver a turbo frequency of 4.1 GHz.
Is an instance family in which all instances are I/O optimized.
Supports standard SSDs and ultra disks.
Supports only VPCs.
Provides high network performance with a packet forwarding rate of 2,000,000 pps.
Supported scenarios:
Workloads that require direct access to physical resources or that require a license to be bound to the hardware
Gaming and finance applications that require high performance
High-performance web servers
Enterprise-level applications such as high-performance databases
Instance types
Instance type | vCPUs | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | ENIs | Private IPv4 addresses per ENI |
ecs.ebmhfg5.2xlarge | 8 | 32 | 6 | 2,000,000 | 6 | 8 |
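As mentioned in the features above, failover for this instance family is disabled by default and can be enabled by setting ActionOnMaintenance to AutoRedeploy through the ModifyInstanceMaintenanceAttributes operation. A minimal Python sketch (CommonRequest style; the region and instance ID are placeholders):

```python
# Minimal sketch: enable failover by setting the maintenance action of an
# ebmhfg5 instance to AutoRedeploy. The region and instance ID are placeholders.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest()
request.set_domain("ecs.aliyuncs.com")
request.set_version("2014-05-26")
request.set_action_name("ModifyInstanceMaintenanceAttributes")
request.add_query_param("RegionId", "cn-hangzhou")
request.add_query_param("InstanceId.1", "i-bp1example")        # repeated parameter InstanceId.N
request.add_query_param("ActionOnMaintenance", "AutoRedeploy")

print(client.do_action_with_exception(request))
```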
sccgn6, GPU-accelerated compute-optimized SCC instance family
Features:
This instance family provides all features of ECS Bare Metal Instance. For more information, see Overview of ECS Bare Metal Instance families.
Compute:
Uses NVIDIA V100 GPUs (SXM2-based) that have the following features:
Innovative Volta architecture
16 GB of HBM2 GPU memory
5,120 CUDA cores
640 Tensor cores
GPU memory bandwidth of up to 900 GB/s
Support for up to six bidirectional NVLink connections, each of which has a unidirectional bandwidth of 25 GB/s for a total bandwidth of 300 GB/s
Offers a CPU-to-memory ratio of 1:4.
Uses 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake) processors for consistent computing performance.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports ESSDs, ESSD AutoPL disks, standard SSDs, and ultra disks.
Supports high-performance Cloud Paralleled File System (CPFS).
Network:
Supports IPv6.
Supports VPCs.
Supports RDMA over Converged Ethernet (RoCE) v2 networks, which are dedicated to low-latency remote direct memory access (RDMA) communication.
Supported scenarios:
Ultra-large-scale training for machine learning on distributed GPU clusters
Large-scale high-performance scientific computing and simulations
Large-scale data analytics, batch processing, and video encoding
Instance types
Instance type | vCPUs | Memory (GiB) | GPU | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | NIC queues | ENIs | Private IPv4 addresses per ENI |
ecs.sccgn6.24xlarge | 96 | 384.0 | NVIDIA V100 * 8 | 30 | 4,500,000 | 50 | 8 | 32 | 10 |
sccgn6e, GPU-accelerated compute-optimized SCC instance family
Features:
This instance family provides all features of ECS Bare Metal Instance. For more information, see Overview of ECS Bare Metal Instance families.
Compute:
Uses NVIDIA V100 GPUs that have the following features:
Innovative Volta architecture
32 GB of HBM2 GPU memory
5,120 CUDA cores
640 Tensor cores
GPU memory bandwidth of up to 900 GB/s
Support for up to six bidirectional NVLink connections, each of which has a unidirectional bandwidth of 25 GB/s for a total bandwidth of 300 GB/s
Offers a CPU-to-memory ratio of 1:8.
Uses 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake) processors for consistent computing performance.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports ESSDs, ESSD AutoPL disks, standard SSDs, and ultra disks.
Supports high-performance CPFS.
Network:
Supports IPv6.
Supports VPCs.
Supports RoCE v2 networks, which are dedicated to low-latency RDMA communication.
Supported scenarios:
Ultra-large-scale training for machine learning on distributed GPU clusters
Large-scale high-performance scientific computing and simulations
Large-scale data analytics, batch processing, and video encoding
Instance types
Instance type | vCPUs | Memory (GiB) | GPU | GPU memory (GB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | NIC queues | ENIs | Private IPv4 addresses per ENI |
ecs.sccgn6e.24xlarge | 96 | 768.0 | NVIDIA V100 * 8 | 32 GB * 8 | 32 | 4,800,000 | 50 | 8 | 32 | 10 |
sccg5, general-purpose SCC instance family
Features:
This instance family provides all features of ECS Bare Metal Instance. For more information, see Overview of ECS Bare Metal Instance families.
Compute:
Offers a CPU-to-memory ratio of 1:4.
Uses 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake) processors for consistent computing performance.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports standard SSDs and ultra disks.
Network:
Supports both RoCE networks and VPCs. RoCE networks are dedicated to RDMA communication.
Supported scenarios:
Large-scale machine learning training
Large-scale high-performance scientific computing and simulations
Large-scale data analytics, batch processing, and video encoding
Instance types
Instance type | vCPUs | Physical cores | Memory (GiB) | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | NIC queues | ENIs | Private IPv4 addresses per ENI |
ecs.sccg5.24xlarge | 96 | 48 | 384.0 | 10 | 4,500,000 | 50 | 8 | 32 | 10 |
sccgn6ne, GPU-accelerated compute-optimized SCC instance family
Features:
This instance family provides all features of ECS Bare Metal Instance.
Compute:
Uses NVIDIA V100 GPUs (SXM2-based) that have the following features:
Innovative Volta architecture
32 GB of HBM2 GPU memory
5,120 CUDA cores
640 Tensor cores
GPU memory bandwidth of up to 900 GB/s
Support for up to six bidirectional NVLink connections, each of which has a unidirectional bandwidth of 25 GB/s for a total bandwidth of 300 GB/s
Offers a CPU-to-memory ratio of 1:4.
Uses 2.5 GHz Intel® Xeon® Platinum 8163 (Skylake) processors for consistent computing performance.
Storage:
Is an instance family in which all instances are I/O optimized.
Supports ESSDs, standard SSDs, and ultra disks.
Supports high-performance CPFS.
Network:
Supports IPv6.
Supports VPCs.
Supports RoCE v2 networks, which are dedicated to low-latency RDMA communication.
Supported scenarios:
Ultra-large-scale training for machine learning on distributed GPU clusters
Large-scale high-performance scientific computing and simulations
Large-scale data analytics, batch processing, and video encoding
Instance types
Instance type | vCPUs | Memory (GiB) | GPU | GPU memory | Network baseline bandwidth (Gbit/s) | Packet forwarding rate (pps) | RoCE network bandwidth (Gbit/s) | NIC queues | ENIs | Private IPv4 addresses per ENI |
ecs.sccgn6ne.24xlarge | 96 | 768.0 | NVIDIA V100 * 8 | 32 GB * 8 | 32.0 | 4,800,000 | 100 | 16 | 8 | 20 |
n1, n2, and e3, shared instance families
Features:
Use 2.5 GHz Intel Xeon E5-2680 v3 (Haswell), Platinum 8163 (Skylake), or 8269CY (Cascade Lake) processors.
Are instance families in which all instances are I/O optimized.
Support standard SSDs and ultra disks.
Provide high network performance based on large computing capacity.
Instance family | Description | CPU-to-memory ratio | Supported scenario |
n1 | Shared compute-optimized instance family | 1:2 | |
n2 | Shared general-purpose instance family | 1:4 | |
e3 | Shared memory-optimized instance family | 1:8 | |
n1 instance types
Instance type | vCPUs | Memory (GiB) | ENIs |
ecs.n1.tiny | 1 | 1.0 | 1 |
ecs.n1.small | 1 | 2.0 | 1 |
ecs.n1.medium | 2 | 4.0 | 1 |
ecs.n1.large | 4 | 8.0 | 2 |
ecs.n1.xlarge | 8 | 16.0 | 2 |
ecs.n1.3xlarge | 16 | 32.0 | 2 |
ecs.n1.7xlarge | 32 | 64.0 | 2 |
n2 instance types
Instance type | vCPUs | Memory (GiB) | ENIs |
ecs.n2.small | 1 | 4.0 | 1 |
ecs.n2.medium | 2 | 8.0 | 1 |
ecs.n2.large | 4 | 16.0 | 2 |
ecs.n2.xlarge | 8 | 32.0 | 2 |
ecs.n2.3xlarge | 16 | 64.0 | 2 |
ecs.n2.7xlarge | 32 | 128.0 | 2 |
e3 instance types
Instance type | vCPUs | Memory (GiB) | ENIs |
ecs.e3.small | 1 | 8.0 | 1 |
ecs.e3.medium | 2 | 16.0 | 1 |
ecs.e3.large | 4 | 32.0 | 2 |
ecs.e3.xlarge | 8 | 64.0 | 2 |
ecs.e3.3xlarge | 16 | 128.0 | 2 |
Series I instance families
Series I instance families include t1, s1, s2, s3, m1, m2, c1, and c2. All of these are legacy shared instance families, and their instance types are categorized based on the number of cores (1, 2, 4, 8, or 16).
Features:
Use Intel Xeon E5-2420 processors with clock speeds of no less than 1.9 GHz.
Use the latest DDR3 memory.
Are instance families in which instances can be I/O optimized or non-I/O optimized.
I/O optimized instance types support standard SSDs and ultra disks. The following table describes the instance types and their specifications.
Category | Instance type | vCPUs | Memory (GiB) |
Standard | ecs.s2.large | 2 | 4 |
Standard | ecs.s2.xlarge | 2 | 8 |
Standard | ecs.s2.2xlarge | 2 | 16 |
Standard | ecs.s3.medium | 4 | 4 |
Standard | ecs.s3.large | 4 | 8 |
High memory | ecs.m1.medium | 4 | 16 |
High memory | ecs.m2.medium | 4 | 32 |
High memory | ecs.m1.xlarge | 8 | 32 |
High CPU | ecs.c1.small | 8 | 8 |
High CPU | ecs.c1.large | 8 | 16 |
High CPU | ecs.c2.medium | 16 | 16 |
High CPU | ecs.c2.large | 16 | 32 |
High CPU | ecs.c2.xlarge | 16 | 64 |
Non-I/O optimized instance types support only basic disks. The following table describes the instance types and their specifications.
Category | Instance type | vCPUs | Memory (GiB) |
Tiny | ecs.t1.small | 1 | 1 |
Standard | ecs.s1.small | 1 | 2 |
Standard | ecs.s1.medium | 1 | 4 |
Standard | ecs.s1.large | 1 | 8 |
Standard | ecs.s2.small | 2 | 2 |
Standard | ecs.s2.large | 2 | 4 |
Standard | ecs.s2.xlarge | 2 | 8 |
Standard | ecs.s2.2xlarge | 2 | 16 |
Standard | ecs.s3.medium | 4 | 4 |
Standard | ecs.s3.large | 4 | 8 |
High memory | ecs.m1.medium | 4 | 16 |
High memory | ecs.m2.medium | 4 | 32 |
High memory | ecs.m1.xlarge | 8 | 32 |
High CPU | ecs.c1.small | 8 | 8 |
High CPU | ecs.c1.large | 8 | 16 |
High CPU | ecs.c2.medium | 16 | 16 |
High CPU | ecs.c2.large | 16 | 32 |
High CPU | ecs.c2.xlarge | 16 | 64 |