Feature comparison between PolarDB-X 2.0 and PolarDB-X 1.0

Updated at: 2022-09-05 03:50

This topic compares features of PolarDB-X 2.0 instances and PolarDB-X 1.0 instances.

Note
  • In this topic, PolarDB-X refers to a PolarDB-X 2.0 instance, and PolarDB-X 1.0 refers to a PolarDB-X 1.0 instance, which was formerly known as a DRDS instance.

  • For information about PolarDB-X 1.0 instances, see PolarDB-X 1.0.


Purchase an instance

PolarDB-X 1.0: PolarDB-X 1.0 instances do not include ApsaraDB RDS for MySQL resources. You must purchase ApsaraDB RDS for MySQL instances separately and integrate them into the PolarDB-X 1.0 instance in the PolarDB-X 1.0 console.

PolarDB-X 2.0: Provides an overall database service. You only need to create a PolarDB-X instance.

Create a database

PolarDB-X 1.0: You must create databases in the PolarDB-X 1.0 console. When you create a database, you must select an existing ApsaraDB RDS for MySQL instance or purchase a new one.

PolarDB-X 2.0: Provides two methods for you to create a database:

  1. Log on to the database service by using Data Management (DMS) or a tool that you are already familiar with, and execute the CREATE DATABASE statement, as shown in the sketch after this list.

  2. Create a database in the console.
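
Because PolarDB-X 2.0 is compatible with the MySQL protocol, the first method uses plain SQL. A minimal sketch, assuming a standard MySQL client is connected to the instance endpoint; the database name mydb is illustrative, and the MODE clause is an assumption about kernel versions that let you choose a partitioning mode:

    -- Create a logical database over the MySQL protocol.
    CREATE DATABASE mydb;

    -- Assumption: on kernel versions that support it, the partitioning
    -- mode can be chosen explicitly, for example automatic partitioning.
    -- CREATE DATABASE mydb MODE = 'auto';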

Scale out

PolarDB-X 1.0: You must evaluate the storage capacity of each ApsaraDB RDS for MySQL instance and migrate some database shards to the storage of new ApsaraDB RDS for MySQL instances.

PolarDB-X 2.0: You only need to add nodes. Data is automatically and evenly distributed across the storage nodes.
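
A hedged sketch of this workflow in SQL; SHOW STORAGE and REBALANCE are PolarDB-X statements whose availability depends on the kernel version, so treat them as assumptions and verify against your instance:

    -- Inspect the storage nodes and how data is distributed across them.
    SHOW STORAGE;

    -- After new nodes are added, data is redistributed automatically;
    -- a rebalance can also be requested explicitly.
    REBALANCE DATABASE;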

Storage layer

PolarDB-X 1.0: Connects to general-purpose ApsaraDB RDS for MySQL instances. In most cases, the nodes are deployed in primary/secondary mode.

PolarDB-X 2.0: Uses a three-node storage mode that can meet the security requirements of financial data. PolarDB-X uses the Paxos protocol to achieve a recovery point objective (RPO) of zero: a write is acknowledged only after a majority of the three replicas accept it, so the failure of any single node cannot lose committed data.

Failover for high availability

PolarDB-X 1.0: A failover is performed at the storage layer upon downtime, based on the failure detection mechanism of the high availability system in which the ApsaraDB RDS for MySQL primary and secondary instances are deployed. The service level agreement (SLA) ensures that the failover completes within minutes. The computing layer detects the failover at the storage layer and performs its own failover when the connection to the ApsaraDB RDS for MySQL instance is actively closed. The SLA ensures that this failover completes within minutes.

PolarDB-X 2.0: A failover is performed at the storage layer upon downtime, based on the Paxos protocol. The SLA ensures that the failover completes within 30 seconds. The computing layer detects the failover at the storage layer and performs its own failover based on the Paxos metadata. The SLA ensures that this failover completes within seconds.

Data synchronization

PolarDB-X 1.0: To synchronize data from a PolarDB-X 1.0 instance to a downstream system, you must use Data Transmission Service (DTS) to subscribe to each ApsaraDB RDS for MySQL instance and carefully handle the differences between the table shards of the same logical table, such as differences in table names. DDL operations are not supported in the data synchronization link.

PolarDB-X 2.0: Provides a unified binary log service. You can use DTS to subscribe to it in the same way that you subscribe to a standalone ApsaraDB RDS for MySQL instance.
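
Because the unified binary log service is compatible with the MySQL binary log, standard MySQL statements apply when you inspect it. A minimal sketch, assuming a MySQL client connected to the PolarDB-X 2.0 endpoint; the file name is illustrative:

    -- List the global binary log files exposed by the instance.
    SHOW BINARY LOGS;

    -- Inspect the first events of one file.
    SHOW BINLOG EVENTS IN 'binlog.000001' LIMIT 10;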

Read/write splitting

PolarDB-X 1.0: You must add read-only ApsaraDB RDS for MySQL instances and bind them to the PolarDB-X 1.0 instance.

PolarDB-X 2.0: You only need to add read-only PolarDB-X instances.

O&M

PolarDB-X 1.0:

  1. If the load is unbalanced, you need to upgrade the specifications of one of the nodes separately.

  2. You can assign one of the storage nodes to other business.

  3. You can use ApsaraDB RDS for MySQL 5.6, 5.7, or 8.0.

  4. You can subscribe to the binary logs of an individual ApsaraDB RDS for MySQL instance.

Take note that if you delete one of the database shards, the PolarDB-X 1.0 instance can no longer access the data that was stored in that shard.

PolarDB-X 2.0: PolarDB-X shields storage nodes from users, so users cannot directly access them, and presents databases from a holistic perspective. PolarDB-X reduces the need for direct access to storage nodes through capabilities such as automatic load balancing, logical binary logs, and hybrid transaction/analytical processing (HTAP) for mixed workloads. The compute nodes of PolarDB-X instances are based on MySQL 5.7.

Database Autonomy Service (DAS)

PolarDB-X 1.0: Provides SQL audit and analysis based on Log Service and allows you to view details about slow SQL queries.

PolarDB-X 2.0: Provides SQL Explorer and analysis features, such as security audit, intelligent stress testing, performance trends, instance sessions, slow query logs, storage space analysis, real-time performance, and storage capacity evaluation.

Architecture difference

PolarDB-X 1.0: In the architecture of PolarDB-X 1.0, a large number of features are implemented in peripheral management systems. Examples:

  1. Scale-out operations are performed by using the internal Data Replication System.

  2. Instances that are deployed in the same region share a Diamond component to store metadata.

  3. Failure detection for primary and secondary instances and failovers are implemented based on the Alibaba Database High Availability (ADHA) component.

PolarDB-X 2.0: In the architecture of PolarDB-X, all core features are integrated into the kernel.

  1. X-DB is used as the data node.

  2. Global Meta Service (GMS) nodes support the following features:

    1. Provides the global auto-increment timestamps that are used by distributed transactions, and evenly distributes data among nodes based on node loads.

    2. Provides unified metadata, such as INFORMATION_SCHEMA.

    3. Manages compute nodes and data nodes, including failovers and bringing nodes online or offline.

  3. Scale-out operations are completed by the kernel based on distributed transactions.

Transaction model

PolarDB-X 1.0: Uses the XA transactions that are provided by open source MySQL. XA transactions ensure the atomicity of write operations.

PolarDB-X 2.0: Uses self-developed transactions that support global multiversion concurrency control (MVCC). In addition to the two-phase commit (2PC) protocol, snapshot timestamps (snapshot_ts) and commit timestamps (commit_ts) are supported for transactions.
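
For reference, the XA interface of open source MySQL that PolarDB-X 1.0 relies on looks as follows; the transaction identifier 'xid1' and the accounts table are illustrative:

    -- Execute the local branch of the distributed transaction.
    XA START 'xid1';
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    XA END 'xid1';

    -- Phase 1 of 2PC: make the branch durable but not yet committed.
    XA PREPARE 'xid1';

    -- Phase 2 of 2PC: commit the branch (or XA ROLLBACK 'xid1' to abort).
    XA COMMIT 'xid1';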

Performance improvement

PolarDB-X 1.0: PolarDB-X 1.0 instances connect to ApsaraDB RDS for MySQL instances over standard links, and traffic must pass through Server Load Balancer (SLB). This adds the network latency of one extra hop. For performance data, see Sysbench test instructions.

PolarDB-X 2.0:

  1. The compute nodes and data nodes of a PolarDB-X instance are deployed in the same physical network. Direct point-to-point connections are supported without transfers over SLB or Linux Virtual Server (LVS). This achieves low network latency.

  2. Supports a private remote procedure call (RPC) protocol.

    1. Transfers execution plans instead of SQL statements. This prevents MySQL from repeatedly parsing and optimizing SQL statements.

    2. Uses an asynchronous model. Connections are not bound one-to-one to threads or sessions, so a small number of connections can meet the requirements.

    3. Removes information that is not required in communications, such as the response header.

    4. Transmits data in the same format that compute nodes use during computation. This prevents a secondary conversion of data.

For performance data, see Sysbench test.
