This topic provides answers to frequently asked questions about PolarDB for MySQL.
Basics
What is PolarDB?
PolarDB is a cloud-native relational database service. PolarDB has been deployed in data centers in more than 10 regions around the world and provides out-of-the-box online database services. PolarDB offers three independent engines, which makes it fully compatible with MySQL, fully compatible with PostgreSQL, and highly compatible with Oracle syntax. A PolarDB cluster provides a maximum storage capacity of 200 TB. For more information, see What is PolarDB for MySQL Enterprise Edition?
Why does PolarDB outperform traditional databases?
Compared with traditional databases, PolarDB can store hundreds of terabytes of data. It also provides a wide array of features, such as high availability, high reliability, rapid elastic upgrades and downgrades, and lock-free backups. For more information, see Benefits.
When was PolarDB released? When was it available for commercial use?
PolarDB was released for public preview in September 2017, and available for commercial use in March 2018.
What are clusters and nodes?
PolarDB Cluster Edition uses a multi-node cluster architecture. A cluster has one primary node and multiple read-only nodes. A PolarDB cluster can be deployed across zones but not across regions. The PolarDB service is managed and billed at the cluster level. For more information, see Terms.
Which programming languages does PolarDB support?
PolarDB supports programming languages including Java, Python, PHP, Golang, C, C++, .NET, and Node.js. Programming languages that can interact with native MySQL can directly interact with PolarDB for MySQL. For more information, visit the MySQL official website.
Which storage engines does PolarDB support?
PolarDB for MySQL has two editions. The available storage engines vary based on the edition.
PolarDB for MySQL Cluster Edition exclusively uses the InnoDB storage engine for all tables. If you specify a non-InnoDB storage engine, such as MyISAM, Memory, or CSV, when you create a table in a PolarDB for MySQL cluster, the cluster automatically changes the storage engine to InnoDB. During a migration to a PolarDB for MySQL cluster, tables that use non-InnoDB storage engines are also converted to InnoDB to ensure a smooth migration.
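For example, the following sketch (the table name t_demo is hypothetical) shows how the conversion can be observed:
-- Request a non-InnoDB engine when creating the table.
CREATE TABLE t_demo (id INT PRIMARY KEY, val VARCHAR(32)) ENGINE = MyISAM;
-- The cluster substitutes InnoDB; the ENGINE column of the result shows InnoDB.
SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_NAME = 't_demo';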
Is PolarDB a distributed database?
Yes, PolarDB is a distributed storage cluster based on the Parallel Raft consensus protocol. Its computing engine consists of 1 to 16 compute nodes that are distributed on different servers. The cluster provides a maximum storage capacity of 200 TB and a maximum of 88 CPU cores and 710 GB of memory. The cluster allows online dynamic scaling of storage and computing resources, ensuring that normal business operations are not affected during the scaling process.
After I purchase PolarDB, do I need to purchase PolarDB-X database middleware to implement sharding?
Yes.
Does PolarDB support table partitioning?
Yes.
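As a minimal sketch (the table and column names are hypothetical), standard MySQL partitioning syntax can be used:
-- Range-partition an orders table by year.
CREATE TABLE orders (
  id BIGINT NOT NULL,
  order_date DATE NOT NULL,
  PRIMARY KEY (id, order_date)
)
PARTITION BY RANGE (YEAR(order_date)) (
  PARTITION p2023 VALUES LESS THAN (2024),
  PARTITION p2024 VALUES LESS THAN (2025),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);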
Does PolarDB have a default partitioning mechanism?
PolarDB implements partitioning at the storage layer. This is transparent and imperceptible to users.
How does a single-node cluster ensure service availability and data reliability?
A single-node cluster is designed to operate a single compute node for specific purposes. A single-node cluster uses technologies such as second-level computing scheduling and distributed multi-replica storage to ensure high service availability and high data reliability.
How do I purchase a single-node PolarDB cluster?
Single-node clusters are no longer available for sale. However, you can create a single-node cluster by setting the number of read-only nodes to 0 when you purchase a PolarDB cluster.
Compatibility
Is PolarDB for MySQL compatible with MySQL Community Edition?
Yes, PolarDB for MySQL is fully compatible with MySQL Community Edition.
What transaction isolation levels are supported?
PolarDB for MySQL supports the READ_UNCOMMITTED, READ_COMMITTED (default), and REPEATABLE_READ isolation levels and does not support the SERIALIZABLE isolation level.
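For example, you can check or change the isolation level of the current session by using standard MySQL syntax:
-- View the isolation level of the current session.
SELECT @@transaction_isolation;
-- On MySQL 5.6-compatible clusters, use @@tx_isolation instead.
-- Switch the current session to READ COMMITTED.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;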
Are the query results of the SHOW PROCESSLIST statement in PolarDB for MySQL the same as those in MySQL Community Edition?
If you use the primary endpoint to execute the SHOW PROCESSLIST statement, the query results are the same. If you use the cluster endpoint to execute the SHOW PROCESSLIST statement, the query results are different between PolarDB for MySQL and MySQL Community Edition. In the query results of the statement in PolarDB for MySQL, you can find multiple records that have the same thread ID. Each of these records corresponds to a node in the PolarDB for MySQL cluster.
Is the metadata lock (MDL) mechanism of PolarDB for MySQL the same as that of MySQL Community Edition?
Yes, the MDL mechanism of PolarDB for MySQL is the same as that of MySQL Community Edition. However, the database nodes of PolarDB for MySQL are based on shared storage. Therefore, when data definition language (DDL) operations are performed on the primary node, the read-only nodes may access intermediate data of the DDL operations, which can lead to data inconsistency. To mitigate this issue, PolarDB for MySQL uses redo logs to synchronize the exclusive MDLs that are involved in DDL operations to the read-only nodes. This prevents other user threads on the read-only nodes from accessing the table data during the DDL operations. In specific scenarios, this may block DDL operations. You can execute the SHOW PROCESSLIST statement to view the state of DDL operations. If a DDL operation is in the Wait for syncing with replicas state, such blocking has occurred. For information about how to resolve this issue, see View the DDL statement execution status and MDL status.
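As a sketch, instead of scanning the full SHOW PROCESSLIST output, you can filter the process list for this state:
-- List DDL threads that are blocked while waiting for the read-only nodes.
SELECT ID, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE STATE = 'Wait for syncing with replicas';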
Is the binary log format of PolarDB for MySQL the same as the native binary log format of MySQL?
Yes, the binary log format of PolarDB for MySQL is the same as the native binary log format of MySQL.
Does PolarDB support the performance schema and the sys schema?
Yes.
Does PolarDB for MySQL use the same table statistics collection and update mechanism as MySQL Community Edition?
Yes, PolarDB for MySQL uses the same table statistics collection and update mechanism as MySQL Community Edition. Each update of table statistics on the primary node is synchronized to the read-only nodes to ensure that execution plans are consistent between the primary node and the read-only nodes. You can also execute the ANALYZE TABLE statement on the read-only nodes to proactively load the latest statistics from disks.
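For example, assuming a hypothetical table named orders, you can run the following statement on a read-only node (for example, through a read-only custom endpoint):
-- Refresh the statistics of the table on the node that executes the statement.
ANALYZE TABLE orders;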
Does PolarDB support extended architecture (XA) transactions? Does PolarDB support XA transactions in the same way as the native MySQL system?
Yes, PolarDB supports XA transactions in the same way as the native MySQL system.
Does PolarDB support full-text indexes?
Yes.
Note: When you query data by using full-text indexes, index caches are used on read-only nodes. Due to the index caches, you cannot retrieve the latest data based on the indexes. We recommend that you use the primary endpoint to read and write data based on full-text indexes. This ensures that you can retrieve the latest data.
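The following is a minimal sketch with hypothetical table and column names; as noted above, route such reads through the primary endpoint if you need the latest data:
-- Create a full-text index on a text column.
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);
-- Query data through the full-text index.
SELECT id, title FROM articles
WHERE MATCH(body) AGAINST('polardb backup' IN NATURAL LANGUAGE MODE);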
Is Percona Toolkit supported?
Yes, Percona Toolkit is supported. However, we recommend that you use online DDL.
Is gh-ost supported?
Yes, gh-ost is supported. However, we recommend that you use online DDL.
Billing
What are the billable items of a PolarDB cluster?
The billable items include the storage space, compute nodes, data backup storage (with a free quota), and SQL Explorer feature (optional). For more information, see Billable items.
Which files are stored in the billable storage space?
The billable storage space stores database table files, index files, undo log files, redo log files, binlog files, slowlog files, and a small number of system files. For more information, see Storage pricing overview.
How do I use storage plans of PolarDB?
You can use storage plans to offset the storage fees of subscription or pay-as-you-go clusters. For example, you have three clusters, each with a storage capacity of 40 GB, which adds up to a total capacity of 120 GB. You can use a storage plan of 100 GB for the three clusters. Then, you are charged for the excess 20 GB storage on a pay-as-you-go basis. For more information, see Purchase a storage plan.
How am I charged if I add a read-only node to the cluster?
The price of a read-only node is the same as that of a primary node. For more information, see Pricing of compute nodes.
Is the storage capacity doubled after I add a read-only node?
No, the storage capacity is not doubled after you add a read-only node. PolarDB uses an architecture in which computing is decoupled from storage. The read-only node that you purchase is used as a computing resource. Therefore, the storage capacity is not increased.
A serverless mode is used for storage. Therefore, you do not need to specify the storage capacity when you purchase clusters. The storage capacity automatically scales when the amount of data increases. You are charged for the storage that you use. The maximum storage capacity varies based on cluster specifications. For information about how to increase the maximum storage capacity, see Manually change the specifications of a cluster.
How can I no longer be charged for a pay-as-you-go cluster?
If the cluster is no longer needed, you can release the cluster. For more information, see Release a cluster. After you release the cluster, you are no longer charged for the cluster.
Can I change the specifications of a cluster during a temporary upgrade process?
During a temporary upgrade (while the cluster is in the running state), you can manually upgrade the specifications of the cluster, but you cannot manually downgrade the specifications, enable auto scaling of the cluster specifications, or add or remove read-only nodes.
What is the public bandwidth of PolarDB clusters? Do I pay for the public bandwidth?
PolarDB clusters do not impose restrictions on the public bandwidth. The public bandwidth of a PolarDB cluster depends on the bandwidth of the SLB service that you use. You are not charged for the public bandwidth of PolarDB clusters.
Why are bills generated for subscription clusters every day?
The billable items of PolarDB clusters mainly include compute nodes (primary nodes and read-only nodes), storage space, data backup storage beyond the free quota, SQL Explorer (optional), and GDNs (optional). For more information, see Billable items. If you use the subscription billing method, you must pay upfront for the compute nodes when you purchase the cluster. The upfront payment does not include the fees for storage space, data backup storage, and SQL Explorer. The storage fees are generated based on your actual storage usage on an hourly basis. Therefore, pay-as-you-go bills are still generated when the subscription billing method is used.
Am I charged for migrating an ApsaraDB RDS instance to a PolarDB cluster?
No, you are not charged for migrating an ApsaraDB RDS instance to a PolarDB cluster. You are charged only for the ApsaraDB RDS instance and the PolarDB cluster.
Why am I still charged for the storage space used by specific tables after I execute the DELETE statement to delete the data of the tables in PolarDB?
The DELETE statement only adds delete markers to the tables. The table space is not released.
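If you need to reclaim the space of a table after large deletions, rebuilding the table is a common approach in MySQL. Whether this is appropriate depends on your workload, because the rebuild itself consumes resources; the table name t_demo is hypothetical:
-- Rebuild the table to reclaim the space that is marked as deleted.
-- For InnoDB tables, OPTIMIZE TABLE is mapped to a table rebuild.
OPTIMIZE TABLE t_demo;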
Cluster access (read/write splitting)
How do I implement read/write splitting in PolarDB?
You need only to specify the cluster endpoint in your application so that read/write splitting can be implemented based on the read/write mode configured for the cluster endpoint. For more information, see Configure PolarProxy.
How many read-only nodes are supported in a PolarDB cluster?
PolarDB uses a distributed cluster architecture. A cluster consists of one primary node and a maximum of 15 read-only nodes. At least one read-only node is required to ensure high availability.
Why are loads unbalanced among read-only nodes?
One of the possible reasons is that only a small number of connections to read-only nodes exist. Another possible reason is that one of the read-only nodes is not associated with the custom cluster endpoint that you use.
What are the causes of heavy or light loads on the primary node?
Heavy loads on the primary node may occur due to the following causes:
The primary endpoint is used to connect your applications to the cluster.
The primary node accepts read requests.
A large number of transaction requests exist.
Requests are routed to the primary node because of a high primary/secondary replication delay.
Read requests are routed to the primary node due to read-only node exceptions.
The possible cause of light loads on the primary node is that the Offload Reads from Primary Node feature is enabled.
How do I reduce the loads on the primary node of a cluster?
You can reduce the loads on the primary node of a cluster by using the following methods:
You can use the cluster endpoint to connect to the PolarDB cluster. For more information, see Configure PolarProxy.
If a large number of transactions cause heavy loads on the primary node, you can enable the transaction splitting feature in the console. This way, part of queries in the transactions are routed to read-only nodes. For more information, see Transaction splitting.
If requests are routed to the primary node because of replication delays, you can decrease the consistency level. For example, you can use the eventual consistency level. For more information, see Consistency levels.
If the primary node accepts read requests, the loads on the primary node may also become heavy. In this case, you can disable the feature that allows the primary node to accept read requests in the console. This reduces the number of read requests that are routed to the primary node. For more information, see Primary Node Accepts Read Requests.
Why am I unable to immediately retrieve the newly inserted data?
The issue may be caused by the consistency level setting. The cluster endpoint of a PolarDB cluster supports the following consistency levels:
Eventual consistency: This consistency level does not ensure that you can immediately retrieve the newly inserted data, regardless of whether you use the same session (connection) or different sessions.
Session consistency: This consistency level ensures that you can immediately retrieve the newly inserted data based on the same session.
Global consistency: This consistency level ensures that you can immediately retrieve the latest data based on either the same session or different sessions.
Note: A high consistency level results in heavy loads on the primary node and compromises its performance. Use caution when you select the consistency level. In most scenarios, the session consistency level can ensure service availability. For the few SQL statements that require strong consistency, you can add the /* FORCE_MASTER */ hint to the statements to meet the consistency requirements. For more information, see Consistency levels.
How do I force an SQL statement to be executed on the primary node?
If you use the cluster endpoint, you can add /* FORCE_MASTER */ or /* FORCE_SLAVE */ before an SQL statement to forcibly specify where the statement is routed. For more information, see Hints.
/* FORCE_MASTER */ forcibly routes requests to the primary node. This method applies to the few scenarios in which strong consistency is required for read requests.
/* FORCE_SLAVE */ forcibly routes requests to read-only nodes. This method applies to scenarios in which specific syntax must be routed to read-only nodes to ensure accuracy. For example, statements that call stored procedures and multi-statements are routed to the primary node by default.
Note: Hints have the highest routing priority and are not limited by consistency levels or transaction splitting. Before you use hints, evaluate the impact on your business.
Hints cannot contain statements that change GUC parameters, such as /*FORCE_SLAVE*/ set enable_hashjoin = off;. Statements of this kind may cause unexpected query results.
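For example (the table name orders is hypothetical), a hint is placed in front of the statement that is sent through the cluster endpoint:
-- Force this query to run on the primary node to read the freshest data.
/* FORCE_MASTER */ SELECT COUNT(*) FROM orders;
-- Force this query to run on a read-only node.
/* FORCE_SLAVE */ SELECT COUNT(*) FROM orders;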
Can I assign different endpoints to different services? Can I use different endpoints to isolate services?
Yes, you can create multiple custom cluster endpoints and assign them to different services. If the endpoints are associated with different underlying nodes, the services are isolated from each other and do not affect each other. For information about how to create a custom endpoint, see Create a custom cluster endpoint.
How do I separately create a single-node endpoint for one of the read-only nodes if multiple read-only nodes exist?
You can create a single-node endpoint only if the Read/Write Mode parameter for the cluster endpoint is set to Read Only and the cluster has three or more nodes. For more information, see Configure the cluster endpoint.
Warning: If you create a single-node endpoint for a read-only node and the read-only node becomes faulty, the single-node endpoint may be unavailable for up to one hour. We recommend that you do not create single-node endpoints in your production environment.
What is the maximum number of single-node endpoints that I can create in a cluster?
If your cluster has three nodes, you can create a single-node endpoint for only one of the read-only nodes. If your cluster has four nodes, you can create single-node endpoints for two of the read-only nodes, one for each. Similar rules apply if your cluster has five or more nodes.
Read-only nodes have loads when I use only the primary endpoint. Does the primary endpoint support read/write splitting?
No, the primary endpoint does not support read/write splitting. The primary endpoint is always connected to only the primary node. Read-only nodes may have a small number of queries per second (QPS). This is a normal case and is irrelevant to the primary endpoint.
Management and maintenance
How do I add fields and indexes online?
You can use tools such as the native online DDL of MySQL, pt-osc, and gh-ost to add fields and indexes online. We recommend that you use the native online DDL of MySQL.
Note: If you use pt-osc, do not use parameters that check data consistency between the primary node and read-only nodes, such as the recursion-method parameter. pt-osc checks data consistency based on binlog replication, but PolarDB replicates data from the primary node to read-only nodes through physical replication rather than binlog replication.
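The following is a sketch of the recommended native online DDL approach; the table and column names are hypothetical:
-- Add a column online without blocking DML.
ALTER TABLE orders ADD COLUMN remark VARCHAR(64), ALGORITHM=INPLACE, LOCK=NONE;
-- Add a secondary index online.
ALTER TABLE orders ADD INDEX idx_remark (remark), ALGORITHM=INPLACE, LOCK=NONE;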
Is the bulk insert feature supported?
Yes.
Can I bulk insert data if I write data through only the primary node? What is the maximum number of values that I can insert at a time?
Yes, you can bulk insert data through the primary node. The maximum number of values that you can insert at a time is determined by the value of the max_allowed_packet parameter. For more information, see Replication and max_allowed_packet.
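For example, you can check the current limit before you build a large multi-row INSERT statement:
-- The maximum packet size, in bytes, that a single statement may occupy.
SHOW VARIABLES LIKE 'max_allowed_packet';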
Can I use cluster endpoints to perform the bulk insert operation?
Yes.
Does a replication delay occur when I replicate data from the primary node to the read-only nodes?
Yes, a replication delay of a few milliseconds occurs.
When does a replication delay increase?
A replication delay increases in the following scenarios:
The primary node processes a large number of write requests and generates a large volume of redo logs. As a result, the read-only nodes cannot replay the redo logs in time.
The read-only nodes are under heavy loads and occupy a large number of resources that would otherwise be used to replay redo logs.
The system reads and writes redo logs at a low rate due to I/O bottlenecks.
How do I ensure the consistency of query results if a replication delay occurs?
You can use a cluster endpoint and select an appropriate consistency level for the cluster endpoint. The following consistency levels are listed in descending order: global consistency (strong consistency), session consistency, and eventual consistency. For more information, see Consistency levels.
Can the recovery point objective (RPO) be zero if a single node fails?
Yes.
How are node specifications upgraded in the backend, for example, upgrading node specifications from 2 cores and 8 GB of memory to 4 cores and 16 GB of memory? How does the upgrade impact my services?
When you change node specifications, the PolarProxy and the database nodes of PolarDB are upgraded to the new configuration. A rolling upgrade method is used across nodes to minimize the impact on your services. Each upgrade takes about 10 to 15 minutes. The impact on your services lasts no more than 30 seconds, during which one to three transient connection errors may occur. For more information, see Manually change the specifications of a cluster.
How long does it take to add a node? Are my services affected when the node is added?
It takes about 5 minutes to add a node. Your services are not affected when the node is added. For information about how to add a node, see Add a read-only node.
Note: After you add a read-only node, new read/write splitting connections forward requests to the read-only node. Read/write splitting connections that were established before the read-only node was added do not forward requests to it. You must close such a connection and establish it again, for example, by restarting the application.
How long does it take to update a kernel minor version to the latest revision version? Are my services affected when the update is complete?
PolarDB uses a rolling update method to upgrade multiple nodes to minimize the impacts on your services. In most cases, an update requires less than 30 minutes to complete. PolarProxy or the database engine is restarted during the upgrade. This may interrupt services. We recommend that you perform the update during off-peak hours. Make sure that your application can automatically reconnect to your database. For more information, see Minor version update.
How is an automatic failover implemented?
PolarDB uses an active-active high availability architecture. When the primary node fails, the system automatically elects a new primary node from the read-only nodes and fails over services from the original primary node to the new primary node. A failover priority is assigned to each node in a PolarDB cluster. The priorities determine which node can be elected as the primary node during a failover. If multiple nodes have the same failover priority, they have the same probability of being elected as the primary node. For more information, see Automatic failover and manual failover.
Backup and restoration
How does PolarDB back up data?
PolarDB uses snapshots to back up data. For more information, see Backup method 1: Automatic backup and Backup method 2: Manual backup.
How fast can a database be restored?
It takes about 40 minutes to restore or clone 1 TB of data in a database from backup sets or snapshots. If you want to restore data to a specific point in time, you must also include the time required to replay the redo logs. It takes about 20 to 70 seconds to replay 1 GB of redo log data. The total restoration time is the sum of the time required to restore data from backup sets and the time required to replay the redo logs.
Performance and capacity
Why does a PolarDB for MySQL cluster fail to show significant performance improvements over an ApsaraDB RDS for MySQL instance?
Before you compare the performance of a PolarDB for MySQL cluster with that of an ApsaraDB RDS for MySQL instance, take note of the following considerations to obtain accurate and reasonable performance comparison results:
Make sure that the PolarDB for MySQL cluster and the ApsaraDB RDS for MySQL instance use the same specifications.
Make sure that the PolarDB for MySQL cluster and ApsaraDB RDS for MySQL instance are of the same version.
The reason is that implementation mechanisms vary based on versions. For example, MySQL 8.0 optimizes for multi-core CPUs by splitting redo logging into dedicated threads, such as log_writer, log_flusher, log_checkpointer, and log_write_notifier. However, if only a few CPU cores are used, the performance of MySQL 8.0 is lower than that of MySQL 5.6 or MySQL 5.7. We also recommend that you do not compare PolarDB for MySQL 5.6 with ApsaraDB RDS for MySQL 5.7 or 8.0, because the optimizer of PolarDB for MySQL 5.6 does not perform as well as that of later versions.
We recommend that you simulate the loads in actual online environments or use the sysbench benchmark suite to compare the performance. This makes the obtained performance data closer to that obtained in actual online scenarios.
We recommend that you do not use a single SQL statement to compare the read performance between PolarDB for MySQL and ApsaraDB RDS for MySQL.
This is because PolarDB uses an architecture in which computing is decoupled from storage, which adds network latency to queries that must read data from storage. For a single uncached query, the read latency of PolarDB for MySQL is therefore higher than that of ApsaraDB RDS for MySQL. However, the cache hit ratio of an online database is greater than 99% in most cases: only the first read of a data page consumes I/O resources and shows this overhead. Subsequent reads do not consume I/O resources because the data is already in the buffer pool, so PolarDB for MySQL and ApsaraDB RDS for MySQL offer the same read performance for those reads.
We recommend that you do not use a single SQL statement to compare the write performance. Instead, we recommend that you simulate a production environment and perform stress testing.
For performance comparison, we recommend that you compare the primary node and read-only nodes of a PolarDB cluster with an ApsaraDB RDS for MySQL primary instance and read-only instances that use semi-synchronous replication. This is because PolarDB writes data in triplicate at the storage layer and uses a quorum mechanism by default: a write is considered successful only after the data is written to at least two of the three replicas. PolarDB implements data redundancy at the storage layer and ensures strong consistency and high reliability for the three replicas. Therefore, an appropriate comparison is against ApsaraDB RDS for MySQL with semi-synchronous replication rather than asynchronous replication.
For more information about the performance comparison results between PolarDB for MySQL and ApsaraDB RDS for MySQL, see Performance comparison between PolarDB for MySQL and ApsaraDB RDS for MySQL.
Why does a deleted database occupy a large amount of storage space?
This is because the redo log files of the deleted database occupy storage space. In most cases, the redo log files occupy 2 GB to 11 GB storage space. If a total of 11 GB storage space is occupied, 8 GB storage space is occupied by the eight redo log files in the buffer pool. The remaining 3 GB storage space is evenly occupied by the redo log file that is being written, the pre-created redo log file, and the latest redo log file.
The loose_innodb_polar_log_file_max_reuse parameter specifies the number of redo log files in the buffer pool. The default value of this parameter is 8. You can change the value of this parameter to reduce the storage space that is occupied by log files. In this case, periodic performance fluctuations may occur under heavy loads.
What is the maximum number of tables? What is the upper limit for the number of tables if I want to ensure that the performance is not compromised?
The maximum number of tables depends on the number of files. For more information, see Limits.
Can table partitioning improve the query performance of PolarDB?
In most cases, if the conditions of a query match only specific partitions so that partition pruning occurs, query performance can be improved.
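As a sketch, you can use EXPLAIN to verify that a query is pruned to specific partitions; the orders table from the earlier partitioning example is assumed:
-- The partitions column of the plan shows that only p2024 is scanned.
EXPLAIN SELECT COUNT(*) FROM orders
WHERE order_date >= '2024-01-01' AND order_date < '2024-07-01';
-- On MySQL 5.6-compatible clusters, use EXPLAIN PARTITIONS instead.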
Can I create 10,000 databases in a PolarDB cluster? What is the maximum number of databases in a PolarDB cluster?
Yes, you can create 10,000 databases in a PolarDB cluster. The maximum number of databases you can create depends on the number of files. For more information, see Limits.
Does the maximum number of connections depend on the number of read-only nodes? Can I increase the maximum number of connections by adding read-only nodes?
The number of read-only nodes is irrelevant to the maximum number of connections. The maximum number of connections of PolarDB is determined by node specifications. For more information, see Limits. Upgrade specifications if you need more connections.
How are the input/output operations per second (IOPS) limited and isolated? Do the multiple nodes of a PolarDB cluster compete for I/O resources?
The IOPS is specified for each node of a PolarDB cluster based on the node specifications. The IOPS of each node is isolated from that of the other nodes, so the nodes do not compete for I/O resources or affect each other.
Is the primary node affected if the performance of the read-only nodes is compromised?
Yes, the memory consumption of the primary node is slightly increased if the loads on the read-only nodes are excessively heavy and the replication delay increases.
What is the impact on the database performance if I enable the binary log feature?
After you enable the binary log feature, only write and update (INSERT, UPDATE, and DELETE) performance is affected. Query (SELECT) performance is not affected. In most cases, if you enable the binary log feature for a database in which read and write requests are balanced, the database performance decreases by no more than 10%.
What is the impact on the database performance if I enable the SQL Explorer (full SQL log audit) feature?
The database performance is not affected if you enable the SQL Explorer feature.
Which high-speed network protocol does PolarDB use?
PolarDB uses dual-port Remote Direct Memory Access (RDMA) to ensure high I/O throughput between compute nodes and storage nodes, and between data replicas. Each port provides a data rate of up to 25 Gbit/s at a low latency.
What is the maximum bandwidth that I can use if I access PolarDB from the Internet?
If you access PolarDB from the Internet, the maximum bandwidth is 10 Gbit/s.
Large tables
What are the advantages of the large tables in PolarDB for MySQL over the local disks of traditional databases?
A large table in a PolarDB for MySQL database is split and stored across multiple physical storage servers. Therefore, the I/O operations for the large table are distributed across multiple disks. The overall I/O read throughput (rather than the I/O latency) of the PolarDB for MySQL database is higher than that of a database where all I/O operations are scheduled to local disks.
How do I optimize large tables?
We recommend that you use partitioned tables to optimize large tables.
What are the application scenarios of partitioned tables?
You can use partitioned tables when you want to prune large tables to control the amount of scanned data for queries and do not want to modify the business code. For example, you can use partitioned tables to clear the historical data of your services at regular intervals. You can delete the partitions that are created in the earliest month and create partitions for the next month, and retain only the data of the latest six months.
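As a sketch of such a retention job, assume a hypothetical table orders_hist that is RANGE-partitioned by month and has no MAXVALUE partition:
-- Drop the partition that holds the oldest month of data.
ALTER TABLE orders_hist DROP PARTITION p202401;
-- Pre-create the partition for the next month.
ALTER TABLE orders_hist ADD PARTITION (PARTITION p202408 VALUES LESS THAN (TO_DAYS('2024-09-01')));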
What method is suitable if I copy a table that has a large amount of data in the same PolarDB for MySQL database, for example, copy all the data of table A to table B?
You can execute the following SQL statement to directly copy data:
create table B as select * from A
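Note that CREATE TABLE ... AS SELECT copies the data and column definitions but not the indexes of table A. If you also need an identical table structure, a common alternative is the following sketch:
-- Copy the full table definition, including indexes.
CREATE TABLE B LIKE A;
-- Then copy the data.
INSERT INTO B SELECT * FROM A;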
Stability
Can I optimize PHP short-lived connections in high concurrency scenarios?
Yes, you can optimize PHP short-lived connections in high concurrency scenarios. To do so, enable the session-level connection pool in the settings of the cluster endpoint. For more information, see Specify a cluster endpoint.
How do I prevent slow SQL queries from decreasing the performance of the entire database?
If you use PolarDB for MySQL 5.6 or 8.0 clusters, you can use the statement concurrency control feature to implement rate limiting and throttling on the specified SQL statements. For more information about this feature, see Concurrency control.
Does PolarDB support the idle session time-out feature?
Yes. You can change the value of the wait_timeout parameter to specify a time-out period for idle sessions. For more information, see Specify cluster and node parameters.
How do I identify slow SQL queries?
You can identify slow SQL queries by using the following two methods:
Retrieve slow SQL queries in the console. For more information, see Slow SQL queries.
Connect to the database cluster and execute the SHOW PROCESSLIST statement to find the SQL statements that take a long time to execute. For more information about how to connect to database clusters, see Connect to a cluster.
How do I terminate slow SQL queries?
After you identify a slow SQL query, find the ID of the query and run the kill <Id> command to terminate it.
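As a sketch, the following statements find queries that have been running for a long time and terminate one of them; the 60-second threshold and the ID are hypothetical examples:
-- Find queries that have been running for more than 60 seconds.
SELECT ID, USER, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE COMMAND = 'Query' AND TIME > 60;
-- Terminate the offending query by its ID.
KILL 123456;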
Data lifecycle management
How does a PolarDB for MySQL cluster archive hot and warm data as cold data?
A PolarDB for MySQL cluster can archive hot data stored by using the InnoDB engine and warm data stored by using the X-Engine from PolarStore to Object Storage Service (OSS) as cold data in the CSV or ORC format by using data definition language (DDL) policies. This archiving effectively releases the storage space on PolarStore and reduces the overall database storage costs. For more information, see Manually archive cold data.
Does a PolarDB for MySQL cluster support the automatic separation and archiving of hot, warm, and cold data?
A PolarDB for MySQL cluster supports the automatic separation and archiving of hot, warm, and cold data. You can configure a data lifecycle management (DLM) policy to implement automatic archiving of data from PolarStore to low-cost OSS storage. This reduces database storage costs and improves storage efficiency. For more information, see Automatically archive cold data.