
PolarDB: FAQ

Last Updated: Jan 24, 2026

This topic answers frequently asked questions about PolarDB for PostgreSQL.

Basic questions

  • Q: What is PolarDB?

    A: PolarDB is a cloud-native relational database service that provides out-of-the-box online database capabilities. It is deployed across more than 10 regions. PolarDB is 100% compatible with PostgreSQL. By default, a PolarDB cluster offers up to 500 TB of storage capacity.

    Note

    PolarStore (PSL4/PSL5) supports petabyte-scale storage. If you require storage at this scale, contact us to reserve resources.

  • Q: Why does cloud-native PolarDB outperform traditional databases?

    A: Compared to traditional databases, cloud-native PolarDB can store hundreds of terabytes of data and provides features such as high availability, high reliability, rapid elastic scaling, and lock-free backups. For more information, see Benefits.

  • Q: When was PolarDB released? When was it available for commercial use?

    A: It entered public preview in September 2017 and became commercially available in March 2018.

  • Q: What are clusters and nodes?

    A: PolarDB Cluster Edition uses a multi-node cluster architecture. A cluster includes one primary node and multiple read-only nodes. A single PolarDB cluster can be deployed across zones but not across regions. Clusters are managed and billed at the cluster level. For more information, see Glossary.

  • Q: Which programming languages are supported?

    A: PolarDB supports programming languages such as Java, Python, PHP, Go, C, C++, .NET, and Node.js.

  • Q: After purchasing PolarDB, do I still need to purchase the PolarDB-X database middleware to implement sharding?

    A: Yes.

  • Q: Does PolarDB support table partitioning?

    A: Yes.

  • Q: Does PolarDB automatically include a partitioning mechanism?

    A: PolarDB performs partitioning at the storage layer. This is transparent to users.

Billing

  • Q: What do PolarDB fees include?

    A: Fees include charges for storage space, compute nodes, backups (which include a free quota), and SQL Explorer (optional). For more information, see Specifications and pricing.

  • Q: What is included in the billable storage space?

    A: Billable storage space includes database table files, index files, undo log files, redo log files, slow log files, and a small number of system files. For more information, see Specifications and pricing.

Cluster access (read/write splitting)

  • Q: How do I implement read/write splitting in PolarDB?

    A: Use the cluster endpoint in your application to implement read/write splitting. The splitting is based on the configured read/write mode. For more information, see Configure the database proxy.
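
    For example, a minimal JDBC sketch of this pattern might look like the following (the endpoint, database, credentials, and the t_order table are placeholders; PolarDB for PostgreSQL can be reached with the standard PostgreSQL JDBC driver). Reads and writes are sent over the same cluster endpoint connection, and the proxy routes each statement according to the endpoint's read/write mode.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ClusterEndpointDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder cluster endpoint; use the cluster endpoint from your console, not the primary endpoint.
            String url = "jdbc:postgresql://pc-xxxx.polardb.example.com:5432/mydb";

            try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
                 Statement stmt = conn.createStatement()) {
                // Writes are always routed to the primary node by the proxy.
                stmt.executeUpdate("INSERT INTO t_order (note) VALUES ('hello')");

                // Plain reads may be routed to a read-only node, depending on the
                // endpoint's read/write mode and consistency level.
                try (ResultSet rs = stmt.executeQuery("SELECT count(*) FROM t_order")) {
                    if (rs.next()) {
                        System.out.println("rows: " + rs.getLong(1));
                    }
                }
            }
        }
    }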

  • Q: How many read-only nodes can a PolarDB cluster support?

    A: PolarDB uses a distributed cluster architecture. A cluster contains one primary node and up to 15 read-only nodes. At least one read-only node is required to ensure high availability.

  • Q: Why are loads unbalanced among multiple read-only nodes?

    A: Loads can be unbalanced among read-only nodes for two common reasons: the number of connections to the read-only nodes is small, so requests cannot be distributed evenly, or a read-only node was not included when a custom cluster endpoint was configured.

  • Q: What causes high or low loads on the primary node?

    A: High loads on the primary node can be caused by several factors: direct connections to the primary endpoint, the primary node accepting read requests, a high volume of transaction requests, high primary-secondary replication latency that causes requests to be routed to the primary node, or read-only node exceptions that cause read requests to be routed to the primary node.

    A low load on the primary node may indicate that the primary database is configured to reject read requests.

  • Q: How do I reduce the load on the primary node?

    A: Use the following methods to reduce the load on the primary node:

    • Use a cluster endpoint to connect to the PolarDB cluster. For more information, see Configure the database proxy.

    • If many transactions cause high pressure on the primary node, you can enable the transaction splitting feature in the console. This routes some queries within transactions to read-only nodes. For more information, see Advanced options—Transaction splitting.

    • If requests are routed to the primary node because of replication delay, consider lowering the consistency level, for example, using eventual consistency. For more information, see Advanced options—Consistency level.

    • Accepting read requests on the primary node can also cause high loads. You can enable the offload reads from primary node feature in the console to reduce the number of read requests routed to the primary node.

  • Q: Why can't I read data that was just inserted?

    A: This issue may be caused by the consistency level configuration. The cluster endpoints of PolarDB support the following consistency levels:

    • Eventual consistency: Does not guarantee that reads can immediately see newly inserted data, whether in the same session (connection) or a different one.

    • Session consistency: Guarantees that you can read data inserted within the same session.

    Note

    Higher consistency levels result in poorer performance and greater pressure on the primary node. Choose carefully. For most application scenarios, session consistency ensures that services work correctly. For the few statements that require strong consistency, you can use the Hint /* FORCE_MASTER */. For more information, see Consistency level.

  • Q: How do I force an SQL statement to execute on the primary node?

    A: When using a cluster endpoint, add /* FORCE_MASTER */ or /* FORCE_SLAVE */ before an SQL statement to force its routing direction. For more information, see Custom route—Hint.

    • /* FORCE_MASTER */ forces the request to be routed to the primary node. This can be used for the few read requests that have high consistency requirements.

    • /* FORCE_SLAVE */ forces the request to be routed to a read-only node. This can be used in the few scenarios where a statement that the PolarDB proxy routes to the primary node by default for correctness, such as a statement that calls a stored procedure or uses multi-statement syntax, can safely be executed on a read-only node.

    Note
    • Hints have the highest routing priority and are not constrained by consistency levels or transaction splitting. Evaluate them before use.

    • Do not include statements that change environment variables in hint statements, such as /* FORCE_SLAVE */ set names utf8;. These types of statements may lead to unexpected query results.
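
    The following sketch shows how a hint can be prepended to the SQL text sent through a cluster endpoint (the endpoint, credentials, and the account table are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HintRoutingDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder cluster endpoint; hints only take effect when you connect through a cluster endpoint.
            String url = "jdbc:postgresql://pc-xxxx.polardb.example.com:5432/mydb";

            try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
                 Statement stmt = conn.createStatement()) {
                // Force this read onto the primary node for strong consistency.
                try (ResultSet rs = stmt.executeQuery(
                        "/* FORCE_MASTER */ SELECT balance FROM account WHERE id = 1")) {
                    while (rs.next()) {
                        System.out.println("balance: " + rs.getBigDecimal(1));
                    }
                }

                // Force this read onto a read-only node.
                try (ResultSet rs = stmt.executeQuery(
                        "/* FORCE_SLAVE */ SELECT count(*) FROM account")) {
                    if (rs.next()) {
                        System.out.println("accounts: " + rs.getLong(1));
                    }
                }
            }
        }
    }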

  • Q: Can I assign different endpoints to different services? Can different endpoints achieve isolation?

    A: You can create multiple custom endpoints for different services. If the underlying nodes are different, the custom endpoints can also provide isolation and will not affect each other. For more information about how to create a custom endpoint, see Configure the database proxy.

  • Q: If there are multiple read-only nodes, how can I create a separate single-node endpoint for one of them?

    A: Create a single-node endpoint only when the read/write mode of the cluster endpoint is Read-only and the cluster has three or more nodes. For detailed steps, see Configure the database proxy.

    Warning

    After a single-node endpoint is created, if this node fails, the endpoint may be unavailable for up to 1 hour. Do not use it in a production environment.

  • Q: What is the maximum number of single-node endpoints that can be created in a cluster?

    A: If your cluster has 3 nodes, you can create a single-node endpoint for only 1 of the read-only nodes. If your cluster has 4 nodes, you can create single-node endpoints for 2 of the read-only nodes. The same logic applies to clusters with more nodes.

  • Q: I only use the primary endpoint, but I see that the read-only nodes also have a load. Does the primary endpoint also support read/write splitting?

    A: The primary endpoint does not support read/write splitting. It always connects only to the primary node. It is normal for read-only nodes to have a small query per second (QPS) load, which is unrelated to the primary endpoint.

Management and maintenance

  • Q: Is there a replication delay between the primary node and the read-only nodes?

    A: Yes, there is a millisecond-level delay between them.

  • Q: What causes an increase in replication delay?

    A: Replication delay can increase in the following situations:

    • The primary node has a high write load and generates redo logs faster than the read-only nodes can apply them.

    • The read-only nodes have an excessively high load, which preempts resources that are needed for applying redo logs.

    • An I/O bottleneck occurs, which causes slow reading and writing of redo logs.

  • Q: How do I ensure query consistency when there is replication delay?

    A: Use a cluster endpoint and select an appropriate consistency level for it. Currently, the available consistency levels, from highest to lowest, are session consistency and eventual consistency. For more information, see Configure the database proxy.

  • Q: Can a Recovery Point Objective (RPO) of 0 be guaranteed in the event of a single node failure?

    A: With the default database cluster parameters, the RPO is not 0. However, you can set the RPO to 0 by adjusting the synchronous_commit parameter. For more information about the default parameter values, see Default cluster parameter values.
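
    As a minimal sketch (the endpoint and credentials are placeholders), you can check the current value of synchronous_commit from any PostgreSQL client before deciding whether to adjust it in the console's parameter settings:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SyncCommitCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder cluster endpoint.
            String url = "jdbc:postgresql://pc-xxxx.polardb.example.com:5432/mydb";

            try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SHOW synchronous_commit")) {
                if (rs.next()) {
                    // With the default value the RPO is not 0; change the parameter in the
                    // console's parameter settings if you need an RPO of 0.
                    System.out.println("synchronous_commit = " + rs.getString(1));
                }
            }
        }
    }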

  • Q: How are specification upgrades (for example, from 2 cores and 8 GB of memory to 4 cores and 16 GB of memory) implemented in the backend? What is the impact on services?

    A: Both the proxy and database nodes of PolarDB need to be upgraded to the latest configuration. A rolling upgrade of multiple nodes is used to minimize the impact on services. Currently, each upgrade takes about 10 to 15 minutes. The impact on services lasts no more than 30 seconds. During this period, 1 to 3 transient disconnections may occur. For more information, see Change specifications.

  • Q: How long does it take to add a node? Will it affect services?

    A: Adding a node takes about 5 minutes and has no impact on services. For more information about how to add a node, see Add a read-only node.

    Note

    After a new read-only node is added, new read/write splitting connections will forward requests to that read-only node. Read/write splitting connections established before the new read-only node was added will not forward requests to the new node. You need to disconnect and re-establish the connection, for example, by restarting the application.
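
    If your application uses a connection pool, recycling the pooled connections is usually enough and avoids a full application restart. A minimal sketch, assuming HikariCP and a placeholder cluster endpoint:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PoolRefreshDemo {
        public static void main(String[] args) {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:postgresql://pc-xxxx.polardb.example.com:5432/mydb"); // placeholder
            config.setUsername("myuser");
            config.setPassword("mypassword");
            // Connections older than maxLifetime are recycled automatically, so a newly
            // added read-only node is picked up over time without restarting the application.
            config.setMaxLifetime(600_000); // 10 minutes

            try (HikariDataSource ds = new HikariDataSource(config)) {
                // ... application traffic ...

                // After adding a read-only node, you can also evict existing connections
                // right away; replacement connections will be balanced across all nodes.
                ds.getHikariPoolMXBean().softEvictConnections();
            }
        }
    }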

  • Q: How long does it take to upgrade to the latest revision version? Will it affect services?

    A: PolarDB uses a multi-node rolling upgrade to minimize the impact on services. A version upgrade generally takes no more than 30 minutes. During the upgrade, the database proxy or the DB kernel engine will be restarted, which may cause transient disconnections. Perform the upgrade during off-peak hours and ensure that your application has an automatic reconnection mechanism. For more information, see Minor version management.

  • Q: How does automatic failover work?

    A: PolarDB uses an active-active high-availability cluster architecture. It performs automatic failover between the read/write primary node and the read-only nodes. The system automatically elects a new primary node. Each node in a PolarDB cluster has a failover priority, which determines its probability of being elected as the primary node during a failover. When multiple nodes have the same priority, they have the same probability of being elected as the primary node. For more information, see Automatic and manual primary/secondary failover.

  • Q: What is the architecture of the database proxy? Does it have a failover mechanism? How is its high availability ensured?

    A: The database proxy uses a dual-node high availability architecture and distributes traffic evenly between the two proxy nodes. The system continuously monitors the health of the proxy nodes. If a node fails, the system proactively disconnects its connections, and the remaining healthy node automatically takes over all traffic to ensure uninterrupted service. At the same time, the system automatically rebuilds and recovers the failed proxy node. This process is typically completed in about 2 minutes, during which the database cluster remains accessible.

    In rare cases, connections to a failed node may not be disconnected promptly and may become unresponsive. To handle this, configure a timeout policy on the client, such as the JDBC socketTimeout and connectTimeout parameters. This allows the application layer to promptly detect and terminate suspended connections, improving the system's fault tolerance and response efficiency.
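
    For example, with the PostgreSQL JDBC driver the two timeouts can be passed as connection properties (the endpoint is a placeholder and the values, in seconds, are only illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ClientTimeoutDemo {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "myuser");
            props.setProperty("password", "mypassword");
            // Fail the connection attempt if the TCP connect does not complete in 10 seconds.
            props.setProperty("connectTimeout", "10");
            // Treat the connection as dead if no response arrives within 60 seconds.
            props.setProperty("socketTimeout", "60");

            // Placeholder cluster endpoint.
            String url = "jdbc:postgresql://pc-xxxx.polardb.example.com:5432/mydb";

            try (Connection conn = DriverManager.getConnection(url, props)) {
                System.out.println("connected: " + conn.isValid(5));
            }
        }
    }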

Backup and recovery

  • Q: What backup method does PolarDB use?

    A: PolarDB uses snapshots for backups. For more information, see Backup method 2: Manual backup.

  • Q: How fast is database recovery?

    A: Currently, recovery (cloning) from a backup set (snapshot) takes 40 minutes per TB. If you are recovering to a point in time, the time to apply redo logs must also be included. This part of the recovery takes about 20 to 70 seconds per GB. The total recovery time is the sum of these two parts.
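
    As an illustrative estimate based on these figures, restoring a 2 TB backup set would take about 2 × 40 = 80 minutes, and applying 50 GB of redo logs for point-in-time recovery would add roughly 50 × 20 to 50 × 70 seconds, that is, about 17 to 58 minutes, for a total of roughly 1.5 to 2.5 hours.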

Performance and capacity

  • Q: What is the maximum number of tables? At what number of tables might performance start to degrade?

    A: The maximum number of tables is limited by the number of files. For more information, see Limits.

  • Q: Can table partitioning improve the query performance of PolarDB?

    A: Yes. If a query can be limited to a specific partition, performance can be improved.
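
    As an illustrative sketch (the orders table and the endpoint are hypothetical), a range-partitioned table lets the optimizer scan only the partitions that match the filter, which you can confirm in the EXPLAIN output:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PartitionPruningDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder cluster endpoint.
            String url = "jdbc:postgresql://pc-xxxx.polardb.example.com:5432/mydb";

            try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
                 Statement stmt = conn.createStatement()) {
                // Hypothetical range-partitioned table with one partition per year.
                stmt.execute("CREATE TABLE orders (id bigint, order_date date) PARTITION BY RANGE (order_date)");
                stmt.execute("CREATE TABLE orders_2024 PARTITION OF orders FOR VALUES FROM ('2024-01-01') TO ('2025-01-01')");
                stmt.execute("CREATE TABLE orders_2025 PARTITION OF orders FOR VALUES FROM ('2025-01-01') TO ('2026-01-01')");

                // A query restricted to a single year only scans the matching partition;
                // the other partitions are pruned, which shows up in the plan.
                try (ResultSet rs = stmt.executeQuery(
                        "EXPLAIN SELECT count(*) FROM orders WHERE order_date >= DATE '2025-01-01'")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }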

  • Q: Does PolarDB support creating 10,000 databases? What is the maximum number of databases?

    A: PolarDB supports creating 10,000 databases. The maximum number of databases is limited by the number of files. For more information, see Limits.

  • Q: How are IOPS limited and isolated? Will multiple PolarDB cluster nodes compete for I/O?

    A: The IOPS for each node in a PolarDB cluster is set according to its specifications. The IOPS of each node is independently isolated and does not affect other nodes.

  • Q: Will the performance of the primary node be affected if the performance of a read-only node slows down?

    A: An excessively high load or increased replication delay on a read-only node may slightly increase the memory consumption of the primary node.

  • Q: What is the performance impact of enabling SQL Explorer (full SQL log audit)?

    A: There is no impact.

  • Q: What high-speed network protocol does PolarDB use?

    A: PolarDB uses dual 25 Gbps Remote Direct Memory Access (RDMA) technology between its database compute nodes and storage nodes, and also between storage data replicas. This provides strong I/O performance with low latency and high throughput.

  • Q: What is the maximum bandwidth for an external connection to PolarDB?

    A: The maximum bandwidth for an external connection to PolarDB is 10 Gbit/s.

  • Q: What should I do if it takes a long time to restart a node?

    A: The more files in your cluster, the longer it will take to restart a node. In this case, you can speed up the restart by setting the innodb_fast_startup parameter to ON. For more information about how to modify parameters, see Set cluster and node parameters.