
ApsaraMQ for Kafka:Limits

Last Updated: Feb 27, 2026

ApsaraMQ for Kafka enforces constraints on specific metrics. To avoid program exceptions, stay within these limits when using ApsaraMQ for Kafka.

Important

Instability caused by exceeding the following limits is not covered under the Service-Level Agreement (SLA) or eligible for compensation.

Limits

The following table lists the limits for ApsaraMQ for Kafka.

Limit

Value

Description

Limit on the total number of topics (partitions)

Supported

ApsaraMQ for Kafka stores and coordinates data at the partition level. Creating too many topics or partitions causes storage fragmentation, which reduces cluster performance and stability.

Minimum number of partitions per topic

  • Subscription and pay-as-you-go instances:

    • Cloud storage topics: minimum of 2 partitions.

    • Local storage topics: minimum of 1 partition.

  • Serverless instances:

    • Cloud-native storage topics: minimum of 1 partition.

If traffic is high, a single partition may cause data skew and hot spots. Set the number of partitions appropriately.

Decrease the number of partitions for a topic

Not supported

This is restricted by Apache Kafka's design.

Expose ZooKeeper

Not supported

Starting with Apache Kafka 0.9.0, clients no longer need to access ZooKeeper. In ApsaraMQ for Kafka, ZooKeeper is partially shared and not exposed for security reasons. You do not need to interact with ZooKeeper.

Log on to machines where ApsaraMQ for Kafka is deployed

Not supported

None.

Version

2.2.x to 3.3.x

  • Non-serverless instances support versions 2.2.x to 2.6.x.

  • Serverless instances support version 3.3.x.

To upgrade your version, see Upgrade instance versions.

Partition-to-topic ratio

1:1

The number of available topics equals the total number of partitions. For example, if you purchase an instance with 50 partitions, select the alikafka.hw.2xlarge traffic specification, and receive 1,000 free partitions, your total partitions = 50 + 1,000 = 1,050. Thus, you can create up to 1,050 topics.

Note

This applies only to non-serverless instances.

Change instance region

Not supported

After purchase and deployment, an instance’s region is tightly bound to physical resources and cannot be changed. To use a different region, release the instance and purchase a new one.

Change instance network properties

Supported

You can change network properties as needed. For details, see Upgrade instance configurations.

Message size

10 MB

Messages must not exceed 10 MB. Larger messages fail to send.

Monitoring and alerting

Supported

Monitoring data is delayed by about one minute.

Access point

Varies by instance edition

  • Non-serverless instances:

    • Standard Edition: supports default and SSL endpoints.

    • Professional Edition: supports default, SSL, and SASL endpoints.

  • Serverless instances: support default, SSL, and SASL endpoints.

Single-partition cloud storage

May become unavailable during breakdowns or upgrades

Create topics with more than one partition. If your workload requires a single partition, use local storage.

Note

  • This restriction applies only to non-serverless instances. Serverless instances provide high availability for single-partition cloud storage topics.

  • Only Professional Edition instances let you select local storage as the storage engine when creating a topic. Standard Edition does not support this.

Maximum number of messages per batch

32,767

If individual messages are small, set batch.size to no more than 16,384.

Note

This limit applies only to non-serverless instances.
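The batch limits above correspond to standard Kafka producer settings. As a minimal sketch (the configuration keys are standard Kafka producer options, but the chosen values are illustrative, not product defaults):

```python
# Illustrative producer settings for staying within the batch limits.
# Keys follow the standard Kafka producer configuration; values are examples.
producer_config = {
    # Batch buffer size in bytes; keep at or below 16384 (16 KB)
    # when individual messages are small, per the limit above.
    "batch.size": 16384,
    # Wait briefly so small messages accumulate into fewer, larger batches.
    "linger.ms": 50,
    # A single request must stay under the 10 MB message size limit.
    "max.request.size": 10 * 1024 * 1024,
}
```

These values would be passed to whichever Kafka client library you use; consult its documentation for the exact parameter spelling it expects.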

Note

You can no longer purchase non-serverless ApsaraMQ for Kafka instances by topic count. If you have an existing instance purchased by topic, the partition-to-topic ratio is 1:16. For Professional Edition instances purchased by topic, the number of available topics equals twice the number of purchased topics.

Quota limits

The following table lists usage limits for ApsaraMQ for Kafka. Exceeding these limits may cause stability issues. The “Other limits” section describes scenarios that can overload the server and affect stability. Use caution in these cases.

Limits apply per cluster unless otherwise stated. To request higher quotas, submit a ticket.

The “//” symbol in formulas denotes integer division (rounding down).
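The quota formulas in the table below all share one pattern: a base value plus a fixed increment for every step of traffic, sometimes capped at a maximum. As an illustration (the function name and structure are my own, not part of the product), the pattern can be sketched as:

```python
def quota(base, step, increment, traffic_mbps, cap=None):
    """Generic quota formula: base + (traffic // step) * increment,
    optionally capped at a maximum. '//' is integer division."""
    value = base + (traffic_mbps // step) * increment
    return min(cap, value) if cap is not None else value

# Connections per node, subscription instance, 250 MB/s actual traffic:
# min(10000, 1000 + (250 // 100) * 1000) = 3000
connections = quota(base=1000, step=100, increment=1000,
                    traffic_mbps=250, cap=10000)

# PRODUCE request rate for the same instance (no cap):
# 10000 + (250 // 20) * 2000 = 34000
produce_rps = quota(base=10000, step=20, increment=2000, traffic_mbps=250)
```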

Each entry below lists the limit, one or more values, and a description. Where multiple values are listed, they apply to the following instance types, in order:

  • Subscription and hourly pay-as-you-go instances

  • Serverless (Basic Edition)

  • Serverless (Standard Edition and Professional Edition)

Connections per node

  • Base connections: 1,000.

  • Add 1,000 connections for every 100 MB/s increase in actual traffic.

  • Maximum: 10,000.

Formula:

C = min(10000, 1000 + (F // 100) * 1000)

  • Base connections: 2,000.

  • Add 1,000 connections for every 300 MB/s increase in reserved production capacity.

  • Maximum: 10,000.

Formula:

C = min(10000, 2000 + (F // 300) * 1000)

TCP connections per broker.

To request a higher connection limit, please submit a ticket.

Internet (SSL) connections per node

  • Base connections: 200.

  • Add 100 connections for every 100 MB/s increase in actual traffic.

  • Maximum: 1,000.

Formula:

C = min(1000, 200 + (F // 100) * 100)

  • Base connections: 200.

  • Add 100 connections for every 300 MB/s increase in reserved production capacity.

  • Maximum: 1,000.

Formula:

C = min(1000, 200 + (F // 300) * 100)

Internet (SSL) TCP connections per broker.

Connection attempts per second per node

50 per second

150 per second

150 per second

Client-to-server connection attempts per second, including failed attempts due to authentication errors.

Internet (SSL) connection attempts per second per node

10 per second

Client-to-server Internet (SSL) connection attempts per second, including failed attempts due to authentication errors.

Batch size

Sending is regarded as fragmented if the TP50 (median) batch size is below 4 KB.

Message batch size in PRODUCE requests after client batching. Use client version 2.4 or later to improve batching. See Improve sending performance (reduce fragmented requests).

Produce request rate (cluster)

  • Base: 10,000 requests per second.

  • Add 2,000 requests per second for every 20 MB/s increase in actual traffic.

Formula:

R = 10000 + (F // 20) * 2000

  • Base: 10,000 requests per second.

  • Add 5,000 requests per second for every 300 MB/s increase in reserved production capacity.

Formula:

R = 10000 + (F // 300) * 5000

  • Base: 10,000 requests per second.

  • Add 2,000 requests per second for every 60 MB/s increase in reserved production capacity.

Formula:

R = 10000 + (F // 60) * 2000

Number of PRODUCE requests clients send per second.

To request a higher limit, please submit a ticket.

Fetch request rate (cluster)

  • Base: 5,000 requests per second.

  • Add 1,000 requests per second for every 20 MB/s increase in actual consumption traffic.

Formula:

R = 5000 + (F // 20) * 1000

  • Base: 5,000 requests per second.

  • Add 2,500 requests per second for every 100 MB/s increase in reserved consumption capacity.

Formula:

R = 5000 + (F // 100) * 2500

  • Base: 5,000 requests per second.

  • Add 1,000 requests per second for every 20 MB/s increase in reserved consumption capacity.

Formula:

R = 5000 + (F // 20) * 1000

Number of FETCH requests clients send per second.

To request a higher limit, please submit a ticket.

Offset commit rate per node

  • Base: 100 requests per second.

  • Add 100 requests per second for every 100 MB/s increase in actual traffic.

  • Maximum: 1,000 requests per second.

Formula:

R = min(1000, 100 + (F // 100) * 100)

  • Base: 100 requests per second.

  • Add 100 requests per second for every 100 MB/s increase in reserved production capacity.

  • Maximum: 1,000 requests per second.

Formula:

R = min(1000, 100 + (F // 100) * 100)

Number of OFFSET_COMMIT requests clients send per second.

To request a higher limit, please submit a ticket.

Metadata request rate (cluster)

  • Base: 100 requests per second.

  • Add 100 requests per second for every 100 MB/s increase in actual traffic.

  • Maximum: 1,000 requests per second.

Formula:

R = min(1000, 100 + (F // 100) * 100)

  • Base: 100 requests per second.

  • Add 100 requests per second for every 100 MB/s increase in reserved production capacity.

  • Maximum: 1,000 requests per second.

Formula:

R = min(1000, 100 + (F // 100) * 100)

Client metadata requests received by the server, such as METADATA, INIT_PRODUCER_ID, CREATE_ACL, JOIN_GROUP.

Warning

Excessive requests can affect cluster stability.

Maximum number of partitions

For maximum partitions per instance type, see Instance partitions.

Includes partitions from all topic types created by users.

To request a higher limit, please submit a ticket.

Partition creation/deletion rate (cluster)

900 partitions every 10 seconds

Includes all operations via console, OpenAPI, Kafka Admin, and other methods.

Consumer groups per cluster

2,000 per cluster

Maintain a topic-to-group subscription ratio of 1:1. Do not exceed 3:1.

Number of consumer groups used.

To request a higher limit, please submit a ticket.

Warning

Too many consumer groups increases coordination load and metadata complexity, affecting performance and fault recovery time.

Message format version

Message format version must be greater than V1 for both produce and consume operations.

Use client version 2.4 or later.

Warning

Older Kafka message formats can increase server CPU usage, reduce throughput, and cause compatibility and security issues.

Other limits

  • Enabling compression algorithms such as GZIP consumes more server resources, increasing latency and reducing throughput.

  • Frequent initialization of transactional Producer Id can cause memory overflow and server overload, affecting stability. The kernel parameter transactional.id.expiration.ms is set to 15 minutes. For special requirements, submit a ticket.

  • Invalid message timestamps are blocked. When message.timestamp.type=CreateTime, the broker rejects a message if the difference between the broker's receive time and the message timestamp exceeds message.timestamp.difference.max.ms. This guards against incorrect client timestamps: a timestamp far in the past can cause log segments to be deleted immediately, while one far in the future can prevent them from ever being deleted.

  • To prevent storage exhaustion from abnormal writes to compact topics, the default storage limit per compact topic partition is 5 GB. For special requirements, submit a ticket.

  • If instance CPU usage exceeds 85%, it may cause instability, such as breakdowns or long-tail latency jitter in produce or consume operations.

  • Kafka performance relies on cluster resources. Skewed message distribution or partition allocation prevents full utilization of cluster capacity.

  • Open source Kafka transactional messaging has known unresolved issues, such as KAFKA-12671. Use it with caution. For more information, see KAFKA ISSUES.

  • Kafka may deliver duplicate messages during rebalancing. Implement idempotency checks in your consumption logic to avoid business impact.
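The last point above can be addressed with an idempotency check in the consumer. A minimal sketch (the class and dedup strategy are illustrative; a production system would bound or persist the set of seen keys, or store offsets together with processing results):

```python
class IdempotentProcessor:
    """Skips messages whose unique key has already been processed.
    Illustrative only: a real system would bound or persist `seen`."""

    def __init__(self):
        self.seen = set()
        self.results = []

    def handle(self, message_id, payload):
        if message_id in self.seen:
            return False  # duplicate delivery, e.g. after a rebalance
        self.seen.add(message_id)
        self.results.append(payload)  # stand-in for real business logic
        return True

proc = IdempotentProcessor()
proc.handle("msg-1", "order created")
proc.handle("msg-1", "order created")  # redelivered duplicate, ignored
```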
