
ApsaraMQ for Kafka:Troubleshoot ApsaraMQ for Kafka client errors

Last Updated:Mar 11, 2026

Find the error message reported by your ApsaraMQ for Kafka client in the following sections to identify the cause and apply a fix.

Connection and authentication errors

TimeoutException (Java), run out of brokers (Go), Authentication failed for user (Python)

The client cannot connect to the ApsaraMQ for Kafka broker. The root cause is typically a network issue or an authentication failure.

Note

These errors apply only to ApsaraMQ for Kafka instances accessed over the Internet.

Diagnose the issue:

  1. Make sure that the server is correctly configured.

  2. Run telnet to test connectivity to the broker:

       telnet <broker-endpoint> <port>
  3. If the connection succeeds but the error persists, the issue is likely authentication-related. Verify that the SASL username, password, and mechanism configured on the client match the settings of your instance.
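The authentication settings to verify in step 3 can be sketched as the following Java client configuration. This is a minimal sketch: the endpoint, username, password, and truststore path are placeholders, and your instance may require a different SASL mechanism.

```java
import java.util.Properties;

public class SaslConfigCheck {
    // Sketch of the client-side SASL_SSL settings to double-check when telnet
    // succeeds but the client still fails to connect. All values below are
    // placeholders, not real credentials.
    public static Properties saslProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "your-broker-endpoint:9093");   // placeholder
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"yourUser\" password=\"yourPassword\";");      // placeholders
        props.put("ssl.truststore.location",
            "/path/to/kafka.client.truststore.jks");                    // placeholder
        return props;
    }

    public static void main(String[] args) {
        saslProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

A mismatch in any of these values (most often the username, password, or mechanism) produces the authentication errors listed above even when the network path is healthy.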

No such configuration property: "sasl.mechanisms" or No worthy mechs found (C++, PHP, Node.js)

The SASL and Secure Sockets Layer (SSL) modules are not installed or not properly installed on your system.

Fix (CentOS):

Install the required modules:

# Install the SSL module
sudo yum install openssl openssl-devel

# Install the SASL module
sudo yum install cyrus-sasl{,-plain}

For other operating systems, refer to the official documentation for your distribution.

No KafkaClient Entry (Java)

The Java client cannot find the kafka_client_jaas.conf configuration file required for Java Authentication and Authorization Service (JAAS) login.

Fix:

  1. Create or locate the kafka_client_jaas.conf file and place it in a known directory, such as /home/admin/.

  2. Point the Java client to this file using one of the following methods:

     Option A: Set a JVM parameter:

       -Djava.security.auth.login.config=/home/admin/kafka_client_jaas.conf

     Option B: Set the system property in code:

       // This line must run before you initialize the Kafka client.
       System.setProperty("java.security.auth.login.config", "/home/admin/kafka_client_jaas.conf");

     Option C: Add the following line to ${JAVA_HOME}/jre/lib/java.security:

       login.config.url.1=file:/home/admin/kafka_client_jaas.conf

For details on the JAAS configuration file format, see JAAS Login Configuration File in the Oracle documentation.
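For reference, a minimal kafka_client_jaas.conf using the PLAIN mechanism looks like the following. The username and password are placeholders, and your instance may require a different login module; check the credentials issued for your instance.

```
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="yourUser"
    password="yourPassword";
};
```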

Topic errors

Leader is not available or leader is in election (all languages)

This error typically appears briefly while a topic is being initialized. If it persists, the topic may not exist.

Fix:

  1. Log on to the ApsaraMQ for Kafka console.

  2. Check whether the target topic exists.

  3. If the topic does not exist, create one. For instructions, see Step 1: Create a topic.

Consumer errors

Error sending fetch request (Java)

The consumer fails to fetch messages from the broker. Common causes include network issues, fetch timeouts, and broker-side traffic throttling.

Diagnose the issue:

  1. Make sure that the server is correctly configured.

  2. Run telnet to test connectivity to the broker.

  3. If the network is healthy, the consumer may be requesting too much data per fetch. Reduce the fetch size by tuning these parameters:

    Parameter                 | Description
    fetch.max.bytes           | The maximum number of bytes that the broker returns in a single fetch request.
    max.partition.fetch.bytes | The maximum number of bytes that a single partition returns in a single fetch request.
  4. If the issue persists, check whether broker-side traffic is throttled. In the ApsaraMQ for Kafka console, open the Instance Details page and check:

    • Traffic Specification -- for VPC-connected instances.

    • Public Traffic -- for Internet-connected instances.

    If your traffic exceeds the specification, upgrade your instance or reduce client throughput.
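The fetch-size tuning described in step 3 can be sketched as the following consumer configuration. The values shown are illustrative, not recommendations; pick sizes that match your message sizes and network capacity.

```java
import java.util.Properties;

public class FetchTuning {
    // Sketch: reduce the data volume per fetch so that each request completes
    // within the fetch timeout. Defaults are 52428800 bytes (fetch.max.bytes)
    // and 1048576 bytes (max.partition.fetch.bytes); the values below are
    // example reductions only.
    public static Properties tunedProps() {
        Properties props = new Properties();
        props.put("fetch.max.bytes", "10485760");         // 10 MB per fetch (example value)
        props.put("max.partition.fetch.bytes", "524288"); // 512 KB per partition (example value)
        return props;
    }

    public static void main(String[] args) {
        tunedProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```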

Message format errors

CORRUPT_MESSAGE or DisconnectException (all languages)

The cause depends on the storage type of your ApsaraMQ for Kafka instance:

Storage type  | Cause                                                                                             | Fix
Cloud storage | Kafka clients 3.0 and later enable the idempotence feature by default, but cloud storage does not support idempotence. | Set enable.idempotence to false in the producer configuration.
Local storage | The message key is not specified during log compaction.                                           | Specify a message key for each message.
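For cloud-storage instances, the fix in the table above can be sketched as the following producer configuration. The serializer classes are the standard string serializers shipped with the Java client; substitute your own as needed.

```java
import java.util.Properties;

public class IdempotenceOff {
    // Sketch: Kafka producer clients 3.0 and later enable idempotence by
    // default, which cloud-storage instances do not support. Disabling it
    // avoids CORRUPT_MESSAGE and DisconnectException errors.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("enable.idempotence", "false");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        producerProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```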

Spring Cloud integration errors

array index out of bound exception (Java)

Spring Cloud uses a built-in format to parse message headers. When messages are produced by a non-Spring Cloud client but consumed by Spring Cloud Stream, the parser fails because the message headers do not match the expected format.

Fix:

  • Recommended: Use Spring Cloud Stream for both producing and consuming messages.

  • Alternative: If you produce messages with a different client (for example, a native Java Kafka client) and consume them with Spring Cloud Stream, disable header parsing by setting headerMode to raw, as shown in the following configuration. For details, see the Spring Cloud Stream Reference Guide.

      spring:
        cloud:
          stream:
            bindings:
              input:
                consumer:
                  headerMode: raw