
E-MapReduce: Log on to a Kafka cluster by using SASL

Updated: Aug 15, 2024

Simple Authentication and Security Layer (SASL) is a framework that allows applications to select and implement various authentication mechanisms. You can use SASL to verify the identity of a user, which ensures that only clients with valid credentials can connect to Kafka and significantly improves service security. This topic describes how to configure SASL for an E-MapReduce (EMR) Kafka cluster and log on to the cluster by using SASL.

Prerequisites

A Dataflow cluster is created in the EMR console, and Kafka is selected when you create the cluster. For more information, see Create a Dataflow Kafka cluster.

Configure SASL

EMR manages the SASL configuration policy based on the kafka.sasl.config.type configuration item in the server.properties configuration file.

By default, SASL is disabled for Kafka clusters. You can perform the following steps to enable SASL. The following example shows you how to configure the SCRAM-SHA-512 mechanism.
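For orientation, the broker-side settings that the following steps add to the server.properties configuration file are summarized below. The values are copied from the steps in this procedure; the listener name sasl_plaintext is an assumption that depends on the listener configuration of your cluster.

  # Summary of the server.properties settings that this procedure produces (sketch)
  sasl.enabled.mechanisms=SCRAM-SHA-512
  sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
  listener.name.sasl_plaintext.sasl.enabled.mechanisms=SCRAM-SHA-512
  listener.name.sasl_plaintext.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";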

  1. Create a user.

    1. Log on to the master node of your cluster in SSH mode. For more information, see Log on to a cluster.

    2. Run the following command to create an admin user:

      kafka-configs.sh --bootstrap-server core-1-1:9092 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
      Note

      In this example, the password of the admin user is admin-secret. You can specify the password based on your business requirements.

  2. Go to the Configure tab of the Kafka service page.

    1. Log on to the EMR console. In the left-side navigation pane, click EMR on ECS.

    2. In the top navigation bar, select the region in which your cluster resides and select a resource group based on your business requirements.

    3. On the EMR on ECS page, find the desired cluster and click Services in the Actions column.

    4. On the Services tab, find the Kafka service and click Configure.

  3. Modify the configurations that are related to the SASL authentication mechanism.

    1. Add configuration items that are related to SASL.

      On the server.properties tab of the Configure tab of the Kafka service page, add configuration items that are related to SASL.

      1. Click Add Configuration Item.

      2. In the Add Configuration Item dialog box, add the configuration items described in the following table and click OK.

        Configuration item: sasl.mechanism.inter.broker.protocol
        Value: SCRAM-SHA-512

        Configuration item: sasl.enabled.mechanisms
        Value: SCRAM-SHA-512

      3. In the dialog box that appears, configure the Execution Reason parameter and click Save.

    2. Modify the listener configurations.

      1. On the Configure tab of the Kafka service page, click the server.properties tab.

      2. On the server.properties tab, change the value of the kafka.sasl.config.type configuration item to CUSTOM and click Save.

      3. In the dialog box that appears, configure the Execution Reason parameter and click Save.

    3. Configure Java Authentication and Authorization Service (JAAS) for the Kafka broker.

      • Method 1: Configure JAAS for the Kafka broker by using custom configuration items.

        1. On the Configure tab of the Kafka service page, click the server.properties tab.

        2. Click Add Configuration Item, add the configuration items described in the following table, and then click OK.

          Configuration item: listener.name.sasl_plaintext.sasl.enabled.mechanisms
          Value: SCRAM-SHA-512

          Configuration item: listener.name.sasl_plaintext.scram-sha-512.sasl.jaas.config
          Value: org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";

        3. In the dialog box that appears, configure the Execution Reason parameter and click Save.

      • Method 2: Configure JAAS for the Kafka broker by using a configuration file.

        1. On the Configure tab of the Kafka service page, modify the configuration items described in the following table and click Save.

          Tab: kafka_server_jaas.conf
          Configuration item: kafka.server.jaas.content
          Value:
          KafkaServer {
              org.apache.kafka.common.security.scram.ScramLoginModule required
              username="admin"
              password="admin-secret";
          };

          Tab: server.properties
          Configuration item: kafka_opts
          Value: -Djava.security.auth.login.config=/etc/taihao-apps/kafka-conf/kafka-conf/kafka_server_jaas.conf

        2. In the dialog box that appears, configure the Execution Reason parameter and click Save.

    4. Configure JAAS for the Kafka client.

      To configure JAAS for the Kafka client, configure the kafka.client.jaas.content configuration item in the kafka_client_jaas.conf configuration file. Kafka Schema Registry and Kafka REST Proxy use this configuration when they start.

      1. On the Configure tab of the Kafka service page, modify the configuration items described in the following table and click Save.

        Tab: kafka_client_jaas.conf
        Configuration item: kafka.client.jaas.content
        Value:
        KafkaClient {
            org.apache.kafka.common.security.scram.ScramLoginModule required
            username="admin"
            password="admin-secret";
        };

        Tab: schema-registry.properties
        Configuration item: schema_registry_opts
        Value: -Djava.security.auth.login.config=/etc/taihao-apps/kafka-conf/kafka-conf/kafka_client_jaas.conf

        Tab: kafka-rest.properties
        Configuration item: kafkarest_opts
        Value: -Djava.security.auth.login.config=/etc/taihao-apps/kafka-conf/kafka-conf/kafka_client_jaas.conf

      2. In the dialog box that appears, configure the Execution Reason parameter and click Save.

  4. Restart the Kafka service.

    1. On the Configure tab of the Kafka service page, choose More > Restart in the upper-right corner.

    2. In the dialog box that appears, configure the Execution Reason parameter and click OK.

    3. In the Confirm message, click OK.
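After the Kafka service is restarted, you can optionally verify that the SCRAM credentials of the admin user are registered. The following command is a minimal sketch; it assumes a client configuration file that contains the admin credentials, such as the /root/sasl_admin.properties file that is created in the next section.

  # Verify that SCRAM credentials are registered for the admin user.
  # /root/sasl_admin.properties is the admin client configuration file described in the next section.
  kafka-configs.sh --bootstrap-server core-1-1:9092 --describe --entity-type users --entity-name admin --command-config /root/sasl_admin.properties

If the configuration is in effect, the output should list the SCRAM-SHA-256 and SCRAM-SHA-512 credentials of the admin user.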

Log on to a Kafka cluster by using SASL

The following example shows how to authenticate from a Kafka client and log on to the Kafka cluster by using the SCRAM-SHA-512 mechanism. In this example, the producer and consumer performance test scripts that are shipped with Kafka are used to run test jobs.

  1. Log on to the master node of your EMR cluster in SSH mode. For more information, see Log on to a cluster.

  2. Create an administrator configuration file.

    1. Run the following command to create the sasl_admin.properties configuration file:

      vim sasl_admin.properties
    2. Add the following information to the configuration file:

      security.protocol=SASL_PLAINTEXT
      sasl.mechanism=SCRAM-SHA-512
      sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
  3. Run the following command to create a regular user:

    kafka-configs.sh --bootstrap-server core-1-1:9092 --alter --add-config 'SCRAM-SHA-256=[password=<yourUserpassword>],SCRAM-SHA-512=[password=<yourUserpassword>]' --entity-type users --entity-name <yourUsername> --command-config /root/sasl_admin.properties

    <yourUsername> and <yourUserpassword> in the command indicate the name and password of the user that you want to create. You can specify the name and password based on your business requirements.

  4. Create a user configuration file.

    1. Run the following command to create the sasl_user.properties configuration file:

      vim sasl_user.properties
    2. Add the following information to the configuration file:

      security.protocol=SASL_PLAINTEXT
      sasl.mechanism=SCRAM-SHA-512
      sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<yourUsername>" password="<yourUserpassword>";
  5. Run the following command to create a topic:

    kafka-topics.sh --partitions 10 --replication-factor 2 --bootstrap-server core-1-1:9092 --topic test --create --command-config /root/sasl_user.properties

    test in the command is the name of the topic that you want to create. You can specify the name based on your business requirements.

  6. Run the following command to use the SASL configuration file to generate data:

    kafka-producer-perf-test.sh --topic test --num-records 123456 --throughput 10000 --record-size 1024 --producer-props bootstrap.servers=core-1-1:9092 --producer.config sasl_user.properties
  7. Run the following command to use the SASL configuration file to consume data:

    kafka-consumer-perf-test.sh --broker-list core-1-1:9092 --messages 100000000 --topic test --consumer.config sasl_user.properties
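As an alternative to the performance test scripts, you can interactively produce and consume a few test messages with the Kafka console tools by using the same SASL client configuration file. The following commands are a sketch that assumes the sasl_user.properties file and the test topic created in the preceding steps.

  # Produce messages interactively by using the SASL user configuration.
  kafka-console-producer.sh --bootstrap-server core-1-1:9092 --topic test --producer.config /root/sasl_user.properties

  # Consume the messages from the beginning of the topic.
  kafka-console-consumer.sh --bootstrap-server core-1-1:9092 --topic test --from-beginning --consumer.config /root/sasl_user.properties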

References

For information about how to establish an encrypted data transmission channel between a client and a server, see Use SSL to encrypt Kafka data.