
Simple Log Service:Configure Kibana to use the Elasticsearch-compatible APIs of Simple Log Service for data visualization

Last Updated: Oct 31, 2024

If you use Kibana to visualize Elasticsearch logs, you can migrate your log data from Elasticsearch to Simple Log Service without modifying your business code. Kibana continues to work by using the Elasticsearch-compatible APIs of Simple Log Service.

Important

Alibaba Cloud has proprietary rights to the information in this topic. This topic describes the capabilities of Alibaba Cloud to interact with third-party services. The names of third-party companies and services are used only for reference.

Prerequisites

  • Data is collected to a Standard Logstore in Simple Log Service. For more information, see Data collection overview.

  • Indexes are configured before you query logs. For more information, see Create indexes.

  • An AccessKey pair is created for a Resource Access Management (RAM) user, and the required permissions to query logs in Logstores are granted to the RAM user. For more information, see Grant permissions to the RAM user.

Background information

Kibana is an Elasticsearch-based data visualization and exploration tool. You can use Kibana to query, analyze, and visualize data in Elasticsearch. Simple Log Service provides Elasticsearch-compatible APIs for users who use Kibana for log queries and report-based visualization. Users can use these APIs to query and analyze Simple Log Service data in Kibana.

How it works

You must deploy an Elasticsearch cluster, Kibana, and a proxy in your client environment.

  • Kibana is used to query, analyze, and visualize data.

  • The Elasticsearch cluster is used to store the metadata of Kibana, such as Kibana configurations. The amount of metadata is small.

    The metadata of Kibana is frequently updated, and Simple Log Service does not support update operations. Therefore, you must deploy an Elasticsearch cluster to store the Kibana metadata.

  • A proxy is used to distinguish between requests for Kibana metadata and requests for the Elasticsearch-compatible APIs. You must deploy a proxy to route the API requests of Kibana to the correct backend.


Step 1: Deploy an Elasticsearch cluster, Kibana, and a proxy

Important

We recommend that you use a server that has at least 8 GB of memory.

Docker Compose

  1. Create a directory named sls-kibana and a subdirectory named data in the directory. Then, modify the permissions on the data subdirectory to ensure that the Elasticsearch container has read, write, and execute permissions on the subdirectory.

    mkdir sls-kibana
    
    cd sls-kibana
    
    mkdir data
    chmod 777 data 
  2. Create a file named .env in the sls-kibana directory. The following sample code shows the content of the file. Change the parameter values based on the actual business scenario.

    ES_PASSWORD=aStrongPassword # Change the parameter value based on your business scenario.
      
    SLS_ENDPOINT=cn-huhehaote.log.aliyuncs.com
    SLS_PROJECT=etl-dev-7494ab****
    SLS_ACCESS_KEY_ID=xxx
    SLS_ACCESS_KEY_SECRET=xxx
    # The following parameter is optional. If the name of your Simple Log Service project is excessively long, you can configure the parameter to specify an alias for the project.
    # SLS_PROJECT_ALIAS=etl-dev  
      
    # You can add more Simple Log Service projects. Take note that if you add more than six projects, you must also reference the projects in the docker-compose.yaml file.
    #SLS_ENDPOINT2=cn-huhehaote.log.aliyuncs.com
    #SLS_PROJECT2=etl-dev2
    #SLS_PROJECT_ALIAS2=etl-dev2
    #SLS_ACCESS_KEY_ID2=xxx
    #SLS_ACCESS_KEY_SECRET2=xxx

    • ES_PASSWORD: The password of the Elasticsearch cluster. You can use this password to log on to the Kibana console.

    • SLS_ENDPOINT: The endpoint of the Simple Log Service project. For more information, see Manage a project.

    • SLS_PROJECT: The name of the Simple Log Service project. For more information, see Manage a project.

    • SLS_ACCESS_KEY_ID: The AccessKey ID that you created for the RAM user in the "Prerequisites" section of this topic. The RAM user must have the required permissions to query logs in the Logstore. For more information, see Grant permissions to the RAM user.

    • SLS_ACCESS_KEY_SECRET: The AccessKey secret that you created for the RAM user in the "Prerequisites" section of this topic. The RAM user must have the required permissions to query logs in the Logstore. For more information, see Grant permissions to the RAM user.
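
    Because the .env file contains an AccessKey secret, you may want to restrict access to the file. The following optional hardening step, which is not part of the original procedure, limits read and write permissions to the file owner:

    chmod 600 .env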

  3. Create a file named docker-compose.yaml in the sls-kibana directory. The following sample code shows the content of the file:

    services:
      es:
        image: sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/elasticsearch:7.17.3
        environment:
          - "discovery.type=single-node"
          - "ES_JAVA_OPTS=-Xms2G -Xmx2G"
          - ELASTIC_USERNAME=elastic
          - ELASTIC_PASSWORD=${ES_PASSWORD} 
          - xpack.security.enabled=true
        volumes:
          # Make sure that the data subdirectory is created in advance and that the Elasticsearch container has read, write, and execute permissions on it.
          - ./data:/usr/share/elasticsearch/data  
      kproxy:
        image: sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.0.8
        depends_on:
          - es
        environment:
          - ES_ENDPOINT=es:9200
    
          # The first Simple Log Service project.
          - SLS_ENDPOINT=${SLS_ENDPOINT}
          - SLS_PROJECT=${SLS_PROJECT}
          - SLS_PROJECT_ALIAS=${SLS_PROJECT_ALIAS}
          - SLS_ACCESS_KEY_ID=${SLS_ACCESS_KEY_ID}
          - SLS_ACCESS_KEY_SECRET=${SLS_ACCESS_KEY_SECRET}
    
          # The second Simple Log Service project
          - SLS_ENDPOINT2=${SLS_ENDPOINT2}
          - SLS_PROJECT2=${SLS_PROJECT2}
          - SLS_PROJECT_ALIAS2=${SLS_PROJECT_ALIAS2}
          - SLS_ACCESS_KEY_ID2=${SLS_ACCESS_KEY_ID2}
          - SLS_ACCESS_KEY_SECRET2=${SLS_ACCESS_KEY_SECRET2}
    
          - SLS_ENDPOINT3=${SLS_ENDPOINT3}
          - SLS_PROJECT3=${SLS_PROJECT3}
          - SLS_PROJECT_ALIAS3=${SLS_PROJECT_ALIAS3}
          - SLS_ACCESS_KEY_ID3=${SLS_ACCESS_KEY_ID3}
          - SLS_ACCESS_KEY_SECRET3=${SLS_ACCESS_KEY_SECRET3}
    
          - SLS_ENDPOINT4=${SLS_ENDPOINT4}
          - SLS_PROJECT4=${SLS_PROJECT4}
          - SLS_PROJECT_ALIAS4=${SLS_PROJECT_ALIAS4}
          - SLS_ACCESS_KEY_ID4=${SLS_ACCESS_KEY_ID4}
          - SLS_ACCESS_KEY_SECRET4=${SLS_ACCESS_KEY_SECRET4}
    
          - SLS_ENDPOINT5=${SLS_ENDPOINT5}
          - SLS_PROJECT5=${SLS_PROJECT5}
          - SLS_PROJECT_ALIAS5=${SLS_PROJECT_ALIAS5}
          - SLS_ACCESS_KEY_ID5=${SLS_ACCESS_KEY_ID5}
          - SLS_ACCESS_KEY_SECRET5=${SLS_ACCESS_KEY_SECRET5}
    
          - SLS_ENDPOINT6=${SLS_ENDPOINT6}
          - SLS_PROJECT6=${SLS_PROJECT6}
          - SLS_PROJECT_ALIAS6=${SLS_PROJECT_ALIAS6}
          - SLS_ACCESS_KEY_ID6=${SLS_ACCESS_KEY_ID6}
          - SLS_ACCESS_KEY_SECRET6=${SLS_ACCESS_KEY_SECRET6}
    
          # You can add up to 255 Simple Log Service projects.
      kibana:
        image: sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kibana:7.17.3
        depends_on:
          - kproxy
        environment:
          - ELASTICSEARCH_HOSTS=http://kproxy:9201
          - ELASTICSEARCH_USERNAME=elastic
          - ELASTICSEARCH_PASSWORD=${ES_PASSWORD} 
          - XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED=true
        ports:
          - "5601:5601"
    
      # The following service component is optional. The component is used to automatically create Kibana index patterns.
      index-patterner:
        image: sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.0.8
        command: /usr/bin/python3 -u /workspace/create_index_partten.py
        depends_on:
          - kibana
        environment:
          - KPROXY_ENDPOINT=http://kproxy:9201
          - KIBANA_ENDPOINT=http://kibana:5601
          - KIBANA_USER=elastic
          - KIBANA_PASSWORD=${ES_PASSWORD}
    
          - SLS_PROJECT_ALIAS=${SLS_PROJECT_ALIAS}
          - SLS_ACCESS_KEY_ID=${SLS_ACCESS_KEY_ID}
          - SLS_ACCESS_KEY_SECRET=${SLS_ACCESS_KEY_SECRET}
    
          - SLS_PROJECT_ALIAS2=${SLS_PROJECT_ALIAS2}
          - SLS_ACCESS_KEY_ID2=${SLS_ACCESS_KEY_ID2}
          - SLS_ACCESS_KEY_SECRET2=${SLS_ACCESS_KEY_SECRET2}
    
          - SLS_PROJECT_ALIAS3=${SLS_PROJECT_ALIAS3}
          - SLS_ACCESS_KEY_ID3=${SLS_ACCESS_KEY_ID3}
          - SLS_ACCESS_KEY_SECRET3=${SLS_ACCESS_KEY_SECRET3}
    
          - SLS_PROJECT_ALIAS4=${SLS_PROJECT_ALIAS4}
          - SLS_ACCESS_KEY_ID4=${SLS_ACCESS_KEY_ID4}
          - SLS_ACCESS_KEY_SECRET4=${SLS_ACCESS_KEY_SECRET4}
    
          - SLS_PROJECT_ALIAS5=${SLS_PROJECT_ALIAS5}
          - SLS_ACCESS_KEY_ID5=${SLS_ACCESS_KEY_ID5}
          - SLS_ACCESS_KEY_SECRET5=${SLS_ACCESS_KEY_SECRET5}
    
          - SLS_PROJECT_ALIAS6=${SLS_PROJECT_ALIAS6}
          - SLS_ACCESS_KEY_ID6=${SLS_ACCESS_KEY_ID6}
          - SLS_ACCESS_KEY_SECRET6=${SLS_ACCESS_KEY_SECRET6}
    
          # You can add up to 255 Simple Log Service projects.
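
    Optionally, you can verify that Docker Compose resolves the variables from the .env file before you start the services. The following command prints the final configuration with all variables substituted:

    docker compose config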
  4. Run the following command to start the service:

    docker compose up -d
  5. Run the following command to query the status of the service:

    docker compose ps
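
    To follow the startup logs of a service while it initializes, you can also run a command such as the following. The service name kibana matches the docker-compose.yaml file above; you can use es or kproxy in the same way.

    docker compose logs -f kibana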
  6. After the deployment is complete, enter http://${IP address of Kibana}:5601 in the browser to go to the logon page of Kibana. Then, enter the username elastic and the password that you specified for the Elasticsearch cluster in the .env file.

    Important

    You must add port 5601 to the security group rule of the server. For more information, see Add a security group rule.


Helm

Prerequisites

An Ingress controller, such as the NGINX, ALB, or MSE Ingress controller, is installed in the Container Service for Kubernetes (ACK) cluster. For more information, see Manage components.

Procedure

  1. Create a namespace.

    # Create a namespace.
    kubectl create namespace sls-kibana
  2. Create and edit the values.yaml file. The following sample code shows the content of the file. Change the parameter values based on your business scenario.

    kibana:
      ingressClass: nginx # Change the value based on the Ingress controller that you installed.
      # To obtain the value of this parameter, perform the following steps: Log on to the ACK console. In the left-side navigation pane, click Clusters. On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane, choose Operations > Add-ons. On the Add-ons page, enter Ingress to search for the installed Ingress controller.
      # ALB Ingress controller: Set the value to alb.
      # MSE Ingress controller: Set the value to mse.
      # NGINX Ingress controller: Set the value to nginx.
      ingressDomain: # You can leave this parameter empty. If you want to access Kibana by using a domain name, configure this parameter.
      ingressPath: /kibana/ # The subpath that is used to access Kibana. This parameter is required.
      # If you configure the ingressDomain parameter, you can set the ingressPath parameter to /.
        
    elasticsearch:
      password: aStrongPass # Change the password of the Elasticsearch cluster based on your business scenario. You can use this password to log on to the Kibana console. The username that is used to access the Elasticsearch cluster is elastic.
      #diskZoneId: cn-hongkong-c # Specify the zone where the cloud disk used by the Elasticsearch cluster is located. If not specified, the zone is automatically selected.
    
    repository:
      region: cn-hangzhou 
      # The region where the image resides. If the image resides in a region in the Chinese mainland, set the value to cn-hangzhou. If the image resides in a region outside the Chinese mainland, set the value to ap-southeast-1. The image is pulled over the Internet.
        
    sls:
      - project: k8s-log-c5****** # The Simple Log Service project.
        endpoint: cn-huhehaote.log.aliyuncs.com # The endpoint of the Simple Log Service project.
      # alias: etl-logs # (Optional) If the name of the Simple Log Service project is excessively long in Kibana, you can specify an alias for the project.
        accessKeyId: the AccessKey ID that is used to access Simple Log Service.
        accessKeySecret: the AccessKey secret that is used to access Simple Log Service.
      # If you want to add another Simple Log Service project, follow the preceding steps.
      #- project: etl-dev2 # The name of the second Simple Log Service project.
      #  endpoint: cn-huhehaote.log.aliyuncs.com # The endpoint of the Simple Log Service project.
      #  accessKeyId: the AccessKey ID that is used to access Simple Log Service.
      #  accessKeySecret: the AccessKey secret that is used to access Simple Log Service.

    • kibana.ingressClass: Specify the value based on the Ingress controller that you installed. For more information, see Manage components.

      • ALB Ingress controller: Set the value to alb.

      • MSE Ingress controller: Set the value to mse.

      • NGINX Ingress controller: Set the value to nginx.

    • kibana.ingressDomain: Optional. If you want to access Kibana by using a domain name, configure this parameter.

    • kibana.ingressPath: The subpath that is used to access Kibana. If you configure the ingressDomain parameter, you can set this parameter to /.

    • repository.region: The region where the image resides. If the image resides in a region in the Chinese mainland, set the value to cn-hangzhou. If the image resides in a region outside the Chinese mainland, set the value to ap-southeast-1. The image is pulled over the Internet.

    • elasticsearch.password: The password of the Elasticsearch cluster. Change the password based on your business scenario. You can use this password to log on to the Kibana console. The username that is used to access the Elasticsearch cluster is elastic.

    • project: The name of the Simple Log Service project. For more information, see Manage a project.

    • endpoint: The endpoint of the Simple Log Service project. For more information, see Manage a project.

    • accessKeyId: The AccessKey ID that you created for the RAM user in the "Prerequisites" section of this topic. The RAM user must have the required permissions to query logs in the Logstore. For more information, see Grant permissions to the RAM user.

    • accessKeySecret: The AccessKey secret that you created for the RAM user in the "Prerequisites" section of this topic. The RAM user must have the required permissions to query logs in the Logstore. For more information, see Grant permissions to the RAM user.

  3. Run the following command to deploy Kibana by using Helm:

    helm install sls-kibana https://sls-kproxy.oss-cn-hangzhou.aliyuncs.com/sls-kibana-1.3.0.tgz -f values.yaml --namespace sls-kibana
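
    To check the deployment status and obtain the Ingress address that is used in the next step, you can run the following kubectl commands. The sls-kibana namespace matches the namespace that you created in step 1; the second command assumes that the chart created an Ingress resource.

    kubectl get pods -n sls-kibana
    kubectl get ingress -n sls-kibana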
  4. After the deployment is complete, enter http://${Ingress address}/kibana/ in the browser to go to the logon page of Kibana. Then, enter the username elastic and the password that you specified in the values.yaml file.

Docker

Step 1: Deploy an Elasticsearch cluster

Important

To deploy an Elasticsearch cluster by using Docker, you must first install and start Docker. For more information, see Install and use Docker on a Linux instance.

  1. Run the following command on the server to deploy an Elasticsearch cluster:

    sudo docker pull sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/elasticsearch:7.17.3
    
    sudo mkdir /data  # The directory in which Elasticsearch data is stored. Change the value based on your business scenario.
    sudo chmod 777 /data # Grant read, write, and execute permissions on the directory.
    
    sudo docker run -d --name es -p 9200:9200 \
               -e "discovery.type=single-node" \
               -e "ES_JAVA_OPTS=-Xms2G -Xmx2G" \
               -e ELASTIC_USERNAME=elastic \
               -e ELASTIC_PASSWORD=passwd \
               -e xpack.security.enabled=true \
               -v /data:/usr/share/elasticsearch/data \
               sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/elasticsearch:7.17.3

    • ELASTIC_USERNAME: The username that is used to access the Elasticsearch cluster. Set the value to elastic.

    • ELASTIC_PASSWORD: The password that is used to access the Elasticsearch cluster. In this example, passwd. Change the value based on your business scenario.

    • /data: The directory on the host in which Elasticsearch data is stored. Change the directory based on your business scenario.

  2. After the deployment is complete, run the following command to check whether the Elasticsearch cluster is deployed. If you use a public IP address, add port 9200 to the security group rule of the server. For more information, see Add a security group rule.

    curl http://${IP address of the server on which the Elasticsearch cluster is deployed}:9200

    If the output is a JSON response that contains the security_exception field, the Elasticsearch cluster is deployed.

    {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

Step 2: Deploy a proxy

You can use Kibana to access one or more Simple Log Service projects. You must specify the project information when you deploy the proxy. Sample code:

Single project

sudo docker pull sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.0.8

sudo docker run  -d --name proxy \
            -e ES_ENDPOINT=${IP address of the server on which the Elasticsearch cluster is deployed}:9200 \
            -e SLS_ENDPOINT=https://prjA.cn-guangzhou.log.aliyuncs.com/es/ \
            -e SLS_PROJECT=prjA \
            -e SLS_ACCESS_KEY_ID=${aliyunAccessId} \
            -e SLS_ACCESS_KEY_SECRET=${aliyunAccessKey} \
            -p 9201:9201 \
            -ti sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.0.8

Multiple projects

    Important
    • You can add information about up to 32 projects when you deploy a proxy.

    • The following variables are related to the first project: SLS_PROJECT, SLS_ENDPOINT, SLS_ACCESS_KEY_ID, and SLS_ACCESS_KEY_SECRET. Variables that are related to the other projects must be suffixed with numbers, such as SLS_PROJECT2 and SLS_ENDPOINT2.

    • If the endpoint and AccessKey pair settings of a project are the same as the settings of the first project, you do not need to specify the endpoint or AccessKey pair of the project.

sudo docker pull sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.0.8

# If the AccessKey pair of the second project is the same as that of the first project, you can omit the SLS_ACCESS_KEY_ID2 and SLS_ACCESS_KEY_SECRET2 variables.
sudo docker run  -d --name proxy \
            -e ES_ENDPOINT=${IP address of the server on which the Elasticsearch cluster is deployed}:9200 \
            -e SLS_ENDPOINT=https://prjA.cn-guangzhou.log.aliyuncs.com/es/ \
            -e SLS_ENDPOINT2=https://prjB.cn-guangzhou.log.aliyuncs.com/es/ \
            -e SLS_PROJECT=prjA \
            -e SLS_PROJECT2=prjB \
            -e SLS_ACCESS_KEY_ID=${aliyunAccessId} \
            -e SLS_ACCESS_KEY_SECRET=${aliyunAccessKey} \
            -e SLS_ACCESS_KEY_ID2=${aliyunAccessId} \
            -e SLS_ACCESS_KEY_SECRET2=${aliyunAccessKey} \
            -p 9201:9201 \
            -ti sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.0.8
  • Example 1

    Use two projects named prjA and prjB that have the same AccessKey pair settings.

    sudo docker pull sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.0.8
    sudo docker run  -d --name proxy \
                -e ES_ENDPOINT=${IP address of the server on which the Elasticsearch cluster is deployed}:9200 \
                -e SLS_ENDPOINT=https://prjA.cn-guangzhou.log.aliyuncs.com/es/ \
                -e SLS_ENDPOINT2=https://prjB.cn-guangzhou.log.aliyuncs.com/es/ \
                -e SLS_PROJECT=prjA \
                -e SLS_PROJECT2=prjB \
                -e SLS_ACCESS_KEY_ID=${aliyunAccessId} \
                -e SLS_ACCESS_KEY_SECRET=${aliyunAccessKey} \
                -p 9201:9201 \
                -ti sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.0.8
  • Example 2

    Use three projects named prjA, prjB, and prjC. The prjA and prjC projects have the same AccessKey pair settings.

    sudo docker run  -d --name proxy \
                -e ES_ENDPOINT=${IP address of the server on which the Elasticsearch cluster is deployed}:9200 \
                -e SLS_ENDPOINT=https://prjA.cn-guangzhou.log.aliyuncs.com/es/ \
                -e SLS_ENDPOINT2=https://prjB.cn-guangzhou.log.aliyuncs.com/es/ \
                -e SLS_ENDPOINT3=https://prjC.cn-guangzhou.log.aliyuncs.com/es/ \
                -e SLS_PROJECT=prjA \
                -e SLS_PROJECT2=prjB \
                -e SLS_PROJECT3=prjC \
                -e SLS_ACCESS_KEY_ID=${aliyunAccessId} \
                -e SLS_ACCESS_KEY_SECRET=${aliyunAccessKey} \
                -e SLS_ACCESS_KEY_ID2=${aliyunAccessId} \
                -e SLS_ACCESS_KEY_SECRET2=${aliyunAccessKey} \
                -p 9201:9201 \
                -ti sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.0.8

The following list describes the parameters:

  • ES_ENDPOINT: The address that is used to access the Elasticsearch cluster. Format: ${IP address of the server on which the Elasticsearch cluster is deployed}:9200.

  • SLS_ENDPOINT: The URL that is used to access data. Format: https://${project}.${sls-endpoint}/es/. ${project} specifies the name of the project. ${sls-endpoint} specifies the endpoint of the project. For more information, see Endpoints. Example: https://etl-guangzhou.cn-guangzhou.log.aliyuncs.com/es/.

    Important

    You must use the HTTPS protocol.

  • SLS_PROJECT: The name of the Simple Log Service project. For more information, see Manage a project.

  • SLS_ACCESS_KEY_ID: The AccessKey ID of your Alibaba Cloud account. We recommend that you use the AccessKey pair of a RAM user that has the permissions to query logs in the Logstore. You can use the permission assistant feature to grant the query permissions to the RAM user. For more information, see Configure the permission assistant feature. For information about how to obtain an AccessKey pair, see AccessKey pair.

  • SLS_ACCESS_KEY_SECRET: The AccessKey secret of your Alibaba Cloud account. We recommend that you use the AccessKey pair of a RAM user that has the permissions to query logs in the Logstore. You can use the permission assistant feature to grant the query permissions to the RAM user. For more information, see Configure the permission assistant feature. For information about how to obtain an AccessKey pair, see AccessKey pair.

After you configure the settings, run the following command to check whether the proxy is deployed. If you use a public IP address, add port 9201 to the security group rule of the server. For more information, see Add a security group rule.

curl http://${IP address of the server on which the proxy is deployed}:9201

If the output is a JSON response that contains the security_exception field, the proxy is deployed.

{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

Step 3: Deploy Kibana

The following sample code provides an example of how to deploy Kibana. In this example, Kibana 7.17.3 is used.

sudo docker pull  sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kibana:7.17.3

sudo docker run -d --name kibana \
            -e ELASTICSEARCH_HOSTS=http://${IP address of the server on which the proxy is deployed}:9201 \
            -e ELASTICSEARCH_USERNAME=elastic \
            -e ELASTICSEARCH_PASSWORD=passwd \
            -e XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED=true \
            -p 5601:5601 \
            sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kibana:7.17.3

The following list describes the parameters:

  • ELASTICSEARCH_HOSTS: The URL that is used to access the proxy. Format: http://${IP address of the server on which the proxy is deployed}:9201.

  • ELASTICSEARCH_USERNAME: The username that is used to log on to the Kibana console. Set the value to the Elasticsearch username that you specified when you deployed the Elasticsearch cluster.

  • ELASTICSEARCH_PASSWORD: The password that is used to log on to the Kibana console. Set the value to the Elasticsearch password that you specified when you deployed the Elasticsearch cluster.
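
After Kibana starts, you can check its status over HTTP before you open the browser. The following command queries the Kibana status API; replace passwd with the password that you specified when you deployed the Elasticsearch cluster. Once Kibana is ready, the response should report an overall status of green.

curl -u elastic:passwd http://${IP address of Kibana}:5601/api/status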

After the deployment is complete, enter http://${IP address of Kibana}:5601 in the browser to go to the logon page of Kibana. Then, enter the username and password that you specified for the Elasticsearch cluster in Step 1.

Important

You must add port 5601 to the security group rule of the server. For more information, see Add a security group rule.


Step 2: Access Kibana

  1. Configure index patterns.

    1. In the left-side navigation pane, choose Management > Stack Management.

    2. In the left-side navigation pane, choose Kibana > Index Patterns.

    3. If this is the first time you create an index pattern, click create an index pattern against hidden or system indices in the dialog box.

      Note

      No data is displayed in the pattern list on the Index patterns page until you map Simple Log Service Logstores to Kibana index patterns.

    4. In the Create index pattern panel, configure the parameters. The following table describes the parameters.


      • Name: The name of the index pattern. Format: ${Simple Log Service project}.${Logstore name}.

        Important

        Only exact match is supported. You must enter a complete name.

        For example, if the project name is etl-guangzhou and the Logstore name is es_test22, the name of the index pattern is etl-guangzhou.es_test22.

      • Timestamp field: The timestamp field. Set the value to @timestamp.

    5. Click Create index pattern.

  2. Query and analyze data.

    1. In the left-side navigation pane, choose Analytics > Discover.

      Important

      If you use an Elasticsearch-compatible API to analyze Simple Log Service data in Kibana, you can use only the Discover and Dashboard modules.


    2. In the upper-left corner of the page that appears, select the index that you want to manage. In the upper-right corner of the page that appears, select a time range to query log data.


FAQ

Why am I unable to access Kibana after I deploy Kibana by using Docker Compose?

  1. In the sls-kibana directory, run the docker compose ps command to check the status of the containers. Make sure that all containers are in the UP state.


  2. If all containers are in the UP state but you still cannot access Kibana, view the startup logs of each container to identify errors.

    docker logs sls-kibana_es_1 # View the startup logs of the Elasticsearch cluster.
    docker logs sls-kibana_kproxy_1 # View the startup logs of KProxy.
    docker logs sls-kibana_kibana_1 # View the startup logs of Kibana.
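
    Container names can differ based on your Docker Compose version. For example, Docker Compose V2 names containers with hyphens, such as sls-kibana-es-1. To avoid the naming difference, you can reference the service names instead by running the following commands from the sls-kibana directory:

    docker compose logs es     # View the startup logs of the Elasticsearch cluster.
    docker compose logs kproxy # View the startup logs of KProxy.
    docker compose logs kibana # View the startup logs of Kibana.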

Why am I unable to access Kibana after I deploy Kibana by using Helm?

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane of the page that appears, choose Workloads > Stateful.

  3. In the upper part of the page that appears, select sls-kibana as the namespace. Check whether the Elasticsearch cluster, Kibana, and KProxy are started. For information about how to view and change the status of stateful workloads and redeploy applications in batches, see Batch Redeploy.

How do I uninstall Helm?

helm uninstall sls-kibana --namespace sls-kibana
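
If you also want to remove the namespace that you created during deployment, you can additionally run the following command. Note that this deletes all remaining resources in the namespace:

kubectl delete namespace sls-kibana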