
Simple Log Service:Connect Simple Log Service to Kibana

Last Updated: Feb 15, 2025

If you have visualized Elasticsearch logs in Kibana and want to migrate the logs from Elasticsearch to Simple Log Service, you can use the Elasticsearch-compatible API of Simple Log Service without modifying your business code.

Important

Alibaba Cloud has proprietary rights to the information in this topic. This topic describes the capabilities of Alibaba Cloud to interact with third-party services. The names of third-party companies and services may be referenced.

Prerequisites

A RAM user is created, and an AccessKey pair is created for the RAM user. The RAM user is granted the permissions to query logs in the required Logstores. For more information, see Grant permissions to a RAM user.

Background information

Kibana is an Elasticsearch-based data visualization and exploration tool. You can use Kibana to query, analyze, and visualize data in Elasticsearch. If you use Kibana to query logs and build visualized reports and want to migrate your data to Simple Log Service, you can use the Elasticsearch-compatible API of Simple Log Service. This way, you can continue to query and analyze Simple Log Service data in Kibana.

How it works

You must deploy Kibana, an Elasticsearch cluster, and a proxy in your client environment.

  • Kibana is used to query, analyze, and visualize data.

  • Elasticsearch is used to store the metadata of Kibana, such as Kibana configurations. The amount of metadata is small.

    The metadata of Kibana must be updated frequently, but Simple Log Service does not support data updates. Therefore, you must deploy an Elasticsearch cluster to store the Kibana metadata.

  • A proxy is used to distinguish between requests for Kibana metadata and requests to the Elasticsearch-compatible API, and to route the API requests of Kibana accordingly, as illustrated in the sketch that follows.

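A sketch of the routing behavior; the index names are hypothetical, and the ${project}.${logstore} index naming scheme is described in the "Manually create index patterns (optional)" section of this topic:

    # A rough illustration of the proxy routing rules (not actual proxy code):
    #
    #   GET /.kibana*/_search            -> local Elasticsearch cluster (Kibana metadata)
    #   GET /etl-dev.access_log/_search  -> Elasticsearch-compatible API of Simple Log Service
    #                                       (data in index patterns named ${project}.${logstore})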

Step 1: Deploy an Elasticsearch cluster, Kibana, and a proxy

Important

We recommend that you use a server that has at least 8 GB of memory.

Use Docker Compose

  1. Run the following commands on a server to create a directory named sls-kibana and a subdirectory named data in the sls-kibana directory. Then, modify the permissions on the data subdirectory so that the Elasticsearch container has the read, write, and execute permissions on the subdirectory.

    mkdir sls-kibana
    cd sls-kibana
    mkdir data
    chmod 777 data
  2. Create a file named .env in the sls-kibana directory. The following sample code shows the content of the file. Change the parameter values based on your business requirements.

    ES_PASSWORD=aStrongPassword # Change the value based on your business requirements.

    SLS_ENDPOINT=cn-huhehaote.log.aliyuncs.com
    SLS_PROJECT=etl-dev-7494ab****
    SLS_ACCESS_KEY_ID=xxx
    SLS_ACCESS_KEY_SECRET=xxx
    #SLS_PROJECT_ALIAS=etl-dev # Optional. If the name of the project specified by SLS_PROJECT is too long, configure this parameter to specify an alias for the project.
    #SLS_LOGSTORE_FILTERS="access*" # Optional. The Logstore filters that are used to automatically create index patterns. Separate multiple filters with commas (,) and enclose the value in double quotation marks (""). Example: "access*,error*".
    #KIBANA_SPACE=default # Optional. The Kibana space in which the index patterns are created. If the space does not exist, it is automatically created.

    # You can specify multiple Simple Log Service projects. If you specify more than six projects, you must also reference the additional projects in the docker-compose.yaml file.
    #SLS_ENDPOINT2=cn-huhehaote.log.aliyuncs.com
    #SLS_PROJECT2=etl-dev2
    #SLS_ACCESS_KEY_ID2=xxx
    #SLS_ACCESS_KEY_SECRET2=xxx
    #SLS_PROJECT_ALIAS2=etl-dev2 # Optional. Same as SLS_PROJECT_ALIAS, but for the second project.
    #SLS_LOGSTORE_FILTERS2="test*log" # Optional. Same as SLS_LOGSTORE_FILTERS, but for the second project.
    #KIBANA_SPACE2=default # Optional. Same as KIBANA_SPACE, but for the second project.

    The following list describes the parameters:

    • ES_PASSWORD: The password of the Elasticsearch cluster. You can use the password to log on to the Kibana console.

    • SLS_ENDPOINT: The endpoint of the Simple Log Service project. For more information, see Manage a project.

    • SLS_PROJECT: The name of the Simple Log Service project. For more information, see Manage a project.

    • SLS_ACCESS_KEY_ID: The AccessKey ID that you created for the RAM user in the "Prerequisites" section of this topic. The RAM user must have the permissions to query logs in the Logstore. For more information, see Grant permissions to a RAM user.

    • SLS_ACCESS_KEY_SECRET: The AccessKey secret that you created for the RAM user in the "Prerequisites" section of this topic. The RAM user must have the permissions to query logs in the Logstore. For more information, see Grant permissions to a RAM user.

    • SLS_PROJECT_ALIAS: Optional. If the name of the project specified by SLS_PROJECT is too long, configure this parameter to specify an alias for the project.

    • SLS_LOGSTORE_FILTERS: Optional. The Logstore filters that are used to automatically create index patterns. Separate multiple filters with commas (,) and enclose the value in double quotation marks (""). Example: "access*,error*".

    • KIBANA_SPACE: Optional. The Kibana space in which the index patterns are created. If the space does not exist, it is automatically created.

  3. Create a file named docker-compose.yaml in the sls-kibana directory. The following sample code shows the content of the file:

    services:
      es:
        image: sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/elasticsearch:7.17.26
        environment:
          - "discovery.type=single-node"
          - "ES_JAVA_OPTS=-Xms2G -Xmx2G"
          - ELASTIC_USERNAME=elastic
          - ELASTIC_PASSWORD=${ES_PASSWORD}
          - xpack.security.enabled=true
        volumes:
          # Make sure that the mkdir data && chmod 777 data command is run to create a subdirectory named data and grant the read, write, and execution permissions on the subdirectory to all users.
          - ./data:/usr/share/elasticsearch/data
      kproxy:
        image: sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.1.2
        depends_on:
          - es
        environment:
          - ES_ENDPOINT=es:9200
    
          # The first Simple Log Service project
          - SLS_ENDPOINT=${SLS_ENDPOINT}
          - SLS_PROJECT=${SLS_PROJECT}
          - SLS_LOGSTORE_FILTERS=${SLS_LOGSTORE_FILTERS}
          - KIBANA_SPACE=${KIBANA_SPACE}
          - SLS_PROJECT_ALIAS=${SLS_PROJECT_ALIAS}
          - SLS_ACCESS_KEY_ID=${SLS_ACCESS_KEY_ID}
          - SLS_ACCESS_KEY_SECRET=${SLS_ACCESS_KEY_SECRET}
    
          # The second Simple Log Service project
          - SLS_ENDPOINT2=${SLS_ENDPOINT2}
          - SLS_PROJECT2=${SLS_PROJECT2}
          - SLS_LOGSTORE_FILTERS2=${SLS_LOGSTORE_FILTERS2}
          - KIBANA_SPACE2=${KIBANA_SPACE2}
          - SLS_PROJECT_ALIAS2=${SLS_PROJECT_ALIAS2}
          - SLS_ACCESS_KEY_ID2=${SLS_ACCESS_KEY_ID2}
          - SLS_ACCESS_KEY_SECRET2=${SLS_ACCESS_KEY_SECRET2}
    
          - SLS_ENDPOINT3=${SLS_ENDPOINT3}
          - SLS_PROJECT3=${SLS_PROJECT3}
          - SLS_LOGSTORE_FILTERS3=${SLS_LOGSTORE_FILTERS3}
          - KIBANA_SPACE3=${KIBANA_SPACE3}
          - SLS_PROJECT_ALIAS3=${SLS_PROJECT_ALIAS3}
          - SLS_ACCESS_KEY_ID3=${SLS_ACCESS_KEY_ID3}
          - SLS_ACCESS_KEY_SECRET3=${SLS_ACCESS_KEY_SECRET3}
    
          - SLS_ENDPOINT4=${SLS_ENDPOINT4}
          - SLS_PROJECT4=${SLS_PROJECT4}
          - SLS_LOGSTORE_FILTERS4=${SLS_LOGSTORE_FILTERS4}
          - KIBANA_SPACE4=${KIBANA_SPACE4}
          - SLS_PROJECT_ALIAS4=${SLS_PROJECT_ALIAS4}
          - SLS_ACCESS_KEY_ID4=${SLS_ACCESS_KEY_ID4}
          - SLS_ACCESS_KEY_SECRET4=${SLS_ACCESS_KEY_SECRET4}
    
          - SLS_ENDPOINT5=${SLS_ENDPOINT5}
          - SLS_PROJECT5=${SLS_PROJECT5}
          - SLS_LOGSTORE_FILTERS5=${SLS_LOGSTORE_FILTERS5}
          - KIBANA_SPACE5=${KIBANA_SPACE5}
          - SLS_PROJECT_ALIAS5=${SLS_PROJECT_ALIAS5}
          - SLS_ACCESS_KEY_ID5=${SLS_ACCESS_KEY_ID5}
          - SLS_ACCESS_KEY_SECRET5=${SLS_ACCESS_KEY_SECRET5}
    
          - SLS_ENDPOINT6=${SLS_ENDPOINT6}
          - SLS_PROJECT6=${SLS_PROJECT6}
          - SLS_LOGSTORE_FILTERS6=${SLS_LOGSTORE_FILTERS6}
          - KIBANA_SPACE6=${KIBANA_SPACE6}
          - SLS_PROJECT_ALIAS6=${SLS_PROJECT_ALIAS6}
          - SLS_ACCESS_KEY_ID6=${SLS_ACCESS_KEY_ID6}
          - SLS_ACCESS_KEY_SECRET6=${SLS_ACCESS_KEY_SECRET6}
          # You can specify up to 255 Simple Log Service projects.
      kibana:
        image: sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kibana:7.17.26
        depends_on:
          - kproxy
        environment:
          - ELASTICSEARCH_HOSTS=http://kproxy:9201
          - ELASTICSEARCH_USERNAME=elastic
          - ELASTICSEARCH_PASSWORD=${ES_PASSWORD}
          - XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED=true
        ports:
          - "5601:5601"
    
      # The following service component is optional. The component is used to automatically create Kibana index patterns.
      index-patterner:
        image: sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.1.2
        command: /usr/bin/python3 -u /workspace/create_index_pattern.py
        depends_on:
          - kibana
        environment:
          - KPROXY_ENDPOINT=http://kproxy:9201
          - KIBANA_ENDPOINT=http://kibana:5601
          - KIBANA_USER=elastic
          - KIBANA_PASSWORD=${ES_PASSWORD}
    
          - SLS_PROJECT_ALIAS=${SLS_PROJECT_ALIAS}
          - SLS_ACCESS_KEY_ID=${SLS_ACCESS_KEY_ID}
          - SLS_ACCESS_KEY_SECRET=${SLS_ACCESS_KEY_SECRET}
    
          - SLS_PROJECT_ALIAS2=${SLS_PROJECT_ALIAS2}
          - SLS_ACCESS_KEY_ID2=${SLS_ACCESS_KEY_ID2}
          - SLS_ACCESS_KEY_SECRET2=${SLS_ACCESS_KEY_SECRET2}
    
          - SLS_PROJECT_ALIAS3=${SLS_PROJECT_ALIAS3}
          - SLS_ACCESS_KEY_ID3=${SLS_ACCESS_KEY_ID3}
          - SLS_ACCESS_KEY_SECRET3=${SLS_ACCESS_KEY_SECRET3}
    
          - SLS_PROJECT_ALIAS4=${SLS_PROJECT_ALIAS4}
          - SLS_ACCESS_KEY_ID4=${SLS_ACCESS_KEY_ID4}
          - SLS_ACCESS_KEY_SECRET4=${SLS_ACCESS_KEY_SECRET4}
    
          - SLS_PROJECT_ALIAS5=${SLS_PROJECT_ALIAS5}
          - SLS_ACCESS_KEY_ID5=${SLS_ACCESS_KEY_ID5}
          - SLS_ACCESS_KEY_SECRET5=${SLS_ACCESS_KEY_SECRET5}
    
          - SLS_PROJECT_ALIAS6=${SLS_PROJECT_ALIAS6}
          - SLS_ACCESS_KEY_ID6=${SLS_ACCESS_KEY_ID6}
          - SLS_ACCESS_KEY_SECRET6=${SLS_ACCESS_KEY_SECRET6}
    
          # You can specify up to 255 Simple Log Service projects.
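    Optionally, before you start the services, you can verify that the values in the .env file are substituted into the Compose configuration as expected:

        docker compose config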
  4. Run the following command to start the service:

    docker compose up -d
  5. Run the following command to query the status of the service:

    docker compose ps
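    If the deployment succeeds, the es, kproxy, and kibana containers are in the Up state. The output is similar to the following; the exact columns and container names depend on your Docker Compose version:

        NAME                  ...   STATUS
        sls-kibana-es-1       ...   Up
        sls-kibana-kproxy-1   ...   Up
        sls-kibana-kibana-1   ...   Up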
  6. After you complete the configuration, enter http://${IP address of Kibana}:5601 in a browser to go to the logon page of Kibana. Then, enter the username and password that you specified for the Elasticsearch cluster.

    Important

    You must add port 5601 to the security group rule of the server. For more information, see Add a security group rule.

    http://${IP address of Kibana}:5601

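    If the logon page does not load, you can check whether Kibana is ready from the server itself; a quick check that assumes the elastic user and the ES_PASSWORD value from the .env file:

        curl -u elastic:aStrongPassword http://localhost:5601/api/status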

Use Helm

Prerequisites

An Ingress controller, such as the NGINX Ingress Controller, ALB Ingress Controller, or MSE Ingress Controller, is installed in a Container Service for Kubernetes (ACK) cluster. For more information, see Manage components.

Procedure

  1. Create a namespace:

    # Create a namespace.
    kubectl create namespace sls-kibana
  2. Create and modify the values.yaml file. The following sample code shows the content of the file. Change the parameter values based on your business requirements.

    kibana:
      ingressClass: nginx # Change the value based on the Ingress controller that you installed.
      # To obtain the value of this parameter, perform the following steps: Log on to the ACK console. In the left-side navigation pane, click Clusters. On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane, choose Operations > Add-ons. On the Add-ons page, enter Ingress to search for the installed Ingress controller.
      # ALB Ingress Controller: Set the value to alb.
      # MSE Ingress Controller: Set the value to mse.
      # NGINX Ingress Controller: Set the value to nginx.
      ingressDomain: # You can leave this parameter empty. If you want to access Kibana by using a domain name, configure this parameter.
      ingressPath: /kibana/ # Required. The subpath that is used to access Kibana.
      # If you configure the ingressDomain parameter, you can set the ingressPath parameter to /.
      #i18nLocale: en # The language of Kibana. Default value: en. If you want to use Chinese, you can set the value to zh-CN.
    
    elasticsearch:
      password: aStrongPass # Change the password of the Elasticsearch cluster based on your business requirements. You can use the password to log on to the Kibana console. The username that is used to access the Elasticsearch cluster is elastic.
      #diskZoneId: cn-hongkong-c # The zone where the disk used by the Elasticsearch cluster resides. If you leave this parameter empty, the system automatically selects a zone.
    
    repository:
      region: cn-hangzhou
      # The region where the image resides. If the image resides in a region in the Chinese mainland, set the value to cn-hangzhou. If the image resides in a region outside the Chinese mainland, set the value to ap-southeast-1. The image is pulled over the Internet.
    
    sls:
      - project: k8s-log-c5****** # The name of the Simple Log Service project.
        endpoint: cn-huhehaote.log.aliyuncs.com # The endpoint of the Simple Log Service project.
        accessKeyId: xxx # The AccessKey ID that is used to access Simple Log Service.
        accessKeySecret: xxx # The AccessKey secret that is used to access Simple Log Service.
        #alias: etl-logs # Optional. If the project name is too long to display in Kibana, specify an alias for the project.
        #kibanaSpace: default # Optional. The Kibana space in which the index patterns are created. If the space does not exist, it is automatically created.
        #logstoreFilters: "*" # Optional. The Logstore filters that are used to automatically create index patterns. Separate multiple filters with commas (,) and enclose the value in double quotation marks (""). Example: "access*,error*".

      # To specify another Simple Log Service project, add another list item:
      #- project: etl-dev2
      #  endpoint: cn-huhehaote.log.aliyuncs.com
      #  accessKeyId: xxx
      #  accessKeySecret: xxx
      #  alias: etl-logs2 # Optional.
      #  kibanaSpace: default # Optional.
      #  logstoreFilters: "*" # Optional.
    

    The following list describes the parameters:

    • kibana.ingressClass: The class of the Ingress controller. Specify the value based on the Ingress controller that you installed. For more information, see Manage components. Valid values:

      • ALB Ingress Controller: Set the value to alb.

      • MSE Ingress Controller: Set the value to mse.

      • NGINX Ingress Controller: Set the value to nginx.

    • kibana.ingressDomain: The domain name that is used to access Kibana. You can leave this parameter empty. If you want to access Kibana by using a domain name, configure this parameter.

    • kibana.ingressPath: The subpath that is used to access Kibana. If you configure the ingressDomain parameter, you can set this parameter to /.

    • elasticsearch.password: The password of the Elasticsearch cluster. Change the password based on your business requirements. You can use the password to log on to the Kibana console. The username that is used to access the Elasticsearch cluster is elastic.

    • repository.region: The region where the image resides. If the image resides in a region in the Chinese mainland, set the value to cn-hangzhou. If the image resides in a region outside the Chinese mainland, set the value to ap-southeast-1. The image is pulled over the Internet.

    • sls.project: The name of the Simple Log Service project. For more information, see Manage a project.

    • sls.endpoint: The endpoint of the Simple Log Service project. For more information, see Manage a project.

    • sls.accessKeyId: The AccessKey ID that you created for the RAM user in the "Prerequisites" section of this topic. The RAM user must have the permissions to query logs in the Logstore. For more information, see Grant permissions to a RAM user.

    • sls.accessKeySecret: The AccessKey secret that you created for the RAM user in the "Prerequisites" section of this topic. The RAM user must have the permissions to query logs in the Logstore. For more information, see Grant permissions to a RAM user.

    • sls.alias: Optional. If the name of the project specified by sls.project is too long to display in Kibana, configure this parameter to specify an alias for the project.

    • sls.kibanaSpace: Optional. The Kibana space in which the index patterns are created. If the space does not exist, it is automatically created.

    • sls.logstoreFilters: Optional. The Logstore filters that are used to automatically create index patterns. Separate multiple filters with commas (,) and enclose the value in double quotation marks (""). Example: "access*,error*".

  3. Run the following command to deploy Kibana by using Helm:

    helm install sls-kibana https://sls-kproxy.oss-cn-hangzhou.aliyuncs.com/sls-kibana-1.5.4 -f values.yaml --namespace sls-kibana
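    After the command succeeds, you can check the pod status and obtain the Ingress address by using standard kubectl commands:

        kubectl get pods -n sls-kibana     # All pods must reach the Running state.
        kubectl get ingress -n sls-kibana  # The ADDRESS column shows the Ingress address.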
  4. After you complete the configuration, enter http://${Ingress address}/kibana/ in a browser to go to the logon page of Kibana. Then, enter the username and password that you specified for the Elasticsearch cluster.

    http://${Ingress address}/kibana/


Use Docker

Step 1: Deploy an Elasticsearch cluster

Important

Before you can deploy an Elasticsearch cluster by using Docker, you must install and start Docker. For more information, see Install Docker.

  1. Run the following commands to deploy an Elasticsearch cluster on a server:

    sudo docker pull sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/elasticsearch:7.17.26
    
    sudo mkdir /data  # The directory in which Elasticsearch data is stored. Change the value based on your business requirements. 
    sudo chmod 777 /data # Configure permissions. 
    
    sudo docker run -d --name es -p 9200:9200 \
               -e "discovery.type=single-node" \
               -e "ES_JAVA_OPTS=-Xms2G -Xmx2G" \
               -e ELASTIC_USERNAME=elastic \
               -e ELASTIC_PASSWORD=passwd \
               -e xpack.security.enabled=true \
               -v /data:/usr/share/elasticsearch/data \
               sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/elasticsearch:7.17.26

    The following list describes the parameters:

    • ELASTIC_USERNAME: The username that is used to access the Elasticsearch cluster. Set the value to elastic.

    • ELASTIC_PASSWORD: The password that is used to access the Elasticsearch cluster. The value must be a string.

    • /data: The directory on the host in which the Elasticsearch data is stored. Change the directory based on your business requirements.

  2. After the deployment is complete, run the following command to check whether the Elasticsearch cluster is available. If you access the cluster over a public IP address, add port 9200 to the security group rules of the server. For more information, see Add a security group rule.

    curl http://${IP address of the server on which the Elasticsearch cluster is deployed}:9200

    If the output is a JSON response that contains the security_exception field, the Elasticsearch cluster is deployed. The 401 status code is expected because the request does not carry credentials.

    {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

Step 2: Deploy a proxy

A proxy can serve one or more Simple Log Service projects when Kibana accesses Simple Log Service. You must specify the project information when you deploy the proxy. Sample code:

Single project

sudo docker pull sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.1.2

sudo docker run  -d --name proxy \
            -e ES_ENDPOINT=${IP address of the server on which the Elasticsearch cluster is deployed}:9200 \
            -e SLS_ENDPOINT=https://prjA.cn-guangzhou.log.aliyuncs.com/es/ \
            -e SLS_PROJECT=prjA \
            -e SLS_ACCESS_KEY_ID=${aliyunAccessId} \
            -e SLS_ACCESS_KEY_SECRET=${aliyunAccessKey} \
            -p 9201:9201 \
            -ti sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.1.2

Multiple projects

    Important
    • You can specify up to 32 projects.

    • The following variables are related to the first project: SLS_PROJECT, SLS_ENDPOINT, SLS_ACCESS_KEY_ID, and SLS_ACCESS_KEY_SECRET. Variables that are related to the other projects must be suffixed with numbers. Examples: SLS_PROJECT2 and SLS_ENDPOINT2.

    • If the endpoint and AccessKey pair settings of a project are the same as the settings of the first project, you do not need to specify the endpoint or AccessKey pair for the project.

sudo docker pull sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.1.2

# If the AccessKey pair of the second project is the same as the pair specified by
# SLS_ACCESS_KEY_ID and SLS_ACCESS_KEY_SECRET, you can omit the SLS_ACCESS_KEY_ID2
# and SLS_ACCESS_KEY_SECRET2 variables.
sudo docker run -d --name proxy \
            -e ES_ENDPOINT=${IP address of the server on which the Elasticsearch cluster is deployed}:9200 \
            -e SLS_ENDPOINT=https://prjA.cn-guangzhou.log.aliyuncs.com/es/ \
            -e SLS_ENDPOINT2=https://prjB.cn-guangzhou.log.aliyuncs.com/es/ \
            -e SLS_PROJECT=prjA \
            -e SLS_PROJECT2=prjB \
            -e SLS_ACCESS_KEY_ID=${aliyunAccessId} \
            -e SLS_ACCESS_KEY_SECRET=${aliyunAccessKey} \
            -e SLS_ACCESS_KEY_ID2=${aliyunAccessId} \
            -e SLS_ACCESS_KEY_SECRET2=${aliyunAccessKey} \
            -p 9201:9201 \
            -ti sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.1.2
  • Example 1

    Use two projects named prjA and prjB that have the same AccessKey pair settings.

    sudo docker pull sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.1.2
    sudo docker run  -d --name proxy \
                -e ES_ENDPOINT=${IP address of the server on which the Elasticsearch cluster is deployed}:9200 \
                -e SLS_ENDPOINT=https://prjA.cn-guangzhou.log.aliyuncs.com/es/ \
                -e SLS_ENDPOINT2=https://prjB.cn-guangzhou.log.aliyuncs.com/es/ \
                -e SLS_PROJECT=prjA \
                -e SLS_PROJECT2=prjB \
                -e SLS_ACCESS_KEY_ID=${aliyunAccessId} \
                -e SLS_ACCESS_KEY_SECRET=${aliyunAccessKey} \
                -p 9201:9201 \
                -ti sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.1.2
  • Example 2

    Use three projects named prjA, prjB, and prjC. The prjA and prjC projects have the same AccessKey pair settings.

    # prjB uses its own AccessKey pair. prjC uses the same AccessKey pair as prjA
    # (the first project), so the SLS_ACCESS_KEY_ID3 and SLS_ACCESS_KEY_SECRET3
    # variables are not required.
    sudo docker run -d --name proxy \
                -e ES_ENDPOINT=${IP address of the server on which the Elasticsearch cluster is deployed}:9200 \
                -e SLS_ENDPOINT=https://prjA.cn-guangzhou.log.aliyuncs.com/es/ \
                -e SLS_ENDPOINT2=https://prjB.cn-guangzhou.log.aliyuncs.com/es/ \
                -e SLS_ENDPOINT3=https://prjC.cn-guangzhou.log.aliyuncs.com/es/ \
                -e SLS_PROJECT=prjA \
                -e SLS_PROJECT2=prjB \
                -e SLS_PROJECT3=prjC \
                -e SLS_ACCESS_KEY_ID=${aliyunAccessId} \
                -e SLS_ACCESS_KEY_SECRET=${aliyunAccessKey} \
                -e SLS_ACCESS_KEY_ID2=${aliyunAccessId2} \
                -e SLS_ACCESS_KEY_SECRET2=${aliyunAccessKey2} \
                -p 9201:9201 \
                -ti sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kproxy:2.1.2

The following list describes the parameters:

• ES_ENDPOINT: The address that is used to access the Elasticsearch cluster. Format: ${IP address of the server on which the Elasticsearch cluster is deployed}:9200.

• SLS_ENDPOINT: The URL that is used to access data. Format: https://${project}.${sls-endpoint}/es/, where ${project} specifies the name of the project and ${sls-endpoint} specifies the endpoint of the project. For more information, see Endpoints. Example: https://etl-guangzhou.cn-guangzhou.log.aliyuncs.com/es/. Important: You must use the HTTPS protocol.

• SLS_PROJECT: The name of the Simple Log Service project. For more information, see Manage a project.

• SLS_ACCESS_KEY_ID: The AccessKey ID of your Alibaba Cloud account. We recommend that you use the AccessKey pair of a RAM user who has the permissions to query logs in the Logstores. You can use the permission assistant feature to grant the query permissions to the RAM user. For more information, see Configure the permission assistant feature. For more information about how to obtain an AccessKey pair, see AccessKey pair.

• SLS_ACCESS_KEY_SECRET: The AccessKey secret of your Alibaba Cloud account. The recommendations for SLS_ACCESS_KEY_ID also apply to this parameter.

After the deployment is complete, run the following command to check whether the proxy is available. If you access the proxy over a public IP address, add port 9201 to the security group rules of the server. For more information, see Add a security group rule.

curl http://${IP address of the server on which the proxy is deployed}:9201

If the output is a JSON response that contains the security_exception field, the proxy is deployed.

{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

Step 3: Deploy Kibana

The following sample code provides an example of how to deploy Kibana. In this example, Kibana 7.17.26 is used.

sudo docker pull  sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kibana:7.17.26

sudo docker run -d --name kibana \
            -e ELASTICSEARCH_HOSTS=http://${IP address of the server on which the proxy is deployed}:9201 \
            -e ELASTICSEARCH_USERNAME=elastic \
            -e ELASTICSEARCH_PASSWORD=passwd \
            -e XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED=true \
            -p 5601:5601 \
            sls-registry.cn-hangzhou.cr.aliyuncs.com/kproxy/kibana:7.17.26

The following list describes the parameters:

• ELASTICSEARCH_HOSTS: The URL that is used to access the proxy. Format: http://${IP address of the server on which the proxy is deployed}:9201.

• ELASTICSEARCH_USERNAME: The username that is used to log on to the Kibana console. Set the value to the Elasticsearch username that you specified when you deployed the Elasticsearch cluster.

• ELASTICSEARCH_PASSWORD: The password that is used to log on to the Kibana console. Set the value to the Elasticsearch password that you specified when you deployed the Elasticsearch cluster.

After you complete the configuration, enter http://${IP address of Kibana}:5601 in a browser to go to the logon page of Kibana. Then, enter the username and password that you specified for the Elasticsearch cluster.

Important

You must add port 5601 to the security group rule of the server. For more information, see Add a security group rule.

http://${IP address of Kibana}:5601


Step 2: Access Kibana

Query and analyze data

  1. In the left-side navigation pane, choose Analytics > Discover.

    Important

    If you use the Elasticsearch-compatible API to analyze Simple Log Service data in Kibana, you can use only the Discover and Dashboard modules.


  2. In the upper-left corner of the page, select the index pattern that you want to query. In the upper-right corner, select a time range for the query.


Manually create index patterns (optional)

Important

By default, if you use Docker Compose or Helm for deployment, you do not need to manually create index patterns. If you use Docker for deployment, you must manually create index patterns.

  1. In the left-side navigation pane, choose Management > Stack Management.

  2. In the left-side navigation pane, choose Kibana > Index Patterns.

  3. The first time you create an index pattern, click create an index pattern against hidden or system indices in the dialog box that appears.

    Note

    No data is displayed in the pattern list on the Index patterns page until Simple Log Service Logstores are mapped to Kibana index patterns.

  4. In the Create index pattern panel, configure the parameters. The following list describes the parameters.

    • Name: The name of the index pattern. Format: ${Simple Log Service project}.${Logstore name}. Important: Only exact match is supported. Wildcard characters are not supported. You must enter a complete name. For example, if the project name is etl-guangzhou and the Logstore name is es_test22, you must set the name to etl-guangzhou.es_test22.

    • Timestamp field: The timestamp field. Set the value to @timestamp.

  5. Click Create index pattern.
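Alternatively, you can create the same index pattern from the command line by calling the Kibana saved objects API; a sketch that assumes the Docker deployment in this topic, the passwd password, and the example names etl-guangzhou and es_test22:

    # Create an index pattern named etl-guangzhou.es_test22 with @timestamp as the time field.
    curl -u elastic:passwd \
         -H 'kbn-xsrf: true' \
         -H 'Content-Type: application/json' \
         -d '{"attributes":{"title":"etl-guangzhou.es_test22","timeFieldName":"@timestamp"}}' \
         http://${IP address of Kibana}:5601/api/saved_objects/index-pattern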

FAQ

Why am I unable to access Kibana after I use Docker Compose for deployment?

  1. In the sls-kibana directory, run the docker compose ps command to check the status of the containers. Make sure that all containers are in the Up state.


  2. If all containers are in the Up state but you still cannot access Kibana, view the startup logs of each container to locate errors:

    docker logs sls-kibana_es_1 # View the startup logs of the Elasticsearch cluster.
    docker logs sls-kibana_kproxy_1 # View the startup logs of KProxy.
    docker logs sls-kibana_kibana_1 # View the startup logs of Kibana.
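    Note: With Docker Compose v2, container names use hyphens instead of underscores (for example, sls-kibana-es-1). Run docker ps to check the actual names, or query the logs by service name instead:

        docker compose logs es     # The startup logs of the Elasticsearch cluster.
        docker compose logs kproxy # The startup logs of KProxy.
        docker compose logs kibana # The startup logs of Kibana.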

Why am I unable to access Kibana after I use Helm for deployment?

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane of the page that appears, choose Workloads > Stateful.

  3. In the upper part of the page that appears, select sls-kibana as the namespace. Check whether the Elasticsearch cluster, Kibana, and Kproxy are started. For more information about how to view and change the status of stateful workloads and redeploy applications in batches, see Use a StatefulSet to create a stateful application.

How do I uninstall Helm?

helm uninstall sls-kibana --namespace sls-kibana
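If you no longer need the sls-kibana namespace, you can also delete it. This command removes all remaining resources in the namespace:

    kubectl delete namespace sls-kibana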

How do I display high-precision timestamps in Kibana?

  1. To ensure that logs that have high-precision time values are collected to Simple Log Service, configure timestamps that are accurate to the nanosecond. For more information, see Collect logs whose timestamps are accurate to the nanosecond.

  2. Then, create a field index of the long data type for the __time_ns_part__ field. Specific Kibana queries may be converted into SQL statements for execution. The field index ensures that high-precision time values can be included in the SQL results.

How do I update a Helm chart?

The process of updating a Helm chart is similar to the process of installing one. You only need to change the helm install command to the helm upgrade command. You can reuse the values.yaml file that you used for installation.

helm upgrade sls-kibana https://sls-kproxy.oss-cn-hangzhou.aliyuncs.com/sls-kibana-1.5.4 -f values.yaml --namespace sls-kibana