
Lindorm:Java High Level REST Client

Last Updated:Jun 26, 2024

Java High Level REST Client is a high-level REST client provided by Elasticsearch that offers easier-to-use APIs. LindormSearch is compatible with the features of Elasticsearch 7.10 and earlier versions. To perform complex queries and analysis or use the advanced features provided by Elasticsearch, you can use Java High Level REST Client to connect to LindormSearch and easily manage search indexes and documents.

Prerequisites

  • Java Development Kit (JDK) V1.8 or later is installed.

  • LindormSearch is activated for your Lindorm instance. For more information, see Activate LindormSearch.

  • The IP address of your client is added to the whitelist of the Lindorm instance. For more information, see Configure whitelists.

Procedure

  1. Install Java High Level REST Client. For example, add the following dependencies to the pom.xml file of your Maven project:

    <dependency>
        <groupId>org.elasticsearch.client</groupId>
        <artifactId>elasticsearch-rest-high-level-client</artifactId>
        <version>7.10.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.20.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-api</artifactId>
        <version>2.20.0</version>
    </dependency>
    Important

    The Java High Level REST Client is forward compatible. For example, Java High Level REST Client 6.7.0 can communicate with clusters that run Elasticsearch 6.7.0 and later versions. Because LindormSearch is compatible with Elasticsearch 7.10 and earlier versions, we recommend that you use Java High Level REST Client 7.10.0 or an earlier version to connect to LindormSearch.

  2. Configure connection parameters and use the RestClient.builder() method to create a RestHighLevelClient object.

    // Specify the LindormSearch endpoint for Elasticsearch.
    String search_url = "ld-t4n5668xk31ui****-proxy-search-public.lindorm.rds.aliyuncs.com";
    int search_port = 30070;
    
    // Specify the username and password used to connect to LindormSearch.
    String username = "user";
    String password = "test";
    final CredentialsProvider credentials_provider = new BasicCredentialsProvider();
    credentials_provider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(username, password));
    
    
    RestHighLevelClient highClient = new RestHighLevelClient(
      RestClient.builder(new HttpHost(search_url, search_port, "http")).setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
          return httpClientBuilder.setDefaultCredentialsProvider(credentials_provider);
        }
      })
    );

    Parameters

    • search_url: The LindormSearch endpoint for Elasticsearch. For more information about how to obtain the endpoint, see LindormSearch endpoint for Elasticsearch.

      Important

      • If your application is deployed on an ECS instance, we recommend that you use a VPC to connect to the Lindorm instance for higher security and lower network latency.

      • If your application is deployed on a local server and needs to connect to the Lindorm instance over the Internet, enable the public endpoint of the instance in the Lindorm console: In the left-side navigation pane, click Database Connections. On the page that appears, click the Search Engine tab. Then, click Enable Public Endpoint in the upper-right corner.

      • If you access the Lindorm instance over a VPC, set search_url to the LindormSearch VPC endpoint for Elasticsearch. If you access the instance over the Internet, set search_url to the LindormSearch Internet endpoint for Elasticsearch.

    • search_port: The port used to access the LindormSearch endpoint for Elasticsearch. The value is fixed to 30070.

    • username and password: The username and password used to connect to LindormSearch. To obtain the default values, click Database Connections in the left-side navigation pane of the Lindorm console. On the page that appears, click the Search Engine tab. The username and password are displayed on this tab.
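    As background on the credentials above: HTTP basic authentication, which the credentials provider configures, simply sends an Authorization header containing the Base64-encoded username:password pair. A minimal standard-library sketch (class name and sample values are illustrative only):

    ```java
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class BasicAuthHeaderDemo {
        // Build the value of the HTTP Authorization header that basic
        // authentication sends: "Basic " followed by base64("username:password").
        static String basicAuthHeader(String username, String password) {
            String token = Base64.getEncoder()
                    .encodeToString((username + ":" + password).getBytes(StandardCharsets.UTF_8));
            return "Basic " + token;
        }

        public static void main(String[] args) {
            // The placeholder credentials used in the sample code.
            System.out.println(basicAuthHeader("user", "test")); // prints "Basic dXNlcjp0ZXN0"
        }
    }
    ```

    Because the credentials are only encoded, not encrypted, prefer VPC access when you connect over untrusted networks.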

  3. Perform operations in LindormSearch.

    The following operations are performed by the sample code:

    • Create a search index: A search index named lindorm_index is created.

    • Write data: A single document whose ID is test is written. Then, 100,000 documents are written to the index in batches.

    • Query data: A refresh request is sent first so that the written data becomes searchable. Then, two search requests are sent: one queries all documents in the index, and the other queries the document whose ID is test.

    • Delete data: The document whose ID is test is deleted. Then, the lindorm_index index is deleted.

    try {
      String index_name = "lindorm_index";
    
      // Construct a CreateIndex request to create a search index.
      CreateIndexRequest createIndexRequest = new CreateIndexRequest(index_name);
      // Specify the settings of the index.
      Map<String, Object> settingsMap = new HashMap<>();
      settingsMap.put("index.number_of_shards", 4);
      createIndexRequest.settings(settingsMap);
      CreateIndexResponse createIndexResponse = highClient.indices().create(createIndexRequest, COMMON_OPTIONS);
      if (createIndexResponse.isAcknowledged()) {
        System.out.println("Create index [" + index_name + "] successfully.");
      }
    
      // Specify the document ID. If you do not specify an ID, the system automatically generates one for the document, which provides better write performance.
      String doc_id = "test";
      // Specify the fields in the document. Replace the fields and values in the sample code with actual ones in your business.
      Map<String, Object> jsonMap = new HashMap<>();
      jsonMap.put("field1", "value1");
      jsonMap.put("field2", "value2");
    
      // Construct a request to write a single data record to the document. Specify the document ID and the field that you want to write to the document.
      IndexRequest indexRequest = new IndexRequest(index_name);
      indexRequest.id(doc_id).source(jsonMap);
      IndexResponse indexResponse = highClient.index(indexRequest, COMMON_OPTIONS);
      System.out.println("Index document with id[" + indexResponse.getId() + "] successfully.");
    
      // Write data in a batch.
      int bulkTotal = 100000;
      AtomicLong failedBulkItemCount = new AtomicLong();
      // Create a BulkProcessor object to initiate a Bulk request.
      BulkProcessor.Builder builder = BulkProcessor.builder((request, bulkListener) -> highClient.bulkAsync(request, COMMON_OPTIONS, bulkListener),
        new BulkProcessor.Listener() {
          @Override
          public void beforeBulk(long executionId, BulkRequest request) {}
    
          @Override
          public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
            // You can obtain the response of each request in the Bulk request. The sample code counts the number of failed Bulk items based on the response.
            for (BulkItemResponse bulkItemResponse : response) {
              if (bulkItemResponse.isFailed()) {
                failedBulkItemCount.incrementAndGet();
              }
            }
          }
    
          @Override
          public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
            // If a failure is captured here, none of the requests in the Bulk request were executed.
            if (null != failure) {
              failedBulkItemCount.addAndGet(request.numberOfActions());
            }
          }
        });
      // Specify the maximum number of concurrent Bulk requests. Default value: 1.
      builder.setConcurrentRequests(10);
      // Specify the threshold based on which the BulkProcessor object sends the Bulk request. You can specify a time interval, the number of operations, or the size of requests as the threshold.
      builder.setFlushInterval(TimeValue.timeValueSeconds(5));
      builder.setBulkActions(5000);
      builder.setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB));
      BulkProcessor bulkProcessor = builder.build();
      Random random = new Random();
      for (int i = 0; i < bulkTotal; i++) {
        // Replace the fields and values in the sample code with actual ones in your business.
        Map<String, Object> map = new HashMap<>();
        map.put("field1", random.nextInt() + "");
        map.put("field2", random.nextInt() + "");
        IndexRequest bulkItemRequest = new IndexRequest(index_name);
        bulkItemRequest.source(map);
        // Add operations to the BulkProcessor object.
        bulkProcessor.add(bulkItemRequest);
      }
      // You can use the awaitClose method to wait until all operations are executed.
      bulkProcessor.awaitClose(120, TimeUnit.SECONDS);
      long failure = failedBulkItemCount.get(),
        success = bulkTotal - failure;
      System.out.println("Bulk using BulkProcessor finished with [" + success + "] requests succeeded, [" + failure + "] requests failed.");
    
      // Construct a refresh request to make the written data searchable.
      RefreshRequest refreshRequest = new RefreshRequest(index_name);
      RefreshResponse refreshResponse = highClient.indices().refresh(refreshRequest, COMMON_OPTIONS);
      System.out.println("Refresh on index [" + index_name + "] successfully.");
    
      // Construct a search request to query all data.
      SearchRequest searchRequest = new SearchRequest(index_name);
      SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
      QueryBuilder queryMatchAllBuilder = new MatchAllQueryBuilder();
      searchSourceBuilder.query(queryMatchAllBuilder);
      searchRequest.source(searchSourceBuilder);
      SearchResponse searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
      long totalHit = searchResponse.getHits().getTotalHits().value;
      System.out.println("Search query match all hits [" + totalHit + "] in total.");
    
      // Construct a search request to query data based on IDs.
      QueryBuilder queryByIdBuilder = new MatchQueryBuilder("_id", doc_id);
      searchSourceBuilder.query(queryByIdBuilder);
      searchRequest.source(searchSourceBuilder);
      searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
      for (SearchHit searchHit : searchResponse.getHits()) {
        System.out.println("Search query by id response [" + searchHit.getSourceAsString() + "]");
      }
    
      // Construct a delete request to delete a single document with the specified ID.
      DeleteRequest deleteRequest = new DeleteRequest(index_name);
      deleteRequest.id(doc_id);
      DeleteResponse deleteResponse = highClient.delete(deleteRequest, COMMON_OPTIONS);
      System.out.println("Delete document with id [" + deleteResponse.getId() + "] successfully.");
    
      // Construct a DeleteIndex request to delete the index.
      DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(index_name);
      AcknowledgedResponse deleteIndexResponse = highClient.indices().delete(deleteIndexRequest, COMMON_OPTIONS);
      if (deleteIndexResponse.isAcknowledged()) {
        System.out.println("Delete index [" + index_name + "] successfully.");
      }
    
      highClient.close();
    } catch (Exception exception) {
      // Handle exceptions.
      System.out.println("msg " + exception);
    }
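    The setBulkActions, setBulkSize, and setFlushInterval settings in the code above are alternative flush triggers: the BulkProcessor sends its buffered operations as soon as any one threshold is reached. The following standard-library sketch (class and logic are illustrative, not the client's implementation) shows the count-or-size decision for the thresholds used in the sample:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class FlushPolicyDemo {
        // Illustrative thresholds, mirroring the values in the sample:
        // flush after 5000 actions or 5 MB of pending data, whichever comes first.
        static final int MAX_ACTIONS = 5000;
        static final long MAX_BYTES = 5L * 1024 * 1024;

        private final List<byte[]> pending = new ArrayList<>();
        private long pendingBytes = 0;
        private int flushes = 0;

        void add(byte[] doc) {
            pending.add(doc);
            pendingBytes += doc.length;
            // Flush when either the action-count or the byte-size limit is hit.
            if (pending.size() >= MAX_ACTIONS || pendingBytes >= MAX_BYTES) {
                flush();
            }
        }

        void flush() {
            pending.clear();
            pendingBytes = 0;
            flushes++;
        }

        int flushes() { return flushes; }

        public static void main(String[] args) {
            FlushPolicyDemo p = new FlushPolicyDemo();
            for (int i = 0; i < 12000; i++) {
                p.add(new byte[64]); // small 64-byte "documents"
            }
            // 12000 small documents hit the action-count limit twice.
            System.out.println("flushes=" + p.flushes()); // prints "flushes=2"
        }
    }
    ```

    In the real client, a timer additionally flushes the buffer every flush interval (5 seconds in the sample), even when neither of the other thresholds has been reached.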

Sample code

The following code provides a complete example of how to use the Java High Level REST Client to connect to LindormSearch and perform operations in LindormSearch:

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;
import org.elasticsearch.action.admin.indices.refresh.RefreshResponse;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.client.HttpAsyncResponseConsumerFactory;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class RestHClientTest {
  private static final RequestOptions COMMON_OPTIONS;
  static {
    // Create request options. In the sample code, the maximum response buffer size is set to 30 MB. The default is 100 MB.
    RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
    builder.setHttpAsyncResponseConsumerFactory(
      new HttpAsyncResponseConsumerFactory
        .HeapBufferedResponseConsumerFactory(30 * 1024 * 1024));
    COMMON_OPTIONS = builder.build();
  }

  public static void main(String[] args) {
    // Specify the LindormSearch endpoint for Elasticsearch.
    String search_url = "ld-t4n5668xk31ui****-proxy-search-public.lindorm.rds.aliyuncs.com";
    int search_port = 30070;

    // Specify the username and password used to connect to LindormSearch. You can obtain them in the Lindorm console.
    String username = "user";
    String password = "test";

    final CredentialsProvider credentials_provider = new BasicCredentialsProvider();
    credentials_provider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(username, password));
    RestHighLevelClient highClient = new RestHighLevelClient(
      RestClient.builder(new HttpHost(search_url, search_port, "http")).setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
          return httpClientBuilder.setDefaultCredentialsProvider(credentials_provider);
        }
      })
    );

    try {
      String index_name = "lindorm_index";

      // Construct a CreateIndex request to create a search index.
      CreateIndexRequest createIndexRequest = new CreateIndexRequest(index_name);
      // Specify the settings of the index.
      Map<String, Object> settingsMap = new HashMap<>();
      settingsMap.put("index.number_of_shards", 4);
      createIndexRequest.settings(settingsMap);
      CreateIndexResponse createIndexResponse = highClient.indices().create(createIndexRequest, COMMON_OPTIONS);
      if (createIndexResponse.isAcknowledged()) {
        System.out.println("Create index [" + index_name + "] successfully.");
      }

      // Specify the document ID. If you do not specify an ID, the system automatically generates one for the document, which provides better write performance.
      String doc_id = "test";
      // Specify the fields in the document. Replace the fields and values in the sample code with actual ones in your business.
      Map<String, Object> jsonMap = new HashMap<>();
      jsonMap.put("field1", "value1");
      jsonMap.put("field2", "value2");

      // Construct a request to write a single data record to the document. Specify the document ID and the field that you want to write to the document.
      IndexRequest indexRequest = new IndexRequest(index_name);
      indexRequest.id(doc_id).source(jsonMap);
      IndexResponse indexResponse = highClient.index(indexRequest, COMMON_OPTIONS);
      System.out.println("Index document with id[" + indexResponse.getId() + "] successfully.");

      // Write data in a batch.
      int bulkTotal = 100000;
      AtomicLong failedBulkItemCount = new AtomicLong();
      // Create a BulkProcessor object to initiate a Bulk request.
      BulkProcessor.Builder builder = BulkProcessor.builder((request, bulkListener) -> highClient.bulkAsync(request, COMMON_OPTIONS, bulkListener),
        new BulkProcessor.Listener() {
          @Override
          public void beforeBulk(long executionId, BulkRequest request) {}

          @Override
          public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
            // You can obtain the response of each request in the Bulk request. The sample code counts the number of failed Bulk items based on the response.
            for (BulkItemResponse bulkItemResponse : response) {
              if (bulkItemResponse.isFailed()) {
                failedBulkItemCount.incrementAndGet();
              }
            }
          }

          @Override
          public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
            // If a failure is captured here, none of the requests in the Bulk request were executed.
            if (null != failure) {
              failedBulkItemCount.addAndGet(request.numberOfActions());
            }
          }
        });
      // Specify the maximum number of concurrent Bulk requests. Default value: 1.
      builder.setConcurrentRequests(10);
      // Specify the threshold based on which the BulkProcessor object sends the Bulk request. You can specify a time interval, the number of operations, or the size of requests as the threshold.
      builder.setFlushInterval(TimeValue.timeValueSeconds(5));
      builder.setBulkActions(5000);
      builder.setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB));
      BulkProcessor bulkProcessor = builder.build();
      Random random = new Random();
      for (int i = 0; i < bulkTotal; i++) {
        // Replace the fields and values in the sample code with actual ones in your business.
        Map<String, Object> map = new HashMap<>();
        map.put("field1", random.nextInt() + "");
        map.put("field2", random.nextInt() + "");
        IndexRequest bulkItemRequest = new IndexRequest(index_name);
        bulkItemRequest.source(map);
        // Add operations to the BulkProcessor object.
        bulkProcessor.add(bulkItemRequest);
      }
      // You can use the awaitClose method to wait until all operations are executed.
      bulkProcessor.awaitClose(120, TimeUnit.SECONDS);
      long failure = failedBulkItemCount.get(),
        success = bulkTotal - failure;
      System.out.println("Bulk using BulkProcessor finished with [" + success + "] requests succeeded, [" + failure + "] requests failed.");

      // Construct a refresh request to make the written data searchable.
      RefreshRequest refreshRequest = new RefreshRequest(index_name);
      RefreshResponse refreshResponse = highClient.indices().refresh(refreshRequest, COMMON_OPTIONS);
      System.out.println("Refresh on index [" + index_name + "] successfully.");

      // Construct a search request to query all data.
      SearchRequest searchRequest = new SearchRequest(index_name);
      SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
      QueryBuilder queryMatchAllBuilder = new MatchAllQueryBuilder();
      searchSourceBuilder.query(queryMatchAllBuilder);
      searchRequest.source(searchSourceBuilder);
      SearchResponse searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
      long totalHit = searchResponse.getHits().getTotalHits().value;
      System.out.println("Search query match all hits [" + totalHit + "] in total.");

      // Construct a search request to query data based on IDs.
      QueryBuilder queryByIdBuilder = new MatchQueryBuilder("_id", doc_id);
      searchSourceBuilder.query(queryByIdBuilder);
      searchRequest.source(searchSourceBuilder);
      searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
      for (SearchHit searchHit : searchResponse.getHits()) {
        System.out.println("Search query by id response [" + searchHit.getSourceAsString() + "]");
      }

      // Construct a delete request to delete a single document with the specified ID.
      DeleteRequest deleteRequest = new DeleteRequest(index_name);
      deleteRequest.id(doc_id);
      DeleteResponse deleteResponse = highClient.delete(deleteRequest, COMMON_OPTIONS);
      System.out.println("Delete document with id [" + deleteResponse.getId() + "] successfully.");

      // Construct a DeleteIndex request to delete the index.
      DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(index_name);
      AcknowledgedResponse deleteIndexResponse = highClient.indices().delete(deleteIndexRequest, COMMON_OPTIONS);
      if (deleteIndexResponse.isAcknowledged()) {
        System.out.println("Delete index [" + index_name + "] successfully.");
      }

      highClient.close();
    } catch (Exception exception) {
      // Handle exceptions.
      System.out.println("msg " + exception);
    }
  }
}

The following result is returned:

Create index [lindorm_index] successfully.
Index document with id[test] successfully.
Bulk using BulkProcessor finished with [100000] requests succeeded, [0] requests failed.
Refresh on index [lindorm_index] successfully.
Search query match all hits [10000] in total.
Search query by id response [{"field1":"value1","field2":"value2"}]
Delete document with id [test] successfully.
Delete index [lindorm_index] successfully.
Note

According to the result, 100,000 data records are written to the index in batches. However, the request that queries all data reports only 10,000 hits. This is because the total hit count returned by a search request is capped at 10,000 by default.

To obtain the accurate total hit count, set the trackTotalHits attribute of the SearchSourceBuilder object to true: searchSourceBuilder.trackTotalHits(true);
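Conceptually, the cap affects only the reported count, not the stored documents. The following standard-library sketch (illustrative only; the real response additionally carries a relation flag that indicates whether the count is exact or a lower bound) models the reported total:

```java
public class TotalHitsDemo {
    // Default tracking limit applied by the search API.
    static final long DEFAULT_TRACK_LIMIT = 10_000;

    // Reported total hits: accurate when trackTotalHits is enabled,
    // otherwise capped at the default tracking limit.
    static long reportedTotal(long actualHits, boolean trackTotalHits) {
        return trackTotalHits ? actualHits : Math.min(actualHits, DEFAULT_TRACK_LIMIT);
    }

    public static void main(String[] args) {
        System.out.println(reportedTotal(100_000, false)); // prints 10000
        System.out.println(reportedTotal(100_000, true));  // prints 100000
    }
}
```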