
Lindorm: Java High Level REST Client

Last Updated: Feb 02, 2026

Java High Level REST Client is a high-level REST client provided by Elasticsearch that offers easier-to-use APIs. LindormSearch is compatible with the features of Elasticsearch 7.10 and earlier versions. To perform complex queries and analysis or to use the advanced features provided by Elasticsearch, you can use Java High Level REST Client to connect to LindormSearch and easily manage search indexes and documents.

Prerequisites

  • A Java environment with JDK 1.8 or later is installed.

  • You have enabled the search engine. For more information, see the Enablement Guide.

  • The IP address of your client is added to the whitelist of the Lindorm instance. For more information, see Configure whitelists.

Procedure

  1. Install the Java High Level REST Client. For a Maven project, add the following dependencies to the <dependencies> section of the pom.xml file.

    <dependency>
        <groupId>org.elasticsearch.client</groupId>
        <artifactId>elasticsearch-rest-high-level-client</artifactId>
        <version>7.10.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.20.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-api</artifactId>
        <version>2.20.0</version>
    </dependency>
    Important

    The Java High Level REST Client is forward compatible. For example, Java High Level REST Client 6.7.0 can communicate with Elasticsearch clusters of version 6.7.0 or later. Because LindormSearch is compatible with Elasticsearch 7.10 and earlier versions, we recommend that you use Java High Level REST Client 7.10.0 or earlier.

  2. Configure connection parameters and use the RestClient.builder() method to create a RestHighLevelClient object.

    // Specify the LindormSearch endpoint for Elasticsearch.
    String search_url = "ld-t4n5668xk31ui****-proxy-search-public.lindorm.rds.aliyuncs.com";
    int search_port = 30070;
    
    // Specify the username and password used to connect to LindormSearch. You can obtain them in the Lindorm console.
    String username = "user";
    String password = "test";
    final CredentialsProvider credentials_provider = new BasicCredentialsProvider();
    credentials_provider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(username, password));
    
    
    RestHighLevelClient highClient = new RestHighLevelClient(
      RestClient.builder(new HttpHost(search_url, search_port, "http")).setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        @Override
        public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
          return httpClientBuilder.setDefaultCredentialsProvider(credentials_provider);
        }
      })
    );

    Parameters

    • search_url: The Elasticsearch-compatible endpoint of the search engine. To obtain the endpoint, see Elasticsearch-compatible address.

      Important

      • If your application is deployed on an ECS instance, we recommend that you connect to the Lindorm instance over a VPC for higher security and lower network latency.

      • If your application runs on-premises, enable the public endpoint in the console before you connect over the public network: in the left-side navigation pane, choose Database Connections, click the Search Engine tab, and then click Enable Public Endpoint in the upper-right corner.

      • If you access the Lindorm instance over a VPC, set search_url to the VPC address of the Elasticsearch-compatible endpoint. If you access it over the public network, set search_url to the Internet address of the Elasticsearch-compatible endpoint.

    • search_port: The port used by the Elasticsearch-compatible feature of the Lindorm search engine. Set the value to 30070.

    • username and password: The username and password used to access the search engine. To obtain the default username and password, in the left-side navigation pane, choose Database Connections, click the Search Engine tab, and then view the credentials on that tab.
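    Once the parameters are configured, it can be useful to verify connectivity before issuing any requests. The sketch below (a minimal example using the same placeholder endpoint and credentials as above, which you must replace with your own) builds a client and calls the client's ping API:

    ```java
    import org.apache.http.HttpHost;
    import org.apache.http.auth.AuthScope;
    import org.apache.http.auth.UsernamePasswordCredentials;
    import org.apache.http.client.CredentialsProvider;
    import org.apache.http.impl.client.BasicCredentialsProvider;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;

    public class PingTest {
      public static void main(String[] args) throws Exception {
        // Placeholder endpoint and credentials: replace them with your own values.
        String search_url = "ld-t4n5668xk31ui****-proxy-search-public.lindorm.rds.aliyuncs.com";
        int search_port = 30070;
        CredentialsProvider provider = new BasicCredentialsProvider();
        provider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("user", "test"));
        // try-with-resources closes the client automatically.
        try (RestHighLevelClient client = new RestHighLevelClient(
            RestClient.builder(new HttpHost(search_url, search_port, "http"))
              .setHttpClientConfigCallback(b -> b.setDefaultCredentialsProvider(provider)))) {
          // ping() returns true if the cluster is reachable with the given credentials.
          boolean reachable = client.ping(RequestOptions.DEFAULT);
          System.out.println("Cluster reachable: " + reachable);
        }
      }
    }
    ```

    Running this before the main workload gives a quick pass/fail check of the endpoint, port, whitelist, and credentials in one step.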

  3. Use the search engine.

    The sample code performs the following operations:

    • Create a search index: a search index named lindorm_index is created.

    • Write data: a single data record is written to a document whose ID is test. Then, 100,000 documents are written to the index in a batch.

    • Query data: a refresh request is initiated to make the data written to LindormSearch searchable. Then, two requests are sent: one queries all documents in the index, and the other queries the document whose ID is test.

    • Delete data: the document whose ID is test is deleted, and then the lindorm_index index is deleted.

    try {
      String index_name = "lindorm_index";
    
      // Construct a CreateIndex request to create a search index.
      CreateIndexRequest createIndexRequest = new CreateIndexRequest(index_name);
      // Specify the settings of the index.
      Map<String, Object> settingsMap = new HashMap<>();
      settingsMap.put("index.number_of_shards", 4);
      createIndexRequest.settings(settingsMap);
      CreateIndexResponse createIndexResponse = highClient.indices().create(createIndexRequest, COMMON_OPTIONS);
      if (createIndexResponse.isAcknowledged()) {
        System.out.println("Create index [" + index_name + "] successfully.");
      }
    
      // Specify the document ID. If you do not specify an ID, the system automatically generates one for the document, which typically provides better write performance.
      String doc_id = "test";
      // Specify the fields in the document. Replace the fields and values in the sample code with actual ones in your business.
      Map<String, Object> jsonMap = new HashMap<>();
      jsonMap.put("field1", "value1");
      jsonMap.put("field2", "value2");
    
      // Construct a request to write a single data record to the document. Specify the document ID and the field that you want to write to the document.
      IndexRequest indexRequest = new IndexRequest(index_name);
      indexRequest.id(doc_id).source(jsonMap);
      IndexResponse indexResponse = highClient.index(indexRequest, COMMON_OPTIONS);
      System.out.println("Index document with id[" + indexResponse.getId() + "] successfully.");
    
      // Write data in a batch.
      int bulkTotal = 100000;
      AtomicLong failedBulkItemCount = new AtomicLong();
      // Create a BulkProcessor object to initiate a Bulk request.
      BulkProcessor.Builder builder = BulkProcessor.builder((request, bulkListener) -> highClient.bulkAsync(request, COMMON_OPTIONS, bulkListener),
        new BulkProcessor.Listener() {
          @Override
          public void beforeBulk(long executionId, BulkRequest request) {}
    
          @Override
          public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
            // You can obtain the response of each request in the Bulk request. The sample code counts the number of failed Bulk items based on the response.
            for (BulkItemResponse bulkItemResponse : response) {
              if (bulkItemResponse.isFailed()) {
                failedBulkItemCount.incrementAndGet();
              }
            }
          }
    
          @Override
          public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
            // If a failure is captured here, none of the requests in the bulk request were executed.
            if (null != failure) {
              failedBulkItemCount.addAndGet(request.numberOfActions());
            }
          }
        });
      // Specify the maximum number of concurrent Bulk requests. Default value: 1.
      builder.setConcurrentRequests(10);
      // Specify the threshold based on which the BulkProcessor object sends the Bulk request. You can specify a time interval, the number of operations, or the size of requests as the threshold.
      builder.setFlushInterval(TimeValue.timeValueSeconds(5));
      builder.setBulkActions(5000);
      builder.setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB));
      BulkProcessor bulkProcessor = builder.build();
      Random random = new Random();
      for (int i = 0; i < bulkTotal; i++) {
        // Replace the fields and values in the sample code with actual ones in your business.
        Map<String, Object> map = new HashMap<>();
        map.put("field1", random.nextInt() + "");
        map.put("field2", random.nextInt() + "");
        IndexRequest bulkItemRequest = new IndexRequest(index_name);
        bulkItemRequest.source(map);
        // Add operations to the BulkProcessor object.
        bulkProcessor.add(bulkItemRequest);
      }
      // You can use the awaitClose method to wait until all operations are executed.
      bulkProcessor.awaitClose(120, TimeUnit.SECONDS);
      long failure = failedBulkItemCount.get(),
        success = bulkTotal - failure;
      System.out.println("Bulk using BulkProcessor finished with [" + success + "] requests succeeded, [" + failure + "] requests failed.");
    
      // Construct a refresh request to make the written data searchable.
      RefreshRequest refreshRequest = new RefreshRequest(index_name);
      RefreshResponse refreshResponse = highClient.indices().refresh(refreshRequest, COMMON_OPTIONS);
      System.out.println("Refresh on index [" + index_name + "] successfully.");
    
      // Construct a search request to query all data.
      SearchRequest searchRequest = new SearchRequest(index_name);
      SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
      QueryBuilder queryMatchAllBuilder = new MatchAllQueryBuilder();
      searchSourceBuilder.query(queryMatchAllBuilder);
      searchRequest.source(searchSourceBuilder);
      SearchResponse searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
      long totalHit = searchResponse.getHits().getTotalHits().value;
      System.out.println("Search query match all hits [" + totalHit + "] in total.");
    
      // Construct a search request to query data based on IDs.
      QueryBuilder queryByIdBuilder = new MatchQueryBuilder("_id", doc_id);
      searchSourceBuilder.query(queryByIdBuilder);
      searchRequest.source(searchSourceBuilder);
      searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
      for (SearchHit searchHit : searchResponse.getHits()) {
        System.out.println("Search query by id response [" + searchHit.getSourceAsString() + "]");
      }
    
      // Construct a delete request to delete a single document with the specified ID.
      DeleteRequest deleteRequest = new DeleteRequest(index_name);
      deleteRequest.id(doc_id);
      DeleteResponse deleteResponse = highClient.delete(deleteRequest, COMMON_OPTIONS);
      System.out.println("Delete document with id [" + deleteResponse.getId() + "] successfully.");
    
      // Construct a DeleteIndex request to delete the index.
      DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(index_name);
      AcknowledgedResponse deleteIndexResponse = highClient.indices().delete(deleteIndexRequest, COMMON_OPTIONS);
      if (deleteIndexResponse.isAcknowledged()) {
        System.out.println("Delete index [" + index_name + "] successfully.");
      }
    
      highClient.close();
    } catch (Exception exception) {
      // Handle exceptions based on your business requirements.
      System.out.println("Caught exception: " + exception);
    }

Complete example

The following is the complete sample code:

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;
import org.elasticsearch.action.admin.indices.refresh.RefreshResponse;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.client.HttpAsyncResponseConsumerFactory;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class RestHClientTest {
  private static final RequestOptions COMMON_OPTIONS;
  static {
    // Create a request and configure parameters. In the sample code, the maximum cache size is set to 30 MB. By default, the maximum cache size is 100 MB.
    RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
    builder.setHttpAsyncResponseConsumerFactory(
      new HttpAsyncResponseConsumerFactory
        .HeapBufferedResponseConsumerFactory(30 * 1024 * 1024));
    COMMON_OPTIONS = builder.build();
  }

  public static void main(String[] args) {
    // Specify the LindormSearch endpoint for Elasticsearch.
    String search_url = "ld-t4n5668xk31ui****-proxy-search-public.lindorm.rds.aliyuncs.com";
    int search_port = 30070;

    // Specify the username and password used to connect to LindormSearch. You can obtain them in the Lindorm console.
    String username = "user";
    String password = "test";

    final CredentialsProvider credentials_provider = new BasicCredentialsProvider();
    credentials_provider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(username, password));
    RestHighLevelClient highClient = new RestHighLevelClient(
      RestClient.builder(new HttpHost(search_url, search_port, "http")).setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        @Override
        public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
          return httpClientBuilder.setDefaultCredentialsProvider(credentials_provider);
        }
      })
    );

    try {
      String index_name = "lindorm_index";

      // Construct a CreateIndex request to create a search index.
      CreateIndexRequest createIndexRequest = new CreateIndexRequest(index_name);
      // Specify the settings of the index.
      Map<String, Object> settingsMap = new HashMap<>();
      settingsMap.put("index.number_of_shards", 4);
      createIndexRequest.settings(settingsMap);
      CreateIndexResponse createIndexResponse = highClient.indices().create(createIndexRequest, COMMON_OPTIONS);
      if (createIndexResponse.isAcknowledged()) {
        System.out.println("Create index [" + index_name + "] successfully.");
      }

      // Specify the document ID. If you do not specify an ID, the system automatically generates one for the document, which typically provides better write performance.
      String doc_id = "test";
      // Specify the fields in the document. Replace the fields and values in the sample code with actual ones in your business.
      Map<String, Object> jsonMap = new HashMap<>();
      jsonMap.put("field1", "value1");
      jsonMap.put("field2", "value2");

      // Construct a request to write a single data record to the document. Specify the document ID and the field that you want to write to the document.
      IndexRequest indexRequest = new IndexRequest(index_name);
      indexRequest.id(doc_id).source(jsonMap);
      IndexResponse indexResponse = highClient.index(indexRequest, COMMON_OPTIONS);
      System.out.println("Index document with id[" + indexResponse.getId() + "] successfully.");

      // Write data in a batch.
      int bulkTotal = 100000;
      AtomicLong failedBulkItemCount = new AtomicLong();
      // Create a BulkProcessor object to initiate a Bulk request.
      BulkProcessor.Builder builder = BulkProcessor.builder((request, bulkListener) -> highClient.bulkAsync(request, COMMON_OPTIONS, bulkListener),
        new BulkProcessor.Listener() {
          @Override
          public void beforeBulk(long executionId, BulkRequest request) {}

          @Override
          public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
            // You can obtain the response of each request in the Bulk request. The sample code counts the number of failed Bulk items based on the response.
            for (BulkItemResponse bulkItemResponse : response) {
              if (bulkItemResponse.isFailed()) {
                failedBulkItemCount.incrementAndGet();
              }
            }
          }

          @Override
          public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
            // If a failure is captured here, none of the requests in the bulk request were executed.
            if (null != failure) {
              failedBulkItemCount.addAndGet(request.numberOfActions());
            }
          }
        });
      // Specify the maximum number of concurrent Bulk requests. Default value: 1.
      builder.setConcurrentRequests(10);
      // Specify the threshold based on which the BulkProcessor object sends the Bulk request. You can specify a time interval, the number of operations, or the size of requests as the threshold.
      builder.setFlushInterval(TimeValue.timeValueSeconds(5));
      builder.setBulkActions(5000);
      builder.setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB));
      BulkProcessor bulkProcessor = builder.build();
      Random random = new Random();
      for (int i = 0; i < bulkTotal; i++) {
        // Replace the fields and values in the sample code with actual ones in your business.
        Map<String, Object> map = new HashMap<>();
        map.put("field1", random.nextInt() + "");
        map.put("field2", random.nextInt() + "");
        IndexRequest bulkItemRequest = new IndexRequest(index_name);
        bulkItemRequest.source(map);
        // Add operations to the BulkProcessor object.
        bulkProcessor.add(bulkItemRequest);
      }
      // You can use the awaitClose method to wait until all operations are executed.
      bulkProcessor.awaitClose(120, TimeUnit.SECONDS);
      long failure = failedBulkItemCount.get(),
        success = bulkTotal - failure;
      System.out.println("Bulk using BulkProcessor finished with [" + success + "] requests succeeded, [" + failure + "] requests failed.");

      // Construct a refresh request to make the written data searchable.
      RefreshRequest refreshRequest = new RefreshRequest(index_name);
      RefreshResponse refreshResponse = highClient.indices().refresh(refreshRequest, COMMON_OPTIONS);
      System.out.println("Refresh on index [" + index_name + "] successfully.");

      // Construct a search request to query all data.
      SearchRequest searchRequest = new SearchRequest(index_name);
      SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
      QueryBuilder queryMatchAllBuilder = new MatchAllQueryBuilder();
      searchSourceBuilder.query(queryMatchAllBuilder);
      searchRequest.source(searchSourceBuilder);
      SearchResponse searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
      long totalHit = searchResponse.getHits().getTotalHits().value;
      System.out.println("Search query match all hits [" + totalHit + "] in total.");

      // Construct a search request to query data based on IDs.
      QueryBuilder queryByIdBuilder = new MatchQueryBuilder("_id", doc_id);
      searchSourceBuilder.query(queryByIdBuilder);
      searchRequest.source(searchSourceBuilder);
      searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
      for (SearchHit searchHit : searchResponse.getHits()) {
        System.out.println("Search query by id response [" + searchHit.getSourceAsString() + "]");
      }

      // Construct a delete request to delete a single document with the specified ID.
      DeleteRequest deleteRequest = new DeleteRequest(index_name);
      deleteRequest.id(doc_id);
      DeleteResponse deleteResponse = highClient.delete(deleteRequest, COMMON_OPTIONS);
      System.out.println("Delete document with id [" + deleteResponse.getId() + "] successfully.");

      // Construct a DeleteIndex request to delete the index.
      DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(index_name);
      AcknowledgedResponse deleteIndexResponse = highClient.indices().delete(deleteIndexRequest, COMMON_OPTIONS);
      if (deleteIndexResponse.isAcknowledged()) {
        System.out.println("Delete index [" + index_name + "] successfully.");
      }

      highClient.close();
    } catch (Exception exception) {
      // Handle exceptions based on your business requirements.
      System.out.println("Caught exception: " + exception);
    }
  }
}

The following result is returned:

Create index [lindorm_index] successfully.
Index document with id[test] successfully.
Bulk using BulkProcessor finished with [100000] requests succeeded, [0] requests failed.
Refresh on index [lindorm_index] successfully.
Search query match all hits [10000] in total.
Search query by id response [{"field1":"value1","field2":"value2"}]
Delete document with id [test] successfully.
Delete index [lindorm_index] successfully.
Note

According to the result, 100,000 data records are written to the index, but only 10,000 hits are reported for the request that queries all data. This is because a search request tracks at most 10,000 total hits by default.

To obtain the exact total number of matching records, set the trackTotalHits property of the SearchSourceBuilder object to true, for example: searchSourceBuilder.trackTotalHits(true);.
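The adjustment described in the note can be sketched as a fragment of the sample code above (searchRequest, highClient, and COMMON_OPTIONS are the objects already defined in the complete example, so this is not a standalone program):

```java
// Request an exact total hit count instead of the default 10,000 cap.
SearchSourceBuilder exactCountSourceBuilder = new SearchSourceBuilder();
exactCountSourceBuilder.query(new MatchAllQueryBuilder());
exactCountSourceBuilder.trackTotalHits(true);
searchRequest.source(exactCountSourceBuilder);
SearchResponse exactCountResponse = highClient.search(searchRequest, COMMON_OPTIONS);
// With trackTotalHits(true), this value reflects the exact number of matching documents.
long totalHits = exactCountResponse.getHits().getTotalHits().value;
System.out.println("Exact total hits [" + totalHits + "].");
```

Note that tracking exact totals requires the engine to count every matching document, so it can be slower on large result sets; enable it only when the exact count matters.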