
Performance Testing: Stress test Hologres with PTS

Last Updated: Mar 10, 2026

Use Performance Testing (PTS) to benchmark a Hologres instance with TPC-H workloads. This tutorial walks through three stress testing scenarios -- OLAP analytical queries at 1 VU, high-concurrency point queries at 500 VUs, and primary-key upserts -- and shows how to configure each test, run it, and interpret the results.

Prerequisites

Before you begin, make sure that you have:

  • A Hologres instance

  • PTS activated

  • A Resource Access Management (RAM) user named pts-test with the AliyunHologresReadOnlyAccess permission, authorized in HoloWeb to access the test database

How it works

PTS connects to Hologres over JDBC (PostgreSQL protocol), runs SQL statements at configurable concurrency, and collects latency and throughput metrics. PTS handles load generation, metric collection, and report generation -- no separate benchmarking environment is needed.
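Conceptually, the measurement loop looks like the following Python sketch. It is illustrative only: the `run_load` helper and the stand-in query function are not part of PTS, which performs the real JDBC connection handling, scheduling, and reporting internally.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(query_fn, concurrency, iterations):
    """Issue query_fn at a fixed concurrency and collect latency/throughput.

    query_fn stands in for a JDBC call to Hologres; PTS does the real
    load generation, metric collection, and reporting.
    """
    def worker(_):
        start = time.perf_counter()
        query_fn()
        return time.perf_counter() - start  # per-request latency in seconds

    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(worker, range(iterations)))
    wall = time.perf_counter() - wall_start

    return {
        "avg_rt_ms": statistics.mean(latencies) * 1000,  # average response time
        "qps": iterations / wall,                        # overall throughput
    }

# Example with a no-op "query" in place of a real SQL statement:
metrics = run_load(lambda: None, concurrency=4, iterations=100)
```

The two returned numbers correspond to the average RT and QPS columns in a PTS report.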

This tutorial uses TPC-H 100 GB data from the public MaxCompute project MAXCOMPUTE_PUBLIC_DATA. Three scenarios are tested:

| Scenario | Table type | What it measures | Key metric |
| --- | --- | --- | --- |
| OLAP queries | Column-oriented | Analytical query latency | Average response time (RT) per query |
| Point queries of key-value pairs | Row-oriented | Key-value lookup throughput | Queries per second (QPS) |
| Data updates | Row-oriented | Upsert throughput | Transactions per second (TPS) |

Hologres capabilities referenced in this tutorial:

  • HoloWeb: Connect to a Hologres instance, create databases, and run SQL statements.

  • Data ingestion: Synchronize data from MaxCompute, OSS-based data lakes, MySQL (via DataWorks), or local files (via COPY).

  • Index tuning: Configure clustering keys, segment keys, distribution keys, and bitmap indexes to optimize query performance.

Important
  • The tests in this tutorial are based on TPC-H but do not meet all TPC-H benchmark requirements. Results may differ from published TPC-H benchmark results.

  • TPC-H 100 GB data stored in the public MaxCompute project MAXCOMPUTE_PUBLIC_DATA is written to Hologres.

Scenario 1: OLAP queries

This scenario uses column-oriented tables and runs the 22 standard TPC-H query statements to measure analytical query latency.

Step 1: Prepare test data in Hologres

1. Create foreign tables

Import the eight TPC-H tables as foreign tables from the public MaxCompute project.

DROP FOREIGN TABLE IF EXISTS odps_customer_100g;
DROP FOREIGN TABLE IF EXISTS odps_lineitem_100g;
DROP FOREIGN TABLE IF EXISTS odps_nation_100g;
DROP FOREIGN TABLE IF EXISTS odps_orders_100g;
DROP FOREIGN TABLE IF EXISTS odps_part_100g;
DROP FOREIGN TABLE IF EXISTS odps_partsupp_100g;
DROP FOREIGN TABLE IF EXISTS odps_region_100g;
DROP FOREIGN TABLE IF EXISTS odps_supplier_100g;

IMPORT FOREIGN SCHEMA "MAXCOMPUTE_PUBLIC_DATA#default" LIMIT TO
(
    odps_customer_100g,
    odps_lineitem_100g,
    odps_nation_100g,
    odps_orders_100g,
    odps_part_100g,
    odps_partsupp_100g,
    odps_region_100g,
    odps_supplier_100g
)
FROM SERVER odps_server INTO public OPTIONS (if_table_exist 'error', if_unsupported_type 'error');

2. Create internal tables

OLAP queries require column-oriented tables (the default in Hologres) with properly configured indexes. The following statements create all eight TPC-H tables with optimized index properties.

Why these index settings matter: Distribution keys (e.g., L_ORDERKEY) are chosen to match TPC-H join patterns and minimize data shuffling. Clustering keys and segment keys enable range-scan pruning on date columns. Bitmap indexes and dictionary encoding accelerate filtering and reduce storage overhead.

For details on table properties, see CREATE TABLE.
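The co-location effect of a distribution key can be illustrated with a toy hash-placement sketch. The `shard_of` function is hypothetical; Hologres's internal hashing differs, but the property it shows is the same: rows that share a distribution-key value always land on the same shard.

```python
def shard_of(key, shard_count=16):
    """Toy model: assign a row to a shard by hashing its distribution key.

    Hypothetical sketch only; the actual hash function and shard count
    are internal to Hologres.
    """
    return hash(key) % shard_count

# LINEITEM and ORDERS both distribute on the order key, so every LINEITEM
# row and the matching ORDERS row land on the same shard, and a join on
# L_ORDERKEY = O_ORDERKEY needs no cross-shard data shuffling.
order_key = 12345
lineitem_shard = shard_of(order_key)
orders_shard = shard_of(order_key)
```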

Show CREATE TABLE statements for all eight tables

DROP TABLE IF EXISTS LINEITEM;

BEGIN;
CREATE TABLE LINEITEM
(
    L_ORDERKEY      BIGINT      NOT NULL,
    L_PARTKEY       INT         NOT NULL,
    L_SUPPKEY       INT         NOT NULL,
    L_LINENUMBER    INT         NOT NULL,
    L_QUANTITY      DECIMAL(15,2) NOT NULL,
    L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL,
    L_DISCOUNT      DECIMAL(15,2) NOT NULL,
    L_TAX           DECIMAL(15,2) NOT NULL,
    L_RETURNFLAG    TEXT        NOT NULL,
    L_LINESTATUS    TEXT        NOT NULL,
    L_SHIPDATE      TIMESTAMPTZ NOT NULL,
    L_COMMITDATE    TIMESTAMPTZ NOT NULL,
    L_RECEIPTDATE   TIMESTAMPTZ NOT NULL,
    L_SHIPINSTRUCT  TEXT        NOT NULL,
    L_SHIPMODE      TEXT        NOT NULL,
    L_COMMENT       TEXT        NOT NULL,
    PRIMARY KEY (L_ORDERKEY,L_LINENUMBER)
);
CALL set_table_property('LINEITEM', 'clustering_key', 'L_SHIPDATE,L_ORDERKEY');
CALL set_table_property('LINEITEM', 'segment_key', 'L_SHIPDATE');
CALL set_table_property('LINEITEM', 'distribution_key', 'L_ORDERKEY');
CALL set_table_property('LINEITEM', 'bitmap_columns', 'L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_RETURNFLAG,L_LINESTATUS,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
CALL set_table_property('LINEITEM', 'dictionary_encoding_columns', 'L_RETURNFLAG,L_LINESTATUS,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
COMMIT;

DROP TABLE IF EXISTS ORDERS;

BEGIN;
CREATE TABLE ORDERS
(
    O_ORDERKEY      BIGINT      NOT NULL PRIMARY KEY,
    O_CUSTKEY       INT         NOT NULL,
    O_ORDERSTATUS   TEXT        NOT NULL,
    O_TOTALPRICE    DECIMAL(15,2) NOT NULL,
    O_ORDERDATE     timestamptz NOT NULL,
    O_ORDERPRIORITY TEXT        NOT NULL,
    O_CLERK         TEXT        NOT NULL,
    O_SHIPPRIORITY  INT         NOT NULL,
    O_COMMENT       TEXT        NOT NULL
);
CALL set_table_property('ORDERS', 'segment_key', 'O_ORDERDATE');
CALL set_table_property('ORDERS', 'distribution_key', 'O_ORDERKEY');
CALL set_table_property('ORDERS', 'bitmap_columns', 'O_ORDERKEY,O_CUSTKEY,O_ORDERSTATUS,O_ORDERPRIORITY,O_CLERK,O_SHIPPRIORITY,O_COMMENT');
CALL set_table_property('ORDERS', 'dictionary_encoding_columns', 'O_ORDERSTATUS,O_ORDERPRIORITY,O_CLERK,O_COMMENT');
COMMIT;

DROP TABLE IF EXISTS PARTSUPP;

BEGIN;
CREATE TABLE PARTSUPP
(
    PS_PARTKEY    INT    NOT NULL,
    PS_SUPPKEY    INT    NOT NULL,
    PS_AVAILQTY   INT    NOT NULL,
    PS_SUPPLYCOST DECIMAL(15,2) NOT NULL,
    PS_COMMENT    TEXT   NOT NULL,
    PRIMARY KEY(PS_PARTKEY,PS_SUPPKEY)
);
CALL set_table_property('PARTSUPP', 'distribution_key', 'PS_PARTKEY');
CALL set_table_property('PARTSUPP', 'bitmap_columns', 'PS_PARTKEY,PS_SUPPKEY,PS_AVAILQTY,PS_COMMENT');
CALL set_table_property('PARTSUPP', 'dictionary_encoding_columns', 'PS_COMMENT');
COMMIT;

DROP TABLE IF EXISTS PART;

BEGIN;
CREATE TABLE PART
(
    P_PARTKEY     INT    NOT NULL PRIMARY KEY,
    P_NAME        TEXT   NOT NULL,
    P_MFGR        TEXT   NOT NULL,
    P_BRAND       TEXT   NOT NULL,
    P_TYPE        TEXT   NOT NULL,
    P_SIZE        INT    NOT NULL,
    P_CONTAINER   TEXT   NOT NULL,
    P_RETAILPRICE DECIMAL(15,2) NOT NULL,
    P_COMMENT     TEXT   NOT NULL
);
CALL set_table_property('PART', 'distribution_key', 'P_PARTKEY');
CALL set_table_property('PART', 'bitmap_columns', 'P_PARTKEY,P_SIZE,P_NAME,P_MFGR,P_BRAND,P_TYPE,P_CONTAINER,P_COMMENT');
CALL set_table_property('PART', 'dictionary_encoding_columns', 'P_NAME,P_MFGR,P_BRAND,P_TYPE,P_CONTAINER,P_COMMENT');
COMMIT;

DROP TABLE IF EXISTS CUSTOMER;

BEGIN;
CREATE TABLE CUSTOMER
(
    C_CUSTKEY    INT    NOT NULL PRIMARY KEY,
    C_NAME       TEXT   NOT NULL,
    C_ADDRESS    TEXT   NOT NULL,
    C_NATIONKEY  INT    NOT NULL,
    C_PHONE      TEXT   NOT NULL,
    C_ACCTBAL    DECIMAL(15,2) NOT NULL,
    C_MKTSEGMENT TEXT   NOT NULL,
    C_COMMENT    TEXT   NOT NULL
);
CALL set_table_property('CUSTOMER', 'distribution_key', 'C_CUSTKEY');
CALL set_table_property('CUSTOMER', 'bitmap_columns', 'C_CUSTKEY,C_NATIONKEY,C_NAME,C_ADDRESS,C_PHONE,C_MKTSEGMENT,C_COMMENT');
CALL set_table_property('CUSTOMER', 'dictionary_encoding_columns', 'C_NAME,C_ADDRESS,C_PHONE,C_MKTSEGMENT,C_COMMENT');
COMMIT;

DROP TABLE IF EXISTS SUPPLIER;

BEGIN;
CREATE TABLE SUPPLIER
(
    S_SUPPKEY   INT    NOT NULL PRIMARY KEY,
    S_NAME      TEXT   NOT NULL,
    S_ADDRESS   TEXT   NOT NULL,
    S_NATIONKEY INT    NOT NULL,
    S_PHONE     TEXT   NOT NULL,
    S_ACCTBAL   DECIMAL(15,2) NOT NULL,
    S_COMMENT   TEXT   NOT NULL
);
CALL set_table_property('SUPPLIER', 'distribution_key', 'S_SUPPKEY');
CALL set_table_property('SUPPLIER', 'bitmap_columns', 'S_SUPPKEY,S_NAME,S_ADDRESS,S_NATIONKEY,S_PHONE,S_COMMENT');
CALL set_table_property('SUPPLIER', 'dictionary_encoding_columns', 'S_NAME,S_ADDRESS,S_PHONE,S_COMMENT');
COMMIT;

DROP TABLE IF EXISTS NATION;

BEGIN;
CREATE TABLE NATION(
  N_NATIONKEY INT NOT NULL PRIMARY KEY,
  N_NAME text NOT NULL,
  N_REGIONKEY INT NOT NULL,
  N_COMMENT text NOT NULL
);
CALL set_table_property('NATION', 'distribution_key', 'N_NATIONKEY');
CALL set_table_property('NATION', 'bitmap_columns', 'N_NATIONKEY,N_NAME,N_REGIONKEY,N_COMMENT');
CALL set_table_property('NATION', 'dictionary_encoding_columns', 'N_NAME,N_COMMENT');
COMMIT;

DROP TABLE IF EXISTS REGION;

BEGIN;
CREATE TABLE REGION
(
    R_REGIONKEY INT  NOT NULL PRIMARY KEY,
    R_NAME      TEXT NOT NULL,
    R_COMMENT   TEXT
);
CALL set_table_property('REGION', 'distribution_key', 'R_REGIONKEY');
CALL set_table_property('REGION', 'bitmap_columns', 'R_REGIONKEY,R_NAME,R_COMMENT');
CALL set_table_property('REGION', 'dictionary_encoding_columns', 'R_NAME,R_COMMENT');
COMMIT;

3. Load data and collect statistics

Insert data from the foreign tables into the internal tables, then run VACUUM and ANALYZE to optimize storage layout and update query planner statistics.

Note

Hologres V2.1.17 and later support Serverless Computing. For large offline data imports and ETL jobs, Serverless Computing offloads work to on-demand resources, keeping your instance stable and avoiding out-of-memory (OOM) errors. You pay only for the serverless resources consumed. See Serverless Computing user guide.

-- Optional: Use Serverless Computing for large data imports
SET hg_computing_resource = 'serverless';

INSERT INTO public.customer SELECT * FROM public.odps_customer_100g;
INSERT INTO public.lineitem SELECT * FROM public.odps_lineitem_100g;
INSERT INTO public.nation SELECT * FROM public.odps_nation_100g;
INSERT INTO public.orders SELECT * FROM public.odps_orders_100g;
INSERT INTO public.part SELECT * FROM public.odps_part_100g;
INSERT INTO public.partsupp SELECT * FROM public.odps_partsupp_100g;
INSERT INTO public.region SELECT * FROM public.odps_region_100g;
INSERT INTO public.supplier SELECT * FROM public.odps_supplier_100g;

-- Reclaim storage and optimize data layout
VACUUM nation;
VACUUM region;
VACUUM supplier;
VACUUM customer;
VACUUM part;
VACUUM partsupp;
VACUUM orders;
VACUUM lineitem;

-- Update query planner statistics
ANALYZE nation;
ANALYZE region;
ANALYZE lineitem;
ANALYZE orders;
ANALYZE customer;
ANALYZE part;
ANALYZE partsupp;
ANALYZE supplier;

-- Collect statistics on join keys that are not primary keys
ANALYZE lineitem (l_orderkey, l_partkey, l_suppkey);
ANALYZE orders (o_custkey);
ANALYZE partsupp (ps_partkey, ps_suppkey);

-- Reset to avoid using serverless resources for subsequent queries
RESET hg_computing_resource;

Step 2: Configure a stress testing scenario in PTS

  1. Log on to the PTS console. Choose Performance Test > Create Scenario, then click PTS.

  2. Delete the default HTTP node.

  3. Add a JDBC node and configure it as follows.

Note

Nodes within a single business session run sequentially. Different business sessions run in parallel. For this scenario, keep only one business session and control concurrency through the stress mode settings.

Basic Request Information tab:

| Parameter | Value |
| --- | --- |
| Database Type | PostgreSQL |
| Database URL | `<endpoint>:<port>/<database-name>` (find this on the Instance Details tab in the Hologres console) |
| Username | AccessKey ID of the RAM user pts-test |
| Password | AccessKey secret of the RAM user pts-test |
| SQL | Enter one of the 22 TPC-H query statements |

Connection Pool Settings tab:

| Parameter | Value | Reason |
| --- | --- | --- |
| Initial Connections | 1 | Single-user OLAP test requires minimal connections |
| Max Wait Time | Default | -- |
| Max Connections | 15 | Supports follow-up tests with moderate concurrency |
| Min Connections | 1 | Keeps one idle connection ready |
  4. Replicate the JDBC node 21 more times (use the API replication feature) to create a total of 22 nodes within the same business session. Assign each node one of the 22 TPC-H query statements in order. PTS runs them sequentially in loops.

  5. Configure the Stress Mode Configuration section.

| Parameter | Value | Reason |
| --- | --- | --- |
| Stress source | Alibaba Cloud VPC | Low-latency network path to the Hologres instance |
| Max VUs | 1 | Measures single-query latency without resource contention |
| Auto Increment Mode | Manual Adjustment | Keeps concurrency fixed at 1 |
| Total Test Duration | 60 minutes | Allows time for all 22 queries to run multiple loops |
| Number of Specified IP Addresses | 1 | Single load generator is sufficient for 1 VU |

Step 3: Debug and run the test

  1. Debug the scenario before running the full test. Debugging validates JDBC connectivity, SQL syntax, and parameter settings without consuming a full test run.

  2. Click Save and Start. In the Note dialog box, select Execute Now and The test is permitted and complies with the applicable laws and regulations., then click Start.

Step 4: Interpret the report

  1. After the test completes, go to the Test Reports tab on the Edit Scenario page. Click View next to the report for this test.

  2. On the Overview tab of the Report Details page, check whether the test succeeded.

    • Successful test: Review the success rate, average RT, TPS/VU, number of exceptions, and total requests. Average RT is the average query latency. TPS equals QPS because each transaction contains a single query.

    • Failed test (metrics show 0): Click View Sampling Logs > Click to view details > Error Information to identify the root cause.

  3. On the Details tab, view per-node metrics. Each node corresponds to one TPC-H query.

  4. Single-VU benchmark: Focus on query duration. In this example, the combined execution time of all 22 TPC-H queries is approximately 25 seconds.

  5. Multi-VU follow-up: If you increase VUs later, monitor both query duration and QPS. Use the time-range selector on the Details tab to compare per-VU results.
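The headline numbers in the report follow directly from the per-query durations. A small sketch of the arithmetic (the `summarize` helper is illustrative, not a PTS API):

```python
def summarize(durations_s):
    """Aggregate per-query durations into the report's headline numbers."""
    total = sum(durations_s)
    return {
        "total_s": total,                       # combined execution time
        "avg_rt_s": total / len(durations_s),   # average response time
        "tps": len(durations_s) / total,        # equals QPS: one query per transaction
    }

# Illustrative only: 22 queries totaling about 25 seconds per loop give an
# average RT of roughly 1.14 seconds per query.
report = summarize([25 / 22] * 22)
```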

Note

To investigate slow individual queries, run EXPLAIN ANALYZE on the query in HoloWeb to identify bottlenecks such as data skew, missing indexes, or suboptimal join strategies.

For details on report analysis, see View a JDBC stress testing report.

Scenario 2: Point queries of key-value pairs

This scenario uses a row-oriented table with the same schema as the ORDERS table and runs primary-key lookups to measure point query throughput.

Step 1: Prepare test data in Hologres

Reuse the database created in Scenario 1. Create a row-oriented copy of the ORDERS table and populate it.

1. Create a row-oriented table

Point queries require a row-oriented table with a primary key. The clustering_key and distribution_key are both set to o_orderkey to optimize single-row lookups.

For details on table properties, see CREATE TABLE.

DROP TABLE IF EXISTS public.orders_row;

BEGIN;
CREATE TABLE public.orders_row(
    O_ORDERKEY       INT            NOT NULL PRIMARY KEY
    ,O_CUSTKEY       INT            NOT NULL
    ,O_ORDERSTATUS   TEXT           NOT NULL
    ,O_TOTALPRICE    DECIMAL(15,2)  NOT NULL
    ,O_ORDERDATE     TIMESTAMPTZ    NOT NULL
    ,O_ORDERPRIORITY TEXT           NOT NULL
    ,O_CLERK         TEXT           NOT NULL
    ,O_SHIPPRIORITY  INT            NOT NULL
    ,O_COMMENT       TEXT           NOT NULL
);
CALL set_table_property('public.orders_row', 'orientation', 'row');
CALL set_table_property('public.orders_row', 'clustering_key', 'o_orderkey');
CALL set_table_property('public.orders_row', 'distribution_key', 'o_orderkey');
COMMIT;

2. Load data

-- Optional: Use Serverless Computing for large data imports
SET hg_computing_resource = 'serverless';

INSERT INTO public.orders_row
SELECT * FROM public.orders;

-- Reset to avoid using serverless resources for subsequent queries
RESET hg_computing_resource;

Step 2: Configure a stress testing scenario in PTS

  1. Log on to the PTS console. Choose Performance Test > Create Scenario, then click PTS.

  2. Delete the default HTTP node.

  3. Add a JDBC node and configure it.

Note

Keep only one business session. Nodes within a session run sequentially; different sessions run in parallel.

Basic Request Information tab:

| Parameter | Value |
| --- | --- |
| Database Type | PostgreSQL |
| Database URL | `<endpoint>:<port>/<database-name>` |
| Username | AccessKey ID of the RAM user pts-test |
| Password | AccessKey secret of the RAM user pts-test |
| SQL | Use a single-value or multi-value point query (see below) |

Single-value point query:

SELECT
    O_ORDERKEY,
    O_CUSTKEY,
    O_ORDERSTATUS,
    O_TOTALPRICE,
    O_ORDERDATE,
    O_ORDERPRIORITY,
    O_CLERK,
    O_SHIPPRIORITY,
    O_COMMENT
FROM
    public.orders_row
WHERE
    o_orderkey = ?;

Multi-value point query (9 values):

SELECT
    O_ORDERKEY,
    O_CUSTKEY,
    O_ORDERSTATUS,
    O_TOTALPRICE,
    O_ORDERDATE,
    O_ORDERPRIORITY,
    O_CLERK,
    O_SHIPPRIORITY,
    O_COMMENT
FROM
    public.orders_row
WHERE
    o_orderkey IN (?, ?, ?, ?, ?, ?, ?, ?, ?);

Placeholder tab:

Use ? as placeholders in the SQL statement. Configure each placeholder on this tab in the same order as the ? symbols appear.

| Parameter | Value |
| --- | --- |
| Type | bigint |
| Value | ${sys.random(1,99999999)} |
Important

Configure placeholders in the exact order they appear in the SQL statement. For the multi-value query, configure 9 placeholders.
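A rough sketch of the left-to-right binding rule (illustrative only; PTS binds `?` placeholders through JDBC prepared statements rather than string substitution, which also avoids SQL injection):

```python
import random

def bind_placeholders(sql, values):
    """Substitute ? placeholders left to right, mirroring PTS's ordering rule.

    Toy helper for illustration; real JDBC drivers send values separately
    from the SQL text.
    """
    parts = sql.split("?")
    if len(parts) - 1 != len(values):
        raise ValueError("placeholder count must match value count")
    out = parts[0]
    for part, value in zip(parts[1:], values):
        out += str(value) + part
    return out

# One random bigint per placeholder, like ${sys.random(1,99999999)}:
keys = [random.randint(1, 99999999) for _ in range(9)]
sql = "SELECT * FROM public.orders_row WHERE o_orderkey IN (?, ?, ?, ?, ?, ?, ?, ?, ?)"
bound = bind_placeholders(sql, keys)
```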

Connection Pool Settings tab:

| Parameter | Value | Reason |
| --- | --- | --- |
| Initial Connections | 20 | Generates 20 connections per load generator at startup |
| Max Wait Time | Default | -- |
| Max Connections | 20 | Matches the initial pool size for stable performance |
| Min Connections | 20 | Prevents connection churn under sustained load |
  4. Configure the Stress Mode Configuration section.

| Parameter | Value | Reason |
| --- | --- | --- |
| Stress source | Alibaba Cloud VPC | -- |
| Stress Mode | VU Mode | Simulates concurrent database connections |
| Max VUs | 500 | Tested on a 64-CU Hologres instance |
| Auto Increment Mode | Manual Adjustment | Keeps concurrency fixed at 500 |
| Total Test Duration | 5 minutes | Sufficient for a single-statement point query test |
| Number of Specified IP Addresses | 25 | Estimated QPS of 100,000 / max 4,000 QPS per load generator = 25 generators |
Note

In VU mode, each PTS load generator supports up to 4,000 QPS. Estimate your expected total QPS to determine the number of load generators, then set the connection pool size accordingly (connections per generator = Max Connections setting).
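The sizing arithmetic from the note can be sketched as follows (the `plan_generators` helper is hypothetical, not a PTS API):

```python
import math

MAX_QPS_PER_GENERATOR = 4000  # PTS limit per load generator in VU mode

def plan_generators(expected_qps, connections_per_generator):
    """Size the load-generator fleet and the resulting total connection count."""
    generators = math.ceil(expected_qps / MAX_QPS_PER_GENERATOR)
    return {
        "generators": generators,
        "total_connections": generators * connections_per_generator,
    }

# 100,000 QPS / 4,000 QPS per generator = 25 generators; with Max Connections
# set to 20 per generator, that is 500 connections, matching 500 VUs.
plan = plan_generators(expected_qps=100_000, connections_per_generator=20)
```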

Step 3: Debug and run the test

  1. Debug the scenario to validate JDBC connectivity and placeholder configuration.

  2. Click Save and Start. In the Note dialog box, select Execute Now and The test is permitted and complies with the applicable laws and regulations., then click Start.

Step 4: Interpret the report

Reference results from this example (64-CU Hologres instance):

  • Single-value point queries: Average QPS exceeds 100,000.

  • Multi-value point queries (30 values per query): Average QPS of 34,819. Effective per-key QPS = 34,819 x 30 = 1,044,570.
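The per-key arithmetic, as a one-line sanity check (illustrative helper, not a PTS metric):

```python
def effective_key_qps(query_qps, keys_per_query):
    """Each multi-value query fetches keys_per_query rows, so the per-key
    rate is the query rate multiplied by the batch size."""
    return query_qps * keys_per_query

# 34,819 queries/s at 30 keys per query = 1,044,570 keys/s
rate = effective_key_qps(34_819, 30)
```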

Scenario 3: Data updates

This scenario tests primary-key upsert performance using the same row-oriented orders_row table from Scenario 2.

Configure the JDBC node

Use the same PTS configuration as Scenario 2 (connection pool, stress mode, VU count), but replace the SQL statement with the following upsert:

INSERT INTO public.orders_row (o_orderkey, o_custkey, o_orderstatus, o_totalprice, o_orderdate, o_orderpriority, o_clerk, o_shippriority, o_comment)
    VALUES (?, 1, 'demo', 1.1, '2021-01-01', 'demo', 'demo', 1, 'demo')
ON CONFLICT (o_orderkey)
    DO UPDATE SET
        (o_orderkey, o_custkey, o_orderstatus, o_totalprice, o_orderdate, o_orderpriority, o_clerk, o_shippriority, o_comment) = ROW (excluded.*);

Configure a single bigint placeholder for o_orderkey with the value ${sys.random(1,99999999)}.
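The upsert's semantics can be modeled with a keyed map (a toy sketch; Hologres resolves the conflict on the primary key o_orderkey and replaces the entire row):

```python
def upsert(table, row):
    """Model INSERT ... ON CONFLICT (o_orderkey) DO UPDATE: insert if the
    key is new, otherwise replace the whole row, as the statement's
    SET (...) = ROW (excluded.*) clause does."""
    table[row["o_orderkey"]] = row
    return table

orders_row = {}
upsert(orders_row, {"o_orderkey": 1, "o_comment": "demo"})
upsert(orders_row, {"o_orderkey": 1, "o_comment": "updated"})
# The second call hits the same primary key, so the table still holds one
# row for key 1, now carrying the new values.
```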

For instructions on debugging, running the test, and analyzing the report, follow the same steps as in Scenario 2: Point queries of key-value pairs.

Troubleshooting

Connection failures during debugging typically result from one of the following:

  • Invalid Database URL: Verify the endpoint, port, and database name. Find the correct values on the Instance Details tab in the Hologres console.

  • Incorrect credentials: Confirm the AccessKey ID and AccessKey secret belong to the RAM user pts-test.

  • Missing permissions: Make sure the RAM user has AliyunHologresReadOnlyAccess and is authorized in HoloWeb for the test database.
