By digoal
Time series workloads place heavy demands on a database: sustained high-throughput writes, large data volumes, and fast aggregate queries over historical data. PostgreSQL addresses these scenarios with partitioning, compression, and columnar storage. This article compares the citus columnar access method with the default heap (row) storage.
A related product is TimescaleDB, which is not covered in this article:
https://github.com/timescale/timescaledb
Let's take PostgreSQL 13 as an example.
Example of the citus columnar storage
https://github.com/citusdata/citus
Configure and create the citus extension:
vi postgresql.conf
shared_preload_libraries='citus'
pg_ctl restart -m fast
postgres=# create extension citus;
CREATE EXTENSION
Create a columnar table and insert 10 million rows of test data:
postgres=# create table cstore (id int, c1 int, c2 int, c3 int, c4 int, c5 int, c6 int, c7 int, crt_time timestamp, info text) using columnar;
CREATE TABLE
postgres=# insert into cstore select generate_series(1,10000000), random()*10, random()*20,random()*100,random()*1000,random()*10000, random()*500, random()*2000, clock_timestamp(), random()::text;
INSERT 0 10000000
Create a heap (row store) table and insert the same 10 million rows:
postgres=# create table heap (like cstore);
CREATE TABLE
postgres=# insert into heap select * from cstore ;
INSERT 0 10000000
View the columnar storage parameters, such as the compression algorithm, compression level, and stripe layout:
postgres=# show columnar.compression;
columnar.compression
----------------------
zstd
(1 row)
postgres=# show columnar.compression_level;
columnar.compression_level
----------------------------
3
(1 row)
postgres=# show columnar.stripe_row_limit;
columnar.stripe_row_limit
---------------------------
150000
(1 row)
postgres=# show columnar.chunk_group_row_limit;
columnar.chunk_group_row_limit
--------------------------------
10000
(1 row)
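These global defaults can also be overridden per table. The sketch below uses the `alter_columnar_table_set()` UDF and the `columnar.options` catalog view that ship with citus columnar; check that your citus version provides them, and note that new settings generally apply only to stripes written afterwards:

```sql
-- Override the global columnar settings for one table:
SELECT alter_columnar_table_set(
    'cstore',
    compression           => 'zstd',   -- 'none' | 'pglz' | 'lz4' | 'zstd'
    compression_level     => 10,       -- higher = better ratio, slower writes
    stripe_row_limit      => 150000,
    chunk_group_row_limit => 10000
);

-- Inspect the per-table settings:
SELECT * FROM columnar.options WHERE regclass = 'cstore'::regclass;
```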
Compare the space usage of column storage and row storage tables:
postgres=# \dt+ cstore
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
--------+--------+-------+----------+-------------+--------+-------------
public | cstore | table | postgres | permanent | 243 MB |
(1 row)
postgres=# \dt+ heap
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
--------+------+-------+----------+-------------+--------+-------------
public | heap | table | postgres | permanent | 888 MB |
(1 row)
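With the sizes above (888 MB vs. 243 MB), the columnar table is roughly 3.7 times smaller. The ratio can be computed directly, for example:

```sql
-- Compare total on-disk size of the two tables:
SELECT pg_size_pretty(pg_total_relation_size('heap'))   AS heap_size,
       pg_size_pretty(pg_total_relation_size('cstore')) AS cstore_size,
       round(pg_total_relation_size('heap')::numeric
             / pg_total_relation_size('cstore'), 1)     AS compression_ratio;
```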
Simple Aggregate Query Performance:
postgres=# select count(*) from heap ;
count
----------
10000000
(1 row)
Time: 512.432 ms
postgres=# select count(*) from cstore;
count
----------
10000000
(1 row)
Time: 386.415 ms
Execution plans:
postgres=# explain (analyze,verbose,costs,settings,buffers,wal,timing,summary) select count(*) from heap ;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=238637.75..238637.76 rows=1 width=8) (actual time=1142.604..1142.605 rows=1 loops=1)
Output: count(*)
Buffers: shared hit=14684 read=98953
-> Seq Scan on public.heap (cost=0.00..213637.60 rows=10000060 width=0) (actual time=0.018..686.369 rows=10000000 loops=1)
Output: id, c1, c2, c3, c4, c5, c6, c7, crt_time, info
Buffers: shared hit=14684 read=98953
Settings: max_parallel_workers_per_gather = '0'
Planning Time: 0.035 ms
JIT:
Functions: 2
Options: Inlining false, Optimization false, Expressions true, Deforming true
Timing: Generation 0.186 ms, Inlining 0.000 ms, Optimization 0.081 ms, Emission 1.006 ms, Total 1.273 ms
Execution Time: 1142.839 ms
(13 rows)
Time: 1143.141 ms (00:01.143)
postgres=# explain (analyze,verbose,costs,settings,buffers,wal,timing,summary) select count(*) from cstore;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=25000.00..25000.01 rows=1 width=8) (actual time=996.225..996.227 rows=1 loops=1)
Output: count(*)
Buffers: shared hit=3
-> Custom Scan (ColumnarScan) on public.cstore (cost=0.00..0.00 rows=10000000 width=0) (actual time=0.021..530.998 rows=10000000 loops=1)
Columnar Chunk Groups Removed by Filter: 0
Buffers: shared hit=3
Settings: max_parallel_workers_per_gather = '0'
Planning:
Buffers: shared hit=6
Planning Time: 0.127 ms
Execution Time: 996.249 ms
(11 rows)
Time: 996.632 ms
Practice in Time Series Scenarios
Create a row store table partitioned by month:
CREATE table ts (id int, c1 int, c2 int, c3 int, c4 int, c5 int, c6 int, c7 int, crt_time timestamp, info text)
PARTITION by range (crt_time);
create table ts_202101 PARTITION OF ts FOR VALUES FROM ('2021-01-01') to ('2021-02-01');
create table ts_202102 PARTITION OF ts FOR VALUES FROM ('2021-02-01') to ('2021-03-01');
create table ts_202103 PARTITION OF ts FOR VALUES FROM ('2021-03-01') to ('2021-04-01');
create table ts_202104 PARTITION OF ts FOR VALUES FROM ('2021-04-01') to ('2021-05-01');
create table ts_202105 PARTITION OF ts FOR VALUES FROM ('2021-05-01') to ('2021-06-01');
Insert 10 million rows of test data:
insert into ts select generate_series(1,10000000), random()*10, random()*20, random()*30,
random()*100, random()*200, random()*300, random()*1000,
'2021-01-01'::timestamp + ((random()*5*30*24)||' hour')::interval,
random()::text;
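The generated timestamps spread across roughly five months starting from 2021-01-01, so every partition receives data. A quick way to check the distribution:

```sql
-- Count rows per monthly partition:
SELECT date_trunc('month', crt_time) AS month, count(*)
FROM ts
GROUP BY 1
ORDER BY 1;
```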
At the beginning of each month, create a columnar table for a partition that is several months old, copy the data from the row store partition into it, and then swap the row store partition for the columnar one:
begin;
lock table ts_202101 in exclusive mode;
CREATE table ts_columnar_202101 (like ts) using columnar;
insert into ts_columnar_202101 select * from ts_202101;
alter table ts DETACH PARTITION ts_202101;
alter table ts ATTACH PARTITION ts_columnar_202101 FOR VALUES FROM ('2021-01-01') to ('2021-02-01');
end;
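The steps above can be wrapped in a reusable routine. The following is a minimal PL/pgSQL sketch; the function name and the `ts_YYYYMM` / `ts_columnar_YYYYMM` naming convention are illustrative assumptions, not part of citus:

```sql
-- Hypothetical helper: archive one monthly row store partition of ts
-- into a columnar table and swap the partitions.
CREATE OR REPLACE FUNCTION archive_month(p_month date) RETURNS void AS $$
DECLARE
    src text := format('ts_%s', to_char(p_month, 'YYYYMM'));
    dst text := format('ts_columnar_%s', to_char(p_month, 'YYYYMM'));
    lo  text := to_char(p_month, 'YYYY-MM-DD');
    hi  text := to_char(p_month + interval '1 month', 'YYYY-MM-DD');
BEGIN
    -- Block concurrent writes to the partition being archived.
    EXECUTE format('LOCK TABLE %I IN EXCLUSIVE MODE', src);
    EXECUTE format('CREATE TABLE %I (LIKE ts) USING columnar', dst);
    EXECUTE format('INSERT INTO %I SELECT * FROM %I', dst, src);
    EXECUTE format('ALTER TABLE ts DETACH PARTITION %I', src);
    EXECUTE format('ALTER TABLE ts ATTACH PARTITION %I FOR VALUES FROM (%L) TO (%L)',
                   dst, lo, hi);
END;
$$ LANGUAGE plpgsql;

-- Call it inside a transaction so readers never see a missing partition:
-- BEGIN; SELECT archive_month('2021-01-01'); COMMIT;
```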
The whole exchange completes in a little over two seconds:
INSERT 0 2066564
Time: 2654.981 ms (00:02.655)
After the exchange, the 2021-01 partition of the ts table is the columnar table:
postgres=# \dt+ ts*
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
--------+--------------------+-------------------+----------+-------------+---------+-------------
public | ts | partitioned table | postgres | permanent | 0 bytes |
public | ts_202101 | table | postgres | permanent | 184 MB |
public | ts_202102 | table | postgres | permanent | 166 MB |
public | ts_202103 | table | postgres | permanent | 184 MB |
public | ts_202104 | table | postgres | permanent | 178 MB |
public | ts_202105 | table | postgres | permanent | 178 MB |
public | ts_columnar_202101 | table | postgres | permanent | 58 MB |
(7 rows)
postgres=# \d+ ts
Partitioned table "public.ts"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
----------+-----------------------------+-----------+----------+---------+----------+--------------+-------------
id | integer | | | | plain | |
c1 | integer | | | | plain | |
c2 | integer | | | | plain | |
c3 | integer | | | | plain | |
c4 | integer | | | | plain | |
c5 | integer | | | | plain | |
c6 | integer | | | | plain | |
c7 | integer | | | | plain | |
crt_time | timestamp without time zone | | | | plain | |
info | text | | | | extended | |
Partition key: RANGE (crt_time)
Partitions: ts_202102 FOR VALUES FROM ('2021-02-01 00:00:00') TO ('2021-03-01 00:00:00'),
ts_202103 FOR VALUES FROM ('2021-03-01 00:00:00') TO ('2021-04-01 00:00:00'),
ts_202104 FOR VALUES FROM ('2021-04-01 00:00:00') TO ('2021-05-01 00:00:00'),
ts_202105 FOR VALUES FROM ('2021-05-01 00:00:00') TO ('2021-06-01 00:00:00'),
ts_columnar_202101 FOR VALUES FROM ('2021-01-01 00:00:00') TO ('2021-02-01 00:00:00')
Queries on the ts table are correctly routed (pruned) to the corresponding partition:
postgres=# select count(*) from ts where crt_time between '2021-01-02' and '2021-01-15';
count
--------
866004
(1 row)
Time: 140.311 ms
postgres=# explain select count(*) from ts where crt_time between '2021-01-02' and '2021-01-15';
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=766.30..766.31 rows=1 width=8)
-> Custom Scan (ColumnarScan) on ts_columnar_202101 ts (cost=0.00..740.46 rows=10333 width=0)
Filter: ((crt_time >= '2021-01-02 00:00:00'::timestamp without time zone) AND (crt_time <= '2021-01-15 00:00:00'::timestamp without time zone))
(3 rows)
Time: 0.435 ms
An example of parallel computing across multiple partitions, mixing columnar and row store scans:
postgres=# explain select count(*) from ts;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=140940.59..140940.60 rows=1 width=8)
-> Gather (cost=140940.27..140940.58 rows=3 width=8)
Workers Planned: 3
-> Partial Aggregate (cost=139940.27..139940.28 rows=1 width=8)
-> Parallel Append (cost=0.00..131875.76 rows=3225806 width=0)
-> Custom Scan (ColumnarScan) on ts_columnar_202101 ts_1 (cost=0.00..0.00 rows=2066564 width=0)
-> Parallel Seq Scan on ts_202103 ts_3 (cost=0.00..30163.08 rows=666908 width=0)
-> Parallel Seq Scan on ts_202104 ts_4 (cost=0.00..29186.01 rows=645301 width=0)
-> Parallel Seq Scan on ts_202105 ts_5 (cost=0.00..29183.47 rows=645247 width=0)
-> Parallel Seq Scan on ts_202102 ts_2 (cost=0.00..27214.16 rows=601716 width=0)
JIT:
Functions: 9
Options: Inlining false, Optimization false, Expressions true, Deforming true
(13 rows)
Time: 1.113 ms
postgres=# select count(*) from ts;
count
----------
10000000
(1 row)
Time: 545.859 ms
The detached row store partition ts_202101 can then be kept or dropped as needed.
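For example, after verifying that the columnar copy is complete, the old row store partition can be dropped to reclaim space (a sketch; the verification strategy is up to you):

```sql
-- Optionally verify the columnar copy before discarding the row store data:
SELECT (SELECT count(*) FROM ts_202101)
     = (SELECT count(*) FROM ts_columnar_202101) AS copy_complete;

-- Then reclaim the space:
DROP TABLE ts_202101;
```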