
Best Practices for PostgreSQL Time Series Database Design

This article discusses database design for time series scenarios, with examples and demos.

By digoal

Background

Requirements for databases in time series scenarios:

  1. High-speed writes
  2. Compressed storage
  3. Efficient range queries and range analysis

PostgreSQL Solutions in Time Series Scenarios:

  1. Partitioned tables are adopted, partitioned by time. The current partition uses the heap storage format, which provides high-performance writes and batch loading. A single machine can ingest hundreds of thousands of rows per second.
  2. Historical partitions are converted to columnar storage. The Citus columnar engine provides a good compression ratio. In other words, old partitions use columnar storage while the newest partition uses heap storage.
  3. PostgreSQL supports parallel queries, which provide good analysis efficiency.
  4. Queries on recent data (for example, the last month) are usually point queries or small time-range queries. The heap engine is recommended for these partitions because heap supports indexes; columnar does not currently support indexes.

The differences between columnar and heap are listed below:

  1. Columnar is a column-oriented storage format and supports compression.
  2. Heap is a row-oriented storage format and supports dynamic compression of variable-length fields (TOAST), but its compression ratio is not as high as columnar's.
  3. Columnar only supports batch writes (COPY or INSERT ... SELECT), which makes it suitable for loading static data and for one-time conversions (see the sketch after this list).
  4. Heap supports arbitrary write operations and is suitable for highly concurrent transactions.
  5. Columnar does not support indexes: full table scans are efficient, but point queries and small range queries are slow.
  6. Heap supports indexes: full table scans are slower than on columnar, but point queries and small range queries are fast.
  7. Columnar does not support parallel computing within a single table, so parallelism requires multiple partitions and a Parallel Append scan.
  8. Heap supports parallel computing within a single table.
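As a quick illustration of item 3, a columnar table accepts batch loads but rejects row-level modification. A minimal sketch against the cstore table created in the demo below (/tmp/data.csv is a hypothetical file; the exact error message depends on the Citus version):

-- batch loading works (COPY or INSERT ... SELECT)
copy cstore from '/tmp/data.csv' (format csv);

-- row-level modification is rejected on a columnar table
update cstore set c1 = 0 where id = 1;
-- raises an error instead of updating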

A related product is TimescaleDB, which is not covered in this article:

https://github.com/timescale/timescaledb

Demo

Let's take PostgreSQL 13 as an example.

Example of Citus Columnar Storage

https://github.com/citusdata/citus

Install and enable the extension (add citus to shared_preload_libraries, restart, then create the extension):

vi postgresql.conf  
shared_preload_libraries='citus'  
  
  
pg_ctl restart -m fast  
  
postgres=# create extension citus;  
CREATE EXTENSION  
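
Verify that the extension is installed (the reported version depends on your installation):

postgres=# select extname, extversion from pg_extension where extname = 'citus';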

Create a column store table and write 10 million rows of test data:

postgres=# create table cstore (id int, c1 int, c2 int, c3 int, c4 int, c5 int, c6 int, c7 int, crt_time timestamp, info text) using columnar;  
CREATE TABLE  
  
postgres=# insert into cstore select generate_series(1,10000000), random()*10, random()*20,random()*100,random()*1000,random()*10000, random()*500, random()*2000, clock_timestamp(), random()::text;   
INSERT 0 10000000  

Create a row store table and write the same 10 million rows of test data:

postgres=# create table heap (like cstore);  
CREATE TABLE  
  
  
postgres=# insert into heap select * from cstore ;  
INSERT 0 10000000  

View the columnar storage parameters (compression algorithm and level, stripe and chunk group row limits):

postgres=# show columnar.compression;  
 columnar.compression   
----------------------  
 zstd  
(1 row)  
  
postgres=# show columnar.compression_level;  
 columnar.compression_level   
----------------------------  
 3  
(1 row)  
  
postgres=# show columnar.stripe_row_limit;  
 columnar.stripe_row_limit   
---------------------------  
 150000  
(1 row)  
  
postgres=# show columnar.chunk_group_row_limit;  
 columnar.chunk_group_row_limit   
--------------------------------  
 10000  
(1 row)  
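
These GUCs are the defaults applied to newly written columnar data. Per the Citus columnar README (linked in the Summary), individual tables can also be tuned with alter_columnar_table_set(); a sketch (option availability varies by Citus version, and changes only affect newly written stripes):

-- raise the compression level for this table only
SELECT alter_columnar_table_set('cstore', compression => 'zstd', compression_level => 10);

-- or change the session-level default used for subsequent writes
SET columnar.compression_level = 10;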

Compare the space usage of column storage and row storage tables:

postgres=# \dt+ cstore   
                            List of relations  
 Schema |  Name  | Type  |  Owner   | Persistence |  Size  | Description   
--------+--------+-------+----------+-------------+--------+-------------  
 public | cstore | table | postgres | permanent   | 243 MB |   
(1 row)  
  
postgres=# \dt+ heap   
                           List of relations  
 Schema | Name | Type  |  Owner   | Persistence |  Size  | Description   
--------+------+-------+----------+-------------+--------+-------------  
 public | heap | table | postgres | permanent   | 888 MB |   
(1 row)  
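
The columnar table is roughly 3.6 times smaller here. The same ratio can be computed directly from the catalog; a minimal sketch using standard size functions:

postgres=# select relname,
                  pg_size_pretty(pg_total_relation_size(oid)) as size,
                  round(pg_total_relation_size('heap')::numeric / pg_total_relation_size(oid), 2) as heap_to_this
           from pg_class
           where relname in ('heap', 'cstore');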

Compare the performance of a simple aggregate query:

postgres=# select count(*) from heap ;  
  count     
----------  
 10000000  
(1 row)  
  
Time: 512.432 ms  
  
postgres=# select count(*) from cstore;  
  count     
----------  
 10000000  
(1 row)  
  
Time: 386.415 ms  

Examine the execution plans:

postgres=# explain (analyze,verbose,costs,settings,buffers,wal,timing,summary) select count(*) from heap ;  
                                                           QUERY PLAN                                                             
--------------------------------------------------------------------------------------------------------------------------------  
 Aggregate  (cost=238637.75..238637.76 rows=1 width=8) (actual time=1142.604..1142.605 rows=1 loops=1)  
   Output: count(*)  
   Buffers: shared hit=14684 read=98953  
   ->  Seq Scan on public.heap  (cost=0.00..213637.60 rows=10000060 width=0) (actual time=0.018..686.369 rows=10000000 loops=1)  
         Output: id, c1, c2, c3, c4, c5, c6, c7, crt_time, info  
         Buffers: shared hit=14684 read=98953  
 Settings: max_parallel_workers_per_gather = '0'  
 Planning Time: 0.035 ms  
 JIT:  
   Functions: 2  
   Options: Inlining false, Optimization false, Expressions true, Deforming true  
   Timing: Generation 0.186 ms, Inlining 0.000 ms, Optimization 0.081 ms, Emission 1.006 ms, Total 1.273 ms  
 Execution Time: 1142.839 ms  
(13 rows)  
  
Time: 1143.141 ms (00:01.143)  
postgres=# explain (analyze,verbose,costs,settings,buffers,wal,timing,summary) select count(*) from cstore;  
                                                                  QUERY PLAN                                                                     
-----------------------------------------------------------------------------------------------------------------------------------------------  
 Aggregate  (cost=25000.00..25000.01 rows=1 width=8) (actual time=996.225..996.227 rows=1 loops=1)  
   Output: count(*)  
   Buffers: shared hit=3  
   ->  Custom Scan (ColumnarScan) on public.cstore  (cost=0.00..0.00 rows=10000000 width=0) (actual time=0.021..530.998 rows=10000000 loops=1)  
         Columnar Chunk Groups Removed by Filter: 0  
         Buffers: shared hit=3  
 Settings: max_parallel_workers_per_gather = '0'  
 Planning:  
   Buffers: shared hit=6  
 Planning Time: 0.127 ms  
 Execution Time: 996.249 ms  
(11 rows)  
  
Time: 996.632 ms  
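
Note the "Columnar Chunk Groups Removed by Filter" line in the columnar plan: each chunk group (10,000 rows by default) stores per-column min/max metadata, so a filter on a column that correlates with insertion order can skip most chunk groups without reading them. A quick way to observe this (the counts depend on your data layout):

postgres=# explain (analyze) select count(*) from cstore where id < 100000;

With 10 million rows in 10,000-row chunk groups, the monotonically increasing id column lets roughly 990 of the 1,000 chunk groups be removed by the filter.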

Practice in Time Series Scenarios

Create a row store table partitioned by month:

CREATE table ts (id int, c1 int, c2 int, c3 int, c4 int, c5 int, c6 int, c7 int, crt_time timestamp, info text)   
PARTITION by range (crt_time);   
  
create table ts_202101 PARTITION OF ts FOR VALUES FROM ('2021-01-01') to ('2021-02-01');  
create table ts_202102 PARTITION OF ts FOR VALUES FROM ('2021-02-01') to ('2021-03-01');  
create table ts_202103 PARTITION OF ts FOR VALUES FROM ('2021-03-01') to ('2021-04-01');  
create table ts_202104 PARTITION OF ts FOR VALUES FROM ('2021-04-01') to ('2021-05-01');  
create table ts_202105 PARTITION OF ts FOR VALUES FROM ('2021-05-01') to ('2021-06-01');  
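
Optionally, a DEFAULT partition (available since PostgreSQL 11) can catch rows that fall outside the defined ranges; note that attaching future partitions then requires a scan of the default partition for conflicting rows:

create table ts_default PARTITION OF ts DEFAULT;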

Write 10 million rows of test data:

insert into ts select generate_series(1,10000000), random()*10, random()*20, random()*30,   
random()*100, random()*200, random()*300, random()*1000,   
'2021-01-01'::timestamp + ((random()*5*30*24)||' hour')::interval,   
random()::text;  

At the beginning of a month, create a columnar table to hold the data of an old partition, copy the old partition's data into it, and swap the two partitions, all in one transaction:

begin;
lock table ts_202101 in exclusive mode;  -- block writes to the partition being swapped
CREATE table ts_columnar_202101 (like ts) using columnar;
insert into ts_columnar_202101 select * from ts_202101;
alter table ts DETACH PARTITION ts_202101;
alter table ts ATTACH PARTITION ts_columnar_202101 FOR VALUES FROM ('2021-01-01') to ('2021-02-01');
end;

The copy and swap complete in just over two seconds:

INSERT 0 2066564  
Time: 2654.981 ms (00:02.655)  

After the swap, the 202101 partition of the ts table is the columnar table:

postgres=# \dt+ ts*  
                                        List of relations  
 Schema |        Name        |       Type        |  Owner   | Persistence |  Size   | Description   
--------+--------------------+-------------------+----------+-------------+---------+-------------  
 public | ts                 | partitioned table | postgres | permanent   | 0 bytes |   
 public | ts_202101          | table             | postgres | permanent   | 184 MB  |   
 public | ts_202102          | table             | postgres | permanent   | 166 MB  |   
 public | ts_202103          | table             | postgres | permanent   | 184 MB  |   
 public | ts_202104          | table             | postgres | permanent   | 178 MB  |   
 public | ts_202105          | table             | postgres | permanent   | 178 MB  |   
 public | ts_columnar_202101 | table             | postgres | permanent   | 58 MB   |   
(7 rows)  
  
postgres=# \d+ ts  
                                          Partitioned table "public.ts"  
  Column  |            Type             | Collation | Nullable | Default | Storage  | Stats target | Description   
----------+-----------------------------+-----------+----------+---------+----------+--------------+-------------  
 id       | integer                     |           |          |         | plain    |              |   
 c1       | integer                     |           |          |         | plain    |              |   
 c2       | integer                     |           |          |         | plain    |              |   
 c3       | integer                     |           |          |         | plain    |              |   
 c4       | integer                     |           |          |         | plain    |              |   
 c5       | integer                     |           |          |         | plain    |              |   
 c6       | integer                     |           |          |         | plain    |              |   
 c7       | integer                     |           |          |         | plain    |              |   
 crt_time | timestamp without time zone |           |          |         | plain    |              |   
 info     | text                        |           |          |         | extended |              |   
Partition key: RANGE (crt_time)  
Partitions: ts_202102 FOR VALUES FROM ('2021-02-01 00:00:00') TO ('2021-03-01 00:00:00'),  
            ts_202103 FOR VALUES FROM ('2021-03-01 00:00:00') TO ('2021-04-01 00:00:00'),  
            ts_202104 FOR VALUES FROM ('2021-04-01 00:00:00') TO ('2021-05-01 00:00:00'),  
            ts_202105 FOR VALUES FROM ('2021-05-01 00:00:00') TO ('2021-06-01 00:00:00'),  
            ts_columnar_202101 FOR VALUES FROM ('2021-01-01 00:00:00') TO ('2021-02-01 00:00:00')  

Queries against the ts table are correctly pruned to the matching partition:

postgres=# select count(*) from ts where crt_time between '2021-01-02' and '2021-01-15';  
 count    
--------  
 866004  
(1 row)  
  
Time: 140.311 ms  
postgres=# explain select count(*) from ts where crt_time between '2021-01-02' and '2021-01-15';  
                                                                       QUERY PLAN                                                                          
---------------------------------------------------------------------------------------------------------------------------------------------------------  
 Aggregate  (cost=766.30..766.31 rows=1 width=8)  
   ->  Custom Scan (ColumnarScan) on ts_columnar_202101 ts  (cost=0.00..740.46 rows=10333 width=0)  
         Filter: ((crt_time >= '2021-01-02 00:00:00'::timestamp without time zone) AND (crt_time <= '2021-01-15 00:00:00'::timestamp without time zone))  
(3 rows)  
  
Time: 0.435 ms  

An example of multi-partition parallel computing involving the columnar partition:

postgres=# explain select count(*) from ts;  
                                                      QUERY PLAN                                                         
-----------------------------------------------------------------------------------------------------------------------  
 Finalize Aggregate  (cost=140940.59..140940.60 rows=1 width=8)  
   ->  Gather  (cost=140940.27..140940.58 rows=3 width=8)  
         Workers Planned: 3  
         ->  Partial Aggregate  (cost=139940.27..139940.28 rows=1 width=8)  
               ->  Parallel Append  (cost=0.00..131875.76 rows=3225806 width=0)  
                     ->  Custom Scan (ColumnarScan) on ts_columnar_202101 ts_1  (cost=0.00..0.00 rows=2066564 width=0)  
                     ->  Parallel Seq Scan on ts_202103 ts_3  (cost=0.00..30163.08 rows=666908 width=0)  
                     ->  Parallel Seq Scan on ts_202104 ts_4  (cost=0.00..29186.01 rows=645301 width=0)  
                     ->  Parallel Seq Scan on ts_202105 ts_5  (cost=0.00..29183.47 rows=645247 width=0)  
                     ->  Parallel Seq Scan on ts_202102 ts_2  (cost=0.00..27214.16 rows=601716 width=0)  
 JIT:  
   Functions: 9  
   Options: Inlining false, Optimization false, Expressions true, Deforming true  
(13 rows)  
  
Time: 1.113 ms  
postgres=# select count(*) from ts;  
  count     
----------  
 10000000  
(1 row)  
  
Time: 545.859 ms  

The detached row-store partition (ts_202101) can be kept as an archive or dropped.
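
Recurring monthly rollovers can be wrapped in a function following the same pattern. A minimal sketch; the function name, the _col suffix, and the example call are illustrative:

create or replace function roll_to_columnar(part text, lo timestamp, hi timestamp)
returns void language plpgsql as $$
begin
  -- block writes to the partition being swapped
  execute format('lock table %I in exclusive mode', part);
  -- copy the old partition into a new columnar table
  execute format('create table %I (like ts) using columnar', part || '_col');
  execute format('insert into %I select * from %I', part || '_col', part);
  -- swap the partitions
  execute format('alter table ts detach partition %I', part);
  execute format('alter table ts attach partition %I for values from (%L) to (%L)',
                 part || '_col', lo, hi);
  -- drop the row-store copy, or keep it as an archive
  execute format('drop table %I', part);
end;
$$;

-- example: convert the February 2021 partition
select roll_to_columnar('ts_202102', '2021-02-01', '2021-03-01');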

Summary

  1. Columnar has a much higher compression ratio.
  2. Columnar has better statistical query efficiency: the storage is compressed, so far fewer blocks need to be scanned.
  3. Currently, columnar only supports parallelism across multiple partitions via Parallel Append scans; it does not support parallel queries within a single columnar table. Partitioning is therefore recommended for better performance.
  4. For more information about the restrictions of columnar, refer to https://github.com/citusdata/citus/blob/master/src/backend/columnar/README.md
  5. Citus columnar is a good choice when you want columnar storage for faster analysis and a high compression ratio without scaling out. Combined with PostgreSQL partitioned tables, it implements hybrid row/column storage that meets time series requirements for highly concurrent writes, high-performance analysis, and a high compression ratio.
