
Optimize Read and Write Speeds in IoT Scenarios

This blog article discusses several methods you can use to optimize the write and read operations of large databases using PostgreSQL.

By Digoal

For Internet of Things (IoT) applications, data read and write performance are both important concerns. This blog article discusses several methods you can use to optimize the write and read operations of large databases using PostgreSQL.

Optimizing Write Speeds

When it comes to IoT, the heap storage used by PostgreSQL is generally recommended over other alternatives, because its data storage structure is well suited to high-speed writes and appends. In other words, simply using this storage class already helps optimize write speeds.
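For illustration, a typical append-only IoT table might look like the following sketch; the table and column names are hypothetical, not from the original article. PostgreSQL uses heap storage by default, so no special syntax is needed:

-- a typical append-only IoT table; PostgreSQL uses heap storage by default
create table sensor_log (
  sensor_id int,                      -- device identifier
  val numeric,                        -- measured value
  crt_time timestamp default now()    -- ingestion time, increases with insert order
);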

Using the BRIN Index to Optimize Speeds

BRIN (Block Range) indexes, sometimes called block indexes, are intended to speed up the read performance of tables with a large number of entries without hurting write performance. In practice, a BRIN index adds very little data to the system yet can substantially increase read speeds. These indexes are especially advantageous when there is a strong linear correlation between the physical storage order and the field values in a table. BRIN indexes offer the biggest bang for your buck on time series fields, or on fields that have been physically reordered using CLUSTER or ORDER BY.
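As a minimal sketch, a BRIN index on the time field of the hypothetical sensor_log table above could be created like this:

-- BRIN stores only min/max metadata per block range, so the index stays tiny
create index idx_sensor_log_time on sensor_log using brin (crt_time);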

Optimizing Read Speeds

Read operations can be understood broadly as either reading data directly or, more generally, processing data asynchronously. For IoT applications, write operations tend not to suffer from latency issues, and write throughput is relatively high. Data processing, in turn, usually happens through the read mechanism. But, you may ask, how is data read anyway?

Data is generally read through indexes, such as the BRIN index mentioned earlier, which has little impact on the overall throughput of write operations and also supports range queries. However, BRIN indexes are best used in scenarios where time series fields are involved. If your data has no time series field, you can add one to make the data more suitable for a BRIN index.
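A sketch of that approach, assuming a hypothetical legacy_data table that has no time field yet:

-- add an ingestion-time column so values correlate with physical storage order
alter table legacy_data add column crt_time timestamp default now();
create index idx_legacy_data_time on legacy_data using brin (crt_time);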

Using Block Scan to Optimize Speeds

Block scan is another viable means of increasing read speeds, especially for data that is stored in a heap or written in an append-only fashion.

PostgreSQL happens to have a tuple identifier (TID) scan function, which provides a type of block scan that can be used to query a record of the data block you want to search. A TID identifies a row by a pair of values: the number of the data block and the offset number of the tuple within that block.

select * from tbl where ctid='(100,99)';  

The above SQL statement queries the 99th record of data block 100 (block numbers start at 0, and tuple offsets within a block start at 1). A scan using the TID function is highly efficient and can be used in conjunction with heap storage for read operations.
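To see these (block number, offset) pairs for yourself, you can select the hidden ctid column directly; tbl here stands for any existing table:

select ctid, * from tbl limit 5;  -- each ctid prints as (block number, offset)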

Estimating the Number of Records for a Block

PostgreSQL does not currently have an interface that returns all records of an entire data block at once; the existing interface returns one record of a data block at a time. Therefore, if you need to read all the records of a particular data block, you first need to enumerate the possible row offsets of that block.

Despite this, PostgreSQL does allow you to estimate the maximum number of records per page, using the following formula:

Maximum number of records = block_size / (line pointer + tuple header) = block_size / (4 + 27)

Consider this example:

postgres=# select current_setting('block_size');  
 current_setting   
-----------------  
 32768  
(1 row)  
  
postgres=# select current_setting('block_size')::int/31;  
 ?column? 
----------  
     1057  
(1 row)  

If you need a more precise estimate of the number of rows, you can get closer to the exact number by adding the fixed data length of the row's fields to the tuple header in the formula above.
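For example, for a table whose rows hold a single int column (4 bytes of data), a hedged refinement adds the data length to the per-row overhead:

select current_setting('block_size')::int/(4+27+4);  -- about 936 rows for a 32 KB block

At 936, this is much closer to the 909 rows per block actually observed in the examples below.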

Examples

A Function to Generate the TIDs of a Specified Block

create or replace function gen_tids(blkid int) returns tid[] as
$$
select array(
  SELECT ('('||blkid||',' || s.i || ')')::tid
    FROM generate_series(0,current_setting('block_size')::int/31) AS s(i)
);
$$ language sql strict immutable;

Read Records of a Data Block

postgres=# create table test(id int);  
CREATE TABLE  
postgres=# insert into test select generate_series(1,10000);  
INSERT 0 10000  
  
postgres=# explain (analyze,verbose,timing,costs,buffers) select * from test where ctid = any  
(  
  array  
  (  
    SELECT ('(0,' || s.i || ')')::tid  
      FROM generate_series(0, current_setting('block_size')::int/31) AS s(i)  
  )  
);  
                                                                QUERY PLAN                                                                  
------------------------------------------------------------------------------------------------------------------------------------------  
 Tid Scan on postgres.test  (cost=25.03..40.12 rows=10 width=4) (actual time=0.592..0.795 rows=909 loops=1)
   Output: test.id  
   TID Cond: (test.ctid = ANY ($0))  
   Buffers: shared hit=1057  
   InitPlan 1 (returns $0)  
      ->  Function Scan on pg_catalog.generate_series s  (cost=0.01..25.01 rows=1000 width=6) (actual time=0.087..0.429 rows=1058 loops=1)
           Output: ((('(0,'::text || (s.i)::text) || ')'::text))::tid  
           Function Call: generate_series(0, ((current_setting('block_size'::text))::integer / 31))  
 Planning time: 0.106 ms  
 Execution time: 0.881 ms  
(10 rows)  
postgres=# explain (analyze,verbose,timing,costs,buffers) select * from test where ctid = any(gen_tids(1));  
  
 Tid Scan on postgres.test  (cost=1.32..1598.90 rows=1058 width=4) (actual time=0.026..0.235 rows=909 loops=1)
   Output: id  
   TID Cond: (test.ctid = ANY ('{"(1,0)","(1,1)","(1,2)","(1,3)","(1,4)","(1,5)","(1,6)","(1,7)","(1,8)","(1,9)","(1,10)","(1,11)","(1,12)","(1,13)","(1,14)","(1,15)","(1,16)","(1,17)","(1,18)","(1,19)","(1,20)","(1,21)","(1,22)","(1,23)","(1,24)","(1,25)" ...
   Buffers: shared hit=1057  
 Planning time: 1.084 ms  
 Execution time: 0.294 ms  
(6 rows)  
postgres=# select ctid,* from test where ctid = any(gen_tids(11));
  ctid  |  id   
--------+-------
 (11,1) | 10000
(1 row)

postgres=# select ctid,* from test where ctid = any(gen_tids(9));
  ctid   |  id  
---------+------
 (9,1)   | 8182
 (9,2)   | 8183
 (9,3)   | 8184
 (9,4)   | 8185
 (9,5)   | 8186
 (9,6)   | 8187
 ...
 (9,904) | 9085
 (9,905) | 9086
 (9,906) | 9087
 (9,907) | 9088
 (9,908) | 9089
 (9,909) | 9090
(909 rows)

Other Extended Scenarios

If the data is never updated or deleted, the CTID can also serve as a row identifier for an external index, such as a full-text index in Elasticsearch (ES). When the index is created, it can point directly to records in the database by CTID, without the need to build another primary key, which greatly improves write performance.
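A rough sketch of this pattern, with hypothetical table and column names: ship each row's ctid together with its text to the external indexer, and later fetch the matching rows back by ctid:

-- feed the external indexer with (ctid, content) pairs
select ctid, content from docs;

-- later, fetch the rows that the external index returned, by their ctids
select * from docs where ctid = any(array['(0,1)','(5,23)']::tid[]);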

Parallel Scenarios

Create a function that generates a CTID array according to a modulus rule:

create or replace function gen_tids(
  blk1 int,  -- total number of blocks in this table
  m int,     -- degree of parallelism, that is, the modulus
  n int,     -- parallel worker number, that is, the current remainder
  rx int     -- average row length
) returns tid[] as
$$
with blks as (select id from generate_series(0,blk1) t(id) where mod(id,m)=n)
select array(
  SELECT ('('||blks.id||',' || s.i || ')')::tid
    FROM generate_series(0, (current_setting('block_size')::int/rx)+10 ) AS s(i), blks  -- add 10 extra offsets per block to reduce the risk of missing rows
);
$$ language sql strict immutable;
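One way to run this in parallel (an assumed setup, not shown in the original) is to open m sessions and give each one a distinct remainder n; the parameter values here match the table statistics gathered below:

-- with m = 20, the session for remainder 0 runs:
select count(*) from tbl where ctid = any(gen_tids(1100111, 20, 0, 36));
-- the session for remainder 1 runs:
select count(*) from tbl where ctid = any(gen_tids(1100111, 20, 1, 36));
-- ... and so on up to remainder 19; together the 20 result sets cover the whole table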

Create a test table and insert 1 billion rows, each small in size:
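The exact statements used to create and populate the table are not shown here; a minimal sketch consistent with the \d output below (note that the table is unlogged) might be:

-- unlogged tables skip WAL, which speeds up bulk loading
create unlogged table tbl(id int);
insert into tbl select generate_series(1, 1000000000);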

postgres=# \d tbl
            Unlogged table "public.tbl"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 id     | integer |           |          | 

postgres=# select count(*) from tbl;
   count    
------------
 1000000000
(1 row)

postgres=# \dt+ tbl
                   List of relations
 Schema | Name | Type  |  Owner   | Size  | Description 
--------+------+-------+----------+-------+-------------
 public | tbl  | table | postgres | 34 GB | 
(1 row)

Query the average length of each row and the total number of data blocks occupied:

postgres=# select floor(current_setting('block_size')::int8*relpages/reltuples), relpages from pg_class where relname='tbl';
 floor | relpages 
-------+----------
    36 |  1100111
(1 row)

Hash vs CTID Parallel Scan

A Large Table with Small Rows

1.  Use CTID scan

postgres=# select count(*) from tbl where ctid = any(gen_tids(1100111, 20, 0, 36)); 
  count   
----------
 50000454
(1 row)

Time: 234985.944 ms (03:54.986)

2.  Use hash scan

postgres=# select count(*) from tbl where mod(id,20)=0;
  count   
----------
 50000000
(1 row)

Time: 79916.058 ms (01:19.916)

3.  Use partition scan

postgres=# create table tbl2 as  select * from tbl where mod(id,20)=0;
SELECT 50000000

postgres=# \dt+ tbl2
                    List of relations
 Schema | Name | Type  |  Owner   |  Size   | Description 
--------+------+-------+----------+---------+-------------
 public | tbl2 | table | postgres | 1719 MB | 
(1 row)

postgres=# \timing
Timing is on.

postgres=# select count(*) from tbl2;
  count   
----------
 50000000
(1 row)

Time: 593.304 ms

Scanning a partition directly is the most effective approach. A few additional notes:

1.  For PostgreSQL 9.6 and later, parallel scanning is already built in.

2.  If the front end can write to partitions, it is recommended to partition at the front end to achieve the best write performance, that is, to write data directly to the partitioned tables.

3.  Partitioned tables incur a performance loss. When a large amount of data is written, the loss differs between front-end partitioning and database-side partitioning. Partitioning based on rules, triggers, or the built-in table partitioning of PostgreSQL 10 carries a significant performance loss.

Currently, pg_pathman incurs minimal performance loss for partitioning, so it is recommended; see the sketch after this list.

PostgreSQL's built-in table partitioning will certainly be optimized in the future, so in the long run the built-in partitioning function is recommended.

4.  Risks

The gen_tids function uses the average row length to compute the CTIDs. When the records in some blocks are shorter than the average, those blocks hold more rows than estimated, and the generated CTIDs may not cover all of them, which can result in missing records when querying.

5.  CTID parallel scan is suitable for tables with large rows.
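As a rough sketch of the pg_pathman recommendation above, assuming the extension is installed and preloaded (the exact function signature may vary across pg_pathman versions), a table could be split into hash partitions like this:

-- requires shared_preload_libraries = 'pg_pathman' and the extension installed
create extension pg_pathman;
-- split the table into 20 hash partitions on the id column
select create_hash_partitions('tbl'::regclass, 'id', 20);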

Summary

If the kernel offered a more elegant built-in syntax, the performance of CTID scans would certainly be better, for example, a block-level scan that returns all data belonging to the specified blocks:

select * from tbl where blkid = any (array[blocks]);

References

Five ways to paginate in Postgres, from the basic to the exotic

Getting all entries in a single block with ctid
