By Digoal
There are two ways to import data into a MySQL server quickly.
1) LOAD DATA LOCAL INFILE
Imports data from a file on the client host into a MySQL server.
Here is an example:
create table test (id int, c1 numeric);
Generate the data directly in PostgreSQL and pipe it over (the mysql client must run with --local-infile=1 so that LOAD DATA LOCAL is allowed):
psql -c "copy (select generate_series(1,10000), random()*100) to stdout with (format 'csv')" \
  | mysql -h rm-bp1wv992ym962k85888370.mysql.rds.aliyuncs.com -P 3306 \
      -u user123 --password=Dts_test123! -D db1 --local-infile=1 \
      -e "LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE test FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' (id, c1)"
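To check the load from the same client, count the rows (reusing the placeholder host and credentials above); the pipeline generated 10,000 rows, so the count should match:
mysql -h rm-bp1wv992ym962k85888370.mysql.rds.aliyuncs.com -P 3306 -u user123 --password=Dts_test123! -D db1 -e "SELECT count(*) FROM test"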
2) LOAD DATA INFILE
Imports data from a file that already resides on the MySQL server host (the path is subject to the server's secure_file_priv setting); a minimal sketch of the syntax follows. To generate test data on the server instead, use a stored procedure that inserts in a loop, and don't forget to wrap the call in a transaction; otherwise the process will be very slow.
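A minimal sketch of the server-side syntax, assuming a /var/lib/mysql-files/test.csv file already exists on the server host under the secure_file_priv directory (path and file are hypothetical):
LOAD DATA INFILE '/var/lib/mysql-files/test.csv'
INTO TABLE test
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(id, c1);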
Here is a stored procedure example:
CREATE TABLE employees (
id INT NOT NULL,
fname VARCHAR(30),
lname VARCHAR(30),
birth TIMESTAMP,
hired DATE NOT NULL DEFAULT '1970-01-01',
separated DATE NOT NULL DEFAULT '9999-12-31',
job_code INT NOT NULL,
store_id INT NOT NULL
);
DROP PROCEDURE IF EXISTS BatchInsert;
-- change the statement delimiter to double slashes so the procedure body can contain semicolons
delimiter //
CREATE PROCEDURE BatchInsert(IN init INT, IN loop_time INT) -- first argument: starting ID (customizable); second argument: number of rows to generate
BEGIN
DECLARE Var INT;
DECLARE ID INT;
SET Var = 0;
SET ID = init;
WHILE Var < loop_time DO
insert into employees
(id, fname, lname, birth, hired, separated, job_code, store_id)
values
(ID, CONCAT('chen', ID), CONCAT('haixiang', ID), Now(), Now(), Now(), 1, ID);
SET ID = ID + 1;
SET Var = Var + 1;
END WHILE;
END;
//
-- change the delimiter back to a semicolon
delimiter ;
-- insert inside a transaction; otherwise it will be very slow
begin;
CALL BatchInsert(1, 200000);
commit;
Query OK, 1 row affected (7.53 sec)
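Equivalently, you can disable autocommit for the session instead of issuing an explicit begin; a minimal variant of the same idea:
SET autocommit = 0;
CALL BatchInsert(1, 200000);
COMMIT;
SET autocommit = 1;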
In contrast, PostgreSQL generates the same 200,000 rows much faster:
CREATE TABLE employees (
id INT NOT NULL,
fname VARCHAR(30),
lname VARCHAR(30),
birth TIMESTAMP,
hired DATE NOT NULL DEFAULT '1970-01-01',
separated DATE NOT NULL DEFAULT '9999-12-31',
job_code INT NOT NULL,
store_id INT NOT NULL
);
\timing
insert into employees
(id, fname, lname, birth, hired, separated, job_code, store_id)
select
ID, CONCAT('chen', ID), CONCAT('haixiang', ID), Now(), Now(), Now(), 1, ID
from generate_series(1,200000) id;
INSERT 0 200000
Time: 355.652 ms
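The two approaches combine naturally: generate the same employees data set in PostgreSQL and pipe it into MySQL with method 1. A sketch reusing the placeholder host and credentials from above; now()::timestamp(0) and current_date keep the CSV text in a form MySQL parses cleanly:
psql -c "copy (select id, 'chen'||id, 'haixiang'||id, now()::timestamp(0), current_date, current_date, 1, id from generate_series(1,200000) id) to stdout with (format 'csv')" \
  | mysql -h rm-bp1wv992ym962k85888370.mysql.rds.aliyuncs.com -P 3306 \
      -u user123 --password=Dts_test123! -D db1 --local-infile=1 \
      -e "LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE employees FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' (id, fname, lname, birth, hired, separated, job_code, store_id)"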