This topic describes how to obtain the traffic files of an RDS for MySQL instance or a PolarDB-X instance and upload them to an Object Storage Service (OSS) bucket before you start a performance assessment task by using the migration assessment feature.
Import traffic data of an RDS for MySQL instance to Simple Log Service (SLS)
Enable SQL audit for an RDS for MySQL instance
Log on to the ApsaraDB RDS console and go to the Instance List page.
Select the region at the top of the page, and then click the ID of the target instance to go to the instance details page.
Important: To enable the SQL audit feature of an RDS for MySQL instance, the following prerequisites must be met:
The RDS for MySQL instance must be of the High Availability Edition or Enterprise Edition.
If you log on as a RAM user, you must have read and write permissions on the RDS for MySQL instance, for example, through the AliyunRDSFullAccess policy.
In the left-side navigation pane, choose Autonomy Service > SQL Explorer and Audit.
For more information about how to enable the SQL Explorer feature, see SQL Explorer.
If the RDS for MySQL instance is located in any of the following regions, you can click Enable to enable the SQL Explorer and audit features: China (Hangzhou), China (Shanghai), China (Qingdao), China (Beijing), China (Shenzhen), China (Zhangjiakou), China (Hohhot), China (Chengdu), China (Guangzhou), China (Heyuan), China (Ulanqab), China (Hong Kong), Singapore, Malaysia (Kuala Lumpur), and Indonesia (Jakarta).
If the RDS for MySQL instance is located in a region other than the preceding regions, click Official Version to specify the retention period of the SQL audit logs, and click OK to enable the SQL Explorer feature.
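If you manage many instances, you can also enable the SQL Explorer feature programmatically by calling the ModifySQLCollectorPolicy API operation instead of clicking through the console. The following is a minimal sketch that assumes the legacy aliyun-python-sdk-core and aliyun-python-sdk-rds Python packages; the credentials, region, and instance ID are placeholders.

```python
# Minimal sketch: enable the SQL Explorer feature for an RDS for MySQL instance
# by calling the ModifySQLCollectorPolicy API operation. Assumes the legacy
# aliyun-python-sdk-core and aliyun-python-sdk-rds packages; the credentials,
# region, and instance ID below are placeholders.
from aliyunsdkcore.client import AcsClient
from aliyunsdkrds.request.v20140815.ModifySQLCollectorPolicyRequest import (
    ModifySQLCollectorPolicyRequest,
)

client = AcsClient("<access_key_id>", "<access_key_secret>", "cn-hangzhou")

request = ModifySQLCollectorPolicyRequest()
request.set_DBInstanceId("rm-xxxxxxxxxxxx")  # placeholder instance ID
request.set_SQLCollectorStatus("Enable")     # "Enable" turns on SQL Explorer

print(client.do_action_with_exception(request))
```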
Import traffic data to SLS
Log on to the SLS console and activate SLS as prompted.
In the Import Data section, choose Cloud Products > RDS SQL Audit to go to the RDS SQL Audit page.
In the Specify Logstore step, select the project and Logstore that you created from the Project and Logstore drop-down lists. Then, click Next.
You can also click Create Now next to the Project or Logstore drop-down list to create a project or Logstore. For more information, see Create a project and a Logstore. Alternatively, use the programmatic sketch after this procedure.
In the Specify Data Source step, complete the RAM authorization. Then, find the ID of the RDS for MySQL instance for which you enabled the SQL Explorer feature, and enable log delivery for the instance.
Click Next.
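As mentioned above, you can create the project and Logstore programmatically instead of in the console. A minimal sketch, assuming the aliyun-log-python-sdk package; the endpoint, credentials, and resource names are placeholders.

```python
# Minimal sketch: create an SLS project and Logstore with aliyun-log-python-sdk.
# The endpoint, credentials, and resource names below are placeholders.
from aliyun.log import LogClient

client = LogClient(
    "cn-hangzhou.log.aliyuncs.com",  # SLS endpoint of your region
    "<access_key_id>",
    "<access_key_secret>",
)

client.create_project("my-audit-project", "Project for RDS SQL audit logs")
client.create_logstore(
    "my-audit-project",
    "rds-audit-logstore",
    ttl=30,         # retention period of the logs, in days
    shard_count=2,  # number of shards in the Logstore
)
```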
Import traffic data of a PolarDB-X instance to SLS
Prerequisites
You have enabled the SLS feature.
You have created a database on the PolarDB-X instance.
Enable SQL audit for a PolarDB-X instance
The procedures for PolarDB-X 1.0 and PolarDB-X 2.0 differ. The following procedure uses PolarDB-X 2.0 as an example.
Log on to the PolarDB-X console.
At the top of the page, select the region where the target instance resides.
On the Instance List page, click the PolarDB-X 2.0 tab.
Find the target instance and click its ID to go to the instance details page.
In the left-side navigation pane, choose Diagnostics and Optimization > SQL Audit and Analysis.
In the upper-right corner of the SQL Audit and Analysis page, enable Current Database SQL Audit Log Status.
In the dialog box that appears, configure the log retention period and choose whether to import historical data.
After the SQL audit feature is enabled, the audit logs of PolarDB-X databases in the same region are automatically written to Logstores in SLS. You can then export the traffic files to an OSS bucket.
Export traffic files from SLS to OSS
Return to the homepage of the SLS console.
In the Projects section, click the name of the target project to go to the Logstores page.
In the left-side navigation pane of the Logstores page, choose Data Transformation > Export, and click + next to OSS. Then, configure the OSS LogShipper feature and export data to the OSS bucket. For more information, see Create an OSS shipping job (new version).
Important: In the OSS LogShipper dialog box, configure the parameters based on the following rules:
Partition Format must be set to %Y/%m/%d/%H/%M.
Storage Format must be set to json or csv. If you select csv, you must enable Shipped Fields, set Delimiter to a comma (,), and set Escape Character to a double quotation mark (").
Compression Format must be set to snappy or gzip.
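After the shipping job starts running, the shipped files are written to the OSS bucket under the configured partition format. The following is a minimal sketch for listing and downloading the shipped files with the oss2 Python SDK; the endpoint, bucket name, credentials, and prefix are placeholders.

```python
# Minimal sketch: list and download the shipped files with the oss2 SDK.
# The endpoint, bucket name, credentials, and prefix below are placeholders;
# the prefix follows the %Y/%m/%d/%H/%M partition format set for the job.
import os
import oss2

auth = oss2.Auth("<access_key_id>", "<access_key_secret>")
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "my-traffic-bucket")

os.makedirs("traffic_files", exist_ok=True)
for obj in oss2.ObjectIterator(bucket, prefix="2024/05/20/"):
    local_path = os.path.join("traffic_files", obj.key.replace("/", "_"))
    bucket.get_object_to_file(obj.key, local_path)
    print("downloaded", obj.key)
```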
Alternatively, you can click the name of the target Logstore, specify a query time range, and click Search & Analyze. After the query results are displayed, click the download icon to download the logs.
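As an alternative to the console download icon, you can pull the query results with the SLS SDK. A minimal sketch, again assuming the aliyun-log-python-sdk package; the endpoint, credentials, project, Logstore, and time range are placeholders.

```python
# Minimal sketch: pull audit logs for a time range with aliyun-log-python-sdk.
# The endpoint, credentials, project, Logstore, and time range are placeholders.
import json
from aliyun.log import LogClient

client = LogClient(
    "cn-hangzhou.log.aliyuncs.com",
    "<access_key_id>",
    "<access_key_secret>",
)

with open("audit_logs.json", "w") as f:
    # get_log_all pages through every log that matches the time range.
    for resp in client.get_log_all(
        "my-audit-project",
        "rds-audit-logstore",
        "2024-05-20 00:00:00",
        "2024-05-20 01:00:00",
    ):
        for log in resp.get_logs():
            f.write(json.dumps(log.get_contents()) + "\n")
```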