
CREATE TABLE AS (CTAS) statement

Updated at: 2025-03-03 07:54

The CREATE TABLE AS statement synchronizes data and schema changes from a source table to a result table in real time. This streamlines creating tables in a destination store and keeping them in sync with schema changes in the source table. This topic explains how to use the CREATE TABLE AS statement and provides examples for various scenarios.

Note

Data ingestion YAML jobs are a new feature in Realtime Compute for Apache Flink, offering powerful data integration capabilities through simple YAML configurations.

YAML jobs encompass key functionalities of CREATE TABLE AS and CREATE DATABASE AS statements, such as full database synchronization and schema evolution. They also support additional scenarios, including immediate schema change synchronization, original binlog synchronization, and automatic new table synchronization. YAML is recommended for developing data ingestion job logic. For more information, see Data ingestion YAML best practices.

Prerequisites

Before executing the CREATE TABLE AS statement, ensure that the destination store's catalog is created in your workspace. For more information, see Data Management.

Limits

  • The CREATE TABLE AS statement does not support the test feature.

  • The CREATE TABLE AS statement cannot be used with the INSERT INTO statement in a data synchronization deployment.

  • The CREATE TABLE AS statement does not support MiniBatch configuration.

    For existing SQL deployments requiring MiniBatch configuration adjustments, create a new SQL deployment instead of modifying and restarting the existing one.

    Important

    Before creating an SQL deployment, remove the MiniBatch configuration from the Configuration Management page under the Job Default Configuration tab in the Other Configuration section.
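    As a sketch, a typical MiniBatch configuration that must be removed consists of entries such as the following (the key names are standard Flink table options; the values shown are illustrative assumptions):

    ```sql
    -- Remove MiniBatch entries such as these from the job configuration
    -- before using the CREATE TABLE AS statement:
    SET 'table.exec.mini-batch.enabled' = 'true';
    SET 'table.exec.mini-batch.allow-latency' = '5s';
    SET 'table.exec.mini-batch.size' = '5000';
    ```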

  • The following list describes the upstream and downstream data stores compatible with the CREATE TABLE AS statement. Select one source table and one result table from the list.

    Supported as source tables: MySQL, Message queue Kafka, and MongoDB.

    Supported as result tables: Upsert Kafka, StarRocks, Hologres, and Apache Paimon.

    Remarks:

    • MySQL (source table):

      • By default, the database name and table names of the upstream storage are synchronized during merging and synchronization of multiple tables in a sharded database.

      • During single-table synchronization, the database name and table names are not synchronized. To synchronize them, execute an SQL statement to create a catalog and add the catalog.table.metadata-columns parameter to the code. For more information, see Manage MySQL Catalog.

      • The MySQL connector cannot be used to synchronize views of MySQL databases.

    • Message queue Kafka (source table): None.

    • MongoDB (source table):

      • The MongoDB connector does not support merging and synchronization of multiple tables in a sharded database.

      • The MongoDB connector does not support synchronization of MongoDB metadata.

      • You cannot execute the CREATE TABLE AS statement to synchronize data from new tables in the source database by using the MongoDB connector.

      • You can execute the CREATE TABLE AS statement to synchronize data and schema changes from a MongoDB collection to a destination table. For more information, see Example 9.

    • Upsert Kafka (result table): None.

    • StarRocks (result table): The CREATE TABLE AS statement supports only E-MapReduce (EMR) StarRocks clusters.

    • Hologres (result table): By default, the CREATE TABLE AS statement creates a specific number of connections for each Hologres table. The number of connections is specified by the connectionSize parameter. You can configure the connectionPoolName parameter so that tables configured with the same pool name share one connection pool.

      Note: When you synchronize data to Hologres, if the upstream source table contains data types that are not supported by the fixed plan feature, we recommend that you use the INSERT INTO statement to convert the data types in Flink and then write the data to Hologres, rather than creating the sink table with the CREATE TABLE AS statement. If you use the CREATE TABLE AS statement in this scenario, the fixed plan feature cannot be used and write performance is poor.

    • Apache Paimon (result table):

      • Only Realtime Compute for Apache Flink whose engine version is vvr-6.0.7-flink-1.15 or later supports Apache Paimon result tables.

      • Only Realtime Compute for Apache Flink that uses VVR 8.0.10 or later allows you to use the Apache Paimon connector to synchronize data to Apache Paimon sink tables stored in Data Lake Formation (DLF) 2.0.
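A minimal sketch of the Hologres connection pool sharing described above (the catalog, database, and table names and the parameter values are illustrative assumptions):

```sql
BEGIN STATEMENT SET;

-- Both Hologres sink tables declare the same connectionPoolName, so they
-- share one connection pool instead of each opening connectionSize connections.
CREATE TABLE IF NOT EXISTS `holo`.`db`.`orders`
WITH ('connectionSize' = '3', 'connectionPoolName' = 'ctas_pool')
AS TABLE `mysql`.`db`.`orders`;

CREATE TABLE IF NOT EXISTS `holo`.`db`.`customers`
WITH ('connectionSize' = '3', 'connectionPoolName' = 'ctas_pool')
AS TABLE `mysql`.`db`.`customers`;

END;
```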

Features

The CREATE TABLE AS statement provides the following features:

  • Single-table synchronization: Synchronizes full and incremental data from a source table to a result table in real time.

  • Synchronization of table schema changes: Synchronizes schema changes from a source table, such as added columns, to a result table in real time.

  • Merging and synchronization of multiple tables in a sharded database: Uses regular expressions to define shard names and table names, so that data from multiple database shards and tables can be merged and synchronized to a single result table.

    Note: The caret (^) cannot be used to match the beginning of a table name in these regular expressions.

  • Addition of custom computed columns: Lets you add computed columns to a source table for data conversion and computation. The computed columns are added as physical columns to the result table, and the computation results are synchronized in real time.

  • Execution of multiple CREATE TABLE AS statements: Allows multiple CREATE TABLE AS statements to be committed as one deployment by using the STATEMENT SET statement. Realtime Compute for Apache Flink optimizes the source operators and uses a single operator to read data from multiple tables. This reduces server-id usage, the number of database connections, and the database read load.

    Note: Deployments that execute multiple CREATE TABLE AS statements can add new statements to include new tables in the synchronization job. For more information, see Example 6.

Startup procedure

When executing the CREATE TABLE AS statement, Realtime Compute for Apache Flink performs the following operations:

  1. Verifies the existence of the result table in the destination store.

    • If the result table does not exist, it is created using the destination store's catalog with the same schema as the data source.

    • If the result table exists, the creation step is skipped.

    • If the result table's schema differs from the source table's schema, an error is returned.

  2. Commits and runs the data synchronization deployment.

    Synchronizes data and schema changes from the data source to the result table.

The following figure illustrates the data synchronization process from MySQL to Hologres when the CREATE TABLE AS statement is used.

Synchronization policies for table schema changes

The CREATE TABLE AS statement can synchronize data in real time and also handle schema changes from the source table to the result table. These changes include table creation and subsequent schema modifications.

  • Supported schema change policies include the following:

    • Adding a nullable column: Automatically adds the related column to the result table's schema and synchronizes data to the new column.

    • Deleting a nullable column: Fills null values in the nullable column of the result table instead of removing the column.

    • Adding a non-nullable column: Adds the related column to the result table's schema and synchronizes the new column's data. The new column is set to nullable, and pre-existing data is filled with null values.

    • Renaming a column: Treated as adding and deleting a column. After renaming in the source table, the new column is added to the result table, and the original column is filled with null values.

    • Changing a column's data type:

      • A data type change is synchronized only if the downstream sink supports it, for example, changing from INT to BIGINT. Whether a sink supports a data type change depends on its rules for column type changes, which vary by result table. Currently, only Apache Paimon result tables support direct column type changes.

      • If a downstream sink such as Hologres does not support a data type change, the CREATE TABLE AS statement cannot synchronize the change directly. In this case, use the type normalization mode: when the deployment starts, the table is created in the downstream sink with a more general data type that can accommodate later changes. Only Hologres supports column type changes in type normalization mode. Enable type normalization mode when you start a deployment for the first time. If the mode is not enabled initially, it does not take effect, and you must delete the downstream table and restart the deployment without state data. For more information, see Example 8: Synchronize data to a Hologres table by using the CREATE TABLE AS statement in type normalization mode.

  • The following schema changes are not supported:

    • Changes to constraints, such as primary keys or indexes.

    • Deletion of non-nullable columns.

    • Changing a column from NOT NULL to nullable.

Important
  • If the source table's schema undergoes one of the unsupported changes, the result table must be deleted and the deployment restarted. This recreates the result table and resynchronizes historical data.

  • The CREATE TABLE AS statement does not identify DDL statement types but compares schema differences between data records before and after a change. If a column is deleted and re-added without data changes between the two DDL statements, the statement considers no schema change has occurred. Schema changes are identified and synchronized only when data changes in the source table.
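To illustrate the detection behavior, consider the following hypothetical statements executed on the MySQL source table (the table and column names are illustrative assumptions):

```sql
ALTER TABLE `user01` DROP COLUMN `age`;
ALTER TABLE `user01` ADD COLUMN `age` INT;
-- No data changed between the two DDL statements, so the schemas of the data
-- records before and after are identical and no schema change is detected.

INSERT INTO `user01` (id, name, age) VALUES (1, 'Alice', 20);
-- The schema comparison happens only when a data change such as this arrives.
```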

Basic syntax

CREATE TABLE IF NOT EXISTS <sink_table>
(
  [ <table_constraint> ]
)
[COMMENT table_comment]
[PARTITIONED BY (partition_column_name1, partition_column_name2, ...)]
WITH (
  key1=val1,
  key2=val2, 
  ...
 )
AS TABLE <source_table> [/*+ OPTIONS(key1=val1, key2=val2, ... ) */]
[ADD COLUMN { <column_component> | (<column_component> [, ...])}];

<sink_table>:
  [catalog_name.][db_name.]table_name

<table_constraint>:
  [CONSTRAINT constraint_name] PRIMARY KEY (column_name, ...) NOT ENFORCED

<source_table>:
  [catalog_name.][db_name.]table_name

<column_component>:
  column_name AS computed_column_expression [COMMENT column_comment] [FIRST | AFTER column_name]

The CREATE TABLE AS statement follows the basic syntax of the CREATE TABLE statement. The parameters are described below.

  • sink_table: The name of the table to which data is synchronized. You can use a catalog name and a database name to specify the name of the result table.

  • COMMENT: The description of the result table. By default, the description of source_table is used.

  • PARTITIONED BY: Specifies the columns based on which the table is partitioned.

    Important: The CREATE TABLE AS statement cannot be used to synchronize data to a partitioned table in StarRocks.

  • table_constraint: Defines the PRIMARY KEY constraint for the table to ensure data uniqueness.

  • WITH: The parameters of the result table. You can specify the parameters in the WITH clause that are supported by the result table. For more information, see Upsert Kafka WITH parameters, Hologres WITH parameters, StarRocks WITH parameters, or Paimon WITH parameters.

    Note: Both the key and the value must be of the STRING type. Example: 'jdbcWriteBatchSize' = '1024'.

  • source_table: The name of the table from which data is synchronized. You can use a catalog name and a database name to specify the name of the source table.

  • OPTIONS: The parameters of the source table. You can specify the parameters that are supported by the source table. For more information, see MySQL WITH parameters and Kafka WITH parameters.

    Note: Both the key and the value must be of the STRING type. Example: 'server-id' = '65500'.

  • ADD COLUMN: Adds columns to the result table when data is synchronized from the source table. Only computed columns can be added.

  • column_component: The description of the new column.

  • computed_column_expression: The expression of the computed column.

  • FIRST: Specifies that the new column is used as the first field of the source table. If you do not use this parameter, the new column is appended as the last field by default.

  • AFTER: Specifies that the new column is added after the specified field of the source table.

Note

The IF NOT EXISTS keyword is mandatory. If the result table does not exist in the destination store, it is created first. If the result table already exists, the creation step is bypassed. Sink tables that are created adopt the source tables' schemas, including primary keys and the names and types of physical fields. However, computed columns, meta fields, and watermarks are excluded. The field types of the source tables are mapped to the field types of the sink tables. For detailed information on data type mappings, refer to the documentation of the respective connector.

Sample code

Example 1: Single-table synchronization

Typically, the CREATE TABLE AS statement is used with both the data source's and the destination store's catalogs. For instance, you can synchronize full and incremental data from a MySQL database to Hologres using a MySQL catalog for the source table and a Hologres catalog for the destination table, without manually writing DDL statements.

For example, if you have a Hologres catalog named 'holo' and a MySQL catalog named 'mysql' in your workspace, you can synchronize data from the MySQL 'web_sales' table to Hologres with the following code:

USE CATALOG holo;

CREATE TABLE IF NOT EXISTS web_sales  
WITH ('jdbcWriteBatchSize' = '1024')   -- Configure the parameters of the result table. This setting is optional.
AS TABLE mysql.tpcds.web_sales   
/*+ OPTIONS('server-id'='8001-8004') */;  -- Specify additional parameters for the mysql-cdc source table.

Example 2: Merging and synchronization of multiple tables in a sharded database

To merge and synchronize data from multiple tables in a sharded database, use a MySQL catalog and a regular expression to match the tables you want to synchronize. The CREATE TABLE AS statement can then merge the data into a single Hologres table, with the shard and table names included as additional fields. To ensure a unique primary key, combine the shard name, table name, and original primary key as the new composite primary key in the Hologres table.

USE CATALOG holo;

CREATE TABLE IF NOT EXISTS user
WITH ('jdbcWriteBatchSize' = '1024')
AS TABLE mysql.`wp.*`.`user[0-9]+`  
/*+ OPTIONS('server-id'='8001-8004') */;

The diagram below shows the merging process.

If a new 'age' column is added to the 'user02' table and a record is inserted, the data and schema changes on the 'user02' table are automatically synchronized to the sink table in real time, even if the source tables' schemas differ.

ALTER TABLE `user02` ADD COLUMN `age` INT;
INSERT INTO `user02` (id, name, age) VALUES (27, 'Tony', 30);


Example 3: Addition of custom computed columns

This example demonstrates how to add computed columns for data conversion and computation to source tables during the merging and synchronization process of multiple tables in a sharded database. The 'user' table serves as an example.

USE CATALOG holo;

CREATE TABLE IF NOT EXISTS user
WITH ('jdbcWriteBatchSize' = '1024')
AS TABLE mysql.`wp.*`.`user[0-9]+`
/*+ OPTIONS('server-id'='8001-8004') */
ADD COLUMN (
  `c_id` AS `id` + 10 AFTER `id`,
  `class` AS 3 AFTER `id`
);

The following figure shows the effect of synchronization after the computed columns are added.

Example 4: Execution of multiple CREATE TABLE AS statements as one deployment

Realtime Compute for Apache Flink allows multiple CREATE TABLE AS statements to be committed as a single deployment by using the STATEMENT SET statement. The source operators are merged so that a single source operator can read data from multiple business tables. This is particularly beneficial for MySQL CDC data sources because it reduces server-id usage, the number of database connections, and the database read load.

Important

To merge the source operators so that a single operator reads data from multiple tables, ensure that the options of all source table operators are identical.

For instance, the first code segment synchronizes data from the 'web_sales' table, while the second code segment synchronizes data from multiple 'user' tables in a sharded database. You can use the STATEMENT SET statement to commit these segments as one deployment.

USE CATALOG holo;

BEGIN STATEMENT SET;

-- Synchronize data from the web_sales table.
CREATE TABLE IF NOT EXISTS web_sales
AS TABLE mysql.tpcds.web_sales
/*+ OPTIONS('server-id'='8001-8004') */;

-- Synchronize data from multiple tables in the specified database shards whose names start with user.
CREATE TABLE IF NOT EXISTS user
AS TABLE mysql.`wp.*`.`user[0-9]+`
/*+ OPTIONS('server-id'='8001-8004') */;

END;

Example 5: Synchronize data from the same data source table to different result tables by using multiple CREATE TABLE AS statements

In Realtime Compute for Apache Flink version VVR 4.0.16 or later, you can use multiple CREATE TABLE AS statements to synchronize data from the same data source table to different result tables without adding computed columns.

USE CATALOG `holo`;

BEGIN STATEMENT SET;

-- Execute the CREATE TABLE AS statement to synchronize data from the user table of the MySQL database to the user table in database1 of Hologres.
CREATE TABLE IF NOT EXISTS `database1`.`user`
AS TABLE `mysql`.`tpcds`.`user`
/*+ OPTIONS('server-id'='8001-8004') */;

-- Execute the CREATE TABLE AS statement to synchronize data from the user table of the MySQL database to the user table in database2 of Hologres.
CREATE TABLE IF NOT EXISTS `database2`.`user`
AS TABLE `mysql`.`tpcds`.`user`
/*+ OPTIONS('server-id'='8001-8004') */;

END;

If you need to add computed columns to the result table, execute the following statements for data synchronization:

-- Create a temporary table named user_with_changed_id based on the source table user. You can define a computed column. The following sample code defines the computed_id column that is calculated based on the id column of the source table.
CREATE TEMPORARY TABLE `user_with_changed_id` (
  `computed_id` AS `id` + 1000
) LIKE `mysql`.`tpcds`.`user`;

-- Create a temporary table named user_with_changed_age based on the source table user. You can define a computed column. The following sample code defines the computed_age column that is calculated based on the age column of the source table.
CREATE TEMPORARY TABLE `user_with_changed_age` (
  `computed_age` AS `age` + 1
) LIKE `mysql`.`tpcds`.`user`;

BEGIN STATEMENT SET;

-- Execute the CREATE TABLE AS statement to synchronize data from the user table of the MySQL database to the user_with_changed_id table of Hologres. The user_with_changed_id table contains the IDs that are obtained from the calculation based on the id column of the source table. The obtained IDs are in the computed_id column. 
CREATE TABLE IF NOT EXISTS `holo`.`tpcds`.`user_with_changed_id`
AS TABLE `user_with_changed_id`
/*+ OPTIONS('server-id'='8001-8004') */;

-- Execute the CREATE TABLE AS statement to synchronize data from the user table of the MySQL database to the user_with_changed_age table of Hologres. The user_with_changed_age table contains the age values that are obtained from the calculation based on the age column of the source table. The obtained age values are in the computed_age column. 
CREATE TABLE IF NOT EXISTS `holo`.`tpcds`.`user_with_changed_age`
AS TABLE `user_with_changed_age`
/*+ OPTIONS('server-id'='8001-8004') */;

END;

Example 6: Add a CREATE TABLE AS statement to a deployment that executes multiple CREATE TABLE AS statements

In Realtime Compute for Apache Flink version VVR 8.0.1 or later, if you add a CREATE TABLE AS statement to a deployment already executing multiple CREATE TABLE AS statements, you can restart the deployment from a savepoint to include the new table and synchronize its data.

  1. When developing an SQL draft, include the following statement to enable the feature for reading data from a new table:

    SET 'table.cdas.scan.newly-added-table.enabled' = 'true';
  2. To add a CREATE TABLE AS statement, stop the deployment on the Job Maintenance page and select Create A Snapshot Before Stopping.

  3. In SQL Development, add a CREATE TABLE AS statement and Deploy the SQL deployment again.

  4. On the Job Maintenance page, click the target deployment's name and select the State Management tab. Then, click History.

  5. In the Job Snapshots list, locate the snapshot created when the deployment was stopped.

  6. Click the Operation column of the desired snapshot and choose More > Restore Deployment From This Snapshot.

  7. In the Job Startup configuration dialog box, set up the deployment's startup details. For more information, see Job Startup.

Important

When adding a CREATE TABLE AS statement to a draft's code, consider the following:

  • For CDC source table data synchronization, the feature for reading from a new table is effective only for deployments with an Initial Mode startup.

  • The new CREATE TABLE AS statement's source table configuration must match the original source table's configuration for reusability.

  • Do not alter deployment parameters, such as startup mode, before and after adding a CREATE TABLE AS statement.

Example 7: Synchronize data from a MySQL source table to a partitioned table in Hologres by using the CREATE TABLE AS statement

When creating a partitioned table in Hologres, if a primary key is defined, the partition field must be included in the primary key. The following example shows how to synchronize data from a MySQL source table to a partitioned Hologres table using the CREATE TABLE AS statement.

Consider a MySQL table that needs to be synchronized to Hologres. The statement for creating the MySQL table is as follows:

CREATE TABLE orders (
    order_id INTEGER NOT NULL,
    product_id INTEGER NOT NULL,
    city VARCHAR(100) NOT NULL,
    order_date DATE,
    purchaser INTEGER,
    PRIMARY KEY(order_id, product_id)
);

When synchronizing data using the CREATE TABLE AS statement, handle the relationship between the primary key and partition field of the upstream table according to Hologres' partitioned table rules.

  • Scenario 1: The upstream table's primary key includes the partition field.

    If the partition field of the Hologres table, such as product_id, is part of the upstream table's primary key, you can directly synchronize data using the CREATE TABLE AS statement. For example:

    CREATE TABLE IF NOT EXISTS `holo`.`tpcds`.`orders`
    PARTITIONED BY (product_id)
    AS TABLE `mysql`.`tpcds`.`orders`;

    In this scenario, Hologres automatically verifies the inclusion of the partition field in the primary key and completes the synchronization.

  • Scenario 2: The upstream table's primary key does not include the partition field.

    If the partition field of the Hologres table, such as city, is not part of the upstream table's primary key, synchronization fails because Hologres requires the partition field to be part of the primary key.

    To address this, redefine the result table's primary key in the CREATE TABLE AS statement to include the partition field. For example:

    -- You can use the following SQL statement to specify the primary key of the partitioned table in Hologres as order_id, product_id, and city.
    CREATE TABLE IF NOT EXISTS `holo`.`tpcds`.`orders`(
        CONSTRAINT `PK_order_id_city` PRIMARY KEY (`order_id`,`product_id`,`city`) NOT ENFORCED
    )
    PARTITIONED BY (city)
    AS TABLE `mysql`.`tpcds`.`orders`;

    This ensures the partition field city is included in the primary key definition, complying with Hologres' partitioned table requirements.

Example 8: Synchronize data to a Hologres table by using the CREATE TABLE AS statement in type normalization mode

When executing the CREATE TABLE AS statement, you may need to adjust the data type precision for existing fields, such as changing from VARCHAR(10) to VARCHAR(20).

  • For Realtime Compute for Apache Flink versions earlier than vvr-6.0.5-flink-1.15, changing a field's data type in a source table's deployment may cause the deployment to fail, necessitating the recreation of the result table.

  • For deployments in Realtime Compute for Apache Flink version vvr-6.0.5-flink-1.15 or later, the type normalization mode is available. It is recommended to enable this mode when starting a deployment for the first time. If not enabled initially, the mode does not take effect, and you must delete the downstream table and restart the deployment without state data.

    CREATE TABLE IF NOT EXISTS `holo`.`tpcds`.`orders`
    WITH (
      'connector' = 'hologres',
      'enableTypeNormalization' = 'true' -- Use the type normalization mode.
    ) AS TABLE `mysql`.`tpcds`.`orders`;

    With type normalization mode, data type changes in the source table do not cause deployment failure as long as the pre- and post-change data types can be converted based on the normalization rules. The rules for type normalization mode are as follows:

    • TINYINT, SMALLINT, INT, and BIGINT data types are normalized to BIGINT.

    • CHAR, VARCHAR, and STRING data types are normalized to STRING.

    • FLOAT and DOUBLE data types are normalized to DOUBLE.

    • Other data types follow the original data type mapping rules. For more information, see Data type mappings.

    For example, when type normalization mode is enabled:

    • SMALLINT and INT are both normalized to BIGINT, so changing a column from SMALLINT to INT is considered successful and the deployment runs as expected.

    • FLOAT is normalized to DOUBLE, while BIGINT remains BIGINT, so changing a column from FLOAT to BIGINT results in a data type incompatibility error.

Example 9: Synchronize data from a MongoDB source table to a Hologres table by using the CREATE TABLE AS statement

In Realtime Compute for Apache Flink version VVR 8.0.6 or later, the CREATE TABLE AS statement can synchronize data from a MongoDB source table to a Hologres table in real time, including schema changes. A MongoDB catalog can be used without manually defining the schema. For details on the MongoDB catalog, see Manage MongoDB Catalog.

The sample code below demonstrates how to use the CREATE TABLE AS statement to synchronize data from a MongoDB source table to a Hologres table:

BEGIN STATEMENT SET;

CREATE TABLE IF NOT EXISTS `holo`.`database`.`table1`
AS TABLE `mongodb`.`database`.`collection1`
/*+ OPTIONS('scan.incremental.snapshot.enabled'='true','scan.full-changelog'='true') */;

CREATE TABLE IF NOT EXISTS `holo`.`database`.`table2`
AS TABLE `mongodb`.`database`.`collection2`
/*+ OPTIONS('scan.incremental.snapshot.enabled'='true','scan.full-changelog'='true') */;

END;
Important

When synchronizing data from a MongoDB database to a destination table using the CREATE TABLE AS or CREATE DATABASE AS statements, ensure the following conditions are met:

  • The VVR version of Realtime Compute for Apache Flink is 8.0.6 or later, and the MongoDB database version is 6.0 or later.

  • The scan.incremental.snapshot.enabled and scan.full-changelog parameters are set to true in the SQL hints.

  • The pre- and post-images feature is enabled for the MongoDB database. For more information, see Document Preimages.

To synchronize data from multiple MongoDB collections in a deployment, ensure that the configurations of the following parameters for all tables are identical:

  • Parameters related to the MongoDB database, such as hosts, scheme, username, password, and connectionOptions

  • scan.startup.mode

Example 10: Synchronize data from all tables in a MySQL database to Kafka

When multiple deployments use the same MySQL table, the MySQL database establishes multiple connections, leading to high server and network load. To alleviate this, Realtime Compute for Apache Flink enables synchronization of data from all tables in a MySQL database to Kafka, using Kafka as an intermediate layer for data synchronization. The CREATE DATABASE AS statement can synchronize data from all tables in a MySQL database to Kafka, and the CREATE TABLE AS statement can do the same for individual tables. For more information, see Synchronize data from all tables in a MySQL database to Kafka.
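As a sketch of whole-database synchronization to Kafka with the CREATE DATABASE AS statement (the catalog names kafka and mysql and the database name tpcds are illustrative assumptions):

```sql
-- Synchronize all tables of the MySQL tpcds database to Kafka. Downstream
-- deployments can then consume the Kafka topics instead of reading MySQL
-- directly, which reduces the number of MySQL connections.
CREATE DATABASE IF NOT EXISTS `kafka`.`tpcds`
AS DATABASE `mysql`.`tpcds` INCLUDING ALL TABLES
/*+ OPTIONS('server-id'='8001-8004') */;
```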
