
Dataphin: Configure SAP HANA Output Components

Last Updated: May 28, 2025

The SAP HANA output component writes data to an SAP HANA data source. When you synchronize data from other sources to SAP HANA, you must configure the target data source in the SAP HANA output component. This topic describes the configuration process.

Prerequisites

Procedure

  1. Select Development > Data Integration from the top menu bar on the Dataphin home page.

  2. On the integration page's top menu bar, select the Project (in Dev-Prod mode, you must also select an environment).

  3. In the left-side navigation pane, click Batch Pipeline. In the Batch Pipeline list, select the offline pipeline you want to develop to open its configuration page.

  4. Click Component Library in the upper-right corner of the page to open the Component Library panel.

  5. In the Component Library panel's left-side navigation pane, select Output. Find the SAP HANA component in the list on the right and drag it to the canvas.

  6. Connect the upstream input component to the SAP HANA output component by dragging a connection line between the two component cards.

  7. To configure the SAP HANA output component, click the configuration icon on the component card. The SAP HANA Output Configuration dialog box opens.

  8. In the SAP HANA Output Configuration dialog box, set the required parameters.

    Basic Settings

    Step Name

    The name of the SAP HANA output component. Dataphin generates a step name automatically; you can modify it to suit your business scenario. The naming conventions are as follows:

    • It can only contain Chinese characters, letters, underscores (_), and numbers.

    • The name can be up to 64 characters in length.

    Datasource

    The drop-down list displays all SAP HANA data sources, including those you have write permission for and those you do not. Click the copy icon to copy the current data source name.

    • For a data source you do not have write permission for, click Request next to the data source to request write permission. For details, see Request Data Source Permissions.

    • If no SAP HANA data source is available, click the Create icon to create one. For details, see Create an SAP HANA Data Source.

    Schema (optional)

    Select a schema. If you do not select one, the schema with the same name as the database user is used by default.

    Table

    Select the target table for the output data. You can search by a table name keyword, or enter the exact table name and click Precise Search. After you select a table, the system automatically checks the table status. Click the copy icon to copy the name of the selected table.

    Loading Policy

    Select the loading policy for data writing. The system supports the Append Data and Overwrite Data policies.

    • Append Data: If a primary key or unique constraint is violated, the system reports a dirty data error.

    • Overwrite Data: If a primary key or unique constraint is violated, the system first deletes the original row and then inserts the new row.
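    A hedged illustration of the two policies in plain SQL (a sketch only; the table and column names are hypothetical, and the connector's generated SQL may differ — SAP HANA's native UPSERT ... WITH PRIMARY KEY has the same replace-on-conflict effect as Overwrite Data):

    ```sql
    -- Hypothetical target table; ID is the primary key.
    CREATE TABLE SALES_TARGET (ID INT PRIMARY KEY, AMOUNT DECIMAL(12,2));

    -- Append Data: a plain INSERT. Writing a duplicate ID violates the
    -- primary key and surfaces as a dirty data error.
    INSERT INTO SALES_TARGET VALUES (1, 100.00);

    -- Overwrite Data: the conflicting row is replaced. SAP HANA's
    -- UPSERT ... WITH PRIMARY KEY expresses the same semantics.
    UPSERT SALES_TARGET VALUES (1, 250.00) WITH PRIMARY KEY;
    ```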

    Batch Write Data Volume (optional)

    The amount of data written in one batch. You can also set Batch Write Count; during writing, the system flushes a batch when either limit is reached first. The default is 32 MB.

    Batch Write Count (optional)

    The default is 2048 entries. During data synchronization, a batch writing strategy is used, controlled by two settings: Batch Write Count and Batch Write Data Volume.

    • When the accumulated data reaches either limit (the batch write data volume or the batch write count), the system considers the batch full and immediately writes it to the target in one operation.

    • A batch write data volume of 32 MB is recommended. Adjust the upper limit of the batch write count according to the actual size of a single record; it is usually set to a large value to take full advantage of batch writing. For example, if a single record is about 1 KB and you set the batch write data volume to 16 MB, set the batch write count to more than 16 MB divided by 1 KB, that is, more than 16384 entries (assume 20000 here). With this configuration, the data volume limit triggers the writes: a batch is flushed whenever the accumulated data reaches 16 MB.
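    The sizing rule in the example can be verified with a quick query against SAP HANA's built-in DUMMY table (the 16 MB and 1 KB figures are the example's assumptions):

    ```sql
    -- 16 MB batch byte limit / ~1 KB per record = 16384 records per batch,
    -- so Batch Write Count should exceed 16384 (the example uses 20000).
    SELECT (16 * 1024 * 1024) / 1024 AS records_per_batch FROM DUMMY;
    ```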

    Preparation Statement (optional)

    The SQL script executed on the database before data import.

    For example, to keep the service continuously available: before the current step writes data, create a staging table Target_A and direct the write to it. After the write completes, rename the table Service_B that continuously serves traffic to Temp_C, rename Target_A to Service_B, and finally delete Temp_C.

    End Statement (optional)

    The SQL script executed on the database after data import.
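    The table-swap example above can be sketched in SQL as follows (Target_A, Service_B, and Temp_C are the example's table names; the exact DDL depends on your schema):

    ```sql
    -- Preparation Statement: create the staging table before the write.
    -- SAP HANA's CREATE TABLE ... LIKE copies the table structure.
    CREATE TABLE Target_A LIKE Service_B WITH NO DATA;

    -- (The pipeline then writes the synchronized data into Target_A.)

    -- End Statement: swap the staging table into service after the write.
    RENAME TABLE Service_B TO Temp_C;
    RENAME TABLE Target_A TO Service_B;
    DROP TABLE Temp_C;
    ```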

    Field Mapping

    Input Field

    Displays the input fields based on the output of the upstream component.

    Output Field

    Displays the output fields. You can perform the following operations:

    • Field Management: Click Field Management to select output fields.

      • Click the remove icon to move fields from Selected Input Fields to Unselected Input Fields.

      • Click the add icon to move fields from Unselected Input Fields to Selected Input Fields.

    • Batch Add: Click Batch Add to configure fields in batches in JSON, TEXT, or DDL format.

      • Batch configuration in JSON format, for example:

        // Example:
        [{
          "name": "user_id",
          "type": "String"
         },
         {
          "name": "user_name",
          "type": "String"
         }]
        Note

        name indicates the name of the imported field, and type indicates the type of the imported field. For example, "name":"user_id","type":"String" indicates that the field named user_id is imported, and the field type is set to String.

      • Batch configuration in TEXT format, for example:

        // Example:
        user_id,String
        user_name,String
        • The row delimiter separates field entries. The default is a line feed (\n); line feed (\n), semicolon (;), and period (.) are supported.

        • The column delimiter separates the field name from the field type. The default is a comma (,).

      • Batch configuration in DDL format, for example:

        CREATE TABLE tablename (
            id INT PRIMARY KEY,
            name VARCHAR(50),
            age INT
        );
    • Create Output Field: Click + Create Output Field, enter the Column name, and select the Type as prompted. After you complete the current row, click the save icon to save it.

    Quick Mapping

    Based on the upstream input and the fields of the target table, you can map fields manually. Quick Mapping supports Name Mapping and Row Mapping.

    • Name Mapping: Maps fields that have the same name.

    • Row Mapping: Maps fields by position when the field names of the source and target tables differ; the input field and output field in the same row are mapped to each other.

  9. To finalize the property configuration for the SAP HANA Output Component, click Confirm.