DataWorks:Configure an SSH node

Last Updated: Nov 08, 2024

In DataWorks, you can create an SSH node that uses an SSH data source to remotely access the host to which the data source connects and trigger scripts to run on that host. For example, you can use this method to remotely access an Elastic Compute Service (ECS) instance from DataWorks and periodically schedule scripts on the ECS instance. This topic describes how to use an SSH node to develop tasks.

Limits

Tasks on SSH nodes can be run on a serverless resource group or an old-version exclusive resource group for scheduling. We recommend that you run tasks on a serverless resource group. For more information about how to purchase a serverless resource group, see Create and use a serverless resource group.

Precautions

  • If a task on an SSH node exits abnormally, for example, because the task times out, DataWorks does not send a process termination command to the remote host. The process that the SSH node started on the remote host is not affected and continues to run. If you want the remote script to stop on its own, see the sketch after this list.

  • SSH nodes support standard shell syntax but not interactive syntax.
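
The following sketch is not part of the original instructions. It assumes that the /tmp/nihao.sh script from the example later in this topic exists on the remote host, and shows one way to work within both precautions: bound the runtime of the remote script by using the GNU coreutils timeout command, and avoid commands that wait for keyboard input.

# Because DataWorks does not terminate the remote process when the node exits
# abnormally, let the remote script stop on its own after 30 minutes.
timeout 30m sh /tmp/nihao.sh

# Interactive syntax is not supported. Pass values explicitly instead of
# prompting for them with commands such as read.
NAME="dataworks"            # instead of: read -p "Enter a name: " NAME
echo "hello, ${NAME}"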

Prerequisites

  • A workflow is created.

    In DataStudio, development operations for different types of compute engines are performed based on workflows. Therefore, you must create a workflow before you create a node. For more information, see Create a workflow.

  • An SSH data source is added.

    Before you can develop and periodically schedule SSH tasks on an SSH node, you must add an SSH data source that is used to remotely access your SSH server. For information about how to add a data source, see Add an SSH data source.

    Note

    You can use SSH nodes to develop tasks only based on SSH data sources that are added to DataWorks in connection string mode. In addition, make sure that your data source is connected to the desired resource group to prevent task failures.

  • (Required if you use a RAM user to develop tasks) The RAM user is added to the DataWorks workspace as a member and is assigned the Develop or Workspace Administrator role. The Workspace Administrator role has more permissions than necessary. Exercise caution when you assign the Workspace Administrator role. For more information about how to add a member and assign roles to the member, see Add workspace members and assign roles to them.

Step 1: Create an SSH node

  1. Go to the DataStudio page.

    Log on to the DataWorks console. In the top navigation bar, select the desired region. In the left-side navigation pane, choose Data Development and Governance > Data Development. On the page that appears, select the desired workspace from the drop-down list and click Go to Data Development.

  2. On the DataStudio page, find the desired workflow, right-click the workflow name, and then choose Create Node > SSH.

  3. In the Create Node dialog box, configure the Name parameter and click Confirm. Then, you can use the created node to develop and configure tasks.

Step 2: Develop an SSH task

(Optional) Select an SSH data source

If multiple SSH data sources are added to your workspace, you must select one from the Select Data Source drop-down list in the upper part of the configuration tab of the node based on your business requirements. If only one SSH data source is added to your workspace, that data source is used by default.

Note

You can use SSH nodes to develop tasks only based on SSH data sources that are added to DataWorks in connection string mode. In addition, make sure that your data source is connected to the desired resource group to prevent task failures.

Develop code: Simple example

In the code editor on the configuration tab of the SSH node, write task code. Sample code:

# 1. Prepare the environment.
# Identify the file that you want to run on the remote host. In this example, the nihao.sh file is stored in the /tmp directory of the remote host.
# To facilitate testing, you can run the following command on the SSH node to create the nihao.sh file:
echo "echo nihao,dataworks" >/tmp/nihao.sh
# 2. Use the SSH node to trigger the running of the file on the remote host.
# Use the SSH node in DataWorks to trigger the running of the nihao.sh file.
sh /tmp/nihao.sh
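
If the connection to the data source succeeds and the commands run as expected, the run log of the node would contain output similar to the following line. This is an illustrative result rather than part of the original sample.

nihao,dataworks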

Develop code: Use scheduling parameters

DataWorks provides scheduling parameters whose values are dynamically replaced in the code of a task based on the configurations of the scheduling parameters in periodic scheduling scenarios. You can define variables in the task code in the ${Variable} format and assign values to the variables in the Scheduling Parameter section of the Properties tab. For information about the supported formats of scheduling parameters and how to configure scheduling parameters, see Supported formats of scheduling parameters and Configure and use scheduling parameters.

The following content provides an example on how to use scheduling parameters in an SSH node:

# Requirement: Write the time at which the SSH node is run to the sshnode.log file in the /tmp directory on a daily basis.
# Implementation: Use the ${myDate} variable in the code and assign $[yyyy-mm-dd hh24:mi:ss] to the myDate variable in the Scheduling Parameter section of the Properties tab.
echo ${myDate} >/tmp/sshnode.log
cat /tmp/sshnode.log
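
The ${myDate} variable is replaced before the code is run on the remote host. As an illustration, if you assign $[yyyy-mm-dd hh24:mi:ss] to the myDate parameter in the Scheduling Parameter section and an instance is scheduled to run at 2024-11-08 01:30:00, the commands that are actually run would be similar to the following. The timestamp is an assumed example.

# Illustrative only. The actual timestamp depends on the scheduling time of the instance.
echo 2024-11-08 01:30:00 >/tmp/sshnode.log
cat /tmp/sshnode.log
# Output: 2024-11-08 01:30:00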

Step 3: Configure task scheduling properties

If you want the system to periodically run a task on the node, you can click Properties in the right-side navigation pane on the configuration tab of the node to configure task scheduling properties based on your business requirements. For more information, see Overview.

Note

You must configure the Rerun and Parent Nodes parameters on the Properties tab before you commit the task.

Step 4: Debug task code

You can perform the following operations to check whether the task is configured as expected based on your business requirements:

  1. Optional. Select a resource group and assign custom parameters to variables.

    • Click the Advanced Run icon in the top toolbar of the configuration tab of the node. In the Parameters dialog box, select the resource group for scheduling that you want to use to debug and run the task code.

    • If you use scheduling parameters in your task code, assign constant values to the variables for debugging. For more information about the value assignment logic of scheduling parameters, see Debugging procedure.

  2. Save and run task code.

    In the top toolbar, click the Save icon to save the task code. Then, click the Run icon to run the task code.

  3. Optional. Perform smoke testing.

    You can perform smoke testing on the task in the development environment to check whether the task is run as expected when you commit the task or after you commit the task. For more information, see Perform smoke testing.

Step 5: Commit and deploy the task

After a task on a node is configured, you must commit and deploy the task. After you commit and deploy the task, the system runs the task on a regular basis based on scheduling configurations.

  1. Click the Save icon in the top toolbar to save the task.

  2. Click the Submit icon in the top toolbar to commit the task.

    In the Submit dialog box, configure the Change description parameter. Then, determine whether to have the task code reviewed after you commit the task, based on your business requirements.

    Note
    • You must configure the Rerun and Parent Nodes parameters on the Properties tab before you commit the task.

    • You can use the code review feature to ensure the code quality of tasks and prevent task execution errors caused by invalid task code. If you enable the code review feature, the task code that is committed can be deployed only after the task code passes the code review. For more information, see Code review.

If you use a workspace in standard mode, you must deploy the task in the production environment after you commit the task. To deploy a task on a node, click Deploy in the upper-right corner of the configuration tab of the node. For more information, see Deploy tasks.

What to do next

Task O&M: After you commit and deploy the task, the task is periodically run based on the scheduling configurations. You can click Operation Center in the upper-right corner of the configuration tab of the corresponding node to go to Operation Center and view the scheduling status of the task. For more information, see View and manage auto triggered tasks.

FAQ

Question: What do I do if a task on an SSH node keeps running beyond the expected duration and cannot exit?

Answer: The SSH server on the remote host may disconnect idle clients by default. If no data is exchanged between the client and the SSH server within 1 hour, the server disconnects from the client. However, DataWorks cannot detect that the connection is terminated and continues to run the task on the node.

Solution:

  1. Configure the following parameters in the configuration file of the SSH server, such as /etc/ssh/sshd_config, so that the server periodically sends keepalive messages and does not terminate idle connections. An illustrative sshd_config snippet is provided after these steps.

    • ClientAliveInterval: 30

    • ClientAliveCountMax: 0

    • TCPKeepAlive: yes

  2. Restart the SSH server.

    sudo service sshd restart

    Note

    The commands may vary based on the type and version of the operating system that you use.
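
For reference, the parameters in the first step correspond to entries similar to the following in /etc/ssh/sshd_config. This snippet is illustrative, and the values that are appropriate for your environment may differ by operating system and OpenSSH version.

# Illustrative /etc/ssh/sshd_config entries
ClientAliveInterval 30
ClientAliveCountMax 0
TCPKeepAlive yes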