Alibaba Cloud Model Studio: Workflow application

Last Updated: Feb 13, 2025

Workflow applications streamline complex tasks by breaking them down into a series of steps. You can create a workflow application in the Model Studio console to integrate large language models (LLMs), APIs, and other nodes, which effectively reduces coding effort. This topic describes how to create a workflow application and use the nodes.

Overview

Scenarios

Why use workflow applications?

Workflows are essential in modern software development and business management because they break complex tasks into steps and thereby enhance efficiency. In Model Studio, workflow applications enable clear definition of task execution order, responsibility allocation, and step dependencies, facilitating automation and optimization.

You can use workflow applications in the following scenarios:

  • Travel planning: Specify parameters such as destination to automatically generate travel plans, including flights, accommodations, and attractions.

  • Report analysis: Use data processing, analysis, and visualization plug-ins to produce structured and formatted analysis reports for complex datasets.

  • Customer service: Automatically classify and deal with customer inquiries to enhance the speed and precision of customer service responses.

  • Content creation: Produce content such as articles and marketing materials based on themes and requirements.

  • Education and training: Design personalized learning plans that include progress tracking and assessments, facilitating student self-learning.

  • Medical consultation: Use various analysis tools to generate preliminary diagnoses or examination recommendations based on patient symptoms and help doctors make further decisions.

Supported models

  • Text generation: Qwen-Max, Qwen-Plus, Qwen-Turbo

  • Image understanding: QwenVL-Plus, QwenVL-Max

For more information about the models, see List of models.

Use case: Telecom fraud detection

This example shows how to create a workflow application to identify whether a text message is related to telecom fraud.

  1. Log on to the Model Studio console. In the left-side navigation pane, choose My Applications.

  2. Click Create Application, choose Workflow Application, and click Create Task-based Workflow.

    image

  3. On the Canvas Configuration page, you can see that the Start node already has two preset parameters. You can modify or delete them based on your requirements.

  4. Drag an LLM node from the left-side pane into the canvas. Connect it to the Start node and configure the parameters.

    • Configure the Start node: Delete the city and date parameters. The Start node has another built-in default parameter, query.

    • Configure the LLM node:

      • Mode: Single Mode

      • Model Configuration: Qwen-Max

      • Temperature: Use the default value.

      • Maximum Response Length: 1024

      • Prompt:

        System Prompt:
        Analyze and determine whether the given information is suspected of fraud. Provide a definite answer on whether there is a suspicion of fraud.
        Processing requirements: Carefully review the content of the information, focusing on keywords and typical fraud patterns, such as requests for urgent transfers, provision of personal information, and promises of unrealistic benefits.
        Procedure:
        1. Identify key elements in the information, including but not limited to the sender's identity, requests made, promised returns, and any urgency expressions.
        2. Compare with known fraud case characteristics to check if there are similar tactics or language patterns in the information.
        3. Evaluate the overall reasonableness of the information, considering whether the requests made are in line with conventional logic and processes.
        4. If the information contains links or attachments, do not click or download them directly to avoid potential security risks, and remind users of the dangers of such content.
        Output format: Clearly indicate whether the information exhibits characteristics of fraud and briefly explain the basis for judgment. If there is a suspicion of fraud, provide some suggestions or preventive measures to protect user safety.

        User Prompt:
        Determine whether “${sys.query}” is suspected of fraud.

        Note: You can enter / to insert variables. Select query from System Variables.

        image

      • Output: Use the default value.

      image

  5. Drag an Intent Classification node from the left-side pane into the canvas. Connect the LLM node to the Intent Classification node and configure the following parameters.

    • Input: Select System Variable query.

    • Model Configuration: Qwen-Plus

    • Intent Configuration: Add the following categories. The model will match subsequent links based on different intent descriptions.

      • The information involves fraud

      • The information does not involve fraud

    • Other Intents: Use the default value. If no intent is matched, this link is matched.

    • Intent Mode: Select Single Selection.

      Single Selection: The model will select and output the most appropriate intent from the configured intents.
      Multiple Selection: The model will select and output all appropriate intents from the configured intents.

    • Thinking Mode: Select Speed Mode.

      Speed Mode: The model will reduce the thinking process to improve response speed.
      Effect Mode: The model will gradually think and output more accurate answers.

    • Advanced Configurations: Use the default value. You can provide additional prompts to the model as advanced configurations, where you can input more conditions or examples to make the model's classification more in line with your requirements.

    • Output: Use the default value.

    image

  6. Drag a Text Conversion node from the left-side pane into the canvas. Connect all output ports of the Intent Classification node to the Text Conversion node and configure the following parameters.

    • Output Mode: Select Text Output.

    • Enter box: Enter / to insert the following variables:

      • Intent Classification_1 > result > thought

      • Intent Classification_1 > result > subject

      • LLM_1 > result

    image

    image

  7. Connect the Text Conversion node to the End node and configure the following parameters.

    • Output Mode: Select Text Output.

    • Input: Enter / to insert Text Conversion_1 > result.

      image

    • Response: Use the default value.

    image

  8. Click Test in the upper-right corner and enter: Your package has been stored at the pickup station for several days. Please come and collect it at your earliest convenience. Then click Execute.

  9. After the workflow is executed, the End node displays the Run Result.

    image

  10. Click Test again and enter: You've won $1 million in the lottery. Please check. Then click Execute.

  11. After the workflow is executed, the End node displays the Run Result.

    image

  12. Click Publish in the upper right corner.

Node description

Start/End node

  • The start and end of a workflow. Each workflow must include both a Start and an End node. The End node cannot be followed by any other nodes. The workflow application only outputs the execution result after reaching the End node. The End node must declare one or more output variables, which can be the output variables of any upstream node.

  • Start node parameters:

    • Variable Name: The key of the input parameter.

    • Type: The type of the parameter. Valid values: String, Boolean, Number.

    • Description: Parameter description.

    Note

    The Start node includes a built-in system variable: System Variables.query, which represents the query input by the user.

  • End node parameters:

    • Output Mode: Choose from Text Output and JSON Output.

    • Input box:

      Text Output: Enter / to insert variables.

      JSON Output: Enter variable name and select Reference or Input.

    • Response: The response switch.

  • Example:

    In the following workflow, the End node references the variable Knowledge Base_1/result, which is the output of the preceding node. This indicates that the workflow ends after the Knowledge Base node is executed and outputs the results from the Knowledge Base node.

    Click Test and enter Phones as the query:

    image

    Sample output:

    image

Knowledge Base node

  • Search for content or chunks in the configured knowledge bases and output the search results and related information. The Knowledge Base node can act as a preceding node for the LLM node.

  • Parameters:

    • Input: Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    • Select Knowledge Base: Select one or more desired knowledge bases.

    • Output: The variable name for the output of the node. Subsequent nodes can identify and process the variable by its name.

  • Example:

    Common scenario: An AI Q&A system based on external data or knowledge.

    The following figure shows a simple knowledge base Q&A system. The Knowledge Base node precedes the LLM node. When a user enters a query, the system forwards the query to the Knowledge Base node for retrieval. The node first searches the knowledge base for text content closely related to the query and recalls it. Then, the recalled content and the query are passed to the LLM.

    Click Test and enter Tell me about your phones in the query parameter:

    image

    Knowledge Base node output:

    rewriteQuery: A rewritten version of the user's query, optimized for better matching with available information or documents.
    documentName: The name or identifier of the document, used to reference specific documents.
    title: The title of the document, usually the name or description of the document, helping users quickly understand the subject of the document.
    content: The main content of the document, usually a piece of text providing information related to the user's query.
    score: A score value indicating the relevance of the document to the query.
    {
      "result": {
        "rewriteQuery": "Tell me about your mobile phones",
        "chunkList": [
          {
            "score": 0.3639097213745117,
            "documentName": "Bailian Series Mobile Phone Product Introduction",
            "title": "Bailian Mobile Phone Product Introduction",
            "content": "Reference Price: 5999-6499. Bailian Ace Ultra - The Choice for Gamers: Equipped with a 6.67-inch 1080 x 2400 pixel screen, built-in 10GB RAM and 256GB storage, ensuring smooth and unobstructed gaming. Bailian Ace Ultra - The Choice for Gamers: Equipped with a 6.67-inch 1080 x 2400 pixel screen, built-in 10GB RAM and 256GB storage, ensuring smooth and unobstructed gaming. 5500mAh battery with liquid cooling system keeps calm during long gaming sessions. High dynamic dual speakers enhance the immersive sound effect for an upgraded gaming experience. Reference Price: 3999-4299. Bailian Zephyr Z9 - Lightweight and Portable Art: Lightweight 6.4-inch 1080 x 2340 pixel design, paired with 128GB storage and 6GB RAM, easily handling daily use. 4000mAh battery ensures worry-free all-day use, 30x digital zoom lens captures distant details, lightweight yet powerful. Reference Price: 2499-2799. Bailian Flex Fold+ - A New Era of Foldable Screens: Combining innovation and luxury, the main screen is 7.6 inches 1800 x 2400 pixels with an external screen of 4.7 inches 1080 x 2400 pixels, multi-angle free hover design meets different scene needs. 512GB storage, 12GB RAM, plus a 4700mAh battery and UTG ultra-thin flexible glass, opening a new chapter in the era of foldable screens. In addition, this phone also supports Dual SIM Dual Standby and satellite calls, helping you stay connected anywhere in the world. Reference Retail Price: 9999-10999.",
            "score": 85.7
          },
          {
            "score": 0.3558429479598999,
            "documentName": "Bailian Series Mobile Phone Product Overview",
            "title": "Bailian Mobile Phone Product Overview",
            "content": "Welcome to the forefront of future technology, exploring our carefully crafted smartphone series, each designed to fulfill your infinite imagination of a tech life. Bailian X1 - Enjoy the Ultimate Visual Experience: Equipped with a 6.7-inch 1440 x 3200 pixel ultra-clear screen, paired with a 120Hz refresh rate, a smooth visual experience leaps before your eyes. 256GB mass storage space and 12GB RAM work together to easily handle large games or multitasking. 5000mAh battery long battery life, plus ultra-sensitive quad-camera system, capturing every exciting moment of life. Reference Price: 4599-4999. Qwen Vivid 7 - New Experience of Smart Photography: Featuring a 6.5-inch 1080 x 2400 pixel full screen, AI smart photography function allows every photo to show professional-grade colors and details. 8GB RAM and 128GB storage space ensure smooth operation, 4500mAh battery meets daily needs. Side fingerprint unlock, convenient and safe. Reference Price: 2999-3299. Stardust S9 Pro - Innovative Visual Feast: Breakthrough 6.9-inch 1440 x 3088 pixel under-screen camera design brings a borderless visual enjoyment. 512GB storage and 16GB RAM top configuration, combined with a 6000mAh battery and 100W fast charging technology, let performance and battery life go hand in hand, leading the technology trend. Reference Price: 5999-6499. Bailian Ace Ultra - The Choice for Gamers: Equipped with a 6.67-inch 1080 x 2400 pixel screen, built-in 10GB RAM and 256GB storage, ensuring smooth and unobstructed gaming.",
            "score": 85.2
          },
          {
            "score": 0.28801095485687256,
            "documentName": "Bailian Series Mobile Phone Overview",
            "title": "Bailian Series Mobile Phone Overview",
            "content": "In addition, this phone also supports Dual SIM Dual Standby and satellite calls, helping you stay connected anywhere in the world. Reference Retail Price: 9999-10999. Each phone is meticulously crafted to create a technological masterpiece in your hands. Choose your smart partner and start a new chapter in future tech life.",
            "score": 86.3
          }
        ]
      }
    }

    Sample output:

    image
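
    The following Python sketch shows how a downstream component outside the canvas might consume a result shaped like the sample above. The field names (result, chunkList, content, score) come from the sample output; the helper name, the threshold, and the file name are illustrative assumptions, not part of Model Studio.

      # Minimal sketch: pick the recalled chunks from a Knowledge Base node
      # result shaped like the sample JSON above. Field names follow the
      # sample output; the threshold, helper, and file name are illustrative.
      import json

      def top_chunks(knowledge_result: dict, min_score: float = 0.3) -> list[str]:
          chunks = knowledge_result.get("result", {}).get("chunkList", [])
          # Keep chunks whose relevance score passes the (illustrative) threshold,
          # ordered from most to least relevant.
          relevant = [c for c in chunks if c.get("score", 0) >= min_score]
          relevant.sort(key=lambda c: c.get("score", 0), reverse=True)
          return [c["content"] for c in relevant]

      if __name__ == "__main__":
          # knowledge_output.json is a hypothetical file holding the payload above.
          with open("knowledge_output.json", encoding="utf-8") as f:
              payload = json.load(f)
          for text in top_chunks(payload):
              print(text[:80], "...")

    Inside a workflow you would normally pass the recalled content to the LLM node as a variable rather than post-process it in code; the sketch is only meant to make the payload structure concrete.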

LLM node - Task-based Workflow

  • The LLM node processes input variables or content and outputs the result as a variable for subsequent transmission. As the core of a workflow, it leverages the chat, generation, and classification capabilities of LLMs to handle various tasks based on prompts. Suitable for all stages in a workflow.

  • Node parameters:

    • Mode:

      Single Mode: The node runs once and processes the input variables directly.

      Batch Mode: The node runs multiple times. Each time it runs, an item from the list is sequentially assigned to the batch variable. This process continues until all items in the list have been processed or the maximum number of batches has been reached.

      Batch Configuration:

      • Maximum Number of Batches: The upper limit for batch iterations. Valid range: 1 to 100. The default value for regular users: 100.

        Note

        The actual number of runs depends on the minimum length of the arrays in the user's input. If there are no input variables, it depends on the number of batches configured. (These rules are restated in a sketch after the examples below.)

      • Number of Parallel Runs: The concurrency limit of batch processing. If you set this value to 1, all tasks are executed in series. Valid range: 1 to 10.

    • Model Configuration: Choose a suitable LLM and configure the model parameters. For a list of supported models, see Supported models.

      When using a visual model (see the Image model example below):

      • Model Input Parameters: Use vlImageUrl to reference a variable or enter image URLs.

      • Image Source: Choose from Image Set or Video Frame.

        • Image Set: The model considers the uploaded images independent from each other and matches the images and queries for understanding.

        • Video Frame: The model considers the uploaded images frames from the same video and understands them sequentially as a whole. The number of video frames must be no less than four.

    • Parameter Configuration:

      Temperature: Adjusts content diversity. Higher values increase randomness and uniqueness, while lower values yield more predictable and consistent results.

      Maximum Reply Length: The maximum text length generated by the model, not including the prompt. This limit varies by model.

    • System Prompt: Defines the role, task, and output format of the model. For example: "You are a math expert, specializing in solving math problems. Please output the math problem-solving process and results in the specified format."

    • User Prompt: Set up the prompt template and insert variables. The model generates content based on this configuration.

    • Output: The variable name for the output of the node. Subsequent nodes can identify and process the variable by its name.

  • Text model example:

    Click Test and enter Chip Engineer as the query:

    image

    Sample output:

    image

  • Image model example:

    The large model accepts either a single image or multiple images and supports input as a URL or Base64-encoded data.

    Note

    You can directly input a single image, such as https://****.com/****.jpg.

    You can input multiple images as a list, such as ["URL", "URL", "URL"].

    Click Test, enter https://****.com/****.jpg as the query.

    image

    Sample output:

    image
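
  • Batch Mode sketch (illustrative):

    The Batch Mode rules in the parameter table (the run count is bounded by the shortest input array and by Maximum Number of Batches, and runs may execute with limited parallelism) can be restated in a short sketch. This is not Model Studio's implementation; every name below is hypothetical.

      # Illustrative restatement of the Batch Mode rules described above.
      # This is not Model Studio code; names and defaults are hypothetical.
      from concurrent.futures import ThreadPoolExecutor

      def run_batches(input_arrays: dict[str, list], max_batches: int = 100,
                      parallel_runs: int = 1, process_item=print):
          if input_arrays:
              # The actual run count follows the shortest array among the inputs.
              runs = min(max_batches, min(len(v) for v in input_arrays.values()))
          else:
              # With no input variables, only the configured batch count applies.
              runs = max_batches

          def one_run(i: int):
              # Each run receives the i-th item of every input array.
              batch_vars = {name: values[i] for name, values in input_arrays.items()}
              return process_item(batch_vars)

          # parallel_runs = 1 means strictly serial execution.
          with ThreadPoolExecutor(max_workers=parallel_runs) as pool:
              return list(pool.map(one_run, range(runs)))

      # Example: two 3-item arrays, at most 100 batches, 2 runs in parallel.
      run_batches({"city": ["Beijing", "Paris", "Tokyo"],
                   "topic": ["food", "art", "tech"]}, parallel_runs=2)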

LLM node - Dialog Workflow

  • Different from the LLM node of task-based workflows, the LLM node of dialog workflows supports multi-round conversations.

    Round Configuration: The application collects the variables from previous rounds specified in Context and passes them as input parameters to the LLM.

    image

  • Context: The context required by the LLM. The default ${System Variables.historyList} represents the input and output of the application from previous rounds. Other parameters refer to the variables from previous rounds.

    image

API node

  • The API node calls custom API services using POST or GET methods and outputs the results of the API call.

  • Parameters:

    • API Request URL: The URL of the API to call. You can select POST or GET as the request method.

    • Header Settings: Configure the Header parameters as KEY-VALUE pairs.

    • Param Settings: Configure the Param parameters as KEY-VALUE pairs.

    • Body Settings: Valid values: none, form-data, raw, JSON.

    • Output: The variable name for the output of the node. Subsequent nodes can identify and process the variable by its name.

  • Example:

    Use the POST method to call the interface.

    image
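
    For readers who want to see the shape of such a call outside the canvas, the following Python sketch issues the same kind of POST: a request URL, Header KEY-VALUE pairs, Param KEY-VALUE pairs, and a JSON body. The endpoint, header names, and payload are placeholders, not values required by Model Studio.

      # Minimal sketch of the kind of request the API node issues: a POST with
      # header KEY-VALUE pairs, query params, and a JSON body.
      # The URL, header names, and payload below are placeholders only.
      import requests

      response = requests.post(
          "https://api.example.com/fraud-check",          # API Request URL (placeholder)
          headers={"Content-Type": "application/json",    # Header Settings
                   "X-Api-Key": "YOUR_SERVICE_KEY"},
          params={"version": "v1"},                       # Param Settings
          json={"message": "Please verify your account"}, # Body Settings (JSON)
          timeout=10,
      )
      response.raise_for_status()
      # The node exposes the call result as its output variable; here we just print it.
      print(response.json())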

Intent Classification node

  • The Intent Classification node intelligently classifies and matches based on intent descriptions, selecting one of the links to execute.

  • Parameters:

    • Input: Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    • Model Configuration: Select Qwen-Plus.

    • Intent Configuration: Configure different intent categories and corresponding descriptions. The model matches subsequent links based on the descriptions. For example: "Math problems" and "Weather Q&A".

    • Other Intents: If no other intents are matched, this link is matched.

    • Intent Mode:

      • Single Selection: The LLM selects and outputs the most appropriate intent from Intent Configuration.

      • Multiple Selection: The LLM selects and outputs all matching intents from Intent Configuration.

    • Thinking Mode:

      • Speed Mode: This mode does not output the thinking process, which improves speed. Suitable for simple scenarios.

      • Effect Mode: This mode thinks step by step to output more accurate answers.

    • Advanced Configurations: You can provide additional prompts to the model as advanced configurations, where you can input more conditions or examples to make the model's classification more in line with your requirements.

      Example

      Suppose you are developing a customer service system for an e-commerce platform, and users may ask various questions about order inquiries, returns, and payments. To ensure accurate classification by the model, you can add relevant prompts and examples in the advanced configuration.

      Please classify the intent based on the following examples:
      Example 1: User input "I want to return the coat I just bought", classified as "Return".
      Example 2: User input "Please help me check the shipping status of the order", classified as "Order Inquiry".
      Conditions: Only process queries related to orders, ignore payment and technical issues.

      Effect:

      User input: "When can the book I ordered on your website last week be delivered to my home?"

      Classification result: "Order Inquiry"

      In this example, the advanced configuration guides the model to classify "query delivery time" as the "Order Inquiry" intent by providing specific classification examples, while also limiting the classification scope and excluding other unrelated issues.

    • Context: When Context is enabled, the system automatically records the conversation history in the Message format. When the model is called, the context is passed in, and the model generates output based on the context.

      This configuration item is only available in the Intent Classification node of dialog workflows.

    • Output: The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.

    Note

    • This node supports context in dialog workflows.

    • Running this node consumes tokens. The amount consumed is displayed.

Text Conversion node

  • The Text Conversion node is used for text content conversion and processing, such as extracting specific content or converting formats, and supports template mode.

  • Parameters:

    • Output Mode: Choose from Text Output and JSON Output.

    • Input box: Specify a processing method in which the node converts the input into a specific format. You can reference the results of predecessor nodes through variables.

      • Text Output: Enter / to insert variables.

      • JSON Output: Enter a variable name and select Reference or Input.

  • Example:

    The following example shows a basic text conversion workflow: after the user enters a keyword, the Text Conversion node receives the keyword, processes it, and produces an appropriate output.

    Click Test and enter Mathematics as the query:

    image

    Sample output:

    image

Script Conversion node

  • The Script Conversion node uses the specified code to convert the input into a specific format or form. The process includes parsing, converting, and formatting for consistency and readability.

  • Parameters:

    • Input: Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    • Code: The code that converts the input into a specific format for subsequent nodes. In the code, you can reference variables from preceding nodes.

    • Output: The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.

  • Example:

    The following example shows a basic script conversion workflow. The user first inputs two parameters, which are then passed to the Script Conversion node. Inside the node, the code processes these parameters and ultimately generates the required output. A sketch of what such conversion code might look like follows the sample output below.

    Click Test and enter Beijing for City and February 10, 2022 for Date:

    image

    Sample output:

    image
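
    This topic does not show the exact entry-point signature that the Script Conversion node expects, so the following Python sketch only illustrates the kind of transformation such code might perform on the two example inputs (City and Date). The function name, parameter names, and return shape are assumptions.

      # Illustrative conversion script for the example above: it takes the two
      # input parameters and returns a structured result for downstream nodes.
      # The entry-point name and return shape are assumptions; the signature
      # required by the Script Conversion node may differ.
      def convert(params: dict) -> dict:
          city = params.get("city", "").strip()
          date = params.get("date", "").strip()
          # Normalize the inputs into a single formatted string plus echo fields.
          return {
              "summary": f"City: {city}; Date: {date}",
              "city": city,
              "date": date,
          }

      if __name__ == "__main__":
          print(convert({"city": "Beijing", "date": "February 10, 2022"}))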

Conditional Judgment node

  • Specify conditions for the Conditional Judgment node. The node selects the subsequent link based on the conditions. You can configure and/or conditions. If multiple conditions are met, the links are executed in order from top to bottom.

  • Parameters:

    • Conditional Branch: Enter the conditional judgment statements.

    • Other: The link that is executed when no configured condition is met.

  • Example:

    The following example shows a Conditional Judgment workflow. The user inputs two parameters, which are then passed to the Conditional Judgment node. Inside the node, the parameters undergo conditional evaluation, and the output response is then generated through different branches of the Text Conversion node. The equivalent branching logic is sketched in code after the sample output below.

    Click Test and enter 12345 for secret and admin for admin.

    image

    Sample output:

    image
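
    As a plain-code analogy (not Model Studio's evaluation engine), the branch selection described above behaves roughly like an if/elif chain evaluated from top to bottom, with and/or combinations inside each branch. The branch names below are hypothetical; only the two input names mirror the example.

      # Plain-code analogy for the Conditional Judgment node: branches are checked
      # from top to bottom and the first match is taken; "Other" is the fallback.
      # Branch names are hypothetical; the inputs mirror the example above.
      def choose_branch(secret: str, admin: str) -> str:
          # Branch 1: two conditions combined with "and".
          if secret == "12345" and admin == "admin":
              return "branch_both_match"      # routed to one Text Conversion node
          # Branch 2: either condition alone, combined with "or".
          elif secret == "12345" or admin == "admin":
              return "branch_partial_match"   # routed to another Text Conversion node
          # "Other": taken when no configured condition is met.
          else:
              return "branch_other"

      print(choose_branch("12345", "admin"))  # -> branch_both_match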

Function Compute node

  • After you authorize Function Compute in the canvas, you can use the node to call the custom services in Function Compute.

  • Parameters:

    • Input: Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    • Region: Select one of the regions: Singapore, Kuala Lumpur, Jakarta.

    • Service Configuration: Select the service configuration.

    • Output: The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.
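
  • Example:

    A custom service on Function Compute is typically a small handler that receives an event payload and returns a result. The Python sketch below shows that general shape only: the handler(event, context) form is the common Function Compute event-handler signature, and the payload fields and return structure are assumptions for illustration, not requirements of this node.

      # Sketch of a custom Function Compute service that this node could call.
      # The payload fields and return structure below are illustrative only.
      import json

      def handler(event, context):
          # The input variables are assumed to arrive as a JSON event payload.
          data = json.loads(event) if isinstance(event, (bytes, str)) else event
          message = data.get("message", "")

          # Custom processing, e.g. flag obviously suspicious keywords.
          suspicious = any(word in message for word in ("transfer", "lottery", "prize"))

          # Return a JSON-serializable result for the node's output variable.
          return json.dumps({"suspicious": suspicious, "length": len(message)})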

Plug-in node

  • Plug-in nodes can be integrated into the workflow application to enhance its functionality and perform more sophisticated tasks. Model Studio offers a series of official plug-ins, including Calculator and Image Generation. You can also develop custom plug-ins based on your requirements.

    For more information, see Plug-in overview.

Publish application

Click Publish in the upper-right corner of the canvas. After you publish the application, it can be called through the API or shared on a web page with RAM users under the same Alibaba Cloud account.

Use API

On the Publish Channel tab, click View API to learn how to call the application by using the API.

Note: Replace YOUR_API_KEY with your actual API key before you initiate the call.

image

Note

For more information, see Application API reference.
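
As a rough illustration of what such a call can look like (the authoritative request format is in the Application API reference above), the Python sketch below sends a query to a published workflow application over HTTPS with an API key in the Authorization header. The endpoint, application ID, and field names are placeholders and may differ from the actual API.

    # Rough sketch of calling a published workflow application over HTTPS.
    # The endpoint, app ID, and field names are placeholders; follow the
    # Application API reference for the authoritative request format.
    import os
    import requests

    API_KEY = os.environ["MODEL_STUDIO_API_KEY"]   # keep keys out of source code
    APP_ID = "YOUR_APP_ID"                         # placeholder application ID

    resp = requests.post(
        f"https://your-model-studio-endpoint/api/v1/apps/{APP_ID}/completion",
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        json={"input": {"prompt": "You've won $1 million in the lottery. Please check."}},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())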

Other calling methods (dialog workflow applications)

For more information, see Application sharing.

View application versions

  1. To publish a new version, click Publish in the upper-right corner of the page. Enter the version details (such as 1.0.0) and click OK. image

  2. Click the image icon at the top of the page. In the Historical Version panel, you can view or apply different versions of the application by clicking Use This Version or Return to Current Canvas. image

    Click Export This Version in DSL to export the DSL of a specific historical version.

  3. (Optional) You can use the toolbar to view or search for nodes on the current canvas. image