Alibaba Cloud Model Studio: Workflow application

Last Updated: Nov 27, 2024

Workflow applications streamline complex tasks by breaking them down into a series of steps. You can create a workflow application in the Alibaba Cloud Model Studio console to integrate large language models (LLMs), APIs, and other nodes, effectively reducing coding effort. This topic describes how to create a workflow application and use the nodes.

Overview

Scenarios

  • Travel planning: Specify parameters such as destination to automatically generate travel plans, including flights, accommodations, and attractions.

  • Report analysis: Use data processing, analysis, and visualization plug-ins to produce structured and formatted analysis reports for complex datasets.

  • Customer service: Automatically classify and deal with customer inquiries to enhance the speed and precision of customer service responses.

  • Content creation: Produce content such as articles and marketing materials based on themes and requirements.

  • Education and training: Design personalized learning plans that include progress tracking and assessments, facilitating student self-learning.

  • Medical consultation: Use various analysis tools to generate preliminary diagnoses or examination recommendations based on patient symptoms and help doctors make further decisions.

Supported Models

  • Qwen-Max

  • Qwen-Plus

  • Qwen-Turbo

  • Qwen-VL-Plus

  • Qwen-VL-Max

For more information about the models, see List of models.

Use cases

For beginners

This example shows how to create a workflow application to identify whether a text message is related to telecom fraud.

  1. Go to My Applications in the Model Studio console.

  2. Click Create Application, choose Workflow Application, and click Create Task-based Workflow.

    image

  3. On the Canvas Configuration page, you can see the Start node already has two preset parameters. You can modify them based on your requirements.

  4. Drag an LLM node from the left-side pane into the canvas. Connect it to the Start node and configure the parameters.

    • Configure the Start node: Delete the city and date parameters. The Start node has a built-in default parameter query.

    • Configure the LLM node:

      Parameter

      Example

      Model Configuration

      Qwen-Max

      Temperature

      Default

      Maximum Reply Length

      1024

      enable_search

      Disable

      Prompt

      System Prompt: 
      Analyze and determine whether the given information is suspected of fraud. Provide a definite answer on whether there is a suspicion of fraud.
       Processing requirements: Carefully review the content of the information, focusing on keywords and typical fraud patterns, such as requests for urgent transfers, provision of personal information, and promises of unrealistic benefits. 
      Procedure: 
      1. Identify key elements in the information, including but not limited to the sender's identity, requests made, promised returns, and any urgency expressions. 
      2. Compare with known fraud case characteristics to check if there are similar tactics or language patterns in the information. 
      3. Evaluate the overall reasonableness of the information, considering whether the requests made are in line with conventional logic and processes. 
      4. If the information contains links or attachments, do not click or download them directly to avoid potential security risks, and remind users of the dangers of such content. 
      Output format: Clearly indicate whether the information exhibits characteristics of fraud and briefly explain the basis for judgment. If there is a suspicion of fraud, provide some suggestions or preventive measures to protect user safety.
      User Prompt:
      Determine whether “${sys.query}” is suspected of fraud.
      Note

      You can enter / to insert variables. Select query from System Variables.

      image

      Output

      Default

  5. Drag an Intent Classification node from the left-side pane into the canvas, connect the LLM node to the Intent Classification node, and configure the following parameters.

    Parameter

    Example

    Input

    Select LLM_**** > result

    Model Configuration

    Qwen-Plus

    Intent Configuration

    Add Category

    • The information involves fraud.

    • The information does not involve fraud.

    Other Intents

    Default

    Output

    Default

    image

  6. Drag a Text Conversion node from the left-side pane into the canvas and connect all outputs of the Intent Classification node to the Text Conversion node. Then configure the following parameter.

    Parameter

    Example

    Text Template

    Enter / to insert variables. Select Classifier_**** > result > thought and subject, and LLM_**** > result.

    image

  7. Connect the Text Conversion node to the End node, and configure the following parameters.

    Parameter

    Example

    Output Mode

    Text Output.

    Input box

    Enter / to insert variables. Choose TextConverter_**** > result.

    image

  8. Click Test in the upper right corner, enter Your mom misses you, call her when you have time as the query, and click Execute.

    image

  9. After the workflow is executed, the End node displays the Run Result.

    image

  10. Click Test in the upper right corner, enter You have won a prize, please check as the query, and click Execute.

    image

  11. After the workflow is executed, the End node displays the Run Result.

    image

  12. Click Publish in the upper-right corner to publish the workflow application.
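The chain above (LLM analysis → Intent Classification → Text Conversion → End) can be prototyped locally before you build it in the canvas. The following Python sketch is an illustration, not the Model Studio runtime: the keyword check is a stand-in assumption for the Qwen-Max analysis, and `string.Template` mirrors the `${...}` variable syntax used by the Text Conversion node.

```python
from string import Template

# Hypothetical stand-in for the LLM node: the real node calls Qwen-Max
# with the fraud-analysis system prompt from step 4.
def analyze(query: str) -> str:
    suspicious = ["won a prize", "urgent transfer", "verify your account"]
    if any(k in query.lower() for k in suspicious):
        return "Suspected fraud: the message promises unrealistic benefits."
    return "No fraud characteristics detected."

# Mirrors the Intent Classification node: route on the analysis result.
def classify(result: str) -> str:
    return "fraud" if "Suspected fraud" in result else "not_fraud"

# Mirrors the Text Conversion node: assemble the final text with ${...}
# placeholders, the same syntax the canvas uses for variables.
TEMPLATE = Template("Category: ${subject}\nAnalysis: ${result}")

def run_workflow(query: str) -> str:
    result = analyze(query)
    subject = classify(result)
    return TEMPLATE.substitute(subject=subject, result=result)

print(run_workflow("You have won a prize, please check"))
```

In the real workflow, the two queries from steps 8 and 10 would take the two different Intent Classification branches in the same way.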

Advanced case

This example shows how to build an intelligent shopping assistant to recommend phones, TVs, and refrigerators using a dialog workflow. For a dialog workflow, the variable ${sys.query} is the user input in the dialog box.

  1. Go to My Applications in the Model Studio console.

  2. Click Create Application, choose Workflow Application, and click Create Dialog Workflow.

    image

  3. On the Canvas Configuration page, you can see the Start node already has two preset parameters. You can modify them based on your requirements.

  4. Drag an Intent Classification node from the left-side pane into the canvas. Connect the Start node to the Intent Classification node, and configure the following parameters.

    • Configure the Start node: Delete the city and date parameters. The Start node has a built-in default parameter query.

    • Configure the intent classification node:

      Parameter

      Example

      Input

      Select System Variables > query.

      Model Configuration

      Qwen-Plus

      Intent Configuration

      Add Category

      • TV

      • Phone

      • Refrigerator

      Other Intents

      Default

      Output

      Default

    image

  5. Drag an LLM node into the canvas, connect the TV output of the Intent Classification node to the LLM node, and configure the following parameters.

    Parameter

    Example

    Model Configuration

    Qwen-Max

    Temperature

    Default

    Maximum Reply Length

    1024

    enable_search

    Disable

    Prompt

    System Prompt:
    You are an intelligent shopping assistant responsible for recommending TVs to customers.
    You need to actively ask users what parameters they need for a TV according to the order in the [TV Parameter List] below, asking only one parameter at a time and not repeating questions for one parameter.
    If the user tells you the parameter value, you need to continue asking for the remaining parameters.
    If the user asks about the concept of this parameter, you need to use your professional knowledge to answer and continue to ask which parameter is needed.
    If the user mentions that they do not need to continue purchasing the product, please output: Thank you for visiting, looking forward to serving you next time.
    [TV Parameter List]
    1. Screen Size: [50 inches, 70 inches, 80 inches]
    2. Refresh Rate: [60Hz, 120Hz, 240Hz]
    3. Resolution: [1080P, 2K, 4K]
    If all parameters in the [TV Parameter List] have been collected, you need to ask: "Are you sure you want to purchase?" and output the customer's selected parameter information at the same time, such as: 50 inches|120Hz|1080P. Ask if they are sure they need a TV with these parameters. If the customer decides not to purchase, ask which parameters need to be adjusted.
    If the customer confirms that these parameters meet their requirements, you need to output in the following format:
    [Screen Size: 50 inches, Refresh Rate: 120Hz, Resolution: 1080P]. Please only output this format and do not output other information.
    User Prompt:
    The user's question is: ${sys.query}
    Note

    You can also enter / to insert variables. Select System Variables > query.

    image

    Context

    Disable

    Output

    Default

  6. Similarly, drag the second LLM node into the canvas, connect Phone to the LLM node, and set up the relevant parameters.

    Parameter

    Example

    Model Configuration

    Qwen-Max

    Temperature

    Default

    Maximum Reply Length

    1024

    enable_search

    Disable

    Prompt

    System Prompt:
    You are an intelligent shopping assistant responsible for recommending phones to customers.
    You need to actively ask users what parameters they need for a phone according to the order in the [Phone Parameter List] below, asking only one parameter at a time and not repeating questions for one parameter.
    If the user tells you the parameter value, you need to continue asking for the remaining parameters.
    If the user asks about the concept of this parameter, you need to use your professional knowledge to answer and continue to ask which parameter is needed.
    If the user mentions that they do not need to continue purchasing the product, please output: Thank you for visiting, looking forward to serving you next time.
    [Phone Parameter List]
    1. Usage Scenario: [Gaming, Photography, Watching Movies]
    2. Screen Size: [6.4 inches, 6.6 inches, 6.8 inches, 7.9 inches foldable screen]
    3. RAM Space + Storage Space: [8GB+128GB, 8GB+256GB, 12GB+128GB, 12GB+256GB]
    If all parameters in the [Parameter List] have been collected, you need to ask: "Are you sure you want to purchase?" and output the customer's selected parameter information at the same time, such as: For photography|8GB+128GB|6.6 inches. Ask if they are sure they need a phone with these parameters. If the customer decides not to purchase, ask which parameters need to be adjusted.
    If the customer confirms that these parameters meet their requirements, you need to output in the following format:
    [Usage Scenario: Photography, Screen Size: 6.8 inches, Storage Space: 128GB, RAM Space: 8GB]. Please only output this format and do not output other information.
    User Prompt:
    The user's question is: ${sys.query}

    Context

    Disable

    Output

    Default

  7. Similarly, drag the third LLM node into the canvas, connect Refrigerator to the LLM node, and configure the following parameters.

    Parameter

    Example

    Model Configuration

    Qwen-Max

    Temperature

    Default

    Maximum Reply Length

    1024

    enable_search

    Disable

    Prompt

    System Prompt:
    You are an intelligent shopping assistant responsible for recommending refrigerators to customers.
    You need to actively ask users what parameters they need for a refrigerator according to the order in the [Refrigerator Parameter List] below, asking only one parameter at a time and not repeating questions for one parameter.
    If the user tells you the parameter value, you need to continue asking for the remaining parameters.
    If the user asks about the concept of this parameter, you need to use your professional knowledge to answer and continue to ask which parameter is needed.
    If the user mentions that they do not need to continue purchasing the product, please output: Thank you for visiting, looking forward to serving you next time.
    [Refrigerator Parameter List]
    1. Usage Scenario: [Household, Small Commercial, Large Commercial]
    2. Capacity: [200L, 300L, 400L, 500L]
    3. Energy Efficiency Level: [Level 1, Level 2, Level 3]
    If all parameters in the [Parameter List] have been collected, you need to ask: "Are you sure you want to purchase?" and output the customer's selected parameter information at the same time, such as: For small commercial use|300L|Level 1. Ask if they are sure they need a refrigerator with these parameters. If the customer decides not to purchase, ask which parameters need to be adjusted.
    If the customer confirms that these parameters meet their requirements, you need to output in the following format:
    [Usage Scenario: Household, Capacity: 300L, Energy Efficiency Level: Level 1]. Please only output this format and do not output other information.
    User Prompt:
    The user's question is: ${sys.query}

    Context

    Disable

    Output

    Default

  8. Drag a Text Conversion node from the left-side pane into the canvas, connect the three LLM nodes to the Text Conversion node, and configure the following parameters.

    Parameter

    Configure Corresponding Parameters

    Text Template

    Enter / to insert variables. Choose LLM_**** > result for all three LLM nodes.

    image

  9. Drag another Text Conversion node into the canvas, connect the Other Intents output of the Intent Classification node to this Text Conversion node. Configure the following parameters.

    Parameter

    Example

    Text Template

    This product is not within the scope of the shopping assistant. Thank you for visiting, looking forward to serving you next time.

  10. Connect both Text Conversion nodes to the End node, and configure the following parameters.

    Parameter

    Configure Corresponding Parameters

    Output Mode

    Text Output

    Input box

    Enter / to insert the variables. Choose TextConverter > result for both nodes.

    image

  11. Click Test in the upper right corner, enter Tell me about your refrigerators, I need one for household use. as the query, and click Execute.

  12. After the workflow is executed, the End node displays the Run Result.

  13. Enter Tell me about a 200L household refrigerator? as the query. The End node displays the following Run Result.

  14. Enter Tell me about your headphones? as the query. The End node displays the following Run Result.

  15. Click Publish in the upper-right corner to publish the workflow application.
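The branching in steps 4 through 9 can be sketched as a dispatch table: the Intent Classification node picks a branch, each branch has its own LLM system prompt, and unmatched queries fall through to Other Intents. The prefix matcher below is only a stand-in assumption for the Qwen-Plus classifier, and the prompt strings are abbreviated.

```python
# Abbreviated system prompts for the three LLM branches (step 5-7).
PROMPTS = {
    "TV": "You are an intelligent shopping assistant responsible for recommending TVs...",
    "Phone": "You are an intelligent shopping assistant responsible for recommending phones...",
    "Refrigerator": "You are an intelligent shopping assistant responsible for recommending refrigerators...",
}

def classify_intent(query: str) -> str:
    # Naive word-prefix match stands in for the Qwen-Plus classifier, so
    # "refrigerators" matches Refrigerator but "headphones" matches nothing.
    words = query.lower().split()
    for intent in PROMPTS:
        if any(w.startswith(intent.lower()) for w in words):
            return intent
    return "Other Intents"

def route(query: str) -> str:
    intent = classify_intent(query)
    if intent == "Other Intents":
        # Fallback Text Conversion node from step 9.
        return ("This product is not within the scope of the shopping assistant. "
                "Thank you for visiting, looking forward to serving you next time.")
    return f"[{intent} branch] system prompt: {PROMPTS[intent][:40]}..."
```

For example, `route("Tell me about your refrigerators")` selects the Refrigerator branch, while the headphones query from step 14 falls through to the fallback message.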

Node description

Start/End node

  • Description: The start and end of a workflow. Each workflow must include both a Start and an End node. The End node cannot be followed by any other nodes. The workflow application only outputs the execution result after reaching the End node. The End node must declare one or more output variables, which can be the output variables of any ancestor node.

  • Start node parameters:

    Parameter

    Description

    Variable Name

    The key for the input parameter.

    Type

    The type of the parameter. Valid values: String, Boolean, Number.

    Description

    The description of the parameter.

    Note

    The Start node provides a built-in system variable: sys.query, which is the user query.

  • End node parameters:

    Parameter

    Description

    Output Mode

    Select Text Output or JSON Output.

    Input box

    Text Output: Enter / to insert variables.

    JSON Output: Enter variable name and select Reference or Input.

  • Example:

    In the following workflow, the End node includes Retrieval_SrpH/result, which is the output from the ancestor node. This indicates that the workflow ends following the execution of the Knowledge Base (Retrieval) node and outputs the results from the Knowledge Base (Retrieval) node.

    Enter What phones do you have as the query:

    image

    Sample output:

    image

Knowledge Base node

  • Description: Search for content or chunks in configured knowledge bases, outputting search results and related information. The Knowledge Base node can act as a predecessor node for the LLM node.

  • Parameters:

    Parameter

    Description

    Input

    Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    Select Knowledge Base

    Select one or more knowledge bases.

    Output

    The name of the variable output by this node. Subsequent nodes can identify and process the variable by its name.

  • Example:

    Common Scenario: AI Q&A system based on external data or a knowledge base.

    This is a simple knowledge base Q&A system. The Knowledge Base node precedes the LLM node. When a user enters a query, the system forwards the query to the Knowledge Base node for retrieval. The node first searches the knowledge base for text content closely related to the query and recalls it. Then, the recalled content and the query are passed to the LLM.
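The retrieve-then-generate pattern described above can be sketched as follows. This is a minimal local illustration: the knowledge base is a plain list, and the keyword-overlap scoring is an assumption standing in for the vector retrieval that the Knowledge Base node actually performs.

```python
# Toy knowledge base; in Model Studio this is a configured knowledge base.
KNOWLEDGE_BASE = [
    "The X100 phone has a 6.8 inch screen and 256GB storage",
    "The V7 TV supports 4K resolution at 120Hz",
]

def retrieve(query: str, top_k: int = 1) -> list[str]:
    # Naive keyword-overlap scoring stands in for vector retrieval.
    words = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda chunk: len(words & set(chunk.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_llm_prompt(query: str) -> str:
    # The recalled chunks and the query are passed together to the LLM node.
    context = "\n".join(retrieve(query))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"
```

The string returned by `build_llm_prompt` corresponds to the input the LLM node receives after the Knowledge Base node runs.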

LLM node - task-based workflow

  • Description: The LLM node processes input variables or content and outputs the result as a variable for subsequent transmission. As the core of a workflow, it leverages the dialog, generation, and classification capabilities of LLMs to handle various tasks based on prompts. Suitable for all stages in a workflow.

  • Parameters:

    Parameter

    Description

    Model Configuration

    Select an appropriate LLM and configure model parameters. For a list of supported models, see Supported Models.

    Prompt

    Configure the system prompt and user prompt with variables. The LLM processes and generates content based on the prompt.

    Temperature

    Adjusts the diversity of generated content. A higher temperature produces more diverse outputs, and a lower temperature produces more consistent outputs.

    Maximum Reply Length

    The maximum length of text generated by the model (excluding the Prompt). This limit varies by model type.

    enable_search

    When enabled, the LLM can search the Internet for relevant information. If this parameter is not visible, the current model does not support enable_search.

    Output

    The variable name for the output of the node. Subsequent nodes can identify and process the variable by its name.

  • Example:

    In the Test Window, enter Chip engineer as the query:

    image

    End node output:

    **Career Development Strategy for Chip Engineer**
    
    **1. Skills Development (1-3 Years):**
    
    a. **Technical Skills:**
       - **Semiconductor Fundamentals:** Strengthen understanding of semiconductor physics, VLSI design, and fabrication processes.
       - **Programming & Scripting:** Master languages like Python and C++, and scripting tools such as Tcl/Tk for chip design automation.
       - **EDA Tools Expertise:** Gain proficiency in Electronic Design Automation (EDA) tools like Cadence, Synopsys, and Mentor Graphics for circuit design, simulation, and verification.
       - **Machine Learning (ML) Applications:** Learn how ML can optimize chip design, particularly in areas like floorplanning and power management.
    
    b. **Soft Skills:**
       - **Problem-Solving:** Enhance analytical and problem-solving abilities to tackle complex chip design challenges.
       - **Team Collaboration:** Develop effective communication and teamwork skills, essential in multidisciplinary chip development projects.
       - **Project Management:** Basic understanding of project management methodologies to lead small-scale projects effectively.
    
    **2. Project Involvement:**
    
    a. **New Technology Integration:** Seek projects involving the implementation of emerging technologies (e.g., 5nm process nodes, 3D stacking) to stay at the forefront of innovation and enhance technical prowess.
    b. **Cross-Domain Collaborations:** Participate in interdisciplinary projects that involve software-hardware integration or AI-assisted chip design, broadening perspectives and expanding the professional network.
    c. **International Standards Compliance:** Engage in projects ensuring compliance with international standards (e.g., ISO, IEEE), enhancing knowledge of global regulations and increasing visibility among senior management.
    
    **3. Promotion Pathway:**
    
    a. **Junior Chip Engineer → Senior Chip Engineer (3-5 years):** Achieve mastery in EDA tools, complete significant project contributions, and demonstrate leadership in smaller project teams. Obtain professional certifications related to VLSI design or semiconductor engineering.
    b. **Senior Chip Engineer → Lead Chip Engineer/Manager (5-7 years):** Showcase successful project management, mentorship capabilities, and a track record of improving design efficiency or cost-effectiveness. Pursue an MBA or a master's degree in electrical engineering for management roles.
    c. **Lead Chip Engineer → Principal Engineer/Director of Engineering (7-10 years):** Lead large-scale projects, drive innovation strategies, and contribute to organizational policy. Establish a strong industry presence through publications, conference talks, and patents.
    
    **4. Long-Term Vision (5-10 Years):**
    
    The aspirational role could be a **Chief Technology Officer (CTO)** of a semiconductor company or founding a startup focused on next-generation chip technologies. To achieve this:
    
    - **Continued Education:** Regularly attend advanced courses, workshops, and conferences to stay updated on technological advancements.
    - **Industry Networking:** Build a robust network within the semiconductor industry, including academics, entrepreneurs, and investors, to stay informed about emerging trends and potential collaboration opportunities.
    - **Thought Leadership:** Publish research papers, give keynote speeches, and participate in panel discussions to establish oneself as an industry expert.
    - **Contingency Measures:** Stay adaptable to technology shifts (e.g., quantum computing, advanced AI hardware) by continuously upskilling and being open to diversifying expertise if needed.
    
    By following this structured approach, the individual can systematically progress in their career, staying aligned with both personal aspirations and the evolving demands of the semiconductor industry.

LLM node - dialog workflow

  • Different from the LLM node of task-based workflows, the LLM node of dialog workflows supports multi-round conversations.

    Multi-round Conversation Configuration: The application collects the variables from previous rounds specified in Context and passes them as input parameters to the LLM.

    image

  • Context: The context required by the LLM. The default sys/historyList represents the input and output of the application from previous rounds. Other parameters refer to the parameters from previous rounds.

    image
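Conceptually, the context works like a replayed message list: each round's input and output pair is appended and sent back to the LLM on the next call. The sketch below illustrates that idea; the field names are assumptions for illustration, not the exact runtime schema of sys.historyList.

```python
history_list = []  # illustrative stand-in for sys.historyList

def build_messages(system_prompt: str, query: str) -> list[dict]:
    # Replay previous rounds as alternating user/assistant messages,
    # then append the current query.
    messages = [{"role": "system", "content": system_prompt}]
    for round_ in history_list:
        messages.append({"role": "user", "content": round_["input"]})
        messages.append({"role": "assistant", "content": round_["output"]})
    messages.append({"role": "user", "content": query})
    return messages

def record_round(query: str, reply: str) -> None:
    # After each round, store the pair so the next call sees it as context.
    history_list.append({"input": query, "output": reply})
```

With Context enabled, the LLM node in a dialog workflow receives something equivalent to the output of `build_messages` on every round.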

API node

  • Description: The API node calls custom API services using POST or GET methods and outputs the results of the API call.

  • Parameters:

    Parameter

    Description

    API Request URL

    The API address to call. You can select the POST or GET method.

    Header Settings

    Configure Header parameters, setting KEY and VALUE.

    Param Settings

    Configure Param parameters, setting KEY and VALUE.

    Body Settings

    Valid values: none, form-data, raw, JSON.

    Output

    Designate the variable name for the node's result, enabling subsequent nodes to identify and process this node's output.
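The Header, Param, and Body settings map onto an ordinary HTTP request. The sketch below only constructs such a request without sending it; the URL, token, and field names are placeholders, not a real service.

```python
import json
import urllib.request

def build_api_request(url: str, headers: dict, params: dict, body: dict):
    # Param settings become the query string; Header settings become
    # request headers; a JSON Body setting becomes the request payload.
    query = "&".join(f"{k}={v}" for k, v in params.items())
    full_url = f"{url}?{query}" if query else url
    return urllib.request.Request(
        full_url,
        data=json.dumps(body).encode("utf-8"),
        headers={**headers, "Content-Type": "application/json"},
        method="POST",
    )

# Placeholder endpoint and values for illustration only.
req = build_api_request(
    "https://example.com/lookup",
    headers={"Authorization": "Bearer <token>"},
    params={"lang": "en"},
    body={"query": "refrigerator"},
)
```

The node's Output parameter then names the variable that holds the response body for subsequent nodes.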

Intent Classification node

  • Description: The Intent Classification node intelligently classifies and matches the input based on intent descriptions, directing the execution to one of the configured links.

  • Parameters:

    Parameter

    Description

    Input

    Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    Model Configuration

    Select Qwen-Plus.

    Intent Configuration

    Configure different intent categories and corresponding descriptions. The model matches subsequent links based on the descriptions. For example: "Math problems" and "Weather Q&A".

    Other Intents

    If none of the configured intents is matched, this link is used.

    Output

    The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.

Text Conversion node

  • Description: The Text Conversion node is used for text content conversion and processing, such as extracting specific content or converting formats, and supports template mode.

  • Parameters:

    Parameter

    Description

    Text Template

    Specify a template that converts the input into a specific format. You can reference the results of predecessor nodes through variables.

    Output

    The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.

  • Example:

    The following example shows a basic text conversion workflow: after the user enters a keyword, the Text Conversion node receives the keyword, processes it, and produces the output.

    In the Test Window, enter Math as the query:

    image

    Sample output:

    image
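Because the Text Template references variables with `${...}` syntax, Python's `string.Template` is a convenient local stand-in for trying out a template. The subject and plan values below are illustrative, corresponding to the predecessor-node variables a real template would reference.

```python
from string import Template

# The Text Conversion node fills ${...} placeholders with the outputs of
# predecessor nodes; string.Template uses the same placeholder syntax.
template = Template("Recommended study plan for ${subject}:\n${plan}")

text = template.substitute(
    subject="Math",  # would come from the user query
    plan="1. Review algebra basics\n2. Practice daily problem sets",  # would come from an LLM node
)
print(text)
```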

Script Conversion node

  • Description: The Script Conversion node uses the specified code to convert the input into a specific format or form. The process includes parsing, converting, and formatting for consistency and readability.

  • Parameters:

    Parameter

    Description

    Input

    Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    Code

    The code to convert the input to a specific format for subsequent nodes. In the code, you can reference variables from preceding nodes.

    Output

    The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.

  • Example:

    The following example shows a basic script conversion workflow. The user first inputs two parameters, which are then passed to the Script Conversion node. Inside the node, the code processes these parameters and ultimately generates the required output.

    In the Test Window, enter Singapore for city and 2022.2.10 for Date:

    image

    Sample output:

    image
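The kind of code a Script Conversion node might hold can be sketched as follows, using the two Start-node parameters from the example above (city and date). The function and field names are illustrative, not the node's required signature.

```python
def convert(params: dict) -> dict:
    # Parse and normalize the two input parameters.
    city = params["city"].strip().title()
    # Convert a date like "2022.2.10" into ISO format "2022-02-10".
    year, month, day = params["date"].split(".")
    date = f"{year}-{int(month):02d}-{int(day):02d}"
    # Emit one structured result for subsequent nodes to reference.
    return {"result": f"Weather lookup for {city} on {date}"}

out = convert({"city": "Singapore", "date": "2022.2.10"})
```

The returned dictionary corresponds to the node's Output variable that subsequent nodes reference by name.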

Conditional Judgment node

  • Description: Specify conditions for the Conditional Judgment node. The node selects the subsequent link based on the conditions. You can combine conditions with AND/OR logic. If multiple conditions are met, the links are executed from top to bottom.

  • Parameters:

    Parameter

    Description

    Conditional Branch

    Enter the conditional judgment statements.

    Other

    Outputs without conditional judgment.

  • Example:

    The following example shows a Conditional Judgment workflow. The user inputs two parameters, which are then passed to the Conditional Judgment node. Inside the node, the parameters undergo conditional evaluation, and the output response is generated through a different Text Conversion node branch.

    In the Test Window, enter 12345 for secret and admin for admin.

    image

    Sample output:

    image
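The top-to-bottom evaluation with an Other fallback can be sketched like this, using the example inputs above (secret and admin). The branch names and expected secret value are assumptions for illustration.

```python
def judge(secret: str, user: str) -> str:
    # Branches are checked top to bottom; the first matching condition wins,
    # mirroring how Conditional Branch links are evaluated.
    branches = [
        (lambda: secret == "12345" and user == "admin", "branch_admin_ok"),
        (lambda: user == "admin", "branch_wrong_secret"),
    ]
    for condition, name in branches:
        if condition():
            return name
    return "Other"  # fallback link with no conditional judgment

result = judge("12345", "admin")
```

Each returned branch name corresponds to a different downstream link, such as the Text Conversion node branches in the example workflow.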

Function Compute node

  • Description: After you authorize Function Compute in the canvas, you can use the node to call the custom services in Function Compute.

  • Parameters:

    Parameter

    Description

    Input

    Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    Region

    Select one of the regions: Singapore, Kuala Lumpur, Jakarta.

    Service Configuration

    Select the service configuration.

    Output

    The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.