
Microservices Engine: Use XXL-JOB and DeepSeek to push hot news and analyze financial data

Last Updated: Jan 16, 2026

This topic describes how to use the distributed task scheduler XXL-JOB with the large language model DeepSeek to automatically push trending financial news and analyze financial data on a schedule.

Background

As the capabilities of large AI models grow, their business applications expand. Many business scenarios can be automated and run as scheduled tasks in the background instead of being manually initiated. These tasks can be enhanced with the capabilities of large language models. The following are common scenarios:

  • Risk monitoring: You can monitor key system metrics on a schedule and use the smart analysis of large language models to detect potential threats.

  • Data analytics: You can collect online financial data on a schedule and use a large language model to perform intelligent analysis and generate decision-making advice for investors.

Prerequisites

Prepare the environment

Set up DeepSeek

We chose the DeepSeek large language model for the following reasons:

  • DeepSeek is known for its inference capabilities, which makes it well suited for data analytics. Its parent company, High-Flyer Quant, specializes in quantitative trading, which gives DeepSeek a notable edge in financial data analysis.

  • DeepSeek is open source and lightweight, which makes it easy to deploy.

You can also choose QwQ, Alibaba Cloud's latest open source model. Its inference capabilities are comparable to DeepSeek-R1, and it also excels at complex data analytics. The following chart compares QwQ-32B with other leading models in mathematical reasoning, programming, and general capabilities:

[Chart: QwQ-32B compared with other leading models on mathematical reasoning, programming, and general-capability benchmarks]

Option 1: Deploy locally

The steps to deploy DeepSeek, QwQ, or other models locally are similar. The following steps use DeepSeek as an example:

  1. Install Ollama from https://ollama.com/download.


  2. Install the DeepSeek-R1 model. The R1 model focuses on complex logical reasoning and is better suited for data analytics.

    Choose a model based on your machine's specifications. For example, if your computer has 16 GB of memory, choose the 7b model. Then, run the `ollama run deepseek-r1:7b` command to download and start it.

    The following table lists the hardware requirements for different models.

    | Model name | Model size | GPU memory | Memory |
    | --- | --- | --- | --- |
    | deepseek-r1:1.5b | 1.1 GB | 4 GB+ | 8 GB+ |
    | deepseek-r1:7b | 4.7 GB | 8 GB+ | 16 GB+ |
    | deepseek-r1:8b | 4.9 GB | 10 GB+ | 18 GB+ |
    | deepseek-r1:14b | 9.0 GB | 16 GB+ | 32 GB+ |
    | deepseek-r1:32b | 20 GB | 24 GB+ | 64 GB+ |

  3. After the deployment is complete, test it by calling the OpenAI-compatible API that Ollama exposes on the default port 11434. Because the API is OpenAI-compatible, standard OpenAI client code works against the local model without changes, which makes it easier to write code later.

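As a minimal sketch, assuming Ollama is running locally with the deepseek-r1:7b model already pulled, you can smoke-test the /v1/chat/completions endpoint from Java. The model name and prompt below are placeholders; adjust them as needed.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Minimal smoke test for the local Ollama OpenAI-compatible endpoint.
 * Assumes Ollama is listening on the default port 11434 and that the
 * deepseek-r1:7b model has been pulled.
 */
public class OllamaSmokeTest {

    // Build the JSON body for POST /v1/chat/completions (OpenAI chat format).
    static String buildRequestBody(String model, String prompt) {
        return "{\"model\":\"" + model + "\","
             + "\"messages\":[{\"role\":\"user\",\"content\":\"" + prompt + "\"}],"
             + "\"stream\":false}";
    }

    public static void main(String[] args) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/v1/chat/completions"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        buildRequestBody("deepseek-r1:7b", "Say hello in one word.")))
                .build();
        try {
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        } catch (Exception e) {
            System.out.println("Ollama is not reachable: " + e.getMessage());
        }
    }
}
```

If the endpoint is reachable, the response is a standard OpenAI-style chat completion JSON object.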

Option 2: Use a cloud product

You can also use a cloud product directly, such as Alibaba Cloud Model Studio, which you can start using immediately after activation and which comes with a large free quota. Another benefit of a cloud product is that you can switch models at any time to compare their strengths and weaknesses.

Set up XXL-JOB

Using XXL-JOB provides the following advantages:

  • You can send AI task requests on a schedule.

  • You can specify the prompt and response format in the task parameters to allow for dynamic changes.

  • You can use a broadcast sharding task to split a large task into smaller sub-tasks. This speeds up AI task execution.

  • You can use task dependency orchestration to build an AI data analytics flow.

Option 1: Deploy locally

Deploying XXL-JOB is simple. For detailed steps, see the official website. The general steps are as follows:

  1. Prepare a MySQL database and initialize the table schema by running the tables_xxl_job.sql script in the doc/db directory of the XXL-JOB source code.

  2. Import the code into your IDE and edit the xxl-job-admin configuration file to point to your database.

  3. Run the XxlJobAdminApplication class. Then, enter http://127.0.0.1:8080/xxl-job-admin in your browser to log on. The default username and password are admin and 123456.

Option 2: Use a cloud product

You can use the managed Alibaba Cloud MSE XXL-JOB version. For more information, see Create an XXL-JOB instance. A free trial is also available.

Push hot news

This example shows how to implement the solution using the MSE XXL-JOB version or a self-hosted XXL-JOB and DeepSeek-R1 hosted on Alibaba Cloud Model Studio. For more information about the demo, see xxljob-demo (SpringBoot).

Step 1: Connect the application to the XXL-JOB scheduling platform

  1. Log on to the Alibaba Cloud Container Service for Kubernetes (ACK) console and create an ACK Serverless cluster. To pull the demo image, you must enable and configure Source Network Address Translation (SNAT) for the current VPC. If SNAT is already configured for the VPC, you can skip this step.

  2. On the Clusters page of the ACK console, click the name of the target cluster. In the navigation pane on the left, choose Workloads > Stateless. Click Create from YAML. Use the following YAML configuration to connect the application to the MSE XXL-JOB scheduling platform. For more information about the parameters -Dxxl.job.admin.addresses, -Dxxl.job.executor.appname, -Dxxl.job.accessToken, -Ddashscope.api.key, and -Dwebhook.url, see Configure startup parameters.

    The following is a sample YAML configuration for the application deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: xxljob-demo
      labels:
        app: xxljob-demo
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: xxljob-demo
      template:
        metadata:
          labels:
            app: xxljob-demo
        spec:
          containers:
          - name: xxljob-executor
            image: schedulerx-registry.cn-hangzhou.cr.aliyuncs.com/schedulerx3/xxljob-demo:2.4.1
            ports:
            - containerPort: 9999
            env:
              - name: JAVA_OPTS
                value: >-
                  -Dxxl.job.admin.addresses=http://xxljob-xxxxx.schedulerx.mse.aliyuncs.com
                  -Dxxl.job.executor.appname=xxxxx
                  -Dxxl.job.accessToken=xxxxxxx
                  -Ddashscope.api.key=sk-xxx
                  -Dwebhook.url=https://oapi.dingtalk.com/robot/send?access_token=xx

Step 2: Configure startup parameters

  1. Obtain the startup parameters.

    1. Log on to the MSE XXL-JOB console and select a region in the top menu bar.

    2. Click the target instance. On the Application Management page, click Access in the Executor Count column for the target application.


    Replace the placeholders with the access configuration of the target instance. Then, click Copy to copy the parameters to your YAML configuration:

    -Dxxl.job.admin.addresses=http://xxljob-xxxxx.schedulerx.mse.aliyuncs.com
    -Dxxl.job.executor.appname=xxxxx
    -Dxxl.job.accessToken=xxxxxxx
  2. Log on to the Alibaba Cloud Model Studio platform. Click the profile icon in the upper-right corner and select API-KEY to go to the management page. On this page, you can create or copy an API-KEY.

    Replace the placeholder with your API-KEY and copy the parameter to your YAML configuration:

    -Ddashscope.api.key=sk-xxx
  3. To obtain the DingTalk group webhook URL, add a custom robot in your DingTalk group settings.

    Replace the access_token value and copy the parameter to your YAML configuration:

    -Dwebhook.url=https://oapi.dingtalk.com/robot/send?access_token=xx
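For reference, the following sketch shows how an executor might push a markdown message to the DingTalk group robot through this webhook. The class name, helper methods, and sample text are illustrative; the webhook URL is read from the same -Dwebhook.url system property configured above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Sketch of pushing a markdown message to a DingTalk group robot.
 * The webhook URL (with its access_token) is the value configured in
 * -Dwebhook.url; no request is sent if the property is not set.
 */
public class DingTalkNotifier {

    // Build the JSON body for the robot's "markdown" message type.
    // Note: the title and text are embedded as-is, so they must already be JSON-safe.
    static String buildMarkdownMessage(String title, String text) {
        return "{\"msgtype\":\"markdown\",\"markdown\":{"
             + "\"title\":\"" + title + "\","
             + "\"text\":\"" + text + "\"}}";
    }

    static String send(String webhookUrl, String title, String text) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(buildMarkdownMessage(title, text)))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // an errcode of 0 in the response indicates success
    }

    public static void main(String[] args) throws Exception {
        String webhook = System.getProperty("webhook.url");
        String title = "Today's hot financial news";
        String text = "#### 1. Sample headline";
        System.out.println(buildMarkdownMessage(title, text));
        if (webhook != null) {
            System.out.println(send(webhook, title, text));
        }
    }
}
```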

Step 3: Create and run the AI task

MSE XXL-JOB console
  1. Log on to the MSE XXL-JOB console and select a region in the top menu bar. Click the target instance. In the navigation pane on the left, click Task Management, and then click Create Task.

  2. In the Create Task panel, set JobHandler Name to sinaNews. Configure the Task Parameter with the following prompt information. Keep the default settings for the other parameters and save the configuration.

    Sample Task Parameter prompt configuration:

    You are a news assistant. You need to Unicode-decode the content provided by the user, extract the 5 hottest news articles, and finally summarize the content.
    The output format is as follows:
    
    Today's hot financial news (sorted by popularity):
    
    ---
    
    #### 1. [**title**](url)
    
    Popularity: 99,999
    
    Publisher: publisher
    
    ---
    
    #### **Message Summary**
    
    Analyze if there is any latest news related to AI. Briefly summarize today's news content.
  3. On the Task Management page, find the sinaNews task that you created. In the Actions column, click Run Once and wait for the task to run successfully. The DingTalk group then receives the AI-analyzed news summary. After you schedule the task, the group receives summaries periodically.
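The Sina hot-news API returns titles as backslash-uXXXX escape sequences, which is why the prompt asks the model to Unicode-decode the content. If you prefer to decode in the executor before building the prompt, a minimal sketch (class and method names are illustrative):

```java
/**
 * Decode \\uXXXX escape sequences in a raw API response so the model
 * receives readable text instead of escaped Unicode.
 */
public class UnicodeDecoder {

    // Replace every \\uXXXX escape with the character it encodes;
    // all other characters pass through unchanged.
    static String decode(String s) {
        StringBuilder sb = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            if (s.charAt(i) == '\\' && i + 5 < s.length() && s.charAt(i + 1) == 'u') {
                sb.append((char) Integer.parseInt(s.substring(i + 2, i + 6), 16));
                i += 6;
            } else {
                sb.append(s.charAt(i));
                i++;
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("\\u70ed\\u95e8\\u65b0\\u95fb")); // prints 热门新闻
    }
}
```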

Self-hosted XXL-JOB Admin
  1. On your self-hosted XXL-JOB Admin console, create a task. Set JobHandler to sinaNews. For the task parameters, see Sample Task Parameter configuration.

  2. On the task management page, run the task once manually. You will receive a DingTalk notification with the AI-analyzed news summary.

Perform financial data analytics

In the Push hot news example, you pull news only from Sina Finance. However, to make quick decisions, you need to pull domestic and international financial news and data in near real time, and a single-machine job is not fast enough. To address this, you can use MSE XXL-JOB sharding broadcast jobs to split a large job into smaller jobs that each pull different data. You can then use the job orchestration feature of MSE XXL-JOB to create a flow that completes your task step by step.

  1. Create three tasks in MSE XXL-JOB and establish their dependencies: Pull financial data → Data analysis → Generate report. Set the routing policy for the Pull financial data task to broadcast sharding.

  2. When the Pull financial data task starts, broadcast sharding dispatches multiple sub-tasks to different executors to pull domestic and international financial news and data. The results are then stored in a location such as a database, Redis, or Object Storage Service.

  3. When the Data analysis task starts, it retrieves the collected financial data. The task then calls DeepSeek to analyze the data and stores the results.

  4. After the data analysis is complete, the Generate report task generates a report from the analysis results. The task then sends the report, which contains investment advice, to users through DingTalk or email.
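Assuming each executor obtains its shard index and shard total from XXL-JOB (via XxlJobHelper.getShardIndex() and XxlJobHelper.getShardTotal() inside the handler), the fan-out in the Pull financial data step can be sketched as a modulo split over the list of data sources. The source names below are illustrative placeholders:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the broadcast-sharding split: each executor pulls only the
 * data sources whose position matches its shard index modulo the shard
 * total. In a real handler, shardIndex and shardTotal come from
 * XxlJobHelper.getShardIndex() and XxlJobHelper.getShardTotal().
 */
public class ShardedDataPuller {

    // Return the subset of sources this shard is responsible for.
    static List<String> assignedSources(List<String> allSources, int shardIndex, int shardTotal) {
        List<String> mine = new ArrayList<>();
        for (int i = 0; i < allSources.size(); i++) {
            if (i % shardTotal == shardIndex) {
                mine.add(allSources.get(i));
            }
        }
        return mine;
    }

    public static void main(String[] args) {
        List<String> sources = List.of("sina-finance", "eastmoney", "reuters", "bloomberg");
        // Executor 0 of 2 pulls sources 0 and 2; executor 1 pulls 1 and 3.
        System.out.println(assignedSources(sources, 0, 2)); // [sina-finance, reuters]
    }
}
```

Each shard then stores its pulled data under a key derived from its shard index, so the downstream Data analysis task can read the combined result set.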