
Application Real-Time Monitoring Service:Integrate a custom alert source with ARMS

Last Updated: Mar 11, 2026

ARMS Alert Management provides an API that accepts alert events from any system capable of making HTTP calls. Use this API to report alerts from custom or third-party monitoring tools to ARMS for centralized management.

Three core concepts drive the integration:

  • Trigger: Send an alert event to create or update an alert in ARMS.

  • Resolve: Automatically clear alerts when recovery conditions are met.

  • Deduplicate: Merge events that share the same field values into a single alert notification.

Prerequisites

Before you begin, make sure that you have:

  • An Alibaba Cloud account with ARMS activated

  • A third-party monitoring system or custom tool that can send HTTP POST requests

Step 1: Create a custom integration

  1. Log on to the ARMS console. In the left-side navigation pane, choose Alert Management > Integrations.

  2. On the Alert Integration tab, click Custom Integration.

  3. In the Create Custom Event Integration dialog box, enter an integration name, specify the auto-clear period, and add an optional description. Click Save and Configure.

Note If an alert event is not triggered again within the auto-clear period, ARMS automatically clears the alert.

Step 2: Send alert events to the API endpoint

After creating the integration, ARMS generates an API endpoint and API key.

  1. On the Integration Details page, in the Span Configuration section, copy the API endpoint and API key.

    (Figure: API key of a custom integration)

  2. Send a POST request to the API endpoint from your alert source.

Quick start: minimal payload

Send the smallest possible payload to verify connectivity:

curl -k -H "Content-Type: application/json" -d '{
  "alertname": "test-alert",
  "severity": "P2",
  "message": "Test alert from custom source"
}' "https://alerts.aliyuncs.com/api/v1/integrations/custom/<your-api-key>"

Replace <your-api-key> with the API key copied in the previous step.

Note The first time you send an event, the API returns error code 601 with the message Invalid incident, labels.alertname is required. This is expected. Configure field mappings in Step 3 first, then resend the event.
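The same minimal event can also be sent from a script instead of curl. The following is a minimal Python sketch using only the standard library; the function names are illustrative and not part of the ARMS API, and <your-api-key> must be replaced before send_event is called:

```python
import json
import urllib.request

# Placeholder endpoint; substitute the API key copied from the console.
API_URL = "https://alerts.aliyuncs.com/api/v1/integrations/custom/<your-api-key>"

def build_event(alertname, severity, message):
    """Build the minimal alert event payload from the quick start."""
    return {"alertname": alertname, "severity": severity, "message": message}

def send_event(event, url=API_URL):
    """POST the event as JSON and return the HTTP status code."""
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

event = build_event("test-alert", "P2", "Test alert from custom source")
print(json.dumps(event))
```

Until field mappings exist (Step 3), a call to send_event is expected to return the 601 error described in the note above.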

Full payload example

The following curl command sends a complete alert event for a scenario where TCP packet errors occur on a server. The payload includes nested metadata and equipment arrays:

curl -k -H "Content-Type: application/json" -d '{
  "trigger-type": "network",
  "trigger-location": "cn-hangzhou",
  "trigger-severity": "MAX",
  "trigger-policy": "package errors > 5%",
  "trigger-event": "inbound tcp package errors is 20%",
  "trigger-check": "tcp package error percentage",
  "trigger-value": "20",
  "trigger-time": "1629702922000",
  "metadata": [
    {
      "agent": "SERVER",
      "ip": "141.219.XX.XX",
      "fqdn": "websrv1.damenport.org",
      "microServiceId": "ms-login-2251",
      "accountId": "1504000433993",
      "service": "login-0"
    },
    {
      "agent": "CONTAINER",
      "ip": "172.1.XX.XX",
      "fqdn": "websrv2.damenport.org",
      "microServiceId": "ms-login-2252",
      "accountId": "129930302939",
      "service": "login-1"
    }
  ],
  "equipments": [
    {
      "equipmentId": "112"
    },
    {
      "equipmentId": "113"
    }
  ]
}' "https://alerts.aliyuncs.com/api/v1/integrations/custom/<your-api-key>"

The following table describes the key fields in this payload:

Field | Description | Example value
trigger-type | Event type | network
trigger-severity | Severity level (MAX, MID, or MIN) | MAX
trigger-event | Event description | inbound tcp package errors is 20%
trigger-time | Start time as a Unix timestamp in milliseconds | 1629702922000
metadata[*].agent | Agent type (SERVER or CONTAINER) | SERVER
metadata[*].accountId | User ID (not included in alert notifications) | 1504000433993

Replace the following placeholder with your actual value:

Placeholder | Description | Example
<your-api-key> | API key from the Span Configuration section | ymQBN******

Step 3: Configure alert field mappings

Field mappings define how your alert source fields translate to ARMS alert fields. Without mappings, ARMS cannot parse incoming events.

Send test data

  1. On the Integration Details page, in the Event Mapping section, click Send Test Data.

  2. In the Send Test Data dialog box, paste the JSON payload from your alert source and click Send.

    • If the message Uploaded. No events are generated. Configure mappings based on the original data. appears, the fields are not yet mapped. The raw data appears in the left pane for reference when configuring mappings.

    • If the message Uploaded. appears, the event was successfully reported. View it on the Alert Event History page. For more information, see View historical alert events.

    (Figure: Send test data)

  3. In the Send Test Data dialog box, click Disable.

Configure batch processing (optional)

If your alert data contains an array node (such as metadata), you can process all elements in the array as a batch.

  1. In the left pane of the Event Mapping section, click a data record to view its details.

  2. In the right pane, under Select Root Node, select Use Batch Processing, then choose the array node to use as the root node.

Note Only one array node can be selected for batch processing at a time.

How batch processing affects field mapping:

  • Batch enabled (e.g., metadata as root node): All $.metadata[*].service values are iteratively mapped to the target ARMS field.

  • Batch disabled: Only a specific element (e.g., $.metadata[0].service or $.metadata[1].service) maps to the target field.
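The difference between the two modes can be sketched in Python. The payload below mirrors the metadata array from the full example; the extraction functions are illustrative helpers, not ARMS APIs:

```python
# Payload mirroring the "metadata" array from the full payload example.
payload = {
    "metadata": [
        {"agent": "SERVER", "service": "login-0"},
        {"agent": "CONTAINER", "service": "login-1"},
    ]
}

def extract_services_batch(data):
    """Batch enabled: every $.metadata[*].service value is mapped."""
    return [item["service"] for item in data["metadata"]]

def extract_service_single(data, index):
    """Batch disabled: only one element, e.g. $.metadata[0].service."""
    return data["metadata"][index]["service"]

print(extract_services_batch(payload))     # every service in the array
print(extract_service_single(payload, 0))  # only the first element
```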

Configure alert recovery events (optional)

To automatically clear alerts when a recovery event arrives, select Configure Alert Recovery Events and define field conditions.

When ARMS receives an event that matches your conditions, it finds and clears the corresponding alerts. The condition cannot be based on the alert severity; the $.severity field cannot be used.

For example, the condition {$.eventType = "resolved"} clears all alerts in the integration where the eventType field equals resolved.
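The matching logic from this example can be sketched as follows. This is an illustrative simplification: the correlation on alertname is an assumption for the sketch, not a documented ARMS behavior:

```python
def should_clear(event):
    """Recovery condition from the example: $.eventType equals "resolved"."""
    return event.get("eventType") == "resolved"

# One open alert, plus an incoming recovery event for it.
open_alerts = [{"alertname": "net-err", "status": "open"}]
recovery_event = {"eventType": "resolved", "alertname": "net-err"}

if should_clear(recovery_event):
    for alert in open_alerts:
        if alert["alertname"] == recovery_event["alertname"]:
            alert["status"] = "cleared"

print(open_alerts)  # the matching alert is now cleared
```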

Map source fields to ARMS alert fields

In the Map Source Fields to Target Fields section, map each source field to an ARMS alert field.

The following table describes the ARMS alert fields:

Alert field | Required | Description | Mapping method | Source field example
alertname | Yes | Alert name | Series | $.trigger-type and $.trigger-policy
severity | Yes | Alert level. Requires a mapping table to convert source values to ARMS values (P1, P2, P3). | Direct + Mapping table | $.trigger-severity (MAX -> P1, MID -> P2, MIN -> P3)
message | No | Alert description, used as the notification content. Maximum 15,000 characters. | Direct | $.trigger-event
value | No | Sample metric value | Direct | $.trigger-value
imageUrl | No | Grafana metrics chart URL for embedding in the alert | - | -
check | No | Check item, such as CPU, JVM, Application Crash, or Deployment | Direct | $.trigger-check
source | No | Alert source identifier, typically an IP address | Direct | $.metadata[*].ip
class | No | Object type that triggers the alert, such as host | Direct | $.trigger-type
service | No | Source service name. Supports conditional mapping. | Condition | See the conditional mapping example below
startat | No | Event start timestamp | Direct | $.trigger-time
endat | No | Event end timestamp | - | -
generatorUrl | No | URL linking to the event details page | - | -

Mapping methods

Click the Map icon next to a field to switch between these mapping methods:

  • Direct: Maps one source field directly to one ARMS field.

  • Series: Concatenates multiple source fields with a delimiter (special characters only) into one value, then maps it to an ARMS field. For example, joining $.trigger-type and $.trigger-policy with an underscore (_) produces network_package errors > 5%, which maps to alertname.

  • Condition: Maps a source field based on a condition. For example:

    • If $.metadata[*].agent equals CONTAINER, map $.metadata[*].microServiceId to the ARMS service field.

    • If $.metadata[*].agent equals SERVER, map $.metadata[*].service to the ARMS service field.

  • Mapping table: Converts source values to ARMS values through a lookup table. Used only for the severity field.
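The Mapping table and Series methods can be sketched in Python. The values reuse the examples above; the function names are illustrative:

```python
# Mapping table from the severity row of the field table above.
SEVERITY_MAP = {"MAX": "P1", "MID": "P2", "MIN": "P3"}

def map_severity(source_value):
    """Mapping table: convert a source severity to an ARMS level."""
    return SEVERITY_MAP[source_value]

def series(*values, delimiter="_"):
    """Series: concatenate several source fields into one target value."""
    return delimiter.join(values)

print(map_severity("MAX"))                       # P1
print(series("network", "package errors > 5%"))  # network_package errors > 5%
```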

(Figure: Configure the custom alert source)

Step 4: Configure deduplication

Deduplication merges events that share the same field values into a single alert notification, reducing noise.

Note Deduplication applies only to events that are not cleared.

  1. In the Event Deduplication section, select the fields to use for deduplication. For example, if $.metadata[*].ip maps to source and $.trigger-check maps to check, selecting both source and check merges events from the same IP address with the same check item into one alert. Events with different IP addresses or check items remain separate.

  2. Click Deduplication Test to preview the grouping results.

    Note The test runs only against the latest 10 data records uploaded in the Event Mapping section.

    (Figure: Deduplicate alert events)

  3. Click Save.
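The grouping behavior described above can be sketched as follows. The IP addresses are placeholders and the helper is illustrative, not an ARMS API:

```python
from collections import defaultdict

# Three events: two share the same source IP and check item.
events = [
    {"source": "10.0.0.1", "check": "tcp package error percentage", "msg": "20%"},
    {"source": "10.0.0.1", "check": "tcp package error percentage", "msg": "25%"},
    {"source": "10.0.0.2", "check": "tcp package error percentage", "msg": "10%"},
]

def deduplicate(events, fields=("source", "check")):
    """Group events sharing the same values for the chosen fields;
    each group corresponds to one merged alert notification."""
    groups = defaultdict(list)
    for event in events:
        key = tuple(event[field] for field in fields)
        groups[key].append(event)
    return groups

groups = deduplicate(events)
print(len(groups))  # 2 alerts: one per (source, check) combination
```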

Verify the integration

After saving the configuration, go to Alert Management > Integrations. Confirm that your new integration appears on the Alert Integration tab.


More operations

On the Alert Integration tab, the following operations are available for each integration:

Operation | Steps
View details | Click the integration row to open the Integration Details page.
Update the API key | Choose More > Update Key in the Actions column, then click OK. After updating, modify the endpoint in the alert source configured in Step 2.
Edit the integration | Click Edit in the Actions column. Modify the settings on the Integration Details page, then click Save.
Enable or disable | Click Enable or Disable in the Actions column.
Delete the integration | Click Delete in the Actions column, then click OK.
Add an event processing flow | Click Add Event Processing Flow in the Actions column. For more information, see Work with event processing flows.
Create a notification policy | Choose More > Create Notification Policy in the Actions column. For more information, see Create and manage a notification policy.

What's next

After you create a notification policy, the system generates alerts and sends alert notifications based on that policy. For more information, see Create and manage a notification policy.

On the Alert Sending History page, you can view alerts generated based on the configured notification policy. For more information, see View historical alerts.