
AI Guardrails: Synchronous moderation API for Image Moderation 2.0

Last Updated: Mar 03, 2026

API description

The Image Moderation 2.0 API scans images for regulatory violations, platform policy breaches, and harmful content. It supports more than 40 risk labels and more than 40 risk control items. Use the returned risk labels and confidence scores to take moderation or administrative action based on your business requirements. For more information, see Introduction to Image Moderation 2.0 and Billing.

Connection guide

  1. Create an Alibaba Cloud account. Register now and follow the on-screen instructions.

  2. Activate the pay-as-you-go Content Moderation service. Activation is free. After you start using the service, you are charged based on your usage. For more information, see Billing details.

  3. Create an AccessKey. Use Resource Access Management (RAM) to create an AccessKey. If you use a RAM user AccessKey, grant the AliyunYundunGreenWebFullAccess permission to the RAM user. For more information, see RAM authorization.

  4. Integrate with SDKs. Use SDKs to call the API. For more information, see Image Moderation 2.0 SDKs and usage guide.

Usage notes

Call the Image Moderation 2.0 API to create a task for image moderation. For more information about constructing an HTTP request, see Make HTTPS calls. You can also use an SDK. For more information, see Image Moderation 2.0 SDKs and usage guide.

  • API operation: ImageModeration

  • Supported regions and endpoints:

| Region | Public endpoint | Virtual Private Cloud (VPC) endpoint | Supported services |
| --- | --- | --- | --- |
| Singapore | green-cip.ap-southeast-1.aliyuncs.com | green-cip-vpc.ap-southeast-1.aliyuncs.com | baselineCheck_global, aigcDetector_global |
| UK (London) | green-cip.eu-west-1.aliyuncs.com | Not available | |
| US (Virginia) | green-cip.us-east-1.aliyuncs.com | green-cip-vpc.us-east-1.aliyuncs.com | baselineCheck_global, aigcDetector_global |
| US (Silicon Valley) | green-cip.us-west-1.aliyuncs.com | Not available | |
| Germany (Frankfurt) | green-cip.eu-central-1.aliyuncs.com | green-cip-vpc.eu-central-1.aliyuncs.com | |

Note: The UK (London) region reuses the console configurations of the Singapore region. The US (Silicon Valley) and Germany (Frankfurt) regions reuse the console configurations of the US (Virginia) region.
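In client code, endpoint selection can be table-driven from the regions listed above. The sketch below is illustrative; `endpoint_for` is a hypothetical helper, not part of any Alibaba Cloud SDK.

```python
# Public endpoints for the ImageModeration API, keyed by region ID (see the table above).
ENDPOINTS = {
    "ap-southeast-1": "green-cip.ap-southeast-1.aliyuncs.com",   # Singapore
    "eu-west-1": "green-cip.eu-west-1.aliyuncs.com",             # UK (London)
    "us-east-1": "green-cip.us-east-1.aliyuncs.com",             # US (Virginia)
    "us-west-1": "green-cip.us-west-1.aliyuncs.com",             # US (Silicon Valley)
    "eu-central-1": "green-cip.eu-central-1.aliyuncs.com",       # Germany (Frankfurt)
}

# Regions that also expose a VPC endpoint.
VPC_REGIONS = {"ap-southeast-1", "us-east-1", "eu-central-1"}

def endpoint_for(region_id: str, vpc: bool = False) -> str:
    """Return the endpoint host for a region; VPC endpoints exist only in some regions."""
    if vpc:
        if region_id not in VPC_REGIONS:
            raise ValueError(f"No VPC endpoint in region {region_id}")
        return f"green-cip-vpc.{region_id}.aliyuncs.com"
    return ENDPOINTS[region_id]
```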
  • Billing:

This API operation is billable. Only requests that return an HTTP status code of 200 are charged. Requests that return other error codes are not billed. For more information, see Billing details.

  • Image requirements:

    • Supported formats: PNG, JPG, JPEG, BMP, WEBP, TIFF, SVG, HEIF (the longest edge must be less than 8,192 px), GIF (only the first frame is used), and ICO (only the last image is used).

    • Maximum file size: 20 MB. The height or width cannot exceed 16,384 px, and the total pixel count cannot exceed 167 million. For best results, use images with a resolution greater than 200 x 200 px. Low resolution may reduce moderation accuracy.

    • Image download timeout: 3 seconds. If the download exceeds this limit, a timeout error is returned.
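The size and format limits above can be checked client-side before a request is sent, which catches would-be 406/407 errors early. A minimal sketch using only the numbers stated above; `precheck` is a hypothetical helper name.

```python
# Formats accepted by Image Moderation 2.0 (see the list above).
ALLOWED_FORMATS = {"png", "jpg", "jpeg", "bmp", "webp", "tiff", "svg", "heif", "gif", "ico"}
MAX_BYTES = 20 * 1024 * 1024   # 20 MB file-size limit
MAX_EDGE = 16_384              # max height or width in px
MAX_PIXELS = 167_000_000       # max total pixel count

def precheck(fmt: str, size_bytes: int, width: int, height: int) -> list[str]:
    """Return a list of requirement violations; an empty list means the image passes."""
    problems = []
    if fmt.lower() not in ALLOWED_FORMATS:
        problems.append(f"unsupported format: {fmt}")
    if fmt.lower() == "heif" and max(width, height) >= 8192:
        problems.append("HEIF longest edge must be less than 8,192 px")
    if size_bytes > MAX_BYTES:
        problems.append("file exceeds 20 MB")
    if width > MAX_EDGE or height > MAX_EDGE:
        problems.append("height or width exceeds 16,384 px")
    if width * height > MAX_PIXELS:
        problems.append("total pixel count exceeds 167 million")
    if width < 200 or height < 200:
        problems.append("below 200 x 200 px; moderation accuracy may suffer")
    return problems
```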

QPS limit

The queries per second (QPS) limit for a single user is 100 calls/second. If you exceed this limit, API calls are throttled, which may affect your business. To request a higher QPS or scale out urgently, contact your business manager.
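To stay under the limit, calls can also be gated client-side. The sliding-window sketch below is one possible approach (`QpsGate` is a hypothetical name); production code would additionally handle 403 throttling responses with backoff.

```python
from collections import deque

class QpsGate:
    """Client-side limiter: allow at most `limit` calls per one-second window."""

    def __init__(self, limit: int = 100):
        self.limit = limit
        self.calls = deque()  # timestamps of calls within the current window

    def allow(self, now: float) -> bool:
        """Record and permit a call at time `now` (seconds) if under the limit."""
        # Drop timestamps that have aged out of the one-second window.
        while self.calls and now - self.calls[0] >= 1.0:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False
```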

Debug

Before you connect, use Alibaba Cloud OpenAPI Explorer to debug the Image Moderation 2.0 API online. View sample code and SDK dependency information to understand how to use the API and its parameters.

Important

The online debugging feature calls the Content Moderation API using the currently logged-on account. The call count is included in the billable usage of that account.

Request parameters

For more information about the common request parameters that must be included in a request, see Common parameters.

The request body is a JSON struct that contains the following fields:

| Name | Type | Required | Example | Description |
| --- | --- | --- | --- | --- |
| Service | String | Yes | baselineCheck_global | The moderation service for Image Moderation 2.0. Valid values: baselineCheck_global (Baseline Check) and aigcDetector_global (AIGC image detection). |
| ServiceParameters | JSONString | Yes | | The content moderation parameters. The value is a JSON string. For details about each field, see ServiceParameters. |

Note
For the differences between services, see Service Description. For the AIGC-dedicated service, see AIGC Scenario Detection Service. The international version can be used only in regions outside China.

Table 1. ServiceParameters

Image Moderation 2.0 supports three ways to pass an image. Select one of the following (required):

  • Pass the image URL by setting imageUrl.
  • Use Object Storage Service (OSS) authorization. Pass ossBucketName, ossObjectName, and ossRegionId together.
  • Upload a local image for detection. This method does not consume your OSS storage space, and the file is stored for only 30 minutes. The SDKs integrate the local image upload feature. For code examples, see Image Moderation 2.0 SDKs and usage guide.

imageUrl
  • Type: String
  • Example: https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png
  • Description: The URL of the image to moderate. The URL must be publicly accessible and cannot exceed 2,048 characters.
  • Note: Do not include Chinese characters in URLs, and submit only one URL per request. If a download times out, use OSS authorization to troubleshoot the issue.

ossBucketName
  • Type: String
  • Example: bucket_01
  • Description: The name of the authorized OSS bucket.
  • Note: Before you use OSS authorization, use your Alibaba Cloud account to go to the Cloud Resource Access Authorization page and grant the AliyunCIPScanOSSRole role.

ossObjectName
  • Type: String
  • Example: 2022023/04/24/test.jpg
  • Description: The name of the file in the authorized OSS bucket.
  • Note: Pass the original filename from OSS; you cannot add image processing parameters. To use image processing parameters, pass the image through imageUrl instead. If a filename contains Chinese characters or spaces, pass it as is; it does not need to be URL-encoded.

ossRegionId
  • Type: String
  • Example: cn-beijing
  • Description: The region where the OSS bucket is located.

dataId
  • Type: String
  • Required: No
  • Example: img123****
  • Description: The data ID of the image to moderate. Allowed characters: uppercase and lowercase letters, digits, underscores (_), hyphens (-), and periods (.). Maximum length: 64 characters. Use this field to uniquely identify your business data.

referer
  • Type: String
  • Required: No
  • Example: www.aliyun.com
  • Description: The referer request header, used for hotlink protection. Maximum length: 256 characters.

infoType
  • Type: String
  • Required: Yes
  • Example: customImage,textInImage
  • Description: The types of auxiliary information to return. Valid values: customImage (custom image library hit information), textInImage (text-in-image information), publicFigure (public figure hit information), logoData (logo information). Separate multiple values with commas. For example, "customImage,textInImage" returns both custom image library hit information and text-in-image information.
  • Note: Public figure and logo information require an advanced image moderation service. For more information, see Service description.

Response data

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| RequestId | String | 70ED13B0-BC22-576D-9CCF-1CC12FEAC477 | The unique request ID. Alibaba Cloud generates a unique ID for each request, which you can use for troubleshooting. |
| Data | Object | | The image moderation result. For more information, see Data. |
| Code | Integer | 200 | The HTTP status code. For more information, see Response codes. |
| Msg | String | OK | The response message. |

Table 2. Data

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| Result | Array | | The moderation results, including risk labels and confidence scores. For more information, see result. |
| RiskLevel | String | high | The overall risk level, determined by the label with the highest risk. Valid values: high (high risk), medium (medium risk), low (low risk), none (no risk detected). |
| DataId | String | img123****** | The data ID of the moderated image. If you passed a dataId in the request, the corresponding dataId is returned here. |
| Ext | Object | | Auxiliary reference information for the image. For more information, see Auxiliary information. |

Warning
Take immediate action on high-risk content and manually review medium-risk content. Process low-risk content only when you have strict recall requirements; otherwise, treat it the same as content for which no risk is detected. Configure risk scores in the Content Moderation console.

Table 3. result

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| Label | String | violent_explosion | The risk label returned by image moderation. A single image may return multiple labels with scores. For supported labels, see Labels supported by baselineCheck_global. |
| Confidence | Float | 81.22 | The confidence score. Valid values: 0 to 100, accurate to two decimal places. Some labels do not include a confidence score. For more information, see Descriptions of risk labels. |
| Description | String | Fireworks content | A description of the Label field. |
| RiskLevel | String | high | The risk level for this label, determined by the configured high and low risk scores. Valid values: high (high risk), medium (medium risk), low (low risk), none (no risk detected). |

Important
The Description field explains the Label field and may change. Process results using the Label field and do not rely on the Description field.
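A typical consumer keys its logic on the Label values and maps the overall RiskLevel to an action. The mapping below is one example policy following the guidance earlier in this document (act on high, review medium, pass low and none); it is a sketch, not an official prescription, and `decide` is a hypothetical name.

```python
# Example policy: how to act on each overall RiskLevel value.
ACTIONS = {"high": "block", "medium": "manual_review", "low": "pass", "none": "pass"}

def decide(data: dict) -> tuple[str, list[str]]:
    """Map the response Data object to (action, labels).
    Decisions use the Label field only; Description may change over time."""
    labels = [r["Label"] for r in data.get("Result", [])]
    action = ACTIONS.get(data.get("RiskLevel", "none"), "manual_review")
    return action, labels
```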

Auxiliary information returned

Table 4. Ext

| Name | Type | Description |
| --- | --- | --- |
| CustomImage | JSONArray | Custom image library hit information. Returned when a custom image library is matched. For more information, see CustomImage. |
| TextInImage | Object | Text-in-image hit information. For more information, see TextInImage. |
| PublicFigure | JSONArray | Public figure identification results. Returned when a specific figure is detected. For more information, see PublicFigure. |
| LogoData | JSONArray | Logo hit information. For more information, see LogoData. |

Table 5. CustomImage

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| LibId | String | lib0001 | The ID of the matched custom image library. |
| LibName | String | Custom Image Library A | The name of the matched custom image library. |
| ImageId | String | 20240307 | The ID of the matched custom image. |

Table 6. TextInImage

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| OcrResult | JSONArray | | The recognized text lines in the image. For more information, see OcrResult. |
| RiskWord | StringArray | [ "risk_word_1", "Sensitive word 2" ] | The risk fragments detected in the text. Returned when a tii-type label is triggered. |
| CustomText | JSONArray | | Custom term library hit information. Returned when a custom term library is matched. For more information, see CustomText. |

Table 7. OcrResult

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| Text | String | Identified text line 1 | The content of the recognized text line in the image. |

Table 8. CustomText

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| LibId | String | test20240307 | The ID of the matched custom keyword library. |
| LibName | String | Custom Keyword Library A | The name of the matched custom keyword library. |
| KeyWords | String | Keyword 1 | The matched custom keyword. |

Table 9. PublicFigure

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| FigureName | String | John Doe | The name of the identified person. |
| FigureId | String | xxx001 | The code of the identified person. |
| Location | JSONArray | | The location of the figure. For more information, see Location. |

Note
A code is returned for specific individuals, while a name is returned for others. Retrieve the person's name first. If the name is empty, retrieve the person's code.

Table 10. LogoData

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| Logo | JSONArray | | Logo information. For more information, see Logo. |
| Location | Object | | The location of the logo. For more information, see Location. |

Table 11. Logo

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| Name | String | DingTalk | The logo name. |
| Label | String | logo_sns | The matched label. |
| Confidence | Float | 88.18 | The confidence score. |

Table 12. Location

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| X | Float | 41 | The horizontal distance, in pixels, from the upper-left corner of the image (the origin) to the upper-left corner of the area. |
| Y | Float | 84 | The vertical distance, in pixels, from the upper-left corner of the image (the origin) to the upper-left corner of the area. |
| W | Float | 83 | The width of the area. Unit: pixels. |
| H | Float | 26 | The height of the area. Unit: pixels. |
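X/Y/W/H describe the area's upper-left corner and size, so converting a Location object to a corner box (for example, for cropping) is straightforward. A sketch (`location_to_box` is a hypothetical name):

```python
def location_to_box(loc: dict) -> tuple[float, float, float, float]:
    """Convert X/Y/W/H (pixels, origin at the image's upper-left corner)
    into a (left, top, right, bottom) box, e.g. for PIL's Image.crop."""
    left, top = loc["X"], loc["Y"]
    return (left, top, left + loc["W"], top + loc["H"])
```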

Examples

Request example

{
    "Service": "baselineCheck_global",
    "ServiceParameters": {
        "imageUrl": "https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png",
        "dataId": "img123****"
    }
}

Response example

  • If the system detects risky content, the following response is returned:

{
    "Msg": "OK",
    "Code": 200,
    "Data": {
        "DataId": "img123****",
        "Result": [
            {
                "Label": "pornographic_adultContent",
                "Confidence": 81,
                "Description": "Adult pornographic content"
            },
            {
                "Label": "sexual_partialNudity",
                "Confidence": 98,
                "Description": "Partial nudity or sexy"
            },
            {
                "Label": "violent_explosion",
                "Confidence": 70,
                "Description": "Fireworks content"
            },
            {
                "Label": "violent_explosion_lib",
                "Confidence": 81,
                "Description": "Fireworks content_Hit custom library"
            }
        ],
        "RiskLevel": "high"
    },
    "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
}
  • If the system does not detect any risky content, the following response is returned:

{
    "Msg": "OK",
    "Code": 200,
    "Data": {
        "DataId": "img123****",
        "Result": [
            {
                "Label": "nonLabel",
                "Description": "No risk detected"
            }
        ],
        "RiskLevel": "none"
    },
    "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
}
  • If the submitted image matches an image in your configured allowlist, the following response is returned:

{
    "Msg": "OK",
    "Code": 200,
    "Data": {
        "DataId": "img123****",
        "Result": [
            {
                "Label": "nonLabel_lib",
                "Confidence": 83,
                "Description": "Hit allowlist"
            }
        ],
        "RiskLevel": "none"
    },
    "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
}
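The three response variants above can be told apart by the returned labels. A sketch with the hypothetical helper `classify_outcome`:

```python
def classify_outcome(data: dict) -> str:
    """Distinguish no-risk, allowlist-hit, and risk-detected responses."""
    labels = {r["Label"] for r in data.get("Result", [])}
    if labels == {"nonLabel"}:
        return "no_risk"          # no risk detected
    if "nonLabel_lib" in labels:
        return "allowlisted"      # image matched the configured allowlist
    return "risk_detected" if data.get("RiskLevel") != "none" else "no_risk"
```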
  • Auxiliary information response example

    • When a custom image library is matched, the following response is returned:

{
    "Code": 200,
    "Data": {
        "DataId": "",
        "Ext": {
            "CustomImage": [
                {
                    "ImageId": "12345",
                    "LibId": "TEST20240307",
                    "LibName": "Risk Image Library A"
                }
            ]
        },
        "Result": [
            {
                "Confidence": 100.0,
                "Label": "pornographic_adultContent_lib",
                "Description": "Adult pornographic content_Hit custom library"
            }
        ],
        "RiskLevel": "high"
    },
    "Msg": "success",
    "RequestId": "5F572704-4C03-51DF-8957-D77BF6E7444E"
}
  • When a custom keyword library is matched, the following response is returned:

{
    "Code": 200,
    "Data": {
        "DataId": "",
        "Ext": {
            "TextInImage": {
                "CustomText": [
                    {
                        "KeyWords": "Custom Keyword 1",
                        "LibId": "TEST20240307",
                        "LibName": "Text Blacklist A"
                    }
                ],
                "OcrResult": [
                    {
                        "Text": "Text line 1"
                    },
                    {
                        "Text": "Text line 2"
                    },
                    {
                        "Text": "Text line 3 with custom keyword"
                    }
                ],
                "RiskWord": null
            }
        },
        "Result": [
            {
                "Confidence": 99.0,
                "Label": "pornographic_adultContent_tii_lib",
                "Description": "Text contains pornographic content_Hit custom library"
            }
        ],
        "RiskLevel": "high"
    },
    "Msg": "success",
    "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
}
  • When a text violation in an image is detected, the following response is returned:

{
    "Code": 200,
    "Data": {
        "DataId": "",
        "Ext": {
            "TextInImage": {
                "CustomText": null,
                "OcrResult": [
                    {
                        "Text": "Text line 1"
                    },
                    {
                        "Text": "Text line 2"
                    },
                    {
                        "Text": "Text line 3 with risk content"
                    }
                ],
                "RiskWord": [
                    "Risk Word 1"
                ]
            }
        },
        "Result": [
            {
                "Confidence": 89.15,
                "Label": "political_politicalFigure_name_tii",
                "Description": "Text contains leader's name"
            }
        ],
        "RiskLevel": "high"
    },
    "Msg": "success",
    "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
}
  • When logo information is detected, the following response is returned:

{
    "Code": 200,
    "Data": {
        "DataId": "",
        "Ext": {
            "LogoData": [
                {
                    "Location": {
                        "H": 44,
                        "W": 100,
                        "X": 45,
                        "Y": 30
                    },
                    "Logo": [
                        {
                            "Confidence": 96.15,
                            "Label": "pt_logotoSocialNetwork",
                            "Name": "CCTV"
                        }
                    ]
                }
            ]
        },
        "Result": [
            {
                "Confidence": 96.15,
                "Label": "pt_logotoSocialNetwork",
                "Description": "Social platform logo"
            }
        ],
        "RiskLevel": "high"
    },
    "Msg": "success",
    "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
}
  • When person information is detected, the following response is returned:

{
    "Code": 200,
    "Data": {
        "DataId": "",
        "Ext": {
            "PublicFigure": [
                {
                    "FigureId": null,
                    "FigureName": "Yang San",
                    "Location": [
                        {
                            "H": 520,
                            "W": 13,
                            "X": 14,
                            "Y": 999
                        }
                    ]
                }
            ]
        },
        "Result": [
            {
                "Confidence": 92.05,
                "Label": "political_politicalFigure_3",
                "Description": "Provincial and municipal government personnel"
            }
        ],
        "RiskLevel": "high"
    },
    "Msg": "success",
    "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
}
The request and response examples in this document are formatted for readability. Actual results are not formatted with line breaks or indentation.

Risk label definitions

The following describes the risk label values, their score ranges, and their meanings. Enable or disable each risk label in the console. For some risk labels, you can configure a more granular detection scope. For more information, see the Console User Guide. The labels supported by each image service are listed below.

| Scenario | Service and labels |
| --- | --- |
| General scenarios | Labels supported by general baseline check (baselineCheck_global) |
| AIGC scenarios | Labels supported by AI-generated image detection (aigcDetector_global) |

For labels returned when no risk is detected or the allowlist is matched, see Supported labels when there is no risk or the allowlist is matched.

Store the risk labels and confidence scores returned by the system so that you can reference them for subsequent content governance. Set priorities for manual review or annotation, and implement layered and categorized content governance measures based on the risk labels.

Table 13. Labels supported by general baseline check (baselineCheck_global)

For every label in this table, the confidence score ranges from 0 to 100; a higher score indicates a higher confidence level.

| Label value | Description |
| --- | --- |
| pornographic_adultContent | The image may contain adult or pornographic content. |
| pornographic_cartoon | The image may contain pornographic cartoon content. |
| pornographic_adultToys | The image may contain adult toy content. |
| pornographic_art | The image may contain pornographic artwork. |
| pornographic_adultContent_tii | The text in the image may contain pornographic content. |
| pornographic_suggestive_tii | The text in the image may contain vulgar content. |
| pornographic_o_tii | The text in the image may contain inappropriate content. For more information, see the Content Moderation console. |
| pornographic_organs_tii | The text in the image may describe sexual organs. |
| pornographic_adultToys_tii | The text in the image may contain content about adult toys. |
| sexual_suggestiveContent | The image may contain vulgar or sexually suggestive content. |
| sexual_femaleUnderwear | The image may contain underwear or swimwear. |
| sexual_cleavage | The image may feature female cleavage. |
| sexual_maleTopless | The image may show shirtless men. |
| sexual_cartoon | The image may contain sexually suggestive animated content. |
| sexual_shoulder | The image may show sexually suggestive shoulders. |
| sexual_femaleLeg | The image may show sexually suggestive legs. |
| sexual_pregnancy | The image may contain pregnancy photos or breastfeeding. |
| sexual_feet | The image may show sexually suggestive feet. |
| sexual_kiss | The image may contain kissing. |
| sexual_intimacy | The image may contain intimate behavior. |
| sexual_intimacyCartoon | The image may contain intimate actions in cartoons or anime. |
| violent_explosion | The image may contain content related to smoke or fire. For more information, see the Content Moderation console. |
| violent_burning | The image may contain burning content. |
| violent_armedForces | The image is suspected of containing content related to a terrorist organization. |
| violent_weapon | The image may contain military equipment. |
| violent_crowding | The image may show a crowd gathering. |
| violent_gun | The image may contain guns. |
| violent_knives | The image may contain knives. |
| violent_horrific | The image may contain horrific content. |
| violent_nazi | The image may contain Nazi-related content. |
| violent_bloody | The image may contain bloody content. |
| violent_extremistGroups_tii | The text in the image may contain content about extremist groups. |
| violent_extremistIncident_tii | The text in the image may contain content about extremist incidents. |
| violence_weapons_tii | The text in the image may describe guns and knives. |
| violent_ACU | The image may contain combat uniforms. |
| contraband_drug | The image may contain drug-related content. |
| contraband_drug_tii | The text in the image may describe illegal drugs. |
| contraband_gamble | The image may contain gambling-related content. |
| contraband_gamble_tii | The text in the image may describe gambling. |
| inappropriate_smoking | The image may contain smoking-related content. |
| inappropriate_drinking | The image may contain alcohol-related content. |
| inappropriate_tattoo | The image may contain tattoos. |
| inappropriate_middleFinger | The image may show a middle finger gesture. |
| inappropriate_foodWasting | The image may contain content about wasting food. |
| profanity_Offensive_tii | The text in the image may contain severe profanity, verbal attacks, or offensive content. |
| profanity_Oral_tii | The text in the image may contain colloquial profanity. |
| religion_clothing | The image may contain special logos and elements. For more information, see the Content Moderation console. |
| religion_logo | Same as religion_clothing. |
| religion_flag | Same as religion_clothing. |
| religion_taboo1_tii | The text in the image may contain prohibited content. For more information, see the Content Moderation console. |
| religion_taboo2_tii | Same as religion_taboo1_tii. |
| flag_country | The image may contain flag-related content. |
| political_historicalNihility | The image may contain specific content. For more information, see the Content Moderation console. |
| political_historicalNihility_tii | Same as political_historicalNihility. |
| political_politicalFigure_1 | Same as political_historicalNihility. |
| political_politicalFigure_2 | Same as political_historicalNihility. |
| political_politicalFigure_3 | Same as political_historicalNihility. |
| political_politicalFigure_4 | Same as political_historicalNihility. |
| political_politicalFigure_name_tii | Same as political_historicalNihility. |
| political_prohibitedPerson_1 | Same as political_historicalNihility. |
| political_prohibitedPerson_2 | Same as political_historicalNihility. |
| political_prohibitedPerson_tii | Same as political_historicalNihility. |
| political_taintedCelebrity | Same as political_historicalNihility. |
| political_taintedCelebrity_tii | Same as political_historicalNihility. |
| political_CNFlag | Same as political_historicalNihility. |
| political_CNMap | Same as political_historicalNihility. |
| political_logo | Same as political_historicalNihility. |
| political_outfit | Same as political_historicalNihility. |
| political_badge | Same as political_historicalNihility. |
| pt_logo | The image may contain a logo. |
| QRCode | The image may contain a QR code. |
| pt_custom_01 | Custom label 01. |
| pt_custom_02 | Custom label 02. |
tii is an abbreviation for "text in image". A label ending in tii indicates that a text violation was detected in the image. You can also configure custom image libraries for each risk label. If a moderated image closely matches an image in a custom library, the system returns the corresponding risk label with _lib appended. For example, if you configure a custom image library for "violent_explosion" and a moderated image matches an image in that library, the system returns violent_explosion_lib in the label parameter. The confidence parameter represents the similarity score.
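Because _lib and _tii are suffixes appended to a base label, moderation policy can be keyed on the base label so that, for example, violent_explosion_lib and violent_explosion share one rule. A sketch (`base_label` is a hypothetical helper):

```python
def base_label(label: str) -> str:
    """Strip the _lib and _tii suffixes to recover the underlying risk label.
    Checks _lib first because a label can end in _tii_lib."""
    for suffix in ("_lib", "_tii"):
        if label.endswith(suffix):
            label = label[: -len(suffix)]
    return label
```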

If the system detects no anomalies in the submitted image, or if it closely matches an image in your configured allowlist, the returned label and confidence score are as shown in the table below.

| Label value | Confidence score range (confidence) | Description |
| --- | --- | --- |
| nonLabel | This field is not present. | No risk was detected in this image, or you have disabled all moderation items. For more information, see the Content Moderation console. |
| nonLabel_lib | 0 to 100. A higher score indicates a higher confidence level. | This image closely matches an image in your configured allowlist. For more information, see the Content Moderation console. |

Code descriptions

The following table describes the response codes returned by the API. Only requests that return an HTTP status code of 200 are charged. Requests that return other error codes are not billed.

| Code | Description |
| --- | --- |
| 200 | The request is successful. |
| 400 | A request parameter is empty. |
| 401 | A request parameter is invalid. Check and correct the parameter value. |
| 402 | A request parameter length does not meet the requirements. Check and adjust. |
| 403 | The request exceeds the QPS limit. Check and adjust the concurrency. |
| 404 | An error occurred while downloading the image. Check the image URL or retry. |
| 405 | The image download timed out. The image may be inaccessible. Check the URL, adjust, and retry. |
| 406 | The image is too large. Check and adjust the image size, then retry. |
| 407 | The image format is not supported. Check and adjust, then retry. |
| 408 | The account lacks permission. The service may not be activated, the account may have overdue payments, or the calling account may not be authorized. |
| 500 | A system exception occurred. |
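Callers can sort these codes into retryable and non-retryable failures. The policy below is an illustrative sketch, not an official recommendation: download and system errors are retried, 403 throttling gets exponential backoff, and parameter errors are surfaced for a fix.

```python
RETRYABLE = {404, 405, 500}   # transient download / system errors
THROTTLED = 403               # QPS exceeded: back off before retrying

def should_retry(code: int, attempt: int, max_attempts: int = 3) -> tuple[bool, float]:
    """Decide whether to retry a failed call and how long to wait, in seconds."""
    if attempt >= max_attempts:
        return False, 0.0
    if code == THROTTLED:
        return True, min(2 ** attempt, 8.0)  # exponential backoff, capped at 8 s
    if code in RETRYABLE:
        return True, 1.0
    return False, 0.0  # 400-402, 406-408: fix the request instead of retrying
```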