
Content Moderation: Synchronous moderation API of Image Moderation 2.0

Last Updated: Sep 19, 2024

Feature description

You can call the Image Moderation 2.0 API to identify whether an image contains content or elements that violate regulations on network content dissemination, disrupt the content order of a platform, or degrade user experience. Image Moderation 2.0 supports more than 40 content risk labels and more than 40 risk control items. Based on the risk labels and confidence scores returned by API calls, your business scenarios, and your platform's content governance rules, you can develop further moderation or governance measures for specific image content. For more information, see Introduction to Image Moderation 2.0 and its billing method.

Usage guide

  1. Create an Alibaba Cloud account: Click Register account and follow the instructions to create an Alibaba Cloud account.

  2. Activate the Content Moderation service in pay-as-you-go mode: Make sure that the Content Moderation service is activated. You are not charged for activating this service. After you call the Image Moderation 2.0 API, you are charged by using the pay-as-you-go billing method. For more information, see Billing method.

  3. Create an AccessKey pair: Make sure that you have created an AccessKey pair as a Resource Access Management (RAM) user. If you want to use the AccessKey pair of the RAM user, use your Alibaba Cloud account to grant the AliyunYundunGreenWebFullAccess permission to the RAM user. For more information, see Permission.

  4. Call the API: We recommend that you use SDKs to call the API. For more information, see Image Moderation 2.0 SDKs and usage guide.

Usage notes

You can call the Image Moderation 2.0 API to create an image moderation task. For more information about how to construct an HTTP request, see Make HTTPS calls. Alternatively, you can use the SDKs. For more information, see Image Moderation 2.0 SDKs and usage guide.
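If you construct HTTPS calls yourself instead of using the SDKs, each request must carry an RPC-style signature. The following stdlib-only sketch illustrates the general signing scheme (sorted, percent-encoded parameters signed with HMAC-SHA1); the parameter set shown is a hypothetical minimal example, and in practice the SDKs compute this for you.

```python
import base64
import hmac
from urllib.parse import quote


def percent_encode(value: str) -> str:
    # RFC 3986 encoding: everything except unreserved characters is escaped.
    return quote(value, safe="~")


def sign_rpc_request(params: dict, access_key_secret: str, http_method: str = "POST") -> str:
    # 1. Sort parameters by name and build the canonicalized query string.
    canonical = "&".join(
        f"{percent_encode(k)}={percent_encode(v)}" for k, v in sorted(params.items())
    )
    # 2. Build the string to sign from the method, the resource path, and the query string.
    string_to_sign = f"{http_method}&{percent_encode('/')}&{percent_encode(canonical)}"
    # 3. HMAC-SHA1 with the AccessKey secret plus a trailing "&", then Base64-encode.
    digest = hmac.new(
        (access_key_secret + "&").encode("utf-8"),
        string_to_sign.encode("utf-8"),
        "sha1",
    ).digest()
    return base64.b64encode(digest).decode("ascii")


# Hypothetical parameter values, for illustration only.
params = {
    "Action": "ImageModeration",
    "Service": "baselineCheck_global",
    "SignatureMethod": "HMAC-SHA1",
    "SignatureVersion": "1.0",
}
signature = sign_rpc_request(params, "testSecret")
```

The signature is deterministic for a given parameter set and secret, which makes a hand-rolled implementation easy to unit-test against SDK output.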

  • Operation: ImageModeration

  • Supported regions and endpoints

    | Region    | Public endpoint                               | Internal endpoint                                 | Supported services                           |
    |-----------|-----------------------------------------------|---------------------------------------------------|----------------------------------------------|
    | Singapore | https://green-cip.ap-southeast-1.aliyuncs.com | https://green-cip-vpc.ap-southeast-1.aliyuncs.com | baselineCheck_global and aigcDetector_global |

  • Billing method

    You are charged for calling this operation on a pay-as-you-go basis. Only requests whose HTTP status code is 200 are billed. For more information, see Billing method.

  • Limits on images

    • The images must be in PNG, JPG, JPEG, BMP, WEBP, TIFF, SVG, HEIC (the longest side must be less than 8,192 pixels), GIF (the first frame is extracted), or ICO (the last layer is extracted) format.

    • An image can be up to 20 MB in size, its height or width cannot exceed 16,384 pixels, and its total number of pixels cannot exceed 167 million. We recommend that you submit images with a resolution of at least 200 × 200 pixels so that the detection algorithms of Content Moderation are not affected.

    • The maximum download duration for an image is 3 seconds. If an image fails to be downloaded within 3 seconds, a timeout error is returned.
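The limits above can be checked on the client before a request is submitted, which avoids a paid round trip for images that would be rejected anyway. A minimal sketch, using only the limits stated in this section (the function name is ours):

```python
# Limits from the "Limits on images" section.
SUPPORTED_FORMATS = {"PNG", "JPG", "JPEG", "BMP", "WEBP", "TIFF", "SVG", "HEIC", "GIF", "ICO"}
MAX_SIZE_BYTES = 20 * 1024 * 1024   # 20 MB
MAX_SIDE_PIXELS = 16_384            # height or width
MAX_TOTAL_PIXELS = 167_000_000      # total pixels
RECOMMENDED_MIN_SIDE = 200          # recommended minimum resolution


def check_image_limits(fmt: str, width: int, height: int, size_bytes: int) -> list[str]:
    """Return a list of limit violations; an empty list means the image passes.

    Note: HEIC additionally requires the longest side to be under 8,192 pixels,
    which is not checked here.
    """
    problems = []
    if fmt.upper() not in SUPPORTED_FORMATS:
        problems.append(f"unsupported format: {fmt}")
    if size_bytes > MAX_SIZE_BYTES:
        problems.append("image exceeds 20 MB")
    if max(width, height) > MAX_SIDE_PIXELS:
        problems.append("height or width exceeds 16,384 pixels")
    if width * height > MAX_TOTAL_PIXELS:
        problems.append("total pixels exceed 167 million")
    if min(width, height) < RECOMMENDED_MIN_SIDE:
        problems.append("below recommended 200 x 200 resolution (accuracy may suffer)")
    return problems
```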

QPS limit

You can call this operation up to 100 times per second per account. If the number of calls per second exceeds the limit, throttling is triggered. As a result, your business may be affected. We recommend that you take note of the limit when you call this operation. If you require higher queries per second (QPS) to meet larger business requirements or urgent scale-out requirements, contact your customer business manager (CBM).
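To stay under the 100 QPS account-level limit, clients often throttle locally instead of relying on server-side throttling errors. A token-bucket sketch (the class and the injectable clock are ours, for illustration; in production you would pass `time.monotonic` as the clock):

```python
class TokenBucket:
    """Client-side throttle: allows at most `rate` calls per second."""

    def __init__(self, rate: float, clock):
        self.rate = rate        # tokens added per second
        self.capacity = rate    # maximum burst size
        self.tokens = rate
        self.clock = clock      # injectable time source, e.g. time.monotonic
        self.last = clock()

    def try_acquire(self) -> bool:
        now = self.clock()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should wait or back off before retrying
```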

Debugging

Before you deploy SDKs, you can use Alibaba Cloud OpenAPI Explorer to debug the Image Moderation 2.0 API online and view the sample code for the call and SDK dependencies. This way, you can understand how to call the API and how to set related parameters.

Important

Before you call the Content Moderation API, you must log on to the Content Moderation console with your Alibaba Cloud account. The fees incurred by your API calls are billed to that account.

Request parameters

For more information about the common request parameters that must be included in all Content Moderation API requests, see Common parameters.

The request body is a JSON structure. The following table describes the parameters that are contained in the request body.

| Parameter | Type | Required | Example | Description |
|---|---|---|---|---|
| Service | String | Yes | baselineCheck_global | The moderation service supported by Image Moderation 2.0. Valid values: baselineCheck_global (common baseline moderation) and aigcDetector_global (AI-generated image identification). |
| ServiceParameters | JSONString | Yes | | The parameters related to the content moderation object. The value is a JSON string. For more information, see Table 1. ServiceParameters. |

Note

For more information about the differences between the services, see Service description. For more information about the AI-generated image identification service, see AI-generated image identification service of Image Moderation 2.0.

Table 1. ServiceParameters

Image Moderation 2.0 supports three image upload methods. You must use one of them:

  • Submit the URL of an image (imageUrl) for image moderation.

  • Obtain Object Storage Service (OSS) authorization and submit the ossBucketName, ossObjectName, and ossRegionId of an image for image moderation.

  • Submit a local image file for image moderation. This method does not occupy your OSS storage space, and the image file is stored for only 30 minutes. The SDKs integrate the feature of uploading local images. For more information about the sample code, see Image Moderation 2.0 SDKs and usage guide.

| Parameter | Type | Required | Example | Description |
|---|---|---|---|---|
| imageUrl | String | Yes, if you submit an image by URL | https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png | The URL of the object that you want to moderate. Make sure that the URL can be accessed over the Internet and does not exceed 2,048 characters in length. The URL cannot contain Chinese characters, and you can specify only one URL in each request. |
| ossBucketName | String | Yes, if you submit an image by OSS location | bucket_01 | The name of the authorized OSS bucket. Before you submit the internal endpoint of an image in an OSS bucket, you must use your Alibaba Cloud account to go to the Cloud Resource Access Authorization page and grant the permissions to access the OSS bucket. |
| ossObjectName | String | Yes, if you submit an image by OSS location | 2022023/04/24/test.jpg | The name of the object in the authorized OSS bucket. |
| ossRegionId | String | Yes, if you submit an image by OSS location | cn-beijing | The region of the OSS bucket. |
| dataId | String | No | img123**** | The ID of the object that you want to moderate. The ID can contain uppercase letters, lowercase letters, digits, underscores (_), hyphens (-), and periods (.). It can be up to 64 characters in length and uniquely identifies your business data. |
| referer | String | No | www.aliyun.com | The referer request header, which is used in scenarios such as hotlink protection. The value can be up to 256 characters in length. |
| infoType | String | Yes | customImage,textInImage | The auxiliary information to return. Valid values: customImage (information about the custom image libraries that are hit), textInImage (information about the text in the image), publicFigure (information about the figures that are hit), and logoData (logo information). You can specify multiple values separated by commas (,). For example, "customImage,textInImage" returns both the custom image library information and the text-in-image information. |

Note

Figure information and logo information can be returned only by a moderation service whose moderation type is advanced image moderation. For more information, see Service description.
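The ServiceParameters value is itself a JSON string embedded in the request body, not a nested object. A sketch of building and validating it for the URL upload method, based on the constraints described above (the helper name is ours):

```python
import json


def build_service_parameters(image_url: str, data_id=None) -> str:
    """Serialize ServiceParameters, enforcing the documented constraints."""
    if len(image_url) > 2048:
        raise ValueError("imageUrl must not exceed 2,048 characters")
    if not image_url.isascii():
        raise ValueError("imageUrl must not contain Chinese or other non-ASCII characters")
    params = {"imageUrl": image_url}
    if data_id is not None:
        allowed = set(
            "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_-."
        )
        if len(data_id) > 64 or not set(data_id) <= allowed:
            raise ValueError("dataId must be <= 64 chars of letters, digits, _, -, .")
        params["dataId"] = data_id
    # The request body expects this object serialized as a JSON *string*.
    return json.dumps(params)
```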

Response parameters

| Parameter | Type | Example | Description |
|---|---|---|---|
| RequestId | String | 70ED13B0-BC22-576D-9CCF-1CC12FEAC477 | The request ID, which is used to locate and troubleshoot issues. |
| Data | Object | | The result of image moderation. For more information, see Table 2. Data. |
| Code | Integer | 200 | The returned HTTP status code. For more information, see Response codes. |
| Msg | String | OK | The message returned for the request. |

Table 2. Data

| Parameter | Type | Example | Description |
|---|---|---|---|
| Result | Array | | The moderation results, including parameters such as Label and Confidence. For more information, see Table 3. result. |
| RiskLevel | String | high | The risk level, which is returned based on the configured risk scores. Valid values: high, medium, low, and none. |
| DataId | String | img123****** | The ID of the moderated object. If you specify the dataId parameter in the request, its value is returned in the response. |
| Ext | Object | | The auxiliary reference information of the image. For more information, see Table 4. Ext. |

Note

We recommend the following handling measures: handle high-risk content directly, perform manual review on medium-risk content, handle low-risk content when stricter risk control is required, and handle content on which no risk is detected based on your business requirements. You can configure risk scores in the Content Moderation console.
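The handling suggestions for RiskLevel can be expressed as a simple dispatch; the action names below are ours, chosen for illustration:

```python
def handle_by_risk_level(risk_level: str) -> str:
    """Map the RiskLevel field to a suggested action."""
    actions = {
        "high": "block",            # handle high-risk content directly
        "medium": "manual_review",  # send medium-risk content to manual review
        "low": "observe",           # handle when stricter risk control is required
        "none": "pass",             # handle according to your business requirements
    }
    try:
        return actions[risk_level]
    except KeyError:
        raise ValueError(f"unexpected RiskLevel: {risk_level!r}")
```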

Table 3. result

| Parameter | Type | Example | Description |
|---|---|---|---|
| Label | String | violent_explosion | The labels returned after image moderation. Multiple risk labels and the corresponding confidence scores may be returned for an image. For more information about supported labels, see Descriptions of risk labels. |
| Confidence | Float | 81.22 | The confidence score. Valid values: 0 to 100. The value is accurate to two decimal places. Some labels do not have confidence scores. For more information, see Descriptions of risk labels. |
| Description | String | Content about smoke or fire | The description of the Label field. |

Important

The Description field is an explanation of the Label field and may change. We recommend that you process the moderation results based on the Label field instead of this field.
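Because Description may change, result handling should key off Label and Confidence. A sketch that extracts the highest-confidence label from a Data.Result array (the function name is ours; the sample data mirrors the sample success response in this topic):

```python
def top_risk(result: list[dict]) -> tuple:
    """Return (label, confidence) for the highest-confidence entry.

    Entries without a Confidence field (such as nonLabel) default to 0.
    """
    best = max(result, key=lambda r: r.get("Confidence", 0))
    return best["Label"], best.get("Confidence", 0)


# A Result array shaped like the sample success response.
result = [
    {"Label": "pornographic_adultContent", "Confidence": 81, "Description": "Adult pornographic content"},
    {"Label": "sexual_partialNudity", "Confidence": 98, "Description": "Naked or sexy content"},
    {"Label": "violent_explosion", "Confidence": 70, "Description": "Content about smoke or fire"},
]
```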

Auxiliary information returned

Table 4. Ext

| Parameter | Type | Example | Description |
|---|---|---|---|
| CustomImage | JSONArray | | If a custom image library is hit, information about the custom image library is returned. For more information, see Table 5. CustomImage. |
| TextInImage | Object | | The information about the text that is hit in the image. For more information, see Table 6. TextInImage. |
| PublicFigure | JSONArray | | If the image contains a specific figure, the ID of the identified figure is returned. For more information, see Table 9. PublicFigure. |
| LogoData | JSONArray | | The information about the logo that is hit. For more information, see Table 10. LogoData. |

Table 5. CustomImage

| Parameter | Type | Example | Description |
|---|---|---|---|
| LibId | String | lib0001 | The ID of the custom image library that is hit. |
| LibName | String | Custom image library A | The name of the custom image library that is hit. |
| ImageId | String | 20240307 | The ID of the custom image that is hit. |

Table 6. TextInImage

| Parameter | Type | Example | Description |
|---|---|---|---|
| OcrResult | JSONArray | | The information about each row of text identified in the image. For more information, see Table 7. OcrResult. |
| RiskWord | StringArray | ["Risky word 1", "Risky word 2"] | The risky words that are hit in the text. This field is returned when a label of the tii type is hit. |
| CustomText | JSONArray | | If a custom term library is hit, the information about the custom term library is returned. For more information, see Table 8. CustomText. |

Table 7. OcrResult

| Parameter | Type | Example | Description |
|---|---|---|---|
| Text | String | Identified text row 1 | The content of a text row identified in the image. |

Table 8. CustomText

| Parameter | Type | Example | Description |
|---|---|---|---|
| LibId | String | test20240307 | The ID of the custom term library that is hit. |
| LibName | String | Custom term library A | The name of the custom term library that is hit. |
| KeyWords | String | Keyword 1 | The custom keywords that are hit. |

Table 9. PublicFigure

| Parameter | Type | Example | Description |
|---|---|---|---|
| FigureName | String | Alice | The information about the identified figure. |
| FigureId | String | xxx001 | The ID of the identified figure. |
| Location | JSONArray | | The location information of the figure. For more information, see Table 12. Location. |

Note

Figure IDs are returned for specific figures, and figure information is returned for other figures. We recommend that you use the figure information as the auxiliary reference. If the figure information is empty, you can use the figure ID as the auxiliary reference.

Table 10. LogoData

| Parameter | Type | Example | Description |
|---|---|---|---|
| Logo | JSONArray | | The logo information. For more information, see Table 11. Logo. |
| Location | Object | | The location information of the logo. For more information, see Table 12. Location. |

Table 11. Logo

| Parameter | Type | Example | Description |
|---|---|---|---|
| Name | String | DingTalk | The logo name. |
| Label | String | logo_sns | The label that is hit. |
| Confidence | Float | 88.18 | The confidence score. |

Table 12. Location

| Parameter | Type | Example | Description |
|---|---|---|---|
| X | Float | 41 | The horizontal distance from the y-axis to the upper-left corner of the detected area, with the upper-left corner of the image as the coordinate origin. Unit: pixels. |
| Y | Float | 84 | The vertical distance from the x-axis to the upper-left corner of the detected area, with the upper-left corner of the image as the coordinate origin. Unit: pixels. |
| W | Float | 83 | The width of the detected area. Unit: pixels. |
| H | Float | 26 | The height of the detected area. Unit: pixels. |

Examples

Sample requests

{
    "Service": "baselineCheck_global",
    "ServiceParameters": {
        "imageUrl": "https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png",
        "dataId": "img123****"
    }
}

Sample success response

  • If the system detects risky content, the following sample code is returned:

    {
        "Msg": "OK",
        "Code": 200,
        "Data": {
            "DataId": "img123****",
            "Result": [
                {
                    "Label": "pornographic_adultContent",
                    "Confidence": 81,
                    "Description": "Adult pornographic content"
                },
                {
                    "Label": "sexual_partialNudity",
                    "Confidence": 98,
                    "Description": "Naked or sexy content"
                },
                {
                    "Label": "violent_explosion",
                    "Confidence": 70,
                    "Description": "Content about smoke or fire"
                },
                {
                    "Label": "violent_explosion_lib",
                    "Confidence": 81,
                    "Description": "Content about smoke or fire_Hit custom library"
                }
            ],
            "RiskLevel": "high"
        },
        "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
    }
  • If the system does not detect risky content, the following sample code is returned:

    {
        "Msg": "OK",
        "Code": 200,
        "Data": {
            "DataId": "img123****",
            "Result": [
                {
                    "Label": "nonLabel",
                    "Description": "No risks are detected"
                }
            ],
            "RiskLevel": "none"
        },
        "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
    }
  • If the system detects that the image matches an image in the configured image library that is exempt from risk detection, the following sample code is returned:

    {
        "Msg": "OK",
        "Code": 200,
        "Data": {
            "DataId": "img123****",
            "Result": [
                {
                    "Label": "nonLabel_lib",
                    "Confidence": 83,
                    "Description": "Hits an image library that is exempt from risk detection"
                }
            ],
            "RiskLevel": "none"
        },
        "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
    }
  • Examples of returned auxiliary information

    • If the system detects that the image matches an image in a custom image library, the following sample code is returned:

    {
        "Code": 200,
        "Data": {
            "DataId": "",
            "Ext": {
                "CustomImage": [
                    {
                        "ImageId": "12345",
                        "LibId": "TEST20240307",
                        "LibName": "Risky image library A"
                    }
                ]
            },
            "Result": [
                {
                    "Confidence": 100.0,
                    "Label": "pornographic_adultContent_lib",
                    "Description": "Adult pornographic content_Hit custom image library"
                }
            ],
            "RiskLevel": "high"
        },
        "Msg": "success",
        "RequestId": "5F572704-4C03-51DF-8957-D77BF6E7444E"
    }
    • If the system detects that the text in the image matches a term in a custom term library, the following sample code is returned:

    {
        "Code": 200,
        "Data": {
            "DataId": "",
            "Ext": {
                "TextInImage": {
                    "CustomText": [
                        {
                            "KeyWords": "Custom keyword 1",
                            "LibId": "TEST20240307",
                            "LibName": "Custom term library A containing a text blacklist"
                        }
                    ],
                    "OcrResult": [
                        {
                            "Text": "Text row 1"
                        },
                        {
                            "Text": "Text row 2"
                        },
                        {
                            "Text": "Text row 3 with custom keywords"
                        }
                    ],
                    "RiskWord": null
                }
            },
            "Result": [
                {
                    "Confidence": 99.0,
                    "Label": "pornographic_adultContent_tii_lib",
                    "Description": "Text contains pornographic content_Hit a custom term library"
                }
            ],
            "RiskLevel": "high"
        },
        "Msg": "success",
        "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
    }
    • If the system detects that the text in the image contains risky words, the following sample code is returned:

    {
        "Code": 200,
        "Data": {
            "DataId": "",
            "Ext": {
                "TextInImage": {
                    "CustomText": null,
                    "OcrResult": [
                        {
                            "Text": "Text row 1"
                        },
                        {
                            "Text": "Text row 2"
                        },
                        {
                            "Text": "Text row 3 with risky words"
                        }
                    ],
                    "RiskWord": [
                        "Risky word 1"
                    ]
                }
            },
            "Result": [
                {
                    "Confidence": 89.15,
                    "Label": "political_politicalFigure_name_tii",
                    "Description": "Text contains the name of a leader"
                }
            ],
            "RiskLevel": "high"
        },
        "Msg": "success",
        "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
    }
    • If the system detects that the image contains logo information, the following sample code is returned:

    {
        "Code": 200,
        "Data": {
            "DataId": "",
            "Ext": {
                "LogoData": [
                    {
                        "Location": {
                            "H": 44,
                            "W": 100,
                            "X": 45,
                            "Y": 30
                        },
                        "Logo": [
                            {
                                "Confidence": 96.15,
                                "Label": "pt_logotoSocialNetwork",
                                "Name": "CCTV"
                            }
                        ]
                    }
                ]
            },
            "Result": [
                {
                    "Confidence": 96.15,
                    "Label": "pt_logotoSocialNetwork",
                    "Description": "Social platform logo"
                }
            ],
            "RiskLevel": "high"
        },
        "Msg": "success",
        "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
    }
    • If the system detects that the image contains the information about a figure, the following sample code is returned:

    {
        "Code": 200,
        "Data": {
            "DataId": "",
            "Ext": {
                "PublicFigure": [
                    {
                        "FigureId": null,
                        "FigureName": "Yang San",
                        "Location": [
                            {
                                "H": 520,
                                "W": 13,
                                "X": 14,
                                "Y": 999
                            }
                        ]
                    }
                ]
            },
            "Result": [
                {
                    "Confidence": 92.05,
                    "Label": "political_politicalFigure_3",
                    "Description": "Personnel of provincial or municipal governments"
                }
            ],
            "RiskLevel": "high"
        },
        "Msg": "success",
        "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
    }
Note

The sample requests and responses in this topic are formatted to improve readability. Actual responses are not formatted with line breaks or indentation.

Descriptions of risk labels

The following table describes the values of risk labels, the corresponding scores of confidence levels, and definitions of the risk labels. You can configure each risk label in the Content Moderation console. You can also configure more specific moderation scopes for some risk labels. For more information, see Console Guide.

Note

We recommend that you store risk labels and scores of confidence levels returned by the system within a specified period of time. You can use the stored data as a reference for subsequent content governance. You can set the priority of manual review, the priority of annotations, or content governance measures of hierarchical classification based on the risk labels.

Table 13. Labels supported by baselineCheck_global

For every label in the following table, the confidence score ranges from 0 to 100, and a higher score indicates a higher confidence level.

| Label value | Definition |
|---|---|
| pornographic_adultContent | The image contains suspected adult pornographic content. |
| pornographic_cartoon | The image contains suspected cartoon pornographic content. |
| pornographic_adultToys | The image contains suspected adult sexual organ content. |
| pornographic_art | The image contains suspected artwork with pornographic content. |
| pornographic_adultContent_tii | The text in the image contains pornographic content. |
| pornographic_suggestive_tii | The text in the image contains vulgar content. |
| pornographic_o_tii | The text in the image contains inappropriate content. For more information, see the Content Moderation console. |
| pornographic_organs_tii | The text in the image contains descriptions of sexual organs. |
| pornographic_adultToys_tii | The text in the image contains descriptions of adult sex toys. |
| sexual_suggestiveContent | Suspected vulgar or sexually suggestive content is detected. |
| sexual_femaleUnderwear | The image contains suspected underwear and swimsuit content. |
| sexual_cleavage | The image contains suspected female cleavage characteristics. |
| sexual_maleTopless | The image contains suspected male shirtless content. |
| sexual_cartoon | Suspected sexy animation content is detected. |
| sexual_shoulder | The image contains suspected sexy shoulder content. |
| sexual_femaleLeg | The image contains suspected sexy leg content. |
| sexual_pregnancy | The image contains suspected pregnancy and breastfeeding content. |
| sexual_feet | The image contains suspected sexy foot content. |
| sexual_kiss | The image contains suspected kissing content. |
| sexual_intimacy | The image contains suspected intimate behavior content. |
| sexual_intimacyCartoon | The image contains suspected cartoon or anime intimate actions. |
| violent_explosion | Suspected content about smoke or fire is detected. For more information, see the Content Moderation console. |
| violent_burning | The image contains suspected burning content. |
| violent_armedForces | The image contains suspected terrorist organizations. |
| violent_weapon | The image contains suspected military equipment. |
| violent_crowding | The image contains a suspected crowd gathering. |
| violent_gun | The image contains suspected firearms. |
| violent_knives | The image contains suspected knives. |
| violent_horrific | The image contains suspected thrilling content. |
| violent_nazi | The image contains suspected Nazi content. |
| violent_bloody | The image contains suspected bloody content. |
| violent_extremistGroups_tii | The text in the image contains content about terrorist organizations. |
| violent_extremistIncident_tii | The text in the image contains content about terrorist events. |
| violence_weapons_tii | The text in the image contains descriptions of firearms and knives. |
| contraband_drug | The image contains suspected drug content. |
| contraband_drug_tii | The text in the image contains suspected descriptions of prohibited drugs. |
| contraband_gamble | The image contains suspected gambling content. |
| contraband_gamble_tii | The text in the image contains suspected descriptions of gambling behavior. |
| inappropriate_smoking | The image contains suspected smoking-related content. |
| inappropriate_drinking | The image contains suspected alcohol-related content. |
| inappropriate_tattoo | The image contains suspected tattoo content. |
| inappropriate_middleFinger | The image contains a suspected middle finger gesture. |
| inappropriate_foodWasting | The image contains suspected content about wasting food. |
| profanity_Offensive_tii | The text in the image contains suspected content such as serious abuse, verbal attacks, and verbal offenses. |
| profanity_Oral_tii | The text in the image contains suspected content such as verbal abuse. |
| religion_clothing | The image contains suspected special logos and elements. For more information, see the Content Moderation console. |
| religion_logo | The image contains suspected special logos and elements. For more information, see the Content Moderation console. |
| religion_flag | The image contains suspected special logos and elements. For more information, see the Content Moderation console. |
| religion_taboo1_tii | The text in the image contains prohibited content. For more information, see the Content Moderation console. |
| religion_taboo2_tii | The text in the image contains prohibited content. For more information, see the Content Moderation console. |
| flag_country | The image contains suspected flag-related content. |

Note

tii stands for text in the image. If the moderation result of an image contains a label that ends with tii, text violations in the image are detected.

In addition, you can configure custom image libraries for the preceding risk labels. If the moderated image has a high similarity with an image in a custom image library, the system returns the corresponding risk label with the suffix _lib appended to the original label value. For example, if you configure a custom image library for violent_explosion and the moderated image is highly similar to an image in that library, the system returns a label value of violent_explosion_lib together with the corresponding confidence score. In this case, a higher score indicates a higher similarity between the two images.
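The _lib and _tii suffixes can be parsed off mechanically to recover the base risk label, which is useful when aggregating statistics per base label (a sketch; the function name is ours):

```python
def parse_label(label: str) -> dict:
    """Split a returned label into its base value and suffix flags.

    _lib -> the hit came from a custom image or term library
    _tii -> the risk was found in text in the image
    """
    from_library = label.endswith("_lib")
    if from_library:
        label = label[: -len("_lib")]
    text_in_image = label.endswith("_tii")
    if text_in_image:
        label = label[: -len("_tii")]
    return {"base": label, "lib": from_library, "tii": text_in_image}
```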

If the system detects no exception on the specified image, or if the image has a high similarity with an image in the configured image library that is exempt from risk detection, the system returns a label value and confidence score described in the following table.

| Label value | Score of the confidence level | Definition |
|---|---|---|
| nonLabel | N/A | The system detects no risks in the image, or the moderated items are disabled. For more information, see the Content Moderation console. |
| nonLabel_lib | Valid values: 0 to 100. A higher score indicates a higher confidence level. | The image has a high similarity with an image in the configured image library that is exempt from risk detection. For more information, see the Content Moderation console. |

Response codes

The following table describes the response codes. You are charged on a pay-as-you-go basis only for requests whose response code is 200.

| Code | Description |
|---|---|
| 200 | The request is successful. |
| 400 | One or more required request parameters are missing. |
| 401 | The values specified for one or more request parameters are invalid. |
| 402 | Invalid request parameters. Check and modify the parameters and try again. |
| 403 | The QPS of requests exceeds the upper limit. Reduce the number of requests that are sent per second. |
| 404 | The specified image failed to be downloaded. Check the URL of the image or try again. |
| 405 | Downloading the specified image timed out. The possible cause is that the image cannot be accessed. Check the image and try again. |
| 406 | The specified image is too large. Reduce the image size and try again. |
| 407 | The format of the specified image is not supported. Change the image format and try again. |
| 408 | You do not have the required permissions. The possible cause is that the account is not activated, has overdue payments, or is not authorized to call this API operation. |
| 500 | A system exception occurred. |
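When wrapping the API, it helps to decide per response code whether retrying the same request can succeed. One possible classification of the codes above (this split is our judgment, not an official retry policy):

```python
# Codes where a retry may succeed (transient conditions):
# 403 throttling, 404/405 download failure or timeout, 500 system exception.
RETRYABLE = {403, 404, 405, 500}
# Codes that require changing the request or the account first.
PERMANENT = {400, 401, 402, 406, 407, 408}


def should_retry(code: int) -> bool:
    """Return True when retrying the same request might succeed.

    Retry 403 only after backing off, because it indicates throttling.
    """
    return code in RETRYABLE
```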