API description
The Image Moderation 2.0 API scans images for regulatory violations, platform policy breaches, and harmful content. It supports more than 40 risk labels and more than 40 risk control items. Use the returned risk labels and confidence scores to take moderation or administrative action based on your business requirements. For more information, see Introduction to Image Moderation 2.0 and Billing.
Connection guide
Create an Alibaba Cloud account. Register now and follow the on-screen instructions.
Activate the pay-as-you-go Content Moderation service. Activation is free. After you start using the service, you are charged based on your usage. For more information, see Billing details.
Create an AccessKey. Use Resource Access Management (RAM) to create an AccessKey. If you use a RAM user AccessKey, grant the AliyunYundunGreenWebFullAccess permission to the RAM user. For more information, see RAM authorization.
Integrate with SDKs. Use SDKs to call the API. For more information, see Image Moderation 2.0 SDKs and usage guide.
Usage notes
Call the Image Moderation 2.0 API to create a task for image moderation. For more information about constructing an HTTP request, see Make HTTPS calls. You can also use an SDK. For more information, see Image Moderation 2.0 SDKs and usage guide.
API operation: ImageModeration
Supported regions and endpoints:
| Region | Public endpoint | Virtual Private Cloud (VPC) endpoint | Supported services |
|---|---|---|---|
| Singapore | green-cip.ap-southeast-1.aliyuncs.com | green-cip-vpc.ap-southeast-1.aliyuncs.com | baselineCheck_global, aigcDetector_global |
| UK (London) | green-cip.eu-west-1.aliyuncs.com | Not available | |
| US (Virginia) | green-cip.us-east-1.aliyuncs.com | green-cip-vpc.us-east-1.aliyuncs.com | baselineCheck_global, aigcDetector_global |
| US (Silicon Valley) | green-cip.us-west-1.aliyuncs.com | Not available | |
| Germany (Frankfurt) | green-cip.eu-central-1.aliyuncs.com | green-cip-vpc.eu-central-1.aliyuncs.com | |
The UK (London) region reuses the console configurations of the Singapore region. The US (Silicon Valley) and Germany (Frankfurt) regions reuse the console configurations of the US (Virginia) region.
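The endpoint table above can be expressed as a small lookup helper. This is an illustrative sketch, not part of any official SDK; the region IDs are the standard Alibaba Cloud region IDs that appear in each endpoint hostname, and the fallback-to-public behavior for regions without a VPC endpoint is an assumption for convenience.

```python
# Endpoint lookup sketch based on the table above. Region IDs are the
# standard Alibaba Cloud region IDs embedded in each endpoint hostname.
ENDPOINTS = {
    "ap-southeast-1": {"public": "green-cip.ap-southeast-1.aliyuncs.com",
                       "vpc": "green-cip-vpc.ap-southeast-1.aliyuncs.com"},
    "eu-west-1":      {"public": "green-cip.eu-west-1.aliyuncs.com", "vpc": None},
    "us-east-1":      {"public": "green-cip.us-east-1.aliyuncs.com",
                       "vpc": "green-cip-vpc.us-east-1.aliyuncs.com"},
    "us-west-1":      {"public": "green-cip.us-west-1.aliyuncs.com", "vpc": None},
    "eu-central-1":   {"public": "green-cip.eu-central-1.aliyuncs.com",
                       "vpc": "green-cip-vpc.eu-central-1.aliyuncs.com"},
}

def resolve_endpoint(region_id: str, prefer_vpc: bool = False) -> str:
    """Return the endpoint for a region, falling back to the public
    endpoint when no VPC endpoint is listed for that region."""
    entry = ENDPOINTS[region_id]
    if prefer_vpc and entry["vpc"]:
        return entry["vpc"]
    return entry["public"]
```

For example, `resolve_endpoint("eu-west-1", prefer_vpc=True)` returns the public endpoint because UK (London) has no VPC endpoint.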
Billing:
This API operation is billable. Only requests that return an HTTP status code of 200 are charged. Requests that return other error codes are not billed. For more information, see Billing details.
Image requirements:
Supported formats: PNG, JPG, JPEG, BMP, WEBP, TIFF, SVG, HEIF (the longest edge must be less than 8,192 px), GIF (only the first frame is used), and ICO (only the last image is used).
Maximum file size: 20 MB. The height or width cannot exceed 16,384 px, and the total pixel count cannot exceed 167 million. For best results, use images with a resolution greater than 200 x 200 px. Low resolution may reduce moderation accuracy.
Image download timeout: 3 seconds. If the download exceeds this limit, a timeout error is returned.
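The image requirements above can be checked client-side before submitting a request, which avoids paying for calls that will fail. The following is a minimal sketch that validates metadata you already have (format, byte size, dimensions); the function name and the list-of-problems return shape are illustrative choices, not part of the API.

```python
# Client-side pre-check sketch for the image requirements above.
MAX_BYTES = 20 * 1024 * 1024   # 20 MB file-size limit
MAX_EDGE = 16_384              # per-edge pixel limit
MAX_PIXELS = 167_000_000       # total-pixel limit
FORMATS = {"png", "jpg", "jpeg", "bmp", "webp", "tiff", "svg", "heif", "gif", "ico"}

def precheck_image(fmt: str, size_bytes: int, width: int, height: int) -> list:
    """Return a list of problems; an empty list means the image looks acceptable."""
    problems = []
    if fmt.lower() not in FORMATS:
        problems.append(f"unsupported format: {fmt}")
    if size_bytes > MAX_BYTES:
        problems.append("file exceeds 20 MB")
    if width > MAX_EDGE or height > MAX_EDGE:
        problems.append("height or width exceeds 16,384 px")
    if width * height > MAX_PIXELS:
        problems.append("total pixel count exceeds 167 million")
    if width < 200 or height < 200:
        problems.append("warning: resolution below 200 x 200 px may reduce accuracy")
    return problems
```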
QPS limit
The queries per second (QPS) limit for a single user is 100 calls/second. If you exceed this limit, API calls are throttled, which may affect your business. To request a higher QPS or scale out urgently, contact your business manager.
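To stay under the 100 QPS limit, you can throttle on the client side. The token bucket below is one common approach, sketched here as an assumption about how you might pace calls; it is not part of the service.

```python
import time

# Minimal client-side throttle sketch for the 100 QPS limit above: a token
# bucket that refills at `rate` tokens per second and blocks when empty.
class TokenBucket:
    def __init__(self, rate: float = 100.0, capacity: float = 100.0):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def acquire(self):
        """Block until one token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)
```

Call `bucket.acquire()` before each API request; with the defaults, sustained throughput stays at or below 100 calls per second.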
Debug
Before you connect, use Alibaba Cloud OpenAPI Explorer to debug the Image Moderation 2.0 API online. View sample code and SDK dependency information to understand how to use the API and its parameters.
The online debugging feature calls the Content Moderation API using the currently logged-on account. The call count is included in the billable usage of that account.
Request parameters
For more information about the common request parameters that must be included in a request, see Common parameters.
The request body is a JSON struct that contains the following fields:
| Name | Type | Required | Example | Description |
|---|---|---|---|---|
| Service | String | Yes | baselineCheck_global | The moderation service for Image Moderation 2.0. Valid values: baselineCheck_global (general baseline check) and aigcDetector_global (AI-generated image detection). Note: For the differences between services, see Service description. For the AIGC-dedicated service, see AIGC scenario detection service. The international version can be used only in regions outside China. |
| ServiceParameters | JSONString | Yes | | The content moderation parameters. The value is a JSON string. For details about each field, see ServiceParameters. |
Table 1. ServiceParameters
| Name | Type | Required | Example | Description |
|---|---|---|---|---|
| imageUrl | String | Yes. Image Moderation 2.0 supports three ways to pass images. Select one of the following. | https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png | The URL of the image to moderate. The URL must be publicly accessible and cannot exceed 2,048 characters. |
| ossBucketName | String | Required for the OSS method | bucket_01 | The name of the authorized OSS bucket. Note: Before you use the VPC endpoint of an OSS image, use your Alibaba Cloud account to access the Cloud Resource Access Authorization page to grant the AliyunCIPScanOSSRole role. |
| ossObjectName | String | Required for the OSS method | 2022023/04/24/test.jpg | The name of the file in the authorized OSS bucket. Note: 1. Pass the original filename from OSS. You cannot add image processing parameters. To add image processing parameters, use the imageUrl field. 2. If a filename contains Chinese characters or spaces, pass it as is. It does not need to be URL-encoded. |
| ossRegionId | String | Required for the OSS method | cn-beijing | The region where the OSS bucket is located. |
| dataId | String | No | img123**** | The data ID of the image to moderate. Allowed characters: uppercase and lowercase letters, digits, underscores (_), hyphens (-), and periods (.). Maximum length: 64 characters. Use this field to uniquely identify your business data. |
| referer | String | No | www.aliyun.com | The referer request header, used for hotlink protection. Maximum length: 256 characters. |
| infoType | String | Yes | customImage,textInImage | The types of auxiliary information to return, as a comma-separated list. Valid values: customImage, textInImage, publicFigure, and logoData. Note: Public figure and logo information require an advanced image moderation service. For more information, see Service description. |
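Because ServiceParameters must be a JSON string rather than a nested object, it is easy to assemble the body incorrectly. The helper below sketches the construction under that constraint; the function name and defaults are illustrative, not part of the API.

```python
import json

# Sketch of assembling the request body described above. Note that
# ServiceParameters is passed as a serialized JSON *string*, not a
# nested JSON object.
def build_request(image_url: str, data_id: str = None, info_type: str = None) -> dict:
    params = {"imageUrl": image_url}
    if data_id:
        params["dataId"] = data_id       # optional unique business identifier
    if info_type:
        params["infoType"] = info_type   # e.g. "customImage,textInImage"
    return {
        "Service": "baselineCheck_global",
        "ServiceParameters": json.dumps(params),  # serialize to a JSON string
    }
```

For example, `build_request("https://example.com/a.png", data_id="img123****")` yields a dict whose `ServiceParameters` value is the string `'{"imageUrl": "https://example.com/a.png", "dataId": "img123****"}'`.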
Response data
| Name | Type | Example | Description |
|---|---|---|---|
| RequestId | String | 70ED13B0-BC22-576D-9CCF-1CC12FEAC477 | The unique request ID. Alibaba Cloud generates a unique ID for each request, which you can use for troubleshooting. |
| Data | Object | | The image moderation result. For more information, see Data. |
| Code | Integer | 200 | The HTTP status code. For more information, see Response codes. |
| Msg | String | OK | The response message. |
Table 2. Data
| Name | Type | Example | Description |
|---|---|---|---|
| Result | Array | | The moderation results, including risk labels and confidence scores. For more information, see result. |
| RiskLevel | String | high | The overall risk level, determined by the label with the highest risk. Valid values: high, medium, low, and none. Warning: Take immediate action on high-risk content and manually review medium-risk content. Process low-risk content only when you have strict recall requirements. Otherwise, treat it the same as content for which no risk is detected. Configure risk scores in the Content Moderation console. |
| DataId | String | img123****** | The data ID of the moderated image. Note If you passed a dataId in the request, the corresponding dataId is returned here. |
| Ext | Object | | Auxiliary reference information for the image. For more information, see Auxiliary information. |
Table 3. result
| Name | Type | Example | Description |
|---|---|---|---|
| Label | String | violent_explosion | The risk label returned by image moderation. A single image may return multiple labels with scores. For supported labels, see the risk label definitions later in this topic. |
| Confidence | Float | 81.22 | The confidence score. Valid values: 0 to 100. Accurate to two decimal places. Some labels do not include a confidence score. For more information, see Descriptions of risk labels. |
| Description | String | Fireworks content | A description of the Label field. Important This field explains the Label field and may change. Process results using the Label field and do not rely on this field. |
| RiskLevel | String | high | The risk level for this label, determined by the configured high and low risk scores. Valid values: high, medium, and low. |
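The triage policy in the warning above (block high risk, queue medium risk for manual review, pass the rest) can be sketched as a small dispatcher over the response. The function name and the returned action strings are illustrative assumptions.

```python
# Sketch of the triage suggested above: act on high-risk content,
# queue medium-risk content for manual review, and pass everything else.
def triage(response: dict) -> str:
    data = response.get("Data", {})
    level = data.get("RiskLevel", "none")
    if level == "high":
        return "block"
    if level == "medium":
        return "manual_review"
    return "pass"  # low risk or no risk detected
```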
Auxiliary information returned
Table 4. Ext
| Name | Type | Example | Description |
|---|---|---|---|
| CustomImage | JSONArray | | Custom image library hit information. Returned when a custom image library is matched. For more information, see CustomImage. |
| TextInImage | Object | | Text-in-image hit information. For more information, see TextInImage. |
| PublicFigure | JSONArray | | Public figure identification results. Returned when a specific figure is detected. For more information, see PublicFigure. |
| LogoData | JSONArray | | Logo hit information. For more information, see LogoData. |
Table 5. CustomImage
| Name | Type | Example | Description |
|---|---|---|---|
| LibId | String | lib0001 | The ID of the matched custom image library. |
| LibName | String | Custom Image Library A | The name of the matched custom image library. |
| ImageId | String | 20240307 | The ID of the matched custom image. |
Table 6. TextInImage
| Name | Type | Example | Description |
|---|---|---|---|
| OcrResult | JSONArray | | The recognized text lines in the image. For more information, see OcrResult. |
| RiskWord | StringArray | [ "risk_word_1", "Sensitive word 2" ] | The risk fragments detected in the text. Returned when a tii type label is triggered. |
| CustomText | JSONArray | | Custom term library hit information. Returned when a custom term library is matched. For more information, see CustomText. |
Table 7. OcrResult
| Name | Type | Example | Description |
|---|---|---|---|
| Text | String | Identified text line 1 | The content of the recognized text line in the image. |
Table 8. CustomText
| Name | Type | Example | Description |
|---|---|---|---|
| LibId | String | test20240307 | The ID of the matched custom keyword library. |
| LibName | String | Custom Keyword Library A | The name of the matched custom keyword library. |
| KeyWords | String | Keyword 1 | The matched custom keyword. |
Table 9. PublicFigure
| Name | Type | Example | Description |
|---|---|---|---|
| FigureName | String | John Doe | The name of the identified person. |
| FigureId | String | xxx001 | The code of the identified person. Note A code is returned for specific individuals, while a name is returned for others. Retrieve the person's name first. If the name is empty, retrieve the person's code. |
| Location | JSONArray | | The location of the figure. For more information, see Location. |
Table 10. LogoData
| Name | Type | Example | Description |
|---|---|---|---|
| Logo | JSONArray | | Logo information. For more information, see Logo. |
| Location | Object | | The location of the logo. For more information, see Location. |
Table 11. Logo
| Name | Type | Example | Description |
|---|---|---|---|
| Name | String | DingTalk | The logo name. |
| Label | String | logo_sns | The matched label. |
| Confidence | Float | 88.18 | The confidence score. |
Table 12. Location
| Name | Type | Example | Description |
|---|---|---|---|
| X | Float | 41 | The x-coordinate of the upper-left corner of the area, with the origin at the upper-left corner of the image. Unit: pixels. |
| Y | Float | 84 | The y-coordinate of the upper-left corner of the area, with the origin at the upper-left corner of the image. Unit: pixels. |
| W | Float | 83 | The width of the area. Unit: pixels. |
| H | Float | 26 | The height of the area. Unit: pixels. |
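A Location object gives the upper-left corner plus width and height, so converting it to corner coordinates (for drawing a box or cropping) is a one-liner. The function name and tuple shape below are illustrative choices.

```python
# Sketch converting a Location object (X, Y, W, H in pixels, origin at the
# image's upper-left corner) into the corner coordinates of the area.
def location_to_corners(loc: dict) -> tuple:
    """Return (left, top, right, bottom) pixel coordinates."""
    x, y, w, h = loc["X"], loc["Y"], loc["W"], loc["H"]
    return (x, y, x + w, y + h)
```

Using the example values from Table 12, `location_to_corners({"X": 41, "Y": 84, "W": 83, "H": 26})` returns `(41, 84, 124, 110)`.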
Examples
Request example
{
"Service": "baselineCheck_global",
"ServiceParameters": {
"imageUrl": "https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png",
"dataId": "img123****"
}
}
Response example
If the system detects risky content, the following response is returned:
{
"Msg": "OK",
"Code": 200,
"Data": {
"DataId": "img123****",
"Result": [
{
"Label": "pornographic_adultContent",
"Confidence": 81,
"Description": "Adult pornographic content"
},
{
"Label": "sexual_partialNudity",
"Confidence": 98,
"Description": "Partial nudity or sexy"
},
{
"Label": "violent_explosion",
"Confidence": 70,
"Description": "Fireworks content"
},
{
"Label": "violent_explosion_lib",
"Confidence": 81,
"Description": "Fireworks content_Hit custom library"
}
],
"RiskLevel": "high"
},
"RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
}
If the system does not detect any risky content, the following response is returned:
{
"Msg": "OK",
"Code": 200,
"Data": {
"DataId": "img123****",
"Result": [
{
"Label": "nonLabel",
"Description": "No risk detected"
}
],
"RiskLevel": "none"
},
"RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
}
If the submitted image matches an image in your configured allowlist, the following response is returned:
{
"Msg": "OK",
"Code": 200,
"Data": {
"DataId": "img123****",
"Result": [
{
"Label": "nonLabel_lib",
"Confidence": 83,
"Description": "Hit allowlist"
}
],
"RiskLevel": "none"
},
"RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
}
Auxiliary information response example
When a custom image library is matched, the following response is returned:
{
"Code": 200,
"Data": {
"DataId": "",
"Ext": {
"CustomImage": [
{
"ImageId": "12345",
"LibId": "TEST20240307",
"LibName": "Risk Image Library A"
}
]
},
"Result": [
{
"Confidence": 100.0,
"Label": "pornographic_adultContent_lib",
"Description": "Adult pornographic content_Hit custom library"
}
],
"RiskLevel": "high"
},
"Msg": "success",
"RequestId": "5F572704-4C03-51DF-8957-D77BF6E7444E"
}
When a custom keyword library is matched, the following response is returned:
{
"Code": 200,
"Data": {
"DataId": "",
"Ext": {
"TextInImage": {
"CustomText": [
{
"KeyWords": "Custom Keyword 1",
"LibId": "TEST20240307",
"LibName": "Text Blacklist A"
}
],
"OcrResult": [
{
"Text": "Text line 1"
},
{
"Text": "Text line 2"
},
{
"Text": "Text line 3 with custom keyword"
}
],
"RiskWord": null
}
},
"Result": [
{
"Confidence": 99.0,
"Label": "pornographic_adultContent_tii_lib",
"Description": "Text contains pornographic content_Hit custom library"
}
],
"RiskLevel": "high"
},
"Msg": "success",
"RequestId": "TESTZGL-0307-2024-0728-FOREVER"
}
When a text violation in an image is detected, the following response is returned:
{
"Code": 200,
"Data": {
"DataId": "",
"Ext": {
"TextInImage": {
"CustomText": null,
"OcrResult": [
{
"Text": "Text line 1"
},
{
"Text": "Text line 2"
},
{
"Text": "Text line 3 with risk content"
}
],
"RiskWord": [
"Risk Word 1"
]
}
},
"Result": [
{
"Confidence": 89.15,
"Label": "political_politicalFigure_name_tii",
"Description": "Text contains leader's name"
}
],
"RiskLevel": "high"
},
"Msg": "success",
"RequestId": "TESTZGL-0307-2024-0728-FOREVER"
}
When logo information is detected, the following response is returned:
{
"Code": 200,
"Data": {
"DataId": "",
"Ext": {
"LogoData": [
{
"Location": {
"H": 44,
"W": 100,
"X": 45,
"Y": 30
},
"Logo": [
{
"Confidence": 96.15,
"Label": "pt_logotoSocialNetwork",
"Name": "CCTV"
}
]
}
]
},
"Result": [
{
"Confidence": 96.15,
"Label": "pt_logotoSocialNetwork",
"Description": "Social platform logo"
}
],
"RiskLevel": "high"
},
"Msg": "success",
"RequestId": "TESTZGL-0307-2024-0728-FOREVER"
}
When person information is detected, the following response is returned:
{
"Code": 200,
"Data": {
"DataId": "",
"Ext": {
"PublicFigure": [
{
"FigureId": null,
"FigureName": "Yang San",
"Location": [
{
"H": 520,
"W": 13,
"X": 14,
"Y": 999
}
]
}
]
},
"Result": [
{
"Confidence": 92.05,
"Label": "political_politicalFigure_3",
"Description": "Provincial and municipal government personnel"
}
],
"RiskLevel": "high"
},
"Msg": "success",
"RequestId": "TESTZGL-0307-2024-0728-FOREVER"
}
The request and response examples in this document are formatted for readability. Actual results are not formatted with line breaks or indentation.
Risk label definitions
The following describes the risk label values, their score ranges, and their meanings. Enable or disable each risk label in the console. For some risk labels, you can configure a more granular detection scope. For more information, see the Console User Guide. The labels supported by each image service are listed below.
| Scenario | Service and labels |
|---|---|
| General scenarios | Labels supported by general baseline check (baselineCheck_global) |
| AIGC scenarios | Labels supported by AI-generated image detection (aigcDetector_global) |
For labels returned when no risk is detected or the allowlist is matched, see Supported labels when there is no risk or the allowlist is matched.
Store the risk labels and confidence scores returned by the system for a period. This allows you to reference them for subsequent content governance. Set priorities for manual review or annotation, and implement layered and categorized content governance measures based on the risk labels.
Table 13. Labels supported by general baseline check (baselineCheck_global)
| Label value | Confidence score range (confidence) | Description |
|---|---|---|
| pornographic_adultContent | 0 to 100. A higher score indicates a higher confidence level. | The image may contain adult or pornographic content. |
| pornographic_cartoon | 0 to 100. A higher score indicates a higher confidence level. | The image may contain pornographic cartoon content. |
| pornographic_adultToys | 0 to 100. A higher score indicates a higher confidence level. | The image may contain adult toy content. |
| pornographic_art | 0 to 100. A higher score indicates a higher confidence level. | The image may contain pornographic artwork. |
| pornographic_adultContent_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain pornographic content. |
| pornographic_suggestive_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain vulgar content. |
| pornographic_o_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain inappropriate content. For more information, see the Content Moderation console. |
| pornographic_organs_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may describe sexual organs. |
| pornographic_adultToys_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain content about adult toys. |
| sexual_suggestiveContent | 0 to 100. A higher score indicates a higher confidence level. | The image may contain vulgar or sexually suggestive content. |
| sexual_femaleUnderwear | 0 to 100. A higher score indicates a higher confidence level. | The image may contain underwear or swimwear. |
| sexual_cleavage | 0 to 100. A higher score indicates a higher confidence level. | The image may feature female cleavage. |
| sexual_maleTopless | 0 to 100. A higher score indicates a higher confidence level. | The image may show shirtless men. |
| sexual_cartoon | 0 to 100. A higher score indicates a higher confidence level. | The image may contain sexually suggestive animated content. |
| sexual_shoulder | 0 to 100. A higher score indicates a higher confidence level. | The image may show sexually suggestive shoulders. |
| sexual_femaleLeg | 0 to 100. A higher score indicates a higher confidence level. | The image may show sexually suggestive legs. |
| sexual_pregnancy | 0 to 100. A higher score indicates a higher confidence level. | The image may contain pregnancy photos or breastfeeding. |
| sexual_feet | 0 to 100. A higher score indicates a higher confidence level. | The image may show sexually suggestive feet. |
| sexual_kiss | 0 to 100. A higher score indicates a higher confidence level. | The image may contain kissing. |
| sexual_intimacy | 0 to 100. A higher score indicates a higher confidence level. | The image may contain intimate behavior. |
| sexual_intimacyCartoon | 0 to 100. A higher score indicates a higher confidence level. | The image may contain intimate actions in cartoons or anime. |
| violent_explosion | 0 to 100. A higher score indicates a higher confidence level. | The image may contain content related to smoke or fire. For more information, see the Content Moderation console. |
| violent_burning | 0 to 100. A higher score indicates a higher confidence level. | The image may contain burning content. |
| violent_armedForces | 0 to 100. A higher score indicates a higher confidence level. | The image is suspected of containing content related to a terrorist organization. |
| violent_weapon | 0 to 100. A higher score indicates a higher confidence level. | The image may contain military equipment. |
| violent_crowding | 0 to 100. A higher score indicates a higher confidence level. | The image may show a crowd gathering. |
| violent_gun | 0 to 100. A higher score indicates a higher confidence level. | The image may contain guns. |
| violent_knives | 0 to 100. A higher score indicates a higher confidence level. | The image may contain knives. |
| violent_horrific | 0 to 100. A higher score indicates a higher confidence level. | The image may contain horrific content. |
| violent_nazi | 0 to 100. A higher score indicates a higher confidence level. | The image may contain Nazi-related content. |
| violent_bloody | 0 to 100. A higher score indicates a higher confidence level. | The image may contain bloody content. |
| violent_extremistGroups_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain content about extremist groups. |
| violent_extremistIncident_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain content about extremist incidents. |
| violence_weapons_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may describe guns and knives. |
| violent_ACU | 0 to 100. A higher score indicates a higher confidence level. | The image may contain combat uniforms. |
| contraband_drug | 0 to 100. A higher score indicates a higher confidence level. | The image may contain drug-related content. |
| contraband_drug_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may describe illegal drugs. |
| contraband_gamble | 0 to 100. A higher score indicates a higher confidence level. | The image may contain gambling-related content. |
| contraband_gamble_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may describe gambling. |
| inappropriate_smoking | 0 to 100. A higher score indicates a higher confidence level. | The image may contain smoking-related content. |
| inappropriate_drinking | 0 to 100. A higher score indicates a higher confidence level. | The image may contain alcohol-related content. |
| inappropriate_tattoo | 0 to 100. A higher score indicates a higher confidence level. | The image may contain tattoos. |
| inappropriate_middleFinger | 0 to 100. A higher score indicates a higher confidence level. | The image may show a middle finger gesture. |
| inappropriate_foodWasting | 0 to 100. A higher score indicates a higher confidence level. | The image may contain content about wasting food. |
| profanity_Offensive_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain severe profanity, verbal attacks, or offensive content. |
| profanity_Oral_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain colloquial profanity. |
| religion_clothing | 0 to 100. A higher score indicates a higher confidence level. | The image may contain special logos and elements. For more information, see the Content Moderation console. |
| religion_logo | 0 to 100. A higher score indicates a higher confidence level. | |
| religion_flag | 0 to 100. A higher score indicates a higher confidence level. | |
| religion_taboo1_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain prohibited content. For more information, see the Content Moderation console. |
| religion_taboo2_tii | 0 to 100. A higher score indicates a higher confidence level. | |
| flag_country | 0 to 100. A higher score indicates a higher confidence level. | The image may contain flag-related content. |
| political_historicalNihility | 0 to 100. A higher score indicates a higher confidence level. | The image may contain specific content. For more information, see the Content Moderation console. |
| political_historicalNihility_tii | 0 to 100. A higher score indicates a higher confidence level. | |
| political_politicalFigure_1 | 0 to 100. A higher score indicates a higher confidence level. | |
| political_politicalFigure_2 | 0 to 100. A higher score indicates a higher confidence level. | |
| political_politicalFigure_3 | 0 to 100. A higher score indicates a higher confidence level. | |
| political_politicalFigure_4 | 0 to 100. A higher score indicates a higher confidence level. | |
| political_politicalFigure_name_tii | 0 to 100. A higher score indicates a higher confidence level. | |
| political_prohibitedPerson_1 | 0 to 100. A higher score indicates a higher confidence level. | |
| political_prohibitedPerson_2 | 0 to 100. A higher score indicates a higher confidence level. | |
| political_prohibitedPerson_tii | 0 to 100. A higher score indicates a higher confidence level. | |
| political_taintedCelebrity | 0 to 100. A higher score indicates a higher confidence level. | |
| political_taintedCelebrity_tii | 0 to 100. A higher score indicates a higher confidence level. | |
| political_CNFlag | 0 to 100. A higher score indicates a higher confidence level. | |
| political_CNMap | 0 to 100. A higher score indicates a higher confidence level. | |
| political_logo | 0 to 100. A higher score indicates a higher confidence level. | |
| political_outfit | 0 to 100. A higher score indicates a higher confidence level. | |
| political_badge | 0 to 100. A higher score indicates a higher confidence level. | |
| pt_logo | 0 to 100. A higher score indicates a higher confidence level. | The image may contain a logo. |
| QRCode | 0 to 100. A higher score indicates a higher confidence level. | The image may contain a QR code. |
| pt_custom_01 | 0 to 100. A higher score indicates a higher confidence level. | Custom label 01. |
| pt_custom_02 | 0 to 100. A higher score indicates a higher confidence level. | Custom label 02. |
tii is an abbreviation for "text in image". A label ending in tii indicates that a text violation was detected in the image. You can also configure custom image libraries for each risk label. If a moderated image closely matches an image in a custom library, the system returns the corresponding risk label with _lib appended. For example, if you configure a custom image library for violent_explosion and a moderated image matches an image in that library, the system returns violent_explosion_lib in the label parameter. In that case, the confidence parameter represents the similarity score.
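The suffix convention described above (_lib for custom-library hits, _tii for text-in-image hits, and combinations such as _tii_lib) can be decoded programmatically. The helper below is a sketch; its name and return shape are illustrative, and it only handles the two suffixes documented here.

```python
# Helper sketch for the label naming convention above: "_lib" marks a
# custom-library hit and "_tii" marks a text-in-image hit. Suffixes are
# stripped outermost-first, so "..._tii_lib" yields both flags.
def parse_label(label: str) -> dict:
    base = label
    lib_hit = base.endswith("_lib")
    if lib_hit:
        base = base[: -len("_lib")]
    tii_hit = base.endswith("_tii")
    if tii_hit:
        base = base[: -len("_tii")]
    return {"base": base, "custom_library": lib_hit, "text_in_image": tii_hit}
```

For example, the label `pornographic_adultContent_tii_lib` from the custom-keyword response example decodes to base label `pornographic_adultContent` with both flags set.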
If the system detects no anomalies in the submitted image, or if it closely matches an image in your configured allowlist, the returned label and confidence score are as shown in the table below.
| Label value | Confidence score range (confidence) | Description |
|---|---|---|
| nonLabel | This field is not present. | No risk was detected in this image, or you have disabled all moderation items. For more information, see the Content Moderation console. |
| nonLabel_lib | 0 to 100. A higher score indicates a higher confidence level. | This image closely matches an image in your configured allowlist. For more information, see the Content Moderation console. |
Code descriptions
The following table describes the response codes returned by the API. Only requests that return an HTTP status code of 200 are charged. Requests that return other error codes are not billed.
| Code | Description |
|---|---|
| 200 | The request is successful. |
| 400 | A request parameter is empty. |
| 401 | A request parameter is invalid. Check and correct the parameter value. |
| 402 | A request parameter length does not meet the requirements. Check and adjust. |
| 403 | The request exceeds the QPS limit. Check and adjust the concurrency. |
| 404 | An error occurred while downloading the image. Check the image URL or retry. |
| 405 | The image download timed out. The image may be inaccessible. Check the URL, adjust, and retry. |
| 406 | The image is too large. Check and adjust the image size, then retry. |
| 407 | The image format is not supported. Check and adjust, then retry. |
| 408 | The account lacks permission. The service may not be activated, the account may have overdue payments, or the calling account may not be authorized. |
| 500 | A system exception occurred. |
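The code table above suggests a retry policy: download errors (404), download timeouts (405), and system exceptions (500) are transient and worth retrying, and throttling (403) is retryable after backing off, while parameter and permission errors (400 to 402, 406 to 408) will not succeed on retry without fixing the request. The classification below is a sketch of that reading, not an official policy.

```python
# Sketch of classifying the response codes above for retry handling.
RETRYABLE = {404, 405, 500}  # transient download or system issues
THROTTLED = 403              # QPS limit exceeded; retry after backing off

def should_retry(code: int) -> bool:
    """Return True if the call may succeed on retry without changing the request."""
    return code in RETRYABLE or code == THROTTLED
```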