Based on content governance experience and common content management practices across industries, the Image Moderation 2.0 API of Content Moderation provides preset configurations for moderation scopes.
When you use the Content Moderation console for the first time, we recommend that you check the initial configurations of moderation scopes on the Rules page in the console.
Use scenarios
In the following scenarios, you can customize the moderation rules and moderation scopes by using the Image Moderation 2.0 console provided by Alibaba Cloud. You can also query the moderation results and the number of moderated images.
Use scenario | Description |
Adjust the scopes of image moderation | You can adjust the scopes and details of image moderation based on common industry characteristics or your business requirements. Adjustable scopes of Baseline Check (baselineCheck_global) include pornographic content, suggestive content, terrorism content, prohibited content, banner content, undesirable content, and abusive content. For example, if your business scenario involves the display of a lot of swimwear and you do not want similar images to be classified as risky, you can turn off the switch that controls the moderation of the related elements in the console. |
Configure differentiated image moderation scopes for multiple services | If multiple services need to use the same moderation service but require different image moderation scopes, you can copy the moderation service and configure a different moderation scope for each copy. For example, services A, B, and C all use the common baseline moderation service (baselineCheck) but have different moderation requirements. You can copy the baselineCheck service in the console to create the baselineCheck_01 and baselineCheck_02 services, and then configure a different image moderation scope for each of the three services. |
Perform special moderation or emergency management on specific known images | You can use the console and API of Image Moderation 2.0 to implement special management of known images. For example, you can configure a custom image library for images that are known to be risky. If an uploaded image matches an image in the library, a risk label is returned. |
Exempt trustworthy images from risk moderation | You can determine that certain images are trusted content that can be exposed to the public based on their sources or purposes. To prevent trusted images from being labeled as risky by the recognition and moderation algorithms of Content Moderation, you can exempt trusted image libraries from risk moderation. For example, marketing materials created by your business or platform, official images, and authoritative avatars that have been manually reviewed can be exempted. |
Customize moderation settings for text in images | For text in images, you can configure custom term libraries to ignore or match specific terms during text moderation. |
Query the detailed moderation results of images | If you need to view and analyze recently moderated images in further detail, you can view or search for them on the Detection Results page. |
View the statistics on the number of recent image moderations | You can view and collect statistics on the number of moderated images on the Dashboards page. |
Prerequisites
You have activated the Image Moderation 2.0 service on the Content Moderation 2.0 page.
Before you activate the Image Moderation 2.0 service, make sure that you are familiar with the billing rules of Image Moderation 2.0. For more information, see Introduction to Image Moderation 2.0 and its billing method.
Adjust the scopes of image moderation
You can adjust the scopes and details of image moderation based on common industry characteristics or your business requirements.
Log on to the Content Moderation console.
In the left-side navigation pane, choose Image Moderation 2.0 > Rules.
On the Rules Management tab, find the moderation service that you want to configure and click Settings in the Operation column. In this example, the baselineCheck_global service is used.
On the Detection Scope tab, select the moderation type to be adjusted. In this example, Prohibited Content Detection is selected.
On the Prohibited Content Detection tab, view the default settings in the Detection Scope Configuration section. For example, the following figure shows that four items are moderated. If an item is hit, the corresponding label is returned.
Click Edit to enter the edit mode and change the On or Off setting of an item that you want to manage. For example, the following figure shows that the moderation switch of the third item is turned off.
You can also modify the settings of Medium Risk Score and High Risk Score, which determine the risk levels that are returned.
Click Save to save the new moderation scope. The new moderation scope takes effect in about 2 to 5 minutes and is then applied to the production environment.
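After the change takes effect, API calls to the adjusted service return labels and risk levels according to the new scope. The following is a minimal Python sketch of such a call. The SDK package name (alibabacloud_green20220302), the endpoint, and the response field names are assumptions based on the public SDK and should be verified against the current API reference.

```python
import json

from alibabacloud_tea_openapi import models as open_api_models
from alibabacloud_green20220302.client import Client
from alibabacloud_green20220302 import models as green_models

# Placeholder credentials and an assumed regional endpoint; replace with
# your own values and verify the endpoint in the API reference.
config = open_api_models.Config(
    access_key_id="<your-access-key-id>",
    access_key_secret="<your-access-key-secret>",
    endpoint="green-cip.cn-shanghai.aliyuncs.com",
)
client = Client(config)

request = green_models.ImageModerationRequest(
    service="baselineCheck_global",  # the service whose scope was adjusted
    service_parameters=json.dumps({
        "imageUrl": "https://example.com/sample.jpg",  # hypothetical image URL
        "dataId": "img-0001",                          # your own tracking ID
    }),
)

response = client.image_moderation(request)
data = response.body.data
print("risk level:", data.risk_level)  # driven by the risk score thresholds
for item in data.result or []:
    # Labels whose moderation switches you turned off are no longer returned.
    print(item.label, item.confidence)
```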
Configure differentiated moderation scopes for multiple services
You can copy a moderation service and configure a different moderation scope for each copy to meet the moderation requirements of different services.
Log on to the Content Moderation console.
In the left-side navigation pane, choose Image Moderation 2.0 > Rules.
On the Rules Management tab, copy the moderation service for which you want to set different moderation scopes. In this example, baselineCheck_global is copied.
On the service list page, click Copy in the Operation column.
In the Copy Service panel, set Service Name and Service Description.
Click Create to save the information about the copied moderation service. The new moderation service can be called 1 to 2 minutes after it is created.
After the moderation service is created, you can perform the Settings and Edit operations on the copied service, baselineCheck_global_01 in this example. You can call the baselineCheck_global_01 and baselineCheck_global services separately to run moderation tasks with different moderation scopes for different services.
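In application code, each business module then only needs to pass its own service code in the Service parameter of the moderation request. A hypothetical mapping (the module names and the fallback are illustrative) might look like this:

```python
# Hypothetical mapping from business modules to copied moderation services.
# Module names are illustrative; each copy carries the moderation scope
# that you configured for it in the console.
SERVICE_BY_MODULE = {
    "community_posts": "baselineCheck_global",     # original scope
    "live_streaming":  "baselineCheck_global_01",  # copied, stricter scope
    "private_albums":  "baselineCheck_global_02",  # copied, relaxed scope
}

def service_for(module: str) -> str:
    """Return the moderation service code to use for a business module."""
    # Fall back to the original service if a module has no dedicated copy.
    return SERVICE_BY_MODULE.get(module, "baselineCheck_global")

# The returned code is what you pass as the Service parameter of an
# Image Moderation 2.0 API call for that module's images.
print(service_for("live_streaming"))  # baselineCheck_global_01
```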
Perform special moderation or emergency management on specific known images
You can configure a custom image library for images that may have risks. If an uploaded image matches an image in the custom image library, a risk label is returned.
Log on to the Content Moderation console.
Create a custom image library. If an existing image library meets your business requirements, skip this step.
Note: Each account can create up to 10 custom image libraries, and the total number of images in all the image libraries is limited to 100,000.
Create an image library and upload images.
In the left-side navigation pane, go to the image library management page. Click Create Image Library. In the Create Image Library panel, set the Name and Memo parameters, and then click OK.
On the page that appears, find the created image library and click Image Detail in the Actions column.
Click Add Image. In the Add Image dialog box that appears, click Click to select a picture and upload images as instructed.
You can upload a maximum of 10 images at a time. Each image should be less than 4 MB in size. We recommend that you upload an image with a resolution greater than 256 × 256 pixels. The upload list displays the upload status of up to 10 images. If you want to upload more images, click Clear the list and continue uploading images.
On the details page of the image library, you can view the uploaded images. You can also query images and delete images.
Query images: You can search for images by Picture Id or Add Time.
Delete images: You can delete images from an image library one by one or in batches.
Maintain an existing image library.
In the left-side navigation pane, go to the image library management page. Find the image library that you want to maintain, and click Edit in the Actions column to modify the Name and Memo values of the image library. You can also click Image Detail in the Actions column to upload or delete images.
In the left-side navigation pane, choose Image Moderation 2.0 > Rules.
On the Rules Management tab, click Settings in the Operation column. In this example, the baselineCheck_global service is used.
On the Detection Scope tab, select the moderation type to be adjusted. In this example, Prohibited Content Detection is selected.
In the Set Labels by Customized Libraries section of the Prohibited Content Detection tab, view the information about custom image libraries.
Click Edit to enter the edit mode and select a custom image library that you want to configure.
Click Save to save the configuration of the custom image library.
The configuration takes effect in about 2 to 5 minutes and is then applied to the production environment. If an uploaded image matches an image in the custom image library, the contraband_drug_lib label is returned.
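In the API response, such a match surfaces as the library-specific label. The following sketch checks a parsed response for it; the response layout (Data.Result with Label entries) is assumed from the API reference and should be verified before you rely on it.

```python
# Sketch: detect a custom-image-library hit in a parsed API response.
# The response layout (Data.Result with Label entries) is an assumption;
# verify the field names against the current API reference.
def hit_custom_image_library(response: dict) -> bool:
    """Return True if the contraband_drug_lib label was returned."""
    result = (response.get("Data") or {}).get("Result") or []
    return any(item.get("Label") == "contraband_drug_lib" for item in result)

sample = {"Data": {"Result": [{"Label": "contraband_drug_lib", "Confidence": 100.0}]}}
print(hit_custom_image_library(sample))  # True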
Exempt trustworthy images from risk moderation
Uploaded images that match images in trustworthy image libraries can be exempted from risk moderation. This way, trustworthy images are not labeled as risky by the recognition and moderation algorithms of Content Moderation.
Log on to the Content Moderation console.
In the left-side navigation pane, choose Image Moderation 2.0 > Rules.
On the Rules Management tab, click Settings in the Operation column.
Click the Exemption Configuration tab, and modify the settings.
On the Exemption Configuration tab, you can view the list of custom image libraries and the exemption status of the custom image libraries.
For example, the exemption switches are turned off for all image libraries in the following figure.
Click Edit to turn on the exemption switch for a specific image library.
Click Save to save the new configuration.
The new configuration takes effect in about 2 to 5 minutes. The Content Moderation service compares the similarity between an uploaded image and each image in the selected image library. For images that are considered highly similar by moderation algorithms, the Content Moderation service returns the label "nonLabel_lib". Other risk labels are not returned.
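In code, the exemption label can be used to release trusted images directly. A minimal sketch, assuming the same parsed response layout as in the earlier sketches:

```python
# Sketch: release an image directly when it matched a trusted library.
# Assumes the documented response layout (Data.Result with Label entries).
def handle_image(response: dict) -> str:
    labels = [item.get("Label")
              for item in (response.get("Data") or {}).get("Result") or []]
    if "nonLabel_lib" in labels:
        return "publish"  # trusted-library hit: other risk labels are not returned
    if any(label != "nonLabel" for label in labels):
        return "review"   # at least one risk label was returned
    return "publish"      # no risk detected

print(handle_image({"Data": {"Result": [{"Label": "nonLabel_lib"}]}}))  # publish
```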
Customize moderation settings for text in images
You can configure custom term libraries. Based on these libraries, the Content Moderation service either ignores the related terms in the text of images or determines that an image is risky if the text in the image matches a term in a custom term library.
Log on to the Content Moderation console.
Create a custom term library. If an existing term library meets your business requirements, skip this step.
In the left-side navigation pane, go to the term library management page. Then, perform the following steps to configure a term library:
On the Keyword Library Management tab, click Create Library.
In the Create Library panel, enter the library information as required.
Note: You can add terms to the term library based on your business requirements later. Each account can create up to 20 custom term libraries, and the total number of terms in all the term libraries is limited to 100,000. A single term cannot exceed 20 characters. Special characters are not supported.
Click Create Library.
If the library fails to be created, a message is displayed. Follow the instructions in the message and create the library again.
In the left-side navigation pane, choose Image Moderation 2.0 > Rules.
On the Rules Management tab, click Settings in the Operation column. In this example, the baselineCheck service is used.
Configure Ignored terms for text in images.
On the Ignoring vocabulary configuration tab, you can view the list of custom term libraries and the exemption status of the custom term libraries.
For example, the exemption switches are turned off for all term libraries in the following figure.
Click Edit to turn on the exemption switch for a specific term library.
Click Save to save the new configuration.
Note: The new configuration takes effect in about 2 to 5 minutes. The Content Moderation service removes the terms in the selected term libraries from the text and then moderates the remaining content. For example, if the text in an image is "Oh, there is a kitten", the terms "Oh" and "a" are ignored, and the Content Moderation service moderates the remaining content "there is kitten".
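The following snippet simulates that preprocessing step locally, for illustration only; it is not the service's actual implementation.

```python
import re

# Illustrative simulation of the ignored-terms step. This is not the
# service's server-side implementation; it only shows what "ignoring" means.
IGNORED_TERMS = ["Oh", "a"]  # terms from a custom "ignore" term library

def strip_ignored(text: str, ignored: list[str]) -> str:
    for term in ignored:
        # Remove whole-word occurrences of each ignored term.
        text = re.sub(rf"\b{re.escape(term)}\b", "", text)
    return " ".join(text.split())  # collapse the leftover whitespace

print(strip_ignored("Oh, there is a kitten", IGNORED_TERMS))
# -> ", there is kitten": the remaining content is what gets moderated
```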
Configure Matching terms for text in images.
On the Detection Scope tab, select the moderation type to be adjusted. In this example, Prohibited Content Detection is selected.
In the Set Labels by Customized Libraries section of the Prohibited Content Detection tab, view the information about custom term libraries.
Note: In the Set Labels by Customized Libraries section, you can configure a custom term library for all labels that end with tii. (The tii suffix indicates that risky text is detected in images.)
Click Edit to enter the edit mode and select a custom term library that you want to configure.
Click Save to save the configuration of the custom term library.
The configuration takes effect in about 2 to 5 minutes and is then applied to the production environment. If the text in an uploaded image matches a term in the custom term library, the contraband_drug_tii_lib label is returned.
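Because both custom image libraries and custom term libraries surface as suffixed labels, response-handling code can classify the source of a hit by its suffix. A small sketch, following the suffix conventions shown in this topic:

```python
# Sketch: classify the source of a returned label by its suffix.
# Suffix conventions from this topic:
#   "_tii_lib" = custom term library hit on text in an image
#   "_lib"     = custom image library hit
def label_source(label: str) -> str:
    if label.endswith("_tii_lib"):  # check the longer suffix first
        return "custom term library (text in image)"
    if label.endswith("_lib"):
        return "custom image library"
    return "built-in algorithm"

print(label_source("contraband_drug_tii_lib"))  # custom term library (text in image)
print(label_source("contraband_drug_lib"))      # custom image library
```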
Query the detailed moderation results of images one by one
You can query the detailed moderation results of images by request ID, data ID, service, or returned label.
Log on to the Content Moderation console.
In the left-side navigation pane, choose Image Moderation 2.0 > Detection Results.
On the Detection Results page, enter the query criteria to search for the moderation results.
The following query criteria are supported: requestID, dataID, service, and Return to Label.
Note: By default, a maximum of 50,000 moderation results are displayed in chronological order from the most recent to the oldest. The Content Moderation console allows you to query moderation results within the last 30 days. We recommend that you store the data or logs of the response to each API call to facilitate data analysis and statistics over a longer period of time.
If you disagree with a moderation result, you can select the No violation false alarm or Violation missed option from the Feedback drop-down list in the Operation column of the result to submit feedback.
If you select Return to Label from the query criteria drop-down list to search for results by label, you can enter multiple labels separated by commas (,).
For example, if you want to search for all records that hit a risk label, select Return to Label and then enter !=nonLabel.
You can click an image or click Details in the Operation column to view the details.
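To keep moderation history beyond the console's 30-day window, as the note above recommends, you can persist each API response yourself. A minimal sketch that appends one JSON line per response; the file name and record fields are illustrative, and the response layout is assumed from the API reference.

```python
import json
import time

# Sketch: persist each moderation response locally so that analysis is not
# limited by the console's 30-day, 50,000-record window. JSON Lines is used
# for simplicity; a database works just as well.
LOG_PATH = "moderation_results.jsonl"  # hypothetical local log file

def log_response(response: dict) -> None:
    record = {
        "loggedAt": int(time.time()),
        "requestId": response.get("RequestId"),
        "dataId": (response.get("Data") or {}).get("DataId"),
        "labels": [item.get("Label")
                   for item in (response.get("Data") or {}).get("Result") or []],
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```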
View the statistics on moderated images
You can view the statistics on the number of moderated images in the console. Based on the statistics, you can develop further moderation or governance measures for specific image content.
Log on to the Content Moderation console.
In the left-side navigation pane, choose Image Moderation 2.0 > Dashboards.
On the Dashboards page, select a time range to query or export the number of moderated images.
Query the number of moderated images: Content Moderation collects statistics on the number of moderated images by day and by month. The statistics are stored for one year, and a single query can span up to two months.
Export the statistics on the number of moderated images: You can click the icon in the upper-right corner to export the number of moderated images by day or month.
The exported report is in the Excel format and contains only the number of moderated images. The following table describes the headers in the exported report.
Header | Description | Unit |
Account UID | The ID of the account that exports the Excel report. | N/A |
service | Information about Image Moderation calls. | N/A |
Usage | The total number of moderated images. | N/A |
Date | The date on which the statistics are collected. | Day/Month |
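Once exported, the report can be analyzed with any spreadsheet or data tool. For example, a short pandas sketch; the file name is hypothetical, the column names follow the header table above, and pandas plus openpyxl must be installed.

```python
import pandas as pd  # requires pandas and openpyxl for .xlsx files

# Sketch: aggregate an exported usage report. Adjust the file name and
# column names to match your actual export.
df = pd.read_excel("image_moderation_usage.xlsx")

# Total number of moderated images per service across the exported period.
per_service = df.groupby("service")["Usage"].sum().sort_values(ascending=False)
print(per_service)
```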
View service hits: After usage statistics are collected, the hit statistics of each label are displayed for each called moderation service. The hits are presented in a column chart of daily moderation volumes and a rectangle hierarchy chart of label proportions.
Column chart of daily moderation volumes: displays, for each day, the number of images that hit risk labels and the number of images that did not.
Rectangle hierarchy chart of label proportions: displays the hits of all labels in descending order of label proportion. Labels with the same prefix are displayed in the same base color.