Parameter | Type | Description | Example
| object | | |
RequestId | string | The ID of the request. | B42299E6-F71F-465F-8FE9-4FC2E3D3C2CA |
MediaCensorJobDetail | object | The results of the content moderation job. | |
CreationTime | string | The time when the content moderation job was created. | 2018-09-13T16:32:24Z |
FinishTime | string | The time when the content moderation job was completed. | 2018-09-13T16:38:24Z |
Suggestion | string | The overall result of the content moderation job. Valid values:
- pass: The content passes the moderation.
- review: The content needs to be manually reviewed.
- block: The content needs to be blocked.
Note
If the moderation result of any type of content is review, the overall result is review. If the moderation result of any type of content is block, the overall result is block.
| block |
CoverImageCensorResults | array<object> | The moderation results of thumbnails. | |
CoverImageCensorResult | object | | |
Object | string | The Object Storage Service (OSS) object that is used as the thumbnail. | test/ai/censor/v2/vme-****.jpg |
Location | string | The OSS region in which the thumbnail resides. | oss-cn-shanghai |
Bucket | string | The OSS bucket in which the thumbnail is stored. | bucket-out-test-**** |
Results | array<object> | | |
Result | object | The detailed moderation results. | |
Suggestion | string | The recommended subsequent operation. Valid values:
- pass: The content passes the moderation.
- review: The content needs to be manually reviewed.
- block: The content needs to be blocked.
| pass |
Label | string | The label of the moderation result.
Valid values in the pornographic content moderation scenario:
- normal: normal content.
- sexy: sexy content.
- porn: pornographic content.
Valid values in the terrorist content moderation scenario:
- normal: normal content.
- bloody: bloody content.
- explosion: explosion and smoke.
- outfit: special costume.
- logo: special logo.
- weapon: weapon.
- politics: political content.
- violence: violence.
- crowd: crowd.
- parade: parade.
- carcrash: car accident.
- flag: flag.
- location: landmark.
- others: other content.
Valid values in the ad moderation scenario:
- normal: normal content.
- ad: other ads.
- politics: political content in text.
- porn: pornographic content in text.
- abuse: abuse in text.
- terrorism: terrorist content in text.
- contraband: prohibited content in text.
- spam: spam in text.
- npx: illegal ad.
- qrcode: QR code.
- programCode: mini program code.
Valid values in the undesirable scene moderation scenario:
- normal: normal content.
- meaningless: meaningless content, such as a black or white screen.
- PIP: picture-in-picture.
- smoking: smoking.
- drivelive: live broadcasting in a running vehicle.
Valid values in the logo moderation scenario:
- normal: normal content.
- TV: controlled logo.
- trademark: trademark.
| Normal |
Scene | string | The moderation scenario. Valid values:
- porn: pornographic content moderation.
- terrorism: terrorist content moderation.
- ad: ad moderation.
- live: undesirable scene moderation.
- logo: logo moderation.
| Antispam |
Rate | string | The score. Valid values: 0 to 100. | 100 |
State | string | The status of the task. | Success |
TitleCensorResult | object | The moderation results of titles. | |
Suggestion | string | The recommended subsequent operation. Valid values:
- pass: The content passes the moderation.
- review: The content needs to be manually reviewed.
- block: The content needs to be blocked.
| block |
Label | string | The label of the moderation result. Valid values:
- normal: normal content.
- spam: spam.
- ad: ads.
- abuse: abuse content.
- flood: excessive junk content.
- contraband: prohibited content.
- meaningless: meaningless content.
| meaningless |
Scene | string | The moderation scenario. The value is antispam. | antispam |
Rate | string | The score. | 99.91 |
Message | string | The error message returned if the job failed. This parameter is not returned if the job is successful. | The resource operated cannot be found |
Input | object | The information about the job input. | |
Object | string | The name of the OSS object that is used as the input file. | test/ai/censor/test-****.mp4 |
Location | string | The OSS region in which the input file resides. | oss-cn-shanghai |
Bucket | string | The name of the OSS bucket in which the input file is stored. | bucket-test-in-**** |
BarrageCensorResult | object | The moderation results of live comments. | |
Suggestion | string | The recommended subsequent operation. Valid values:
- pass: The content passes the moderation.
- review: The content needs to be manually reviewed.
- block: The content needs to be blocked.
| pass |
Label | string | The label of the moderation result. Valid values:
- normal: normal content.
- spam: spam.
- ad: ads.
- abuse: abuse content.
- flood: excessive junk content.
- contraband: prohibited content.
- meaningless: meaningless content.
| normal |
Scene | string | The moderation scenario. The value is antispam. | antispam |
Rate | string | The score. | 99.91 |
DescCensorResult | object | The moderation results of descriptions. | |
Suggestion | string | The recommended subsequent operation. Valid values:
- pass: The content passes the moderation.
- review: The content needs to be manually reviewed.
- block: The content needs to be blocked.
| review |
Label | string | The label of the moderation result. Valid values:
- normal: normal content.
- spam: spam.
- ad: ads.
- abuse: abuse content.
- flood: excessive junk content.
- contraband: prohibited content.
- meaningless: meaningless content.
| terrorism |
Scene | string | The moderation scenario. The value is antispam. | antispam |
Rate | string | The score. | 100 |
VideoCensorConfig | object | The video moderation configurations. | |
OutputFile | object | The information about output snapshots. | |
Object | string | The OSS object that is generated as the output snapshot.
Note
In the example, {Count} is a placeholder. The OSS objects that are generated as output snapshots are named output00001-****.jpg, output00002-****.jpg, and so on.
| output{Count}.jpg |
Location | string | The OSS region in which the output snapshot resides. | oss-cn-shanghai |
Bucket | string | The OSS bucket in which the output snapshot is stored. | test-bucket-**** |
VideoCensor | string | Indicates whether the video content needs to be moderated. Default value: true. Valid values:
- true: The video content needs to be moderated.
- false: The video content does not need to be moderated.
| true |
BizType | string | The custom business type. Default value: common. | common |
JobId | string | The ID of the content moderation job. | f8f166eea7a44e9bb0a4aecf9543**** |
UserData | string | The custom data. | example userdata **** |
Code | string | The error code returned if the job failed. This parameter is not returned if the job is successful. | InvalidParameter.ResourceNotFound |
VensorCensorResult | object | The moderation results of videos. | |
VideoTimelines | array<object> | The moderation results that are sorted in ascending order by time. | |
VideoTimeline | object | | |
Timestamp | string | The position in the video. Format: hh:mm:ss[.SSS]. | 00:02:59.999 |
Object | string | The OSS object that is generated as the output snapshot.
Note
In the example, {Count} is a placeholder. The OSS objects that are generated as output snapshots are named output00001-****.jpg, output00002-****.jpg, and so on.
| output{Count}.jpg |
CensorResults | array<object> | The moderation results that include information such as labels and scores. | |
CensorResult | object | | |
Suggestion | string | The recommended subsequent operation. Valid values:
- pass: The content passes the moderation.
- review: The content needs to be manually reviewed.
- block: The content needs to be blocked.
| block |
Label | string | The label of the moderation result.
Valid values in the pornographic content moderation scenario:
- normal: normal content.
- sexy: sexy content.
- porn: pornographic content.
Valid values in the terrorist content moderation scenario:
- normal: normal content.
- bloody: bloody content.
- explosion: explosion and smoke.
- outfit: special costume.
- logo: special logo.
- weapon: weapon.
- politics: political content.
- violence: violence.
- crowd: crowd.
- parade: parade.
- carcrash: car accident.
- flag: flag.
- location: landmark.
- others: other content.
Valid values in the ad moderation scenario:
- normal: normal content.
- ad: other ads.
- politics: political content in text.
- porn: pornographic content in text.
- abuse: abuse in text.
- terrorism: terrorist content in text.
- contraband: prohibited content in text.
- spam: spam in text.
- npx: illegal ad.
- qrcode: QR code.
- programCode: mini program code.
Valid values in the undesirable scene moderation scenario:
- normal: normal content.
- meaningless: meaningless content, such as a black or white screen.
- PIP: picture-in-picture.
- smoking: smoking.
- drivelive: live broadcasting in a running vehicle.
Valid values in the logo moderation scenario:
- normal: normal content.
- TV: controlled logo.
- trademark: trademark.
| flood |
Scene | string | The moderation scenario. Valid values:
- porn: pornographic content moderation.
- terrorism: terrorist content moderation.
- ad: ad moderation.
- live: undesirable scene moderation.
- logo: logo moderation.
| porn |
Rate | string | The score. | 99.99 |
NextPageToken | string | A pagination token. It can be used in the next request to retrieve a new page of results. | ea04afcca7cd4e80b9ece8fbb251**** |
CensorResults | array<object> | A collection of moderation results, including summaries of scenarios such as pornographic content moderation and terrorist content moderation. | |
CensorResult | object | The detailed moderation results. | |
Suggestion | string | The recommended subsequent operation. Valid values:
- pass: The content passes the moderation.
- review: The content needs to be manually reviewed.
- block: The content needs to be blocked.
| review |
Label | string | The label of the moderation result.
Valid values in the pornographic content moderation scenario:
- normal: normal content.
- sexy: sexy content.
- porn: pornographic content.
Valid values in the terrorist content moderation scenario:
- normal: normal content.
- bloody: bloody content.
- explosion: explosion and smoke.
- outfit: special costume.
- logo: special logo.
- weapon: weapon.
- politics: political content.
- violence: violence.
- crowd: crowd.
- parade: parade.
- carcrash: car accident.
- flag: flag.
- location: landmark.
- others: other content.
Valid values in the ad moderation scenario:
- normal: normal content.
- ad: other ads.
- politics: political content in text.
- porn: pornographic content in text.
- abuse: abuse in text.
- terrorism: terrorist content in text.
- contraband: prohibited content in text.
- spam: spam in text.
- npx: illegal ad.
- qrcode: QR code.
- programCode: mini program code.
Valid values in the undesirable scene moderation scenario:
- normal: normal content.
- meaningless: meaningless content, such as a black or white screen.
- PIP: picture-in-picture.
- smoking: smoking.
- drivelive: live broadcasting in a running vehicle.
Valid values in the logo moderation scenario:
- normal: normal content.
- TV: controlled logo.
- trademark: trademark.
| meaningless |
Scene | string | The moderation scenario. Valid values:
- porn: pornographic content moderation.
- terrorism: terrorist content moderation.
- ad: ad moderation.
- live: undesirable scene moderation.
- logo: logo moderation.
| terrorism |
Rate | string | The score. | 100 |
PipelineId | string | The ID of the ApsaraVideo Media Processing (MPS) queue to which the job was submitted. | c5b30b7c0d0e4a0abde1d5f9e751**** |
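The same Suggestion field appears at several levels of this structure (thumbnails, title, live comments, description, and video timelines), and the Note under MediaCensorJobDetail.Suggestion describes how the per-asset results roll up into the overall result. The following Python sketch walks those levels for a response that has already been deserialized into a dict. The wrapper keys used for the array fields (for example {"CoverImageCensorResults": {"CoverImageCensorResult": [...]}}) and the assumption that block outranks review are inferences from this table, not confirmed details of the wire format.

```python
# Minimal sketch (not an SDK call): aggregate the per-asset Suggestion values from a
# deserialized content moderation job response. The wrapper layout below is an
# assumption inferred from the field names in this table.

SEVERITY = {"pass": 0, "review": 1, "block": 2}


def collect_suggestions(detail: dict) -> list:
    """Gather Suggestion values from thumbnails, text fields, and video timelines."""
    suggestions = []

    # Thumbnails: CoverImageCensorResults -> CoverImageCensorResult[] -> Results -> Result[]
    for cover in detail.get("CoverImageCensorResults", {}).get("CoverImageCensorResult", []):
        for result in cover.get("Results", {}).get("Result", []):
            if result.get("Suggestion"):
                suggestions.append(result["Suggestion"])

    # Text fields: title, live comments (barrage), and description
    for key in ("TitleCensorResult", "BarrageCensorResult", "DescCensorResult"):
        suggestion = detail.get(key, {}).get("Suggestion")
        if suggestion:
            suggestions.append(suggestion)

    # Video: VensorCensorResult -> VideoTimelines -> VideoTimeline[] -> CensorResults -> CensorResult[]
    video = detail.get("VensorCensorResult", {})
    for timeline in video.get("VideoTimelines", {}).get("VideoTimeline", []):
        for result in timeline.get("CensorResults", {}).get("CensorResult", []):
            if result.get("Suggestion"):
                suggestions.append(result["Suggestion"])

    return suggestions


def overall_suggestion(detail: dict) -> str:
    """Apply the rule in the Note: block outranks review, review outranks pass (assumed precedence)."""
    suggestions = collect_suggestions(detail) or ["pass"]
    return max(suggestions, key=lambda s: SEVERITY.get(s, 0))
```

In practice, MediaCensorJobDetail.Suggestion already carries this aggregated value; the sketch is only meant to make the nesting of the result objects concrete.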