
Intelligent Media Services: SubmitSmarttagJob

Last Updated: Dec 16, 2024

Submits a smart tagging job.

Operation description

Before you call this operation to submit a smart tagging job, you must add a smart tagging template and specify the analysis types that you want to use in the template. For more information, see CreateCustomTemplate. You can use the smart tagging feature only in the China (Beijing), China (Shanghai), and China (Hangzhou) regions. By default, an ApsaraVideo Media Processing (MPS) queue can process a maximum of two concurrent smart tagging jobs. If you need to process more concurrent smart tagging jobs, submit a ticket to contact Alibaba Cloud Technical Support for evaluation and configuration.

Debugging

You can run this operation directly in OpenAPI Explorer, which saves you the trouble of calculating signatures. After the call succeeds, OpenAPI Explorer can automatically generate SDK code samples.

Authorization information

The following table shows the authorization information for this operation. You can use this information in the Action element of a RAM policy to grant a RAM user or a RAM role the permissions to call this operation. Description of the columns:

  • Operation: the value that you can use in the Action element to specify the operation on a resource.
  • Access level: the access level of each operation. The levels are read, write, and list.
  • Resource type: the type of the resource on which you can authorize the RAM user or the RAM role to perform the operation. Take note of the following items:
    • The required resource types are displayed in bold characters.
    • If the permissions cannot be granted at the resource level, All Resources is used in the Resource type column of the operation.
  • Condition Key: the condition key that is defined by the cloud service.
  • Associated operation: other operations that the RAM user or the RAM role must be authorized to perform before this operation can be completed.
Operation: ice:SubmitSmarttagJob
Access level: *
Resource type: All Resources (*)
Condition key: none
Associated operation: none
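
If you write a custom RAM policy instead of using a system policy, a minimal statement that grants this operation might look like the following sketch. The Resource value * mirrors the All Resources entry above; narrow it if your account uses finer-grained resource ARNs.

{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ice:SubmitSmarttagJob",
      "Resource": "*"
    }
  ]
}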

Request parameters

Each parameter below is listed as "Parameter (type, required or optional)", followed by its description and an example value.

Title (string, required)

The video title. The title can contain letters, digits, and hyphens (-) and cannot start with a special character. The title can be up to 256 bytes in length.

Example: example-title-****
Content (string, optional)

The video description. The description can contain letters, digits, and hyphens (-) and cannot start with a special character. The description can be up to 1 KB in length.

Example: example content ****
ContentType (string, optional)

This parameter is discontinued.

Example: application/zip
ContentAddr (string, optional)

This parameter is discontinued.

Example: http://123.com/testVideo.mp4
Params (string, optional)

The additional request parameters, in the form of a JSON string. Example: {"needAsrData":true, "needOcrData":false}. The following parameters are supported (a combined sketch follows this entry):

  • needAsrData: specifies whether to query the automatic speech recognition (ASR) data. The value is of the BOOLEAN type. Valid values: true and false. Default value: false.
  • needOcrData: specifies whether to query the optical character recognition (OCR) data. The value is of the BOOLEAN type. Valid values: true and false. Default value: false.
  • needMetaData: specifies whether to query the metadata. The value is of the BOOLEAN type. Valid values: true and false. Default value: false.
  • nlpParams: the input parameters of the natural language processing (NLP) operator. The value is a JSON object. This parameter is empty by default, which indicates that the NLP operator is not used. For more information, see the "nlpParams" section of this topic.

Example: {"needAsrData":true, "needOcrData":false}
NotifyUrl (string, optional)

The URL for receiving callbacks. Set the value to an HTTP URL or an HTTPS URL.

Example: https://example.com/endpoint/aliyun/ai?id=76401125000***
UserData (string, optional)

The data to be passed through Simple Message Queue (SMQ, formerly MNS) during callbacks. The data can be up to 1 KB in length. For more information about how to specify an SMQ queue for receiving callbacks, see UpdatePipeline.

Example: {"a":"test"}
Input (object, required)

The job input.
Input.Type (string, optional)

The media type. Valid values:

  • OSS
  • Media
  • URL

Example: Media
Input.Media (string, optional)

The input media, which depends on the value of Type (the sketches after this entry show one example of each form):

  • If Type is set to OSS, specify an OSS path. Example: OSS://test-bucket/video/202208/test.mp4.
  • If Type is set to Media, specify a media asset ID. Example: c5c62d8f0361337cab312dce8e77dc6d.
  • If Type is set to URL, specify an HTTP URL. Example: https://zc-test.oss-cn-shanghai.aliyuncs.com/test/unknowFace.mp4.

Example: c5c62d8f0361337cab312dce8e77dc6d
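
For illustration, the three Type values correspond to Input objects such as the following sketches; the bucket, object, and media asset ID values are placeholders taken from the examples above.

{ "Type": "OSS", "Media": "OSS://test-bucket/video/202208/test.mp4" }
{ "Type": "Media", "Media": "c5c62d8f0361337cab312dce8e77dc6d" }
{ "Type": "URL", "Media": "https://zc-test.oss-cn-shanghai.aliyuncs.com/test/unknowFace.mp4" }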
TemplateId (string, optional)

The ID of the template that specifies the analysis algorithms. For more information about template operations, see Configure templates.

Example: 39f8e0bc005e4f309379701645f4
ScheduleConfig (object, optional)

The scheduling configurations.
ScheduleConfig.PipelineId (string, optional)

The ID of the ApsaraVideo Media Processing (MPS) queue to which you want to submit the smart tagging job. The MPS queue is bound to an SMQ queue. This parameter specifies the default MPS queue. By default, an MPS queue can process a maximum of two concurrent smart tagging jobs. To increase the limit, submit a ticket.

Example: acdbfe4323bcfdae
ScheduleConfig.Priority (string, optional)

The job priority. This parameter is not implemented. You can leave this parameter empty or enter a random value.

Example: 4
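
Putting the pieces together, a typical set of request parameters might look like the following sketch. This is a JSON-style view of the parameter structure for readability, not the literal wire format of the RPC request; the SDKs and OpenAPI Explorer handle signing and encoding for you. All values are placeholders taken from the examples above.

{
  "Title": "example-title-****",
  "Input": {
    "Type": "Media",
    "Media": "c5c62d8f0361337cab312dce8e77dc6d"
  },
  "TemplateId": "39f8e0bc005e4f309379701645f4",
  "Params": "{\"needAsrData\":true,\"needOcrData\":false}",
  "NotifyUrl": "https://example.com/endpoint/aliyun/ai?id=76401125000***",
  "UserData": "{\"a\":\"test\"}",
  "ScheduleConfig": {
    "PipelineId": "acdbfe4323bcfdae"
  }
}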

nlpParams

The nlpParams parameters are listed below as "parameter (type, required or optional)", grouped by feature, with a description and an example value. A combined example appears after this list.

nlpParams (object, required)
The parameters related to NLP. This parameter is required if NLP is specified in the template. Otherwise, the job fails. Example: {"sourceLanguage":"cn"}

Transcription
  • sourceLanguage (string, required): the source language used for transcription. Valid values: cn (Chinese), en (English), yue (Cantonese), and fspk (Chinese and English). Example: "cn"
  • diarizationEnabled (boolean, optional): specifies whether to enable the speaker diarization feature. Default value: false. Example: true
  • speakerCount (integer, optional): the speaker diarization parameter. If this parameter is not specified, the speakers are not recognized. 0: the number of speakers to be recognized is not limited. 2: the number of speakers to be recognized is limited to 2. Example: 2

Summarization
  • summarizationEnabled (boolean, optional): specifies whether to enable the summarization feature. After this feature is enabled, results such as the full-text summary and speaker summary can be generated. Example: true
  • summarizationTypes (string, optional): the expected summary types. Valid values: Paragraph (full-text summary), Conversational (speaker summary), QuestionsAnswering (Q&A summary), and MindMap (mind map). Example: "Paragraph,Conversational,QuestionsAnswering,MindMap"

Translation
  • translationEnabled (boolean, optional): specifies whether to enable the translation feature. Example: true
  • targetLanguages (string, optional): the target language of translation. Valid values: cn (Chinese), en (English), yue (Cantonese), and fspk (Chinese and English). Example: "en,cn"

Chapter identification
  • autoChaptersEnabled (boolean, optional): specifies whether to enable the chapter identification feature. The result includes the chapter title and chapter summary. Example: true

AI minutes
  • meetingAssistanceEnabled (boolean, optional): specifies whether to enable the AI minutes feature. The results include categories, keywords, key sentences, and to-do items. Example: true
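
As a combined example, an nlpParams object that enables transcription with speaker diarization, summarization, translation, chapter identification, and AI minutes could look like the following sketch; enable only the features that your template and use case require. This object is nested under the nlpParams key of the Params string described earlier.

{
  "sourceLanguage": "cn",
  "diarizationEnabled": true,
  "speakerCount": 2,
  "summarizationEnabled": true,
  "summarizationTypes": "Paragraph,QuestionsAnswering",
  "translationEnabled": true,
  "targetLanguages": "en",
  "autoChaptersEnabled": true,
  "meetingAssistanceEnabled": true
}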

Response parameters

The response is an object that contains the following parameters.

RequestId (string)

The request ID.

Example: ******11-DB8D-4A9A-875B-275798******

JobId (string)

The ID of the smart tagging job. We recommend that you save this ID for subsequent calls of other operations.

Example: ****d80e4e4044975745c14b****

Examples

Sample success responses

JSON format

{
  "RequestId": "******11-DB8D-4A9A-875B-275798******",
  "JobId": "****d80e4e4044975745c14b****"
}

Error codes

For a list of error codes, see Service error codes.

Change history

Change time: 2022-08-25
Summary of changes: The operation was added.