AutoML automates hyperparameter tuning by running experiments that iterate over trials and training tasks to find the optimal hyperparameter combination.
The following describes the mechanism of AutoML.
After you configure the value ranges of the hyperparameters, the search algorithm, and the conditions that stop the experiment, AutoML creates an experiment based on these settings.
The experiment uses the configured search algorithm to generate multiple hyperparameter combinations, and each trial trains the model with one of these combinations.
Note: You can configure multiple trials to run concurrently to accelerate model training. However, this increases resource costs.
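To make these moving parts concrete, the following sketch expresses such an experiment configuration as a plain Python dictionary. Every field name here (search_space, algorithm, max_trials, max_concurrent_trials, and so on) is illustrative only and does not correspond to actual AutoML console or SDK fields.

```python
# Hypothetical experiment configuration; all field names are illustrative.
experiment_config = {
    # Value ranges that the search algorithm explores.
    "search_space": {
        "learning_rate": {"type": "loguniform", "min": 1e-5, "max": 1e-1},
        "batch_size": {"type": "choice", "values": [16, 32, 64, 128]},
    },
    # Search algorithm that proposes hyperparameter combinations,
    # e.g. random search, grid search, or Bayesian optimization.
    "algorithm": "random",
    # Conditions that stop the experiment.
    "max_trials": 20,  # maximum number of searches
    "target_metric": {"name": "val_accuracy", "goal": "maximize"},
    # Number of trials that run at the same time; a higher value speeds up
    # the experiment but increases resource costs.
    "max_concurrent_trials": 4,
}
```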
A trial performs one or more computing tasks based on a single hyperparameter combination. A task can be a DLC job that runs on general computing resources or Lingjun intelligent computing resources, or a MaxCompute task that runs on MaxCompute computing resources. The billing, configuration method, and resource usage vary based on whether the task is a DLC job or a MaxCompute task.
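One common pattern, sketched below, is for each trial to substitute its hyperparameter combination into a command template before submitting the task. The placeholder syntax and the train.py script are assumptions for illustration, not the exact mechanism AutoML uses.

```python
from string import Template

# Hypothetical command template for a trial's computing task.
command_template = Template(
    "python train.py --learning_rate=${learning_rate} --batch_size=${batch_size}"
)

# One hyperparameter combination proposed by the search algorithm.
trial_params = {"learning_rate": 0.001, "batch_size": 64}

# The command that this trial submits as its task.
print(command_template.substitute(trial_params))
# -> python train.py --learning_rate=0.001 --batch_size=64
```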
After you start an experiment, AutoML continuously monitors the task metrics.
The experiment stops when one of its stop conditions is triggered: the maximum number of searches is reached, the algorithm's stop condition is met, or all combinations have been evaluated.
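These three stop conditions map naturally onto a simple search loop. The sketch below uses a random number in place of a real monitored metric; the search space, thresholds, and grid-style enumeration are made up for illustration.

```python
import itertools
import random

# Hypothetical search space; this sketch enumerates every combination.
search_space = {"learning_rate": [1e-4, 1e-3, 1e-2], "batch_size": [32, 64]}
combinations = [dict(zip(search_space, values))
                for values in itertools.product(*search_space.values())]

max_trials = 10         # stop condition: maximum number of searches
target_accuracy = 0.95  # stop condition: algorithm-level early stop

for trials_run, params in enumerate(combinations, start=1):
    metric = random.random()  # stand-in for the monitored trial metric
    if metric >= target_accuracy:  # algorithm stop condition met
        break
    if trials_run >= max_trials:   # maximum number of searches reached
        break
# If neither condition fires, the loop ends once all combinations are evaluated.
```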
AutoML then returns the results, which can be the optimal hyperparameter combination or the model produced by each trial. To view the models, you must specify a model storage path. You can also view the results in the logs.
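For this to work, the training code typically needs to expose the metric that AutoML monitors and write the model to the configured storage path. The skeleton below shows one way a training script might do both; the metric format, argument names, and file layout are assumptions, not the exact format AutoML parses.

```python
# Hypothetical training-script skeleton for a single trial.
import argparse
import json
import os


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--learning_rate", type=float, required=True)
    parser.add_argument("--batch_size", type=int, required=True)
    # Model storage path configured for the experiment (assumed argument name).
    parser.add_argument("--model_dir", default="./model")
    args = parser.parse_args()

    # ... train the model with args.learning_rate and args.batch_size ...
    val_accuracy = 0.9  # placeholder for the real validation metric

    # Emit the metric in a parseable form so the experiment can monitor it
    # from the task logs.
    print(json.dumps({"val_accuracy": val_accuracy}))

    # Persist the trained model under the configured storage path so it can
    # be inspected after the experiment finishes.
    os.makedirs(args.model_dir, exist_ok=True)
    with open(os.path.join(args.model_dir, "model.txt"), "w") as f:
        f.write("trained-model-placeholder")


if __name__ == "__main__":
    main()
```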
Based on the preceding workflow, you must configure several types of parameters before you start an experiment: basic experiment configurations, trial configurations, DLC job or MaxCompute task configurations, and hyperparameter search configurations.