
Platform For AI: Model prediction and deployment

Last Updated: Dec 11, 2024

After you train a model in Machine Learning Designer, you may need to generate predictions on new data. Depending on your timeliness requirements, you can choose between two types of prediction services: real-time prediction and batch prediction.

  • Batch predictions

    You can add the trained model and test data to prediction components to perform batch prediction in Machine Learning Designer. You can then submit the pipeline to DataWorks and schedule it as a periodic task. For more information, see Batch predictions.

  • Real-time predictions

    • Deploy a model as an online service

      You can deploy models as online services in Elastic Algorithm Service (EAS) to perform real-time prediction. One-click deployment is available for Predictive Model Markup Language (PMML), AlinkModel, and XGBoost models trained in Machine Learning Designer. You can also manually export PMML model files and import them into EAS. Models in the Parameter Server (PS) format must be manually exported before you can deploy them as EAS online services.

    • Deploy a pipeline as an online service

      You can deploy specific pipelines as online services to perform real-time prediction. For example, you can use Alink algorithm components to build a batch pipeline that performs data preprocessing, feature engineering, and model prediction. You can then package the pipeline as a pipeline model and deploy it as an EAS online service with a few clicks.
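
Once a model or pipeline is deployed as an EAS online service, clients typically call it over HTTP with the service endpoint and token shown on the service details page. The sketch below illustrates this pattern with Python's standard library; the endpoint URL, token, and JSON payload shape are placeholders (the exact request format depends on the processor your service uses, e.g. the PMML processor), so adapt them to your deployment.

```python
import json
import urllib.request

# Placeholder values: copy the real endpoint and token from the
# EAS service details page of your deployed service.
EAS_ENDPOINT = "http://example-region.pai-eas.aliyuncs.com/api/predict/my_service"
EAS_TOKEN = "YOUR_SERVICE_TOKEN"

def build_request(features):
    """Build an HTTP POST request that sends one feature record to an
    EAS online service. The JSON body shape is illustrative only."""
    payload = json.dumps({"features": features}).encode("utf-8")
    return urllib.request.Request(
        EAS_ENDPOINT,
        data=payload,
        headers={
            "Authorization": EAS_TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request requires a running service, so it is left commented:
# with urllib.request.urlopen(build_request({"age": 42})) as resp:
#     print(json.loads(resp.read()))
```

For production workloads, the official EAS SDKs handle authentication and serialization for you; the raw HTTP call above is mainly useful for quick smoke tests of a newly deployed service.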