
Platform for AI: E2E Development and Usage of LLM: Data Processing + Model Training + Model Inference

Last Updated: Sep 04, 2024

Machine Learning Designer of Platform for AI (PAI) provides various data processing components that help you edit, convert, filter, identify, and deduplicate data. You can combine different components to obtain high-quality data and generate text samples that meet your business requirements, and then use the processed data to train large language models (LLMs). This topic describes the end-to-end (E2E) development and usage of an LLM, including data processing, model training, and model inference.

Dataset

The dataset used by the preset template E2E Development and Usage of LLM: Data Processing + Model Training + Model Inference in Machine Learning Designer must contain the instruction and output fields.
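
The following rows show what such a dataset may look like. This is an illustrative sample: the instruction and output field names are required by the template, whereas the JSON Lines format and the sample contents are assumptions for demonstration purposes.

  {"instruction": "Translate the following sentence into English: 今天天气很好。", "output": "The weather is nice today."}
  {"instruction": "Write a one-sentence introduction to machine learning.", "output": "Machine learning enables systems to learn patterns from data instead of following hand-written rules."}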

Create and run a pipeline

  1. Go to the Visualized Modeling (Designer) page.

    1. Log on to the PAI console.

    2. In the upper-left corner, select a region based on your business requirements.

    3. In the left-side navigation pane, click Workspaces. On the Workspaces page, click the name of the workspace that you want to manage.

    4. In the left-side navigation pane, choose Model Training > Visualized Modeling (Designer).

  2. Create a pipeline.

    1. On the Preset Templates tab, choose Business Area > LLM. In the E2E Development and Usage of LLM: Data Processing + Model Training + Model Inference section, click Create.


    2. In the Create Pipeline dialog box, configure the pipeline parameters and click OK. You can retain the default values.

    3. In the pipeline list, find the pipeline that you created and click Open.

  3. Configure the pipeline.


    The pipeline contains the following key components:

    • LLM-Text Normalizer (DLC)-1/LLM-Text Normalizer (DLC)-2

      Applies Unicode normalization to text samples in the instruction and output fields and converts traditional Chinese characters to simplified characters.

    • LLM-Sensitive Content Mask (DLC)-1/LLM-Sensitive Content Mask (DLC)-2

      Masks sensitive information in text samples in the instruction and output fields. A conceptual sketch of the normalization and masking logic is provided after this component list. Examples:

      • Replace an email address with [EMAIL].

      • Replace a mobile phone number with [TELEPHONE] or [MOBILEPHONE].

      • Replace an ID card number with [IDNUM].

    • LLM model training

      Trains the model that you select by using the corresponding training method. The models are provided by QuickStart of PAI, and the underlying computing is performed by Deep Learning Containers (DLC) jobs. The training method must match the model that you select. The following list describes the training methods that each model supports; a sketch of how QLoRA fine-tuning works is provided after this component list.

      • qwen-7b: supports Quantized Low-Rank Adaptation (QLoRA) and full-parameter fine-tuning.

      • qwen-7b-chat: supports QLoRA and full-parameter fine-tuning.

      • qwen-1_8b-chat: supports QLoRA.

      • llama-2-7b: supports QLoRA and full-parameter fine-tuning.

      • llama-2-7b-chat: supports QLoRA and full-parameter fine-tuning.

      • baichuan2-7b-base: supports QLoRA, Low-Rank Adaptation (LoRA), and full-parameter fine-tuning.

      • baichuan2-7b-chat: supports QLoRA, LoRA, and full-parameter fine-tuning.

      • chatglm3-6b: supports QLoRA and LoRA.

    • Offline LLM inference

      Performs offline model inference based on the model that you select.
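
    The following Python sketch approximates what the text normalization and sensitive content masking components do. It is a conceptual illustration, not the components' actual implementation: the opencc dependency and the regular expressions are assumptions.

      # Conceptual sketch of text normalization and sensitive content masking.
      # Not the actual component implementation; the regular expressions below
      # are illustrative assumptions.
      import re
      import unicodedata

      from opencc import OpenCC  # third-party traditional-to-simplified converter

      t2s = OpenCC("t2s")

      def normalize(text: str) -> str:
          # Normalize to a canonical Unicode form (NFKC) and convert
          # traditional Chinese characters to simplified characters.
          return t2s.convert(unicodedata.normalize("NFKC", text))

      def mask_sensitive(text: str) -> str:
          # Replace common sensitive patterns with placeholder tokens.
          text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
          text = re.sub(r"\b1\d{10}\b", "[MOBILEPHONE]", text)  # 11-digit mobile numbers
          text = re.sub(r"\b\d{17}[\dXx]\b", "[IDNUM]", text)   # 18-digit ID card numbers
          return text

      print(mask_sensitive(normalize("聯繫我: user@example.com 或 13800138000")))
      # 联系我: [EMAIL] 或 [MOBILEPHONE]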
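
    The LLM model training component encapsulates the training methods listed above; you only pick a model and a method. For intuition only, the following sketch shows roughly what QLoRA fine-tuning looks like when set up with open-source tooling (Hugging Face transformers, peft, and bitsandbytes). The model ID and hyperparameters are assumptions, and this is not how the component is implemented internally.

      # Illustrative QLoRA setup with open-source tooling; not PAI's internals.
      import torch
      from transformers import AutoModelForCausalLM, BitsAndBytesConfig
      from peft import LoraConfig, get_peft_model

      model_name = "Qwen/Qwen-7B-Chat"  # assumed Hugging Face model ID

      # QLoRA step 1: load the base model quantized to 4 bits.
      bnb_config = BitsAndBytesConfig(
          load_in_4bit=True,
          bnb_4bit_quant_type="nf4",
          bnb_4bit_compute_dtype=torch.bfloat16,
      )
      model = AutoModelForCausalLM.from_pretrained(
          model_name, quantization_config=bnb_config, trust_remote_code=True
      )

      # QLoRA step 2: train only small low-rank adapter matrices on top of
      # the frozen, quantized base model.
      lora_config = LoraConfig(
          r=8,
          lora_alpha=32,
          lora_dropout=0.05,
          target_modules=["c_attn"],  # Qwen's fused attention projection
          task_type="CAUSAL_LM",
      )
      model = get_peft_model(model, lora_config)
      model.print_trainable_parameters()  # only a tiny fraction is trainable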

  4. Run the pipeline.

    After you run the pipeline, right-click the LLM Model Inference-1 component and choose View Data > Output Table to view the inference result.


More applications

You can also use the same preprocessed data to train multiple models and perform model inference on each of them. For example, you can create the following pipeline to concurrently fine-tune the qwen-7b-chat and llama-2-7b-chat models and compare the inference results that the two trained models generate on the same test data.

