Platform For AI: Develop custom processors by using Python

Last Updated: Sep 13, 2024

You can use Python to develop a custom processor and debug the processor together with the model file on an on-premises machine. When debugging is complete, you can upload the processor package and model file to Object Storage Service (OSS) and mount the files when you deploy the model as a service. This topic describes how to use Python to develop custom processors.

Background information

    Note
    • We recommend that you separate the model file from the processor package. This way, you can reuse the processor for future deployments when the model is updated. You can call the get_model_path() method to obtain the storage path of the model file; this path is used to load the model in the prediction logic.

    • If a custom processor has a large number of dependencies and the package is large, we recommend that you use an image to deploy the model. For information about the two deployment methods, see the "Deployment methods" section in the EAS overview topic.

To develop a custom processor by using Python, perform the following steps:

  1. Step 1: Create a Python environment

    Elastic Algorithm Service (EAS) SDK for Python supports multiple machine learning frameworks and can integrate with various data analysis and manipulation frameworks such as Pandas. This topic describes two methods to create and upload an on-premises Python environment for developing custom processors.

  2. Step 2: Add prediction logic

    EAS SDK for Python adopts a high-performance remote procedure call (RPC) framework and contains APIs that facilitate interaction among EAS clusters. You only need to implement a few functions in the prediction logic to deploy the model in EAS.

  3. Step 3: Run on-premises tests

    Run on-premises tests to verify the prediction logic and ensure that the service can work as expected after deployment.

  4. Step 4: Package the Python code and environment

    Package the Python code and environment in the required format.

  5. Step 5: Upload the package and model file

    Upload the package and model file to OSS.

  6. Step 6: Deploy and test the model service

    Use the custom processor to deploy the model service.

Prerequisites

The model file is prepared.

Note

To facilitate management, we recommend that you separate the model file from the custom processor. After the development is complete, upload the model file and processor package to OSS and mount the files when you deploy the model.

Step 1: Create a Python environment

You can use package management tools such as pyenv to create a Python environment. The EASCMD client provided by EAS encapsulates the initialization process of EAS SDK for Python. After you download the tool, you only need to run a command to initialize the Python environment and generate related file templates. This client is suitable for Linux operating systems. Sample command:

# Install EASCMD and initialize EAS SDK for Python. 
$ wget https://eas-data.oss-cn-shanghai.aliyuncs.com/tools/eascmd/v2/eascmd64
# After you download EASCMD, make the client executable, and then configure your AccessKey pair to complete identity authentication. 
$ chmod +x eascmd64
$ ./eascmd64 config -i <access_id> -k <access_key>

# Initialize the environment. 
$ ./eascmd64 pysdk init ./pysdk_demo

When prompted, enter the Python version that you want to use. The default version is 3.6. After you select a version, the following directory and files are automatically created in the ./pysdk_demo directory: an ENV directory that contains the Python environment, an app.py file that contains a template for the prediction logic, and an app.json file that contains a template for service deployment.
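Under the default settings, the generated layout looks similar to the following sketch, based on the description above:

./pysdk_demo
    ENV/        # the Python environment
    app.py      # template for the prediction logic
    app.json    # template for service deployment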

Step 2: Add prediction logic

To add the prediction logic, create a file named app.py in the directory that contains the ENV directory. Sample code:

Note
  • If you use the EASCMD client or a pre-built image to create the Python environment, the app.py file is automatically created. You can modify the file based on your business requirements.

# -*- coding: utf-8 -*-
import allspark


class MyProcessor(allspark.BaseProcessor):
    """ MyProcessor is a example
        you can send mesage like this to predict
        curl -v http://127.0.0.1:8080/api/predict/service_name -d '2 105'
    """
    def initialize(self):
        """ load module, executed once at the start of the service
             do service intialization and load models in this function.
        """
        self.module = {'w0': 100, 'w1': 2}
        # model_dir = self.get_model_path().decode()
        # Define a load_model function. For example, to load the model.pt model file, implement the function as torch.load(model_dir + "/model.pt"). 
        # self.model = load_model(model_dir)

    def pre_process(self, data):
        """ Pre-process the request data.
        """
        x, y = data.split(b' ')
        return int(x), int(y)

    def post_process(self, data):
        """ Post-process the result before it is returned to the client.
        """
        return bytes(data, encoding='utf8')

    def process(self, data):
        """ process the request data
        """
        x, y = self.pre_process(data)
        w0 = self.module['w0']
        w1 = self.module['w1']
        y1 = w1 * x + w0
        if y1 >= y:
            return self.post_process("True"), 200
        else:
            return self.post_process("False"), 400


if __name__ == '__main__':
    # allspark.default_properties().put('rpc.keepalive', '10000')
    # Set the RPC keepalive timeout to 10 seconds. The default value is 5 seconds.
    # The worker_threads parameter specifies the concurrency of request processing.
    runner = MyProcessor(worker_threads=10)
    runner.run()

The preceding sample code provides an example of how to use EAS SDK for Python. The sample code creates a class that inherits from the BaseProcessor base class and implements the initialize() and process() functions. The following list describes the relevant functions.

initialize()

Initializes the processor. This function is called during service startup to load the model. You can add the following code to the initialize() function to separate the loading of the model file from the implementation of the processor:

    model_dir = self.get_model_path().decode()
    self.model = load_model(model_dir)

  • The get_model_path() method obtains the storage path of the model file on the service instance. The path is returned as a bytes object.

  • The load_model() function loads the model file for service deployment. For example, to load the model.pt model file, implement the function as torch.load(model_dir + "/model.pt").

get_model_path()

Retrieves the storage path of the model file on the service instance. The path is returned as a bytes object. If you upload the model file by specifying the model_path parameter in the JSON file, you can call the get_model_path() method to obtain this path.

process(data)

Processes a request. This function accepts the request body as an argument and returns the response to the client. The data input parameter specifies the request body and is of the BYTES type. The function returns a response body of the BYTES type and a status code of the INT type. In a success response, the status code is 0 or 200.

__init__(worker_threads=5, worker_processes=1, endpoint=None)

The constructor of the processor.

  • worker_threads: the number of worker threads. Default value: 5.

  • worker_processes: the number of worker processes. Default value: 1. If you set worker_processes to 1, the single-process multi-thread mode is used. If you set worker_processes to a value greater than 1, multiple processes concurrently handle requests and the threads only read the request data. Each process calls the initialize() function.

  • endpoint: the endpoint on which the service listens. You can specify the IP address and port number. Example: endpoint='0.0.0.0:8079'.

    Note

    Do not use port 8080 or port 9090, because EAS listens on these ports.

run()

Starts the service.
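The following snippet is a minimal sketch that ties these functions together: it loads a PyTorch model in initialize() by using get_model_path() and shows the constructor parameters. The torch dependency and the model.pt file name are assumptions carried over from the examples above, not requirements of EAS SDK for Python.

# -*- coding: utf-8 -*-
import allspark
import torch  # assumption: PyTorch is installed in the ENV environment


class TorchProcessor(allspark.BaseProcessor):
    def initialize(self):
        # get_model_path() returns the model directory on the instance as bytes.
        model_dir = self.get_model_path().decode()
        self.model = torch.load(model_dir + "/model.pt")

    def process(self, data):
        # Placeholder: a real processor parses data and runs self.model on it.
        return data, 200


if __name__ == '__main__':
    # Two worker processes with four threads each, listening on a custom port.
    runner = TorchProcessor(worker_threads=4, worker_processes=2,
                            endpoint='0.0.0.0:8079')
    runner.run()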

Step 3: Run on-premises tests

  1. Open a terminal window and run the following command in the directory that contains the app.py file to launch the Python project:

    ./ENV/bin/python app.py

    The following output indicates that the project was launched:

    [INFO] waiting for service initialization to complete...
    [INFO] service initialization complete
    [INFO] create service
    [INFO] rpc binds to predefined port 8080
    [INFO] install builtin handler call to /api/builtin/call
    [INFO] install builtin handler eastool to /api/builtin/eastool
    [INFO] install builtin handler monitor to /api/builtin/monitor
    [INFO] install builtin handler ping to /api/builtin/ping
    [INFO] install builtin handler prop to /api/builtin/prop
    [INFO] install builtin handler realtime_metrics to /api/builtin/realtime_metrics
    [INFO] install builtin handler tell to /api/builtin/tell
    [INFO] install builtin handler term to /api/builtin/term
    [INFO] Service start successfully
  2. Open a new terminal window and run the following command to send a test request.

    Verify the response against the sample code in the "Step 2: Add prediction logic" section of this topic.

    curl http://127.0.0.1:8080/test -d '10 20'
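    Based on the sample prediction logic, the input '10 20' yields y1 = 2 * 10 + 100 = 120, which is greater than or equal to 20, so the service returns "True" with status code 200. To exercise the other branch, you can send a second request with illustrative values such as the following, which returns "False" with status code 400:

    curl http://127.0.0.1:8080/test -d '10 300'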

Step 4: Package the Python code and environment

The EASCMD client provides a command that quickly packages the Python code. If you did not use the EASCMD client to develop the custom processor, you can manually package the complete environment. You can use one of the following methods to package the Python code and environment:

  • Run the pack command provided by the EASCMD client (Linux only).

    $ ./eascmd64 pysdk pack ./pysdk_demo

    The following output indicates that the command was executed:

    [PYSDK] Creating package: /home/xi****.lwp/code/test/pysdk_demo.tar.gz
  • Manually package the environment if you did not use the EASCMD client to develop the processor. The package must meet the following requirements; a packaging sketch follows this list.

    • Format: the package must be compressed in the .zip or .tar.gz format.

    • Content: the ENV directory must be located in the root directory of the package, and the package must contain the app.py file.
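    As a sketch of the manual approach, the following Python snippet builds a .tar.gz package that meets these requirements. It assumes that you run it from the directory that contains ENV and app.py; the package name pysdk_demo.tar.gz is taken from the EASCMD example above:

    import tarfile

    # Place ENV/ and app.py at the root of the archive, as required above.
    with tarfile.open("pysdk_demo.tar.gz", "w:gz") as tar:
        tar.add("ENV")
        tar.add("app.py")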

Step 5: Upload the package and model file

After you package the Python code and environment, upload the package (in the .zip or .tar.gz format) and the model file to OSS. You can mount the files when you deploy the service. For information about how to upload files to OSS, see ossutil command reference.
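For example, with the OSS SDK for Python (oss2), the upload might look like the following sketch. The AccessKey pair, endpoint, bucket, and object paths are placeholders that match the examples in this topic:

import oss2

# Placeholders: replace with your AccessKey pair, region endpoint, and bucket name.
auth = oss2.Auth('<access_key_id>', '<access_key_secret>')
bucket = oss2.Bucket(auth, 'https://oss-cn-beijing.aliyuncs.com', 'examplebucket')

# Upload the processor package and the model file.
bucket.put_object_from_file('exampledirectory/pysdk_demo.tar.gz', 'pysdk_demo.tar.gz')
bucket.put_object_from_file('exampledirectory/model/model.pt', 'model.pt')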

Step 6: Deploy and test the model service

You can deploy the model service in the PAI console or by using the EASCMD client.

  1. Deploy the service.

    Use the PAI console

    1. Go to the Deploy Service page. For more information, see Model service deployment by using the PAI console.

    2. On the Deploy Service page, configure the parameters. The following list describes the key parameters. For more information about parameter configuration, see Model service deployment by using the PAI console.

      • Deployment Method: select Deploy Service by Using Model and Processor.

      • Model File: configure this parameter based on your business requirements.

      • Processor Type: select Custom Processor.

      • Processor Language: select python.

      • Processor Package: select Import OSS File, and then select the OSS path in which the package file is stored.

      • Processor Main File: set the value to ./app.py.

    3. (Optional) In the Configuration Editor section, add the data_image parameter and set the value to the image path that you specified when you packaged the files. A sample snippet is shown after these steps.

      Note

      Configure the data_image parameter only if you use an image to upload the development environment in Step 4: Package the Python code and environment.

    4. Click Deploy.
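    If you add the data_image parameter, the addition in the Configuration Editor might look like the following example. The image path is a placeholder:

      {
        "data_image": "registry-vpc.cn-beijing.aliyuncs.com/<namespace>/<image>:<tag>"
      }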

    Use EASCMD

    The following section uses the Linux operating system as an example.

    1. Download the EASCMD client and perform identity authentication. For more information, see Download the EASCMD client and complete identity authentication.

    2. Create a JSON file named app.json in the directory where the EASCMD client is stored. Sample file if the processor is packaged manually or by using the EASCMD client:

      {
        "name": "pysdk_demo",
        "processor_entry": "./app.py",
        "processor_type": "python",
        "processor_path": "oss://examplebucket/exampledirectory/pysdk_demo.tar.gz",
        "model_path": "oss://examplebucket/exampledirectory/model",
        "cloud": {
          "computing": {
            "instance_type": "ecs.c7.large"
          }
        },
        "metadata": {
          "instance": 1
        }
      }
    3. Open a terminal window and run the following command in the directory where the JSON file is stored to deploy the service:

      $ ./eascmd64 create app.json

      The following output indicates that the service was deployed:

      [RequestId]: 1202D427-8187-4BCB-8D32-D7096E95B5CA
      +-------------------+-------------------------------------------------------------------+
      | Intranet Endpoint | http://182848887922****.vpc.cn-beijing.pai-eas.aliyuncs.com/api/predict/pysdk_demo |
      |             Token | ZTBhZTY3ZjgwMmMyMTQ5OTgyMTQ5YmM0NjdiMmNiNmJkY2M5ODI0****          |
      +-------------------+-------------------------------------------------------------------+
      [OK] Waiting task server to be ready
      [OK] Fetching processor from [oss://eas-model-beijing/195557026392****/pysdk_demo.tar.gz]
      [OK] Building image [registry-vpc.cn-beijing.aliyuncs.com/eas/pysdk_demo_cn-beijing:v0.0.1-20190806082810]
      [OK] Pushing image [registry-vpc.cn-beijing.aliyuncs.com/eas/pysdk_demo_cn-beijing:v0.0.1-20190806082810]
      [OK] Waiting [Total: 1, Pending: 1, Running: 0]
      [OK] Service is running
  2. Test the service.

    1. Go to the EAS-Online Model Services page. For more information, see Model service deployment by using the PAI console.

    2. Find the service that you want to test and click Invocation Method in the Service Type column to obtain the public endpoint and token.

    3. Run the following command in the terminal window to call the service:

      $ curl <service_url> -H 'Authorization: <token>' -d '10 20'

      Modify the following parameters:

      • Replace <service_url> with the endpoint that you obtained in substep 2. Example: http://182848887922****.vpc.cn-beijing.pai-eas.aliyuncs.com/api/predict/pysdk_demo.

      • Replace <token> with the token that you obtained in substep 2. Example: ZTBhZTY3ZjgwMmMyMTQ5OTgyMTQ5YmM0NjdiMmNiNmJkY2M5ODI0****.

      • The -d option specifies the input data that is sent to the service.
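      Equivalently, you can call the service by using the Python requests library. The following sketch uses the masked example values from above; replace them with your own endpoint and token:

      import requests

      # Masked example values from the deployment output; replace with your own.
      url = "http://182848887922****.vpc.cn-beijing.pai-eas.aliyuncs.com/api/predict/pysdk_demo"
      token = "ZTBhZTY3ZjgwMmMyMTQ5OTgyMTQ5YmM0NjdiMmNiNmJkY2M5ODI0****"

      response = requests.post(url, headers={"Authorization": token}, data="10 20")
      print(response.status_code, response.text)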
