Elastic Algorithm Service (EAS) allows you to deploy model services by using custom images. You can build a Docker image that contains the entire runtime environment of your service and mount your model or code to the service instances during runtime. For more information about how to mount the model or code, see Mount storage to services (advanced). This topic describes how to deploy a model service by using a custom image.
Background information
A model service often contains complex business logic, and custom inference logic may require a complex development environment. For example, you may need to run the yum install or apt-get install command to install multiple dependencies in the system path. To accommodate this complexity, EAS allows you to use custom images to deploy model services.
This topic describes how to use custom images to deploy model services in the PAI console or through the EASCMD client. For more information about how to use the EASCMD client, see Download the EASCMD client and complete user authentication.
If you use custom images to deploy model services, the EAS engine is injected into service instances as a sidecar container during runtime to collect traffic and system monitoring data and to add authentication information to service requests. You can send API requests over HTTP, WebSocket, or gRPC (HTTP/2) to call a model service.
To send requests over gRPC, set the metadata.enable_grpc parameter to true when you deploy the service.
Select an image repository
EAS supports the following types of custom image repositories:
Container Registry: For more information about how to create a container image, see What is Container Registry?
Personal Edition: provides a unified internal address for each region, such as registry-vpc.cn-shanghai.aliyuncs.com.
Note: By default, EAS cannot access the Internet. To use a Container Registry Personal Edition image for deployment, use the internal address for the region in which EAS is activated.
Enterprise Edition: provides advanced features, such as image acceleration and on-demand resource loading. For more information, see Load resources of a container image on demand.
Note: A Container Registry Enterprise Edition image can be accessed only within your virtual private cloud (VPC). To allow EAS to pull the image, you must connect EAS to the corresponding VPC. For more information, see Configure network connectivity.
Self-managed image repository: If you use Harbor to create a self-managed image repository in a VPC, the image repository is accessible only within your VPC. Similar to using a Container Registry Enterprise Edition image, you must connect EAS to your VPC. For more information, see Configure network connectivity.
By default, EAS cannot access the Internet. If you want EAS to pull an image from an image repository over the Internet, you must enable Internet access for EAS. For more information, see Configure Internet access and a whitelist.
Authenticate access to an image repository
The required information for authentication varies based on the type of the image repository.
Container Registry: If you use an Alibaba Cloud account to activate EAS and grant EAS the permissions to access Container Registry, you can use the same account to pull Container Registry Personal Edition or Enterprise Edition images without authentication.
Self-managed image repository: If your self-managed image repository uses a username and a password as access credentials, you must configure the dockerAuth parameter when you deploy the model service. For more information, see the "Deploy a model service" section of this topic.
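The dockerAuth value is the Base64 encoding of username:password, as described in the parameter reference later in this topic. As an alternative to the echo command shown there, the following minimal Python sketch generates the value; the abcd:abcde12345 credential is a placeholder.
import base64

# Placeholder credentials for illustration; replace them with the username and
# password of your self-managed image repository.
credentials = "abcd:abcde12345"

# Base64-encode "username:password" and use the output as the dockerAuth value.
docker_auth = base64.b64encode(credentials.encode("utf-8")).decode("utf-8")
print(docker_auth)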
Develop a custom image
If you already have an image that supports access over HTTP, WebSocket, or gRPC, skip this step and proceed to deploy the model service.
You can use multiple methods to develop an image. For example, you can use Flask to start a web server. Sample code:
from flask import Flask
app = Flask(__name__)
@app.route('/hello/model')
def hello_world():
    return 'Hello World'
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
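To verify the sample before you build an image, you can run it locally and send a test request. The following is a minimal sketch that uses the requests package; it assumes that the sample is saved as app.py and started with python app.py, which are illustrative assumptions rather than required names.
import requests

# Send a test request to the locally running Flask sample on port 8000.
response = requests.get("http://127.0.0.1:8000/hello/model")
print(response.status_code, response.text)  # Expected output: 200 Hello World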
You can also use allspark, which is the high-performance serving framework provided by EAS, to develop an image. For more information, see Develop custom processors by using Python.
# -*- coding: utf-8 -*-
import allspark

class MyProcessor(allspark.BaseProcessor):
    def initialize(self):
        """
        This function is executed only once when the service starts,
        and traffic is not routed to the service until the execution completes.
        You can use it to load models and perform other service initialization.
        """
        pass

    def process(self, data):
        """
        This function is called each time a request is received.
        The input parameter is the request body of the HTTP request.
        The function returns two values:
        1. The first value is used as the HTTP response body.
        2. The second value is the HTTP response status code.
        """
        data = 'hello eas'
        return bytes(data, encoding='utf8'), 200

if __name__ == '__main__':
    runner = MyProcessor(endpoint='0.0.0.0:8000')
    runner.run()
After you complete development, you can run the docker build command to build an image that contains your code and use the image for service deployment. Alternatively, you can store the code in a File Storage NAS (NAS) file system or a Git repository and mount the code during service runtime. For more information, see Mount storage to services (advanced).
Deploy a model service
Preparations
Obtain the required information to deploy a model service by using a custom image:
The address of the image repository. Example: registry-vpc.cn-shanghai.aliyuncs.com/xxx/yyy:zzz.
The command to start a container based on the image. Example: /data/eas/ENV/bin/python /data/eas/app.py.
The port number of the container on which the service listens. Example: 8000.
Note: The port number is optional. For example, if you want the service to accept requests from message queues within the container instead of from the EAS gateway, you do not need to specify the port number.
Prepare the JSON configuration file that is used for service deployment based on the following example.
You can use the configuration file to deploy a model service by using the EASCMD client or calling API operations. If you deploy a model service in the PAI console, the configuration file is automatically generated based on your interactions with the GUI.
{ "name": "image_test", "containers": [ { "image": "registry-vpc.cn-shanghai.aliyuncs.com/xxx/yyy:zzz", "env": [ { "name": "var_name", "value": "var_value" } ], "command": "/data/eas/ENV/bin/python /data/eas/app.py", "port": 8000 } ], "metadata": { "cpu": 1, "instance": 1 } }
The following list describes the parameters in the JSON file.

containers.image
Required: Yes
The address of the image that you want to use to deploy the model service. Because EAS cannot access the Internet by default, specify the image address in the VPC to which EAS is connected. Example: registry-vpc.cn-shanghai.aliyuncs.com.

containers.env.name
Required: Yes
The name of the environment variable that is used to start a container based on the image.

containers.env.value
Required: Yes
The value of the environment variable that is used to start a container based on the image.

containers.command
Required: Specify either command or script.
The command to run when you start a container. Use this parameter for simple commands that do not require /bin/sh. If you want to run complex commands such as cd xxx && python app.py, configure the script parameter instead.

containers.script
Required: Specify either command or script.
The script to run when you start a container. Use this parameter for complex commands. Separate multiple lines with \n or semicolons (;).

containers.port
Required: No
The port number of the container on which the service listens.
Important: Do not specify port 8080 or port 9090, because the EAS engine listens on these ports.
If you configure the command parameter, set this parameter to the port that is used in the xxx.py file specified by the command parameter.

containers.prepare.pythonRequirements
Required: No
A list of the Python packages to install before the container starts. This parameter takes effect only if the python and pip commands are available in the system path. Example:
"prepare": { "pythonRequirements": [ "numpy==1.16.4", "absl-py==0.11.0" ] }

containers.prepare.pythonRequirementsPath
Required: No
The path of the requirements.txt file that lists the packages to install before the container starts. This parameter takes effect only if the python and pip commands are available in the system path. The requirements.txt file can be stored in the image or mounted from external storage. Example:
"prepare": { "pythonRequirementsPath": "/data_oss/requirements.txt" }

containers.health_check.tcp_socket.port
Required: No
The port number that is used to receive TCP health check requests.

containers.health_check.http_get.path
Required: No
The HTTP path that is used to receive HTTP health check requests.

containers.health_check.http_get.port
Required: No
The port number of the HTTP server that is used to receive HTTP health check requests.

containers.health_check.initial_delay_seconds
Required: No
The waiting time before the first health check starts. Default value: 3. Unit: seconds.

containers.health_check.period_seconds
Required: No
The interval between consecutive health checks. Default value: 3. Unit: seconds.

containers.health_check.timeout_seconds
Required: No
The timeout period of a health check request. Default value: 1. Unit: seconds. If a health check request does not receive a response within the timeout period, the health check is considered failed.

containers.health_check.success_threshold
Required: No
The number of successful health checks that are required before a container is considered healthy. For example, if you set this parameter to 2, a container is considered healthy after two successful health checks and starts to receive traffic.

containers.health_check.failure_threshold
Required: No
The number of failed health checks that are required before a container is considered unhealthy. For example, if you set this parameter to 4, a container is considered unhealthy after four failed health checks and stops receiving traffic.

dockerAuth
Required: No
The authentication information for the Docker registry. This parameter is required if the image resides in a private image repository. Set the value to the Base64-encoded form of username:password. For example, if username:password is abcd:abcde12345, you can run the echo -n "abcd:abcde12345" | base64 command to Base64-encode the string, and then specify the output, such as YWJjZDphYmNkZTEy****, as the value of the dockerAuth parameter.

metadata
Required: No
The metadata of the service. For more information, see Run commands to use the EASCMD client.

name
Required: Yes
The name of the service. The name must be unique within a region.
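If you prefer to generate the configuration file programmatically, the following minimal Python sketch shows one way to assemble these parameters and write them to image.json for the EASCMD steps in the next section. All values are the placeholder examples from this topic, not required values.
import json

# Assemble the service configuration described above; replace the placeholder
# image address, command, and environment variable with your own settings.
service_config = {
    "name": "image_test",
    "containers": [
        {
            "image": "registry-vpc.cn-shanghai.aliyuncs.com/xxx/yyy:zzz",
            "env": [{"name": "var_name", "value": "var_value"}],
            "command": "/data/eas/ENV/bin/python /data/eas/app.py",
            "port": 8000,
        }
    ],
    "metadata": {"cpu": 1, "instance": 1},
}

# Write the configuration to image.json, which is passed to the eascmd create command.
with open("image.json", "w") as f:
    json.dump(service_config, f, indent=2)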
Use the EASCMD client
Run the create command on the EASCMD client:
eascmd create image.json
In the preceding command, image.json specifies the name of the configuration file that you created in the preceding step.
If the command runs as expected, the command output contains the endpoint and token of the service, as shown in the following example.
[RequestId]: BFFFE5F5-1F07-437E-B59A-AF1F2B66****
+-------------------+-----------------------------------------------------------------------------------+
| Internet Endpoint | http://182848887922***.cn-shanghai.pai-eas.aliyuncs.com/api/predict/image_test |
| Intranet Endpoint | http://182848887922***.vpc.cn-shanghai.pai-eas.aliyuncs.com/api/predict/image_test|
| Token | NjA4MzQxOWQ0MTY2M2Y4OGY0NjgwODkwZTZmYWJmZWU1ZmY0Njhk**** |
+-------------------+-----------------------------------------------------------------------------------+
[OK] Service is now creating
[OK] Waiting [Total: 2, Pending: 2, Running: 0]
[OK] Running [Total: 2, Pending: 0, Running: 2]
[OK] Service is running
If you need to modify service configurations, run the modify command, as shown in the following example. For more information about the modify command, see Modify service configurations.
eascmd modify registry_test -s image.json
In the preceding command, registry_test specifies the name of the service and image.json specifies the name of the configuration file that you created in the preceding step.
Use the PAI console
In this example, a model service is deployed in the PAI console by using an image from a self-managed image repository. The username and password that are used to access the image repository are configured, and the model file and implementation code are mounted to service instances from OSS. The following configuration file is generated:
{
"metadata": {
"name": "image_test",
"instance": 1
},
"storage": [
{
"mount_path": "/models/",
"oss": {
"path": "oss://example-cn-beijing/models/",
"readOnly": true
}
},
{
"mount_path": "/root/code",
"oss": {
"path": "oss://example-cn-beijing/processors/",
"readOnly": true
}
}
],
"containers": [
{
"image": "myharbor.com/xxx/yyy:zzz",
"script": "python /root/code/app.py",
"port": 8000,
"prepare": {
"pythonRequirementsPath": "/root/code/requirements.txt"
},
"env": [
{
"name": "var_name",
"value": "var_value"
}
]
}
],
"dockerAuth": "YWJjOmJjZA=="
}
Call a model service
After you deploy a model service, you can obtain the endpoint of the service on the EAS-Online Model Services page in the PAI console or by running the eascmd desc command on the EASCMD client. Assume that you use Flask to create a web service within a container that hosts your model service. Sample code:
from flask import Flask
app = Flask(__name__)
@app.route('/hello/model')
def hello_world():
    return 'Hello World'
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
The web service listens on the /hello/model path. To send a synchronous request to the web service, append /hello/model to the endpoint of the model service. For example, if the endpoint of the model service in EAS is http://182848887922***.cn-shanghai.pai-eas.aliyuncs.com/api/predict/image_test, requests to the web service must be sent to the following endpoint: http://182848887922***.cn-shanghai.pai-eas.aliyuncs.com/api/predict/image_test/hello/model.
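The following minimal sketch shows one way to call the deployed service from Python. It assumes that the service token is passed in the Authorization request header and uses the placeholder endpoint and token from the preceding examples; replace both values with those of your own service.
import requests

# Placeholder endpoint and token from this topic; replace them with your own values.
endpoint = "http://182848887922***.cn-shanghai.pai-eas.aliyuncs.com/api/predict/image_test"
token = "NjA4MzQxOWQ0MTY2M2Y4OGY0NjgwODkwZTZmYWJmZWU1ZmY0Njhk****"

# Call the /hello/model path of the web service that runs inside the container.
response = requests.get(f"{endpoint}/hello/model", headers={"Authorization": token})
print(response.status_code, response.text)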
For more information about how to call a service, see Call a service over a public endpoint.
Appendix: Sample service configurations
Configure the pythonRequirements parameter
{
"name": "image_test",
"containers": [
{
"image": "registry-vpc.cn-shanghai.aliyuncs.com/xxx/yyy:zzz",
"prepare": {
"pythonRequirements": [
"numpy==1.16.4",
"absl-py==0.11.0"
]
},
"command": "python app.py",
"port": 8000
}
],
"metadata": {
"instance": 1
}
}
Configure the health_check parameter
{
"name": "image_test",
"containers": [
{
"image": "registry-vpc.cn-shanghai.aliyuncs.com/xxx/yyy:zzz",
"command": "python app.py",
"port": 8000,
"health_check":{
"http_get": {
"path": "/",
"port": 8000
},
"initial_delay_seconds": 3,
"period_seconds": 3,
"timeout_seconds": 1,
"success_threshold": 2,
"failure_threshold": 4
}
}
],
"metadata": {
"instance": 1
}
}
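In the preceding example, EAS sends HTTP GET requests to the / path on port 8000 to check the health of the container. The following minimal Flask sketch, based on the earlier sample code, adds a handler for that path; the handler name and response body are illustrative assumptions.
from flask import Flask

app = Flask(__name__)

# Health check handler: EAS probes the path and port configured in health_check.http_get.
@app.route('/')
def health():
    return 'ok', 200

# Business handler from the earlier Flask sample.
@app.route('/hello/model')
def hello_world():
    return 'Hello World'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)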
Use the gRPC protocol
{
"name": "image_test",
"containers": [
{
"image": "registry-vpc.cn-shanghai.aliyuncs.com/xxx/yyy:zzz",
"command": "python app.py",
"port": 8000
}
],
"metadata": {
"instance": 1,
"enable_grpc": true
}
}
References
For information about how deployment works in EAS, see Overview.
After you deploy a service, you can use the auto scaling feature to handle workload fluctuations in your business. For more information, see Auto scaling.