This topic describes how to construct a request for a TensorFlow service that is deployed by using a universal processor.
Notes on input data
Elastic Algorithm Service (EAS) provides a built-in TensorFlow processor that you can use to deploy TensorFlow models as services. To ensure service performance, the input and output data must be in the Protocol Buffers format.
Examples
A public test model is deployed as a service in the China (Shanghai) region. The service name is mnist_saved_model_example. The service is accessible to all users in VPCs in this region. No access token is specified. You can send requests to the http://pai-eas-vpc.cn-shanghai.aliyuncs.com/api/predict/mnist_saved_model_example endpoint to call the service. The following section describes how to call the service:
Obtain the model information.
You can send a GET request to obtain the model information, including signature_name, name, type, and shape. The following example shows the command and the sample response:
$ curl http://pai-eas-vpc.cn-shanghai.aliyuncs.com/api/predict/mnist_saved_model_example | python -mjson.tool
{
    "inputs": [
        {
            "name": "images",
            "shape": [
                -1,
                784
            ],
            "type": "DT_FLOAT"
        }
    ],
    "outputs": [
        {
            "name": "scores",
            "shape": [
                -1,
                10
            ],
            "type": "DT_FLOAT"
        }
    ],
    "signature_name": "predict_images"
}
This model is a classification model trained on the Mixed National Institute of Standards and Technology (MNIST) dataset. Download the MNIST dataset. The input data type is DT_FLOAT, and the input shape is [-1,784]. The first dimension is batch_size, which is 1 if the request contains only one image. The second dimension is a 784-dimensional vector. Because the input was flattened into a one-dimensional vector when the test model was trained, an image of 28 × 28 pixels must be flattened into a one-dimensional vector of length 784, which is 28 × 28. Regardless of the input shape of the model, you must flatten the request input into a one-dimensional vector. In this example, a single image is flattened into a one-dimensional vector of length 1 × 784. If the input shape specified when the model is trained is [-1,28,28], the request input must be flattened into a one-dimensional vector of length 1 × 28 × 28. If the input shape in the request does not match the input shape of the model, the request fails.
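As an illustration of this flattening requirement, the following minimal NumPy sketch shows how a 28 × 28 matrix is reshaped into a vector of length 784 before it is placed in a request. The random pixel values are placeholders for real image data:

import numpy as np

# A 28 x 28 image, represented here by random pixel values as a placeholder.
image = np.random.rand(28, 28).astype('float32')

# Flatten the 28 x 28 matrix into a one-dimensional vector of length 784
# (28 x 28 = 784), which matches the model input shape [-1, 784] with batch_size = 1.
flat = image.reshape(784)
print(flat.shape)  # (784,)

# For a model trained with the input shape [-1, 28, 28], the request input is still
# flattened into a one-dimensional vector of length 1 x 28 x 28 = 784.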
Install Protocol Buffers and call the service. The following example shows how to use a Python 2 client to call the TensorFlow service.
EAS provides a Protocol Buffers package for Python clients. You can run the following command to install it:
$ pip install http://eas-data.oss-cn-shanghai.aliyuncs.com/sdk/pai_tf_predict_proto-1.0-py2.py3-none-any.whl
The following sample code is used to call the service to make a prediction:
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import json
from urlparse import urlparse
from com.aliyun.api.gateway.sdk import client
from com.aliyun.api.gateway.sdk.http import request
from com.aliyun.api.gateway.sdk.common import constant
from pai_tf_predict_proto import tf_predict_pb2

import cv2
import numpy as np

with open('2.jpg', 'rb') as infile:
    buf = infile.read()
    # Use NumPy to convert the bytes to a NumPy array.
    x = np.fromstring(buf, dtype='uint8')
    # Decode the array into a 28-by-28 matrix.
    img = cv2.imdecode(x, cv2.IMREAD_UNCHANGED)
    # The API for prediction requires one-dimensional vectors of length 784.
    # Therefore, reshape the matrix into such a vector.
    img = np.reshape(img, 784)

def predict(url, app_key, app_secret, request_data):
    cli = client.DefaultClient(app_key=app_key, app_secret=app_secret)
    body = request_data
    url_ele = urlparse(url)
    host = 'http://' + url_ele.hostname
    path = url_ele.path
    req_post = request.Request(host=host, protocol=constant.HTTP, url=path, method="POST", time_out=6000)
    req_post.set_body(body)
    req_post.set_content_type(constant.CONTENT_TYPE_STREAM)
    stat, header, content = cli.execute(req_post)
    return stat, dict(header) if header is not None else {}, content

def demo():
    # Enter the model information. Click the model name to obtain the information.
    app_key = 'YOUR_APP_KEY'
    app_secret = 'YOUR_APP_SECRET'
    url = 'YOUR_APP_URL'

    # Construct the request.
    request = tf_predict_pb2.PredictRequest()
    request.signature_name = 'predict_images'
    request.inputs['images'].dtype = tf_predict_pb2.DT_FLOAT  # The type of the images parameter.
    request.inputs['images'].array_shape.dim.extend([1, 784])  # The shape of the images parameter.
    request.inputs['images'].float_val.extend(img)  # The data of the images parameter.
    request.inputs['keep_prob'].dtype = tf_predict_pb2.DT_FLOAT  # The type of the keep_prob parameter.
    request.inputs['keep_prob'].float_val.extend([0.75])  # The default value of shape is 1.

    # Serialize the data in the Protocol Buffers format to a string and transfer the string.
    request_data = request.SerializeToString()

    stat, header, content = predict(url, app_key, app_secret, request_data)
    if stat != 200:
        print 'Http status code: ', stat
        print 'Error msg in header: ', header['x-ca-error-message'] if 'x-ca-error-message' in header else ''
        print 'Error msg in body: ', content
    else:
        response = tf_predict_pb2.PredictResponse()
        response.ParseFromString(content)
        print(response)

if __name__ == '__main__':
    demo()
The following section shows the output:
outputs {
  key: "scores"
  value {
    dtype: DT_FLOAT
    array_shape {
      dim: 1
      dim: 10
    }
    float_val: 0.0
    float_val: 0.0
    float_val: 1.0
    float_val: 0.0
    float_val: 0.0
    float_val: 0.0
    float_val: 0.0
    float_val: 0.0
    float_val: 0.0
    float_val: 0.0
  }
}
The scores of the 10 categories are listed in outputs. The output shows that for the input image 2.jpg, all scores are 0 except the score at index 2, which is 1. Therefore, the final prediction result is 2, which is correct.
Use a client in other languages to call the service
If you use a client in a language other than Python, you must manually generate the prediction request code based on a .proto file. Perform the following steps:
Prepare a Protocol Buffers definition file, such as tf.proto, that contains the following content:
syntax = "proto3"; option cc_enable_arenas = true; option java_package = "com.aliyun.openservices.eas.predict.proto"; option java_outer_classname = "PredictProtos"; enum ArrayDataType { // Not a legal value for DataType. Used to indicate a DataType field // has not been set. DT_INVALID = 0; // Data types that all computation devices are expected to be // capable to support. DT_FLOAT = 1; DT_DOUBLE = 2; DT_INT32 = 3; DT_UINT8 = 4; DT_INT16 = 5; DT_INT8 = 6; DT_STRING = 7; DT_COMPLEX64 = 8; // Single-precision complex. DT_INT64 = 9; DT_BOOL = 10; DT_QINT8 = 11; // Quantized int8. DT_QUINT8 = 12; // Quantized uint8. DT_QINT32 = 13; // Quantized int32. DT_BFLOAT16 = 14; // Float32 truncated to 16 bits. Only for cast ops. DT_QINT16 = 15; // Quantized int16. DT_QUINT16 = 16; // Quantized uint16. DT_UINT16 = 17; DT_COMPLEX128 = 18; // Double-precision complex. DT_HALF = 19; DT_RESOURCE = 20; DT_VARIANT = 21; // Arbitrary C++ data types. } // Dimensions of an array. message ArrayShape { repeated int64 dim = 1 [packed = true]; } // Protocol buffer representing an array. message ArrayProto { // Data Type. ArrayDataType dtype = 1; // Shape of the array. ArrayShape array_shape = 2; // DT_FLOAT. repeated float float_val = 3 [packed = true]; // DT_DOUBLE. repeated double double_val = 4 [packed = true]; // DT_INT32, DT_INT16, DT_INT8, DT_UINT8. repeated int32 int_val = 5 [packed = true]; // DT_STRING. repeated bytes string_val = 6; // DT_INT64. repeated int64 int64_val = 7 [packed = true]; // DT_BOOL. repeated bool bool_val = 8 [packed = true]; } // PredictRequest specifies which TensorFlow model to run, as well as // how inputs are mapped to tensors and how outputs are filtered before // returning to user. message PredictRequest { // A named signature to evaluate. If unspecified, the default signature // will be used. string signature_name = 1; // Input tensors. // Names of input tensor are alias names. The mapping from aliases to real // input tensor names is expected to be stored as named generic signature // under the key "inputs" in the model export. // Each alias listed in a generic signature named "inputs" should be provided // exactly once in order to run the prediction. map<string, ArrayProto> inputs = 2; // Output filter. // Names specified are alias names. The mapping from aliases to real output // tensor names is expected to be stored as named generic signature under // the key "outputs" in the model export. // Only tensors specified here will be run/fetched and returned, with the // exception that when none is specified, all tensors specified in the // named signature will be run/fetched and returned. repeated string output_filter = 3; } // Response for PredictRequest on successful run. message PredictResponse { // Output tensors. map<string, ArrayProto> outputs = 1; }
In the file, PredictRequest defines the input format of the TensorFlow service, and PredictResponse defines the output format of the service. For more information about Protocol Buffers, see Protocol Buffers.
Install protoc.
#!/bin/bash
PROTOC_ZIP=protoc-3.3.0-linux-x86_64.zip
curl -OL https://github.com/google/protobuf/releases/download/v3.3.0/$PROTOC_ZIP
unzip -o $PROTOC_ZIP -d ./ bin/protoc
rm -f $PROTOC_ZIP
Generate the request code file.
Java
$ bin/protoc --java_out=./ tf.proto
After the command completes, the request code file com/aliyun/openservices/eas/predict/proto/PredictProtos.java is generated in the current directory. Import the file into your project.
Python
$ bin/protoc --python_out=./ tf.proto
After the command completes, the request code file tf_pb2.py is generated in the current directory. Run the import command to import the file into your project.
C++
$ bin/protoc --cpp_out=./ tf.proto
After the command completes, the request code files tf.pb.cc and tf.pb.h are generated in the current directory. Add the #include "tf.pb.h" directive to your code and add tf.pb.cc to the compile list.
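For reference, the following minimal Python sketch shows how the protoc-generated module tf_pb2.py can be used to build a serialized request body and parse a response. The function names are hypothetical, and the signature and input names follow the MNIST example described earlier; the same pattern applies to the Java and C++ code that protoc generates:

import tf_pb2  # Generated by: bin/protoc --python_out=./ tf.proto

def build_request_body(pixels):
    # Build a PredictRequest for the MNIST example service described earlier.
    request = tf_pb2.PredictRequest()
    request.signature_name = 'predict_images'
    request.inputs['images'].dtype = tf_pb2.DT_FLOAT
    request.inputs['images'].array_shape.dim.extend([1, 784])
    request.inputs['images'].float_val.extend(pixels)  # A flattened vector of length 784.
    # Serialize the request to bytes and send the result as the HTTP POST body.
    return request.SerializeToString()

def parse_scores(content):
    # Parse the serialized PredictResponse returned by the service.
    response = tf_pb2.PredictResponse()
    response.ParseFromString(content)
    return list(response.outputs['scores'].float_val)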