Elastic Algorithm Service (EAS) provides official SDKs for calling services that are deployed based on your models. EAS SDKs reduce the time required to define the call logic and improve call stability. This topic describes EAS SDK for Go and provides demos that show how to use the SDK to call services with commonly used input and output types.
Background information
You do not need to install EAS SDK for Go in advance. The Go package manager automatically downloads the SDK from GitHub when your code is compiled. To customize specific parts of the call logic, you can download EAS SDK for Go and modify the code. To download the SDK, visit eas-golang-sdk.
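For example, if your project uses Go modules, the following commands show one way to add the SDK before you compile your code. The module name example.com/myapp is only a placeholder:
go mod init example.com/myapp
go get github.com/pai-eas/eas-golang-sdk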
Methods
The SDK provides the following classes. For the complete list of methods and parameters, see the SDK source code in eas-golang-sdk.
Class | Description |
PredictClient | Client that is used to call a prediction service. Methods that set parameters, such as SetToken, SetTimeout, SetEndpointType, and SetHttpTransport, do not take effect until you call the Init method. After initialization, call StringPredict, TFPredict, or TorchPredict to send requests. |
TFRequest | Request class for services deployed by using the TensorFlow processor. Provides methods such as SetSignatureName, AddFeedFloat32, and AddFetch to build a request. |
TFResponse | Response class for TensorFlow services. Provides methods such as GetTensorShape and GetFloatVal to read output tensors. |
TorchRequest | Request class for services deployed by using the PyTorch processor. Provides methods such as AddFeedFloat32 and AddFetch to build a request. |
TorchResponse | Response class for PyTorch services. Provides methods such as GetTensorShape and GetFloatVal to read output tensors. |
QueueClient | Client for the queuing service. Provides methods such as Attributes, Put, Truncate, Commit, and Watch to send, query, and subscribe to data. |
types.Watcher | Watcher that is returned by the Watch method of QueueClient. FrameChan returns the channel that receives pushed data. Close stops the watcher and closes backend connections. Note: Only one watcher can be started for a single client. You must close the watcher before you can start another watcher. |
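The following minimal sketch shows the typical call flow with these classes: create a PredictClient, set parameters, call Init, and then send requests. The endpoint, service name, and token are placeholders; the methods used are the ones demonstrated in the demos below.
package main

import (
    "fmt"

    "github.com/pai-eas/eas-golang-sdk/eas"
)

func main() {
    // Create the client, then set parameters before calling Init;
    // the parameters take effect only after Init is called.
    client := eas.NewPredictClient("<endpoint>", "<service_name>")
    client.SetToken("<token>")
    client.Init()

    // StringPredict sends a raw string request and returns the raw response string.
    resp, err := client.StringPredict("[{\"fea1\": 1, \"fea2\": 2}]")
    if err != nil {
        fmt.Printf("failed to predict: %v\n", err)
        return
    }
    fmt.Println(resp)
}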
Demos
Input and output as strings
If you use a custom processor to deploy a model as a service, strings are often used to call the service. This is the case, for example, for a service deployed based on a Predictive Model Markup Language (PMML) model. For more information, see the following demo:
package main

import (
    "fmt"

    "github.com/pai-eas/eas-golang-sdk/eas"
)

func main() {
    client := eas.NewPredictClient("182848887922****.cn-shanghai.pai-eas.aliyuncs.com", "scorecard_pmml_example")
    client.SetToken("YWFlMDYyZDNmNTc3M2I3MzMwYmY0MmYwM2Y2MTYxMTY4NzBkNzdj****")
    client.Init()

    req := "[{\"fea1\": 1, \"fea2\": 2}]"
    for i := 0; i < 100; i++ {
        resp, err := client.StringPredict(req)
        if err != nil {
            fmt.Printf("failed to predict: %v\n", err.Error())
        } else {
            fmt.Printf("%v\n", resp)
        }
    }
}
Input and output as tensors
If you use TensorFlow to deploy models as services, you must use the TFRequest and TFResponse classes to call the services. For more information, see the following demo:
package main

import (
    "fmt"

    "github.com/pai-eas/eas-golang-sdk/eas"
)

func main() {
    client := eas.NewPredictClient("182848887922****.cn-shanghai.pai-eas.aliyuncs.com", "mnist_saved_model_example")
    client.SetToken("YTg2ZjE0ZjM4ZmE3OTc0NzYxZDMyNmYzMTJjZTQ1YmU0N2FjMTAy****")
    client.Init()

    tfreq := eas.TFRequest{}
    tfreq.SetSignatureName("predict_images")
    tfreq.AddFeedFloat32("images", []int64{1, 784}, make([]float32, 784))

    for i := 0; i < 100; i++ {
        resp, err := client.TFPredict(tfreq)
        if err != nil {
            fmt.Printf("failed to predict: %v", err)
        } else {
            fmt.Printf("%v\n", resp)
        }
    }
}
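The demo above prints the entire response object. To read a single output tensor from the TFResponse, you can use GetTensorShape and GetFloatVal with the name of an output tensor, similar to the PyTorch demo below. The output name "scores" in the following sketch is only an assumption and depends on the signature of your model:
// Read one output tensor from the response.
// "scores" is a hypothetical output name; replace it with an output
// defined in your model's signature.
resp, err := client.TFPredict(tfreq)
if err != nil {
    fmt.Printf("failed to predict: %v\n", err)
} else {
    fmt.Println(resp.GetTensorShape("scores"), resp.GetFloatVal("scores"))
}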
Call a PyTorch model
If you use PyTorch to deploy models as services, you must use the TorchRequest and TorchResponse classes to call the services. For more information, see the following demo:
package main

import (
    "fmt"

    "github.com/pai-eas/eas-golang-sdk/eas"
)

func main() {
    client := eas.NewPredictClient("182848887922****.cn-shanghai.pai-eas.aliyuncs.com", "pytorch_resnet_example")
    client.SetTimeout(500)
    client.SetToken("ZjdjZDg1NWVlMWI2NTU5YzJiMmY5ZmE5OTBmYzZkMjI0YjlmYWVl****")
    client.Init()

    req := eas.TorchRequest{}
    req.AddFeedFloat32(0, []int64{1, 3, 224, 224}, make([]float32, 150528))
    req.AddFetch(0)

    for i := 0; i < 10; i++ {
        resp, err := client.TorchPredict(req)
        if err != nil {
            fmt.Printf("failed to predict: %v", err)
        } else {
            fmt.Println(resp.GetTensorShape(0), resp.GetFloatVal(0))
        }
    }
}
Use a VPC direct connection channel to call a service
You can use a VPC direct connection channel to access only the services that are deployed in the dedicated resource group for EAS. In addition, to use the channel, the dedicated resource group for EAS and the specified vSwitch must be connected to the VPC. For more information, see Work with dedicated resource groups and Configure network connectivity. Compared with the regular mode, this mode requires one additional line of code: client.SetEndpointType(eas.EndpointTypeDirect). You can use this mode in high-concurrency and heavy-traffic scenarios. For more information, see the following demo:
package main

import (
    "fmt"

    "github.com/pai-eas/eas-golang-sdk/eas"
)

func main() {
    client := eas.NewPredictClient("pai-eas-vpc.cn-shanghai.aliyuncs.com", "scorecard_pmml_example")
    client.SetToken("YWFlMDYyZDNmNTc3M2I3MzMwYmY0MmYwM2Y2MTYxMTY4NzBkNzdj****")
    client.SetEndpointType(eas.EndpointTypeDirect)
    client.Init()

    req := "[{\"fea1\": 1, \"fea2\": 2}]"
    for i := 0; i < 100; i++ {
        resp, err := client.StringPredict(req)
        if err != nil {
            fmt.Printf("failed to predict: %v\n", err.Error())
        } else {
            fmt.Printf("%v\n", resp)
        }
    }
}
Set the connection parameters of the client
You can set the connection parameters of the client by using the http.Transport attribute. For more information, see the following demo:
package main

import (
    "net/http"
    "time"

    "github.com/pai-eas/eas-golang-sdk/eas"
)

func main() {
    client := eas.NewPredictClient("pai-eas-vpc.cn-shanghai.aliyuncs.com", "network_test")
    client.SetToken("MDAwZDQ3NjE3OThhOTI4ODFmMjJiYzE0MDk1NWRkOGI1MmVhMGI0****")
    client.SetEndpointType(eas.EndpointTypeDirect)
    client.SetHttpTransport(&http.Transport{
        MaxConnsPerHost:       300,
        TLSHandshakeTimeout:   100 * time.Millisecond,
        ResponseHeaderTimeout: 200 * time.Millisecond,
        ExpectContinueTimeout: 200 * time.Millisecond,
    })
}
Use the queuing service to send and subscribe to data
You can send and query data in a queue, query the state of a queue, and subscribe to data that is pushed by a queue. In the following demo, one goroutine pushes data to a queue, and the main goroutine uses a watcher to subscribe to the data that is pushed from the queue:
const (
    QueueEndpoint = "182848887922****.cn-shanghai.pai-eas.aliyuncs.com"
    QueueName     = "test_group.qservice"
    QueueToken    = "YmE3NDkyMzdiMzNmMGM3ZmE4ZmNjZDk0M2NiMDA3OTZmNzc1MTUx****"
)

queue, err := NewQueueClient(QueueEndpoint, QueueName, QueueToken)

// Truncate all messages in the queue.
attrs, err := queue.Attributes()
if index, ok := attrs["stream.lastEntry"]; ok {
    idx, _ := strconv.ParseUint(index, 10, 64)
    queue.Truncate(context.Background(), idx+1)
}

ctx, cancel := context.WithCancel(context.Background())

// Create a goroutine to send messages to the queue.
go func() {
    ticker := time.NewTicker(time.Microsecond)
    defer ticker.Stop()
    i := 0
    for {
        select {
        case <-ticker.C:
            _, _, err := queue.Put(context.Background(), []byte(strconv.Itoa(i)), types.Tags{})
            if err != nil {
                fmt.Printf("Error occurred, retry to handle it: %v\n", err)
            }
            i += 1
        case <-ctx.Done():
            // Exit the goroutine when the context is canceled.
            return
        }
    }
}()

// Create a watcher to watch the messages from the queue.
watcher, err := queue.Watch(context.Background(), 0, 5, false, false)
if err != nil {
    fmt.Printf("Failed to create a watcher to watch the queue: %v\n", err)
    return
}

// Read messages from the queue and commit manually.
for i := 0; i < 100; i++ {
    df := <-watcher.FrameChan()
    err := queue.Commit(context.Background(), df.Index.Uint64())
    if err != nil {
        fmt.Printf("Failed to commit index: %v(%v)\n", df.Index, err)
    }
}

// Everything is done, close the watcher.
watcher.Close()
cancel()