When your microservices communicate over gRPC, you need a way to expose them to external clients while retaining fine-grained traffic control. An Alibaba Cloud Service Mesh (ASM) ingress gateway accepts inbound gRPC connections at the mesh edge and routes them to backend services based on Istio routing rules. This lets you enforce precise access control on gRPC services, improve service governance, and secure service-to-service communication. You can also use weighted traffic splitting between service versions -- useful for canary deployments, A/B testing, and progressive rollouts.
This tutorial walks through deploying two versions of a gRPC service, routing all traffic to version 1, and then shifting a percentage of traffic to version 2.
How it works
Three Istio resources work together to route gRPC traffic from outside the mesh to backend pods:
| Resource | Role |
|---|---|
| Gateway | Listens on port 8080 with the GRPC protocol and accepts inbound connections at the mesh edge. |
| DestinationRule | Groups backend pods into subsets (v1, v2) based on version labels and defines the load-balancing policy. |
| VirtualService | Binds to the Gateway and forwards matched traffic to a specific subset with a configurable weight. |
External gRPC requests reach the ingress gateway on port 8080. The VirtualService then forwards them to the target subset on backend service port 50051.
Prerequisites
Before you begin, make sure you have:
Deploy sample gRPC services
Deploy version 1 and version 2 of a Python-based gRPC hello-world service.
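Both versions implement the standard gRPC hello-world interface (`helloworld.Greeter/SayHello`, which is invoked later in this tutorial). For reference, the corresponding proto definition looks roughly like this:

```protobuf
syntax = "proto3";

package helloworld;

// The greeting service invoked as helloworld.Greeter/SayHello.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```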
Create a file named `app.yaml`. This file defines:

- Two Deployments (`grpc-helloworld-py-v1` and `grpc-helloworld-py-v2`), each with one replica. Both share the `app: grpc-helloworld-py` label but differ by the `version` label.
- A Service (`grpc-helloworld-py`) that selects all pods with `app: grpc-helloworld-py` and exposes port 50051.
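A minimal `app.yaml` matching this description might look like the following sketch. The container image reference is an assumption; substitute your own image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-helloworld-py-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-helloworld-py
      version: v1
  template:
    metadata:
      labels:
        app: grpc-helloworld-py
        version: v1
    spec:
      containers:
        - name: grpc-helloworld-py
          image: <your-registry>/grpc-helloworld-py:v1  # assumption: replace with your image
          ports:
            - containerPort: 50051
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-helloworld-py-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-helloworld-py
      version: v2
  template:
    metadata:
      labels:
        app: grpc-helloworld-py
        version: v2
    spec:
      containers:
        - name: grpc-helloworld-py
          image: <your-registry>/grpc-helloworld-py:v2  # assumption: replace with your image
          ports:
            - containerPort: 50051
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-helloworld-py
spec:
  selector:
    app: grpc-helloworld-py
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051
```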
Apply the file:
```shell
kubectl apply -f app.yaml
```
Configure routing rules
Create an Istio Gateway, a DestinationRule, and a VirtualService to route all inbound gRPC traffic to version 1.
Create a file named `rules.yaml` with the following content:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: grpc-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 8080
        name: grpc
        protocol: GRPC
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dr-istio-grpc-server
spec:
  host: grpc-helloworld-py
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
  subsets:
    - name: v1
      labels:
        version: "v1"
    - name: v2
      labels:
        version: "v2"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grpc-vs
spec:
  hosts:
    - "*"
  gateways:
    - grpc-gateway
  http:
    - match:
        - port: 8080
      route:
        - destination:
            host: grpc-helloworld-py
            port:
              number: 50051
            subset: v1
          weight: 100
        - destination:
            host: grpc-helloworld-py
            port:
              number: 50051
            subset: v2
          weight: 0
```

Configuration details:

| Resource | Configuration |
|---|---|
| Gateway | Port 8080, protocol set to `GRPC`. |
| DestinationRule | Subsets `v1` and `v2`, with `ROUND_ROBIN` load balancing. |
| VirtualService | Weights: 100% to `v1`, 0% to `v2`. |

Apply the file:

```shell
kubectl apply -f rules.yaml
```
Set up the ingress gateway port
The ingress gateway must listen on port 8080 to match the Gateway resource. Either create a new ingress gateway or add port 8080 to an existing one.
Option A: Create an ingress gateway
Create an ingress gateway and set Service Port to 8080.
Option B: Add port 8080 to an existing ingress gateway
1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.
2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose ASM Gateways > Ingress Gateway.
3. On the Ingress Gateway page, click the name of the target gateway.
4. In the Basic options section of the Gateway Details page, click the edit icon next to Port.
5. In the Port Mapping dialog box, click Add Port, set Protocol to TCP, set Service Port to `8080`, and then click Submit.
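After the change, the port should appear in the ingress gateway Service's port list, roughly as in the fragment below. The Service name and port name shown here are typical Istio defaults, not guaranteed by ASM:

```yaml
# Fragment of the ingress gateway Service spec (name istio-ingressgateway is an assumption).
spec:
  ports:
    - name: grpc-8080   # assumption: ASM may generate a different port name
      protocol: TCP
      port: 8080
      targetPort: 8080
```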
Verify gRPC connectivity
Send a gRPC request through the ingress gateway to confirm that all traffic reaches version 1.
Run the following command:
```shell
grpcurl -d '{"name": "Jack"}' -plaintext {IP address of the ingress gateway}:8080 helloworld.Greeter/SayHello
```

Expected output -- every response identifies a v1 pod:

```json
{
  "message": "Hello, Jack! I'm from grpc-helloworld-py-v1-79b5dc9654-cg4dq!"
}
```

Run the command several times to confirm that no responses come from v2 pods.
Shift traffic to version 2
After you verify that version 1 works, update the VirtualService to split traffic between versions -- for example, 60% to v1 and 40% to v2.
Edit the VirtualService:

```shell
kubectl edit virtualservice grpc-vs
```

Update the `route` section with the following weights and save the file:

```yaml
route:
  - destination:
      host: grpc-helloworld-py
      port:
        number: 50051
      subset: v1
    weight: 60
  - destination:
      host: grpc-helloworld-py
      port:
        number: 50051
      subset: v2
    weight: 40
```

Send gRPC requests again to verify the traffic split:

```shell
grpcurl -d '{"name": "Jack"}' -plaintext {IP address of the ingress gateway}:8080 helloworld.Greeter/SayHello
```

Run the command multiple times. Responses now come from both `v1` and `v2` pods:

```
"message": "Hello, Jack! I'm from grpc-helloworld-py-v1-79b5dc9654-cg4dq!"
"message": "Hello, Jack! I'm from grpc-helloworld-py-v2-7f56b49b7f-9vvr7!"
```

Note: Individual requests may not match the exact 60:40 ratio. Over a sufficient number of requests, the distribution converges to the configured weights.
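To check the split over many requests, you can tally which version served each response. A small sketch in Python (pure string parsing, applied to captured grpcurl output; the sample pod hashes are illustrative):

```python
import re
from collections import Counter

def tally_versions(messages):
    """Count responses per service version based on the pod name in each message."""
    counts = Counter()
    for msg in messages:
        # Pod names follow the pattern grpc-helloworld-py-<version>-<hash>-<hash>.
        m = re.search(r"grpc-helloworld-py-(v\d+)-", msg)
        if m:
            counts[m.group(1)] += 1
    return counts

# Example with captured responses:
responses = [
    "Hello, Jack! I'm from grpc-helloworld-py-v1-79b5dc9654-cg4dq!",
    "Hello, Jack! I'm from grpc-helloworld-py-v2-7f56b49b7f-9vvr7!",
    "Hello, Jack! I'm from grpc-helloworld-py-v1-79b5dc9654-cg4dq!",
]
print(tally_versions(responses))  # Counter({'v1': 2, 'v2': 1})
```

Piping a few hundred responses through such a tally gives a reasonable read on whether the observed ratio is close to the configured 60:40 weights.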
Clean up
Remove the sample resources created in this tutorial:
```shell
kubectl delete -f rules.yaml
kubectl delete -f app.yaml
```