Sidecar proxies can be injected for application containers in a cluster to enhance network security, reliability, and observability for service-to-service calls. You can flexibly configure parameters such as those related to the resources, lifecycle, traffic interception mode, and observability for sidecar proxies based on your business requirements. This topic describes how to configure sidecar proxies and introduces the related parameters.
Configuration levels of sidecar proxies
Different configuration levels of sidecar proxies represent different effective scopes and priorities of configurations. The configuration levels of sidecar proxies take effect in the following ascending order of priority: global, namespace, workload, and pod.
You can configure a sidecar proxy at different levels based on your business requirements. Service Mesh (ASM) determines the configuration parameters that take effect based on the priority of each sidecar proxy configuration level when sidecar proxies are injected. For example, you can configure a sidecar proxy in the default namespace at the namespace level and at the global level. The configurations at the namespace level take precedence over those at the global level. Therefore, when new workloads are deployed in the default namespace, the parameters configured at the namespace level take effect for the injected sidecar proxies.
Configuration level of sidecar proxies | Description |
Global | Sidecar proxy configurations take effect globally. The configurations apply to all pods when sidecar proxies are injected. |
Namespace | Sidecar proxy configurations take effect in a specified namespace. The configurations are applied only when sidecar proxies are injected into the pods in this namespace. When you configure sidecar proxies at this level, you must select a namespace. |
Workload | Sidecar proxy configurations take effect on the specified workloads. The configurations apply only to the workloads that are selected by the specified label selector when sidecar proxies are injected. When you configure sidecar proxies at this level, you must specify the workload label selector to select the workloads to which the configurations apply. |
Pod | Sidecar proxies at this level cannot be configured in the ASM console. You can configure them by adding annotations to pods. For more information, see Configure a sidecar proxy by adding resource annotations. |
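As a sketch of the pod-level approach, the open source Istio injection annotations can be added to the pod template of a workload. The following example is illustrative only: the Deployment name `myapp` and namespace `demo` are placeholders, and you should verify that your ASM version supports the specific `sidecar.istio.io/*` annotations before relying on them.

```shell
# Hypothetical example: set sidecar resource requests/limits at the pod level
# by annotating the pod template of a Deployment. Patching the template also
# triggers a rollout, so the annotations take effect when pods are recreated.
kubectl patch deployment myapp -n demo --type merge -p '{
  "spec": {"template": {"metadata": {"annotations": {
    "sidecar.istio.io/proxyCPU": "100m",
    "sidecar.istio.io/proxyCPULimit": "2",
    "sidecar.istio.io/proxyMemory": "128Mi",
    "sidecar.istio.io/proxyMemoryLimit": "1025Mi"
  }}}}}'
```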
Procedure
The following section describes how to configure sidecar proxies at different levels. To make new sidecar proxy configurations take effect on a workload, redeploy the corresponding pods. For more information, see Redeploy workloads.
Global level
Log on to the ASM console. In the left-side navigation pane, choose .
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose .
On the global tab of the Sidecar Proxy Setting page, configure the parameters based on your business requirements and then click Update Settings.
For more information about the configuration items of sidecar proxies, see Configuration items of sidecar proxies.
Check whether the sidecar proxy configurations take effect.
In the left-side navigation pane, choose .
In the Basic Information section, view the Status of the ASM instance.
If the Status is Running, the global sidecar proxy configurations take effect.
Namespace level
Log on to the ASM console. In the left-side navigation pane, choose .
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose .
On the Sidecar Proxy Setting page, click the Namespace tab, select a namespace from the Namespace drop-down list, set the related configuration items, and then click Update Settings.
Because the namespace level is not the lowest-priority configuration level, sidecar proxy configuration items at the namespace level have no default values. For any configuration item that you do not select and configure, the global-level configuration takes effect. After you click Update Settings, the sidecar proxy configurations at the namespace level take effect immediately. For more information about the configuration items of sidecar proxies, see Configuration items of sidecar proxies.
Workload level
In the same namespace, you can create multiple sidecar proxy configurations for different workloads.
Log on to the ASM console. In the left-side navigation pane, choose .
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose .
On the Sidecar Proxy Setting page, click the workload tab and then click Create.
On the workload tab, select a namespace from the Namespace drop-down list, set the Name of the sidecar proxy configuration at the workload level, create a label selector for selecting workloads in the Match Label field, set the related configuration items, and then click Create.
Because the workload level is not the lowest-priority configuration level, sidecar proxy configuration items at the workload level have no default values. For any configuration item that you do not select and configure, the global-level configuration takes effect. After you click Create, a sidecar proxy configuration at the workload level is created. For more information about the configuration items of sidecar proxies, see Configuration items of sidecar proxies.
After a sidecar proxy configuration is created, you can update or delete the configurations of the sidecar proxy.
To update a sidecar proxy configuration at the workload level, find the desired sidecar proxy configuration on the workload tab and click Update in the Actions column. Then, modify Match Label and the sidecar proxy configuration as required, and click Update.
To delete a sidecar proxy configuration at the workload level, find the desired sidecar proxy configuration on the workload tab and click Delete in the Actions column. In the Submit message, click OK.
(Optional) Redeploy workloads
Sidecar proxy configurations are applied when sidecar proxies are injected, and the configurations of pods that are already deployed cannot be changed. Therefore, new sidecar proxy configurations do not take effect on existing pods. To make the configurations take effect, redeploy the pods after you configure sidecar proxies. After the pods are redeployed, the new configurations take effect for the sidecar proxies that are injected into the pods.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose .
On the Deployments page, perform the following operations to redeploy workloads.
Scenario | Procedure |
Single workload | Find the workload that you want to redeploy and choose in the Actions column. In the Redeploy message, click Confirm. |
Multiple workloads | Select multiple workloads in the Name column and click Batch Redeploy in the lower part of the page. In the Confirm message, click OK. |
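The console steps above can also be performed with kubectl. A minimal sketch, assuming a Deployment named `myapp` in namespace `demo`:

```shell
# Redeploy a single workload: rollout restart recreates the pods, so
# sidecars are re-injected with the latest configuration.
kubectl rollout restart deployment myapp -n demo

# Redeploy every Deployment in the namespace (batch equivalent).
kubectl rollout restart deployment -n demo

# Wait until the new pods are ready.
kubectl rollout status deployment myapp -n demo
```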
Initial versions that support the configuration items of sidecar proxies
The features supported by different ASM instance versions may vary. In most cases, an ASM instance of a later version supports more features and parameters than that of an earlier version. If you cannot find a sidecar proxy configuration item, refer to the information provided in the following table and determine whether you need to update the version of your ASM instance. For more information about how to update an ASM instance, see Update an ASM instance.
You can click the link in the configuration item column of the following table to view the description and configuration example of a configuration item.
Note If the version of your ASM instance is V1.22 or later and the version of your Kubernetes cluster on the data plane is V1.30 or later, sidecar proxies are deployed as native sidecar containers. In this case, the Kubernetes cluster manages the lifecycle of sidecar proxy containers. The sidecar proxy configurations related to lifecycle management do not take effect.
Configuration items of sidecar proxies
You can configure the resource usage, traffic interception mode, Domain Name System (DNS) proxy, and lifecycle of sidecar proxies. The following section provides the descriptions and configuration example of sidecar proxy configuration items.
Configure Resources for Injected Sidecar Proxy
Show the descriptions and configuration example of Configure Resources for Injected Sidecar Proxy
Descriptions of the configuration items
These configuration items specify the resource requests (the minimum CPU and memory resources that the sidecar proxy container needs at runtime) and the resource limits (the maximum CPU and memory resources that the sidecar proxy container can use).
Configuration item | Description |
Resource Limits | The maximum CPU and memory resources that the sidecar proxy container can use. The unit of CPU resources is cores, and the unit of memory resources is MiB. |
Required Resources | The minimum CPU and memory resources that the sidecar proxy container needs at runtime. The unit of CPU resources is cores, and the unit of memory resources is MiB. |
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Resource Settings.
(Optional) In the Resource Settings section, select Configure Resources for Injected Sidecar Proxy.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
In the Resource Limits section, set CPU to 2 cores and Memory to 1025 MiB. In the Required Resources section, set CPU to 0.1 cores and Memory to 128 MiB. Then, click Update Settings in the lower part of the page.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configured resources of the sidecar proxy:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - args:
    - proxy
    ...
    name: istio-proxy
    ...
    resources:
      limits:
        cpu: '2'
        memory: 1025Mi
      requests:
        cpu: 100m
        memory: 128Mi
...
The istio-proxy container is the sidecar proxy container. The resources field of the istio-proxy container in the pod is set to the expected resource values. This indicates that the configurations of Configure Resources for Injected Sidecar Proxy take effect.
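Instead of scanning the full pod YAML, you can extract only the resources of the sidecar proxy container with a JSONPath filter. The namespace and pod name are placeholders, as in the commands above:

```shell
# Print only the resources field of the istio-proxy container.
kubectl get pod <Pod name> -n <Namespace> \
  -o jsonpath='{.spec.containers[?(@.name=="istio-proxy")].resources}'
```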
Configure Resources for istio-init Container
Show the descriptions and configuration example of Configure Resources for istio-init Container
Descriptions of the configuration items
These configuration items specify the minimum CPU and memory resources that the istio-init container needs at runtime and the maximum CPU and memory resources that it can use. The istio-init container is an init container that runs when a pod into which a sidecar proxy is injected starts. It sets up the traffic redirection rules for the sidecar proxy container.
Configuration item | Description |
Resource Limits | The maximum CPU and memory resources that the istio-init container can use. The unit of CPU resources is cores, and the unit of memory resources is MiB. |
Required Resources | The minimum CPU and memory resources that the istio-init container needs at runtime. The unit of CPU resources is cores, and the unit of memory resources is MiB. |
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Resource Settings.
(Optional) In the Resource Settings section, select Configure Resources for istio-init Container.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
In the Resource Limits section, set CPU to 1 core and Memory to 512 MiB. In the Required Resources section, set CPU to 0.1 cores and Memory to 128 MiB. Then, click Update Settings in the lower part of the page.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the resource configurations of the istio-init container:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    ...
    name: istio-init
    resources:
      limits:
        cpu: '1'
        memory: 512Mi
      requests:
        cpu: 100m
        memory: 128Mi
...
The resources field of the istio-init container in the pod is set to the expected resource values. This indicates that the configurations of Configure Resources for istio-init Container take effect.
Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy
Show the descriptions and configuration example of Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy
Descriptions of the configuration items
These configuration items are used to configure the ACK resources that can be dynamically overcommitted for the injected sidecar proxy and the istio-init container. For more information about resources that can be dynamically overcommitted, see Dynamic resource overcommitment.
The configuration items are configured in the same way as those of Configure Resources for Injected Sidecar Proxy and Configure Resources for istio-init Container. After the preceding configurations are complete, resources that can be dynamically overcommitted, instead of regular CPU and memory resources, are allocated to the sidecar proxy and istio-init containers in a pod if the pod has the koordinator.sh/qosClass label. This label indicates that dynamic overcommitment of ACK resources is enabled.
Note When you configure ACK resources that can be dynamically overcommitted for a sidecar proxy, the unit of CPU resources is millicores.
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Resource Settings. Select Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy, configure the related parameters, and then click Update Settings in the lower part of the page.
Configuration item | Child configuration item | Description |
Configure Resources for Injected Sidecar Proxy (ACK Dynamically Overcommitted Resources) | Resource Limits | For this example, set CPU to 2000 millicores and Memory to 2048 MiB. |
Required Resources | For this example, set CPU to 200 millicores and Memory to 256 MiB. |
Configure Resources for istio-init Container (ACK Dynamically Overcommitted Resources) | Resource Limits | For this example, set CPU to 1000 millicores and Memory to 1024 MiB. |
Required Resources | For this example, set CPU to 100 millicores and Memory to 128 MiB. |
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the resource configurations of the sidecar proxy and istio-init containers:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
metadata:
  ...
  labels:
    koordinator.sh/qosClass: BE
spec:
  containers:
  - args:
    ...
    name: istio-proxy
    ...
    resources:
      limits:
        kubernetes.io/batch-cpu: 2k
        kubernetes.io/batch-memory: 2Gi
      requests:
        kubernetes.io/batch-cpu: '200'
        kubernetes.io/batch-memory: 256Mi
  ...
  initContainers:
  - args:
    ...
    name: istio-init
    resources:
      limits:
        kubernetes.io/batch-cpu: 1k
        kubernetes.io/batch-memory: 1Gi
      requests:
        kubernetes.io/batch-cpu: '100'
        kubernetes.io/batch-memory: 128Mi
...
Both the istio-proxy container (the sidecar proxy container) and the istio-init container in the pod contain the resources field with the configured resource values. This indicates that the configurations of Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy take effect.
Number of Sidecar Proxy Threads
Show the descriptions and configuration example of Number of Sidecar Proxy Threads
Descriptions of the configuration items
This configuration item is used to configure the number of worker threads that run in the sidecar proxy container. You must configure a non-negative integer to indicate the number of threads in the sidecar proxy container. If this parameter is set to 0, the number of worker threads is automatically selected based on the required CPU resources or the CPU resource limit configured for the sidecar proxy (the resource limit takes precedence over the required resources).
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Resource Settings.
(Optional) In the Resource Settings section, select Number of Sidecar Proxy Threads.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
Set Number of Sidecar Proxy Threads to 3 and click Update Settings in the lower part of the page.
This configuration indicates that the sidecar proxy container will start three worker threads when it is running.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Number of Sidecar Proxy Threads:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  containers:
  - args:
    - proxy
    - sidecar
    - '--domain'
    - $(POD_NAMESPACE).svc.cluster.local
    - '--proxyLogLevel=warning'
    - '--proxyComponentLogLevel=misc:error'
    - '--log_output_level=default:info'
    - '--concurrency'
    - '3'
    ...
    name: istio-proxy
...
The concurrency parameter of the istio-proxy container is set to 3. This indicates that the configuration of Number of Sidecar Proxy Threads takes effect.
Addresses to Which External Access Is Redirected to Sidecar Proxy
Show the descriptions and configuration example of Addresses to Which External Access Is Redirected to Sidecar Proxy
Descriptions of the configuration items
You need to configure a list of IP address ranges that are separated by commas (,). Each IP address range uses the CIDR format. When a workload into which a sidecar proxy is injected accesses other services, only the requests whose destination IP addresses are in one of the configured address ranges are redirected to the sidecar proxy container. Other requests are directly sent to the destinations without being redirected to the sidecar proxy container. The default value of this configuration item is *, which indicates that all outbound traffic of the workload is redirected to the sidecar proxy container.
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) In the Enable/Disable Sidecar Proxy by Ports or IP Addresses section, select Addresses to Which External Access Is Redirected to Sidecar Proxy.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
Set Addresses to Which External Access Is Redirected to Sidecar Proxy to 192.168.0.0/16,10.1.0.0/24 and click Update Settings in the lower part of the page.
This configuration indicates that the sidecar proxy container will intercept requests whose destination IP addresses are within the 192.168.0.0/16 and 10.1.0.0/24 CIDR blocks.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Addresses to Which External Access Is Redirected to Sidecar Proxy:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '192.168.0.0/16,10.1.0.0/24'
    - '-x'
    - 192.168.0.1/32
    - '-b'
    - '*'
    - '-d'
    - '15090,15021,15081,9191,15020'
    - '--log_output_level=default:info'
    ...
    name: istio-init
...
The runtime parameter -i of the istio-init container is set to 192.168.0.0/16,10.1.0.0/24. This indicates that the configuration of Addresses to Which External Access Is Redirected to Sidecar Proxy takes effect.
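The same verification works for any of the interception settings: extract only the istio-iptables arguments of the istio-init container with a JSONPath filter, instead of reading the full pod YAML. The namespace and pod name are placeholders, as in the commands above:

```shell
# Show the istio-iptables arguments of the istio-init container.
# -i/-x are the included/excluded CIDR lists; -b/-d/-q are the port lists.
kubectl get pod <Pod name> -n <Namespace> \
  -o jsonpath='{.spec.initContainers[?(@.name=="istio-init")].args}'
```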
Addresses to Which External Access Is Not Redirected to Sidecar Proxy
Show the descriptions and configuration example of Addresses to Which External Access Is Not Redirected to Sidecar Proxy
Descriptions of the configuration items
You need to configure a list of IP address ranges that are separated by commas (,). Each IP address range uses the CIDR format. When a workload into which a sidecar proxy is injected accesses other services, the sidecar proxy container intercepts outbound traffic. If the destination IP address of a request is within the CIDR block configured for this configuration item, the request is not intercepted by the sidecar proxy container.
Important If an IP address is specified by both the configuration items Addresses to Which External Access Is Not Redirected to Sidecar Proxy and Addresses to Which External Access Is Redirected to Sidecar Proxy, the sidecar proxy container does not intercept requests whose destination address is that IP address. For more information, see Addresses to Which External Access Is Redirected to Sidecar Proxy.
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) In the Enable/Disable Sidecar Proxy by Ports or IP Addresses section, select Addresses to Which External Access Is Not Redirected to Sidecar Proxy.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
Set Addresses to Which External Access Is Not Redirected to Sidecar Proxy to 10.1.0.0/24 and click Update Settings in the lower part of the page.
This configuration indicates that the sidecar proxy container will not intercept the requests whose destination IP addresses are within the 10.1.0.0/24 CIDR block.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Addresses to Which External Access Is Not Redirected to Sidecar Proxy:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '*'
    - '-x'
    - '192.168.0.1/32,10.1.0.0/24'
    - '-b'
    - '*'
    - '-d'
    - '15090,15021,15081,9191,15020'
    - '--log_output_level=default:info'
    ...
    name: istio-init
...
The runtime parameter -x of the istio-init container is set to 192.168.0.1/32,10.1.0.0/24. 192.168.0.1/32 is the CIDR block of the host, which is excluded by default. 10.1.0.0/24 is the IP address range specified in the sidecar proxy configuration. This indicates that the configuration of Addresses to Which External Access Is Not Redirected to Sidecar Proxy takes effect.
Ports on Which Inbound Traffic Redirected to Sidecar Proxy
Show the descriptions and configuration example of Ports on Which Inbound Traffic Redirected to Sidecar Proxy
Descriptions of the configuration items
You need to configure a list of port numbers that are separated by commas (,). The sidecar proxy container intercepts inbound traffic whose destination ports are in the list. The default value is *, which indicates that the sidecar proxy container intercepts all inbound traffic of the workload.
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) In the Enable/Disable Sidecar Proxy by Ports or IP Addresses section, select Ports on Which Inbound Traffic Redirected to Sidecar Proxy.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
Set Ports on Which Inbound Traffic Redirected to Sidecar Proxy to 80,443 and click Update Settings.
This configuration indicates that the sidecar proxy container will only intercept requests destined for ports 80 and 443 of the corresponding workload.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configured ports on which inbound traffic is redirected to the sidecar proxy container:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '*'
    - '-x'
    - 192.168.0.1/32
    - '-b'
    - '80,443'
    - '-d'
    - '15090,15021,15081,9191,15020'
    - '--log_output_level=default:info'
    ...
    name: istio-init
...
The runtime parameter -b of the istio-init container is set to 80,443, which matches the inbound ports set in the sidecar proxy configuration. This indicates that the configuration of Ports on Which Inbound Traffic Redirected to Sidecar Proxy takes effect.
Ports on Which Outbound Traffic Redirected to Sidecar Proxy
Show the descriptions and configuration example of Ports on Which Outbound Traffic Redirected to Sidecar Proxy
Descriptions of the configuration items
You need to configure a list of destination service ports for outbound traffic. They are separated by commas (,). The sidecar proxy container intercepts all requests whose destination service ports are included in the list.
Important Even if the destination service port of a request is included in the list specified by this configuration item, the sidecar proxy container does not intercept the request when both of the following conditions are met:
1. The configuration item Addresses to Which External Access Is Not Redirected to Sidecar Proxy or Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy is also configured.
2. The destination IP address of the request is included in Addresses to Which External Access Is Not Redirected to Sidecar Proxy, or the destination service port of the request is included in Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy.
For more information, see Addresses to Which External Access Is Not Redirected to Sidecar Proxy and Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy.
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) In the Enable/Disable Sidecar Proxy by Ports or IP Addresses section, select Ports on Which Outbound Traffic Redirected to Sidecar Proxy.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
Set Ports on Which Outbound Traffic Redirected to Sidecar Proxy to 80,443 and click Update Settings in the lower part of the page.
This configuration indicates that the sidecar proxy container intercepts requests destined for ports 80 and 443.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Ports on Which Outbound Traffic Redirected to Sidecar Proxy:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '*'
    - '-x'
    - 192.168.0.1/32
    - '-b'
    - '*'
    - '-d'
    - '15090,15021,15081,9191,15020'
    - '-q'
    - '80,443'
    - '--log_output_level=default:info'
    ...
    name: istio-init
...
The runtime parameter -q of the istio-init container is set to 80,443, which matches the outbound ports set in the sidecar proxy configuration. This indicates that the configuration of Ports on Which Outbound Traffic Redirected to Sidecar Proxy takes effect.
Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy
Show the descriptions and configuration example of Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy
Descriptions of the configuration items
You need to configure a list of port numbers that are separated by commas (,). Inbound traffic destined for the ports in the list will not be intercepted by the sidecar proxy container.
Important This configuration item takes effect only when Ports on Which Inbound Traffic Redirected to Sidecar Proxy is set to the default value *, which indicates that the sidecar proxy container intercepts all inbound traffic.
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) In the Enable/Disable Sidecar Proxy by Ports or IP Addresses section, select Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
Set Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy to 8000, and then click Update Settings in the lower part of the page.
This configuration indicates that the sidecar proxy container no longer intercepts requests destined for port 8000 of the workload.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '*'
    - '-x'
    - 192.168.0.1/32
    - '-b'
    - '*'
    - '-d'
    - '15090,15021,15081,9191,15020,8000'
    - '--log_output_level=default:info'
    ...
    name: istio-init
...
The runtime parameter -d of the istio-init container is set to 15090,15021,15081,9191,15020,8000. Ports 15090, 15021, 15081, 9191, and 15020 are ports used by the sidecar proxy itself, and the sidecar proxy container does not intercept inbound traffic destined for these ports by default. Port 8000 is the inbound port set in the sidecar proxy configuration. This indicates that the configuration of Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy takes effect.
Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy
Show the descriptions and configuration example of Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy
Descriptions of the configuration items
Specify a list of destination service ports for outbound traffic, separated by commas (,). The sidecar proxy container does not intercept requests destined for the service ports in the list, regardless of whether the IP addresses of the destination services are in Addresses to Which External Access Is Redirected to Sidecar Proxy and whether the ports of the destination services are in Ports on Which Outbound Traffic Redirected to Sidecar Proxy.
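As with the inbound case, this setting maps to a standard Istio interception parameter. A hedged per-pod sketch using the open-source Istio annotation traffic.sidecar.istio.io/excludeOutboundPorts (verify support in your ASM version):

```yaml
# Sketch: exclude outbound requests to port 8000 from interception
# (open-source Istio annotation; hypothetical pod template fragment).
template:
  metadata:
    annotations:
      traffic.sidecar.istio.io/excludeOutboundPorts: "8000"
```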
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) In the Enable/Disable Sidecar Proxy by Ports or IP Addresses section, select Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
Set Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy to 8000, and then click Update Settings in the lower part of the page.
This configuration indicates that the sidecar proxy container no longer intercepts service requests destined for port 8000.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configured ports for which outbound traffic is not redirected to the sidecar proxy container:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
...
initContainers:
- args:
- istio-iptables
- '-p'
- '15001'
- '-z'
- '15006'
- '-u'
- '1337'
- '-m'
- REDIRECT
- '-i'
- '*'
- '-x'
- 192.168.0.1/32
- '-b'
- '*'
- '-d'
- '15090,15021,15081,9191,15020'
- '--log_output_level=default:info'
- '-o'
- '8000'
...
name: istio-init
...
The runtime parameter -o of the istio-init container is set to 8000, which is the same as the port set in the configuration item Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy. This indicates that the configuration item takes effect.
Enable DNS Proxy
Show the descriptions and configuration example of Enable DNS Proxy
Descriptions of the configuration items
You can enable or disable the DNS proxy feature for the sidecar proxy container. After the DNS proxy feature is enabled, the sidecar proxy container intercepts DNS requests of the workload to improve the performance and availability of Service Mesh (ASM). All DNS requests from the workload are redirected to the sidecar proxy container. The sidecar proxy container maintains local mappings between domain names and IP addresses and can therefore directly return DNS responses to the workload, avoiding requests to a remote DNS service. If the sidecar proxy container cannot process a DNS request, it directly forwards the request to the DNS service in the Kubernetes cluster. For more information, see Use the DNS proxy feature in an ASM instance.
Important Due to network permission issues, you cannot enable the DNS proxy feature for sidecar proxies in ACK Serverless clusters or for Elastic Container Instance-based pods in ACK clusters.
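For reference, the console switch corresponds to the two environment variables shown in the verification output later in this section. In open-source Istio, the same effect can be achieved per pod through the proxy.istio.io/config annotation; a hedged sketch (verify support in your ASM version):

```yaml
# Sketch: enable the DNS proxy for a single pod (open-source Istio
# annotation; the ASM console switch applies equivalent settings).
metadata:
  annotations:
    proxy.istio.io/config: |
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
```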
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click DNS Proxy.
(Optional) In the DNS Proxy section, select Enable DNS Proxy, turn on the switch on the right side, and then click Update Settings.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab. If you turn on the switch, the DNS proxy feature is enabled for the sidecar proxy.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of the DNS proxy feature:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
spec:
containers:
- args:
- proxy
- sidecar
- '--domain'
- $(POD_NAMESPACE).svc.cluster.local
- '--proxyLogLevel=warning'
- '--proxyComponentLogLevel=misc:error'
- '--log_output_level=default:info'
- '--concurrency'
- '3'
env:
...
- name: ISTIO_META_DNS_AUTO_ALLOCATE
value: 'true'
- name: ISTIO_META_DNS_CAPTURE
value: 'true'
...
name: istio-proxy
The ISTIO_META_DNS_AUTO_ALLOCATE and ISTIO_META_DNS_CAPTURE environment variables of the istio-proxy container are set to true, which indicates that the configuration of DNS Proxy takes effect.
Manage Environment Variables for Sidecar Proxy
Show the descriptions and configuration example of Manage Environment Variables for Sidecar Proxy
Descriptions of the configuration items
These configuration items are used to add additional environment variables in the sidecar proxy container. You can configure the following environment variables for the sidecar proxy.
Configuration item | Description |
Sidecar Graceful Shutdown (EXIT_ON_ZERO_ACTIVE_CONNECTIONS) | If you turn on this switch, the environment variable EXIT_ON_ZERO_ACTIVE_CONNECTIONS: "true" is added to the environment variables of the sidecar proxy container. By default, during the termination of the sidecar proxy container, the pilot-agent process in the container stops the Envoy proxy from listening for inbound traffic, waits for the period specified by the configuration item Sidecar Proxy Drain Duration at Pod Termination, and then stops the Envoy proxy process. After the environment variable EXIT_ON_ZERO_ACTIVE_CONNECTIONS is set to true, the pilot-agent process instead stops the Envoy proxy from listening for inbound traffic, waits for the default period of 5 seconds, and then polls the number of active connections of the Envoy proxy, stopping the Envoy proxy process only after the number of active connections reaches zero. You can configure EXIT_ON_ZERO_ACTIVE_CONNECTIONS to optimize the termination process of the sidecar proxy container in common situations. This reduces the number of requests that are discarded during termination and minimizes the termination time.
Important After the environment variable EXIT_ON_ZERO_ACTIVE_CONNECTIONS is set to true, the configuration item Sidecar Proxy Drain Duration at Pod Termination does not take effect. For more information, see Sidecar Proxy Drain Duration at Pod Termination. |
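If you manage pods directly with open-source Istio mechanisms, the same environment variable can presumably be injected per pod through the proxy.istio.io/config annotation; a hedged sketch (verify against your ASM version):

```yaml
# Sketch: enable connection-draining graceful shutdown for one pod
# (open-source Istio annotation; the ASM console switch adds the same
# environment variable).
metadata:
  annotations:
    proxy.istio.io/config: |
      proxyMetadata:
        EXIT_ON_ZERO_ACTIVE_CONNECTIONS: "true"
```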
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Manage Environment Variables for Sidecar Proxy.
(Optional) In the Manage Environment Variables for Sidecar Proxy section, select Sidecar Graceful Shutdown (EXIT_ON_ZERO_ACTIVE_CONNECTIONS).
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
Turn on the switch on the right side of Sidecar Graceful Shutdown (EXIT_ON_ZERO_ACTIVE_CONNECTIONS), and then click Update Settings.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Manage Environment Variables for Sidecar Proxy:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
containers:
- args:
...
env:
- name: EXIT_ON_ZERO_ACTIVE_CONNECTIONS
value: 'true'
name: istio-proxy
...
The EXIT_ON_ZERO_ACTIVE_CONNECTIONS environment variable is added to the environment variables of the istio-proxy container in the pod. This indicates that the configuration of Manage Environment Variables for Sidecar Proxy takes effect.
Sidecar Graceful Startup (HoldApplicationUntilProxyStarts)
Show the descriptions and configuration example of Sidecar Graceful Startup
Descriptions of the configuration items
Sidecar Graceful Startup (HoldApplicationUntilProxyStarts) is a configuration item that is used to manage the lifecycle of sidecar proxies. By default, this configuration item is enabled. This means that in a pod that is injected with a sidecar proxy, the sidecar proxy container must be ready before the application containers in the pod are started. This ensures that traffic sent or received by the application containers is not lost while the sidecar proxy is still starting.
If this configuration item is disabled, the sidecar proxy container and the application containers in the pod can be started in parallel. When a large number of pods are deployed in a cluster, sidecar proxy containers may start slowly due to the heavy load on the API server. You can disable this configuration item to speed up deployment.
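In open-source Istio, this behavior can also be overridden for a single workload through the proxy.istio.io/config annotation; a hedged sketch (verify that your ASM version honors the per-pod override):

```yaml
# Sketch: disable graceful startup for one workload while keeping it
# enabled globally (open-source Istio annotation).
metadata:
  annotations:
    proxy.istio.io/config: |
      holdApplicationUntilProxyStarts: false
```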
Configuration example
The following example describes how to disable Sidecar Graceful Startup (HoldApplicationUntilProxyStarts) on the global tab.
On the Sidecar Proxy Setting page, click the global tab, and then click Lifecycle Management.
Turn off the switch next to Sidecar Graceful Startup (HoldApplicationUntilProxyStarts) and click Update Settings.
This configuration indicates that Sidecar Graceful Startup (HoldApplicationUntilProxyStarts) is disabled.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Sidecar Graceful Startup (HoldApplicationUntilProxyStarts):
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
containers:
- command:
...
name: sleep
...
env:
- name: PROXY_CONFIG
value: >-
{..."holdApplicationUntilProxyStarts":false,...}
...
name: istio-proxy
...
After the Sidecar Graceful Startup (HoldApplicationUntilProxyStarts) feature is disabled, the istio-proxy container is not required to start before the application containers, and the default lifecycle field is not declared. In this case, Service Mesh (ASM) does not ensure that the application containers are started after the sidecar proxy container starts successfully.
Sidecar Proxy Drain Duration at Pod Termination
Show the descriptions and configuration example of Sidecar Proxy Drain Duration at Pod Termination
Descriptions of the configuration items
Sidecar Proxy Drain Duration at Pod Termination is a configuration item for managing the lifecycle of a sidecar proxy. After a sidecar proxy is injected into a pod, the traffic of the pod is intercepted by the sidecar proxy container.
After the pod enters the terminating state, the corresponding Services no longer route traffic to the pod. After the sidecar proxy container receives an exit signal, it waits for a period of time. During this period, the sidecar proxy container does not accept new inbound traffic but continues to process existing inbound traffic. (Outbound traffic is not affected and can be initiated normally.) This period of time is called Sidecar Proxy Drain Duration at Pod Termination. The default value is 5s. You can specify a value for this configuration item in seconds (s), for example, 10s.
If a Service in a pod that is being stopped serves API calls whose duration exceeds Sidecar Proxy Drain Duration at Pod Termination, all existing inbound and outbound connections are terminated when the drain duration elapses, even if requests on them are still being processed. As a result, some requests are lost. In this case, set this configuration item to a greater value so that the processing of inbound and outbound traffic can complete.
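In open-source Istio, the drain duration maps to the terminationDrainDuration field of the proxy configuration and can presumably be overridden per pod; a hedged sketch (verify support in your ASM version):

```yaml
# Sketch: give one pod a 10-second drain window at termination
# (open-source Istio annotation; the ASM console setting produces the
# equivalent PROXY_CONFIG value shown in the verification output).
metadata:
  annotations:
    proxy.istio.io/config: |
      terminationDrainDuration: 10s
```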
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Lifecycle Management.
(Optional) In the Lifecycle Management section, select Sidecar Proxy Drain Duration at Pod Termination.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
Set Sidecar Proxy Drain Duration at Pod Termination to 10s, and click Update Settings.
This configuration indicates that the sidecar proxy will wait for 10s to process existing connections before termination.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Sidecar Proxy Drain Duration at Pod Termination:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
containers:
- args:
...
env:
- name: TERMINATION_DRAIN_DURATION_SECONDS
value: '10'
...
- name: PROXY_CONFIG
value: >-
{..."terminationDrainDuration":"10s"}
...
name: istio-proxy
...
The istio-proxy container in the pod is configured with an environment variable named TERMINATION_DRAIN_DURATION_SECONDS with a value of 10, and terminationDrainDuration is set to 10s in the PROXY_CONFIG environment variable. This indicates that the configuration of Sidecar Proxy Drain Duration at Pod Termination takes effect.
Lifecycle of Sidecar Proxy
Show the descriptions and configuration example of Lifecycle of Sidecar Proxy
Descriptions of the configuration items
This configuration item allows you to customize the lifecycle hook of the sidecar proxy container. In this configuration item, you need to enter the container lifecycle hook field (lifecycle) declared in the JSON format. This field will replace the default container lifecycle hook field configured for the sidecar proxy container. For more information, see Container Lifecycle Hooks.
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Lifecycle Management.
(Optional) In the Lifecycle Management section, select Lifecycle of Sidecar Proxy.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
In the edit box under Lifecycle of Sidecar Proxy, enter the following JSON content and then click Update Settings.
This JSON content configures the postStart and preStop hook parameters.
postStart: indicates that after the sidecar proxy container is started, the pilot-agent wait command is run to wait for the complete startup of the pilot-agent and Envoy proxy.
preStop: indicates that the sidecar proxy container sleeps for 13s before it is stopped.
{
"postStart": {
"exec": {
"command": [
"pilot-agent",
"wait"
]
}
},
"preStop": {
"exec": {
"command": [
"/bin/sh",
"-c",
"sleep 13"
]
}
}
}
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Lifecycle of Sidecar Proxy:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
containers:
- args:
...
...
lifecycle:
postStart:
exec:
command:
- pilot-agent
- wait
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 13
name: istio-proxy
...
The lifecycle hook field (lifecycle) of the istio-proxy container in the pod is changed to the expected configuration. This indicates that the configuration of Lifecycle of Sidecar Proxy takes effect.
Outbound Traffic Policy
Show the descriptions and configuration example of Outbound Traffic Policy
Descriptions of the configuration items
This configuration item is used to configure an outbound traffic policy for the sidecar proxy container. External services are services that are not defined in the service registry of Service Mesh (ASM). By default, services in the Kubernetes clusters that are managed by ASM are registered services. You can manually register services with ASM by declaring service entry (ServiceEntry) resources. Services that are not registered are external services.
This configuration item can be set to one of the following two values:
ALLOW_ANY: the default outbound traffic policy. The sidecar proxy allows access to external services and forwards requests destined for external services as they are.
REGISTRY_ONLY: The sidecar proxy denies access to external services. The workload cannot establish connections to external services.
Note This configuration item is a global configuration item and can be set only at the global level. To configure an outbound traffic policy at the namespace or workload level, log on to the ASM console, find the desired ASM instance, and configure the related parameters on the corresponding configuration page.
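For reference, the global setting corresponds to the outboundTrafficPolicy field of the Istio mesh configuration; an illustrative open-source Istio fragment (the ASM console manages this configuration for you):

```yaml
# Sketch: restrict outbound traffic to registered services only
# (open-source Istio meshConfig equivalent of the console setting).
meshConfig:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
```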
Configuration example
On the global tab of the Sidecar Proxy Setting page, click Outbound Traffic Policy, click REGISTRY_ONLY next to Outbound Traffic Policy, and then click Update Settings.
This configuration indicates that services in ASM are restricted from accessing external services.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Create a sleep.yaml file that contains the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
name: sleep
---
apiVersion: v1
kind: Service
metadata:
name: sleep
labels:
app: sleep
service: sleep
spec:
ports:
- port: 80
name: http
selector:
app: sleep
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: sleep
spec:
replicas: 1
selector:
matchLabels:
app: sleep
template:
metadata:
labels:
app: sleep
spec:
terminationGracePeriodSeconds: 0
serviceAccountName: sleep
containers:
- name: sleep
image: curlimages/curl
command: ["/bin/sleep", "3650d"]
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /etc/sleep/tls
name: secret-volume
volumes:
- name: secret-volume
secret:
secretName: sleep-secret
optional: true
---
Run the following command to deploy the sleep application:
kubectl apply -f sleep.yaml -n default
Run the following command to use the sleep application to access external services:
kubectl exec -it {Name of the pod for the sleep service} -c sleep -- curl www.aliyun.com -v
Expected output:
* Trying *********...
* Connected to www.aliyun.com (********) port 80 (#0)
> GET / HTTP/1.1
> Host: www.aliyun.com
> User-Agent: curl/7.87.0-DEV
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< date: Mon,********* 03:25:00 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host www.aliyun.com left intact
The HTTP status code 502 is returned, indicating that the sleep application into which the sidecar proxy is injected cannot access the external service www.aliyun.com. This indicates that the configuration of Outbound Traffic Policy takes effect.
Sidecar Traffic Interception Mode
Show the descriptions and configuration example of Sidecar Traffic Interception Mode
Descriptions of the configuration items
This configuration item is used to configure a policy for the sidecar proxy container to intercept inbound traffic. By default, the sidecar proxy container uses the iptables redirection policy to intercept inbound traffic destined for the corresponding application. After the sidecar proxy container intercepts inbound traffic, the application can obtain only the IP address of the sidecar proxy, not the original IP addresses of the clients.
After you set the Sidecar Traffic Interception Mode configuration item to TPROXY, Service Mesh (ASM) allows the sidecar proxy container to use the transparent proxy mode of iptables to intercept inbound traffic. With this configuration, the application can obtain the original IP addresses of the clients. For more information, see Preserve the source IP address of a client when the client accesses services in ASM.
Important You cannot use the transparent proxy mode on nodes that run the CentOS operating system. If your pods run on CentOS nodes, use the redirection mode.
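In open-source Istio, the interception mode can also be set for a single pod with an annotation; a hedged sketch (verify that your ASM version honors the per-pod override):

```yaml
# Sketch: switch one pod to the TPROXY interception mode
# (open-source Istio annotation).
metadata:
  annotations:
    sidecar.istio.io/interceptionMode: TPROXY
```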
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Sidecar Traffic Interception Mode.
(Optional) In the Sidecar Traffic Interception Mode section, select Sidecar Traffic Interception Mode.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
On the right side of Sidecar Traffic Interception Mode, click TPROXY, and then click Update Settings.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Sidecar Traffic Interception Mode:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
containers:
- args:
...
- name: PROXY_CONFIG
value: >-
{..."interceptionMode":"TPROXY",...}
- name: ISTIO_META_POD_PORTS
value: |-
[
]
...
name: istio-proxy
...
initContainers:
- args:
- istio-iptables
- '-p'
- '15001'
- '-z'
- '15006'
- '-u'
- '1337'
- '-m'
- TPROXY
...
name: istio-init
...
The "interceptionMode":"TPROXY" setting is recorded in the environment variables of the istio-proxy container in the pod, and the istio-init container also uses the TPROXY mode when it runs the initialization commands. This indicates that the configuration of Sidecar Traffic Interception Mode takes effect.
Log Level
Show the descriptions and configuration example of Log Level
Descriptions of the configuration items
This configuration item is used to set the log level of the sidecar proxy container. By default, the log level of the sidecar proxy is info. You can change the log level of a sidecar proxy to one of the following seven levels: trace, debug, info, warning, error, critical, and off. This way, you can obtain more or fewer logs from the sidecar proxy.
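In open-source Istio, the proxy log level can also be set for a single pod with an annotation; a hedged sketch (verify that your ASM version honors this annotation):

```yaml
# Sketch: set the proxy log level for one pod
# (open-source Istio annotation).
metadata:
  annotations:
    sidecar.istio.io/logLevel: error
```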
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Monitoring Statistics.
(Optional) In the Monitoring Statistics section, select Log Level.
You need to perform this step on the Namespace and workload tabs. You do not need to perform this step on the global tab.
Select error from the Log Level drop-down list, and then click Update Settings.
This configuration indicates that the sidecar proxy provides logs at the error or higher levels.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of Log Level:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
containers:
- args:
- proxy
- sidecar
- '--domain'
- $(POD_NAMESPACE).svc.cluster.local
- '--proxyLogLevel=error'
...
name: istio-proxy
...
The runtime parameter --proxyLogLevel of the istio-proxy container is set to error, which indicates that the configuration of Log Level takes effect.
proxyStatsMatcher
Show the descriptions and configuration example of proxyStatsMatcher
Descriptions of the configuration items
This configuration item is used to define custom Envoy statistics that are reported by the sidecar proxy. Envoy is the technical implementation of the sidecar proxy and can collect and report a series of metrics. By default, Service Mesh (ASM) collects and exposes only some of these metrics to mitigate performance degradation of the sidecar proxy. In this configuration item, you can use prefix matching, suffix matching, or regular expression matching to specify the metrics to be reported by the sidecar proxy.
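In open-source Istio, the same matcher can be set for a single pod through the proxy.istio.io/config annotation; a hedged sketch (verify support in your ASM version):

```yaml
# Sketch: additionally report Envoy circuit-breaker (outlier detection)
# metrics for one pod (open-source Istio annotation; the ASM console
# setting produces the equivalent PROXY_CONFIG value).
metadata:
  annotations:
    proxy.istio.io/config: |
      proxyStatsMatcher:
        inclusionRegexps:
          - ".*outlier_detection.*"
```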
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Monitoring Statistics.
In the Monitoring Statistics section, select proxyStatsMatcher, select Regular Expression Match, set Regular Expression Match to .*outlier_detection.*, and then click Update Settings.
This configuration indicates that the sidecar proxy collects the statistics of circuit breaker metrics.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configuration of proxyStatsMatcher:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
containers:
- args:
...
- name: PROXY_CONFIG
value: >-
{..."proxyStatsMatcher":{"inclusionRegexps":[".*outlier_detection.*"]},...}
...
The custom metric matching rules are added to the PROXY_CONFIG environment variable of the istio-proxy container in the pod. This indicates that the configuration of proxyStatsMatcher takes effect.
Envoy Runtime Parameters
Show the descriptions and configuration example of Envoy Runtime Parameters
Descriptions of the configuration items
This configuration item is used to define runtime parameters of Envoy proxy processes in the sidecar proxy container. The following table describes the runtime parameters that you can configure.
Configuration item | Description |
Limits on Downstream Connections | By default, a sidecar proxy does not limit the number of downstream connections, which can be exploited in malicious attacks. For more information, see ISTIO-SECURITY-2020-007. You can configure the maximum number of downstream connections allowed by a sidecar proxy based on your business requirements. |
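The verification output later in this section shows that this limit is written into the runtimeValues field of PROXY_CONFIG. In open-source Istio, the same field can presumably be set per pod through the proxy.istio.io/config annotation; a hedged sketch (verify against your ASM version):

```yaml
# Sketch: cap downstream connections at 5000 for one pod
# (open-source Istio annotation; Envoy runtime key shown as in the
# verification output).
metadata:
  annotations:
    proxy.istio.io/config: |
      runtimeValues:
        overload.global_downstream_max_connections: "5000"
```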
Configuration example
On the Sidecar Proxy Setting page, click a configuration level tab, and then click Manage Environment Variables for Sidecar Proxy.
(Optional) In the Envoy Runtime Parameters section, enter 5000 in the input box on the right side of Limits on Downstream Connections and then click Update Settings.
Redeploy the workloads to make the sidecar proxy configurations take effect. For more information, see (Optional) Redeploy workloads.
Use kubectl to connect to the ACK cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Run the following command to view the configurations of Manage Environment Variables for Sidecar Proxy:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
containers:
- args:
...
env:
- name: PROXY_CONFIG
value: >-
{"concurrency":2,"configPath":"/etc/istio/proxy","discoveryAddress":"istiod-1-22-6.istio-system.svc:15012","holdApplicationUntilProxyStarts":true,"interceptionMode":"REDIRECT","proxyMetadata":{"BOOTSTRAP_XDS_AGENT":"false","DNS_AGENT":"","EXIT_ON_ZERO_ACTIVE_CONNECTIONS":"true"},"runtimeValues":{"overload.global_downstream_max_connections":"5000"},"terminationDrainDuration":"5s","tracing":{"zipkin":{"address":"zipkin.istio-system:9411"}}}
name: istio-proxy
...
The "runtimeValues":{"overload.global_downstream_max_connections":"5000"} field is added to the PROXY_CONFIG environment variable of the istio-proxy container in the pod. This indicates that the configuration of Envoy Runtime Parameters takes effect.