By default, the Horizontal Pod Autoscaler (HPA) supports automatic scaling based only on CPU and memory metrics, which may not suffice for complex operational scenarios. This topic describes how to convert the custom and external metrics collected by Managed Service for Prometheus into scaling metrics supported by the HPA, how to retrieve the monitoring data, and how to implement the corresponding scaling configurations. This solution offers a flexible and convenient scaling mechanism for your applications.
Prerequisites
Managed Service for Prometheus is installed. For more information, see Managed Service for Prometheus.
ack-alibaba-cloud-metrics-adapter is deployed. For more information, see the Deploy alibaba-cloud-metrics-adapter section.
Note: To deploy ack-alibaba-cloud-metrics-adapter, log on to the Container Service for Kubernetes (ACK) console. In the left-side navigation pane, go to the Marketplace page, find ack-alibaba-cloud-metrics-adapter, and deploy it.
Features
By default, the HPA supports auto scaling based only on CPU and memory usage, which may not meet complex O&M requirements. Managed Service for Prometheus is a fully managed monitoring service that is compatible with the open-source Prometheus ecosystem. It monitors various components and provides multiple ready-to-use dashboards. To enable horizontal pod auto scaling based on Prometheus metrics, perform the following steps:
Use Managed Service for Prometheus in the ACK cluster to expose the metrics.
Use alibaba-cloud-metrics-adapter to convert Prometheus metrics to Kubernetes metrics supported by the HPA. For more information, see Autoscaling on metrics not related to Kubernetes objects.
Configure and deploy the HPA to perform auto scaling based on the preceding metrics.
The metrics can be classified into the following types based on scenarios:
Custom metrics: used to scale Kubernetes objects, such as pods, based on metrics that are related to the objects. For example, you can scale pods based on pod metrics. For more information, see Autoscaling on multiple metrics and custom metrics.
External metrics: used to scale Kubernetes objects, such as pods, based on metrics that are not related to the objects. For example, you can scale the pods of a workload based on the business QPS. For more information, see Autoscaling on metrics not related to Kubernetes objects.
The following section describes how to configure alibaba-cloud-metrics-adapter to convert Prometheus metrics to metrics supported by the HPA for auto scaling.
Step 1: Collect Prometheus metrics
Example 1: Use the predefined metrics
You can perform auto scaling based on the predefined metrics available in Managed Service for Prometheus that is installed in your ACK cluster. The predefined metrics include cadvisor metrics for container monitoring, Node-Exporter and GPU-Exporter metrics for node monitoring, and all metrics provided by Managed Service for Prometheus. To view the predefined metrics in Managed Service for Prometheus, perform the following steps:
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Operations > Prometheus Monitoring.
Click Go to ARMS Prometheus in the upper-right corner.
In the left-side navigation pane, click Settings to view all metrics supported by Managed Service for Prometheus.
Example 2: Use the Prometheus metrics reported by pods
Deploy a testing application and expose the metrics of the application based on the metric standards of open-source Prometheus. For more information, see METRIC TYPES. The following section describes how to deploy an application named sample-app and expose the http_requests_total metric to indicate the number of requests sent to the application.
Deploy the workload of the application.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Workloads > Deployments.
In the upper part of the Deployments page, click Create from YAML. On the Create page, select Custom from the Sample Template drop-down list, add the following content to the template, and then click Create.
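The application manifest is not reproduced in this topic. The following is a minimal sketch of a Deployment and a Service for sample-app, assuming that the application image (a placeholder here) exposes a Prometheus-formatted http_requests_total counter on the /metrics path of port 8080. The app: sample-app label and the http port name are chosen so that the ServiceMonitor created in the next step can select the Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  namespace: default
  labels:
    app: sample-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        # Placeholder image: use any application that exposes the http_requests_total metric on /metrics.
        image: <your-metrics-app-image>
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
  namespace: default
  labels:
    app: sample-app
spec:
  ports:
  # The port name must match the port field of the ServiceMonitor.
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: sample-app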
Note: The application pod is used to expose the http_requests_total metric, which indicates the number of requests.
Create a ServiceMonitor.
Log on to the Application Real-Time Monitoring Service (ARMS) console.
In the left-side navigation pane, click Integration Management. In the upper part of the page, select the region in which your cluster resides.
On the Integrated Environments tab of the Integration Management page, click the Container Service tab. In the Environment Name/ID column, click the Prometheus instance that has the same name as the cluster.
On the instance details page, click the Metric Scraping tab.
In the left-side navigation pane of the current page, click Service Monitor. Then, click Create. In the Add ServiceMonitor Configuration panel, click YAML to configure the ServiceMonitor, and click Create.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-app
  namespace: default
spec:
  endpoints:
  - interval: 30s
    port: http
    path: /metrics
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      app: sample-app
Check the monitoring status.
On the Self-Monitoring tab of the instance details page, click the Targets tab. If default/sample-app/0(1/1 up) is displayed, Managed Service for Prometheus is monitoring your application.
In the Prometheus dashboard, query the value of http_requests_total within a period of time to verify that monitoring data is collected without errors.
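For example, a PromQL query similar to the following (a sketch; adjust the label filters to match your pods) should return a non-empty time series:
sum(rate(http_requests_total{namespace="default", pod=~"sample-app.*"}[2m])) by (pod)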
Step 2: Modify the configuration of alibaba-cloud-metrics-adapter
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Applications > Helm.
On the Helm page, find ack-alibaba-cloud-metrics-adapter and click Update in the Actions column.
In the Update Release panel, add the following content to the YAML editor and click OK.
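The exact YAML content is not reproduced in this topic. The following is a minimal sketch assembled from the fields described in the following table and from the conversion rule shown in the Configuration file of ack-alibaba-cloud-adapter section of this topic. The endpoint and token values are placeholders that you obtain as described in the Obtain the endpoint of the Managed Service for Prometheus API section, and the exact field layout may differ slightly in your chart version.
AlibabaCloudMetricsAdapter:
  prometheus:
    enabled: true
    # Placeholder: the HTTP API endpoint of Managed Service for Prometheus.
    url: https://<your-prometheus-endpoint>
    # Placeholder: the token, if your endpoint requires authentication.
    prometheusHeader:
    - Authorization: <your-token>
    adapter:
      rules:
        # Whether to create the predefined metrics.
        default: false
        custom:
        # Convert the http_requests_total counter exposed by sample-app into http_requests_per_second.
        - seriesQuery: http_requests_total{namespace!="",pod!=""}
          resources:
            overrides:
              namespace: {resource: "namespace"}
              pod: {resource: "pod"}
          name:
            matches: ^(.*)_total
            as: ${1}_per_second
          metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)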
The following table describes some of the fields. For more information about the configuration file of ack-alibaba-cloud-adapter, see the Configuration file of ack-alibaba-cloud-adapter section of this topic.
Field | Description |
AlibabaCloudMetricsAdapter.prometheus.adapter.rules.custom | The rules that alibaba-cloud-metrics-adapter uses to convert Prometheus metrics. Set this field to the value in the preceding YAML content. |
AlibabaCloudMetricsAdapter.prometheus.url | The endpoint of Managed Service for Prometheus. For more information about how to obtain the endpoint, see the Obtain the endpoint of the Managed Service for Prometheus API section of this topic. |
AlibabaCloudMetricsAdapter.prometheus.prometheusHeader[].Authorization | The token. For more information about how to obtain the token, see the Obtain the endpoint of the Managed Service for Prometheus API section of this topic. |
AlibabaCloudMetricsAdapter.prometheus.adapter.rules.default | Specifies whether to create the predefined metrics. The predefined metrics are created by default. We recommend that you set this field to false. |
After you configure and deploy ack-alibaba-cloud-metrics-adapter, run the following commands to check whether the Kubernetes aggregation API can collect data.
Check the custom metrics.
Run the following command to query the details of custom metrics supported by the HPA:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq .
Run the following commands to query the current values of the predefined metrics of the pods in the kube-system namespace:
# Query the container_memory_working_set_bytes_per_second metric to view the working set memory of the pods in the kube-system namespace per second.
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/kube-system/pods/*/container_memory_working_set_bytes_per_second"
# Query the container_cpu_usage_core_per_second metric to view the number of vCores used by the pods in the kube-system namespace per second.
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/kube-system/pods/*/container_cpu_usage_core_per_second"
Sample output:
{ "kind": "MetricValueList", "apiVersion": "custom.metrics.k8s.io/v1beta1", "metadata": { "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/kube-system/pods/%2A/container_memory_working_set_bytes_per_second" }, "items": [ { "describedObject": { "kind": "Pod", "namespace": "kube-system", "name": "ack-alibaba-cloud-metrics-adapter-7cf8dcb845-h****", "apiVersion": "/v1" }, "metricName": "container_memory_working_set_bytes_per_second", "timestamp": "2023-08-09T06:30:19Z", "value": "24576k", "selector": null } ] }
Check the external metrics.
Run the following command to query the details of the external metrics supported by the HPA:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/" | jq .
Run the following command to query the current value of the http_requests_per_second metric in the default namespace:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/http_requests_per_second"
Sample output:
{ "kind": "ExternalMetricValueList", "apiVersion": "external.metrics.k8s.io/v1beta1", "metadata": {}, "items": [ { "metricName": "http_requests_per_second", "metricLabels": {}, "timestamp": "2022-01-28T08:40:20Z", "value": "33m" } ] }
Step 3: Configure and deploy the HPA to perform auto scaling based on the collected metrics
Deploy the HPA
Prometheus metrics can be exposed as custom metrics and external metrics at the same time. The following table describes the two types of metrics.
Metric type | Description |
Custom metric | Scales Kubernetes objects, such as pods, based on the metrics that are related to the objects. For example, you can scale pods based on pod metrics. For more information, see Autoscaling on multiple metrics and custom metrics. |
External metric | Scales Kubernetes objects, such as pods, based on the metrics that are not related to the objects. For example, you can scale the pods of a workload based on the business QPS. For more information, see Autoscaling on metrics not related to Kubernetes objects. |
Scale pods based on custom metrics
Create a file named hpa.yaml and add the following content to the file:
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
metadata:
  name: sample-app-memory-high
spec:
  # Describe the object that you want the HPA to scale. The HPA can dynamically change the number of pods that are deployed for the object.
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  # Specify the minimum and maximum numbers of pods.
  minReplicas: 1
  maxReplicas: 10
  # Specify the metrics based on which the HPA performs auto scaling. You can specify different types of metrics at the same time.
  metrics:
  - type: Pods
    pods:
      # Use the pods/container_memory_working_set_bytes_per_second metric.
      metric:
        name: container_memory_working_set_bytes_per_second
      # Specify an AverageValue type threshold. You can specify only AverageValue type thresholds for Pods metrics.
      target:
        type: AverageValue
        # 1024000m indicates a memory threshold of 1 KB. Unit: bytes per second. m is a precision unit used by Kubernetes. If the value contains decimal places and high precision is required, the m or k unit is used. For example, 1001m is equal to 1.001 and 1k is equal to 1000.
        averageValue: 1024000m
Run the following command to create the HPA:
kubectl apply -f hpa.yaml
Run the following command to check whether the HPA runs as expected:
kubectl get hpa sample-app-memory-high
Expected output:
NAME                     REFERENCE               TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
sample-app-memory-high   Deployment/sample-app   24576k/1024000m   3         10        1          7m
Scale pods based on external metrics
Create a file named hpa.yaml and add the following content to the file:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: http_requests_per_second
        selector:
          matchLabels:
            job: "sample-app"
      # You can specify only thresholds of the Value or AverageValue type for external metrics.
      target:
        type: AverageValue
        averageValue: 500m
Run the following command to create the HPA:
kubectl apply -f hpa.yaml
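The LoadBalancer Service used in the next step is not shown in this topic. The following is a minimal sketch, assuming that sample-app listens on port 8080; the Service name is hypothetical.
apiVersion: v1
kind: Service
metadata:
  # Hypothetical name used only for this stress test.
  name: sample-app-lb
  namespace: default
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: sample-app
In the ab command below, replace the placeholder with the external IP address of this Service.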
After the LoadBalancer Service is created, run the following command to perform a stress test:
ab -c 50 -n 2000 http://<external IP address of the LoadBalancer Service>:8080/
Run the following command to query the details of the HPA:
kubectl get hpa sample-app
Expected output:
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
sample-app   Deployment/sample-app   33m/500m   1         10        1          7m
Configuration file of ack-alibaba-cloud-adapter
ack-alibaba-cloud-adapter performs the following steps to convert Prometheus metrics to metrics that are supported by the HPA:
Discovery: discovers Prometheus metrics that can be used by the HPA.
Association: associates the metrics with Kubernetes resources, such as pods, nodes, and namespaces.
Naming: defines the names of the metrics that can be used by the HPA after conversion.
Querying: defines the template of the requests that are sent to the Managed Service for Prometheus API.
In the preceding example, the http_requests_total metric that is exposed by the sample-app pod is converted to the http_requests_per_second metric for the HPA. The following code block shows the configurations of ack-alibaba-cloud-adapter:
- seriesQuery: http_requests_total{namespace!="",pod!=""}
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: ^(.*)_total
    as: ${1}_per_second
  metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)
Parameter | Description |
seriesQuery | The Prometheus Query Language (PromQL) query data. |
metricsQuery | Aggregates the PromQL query data that is selected by the seriesQuery. |
resources | The labels in the PromQL query data, which are mapped to Kubernetes API resources such as namespaces and pods. |
name | Converts the names of Prometheus metrics to easy-to-read metric names by using a regular expression. In this example, http_requests_total is converted to http_requests_per_second. |
Discovery
Specify the Prometheus metric that you want to convert. You can specify the seriesFilters parameter to filter metrics. The seriesQuery parameter matches data based on the specified labels. The following code block shows an example:
seriesQuery: http_requests_total{namespace!="",pod!=""}
seriesFilters:
- isNot: "^container_.*_seconds_total"
seriesFilters: optional. This field filters metrics:
is: <regex>: matches metrics whose names match this regular expression.
isNot: <regex>: matches metrics whose names do not match this regular expression.
Association
Map the labels of Prometheus metrics to Kubernetes resources. The labels of the http_requests_total metric are namespace!="" and pod!="".
- seriesQuery: http_requests_total{namespace!="",pod!=""}
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
Naming
Name the HPA metrics that are converted from Prometheus metrics. The names of the Prometheus metrics remain unchanged. You do not need to configure the naming settings if you directly use the Prometheus metrics.
You can run the kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" command to query the metrics that are supported by the HPA.
- seriesQuery: http_requests_total{namespace!="",pod!=""}
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total"
    as: "${1}_per_second"
Querying
The template of requests that are sent to the Managed Service for Prometheus API. ack-alibaba-cloud-adapter passes the parameters in the HPA to the request template, sends a request to the Managed Service for Prometheus API based on the template, and then returns the metric values to the HPA for auto scaling.
- seriesQuery: http_requests_total{namespace!="",pod!=""}
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: ^(.*)_total
    as: ${1}_per_second
  metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)
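As an illustration (a sketch, not an exact trace), when the HPA requests the http_requests_per_second metric for pods in the default namespace, the placeholders in metricsQuery are filled in and the adapter sends a PromQL query roughly equivalent to the following:
sum(rate(http_requests_total{namespace="default",pod=~"<pods selected by the HPA>"}[2m])) by (pod)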
Obtain the endpoint of the Managed Service for Prometheus API
Scenario 1: Managed Service for Prometheus
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Operations > Prometheus Monitoring.
Click Go to ARMS Prometheus in the upper-right corner.
In the left-side navigation pane, click Settings. Then, click the Configure tab and view HTTP API URL (Grafana Read URL).
We recommend that you call the Managed Service for Prometheus API over an internal network. You can call the API over the Internet if no internal network is available.
Scenario 2: Open-source Prometheus
For open-source, self-managed Prometheus solutions, you must expose the standard Prometheus API by using a Service. Then, configure the Prometheus data source URL in the alibaba-cloud-metrics-adapter component so that the HPA can use data from open-source Prometheus.
The following example uses the Helm Chart community application ack-prometheus-operator from the Marketplace page in the ACK console. For more information, see Use open source Prometheus to monitor an ACK cluster.
Deploy open-source Prometheus and expose the standard Prometheus API.
Log on to the ACK console. In the left-side navigation pane, go to the App Catalog page.
On the App Catalog page, find and click ack-prometheus-operator. On the page that appears, click Deploy.
In the panel that appears, configure the Cluster and Namespace parameters, modify the Release Name parameter based on your business requirements, and click Next. Modify the Parameters section based on your business requirements and click OK.
View the deployment result.
Expose the standard Prometheus API by using a Service. The following steps use the ack-prometheus-operator-prometheus Service created by ack-prometheus-operator as an example.
Enter ServiceIP:9090 in the address bar of a browser to visit the Prometheus console. To access the console over the Internet, enable Internet access for the Service by using a Server Load Balancer (SLB) instance.
In the top navigation bar, choose Status > Targets to view all collection tasks. Tasks in the UP state are running as expected.
Check the service and namespace values in the Labels column.
The following code block shows the endpoint. In this example, ServiceName is ack-prometheus-operator-prometheus and ServiceNamespace is monitoring.
http://ack-prometheus-operator-prometheus.monitoring.svc.cluster.local:9090
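Before you configure the component, you can optionally verify that the endpoint is reachable from inside the cluster by querying the standard Prometheus HTTP API, for example from a temporary pod that runs the public curlimages/curl image (a sketch):
kubectl run prometheus-api-check --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s "http://ack-prometheus-operator-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=up"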
In the configuration of ack-alibaba-cloud-metrics-adapter, set the URL parameter of the Prometheus data source to ensure proper communication between the component and Prometheus.
If you choose to access the Prometheus API over the Internet, refer to the following example for configuration.
AlibabaCloudMetricsAdapter:
  ......
  prometheus:
    enabled: true
    url: http://your_domain.com:9090   # Replace your_domain.com with your public IP address.
If you use ack-prometheus-operator and access Prometheus within the cluster, the url value is http://ack-prometheus-operator-prometheus.monitoring.svc.cluster.local:9090.
References
For more information about how to implement HPA by using external metrics, such as HTTP request rate and Ingress queries per second (QPS), see Implement horizontal auto scaling based on Alibaba Cloud metrics.
For more information about how to implement HPA by using NGINX Ingress to dynamically adjust the number of pods for multiple applications based on their workloads, see Configure horizontal pod autoscaling for multiple applications based on the metrics of the NGINX Ingress controller.