Diagnostic reports help you evaluate the operational status of a Tair instance and identify anomalies based on statistics such as performance levels, skewed request distribution, and slow logs.
Components of a diagnostic report
- Basic instance information: displays basic information of an instance such as the instance ID, instance type, engine version, and the zone in which the instance is deployed.
- Summary: displays the health score of the instance and the reasons why points are deducted.
- Performance level: displays the statistics and states of important performance metrics related to the instance.
- TOP 10 nodes that receive the greatest number of slow queries: displays the top 10 data nodes that receive the greatest number of slow queries and provides information about the slow queries.
Basic instance information
This section displays the instance ID, instance type, engine version, and the zone in which the instance is deployed.
Summary
This section displays the diagnostic results and the health score of the instance. The highest possible score is 100. If your instance scores lower than 100, check the diagnostic items and details to see why points were deducted.
Performance level
This section displays the statistics and states of key performance metrics related to the instance. You must pay attention to performance metrics that are in the Hazard state.
Performance metric | Threshold | Impact | Possible cause and troubleshooting method |
---|---|---|---|
CPU Utilization | 60% | When a Tair instance has high CPU utilization, the throughput of the instance and the response time of clients are affected. In some cases, clients may be unable to respond. | For more information about possible causes and troubleshooting methods, see Troubleshoot high CPU utilization on a Tair instance. |
Memory Usage | 80% | When the memory usage of a Tair instance continuously increases, keys may be frequently evicted, the response time increases, and queries per second (QPS) become unstable. This affects your business. | For more information about possible causes and troubleshooting methods, see Troubleshoot the high memory usage on a Tair instance. |
Connections Usage of data nodes | 80% | When the number of connections to a data node reaches the upper limit, new connection requests may time out or fail. | For more information about possible causes and troubleshooting methods, see Session management. |
Inbound Traffic | 80% | When the inbound or outbound traffic exceeds the maximum bandwidth provided by the instance type, the performance of clients is affected. | For more information about possible causes and troubleshooting methods, see Troubleshoot high traffic usage on a Tair instance. |
Outbound Traffic | 80% | Same as Inbound Traffic. | Same as Inbound Traffic. |
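If a metric is in the Hazard state, you can run a few quick self-checks before you open the linked troubleshooting topics. The following redis-cli session is a minimal sketch that uses standard Redis commands supported by Tair; the endpoint and password are placeholders that you must replace with your own values.

```shell
# Connect to the instance (placeholder endpoint and password).
redis-cli -h r-example.redis.rds.aliyuncs.com -p 6379 -a yourPassword

# CPU: list per-command call counts and average execution time.
# Commands with a high usec_per_call value are likely CPU hot spots.
INFO commandstats

# Memory: check used_memory and maxmemory, and the evicted_keys counter.
INFO memory
INFO stats

# Connections: compare connected_clients with the limit of your instance type.
INFO clients
```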
If your instance uses the cluster architecture or read/write splitting architecture, the system also evaluates the overall access performance of the instance based on the preceding performance metrics and displays the results in the diagnostic report. For more information, see Cluster architecture and Read/write splitting architecture. The following table describes the criteria that are used to determine skewed requests, the possible causes of skewed requests, and the corresponding troubleshooting methods.
Criterion | Possible cause | Troubleshooting method |
---|---|---|
The following conditions are met: | | |
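Skewed requests are frequently caused by hot keys that concentrate traffic on a single data node. As a quick check, you can use the built-in hot key scan of redis-cli shown below. This is a generic Redis technique rather than a feature of the diagnostic report, and the --hotkeys option works only when the maxmemory-policy parameter is set to an LFU policy.

```shell
# Scan for hot keys (requires maxmemory-policy to be allkeys-lfu or volatile-lfu).
redis-cli -h r-example.redis.rds.aliyuncs.com -p 6379 -a yourPassword --hotkeys
```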
TOP 10 nodes that receive the greatest number of slow queries
This section displays the top 10 data nodes that receive the greatest number of slow queries and statistics about the slow queries. The statistics come from the following slow logs:
- The slow logs of data nodes that are stored in the system audit logs. These slow logs are retained only for four days.
- The slow logs that are stored on the data node. Only the most recent 1,024 log entries are retained. You can use redis-cli to connect to the instance and run the SLOWLOG GET command to view these slow logs, as shown in the example after this list.
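For example, the following redis-cli session inspects the slow logs that are stored on a data node. The output format is standard Redis behavior; entry contents vary by workload.

```shell
# Check how many slow log entries are currently stored on the node.
SLOWLOG LEN

# Retrieve the 10 most recent slow log entries. Each entry contains the entry ID,
# the execution timestamp, the execution duration in microseconds, and the
# command together with its arguments.
SLOWLOG GET 10

# Optional: clear the slow log after you complete your analysis.
SLOWLOG RESET
```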
You can analyze the slow queries, determine whether inefficient commands exist, and apply the solutions described in the following table.
Cause | Solution |
---|---|
Commands that have a time complexity of O(N) or that consume a large amount of CPU resources are run, such as KEYS *. | Evaluate and disable high-risk commands that consume a large amount of CPU resources, such as FLUSHALL, KEYS, and HGETALL. For more information, see Disable high-risk commands. |
Large keys are frequently read from and written to the data nodes. | Analyze and evaluate the large keys. For more information, see Offline key analysis. Then, split these large keys based on your business requirements. |
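As a sketch of these solutions, the following session replaces a blocking KEYS query with the cursor-based SCAN command and uses the --bigkeys option of redis-cli to sample large keys. Both are standard Redis features that Tair supports; the key names, key pattern, endpoint, and password are placeholder examples.

```shell
# Instead of the blocking O(N) query:
#   KEYS user:*
# iterate with SCAN, which returns a cursor plus a small batch of keys per call
# and does not block the data node.
SCAN 0 MATCH user:* COUNT 100
# Repeat the call with the returned cursor until the cursor is 0.

# Sample the largest key of each data type on a node.
redis-cli -h r-example.redis.rds.aliyuncs.com -p 6379 -a yourPassword --bigkeys

# Check the memory footprint of a suspected large key (in bytes).
MEMORY USAGE user:12345
```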