If you use the pay-by-ingested-data billing mode, you are charged separately for each billable item. For example, if you upload and store logs, you are charged a log write fee and a log storage fee. This topic describes the billable items of the pay-by-ingested-data billing mode and how the fee for each item is calculated.
Precautions
You can log on to the Simple Log Service console to view the write traffic, read traffic, and storage usage of the previous day.
In the SAU (Riyadh - Partner Region) region, you cannot use resource plans to offset the fees of pay-by-ingested-data billable items.
What is an OCU?
Observability Capacity Unit (OCU) is a new billing unit introduced by Managed Service for Grafana. OCU usage is automatically calculated based on hourly resource usage.
The computing features of Simple Log Service are gradually transitioning to OCU-based billing, which is measured by the compute resources that you actually consume. One OCU is approximately equivalent to 0.5 CPU cores, 2 GB of memory, and 3,000 IOPS. To calculate the total number of OCUs, the system separately calculates the OCU quantity implied by the CPU cores, the memory, and the IOPS that are consumed, and the maximum of the three values is billed as the final OCU usage. For example, if a computing job consumes 1 CPU core, 2 GB of memory, and 3,000 IOPS, the per-dimension OCU quantities are 2, 1, and 1, so the job consumes 2 OCUs.
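The following Python sketch illustrates how the per-dimension OCU quantities and the final billed value can be derived from the 0.5-core / 2 GB / 3,000 IOPS equivalence described above. The function name and parameters are illustrative assumptions, not the official billing logic.

```python
def ocu_usage(cpu_cores: float, memory_gb: float, iops: float) -> float:
    """Illustrative OCU estimate: 1 OCU ~= 0.5 CPU cores, 2 GB of memory, 3,000 IOPS.

    The OCU quantity is computed for each dimension separately,
    and the maximum of the three values is billed.
    """
    ocu_by_cpu = cpu_cores / 0.5      # OCUs implied by CPU usage
    ocu_by_memory = memory_gb / 2.0   # OCUs implied by memory usage
    ocu_by_iops = iops / 3000.0       # OCUs implied by IOPS
    return max(ocu_by_cpu, ocu_by_memory, ocu_by_iops)

# Example from the text: 1 CPU core, 2 GB of memory, and 3,000 IOPS
print(ocu_usage(cpu_cores=1, memory_gb=2, iops=3000))  # 2.0 OCUs
```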
The following table describes the billable items of pay-by-ingested-data. For more information, visit Pricing of Simple Log Service.
| Billable item | Description | Fee calculation | Free quota |
| --- | --- | --- | --- |
| Ingested raw data volume | When data is uploaded to Simple Log Service, the data is compressed. The ingested raw data volume is the size of the raw data that is uploaded to Simple Log Service. | | None |
| Storage usage of the hot storage tier | The storage usage of the hot storage tier is the total size of the compressed log data and the indexes that are created on the raw log data. You are charged for storage starting 30 days after the data is stored. For example, 1 GB of raw log data is uploaded and indexes are created for two fields. The data is compressed at a 5:1 ratio, and the indexes occupy 0.5 GB. The storage usage of the hot storage tier is 0.2 GB (compressed data) + 0.5 GB (indexes) = 0.7 GB. For a worked calculation, see the sketch after this table. | | None |
| Storage usage of the IA storage tier | If you enable the intelligent tiered storage feature, logs are moved from the hot storage tier to the IA storage tier (formerly the cold storage tier) after the data retention period that is specified for the hot storage tier ends. In this case, you are charged based on the storage usage of the IA storage tier, which is the total size of the compressed log data and the indexes that are created on the raw log data. For example, 1 GB of raw log data is uploaded and indexes are created for two fields. The data is compressed at a 5:1 ratio, and the indexes occupy 0.5 GB. The storage usage of the IA storage tier is 0.2 GB + 0.5 GB = 0.7 GB. | | None |
| Storage usage of the Archive storage tier | If you enable the intelligent tiered storage feature, logs are moved to the Archive storage tier after the data retention period that is specified for the hot storage tier or the IA storage tier ends. In this case, you are charged based on the storage usage of the Archive storage tier, which is the total size of the compressed log data and the indexes that are created on the raw log data. For example, 1 GB of raw log data is uploaded and indexes are created for two fields. The data is compressed at a 5:1 ratio, and the indexes occupy 0.5 GB. The storage usage of the Archive storage tier is 0.2 GB + 0.5 GB = 0.7 GB. | | None |
| Read traffic over the Internet | If data is pulled over a public Simple Log Service endpoint, read traffic over the Internet is generated. The traffic is calculated based on the size of the compressed data. | | None |
| Transfer acceleration | The fee is calculated based on the inbound and outbound traffic that is generated through the transfer acceleration domain name. Transfer acceleration is measured based on the actual amount of data that is transferred. If data is compressed before it is uploaded, the traffic is calculated based on the compressed data volume. For more information, see Use the transfer acceleration feature. | | None |
| Ingest processor | You can use an ingest processor to process data, such as data filtering, field extraction, field enrichment, and data masking, before the data is written to a Logstore. You are charged based on the amount of compute resources that are consumed during data processing. The billing unit is OCU. Within a 1-hour measurement window, processing 1 GB of data with the ingest processor consumes approximately 1/3 of an OCU on average. For a worked estimate, see the sketch after this table. | | None |
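To make the storage-tier examples in the table concrete, the following Python sketch reproduces the 1 GB raw data / 5:1 compression / 0.5 GB index calculation. The same formula applies to the hot, IA, and Archive storage tiers; the function name and parameters are illustrative assumptions.

```python
def billable_storage_gb(raw_data_gb: float, compression_ratio: float, index_size_gb: float) -> float:
    """Illustrative billable storage for any tier: compressed log data + index size."""
    compressed_gb = raw_data_gb / compression_ratio  # e.g. 1 GB at a 5:1 ratio -> 0.2 GB
    return compressed_gb + index_size_gb

# Example from the table: 1 GB of raw logs, 5:1 compression, 0.5 GB of indexes
print(f"{billable_storage_gb(1.0, 5.0, 0.5):g} GB")  # 0.7 GB
```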
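Similarly, the ingest processor row states that processing 1 GB of data consumes roughly 1/3 of an OCU within a 1-hour measurement window. A minimal sketch of that estimate, assuming a simple linear relationship (the function name is hypothetical):

```python
def estimated_ingest_processor_ocu(data_gb_per_hour: float) -> float:
    """Rough hourly estimate: ~1/3 OCU per GB processed by the ingest processor."""
    return data_gb_per_hour / 3.0

# Example: 6 GB processed in a 1-hour measurement window -> about 2 OCUs
print(estimated_ingest_processor_ocu(6.0))  # 2.0
```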