
PouchContainer RingBuffer Log Practices

This article analyzes why PouchContainer introduces a non-blocking log buffer and illustrates how such a buffer is put into practice in Golang.

PouchContainer is Alibaba's open-source container technology. It helps enterprises containerize their existing services and provides reliable isolation, and it is committed to offering new and reliable container technology. Apart from managing service life cycles, PouchContainer also collects container logs. This article describes PouchContainer's log data streams, analyzes why the non-blocking log buffer was introduced, and illustrates how the non-blocking log buffer is put into practice in Golang.

PouchContainer Log Data Streams

Currently, PouchContainer creates and starts containers through Containerd. The modules involved are shown in the following figure. A runtime by itself is just a process and, unlike a daemon, offers no communication capability of its own. To manage runtimes more easily, the Shim service is introduced between Containerd and the runtime. The Shim service not only manages the life cycle of a runtime but also forwards the runtime's standard input/output data, that is, the log data generated by the container.

[Figure 1: modules involved when PouchContainer creates and starts a container through Containerd]

However, Containerd does not offer RPC interfaces for receiving a container's log data. Instead, log data is exchanged between PouchContainer and the Shim service through named pipes: the Shim service writes the input/output data produced by the runtime into a named pipe, and PouchContainer reads it out from the other end, as shown in the following figure.

[Figure 2: the Shim service writes log data into a named pipe and PouchContainer reads it from the other end]

In other words, a container's input/output data travels through the kernel. What problem can this communication method cause?

Problem with the Log Data Forwarding Link

The input/output data written to a named pipe is buffered in the kernel, and the kernel does not grow this buffer indefinitely. Once the buffer is full, the writing end blocks. The following code simulates this scenario.

package main

import (
        "io"
        "log"
        "os"
        "strconv"
        "syscall"
        "time"
)

var (
        namedPipePath = "/tmp/namedpipe"
        dataSize      = 1024
)

func main() {
        mkfifo()

        // set the data block size for the writer, in KB
        kb, err := strconv.ParseInt(os.Args[1], 10, 64)
        if err != nil {
                log.Fatal("failed to parse the block size argument:", err)
        }
        dataSize *= int(kb)

        // start the writer side in a separate goroutine
        waitCh := make(chan struct{})
        go func() {
                // let the reader side know the writer goroutine has started
                close(waitCh)

                in, err := os.OpenFile(namedPipePath, syscall.O_WRONLY, 0600)
                if err != nil {
                        log.Fatal("failed to open named pipe for writing:", err)
                }

                defer in.Close()
                if _, err := in.Write(make([]byte, dataSize)); err != nil {
                        log.Fatal("failed to write it into named pipe:", err)
                }
                log.Printf("finished to write %d KB data into named pipe", kb)
        }()

        <-waitCh

        out, err := os.OpenFile(namedPipePath, syscall.O_RDONLY, 0600)
        if err != nil {
                log.Fatal("failed to open named pipe for reading:", err)
        }
        defer out.Close()

        // don't copy the data right now to make the buffer full
        time.Sleep(2 * time.Second)
        log.Println("Start to read data from Named Pipe")
        if _, err := io.Copy(os.Stdout, out); err != nil {
                log.Fatal("failed to read the data:", err)
        }
}

func mkfifo() {
        os.Remove(namedPipePath)

        if err := syscall.Mkfifo(namedPipePath, 0666); err != nil {
                log.Fatal("failed to create named pipe:", err)
        }
}

The code above starts a goroutine that writes data into the named pipe, while the reading end deliberately waits for a while before consuming it, so the kernel buffer fills up. The result is shown in the following output.

$ go run main.go 4    # 4KB
2018/07/02 19:37:55 finished to write 4 KB data into named pipe
2018/07/02 19:37:57 Start to read data from Named Pipe
$ go run main.go 64   # 64KB
2018/07/02 19:38:03 finished to write 64 KB data into named pipe
2018/07/02 19:38:05 Start to read data from Named Pipe
$ go run main.go 128  # 128KB
2018/07/02 19:38:12 Start to read data from Named Pipe
2018/07/02 19:38:12 finished to write 128 KB data into named pipe

When the block size is 4 KB or 64 KB, the writing end can hand its data over to the kernel immediately. When the block size is 128 KB, the writing end blocks and only resumes once the reading end starts consuming the data.

Note: In the demo above, the named pipe's kernel buffer is 64 KB by default; it can be resized with the F_SETPIPE_SZ fcntl command.
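
For reference, a pipe's kernel buffer can be inspected and resized from Go with fcntl. The sketch below is a minimal example using the golang.org/x/sys/unix package and its F_GETPIPE_SZ/F_SETPIPE_SZ commands (Linux-specific); it operates on an anonymous pipe from os.Pipe purely to stay self-contained, and the 256 KB target size is only an illustration.

package main

import (
        "log"
        "os"

        "golang.org/x/sys/unix"
)

func main() {
        // os.Pipe is used here only to obtain pipe file descriptors; the same
        // fcntl calls apply to a named pipe opened with os.OpenFile.
        r, w, err := os.Pipe()
        if err != nil {
                log.Fatal("failed to create pipe:", err)
        }
        defer r.Close()
        defer w.Close()

        // query the current kernel buffer size (64 KB by default on Linux)
        size, err := unix.FcntlInt(w.Fd(), unix.F_GETPIPE_SZ, 0)
        if err != nil {
                log.Fatal("failed to get pipe buffer size:", err)
        }
        log.Printf("current pipe buffer size: %d bytes", size)

        // grow the buffer to 256 KB; for unprivileged processes the upper
        // bound is /proc/sys/fs/pipe-max-size
        if _, err := unix.FcntlInt(w.Fd(), unix.F_SETPIPE_SZ, 256*1024); err != nil {
                log.Fatal("failed to set pipe buffer size:", err)
        }

        newSize, _ := unix.FcntlInt(w.Fd(), unix.F_GETPIPE_SZ, 0)
        log.Printf("new pipe buffer size: %d bytes", newSize)
}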

If the Shim service generates a large volume of logs, PouchContainer needs to consume the data quickly to avoid blocking. Log data is forwarded by the Shim service and then again by PouchContainer: the current version of PouchContainer supports multiple log drivers, each with its own data format and destination; for example, Jsonfile flushes the data to the host's disk. As shown in the figure below, a container's standard output is therefore forwarded twice, and each hop can block. Once any hop of log forwarding blocks, the service itself is affected.

[Figure 3: a container's standard output is forwarded by the Shim service and then by PouchContainer's log driver]

Having the service redirect its logs directly to files eliminates the forwarding problem at the root, but it also requires the infrastructure to provide an additional way to collect container logs; a common solution is FileBeat + ELK Stack. If the infrastructure instead uses PouchContainer as the log collection tool, the service's resources need to be limited to slow down log generation and avoid blocking. Which approach fits best depends on the service's infrastructure.

For high-concurrency scenarios that generate a large volume of logs, where the service can tolerate losing some log data, PouchContainer needs to offer a non-blocking option.

RingBuffer Practices in Golang

After reading data from the named pipe, PouchContainer buffers it in memory before flushing it to local storage or forwarding it to other log collection services. When that in-memory buffer is full, a ring buffer lets new data overwrite the oldest data, avoiding blocking at the cost of losing some data. For the container management scenario, the RingBuffer interface in PouchContainer is defined as follows.

type RingBuffer interface {
    // Push pushes a value into the buffer and returns whether it
    // overwrote the oldest data or not.
    Push(val interface{}) (bool, error)

    // Pop pops a value from the buffer.
    //
    // NOTE: it returns ErrClosed if the buffer has been closed.
    Pop() (interface{}, error)

    // Drain returns all the data left in the buffer.
    //
    // NOTE: it can be called after Close to make sure the remaining
    // data has been consumed.
    Drain() []interface{}

    // Close closes the ringbuffer.
    Close() error
}

Note: The Drain interface ensures that the log data remaining in the buffer can still be forwarded after the container has finished running.

PouchContainer is written in Golang. For this kind of communication problem between a writer goroutine and a reader goroutine, a channel is the first thing that comes to mind.

package main

import "fmt"

func main() {
        size := 5
        bufCh := make(chan int, size)

        for i := 0; i < size*2; i++ {
                select {
                case bufCh <- i:
                default:
                        // remove the first one
                        <-bufCh
                        // add the new one
                        bufCh <- i
                }
        }
        close(bufCh)

        // val == size and ok == true
        val, ok := <-bufCh
        fmt.Println(val, ok)
}
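
For completeness, the following is a minimal sketch of how the RingBuffer interface defined above could be satisfied with a mutex and a condition variable. It is an illustration only, not PouchContainer's actual implementation; the names ringBuffer, newRingBuffer, and ErrClosed are invented for this example, and a slice-shifting buffer is used for clarity rather than an index-based circular array.

package main

import (
        "errors"
        "fmt"
        "sync"
)

// ErrClosed is returned by Pop once the buffer has been closed and emptied.
var ErrClosed = errors.New("ringbuffer: closed")

// ringBuffer is a fixed-capacity FIFO that overwrites the oldest element
// when full, so producers never block.
type ringBuffer struct {
        mu       sync.Mutex
        wait     *sync.Cond
        data     []interface{}
        capacity int
        closed   bool
}

func newRingBuffer(capacity int) *ringBuffer {
        rb := &ringBuffer{data: make([]interface{}, 0, capacity), capacity: capacity}
        rb.wait = sync.NewCond(&rb.mu)
        return rb
}

// Push appends val; if the buffer is full, the oldest element is dropped
// and true is returned to signal that data was overwritten.
func (rb *ringBuffer) Push(val interface{}) (bool, error) {
        rb.mu.Lock()
        defer rb.mu.Unlock()

        if rb.closed {
                return false, ErrClosed
        }
        covered := false
        if len(rb.data) == rb.capacity {
                rb.data = rb.data[1:] // drop the oldest element instead of blocking
                covered = true
        }
        rb.data = append(rb.data, val)
        rb.wait.Signal()
        return covered, nil
}

// Pop blocks until an element is available or the buffer is closed.
func (rb *ringBuffer) Pop() (interface{}, error) {
        rb.mu.Lock()
        defer rb.mu.Unlock()

        for len(rb.data) == 0 && !rb.closed {
                rb.wait.Wait()
        }
        if len(rb.data) == 0 {
                return nil, ErrClosed
        }
        val := rb.data[0]
        rb.data = rb.data[1:]
        return val, nil
}

// Drain returns whatever is left; typically called after Close to flush
// the remaining log entries.
func (rb *ringBuffer) Drain() []interface{} {
        rb.mu.Lock()
        defer rb.mu.Unlock()

        rest := rb.data
        rb.data = nil
        return rest
}

// Close marks the buffer as closed and wakes up any blocked readers.
func (rb *ringBuffer) Close() error {
        rb.mu.Lock()
        defer rb.mu.Unlock()

        rb.closed = true
        rb.wait.Broadcast()
        return nil
}

func main() {
        rb := newRingBuffer(3)
        for i := 0; i < 5; i++ {
                covered, _ := rb.Push(i) // elements 0 and 1 get overwritten
                fmt.Println("pushed", i, "covered:", covered)
        }
        rb.Close()
        fmt.Println("remaining:", rb.Drain()) // remaining: [2 3 4]
}

Running this sketch shows that the two oldest elements are overwritten and that [2 3 4] remain after Close and Drain: the buffer overwrites rather than blocks, which is exactly the behavior PouchContainer needs on its log forwarding path.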