Kubernetes Demystified: Solving Service Dependencies

In the third article of this series, we will explore how we can handle dependencies between services when using Kubernetes.

This series of articles explores some of the common problems enterprise customers encounter when using Kubernetes. One question frequently asked by Container Service customers is, "How do I handle dependencies between services?"

An application's component dependencies include middleware services as well as other business services. With traditional software deployment methods, application startup and shutdown tasks must be completed in a specific order.

When Kubernetes, Docker Swarm, or other container orchestration technologies are used to deploy applications in distributed environments, the different components start up concurrently, so a specific startup order cannot be guaranteed. In addition, while an application is running, the services it depends on may fail or be migrated. Therefore, handling service dependencies between containers is an issue customers raise frequently.

Method 1: Inspecting Dependencies in an Application

We can add service dependency inspection logic to the application's startup logic: if a service that the application requires cannot be accessed, the application retries; if the service is still inaccessible after a set number of retries, the application gives up and exits. Kubernetes and Docker then restart the exited container after a delay, according to the container's restart policy.

The following simple Golang application shows how to check whether a MySQL service dependency is ready.

  ...
    // Connect to database.
    hostPort := net.JoinHostPort(config.Host, config.Port)
    log.Println("Connecting to database at", hostPort)
    dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?timeout=30s",
        config.Username, config.Password, hostPort, config.Database)

    db, err = sql.Open("mysql", dsn)
    if err != nil {
        log.Println(err)
    }

    // Retry the connection with a linearly increasing delay between attempts.
    var dbError error
    maxAttempts := 20
    for attempts := 1; attempts <= maxAttempts; attempts++ {
        dbError = db.Ping()
        if dbError == nil {
            break
        }
        log.Println(dbError)
        time.Sleep(time.Duration(attempts) * time.Second)
    }
    if dbError != nil {
        log.Fatal(dbError)
    }

    log.Println("Application started successfully.")
    ...

Note:

"Fail Fast" is an important principle of Design by Contract that helps ensure system robustness and predictability. In the preceding code, if the retry mechanism fails, log.Fatal(dbError) is reported and the process ends. In addition, the K8S and Docker container restart rollback functions ensure that system resource are not exhausted by repeated failed attempts to access application dependencies.

Method 2: Independent Service Dependency Inspection Logic

In the real world, some legacy applications and frameworks cannot be modified. Therefore, we want to decouple the dependency inspection logic from the application logic.

One common method is to add the relevant service dependency inspection logic to the container's startup script in its Dockerfile. For more information about this method, see the Docker documentation on controlling startup order. Another method is to use the Kubernetes pod mechanisms themselves to add dependency inspection logic.
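
As a rough sketch of the first approach, the image's entrypoint script can poll the dependency before handing control to the real process. The variable names, defaults, and the use of nc below are assumptions for illustration (they are not taken from an actual image), and the script assumes the image ships a netcat binary:

#!/bin/sh
# docker-entrypoint.sh (hypothetical): block until MySQL accepts TCP connections,
# then exec the real command passed as arguments.
host="${DB_HOST:-mysql}"
port="${DB_PORT:-3306}"
max_attempts=30

attempt=1
until nc -z "$host" "$port"; do
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "giving up: $host:$port is still unreachable after $max_attempts attempts" >&2
    exit 1
  fi
  echo "waiting for $host:$port (attempt $attempt/$max_attempts)"
  attempt=$((attempt + 1))
  sleep 2
done

exec "$@"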

Before we start, we must understand the pod lifecycle. The following figure is taken from this article: https://blog.openshift.com/kubernetes-pods-life/.

[Figure: Kubernetes pod lifecycle]

A pod contains three types of containers:

  1. Infra container: the well-known pause container, which holds the pod's network namespace.
  2. Init containers: initialization containers, generally used to initialize and prepare the application environment. The application containers start only after all init containers have run to completion.
  3. Main containers: the application containers.

Best practices for Kubernetes generally rely on init containers to inspect service dependencies. We use the following WordPress example to show how this is done.

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
  - name: wordpress
    port: 80
    targetPort: 80
  selector:
    app: wordpress
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql 
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "true"
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:4
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_PASSWORD
          value: ""
      initContainers:
      - name: init-mysql
        image: busybox
        command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql; sleep 2; done;']

In the WordPress Deployment's pod template, we added an init container. It checks whether the mysql domain name can be resolved, which tells us whether the MySQL service dependency is ready.

At the same time, we introduced a readinessProbe and a livenessProbe in the MySQL StatefulSet to determine whether the MySQL process is ready to serve requests. In Kubernetes, a pod is added to a Service's endpoints, and thus becomes reachable through the ClusterIP or resolvable through a headless service's DNS name (as with mysql here), only once its readiness check passes.

$ kubectl create -f wordpress.yaml
service "mysql" created
service "wordpress" created
statefulset "mysql" created
deployment "wordpress" created
$ kubectl get pods
NAME                         READY     STATUS     RESTARTS   AGE
mysql-0                      0/1       Running    0          5s
wordpress-797655cf44-w4p87   0/1       Init:0/1   0          5s
$ kubectl get pods
NAME                         READY     STATUS     RESTARTS   AGE
mysql-0                      1/1       Running    0          11s
wordpress-797655cf44-w4p87   0/1       Init:0/1   0          11s
$ kubectl get pods
NAME                         READY     STATUS            RESTARTS   AGE
mysql-0                      1/1       Running           0          14s
wordpress-797655cf44-w4p87   0/1       PodInitializing   0          14s
$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
mysql-0                      1/1       Running   0          17s
wordpress-797655cf44-w4p87   1/1       Running   0          17s
$ kubectl describe pods wordpress-797655cf44-w4p87
...

NOTE:

  1. Liveness probe: This probe is mainly used to determine whether the container is still running properly. It can detect deadlocks, slow responses, and similar situations; when it fails, the kubelet restarts the container.
  2. Readiness probe: This probe is mainly used to determine whether the service is ready to handle requests. Until it succeeds, the pod is not added to the Service's endpoints (the commands after this list show how to observe this).
  3. Readiness probes cannot be used in init containers.
  4. If the pod restarts, all of its init containers must be run again.
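
To observe the effect of the readiness probe, you can watch the Service's endpoints and pod readiness while MySQL starts up; the endpoint list stays empty until the probe succeeds. The commands below are for illustration only, with output omitted:

$ kubectl get endpoints mysql
$ kubectl get pods -l app=mysql -w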

Conclusion

This article discussed common solutions used to inspect service dependencies and provided an example to demonstrate how to use init containers, liveness and readiness probes, and other service health check and dependency inspection functions.

Kubernetes provides flexible pod lifecycle management functions. Due to space limitations, we did not discuss postStart, preStop, and other lifecycle hooks.

Alibaba Cloud Kubernetes Service is the first such service with certified Kubernetes consistency. It simplifies Kubernetes cluster lifecycle management and provides built-in integration for Alibaba Cloud products. In addition, the service further optimizes the Kubernetes developer experience, allowing users to focus on the value of cloud applications and further innovations.
