
Decipher the Open-source Serverless Container Framework: Event-driven

This article introduces the open-source serverless container framework Knative, emphasizing its event-driven capabilities for cloud-native applications.

By Yuanyi

Introduction to Event-driven

Event-driven refers to a communication model in distributed systems where components interact through event notifications rather than direct request-response calls. It is characterized by asynchronous communication and loose coupling. In an event-driven architecture (EDA), components collaborate by publishing and subscribing to events. These events can represent user actions, system state changes, sensor data, and so on.
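The publish/subscribe decoupling described above can be sketched with a minimal in-process event bus. This is an illustrative toy, not a Knative API; the `EventBus` class and the event name are invented for the example:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process event bus illustrating publish/subscribe decoupling."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # The publisher never calls consumers directly; the bus routes the event.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("user.signup", received.append)
bus.publish("user.signup", {"user": "alice"})
print(received)  # [{'user': 'alice'}]
```

The publisher only knows the event type, never the consumers, which is the loose coupling the definition above describes.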

Cloud-native Serverless Event-driven Framework: Knative Eventing

Knative is an open-source serverless framework based on Kubernetes clusters, providing a cloud-native and cross-platform orchestration standard for serverless applications. Knative Eventing, an essential component of this serverless architecture, provides cloud-native event-driven capabilities.

Knative Eventing is an independent platform that supports various workloads, including standard Kubernetes services and Knative Serving services. It uses standard HTTP POST requests to send and receive events between event producers and receivers. These events conform to the CloudEvents specification and support creating, parsing, sending, and receiving events in any programming language. In addition, Knative Eventing components are loosely coupled, allowing them to be developed and deployed independently.
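A CloudEvents 1.0 event in HTTP binary content mode maps context attributes to `Ce-*` headers and carries the data in the request body. The following stdlib-only sketch shows the shape of such a request; the `to_binary_http` helper is invented for illustration:

```python
import json
import uuid

def to_binary_http(event_type, source, data):
    """Render a CloudEvent in HTTP binary content mode: context attributes
    become Ce-* headers and the data travels as the request body."""
    headers = {
        "Ce-Id": str(uuid.uuid4()),
        "Ce-Specversion": "1.0",
        "Ce-Type": event_type,
        "Ce-Source": source,
        "Content-Type": "application/json",
    }
    body = json.dumps(data)
    return headers, body

headers, body = to_binary_http(
    "dev.knative.samples.helloworld",
    "dev.knative.samples/helloworldsource",
    {"msg": "Hello World"},
)
print(headers["Ce-Type"])  # dev.knative.samples.helloworld
```

Any HTTP client in any language can build the same request, which is why Knative Eventing is language-agnostic.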

Knative Eventing is used in the following scenarios:

Publish events: Events can be dispatched as HTTP POSTs to brokers, decoupling them from the applications that generate the events.

Consume events: Triggers can be used to consume events from brokers based on event attributes. The consuming service receives events in the form of HTTP POST.

Event Mesh in Knative (Broker/Trigger)

The event mesh is a dynamic infrastructure designed to simplify the distribution of events from senders to receivers. Similar to traditional messaging architectures like Apache Kafka or RabbitMQ, the event mesh provides asynchronous (store-and-forward) messaging, allowing for the temporal decoupling of senders and receivers. However, unlike traditional integration models based on message channels, the event mesh also simplifies routing for senders and receivers by decoupling them from the underlying event transport infrastructure, which may be a set of federated solutions such as Kafka, RabbitMQ, or cloud provider infrastructure. The event mesh transports events from producers to consumers through interconnected event brokers in any environment, even transferring events between clouds in a seamless and loosely coupled manner.

[Figure 1: The Knative event mesh (sourced from the Knative Community)]

As shown in the preceding figure, the Knative event mesh defines the broker and trigger APIs for event ingress and egress. Knative Eventing uses a pattern called "duck typing" to allow multiple types of resources to participate in the event mesh. Duck typing allows multiple types of resources to advertise common capabilities, such as "events can be received at a URL" or "events can be sent to a destination." Knative Eventing uses these capabilities to provide an interoperable pool of sources for sending events to brokers and serving as the destination for routing events through triggers. The Knative Eventing API contains three types:

Event ingress: Event senders are connected through Source duck typing and SinkBinding, making it easy to configure applications to send events to brokers. Even without any sources installed, applications can still submit events and use Knative Eventing.

Event routing: Brokers and triggers support the definition of event mesh and event routing. Note that brokers conform to the definition of an addressable event target, allowing events to be relayed from a broker in one cluster to a broker in another cluster. Similarly, the triggers use the same deliverable duck typing as many sources, making it easy to replace direct event delivery with the event mesh.

Event egress: The deliverable contract supports specifying either a raw URL or a reference to a Kubernetes object that implements the addressable interface (with status.address.url) as the destination.
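The addressable contract can be illustrated with a small resolver: a destination is either a raw URI or a reference to an object exposing `status.address.url`. This is a conceptual sketch only; in Knative the controller resolves actual Kubernetes object references, whereas the dictionaries here merely stand in for those objects:

```python
def resolve_destination(dest):
    """Resolve a deliverable destination: either a raw URI or a reference to
    an object that advertises the addressable contract via status.address.url."""
    if "uri" in dest:
        return dest["uri"]
    ref = dest.get("ref", {})
    url = ref.get("status", {}).get("address", {}).get("url")
    if url is None:
        raise ValueError("referenced object is not addressable")
    return url

# A Broker-like object that "quacks" like an addressable (duck typing).
broker = {"status": {"address": {
    "url": "http://broker-ingress.knative-eventing.svc.cluster.local/default/default"}}}

print(resolve_destination({"ref": broker}))
print(resolve_destination({"uri": "http://example.com/sink"}))
```

Any resource that exposes the same `status.address.url` shape can participate in the mesh, which is the point of duck typing.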

Event Sources

Knative Event Sources

The Knative community provides extensive support for event sources, mainly including the following:

APIServerSource: This source integrates events from the Kubernetes API server into Knative. Whenever a Kubernetes resource is created, updated, or deleted, an APIServerSource triggers a new event.

PingSource: This source sends a periodic ping event notification at specified intervals.

Apache CouchDB: This source integrates messages from Apache CouchDB into Knative.

Apache Kafka: KafkaSource reads events from an Apache Kafka cluster and sends them to the event consumer.

Other supported sources include RabbitMQ, GitHub, GitLab, and RedisSource.

In addition, Knative also supports third-party event sources such as Apache Camel and VMware.

Event Forwarding

Broker/Trigger Event Forwarding Process

This section uses the InMemoryChannel (IMC) as an example to describe event processing in Knative.

To use the broker/trigger model in Knative Eventing, you need to select the corresponding channel, that is, the event forwarding system. Currently, the community supports event-forwarding channels such as Kafka, NATS Streaming, and InMemoryChannel. The default channel is InMemoryChannel.

[Figure 2: Broker/trigger event forwarding with the InMemoryChannel]

The key components are described as follows:

Ingress: In the broker/trigger model, it serves as the event ingress, receiving events and forwarding them to the corresponding channel service.

imc-dispatch: The event forwarding service for InMemoryChannel. It receives event requests from the ingress and fans out the events to the filter service based on the forwarding targets (subscriptions) described in the InMemoryChannel.

Filter: Implements event filtering based on the rules defined in the trigger and forwards matching events to the corresponding target service.
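Conceptually, a trigger's exact-match attribute filter works like this. This is a sketch of the semantics, not the broker-filter implementation; note that an empty filter matches every event:

```python
def matches(filter_attrs, event_attrs):
    """Trigger-style exact-match filtering: every filter key must equal the
    corresponding CloudEvent context attribute. An empty filter ({}) matches
    all events."""
    return all(event_attrs.get(k) == v for k, v in filter_attrs.items())

event = {"type": "dev.knative.samples.helloworld",
         "source": "dev.knative.samples/helloworldsource"}

assert matches({}, event)                                         # empty filter passes everything
assert matches({"type": "dev.knative.samples.helloworld"}, event)
assert not matches({"type": "other.type"}, event)
```

This matches the behavior seen later in the Trigger example, where `filter: {}` delivers all broker events to the subscriber.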

The following example illustrates this with a broker:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  generation: 1
  name: default
  namespace: default
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: config-br-default-channel
    namespace: knative-eventing
  delivery:
    backoffDelay: PT0.2S
    backoffPolicy: exponential
    retry: 10
status:
  address:
    name: http
    url: http://broker-ingress.knative-eventing.svc.cluster.local/default/default
  annotations:
    knative.dev/channelAPIVersion: messaging.knative.dev/v1
    knative.dev/channelAddress: http://default-kne-trigger-kn-channel.default.svc.cluster.local
    knative.dev/channelKind: InMemoryChannel
    knative.dev/channelName: default-kne-trigger
  ...

Here, status.address.url follows the pattern http://broker-ingress.knative-eventing.svc.cluster.local/{namespace}/{broker}. In other words, each broker, once created, corresponds to a request path on the ingress service.
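That path convention can be captured in a one-line helper (illustrative only; the function name is invented):

```python
def broker_ingress_url(namespace, broker):
    """Each Broker maps to a path on the shared broker-ingress service:
    /{namespace}/{broker}."""
    return (f"http://broker-ingress.knative-eventing.svc.cluster.local"
            f"/{namespace}/{broker}")

print(broker_ingress_url("default", "default"))
# http://broker-ingress.knative-eventing.svc.cluster.local/default/default
```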

Where does the ingress forward the event after receiving it? It routes events based on the knative.dev/channelAddress annotation in the status. With the InMemoryChannel, events are forwarded to the default-kne-trigger-kn-channel service:

http://default-kne-trigger-kn-channel.default.svc.cluster.local

Let's then look at what this default-kne-trigger-kn-channel service corresponds to. It is an ExternalName service pointing to the imc-dispatcher service, meaning that events are forwarded through the dispatcher pods.

kubectl get svc default-kne-trigger-kn-channel
NAME                             TYPE           CLUSTER-IP   EXTERNAL-IP                                         PORT(S)   AGE
default-kne-trigger-kn-channel   ExternalName   <none>       imc-dispatcher.knative-eventing.svc.cluster.local   80/TCP    98m

The core processing of the dispatcher is in the fanout handler, which is responsible for distributing the received events to different subscriptions.

Now let's take a look at the configuration of InMemoryChannel, which defines the subscriptions to which events are forwarded.

apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  labels:
    eventing.knative.dev/broker: default
    eventing.knative.dev/brokerEverything: "true"
  name: default-kne-trigger
  namespace: default
  ownerReferences:
  - apiVersion: eventing.knative.dev/v1
    blockOwnerDeletion: true
    controller: true
    kind: Broker
    name: default
    uid: cb148e43-6e6c-45b0-a7b9-c5b1d81eeeb6
spec:
  delivery:
    backoffDelay: PT0.2S
    backoffPolicy: exponential
    retry: 10
  subscribers:
  - delivery:
      backoffDelay: PT0.2S
      backoffPolicy: exponential
      retry: 10
    generation: 1
    replyUri: http://broker-ingress.knative-eventing.svc.cluster.local/default/default
    subscriberUri: http://broker-filter.knative-eventing.svc.cluster.local/triggers/default/my-service-trigger/f8df36a0-df4c-47cb-8c9b-1405111aa7dd
    uid: 382fe07c-ce4d-409b-a316-9be0b585183a
status:
  address:
    name: http
    url: http://default-kne-trigger-kn-channel.default.svc.cluster.local
  ...
  subscribers:
  - observedGeneration: 1
    ready: "True"
    uid: 382fe07c-ce4d-409b-a316-9be0b585183a

The subscriberUri points to http://broker-filter.knative-eventing.svc.cluster.local/triggers/default/my-service-trigger/f8df36a0-df4c-47cb-8c9b-1405111aa7dd; that is, the event is forwarded to the broker-filter service.

The broker-filter service then filters the events based on the filtering rules defined in the trigger (the filter property) and sends the filtered event to the subscriberUri address specified in the status. In this case, it is http://event-display.default.svc.cluster.local.

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  labels:
    eventing.knative.dev/broker: default
  name: my-service-trigger
  namespace: default
spec:
  broker: default
  filter: {}
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
      namespace: default
status:
  ...
  observedGeneration: 1
  subscriberUri: http://event-display.default.svc.cluster.local

At this point, the entire process of using the InMemoryChannel for forwarding based on the broker/trigger model is complete. You can verify the flow by sending a CloudEvent to the broker ingress and checking the event consumer's log:

curl -v "http://172.16.85.64/default/default" -X POST -H "Ce-Id: 536808d3-88be-4077-9d7a-a3f162705f79" -H "Ce-Specversion: 1.0" -H "Ce-Type: dev.knative.samples.helloworld" -H "Ce-Source: dev.knative.samples/helloworldsource" -H "Content-Type: application/json" -d '{"msg":"Hello World from the curl pod."}'
 
2024/09/23 03:25:23 receive cloudevents.Event: %!(EXTRA string=Validation: valid Context Attributes,   specversion: 1.0   type: dev.knative.samples.helloworld   source: dev.knative.samples/helloworldsource   id: 536808d3-88be-4077-9d7a-a3f162705f79   time: 2024-09-23T03:25:03.355819672Z   datacontenttype: application/json Extensions,   knativearrivaltime: 2024-09-23T03:25:23.380731115Z Data,   {     "msg": "Hello World from the curl pod."   } )

Event Orchestration

Knative Eventing provides two types of Custom Resource Definitions (CRDs) for defining event orchestration processes:

Sequence: sequential event processing workflow

Parallel: parallel event processing workflow

Sequence

Sequence orchestrates multiple Knative services in order, using the output of each step as the input of the next service. The configuration example is as follows:

apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: sequence
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  steps:
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: first
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: second
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: third
  reply:
    ref:
      kind: Service
      apiVersion: serving.knative.dev/v1
      name: event-display
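The semantics of the Sequence above can be sketched as plain function composition: each step's reply becomes the next step's input, and the final result goes to the reply target. The lambdas below merely stand in for the first/second/third services:

```python
def run_sequence(steps, reply, event):
    """Sequence semantics: each step receives the previous step's reply as
    its input; the final result is delivered to the reply target."""
    for step in steps:
        event = step(event)
    return reply(event)

# Stand-ins for the first/second/third Knative Services and event-display.
first = lambda e: e + " -> first"
second = lambda e: e + " -> second"
third = lambda e: e + " -> third"
display = lambda e: e

print(run_sequence([first, second, third], display, "ping"))
# ping -> first -> second -> third
```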

Scenarios include:

Sequential processing

As shown in the figure, create a PingSource to provide events to the sequence, and then obtain the output of the sequence to display the result.

[Figure 3: Sequential processing]

Connect a sequence to another sequence

As shown in the figure, create a PingSource to send events to the first sequence service. Then, obtain the output processed by this sequence service and forward it to the second sequence service, finally displaying the result.

[Figure 4: Connecting one sequence to another sequence]

Direct processing

As shown in the figure, create a PingSource to provide events to the sequence. Then directly process the events sequentially through the sequence.

[Figure 5: Direct processing]

Use the broker/trigger model

As shown in the figure, create a PingSource to send events to the broker, and create a trigger to route these events to a sequence composed of three services. The output of the sequence is sent back to the broker as new events, which a second trigger routes to the EventDisplay service for printing.

[Figure 6: Sequence with the broker/trigger model]

Parallel

Parallel is the parallel processing workflow defined in Knative Eventing, with the configuration example as follows:

apiVersion: flows.knative.dev/v1
kind: Parallel
metadata:
  name: demo-parallel
  namespace: default
spec:
  branches:
  - subscriber:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: demo-ksvc1
        namespace: default
  - subscriber:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: demo-ksvc2
        namespace: default
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel

The parallel event processing workflow is shown in the figure.

[Figure 7: Parallel event processing workflow]

Simply configure the Parallel resource as the trigger's subscriber. In the Parallel spec you can define multiple target services, and the system automatically creates the corresponding subscriptions to fan events out to each Knative Service instance.
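The fan-out semantics can be sketched as follows: each branch subscriber receives its own copy of the event and processes it independently. The lambdas stand in for demo-ksvc1 and demo-ksvc2:

```python
def run_parallel(branches, event):
    """Parallel semantics: the same event is fanned out to every branch
    subscriber; each branch processes its own copy independently."""
    return [subscriber(dict(event)) for subscriber in branches]

# Stand-ins for the two Knative Services in the Parallel example.
ksvc1 = lambda e: {**e, "handled_by": "demo-ksvc1"}
ksvc2 = lambda e: {**e, "handled_by": "demo-ksvc2"}

results = run_parallel([ksvc1, ksvc2], {"msg": "hello"})
print([r["handled_by"] for r in results])  # ['demo-ksvc1', 'demo-ksvc2']
```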

Integration with EventBridge

The default InMemoryChannel in Knative Eventing stores events in memory and is therefore not recommended by the community for production environments. Instead, use a message or event-streaming product such as Kafka or EventBridge.

EventBridge is a serverless event bus service that is provided by Alibaba Cloud. You can connect Alibaba Cloud services, custom applications, and software as a service (SaaS) applications to EventBridge in a standardized and centralized manner. You can use EventBridge to route events among the preceding applications based on the standardized CloudEvents 1.0 protocol. You can also use EventBridge to build loosely coupled and distributed event-driven architectures.

EventBridge supports a wide array of event sources. You can configure event buses, rules, and targets to filter, transform, and deliver events. By using EventBridge to trigger Knative Services to consume events, you can use resources on demand. The technical architecture diagram is shown below:

[Figure 8: Technical architecture of EventBridge integrated with Knative]

Currently, Alibaba Cloud Container Service for Kubernetes (ACK) Knative provides the ability to configure triggers for products with one click, as shown in the following figure:

[Figure 9: One-click trigger configuration in ACK Knative]

The effect of delivering events through EventBridge is as follows:

[Figure 10: Event delivery through EventBridge]

The advantages of integrating with EventBridge are as follows:

Standardization and ecosystem

Compatible with the CloudEvents protocol and fully embraces the open-source community ecosystem.

Integrates a wide range of Alibaba Cloud event sources and event target processing services to cover most user scenarios.

High throughput and disaster recovery

Built on a high-throughput, highly reliable, multi-replica disaster-recovery message kernel for storage.

Provides differentiated features such as event playback and event tracking.

Comprehensive functions

Simple and flexible configuration that supports event filtering and event routing.

Provides event pushing across regions, hybrid clouds, and multiple clouds.

Supporting toolchain

Centralized schema storage with multi-language mapping to improve collaboration efficiency in event processing.

Schema discovery, automatic registration and validation, and IDE plugin integration.

Observability and governability

Provides observability for events, supporting event querying, auditing, and full-stack tracing.

Provides governability for events, supporting features such as event throttling, event replay, and event retry.

Differences with KEDA

When it comes to event-driven capabilities in Kubernetes, KEDA (Kubernetes Event-driven Autoscaling) is another well-known project, and questions often arise about how it differs from Knative Eventing. The easiest way to distinguish the two is by their respective roles.

Let's first look at the official definition of KEDA:

KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.

In short, KEDA is an enhanced HPA (Horizontal Pod Autoscaler) that supports a wide variety of metrics. It scales pods based on event counts, and the pods themselves are responsible for fetching and consuming events from the corresponding services. In the event-driven process, KEDA does not take over event forwarding; it only scales the number of consumer pods based on event metrics. It is essentially a simple event-driven autoscaling service.
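KEDA's role can be reduced to an HPA-style replica calculation. The following is a simplified sketch of the idea, not KEDA's actual code; the function and parameter names are invented:

```python
import math

def desired_replicas(pending_events, target_per_replica,
                     min_replicas=0, max_replicas=10):
    """HPA-style calculation: scale the consumer deployment to roughly
    pending_events / target_per_replica, clamped to [min, max]. KEDA only
    adjusts replica counts; the pods themselves pull and consume events."""
    if pending_events == 0:
        return min_replicas  # scale to zero when the queue is empty
    replicas = math.ceil(pending_events / target_per_replica)
    return max(min_replicas, min(replicas, max_replicas))

print(desired_replicas(0, 5))    # 0
print(desired_replicas(12, 5))   # 3
print(desired_replicas(100, 5))  # 10 (capped at max_replicas)
```

Contrast this with Knative Eventing, which actively routes the events themselves rather than merely counting them.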

Let's then take a look at Knative Eventing:

Knative Eventing is a collection of APIs that enable you to use an event-driven architecture with your applications.

As described in this article, Knative Eventing takes over the orchestration, forwarding, filtering based on rules, and distribution of events, providing a relatively comprehensive event-driven framework.

Therefore, when choosing between Knative Eventing and KEDA, it is best to make the decision based on specific scenarios.

An Interesting Scenario

Finally, let's share a simple event-driven demo scenario in Knative.

As we know, when we are thirsty, we drink water. The human body handles this simple task through a flow that can be abstracted as shown in the following figure.

[Figure 11: Abstracted processing flow for drinking water when thirsty]

To simulate this scenario in Knative, you can perform the following operations:

• Send a thirsty event.

• Qwen receives the thirsty input and makes a decision.

• Execute a service that simulates drinking water.

[Figure 12: Simulating the drink-water scenario in Knative]

According to the relevant health guidelines for drinking water, office workers should form good drinking habits, such as drinking water every hour. Here, we assume that a PingSource is used to simulate sending a thirsty signal every hour.

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-source
spec:
  schedule: "0 */1 * * *"
  contentType: "application/json"
  data: '{"model": "qwen", "messages": [{"role": "user", "content": "thirsty"}], "max_tokens": 10, "temperature": 0.7, "top_p": 0.9, "seed": 10}'
  sink:
    ref:
      apiVersion: flows.knative.dev/v1
      kind: Sequence
      name: sequence

Sequence is used to orchestrate the received signals and send them to Qwen. For more information, please refer to Deploy a vLLM Inference Application on Knative.

apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: sequence
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  steps:
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: qwen
  reply:
    ref:
      kind: Service
      apiVersion: serving.knative.dev/v1
      name: drink-svc

Qwen provides the following decision:

{"id":"cmpl-6251aab6a0dc4932beb82714373db2ac","object":"chat.completion","created":1733899095,"model":"qwen","choices":[{"index":0,"message":{"role":"assistant","content":"If you feel thirsty, you can try drinking some water"},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":10,"total_tokens":20,"completion_tokens":10}}

Then, the drink-svc service for drinking water is called:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: drink-svc
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/knative-sample/event-display:v1211-action
        env:
        - name: ACTION
          value: "drink water"

Here, the output is printed directly:

# The log output is as follows:
ACTION: drink water

Finally, let's take a look at the Agentic AI scenario defined by NVIDIA:

[Figure 13: The Agentic AI scenario defined by NVIDIA]

Agentic AI refers to AI systems with a higher degree of autonomy, capable of proactively thinking, planning, and executing tasks rather than relying solely on predefined instructions. Therefore, in the application scenarios of Agentic AI, Knative may be able to provide some assistance.

Summary

Knative Eventing offers an open-source, cloud-native, serverless event-driven framework, and it shows considerable application potential in scenarios that combine serverless and AI.


Alibaba Container Service

190 posts | 33 followers
