Commit db7059d (parent 1613b37)

🐫 Adding apache camel k blog post

Signed-off-by: Matthias Wessendorf <mwessend@redhat.com>

2 files changed: +97 −0 lines changed

blog/config/nav.yml (1 addition, 0 deletions)

@@ -47,6 +47,7 @@ nav:
        - releases/announcing-knative-v0-3-release.md
        - releases/announcing-knative-v0-2-release.md
    - Articles:
+       - articles/knative-meets-apache-camel.md
        - articles/knative-backstage-plugins.md
        - articles/demystifying-activator-on-path.md
        - articles/knative-eventing-vision.md
# Event Sourcing with Apache Camel K and Knative Eventing

**Author: Matthias Weßendorf, Senior Principal Software Engineer @ Red Hat**

## Why Apache Camel K?

[Apache Camel](https://camel.apache.org/){:target="_blank"} is a popular open source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data. With [Apache Camel K](https://camel.apache.org/camel-k/latest){:target="_blank"} the project provides a lightweight integration framework, built from Apache Camel, that runs natively on Kubernetes and is specifically designed for serverless and microservice architectures.

Camel K also supports Knative, allowing developers to [bind](https://camel.apache.org/camel-k/latest/kamelets/kamelets-user.html#kamelets-usage-binding){:target="_blank"} any Kamelet to a Knative component. A Kamelet can act as a "source" of data or alternatively as a "sink". There are several Kamelets available for integrating and connecting to 3rd-party services or products, such as Amazon Web Services (AWS), Google Cloud, or even traditional messaging protocols like AMQP 1.0 and JMS brokers like Apache Artemis. The full list of Kamelets can be found [here](https://camel.apache.org/camel-kamelets/latest/index.html){:target="_blank"}.
## Installation

The [installation](https://camel.apache.org/camel-k/next/installation/installation.html) of Apache Camel K offers a few choices, such as the CLI, Kustomize, OLM, or Helm:

```bash
$ helm repo add camel-k https://apache.github.io/camel-k/charts/
$ helm install my-camel-k camel-k/camel-k
```

Besides Camel K we also need Knative Eventing installed, as described [here](https://knative.dev/docs/install/yaml-install/eventing/install-eventing-with-yaml/).
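As a minimal sketch of the linked YAML-based installation, it boils down to applying the CRDs, the core components, and a Broker implementation (the release version below is an assumption; check the Knative releases page for the current one):

```shell
# Install the Knative Eventing CRDs and core components
# (the version tag is an assumption; pick a current release)
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.14.0/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.14.0/eventing-core.yaml

# Install a Broker implementation, e.g. the default MT-Channel-Based Broker
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.14.0/in-memory-channel.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.14.0/mt-channel-broker.yaml
```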
## Creating a Knative Broker instance

We are using a Knative Broker as the heart of our system, acting as an [Event Mesh](https://knative.dev/docs/eventing/event-mesh/) for both event producers and event consumers:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  namespace: default
  name: demo-broker
```

Now event producers can send events to it and event consumers can receive events.
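To see the producer side in action without any Kamelet, you can POST a CloudEvent directly to the broker's ingress. This is a sketch assuming the default MT-Channel-Based Broker, whose ingress URL follows the `broker-ingress.knative-eventing.svc.cluster.local/<namespace>/<broker>` pattern, run from a pod inside the cluster:

```shell
# Send a test CloudEvent (binary content mode) to demo-broker in the default namespace
curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/default/demo-broker" \
  -X POST \
  -H "Ce-Id: test-1" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: com.corp.my.timer.source" \
  -H "Ce-Source: manual-test" \
  -H "Content-Type: application/json" \
  -d '{"msg": "Hello Knative Eventing!"}'
```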
## Using Kamelets as Event Sources

In order to bind a Kamelet to a Knative component, like the above broker, we are using the `Pipe` API. A Pipe declaratively moves data from a system described by a Kamelet _towards_ a Knative destination, **or** _from_ a Knative destination to another (external) system described by a Kamelet.

Below is a `Pipe` that uses a ready-to-use `Kamelet`, the `timer-source`:

```yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: beer-source-pipe
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: Hello Knative Eventing!
  sink:
    properties:
      cloudEventsType: com.corp.my.timer.source
    ref:
      kind: Broker
      apiVersion: eventing.knative.dev/v1
      name: demo-broker
```

The `timer-source` Kamelet is referenced as the `source` of the `Pipe` and periodically (every `1000ms` by default) sends the value of its `message` property to the outbound `sink`. Here we use the Knative Broker, which accepts CloudEvents. The conversion of the message payload into the CloudEvents format is done by Apache Camel for us. On the `sink` we can also define the `type` of the CloudEvent to be sent.
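To run the example, apply the Pipe and wait for the operator to reconcile it into a running integration (the file name below is an assumption; save the manifest under any name you like):

```shell
# Apply the Pipe and check that it becomes ready
kubectl apply -f beer-source-pipe.yaml
kubectl get pipe beer-source-pipe
```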
## Using Kamelets as Event Consumers

In order to consume messages from the Knative Broker with Apache Camel K, we need a different `Pipe`, where the above broker acts as the source of events and a Kamelet is used as the sink to receive the CloudEvents:

```yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: log-sink-pipe
spec:
  source:
    ref:
      kind: Broker
      apiVersion: eventing.knative.dev/v1
      name: demo-broker
    properties:
      type: com.corp.my.timer.source
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: log-sink
```

The `demo-broker` is referenced as the `source` of the `Pipe`, and within the `properties` we define which CloudEvent `type` we are interested in. On a matching CloudEvent, the event is routed to the referenced `sink`. In this example we are using a simple `log-sink` Kamelet, which just prints the received data to its standard output.
> NOTE: For the above to work, the Apache Camel K operator creates a Knative `Trigger` from the `Pipe` data, where `spec.broker` matches our `demo-broker` and the `spec.filter.attributes.type` field is set to `com.corp.my.timer.source`, ensuring that only matching CloudEvent types are forwarded.
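The generated `Trigger` corresponds roughly to the following sketch (the metadata name and the subscriber reference are illustrative; the operator generates its own names and points the subscriber at the service backing the integration):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: log-sink-pipe   # illustrative; the operator generates the actual name
spec:
  broker: demo-broker
  filter:
    attributes:
      type: com.corp.my.timer.source
  subscriber:
    ref:
      # illustrative; in practice this targets the service of the log-sink integration
      apiVersion: v1
      kind: Service
      name: log-sink-pipe
```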
## Conclusion

With Apache Camel K, the Knative Eventing ecosystem benefits from a huge number of predefined `Kamelet`s for integrating with many services and products. Even sending events from Google Cloud to AWS becomes possible. Knative Eventing acts as the heart of the routing, with the Knative `Broker` and `Trigger` APIs as the Event Mesh for your Kubernetes cluster!
