
How to ride a camel through clouds?

The following article gives a short overview of the main concepts of Apache Camel-K as well as basic and advanced integration examples, including tests. Tasked with taking a look at Apache Camel-K as a possible replacement for our Apache Camel based Red Hat Fuse integration platform, the members of our project team analyzed whether Apache Camel-K would fit our “real-world” scenarios and how it could contribute to speeding up development and deployment.

What are Camel and Camel-K?

Camel is an open source integration framework built by the Apache community. It has been out there for many years and is based on the Enterprise Integration Patterns (EIPs) [1]. It comes with a lot of integration-related components which are used to access message queues, APIs, or databases (or anything else you can imagine). Camel is very powerful for sending data between different applications, protocols, and technologies. Camel-K is an integration framework based on Camel that runs natively on Kubernetes and other container platforms like OpenShift. It includes all features of Camel and is designed for serverless and microservice architectures.

Core concepts of Camel-K

Camel-K CLI

Before we dive into the architecture of Camel-K itself, we should note a critical component of the Camel-K stack: the Camel-K CLI.

“The Camel-K command line interface (kamel) is the main entry point for running integrations on a Kubernetes cluster.” [2]

First, we should state the obvious: yes, it is called kamel with a k, and we did not mix up English and German spelling. Second, as the documentation above states, without the Camel-K CLI hardly anything happens. It is used for a whole variety of operations:

  • Install the Camel-K operator in Kubernetes
  • Run, debug and delete integrations
  • Get deployed integrations and see their logs

So it’s kind of a “big deal” and we need it to run Camel-K integrations.
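
In practice, these operations map to kamel subcommands, for example (the integration file and name below are just placeholders):

kamel install
kamel run MyIntegration.java --dev
kamel get
kamel log my-integration
kamel delete my-integration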

Architecture

Operator

The other “big deal” is the Camel-K operator for Kubernetes, which is the counterpart of the Camel-K CLI on the Kubernetes side. The operator not only installs and maintains applications, it also creates resources according to the integration logic expressed through the Camel DSL.
Running “kamel install” installs the operator and defines the following custom resources:

  • IntegrationPlatform
  • Integration
  • IntegrationKit
  • Build
  • CamelCatalog
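
After installation, the resulting custom resource definitions can be listed with standard Kubernetes tooling, for example:

kubectl get crd | grep camel.apache.org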

Runtime

The runtime transforms all configurations coming from the operator into the language of the Camel framework and executes the route written by the user. The Camel-K runtime is a Java application which uses Camel Quarkus under the hood. This allows for a minimal resource footprint and fast startup times. Besides executing our code, it also manages the following on application startup:

  • Loading sources
  • Setting properties (see the example below)
  • Cron
  • Knative
  • Kamelet
  • Master
  • Webhook
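
To give one concrete example for the property handling: values can be passed when running an integration and then referenced inside the route, e.g. in endpoint URIs, via the usual {{key}} placeholder syntax. The property name below is only an illustration:

kamel run --property my.message=Hello MyIntegration.java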

Traits

With traits, the final integration can be configured: traits can be enabled, disabled, or customized. The trait profile specifies on which platform (Knative, OpenShift, Kubernetes) the integration runs, and the configuration defines the id of the trait and the parameter that will be exposed as a key-value pair:

kamel run -t [trait-id].[key]=[value] Integration

The traits can be divided into feature traits like Knative or Prometheus and platform traits, e.g. Jvm, Deployment, Dependencies, Container, or Camel. For example, enabling the Prometheus trait configures a Prometheus-compatible endpoint and creates a PodMonitor resource. Another example is the Jvm trait: it can be configured to activate remote debugging or to add additional entries to the classpath.
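
Besides the command line, traits can also be configured directly in the source file via modeline comments. A minimal sketch, assuming we want to enable the Prometheus trait for a simple timer route (file name and route are our own illustration):

{code:Monitored.java}
// camel-k: language=java trait=prometheus.enabled=true

import org.apache.camel.builder.RouteBuilder;

public class Monitored extends RouteBuilder {

  @Override
  public void configure() throws Exception {

      // The modeline above asks the operator to expose a Prometheus-compatible
      // endpoint for this integration, as described in the Traits section.
      from("timer:metrics?period=1000")
        .setBody()
          .constant("tick")
        .to("log:info");
  }
}
{code}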

Let’s code

Before you start

We used the examples provided in the OpenShift Integration repository for this blog [3]. You can open this repository in any editor of your choice, but we found the recommended combination of VSCode and the VSCode Didact plugin very helpful and convenient. Red Hat CodeReady Containers was chosen as our local development OpenShift cluster. During our preparation we found out that some examples are really resource intensive. Therefore, we recommend starting the cluster with at least 6 CPUs and 16 GB of RAM for a smooth experience. For the sake of completeness, we should mention that we tried all examples on Fedora.

First integration

For showcasing a simple integration we chose the basic example [4]. This example contains two integrations, both written in Java. Camel-K integrations can be written in a variety of languages, which can be found here [5]. If we take a look into the “basic” integration, we see that it starts a timer that ticks every second. It sets a header parameter “example” with the value “Java”. Then it uses this header parameter as part of the body. Finally, it logs the body with the log component.

{code:Basic.java}
// camel-k: language=java

import org.apache.camel.builder.RouteBuilder;

public class Basic extends RouteBuilder {

  @Override
  public void configure() throws Exception {

      from("timer:java?period=1000")
        .setHeader("example")
          .constant("Java")
        .setBody()
          .simple("Hello World! Camel K route written in ${header.example}.")
        .to("log:info");
      
  }
}
{code}

Assuming the project setup is finished, we can run this integration in development mode by typing “kamel run Basic.java --dev”. After some time we see the log of our application. This is a really simple example, but there is quite a lot happening. First of all, there is no deployable artifact in this whole process. We write our integration in a single Java file. No dependencies, no Maven, no unit tests. We send this single Java file to the Camel-K operator, and the operator automatically resolves the needed dependencies, adding the components we need (in our case the timer and log components). Then it builds our application with its dependencies and immediately spins up the integration as a pod in our cluster. In development mode the Camel-K CLI will pick up changes and send the new code to the operator. The operator will rebuild our code, stop the old pod and spin up a new one. Very cool!

Testing integrations

We can test our basic integration with e2e tests powered by the YAKS framework [6]. YAKS is a framework for cloud native BDD (Behavior Driven Development) on Kubernetes. It uses its own operator and CLI, which can be set up as documented here [7]. For our basic integration example we can find the test called “integration.feature” in the test subdirectory. The test is written in a human-readable format (called Gherkin) containing keywords instructing the operator.

{code}
Feature: all integrations print the correct messages

  Scenario: Integration basic prints Hello World
    Given Camel-K integration basic is running
    Then Camel-K integration basic should print Hello World
{code}

We can run our test by typing “yaks test integration.feature”. This will send the test to the YAKS operator. The operator will check the provided test and spin up the necessary integrations, in this case the basic integration. Afterwards the test will check the output of the basic integration and match it against the text “Hello World”.

More stuff …

Knative

Camel-K supports serverless integrations with Knative. To use the power of this feature, the integration has to either provide an HTTP endpoint implemented with the Camel HTTP component (for example with the REST DSL) or consume Knative events [8]. The Camel-K operator automatically recognizes the mentioned components and deploys these integrations as Knative services (the serverless operator has to be installed and Knative Serving must be configured). These Knative services can autoscale based on different metrics (traffic, CPU usage, custom metrics) and can even scale down to zero if no requests are served. A good starting point for looking into this topic is the camel-k-example-knative repository [9].
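
As an illustration, a route like the following (our own minimal sketch, not part of the example repository) exposes an HTTP endpoint via the platform-http component and should therefore be deployed as a Knative service on a cluster where Knative is installed and configured:

{code:Greeter.java}
// camel-k: language=java

import org.apache.camel.builder.RouteBuilder;

public class Greeter extends RouteBuilder {

  @Override
  public void configure() throws Exception {

      // An HTTP consumer signals the operator to create a Knative service,
      // which can scale down to zero when no requests arrive.
      from("platform-http:/greet")
        .setBody()
          .constant("Hello from a serverless Camel K integration!");
  }
}
{code}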

Kamelets

Kamelets (Kamel route snippets) are templates for your routes which can be installed on your cluster and then used in integrations. They are an additional layer of abstraction that makes routes easier to read and reuse. To create a Kamelet, start by writing a Camel route. The next step is to decide on the type of the Kamelet: if we want to send data we have a “sink”, and if we want to consume data it is a “source”. Now we check whether the route logic has any configurable values, e.g. intervals; these are defined as parameters when the Kamelet is created. The last step is to write our information in the Kamelet format, which looks like this:

{code}
apiVersion: camel.apache.org/v1alpha1
kind: Kamelet
metadata:
  name: timer-source
  labels:
    camel.apache.org/kamelet.type: "source"
spec:
  definition:
    title: "Timer"
    description: "Produces periodic events with a custom payload"
    required:
      - message
    properties:
      period:
        title: Period
        description: The time interval between two events
        type: integer
        default: 1000
      message:
        title: Message
        description: The message to generate
        type: string
  types:
    out:
      mediaType: text/plain
  flow:
    from:
      uri: timer:tick
      parameters:
        period: "{{period}}"
      steps:
        - set-body:
            constant: "{{message}}"
        - to: "kamelet:sink"
{code}

Example 1 – Creation of a simple Kamelet [10] 

The kamelet:source and kamelet:sink endpoints are only available in Kamelet route templates and will be replaced with actual references at runtime. After the Kamelet is installed, it can be used like any other Camel component.
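
To give an idea of how this looks in practice, the following route (our own minimal sketch, assuming the Kamelet above is installed under the name timer-source) consumes it through the kamelet component:

{code:KameletUser.java}
// camel-k: language=java

import org.apache.camel.builder.RouteBuilder;

public class KameletUser extends RouteBuilder {

  @Override
  public void configure() throws Exception {

      // The Kamelet name and its declared properties become part of the URI.
      from("kamelet:timer-source?message=Hello&period=2000")
        .to("log:info");
  }
}
{code}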

Conclusion

Camel-K is capable of a lot of things, be it deploying integration code on Kubernetes in less than a second or resolving dependencies and creating resources automatically. As funky as it sounds, hold your horses: just because you have Kubernetes and an integration project does not mean you should use Camel-K. A big downside is that troubleshooting is difficult and debugging is simply annoying. If you want to use Camel-K in a production environment, Red Hat does provide support, but only under certain conditions [11]. All in all, it depends on your business needs and the complexity of your application. In conclusion, we found that Camel-K uses some really good approaches but might need some refinement for “real-world” applications and, most importantly, a shift in the mindset of development teams and business to unleash its full potential.

References

[1] Hohpe, G., Woolf, B. (2003). Enterprise Integration Patterns : Designing, Building, and Deploying Messaging Solutions. Addison-Wesley Professional. ISBN: 0321200683 

[2] The Apache Software Foundation (24.11.2021) “Camel-K CLI (kamel)” [User Guide]. Retrieved from https://camel.apache.org/camel-k/next/cli/cli.html

[3] (24.11.2021) “OpenShift Integration” [GitHub Repository]. Retrieved from https://github.com/openshift-integration

[4] (24.11.2021) “Camel-K Basic Example” [GitHub Repository]. Retrieved from https://github.com/openshift-integration/camel-k-example-basic

[5] The Apache Software Foundation (24.11.2021) “Languages” [User Guide]. Retrieved from https://camel.apache.org/camel-k/1.7.x/languages/languages.html

[6] The YAKS Community (24.11.2021) “YAKS is a platform to enable Cloud Native BDD testing on Kubernetes” [GitHub Repository]. Retrieved from https://github.com/citrusframework/yaks

[7] The YAKS Community (24.11.2021) “Operator install” [User Guide]. Retrieved from https://citrusframework.org/yaks/reference/html/index.html#installation-operator

[8] The Apache Software Foundation (24.11.2021) “Autoscaling with Knative” [User Guide]. Retrieved from https://camel.apache.org/camel-k/next/scaling/integration.html#_autoscaling_with_knative

[9] (24.11.2021) “Camel-K Knative Example” [GitHub Repository]. Retrieved from https://github.com/openshift-integration/camel-k-example-knative

[10] The Apache Software Foundation (24.11.2021) “Creating a simple Kamelet” [User Guide]. Retrieved from https://camel.apache.org/camel-k/1.6.x/kamelets/kamelets-dev.html#_creating_a_simple_kamelet

[11] Red Hat (21.11.2021) “Camel K Supported Configurations” [Article]. Retrieved from https://access.redhat.com/articles/6241991

Fig1. The Apache Software Foundation (24.11.2021) “High Level Architecture”. https://camel.apache.org/camel-k/next/_images/architecture/camel-k-high-level.svg

Written by:
Johannes, Mattias, Rainer