Our pipeline
Our GitLab CI pipeline consists of 5 stages with 7 jobs. Each job belongs to a single stage, and all jobs in a stage run in parallel. Each job uses its own container image: when a job is executed, GitLab starts a container based on the defined image and injects the source code into it. Different jobs use different images to match the requirements at hand, e.g. the tools they need. One benefit of this approach is that the Ops Team no longer has to install or maintain these tools for us, as they did back when we deployed to VMs.
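To illustrate the layout, here is a minimal .gitlab-ci.yml sketch. The stage identifiers and the Buildah image name are assumptions for illustration, not our exact configuration.

```yaml
# Sketch only – stage identifiers and image names are assumptions
stages:
  - build
  - store-for-test
  - pre-integration-test
  - integration-test
  - store-for-development

# Every job declares the container image that provides exactly the tools it needs,
# so nothing has to be installed or maintained on the runner itself.
create-test-image:
  stage: store-for-test
  image: quay.io/buildah/stable   # assumed image providing Buildah
  script:
    - buildah --version
```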
Stage Build
Job create-base-image
To save some time – and a lot of disk space – we created a base image containing JBoss and some other binaries that all our applications need. This job only runs if the Dockerfile for the base image is modified in Git. To build our images we use Buildah's bud command (build-using-dockerfile). The credentials we use to upload our images to Nexus are stored in GitLab, and only privileged people can view them. This reduces access to the credentials while still allowing a larger number of employees to improve our images. Both IT Security and Application Development benefit: IT Security can breathe a little easier knowing that the secret is only used by the pipeline and is barely accessible to employees, while developers working with the base image can still easily contribute patches to it.
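A hedged sketch of what such a job can look like: the `rules: changes:` clause restricts the job to commits that touch the Dockerfile, while the Dockerfile path, registry URL and credential variables are placeholders for the values stored in GitLab.

```yaml
create-base-image:
  stage: build
  image: quay.io/buildah/stable            # assumed Buildah image
  rules:
    - changes:
        - docker/base/Dockerfile           # hypothetical path to the base image Dockerfile
  script:
    - buildah bud -t "$NEXUS_REGISTRY/jboss-base:latest" docker/base
    # NEXUS_USER / NEXUS_PASSWORD are protected CI/CD variables only visible to privileged users
    - buildah push --creds "$NEXUS_USER:$NEXUS_PASSWORD" "$NEXUS_REGISTRY/jboss-base:latest"
```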
Job compile-and-test
We create our WAR file with Maven and run unit tests. However, we do not upload the artifacts to Nexus. Instead we store them in the GitLab job itself: GitLab has a neat little feature that allows subsequent jobs to use the artifacts generated by previous jobs.
Since our GitLab runners run inside our Kubernetes cluster, we had some difficulties caching the Maven dependencies between pipeline runs. By default Maven downloads all dependencies from the internet on every build, because each build runs in a fresh container. The easiest way to mitigate this is to use our existing Nexus as a proxy/cache, which reduces the build time.
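A sketch of the compile-and-test job under these assumptions: the Maven image name and the NEXUS_MAVEN_PROXY variable (pointing at the Nexus proxy repository) are placeholders.

```yaml
compile-and-test:
  stage: build
  image: maven:3.8-openjdk-11              # assumed Maven image
  before_script:
    # Write a minimal settings.xml that routes all dependency downloads through Nexus
    - |
      cat > ci-settings.xml <<EOF
      <settings>
        <mirrors>
          <mirror>
            <id>nexus</id>
            <mirrorOf>*</mirrorOf>
            <url>${NEXUS_MAVEN_PROXY}</url>
          </mirror>
        </mirrors>
      </settings>
      EOF
  script:
    - mvn -B -s ci-settings.xml verify
  artifacts:
    paths:
      - target/*.war                       # picked up by later jobs instead of being uploaded to Nexus
```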
Stage Store for Test
Job create-test-image
We create our images with Buildah. We start from the previously created base image and copy the previously built WAR file into the new image. Since we need to configure JBoss, we use the jboss-cli to execute a CLI file; in this CLI file an embedded server is started and the configuration is applied. The image then gets uploaded to Nexus with the tag “to-test”.
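Sketched under the assumption that the WAR file is taken from the compile-and-test artifacts and that the application Dockerfile runs the jboss-cli script; image and variable names are placeholders.

```yaml
create-test-image:
  stage: store-for-test
  image: quay.io/buildah/stable            # assumed Buildah image
  dependencies:
    - compile-and-test                     # makes the WAR artifact available in the workspace
  script:
    # The Dockerfile is assumed to start FROM the base image, COPY the WAR into JBoss
    # and RUN jboss-cli.sh --file=configure.cli (which boots an embedded server)
    - buildah bud -t "$NEXUS_REGISTRY/myapp:to-test" .
    - buildah push --creds "$NEXUS_USER:$NEXUS_PASSWORD" "$NEXUS_REGISTRY/myapp:to-test"
```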
Stage Pre Integration Test
Job pre-integration-test
Because we have integration tests and want to see if everything works as expected in Kubernetes, we deploy the image using Helm. All required Kubernetes resources are created and the image tagged “to-test” is deployed. We use JaCoCo to measure code coverage, so we start JBoss with the JaCoCo Java agent enabled. To be able to dump the coverage report later, we expose the agent through a random NodePort. In addition, our integration tests need a RabbitMQ installation, which is also installed via a Helm chart.
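A rough sketch assuming a Helm image, a chart in the repository, and value keys such as image.tag and jacoco.enabled; all of these names are illustrative.

```yaml
pre-integration-test:
  stage: pre-integration-test
  image: alpine/helm:3.11.1                # assumed Helm image
  script:
    # Deploy the application image tagged "to-test"; the chart is assumed to expose
    # values for the image tag, the JaCoCo agent and a NodePort for the agent's TCP port
    - helm upgrade --install myapp ./chart
        --set image.tag=to-test
        --set jacoco.enabled=true
    # RabbitMQ needed by the integration tests, installed from a public chart (assumption)
    - helm repo add bitnami https://charts.bitnami.com/bitnami
    - helm upgrade --install rabbitmq bitnami/rabbitmq
```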
Stage Integration Test
Job integration-test
The integration tests are executed with Maven. Once all tests have run, we dump the coverage report and trigger a SonarQube analysis.
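A sketch of this step: the JaCoCo dump goal and the Sonar properties are standard Maven plugin options, but the variable names holding the NodePort address and the Sonar credentials are assumptions.

```yaml
integration-test:
  stage: integration-test
  image: maven:3.8-openjdk-11              # assumed Maven image
  script:
    - mvn -B verify                        # runs the integration tests
    # Pull the coverage data from the JaCoCo agent exposed through the NodePort
    - mvn -B org.jacoco:jacoco-maven-plugin:dump
        -Djacoco.address=$NODE_IP -Djacoco.port=$JACOCO_NODEPORT
    # Trigger the SonarQube analysis
    - mvn -B sonar:sonar -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN
```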
Stage Store for Development
Job promote-image
Tagging is a great feature of container technology. With a tool called Skopeo we can copy an image from one container registry to another. We can also use it to add a tag to an image in a registry without first transferring the image over the network.
In this part of our pipeline we benefit from those features: we can promote an image and add two tags without putting additional traffic on our network. The first tag we add is “develop-latest”, the second is the hash of the Git commit. Why do we add the commit hash, you may ask? To release our application, we merge the develop branch into our master branch. The latest commit on master then references an image in the registry, and we can simply add another tag to that image, like “latest” or a version number, for better readability.
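A sketch of the promotion with Skopeo; the registry and credential variable names are placeholders, while CI_COMMIT_SHA is a predefined GitLab variable.

```yaml
promote-image:
  stage: store-for-development
  image: quay.io/skopeo/stable             # assumed Skopeo image
  script:
    # "Copying" within the same registry just adds a tag – the layers are not re-transferred
    - skopeo copy --src-creds "$NEXUS_USER:$NEXUS_PASSWORD" --dest-creds "$NEXUS_USER:$NEXUS_PASSWORD"
        docker://$NEXUS_REGISTRY/myapp:to-test docker://$NEXUS_REGISTRY/myapp:develop-latest
    - skopeo copy --src-creds "$NEXUS_USER:$NEXUS_PASSWORD" --dest-creds "$NEXUS_USER:$NEXUS_PASSWORD"
        docker://$NEXUS_REGISTRY/myapp:to-test docker://$NEXUS_REGISTRY/myapp:$CI_COMMIT_SHA
```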
Job upload-helm-chart
As previously mentioned, we use ChartMuseum to store our Helm charts. Before we upload the Helm chart to ChartMuseum, the image tag is set as the appVersion of the Helm chart (Chart.yaml -> appVersion). At installation time Kubernetes knows which image and tag to pull because the Helm chart references it.
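A sketch under the assumption that an image with both Helm and curl is available and that CHARTMUSEUM_URL points at our ChartMuseum; `helm package --app-version` writes the image tag into the packaged chart's appVersion, and “myapp” is a placeholder chart name.

```yaml
upload-helm-chart:
  stage: store-for-development
  image: alpine/helm:3.11.1                # assumption: the image also needs curl for the upload
  script:
    # Bake the image tag (the commit hash) into the chart as appVersion
    - helm package chart/ --version "1.0.0-$CI_COMMIT_SHORT_SHA" --app-version "$CI_COMMIT_SHA"
    # ChartMuseum accepts chart uploads on its HTTP API
    - curl --fail --data-binary "@myapp-1.0.0-$CI_COMMIT_SHORT_SHA.tgz" "$CHARTMUSEUM_URL/api/charts"
```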