Background: Avoris Travel is a unique travel tour operator serving Spain and Portugal, with a global presence in 15 Latin American countries and beyond. At its core, the company is driven by digital transformation, so it's no wonder that when build performance began to decline, the team knew improvements were needed.
With Kubernetes agents, we saw that within each Jenkins build, dependencies were downloaded to an .m2 directory that was deleted after the build. Then, when generating the final image to be delivered to the different environments, a similar problem surfaced: image layers common to every build were also being downloaded and subsequently deleted.
Goals: Minimize delivery cycle times.
Solution & Results: To eliminate the continuous, repetitive download of Maven dependencies, we decided to use the mounted volumes offered by the Kubernetes plugin, mapping the agent's directory (/var/jenkins_home/.m2) to a shared directory on the Kubernetes nodes (/mnt/home-jenkins-m2). This allows all builds to share the same .m2 repository.
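As a rough sketch of this kind of setup (the agent label, the Maven container image, and the build command below are illustrative assumptions, not Avoris's exact configuration), the Kubernetes plugin's hostPathVolume can map the shared node directory into the agent pod:

```groovy
// Sketch only: pod template sharing one Maven repository across all builds on a node.
podTemplate(
    label: 'maven-build',                        // hypothetical agent label
    containers: [
        containerTemplate(
            name: 'maven',
            image: 'maven:3.8-openjdk-11',       // placeholder build image
            command: 'sleep',
            args: '99d'
        )
    ],
    volumes: [
        // Map the node directory to the agent's .m2 so builds reuse downloaded dependencies
        hostPathVolume(hostPath: '/mnt/home-jenkins-m2',
                       mountPath: '/var/jenkins_home/.m2')
    ]
) {
    node('maven-build') {
        container('maven') {
            checkout scm
            // Point Maven explicitly at the shared local repository
            sh 'mvn -B -Dmaven.repo.local=/var/jenkins_home/.m2 clean package'
        }
    }
}
```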
Jenkins rocks because of the huge number of integration possibilities and plugins available, as well as the huge community and resources available.
To ensure safe concurrent access to that shared repository, we installed the Takari plugin (Concurrent Safe Local Repository). To eliminate the repetitive download of images and base layers, we took the same hostPath volume approach as with the Maven dependencies, mapping the agent's directory (/var/run/containers/storage) to the node directory /mnt/var-run-containers-storage. We were able to do all of this thanks to the overlay storage driver. Images are built with Buildah, and pushing them to the clusters is handled by Skopeo.
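A similarly hedged sketch of an image-build stage follows; the agent label, the container image (anything with Buildah and Skopeo installed), the application image name, and the registry URL are placeholders rather than actual values:

```groovy
// Sketch only: build an image with Buildah and push it with Skopeo,
// reusing container storage shared on the node.
podTemplate(
    label: 'image-build',                        // hypothetical agent label
    containers: [
        containerTemplate(name: 'buildah',
                          image: 'quay.io/buildah/stable',  // placeholder image with Buildah/Skopeo
                          command: 'sleep', args: '99d',
                          privileged: true)
    ],
    volumes: [
        // Share container storage so common base layers are not re-downloaded
        hostPathVolume(hostPath: '/mnt/var-run-containers-storage',
                       mountPath: '/var/run/containers/storage')
    ]
) {
    node('image-build') {
        container('buildah') {
            checkout scm
            // Build the image with the overlay storage driver
            sh 'buildah --storage-driver=overlay bud -t myapp:latest .'
            // Copy the freshly built image from local storage to the cluster registry
            sh 'skopeo copy containers-storage:localhost/myapp:latest ' +
               'docker://registry.example.com/myapp:latest'
        }
    }
}
```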
All of our builds run on Kubernetes agent nodes. Jenkins itself is our own Docker image based on the official Jenkins LTS distribution, to which we add other components such as the Takari plugin, the JCasC (Configuration as Code) configuration, and the OpenShift CLI for cluster operations. Our jobs are structured as Bitbucket multibranch pipelines that all use a Jenkins shared library, hosted in a Git repository, which in turn contains all the pipeline stages. Now, any modification to the library applies to every project. We use parameterized Jenkinsfiles, driven by the technology and other parameters, to configure the builds. For example (Jenkinsfile):
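The Jenkinsfile below is a hypothetical illustration of this pattern; the library name, the standardPipeline entry point, and the parameter names are invented for the example, not the team's actual identifiers:

```groovy
// Sketch of a parameterized Jenkinsfile that delegates all stages to a shared library.
@Library('avoris-pipeline') _   // hypothetical shared library name

standardPipeline(                       // hypothetical global step defined in the library
    technology: 'maven',                // selects the Maven build stages
    javaVersion: '11',
    deployEnvironments: ['dev', 'pre'], // target environments for delivery
    imageName: 'booking-service'        // hypothetical service name
)
```

With this layout, each repository keeps only a few lines of configuration, while the shared library owns the stages, so pipeline changes roll out to every project at once.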
Our project was successful thanks to the vast number of Jenkins plugins available, the extensibility of Jenkinsfiles through Groovy, and the many integration possibilities. We also relied on the large, helpful Jenkins community and the wealth of Jenkins resources available.
The results exceeded our expectations for our goal of minimizing delivery cycle times: