Background: Our Java application consists of over 50 repositories, resulting in at least 50 Jenkins jobs. We also maintain several code lines simultaneously, resulting in over 300 Jenkins jobs with a lot of duplication between those code lines.
The first challenge was to prevent this kind of duplication with a generation script. For this, we have an advanced seed job in place that generates/updates all Jenkins jobs. To create a release (we ship a setup.exe and a Docker image artifact), we must run all of these 50 jobs in a specific order (to prevent dependency issues and to build against a specific version) and finally combine all of them into a single web app artifact.
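To give an idea of the approach, here is a minimal Job DSL sketch of such a seed job. The repository names, code lines, and Git URL are placeholders, and our actual seed script is considerably more advanced:

```groovy
// Minimal Job DSL sketch of a seed job. Repository names, code lines, and the
// Git URL are placeholders; the real seed script is far more advanced.
def repositories = ['repo-a', 'repo-b', 'repo-c']          // in reality, 50+ repositories
def codeLines = ['main', 'release-1.x', 'release-2.x']     // maintained code lines

codeLines.each { codeLine ->
    folder(codeLine)
    repositories.each { repo ->
        // One generated pipeline job per repository per code line, all from one template.
        pipelineJob("${codeLine}/${repo}") {
            definition {
                cpsScm {
                    scm {
                        git {
                            remote { url("https://git.example.com/${repo}.git") }
                            branch(codeLine)
                        }
                    }
                    scriptPath('Jenkinsfile')
                }
            }
        }
    }
}
```

Re-running the seed job regenerates every job from this single template, which is what keeps the 300+ jobs free of copy-paste duplication.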
Doing this by hand is tedious and cumbersome, so we developed a dependency ordering calculation script and a Jenkins pipeline library that help us generate a single pipeline job that does all the required lifting. Once the release has been created, it automatically kicks off a subsequent pipeline that extensively tests the generated release artifact. When that succeeds, the release candidate (RC) is marked as stable and pushed to the appropriate channels.
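Conceptually, the generated release pipeline is a sequence of downstream builds in the calculated dependency order, followed by a hand-off to the test pipeline. The following is only a simplified, hypothetical sketch of that idea; the hard-coded build order, job names, and the VERSION parameter are illustrative, while the real pipeline is generated from the dependency calculation:

```groovy
// Simplified, hypothetical sketch of the generated release pipeline.
// Job names and the VERSION parameter are illustrative; each generated job is
// assumed to accept a VERSION parameter.
def buildOrder = [
    ['common-utils'],               // level 1: no dependencies
    ['service-a', 'service-b'],     // level 2: depends only on level 1
    ['webapp-assembly']             // level 3: combines everything into the web app artifact
]

pipeline {
    agent any
    parameters {
        string(name: 'RELEASE_VERSION', defaultValue: '1.0.0', description: 'Version to release')
    }
    stages {
        stage('Build in dependency order') {
            steps {
                script {
                    buildOrder.each { level ->
                        // Jobs within the same level do not depend on each other,
                        // so they could also be triggered in parallel.
                        level.each { jobName ->
                            build job: jobName,
                                  parameters: [string(name: 'VERSION', value: params.RELEASE_VERSION)]
                        }
                    }
                }
            }
        }
        stage('Kick off release tests') {
            steps {
                // The downstream test pipeline marks the RC as stable and
                // publishes it once all tests have passed.
                build job: 'release-tests', wait: false,
                      parameters: [string(name: 'VERSION', value: params.RELEASE_VERSION)]
            }
        }
    }
}
```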
Goals: One-click release.
Jenkins gives us stability and a reliable release cycle with just a single click of a button, even for a complex software product with many components.
Solution & Results: For the first challenge (having lots of jobs) we developed a Jenkins seed job that allows us to re-generate/update all jobs in all code lines (over 300 jobs).
For the second challenge, we developed a Jenkins pipeline library with lots of Groovy that hooks into the Jenkins runtime, allowing us to do anything we want. This made it possible for us to build a release of our product (which involves over 50 Jenkins jobs), test it extensively, and stamp it as stable once all tests have succeeded, with just a single click of a button!
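As a flavor of what a step in such a shared library can look like, here is a hypothetical global variable (vars/markReleaseStable.groovy) that stamps a build as stable. The step name and its exact actions are illustrative, and this kind of direct access to the Jenkins runtime assumes a trusted (non-sandboxed) library:

```groovy
// vars/markReleaseStable.groovy -- hypothetical global step in a shared library.
// Direct access to the underlying Run object assumes a trusted (non-sandboxed) library.
def call(String version) {
    echo "Marking release candidate ${version} as stable"
    // Keep this build forever so the stable RC is never removed by build rotation.
    currentBuild.rawBuild.keepLog(true)
    // Record the stable version in the build description for easy lookup on the job page.
    currentBuild.description = "Stable release ${version}"
}
```

A pipeline can then simply call markReleaseStable(params.RELEASE_VERSION) as its final step.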
We're about to automate that last button as well to fully adopt continuous delivery, so that after every sprint Jenkins automatically tries to create a release once all jobs of a certain code line are stable :-)
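A possible sketch of that planned automation could be a scheduled pipeline that only triggers the release pipeline when every job of a code line is currently green. All names below are placeholders, and accessing the Jenkins model like this assumes a trusted shared library or an approved script:

```groovy
// Hypothetical sketch of the planned automation; folder and job names are placeholders.
import com.cloudbees.groovy.cps.NonCPS
import hudson.model.Job
import hudson.model.Result
import jenkins.model.Jenkins

// Returns true when every job under the given folder finished its last build successfully.
@NonCPS
def codeLineStable(String folderName) {
    Jenkins.get().getAllItems(Job).findAll { it.fullName.startsWith("${folderName}/") }
                 .every { job ->
                     def last = job.lastCompletedBuild
                     last != null && last.result == Result.SUCCESS
                 }
}

if (codeLineStable('release-2.x')) {
    build job: 'release-2.x/create-release', wait: false
}
```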
Another thing we solved was running real performance tests. For this, we leveraged a Jenkins job that kicks off a Terraform script that sets up a specific EC2 instance type on AWS (to measure performance across different kinds of hardware resources), automatically provisions it with Puppet (which sets up a Jenkins agent), waits for that agent to become available, and then runs the performance tests on it. Afterward, the instance is destroyed. This all works in one go, in a single pipeline job!
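As an illustration, the following scripted pipeline sketch shows the overall shape of such a job. The Terraform directory, variable names, labels, and test script are placeholders, and the actual provisioning details (the Puppet run and agent registration) would live in the infrastructure code:

```groovy
// Hypothetical sketch of a one-shot performance test pipeline (scripted syntax).
def instanceType = params.INSTANCE_TYPE ?: 'c5.2xlarge'
def agentLabel = "perf-${env.BUILD_NUMBER}"

node('terraform') {
    try {
        stage('Provision EC2 agent') {
            checkout scm
            dir('infra') {
                sh 'terraform init'
                // Terraform creates the EC2 instance; Puppet (run during provisioning)
                // installs a Jenkins agent that connects back with the unique label.
                sh "terraform apply -auto-approve -var instance_type=${instanceType} -var agent_label=${agentLabel}"
            }
        }
        stage('Run performance tests') {
            // Blocks until an executor with the freshly registered label is available,
            // then runs the tests on the new instance.
            node(agentLabel) {
                checkout scm
                sh './run-performance-tests.sh'
                archiveArtifacts artifacts: 'perf-results/**'
            }
        }
    } finally {
        stage('Destroy EC2 agent') {
            // Tear the instance down again, even if the tests failed.
            dir('infra') {
                sh "terraform destroy -auto-approve -var instance_type=${instanceType} -var agent_label=${agentLabel}"
            }
        }
    }
}
```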
What we really love about Jenkins is its "Swiss army knife" character: an immense ecosystem of available plugins, combined with a custom-tailored pipeline library in Groovy, allows us to make every build wish come true.
Plugins that we really love:
Favorite feature:
Top results: