Jenkins is the way to build chat-bot driven declarative pipelines for continuous delivery

Declarative Pipelines With Chat Bots

Submitted By Jenkins User Prasanjit Singh
StarzPlay, a Middle East/North African media & entertainment company, couldn't imagine deploying CI/CD without Jenkins.
Organization: STARZPLAY, <https://starzplay.com>
Team Members: Faraz Arshad, VP Technology; Pankaj Sharma, Asst. Manager, DevOps; Saman Batool, DevOps Engineer
Industries: Media & Entertainment
Programming Languages: Java, Node.js, PHP, Python
Platform: Android, iOS, Embedded, Docker or Kubernetes, Linux, Microservices
Version Control System: Bitbucket Server
Build Tools: Gradle, Maven, npm
Community Support: Jenkins.io websites & blogs

Using Jenkins & CI/CD to bring better entertainment choices to MENA.

Background: STARZPLAY is the leading OTT provider in the Middle East / North Africa (MENA) region. This means serving millions of concurrent requests. The architecture is cloud-native, with 150+ microservices running together, spread across multiple environments and supporting multiple device platforms and technologies, which makes the deployment process extremely complex.

To solve these delivery challenges, and to meet SLOs such as mean time to market, stay within error budgets, and keep the delivery process resilient, the DevOps team came up with a unique solution: declarative pipelines!

Goals: Building declarative pipelines for continuous delivery to multiple environments triggered by Chatbots.

Solution & Results: We use declarative pipelines to capture all the parameters that differentiate services, environments, repositories, container registries, environment variables, deployment types, and associated information, and to trigger a build and delivery pipeline from them.
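
As a rough illustration of the approach, a parameterized declarative pipeline can expose those differentiators as build parameters. This is a minimal sketch: the parameter names, the registry host, and the stage contents are assumptions, not STARZPLAY's actual pipeline.

    // Illustrative sketch only: parameter names, the registry host, and the stage
    // contents are assumptions, not the real STARZPLAY pipeline.
    pipeline {
        agent any

        parameters {
            string(name: 'SERVICE_NAME', defaultValue: 'catalog-api', description: 'Microservice to build and deploy (hypothetical name)')
            choice(name: 'TARGET_ENV', choices: ['dev', 'staging', 'production'], description: 'Target environment')
            choice(name: 'DEPLOY_TYPE', choices: ['rolling', 'blue-green'], description: 'Deployment strategy')
        }

        environment {
            // Hypothetical private registry; the real value would come from configuration.
            IMAGE = "registry.example.com/${params.SERVICE_NAME}:${env.BUILD_NUMBER}"
        }

        stages {
            stage('Build & Push') {
                steps {
                    sh "docker build -t ${env.IMAGE} ."
                    sh "docker push ${env.IMAGE}"
                }
            }
            stage('Deploy') {
                steps {
                    echo "Deploying ${params.SERVICE_NAME} to ${params.TARGET_ENV} as a ${params.DEPLOY_TYPE} rollout"
                    // The environment-specific orchestrator call would go here.
                }
            }
        }
    }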

Jenkins served as the single point of integration for various moving parts in the Build-Test-Release-Deploy-Monitor cycle. It allowed federated control and opened up the capability to deliver reliably.
Prasanjit Singh, Engineering Manager - Cloud & DevOps, STARZPLAY

The pipelines are triggered through chatbot integration with Jenkins, which maintains a feedback loop and eases operation when the team collaborates. Further, the pipelines integrate Jenkins with artifact registries and multiple cloud providers through plugins, which allows us to store artifacts, manage versions and releases, and deploy to multiple clouds if needed.
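
To give a flavor of that feedback loop (the channel name, job name, and message wording below are assumptions), a chat command typically resolves to Jenkins' standard remote build endpoint, and a post section at the end of a pipeline like the sketch above can report the result back through the Slack Notification plugin's slackSend step:

    // Assumed channel and wording; this post section would sit at the end of the
    // pipeline sketched above.
    //
    // A chat command usually maps to Jenkins' remote build API, e.g.
    //   POST <jenkins-url>/job/deploy-service/buildWithParameters?SERVICE_NAME=catalog-api&TARGET_ENV=staging
    post {
        success {
            slackSend(channel: '#deployments', color: 'good',
                      message: "${params.SERVICE_NAME} deployed to ${params.TARGET_ENV} (build ${env.BUILD_NUMBER})")
        }
        failure {
            slackSend(channel: '#deployments', color: 'danger',
                      message: "${params.SERVICE_NAME} deploy to ${params.TARGET_ENV} failed: ${env.BUILD_URL}")
        }
    }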

Delivering a heterogeneous set of microservices across heterogeneous environments with a distributed team required a robust delivery tool. Jenkins proved to be the one-stop shop! My STARZPLAY DevOps team and I set up a unique solution set.

  • We built a common library of declarative pipeline scripts using Groovy and wired them up to be triggered via chatbot integration from the various team channels where the bot could be invoked (see the shared-library sketch after this list).
  • We integrated our orchestrators, including Apache Mesos, Kubernetes, and Amazon ECS, with these Jenkins pipelines.
  • For artifact storage, Nexus and private Docker registries were roped in.
  • Telemetry and logging agents were integrated into the pipelines.
  • Deployment checks and alerting were built into the scripts to report on application health.
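
Here is a minimal sketch of what one such shared-library step might look like, assuming a vars/deployService.groovy entry point. The step name, parameter keys, and shell commands are illustrative; Kubernetes is shown where the real library would also branch to ECS or Mesos.

    // vars/deployService.groovy -- illustrative shared-library step; the step name,
    // parameter keys, and shell commands are assumptions, not the actual library.
    def call(Map config) {
        // Expected keys (hypothetical): service, env, image.
        echo "Deploying ${config.service} to ${config.env}"

        // Push the image to the private registry (Nexus / private Docker registry).
        sh "docker push ${config.image}"

        // Hand off to the orchestrator for this environment; Kubernetes shown here,
        // but the same step could branch to Amazon ECS or Mesos based on config.env.
        sh """
            kubectl --context ${config.env} set image \
                deployment/${config.service} ${config.service}=${config.image}
        """
    }

A declarative pipeline would then load the library with @Library('deploy-lib') _ (library name hypothetical) and call deployService(service: params.SERVICE_NAME, env: params.TARGET_ENV, image: env.IMAGE) from a steps block.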

The deployment process turned out to be a great success; putting all these moving parts together without Jenkins would have been unimaginable.

Here are the capabilities we relied on most:

Jenkins features used:

  • Declarative pipelines
  • Multibranch
  • Jenkins cluster
  • Cron schedulers
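
For reference, a cron trigger in a declarative pipeline looks like the snippet below (the schedule is an assumption, not STARZPLAY's actual cadence); in a multibranch project the same Jenkinsfile is discovered and scheduled per branch.

    // Minimal sketch; the schedule is an assumption.
    pipeline {
        agent any
        triggers {
            // 'H' spreads load across the hour; roughly a nightly run around 02:00.
            cron('H 2 * * *')
        }
        stages {
            stage('Scheduled checks') {
                steps {
                    // BRANCH_NAME is populated automatically in multibranch jobs.
                    echo "Scheduled run on branch ${env.BRANCH_NAME}"
                }
            }
        }
    }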

Plugins used:

  • Bitbucket Plugin
  • Parameterized Build
  • Maven & Gradle
  • Slack integration
  • AWS CLI
  • Kubernetes plugin
  • SonarQube & JaCoCo
  • Ansible plugins
  • Selenium
  • Nagios plugins

Our top 5 results are as follows:

  • Ease of deployment and maintainability improved 4X
  • Invoking deployments and rollbacks is effortless
  • Release cycles have shortened from weeks to hours
  • The mean time for delivery has reduced from 3 hours to 15 minutes
  • Simultaneous deployments across 15 multi-cloud environments