Jenkins, as we all know, is one of the best Continuous Integration tools out there. But let's not stop at that, because I feel there are many more use cases for Jenkins, and we will try to cover those in this article.
I will not get into the history and installation/configuration of Jenkins, as that is beyond the scope of this article. However, if you would like to read about it, I would highly recommend this article.
To start with, let me tell you how I look at Jenkins: it is an extremely powerful, extremely capable, but dumb tool. I call it a dumb tool only because, as of now, Jenkins is an absolute executor; it cannot make logical decisions on its own. That said, let's not undermine Jenkins in any way. As I said earlier, it is an absolute executor, and it can do anything and everything to perfection. We just need to know how to use Jenkins and how best to utilize its potential, and that is what I would like to touch upon in the sections below.
Use Cases of Jenkins
- Continuous Integration and Continuous Delivery/Deployment (CI/CD)
- SPOT (Single Point of Trigger) for BAD (Build, Automate Infrastructure and Deployment)
Jenkins for Continuous Integration
Continuous Integration is a process in which all development work is integrated at a predefined time or event, and the resulting work is automatically tested and built. The idea is that development errors are identified very early in the process. The basic functionality of Jenkins is to execute a predefined list of steps based on a certain trigger: the trigger might be a change in a version control system or a time-based trigger.
Jenkins performs the following steps:
- Perform a software build
- Archive the build result
Jenkins can be extended with additional plug-ins, e.g. for building and testing Android applications or for supporting the Git version control system; virtually any other activity can be performed with the right plugins.
Jenkins stores all the settings, logs, and build artifacts in its home directory, for example /var/lib/jenkins under the default install location on Ubuntu.
The jobs directory contains the individual jobs configured in the Jenkins install. We can move a job from one Jenkins installation to another by copying the corresponding job directory. We can also copy a job directory to clone a job or rename the directory.
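Since a job is just a directory under the Jenkins home, cloning a job can be sketched as a plain directory copy. A minimal sketch, assuming the default layout described above; the function and job names are illustrative:

```shell
# Clone a Jenkins job by copying its directory under $JENKINS_HOME/jobs.
# Jenkins only sees the new job after "Reload Configuration from Disk"
# (or a restart).
clone_jenkins_job() {
  jenkins_home=$1   # e.g. /var/lib/jenkins
  src_job=$2        # existing job directory name
  dst_job=$3        # name for the cloned job
  cp -r "$jenkins_home/jobs/$src_job" "$jenkins_home/jobs/$dst_job"
}
```

For example, `clone_jenkins_job /var/lib/jenkins build-app build-app-copy` would clone the job `build-app` into `build-app-copy`.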
Jenkins as Scheduler
We all know how cron works. Cron is nothing but a scheduler: it schedules processes, and those processes can be absolutely anything. Cron runs things at fixed times, repeats them, and so on.
Along the same lines, if implemented properly, Jenkins can be your best alternative to cron. Jenkins has a feature called periodic builds, which accepts a schedule in a syntax very similar to cron's, and this in combination with the Execute Shell feature can beat cron anytime, provided we use it in the right way.
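For instance, the "Build periodically" field of a job takes a cron-style schedule, with an extra H token that Jenkins uses to spread jobs over the hour. A minimal example:

```
# "Build periodically" schedule, cron-style: MINUTE HOUR DOM MONTH DOW
H 2 * * 1-5   # once between 02:00 and 02:59, Monday through Friday
```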
Even if we want to migrate from cron to Jenkins, it can be done effortlessly, as the learning curve is next to nothing.
Advantages of Jenkins Over Cron:
- Measuring build success: Jenkins verifies a zero return status for every Execute Shell build step, and if a step returns anything other than zero, it can notify us.
- Logs are available for every job, which means we have logs for every Execute Shell step inside the job.
- Jenkins comes with a feature called Queue, which comes in handy if a job is still running when the next periodic build is due: based on how we define it, Jenkins can run the job immediately, like cron, or queue it to run when the one in progress finishes.
- Jenkins also supports distributed execution: it can sign on to systems over SSH, copy over its own runtime, and run whatever you'd like on the remote system. No matter how many servers in a cluster need scheduled jobs, Jenkins can schedule, execute, and log them all from one server.
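The success-measuring convention from the first advantage can be sketched in plain shell. This is an illustration of the zero-exit-status rule, not actual Jenkins code; run_step is a hypothetical helper:

```shell
# Mimic how Jenkins judges an Execute Shell step: zero exit = success,
# anything else = failure (and, in Jenkins, a notification trigger).
run_step() {
  sh -c "$1"
  status=$?
  if [ "$status" -eq 0 ]; then
    echo "SUCCESS"
  else
    echo "FAILURE (exit $status)"
  fi
  return "$status"
}
```

Running `run_step 'true'` prints SUCCESS, while `run_step 'exit 3'` prints "FAILURE (exit 3)", which is exactly the signal Jenkins turns into a red build and a notification.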
Jenkins as a Single Point of Trigger for BAD
Using Jenkins as a single point of trigger for Build, Automate Infrastructure and Deployment (BAD) is more of a process that can be implemented than a ready-to-use feature of Jenkins. When we use Jenkins for BAD, we are effectively giving it instructions as to what needs to be done and in what way, and we also need to pass the required parameters.
How code gets built varies per project. Jenkins can be used to make different types of builds, such as continuous, official, periodic, nightly, and others. Builds can be triggered manually through jobs, through scheduled or periodic jobs, or even through email.
Jenkins can accept code from various source control systems, such as SVN or any other, and all it needs is the right plugin to support it. Once it gets the code, it compiles it. If the build passes, Jenkins creates an artifact and stores it in a location defined by you; if the build process fails, it triggers notifications to a distribution list, again pre-configured by you.
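That flow (compile, archive the artifact on success, notify on failure) can be condensed into a shell sketch; build.sh, the artifact name, and the notification message are hypothetical stand-ins:

```shell
# Sketch of a build step: compile, archive the artifact on success,
# or emit a notification message on failure.
build_and_archive() {
  workdir=$1
  mkdir -p "$workdir/artifacts"
  if (cd "$workdir" && sh ./build.sh); then
    cp "$workdir/app.tar" "$workdir/artifacts/"   # archive the build result
    echo "BUILD OK"
  else
    echo "BUILD FAILED: notifying the distribution list"
    return 1
  fi
}
```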
I would not like to get into the details of the build steps, as that is not what I want to cover in this article; I would rather focus on the use cases of Jenkins, and yes, build is one really important use case.
When I talk about Infrastructure Automation, what I mean is: how do I bring up the entire infrastructure stack using Jenkins? To achieve this, I would prefer using infrastructure automation tools such as Puppet, Chef, or even Ansible along with Jenkins. If configured properly, infrastructure can be brought up anywhere, be it a physical data center, public clouds such as AWS, or private clouds such as OpenStack. In fact, the same logic can be applied to Azure as well.
In my case, I would be using Chef as my infrastructure automation tool, and I would want my infrastructure to come up in either AWS or OpenStack.
What I would do is this: one of my Jenkins jobs, the one which brings up the infrastructure, would accept the required answers in the form of parameters. These parameters would include, but not be limited to, details such as the CLOUD TYPE, the ACCOUNT or tenant, the components that need to come up, the environment name that needs to be created, and a few others. Once all the parameters are taken, as part of the Execute Shell feature I would call a couple of bash scripts in the backend, which are again version controlled. These scripts take the parameters from the previous section, contact the Chef Server, and create a Chef Environment.
Once the environment is created, I would do a knife create on either AWS or OpenStack and bring up the instances based on the parameters; tagging would also be taken care of. Once the instances are up, with the help of the public IPs and tags, I would attach those instances to the Chef Environment, and the roles would be assigned through the script itself, based on the instance type or tags. I would then bootstrap those machines and run the Chef-Client, and my infrastructure would be ready. Everything is scripted, and everything is controlled via Jenkins. The same approach can also be used, via another job, to bring down the infrastructure.
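To make the parameter flow concrete, here is a sketch of how a backend script might assemble the knife command from the job's parameters. The parameter names are illustrative, and the exact flags of the knife cloud plugins should be treated as assumptions:

```shell
# Compose the knife command a backend script might run, built from the
# Jenkins job parameters (cloud type, environment name, flavor).
compose_knife_create() {
  cloud_type=$1   # e.g. "ec2" or "openstack", from the CLOUD TYPE parameter
  env_name=$2     # the Chef Environment created in the previous step
  flavor=$3       # instance size
  echo "knife $cloud_type server create --environment $env_name --flavor $flavor"
}
```

The script would then execute the composed command, once per instance that needs to come up.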
The above scenario would suit any kind of infrastructure, and is best suited for cloud environments.
Deployment through Jenkins is a straightforward process, and I guess there is no need to go into too much detail. However, if we follow the above logic, deployments become even easier, because all we probably need to pass as parameters are the build name, the Chef Environment name, and the account name. Again, through the backend scripts and the parameters passed, Jenkins can be configured to auto-discover the nodes, connect to them, and deploy the artifacts.
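The auto-discovery step could, for example, lean on a Chef node search keyed by the environment parameter. A sketch, with the query attribute as an assumption:

```shell
# Build the node-discovery query for a deployment: find every node
# registered in the given Chef Environment (attribute name illustrative).
compose_node_query() {
  env_name=$1   # the Chef Environment Name parameter from the job
  echo "knife search node \"chef_environment:$env_name\" -a ipaddress"
}
```

The backend script would run the composed search to get the node IPs, then connect to each one and deploy the artifact.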
This deployment job can also have some post-job actions to take care of any post-deployment activities, and may also include the required test cases for post-deployment validation.
Jenkins, if configured and controlled properly, can be your best single point of trigger for build, infrastructure automation, and deployment.
Hope this helps! Keep forking.
Please leave your comments below if you have any doubts or questions.