Liveblogging from Jenkins User Conference 2015 Day 1

The following is likely to have typos, mistakes and poor writing. I’m liveblogging from the Jenkins User Conference in London.

Quick links:

You can read my notes from Day 2 here.


Welcome and Introductions: Harpreet Singh @singh_harpreet

#Jenkins 100k: Jenkins now has 100,000 active installations and over 1000 plugins, with about three new ones being added daily.

One of the biggest new features for Jenkins is the Workflow plugin, extending Jenkins to support release management and continuous delivery.

Also, the Jenkins team is proud of its new integration with Docker.

Fun fact: Kohsuke enjoys applying algorithms to cross-stitch.



Keynote Address: Kohsuke Kawaguchi @kohsukekawa

Jenkins usage has grown 30% in the last year. It would take three datacenters to run all the Jenkins installations in the world. The number of Jenkins jobs globally has gone up by 67% in the last year. According to the Eclipse community survey, 39% of developers use Jenkins.

Many companies are plugging into Jenkins, including Redgate and Salesforce, supporting the community. Jenkins is everywhere, managing all aspects of code – even databases.

Wow! That’s my blog post! 🙂 🙂 🙂 🙂 🙂 🙂 🙂 🙂 🙂


CloudBees are hiring for a Jenkins evangelist.

In the early days, the sorts of tasks Jenkins handled were simple – just an individual or a small team running unit tests, etc. Over time it became a platform for automating deployment processes and all sorts of other tasks. It also started to become a concern of more and more people within the organization.

The picture I have in my mind for the future of Jenkins is to automate a full continuous delivery pipeline.

But then, if we can program the whole pipeline, can we source-control the end-to-end pipeline? CD as code. Jenkins can be the single automation platform to manage your end-to-end pipeline.
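A "CD as code" pipeline checked into source control might look roughly like the following 2015-era Workflow script. This is just a sketch: the repository URL, build commands and deploy script are hypothetical placeholders, not anything shown at the conference.

```groovy
// Jenkins Workflow script, versioned alongside the application code
node {
    // repository URL is a placeholder
    git url: 'https://example.com/myapp.git'

    stage 'Build'
    sh 'mvn -B clean package'

    stage 'Test'
    sh 'mvn -B verify'

    stage 'Deploy to staging'
    sh './deploy.sh staging'   // hypothetical deploy script
}
```

Because the script lives in the repository, a change to the pipeline is just another commit.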

Can Jenkins monitor your VCS and automatically update your build process whenever you commit a change to your workflow code? CD of config?

Call to action: there is now more functionality for developers writing extensions – so please continue to write extensions. This is a really important part of the Jenkins community.

I started using Docker to manage the Jenkins infrastructure. It turned out this was actually quite hard. Often with containers you are building on top of pre-existing images. It is important for users to be able to trigger tests whenever those base images are updated, so we added the Docker Hub Notification Trigger plugin.

There are still problems, though, when containers are being thrown around all over the place. You need a tracking tag so that you can keep track of how containers move through your pipeline. You also need to keep track of which version of the app goes with which version of the container, etc. Jenkins supports this using a container ID, and the Docker Traceability plugin helps with this.

But what is the future of CD with containers? Tools like Terraform and Kubernetes are allowing us to manage entire datacenters of Docker containers. Jenkins can orchestrate CD for datacenters, including the automation of your code, infrastructure and workflows all together.

Deployment is becoming a test problem. I often maintain the Jenkins infrastructure, so sometimes I put on my ops hat too. Making manual deployments in fixed time windows is scary. A much saner way is to test my deployments in a test environment. If a deployment does not work, I can tweak the deployment code and it will automatically be tested again. I can keep iterating until the deployment works, and then I can simply merge to production to trigger a deployment that I can have confidence in.
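Iterating on deployment code in a test environment could be sketched as a workflow like this. It is only an illustration of the idea: the deploy and smoke-test scripts are hypothetical, and the branch convention is an assumption.

```groovy
// runs on every push to the deployment-code branch
node {
    stage 'Deploy to test environment'
    sh './deploy.sh test'        // hypothetical deploy script

    stage 'Smoke test'
    // if this fails, tweak the deployment code, push again,
    // and Jenkins re-runs the whole thing automatically
    sh './smoke-test.sh test'
}
// only once this is green do you merge to the production branch,
// which triggers the same steps against production
```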

For this future to happen the workflow plugin and Docker integrations are integral.

For the next two days I hope you pick up some fantastic ideas from your developer peers, take them back to your organizations and make some changes.


An integrated Deployment Pipeline with Jenkins and CloudFoundry: Sufyaan Kazi @sufyaan_kazi

The thing that successful IT companies share is that they have solved a problem using software. However, they go a step further. Uber, for example, listens to feedback and releases new versions quickly that improve on the previous version – and they do this constantly.

The most important thing to achieve this is a culture of change and questioning, but you also need to establish a mechanism for providing feedback and, finally, to implement technology and tooling to enable a CD pipeline.

Also – for software quality – this:


My company (Pivotal) produces an open-source tool called Cloud Foundry, which is used to manage the apps in my datacenter. It has tools to deploy apps, Jenkins instances and even MySQL databases, as well as other technologies.

Demo of Jenkins fetching artifacts from Artifactory and passing them to CloudFoundry which provisions an environment with a container and deploys the code in my datacentre. From build to deploy in 2 minutes.
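The demo's fetch-and-push steps could look something like the sketch below in a workflow script. The Artifactory URL, artifact path and app name are all hypothetical, not taken from the actual demo.

```groovy
node {
    // download the built artifact from Artifactory (URL is a placeholder)
    sh 'curl -fsSL -o app.war https://artifactory.example.com/libs-release/com/example/app/1.0/app.war'

    // Cloud Foundry provisions a container and deploys the code
    sh 'cf push my-app -p app.war'
}
```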

The demo was run twice, once deploying to a vCloud datacenter and once to Amazon. Cloud Foundry provided a consistent experience on both platforms.


Demo of blue/green deployments where the new version of the app is systematically rolled out across a datacentre, using Jenkins for orchestration and Cloud Foundry for implementation. Jenkins Workflow and Cloud Foundry combined can manage the following:


Some best practices:

  • Use Jenkins plugins that are built for you
  • Use proper build numbers
  • Use canonical route names for easy use and external access
  • Version control your config details and environment variables
  • Design with multi cloud in mind for portability


How to optimize Automated Testing with Everyone’s Favourite Butler: Viktor Clerc @viktorclerc

Viktor works for XebiaLabs, who produce tools for test and deployment automation.

It is important for us to bring testing much earlier in our process. In this slide the bottom line is the old/traditional way of testing but the blocks in the top part are what Viktor recommends:


These early tests have Jenkins written all over them. Also, we should just drop the grey sections – we should handle those sorts of checks in our automated tests instead.

This means developers are becoming testers. We also need to be able to automate our infrastructure. There is no point having a bottleneck while I wait for a testing environment to be provisioned.

How do developers know if code is good enough to go live? You need a central place to log/analyze all your test results.

What tests are the most important? When you have many tests running all of them might take too much time. How do I know which tests are the most important so that I can prioritize them to get the most important feedback more quickly?

Test Automation and CD: Execution and Analysis

Since most of us go live with some failing tests – we need a good way to analyze results so that we can work out if the code is good enough.

There are a myriad of testing tools out there. Different teams are likely to use different tools for similar or different types of tests for various reasons. Forcing them to standardize goes against agile, letting them use their own goes against standardization – so you need a tool to amalgamate all the results.

Brief recap of Cohn’s pyramid of tests. (I won’t outline it here – if you don’t know it, Google it.)

Brief recap of Conway’s Law (again, see Google if you don’t know it). This applies to your test infrastructure too.

You should link tests to use cases. Radical parallelization – fail faster! You can always do it faster.

Focus on functional coverage, not technical coverage.


Useful metrics to track:

  • Number of tests
  • Number of tests that have not passed in X time
  • Flaky tests
  • Test duration

Label your tests by responsible team, topic, functional area, flaky, known issues etc.

Keep Jenkins jobs sane and simple.

Always parameterize shell scripts.

Parameters are fed to the individual test tools (FitNesse, Cucumber, etc.).

Different browsers run as separate tests. Parallelize!
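Putting the last two points together, parameterized tests fanned out across browsers might look roughly like this workflow sketch. The test-runner script and browser list are hypothetical.

```groovy
def browsers = ['firefox', 'chrome', 'ie']
def branches = [:]
for (b in browsers) {
    def browser = b                  // capture the loop variable for the closure
    branches[browser] = {
        node {
            // hypothetical parameterized test script
            sh "./run-tests.sh --browser ${browser}"
        }
    }
}
// run each browser's suite as a separate, parallel branch
parallel branches
```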

Brief mention of XebiaLabs and XL TestView. If you’re interested, watch the webinar here on XL TestView and Jenkins:

(My regular readers should note that XL TestView will support test reports in the JUnit format. Hence it can read the output of your SQL Server unit tests produced by Redgate SQL CI.)

The goal: can we base our go-live decisions on data from our tests? This would make our deploy decisions truly data-driven.


(Also, there was so much more useful stuff in this talk that I didn’t capture in my notes. If you missed this session on the day you should watch the video.)


Continuous Delivery @cloud-scale: Harpreet Singh @singh_harpreet and Kohsuke Kawaguchi @kohsukekawa

Kohsuke and Harpreet both work for CloudBees, ‘the enterprise Jenkins company’.

In the last couple of years Harpreet has begun to see CEOs going to their IT teams and asking how IT delivery can become a commercial advantage. This boils down to getting Dev and Ops working better together.

DevOps started out with pockets of automation but where we find ourselves now is that these pockets are being joined together.

Case study: Tesla. A little while ago they had a fire, for which they were highly criticized. What people don’t know is that they were able to analyse the data, and the next morning they shipped a software update to all cars already sold. When these cars were next started they automatically changed their suspension settings to mitigate the risk of fire. This meant Tesla did not need to recall their cars. Great feedback loops and automation allowed this to happen.

So try to do the following for your software:


Intro to Docker and containers. Containers allow you to deploy lightweight, VM-like environments of a particular configuration, allowing you to create disposable infrastructure on demand.

Demo by Kohsuke:

  • Edit code and commit to Git
  • New Docker image created automatically in Jenkins using Docker plugin
  • Docker images tracked using Docker fingerprints plugin
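The demo steps above could be sketched as a workflow like this, assuming the CloudBees Docker Workflow integration is installed; the repository URL and image name are hypothetical.

```groovy
node {
    // commit lands here via the Git trigger (URL is a placeholder)
    git url: 'https://example.com/myapp.git'

    // build a new image tagged with the Jenkins build number,
    // so the image can be traced back to this build
    def image = docker.build("myorg/myapp:${env.BUILD_NUMBER}")
    image.push()
}
```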

In fact, builds can be run on Docker slaves – the entire build environment is created inside a Docker container, so your builds always happen in a clean environment.

With the Docker custom build environment plugin you can even create special containers for particular builds.
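Running a build inside a throwaway container could look roughly like this, again assuming the Docker workflow integration; the image choice and repository URL are illustrative assumptions.

```groovy
node {
    // run the whole build inside a disposable Maven container,
    // so the build environment is always clean and reproducible
    docker.image('maven:3-jdk-8').inside {
        git url: 'https://example.com/myapp.git'
        sh 'mvn -B verify'
    }
}
```

Swapping the image name is all it takes to try the same build against a different environment.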

This also allows you to experiment with different types of infrastructure.

The future for workflows and containers? Get involved:


And for CloudBees? (Additional enterprise features on top of the open-source Jenkins tool.)

Security with Role-Based Access Control.

Jenkins Operations Center acts as a dashboard across several Jenkins master instances.

Workflow stage view and Jenkins Analytics:


Promotion of jobs from one master instance to another. (This can be a problem if you scale Jenkins horizontally.)

Also, announcing the CloudBees Jenkins Platform to help people to scale Jenkins horizontally:


There will be a whole bunch of additional features in the enterprise version to help people scale horizontally, enable high availability, etc. (The slides went by very quickly so I missed most of the features!)

Partnering with Azure, AWS, Cloud Foundry and a few others for tighter integrations.

Also – TIGER – working towards Jenkins as a Service. A cloud version of Jenkins. (JaaS – I reckon they should have called it Jazz. :-)) It’s coming.

