- DevOps and why 50 Production Deploys Per Day is Essential: Martin Croker @martincroker and Markus Rendall @markusrendall
- Conversation with Kohsuke Kawaguchi, creator of Jenkins, about continuous delivery for databases
- From virtual machines to containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability: Christian Lipphardt @hawky4s and Sebastian Menski @skilledinvader
- My take-aways from Jenkins User Conference 2015
DevOps and why 50 Production Deploys Per Day is Essential: Martin Croker @martincroker and Markus Rendall @markusrendall
The DevOps movement started in 2009, when Flickr stood on stage and said they deployed to production ten times a day. Amazon now deploys every 11 seconds. Why?
Feedback and iteration. The more feedback loops we have, the more likely we are to optimise our tools appropriately for our business goals.
I think we should be aiming for 50 deploys a day. This forces our testing and deployment processes to be rock solid. It is something to aspire to be able to do, rather than something to expect of ourselves today.
My definition of DevOps:
- DevOps-centric Architecture
- Continuous Delivery
- Software-defined platform
All wrapped within a culture of business flexibility against a stable platform.
Historically, deployments have been hard, so we bundle a tonne of work, spanning many different pieces of business value, into large releases. What we want instead is to ship more frequently, delivering individual pieces of work in smaller batches – delivering business value more quickly and more reliably.
So, thinking about development: we need to consider what feedback developers want on their code and how quickly they want it. We should use this as the foundation for building our pipelines.
Traditionally we have worked to minimise the likelihood of failed deployments. However, let’s assume it is not possible to entirely remove the risk of failure. With ‘anti-fragile’ thinking we aim to reduce the impact of failures rather than their likelihood: for example, high availability and the ability to roll back or roll forward quickly.
Pets and cattle. Great metaphor. We should stop treating our servers like pets, giving them fluffy names, cuddling them and caring for them when they are poorly. We should treat them like cattle. If a server is ill, shoot it, save the money on vet’s bills and deploy a new one.
Demo: building a new app using AWS CloudFormation, putting it through a complete build/test process and deploying it to live.
They used Docker containers to version control how their VMs would be provisioned on AWS. During the presentation a complete CD environment was spun up, with 8 VMs, 3 subnets, a link to the internet, a Jenkins instance and various other bits:
A web application was created live. It was able to gather real feedback from the audience. Here is a screenshot from my laptop:
Then they made an edit (adding a Twitter button) and committed it to VCS. It was automatically picked up by Jenkins, which ran 112 functional tests, performance tests with Gatling and security validation. The change was deployed to one web server but not the other, for A/B testing. After deciding the feedback was good, they deployed the Twitter button to both web servers at the click of a button. Finally, they tore down the entire infrastructure in one click and stopped paying. I tweeted this using the share button they had deployed to the website. (I was sharing a comment by another attendee – not my words!):
How many virgins did you have to sacrifice to the demo gods? Nice but you have to pick your application – A banks #jenkinsconf
— Alex Yates (@_AlexYates_) June 24, 2015
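The rollout flow from the demo – deploy a change to one of two web servers, gather feedback, then promote it everywhere or roll it back – can be sketched as follows. This is an illustration of the A/B pattern only, not their actual tooling; the server names and the feedback check are invented.

```python
def ab_rollout(servers, new_version, feedback_is_good):
    """Deploy new_version to the first server only (the canary/A side),
    then promote it to the rest if feedback is positive, otherwise
    roll the canary back to the old version."""
    deployed = {s: "v1" for s in servers}       # everything starts on v1
    canary, rest = servers[0], servers[1:]
    deployed[canary] = new_version              # A/B split: one server upgraded
    if feedback_is_good():
        for s in rest:                          # promote to every server
            deployed[s] = new_version
    else:
        deployed[canary] = "v1"                 # roll the canary back
    return deployed

# Hypothetical run matching the demo: feedback was good, so both
# web servers end up on the new version.
state = ab_rollout(["web-1", "web-2"], "v2-twitter-button", lambda: True)
```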
Final thought: DevOps spreads like freezing water, individual centres of excellence that spread. It comes from the bottom up, from the developers and ops functions.
Conversation with Kohsuke Kawaguchi, creator of Jenkins, about continuous delivery for databases
My regular readers will be aware that my passion is for helping folk to solve the problem of continuous delivery for databases. Since I was in the same building as such a CD expert I couldn’t resist the opportunity to sit down with Kohsuke Kawaguchi to discuss his views on the problem.
For those who don’t know, Kohsuke is the creator of Jenkins, the CTO of CloudBees (‘the enterprise Jenkins company’) and a generally well-respected developer. Before CloudBees he worked for Sun Microsystems and Oracle.
Kohsuke hears about CD problems regarding databases regularly. Although this is not really his area of expertise, he is aware that it is a real problem and is interested in potential solutions. The three main issues he hears about are:
- Managing changes to stored procedures. This is hard because getting the ordering/dependencies right is a pain and there aren’t (m)any trustworthy ways to automate it.
- When deployments include changes to the database schema, there is a normal code deployment for the apps and also an upgrade script for the database to deal with. This complicates the deployment process and introduces problems. It often also requires manual review and execution tasks, which slows down delivery. When deployments involve schema changes, people are not confident automating the generation of deployment scripts.
- Forking a database into a clone for testing is hard. People want to produce production-like environments with real data, but there is not enough tooling out there to do this – especially for MySQL and Postgres.
We discussed how Redgate is attempting to solve some of these problems for SQL Server. You can read more about how it all works in a blog post I wrote last year (the same one Kohsuke mentioned in the day 1 keynote). The Redgate tooling allows developers to get their SQL Server database into source control and also package it up into a NuGet package. These NuGet packages can be used as immutable deployment or testing artifacts. For example, if using the tSQLt unit test framework there is native functionality in the SQL CI tool to build test environments automatically and to run unit tests. Also, the SQL Release tool enables an appropriate level of automation/manual review for DBAs etc. This could be automated using Jenkins workflows.
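The key property of those packages is immutability: the same database state always produces the same deployable artifact. A generic, content-addressed sketch of that idea is below. This illustrates the concept only; it is not the Redgate/NuGet implementation, and the schema scripts are invented.

```python
import hashlib
import json

def package_schema(scripts):
    """Bundle schema object scripts into an artifact whose version is a
    hash of its contents, so rebuilding from identical sources yields an
    identical (immutable) artifact that can be promoted between stages."""
    payload = json.dumps(scripts, sort_keys=True)         # canonical form
    version = hashlib.sha256(payload.encode()).hexdigest()[:12]
    return {"version": version, "scripts": scripts}

# Hypothetical usage: two builds from the same source agree on the version.
artifact = package_schema({"dbo.Orders": "CREATE TABLE Orders (Id int)"})
```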
Kohsuke was interested in the solution offered by Redgate and likes the Jenkins integration but would like to see us support more database platforms. At the moment he estimates that only about 30% of Jenkins users are on the MS stack (stats here – renders best in Firefox). In particular, it would be nice to see support for MySQL and Postgres.
He was also happy to hear about the new plugin, which is open source (although it makes calls to proprietary Redgate tools). The source code is on GitHub – feel free to contribute. He would like to see more people trying to solve the problem of cloning production databases. This still feels like a really hard problem and there is more work to be done in this area.
From virtual machines to containers: Achieving Continuous Integration, Build Reproducibility, Isolation and Scalability: Christian Lipphardt @hawky4s and Sebastian Menski @skilledinvader
The Dark Age: a single Jenkins instance with 4 executors does not scale. Slow feedback cycles. CI environments cannot be reproduced.
The Promising Present: all our configuration and infrastructure is now handled as code and is reproducible on demand. Our three rules:
- Every configuration is in VCS
- Every application/test runs in a docker container
- Every Docker Container is built automatically
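The third rule can be made concrete with a small sketch: a declarative list of images (which would itself live in VCS, per rule one) is turned into the build commands a CI job would execute, so the build matrix is derived entirely from versioned configuration. The image names and paths here are invented for illustration.

```python
# Hypothetical image declarations - in practice this data would be a
# versioned file in the repository.
IMAGES = [
    {"name": "camunda/jenkins-agent", "path": "images/agent", "tag": "1.0"},
    {"name": "camunda/postgres-test", "path": "images/postgres", "tag": "9.4"},
]

def build_commands(images):
    """Render one `docker build` command per declared image, so every
    container is built automatically from configuration, never by hand."""
    return [
        f"docker build -t {img['name']}:{img['tag']} {img['path']}"
        for img in images
    ]
```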
One Jenkins per concern:
- Community/other projects
All jobs are saved as code so they can easily be shared. There is no more single point of failure, it is much easier to manage, and it is easy to spin up Jenkins instances on demand, making us more agile.
Advantages:
- Automate as much as possible – it will make your life much easier
- Design your infrastructure with scale in mind to avoid ending up in a bad place. Think about the requirements of your hardware (if not using the cloud)
- Test everything: Jenkins configurations, plugins, plugin updates, job generations, docker images, scalability, DR. If your config is code, test it like code.
- Unit test job generation
- Use JobGenerator classes to cover the basic job logic consistently
- Use diff tools to compare jobs
- Docker works for Windows too!
- Plugins can be flaky. Pin your plugin versions for stability. Also, please contribute to fix broken plugins.
- Don’t install too many plugins – more plugins = more bugs.
- You can even deploy Windows in Docker! (They do this because they need to support SQL Server. See here: https://github.com/camunda-ci/camunda-docker-qemu-packer/blob/master/Dockerfile. They started with: https://github.com/rancherio/vm to wrap a kvm image into a docker file and went from there.)
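The advice above about unit-testing job generation and using diff tools to compare jobs can be sketched in a few lines. Note this is an assumed illustration in Python; the actual Jenkins Job DSL is Groovy, and the generator, repository host and job fields here are invented.

```python
import difflib

def generate_job(project, branch="master"):
    """Produce a Jenkins-style job definition from a few parameters.
    Keeping the output as plain text/data makes it trivial to unit test
    and to diff against a previously generated version."""
    return "\n".join([
        f"name: {project}-{branch}",
        f"scm: git@example.org:{project}.git",   # hypothetical repo host
        f"branch: {branch}",
        "steps: [build, test]",
    ])

def diff_jobs(old, new):
    """Unified diff of two generated jobs, for reviewing what a change
    to the generator actually did before rolling it out."""
    return list(difflib.unified_diff(
        old.splitlines(), new.splitlines(), lineterm=""))
```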
The Bright Future:
Also, pulling the logs together from all the Jenkins instances and reporting on them.
My take-aways from Jenkins User Conference 2015
Containers. Workflow. Philosophy. Enterprise.
Perhaps it’s because I spend my life living with persistent databases in MS-land, but I’ve been struck by how often people brought up Docker, both at Jenkins User Conference and at NDC Oslo last week. The concept of disposable infrastructure is completely changing the way we think about continuous delivery – especially in the cloud and/or at scale. The best descriptions of this were Mitch Denny’s session at NDC (video) and, to be fair, pretty much all the talks I saw at Jenkins conf, including Kohsuke’s session at the end of the first day and “From virtual machines to containers”.
I’m not sure what this means yet for persistent databases, which by definition are not disposable, or for the MS stack, which is not yet supported by docker*, but this is something that we’ll need to think about hard.
*The “VMs to containers” guys did it in a horribly complicated way, but it worked. First-class MS support in Docker is coming.
I must admit, in the past I’ve been guilty of disregarding most CI servers as release management tools. Most of the time when people schedule their deployments from CI servers, it feels like they are bending a tool that almost fits their needs into a use case it has not really been designed for. I had thought that Atlassian Bamboo was the only CI tool able to really span both CI and release management, and I had assumed Octopus Deploy was simply the best release management tool, hands down. Based on my experience so far, that was true. Microsoft Release Management simply is not as usable.
What I have seen at Jenkins User Conference is that the community has really come together to build the Workflow plugin, and they are proud of it, with good reason. While I have not yet used it personally, the UI actually looks really nice – and let me remind you this is a part of Jenkins, which is hardly popular for its looks.
Workflow models the pipeline slightly differently from tools like Octopus Deploy: each build has its own pipeline, which makes sense. You also have the ability to send different builds down different paths depending on requirements or test results, which is interesting. I cannot imagine now that many Jenkins users will be looking for a different tool to handle their deployments. I can’t wait to try it out for myself.
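The idea that each build carries its own pipeline and can branch on its own results can be sketched like this. This is an illustrative Python model of the concept only – the real Workflow plugin uses a Groovy DSL – and the stage names are invented.

```python
def run_pipeline(build, stages):
    """Run stages in order; each stage receives the build and returns the
    name of the next stage (or None to stop), so different builds can
    take different paths through the same pipeline definition."""
    trail, current = [], "build"
    while current:
        trail.append(current)
        current = stages[current](build)
    return trail

# Hypothetical pipeline: failing tests divert the build to a notify
# stage instead of deploying it.
stages = {
    "build":  lambda b: "test",
    "test":   lambda b: "deploy" if b["tests_pass"] else "notify",
    "deploy": lambda b: None,
    "notify": lambda b: None,
}
```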
Last week I gave a lightning talk at NDC Oslo, in which I spoke kindly about Jenkins. However, I must admit, I described it as a tool that was less feature-rich and less stable than commercial tools like TeamCity. I’d said that it was great for small teams and simple tasks, but I was nervous about recommending it for larger or more complex scenarios.
At this conference I have learned that Jenkins is significantly more stable when running on Linux (which says more about my predominantly MS experience). I’ve also learned how strongly the Jenkins community feels that all your builds should be programmed as code anyway. This means not only that you can source control your jobs, but also that you have complete flexibility to code them yourself. You can even code the environment that you want to run the build in, using a container.
If I was to give that lightning talk again – well – it would be different.
I’ve learned about the Enterprise licence of Jenkins from CloudBees, the company Kohsuke works for.
This enterprise version seems to include many features that are missing from the open source version. There is more high-level management functionality, for example the ability to review data from many different Jenkins master instances. It also includes features for greater resilience, security with role-based access, dashboards and analytics, promotion of jobs from one Jenkins instance to another, etc.
There is also a cloud version coming soon called Tiger. However, in my opinion, as much as I love tigers, ‘JaaS’ should be called Jazz.
The overall feeling I’m left with after two days at Jenkins Conf is that this certainly does seem to be an exciting time for the Jenkins community.