On Wednesday I put on a suit and boarded a train to the city. Then I went to a meeting room high up in a skyscraper with spectacular views and started working with the lead database architect and build manager at a large international bank. It was strange. I missed my jeans, my SQL Saturday t-shirt and my odd shoes.
What we were doing
The purpose of the visit was to set up a self-contained SQL Server CI and automated deployment proof of concept. (A couple of people at the bank had read my walk-through and wanted it.) We started at 9.30am, talked for a while, then downloaded and installed the bits. At about 11am we started putting it together. We took it fairly slowly and talked through all the processes as we went. I didn’t touch the keyboard once. By 1pm we had it all working: the creation of a development database, the initial installation of SQL Source Control and SVN, the configuration of the Jenkins build (they already had a Jenkins build server up and running for their applications) and the complete set-up of Deployment Manager.
We did not do any unit testing at this stage… One step at a time!
And then…
… we decided to demonstrate the value of it all by making a deliberate mistake and watching what happened. But we made actual mistakes with our deliberate mistakes. I’m going to talk through it slowly because it was beautiful. 🙂
First we tried to simulate a broken build. To be clear, the build would break if some code found its way into source control that:
- had incorrect syntax.
- contained invalid objects that could not be deployed.
- introduced complex changes that caused SQL Compare (which generated and executed our deployment scripts) to throw some warnings or errors.
We simulated this by going into the SVN repository manually and editing the code. One of the developers tried to introduce an object that would not build. He did this by adding code that divided by zero within a stored procedure.
Unexpectedly, the build passed… We frowned for a few moments.
Then we worked it out. The stored procedure was never run; it was simply built. The error would only occur if the procedure were executed. This is not something that automated builds will pick up – it is an error that would be caught by unit tests.
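To make the distinction concrete, here is a minimal sketch of the kind of procedure we are talking about (the procedure and variable names are my own invention, not the bank’s actual code). The CREATE statement is syntactically valid, so an automated build deploys it happily; the divide-by-zero only surfaces when somebody executes it.

```sql
-- Hypothetical example: syntactically valid, so an automated build deploys it without complaint.
CREATE PROCEDURE dbo.GetAverageOrderValue
AS
BEGIN
    DECLARE @OrderCount INT   = 0;        -- pretend this works out to zero
    DECLARE @TotalValue MONEY = 100.00;

    -- The divide-by-zero only happens at execution time, not at CREATE time,
    -- so the build stays green even though the procedure is broken.
    SELECT @TotalValue / @OrderCount AS AverageOrderValue;
END;
GO

-- Only running it raises "Divide by zero error encountered":
-- EXEC dbo.GetAverageOrderValue;
```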
Some clarity on what unit tests are for and what automated builds are for
Let’s imagine you are the president of the USA and you have a big red button in the Oval Office that triggers your nuclear launch program. Except the world has gone digital – the button is on some sort of futuristic touchscreen gadget.
Automated builds will test if the code can actually run. Is it possible to actually run the program that presents the president with a button to press?
Unit tests will test that the code does what you expect. When the president presses that button, will it accurately launch the missiles as intended – or will it go disastrously wrong?
So then…
… we had another go. We manually edited the same text file in source control, simply introduced a spelling mistake in the words “CREATE PROCEDURE” and committed. This time the Jenkins build ran and within a minute we had a detailed report of the user who committed the change that caused the error, the file that was edited and the console log with the error message. This was enough information to spot the error and either roll it back or fix it.
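For contrast, the deliberate mistake that did break the build looked roughly like this (the object name is again invented). A typo in the very first keyword means the script no longer parses, so the database cannot be built and Jenkins flags the commit straight away.

```sql
-- Hypothetical sketch of the broken commit: "PROCEDURE" is misspelt,
-- so the script fails to parse and the build goes red immediately.
CREATE PROCEDUR dbo.GetAverageOrderValue
AS
BEGIN
    SELECT 1;
END;
GO
```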
The unexpected proof of the pudding
The proof of the pudding is in the eating.
By now it was nearly time to demo our work to the wider team. The guy with the keyboard casually corrected his typo, committed the change and went to the toilet. The rest of us in the room weren’t really paying attention to the monitor when he made the change, but we noticed when the build failed.
Again it alerted us to the fact that we had made a (genuine) mistake and pointed us to the exact location of the error. We were able to fix it before the guilty guy who broke the build returned a few minutes later.
On reflection
What we saw is how even a very basic automated build process helps us spot errors as soon as they are committed to source control. And this insight can be achieved very easily – we did it by following the guidelines in my previous post (a snappier version of which is now on this blog) at a casual pace in a couple of hours. In the real world, catching these build errors early will allow you to keep your code much cleaner.
We also saw the limitations of automated builds. They only test that your database can be built and deployed – they do not test whether it does what you expect. For this you need to write your own unit tests.
Unit tests are a big investment, but an important one. I have direct experience with a thirteen-year-old application at Red Gate: there is no way we would be able to do any reliable work on SQL Compare without our relatively good unit test coverage. As with many thirteen-year-old legacy code-bases that have been worked on by dozens of developers, if you change code in one part of the product you often see unexpected consequences somewhere else. Our unit tests tell us when a change we make in one place breaks other parts of the code that we had not even thought about. If you want to learn more about how to unit test your SQL code, try SQL Test (a Red Gate tool) or tSQLt (a free non-Red Gate tool).
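To give a flavour of what that looks like, here is a minimal tSQLt sketch (the test class and procedure names are hypothetical, not code from the bank). Because the test actually executes the procedure, the runtime divide-by-zero that sailed through our automated build would fail this test.

```sql
-- Minimal tSQLt sketch (hypothetical names). tSQLt tests are stored procedures
-- whose names start with "test", grouped into a test class (a schema).
EXEC tSQLt.NewTestClass 'OrderTests';
GO

CREATE PROCEDURE OrderTests.[test GetAverageOrderValue runs without error]
AS
BEGIN
    -- Act: execute the procedure under test. Any unhandled runtime error
    -- (such as our divide-by-zero) marks the test as failed.
    EXEC dbo.GetAverageOrderValue;
END;
GO

-- Run the whole class – the sort of call SQL Test or a CI step would make:
EXEC tSQLt.Run 'OrderTests';
```

A real test would go further and assert on the results with something like tSQLt.AssertEquals, but even this bare-bones version closes the gap that the build left open.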
Also – if you don’t know anything about test driven development… just google TDD.