While I announced last month that I’m leaving Redgate, I’ve not gone yet. I have a few bits and pieces to finish off before I go. My last day will be in mid-August and I’ll start working on DLM Consultants full time in September.
I’m spending most of my last month helping one of Redgate’s clients to implement a DLM solution for both Oracle and SQL Server for several hundred developers. I’ve set up the Oracle part and it #worksOnMyMachine. I’m documenting my set-up for the client. I’m also posting it here for my own future reference. Perhaps someone else will find it useful too – and that would be nice.
I should be clear that this is just a PoC, not a polished solution. It works, but I’ve already decided that in retrospect there are better ways I could have put it together. I discuss these points as they come up.
The client’s requirements
A few of the client’s requirements that are relevant to this post are as follows:
- Use their existing Jenkins Enterprise infrastructure for build and deployment automation. They have not yet upgraded to version 2.x, so we have to use version 1.x. (Note, I set up the following PoC on the regular, open source, Jenkins v1.609.3, not Jenkins Enterprise.) Using any other build or deployment tools (such as VSTS or Octopus Deploy) is not an option.
- Use the ‘Build Pipeline Plugin’ to extend Jenkins to support deployment pipelines. (Note, this is different from the more recent ‘Pipeline plug-in’/‘Workflow plugin’. I used v1.4.9.)
- Support both fully automated deployments and also manual approval steps. (The client already uses the ‘Promoted Builds Plugin’.)
- Ideally the same approach can be used for SQL Server and Oracle.
In order to set this up I referred closely to a couple of my previous blog posts about similar PoCs:
- Database CI with Jenkins: A step by step tutorial
- Continuous Delivery for Oracle Databases with Atlassian Bamboo and the Redgate DLM Automation Suite for Oracle (part 1) (and part 2).
I’ve also found the following blog posts, Stack Overflow answers and documentation sites by other people very useful – so thank you community:
- Another Look at the Jenkins Promoted Builds Plugin
- Automating with Jenkins and PowerShell on Windows – Part 1 (and part 2)
- PowerTip: Remove Leading and Trailing Spaces with PowerShell
- How to use a Jenkins variable in my Powershell Script
- NuGet documentation
I’m using the Redgate Deployment Suite for Oracle to version control my Oracle source code (see Redgate website for documentation) and the Redgate DLM Automation Suite for Oracle to handle the deployments.
I’m automating the Redgate tools using four ‘Freestyle Software Projects’ that run in sequence. The first project packages up my Oracle source code using NuGet. The following projects download the appropriate package and deploy the code to a particular environment.
I’ve set up a ‘Build Pipeline View’ to visualise the set-up as a deployment pipeline.
Run Jenkins on a Windows box (or run a slave node on a Windows box).
Install the following on the Jenkins node:
- DLM Automation Suite for Oracle
Install the following Jenkins plugins:
- Build Pipeline Plugin
- EnvInject Plugin
- Promoted Builds Plugin
- Parameterized Trigger Plugin
- Powershell Plugin (optional – but I like to use PowerShell. You can use batch commands if you’re old school. :-P)
Step 1: Packaging up the source code
We’re using NuGet to package up our deployment artifacts because it is the de facto package format for Microsoft stack development. It’s also a good choice because of its simplicity and usability. Finally, the Redgate SQL Server tools support NuGet out of the box, and the client will be using these tools for SQL Server deployment, so it makes sense for us to use the same technology for packaging the Oracle source code. However, if you would prefer to package up your Oracle source code using a different technology that should work too.
- Create a Jenkins project: From the Jenkins dashboard select ‘New Item’ in the top left corner and choose a ‘Freestyle project’. Give the project a name along the lines of “01. Build Oracle deployment package”. (Since the dashboard lists projects alphabetically many people like to number their projects so that they appear in a sensible order.)
- Hook up to source control: This task is not documented here because this is probably something you already know how to do and everyone’s set-ups will be different. However, I did provide an example for Subversion in an old blog post. With my current client we are using the Git plugin.
- Set up an appropriate trigger: I’m a fan of running this build for every commit where practical. To get that set up quickly select ‘Poll SCM’ from the ‘Build Triggers’ section and add five stars separated by spaces (which polls source control every minute): * * * * *
- Add a .nuspec file to your source code: This file provides instructions for NuGet when packaging. It includes some default metadata (e.g. licence information and version numbers) as well as defining the files that should be included in the package. The basic .nuspec file I added to the root of my source code simply includes the entire repository. If you want to be more specific the docs are here.
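The original file isn’t reproduced here, but a minimal .nuspec along these lines does the job. The id, authors and description are placeholders rather than the client’s real values, and the wildcard pulls the whole repository into the package:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <!-- Placeholder metadata - substitute your own values -->
    <id>MyOracleDatabase</id>
    <version>1.0.0</version>
    <authors>YourTeam</authors>
    <description>Oracle schema source code for the deployment pipeline</description>
  </metadata>
  <files>
    <!-- Include the entire repository in the package -->
    <file src="**\*" target="content" />
  </files>
</package>
```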
- Create the package: Add a build step to ‘Execute a Windows Batch command’ and run the following command. This will call nuget.exe to create your NuGet package:
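A command along these lines is what I mean. The .nuspec filename is a placeholder, and nuget.exe needs to be on the PATH (or checked into the repository). Stamping the package version with the Jenkins BUILD_NUMBER is what later allows a downstream job to fetch the package for a specific build:

```batch
REM Create the NuGet package, using the Jenkins build number as the package version
nuget.exe pack MyOracleDatabase.nuspec -Version 1.0.%BUILD_NUMBER%
```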
- Archive the package: In this case I’ll just save the NuGet package somewhere on disk. One of the great benefits of NuGet is the rich selection of package repositories available to you, and the client will upload their packages to their artifact repository, Nexus. For the purposes of this PoC, however, the file system will do. I added a new ‘Windows PowerShell’ build step that executes the following one-liner. You could call robocopy etc. from a batch command if you prefer:
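For reference, the one-liner was along these lines; the package name and destination share are placeholders for your own:

```powershell
# Copy this build's package to a shared folder that the deployment jobs can read
Copy-Item "$env:WORKSPACE\MyOracleDatabase.1.0.$env:BUILD_NUMBER.nupkg" "\\fileserver\packages\"
```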
Step 2: Creating a downstream job and injecting variables
This is a step that I agonised over for ages. Let me explain the problem to provide some context.
What we want to end up with is the ability to deploy a particular build to a particular environment. The code you want to deploy to production won’t *always* be the latest version from source control. Let’s say, for example, that in the following scenario Pipeline #53 has been fully tested and needs to go to staging and production, but Pipeline #54 isn’t ready yet:
In this scenario the job ‘9. Deploy to Staging’ needs to download not the most recent NuGet package, but the package associated with a specific upstream build. In this case we need the package for build #53 of the project “6. Build Oracle”, not build #54. It turns out this wasn’t trivial for me.
As Steve Jones pointed out to me on Twitter, this is a solved problem for those using the latest version of Jenkins and the new “Pipeline Plugin”. Those folks can simply reference the built-in Jenkins variable PIPELINE_NUMBER, but this does not exist in the older “Build Pipeline Plugin” that my client was using.
Gareth Hendry suggested using variables from the ‘Promoted Builds Plugin’, such as PROMOTED_NUMBER. From the documentation it looks like a good fit, but I misread part of it and discounted it as a solution. In retrospect I believe Gareth’s approach is the most suitable.
That said, I have it working the following way, and I don’t have time to get my head fully around the Promoted Builds Plugin, so in the interests of shipping a working PoC within the deadline this is what I did. You may find this approach a useful alternative if you aren’t able to use the Promoted Builds Plugin but you do have the Parameterized Trigger Plugin:
- Ensure the Parameterized Trigger Plugin is installed.
- Go back to the Jenkins dashboard and create another New Item. Again use the ‘Freestyle software project’ option and call it something like ‘02. Deploy to Integration’. We’ll configure it later.
- Return to the original project and add another build step to run a Windows batch command. We are going to save the BUILD_NUMBER as a new variable, PIPELINE_NUMBER, and write it to a properties file. To do that, paste the following command:
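The command looks like this. Note the lack of a space before the ‘>’: with ‘echo value > file’ batch would include a trailing space in the value, which is exactly the sort of thing the PowerShell trimming tip linked above exists to clean up. The file name must match whatever you enter in the ‘Use properties from file’ box of the downstream trigger:

```batch
REM Save this build's number as PIPELINE_NUMBER so downstream jobs can find the matching package
echo PIPELINE_NUMBER=%BUILD_NUMBER%> env.properties
```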
- Add a ‘Post-build Action’ to ‘Trigger a parameterised build on other projects’. In the ‘Projects to build’ textbox, type the name of the new project that you created above.
- Click ‘Add Parameters’ and select ‘Parameters from properties file’.
- In the text box for ‘Use properties from file’ write: env.properties
Now when the downstream job runs the variable ‘PIPELINE_NUMBER’ will be available and it will reference the build number of the upstream job. This will enable it to pick up the appropriate NuGet package.
Step 3: Creating a pipeline view
Now we have an upstream and a downstream job we can view them in a pipeline.
- Ensure the Build Pipeline Plugin is installed.
- From the dashboard click the + sign above your projects:
- Select ‘Build Pipeline View’ and give it a sensible name such as ‘Oracle DB deployment pipeline’ and click OK.
- Write the name of your upstream project next to ‘Select Initial Job’. I’d advise increasing the ‘No Of Displayed Builds’ to about 5 or 10, depending on your preferences. Click ‘Save’.
- Your pipeline view will now show the current progress of your upstream and downstream job. You can access your view from the main dashboard above the list of projects.
Step 4: Downloading the appropriate deployment artifact and deploying it
This is where the actual deployment happens. First we need to retrieve the appropriate package, then we need to call the Redgate DLM Automation Suite for Oracle to deploy it.
- Ensure the Powershell Plugin is installed.
- Ensure that the Redgate DLM Automation Suite for Oracle is installed on the Jenkins node.
- Select your downstream project and click ‘Configure’.
- Add a new ‘Windows PowerShell’ build step.
- Paste the following code to retrieve the correct NuGet package (note the use of the PIPELINE_NUMBER variable).
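A sketch of what that build step looks like. The package ID, version scheme and source path are placeholders that need to match whatever you used in the packaging job:

```powershell
# Placeholder names and paths - adjust to match your packaging job
$packageId = "MyOracleDatabase"
$packageVersion = "1.0.$env:PIPELINE_NUMBER"   # PIPELINE_NUMBER comes from the upstream job's properties file
$packageSource = "\\fileserver\packages"

# Fetch and unpack the package for the specific upstream build (not necessarily the latest)
& nuget.exe install $packageId -Version $packageVersion -Source $packageSource -OutputDirectory "$env:WORKSPACE\package"
```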
- Add another ‘Windows PowerShell’ build step.
- Paste the following code to deploy the database schema. Ensure that you provide your own details at the top, rather than my dummy details. (I’ve just hard-coded this config for my PoC, but you should probably think about parameterising it for real-world projects. There is a great series of blog posts about PowerShell in Jenkins by Matthew Hodgkins that explains how to do that. The relevant part is here.)
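As a sketch, the deployment step looked something like the following. The connection details are dummies, and the install path and command-line switches for the Schema Compare for Oracle command line (the part of the DLM Automation Suite that performs the deployment) are from memory, so check them against the Redgate documentation for your version before relying on them:

```powershell
# *** Dummy connection details - replace with your own ***
$targetUser     = "INTEGRATION_USER"
$targetPassword = "MySecretPassword"
$targetServer   = "oracle-integration"
$targetSchema   = "WIDGET_SCHEMA"

# Path to the Schema Compare for Oracle command line - check against your own installation
$sco = "C:\Program Files\Red Gate\Schema Compare for Oracle 3\sco.exe"

# Compare the source code we just downloaded against the target schema and deploy the differences.
# Switch names are assumptions based on the Redgate documentation - verify before use.
& $sco /source "$env:WORKSPACE\package{$targetSchema}" `
       /target "$targetUser/$targetPassword@$targetServer{$targetSchema}" `
       /deploy
```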
Now trigger a build for the upstream project and watch. Hopefully you just automated the deployment of an Oracle schema. :-)
To add deployments to further downstream environments simply add more identical deployment projects and tweak the config to point at your next schema. Each time you add a new downstream project you’ll also need to add a ‘post-build action’ to the preceding project to ‘Trigger/call builds on other projects’ and point it at your new downstream project. You’ll need to ensure you have selected the ‘Add Parameters’ option to pass on the ‘current build parameters’. This will ensure that the PIPELINE_NUMBER variable is passed down the chain. (Here is one reason why using the alternative approach in step 2 may have been superior as it would have made this step redundant.)
Step 5: Adding approval gates
We’ve now set up a build pipeline to deploy all source control commits to each of our environments. However, be aware that we’ve fully automated the deployment to every environment, and that might not be what we are looking for. For example, we might want a DBA to review the deployment before pushing it into production, and if we have a QA team they might want to choose when they take the latest version into the test environment.
CloudBees have written this blog post that explains how to set up approval steps using the Promoted Build Plugin. This will allow you to trigger deployments to environments on your own schedule or to give a select set of users the authority to approve deployments to certain environments.
There are a bunch of ways you could improve on or extend this set-up. It is by no means complete; it’s just a PoC I put together in a couple of days. For example:
- Use Jenkins variables to parameterise (and encrypt) the credentials and other connection details for your Oracle schemas.
- Use proper artifact management software like Artifactory, MyGet or Nexus to manage your deployment artifacts.
- Upload the upgrade script and diff report so they are viewable in Jenkins, rather than leaving them in the file-system somewhere.
- Configure the Redgate tools to generate the script for approval before it is executed. And then for production deployments work out how to re-use the same upgrade script that you used in a pre-prod/staging environment.
Those are just a few ideas, you may well have other thoughts.
There are software updates available that would have made my life a bit easier, and I’m pretty sure that if I were to do it again I could improve the way I configured it, but it works. The concept has been proven: we can create a deployment pipeline for an Oracle schema using Jenkins v1.x, an old Build Pipeline Plugin and the latest version of the Redgate tools.
Our next task is to do the same for SQL Server. However, since we have achieved it for Oracle using a lot of the same software we can go into that POC with some confidence.