We are often asked whether Thoughtworks Go™ can be used with Team Foundation Server and other Microsoft® tools. It can. This article presents a pattern and framework, with pipeline and template code that can be copied and used as a starting point.
Using templates, parameters and environment variables, it is possible to stand up a flexible, reusable continuous delivery infrastructure in Go that can be completely automated. This article shows you how to do this using a variety of Microsoft® technologies including:
There are many ways to apply Go to this problem space; I present an example here using the above technologies. Scripting with PowerShell and remote execution via WinRM have been used very effectively with Go and are preferred in some instances.
Go is like the conductor of an orchestra. It does not know how to play a violin, but it knows how to read a complete score and cue the violins to play unattended at exactly the right moment.
Let’s begin with a workflow (see below). We are going to automate all of it. The flow is triggered by source-code check-in events in source control. I have chosen to place manual gates in front of the staging and production deployments, but that’s a personal preference; we could automate those steps too. This workflow is a pattern to be applied. Later in this article I describe how to use Go to implement it.
There are really two workflows here. One is for a browser-based application, which requires configuration testing in different browser environments. The other is for the database. Using feature toggles, we can in many cases build and deploy application code and database changes independently, using the toggles to isolate them from each other.
We have principles:
Continuous delivery doesn’t necessarily mean you deploy through to production on every code push, although it is reasonable in 2012 to achieve this. It’s OK to have manual gates for UAT, staging, production and the like; most people do. You should, however, automate everything through integration and configuration testing, and subject each code push to that rigor.
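In Go, such a gate is expressed as a manual stage approval: the stage waits until a person triggers it. A minimal sketch in `cruise-config.xml` terms (stage, job and script names are illustrative, not from the sample project):

```xml
<stage name="deploy-staging">
  <approval type="manual" />   <!-- a person must approve before this stage runs -->
  <jobs>
    <job name="deploy">
      <tasks>
        <exec command="deploy.cmd" />
      </tasks>
    </job>
  </jobs>
</stage>
```

Stages without an `<approval>` element run automatically when the previous stage passes, which gives you the fully automated path through integration and configuration testing.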
When I say push, I am using the term as it is used in Git or Mercurial: a push is a group of commits saved together to the source control server. Team Foundation Server requires that each “check-in” (what Git calls a “commit”) be a push of its own. (The local workspace feature in TFS 2012 may resolve this issue at some level.)
First, some terminology. Go defines a pipeline as:
A pipeline allows you to break down a complex build into a sequence of simple stages for fast feedback, exhaustive validation and continuous deployment.
Here’s a network of pipelines that implement the build and deployment work flow mentioned above in two flavors. First is the control flow, which shows the order in which pipelines are triggered. Second is the data flow showing how original source and build artifacts flow through the pipeline network.
Flow of control:
Other things considered include:
Each of these pipelines is backed by a reusable template. This makes tuning the implementation across our Integration, Staging and Production environments straightforward and foolproof.
Let’s look at each pipeline in detail. Go has some features that help us build and implement these pipelines using reusable template components, feeding them with parameters and environment variables. First, the basics.
A Go pipeline is composed of synchronous tasks within asynchronous jobs within synchronous stages, like this:
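In Go’s `cruise-config.xml` this nesting is expressed directly. A minimal sketch (all names are placeholders, not from the sample project):

```xml
<pipeline name="example">
  <materials>
    <!-- source control and/or upstream pipelines go here -->
  </materials>
  <stage name="stage-one">             <!-- stages run one after another -->
    <jobs>
      <job name="job-a">               <!-- jobs within a stage may run in parallel -->
        <tasks>
          <exec command="build.cmd" /> <!-- tasks within a job run in sequence -->
          <exec command="test.cmd" />
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```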
Pipelines consume “materials”, which can be code from source control or artifacts from Go’s repository, presumably put there by upstream work.
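A sketch of both kinds of material, with illustrative names and URLs (verify the material attribute names against your Go version’s configuration reference):

```xml
<materials>
  <!-- source code from a TFS team project -->
  <tfs url="http://tfs-server:8080/tfs/DefaultCollection"
       username="builduser" domain="CORP"
       projectPath="$/MyProject/ComponentA" dest="componentA" />
  <!-- an upstream pipeline; this also makes the pipeline trigger when it completes -->
  <pipeline pipelineName="component-build" stageName="build" />
</materials>
```

Inside a job, a `<fetchartifact pipeline="component-build" stage="build" job="compile" srcdir="bin" dest="lib" />` task then pulls the upstream artifacts from Go’s repository into the working directory.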
Pipelines also consume “parameters”, which are what you expect. They are reusable name/value pairs defined at the top of the pipeline.
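Parameters are declared once at the top of a pipeline and referenced elsewhere as `#{name}`. A sketch with illustrative values:

```xml
<pipeline name="component-build" template="build-and-test">
  <params>
    <param name="SolutionFile">ComponentA.sln</param>
    <param name="Configuration">Release</param>
  </params>
  <!-- materials go here; the stages come from the template,
       whose tasks can reference #{SolutionFile} and #{Configuration} -->
</pipeline>
```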
Stages in a pipeline can be based on a template. Inheritance is one way to think of it: a pipeline inherits a template. Another way to think about it is that a template is “injected” into a pipeline.
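A template holds the stage definitions; each pipeline that references it supplies its own materials, parameters and environment variables. A sketch, assuming the `#{...}` parameters shown are defined by the referencing pipeline:

```xml
<templates>
  <pipeline name="build-and-test">
    <stage name="build">
      <jobs>
        <job name="compile">
          <tasks>
            <exec command="msbuild">
              <arg>#{SolutionFile}</arg>
              <arg>/p:Configuration=#{Configuration}</arg>
            </exec>
          </tasks>
        </job>
      </jobs>
    </stage>
  </pipeline>
</templates>
```

Several pipelines can reference the same template with different parameter values, which is how one definition serves the Integration, Staging and Production environments.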
Finally, pipelines can reference environment variables. These can be defined at various levels in a pipeline: top, stage or job.
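A sketch of variables declared at two of those levels (names and values are illustrative):

```xml
<pipeline name="deploy-integration">
  <environmentvariables>
    <variable name="TARGET_SERVER"><value>int-web-01</value></variable>
  </environmentvariables>
  <stage name="deploy">
    <environmentvariables>
      <variable name="DB_NAME"><value>MyApp_Int</value></variable>
    </environmentvariables>
    <!-- jobs can declare their own variables the same way -->
  </stage>
</pipeline>
```

Go exposes these to tasks as ordinary OS environment variables, so a Windows command file can read them as, for example, `%TARGET_SERVER%`.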
What follows is the implementation I chose; Go is flexible. I enumerated a list of tasks for each item of work. These lists could be encapsulated in command files (.cmd, .bat) to make the pipeline definition cleaner. Doing so also keeps most of the pipeline definition in source control, since the command files are pulled as materials at the top of the pipeline. Note:
Each of the pipeline sections that follows uses a convention composed of screen shots from Go. Arrows indicate data flow. My source is organized like this:
This pipeline consumes source code and compiles and tests a component.
Materials: Three sub-trees of source code from a Team Foundation Server team project.
This pipeline consumes the source code for the web site and artifacts from the component build pipelines to generate a package for IIS Web Deploy.
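One way to generate the package is the standard MSBuild `Package` target for web projects, publishing the result as a Go artifact for the downstream deploy pipeline. A sketch (project and path names are illustrative):

```xml
<job name="package">
  <tasks>
    <exec command="msbuild">
      <arg>MySite\MySite.csproj</arg>
      <arg>/t:Package</arg>
      <arg>/p:Configuration=Release</arg>
      <arg>/p:PackageLocation=artifacts\MySite.zip</arg>
    </exec>
  </tasks>
  <artifacts>
    <artifact src="artifacts\MySite.zip" dest="package" />
  </artifacts>
</job>
```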
This pipeline uses IIS Web Deploy to deploy the site to IIS. The database is deployed separately. Feature toggles are used to isolate the new features of the application from incompatible older versions of the database schema.
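The deploy step can wrap the Web Deploy command line in an `exec` task. A sketch, assuming the package is fetched into `lib\` and `#{TargetServer}` is a pipeline parameter:

```xml
<tasks>
  <fetchartifact pipeline="site-package" stage="package" job="package"
                 srcdir="package" dest="lib" />
  <exec command="msdeploy">
    <arg>-verb:sync</arg>
    <arg>-source:package=lib\MySite.zip</arg>
    <arg>-dest:auto,computerName=#{TargetServer}</arg>
  </exec>
</tasks>
```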
I use a database project, created with Visual Studio’s database project wizard, to manage my database schema. This is but one way to do it; your choice really depends on personal preference and your experience with the available tools. DBDeploy (open source) is another good choice.
This pipeline builds the database project, which creates a SQL script used to deploy the schema changes.
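With a Visual Studio database project, one common technique is to run the `Deploy` target with `DeployToDatabase=false`, which writes the deployment script without executing it. A sketch (the project path and database name are illustrative; verify the property names against your database project version):

```xml
<exec command="msbuild">
  <arg>Database\MyApp.Database.dbproj</arg>
  <arg>/t:Deploy</arg>
  <arg>/p:DeployToDatabase=false</arg>   <!-- generate the script only -->
  <arg>/p:TargetDatabase=MyApp_Int</arg>
</exec>
```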
This pipeline runs an environment-specific deployment SQL script using SQLCMD and tests the database to ensure the schema changes were applied correctly. I do not apply a failsafe rollback script when the tests fail, though doing so is a good practice.
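The deploy stage can call SQLCMD directly from an `exec` task. A sketch, assuming `#{DbServer}` and `#{DbName}` pipeline parameters and an illustrative script path:

```xml
<exec command="sqlcmd">
  <arg>-S</arg><arg>#{DbServer}</arg>
  <arg>-d</arg><arg>#{DbName}</arg>
  <arg>-i</arg><arg>deploy\schema-changes.sql</arg>
  <arg>-b</arg>   <!-- exit with a non-zero code on error so Go fails the job -->
</exec>
```

The `-b` switch matters here: without it, SQLCMD can report errors yet exit successfully, and Go would mark the stage green.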
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.