Go - How To: Build, Test and Deploy Using IIS Web Deploy

We are often asked about using ThoughtWorks Go™ with Team Foundation Server and other Microsoft® tools. You can. This article presents a pattern and framework with pipeline and template code that can be copied and used as a starting point.

Using templates, parameters and environment variables it is possible to hoist a flexible, reusable continuous delivery infrastructure in Go that can be completely automated. This article shows you how to do this using a variety of Microsoft® technologies, including:

  • Team Foundation Server
  • MSBUILD, MSTEST and MSXSL
  • IIS Web Deploy (MSDEPLOY)
  • Visual Studio database projects and SQLCMD

Note:

There are many ways to apply Go to this problem space. I present an example here using the above technologies. Scripting approaches such as PowerShell (often together with WinRM for remote execution) have been used very effectively with Go and are preferred in some instances.

Go is like the conductor of an orchestra. It does not know how to play a violin, but it knows how to read a complete score and cue the violins to play unattended at exactly the right moment.

Let’s begin with a workflow (see below). We are going to automate all of it. The flow is triggered by source code check-in events in source control. I have chosen to apply manual gates in front of staging and production deployment, but that’s a personal preference; we could automate those steps too. This workflow is a pattern to be applied. Later in this article I describe how to use Go to implement this pattern.

[Figure: the build, test and deploy workflow]
There are really two workflows here. One is for a browser-based application, which requires configuration testing in different browser environments. The other is for the database. Using feature toggles we can, in many cases, build and deploy application code and database changes independently, with the toggles isolating them from each other.

We follow these principles:

  • Fail fast: Tracing and fixing a problem is much less costly the further to the left in this workflow it is caught.
  • Consistency: Model build, test and deployment once and apply the model as consistently as you can everywhere.
  • Relentless automation: Automation is essential. Absent a commitment to automation, stop reading now.

Continuous delivery doesn’t necessarily mean you deploy through to production on every code push, although it is reasonable in 2012 to achieve this. It’s OK to have manual gates for UAT, staging, production and the like. Most people do. You should automate through integration and configuration testing though and subject each code push to that rigor.

Note:

When I use “push” I am referring to it in the way it is used in Git or Mercurial: a push is a group of commits that are saved together into the source control server. Team Foundation Server requires each “check-in” (what Git calls a “commit”) to be a push of its own. (The local workspace feature in TFS 2012 may resolve this issue at some level.)

First, some terminology. Go defines a pipeline as:

A pipeline allows you to break down a complex build into a sequence of simple stages for fast feedback, exhaustive validation and continuous deployment.

Here’s a network of pipelines that implement the build and deployment work flow mentioned above in two flavors. First is the control flow, which shows the order in which pipelines are triggered. Second is the data flow showing how original source and build artifacts flow through the pipeline network.
Flow of control:

[Figure: control flow of the pipeline network]

  • The IIS web site and the database are on parallel flows so that we can build and deploy them independently. Implementing support for this in an application is possible using feature toggles.
  • Source pushes (Git, Mercurial) and check-ins (TFS, Subversion) trigger the “build” pipelines. Each pipeline monitors its own part of the source tree, so it is possible to build, package and deploy a single component in this scenario.
  • An assumption is made that components are packaged by the “package.site” pipeline, which generates a Web Deploy package ready to be deployed. It is reasonable to configure things so that individual components trigger deployment, bypassing the package stage, so long as the component builds produce deployable artifacts.
  • When a pipeline or stage fails for any reason the entire flow halts until remedial action is taken. 

Flow of data:

[Figure: data flow of the pipeline network]
  • Build pipelines, A, are triggered by changes to source control. They emit artifacts, B, and stash them in Go’s artifact repository.
  • Packaging pipelines are triggered by source changes directly or by new versions of components, C, from the artifact repository.
    • Package.site builds the deployment package, D, for Web Deploy and puts it in the artifact repository.
    • Package.mssql creates the SQL scripts needed to deploy/upgrade the database and puts them in the artifact repository.
  • There are three deployment pipelines for the web site and the database flowing on parallel tracks. Each is triggered by the presence of new versions of artifacts in the repository placed there by upstream pipelines.

Other things considered include:

  • Testing is generally bundled inside each pipeline as a stage called “accept”. There are some exceptions. Component build pipelines do testing in the build stage.
  • After the Integration environment is deployed and verified with smoke tests bundled in the deployment pipelines, the process stops and awaits our manual signal to continue to Staging and Production. Stopping for manual intervention before staging and production is purely a personal choice. We could easily set these pipelines to trigger automatically when the Integration environment testing “certifies”.
  • Pipeline triggers can be manual or automated.

Each of these pipelines is backed by a reusable template. This makes tuning the implementation across our Integration, Staging and Production environments straightforward and foolproof.
Let’s look at each pipeline in detail. Go has some features that help us build and implement these pipelines using reusable template components, feeding them with parameters and environment variables. First, the basics.
A Go pipeline is composed of synchronous tasks within asynchronous jobs within synchronous stages, like this:

  • Pipeline
    • Stage 1
      • Job A
        • task
        • task
      • Job B
        • task
        • task
    • Stage 2
      • … and so on
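In Go’s XML configuration this structure might be sketched as follows. The pipeline, stage, job and command names here are illustrative placeholders, not taken from my actual configuration:

```xml
<pipeline name="build.component">
  <materials>
    <!-- source control and/or upstream pipeline materials go here -->
  </materials>
  <stage name="build">              <!-- stages run one after another -->
    <jobs>
      <job name="compile">          <!-- jobs within a stage may run in parallel -->
        <tasks>
          <exec command="msbuild">  <!-- tasks within a job run in sequence -->
            <arg>Component.csproj</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```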

Pipelines consume “materials”, which can be code from source control or artifacts from Go’s repository, presumably put there by upstream work.
Pipelines also consume “parameters”, which are what you expect. They are reusable name/value pairs defined at the top of the pipeline.
Stages in a pipeline can be based on a template. One way to think of this is inheritance: a pipeline inherits a template. Another way to think about it is that a template is “injected” into a pipeline.
Finally, pipelines can reference environment variables. These can be defined at various levels in a pipeline: top, stage or job.
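Putting those pieces together, a templated pipeline might be declared as in the sketch below (all names and values are hypothetical). Inside a template, parameters are referenced with the #{...} syntax:

```xml
<pipeline name="deploy.site.integration" template="deploy-site">
  <params>
    <!-- reusable name/value pairs consumed by the template -->
    <param name="ENVIRONMENT">integration</param>
  </params>
  <environmentvariables>
    <!-- available to every job in the pipeline -->
    <variable name="TARGET_SERVER"><value>int-web-01</value></variable>
  </environmentvariables>
  <materials>
    <!-- artifacts from the upstream packaging pipeline -->
    <pipeline pipelineName="package.site" stageName="build" />
  </materials>
</pipeline>
```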
Note:

What follows is an implementation I chose; Go is flexible. I enumerated a list of tasks for each item of work. These lists could be encapsulated in command files (.cmd, .bat) to make the pipeline definition cleaner. Doing this also lets us keep most of the pipeline definition in source control by pulling the command files in as materials at the top of the pipeline.

Note:

Each of the pipeline sections that follow uses a convention composed of screen shots from Go. Arrows indicate data flow. My source is organized like this:

[Image: source tree layout]

Build Components Pipeline

Narrative

This pipeline consumes source code and compiles and tests a component.
Materials: Three sub-trees of source code from a Team Foundation Server team project.
Parameters:

  • File paths to MSBUILD, MSTEST and MSXSL. MSXSL is a publicly available tool for applying XSL transformations, used here to convert TRX results from MSTEST into xUnit-compatible XML.
  • The name of the component is also passed as a parameter and used to locate the code in the source tree.

Commands:

  • MSBUILD is called to build the component.
  • MSBUILD is called to build the tests for the component.
  • MSTEST is called to run the tests.
  • MSXSL is called to transform the test results for Go.

Artifacts:

  • Component shared library (dll)
  • Test results 
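The command list above might be expressed as the task sketch below in the pipeline’s template. All paths, file names and the stylesheet name are hypothetical placeholders; #{MSBUILD}, #{MSTEST}, #{MSXSL} and #{COMPONENT} refer to the parameters just described:

```xml
<job name="build">
  <tasks>
    <!-- build the component -->
    <exec command="#{MSBUILD}">
      <arg>src\#{COMPONENT}\#{COMPONENT}.csproj</arg>
      <arg>/p:Configuration=Release</arg>
    </exec>
    <!-- build the component's tests -->
    <exec command="#{MSBUILD}">
      <arg>src\#{COMPONENT}.Tests\#{COMPONENT}.Tests.csproj</arg>
      <arg>/p:Configuration=Release</arg>
    </exec>
    <!-- run the tests -->
    <exec command="#{MSTEST}">
      <arg>/testcontainer:src\#{COMPONENT}.Tests\bin\Release\#{COMPONENT}.Tests.dll</arg>
      <arg>/resultsfile:results.trx</arg>
    </exec>
    <!-- transform TRX into XML that Go's test reports understand -->
    <exec command="#{MSXSL}">
      <arg>results.trx</arg>
      <arg>trx-to-xunit.xsl</arg>
      <arg>-o</arg>
      <arg>results.xml</arg>
    </exec>
  </tasks>
  <artifacts>
    <artifact src="src\#{COMPONENT}\bin\Release\#{COMPONENT}.dll" />
    <test src="results.xml" />
  </artifacts>
</job>
```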

Template Code

Pipeline Code

Package for IIS Web Deploy Pipeline

Narrative

This pipeline consumes the source code for the web site and artifacts from the component build pipelines to generate a package for IIS Web Deploy.
Materials:

  • Two sub-trees of source code from a Team Foundation Server team project.
  • The two component build pipelines.

Parameters:

  • File paths to MSBUILD, MSTEST and MSXSL. MSXSL is a publicly available tool for applying XSL transformations, used here to convert TRX results from MSTEST into xUnit-compatible XML.
  • Indicator as to whether the pipeline should run automatically based on upstream triggers or wait for manual intervention.

Commands:

  • Build stage
    • Fetch the component libraries from Go’s repository.
    • Call MSBUILD to build the site, build unit tests, build the post-deployment smoke test and create the package for IIS Web Deploy.
  • Test stage
    • MSTEST is called to run the unit tests.
    • MSXSL is called to transform the test results for Go.

Artifacts:

  • IIS Web Deploy package
  • Environment-specific Web Deploy parameter files (integration, staging and production)
  • Post-deployment smoke test
  • Test results 
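A sketch of the build stage for this pipeline follows. The fetchartifact attributes and file names are hypothetical; /T:Package with a PackageLocation property is the stock MSBuild route to producing a Web Deploy package:

```xml
<stage name="build">
  <jobs>
    <job name="package">
      <tasks>
        <!-- pull the component libraries built by the upstream pipelines -->
        <fetchartifact pipeline="build.component" stage="build" job="build"
                       srcfile="Component.dll" dest="src\Site\lib" />
        <!-- build the site and emit a Web Deploy package -->
        <exec command="#{MSBUILD}">
          <arg>src\Site\Site.csproj</arg>
          <arg>/T:Package</arg>
          <arg>/p:Configuration=Release</arg>
          <arg>/p:PackageLocation=Site.zip</arg>
        </exec>
      </tasks>
      <artifacts>
        <artifact src="Site.zip" />
        <artifact src="parameters\*.xml" dest="parameters" />
      </artifacts>
    </job>
  </jobs>
</stage>
```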


Template Code

Pipeline Code


Deploying Using IIS Web Deploy Pipeline

Narrative

This pipeline uses IIS Web Deploy to deploy the site to IIS. The database is deployed separately. Feature toggles are used to isolate the new features of the application from incompatible older versions of the database schema.
Materials:

  • The upstream package-site pipeline artifacts in Go’s repository

Parameters:

  • File paths to MSBUILD, MSTEST and MSXSL. MSXSL is a publicly available tool for applying XSL transformations, used here to convert TRX results from MSTEST into xUnit-compatible XML.
  • Indicator as to whether the pipeline should run automatically based on upstream triggers or wait for manual intervention.
  • “Fetch pipeline” – the full path from this pipeline back through its ancestors to the original materials. In each subsequent, synchronous pipeline we extend this path. This tells Go that we have a sequence of “ancestors” through which artifacts flow.
  • The name of the environment: integration, staging or production

Commands:

  • Deploy stage
    • Fetch the deployment package and environment-specific parameter files from Go’s repository.
    • IIS Web Deploy (MSDEPLOY) is called to deploy the site to the specified environment.
  • Test stage
    • Fetch the smoke test library from Go’s repository.
    • MSTEST is called to run the smoke test.
    • MSXSL is called to transform the test results for Go.
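The deploy stage’s tasks can be sketched as below. The #{FETCH_PIPELINE} ancestor path (the “fetch pipeline” parameter described above) and all file names are hypothetical; -setParamFile is MSDEPLOY’s standard switch for applying an environment-specific parameter file to a package:

```xml
<stage name="deploy">
  <approval type="manual" />   <!-- omit for automatic triggering -->
  <jobs>
    <job name="msdeploy">
      <tasks>
        <!-- #{FETCH_PIPELINE} names the chain of ancestor pipelines -->
        <fetchartifact pipeline="#{FETCH_PIPELINE}" stage="build" job="package"
                       srcfile="Site.zip" />
        <fetchartifact pipeline="#{FETCH_PIPELINE}" stage="build" job="package"
                       srcdir="parameters" dest="parameters" />
        <exec command="msdeploy.exe">
          <arg>-verb:sync</arg>
          <arg>-source:package=Site.zip</arg>
          <!-- -dest:auto applies the package locally; a remote target
               would need computerName/credential settings instead -->
          <arg>-dest:auto</arg>
          <arg>-setParamFile:parameters\#{ENVIRONMENT}.parameters.xml</arg>
        </exec>
      </tasks>
    </job>
  </jobs>
</stage>
```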

Template Code

Packaging Database Changes Pipeline

Narrative

I use a database project created using Visual Studio’s database project wizard to manage the schema for my database. This is but one way to do it. Your choice really depends on personal preference and your experience using one of a variety of tools. DBDeploy (open source) is another good choice.
This pipeline builds the database project, which creates a SQL script used to deploy the schema changes.
Materials:

  • Database project from source control.
  • Database tests project from source control

Parameters:

  • File paths to MSBUILD, MSTEST and MSXSL. MSXSL is a publicly available tool for applying XSL transformations, used here to convert TRX results from MSTEST into xUnit-compatible XML.
  • Indicator as to whether the pipeline should run automatically based on upstream triggers or wait for manual intervention.
  • “Fetch pipeline” – the full path from this pipeline back through its ancestors to the original materials. In each subsequent, synchronous pipeline we extend this path. This tells Go that we have a sequence of “ancestors” through which artifacts flow.
  • The name of the environment: integration, staging or production

Commands:

  • MSBUILD is called three times to build environment-specific versions of the SQL deployment script.

Artifacts:

  • Three environment-specific deployment scripts to be used with SQLCMD.exe
  • Source code for the tests
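The three MSBUILD invocations might look like the sketch below. The project file, property values and script names are hypothetical; with a Visual Studio database project, running the Deploy target with DeployToDatabase=false generates the deployment script rather than applying it:

```xml
<tasks>
  <!-- one invocation per environment; this one targets integration -->
  <exec command="#{MSBUILD}">
    <arg>Database\Database.dbproj</arg>
    <arg>/t:Deploy</arg>
    <arg>/p:DeployToDatabase=false</arg>
    <arg>/p:TargetDatabase=AppDb</arg>
    <arg>/p:TargetConnectionString=Data Source=int-sql-01;Integrated Security=True</arg>
    <arg>/p:DeployScriptFileName=deploy-integration.sql</arg>
  </exec>
  <!-- repeated with staging and production connection settings -->
</tasks>
```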

Template Code

Pipeline Code

Deploying Database Changes Pipeline

Narrative

This pipeline runs an environment-specific deployment SQL script using SQLCMD and tests the database to ensure the schema changes were correctly applied. I am not applying a failsafe rollback script when the tests fail, but doing so is a good practice.
Materials:

  • Upstream database packaging pipeline artifacts in Go’s repository.

Parameters:

  • File paths to MSBUILD, MSTEST and MSXSL. MSXSL is a publicly available tool for applying XSL transformations, used here to convert TRX results from MSTEST into xUnit-compatible XML.
  • Indicator as to whether the pipeline should run automatically based on upstream triggers or wait for manual intervention.
  • “Fetch pipeline” – the full path from this pipeline back through its ancestors to the original materials. In each subsequent, synchronous pipeline we extend this path. This tells Go that we have a sequence of “ancestors” through which artifacts flow.
  • The name of the environment: integration, staging or production
  • The path to SQLCMD.exe

Commands:

  • Deploy stage:
    • Fetch the environment-specific SQL script from Go’s repository
    • SQLCMD is called to run the script
  • Test stage:
    • Fetch the test source code from Go’s repository
    • Configure the app.config file for the tests depending on the environment
    • MSTEST is called to run the tests
    • MSXSL is called to transform the test results for Go.

Artifacts: None.
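The deploy stage might be sketched as follows; the server name, script names and parameter names are hypothetical placeholders:

```xml
<stage name="deploy">
  <jobs>
    <job name="sqlcmd">
      <tasks>
        <!-- fetch the environment-specific script built upstream -->
        <fetchartifact pipeline="#{FETCH_PIPELINE}" stage="build" job="package"
                       srcfile="deploy-#{ENVIRONMENT}.sql" />
        <exec command="#{SQLCMD}">
          <arg>-S</arg>
          <arg>#{ENVIRONMENT}-sql-01</arg>
          <arg>-E</arg>                        <!-- integrated security -->
          <arg>-i</arg>
          <arg>deploy-#{ENVIRONMENT}.sql</arg>
          <arg>-b</arg>                        <!-- exit non-zero on SQL errors so Go fails the task -->
        </exec>
      </tasks>
    </job>
  </jobs>
</stage>
```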

Template Code

Pipeline Code


This post is from Mark Richter's blog. Click here to see the original post in full.