
Cloud-Based DevOps: Possible on Windows?

As a developer who has worked in the Windows environment for over 10 years, I have overcome my fair share of challenges. I was developing before Google was useful, when finding information required an MSDN license, and when deployment meant copying a DLL onto a file share, or worse, installing it with a CD.

Thankfully, the world of development and operations tooling and practices has moved on considerably since then. Back then it was fraught with the fear that at any moment something could go horribly wrong in production! And then you would have an angry operations colleague on your hands.

Thankfully, now we are making friends. The DevOps movement has allowed us to break down the silos and work together despite our different concerns. I’m trying to get more features into an environment, and my ops colleague is trying to keep production stable. We’ve realized that by working together we can release lots of small, well-tested features into lots of production-like environments, and when we are happy they won’t break, we can release them together with little fear and skip merrily along into the sunset. Or at least that’s how it appears when you are looking at Unix land from the dark underworld of Windows.

“Tooling is a solved problem in DevOps. Except if you are in Windows”

I keep hearing “tooling is a solved problem in DevOps”. But is that really the case on Windows? Let’s look at some of the challenges you will face when trying to build infrastructure and deploy applications in the cloud on a Windows platform. What can we as a community do to make it a better and brighter place? Maybe then, we can be the ones doing the skipping.

With any problem I face I like to break it up into understandable pieces that I can tackle one by one. So, let’s break up the challenge of DevOps for the cloud into three parts: Build, Deployment and Environment Provisioning.

1. Build: 

This is one of the few areas that is actually a solved problem in Windows. The real challenges are self-inflicted through bad tooling choices. Build is really made up of source control management (SCM), orchestration of compilation, testing, and packaging your application for deployment. For this you should choose tools that support the practice of continuous integration, let you manage your builds exactly how you want them, show what stage your build is at, and give the team immediate feedback when it breaks.
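To make that concrete, here is a minimal sketch of what a scripted compile-test-package chain might look like in PowerShell. The solution, test runner and nuspec paths are all hypothetical, and in practice your CI tool would orchestrate these steps:

    # A minimal sketch of a scripted build: compile, test, package.
    # Paths and project names are illustrative; assumes msbuild is on your PATH.
    & msbuild .\MyApp.sln /p:Configuration=Release
    if ($LASTEXITCODE -ne 0) { throw "Compilation failed" }

    # Run the unit tests with your test runner of choice (NUnit shown here)
    & .\tools\nunit\nunit-console.exe .\MyApp.Tests\bin\Release\MyApp.Tests.dll
    if ($LASTEXITCODE -ne 0) { throw "Tests failed" }

    # Package the result for deployment (an MSI, a NuGet package, etc.)
    & .\tools\nuget\NuGet.exe pack .\MyApp\MyApp.nuspec

The point is that every step is a command with an exit code, so the build server can fail fast and tell the team immediately.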

2. Deployment: 

This is where it gets a little harder. It should be as easy as taking your packaged application and dropping it onto a server somewhere. In Windows you might want to configure and restart IIS, install and run your services or any other 3rd party software and services, run your database scripts, and run a smoke test to check it is all functioning together as expected. Taking your packages and copying them onto the server is actually quite easy if you are using a packaging mechanism like an MSI, for example. These tools can help with it:

  • PowerShell: PowerShell can copy over your files, restart IIS, start your services, and run many other remote commands on your server - provided your server supports the remoting commands you need (see the sketch after this list).
  • Octopus: Octopus is a new tool that installs Tentacles onto your servers, setting up a remote connection. If you package your application with OctoPack, it can also copy over your files and start IIS and other services. 
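As a rough illustration of the PowerShell route, here is a minimal sketch of a remoting deployment. It assumes WinRM is enabled on the target; the server name, deployment share and package paths are all hypothetical:

    # Deploying a package over PowerShell remoting - a sketch, not a recipe.
    # Assumes WinRM is enabled and configured on the target server.
    $session = New-PSSession -ComputerName "web01.example.local"

    # Copy the package to the server via a (hypothetical) deployment share
    Copy-Item -Path ".\MyApp.msi" -Destination "\\web01\deploy\MyApp.msi"

    Invoke-Command -Session $session -ScriptBlock {
        # Install the package silently, then restart IIS
        Start-Process msiexec.exe -ArgumentList '/i C:\deploy\MyApp.msi /qn' -Wait
        Restart-Service W3SVC
    }

    Remove-PSSession $session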

Pain

This all sounds pretty easy and awesome, but it starts to get painful really quickly as soon as you want to deploy something other than your own application or components. Something that someone else wrote that your application depends upon, which, let’s face it, is a very common requirement. First of all, there isn’t a common packaging mechanism in Windows. People use a combination of MSIs, INIs, Nullsoft, Inno Setup and others that I am sure you will discover. Not all of those can be automated or accessed through the command line, and they often require a human to click a button to install. GUI installs are really hard, if not impossible, to automate. Some might provide command-line scripted installs, but they offer less functionality than their GUI counterparts (e.g. NuGet). So what can you do? 

  1. Choose tools you can script and automate: Unfortunately, that might be out of the question. The software may have already been purchased or may not exist. But if you do have the opportunity, make the automation and testing of 3rd party tools a first-class concern, so that you can repeatedly deploy, configure and test them without introducing any opportunity for human error. There is some good news here, though. Let me introduce you to Chocolatey: a package install tool along the lines of Homebrew on the Mac, which lets you install system packages. It uses PowerShell scripts and NuGet to install system applications for you. It could be the start of a standard packaging mechanism on the Windows platform, and for that reason it is worth keeping an eye on (see the example after this list).
  2. Build it yourself, but better: If you can’t buy it, build it. But build it with a common packaging mechanism like an MSI and the ability to deploy and configure it in a script. Or leverage an open source project and add the features you need by contributing. Again, that might also be out of the question, as it could be too expensive and time-consuming. There is a reason you had to buy the 3rd party software or tool in the first place. In reality, for the time being you’ll probably have to accept that it can’t be automated. But it isn’t all doom and gloom. 
  3. Create the need for change: Much of the reason software on Windows doesn’t have a common packaging mechanism, or uses a GUI to install, is customer requirements. Once upon a time we didn’t care or know to automate our deployments, and we were content with deploying them through the GUI one time only. So the vendors and toolmakers met that need. However, that need has changed, and only with a push from customers will the tide turn.
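For a flavour of what Chocolatey gives you, here is a hedged example. The package name is illustrative, and the exact flags vary between Chocolatey versions:

    # Installing a dependency as a scripted, repeatable step.
    # Package name is illustrative; newer Chocolatey versions may want a -y
    # flag to suppress confirmation prompts.
    choco install 7zip

One command, no GUI, no human clicking “Next”, and the same line works in an automated provisioning script as on your workstation.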

3. Environment Provisioning:

And now we’re onto the really tough problem. In the world of the cloud you have a couple of options: you can host your own cloud or use a third-party hosted solution. Regardless of which you choose, the benefit you gain is virtualization. Virtualization means you won’t be building all your environments from scratch on one box. Through shared use of server resources you can create lots of production-like environments at low cost, which allows you to do parallel testing and scale your infrastructure. 

Most hosted cloud options provided by vendors, like Azure from Microsoft and EC2 from Amazon, provide Infrastructure as a Service (IaaS). This will give you some basic Windows infrastructure, like your operating system, but that only solves part of the problem. Most real systems are heterogeneous, which means environments need to be configured for specific purposes. Examples of the types of environments you might need are:

  • Development: likely to have development tools and stubs for external components and applications.
  • Continuous integration (CI): may only include your build tool and basic frameworks like .NET, but won’t need development tools or real external components and applications.
  • Production-like: will have real external components and systems, but won’t need build and development tools.

You could start with a base image and then use configuration management tools like Puppet and Chef to configure your environment to your needs. Alternatively, you can build the entire environment from scratch using Puppet and Chef. But hold on, I thought we were on Windows?

Configuration Management

We are on Windows, and fortunately massive improvements have been made to configuration management tools like Puppet and Chef. They now have a lot of built-in support for Windows packages and configuration. A large part of the environment configuration can be performed using these tools, and for everything else there are always executable blocks that let you call out to PowerShell. If you do use an executable block, replace it with the native package or resource as soon as it becomes supported by default; lots of executable blocks are neither clean nor easy to manage.
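To illustrate why those call-outs get messy, here is a hypothetical script that a Puppet exec or Chef powershell_script block might invoke. The paths and the marker-file guard are purely illustrative:

    # A script an executable block might call while waiting for native support.
    # The guard makes the call-out idempotent, so the configuration run can
    # repeat safely. All paths are hypothetical.
    if (-not (Test-Path "C:\deploy\legacytool.installed")) {
        Start-Process msiexec.exe -ArgumentList '/i C:\deploy\LegacyTool.msi /qn' -Wait
        New-Item -Path "C:\deploy\legacytool.installed" -ItemType File | Out-Null
    }

Native resources give you that idempotency for free, which is exactly why the advice is to migrate off executable blocks as soon as you can.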

PowerShell & WMI

For everything else there is PowerShell. Specifically, Windows features: PowerShell on Windows Server 2008 R2 has a ServerManager module, and its cmdlets allow you to add a Windows Server feature. So with PowerShell, WMI (Windows Management Instrumentation) and WinRM (Windows Remoting), you can do pretty much anything the Windows Server API will let you, including automating Active Directory commands. The caveat is that your Windows Server needs to be at least Windows Server 2008, when PowerShell support became a built-in feature. Before then... good luck!
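For example, adding the IIS web server role on Windows Server 2008 R2 might look like this minimal sketch (feature names vary by role):

    # Add a Windows Server role/feature via the ServerManager module
    # (Windows Server 2008 R2 and above)
    Import-Module ServerManager
    Add-WindowsFeature Web-Server -IncludeAllSubFeature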

And let’s not forget our new friend Chocolatey, which will allow you to install system packages. More and more applications are becoming available in Chocolatey.

However, it isn’t all sunshine and lollipops. WinRM is actually pretty painful and fiddly to use, and PowerShell is an ugly, procedural language. You have to be careful not to let it grow arms and legs and become difficult to understand, and therefore difficult to maintain. We all know what happens to that kind of code. 

More Pain

There are a few other pain points that I would be remiss not to mention. Let’s start with registries. In Windows we have 32-bit and 64-bit views of the registry. Both Puppet and Chef have issues with living in one view and installing to another. Once you end up in this space, be prepared to debug and perhaps jump through some hoops to get things working. 
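If you need to see which view you are hitting, a quick check like the following sketch can help; on 64-bit Windows, the 32-bit view of the same key lives under Wow6432Node:

    # Compare the 64-bit and 32-bit registry views of the same key.
    # On 64-bit Windows, 32-bit processes are redirected to Wow6432Node.
    Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion" |
        Select-Object ProgramFilesDir
    Get-ItemProperty "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion" -ErrorAction SilentlyContinue |
        Select-Object ProgramFilesDir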

Other irritating “features” you need to manage are Windows updates and ISO mounting. ISO mounting is still not built into the Windows operating system, so you’ll need to download something like Virtual Clone Drive. And finally there is the cost. Even on Azure, Windows environments are more expensive than Linux, but good luck using that as an excuse to port all your software.

So now that I have thoroughly depressed you, and perhaps made you consider just doing it all on Linux, let’s talk about the light at the end of the tunnel. Each of these problems can currently be tackled in one of the following ways: 

  • Manage the problem

Being aware of a problem is the first step to fixing it. Don’t go into Windows automation believing it’s going to be easy and you won’t have to deal with these little niggles. You will. Therefore the best defense is a good offense. Be prepared, be careful with registries, ensure you manage Windows updates so they don’t manage you, and use a version of Windows that supports PowerShell and WinRM. Right now, I recommend Windows Server 2008 and above.

  • Create the need for change

I’ve said it before, but it is really the only way that things will get better. Microsoft and vendors have a responsibility to respond to requirements from their customers. So create those requirements by making them visible at scale. If we all started trying to do DevOps on Windows, the vendors would respond by trying to make it easier for us. We just need to remind them that using a GUI is not the kind of ease we are after. Push the community and support open source software (OSS). Even Microsoft is supporting OSS now. For example, they have open-sourced ASP.NET MVC and Entity Framework, demonstrating that a lot of fantastic tooling and innovation can be built and trusted in this space.

  • Automate what you can, accept what you can’t

Given the nature of software, I’m sure if I write this again in five years, we won’t have these problems and we’ll have awesome tools for automating the creation of our infrastructure and the deployment of our applications. But right now we have to create the requirement for change, and the only way to do that is for everyone to try and automate what they can. Use the best tool there is and petition to make it better. Automate what you can, and accept what you can’t… for now.

How has your experience been with Cloud-based DevOps on Windows?
