Test-Driven CAPS (Chef, Ansible, Puppet and Salt) Provisioning with Docker

By Martin Rusev

In the last couple of years I have had the opportunity to work for extensive periods of time with all four major provisioning tools. I started with Chef, then moved to Puppet, and currently I am using both Ansible and SaltStack for different use cases.

Provisioning tools are a really great addition to our DevOps arsenal, but one thing that really bothers me is how difficult it is to test your code before applying it to your actual infrastructure.

Testing directly on localhost is a recipe for disaster - a provisioning resource could work fine on your dev machine, but be completely broken on the Ubuntu LTS running in production.

Another way is to use a separate virtual machine in VMware/VirtualBox, optionally with Vagrant as a wrapper around them. In this case, you have to figure out a way to always start from a blank state - running apt-get install build-essential once could mislead you into thinking that everything will be fine in production.

The same goes for any changes you apply with your recipes and forget to reverse afterwards. Transferring recipes between your local dev machine and your virtual machines is not a trivial task either.

There is a better and simpler way, and in this post I want to share how I use Docker to test all my provisioning resources before deploying them to production.

Why Docker?

My first encounter with Docker was almost two years ago, when I was looking for a faster alternative to Vagrant. At first I didn't care much about containers - all I knew was that Docker could boot a container in a matter of seconds, compared to the 2-3 minutes a Vagrant VM sometimes takes.

My goal was to test the collector agent for Amon across the most popular distros - Ubuntu, CentOS and Debian. That adds up to between 6 and 10 machines, if we include the most used versions of each distro (Ubuntu 14.04 LTS, Ubuntu 15.04, Debian 8, Debian 7, etc.).

With Docker I was able to wrap everything in a single bash loop and execute it on all of them in less than 30 seconds.

#!/bin/bash
# Build a test image for each supported distro
declare -a distros=(ubuntu1404 ubuntu1504 debian7 debian8 centos6 centos7)

for distro in "${distros[@]}"
do
    cp "$distro/Dockerfile" .
    docker build --tag "$distro" .
    rm Dockerfile
done

How does it work?

It is a very simple and straightforward process, and it works with all existing CAPS provisioning tools and any new ones that come out in the future. I am going to use Ansible to explain the core concept, and in the last part of this post I will cover the small differences if you are using Salt, Puppet or Chef instead.

  1. The first step is to install Docker on your dev machine, then find and pull an image from Docker Hub with the distro(s) you would like to test. On top of that image, you install your favorite provisioning tool and use the resulting image as a base for testing.

Optionally, you can skip step 1 and search Docker Hub for an image with your provisioning tool already installed. There are several officially supported images for SaltStack and community-supported ones for Ansible, Chef and Puppet.
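
If you do want to see what is already out there, docker search lists the matching images on Docker Hub together with their star counts, so you can quickly spot the most popular candidates:

docker search ansible
docker search saltstack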

Even then, you still have to go through each one of them to check the installed version and read the Dockerfile, and I rarely found an image with any docs. The time you will spend digging through these premade images is more than enough to build your own.

# Pull the base images
docker pull ubuntu
docker pull debian

# Dockerfile
FROM ubuntu:14.04

# apt-add-repository lives in software-properties-common on the bare Ubuntu image
RUN apt-get update && apt-get install -y software-properties-common
RUN apt-add-repository -y ppa:ansible/ansible
RUN apt-get update && apt-get install -y ansible

# Build the test images - one Dockerfile per distro, each in its own directory
docker build --force-rm=true --rm=true --no-cache --tag="ansible:ubuntu1404" ubuntu1404/
docker build --force-rm=true --rm=true --no-cache --tag="ansible:debian7" debian7/

  2. The next step is to copy your provisioning resources into the container and test them.

For this step we are going to create an ephemeral container - it doesn't need a name or tag. This container is going to run the playbook and will be destroyed afterwards.

If the playbook fails, the intermediate image will remain in your Docker images list as "dangling" and has to be removed with docker rmi $(docker images -q --filter dangling=true)

# apache.yml
---
- hosts: localhost
  connection: local
  # the container already runs as root, so sudo/become is not needed
  tasks:
    - name: Install Apache.
      command: apt-get install -y --force-yes apache2

    - shell: apache2 -v
      register: version

    - debug: msg="{{version.stdout}}"

# Dockerfile
FROM ansible:ubuntu1404

WORKDIR /tmp
COPY apache.yml /tmp

# ==> Creating inventory file...
RUN echo localhost > inventory

# ==> Executing Ansible...
RUN ansible-playbook -i inventory apache.yml --connection=local

# Makefile - make test  (recipe lines must be indented with a tab)
test:
	docker build --force-rm=true --rm=true --no-cache .
	docker rmi $$(docker images -q --filter dangling=true)

  3. The final step is to create a test suite and run it after all the changes described in the playbook have been applied. There are several infrastructure testing frameworks out there, with Serverspec being the most popular.

Serverspec is a Ruby/RSpec-based project, which might be a problem if you don't have any experience with Ruby. To use it, you will need some Ruby-specific knowledge - how to work with RSpec, how to install gems with Bundler and how to create and run tasks with Rake.
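
If you are starting from scratch, the setup looks roughly like the sketch below - the serverspec gem ships a serverspec-init command that generates a spec/ directory, a spec_helper and a Rakefile (the exact files and rake tasks depend on the backend you pick):

# Gemfile - a minimal sketch
source 'https://rubygems.org'

gem 'serverspec'
gem 'rake'

# Install the gems, generate the skeleton and run the suite
bundle install
bundle exec serverspec-init
bundle exec rake spec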

If this is not your cup of tea, you can explore Bats, aka the Bash Automated Testing System. Bats is a TAP-compliant testing framework for Bash. It is simpler than Serverspec, but you have to be absolutely sure that your bash checks are error free and work the same way across different distros and bash versions.

# Serverspec
require 'spec_helper'

describe package('httpd'), :if => os[:family] == 'redhat' do
  it { should be_installed }
end

describe package('apache2'), :if => os[:family] == 'ubuntu' do
  it { should be_installed }
end

# Bats
#!/usr/bin/env bats

@test "Check if the apache server is available" {
    command -v apache2
}

While doing the research for this article, I came across a new testing framework called goss. Goss is written in Go, distributed as a single binary, and I think it is a decent compromise between Bats and Serverspec. It comes with nice automatic test generation capabilities, and the test format is human readable and stored in JSON.

$ goss autoadd apache2
Adding to './goss.json':

{
    "package": {
        "apache2": {
            "installed": true
        }
    },
    "service": {
        "apache2": {
            "enabled": true,
            "running": true
        }
    },
    "port": {
        "tcp:80": {
            "listening": true,
            "ip": ["0.0.0.0"]
        }
    }
}

$ goss validate
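
To wire whichever test suite you pick into the workflow, extend the test Dockerfile from step 2 so the checks run right after the playbook. The sketch below uses goss and assumes you have downloaded the goss binary next to the Dockerfile beforehand (it is distributed as a single file on the project's releases page); keep in mind that checks such as "running": true can fail during docker build unless the service is actually started in the same step:

# Dockerfile - a sketch, extending the test image from step 2
FROM ansible:ubuntu1404

WORKDIR /tmp
COPY apache.yml goss.json /tmp/

RUN echo localhost > inventory
RUN ansible-playbook -i inventory apache.yml --connection=local

# Copy in the goss binary (downloaded beforehand from the goss releases page)
# and validate the checks generated with goss autoadd
COPY goss /usr/local/bin/goss
RUN chmod +x /usr/local/bin/goss && goss -g /tmp/goss.json validate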

SaltStack

To test your Salt states with Docker, you have to install salt-minion and set the following configuration option:

# /etc/salt/minion
file_client: local

You can apply your states with salt-call --local state.sls apache
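
Everything else is the same as in the Ansible example. A minimal sketch, assuming a base image with salt-minion already installed (the salt:ubuntu1404 tag and the apache.sls state are placeholders):

# Dockerfile - a sketch for Salt
FROM salt:ubuntu1404

# salt-call --local reads states from /srv/salt by default
COPY apache.sls /srv/salt/

RUN echo "file_client: local" >> /etc/salt/minion
RUN salt-call --local state.sls apache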

Chef

To test your Chef recipes, you have to install chef-client and use the CLI argument --local-mode (-z is the short variant) and -o to test a single recipe.

chef-client -o "recipe[apache2]" -z
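
The recipe can be executed during the image build in the same way. A minimal sketch, assuming a base image with chef-client installed (the chef:ubuntu1404 tag is a placeholder) and the standard chef-repo layout with a cookbooks/ directory:

# Dockerfile - a sketch for Chef
FROM chef:ubuntu1404

WORKDIR /tmp
# chef-client in local mode picks up cookbooks from ./cookbooks
COPY cookbooks /tmp/cookbooks

RUN chef-client -z -o "recipe[apache2]"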

Puppet

There is nothing specific when it comes to running your Puppet manifests in a Docker container - puppet apply apache.pp works fine.
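
A minimal sketch, assuming a base image with Puppet already installed (the puppet:ubuntu1404 tag is a placeholder):

# Dockerfile - a sketch for Puppet
FROM puppet:ubuntu1404

WORKDIR /tmp
COPY apache.pp /tmp/

RUN puppet apply apache.pp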

Examples

You can find all the code described in this article, complete with Ansible, Puppet, Chef and SaltStack examples, in the following GitHub repo - https://github.com/martinrusev/devops-articles/tree/master/tdd-caps-with-docker

Note on Puppet Beaker and Test Kitchen

Puppet Beaker and Test Kitchen for Chef are both popular projects in their respective communities. Behind the scenes, both rely heavily on Serverspec. Both projects have somewhat lacking documentation and, at least in my opinion, a steep learning curve.

If you have already invested the time to learn them and you are happily using them in your workflow - great. Still, I think my approach could be appealing if you want more precise control over the testing process and want to avoid vendor lock-in.