
Create Staging and Production servers with Vagrant and Ansible

The problem with manually managing your own Linux servers is that if you ever need to recreate a server, it is difficult, if not impossible, to reproduce the manual steps you took to get it into its current state.

Docker-based solutions to this problem have become popular over the years, as they allow you to define your infrastructure as containers. However, moving from manual server administration to Docker is a big step; so big that small and medium-sized businesses might never justify the investment to move away from manual configuration.

The search for something between manual configuration and Docker brought me to Vagrant and Ansible. Vagrant is a command-line tool that creates virtual machines from “boxes”: images of systems such as Ubuntu, provided either by HashiCorp, the creator of Vagrant, or by your own internal box repository.

Vagrant allows you to test that your automated configuration works, as you can create local staging servers as often as necessary: if your configuration doesn’t work, you can destroy the box and start again.

Instead of you configuring the Vagrant box manually, Vagrant can also “provision” the box at the moment it is created. It supports a variety of methods, such as a bash script, or a tool called Ansible.

Ansible is a tool that allows you to define server configuration and infrastructure (the servers you have) in .yml files. You run Ansible from the command line; it parses your .yml files, connects to your servers over SSH, and executes the parsed configuration as commands on each server.

As Ansible uses plain SSH, you don’t have to worry about learning or installing any new software on your servers. You just have to understand how to express the configuration as .yml files. In some difficult, or one-off, cases you can also run plain bash commands.
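To give a flavour of what this looks like, here is a minimal, hypothetical one-task playbook; apt is a real Ansible module, but the host group and package here are purely illustrative:

---
- hosts: all
  become: true
  tasks:
    - name: Ensure curl is installed
      apt:
        name: curl
        state: present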

Creating staging servers with Vagrant

At this point, you will need to have Vagrant, Ansible, and VirtualBox (the provider used below) installed on your system. I’m running Ubuntu, but the instructions should also work on a Mac. You’ll also need to install the vagrant-hostmanager plugin, which automatically updates your /etc/hosts file, allowing you to access your boxes using a hostname rather than an IP address. It also edits the /etc/hosts file on each of your boxes, so that they can communicate with each other too:

vagrant plugin install vagrant-hostmanager

Next you can initialise a Vagrantfile in a new directory.

vagrant init

This will create a Vagrantfile, written in Ruby, that is largely comments. You can read, then delete, the comments and insert the hostmanager configuration as below. I have also changed the box to ubuntu/bionic64.

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.hostmanager.enabled = true
  config.hostmanager.manage_host = true
  config.hostmanager.manage_guest = true
end

For this guide, I’m going to use the scenario of a web application, shop, running on one server, which communicates with a database on another server. So just before the final end in the Vagrantfile above, insert the following configuration. The IP addresses are arbitrary addresses within a private network range.

  config.vm.define "shop" do |shop|
    shop.vm.hostname = "shop.local"
    shop.vm.network "private_network", ip: "192.168.0.100"
    shop.vm.provider "virtualbox" do |v|
      v.memory = 512
      v.name = "shop"
    end
  end

  config.vm.define "database" do |database|
    database.vm.hostname = "database.local"
    database.vm.network "private_network", ip: "192.168.0.101"
    database.vm.provider "virtualbox" do |v|
      v.memory = 512
      v.name = "database"
    end
  end

You can then create these boxes by running the following command in the same directory as your Vagrantfile. After a while, you will be asked for your password, as the hostmanager plugin needs to modify your /etc/hosts file:

vagrant up

If you cat /etc/hosts, you will see that your virtual machines have been added. If you connect to one of the virtual machines through SSH and do the same, the same entries should exist there too.

user@local:~/shop] $ vagrant ssh shop
vagrant@shop:~] $ cat /etc/hosts
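On my machine, the entries added by vagrant-hostmanager looked something like this (the exact formatting and comment markers it writes may vary):

192.168.0.100 shop.local
192.168.0.101 database.local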

Note that vagrant ssh shop is a shortcut for the following command:

user@local:~/shop] $ ssh -i .vagrant/machines/shop/virtualbox/private_key vagrant@shop.local

This key is generated by Vagrant when you run vagrant up, to replace the key that ships inside the box. The reason is that the shipped key is used by everyone who initialises that box, and could therefore be insecure if not replaced. However, if this insecure key didn’t exist, you would never be able to make the initial connection to the virtual machine at all.

Later, we will need to give this information to Ansible, which can’t use the vagrant ssh shortcut and instead uses plain SSH.

Telling Ansible about our Vagrant servers

From reading the Ansible documentation, I found that Ansible is somewhat flexible regarding how you structure your configuration and inventory (the list of servers; in our case, shop and database).

Accordingly, your solution might be different; however, the one I came up with allows me to issue the following commands to configure either staging or production:

ansible-playbook -i inventories/production sites.yml --vault-password-file=~/.vault_pass.txt
ansible-playbook -i inventories/staging sites.yml --vault-password-file=~/.vault_pass.txt

These commands also handle entering either test or production credentials into your server configuration. The credentials are encrypted values that can be committed to source control. They are decrypted using the password in ~/.vault_pass.txt, which shouldn’t be committed to source control.

The idea is that if someone maliciously gains access to the server where you keep your source control, they will be able to download your server configuration, but without the vault password they will not be able to see the passwords. I will explain how this is used in more detail after we create the configuration .yml files themselves.

In comparison to production passwords, staging passwords don’t necessarily need to be protected; however, I decided to encrypt the staging passwords too, if only to check that the encryption mechanism worked.

To tell Ansible about our inventory, we want to set up a folder structure like the following, which is part of what Ansible describes as the “alternative directory layout”.

inventories
   ├── production
   │   ├── group_vars
   │   │   ├── all.yml
   │   │   ├── databases.yml
   │   │   └── webservers.yml
   │   └── hosts
   └── staging
       ├── group_vars
       │   ├── all.yml
       │   ├── databases.yml
       │   └── webservers.yml
       └── hosts

I recommend reading the documentation to fully understand what is happening here. First, I’ll create the staging hosts file. In addition to .yml, you can use an INI-style file:

shop ansible_ssh_host=shop.local ansible_ssh_port=22 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='~/shop/.vagrant/machines/shop/virtualbox/private_key'
database ansible_ssh_host=database.local ansible_ssh_port=22 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='~/shop/.vagrant/machines/database/virtualbox/private_key'

[webservers]
shop

[databases]
database

Remember how I mentioned we would need to tell Ansible where the Vagrant SSH key is? That is what we used above in the hosts file. The second part of the INI file assigns each server to a group, telling Ansible what type we want each server to be.

You can test that your hosts file works by using the ad-hoc Ansible “ping” module. The ansible_python_interpreter override is needed because the ubuntu/bionic64 box ships with Python 3 only:

~/shop] $ ansible -i inventories/staging webservers -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
shop | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
~/shop] $ ansible -i inventories/staging databases -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
database | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Now that we have told Ansible which hosts to use, we’ll move on to telling Ansible which hosts should use which configuration. You’ll be able to see that production and staging each have their own databases.yml and webservers.yml configuration.

This facilitates entering different passwords, and other configuration values, for different environments. You’ll also see the all.yml configuration file, which provides configuration values needed by both databases and webservers.

What we’ll change here is the staging all.yml file. We need to tell our webserver and database two things:

  1. what password to use, and
  2. what address to find the database at.

Both servers need to know this information, as the database needs to know which password is correct, while the webserver needs to know which password to send. Similarly, the database needs to know which IP address it should listen on, and the webserver needs to know which IP address to open a connection to. We will also record the webserver’s own address, as the database’s firewall rule later needs to know where to allow connections from. So in all.yml:

webserver_host: 192.168.0.100
database_host: 192.168.0.101
database_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          34633964626536383437353431633534656339343338363332323165333364373866393363306662
          6261326635626534306566383364643432353236626163340a666235626132383262353833653363
          37653832343936663162666236386334353431313863316262656139353434336333396666303931
          6533663864613338660a353238653161646162373965303938356339623139316339383066343465
          3463

You’ll see that the value of database_password is encrypted. You generate this value using a command called ansible-vault, which requires you to have a password defined in a file, such as ~/.vault_pass.txt. Just make sure that you don’t commit it to source control; you could share it with other developers in your business through a USB stick, or an encrypted service such as LastPass.
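If you haven’t created this password file yet, it is just a plain-text file containing a single line. For example, picking your own password:

echo 'choose-a-strong-password' > ~/.vault_pass.txt
chmod 600 ~/.vault_pass.txt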

ansible-vault encrypt_string --vault-password-file ~/.vault_pass.txt 'abcd1234' --name 'database_password'
database_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          34633964626536383437353431633534656339343338363332323165333364373866393363306662
          6261326635626534306566383364643432353236626163340a666235626132383262353833653363
          37653832343936663162666236386334353431313863316262656139353434336333396666303931
          6533663864613338660a353238653161646162373965303938356339623139316339383066343465
          3463
Encryption successful

Configuring Ansible Roles

The hosts and configuration are going to be the variables that change between our staging and production environments. The software which we install on our systems, in comparison, should be the same.

The idea is that you are able to test and reproduce your environment in staging, before rolling it out into production. You can destroy your Vagrant servers as many times as you like, before being confident that what you have will work in production.

Ansible divides the task of installing and configuring software into what it calls roles. Many pre-made roles already exist (for example, on Ansible Galaxy), so often you won’t even need to write the roles yourself.

For the purposes of this tutorial, we’ll try installing redis on our database and the redis-client on our shop. The directory structure you need to create will look like this:

roles
├── redis
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── redis-custom.conf.j2
└── redis-client
    └── tasks
        └── main.yml 

Ansible translates the main.yml files into commands that it runs on your server over SSH. The redis main.yml should look like this:

---
- name: Install redis
  become: true
  apt:
    name: redis 

- name: Copy redis custom configuration
  become: true
  template:
    src: ../templates/redis-custom.conf.j2
    dest: /etc/redis/redis-custom.conf
    owner: redis
    group: redis
    mode: 0600
 
- name: Add custom configuration to main redis file
  become: true
  lineinfile:
    path: /etc/redis/redis.conf
    insertafter: 'EOF'
    line: "include /etc/redis/redis-custom.conf"
 
- name: Restart redis
  become: true
  service:
    name: redis-server
    state: restarted
  
- name: Allow connections from webserver to redis
  become: true
  ufw:
    rule: "allow"
    port: "6379"
    proto: "tcp"
    from_ip: "{{ webserver_host }}"

You’ll need to read the Ansible documentation to understand everything in the file above; however, there are two interesting things. The first is at the bottom, where you will see the webserver_host variable placeholder in the firewall rule, so that only the webserver’s address may connect to redis. Ansible replaces such placeholders with the values we specified above in all.yml, decrypting any vault-encrypted values, such as database_password, first.

The second interesting part is the use of a Jinja template to specify our custom redis configuration. This allows us, again, to write variable placeholders directly into the configuration files themselves. So you’ll also need to create this template file under templates as redis-custom.conf.j2:

bind {{ database_host }}
requirepass {{ database_password }}
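With the staging values from all.yml above, Ansible will render this template on the database box as:

bind 192.168.0.101
requirepass abcd1234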

There is now just one more file we need to create: the configuration for the redis clients. Under redis-client/tasks, create another main.yml file:

---
- name: Install redis cli 
  become: true
  apt:
    name: redis-tools

Linking Roles to Inventory

Now that we have both our inventory and roles defined, we just need to connect the two together. To do this, we’ll create three files: webservers.yml, databases.yml, and sites.yml. The final directory layout will therefore look like this:

shop
├── databases.yml
├── webservers.yml
├── sites.yml
├── inventories
│   └── staging
│       ├── group_vars
│       │   └── all.yml
│       └── hosts
├── roles
│   ├── redis
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── redis-custom.conf.j2
│   └── redis-client
│       └── tasks
│           └── main.yml
└── Vagrantfile

So let’s start by creating webservers.yml which will link our webserver inventory to the redis-client role, telling Ansible that every webserver needs to have a redis client installed on it:

---
- hosts: webservers
  roles:
    - redis-client

Then a databases.yml file, which does the same for our database inventory:

---
- hosts: databases
  roles:
    - redis

You can now use Ansible to provision your entire infrastructure by issuing two commands: first specifying databases.yml, then webservers.yml:

~/shop] $ ansible-playbook -i inventories/staging databases.yml -e 'ansible_python_interpreter=/usr/bin/python3' --vault-password-file=~/.vault_pass.txt
~/shop] $ ansible-playbook -i inventories/staging webservers.yml -e 'ansible_python_interpreter=/usr/bin/python3' --vault-password-file=~/.vault_pass.txt

However to do it in one command, create a sites.yml file to link the two together:

---
- import_playbook: webservers.yml
- import_playbook: databases.yml

Now you can run the following:

~/shop]  $ ansible-playbook -i inventories/staging sites.yml -e 'ansible_python_interpreter=/usr/bin/python3' --vault-password-file=~/.vault_pass.txt

PLAY [webservers] *****************************************************************************************************************************

TASK [redis-client : Install redis cli] *******************************************************************************************************
ok: [shop]

PLAY [databases] ******************************************************************************************************************************

TASK [redis : Install redis] ******************************************************************************************************************
ok: [database]

TASK [redis : Copy redis custom configuration] ************************************************************************************************
ok: [database]

TASK [redis : Add custom configuration to main redis file] ************************************************************************************
ok: [database]

TASK [redis : Restart redis] ******************************************************************************************************************
ok: [database]

TASK [redis : Allow connections from webserver to redis] **************************************************************************************
ok: [database]

PLAY RECAP ************************************************************************************************************************************
database                   : ok=6    changed=0    unreachable=0    failed=0
shop                       : ok=1    changed=0    unreachable=0    failed=0 

Now if we SSH into our shop, we should be able to connect to redis using the password we encrypted above, abcd1234.

user@local:~/shop] $ vagrant ssh shop
vagrant@shop:~] $ redis-cli -h database.local 
database.local:6379> auth abcd1234
OK
database.local:6379> keys *
(empty list or set)

Congratulations! Note that this is fine for an exercise; however, in production you would also want to encrypt redis traffic to protect it from being sniffed on the network. Redis did not gain native TLS support until version 6, so with older versions this is typically done with a tunnel such as stunnel or spiped.

Running Ansible automatically from Vagrant

If you don’t want to have to run Ansible manually, you can actually tell Vagrant to automatically run Ansible for you, after it has finished creating your boxes. Just add the following configuration to your Vagrantfile:

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "sites.yml"
  ansible.inventory_path = "inventories/staging/hosts"
  ansible.vault_password_file = "~/.vault_pass.txt"
end
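The earlier ansible-playbook commands also passed the Python interpreter override on the command line. The provisioner can pass the same override through its extra_vars option; here is the block again with that line added, using the same interpreter assumption as before:

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "sites.yml"
  ansible.inventory_path = "inventories/staging/hosts"
  ansible.vault_password_file = "~/.vault_pass.txt"
  # Pass the same interpreter override as the manual commands above
  ansible.extra_vars = { ansible_python_interpreter: "/usr/bin/python3" }
end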

Now you can create and destroy your entire staging infrastructure with two commands: vagrant up and vagrant destroy.
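For example, a full rebuild of staging looks like this (the -f flag skips the confirmation prompt):

~/shop] $ vagrant destroy -f
~/shop] $ vagrant up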

Production

To recreate your staging environment in production, you need to do two things:

  1. Tell Ansible about your production inventory (servers) in the same way that we told it about our staging inventory above: create another INI hosts file telling Ansible how to connect to each production server through SSH (see the sketch after this list).
  2. Copy your staging group_vars folder and replace all the staging variables with their production equivalents. You’ll need to run ansible-vault encrypt_string over your production passwords, as we did for staging. You could also use a different ~/.vault_pass.txt for extra security.
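As a sketch, inventories/production/hosts might look like the following; the IP addresses, user, and key path are placeholders for your own servers:

shop ansible_ssh_host=203.0.113.10 ansible_ssh_port=22 ansible_ssh_user='deploy' ansible_ssh_private_key_file='~/.ssh/production_key'
database ansible_ssh_host=203.0.113.11 ansible_ssh_port=22 ansible_ssh_user='deploy' ansible_ssh_private_key_file='~/.ssh/production_key'

[webservers]
shop

[databases]
database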

Then, after making sure it all works in staging, you can run ansible-playbook with inventories/production to set up your production system:

~/shop] $ ansible-playbook -i inventories/production sites.yml --vault-password-file=~/.vault_pass.txt

If you find an issue with this tutorial, or find anything hard to follow, open an issue on GitHub.