The problem with manually managing your own Linux servers is that if you ever need to recreate a server, it is difficult, if not impossible, to reproduce the manual steps you took to get the server into its current state.
Docker-based solutions to this problem have become popular over the years, as they allow you to define your infrastructure as containers. However, moving from manual server administration to Docker is a big step. It might be so big that small and medium-sized businesses never find the investment to move away from manual configuration.
The search for something between manual configuration and Docker brought me to Vagrant and Ansible. Vagrant is a command-line tool that allows you to create virtual machines based on “boxes”: images of systems such as Ubuntu, provided either by HashiCorp, the creator of Vagrant, or by your own internal box repository.
Vagrant allows you to test that your automated configuration works, as you can create local staging servers as often as necessary: if your configuration doesn’t work, you can destroy the box and start again.
Rather than configuring your Vagrant box manually, Vagrant also allows you to “provision” the box at the moment it is created. It supports a variety of provisioning methods, such as a bash script, or a tool called Ansible.
Ansible is a tool that allows you to define server configuration and infrastructure (the servers you have) in .yml files. You can run Ansible from the command line, and it will parse your .yml files, connect to your servers through SSH, then run the parsed configuration as commands on each server.

As Ansible uses plain SSH, you don’t have to worry about learning or installing any new software on your servers. You just have to understand how to express the configuration in .yml files. In some difficult, or one-off, cases, you can also run commands over SSH yourself.
Creating staging servers with Vagrant
At this point, you will need to have installed Vagrant and Ansible on your system. I’m running Ubuntu; however, the instructions should also work on a Mac. You’ll also need to install the vagrant-hostmanager plugin, which will automatically update your /etc/hosts file, allowing you to access your boxes by hostname rather than by IP address. It will also edit the /etc/hosts file inside each of your boxes, so that they can communicate with each other too:
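Vagrant plugins are installed through Vagrant’s own plugin command; for the plugin named above, that is:

```shell
vagrant plugin install vagrant-hostmanager
```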
Next you can initialise a Vagrantfile in a new directory:
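For example, assuming an Ubuntu box (the directory and box names here are illustrations, not necessarily the original choices):

```shell
mkdir staging-infrastructure && cd staging-infrastructure
vagrant init ubuntu/xenial64
```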
This will create a Vagrantfile, written in Ruby, that is largely comments. You can read, then delete, the comments and insert the hostmanager configuration as below. I have also changed the box type from the default.
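A minimal sketch of what that Vagrantfile might look like; the box name is an example, and the hostmanager options are the plugin’s documented settings:

```ruby
Vagrant.configure("2") do |config|
  # Example box; substitute whichever box you prefer
  config.vm.box = "ubuntu/xenial64"

  # vagrant-hostmanager settings: manage /etc/hosts both on the
  # host machine and inside each guest box
  config.hostmanager.enabled = true
  config.hostmanager.manage_host = true
  config.hostmanager.manage_guest = true
end
```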
For this guide, I’m going to use the scenario of a web application, shop, running on one server, which communicates with a database on another server, database. So just before the end above, insert the following configuration. The IP addresses are chosen at random within a private network range.
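A sketch of the two machine definitions; the private IP addresses are illustrative, and any unused addresses in a private range will do:

```ruby
  config.vm.define "shop" do |shop|
    shop.vm.hostname = "shop"
    shop.vm.network :private_network, ip: "192.168.33.10"
  end

  config.vm.define "database" do |db|
    db.vm.hostname = "database"
    db.vm.network :private_network, ip: "192.168.33.11"
  end
```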
You can create these boxes by running the following command in the same directory as your Vagrantfile. After a while, you will need to enter your password, as Vagrant needs to modify your /etc/hosts file:
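That command is:

```shell
vagrant up
```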
If you cat /etc/hosts, you can see that your virtual machines have now been added. If you connect to one of your virtual machines through SSH and do the same thing, the same entries should exist there too.
vagrant ssh shop is a shortcut for the following command:
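In other words, something like the following; the port and key path shown are Vagrant’s usual defaults, and you can confirm the exact values for your machine with vagrant ssh-config shop:

```shell
ssh vagrant@127.0.0.1 -p 2222 \
    -i .vagrant/machines/shop/virtualbox/private_key
```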
This key is generated by Vagrant when you type vagrant up, to replace the existing key within the box. The reason is that the existing key is shared by everyone who initialises that box, and could therefore be insecure if not replaced. However, if this insecure key didn’t exist, you would never be able to make the initial connection to the virtual machine at all.
Later, we will need to give this information to Ansible, which won’t be able to use the vagrant ssh shortcut, but instead uses plain SSH.
Telling Ansible about our Vagrant servers
I found from reading the Ansible documentation that Ansible is somewhat flexible regarding how you structure your configuration and inventory (the list of servers, in our case shop and database). Accordingly, your solution might be different; however, the solution I came up with allows me to issue the following commands to configure either staging or production:
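For example, something along these lines (the playbook and inventory names are the ones created later in this article):

```shell
ansible-playbook -i inventories/staging --vault-password-file ~/.vault_pass.txt sites.yml
ansible-playbook -i inventories/production --vault-password-file ~/.vault_pass.txt sites.yml
```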
These commands also handle entering either test or production credentials into your server configuration. The credentials are encrypted values that can be committed to source control. They are decrypted using the password in .vault_pass.txt, which shouldn’t be committed to source control. The idea is that if someone malicious gains access to the server where you keep your source control, they will be able to download your server configuration, but won’t be able to see the passwords without the vault password. I will explain how this is used in more detail after we create the configuration .yml files themselves.
In comparison to production passwords, staging passwords don’t necessarily need to be protected; however, I decided to encrypt the staging passwords too, solely to check that the encryption mechanism worked.
To tell Ansible about our inventory, we want to set up a folder structure like the following, which is part of what Ansible describes as the “alternative directory layout”.
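One possible reconstruction of that layout, based on the files discussed below:

```
inventories/
├── staging/
│   ├── hosts
│   └── group_vars/
│       └── all.yml
└── production/
    ├── hosts
    └── group_vars/
        └── all.yml
```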
I recommend reading that documentation to fully understand what is happening here. First I’ll create the staging hosts file. In addition to .yml, you can use an INI-style file:
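A sketch of inventories/staging/hosts; the key paths are Vagrant’s defaults, and the group names match the roles used later in this article:

```ini
shop ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/shop/virtualbox/private_key
database ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/database/virtualbox/private_key

[webservers]
shop

[databases]
database
```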
Remember how I mentioned that we need to know where the Vagrant SSH key is, so that Ansible can connect? Well, that is what we used above in the hosts file. The second part of the INI file tells Ansible what type we want each server to be.
You can test that your hosts file works by using the ad-hoc Ansible “ping” command:
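For example:

```shell
ansible all -i inventories/staging/hosts -m ping
```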
Now that we have told Ansible which hosts to use, we’ll move on to telling Ansible which hosts should use which configuration. You’ll be able to see that production and staging each have their own group_vars folder. This facilitates entering different passwords, and other configuration values, for different environments. You’ll also see the all.yml configuration file, which provides configuration values that apply to both databases and webservers.
What we’ll change here is the staging all.yml file. We need to tell our webserver and database two things:
- what password to use, and
- what address to find the database at.
Both servers need to know this information: the database needs to know which password is correct, while the webserver needs to know which password to send. Similarly, the database needs to know which IP address it should listen on, and the webserver needs to know which IP address to open a connection to. So in the staging all.yml:
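A sketch of inventories/staging/group_vars/all.yml; the variable names are illustrative, and the encrypted value (which ansible-vault generates, as described below) is truncated:

```yaml
database_host: 192.168.33.11   # the database box's private IP
database_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          6331383936...        # ansible-vault output, truncated
```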
You’ll see that the value of the second key-value pair is encrypted. You generate this value using a command called ansible-vault. It requires you to have a password defined in a file, such as ~/.vault_pass.txt. Just make sure that you don’t commit this file to source control; you could instead share it with other developers in your business via a USB stick, or an encrypted service such as LastPass.
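One way to generate such an encrypted value is ansible-vault’s encrypt_string subcommand (available in Ansible 2.3 and later); the password and variable name here are examples:

```shell
ansible-vault encrypt_string --vault-password-file ~/.vault_pass.txt \
    'my-staging-password' --name 'database_password'
```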
Configuring Ansible Roles
The hosts and configuration are going to be the variables that change between our staging and production environments. The software which we install on our systems, in comparison, should be the same.
The idea is that you are able to test and reproduce your environment in staging, before rolling it out into production. You can destroy your Vagrant servers as many times as you like, before being confident that what you have will work in production.
Ansible divides the task of installing and configuring software into what it calls roles. Ansible has many pre-defined roles, so often you won’t even need to write the roles yourself.
For the purposes of this tutorial, we’ll try installing redis on our database server and the redis-client on our shop server. The directory structure you need to create will look like this:
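A reconstruction based on the files described below (the template filename is an assumption):

```
roles/
├── redis/
│   ├── tasks/
│   │   └── main.yml
│   └── templates/
│       └── redis.conf.j2
└── redis-client/
    └── tasks/
        └── main.yml
```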
The main.yml files will be translated by Ansible into SSH commands that are run on your server. The redis main.yml should look like this:
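A sketch of roles/redis/tasks/main.yml, assuming Ubuntu’s redis-server package; the final task shows where a database_host placeholder could appear:

```yaml
- name: Install redis
  become: yes
  apt:
    name: redis-server
    state: present

- name: Write custom redis configuration
  become: yes
  template:
    src: redis.conf.j2
    dest: /etc/redis/redis.conf

- name: Restart redis to pick up the configuration
  become: yes
  service:
    name: redis-server
    state: restarted

- name: Wait until redis is listening on the private address
  wait_for:
    host: "{{ database_host }}"
    port: 6379
```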
You’ll need to read the Ansible documentation to understand everything in the file above; however, there are two interesting things. The first is at the bottom, where you will see the familiar database_host variable placeholder. This will be replaced by Ansible with the decrypted version of the variable, from the encrypted version that we specified above in all.yml.
The second interesting part is the use of a Jinja template to specify our custom redis configuration. This allows us, again, to write variable placeholders directly into the configuration files themselves. So you’ll also need to create this template file under the role’s templates directory:
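A sketch of roles/redis/templates/redis.conf.j2, showing placeholders for the two variables defined earlier; a real redis.conf would carry many more settings:

```jinja
bind {{ database_host }}
requirepass {{ database_password }}
```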
There is now just one more file we need to create, and that is the configuration for the redis clients. So under redis-client/tasks, create another main.yml:
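A sketch of roles/redis-client/tasks/main.yml; redis-tools is the Ubuntu package that provides the redis-cli client:

```yaml
- name: Install the redis command-line client
  become: yes
  apt:
    name: redis-tools
    state: present
```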
Linking Roles to Inventory
Now that we have both our inventory and roles defined, we just need to connect the two together. To do this, we’ll create three files: the first, webservers.yml; the second, databases.yml; and the third, sites.yml. The final directory layout will therefore look like this:
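One possible reconstruction of the final layout:

```
.
├── Vagrantfile
├── sites.yml
├── webservers.yml
├── databases.yml
├── inventories/
│   ├── staging/
│   └── production/
└── roles/
    ├── redis/
    └── redis-client/
```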
So let’s start by creating webservers.yml, which will link our webserver inventory to the redis-client role, telling Ansible that every webserver needs to have a redis client installed on it:
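A sketch of webservers.yml:

```yaml
- hosts: webservers
  roles:
    - redis-client
```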
Next, create the databases.yml file, which does the same for our database inventory:
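And a corresponding sketch of databases.yml:

```yaml
- hosts: databases
  roles:
    - redis
```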
You can actually now use Ansible to provision your entire infrastructure by issuing two commands: first specifying webservers.yml, then databases.yml.
However, to do it in one command, create a sites.yml file to link the two together:
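A sketch of sites.yml; import_playbook requires Ansible 2.4 or later (older versions used include instead):

```yaml
- import_playbook: webservers.yml
- import_playbook: databases.yml
```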
Now you can run the following:
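Presumably something like:

```shell
ansible-playbook -i inventories/staging --vault-password-file ~/.vault_pass.txt sites.yml
```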
Now if we SSH into our shop box, we should be able to connect to redis using the password we encrypted above:
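For example (the password is whatever plain-text value you encrypted with ansible-vault):

```shell
vagrant ssh shop
redis-cli -h database -a 'my-staging-password' ping
```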
Congratulations! Note that this is fine for an exercise; however, in production you might also want to configure redis to run over SSL, to protect traffic from being sniffed on the local network.
Running Ansible automatically from Vagrant
If you don’t want to have to run Ansible manually, you can actually tell Vagrant to run Ansible for you automatically, after it has finished creating your boxes. Just add the following configuration to your Vagrantfile:
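A sketch using Vagrant’s built-in Ansible provisioner; the playbook and vault settings assume the files created above:

```ruby
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "sites.yml"
    ansible.vault_password_file = "~/.vault_pass.txt"
  end
```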
Now you can create and destroy your entire staging infrastructure with two commands: vagrant up and vagrant destroy.
To recreate your staging environment on production, you need to do two things:
- Tell Ansible about your production inventory (servers) in the same way that we told it about our staging inventory above, by creating more INI files and telling Ansible how to connect to each server through SSH.
- Copy and paste your staging group_vars folder, replacing all the staging variables with their production equivalents. You’ll need to run ansible-vault over your production passwords, like we did with staging. You could also use a different ~/.vault_pass.txt for extra security.
Then, after making sure it all works in staging, you can run the same playbook against inventories/production to set up your production system:
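For example:

```shell
ansible-playbook -i inventories/production --vault-password-file ~/.vault_pass.txt sites.yml
```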
If you find an issue with this tutorial, or find anything hard to follow, open an issue on GitHub.