Ansible and Drupal Development - Part 2

In part 1 of this tutorial, we covered how to configure and use Ansible for local Drupal development. If you didn't have a chance to read that article, you can download my fork of Jeff Geerling's Drupal Dev VM to see the final, working version from part 1. In this article, we'll switch things up quite a bit as we take a closer look at the second set of three requirements, namely:

  1. Using the same playbook for both local dev and remote administration (on DigitalOcean)
  2. Including basic server security
  3. Making deployments simple

TL;DR Feel free to download the final, working version of this repo and/or use it to follow along with the article.

Caveat Emptor

Before we dig in, I want to stress that I am not an expert in server administration and am relying heavily on the Ansible roles created by Jeff Geerling. The steps outlined in this article come from my own experience of trying to use Ansible to launch this site, and I'm not aware of where they stray from best practices. But if you're feeling adventurous, or, like me, foolhardy enough to jump in headfirst and just try to figure it out, then read on.

Sharing Playbooks Between Local and Remote Environments

One of the features that makes Ansible so incredibly powerful is the ability to run a given task or playbook across a range of hosts. For example, when the Drupal Security Team announced the SQL injection bug now known as "Drupalgeddon", Jeff Geerling wrote a great post about using Ansible to deploy a security fix on many sites. Given that any site that was not updated within 12 hours is now considered compromised, you can easily see what an important role Ansible can play. Ansible is able to connect to any host that is defined in the default inventory file at /etc/ansible/hosts. However, you can also create a project-specific inventory file and put it in the git repo, which is what we'll do here.
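To give a flavor of what that looks like, here's a hypothetical ad-hoc command (the group name, docroot path, and drush command are illustrative, not Jeff's actual fix) that runs database updates across every host in a group:

ansible webservers -m shell -a "drush --root=/var/www/example.com/docroot updb -y"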

To start with, we'll add a file called "inventory" and put it in the provisioning folder. Inventories are in ini syntax, and basically allow you to define hosts and groups. For now, simply add the following lines to the inventory:

[dev]
yourdomain.dev

The inventory can define hostnames or IP addresses, so 192.168.88.88 (the IP address from the Vagrantfile) would work fine here as well. Personally, I prefer hostnames because I find them easier to organize and track, and they will also help us avoid issues with Ansible commands on the local VirtualBox VM. With our dev host defined, we are now able to set any required host-specific variables.

Ansible is extremely flexible in how you create and assign variables. For the most part, we'll be using the same variables for all our environments. But a few of them, such as the Drupal domain, ssh port, etc., will be different. Some of these differences are related to the group (such as the ssh port Ansible connects to), while others are host-specific (such as the Drupal domain). Let's start by creating a folder called "host_vars" in the provisioning folder, with a file in it named after the host name of your dev site (a-fro.dev for me). Add the following lines to it:

---
drupal_domain: "yourdomain.dev"

At this point, we're ready to dig into remote server configuration for the first time. Lately, I've been using DigitalOcean to host my virtual servers because they are inexpensive (starting at $5/month) and they have a plethora of good tutorials that helped me work through the manual configuration workflows I was using. I'm sure there are many other good options, but the only requirement is to have a server to which you have root access and have added your public key. I also prefer to have a staging server where I can test things remotely before deploying to production, so for the sake of this tutorial let's create a server that will host stage.yourdomain.com. If you're using a domain for which DNS is not yet configured, you can just add it to your system's hosts file and point to the server's IP address.
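If you're managing DNS by hand for now, the hosts-file entries might look something like this (the droplet IP is a placeholder; substitute your server's actual address):

192.168.88.88   yourdomain.dev        # the Vagrant VM from the Vagrantfile
203.0.113.10    stage.yourdomain.com  # your DigitalOcean droplet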

Once you've created your server (I chose the most basic plan at DO and added Ubuntu 12.04 x32), you'll want to add it to your inventory like so:

[staging]
stage.yourdomain.com

Assuming that DNS is either already set up, or that you've added the domain to your hosts file, Ansible is now almost ready to talk to the server for the first time. The last thing Ansible needs is some ssh configuration. Adding it to your ~/.ssh/config file would work for now, but we'll see that that approach imposes some limitations as we move forward, so let's go ahead and add the ssh config to the host file (host_vars/stage.yourdomain.com):

---
drupal_domain: "stage.yourdomain.com"
ansible_ssh_user: root
ansible_ssh_private_key_file: '~/.ssh/id_rsa'

At this point, you should have everything you need to connect to your virtual server and configure it via Ansible. You can test this by heading to the provisioning folder of your repo and typing ansible staging -i inventory -m ping, where "staging" is the group name you defined in your inventory file. You should see something like the following output:

stage.yourdomain.com | success >> {
    "changed": false,
    "ping": "pong"
}

If that's what you see, then congratulations! Ansible has just talked to your server for the first time. If not, you can try running the same command with -vvvv and check the debug messages. We could now run the playbook from part 1 and it should configure the server, but before doing that, let's take a look at the next requirement.

Basic Server Security

Given that the Drupal Dev VM is really set up to support a local environment, it's missing important security features and requirements. Luckily, Jeff comes to the rescue again with a set of additional Ansible roles we can add to the playbook to help fill in the gaps. We'll need the roles installed on our system, which we can do with ansible-galaxy install -r requirements.txt (read more about roles and requirements files). If you already have the roles installed, the easiest way to make sure they're up-to-date is with ansible-galaxy install -r requirements.txt --force (since updating a role is not yet supported by Ansible Galaxy).
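If you're assembling the requirements file yourself, the (pre-Galaxy 2.0) requirements.txt format is simply one role name per line. For this section you'd need at least the following, alongside whatever roles part 1 already used:

geerlingguy.firewall
geerlingguy.security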

In this section, we'll focus on the geerlingguy.firewall and geerlingguy.security roles. Jeff uses the same pattern for all his Ansible roles, so it's easy to find the default vars for a given role by substituting the role name (i.e. ansible-role-rolename) into the URL: https://github.com/geerlingguy/ansible-role-security/blob/master/defaults/main.yml. The two variables we care about here are security_ssh_port and security_sudoers_passwordless. This role will help us disable password authentication and root login, change the ssh port, and add a configured user account to the passwordless sudoers group.
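At the time of writing, the relevant defaults looked roughly like this (check the role's defaults/main.yml for the current values):

security_ssh_port: 22
security_sudoers_passwordless: []
security_autoupdate_enabled: true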

You might notice that the role says "configured user accounts", which raises the question: where does the account get configured? This was actually a stumbling block for me for a while, as I had to work through many different issues in my attempts to create and configure the role. The approach we'll take here works, though it may not be the most efficient (or best-practice; see Caveat Emptor above). There is another wrinkle as well: the first time we connect to the server, it will be over the default ssh port (22), but from then on we want to use a more secure port. We also need to make sure that port gets opened on the firewall.

Ansible's variable precedence is going to help us work through these issues. To start with, let's take a look at the following example vars file:

---
ntp_timezone: America/New_York

firewall_allowed_tcp_ports:
  - "{{ security_ssh_port }}"
  - "80"

# The core version you want to use (e.g. 6.x, 7.x, 8.0.x).
# A-fro note: this is slightly deceptive because it's really used to check out the correct branch
drupal_core_version: "master"

# The path where Drupal will be downloaded and installed.
drupal_core_path: "/var/www/{{ drupal_domain }}/docroot"

# Your drupal site's domain name (e.g. 'example.com').
# drupal_domain:  moved to group_vars

# Your Drupal site name.
drupal_site_name: "Aaron Froehlich's Blog"
drupal_admin_name: admin
drupal_admin_password: password

# The webserver you're running (e.g. 'apache2', 'httpd', 'nginx').
drupal_webserver_daemon: apache2

# Drupal MySQL database username and password.
drupal_mysql_user: drupal
drupal_mysql_password: password
drupal_mysql_database: drupal

# The Drupal git url from which Drupal will be cloned.
drupal_repo_url: "git@github.com:a-fro/a-fro.com.git"

# The Drupal install profile to be used
drupal_install_profile: standard

# Security specific
# deploy_user: defined in group_vars for ad-hoc commands
# security_ssh_port: defined in host_vars and group_vars
security_sudoers_passwordless:
  - "{{ deploy_user }}"
security_autoupdate_enabled: true

You'll notice that some of the variables have been moved to host_vars or group_vars files. Our deploy_user, for example, would work just fine for our playbook if we define it here. But since we want to make this user available to Ansible for ad-hoc commands (not in playbooks), it is better to put it in group_vars. This is also why we can't just use our ~/.ssh/config file. With Ansible, any variables added to provisioning/group_vars/all are made available by default to all hosts in the inventory, so create that file and add the following lines to it:

---
deploy_user: deploy

For the security_ssh_port, we'll be connecting to our dev environment over the default port 22, but changing the port on our remote servers. I say servers (plural), because eventually we'll have both staging and production environments. We can modify our inventory file to make this a bit easier:

[dev]
a-fro.dev

[staging]
stage.a-fro.com

[production]
a-fro.com

[droplets:children]
staging
production

This allows us to issue commands to a single host, or to all our droplets. Therefore, we can add a file called "droplets" to the group_vars folder and add the group-specific variables there:

---
ansible_ssh_user: "{{ deploy_user }}"

security_ssh_port: 4895 # Or whatever you choose
ansible_ssh_port: "{{ security_ssh_port }}"

ansible_ssh_private_key_file: ~/.ssh/id_rsa # The private key that pairs to the public key on your remote server.

Configuring the Deploy User

There are two additional issues that we need to address if we want our security setup to work. The first is pragmatic: using a string in the security_sudoers_passwordless yaml array above works fine, but Ansible throws an error when we try to use a variable there. I have a pull request issued to Ansible-role-security that resolves this issue, but unless that gets accepted, we can't use the role as is. The easy alternative is to download that role to our local system and add its contents to a folder named "roles" in provisioning (i.e. provisioning/roles/security). You can see the change we need to make to the task here. Then, we modify the playbook to use our local "security" role, rather than geerlingguy.security.
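That swap is a one-line change in the playbook's roles list; with the local copy in place, the relevant section would look something like this (abbreviated, since your playbook will list other roles as well):

roles:
  - geerlingguy.firewall
  - security # our local copy of geerlingguy.security, with the sudoers fix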

The second issue we face is that the first time we connect to our server, we'll do it as root over port 22 so that we can add the deploy_user account and update the security configuration. Initially, I was just modifying the variables depending on whether it was the first time I was running the playbook, but that got old really quickly as I created, configured and destroyed my droplets to work through all the issues. And while there may be better ways to do this, what worked for me was to add an additional playbook that handles our initial configuration. So create a provisioning/deploy_config.yml file and add the following lines to it:

---

- hosts: all
  sudo: yes

  vars_files:
    - vars/main.yml
    - vars/deploy_config.yml

  pre_tasks:
    - include: tasks/deploy_user.yml

  roles:
    - security

Here's the task that configures the deploy_user:

---
- name: Ensure admin group exists.
  group: name=admin state=present

- name: Add deployment user
  user: name='{{ deploy_user }}'
        state=present
        groups="sudo,admin"
        shell=/bin/bash

- name: Create .ssh folder with correct permissions.
  file: >
    path="/home/{{ deploy_user }}/.ssh/"
    state=directory
    owner="{{ deploy_user }}"
    group=admin
    mode=700

- name: Add authorized deploy key
  authorized_key: user="{{ deploy_user }}"
                  key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
                  path="/home/{{ deploy_user }}/.ssh/authorized_keys"
                  manage_dir=no
  remote_user: "{{ deploy_user }}"

The private/public key pair you define in the "Add authorized deploy key" task and in your ansible_ssh_private_key_file variable should have access to both your remote server and your GitHub repository. If you've forked or cloned my version, then you will definitely need to modify the keys.

Our final security configuration prep step is to leverage Ansible's variable precedence to override the ssh settings to use root and the default ssh port, with the following lines in provisioning/vars/deploy_config.yml:

---
ansible_ssh_user: root
ansible_ssh_port: 22

We now have everything in place to configure the basic security we're adding to our server. Remembering that we want our playbooks to work both locally on Vagrant and remotely, we can first try to run this playbook in our dev environment. I couldn't find a good way to make this seamless with Vagrant, so I've added a conditional statement to the Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.

  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "ubuntu-precise-64"

  # The url from where the 'config.vm.box' box will be fetched if it
  # doesn't already exist on the user's system.
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.network :private_network, ip: "192.168.88.88"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  config.vm.synced_folder "../a-fro.dev", "/var/www/a-fro.dev", :nfs => true

  # Configure VirtualBox.
  config.vm.provider :virtualbox do |vb|
    # Set the RAM for this VM to 512M.
    vb.customize ["modifyvm", :id, "--memory", "512"]
    vb.customize ["modifyvm", :id, "--name", "a-fro.dev"]
  end

  # Enable provisioning with Ansible.
  config.vm.provision "ansible" do |ansible|
    ansible.inventory_path = "provisioning/inventory"
    ansible.sudo = true
    # ansible.raw_arguments = ['-vvvv']
    ansible.limit = 'dev'

    initialized = false

    if initialized
      play = 'playbook'
      ansible.extra_vars = { ansible_ssh_private_key_file: '~/.ssh/ikon' }
    else
      play = 'deploy_config'
      ansible.extra_vars = {
        ansible_ssh_user: 'vagrant',
        ansible_ssh_private_key_file: '~/.vagrant.d/insecure_private_key'
      }
    end
    ansible.playbook = "provisioning/#{play}.yml"
  end
end

The first time we run vagrant up with initialized set to false, Vagrant runs deploy_config.yml. Once the server has been initialized (assuming there were no errors), you can set initialized to true, and from that point on playbook.yml will run when we vagrant provision. Assuming everything worked for you, we're ready to configure our remote server with ansible-playbook provisioning/deploy_config.yml -i provisioning/inventory --limit=staging.
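Once deploy_config.yml has run, the droplet should only accept connections as the deploy user on the new ssh port. Since those settings now live in group_vars, a quick sanity check from the provisioning folder is:

ansible staging -i inventory -m ping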

Installing Drupal

Whew! Take a deep breath, because we're in the home stretch now. In part 1, we used a modified Drupal task file to install Drupal. Since then, however, Jeff has accepted a couple of pull requests that get us really close to being able to use his Drupal Ansible Role straight out of the box. I have another pull request issued that gets us 99% of the way there, but since that hasn't been accepted, we're going to follow the strategy we used with the security role and add a "drupal" folder to the roles.

I've uploaded a branch of ansible-role-drupal that includes the modifications we need. They're all in the provisioning/drupal.yml task, and I've outlined the changes and reasons in my pull request. If you're following along, I suggest downloading that branch from GitHub and adding it to a drupal folder in your provisioning/roles. One additional change that I have not created a pull request for relates to the structure I use for Drupal projects. I like to put Drupal in a subfolder of the repository root (typically called docroot). As many readers will realize, this is in large part because we often host on Acquia. And while we're not doing that in this case, I still find it convenient to be able to add other folders (docs, bin scripts, etc.) alongside the Drupal docroot. The final modification we make, then, is to check out the repository to /var/www/{{ drupal_domain }} (rather than {{ drupal_core_path }}, which points to the docroot folder of the repo).
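To make that concrete, the layout on the server ends up looking roughly like this:

/var/www/a-fro.com/    # drupal_repo_url is checked out here
    docroot/           # drupal_core_path; the webserver's docroot points here
    bin/               # deployment and maintenance scripts
    docs/              # project documentation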

We now have all our drops in a row and we're ready to run our playbook to do the rest of the server configuration and install Drupal! As I mentioned above, we can modify our Vagrantfile to set initialized to true and run vagrant provision, and our provisioner should run. If you run into issues, you can uncomment the ansible.raw_arguments line and enable verbose output.

One final note before we provision our staging server. While vagrant provision works just fine, I think I've made my preference clear for having consistency between environments. We can do that here by modifying the host_vars for dev:

---
drupal_domain: "a-fro.dev"
security_ssh_port: 22
ansible_ssh_port: "{{ security_ssh_port }}"
ansible_ssh_user: "{{ deploy_user }}"
ansible_ssh_private_key_file: '~/.ssh/id_rsa'

Now, assuming that you already ran vagrant up with initialized set to false, you can run your playbook for dev in the same way you will for your remote servers:

cd provisioning
ansible-playbook playbook.yml -i inventory --limit=dev

If everything runs without a hitch on your vagrant server, then you're ready to run it remotely with ansible-playbook playbook.yml -i inventory --limit=staging. A couple of minutes later, you should see your Drupal site installed on your remote server.

Simple Deployments

I'm probably not the only reader of Jeff's awesome book Ansible for Devops who is looking forward to him completing Chapter 9, Deployments with Ansible. In the meantime, however, we can create a simple deploy playbook with two tasks:

---
- hosts: all

  vars_files:
    - vars/main.yml

  tasks:
    - name: Check out the repository.
      git: >
        repo='git@github.com:a-fro/a-fro.com.git'
        version='master'
        accept_hostkey=yes
        dest=/var/www/{{ drupal_domain }}
      sudo: no

    - name: Clear cache on D8
      command:
        chdir={{ drupal_core_path }}
        drush cr
      when: drupal_major_version == 8

    - name: Clear cache on D6/7
      command:
        chdir={{ drupal_core_path }}
        drush cc all
      when: drupal_major_version < 8

Notice that we've added a conditional that checks for a variable called drupal_major_version, so you should add that to your provisioning/vars/main.yml file. If I were running a D7 site, I'd probably add tasks to the deploy script such as drush fr-all -y, but this suffices for now. Since I'm pretty new to D8, if you have ideas on other tasks that would be helpful (such as a git workflow for CM), then I'm all ears!
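Assuming you save the playbook as provisioning/deploy.yml (the article doesn't name the file) and have added drupal_major_version to your vars, a deployment then boils down to:

cd provisioning
ansible-playbook deploy.yml -i inventory --limit=staging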

Conclusion

I hope you enjoyed this two-part series on Drupal and Ansible. One final note for the diligent reader relates to choosing the most basic hosting plan, which limits my server to 512MB of RAM. I've therefore added an additional task that adds and configures swap space when not on Vagrant.
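That task isn't shown in this article, but a minimal sketch looks something like the following; note that I'm guarding on Ansible's swap facts rather than checking for Vagrant directly, and the 1GB size and file path are arbitrary choices:

- name: Create a 1GB swap file (skipped if swap already exists).
  command: dd if=/dev/zero of=/swapfile bs=1M count=1024 creates=/swapfile
  when: ansible_swaptotal_mb < 1

- name: Format and activate the swap file.
  shell: mkswap /swapfile && swapon /swapfile
  when: ansible_swaptotal_mb < 1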

Thanks to the many committed open source developers (and Jeff Geerling in particular), devops for Drupal is getting dramatically simpler. As the community still reels from the effects of Drupalgeddon, it's easy to see how incredibly valuable it is to be able to easily run commands across a range of servers and codebases. Please let me know if you have questions, issues, tips or tricks, and as always, thanks for reading.
