Sep 08 2017
hw

Vagrant 2.0 has just been released. Being who I am, I jumped at trying it out to see if it worked well with my Vagrant setup. I moved entirely to DrupalVM some time ago and wanted to see if it still worked properly.

TL;DR: the DrupalVM part works fine, but there are other issues. I read the changelog to see what would break, and the breaking-changes section lists only one change, related to Ansible. Since DrupalVM is built with Ansible, I was not very sure, but considering that it was only one breaking change, I figured any problems should be easily fixable and gave it a try anyway.

I downloaded the macOS build of Vagrant 2 and installed it normally, then confirmed the update by running vagrant version.


~/w/test > vagrant version
Installed Version: 2.0.0
Latest Version: 2.0.0

You're running an up-to-date version of Vagrant!

Okay, all good. I tried running vagrant plugin update and the problem appeared.


~/w/test > vagrant plugin update
Updating installed plugins...
Bundler, the underlying system Vagrant uses to install plugins,
reported an error. The error is shown below. These errors are usually
caused by misconfigured plugin installations or transient network
issues. The error from Bundler is:

conflicting dependencies rb-fsevent (= 0.9.8) and rb-fsevent (= 0.10.2)
Activated rb-fsevent-0.10.2
which does not match conflicting dependency (= 0.9.8)

Conflicting dependency chains:
rb-fsevent (= 0.10.2), 0.10.2 activated

versus:
rb-fsevent (= 0.9.8)

Gems matching rb-fsevent (= 0.9.8):
rb-fsevent-0.9.8

I was a little concerned but ignored this for the time being. I tried vagrant plugin repair; it reported success but didn't fix the problem. Anyway, moving on…
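One thing worth trying if you hit the same wall: Vagrant ships a command to wipe and reinstall all plugins, which can clear stale version pins left behind by an older release. Treat this as a suggestion rather than a confirmed fix, and note that it is destructive to locally installed plugins:

~/w/test > vagrant plugin expunge --reinstall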

DrupalVM

I tried DrupalVM next, using my setup-drupalvm script to set it up quickly. I just had to change default.config.yml to remove vagrant-hostsupdater from automatic installation (I prefer vagrant-hostmanager).
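For reference, that tweak is a one-line swap in the plugins list. A sketch of the relevant excerpt, assuming the vagrant_plugins key that DrupalVM's default.config.yml shipped with around this release (verify against your copy):

vagrant_plugins:
  - name: vagrant-vbguest
  - name: vagrant-hostmanager

With that in place, I ran vagrant up and it all went through smoothly. There was only one change from the usual output I see on a fresh vagrant up, related to the breaking change I mentioned above: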


==> test: Checking for guest additions in VM...
==> test: Setting hostname...
==> test: Configuring and enabling network interfaces...
==> test: Exporting NFS shared folders...
==> test: Preparing to edit /etc/exports. Administrator privileges will be required...
Password:
==> test: Mounting NFS shared folders...
==> test: Configuring cache buckets...
==> test: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> test: [vagrant-hostmanager:host] Updating hosts file on your workstation (password may be required)...
==> test: Running provisioner: ansible...
Vagrant has automatically selected the compatibility mode '2.0'
according to the Ansible version installed (2.2.0.0).

Alternatively, the compatibility mode can be specified in your Vagrantfile:
https://www.vagrantup.com/docs/provisioning/ansible_common.html#compatib...
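If you would rather pin the compatibility mode than rely on auto-detection, the Ansible provisioner in Vagrant 2 accepts a compatibility_mode option. A minimal sketch; the playbook path is an assumption here, since DrupalVM wires up its own provisioner:

config.vm.provision "ansible" do |ansible|
  # Pin the mode instead of letting Vagrant guess it from the Ansible version.
  ansible.compatibility_mode = "2.0"
  ansible.playbook = "provisioning/playbook.yml"  # hypothetical path
end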

After the machine was provisioned (which happened smoothly), I opened up dashboard.test.dev in my browser and everything looked perfect.

Drupal VM dashboard

I cross-checked with phpinfo() output and verified that PHP 7.1.9 was installed and running as FPM.

The only remaining problem is with plugins, and I am going to ignore it for now. A future version of Vagrant might fix the version constraint that causes it. For now, the plugins themselves are working:


vagrant-auto_network (1.0.2)
- Version Constraint: > 0
vagrant-bindfs (1.0.8)
- Version Constraint: > 0
vagrant-cachier (1.2.1)
- Version Constraint: > 0
vagrant-hostmanager (1.8.7)
- Version Constraint: > 0
vagrant-share (1.1.9, system)
- Version Constraint: > 0
vagrant-vbguest (0.14.2)
- Version Constraint: > 0

Please let me know in the comments if there is a way to fix the version constraint problem. I hope you found this post useful.

Jan 27 2016

Tom Friedhof

Senior Software Engineer

Tom has been designing and developing for the web since 2002 and got involved with Drupal in 2006. Previously he worked as a systems administrator for a large mortgage bank, managing servers and workstations, which is where he discovered his passion for automation and scripting. On his free time he enjoys camping with his wife and three kids.

Jan 20 2016

This post is part 4 in the series "Hashing out a docker workflow". For background, check out my previous posts.

My previous posts talked about getting your local environment set up using the Drupal Docker image with Vagrant. It's now time to bake a Docker image with our custom application code inside the container, so that we can deploy containers implementing the immutable server pattern. One of the main reasons we started venturing down the Docker path was to achieve deployable, fully baked containers that are ready to run in whatever environment you put them in, similar to what we've done in the past with Packer, as I mentioned in a previous post.

Review

The instructions in this post assume you followed my previous post to get a Drupal environment set up with the custom "myprofile" profile. In that post we brought up a Drupal environment by simply referencing the already-built Drupal image on DockerHub. We are going to use that same Docker image and add our custom application to it.

All the code that I'm going to show below can be found in this repo on GitHub.

Putting the custom code into the container

We need to create our own image. To do that, create a Dockerfile in our project that extends the Drupal image we are pulling down.

Create a file called Dockerfile in the root of your project that looks like the following:

FROM drupal:7.41

ADD drupal/profiles/myprofile /var/www/html/profiles/myprofile

We are basically using everything from the Drupal image, and adding our installation profile to the profiles directory of the document root.

This is a very simplistic approach; typically there are more steps than just copying files over. In more complex scenarios, you will likely run some sort of build within the Dockerfile as well, such as Gulp, Composer, or Drush Make.
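For instance, if the profile carried a composer.json, a hypothetical version of the same Dockerfile with a build step baked in might look like this (the Composer install location and the profile's composer.json are assumptions):

FROM drupal:7.41

ADD drupal/profiles/myprofile /var/www/html/profiles/myprofile

# Hypothetical build step: install the profile's Composer dependencies
# at image build time, so the container ships fully baked.
RUN php -r "copy('https://getcomposer.org/installer', '/tmp/composer-setup.php');" \
    && php /tmp/composer-setup.php --install-dir=/usr/local/bin --filename=composer \
    && composer install --working-dir=/var/www/html/profiles/myprofile --no-dev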

Setting up Jenkins

We now need to set up a Jenkins server that will check out our code and run docker build and docker push. Let's set up a local Jenkins container on our Docker host to do this.

Open up the main Vagrantfile in the project root and add another container to the file like the following:





Vagrant.configure(2) do |config|

  config.vm.define "jenkins" do |v|
    v.vm.provider "docker" do |d|
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "./Dockerfiles/jenkins"
      d.create_args = ['--privileged']
      d.remains_running = true
      d.ports = ["8080:8080"]
      d.name = "jenkins-container"
    end
  end

  config.vm.define "drupal" do |v|
    config.vm.provider "docker" do |docker|
      docker.vagrant_vagrantfile = "host/Vagrantfile"
      docker.image = "drupal"
      docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
      docker.ports = ['80:80']
      docker.name = 'drupal-container'
    end
  end
end

Two things to notice in the jenkins container definition: 1) the Dockerfile for this container is in the Dockerfiles/jenkins directory, and 2) we are passing the --privileged argument when the container is run so that our container has all the capabilities of the Docker host. We need that special access to be able to run Docker within Docker.

Let's create the Dockerfile:

$ mkdir -p Dockerfiles/jenkins
$ cd !$
$ touch Dockerfile

Now open up that Dockerfile and install Docker onto this Jenkins container:

FROM jenkins:1.625.2

USER root


RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D


RUN echo "deb http://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list

VOLUME /var/lib/docker

RUN apt-get update && \
  apt-get -y install \
    docker-engine

ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh

ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/dockerjenkins.sh" ]

We are using a little script found in The Docker Book as our entrypoint to start the Docker daemon as well as Jenkins. It also does some work on the filesystem to ensure cgroups are mounted correctly. If you want to read more about running Docker in Docker, go check out this article.
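For orientation, the gist of such an entrypoint is only a few lines. This is a simplified sketch, not the actual script from the book (which also remounts cgroups before starting anything):

#!/bin/bash
# Start the Docker daemon in the background; this is why the container
# needs --privileged. ("docker daemon" was the subcommand in this era.)
docker daemon &

# Hand the process over to Jenkins in the foreground.
exec java -jar /usr/share/jenkins/jenkins.war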

Boot up the new container

Before we boot this container up, edit your host Vagrantfile and set up a port forward so that port 8080 on your local machine points to port 8080 on the Docker host:





Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.hostname = "docker-host"
  config.vm.provision "docker"
  config.vm.network :forwarded_port, guest: 80, host: 4567
  config.vm.network :forwarded_port, guest: 8080, host: 8080
  config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end

Now bring up the new container:

$ vagrant up jenkins

or if you've already brought it up once before, you may just need to run reload:

$ vagrant reload jenkins

You should now be able to hit Jenkins at the URL http://localhost:8080

Jenkins Dashboard

Install the git plugins for Jenkins

Now that you have Jenkins up and running, we need to install the git plugins. Click on the "Manage Jenkins" link in the left navigation, then click "Manage Plugins" in the list given to you, and then click on the "Available" tab. Filter the list with the phrase "git client" in the filter box. Check the two boxes to install the plugins, then hit "Download now and install after restart".

Jenkins Plugin Install

On the following screen, check the box to Restart Jenkins when installation is complete.

Setup the Jenkins job

It's time to set up the job. If you've never set up a Jenkins job, here is a quick crash course:

  1. Click the New Item link in the left navigation. Name your build job, and choose Freestyle project. Click OK. New Build Job
  2. Configure the git repo. We are going to configure Jenkins to pull code directly from your repository and build the Docker image from that. Add the git Repo
  3. Add the build steps. Scroll down toward the bottom of the screen, click the arrow next to Add build step, and choose Execute Shell. We are going to add three build steps, sketched after this list: first we build the Docker image with docker build -t="tomfriedhof/docker_blog_post" . (notice the trailing dot) and give it a name with the -t parameter, then we log in to DockerHub, and finally we push the newly created image to DockerHub. Jenkins Build Steps
  4. Hit Save, then on the next screen hit the button that says Build Now.
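Roughly, the three Execute Shell steps amount to the following; the credential variables are placeholders for whatever you configure in Jenkins:

# Step 1: build the image and tag it
docker build -t="tomfriedhof/docker_blog_post" .

# Step 2: authenticate against DockerHub ($DOCKERHUB_USER / $DOCKERHUB_PASS are hypothetical)
docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_PASS"

# Step 3: push the freshly built image
docker push tomfriedhof/docker_blog_post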

If everything went as planned, you should have a new Docker image posted on DockerHub: https://hub.docker.com/r/tomfriedhof/dockerblogpost/

Wrapping it up

There you have it: an automated build that will create and push Docker images to DockerHub. You can add on to this Jenkins job so that it polls your GitHub repository and runs the build anytime something changes in the tracking repo.

As another option, if you don't want to go through all the trouble of setting up your own Jenkins server just to do what I showed you, DockerHub can do this for you. Go check out their article on how to set up automated builds with Docker.

Now that we have a baked container with our application code within it, the next step is to deploy the container. That is the next post in this series. Stay tuned!

Dec 15 2015

Tom Friedhof

Senior Software Engineer

Tom has been designing and developing for the web since 2002 and got involved with Drupal in 2006. Previously he worked as a systems administrator for a large mortgage bank, managing servers and workstations, which is where he discovered his passion for automation and scripting. On his free time he enjoys camping with his wife and three kids.

Dec 02 2015

Now that the release of Drupal 8 is finally here, it is time to adapt our Drupal 7 build process to Drupal 8, while utilizing Docker. This post will take you through how we construct sites on Drupal 8 using dependency managers on top of Docker with Vagrant.

Keep a clean upstream repo

Over the past three or four years, developing websites has changed dramatically with the increasing popularity of dependency managers such as Composer, Bundler, npm, and Bower, amongst other tools. Drupal even has its own system that can handle dependencies, called Drush, albeit it is more than just a dependency manager for Drupal.

With all of these tools at our disposal, it is very easy to include code from other projects in our application without storing any of that code in the application repository. This dramatically changes how you would typically maintain a Drupal site, since the typical way to manage a Drupal codebase is to keep the entire Drupal docroot, including all dependencies, in the application repository. Having everything in the docroot is fine, but you gain much more power using dependency managers. You also lighten the actual application codebase, because your repo only contains code that you wrote. There are many more advantages to building applications this way, but I digress; this post is about how we utilize these tools to build Drupal sites, not an exhaustive list of why this is a good idea. Leave a comment if you want to discuss the advantages and disadvantages of this approach.

Application Code Repository

We've got a lot going on in this repository. We won't dive too deep into the weeds looking at every single file, but I will give a high level overview of how things are put together.

Installation Automation (begin with the end in mind)

The simplicity in this process is that when a new developer needs to get a local development environment for this project, they only have to execute two commands:

$ vagrant up --no-parallel
$ make install

Within minutes a new development environment is constructed with VirtualBox and Docker on the developer's machine, so that they can immediately start contributing to the project. The first command boots up three Docker containers: a web server, a MySQL server, and a Jenkins server. The second command invokes Drush to build the document root within the web server container and then installs Drupal.

We also keep one more command running in a separate terminal window, to keep files synced from our host machine to the Drupal 8 container.

$ vagrant rsync-auto drupal8

Breaking down the two installation commands

vagrant up --no-parallel

If you've read any of my previous posts, you know I'm a fan of using Vagrant with Docker. I won't go into detail about how the environment is getting set up; you can read my previous posts on how we used Docker with Vagrant. For completeness, here are the Vagrantfile and Dockerfile that vagrant up reads to set up the environment.

Vagrantfile





require 'fileutils'

MYSQL_ROOT_PASSWORD="root"

unless File.exists?("keys")
  Dir.mkdir("keys")
  ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
  File.open("keys/id_rsa.pub", 'w') { |file| file.write(ssh_pub_key) }
end

unless File.exists?("Dockerfiles/jenkins/keys")
  Dir.mkdir("Dockerfiles/jenkins/keys")
  FileUtils.copy("#{Dir.home}/.ssh/id_rsa", "Dockerfiles/jenkins/keys/id_rsa")
end

Vagrant.configure("2") do |config|

  config.vm.define "mysql" do |v|
    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.image = "mysql:5.7.9"
      d.env = { :MYSQL_ROOT_PASSWORD => MYSQL_ROOT_PASSWORD }
      d.name = "mysql-container"
      d.remains_running = true
      d.ports = [
        "3306:3306"
      ]
    end
  end

  config.vm.define "jenkins" do |v|
    v.vm.synced_folder ".", "/srv", type: "rsync",
      rsync__exclude: get_ignored_files(),
      rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"]

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "./Dockerfiles/jenkins"
      d.name = "jenkins-container"
      d.volumes = [
        "/home/rancher/.composer:/root/.composer",
        "/home/rancher/.drush:/root/.drush"
      ]
      d.remains_running = true
      d.ports = [
        "8080:8080"
      ]
    end
  end

  config.vm.define "drupal8" do |v|
    v.vm.synced_folder ".", "/srv/app", type: "rsync",
      rsync__exclude: get_ignored_files(),
      rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"],
      rsync__chown: false

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "."
      d.name = "drupal8-container"
      d.remains_running = true
      d.volumes = [
        "/home/rancher/.composer:/root/.composer",
        "/home/rancher/.drush:/root/.drush"
      ]
      d.ports = [
        "80:80",
        "2222:22"
      ]
      d.link("mysql-container:mysql")
    end
  end

end

def get_ignored_files()
  ignore_file = ".rsyncignore"
  ignore_array = []

  if File.exists? ignore_file and File.readable? ignore_file
    File.read(ignore_file).each_line do |line|
      ignore_array << line.chomp
    end
  end

  ignore_array
end

One cool thing to point out in this Vagrantfile is that we set up a VOLUME for the Composer and Drush caches that persists beyond the life of the container. When our application container is rebuilt, we don't want to download 100MB of Composer dependencies every time. By utilizing a Docker VOLUME, those folders are mounted from the actual Docker host.
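In plain Docker terms, those d.volumes entries translate to -v flags when Vagrant invokes the CLI; roughly the following, with the image name being illustrative since Vagrant builds it from the project Dockerfile:

docker run -d \
  -v /home/rancher/.composer:/root/.composer \
  -v /home/rancher/.drush:/root/.drush \
  -p 80:80 -p 2222:22 \
  --name drupal8-container \
  drupal8-image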

Dockerfile (drupal8-container)

FROM ubuntu:trusty


ENV PROJECT_ROOT /srv/app
ENV DOCUMENT_ROOT /var/www/html
ENV DRUPAL_PROFILE=apa_profile


RUN apt-get update
RUN apt-get install -y \
	vim \
	git \
	apache2 \
	php-apc \
	php5-fpm \
	php5-cli \
	php5-mysql \
	php5-gd \
	php5-curl \
	libapache2-mod-php5 \
	curl \
	mysql-client \
	openssh-server \
	phpmyadmin \
	wget \
	unzip \
	supervisor
RUN apt-get clean


RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer


RUN mkdir /root/.ssh && chmod 700 /root/.ssh && touch /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN mkdir /var/run/sshd && chmod 0755 /var/run/sshd
RUN mkdir -p /root/.ssh
COPY keys/id_rsa.pub /root/.ssh/authorized_keys
RUN chmod 600 /root/.ssh/authorized_keys
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd


RUN composer global require drush/drush:8.0.0-rc3
RUN ln -nsf /root/.composer/vendor/bin/drush /usr/local/bin/drush


RUN mv /root/.composer /tmp/


RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/apache2/php.ini
RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/cli/php.ini


RUN sed -i 's/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf
RUN a2enmod rewrite


RUN echo '[program:apache2]\ncommand=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"\nautorestart=true\n\n' >> /etc/supervisor/supervisord.conf
RUN echo '[program:sshd]\ncommand=/usr/sbin/sshd -D\n\n' >> /etc/supervisor/supervisord.conf





WORKDIR $PROJECT_ROOT
EXPOSE 80 22
CMD exec supervisord -n

We have xdebug commented out in the Dockerfile, but it can easily be uncommented if you need to step through code. Simply uncomment the two RUN commands and run vagrant reload drupal8.
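The commented-out lines themselves did not survive the excerpt above, so here is a hypothetical reconstruction of what they would look like on this Ubuntu trusty base (package name and ini path are assumptions):

# RUN apt-get install -y php5-xdebug
# RUN echo "xdebug.remote_enable=1" >> /etc/php5/apache2/conf.d/20-xdebug.ini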

make install

We utilize a Makefile in all of our projects, whether Drupal, Node.js, or Laravel, so that we have a similar way to install applications regardless of the underlying technology. In this case make install executes a Drush command. Below are the contents of our Makefile for this project:

all: init install

init:
	vagrant up --no-parallel

install:
	bin/drush @dev install.sh

rebuild:
	bin/drush @dev rebuild.sh

clean:
	vagrant destroy drupal8
	vagrant destroy mysql

mnt:
	sshfs -C -p 2222 root@localhost:/var/www/html docroot

What this command does is SSH into the drupal8-container, utilizing Drush site aliases and Drush shell aliases.
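For context, here is a hypothetical aliases.drushrc.php showing how @dev could resolve to the container; the values are assumptions inferred from the port mappings and paths above:

<?php
// Hypothetical site alias: reach the drupal8-container over the 2222:22 mapping.
$aliases['dev'] = array(
  'remote-host' => 'localhost',
  'remote-user' => 'root',
  'ssh-options' => '-p 2222',
  'root' => '/var/www/html',
  // Shell aliases let "drush @dev install.sh" run a script on the remote side.
  'shell-aliases' => array(
    'install.sh' => '!bash /srv/app/bin/install.sh',
    'rebuild.sh' => '!bash /srv/app/bin/rebuild.sh',
  ),
);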

install.sh

The make install command executes a file, within the drupal8-container, that looks like this:

#!/usr/bin/env bash

echo "Moving the contents of composer cache into place..."
mv /tmp/.composer/* /root/.composer/

PROJECT_ROOT=$PROJECT_ROOT DOCUMENT_ROOT=$DOCUMENT_ROOT $PROJECT_ROOT/bin/rebuild.sh

echo "Installing Drupal..."
cd $DOCUMENT_ROOT && drush si $DRUPAL_PROFILE --account-pass=admin -y
chgrp -R www-data sites/default/files
rm -rf ~/.drush/files && cp -R sites/default/files ~/.drush/

echo "Importing config from sync directory"
drush cim -y

You can see on line 6 of the install.sh file that it executes rebuild.sh to actually build the Drupal document root utilizing Drush Make. The build is separated from the install so that you can run make rebuild without completely reinstalling the Drupal database. After the document root is built, the drush site-install apa_profile command is run to actually install the site. Notice that we are utilizing installation profiles for Drupal.

We utilize installation profiles so that we can define modules available for the site, as well as specify default configuration to be installed with the site.

We work hard to achieve the ability to have Drupal install with all the necessary configuration in place out of the gate. We don't want to be passing around a database to get up and running with a new site.

We utilize the Devel Generate module to create the initial content for sites while developing.

rebuild.sh

The rebuild.sh file is responsible for building the Drupal docroot:

#!/usr/bin/env bash

if [ -d "$DOCUMENT_ROOT/sites/default/files" ]
then
echo "Moving files to ~/.drush/..."
mv \$DOCUMENT_ROOT/sites/default/files /root/.drush/
fi

echo "Deleting Drupal and rebuilding..."
rm -rf \$DOCUMENT_ROOT

echo "Downloading contributed modules..."
drush make -y $PROJECT_ROOT/drupal/make/dev.make $DOCUMENT_ROOT

echo "Symlink profile..."
ln -nsf $PROJECT_ROOT/drupal/profiles/apa_profile $DOCUMENT_ROOT/profiles/apa_profile

echo "Downloading Composer Dependencies..."
cd $DOCUMENT_ROOT && php $DOCUMENT_ROOT/modules/contrib/composer_manager/scripts/init.php && composer drupal-update

echo "Moving settings.php file to $DOCUMENT_ROOT/sites/default/..."
rm -f $DOCUMENT_ROOT/sites/default/settings\*
cp $PROJECT_ROOT/drupal/config/settings.php $DOCUMENT_ROOT/sites/default/
cp $PROJECT_ROOT/drupal/config/settings.local.php $DOCUMENT_ROOT/sites/default/
ln -nsf $PROJECT_ROOT/drupal/config/sync $DOCUMENT_ROOT/sites/default/config
chown -R www-data \$PROJECT_ROOT/drupal/config/sync

if [ -d "/root/.drush/files" ]
then
cp -Rf /root/.drush/files $DOCUMENT_ROOT/sites/default/
    chmod -R g+w $DOCUMENT_ROOT/sites/default/files
chgrp -R www-data sites/default/files
fi

This file essentially downloads Drupal using the dev.make Drush Make file. It then runs composer drupal-update to download any Composer dependencies in any of the modules. We use the Composer Manager module to help with Composer dependencies within the Drupal application.

Running drush make on dev.make includes two other Drush Make files: apa-cms.make (the application make file) and drupal-core.make. Only dev dependencies should go in dev.make; application dependencies go into apa-cms.make; any core patches that need to be applied go into drupal-core.make.

Our Jenkins server builds from prod.make instead of dev.make. Any production-specific modules go in the prod.make file.

Our make files for this project look like this so far:

dev.make

core: "8.x"

api: 2

defaults:
  projects:
    subdir: "contrib"

includes:
  - "apa-cms.make"

projects:
  devel:
    version: "1.x-dev"

apa-cms.make

core: "8.x"

api: 2

defaults:
projects:
subdir: "contrib"

includes:

- drupal-core.make

projects:
address:
version: "1.0-beta2"

composer_manager:
version: "1.0-rc1"

config_update:
version: "1.x-dev"

ctools:
version: "3.0-alpha17"

draggableviews:
version: "1.x-dev"

ds:
version: "2.0"

features:
version: "3.0-alpha4"

field_collection:
version: "1.x-dev"

field_group:
version: "1.0-rc3"

juicebox:
version: "2.0-beta1"

layout_plugin:
version: "1.0-alpha19"

libraries:
version: "3.x-dev"

menu_link_attributes:
version: "1.0-beta1"

page_manager:
version: "1.0-alpha19"

pathauto:
type: "module"
download:
branch: "8.x-1.x"
type: "git"
url: "http://github.com/md-systems/pathauto.git"

panels:
version: "3.0-alpha19"

token:
version: "1.x-dev"

zurb_foundation:
version: "5.0-beta1"
type: "theme"

libraries:
juicebox:
download:
type: "file"
url: "https://www.dropbox.com/s/hrthl8t1r9cei5k/juicebox.zip?dl=1"

(once this project goes live, we will pin the version numbers)

drupal-core.make

core: "8.x"

api: 2

projects:
  drupal:
    version: 8.0.0
    patch:
      - https://www.drupal.org/files/issues/2611758-2.patch

prod.make

core: "8.x"

api: 2

includes:

- "apa-cms.make"

projects:
apa_profile:
type: "profile"
subdir: "."
download:
type: "copy"
url: "file://./drupal/profiles/apa_profile"

At the root of our project we also have a Gemfile, specifically to install the Compass compiler along with various Sass libraries. We install these tools on the host machine and "watch" the theme directories from the host; vagrant rsync-auto picks up any changed files and rsyncs them to the drupal8-container.

bundler

From the project root, installing these dependencies and running a compass watch is simple:

$ bundle
$ bundle exec compass watch path/to/theme
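For reference, a hypothetical Gemfile along those lines (the exact gem list varies per project):

source 'https://rubygems.org'

# Compass compiler plus whatever Sass libraries the theme needs.
gem 'compass'
gem 'sass'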

bower

We pull in any third-party front-end libraries, such as Foundation and Font Awesome, using Bower. From within the theme directory:

$ bower install

As a result of the above commands, there are a few things we do not commit to the application repo:

  • The CSS directory
  • Bower Components directory

Deploy process

As I stated earlier, we utilize Jenkins CI to build an artifact that we can deploy. Within the Jenkins job that handles deploys, each of the above steps is executed to create a deployable document root. Projects that we build to run on Acquia or Pantheon have an additional build step that pushes the updated artifact to their respective repositories at the host, to take advantage of the automation that Pantheon and Acquia provide.
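A hedged sketch of what such a deploy step can look like inside a Jenkins Execute Shell build step; the remote name and $ACQUIA_REPO variable are hypothetical, while $WORKSPACE and $BUILD_NUMBER are standard Jenkins environment variables:

# Build the deployable docroot from the production make file.
drush make drupal/make/prod.make "$WORKSPACE/docroot"

# Commit the artifact and push it to the hosting provider's repo.
cd "$WORKSPACE/docroot"
git init
git remote add deploy "$ACQUIA_REPO"
git add -A
git commit -m "Jenkins build ${BUILD_NUMBER}"
git push deploy master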

Conclusion

Although this wasn't an exhaustive walkthrough of how we structure and build sites using Drupal, it should give you a general idea of how we do it. If you have specific questions as to why we go through this entire build process just to set up Drupal, please leave a comment. I would love to continue the conversation.

Look out for a video on this topic in the coming weeks. I covered a lot in this post without going into much detail; the intent was to give a 10,000-foot view of the process. The upcoming video will get much closer to the tarmac!

As an aside, one caveat that we did run into with setting up default configuration in our installation profile was with configuration management UUIDs. You can only sync configuration between sites that are clones. We overcame this limitation with a workaround in our installation profile; I'll leave that topic for my next blog post in a few weeks.

Sep 23 2015

This post is part 3 in the series "Hashing out a docker workflow". For background, check out my previous posts.

Now that I've laid the groundwork for the approach I want to take with local environment development with Docker, it's time to explore how to make the local environment "workable". In this post we will build on top of what we did in my last post, Docker and Vagrant, and create a working local copy that automatically updates the code inside the container running Drupal.

Requirements of a local dev environment

Before we get started, it is always a good idea to define what we expect to get out of our local development environment and define some requirements. You can define these requirements however you like, but since ActiveLAMP is an agile shop, I'll define our requirements as user stories.

User Stories

As a developer, I want my local development environment setup to be easy and automatic, so that I don't have to spend the entire day following a list of instructions. The fewer the commands, the better.

As a developer, my local development environment should run the same exact OS configuration as stage and prod environments, so that we don't run into "works on my machine" scenarios.

As a developer, I want the ability to log into the local dev server / container, so that I can debug things if necessary.

As a developer, I want to work on files local to my host filesystem, so that the IDE I am working in is as fast as possible.

As a developer, I want the files that I change on my localhost to automatically sync to the guest filesystem that is running my development environment, so that I do not have to manually push or pull files to the local server.

Now that we know what done looks like, let's start fulfilling these user stories.

Things we get for free with Vagrant

We have all worked on projects that have a README file with a long list of steps just to set up a working local copy. To fulfill the first user story, we need to encapsulate all steps, as much as possible, into one command:

$ vagrant up

We got a good start on our one-command setup in my last blog post. If you haven't read that post yet, go check it out now; we are going to build on it here. My last post laid the groundwork for the first three stories in our user story list. This is the essence of using Vagrant: to aid in setting up virtual environments with very little effort, and to dispose of them when no longer needed, with vagrant up and vagrant destroy respectively.

Since we will be defining Docker images and/or using existing Docker images from DockerHub, user story #2 is fulfilled as well.

For user story #3, it's not as straightforward to log into your Docker host. Typically with Vagrant you would type vagrant ssh to get into the virtual machine, but since our host machine's Vagrantfile is in a subdirectory called /host, you have to change into that directory first.

$ cd host
$ vagrant ssh

Another way you can do this is by using the vagrant global-status command. You can execute that command from anywhere and it will provide a list of all known virtual machines with a short hash in the first column. To ssh into any of these machines just type:

$ vagrant ssh <short-hash>

Replace <short-hash> with the actual hash of the machine.

Connecting into a container

Most containers run a single process and may not have an SSH daemon running. You can use the docker attach command to connect to any running container, but beware: if you didn't start the container with STDIN and STDOUT attached, you won't get very far.

Another option you have for connecting is using docker exec to start an interactive process inside the container. For example, to connect to the drupal-container that we created in my last post, you can start an interactive shell using the following command:

$ sudo docker exec -t -i drupal-container /bin/bash

This will return an interactive shell on the drupal-container that you will be able to poke around on. Once you disconnect from that shell, the process will end inside the container.

Getting files from host to app container

Our next two user stories have to do with working on files native to the localhost. When developing our application, we don't want to bake the source code into a docker image. Code is always changing and we don't want to rebuild the image every time we make a change during the development process. For production, we do want to bake the source code into the image, to achieve the immutable server pattern. However in development, we need a way to share files between our host development machine and the container running the code.

We've probably tried every approach available when it comes to working on shared files with Vagrant. VirtualBox shared folders are just way too slow. NFS shared folders were a little faster, but still really slow. We've used sshfs to connect the remote filesystem directly to the localhost, which created a huge performance increase in how the app responded, but it was a pain in the neck for VCS and caused performance issues with the IDE: PhpStorm had to index files over a network connection, albeit a local one, which was still noticeably slower when working on large codebases like Drupal.

The solution we use to date is rsync, specifically vagrant-gatling-rsync. You can check out the vagrant-gatling-rsync plugin on GitHub, or just install it by typing:

$ vagrant plugin install vagrant-gatling-rsync

Syncing files from host to container

To get files from our localhost to the container, we must first get our working files to the Docker host. Using the host Vagrantfile that we built in my last blog post, this can be achieved by adding one line:

config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'

Your Vagrantfile within the /host directory should now look like this:





Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.hostname = "docker-host"
  config.vm.provision "docker"
  config.vm.network :forwarded_port, guest: 80, host: 4567
  config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end

We are syncing a Drupal profile from within the drupal directory off the project root to the /srv/myprofile directory on the Docker host.

Now it's time to add an argument to run when docker run is executed by Vagrant. To do this we can specify the create_args parameter in the container Vagrantfile. Add the following line to it:

docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']

This file should now look like:





Vagrant.configure(2) do |config|
  config.vm.provider "docker" do |docker|
    docker.vagrant_vagrantfile = "host/Vagrantfile"
    docker.image = "drupal"
    docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
    docker.ports = ['80:80']
    docker.name = 'drupal-container'
  end
end

This parameter maps the directory we are rsyncing to on the Docker host into the profiles directory of the Drupal installation that ships with the Drupal Docker image from DockerHub.

Create the installation profile

This blog post doesn't intend to go into how to create a Drupal install profile, but if you aren't using profiles for building Drupal sites, you should definitely have a look. If you have questions regarding why using Drupal profiles is a good idea, leave a comment.

Let's create our simple profile. Drupal requires two files for a profile. From the project root, type the following:

$ mkdir -p drupal/profiles/myprofile
$ touch drupal/profiles/myprofile/{myprofile.info,myprofile.profile}

Now edit each file that you just created with the minimum information that you need.

myprofile.info

name = Custom Profile
description = My custom profile
core = 7.x

myprofile.profile


<?php

function myprofile_install_tasks() {

}
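That empty hook is just a stub to make the profile valid. If the profile later needs to do work during installation, hook_install_tasks() can return task definitions; a minimal hypothetical example:

function myprofile_install_tasks() {
  // Hypothetical: run one extra callback as a step in the Drupal installer.
  return array(
    'myprofile_configure_site' => array(
      'display_name' => st('Configure site defaults'),
      'type' => 'normal',
    ),
  );
}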

Start everything up

We now have everything in place to just type vagrant up and have a working copy. Go to the project root and run:

$ vagrant up

This will build your Docker host as well as create your Drupal container. As I mentioned in a previous post, starting up the container sometimes requires me to run vagrant up a second time; I'm still not sure what's going on there.

After everything is up and running, you will want to run the rsync-auto command for the Docker host, so that any changes you make locally traverse down to the Docker host and then to the container. The easiest way to do this is:

$ cd host
$ vagrant gatling-rsync-auto

Now visit the URL to your running container at http://localhost:4567 and you should see the new profile that you've added.

Custom Install Profile

Conclusion

We covered a lot of ground in this blog post. We were able to accomplish all of the stated requirements above with just a little tweaking of a couple of Vagrantfiles. We now have files that are shared from the host machine all the way down to the container that our app runs on, utilizing features built into Vagrant and Docker. Any file we change in our installation profile on the host immediately syncs to the drupal-container on the Docker host.

At ActiveLAMP, we use a much more robust approach to build out installation profiles, utilizing Drush Make, which is out of scope for this post. This blog post simply lays out the general idea of how to get a working copy of your code downstream using Vagrant and Docker.

In my next post, I'll continue building on what I did here and introduce automation to bake a Docker image with our source code inside, utilizing Jenkins. This will allow us to easily release any number of containers to scale our app as needed.

Jul 19 2015

This post is part 2 in a series of Docker posts hashing out a new Docker workflow for our team. For background on what I want to accomplish with Docker, check out my previous post, hashing out a docker workflow.

In this post, we will venture into setting up Docker locally, in the same repeatable way from developer to developer, by using Vagrant. By the end of this post, we'll have Drupal running in a container using Docker. This post is focused on hashing out a Docker workflow with Vagrant, less on Drupal itself. I want to give a shout-out to the folks who maintain the Drupal Docker image in the Docker Registry. It's definitely worth checking out and using as a base to build FROM for your custom images.

Running Docker Locally

There are several ways to go about setting up Docker locally. I don't intend to walk you through how to install Docker; you can find step-by-step installation instructions for your OS in the Docker documentation. However, I am leaning toward an unconventional approach to installing Docker locally, one not mentioned in the Docker documentation. Let me tell you why.

Running a Docker host

For background, we are an all Mac OS X shop at ActiveLAMP, so I'll be speaking from that context.

Unfortunately you can't run Docker natively on OS X; Docker needs to run in a virtual machine with an operating system such as Linux. If you read the Docker OS X installation docs, you see there are two options for running Docker on Mac OS X: Boot2Docker or Kitematic.

Running either of these two packages looks appealing for getting Docker up locally very quickly (and you should use one of them if you're just trying out Docker), but thinking big picture about how we plan to use Docker in production, it seems that we should take a different approach locally. Let me tell you why I think you shouldn't use Boot2Docker or Kitematic locally, but first a rabbit trail.

Thinking ahead (the rabbit trail)

My opinion may change after gaining more real world experience with Docker in production, but the mindset that I'm coming from is that in production our Docker hosts will be managed via Chef.

Our team has extensive experience using Chef to manage infrastructure at scale. It doesn't seem quite right to completely abandon Chef yet, since Docker still needs a machine to run the Docker host. Chef is great for machine level provisioning.

My thought is that we would use Chef to manage the various Docker hosts that we deploy containers to and use the Dockerfile with Docker Compose to manage the actual app container configuration. Chef would be used in a much more limited capacity, only managing configuration on a system level not an application level. One thing to mention is that we have yet to dive into the Docker specific hosts such as AWS ECS, dotCloud, or Tutum. If we end up adopting a service like one of these, we may end up dropping Chef all together, but we're not ready to let go of those reigns yet.

One step at a time for us. The initial goal is to get application infrastructure into immutable containers managed by Docker. We're not ready to make a decision on what manages Docker or where we host it; that comes next.

Manage your own Docker Host

The main reason I was turned off from using Boot2Docker or Kitematic is that they create a virtual machine in VirtualBox or VMware from a default box/image that you can't easily manage with configuration management. I want control of the host machine that Docker runs on, locally and in production. This is where Chef comes into play, in conjunction with Vagrant.

Local Docker Host in Vagrant

As I mentioned in my last post, we are no strangers to Vagrant. Vagrant is great for managing virtual machines. If Boot2Docker or Kitematic are going to spin up a virtual machine behind the scenes in order to use Docker, then why not spin up a virtual machine with Vagrant? This way I can manage the configuration with a provisioner, such as Chef. This is the reason I've decided to go down the Vagrant-with-Docker route, instead of Boot2Docker or Kitematic.

The latest version of Vagrant ships with a Docker provider built in, so that you can manage Docker containers via the Vagrantfile. The Vagrant Docker integration was a turn-off to me initially because it didn't seem very Docker-esque. It seemed Vagrant was just abstracting established Docker workflows (specifically Docker Compose) into a Vagrant syntax. However, within the container Vagrantfile I saw you can also build images from a Dockerfile and launch those images into a container. It didn't feel so distant from Docker anymore.

It seems there might be a little overlap between what Vagrant and Docker do, but at the end of the day it's a matter of figuring out the right combination for using the tools together, the boundary being that Vagrant handles "orchestration" and Docker handles application infrastructure.

When all is set up we will have two Vagrantfiles to manage: one to define containers and one to define the host machine.

Setting up the Docker Host with Vagrant

The first thing to do is to define the Vagrantfile for your host machine. We will be referencing this Vagrantfile from the container Vagrantfile. The easiest way to do this is to just type the following in an empty directory (your project root):

$ vagrant init ubuntu/trusty64

You can configure that Vagrantfile however you like. Typically you would also use a tool like Chef Solo, Puppet, or Ansible to provision the machine. For now, just to get Docker installed on the box, we'll add a provision statement to the Vagrantfile. We will also give the Docker host a hostname and a port mapping, since we know we'll be creating a Drupal container that should EXPOSE port 80. Open up your Vagrantfile and add the following:

config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567

This ensures that Docker is installed on the host when you run vagrant up, and maps port 4567 on your local machine to port 80 on the Docker host (guest machine). Your Vagrantfile should look something like this (with all the comments removed):





Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.hostname = "docker-host"
  config.vm.provision "docker"
  config.vm.network :forwarded_port, guest: 80, host: 4567
end

Note: this post is not intended to walk through the fundamentals of Vagrant; for further resources on how to configure the Vagrantfile, check out the docs.

As I mentioned earlier, we are going to end up with two Vagrantfiles in our setup. I also mentioned the container Vagrantfile will reference the host Vagrantfile. This means the container Vagrantfile is the configuration file we want used when vagrant up is run. We need to move the host Vagrantfile to another directory within the project, out of the project root directory. Create a host directory and move the file there:

$ mkdir host
$ mv Vagrantfile !$

Bonus Tip: The !$ destination that I used when moving the file is a shell shortcut to use the last argument from the previous command.

Define the containers

Now that we have the host Vagrantfile defined, let's create the container Vagrantfile. Create a Vagrantfile in the project root directory with the following contents:





Vagrant.configure(2) do |config|
  config.vm.provider "docker" do |docker|
    docker.vagrant_vagrantfile = "host/Vagrantfile"
    docker.image = "drupal"
    docker.ports = ['80:80']
    docker.name = 'drupal-container'
  end
end

To summarize the configuration above: we are using the Vagrant Docker provider, we have specified the path to the Docker host Vagrant configuration that we set up earlier, and we have defined a container using the Drupal image from the Docker Registry, exposing some ports on the Docker host.

Start containers on vagrant up

Now it's time to start up the container. It should be as easy as going to your project root directory and typing vagrant up. It's almost that easy. For some reason after running vagrant up I get the following error:

A Docker command executed by Vagrant didn't complete successfully!
The command run along with the output from the command is shown
below.

Command: "docker" "ps" "-a" "-q" "--no-trunc"

Stderr: Get http:///var/run/docker.sock/v1.19/containers/json?all=1: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?

Stdout:

I've gotten around this by just running vagrant up again. If anyone has ideas about what is causing that error, please feel free to leave a comment.

Drupal in a Docker Container

You should now be able to navigate to http://localhost:4567 and see the Drupal installation screen. Go ahead and install Drupal using an SQLite database (we didn't set up a MySQL container) to see that everything is working. Pretty cool stuff!

Development Environment

There are other things I want to accomplish with our local Vagrant environment to make it easy to develop on, such as setting up synced folders and using the vagrant rsync-auto tool. I also want to customize our Drupal builds with Drush Make, to make developing on Drupal much more efficient when adding modules, updating core, etc. I'll leave those details for another post; this one has become very long.

Conclusion

As you can see, you don't have to use Boot2Docker or Kitematic to run Docker locally. If you just want to figure out how Docker works, you should use one of those packages. Thinking longer term, your local Docker host should be managed the same way your production Docker host(s) are managed. Using Vagrant, instead of Boot2Docker or Kitematic, allows me to manage my local Docker host the same way I would manage production Docker hosts, using tools such as Chef, Puppet, or Ansible.

In my next post, I'll build on what we did here today, and get our Vagrant environment into a working local development environment.
