Apr 16 2021
hw
Apr 16

Drupal is a CMS. One might even say that Drupal is a good CMS, and they would be right about that, in my not-so-humble opinion. At its core, Drupal is able to define content really well. Sure, it needs to do better at making the content editor’s experience pleasant, among other things. But defining content structures that are malleable enough to serve multiple surfaces has always been one of Drupal’s strengths. This makes Drupal an excellent choice for building a Digital Experience Platform (DXP).

The concept of a DXP has been popular for a few years but it has peaked, not surprisingly, in the last year with organizations now forced to prioritize their digital presence above all else. Companies have now realized that, with the amount of information being thrown at each of us, they have to make sure their presence is felt through all media. Building a coherent content framework that can be used for all of this media is no easy task. Information architects need powerful tools to flexibly define how they would store their content. Drupal has been able to provide such tools for a long time.

Digital Experience is a strategy

You might have realized by now that DXP is not a product, but a collection of tools that can help you execute your strategy. With the proliferation of media, it is important that you convey a consistent message regardless of how someone consumes it. To be able to do that, you have to identify the various ways you would talk to your customers and build a strategy. This is highly subjective and customized to your needs, and I won’t go into much depth here, but the output we want is a coherent content architecture: one that can represent your messaging to your customers.

Once you’re able to formulate this strategy, that is when you begin implementing it within your DXP. The content architecture you have identified lives in the CMS within the DXP, and the CMS needs to be flexible enough to support it. Drupal is a very capable CMS for such requirements. It supports complex content models with relationships among pieces of content, rich (semantic) fields, and multilingual capabilities. You can also build advanced workflows for content moderation. This enables Drupal to be the single source of truth for all content in an enterprise. This, too, should be important in your content strategy, and Drupal makes it easy to implement.

Discovery

A lot of this may sound like theory without much practice. In a way, that’s true. Think of this as a discovery stage for the problem you’re solving. It’s important that you spend enough time here so that you identify the problem clearly. Solving the wrong problem may not be very expensive from a technical point of view, but it is frustrating for your content team. Involve the various stakeholders within the DXP to determine whether the content model you are building will break their systems. For example, your long text field may be good for web and email, but it is unusable for a text message. On the other hand, if you break your content up into many granular pieces, you have to figure out how to piece them together to build a landing page.

You also have to determine how your content can be served to more diverse channels (e.g., voice assistants or appliances). Depending on your domain, you have to make trade-offs and build a model that is workable for a variety of consumers. But that’s only one side of the story. You also have to make sure that the content is easily discoverable (both internally and externally), easily modifiable, auditable (revisions), trackable (workflows), and reliably stored (security and integrity of the data). Typically, an ecosystem of tools helps you achieve this.

Integrations

Drupal already handles some of these things, and it can integrate well with other systems in your infrastructure. Drupal 8 began a decoupling movement which turned into a hype cycle and is now being rationalized; I wrote about it in a separate post. To be clear, decoupling was always possible, but Drupal 8 introduced web services in core, which accelerated the pace. Today, you only need to enable the JSON:API module to make all your content immediately discoverable and consumable by a variety of consumers.
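
As a rough sketch (the site URL and content type below are only placeholders), exposing and fetching content over JSON:API can be as simple as:

# Enable the core JSON:API module (shown here with Drush).
drush en jsonapi -y

# Any consumer can then fetch content, e.g. article nodes, over plain HTTP.
curl -H "Accept: application/vnd.api+json" \
  "https://example.com/jsonapi/node/article"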

Apart from being the content server, Drupal also handles being the consumer very well. As of Drupal 8, developers can easily use any PHP package, library, or SDK to communicate with different systems. Again, this was possible before, but Drupal 8 made it very easy by adopting modern PHP programming practices. Even if a library or SDK is not available, most systems expose some sort of API. From Drupal’s point of view, use the built-in Guzzle HTTP client (or another HTTP client of your choice) and invoke the API.

Where does Drupal fit in

Drupal is now a very suitable choice for beginning to build your DXP. However, that is not the complete story. All systems evolve, requirements change, and strategies shift. It should be easy for Drupal to shift along with them.

For example, Drupal’s current editing experience was excellent when it came out, but that was several years ago. Today, building an intuitive editorial experience with Drupal is the most pressing challenge we face. There are a lot of improvements in this space and there will be more with newer versions of Drupal. It helps that the community has settled into a reliable release schedule, which has built users’ trust in Drupal. Because of the regular release schedule and focused development, we now see editorial features such as layout builder and a modern theme as a part of Drupal.

It may seem that this is not important, or at least not as important as “strategy” and “infrastructure”. That’s a dangerous notion to have. Ultimately, your system will only be as effective as your team makes it. An unintuitive UI leads to mistakes and a frustrating experience, and if the content is hard to maintain, it will stop being maintained. If there is anything more dangerous than missing content, it is outdated content.

Customization

Apart from the editorial experience, a flexible system is important for an effective DXP. If the content store cannot keep up with the changes required for new consumers, or even existing ones, it will become a bottleneck. In organizations, such problems are solved by hacking on another system within the DXP or running a parallel system. Both of these approaches mark the beginning of the demise of your DXP.

That is why it is important for you to be able to easily customize Drupal. Yes, I’m talking about low-code solutions. Drupal has figured out how to modify the content structure with minimal developer involvement, if any. It needs to make this easier for other functionality as well. Various features of Drupal should be able to interact with each other more flexibly and intuitively. For example, it is possible to place a view within a layout, but it is not intuitive to do so. We have to identify such common problems and build solutions for site-builders to use. Again, I am not going to go into depth on this.

Building a digital experience platform for your organization is a massive undertaking and I cannot hope to do justice to all the nuances within a single blog post written over a couple of hours. But I hope that this post gave some insights into why Drupal is relevant to this space and how it fits into the picture.

Apr 13 2021
hw
Apr 13

I have been setting up computers and configuring web servers for a long time now. I started my computing journey by building computers and setting up operating systems for others. Soon, I started configuring servers, first using shared hosting and then dedicated servers. As virtualization became mainstream, I started configuring cloud instances to run websites. At a certain point, when I was maintaining several projects (some seasonal), it became harder to remember exactly how I had configured a particular server when I needed to upgrade it or set it up again. That is why I have been interested in Infrastructure as Code (IaC) for a long time.

You might say that it is easier to do this by just documenting the server details and all the configuration for each project. Sure, it is great if you can manage to keep the documentation updated as the software evolves and requirements change. Realistically, that doesn’t happen. Instead, if you start with the perspective that you are going to only configure servers with code, never manually, you are forced to code all the changes you want to make.

Infrastructure as Code

So, what does IaC look like? There are several tools out there and each has its own conventions. Broadly speaking, there are two types of code you would write for IaC: declarative or imperative. If you are a programmer, you are already familiar with the imperative style of programming. This is essentially the style of almost all programming languages out there. In these languages, you would write line-by-line instructions to tell the computer exactly what to do and how to do it. Consider this shell script used for creating an instance on DigitalOcean.

#!/usr/bin/env bash

# Ask for confirmation before creating anything.
read -p "Are you sure you want to create a new droplet? " -n 1 -r
if [[ $REPLY =~ ^[Yy]$ ]]
then
    # Create the droplet, wait a few seconds, then list droplets to verify it exists.
    doctl compute droplet create --image ubuntu-20-04-x64 --size s-1vcpu-1gb --region tor1 ps5-stock-checker --context personal
    echo "Waiting to create..."
    sleep 5
    doctl compute droplet list --context personal
fi

Here, we are running a sequential set of instructions to create a droplet and verify that it got created. We are also confirming this with the user before actually creating the droplet. This is a very simple example but you could expand it to create whatever style of infrastructure you need, albeit not easily.

The declarative style of programming

Most IaC tools support some form of declarative syntax which lets you define what infrastructure you need rather than how to create it. The same example, expressed in Terraform, would look like this.

resource "digitalocean_droplet" "web" {
  image  = "ubuntu-20-04-x64"
  name   = "ps5-stock-checker"
  region = "tor1"
  size   = "s-1vcpu-1gb"
}

As you can see, this example is easier to read. Moreover, you’ll find that it becomes easier to reason about when the infrastructure gets complex. My personal preference is to use Terraform, but whatever you use would have a similar structure. It is the tool’s job to figure out exactly how to implement this infrastructure. It can create the infrastructure from scratch, of course, but it can also track changes and make only those changes required to bring the infrastructure in line with your definition.
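
As a rough illustration of that change-tracking workflow (these are generic Terraform commands, not anything specific to the example above):

# Preview the changes Terraform would make to match the current definition.
terraform plan

# Apply only that delta; resources that already match the definition are left alone.
terraform apply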

Where is the simple in this?

You might think this is overkill and I can understand that sentiment. After all, I thought the same, but I have found it useful for projects both large and small. In fact, I find it more useful for simpler, lower-budget projects than for those with a much larger budget. At least as far as Drupal is concerned, projects with larger budgets tend to use one of the PaaS providers. There are several providers such as Acquia, Pantheon, platform.sh, and others that do a great job at Drupal-specific hosting. They are not extremely expensive either, but of course, they can’t be as cheap as IaaS companies such as AWS or DigitalOcean.

So, it may not be simple, but we can get there. On the projects that I am going to self-host, I add a directory called “infra” with Terraform modules and an Ansible playbook. To make it findable, I have put it up on GitHub at hussainweb/lamp-ansible-terraform. There’s no documentation, unfortunately, but I hope to put up something soon. Meanwhile, this blog post can serve as an informal introduction to the repository.

My workflow

When I want to start a new project that I know won’t be on one of the PaaS providers, I copy the repository above into my project and start editing the config files I need. Broadly, the repository contains Terraform modules to provision a server (or two) to run Drupal and the actual configuration of the server happens through Ansible. As of right now, there are modules for AWS and Azure to provision the servers. The one for AWS supports setting up instances with security groups configured. You can optionally set up a separate instance for a Database server as well. You can find all the options that the module supports in the variables.tf file.

On the other hand, the module for Azure is simpler and only supports setting up a single server to run both the web and database server. You can take a look at its variables.tf file to see what is exposed (TL;DR, just the instance size). I built these modules on an as-needed basis and didn’t try to maintain feature parity.

Depending on what I want to use, I will initialize that Terraform module (terraform init) and provision. For small projects, I won’t worry about the remote state backend; I just keep the state on my machine and back it up along with my site’s data. It’s a long process but it works, and I haven’t needed to simplify it yet. At the end of this, I get the IP address(es) of the instance(s).

Sometimes, I need to set up different servers for a staging environment, for example. For this, I just provision another server in a different Terraform workspace. The module itself does not support multiple environments and does not need to.
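
As a sketch of that workflow (the workspace name is just an example):

# Provision the primary environment (state is kept locally, as described above).
terraform init
terraform apply

# Provision a staging environment by switching to a separate workspace.
terraform workspace new staging
terraform apply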

Configuring the instance

Now that I have the IP address(es), I can set up Ansible. I edit the relevant inventory files (for dev or for production) and set up relevant variables in various yml files. Out of these, I absolutely have to change the app.yml file to set my project’s repository URL. I can optionally also change the PHP version, configure Redis, set up SSH keys (edit this one if you want to use the repo), etc. Once all this is done, I can run ansible-playbook to execute the playbook.
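
The invocation itself is the standard Ansible command; the inventory and playbook paths below are placeholders, and the repository’s actual layout may differ:

# Run the playbook against the chosen inventory (dev or production).
ansible-playbook -i inventory/production playbook.yml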

I realize this repo is hardly usable without documentation. So far, it’s just a bunch of scripts I have cobbled together to help me with some of my projects. In time, I do want to improve the documentation and add more resources. This also intersects with my efforts in another direction to set up remote development instances (not for hosting, but for live development). It’s called Yakht (after yacht, as I wanted an ocean-related metaphor). I am looking forward to working on that too, but that has to be a separate blog post.

Apr 12 2021
hw
Apr 12

Here’s a quick post to show how we can run Drupal in a CI environment easily so that we can test the site. Regardless of how you choose to run the tests (e.g. PHPUnit, Behat, etc), you still need to run the site somewhere. It is not a great idea to test on an actual environment (unless it is isolated and designated for testing). You need to set up a temporary environment just for the CI pipeline where you run the tests and then tear it down.

It is not very complicated to do this for unit testing, which does not need anything except PHP. But when you need to write a functional test and run it as part of CI, you need everything: a web server, PHP, a database, and maybe more. Since CI pipelines are transient (as they should be), each run gets a completely new environment. This means that you have to somehow set up the environment for testing.

Continuous Integration pipelines

Many CI systems have a concept of runners (or nodes) which can be preconfigured to run any software you want. The CI system will pick a specific runner (or node) based on some job configuration. For example, GitLab CI selects the runner based on tags defined on the job: a job tagged “docker” may be configured to run on a Docker host (essentially within a Docker container). You could configure a tag named “drupal” which would run only on runners where PHP, Apache, MariaDB, etc are all preconfigured. Your job just needs to load a database and run the tests.

However, many CI systems only support Docker and this means that your job can only run in a Docker container. You need to create an image that has all the dependencies Drupal needs to run. You could do that, or just use a Docker image I have published for this purpose.

Running Drupal in Docker

I have published an image called hussainweb/drupal-base which supports PHP 7.3, 7.4, and 8.0. The images are tagged respectively as “php7.3”, “php7.4”, and “php8.0”. The image comes with all the common extensions required by Drupal and a few more. You can use it for many purposes, but I will just cover the CI use case today. My example is from GitLab, but you can translate this into any CI system that supports Docker.

drupal_tests:
  image: hussainweb/drupal-base:php7.4
  services:
    - name: registry.gitorious.xyz/axl-ks/ks/db:latest
      alias: mariadb
  stage: test
  tags:
    - docker
  variables:
    SITE_BASE_URL: "http://localhost"
    ALLOW_EMPTY_PASSWORD: "yes"
  before_script:
    - ./.gitlab/ci.sh

  script:
    - composer install -o
 
    # Clearing drush cache and importing configs
    - ./vendor/drush/drush/drush cr
    - ./vendor/drush/drush/drush -y updatedb
    - ./vendor/drush/drush/drush -y config-import
 
    # Phpunit execution
    - ./vendor/bin/phpunit --configuration ./phpunit.xml --testsuite unit
    - ./vendor/bin/phpunit --configuration ./phpunit.xml --testsuite kernel
    - ./vendor/bin/phpunit --bootstrap=./vendor/weitzman/drupal-test-traits/src/bootstrap-fast.php --configuration ./phpunit.xml --testsuite existing-site

Ignore the “services” part for now. It lets GitLab load additional Docker images as services, and we can use it to run a database server. The image used here is not a common database server image, of course; we will talk about it in a future post. Let’s also ignore the “variables” part because these are just environment variables used by the system (they are not specific to the image).

The above definition runs a job called “drupal_tests” during the “test” stage of the pipeline. It loads the PHP 7.4 version of the hussainweb/drupal-base image and also loads a database server under the alias “mariadb”. As I mentioned before, the “tags” configuration is used to pick the relevant runner.

The “before_script” and “script” sections contain the commands that actually run the tests. In “before_script”, we do some common setup: we point settings.php at the database host and adjust the Apache document root to match the GitLab runner’s checkout path. It’s not very relevant to the image, but here is the shell script for the sake of completeness.

#!/usr/bin/env bash

dir=$(dirname $0)

set -ex

# Use the CI-specific settings file (database host details, etc).
cp ${dir}/settings.local.php ${dir}/../web/sites/default/settings.local.php

# Point Apache's document root at the runner's checkout directory.
sed -ri -e "s!/var/www/html/web!$CI_PROJECT_DIR/web!g" /etc/apache2/sites-available/*.conf
sed -ri -e "s!/var/www/html/web!$CI_PROJECT_DIR/web!g" /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf

service apache2 start

The actual test execution happens in the “script” section. We start with the usual Drush commands (cache rebuild, database updates, configuration import) and then run our tests using PHPUnit.

Docker image

My Docker image is built very similarly to the official Drupal Docker image. The only difference is that I don’t copy the Drupal files into the image because, for my purposes, the Drupal code will always live outside the image. This setup also allows you to efficiently package your Drupal application in a Docker image: simply create your application’s Dockerfile based on mine and copy your Drupal files into the correct location. But that’s not the subject of this post. The source code for the image is on GitHub at hussainweb/docker-drupal-base.
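
For instance, a quick local run might look something like this; the mount path is an assumption based on the /var/www/html/web document root seen in the CI script above, the host port is arbitrary, and it presumes the image serves Apache on port 80 like the official Drupal image it resembles:

# Serve a Drupal codebase (with a web/ subdirectory) from the current directory.
docker run --rm -p 8080:80 \
  -v "$(pwd)":/var/www/html \
  hussainweb/drupal-base:php7.4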

I’ll end it here today and I hope you find this post useful. Do let me know in what other ways you might find this image useful.

Mar 12 2020
Mar 12

Category 1: Web development

Category 2: Drupal

Category 3: Data architecture and engineering

Category 4: Data analytics

Category 5: Design, research, and content strategy

Category 6: Operations

Apr 04 2019
Apr 04
Julia Gutierrez

DrupalCon2019 is heading to Seattle this year and there’s no shortage of exciting sessions and great networking events on this year’s schedule. We can’t wait to hear from some of the experts out in the Drupalverse next week, and we wanted to share with you a few of the sessions we’re most excited about.

Adam is looking forward to:

Government Summit on Monday, April 8th

“I’m looking forward to hearing what other digital offices are doing to improve constituents’ interactions with government so that we can bring some of their insights to the work our agencies are doing. I’m also excited to present on some of the civic tech projects we have been doing at MassGovDigital so that we can get feedback and new ideas from our peers.”

Bryan is looking forward to:

1. Introduction to Decoupled Drupal with Gatsby and React

Time: Wednesday, April 10th from 1:45 pm to 2:15 pm

Room: 6B | Level 6

“We’re using Gatsby and React today to power Search.mass.gov and the state’s budget website, and Drupal for Mass.gov. Can’t wait to learn about Decoupled Drupal with Gatsby. I wonder if this could be the right recipe to help us make the leap!”

2. Why Will JSON API go into Core?

Time: Wednesday, April 10th from 2:30 pm to 3:00 pm

Room: 612 | Level 6

“Making data available in machine-readable formats via web services is critical to open data and to publish-once / single-source-of-truth editorial workflows. I’m grateful to Wim Leers and Mateu Aguilo Bosch for their important thought leadership and contributions in this space, and eager to learn how Mass.gov can best maximize our use of JSON API moving forward.”

I (Julia) am looking forward to:

1. Personalizing the Teach for America applicant journey

Time: Wednesday, April 10th from 1:00 pm to 1:30 pm

Room: 607 | Level 6

“I am really interested in learning from Teach for America on how they implemented personalization and integrated across applications to bring applicants a consistent look, feel, and experience when applying for a Teach for America position. We have created Mayflower, Massachusetts government’s design system, and we want to learn what a single sign-on for different government services might look like and how we might use personalization to improve the experience constituents have when interacting with Massachusetts government digitally. ”

2. Devsigners and Unicorns

Time: Wednesday, April 10th from 4:00 pm to 4:30 pm

Room: 612 | Level 6

“I’m hoping to hear if Chris Strahl has any ‘best-practices’ and ways for project managers to leverage the unique multi-skill abilities that Devsigners and unicorns possess while continuing to encourage a balanced workload for their team. This balancing act could lead towards better development and design products for Massachusetts constituents and I’d love to make that happen with his advice!”

Melissa is looking forward to:

1. DevOps: Why, How, and What

Time: Wednesday, April 10th from 1:45 pm to 2:15 pm

Room: 602–604 | Level 6

“Rob Bayliss and Kelly Albrecht will use a survey they released, as well as some other important approaches, to elaborate on why DevOps is so crucial to technological strategy. I took the survey back in November of 2018, and I want to see the results. This presentation will help me identify whether any changes should be made in our process to better serve constituents based on those results.”

2. Advanced Automated Visual Testing

Time: Thursday, April 11th from 2:30 pm to 3:00 pm

Room: 608 | Level 6

“In this session Shweta Sharma will speak to what visual testing tools are currently out there and compare them. I am excited to gain more insight into automated visual testing for faster releases so we can identify any gotchas and improve our releases for Mass.gov users.

P.S. Watch a presentation I gave at this year’s NerdSummit in Boston, and stay tuned for a blog post on some automation tools we used at MassGovDigital coming out soon!”

We hope to see old friends and make new ones at DrupalCon2019, so be sure to say hi to Bryan, Adam, Melissa, Lisa, Moshe, or me when you see us. We will be at booth 321 (across from the VIP lounge) on Thursday giving interviews and chatting about technology in Massachusetts. We hope you’ll stop by!

Interested in a career in civic tech? Find job openings at Digital Services.
Follow us on Twitter | Collaborate with us on GitHub | Visit our site

Sep 28 2018
Sep 28

Pairing the Composer template for Drupal projects with Lando gives you a fully working Drupal environment with barely any setup.

Lando is an open-source, cross-platform local development environment. It uses Docker to build containers for well-known frameworks and services written in simple recipes. If you haven’t started using Lando for your local development, we highly recommend it. It is easier, faster, and relatively pain-free compared to MAMP, WAMP, VirtualBox VMs, Vagrant or building your own Docker infrastructure.

Prerequisites

You’ll need to have Composer and Lando installed.

Setting up Composer Template Drupal Project

If you want details about what you get when you install drupal-project, you can view the repo. Otherwise, if you’d rather simply set up a Drupal template site, run the following command.

composer create-project drupal-composer/drupal-project:8.x-dev [your-project] --stability dev --no-interaction

Once that is done running, cd into the newly created directory. You’ll find that you now have more than a basic Drupal installation.

Getting the site set up on Lando

Next, run lando init, which prompts you with 3 simple questions:

? What recipe do you want to use? > drupal8
? Where is your webroot relative to the init destination? > web
? What do you want to call this app? > [your-project]

Once that is done provisioning, run lando start, which downloads and spins up the necessary containers and provides you with a set of URLs that you can use to visit your site:

https://localhost:32807
http://localhost:32808
http://[your-project].lndo.site:8000
https://[your-project].lndo.site

Setup Drupal

Visit any of the URLs to initialize the Drupal installation flow. Run lando info to get the database details:

Database: drupal8
Username: drupal8
Password: drupal8
Host: database

Working with your new Site

One of the benefits of using Lando is that your toolchain does not need to be installed on your local machine; it can live in the Docker containers that Lando manages. This means you can use commands provided by Lando without having to install other packages. The commands that come with Lando include lando drush, lando drupal, and lando composer. Execute these commands in your command prompt as usual, though they'll run from within the container.
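
For example (the module here is purely illustrative), a day-to-day sequence might look like this, with no PHP, Composer, or Drush installed on the host:

# Add a contributed module, enable it, and rebuild caches, all inside the containers.
lando composer require drupal/admin_toolbar
lando drush en admin_toolbar -y
lando drush cr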

Once you commit your .lando.yml file, others can use the same Lando configuration on their machines, which makes it easy for everyone to set up identical local environments.

Aug 27 2018
Aug 27

This post is part 5 in the series “Hashing out a docker workflow”. I have resurrected this series from over a year ago; if you want to check out the previous posts, you can find the first post here. Although the beginning of this blog series pre-dates Docker Machine, Docker for Mac, and Docker for Windows, the Docker concepts still apply; we just aren’t using it with Vagrant any more. Instead, check out the Docker Toolbox. There isn’t a need to use Vagrant any longer.

We are going to take the Drupal image that I created in my last post, “Creating a deployable Docker image with Jenkins”, and deploy it. You can find that image on Docker Hub, which is where we pushed it last time. You have several options for deploying Docker images to production, whether manually, using a service like AWS ECS or OpenShift, etc… Today, I’m going to walk you through a deployment process using Kubernetes, also known simply as k8s.

Why use Kubernetes?

There is an abundance of options out there to deploy Docker containers to the cloud easily. Most of the options provide a nice UI with a form wizard that will take you through deploying your containers. So why use k8s? The biggest advantage, in my opinion, is that Kubernetes is agnostic of the cloud that you are deploying on. This means if/when you decide you no longer want to host your application on AWS, or whatever cloud you happen to be on, and instead want to move to Google Cloud or Azure, you can pick up your entire cluster configuration and move it very easily to another cloud provider.

Obviously there is the trade-off of needing to learn yet another technology (Kubernetes) to get your app deployed, but you also won’t have vendor lock-in when it is time to move your application to a different cloud. Some of the other benefits worth mentioning about k8s are the large community, all the add-ons, and the ability to have all of your cluster/deployment configuration in code. I don't want to turn this post into a comparison of Kubernetes and the alternatives, so let's jump into some hands-on work and start setting things up.

Setup a local cluster.

Instead of spinning up servers in a cloud provider and paying for the cost of those servers while we explore k8s, we are going to set up a cluster locally and configure Kubernetes without paying a dime out of our pocket. Setting up a local cluster is super simple with a tool called Minikube. Head over to the Kubernetes website and get that installed. Once you have Minikube installed, boot it up by typing minikube start. You should see something similar to what is shown below:

$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 160.27 MB / 160.27 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

This command set up a virtual machine on your computer, likely using VirtualBox. If you want to double-check, pop open the VirtualBox UI to see the new VM created there. This virtual machine has loaded on it all the necessary components to run a Kubernetes cluster. In k8s speak, each virtual machine is called a node. If you want to log in to the node to explore a bit, type minikube ssh. Below I have ssh'd into the machine and run docker ps. You’ll notice that this VM has quite a few Docker containers running to make up this cluster.

 $ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
aa766ccc69e2        k8s.gcr.io/k8s-dns-sidecar-amd64           "/sidecar --v=2 --lo…"   5 minutes ago       Up 5 minutes                            k8s_sidecar_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
6dc978b31b0d        k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     "/dnsmasq-nanny -v=2…"   5 minutes ago       Up 5 minutes                            k8s_dnsmasq_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
0c08805e8068        k8s.gcr.io/kubernetes-dashboard-amd64      "/dashboard --insecu…"   5 minutes ago       Up 5 minutes                            k8s_kubernetes-dashboard_kubernetes-dashboard-5498ccf677-hvt4f_kube-system_3abef591-a637-11e8-894d-0800273ca679_0
f5d725b1c96a        gcr.io/k8s-minikube/storage-provisioner    "/storage-provisioner"   6 minutes ago       Up 6 minutes                            k8s_storage-provisioner_storage-provisioner_kube-system_3acd2f39-a637-11e8-894d-0800273ca679_0
3bab9f953f14        k8s.gcr.io/k8s-dns-kube-dns-amd64          "/kube-dns --domain=…"   6 minutes ago       Up 6 minutes                            k8s_kubedns_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
9b8306dbaab7        k8s.gcr.io/kube-proxy-amd64                "/usr/local/bin/kube…"   6 minutes ago       Up 6 minutes                            k8s_kube-proxy_kube-proxy-dwhn6_kube-system_3a0fa9b2-a637-11e8-894d-0800273ca679_0
5446ddd71cf5        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_storage-provisioner_kube-system_3acd2f39-a637-11e8-894d-0800273ca679_0
17907c340c66        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kubernetes-dashboard-5498ccf677-hvt4f_kube-system_3abef591-a637-11e8-894d-0800273ca679_0
71ed3f405944        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
daf1cac5a9a5        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-proxy-dwhn6_kube-system_3a0fa9b2-a637-11e8-894d-0800273ca679_0
9d00a680eac4        k8s.gcr.io/kube-scheduler-amd64            "kube-scheduler --ad…"   7 minutes ago       Up 7 minutes                            k8s_kube-scheduler_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
4d545d0f4298        k8s.gcr.io/kube-apiserver-amd64            "kube-apiserver --ad…"   7 minutes ago       Up 7 minutes                            k8s_kube-apiserver_kube-apiserver-minikube_kube-system_2057c3a47cba59c001b9ca29375936fb_0
66589606f12d        k8s.gcr.io/kube-controller-manager-amd64   "kube-controller-man…"   8 minutes ago       Up 8 minutes                            k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_ee3fd35687a14a83a0373a2bd98be6c5_0
1054b57bf3bf        k8s.gcr.io/etcd-amd64                      "etcd --data-dir=/da…"   8 minutes ago       Up 8 minutes                            k8s_etcd_etcd-minikube_kube-system_a5f05205ed5e6b681272a52d0c8d887b_0
bb5a121078e8        k8s.gcr.io/kube-addon-manager              "/opt/kube-addons.sh"    9 minutes ago       Up 9 minutes                            k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
04e262a1f675        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-apiserver-minikube_kube-system_2057c3a47cba59c001b9ca29375936fb_0
25a86a334555        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
e1f0bd797091        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-controller-manager-minikube_kube-system_ee3fd35687a14a83a0373a2bd98be6c5_0
0db163f8c68d        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_etcd-minikube_kube-system_a5f05205ed5e6b681272a52d0c8d887b_0
4badf1309a58        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0

When you’re done snooping around inside the node, log out of the session by typing Ctrl+D. This should take you back to a session on your local machine.

Interacting with the cluster

Kubernetes is managed via a REST API; however, you will find yourself interacting with the cluster mainly through a CLI tool called kubectl. We will issue commands to kubectl, and the tool will generate the necessary Create, Read, Update, and Delete requests for us and execute those requests against the API. It’s time to install the CLI tool: go check out the docs here to install it on your OS.

Once you have the command line tool installed, it should be automatically configured to interface with the cluster that you just set up with Minikube. To verify, run a command to see all of the nodes in the cluster: kubectl get nodes.

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    6m        v1.10.0

We have one node in the cluster! Let's deploy our app using the Docker image that we created last time.

Writing Config Files

With the kubectl CLI tool, you can define all of your Kubernetes objects directly, but I like to create config files that I can commit to a repository and use to manage changes as we expand the cluster. For this deployment, I’ll take you through creating 3 different k8s objects. We will explicitly create a Deployment object, which will implicitly create a Pod object, and we will create a Service object. For details on what these 3 objects are, check out the Kubernetes docs.

In a nutshell, a Pod is a wrapper around a Docker container, and a Service is a way to expose a Pod, or several Pods, on a specific port to the outside world. Pods are only accessible inside the Kubernetes cluster; the only way to access any services in a Pod is to expose the Pod with a Service. A Deployment is an object that manages Pods and ensures that they are healthy and up. If you configure a Deployment to have 2 replicas, the Deployment will ensure that 2 Pods are always up, and if one crashes, Kubernetes will spin up another Pod to match the Deployment definition.

deployment.yml

Head over to the API reference and grab the example config file https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#deployment-v1-apps. We will modify the config file from the docs to fit our needs. Change the template to look like below (I changed the image, app, and name properties in the yml below):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  # Unique name for this Deployment object
  name: deployment-example
spec:
  # Keep three replicas of the Pod running at all times
  replicas: 3
  template:
    metadata:
      labels:
        # Label used by the Service selector later on
        app: drupal
    spec:
      containers:
      - name: drupal
        # The image we built and pushed to Docker Hub in the previous post
        image: tomfriedhof/docker_blog_post

Now it’s time to feed that config file into the Kubernetes API; we will use the CLI tool for this:

$ kubectl create -f deployment.yml

You can check the status of that deployment by asking k8s for all Pod and Deployment objects:

$ kubectl get deploy,po

Once everything is up and running you should see something like this:

 $ kubectl get deploy,po
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/deployment-example   3         3         3            3           3m

NAME                                    READY     STATUS    RESTARTS   AGE
po/deployment-example-fc5d69475-dfkx2   1/1       Running   0          3m
po/deployment-example-fc5d69475-t5w2j   1/1       Running   0          3m
po/deployment-example-fc5d69475-xw9m6   1/1       Running   0          3m
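
As a quick aside, this is where the self-healing behavior of a Deployment (described earlier) becomes visible. A rough illustration, using one of the Pod names from the output above (yours will differ):

# Delete one Pod and watch the Deployment immediately create a replacement
# so that three replicas keep running.
kubectl delete pod deployment-example-fc5d69475-dfkx2
kubectl get po

# You can also change the desired replica count on the fly.
kubectl scale deployment deployment-example --replicas=2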

service.yml

We have no way of accessing any of those Pods in the deployment. We need to expose the Pods using a Kubernetes Service. To do this, grab the example file from the docs again and change it to the following: https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#service-v1-core

kind: Service
apiVersion: v1
metadata:
  # Unique name for this Service object
  name: service-example
spec:
  ports:
    # Forward port 80 on the Service to port 80 on the Pods
    - name: http
      port: 80
      targetPort: 80
  selector:
    # Send traffic to Pods carrying the app=drupal label
    # (the label we set in the Deployment above)
    app: drupal
  # Expose the Service outside the cluster
  type: LoadBalancer

Create this service object using the CLI tool again:

$ kubectl create -f service.yml

You can now ask Kubernetes to show you all 3 objects that you created by typing the following:

$ kubectl get deploy,po,svc
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/deployment-example   3         3         3            3           7m

NAME                                    READY     STATUS    RESTARTS   AGE
po/deployment-example-fc5d69475-dfkx2   1/1       Running   0          7m
po/deployment-example-fc5d69475-t5w2j   1/1       Running   0          7m
po/deployment-example-fc5d69475-xw9m6   1/1       Running   0          7m

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP        1h
svc/service-example   LoadBalancer   10.96.176.233   <pending>     80:31337/TCP   13s

You can see under the services at the bottom that port 31337 was mapped to port 80 on the Pods. Now if we hit any node in the cluster (in our case it's just the one VM) on port 31337, we should see the Drupal app that we built from the Docker image we created in the last post. Since we are using Minikube, there is a command to open a browser on the specific port of the service: type minikube service followed by the service name:

$ minikube service service-example

This should open up a browser window and you should see the Installation screen for Drupal. You have successfully deployed the Docker image that we created to a production-like environment.

What is next?

We have just barely scratched the surface of what is possible with Kubernetes. I showed you the bare minimum to get a Docker image deployed on Kubernetes. The next step is to deploy your cluster to an actual cloud provider. For further reading on how to do that, definitely check out the KOPS project.

If you have any questions, feel free to leave a comment below. If you want to see a demo of everything that I wrote about on the ActiveLAMP YouTube channel, let us know in the comments as well.

Jun 28 2018
Jun 28
Drupal Europe

Distributed systems face incredible challenges — Photo by Dennis van Zuijlekom

With Drupal 8 reaching its maturity and coupling/decoupling from other services — including itself — we have an increasing demand for Drupal sites to shine and make engaged teams thrive with good DevOps practices and resilient Infrastructure. All that done in the biggest Distributed System ever created by humans: the Internet. The biggest challenges of any distributed system are heterogeneity of systems and clients, transparency to the end user, openness to other systems, concurrency to support many users simultaneously, security, scalability on the fly and failure handling in a graceful way. Are we there yet?

We envision the DevOps + Infrastructure track showcasing everything from the smallest containers that can grow to millions of services, to DevOps best practices that accomplish very specific tasks to support Drupal and the teams working on it, saving precious human time by reducing repetitive, automatable work.

Questions about container orchestration, virtualization, and cloud infrastructure arise every day, and we expect the track sessions to bring answers on automating and scaling faster — maybe using applied machine learning or some other forms of prediction or self-management. See? We’re really into saving time by using technology to assist us.

We clearly don’t manage our sites the same way we did years ago, due to the increased complexity of what we manage and how we handle change in process and culture. It is therefore our goal at Drupal Europe to bring the best ideas, stories, and lessons learned from each industry into the room and share them with the community.

Be ready to raise, receive, and answer some hard questions, but most of all, to inspire people to think from a different angle. What works for a high-traffic website might not be applicable to maintaining a massive number of smaller sites. We want operations to inspire development on reliability, and development to inspire operations on any kind of automation. We want security to always be top of mind while still having a rapid and efficient impact on business value. And that is just the beginning…

Please help us to spread the word about this awesome conference. Our hashtag is #drupaleurope.

To recommend speakers or topics please get in touch at [email protected].

Drupal Europe 2018 brings over 2,000 creators, innovators, and users of digital technologies from all over Europe and the rest of the world together for three days of intense and inspiring interaction.

Drupalcon Nashville — Photo by Amazee Labs

Mar 06 2017
Mar 06

In the modern world of web / application development, using package managers to pull in dependencies has become a de-facto standard. In fact, if you are developing enterprise software and you aren't leveraging package managers, I would challenge you to ask yourself: why not?

Drupal was very early to adopt this mindset of pulling in dependencies almost a decade ago when Dmitri Gaskin created an extension for Drush (the Drupal Shell) that added the ability to pull contributed modules by listing them in a make file (I think Dmitri was 12 years old when he wrote the Drush extension, pretty amazing!). Since that time, the make extension has been added to Drush core.

Composer is the current standard for putting together PHP applications, which is why Drupal 8 has gone this direction, so why not use Composer to put together Drupal 7 applications?

First off, I want to clarify what I'm not talking about in this post. I am not advocating that we ditch Drush altogether; I still find value in other aspects of what Drush can do. I am specifically referring to the Make aspect of Drush. Is Drush Make still necessary?

This post is also not about Drupal Console vs Drush; both CLI tools add tremendous value to the development workflow, and there isn't 100% overlap between them [yet]. I think we still need both tools.

This post is about how I came to see the benefit of switching from Drush Make to Composer. I recommend making this move for Drupal 7 and Drupal 8. This Drupal Composer workflow is not new; it has been around for a while. I just never saw a good reason to make the jump from Drush Make to this new process, until now. We have been asked in the comments on previous posts, "Why haven't you adopted the Composer process?" I now have a good reason to change our process and fully jump on board with Composer for building Drupal 7 applications. We appreciate all the comments we get on our blog; they sharpen everyone involved!

We have blogged about the Composer workflow in a previous post on our Drupal 8 build process, but the main motivation there was to be proactive about where PHP application development is going [or already is]. We didn't have a real use case for the switch to Composer until now. This post will review how I came to that revelation.

Dependency Managers

I want to make one more point before I make the case for Composer. There are many reasons to use package managers to pull in dependencies; I'll save the details for another blog post. The main reason developers use package managers is so that your project repository does not include libraries and modules that you do not maintain. That is why tools like Composer, npm, Yarn, Bower, and Bundler exist. Hook up your RSS reader to our blog and I'll explain in more detail in a future post, but for now I'll leave this link to the Composer site explaining why committing dependencies in your project repo is a bad idea.

Version Numbers

The #1 reason to make the switch to Composer is the ability to manage version numbers. You may be asking, "What's the big deal? Drush Make handles version numbers as well." Let me give you a little context on why Composer's approach to version numbers is better.

The Back Story

Recently, in a strategy meeting with one of our enterprise clients, we were discussing how to approach launching hundreds of sites on one Drupal core utilizing multiple installation profiles on top of Acquia Site Factory. Our goal was to figure out how we could sanely manage updating potentially dozens of installation profiles without explicitly defining each version number of the profile being updated. This type of Drupal architecture is also a topic for a future blog post, but for now read Acquia's explanation of why architecting with profiles is a good idea.

As a developer, it is commonplace to lock dependencies down to a very specific version so that we know exactly what versions we are using and deploying. This is the reason composer.lock, Gemfile.lock, yarn.lock, and npm shrinkwrap exist. We have experienced the pain of unexpected defects in applications due to an obscure dependency changing deep in the dependency tree. Most dependency managers have a very explicit command for updating dependencies, i.e. composer update, bundle update, and yarn upgrade respectively, which in turn update the lock file.

A release manager does not need to know explicitly which version of a dependency (installation profile, module, etc.) to release next; she simply wants the latest stable release.

Herein lies the problem with Drush Make: the practices that solve both the developer's problem and the release manager's problem do not exist in Drush Make, but they do exist in Composer and other application development ecosystems. It's a common pattern that has been around for a while; it's called semantic versioning.

Semantic Versioning

If you haven't heard of semantic versioning (semver), go check it out now. Pretty much every package manager I have dealt with has adopted semver. Adopting semver gives the developer, or release manager, the choice of how to update dependencies within their app. There are very distinct numbers in semver for introducing breaking changes, new features, and bug fixes. How does this play into the use cases I mentioned above?

A developer can specify versions in the composer.json file while leaving the constraint flexible enough to pull in new bug fixes and feature improvements (patch and minor releases). Look at the example below:

{
  "name": "My Drupal Platform",
  ...
  "require": {
	...
    "drupal/drupal": "~7.53.0",
    "drupal/views": "^3.14.0"
  },
  ...
}

The tilde ~ and caret ^ symbols have special meanings when specifying version numbers. With three-part constraints like the ones above, the tilde allows new patch releases only (the last number), while the caret allows new minor and patch releases (the middle and last numbers) but never a new major version.

The above example basically says, use the views module at version 3.14, and when version 3.15 comes out, update me to that version when I run composer update.

Breaking changes should only be introduced when you update the first number, the major release. Of course, if you completely trust the developer writing the contributed code, this system would be enough; but not all developers follow best practice, which is why the lock file exists and why you have to explicitly run composer update.

With this system in place, a release manager now only needs to worry about running one command to get the latest stable release of all dependencies. This command could also be hidden behind a nice UI (a CI Server) so all she has to do is push one button to grab all the latest dependencies and push to a testing site for verification.
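
In practice, the split of responsibilities looks roughly like this (a sketch, not tied to any particular project):

# Developer or release manager: pull in the newest releases allowed by the
# constraints in composer.json and record them in composer.lock.
composer update

# Everyone else (CI, deployments): reproduce exactly what was locked and tested.
composer install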

Understanding everyone's needs

In the past, I didn't have a good reason to move away from Drush Make, because it did the job, and Drush is so much more than Drush Make. The strategy session we had was eye-opening. Understanding the needs from an operations perspective, while not jeopardizing the integrity of the application, led us down a path to a problem that the wider development community (not just the PHP community) has already solved. It's very rewarding to solve problems like this, especially when you come to the conclusion that someone has already solved the problem! "We just had to find the path to the water! (--A.W.)"

What do you think about using Drush Make vs Composer for pulling together a Drupal Application? Leave us your thoughts in the comments.

Jul 14 2016
Jul 14

Back in December, Tom Friedhof shared how we set up our Drupal 8 development and build process utilizing Docker. It has been working well in the several months we have used it and worked within its framework. In that time span, however, we experienced a few issues here and there, which led me to come up with an alternative process that keeps the good things we like while resolving the issues we encountered.

First, I'll list some improvements that we'd like to see:

  1. Solve file-syncing issues

    One issue that I keep running into when working with our development process is that the file-syncing stops working when the host machine powers off in the interim. Even though Vagrant's rsync-auto can still detect changes on the host file-system and initiate an rsync to propel files up into the containers via a mounted volume, the changes do not really appear within the containers themselves. I had a tough time debugging this issue, and the only resolution in sight was to do a vagrant reload -- a time-consuming process, as it rebuilds every image and runs them again. Having to do this every morning when I turn on my laptop at work was no fun.

  2. Performant access to Drupal's root

    Previously, we had to mount Drupal's document root to our host machine using sshfs to explore it, but that is not exactly performant. For example, performing a grep or ag search within file contents under Drupal 8's core takes ~10 seconds or more. Colleagues using PhpStorm report that mounting the Drupal root onto the host system brings the IDE to a crawl while it indexes the files.

  3. Levarage Docker Compose

    Docker Compose is a great tool for managing the life-cycle of Docker containers, especially if you are running multiple applications. I felt that it comes with useful features that we were missing out on because we were just using Vagrant's built-in Docker provider. Also, with the expectation that the Docker for Mac beta will become stable in the not-so-distant future, I'd like the switch to a native Docker development environment to be as smooth as possible. For me, introducing Docker Compose into the equation is the logical first step.

    dlite came to my attention quite recently; it could fulfill the role of Docker for Mac before the latter's stable release, but I haven't gotten the chance to try it yet.

  4. Use Composer as the first-class package manager

    Our previous build primarily uses Drush to build the Drupal 8 site and download dependencies, relegating the resolution of some Composer dependencies to Composer Manager. Drush worked really well for us in the past and there is no pressing reason why we should abandon it, but considering that Composer Manager is deprecated for Drupal 8.x and that there is already a Composer project for Drupal sites, I thought it would be a good idea to be more proactive, rethink the way we have been doing Drupal builds, and adopt the de-facto way of putting together a PHP application. At the moment, Composer is where it's at.

  5. Faster and more efficient builds

    Our previous build utilizes a Jenkins server (also run as a container) to perform the necessary steps to deploy changes to Pantheon. Since we were mostly deploying from our local machines anyway, I always thought that perhaps running the build steps via docker run ... would probably suffice (and it doesn't incur the overhead of a running Jenkins instance). Ultimately, we decided to explore Platform.sh as our deployment target, so basing our build on Composer became almost imperative, as Drupal 8 support (via Drush) on Platform.sh is still in beta.

With these in mind, I'd like to share our new development environment & build process.

1. File & directory structure

Here is a high-level tree-view of the file structure of the project:

/<project_root>
├── Vagrantfile
├── Makefile
├── .platform/ 
│   └── routes.yaml
├── bin/ 
│   ├── drupal*
│   ├── drush*
│   └── sync-host*
├── docker-compose.yml 
├── environment 
├── src/ 
│   ├── .gitignore
│   ├── .platform.app.yaml 
│   ├── Dockerfile
│   ├── LICENSE
│   ├── bin/ 
│   │   ├── drupal-portal*
│   │   └── drush-portal*
│   ├── composer.json
│   ├── composer.lock
│   ├── custom/
│   ├── phpunit.xml.dist
│   ├── scripts/
│   ├── vendor/
│   └── web/ 
└── zsh/ 
    ├── zshrc
    ├── async.zsh
    └── pure.zsh

2. The Vagrantfile

Vagrant.configure("2") do |config|

  config.vm.box = "debian/jessie64"
  config.vm.network "private_network", ip: "192.168.100.47"

  config.vm.hostname = 'activelamp.dev'

  config.vm.provider :virtualbox do |vb|
    vb.name = "activelamp.com"
    vb.memory = 2048
  end

  config.ssh.forward_agent = true

  config.vm.provision "shell",
    inline: "apt-get install -y zsh && sudo chsh -s /usr/bin/zsh vagrant",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /home/vagrant/.zshrc ] && echo '' || ln -s /vagrant/zsh/zshrc /home/vagrant/.zshrc",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /usr/local/share/zsh/site-functions/prompt_pure_setup ] && echo '' || ln -s /vagrant/zsh/pure.zsh /usr/local/share/zsh/site-functions/prompt_pure_setup",
    run: "once"

  config.vm.provision "shell",
    inline: "[ -e /usr/local/share/zsh/site-functions/async ] && echo '' || ln -s /vagrant/zsh/async.zsh /usr/local/share/zsh/site-functions/async",
    run: "once"

  if ENV['GITHUB_OAUTH_TOKEN']
    config.vm.provision "shell",
      inline: "sudo sed -i '/^GITHUB_OAUTH_TOKEN=/d' /etc/environment  && sudo bash -c 'echo GITHUB_OAUTH_TOKEN=#{ENV['GITHUB_OAUTH_TOKEN']} >> /etc/environment'"
  end

  
  config.vm.provision :docker

  config.vm.provision :docker_compose, yml: "/vagrant/docker-compose.yml", run: "always", compose_version: "1.7.1"

  config.vm.synced_folder ".", "/vagrant", type: "nfs"
  config.vm.synced_folder "./src", "/mnt/code", type: "rsync", rsync__exclude: [".git/", "src/vendor"]
end

Compare this new manifest to the old one and you will notice that we have reduced Vagrant's involvement in defining and managing Docker containers. We are simply using this virtual machine as the Docker host, using the vagrant-docker-compose plugin to provision it with the Docker Compose executable, have it (re)build the images during the provisioning stage, and (re)start the containers on vagrant up.

We are also setting up Vagrant to sync file changes on src/ to /mnt/code/ in the VM via rsync. This directory in the VM will be mounted into the container as you'll see later.
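
In practice this just means leaving Vagrant's rsync watcher running in a spare terminal. A quick way to confirm that changes actually land inside the VM (paths as configured above):

# Watch src/ on the host and push changes into the VM as they happen
$ vagrant rsync-auto

# Spot-check the synced copy from inside the VM
$ vagrant ssh -c "ls /mnt/code"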

We are also setting up zsh as the login shell for the vagrant user for an improved experience when operating within the virtual machine.

3. The Drupal 8 Build

For now let's zoom in to where the main action happens: the Drupal 8 installation. Let's remove Docker from our thoughts for now and focus on how the Drupal 8 build works.

The src/ directory contains all files that constitute a Drupal 8 Composer project:

/src/
├── composer.json
├── composer.lock
├── phpunit.xml.dist
├── scripts/
│   └── composer/
├── vendor/ # Composer dependencies
│   └── ...
└── web/ # Web root
    ├── .htaccess
    ├── autoload.php
    ├── core/ # Drupal 8 Core
    ├── drush/
    ├── index.php
    ├── modules/
    ├── profiles/
    ├── robots.txt
    ├── sites/
    │   ├── default/
    │   │   ├── .env
    │   │   ├── config/ # Configuration export files
    │   │   │   ├── system.site.yml
    │   │   │   └── ...
    │   │   ├── default.services.yml
    │   │   ├── default.settings.php
    │   │   ├── files/
    │   │   │   └── ...
    │   │   ├── services.yml
    │   │   ├── settings.local.php.dist
    │   │   ├── settings.php
    │   │   └── settings.platform.php
    │   └── development.services.yml
    ├── themes/
    ├── update.php
    └── web.config

The first step of the build is simply executing composer install within src/. Doing so will download all dependencies defined in composer.lock and scaffold files and folders necessary for the Drupal installation to work. You can head over to the Drupal 8 Composer project repository and look through the code to see in depth how the scaffolding works.
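
If you are starting a project with this layout from scratch, the skeleton itself can be generated from that same Composer project before the first composer install. A minimal sketch -- the src target directory simply matches our layout and is otherwise arbitrary:

# Scaffold a new Drupal 8 Composer project into src/
$ composer create-project drupal-composer/drupal-project:8.x-dev src --stability dev --no-interaction

# On an existing checkout, installing the locked dependencies is all that is needed
$ cd src && composer install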

3.1 Defining Composer dependencies from custom installation profiles & modules

Since we cannot use the Composer Manager module anymore, we need a different way of letting Composer know that we may have other dependencies defined in other areas in the project. For this let's look at composer.json:

{
    ...
    "require": {
        ...
        "wikimedia/composer-merge-plugin": "^1.3",
        "activelamp/sync_uuids": "dev-8.x-1.x"
    },
    "extra": {
        ...
        "merge-plugin": {
          "include": [
            "web/profiles/activelamp_com/composer.json",
            "web/profiles/activelamp_com/modules/custom/*/composer.json"
          ]
        }
    }
}

We are requiring the wikimedia/composer-merge-plugin and configuring it in the extra section to also read the installation profile's composer.json and any composer.json files in custom modules within it.

We can define the contrib modules that we need for our site from within the installation profile.

src/web/profiles/activelamp_com/composer.json:

{
  "name": "activelamp/activelamp-com-profile",
  "require": {
    "drupal/admin_toolbar": "^8.1",
    "drupal/ds": "^8.2",
    "drupal/page_manager": "^[email protected]",
    "drupal/panels": "~8.0",
    "drupal/pathauto": "~8.0",
    "drupal/redirect": "~8.0",
    "drupal/coffee": "~8.0"
  }
}

As we create custom modules for the site, any Composer dependencies in them will be picked up every time we run composer update. This replicates what Composer Manager allowed us to do in Drupal 7. Note however that unlike Composer Manager, Composer does not care whether a module is enabled or not -- it will always read its Composer dependencies and resolve them.
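
For example, if a custom module ships its own composer.json declaring a new requirement (a made-up example here), the merge plugin folds it into the root project the next time dependencies are resolved:

# web/profiles/activelamp_com/modules/custom/<some_module>/composer.json declares
# the new requirement; re-resolving from src/ picks it up via the merge plugin:
$ composer update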

3.2 Drupal configuration

3.2.1 Settings file

Let's peek at what's inside src/web/settings.php:

<?php
$settings['container_yamls'][] = __DIR__ . '/services.yml';

$config_directories[CONFIG_SYNC_DIRECTORY] = __DIR__ . '/config';


include __DIR__ . "/settings.platform.php";

$update_free_access = FALSE;
$drupal_hash_salt = '';

$local_settings = __DIR__ . '/settings.local.php';

if (file_exists($local_settings)) {
  require_once($local_settings);
}

$settings['install_profile'] = 'activelamp_com';
$settings['hash_salt'] = $drupal_hash_salt;

Next, let's look at settings.platform.php:

<?php
if (!getenv('PLATFORM_ENVIRONMENT')) {
    return;
}

$relationships = json_decode(base64_decode(getenv('PLATFORM_RELATIONSHIPS')), true);

$database_creds = $relationships['database'][0];

$databases['default']['default'] = [
    'database' => $database_creds['path'],
    'username' => $database_creds['username'],
    'password' => $database_creds['password'],
    'host' => $database_creds['host'],
    'port' => $database_creds['port'],
    'driver' => 'mysql',
    'prefix' => '',
    'collation' => 'utf8mb4_general_ci',
];

We return early from this file if PLATFORM_ENVIRONMENT is not set. Otherwise, we'll parse the PLATFORM_RELATIONSHIPS data and extract the database credentials from it.
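
When debugging a Platform.sh environment it can be handy to eyeball that payload yourself. A small sketch, assuming you are SSH'd into an environment where the variable is set:

# PLATFORM_RELATIONSHIPS is base64-encoded JSON; decode it to inspect the
# database credentials that settings.platform.php reads (pipe it through a
# JSON pretty-printer if one is available).
$ echo "$PLATFORM_RELATIONSHIPS" | base64 --decode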

For our development environment however, we'll do something different in settings.local.php.dist:

<?php
$databases['default']['default'] = array(
    'database' => getenv('MYSQL_DATABASE'),
    'username' => getenv('MYSQL_USER'),
    'password' => getenv('MYSQL_PASSWORD'),
    'host' => getenv('DRUPAL_MYSQL_HOST'),
    'driver' => 'mysql',
    'port' => 3306,
    'prefix' => '',
);

We are pulling the database values from the environment, as this is how we'll pass data in at Docker run-time. We also append .dist to the file name because we don't actually want settings.local.php in version control (otherwise, it would mess up the configuration in non-development environments). We will simply rename this file as part of the development workflow. More on this later.

3.2.2 Staged configuration

src/web/sites/default/config/ contains YAML files that constitute the desired Drupal 8 configuration. These files will be used to seed a fresh Drupal 8 installation with configuration specific for the site. As we develop features, we will continually export the configuration entities and place them into this folder so that they are also versioned via Git.

Configuration entities in Drupal 8 are assigned a universally unique ID (a.k.a. UUID). Because of this, configuration files are typically only meant to be imported into the same Drupal site (or a clone of the site) they were exported from. The usual approach is to get hold of a database dump of that Drupal site and use it to seed the Drupal 8 installation you plan to import the configuration files into. To streamline the process during development, we wrote the Drush command sync-uuids, which updates the UUIDs of the active configuration entities of a non-clone site (i.e. a freshly installed Drupal instance) to match those found in the staged configuration. We packaged it as a Composer package named activelamp/sync_uuids.

The complete steps for the Drupal 8 build are the following:

$ cd src
$ composer install
$ [ -f web/sites/default/settings.local.php ] && : || cp web/sites/default/settings.local.php.dist web/sites/default/settings.local.php
$ drush site-install activelamp_com --account-pass=default-pass -y
$ drush pm-enable config sync_uuids -y
$ drush sync-uuids -y
$ drush config-import -y

These build steps will result in a fresh Drupal 8 installation based on the activelamp_com installation profile with the proper configuration entities from web/sites/default/config. This will be similar to any site that is built from the same code-base, minus any of the actual content. Sometimes that is all that you need.

Now let's look at the development workflow utilizing Docker. Let's start with the src/Dockerfile:

FROM php:7.0-apache

RUN apt-get update && apt-get install -y \
  vim \
  git \
  unzip \
  wget \
  curl \
  libmcrypt-dev \
  libgd2-dev \
  libgd2-xpm-dev \
  libcurl4-openssl-dev \
  mysql-client

ENV PHP_TIMEZONE America/Los_Angeles


RUN docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
 && docker-php-ext-install -j$(nproc) gd pdo_mysql curl mbstring opcache


RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
RUN echo 'export PATH="$PATH:/root/.composer/vendor/bin"' >> $HOME/.bashrc


RUN composer global require drush/drush:8.1.2 drupal/console:0.11.3
RUN $HOME/.composer/vendor/bin/drupal init
RUN echo source '$HOME/.console/console.rc' >> $HOME/.bashrc


RUN echo "date.timezone = \"$PHP_TIMEZONE\"" > /usr/local/etc/php/conf.d/timezone.ini
ARG github_oauth_token

RUN [ -n "$github_oauth_token" ] && composer config -g github-oauth.github.com "$github_oauth_token" || echo ''

RUN [ -e /etc/apache2/sites-enabled/000-default.conf ] && sed -i -e "s/\/var\/www\/html/\/var\/www\/web/" /etc/apache2/sites-enabled/000-default.conf || sed -i -e "s/\/var\/www\/html/\/var\/www\/web/" /etc/apache2/apache2.conf


COPY bin/drush-portal /usr/bin/drush-portal
COPY bin/drupal-portal /usr/bin/drupal-portal

COPY . /var/www/
WORKDIR /var/www/

RUN composer --working-dir=/var/www install

The majority of the Dockerfile should be self-explanatory. The important bits are the provisioning of a GitHub OAuth token and the addition of the {drupal,drush}-portal executables, which are essential for the bin/{drush,drupal} pass-through scripts.

Provisioning a GitHub OAuth token

Sometimes it is necessary to configure Composer to use an OAuth token to authenticate on GitHub's API when resolving dependencies. These tokens must remain private and should not be committed into version control. We declare that our Docker build will take github_oauth_token as a build argument. If present, it will configure Composer to authenticate using it to get around API rate limits. More on this later.
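
Outside of Vagrant, the same build argument can be passed to Docker directly. This is only a sketch of how the argument flows into the image; the image tag is arbitrary:

# The Dockerfile's ARG picks up the build argument and configures Composer
# inside the image during the build.
$ docker build --build-arg github_oauth_token="$GITHUB_OAUTH_TOKEN" -t activelamp/server ./src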

DrupalConsole and Drush pass-through scripts

Our previous build involved opening up an SSH port on the container running Drupal so that we could execute Drush commands remotely. However, we can already run Drush commands inside the container without SSH access by utilizing docker run. The trouble is that the commands get too lengthy -- extra lengthy, in fact, because we also need to execute them from within the Vagrant machine using vagrant ssh.

Here are a couple of scripts that make it easier to execute drush and drupal commands from the host machine. These are the contents of bin/drupal and bin/drush:

#!/usr/bin/env bash
cmd="docker-compose -f /vagrant/docker-compose.yml  run --no-deps --rm server drupal-portal $@"
vagrant ssh -c "$cmd"
#!/usr/bin/env bash
cmd="docker-compose -f /vagrant/docker-compose.yml  run --no-deps --rm server drush-portal $@"
vagrant ssh -c "$cmd"

This allows us to run bin/drush ... for Drush commands and bin/drupal ... for DrupalConsole commands; the arguments are passed over to the executables in the container.
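
Day-to-day usage then looks like any other Drush or DrupalConsole invocation, just prefixed with bin/. For example:

# Rebuild caches via Drush inside the container
$ bin/drush cr

# Scaffold a module with DrupalConsole inside the container
$ bin/drupal generate:module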

Here are the contents of src/bin/drupal-portal and src/bin/drush-portal:

#!/usr/bin/env bash
/root/.composer/vendor/bin/drupal --root=/var/www/web $@
#!/usr/bin/env bash
/root/.composer/vendor/bin/drush --root=/var/www/web $@

The above scripts are added to the container and are essential to making sure drush and drupal commands are applied to the correct directory.

In order for this to work, we actually have to remove Drush and DrupalConsole from the project's composer.json file. This is easily done via the composer remove command.
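
Removing them is a one-liner from within src/, using the package names as published on Packagist:

# Drop the project-local copies so only the globally installed ones in the
# container are used.
$ composer remove drush/drush drupal/console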

The docker-compose.yml file

To tie everything together, we have this Compose file:

version: '2'
services:
  server:
    build:
      context: ./src
      args:
        github_oauth_token: ${GITHUB_OAUTH_TOKEN}
    volumes:
      - /mnt/code:/var/www
      - composer-cache:/root/.composer/cache
    env_file: environment
    links:
      - mysql:mysql
    ports:
      - 80:80
  mysql:
    image: 'mysql:5.7.9'
    env_file: environment
    volumes:
      - database:/var/lib/mysql

volumes:
  database: {}
  composer-cache: {}

There are four things of note:

  1. github_oauth_token: ${GITHUB_OAUTH_TOKEN}

    This tells Docker Compose to use the environment variable GITHUB_OAUTH_TOKEN as the github_oauth_token build argument. This, if not empty, will effectively provision Composer with an OAuth token. If you go back to the Vagrantfile, you will see that this environment variable is set in the virtual machine (because docker-compose is run inside it) by appending it to the /etc/environment file. All it needs is for the environment variable to be present in the host environment (OS X) during the provisioning step.

    For example, it can be provisioned via: GITHUB_OAUTH_TOKEN=<your-token> vagrant provision

  2. composer-cache:/root/.composer/cache

    This tells Docker to mount a volume at /root/.composer/cache so that we can persist the contents of this directory between restarts. This ensures that composer install and composer update stay fast and do not re-download packages from the web every time we run them. This drastically improves build speeds.

  3. database:/var/lib/mysql

    This will tell Docker to persist the MySQL data between builds as well. This is so that we don't end up with an empty database whenever we restart the containers.

  4. env_file: environment

    This lets us define all environment variables in a single file, for example:

    MYSQL_USER=activelamp
    MYSQL_ROOT_PASSWORD=root
    MYSQL_PASSWORD=some-secret-passphrase
    MYSQL_DATABASE=activelamp
    DRUPAL_MYSQL_HOST=mysql

    We just configure each service to read environment variables from the same file as they both need these values.

We employ rsync to sync files from the host machine to the VM since it offers by far the fastest file I/O compared to the built-in alternatives in Vagrant + VirtualBox. In the Vagrantfile we specified that we sync src/ to /mnt/code/ in the VM. Following this we configured Docker Compose to mount this directory into the server container. This means that any file changes we make on OS X will get synced up to /mnt/code, and ultimately into /var/www/web in the container. However, this only covers changes that originate from the host machine.

To sync changes that originate from the container -- files that were scaffolded by drupal generate:*, Composer dependencies, and Drupal 8 core itself -- we'll use the fact that our project root is also available at /vagrant as a mount in the VM. We can use rsync to sync files the other way -- rsyncing from /mnt/code to /vagrant/src will bring file changes back up to the host machine.

Here is a script I wrote that does an rsync but will ask for confirmation before doing so to avoid overwriting potentially uncommitted work:

#!/usr/bin/env bash

echo "Dry-run..."

args=$@

diffs="$(vagrant ssh -- rsync --dry-run --itemize-changes $args | grep '^[><][dfLDS]\|^\*deleted')"

if [ -z "$diffs" ]; then
  echo "Nothing to sync."
  exit 0
fi

echo "These are the differences detected during dry-run. You might lose work.  Please review before proceeding:"
echo "$diffs"
echo ""
read -p "Confirm? (y/N): " choice

case "$choice" in
  y|Y ) vagrant ssh -- rsync $args;;
  * ) echo "Cancelled.";;
esac

We are keeping this generic and not baking in the paths because we might want to sync arbitrary files to arbitrary destinations.

We can use this script like so:

$ bin/sync-host --recursive --progress --verbose --exclude=".git/" --delete-after /mnt/code/ /vagrant/src/

If the rsync will result in file changes on the host machine, it will bring up a summary of the changes and will ask if you want to proceed or not.

Makefile

We are using make as our task-runner just like in the previous build. This is really useful for encapsulating operations that are common in our workflow:


sync-host:
	bin/sync-host --recursive --progress --verbose --delete-after --exclude='.git/' /mnt/code/ /vagrant/src/

sync:
	vagrant rsync-auto

sync-once:
	vagrant rsync

docker-rebuild:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml build

docker-restart:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml up -d

composer-install:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server composer --working-dir=/var/www install

composer-update:
	vagrant ssh -- docker-compose -f /vagrant/docker-compose.yml run --no-deps --rm server composer --working-dir=/var/www update --no-interaction



lock-file:
	@vagrant ssh -- cat /mnt/code/composer.lock

install-drupal: composer-install
	vagrant ssh -- '[ -f /mnt/code/web/sites/default/settings.local.php ] && echo '' || cp /mnt/code/web/sites/default/settings.local.php.dist /mnt/code/web/sites/default/settings.local.php'
	-bin/drush si activelamp_com --account-pass=secret -y
	-bin/drush en config sync_uuids -y
	bin/drush sync-uuids -y
	[ $(ls -l src/web/sites/default/config/*.yml | wc -l) -gt 0  ] && bin/drush cim -y || echo "Config is empty. Skipping import..."

init: install-drupal
	yes | bin/sync-host --recursive --progress --verbose --delete-after --exclude='.git/' /mnt/code/ /vagrant/src/

platform-ssh:
	ssh <project-id>@ssh.us.platform.sh

The Drupal 8 build steps are simply translated to use bin/drush and the actual paths within the virtual machine in the install-drupal task. After cloning the repository for the first time, a developer should just be able to execute make init, sit back with a cup of coffee and wait until the task is complete.

Try it out yourself!

I wrote the docker-drupal-8 Yeoman generator so that you can easily give this a spin. Feel free to use it to look around and see it in action, or even to start off your Drupal 8 sites in the future:

$ npm install -g yo generator-docker-drupal-8
$ mkdir myd8
$ cd myd8
$ yo docker-drupal-8

Just follow the instructions, and once complete, run vagrant up && make docker-restart && make init to get it up and running.

If you have any questions, suggestions, anything, feel free to drop a comment below!

Jan 27 2016
Jan 27

Tom Friedhof

Senior Software Engineer

Tom has been designing and developing for the web since 2002 and got involved with Drupal in 2006. Previously he worked as a systems administrator for a large mortgage bank, managing servers and workstations, which is where he discovered his passion for automation and scripting. On his free time he enjoys camping with his wife and three kids.

Jan 20 2016
Jan 20

This post is part 4 in the series "Hashing out a docker workflow". For background, checkout my previous posts.

My previous posts talked about getting your local environment set up using the Drupal Docker image with Vagrant. It's now time to bake a Docker image with our custom application code within the container, so that we can deploy containers implementing the immutable server pattern. One of the main reasons we started venturing down the Docker path was to achieve deployable, fully baked containers that are ready to run in whatever environment you put them in, similar to what we've done in the past with Packer, as I've mentioned in a previous post.

Review

The instructions in this post assume you followed my previous post to get a Drupal environment set up with the custom "myprofile" profile. In that post we brought up a Drupal environment by just referencing the already built Drupal image on DockerHub. We are going to use that same Docker image and add our custom application to it.

All the code that I'm going to show below can be found in this repo on Github.

Putting the custom code into the container

We need to create our own image by creating a Dockerfile in our project that extends the Drupal image we are pulling down.

Create a file called Dockerfile in the root of your project that looks like the following:

FROM drupal:7.41

ADD drupal/profiles/myprofile /var/www/html/profiles/myprofile

We are basically using everything from the Drupal image, and adding our installation profile to the profiles directory of the document root.

This is a very simplistic approach; typically there are more steps than just copying files over. In more complex scenarios, you will likely run some sort of build within the Dockerfile as well, such as Gulp, Composer, or Drush Make.
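
Building and running the image locally is enough to verify that the profile lands where Drupal expects it. A quick sketch -- the image tag here is arbitrary:

# Build the image from the project root, then list the profile baked into it
$ docker build -t myorg/drupal-myprofile .
$ docker run --rm myorg/drupal-myprofile ls /var/www/html/profiles/myprofile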

Setting up Jenkins

We now need to set up a Jenkins server that will check out our code and run docker build and docker push. Let's set up a local Jenkins container on our Docker host to do this.

Open up the main Vagrantfile in the project root and add another container to the file like the following:





Vagrant.configure(2) do |config|

  config.vm.define "jenkins" do |v|
    v.vm.provider "docker" do |d|
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "./Dockerfiles/jenkins"
      d.create_args = ['--privileged']
      d.remains_running = true
      d.ports = ["8080:8080"]
      d.name = "jenkins-container"
    end
  end

  config.vm.define "drupal" do |v|
    config.vm.provider "docker" do |docker|
      docker.vagrant_vagrantfile = "host/Vagrantfile"
      docker.image = "drupal"
      docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
      docker.ports = ['80:80']
      docker.name = 'drupal-container'
    end
  end
end

Two things to notice in the jenkins container definition: 1) the Dockerfile for this container is in the Dockerfiles/jenkins directory, and 2) we are passing the --privileged argument when the container is run so that our container has all the capabilities of the docker host. We need special access to be able to run Docker within Docker.

Let's create the Dockerfile:

$ mkdir -p Dockerfiles/jenkins
$ cd !$
$ touch Dockerfile

Now open up that Dockerfile and install Docker onto this Jenkins container:

FROM jenkins:1.625.2

USER root


RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D


RUN echo "deb http://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list

VOLUME /var/lib/docker

RUN apt-get update && \
  apt-get -y install \
    docker-engine

ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh

ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/dockerjenkins.sh" ]

We are using a little script found in The Docker Book as our entry point to start the Docker daemon as well as Jenkins. It also does some work on the filesystem to ensure cgroups are mounted correctly. If you want to read more about running Docker in Docker, go check out this article.
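
Once the container is up, a quick way to confirm that the inner Docker daemon actually started is to exec into the Jenkins container and ask it directly. A sketch, run from within the docker host VM (cd host && vagrant ssh) and using the container name defined above:

# The inner daemon should answer with its version and system info
$ sudo docker exec -it jenkins-container docker info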

Boot up the new container

Before we boot this container up, edit your host Vagrantfile and set up the port forward so that port 8080 on the host points to 8080 on the guest:





Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
config.vm.network :forwarded_port, guest: 8080, host: 8080
config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end

Now bring up the new container:

$ vagrant up jenkins

or if you've already brought it up once before, you may just need to run reload:

$ vagrant reload jenkins

You should now be able to hit Jenkins at the URL http://localhost:8080

Jenkins Dashboard

Install the git plugins for Jenkins

Now that you have Jenkins up and running, we need to install the git plugins. Click on the "Manage Jenkins" link in the left navigation, then click "Manage Plugins" in the list given to you, and then click on the "Available" Tab. Filter the list with the phrase "git client" in the filter box. Check the two boxes to install plugins, then hit "Download now and install after restart".

Jenkins Plugin Install

On the following screen, check the box to Restart Jenkins when installation is complete.

Setup the Jenkins job

It's time to set up the Jenkins job. If you've never set up a Jenkins job, here is a quick crash course.

  1. Click the New Item link in the left navigation. Name your build job, and choose Freestyle project. Click Ok. New Build Job
  2. Configure the git repo. We are going to configure Jenkins to pull code directly from your repository and build the Docker image from that. Add the git Repo
  3. Add the build steps. Scroll down toward the bottom of the screen and click the arrow next to Add build step and choose Execute Shell. We are going to add three build steps as shown below (a rough sketch of these shell steps also follows this list). First we build the Docker image with docker build -t="tomfriedhof/docker_blog_post" . (notice the trailing dot) and give it a name with the -t parameter, then we log in to DockerHub, and finally push the newly created image to DockerHub. Jenkins Build Steps
  4. Hit Save, then on the next screen hit the button that says Build Now
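
For reference, the three Execute Shell build steps look roughly like the following sketch. The DockerHub credential variables are placeholders, not part of the original job:

# Step 1: build the image from the checked-out workspace (note the trailing dot)
docker build -t="tomfriedhof/docker_blog_post" .

# Step 2: log in to DockerHub -- supply your own credentials here
docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_PASS"

# Step 3: push the newly built image to DockerHub
docker push tomfriedhof/docker_blog_post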

If everything went as planned, you should have a new Docker image posted on DockerHub: https://hub.docker.com/r/tomfriedhof/dockerblogpost/

Wrapping it up

There you have it: we now have an automated build that will automatically create and push Docker images to DockerHub. You can add on to this Jenkins job so that it polls your GitHub repository and automatically runs this build anytime something changes in the tracking repo.

As another option, if you don't want to go through all the trouble of setting up your own Jenkins server just to do what I just showed you, DockerHub can do this for you. Go check out their article on how to set up automated builds with Docker.

Now that we have a baked container with our application code within it, the next step is to deploy the container. That is the next post in this series. Stay tuned!

Dec 15 2015
Dec 15

Tom Friedhof

Senior Software Engineer

Tom has been designing and developing for the web since 2002 and got involved with Drupal in 2006. Previously he worked as a systems administrator for a large mortgage bank, managing servers and workstations, which is where he discovered his passion for automation and scripting. On his free time he enjoys camping with his wife and three kids.

Dec 02 2015
Dec 02

Now that the release of Drupal 8 is finally here, it is time to adapt our Drupal 7 build process to Drupal 8, while utilizing Docker. This post will take you through how we construct sites on Drupal 8 using dependency managers on top of Docker with Vagrant.

Keep a clean upstream repo

Over the past 3 or 4 years, developing websites has changed dramatically with the increasing popularity of dependency managers such as Composer, Bundler, npm, Bower, etc., amongst other tools. Drupal even has its own system that can handle dependencies, called Drush, albeit it is more than just a dependency manager for Drupal.

With all of these tools at our disposal, it is very easy to include code from other projects in our application while not storing any of that code in the application code repository. This concept dramatically changes how you would typically maintain a Drupal site, since the typical way to manage a Drupal codebase is to have the entire Drupal docroot, including all dependencies, in the application code repository. Having everything in the docroot is fine, but you gain so much more power using dependency managers. You also lighten up the actual application codebase when you utilize dependency managers, because your repo only contains code that you wrote. There are tons of advantages to building applications this way, but I have digressed; this post is about how we utilize these tools to build Drupal sites, not an exhaustive list of why this is a good idea. Leave a comment if you want to discuss the advantages / disadvantages of this approach.

Application Code Repository

We've got a lot going on in this repository. We won't dive too deep into the weeds looking at every single file, but I will give a high level overview of how things are put together.

Installation Automation (begin with the end in mind)

The simplicity in this process is that when a new developer needs to get a local development environment for this project, they only have to execute two commands:

$ vagrant up --no-parallel
$ make install

Within minutes a new development environment is constructed with VirtualBox and Docker on the developer's machine, so that they can immediately start contributing to the project. The first command boots up 3 Docker containers -- a webserver, a MySQL server, and a Jenkins server. The second command invokes Drush to build the document root within the webserver container and then installs Drupal.
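
Once vagrant up finishes, it is easy to sanity-check that all three containers made it up before running the install. A small sketch, assuming the docker host definition lives under host/ as shown further down:

# Ask Vagrant about the state of the three containers from the project root
$ vagrant status

# Or list the running containers from the docker host itself
$ cd host && vagrant ssh -c "sudo docker ps"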

We also keep one more command running in a separate terminal window, to keep files synced from our host machine to the Drupal 8 container.

$ vagrant rsync-auto drupal8

Breaking down the two installation commands

vagrant up --no-parallel

If you've read any of my previous posts, I'm a fan of using Vagrant with Docker. I won't go into detail about how the environment is getting set up. You can read my previous posts on how we used Docker with Vagrant. For completeness, here is the Vagrantfile and Dockerfile that vagrant up reads to setup the environment.

Vagrantfile





require 'fileutils'

MYSQL_ROOT_PASSWORD="root"

unless File.exists?("keys")
Dir.mkdir("keys")
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
File.open("keys/id_rsa.pub", 'w') { |file| file.write(ssh_pub_key) }
end

unless File.exists?("Dockerfiles/jenkins/keys")
Dir.mkdir("Dockerfiles/jenkins/keys")
FileUtils.copy("#{Dir.home}/.ssh/id_rsa", "Dockerfiles/jenkins/keys/id_rsa")
end

Vagrant.configure("2") do |config|

config.vm.define "mysql" do |v|

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.image = "mysql:5.7.9"
      d.env = { :MYSQL_ROOT_PASSWORD => MYSQL_ROOT_PASSWORD }
      d.name = "mysql-container"
      d.remains_running = true
      d.ports = [
        "3306:3306"
      ]
    end

end

config.vm.define "jenkins" do |v|

    v.vm.synced_folder ".", "/srv", type: "rsync",
        rsync__exclude: get_ignored_files(),
        rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"]

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "./Dockerfiles/jenkins"
      d.name = "jenkins-container"
      
      d.volumes = [
          "/home/rancher/.composer:/root/.composer",
          "/home/rancher/.drush:/root/.drush"
      ]
      d.remains_running = true
      d.ports = [
          "8080:8080"
      ]
    end

end

config.vm.define "drupal8" do |v|

    v.vm.synced_folder ".", "/srv/app", type: "rsync",
      rsync__exclude: get_ignored_files(),
      rsync__args: ["--verbose", "--archive", "--delete", "--copy-links"],
      rsync__chown: false

    v.vm.provider "docker" do |d|
      d.vagrant_machine = "apa-dockerhost"
      d.vagrant_vagrantfile = "./host/Vagrantfile"
      d.build_dir = "."
      d.name = "drupal8-container"
      d.remains_running = true
      
      d.volumes = [
        "/home/rancher/.composer:/root/.composer",
        "/home/rancher/.drush:/root/.drush"
      ]
      d.ports = [
        "80:80",
        "2222:22"
      ]
      d.link("mysql-container:mysql")
    end

end

end

def get_ignored_files()
  ignore_file = ".rsyncignore"
  ignore_array = []

  if File.exists? ignore_file and File.readable? ignore_file
    File.read(ignore_file).each_line do |line|
      ignore_array << line.chomp
    end
  end

  ignore_array
end

One of the cool things we are doing in this Vagrantfile is setting up a VOLUME for the Composer and Drush caches so they persist beyond the life of the container. When our application container is rebuilt, we don't want to download 100MB of Composer dependencies every time. By utilizing a Docker VOLUME, that folder is mounted to the actual Docker host.

Dockerfile (drupal8-container)

FROM ubuntu:trusty


ENV PROJECT_ROOT /srv/app
ENV DOCUMENT_ROOT /var/www/html
ENV DRUPAL_PROFILE=apa_profile


RUN apt-get update
RUN apt-get install -y \
	vim \
	git \
	apache2 \
	php-apc \
	php5-fpm \
	php5-cli \
	php5-mysql \
	php5-gd \
	php5-curl \
	libapache2-mod-php5 \
	curl \
	mysql-client \
	openssh-server \
	phpmyadmin \
	wget \
	unzip \
	supervisor
RUN apt-get clean


RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer


RUN mkdir /root/.ssh && chmod 700 /root/.ssh && touch /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN mkdir /var/run/sshd && chmod 0755 /var/run/sshd
RUN mkdir -p /root/.ssh
COPY keys/id_rsa.pub /root/.ssh/authorized_keys
RUN chmod 600 /root/.ssh/authorized_keys
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd


RUN composer global require drush/drush:8.0.0-rc3
RUN ln -nsf /root/.composer/vendor/bin/drush /usr/local/bin/drush


RUN mv /root/.composer /tmp/


RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/apache2/php.ini
RUN sed -i 's/display_errors = Off/display_errors = On/' /etc/php5/cli/php.ini


RUN sed -i 's/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf
RUN a2enmod rewrite


RUN echo '[program:apache2]\ncommand=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"\nautorestart=true\n\n' >> /etc/supervisor/supervisord.conf
RUN echo '[program:sshd]\ncommand=/usr/sbin/sshd -D\n\n' >> /etc/supervisor/supervisord.conf





WORKDIR $PROJECT_ROOT
EXPOSE 80 22
CMD exec supervisord -n

We have xdebug commented out in the Dockerfile, but it can easily be uncommented if you need to step through code. Simply uncomment the two RUN commands and run vagrant reload drupal8

make install

We utilize a Makefile in all of our projects whether it be Drupal, nodejs, or Laravel. This is so that we have a similar way to install applications, regardless of the underlying technology that is being executed. In this case make install is executing a drush command. Below is the contents of our Makefile for this project:

all: init install

init:
	vagrant up --no-parallel

install:
	bin/drush @dev install.sh

rebuild:
	bin/drush @dev rebuild.sh

clean:
	vagrant destroy drupal8
	vagrant destroy mysql

mnt:
	sshfs -C -p 2222 root@<docker-host-ip>:/var/www/html docroot
What this command does is ssh into the drupal8-container, utilizing Drush aliases and Drush shell aliases.

install.sh

The make install command executes a file, within the drupal8-container, that looks like this:

#!/usr/bin/env bash

echo "Moving the contents of composer cache into place..."
mv /tmp/.composer/* /root/.composer/

PROJECT_ROOT=$PROJECT_ROOT DOCUMENT_ROOT=$DOCUMENT_ROOT $PROJECT_ROOT/bin/rebuild.sh

echo "Installing Drupal..."
cd $DOCUMENT_ROOT && drush si $DRUPAL_PROFILE --account-pass=admin -y
chgrp -R www-data sites/default/files
rm -rf ~/.drush/files && cp -R sites/default/files ~/.drush/

echo "Importing config from sync directory"
drush cim -y

You can see on line 6 of the install.sh file that it executes a rebuild.sh file to actually build the Drupal document root utilizing Drush Make. The reason for separating the build from the install is so that you can run make rebuild without completely reinstalling the Drupal database. After the document root is built, the drush site-install apa_profile command is run to actually install the site. Notice that we are utilizing installation profiles for Drupal.

We utilize installation profiles so that we can define modules available for the site, as well as specify default configuration to be installed with the site.

We work hard to achieve the ability to have Drupal install with all the necessary configuration in place out of the gate. We don't want to be passing around a database to get up and running with a new site.

We utilize the Devel Generate module to create the initial content for sites while developing.

rebuild.sh

The rebuild.sh file is responsible for building the Drupal docroot:

#!/usr/bin/env bash

if [ -d "$DOCUMENT_ROOT/sites/default/files" ]
then
  echo "Moving files to ~/.drush/..."
  mv $DOCUMENT_ROOT/sites/default/files /root/.drush/
fi

echo "Deleting Drupal and rebuilding..."
rm -rf $DOCUMENT_ROOT

echo "Downloading contributed modules..."
drush make -y $PROJECT_ROOT/drupal/make/dev.make $DOCUMENT_ROOT

echo "Symlink profile..."
ln -nsf $PROJECT_ROOT/drupal/profiles/apa_profile $DOCUMENT_ROOT/profiles/apa_profile

echo "Downloading Composer Dependencies..."
cd $DOCUMENT_ROOT && php $DOCUMENT_ROOT/modules/contrib/composer_manager/scripts/init.php && composer drupal-update

echo "Moving settings.php file to $DOCUMENT_ROOT/sites/default/..."
rm -f $DOCUMENT_ROOT/sites/default/settings*
cp $PROJECT_ROOT/drupal/config/settings.php $DOCUMENT_ROOT/sites/default/
cp $PROJECT_ROOT/drupal/config/settings.local.php $DOCUMENT_ROOT/sites/default/
ln -nsf $PROJECT_ROOT/drupal/config/sync $DOCUMENT_ROOT/sites/default/config
chown -R www-data $PROJECT_ROOT/drupal/config/sync

if [ -d "/root/.drush/files" ]
then
  cp -Rf /root/.drush/files $DOCUMENT_ROOT/sites/default/
  chmod -R g+w $DOCUMENT_ROOT/sites/default/files
  chgrp -R www-data sites/default/files
fi

This file essentially downloads Drupal using the dev.make drush make file. It then runs composer drupal-update to download any composer dependencies in any of the modules. We use the composer manager module to help with composer dependencies within the Drupal application.

Running drush make with dev.make pulls in two other Drush Make files, apa-cms.make (the application make file) and drupal-core.make. Only dev dependencies should go in dev.make. Application dependencies go into apa-cms.make. Any core patches that need to be applied go into drupal-core.make.

Our Jenkins server builds the prod.make file instead of dev.make. Any production-specific modules would go in the prod.make file.

Our make files for this project look like this so far:

dev.make

core: "8.x"

api: 2

defaults:
  projects:
    subdir: "contrib"

includes:
  - "apa-cms.make"

projects:
  devel:
    version: "1.x-dev"

apa-cms.make

core: "8.x"

api: 2

defaults:
  projects:
    subdir: "contrib"

includes:
  - drupal-core.make

projects:
  address:
    version: "1.0-beta2"

  composer_manager:
    version: "1.0-rc1"

  config_update:
    version: "1.x-dev"

  ctools:
    version: "3.0-alpha17"

  draggableviews:
    version: "1.x-dev"

  ds:
    version: "2.0"

  features:
    version: "3.0-alpha4"

  field_collection:
    version: "1.x-dev"

  field_group:
    version: "1.0-rc3"

  juicebox:
    version: "2.0-beta1"

  layout_plugin:
    version: "1.0-alpha19"

  libraries:
    version: "3.x-dev"

  menu_link_attributes:
    version: "1.0-beta1"

  page_manager:
    version: "1.0-alpha19"

  pathauto:
    type: "module"
    download:
      branch: "8.x-1.x"
      type: "git"
      url: "http://github.com/md-systems/pathauto.git"

  panels:
    version: "3.0-alpha19"

  token:
    version: "1.x-dev"

  zurb_foundation:
    version: "5.0-beta1"
    type: "theme"

libraries:
  juicebox:
    download:
      type: "file"
      url: "https://www.dropbox.com/s/hrthl8t1r9cei5k/juicebox.zip?dl=1"

(once this project goes live, we will pin the version numbers)

drupal-core.make

core: "8.x"

api: 2

projects:
  drupal:
    version: 8.0.0
    patch:
      - https://www.drupal.org/files/issues/2611758-2.patch

prod.make

core: "8.x"

api: 2

includes:
  - "apa-cms.make"

projects:
  apa_profile:
    type: "profile"
    subdir: "."
    download:
      type: "copy"
      url: "file://./drupal/profiles/apa_profile"

At the root of our project we also have a Gemfile, specifically to install the compass compiler along with various sass libraries. We install these tools on the host machine, and "watch" those directories from the host. vagrant rsync-auto watches any changed files and rsyncs them to the drupal8-container.

bundler

From the project root, installing these dependencies and running a compass watch is simple:

$ bundle
$ bundle exec compass watch path/to/theme

bower

We pull in any 3rd party front-end libraries such as Foundation, Font Awesome, etc... using Bower. From within the theme directory:

$ bower install

There are a few things we do not commit to the application repo, as a result of the above commands.

  • The CSS directory
  • Bower Components directory

Deploy process

As I stated earlier, we utilize Jenkins CI to build an artifact that we can deploy. Within the Jenkins job that handles deploys, each of the above steps is executed to create a document root that can be deployed. Projects that we build to work on Acquia or Pantheon actually have a build step to also push the updated artifact to their respective repositories at the host, to take advantage of the automation that Pantheon and Acquia provide.

Conclusion

Although this wasn't an exhaustive walk-through of how we structure and build sites using Drupal, it should give you a general idea of how we do it. If you have specific questions as to why we go through this entire build process just to set up Drupal, please leave a comment. I would love to continue the conversation.

Look out for a video on this topic in the coming weeks. I covered a lot in this post without going into much detail; the intent was to give a 10,000-foot view of the process. The upcoming video on this process will get much closer to the tarmac!

As an aside, one caveat that we did run into with setting up default configuration in our installation profile was with configuration management UUIDs. You can only sync configuration between sites that are clones. We have overcome this limitation with a workaround in our installation profile. I'll leave that topic for my next blog post in a few weeks.

Oct 17 2015
Oct 17

A little over a year ago the ActiveLAMP website underwent a major change -- we made the huge decision of moving away from using Drupal to manage its content in favor of building it as a static HTML site using Jekyll, hosted on Amazon S3. Not only did this dramatically simplify our development stack, it also trimmed down our server requirements to the bare minimum. Now, we are just hosting everything on a file storage server like it's 1993.

A few months ago we identified the need to restructure our URL schemes as part of an ongoing SEO campaign. As easy as that sounds, it necessitated the implementation of 301 redirects from the older URL scheme to the newer, more SEO-friendly versions.

I'm gonna detail how I managed to (1) implement these redirects quite easily using an nginx service acting as a proxy, and (2) achieve parity between our local and production environments while keeping everything light-weight with the help of Docker.

Nginx vs Amazon S3 Redirects

S3 is a file storage service offered by AWS that not only allows you to store files but also allows you to host static websites in conjunction with Route 53. Although S3 gives you the ability to specify redirects, you'll need to use S3-specific configuration and routines. This alone wouldn't be ideal because not only would it tie us to S3 by the hips, but it is not a methodology that we could apply to any other environment (i.e. testing and dev environments on our local network and machines). For these reasons, I opted to use nginx as a very thin reverse proxy to accomplish the job.

Configuring Nginx

Rather than compiling the list of redirects manually, I wrote a tiny Jekyll plugin that can do it faster and more reliably. The plugin allows me to specify certain things within the main Jekyll configuration file and it will generate the proxy.conf file for me:



nginx:
    proxy_host: <S3 bucket URL>
    proxy_port: 80
    from_format: "/:year/:month/:day/:title/"
    
    redirects:
        - { from: "^/splash(.*)", to: "/$1", type: redirect }

With this in place, I am able to generate the proxy.conf by simply issuing this command:

> jekyll nginx_config > proxy.conf

This command will produce a proxy.conf file which will look like this:



rewrite ^/2008/09/21/drupalcampla\-revision\-control\-presentation/?(\?.*)?$ /blog/drupal/drupalcampla-revision-control-presentation$1 permanent;




rewrite ^/blog/development/aysnchronous\-php\-with\-message\-queues/?(\?.*)?$ /blog/development/asynchronous-php-with-message-queues/$1 permanent;


location / {
	proxy_set_header Host <S3 bucket URL>;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_pass http://<S3 bucket URL>:80;
}

You probably noticed that this is not a complete nginx configuration. However, all that needs to be done is to define a server directive which imports this file:



server {
    listen 80;
    include /etc/nginx/proxy.conf;
}

Here we have an nginx proxy listening on port 80 that knows how to redirect old URLs to the new ones and pass on any request to the S3 bucket on AWS.

From this point, I was able to change the DNS records on our domain to point to the nginx proxy service instead of pointing directly at S3.

Check out the documentation for more ways of specifying redirects and more advanced usage.

Docker

Spinning up the proxy locally is a breeze with the help of Docker. Doing this with Vagrant and a provisioner would require a lot of boilerplate and code. With Docker, everything (with the exception of the config file that is automatically generated) is under 10 lines!

The Dockerfile

Here I am using the official nginx image straight from DockerHub but added some minor modifications:



FROM nginx


RUN rm /etc/nginx/conf.d/default.conf


ADD server.conf /etc/nginx/conf.d/
ADD proxy.conf /etc/nginx/

The build/nginx directory will contain everything the nginx proxy will need: the server.conf that you saw from the previous section, and the proxy.conf file which was generated by the jekyll nginx_config command.

Automating it with Grunt

Since we are using generator-jekyllrb, a Yeoman generator for Jekyll sites which uses Grunt to run a gamut of various tasks, I just had to write a grunt proxy task which does all the needed work when invoked:



...

grunt.initConfig({
    ...
    local_ip: process.env.LOCAL_IP,
    shell: {
        nginx_config: {
            command: 'jekyll nginx_config --proxy_host=192.168.0.3 --proxy_port=8080 --config=_config.yml,_config.build.yml --source=app > build/nginx/proxy.conf'
        },
        docker_build: {
            command: 'docker build -t jekyll-proxy build/nginx'
        },
        docker_run: {
            command: 'docker run -d -p 80:80 jekyll-proxy'
        }
    },
    ...
});

...

grunt.registerTask('proxy', [
      'shell:nginx_config',
      'shell:docker_build',
      'shell:docker_run'
]);

This requires the grunt-shell plugin.

With this in place, running grunt proxy will prepare the configuration, build the image, and run the proxy on http://192.168.99.100 where 192.168.99.100 is the address to the Docker host VM on my machine.

Note that this is a very simplified version of the actual Grunt task config that we actually use which just serves to illustrate the meager list of commands that is required to get the proxy configured and running.
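
Once grunt proxy finishes, a redirect can be spot-checked straight from the command line. A sketch using the /splash rule from the configuration above and the Docker host IP on my machine:

# Expect a 301 pointing at the new URL scheme
$ curl -I http://192.168.99.100/splash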

I have set up a GitHub repository that replicates this set-up plus the actual Grunt task configuration we use that adds more logic around things like an auto-rebuilding the Docker image, cleaning up of stale Docker processes, configuration for different build parameters for use in production, etc. You can find it here: bezhermoso/jekyll-nginx-proxy-with-docker-demo.

Sep 23 2015
Sep 23

This post is part 3 in the series "Hashing out a docker workflow". For background, checkout my previous posts.

Now that I've laid the groundwork for the approach that I want to take with local environment development with Docker, it's time to explore how to make the local environment "workable". In this post we will build on top of what we did in my last post, Docker and Vagrant, and create a working local copy that automatically updates the code inside the container running Drupal.

Requirements of a local dev environment

Before we get started, it is always a good idea to define what we expect to get out of our local development environment and define some requirements. You can define these requirements however you like, but since ActiveLAMP is an agile shop, I'll define our requirements as user stories.

User Stories

As a developer, I want my local development environment setup to be easy and automatic, so that I don't have to spend the entire day following a list of instructions. The fewer the commands, the better.

As a developer, my local development environment should run the same exact OS configuration as stage and prod environments, so that we don't run into "works on my machine" scenarios.

As a developer, I want the ability to log into the local dev server / container, so that I can debug things if necessary.

As a developer, I want to work on files local to my host filesystem, so that the IDE I am working in is as fast as possible.

As a developer, I want the files that I change on my localhost to automatically sync to the guest filesystem that is running my development environment, so that I do not have to manually push or pull files to the local server.

Now that we know what done looks like, let's start fulfilling these user stories.

Things we get for free with Vagrant

We have all worked on projects that have a README file with a long list of steps just to setup a working local copy. To fulfill the first user story, we need to encapsulate all steps, as much as possible, into one command:

$ vagrant up

We got a good start on our one-command setup in my last blog post. If you haven't read that post yet, go check it out now; we are going to be building on it in this post. My last post essentially resolves the first three stories in our list. This is the essence of using Vagrant: it aids in setting up virtual environments with very little effort and disposes of them when no longer needed, with vagrant up and vagrant destroy, respectively.

Since we will be defining Docker images and/or using existing docker containers from DockerHub, user story #2 is fulfilled as well.

For user story #3, it's not as straightforward to log into your docker host. Typically with Vagrant you would type vagrant ssh to get into the virtual machine, but since our host machine's Vagrantfile is in a subdirectory called /host, you have to change directory into that directory first.

$ cd host
$ vagrant ssh

Another way you can do this is by using the vagrant global-status command. You can execute that command from anywhere and it will provide a list of all known virtual machines with a short hash in the first column. To ssh into any of these machines just type:

$ vagrant ssh <short-hash>

Replace <short-hash> with the actual hash of the machine.

Connecting into a container

Most containers run a single process and may not have an SSH daemon running. You can use the docker attach command to connect to any running container, but beware: if you didn't start the container with STDIN and STDOUT attached, you won't get very far.

Another option you have for connecting is using docker exec to start an interactive process inside the container. For example, to connect to the drupal-container that we created in my last post, you can start an interactive shell using the following command:

$ sudo docker exec -t -i drupal-container /bin/bash

This will return an interactive shell on the drupal-container that you will be able to poke around on. Once you disconnect from that shell, the process will end inside the container.

Getting files from host to app container

Our next two user stories have to do with working on files native to the localhost. When developing our application, we don't want to bake the source code into a docker image. Code is always changing and we don't want to rebuild the image every time we make a change during the development process. For production, we do want to bake the source code into the image, to achieve the immutable server pattern. However in development, we need a way to share files between our host development machine and the container running the code.

We've probably tried every approach available to us when it comes to working on shared files with Vagrant. VirtualBox shared folders are just way too slow. NFS shared folders were a little bit faster, but still really slow. We've used sshfs to connect the remote filesystem directly to the localhost, which created a huge performance increase in terms of how the app responded, but was a pain in the neck in terms of how we used VCS, and it caused performance issues with the IDE. PhpStorm had to index files over a network connection, albeit a local network connection, but it was still noticeably slower when working on large codebases like Drupal.

The solution that we use to date is rsync, specifically vagrant-gatling-rsync. You can checkout the vagrant gatling rsync plugin on github, or just install it by typing:

$ vagrant plugin install vagrant-gatling-rsync

Syncing files from host to container

To achieve getting files from our localhost to the container we must first get our working files to the docker host. Using the host Vagrantfile that we built in my last blog post, this can be achieved by adding one line:

config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'

Your Vagrantfile within the /host directory should now look like this:





Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567
config.vm.synced_folder '../drupal/profiles/myprofile', '/srv/myprofile', type: 'rsync'
end

We are syncing a Drupal profile from within the drupal directory off of the project root to the /srv/myprofile directory within the docker host.

Now it's time to add an argument to be used when docker run is executed by Vagrant. To do this we can specify the create_args parameter in the container Vagrantfile. Add the following line to the container Vagrantfile:

docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']

This file should now look like:





Vagrant.configure(2) do |config|
config.vm.provider "docker" do |docker|
docker.vagrant_vagrantfile = "host/Vagrantfile"
docker.image = "drupal"
docker.create_args = ['--volume="/srv/myprofile:/var/www/html/profiles/myprofile"']
docker.ports = ['80:80']
docker.name = 'drupal-container'
end
end

This parameter that we are passing maps the directory we are rsyncing to on the docker host to the profiles directory within the Drupal installation that was included in the Drupal docker image from DockerHub.
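
Once the profile files exist (we create them in the next section) and everything is up, you can verify that the volume mapping took effect by listing the profile directory from inside the running container, reusing the docker exec approach from earlier:

# The synced profile should be visible inside the Drupal container
$ sudo docker exec -t -i drupal-container ls /var/www/html/profiles/myprofile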

Create the installation profile

This blog post doesn't intend to go into how to create a Drupal install profile, but if you aren't using profiles for building Drupal sites, you should definitely have a look. If you have questions regarding why using Drupal profiles are a good idea, leave a comment.

Let's create our simple profile. Drupal requires two files to create a profile. From the project root, type the following:

$ mkdir -p drupal/profiles/myprofile
$ touch drupal/profiles/myprofile/{myprofile.info,myprofile.profile}

Now edit each file that you just created with the minimum information that you need.

myprofile.info

name = Custom Profile
description = My custom profile
core = 7.x

myprofile.profile

<?php
function myprofile_install_tasks() {
  
}

Start everything up

We now have everything in place to just type vagrant up and get a working copy. Go to the project root and run:

$ vagrant up

This will build your Docker host as well as create your Drupal container. As I mentioned in a previous post, starting up the container sometimes requires me to run vagrant up a second time. I'm still not sure what's going on there.

After everything is up and running, you will want to run the rsync-auto command for the Docker host, so that any changes you make locally traverse down to the Docker host and then to the container. The easiest way to do this is:

$ cd host
$ vagrant gatling-rsync-auto
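
If the sync feels too chatty or too sluggish for your taste, the plugin's README documents a couple of tuning options you can set in the host Vagrantfile; treat the exact option names as an assumption if you're on a different plugin version:

# Per the vagrant-gatling-rsync README (assumed option names):
# coalesce rapid file changes over this many seconds before syncing
config.gatling.latency = 2.5
config.gatling.time_format = "%H:%M:%S"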

Now visit the URL to your running container at http://localhost:4567 and you should see the new profile that you've added.

[Screenshot: Custom Install Profile]
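
If the profile doesn't show up, a quick way to confirm the synced files actually made it into the container is to run an ls through docker exec on the Docker host (a diagnostic sketch; the container name matches the one set in the container Vagrantfile):

$ cd host
$ vagrant ssh -c "docker exec drupal-container ls /var/www/html/profiles/myprofile"
# should list myprofile.info and myprofile.profile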

Conclusion

We covered a lot of ground in this blog post. We were able to accomplish all of the stated requirements above with just a little tweaking of a couple of Vagrantfiles. We now have files that are shared from the host machine all the way down to the container that our app runs on, utilizing features built into Vagrant and Docker. Any files we change in our installation profile on our host immediately sync to the drupal-container on the Docker host.

At ActiveLAMP, we use a much more robust approach to build out installation profiles, utilizing Drush Make, which is out of scope for this post. This blog post simply lays out the general idea of how to get a working copy of your code downstream using Vagrant and Docker.

In my next post, I'll continue to build on what I did here today and introduce automation, using Jenkins, to bake our source code into a Docker image. This will allow us to easily release any number of containers to scale our app out as needed.

Jul 19 2015
Jul 19

This post is part 2 in a series of Docker posts hashing out a new Docker workflow for our team. For background on what I want to accomplish with Docker, check out my previous post hashing out a Docker workflow.

In this post, we will venture into setting up Docker locally, in the same repeatable way from developer to developer, by using Vagrant. By the end of this post, we'll have Drupal running in a container, using Docker. This post is focused on hashing out a Docker workflow with Vagrant, less on Drupal itself. I want to give a shout-out to the folks who maintain the Drupal Docker image in the Docker Registry. It's definitely worth checking out and using as a base to build FROM for your custom images.

Running Docker Locally

There are several ways to go about setting up Docker locally. I don't intend to walk you through how to install Docker; you can find step-by-step installation instructions based on your OS in the Docker documentation. However, I am leaning toward taking an unconventional approach to installing Docker locally, one not mentioned in the Docker documentation. Let me tell you why.

Running a Docker host

For background, we are specifically an all Mac OS X shop at ActiveLAMP, so I'll be speaking from this context.

Unfortunately, you can't run Docker natively on OS X; Docker needs to run in a virtual machine with an operating system such as Linux. If you read the Docker OS X installation docs, you'll see there are two options for running Docker on Mac OS X: Boot2Docker or Kitematic.

Running either of these two packages looks appealing for getting Docker up locally very quickly (and you should use one of them if you're just trying out Docker), but thinking about the big picture and how we plan to use Docker in production, it seems that we should take a different approach locally. Let me tell you why I think you shouldn't use Boot2Docker or Kitematic locally, but first, a rabbit trail.

Thinking ahead (the rabbit trail)

My opinion may change after gaining more real world experience with Docker in production, but the mindset that I'm coming from is that in production our Docker hosts will be managed via Chef.

Our team has extensive experience using Chef to manage infrastructure at scale. It doesn't seem quite right to completely abandon Chef yet, since Docker still needs a machine to run the Docker host. Chef is great for machine level provisioning.

My thought is that we would use Chef to manage the various Docker hosts that we deploy containers to, and use the Dockerfile with Docker Compose to manage the actual app container configuration. Chef would be used in a much more limited capacity, only managing configuration on a system level, not an application level. One thing to mention is that we have yet to dive into Docker-specific hosting services such as AWS ECS, dotCloud, or Tutum. If we end up adopting a service like one of these, we may end up dropping Chef altogether, but we're not ready to let go of those reins yet.

One step at a time for us. The initial goal is to get application infrastructure into immutable containers managed by Docker. We're not ready to make a decision on what manages Docker or where we host it; that comes next.

Manage your own Docker Host

The main reason I was turned off from using Boot2Docker or Kitematic is that they create a virtual machine in VirtualBox or VMware from a default box/image that you can't easily manage with configuration management. I want control of the host machine that Docker runs on, locally and in production. This is where Chef comes into play, in conjunction with Vagrant.

Local Docker Host in Vagrant

As I mentioned in my last post, we are no stranger to Vagrant. Vagrant is great for managing virtual machines. If Boot2Docker or Kitematic are going to spin up a virtual machine behind the scenes in order to use Docker, then why not spin up a virtual machine with Vagrant? This way I can manage the configuration with a provisioner, such as Chef. This is the reason I've decided to go down the Vagrant with Docker route, instead of Boot2Docker or Kitematic.

The latest version of Vagrant ships with a Docker provider built in, so you can manage Docker containers via a Vagrantfile. The Vagrant Docker integration was a turn-off to me initially because it didn't seem very Docker-esque. It seemed Vagrant was just abstracting established Docker workflows (specifically Docker Compose) into a Vagrant syntax. However, within the container Vagrantfile I saw you can also build images from a Dockerfile and launch those images into a container. It didn't feel so distant from Docker any more.

It seems that there might be a little overlap between what Vagrant and Docker do, but at the end of the day it's a matter of figuring out the right combination of using the tools together. The boundary is that Vagrant should be used for "orchestration" and Docker for application infrastructure.

When everything is set up, we will have two Vagrantfiles to manage: one to define the containers and one to define the host machine.

Setting up the Docker Host with Vagrant

The first thing to do is to define the Vagrantfile for your host machine. We will be referencing this Vagrantfile from the container Vagrantfile. The easiest way to do this is to just type the following in an empty directory (your project root):

$ vagrant init ubuntu/trusty64

You can configure that Vagrantfile however you like. Typically you would also use a tool like Chef Solo, Puppet, or Ansible to provision the machine. For now, just to get Docker installed on the box, we'll add a provision statement to the Vagrantfile. We will also give the Docker host a hostname and a port mapping, since we know we'll be creating a Drupal container that should EXPOSE port 80. Open up your Vagrantfile and add the following:

config.vm.hostname = "docker-host"
config.vm.provision "docker"
config.vm.network :forwarded_port, guest: 80, host: 4567

This ensures that Docker is installed on the host when you run vagrant up, and maps port 4567 on your local machine to port 80 on the Docker host (guest machine). Your Vagrantfile should look something like this (with all the comments removed):

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.hostname = "docker-host"
  config.vm.provision "docker"
  config.vm.network :forwarded_port, guest: 80, host: 4567
end

Note: This post is not intended to walk through the fundamentals of Vagrant. For further resources on how to configure the Vagrantfile, check out the docs.

As I mentioned earlier, we are going to end up with two Vagrantfiles in our setup. I also mentioned the container Vagrantfile will reference the host Vagrantfile. This means the container Vagrantfile is the configuration file we want Vagrant to use when vagrant up is run. We need to move the host Vagrantfile to another directory within the project, out of the project root. Create a host directory and move the file there:

$ mkdir host
$ mv Vagrantfile !$

Bonus Tip: The !$ destination that I used when moving the file is a shell shortcut to use the last argument from the previous command.
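
In this case the previous command was mkdir host, so the shell expands the shortcut like this:

$ mv Vagrantfile !$    # equivalent to: mv Vagrantfile host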

Define the containers

Now that we have the host Vagrantfile defined, let's create the container Vagrantfile. Create a Vagrantfile in the project root directory with the following contents:

Vagrant.configure(2) do |config|
  config.vm.provider "docker" do |docker|
    docker.vagrant_vagrantfile = "host/Vagrantfile"
    docker.image = "drupal"
    docker.ports = ['80:80']
    docker.name = 'drupal-container'
  end
end

To summarize the configuration above: we are using the Vagrant Docker provider, we have specified the path to the Docker host Vagrant configuration that we set up earlier, and we defined a container using the Drupal image from the Docker Registry, along with exposing some ports on the Docker host.

Start containers on vagrant up

Now it's time to start up the container. It should be as easy as going to your project root directory and typing vagrant up. It's almost that easy. For some reason after running vagrant up I get the following error:

A Docker command executed by Vagrant didn't complete successfully!
The command run along with the output from the command is shown
below.

Command: "docker" "ps" "-a" "-q" "--no-trunc"

Stderr: Get http:///var/run/docker.sock/v1.19/containers/json?all=1: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?

Stdout:

I've gotten around this by just running vagrant up again. If anyone has ideas about what is causing that error, please feel free to leave a comment.

Drupal in a Docker Container

You should now be able to navigate to http://localhost:4567 and see the Drupal installation screen. Go ahead and install Drupal using a SQLite database (we didn't set up a MySQL container) to see that everything is working. Pretty cool stuff!
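
If the install screen doesn't come up, it can help to peek at the container from the Docker host. Something like the following, run from the host directory, will show whether the container is up and what it is logging (a diagnostic sketch using standard Docker commands):

$ cd host
$ vagrant ssh -c "docker ps"
$ vagrant ssh -c "docker logs drupal-container"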

Development Environment

There are other things I want to accomplish with our local Vagrant environment to make it easy to develop on, such as setting up synced folders and using the vagrant rsync-auto tool. I also want to customize our Drupal builds with Drush Make, to make developing on Drupal much more efficient when adding modules, updating core, etc. I'll leave those details for another post, as this one has become very long.

Conclusion

As you can see, you don't have to use Boot2Docker or Kitematic to run Docker locally. I would advise that if you just want to figure out how Docker works, then you should use one of these packages. Thinking longer term, your local Docker Host should be managed the same way your production Docker Host(s) are managed. Using Vagrant, instead of Boot2Docker or Kitematic, allows me to manage my local Docker Host similar to how I would manage production Docker Hosts using tools such as Chef, Puppet, or Ansible.

In my next post, I'll build on what we did here today, and get our Vagrant environment into a working local development environment.
