Nov 17 2018

This blog post contains a written transcript of my NEDCamp 2018 keynote, Real World DevOps, edited to match the style of this blog. Accompanying resources: presentation slides, video (coming soon).

Jeff Geerling at NEDCamp 2018 - New England Drupal Camp

I'm Jeff Geerling; you probably know that because my name appears in huge letters at the top of every page on this site, including the post you're reading right now. I currently work at Acquia as a Senior Technical Architect, building hosting infrastructure projects using some buzzword-worthy tech like Kubernetes, AWS, and 'the cloud'.

I also maintain Drupal VM, the most popular local development environment for the Drupal open source CMS. And I run two SaaS products with hundreds of happy customers, Hosted Apache Solr and Server, both of which have had over 99.99% uptime across their combined 15 years of operation. I also write (and continuously update) a best-selling book on Ansible, Ansible for DevOps, and a companion book about Kubernetes. Finally, I maintain a large ecosystem of Ansible roles and automation projects on GitHub, which have amassed over 17,000 stars and 8,000 forks.

Oh, I also have three children under the age of six, have a strong passion for photography (see my Flickr), maintain four Drupal websites for local non-profit organizations, and love spending time with my wife.

You might be thinking: this guy probably never spends time with his family.

And, if you're speaking of this weekend, sadly, you'd be correct—because I'm here in Rhode Island with all of you!

But on a typical weeknight, I'm headed upstairs around 5-6 p.m., spend time with my family for dinner, after-meal activities, prayers, and bedtime. And on weekends, it's fairly rare I'll need to do any work. We go to the zoo, we go on family trips, we go to museums, and we generally spend the entire weekend growing together as a family.

Some nights, after the kids are settled in bed, I'll spend an hour or two jumping through issue queues, updating a section of my book—or, as is the case right now, writing this blog post.

How do I do it?

CNCF Cloud Native Landscape

Well, I apply complex self-healing, highly-scalable DevOps architectures to all my projects using all the tools shown in this diagram! I'm kidding—that would be insane. But have you seen this graphic before? It's the Cloud Native Landscape, published by the Cloud Native Computing Foundation.

The reason I show this picture is because I expect everyone reading this to memorize all these tools so you know how to implement DevOps by next week.

Just kidding again! Some people think mastery of some of the tools in this diagram means they're doing 'DevOps'. To be honest, you might be practicing DevOps better with nothing but Apache and Drupal—neither of which is listed in this infographic—than someone who integrates fifty of these tools!

What is DevOps?

The framework I use is what I call 'Real World DevOps'. But before I get into my definition, I think it's important we understand what the buzzword 'DevOps' means, according to our industry:

Azure DevOps

Microsoft, apparently, has packaged up DevOps and sells it as part of Azure's cloud services. So you can put a price on it, get a purchase order, and have it! Right?


And I see a lot of DevOps people talk about how Docker transformed their developers into amazing coding ninjas who can deploy their code a thousand times faster. So Docker is part of DevOps, right?

There is no cloud sticker

And to do DevOps, you have to be in the cloud, because that's where all DevOps happens, right?

Agile Alliance

And DevOps requires you to strictly follow Agile methodologies, like sprints, kanban, scrums, pair programming, and pointing poker, right?

Well, let's go a little further, and see what some big-wigs in the industry have to say:

"People working together to build, deliver, and run resilient software at the speed of their particular business."

So it sounds like there's a people component, and some sort of correlation between speed and DevOps.

Okay, how about Atlassian?

DevOps "help[s] development and operations teams be more efficient, innovate faster, and deliver higher value"

So it sounds like it's all about making teams better. Okay...

"Rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach"

(Oh... that's funny, this quote is also in one of O'Reilly's books on DevOps, in a post from Ensono, and in basically every cookie-cutter Medium post about DevOps.)

In Gartner's case, they seem to focus strongly on methodology and service delivery—but that's probably because their bread and butter is reviewing products which purportedly help people track methodology and service delivery! It's interesting (and telling) that there's no mention of people or teams!

But what do I say about DevOps?

But what about me? I just claimed to practice DevOps in my work—heck, my book has the word DevOps in the title! Surely I can't just be shilling for the buzzword profit multiplier by throwing the word 'DevOps' in my book title... right?

Ansible for DevOps - is Jeff Geerling just riding the wave of the buzzword?

Well, to be honest, I did use the word to increase visibility a bit. Why else do you think my second book has the word 'Kubernetes' in it!?

But my definition of DevOps is a bit simpler:

Jeff Geerling's DevOps Definition - Making people happier while making apps better

"Making people happier while making apps better."
—Jeff Geerling (Photo above by Kevin Thull)

I think this captures the essence of real world, non-cargo-cult DevOps, and that's because it contains the two most important elements:

Making people happier

DevOps is primarily about people: every team, no matter the size, has to figure out a way to work together to make users happy, and not burn out in the process. And what are some of the things I see in teams that are implementing DevOps successfully?

  • Reduced friction between Operations/sysadmins, Developers, Project Management, InfoSec, and QA. People don't feel like it's 'us against them', or 'we will loop them in after we finish our part'. Instead, everyone talks, everyone has open communication lines in email or Slack, and requirements and testing are built up throughout the life of the project.
  • Reduced burnout, because soul-sucking problems and frustrating communications blockades are almost non-existent.
  • Frequent code deploys, and almost always in the middle of the workday—and this also feeds back into reduced burnout, because nobody's pulling all-nighters fixing a bad deploy and wrangling sleepy developers to implement hotfixes.
  • Stable teams that stay together and grow into a real team, not just a 'project team'; note that this can sometimes be impossible (e.g. in some agency models), but it does make it easier to iteratively improve if you're working with the same people for a long period of time.
  • There are no heroes! Nobody has to be a rockstar ninja, working through the weekend getting a release ready, because DevOps processes emphasize stability, enabling a better work-life balance for everyone on the team.

How many times have you seen an email praising the heroic efforts of the developer who fixed some last-minute major issues in a huge new feature that were discovered in final user acceptance testing? This should not be seen as a heroic deed—rather, it should be seen as a tragic failure. Not a failure of the developer, but a failure of the system that enabled this to happen in the first place!

DevOps is about making people happier.

Making apps better

DevOps is also about apps: you can't afford to develop at a glacial pace in the modern world, and when you make changes, you should be confident they'll work. Some of the things I see in the apps that are built with a DevOps mentality include:

  • Continuous delivery: a project's master (or production) code branch is always deployable, and passes all automated tests—and there are automated tests, at least covering happy paths.
  • Thorough monitoring: teams know when deployments affect performance. They know whether their users are having a slow or poor experience. They get alerts when systems are impaired but not down.
  • Problems are fixed as they occur. Bugfixes and maintenance are part of the regular workflow, and project planning gives equal importance to these issues as it does features.
  • Features are delivered frequently, and usually in small increments. Branches or unmerged pull requests rarely last more than a few days, and never more than a sprint.
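As an entirely hypothetical illustration of that first bullet, a minimal CI configuration (GitLab CI syntax here, but any CI system works the same way) might run the test suite on every push and deploy only a green master branch. All the script names and commands below are placeholders:

```yaml
# Hypothetical minimal pipeline illustrating "master is always deployable":
# every push runs the test suite, and only a passing master branch deploys.
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - composer install
    - vendor/bin/phpunit            # unit tests
    - vendor/bin/behat              # behavioral (happy path) tests

deploy:
  stage: deploy
  script:
    - scripts/deploy.sh production  # placeholder deploy script
  only:
    - master
```

The specific tooling matters much less than the shape: tests gate every change, and deployment is a boring, automated step rather than an event.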

Small but frequent deployments are one of the most important ways to make your apps better, because it also makes it easier to fix things as problems occur. Instead of dropping an emergent bug into a backlog, and letting it fester for weeks or months before someone tries to figure out how to reproduce the bug, DevOps-empowered teams 'swarm' the bug, and prevent similar bugs from ever happening again by adding a new test, correcting their process, or improving their monitoring.

DevOps is about making apps better.

DevOps Prerequisites

So we know that DevOps is about people and apps, and we know some of the traits of a team that's doing DevOps well, but are there some fundamental tools or processes essential to making DevOps work? Looking around online, I've found most DevOps articles mention these prerequisites:

  • Automation
  • CI/CD
  • Monitoring
  • Collaboration

I tend to agree that these four traits are essential to implementing DevOps well. But I think we can distill the list even further—and in some cases, some prerequisites might not be as important as the others.

I think the list should be a lot simpler. To do DevOps right, it should be:

  • Easy to make changes
  • Easy to fix and prevent problems (and prevent them from happening again)

Easy to make changes

I'm just wondering: have you ever timed how long it takes for a developer completely new to your project to get up and running? From getting access to your project's codebase to being able to make a change to it locally? If not, it might be a good idea to find out. Or just try deleting your local environment and codebase entirely and starting from scratch. It should be very quick.

If it's not easy and fast to start working on your project locally, it's hard to make changes.

Once you've made some changes, how do you know you won't break any existing functionality on your site? Do you have behavioral tests that you can run easily, that don't take very long, and that don't require hours of setup work or a dedicated QA team? Do you have visual regression tests which verify that the code you just changed won't completely break the home page of your site?

If you can't be sure your changes won't break things, it's scary to make changes.

Once you deploy changes to production, how hard is it to revert back if you find out the changes did break something badly? Have you practiced your rollback procedure? Do you even have a process for rollbacks? Have you tested your backups and have confidence you could restore your production system to a known good state if you totally mess it up?

If you can't back out of broken changes, it's scary to make changes.
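One low-tech way to make rollbacks less scary is to tag every release and actually rehearse checking out the last known-good tag. Here's a minimal sketch of that drill in a throwaway repository—the file, tag names, and commit messages are all hypothetical, and a real rollback would check out the tag on the server or redeploy an older build artifact:

```shell
set -e

# Work in a throwaway repository so the drill is safe to practice.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# Release 1: tag it so there's a known-good point to return to.
echo "v1" > index.html
git add index.html
git commit -qm "Release 1"
git tag release-1

# Release 2 ships a bug.
echo "v2 (broken)" > index.html
git commit -qam "Release 2"
git tag release-2

# The deploy went bad: roll back to the last known-good tag.
git checkout -q release-1
cat index.html
```

The point isn't these specific commands—it's that the rollback path is scripted and rehearsed before you need it at 2 a.m.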

The easier and less stressful it is to make changes, the more willing you'll be to make them, and the more often you'll make them. And with tested backups and a rehearsed disaster recovery plan, you'll be more confident and less stressed, too.

"High performers deployed code 30x more frequently, and the time required to go from “code committed” to “successfully running in production” was 200x faster."
—The DevOps Handbook

While you might not be deploying code 30 times a day, you'll be happy to deploy code whenever you want—in the middle of the workday—if you can make changes easy.

Easy to fix and prevent problems

Making changes has to be easy, otherwise it's hard to fix and prevent problems. But that's not all that's required.

Are developers able to deploy their changes to production? Or is there a long, drawn-out process to get a change deployed? If you can build confidence that at least the home page still loads before the code is deployed, you'll be more likely to make small but frequent changes—which are a lot easier to fix than huge batches of changes!

Developers should be able to deploy to production after their code passes tests.

Once you deploy code, how do you know if it's helping or hurting your site's performance? Do you have detailed metrics for things like average end-user page load times (Application Performance Monitoring, or APM), CPU usage, memory usage, and logs? Without these metrics you can't make informed decisions about what's broken, or whether a particular problem is fixed.

Detailed system metrics and logs are essential to fix and prevent problems.

When something goes wrong, does everyone duck and cover, finding ways to avoid being blamed for the incident? Or does everyone come together to figure out what went wrong, why it went wrong, and how to prevent it from happening in the future? It's important that people realize when something goes wrong, it's rarely the fault of the person who wrote the code or pressed the 'go' button—it's the fault of the process. Better tests, better requirements, and more thorough reviews would prevent most issues from ever happening.

'Blameless postmortems' prevent the same failure from happening twice while keeping people happy.

DevOps Tools

But what about tools?

"It's a poor craftsman that blames his tools."
—An old saying

Earlier in this post I mentioned that you could be doing DevOps even if you don't use any of the tools in the Cloud Native Landscape. That may be true, but you should also avoid falling into the trap of having one of these:

Golden hammer driving a screw into a board

A golden hammer is a tool that someone loves so much, they use it for purposes for which it isn't intended. Sometimes it can work... but the results and experience are not as good as you'd get if you used the right tool for the job. I really like this quote I found on a Hacker News post:

"Part of being an expert craftsman is having the experience and skills to select excellent tools, and the experience and skills to drive those excellent tools to produce excellent results."
—jerf, Hacker News commenter

So a good DevOps practitioner knows when it's worth spending the time learning how to use a new tool, and when to stick with the tools they know.

So now that we know something about DevOps, here's a project for you: build some infrastructure for a low-profile Drupal blog-style site for a budget-conscious client with around 10,000 visitors a day. Most of the traffic comes from Google searches, and there is little authenticated traffic. What would you build?

Complex Drupal hosting architecture in AWS VPC with Kubernetes

Wow! That looks great! And it uses like 20 Cloud Native Landscape projects, so it's definitely DevOps, right?

Great idea, terrible execution.

Simple Drupal hosting architecture with a LAMP server and CloudFlare

Just because you know how to produce excellent results with excellent tools doesn't mean you always have to use the 'best' and most powerful tools. You should also know when to use a simple hammer to nail in a few nails! This second architecture is better for this client, because it will cost less, be easier to maintain long-term, and won't require a full-time development team maintaining the infrastructure!

Jeff Geerling holding a golden hammer

So know yourself. Learn and use new tools, but don't become an architecture astronaut, always dreaming up and trying to build over-engineered solutions to simple problems!

That being said, not all the tools you'll need appear in the Cloud Native Landscape. Some of the tools I have in my toolbelt include:


I don't know how many times I've had to invoke YAGNI. That is, "You Ain't Gonna Need It!" It's great that you aspire to have your site get as much traffic as Facebook. But that doesn't mean you should architect it like Facebook does. Don't build fancy, complex automations and flexible architectures until you really need them. It saves you money, time, and sometimes it can even save a project from going completely off the rails!

Much like the gold plating on the hammer I was holding earlier, extra features that you don't need are a waste of resources, and may actually make your project worse off.

Andon board

In researching motivations behind some Agile practices, I came across an interesting book about lean manufacturing, The Machine that Changed the World. A lot of the ideas you may hear and even groan about in Agile methodology, and even DevOps, come from the idea of lean manufacturing.

One of the more interesting ideas is the andon board, a set of displays visible to every single worker in Toyota's manufacturing plant. If there's ever a problem or blockage, it is displayed on that board, and workers are encouraged to 'swarm the problem' until it is fixed—even if it's in a different part of the plant. The key is understanding that problems should not be swept aside to be dealt with when you have more time. Instead, everyone on the team must be proactive in fixing the problem before it causes a plant-wide failure to produce.

Time to Drupal

I did a blog post after DrupalCon last year discussing how different local Drupal development environments have dramatically different results in my measurement of "Time to Drupal". That is, from not having it downloaded on your computer, to having a functional Drupal environment you can play around with, how long does it take?

If it takes you more than 10 minutes to bring up your local environment, you should consider ways to make that process much faster. Unless you have a multi-gigabyte database that's absolutely essential for all development work (and this should be an exceedingly rare scenario), there's no excuse to spend hours or days onboarding a new developer, or setting up a new computer when your old one dies!
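If you've never measured your own 'Time to Drupal', a crude timer around your setup steps is enough to get started. The commands below are commented-out placeholders—substitute your project's real clone and setup steps—with a sleep standing in so the sketch runs as-is:

```shell
start=$(date +%s)

# git clone https://git.example.com/yourproject.git && cd yourproject
# composer install
# vagrant up   # or: lando start, ddev start, docker-compose up -d...
sleep 1        # stand-in for the real setup commands above

end=$(date +%s)
echo "Time to Drupal: $((end - start)) seconds"
```

Run it on a clean machine (or after deleting your local environment) and you'll have a baseline number to improve on.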

Dev to Prod

Similarly, how long does it take, once a feature or bugfix has been deployed somewhere and approved, for it to be deployed to production? Does this process take more than a day? Why? Are you trying to batch multiple changes together into one larger deployment?

The DevOps Handbook has some good advice about this:

"one of the best predictors of short lead times was small batch sizes of work"
—The DevOps Handbook

And wouldn't you know, there's a lean term along this theme: Takt time, or the average amount of time it takes between delivering units of work.

If you batch a bunch of deployments together instead of delivering them to production as they're ready, you'll have a large Takt time, and this means you can't quickly deliver value to your end users. You want to reduce that time by speeding up your process for getting working code to production.


Those tools might not be the tools you were thinking I'd mention, like DevShop, Drupal VM, Lando, Docker, or Composer. But in my mind, if you want to implement DevOps in the real world, those tools might be helpful as implementation details, but you should spend more time thinking about real world DevOps tools: better process, better communication, and better relationships.

If you do that, you will truly end up making people happier while making apps better.

Thank you.

Resources mentioned in the presentation

Nov 03 2018

Lately I've been spending a lot of time working with Drupal in Kubernetes and other containerized environments; one problem that's bothered me lately is the fact that when autoscaling Drupal, it always takes at least a few seconds to get a new Drupal instance running. Not installing Drupal, configuring the database, building caches; none of that. I'm just talking about having a Drupal site that's already operational, and scaling by adding an additional Drupal instance or container.

One of the principles of the 12 Factor App is:

IX. Disposability

Maximize robustness with fast startup and graceful shutdown.

Disposability is important because it enables things like easy, fast code deployments, easy, fast autoscaling, and high availability. It also forces you to make your code stateless and efficient, so it starts up fast even with a cold cache. Read more about the disposability factor on the 12factor site.

Before diving into the details of how I'm working to get my Drupal-in-K8s instances faster to start, I wanted to discuss one of the primary optimizations, opcache...

Measuring opcache's impact

I first wanted to see how fast page loads were when they used PHP's opcache (which basically stores an optimized copy of all the PHP code that runs Drupal in memory, so individual requests don't have to read in all the PHP files and compile them on every request).

  1. On a fresh Acquia BLT installation running in Drupal VM, I uninstalled the Internal Dynamic Page Cache and Internal Page Cache modules.
  2. I also copied the codebase from the shared NFS directory /var/www/[mysite] into /var/www/localsite and updated Apache's virtualhost to point to the local directory, to eliminate NFS filesystem variability from the testing (/var/www/[mysite] is, by default, an NFS mount shared from the host machine).
  3. In Drupal VM, run the command while true; do echo 1 > /proc/sys/vm/drop_caches; sleep 1; done to effectively disable the Linux filesystem cache (keep this running in the background while you run all these tests).
  4. I logged into the site in my browser (using drush uli to get a user 1 login), and grabbed the session cookie, then stored that as export cookie="KEY=VALUE" in my Terminal session.
  5. In Terminal, run time curl -b $cookie http://local.example.test/admin/modules three times to warm up the PHP caches and see page load times for a quick baseline.
  6. In Terminal, run ab -n 25 -c 1 -C $cookie http://local.example.test/admin/modules (requires apachebench to be installed).

At this point, I could see that with PHP's opcache enabled and Drupal's page caches disabled, the page loads took on average 688 ms. A caching proxy and/or requesting cached pages as an anonymous user would dramatically improve that (the anonymous user/login page takes 160 ms in this test setup), but for a heavy PHP application like Drupal, < 700 ms to load every code path on the filesystem and deliver a generated page is not bad.

Next, I set opcache.enable=0 (was 1) in the configuration file /etc/php/7.1/fpm/conf.d/10-opcache.ini, restarted PHP-FPM (sudo systemctl restart php7.1-fpm), and confirmed in Drupal's status report page that opcache was disabled (Drupal shows a warning if opcache is disabled). Then I ran another set of tests:

  1. In Terminal, run time curl -b $cookie http://local.example.test/admin/modules three times.
  2. In Terminal, run ab -n 25 -c 1 -C $cookie http://local.example.test/admin/modules

With opcache disabled, average page load time was up to 1464 ms. So in comparison:

Opcache status    Average page load time    Difference
Enabled           688 ms                    baseline
Disabled          1464 ms                   776 ms (72%, or 2.1x slower)

Note: Exact timings are unimportant in this comparison; the delta between the different scenarios is what's important. Always run benchmarks on your own systems for the most accurate results.

Going further - simulating real-world disk I/O in VirtualBox

So, now that we know a fresh Drupal page load is more than twice as slow as one with the code precompiled in opcache, what if the disk access were slower? I'm running these tests on a 2016 MacBook Pro with an insanely fast local NVMe drive, which can pump through multiple gigabytes per second sequentially, or hundreds of megabytes per second of random access. Most cloud servers have disk I/O which is much more limited, even if they say they're 'SSD-backed' on the tin.
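If you're not sure how your own server's disks compare, GNU dd can give a rough ballpark of sustained write throughput. This is a quick sketch, not a real benchmark (use fio for that), and note that mktemp may put the file on a RAM-backed tmpfs—point it at the disk you actually care about:

```shell
# Write 16 MB and force it to disk; conv=fdatasync keeps the page cache
# from flattering the result (GNU dd on Linux).
testfile="$(mktemp)"
result="$(dd if=/dev/zero of="$testfile" bs=1M count=16 conv=fdatasync 2>&1 | tail -n1)"
echo "$result"
rm -f "$testfile"
```

The last line of dd's output reports the effective throughput, which you can compare against your cloud provider's advertised numbers.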

Since Drupal VM uses VirtualBox, I can limit the VM's disk bandwidth using the VBoxManage CLI (see Limiting bandwidth for disk images):

# Stop Drupal VM.
vagrant halt

# Add a disk bandwidth limit to the VM, 1 MB/sec.
VBoxManage bandwidthctl "VirtualBox-VM-Name-Here" add Limit --type disk --limit 5M

# Get the name of the disk image (vmdk) corresponding to the VM.
VBoxManage list hdds

# Apply the limit to the VM's disk.
VBoxManage storageattach "VirtualBox-VM-Name-Here" --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium "full-path-to-vmdk-from-above-command" --bandwidthgroup Limit

# Start Drupal VM.
vagrant up

# (You can update the limit in real time once the VM's running with the command below)
# VBoxManage bandwidthctl "VirtualBox-VM-Name-Here" set Limit --limit 800K

I re-ran the tests above, and the average page load time was now 2171 ms. Adding that to the test results above, we get:

Opcache status        Average page load time    Difference
Enabled               688 ms                    baseline
Disabled              1464 ms                   776 ms (72%, or 2.1x slower)
Disabled (slow I/O)   2171 ms                   1483 ms (104%, or 3.2x slower)

Not every cloud VM has disk I/O that slow... but I've seen many situations where I/O gets severely limited, especially in cases where you have multiple volumes mounted per VM (e.g. maximum EC2 instance EBS bandwidth per instance) and they're all getting hit pretty hard. So it's good to test for these kinds of worst-case scenarios. In fact, last year I found that a hard outage was caused by an EFS volume hitting a burst throughput limit, after which bandwidth went down to 100 Kbps. This caused so many issues that I had to architect around that potential problem to prevent it from happening in the future.

The point is, if you need fast PHP startup times, slow disk IO can be a very real problem. This could be especially troublesome if trying to run Drupal in environments like Lambda or other Serverless environments, where disk I/O is usually the lowest priority—especially if you choose to allocate a smaller portion of memory to your function! Cutting down the initial request compile time could be immensely helpful for serverless, microservices, etc.

Finding the largest bottlenecks

Now that we know the delta for opcache vs. not-opcache, and vs. not-opcache on a very slow disk, it's important to realize that compilation is just one of a series of many different operations which occur when you start up a new Drupal container:

  • If using Kubernetes, the container image might need to be pulled (therefore network bandwidth and image size may have a great effect on startup time)
  • The amount of time Docker spends allocating resources for the new container, creating volume mounts (e.g. for a shared files directory) can differ depending on system resources
  • The latency between the container and the database (whether in a container or in some external system like Amazon RDS or Aurora) can cause tens or even hundreds of ms of time during startup

However, at least in this particular site's case—assuming the container image is already pulled on the node where the new container is being started—the time spent reading in code into the opcache is by far the longest amount of time (~700ms) spent waiting for a fresh Drupal Docker container to serve its first web request.

Can you precompile Drupal for faster startup?

Well... not really—at least not in any reasonably sane way, currently.

But there is hope on the horizon: There's a possibility PHP 7.4 could add a cool new feature, Preloading! You can read the gory details in the RFC link, but the gist of it is: when you are building your container image, you could precompile all of your application code (or at least the hot code paths) so when the container starts, it only takes a couple ms instead of hundreds of ms to get your application's code compiled into opcache.
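For illustration, the configuration sketched in the RFC would look something like this (none of this works on any released PHP version as of this writing, and the path is a hypothetical example):

```ini
; Point opcache at a preload script that compiles your hot code paths once,
; at server startup, as proposed in the PHP preloading RFC.
opcache.preload=/var/www/docroot/preload.php
```

The preload script itself would then call opcache_compile_file() (or simply require files) for each part of the codebase you want precompiled into opcache before the first request arrives.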

We'll see if this RFC gets some uptake; in the meantime, there's not really much you can do to mitigate the opcache warming problem.


With Preloading, we might be able to pre-compile our PHP applications—notably beefy ones like Drupal or Magento—so they can start up much more quickly in lightweight environments like Kubernetes clusters, Lambda functions, and production-ready docker containers. Until that time, if it's important to have Drupal serve its first request as quickly as possible, consider finding ways to trim your codebase so it doesn't take half a second (or longer) to compile into the opcache!

Nov 02 2018

I am currently building a Drupal 8 application which is running outside Acquia Cloud, and I noticed there are a few 'magic' settings I'm used to from working on Acquia Cloud which don't work if you aren't inside an Acquia or Pantheon environment; most notably, the automatic Configuration Split settings choice (for environments like local, dev, and prod) doesn't work if you're in a custom hosting environment.

You have to basically reset the settings BLT provides, and tell Drupal which config split should be active based on your own logic. In my case, I have a site which only has a local, ci, and prod environment. To override the settings defined in BLT's included config.settings.php file, I created a config.settings.php file in my site in the path docroot/sites/settings/config.settings.php, and I put in the following contents:

/**
 * Settings overrides for configuration management.
 */

// Disable all splits which may have been enabled by BLT's configuration.
foreach ($split_envs as $split_env) {
  $config["$split_filename_prefix.$split_env"]['status'] = FALSE;
}

$split = 'none';

// Local env.
if ($is_local_env) {
  $split = 'local';
}

// CI env.
if ($is_ci_env) {
  $split = 'ci';
}

// Prod env.
if (getenv('K8S_ENVIRONMENT') == 'prod') {
  $split = 'prod';
}

// Enable the environment split only if it exists.
if ($split != 'none') {
  $config["$split_filename_prefix.$split"]['status'] = TRUE;
}
The K8S_ENVIRONMENT refers to an environment variable I have set up in the production Kubernetes cluster where the BLT Drupal 8 codebase is running. There are a few other little tweaks I've made to make this BLT project build and run inside a Kubernetes cluster, but I'll leave those for another blog post and another day :)
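For reference, this is roughly how such a variable would be set on the container in a Kubernetes Deployment manifest—a hedged sketch, with all names as placeholders:

```yaml
# Hypothetical fragment of a Kubernetes Deployment showing how an environment
# variable like K8S_ENVIRONMENT could be set on the Drupal container.
spec:
  template:
    spec:
      containers:
        - name: drupal
          image: example.com/drupal-blt:latest
          env:
            - name: K8S_ENVIRONMENT
              value: prod
```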

Aug 02 2018

Over the past decade, I've enjoyed presenting sessions at many DrupalCamps, DrupalCon, and other tech conferences. The conferences are some of the highlights of my year (at least discounting all the family things I do!), and lately I've been appreciative of the local communities I meet and get to be a part of (even if for a very short time) at Drupal Camps.

The St. Louis Drupal Users Group has chosen to put off its annual camp until 2019, so we're guiding people to DrupalCorn Camp, which is only a little bit north of us, in Iowa.

NEDCamp New England Drupal Camp logo

And I was extremely excited to receive an invitation to present the keynote at NEDCamp (New England Drupal Camp) 2018 in Rhode Island! Not only that, the keynote (and indeed the entire camp this year) is focused on DevOps—a topic/philosophy/role very near and dear to my heart.

I maintain Drupal VM, wrote Ansible for DevOps, and maintain hundreds of open source projects used by thousands of organizations to make their DevOps dreams a reality. In the keynote, I will be focusing a little more on the philosophy of DevOps, and how adopting this philosophy can accelerate your development and site building process, while reducing pain (outages, bugs, angry users and clients).

I'd love to see you at NEDCamp—or DrupalCorn (fingers crossed I'll be able to make it!)—later this year!

May 21 2018

I started Hosted Apache Solr almost 10 years ago, in late 2008, so I could more easily host Apache Solr search indexes for my Drupal websites. I realized I could also host search indexes for other Drupal websites too, if I added some basic account management features and a PayPal subscription plan—so I built a small subscription management service on top of my then-Drupal 6-based Midwestern Mac website and started selling a few Solr subscriptions.

Back then, the latest and greatest Solr version was 1.4, and now-popular automation tools like Chef and Ansible didn't even exist. So when a customer signed up for a new subscription, the pipeline for building and managing the customer's search index went like this:

Hosted Apache Solr original architecture

Original Hosted Apache Solr architecture, circa 2009.

Hosted Apache Solr was run entirely on three servers for the first few years—a server running the Drupal website, and two regional Apache Solr servers running Tomcat and Apache Solr 1.4. With three servers and a few dozen customers, managing everything using what I'll call GaaS ("Geerlingguy as a Service") was fairly manageable.

But fast forward a few years, and organic growth meant Hosted Apache Solr was now spread across more than 10 servers, and Apache Solr needed to be upgraded to 3.6.x. This was a much more daunting upgrade, especially since many new customers had more demanding uptime needs (and a lot more production search traffic!). The upgrade was managed via shell scripts and lots of testing on a staging server, but it was a bit of a nail-biter, to be sure. A few customers had major issues after the upgrade, and I learned a lot from the experience—most especially the importance of fast, automated backup-and-restore tooling (it's not good enough to just have reliable backups!).

It was around the time of Apache Solr 4.6's release that I discovered Ansible and started adopting it for all my infrastructure automation (so much so that I eventually wrote a book on Ansible!). For the Apache Solr 3.6 to 4.6 upgrade, I used an Ansible playbook which gave me better control over the rollout process, and also made it easier to iterate on testing everything in an identical non-production environment.

But at that time, there were still a number of infrastructure operations which could be classified as 'GaaS', and they took up some of my time. Optimizing the infrastructure operations even further meant I needed to automate more, and start changing some of the architecture to make things more automatable!

So in 2015 or so, I conceived an entirely new architecture for Hosted Apache Solr:

Hosted Apache Solr Docker Jenkins and Ansible-based Architecture

Docker was still new and changing rapidly, but I saw a lot of promise in terms of managing multiple search subscriptions with more isolation, and especially with the ability to move search indexes between servers more efficiently (in a more 'clustered' environment).

Unfortunately, life got in the way: I had multiple health-related problems that robbed me of virtually all my spare time (Hosted Apache Solr is one of many side projects, in addition to my book (Ansible for DevOps), Server, and hundreds of open source projects).

But the architecture was sound in 2015, even though—at the time—Docker was still a little unstable. In 2015, I had a proof of concept running which allowed me to run multiple Docker containers on a single server, but I was having trouble getting requests routed to the containers, since each one was running a separate search index for a different client.

Request routing problems

One major problem I had to contend with was a legacy architecture design where each client's Drupal site would connect directly to one of the servers by hostname, e.g. "", or "". Originally this wasn't a major issue, as I had few servers and would scale individual 'pet' servers up and down as capacity dictated. But as the number of servers increased, and capacity needs fluctuated more and more, this became a real pain point; mostly, it made it hard for me to move a client from one server to another, because the move had to be coordinated with the client and couldn't be dynamic.

To resolve this problem, I decided to integrate Hosted Apache Solr's automation with AWS's Route53 DNS service, which allows me to dynamically assign a new CNAME record to each customer's search index—so instead of a raw server hostname, a customer uses a dedicated subdomain to connect to Hosted Apache Solr. If I need to move the search index to another server, I just re-point the CNAME record after moving the search index data. Problem solved!
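The Route53 side of such a move boils down to a single UPSERT on the customer's CNAME record. As a hedged illustration (the zone, customer domain, and target hostname below are all placeholders, not real values), the change-batch document might look like:

```json
{
  "Comment": "Move customer search index to a new Solr server",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "customer123.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "solr-us-east-2.example.com" }]
      }
    }
  ]
}
```

A document like this gets passed to `aws route53 change-resource-record-sets --hosted-zone-id ZONEID --change-batch file://change.json`, or the equivalent Ansible route53 task.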

Another problem had to do with authentication: Hosted Apache Solr has used IP-based authentication for search index access since the beginning. Each client's search index is firewalled and only accessible by one or more IP addresses, configured by the client. This is a fairly secure and recommended way to configure access for Apache Solr search, but there are three major problems with this approach:

  1. Usability: Clients had to discover their servers' IP address(es), and enter them into their subscription details. This creates friction in the onboarding experience, and can also cause confusion if—and this happens a lot—the actual IP address of the server differs from the IP address their hosting provider says is correct.
  2. Security: A malicious user with a valid subscription could conceivably spoof IP addresses to gain access to other clients' search indexes, provided they're located on the same server (since multiple clients are hosted on a single server, and the firewall was at the server level).
  3. Cloud Compatibility: Many dynamic cloud hosting providers like AWS and Pantheon allocate IP addresses to servers dynamically, so a client can't use one stable IP address—or even a range of IP addresses—to connect to Hosted Apache Solr's servers.

Luckily, there's a second way to provide an authentication layer for Apache Solr indexes, and that's HTTP Basic Authentication. And HTTP Authentication is well supported by all versions of Drupal's Apache Solr modules, as well as almost all web software in existence, and it resolves all three of the above issues!
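Under the hood there's nothing exotic about HTTP Basic Authentication: the client simply sends an Authorization header containing a base64-encoded "user:password" pair with each request. A minimal sketch (the credentials here are made up):

```shell
# Build the Authorization header value a Drupal Solr module would send
# (placeholder credentials, for illustration only):
credentials=$(printf 'exampleuser:examplepass' | base64)
echo "Authorization: Basic $credentials"
```

Because it's just a header, every HTTP client library—and therefore every Drupal Solr connection module—supports it out of the box.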

I settled on using Nginx to proxy all web requests to the Solr backends because, using server directives, I could support both dynamic per-customer hostnames and HTTP Basic Authentication, with Nginx configuration like:

server {
  listen 80;
  server_name [customer-subdomain];

  auth_basic "Connection requires authentication.";
  auth_basic_user_file /path/to/htpasswd-file;

  location / {
    proxy_pass http://127.0.0.1:[solr-port];
  }
}
Each customer gets a Solr container running on a specific port (solr-port), with a unique hostname and a username and password stored on the server using htpasswd.
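Provisioning a customer then includes writing an htpasswd entry for them. If the htpasswd utility isn't handy, the same Apache-compatible hash can be generated with openssl—a hedged sketch, with placeholder username, password, and salt:

```shell
# Generate an Apache MD5 (apr1) password hash and append an htpasswd line
# (username, password, and salt are placeholders):
hash=$(openssl passwd -apr1 -salt abcdefgh examplepassword)
echo "examplecustomer:$hash" >> htpasswd-file
```

Nginx's auth_basic_user_file directive reads entries in exactly this "user:hash" format.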

Good migrations

This new architecture worked great in my testing, and was a lot easier to automate (it's always easier to automate something when you have a decade of experience with it and work on a greenfield rewrite of the entire thing...)—but I needed to support a large number of existing customers, ideally with no changes required on their end.

That's a tall order!

But after some more medically-induced delays in the project in 2017 and early 2018, I finally had the time to work on the migration/transition plan. I basically had to:

  1. Build an Ansible playbook to backup a Solr server, delete it, build a new one with the same hostname, then restore all the existing client Solr index data, with as little downtime as possible.
  2. Support IP-based authentication for individual search indexes during a transition period.
  3. Build functionality into the Drupal website which allows the customer to choose when to transition from IP-based authentication to HTTP Basic Authentication.
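The first step's playbook logic, boiled down to a hedged sketch (hosts, paths, and variable names are invented for illustration; the real playbook handles backup, teardown, and restore in far more detail):

```yaml
# Simplified sketch of the backup / rebuild / restore flow
- hosts: solr_servers
  become: true
  tasks:
    - name: Back up all client Solr index data
      archive:
        path: /var/solr/data
        dest: /backups/solr-data.tar.gz

    - name: Run one Solr container per client subscription
      docker_container:
        name: "solr-{{ item.id }}"
        image: solr:6.6
        published_ports:
          - "{{ item.port }}:8983"
      loop: "{{ client_subscriptions }}"

    - name: Restore each client's index data
      unarchive:
        src: /backups/solr-data.tar.gz
        dest: /var/solr-containers
        remote_src: true
```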

The first task was the most straightforward, but took the longest; migrating all the data for dozens of servers with minimal downtime while completely rebuilding them all is a bit of a process, especially since the entire architecture for the new version was different. It required a lot of testing, and I tested against an automated test bed of 8 Drupal sites, across Drupal 6, 7, and 8, running all the popular Apache Solr connection modules with a variety of configurations.

For IP-based authentication, I wrote some custom rules in an Nginx server directive that would route requests from certain IP addresses or IP ranges to a particular Solr container; this was fairly straightforward, but I spent a bit of time making sure that having a large number of these legacy routing rules wouldn't slow down Nginx or cause any memory contention (luckily, they didn't, even when there were hundreds of them!).
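Those legacy rules look roughly like the following sketch (addresses and ports are placeholders): an Nginx geo block maps known client IPs or ranges to a backend, and a catch-all server proxies accordingly.

```nginx
# Map legacy client IPs (or CIDR ranges) to their Solr container's port
geo $legacy_backend {
    default          "";
    203.0.113.10     "127.0.0.1:8981";
    198.51.100.0/24  "127.0.0.1:8982";
}

server {
    listen 80 default_server;

    location / {
        # Unknown IPs are rejected; known legacy clients are proxied through
        if ($legacy_backend = "") { return 403; }
        proxy_pass http://$legacy_backend;
    }
}
```

Because geo does a single prefix-tree lookup per request, even hundreds of entries add negligible overhead.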

Finally, after I completed the infrastructure upgrade process, I worked on the end-user-facing functionality for upgrading to the new authentication method.

Letting the customer choose when to transition

Learning from past experience, it's never nice to automatically upgrade clients to something newer, even after doing all the testing you could possibly do to ensure a smooth upgrade. Sometimes it may be necessary to force an upgrade... but if not, let your customers decide when they want to click a button to upgrade.

I built and documented the simple upgrade process (basically, edit your subscription and check a box that says "Use HTTP Basic Authentication"), and it goes something like:

  1. User updates subscription node in Drupal.
  2. Drupal triggers Jenkins job to update the search server configuration.
  3. Jenkins runs Ansible playbook.
  4. Ansible adds a Route53 domain for the subscription, updates the server's Nginx configuration, and restarts Nginx.

The process again worked flawlessly in my testing, with all the different Drupal site configurations... but in any operation which involves DNS changes, there's always a bit of an unknown. A few customers have reported issues with the length of time it takes for their Drupal site to re-connect after updating the server hostname, but most customers who have made the upgrade have had no issues and are happy to stop managing server IP addresses!

Why not Kubernetes?

A few people I've discussed the architecture with almost immediately suggested using Kubernetes (or some other similarly-featured container orchestration/scheduling layer). There are a couple of major reasons I've decided to stick with vanilla Docker containers and a much more traditional scheduling approach (assign a customer to a particular server):

  1. Legacy requirements: Currently, most clients are still using the IP-based authentication and pointing their sites at the servers directly. Outside of building a complex load balancing/routing layer on top of everything else, this is not that easy to support with Kubernetes, at least not for a side project.
  2. Complexity: I'm working on my first Kubernetes-in-production project right now (but for pay, not for a side hustle), and building, managing, and monitoring a functional Kubernetes environment is still a lot of work. Way too much for something I manage in my spare time while expecting to maintain 99.9%+ uptime, as I have for a decade!

Especially considering many Kubernetes APIs on which I'd be reliant are still in alpha or beta status (though many use them in production), the time's not yet ripe for moving a service like Hosted Apache Solr over to it.

Summary and Future plans

Sometimes, as a developer, working on new 'greenfield' projects can be very rewarding. No technical debt to manage, no existing code, tests, or baggage to deal with. You just pick what you think is the best solution and soldier on!

But upgrading projects like Hosted Apache Solr—with ten years of baggage and an architecture foundation in need of a major overhaul—can be just as rewarding. The challenge is different, in terms of taking an existing piece of software and transforming it (versus building something new). The feeling of satisfaction after a successful major upgrade is the same as I get when I launch something new... but in some ways it's even better because an existing audience (the current users) will immediately reap the benefits!

The new Hosted Apache Solr infrastructure architecture has already made automation and maintenance easier, but there are some major client benefits in the works now, too, like allowing clients to choose any Solr version they want, better management of search indexes, better statistics and monitoring, and in the future even the ability to self-manage configuration changes!

Finally... if you need to supercharge your Drupal site search, consider giving Hosted Apache Solr a try today :)

May 03 2018
May 03

A question which I see quite often in response to posts like A modern way to build and develop Drupal 8 sites, using Composer is: "I want to start using Composer... but my current Drupal 8 site wasn't built with Composer. Is there an easy way to convert my codebase to use Composer?"

Convert a tarball Drupal codebase to a Composer Drupal codebase

Unfortunately, the answer to that is a little complicated. The problem is the switch to managing your codebase with Composer is an all-or-nothing affair... there's no middle ground where you can manage a couple modules with Composer, and core with Drush, and something else with manual downloads. (Well, technically this is possible, but it would be immensely painful and error-prone, so don't try it!).

But since this question comes up so often, and since I have a Drupal 8 codebase that I built that doesn't currently use Composer, I thought I'd record the process of converting this codebase from tarball-download-management (where I download core, then drag it into the codebase, download a module, drag it into modules/, etc.) to being managed with Composer. This blog post contains the step-by-step guide using the method I recommend (basically, rebuilding your codebase from scratch with modern Composer/Drupal best practices), as well as a video of the process (coming soon - will be on my YouTube channel!).

Note: There are a few tools that attempt to convert your existing Drupal codebase to a Composer-managed codebase (e.g. composerize-drupal, or Drupal Console's composerize command), but I have found them to be a little more trouble than they are worth. I recommend rebuilding the codebase from scratch, like I do in this guide.

Getting started - taking an inventory of the codebase

The first thing you need to do is take an inventory of all the stuff that makes up your Drupal codebase. Hopefully the codebase is well-organized and doesn't contain a bunch of random files thrown willy-nilly throughout. And hopefully you didn't hack core or contrib modules (though if you did, as long as you did so using patches, you'll be okay—more on that later).

I'm going to work on converting the codebase behind my Raspberry Pi Dramble website. Admittedly, this is a very small and tidy codebase, but it's a good one to get started with. Here's what it looks like right now:

Drupal codebase before Composer conversion

Note: I use Git to manage my codebase, so any changes I make can be undone if I completely break my site. If you're not using Git or some other VCS to version control your changes... you should make sure you have a backup of the current working codebase—and start using version control!

The most important parts are:

  • Drupal Core
  • Modules (/modules): I have one contrib module, admin_toolbar
  • Install profiles (/profiles): mine is empty
  • Themes (/themes): I have one custom theme, pidramble

For this particular site, I don't customize the robots.txt or .htaccess files, though I sometimes do for other sites. So as far as an inventory of my current codebase goes, I have:

  • Drupal Core
  • Admin Toolbar
  • pidramble (custom theme)
  • No modifications to core files in the docroot

Now that I know what I'm working with, I'm ready to get started switching to Composer.

Rebuilding with Composer

I'm about to obliterate my codebase as I know it, but before doing that, I need to temporarily copy out only my custom code and files (in my case, just the pidramble theme) into a folder somewhere else on my drive.

Next up, the scariest part of this whole process: delete everything in the codebase. The easiest way to do this, and include invisible files like the .htaccess file, is to use the Terminal/CLI and in the project root directory, run the commands:

# First command deletes everything besides the .git directory:
find . -path ./.git -prune -o -exec rm -rf {} \; 2> /dev/null

# Second command stages the changes to your repository:
git add -A

# Third command commits the changes to your repository:
git commit -m "Remove all files and folders in preparation for Composer."

At this point, the codebase is looking a little barren:

Drupal codebase is empty before converting to Composer

Now we need to rebuild it with Composer—and the first step is to set up a new codebase based on the Composer template for Drupal projects. Run the following command in your now-empty codebase directory:

composer create-project drupal-composer/drupal-project:8.x-dev new-drupal-project --stability dev --no-interaction

After a few minutes, that command should complete, and you'll have a fresh new Composer-managed codebase at your disposal, inside the new-drupal-project directory! We need to move that codebase into the project root directory and delete the then-empty new-drupal-project directory, so run:

mv new-drupal-project/* ./
# Also move hidden files; ignore harmless errors about '.' and '..':
mv new-drupal-project/.* ./ 2> /dev/null
rm -rf new-drupal-project

And now you should have a Drupal project codebase that looks like this:

Drupal codebase after building a new Composer Template for Drupal

But wait! The old codebase had Drupal's docroot in the project root directory... where did my Drupal docroot go? The Composer template for Drupal projects places the Drupal docroot in a subdirectory of the project instead of the project root—in this case, a web/ subdirectory:

Drupal codebase - web docroot subdirectory from Composer Template

Note: You might also see a vendor/ directory, and maybe some other directories; note that those are installed locally but won't be committed to your Git codebase—at least not by default. I'll mention a few different implications of how this works later!

There are a few good reasons for putting Drupal's actual docroot in a subdirectory:

  1. You can store other files besides the Drupal codebase in your project repository, and they won't be in a public web directory where anyone can access them by default.
  2. Composer can manage dependencies outside of the docroot, which is useful for development and ops tools (e.g. Drush), or for files which shouldn't generally be served in a public web directory.
  3. Your project can be organized a little better (e.g. you can just have a project-specific README and a few scaffold directories in the project root, instead of a ton of random-looking Drupal core files like index.php, CHANGELOG.txt, etc. which have nothing to do with your specific Drupal project).

Now that we have the base Composer project set up, it's time to add in all the things that make our site work. Before doing that, though, you should commit all the core files and Drupal project scaffolding that was just created:

git add -A
git commit -m "Recreate project from Composer template for Drupal projects."

Adding contrib modules, themes, and profiles

For each contributed module, theme, or install profile your site uses, you need to require it using Composer to get it added to your codebase. For example, since I only use one contributed module, I run the following command:

composer require drupal/admin_toolbar:~1.0

Parsing this out, we are telling Composer to require (basically, add to our project) the admin_toolbar project from the Drupal packagist repository. If you look at the Admin Toolbar project page, you'll notice the latest version is something like 8.x-1.23... so where's this weird ~1.0 version coming from? Drupal's packagist repository translates versions from the traditional Drupal style (e.g. 8.x-1.0) to a Composer-compatible style. And we want the latest stable version in the 1.x series of releases, so we say ~1.0, which tells Composer to grab whatever is the latest 1.x release. If you visit a project's release page (e.g. Admin Toolbar 8.x-1.23), there's even a handy command you can copy to get the latest version of the module (see the 'Install with Composer' section under the download links):

Install with Composer on project release page

You could also pin a specific version of a module (if you're not running the latest version currently) using composer require drupal/admin_toolbar:1.22. It's preferable not to require specific versions of modules... but if you're used to using Drupal's update manager and upgrading one module at a time to a specific newer version, you can still work that way if you need to. Composer makes it easier to update modules without even using Drupal's update manager, though, so I opt for a more generic constraint like ~1.0.

Note: If you look in the docroot after adding a contrib module, you might notice the module is now inside modules/contrib/admin_toolbar. In my original codebase, the module was located in the path modules/admin_toolbar. When you move a module to another directory like this, you need to make sure you clear all caches on your Drupal site after deploying the change, and either restart your webserver or manually flush your PHP opcache/APC (otherwise there could be weird errors when PHP looks in the wrong directory for module files!).

After you run a composer require for each contrib project (you can also run composer require drupal/project_one drupal/project_two etc. if you want to add them all in one go), it's time to commit all the contrib projects to your codebase (git add -A, then git commit -m "Add contrib projects to codebase."), then move on to restoring custom modules, themes, and profiles (if you have any).

Adding custom modules, themes, and profiles

Following the convention of having 'contrib' modules/themes/profiles in a special 'contrib' subdirectory, a lot of people (myself included) use the convention of placing all custom projects into a 'custom' subdirectory.

I have a custom theme, pidramble, so I created a custom directory inside themes/, and placed the pidramble theme directory inside there:

Drupal codebase - pidramble theme inside custom themes subdirectory

For any of your custom code:

  • Place modules in web/modules/custom/
  • Place themes in web/themes/custom/
  • Place profiles in web/profiles/custom/

Note: If you use Drupal's multisite capabilities (where you have one codebase but many websites running off of it), then site-specific custom modules, themes, and profiles can also be placed inside site-specific folders (e.g. inside web/sites/[sitename]/modules/custom).

Adding libraries

Libraries are a little different, especially since right now there are a few different ways people manage third party libraries (e.g. Javascript libraries) in Drupal 8. It seems most people have settled on using Asset Packagist to bundle up npm dependencies, but there is still active work in the Drupal community to standardize third party library management in this still-nascent world of managing Drupal sites with Composer.

If you have a bunch of libraries you need to add to your codebase, please read through these issues for some ideas for how to work with Asset Packagist:

Adding customizations to .htaccess, robots.txt, etc.

You might need to customize certain files that are included with Drupal core for your site—like adding exclusions to robots.txt or adding a redirection to .htaccess. If so, make sure you make those changes and commit them to your git repository. The Composer template project recommends you do a git diff on any customized files any time you update Drupal core.

Note: There are more advanced ways of managing changes to these 'scaffold' files if you want to take the human review out of Drupal core updates, but they require a bit more work to run smoothly, or may require you to manually apply changes to your customized files whenever Drupal core updates those files in a new release.

Adding patches to core and contrib projects

One of the best things about using the Composer template project is that it automatically sets up the composer-patches project, which allows you to apply patches directly to Drupal core and contrib projects. This is a slightly more advanced topic, and most sites I manage don't need this feature, but it's very nice to have when you do need it!

Add a local development environment

While you're revamping your codebase, why not also revamp your local development process, and add a local environment for easy testing and development of your code? In my blog post A modern way to build and develop Drupal 8 sites, using Composer, I showed how easy it is to get your codebase up and running with Drupal VM's Docker container:

composer require --dev geerlingguy/drupal-vm-docker
docker-compose up -d

Then visit http://localhost/ in your browser, and you can install a new Drupal site locally using your codebase (or you can connect to the Docker container's MySQL instance and import a database from your production site for local testing—always a good idea to validate locally before you push a major change like this to production!).

Drupal VM Docker container running a new Drupal Composer template project

Note: The quickest way to import a production database locally with this setup is to do the following:

  1. Drop a .sql dump file into the project root (e.g. sql-dump.sql).
  2. Import the database: docker exec drupal_vm_docker bash -c "mysql drupal < /var/www/drupalvm/drupal/sql-dump.sql"
  3. Clear caches: docker exec drupal_vm_docker bash -c "drush --root=/var/www/drupalvm/drupal/web cr"

Deploying the new codebase

One major change—at least in my Raspberry Pi Dramble codebase—is the transition from having the Drupal docroot be in the project root to having the docroot be in the web/ subdirectory. If I were to git push this to my production web server (which happens to be a little Raspberry Pi on my desk, in this case!), then the site would break because docroot is in a new location.

So the first step is to make sure your webserver knows to look in project-old-location/web for the docroot in your Apache, Nginx, etc. configuration. In my case, I updated the docroot in my Apache configuration, switching it from /var/www/drupal to /var/www/drupal/web.

Then, to deploy to production:

  1. Take a backup of your site's database and codebase (it never hurts, especially if you're about to make a major change like this!)
  2. Stop Apache/Nginx (whatever your server is running) so no web traffic is served during this docroot transition.
  3. Deploy the updated code (I used git, so did a git pull on my production web server).
  4. Run composer install --no-dev inside the project root, so the production server has all the right code in all the right places (--no-dev means 'don't install development tools that aren't needed in production').
  5. Start Apache or Nginx so it starts serving up the new docroot subdirectory.

One more caveat: since you moved the docroot to a new directory, the public files directory might also need to be moved and/or have its permissions changed so things like CSS and JS aggregation work correctly. If this is the case, make sure the sites/default/files directory has the correct file ownership and permissions (usually something like www-data and 775), and if your git pull wiped out the existing files directory, make sure you restore the rest of its contents from the backup you took in step 1!
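As a hedged example (the docroot path and web server user are placeholders that will differ per server), fixing up the public files directory might look like:

```shell
# Adjust these to your server's docroot and web user (placeholders shown):
FILES_DIR="${FILES_DIR:-web/sites/default/files}"
WEB_USER="${WEB_USER:-www-data}"

mkdir -p "$FILES_DIR"
# chown needs root; skip quietly if we can't change ownership here
chown -R "$WEB_USER":"$WEB_USER" "$FILES_DIR" 2>/dev/null || true
chmod -R 775 "$FILES_DIR"
```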

Note: There are many reasons not to run composer install on your production server—and in some cases, it might not even work! It's best to 'build' your codebase for production separately, on a CI server or using a service like Circle CI, Travis CI, etc., then to deploy the built codebase to the server... but this requires another set of infrastructure management and deployment processes, so for most of my smaller projects I just have one Git codebase I pull from and run composer install on the production web server.

Another option is to commit everything to your codebase, including your vendor directory, the web/core directory, all the contrib modules, etc. This isn't ideal, but it works and might be a better option if you can't get composer install working in production for one reason or another.

Managing everything with Composer

One of the main drivers for this blog post was the questions Matthew Grasmick and I got after our session at DrupalCon Nashville 2018, How to build a Drupal site with Composer AND keep all of your hair. Even before then, though, I have regularly heard from people who are interested in starting to use Composer, but have no clue where to start, since their current codebase is managed via tarball, or via Drush.

And this is an especially pressing issue for those using Drush, since Drush 9 doesn't even support downloading Drupal or contributed projects anymore—from the Drush 9 documentation:

Drush 9 only supports one install method. It requires that your Drupal 8 site be built with Composer and Drush be listed as a dependency.

So while you can continue managing your codebase using the tarball-download method for the foreseeable future, I would highly recommend you consider moving your codebase to Composer, since a lot of the tooling, tutorials, and the rest of the Drupal ecosystem is already in the middle of a move in that direction. There are growing pains, to be sure, but there are also a lot of benefits, many of which are identified in the DrupalCon Nashville presentation I mentioned earlier.

Finally, once you are using Composer, make sure you always run composer install after updating your codebase (whether it's a git pull on a production server, or even on your workstation when doing local development!).

Apr 26 2018
Apr 26

The St. Louis Drupal Users Group has hosted a Drupal Camp in the 'Gateway to the West' for four years (since 2014), but this year, the organizers have decided to take a year off, for various reasons. Our camp has grown a little every year, and last year we increased the scope and usefulness of the camp even more by adding a well-attended training day—but life and work have taken precedence this year, and nobody is able to take on the role of 'chief organizer'.

Meet me in Des Moines St. Louis Drupal Camp goes to DrupalCorn in Iowa

All is not lost, however! There are other great camps around the Midwest, and this year we're directing everyone to our northern neighbors, in Iowa: DrupalCorn Camp is going to be held in Des Moines, Iowa, from September 27-30, 2018!

We'll still be hosting our monthly Drupal meetups throughout the year (we've had an ongoing monthly meetup in the St. Louis area for more than 10 years now!), and you can follow along with the St. Louis Drupal community (and find out about next year's Camp!) via our many social media channels.

Apr 23 2018
Apr 23

Mollom End of Life Announcement from their homepage

Earlier this month, Mollom was officially discontinued. If you still have the Mollom module installed on some of your Drupal sites, form submissions that were previously protected by Mollom will behave as if Mollom was offline completely, meaning any spam Mollom would've prevented will be passed through.

For many Drupal sites, especially smaller sites that deal mostly with bot spam, there are a number of great modules that will help prevent 90% or more of all spam submissions, for example:

  • Honeypot: One of the most popular and effective bot spam prevention modules, and it doesn't harm the user experience for normal users (disclaimer: I maintain the Honeypot module).
  • CAPTCHA or reCAPTCHA: Modules that use a sort of 'test' to verify a human is submitting the form. Some tradeoffs in either case, but they do a decent job of deterring bots.
  • Antibot: An even simpler method than what Honeypot or CAPTCHA uses... but might not be adequate for some types of bot spam.

There are many other modules that use similar tricks as the above, but many exist only for Drupal 7 (or only for Drupal 8, and not 7), and many don't have the long-term track record and maintenance history of the more popular modules above.

But there's a problem—all the above solutions are primarily for mitigating bot spam, not human spam. Once your Drupal site grows beyond a certain level of popularity, you'll notice more and more spam submissions coming through even if you have all the above tools installed!

This is the case with this website. The site has many articles which have become popular resources for all things Ansible, Drupal, photography, etc., and I get enough traffic that somewhere between 5,000-10,000 spam comments are submitted per day. Most of these are bots, and are caught by Honeypot. But 2% or so of the spam is submitted by humans, and it goes right through Honeypot's bot-targeted filtering!

Filtering Human Spam with CleanTalk

Mollom did a respectable job of cleaning up those 100 or so daily human spam submissions. A few would get through every day, but I whipped up a custom module that shoots me an email with links to approve or deny comments, so it's not a large burden to deal with fewer than 10 spam submissions a day.

The day Mollom reached EOL, the number of emails started spiking to 50-100 every day... and that was a large burden!

So I started looking for new solutions, fast. The first I looked into was Akismet, ostensibly 'for WordPress', but it works with many different CMSes via connector modules. For Drupal, there's the AntiSpam module, but it has no Drupal 8 release yet, so it was kind of a non-starter. The Akismet module has been unmaintained for years, so that's a non-starter too...

Looking around at other similar services, I found CleanTalk, which has an officially-supported CleanTalk module for both Drupal 7 and Drupal 8, and it checks all the boxes I needed for my site:

  • Well-maintained module for Drupal 7 and 8
  • Easy SaaS interface for managing settings (better and more featureful than Mollom or Akismet, IMO)
  • Free trial so I could see how well it worked for my site (which I used, then opted for a paid plan after 3 days)
  • Blacklist and IP-based filtering features for more advanced use cases (but only if you want to use them)

I installed CleanTalk in the first week of April, and so far have only had 5 actual human spam comment submissions make it through the CleanTalk filter; 4 of them were marked as 'possible spam', and one was approved (there's a setting in the module to auto-approve comments that look legit, or have all comments go into an approval queue using Drupal's built-in comment approval system).

So CleanTalk worked better than Mollom did, and it was a little simpler to get up and running. The one tradeoff is that CleanTalk's Drupal module isn't quite as 'Drupally' as Mollom or Honeypot. By that, I mean it's not built to be a flexible "turn this on for any kind of form on your site" solution, but it's more tailored for things like:

  • Drupal comments and comment entities
  • User registration
  • Webform

To be fair, those are probably the top three use cases for spam prevention—but as of right now, CleanTalk can't easily be made to work with generic entity submissions (e.g. forum nodes, custom node types, etc.), so it works best on sites with simpler needs.

CleanTalk's pricing is pretty simple (and IMO pretty cheap) too: for one website, it's currently $8/year, or cheaper if you pay for multiple years in advance.

Disclaimer: CleanTalk offers a free year of service for those who post a review of CleanTalk on their websites, and I'll probably take them up on that offer... but I had actually written the first draft of this blog post over a month ago, before I found out about this offer, when I was initially investigating using CleanTalk for my DrupalCon Nashville session submission Mollom is gone. Now what?. I would still write the review exactly the same with or without their offer—it's been that good in my testing! Heck, I already paid for 3 years of service!


If you have a Drupal site and are disappointed by the uptick in user registration, webform, and comment spam since Mollom's end of life, check out CleanTalk for a potentially better solution!

Edit: Also see some excellent suggestions and ongoing work in integrating other spam prevention services into Drupal in the comments below!

Apr 19 2018
Apr 19
Fellow Acquian Matthew Grasmick and I just presented How to build a Drupal site with Composer AND keep all of your hair at DrupalCon Nashville, and though the session wasn't recorded, we posted the slides and hands-on guide to using Composer to manage your Drupal 8 sites to the linked session page.
Apr 13 2018
Apr 13

At DrupalCon Nashville 2018, I became deeply interested in the realm of first-time Drupal experiences, specifically around technical evaluation, and how people would get their feet wet with Drupal. There were two great BoFs related to the topic which I attended, and which I hope will bear some fruits over the next year in making Drupal easier for newcomers:

There are a number of different tools people can use to run a new Drupal installation, but documentation and ease of use for beginners is all over the place. The intention of this project is to highlight the most stable, simple, and popular ways to get a Drupal site installed and running for testing or site building, and measure a few benchmarks to help determine which one(s) might be best for Drupal newcomers.

For reference, here's a spreadsheet I maintain of all the community-maintained local Drupal development environment tools I've heard of.

Throughout the week at DrupalCon, I've been adding automated scripts to build new Drupal environments, seeing what makes different development tools like Lando, Drupal VM, Docksal, DDEV, and even Drupal core (using code from the active issue Provide a single command to install and run Drupal) tick. And now I've compiled some benchmark data to help give an overview of what's involved with each tool, for someone who's never used the tool (or Drupal) before.

All the code for these benchmarks is available under an open source license in the Drupal, the Fastest project on GitHub.

Time to Drupal

One of my favorite metrics is "time to Drupal": basically, how long does it take, at minimum, for someone who just discovered a new tool to install a new Drupal website and have it running (front page accessible via the browser) locally?
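A metric like this can be captured with a very simple timing harness; here's a sketch (the command being timed is a placeholder, not the actual benchmark scripts from the project):

```shell
#!/usr/bin/env bash
# Sketch: measure 'time to Drupal' for a given tool. The 'sleep 2' below
# is a stand-in for a tool's real setup commands (e.g. 'lando init && lando start').
start=$(date +%s)
sleep 2
end=$(date +%s)
echo "time to Drupal: $((end - start)) seconds"
```

The real benchmarks in the project linked below time everything from "tool not yet initialized" to "front page loads in a browser."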

Time to Drupal - how long it takes different development environments to go from nothing to running Drupal

The new drupal quick-start command that will be included with Drupal core once this patch is merged is by far the fastest way to go from "I don't have any local Drupal environment" to "I'm running Drupal and can start playing around with a fresh new site." And being that it's included with Drupal core (and doesn't even require something like Drush to run), I think it will become the most widely used way to quickly test out and even build simple Drupal sites, just because it's easy and fast!
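For reference, here's roughly what that flow looks like (a sketch assuming a Drupal 8.6.x checkout; the exact invocation may change before the patch is finalized in core):

```shell
# Grab Drupal core and its dependencies, then launch the built-in
# quick-start command (PHP's built-in webserver plus a SQLite database).
git clone --branch 8.6.x drupal
cd drupal
composer install
php core/scripts/drupal quick-start
```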

But the drupal quick-start environment doesn't have much in the way of extra tools, like an email catching system (to prevent your local environment from accidentally sending emails to people), a debugger (like XDebug), a code profiler (like Blackfire or Tideways), etc. So most people, once they get into more advanced usage, would prefer a more fully-featured local environment.

There's an obvious trend in this graph: Docker-based environments are generally faster to get up and running than a Vagrant-based environment like Drupal VM, mostly because the Docker environments use pre-compiled/pre-installed Docker images, instead of installing and configuring everything (like PHP, MySQL, and the like) inside an empty VirtualBox VM.

Now, 'time to Drupal' isn't the only metric you should care about—there are some good reasons people may choose something like Drupal VM over a Docker-based tool—but it is helpful to know that some tools are more than twice as fast as others when it comes to getting Drupal up and running.

Required Dependencies

Another important aspect of choosing one of these tools is knowing what you will need to have installed on your computer to make that tool work. All options require you to have something installed, whether it be just PHP and Composer, or a virtualization environment like VirtualBox or Docker.

How many dependencies are required per development environment

Almost all Drupal local development environment tools are settling on either requiring Docker CE, or requiring Vagrant and VirtualBox. The two exceptions here are:

 runs in the cloud, so it doesn't require any dependencies locally. But you can't run the site you create in locally either, so that's kind of a moot point!
  2. Drupal's quick-start command requires PHP and Composer, which in some sense are less heavyweight to install locally (than Docker or VirtualBox)—but in another sense can be a little more brittle (e.g. you can only easily install and use one version of PHP at a time—a limitation you can bypass easily by having multiple PHP Docker or VM environments).

Overall, though, the number of required dependencies shouldn't turn you off from any one of these tools, unless there are corporate policies that restrict you from installing certain software on your workstation. Docker, VirtualBox, PHP, Composer, Git and the like are pretty common tools for any developer to have running and updated on their computer.

Number of setup steps

While the raw number of steps involved in setting up a local environment is not a perfect proxy for how complex it is to use, it can be frustrating when it takes many commands just to get a new thing working. Consider, for example, most Node.js projects: you usually install or git clone a project, then run npm install, and npm start. The fewer steps involved, the less can go wrong!

How many steps are involved in the setup of a Drupal development tool

The number of individual steps required for each environment varies pretty wildly, and some tools are a little easier to start using than others; whereas Drupal VM relies only on vagrant, and doesn't require any custom command line utility or setup to get started, tools like Lando (lando), Docksal (fin), and DDEV (ddev) each require a couple extra steps to first add a CLI helper utility, then to initialize the overall Docker-based environment—then your project runs inside the environment.

In reality, the tradeoff is that since Docker doesn't include as many frills and plugins as something like Vagrant, Docker-based environments usually require some form of wrapper or helper tool to make managing multiple environments easier (setting up DNS, hostnames, http proxy containers and the like).


In the end, these particular benchmarks don't paint a perfect picture of why any individual developer should choose one local development environment over another. All the environments tested have their strengths and weaknesses, and it's best to try one or two of the popular tools before you settle on one for your project.

But I'm most excited for the new drupal quick-start command that is going to be in core soon—it will make it a lot faster and easier to do something like clone drupal and test a core patch or new module locally within a minute or two, tops! It's not a feature-filled local development environment, but it's fast, it only requires people have PHP (and maybe Composer or Git) installed, and it works great on Mac, Windows (7, 8, or 10!), and Linux.

Apr 11 2018
Apr 11

Note: If you want to install and use PHP 7 and Composer within Windows 10 natively, I wrote a guide for that, too!

[embedded content]

Since Windows 10 introduced the Windows Subsystem for Linux (WSL), it has become far easier to work on Linux-centric software, like most PHP projects, within Windows.

To get the WSL, and in our case, Ubuntu, running in Windows 10, follow the directions in Microsoft's documentation: Install the Windows Subsystem for Linux on Windows 10, and download and launch the Ubuntu installer from the Windows Store.

Once it's installed, open an Ubuntu command line, and let's get started:

Install PHP 7 inside Ubuntu in WSL

Ubuntu has packages for PHP 7 already available, so it's just a matter of installing them with apt:

  1. Update the apt cache with sudo apt-get update
  2. Install PHP and commonly-required extensions: sudo apt-get install -y git php7.0 php7.0-curl php7.0-xml php7.0-mbstring php7.0-gd php7.0-sqlite3 php7.0-mysql.
  3. Verify PHP 7 is working: php -v.

If it's working, you should get output like:

PHP 7 running under Ubuntu under WSL on Windows 10

Install Composer inside Ubuntu in WSL

Following the official instructions for downloading and installing Composer, copy and paste this command into the CLI:

php -r "copy('', 'composer-setup.php');" && \
php -r "if (hash_file('SHA384', 'composer-setup.php') === '544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" && \
php composer-setup.php && \
php -r "unlink('composer-setup.php');"

To make Composer easier to use, run the following command to move Composer into your global path:

sudo mv composer.phar /usr/local/bin/composer

Now you can run composer, and you should get the output:

Composer running under Ubuntu under WSL on Windows 10

That's it! Now you have PHP 7 and Composer running inside Ubuntu in WSL on your Windows 10 PC. Next up, dominate the world with some new PHP projects!

Apr 10 2018
Apr 10

The Drupal community has been on an interesting journey since the launch of Drupal 8 in 2015. In the past three years, as the community has started to get its sea legs 'off the island' (using tools, libraries, and techniques used widely in the general PHP community), there have been growing pains.

One area where the pains have been (and sometimes still are) highly visible is in how Drupal and Composer work together. I've written posts like Composer and Drupal are still strange bedfellows in the past, and while in some ways that's still the case, we as a community are getting closer and closer to a nirvana with modern Drupal site building and project management.

For example, in preparing a hands-on portion of my and Matthew Grasmick's upcoming DrupalCon Nashville lab session on Composer and Drupal, I found that we're already to the point where you can go from literally zero to a fully functional and complete Drupal site codebase—along with a functional local development environment—in about 10 or 15 minutes:

  1. Make sure you have PHP, Composer, and Docker CE installed (Windows users, look here).
  2. Create a Drupal codebase using the Drupal Composer Project: composer create-project drupal-composer/drupal-project:8.x-dev drupal8 --stability dev --no-interaction
  3. Open the project directory: cd drupal8
  4. Add a plugin to build a quick and simple local dev environment using Drupal VM Docker Composer Plugin: composer require --dev geerlingguy/drupal-vm-docker
  5. Start the local dev environment: docker-compose up -d
  6. Open a browser and visit http://localhost/
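The steps above, as one copy-pasteable shell session (assuming PHP, Composer, and Docker are already installed):

```shell
# Create the Composer-managed Drupal codebase.
composer create-project drupal-composer/drupal-project:8.x-dev drupal8 \
  --stability dev --no-interaction
cd drupal8
# Add the Drupal VM Docker Composer plugin as a dev dependency.
composer require --dev geerlingguy/drupal-vm-docker
# Start the local dev environment, then visit http://localhost/ in a browser.
docker-compose up -d
```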

I'm not arguing that Drupal VM for Docker is the ideal local development environment—but accounting for about one hour's work last night, I think this shows the direction our community can start moving once we iron out a few more bumps in our Composer-y/Drupal-y road. Local development environments as Composer plugins. Development tools that automatically configure themselves. Cloud deployments to any hosting provider made easy.

Right now a few of these things are possible. And a few are kind of pipe dreams of mine. But I think this year's DrupalCon (and the follow-up discussions and issues that will result) will be a catalyst for making Drupal and Composer start to go from being often-frustrating to being extremely slick!

If you want to follow along at home, follow this core proposal: Proposal: Composer Support in Core initiative. Basically, we might be able to make it so Drupal core's own Composer usage is good enough to not need a shim like drupal-composer/drupal-project, or a bunch of custom tweaks to a core Composer configuration to build new Drupal projects!

Also, I will be working this DrupalCon to help figure out new and easier ways to make local development easier and faster, across Mac, Linux and Windows (I even brought my clunky old Windows 10/Fedora 26 laptop with me!). There are a number of related BoFs and sessions if you're here (or to watch post-conference), for example:

  • Tuesday
  • Wednesday
Apr 09 2018
Apr 09

Note: If you want to install and use PHP 7 and Composer within the Windows Subsystem for Linux (WSL) using Ubuntu, I wrote a guide for that, too!

[embedded content]

I am working a lot on Composer-based Drupal projects lately (especially gearing up for DrupalCon Nashville and my joint workshop on Drupal and Composer with Matthew Grasmick), and have been trying to come up with the simplest solutions that work across macOS, Linux, and Windows. For macOS and Linux, getting PHP and Composer installed is fairly quick and easy. However, on Windows, little issues seem to crop up here and there.

Since I finally spent a little time getting the official version of PHP for native Windows installed, I figured I'd document the process here. Note that many parts of this process were learned from the concise article Install PHP7 and Composer on Windows 10 from the website KIZU 514.

Install PHP 7 on Windows 10

PHP 7 running in Windows 10 in PowerShell

  1. Install the Visual C++ Redistributable for Visual Studio 2015—this is linked in the sidebar of the PHP for Windows Download page, but it's kind of hidden. If you don't do this, you'll run into a rather cryptic error message, VCRUNTIME140.DLL was not found, and php commands won't work.
  2. Download PHP for Windows. I prefer to use 7.1.x (current release - 1), so I downloaded the latest Non-thread-safe 64-bit version of 7.1.x. I downloaded the .zip file version of the VC14 x64 Non Thread Safe edition, under the PHP 7.1 heading.
  3. Expand the zip file into the path C:\PHP7.
  4. Configure PHP to run correctly on your system:
    1. In the C:\PHP7 folder, rename the file php.ini-development to php.ini.
    2. Edit the php.ini file in a text editor (e.g. Notepad++, Atom, or Sublime Text).
    3. Change the following settings in the file and save the file:
      1. Change memory_limit from 128M to 1G (because Composer can use lots of memory!)
      2. Uncomment the line that reads ; extension_dir = "ext" (remove the ; so the line is just extension_dir = "ext").
      3. In the section where there are a bunch of extension= lines, uncomment the following lines:
        1. extension=php_gd2.dll
        2. extension=php_curl.dll
        3. extension=php_mbstring.dll
        4. extension=php_openssl.dll
        5. extension=php_pdo_mysql.dll
        6. extension=php_pdo_sqlite.dll
        7. extension=php_sockets.dll
  5. Add C:\PHP7 to your Windows system path:
    1. Open the System Control Panel.
    2. Click 'Advanced System Settings'.
    3. Click the 'Environment Variables...' button.
    4. Click on the Path row under 'System variables', and click 'Edit...'
    5. Click 'New' and add the row C:\PHP7.
    6. Click OK, then OK, then OK, and close out of the System Control Panel.
  6. Open PowerShell or another terminal emulator (I generally prefer cmder), and type in php -v to verify PHP is working.

At this point, you should see output like:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\Users\jgeerling> php -v
PHP 7.0.29 (cli) (built: Mar 27 2018 15:23:04) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies

This means PHP is working, yay!

Install Composer on Windows 10

Composer running in Windows 10 in PowerShell

Next, we're going to install Composer by downloading it and moving it into place so we can run it with just the composer command:

  1. Download the Windows Installer for Composer and run it.
  2. Note that the Windows Installer for Composer might ask to make changes to your php.ini file. That's okay; allow it and continue through the setup wizard.
  3. Close out of any open PowerShell or other terminal windows, and then open a new one.
  4. Run the composer command, and verify you get a listing of the Composer help and available commands.

That's it! Now you have PHP 7 and Composer running natively on your Windows 10 PC. Next up, dominate the world with some new PHP projects!

Mar 22 2018
Mar 22

For the past two minor release Drupal core upgrades, I've had major problems trying to get some of my Composer-based Drupal codebases upgraded. For both 8.3.x to 8.4.0, and now 8.4.x to 8.5.0, I've had the following issue:

  1. I have the version constraint for drupal/core set to ~8.0 or ~8.4 in my composer.json.
  2. I run composer update drupal/core --with-dependencies (as recommended in's Composer documentation).
  3. Composer does its thing.
  4. A few things get updated... but not drupal/core. It remains stubbornly on the previous minor release.

Looking around the web, it seems this is a very common problem, and a lot of people soon go for the nuclear (or thermonuclear1) option:

  1. Run composer update (updating everything in the entire project, contrib modules, core, other dependencies, etc.).
  2. Profit?

This works, but it's definitely not ideal. If you have a site that uses a number of contrib modules, and maybe even depends on some of their APIs in custom code or in a custom theme... you don't want to be upgrading core and all contrib modules all in one go. You want to update each thing independently so you can test and make sure things don't break.

So, I was searching around for 'how do I figure out why updating something with Composer doesn't update that thing?', and I got a few good answers. The most important is the command composer prohibits.

Use composer prohibits to figure out what's blocking an update

composer prohibits allows you to see exactly what is preventing a package from being updated. For example, on this codebase, I know I want to end up with drupal/core:8.5.0, so I can run:

composer prohibits drupal/core:8.5.0

This gave me a list of a ton of different Symfony components that seemed to be holding back the upgrade, for example:

drupal/core                     8.5.0       requires          symfony/class-loader (~3.4.0)
drupal-composer/drupal-project  dev-master  does not require  symfony/class-loader (but v3.2.14 is installed)
drupal/core                     8.5.0       requires          symfony/console (~3.4.0)
drupal-composer/drupal-project  dev-master  does not require  symfony/console (but v3.2.14 is installed)
drupal/core                     8.5.0       requires          symfony/dependency-injection (~3.4.0)

Add any blocking dependencies to composer update

So, knowing this, one quick way I can get around this problem is to include symfony/* to update the symfony components at the same time as drupal/core:

composer update drupal/core "symfony/*" --with-dependencies

Unfortunately, it's more difficult to figure out which dependencies exactly are blocking the update. You'd think all of the ones listed by composer prohibits are blocking the upgrade, but as it turns out, only the symfony/config dependency (which is a dependency of Drush and/or Drupal Console, but not Drupal core) was blocking the upgrade on this particular site!
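In that case, a more surgical option than symfony/* is to name the actual blocker (here, symfony/config) alongside drupal/core:

```shell
# Update core plus only the one dependency that was holding it back.
composer update drupal/core symfony/config --with-dependencies
```

This keeps the rest of your contrib modules and dependencies pinned while still letting the core upgrade resolve.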

I learned a bit following the discussion in the issue composer fail to upgrade from 8.4.4 to 8.5.0-alpha1, and I also contributed a new answer to the Drupal Answers question Updating packages with composer and knowing what to update. Finally, I updated the documentation page Update core via Composer (option 4) and added a little bit of the information above in the troubleshooting section, because I'm certain I'm not the only one hitting these issues time and again, every time I try to upgrade Drupal core using Composer!

1The thermonuclear option with Composer is to delete your vendor directory, delete your lock file, hand edit your composer.json with newer package versions, then basically start over from scratch. IMO, this is always a bad idea unless you feel safe upgrading all the things all the time (for some simple sites, this might not be the worst idea, but it still removes all the dependency management control you get when using Composer properly).

Mar 12 2018
Mar 12

Umami demo profile running on Lando for Drupal 8
Testing out the new Umami demo profile in Drupal 8.6.x.

I wanted to post a quick guide here for the benefit of anyone else just wanting to test out how Lando works or how it integrates with a Drupal project, since the official documentation kind of jumps you around different places and doesn't have any instructions for "Help! I don't already have a working Drupal codebase!":

  1. Install Docker for Mac / Docker for Windows / Docker CE (if it's not already installed).
  2. Install Lando (on Mac, brew cask install lando, otherwise, download the .dmg, .exe, .deb, or .rpm).
  3. You'll need a Drupal codebase, so go somewhere on your computer and use Git to clone it: git clone --branch 8.6.x lando-d8
  4. Change into the Drupal directory: cd lando-d8
  5. Run lando init, answering drupal8, ., and Lando D8.
  6. Run lando start, and wait while all the Docker containers are set up.
  7. Run lando composer install (this will use Composer/PHP inside the Docker container to build Drupal's Composer dependencies).
  8. Go to the site's URL in your web browser, and complete the Drupal install wizard with these options:
    1. Database host: database
    2. Database name, username, password: drupal8

At the end of the lando start command, you'll get a report of 'Appserver URLs', like:

APPSERVER URLS  https://localhost:32771                        

You can also get this info (and some other info, like DB connection details) by running lando info. And if you want to install Drupal without using the browser, you could run the command lando drush site-install -y [options] (this requires Drush in your Drupal project, which can be installed via composer require drush/drush).
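For example, using the database details from the install wizard steps above, a non-interactive install might look like this (a sketch; adjust the profile and account options to taste):

```shell
# Add Drush to the project, then install Drupal without touching the browser.
lando composer require drush/drush
lando drush site-install standard -y \
  --db-url=mysql://drupal8:drupal8@database/drupal8 \
  --account-name=admin --account-pass=admin
```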

It looks like Lando has a CloudFlare rule set up that redirects * to, and the https version uses a bare root certificate, so if you want to access the HTTPS version of the default site Lando creates, you need to add an exception to your browser when prompted.

Note that Little Snitch reported once or twice that the lando cli utility was calling out to a metrics site, likely with some sort of information about your environment for their own metrics and tracking purposes. I decided to block the traffic, and Lando still worked fine, albeit with a few EHOSTDOWN errors. It looks like the data it tried sending was:


Nothing pernicious; I just don't like my desktop apps sending metrics back to a central server.

Docker Notes

For those who also use many other dev environments (some of us are crazy like that!), I wanted to note a few things specific to Lando's Docker use:

  • Lando starts 3 containers by default: one for MySQL, one for Apache/PHP, and one for Traefik. The Traefik container grabs host ports 80, 443, and 58086.
  • Running lando stop doesn't stop the Traefik container, so whatever ports it holds won't be freed up until you run lando poweroff.
  • If another running container (or some other service) is already binding to port 80, lando start will find a different free port and use that instead.
Aug 31 2017
Aug 31
In part one of this two-part series, I explained why my Hackathon team wanted to build an open source photo gallery in Drupal 8, and integrate it with Amazon S3, Rekognition, and Lambda for face and object recognition. In this post, I'll detail how we built it, then how you can set it up, too!
Jul 17 2017
Jul 17

There are a number of scenarios in Drupal 8 where you might notice your MySQL database size starts growing incredibly fast, even if you're not adding any content. Most often, in my experience, the problem stems from an exponentially-increasing-in-size cache_render table. I've had enough private conversations about this issue that I figured I'd write this blog post to cover common scenarios, as well as short and long-term fixes if you run into this issue.

Consider the following scenarios I've seen where a cache_render table increased to 10, 50, 100 GB or more:

  • A Search API search page with five facets; each facet has 5-10 links to narrow the search down.
  • A views page with a few taxonomy filters with dozens of options.
  • A views block with a couple filters that allow sorting and narrowing options.

In all three cases, the problem is that the one page (e.g. /search) can have hundreds, thousands, millions, or even billions of possible variations (e.g. /search?value=test, /search?value=another%20test, etc.). Every single variation produces a row in the cache_render table—whether that cached entry is accessed once, ever, or a million times a day. And there's no process that cleans up the cache_render table, so it just grows and grows, especially when crawlers start crawling the page and following every combination of every link!

This isn't a problem that only affects large sites with millions of nodes, either—all it takes is a few hundred taxonomy terms, nodes, etc., and you will likely encounter the problem.
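Some quick shell arithmetic shows why the table explodes: taking the first scenario above (five facets, each with roughly eight links), the number of distinct facet combinations alone is already in the tens of thousands:

```shell
# 5 facets x ~8 links each => 8^5 possible query-string combinations,
# each of which can get its own row in cache_render.
echo "$(( 8 ** 5 )) possible /search?... variations"
```

And that's before accounting for free-text search values, sort orders, or pagers, each of which multiplies the count further.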

So, what's the fix?

First of all, you should follow the Drupal 8 core issue Database cache bins allow unlimited growth: cache DB tables of gigabytes!—that's the best place to test patches which should resolve the issue more permanently.

After that, here are some ways to fix the issue, in order from the most immediate/quick to the most correct and long-lasting (but possibly more difficult to implement):

  • For an immediate fix, run drush cr or click 'Clear all caches' in the admin UI at Configuration > Development > Performance. This will nuke the cache_render table immediately. Note: This could take a while if you have a giant table! It's recommended to use drush if possible since a timeout is less likely.
  • For a short-term band-aid to prevent it from happening again, add a cron job that runs drush cr at least once a day or week, during a low-traffic period. This is kind of like cleaning up your room by detonating it with TNT, but hey, it works!
  • For a better band-aid, consider using the Slushi cache module, which basically limits the growth of the cache_render and cache_dynamic_page_cache tables.
  • For the best fix long-term (and worthwhile to do for better performance regardless of whether you have this problem!), you should use Redis or Memcache as your site's cache backend. Unlike the MySQL database, these in-memory caches are usually a bit faster, and are designed to be able to discard old cache entries more intelligently.
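The cron band-aid from the list above can be as simple as a single crontab entry (a sketch; the Drupal path and drush location are assumptions for your own server):

```shell
# Added via 'crontab -e': run 'drush cr' nightly at 3 a.m., during low traffic.
0 3 * * * cd /var/www/drupal && /usr/local/bin/drush cr > /dev/null 2>&1
```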

And for a bit more detail about debugging this problem (to verify it's the root cause of site issues), check out Acquia's helpful help document, Managing Large Cache Render tables in Drupal 8.

The issue linked earlier in this post will hopefully produce a better long-term fix for the many users who are limited to using MySQL, but that's little consolation if your site is offline right now due to the database filling up its disk space, or backups failing because the database is just too large! In those cases, use one of the band-aids above and determine whether using Redis or Memcache is a possibility.

Jul 17 2017
Jul 17
In this two-part series of blog posts, I'm going to show you how we built a Drupal 8 photo gallery site, integrated with Amazon S3, Rekognition, and Lambda to automatically detect faces (allowing us to automatically identify names!) and objects found in our photos.

Drupal 8 Photo Gallery website
Jul 17 2017
Jul 17

DrupalCamp St. Louis 2017 - September 22-23

(It's not too late to submit a session—and register for DrupalCamp soon, since early bird pricing ends on August 1!)

The organizers behind DrupalCamp St. Louis 2017 are happy to announce we have a speaker scheduled to present the Keynote, Adam Bergstein—Associate Director of Engineering at CivicActions!

Adam's Keynote is titled "Restoring Our Lost Imagination", and here's a short summary:

Experience helps shape our perspective. Somewhere along the way, our youthful imagination fades away and our reality forms predisposition. How can we bring back our imagination to achieve things we don’t think are possible? This talk explores how to recognize what we believe to be limitations and ways to use our imagination in routine opportunities. Only when we free ourselves from our own biases, can we reach our fullest potential.

Read more about the Keynote on the 2017 Camp's website, and register soon to lock in early bird pricing—and a snazzy T-shirt!

Jun 29 2017

Drupal VM has been hitting its stride this year; after experimental Docker support was added in 4.5, Docker usage has been refined in 4.6, with a more stable and useful drupal-vm Docker image, along with a few other things:

  • Drupal VM now supports running inside a Debian 9 'Stretch' Vagrant box (Packer build here, and Vagrant Cloud box here), or in a Docker container based on Debian 9.
  • The official Docker image includes a script that lets you set up a fresh Drupal site (any version supported by Drush!) with the command docker exec drupalvm install-drupal (seriously, check it out!).
  • Essential Vagrant plugins that automatically update your hosts file and ensure the VM has the right version of VirtualBox Guest Additions are now automatically installed—though you can disable the feature, or even add in additional essential plugins if you so desire.

I think the fact that adding Debian 9 support took less than one hour is a testament to the approach we've* taken with Drupal VM. Every component is mixed in from an upstream Ansible role—all the roles work on multiple OSes, have full automated test coverage on all those OSes, and are easily composable (Drupal VM is one of many diverse projects that benefit from the roles' modularity).

Drupal VM on Docker - Quick Drupal site installation

And taking the note about the Docker image's Drupal install script a little further, if you have Docker installed, try the following to see how quick it is to get started with Drupal VM on Docker:

  1. docker run -d -p 80:80 -p 443:443 --name=drupalvm --privileged geerlingguy/drupal-vm
  2. docker exec drupalvm install-drupal (optionally add a version after a space, like 8.4.x or 7.56).
  3. Open your browser, and visit http://localhost/

Read more about using Drupal VM with Docker in the official documentation.

I'm beginning some exciting experimentation using Ansible Container to build Docker images 1:1 to many of the Ansible roles I maintain, and I can't wait to share some of the optimizations I'm working on that will make Drupal VM continue to be one of the most flexible and performant development (and even production!) environments for PHP projects.

  • I say we've, because I am extremely indebted to many consistent contributors to Drupal VM and all the upstream projects. Were it not for their hard work and ideas, I would likely not have tidied up a lot of these niceties in time for a 4.6 release! Thanks especially to co-maintainer oxyc, who somehow has time for fixing all the hard bugs :)
May 24 2017

Drupal VM on Docker Hub

Drupal VM has used Vagrant and (usually) VirtualBox to run Drupal infrastructure locally since its inception. But ever since Docker became 'the hot new thing' in infrastructure tooling, I've been asked when Drupal VM will convert to using Docker.

The answer to that question is a bit nuanced; Drupal VM has been using Docker to run its own integration tests for over a year (that's how I run tests on seven different OSes using Travis CI). And technically, Drupal VM's core components have always been able to run inside Docker containers (most of them use Docker-based integration tests as well).

But Docker usage was always an undocumented and unsupported feature of Drupal VM. That's no longer the case—with 4.5.0, Drupal VM now supports Docker as an experimental alternative to Vagrant + VirtualBox, and you can use Drupal VM with Docker in one of two ways:

  1. Use the drupal-vm Docker Hub container.
  2. Use Drupal VM to build a custom Docker image for your project.

The main benefit of using Docker instead of Vagrant (at least at this point) is speed—not only is provisioning slightly faster (or nearly instantaneous if using the Docker Hub image), but performance on Windows and Linux is decidedly better than with VirtualBox.

Another major benefit? The Drupal VM Docker image is only ~250 MB! If you use VirtualBox, box files are at least twice that size; getting started with Drupal VM using Docker is even faster than Vagrant + VirtualBox!

Use the Docker Hub image

The simplest option—if you don't need much customization, but rather need a quick LAMP stack running with all Drupal VM's defaults—is to use the official Drupal VM docker image. Using it with an existing project is easy:

  1. Copy Drupal VM's example Docker Compose file into your project's root directory.
  2. Customize the Docker Compose file for your project.
  3. Add an entry to your computer's hosts file.
  4. Run docker-compose up -d.

If you're using a Mac, there's an additional step required to make the container's IP address usable: you currently have to create an alias for the IP address you use for the container with the command sudo ifconfig lo0 alias <ip-address> (where <ip-address> is the address you have chosen in your project's Docker Compose file).

You can even customize the default image slightly using a Dockerfile and changing one line in the Docker Compose file; see Add a Dockerfile for customization.

Want a real-world example? See the Site for Drupal VM Prod Deployment Demonstrations codebase on GitHub—it's using this technique for the local environment.

Use Drupal VM to 'bake and share' a custom image

The second way you can use Drupal VM with Docker is to use some built-in functionality to build a completely custom Docker image. For teams with particular requirements (e.g. using Varnish, Solr, and PostgreSQL), you can configure Drupal VM using a config.yml file as usual, but instead of requiring each team member to provision a Drupal VM instance on their own, one team member can run composer docker-bake to build a custom Docker image.

Then, save the image with composer docker-save-image, share it with team members (e.g. via Google Drive, Dropbox, etc.), and each team member can load the image with composer docker-load-image.

See the documentation here: 'Bake and Share' a custom Drupal VM Docker image.


Since there are bound to be a lot of questions surrounding experimental Docker support, I thought I'd stick a few of the frequently asked questions in here.

Why wasn't this done sooner?

The main reason I have held off getting Drupal VM working within Docker containers was because Docker's support for Mac has been weak at best. There were two major issues that I've been tracking, and thankfully, both issues are resolved with the most recent release of Docker:

  1. Using Docker on a custom IP address (so you can have multiple Drupal VM instances on different IP addresses all using port 80, 443, etc.).
  2. Docker for Mac is known for sluggish filesystem access when using default volumes.

The latter issue was a deal-breaker for me, because the performance using a Docker container with a Drupal codebase as a volume was abysmal—it was 18 times slower running the same Drupal codebase within a Docker container vs. running it in VirtualBox with an NFS mounted shared folder.

In the issue File system performance improvements, Docker has already added cached volume support, where reads are cached for native filesystem read performance, and support for delegated volumes will be added soon (which allows writes to be cached as well, so operations like composer update on a volume will not be sluggish).

As of May 2017, you need to download Docker's Edge release to use the cached or delegated volume options on Mac. For a good historical overview of these features, read File access in mounted volumes extremely slow.

Why aren't you using Vagrant with the Docker provider?

We're looking into that: Consider allowing use of Vagrant with docker provider to run and provision Drupal VM.

You're doing Docker wrong. You shouldn't use a monolith container!

Yeah, well, it works, and it means Docker can function similarly to a VirtualBox VM so we don't have to have two completely different architectures for Docker vs. a VM. At some point in the future, Drupal VM may make its Docker-based setup 'more correct' from a microservices perspective. But if you're pining for a Docker setup that's Docker-centric and splits things up among dozens of containers, there are plenty of options already.

You're doing Docker wrong. You're using privileged!

Yeah, well, it works. See the above answer for reasoning.

I wish there were other variations of the Drupal VM image on Docker Hub...

So do I; currently Docker Hub doesn't easily support having multiple Dockerfiles with the same build context in an automatically-built Docker Hub image. But Drupal VM might be able to build and manage multiple image versions (e.g. one with LEMP, one with LAMP + Solr, etc.) using a CI tool soon. I just haven't had the time to get something like this running.

Will this make Windows development awesome?

Sort-of. The nice thing is, now you can use Docker in Windows and drop the baggage of Vagrant + VirtualBox (which has always been slightly painful). But as Docker support is currently experimental, you should expect some bumps in the road. Please feel free to open issues on GitHub if you encounter any problems.

Should I use a Docker container built with Drupal VM in production?

Probably not. In my opinion, one of the best things about Docker is the fact that you can ship your application container directly to production, and then you can guarantee that your production environment is 100% identical to development (that whole 'it works on my machine' problem...).

But currently Drupal VM (just like about 99% of other local-environment-focused tools) is more geared towards development purposes, and not production (though you can use Drupal VM to build production environments... and I do so for the Drupal VM Production Demo site!).

May 19 2017

DrupalCamp St. Louis logo - Fleur de Lis

DrupalCamp St. Louis 2017 will be held September 22-23, 2017, in St. Louis, Missouri. This will be our fourth year hosting a DrupalCamp, and we're one of the best camps for new presenters!

If you did something amazing with Drupal, if you're an aspiring themer, site builder, or developer, or if you are working on making the web a better place, we'd love for you to submit a session. Session submissions are due by August 1.

This year's Camp will kick off with a full day of Drupal training on Friday, September 22, then a full day of sessions and community networking on Saturday, September 23. Come and join us at DrupalCamp St. Louis—we'd love to see you! (Registration will open soon.)

May 04 2017

More and more sites are being built in Drupal 8 (over 160,000 as of DrupalCon Baltimore 2017!). As developers determine best practices for Drupal 8 site builds and deployment, they need to come to terms with Composer. In one of the most visible signs that Drupal is 'off the island', many modules are now requiring developers to have at least a fundamental grasp of Composer and dependency management.

But even more than that, many developers now use Composer in place of manual dependency management or simpler tools like Drush Make files.

With these major changes comes some growing pains. Seeing these pains on a daily basis, I wrote Tips for Managing Drupal 8 projects with Composer to highlight some best practices and tricks for making Composer more powerful and helpful.

But many developers still wrestle with Composer, and mourn the fact that deployments aren't as simple as dragging zip files and tarballs around between servers, or checking everything into a Git repository and doing a git push. For example:

  • If I manage my codebase with Composer and follow Composer's own recommendation (don't commit dependencies in the vendor directory), what's the best way to actually deploy my codebase? Should I run composer install on my production web server? What about shared hosting, where I might not have command line access at all?
  • Many modules (like Webform) require dependencies to be installed in a libraries folder in the docroot. How can I add front end dependencies via Composer in custom locations outside of the vendor directory?

And on and on.

DrupalCon Baltimore 2017 - participants sitting and waiting to see the opening Keynote
Over 3,000 community members attended DrupalCon Baltimore 2017.
(Photo by Michael Cannon)

During a BoF I led at DrupalCon Baltimore 2017 (Managing Drupal sites with Composer), we identified over 20 common pain points people are having with Composer, and for many of them, we discussed ways to overcome the problems. However, there are still a few open questions, or problems which could be solved in a number of different ways (some better than others).

I've taken all my notes from the BoF, and organized them into a series of problems (questions) and answers below. Please leave follow-up comments below this post if you have any other thoughts or ideas, or if something is not clear yet!

Note: I want to make it clear I'm not against using Composer—quite the contrary, I've been using Composer for other PHP projects for years, and I'm glad Drupal's finally on board! But Drupal, with all its legacy and complexity, does require some special treatment for edge cases, as evidenced by the breadth of problems listed below!

Common Problems and Solutions with Drupal + Composer


How do I deploy a Drupal codebase if the vendor directory isn't in the codebase?

This was one of the most discussed issues. Basically: If it's best to not commit the vendor directory to my Git repository, and I deploy my Git repository to my production server... how do I run my site in production?

In the old days, we used to commit every file for every module, theme, and library—custom or contributed—to our website's codebase. Then, we could just git push the codebase to the production web server, and be done with it. But nowadays, there are two different ways you can not commit everything and still run a codebase on production:

  1. Using Deployment Artifacts: Acquia's BLT project is the best example of this—instead of cloning your source code repository to the production server, BLT uses an intermediary (either a developer locally, or in a CI environment like Travis CI or Jenkins) to 'build' the codebase, and then commit the 'build' (the artifact to be deployed, containing all code, including core, contrib modules, libraries, etc.) to a special branch or a separate repository entirely. Then you deploy this artifact to production.
  2. Run composer install on Prod: This is often a simpler solution, as you can still use the tried-and-true 'git push to prod' method, then run composer install in place, and all the new dependencies will be installed.

The second method may seem simpler and more efficient at first, but there be dragons. Not only is it likely that Composer (which is quite the RAM-hungry monster) will run out of memory, leaving a half-built copy of your codebase, it's also difficult to manage the more complicated deployment steps and synchronization required to make this work well.

The first method is the most stable production deployment option, but it requires either two separate Git repositories, or some other mechanism of storing the deployment artifact, and it requires an additional manual step or a CI system to actually build the deployment artifact.

Some hosting providers like Acquia are building new systems like Acquia Cloud CD to make the deployment artifact process more streamlined. But in either case (composer install on prod or deployment artifacts), there are tradeoffs.
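To make the deployment-artifact approach concrete, here's a minimal sketch of what a CI job might do. The repository URL, branch name, and docroot paths are assumptions for illustration—this is not BLT's actual implementation:

```
#!/bin/sh
# Sketch: build a deployment artifact on a CI server (never on prod).
set -e
git clone https://example.com/site-source.git build && cd build
composer install --no-dev --optimize-autoloader  # vendor/, core, contrib now present
# Force-add the directories .gitignore normally keeps out of source control.
git add -f vendor docroot/core docroot/modules/contrib
git commit -m "Build artifact for $(git rev-parse --short HEAD)"
# Push the fully-built tree to a dedicated artifact branch; deploy that branch.
git push origin HEAD:refs/heads/artifact
```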

I need to put a library (e.g. CKEditor for Webform) in the libraries directory, and not in vendor. Is this even possible with Composer?

Using the composer/installers package, you can set specific paths for certain types of packages, and you can even set specific directories per-package as long as the package has a type and requires composer/installers.

The last bit is the part that makes this a little difficult. Let's take two examples:

First, say you want to install all Drupal modules (type drupal-module) into your codebase in docroot/modules/contrib. Tell Composer by setting paths in the extra section of composer.json:

"extra": {
    "installer-paths": {
        "docroot/core": ["type:drupal-core"],
        "docroot/modules/contrib/{$name}": ["type:drupal-module"],
        "docroot/themes/contrib/{$name}": ["type:drupal-theme"],
        "docroot/libraries/{$name}": ["type:drupal-library"]
    }
}

Now, let's say you've installed the Webform module, which requires the geocomplete library to be placed inside your site's libraries directory.

Since geocomplete isn't available through Packagist (and doesn't have type: 'drupal-library'), we can't easily tell Composer to put that particular library somewhere outside of vendor.

Currently, there are three ways to work around this issue, and all of them are slightly hacky (in my opinion):

  1. Add the Composer Installers Extender plugin, which allows you to set a custom install path per-dependency.
  2. Add the library as a custom repository in your composer.json file's repositories section:

    "repositories": {
        "drupal": {
            "type": "composer",
            "url": ""
        },
        "fontawesome-iconpicker": {
            "type": "package",
            "package": {
                "name": "itsjavi/fontawesome-iconpicker",
                "version": "v1.3.0",
                "type": "drupal-library",
                "extra": {
                    "installer-name": "fontawesome-iconpicker"
                },
                "dist": {
                    "url": "",
                    "type": "zip"
                },
                "require": {
                    "composer/installers": "~1.0"
                }
            }
        }
    }

  3. Attach a script to one of Composer's 'hooks' to move the library at the end of the composer install process. See the Composer Scripts documentation for more info and examples.

After the BoF, I opened a core issue, Drupal core should help make 3rd party library management not torturous, and basically asked if it might be possible for Drupal core to solve the problem using something like the third option, but using a class or library that was provided by core and available to all contributed and custom Drupal projects (instead of each codebase solving the problem in some one-off way).

The workarounds are all a bit burdensome, even after you have a few site builds under your belt. Hopefully this situation improves over time (note that a similar issue, Best practices for handling external libraries in Drupal 8, has been open since late 2015).

One frontend library I need to add to my site (for a module, not a theme) is not on Packagist. How can I composer require it without it being on Packagist?

There are two ways you can add packages that might exist on frontend packaging sites, but not Packagist:

  1. Use Asset Packagist to require Bower or NPM packages (even if they're not on Packagist).
  2. Either get the upstream package maintainer to add a composer.json file to the package repo and submit it to Packagist, or fork the repository and add your fork to Packagist.

The former option is my preferred method, though I've had to do the second option in a pinch a couple times. The best long-term solution is to try to get the upstream library added to Packagist, so you don't have any extra maintenance burden.
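As a sketch of the Asset Packagist option: register the repository in composer.json and tell Composer where asset packages should land. The installer-types key comes from the oomphinc/composer-installers-extender plugin (which you'd also composer require), and the docroot/libraries path is an assumption about your project layout:

```json
"repositories": [
    {
        "type": "composer",
        "url": "https://asset-packagist.org"
    }
],
"extra": {
    "installer-types": ["bower-asset", "npm-asset"],
    "installer-paths": {
        "docroot/libraries/{$name}": ["type:bower-asset", "type:npm-asset"]
    }
}
```

After that, frontend packages can be required like any other dependency, e.g. composer require bower-asset/select2.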

What's the best way to start building the codebase for a new Drupal 8 website?

The community seems to have settled on using a starter template like the Composer template for Drupal projects. In the beginning, some people ran composer require drupal/core to kick off a new Drupal codebase... but doing that can lead to some pain further down the road (for various reasons).

When you start with the Composer template, or something like Acquia's BLT, you have more flexibility in deploying your site, storing things in different places in your repository, and a more stable upgrade path.

See the documentation for more info: Using Composer to manage Drupal site dependencies.

If I have a multisite installation, how can I put certain modules in one sites/site-a/modules directory, and others in another sites/site-b/modules directory?

You're in luck! Just like we can do with libraries (see the earlier question above), we can use the composer/installers package to define a specific directory per-dependency (as long as the dependencies—e.g. modules or themes—have a type assigned):

"extra": {
    "installer-paths": {
        "sites/site-a/modules/{$name}": ["drupal/module_a"],
        "sites/site-b/modules/{$name}": ["drupal/module_b"]
    }
}

Using the command line to install modules is a lot to ask. Isn't there some sort of UI I could use?

You're in luck again! Matt Glaman built a fancy UI, Conductor, to use with Composer (he's asked for feedback on GitHub of how he can make it even better). Additionally, some IDEs have support for basic functionality (e.g. Composer PHP Support for Eclipse, or the Sublime Text Composer plugin).

And finally, There's a core issue for that™: Do not leave non-experts behind: do not require Composer unless a GUI is also included.

Just like with Git, though, it pays dividends to know how to use Composer on the command line.

What happens if there are dependencies required by two modules or core and a module, and there's a conflict with version requirements?

This... is a hard problem to solve. As Drupal has moved to a more semver-style release cadence (8.x.0 releases can include major new features, and could drop experimental modules), there are times when a core dependency and a contributed module or install profile dependency might conflict (e.g. core requires version 1.3.2 or later of a library, while an install profile you're using requires at least 1.2.9 but nothing outside the 1.2.x series).

In these cases, the best option is likely to get the non-core packages (e.g. contrib modules, profiles, themes, etc.) to align with Drupal core. When two different modules or other contrib libraries conflict, the best option is to work it out between the two modules in their issue queues, or to manually override one of the dependency definitions.

This is a hard problem to solve, and best practices here are still being discussed.

How can I include libraries and shared modules that I made for my own sites (they're not on Packagist)?

Many people have private modules or themes they use on multiple Drupal codebases, where they want to maintain the module or theme separately from the individual sites. In these situations, there are two options:

  1. You can add additional repositories in your composer.json file, one for each of these custom libraries, then you can composer require them just like any other dependency.
  2. If you have many of these dependencies, you could set up a private Packagist. This option is probably best for large organizations that have specific security requirements and a number of custom libraries and packages.

See related documentation in the Composer docs: Handling private packages.
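For the first option, a VCS repository entry in composer.json is all you need; the URL and package name below are placeholders, and the module's own composer.json must declare the matching name:

```json
"repositories": [
    {
        "type": "vcs",
        "url": "https://github.com/example/my_custom_module"
    }
]
```

Then composer require example/my_custom_module:^1.0 pulls the module from your repository just like a Packagist dependency.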

I have a Drush make file for my site. Can I easily convert this to a composer.json file?

Well, there's the Drush makefile to composer.json converter tool, but other than that, it might be easiest to regenerate your site from scratch using something like the Composer template for Drupal projects.

Note that if you built your site by hand using .zip or .tar.gz downloads, you can use Drush to at least generate a makefile (see drush make-generate).

I set up my Drupal 8 site by downloading archives and copying them into the codebase. How can I convert my codebase to use a composer.json file?

See the above question—you can't easily go straight from custom codebase to Composer-managed, but you can use drush make-generate to generate a Drush Make file; and from there you can convert to a Composer-based project.

What is the difference between ^ and ~ in version requirements, and should I use one over the other?

Put simply:

  • ~: ~8.3.0 means >=8.3.0 <8.4.0—it won't go to 8.4.0
  • ^: ^8.3 means >=8.3 <9.0.0—it will go to 8.4.x, 8.5.x, and so on, up to (but not including) 9.0.0

Both operators tell Composer 'use at least this version', but with a full three-part version the ~ says "stay within the same minor release (8.3.x)", while the ^ says "stay within the same major release (all releases up to, but not including, 9.0.0)". One gotcha: a two-part ~ constraint is looser than it looks—~8.3 allows anything below 9.0.0, just like ^8.3.

There's some good discussion about standards in core here: Prefer carat over tilde in composer.json, and as with most other things, you should also read through the official Composer documentation for a more canonical answer.

As long as you trust a module or library's maintainers to follow semver standards and not break backwards compatibility within a major release, it's best to use the ^ to indicate required minimum versions (e.g. composer require drupal/honeypot:^1.0.0).

When I run composer update a lot of weird things happen and everything is updated, even if I don't want it to be updated. Is there a way I can just update one module at a time?

Yes! In fact, in real world usage, you'd rarely run composer update alone.

To just update one module or library at a time, you should run the command (e.g. for Honeypot):

composer update drupal/honeypot --with-dependencies

When you add those arguments, you are telling Composer:

  • Only update Honeypot
  • Check if Honeypot has any dependencies that also need updating
  • Don't update anything else!

This is generally safer than running composer update alone, which will look at all the modules, themes, etc. on your site and update them. Technically speaking, if we lived in an ideal world where patch and minor releases never broke anything, composer update would be perfectly safe, even if it updated every module on your site.

But we live in the real world, and despite the best efforts of module maintainers, many bugfix and patch releases end up breaking something. So it's safer to update one thing at a time, then commit that update (so you can pinpoint exactly where something went wrong in the case of failure).

When I run composer install on my production server, it runs out of memory. How can I avoid this issue?

Short answer: don't use Composer on your production server. Build a deployment artifact instead (on a non-production server), then deploy the artifact to production.

More realistic answer: If you have to use Composer in production (e.g. you deploy your codebase then run composer install as part of your deployment process), you might need to bump up your command line PHP memory_limit. See Composer's troubleshooting guide for memory limit errors.

Another realistic answer, if you don't want to use CI or some other means of building your production codebase: commit the vendor directory and all dependencies to your codebase. This isn't necessarily the most technically pure way of doing things, but it's definitely not the worst thing you could do—it's akin to how most people managed Drupal sites before Drupal 8.

For security reasons, I need to verify that the code I'm running in production wasn't hacked or modified. How can I do this if I'm using Composer to manage my dependencies?

If you commit all your code to a Git repository (including dependencies), you can easily do a git diff on the production codebase to see if any files have been hacked. But if you're constantly using composer install to install dependencies, you need to be able to verify that you're getting the right code in two places:

  1. When you or anyone else runs composer install.
  2. When the code is deployed to production.

In the first case, you should always make sure you're using https endpoints; Composer will validate certificates when communicating with secure channels. If you use http repositories, there's no protection against man-in-the-middle attacks.

To verify your production code, the best solution is to not run composer commands on production. Instead, use a CI tool (like Travis CI, Jenkins, CircleCI, etc.) to build your codebase, then commit your codebase to an artifact repository (or a separate branch on your main repository), then deploy that code to production. That way, you preserve the ability to do a git diff on your production codebase.

I totally screwed up some of the dependencies on my site while I was tinkering with them. Is there any way to 'nuke' all the things in my codebase that Composer manages so I can do a fresh composer install and get them back into the correct state (short of deleting my entire codebase and re-cloning my site locally)?

This can be tricky—with a Drupal site, Composer might be managing code inside vendor/, modules/, libraries/, profiles/, themes/, and possibly other places too! The best way to 'nuke' all the dependencies Composer downloaded and start fresh with a composer install is to run:

git clean -fdx .

This command tells Git: force (f) delete all files and directories (d) in the current directory that aren't being tracked by Git, including files that are untracked because of .gitignore (x).

Then run composer install to get back all the dependencies in their correct state.
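If you want to see exactly what git clean -fdx will remove before running it on your own site, here's a quick demonstration in a throwaway repository (this sketch only assumes git and standard shell tools):

```shell
# Demonstrate `git clean -fdx` in a scratch repository.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
echo 'vendor/' > .gitignore
git add .gitignore
git -c user.email=demo@example.com -c user.name=demo commit -qm 'init'
mkdir -p vendor web/modules/contrib
echo 'stub' > vendor/autoload.php         # ignored via .gitignore (x)
echo 'stub' > web/modules/contrib/a.txt   # untracked (f, d)
git clean -fdx .
ls -A   # only .git and the tracked .gitignore remain
```

(Tip: git clean -ndx does a dry run, printing what would be removed without deleting anything.)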

My team constantly battles with composer.lock file merge conflicts. How can we avoid this painful experience?

The two strategies that were discussed at the BoF for avoiding merge conflicts are:

  • Have one dedicated dependency manager on the team—e.g. only one person/role who runs composer require *, composer remove *, composer update * etc. Typically this would be a senior or lead developer who is communicating with the other members of the team and can unblock them.
  • Have one dedicated 'sweeper'—a person whose job it is to deal with merge conflicts.

If push comes to shove, and you have to deal with a merge conflict (or if you're the dedicated conflict resolver), one tip for successfully merging is to accept all the composer.json changes, but clear out all the conflicting composer.lock changes, then run composer update nothing (or composer update --lock). This regenerates the lock file's content hash from your current composer.json—without updating any dependencies—so the file hashes are back in sync.

Do you have any other tips for making Composer use easier and more delightful?

I'm glad you asked! I was thinking you'd keep asking the hard questions. Here are some of the other miscellaneous Composer tips that were mentioned during the DrupalCon BoF:

  • Don't run Composer in prod! (I think this was mentioned at least five times in the BoF). Instead, either commit vendor to codebase, or use CI to commit vendor and create a 'deployment artifact' that is deployed to production.
  • Use the Composer Merge Plugin to mix custom module composer.json files into one root repository.
  • Don't use composer create-project drupal/drupal when setting up Drupal. Use drupal-composer/drupal-project instead, as it follows more of the Composer best practices for managing a Drupal 8 codebase.
  • Don't run composer update unless you really know what you're doing. Use composer update [package] --with-dependencies instead.
  • Commit the composer.lock file for individual sites and deployed projects. But don't commit the .lock file to contributed modules or modules shared among multiple sites.


As I alluded to in this post's title, there are still some growing pains as the Drupal community more comprehensively adopts Composer and structured dependency management in general.

One thing that's clear: developers who need to include external libraries and code have a much easier time in Drupal 8 than they did in past versions.

But with that benefit comes some downside: site builders and sysadmins tasked with deploying Drupal 8 securely and efficiently will have to adapt to some new tools and techniques if they want to avoid dependency hell.

Apr 27 2017

I presented Just Keep Swimming! Or, how not to drown in your open source project at DrupalCon Baltimore 2017, as part of the Being Human track. Below is a text summary of the presentation (along with the associated slides).

Here's a video of the presentation; scroll past it to read through a transcript and slides:

[embedded content]

And the slides/transcript:

Just Keep Swimming - Slide 2

Hi, I'm Jeff Geerling, known most places around the Internet as geerlingguy. I've been involved with the Drupal community for almost nine years, and the Ansible community for four. I work for a company named Acquia, one of the largest Drupal service providers, and I write a lot (mostly about open source software), and automate a lot of things (mostly using Ansible).

Just Keep Swimming - Slide 3

I do a lot of open source work. I maintain over 160 independent open source projects on GitHub, and help with a few others. Some of these projects are pretty tiny, but there are a number with hundreds or thousands of stars, and thousands of issues, commits, and pull requests.

My biggest claim to fame is that I'm the highest ranked open source contributor in St. Louis. That's gotta count for something, right?

Just Keep Swimming - Slide 4

But my open source work is just one small part of my life. I'm also a husband and father of three kids—a four year old, a two year old, and a six month old. In this picture, you can witness one of the common rituals in the Geerling household (those of you who don't have kids might be wondering what I'm holding): the changing of the diaper pail. Between dealing with toddler meltdowns, activity time, changing diapers, and family game nights, family is an important part of my life. A lot of the reason I do the work I do is for my family: I want to make sure my children have the tools and time to do more than I could imagine to make the world a better place. Family is extremely important, and other things in my life often take a backseat to family matters.

Just Keep Swimming - Slide 5

I also spend as much time as I can taking pictures and exploring photography. Before I decided to work in software development full-time, I was splitting my time between development and photojournalism, and I still take on a few paid gigs every year. This year, in fact, I'm taking pictures in a semi-official role as a volunteer photographer at DrupalCon (you may have seen me sneaking around with this massive camera!).

Just Keep Swimming - Slide 6

I also enjoy DIY projects: in this picture, contrary to what you may be thinking, I was not building a meth lab; I was building the home office in which I've been working for Acquia. I work on projects around the house, I help neighbors with their own projects, and I even enjoy light gardening—even though almost all my perennials seem to die every year.

Just Keep Swimming - Slide 7

Finally, something that impacts me on almost a daily basis is my Crohn's disease. It's a chronic, but not fatal, disease, and I've been dealing with it since 2003. This picture is from about six months ago, when I had major surgery to remove part of my gut. Even then, I was thinking about Drupal, ha!

I'm doing pretty well today (thanks for your concern!). Having to deal with the ups and downs of Crohn's disease has really made me feel thankful for the times when I can be productive (in work, in family time, and in open source). More importantly, it's a constant reminder that you never know what someone else is going through, so I know it's important to always treat others with compassion and assume the best intent.

Just Keep Swimming - Slide 8

The point is, I'm a human. I have human needs and desires. And the same for all of you. All open source maintainers, contributors, and users come together and bring their own relationships, experiences, and struggles to the table.

Often, when someone is struggling in life, open source work (usually one of the less important aspects of life) is the first thing to go. That's a major reason why all open source maintainers have a constant battle with burnout. Once open source work becomes a struggle and a burden, it no longer rewards maintainers with the doses of optimism and euphoria that sustain their involvement!

Just Keep Swimming - Slide 9

This chart shows the time I spend doing various things through the day. It's not a ranking of importance, just the amount of time spent, on average, doing each thing. I spend almost all my day either working (that's the W-2 section), sleeping, or being present to my family. Then there are some things like eating and reading (or, let's be honest, binging on Netflix or Reddit!). Then there's the last slice, accounting for about 1.5 hours a day, that I choose to devote to Open Source work.

Most open source maintainers are in the same boat. Most of us aren't paid to work on our open source projects during work hours (though sometimes we're fortunate enough to be able to devote some time to them), and most of us have only a small amount of time to devote to open source work. So it's important that we use that time well, and focus on tasks that don't drag us down and make us dread the work. We have plenty of W-2 work during the day like that!

Just Keep Swimming - Slide 10

So I'm going to show you what that small slice of open source work looks like, on an average day.

Just Keep Swimming - Slide 11

I often start my time glancing at my 'open source' email folder.

Just Keep Swimming - Slide 12

YIKES, that's overwhelming! Well, maybe I'll change gears and just focus on my GitHub issues.

Just Keep Swimming - Slide 13


Just Keep Swimming - Slide 14

This constant barrage of notifications can be really jarring. It's probably best to ignore the common notifications, and find a way to filter out issues that are not as important or wouldn't be a good use of your time.

Let's jump into an issue queue and find a pull request or patch to review.

Just Keep Swimming - Slide 15

Nice! It looks like I stumbled upon a rare breed of GitHub pull requests: this PR has the four hallmarks of a pull request likely to be merged:

  1. There is a thorough description of the why behind the PR, how it works, and even a link to find further information! This makes it so I don't have to spend my hour researching the code changes—the contributor already did it for me.
  2. This PR doesn't change everything under the sun. It's only adding a few lines of code, which means I will likely be able to review it quickly (certainly in less than 1 hour).
  3. The commit history looks clean. There's one commit with a short message describing the changes.
  4. The PR passes all the project's automated tests (so I know at minimum it doesn't break my project's existing functionality).

Just Keep Swimming - Slide 16

What a great way to kick off my open source time! If all the issues are like this one, I'll probably get through at least five or ten issues today!

Just Keep Swimming - Slide 17

... But then reality hits us. Very few issues, patches, and pull requests are so easy to resolve.

Just Keep Swimming - Slide 18

These examples highlight one of the most prevalent issue types that will clog up your queues: weeks go by with no response from the person who opened the issue, and you have no recourse but to close it without a definitive resolution.

Well, let me introduce you to some common issues and some characters you may run into:

Just Keep Swimming - Slide 19

These are Ghost Writer issues. A Ghost Writer issue is one where the OP (Original Poster) disappears off the face of the planet, never to be heard from again. These clog up every major project's queues, and maintainers lose a lot of time pruning the dead issues out.

Just Keep Swimming - Slide 20

The next kind of issue I see more often than I'd like comes from someone who apparently never saw the documentation: the See No Docs issue, where the OP asks a question that's answered in the first section of the project's documentation! Alongside it are the Read No Docs issues (when the maintainer links to documentation, but it's obvious the user didn't read it before replying again), and the Fix No Docs issues (when the issue mentions a small bug or improvement in the docs, but nobody ever files a patch).

Just Keep Swimming - Slide 21

Then there are the Iceberg issues. These are issues that look like they'll require only minimal effort to resolve, but once you get into them, you realize you just burned up a week's worth of your time trying to resolve the issue! There's a lot more to the issue hidden under the water.

Just Keep Swimming - Slide 22

Next up, you know that person who prods you to merge an old patch that passes automated tests but you haven't merged for some reason or another?

Just Keep Swimming - Slide 23

That's a 'For Shame'er. These comments are often unintentionally insidious; they usually have a slightly passive-aggressive tone, and if you're not careful as a maintainer, they can really tear down your confidence. Most of us suffer from impostor syndrome already, and seeing these comments can make us feel even more inadequate as maintainers!

I'm not saying that bumping or +1'ing an issue is necessarily wrong. Rather, if you want to add your 2¢, please either stick to a "+1" comment, or make sure your statement doesn't have any passive-aggressive undertones!

Just Keep Swimming - Slide 24

And then there's the Dung Heap. These issues are pretty easy to identify, and if you're like me and you're ruthless in cleaning up your issue queue, they don't stay long. Dung Heap issues contain little or no description and hundreds or thousands of lines of changed code; they don't follow your project's coding style, and they often fail to pass automated tests.

There's no way you should consider merging these; just close them quickly and move on.

Just Keep Swimming - Slide 25

Closely related is the Diamond in the Rough: it's mostly a Dung Heap, but there is one change in the patch or pull request which is actually really good. Enough so that you don't want to close the issue, and you give the OP feedback asking to remove all the unrelated changes and resubmit.

But it seems these issues are often combined with a Ghost Writer (the OP never responds), and then after a month or two, you start getting a few 'For Shame'ers asking why this obviously brilliant change hasn't been merged yet.

Just Keep Swimming - Slide 26

There's also the Imposter. These issues are ones that have no real relation to your project. Instead, it's a poor lost soul who knows you've helped them in the past, so they post an issue only tangentially related to your project, hoping you'll answer their question out of the kindness of your heart.

Just Keep Swimming - Slide 27

And finally, the Black Hole. These issues start off unassumingly, with a simple premise and maybe a straightforward path to completion. But then someone suggests changing a variable name. Then, three hundred comments later, after the variable name is back to the original one, it seems there is no end in sight. Black Holes most often devolve into a Bikeshed, where people are arguing more over a variable name or a code comment than the actual change itself!

Just Keep Swimming - Slide 28

Whether or not you have kids, there's a chance you've heard of Dory, a Disney/Pixar character who has issues with short-term memory, but is a faithful companion and optimist. Dory offers us this simple advice: when your issue queue gets you down, you need to just keep swimming!

Just Keep Swimming - Slide 29

Let's not drown in our issue queues. Instead, I'm going to talk about some of the things that I do when managing all my projects. I hope you'll find some of these things helpful!

Just Keep Swimming - Slide 30

One thing that is often overlooked, but fairly important for increasing adoption and continued usage, is injecting some personality into your project. If your project has some personality, it can make both you and your users like and remember it better.

One example is the 'Raspberry Pi Dramble' logo (the one with the six Druplicons in the shape of a raspberry). After making the logo and sticking it on the Pi Dramble website, I started to approach the project differently. I tried to simplify some things and make the videos and documentation for it more approachable. It's a subtle change, but I think it's important. (Case in point, the venerable Druplicon.)

Another example is Drupal VM: I first created it as "A Drupal Development VM", but that name didn't really roll off the tongue well, was really long, and didn't endear users to the project. Once it got more popular, I took some time to:

  1. Rename the project to, simply, "Drupal VM" — no "The". This made Drupal VM a proper noun, and also affected the language around the project.
  2. Create a wordmark and use it consistently.

It's very hard to relate to projects that have no personality, and especially when things are rough, it's a lot easier to abandon something if it has no personality, no branding, no identifying marks.

This isn't to say branding is everything—but it definitely helps guide people's perception, and your own too!

Just Keep Swimming - Slide 31

Next up: Work in the open. I actually had a great example of this at a BoF yesterday—I was talking about a problem I encountered with Drupal but didn't know if an issue had been filed about it. Someone mentioned that an issue was in the Core issue queue, and when I opened it up and scrolled down, I had the most recent comment on the issue, from two months ago!

There are a number of benefits that come from documenting things as you go, and from blogging—or at a minimum, storing your thoughts somewhere public:

  1. You can use Google to search your own brain!
  2. If you start working on something but get stuck, or need to switch gears, someone else can pick up where you left off and push the issue through to completion.
  3. Other people get their own problems resolved by seeing what you wrote. If it stays in your head or in a private journal, chances are nobody (including yourself) will benefit from the work beyond the immediate fix or feature you completed.

Just Keep Swimming - Slide 32

This slide I added just because I'm a Tesla nut.

No, I kid; the real reason is because of something Elon Musk said:

Tesla focuses heavily on designing the machine that makes the machine - turning the factory itself into a product.

Tesla is currently the largest automotive manufacturer in the US (by market capitalization)—even though they produce far fewer vehicles than Ford, GM, Chrysler, and other manufacturers. The reason for this is that people value Tesla's philosophy: automating things and increasing efficiency.

Most open source projects have a very scarce commodity: development time. And wasting development time on mundane, mind-numbing tasks like manual testing, code linting, etc. will quickly discourage potential contributors.

At a minimum, you should have a goal of automating:

  • Code style / linting
  • Functional tests (at least one 'happy path' test to verify things aren't horribly broken)
  • First-time setup (how long does it take a new contributor to be able to get a clean environment running your code? Docker can help here!)
  • Notifications and email (e.g. route all OSS email into a specific folder, and don't let those emails show in your 'unread' count)

Additionally, if you find yourself asking for the same extra information every time someone posts a new issue, you can add an ISSUE_TEMPLATE if you use GitHub, and put required information and suggestions inside to ensure people posting issues provide all the right information.
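As a hypothetical example (the exact sections are up to you and your project's needs), a minimal ISSUE_TEMPLATE.md might look like this:

```markdown
<!-- .github/ISSUE_TEMPLATE.md (hypothetical example) -->
## Expected behavior

## Actual behavior

## Steps to reproduce

## Environment

  - OS / distribution:
  - Project version:
  - Relevant log output:
```

Even a skeleton like this saves a round-trip on almost every new issue, because the OP fills in the environment details up front instead of after your first reply.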

Just Keep Swimming - Slide 33

As your project starts to grow, you'll start interacting with more people: users, potential users, and even (if you're lucky) contributors!

If you want to keep your project maintainable, there are some things you should do to foster your community and distribute responsibility:

  1. Connect with fans: Often there are a few people who are willing to help or at least promote your project. Communicate with them, ask them to help, allow them to take on responsibilities if they seem aligned with your project's goals, structure, etc.
  2. Give karma: Thank contributors in release notes, on Twitter, at events, etc. Open source is a volunteer-driven effort; if money is not available, gratitude is the currency of open source, so give generously and you'll have some passionate users and contributors!
  3. Avoid drama: If you find some controversy brewing in your issue queue or elsewhere, take a step back and see if it quickly resolves itself. If you have to intervene, don't make a knee-jerk reaction, and make sure you're aware of the entire situation. If you don't have a direct role in the drama, my advice is avoid it. And always ask yourself the question: Is what you add going to be helpful?

Just Keep Swimming - Slide 34

I've spoken about a few of the things I find most important in being a happy and sane open source maintainer, but there's a wealth of information on the web on this topic. Almost every maintainer has to wrestle with the problem of staying afloat, and many maintainers put their thoughts into writing.

I'd like to call out GitHub's Open Source Guides especially: these guides are great primers for so many different areas of open source contribution—for maintainers, for contributors, for users, and for businesses.

But what it all comes down to for all of us, is how we use our time. And what I'd ask you to remember is:

Just Keep Swimming - Slide 35

Thank you.

Apr 25 2017
Apr 25

Update: The BoF has come and gone... and I put up a comprehensive summary of the session here: Composer and Drupal are still strange bedfellows.

Tomorrow (Wednesday, April 26), I'm leading a Birds of a Feather (BoF) at DrupalCon Baltimore titled Managing Drupal sites with Composer (3:45 - 4:45 p.m. in room 305).

Composer for PHP - Logo

I've built four Drupal 8 websites now, and for each site, I have battle scars from working with Composer (read my Tips for Managing Drupal 8 projects with Composer). Even some of the tools that I use alongside composer—for project scaffolding, managing dependencies, patching things, etc.—have changed quite a bit over the past year.

As more and more Drupal developers adopt a Composer workflow for Drupal, we are solving some of the most painful problems.

For example:

  • Should I use the composer.json that's included with Drupal core? If so, why is there also a .lock file included? (See issue: Improve instructions for updating composer.json and /vendor).
  • I just installed Webform and now it's yelling at me about front-end libraries. How do I install front-end libraries with Composer, especially if they're not on Packagist?
  • What's the best way (or ways) to set up your Drupal project using Composer? (There are several ongoing discussions about this.)
  • Is it a best practice to commit the vendor directory or not? Why?
  • Sometimes I see tildes (~), other times ^, and sometimes Composer yells at me when I try to add a module that has a beta release. Why is this stuff so confusing? I just want to add a module!
  • I download tarballs or zip files of modules and drag them into my codebase. What's the best way to install modules like Search API Solr or Address, when they require Composer?
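On the tilde/caret question in particular, the semantics are easy to demo with a composer.json fragment (the module names and versions here are made up for illustration):

```json
{
    "require": {
        "drupal/core": "^8.3",
        "drupal/some_module": "~8.1.0",
        "drupal/other_module": "^8.1.0-beta1"
    }
}
```

`^8.3` allows any version from 8.3 up to (but not including) 9.0, while `~8.1.0` only allows patch releases in the 8.1.x series. And the "Composer yells at me about a beta release" problem is usually Composer's default `minimum-stability` of `stable` at work: either the constraint has to explicitly reference a pre-release (as in the last line above), or the root composer.json has to lower `minimum-stability`, before a beta can be installed.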

Some of these questions have answers. Some are still being debated on a daily basis!

I'd love to see you come to the BoF tomorrow if you're at DrupalCon, and you want to talk about Composer and Drupal. I'll try to take notes and post them on my blog as well.

Apr 10 2017
Apr 10

When Microsoft announced the Windows Subsystem for Linux, now seemingly rebranded as Bash on Ubuntu on Windows, I was excited at the possibility of having Drupal VM (and other similarly command-line-friendly open source projects) work better in a Windows environment. But unfortunately, the Anniversary Update's version of WSL/Ubuntu Bash was half-baked, and there were a lot of little issues trying to get anything cohesive done between the Windows and Ubuntu Bash environments (even with cbwin).

Then, a year or so later, Microsoft finally announced that tons of improvements (including upgrading Ubuntu in the WSL from 14.04 to 16.04!) would be included in the 'Creators Update' to Windows 10, dropping tomorrow, April 11.

One of the major improvements would be the ability to call out to Windows executables from within Ubuntu Bash, which was added in October 2016, but was only available to those willing to run Microsoft's bleeding-edge 'Insider' builds. This feature is going to be included with the rest of the improvements in the Creators Update, so I thought I'd update early and explore whether this is the panacea I dreamt of when I first heard about WSL/Ubuntu Bash.

tl;dr: It's not a panacea. But it is something. And things have improved significantly from where we were a year ago, at least for open source devs who develop on Windows (by choice or by dictum).

How to update to Windows 10 'Creators Update'

If it's April 11 or later, you can just use the Windows Update mechanism to upgrade to the Creators Update version of Windows 10. It seems the ISO images are already posted, and it's only April 10 where I live, so I'm not sure when your particular computer will see the update become available for install.

How to Install WSL/Ubuntu Bash

Note: If you've previously installed Ubuntu Bash on an older version of Windows, and want to get the new version running Ubuntu 16.04 instead of 14.04, you need to run Powershell as administrator, and enter the command lxrun /uninstall /full.

  1. Run lxrun /install /y
  2. After installation is finished, enter 'Ubuntu' in Cortana/search and open "Bash on Ubuntu on Windows"
  3. Confirm you're on Ubuntu 16.04 by running lsb_release -a

Use Vagrant with Ubuntu Bash

  1. Add Windows executables to your Ubuntu $PATH: `export PATH=$PATH:/mnt/c/Windows/System32/`
  2. Test that Vagrant is callable by running vagrant.exe status
  3. Change directories to a location that's accessible to Windows (e.g. cd /mnt/c/Users/yourusername).
  4. Clone a Vagrant-based project locally: git clone
  5. Change directories into the project dir: cd drupal-vm
  6. Run vagrant.exe up.

Note that, currently, you can't run vagrant.exe ssh from within Ubuntu Bash. You have to either use Cmder or a similar CLI for that, or run vagrant.exe ssh-config, then run a manual SSH command (ssh -p [port] vagrant@ to connect to your VM from within Ubuntu Bash.
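Since the port (and the rest of the connection details) vary per machine, the ssh-config output can even be parsed automatically. This sketch operates on a captured sample (the config contents here are hypothetical; in practice you'd pipe in the real `vagrant.exe ssh-config` output):

```shell
# Build the manual ssh command from (sample) `vagrant.exe ssh-config` output.
config='Host drupalvm
  User vagrant
  Port 2222
  IdentityFile C:/Users/you/.vagrant.d/insecure_private_key'

host=$(printf '%s\n' "$config" | awk '$1 == "HostName" {print $2}')
port=$(printf '%s\n' "$config" | awk '$1 == "Port"     {print $2}')
user=$(printf '%s\n' "$config" | awk '$1 == "User"     {print $2}')
cmd="ssh -p $port $user@$host"
echo "$cmd"
```

Matching on the first field (`$1 == "User"`) rather than a plain regex matters here, because a pattern like `/User/` would also match the `C:/Users/...` path in the IdentityFile line.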

Necessary improvements

I'm hopeful that Microsoft may be able to find ways to allow Windows executables (like Vagrant) see and use binaries in the WSL environment (e.g. Vagrant.exe can call out to Ubuntu's ssh). In lieu of that, I wonder if Vagrant itself may be able to make some improvements to make it easier for Windows users to do things like vagrant ssh and other esoteric commands without resorting to Cygwin.

I'm still working through some of these issues and trying to find the best path forward (especially for beginners) in the Drupal VM issue queue: Update docs for Windows with WSL Windows Interoperability improvements.

I've also left feedback in the Vagrant project's issue dealing with the WSL (Use Linux Subsystem on Windows), but I think as more developers start using Vagrant and WSL together, we may find clever ways of working around the current limitations.

You could also just run Linux or macOS as your desktop OS, and avoid these issues entirely ;-)

Apr 01 2017
Apr 01

MidCamp is one of my favorite Drupal events—it hits the sweet spot (at least for me) in terms of diversity, topics, and camp size. I was ecstatic when one of my session submissions was accepted, and just finished presenting Developing for Drupal 8 with Drupal VM.

Drupal VM presentation slide

You can see slides from the presentation here: Drupal VM for Drupal 8 Development, but without the full video there are a lot of gaps (especially on slides where there's just a giant emoji!). Luckily, Kevin Thull of Blue Drop Shop is hard at work recording all the sessions and posting them to YouTube. He's already processed the video from my session, and it's available below:

[embedded content]

This presentation walks through the basic steps involved in using Drupal VM for both Local and Production environments, following along with the blog post I published recently, Soup to Nuts: Using Drupal VM to build local and prod. Some of the topics covered include:

  • How to get started with Drupal VM
  • How to build an identical local and production environment using Drupal VM
  • How to add Apache Solr to Drupal VM and your Drupal site for supercharged search
  • How to add Behat tests to Drupal VM so you can use Behavior Driven Development
  • Features that are planned for inclusion in future Drupal VM versions

It's probably the best summary of what's currently happening with Drupal VM, so if you're a current user, at least skim through to the end of the video to get a glimpse of some of the new features, as well as what's to come!

Learn more about Drupal VM.

Mar 29 2017
Mar 29

AKA "Supercharged Windows-based Drupal development"

tl;dr: Use either PhpStorm or a Samba share in the VM mounted on the host instead of using a (slow) Vagrant synced folder, and use Drupal VM 4.4's new drupal_deploy features. See the video embedded below for all the details!

[embedded content]

I've often mentioned that Windows users who want to build modern Drupal sites and apps are going to have a bit of a difficult time, and even wrote a long post about why this is the case (Developing with VirtualBox and Vagrant on Windows).

But for a long time, I haven't had much incentive to actually get my hands dirty with a Windows environment. Sure, I would make sure Drupal VM minimally ran inside Windows 10... but there were a lot of little inefficiencies in the recommended setup process that would lead to developers getting frustrated with the sluggish speed of the VM!

Since a client I'm currently helping is stuck on Windows 7 for the short-term, and can't do internal development on a Mac or Linux (other OSes are not allowed on workstations), I decided to try to set up a project how I would set it up if I had to work in that situation.

Basically, how do you get Drupal VM to perform as well (or better!) on a Windows 7 machine as it does on a Mac, while still allowing a native IDE (like PhpStorm or Notepad++) to be used?

After a couple days of tinkering, I've found the most stable solution, and that is to build Drupal VM 'outside in'—meaning you build your project inside Drupal VM, then share a folder from Drupal VM to Windows, currently using Samba (but in the future, SSHFS might be a viable option for Windows too!).

Here's how I used Drupal VM to run and develop an Acquia BLT-based project locally on Windows 7:


Install required software

To make this work, you will need to install the following software:

  1. VirtualBox
  2. Vagrant
  3. PuTTY (used for Pageant)
  4. Powershell 4.0+ (if on Windows 7)
  5. Cmder ('full' download)

Create an SSH public/private key pair

Generate an SSH key pair in Cmder

Before doing any work with a project with code hosted on GitHub, Acquia Cloud, Redmine, or elsewhere, you need to create an SSH key pair so you can authenticate yourself to the main Git repository.

  1. Open Cmder (right click and 'Run as Administrator')
  2. Create an SSH key in the default location: ssh-keygen -t rsa -b 4096 -C "[email protected]"
    • Press 'enter' three times to accept the defaults.
  3. Print the contents of the public key: cat C:\Users\[youraccount]\.ssh\
  4. Copy the entire contents (from ssh-rsa to the end of your email address).
  5. Add the copied key into GitHub under Profile icon > Settings > SSH and GPG keys.
    • Also add the key into any other relevant service (e.g. Acquia Cloud under Profile > Credentials).

Set your SSH key to run inside Pageant

Puttygen - Import and convert SSH key to ppk

Due to a bug in the current version of Vagrant, you need to use Pageant to allow Vagrant to use your SSH key inside the VM when running commands like vagrant up and vagrant provision.

Also, Pageant requires a specially-formatted version of your key, so you'll need to use puttygen.exe to convert the key before loading it into Pageant.

  1. Install PuTTY.
  2. Open puttygen.exe (inside C:\Program Files\PuTTY).
  3. Select "Conversions > Import key".
  4. In the 'Actions' section, click "Save private key".
  5. In the save dialog, save the file as id_rsa.ppk with type "PuTTY Private Key Files (*.ppk)".
  6. Close puttygen.exe.
  7. Open pageant.exe (this opens a taskbar item).
  8. Right click on the Pageant item in the taskbar, and choose 'Add key', then navigate to where you saved id_rsa.ppk.
  9. You should also ensure Pageant runs on system startup to ensure keys are always available when you need them. See this guide for further instructions on how to have pageant.exe start on boot.

Build Drupal VM

Drupal VM Dashboard page on Windows 7

At this point, we're ready to build a local development environment with Drupal VM.

Note: In the rest of this guide, substitute projectname for your project's own machine name.

  1. Open Cmder (right click and 'Run as Administrator')
  2. Run start-ssh-agent (so your SSH key is loaded and can be used in the VM).
    • When using Pageant this is not strictly required, but once this bug is resolved, you won't need Pageant at all, and you can just use start-ssh-agent.
  3. Clone a copy of Drupal VM to your computer: git clone
  4. Change directories into Drupal VM: cd drupal-vm
  5. Create a shell script to create your project's directory and set its permissions correctly:

    1. touch examples/scripts/
    2. Open your editor of choice (e.g. Notepad++) and edit the script (be sure to use Unix line endings!):

      mkdir -p /var/www/projectname
      chown vagrant:vagrant /var/www/projectname
  6. Create a config.yml file: touch config.yml

  7. Open your editor of choice (e.g. Notepad++) and edit config.yml (be sure to use Unix line endings!)—also make sure the drupal_deploy_repo variable is set to the correct GitHub repository you want to use as your origin remote:

    vagrant_synced_folders: []
    vagrant_synced_folder_default_type: ""
    vagrant_machine_name: projectname
    drupal_deploy: true
    drupal_deploy_repo: "git@github.com:github-username/projectname.git"
    drupal_deploy_version: master
    drupal_deploy_update: true
    drupal_deploy_dir: "/var/www/projectname"
    drupal_deploy_accept_hostkey: yes
    drupal_core_path: "{{ drupal_deploy_dir }}/docroot"
    ssh_home: "{{ drupal_deploy_dir }}"
    drupal_build_composer: false
    drupal_composer_path: false
    drupal_build_composer_project: false
    drupal_install_site: false
    firewall_allowed_tcp_ports:
      - "22"
      - "25"
      - "80"
      - "81"
      - "443"
      - "4444"
      - "8025"
      - "8080"
      - "8443"
      - "8983"
      - "9200"
      # For reverse-mount Samba share.
      - "137"
      - "138"
      - "139"
      - "445"
    # Run the setup script.
    pre_provision_scripts:
      - "../examples/scripts/"
    # BLT-specific overrides.
    vagrant_box: geerlingguy/ubuntu1404
    drupal_db_user: drupal
    drupal_db_password: drupal
    drupal_db_name: drupal
    configure_drush_aliases: false
    # Use PHP 5.6.
    php_version: "5.6"
    php_packages_extra:
      - "php{{ php_version }}-bz2"
      - "php{{ php_version }}-imagick"
      - imagemagick
    nodejs_version: "4.x"
    nodejs_npm_global_packages:
      - name: bower
      - name: gulp-cli
    drupalvm_user: vagrant
    nodejs_install_npm_user: "{{ drupalvm_user }}"
    npm_config_prefix: "/home/{{ drupalvm_user }}/.npm-global"
    installed_extras:
      - adminer
      - drupalconsole
      - drush
      - mailhog
      - nodejs
      - selenium
      - xdebug
    # XDebug configuration.
    # Change this value to 1 in order to enable xdebug by default.
    php_xdebug_default_enable: 0
    php_xdebug_coverage_enable: 0
    # Change this value to 1 in order to enable xdebug on the cli.
    php_xdebug_cli_enable: 0
    php_xdebug_remote_enable: 1
    php_xdebug_remote_connect_back: 1
    # Use PHPSTORM for PHPStorm, sublime.xdebug for Sublime Text.
    php_xdebug_idekey: PHPSTORM
    php_xdebug_max_nesting_level: 256
    php_xdebug_remote_port: "9000"
  8. Back in Cmder, run vagrant up

  9. Wait for Drupal VM to complete its initial provisioning. If you get an error, try running vagrant provision again.

Set up BLT inside the VM

BLT local refresh command in Cmder in Windows 7

  1. Run vagrant ssh to log into the VM.
  2. Make sure you're in the project root directory (e.g. cd /var/www/projectname).
  3. Run composer install to make sure all project dependencies are installed.
  4. Create blt/project.local.yml with the following contents:

        drush:
          aliases:
            local: self
  5. Run the command on this line to set up the blt alias inside the VM.

    • Note: After this BLT bug is fixed and your project is running a version of BLT with the fix included, you can just run: ./vendor/acquia/blt/scripts/drupal-vm/
  6. Type exit to exit Vagrant, then vagrant ssh to log back in.
  7. Make sure you're back in the project root directory.
  8. Correct NPM permissions: sudo chown -R $USER:$USER ~/.npm-global
  9. Run blt local:refresh to pull down the database and run setup tasks.
    • Note that the first time you connect, you'll need to type yes when it asks if you want to accept the staging site's host key.

At this point, you can use the site locally, and manage the codebase inside the VM.

Use PhpStorm to work on the codebase

For speed and compatibility reasons, this method of using Drupal VM to run a BLT project works 'outside in', with everything happening inside the VM. But if you want to edit the codebase in a native Windows-based editor or IDE, you need access to the project files.

PhpStorm is a fully-featured PHP development IDE from JetBrains, and is used by many in the Drupal community due to its speed and deep integration with PHP projects and Drupal in particular. You can download a free trial, or acquire a license for PhpStorm to use it.

One benefit of using PhpStorm is that it can work directly with the project codebase inside Drupal VM, so you don’t need to configure a shared folder.

  1. Open PhpStorm.
  2. Click “Create New Project from Existing Files” (if you are in a project already, choose File > “New Project from Existing Files…”).
  3. Choose the option “Web server is on remote host, files are accessible via FTP/SFTP/FTPS.” and then click “Next”.
  4. For “Project name”, enter Projectname
  5. Leave everything else as default, and click “Next”.
  6. In the ‘Add Remote Server’ step, add the following information:
    1. Name: Drupal VM
    2. Type: SFTP
    3. SFTP host:
    4. Port: 22
    5. Root path: /var/www/projectname
    6. User name: vagrant
    7. Auth type: Password
    8. Password: vagrant
    9. Click ‘Test SFTP connection’ and accept the host key if prompted.
    10. Ensure the “Web server root URL” is correct.
  7. Click “Next”
  8. Click on the “Project Root” item in the “Choose Remote Path” dialog, and click “Next”.
    1. Leave the ‘Web path’ empty, and click “Finish”.
  9. PhpStorm will start ‘Collecting files’ (this could take a couple minutes).
  10. Change the Deployment configuration in the menu option Tools > Deployment > “Configuration…”
  11. Click on ‘Excluded Paths’ and add deployment paths containing things like Composer and Node.js dependencies:
    1. “/vendor”
    2. “/docroot/themes/custom/project_theme/node_modules”
  12. Enable automatic uploads in menu option Tools > Deployment > “Automatic Upload”

From this point on, you should be able to manage the codebase within PhpStorm. You can also open an SSH session inside the VM directly from PhpStorm—just go to the menu option Tools > “Start SSH Session…”

PhpStorm might also ask if this is a Drupal project—if so you can click on the option to enable it, and make sure the Drupal version is set to the proper version for your site.

Note: If you perform git operations on the codebase running inside the VM, or other operations like configuration exports which generate new files or update existing files, you need to manually sync files back to PhpStorm (e.g. click on the project folder and choose the menu option Tools > Deployment > "Download from Drupal VM"). At this time, only automatic upload of files you edit within PhpStorm is supported. See the issue Auto Refresh of Remote Files for more info and progress towards an automated solution.

Note 2: If you encounter issues with CSS and JS aggregation, or with Stage File Proxy logging errors like Stage File Proxy encountered an unknown error by retrieving file [filename], you might need to manually create the public files folder inside the project docroot (e.g. inside Drupal VM, run mkdir files wherever the public files directory should be).

Configure a reverse-mounted shared folder

Add a Network Location to Windows 7 from Drupal VM's Samba share

If you don't have a PhpStorm license, or prefer another editor or IDE that doesn't allow working on remote codebases (via SFTP), then you can manually create a Samba share instead. In this case, we're going to use Samba, which requires the TCP ports listed in the customized config.yml file above to be open (137, 138, 139, and 445).

  1. (Inside the VM) Install Samba: sudo apt-get install samba
  2. Add the following contents to the bottom of /etc/samba/smb.conf:

       [projectname-share]
       comment = projectname
       path = /var/www/projectname
       guest ok = yes
       force user = vagrant
       browseable = yes
       read only = no
       writeable = yes
       create mask = 0777
       directory mask = 0777
       force create mode = 777
       force directory mode = 777
       force security mode = 777
       force directory security mode = 777
  3. Restart the Samba daemon: sudo service smbd restart (sudo systemctl restart smbd.service on systems with systemd).

  4. Set Samba to start on boot: sudo update-rc.d samba defaults (sudo systemctl enable smbd.service on systems with systemd).
  5. (Back on Windows) Mount the shared folder from Windows Explorer: \\\projectname-share
  6. You can now browse files from within Windows just as you would any other folder.
  7. If you'd like, you can 'Map a new network drive' for more convenient access:
    1. Right-click while browsing 'Computer' and select "Add a network location"
    2. Go through the wizard, and add the shared folder location you used above (\\\projectname-share).
    3. Save the new network location with a descriptive name (e.g. "BEAGOV").
    4. Now, you can just go to Computer and open the network location directly.
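If the share doesn't show up on the Windows side, a quick check on the VM side is Samba's built-in configuration parser, which validates smb.conf and prints the effective share definitions (testparm ships with the samba package):

```sh
# Validate /etc/samba/smb.conf and dump the parsed share definitions.
testparm -s
```

The projectname-share definition should appear in the output; if testparm reports a syntax error, fix smb.conf and restart smbd again.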


A few notes about this setup:

  • You can only access the network location when the VM is running. If it is offline, files are inaccessible.
  • You should always use Unix line endings in your editor to prevent strange errors.
  • You can set guest ok = no if you want to make sure the shared directory is protected by a login. However, unless you change the Drupal VM defaults, the share should only be accessible internally on your computer (and not on the wider LAN).
  • You can also open a folder as a workspace, for example in Notepad++, by manually entering the network path (\\\projectname-share).
  • Global operations (e.g. project-wide search and replace) are going to be very slow if done on the Windows side. It's better to do that either in the VM itself (using tools like grep or vim), or on another computer that can work on the files locally!


Using this 'outside in' approach makes it so you can get near-native performance when building Drupal sites locally on Windows 7, 8, or 10. It requires a little extra effort to get it working correctly, and there are still a couple small tradeoffs, but it's a lot faster and easier to set up than if you were to use normal shared folders!

Hopefully the SSHFS issues I mentioned earlier are fixed soon, so that the reverse shared folder will be easier to configure (and not require any manual steps inside the VM!).

Also, note that you could use the vagrant-exec plugin to run commands inside the VM without first logging in via vagrant ssh.

Mar 26 2017

In preparing for my session Developing for Drupal 8 with Drupal VM at MidCamp later this month, I wanted to build out an example of a canonical "this is the way I'd do it" Drupal 8 site using nothing but Drupal VM and Composer. And I wanted to build both my local development environment and a production environment on DigitalOcean, all using the Ansible automation playbooks built into Drupal VM.

I also wanted to brush up Drupal VM's production environment management capabilities, so I made a few more tweaks to Drupal VM and the main Ansible role that powers the Drupal deployment and installation, and released Drupal VM 4.4 with some great production environment improvements!

Drupal VM can manage production environments, too!

Before you get started, you need to have PHP and Composer installed on your computer. On my Mac, I installed PHP via Homebrew, and Composer using the manual global installation method. But as long as you can run a composer command and have an up-to-date version of Composer (run composer self-update to check!), you should be all set!

Table of Contents

  1. Create a Drupal project with Composer
  2. Build a local development environment with Drupal VM
  3. Build a prod environment with Drupal VM
  4. Pull the prod site down to local
  5. Update the site with Ansible
  6. Open issues

Create a Drupal project with Composer

  1. Create a new Drupal 8 project using drupal-project: composer create-project drupal-composer/drupal-project:8.x-dev projectname --stability dev --no-interaction (replace projectname with the name of your site).
  2. Change directories into the new projectname directory, and take a look around—you should see some Composer files, as well as a web folder. The web folder contains your project's document root, where Drupal and contributed modules and themes will live.
  3. Add a few modules you know you'll need on the site: composer require drupal/admin_toolbar:^1.0 drupal/pathauto:^1.0 drupal/redirect:^1.0
  4. Add in development modules as --dev dependencies (these modules should only be present in the codebase locally, never on production!): composer require --dev drupal/devel:^1.0

Now that we have a full Drupal 8 codebase, it's time to start committing things to it so we can deploy the site in the future, or roll back changes if we decide we don't need them. So let's set up a git repository:

  1. Run git init to track your project in a brand new Git repository.
  2. Run git add -A to add all the files that should be version-controlled to the new Git repository.
  3. Run git commit -m "Initial commit." to store all these changes in the project's first ever commit!
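The drupal-project template also ships with a .gitignore tuned for this layout, so the git add -A above only stages your own code. Conceptually (this is a simplified sketch, not the template's exact file), it excludes generated artifacts like:

```text
/vendor/
/web/core/
/web/modules/contrib/
/web/themes/contrib/
/web/profiles/contrib/
/web/sites/*/files/
/web/sites/*/settings.local.php
```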

Build a local development environment with Drupal VM

A Drupal site is pretty boring if it's just a bunch of PHP files! We need a development environment suitable for local development on any Mac, Linux, or Windows PC, so we can add Drupal VM as another --dev dependency (we don't need Drupal VM in our production codebase, just like we shouldn't need Devel or other local development dependencies). Following the documentation on using Drupal VM as a Composer Dependency, the first step is to add Drupal VM via Composer:

  1. Add Drupal VM: composer require --dev geerlingguy/drupal-vm
  2. Create a VM configuration directory—in my case, I'll call it vm: mkdir vm.
  3. Create a new config.yml file inside this directory, and add some sane defaults (these will be baseline defaults for extra security hardening and performance—we'll override some things for flexibility and debugging later, in a Vagrant-specific config file):

    # Composer project settings.
    drupal_build_composer_project: false
    drupal_build_composer: false
    drupal_composer_dependencies: []
    # Drupal install settings.
    drupal_site_name: "My Production Website"
    drupal_core_path: "/var/www/drupal/web"
    drupal_install_site: true
    drupal_cron_jobs:
      - name: "Drupal Cron"
        minute: "*/15"
        job: "{{ drush_path }} -r {{ drupal_core_path }} core-cron"
    # Other overrides.
    php_version: "7.1"
    php_sendmail_path: "/usr/sbin/sendmail -t -i"
    installed_extras:
      - drush
      - varnish
    # Other secure defaults.
    dashboard_install_dir: ''
    apache_packages_state: installed
    # Restrict the firewall to only ports that are required for external services.
    firewall_allowed_tcp_ports:
      - "22"
      - "80"
      - "443"
    firewall_log_dropped_packets: true
    # Set Apache to listen on port 81 (internal only), and Varnish on 80.
    apache_listen_port: "81"
    varnish_listen_port: "80"
    varnish_default_backend_port: "81"
  4. Also, since we added in some modules, we can have Drupal VM automatically install them the first time we build the site, by adding them to drupal_enable_modules in config.yml (note that you can override any of the config in Drupal VM's default.config.yml in your project-specific config.yml):

    drupal_enable_modules:
      - admin_toolbar
      - admin_toolbar_tools
      - pathauto
      - redirect
  5. There are some configuration settings that should be different in the project's local environment, and Drupal VM automatically uses a vagrant.config.yml file when building the local Vagrant VM. So we'll add those local-environment-specific overrides in a vagrant.config.yml alongside the config.yml file:

    # Local Vagrant options.
    vagrant_machine_name: local-example
    # Configure the synced folder.
    vagrant_synced_folders:
      - local_path: .
        destination: /var/www/drupal
        type: nfs
    # Undo some of the extra-hardened security settings in config.yml.
    drupal_account_pass: admin
    drupal_db_password: drupal
    mysql_root_password: root
    php_sendmail_path: "/opt/mailhog/mhsendmail"
    installed_extras:
      - drush
      - mailhog
      - varnish
    dashboard_install_dir: /var/www/dashboard
    extra_security_enabled: false
    firewall_allowed_tcp_ports:
      - "22"
      - "25"
      - "80"
      - "81"
      - "443"
      - "8025"
    firewall_log_dropped_packets: false
    # Set Apache to listen on port 80, and Varnish on 81.
    apache_listen_port: "80"
    varnish_listen_port: "81"
    varnish_default_backend_port: "80"
  6. Create a 'delegating Vagrantfile', which will be used by Vagrant to connect all the dots and make sure Drupal VM works correctly with your project. Create a Vagrantfile inside your project's root directory, with the following contents:

    # The absolute path to the root directory of the project.
    ENV['DRUPALVM_PROJECT_ROOT'] = "#{__dir__}"
    # The relative path from the project root to the VM config directory.
    ENV['DRUPALVM_CONFIG_DIR'] = "vm"
    # The relative path from the project root to the Drupal VM directory.
    ENV['DRUPALVM_DIR'] = "vendor/geerlingguy/drupal-vm"
    # Load the real Vagrantfile
    load "#{__dir__}/#{ENV['DRUPALVM_DIR']}/Vagrantfile"
  7. Also, to ensure that you don't accidentally commit Vagrant-related files in your project's Git repository, add the following to the .gitignore file in your project root:

    # Ignore Vagrant files.
    .vagrant/
  8. At this point, after running through Drupal VM's Quick Start Guide, you should be able to run vagrant up, and in just a few minutes, you can visit the URL defined in config.yml for vagrant_hostname to see a fresh new Drupal 8 site!

  9. Make sure you commit your changes that added Drupal VM to the project: git add -A then git commit -m "Add Drupal VM to the project."

Note: For easier provisioning and local usage, be sure to install extra Vagrant plugins: vagrant plugin install vagrant-vbguest vagrant-hostsupdater vagrant-cachier
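To recap, after these steps the project layout looks roughly like this (using the directory names chosen above):

```text
projectname/
├── Vagrantfile                # delegating Vagrantfile (loads Drupal VM's)
├── composer.json
├── vm/
│   ├── config.yml             # shared, production-leaning defaults
│   └── vagrant.config.yml     # local-only overrides
├── vendor/
│   └── geerlingguy/drupal-vm/ # Drupal VM itself, a --dev dependency
└── web/                       # the Drupal docroot
```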

Build a prod environment with Drupal VM

Having a local development environment is well and good... but it would be great if Drupal VM could also manage a production server to match the local environment identically! Luckily, Drupal VM can do this quite easily. Most of the steps below follow the documentation guide for Deploying to a production environment.

  1. Create a new Droplet/Linode/other type of cloud-hosted Virtual Server. You should make sure to include your SSH public key when building the server (or manually add it to the root account with ssh-copy-id after the server is built). See the Deploying to prod guide for details.
  2. Define configuration that should only apply to the production environment: create a file named prod.config.yml inside the vm configuration directory, with the following configuration:

    # Deploy from the project's Git repository.
    drupal_deploy: true
    drupal_deploy_repo: "git@github.com:geerlingguy/drupalvm-live.git"
    drupal_deploy_dir: /var/www/drupal
    # Set the domain for this site appropriately.
    drupal_domain: ""
    vagrant_hostname: "{{ drupal_domain }}"
    # Only add the production docroot virtualhost.
    apache_vhosts:
      - servername: "{{ drupal_domain }}"
        documentroot: "{{ drupal_core_path }}"
        extra_parameters: "{{ apache_vhost_php_fpm_parameters }}"
  3. For improved security, you should store sensitive passwords (and any other variables like API keys) in an encrypted vars file. Drupal VM recommends using Ansible Vault to encrypt a secrets.yml file containing said passwords and keys. Create a secrets.yml file in the vm directory using the command ansible-vault create secrets.yml, then put the following inside (replacing the actual passwords with your own secure ones!):

    drupal_account_pass: add-your-secure-password-1-here
    drupal_db_password: add-your-secure-password-2-here
    mysql_root_password: add-your-secure-password-3-here
  4. Create an Ansible inventory file, in the vm folder (alongside the other Drupal VM configuration), named inventory, with the contents below (use your server's IP address, and replace my_admin_username with the admin username you'll set in the next step):

    [drupalvm]
    # Replace 203.0.113.10 with your server's IP address:
    203.0.113.10 ansible_ssh_user=my_admin_username
  5. Bootstrap the server with your own administrator account and SSH key (this is a one-time process when you build the server):

    1. Copy the example bootstrap vars to a plain vars file: cd vendor/geerlingguy/drupal-vm/examples/prod/bootstrap/ && cp example.vars.yml vars.yml
    2. Edit the vars.yml file with your own information (e.g. your desired user account name, a password, etc.).
    3. Go back to your project's root directory.
    4. Run the init.yml playbook: ansible-playbook -i vm/inventory vendor/geerlingguy/drupal-vm/examples/prod/bootstrap/init.yml -e "ansible_ssh_user=root"
      • You may need to accept the host key the first time you connect to the server. Type yes to accept the host key.
      • You can delete the vars.yml file you created in step 1. Just remember the administrator account's password in case you need it in the future!
    • Note that this manual step may be fixed at some point in the future. It currently can't be automated correctly using the same exact setup that's used for Vagrant locally.
  6. Run the main Drupal VM playbook to provision the entire server: DRUPALVM_ENV=prod ansible-playbook -i vm/inventory vendor/geerlingguy/drupal-vm/provisioning/playbook.yml -e "config_dir=$(pwd)/vm" --sudo --ask-sudo-pass --ask-vault-pass
    • Note the addition of --ask-sudo-pass (this will ask for the password you set in the vars.yml file for the admin account you configured when bootstrapping the server).
    • Note the addition of --ask-vault-pass (this will ask you for the secrets.yml Ansible Vault password so Ansible can temporarily decrypt the contents of that file).

After 5-10 minutes (or longer, depending on how many things you're installing!), the playbook should complete, and if you visit your production VM's URL, you should see Drupal 8 installed, yay!

Note: You can also have Ansible forward SSH keys if you use ssh-agent; this is helpful when doing things like cloning from a private Git repository, because otherwise you'd have to manually place a key on the server before running the Ansible playbook. To use SSH Agent forwarding, add an ansible.cfg file in your project root, and put in the following contents:

[ssh_connection]
ssh_args = -o ForwardAgent=yes

You also need to add any keys you want ssh-agent to use via ssh-add -K (check what keys are currently loaded using ssh-add -l).

Pull the prod site down to local

Now the rubber meets the road. It's nice to be able to build a new site, but sites are nothing if they aren't maintained. The first step in maintaining a live Drupal site is to be able to pull the database and files down to your local environment so you can test your changes with production data.

Drush will be our weapon of choice, and using it to pull down the database is very simple, using the sql-sync command. The first step is to describe two aliases to Drush. We can add them to the project (so anyone else working on the site gets the same aliases) by creating a file at drush/site-aliases/aliases.drushrc.php, with the contents:

<?php

/**
 * @file
 * Drush aliases for the project (the alias names here are examples).
 */

$aliases['local'] = array(
  'root' => '/var/www/drupal/web',
  'uri' => '',
  'remote-host' => '',
  'remote-user' => 'vagrant',
  'ssh-options' => '-o PasswordAuthentication=no -i ' . drush_server_home() . '/.vagrant.d/insecure_private_key',
);

$aliases['prod'] = array(
  'root' => '/var/www/drupal/web',
  'uri' => '',
  'remote-host' => '',
  'remote-user' => 'my_admin_username',
);

Then, to make sure Drush uses your project-specific settings and aliases, add a drush.wrapper file in the project root with the following contents:

#!/usr/bin/env bash
DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
cd "$(dirname "$0")"
./vendor/bin/drush.launcher --local --alias-path=${DIR}/drush/site-aliases "$@"

Now, if you run drush status against the production alias, you should see the live server's status information, and running it against the local alias shows the local environment's status. To pull the production database down to your local environment, run:

drush sql-sync

Confirm you want to destroy the local database and replace it with the one from production, then after a minute or so, you should be able to work on your local environment to develop using production content and settings!

For files, you can use Drush's trusty rsync command (e.g. drush rsync -y), but for most sites, I prefer to install Stage File Proxy instead. That way you don't have to fully replicate the (often ginormous) files directory on your local machine just to see the site as it looks on prod.

Once everything's in sync, it's a good idea to:

  • Run drush updb -y if you might have updated code or modules (so database updates are applied)
  • Run drush cim -y if you have configuration that's updated in your local codebase
  • Run drush cr to reset caches, since you'll want the local environment to start fresh
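Assuming hypothetical alias names like @example.prod and @example.local (substitute whatever names you defined in your alias file), the whole refresh cycle condenses to:

```sh
# Pull the production database down, then bring local state up to date.
drush sql-sync @example.prod @example.local -y
drush @example.local updb -y   # apply any pending database updates
drush @example.local cim -y    # import configuration from the codebase
drush @example.local cr        # rebuild caches
```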

Load the site in your browser, and you should now see that your local Drupal VM environment has all the wonderful content from production!

Update the site with Ansible

One of the first things I do on a Drupal 8 site is get all the configuration exported and checked into Git so configuration changes can easily be deployed to production. The first thing I'll do is create a config/default folder inside the project root (note that it's outside the project's docroot directory, for security reasons!). Then edit web/sites/default/settings.php and change the $config_directories['sync'] setting, which should be near the bottom of the file:

$config_directories['sync'] = '../config/default';

Then run drush cex to export the configuration, then git add -A and git commit all of it to the repository. From this point on, any changes you make to the site's configuration during local development—from adding a content type, to removing a field, modifying a view, or even something mundane like changing the site's name—can be exported to code via drush cex and then deployed to production!

Note: You'll also need to update this value in the production site's settings.php file (just this once). You can either use Ansible's lineinfile module to do this, or (for just this once, I won't tell anyone!) log into the server, run sudo vi /var/www/drupal/web/sites/default/settings.php, and replace the sync line at the bottom with $config_directories['sync'] = '../config/default';.

Change something simple, like the site name, or enable or disable a module, then run drush cex to export the updated configuration. If you run git status you'll see exactly what changed. After you commit the changes to your Git repository and push them, it's time to deploy to production, yay!

Let Ansible do the hard work:

DRUPALVM_ENV=prod ansible-playbook -i vm/inventory vendor/geerlingguy/drupal-vm/provisioning/playbook.yml -e "config_dir=$(pwd)/vm" --sudo --ask-sudo-pass --ask-vault-pass --tags=drupal

This is the exact same command we ran earlier (to build the server), with one small addition: --tags=drupal. That flag tells Ansible to only run the tasks that are related to the Drupal site configuration and deployment.
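Tags work because Ansible skips every task that doesn't match a requested tag. Drupal VM marks its Drupal deployment and install tasks with a drupal tag, so a task shaped roughly like the sketch below (illustrative only, not Drupal VM's literal source) is the kind that still runs:

```yaml
# Illustrative sketch of a tagged deploy task.
- name: Check out the Drupal codebase from the deploy repository.
  git:
    repo: "{{ drupal_deploy_repo }}"
    dest: "{{ drupal_deploy_dir }}"
    version: "{{ drupal_deploy_version }}"
  tags: ['drupal']
```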

Drupal VM production update deployment in Terminal

After a few minutes, you should see that there were a few changes reported by Ansible: the Drupal codebase was updated, database updates were run automatically, and caches were cleared. As of March 2017, other tasks (like configuration import, features revert, etc.) are not automatically run by Drupal VM, but that may change by the time you do this! My hope is to have this feature added in soon.

Open Issues

The setup outlined above has a few issues—issues which currently block me from using Drupal VM on most of my own Drupal 8 standalone servers. Here are some of the blockers I'm still working on (and you can help with!):

  1. 100% hands-free provisioning (even down to creating a DigitalOcean Droplet, Amazon Web Services EC2 instance, etc.).
  2. Not being able to tell the production instance to 'deploy from Git'. I fixed this while writing the blog post :)
  3. Slightly inflexible support for Ansible Vault usage—it's best practice to encrypt and securely encapsulate all production 'secrets' (API keys, passwords, etc.), but the way things work currently, you can only throw everything inside one secrets.yml file.

In the meantime, I often clone Drupal VM, then slightly fork it for specific projects (e.g. the DrupalCamp St. Louis website). This way I'm still using a familiar setup, but I can tailor it exactly to the particular deploy setup and strange infrastructure quirks required for a given site.

Mar 22 2017

Windows 10 is the only release Acquia's BLT officially supports. But there are still many people who use Windows 7 and 8, and most of these people don't have control over what version of Windows they use.

Windows 7 - Drupal VM and BLT Setup Guide

Drupal VM has supported Windows 7, 8, and 10 since I started building it a few years ago (at that time I was still running Windows 7). With a little finesse, you can actually get an entire modern BLT-based Drupal 8 project running on Windows 7 or 8, as long as you do all the right things, as will be demonstrated in this blog post.

Note that this setup is not recommended—you should try as hard as you can to either upgrade to Windows 10, or switch to Linux or macOS for your development workstation, as setup and debugging are much easier on a more modern OS. However, if you're a sucker for pain, have at it! The process below is akin to the Apollo 13 Command Module startup sequence:

It required the crew—in particular the command module pilot, Swigert—to perform the entire power-up procedure in the blind. If he made a mistake, by the time the instrumentation was turned on and the error was detected, it could be too late to fix. But, as a good flight controller should, Aaron was confident his sequence was the right thing to do.

Following the instructions below, you'll be akin to Swigert: there are a number of things you have to get working correctly (in the right sequence) before BLT and Drupal VM can work together in Windows 7 or Windows 8. And once you have your environment set up, you should do everything besides editing source files and running Git commands inside the VM (and, if you want, you can actually do everything inside the VM). For more on why this is the case, please read my earlier post: Developing with VirtualBox and Vagrant on Windows.

Here's a video overview of the entire process (see the detailed instructions below the video):

[embedded content]

Upgrade PowerShell

Windows 7 ships with a very old version of PowerShell (2.0) which is incompatible with Vagrant, and causes vagrant up to hang. To work around this problem, you will need to upgrade to PowerShell 4.0:

  1. Visit the How to Install Windows PowerShell 4.0 guide.
  2. Download the .msu file appropriate for your system (most likely Windows6.1-KB2819745-x64-MultiPkg.msu).
  3. Open the downloaded installer.
  4. Run through the install wizard.
  5. Restart your computer when the installation completes.
  6. Open Powershell and enter the command $PSVersionTable.PSVersion to verify you're running major version 4 or later.

Install XAMPP (for PHP)

XAMPP will be used for its PHP installation, but it won't be used for actually running the site; it's just an easy way to get PHP installed and accessible on your Windows computer.

  1. Download XAMPP (PHP 5.6.x version).
  2. Run the XAMPP installer.
  3. XAMPP might warn that UAC is enabled; ignore this warning, you don't need to bypass UAC to just run PHP.
  4. On the 'Select Components' screen, only choose "Apache", "PHP", and "Fake Sendmail" (you don't need to install any of the other components).
  5. Install in the C:\xampp directory.
  6. Uncheck the Bitnami checkbox.
  7. When prompted, allow access to the Apache HTTP server included with XAMPP.
  8. Uncheck the 'Start the control panel when finished' checkbox and Finish the installation.
  9. Verify that PHP is installed correctly:
    1. Open Powershell.
    2. Run the command: C:\xampp\php\php.exe -v

Note: If you have PHP installed and working through some other mechanism, that's okay too. The key is we need a PHP executable that can later be run through the CLI.

Set up Cmder

  1. Download Cmder - the 'full' installation.
  2. Expand the zip file archive.
  3. Open the cmder directory and right-click on the Cmder executable, then choose 'Run as administrator'.
    • If you are prompted to allow access to Cmder utilities, grant that access.
  4. Create an alias to PHP: alias php=C:\xampp\php\php.exe $*
  5. Verify that the PHP alias is working correctly: php -v (should return the installed PHP version).

Cmder is preferred over Cygwin because its terminal emulator works slightly better than mintty, which is included with Cygwin. Cygwin can also be made to work, as long as you install the following packages during Cygwin setup: openssh, curl, unzip, git.

Configure Git (inside Cmder)

There are three commands you should run (at a minimum) to configure Git so it can work correctly with the repository:

  1. git config --global user.name "John Doe" (use your own name)
  2. git config --global user.email you@example.com (use an email address associated with your GitHub account)
  3. git config --global core.autocrlf true (to ensure correct line endings)
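These commands simply write entries into the global .gitconfig file in your home directory; afterwards it should contain something like (with your own name and email):

```ini
[user]
	name = John Doe
	email = you@example.com
[core]
	autocrlf = true
```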

Install Composer (inside Cmder)

  1. Make sure you're in your home directory: cd C:\Users\[yourusername].
  2. Run the following commands to download and install Composer:
    1. Download Composer: php -r "copy('', 'composer-setup.php');"
    2. Build the Composer executable: php composer-setup.php
    3. Delete the setup file: rm -f composer-setup.php
  3. Test that Composer is working by running php composer.phar --version.
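For reference, Composer's installer is downloaded from https://getcomposer.org/installer, so the full download-and-install sequence from the steps above is:

```sh
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php composer-setup.php
rm -f composer-setup.php
php composer.phar --version
```

Composer's own install instructions additionally recommend verifying the installer's SHA-384 checksum before running it.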

Create a Private/Public SSH Key Pair to authenticate to GitHub and Acquia Cloud

Generate an SSH key

All the following commands will be run inside of Cmder.

  1. Inside of Cmder, make sure you're in your home directory: C:\Users\[yourusername]
  2. Run the command: ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
    • When prompted for where to save the key, press enter/return (to choose the default path).
    • When prompted to enter a passphrase, press enter/return (to leave it empty).
    • When prompted to enter the same passphrase, press enter/return (to leave it empty).
  3. To get the value of the new public key, run the command: cat .ssh/id_rsa.pub
    • Highlight the text that is output (starts with ssh-rsa and ends with your email address)
    • Copy the text of the public key

Note: The public key (id_rsa.pub) can be freely shared and added to services where you need to connect. NEVER share the private key (id_rsa), as that is a secret that identifies you personally to the services to which you're connecting.
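As an aside, if you ever lose the public half of the key pair, you can re-derive it from the private key instead of generating a new pair; a sketch, assuming the default key path used above:

```shell
# Regenerate the public key from the existing private key
ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub

# Confirm it looks right (starts with ssh-rsa)
cat ~/.ssh/id_rsa.pub
```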

Add the SSH key to your GitHub account

  1. Log into GitHub
  2. Go to your account settings (click on your profile image in the top right, then choose 'Settings').
  3. Click on "SSH and GPG keys" in the 'Personal settings' area.
  4. Click "New SSH key" to add your new SSH key.
  5. Add a title (like "Work Windows 7")
  6. Paste your key in the "Key" field.
  7. Click "Add SSH Key" to save the key.

Add the SSH key to your Acquia Cloud account

  1. Log into Acquia Cloud
  2. Go to your Profile (click on your profile image in the top right, then choose 'Edit Profile').
  3. Click on "Credentials"
  4. Under "SSH Keys", click "Add SSH Key"
  5. Add a nickname (like "Work_Windows_7")
  6. Paste your key in the "Public key" field.
  7. Click "Add Key" to save the key.

Set up the BLT-based Drupal project

Fork the BLT project into your own GitHub account

  1. Log into GitHub.
  2. Visit your project's GitHub repository page.
  3. Click the "Fork" button at the top right to create a clone of the project in your own account.
  4. After GitHub has created the Fork, you'll be taken to your Fork's project page.

Clone the BLT project to your computer

  1. Click the "Clone or download" button on your Fork's project page.
  2. Copy the URL in the 'Clone with SSH' popup that appears.
  3. Go back to Cmder, and cd into whatever directory you want to store the project on your computer.
  4. Enter the command: git clone [URL that you copied in step 2 above]
    • If you receive a prompt asking if you want to connect to github.com, type yes and press enter/return.
    • Wait for Git to clone the project to your local computer.

Note: At this time, you may also want to add the 'canonical' GitHub repository as an upstream remote repo so you can easily synchronize your codebase with the repository you forked earlier. That way it's easy for you to make sure you're always working on the latest code. This also makes it easier to do things like open Pull Requests on GitHub from the command line using Hub.
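A sketch of adding that upstream remote and syncing from it (the URL here is a placeholder; substitute your project's actual canonical repository URL):

```shell
# Add the canonical repository as a second remote named 'upstream' (placeholder URL)
git remote add upstream git@github.com:example-org/projectname.git

# Pull in the latest canonical changes whenever you need to sync
git fetch upstream
git merge upstream/master
```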

Install BLT project dependencies

  1. Change directory into the project directory (cd projectname).
  2. Ensure you're on the master branch by checking the repository's status: git status
  3. Move the composer.phar file into the project directory: mv ..\composer.phar .\composer.phar
  4. Run php composer.phar install --prefer-source to ensure everything needed for the project is installed.
    • Wait for Composer to finish installing all dependencies. This could take 10-20 minutes the first time it's run.
    • If the installation times out, run php composer.phar install --prefer-source again to pick up where it left off.
    • The installation may warn that patches can't be applied; ignore this warning (we'll reinstall dependencies later).
  5. Move the composer.phar file back out of the project directory: mv composer.phar ..\composer.phar (in case you ever need it on your host computer again).

Note: You need to use the --prefer-source option when installing to prevent the ZipArchive::extractTo(): Full extraction path exceed MAXPATHLEN (260) error. See this Stack Overflow answer for details.

Set up the Virtual Machine (Drupal VM)

  1. Download and install Vagrant.
  2. Download and install VirtualBox.
  3. Restart your computer after installation.
  4. Open Cmder, and cd into the project folder (where you cloned the project).
  5. Install recommended Vagrant plugins: vagrant plugin install vagrant-hostsupdater vagrant-vbguest vagrant-cachier
    • Note that if you get an error about a space in your username, you will need to add a folder and VAGRANT_HOME environment variable (see this post for more info).
  6. Create a config file for your VM instance: touch box/local.config.yml
  7. Inside the config file, add the following contents (you can use vi or some other Unix-line-ending-compatible editor to modify the file):

    vagrant_synced_folder_default_type: ""
    vagrant_synced_folders:
      - local_path: .
        destination: /var/www/projectname
        type: ""
    post_provision_scripts: []
  8. Run vagrant up.

    • This will download a Linux box image, install all the prerequisites, then configure it for the Drupal site.
    • This could take 10-20 minutes (or longer), depending on your PC's speed and Internet connection.
    • If the command stalls out and seems to not do anything for more than a few minutes, you may need to restart your computer, then run vagrant destroy -f to destroy the VM, then run vagrant up again to start fresh.

Note: If you encounter any errors during provisioning, kick off another provisioning run by running vagrant provision again. You may also want to reload the VM to make sure all the configuration is correct after a failed provision—to do that, run vagrant reload.

Log in and start working inside the VM

  1. First make sure that ssh-agent is running and has your SSH key (created earlier) loaded: start-ssh-agent
  2. Run vagrant ssh to log into the VM (your SSH key will be used inside the VM via ssh-agent).
  3. Change directories into the project directory: cd /var/www/projectname
  4. Delete the Composer vendor directory: rm -rf vendor
  5. Also delete downloaded dependencies—in Windows Explorer, remove the contents of the following directories:
    • docroot/core
    • docroot/libraries
    • docroot/modules/contrib
    • docroot/themes/contrib
    • docroot/profiles/contrib
  6. Run composer install so all the packages will be reinstalled and linked correctly.
  7. Manually run the BLT post-provision shell script to configure BLT inside the VM: ./vendor/acquia/blt/scripts/drupal-vm/
  8. Log out (exit), then log back in (vagrant ssh) and cd back into the project directory.
  9. Use BLT to pull down the latest version of the database and finish building your local environment: blt local:refresh

After a few minutes, the site should be up and running, accessible at the project's local development URL.

Note: ssh-agent functionality is outside the scope of this post. Basically, it allows you to use your SSH keys on your host machine in other places (including inside Drupal VM) without worrying about manually copying anything. If you want ssh-agent to automatically start whenever you open an instance of Cmder, please see Cmder's documentation: SSH Agent in Cmder.

Mar 16 2017

I'm excited to be presenting at this year's DrupalCon Baltimore on a topic near and dear to my heart: I'll be presenting Just Keep Swimming: Don't drown in your open source project! at DrupalCon next week.

On a basic level, I'll outline ways I deal with rage-inducingly-vague bug reports, hundreds of GitHub notifications per day, angry and entitled users, and keep a positive attitude that allows me to continue to contribute on a daily basis.

This will be my first time presenting in the Being Human track, but I'm excited because in a lot of ways, it's the DrupalCon track that I get the most out of—what good is the work we do if we can't tie it back to our own identity, our community, and ultimately a sense of relationship with other people? There are many other great sessions in the Being Human track; if you're a DrupalCon veteran and have spent most of your time in the more technical sessions (and even the 'hallway track'), you're missing out on a wealth of knowledge gained through the community's collective experience!

The Birds of a Feather schedule was also just posted, so please add a BoF if you're interested in discussing some aspect of Drupal, Open Source, community, etc. (I just added a BoF to discuss Managing Drupal sites with Composer, as that's a topic that I deal with on a daily basis, and there's surely room for improvement!)

Mar 14 2017

Last week, the proverbial floodgates were opened when Drupal.org finally opened access to any registered user to create a 'full' project (theme, module, or profile). See the Project Applications Process Revamp issue on Drupal.org for more details.
You can now contribute full Drupal projects even if you're new to the community!

Some people weren't comfortable with this change, but it makes onboarding new Drupal contributors a more painless and rewarding experience. In the past, when projects got stuck in Project Application Review hell, it could take months, even years, before a new Drupal contrib project would be granted access to the exclusive 'full project' club! One of the sad byproducts of that process was the fact that so few contrib themes seem to have been created in the past year or so (compared to the Drupal 6 and 7 release cycles, parts of which still allowed anyone access to full projects).

Whether it's a net benefit for the community or whether there are dragons lurking in this new process, only time will tell. I think it will be a huge boon for contrib health, especially since full projects are much easier to download, evaluate, test, fix, and work with than sandbox projects—and security coverage is still possible through the existing PAR process. I know that if I had hit tons of roadblocks the first time I tried submitting a theme I had just spent many hours building, I would've given up on submitting any Drupal contrib at that point. Other ecosystems (WordPress plugins, NPM modules, Packagist libraries, etc.) don't have such a high bar for entry, but they don't offer automatic security support either!

In any case, one of the best things to come out of the entire PAR situation is a neat script/project, pareview.sh, that runs any Drupal module through a gauntlet of tests, checking for code quality, typos, etc. And you can run it on your own modules, easily—whether they're on Drupal.org, GitHub, or locally on your filesystem!

Traditionally, you had two options for running pareview.sh on your module:

  1. You could use the extremely handy online pareview.sh service to test any publicly-accessible Git project repository. (But this didn't work for local or private projects, or for really quick iterative fixes.)
  2. You could spend some time installing all the script's dependencies as outlined in the installation docs. (But this can take a while, and assumes you're on Ubuntu — some of the instructions take some time to get right on a Mac or other Linux distros.)

At Pieter De Clercq's suggestion, though, I added out-of-the-box support for pareview.sh to Drupal VM (see the issue: Feature Idea: pareview.sh as installed extra).

Enabling pareview.sh on Drupal VM

The setup is very simple, and works the same on macOS, Linux, or Windows (anywhere Drupal VM runs!):

  1. Download Drupal VM.
  2. Install Vagrant and VirtualBox.
  3. Create a config.yml file inside the Drupal VM directory, and put in the following contents:

    post_provision_scripts:
      - "../examples/scripts/pareview.sh"
    composer_global_packages:
      - { name: hirak/prestissimo, release: '^0.3' }
      - { name: drupal/coder, release: '^8.2' }
    nodejs_version: "6.x"
  4. Open a Terminal and cd into the Drupal VM directory, and run vagrant up.

Note: For an even speedier build (if you're just using Drupal VM to test out a module or three), you can add the following to config.yml to make sure only the required dependencies are installed, and to prevent a full Drupal 8 site from being built (if you don't need it):

installed_extras: []
extra_packages: []
drupal_build_composer_project: false
drupal_install_site: false
configure_drush_aliases: false

A new Drupal VM instance should be built within 5-10 minutes (depending on your computer's speed), with pareview.sh configured inside.

Using pareview.sh on Drupal VM

Drupal VM pareview.sh script after vagrant ssh

In the Drupal VM directory (still in the Terminal), run vagrant ssh to log into the VM. Then run the pareview.sh script on a module just to ensure it's working.

This should output some messages, and maybe even a few errors (hey, no maintainer is perfect!), and then drop you back to the command line.

You can also run the script against any local project. If you drop a module named my_custom_module into the same directory as the Vagrantfile, you can see it inside the VM in the /vagrant folder, so you can run the script against your custom module like so: pareview.sh /vagrant/my_custom_module

See the documentation for more usage examples and notes, and go make your modules perfect!


Special thanks to Pieter De Clercq (PieterDC) for the inspiration for this new feature, to Klaus Purer (klausi) for the amazing work on pareview.sh, and also to Patrick Drotleff (patrickd) for maintaining the free online version.

Feb 24 2017

Another day, another Acquia Developer Certification exam review (see the previous one: Certified Back end Specialist - Drupal 8). I recently took the Front End Specialist – Drupal 8 Exam, so I'll post some brief thoughts on the exam below.

Acquia Certified Front End Specialist - Drupal 8 Exam Badge

Now that I've completed all the D8-specific Certifications, I think the only Acquia Certification I haven't completed is the 'Acquia Cloud Site Factory' Exam—one for which I'm definitely not qualified, as I haven't worked on a project that uses Acquia's 'ACSF' multisite setup (though I do a lot of other multisite and install profile/distribution work, just nothing specific to Site Factory!). Full disclosure: Since I work for Acquia, I am able to take these Exams free of charge, though many of them are worth the price depending on what you want to get out of them. I paid for the first two that I took (prior to Acquia employment) out of pocket!

Some old, some new

This exam feels very much in the style of the Drupal 7 Front End Specialist exam—there are questions on theme hook suggestions, template inheritance, basic HTML5 and CSS usage, basic PHP usage (e.g. how do you combine two arrays, in what order are PHP statements evaluated... really simple things), etc.

The main difference with this exam centers on the little differences in doing all the same things. For example, instead of PHPTemplate, Drupal 8 uses Twig, so there are questions relating to Twig syntax (e.g. how to chain a filter to a variable, how to print a string from a variable that has multiple array elements, how to do basic if/else statements, etc.). The question content is the same, but the syntax is using what would be done in Drupal 8. Another example is theme hook suggestions—the general functionality is identical, but there were a couple questions centered on how you add or use suggestions specifically in Drupal 8.
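If you want a feel for the syntax in question, here's an illustrative Twig snippet (not an actual exam question) showing filter chaining and a basic if/else:

```twig
{# Chain filters onto a variable #}
{{ title|striptags|upper }}

{# Basic if/else #}
{% if items %}
  <p>{{ items|length }} items found.</p>
{% else %}
  <p>No items found.</p>
{% endif %}
```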

The main thing that tripped me up a little bit (mostly due to my not having used it too much) is new Javascript functionality and theme libraries in Drupal 8. You should definitely practice adding JS and CSS files, and also learn about differences in Drupal 8's Javascript layer (things like using 'use strict';, how to make sure Drupal.behaviors are available to your JS library, and the like).

I think if you've built at least one custom theme with a few Javascript and CSS files, and a few custom templates, you'll do decently on this exam. Bonus points if you've added a JS file that shouldn't be aggregated, added translatable strings in both Twig files and in JS, and worked out the differences in Drupal's stable and classy themes in Drupal 8 core.

For myself, the only preparation for this exam was:

  • I've helped build two Drupal 8 sites with rather complex themes, with many libraries, dozens of templates, use of Twig extends and include syntax, etc. Note that I was probably only involved in theming work 20-30% of the time.
  • I built one really simple Drupal 8 custom theme for a photo sharing website (closed to the public).
  • I read through the excellent Drupal 8 Theming Guide by Sander Tirez (sqndr)

My Results

I scored 83.33% (10% better than the Back End test... maybe I should stick to theming :P), with the following section-by-section breakdown:

  • Fundamental Web Development Concepts: 92.85%
  • Theming concepts: 73.33%
  • Templates and Pre-process Functions: 87.50%
  • Layout Configuration: 66.66%
  • Performance: 100.00%
  • Security: 100.00%

I'm not surprised I scored worst in Layout Configuration, as there were some questions about defining custom regions, overriding region-specific markup, and configuring certain things using the Breakpoints and Responsive Images module. I've done all these things, but only rarely, since you generally set up breakpoints only when you initially build the theme (and I only did this once), and I only deal with Responsive Images for a few specific image field display styles, so I don't use it enough to remember certain settings, etc.

It's good to know I keep hitting 90%+ on performance and security-related sections—maybe I should just give up site building and theming and become a security and performance consultant! (Heck, I do a lot more infrastructure-related work than site-building outside of my day job nowadays...)

This exam was not as difficult as the Back End Specialist exam, because Twig syntax and general principles are very consistent from Drupal 7 to Drupal 8 (and dare I say better and more comprehensible than in the Drupal 7 era!). I'm also at a slight advantage because almost all my Ansible work touches on Jinja2, which is the templating system that inspired Twig—in most cases, syntax, functions, and functionality are identical... you just use {{.j2}} instead of {{.twig}} for the file extension!

Feb 22 2017

Continuing along with my series of reviews of Acquia Developer Certification exams (see the previous one: Drupal 8 Site Builder Exam), I recently took the Back End Specialist – Drupal 8 Exam, so I'll post some brief thoughts on the exam below.

Acquia Certified Drupal Site Builder - Drupal 8 2016
I didn't get a badge with this exam, just a cert... so here's the previous exam's badge!

Acquia finally updated the full suite of Certifications—Back/Front End Specialist, Site Builder, and Developer—for Drupal 8, and the toughest exams to pass continue to be the Specialist exams. This exam, like the Drupal 7 version of the exam, requires a deeper knowledge of Drupal's core APIs, layout techniques, Plugin system, debugging, security, and even some esoteric things like basic webserver configuration!

A lot of new content makes for a difficult exam

Unlike the other exams, this exam sets a bit of a higher bar—if you don't do a significant amount of Drupal development and haven't built at least one or two custom Drupal modules (nothing crazy, but at least some block plugins, maybe a service or two, and some other integrations), then it's likely you won't pass.

There are a number of questions that require at least working knowledge of OOP, Composer, and Drupal's configuration system—things that an old-time Drupal developer might know absolutely nothing about! I didn't study for this exam at all, but would've likely scored higher if I spent more time going through some of the awesome Drupal ladders or other study materials. The only reason I passed is I work on Drupal 8 sites in my day job, and have for at least 6 months, and in my work I'm exposed to probably 30-50% of Drupal's APIs.

Unlike in Drupal 7, there are no CSS-related questions and few UI-related questions whatsoever. This is a completely new and more difficult exam that covers a lot of corners of Drupal 8 that you won't touch if you're mostly a site builder or themer.

My Results

I scored 73%, with the following section-by-section breakdown:

  • Fundamental Web Concepts: 80.00%
  • Drupal core API: 55.00%
  • Debug code and troubleshooting: 75.00%
  • Theme Integration: 66.66%
  • Performance: 87.50%
  • Security: 87.50%
  • Leveraging Community: 100.00%

I am definitely least familiar with Drupal 8's core APIs, as I tend to stick to solutions that can be built with pre-existing modules, and have as yet avoided diving too deeply into custom code for the projects I work on. Drupal 8 is really streamlined in that sense—I can do a lot more just using Core and a few Contrib modules than I could've done in Drupal 7 with thousands of lines of custom code!

Also, I'm still trying to wrap my head around the much more formal OOP structure of Drupal (especially around caching, plugins, services, and theme-related components), and I bet that I could score 10% or more higher in another 6 months, just due to familiarity.

I also scored fairly low on the 'debug code and troubleshooting' section, because it dealt with some lower-level debugging tools than what I prefer to use day-to-day. I use Xdebug from time to time, and it really is necessary for some things in Drupal 8 (where it wasn't so in Drupal 7), but I stick to Devel's dpm() and Devel Kint's kint() as much as I can, so I can debug in the browser where I'm more comfortable.

In summary, this exam was by far the toughest one I've taken, and the first one where I'd consider studying a bit before attempting to pass it again. I've scheduled the D8 Front End Specialist exam for next week, and I'll hopefully have time to write a 'Thoughts on it' review on this blog after that—I want to see if it's as difficult (especially regarding Twig debugging and the render system changes) as the D8 Back End Specialist exam was!

Feb 15 2017

I've been looking at a ton of different solutions to using Drupal 8's Configuration Management in a way that meets the following criteria:

  1. As easy (or almost as easy) as plain old drush cex -y to export and drush cim -y to import.
  2. Allows a full config export/import (so you don't have to use update hooks to do things like enable modules, delete fields, etc.).
  3. Allows environment-specific configuration and modules (so you don't have to have some sort of build system to tweak things post-config-import—Drupal should manage its own config).
  4. Allows certain configurations to be ignored/not overwritten on production (so content admins could, for example, manage Webforms or Contact Forms on prod, but not have to have a developer pull the database back and re-export config just to account for a new form).

The Configuration Split module checks off the first three of those four requirements, so I've been using it on a couple Drupal 8 sites that I'm building using Acquia's BLT and hosting on Acquia Cloud. The initial setup poses a bit of a challenge due to the 'chicken-and-egg' problem of needing to configure Config Split before being able to use Config Split... therefore this blog post!

Installing Config Split

Configuration Split setup - Drupal 8

The first time you get things set up, you might already be using core CMI, or you might not yet. In my case, I'm not set up with config management at all, and BLT is currently configured out of the box to do a --partial config import, so I need to do a couple specific things to get started with Config Split:

  1. Add the module to your project with composer require drupal/config_split:^1.0.
  2. Deploy the codebase to production with the module in it (push a build to prod).
  3. On production, install the module either through the UI or via Drush (assuming you're not already using core CMI to manage extensions).
  4. On production, create one config split per Acquia Cloud environment, plus one each for local and CI (so I created local, ci, dev, test, and prod).
    • For each split, make sure the machine name matches the Acquia Cloud environment name, and for the path, use ../config/[environment-machine-name].
    • For local development, use the machine name local; for CI (Travis, Pipelines, etc.), use ci.
  5. Pull the production database back to all your other Acquia Cloud environments so Config Split will be enabled and configured identically in all of them.
  6. On your local, run blt local:refresh to get prod's database, which has the module enabled.

Note that there may be more efficient (and definitely more 'correct') ways of getting Config Split installed and configured initially—but this way works, and is quick for new projects that don't necessarily have a custom install profile or module where you can toss in an update hook to do everything automated.

Configuring the Splits

Now that your local environment is set up with the database that has Config Split installed, and Config Split is installed and configured identically in all the other environments, it's time to manage your first split: the local environment!

  1. Enable a module on your local environment that you only use for local dev (e.g. Devel).
  2. Configure the 'Local' config split in the Drupal admin UI.
  3. Select the module for the Local split (e.g. select Devel in the 'Modules' listing).
  4. Select all the module's config items in the 'Blacklist' (use Command on Mac, or Ctrl on Windows to multi-select, e.g. select devel.settings, devel.toolbar.settings, and
  5. Click 'Save' to save the config split.
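For reference, the saved split ends up as a config entity export; here's a sketch of what config_split.config_split.local.yml might contain (key names vary between config_split releases, so treat this as illustrative only):

```yaml
id: local
label: Local
folder: ../config/local
module:
  devel: 0
blacklist:
  - devel.settings
  - devel.toolbar.settings
graylist: []
```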

Now comes the important part—instead of using Drush's config-export command (cex), you want to make it a little... spicier:

drush @project.local csex -y --split=local

This command (configuration-split-export, csex for short) will dump all the configuration just like cex... but it splits out all the blacklisted config into the separate config/local directory in your repository!

Note: If you get Command csex needs the following extension(s) enabled to run: config_split., you might need to run drush @project.local cc drush. Weird drush bug.

Next up, you need to create a blank folder for each of the other splits—so create one folder each for ci, dev, test, and prod, then copy the .htaccess file that Config Split added to the config/local folder into each of the other folders.
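After creating those folders, the repository's config directory looks something like this (a sketch; assumes BLT's default sync directory of config/default, with one folder per split machine name):

```
config/
├── default/   # shared config, exported for every environment
├── local/     # blacklisted local-only config (e.g. devel.settings.yml)
├── ci/
├── dev/
├── test/
└── prod/
```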

We're not ready to start deploying config yet—we need to modify BLT to make sure it knows to run csim (short for config-split-import) instead of cim --partial when importing configuration on the other environments. It also needs to know which --split to use for each environment.

Modifying BLT

For starters, see the following BLT issue for more information about trying to standardize the support for Configuration Split in BLT: Support Config Split for environment-specific Core CMI.

  1. You need to override some BLT Phing tasks, so first things first, replace the import: null line in blt/project.yml with import: '${repo.root}/blt/build.xml'.
  2. Add a file in the blt/ directory named build.xml, and paste in the contents of this gist.
  3. Since you'll be managing all the modules via Config Split, you don't want or need BLT messing with modules during deployment, so clear out all the settings in blt/project.yml as shown in this gist.

Once you've made those two changes to BLT's project.yml and added a blt/build.xml file with your custom Phing tasks, it's time to test if this stuff is all working correctly! Go ahead and run:

blt local:refresh

And see if the local environment is set up as it should be, with Devel enabled at the end of the process. If it is, congratulations! Time to commit this stuff and deploy it to the Cloud!

Once you deploy some code to a Cloud environment, in the build log, you should see something like:

The following directories will be used to merge configuration to import:
Import the configuration? (y/n):
Configuration successfully imported from:                              [success]

This means it's importing the default config, merged with the dev environment's config split directory. And that means it worked.

Start deploying with impunity!

The great thing about using Drupal 8's core CMI the way it is meant to be used (instead of using it with --partial) is that configuration management becomes a total afterthought!

Remember in Drupal 7 when you had to remember to export certain features? And when features-revert-all would sometimes bring with it a six hour debugging session as to what happened to your configuration?

Remember in Drupal 7 when you had to write hundreds of update hooks to do things like add field, delete a field, remove a content type, enable or disable a module?

With CMI, all of that is a distant memory. You do whatever you need to do—delete a field, add a view, enable a dozen modules, etc.—then you export configuration with drush csex --split=local. Commit the code, push it up to prod, et voilà, it's magic! The same changes you made locally are on prod!

The one major drawback to this approach (either with Config Split or just using core CMI alone without --partial) is that, at least at this time, it's an all-or-nothing approach. You can't, for example, allow admins on prod to create new Contact forms, Webforms, blocks, or menus without also pulling the database back, exporting the configuration, then pushing the exported config back to prod. If you forget to do that, CMI will happily delete all the new configuration that was added on prod, since it doesn't exist in the exported configuration!

If I can find a way to get that working with Config Split (e.g. say "ignore configuration in the webform.* config namespace" without using --partial), I think I'll have found configuration nirvana in Drupal 8!

Feb 13 2017

It's been over a year since Drupal 8.0.0 was released, and the entire ecosystem has improved vastly between that version's release and the start of the 8.3.0-alpha releases (which just happened a couple weeks ago).

One area that's seen a vast improvement in documentation and best practices—yet still has a ways to go—is Composer-based project management.

Along with a thousand other 'get off the island' initiatives, the Drupal community has started to take dependency management more seriously, by integrating with the wider PHP ecosystem and maintaining a separate packagist for Drupal modules, themes, and other projects.

At a basic level, Drupal ships with a starter composer.json file that you can use if you're building simpler Drupal sites to manage modules and other dependencies. Then there are projects like the Composer template for Drupal projects (which Drupal VM uses by default to build new D8 sites) and Acquia's BLT which integrate much more deeply with Composer-based tools and libraries to allow easier patching, custom pathing, and extra library support.

One thing I've found lacking in my journey towards dependency management nirvana is a list of all the little tips and tricks that make managing a Drupal 8 project entirely via Composer easier. Therefore I'm going to post some of the common (and uncommon) things I do below, and keep this list updated over time as best practices evolve.

Adding a new module

In the days of old, you would either download a module from Drupal.org directly and drag it into your codebase, or, if you were command line savvy, you'd fire up Drush and do a drush dl modulename. Then came Drush Makefiles, which allowed you to specify module version constraints and didn't require the entire module codebase to exist inside your repository (yay for smaller repositories and repeatable deployments and site rebuilds!).

But with Composer, and especially with the way many (if not most) Drupal 8 modules integrate with required external libraries (e.g. the Search API Solr module's dependency on the Solarium library), it's easier and more correct to use composer require to add a new module. Drupal.org modules don't quite follow semantic versioning, but the way release versioning works out with the Drupal.org packagist endpoint, you should generally be able to specify a version like "give me any version 8.x-1.0 or later, and I'll be happy".

Therefore, the proper syntax for requiring a module this way (so that when you run composer update drupal/modulename later, it will update to the latest stable 8.x-1.x release) is:

composer require drupal/modulename:^1.0

This says "add modulename to my codebase, and download version 1.0 or whatever is the latest release in the 8.x-1.x release series (including alpha/beta releases, if there hasn't been a stable release yet)."

Note on version constraints: Two of the most-commonly-used version constraints I see are ~ (tilde) and ^ (caret). Both are similar in that they tell Composer: 'use this version but update to a newer version in the series', but the tilde is a bit more strict in keeping to the same minor release, while the caret allows for any new version up to the next major release. See this article for more details: Tilde and caret version constraints in Composer. See this Drupal core issue for discussion on why the caret is often preferred in Drupal projects: Prefer caret over tilde in composer.json.
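To make the difference concrete, here's a hypothetical require section (the module names and versions are made up for illustration): the caret constraint lets module_a float anywhere in the 1.x series from 1.2 up (but below 2.0), while the three-part tilde constraint pins module_b to patch releases of the 1.2 series only.

```json
{
    "require": {
        "drupal/module_a": "^1.2",
        "drupal/module_b": "~1.2.3"
    }
}
```

With these constraints, composer update could take module_a from 1.2 to 1.9, but module_b could only move from 1.2.3 to 1.2.9, never to 1.3.0.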

Updating modules

Early on in my Composer adventures, I did the reasonable thing to update my site—I ran composer update, waited a while for everything to be updated, then I committed the updated composer.json and composer.lock files and was on my merry way. Unfortunately, doing this is kind of like cleaning a dirty blue dress shirt by washing it in a bucket of bleach—sure, the stain will be removed, but you'll also affect the rest of your shirt!

If you are meticulous about your dependencies, and lock in certain ones that are finicky at specific versions (e.g. composer require drupal/modulename:1.2) or at a specific git commit hash (composer require drupal/modulename:dev-1.x#dfa710e), then composer update is manageable.

But if you're managing a project with many moving parts using more than a dozen contributed modules... be cautious when considering running composer update without specifying specific modules to update!

Instead, what I recommend is a more cautious approach:

  1. See what modules need updating.
  2. Update those modules specifically using composer update drupal/modulename --with-dependencies.

If you had required the module using a release series like drupal/modulename:^1.0, then Composer will update that module—and only that module—to the latest tagged release in the 8.x-1.x branch. And adding --with-dependencies will ensure that any libraries the module depends on are updated as well (e.g. if you update the Search API Solr module, the Solarium dependency will also be updated).

Another quick tip: In addition to Drupal core's update module functionality and drush pm-updatestatus, you can use Composer's built-in mechanism to quickly scan for outdated dependencies. Just use composer outdated. This will show you if Drupal core, contrib modules, or any other dependencies are outdated.

Removing modules

This one is pretty easy. To remove a module you're no longer using (be sure it's uninstalled first!):

composer remove drupal/modulename

Older versions of Composer required a flag to also remove module dependencies that aren't otherwise required, but modern versions will remove the module and all its dependencies from your composer.json, composer.lock, and the local filesystem.

Requiring -dev releases at specific commits

From time to time (especially before modules are stable or have a 1.0 final release), it's necessary to grab a module at a specific Git commit. You can do this pretty simply by specifying the dev-[branch]#[commit-hash] version constraint. For example, to get the Honeypot module at its latest Git commit (as of the time of this writing):

composer require drupal/honeypot:dev-1.x#dfa710e

Be careful doing this, though—if at all possible, try to require a stable version, then if necessary, add a patch or two from the issue queues to get the functionality or fixes you need. Relying on specific dev releases is one way your project's technical debt increases over time, since you can no longer cleanly composer update that module.

Regenerating your .lock file

Raise your hand if you've ever seen the following after resolving merge conflicts from two branches that both added a module or otherwise modified the composer.lock file:

$ composer validate
./composer.json is valid, but with a few warnings
See https://getcomposer.org/doc/04-schema.md for details on the schema
The lock file is not up to date with the latest changes in composer.json, it is recommended that you run `composer update`.

Since I work on a few projects with multiple developers, I run into this on almost a daily basis. Until recently, I would pick one of the affected modules and run composer update drupal/modulename. But I recently found that I can regenerate the lock file without updating or requiring anything, by running:

composer update nothing

Note that some people on Twitter mentioned there's a composer update --lock command that does a similar thing. The docs say "Only updates the lock file hash to suppress warning about the lock file being out of date." — but I've had success with nothing, so I'm sticking with it for now until someone proves --lock is better.

Development dependencies

There are often components of your project that you need when doing development work, but you don't need on production. For example, Devel, XHProf, and Stage File Proxy are helpful to have on your local environment, but if you don't need them in production, you should exclude them from your codebase entirely (not only for minor performance reasons and keeping your build artifacts smaller—non-installed modules can still be a security risk if they have vulnerabilities).

Composer lets you track 'dev dependencies' (using require-dev instead of require) that are installed by default, but can be excluded when building the final deployable codebase (by passing --no-dev when running composer install or composer update).
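For instance, a composer.json split between the two sections might look like this (the specific packages and versions here are illustrative, not a recommendation):

```json
{
    "require": {
        "drupal/core": "^8.4"
    },
    "require-dev": {
        "drupal/devel": "^1.0",
        "drupal/stage_file_proxy": "^1.0"
    }
}
```

Running composer install --no-dev then produces a production codebase without any of the development-only modules.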

One concrete example is the inclusion of the Drupal VM codebase in a Drupal project. This VM configuration is intended only for local development, and shouldn't be deployed to production servers. When adding Drupal VM to a project, you should run:

composer require --dev geerlingguy/drupal-vm:^4.0

This will add geerlingguy/drupal-vm to a require-dev section in your composer.json file, and then you can easily choose to not include that project in the deployed codebase.

Committing your .lock file

The Composer documentation on the lock file bolds the line:

Commit your application's composer.lock (along with composer.json) into version control.

For good reason—one of the best features of any package manager is the ability to 'lock in' a set of dependencies at a particular version or commit hash, so every copy of the codebase can be completely identical (assuming people haven't gone around force-pushing changes to the libraries you use!), even if you don't include any of the code in your project.

Ideally, a project would just include a composer.json file, a composer.lock file, and any custom code (and config files). Everything else would be downloaded and 'filled in' by Composer. The lock file makes this possible.
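Because the lock file is just JSON, you can inspect exactly what it pins without running Composer at all. Here's a minimal sketch using a throwaway lock file in /tmp (the package names and versions below are made-up values, and a real lock file contains many more fields per package):

```shell
cd /tmp
# Write a tiny stand-in for a real composer.lock:
cat > composer.lock <<'EOF'
{
    "content-hash": "abc123",
    "packages": [
        {"name": "drupal/honeypot", "version": "8.x-1.27"},
        {"name": "drupal/pathauto", "version": "8.x-1.3"}
    ]
}
EOF
# List every pinned package and its exact version (stdlib-only Python, no jq needed):
python3 -c "
import json
lock = json.load(open('composer.lock'))
for package in lock['packages']:
    print(package['name'], package['version'])
"
```

This is a handy sanity check after resolving lock-file merge conflicts: the versions printed are exactly what composer install will deploy.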

Patching modules

Acquia's BLT includes the composer-patches project, which is what it says on the tin: "Simple patches plugin for Composer."

Use is fairly simple: first, composer require cweagans/composer-patches:^1.0, then add a patches section to the extra section of your composer.json file:

    "extra": {
        "patches": {
            "acquia/lightning": {
                "New LightningExtension subcontexts do not autoload": ""
            },
            "drupal/core": {
                "Exposed date view filter fix": ""
            }
        }
    }

Once you've added a patch, you might wonder how to get Composer to apply the patch and update composer.lock while still maintaining the same version you currently have (instead of running composer update drupal/module which may or may not update/change versions of the module).

The safest way is to run composer update nothing (the same handy trick again!), which allows Composer to delete the module in question (or core), then download the same version anew, and apply the specified patch.

Other Tips and Tricks?

Do you know any other helpful Composer tricks or things to watch out for? Please post them in the comments below!

Feb 11 2017

XHProf, a PHP extension originally created and maintained by Facebook, has for many years been the de-facto standard for profiling Drupal's PHP code and performance issues. Unfortunately, as Facebook has matured and shifted resources, maintenance of the XHProf extension tailed off around the time of the PHP 7.0 era, and now that we're hitting PHP 7.1, even some sparsely-maintained forks are difficult (if not impossible) to get running with newer versions of PHP.

Enter Tideways.

Tideways has basically taken over the XHProf extension, updated it for modern PHP versions, and re-branded it as 'Tideways' instead of 'XHProf'. This has created a little confusion, since Tideways also offers a branded, proprietary service for aggregating and displaying profiling information. But you can use the Tideways extension completely independent of that service, as a drop-in replacement for XHProf. And you can even browse profiling results using the same old XHProf UI!

So in this blog post, I want to show you how you can use Drupal VM (version 4.2 or later) to quickly and easily profile Drupal 8 pages using Tideways (the PHP extension), the XHProf UI, and the XHProf Drupal module (all running locally—no cloud connection or paid service required!). You can even get fancy callgraph images!

Here's a video walkthrough for the more visually-inclined:

[embedded content]

Configure Drupal VM to install Tideways

The only thing you need to do to a stock Drupal VM configuration is make sure tideways is in your list of installed_extras. So, for my VM instance, I created a config.yml file and put the following inside:

installed_extras:
  - drush
  - mailhog
  - tideways

You can add whatever other installed_extras you need, but for this testing and benchmarking, I'm only including the essentials for my site.

If you want to have Drupal VM build a Drupal 8 site for you, and also automatically composer require the XHProf module for Drupal 8, you can also add:

drupal_composer_dependencies:
  - "drupal/xhprof:1.x-dev"

This will ensure that, after a Drupal 8 codebase is generated, the appropriate composer require command will be run to add the Drupal XHProf module to the codebase and the composer.json file. You could even add xhprof to the array of drupal_enable_modules in config.yml if you want the module installed automatically during provisioning!
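That last step is just another one-line addition to config.yml (xhprof here refers to the contrib module added via the Composer dependency above):

```yaml
drupal_enable_modules:
  - xhprof
```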

Run vagrant up to start Drupal VM and provision it with Tideways, or run vagrant provision if you already have Drupal VM set up and are just adding Tideways to it.

Install Drupal's XHProf module

After Vagrant finishes provisioning Drupal VM, you can enable the XHProf module with drush en -y xhprof (or do it via the 'Extend' page in Drupal's UI). Then, to configure the module to collect profiles for page loads, do the following:

  1. Visit the XHProf configuration page: /admin/config/development/xhprof
  2. Check the 'Enable profiling of page views' checkbox.
  3. Make sure the 'Tideways' extension is selected (it should be, by default).
  4. Check the 'Cpu' and 'Memory' options under 'Profile'
  5. Click 'Save' to save the settings.

Profile a page request!

XHProf Profile link from Drupal module

  1. Visit any page on the site (outside of the admin area, or any of the other paths excluded in the XHProf 'Exclude' configuration).
  2. Find the 'XHProf output' link near the bottom of the page.
  3. Click the link, and you'll see the XHProf module's rendering of the profile for that page.

For more basic profiling, that's all you need to do. But Drupal VM's Tideways integration also automatically sets up the XHProf GUI so you can browse the results in a much more efficient and powerful way. To use the more powerful XHProf GUI:

  1. Visit the XHProf GUI at xhprof.[yoursiteurl].
  2. Click on a profile result in the listing.

Drupal 8 home page XHProf profile GUI

In here, you have access to much more granular data, including a full 'callgraph', which is a graphical representation of the entire request flow. Note that it can take a minute or longer to render callgraphs for more complex page loads!

Here's a small snippet of what Drupal 8's home page looks like with empty caches:

Drupal 8 home page callgraph rendered by XHProf GUI


If you're still running PHP 5.6 or 7.0, you can keep using XHProf itself, but its maintenance now seems perpetually fuzzy—nobody has really picked up the ball consistently since Facebook's maintenance of the extension dropped off.

Another service which has a freemium model but requires the use of a web UI rather than a locally-hosted UI is Blackfire, which is also supported by Drupal VM out of the box!

Feb 10 2017

As someone who loves YAML syntax (so much more pleasant to work with than JSON!), I wanted to jot down a few notes about syntax formatting for the benefit of Drupal 8 developers everywhere.

I often see copy/pasted YAML examples like the following:

  child-object: {key: value, key2: {key: value}}

This is perfectly valid YAML. And technically any JSON is valid YAML too. That's part of what makes YAML so powerful—it's easy to translate between JSON and YAML, but YAML is way more readable!

So instead of using YAML like that, you can make the structure and relationships so much more apparent by formatting it like so:

child-object:
  key: value
  key2:
    key: value

This format makes it much more apparent that both key and key2 are part of child-object, and the last key: value is part of key2.

In terms of Drupal, I see the confusing { } syntax used quite often in themes and library declarations. Take, for instance, a library declaration that adds in some attributes to an included JS file: {type: external, attributes: { defer: true, async: true} }

That's difficult to read at a glance—and if you have longer key or value names, it gets even worse!

Instead, use the structured syntax for a more pleasant experience (and easier git diffing). For a library's JS file entry (the URL here is just an example placeholder):

https://example.com/script.js:
  type: external
  attributes:
    defer: true
    async: true

You really only need to use the { } syntax for objects if you're defining an empty object (one without any keys or subelements):

# Objects.
object:
  key: value
empty-object: { }

# Arrays.
array:
  - item
empty-array: [ ]

I've worked with a lot of YAML in the past few years, especially in my work writing Ansible for DevOps. It's a great structured language, and the primary purpose is to make structured data easy to read and edit (way, way simpler than JSON, especially considering you won't need to worry about commas and such!)—so go ahead and use that structure to your advantage!

Feb 09 2017

...aka, avoid the annoying Javascript error below:

TypeError: undefined is not an object (evaluating 'entityElement

Many themers working on Drupal 8 sites have Contextual menus and Quick Edit enabled (they're present in the Standard Drupal install profile, as well as popular profiles like Acquia's Lightning), and at some point during theme development, they notice that there are random and unhelpful fatal javascript errors—but they only appear for logged in administrators.

Eventually, they may realize that disabling the Contextual links module fixes the issue, so they do so and move along. Unfortunately, this means that content admins (who tend to love things like contextual links—at least when they work) end up not being able to hover over content to edit it.

There are two ways you can make things better without entirely disabling these handy modules:

  1. Apply the patch from this issue: contextual.js and quickedit.js should fail gracefully, with useful error messages, when Twig templates forget to print attributes.
  2. Make sure you always print {{ attributes }} somewhere on the wrapping element in all node templates, along with {{ title_prefix }} and {{ title_suffix }}. If you don't, the Contextual links module won't be able to inject the classes it needs to add the contextual link. And until the above patch hits Drupal core, JavaScript will break on any page where your template is rendered for a user with permission to use contextual links!

As one quick example, I was working on a node template for a bootstrap theme, and it looked something like:

<div class="col-md-6">
  <h2 class="lead">{{ label }}</h2>
  <p class="small">{{ node.getCreatedTime() | date("F d, Y") }}</p>
  <div class="body">
    {{ content.body }}
  </div>
  {{ content|without('body') }}
</div>

To fix this so the contextual link displays to the right of the title, I modified the template as in number 2 above, to look like:

<div{{ attributes }}>
  <div class="col-md-6">
    {{ title_prefix }}
    <h2 class="lead">{{ label }}</h2>
    {{ title_suffix }}
    <p class="small">{{ node.getCreatedTime() | date("F d, Y") }}</p>
    <div class="body">
      {{ content.body }}
    </div>
    {{ content|without('body') }}
  </div>
</div>

Now, I get the handy little contextual link widget, and I can happily go about editing nodes within the context of the page I'm on (instead of digging through the admin content listings for the node!):

Quick edit contextual link in Drupal 8

Note also the {{ content|without('body') }}—it's always important to render the entire {{ content }} element somewhere (even without() all the other fields on the node) so that cache tags bubble correctly—see a related core issue I opened a week or so ago: Bubbling cache tag metadata when rendering nodes in preprocess functions is difficult.

Jan 31 2017

BLT - Setup complete on Windows 10

Quite often, I get inquiries from developers about how to get Drupal VM working on Windows 10—and this is often after encountering error after error due to many different factors. Just for starters, I'll give a few tips for success when using Drupal VM (or most any Linux-centric dev tooling or programming languages) on Windows 10:

  • If at all possible, run as much as possible in the Windows Subsystem for Linux with Ubuntu Bash. Some things don't work here yet (like calling Windows binaries from WSL), but hopefully a lot will be improved by the next Windows 10 update (slated for Q2 2017). It's basically an Ubuntu 14.04 CLI running inside Windows.
  • When using Vagrant, run vagrant plugin install vagrant-vbguest and vagrant plugin install vagrant-hostsupdater. These two plugins make a lot of little issues go away.
  • If you need to use SSH keys for anything, remember that the private key (id_rsa) is secret. Don't share it out! The public key (e.g. id_rsa.pub) can be shared freely and should be added to your GitHub, Acquia, etc. accounts. And you can take one public/private key pair and put it anywhere (e.g. copy it from your Mac to PC, inside a VM, wherever). Just be careful to protect the private key from prying eyes.
  • If you're getting errors with any step along the way, copy out parts of the error message that seem relevant and search Google and/or the Drupal VM issue queue. Chances are 20 other people have run into the exact problem before. There's a reason I ask everyone to submit issues to the GitHub tracker and not to my email!
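The key-pair advice above can be sketched as follows (the file path and email comment are example values; a real key would normally live at ~/.ssh/id_rsa):

```shell
# Generate a fresh 4096-bit RSA key pair with no passphrase, in a scratch location:
rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
ssh-keygen -t rsa -b 4096 -C "you@example.com" -f /tmp/demo_id_rsa -N "" -q
# The .pub half is the shareable one; it always starts with the key type:
head -c 7 /tmp/demo_id_rsa.pub
# The private half (/tmp/demo_id_rsa) must never leave your machine(s).
```

Paste the full contents of the .pub file into GitHub, Acquia Cloud, etc.; the private file stays put.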

Now, down to the nitty-gritty. One group of developers had a requirement that everyone only use Windows 10 to do everything. On most projects I'm involved with, at least one or two developers will have a Linux or macOS environment, and that person would be the one to set up BLT.

But if you need to set up BLT and Drupal VM entirely within Windows, there are a few things you need to do unique to the Windows environment, due to the fact that Windows handles CLIs, line endings, and symlinks differently than other OSes.

I created a video/screencast of the entire process (just to prove to myself it was reliably able to be rebuilt), which I've embedded below, and I'll also post the detailed step-by-step instructions (with extra notes and cautionary asides) below.

Video / Screencast

[embedded content]

Step-by-step instructions

  1. Install the Windows Subsystem for Linux with Ubuntu Bash.
  2. Install Vagrant.
    1. You should also install the following: vagrant plugin install vagrant-vbguest and vagrant plugin install vagrant-hostsupdater. This helps make things go more smoothly.
  3. Install VirtualBox.
  4. Open Ubuntu Bash.
  5. Install PHP and Composer (no need for Node.js at this time) following these BLT Windows install directions.
  6. Set up your Git username and password following these BLT directions.
  7. Run the commands inside the BLT - creating a new project guide.
    1. Note that the composer create-project command could take a while (5-10 minutes usually, but could be slower).
    2. If it looks like it's not really doing anything, try pressing a key (like down arrow); the Ubuntu Bash environment can get temporarily locked up if you accidentally scroll down.
  8. When you get to the blt vm step, run that command, but expect it to fail with a red error message. (As of the Windows 10 Anniversary Update, WSL can't easily call Windows executables from the Ubuntu Bash environment, so it fails to see that VirtualBox is installed in Windows; it can only see executables inside the Ubuntu virtual environment.)
  9. Install Cmder (preferred), Cygwin, or Git for Windows.
  10. Open Cmder.
    1. You need to run Cmder as an administrator (otherwise BLT's Composer-based symlinks go nuts). In Cmder, right-click on the toolbar, click 'New Console...', then check the 'Run as administrator' checkbox and click Start.
    2. You can use other Bash emulators for this (e.g. Cygwin, Git Bash, etc.) as long as they support SSH and are run as Administrator.
    3. When Microsoft releases the Windows 10 update post-Anniversary-update, the WSL might be able to do everything. But right now it's close to impossible to reliably call Windows native exe's from Ubuntu Bash, so don't even try it.
  11. cd into the directory created by the composer create-project command (e.g. projectname).
    1. Note that Ubuntu Bash's home directory is located in your Windows user's home directory, in a path like C:\Users\[windows-username]\AppData\Local\lxss\home\[ubuntu-username].
  12. Run vagrant up to build Drupal VM.
    1. Note that it will take anywhere from 5-25 minutes to bring up Drupal VM, depending on your PC's speed and Internet connection.
  13. Once vagrant up completes, run vagrant ssh to log into the VM.
  14. From this point on, all or most of your local environment management will take place inside the VM!
  15. Make sure you add your SSH private key to the Vagrant user account inside Drupal VM (so you can perform actions on the codebase wherever you host it (e.g. BitBucket, GitHub, GitLab, etc.) and through Acquia Cloud).
    1. You can create a new key pair with ssh-keygen -t rsa -b 4096 -C "[email protected]" if you don't have one already.
    2. Make sure your SSH public key (id_rsa.pub contents) is also in your Acquia Cloud account (under 'Credentials'), and GitHub (under 'SSH Keys') or whatever source repository your team uses.
  16. Inside the VM, cd into the project directory (cd /var/www/[projectname]).
  17. Delete Composer's vendor/bin directory so Composer can set it up correctly inside the VM: sudo rm -rf vendor/bin.
  18. Run composer install.
    1. If this fails the first time, you may be running a version of BLT that requires this patch. If so, run the command sudo apt-get install -y php5.6-bz2 (using the php_version you have configured in box/config.yml in place of 5.6).
    2. If this has weird failures about paths to blt or phing, you might not be running Cmder as an administrator. Restart the entire process from #12 above.
  19. Run blt local:setup to install the project locally inside Drupal VM.
    1. If this fails with a warning about insecure_private_key or something along those lines, you need to edit your blt/project.local.yml file and update the drush.aliases.local key to self (instead of projectname.local). BLT presumes you'll run blt commands outside the VM, but when you run them inside, you need to override this behavior.
  20. On your host machine, open up a browser and navigate to your project's local URL (the one you have configured in blt/project.yml under project.local.hostname).

CONGRATULATIONS! If all goes well, you should have a BLT-generated project running inside Drupal VM on your Windows 10 PC! You win the Internet for the day.

Next Steps

If you want to push this new BLT-generated project to a Git repository, make sure you have a public/private key pair set up inside Drupal VM, then in the project root, add the remote Git repository as a new remote (e.g. git remote add origin [email protected]:path/to/repo.git), then push your code to the new remote (e.g. git push -u origin master).
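That remote-and-push flow can be exercised end to end against a local bare repository standing in for BitBucket or GitHub (all paths and the commit content below are example values):

```shell
# A bare repo plays the role of the hosted remote:
rm -rf /tmp/demo-origin.git /tmp/demo-project
git init --bare -q /tmp/demo-origin.git

# A fresh project with one commit:
git init -q /tmp/demo-project
cd /tmp/demo-project
git config user.email "you@example.com"
git config user.name "Example User"
echo '{"name": "example/project"}' > composer.json
git add composer.json
git commit -q -m "Initial commit"

# Wire up the remote and push, exactly as you would with a real Git URL:
git branch -M master
git remote add origin /tmp/demo-origin.git
git push -q -u origin master
```

With a real project, only the remote URL changes (e.g. a git@bitbucket.org:... address); the add-remote-then-push steps are identical.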

Now, other developers can pull down the codebase, follow a similar setup routine to run composer install, then bring up the VM and start working inside the VM environment as well!

