Oct 30 2019

We recently started using Vale to help automate the tedious task of enforcing our style guide. Doing so has made reviews faster and reduced hard feelings between us. Emotions can run high when you feel someone is being overly scrupulous in their review of something you've worked really hard to create.

Everything gets reviewed

Every content item we publish goes through a rigorous review process to ensure we're always putting our best foot forward. This review consists of a number of different steps:

  • Technical review: Is the content technically correct? Do all the code samples work?
  • Copy editing: Does it meet our style guide? Does it use Chicago Manual of Style formatting guidelines? Does it use proper grammar, spelling, etc.?
  • Check for broken links and images
  • Apply consistent Markdown formatting

Some of these things are objective. For example, we always use Drupal, never drupal. We always use italics for filenames and paths. And we always format lists in Markdown using a - followed by a single space, never a *. These are things that are simply not up for debate. You either did it right or you didn't. Most tutorials have at least a handful of these fixes that need to be made.

Other style guidelines are more subjective. For example, we try not to use passive voice, but there are exceptions. A technical review might point out multiple ways of accomplishing the same task, and we'll generally only cover one. Avoid cliches. Don't use superlatives and hyperbole. A single tutorial usually has 10+ of these suggestions. These are by far the more important things to focus on in the review, as they can have a real impact on the usefulness of the content.

No one wants to be the jerk who points out dozens of formatting errors. And no one enjoys having their work nit-picked by their peers.

We've been talking for a long time about the utility of a tool to help automate some of the steps in the review process -- specifically, the objective ones. Similarly, Drupal developers use PHPCS to ensure their PHP code follows the Drupal coding standards, and JavaScript developers use Prettier to ensure consistent formatting.

Without a tool, we spend a lot of time in the review process commenting on, and fixing, non-substantive things. That's a distraction from the more important work of providing a critique of the content itself.

Let the robots do the nit-picking

Amber recently introduced me to Vale, a tool she learned about while attending the Write the Docs conference in Portland. We've since introduced it into our review workflow, and are loving it, along with remark for linting Markdown formatting.

Side note: Check out this lightning talk from the conference. It's not about Vale, but it gives a great overview of the types of things we're doing.

We evaluated numerous other tools, but in the end we chose Vale. We've found that it's easier for non-technical users to configure, and it allows us to differentiate between objective and subjective suggestions through the use of different error levels.

YAML configuration files

When using Vale, you implement your styles as YAML files. For example, here's the substitution style we use to catch common typos:


extends: substitution
message: Use '%s' instead of '%s'
level: warning
ignorecase: false
# swap maps tokens in the form of bad: good
swap:
  "contrib": "contributed"
  "D6": "Drupal 6"
  "D7": "Drupal 7"
  "D8": "Drupal 8"
  "D9": "Drupal 9"
  "[Dd]rupalize.me": "Drupalize.Me"
  "Drupal to Drupal migration": "Drupal-to-Drupal migration"
  "drush": "Drush"
  "github": "GitHub"
  "in core": "in Drupal core"
  "internet": "Internet"
  "java[ -]?scripts?": "JavaScript"

The above configuration file provides a list of common typos and their corrections. Because this is a YAML file, it's relatively easy for anyone to edit and add additional substitutions. For these suggestions we've set the error level to warning. When we run Vale we can tell it to skip warnings and only report errors.
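Styles like this one are enabled from a project-level configuration file. As a rough sketch (the paths and style name here are hypothetical, not our actual setup), a .vale.ini might look like:

```ini
; Directory containing our custom style folders (hypothetical path).
StylesPath = styles

; Report everything when run locally; CI can raise this via --minAlertLevel.
MinAlertLevel = suggestion

; Apply our custom style to all Markdown files.
[*.md]
BasedOnStyles = DrupalizeMe
```

Each style folder under StylesPath holds the individual YAML rule files, so adding a new rule is just adding a new file.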

In another example we've got a style that enforces use of the Chicago Manual of Style for determining how to capitalize a tutorial's title.

extends: capitalization
message: "Tutorial title '%s' should be in title case"
level: error
scope: heading.h1
style: Chicago
# $title, $sentence, $lower, $upper, or a pattern.
match: $title

This is configured as an error.

Running it locally

Everyone authoring or reviewing content can install Vale locally and run it with our specific styles. Doing so outputs a list of all the errors and warnings that Vale caught.


Output from running our review linting tool in a CLI. Shows examples of various errors and warnings.

As a content author this is great because it can help me fix things before sending the content off for review. I don't have to worry about the disappointment of having someone send a tutorial back with endless nit-picks over my failure to remember every last detail of our style guide.

As a content reviewer, I get a good list of places to start looking for possible improvements, and I can feel confident spending more time on substantive review rather than hunting for incorrect uses of Javascript vs. JavaScript.

Automating it with CircleCI

Screenshot of CircleCI integration in GitHub

Once we got an initial set of styles in place, we were able to set up a CircleCI job that executes against each new pull request (the canonical version of all our content is stored in Git). The result is that at the bottom of every pull request you can see two checks: one for Vale rules, and one for Markdown formatting. If either detects an error, it is revealed quickly and can be fixed.

When we run Vale in CircleCI, we suppress all non-error suggestions. So it'll only mark a PR as failing if there's something objectively wrong. These are usually quick to fix.
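As a sketch of what such a job can look like (the image, job name, and content path here are hypothetical -- the details depend on your repository layout), the CircleCI configuration boils down to something like:

```yaml
# .circleci/config.yml (hypothetical sketch)
version: 2.1
jobs:
  lint-prose:
    docker:
      # Assumes an image with the Vale binary available.
      - image: jdkato/vale
    steps:
      - checkout
      # Fail the build only on errors; warnings stay local-only.
      - run: vale --minAlertLevel=error content/
workflows:
  lint:
    jobs:
      - lint-prose
```

The --minAlertLevel=error flag is what suppresses warnings and suggestions in CI while still letting authors see them when running Vale locally.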

Because we can switch a rule from warning to error by editing the configuration file we can trial new rules. We can also set up rules that are useful for us to have while reviewing but don't need to block a piece of content from being published.


In order to ensure that content reviewers can spend their time focused on the substance of a tutorial and not on enforcing the style guide, we use Vale to help automate the process of content review. It's helped us have more meaningful conversations about the content, and has also reduced the animosity that can occur as the result of feeling like someone is being hypercritical of your work.

If you work with a style guide I highly recommend you check out Vale as a tool to help enforce it.

Oct 25 2019

DrupalCon Amsterdam 2019. 28 OCT - 31 OCT. Amsterdam, Netherlands

We're sad to miss DrupalCon Europe in Amsterdam next week (October 28-31, 2019). But which talks would we attend if we were going? Amber and I combed through the Interactive Program and created a list of what looks intriguing at the next DrupalCon. Will you be there? You might want to check out our picks.

Joe's picks

I'm not going to be at DrupalCon Amsterdam. First time I've missed a DrupalCon since 2011! And I'm bummed about missing the chance to catch up with friends, meet new people, and keep up with everything the Drupal community is doing. If, however, I were in Amsterdam, these are some of the sessions that would be on my calendar.

Amber's picks

Amber: A big plus one from me on Joe's picks, and here are a few more I would check out if I were there.

  • Autosave and Concurrent editing (conflict resolution) in Drupal 8

    Training around content editing can be tricky because each site has a different configuration and internal process for creating, editing, publishing, and archiving content. But there are definitely some universally known problems with editing content in Drupal, and "losing changes before saving content" and "concurrent editing conflicts" are two of them. If you are in the frustration stage of this problem and are looking for potential solutions, check out this session, which introduces two modules that address these problems.

  • Configuration Management Initiative 2.0 updates

    Now that Configuration Management in Drupal 8 has been used in sites for a while, some limitations and challenges have emerged. In this session, you'll get an overview of these issues, how the Configuration Management Initiative 2.0 will seek to address them, and how you can structure your sites today for minimal disruption in the future. I'll definitely be checking out the recording of this one to make sure we're making the best recommendations possible in our tutorials on Configuration Management.

  • Initiative Leads Keynote

    Attend this keynote to get updates from initiative leads and learn how you can get involved with core contribution for these coordinated efforts. I'll be cheering from the internet sidelines for my fellow core contributors!

  • (Paid) Training: Drupal + Gatsby

    Our training friend Suzanne Dergacheva is offering a training on Drupal + Gatsby. If I could, I would totally register for this training workshop. Suzanne is a great instructor, and the topic is very hot right now -- and I think it will continue to be.

Sep 25 2019

One of our members recently asked this question in support:

Wonder if you have, or can suggest, a resource to learn how to access, authenticate (via OAuth preferably) and process JSON data from an external API?

In trying to answer the question, I realized that I first needed to know more about what they were trying to accomplish. Like with most things Drupal, there's more than one right way to accomplish a task. Choosing a solution requires understanding what options are available and the pros and cons of each. This got me thinking about the various ways one could consume data from an API and display it using Drupal 8.

The problem at a high level

You've got data in an external service, available via a REST API, that you need to display on one or more pages in a Drupal site. Perhaps accessing that data requires authentication via OAuth2 or an API token. There are numerous ways to go about it. Which one should you choose? And how should you get started?

Some questions to ask yourself before you start:

  • How much data are we talking about?
  • How frequently does the data you're consuming change, and how important is it that it's up-to-date? Are real-time updates required? Or is a short lag acceptable?
  • Does the data being consumed from the API need to be incorporated into the Drupal-generated pages' HTML output? How does it impact SEO?
  • How much control does a Drupal site administrator need to have over how the data is displayed?

While I'm certain this list is not exhaustive, here are some of the approaches I'm aware of:

  • Use the Migrate API
  • Create a Views Query Plugin
  • Write a custom service that uses Guzzle or similar PHP SDK via Composer
  • Use JavaScript

I'll explain each one a little more, and provide some ideas about what you'll need to learn in order to implement them.

Option 1: Use the Migrate API

Use the Migrate API combined with the HTTP Fetchers in the Migrate Plus module to ingest data from an API and turn it into Drupal nodes (or any entity type).

In this scenario you're dealing with a data set that doesn't change frequently (a few times per day, maybe), and/or it's okay for the data displayed on the site to lag a little behind what's in the external data service. This approach is somewhat analogous to using a static site generator like Gatsby or Sculpin that requires a build to occur in order for the site to get updated.

In this case that build step is running your migration(s). The result is that you'll end up with a Drupal entity for each imported record, no different than if a user had created a new node by filling out a form on your Drupal site. In addition, you get the complete extract, transform, load pipeline of the Migrate API to manipulate the ingested data as necessary.
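To illustrate, a migration definition using Migrate Plus's HTTP fetcher and JSON parser looks roughly like the following. The endpoint, field names, and selectors here are hypothetical -- they'd be dictated by the API you're consuming:

```yaml
# A hypothetical migrate_plus.migration.example_articles.yml
id: example_articles
label: Articles from a remote API
source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    - https://api.example.com/articles
  # Where in the JSON response the records live.
  item_selector: data
  fields:
    - name: id
      selector: id
    - name: title
      selector: title
  ids:
    id:
      type: integer
process:
  title: title
destination:
  plugin: 'entity:node'
  default_bundle: article
```

Running this migration (the "build step") creates or updates an article node per record; re-running it picks up changes from the API.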

Pros:
  • If you've worked with Migrate API before, this path likely provides the least friction
  • Data is persisted into Drupal entities, which opens up the ability to use Views, Layout Builder, Field Formatters, and all the other powerful features of Drupal's Entity & Field APIs
  • You can use Migrate API process plugins to transform data before it's used by Drupal
  • Migrate Plus can handle common forms of authentication like OAuth 2 and HTTP Basic Auth

Cons:
  • Requires a build step to make new or updated data available
  • Data duplication; you've now got an entity in Drupal that is a clone of some other existing data
  • Probably not the best approach for really large data sets

Learn more about this approach:

Option 2: Create a Views Query Plugin

Write a Views Query Plugin that teaches Views how to access data from a remote API. Then use Views to create various displays of that data on your site.

The biggest advantage of this approach is that you get the power of Views for building displays without the need to persist the data into Drupal as entities. This approach is also well suited for scenarios where there's an existing module that already integrates with the third-party API and provides a service you can use to communicate with it.

Pros:
  • You, or perhaps more importantly your editorial team, can use Views to build a UI for displaying and filtering the data
  • Displays built with Views integrate well with Drupal's Layout Builder and Blocks systems
  • Data is not persisted in Drupal and is queried fresh for each page view
  • Can use Views caching to help improve performance and reduce the need to make API calls for every page load

Cons:
  • Requires a lot of custom code that is very specific to this one use case
  • Requires in-depth understanding of the underpinnings of the Views API
  • Doesn't allow you to take advantage of other tools that interact with the Entity API

Learn more about this approach:

Option 3: Write a Service using Guzzle (or similar)

Write a Guzzle client, or use an existing PHP SDK to consume API data.

Guzzle is included in Drupal 8 as a dependency, which makes it an attractive and accessible utility for module developers. But you could also use another similar low-level PHP HTTP client library, and add it to your project as a dependency via Composer.

Guzzle is a PHP HTTP client that makes it easy to send HTTP requests and trivial to integrate with web services. --Guzzle Documentation

If you want the most control over how the data is consumed, and how it's displayed, you can use Guzzle to consume data from an API and then write one or more Controllers or Plugins for displaying that data in Drupal. Perhaps a page controller that provides a full page view of the data, and a block plugin that provides a summary view.

This approach could be combined with the Views Query Plugin approach above, especially if there's not an existing module that provides a means to communicate with the API. In this scenario, you could create a service that is a wrapper around Guzzle for accessing the API, then use that service to retrieve the data to expose to views.
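For example, registering such a wrapper as a service so it can be injected (or handed to a Views query plugin) takes just a few lines of YAML. The module, class, and service names here are hypothetical; @http_client is Drupal core's Guzzle-based HTTP client service:

```yaml
# A hypothetical mymodule.services.yml
services:
  mymodule.api_client:
    class: Drupal\mymodule\ApiClient
    # Inject Drupal's Guzzle-based HTTP client.
    arguments: ['@http_client']
```

Any controller, plugin, or other service can then depend on mymodule.api_client rather than talking to the API directly, keeping the HTTP details in one place.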

If you need to do anything other than GET (POST, PUT, etc.) from the API in question, you'll almost certainly need to use this approach. The above two methods deal only with consuming data from an API.

Pros:
  • Able to leverage any existing PHP SDK available for the external API
  • Some of the custom code you write could be reused outside of Drupal
  • Greatest level of control over what is consumed, and how the consumed data is handled
  • Large ecosystem of Guzzle middleware for handling common tasks like OAuth authentication

Cons:
  • Little to no integration with Drupal's existing tools like Views and others that are tailored to work with Entities

Learn more about this approach:

Option 4: JavaScript

Use client-side JavaScript to query the API and display the returned data.

Another approach would be to write JavaScript that does the work of obtaining and displaying data from the API. Then integrate that JavaScript into Drupal as an asset library. A common example of something like this is a weather widget that displays the current weather for a user, or a Twitter widget that displays a list of the most recent Tweets for a specific hashtag.

You could also create a corresponding Drupal module with an admin settings form that allows a user to configure various aspects of the JavaScript application. Then expose those configuration values using Drupal's JavaScript settings API.
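Sketching that out, the asset library definition might look like this (the module, library key, and file name are hypothetical; core/drupalSettings is the real core library that carries values from PHP to JavaScript):

```yaml
# A hypothetical mymodule.libraries.yml
weather-widget:
  js:
    js/weather-widget.js: {}
  dependencies:
    # drupalSettings carries values from the admin form to the JavaScript.
    - core/drupalSettings
```

Attaching this library to a page or block makes the widget's JavaScript, plus any settings the module exposes, available client-side.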

While it's the least Drupal-y way of solving this problem, in many cases this might also be the easiest -- especially if the content you're consuming from the API is for display purposes only and there is no reason that Drupal needs to be aware of it.

Pros:
  • Data is consumed and displayed entirely by the client, making it easier to keep up-to-date in real time.
  • Existing services often provide JavaScript widgets for displaying data from their system in real time that are virtually plug-and-play.
  • Code can be used independent of Drupal.

Cons:
  • No server-side rendering, so any part of the page populated with data from the external API will not be visible to clients that don't support JavaScript. This also has potential SEO ramifications.
  • You can't query the API directly if it requires an API key that you need to keep secret (e.g., because the key has access to POST/PUT/DELETE resources). In that case, you would need server-side code to act as a proxy between the API and the JavaScript frontend.
  • Drupal has no knowledge of the data that's being consumed.
  • Drupal has little control over how the data is consumed, or how it's displayed.

Learn more about this approach:

Honorary mention: Feeds module

The Feeds module is another popular method for consuming data from an API, and it serves as an alternative to the Migrate API approach outlined above. I've not personally used it with Drupal 8 yet, and would likely choose the Migrate API because I have much more experience with it. Feeds is probably worth at least taking a look at, though.


There are a lot of different ways to approach the problem of consuming data from an API with Drupal. Picking the right one requires first understanding your specific use case, your data, and the level of control site administrators are going to need over how it's consumed and displayed. Remember to keep in mind that turning the data into Drupal entities can open up a whole bunch of possibilities for integration with other aspects of the Drupal ecosystem.

What other ways can you think of that someone might go about solving the problem of consuming data from an API with Drupal?

May 04 2018

Twin Cities DrupalCamp is always my favorite Drupal event of the year, both because it's close to home, and because I get to be involved in planning and organizing. I have the opportunity to attend quite a few Drupal events every year, and there's just something extra special in doing so not only as a participant but as a volunteer. You get to see a different side of things, and engage with people in different ways. It's a great chance to spend some time working on something you care about with friends.

This year the camp is taking place from June 7th-10th in Minneapolis, and it's shaping up to be a good one. As is tradition, the Thursday before the official camp starts is training day. This year, we've got three great options for you to choose from, including a workshop by Amber and me about theming Drupal 8.

We actually did the same workshop at Twin Cities DrupalCamp last year, and it sold out. So we thought we would offer it again this year to give a new set of people the chance to learn to make beautiful things with Drupal 8.

Learn Drupal 8 Theming

You can sign up for our Drupal 8 Theming workshop to get the low-down on all of the goodies that come with the powerful theme system. The workshop is free for anyone who's registered for the camp, but seats are limited, so don't wait.

This workshop will familiarize front-end developers with Drupal 8's theme system. Whether your goal is to theme your personal site, pass the Acquia front-end developer certification, or upgrade your skills for a job, we provide students with a solid foundation on which to start, and enough knowledge to continue to practice and learn on their own. Whether you're creating an entirely new theme from scratch or making nips and tucks to an existing design, understanding how Drupal's theme system works -- or having someone on your team who does -- is essential.

Here's what we're planning to cover in this all-day workshop:

  • How the Drupal theme layer relates to the rest of the system
  • Common theming terminology and processes
  • How to override any of Drupal's HTML output
  • The relationship between base themes and sub themes
  • Everything you need to know about Twig when working with Drupal templates
  • How to add both custom and third-party CSS and JavaScript libraries
  • Tools for introspecting and debugging a theme
  • Tips and tricks for using common front-end development tools like CSS preprocessors and task runners, in conjunction with Drupal

We do have a limited number of seats for this workshop, so sign up soon to reserve your spot.

Sign up for Theming Drupal 8!

Additional trainings

If that's not what you're after, there are two additional trainings that you can choose from, put on by our friends at Agaric, and Backdrop CMS.

Drupal 8 Content Migrations, presented by Agaric: This training is aimed at site builders, who will learn to combine various core and contributed modules and write YAML files to accomplish content migrations. No prior experience with the Migrate module is required.

Intro to Backdrop CMS, presented by Backdrop CMS: This introductory training will cover the basics of creating and administering a website with Backdrop CMS.

Come say hi

Even if you're not attending one of the trainings above we would still love to say hi if you're going to be at Twin Cities DrupalCamp this year. Amber and I will be around throughout the event, we'll have stickers, and would love to catch up. So if you see us, say hello.

Feb 27 2018

Before reading this, check out these other posts:

After reading all of these I have lots of thoughts and feelings, and I don't yet know what I think the best path forward is. But I'm happy to see this conversation being highlighted and the continued discussion of how we can make Drupal's documentation even better. To that end, I think we can look to the User Guide project as an example we can learn from.

Learn from past successes

I want to point to the Drupal 8 User Guide project as an example of what can be accomplished through a documentation initiative. It wasn't ever an "official" initiative, but there was a lot of community involvement and excitement around it nonetheless, as well as more coordination and collective effort towards a goal than we typically see with regards to work on Drupal's documentation. While this particular guide only covers one aspect of Drupal, I believe it can serve as a good example for possible future documentation initiatives.

For reference, the readable version of the guide is here: https://www.drupal.org/docs/user_guide/en/index.html, and the project is located here: https://www.drupal.org/project/user_guide.

You can get some history of how this project was started, and how it evolved over time, by watching these two presentations:

We set out to solve a bunch of existing issues:

  • No overall plan
  • Limited scope
  • Lack of peer review
  • Lack of copy editing
  • Tools that don't facilitate the governance model we wanted to impose
  • The desire to have Drupal's documentation translated
  • Guidelines, and conventions, for writers, and a process for enforcing them
  • etc.

Things we learned that could be applied elsewhere:

Start with a plan

Once we knew we wanted to write the guide, and felt like we had some buy-in from the community, the first thing we did was make a plan. Jennifer and I met in person, and drafted an initial outline covering exactly what we felt the guide should contain. Then we shared that for feedback. Having this outline allowed us to know what was in scope, track the progress of the initiative, and get other people involved in a meaningful way (running a documentation sprint is a lot easier if you know what you want people to write).

In addition to the outline, we also defined a process that every page in the guide would go through.

  1. Initial draft
  2. Review: does it match our scope and guidelines?
  3. Technical review: do the instructions work?
  4. Copy editing
  5. Finalize screenshots
  6. Ready for publication

Spreadsheet with rows for user guide topics and columns representing phases in the process of review. Cells indicate the progress of each topic as it moves through each phase.

Again, this allowed us to track progress, and helped with recruiting people because we had clearly-defined tasks for people to work on. We could also define that certain steps (copy editing, publication) could only be undertaken by specific people in order to allow for consistent quality control.

Use version control

While maybe not as friendly for someone who just wants to make a drive-by edit to a documentation page, using Git as the canonical source for the content of the guide has proven extremely valuable.

  • Limit who can commit, ensuring that all content is vetted by the same procedure prior to being added
  • Allow for better collaboration between multiple people working on the same content
  • Facilitates a standardized review process
  • Having a patch + review process helped attract contributors who might not otherwise participate. Many people are hesitant to edit a free-for-all page because they're concerned that they're not "doing it right." When they know there is a friendly review process in place, they are emboldened.
  • Allows for opening issues and discussing changes before they're made (this is huge in comparison to the wiki-like nature of drupal.org)
  • Allows for maintaining translations. Once a page has been translated we have a way to track that the English version has changed, and thus that the translated copies require an update.
  • You can have a version of the documentation that matches a version of Drupal.

It does raise the barrier to entry for contribution. However, in my experience I generally feel the trade-offs are worth it.

Give people credit

A nice side effect of the use of version control in this case is that we can give people a commit credit for the work they've done. For better or worse, this is an important aspect of the contribution process. Writing documentation is often a thankless task, and we want to help elevate those that are contributing.

In addition to commit credits we also maintain an ATTRIBUTIONS.txt file for the guide, and individual attributions with each page.

Limit scope and define your audience

The user guide has a defined scope and audience.

From the guide:

This guide was written mainly for people with minimal knowledge of the Drupal content management system. The topics will help them become skilled at installing, administering, site building, and/or maintaining the content of a Drupal-based website.

This allowed us to make critical decisions about what to include, what was too much, and where we maybe needed more information.

Additionally, writing documentation is one thing. Keeping it up-to-date is a whole other beast. Knowing the scope of what your documentation covers makes this easier.

Limiting the scope of what the guide covers allowed us to, well... finish the guide.

Oversight and governance are important

As Dries said in his post:

It's hard to write world-class documentation by committee without good governance...

While the idea of a free-for-all wiki where anyone can come along and help with updates to the documentation certainly has its merits, it is also a promoter of sprawl and can lead to vastly inconsistent quality. In the end Jennifer and I didn't write all that much of the content. Instead, we helped others figure out where and how they could get involved, ensured that processes were followed by acting as gatekeepers, and worked hard to identify issues and adapt our plan in order to address them. I believe this allowed the guide to be completed.

Going forward we can help to identify areas that need improvement, facilitate translation teams who are doing the hard work of translating the content, and throughout all of this ensure consistent quality. But this only works because from the beginning it was made clear that this isn't a free-for-all, and there is a process.

The combination of a clear definition of scope and governance gives us the authority to say, "Thanks, but no thanks." With the existing wiki-like documentation there's no clear authority, which I believe leads to duplication, unnecessary content, and people feeling like they can't update or fix content written by someone else for fear of breaking some unwritten rules. Anyone wanting to improve things by deleting or re-writing is left to contend with rebuttals from the original author, with no real recourse other than getting into a battle or backing off and leaving things as-is. Clear scope and governance help solve these issues.


Because we have a defined scope and a set of strict formatting guidelines, we are able to automate the process of creating screenshots for the guide. Almost every page in the guide contains multiple screenshots. Keeping those up-to-date with changes to Drupal core would be arduous -- which really means it wouldn't get done. Having an automated way to generate the majority of these ensures that when a new minor version of Drupal 8 is released we can just update all the screenshots. It also means we can generate screenshots for translated versions of the guide that show the Drupal UI with alternative languages installed.

Without being able to do this I'm not sure we would have decided to add screenshots to the whole guide. It's impossible to overstate how much harder it is to maintain things like images and videos in comparison to text.

And, as a side-effect, this process serves as a sort of functional test suite for the guide. It often catches changes in Drupal core that require us to upgrade the step-by-step instructions in the guide.

Be prepared to explain again, and again, why we're taking this approach to writing documentation. After many years of wiki-style free-for-all on Drupal.org there are a lot of people who push back against imposing more oversight and control. Taking the time to explain the benefits of an approach like this and helping people to see how they can still contribute can be tedious, but it's worth it.


Additionally, Drupalize.Me is currently working on creating videos to complement the content of the user guide -- a task that we couldn't possibly take on without all of the above structures being in place. You can read more about the effort in Adding Free Videos to the Drupal 8 User Guide.


I hope that we can use some of the experience gained writing the Drupal 8 User Guide to help inform future decisions about how we create and maintain documentation for all of Drupal. Including:

  • Defining a clear plan and scope for what should be included
  • Implementing a process that facilitates peer review and oversight
  • Evolving the governance of documentation in order to empower people with the authority to make decisions
  • Creating tooling that helps us to better maintain existing content

That, and of course, empowering more people to get involved in creating high-quality documentation that matches people's expectations and makes Drupal look good.

I would love to hear your thoughts about adopting lessons learned from the user guide either in the comments here, or on one of the posts linked above.

Feb 14 2018

Drupal 8 User Guide

We've just released a new free guide on our site, Drupal 8 User Guide, in order to help our members--and anyone--with minimal existing knowledge of Drupal get started building something quickly. It's a re-publication of the one already available on Drupal.org, with the addition of embedded videos.

I want to share a little bit about why we chose to republish existing content instead of creating new materials, why we've opted to add video to the user guide, and why we're giving it all away for free.

What is the User Guide project?

The Drupal 8 User Guide project consists of about 100 pages of written content that provides an introduction to Drupal for newcomers.

From the guide's preface:

This guide was written mainly for people with minimal knowledge of the Drupal content management system. The topics will help them become skilled at installing, administering, site building, and/or maintaining the content of a Drupal-based website. The guide is also aimed at people who already have some experience with a current or past version of Drupal, and want to expand the range of their skills and knowledge or update them to the current version.

Its content is written and maintained by the Drupal community. It is published and freely available on Drupal.org, and it's licensed under Creative Commons CC BY-SA 2.0.

When it came time for us to start planning for the intro-level content we wanted to include on our site we opted to make use of this existing resource. Drupalize.Me has a long history of involvement with the project. I put forth the initial proposal at DrupalCon LA, helped to subsequently refine it into the current version, and am one of the current maintainers. Amber Matz helped with some of the editorial process, and we created the graphics used in the example site, licensed under Creative Commons for use in the guide.

Why republish the user guide?

  • "A good introduction to Drupal 8" is one of the more common requests we get from our members.
  • The text is already written and licensed under Creative Commons. That gives us a great head start, and lets us move faster without essentially duplicating quality content that is already available elsewhere.
  • It's really high quality. Given our involvement with the project since the beginning, we already know that it's as good or better than anything we might write ourselves.
  • We can do double-duty with our time, and benefit both our site and help improve the official user guide project at the same time.
  • The content of the guide is already organized in a way that is similar to how we like to break things up: short concept tutorials that introduce a new idea, followed by one or more task tutorials that demonstrate the new concepts in use. So it fits well into our existing architecture.
  • Our site has some unique features that our members appreciate that Drupal.org doesn't currently have. For example, tracking which tutorials you've already read so you can more easily pick up where you left off last time or adding things to your queue for future watching or reading.

We're super excited about this and feel like it's a big win for Drupalize.Me, our members, and the Drupal community as a whole.

Adding Video to the User Guide

One thing that we feel the current iteration of the user guide project is missing is video, and we want to help fix that. So we recorded videos for all of the task tutorials in the guide, and we're making them available under the Creative Commons license and publishing them all for free on both our site and our YouTube channel.

Why video?

  • Different people learn in different ways; some are more visual learners, and having the ability to watch as someone else navigates the steps required to complete a task can be more helpful than either reading instructions or looking at screenshots.
  • Video can also be beneficial for auditory learners.
  • Video allows the user to see important elements of the UI that may not be covered by screenshots.
  • Some people prefer watching a video over reading a page of text.

The downside of video is that it's harder than text to produce, and it requires some specialized knowledge that can make it harder for volunteers to create and maintain. Producing and updating high-quality video content is something we've gotten really good at over the years, and we know first-hand how difficult it is. When talking about our own content and the work we do on a daily basis, we often say, "It's easier to patch a text file than a video."

Additionally, we've learned from experience that when it comes to video, people tend to expect a highly polished and consistent format. It can be jarring to switch frequently from one presenter to another, or distracting when different screencasters use different browsers or different terminal configurations. This is by no means impossible for a volunteer team to accomplish, but it's absolutely easier for a team with experience and relevant resources to do.

For these reasons, and because we're firm believers that when Drupal does better we all do better, we're working to contribute all the videos we created back to the original Drupal 8 User Guide project. Our hope is that by contributing them back to the community more people can get the chance to learn from them. And, that by also using them on Drupalize.Me we can continue to help keep them up-to-date and accurate for future versions of Drupal.

If you've got thoughts about how, or if, these videos should be included in the guide see this issue on Drupal.org.

What's next?

Going forward we would like to create and contribute videos to accompany the concept tutorials in the user guide, though we don't yet have a timeline for that work. Additionally, with this baseline information in place, we'll begin working on expanding the Drupalize.Me tutorial library to go more in-depth into topics like content types, views, and user management that the user guide introduces.

Get started learning Drupal 8 today with the Drupal 8 User Guide, now with videos!

Dec 11 2017
Dec 11

Thank you for casting your votes! The poll is now closed.

One of our favorite things to do at Drupal community events is in-person training. There's just something special—and motivating—about getting to teach people face-to-face rather than from the other side of a computer screen. In 2017 we developed and offered a Theming Drupal 8 workshop at DrupalCon, Twin-Cities DrupalCamp, MidCamp, and BADCamp. It was super popular and filled up every time we did it. So this year we're considering developing another workshop that we can offer at events we're fortunate enough to be able to attend. But we're torn on what to cover! What are you most interested in learning?

Our current top contenders are:

  • Creating Modern Web Services APIs with Drupal: Building on our Web Services in Drupal 8 series, we would teach you how to use Drupal as the backend for your API; popular modules and how to configure them; and best practices for important topics regarding the architecture of your API, like presentation versus content, security, and documentation. In addition, we would look at how consumers can interact with the API that we build. We won’t have time to get into any specific frameworks, but would instead demonstrate how any HTTP client could retrieve information from Drupal, leaving the specific implementation up to you. This would be a great first step for anyone wanting to get started with decoupled Drupal.
  • JavaScript for Drupal Developers: This workshop would build on the JavaScript portions of our Drupal 8 Theming Guide, and integrate additional content from experts in the community regarding current developments in the Drupal+JavaScript ecosystem. The intent would be to teach people who are already familiar with JavaScript basics about the various ways in which JavaScript can be used with Drupal themes and modules. We'd cover Drupal's existing JavaScript API, using ES6 with Drupal, and integrating modern JavaScript toolchains and frameworks like React into an existing Drupal module or theme. This would be a great first step for developers who want to level up their JavaScript game, as well as help prepare for the eventual inclusion of more and more JavaScript in Drupal.

Both of the workshops would follow the same pattern as the Theming Drupal 8 workshop we've been doing for the last year. They would include a combination of short lectures, hands on exercises, and discussion time to help keep attendees engaged and maximize the amount of knowledge we can share in a single class.

As much as we wish we had the time and resources to run them all, it's not realistic. So if you were to participate in a Drupal community event in 2018 and could choose between these workshops, which would you attend? Cast your vote in the poll below!


Jun 29 2017
Jun 29

The call for sessions for DrupalCon Vienna just closed. Now all of us who submitted a session get to play the waiting game, eager to find out whether our session was accepted. Ever wonder what goes on during the session selection process? Here's some insight.

Amber Matz and I (Joe Shindelar) had the awesome opportunity to be part of the programming team for DrupalCon Baltimore. Both of us served as local track chairs. Amber worked on the Horizons track and I worked on the Being Human track. We thought it would be fun to share some of our experience as track chairs helping with session selection.

The official session selection process is well-documented. It's designed to reduce bias and provide as much transparency as possible. Hopefully this post provides some additional insight into how sessions are chosen at DrupalCon.

Who chooses the sessions?

The DrupalCon programming team is responsible for soliciting session proposals, selecting sessions from those submissions, and laying out the schedule for the week of DrupalCon. The programming team is made up of Drupal Association staff and community volunteers. The exact composition of the team changes for each DrupalCon, but always includes a mix of people new to the process as well as those with previous experience.

Sessions are primarily selected on a per track basis, with input from the entire programming team. Each "track team" is composed of one local track chair and two global track chairs. (It was like this for DrupalCon Baltimore when I was involved, but I imagine this can vary.) These are the people who give your session the "thumbs-up" or add it to the "thanks, but no thanks" list.

Local track chair

For DrupalCon Baltimore, Amber was the Horizons local track chair and Joe was the Being Human local track chair.

The local track chair has these responsibilities:

  • Write a description for their track along with specific topic suggestions
  • Respond to inquiries about the track or requests for submission review from the community
  • Actively participate in weekly meetings
  • Determine the method and criteria their track will use to evaluate sessions
  • Read and evaluate all session proposals
  • Determine if session proposals would be a better fit in a different track and communicate accordingly with other track teams
  • Select session proposals for inclusion in the final program, along with alternates
  • Communicate with speakers and offer support to review slides or practice presentations

In many of the above tasks, especially session proposal selection, the local track chair works closely with their global track chairs.

Global track chairs

There are one or two global track chairs for each track team. These are people who have previously acted as a track chair for DrupalCon. In many cases, the global track chairs were previously the local track chair for the track in question. This allows for things like transfer of knowledge, cohesion between events, and the ability to provide some insight into what did or did not work last time. For Amber and me, as local track chairs, the global track chairs were our first stop for questions.

Global track chairs have these responsibilities:

  • Provide support and guidance to the local track chair and help them complete all their responsibilities.
  • Support the transfer of knowledge and experience from previous events.

Each track team was responsible for figuring out a process and plan that worked for them with the local track chair taking the lead. Local track chairs (and optionally globals) met with Amanda, the Program Manager for the Drupal Association, once a week in the months leading up to DrupalCon. Besides directing the programming-related tasks for DrupalCon, Amanda coordinates all the volunteers on the track team. Our meetings generally involved locals giving an update about the current status of things. One such update might be: "My track has 25 submissions so far, I've been in contact with a few people to get clarification, and I've been working with my globals to finalize the draft of our track blog post that's due next Monday. One of the people who submitted a session asked me about how much time they should plan to leave for Q&A. What should I tell them?"

Meetings ended with locals having a list of action items to complete.

  • Joe: For our team, I was always at the meetings, and the other two attended when they could, but not every time. I would generally immediately follow up with the rest of my team, let them know what was discussed, and what our next tasks were. For actionable tasks, I would usually try and take a first pass at accomplishing it and then work with the rest of the track team to make sure we all agreed. For example, I wrote the track description and then they helped by providing feedback.
  • Amber: My experience was similar to Joe's. I usually time-blocked the time right before and right after our weekly meeting to work on tasks and communicate with my globals. I would always ask my globals to review any copy I had written or responses to inquiries from the community.

Submit early

Here's our number one bit of advice: submit your session well before the deadline. Submitting early is about more than just garnering favor with an over-zealous track chair. It gives them an opportunity to ask clarifying questions, provide feedback, and give your submission the attention it deserves.

Towards the end of the session submission process, this becomes impossible to do. The volume of new submissions is simply too high. Check out the graph below. It shows the total number of sessions submitted over time for the DrupalCon Baltimore CFP (Call for Presentations). The CFP was open for 2 months, but 75% of the submissions came in during the last 5 days. About 38% were submitted in the last 24 hours.

Graph showing exponential rise in number of submissions relative to close of CFP.

  • Joe: As a track chair it is really exciting when people submit a proposal to your track. And I read many of them as they came in. I was just thrilled that people were submitting things. I also started to follow up with people after reading their session descriptions if something wasn't clear, or if I needed more info. Since they submitted early, they also had time to revise. Every track chair on the program team had stories about reading early submissions and taking the time to reach out to get clarification and improve the submission.
  • Joe: I wish I could give the same amount of feedback for every session, but the reality is this can only be done if you submit early. If you wait until just before the deadline I simply can't read all the submissions as quickly as they come in. By the time I do get around to reading it, it'll be too late for you to make any changes.
  • Joe: A couple of people reached out to me specifically after submitting their proposal asking for feedback. I was happy to provide feedback in order to help someone refine their proposal and make sure that it fits with my vision for the track.
  • Amber: As a track chair, I was trying to envision the final program and imagine how each presentation would fit into it. This sort of exercise of imagination becomes difficult with the overflow of submissions at the end.
  • Amber: I really appreciated early submissions that were thorough. What I didn't like were early submissions from veteran speakers who didn't include at least one link to a recording of a previous talk. Just one is all I needed. Don't make me hunt down your previous talks just because you've spoken many times before at DrupalCon. My advice for veteran speakers is: don't assume the track chairs know who you are and what your reputation is.

Read the track description

Each track team put together both a description of the track and a blog post explaining the vision for the track and the things we were hoping to cover. The list reflects our ideas about what we think will be valuable for the community, and specifically DrupalCon's audience, as well as some topics of personal interest. You can bet we're going to keep that list in mind when we're picking sessions. You don't have to follow the list, but know that we're likely trying to make sure we cover things on the list.

  • Joe: Every time I sat down to start reviewing sessions I would start by first rereading the description I had written for the track. This helped me to get into the right mindset and to remember what it is we're trying to accomplish. That being said, I was also trying to be aware of the fact that other people are going to have some great ideas that I didn't think of when I wrote the track description. This was especially true when I saw that there were multiple sessions submitted about the same topic -- a good indicator that there is community interest. (Amber: I totally agree!)
  • Joe: Sometimes I found it difficult to judge whether something was a good fit for my track or not. As someone who has submitted to DrupalCon in the past, I realize that sometimes you have an idea for a great talk that just doesn't fit perfectly into any of the predetermined tracks. There was a lot of talk about this amongst the overall program team, and my suspicion is that in the future this is likely to change a bit. Possible ideas include allowing the selection of multiple possible tracks, or even moving to more of a tagging system and less of a one-to-one relationship between sessions and tracks.
  • Amber: I depended heavily on my global track chairs for input on the track description. The nature of the Horizons track is such that you want to make sure you're not rehashing "the new hotness" from last year or the year before. I also tried, with a tiny bit of success, to solicit the community for ideas through a survey. I had better luck soliciting folks from our sister company Lullabot for ideas about current trends in tech.

Write a good description

After having read a few hundred submissions you start to get a sense of who put some thought and effort into their descriptions and who didn't. Things like spelling and proper grammar aren't necessarily going to make or break your chances of getting accepted. However, attention to detail in your description indicates you're likely to put the same effort into preparing an awesome session.

  • Joe: When reviewing sessions I pretty much immediately discarded anything that I could tell right away didn't have much effort put into it. There were a few with only one or two sentences in the description. There was even one where someone had left the default "copy goes here" values in the form fields. I suspect they meant to come back and fill in more later but then never did. Either way, if I can tell you didn't put much effort into your submission it's going to be hard to convince me that you're going to put in the time and effort required to prepare a quality session.
  • Amber: In my mind, a thorough session description describes the topic and what problem it addresses; describes what the presentation will cover, and at what level; and describes learning outcomes--what the audience member can expect to learn by attending this session. An absence of any of these elements earned a lower ranking. No matter your level of English proficiency in writing, spelling, or grammar, have at least one person read it over and give you feedback. But please do them the courtesy of running your copy through a spelling and grammar checker tool first.

25-minute sessions

Prior to DrupalCon Baltimore, all sessions were always 60 minutes long. Or rather, 45-50 minutes with time for Q&A at the end. In Baltimore, the program team decided to introduce the option to have 25-minute sessions. And we allowed you to choose when you submitted your proposal whether you intended for it to be 25 minutes, 60 minutes, or that you were willing to do either.

  • Joe: I found it tricky whenever someone would select the "either" option. It forced me to try and imagine what the talk might look like in a 25-minute version vs. a 60-minute version. And, I had to assume the description was written to cover the 60-minute version. What would they cut? Would they cut or just compress? A few people left extra notes indicating their intent. But most did not. I don't think it swayed my opinion of the session much one way or another, but I tended to assume that any session indicated as suitable for both was probably intended to be 60 minutes.
  • Joe: Another thing I thought about a lot related to 25-minute sessions was the fact that when we scheduled them we put 2 sessions in the same room back-to-back. So the audience was likely to be mostly the same group of people for both. And thus it made a lot of sense to try and pair them based on a similar theme.
  • Amber: After sessions were selected and I reached out to the speakers in my track, I did sense a bit of nervousness (maybe even annoyance?) from speakers chosen for 25-minute slots, especially when they had chosen the "either" option. It is quite challenging to reformat a talk to fit in a shorter time-slot, especially when it has been developed as a 50-minute session.

View of spreadsheet used to schedule sessions showing time slot divided in half with two 25-minute sessions in each slot.

Note: We learned our lesson here. When submitting a session for the upcoming DrupalCon Vienna, you can choose 30 minutes or 60 minutes -- not both.

Reviewing sessions

All of the sessions were placed into a spreadsheet with a tab for each track. Each track team (one local and two global chairs) was responsible for picking X hours' worth of content for their track, plus alternates in case someone backed out. Beyond that, the requirements were somewhat minimal.

  • Joe: The programming team as a whole had at least 2 meetings where we talked about ways to approach ranking/rating sessions. I found that this was an area where it really helped to have the chance to work alongside other people who had done this before. You could really see how experience and knowledge gained over the years was carried along. For the Being Human track we chose to go through and individually rank each session from -2 to 2, tag them with info about how they mapped to our track description and the big ideas they included. For example, "Imposter syndrome" was something we knew we wanted to cover, and there were multiple sessions about it. Tagging each one allowed us to more easily identify duplication, etc. I also left a LOT of notes for myself. Much of the initial work of selection was assigning a rating and making sure I could figure out later why I assigned something that rating.
  • Joe: I reviewed things in multiple passes. Sometimes I just went through and tagged based on topics covered. Another pass through I was just thinking about the speaker(s) and their experience speaking. In order to keep a clear head and to give each submission the quality review it deserved, I would limit myself to spending no more than an hour or two at a time reviewing sessions -- which basically meant I spent an hour every day for about 2 weeks just reading and ranking. I initially tried reading a bunch all at once but quickly found that they were becoming jumbled in my head and I was having a hard time distinguishing one "Imposter syndrome" session from another. And that didn't seem fair. I was worried that sessions I reviewed most recently would just sort of win by virtue of the fact that I could remember them. This approach -- spending a shorter time each day -- ended up working really well for me.
  • Joe: I really wanted to make sure that the Being Human track was for the community and not just for me. For every session I accepted or declined, I would ask myself the question, "How will I justify this decision to the community?"
  • Amber: The Horizons track got lots of interesting submissions. For us, the main question was: how well does this session fit into the vision we had for our track, as described in our track description and topic suggestions list? I found it helpful to assign subtopics to each session so that we could see where there was more than one submission on a certain topic, and so that we could compare submissions that were covering pretty much the same ground. We wanted to have complementary sessions, or even two competing approaches to the same problem, but avoid duplication. We also wanted to have as many subtopics in our track represented as possible. This is where it was nice to have the option to choose 25-minute sessions. We were able to cover a lot more interesting topics in our track that way.
  • Amber: Ranking was difficult. I broke the rating up into multiple categories and then took the average. We also calculated a standard deviation so that we could see which submissions we needed to talk about. Where there was a debate, it really came down to, "How well does this session fit into the vision for our track?" and "Do we think this is a good speaker, based on what we know about them?" The bottom line was always, "Can we justify this decision to the community?" Most of the work of ranking we were able to accomplish asynchronously on the spreadsheet. For matters of debate, we took to Slack or Google Hangouts to discuss. It was hard to find time for all 3 of us to sync up, so we did as much as we could asynchronously.
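Amber's approach of averaging category ratings and using the standard deviation to surface disagreement can be sketched in a few lines. This is an illustrative example only: the session titles, scores, and the threshold for flagging a submission for discussion are all hypothetical, not taken from the actual DrupalCon spreadsheet.

```python
from statistics import mean, stdev

# Hypothetical ratings: each session scored -2..2 by the three track chairs.
ratings = {
    "Imposter syndrome, talk A": [2, 2, 1],
    "Imposter syndrome, talk B": [2, -1, 1],
    "Burnout in open source": [0, 0, 1],
}

for title, scores in ratings.items():
    avg = mean(scores)
    spread = stdev(scores)
    # A high spread means the chairs disagree: flag it for synchronous discussion.
    verdict = "discuss" if spread > 1 else "ok"
    print(f"{title}: avg={avg:.2f}, stdev={spread:.2f} -> {verdict}")
```

With these made-up numbers, talk B gets flagged because one chair rated it much lower than the others, which is exactly the kind of submission the team would take to Slack or a Hangout.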


When you submit a session to DrupalCon there are a couple of optional questions related to inclusivity. As part of our effort to increase diversity amongst speakers, you're asked to self-identify as to whether or not you've presented at a prior DrupalCon and whether or not you identify with any of the Big 8 social identifiers. This information isn't intended to be used as part of the session rating/ranking process per se, but we do use it to make sure we've got a diverse group of speakers.

Form showing questions about diversity when submitting a session.

  • Joe: In our spreadsheet we just had a yes/no for "first-time speaker" and "identifies with one or more social identifiers". It came up a bit when thinking about a speaker's prior experience. There was one case where we had two very similar sounding talks, one by a new speaker and one by someone who had previously talked about the same topic. Both were ranked equally, and as a tie-breaker we opted to go for the new perspective, especially since a recorded version of the other talk was still available from a previous conference.
  • Joe: When ranking sessions I intentionally didn't look at this too much. I was trying to initially rate sessions based on the session and the speaker's experience. After we had completed our selection for the Being Human track I took some time to go back through and generate some basic stats, such as percent of new speakers, percent non-male speakers, and percent identifying with one or more social identifiers. Our thinking was if we felt that any of those numbers needed to be improved that we could re-evaluate our choices to make sure we were representing the whole community. But, we were quite happy and didn't change anything. The Being Human track had a really diverse pool of speakers to choose from, which made our job pretty easy in this respect.
  • Amber: While the Horizons track did have around 20-25% session submitters self-identify with one of the Big 8 underrepresented communities, I was the only woman to submit to and speak in my track. (It is standard to recuse oneself from rating one's own talk or others where being objective would be difficult.) I thought this was a bummer. Although I did make an effort to reach out to women in the PHP and VR communities in particular, I was not successful in recruiting any other women to submit to Horizons. So, there's plenty of room for community growth, in that particular respect.
  • Amber: I was pleased with the mix of new-to-DrupalCon and veteran DrupalCon speakers in the Horizons track. I have always been a bit wary of seeing tracks dominated by the same speakers, and I was glad to be able to include a number of folks new to DrupalCon, but who were still excellent, experienced speakers. It wasn't a criterion for selection, but just a reflection of the quality of submissions from both new and experienced DrupalCon speakers.

One speaker, one session

You can submit as many sessions as you like. Submitting more than one might help increase your chances of being selected. But, with a few exceptions, we were committed to choosing one session per speaker across all tracks. We did allow a speaker to have a solo session and participate in a panel. In two cases we allowed a single speaker to have two solo sessions, but only after contacting them to chat about it.

Because speakers can submit multiple sessions to multiple different tracks, this led to some initial speaker duplication between tracks, which we had to sort out.

  • Joe: For the Being Human track we ended up replacing two of our initially selected sessions because of speaker duplication. In one case it was a relatively easy decision; we had another session on the same topic that we liked equally well and had already had a hard time choosing between the two. The other one was kind of a bummer. I was personally really excited about the session so it was hard to give it up. But, the other track team made a stronger case for why the speaker should present the session in their track. It was kind of surprising to me how I could get sort of emotionally attached to a particular session! The program team worked really well together and I think did a good job of creating a schedule that was good for the community and not just for me, the track chair.
  • Amber: This was a bit of a challenge for us in Horizons. We had a number of excellent submissions, all on topics that we were interested in, but by the same speaker. In addition, the Horizons track topics have some overlap with PHP and Front-End, and so even when we thought we were out of the woods as far as overcommitting a speaker, we ended up making some compromises with other track teams in order to solve this problem of not allowing one speaker to speak too many times. I suppose it does increase your chances of being selected if you submit many times and are in general an awesome speaker and ridiculously intelligent, but it was a challenge to get that all sorted out. A good challenge, though, because I think it's good not to have a track dominated by a single person.

In conclusion

Being a local track chair took a lot of time and effort, but overall it was a positive experience. It was really interesting to see this side of DrupalCon planning and to gain insight into the process of session selection.

Apr 06 2017
Apr 06

Last week Blake and I attended MidCamp 2017 in Chicago, and it was awesome. I always enjoy attending regional camps, and especially those that are relatively close to home for me. It's fun to get to geek out with some of my Drupal neighbors. I also like the pace of these smaller events sometimes. I feel that I'm able to actually spend a bit of quality time with the people I meet vs. DrupalCon where I often feel like I'm being pulled in 3 or 4 different directions all at the same time.

I encourage you to take the time to attend your local camps as well if you get the opportunity. Not sure where/when they are happening? Check out http://drupical.com, and/or locate your regional group on https://groups.drupal.org. I've said this before, and I'll probably keep saying it for as long as I'm involved with teaching Drupal. There is no better way to improve your Drupal knowledge than through mentors and interaction with other members in the community. Regional events like this are a great opportunity to make those connections.

Drupal 8 theming workshop

On Thursday Blake and I presented an Introduction to Drupal 8 Theming workshop. It was an 8-hour long whirlwind tour of all the components that make up a Drupal 8 theme. Really, it's the in-person version of the Drupal 8 Theming Guide on our site. There's always something different--and energizing--about getting to teach these things in person. I love being able to get real-time feedback, and to answer people's questions. Creating training videos can be kind of isolating sometimes. I keep talking to my screen, but no one ever responds.

We'll be doing this training again at DrupalCon Baltimore, and we're also interested in presenting it at some other local camps. If you're helping to organize a camp this year and might be interested, let us know and we can see how it works with our schedules.


I attended a bunch of sessions during the camp, and as per usual I've got a bunch of pages in my notebook full of notes and ideas I now need to follow up on. All of the sessions were recorded and are now available online. I recommend checking out:

  • Understanding Drupal by Mauricio Dinarte. You might think it's kind of silly that I'm attending an "Understanding Drupal" session, but I always love to hear the different ways people explain it, and Mauricio brings a really unique perspective and a good story.
  • Building Great Teams by Drew Gorton. I came away with a bunch of notes about things I'm now going to try and get the Drupalize.Me team to do with me. This session is mostly about finding a common purpose and then going out and tackling it together.
  • Whitewashed - Drupal's Diversity Problem and How to Solve It by Chris Rooney. This session left me thinking about how many people lack a solid safety net, and that when that's missing it can be even harder to do things like decide to learn Drupal and switch your career. I wrote down some notes about how I think Drupalize.Me might be able to improve our offering by being aware of this barrier to entry, and I'm already excited for our next team retreat so we can talk about it more.
  • Drupal 8 Caching: A Developer's Guide by Peter Sawczynec. A 10,000 foot overview of all the pieces that come into play when caching content served by Drupal. There are a lot of them, and Peter does a great job of breaking it down and providing information about each layer of the stack.
  • I also attended Tim Erickson's Drupal as a Political Act? session which was more of a discussion. We talked about the reasons that we all adopt and advocate for free open source software, barriers to entry, and more. I enjoy these conversations about using Drupal to make the world a better place, and find them inspiring. And Tim is a great person to chat with about it as he's got a lot of opinions and ideas.


Finally, to cap it all off, on Sunday, Blake and I helped to facilitate a documentation sprint. As you can probably guess, having high-quality documentation is important to us at Drupalize.Me, and sprints like this are a great way for us to contribute back to the community.

So we brought a box of donuts along to help power the sprinters. And also to try and entice people to join the documentation table.

We ended up with a table full of people helping update various aspects of the Drupal.org documentation, including some work on documentation for the Drupal 8 Migrate API, a new payment gateway for Drupal Commerce, and a bunch of work on ensuring content exists for all the modules in Drupal 8 core. There's an open issue to ensure that there is a known URL with good documentation for each of the Drupal 8 core modules, and we made progress on that by adding and cleaning up documentation for 5 different modules during the sprint.

Thanks to everyone who joined us to help out. Some long-time contributors like Mike and Benjamin, and some first-time documentation contributors like purplenwu and David. You all rock. Thanks for helping make Drupal better.

I'm already looking forward to getting to attend some more regional camps this summer. Hope to see some of you there.

Oct 17 2016
Oct 17

Back in August we announced that we were moving our site to Pantheon hosting. Last month we completed the migration and Blake wrote a post about the process. This month I'm going to take a look at some performance comparisons between our previous infrastructure and our shiny new home.


Prior to moving our site to Pantheon it was hosted on Linode, using a couple of different VPS servers that we managed ourselves with a bit of help from Lullabot. Our old Linode infrastructure consisted of a single web server running Varnish, Solr, Memcache, and Apache, along with a few other servers for testing and DevOps. It was always plenty fast. The choice to move to Pantheon wasn't because we hoped for a performance improvement, but still, we thought it would be a fun exercise to see how the change affected the performance of our site.

My hypothesis

They say that if you're going to measure something you should know what questions you want to answer before you start. Because if you go in saying, "to see what happens", that's what you'll do. See what happens. So I wanted to answer this question: How did moving our site from Linode to Pantheon affect the performance (measured in response time) of our site for both members and non-members?

Going into this, I expect that Pantheon will perform better than our previous setup, though I don't really have a sense of how much better. Hosting Drupal sites is, after all, what they do. I don't think our site was slow on Linode, but I also know that there are a lot of infrastructure and performance tweaks we never got around to making because they were never a top priority.

What should I test?

I want to see what response time looks like for various important pages on our site, as well as a few pages that are good samples of common page variants. So I came up with the following list of pages:

  • / : Our home page: most people's first impression of Drupalize.Me, and the content dashboard for authenticated users.
  • /tutorials : The main listing of tutorials on our site; the 2nd most popular page on our site.
  • /pricing : This page is important when it comes to converting users to paid members, so we want to set a good impression.
  • /user : Returning users go here to sign in, a common task. This is also the account dashboard for authenticated users.
  • /tutorial/core-migration-modules?p=2578 : Example of a written tutorial with an embedded video.
  • /videos/build-your-first-page-symfony-3?p=2603 : Example of a standalone video tutorial.
  • /series/drupal-8-theming-guide : Example of a series, or guide, landing page.
  • /blog/201607/why-learning-drupal-hard : Example of a blog post with a few comments.
  • /search?query=pantheon : Example of a search query.

In the future we might want to test things like navigational scenarios. For example: an anonymous user navigating to a blog post, leaving a comment, and then navigating to the contact page. For now though, we're after some basic response time comparisons. So this feels like a good list.

Set up

Before running the tests I did a bit of configuration on our site to facilitate testing. First, I created a dummy user on both environments and configured it as if it was a normal monthly personal membership. This way I have an account I can use for testing the authenticated user experience.

I also made sure I could answer these two questions in advance:

  • Are your tests going to be performed against the live site? If so, do you have a way to quickly abort them?
  • Do your tests create dummy content? How are you going to make sure that content gets cleaned up afterwards?

Establish a baseline

I started by gathering some basic information using cURL. We'll use curl to request HTTP headers from the environments, and time to see how long our curl command takes. This will give us some information about the current environment, and a rough idea of what we can expect for a single page request.


Linode:

time /usr/bin/curl -I https://drupalize.me/tutorials
HTTP/1.1 200 OK
Date: Fri, 16 Sep 2016 16:24:40 GMT
Server: Apache
Strict-Transport-Security: max-age=15552000
X-Drupal-Cache: MISS
Expires: Sun, 19 Nov 1978 05:00:00 GMT
X-Content-Type-Options: nosniff
Content-Language: en
X-Generator: Drupal 7 (http://drupal.org)
Link: <https://drupalize.me/tutorials>; rel="canonical",<https://drupalize.me/tutorials>; rel="shortlink"
Last-Modified: Fri, 16 Sep 2016 16:24:40 GMT
Vary: Accept-Encoding
Content-Type: text/html; charset=utf-8
X-Varnish: 2623806 2725564
Age: 23
Via: 1.1 varnish-v4
ETag: W/"1474043080-0-gzip"
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
>> 0.57 real         0.02 user         0.00 sys


Pantheon:

time /usr/bin/curl -I https://drupalize.me/tutorials
HTTP/1.1 200 OK
Date: Fri, 16 Sep 2016 16:25:37 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Set-Cookie: __cfduid=db4a4fc18bf748493351d2d6ae784af911474043137; expires=Sat, 16-Sep-17 16:25:37 GMT; path=/; domain=.drupalize.me; HttpOnly
Cache-Control: public, max-age=900
Content-Language: en
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Fri, 16 Sep 2016 16:25:24 GMT
Link: <https://drupalize.me/tutorials>; rel="canonical",<https://drupalize.me/tutorials>; rel="shortlink"
X-Content-Type-Options: nosniff
X-Drupal-Cache: MISS
X-Frame-Options: SAMEORIGIN
X-Generator: Drupal 7 (http://drupal.org)
X-Pantheon-Endpoint: 089c557c-2188-434f-b435-827816b210ba
X-Pantheon-Styx-Hostname: styx480365c9
X-Styx-Req-Id: styx-1bee92be066d604d0c8eb52711752b8a
X-Styx-Version: StyxGo
X-Varnish: 51631136 64695802
Age: 12
Via: 1.1 varnish-v4
Vary: Accept-Encoding, Cookie, Cookie
Strict-Transport-Security: max-age=15552000
Server: cloudflare-nginx
CF-RAY: 2e35ace6dc5a555e-ORD
>> 0.24 real         0.11 user         0.01 sys

The "real" value from the time command is probably the most interesting thing in this output. It gives you a rough idea of how long it takes for the site to respond to a single request. Which basically amounts to: how long does it take Drupal (and all the layers in front of it) to service my request? Shorter is better. In both of these examples the X-Varnish header contains two IDs (e.g. X-Varnish: 51631136 64695802), which indicates to me that these anonymous requests are actually being serviced by Varnish, and aren't even making it to Drupal. It's also why they're so fast. In this instance we're really testing the speed at which Varnish can return a page.

Cache busting

What about if we force our requests to bypass the Varnish cache by adding a NO_CACHE cookie?


Linode:

time /usr/bin/curl -I -H "Cookie: NO_CACHE=1;" https://drupalize.me/tutorials
HTTP/1.1 200 OK
Date: Fri, 16 Sep 2016 17:15:11 GMT
Server: Apache
Strict-Transport-Security: max-age=15552000
X-Drupal-Cache: HIT
Etag: "1474046080-0"
Content-Language: en
X-Generator: Drupal 7 (http://drupal.org)
Link: <https://drupalize.me/tutorials>; rel="canonical",<https://drupalize.me/tutorials>; rel="shortlink"
Cache-Control: public, max-age=900
Last-Modified: Fri, 16 Sep 2016 17:14:40 GMT
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Content-Type: text/html; charset=utf-8
>> 0.58 real         0.02 user         0.00 sys


Pantheon:

time /usr/bin/curl -I -H "Cookie: NO_CACHE=1;" https://drupalize.me/tutorials
HTTP/1.1 200 OK
Date: Fri, 16 Sep 2016 17:14:23 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Set-Cookie: __cfduid=d2a7c943f0a16e2620050e4ffe8fd29cf1474046063; expires=Sat, 16-Sep-17 17:14:23 GMT; path=/; domain=.drupalize.me; HttpOnly
Cache-Control: public, max-age=900
Content-Language: en
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Fri, 16 Sep 2016 17:04:59 GMT
Link: <https://drupalize.me/tutorials>; rel="canonical",<https://drupalize.me/tutorials>; rel="shortlink"
X-Drupal-Cache: HIT
X-Frame-Options: SAMEORIGIN
X-Generator: Drupal 7 (http://drupal.org)
X-Pantheon-Endpoint: 089c557c-2188-434f-b435-827816b210ba
X-Pantheon-Styx-Hostname: styx480365c9
X-Styx-Req-Id: styx-2efd3f16e56111a76a349f5b3ab8e14b
X-Styx-Version: StyxGo
X-Varnish: 77632744
Age: 0
Via: 1.1 varnish-v4
Vary: Accept-Encoding, Cookie, Cookie
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
Server: cloudflare-nginx
CF-RAY: 2e35f45652fc256d-ORD
>> 0.29 real         0.01 user         0.02 sys

Notice that the X-Varnish: 77632744 header only contains a single ID this time instead of the 2 numbers it showed before. This indicates that Varnish was not able to service the request, and thus passed it along to Drupal. We are still getting cached results from Drupal though: the X-Drupal-Cache: HIT indicates that the content was served from Drupal 7's anonymous page cache.

Authenticated users

So far all the data we've looked at is for anonymous users. That is, people who are browsing our site but are not signed in to their account. As a business that sells membership subscriptions, our goal is to convert anonymous users to subscribers, and subscribers always navigate our site while signed in. So we want to make sure that the experience is a good one for them as well.

Before doing any testing I fully anticipated that the experience would be slower for authenticated users. When you're signed in to our site we customize the experience in a lot of different and unique-per-user ways that make doing things such as caching the HTML of an entire page difficult. The page is unique for each person. So we already know that building the page for an authenticated user is going to be more expensive.

In order to generate authenticated requests using curl we can use the session cookie from a session in our browser. Here's how to find that. Sign in to your site in your favorite browser. Then find the cookie whose name starts with either SESS or SSESS, followed by a random string. Copy the cookie name and value, and then use them as arguments to curl using the --cookie flag like so:

curl --cookie "{cookie.name}={cookie.value}"


Linode:

time /usr/bin/curl -I --cookie "SSESS77386d408b0660b92f2dbc30c5675085=Xawrv1CllbUwC6ksX3qq7Ya2cbwitQv7xF33baJ2644" https://drupalize.me/tutorials
HTTP/1.1 200 OK
Date: Fri, 16 Sep 2016 17:22:33 GMT
Server: Apache
Strict-Transport-Security: max-age=15552000
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
X-Content-Type-Options: nosniff
Content-Language: en
X-Generator: Drupal 7 (http://drupal.org)
Link: <https://drupalize.me/tutorials>; rel="canonical",<https://drupalize.me/tutorials>; rel="shortlink"
Content-Type: text/html; charset=utf-8
>> 0.75 real         0.02 user         0.00 sys


Pantheon:

time /usr/bin/curl -I --cookie "SSESS77386d408b0660b92f2dbc30c5675085=Ifrv29Rrk3RZ2DdUWhZDUhmCYzdFw_J0n0p217GXMTY" https://drupalize.me/tutorials
HTTP/1.1 200 OK
Date: Fri, 16 Sep 2016 17:26:45 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Set-Cookie: __cfduid=d32d0c09a57b7ae447b943ebee6427dc81474046805; expires=Sat, 16-Sep-17 17:26:45 GMT; path=/; domain=.drupalize.me; HttpOnly
Cache-Control: no-cache, must-revalidate
Content-Language: en
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Link: <https://drupalize.me/tutorials>; rel="canonical",<https://drupalize.me/tutorials>; rel="shortlink"
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Generator: Drupal 7 (http://drupal.org)
X-Pantheon-Endpoint: 089c557c-2188-434f-b435-827816b210ba
X-Pantheon-Styx-Hostname: styx480365c9
X-Styx-Req-Id: styx-22105de3110b076241cad5d6b9e44e61
X-Styx-Version: StyxGo
X-Varnish: 83009225
Age: 0
Via: 1.1 varnish-v4
Vary: Accept-Encoding, Cookie
Strict-Transport-Security: max-age=15552000
Server: cloudflare-nginx
CF-RAY: 2e3606762f2f2597-ORD
>> 0.81 real         0.01 user         0.01 sys

Response time in seconds (the "real" value from time):

                      Linode    Pantheon
Anon. + Varnish/CDN   0.57      0.24
Anon. + No Cache      0.58      0.29
Authenticated         0.75      0.81

This shows that for a single request Pantheon significantly outperforms our Linode setup, but that Linode handles authenticated requests slightly better.

Calculating concurrent users

The above tests really only measure the performance of a page without accounting for load. We've just learned how fast a page from our site can theoretically be served, but this doesn't really tell us much about the underlying infrastructure's ability to handle multiple concurrent users.

Individual page time is the thing we can affect the most as developers, but the underlying infrastructure impacts concurrency. When load testing we're not necessarily testing how fast Drupal or any of our custom code is. We are actually testing how well the given infrastructure can handle Drupal and our custom code while serving multiple users at the same time. In order to gather some more data I performed a load test, simulating normal load on our site.

So, what is normal load?

One way to approach this is to determine the average number of concurrent users you expect to be using your site and then run your test with that many users. I did this by looking at our Google Analytics stats for the last month and doing some quick math in order to calculate the average number of people actively using our site at any given time.

Total sessions for last 30 days: 48,585
Average length of each session: 6 minutes 32 seconds (392 seconds)

concurrent_users = (total_session_for_month * average_time_on_site) / (3600 * 24 * 30)
7.35 ≈ (48585 * 392) / (3600 * 24 * 30)
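The same arithmetic as a quick shell sanity check (the numbers come from the Google Analytics stats above; awk is used here only because shell arithmetic is integer-only):

```shell
# Average concurrent users, computed from monthly Google Analytics totals.
total_sessions=48585      # total sessions for the last 30 days
avg_session_secs=392      # average session length: 6 minutes 32 seconds
secs_per_month=$((3600 * 24 * 30))

awk -v s="$total_sessions" -v t="$avg_session_secs" -v m="$secs_per_month" \
    'BEGIN { printf "%.2f concurrent users\n", (s * t) / m }'
```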

Another, and perhaps more common, use for load testing is to try and get a sense of whether or not your application is going to hold up when you get a traffic spike on awards night. A site like Grammy.com for instance sees relatively little traffic 364 days out of the year, but on awards night, that traffic spikes to extremely high levels. In order to ensure that the site remains available during that traffic spike you might try and calculate the number of users you think will use the site in the given period and run that simulation instead. The end result is still X concurrent users.

For good measure, when load testing I would usually add 10% to this number.

Use Siege

I'm not going to cover this here, but another technique for getting an idea of how well a page performs is to use Siege. The difference is that tools like Siege make multiple concurrent requests and average the results, so you get a more accurate picture. Our example above could be susceptible to network latency and other variations that skew the results. So an average might be a bit more accurate.

Read more about using Siege to test the performance of your site in this blog post from earlier this year.
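You can get a rough homegrown version of the same averaging idea with curl alone. This is only a sketch: unlike Siege it issues requests sequentially rather than concurrently, and the URL is just an example.

```shell
url="https://drupalize.me/tutorials"   # example URL; substitute your own
n=10                                   # number of requests to average

for i in $(seq 1 "$n"); do
  # -w '%{time_total}' prints curl's measured total time for each request
  curl -s -o /dev/null -w '%{time_total}\n' "$url"
done | awk '{ sum += $1 } END { printf "avg: %.3fs over %d requests\n", sum / NR, NR }'
```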

Using JMeter

Instead, for this test I'm going to use Apache JMeter to configure a test suite, and then run those tests via BlazeMeter.

Apache JMeter is a Java application that can be used to load test web applications. It is highly configurable, and can be used to simulate virtually any scenario you can imagine. In addition it can be used to simulate any number of concurrent users. It comes with a complete set of plugins for graphing and analyzing test results.

At a certain point you're going to want to simulate more users than your laptop has the resources for. JMeter has the ability to perform distributed testing by setting up a master instance that delegates to any number of slave machines to do the heavy lifting. Thus, you can scale your tests to any size. BlazeMeter is a service that understands how to read a JMX test file, and do this autoscaling for us. Bonus!

So here's what I did.

I started by installing the BlazeMeter Chrome plugin, which effectively allows you to record your active browser session, turn it into a JMX file, and upload it to BlazeMeter. This was a great way to perform some quick/simple tests.

I then downloaded those tests and opened them in JMeter so I could further tweak the scenarios and learn a bit more about how JMeter works. This ended up being great because I could run/debug my scenarios locally, and even do some initial testing for lower levels of concurrent users. I actually had a lot of fun playing around with JMeter once I got the hang of it.

Screenshot of JMeter showing list of summary results

Remember that list of URLs above that I wanted to test? I configured JMeter to read in a list of URLs from a CSV file, and then set up scenarios to test the set of URLs both as an anonymous user, and as an authenticated user. Finally, I generated lots of graphs because I love graphs.

Screenshot of JMeter load testing application

I then ran those scenarios from my localhost a couple of times, both on the Linode instance of the site, and on the Pantheon instance. In both cases, I had 7 concurrent users, and just for a few minutes, mostly as a litmus test. This still produced some useful information. JMeter allowed me to export a summary of response times from the tested URLs to CSV files, which I then imported into Numbers to make even more graphs.
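As an aside, if you'd rather script runs like these than click through the GUI, JMeter can also execute a plan headlessly. The file names below are placeholders, and the awk summary assumes JMeter's default CSV results format (elapsed time in column 2, sample label in column 3).

```shell
# Run a test plan in non-GUI mode, logging each sample to a CSV results file.
jmeter -n -t test-plan.jmx -l results.csv

# Average response time (ms) per sample label from those results.
awk -F, 'NR > 1 { sum[$3] += $2; n[$3]++ }
         END { for (label in sum) printf "%s: %.0f ms avg\n", label, sum[label] / n[label] }' results.csv
```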

This simple comparison allowed me to get a sense of how both Linode and Pantheon perform for each URL and for both anonymous and authenticated users.

This data represents the response time that you could expect as a user when our site is under normal load.

Graph showing summary results of JMeter tests. Pantheon generally outperforming Linode in response time.

Overall, this shows positive gains for almost every scenario on Pantheon. In most cases the gains are in the range of 30 to 50ms. In some cases, like /user for authenticated users (viewing your account dashboard), the gains are actually quite substantial: Linode 731ms vs. Pantheon 343ms.

Check out the JMX files for the above tests (which are also used below). Perhaps they will be useful as a starting point for your own load test suite.


Of course, those numbers are reflective of what you can expect if you're the only person using the server at a given time. What about the more likely scenario where you're sharing resources with a number of other users? Remember how we calculated concurrent users earlier?

To test this, I uploaded the JMX files from my JMeter tests above to BlazeMeter. I then used their free plan, and maxed out all available resources: 50 concurrent users for 20 minutes with a ramp-up time of 900 seconds. So start with 1 user, gradually increase to 50 over the course of 15 minutes, and then continue to stress test with 50 concurrent users for an additional 5 minutes.

I ran this test once for Linode, and once for Pantheon. Because my JMeter tests contain 2 thread groups (one for authenticated users, one for anonymous users) and BlazeMeter runs each group separately the resulting graphs show two distinct scenarios. The first 20 minutes is anonymous traffic, and the second 20 minutes is authenticated.

Here's a comparison of average response times from all scenarios for the two. Linode in blue. Pantheon in yellow.

Comparison of Linode and Pantheon response times over time relative to concurrent users.

The following graphs show response time relative to number of concurrent users. In both cases you can see that adding more anonymous users has very little impact on overall response time. This is to be expected, as this should essentially all be cached by Varnish. On both environments I would anticipate that you could continue to increase the number of users (blue line) with little to no real effect on the response time (purple line).

Where it gets interesting is the second part of each graph where it shows how adding more authenticated traffic impacts the response time. My analysis of these graphs shows that for just a couple of authenticated users Linode performed marginally better than Pantheon. However, as the load increased, response times grew more rapidly on Linode than on Pantheon.


Graph of response time vs. concurrent users on Linode


Graph of response time vs. concurrent users on Pantheon

Summary and conclusions

I don't have a whole lot of experience doing load testing so this was a fun experience for me. I got to learn some new tools, and look at a lot of pretty graphs.

I tested response time, using various methods, for both anonymous and authenticated traffic on the Drupalize.Me site in order to get a sense of how the move to Pantheon for hosting impacted performance. Verdict? It was a good choice. Pantheon performs better in almost every case. Although the difference is generally on the order of 50 milliseconds, even gains that small add up to a noticeably snappier experience for users of our site.

As I said at the start, this is basically the outcome I expected. Though I was prepared for the differences to be a bit bigger, any win is a big win when it comes to performance. In addition, these are wins that we gained by allowing someone else to manage our hosting infrastructure for us, which is an important win in itself. As we've pointed out in previous posts in this series, this change allows us to focus more on producing the best Drupal training material. Pantheon can help us make sure you get it super fast.

Next steps

In addition to the already faster response times, I'm super excited about some of the tools that Pantheon provides us that will help us make this even better in the future. For example, we now have access to application profiling data from New Relic. I've barely started digging in yet, but I've already noticed a couple of SQL queries we could either cache or eliminate to shave off quite a lot of time on the front page and pricing page when loading from a stale cache.

Graph from new relic showing application response time increasing during load testing.

Pantheon also supports PHP7. Combine that with their MultiDev tools and we can pretty easily test our site on PHP7, see if everything works, then easily apply those same changes to our live environment. I anticipate that will bring yet further speed increases.


Want to do some load testing yourself? Here are some resources I found useful when figuring this all out:

Jul 21 2016
Jul 21

I'm super excited to be invited to be a keynote speaker for this year's DrupalCamp WI (July 29/30). If you're in the area you should attend. The camp is free. The schedule is shaping up and includes some great presentations. Spending time with other Drupal developers is by and large the most effective way to learn Drupal. So sign up, and come say hi to Blake and me.

Why is Drupal hard?

The title of my presentation is "Why is Drupal Hard?" It is my belief that if we want to continue to make it easier for people to learn Drupal we first need to understand why it is perceived as difficult in the first place. In my presentation I'm going to talk about what makes Drupal hard to learn, why it's not necessarily accurate to label difficult as "bad", and what we as individuals and as a community can do about it.

As part of the process of preparing for this talk I've been working on forming a framework within which we can discuss the process of learning Drupal. And I've got a couple of related questions that I would love to get other people's opinions on.

But before I can ask the question I need to set the stage. Close your eyes, take a deep breath, and imagine yourself in the shoes of someone setting out to be a "Drupal developer."

Falling off the Drupal learning cliff

Illustration showing the scope of required knowledge across the 4 phases of learning Drupal. Small at phase 1, widens quickly at phase 2, slowly narrows again in phase 3 and through phase 4.

When it comes to learning Drupal, I have a theory that there's an inverse relationship between the scope of knowledge that you need to understand during each phase of the learning process and the density of available resources that can teach it to you. Accepting this, and understanding how to get through the dip, is an important part of learning Drupal. This is a commonly referenced idea when it comes to learning technical things in general, and I'm trying to see how it applies to Drupal.

Phase 1

Graph showing Drupal learning curve, showing exponential growth at phase 1

When you set out to start, there's a plethora of highly-polished resources teaching you things that seem tricky but are totally doable with their hand holding. Drupalize.Me is a classic example: polished tutorials that guide you step-by-step through accomplishing a pre-determined goal. During this stage you might learn how to use fields and views to construct pages. Or how to implement the hook pattern in your modules. You don't have a whole lot of questions yet because you're still formulating an understanding of the basics, and the scope of things you need to know is relatively limited. For now. As you work through hand-holding tutorials, your confidence increases rapidly.

Phase 2

Graph of Drupal learning curve showing exponential decay of confidence relative to time at phase 2, the cliff

Now that you're done with "Hello World!", it's time to try and solve some of your own problems. As you proceed you'll eventually realize that it's a lot harder when the hand-holding ends. It feels like you can't actually do anything on your own just yet. You can find tutorials but they don't answer your exact question. The earlier tutorials will have pointed you down different paths that you want to explore further but the resources are less polished, and harder to find. You don't know what you don't know. Which also means you don't know what to Google for.

It's a much shorter period than the initial phase, and you might not even know you're in it. Your confidence is still bolstered based on your earlier successes, but frustration is mounting as you're unable to complete what you thought would be simple goals. This is the formulation of the cliff, and, like it or not, you're about to jump right off.

Phase 3

Graph of Drupal learning curve showing relatively flat and low confidence over time at phase 3

Eventually you'll get overwhelmed and step off the cliff, smash yourself on the rocks at the bottom, and wander aimlessly. Every new direction seems correct but you're frequently going in circles and you're starving for the resources to help. Seth Godin refers to this as "the dip", and Erik Trautman calls it the "Desert of Despair". Whatever label you give it, you've just fallen off the Drupal learning cliff. For many people this is a huge confidence loss. Although you're still gaining competence, it's hard to feel like you're making progress when you're flailing so much.

In this phase you know how to implement a hook but not which hook is the right one. You know how to use fields but not the implications of the choice of field type. Most of your questions will start with why, or which. Tutorials like those on Drupalize.Me can go a long way toward teaching you how to operate in a pristine lab environment, but only years of experience can teach you how to do it in the real world. As much as we might like to, it's unrealistic to expect that we can create a guide that answers every possible permutation of every question. Instead, you need to learn to find the answers to the questions on your own by piecing together many resources.

The scope of knowledge required to get through this phase is huge. And yet the availability of resources that can help you do it is limited. Because, as mentioned before, you're now into solving your own unique problems and no longer just copying someone else's example.

Phase 4

Graph of Drupal learning curve showing upswing of confidence, linear growth, at phase 4

If you persevere long enough you'll eventually find a path through the darkness. You have enough knowledge to formulate good questions, and the ability to do so increases your ability to get them answered. You gain confidence because you appear to be able to solve real problems. Your task now is to learn best practices, and the tangential things that take you from, "I can build a website", to "I can launch a production ready project." You still need to get through this phase before you'll be confident in your skills as a Drupal developer, but at this point it's mostly just putting in time and getting experience.

During this phase, resources that were previously inaccessible to you are now made readily available. Your ability to understand the content and concepts of technical presentations at conferences, industry blog posts, and even to participate in a conversation with your peers is bolstered by the knowledge you gained while wandering around the desert for a few months. You're once again gaining confidence in your own skills, and your confidence is validated by your ability to continue to attain loftier goals.

And then some morning you'll wake up, and nothing will have changed, but through continually increasing confidence and competence you'll say to yourself, "Self, I'm a Drupal developer. I'm ready for a job."

What resources can help you get through phase 3?

So here are my questions:

  • What resources do you think are currently available, and useful, for aspiring Drupal developers who are stuck in phase 3, wandering around the desert without a map, asking themselves, "Panels or Context?"
  • What resources do you think would help if they existed?
  • If you're on the other side, how did you personally get through this dip?

Responses from Lullabot

I asked this same question internally at Lullabot a few days ago, and here are some of the answers I received (paraphrased). Hopefully this helps jog your own memory of what it was like for you. Or even better, if you're stuck in the desert now, here's some anecdotal evidence that it's all going to be okay. You're going to make it out alive.

For me, it was trial and error. I would choose a solution that could solve the particular problem at hand most efficiently, and then I would overuse it to the extreme. The deeper lessons came months later when changes had to be made and I realized the mistakes I had made... Learning usually came also from working with others more experienced. Getting the confidence to just read others' code and step through it is also a big plus.

building something useful++. That's the absolute best way. Can't believe I forgot to mention it. Preferably something that interests you or fulfills your own need. You still fall off the cliff, but you at least see the fall coming, and your ability to bounce back is better.

At this stage I find that the best resources are people, not books or tutorials. A mentor. Someone that can patiently listen to your whines and frustrations and suggest the proper questions to ask, and who can give you the projects and assignments that help you grow and stretch.

Everything I know about Drupal I know through years of painful trial and error and shameless begging for help in IRC.

I spent a lot of time desperately reading Stack Overflow, or trying to figure a bug out from looking at an issue where the patch was never merged, or reading through a drupal.org forum where somebody tries to solve something but then just ends with "nevermind, solved this" without saying why.

I'd agree that people is what gets you through that. I learned IRC and how to write patches and get help from individuals and that is when the doors opened.

Another approach that really boosted me to the next level, especially early on in my career as a developer, was to work with someone that you can just bounce ideas off of. I'll never forget all the hacking sessions Jerad and I had back in the day. Coding at times can be boring, or the excitement of doing something awesome is self-contained. Being able to share ideas, concepts, and example code with someone that appreciates the effort or awesomeness of something you've done and at the same time challenges you to take it to the next level is priceless.

Printing out the parts of Drupal code I wanted to learn: node, taxonomy and reading comments and code like a gazillion times.

Try and code something useful so I could ask others for help. That's how I wrote the path aliasing module for core.

I often find that as you get into more complicated, undocumented territory, being able to read code is super valuable. You can often get lost in disparate blog posts, tutorials and forums that can lead you all sorts of ways. The code is the ultimate source of truth. Sometimes it takes firing up a debugger, stepping through the parts that matter to see how things are connected and why.

Oct 20 2015

At Drupalize.Me Drupal is in our DNA, and as you might expect we're all heavily involved in the Drupal community, continuing to learn more every day about Drupal and how it works. But sometimes it's nice to remember that there is a lot going on in the PHP world outside of Drupal.

Did you know that PHP is the language of choice for over 80% of the websites that use a scripting language? That's a whole lot of PHP.

And that's why this week we'll be attending ZendCon 2015 in Las Vegas, getting off the Drupal island and soaking in some of the amazing tools and knowledge that the PHP community has to offer. With Drupal 8 just around the corner—and the adoption of non-Drupal components into the Drupal ecosystem—being an expert Drupal developer these days means being an informed PHP developer.

If you're at ZendCon, Joe will be sharing some of our Drupal 8 knowledge in his presentation An Overview of the Drupal 8 Plugin System (Wednesday, 9:45AM PT - Room: Studio 1B) where you can learn more about what is going to become important knowledge for any Drupal developer. Or, if you're looking for something less Drupal-specific check out The Dark Art of Debugging (Wednesday, 5:15PM PT - Room: Artist #2).

Take your skills beyond Drupal with Drupalize.Me

As Drupal has started to make use of tools from the larger PHP world, we've been doing our best to stay on top of things and provide you with training on not just Drupal, but the surrounding technology as well. Learn about best practices for programming in PHP with our Object-Oriented PHP series, learn about Symfony in our Getting Started with Symfony series, and don't miss our free tutorial on The Wonderful World of Composer to learn about the tool that's helping to tie all the various PHP components together.

For our members, if you're not familiar with how Drupal 8 has utilized modern PHP and object-oriented practices, check out these two videos in our What's New in Drupal 8 series of presentations:

If you're already a Drupalize.Me member, or are thinking about becoming one, we would love to hear what you think we should be teaching outside of the Drupal world. Our mission is to empower people to build and maintain kick-ass Drupal-based websites—and these days that often means learning about more than just Drupal. Let us know in the comments or get in touch!

Sep 15 2015

Drupalize.Me Tutorial

You know all those JavaScript tracking codes that get added to the footer of every page on your site? Google Analytics is the classic example, but there are tons of others out there. They are slowing your pages down. In most cases it's probably not that big of a deal. If used correctly, the performance hit seen by the end-user should be pretty minimal. We're usually talking about milliseconds. But when you're using a tool like CasperJS to perform testing on your site, it's not uncommon for a single test run to visit tens or even hundreds of pages—and have to load that JS tracking code on every one. Those milliseconds start to add up, and can increase the amount of time it takes for your test suite to run.

Generally, when your browser encounters one of those JavaScript tracking codes, the embedded JavaScript either directly references, or ultimately ends up including, a .js file hosted on the provider's server. This requires that your browser make an extra HTTP request to retrieve that file: more time spent downloading data that, when you're running tests, you really don't need. In fact, it's unlikely that you want your test suite mucking up your analytics data or your customer insights information. So instead, let's teach CasperJS to just skip those external files altogether when running tests.

Find the scripts you don't need

The first thing you'll need to do is figure out which script(s) your test suite is loading but doesn't need. The easiest way to do this is to view your site as a regular user would (since CasperJS is just simulating regular users anyway) and use your browser's web inspector to view the network requests it makes when you view a page.

  • In Chrome, open the Developer Tools (Cmd-Option-I) and switch to the "Network" tab
  • Navigate to the page you want to test, for example: https://drupalize.me/videos
  • Browse through the list of resources loaded, looking for those of type "script" as a good starting point; hover over an item to see its full path
  • If a resource comes from an external server and you don't need that script, make note of its URI
  • Repeat until you've found the URIs for all the resources you want to skip

Network Graph

In our example we'll skip https://www.googleadservices.com/pagead/conversion.js, and https://googleads.g.doubleclick.net/pagead/viewthroughconversion/1004527117/?random=1438379014341&cv=7....

Stop the requests before they happen

CasperJS is a wrapper around the headless WebKit browser PhantomJS. When you instruct CasperJS to navigate to a page, it does so in the PhantomJS browser, which loads the page you asked for and then any linked resources, just like Chrome did above. We're going to intercept those resource requests before they happen, compare the URI of each requested resource to our blacklist, and, if it matches, simply instruct PhantomJS to skip that resource.

Use the casper.options.onResourceRequested configuration option to bind a callback function to the PhantomJS onResourceRequested action. Whatever function you bind to this action will be triggered once for every resource requested by PhantomJS, just before the actual HTTP request is made. Note that when proxying through CasperJS like this in order to bind to a PhantomJS callback, CasperJS will also pass the current Casper instance as the first argument to your function; all remaining arguments are those passed along from PhantomJS.


casper.options.onResourceRequested = function(casper, requestData, request) ...

For our purposes, the two arguments passed in from PhantomJS are the useful ones. The requestData argument is an object containing metadata about the request being made, including the URL (requestData.url). The second argument is a network request object with various methods that allow us to control the request, including request.abort(). So now all you have to do is compare the URL in the requestData object to your blacklisted URLs from above, and if you get a match, call request.abort(), which will instruct PhantomJS to skip that request.

In my experience I've found it useful to use substring matches for blacklisted URLs instead of trying to match the whole thing, because the URLs often contain query parameters that are unique for every request. The code below shows how we're handling this.

/**
 * Add a listener for the phantomjs resource request.
 * This allows us to abort requests for external resources that we don't need
 * like Google adwords tracking.
 */
casper.options.onResourceRequested = function(casper, requestData, request) {
  // If any of these strings are found in the requested resource's URL, skip
  // this request. These are not required for running tests.
  var skip = [
    'googleadservices.com',
    'googleads.g.doubleclick.net'
  ];

  skip.forEach(function(needle) {
    if (requestData.url.indexOf(needle) > 0) {
      request.abort();
    }
  });
};
Load your code when CasperJS runs

The last thing we need to do is make sure this code is included somewhere CasperJS will encounter it while running our test suite. You've got two options here. Either include the code at the top of your homepage-test.js file, and it'll be loaded when you run that test like so:

casperjs test homepage-test.js

Or, if you've got a lot of test files, add this to a file named casperjs-options.js and include it when running your other tests.

casperjs test --include="path/to/casperjs-options.js" homepage-test.js

An aside about images

Images are another common culprit here. In most cases your CasperJS tests aren't going to need to load the images for every page. Luckily, PhantomJS makes it really easy to skip the HTTP request for image resources with just a simple setting.

casper.options.pageSettings.loadImages = false;


When you're running a test suite that navigates through your site as fast as it can load each page, excluding resources that don't need to be loaded can speed things up. Using an onResourceRequested callback, we can detect requests for resources we don't need and abort those requests before they happen. The improvements might not be huge, but every little bit counts.

For more about using CasperJS check out this post on Using a Remote Debugger with CasperJS.

Interested in other topics related to automated testing, like SimpleTest? I've put together a series of video tutorials on Automated Testing in Drupal 7 with SimpleTest.

Apr 14 2015

Ace up sleeve

Drupal 8 is right around the corner; it's time to start brushing off your old textbooks, taking notes, asking questions, and preparing for all the awesomeness coming your way.

For most of us, Drupal 8 represents a departure from what we've come to know about how to create with Drupal. In short, we've got a learning curve we're going to have to overcome before we can be proficient with Drupal 8. But I'm here to tell you: it’s okay, we're in this together, and, given the proper learning environment and a little bit of guidance, you'll be Drupal 8 ready in no time.

In the glorious words of Douglas Adams, “Don't panic!”

While most of us have a tendency to want to jump right into the documentation and start poring over code samples, this is a good opportunity to take a step back and make sure we're ready to learn before we dive in. So let’s take a minute to think about education theory and the environment we put ourselves in when preparing to learn a new technology. How do we remove blockers from the learning process and set ourselves up for success?

Ask yourself these three questions:

  • What is my motivation for learning this?
  • Where can I practice what I'm learning?
  • How will I know if I have learned the right thing?

How motivated are you?

Are you learning for fun, or for work?

Because you want to, or because you have to?

Our motivation – and our understanding of it – allows us to decide whether it is worth the investment in time and energy necessary to learn something new today – right now – which we may not use until tomorrow.

One of the best ways to assess whether or not you've learned something is to teach it to someone else. Lucky for you, you're not the only one embarking on the quest to learn Drupal 8; there are plenty of opportunities to share your new knowledge with others. Local user groups, co-workers – even friends on IRC – all represent great teaching opportunities. Moreover, these interactions often turn into discussions, and discussions are one of the best ways to get beyond the how and into the why.

In addition to practicing by teaching, you can also practice your new Drupal 8 skills by assisting with Drupal 8 itself, or helping to improve the documentation and making it easier for the next person to learn.

My third question is: How will you know if you’ve learned the right things? This will, of course, depend on your motivation. If your motivation was to become a better-rounded programmer, and you learn OOP where you only knew functional programming before, I'd say you've accomplished your goals.

I encourage you to stop reading now and take a moment to think about these three questions and see if you can answer them for yourself. It may seem trivial but I believe that knowing why is just as important as knowing how.

Ready to learn?

Great. Here are some excellent resources to help get you started, and some basic terminology so at least you'll know what you don't know.

The following is by no means a comprehensive list of all the things you'll ultimately need or want to know, but it does provide some general background knowledge that will be applicable to everyone writing code for Drupal 8. A lot of it will also be assumed knowledge by many people already working with Drupal 8, so having at least a cursory understanding of these topics will make it easier to follow along with documentation and converse with others on the subject.

In order to take fuller advantage of the PHP language, and what are considered to be best practices for modern PHP-based web applications, Drupal 8 has chosen to require a minimum of PHP version 5.4. Doing so opens the door to better leverage modern object-oriented programming patterns, which is good for you because it allows Drupal to be more flexible and less tightly-coupled than ever before. It also helps bring Drupal 8 more in line with the syntax, patterns, and methodologies that many other PHP-based frameworks are using. One of the big hopes here is that this will allow Drupal to attract more PHP developers and at the same time allow existing Drupal developers to become better overall PHP programmers.

For those of us making the transition from Drupal 7, this means that the syntax of our modules is going to look quite a bit different. The important thing to remember is that everything you could do in Drupal 7 you can still do in Drupal 8 once you learn the new patterns. One good resource for adapting your knowledge of Drupal 7 to the new way of doing something in Drupal 8 is the change records available on Drupal.org. I've also found that doing a side-by-side comparison between how something is implemented in Drupal 7 core vs. Drupal 8 core can go a long way towards pointing me in the right direction: even if I don't completely understand how it's working, at least I have working code and a better knowledge of what I don't know.

As a developer, make sure you're familiar with basic OOP syntax and patterns, which is now assumed knowledge for anyone doing Drupal 8 development. Also pay particular attention to namespaces (which allow for better compartmentalization of code, and eliminate the namespace conflict problems inherent in Drupal 7's architecture) as well as the concept of late static binding (which allows for the constructor injection used throughout Drupal), both of which you'll encounter early and often.
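As a quick sketch of the namespace syntax (the Drupal\my_module\Render namespace and Table class here are invented for illustration, not taken from Drupal core):

```php
<?php
// Hypothetical module code: the class lives in a namespace derived from
// the module name, so its short name can never collide with another
// module's Table class the way two similarly named functions could in
// Drupal 7's single global namespace.
namespace Drupal\my_module\Render;

class Table {
  public function describe() {
    return 'Defined in Drupal\my_module\Render';
  }
}

// Within this namespace the short name resolves automatically; from
// another file you would first import it with
// `use Drupal\my_module\Render\Table;`.
$table = new Table();
print $table->describe() . "\n";
```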

Speaking of OOP design patterns in Drupal, dependency injection (DI) is a pattern that you'll absolutely want to become familiar with. DI is being applied throughout Drupal in order to ensure a less tightly-coupled system by providing dependencies as arguments instead of hard coding them.

"Wait, what was that?" you say.

Although the pattern sounds complex, it is actually straightforward and, in my experience, is often just giving a fancy name to something you're already doing. Be prepared to hear the term used frequently, and understand the basics of setter injection and constructor injection – the two forms of dependency injection used most frequently in Drupal.
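To make that concrete, here is a tiny, framework-free sketch of constructor injection; the Mailer and Transport names are hypothetical, not Drupal APIs:

```php
<?php
// The dependency is described by an interface...
interface Transport {
  public function send($message);
}

// ...with one concrete implementation for illustration.
class SmtpTransport implements Transport {
  public function send($message) {
    return 'smtp:' . $message;
  }
}

class Mailer {
  protected $transport;

  // Constructor injection: the dependency arrives as an argument
  // instead of being hard-coded inside the class.
  public function __construct(Transport $transport) {
    $this->transport = $transport;
  }

  public function notify($message) {
    return $this->transport->send($message);
  }
}

$mailer = new Mailer(new SmtpTransport());
print $mailer->notify('hello') . "\n"; // smtp:hello
```

Because the dependency arrives as an argument, a test can hand Mailer a fake Transport without touching any real mail system.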

In addition to using the DI pattern, Drupal 8 also provides a Dependency Injection Container; in essence, a global object where you can simply request the service you need, and it'll configure and provide the appropriate object already initialized and usable.

Need to query the database? Get a database connection object from the Dependency Injection Container and it'll already be configured to connect to the appropriate database server using the correct username, password, and other data. As a module developer, that means you can just start making queries, with no need to worry about what database is being used in the background or the specifics of connecting to it with PHP.

PSR-0 and PSR-4 (PHP Standards Recommendations 0 and 4) are patterns for standardizing the name, location, and content of files, which makes it possible for Drupal to locate and load files on an as-needed basis. Neither standard is specific to Drupal – the standards are put forth by the PHP Framework Interoperability Group with the hope of allowing greater compatibility between PHP code libraries. By adopting these standards, Drupal 8 is able to include third-party libraries like Guzzle and components from other frameworks like Symfony 2 in a clean way. At the time this article was written, both PSR-0 and 4 are in use in Drupal 8, though that is subject to change. Either way the patterns are very similar and learning both won't hurt.

Because files are now located in specific known places, and their contents are also known, Drupal can perform autoloading without requiring a Drupal 7-style registry. As a module developer, this allows you to instantiate a new object of a specific type without having to ensure you've loaded the file containing the class definition first. It also reduces the amount of code, and thus memory, that PHP needs for each request, and increases the findability of a particular file or class definition in the codebase. Once you understand the pattern, you only have to repeat it and Drupal will take care of the rest.
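The mapping itself is mechanical. Here is an illustrative helper — not the real autoloader — that turns a fully qualified class name into a file path; the prefix-to-directory map is an assumption based on Drupal 8's module layout, where Drupal\my_module\ maps to the module's src/ directory:

```php
<?php
// PSR-4 style resolution: find the registered namespace prefix that the
// class name starts with, then map the remainder onto the filesystem.
function psr4_path($class, array $prefixes) {
  foreach ($prefixes as $prefix => $baseDir) {
    if (strpos($class, $prefix) === 0) {
      $relative = substr($class, strlen($prefix));
      return $baseDir . str_replace('\\', '/', $relative) . '.php';
    }
  }
  return NULL;
}

$prefixes = array('Drupal\\my_module\\' => 'modules/my_module/src/');
print psr4_path('Drupal\\my_module\\Controller\\HelloController', $prefixes);
// modules/my_module/src/Controller/HelloController.php
```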

Remember hook_block_info() from Drupal 7?

Implementations of this hook return an array that provides metadata which specifies things like the title of the block and an optional cache granularity setting. For performance reasons, it's best to have this kind of metadata available without having to access a database. Annotations provide a mechanism for encoding metadata about a class within the comments for that class, keeping the configuration and the actual code close to one another for discoverability purposes, and allowing Drupal to access some general information about the block this class represents, without the hassle of instantiating an object first.

Although you can accomplish quite a bit without ever using annotations, sooner or later you're going to encounter them, probably when creating custom blocks or working with the Field API.
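Here is a self-contained sketch of the underlying trick. Drupal's real plugin discovery uses a proper annotation parser rather than a regex, and the @Block keys below only mirror its block annotation, but the principle — reading class metadata without instantiating the class — is the same:

```php
<?php

/**
 * @Block(
 *   id = "hello_block",
 *   admin_label = "Hello block"
 * )
 */
class HelloBlock {
  public function __construct() {
    // Imagine costly setup here that we would rather avoid at
    // discovery time.
  }
}

// Reflection gives us the doc comment, and therefore the metadata,
// without ever calling the constructor above.
$reflection = new ReflectionClass('HelloBlock');
$docComment = $reflection->getDocComment();

// Pull the id out of the annotation with a simple (illustrative) regex.
preg_match('/id = "([^"]+)"/', $docComment, $matches);
print $matches[1] . "\n"; // hello_block
```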

While we're on the topic of metadata, it's also worth mentioning YAML. A recursive acronym for YAML Ain't Markup Language, YAML is a human readable syntax for representing configuration in static files. Drupal 8 makes use of YAML in numerous places, including as the default storage mechanism for configuration data provided by the configuration management system, as a way for theme and module developers to describe basic properties of their project (like name and description, replacing Drupal 7 .info files), and as a way, in the routing system, for module developers to define the various ways in which incoming requests to Drupal are mapped to the PHP code they should execute.

The YAML syntax itself is straightforward; learning it will be of huge value and little trouble. Once you've mastered the syntax though, understanding the various configuration options that it can be used to define will require understanding the respective Drupal subsystem that makes use of the YAML file in question.
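For instance, a module's info file — here a hypothetical my_module.info.yml, using the key names from Drupal 8's info-file format — replaces the Drupal 7 .info file with YAML:

```yaml
# my_module.info.yml: the Drupal 8 replacement for my_module.info.
name: My Module
type: module
description: 'A hypothetical module used to illustrate YAML syntax.'
core: 8.x
```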

One of the biggest misconceptions in the Drupal community right now is that you'll be required to learn Symfony 2 in order to be a Drupal 8 developer. I don’t think this is true.

If you're going to start hacking on core, then yes, you'll want to brush up on the various Symfony 2 components that have been included. But as a day-to-day module developer, you're not likely to encounter much that will require you to know Symfony 2.

Bottom line: many of the patterns and concepts that you need to know for Drupal are also used by the Symfony 2 project, so learning Symfony 2 will only make you a better Drupal developer – and a better PHP developer.

I could go on listing topics, but that should be enough documentation to get you going, along with a final bit to keep in mind as you wend your way through all these resources.

About a year ago I was at a company retreat and I posed a question to my coworkers: “How can I help you learn Drupal 8?”

What we realized was that we were all capable of reading the documentation, and content to look at code examples in order to understand how any particular system worked, but that the more important and difficult question was “why?”

Why does CMI use YAML for its default data storage?

Why should I extend FormBase instead of just creating my own class for building a form?

When you can figure out why specific decisions were made, you'll also understand the potential limitations of those decisions, and you'll be capable of pushing Drupal to its limits. So while you're poring over documentation and sample code remember to take a moment to ask, "Why this way?"


PHP 5.4 - Modern PHP

Importance: Critical
Difficulty: Easy

PHP - Namespaces

Importance: Critical
Difficulty: Easy

Dependency Injection

Importance: Critical
Difficulty: Moderate

General information on DI

DI in Drupal

PSR-0 & PSR-4

Importance: Critical
Difficulty: Easy

Annotations

Importance: Moderate
Difficulty: Moderate

YAML

Importance: Critical
Difficulty: Easy

Symfony 2

Importance: Good to know
Difficulty: Hard

The basic Symfony 2 documentation is extremely good and can be worked through over the course of a couple days. Like Drupal though, mastering Symfony 2 will take much longer, and learning all the moving parts will require actually making use of them in real-world scenarios.

Image: ©iStockphoto.com/lisegagne
