Apr 13 2018

Drupal SA-2018-002 has been weaponized. Within 12 hours of security researchers publishing a proof of concept, we began seeing automated attempts to systematically exploit sites across the internet.

Attempted coin miner installations over the past 24 hours


Pantheon has had platform-wide mitigations in place since an hour after the original Drupal security announcement over two weeks ago. In addition to protecting our customers by blocking attempted exploits, our filter included payload logging, which means we can analyze the attempted attacks to determine their objective.

Compared to the last time there was an issue of this severity, there’s a new class of attack: coin mining. In 2014, Bitcoin and its cousins were still a geek curiosity. Today, they’re a lucrative (if highly volatile) business with mass appeal. It makes sense: why bother with ransomware when you can use the site’s CPU to mine coin directly?

Payload Analysis

We can observe a few different classes of exploit in the first 24 hours. First, there is a wide array of simple probing attacks, which try to echo out a certain string, or cause the server to “phone home” by calling wget or curl against a remote URL. The goal of these probes is likely to build a list of vulnerable sites to come back for later.

Another level of attack is an attempt to install a beachhead file. These are usually small PHP scripts which will sit around waiting for subsequent requests; they’ll run eval() or an equivalent on whatever data blob the attacker decides to send down the line. Files we saw in our initial analysis included wer.php, l.php, config.php, and rxr.php, but the specific filenames will change all the time. Any unexpected PHP file should be considered suspect.
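One cheap way to spot such beachheads is to scan for PHP files in places where none should exist. Here is a minimal sketch in shell; the /tmp/demo-files path and file names are illustrative stand-ins, with rxr.php borrowed from the list above:

```shell
# Sketch: look for unexpected PHP files under an uploads directory,
# where a CMS should only ever write static assets.
mkdir -p /tmp/demo-files/css
touch /tmp/demo-files/css/style.css /tmp/demo-files/rxr.php
find /tmp/demo-files -name '*.php'
```

On a real Drupal site you would point find at the files directory (typically sites/default/files); any hit there deserves investigation.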

While these two attack types are familiar, we’re now seeing a more sophisticated exploit which attempts to connect the server for a Drupal website to a botnet, install coin mining software, and turn the CPU of the server into a revenue generator for someone else. This exploit has been independently confirmed by the SANS Technology Institute's Internet Storm Center.

This high-level attack is fairly well obfuscated, includes resilience features such as creating a crontab to reinstall itself if it’s found and removed, and contains a script that will attempt to kill off any competing miners. That’s pretty impressive stuff.

We also saw slightly less sophisticated attempts to download a known trojan miner onto the server that powers Drupal, and I expect we’ll see more attempts around coin mining in the coming days. Given the immediate monetary benefit to a black-hat actor who can turn a vulnerable website into an active mining node, it seems likely this could be the new go-to objective when targeting website vulnerabilities.

Keeping Our Customers Safe

The only way to know you are completely safe is to update your Drupal website so it is no longer vulnerable to this kind of attack. If you haven’t done this yet, please do so now. We at Pantheon try to do everything we can to protect our customers:

  • We make keeping your CMS up to date manageable with one-click core updates, and allow central teams to automate updates across hundreds or even thousands of sites.

  • We deploy Drupal (and WordPress) in an immutable configuration, meaning the CMS cannot write files to any area outside the uploads directory (which is locked down). This prevents a large class of attacks from finding a foothold on our platform.

  • We are also able to deploy platform-wide mitigations. In this case we were able to sanitize attempted exploits within an hour of the initial security release, two weeks before widespread exploit attempts began. This benefits our customers, but also allows us to capture attempted exploits and provide insight into what we see to the wider community.

Internet security is a team effort. We’re happy to support the work of the Drupal Security Team, and are actively sharing the details of our findings with them and other platform providers. Michael Schmid has also posted what he's been seeing at Amazee.

I would like to specifically thank the Drupal Security Team and Narayan Newton of Tag1 Consulting for helping to diagnose some of the payloads today in the DrupalCon Nashville Sprint rooms.

Topics Drupal Hosting, Drupal Planet, Security, Drupal
Mar 28 2018

UPDATE: 1:27pm PT After analyzing the vulnerability and the most obvious remote exploitation path, we have deployed a platform-wide mitigation and are logging potential exploits. At this time we do not see any systematic attacks. Patching your site is the only way to be sure you are safe, so please do that as soon as possible.

— — —

The Drupal Security Team has published Drupal SA-2018-002 to address a critical vulnerability. This is the first update of this magnitude since SA-2014-005 (aka “Drupageddon”) back in 2014. In that case, the time from release to automated exploitation was around seven hours.

As soon as 8.5.1 (and related releases) came out, we immediately pushed the update to all site dashboards, where it can be deployed with a few clicks or via scripted mass-updates. Please update your Drupal sites now before continuing to read this post.

We’ve been planning for this since the Security Team issued a PSA last week, and have engineers standing by if additional response is needed.

As with SA-2014-005, we will update our status page as well as this blog post with any additional information, and will follow up with any interesting findings we can observe at a platform level.

However, I cannot emphasize enough that the only way to be sure your sites are safe is to deploy the core update. Please do not delay in rolling that out today.

Topics Drupal, Drupal Planet, Security
Dec 05 2017

Symfony 4.0 stable has been released, and, while packed with many new and powerful features, still maintains many of the APIs provided in earlier versions. Just before the stable release, many participants at #SymfonyConHackday2017 submitted pull requests in projects all over the web, encouraging them to adopt Symfony 4 support. In many cases, these pull requests consisted of nothing more than the addition of a “|^4” in the version constraints of all of the Symfony dependencies listed in the composer.json file of the selected project, with no code changes needed. The fact that this works is quite a testament to the commitment to backwards compatibility shown by the Symfony team. However, adding Symfony 4 as an allowed version also has testing implications. Ideally, the project would run its tests on Symfony 4 while continuing to test the other versions of Symfony it supports.
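To make the “|^4” change concrete, here is what such a pull request might do to the require section of a hypothetical project’s composer.json (the packages shown are just examples):

```json
"require": {
    "symfony/console": "^2.8|^3|^4",
    "symfony/finder": "^2.5|^3|^4"
}
```

Before the change, the same constraints would have read "^2.8|^3" and "^2.5|^3"; the only edit is appending the new allowed major version.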

In this blog post, we’ll look at testing techniques for projects that need to test two different major versions of a dependency; after that, we will examine how to do the same thing when you need to test three major versions. Finally, we’ll present some generalized scripts that make dependency testing easier for two, three or more different combinations of dependent versions.

Sometimes, the best thing to do is to bump up to the next major version number, adopt Symfony 4, and leave support for earlier versions, if needed, on the older branches. That way, the tests on each branch will cover the Symfony version applicable for that branch. This is a good solution for PHP projects that are building applications. For projects that are libraries in use by other projects, though, it is better to provide a level of continuity between different versions of your dependencies on a single branch. This sort of flexibility can greatly relieve the frustration caused by the “dependency hell” that can arise when different projects cannot be used together because they have strict requirements on their own dependencies that clash. In the case of libraries that use Symfony components, the best thing to do is to provide Symfony 4 support in the current active branch, and only create a new branch that supports Symfony 4 and later once new features from the new release are actually used.

Current / Lowest / Highest Testing

A technique called lowest / current / highest testing is commonly used for projects that support two different major versions of their dependencies. In this testing scheme, you would ensure that Symfony 4 components appear in your composer.lock file. If your Symfony version constraint in your composer.json file is "^3.4|^4", then you could run composer update --prefer-lowest on Travis to bring in the Symfony 3 components.

The portion of your .travis.yml file to support this might look something like the following:

matrix:
  include:
    - php: 7.2
      env: 'HIGHEST_LOWEST="update"'
    - php: 7.1
    - php: 7.0.11
      env: 'HIGHEST_LOWEST="update --prefer-lowest"'

install:
  - 'composer -n ${HIGHEST_LOWEST-install} --prefer-dist'

In this example, we use the expression ${HIGHEST_LOWEST-install} to determine whether we are running a current, lowest or highest test; this simplifies our .travis.yml file by removing a few lines of conditionals. In the bash shell, the expression ${VARIABLE-default} will evaluate to the contents of $VARIABLE if it has been set, and will otherwise return the literal value "default". Therefore, if the HIGHEST_LOWEST environment variable is not set, the composer command shown above will run composer -n install --prefer-dist. This will install the dependencies recorded in our lock file. To run the lowest test, we simply define HIGHEST_LOWEST to be update --prefer-lowest, which will select the lowest version allowed in our composer.json file.
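The expansion behavior is easy to check directly in any shell; this little sketch just echoes the command that would run in each case rather than invoking Composer:

```shell
# With HIGHEST_LOWEST unset, the fallback word "install" is used.
unset HIGHEST_LOWEST
echo "composer -n ${HIGHEST_LOWEST-install} --prefer-dist"

# With HIGHEST_LOWEST set, its value replaces the fallback.
HIGHEST_LOWEST="update --prefer-lowest"
echo "composer -n ${HIGHEST_LOWEST-install} --prefer-dist"
```

The first echo prints `composer -n install --prefer-dist`; the second prints `composer -n update --prefer-lowest --prefer-dist`.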

Highest/lowest testing with just two sets of dependencies is easy to set up and takes very little overhead; there is really no reason why you should not do it. Even projects that support only a single major version of each of their dependencies benefit from highest/lowest testing, as these tests will catch problems that might otherwise accidentally creep into the code base. For example, if one of the project’s dependencies inadvertently introduces a bug that breaks backwards compatibility mid-release, or if an API not available in the lowest-advertised version of a dependency is used, that fact should be flagged by a failing test.

Supporting only two major versions of a dependency is sufficient in many instances. Symfony 2 is no longer supported, so maintaining tests for it is not strictly necessary. In some cases, though, you may wish to continue supporting older packages. If a project has traditionally supported both Symfony 2 and Symfony 3, then support for Symfony 2 should probably be maintained until the next major version of the project. I have seen projects that drop support for obsolete versions of PHP or dependencies without creating a major version increase, but doing this can have a cascading effect on other projects, and should therefore be avoided. There are also some niche use cases for supporting older dependency versions. For example, Drush 8 continues to support obsolete versions of Drupal 8, which still depend on Symfony 2, to prevent problems for people who need to update an old website.

Extending to Test Three Versions

If you are in a position to support three major versions of a dependency in a project all in the same branch, then highest/lowest testing is still possible, but it gets a little more complicated. In the case of Symfony, what we will do is ensure that our lock file contains Symfony 3 components, and use the highest test to cover Symfony 4, and the lowest test to cover Symfony 2. Because Symfony 4 requires a minimum PHP version of 7.1, we can keep our Composer dependencies constrained to Symfony 3 by setting the PHP platform version to 7.0 or lower. We’ll use PHP 5.6, to keep other dependencies at a reasonable baseline.

"require": {
    "php": ">=5.6.0",
    "symfony/console": "^2.8|^3|^4",
    "symfony/finder": "^2.5|^3|^4"
},
"config": {
    "platform": {
        "php": "5.6"
    }
}
There are a couple of implications to doing this that will impact our highest/lowest testing, though. For one thing, the platform PHP version constraint that we added will interfere with the composer update command’s ability to update our dependencies all the way to Symfony 4. We can remove it prior to updating via composer config --unset platform.php. This alone is not enough, though.

The .travis.yml tests then look like the following example:

matrix:
  include:
    - php: 7.2
      env: 'HIGHEST_LOWEST="update"'
    - php: 7.1
    - php: 7.0.11
    - php: 5.6
      env: 'HIGHEST_LOWEST="update --prefer-lowest"'

install:
  - |
    if [ -n "$HIGHEST_LOWEST" ] ; then
      composer config --unset platform.php
    fi
  - 'composer -n ${HIGHEST_LOWEST-install} --prefer-dist'

This isn’t a terrible solution, but it does add a little complexity to the test scripts, and poses some additional questions.

  • The test dependencies that are installed are being selected by side effects of the constraints of the project dependencies themselves. If the dependencies of our dependencies change, will our tests still be covering all of the dependencies we expect them to?
  • What if you want to test the highest configuration locally? This involves modifying the local copies of your composer.json and composer.lock files, which introduces the risk that these might accidentally be committed.
  • What if you need to make other modifications to the composer.json in some test scenarios, for example setting the minimum-stability to dev to test the latest HEAD of master for some dependencies?
  • What if you want to do current/highest testing on both Symfony 3.4 and Symfony 4 at the same time?

If you want to do current/highest testing for more than one set of dependency versions, then there is no alternative but to commit multiple composer.lock files. If each composer.lock has an associated composer.json file, that also solves the problem of managing different configuration settings for different test scenarios. The issue of testing these different scenarios is then simplified to the matter of selecting the specific lock file for the test. There are two ways to do this in Composer:

  • Use the COMPOSER environment variable to specify the composer.json file to use. The composer lock file will be named similarly.
  • Use the --working-dir option to stipulate the directory where the composer.json and composer.lock file are located.
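For the first option, “named similarly” means Composer swaps the .json suffix for .lock; a quick shell sketch of the naming rule (the scenario filename is just an example):

```shell
# If COMPOSER points at an alternate json file, the lock file
# takes the same base name with a .lock suffix.
COMPOSER=scenarios/symfony4.json
echo "${COMPOSER%.json}.lock"
```

This prints `scenarios/symfony4.lock`, so each alternate composer.json gets its own independent lock file.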

Using either of these techniques, it would be easy to keep multiple composer.json files, and install the right one with a single-line install: step in .travis.yml using environment variables from the test matrix. However, we really do not want to have to modify multiple composer.json files every time we make a change to our project’s main composer.json file. Also, having to remember to run composer update on multiple sets of Composer dependencies is an added step that we could really do without. Fortunately, these steps can be easily automated using a Composer post-update-cmd script handler. It wouldn’t take too many lines of code to do this manually in a project’s composer.json and .travis.yml files, but we will make things even more streamlined by using the composer-test-scenarios project, as explained in the next section.

Using Composer Test Scenarios

You can add the composer-test-scenarios project to your composer.json file via:

composer require --dev greg-1-anderson/composer-test-scenarios:^1

Copy the scripts section from the example composer.json file from composer-test-scenarios. It contains some recommended steps to use for testing your project with PHPUnit. Customize these steps as desired, and then modify the post-update-cmd handler to define the test scenarios you would like to test. Here are the example test scenarios defined in the example file:

"post-update-cmd": [
    "create-scenario symfony2 'symfony/console:^2.8' --platform-php '5.4' --no-lockfile",
    "create-scenario symfony3 'symfony/console:^3.0' --platform-php '5.6'",
    "create-scenario symfony4 'symfony/console:^4.0'"
]

These commands create “test scenarios” named symfony2, symfony3 and symfony4, respectively. As you can see, additional composer requirements are used to control the dependencies that will be selected for each scenario, which is an improvement over what we were doing before. There are also additional options for setting configuration values such as the platform PHP version. The --no-lockfile option may be used for scenarios that only appear in lowest/highest tests, as only “current” tests need a composer.lock file. Once you have defined your scenarios, you no longer need to worry about maintaining the derived composer.json and composer.lock files, as they will be created for you on every run of composer update. The generated files are written into a directory called scenarios; commit the contents of this directory along with your composer.lock file.

The install step in our .travis.yml can now be done in a single line again, with the HIGHEST_LOWEST environment variable defined the same way it was in the first example:

  - 'composer scenario "${SCENARIO}" "${HIGHEST_LOWEST-install}"'

The composer scenario command will run the scenario step in the composer.json file that you copied in the previous section. This command will run composer install or composer update on the appropriate composer.json file generated by the post-update-cmd, or, if SCENARIO is not defined, the project’s primary composer.json file will be installed.

In conclusion, the composer-test-scenarios project allows current / highest / lowest testing to be set up with minimal effort, and gives more control over managing the different test scenarios without complicating the test scripts. If you have a project that would benefit from highest / lowest testing, give it a try. Making more use of flexible dependency version constraints in commonly-used PHP libraries reduces “dependency hell” problems, and testing these variations will make them easier to maintain. Being fastidious in these practices will make it easier for everyone to adopt improved libraries more quickly and with less effort.


Topics Development, Drupal, Drupal Planet
Apr 18 2017

Fairfax County Public Schools (FCPS) is the largest school system in Virginia and the 10th largest in the United States, with more than 200 schools and centers serving 186,000 students. To keep this large community of students, parents, teachers, employees, and the general public informed, FCPS is building out a network of 195 school websites.

Over time and without a unified and modern content management system, FCPS faced numerous obstacles to managing its content and effectively communicating with its audiences. The school system engaged Forum One to help realize its vision of a modern enterprise web platform that connected their sites centrally. Harnessing the content creation and syndication powers of Drupal with Pantheon Custom Upstreams, Forum One developed a platform that enables FCPS to deploy, manage, and update this network of school websites from a common codebase and to easily share news, events, and alerts from a central source.

I’m Brooke Heaton, a senior developer at Forum One, and I helped lead the development of our solution for the FCPS system. In this post, I’ll discuss how we worked with Pantheon to devise a powerful solution that met a number of critical needs for FCPS. I’ll outline the modules we used to scaffold each school site starting from a Pantheon Upstream, and I’ll also dig into the tools and practices we used to quickly deploy multiple sites.

One Codebase Upstream, Dozens of School Sites Downstream

Getting the solution right for FCPS required a long-term vision that would meet a range of needs in a sustainable and scalable way. Our solution needed to:

  • Provide a common CMS that is user friendly, highly efficient, and cost effective

  • Modernize the FCPS brand with an updated visual identity

  • Syndicate central communications for multiple, diverse audiences

  • Quickly scaffold and deploy numerous new sites with common menu items, homepage and landing pages, content importers, and a unified user interface

  • Add or remove users from a central source

While a Drupal “multisite” approach could have met many such needs, experienced Drupalists can attest that the multisite approach has been fraught with issues. We opted instead to harness Pantheon’s Custom Upstream workflow, which allowed us to unify all sites with a common codebase while also customizing each site with each school’s own configuration—allowing them to display their individual school name, logo, custom menu items, unique landing pages, and their own users.

Utilizing Upstreams, we are also able to continually develop core functionality and new features and to then propagate these updates to downstream school sites.

school repo Upstream graphic

Code Propagation: Upstream code changes are merged to downstream school repositories.

Our solution also established a content strategy and governance model that allows FCPS administrators to syndicate content from a central point—the central FCPS site—to individual schools by harnessing Drupal 8’s core Views and Migrate modules along with the contributed Feeds module.

FCPS.edu upstream graphic

Content Propagation: Up-to-the minute News, Events, and Blog updates are syndicated by the central FCPS website to all school sites and imported by Feeds or Migrate.

mobile view of FCPS site

Creating Turn-Key Sites with Drupal 8

To get where we wanted to go, the Forum One dev team utilized a suite of powerful Drupal 8 core and contributed modules that generate default-content, menus, taxonomy terms, and blocks—even images!—immediately after spinning up a new site from the Pantheon Upstream.

Key modules we used include:

  • Configuration Installer: A common installation profile that would import Drupal 8 configuration from a directory so that all of the core configuration of a standard FCPS school site would be there from the start.

  • Default Content: The ability to create generic and school system-wide content that cannot be saved in configuration and to export this in JSON format. Upon site deployment, the Default Content module scans the modules directory for any exported default content and adds it to the database. Brilliant!

  • Migrate: Now integrated into Drupal core, the Migrate module allows us to import complex content from a central source, whether CSV or XML. While the Drupal 8 Feeds module continues to mature, the Migrate module combined with Migrate Plus and Migrate Source CSV provides powerful tools to import content.

  • Views: Also part of Drupal core, the Views module provides powerful content syndication tools, including the ability to produce content “feeds” in JSON, CSV, or XML format. This allows content from the central FCPS site to be broadcast in a lightweight data format, then imported by the Feeds module on each individual school site. Triggered regularly by a cron job, Feeds instances import the latest system-wide News, Events, and Blog posts.

The FCPS Install Profile

With a solid foundation in a central FCPS site (hat tip to F1 Tech Lead Chaz Chumley), we were able to clone the central site, then ‘genericize’ the initial site so that it could be customized by each school. We removed site.settings from config to prevent Upstream code from overwriting downstream site settings, such as the site name, URL, email, etc. We also developed a special solution in the theme settings to allow schools to adjust their site colors with a custom theme color selector. Bring on the team spirit!

With this genericized starting site in code and all pages, menus, taxonomy terms, image files, and blocks exported via Default Content, we were ready to deploy our code to the Upstream. To set up the Upstream, we placed a request to Pantheon with some basic information about our Upstream and within minutes, we were ready to deploy our first FCPS school site. Huzzah!

Spinning up a New FCPS School Site from Pantheon’s Upstream

With the installation profile complete and our codebase hosted on Pantheon’s Upstream, we could now create our first school site and install Drupal from the profile. While sites can be created via Pantheon’s UI, we sped up the process using Pantheon’s Terminus CLI tool and Drush to quickly spin up multiple sites while meticulously tracking site metadata and settings. Pantheon’s helpful documentation illustrates just how easy this is:

$ terminus site:create --org=1a2b3c4d5e6f7g8h9i10j11k12l13m14n15o our_new_site 'Our New Site Label' o15n14m13l12k11j10i9h8g7f6e5d4c3b2a1 

The above command uses the format site:create [--org [ORG]] [--] <site> <label> <upstream_id>

[notice] Creating a new site...

With a site created from the Upstream codebase, the next step was to update our local Drush aliases via Terminus so that we could install Drupal from our configuration_installer:

$ terminus sites aliases
[2017-04-05 00:00:00] [info] Pantheon aliases updated 

Then we run drush sa to identify the new DEV site alias:

$ drush sa
@pantheon.our-new-site.dev  ← we will use this alias to install the site on DEV


With the alias for our newly created site we install Drupal from the Upstream using Drush:

drush @pantheon.our-new-site.dev si config_installer -y --notify --account-name=administrator --account-pass=password \
 config_installer_site_configure_form.account.name=admin \
 config_installer_site_configure_form.account.pass.pass1=admin \
 config_installer_site_configure_form.account.pass.pass2=admin \
 config_installer_site_configure_form.account.mail=[email protected]

The above command will fully install Drupal using our configuration, and will automatically add the default content exported to our custom module.

Once Drupal was installed on Pantheon, we added site editor accounts for FCPS and trained school staff in customizing their site(s).

school sites dashboard

The initial batch of FCPS schools sites after deployment

What about updates?

After we deployed the first fifteen FCPS school sites and made them live to the world, it was inevitable that changes would soon be needed. Beyond the normal module and core updates, we also faced new functional needs and requests for enhancements (and yes, maybe even a few bugs that crept in), so we needed to update our ‘downstream’ sites from the Pantheon Upstream. To do so, we merged our changes into our Upstream ‘master’ branch, and within a few minutes each site displayed a message showing that ‘Upstream Updates are available’. This meant that we could merge Upstream changes into each school repository.

Upstream updates screenshot

To merge these changes into each site, we once again opted for the faster approach of using Pantheon’s Terminus CLI tool to quickly accept the Upstream changes and then run update.php on each of the school sites.

$ terminus site:list --format=list | terminus site:mass-update:apply --accept-upstream --updatedb --dry-run

$ terminus site:list --format=list | terminus site:mass-update:apply --accept-upstream --updatedb

To import the modified Drupal configuration, we also had to run a drush cim --partial on each site—a step that can be further automated in deployments by harnessing yet another brilliant Pantheon tool: Quicksilver Platform Hooks. With a Quicksilver hook to automatically run a Drupal partial configuration import after code deployments, we are able to remove another manual step from the process, further improving the efficiency of the platform.
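Such a hook is declared in the site's pantheon.yml. As a sketch (the script path and description are our own choices here, and the partial config import itself would happen inside that PHP script):

```yaml
# pantheon.yml (sketch): run a script after each code deploy
workflows:
  deploy:
    after:
      - type: webphp
        description: Import Drupal configuration after deploy
        script: private/scripts/config_import.php
```

With this in place, every deploy to an environment triggers the script automatically, so editors never see a site whose code and configuration are out of sync.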

Is this for you?

Combining Pantheon’s Custom Upstreams with Drupal 8 and its contributed modules can unleash a very powerful solution for nearly any organization that includes sub-organizations, branches, departments or other entities. Universities, public schools, government agencies, and even private enterprises can leverage this powerful tool to maintain strict control over “core” functionality while allowing for flexibility with individual sites.


Dec 12 2016
If you’ve spent much time upgrading older websites to PHP 7, it’s likely you have encountered the annoying situation where PHP notices are printed directly into the HTML output, causing them to display in an ugly, unformatted jumble at the top of the page. The effect looks something like this:
Nov 22 2016

Automated testing is an important part of any active project workflow. When a new version of Drupal 8 comes out, we want to smoke test it on the platform to ensure that everything is working well before we expose it to our customers. We have confidence that Drupal itself is going to work well; we confirm this by running the unit tests that Drupal provides. Most importantly, we also need to ensure that nothing has changed to affect any of the integration points between Drupal and the Pantheon platform. In particular, the behaviors we need to test include:

  • Site install, to ensure that the Pantheon database is made available to the new site without interfering with the installation process.

  • Configuration import and export, to ensure that the Pantheon configuration sync directory is set up correctly.

  • Module installation, upgrade, and database updates, to ensure that the authorize.php and update.php scripts are working correctly with our nginx configuration.

This level of testing requires functional tests; to ensure that everything is working as it should on the platform, we use Behat. The thing that is most challenging about the repository we are testing, though, is that it serves as the upstream for a large number of Drupal sites.  Each one of these sites may want to do testing of their own; therefore, we do not want to add a .travis.yml or a circle.yml to our repository, as this would cause conflicts with the downstream version of the same file whenever we made changes to our tests—something we definitely want to avoid.

Fortunately, Circle CI has a feature that fits this need perfectly. Most of the directives that can be placed in a circle.yml file can now be filled in to the project settings in the Circle CI web interface.  The relevant settings are in the “Test Commands” section.  A screenshot of the dependency commands below shows what we do to initialize the Circle container for our tests:

Drops 8 Screenshot

In the pre-dependency commands, we need to set the timezone to use to avoid PHP warnings in our tests. When using a circle.yml file, we would usually specify the version of PHP that we wanted to use; then, we could adjust the php.ini file directly using a well-known path.  If we do not have a circle.yml file, then we must use whatever version of PHP Circle wants to give us, as the settings in the web interface have no affordance for this. We find the appropriate ini file like so:

echo "date.timezone = 'US/Central'" > /opt/circleci/php/$(php -r 'print PHP_VERSION;')/etc/conf.d/xdebug.ini

There are a couple of things to note about this line. This is just an ordinary bash expression, as is every field in these panels, so we can evaluate expressions and use output redirection here. The path to the PHP ini files contains the PHP version; we use a short bit of PHP to emit this so that we do not need to evaluate the output of php --version, or anything of that nature. Every line is implemented in a separate subshell, so it does not work to set a shell variable on one line, and then try to reference it on the next. If persistent variables are needed, they can be set in the environment variables section, as usual. Finally, we write our setting into the xdebug.ini file in order to turn off xdebug, which speeds up composer considerably. Xdebug is necessary if you plan on generating code coverage reports, though, so you can change the output redirection operator > (overwrite) to >> (append) if you want to keep xdebug enabled.
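The difference between the two redirection operators is easy to check with a throwaway file (the path and ini lines below are arbitrary examples, not Circle's real configuration):

```shell
# ">" truncates the file before writing; ">>" appends to it.
echo "date.timezone = 'US/Central'" >  /tmp/demo-xdebug.ini   # replaces any prior contents
echo "; extra settings kept"        >> /tmp/demo-xdebug.ini   # keeps the first line
cat /tmp/demo-xdebug.ini
```

Running this twice still leaves exactly two lines in the file, because the first echo wipes it each time; swap the first `>` for `>>` and the file would grow on every run.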

After that, we install a few projects with Composer, including:

  • hirak/prestissimo is installed to speed up composer operations.

  • consolidation/cgr is used to ensure that other tools we install do not encounter dependency conflicts.

  • pantheon-systems/terminus is installed so that we can do operations directly on a remote Pantheon Drupal 8 site.

  • drush/drush is also installed on Circle, just in case we want to install the Drupal site to do functional tests on Circle prior to running the tests against the Pantheon site. At the moment, though, Drush is not used, and would not necessarily need to be installed.

Note that we specify the full path to the tools we install, rather than adjusting the $PATH variable, because it seems that the $PATH variable is initialized by Circle sometime after the environment variables section is evaluated, making setting it up there ineffective.

The final step of the dependency commands is to clone the project that contains our test scripts.

We keep our tests in a separate tests repository in order to keep our repository-under-test as similar to the base Drupal 8 repository as possible. The tests repository is cloned into the folder specified by the TESTING_DIR environment variable. This is defined in the environment variables section of the Circle CI project settings. There are a number of variables necessary for the scripts in the ci-drops-8 project to work. They include:

  • ADMIN_PASSWORD: Used to set the password for the uid 1 user during site installation.

  • GIT_EMAIL: Used to configure the email address for the git user for commits we make.

  • TERMINUS_ENV: Defines the name of the Pantheon multidev environment that will be created to run this test. Set to ci-${CIRCLE_BUILD_NUM}

  • TERMINUS_SITE: Defines the remote Pantheon site that will be used to run this test.

  • TERMINUS_TOKEN: A Terminus OAuth token that has write access to the terminus site specified by TERMINUS_SITE.

  • TESTING_DIR: Points to a directory on Circle CI that will hold the local clone of our test repository. Set to /tmp/ci-drops-8.
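As an illustration with a made-up build number, TERMINUS_ENV is derived from Circle's build counter like so:

```shell
# CIRCLE_BUILD_NUM is normally provided by Circle CI; hard-coded here
# purely for illustration.
CIRCLE_BUILD_NUM=123
TERMINUS_ENV="ci-${CIRCLE_BUILD_NUM}"
echo "$TERMINUS_ENV"   # → ci-123
```

Deriving the name from the build number gives every test run its own multidev environment.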

In addition to these environment variables, it is also necessary to add an ssh key to Circle, to allow the terminus drush commands to run.  You should create a new ssh key for use only by this test script. Add the ssh public key to the site identified by TERMINUS_SITE (create a user specifically for this purpose, attach the public key in the SSH Keys section of the Account tab, and add the user as a team member of the site), and place the private key in the Circle “SSH permissions” settings page:

SSH Key Screenshot

Note that the domain drush.in is used for any ssh connection to any Pantheon site. Initially, it was only used for Drush commands; now, WP-CLI and Composer commands are also run this way.

The test scripts are executed via the commands in the Test Commands section of the Circle settings pages:

Drops 8 Screenshot

This part is pretty simple; it first runs the Drupal phpunit tests on Circle CI, and then runs the create-pantheon-multidev script, followed by the run-behat script, which tests against the Pantheon site on the prepared multidev environment.

The create-pantheon-multidev script prepares a new multidev environment to run our tests in; the new environment is named according to the TERMINUS_ENV environment variable. The TERMINUS_SITE variable indicates which site should be used to run the test; the TERMINUS_TOKEN variable must contain a machine token that has write access to the specified site. The script then uses the local copy of the drops-8 repository that Circle prepared for us, and adds the behat-drush-endpoint to it. See Better Behavior-Driven Development for Remote Servers for more information on what this component is for. Once that is done, the resulting repository is force-pushed to a new branch on the specified Pantheon site. The multidev environment is created after the branch is pushed, to avoid the need for the platform to synchronize the code twice.

The multidev sites created for testing purposes are left active after the tests complete. This provides an opportunity to inspect a site after a test run completes, or perhaps re-run some of the test steps manually after a failure to help diagnose a problem.

A script, delete-old-multidevs, is run in the post-dependencies phase, just prior to the creation of the new multidev environment for the latest tests. This script deletes all of the test environments except for the newest two.
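The pruning step can be sketched as follows. This is not the actual delete-old-multidevs script, just an illustration of selecting every ci-* environment except the two newest; the real script would then call terminus to delete each selected environment:

```shell
# A hypothetical list of test environments, named ci-<build-number>.
envs="ci-101 ci-104 ci-102 ci-103"

# Sort numerically by build number, then drop the two newest from the
# deletion list (head -n -2 is GNU coreutils).
to_delete=$(printf '%s\n' $envs | sort -t- -k2 -n | head -n -2)
printf '%s\n' "$to_delete"   # → ci-101 and ci-102
```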

The run-behat script is just a couple of lines, as its only job is to fire off the Behat tool with the appropriate configuration file and path. Note that we define BEHAT_PARAMS in the script to supply dynamic parameters, so that we do not need to rewrite any portion of our configuration file.

In typical usage, most projects that use Behat to test Drupal sites will first install a fresh site to test against using Drush. In our case, however, we want to test the Drupal installer, so we will instead wipe the site with Terminus, and use Behat to step through the installer dialogs.  The Gherkin for one of our installer steps looks like this:

  Scenario: Profile selection
    Given I am on "/core/install.php?langcode=en"
    And I press "Save and continue"
    And I wait for the progress bar to finish
    Then I should see "Site name"

The interesting step here is the step “And I wait for the progress bar to finish”. There are a number of places in the Drupal user interface where an action will result in a progress dialog. Getting Behat past these steps is a simple matter of following the meta refresh tags in the response. A custom step function in our Features context provides this capability for us.

Sometimes you might want to use a secret value in a Behat test—to provide the password to the admin account in the installer, for example. To do this, we use a custom step function that will fill in a field from an environment variable. We use this step function to reference the previously-mentioned ADMIN_PASSWORD environment variable.  Also, there are a number of other useful step functions in the context for running terminus and drush commands during your tests; you might find these useful in your own test suites.

Finally, the ci-drops-8 project contains a circle.yml.dist file that shows the equivalent content for testing a drops-8-based site using a circle.yml file rather than placing the test scripts in the Circle admin settings. While the techniques used in this post are somewhat atypical compared to most Drupal Behat setups, it would be possible to adapt them to work on a standalone Drupal site. Perhaps we’ll do that in a future blog post. In the meantime, the circle.yml.dist file can be used as a starting point to re-use these tests for your Drupal site.

Topics Development, Drupal Planet, Drupal
Nov 09 2016

The composer.json file schema contains a script section that allows projects to define actions that are executed under certain circumstances. The most recognizable use for this section is to define events that happen at defined times during the execution of Composer commands; for example, the post-update-cmd runs at the end of every composer update command.

However, Composer also allows custom commands to be defined in this same section; an example of this is shown in the composer.json schema documentation:

    "scripts": {
        "test": "phpunit"
    }

Once this custom command definition has been added to your composer.json file, then composer test will do the same thing as ./vendor/bin/phpunit or, equivalently, composer exec phpunit. When Composer runs a script, it first places vendor/bin at the head of the $PATH environment variable, so that any tools provided by any dependency of the current project will be given precedence over tools in the global search path. Providing all the dependencies a project needs in its require-dev section enhances build reproducibility, and makes it possible to simply “check out and go” when picking up a new project. If a project advertises all of the common build steps, such as test and perhaps deploy, as Composer scripts, it will make it easier for new contributors to learn how to build and test it.
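The effect of that PATH manipulation can be simulated with a stand-in tool; the fake phpunit script below is purely illustrative:

```shell
# Create a project-local stand-in for a tool in vendor/bin.
mkdir -p /tmp/demo-project/vendor/bin
printf '#!/bin/sh\necho "project-local phpunit"\n' > /tmp/demo-project/vendor/bin/phpunit
chmod +x /tmp/demo-project/vendor/bin/phpunit

# Composer prepends vendor/bin to PATH when it runs a script, so the
# project-local tool takes precedence over any globally installed copy.
cd /tmp/demo-project
PATH="$PWD/vendor/bin:$PATH" phpunit   # → project-local phpunit
```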

Whatever mechanism your project uses for these sorts of tasks should be clearly described in your README or CONTRIBUTING document. Providing a wrapper script is a nice service, as it insulates the user from changes to the default parameters required to run the tests. In the Linux world, the sequence of commands make, make test, make install has become a familiar pattern in many projects. Make has a rich history, and is often the best choice for managing project scripts. For Composer-based projects that are pure PHP, and include all of their test and build tools in the require-dev section of their composer.json file, it makes a lot of sense to use Composer scripts as a lightweight replacement for Make. Doing this allows contributors to your project to get started by simply running git clone and composer install. This works even on systems that do not have make installed.

An example of a scripts section from such a project is shown below:

    "scripts": {
        "phar": "box build .",
        "cs": "phpcs --standard=PSR2 -n src",
        "cbf": "phpcbf --standard=PSR2 -n src",
        "unit": "phpunit",
        "test": [
            "@unit",
            "@cs"
        ]
    }

The snippet above offers the following actions:

  • phar: build a phar using the box2 application.
  • cs: run the PHP code sniffer using PSR2 standards.
  • cbf: run the PHP code beautifier to correct code to PSR2 standards, where possible.
  • unit: run the PHP unit tests via phpunit.
  • test: run the unit tests and the code sniffer.

One thing to note about Composer scripts, however, is that you might notice some changes in behavior when you run certain commands this way. For example, PHPUnit output will always come out in plain unstyled text. Colored output makes it a lot easier to find the error output when tests fail, so let’s fix this problem. We can adjust our definition of the test command to instruct phpunit to always use colored output text, like so:
    "scripts": {
        "test": "phpunit --colors=always"
    }

There is another difference that may affect some commands, and that is that standard input will be attached to a TTY when the script is ran directly from the shell, but will be a redirected input stream when run through Composer. This can have various effects; for example, Symfony Console will not call the interact() method if there is no attached TTY. This could get in your way if, for example, you were trying to write functional tests that test interaction, as the consolidation/annotated-command project does. There are a couple of options for fixing this situation. The first is to explicitly specify that standard input should come from a TTY when running the command:

    "scripts": {
        "test": "phpunit --colors=always < /dev/tty"
    }

This effectively gets Symfony Console to call the interact() method again; however, the downside to this option is that it is not as portable: composer test will no longer work in environments where /dev/tty is not available. We can instead consider a domain-specific solution tailored for Symfony Console. The code that calls the interactive method looks like this:

if (!@posix_isatty($inputStream) && false === getenv('SHELL_INTERACTIVE')) {

We can therefore see that, if we do not want to provide a TTY directly, but we still wish our Symfony Console command to support interactivity, we can simply define the SHELL_INTERACTIVE environment variable, like so:

    "scripts": {
        "test": "SHELL_INTERACTIVE=1 phpunit --colors=always"
    }

That technique will work for other Symfony Console applications as well. Note that the SHELL_INTERACTIVE environment variable has no influence on PHPUnit itself; the example above is used in an instance where PHPUnit is being used to run functional tests on a Symfony Console application. It would be equally valid to use putenv() in the test functions themselves.
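The stdin difference described above is easy to observe from the shell; test -t 0 reports whether standard input is a terminal:

```shell
# A command can check whether its standard input is a terminal.
check_stdin() {
  if [ -t 0 ]; then echo "stdin is a TTY"; else echo "stdin is redirected"; fi
}

# With input redirected, as it is for commands run through Composer,
# the TTY check fails:
check_stdin < /dev/null   # → stdin is redirected
```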

That is all there is to Composer scripts. This simple concept is easy to implement, and beneficial to new and returning contributors alike. Go ahead and give it a try in your open-source projects—who knows, it might even increase contribution.


Nov 03 2016

Drupal configuration is the all-important glue that instructs the Drupal core and contrib code how to operate in the context of the current web application. In Drupal 7, there was no formal configuration API in core. The ctools contrib module provided an exportables API that was widely implemented, but was not universally supported. Drupal 8 has greatly improved on this state of affairs by providing the Configuration Management API in core. Now, configuration can be handled in a uniform and predictable way. During runtime, configuration items exist in the database, as always, but may be exported to and imported from the filesystem as needed.

These synchronization operations by default happen in the CONFIG_SYNC_DIRECTORY. The location of this directory is defined in the settings.php file. If a config sync directory is not defined when the Drupal installer runs, it will create one inside of the files directory. Because configuration files may contain sensitive information, Drupal takes measures to protect the location that the configuration files are placed to prevent a situation where an outside party might be able to read one of these files with a regular web request. There are two primary techniques employed:

  1. The name of the configuration folder is randomly generated, to make it impossible to guess the path to the configuration files.

  2. A .htaccess file is written to the directory, so that sites that use Apache, at least, will not serve files stored inside it.

While these measures provide a reasonable level of protection, an even better solution is to place the configuration files entirely outside of the web server’s document root, so that there is absolutely no way that the configuration files can be addressed. It is easy to change the location of the sync directory; this process is described in the drupal.org documentation page.

Your configuration files should be committed to your git repository, so, before you move your configuration files, you should ensure that you are working with a site that is utilizing a relocated document root. An example project to do this is presented in the blog post Using Composer with a Relocated Document Root on Pantheon.

To specify a different location for your configuration files, you can redefine the $config_directories variable in your settings.php file to place your configuration above your Drupal root:

/**
 * Place the config directory outside of the Drupal root.
 */
$config_directories = array(
  CONFIG_SYNC_DIRECTORY => dirname(DRUPAL_ROOT) . '/config',
);

On a Pantheon site, you should make sure that you add this code after the settings.pantheon.php is included; otherwise, the CONFIG_SYNC_DIRECTORY will be overwritten with the Pantheon default value. Also, you need to ensure that the configuration directory exists before you change this variable in your settings file. If you already have an extant configuration directory, you can simply git mv it to its new location.

$ git mv web/sites/default/files/config .

That’s really all there is to it. Once your configuration directory has been relocated, all configuration management operations will continue to work the same way that they always have. If you are using Drupal 8 with a relocated document root, relocating your configuration files is something that you should be doing.


Oct 25 2016
One of the most commonly documented ways for a PHP command line tool to be installed is via the composer global require command. This command is easy to document and easy to run, which explains its popularity. Unfortunately, this convenient function has a darker side that can cause some pretty big problems.
Oct 17 2016

The information in this blog post has been superseded by: 

[DOC] Drupal 8 and Composer on Pantheon Without Continuous Integration

[DOC] Build Tools

Composer is the de-facto dependency manager for PHP; it is therefore no surprise that it is becoming more common for Drupal modules to use Composer to include the external libraries they need to function. This trend has rather strong implications for site builders: once a site uses at least one module that uses Composer, it becomes necessary to also use Composer to manage the site.
To make managing sites with Composer easier, Pantheon now supports relocated document roots. This feature allows you to move your Drupal root to a subdirectory named web, rather than serving it from the repository root. To relocate the document root, create a pantheon.yml file at the root of your repository. It should contain the following:
api_version: 1
web_docroot: true

With the web_docroot directive set to true, your site will be served from the web subdirectory. Using this configuration, you will be able to use the preferred Drupal 8 project layout for Composer-managed sites established by the project drupal-composer/drupal-project. Pantheon requires a couple of changes to this project, though, so you will need to use the modified fork for Pantheon-hosted sites.

Installing a Composer-Managed Site

Pantheon has created an example repository derived from drupal-composer/drupal-project for use on Pantheon with a relocated document root. The URL of this project is: https://github.com/pantheon-systems/example-drops-8-composer


There are two options for installing this repository: you may create a custom upstream, or you may manually push the code up to your Pantheon site.

Installing with a Custom Upstream

The best way to make use of this repository is to make a custom upstream for it, and create your Drupal sites from your upstream. The example-drops-8-composer project contains a couple of Quicksilver “deploy product” scripts that will automatically run composer install and composer drupal-scaffold each time you create a site. When you first visit your site dashboard after creating the site, you will see that the files created by Composer—the contents of the web and vendor directories—are ready to be committed to the repository. Pantheon requires that code be committed to the repository in order to be deployed to the test and live environments.

We’ll cover the workings of the Quicksilver scripts in a future blog post. In the meantime, you may either use the example-drops-8-composer project directly, or fork it and add customizations, if you are planning on creating several sites that share a common initial state.

Installing by Manually Pushing Up Code

If you don’t want to create an upstream yet, or if you are not a Pantheon partner agency, you can use the following Git instructions instead. Start off by creating a new Drupal 8 site; then, before installing Drupal, set your site to Git mode and do the following from your local machine:

$ composer create-project pantheon-systems/example-drops-8-composer my-site
$ cd my-site
$ composer prepare-for-pantheon

The “deploy product” Quicksilver scripts only run when a site is created from a custom upstream, so you will need to run composer install and composer drupal-scaffold yourself after you clone your site. Then, use the commands below to push your code up to the site you just created:

$ git init
$ git add -A .
$ git commit -m "web and vendor directory from composer install"
$ git remote add origin ssh://codeserver.dev.<site-uuid>@codeserver.dev.<site-uuid>.drush.in:2222/~/repository.git
$ git push --force origin master

Replace my-site with the name that you gave your Pantheon site, and replace ssh://codeserver.dev.<site-uuid>@codeserver.dev.<site-uuid>.drush.in:2222/~/repository.git with the URL from the middle of the SSH clone URL from the Connection Info popup dialog on your dashboard.

Copy everything from the ssh:// through the part ending in repository.git, removing the text that comes before and after. When you run git push --force origin master, you will completely replace all of the commits in your site with the contents of the repository you just created.

Updating a Composer-Managed Site

Once your site has been installed from this repository, you will no longer use the Pantheon dashboard to update your Drupal version. Instead, you will manage your updates using Composer. Updates can be applied either directly on Pantheon, by using Terminus, or on your local machine.

Updating with Terminus

To use Terminus to update your site, install the Terminus Composer plugin, placing it in your ~/terminus/plugins directory if you are using Terminus 0.x, or in your ~/.terminus/plugins directory if you are using Terminus 1.x. Using the newer version of Terminus is recommended.

Once you have the plugin installed, you will be able to run composer commands directly on your Pantheon site:
$ terminus composer my_site.dev -- update
Be sure that your site is in SFTP mode first, of course. Note that it is also possible to run other composer commands using the Terminus Composer plugin. For example, you could use terminus composer my_site.dev require drupal/modulename to install new modules directly on Pantheon.

Updating on Your Local Machine

If you have already cloned your site to your local machine, you may also run Composer commands directly on your site’s local working copy, and then commit and push your files up as usual.

Either way, you will find managing your Drupal sites with Composer to be a convenient option—one that, sooner or later, you will need to adopt. Give it a spin today, and see how you like the new way to manage Drupal code.


Oct 10 2016

When working on Drupal 8 theming, it is very helpful to have Twig debug mode on. Debug mode will cause Twig to emit a lot of interesting information about which template generated each part of the page. The instructions for enabling debug mode can be found within the comments of the default.services.yml file, among other sources. In short, all you need is the following in your services.yml file:

parameters:
  twig.config:
    debug: true

Unfortunately, there are negative implications to committing debug code into a Drupal site’s repository. On Pantheon, the repository is used as the authoritative representation of what files should be deployed to production. If you turned on debug mode via services.yml, as the documentation recommends, you would have to always remember to take it out again before publishing, or you would end up with Twig debugging information on your production site—definitely not a desirable state of affairs. As a reminder not to commit changes to this file too lightly, Pantheon included it in the .gitignore file at the root of the repository, requiring that git add --force be used to add the file. The rules about what to do with the services.yml file were unclear to many users, and it was common for support to get inquiries on this subject.
Now, however, there is an easier way. As of Drupal 8.2.0, Pantheon will now load services from a secondary services file, in addition to the standard services.yml file. The name of the file varies depending on whether the environment is production or not.

  • Production environments (‘test’ or ‘live’): services.pantheon.production.yml

  • Pre-production environments (‘dev’ or any ‘multidev’): services.pantheon.preproduction.yml

These files may both be committed to the repository; only the one appropriate to the current environment will be loaded.  Furthermore, the standard services.yml file has been removed from the .gitignore file, and is still loaded in all environments. If you have any configuration settings that are universally applicable, you can make them here.
Sensitive information in a services.yml file will not be exposed through a web request; however, if there is data that needs to be in a services file but is so sensitive that you do not want it in the repository at all, you could consider storing it in the private files directory instead. To do this, open up your sites/default/settings.php file, and find the following line:
$settings['container_yamls'][] = __DIR__ . '/services.yml';
Duplicate this line, and change it so that it reads:
$settings['container_yamls'][] = __DIR__ . '/files/private/services.yml';
Now settings for all environments will be loaded from this location in the private files directory. Files are shared across all Pantheon environments; if you have special needs, and want to ensure that some settings are made only on one Multidev environment, for example, then you can follow the lead of what settings.pantheon.php is doing, and use $_ENV['PANTHEON_ENVIRONMENT'] to decide which file to load.
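For example, a fragment along these lines in settings.php could load an extra services file on a single Multidev environment only; the environment name and filename here are hypothetical:

```php
// Load an additional services file only on one specific Multidev environment.
if (isset($_ENV['PANTHEON_ENVIRONMENT']) && $_ENV['PANTHEON_ENVIRONMENT'] === 'my-feature') {
  $settings['container_yamls'][] = __DIR__ . '/files/private/services.my-feature.yml';
}
```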

Following these simple patterns will make managing your services file a straightforward task.


Jun 13 2016

As a developer or site builder, there will come a time when you have a lot of content to migrate from an existing Drupal site into your Pantheon site—content with related items, entity references, and the like—a need that goes beyond Feeds and makes importing content a challenge. This path will lead you to Drupal’s Migrate module. In Drupal 8, this module is in core.

Much can be said about Migrate and how it works, but to give you the short version: it allows a new Drupal site to connect to a different source (such as another copy of Drupal, or even another platform such as WordPress). Once connected, it will pull content in, one piece at a time, and track it—in case it gets updated, or in case other content relies on it—so it can reconcile these complicated matters for you.

The Simplest Migration: Drupal to Drupal

To demonstrate the power of Migrate, we can do what is called a “drupal to drupal” migration on a site that is based on Drupal 6 or Drupal 7 and bring that content into a new Drupal 8 site.

Let’s look at the steps involved:

  1. Import our old Drupal site into Pantheon if it is not there already.

  2. Create a new Pantheon site to run a Drupal 8 site. Go through the install process.

  3. Enable the experimental migration modules.

  4. Run the “Migrate Upgrade” process.

After completing this process, you should have all of the configuration from the old site, such as the site name, content types, fields, and other settings. With all of these elements in place, the real magic can begin: the Migrate module will populate them with content from your old site. If you have used Migrate in Drupal 7, this may be a surprise; that older version did not set things up for you, and you had to manually map out all of the fields. The Drupal 8 version goes much further to make your life substantially easier.

Drupal Migration Upgrade, Step by Step

1. Import your existing site into Pantheon if it is not there already. A free sandbox account should be fine.

2. Create a new Drupal 8 site. Upon completion of the install process, remember to commit Pantheon’s settings.php file and switch to “Git” mode.

3. Go to the “Extend” page in Drupal 8, enable the Migrate, Migrate Drupal, and Migrate Drupal UI modules. See Figure 1.

4. After enabling the modules you will see a green message notifying you that the modules have been installed; it also provides a link to the upgrade form. See Figure 2.

5. Go to the /upgrade page or follow the link that appeared after enabling the module. See Figure 3.

6. Input the database credentials for the old site. See Figure 4. You can get these credentials from the Pantheon dashboard of the old site under “Connection Info”. These database credentials can change as Pantheon containers update, so you may need to update this information later if you intend to run migrations repeatedly over time. Over on the GitHub repo for Pantheon documentation, there is a discussion about different ways to automate and simplify the database connection setup. Stay tuned for a future blog post on running migrations through Drush when those recommendations are finalized.

7. Congratulations, you upgraded Drupal! Now it is time to review.

Reviewing the Drupal Migration

After you run the migration, you will want to do a full review to see what worked and what didn’t. This is an important step both for the developer and the site owner.

Things you should verify:

  • Did the site variables, such as site_name, and other site-wide settings come across?

  • Were all of the content types and fields created as you expected? If a field or type is missing, do you still need it in the new site?

  • What content came through? Did you get all of the items you were expecting? You will find some occasional differences; for example, user 1 is not imported, because you created that user when you installed Drupal 8, not during the migration. You may also find that some nodes fail due to problems with a field, etc.

Watch out for some of the following:

  • Missing filter formats can cause body text and other field data to go missing if the formatter was not available during the import. This could happen if you did not enable a contrib module, for example. If the filter does not yet exist for Drupal 8, you can get around this by implementing hook_migrate_prepare_row in a custom module to set those to use a different filter. This hook iterates over every single item that is imported, so you can update specific values as they are being imported.

  • User, node, entity, and file references may break, especially if a needed module was not enabled before you migrated. Depending on the version of Drupal you are migrating from you may need to use a patch (see the “migration system” component filter on the core issue queue) to enable support for these fields if they do not work. You would have to run the migration locally on your laptop if you have to do any patching.

  • Some fields or other data may not be relevant to your new site. You may want to delete these after importing, or you may want to consider overriding things using Drush and the migrate-upgrade command (from contrib, 8.x-2.x branch) to generate migration YML files you can use to augment the migrations.
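The filter-format fix mentioned above, via hook_migrate_prepare_row, can be sketched like this; the module name, format ids, and property names are all illustrative:

```php
<?php

use Drupal\migrate\Plugin\MigrateSourceInterface;
use Drupal\migrate\Plugin\MigrationInterface;
use Drupal\migrate\Row;

/**
 * Implements hook_migrate_prepare_row().
 *
 * Runs once for every row that is imported, letting us adjust values
 * on the way in.
 */
function mymodule_migrate_prepare_row(Row $row, MigrateSourceInterface $source, MigrationInterface $migration) {
  // If the body used a filter format that was never recreated in the new
  // site, fall back to basic_html so the text is not silently dropped.
  if ($row->getSourceProperty('format') === 'old_custom_format') {
    $row->setSourceProperty('format', 'basic_html');
  }
}
```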

Once you are satisfied that you have the content you need, you can finish your site building tasks and re-run the migration if you have not changed any structure. Re-running the migration will bring in the new elements that have been created since your last import.


Other Migrations & Next Steps

If you use contrib modules, you can take the migrations even further. For example, you can use the Migrate Source CSV module to create migrations that import from a spreadsheet. To work with it, all you need to do is create some Drupal 8 configuration files: https://www.drupal.org/project/migrate_source_csv
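A minimal migration definition for the CSV source might look like the following sketch; the file path, ids, and field mappings are illustrative, and the exact source keys depend on the module version:

```yaml
# Hypothetical migration configuration: import article nodes from a CSV.
id: article_csv_import
label: 'Import articles from CSV'
source:
  plugin: csv
  path: /path/to/articles.csv
  header_row_count: 1
  keys:
    - id
process:
  title: title
  body/value: body
destination:
  plugin: entity:node
  default_bundle: article
```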

You can also import from other platforms such as WordPress. Currently the wordpress_migrate module is in development; contributions are welcome. https://www.drupal.org/project/wordpress_migrate

If the approach taken here does too much, you can create your migrations manually using YML files in which you map out the fields by hand. If you wish to build the entire site from scratch in Drupal 8—including creating the content types, etc.—this may be the best approach.


May 09 2016

I believe that the web is Earth’s most powerful communication tool. In particular I believe in the Open Web, a place of small pieces loosely joined, true to the original architecture of the internet, the philosophy of UNIX, and of HTTP as a protocol. This technology is rewriting the way humanity operates, and I believe it is one of the most positive things to emerge from the 20th century.

Crucially, the Open Web is bigger than any corporation, government, or technology. It is the first truly global people-powered organization, a species-level medium for communication and collaboration. Platforms will rise and platforms will fall, but the web will be there, fundamental to all things.

Drupal lets organizations and ambitious individuals realize the potential of this historic opportunity. That’s what it’s all about. Drupal helps people, organizations, and businesses succeed on the Open Web.

Drupal’s Destiny—8.1.0 Update

More than a decade in, Drupal’s place in the wider ecosystem is becoming clear. Drupal is the go-to solution for medium to large Open Web projects; those where a custom design, user experience, and data architecture are required. When people need a full-power CMS, and they’re smart enough to avoid a proprietary platform lock-in, they choose Drupal.

This doesn’t mean Drupal is “Enterprise Only” or exclusively for big budget projects. Some will continue using Drupal for very small sites (e.g. simple brochureware, personal blogs), perhaps even leveraging commodity open source themes. But these will largely be developer-initiated and driven. People “scratching their own itch,” as we say.

Smaller projects that don’t have a developer permanently attached, or involved at all—which is most websites by sheer volume—will continue gravitating towards simpler technologies that are easier to use. WordPress is the obvious leader when it comes to consumer-grade Open Web technology, fending off the proprietary platforms while also doing brisk up-market business for media organizations and marketing campaigns. A new generation of designer-friendly static site generators are making a meaningful dent here as well.

However, the release of Drupal 8.1.0 provides even more evidence that Drupal is a leading technology for feature-rich websites with ambitious goals, and for Open Web developers who want to use cutting edge techniques. In particular, the improvements to the content rendering system which have allowed BigPipe to move into core with the latest release will have a huge impact.

Drupal now ships with an enormously powerful suite of tools for website builders looking to deliver a cutting edge experience. Out of the box, Drupal gives you a CMS that’s mobile friendly, with a built-in REST API, the necessary core technology to do progressive rendering and keyed caching, versionable configuration, built-in internationalization, and a mature and well architected template layer.

In addition to having more power than ever before, Drupal is becoming more elegant and cleaner to implement, which is important for adoption on complex projects and in larger organizations. As my colleagues Matt and David will demonstrate on Tuesday, you can now build feature-rich websites on Drupal 8 without writing a single line of PHP. That’s a sea change in terms of capability for the software compared to the previous version.

Building such sites with Drupal still requires an experienced developer, as Drupal configuration is complex, and the risks of poor architecture are real. But the work is much more in designing the system, expressing the design through configuration, and managing the presentation through safe Twig templates, as opposed to trying to get a hundred-plus contrib modules to work together with custom glue code and last-mile theme kludges.

The fact that Drupal has improved its power while decreasing risks in its form-factor is a huge leap forward, putting it on the same playing field with dominant enterprise CMSs like Adobe Experience Manager or SiteCore. Except of course Drupal is open, which is a unique value add. You’re still free to implement your own code where it really delivers value—custom business logic, systems integrations, data access, user experience, etc.—and there is still an enormous community of experts surrounding the technology who you can enlist to help.

Developer Experience

In many ways, Drupal 7 mirrored the chaos and confusion that frustrates so many when it comes to the Open Web, what I like to call “The GoDaddy Effect.” The lack of coherency and clear patterns for common tasks was a major barrier to adoption and a pain point for ongoing maintenance and support.

In the old days, there were at least five ways to do anything, one of which was slightly better and two of which might be bad, and, worse, there was no real way to tell the difference other than hearsay, personal taste, or hard-won experience. “Your Mileage May Vary” was the answer to far too many questions. Still, hundreds of thousands of developers found their own path through the many choices Drupal presented, and over a million sites got built.

By contrast, Drupal 8 has opinions. While it is inarguably a more complex system, there’s a lot less clutter. Everyday implementations should be more elegant, so long as they’re on the happy path. There is now a “correct” way to do something more often than not. This is good, but it also means that the other more familiar paths developers may have taken in the past are now unhappy—dead ends, or else leading straight over a cliff of complexity. That leap might be exhilarating, but is more likely to feel painful and confusing, especially if it comes without warning.

Drupal 8’s learning curve is real, but the rewards should be worth the investment, especially for professional website developers. After all, most of what Drupal is doing is just best practice PHP and software development generally. It’s a chance for every developer to level up. If you look at what some agencies have done by getting a head start on Drupal 8, the results seem to speak for themselves.

For instance, two years ago Chapter Three’s management team (John Faber and Stephanie Canon) made an ambitious strategic decision to recruit Alex Pott, a Drupal Core mainstay, and now it’s paying off. They’ve done several major launches with Drupal 8 in 2016 already, with more in the pipeline. While they’re happy to continue supporting existing D7 clients, they have trained their entire staff on D8 and are focusing exclusively there for new projects going forward.

By standardizing their team on Drupal 8 for all new engagements, they’ve levelled up to modern PHP development practices and are focused increasingly on cutting edge web use cases. This is a big upgrade vs remembering the myriad ways to navigate the paths of D7. They’re also the first to benefit from upstream enhancements in Drupal, like automated JavaScript testing, or the ability to build next-gen user experiences with BigPipe.

Across the board, agencies accelerating internal adoption of Drupal 8 simultaneously upgrade their internal practices, and position themselves for the future. Those who have prepared in advance are best positioned to reap the benefits of the coming wave of adoption—enjoying rightful thought leadership for the next generation of Drupalists, and first shot at the best new projects that come up.

Gain the World, But Keep Your Soul

Just as I believe in the Open Web, I also believe that the Drupal community should be proud of its growing success in the enterprise marketplace:

  1. Drupal is displacing lumbering proprietary technologies like Oracle, SiteCore, and Adobe CQ. This is good for open source adoption, good for developer sanity, and good for the web in general.

  2. A focus on the high end of the market means taking on the really interesting technical challenges. It plays to Drupal’s strengths, and will push Drupal to continue innovating.

  3. Complex projects for larger organizations are generally supported by larger budgets. This is a good thing for the Drupal ecosystem, as it creates stability and long-term growth potential.

I think we can take pride in the fact that Drupal is the leading solution in this marketplace when big organizations are ready to embrace the Open Web. This is one of the many reasons I don’t worry about the Drupal community losing its vibrancy or character, even as it becomes more and more successful in business terms. While individual contributors will naturally come and go, the essence of the project will attract those of us who believe in these core values.

We are all champions for the Open Web, carrying the fire. We will always have friendly co-opetition with other open platforms and open source CMSs, but the real battle is with proprietary software vendors, and the walled-garden gatekeepers who convince CMOs that they can buy their way to internet fame. I believe that’s a battle that Drupal as a technology and as a community is fully prepared to fight—and win—with joy in our hearts.

Topics Drupal, Drupal Planet
Apr 11 2016

Having a command line interface to the functionality provided by Drupal modules has been a highly valuable and widely used feature for many years. Today, there are over 500 Drupal modules that provide Drush commands, and the number keeps growing. On top of this, some modules have started to use Drupal Console to implement their command line tools. Drupal Console provides an object-oriented interface and a host of utility functions provided by the Symfony Console libraries. The Drupal Console APIs are useful, but the existence of two standards has led to some duplication of functionality, which increases maintenance costs. This is particularly acute for features such as site install and configuration import / export, which occasionally need to be adjusted to keep pace with changes made in Drupal core. This situation also creates some angst for module developers, who must decide which API will be best for their project.

Recently, the Drush and Drupal Console maintainers started a collaborative effort to make an even better way to rapidly create command line tools for Drupal modules. The new system uses annotations to describe the characteristics of your command. That is literally all that is required to make your command work with both Drush and Drupal Console. The command line tool framework just gets out of your way, and lets you call your module APIs to do what you need to do.

There is already a preliminary implementation of one of these “annotated command files” available for the Drupal 8 default_content module that provides two commands to export content. The basic structure is shown below:

class DefaultContentCommands {

    /**
     * Exports a single entity.
     *
     * @param string $entityType The type of the entity to export.
     * @param string $entityId The id of the entity to export.
     * @option string $file Write out the exported content to a file
     *         instead of stdout
     * @aliases dcex
     */
    public function defaultcontentExport($entityType, $entityId, $options = ['file' => '']) {
        // … implementation
    }

    /**
     * Exports an entity and all its referenced entities.
     *
     * @param string $entityType The type of the entity to export.
     * @param string $entityId The id of the entity to export.
     * @param string $folder Folder to export to, entities are grouped
     *         by entity type into directories
     * @aliases dcexr
     */
    public function defaultcontentExportReferences($entityType, $entityId, $folder) {
        // … implementation
    }

}
The entire command is available in a patch in the default_content issue queue, Modern CLI command. It works with pre-release versions of both Drush and Drupal Console, as long as you have installed the appropriate PR: Use Symfony Console application and annotation commands in Drush, or Register annotation commands in Drupal Console, respectively.

Progress is being made rapidly on this and other new features that will make the task of providing a command line interface to your module quick and easy. Be sure to catch our session at DrupalCon New Orleans, Writing Command Line Tools for Drupal 8 Modules to learn all about it.

Topics Development, Drupal Planet, Drupal
Mar 23 2016

An autoloader is a bit of PHP code designed to load class files as they are needed. This is a language feature of PHP; when an autoloader is registered, PHP will call it any time a reference to an unknown (un-loaded) class is made. Composer makes it even more convenient to use an autoloader, as it will generate one automatically from the information provided in a project’s various composer.json files when the project is installed and updated.

If one autoloader is good, what about using two autoloaders? Composer has been specifically designed to make it possible to include more than one autoload file; in fact, in the early days of Composer, this was the sanctioned way for the unit tests to combine the classes needed for testing with the classes from the application. Unfortunately, unlike belts and suspenders, two PHP autoloaders are not twice as good at holding your pants up. It goes beyond simple redundancy; having multiple autoloaders can actually be actively harmful to your application.

The figure below illustrates how things might go wrong.

Figure: Loading Two Different Copies of the Same Library
Figure: Loading Two Different Copies of the Same Library

In this hypothetical scenario, one program has a library, FancyLib v1.0.1, loaded in its autoload file. If this program then loads a second autoload file that also contains the same library, but at a slightly different version, problems are likely to be encountered. There are a number of ways that things can go wrong; in the example above, a protected function in a base class has been renamed. If a subclass from the second library is loaded and used with the base class from the first library, then a fatal error will be thrown at runtime. The public API for the library has not changed, so Semantic Versioning provides no protection.
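A runnable sketch makes the hazard concrete. PHP calls registered autoloaders in order until the class is defined, so when two loaders can both supply the same class name, the first-registered one silently wins; the names below echo the FancyLib example and are invented for illustration:

```php
<?php
// Two autoloaders registered for the same class name: the loader
// registered first supplies the class, and the second copy of the
// library is silently ignored. (Names here are hypothetical.)
$dirA = sys_get_temp_dir() . '/libA';
$dirB = sys_get_temp_dir() . '/libB';
@mkdir($dirA);
@mkdir($dirB);
file_put_contents($dirA . '/Widget.php', '<?php class Widget { const VERSION = "1.0.1"; }');
file_put_contents($dirB . '/Widget.php', '<?php class Widget { const VERSION = "1.0.2"; }');

$makeLoader = function ($dir) {
    return function ($class) use ($dir) {
        $file = $dir . '/' . $class . '.php';
        if (file_exists($file)) {
            require $file;
        }
    };
};

spl_autoload_register($makeLoader($dirA)); // e.g. the application's vendor/autoload.php
spl_autoload_register($makeLoader($dirB)); // e.g. a second, global autoload file

echo Widget::VERSION; // the first-registered loader's version of the class
```

If the two copies of the class are not interchangeable—as in the renamed-protected-function scenario above—this silent shadowing is exactly what produces the runtime fatal error.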

Fortunately, it is unusual to encounter a situation where it is tempting to use more than one autoloader. Perhaps the most common example of where two autoloaders are used is when using a global install of Drush or Drupal Console to run commands on a Drupal 8 site. Keeping this working gets a little trickier every time Drupal 8 updates its composer.lock file. At the moment, it is still possible to use the same Drush and Drupal Console with both Drupal 8.0.x and Drupal 8.1.x, but this situation can change with any future release. Stay tuned for future improvements in this area; in the meantime, you might want to consider using a site-local Drush to guard against future problems.

See how Drupal 8 reaches its full potential on Pantheon.

Topics Development, Drupal Planet, Drupal
Mar 09 2016

We just finished covering how simple configuration is still easy in Drupal 8, but how is Drupal 8 making the hard things possible in a way to justify changing the old variables API? Well, in Drupal 7, when you needed to handle complex configuration, the first step was ditching variable_get(), system_settings_form(), and related APIs. Drupal 8 has improved this situation two ways. First, you don’t have to throw out basic configuration code to handle complex needs. Second, more things are possible.

[Related] Managing Configuration in Code in Drupal 8

The goal of the Configuration Management Initiative was to maintain the declarative nature of configuration even when side-effects or cross-referenced validation are necessary. (Contrast with the Drupal 7 trick of using hook_update_N() as an imperative method.) Specifically, Drupal 8’s configuration management system operates under the perturbation model of constraint programming. That is, modules and core ship with defaults that work with each other, and configuration changes by a site owner create perturbations that either can be satisfied trivially (like the site front page path) or through propagating changes (described below). Sometimes, the constraints can’t all be satisfied, like deleting a field but also using it in a view; Drupal 8 helps here by making dry-run configuration tests possible. At least you can know before deploying to production!

Let’s go on a tour of hard problems in Drupal configuration management by walking through the use cases.

Subscriptions and Side-Effects

Often, configuration takes effect by simply changing the value. One example is the front page path for a site; nothing else needs to respond to make that configuration effective. In Drupal 7, these basic cases typically (and happily) used variables. Things got messy when a module needed to alter the database schema or similar systems to activate the configuration. Drupal 7 didn’t have an answer for this in core, though you could build on top of the Features module.

Anyway, in Drupal 8, side effects happen two ways. You should use the first when possible.


Using Event Subscribers

This is the modern and clean method of responding to configuration changes, regardless of whether the changes are to your module’s configuration or not. There are a number of configuration events you can receive. The most basic is ConfigEvents::SAVE, which fires on any change, whether created using the admin GUI, Drush, or a configuration import.

A good example of this approach in core is the system module’s cache tag system. It invalidates rendered cache items when the site theme changes; it does more than that, but we’ll be pulling the examples from there. The foundation for Drupal 8-style event listening is Symfony’s EventSubscriberInterface, which provides an object-oriented way to list the events of interest and set a callback. Drupal 7 developers should think of it like a non-alter hook.

The first step is getting the right files in the right places for the autoloader. You will need ConfigExample.php (assuming you name the class ConfigExample) and a example.services.yml (assuming your module name is “example”). You should end up with something like this, starting from the module root:

  • example/

    • example.info.yml (required for any module)

    • example.services.yml (example based on system.services.yml)

    • src/

      • EventSubscriber/

        • ConfigExample.php

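The services file itself can be tiny. A minimal sketch (the service name is arbitrary and assumed here; the event_subscriber tag is what makes Drupal discover the class):

```yaml
# example.services.yml
services:
  example.config_subscriber:
    class: Drupal\example\EventSubscriber\ConfigExample
    tags:
      - { name: event_subscriber }
```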
The second step is to register interest in the appropriate events from your class, which happens by implementing getSubscribedEvents() (which is the only member function required by the EventSubscriberInterface). The following code causes the member function onSave() to run whenever configuration gets saved:

 public static function getSubscribedEvents() {
    $events[ConfigEvents::SAVE][] = ['onSave'];
    return $events;
  }
Third, we need to implement the onSave() callback to invalidate the cache when the appropriate configuration keys change. If system.theme or system.theme.global changes, the code will call the appropriate function to invalidate the cache:

public function onSave(ConfigCrudEvent $event) {
    if (in_array($event->getConfig()->getName(), ['system.theme', 'system.theme.global'], TRUE)) {
        // Invalidate the cache here.
    }
}
The example above covers the intermediate “respond to a configuration change” use case. If you’d also like to validate configuration on import, you can see an example in SystemConfigSubscriber. It shows both subscribing to import events and stopping propagation on invalid input.

Using Hooks

This is where things get dirty; we’re now in hook_something_alter() territory. It’s hard to reason about things in this territory, but here we are anyway because it’s necessary for a handful of use cases. To be clear, you’re basically killing kittens when you re-jigger data the way an alter hook can. If you’re doing cleanup, I’d recommend queueing something into the batch system instead, using the subscription method, even if your batch job has to attempt things and re-enqueue itself if other batch processing needs to finish first. Anyway, warning over. Here’s your example, shamelessly pulled from the API site. This gets the list of deleted configurations matching the key “field.storage.node.body.” If any exist, it adds a new function call to the end of the steps for propagating the configuration.

function example_config_import_steps_alter(&$sync_steps, \Drupal\Core\Config\ConfigImporter $config_importer) {
  $deletes = $config_importer->getUnprocessedConfiguration('delete');
  if (isset($deletes['field.storage.node.body'])) {
    $sync_steps[] = '_additional_configuration_step';
  }
}

Settings Forms

In Drupal 7, have you ever saved a settings form and then gotten a page back with the old setting still in place (usually because something messed with $conf in settings.php)? Never again in Drupal 8! Configuration forms save to the pre-alteration values, and modules can read those “raw” values themselves through Config::getRawData() or ConfigFormBaseTrait::config() to provide custom forms.

However, if you just need a basic form, you can use pre-alteration values automatically with Drupal 8’s ConfigFormBase, which replaces system_settings_form() and seamlessly integrates with everything in Drupal 8’s configuration world from events to imports to version control.

A great example of ConfigFormBase use is in the system module’s LoggingForm.php.
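As a sketch of the pattern (the “example” module and its configuration keys here are invented), a minimal form built on ConfigFormBase looks like this:

```php
<?php

namespace Drupal\example\Form;

use Drupal\Core\Form\ConfigFormBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * A minimal settings form; the config object name is hypothetical.
 */
class ExampleSettingsForm extends ConfigFormBase {

  public function getFormId() {
    return 'example_settings_form';
  }

  protected function getEditableConfigNames() {
    // The configuration objects this form is allowed to edit.
    return ['example.settings'];
  }

  public function buildForm(array $form, FormStateInterface $form_state) {
    $form['greeting'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Greeting'),
      // config() here returns the raw (pre-override) values.
      '#default_value' => $this->config('example.settings')->get('greeting'),
    ];
    return parent::buildForm($form, $form_state);
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    $this->config('example.settings')
      ->set('greeting', $form_state->getValue('greeting'))
      ->save();
    parent::submitForm($form, $form_state);
  }

}
```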

Discovery and Multiple Objects for a Module

Need more than a single (possibly nested) configuration file for your module? If you provide something like views, where there are multiple, independent configurations still owned by your module, you need Configuration Entities. These entities allow enforcing a schema and listing the individual configurations (which correspond one-to-one with YAML files). They are cleaner than the Drupal 7 approaches of spamming the variable namespace, creating a configuration table, or using some combination of ctools and Features.

Three-Way Merging

I don’t want to go into deep detail because there’s a great blog post on configuration merging already, but I will underscore the importance of three-way merging for configuration. Unless you have a completely linear flow with one development environment and no changes happening in test or live, there will be cases where configuration diverges both on the development branch and in production versus the last common revision. A three-way merge allows safely determining what changed on each side without the hazards of simply comparing development’s configuration to production’s (which can make additions on one side appear to be deletions on the other). You could kind of do this with Features, but the use of PHP arrays and other serialization made the committed configuration unfriendly to diff utilities. Drupal 8 uses canonicalized YAML output, which is both human- and diff-friendly.

Setting Dynamic Options with PHP

In the Style of settings.php

The biggest difference you’ll notice isn’t in settings.php but in the GUI. Values you coerce here will not appear in the GUI in Drupal 8 (for reasons mentioned in the Settings Forms section above). The following example is shamelessly pulled from the Amazee Labs post.

Drupal 7:

$conf['mail_system']['default-system'] = 'DevelMailLog';

Drupal 8:

$config['system.mail']['interface']['default'] = 'devel_mail_log';

In a Module

This is back into dirty territory because modules ought to provide higher-level ways of altering their behavior other than expecting other modules to hurriedly erase and change configuration values before the module reads them. If you must go down this path, you need to register the class performing the changes as a service under “config.factory.override” and implement ConfigFactoryOverrideInterface. This happens much as the service entry and class for subscribing happen above.

Configuration Lockdown

The most you could do in Drupal 7 was hard-code many variables in settings.php and periodically audit the site’s configuration with Features. With the transparently named Configuration Read-Only Mode module, you can actually prevent ad hoc changes in, say, your test or live environment. On Pantheon, for example, you could do the following to prevent configuration changes in the production environment:

    // Lock configuration only on the Live environment.
    if (isset($_ENV['PANTHEON_ENVIRONMENT']) && $_ENV['PANTHEON_ENVIRONMENT'] === 'live') {
      $settings['config_readonly'] = TRUE;
    }

Bundling Related Configuration

This is still Features territory, actually more than ever. Contrary to rumors, Features is alive and well for Drupal 8. Relieved of the burden of extracting and applying configuration, Features is back to its original role of bundling functionality, most often in the form of configuration YAML to be imported by Drupal 8. So, in short, Features is now just for features.

Other Resources

The latest stack + the latest Drupal: See how Drupal 8 reaches its full potential on Pantheon

Topics Development, Drupal, Drupal Hosting, Drupal Planet
Jan 04 2016

PHP is going through a renaissance, and a big part of it is the Composer package manager, which makes it easy to share common libraries of code between projects. With Drupal 8 embracing Composer to pull in core components like Symfony2 and key libraries like Guzzle, many developers are eager to use Composer to manage their Drupal sites more fully, and best practices are beginning to emerge. For more background see my previous blog post comparing Composer with Drush Make, and check out drupal-composer/drupal-project for some useful tools to start managing a Drupal site with Composer.

However, we have not yet reached dependency heaven. When using Composer with Drupal 8, the common pattern is to manage only the core directory with Composer. This works very well with the Composer workflow, but what about tracking the other parts of Drupal that live outside of core?

The Composer Update Process

The great thing about Composer is that it makes it very easy to keep your projects up-to-date. As new versions of your dependencies are released, one simple command—composer update—will cause Composer to do the work of ensuring that all of your dependencies, including your dependencies’ dependencies, are updated in a self-consistent way. Composer is, after all, a dependency manager; this is what it does best. Composer also provides a command for starting new projects; if you run composer create-project, and provide the URL of a project template, then Composer will copy and rename this project for you, giving you a fresh place to begin writing code.

The Problem with Scaffolding Files

In any Composer-managed project, some of the code will be specific to the project, and some will be provided by dependencies, as defined in the composer.json file.  The create-project command may provide project-specific files in addition to the files provided by the newly-created project’s dependencies. These files are called scaffold files, because they are intended to be used as the initial scaffolding for a project; these files are typically filled in by the project implementer, and, as such, they are specific to and owned by the project. Scaffold files, therefore, typically are not updated by Composer, because they are not expected to change upstream.

On Composer-managed Drupal sites, though, some of the project-specific files are provided upstream, such as the index.php file, robots.txt, and sites/default/default.settings.php. These files may occasionally change from release to release, and, while it likely wouldn’t cause any problems to miss an update to default.settings.php file, changes to index.php could be important, and should not be missed. As previously mentioned, though, composer update does not update scaffold files.  What to do?

Solving Update Problems with Composer Script Hooks

Fortunately, Composer has anticipated that some projects might need special handling during the update process, and has provided a number of scripting hooks that can be used to perform additional tasks before or after an update. For example, the following composer.json snippet will run an update script at the end of every composer update operation:

    "scripts": {
        "post-update-cmd": "sh ./scripts/post-update.sh"

This script can do whatever operations are necessary to fetch and update the scaffolding files. For example, drupal-composer/drupal-project already has an update-scaffold script that fetches the latest scaffold files from Drupal.org, and copies them into the project. In this simple example, a pre-update command and a post-update command work in concert to update the scaffold files, but only in instances where the version of Drupal core is also updated.  You should already be in the habit of committing your composer.lock file in your git repository; now, whenever you run composer update, you should also use git status to check and see if any of the scaffold files have also changed, and, if so, ensure they are added to the repository in the same commit as the updated composer.lock.

Improving Our Scripts with Robo

Short bash scripts are a very quick way to add some simple functionality to your Composer update process, but script maintenance can start to become an issue when processes get more complicated, and multiple scripts are in use. A better strategy is one that allows scripts to be written in PHP, and use all of the code available to the project via the Composer autoload file.

Robo is a PHP task runner that fits this bill nicely. If you require codegyre/robo from your project’s composer.json file, then the Robo task runner will be available in vendor/bin, and it will load the Composer autoloader whenever it is used to run a script. It contains a very useful library of tasks for file and filesystem operations and more. For example, you could place the following snippet in your composer.json file:

    "require-dev": {
        "codegyre/robo": "^0.6.0"
     "scripts": {
        "robo": "robo --load-from $(pwd)/scripts/drupal",
        "pre-update-cmd": "composer robo version:dump",
        "post-update-cmd": "composer robo update:if-newer-version"

The “robo” line in the scripts section tells Composer what to do when composer robo is executed. This is a handy way to add macros to your project for a variety of purposes; we use it to provide a common entrypoint for the pre and post-update commands, so that the location of the RoboFile.php (the file that holds your Robo tasks) can be set in a single location. Robo uses a simple heuristic to convert php method names into Robo commands available to be used from the command line; version:dump and update:if-newer-version are defined by the php methods versionDump and updateIfNewerVersion, respectively.  

Here is an example Robo script that updates the Drupal scaffold files; this is functionally equivalent to the bash script, but the organization is better. We can also get the current Drupal version using \Drupal::VERSION, which is exposed to our scripts by virtue of the fact that Robo includes our project’s autoload file. This allows us to use any of the classes provided by any of our dependencies directly from our script. Using Composer dependencies from a standalone script is usually awkward; with Robo, it is easy.
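A sketch of what such a RoboFile.php might contain (the method bodies and the scaffold-fetch script path are hypothetical; the method names are the ones the composer.json snippet above maps to version:dump and update:if-newer-version):

```php
<?php

/**
 * Hypothetical RoboFile.php; Robo turns these methods into commands.
 */
class RoboFile extends \Robo\Tasks {

  /**
   * version:dump — record the current Drupal version before updating.
   */
  public function versionDump() {
    // \Drupal::VERSION is available because Robo loads our autoloader.
    file_put_contents('drupal-version.txt', \Drupal::VERSION);
  }

  /**
   * update:if-newer-version — refresh scaffold files only when core changed.
   */
  public function updateIfNewerVersion() {
    $previous = trim(file_get_contents('drupal-version.txt'));
    if (version_compare(\Drupal::VERSION, $previous, '>')) {
      // Fetch and copy the scaffold files (script path is hypothetical).
      $this->taskExec('sh ./scripts/update-scaffold.sh')->run();
    }
  }

}
```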

Packaging Our Update Scripts in a Composer Installer

Custom Robo scripts are a great technique to use when your update actions are very specific to your particular project. In instances where the actions are commonly needed by many different projects, then it makes sense to factor out the script into a custom Composer installer. The drupal-composer/drupal-scaffold project does exactly that; now, if you are using the recommended drupal-composer/drupal-project to manage your Drupal 8 site with Composer, all you need to do is require drupal-composer/drupal-scaffold in your composer.json file, and your Drupal scaffold files will be updated whenever the version of drupal/core changes. If you examine the implementation of drupal-composer/drupal-scaffold on GitHub, you will see that it also uses Robo internally to make the update. This project would therefore make a good example starting point for creating any similar plugins to update scaffold files for other sorts of projects.

There is currently an open pull request in drupal-composer/drupal-project that uses the drupal-scaffold project to set up and update the scaffold files on the branch for Drupal 8. This PR will probably be merged into the 8.x branch shortly, but you can take a sneak peek at it today. Examining the techniques it uses will add tools to your repertoire that you can use to solve other problems.


I am deeply grateful for the amazing amount of work that Florian Weber (webflo), Johannes Haseitl (derhasi) and David Barratt (davidbarratt) have done to create and maintain the suite of Drupal Composer tools and Drupal Packagist, which makes Composer-Managed Drupal sites possible.

Topics Development, Drupal Planet, Drupal
Nov 24 2015
Nov 24

If you are still using the same Nginx configuration that you have been for Drupal 7 on your new Drupal 8 install, most things will continue to work; however, depending on the exact directives and expressions you are using, you might notice a few operational problems here and there that cause some minor difficulties.  The good news is that the Drupal configuration recipe in the Nginx documentation has been updated to work with Drupal 8, so if you have a very basic Nginx setup, you can just grab that and you’ll be good to go.  If your configuration file is a little complicated, and you do not want to just start over, the advice presented below might be helpful when fixing up any defects you may have inadvertently inherited.

[Related] Hard Things are Possible: Configuration in Drupal 8 

Here are three signs that your Nginx configuration needs some fine-tuning to work with Drupal 8:

1. Can’t Run Update.php

In Drupal 8, the update.php script now displays an instructional dialog with a “Continue” button. Clicking “Continue” will bring the user to the URL update.php/selection, which runs the actual database update operation.

Expected Result:

Clicking the “Continue” button from the update.php script should run the update operation, and then display another dialog for the user.


Actual Result:

Clicking the “Continue” button from the update.php script will bring the user to the URL update.php/selection, but the user is presented with a “Page not found” error, and the update operation does not run.


The URL update.php/selection is a little unusual, being a little bit like a clean URL, and a little bit like a mixed php-script with query parameters, except that /selection is used instead of query parameters. Some of the Nginx configuration examples were not written with these patterns in mind, so some location directives will fail to match them.

Nginx Configuration Fix:

Confirm that your location directives are not written to require that the .php extension appear on the end of the URL.

Incorrect:

    location ~ \.php$ {

Suggested:

    location ~ \.php$|^/update.php {

Note: The suggested pattern is fairly strict, and only allows the unusual Drupal-8-style paths for the update.php front controller. Being very strict about which paths are matched allows us to continue routing paths such as blog/index.php/title through Drupal, for sites that may need to maintain legacy URLs from a previous implementation. If your site does not need to route URLs that contain .php, then you might prefer to use a laxer location rule, such as:

          location ~ \.php(/|$) {

The benefit of using a less restrictive rule is that you will not need to update your Nginx configuration in the future, should a new version of Drupal start using this style of URL with front controllers other than update.php.
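Putting the pieces together, a minimal location block for the strict variant might look like the sketch below. The fastcgi_pass socket path is an assumption (point it at your own PHP-FPM listener), and the fastcgi_split_path_info directive is explained in the next section.

```nginx
# Sketch only: strict PHP handling for Drupal 8. The socket path below is
# an assumption; adapt it to your own PHP-FPM setup.
location ~ '\.php$|^/update.php' {
    # Split the URL into the script name and any trailing path info.
    fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_pass unix:/var/run/php-fpm.sock;
}
```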

2. Can’t Install Modules from the Admin Interface

Drupal 7 introduced the feature that allows site administrators to install new modules from the Admin interface, simply by supplying the URL to a module download link, or by uploading a module using their web browser. This process relies on a script that determines whether the user has access rights to install modules. Under Drupal 8, this script is located at core/authorize.php; however, a bug in Drupal results in the URL core/authorize.php/core/authorize.php being used instead.

Expected Result:

When working correctly, installing a module through the admin interface will bring up a progress bar that installs the module onto the site.


Actual Result:

If Nginx is not configured correctly for Drupal 8, then instead of the progress dialog, the user will see an Ajax error dialog that reports that core/authorize.php/core/authorize.php could not be found.


When the URL core/authorize.php/core/authorize.php is accessed on an Apache web server, it will find the authorize.php script in the correct relative location once it processes the first part of the path; the second core/authorize.php is then passed into the script as the PATH_INFO, which Drupal ignores. Some recommended configuration settings will cause Nginx to attempt to process the entirety of core/authorize.php/core/authorize.php as a single route, which it will not be able to find, causing the error.

Nginx Configuration Fix:

Change the regular expression used to partition the SCRIPT_PATH and PATH_INFO to use a non-greedy wildcard match, so that only the portion of the URL up to the first authorize.php will be included in the SCRIPT_PATH.

Incorrect:

    fastcgi_split_path_info ^(.+\.php)(.*)$;

Suggested:

    fastcgi_split_path_info ^(.+?\.php)(|/.*)$;

By default, Nginx uses a greedy match.  +? is the non-greedy variant.
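You can see the difference between the greedy and non-greedy quantifiers outside of Nginx as well; for example, with a PCRE-capable grep (the -P flag is an assumption that requires GNU grep):

```shell
URL='/core/authorize.php/core/authorize.php'

# Greedy: .+ consumes as much as possible, so the match runs to the LAST .php
grep -oP '^.+\.php' <<< "$URL"    # /core/authorize.php/core/authorize.php

# Non-greedy: .+? consumes as little as possible, stopping at the FIRST .php
grep -oP '^.+?\.php' <<< "$URL"   # /core/authorize.php
```

Only the non-greedy form yields the actual script path, which is why the suggested directive matters for Drupal 8's authorize.php URLs.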

3. Some Navigation Elements are Missing CSS Styling

Drupal 8 uses Javascript to apply the class is-active to the navigation elements that correspond to the page currently being viewed. This allows themers to apply CSS styles to highlight the active menu items, and so on.

Expected Result:

The styling used will vary based on the theme being used. In Classy, the default theme, if you move the main navigation menu to a side bar, then the menu item link that corresponds to the current page should be colored black instead of blue.


Actual Result:

If your Nginx configuration is not correct, then the active menu item link will be styled exactly the same as all of the other menu items.


Drupal includes attributes in the navigation page elements that indicate which pages those elements should be considered to be active. Drupal’s Javascript in turn builds a selection expression based on the current path and query string from the current page URL. A misconfigured Nginx configuration file can cause the query string to be altered in such a way as to prevent the selection expression from matching the page elements that should have the is-active class added.

Nginx Configuration Fix:

Incorrect:

    location / {
        try_files $uri @rewrite;

Suggested:

    location / {
        try_files $uri /index.php?$query_string;

This is less likely to be encountered, because the configuration that works with Drupal 8 is the same as the one that is also recommended for Drupal 7. You will only have trouble if you use the older configuration that was recommended for Drupal 6, or some variant thereof. Of course, there is a very large variation in the kinds of configuration files that can be created with Nginx, and not all of these will look exactly like the examples shown above. Hopefully, though, these explanations will go a long way towards explaining how to correct the configuration directives you have, should you encounter any problems similar to these.


I am indebted to Damien Tournoud and Michelle Krejci, who were instrumental in analyzing these configuration issues. I would also like to thank Jingsheng Wang, who first published the update.php fix.

Topics Development, Drupal, Drupal Planet
Nov 24 2015
Nov 24
Much of the conversation in the Drupal 8 development cycle has focused on “NIH vs. PIE.” In Drupal 8 we have replaced a fear of anything “Not-Invented-Here” with an embrace of tools that were “Proudly-Invented-Elsewhere.” In practice, this switch means removing “drupalisms,” sections of code created for Drupal that are understood only by (some) Drupal developers. In their place, we have added external libraries or conventions used by a much wider group of people.
Nov 16 2015
Nov 16

Behavior-Driven Development is a widely-used testing methodology that is used to describe functional tests—that is, tests that operate on the whole of a system—in natural, readable language called Gherkin syntax. The goal of this methodology is to make the contents of the tests approachable to non-technical stakeholders. This makes it possible for a project’s functional tests to be meaningfully used as the acceptance criteria for the product.

Behavior-Driven Development for PHP projects is widely done using a tool called Behat, an extensible functional testing tool based on the Gherkin language. A Behat plug-in called the Behat Drupal Extension allows Behat to test Drupal functionality directly. For example, below is a Gherkin scenario that creates a node on a remote Drupal site:

  Scenario: Create a node
    Given I am logged in as a user with the "administrator" role
    When I am viewing an "article" content with the title "My article"
    Then I should see the heading "My article"

This capability has been available for a long time, but only for testing systems where Behat and the Drupal site under test are running on the same server. This is because the functionality of the Drupal Extension is provided through different drivers; each has different capabilities, and the best one for the job must be selected by the test implementor. Recent enhancements to these drivers have now made it possible to run these same tests even when the Drupal site under test is on a remote server.

This new functionality is still in development, and is not yet available in a stable release; however, the pull request has been merged, and these features can be previewed in the latest dev releases. The documentation page on the Drupal Extension Drivers has been updated on the master branch, and now looks as follows:

The items labeled Yes [*] are the newly-supported features. As you can see, in order for creation of nodes, vocabularies and taxonomy terms to work, you need to install a new component called the “Behat Drush Endpoint” on the site being tested. This is a simple Drush commandfile that provides a remote interface that the Drush Driver uses to create and remove these fixtures. See the installation instructions on the Behat Drush Endpoint project page for information on how to use it with your Drupal site. The easiest way to get started, though, is to use the example Drupal 7 Composer project, which has simple instructions for creating a new Drupal 7 site that is already set up with sample Behat scenarios using these new capabilities.

Testing on the exact remote servers that will be running your live site will increase your confidence that your deployments will go smoothly, so these new features should be a welcome and healthy step forward for many organizations.

Topics Development, Drupal Planet, Drupal
Nov 12 2015
Nov 12

With the 8.0.0 release of Drupal scheduled for November 19th, I wanted to share the Panopoly 2.x plan for Drupal 8 that David Snopek and I developed. For those who do not know, Panopoly is a powerful “base distribution” of Drupal designed to be both a general foundation for site building and a base framework upon which to build other Drupal distributions. Panopoly was developed in 2012 as a partnership with UC Berkeley (watch our presentation from Drupalcon Denver) but has been growing steadily as a community project ever since, with over 10,000 sites using Panopoly today!

In my opinion, one of the primary reasons for Panopoly’s growth and success is that it provides a much stronger first time & end user experience out of the box than Drupal 7 core. A number of essential modules are included and 50+ features & improvements are provided to make Drupal both easier to use and more powerful when you do.

I am happy to report that much of this functionality is now part of Drupal 8 core and will no longer need to be supported in Panopoly! Utility modules like Views, Link, Entity Reference, Navbar, Migrate, and Date are included in core. Helper modules like jQuery Update, UUID, File Entity, and Strongarm are no longer needed. UI modules like Backports, Save Draft, Simplified Menu Admin, and Date Popup Authored have their functionality included in core. WYSIWYG modules like WYSIWYG, WYSIWYG Filter, Linkit, Image Resize Filter, and Caption Filter are now part of Drupal 8 core’s WYSIWYG. In fact, most of the contrib modules bundled with Panopoly are already part of Drupal 8 core!

As a result, the primary work needed to release Panopoly 2.x for Drupal 8 is to re-implement the Panels magic that is the heart and soul of Panopoly: namely, the end-user, in-place editor for Panels known as the “Panels IPE”. As outlined in our Drupal.org plan, the following things are necessary to get this upgraded:

  • Panels Module + Panels IPE Module: As outlined in David Snopek’s post on Panels & Page Manager for Drupal 8, there has been a lot of progress updating these modules to Drupal 8. This work will need to be completed and the Panels IPE module specifically will need to be upgraded.

  • Creating New Blocks in the IPE: In Drupal 7, this was done by Fieldable Panels Panes. In Drupal 8, custom blocks are already fieldable entities so we just need to be able to create new instances of them from the IPE!

  • Panelizer: This functionality is necessary to take over rendering of nodes, provide defaults for the content type, and make per-node customizations; the underlying Panelizer module will need to be updated.

  • Radix Layouts: The responsive layouts that Panopoly uses are required for all the in-place editing in Drupal 8. Fortunately they have already been ported to Drupal 8 and just need to be bundled with the Panopoly 2.x release.

  • Panopoly Widgets: The ability to add individual widgets will depend a bit on external modules (like GMap Field and Table Field) upgrading to Drupal 8 themselves, but most of the widget functionality will already be available via Drupal core.

Beyond the Panels in-place editing, there are a number of secondary features (such as media integration, spotlight functionality, demo content, etc.) that will also need to be upgraded once Drupal community consensus is reached on the right way to do this in Drupal 8. Our goal with Panopoly 2.x is not 100% feature parity out of the box, but an iterative release process where we add functionality as appropriate. As with most open source development, the timing of the Panopoly 2.x releases will depend on the larger community, but with the thriving ecosystem of developers who regularly work with and extend Panopoly, here's hoping for something great. To follow along with our progress, watch this space: https://www.drupal.org/project/panopoly.

Topics Drupal Planet, Drupal
Nov 10 2015
Nov 10

Drush can do a lot of amazing things that escape most people's notice. For example, did you know that you can change your working directory to a local site by using a site alias? Just type:

    cdd @mysite

Intrigued? You can make Drush do this trick with just a little bit of setup. Drush has always offered a wide range of configuration options, but many Drush users never get deep enough into the documentation to take advantage of these capabilities. Now, though, it is really easy to get started. Drush 8.0.0-rc3 introduces a new command, core-init (aliased to simply init), to help you get started with a new setup. To run it, just type:

    drush init

This will print out a list of all of the things that Drush does to set up your configuration for you; one of the things that it does is update your Bash configuration file. It is important to remember, though, that the Bash shell only reads its configuration file when it first starts up. Therefore, after you run drush init, you should also reload your Bash configuration:

    source ~/.bashrc

You have to do this in every open terminal window, but it is only necessary to do this once; if you have too many terminal windows open, you might find it simpler to just restart. Either way, once your Bash configuration has been sourced, you will have a wide range of new shortcuts at your disposal.  For example:

  • ev ‘return node_load(3);’ : Run node_load(3) and print the result.
  • cc : Clear the cache on a Drupal 7 site.
  • cr : Rebuild the cache on a Drupal 8 site.
  • q 'show tables;' : Run an SQL query and print the result.
  • lsa : List all local site aliases.

You can also customize the shortcuts to your liking. Just open up the file ~/.drush/drush.bashrc, and follow the pattern of the aliases that already exist there. Additionally, there are even more customization options available in the file ~/.drush/drushrc.php. Take a look, and see what Drush can do for you!
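For example, a custom shortcut added to ~/.drush/drush.bashrc might look like the fragment below; the alias name dst is purely hypothetical, so pick whatever mnemonic suits you.

```shell
# Hypothetical custom shortcut, following the pattern of the generated
# aliases: typing "dst" runs "drush core-status".
alias dst='drush core-status'
```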

Topics Development, Drupal Planet, Drupal
Nov 06 2015
Nov 06

Composer has helped quite a lot in the area of installation and dependency management for Drupal and Drush; however, users who want to get a quick start with Drush are sometimes put off by the Composer-based install.  As of the Drush 8.0.0-rc3 release, it is now possible to install Drush using a phar. This requires fewer steps than the previous method.

[Related] Applying Updates to Drupal Core on Pantheon with Drush

First, navigate to the Drush project page at https://github.com/drush-ops/drush

Click on the “releases” link shown above, and take a look at the most recent release. It should look something like the diagram below:

Click on the “drush.phar” link to download it.  Once the download is complete, rename the file to drush, and move it to a folder that is in your PATH.  If you are not sure which folders are in your PATH, you can find out from the shell:

    echo $PATH

On most systems, you will see that there is a bin directory in your home directory available for this purpose. This folder might be named something like /Users/yourname/bin (MacOS) or /home/yourname/bin (Linux). If your web browser is set up to download files to the Downloads directory, then you can move and rename your Drush phar as follows:
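If you do not yet have a personal bin directory on your PATH, you can create one and add it for the current session; this sketch assumes a Bash shell, and you would append the export line to ~/.bashrc to make the change permanent:

```shell
# Create a personal bin directory and put it on the PATH for this session.
mkdir -p ~/bin
export PATH="$HOME/bin:$PATH"
```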

    mv ~/Downloads/drush.phar ~/bin/drush
    chmod +x ~/bin/drush

If you are unsure about the location of your Downloads folder, and the above command does not work for you, you can tell your web browser to reveal the Download file in the Finder.  You can then rename it to drush using the GUI, and drag it to the bin directory in your home folder.

That’s all there is to it--drush is now installed!  To test and see if it worked, type:

    drush version

You should see “Drush version: 8.0.0-rc3”, or something similar.  You are now ready to use Drush for all sorts of things!

Topics Development, Drupal Planet, Drupal
Oct 22 2015
Oct 22

Having a Dev/Test/Live workflow is indispensable to safe and convenient website development. If your workflow involves frequently refreshing your development database with the latest information available on the live site, then you may find that you must keep enabling the modules used only in the development site on every iteration. Various solutions exist in Drupal 7 to automate this process; for example, the drush sync-enable example file allows you to list modules in your site alias file that should be automatically enabled every time the database is copied to that particular site environment via the drush sql-sync command.

Drupal 8 complicates this scenario slightly, though. In Drupal 8, the enabled/disabled state of modules is now also stored in the configuration system. It is therefore also important to ensure that config-import does not enable or disable any modules that must remain installed or uninstalled in a specific environment. There is an additional concern at config-export time as well: in Drupal 8 site development, exported configuration files are usually committed to a version control repository. It would be undesirable if every configuration export from the development environment showed the development modules being enabled, and every configuration export from the live environment showed them being disabled. Ideally, we want to keep this noise out of the configuration exports, so that the only items that show up in the commit diff are the configuration changes intended for that commit.

The config-import and config-export commands in Drush 8 now support a new feature that make this easy.  The --skip-modules flag can be provided at either import or export time to specify the modules that should be ignored by the current operation. Using skip modules with config-import will prevent any module listed from being enabled or disabled during the import operation, regardless of any information about that module that may exist in the imported configuration. Similarly, using skip modules in config-export will cause the listed modules to keep whatever value they had in the existing export file, rather than taking on the current enabled or disabled state from the Drupal site.

To make this happen automatically, you can set a list of development modules to ignore during import and export via command-specific options in your drushrc.php file, as shown in the example below:

$command_specific['config-export']['skip-modules'] = array('devel');
$command_specific['config-import']['skip-modules'] = array('devel');

The best place to put this drushrc.php file is in a folder named drush just inside your Drupal root. This will ensure that all Drush operations that target this site will use these options, wherever they originate. Once you have set up your development modules to be ignored during config import and config export, you may use drush pm-enable to turn on your development modules in your development environment just once, and they will not be disabled again the next time you import your configuration files. You will still need a solution for enabling or disabling modules when you move the database between environments, such as the drush sync-enable example previously mentioned.

If you have other configuration values that you would like to vary by environment, you should use Drupal 8’s built-in configuration override system, which allows you to set specific configuration values either in your site’s settings.php file, or via configuration override hooks in a custom module. Instructions on how to do these things can be found in the configuration override system documentation.
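As a sketch, a per-environment override in settings.php looks like the snippet below; the configuration key shown (the site name) is just an illustrative example.

```php
// Per-environment override using Drupal 8's configuration override system.
// Values set in the global $config array take precedence over the stored
// configuration without ever being written back to the export files.
$config['system.site']['name'] = 'My Site (Development Copy)';
```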

Customizing each of your site’s environments is an important step in keeping your development workflow smooth and efficient. Now you have the tools you need to do it in Drupal 8 as well.

Topics Development, Drupal Planet, Drupal
Oct 19 2015
Oct 19

Composer has brought the PHP community a long way, making dependency management much more convenient. With the advent of Drupal 8.0.0-rc1, even more people are going to be using Composer indirectly, whether they realize it or not. This is because Drupal 8 uses Composer to manage its external libraries. Drush also uses Composer to manage dependencies, which means you can get spectacularly bad results, including some very difficult-to-diagnose crashes, unless all of Drush's dependencies are strictly compatible with all of Drupal's dependencies.

Managing compatible dependencies independently is extremely difficult—so much so that the moniker “dependency hell” has been coined to describe the situations that can arise when things get out of sync. The intricacies of this problem are quite involved, but the simplified conclusion is that the only safe solution is to make Drush a dependency of Drupal. We’ll examine how to do that below.

If you are using one of the standard project templates for using Composer to manage your Drupal 8 site, such as the recommended standard template, drupal-composer/drupal-project, you will see that Drush is already listed as a dependency in the provided composer.json file:

    "require": { 
        "composer/installers": "^1.0.20", 

        "drupal/core": "8.0.*", 
        "drush/drush": "dev-master", 
        "drupal/console": "dev-master" 

Drush has been part of this example project for a long time, and most of the other Drupal 8 Composer project templates are also following suit, so odds are good that you have already adopted this practice if you based your site on one of these projects. Most people, however, are probably downloading Drupal from drupal.org, or using Drush or the Drupal Console to get it. If you fall into that category of user, you can add Drush as a dependency of Drupal in two easy steps:

    cd /path/to/drupal8/root
    composer require drush/drush

That’s it.  It used to be a lot more complicated to start managing your Drupal dependencies with Composer, but now it really is this easy. Once you use composer require to add something to your Drupal site, though, you should be careful to thereafter use composer update rather than drush pm-update to keep your Drupal site up-to-date. If you do not heed this advice, you will overwrite your composer.json file and lose the extra dependencies you added.

If you have already tried your hand at using Composer to add Drush to your Drupal 8 site, you may have noticed that your site-local Drush was placed in the vendor/bin directory inside of your Drupal root. You may be wondering whether you should put this directory in your $PATH, or what you should do if you have more than one Drupal 8 site on the same computer. Fortunately, Drush has already anticipated this problem for you and solved it. All that you need to do is update your global Drush—the one that is on your $PATH—to Drush 8.0.0-rc1 or later. Then, any time you run a Drush command, the global Drush will notice that there is a site-local Drush, and will use it instead. So, to run a status command on your Drupal 8 site using the site-local Drush that you added to it, all you need to do is:

    cd /path/to/drupal8/root
    drush status

That’s it. If everything is working correctly, you should see that the line in the output labeled Drush script: points at a drush.php file that is inside your Drupal site’s vendor directory. You may now rest assured that future Drupal upgrades will not lead you into “dependency hell”.

Topics Development, Drupal Planet, Drupal
Sep 29 2015
Sep 29

With interest in Drupal 8 heating up, a lot of people are wondering whether they should be using Drush or the Drupal Console as their command line tool of choice. Drush is the longstanding heavyweight champion, widely used and supported in hundreds of contrib modules. Drupal Console sports the modern Symfony Console component, that provides a new object-oriented interface for command line tools, exposing the power of the Symfony framework to script developers. Which is better for Drupal 8? (Disclaimer: I am a co-maintainer of Drush, but the answer may surprise you!)

The answer is that both tools are indispensable to Drupal 8 developers and site builders. The article An Introduction to Drush and the Drupal Console provides an extremely compelling illustration of this by walking the user through the process of installing Drupal 8 and writing a simple module using both tools working together.  A helpful overview on installing these tools is also provided for those new to command line tools, but experienced Drush users new to Drupal 8 should get a lot out of the brief module creation walk-through.

Drush and the Drupal Console together bring a lot of power and capability to Drupal 8 development. Mastering both of these tools will streamline site building, maintenance, and module development. Using them both together will yield the best results.

Topics Development, Drupal Planet, Drupal
Jul 29 2015
Jul 29

(Picture of Ryu and Ken by FioreRose)

Michael Prasuhn recently sent out a tweet regarding Composer vs Drush Make:

I'm gonna go out on a limb here: composer has a long way to go to catch up with Drush make.

In the brief discussion that ensued on that thread, it was pointed out that Composer and Drush Make are fairly similar in terms of feature parity; however, there remain some differences between them, and the topic of the pros and cons thereof is more complicated than the 140-character limit of Twitter allows. I therefore decided to explain the current differences between these two options.

If you are not familiar with Composer yet, see the Composer Tools and Frameworks for Drupal presentation that Doug Dobrzyncski and I did for DrupalCon LA. It’s easy to get started quickly with Composer today with starter projects such as example-drupal7-circle-composer and example-drupal7-travis-composer—but is Composer right for your project? Let’s examine some of its strengths and weaknesses:

Recursive Dependency Management

Composer is, first and foremost, a dependency manager. Each dependency that a project requires can itself declare what its dependencies are; if multiple libraries require the same thing, but specify different sets of versions that they work with, Composer will analyze the whole dependency tree and either pick a compatible version, or explain which components do not work together. While Drush Make does allow for recursive make files, no analysis of the dependency tree is done.

Composer’s dependency manager is a point in favor of Composer for projects that need to make use of php libraries outside of the Drupal ecosystem of modules and themes. 

Generation of the Autoload File

One of the best features of Composer is the autoload.php file that it creates during every install or update operation. The autoload function allows php code to instantiate instances of classes without having to know where the source code for each class is located. Autoloading is a built-in feature of php, and is available to Drush Make users through the xautoload module; with Composer, it’s built in, and covers code that is not part of any Drupal module.

Composer’s handling of the autoload file is a benefit for projects that want to use object-oriented php code.

Recording the Exact Components Used in a Build

Composer has a file called composer.lock that records the exact version of each component that was selected in the most recent build. The components listed in the composer.json file can either be locked to a single version, or can be allowed to update when new versions are available. With Drush Make, in order to capture the exact version of each component used in the build, you must specify the exact version to use in the make file itself. What people usually do to update a make file is use drush pm-update, which itself has a locking function, if it is needed.

Update: Drush 7 added a feature to Drush Make that allows you to generate a locked makefile, with all versions resolved. With this feature, you need to generate your lock file as the first step, whenever you want to update your dependencies; thereafter, you simply build with the lock file as the source makefile instead of the original file.

So, Make has feature parity with Composer for this function, as long as you are aware of the correct workflow to use with Make. With Composer, the default behavior is to only update when specifically instructed, whereas Make will update every time unless you explicitly use the lock file feature.


The Perfect Website Launch
A guide for your next website project, from planning to deployment. Next time you launch a website, do it with confidence.


Patching and External Libraries

Composer’s support for patching and external libraries is provided by plugins, whereas this is standard functionality in Drush Make. Composer supports patch files via cweagans/composer-patches; see my previous blog post, Patching and the Composer Workflow, for more information on this custom installer. It works better, and has a much more compact representation of the patch file list, than the previous alternatives. Javascript libraries such as ckeditor can be managed with Composer using generalredneck/drupal-libraries-installer-plugin.
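As a sketch, declaring a patch with cweagans/composer-patches amounts to an entry in the extra section of composer.json; the module name and patch URL below are hypothetical placeholders.

```json
{
    "require": {
        "cweagans/composer-patches": "~1.0"
    },
    "extra": {
        "patches": {
            "drupal/some_module": {
                "Short description of the fix (hypothetical)": "https://www.drupal.org/files/issues/some_fix.patch"
            }
        }
    }
}
```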

Composer has feature parity with Drush Make for patching and external library use, but you have to know which plugins to use.


The biggest advantage that Drush Make has is its maturity. Since it has been in use for so long, is based directly on the releases repository maintained on drupal.org, and is itself used in the profile packaging system on drupal.org, you won’t need to think twice about the availability of any module that you want to use. Composer users have packagist.drupal-composer.org at their disposal, which contains Composer packages for all drupal.org projects with tagged releases. Minor road-bumps may be encountered here and there; some projects might not exist yet on drupal.org or packagist.drupal-composer.org, or a Drush extension might be mis-labeled as a Drupal module in packagist.drupal-composer.org. These sorts of problems are resolvable, but might cause a little extra head-scratching for new users. As Composer adoption increases, these situations will be found, reported, and fixed at a greater rate.

Drush Make is more mature than Composer, but Composer is still very usable today.

Score Card

So, which one should you use? It is going to depend on which factors are most important to your project. Here's the scorecard from our comparison:

Both are powerful tools that have all of the capabilities needed to get the job done; Composer is stronger in the area of modern features, while Drush Make currently has the most mindshare among projects within the Drupal ecosystem.


Drush Make has served a lot of projects well for a long time, and there certainly is no need to switch maintenance projects over to Composer. Composer is a mature and standard tool in the broader PHP community, though, and the Drupal community would be well served by adopting it as well over time. Composer has achieved feature parity with Drush Make, and is more modern, more standard, and more convenient. Projects that are under active development using object-oriented code or external PHP libraries would be well served by a switch to Composer today.

Topics Drupal Planet, Drupal
Jul 20 2015
Jul 20
Patch Factory

Let’s face it—nobody likes to rely on patches. A project is much easier to maintain when it is using standard versions of all of its dependencies. Without patches, a project can be kept up to date simply by running composer update.

Unfortunately, when working with Open Source software, it sometimes comes to pass that a project needs to apply a critical bug fix that the maintainer of the affected dependency has not yet rolled into a stable release. In this instance, it becomes necessary to apply a patch. When necessity demands patching, it is important to choose a good patch manager, to keep the process as convenient as possible for everyone involved.

There are a number of custom composer installers that support a patch workflow to choose from. The best of these is cweagans/composer-patches. Instructions on using this project are available in its GitHub repository.

Build Consistency is Maintained

Composer patch managers work by using a Composer post-package-install hook to alter the project code after Composer installs it. The problem with this hook is that it only runs when a project is installed. One of the main advantages of the cweagans patch manager is that it will force a project’s patches to be re-applied whenever the patch information changes, even if there has been no change to the component being patched.

This behavior is particularly important in a multi-developer environment. If one developer adds or changes a patch URL, and commits the change to the source code repository, other team members can rely on the fact that composer install will produce consistent results after they update their sources. With other patch managers, you run the risk of missing patch updates if you do not do a clean build every time you pull code.

Patches are Handled Recursively

For those who are maintaining installation profiles that are used in other projects, cweagans/composer-patches provides the added benefit of collecting patches from all dependencies, as if they were listed in the root composer.json file. This allows the installation profile to maintain the patches needed for its modules in its composer.json, in the same way that patches can be recursively specified in Drush make files.
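The patch list itself lives in the extra section of composer.json. A sketch of the shape it takes (the module name, description, and patch URL below are hypothetical):

```json
{
  "extra": {
    "patches": {
      "drupal/some_module": {
        "Fix the hypothetical foo bug": "https://www.drupal.org/files/issues/some_module-foo-fix.patch"
      }
    }
  }
}
```

Because patches are collected recursively, an installation profile can carry a block like this in its own composer.json, and projects that depend on the profile will pick the patches up automatically.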

A Record of Patches Applied is Kept

Finally, cweagans/composer-patches also writes a PATCHES.txt file in each project that it modifies, just like Drush make does. This bit of standardization is good practice to help those inspecting the build results to know how each dependency has been modified, if in fact it has been. Other patch managers do not provide this.

These benefits make the cweagans/composer-patches patch manager much more convenient to use than any of the alternatives. If you are using a patch manager from any other source, you should switch to cweagans/composer-patches; it is now recommended as the standard patch manager by the Drupal Composer working group. For more information, see the page “Composer in relation to Drush Make” on drupal.org.

Topics Development, Drupal Planet, Drupal
Jun 30 2015
Jun 30

A robust Continuous Integration system with good test coverage is the best way to ensure that your project remains maintainable; it is also a great opportunity to enhance your development workflow with Composer. Composer is a dependency management system that collects and organizes all of the software that your project needs in order to run. 

Using Composer to manage the modules and themes used in your Drupal site offers many benefits. Your project Git repository remains very light, containing only the files customized for your project, and your autoload file is managed for you, making it easy to start using code written with generic php classes, either custom to your project, or provided via Packagist. In order to take advantage of these capabilities of Composer, though, it is necessary to add a build step to your development workflow. Composer will download all of the components needed for your project and generate your autoload file when you run composer install or composer update. However, it requires a bit of planning to figure out how this step should integrate into your existing processes. Which components should be committed to Git, and which should be ignored? How does this fit into the Dev / Test / Live workflow?

The Travis Continuous Integration service offers a framework wherein these questions can be resolved. Travis is just one of many services that provides the capability to automatically rebuild and test your software every time a change is committed to your repository. Travis is popular because it is easy to configure via a .travis.yml file committed to the root of your repository, and it is free to test public GitHub repositories. Getting started with Composer, Behat and Drupal using Travis CI is now easier than ever—just enter the following composer command:

composer create-project pantheon-systems/example-drupal7-travis-composer my-project

This will clone the example-drupal7-travis-composer project from our GitHub repository, set up your root project files, and commit it all to a local Git repository. From there, all you need to do is customize these files to suit your Drupal project.
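Travis reads its build instructions from the .travis.yml file at the root of the repository, and the example project ships with one already. A minimal sketch of the shape such a file takes (the PHP version and script paths here are illustrative, not copied from the example project):

```yaml
language: php
php:
  - 5.5
install:
  - composer install
script:
  - bin/behat --config=behat.yml
```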

Existing Drupal sites can be converted with the help of the Drush composer-generate command, which can quickly populate the requires section of your composer.json file. The rest of the steps are clearly documented in the pantheon-systems/travis-scripts project.

Once you follow these instructions, you’ll have a Drupal project that:

  • Does not store any core or contrib project files in your repository

  • Builds via composer install in Travis CI on every commit

  • Tests via Behat after every successful build

  • Pushes to a prepared Pantheon environment after every successful test

The last step is completely optional; we’ll be covering this in a future blog post. The rest of the steps will work fine, even if you haven’t set up a free Pantheon for Agencies account. If you are on Pantheon, though, your site will be ready to be deployed to the Test and Live environments with the click of a button, as soon as it is ready.
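As for what to commit: a project built this way typically ignores everything that the build step fetches. A sketch of such a .gitignore (the exact paths depend on your project layout):

```
# Fetched by `composer install`; never committed
/vendor/
# The assembled Drupal docroot, rebuilt on every CI run
/htdocs/
```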

If you are not very familiar with the concept of using Composer to manage your Drupal projects, see the Composer Tools and Frameworks presentation I did with Doug Dobrzynski at DrupalCon LA.

Topics Development, Drupal Planet, Drupal
Jun 04 2015
Jun 04
Configuration Workflow with Drush config-merge

Managing configuration is an extremely important part of any multi-person website project, but in many cases, this area of the project does not receive as much attention as it deserves. The tools that have been available for Drupal 7 do not provide complete coverage of all configuration settings, leading to inconsistencies in configuration handling and inconvenient workarounds. This state of affairs has led to configuration management becoming a real thorn in the side for many projects. Parallel development of different features often results in conflicts between the commits; without an established configuration workflow that allows for merging, many projects suffer from slowdown, either due to explicit policies that limit when and where configuration changes can be made, or the necessity of re-implementing configuration changes in code, or simply due to the awkwardness of a manual merge process that may involve re-doing configuration changes in the admin UI.

With the advent of Drupal 8, much more powerful tools promise to greatly improve this situation. The new configuration management system provides complete and consistent import and export of all configuration settings, and Git already provides facilities for managing parallel work on different branches. When conflicts occur, it is easy—or at least, it is possible—to back out the conflicting changes, to take just the version provided in the central repository, or, if neither of these alternatives is appropriate, to examine and manually resolve each difference with a three-way merge tool such as kdiff3. A new Drush project, config-extra, includes a config-merge command that streamlines the use of these tools.




Powerful tools can sometimes be difficult to learn and set up, though, and even with tools, a project is going to need to make a plan to manage the configuration workflow. To help projects get started in this area, Pantheon has set up a public repository on GitHub called Drush Config Workflow. This repository contains documentation on a couple of different configuration workflows that can be used during different phases of a project.

The Git configuration workflow describes how to use drush config-merge to export your configuration changes, commit them to Git, push them to the central repository, pull the same changes locally, and then merge them with your local development site’s configuration. All of this is done for you, in a single command.

The rsync configuration workflow allows you to use a similar workflow in situations where you cannot make commits on the remote Drupal site. In these instances, drush config-merge will export changes to a temporary directory, and then rsync them to the local system, where they are committed to a temporary branch in git, and then merged with the local configuration changes.

Additionally, the three-way merge page in this repository describes what to do when the config-merge tool encounters a conflict, and brings up a three-way merge tool such as kdiff3. This tool can considerably reduce the time needed to comprehend and resolve merge conflicts, so it is well worth learning.
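Under the hood, a three-way merge only needs the common ancestor plus the two divergent copies. The mechanics can be illustrated with git merge-file, using three tiny hypothetical config exports (config-merge itself drives kdiff3 or a similar tool, but the underlying operation is the same):

```shell
# Common ancestor: the configuration before either site changed it
printf 'site_name: Old Site\npage_cache: false\npreprocess_css: true\nslogan: Hello\n' > base.yml
# "Ours": the local site renamed the site
printf 'site_name: New Site\npage_cache: false\npreprocess_css: true\nslogan: Hello\n' > ours.yml
# "Theirs": the remote site changed the slogan
printf 'site_name: Old Site\npage_cache: false\npreprocess_css: true\nslogan: Howdy\n' > theirs.yml

# Merge theirs' changes into ours, using base as the ancestor;
# -p prints the result to stdout instead of rewriting ours.yml
git merge-file -p ours.yml base.yml theirs.yml
```

Because the two edits touch different settings, the merge completes cleanly and the output carries both changes; if both sides had edited the same line, you would get conflict markers instead, which is exactly when a tool like kdiff3 earns its keep.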

If you would like to try out any of the example scenarios presented in this repository, there is also a handy installation script provided that will quickly set up a local environment for you to use. It can either clone a Pantheon site locally, or create both sites locally. Instructions on how to use the script are detailed on the INSTALL page.

Block out some time, and take the config-merge tool for a spin. Once you get up and running with Drupal 8 development, you might just find that it’s the best time-saver since sql-sync.

Ready to develop sites with faster performance, automated workflows and more? Learn how Drupal 8 reaches its full potential on Pantheon.

Topics Development, Drupal Planet, Drupal
May 20 2015
May 20
Drush 8

DrupalCon LA was a complete blast. In addition to exciting announcements, amazing sessions and tons of fun side events, there were also a lot of super productive sprints that really pushed Drupal development forward. In the Drush sprints, the remaining issues tagged for Drush 7 were all reviewed, and either resolved or deferred, culminating in the release of Drush 7.0 Stable today. During the twenty-month development cycle of this release, most of the bug fixes and new features not related to Drupal 8 were backported to the Drush 6 branch. Users of Drush 6 therefore will not notice a huge difference with the release. This release is momentous all the same, however, and introduces some important changes that all Drush users should be aware of, particularly those who are using, or intend to use, Drupal 8.

Drush 7 No Longer Supports Drupal 8

The most important change to be aware of is that support for Drupal 8 has now been dropped in Drush 7. The “master” branch in Drush’s git repository was relabeled “Drush 8” in Drush’s composer.json file, and a new “7.x” branch has been created for continuing development on Drush 7 releases. As was previously explained in the article What Version of Drush Works with Drupal 8?, since Drupal 8 is not yet stable, it is not possible for a stable release of Drush to support future versions of Drupal 8, as changes made during its development regularly require corresponding changes inside Drush. So, moving forward, Drush 7.0-rc2 will continue to support Drupal 8.0-beta10, but Drush 8 will be required for the next Drupal 8 release. If you try to use Drush 7.0 stable with Drupal 8, you will receive an error message.

Upgrade Instructions for Drupal 8 Users

So, what is a Drupal 8 user supposed to do? The solution is not to keep using Drush 7.0-rc2 forever! This may or may not work with Drupal 8.0-beta11, but sooner or later, you will start to encounter failures. The only solution is to upgrade to Drush 8. How you do that depends on how you installed Drush.

Git Users

If you checked out the master branch of Drush from git, then your upgrade path is very easy — simply run git pull, and you will find that Drush is now reporting that it is version 8.x-dev instead of 7.x-dev.  Remember, when using git to fetch Drush, you need to run composer install from Drush’s root directory immediately after you clone it, and again any time the composer.lock file changes.

Composer Users—Drush Required in Drupal’s composer.json

Composer users who include a site-local Drush in the composer.json for their Drupal site, as recommended by drupal-composer/drupal-project, were formerly advised to require Drush 7, as follows:

   "require": {
     "composer/installers": "^1.0.20",
     "drupal/core": "8.0.*",
     "drush/drush": "7.0.x-dev",
     "drupal/devel": "8.1.*@dev",
     "drupal/token": "8.1.*@dev"
   }
Now, the highlighted line should be updated to read "drush/drush": "8.*".

This change has been made in the drupal-project GitHub repository; you’ll need to make a similar change in your Drupal 8 composer.json file. Note that this only applies to users who are building their Drupal 8 site with Composer; if you are downloading and installing Drupal 8 without Composer, you do not need to modify Drupal’s composer.json file.

Composer Users Downloading Stable Versions of Drush

If you downloaded Drush following the instructions in the Drush Installation Instructions, via composer global require drush/drush:7.*, you can upgrade with Composer by typing:

    composer global require drush/drush:8.*

If you installed by some other means, you should first remove your older Drush, or simply take it off of your global $PATH, before installing via Composer. See the Drush Installation instructions for information on how to manage your $PATH.
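For instance, Composer’s global installs land in a per-user bin directory, and that directory must be on your $PATH for the drush command to resolve. A sketch for a POSIX shell (~/.composer/vendor/bin is Composer’s default global bin location; yours may be configured differently):

```shell
# Put Composer's global bin directory ahead of any older drush on $PATH
export PATH="$HOME/.composer/vendor/bin:$PATH"
# The shell will now pick up the Composer-installed drush first
echo "$PATH"
```

Add the export line to your shell profile (e.g. ~/.bashrc) so it persists across sessions.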

Pantheon Users

If you are using Drupal 8 remotely on Pantheon, then you are already covered.  The version of Drush installed on Pantheon is compatible with the current version of Drupal 8, and Drush will be upgraded to version 8 when the next version of Drupal 8 is released.  On your local system, Drush 6 and 7 will have no trouble making remote calls to Drush 8, so there is nothing that you need to do if you have no local copies of Drupal 8 installed.

If you have not tried Drupal 8 yet, spinning up a copy on a free Pantheon development site is a super easy way to get started.  You can set that up by visiting the following URL:


If you are using Drupal 8 remotely on some other ISV that supports it, it should also still work; if you have problems, though, open a support ticket with your provider and include the URL to this blog post.

Upgrade Planning for Drupal 7 Users

Should Drupal 7 users upgrade to Drush 7? Absolutely! Drush 7 manages its dependencies with Composer, and uses a modernized bootstrap process designed to be more flexible, making it easier to integrate with new systems such as Drupal 8 and Backdrop. While most users will not be able to see, feel or taste these changes, it is still important to plan an upgrade to Drush 7 at some point in the not-too-distant future. Now that Drush 7 is the recommended stable release, fewer features and bug fixes will be backported to the Drush 6 branch. In order to keep up with future development, you will want to move to the Drush 7 branch.

All in all, you should find that the Drush 7 API and commandline options are very similar to what existed in Drush 6, so most users should find the upgrade process fairly painless. If you need help, post a question on Drupal Stack Exchange; if you find any bugs, they can be reported in the Drush Issue Queue.

Topics Development, Drupal Planet, Drupal
May 01 2015
May 01

One thing that makes Drush site aliases so powerful is the fact that they are defined in regular PHP executable files.  This gives the user many options, as the alias definitions can easily be adjusted in code, factoring out common sections or making other adjustments directly in the alias file.  There are a number of reasons why one might wish to separate this code from the alias data, though.

  1. Some hosting platforms, such as Pantheon, will automatically generate Drush site alias files for every site available to you.  In these instances, it is preferable to not edit the generated file, so that the local copy may be easily updated with a pristine copy any time the set of available sites changes.

  2. Future versions of Drush will be moving away from using PHP executable files for site aliases, and will instead encourage the use of YML files for alias definitions.  This eliminates the concern that a shared alias file might contain code that causes unpredictable side effects.

  3. Separating code from data is simply a best practice.

Fortunately, Drush already provides a hook that allows you to do this.  The mechanism for altering Drush behavior is to use a policy file.  A policy file is a regular Drush command file that contains hook implementations rather than Drush commands.  Of course, any command file may provide both commands and hooks, but these behaviors are usually separated just for the sake of organization.  To get started, create a file called policy.drush.inc, and place it in the .drush folder of your home directory.  There is an example policy file in Drush’s examples folder that you can use to get started, if you wish.

For our example, we will write a policy file that changes all remote aliases to use drush7 instead of drush (that is, the default version of Drush), but only if the target is the Pantheon platform.  Our hook_drush_sitealias_alter function looks like this:

function policy_drush_sitealias_alter(&$alias_record) {
  // Fix pantheon aliases!
  if (isset($alias_record['remote-host']) &&
      (substr($alias_record['remote-host'], 0, 10) == 'appserver.')) {
    $alias_record['path-aliases']['%drush-script'] = 'drush7';
  }
}

The sitealias alter hook takes a reference to one alias record, which it may adjust as desired.  We start out by testing to see if the record has a remote-host element that points to a server on the Pantheon platform.  If so, we set %drush-script to drush7.  Drush uses the %drush-script element to determine which program to use when running a remote Drush command for that alias; Pantheon has set up their environment so that there is a program called drush7 available that points to Drush version 7.  With this policy file in place, you will be able to use the latest version of Drush on Pantheon:

    $ drush @pantheon.my-great-site.dev version
    Drush Version   :  7.0.0-rc1

Following the pattern of this example alter hook, you can make any adjustments needed to your alias files on the fly.  If you are currently embedding code inside the alias file directly, it would be a good idea to convert it to a policy file, so you are ready for the shift to YML when it comes.

Topics Development, Drupal Planet, Drupal
Apr 08 2015
Apr 08

$ drush topic

Choose a topic

 [0]    :  Cancel
 [1]    :  All global options (core-global-options)
 [2]    :  Bashrc customization examples for Drush. (docs-bashrc)
 [3]    :  Bastion server configuration: remotely operate on a Drupal sites behind a firewall. (docs-bastion)
 [4]    :  Bootstrap explanation: how Drush starts up and prepares the Drupal environment for use with the command. (docs-bootstrap)
 [5]    :  Configuration overview with examples from example.drushrc.php.
 [6]    :  Contexts overview explaining how Drush manages command line options and configuration file settings. (docs-context)
 [7]    :  Crontab instructions for running your Drupal cron tasks via `drush cron`. (docs-cron)
 [8]    :  Drush API (docs-api)
 [9]    :  Drush command instructions on creating your own Drush commands.
 [10]   :  Drush Make example makefile (docs-make-example)
 [11]   :  Drush Make overview with examples (docs-make)
 [12]   :  Error code list containing all identifiers used with drush_set_error. (docs-errorcodes)
 [13]   :  Example Drush command file. (docs-examplecommand)
 [14]   :  Example Drush commandfile that extends sql-sync to allow transfer of the sql dump file via http rather than ssh and rsync.
 [15]   :  Example Drush commandfile that extends sql-sync to enable development modules in the post-sync hook.
 [16]   :  Example Drush script. (docs-examplescript)
 [17]   :  Example policy file. (docs-policy)
 [18]   :  git bisect and Drush may be used together to find the commit an error was introduced in. (docs-bisect)
 [19]   :  Output formatting options selection and use. (docs-output-formats)
 [20]   :  php.ini or drush.ini configuration to set PHP values for use with Drush. (docs-ini-files)
 [21]   :  README.md (docs-readme)
 [22]   :  Shell alias overview on creating your own aliases for commonly used Drush commands. (docs-shell-aliases)
 [23]   :  Shell script overview on writing simple sequences of Drush statements. (docs-scripts)
 [24]   :  Site aliases overview on creating your own aliases for commonly used Drupal sites with examples from example.aliases.drushrc.php.
 [25]   :  Strict option handling, and how commands that use it differ from regular Drush commands. (docs-strict-options)

Mar 02 2015
Mar 02
I hate double-escaped output!

It has long been understood that failing to escape user-generated content in a web application can lead to extremely serious security vulnerabilities. Unfortunately, even though the techniques for preventing these problems are widely known, it is still common for web developers to occasionally fail to fully employ the necessary precautions. These omissions can be extremely difficult to notice by casual inspection.


How does this work? Imagine that you have some Drupal code, and you would like to display some lovely markup text, like this:


$markup_text = "<b>markup</b>";


If you just pass this text straight through without any processing, the browser will interpret the markup and display the result. The table below shows how this looks internally, and what the result is in the browser:

Internal Representation    Rendered Result
<b>markup</b>              markup (rendered in bold)
If, on the other hand, you have some data that was taken from user input, then you need to filter the result before displaying it.

$safe_text = check_plain("Joe's page");



Internal Representation    Rendered Result
Joe&#039;s page            Joe’s page


Don’t do it twice, though:


$double_escaped_text = check_plain($safe_text); 


Internal Representation    Rendered Result
Joe&amp;#039;s page        Joe&#039;s page
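The progression above is easy to reproduce outside of Drupal. As a toy stand-in for check_plain() (check_plain() also validates UTF-8, but the escaping is the point here), a small sed function shows how escaping twice mangles the output:

```shell
# Minimal HTML escaper: '&' must be escaped first, or we would
# re-escape the entities we just produced
escape() { sed -e 's/&/\&amp;/g' -e "s/'/\&#039;/g" -e 's/</\&lt;/g' -e 's/>/\&gt;/g'; }

printf "Joe's page" | escape            # → Joe&#039;s page   (correct)
printf "Joe's page" | escape | escape   # → Joe&amp;#039;s page (double-escaped)
```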

Although the examples above are fairly obvious, once you start working with a large body of code with many APIs that have different rules about whether their input values should be raw text or escaped text, it becomes much more likely that someone might use some unsafe text in the wrong place.

Because of this, in Drupal 8, strong steps are being taken to make it easier for web developers to produce output that is safe by default. The basic premise is that Drupal is now going to be much more aggressive about automatically escaping generated output. This policy is very beneficial from the standpoint of security, but it still requires that the developer take extra care when handling output, as it is now much more likely that coding mistakes will lead to ugly, malformed output. This is far preferable to producing vulnerable, unescaped output, but it is still highly undesirable, and care must be taken to avoid it.


The good news is that in Drupal 8, if you stick to best practices and always use the APIs when producing markup, then your output will be correct every time. Most of the familiar functions from Drupal 7, such as t(), and drupal_render(), are available, and work very similarly to the way that they always have. For example, to build an HTML unordered list with two elements, you could use an item_list with drupal_render, as shown below:


     $raw_list = array($raw_user_data_1, $raw_user_data_2);
     $list_render_array = array(
       '#theme' => 'item_list',
       '#items' => $raw_list,
     );
     $safe_list_html = drupal_render($list_render_array);


The rendered result is an unordered list:

  • User data 1

  • User data 2

As you can see, the Drupal API for building and using render arrays is really quite simple and easy to use. The result is clean, safe markup that you can return from a routing controller function (just wrap it in a ‘#markup’ item), or use in another context where safe markup is needed. For example, if you needed to include the list above in a message displayed to the user, you can pass it as a parameter to the translation function, t(), and use the result of that to display the message via drupal_set_message() — just as in Drupal 7.

     $raw_title = "Joe's \"special\" page";
     drupal_set_message($this->t('Let me tell you a few things about @title: !list',
           array('@title' => $raw_title, '!list' => $safe_list_html)), 'warning');



The t() function in Drupal still provides replacement patterns that start with either “@” or “%” for raw variables that need to be escaped, or “!” for safe content that has already been escaped.


The times that you need to be especially careful are those instances where you attempt to bypass the default mechanism and inject your own unescaped content into the output stream. The harder you fight against the template engine, the more likely it becomes that you might manage to put an invisible XSS vulnerability into your site. Fortunately, in Drupal 8, double-escaped output is the more common result, but the moral is the same: work with the APIs, not against them. Let’s examine one common thing that can go wrong in the example below.


     $raw_title = "Joe's \"special\" page";
     $trust_me_this_is_safe = 'abcd-1234';
     drupal_set_message($this->t('The identifier for @title is !id',
           array('@title' => $raw_title, '!id' => $trust_me_this_is_safe)), 'warning');



What just happened here? In the previous example, ‘@title’ was correctly rendered, but now it is double-escaped—even though nothing changed with respect to the handling of that element! The thing that is different in this example is that the contents of the second replacement placeholder !id was never passed through any Drupal text filtering or rendering function. In the case of !list, we provided data that was produced by a Drupal API function, drupal_render(). When we used !id in the t() method, above, however, we ignored the Drupal API functions completely; we just sort of knew that our variable contents were safe, so we used !id under the assumption that nothing further was necessary.

However, whatever inner assurances made us think that it was okay to skip output escaping failed to convince Drupal that these variable contents were free from any potentially problematic characters. In Drupal 8, any API that filters or escapes text explicitly marks the resulting string as being safe to output. In the case of the t() function, whether or not its output will be marked safe depends on the types of substitution variables used, and the source variables provided to replace them with. If all of the replacement placeholders are escaped (beginning with ‘@’ or ‘%’), and if all of the unescaped placeholders (beginning with ‘!’) are used with variables that contain text that has already been marked safe, then the resulting text produced will also be marked safe.

Because of this, when the contents of the string passed to drupal_set_message is finally rendered in the page content, Drupal will see that it has not been marked as safe, and will escape the entire string. This is why the title comes out double-escaped—once by the explicit escaping of ‘@title’ in the t() function, and one more time at the end, when the template engine catches the error and escapes the whole string again.


The best way to fix this particular situation, where the replacement value is believed to be free of any characters that need escaping, is to simply escape the value anyway, by using ‘@id’ instead of ‘!id’ in the t() method. We believe that this string is free of any character combination that may cause problems, but using @id ensures that this is the case, so the t() function will mark the resulting string safe. This produces the correct output again, as we can see in the diagram below.


     $raw_title = "Joe's \"special\" page";
     $raw_id = 'abcd-1234';
     drupal_set_message($this->t('The identifier for @title is @id',
           array('@title' => $raw_title, '@id' => $raw_id)), 'notice');


It is helpful in Drupal 8 to stop thinking of !id as meaning “unescaped”, and to start thinking of it as “already safe”. In other words, if you use unsafe content in a context where it is supposed to be “already safe”, then Drupal will notice this mistake, and you will likely get the wrong result. The shift in thinking is to remove all of the places in the code where you are asking the API to “trust you” (e.g. with ‘!id’), and instead use the appropriate APIs to filter your output. Always pass strings through t(), use the @ and % markers for any content not already filtered by an API function, and only insert HTML markup into your output via the drupal_render() function.

Furthermore, it is best to avoid placing HTML markup directly into your render arrays using string literals, and instead ensure that all markup is contained in a Twig template. Most of the ordinary constructs you will need are already available in Drupal’s standard theme functions. A long list of the available theme templates can be found at the bottom of the documentation page Theme system overview. If you need to insert some custom markup into your output, you can create your own Twig template. For a run-down on how this is done, see the article Generating Safe Markup in Drupal 8, by Jonathan Patrick.
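As a sketch of what a custom Twig template might look like (the template and variable names here are made up for illustration), note that Twig auto-escapes every {{ }} substitution, so raw user data can be passed in safely:

```twig
{# fancy-list.html.twig — hypothetical custom template #}
<ul class="fancy-list">
  {% for item in items %}
    <li>{{ item }}</li>
  {% endfor %}
</ul>
```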

Finally, it is important to realize that using high-quality test data is essential to verifying the correctness of your code. Double-escaped output cannot be detected when the input string contains no special characters. If we had used $raw_title = “Ordinary stuff”; in the example above, then the incorrect code would have produced output indistinguishable from the output produced by the correct code. Always use test values such as “A & B”, “My <b>inappropriately bold</b> example”, and similar strings, so that double-escaped output will be immediately apparent as soon as it happens. This practice will go a long way towards ensuring that subtle errors do not creep into your code.
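To make the escaping behavior concrete, here is a minimal standalone sketch of how the three placeholder markers behave. This is a simplified model for illustration only; the real substitution logic lives in Drupal's format_string() / FormattableMarkup, and the sketch_t() function here is hypothetical:

```php
<?php
// Simplified sketch of Drupal's placeholder substitution (hypothetical
// sketch_t(); the real logic is in format_string() and FormattableMarkup).
// Assumes an HTML output context.
function sketch_t($string, array $args = array()) {
  foreach ($args as $key => $value) {
    switch ($key[0]) {
      case '@':
        // Escaped: safe for untrusted input, and escaped exactly once.
        $args[$key] = htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
        break;

      case '%':
        // Escaped, then wrapped for emphasis in themed output.
        $args[$key] = '<em class="placeholder">'
          . htmlspecialchars($value, ENT_QUOTES, 'UTF-8') . '</em>';
        break;

      case '!':
        // Inserted verbatim: the caller asserts the value is already safe.
        break;
    }
  }
  return strtr($string, $args);
}

echo sketch_t('The identifier for @title is @id', array(
  '@title' => 'My <b>inappropriately bold</b> example',
  '@id' => 'abcd-1234',
));
// → The identifier for My &lt;b&gt;inappropriately bold&lt;/b&gt; example is abcd-1234
```

Note how the test value with markup in it makes the single, correct escaping pass immediately visible; with a plain string like “Ordinary stuff”, correct and double-escaped output would look identical.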

Topics Development, Drupal Planet, Drupal
Feb 18 2015

UPDATED 25 November 2015: Drupal 8.0.0 and Drush 8.0.0 stable have been released!  Drush 8 works with Drupal 6, 7 and 8, and is the recommended version to use regardless of which version of Drupal you are using. See the Drush Installation Instructions for the latest information on supported Drupal versions for each Drush version.

The post below was written prior to the release of Drush 7.0.0. The information below is still more or less correct, although newer releases are now available.

For quite a while now, it has been common knowledge that stable versions of Drush do not support Drupal 8.  The reason for this is quite simple, when you think about it. Drupal 8 itself does not have a stable release yet. At this stage in its development, changes to Drupal core that break backwards compatibility—in particular, backwards compatibility with existing versions of Drush—are common.

Because Drupal 8 isn't yet stable, it would not be very meaningful to claim that a stable Drush release supported Drupal 8. It would only be a matter of time before that claim became untrue.  The solution, then, is obvious enough; when you want to use the development (unstable) branch of Drupal, you must also use the development branch of Drush.

Development on Drush happens on the master branch of Drush’s GitHub repository; currently, this development is working towards the Drush 7 release. The Drush 7 Release Planning issue on GitHub is tracking progress on this issue. You might want to follow along here, because things are going to change once a stable release of Drush 7 is available. A summary of what is going to happen vis-a-vis the GitHub branch used for Drush development is visually represented in the table below:


                          Right Now            After the Drush 7 Stable Release

    Drush 6 Development   Drupal 7 + 6         Phase out (Drupal 7 + 6)

    Drush 7 Development   Drupal 8 + 7 + 6     Drupal 7 + 6

    Drush 8 Development   Does not exist!      Drupal 8 + 7 + 6

(Each cell lists the Drupal versions supported.)

So, clearly enough, the Drush master branch will continue to be the place to get the version of Drush that supports Drupal 8. After the Drush 7 stable release comes out, folks are likely going to start calling the master branch “Drush 8.” This might cause some confusion for those who have it set in their heads that you need to use “Drush 7” with Drupal 8.  If you keep using the dev-master branch with composer install, though, then there will not be any trouble at all. Just keep using the development branch of Drush, and it will continue to work with Drupal 8.  Instructions on how to use the Drush dev-master branch via Composer are available in the Drush installation documentation.
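For reference, pulling in the development branch via Composer can look like the following sketch; defer to the Drush installation documentation for the currently recommended invocation:

```shell
# Install the master (development) branch of Drush globally with Composer.
# Sketch only -- check the Drush installation docs for the current command.
composer global require drush/drush:dev-master
drush --version
```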

The Drush 7 branch will not necessarily be created immediately upon release of Drush 7.0.0. For a time, Drush 7 development will continue to happen on the master branch; it won’t be until an incompatible change is made to master that the 7.x branch will come into being. When that happens will depend entirely on the development needs of Drush. Similarly, the Drush 6 branch is not going to fall into disuse immediately upon the creation of the 7.x branch. Usually, when a new development branch is introduced for Drush, the maintainers will backport the most important bug fixes made in the master branch to both the stable branch (soon to be 7.x) and the branch before that (which is to say, 6.x). This process falls off quickly over time, though, so it’s always a good idea to upgrade to the latest stable branch as soon as possible.

Hopefully this information will clear things up as we head into the final stages of Drush 7 stable release planning. As always, we’re happy for any help that we can get, so stop by the Drush issue queue, and help Drush 7 be the best release it can be.

Topics Development, Drupal Planet, Drupal
Nov 11 2014

This year, BADCamp moved from the collegiate atmosphere of UC Berkeley to the downright Hellenic Palace of Fine Arts. If there were a more stately location for lively debate and the pursuit of knowledge, it's certainly not in our fine city of San Francisco.

A Place of Beauty

The Palace of Fine Arts is one of the most photographed places in the city, and it was an interesting tableau as Drupalistas navigated between bridal photo shoots and map-wielding tourists.

From the 8-bit website to the event room names (Mushroom Kingdom, Bowser’s Castle, START, ENTER) to Matt Cheney's random spontaneous materialization in poofs of purple smoke, many factors contributed to a decidedly San Francisco camp experience.

A few of us started the event off with a typically foggy run across the Golden Gate Bridge. I am almost certain that everyone who ran into the fog eventually emerged later. We had a few hills to overcome, but everyone was up to the task.

Pre #BADCamp run over the GGbridge. Behold its majesty behind us. @getpantheon @ciplex @kwallcompany @KarlTheFog pic.twitter.com/mCCzGA5w6g

— sukottokun (@sukottokun) November 7, 2014


Booth Report

As BADCamp is local, Customer Success Teams were there in full force. On the Support Team, Timani and I made the rounds and checked in on partners; CSE David Newton sat with customers, answering support tickets in the flesh. Two of our awesome EU CSEs, Alex Dicianu and Mike Richardson, were in town meeting the community and generally adding an air of sophistication to our salty US team.

Developer Training Manager Brian MacKinney and Onboarding Manager Jessi Fischer were doing demos, showing off new features, and generally espousing best practices to anyone within earshot.

The Conversations

Also from the Onboarding team, Ari Gold presented on how to use the many layers of caching to take a site from brittle to bulletproof. Matt Cheney, the eternal evangelist for distributions, discussed challenges and best practices.

At the Higher Ed Summit, resident maven Jeff Pflueger spoke about the power of upstream distributions, Drupalgeddon, and collaboration between IT and Communications. ASU, one of our Pantheon One clients, discussed unified site management across ASU, and how to make it a reality.

One of the most internally discussed talks was the panel: How Can Men be Better Allies for Women in Tech. We all are responsible for the work environment we create, as am I personally. There are always opportunities to be more supportive and empathetic. As they say, if nothing changes, nothing changes.

Until Next Year

In short, Pantheon is always happy to take part in the festivities, but we are most grateful to help facilitate growth and learning on all levels in the community. It is why I am here in the Drupal world: because the sum really is greater than the parts, if we so choose.

Topics Drupal Planet, Drupal
Nov 05 2014

BADCamp 2014 is happening this weekend at the Palace of Fine Arts in lovely San Francisco. At one of the most magical Drupal events of the year, Pantheon will be on the scene chatting up the beautiful people about our wonderful platform. We will be doing demonstrations, hiring talented folks to work with us, and generally sharing our Drupal love. 

Connect, learn, and dance with Pantheon in the following ways at BADCamp 2014:

All BADCamp Long

  • All Day: Pantheon Booth - We will have a large booth as part of the sponsor area where you can chat with Pantheon folks, see a demo of the platform, and get one of our amazing shirts. 
  • All Day and Night: Pantheon People - We will have many folks attending the event and each is happy to chat about all things Pantheon.

Thursday, November 6th

  • 9:00am to 6:00pm - We helped organize the Non Profit Summit. Come learn how Drupal is helping a wide variety of nonprofit organizations with their missions. 

Friday, November 7th

Saturday, November 8th

Sunday, November 9th

And as a special bonus, we have a LIMITED EDITION BADCAMP PANTHEON T-SHIRT. Stop by our booth to learn about our platform and get one of the best Drupal t-shirts ever. AAArrrrgggghh!:

Pantheon Smooth Sailing

Topics Drupal Planet, Drupal
Nov 03 2014

Last week the Drupal project issued a PSA alert regarding the SQL injection vulnerability disclosed on October 15th. It was a serious bug, but the Drupal Security Team is among the best in the business, and they handled it as well as possible. However, users should know that updating to Drupal 7.32 does not remove backdoors that may have been installed, and any site that was exposed for very long should consider itself potentially compromised.

Consistent with what we saw platform-wide, the Drupal Security team reported automated exploits appearing about seven hours after the update was released, preying on unpatched sites. It appears that professional black-hats pounced on the issue and were systematically working through lists of domain names to probe for weakness.

Since the 15th we have seen no exploited sites on Pantheon, and we have blocked tens of thousands of attacks. Thus far it appears that our platform-wide countermeasures were effective. However, concerned users may want to run the Site Audit or Drupalgeddon checks, which scan for after-the-fact evidence of a compromise. We've gotten some questions about these, which I will attempt to answer in this post.

Anatomy of an Automated Attack

Most of the attacks we blocked were attempting to establish a beachhead position for further exploits: either a dummy user with escalated privileges, or an entry into the menu_router table. These are both routes to executing arbitrary PHP on a site, either by allowing a dummy user to enable the PHP Filter module, or by setting up a menu route (a url) that would execute whatever PHP was delivered in a POST or COOKIE. 
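To make the menu_router angle concrete, here is a standalone sketch of the kind of after-the-fact check a scanner might perform. The function name and the callback list are illustrative assumptions, not the actual logic of Site Audit or the Drupalgeddon module:

```php
<?php
// Illustrative (not exhaustive) post-exploit check: flag menu_router rows
// whose callbacks could execute attacker-supplied input. Standalone sketch;
// real scanners query the database and use broader heuristics.
function suspicious_router_entries(array $rows) {
  // Callback names seen in reported payloads; assumption, not a complete list.
  $bad_callbacks = array('php_eval', 'eval', 'assert', 'file_put_contents');
  $hits = array();
  foreach ($rows as $row) {
    if (in_array($row['page_callback'], $bad_callbacks, TRUE)
        || in_array($row['access_callback'], $bad_callbacks, TRUE)) {
      // A hit is cause for follow-up, not proof of compromise by itself.
      $hits[] = $row['path'];
    }
  }
  return $hits;
}
```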

From what has been reported by people who have sites off the Pantheon platform and were compromised, the next step is usually to try and install a PHP based "rootkit" or web-shell for continued adventures. Attackers are working their way up—injecting a bit of information into a database that then allows them to execute a small bit of PHP, which in turn downloads and installs a larger chunk of PHP.

This makes sense from an automated exploit standpoint: if you are going to spray attacks across a list of thousands (or even millions) of sites, your goal isn't to specifically do anything to any one of them—it's to set up as many assets as possible for later use.

Not All PHP Files Are Malicious

Most of the scanner modules look for rootkits or webshells by flagging files ending in .php as potentially malicious. While dropping .php files is the typical end goal of any automated exploiter, there are plenty of reasons you might have .php files as a healthy part of your Drupal installation. For instance, classes included with a module, or template files in a theme.

If a scan turns up unexpected .php files, remain calm. These are cause for follow-up, not proof of an exploit. What are these files? What do they contain? If you don't know how to read PHP, try to find someone who does. Most importantly, contribute your findings back to the issue queue.

Identifying new attack patterns is helpful, as are common false-positives. For instance, currently the Apache Solr connection class that comes bundled with Pantheon shows up as "suspicious" because it is a .php file, though that should be fixed soon.
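As a starting point for that kind of follow-up, a sketch like the following enumerates .php files under a directory where Drupal core places none, such as the public files directory. The directory choice and the php_files_under() helper are assumptions for illustration:

```php
<?php
// Sketch: list .php files under a directory that should contain none,
// e.g. sites/default/files. Hits warrant review, not automatic deletion.
function php_files_under($dir) {
  $hits = array();
  $iterator = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
  );
  foreach ($iterator as $file) {
    if (strtolower($file->getExtension()) === 'php') {
      $hits[] = $file->getPathname();
    }
  }
  sort($hits);
  return $hits;
}
```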

Sketchy User Accounts and menu_router Entries

The fact that automated attacks adding users or menu_router items showed up in the wild is a good indication that in black-hat circles these were known angles for exploiting Drupal via SQL injection. While we blocked tens of thousands of these attacks, and have yet to find any confirmed instances of sites being compromised, users who were vulnerable due to waiting a day or more to update should still check their systems for signs of compromise.

If you see suspicious users or menu_router entries, contact support and we'll investigate immediately. None have shown up on Pantheon yet, but we remain vigilant.

Most importantly, please be willing to contribute anything you see back. The pattern of sketchiness needs more data points to be defined. If it turns out you were affected, don't be a silent victim—the details of your exploit can help improve the state of general security. 

Stay Alert, Stay Alive

The internet is much larger and more important now than it was when Drupal first got started, and the underground ecosystem/economy of black-hat exploiters has expanded accordingly. If you spin up a new virtual machine in the cloud today, you'll start seeing brute force SSH attacks within a matter of hours. Staying up to date, deploying proper countermeasures, and ensuring you have defense in depth are all key to keeping your online presence secure.

As a website platform devoted to serving professionals (or, as we like to say, "The Professional Website Platform") we take the security of our customers extremely seriously. It's not an easy challenge given that every site is different and may contain arbitrary custom code, but we continue to do everything in our power to defend the platform and arm our users to maintain security in their applications. 

Topics Drupal Planet, Drupal
Oct 02 2014

The biggest news from DrupalCon Amsterdam is the announcement that Drupal 8 is now in Beta. One of the most anticipated features of D8 is the Configuration Management Initiative, which aims to solve the well-known problem of "how do I deploy changes I made in my admin interface?"

We've been involved in this work over the past few years. Our CTO David helped architect the solution, and he and Co-Founder Matt Cheney presented the results in one of the most packed presentations at the Con:

Check out the video of their presentation to see the shape of things to come.

[embedded content]

If you want to skip to the magic, it starts at around minute 20 with a live demo.

If you'd like to try this yourself, you can spin up a D8 Beta Site today and see the future of Drupal site development. For those of us who have been watching and waiting on this for years, it's an exciting moment to see that this solution is really going to work.

Topics Development, Drupal Planet, Drupal

